


Luxe Prompting · Issue 24 · May 2026

An Essay

Gemini just learned to browse the web.
Image creators should care.

Most coverage frames this as a productivity feature. For image creators, it opens up a different way to prompt. Research first. Generate second. Here is the pattern.

Master Claude AI (Free Guide)

The professionals pulling ahead aren't working more. They're using Claude.

Our free guide will show you how to:

Configure Claude to be the perfect assistant

Master AI-powered content creation

Transform complex data into actionable strategies

Harness Claude’s full potential

Transform your workflow with AI and stay ahead of the curve with this comprehensive guide to using Claude at work.

•••

Google rolled out Gemini 3 in Chrome this week with a feature called Auto Browse. Most coverage framed it as a way to fill out forms, book hotels, and shop without lifting a finger. The reviews focused on whether it could navigate Etsy or compare apartment listings on Redfin. All of that is true and useful, and almost none of it matters for what I want to write about.

For image creators, Auto Browse and the agentic capabilities now showing up across ChatGPT, Claude, and Gemini change something more interesting. They change the prompt itself. The model can now research before it generates. It can pull a brand's actual visual identity off their website, study a competitor's social feed, examine a real photographer's portfolio, or look at twenty product pages and synthesize a style. Then it can use what it found to inform what it makes.

This is a workflow that did not exist eight weeks ago. It is quietly becoming the strongest pattern in my own process, and almost nobody is writing about it for creators.

One

What actually shipped.

Auto Browse is a Gemini 3 capability inside Chrome that takes a goal in plain English and executes it across multiple browser tabs. Google AI Pro and Ultra subscribers in the United States have access to it on Mac, Windows, and Chromebook. The feature opens its own tab, navigates pages, scrolls, clicks elements, and reads content while you watch a side panel narrate every step.

Bundled into the same release is Nano Banana image editing inside the Chrome side panel. Find an image on any webpage, describe a change, get a new version without ever leaving the tab. ChatGPT and Claude have shipped equivalents over the last two months. The same agentic capability is becoming standard across the major chat tools.

The framing in the press release is productivity. The framing missing from the press release is creative direction.

Two

Why this matters for image creators.

For most of the last two years, prompting has been a closed loop. You knew an aesthetic, you described it in words, the model interpreted those words and produced an image. The fidelity of the output depended entirely on how accurately your description matched what the model already knew. If you wanted a brand to feel like Aesop, you had to know what Aesop looks like and translate that into adjectives.

With browsing, the model can go look. You can point it at the actual Aesop website, ask it to study the typography, the photography style, the color palette, and the spatial composition, and then prompt the image. The output is grounded in the real reference, not the model's faded memory of the brand.

The same idea applies to almost any reference. A photographer's portfolio. A competitor's Instagram grid. A magazine spread. A product line you want to study. The visual research that used to live in your moodboard now happens inside the prompt itself.

Three

The pattern that keeps working.

The structure I have settled on uses two prompts instead of one. The first prompt is research. The second prompt is generation. The first prompt produces no image. It produces a brief.

A research prompt I keep going back to:

Visit the brand's website and their ten most recent Instagram posts. Study the photography. Tell me what kind of light they favor, what their color palette is, how their compositions are framed, what kind of subject matter they show, and what the overall feeling is. Give me a one-paragraph creative brief I could hand to a photographer.

The model returns a paragraph that is more accurate than anything I would write from memory. Then I take that paragraph and feed it into the image prompt as the style anchor.

The generation prompt looks something like:

Generate an image of [my subject] in the visual style described above. Match the lighting, palette, composition, and feel exactly. The image should look like it could appear on this brand's feed without anyone noticing.

The output lands closer to the reference than anything I could have produced from a single, beautifully crafted prompt. The model studied the actual brand. It wrote the brief in its own words. Then it generated against that brief. The result is a kind of style transfer that does not require a reference image, only a URL.
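If you run this pattern often, the two prompts are really just templates with a few slots. Here is a minimal Python sketch of that idea; the function names, placeholder URL, and handle are illustrative, not a real API, and each returned string is meant to be pasted into a browsing-capable model and reviewed by you, brief included, before the second step.

```python
# Sketch: the research-then-generate pattern as two reusable templates.
# Nothing here calls a model; it only builds the prompt strings.

def research_prompt(brand_url: str, instagram_handle: str) -> str:
    """Prompt 1: asks for a creative brief, not an image."""
    return (
        f"Visit {brand_url} and the ten most recent posts on "
        f"@{instagram_handle}. Study the photography. Tell me what kind "
        "of light they favor, what their color palette is, how their "
        "compositions are framed, what subject matter they show, and "
        "what the overall feeling is. Give me a one-paragraph creative "
        "brief I could hand to a photographer."
    )

def generation_prompt(brief: str, subject: str) -> str:
    """Prompt 2: generates against the (human-edited) brief."""
    return (
        f"Here is a creative brief:\n\n{brief}\n\n"
        f"Generate an image of {subject} in the visual style described "
        "above. Match the lighting, palette, composition, and feel "
        "exactly. The image should look like it could appear on this "
        "brand's feed without anyone noticing."
    )

# Example pair (hypothetical brand and subject).
p1 = research_prompt("https://example-brand.com", "examplebrand")
# ...run p1 in a browsing-capable model, then read and edit the brief...
brief = "Soft diffused daylight, muted earth tones, generous negative space."
p2 = generation_prompt(brief, "a ceramic lotion bottle on travertine")
```

The point of the template is the seam between the two calls: the brief passes through your hands, which is where you catch the hallucinated or generic lines before they shape the image.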

Four

Where this fits in real work.

The use cases that benefit most are anything where matching an existing visual identity matters. Brand work. Client work. Social content for a creator with an established aesthetic. Product photography that needs to look like the rest of the catalog. The places where consistency is the whole point and a generic AI image gives you away.

The pattern also flips an older problem on its head. Reference images used to require uploading and processing. Now you point at a URL and the model figures it out. For anyone who has spent hours screenshotting a moodboard and feeding it into a prompt one image at a time, this is a meaningful shift in how moodboards work.

It also opens up a kind of competitive research that was previously slow. Pull up three competitors, ask the model to compare their visual identities, identify the gaps, and propose a visual position for your own brand that occupies the gap. None of that is image generation. All of it makes the image generation that follows much sharper.

Five

Caveats and friction points.

A few honest notes from running this for several weeks. The research prompt sometimes hallucinates. The model occasionally describes a website in terms that flatter the brand more than they describe it. The fix is to read the brief before you use it and edit anything that feels generic. A bad brief produces a bad image.

Auto Browse is Pro and Ultra only at the moment, and rate-limited even there. ChatGPT's equivalent works on Plus and above. Claude's research mode is broadly available but slower. None of these are perfect. All of them are usable enough to change my workflow this week.

There is also a copyright question worth being thoughtful about. Pulling style from a real brand to inform your own original work is fair game. Pulling style to clone someone else's work and pass it off is a different conversation. The line is the same line that has always existed for designers. The tooling is just faster.

•••

I am putting together a pack of research-and-generate prompt pairs for the most common creator use cases. Brand mood capture. Competitor visual analysis. Style anchoring from a single URL.

Want it when it ships? Reply with "send me the research pack" and I will get it to you.

A QUESTION FOR YOU

What is the first brand or aesthetic you would point a model at?

Reply and tell me. I will write you a research-and-generate pair for it. The replies determine which categories I cover next.

If this issue resonated, forward it to a creator working on brand or client visuals.

Until next time,

Luxe Prompting


AI Image Generation for Creators
