
Luxe Prompting Issue 18   May 2026

A Working Strategy

The AI image tools Facebook pays me to post.

Two open-source tools nobody talks about have been outperforming GPT, Gemini, and Grok for monetized social posts. Here is which ones, and why they win.

Forget the hype. Here's what's actually working in AI.

90% of AI content is noise. The AI Report is the 10%.

We cover real enterprise deployments, actual business outcomes, and the AI strategies leaders are betting on right now — not lab experiments, not demos, not speculation.

400,000+ executives, operators, and founders read us every weekday to cut through the clutter and make faster, smarter decisions about AI before their competitors do.

No hype. No fluff. Just the signal.

See what's actually working in AI across every industry right now — free, in 5 minutes a day.

•••

For the past several months I have been quietly testing AI image tools against each other on Facebook. Same kind of post. Same posting time. Same audience. Different tool behind each image. The goal was simple: figure out which tools produce the kind of images that actually get engagement on social platforms with creator monetization, and which ones produce technically impressive output that nobody shares.

The results were not what I expected. The tools everyone writes about, GPT, Gemini, and Grok, were not the winners. The two that consistently outperformed them, Z Image and Qwen Image, are both Chinese, both open-source, and both largely ignored by the English-language AI press. Posts using their output bring in an average of one dollar per image through Facebook Content Monetization. Some viral hits do far better. The number is conservative because it averages across everything I post, including images that flop.

This issue is the breakdown. Why these two tools win. What they do that GPT, Gemini, and Grok do not. The exact prompt approach that gets the most out of them. And the broader pattern this points to about where AI image generation is actually heading.

One

The platform that pays per image.

Facebook Content Monetization pays for engagement, not fidelity. A photographically perfect image that does not stop the scroll generates nothing. A slightly imperfect image that makes someone tag a friend generates several cents. A viral image generates dollars. One dollar per image averaged across a feed is what happens when most posts get modest engagement and a few break through.
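To make that averaging concrete, here is a toy payout distribution. The individual numbers are made up purely for illustration; only the one-dollar average comes from my own data.

```python
# Illustrative payout distribution -- invented numbers showing how a
# one-dollar-per-image average can emerge from a mostly-modest feed.
modest = [0.05] * 90    # most posts earn a few cents
viral = [19.10] * 5     # a handful break through
flops = [0.00] * 5      # and some earn nothing

payouts = modest + viral + flops
average = sum(payouts) / len(payouts)
print(f"${average:.2f} per image")  # -> $1.00 per image
```

Five breakout posts carry the entire feed, which is exactly what a modest-plus-viral engagement curve looks like in practice.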

Once you understand that the platform rewards emotional pull rather than technical perfection, the question of which AI tool to use changes. The right one is not the tool that produces the cleanest output. It is the tool that produces output people actually want to share.

Two

Why GPT, Gemini, and Grok underperform.

The big three from the major American labs share a common aesthetic problem. Their outputs look correct. They look polished. They look like exactly what an AI image is supposed to look like in the public imagination. And that is the problem. People can recognize them at a glance, scroll past them, and move on. Familiarity is the enemy of engagement on a platform that pays for stops.

GPT renders text reliably and produces consistent compositions, but its house style is glossy in a way that reads as commercial stock photography. Gemini is technically strong but produces images with a particular flat lighting signature that makes them feel templated. Grok is more willing to take aesthetic risks but inherits a cartoon-leaning style that limits its photographic range. None of these is a flaw in the tools themselves. They are tradeoffs the labs made to keep their products safe, polished, and broadly useful.

The tradeoff that hurts on social platforms is the same for all three: they converge toward a center, producing the average of what users ask for, executed with high fidelity. That is the right choice for a tool millions of casual users will rely on for a school project or a presentation slide. It is the wrong choice for someone trying to produce images that stand out in a feed of a hundred other images.

Three

Why Z Image and Qwen win.

Z Image is the open-source release from Tongyi Lab, the AI research division at Alibaba. Qwen Image is from the Qwen team, also at Alibaba. Both have open-source weights, both run on Replicate and Fal for a few cents per generation, and both can be self-hosted by anyone with a capable GPU. They were trained on different data than the American tools. The aesthetic differences are immediate.

Z Image produces images with a quality that I can only describe as honest. The lighting feels physical. The skin looks like skin, not like a render of skin. Backgrounds have actual depth instead of the flattened bokeh that signals AI to a trained eye. When the tool gets something wrong, it gets it wrong in ways that feel like a real photograph with a flaw, not in the uncanny way that betrays AI authorship. People scroll past polished AI images. They stop on Z Image output because their brain registers it as a photograph first.

Qwen Image is more stylized, trading some fidelity for emotional resonance. It produces images with what I think of as compositional confidence. Subjects are framed deliberately. Negative space is used intentionally. The tool seems to have absorbed something about visual storytelling that the American tools did not pick up. Where Z Image wins by looking real, Qwen wins by looking intentional. Both qualities drive engagement on social platforms in ways that polished perfection does not.

There is also a strategic reason both tools matter. Because they are open source, they have not been trained to avoid the controversial, the moody, the slightly off-kilter compositions that the closed American tools steer away from. They will produce a portrait that looks lonely, an image with negative emotional weight, a scene that feels uncertain. The closed tools will sand those edges off in the name of safety. The edges are exactly what makes images worth sharing.

Four

How to prompt them differently.

The prompts that get the most out of Z Image and Qwen are different from the prompts that work on the American tools. Both Chinese tools respond strongly to specific photography vocabulary, the kind this newsletter has been teaching for months: lens names, lighting setups, film stock references, named photographers. They reward technical specificity in a way the closed tools do not always reciprocate.

A prompt that consistently produces strong Z Image output:

A woman in her 60s standing at her kitchen window at dawn, holding a chipped coffee mug, looking out at a garden in light rain. Shot on Mamiya RZ67 with 110mm lens, Kodak Portra 400 film, soft natural window light from the right, slight grain. The atmosphere of a Bill Henson photograph, quiet and inhabited. No retouching, visible skin texture, natural color.

Three things make this work. The named camera and film stock anchor the image in a specific photographic tradition. The reference to Bill Henson signals an emotional register, not just a visual style. The closing instruction ("no retouching, visible skin texture") tells the tool to stop trying to make the image perfect. That last part matters more than people realize. Without it, even Z Image will lean toward smoothness. With it, the tool produces output that feels lived-in.

For Qwen Image, lean harder on composition and mood instructions. Where Z Image wants to know about cameras, Qwen wants to know about framing. "Shot from a low angle looking up." "Subject in the bottom right third of the frame, large negative space above." "The composition of an Annie Leibovitz portrait, dramatic and intentional." Qwen rewards directorial language. It produces its strongest output when you treat it less as a camera and more as a cinematographer.
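If you generate at volume, it helps to template that vocabulary so each prompt carries the same structure. A minimal sketch in Python; the function and field names are my own convention, not part of either tool's API:

```python
# Assemble a photography-style prompt from named components.
# Only non-empty parts are included, so every field is optional.

def build_prompt(scene, camera="", film="", light="", mood="", finish=""):
    parts = [scene]
    technical = ", ".join(p for p in (camera, film, light) if p)
    if technical:
        parts.append(f"Shot on {technical}.")
    if mood:
        parts.append(mood)
    if finish:
        parts.append(finish)
    return " ".join(parts)

prompt = build_prompt(
    scene="A woman in her 60s at her kitchen window at dawn, light rain outside.",
    camera="Mamiya RZ67 with 110mm lens",
    film="Kodak Portra 400",
    light="soft natural window light from the right",
    mood="The atmosphere of a Bill Henson photograph, quiet and inhabited.",
    finish="No retouching, visible skin texture, natural color.",
)
print(prompt)
```

For Qwen, swap the camera and film fields for framing and composition language and the same structure holds.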

Five

How to start using them this week.

Both tools are accessible without any technical setup. Replicate hosts both Z Image and Qwen Image. Go to replicate.com, search for either tool, and run prompts directly in the browser. The cost is a few cents per generation. Fal.ai also hosts both with a slightly different interface and similar pricing. Either platform works. Pick whichever feels easier to navigate.
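For repeat runs, the same thing works from a script through Replicate's Python client. A sketch; the model slug below is a placeholder I made up, so check replicate.com for the exact name before running:

```python
# Sketch: one generation through Replicate's Python client.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN env var.

def make_input(prompt: str, **extras) -> dict:
    """Build the input payload most Replicate image models accept."""
    return {"prompt": prompt, **extras}

def generate(model: str, prompt: str):
    import replicate  # imported here so make_input stays dependency-free
    return replicate.run(model, input=make_input(prompt))

# Example call (costs a few cents; "tongyi/z-image" is a placeholder slug):
# generate("tongyi/z-image", "A quiet kitchen at dawn, 35mm film look")
```

The same loop over a list of model slugs gives you the side-by-side comparison described below without leaving your editor.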

Run the same prompt through Z Image, Qwen Image, GPT, and Gemini side by side. Look at the four outputs in a grid. The differences will be immediate, and you will start to develop an instinct for when each tool is the right one for the job. For most realistic photographic work, Z Image is going to feel more honest. For dramatic, stylized, or compositionally bold work, Qwen Image is going to feel more intentional. For commercial polish or text-heavy images, GPT and Gemini still have their place. Knowing the differences is half the skill.

The other half is testing on the platform. Generate a few images each week using Z Image, post them to whichever monetized social channel you use, and watch what happens to engagement compared to your usual baseline. The first time you post a Z Image output and watch the comments roll in differently than your previous posts, you will understand what I mean. The audience is not consciously aware that the tool behind the image changed. They just know this image stopped them when the others did not.

•••

The most monetizable image is not the most technically perfect one. It is the one a stranger sees and thinks, for half a second, that a real person took it. Two of the best tools for that are not the ones in the headlines.

A question for you

Which tool are you using right now, and how is it performing?

Reply and tell me. If you are getting good results from a tool I did not mention, I want to know. The replies determine what I cover next, and the comparison data shapes future issues.

If this issue resonated, forward it to someone who should read it.

Until next week,

Luxe Prompting


AI Image Generation for Creators
