|
§ 01 — The problem
Every AI image tool starts from zero. This one doesn’t.
|
|
|
Every time you open ChatGPT, Midjourney, or any other image tool, you start from scratch. The model knows nothing about you. Your taste, your style, your family, your home, your brand — all invisible. You have to describe everything from the ground up, every single time.
That’s why prompts get so long. You’re not just describing what you want. You’re teaching the model who you are.
Google decided to skip that step entirely. Their approach: the model should already know you.
|
|
§ 02 — What it is
Nano Banana 2: Google’s image model nobody’s talking about.
|
|
|
Nano Banana 2 (technically Gemini 3.1 Flash Image) is Google’s image generation model that launched in February 2026. The original Nano Banana went viral in August 2025, generating millions of images in the Gemini app. Version 2 combines the quality of Nano Banana Pro with the speed of Gemini Flash.
But the model itself isn’t the story. The story is what Google connected it to.
|
| Model | Nano Banana 2 (Gemini 3.1 Flash Image) |
| Resolution | 512px to 4K |
| Speed | Flash-speed (seconds, not minutes) |
| Available in | Gemini app, Search, Lens, Flow, Ads |
| Personal Intelligence | US paid subscribers (rolling out now) |
| Watermark | SynthID (invisible, built-in) |
|
|
|
§ 03 — The real feature
Personal Intelligence: the feature that changes the game.
|
|
|
In mid-April 2026, Google connected Nano Banana 2 to Personal Intelligence — a system that links Gemini to your Gmail, Google Photos, Calendar, Drive, and browsing history. When you generate an image, the model pulls context from your actual life.
This changes the prompting model fundamentally. Here’s what it enables:
|
| 01 |
Your photos become AI references
Connect Google Photos, and Gemini uses your labeled people, pets, and places as visual guides. Say “create a Pixar illustration of me and my family” and it uses your actual family photos. No uploading. No describing appearances. It already knows.
|
| 02 |
Your taste is the default style
Type “design my dream home” and it produces an interior that matches your actual preferences — derived from your search history, saved images, and browsing patterns. Not a generic Pinterest board. Your dream home.
|
| 03 |
Five-word prompts produce personal results
The biggest change isn’t image quality. It’s prompt length. When the model knows your context, you can be brief. “Create a birthday card for Mom” pulls from your Photos to know what Mom looks like, your style preferences for the aesthetic, and your past interactions for tone.
|
| 04 |
Refinement is conversational
If the result isn’t right, tell Gemini what to change. It remembers the context. You can also tap a “+” icon to select a different reference photo from your library and regenerate. The process feels like directing an artist, not engineering a prompt.
|
|
|
§ 04 — The model itself
What Nano Banana 2 does well even without personalization.
|
|
| 01 |
Real-world knowledge in images
Powered by Gemini’s knowledge base and real-time web search. It can generate infographics with current data, turn notes into diagrams, and create data visualizations — not just pretty pictures.
|
| 02 |
Text rendering and translation
Generates readable text for marketing mockups, greeting cards, and posters. Can translate and localize text within an image across languages. Google’s documentation notes it may struggle with idioms in some languages, but Latin-script text is solid.
|
| 03 |
Subject consistency
Characters and objects maintain their appearance across multiple generations. Useful for building a consistent brand character, mascot, or visual series without regenerating from scratch each time.
|
| 04 |
It’s everywhere
Nano Banana 2 is the default in the Gemini app, Google Search AI Mode, Google Lens, Google Flow (video editing), and Google Ads. It’s also available via the Gemini API for developers. 141 countries. Eight languages. By reach, this is the most widely distributed AI image model in the world.
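For developers, calling it from the Gemini API looks roughly like the sketch below, using Google’s official google-genai Python SDK. The model id here is an assumption based on this newsletter’s naming, not a verified identifier — check Google’s API docs for the current one.

```python
# Hypothetical sketch: generating an image via the Gemini API with the
# google-genai Python SDK. MODEL_ID follows this newsletter's naming and
# is an assumption; consult Google's API docs for the current model id.
MODEL_ID = "gemini-3.1-flash-image"  # assumed, not verified


def generate_image(prompt: str, api_key: str) -> "bytes | None":
    """Send a text prompt and return the first generated image's raw bytes."""
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(model=MODEL_ID, contents=prompt)
    # Generated images come back as inline_data parts alongside any text parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    return None
```

Write the returned bytes to a .png and you have the image — with the invisible SynthID watermark already baked into the pixels.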
|
|
|
§ 05 — Try it tonight
Four prompts. The longest is 13 words; the shortest is four.
Open the Gemini app on your phone. Enable Personal Intelligence in settings if you haven’t. Connect Google Photos. Then try these:
|
|
| Prompt 01 — 13 words |
Standard prompt |
|
|
Create a watercolor painting of a cozy reading nook on a rainy afternoon.
|
|
|
What to watch for — Even without personalization, Nano Banana 2 produces this in seconds. The speed is the first thing you’ll notice. Compare how long ChatGPT takes for the same prompt.
|
|
| Prompt 02 — 5 words |
Personal Intelligence |
|
|
Design my dream home interior.
|
|
|
What to watch for — With Personal Intelligence on, this produces an interior that reflects your actual taste. Then try the exact same 5 words in ChatGPT. Compare the results. One knows you. The other doesn’t.
|
|
| Prompt 03 — 10 words |
Google Photos connected |
|
|
Create a claymation image of me and my family hiking.
|
|
|
What to watch for — Gemini uses your labeled Google Photos to know who “me” and “my family” are. The generated characters should resemble your actual family in claymation style. No reference photo uploads. No appearance descriptions. It just knows.
|
|
| Prompt 04 — 4 words |
The shortest prompt test |
|
|
Plan my weekend visually.
|
|
|
What to watch for — Gemini checks your Calendar, your recent searches, your location, and your interests. It should generate images related to activities you’d actually do this weekend. This is the prompt that makes people realize what personalized AI image gen actually means.
|
|
|
§ 06 — The privacy question
Is it worth giving Google this much access?
|
|
|
This is the question nobody wants to answer with a blanket statement, so I won’t. Here are the facts:
It’s opt-in. Personal Intelligence is off by default. You choose which apps to connect. You can disconnect anytime.
Google says it doesn’t train on your Photos. The company states it trains on “limited info, like specific prompts and the model’s responses” — not your private photo library.
Your data already lives on Google’s servers. If you use Gmail, Photos, and Search, Google already has this data. Personal Intelligence lets Gemini access what Google already stores.
The tradeoff is real. More personal results require more personal access. Whether that exchange works for you depends on how you feel about Google having this level of integration. There’s no right answer. Just an informed one.
|
|
§ 07 — Bottom line
The end of the 200-word prompt.
|
|
|
Google is making a bet that the future of AI image generation isn’t better models. It’s models that know you better. Instead of competing on raw rendering quality against ChatGPT and Midjourney, they’re competing on how little you need to type to get something useful.
That bet might be right. For daily social media content, quick personal projects, and anything where speed matters more than pixel-perfect aesthetics, “the AI that knows me” is a genuinely compelling product. Whether the privacy tradeoff is acceptable is the only question left.
|
|
Coming next
|
Midjourney V8.1 deep dive.
What they fixed, what they broke, and the prompts that work best.
|
The right tool for the right job.
A cheat sheet for which tool to use for which task.
|
|
|
Would you give Google access to your Photos for better AI images?
Hit reply. Yes, no, or “already did.” I want to know where people land on this.
|
|
|
See you next week.
|
|
Luxe Prompting
AI image generation for creators.
|
|