Drawing the World Back to Life: How a Sketch Taught AI to See

A few weeks ago, I wrote about how designers and creatives can take back control in AI-driven image making by starting with something deeply familiar: drawing.

Instead of prompting our way forward with increasingly complex text instructions, we can sketch. We can feed line drawings into image models and let the AI interpret what is already a very rich form of communication. A drawing carries intention, hierarchy, emphasis, and omission in a way text simply can’t. The result is not random image generation, but something we actively shape.

Since then, I’ve had a particularly interesting experience that pushed this idea further.

Earlier this January, while on a work-away journey through Southern Europe, I passed through Frigiliana again — a small Andalusian village that holds a very special place in my life.


Some years ago, I lived there for three months, alone, in a small apartment in the village. That period became a kind of refuge for me. It was a time of decompression, slowing down, and reorienting myself — both personally and professionally. I’ll write more about that chapter in a separate post, as it deserves its own space.

This time was different. We were passing through Frigiliana as part of a longer work-away “pilgrimage” through Portugal and Spain. I was there with my wife, moving rather than settling — revisiting rather than retreating. Still, returning to that place carried a strong emotional charge.

Picture 1: How Callejón del Horno looked in January 2026, as we passed through a few weeks ago.

During my original stay years ago, I made a line drawing based on a reference photo of the street I lived on at the time, overlooking my rental (my apartment was the one in the center right, with the arched balcony). Since then, that drawing has been hanging on my wall in Copenhagen, quietly holding the memory of that decompression period.

Walking through the village on a bright January day this year, I suddenly stood in front of the very same building again (picture 1). Seeing it in real life — not as a memory or a drawing, but as a present place — felt surprisingly grounding. The apartment was even up for sale, which briefly tempted me, but that is a story for another time.

As part of my ongoing experiments with image models — and specifically with Nano Banana — I decided to try something I’d been curious about: reverse image making. I fed my old line drawing (picture 2) into the model to see how closely it could reconstruct the place.

Picture 2: The drawing I made of my street and apartment while living in Frigiliana three years ago,
attempting to capture the feel and culture of this amazing little village of Frigiliana, Andalusia.

The result (picture 3) genuinely surprised me.

Nano Banana had no reference photos of the street. I didn’t provide the address. The handwritten street name in the drawing is incomplete and unlikely to be machine-actionable, and I couldn’t find usable online photo references myself. And yet, the generated image is astonishingly close to the feel of Callejón del Horno.

The algorithm correctly interpreted:

  • The characteristic Andalusian stonework and black-and-white street patterns
  • The narrow, crooked street layout
  • The proportions and rhythm of the buildings
  • The overall light, color palette, and atmosphere of a white village in southern Spain


All of this came purely from the line drawing.

Picture 3: The Nano Banana AI-generated picture based on my three-year-old sketch,
made by the AI without the use of any reference photos, only the line-and-wash sketch (picture 2).

There are, of course, moments where the AI reveals itself (picture 4), and these are just as interesting:

a) Lines I had sketched on the rooftop — once clotheslines, now long removed — were interpreted as something resembling sails or sun protection. It’s incorrect, but culturally plausible. Something like this does exist in Andalusia, and the model knew that.

b) On the balcony of my old apartment, I had drawn a delicate wire sculpture — a bent-metal figure, perhaps a lizard. In the AI interpretation, it becomes something closer to a sci-fi robotic form. Clearly not right, but fascinating in how confidently it fills the gap.

c) A flower pot appears floating in mid-air. This doesn’t exist in the drawing and is probably the most obvious error. Still, given the strong tradition of flower displays throughout the village, the model’s assumption makes sense.

d) On a canopy, I had lightly written the first letters of “fruit shop.” Those letters appear verbatim in the generated image. Meanwhile, because I hadn’t drawn the window beneath the canopy, the AI simply invented a wall. It followed the information it was given — and nothing more.

Overall, what strikes me most is how little explicit instruction was needed. I never told the model this was an Andalusian village. I didn’t explain materials, climate, or geography. The drawing carried that language already.


Picture 4: Some funny peculiarities still to be found in the otherwise quite convincing AI rendering.


So what does this have to do with product discovery, innovation, or professional use of AI?

Quite a lot, actually.

This experiment shows how powerful the combination of imagination, trained visual thinking, and AI interpretation can be. When we sketch with intention — when we show rather than tell — we can guide AI systems toward highly realistic, coherent, and emotionally accurate worlds.

For designers, this means a shift:

  • From prompt engineering to visual steering
  • From generic outputs to authored universes
  • From describing outcomes to demonstrating intent

We can shape products, environments, narratives, and scenarios with a level of nuance that text alone struggles to achieve. The AI doesn’t replace our judgment — it amplifies it, as long as we give it something meaningful to work with.

That, to me, feels like a very strong direction for how we’ll work with these tools going forward.

Have a great weekend everyone.
