
How I Added Custom AI Visuals to Every Blog Post in One Session

5 min read · March 7, 2026

Tags: AI, image generation, Imagen 4, Gemini, automation, blog, portfolio


I had 26 published blog posts and zero hero images. Every post was just a wall of text. Not great for a portfolio site that's supposed to show off what I can build.

The fix took one session. Here's exactly how I did it.

The Problem with Generic AI Images

The obvious move is to generate images — but the trap is making every post look the same. If every article has the same "glowing 3D chip on dark background" aesthetic, it stops being interesting fast.

The insight that made this work: match the visual style to the content type.

A post about building a trip planner should feel like travel photography. A post about a music AI project should feel like album art. A post about retro AutoHotkey scripts should look like 1990s computing nostalgia. The image should reinforce what the post is about before you read a single word.

Mapping 8 Visual Styles Across 26 Posts

Before generating anything, I created a style guide:

| Style | Used For |
| --- | --- |
| 🔷 3D chip / tech architecture | MCP, orchestrator patterns, agent ecosystems |
| 📊 Dashboard / stats | Cost analysis, AI operating systems, tool landscapes |
| 🤖 AI brain / neural | Day-in-the-life posts, first agents, skill building |
| ✈️ Travel / destination | Trip planner, travel tools |
| 🎵 Music / audio visual | AI music and generative art posts |
| 🏥 Retro / vintage computing | Before-AI nostalgia posts |
| 🚀 Product launch | App builds: pantry app, n8n, Docker |
| 📸 Personal / lifestyle | Mobile workflow, couch coding, setup posts |

Then I manually assigned each post to a style bucket. This took about 10 minutes and made every image feel intentional rather than random.
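The assignment itself can be as simple as a dictionary from post slug to style bucket. A minimal sketch (the slugs and bucket names here are hypothetical stand-ins, not the actual post slugs):

```python
# Hypothetical slugs mapped to the eight style buckets from the table above
STYLE_MAP = {
    "building-a-trip-planner": "travel",
    "ai-music-experiments": "music",
    "autohotkey-nostalgia": "retro",
    "mcp-orchestrator-patterns": "3d-chip",
    "llm-cost-analysis": "dashboard",
    "my-first-agent": "ai-brain",
    "pantry-app-launch": "product-launch",
    "coding-from-the-couch": "lifestyle",
}

def style_for(slug: str) -> str:
    # Fall back to a neutral default for any unmapped post
    return STYLE_MAP.get(slug, "ai-brain")
```

Having the map in code (rather than in my head) is what makes the later batch step mechanical.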

Spot-Checking Before Batch Generation

I generated 4 images first — one from each distinct style — before committing to all 26:

  • Travel style: Hong Kong/Bangkok/Sanya cinematic collage
  • Music style: synthwave waveform + AI brain
  • Retro style: vintage 90s clinic computer
  • AI brain style: neural network skill explosion

All four looked great. That was the green light to batch the rest.

The Generation Stack

I used two models depending on the visual style:

Imagen 4 via Vertex AI — better for cinematic photorealistic images (travel, lifestyle, product shots). Cleaner composition and more polished output.

Gemini 3 Pro Image — better for images with specific readable text (dashboards, 3D chip labels, stats). Follows text prompts more precisely.

The batch script generated images sequentially, auto-refreshed auth tokens every 10 images, and saved everything to ~/project-visuals/blog/.
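The shape of that batch loop, sketched with stand-in callables (`generate_image` and `refresh_token` stand in for the real Vertex AI calls, which I'm not reproducing here):

```python
def run_batch(posts, generate_image, refresh_token, refresh_every=10):
    """Generate images sequentially, refreshing the auth token every N images."""
    token = refresh_token()
    paths = []
    for n, post in enumerate(posts, start=1):
        if n > 1 and n % refresh_every == 1:
            token = refresh_token()  # tokens expire during long batches
        path = f"~/project-visuals/blog/{post['slug']}.png"
        generate_image(post["prompt"], token, path)
        paths.append(path)
    return paths
```

Sequential (not parallel) generation is deliberate: it keeps you under per-minute quotas most of the time and makes failures easy to resume.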

The Rate Limit Problem

About 4 images in, Vertex AI hit a per-minute quota limit. The fix was simple: when Imagen 4 got throttled, fall back to Gemini 3 Pro for that image and continue. Both produce excellent results — it's mostly a stylistic choice anyway.
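The fallback is a try/except around the primary model. A sketch, where `ThrottledError` and the two generate functions are stand-ins for the real SDK calls and quota exception:

```python
class ThrottledError(Exception):
    """Stand-in for the per-minute quota error the primary model raises."""

def generate_with_fallback(prompt, imagen_generate, gemini_generate):
    # Prefer Imagen 4; fall back to Gemini 3 Pro Image when throttled
    try:
        return imagen_generate(prompt)
    except ThrottledError:
        return gemini_generate(prompt)
```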

Lesson: Simpler Labels = Better Text

One image came back with embarrassing typos baked in — "SEPLOYMENT", "GitHur", "Spraedchets". This is a known limitation of AI image generation: the longer and more unusual the word, the higher the chance it garbles it.

The fix: regenerate with shorter, simpler labels. "Spreadsheets" → "DATA". "Deployment" → "DEPLOY". The map style image went from unacceptable to clean on the second attempt.

Rule of thumb: AI image models handle short common words well. Keep labels to 1-2 words when text accuracy matters.
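One way to enforce that rule before a prompt ever goes out is a small label check. A sketch using the substitutions mentioned above plus a length guard (the 8-character cutoff is my own arbitrary threshold, not something from the models' docs):

```python
# Substitutions from the post, plus a guard against long or multi-word labels
LABEL_SUBSTITUTIONS = {
    "Spreadsheets": "DATA",
    "Deployment": "DEPLOY",
}

def safe_label(label: str, max_len: int = 8) -> str:
    label = LABEL_SUBSTITUTIONS.get(label, label)
    if len(label) > max_len or " " in label:
        raise ValueError(f"Label too risky for in-image text: {label!r}")
    return label.upper()
```

Failing loudly is the point: better to rewrite the label than to regenerate an image with "SEPLOYMENT" baked in.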

Injecting Images Into All Posts

Once images were generated, a Python script handled injection automatically:

```python
# For each post: find the H1, insert the image markdown on the next line
for post in posts:
    content, title, slug = post["content"], post["title"], post["slug"]
    lines = content.split("\n")
    for i, line in enumerate(lines):
        if line.startswith("# "):
            lines.insert(i + 1, f"\n![{title}](/images/blog/{slug}.png)\n")
            break
    updated = "\n".join(lines)
    # PUT the updated content back via the blog's admin API
```

All 26 posts updated in under 30 seconds. Images committed to GitHub, Vercel auto-deployed.

Result

Every blog post now has a hero image that matches its content. The portfolio feels like a real publication rather than a markdown dump.

Total time: one session. Total unique visual styles: 8. Total posts upgraded: 26.

The images aren't perfect — AI generation still struggles with complex text layouts and sometimes produces subtle errors. But at this quality level, for a personal portfolio site, they're more than good enough. And they're infinitely better than no image at all.

What's Next

The workflow is now documented as a reusable skill. Future blog posts get a custom image as part of the publishing flow — not as an afterthought.

The lesson isn't really about image generation. It's that most content improvements people put off for months can be done in a single focused session once you have the right tools and a clear style guide.