
01 / CSS / 2026-04-24

Image Cache

The mobile-first review board behind every hero image on this site.

How it works

// 01 Manifest and grid layout

The manifest is a plain JSON file at /lab-cache/manifest.json that lists every image by relative path plus any metadata the generator wants to attach (round, seed, prompt hash). The page loads it once on init, builds a flat list of cards, and renders them into a CSS grid sized for phone-first viewing at three columns below 768 pixels and five above.
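That init path can be sketched as follows. The manifest shape here — an `images` array with `path`, `round`, and `seed` fields — is illustrative, not the real schema:

```javascript
// Flatten manifest entries into render-ready card objects.
// Field names (images, path, round, seed) are assumed, not the actual schema.
function buildCards(manifest) {
  return manifest.images.map((entry) => ({
    path: entry.path,
    meta: { round: entry.round, seed: entry.seed },
  }));
}

// In the browser this would run once on init:
// const manifest = await (await fetch('/lab-cache/manifest.json')).json();
// renderGrid(buildCards(manifest));

const sample = { images: [{ path: 'round-03/a1b2.webp', round: 3, seed: '42' }] };
console.log(buildCards(sample)[0].path); // → round-03/a1b2.webp
```

The card list stays a flat array so the grid renderer and the filter bar can share it without any per-card state beyond what localStorage already holds.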

// 02 Verdicts and filters

Each card has three buttons. Tapping one writes the verdict to a single localStorage key shaped like "verdict:<path>" = "good" | "maybe" | "bad". The filter bar reads the full localStorage namespace on every toggle and hides cards whose verdict does not match the active filter. There are also "hide rated" and "show only favorites" modes that filter purely on the client without touching the DOM beyond display:none.
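A minimal sketch of that verdict store, using the `"verdict:<path>"` key shape from above. `store` is any localStorage-like object; the in-memory stand-in at the bottom exists only so the sketch runs outside a browser:

```javascript
const VERDICTS = ['good', 'maybe', 'bad'];

// Write a verdict under the "verdict:<path>" key; last write wins.
function setVerdict(store, path, verdict) {
  if (!VERDICTS.includes(verdict)) throw new Error(`unknown verdict: ${verdict}`);
  store.setItem(`verdict:${path}`, verdict);
}

// Returns "good" | "maybe" | "bad", or null if the image is unrated.
function getVerdict(store, path) {
  return store.getItem(`verdict:${path}`);
}

// Filter pass: the paths whose verdict matches the active filter.
function matching(store, paths, filter) {
  return paths.filter((p) => getVerdict(store, p) === filter);
}

// In-memory stand-in for window.localStorage, for illustration only.
const mem = (() => {
  const m = new Map();
  return {
    setItem: (k, v) => m.set(k, String(v)),
    getItem: (k) => (m.has(k) ? m.get(k) : null),
  };
})();

setVerdict(mem, 'round-03/a1b2.webp', 'good');
console.log(matching(mem, ['round-03/a1b2.webp', 'round-03/c3d4.webp'], 'good'));
// logs the one path rated "good"
```

In the page itself, the filter bar would call something like `matching` on each toggle and set `display:none` on the cards that fall outside the result.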

// 03 Lazy loading and initial render

The images use native lazy loading via loading="lazy" so a manifest with five hundred candidates does not blow up the initial page. Intersection observers reveal the rating buttons only when a card scrolls into view, which keeps the initial DOM tree small enough to render in under a second on mid-range phones. The whole page including the manifest download is under 80 kilobytes of JavaScript.
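The reveal-on-scroll wiring can be sketched like this, with the observer callback kept pure so the browser-only parts stay at the edge. The `reveal` hook and the `.card`/`.buttons` selectors are assumptions, not the tool's real markup:

```javascript
// Pure callback: reveal the buttons of any card that entered the viewport.
function onIntersect(entries, reveal) {
  for (const entry of entries) {
    if (entry.isIntersecting) reveal(entry.target);
  }
}

// Browser wiring (not runnable outside a page):
// const io = new IntersectionObserver((entries) =>
//   onIntersect(entries, (card) => {
//     card.querySelector('.buttons').hidden = false;
//   }));
// document.querySelectorAll('.card').forEach((card) => io.observe(card));

// Fake entries demonstrate the callback in isolation.
const revealed = [];
onIntersect(
  [
    { isIntersecting: true, target: 'card-1' },
    { isIntersecting: false, target: 'card-2' },
  ],
  (t) => revealed.push(t),
);
console.log(revealed); // only the intersecting card is revealed
```

Keeping the callback separate from the observer is also what makes the reveal logic testable without a DOM.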

OBJECT / image-cache.reviewCSS

// why this exists

real internal tooling, not a demo

Before a hero image lands on an article page, it is one of hundreds of candidate renders sitting in a cache directory. The review board is the interface that decides which ones live and which get rerolled. It loads a manifest, lays out every candidate as a thumbnail grid, and lets you tap three buttons per image: good, maybe, bad. Ratings save to browser localStorage. Filters hide the already-rated ones so the next unseen image is always first. When a batch is done, the good verdicts become the shortlist the site actually ships.

The tool is static HTML, CSS, and vanilla JavaScript. No framework, no build step, no backend. The manifest is a single JSON file written by the image-generation pipeline after each render round. Open the page on a phone, swipe through, rate, close the tab. Come back later and the ratings are still there. The site has shipped roughly five hundred candidate hero images through this interface, across twenty-plus rounds of generation, and the board is the reason the taste signal scales instead of getting lost in a folder of file names.

The embed you see on this widget page is the live review board, sandboxed in an iframe with the same permissions it has at its own URL. The images are real renders from the site's image pipeline. The ratings are real localStorage. If you rate any of them here, those ratings are yours on this machine, not sent anywhere, not shared. The tool was built as internal infrastructure. It is live on this page because one of the points the Operator's Stack positioning keeps making is that the infrastructure underneath a personal site is just as real as the marketing layer on top.

The deep-dive post for this widget is the article on the feedback ledger that turned review verdicts into a prompt taxonomy. That post covers the data side, where the ratings end up and what 500 of them taught me about which subjects actually work in the brand system. The board below is the UI side: the cheapest possible mobile-first shell that has survived contact with real-world use at scale.

Frequently asked questions

What is this tool actually for?

Rating AI-generated candidate hero images on a phone so the site's image pipeline has a taste signal to feed back into prompt generation. It runs after every batch render.

Why not build it inside the Next.js app?

The review workflow predates the Next.js build pipeline for image generation. Static HTML is faster to iterate on, has zero build overhead, and deploys as a plain file. The React wrapper you see here is the first time it has been lifted into the app.

Are the ratings I make here sent anywhere?

No. Ratings are stored in your browser's localStorage under the image path. They never leave your device. If you clear your browser storage, your ratings go with it.

Can I use this tool for my own image pipeline?

The HTML is straightforward. Copy /lab-cache/index.html, swap the manifest format for whatever your pipeline writes, and host anywhere static. There is no secret sauce beyond the localStorage pattern.

Why noindex on the iframe target?

The raw tool URL is unlisted, not secret. SEO belongs to this widget page, not the cache folder. Putting the review UI behind a noindex keeps Googlebot from indexing hundreds of near-duplicate image-review pages.

How big does the manifest get before it breaks?

Tested clean with about 500 entries. At 2,000+ the initial parse and render start to take a noticeable second. If the pipeline ever scales past that, the natural next move is paginating the manifest into weekly or round-based slices.
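If that pagination ever happens, a round-based slicer is one option — a sketch, assuming each manifest entry carries a `round` field like the metadata described above:

```javascript
// Group manifest entries by round so each slice can be written out as
// its own manifest file (e.g. manifest.round-3.json). The field name
// `round` is an assumption about the pipeline's metadata.
function sliceByRound(images) {
  const slices = new Map();
  for (const img of images) {
    const key = `round-${img.round}`;
    if (!slices.has(key)) slices.set(key, []);
    slices.get(key).push(img);
  }
  return slices;
}

const byRound = sliceByRound([
  { path: 'a.webp', round: 1 },
  { path: 'b.webp', round: 2 },
  { path: 'c.webp', round: 1 },
]);
console.log(byRound.get('round-1').length); // → 2
```

The page would then fetch only the slice for the round under review, keeping the parse cost flat as the archive grows.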

Do the verdicts feed back into anything automatically?

Not from the browser. A separate Python tool ingests a pasted verdict string from me and writes to a ledger JSON file outside the app. The feedback ledger article covers that side of the pipeline.

How does this work on desktop?

Same grid, denser. Five columns at 1440 pixels, three rating buttons per card, same localStorage schema. The UI was built mobile-first because that is where most of the actual review time happens, but desktop is where batch rating a fresh round of 150 renders is fastest.

What happens if I rate the same image twice?

The second rating overwrites the first; localStorage stores a single value per key, so there is no rating history. If you change your mind, the most recent verdict wins.