MoodiBoard. Ambient art. Signal-driven.
Problem
Digital art displays are static. Their libraries go stale. The ambient surface becomes invisible. Meanwhile, every real-time signal about a person's day (weather, schedule, mood, music) goes unused by every display in the room.
Needs
Generative output that feels coherent, not arbitrary. Signal-to-visual mappings that encode meaning rather than random variation. An interface that disappears (the art is the product, not the controls).
A cost model that works at solo scale, which rules out cloud image generation APIs.
Solution
Self-hosted generative image pipeline for cost-viable real-time generation. An AI agent layer that reads signal context and drives visual output autonomously. An agentic dev workflow that makes a solo full-stack build tractable.
Design for the system, not the output.
Self-hosted generative image pipeline instead of a cloud API.
DALL·E, Midjourney, and comparable APIs price out real-time, per-user, continuous generation at any scale. Self-hosting was the only path to a unit economics model that works without VC funding (and gives full control over the generation behavior that defines the product's core experience).
Infrastructure complexity the cloud APIs abstract away is now mine to own. GPU memory, model loading, inference latency, versioning (all of it). The system is less portable as a result, and the operational surface is larger than it would be with an API call. Worth it: the economics only work this way.
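For concreteness, here is a minimal sketch of what the application-facing side of that pipeline can look like: a request to a locally hosted text-to-image server over HTTP. The endpoint path, payload, and response shape are assumptions that depend on which inference server you run, not a description of MoodiBoard's actual pipeline.

    interface GenerationRequest {
      prompt: string;
      width: number;
      height: number;
      steps: number;   // fewer steps trades fidelity for latency
      seed?: number;   // pinning a seed makes calibration runs reproducible
    }

    // Hypothetical local endpoint; swap in whatever your inference server exposes.
    const GENERATION_URL = "http://localhost:7860/txt2img";

    async function generateImage(req: GenerationRequest): Promise<Buffer> {
      const res = await fetch(GENERATION_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req),
      });
      if (!res.ok) {
        throw new Error(`generation failed: ${res.status} ${await res.text()}`);
      }
      // Assumed response shape: base64-encoded images in an `images` array.
      const body = (await res.json()) as { images: string[] };
      return Buffer.from(body.images[0], "base64");
    }

The GPU memory, model loading, and versioning concerns described above all live behind that URL, which is exactly the operational surface a cloud API would otherwise hide.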
Controls disappear by default. The canvas is the entire interface.
Every pixel of chrome the UI shows is a pixel competing with the art. The controls layer appears on hover and retracts on inactivity. Settings live in a modal triggered by a single icon (visible only when you're in configuration mode, invisible when you're not). This forced every control decision through the question: is this necessary, or is this feature creep that breaks the ambient quality?
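A rough sketch of the reveal-and-retract behavior in React; the component name, timeout, and styling are illustrative, not the actual implementation:

    import { useEffect, useRef, useState, type ReactNode } from "react";

    const HIDE_AFTER_MS = 4000; // assumed inactivity window

    export function ControlsLayer({ children }: { children: ReactNode }) {
      const [visible, setVisible] = useState(false);
      const timer = useRef<ReturnType<typeof setTimeout> | null>(null);

      // Any pointer activity reveals the controls and restarts the retract timer.
      const wake = () => {
        setVisible(true);
        if (timer.current) clearTimeout(timer.current);
        timer.current = setTimeout(() => setVisible(false), HIDE_AFTER_MS);
      };

      // Clear any pending timer when the component unmounts.
      useEffect(() => () => { if (timer.current) clearTimeout(timer.current); }, []);

      return (
        <div onPointerMove={wake} style={{ position: "absolute", inset: 0 }}>
          <div
            style={{
              opacity: visible ? 1 : 0,
              transition: "opacity 300ms",
              pointerEvents: visible ? "auto" : "none", // hidden controls never intercept input
            }}
          >
            {children}
          </div>
        </div>
      );
    }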
Discoverability suffers. New users don't know the controls exist until they hover. The first-run experience needs explicit guidance that the rest of the product deliberately lacks. I haven't solved this cleanly (the onboarding and the ambient experience are in tension, and I've kicked the resolution to a later iteration).
Signal changes produce gradual visual shifts, not immediate regeneration.
Early builds had a direct mapping problem: one signal change → one new image. A cloud appearing meant an immediate canvas transformation. This felt wrong — jarring, no sense of continuity. Getting the system to feel responsive without feeling reactive is the core calibration challenge.
The right response curve varies by signal type; a slow weather shift should behave differently from a calendar event starting. Calibrating this across signal types is ongoing work. The current implementation is a reasonable approximation that still needs refinement at the edges.
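One way to express that, sketched here with exponential smoothing and per-signal time constants; the signal names and constants are illustrative assumptions, not the calibrated values:

    type SignalType = "weather" | "calendar" | "music";

    // Time constants in seconds: roughly how long each signal takes to settle
    // most of the way toward a new target. Values here are illustrative.
    const TAU: Record<SignalType, number> = {
      weather: 600,   // a cloud rolling in drifts the canvas over ~10 minutes
      calendar: 30,   // a meeting starting registers within about half a minute
      music: 90,
    };

    // One smoothing step: move `current` toward `target` given `dt` seconds elapsed.
    function step(signal: SignalType, current: number, target: number, dt: number): number {
      const alpha = 1 - Math.exp(-dt / TAU[signal]);
      return current + alpha * (target - current);
    }

In a setup like this, the generation layer reads the smoothed values rather than the raw signals, so a step change in input becomes a gradual drift in output.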
Claude Code, Figma MCP, and v0 as the full development loop (not as supplements to a conventional workflow).
This isn't using AI tools to speed up conventional development. The architecture, implementation, and iteration all run through the agentic loop. I operate at the product and architecture layer; Claude Code handles the implementation layer. Figma MCP keeps design artifacts live and connected. v0 scaffolds components from specs that Claude Code then integrates. A two-engineer build compressed into one.
The agentic loop requires precise input. Vague prompts produce plausible-looking code that fails at the edges. The cognitive overhead isn't gone (it's shifted from writing implementation code to reviewing and directing it). You have to understand the codebase well enough to catch what the agent gets subtly wrong. It's slower than it feels on the first pass, but faster overall than writing everything yourself.
If I did it again
Build a dedicated calibration environment before writing any frontend. The signal-to-output mapping is the hardest problem in this product and the slowest to iterate on. Running calibration inside the production system means every test cycle touches too many variables at once. Isolated iteration would have compressed months into weeks.
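Concretely, the kind of harness I mean is a replay loop that drives the mapping layer from recorded signal traces, with no agent, no generation pipeline, and no wall clock; the types and names below are hypothetical:

    interface SignalSample { t: number; signal: string; value: number }

    type Mapping = (sample: SignalSample, dtSeconds: number) => number;

    // Replay a recorded trace through the mapping alone, using the trace's own
    // timestamps instead of real time, so one run exercises exactly one variable.
    function replay(trace: SignalSample[], map: Mapping): Array<{ t: number; out: number }> {
      let last = trace[0]?.t ?? 0;
      return trace.map((sample) => {
        const out = map(sample, sample.t - last);
        last = sample.t;
        return { t: sample.t, out };
      });
    }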
Establish cleaner seams between system layers earlier. When an output is wrong, diagnosing whether it originated in signal collection, the agent layer, or the generation pipeline is harder than it should be. Clearer separation would have made failures readable from day one.
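Even a few explicit, provenance-carrying types at the boundaries would have helped. A hypothetical sketch (layer names and fields are illustrative):

    interface SignalContext {
      collectedAt: string;                 // when the signals were read
      signals: Record<string, number>;
    }

    interface AgentDecision {
      basedOn: SignalContext;              // the exact context the agent saw
      prompt: string;
      reason: string;                      // why the agent chose this direction
    }

    interface GeneratedFrame {
      from: AgentDecision;                 // the decision that produced this frame
      model: string;
      seed: number;
      imagePath: string;
    }

    // A wrong frame is then traceable by walking the chain backwards:
    // GeneratedFrame -> AgentDecision -> SignalContext.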
Commit to the agentic dev workflow fully from the start. I spent the first two weeks writing implementation code by hand (old habits). The Claude Code loop is faster and produces better-documented code. The only cost is the habit change. I'd skip the two-week ramp and go all-in on day one.