Release history
Changelog
A reverse-chronological record of every build decision, ship, and fix. Treating this as a real product means documenting as you go — not just at the end.
Currently building: live API wiring, image proxy, Seller Central OAuth integration
May 3, 2026
Day 2 — 11:00pm
v0.4 — All 5 supporting pages shipped
Added How It Works, API Docs, Shareable Results, Built With, and Changelog pages. Navbar updated with full navigation. Each page is designed for a specific audience — technical founders, API consumers, and referred visitors from shared reports.
[new] How It Works — agent architecture diagram, SSE decision, graceful failure pattern with Promise.allSettled
[new] API Docs — POST /api/analyse, GET /api/results/:id, SSE stream spec with full event types, schema tables
[new] Shareable Results — permanent public report URL with viral CTA strip; the PLG distribution mechanic
[new] Built With — tool decisions with rationale: Claude vs GPT-4V, SSE vs WebSockets, Upstash vs Postgres, DM Sans vs Inter
[new] Changelog — this page
[improve] Navbar updated across all pages with consistent active states
May 3, 2026
Day 2 — 2:00pm
v0.3 — Report page complete + shareable score card
Full report render: per-image breakdown, before/after conversion leak section, AI search query table, competitor comparison. Score card with copyable share link. Pixii design brief generator with copy-to-clipboard.
[new] Score card — shareable snapshot with bullet callouts and "Get fixed with Pixii" CTA
[new] Design brief section — 9 structured fields synthesised from agent findings; copy button with 2s feedback
[new] Competitor comparison — side-by-side hero images with gap narrative
[new] Before/after section — dashed red callout overlay on current hero with prescription copy
[improve] Score animates from 0 → target on report reveal using requestAnimationFrame + easeOutCubic
[improve] Segment bar chart animates in with staggered width transitions per segment
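The score reveal above can be sketched as a pure easing function driven by a requestAnimationFrame loop. A minimal sketch — function and parameter names are illustrative, not taken from the actual codebase:

```typescript
// easeOutCubic: fast start, gentle settle; t is normalised progress in [0, 1]
const easeOutCubic = (t: number): number => 1 - Math.pow(1 - t, 3);

// Animate a displayed score from 0 to `target` over `duration` ms.
// `render` receives the current value each frame (e.g. sets textContent).
function animateScore(
  target: number,
  duration: number,
  render: (value: number) => void,
): void {
  const start = performance.now();
  const frame = (now: number) => {
    const t = Math.min((now - start) / duration, 1);
    render(Math.round(target * easeOutCubic(t)));
    if (t < 1) requestAnimationFrame(frame); // keep animating until t hits 1
  };
  requestAnimationFrame(frame);
}
```

The same easing curve could drive the segment bar widths, with each segment's start offset staggered by a fixed delay.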
May 2, 2026
Day 1 — 11:00pm
v0.2 — Category benchmarker agent + full agent fan-out
Added the 4th agent (category benchmarker) and wired the full agent fan-out. Realised while building that the agent states needed to be decoupled from each other — they run at different speeds and must not block.
[new] Category benchmarker agent — SerpAPI Shopping + Claude Vision competitor diff; started first (slowest agent at ~8s)
[arch] Decoupled agent state management: each agent's line stream is independently tracked; no shared mutex needed
[fix] Agent cards were flickering on line updates due to full re-render — fixed by keying on agent ID, not array index
[improve] Synthesis "Analysing…" → "Complete" transition now waits for all 4 agents to reach the complete state before triggering
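The decoupled state pattern above can be sketched as a per-agent store: each agent owns its own status and line buffer, so updating one never touches another, and the synthesis gate is just a check over all of them. Class and method names here are assumptions for illustration, not the real implementation:

```typescript
type AgentStatus = "waiting" | "running" | "complete";

interface AgentState {
  status: AgentStatus;
  lines: string[];
}

// Each agent's stream is tracked independently: a slow agent can't
// block or clobber a fast one, and no shared lock is needed because
// every update targets exactly one agent's entry.
class AgentStateStore {
  private agents = new Map<string, AgentState>();

  register(id: string): void {
    this.agents.set(id, { status: "waiting", lines: [] });
  }

  appendLine(id: string, line: string): void {
    const agent = this.agents.get(id);
    if (!agent) return;
    agent.status = "running"; // first line flips waiting → running
    agent.lines.push(line);
  }

  complete(id: string): void {
    const agent = this.agents.get(id);
    if (agent) agent.status = "complete";
  }

  // Synthesis transition fires only once every agent has finished.
  allComplete(): boolean {
    return [...this.agents.values()].every((a) => a.status === "complete");
  }
}
```

Keying the rendered cards on the agent ID (the Map key) rather than an array index also falls out of this shape naturally, which is what stopped the flicker.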
May 2, 2026
Day 1 — 3:00pm
v0.1 — Core agent pipeline + streaming dashboard
First working build. Hero → URL submission → 3-agent streaming dashboard. The core loop works end to end: submit, watch agents run in real time, see the synthesis line.
[new] Visual auditor agent — Claude Sonnet vision, rubric scoring, per-image failure callouts
[new] Review intelligence agent — Firecrawl + NLP, keyword gap extraction
[new] AI search visibility agent — 6 queries across Claude + Gemini, visibility rate scoring
[new] SSE streaming dashboard — parallel agent cards with typewriter line output, status transitions (waiting → running → complete)
[new] URL validation — inline error state for non-Amazon /dp/ URLs
[arch] Chose SSE over WebSockets: analysis is strictly server-to-client; SSE rides HTTP/2 natively and works on Vercel Edge without persistent connections
[arch] Agent fan-out via Promise.allSettled: prevents a single slow/failed agent from blocking the others; degraded mode ships partial results
[fix] Character streaming was scheduling too aggressively — added Math.random() * 20 jitter to make it feel natural rather than mechanical
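The Promise.allSettled fan-out can be sketched as below: every agent runs concurrently, fulfilled agents contribute findings, and rejected ones are recorded so the report can ship in degraded mode instead of failing outright. The runner shape and names are assumptions, not the actual pipeline code:

```typescript
type AgentResult = { agent: string; findings: string[] };

// Run all agents in parallel; a rejection from one never blocks the
// others. allSettled (unlike Promise.all) resolves once every agent
// has either fulfilled or rejected.
async function runAgents(
  agents: Record<string, () => Promise<string[]>>,
): Promise<{ results: AgentResult[]; failed: string[] }> {
  const names = Object.keys(agents);
  const settled = await Promise.allSettled(names.map((n) => agents[n]()));

  const results: AgentResult[] = [];
  const failed: string[] = [];
  settled.forEach((outcome, i) => {
    if (outcome.status === "fulfilled") {
      results.push({ agent: names[i], findings: outcome.value });
    } else {
      failed.push(names[i]); // degraded mode: note which agent dropped out
    }
  });
  return { results, failed };
}
```

In an SSE setup, each agent's interim lines would additionally be flushed to the stream as they arrive; the structure above only shows the terminal fan-in.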
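The typewriter jitter fix amounts to adding up to 20 units of randomness on top of a base per-character delay, so the cadence never falls into a mechanical tick. A minimal sketch — the base delay constant is an assumption, only the Math.random() * 20 term comes from the changelog entry:

```typescript
// Assumed base cadence in ms; the 20 ms jitter range is from the fix note.
const BASE_DELAY_MS = 12;
const nextCharDelay = (): number => BASE_DELAY_MS + Math.random() * 20;

// Emit one character at a time with a jittered pause between each,
// e.g. appending to an agent card's output line.
async function typewrite(
  text: string,
  emit: (char: string) => void,
): Promise<void> {
  for (const char of text) {
    emit(char);
    await new Promise((resolve) => setTimeout(resolve, nextCharDelay()));
  }
}
```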