Havoptic: I Built a Visual Release Tracker Because I Couldn't Keep Up

· 7 min read
Scott Havird
Engineer

Here's a confession: I can't keep up with release notes. Claude Code, Cursor, Windsurf, Gemini CLI, Copilot CLI, Codex CLI, Kiro – they're all shipping at breakneck speed, and every week there's a new version with features that could change how I work. But reading through changelogs? My eyes glaze over by paragraph two. I'm a visual person. I need to see what changed, not read a wall of text about it.

That frustration is why I built Havoptic.

The Problem: Changelog Fatigue Is Real

If you're like me, you've got multiple AI coding tools in your daily workflow. Each one has its own release cadence, its own changelog format, and its own place where updates get posted. Some use GitHub Releases. Some have a dedicated changelog page. Some bury updates in npm registry metadata.

Keeping track of all of it feels like a part-time job. And the cost of missing a release? You might be grinding through a workflow that a new feature just made obsolete. Or worse, you miss a breaking change and spend hours debugging something that the changelog would've explained in two sentences.

I wanted a single place where I could glance at what's new across all my tools – visually, chronologically, and in real-time. That's the core idea behind Havoptic.

What Havoptic Actually Does

Havoptic aggregates releases from the top AI coding tools into a unified timeline. Every release gets an AI-generated infographic that summarizes the key features visually. No more reading – just look.

Currently tracking:

  • Claude Code – via npm registry + GitHub changelog
  • Cursor – via changelog page scraping
  • Windsurf – via changelog page scraping
  • Gemini CLI – via GitHub Releases API
  • Codex CLI – via GitHub Releases API
  • Copilot CLI – via GitHub Releases API
  • Kiro CLI – via changelog page scraping

Beyond the timeline, there are tool comparison pages, trend analytics with real metrics (npm downloads, GitHub stars, release velocity), a watchlist with browser push notifications, and an AI-generated blog with weekly digests and deep dives.

The Tech: Lightweight by Design

One of the architectural decisions I'm most proud of is how minimal the production footprint is. The entire client-side app has exactly two production dependencies: react and react-dom. That's it. Everything else lives in devDependencies.

The Stack

  • React 18 + TypeScript – strict mode, ES2020 target
  • Vite 6 – build tool and dev server
  • TailwindCSS – with custom brand colors per tool
  • Cloudflare Pages Functions – serverless API layer
  • Cloudflare D1 – SQLite for subscribers, sessions, watchlists
  • Cloudflare R2 – object storage for newsletter assets
  • Terraform – infrastructure as code for prod and dev environments

No Router Library

This one might raise eyebrows, but hear me out. Routing is handled entirely in App.tsx with a discriminated union type and a getPageFromLocation() function. Both path-based and hash-based routes are supported for backwards compatibility. It keeps the bundle tiny and removes an entire dependency from the critical path. When your app is a focused tool with a handful of views, a full router library is overhead you don't need.
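To make the idea concrete, here's a minimal sketch of router-less routing with a discriminated union. The page kinds, paths, and function shape are illustrative assumptions, not Havoptic's actual `App.tsx`:

```typescript
// Sketch of router-less routing: a discriminated union of pages plus a
// pure function that maps the current location to one of them.
// Page names and URL shapes here are hypothetical.
type Page =
  | { kind: "timeline" }
  | { kind: "tool"; slug: string }
  | { kind: "release"; slug: string; version: string }
  | { kind: "notFound" };

function getPageFromLocation(pathname: string, hash: string): Page {
  // Support legacy hash routes (#/tool/cursor) alongside path routes.
  const path = hash.startsWith("#/") ? hash.slice(1) : pathname;
  const parts = path.split("/").filter(Boolean);
  if (parts.length === 0) return { kind: "timeline" };
  if (parts[0] === "tool" && parts.length === 2)
    return { kind: "tool", slug: parts[1] };
  if (parts[0] === "release" && parts.length === 3)
    return { kind: "release", slug: parts[1], version: parts[2] };
  return { kind: "notFound" };
}
```

A `switch` on `page.kind` in the render function then replaces the entire router library, and TypeScript's exhaustiveness checking guarantees every view is handled.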

The Data Pipeline

The release data flows through a batch-process architecture that keeps things simple:

  1. GitHub Actions run daily – a fetch script pulls from all 7 sources concurrently using Promise.all
  2. Multi-strategy scraping – GitHub Releases API for some tools, npm registry cross-referenced with raw CHANGELOG.md for Claude Code, and Cheerio-based HTML scraping for Cursor, Kiro, and Windsurf
  3. Deduplication and merge – releases are ID'd as {tool}-{version}, merged with existing data, sorted by date
  4. Output – a single static releases.json file that powers the entire frontend
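Steps 3 and 4 can be sketched as a pure merge function. The `Release` shape is an assumption for illustration, not the repo's actual schema:

```typescript
// Sketch of the dedupe-and-merge step: releases are keyed as
// {tool}-{version}, freshly fetched data wins over stale entries,
// and the result is sorted newest-first. Field names are assumed.
interface Release {
  tool: string;
  version: string;
  date: string; // ISO 8601
  notes: string;
}

const releaseId = (r: Release) => `${r.tool}-${r.version}`;

function mergeReleases(existing: Release[], fetched: Release[]): Release[] {
  const byId = new Map<string, Release>();
  for (const r of existing) byId.set(releaseId(r), r);
  // Later writes overwrite earlier ones, so fetched data takes priority.
  for (const r of fetched) byId.set(releaseId(r), r);
  return [...byId.values()].sort((a, b) => b.date.localeCompare(a.date));
}
```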

That last point is key. The "database" for release data is just a JSON file. The Cloudflare Pages Function at /api/releases acts as a thin proxy over it, applying content gating based on auth status (anonymous users see the last 5 releases per tool, authenticated users see everything). It's an elegant way to incentivize sign-ups without overcomplicating the data layer.
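The gating logic is simple enough to sketch in a few lines. This is an assumed implementation of the rule described above (last 5 per tool for anonymous users), not the actual Pages Function:

```typescript
// Sketch of the content-gating proxy: anonymous users see only the
// last N releases per tool. Assumes the input is sorted newest-first.
interface Release {
  tool: string;
  version: string;
  date: string;
}

function gateReleases(
  all: Release[],
  isAuthenticated: boolean,
  limit = 5,
): Release[] {
  if (isAuthenticated) return all;
  const seenPerTool = new Map<string, number>();
  return all.filter((r) => {
    const n = (seenPerTool.get(r.tool) ?? 0) + 1;
    seenPerTool.set(r.tool, n);
    return n <= limit;
  });
}
```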

AI-Generated Infographics: The Heart of It

This is the feature that makes Havoptic what it is. Every release gets a visual summary – an infographic generated by AI.

The pipeline works in two stages:

Stage 1: Feature Extraction – The Anthropic Claude API reads the raw release notes and extracts the top 6 features. The extracted features are validated against error patterns to catch hallucinated content before it hits the image generator.

Stage 2: Image Generation – Google Gemini generates branded infographic images in multiple aspect ratios (1:1, 16:9, 9:16) using the extracted features as input. Each image uses the tool's brand colors for visual consistency.

There's also a separate validation script that cross-references extracted features against the actual source content, classifying each as verified, inferred, or fabricated. I don't blindly trust AI output, and neither should you.
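One way to implement that classification is a term-overlap heuristic: check how many of a feature's key terms actually appear in the source notes. The thresholds and tokenization below are invented for illustration; only the three labels come from the post:

```typescript
// Sketch of feature validation: classify an extracted feature by how
// many of its significant terms appear in the source release notes.
// The 0.8 / 0.4 thresholds are assumptions, not Havoptic's values.
type Verdict = "verified" | "inferred" | "fabricated";

function classifyFeature(feature: string, sourceNotes: string): Verdict {
  const source = sourceNotes.toLowerCase();
  const terms = feature
    .toLowerCase()
    .split(/\W+/)
    .filter((t) => t.length > 3); // skip short stopword-like tokens
  if (terms.length === 0) return "fabricated";
  const hits = terms.filter((t) => source.includes(t)).length;
  const ratio = hits / terms.length;
  if (ratio >= 0.8) return "verified";
  if (ratio >= 0.4) return "inferred";
  return "fabricated";
}
```

A real pipeline would likely add stemming and synonym handling, but even this crude version catches features the model invented out of whole cloth.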

Push Notifications: Built from Scratch

Rather than reaching for a third-party push notification service, I implemented RFC 8291 Web Push encryption directly. Subscriptions are managed in Cloudflare D1, and failed deliveries trigger automatic cleanup after 3 attempts. Add a tool to your watchlist, and you get a browser notification the moment a new version drops.
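The cleanup policy is easy to sketch independently of the RFC 8291 crypto. Here the actual push send is an injected function, and the three-failure threshold follows the post; everything else is an assumption:

```typescript
// Sketch of the delivery-failure cleanup policy: a subscription that
// fails 3 times in a row is dropped. The send function is injected;
// the Subscription shape is illustrative, not the D1 schema.
interface Subscription {
  endpoint: string;
  failures: number;
}

type SendFn = (endpoint: string) => Promise<boolean>; // true = delivered

async function deliverAll(
  subs: Subscription[],
  send: SendFn,
): Promise<Subscription[]> {
  const kept: Subscription[] = [];
  for (const sub of subs) {
    const ok = await send(sub.endpoint).catch(() => false);
    // A success resets the counter; a failure increments it.
    const failures = ok ? 0 : sub.failures + 1;
    if (failures < 3) kept.push({ ...sub, failures });
    // Subscriptions reaching 3 failures fall out of the kept list.
  }
  return kept;
}
```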

The Newsletter System

Double opt-in, per-tool and per-content-type preferences, full audit trail. The subscriber system uses an opt-out model: if no preference rows exist, you receive everything, and you granularly opt out of what you don't want. It's backed by Cloudflare D1 with AWS SES handling the actual email delivery.
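The opt-out model reduces to a tiny predicate: send unless a matching opt-out row exists. The row shape below is a hypothetical stand-in for the D1 schema:

```typescript
// Sketch of the opt-out preference model: absence of rows means the
// subscriber receives everything; rows record what they opted out of.
// Field names are illustrative.
interface OptOut {
  tool: string;
  contentType: string;
}

function shouldSend(
  optOuts: OptOut[], // rows for this subscriber (may be empty)
  tool: string,
  contentType: string,
): boolean {
  // No matching opt-out row means the email goes out.
  return !optOuts.some(
    (o) => o.tool === tool && o.contentType === contentType,
  );
}
```

The nice property of this model is that new tools and content types are delivered by default, with no backfill migration for existing subscribers.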

SEO-First Architecture

Since one of my goals was discoverability, I went deep on SEO:

  • JSON-LD structured data – WebSite and SoftwareApplication schemas, plus dynamic ItemList data loaded at runtime
  • SSR meta tags – Cloudflare Pages Functions handle server-side rendering of Open Graph tags for release pages, tool pages, blog posts, and comparison pages
  • RSS and JSON feeds – for readers and aggregators
  • llms.txt – for AI crawler discoverability (yes, we're optimizing for AI readers now too)
  • PWA support – installable with offline capabilities
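For a sense of what the structured data looks like, here's a hedged sketch of emitting JSON-LD for a release page. The `SoftwareApplication` type follows the list above; the exact fields chosen are assumptions:

```typescript
// Sketch of generating JSON-LD structured data for a release page.
// The @type matches the post; the specific properties are illustrative.
function releaseJsonLd(tool: string, version: string, date: string): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    name: tool,
    softwareVersion: version,
    datePublished: date,
    applicationCategory: "DeveloperApplication",
  });
}
```

The resulting string goes into a `<script type="application/ld+json">` tag, which the SSR meta-tag function can inject alongside the Open Graph tags.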

It's Open Source

The entire project is open source on GitHub under the MIT license. The repo includes everything: the React frontend, the Cloudflare Functions API, the release fetching scripts, the infographic generation pipeline, the Terraform infrastructure configs, and all 8 GitHub Actions workflows.

If you want to add a new tool to track, the pattern is straightforward – create a fetcher that outputs the standard release format, register it in the config, and the rest of the pipeline picks it up automatically.
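The shape of that pattern might look like the following. The interface and registry names are my guesses at the convention, not the repo's actual API:

```typescript
// Sketch of the fetcher-registration pattern: each fetcher returns the
// standard release shape, and the pipeline fans out over the registry.
// Names here are hypothetical.
interface Release {
  tool: string;
  version: string;
  date: string;
  notes: string;
}

type Fetcher = () => Promise<Release[]>;

const fetchers: Record<string, Fetcher> = {
  // "newtool": fetchNewTool,  // register your fetcher here
};

async function fetchAll(): Promise<Release[]> {
  // Runs every registered fetcher concurrently, as the pipeline does.
  const results = await Promise.all(Object.values(fetchers).map((f) => f()));
  return results.flat();
}
```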

Contributions are welcome. There's a CONTRIBUTING.md, a code of conduct, and a security policy in the repo. I'd love to see community PRs for new tools, UI improvements, or pipeline enhancements.

What I Learned Building This

Static JSON as a data layer works. When your data updates on a schedule (daily releases, not real-time events), you don't need a database for the read path. A JSON file served through a thin API proxy keeps things fast and simple.

Multi-strategy scraping is essential. Every tool publishes changelogs differently. The Kiro fetcher alone uses 4 different strategies to extract versions and dates from HTML. Building resilience into the scraping layer means format changes don't break everything.
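The resilience pattern amounts to trying strategies in priority order until one succeeds. The regex strategies below are stand-ins for the Cheerio selectors the real fetchers use; both the strategies and the HTML shapes are invented for illustration:

```typescript
// Sketch of multi-strategy extraction: each strategy either yields a
// version string or null, and the first hit wins. A changelog page
// redesign then only has to defeat every strategy to break the fetcher.
type Strategy = (html: string) => string | null;

function extractVersion(html: string, strategies: Strategy[]): string | null {
  for (const strategy of strategies) {
    const version = strategy(html);
    if (version) return version;
  }
  return null; // all strategies failed; surface this as a fetch error
}

// Hypothetical strategies, from most to least specific.
const versionStrategies: Strategy[] = [
  (h) => h.match(/data-version="([^"]+)"/)?.[1] ?? null,
  (h) => h.match(/<h2[^>]*>v?(\d+\.\d+\.\d+)<\/h2>/)?.[1] ?? null,
];
```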

AI content pipelines need validation. Generating infographics with AI is powerful, but you need a feedback loop that verifies the output against the source material. The verified/inferred/fabricated classification gives me confidence in what ships.

Cloudflare's platform is underrated for solo projects. Pages Functions + D1 + R2 gives you a full-stack serverless setup with generous free tiers. Paired with Terraform for IaC, it's a production-grade stack that costs almost nothing to run.

What's Next

I'm actively working on expanding tool coverage and improving the comparison analytics. The trend data (npm downloads, GitHub stars, release velocity) is already there, and I want to make it more useful for people evaluating which tools to invest in.

If you're tired of reading changelogs and just want to see what's new, give Havoptic a look. And if you want to peek under the hood or contribute, the source is on GitHub.

I built this for myself, but I hope other visual thinkers find it useful too.


Find me on X and GitHub.