Every technology choice in Liz Insight Engine maps to a deliberate product decision. This page explains the reasoning behind each one — and the PM skill it demonstrates.
Dashed lines indicate fallback routing. The sprint retro flow is entirely deterministic — no LLM calls.
I gave each source (Zoom, Slack, Jira) its own independent toggle and textarea. Only enabled sources are included in the analysis payload.
Not every customer has all three integrations connected. Anaplan might only have Zoom and Jira. A startup might only have Slack. If the tool forces you to populate all three, it breaks in a real customer call. Making each source optional means I can load just the Zoom transcript from a call I just finished and run the analysis immediately. The toggle state also makes the active sources visible in the Analyze button label — so the user always knows exactly what data they're analyzing.
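The toggle-to-payload logic can be sketched as below. This is an illustrative sketch, not the app's actual code: the names `SourceInput`, `buildPayload`, and `analyzeLabel` are hypothetical stand-ins.

```typescript
// Illustrative sketch: only enabled, non-empty sources reach the analysis payload.
type SourceId = "zoom" | "slack" | "jira";

interface SourceInput {
  enabled: boolean;
  text: string;
}

function buildPayload(sources: Record<SourceId, SourceInput>): Record<string, string> {
  const payload: Record<string, string> = {};
  for (const [id, src] of Object.entries(sources)) {
    if (src.enabled && src.text.trim().length > 0) {
      payload[id] = src.text; // disabled sources never enter the payload
    }
  }
  return payload;
}

// The Analyze button reflects toggle state, e.g. "Analyze (Zoom + Jira)".
function analyzeLabel(sources: Record<SourceId, SourceInput>): string {
  const active = Object.entries(sources)
    .filter(([, src]) => src.enabled)
    .map(([id]) => id.charAt(0).toUpperCase() + id.slice(1));
  return active.length > 0 ? `Analyze (${active.join(" + ")})` : "Analyze";
}
```

Because the button label derives from the same toggle state as the payload, the two can never drift apart.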
I split the pipeline into two focused calls: one for analysis (understanding the signal) and one for prototype generation (building the response to it).
A single combined prompt that does both analysis and prototype generation would produce outputs that are hard to evaluate and debug. By separating them, each call has one job. If the prototype output is wrong, I can tune the prototype prompt without touching the analysis prompt. If the analysis themes are off, I can fix the analysis prompt without affecting prototype generation. This is how enterprise AI pipelines are actually built — separation of concerns applies to prompts the same way it applies to functions.
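The two-call structure can be sketched as follows. `callClaude` is a hypothetical stand-in for the real Claude API wrapper, and the prompt text and result shapes are assumptions for illustration.

```typescript
// Hypothetical sketch of the two-call pipeline: each call has one job.
interface InsightResult { themes: string[]; summary: string; }
interface PrototypeSpec { title: string; spec: string; }

async function runPipeline(
  payload: Record<string, string>,
  callClaude: (prompt: string) => Promise<string>,
): Promise<{ insight: InsightResult; prototype: PrototypeSpec }> {
  // Call 1: analysis only — understand the signal.
  const insight: InsightResult = JSON.parse(
    await callClaude(`Analyze these sources and return JSON: ${JSON.stringify(payload)}`),
  );
  // Call 2: prototype only — build the response, seeded with the analysis.
  const prototype: PrototypeSpec = JSON.parse(
    await callClaude(`Generate a prototype spec as JSON for: ${JSON.stringify(insight)}`),
  );
  return { insight, prototype };
}
```

Because each prompt is invoked in isolation, either one can be tuned, tested, or swapped without touching the other.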
I used Claude Sonnet for both calls — fast enough for real-time use in a customer call, reliable enough to follow complex JSON schemas without hallucinating extra fields.
The JD explicitly names Claude as the tool Sema uses for prototyping. Beyond that, Sonnet is the right model for this use case: it returns valid JSON reliably (critical when the whole app depends on JSON.parse()), it completes in 2-3 seconds during a live demo, and it's significantly cheaper than Opus for a prototype that might run dozens of analyses in a day. I would evaluate Opus for production if output quality needed to improve.
If the Claude API call fails — network error, rate limit, invalid key — the app falls back to a deterministic keyword parser in analysisEngine.ts that produces the same InsightResult shape without any API call.
A prototype that crashes in a customer call is worse than one that produces slightly less nuanced output. The fallback ensures the demo never breaks. It also demonstrates a production design principle: AI features should degrade gracefully, not fail hard. The fallback output is clearly labeled as offline mode so the user knows they're not seeing Claude's analysis — but they can still see the tool's structure and UX.
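The graceful-degradation pattern can be sketched like this. The keyword rules and the `analyzeWithKeywords` name are illustrative stand-ins for the real parser in `analysisEngine.ts`; the `offline` flag models the "offline mode" label.

```typescript
// Sketch: if the Claude call throws (network error, rate limit, bad key),
// fall back to a deterministic keyword parser that returns the same shape.
interface InsightResult {
  themes: string[];
  summary: string;
  offline: boolean; // surfaced in the UI as an "offline mode" label
}

function analyzeWithKeywords(text: string): InsightResult {
  const themes: string[] = [];
  if (/slow|latency|timeout/i.test(text)) themes.push("performance");
  if (/price|cost|budget/i.test(text)) themes.push("pricing");
  return { themes, summary: "Keyword-based analysis (offline mode)", offline: true };
}

async function analyze(
  text: string,
  callClaude: (t: string) => Promise<InsightResult>,
): Promise<InsightResult> {
  try {
    return await callClaude(text);
  } catch {
    return analyzeWithKeywords(text); // same InsightResult shape, no API call
  }
}
```

The caller never needs to know which path produced the result; only the `offline` flag differs.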
The sprint retro analysis uses pure TypeScript and regex — no LLM calls. It parses structured ticket data deterministically.
The retro input is already structured — ticket IDs, point values, status flags. Using Claude for deterministic arithmetic would add 2-3 seconds of latency, cost money on every call, and introduce non-determinism. Regex gives the same answer every time, in milliseconds, for free. I use Claude where natural language understanding genuinely adds value — understanding what customers mean, generating feature specs. I use TypeScript where the data is already in the right shape. That distinction is a core engineering judgment PMs need to make when designing AI systems.
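As a sketch of what deterministic parsing looks like, assume retro tickets arrive as lines like `LIZ-142 | 5pts | done`. The line format, field names, and `parseRetro` are assumptions for illustration, not the actual retro schema.

```typescript
// Illustrative regex parser for structured retro input, e.g.:
//   "LIZ-142 | 5pts | done"
interface RetroStats {
  committed: number;  // total points committed
  completed: number;  // points on tickets marked done
  velocity: number;   // completion percentage, rounded
}

function parseRetro(input: string): RetroStats {
  const line = /^([A-Z]+-\d+)\s*\|\s*(\d+)pts\s*\|\s*(done|carried)$/;
  let committed = 0;
  let completed = 0;
  for (const raw of input.split("\n")) {
    const m = raw.trim().match(line);
    if (!m) continue; // skip malformed lines deterministically
    const points = parseInt(m[2], 10);
    committed += points;
    if (m[3] === "done") completed += points;
  }
  const velocity = committed > 0 ? Math.round((completed / committed) * 100) : 0;
  return { committed, completed, velocity };
}
```

Same input, same answer, every time, with no latency or API cost.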
The Claude prototype prompt requires a feldmanNote field in the JSON schema — an explicit assessment of whether the proposed feature is net-positive, neutral, or negative for engineers.
Sema's culture requires that every feature shipped is neutral-to-positive for engineers (the Feldman Doctrine). If I don't require the engineer impact assessment in the prompt, Claude will skip it. Making it a required JSON field means I can't generate a prototype spec that hasn't been checked for engineer impact. It's a forcing function built into the AI pipeline — not a post-hoc review. This is how you operationalise a cultural value in a product.
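The forcing function can be sketched as a validation step that rejects any spec Claude returns without the field. The `feldmanNote` shape shown here (an `impact` verdict plus rationale) and the `parsePrototypeSpec` name are illustrative assumptions.

```typescript
// Sketch: a prototype spec cannot pass validation without an explicit
// engineer-impact verdict in its required feldmanNote field.
type EngineerImpact = "net-positive" | "neutral" | "negative";

interface FeldmanNote {
  impact: EngineerImpact;
  rationale: string;
}

interface PrototypeSpec {
  title: string;
  spec: string;
  feldmanNote: FeldmanNote; // required, never optional
}

function parsePrototypeSpec(json: string): PrototypeSpec {
  const raw = JSON.parse(json);
  const impacts: EngineerImpact[] = ["net-positive", "neutral", "negative"];
  if (!raw.feldmanNote || !impacts.includes(raw.feldmanNote.impact)) {
    // Reject any spec that skipped the engineer-impact assessment.
    throw new Error("Prototype spec missing a valid feldmanNote");
  }
  return raw as PrototypeSpec;
}
```

The check runs in the pipeline itself, so no unreviewed spec can reach the UI.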
Liz Insight Engine was designed and built by Olu Oso, a Senior Technical PM with 10+ years shipping enterprise AI platforms at Oracle and IBM. This prototype demonstrates the full PM skill set Sema is looking for: customer-to-prototype in hours, AI-first product thinking, and the ability to defend every architectural decision.