Refining the Narrative Engine: UI, Data, and Database Lessons from a Recent Sprint
A deep dive into a recent development session, covering UI enhancements, crucial data integrity fixes, a new analytical feature, and a classic database gotcha when building an AI-powered storytelling platform.
Building an AI-powered storytelling platform like ours, where complex narratives are broken down into 'beats' and analyzed by various 'personas,' is a journey of continuous refinement. Each development sprint brings us closer to making the tool more intuitive, intelligent, and robust for writers. This past session was no exception, focusing on a mix of UI/UX improvements, critical data integrity work, and laying the groundwork for more advanced AI interaction.
Let's unpack what went down.
Uncluttering the Workflow: Making Personas Collapsible
One of the core features of our platform is the ability to define and compare different analytical personas within a workflow step. This is incredibly powerful, but as the number of personas grew, the workflow detail page (`src/app/(dashboard)/dashboard/workflows/[id]/page.tsx`) started feeling a bit crowded.
The goal was simple: make these persona selectors collapsible. This wasn't just about aesthetics; it was about improving cognitive load and allowing users to focus on the active persona without distraction.
Here's how we tackled it:
- Introducing `CollapsibleSection`: We created a new, reusable `CollapsibleSection` component (around line 77 in our component library). This generic component handles the chevron toggle, icon display, and badge support, making it flexible enough for various sections.
- Integrating with `PersonaPicker`: I then wrapped the "Expert Personas" overview section (~line 598) in it and removed the `collapsible={false}` prop from both the per-step persona selector (~line 1071) and the Compare Personas A/B section (~line 1109).
- Refining Labels: With `CollapsibleSection` now providing its own header, we could remove redundant external labels, further streamlining the UI.
The result? A much cleaner, more manageable workflow detail page. Users can now easily hide persona details they're not actively working with, leading to a smoother experience.
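The component itself lives in React, but the open/closed bookkeeping underneath is just a few lines of plain TypeScript. Here's a minimal sketch of that toggle logic; the `SectionState` shape and helper names are illustrative, not our actual `CollapsibleSection` API, which wraps this state in a React hook:

```typescript
// Tracks which collapsible sections are currently open, keyed by section id.
// (Illustrative sketch; the real component keeps this in React state.)
type SectionState = ReadonlySet<string>;

// Toggle one section open/closed without mutating the previous state.
function toggleSection(open: SectionState, id: string): SectionState {
  const next = new Set(open);
  if (next.has(id)) {
    next.delete(id);
  } else {
    next.add(id);
  }
  return next;
}

// Collapse everything at once, e.g. for a "collapse all personas" action.
function collapseAll(): SectionState {
  return new Set();
}

// Usage: start with the compare section expanded, then toggle others.
let state: SectionState = new Set(["persona-compare"]);
state = toggleSection(state, "expert-personas"); // both sections open
state = toggleSection(state, "persona-compare"); // compare section collapsed
console.log([...state]); // ["expert-personas"]
```

Keeping the state immutable (a fresh `Set` per toggle) is what makes this drop cleanly into a React `useState` setter.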
Giving Beats Their Voice: Enriching Narrative Data
A storytelling AI is only as good as the data it processes. Our 'beats' – the individual narrative segments – need rich, accurate metadata to enable deep analysis. This session involved a crucial data cleanup and enrichment phase for the initial beats of a specific book (3343183a-1936-4679-b7da-538996a2ba14, "inselwerk").
- Character Tags: We went through beats 9, 10, 13, and 25, ensuring all relevant characters were correctly tagged. For instance, 'Sasha' was added to beats 9 and 10, while 'Finn' and 'Nia' joined 'Mara' in beat 25. This ensures our AI can accurately track character arcs and interactions across the narrative.
- Motif Tags: Previously, the motif tags for beats 1-13 were all empty. This was a significant gap. I've now populated these, adding thematic elements that our AI can leverage for deeper structural and emotional analysis.
This meticulous data work is foundational. It's the difference between an AI that merely processes text and one that truly understands the underlying narrative fabric.
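Mechanically, each of these tag fixes boils down to a set union over a beat's existing tags. A sketch of the kind of helper involved (the name and signature are illustrative; in our codebase this ends up as a Prisma update on the beat row):

```typescript
// Merge new tags into a beat's existing tag list, preserving order
// and skipping duplicates. (Illustrative helper, not our exact API.)
function mergeTags(
  existing: readonly string[],
  additions: readonly string[]
): string[] {
  const seen = new Set(existing);
  const merged = [...existing];
  for (const tag of additions) {
    if (!seen.has(tag)) {
      seen.add(tag);
      merged.push(tag);
    }
  }
  return merged;
}

// Beat 25 before the cleanup only credited Mara:
const beat25 = mergeTags(["Mara"], ["Finn", "Nia"]);
console.log(beat25); // ["Mara", "Finn", "Nia"]
```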
Unveiling Insights: The Beat Match Score Analysis
With richer beat data, we could introduce a new analytical capability: the Beat Match Score. This feature provides a quantitative assessment of each beat across five critical dimensions for storytelling:
- Struktur (Structure): How well does the beat contribute to the overall plot?
- Figuren (Characters): Does it develop characters effectively?
- Eskalation (Escalation): Does it build tension or advance the conflict?
- Szenen-Potenzial (Scene Potential): How vivid and engaging is the scene described?
- Motive (Motifs): Does it weave in thematic elements and motifs?
For the first 13 beats, the analysis revealed some interesting insights: Beat 12 scored a perfect 5.0, indicating strong performance across all dimensions. Conversely, Beats 2 and 6 lagged with a score of 3.6, highlighting areas for potential improvement or further development by the writer. This kind of immediate, multi-dimensional feedback is invaluable for refining a narrative.
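The overall match score is simply the mean of the five dimension scores, rounded to one decimal. A sketch of that aggregation (the `BeatScores` shape is an assumption here; the per-dimension scores themselves come out of the LLM analysis):

```typescript
// Five analytical dimensions, each scored 1-5 by the analysis step.
interface BeatScores {
  struktur: number;
  figuren: number;
  eskalation: number;
  szenenPotenzial: number;
  motive: number;
}

// Overall beat match score: mean of the five dimensions, one decimal place.
function beatMatchScore(s: BeatScores): number {
  const sum =
    s.struktur + s.figuren + s.eskalation + s.szenenPotenzial + s.motive;
  return Math.round((sum / 5) * 10) / 10;
}

// A beat strong on every dimension scores a perfect 5.0...
console.log(
  beatMatchScore({ struktur: 5, figuren: 5, eskalation: 5, szenenPotenzial: 5, motive: 5 })
); // 5
// ...while a mix of 3s and 4s lands at 3.6, as beats 2 and 6 did.
console.log(
  beatMatchScore({ struktur: 4, figuren: 3, eskalation: 4, szenenPotenzial: 4, motive: 3 })
); // 3.6
```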
Lessons Learned: The Case of the Quoted Identifier
Not every part of a dev session is about shiny new features; sometimes it's about wrestling with the fundamentals. This session had its own small but critical "pain point" that turned into a valuable lesson:
- The Problem: I was trying to query our PostgreSQL database using raw SQL with a column name like `book_id`.
- The Failure: The queries consistently failed.
- The Realization: Our Prisma schema uses camelCase column names (e.g., `bookId`, `chapterNum`). When interacting with PostgreSQL directly via raw SQL, these camelCase identifiers must be quoted (`"bookId"`); otherwise PostgreSQL folds the unquoted identifier to lowercase (`bookid`), which doesn't match the actual column name.
```sql
-- This will fail if the column is actually named "bookId" in the DB:
-- the unquoted identifier is folded to lowercase, and no such column exists
SELECT * FROM "Book" WHERE book_id = 'some-uuid';

-- This is the correct way to query camelCase column names in raw SQL
SELECT * FROM "Book" WHERE "bookId" = 'some-uuid';
```
Actionable Takeaway: Always remember that Prisma's camelCase column names translate to quoted camelCase identifiers in raw SQL queries against PostgreSQL. A small detail, but one that can stop you dead in your tracks!
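To avoid tripping over this again, a tiny identifier-quoting helper is worth keeping around when building raw queries. Here's a minimal sketch (the helper name is our own invention; note that parameterized placeholders like `$1` only work for values, not identifiers, which is why the quoting has to happen in the query string itself):

```typescript
// Quote a PostgreSQL identifier so camelCase survives: unquoted
// identifiers are folded to lowercase, quoted ones keep their case.
// Embedded double quotes are doubled, per the SQL standard.
function quoteIdent(name: string): string {
  return '"' + name.replace(/"/g, '""') + '"';
}

console.log(quoteIdent("bookId")); // "bookId"
console.log(
  `SELECT * FROM ${quoteIdent("Book")} WHERE ${quoteIdent("bookId")} = $1`
);
// SELECT * FROM "Book" WHERE "bookId" = $1
```

Values still belong in bind parameters; only identifiers (table and column names) need this treatment.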
Looking Ahead: The Road to Smarter Stories
This session was a solid step forward, with collapsible UIs, enriched data, and new analytical power. But the journey continues. My immediate next steps involve:
- Committing and Pushing: Integrating these changes (collapsible personas, clickable workflows) into the main branch.
- Creating an Expert AI Persona: Designing a new, gender-neutral "PhD-level CORS/BigData/AI expert" LLM persona – friendly yet direct – to provide specialized feedback.
- Hallucination Review: Critically reviewing a specific workflow (b53e5b53-9c54-4e36-b85f-5dd5f57e010e) for any LLM-generated hallucinations, ensuring the AI remains grounded in the provided narrative.
- LLM Bias Research: Kicking off research into how LLMs evaluate personas based on factors like name, gender, or race. This is crucial for building an ethical and unbiased tool.
- Beats 14-21 Upgrade: Bringing the next set of narrative beats up to the same structural and data quality as beats 1-13.
- Sasha's Arc: Adding foreshadowing for the character Sasha in beats 3 and 7, setting the stage for the 'Fluestern' incident in beat 9.
- Nyx's Subtle Presence: Integrating Nyx as a subtle background presence in beats 1, 6, 8, and 12, adding depth to the narrative.
Each of these steps brings us closer to a powerful, intelligent, and truly helpful AI companion for writers. It's an exciting time to be building in this space, and I'm looking forward to sharing more insights from future sessions!
```json
{
  "thingsDone": [
    "Implemented collapsible persona sections in workflow UI",
    "Fixed character tags for key beats (9, 10, 13, 25)",
    "Populated motif tags for beats 1-13",
    "Created 5-dimensional beat match score analysis",
    "Verified beat descriptions for beats 1-13"
  ],
  "pains": [
    "Encountered `book_id` vs. `\"bookId\"` issue in raw SQL queries"
  ],
  "successes": [
    "Improved UI clarity and user experience",
    "Enhanced data quality for narrative analysis",
    "Introduced new analytical capability for writers",
    "Learned and documented best practice for Prisma/PostgreSQL raw SQL column naming"
  ],
  "techStack": [
    "Next.js",
    "TypeScript",
    "React",
    "Prisma",
    "PostgreSQL",
    "LLM Integration"
  ]
}
```