Beyond the Prompt: Building an AI Chat Experience That Flows
Transforming a basic 'discussions' feature into a dynamic, context-aware AI chat experience with markdown, auto-continue, and real-time flow visualization.
The journey from a simple prompt-and-response system to a truly engaging AI conversational interface is filled with fascinating challenges and rewarding breakthroughs. Recently, our team embarked on a mission to elevate our existing "discussions" feature, aiming for an experience that feels less like a command line and more like a fluid, intelligent dialogue. This post dives into the technical decisions, UI innovations, and lessons learned from a recent late-night session that brought this vision to life.
Our goal was ambitious: take the core concept of AI discussions and transform it into a modern AI chat UX, complete with rich markdown rendering, intelligent auto-continuation, deep project context, dynamic persona switching, and a captivating stream flow visualization. After a marathon session, I'm thrilled to report that the core features are not just implemented and type-checking clean, but running smoothly on the dev server, paving the way for a new era of AI interaction.
Let's break down how we got there.
Crafting a Seamless User Experience
A great AI interaction isn't just about smart responses; it's about how those responses are delivered and managed. We focused heavily on the frontend to make the experience intuitive and delightful.
The Markdown Chat UX: Richness in Every Bubble
First up, making the chat visually appealing and functional. We introduced `src/components/discussion/chat-message.tsx` as a reusable component for chat bubbles. Assistant messages now boast full markdown rendering, making code snippets readable and lists digestible, while user messages remain clean and plain. Hovering over a message allows for easy copying, and a subtle streaming indicator keeps the user informed when the AI is still generating.
The secret sauce for beautiful code blocks? An enhanced `src/components/markdown-renderer.tsx`. We integrated `rehype-highlight` and `highlight.js` to bring syntax highlighting to life. Crucially, we added a copy button directly to code blocks, using a `useRef` hook for reliable text extraction – a detail we learned the hard way (more on that later!). A `compact` prop ensures these rich messages fit perfectly within the chat's confines.
The Dynamic Discussion Detail Page: Where AI Comes Alive
This is where the magic truly happens. The discussion detail page underwent a complete redesign:
- Header Reimagined: A two-row layout now houses the discussion title, mode, live status, and options for `.md` download and auto-play controls. The second row is dedicated to the intelligent context and flow visualization.
- The `StreamFlow` Component: This is arguably the most visually striking new feature. Imagine a dynamic `KIMI ··· > NyxCore < ··· OPENAI` display. Animated dots pulse during streaming, the active AI provider is highlighted, and our central `NyxCore` persona glows, signifying its role in orchestrating the conversation. This provides immediate visual feedback on which AI is speaking and how the conversation is being steered.
- Mid-Chat Persona Switching: Clicking the central persona in the `StreamFlow` component now opens a dropdown, allowing users to switch the AI's core identity mid-chat. This is powered by a new `updatePersona` tRPC mutation, enabling incredible flexibility.
- Auto-Continue Loop: A prominent "Play" button initiates an auto-continue loop, allowing the AIs to converse autonomously until "Stop" is pressed or a manual user message is sent. This is fantastic for brainstorming or letting the models explore ideas.
- Markdown Download: A `saveAsMarkdown()` function allows users to download the entire discussion history as a `.md` file, perfect for archiving or sharing.
- Auto-Scroll: During streaming, the chat now intelligently auto-scrolls when near the bottom, ensuring the latest responses are always in view.
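The download itself is simple once the transcript is serialized. Here is a minimal sketch of what a `saveAsMarkdown()`-style serializer might look like — the `Message` shape, heading format, and role labels are illustrative assumptions, not the actual implementation:

```typescript
// Sketch: serialize a discussion transcript to a markdown string.
// The Message shape and heading format are assumptions for illustration.
interface Message {
  role: "user" | "assistant";
  provider?: string; // e.g. "OPENAI" — assumed optional field
  content: string;
}

function discussionToMarkdown(title: string, messages: Message[]): string {
  const header = `# ${title}\n`;
  const body = messages
    .map((m) => {
      const label = m.role === "user" ? "User" : m.provider ?? "Assistant";
      return `## ${label}\n\n${m.content}`;
    })
    .join("\n\n");
  return `${header}\n${body}\n`;
}

// In the browser, the resulting string can be wrapped in a Blob and
// downloaded via a temporary <a download="discussion.md"> element.
```

The heavy lifting is string assembly; the browser-side download is just a `Blob` plus a programmatic anchor click.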
A Smarter Start: The New Discussion Page
Even starting a discussion received an upgrade. The new discussion page now features:
- Project Selector: A clear project selector with folder icons, making it easy to link a discussion to a specific project from the outset.
- Persona Cards: Persona selection is now presented in a clean 2-column grid, displaying both the persona's name and a brief description underneath.
The Brains Behind the Chat: Discussion Engine Overhaul
The frontend is only as good as the backend intelligence. We completely rewrote `src/server/services/discussion-service.ts` to power these new capabilities.
- Unified AI Voice & Context:
  - No Provider Leakage: We eliminated verbose `[ANTHROPIC]:` or `--- OPENAI ---` prefixes from consensus and parallel modes. Instead, we use neutral participant names like `Alpha`, `Beta`, and `Gamma` as a consensus identity, presenting a coherent AI voice.
  - Language Detection: The first user message now triggers language detection (supporting DE, EN, FR, ES), which is then persisted to the `discussion.language` field and injected into all subsequent system prompts.
  - Project Context Loading: Crucially, the system now loads detailed project context (description, blog posts, consolidation patterns) and injects it directly into the system prompt. This allows the AI to respond with deep, relevant knowledge specific to the user's project.
  - Dynamic System Prompt Building: All discussion modes now benefit from a robust `buildSystemPrompt()` function that intelligently combines the selected persona, detected language, and loaded project context.
- Auto-Continue Logic: The `autoRound()` function is the engine for our auto-continue feature. It programmatically generates a response from the next AI provider in rotation without requiring a user message, mimicking a continuous dialogue.
- Robustness & Cancellation: We added `AbortSignal` support, allowing for clean cancellation of ongoing AI streams, crucial for a responsive user experience.
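To make the prompt-layering idea concrete, here is a rough sketch of how a `buildSystemPrompt()`-style function can combine persona, detected language, and project context. All type shapes, field names, and prompt wording below are illustrative assumptions, not the actual service code:

```typescript
// Sketch: layer persona, language, and project context into one system
// prompt. Shapes and wording are assumptions for illustration.
interface Persona {
  name: string;
  instructions: string;
}

interface ProjectContext {
  description: string;
  blogPosts: { title: string; excerpt: string }[];
}

const LANGUAGE_NAMES: Record<string, string> = {
  DE: "German",
  EN: "English",
  FR: "French",
  ES: "Spanish",
};

function buildSystemPrompt(
  persona: Persona,
  language: string,
  project?: ProjectContext,
): string {
  const parts = [
    `You are ${persona.name}. ${persona.instructions}`,
    `Respond in ${LANGUAGE_NAMES[language] ?? "English"}.`,
  ];
  if (project) {
    // Inject project knowledge so answers reference the user's actual work.
    const posts = project.blogPosts
      .map((p) => `- ${p.title}: ${p.excerpt}`)
      .join("\n");
    parts.push(
      `Project context:\n${project.description}\nRelated posts:\n${posts}`,
    );
  }
  return parts.join("\n\n");
}
```

Keeping each concern as a separate string segment makes it easy for every discussion mode to share the same builder while omitting pieces (e.g. no project linked) without special-casing.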
The Plumbing: Schema, SSE, and tRPC
Underpinning these features are critical infrastructure changes.
- Schema Evolution: We updated our `Discussion` model to include `projectId` (an optional foreign key to our `Project` model) and `language` (a string). A new `discussions` relation was added to the `Project` model. Naturally, this required running `db:push` and `db:generate`.
- Real-time Streaming with SSE: Our `src/app/api/v1/events/discussions/[id]/route.ts` endpoint was enhanced to support Server-Sent Events (SSE). A new `?auto=1` query parameter enables the auto-continue loop, and `AbortController` with `ReadableStream.cancel()` ensures clean stream termination.
- tRPC Router Updates: The `src/server/trpc/routers/discussions.ts` router was updated to reflect the new schema and features:
  - `projectId` is now accepted in the `create` input for linking new discussions.
  - The `get` query now includes the `project` relation.
  - The `updatePersona` mutation was added to handle mid-chat persona switching.
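The cancellation path is worth seeing in isolation. Below is a minimal sketch of an abortable token loop — the token source and callback shapes are assumptions for illustration; the real endpoint wires something like this into an SSE `ReadableStream`:

```typescript
// Sketch: an abortable streaming loop. In the real SSE route this would pull
// chunks from a provider SDK; here a plain array stands in for the stream.
async function streamTokens(
  tokens: string[],
  onToken: (token: string) => void,
  signal: AbortSignal,
): Promise<"done" | "aborted"> {
  for (const token of tokens) {
    if (signal.aborted) return "aborted"; // stop cleanly, no dangling work
    onToken(token);
    // Yield to the event loop, as an awaited network chunk would.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return "done";
}
```

In a route handler, the signal can come from `request.signal` (standard on the fetch `Request` object), so a client disconnect or a pressed "Stop" button tears the loop down the same way.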
Lessons Learned: Overcoming Hurdles
No development session is complete without a few head-scratching moments. Here are some of the challenges we faced and how we overcame them:
- Challenge 1: Reliable Code Block Copying
  - Initial Approach: We tried using `document.querySelector("pre:hover")` to grab the content of the hovered code block.
  - Why it Failed: This approach was brittle. It often returned the wrong element if multiple code blocks were visible, and `innerText` isn't reliably available on all `Element` types returned by `querySelector`. It also felt like fighting React's component model.
  - Solution: We embraced React's `useRef` hook, attaching a `useRef<HTMLPreElement>` directly to the `<pre>` element. This allowed us to reliably access `preRef.current?.innerText` for accurate text extraction. A classic case of "work with React, not against it."
- Challenge 2: Schema Field Name Mismatch
  - Problem: While building the project context loader, I initially tried to access `blogPost.summary` and `blogPost.publishedAt`.
  - Why it Failed: A quick check of the Prisma schema revealed the actual field names were `excerpt` (for a summary) and `updatedAt` (for the last update time).
  - Solution: A simple fix to use the correct field names from the schema. This highlights the importance of always double-checking your data model, especially after recent schema changes.
- Challenge 3: Stale Prisma Client
  - Problem: After adding `projectId` to the `Discussion` model and running `db:push`, my Prisma `create` calls were still failing, complaining about the unknown field.
  - Why it Failed: The dev server was caching an old, generated Prisma client. Even after `db:push`, the application wasn't using the latest client.
  - Solution: Running `db:push`, `db:generate`, and then restarting the entire dev server (`npm run dev`) forced the application to load the freshly generated Prisma client. A common pitfall when working with ORMs and schema changes in a hot-reloading environment.
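If you hit the same stale-client symptom, the recovery sequence is short. The script names below assume `package.json` scripts wrapping the Prisma CLI, as in our setup:

```shell
# Push schema changes, regenerate the Prisma client, then restart the dev
# server so the freshly generated client is actually loaded.
npm run db:push
npm run db:generate
npm run dev
```

The restart is the step that's easy to forget: hot reload alone won't reliably swap in the regenerated client.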
What's Next?
With the core functionality in place, our immediate next steps involve thorough testing of the new features:
- End-to-end testing of the auto-continue flow.
- Verifying persona switching mid-chat.
- Ensuring markdown downloads are perfectly formatted.
- Confirming project context is correctly injected and referenced by the AI.
Beyond that, we're considering adding token/cost tracking displays and a "rounds" counter during auto-play to provide even more transparency and control.
This session was a significant leap forward in making our AI discussions truly dynamic and intelligent. We're excited about the possibilities this new foundation unlocks for more intuitive and powerful AI-powered interactions.