nyxcore-systems

Beyond the Prompt: Building an AI Chat Experience That Flows

Transforming a basic 'discussions' feature into a dynamic, context-aware AI chat experience with markdown, auto-continue, and real-time flow visualization.

AI, Chatbot, UX, Frontend, Backend, TypeScript, Next.js, Prisma, SSE, Development

The journey from a simple prompt-and-response system to a truly engaging AI conversational interface is full of fascinating challenges and rewarding breakthroughs. Recently, our team embarked on a mission to elevate our existing "discussions" feature, aiming for an experience that feels less like a command line and more like a fluid, intelligent dialogue. This post dives into the technical decisions, UI innovations, and lessons learned from a recent late-night session that brought this vision to life.

Our goal was ambitious: take the core concept of AI discussions and transform it into a modern AI chat UX, complete with rich markdown rendering, intelligent auto-continuation, deep project context, dynamic persona switching, and a captivating stream flow visualization. After a marathon session, I'm thrilled to report that the core features are not just implemented and passing type checks, but running smoothly on the dev server, paving the way for a new era of AI interaction.

Let's break down how we got there.

Crafting a Seamless User Experience

A great AI interaction isn't just about smart responses; it's about how those responses are delivered and managed. We focused heavily on the frontend to make the experience intuitive and delightful.

The Markdown Chat UX: Richness in Every Bubble

First up, making the chat visually appealing and functional. We introduced src/components/discussion/chat-message.tsx as a reusable component for chat bubbles. Assistant messages now boast full markdown rendering, making code snippets readable and lists digestible, while user messages remain clean and plain. Hovering over a message allows for easy copying, and a subtle streaming indicator keeps the user informed when the AI is still generating.

The secret sauce for beautiful code blocks? An enhanced src/components/markdown-renderer.tsx. We integrated rehype-highlight and highlight.js to bring syntax highlighting to life. Crucially, we added a copy button directly to code blocks, using a useRef hook for reliable text extraction – a detail we learned the hard way (more on that later!). A compact prop ensures these rich messages fit perfectly within the chat's confines.

The Dynamic Discussion Detail Page: Where AI Comes Alive

This is where the magic truly happens. The discussion detail page underwent a complete redesign:

  • Header Reimagined: A two-row layout now houses the discussion title, mode, live status, and options for .md download and auto-play controls. The second row is dedicated to the intelligent context and flow visualization.
  • The StreamFlow Component: This is arguably the most visually striking new feature. Imagine a dynamic KIMI ··· > NyxCore < ··· OPENAI display. Animated dots pulse during streaming, the active AI provider is highlighted, and our central NyxCore persona glows, signifying its role in orchestrating the conversation. This provides immediate visual feedback on which AI is speaking and how the conversation is being steered.
  • Mid-Chat Persona Switching: Clicking on the central persona in the StreamFlow component now opens a dropdown, allowing users to switch the AI's core identity mid-chat. This is powered by a new updatePersona tRPC mutation, enabling incredible flexibility.
  • Auto-Continue Loop: A prominent "Play" button initiates an auto-continue loop, allowing the AIs to converse autonomously until "Stop" is pressed or a manual user message is sent. This is fantastic for brainstorming or letting the models explore ideas.
  • Markdown Download: A saveAsMarkdown() function allows users to download the entire discussion history as a .md file, perfect for archiving or sharing.
  • Auto-Scroll: During streaming, the chat now intelligently auto-scrolls when near the bottom, ensuring the latest responses are always in view.
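
To make the markdown download concrete, here's a sketch of the pure core such an export might use. The `ChatMessage` shape and `messagesToMarkdown` helper are hypothetical, not the actual `saveAsMarkdown()` implementation; the real function would wrap this in a Blob download on the browser side.

```typescript
// Hypothetical message shape; the real discussion model likely has more fields.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Pure core of a saveAsMarkdown()-style export: turn the chat history into a
// single markdown document. The Blob/download plumbing is omitted here.
function messagesToMarkdown(title: string, messages: ChatMessage[]): string {
  const header = `# ${title}\n`;
  const body = messages
    .map((m) => `## ${m.role === "user" ? "User" : "Assistant"}\n\n${m.content}`)
    .join("\n\n");
  return `${header}\n${body}\n`;
}
```

Keeping the transform pure like this also makes it trivial to unit-test, independent of any browser APIs.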

A Smarter Start: The New Discussion Page

Even starting a discussion received an upgrade. The new discussion page now features:

  • Project Selector: A clear project selector with folder icons, making it easy to link a discussion to a specific project from the outset.
  • Persona Cards: Persona selection is now presented in a clean 2-column grid, displaying both the persona's name and a brief description underneath.

The Brains Behind the Chat: Discussion Engine Overhaul

The frontend is only as good as the backend intelligence. We completely rewrote src/server/services/discussion-service.ts to power these new capabilities.

  • Unified AI Voice & Context:

    • No Provider Leakage: We eliminated verbose [ANTHROPIC]: or --- OPENAI --- prefixes from consensus and parallel modes. Instead, consensus mode uses neutral participant names like Alpha, Beta, and Gamma, presenting a single coherent AI voice.
    • Language Detection: The first user message now triggers language detection (supporting DE, EN, FR, ES), which is then persisted to the discussion.language field and injected into all subsequent system prompts.
    • Project Context Loading: Crucially, the system now loads detailed project context (description, blog posts, consolidation patterns) and injects it directly into the system prompt. This allows the AI to respond with deep, relevant knowledge specific to the user's project.
    • Dynamic System Prompt Building: All discussion modes now benefit from a robust buildSystemPrompt() function that intelligently combines the selected persona, detected language, and loaded project context.
  • Auto-Continue Logic: The autoRound() function is the engine for our auto-continue feature. It programmatically generates a response from the next AI provider in rotation without requiring a user message, mimicking a continuous dialogue.

  • Robustness & Cancellation: We added AbortSignal support, allowing for clean cancellation of ongoing AI streams, crucial for a responsive user experience.
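
As a sketch of the shape buildSystemPrompt() might take (the names and prompt wording here are illustrative, not the actual code in discussion-service.ts), the core idea is combining persona, detected language, and optional project context, skipping whatever is missing:

```typescript
// Illustrative inputs; the real service pulls these from the persona config,
// the persisted discussion.language field, and the loaded project context.
interface PromptParts {
  persona: string;          // e.g. the NyxCore persona description
  language: "DE" | "EN" | "FR" | "ES";
  projectContext?: string;  // description, blog posts, consolidation patterns
}

// Sketch of a buildSystemPrompt(): join the present sections with blank lines
// so every discussion mode gets one consistent system prompt.
function buildSystemPrompt({ persona, language, projectContext }: PromptParts): string {
  const sections = [
    persona,
    `Respond in language: ${language}.`,
    projectContext ? `Project context:\n${projectContext}` : null,
  ];
  return sections.filter((s): s is string => s !== null).join("\n\n");
}
```

Because the function is a pure string transform, each mode (consensus, parallel, auto-continue) can call it with the same parts and get identical context.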

The Plumbing: Schema, SSE, and tRPC

Underpinning these features are critical infrastructure changes.

  • Schema Evolution: We updated our Discussion model to include projectId (an optional foreign key to our Project model) and language (a string). A new discussions relation was added to the Project model. Naturally, this required running db:push and db:generate.
  • Real-time Streaming with SSE: Our src/app/api/v1/events/discussions/[id]/route.ts endpoint was enhanced to support Server-Sent Events (SSE). A new ?auto=1 query parameter enables the auto-continue loop, and AbortController with ReadableStream.cancel() ensures clean stream termination.
  • tRPC Router Updates: The src/server/trpc/routers/discussions.ts router was updated to reflect the new schema and features:
    • projectId now accepted in the create input for linking new discussions.
    • The get query now includes the project relation.
    • The updatePersona mutation was added to handle mid-chat persona switching.
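
On the wire, each SSE message the route streams is just a text frame. Here's a minimal encoder as a sketch of what the ReadableStream might enqueue (the helper name and event payload are illustrative, not the actual route code):

```typescript
// Minimal SSE frame encoder. Per the SSE format, a frame is "event:" and
// "data:" lines terminated by a blank line; multi-line data would need one
// "data:" line per line, which the split/map below handles.
function encodeSSE(event: string, data: unknown): string {
  const payload = JSON.stringify(data)
    .split("\n")
    .map((line) => `data: ${line}`)
    .join("\n");
  return `event: ${event}\n${payload}\n\n`;
}
```

The blank line at the end is what tells the browser's EventSource that the frame is complete, which is why clean termination via AbortController matters: a half-written frame would otherwise sit in the client's buffer.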

Lessons Learned: Overcoming Hurdles

No development session is complete without a few head-scratching moments. Here are some of the challenges we faced and how we overcame them:

  • Challenge 1: Reliable Code Block Copying

    • Initial Approach: We tried using document.querySelector("pre:hover") to grab the content of the hovered code block.
    • Why it Failed: This approach was brittle. It often returned the wrong element when multiple code blocks were visible, and querySelector returns a plain Element, which doesn't expose innerText (that lives on HTMLElement), so the types didn't line up either. It also felt like fighting React's component model.
    • Solution: We embraced React's useRef hook, attaching a useRef<HTMLPreElement> directly to the <pre> element. This allowed us to reliably access preRef.current?.innerText for accurate text extraction. A classic case of "work with React, not against it."
  • Challenge 2: Schema Field Name Mismatch

    • Problem: While building the project context loader, I initially tried to access blogPost.summary and blogPost.publishedAt.
    • Why it Failed: A quick check of the Prisma schema revealed the actual field names were excerpt (for a summary) and updatedAt (for the last update time).
    • Solution: A simple fix to use the correct field names from the schema. This highlights the importance of always double-checking your data model, especially after recent schema changes.
  • Challenge 3: Stale Prisma Client

    • Problem: After adding projectId to the Discussion model and running db:push, my Prisma create calls were still failing, complaining about the unknown field.
    • Why it Failed: The dev server was caching an old, generated Prisma client. Even after db:push, the application wasn't using the latest client.
    • Solution: Running db:push, db:generate, and then restarting the entire dev server (npm run dev) forced the application to load the freshly generated Prisma client. A common pitfall when working with ORMs and schema changes in a hot-reloading environment.
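
To make the fix from Challenge 1 concrete, here's the core of the ref-based pattern, with the DOM node modeled as a minimal interface so the logic stands alone. This is a hypothetical sketch: in the actual component the ref is a useRef&lt;HTMLPreElement&gt;(null) attached to the rendered &lt;pre&gt;.

```typescript
// Minimal stand-in for the DOM node; in the component this is the
// HTMLPreElement that useRef<HTMLPreElement>(null) points at.
interface PreLike { innerText: string }

// The copy handler's core: read the text of *this* block's <pre> via the ref,
// instead of guessing which block is meant with document.querySelector("pre:hover").
function extractCodeText(preRef: { current: PreLike | null }): string | null {
  return preRef.current?.innerText ?? null;
}
```

Because each rendered code block owns its own ref, the handler can never grab a neighboring block's text, no matter how many are on screen.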

What's Next?

With the core functionality in place, our immediate next steps involve thorough testing of the new features:

  1. End-to-end testing of the auto-continue flow.
  2. Verifying persona switching mid-chat.
  3. Ensuring markdown downloads are perfectly formatted.
  4. Confirming project context is correctly injected and referenced by the AI.

Beyond that, we're considering adding token/cost tracking displays and a "rounds" counter during auto-play to provide even more transparency and control.

This session was a significant leap forward in making our AI discussions truly dynamic and intelligent. We're excited about the possibilities this new foundation unlocks for more intuitive and powerful AI-powered interactions.