nyxcore-systems

Beyond the Basic Chat: Engineering a Multi-Persona, Context-Aware AI Discussion Platform

Dive into the journey of transforming a basic discussion feature into a sophisticated AI chat UX, complete with multi-persona interactions, real-time streaming, dynamic project context, and a visual stream flow.

AI · LLMs · NextJS · TypeScript · UX · Realtime · Prisma · SSE · WebDevelopment · Fullstack

Late last night, after a marathon coding session, we hit a significant milestone. What started as a modest "discussions" feature has now evolved into a cutting-edge AI chat experience. Our goal was ambitious: to build a modern, interactive AI platform that not only renders rich markdown but also understands context, allows persona switching, facilitates multi-AI consensus, and visualizes the flow of thought in real-time. And I'm thrilled to report: it's all implemented, type-checked, and running smoothly on the dev server.

Let's unpack the journey and the exciting features we've shipped.

The Vision: An AI Conversation Reinvented

Imagine a discussion where AI agents aren't just spitting out responses, but genuinely collaborating, building on context, and even adopting different personas to explore ideas from multiple angles. That was the core vision. We wanted to move beyond simple prompt-response and create an environment where users could orchestrate intelligent conversations, whether it's brainstorming with a "NyxCore" persona or getting a critical review from a "Skeptic" AI.

This meant tackling several key areas: a rich user interface, a robust backend for AI orchestration, and a seamless real-time communication layer.

Crafting a Dynamic User Experience

The front-end was crucial for bringing this vision to life. We needed an interface that felt intuitive, modern, and powerful.

Markdown Magic in Every Bubble

First up was making the chat itself shine. We developed a reusable ChatMessage component (src/components/discussion/chat-message.tsx) that intelligently renders AI responses with full markdown, while keeping user inputs plain. This means code blocks, lists, and rich text are beautifully displayed.

To elevate the experience further, we enhanced our MarkdownRenderer (src/components/markdown-renderer.tsx). This involved:

  • Integrating rehype-highlight and highlight.js for stunning, syntax-highlighted code blocks.
  • Adding a convenient copy button to every code block, allowing users to effortlessly grab code snippets.
  • Introducing a compact prop to ensure chat messages fit perfectly within the UI without overwhelming the user.

The Pulse of the Conversation: StreamFlow Visualization

One of the most engaging features is the StreamFlow component on the discussion detail page. This visualizes the AI interaction in real-time: KIMI ··· > NyxCore < ··· OPENAI.

  • Animated dots pulse to indicate active streaming.
  • The currently active AI provider is highlighted.
  • The central persona (e.g., NyxCore) glows, signifying the core identity guiding the conversation.

This isn't just eye candy; it provides immediate feedback on which AI is contributing and which persona is influencing the output. What's more, clicking the glowing persona opens a dropdown, allowing users to switch the AI's persona mid-chat – a powerful tool for dynamic exploration.
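The rendering itself is animated JSX, but the underlying label logic is simple enough to sketch. The shape below is illustrative only (the type and function names are hypothetical, not the component's real API): given the provider list, the central persona, and whichever provider is currently streaming, produce the `KIMI ··· > NyxCore < ··· OPENAI` style label, marking the active side.

```typescript
// Hypothetical sketch of the StreamFlow label logic. The real component
// renders this as animated JSX with pulsing dots; here the "active"
// provider is simply wrapped in brackets.
type FlowState = {
  providers: string[];   // e.g. ["KIMI", "OPENAI"]
  persona: string;       // e.g. "NyxCore"
  active: string | null; // provider currently streaming, if any
};

function flowLabel({ providers, persona, active }: FlowState): string {
  const [left, right] = providers;
  const mark = (p: string) => (p === active ? `[${p}]` : p);
  return `${mark(left)} ··· > ${persona} < ··· ${mark(right)}`;
}
```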

Seamless Interactions and Control

We also built in features for effortless interaction:

  • Auto-Scroll: During streaming, the chat automatically scrolls to the bottom, keeping the latest messages in view.
  • Auto-Continue Loop: A prominent "Play" button initiates an auto-continue mode, where AI agents generate responses in rotation without user input. A "Stop" button or a manual user message halts this loop.
  • Markdown Download: A "Download .md" button allows users to export the entire discussion, preserving its rich formatting.
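The export logic behind "Download .md" can be sketched as a pure function, assuming a simplified message shape (the real model carries more fields). AI markdown is kept verbatim so formatting survives the export:

```typescript
// Sketch: serialize a discussion to a single markdown document.
// The message shape here is an assumption for illustration.
type Message = { role: "user" | "assistant"; author: string; content: string };

function discussionToMarkdown(title: string, messages: Message[]): string {
  const body = messages
    .map((m) => `### ${m.author} (${m.role})\n\n${m.content}`)
    .join("\n\n---\n\n");
  return `# ${title}\n\n${body}\n`;
}
```

In the browser, the resulting string goes into a `Blob` of type `text/markdown`, handed to a temporary `<a download>` click.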

The Brains Behind the Operation: Discussion Engine Overhaul

The real intelligence happens on the server, where our discussion-service.ts underwent a complete rewrite to support these advanced features.

Context is King: Project-Aware AI

A key differentiator is the ability for AI discussions to be deeply rooted in project context.

  • Project Integration: We updated our Discussion and Project schemas, adding projectId to link discussions directly to a project.
  • Dynamic Language Detection: The first user message now triggers language detection (supporting de/en/fr/es), which is then persisted and injected into all subsequent system prompts, ensuring the AI communicates in the user's preferred language.
  • Rich Context Loading: Our buildSystemPrompt() function now dynamically loads project descriptions, relevant blog posts, and consolidation patterns, feeding this crucial information into the AI's system prompt. This means the AI understands the project's goals, existing content, and operational style.
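A minimal sketch of how such a buildSystemPrompt() might assemble persona, detected language, and project context into one prompt. The field names `description` and `excerpt` follow the post; the overall type and composition are assumptions, not the service's actual signature:

```typescript
// Hedged sketch of system-prompt assembly. Ordering and wording are
// illustrative; the real service also injects consolidation patterns.
type PromptContext = {
  persona: string;
  language: "de" | "en" | "fr" | "es";
  project?: { name: string; description: string };
  recentPosts?: { title: string; excerpt: string }[];
};

function buildSystemPrompt(ctx: PromptContext): string {
  const parts = [
    `You are ${ctx.persona}.`,
    `Always answer in language: ${ctx.language}.`,
  ];
  if (ctx.project) {
    parts.push(`Project "${ctx.project.name}": ${ctx.project.description}`);
  }
  for (const post of ctx.recentPosts ?? []) {
    parts.push(`Related post "${post.title}": ${post.excerpt}`);
  }
  return parts.join("\n\n");
}
```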

Orchestrating Intelligence: Consensus and Auto-Round

To facilitate multi-AI interactions and seamless auto-continuation, we implemented:

  • Neutral Participant Names: In consensus mode (where multiple AIs contribute to a single response), we replaced raw API provider names (e.g., [ANTHROPIC]: ...) with neutral identities like "Alpha," "Beta," and "Gamma." This cleans up the output and focuses on the content, not the provider.
  • Intelligent System Prompts: All AI modes now receive properly constructed system prompts, encompassing the selected persona, detected language, and rich project context, ensuring coherent and relevant responses.
  • autoRound() Function: This new function enables the AI to generate a response from the next provider in rotation without requiring a user message, powering our auto-continue feature.
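Two pieces of this orchestration are easy to show in miniature, with assumed helper names: assigning neutral identities in order of appearance, and the round-robin selection that autoRound() relies on:

```typescript
// Sketch only: the production service wraps this logic around streaming calls.
const NEUTRAL = ["Alpha", "Beta", "Gamma"];

// Map raw provider IDs to neutral identities, in order of appearance.
function neutralNames(providers: string[]): Record<string, string> {
  return Object.fromEntries(
    providers.map((p, i) => [p, NEUTRAL[i] ?? `AI-${i + 1}`])
  );
}

// Pick the next provider in rotation after the one that answered last.
function nextProvider(providers: string[], last: string | null): string {
  if (last === null) return providers[0];
  return providers[(providers.indexOf(last) + 1) % providers.length];
}
```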

Real-time Data Flow and Control

For the streaming experience, we leveraged Server-Sent Events (SSE) via src/app/api/v1/events/discussions/[id]/route.ts.

  • The ?auto=1 query parameter triggers the auto-continue loop.
  • Crucially, we integrated AbortController with ReadableStream.cancel() for clean and efficient cancellation of AI streams, allowing the "Stop" button to work instantly.
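The cancellation wiring can be sketched as follows, assuming the Web Streams API (available in Next.js route handlers and in Node 18+). The helper name and callback shape are illustrative: when the client disconnects, `ReadableStream.cancel()` fires, and the `AbortController` tears down the upstream AI request.

```typescript
// Sketch: tie an AbortController to the SSE ReadableStream so a client
// disconnect (or the Stop button) aborts the in-flight AI stream.
function sseStream(
  run: (signal: AbortSignal, emit: (data: string) => void) => Promise<void>
) {
  const ac = new AbortController();
  return new ReadableStream({
    start(controller) {
      const emit = (data: string) =>
        controller.enqueue(new TextEncoder().encode(`data: ${data}\n\n`));
      run(ac.signal, emit)
        .catch(() => {}) // aborts surface as rejections; swallow them
        .finally(() => {
          try { controller.close(); } catch { /* already cancelled */ }
        });
    },
    cancel() {
      ac.abort(); // client went away: stop the AI stream too
    },
  });
}
```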

Under the Hood: Schema & Backend Integration

These features required foundational changes:

  • Prisma Schema Updates: We added projectId (optional FK to Project) and language (string) to the Discussion model, along with a discussions relation to the Project model.
  • tRPC Router Enhancements: Our discussions.ts tRPC router now supports projectId in the create input, includes project relations in get queries, and features a new updatePersona mutation for mid-chat persona switching.
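In schema terms, the additions described above look roughly like this (field names from this post; all pre-existing model fields elided):

```prisma
model Discussion {
  id        String   @id @default(cuid())
  language  String   @default("en")
  projectId String?
  project   Project? @relation(fields: [projectId], references: [id])
  // ...existing fields...
}

model Project {
  id          String       @id @default(cuid())
  discussions Discussion[]
  // ...existing fields...
}
```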

Navigating the Journey: Challenges & Lessons Learned

No development journey is without its bumps. Here are a few "pain points" that turned into valuable lessons:

DOM Manipulation vs. React Refs for Code Copy

Challenge: Initially, I tried a clever CSS selector (document.querySelector("pre:hover")) to grab the text from a code block for the copy button. This failed because querySelector returns the first matching element, not necessarily the one being hovered, and innerText isn't directly available on the generic Element type.

Lesson: In React, for direct DOM access, useRef is almost always the right answer. By attaching a useRef<HTMLPreElement> directly to the <pre> element, we gained reliable access to its content (preRef.current?.innerText), ensuring the copy button always works for the correct code block.
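The fix can be illustrated DOM-free. The helper below is hypothetical (the real component uses `useRef<HTMLPreElement>(null)` and reads `preRef.current?.innerText` in its click handler), but it captures the essential point: each copy handler is bound to its own block's ref, never to a global selector.

```typescript
// Minimal stand-in for React's ref object and the <pre> element,
// so the per-block binding can be shown without a DOM.
type PreLike = { innerText: string } | null;

function makeCopyHandler(
  preRef: { current: PreLike },
  copy: (text: string) => void // e.g. navigator.clipboard.writeText
) {
  return () => {
    const text = preRef.current?.innerText;
    if (text !== undefined) copy(text);
  };
}
```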

Schema Sync: Knowing Your Models

Challenge: When loading project context, I mistakenly tried to access blogPost.summary and blogPost.publishedAt.

Lesson: Always double-check your Prisma schema! The actual fields were excerpt (not summary) and updatedAt (not publishedAt). A quick glance at the schema.prisma file would have saved a few minutes of head-scratching.

The Infamous Prisma Client Refresh

Challenge: After making schema changes and running db:push and db:generate, I found my Prisma client was still using the old schema, leading to errors when trying to set projectId.

Lesson: The dev server keeps the previously generated Prisma client in memory, so schema changes don't take effect until the process loads the regenerated client. The reliable fix for this common pitfall is to run npx prisma db push and npx prisma generate explicitly, then restart your development server so the application picks up the freshly generated client.

What's Next?

With the core features locked in, the immediate next steps involve thorough end-to-end testing of the auto-continue flow, persona switching, and markdown downloads. Beyond that, we're already eyeing enhancements like:

  • Adding token and cost tracking displays.
  • Implementing a "rounds" counter during auto-play for better visibility.

This session has brought the discussion feature to a whole new level, transforming it into an intelligent, interactive, and highly contextual AI playground. I'm excited to see how this platform empowers users to explore ideas and collaborate with AI in novel ways.