nyxcore-systems
8 min read

Building an AI Brain Trust: From Basic Chat to Context-Aware Multi-Agent Conversations

We just wrapped a marathon dev session transforming a simple discussion feature into a sophisticated AI chat UX with multi-agent consensus, real-time context, and dynamic stream visualization.

AILLMFullstackTypeScriptReactNext.jsPrismaSSEUX

It was late, the kind of late where the only company you have is your monitor's glow and a half-empty coffee mug. But the energy was high. We'd just pushed through a significant chunk of work, taking our application's discussions feature from a foundational concept to a truly modern, AI-powered conversational experience. The goal? To build not just a chat, but an interactive AI brain trust – complete with markdown rendering, auto-continue, project context injection, persona switching, and a mesmerizing stream flow visualization.

And I'm thrilled to report: we're there. All core features are implemented, type-checking clean, and the dev server is humming along. We even have a test discussion (9c398d16-cbfd-4eba-96ee-2bed16c34a97) where Kimi and OpenAI, playing as the "NyxCore" persona, are having a lively 15-message debate, entirely in German, without a single provider name leaking into the output. That's a win in my book.

The Vision: A Modern AI Dialogue

Our ambition was simple yet challenging: create an AI chat UX that feels intuitive, powerful, and deeply integrated. This meant moving beyond basic text responses to a rich, interactive environment where the AI agents could truly collaborate and understand their operational context.

Markdown Magic & Responsive UI

First up was the chat itself. We needed a robust way to display AI responses, especially code snippets and formatted text.

  • src/components/discussion/chat-message.tsx: This became our reusable chat bubble. User messages are plain, while assistant messages get full markdown treatment. We also baked in hover-to-copy functionality for individual messages and a subtle streaming indicator.
  • src/components/markdown-renderer.tsx: The heavy lifting for markdown came here. We integrated rehype-highlight and highlight.js for beautiful, syntax-highlighted code blocks. A small but mighty detail: a copy button for code blocks, which required a neat little React trick (more on that in "Lessons Learned"). A compact prop ensures it renders nicely within the chat context.
tsx
// Simplified snippet from src/components/markdown-renderer.tsx
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import { materialDark } from 'react-syntax-highlighter/dist/esm/styles/prism';
import { useRef, useState } from 'react';

const CodeBlock = ({ className, children }) => {
  const codeRef = useRef<HTMLElement>(null);
  const [copied, setCopied] = useState(false);

  // ... (copy logic using codeRef.current?.innerText)

  return (
    <pre ref={codeRef} className={className}>
      <button onClick={handleCopy} className="copy-button">
        {copied ? 'Copied!' : 'Copy'}
      </button>
      <SyntaxHighlighter style={materialDark} language={language}>
        {children}
      </SyntaxHighlighter>
    </pre>
  );
};

// ... (used within markdown-renderer for `code` elements)

The Brain Trust: Discussion Engine Overhaul

This was the core of the AI intelligence. We completely rewrote src/server/services/discussion-service.ts to handle complex multi-agent interactions and context management.

  • Context is King: AI without context is just a parrot. We introduced projectId and language to our Discussion model. Now, every AI interaction starts with a buildSystemPrompt() that includes:
    • The chosen persona's instructions.
    • The detected language (from the first user message).
    • Crucially, project context: descriptions, blog posts, and consolidation patterns from the linked project are injected, giving the AI a deep understanding of its operational domain.
  • Multi-Agent Consensus: For our "consensus" mode, we wanted the AIs to feel like a unified team, not a collection of disparate providers.
    • Neutral Identity: Gone are the [ANTHROPIC]: or --- OPENAI --- prefixes. AIs now participate as "Alpha," "Beta," "Gamma," or the chosen NyxCore persona, fostering a sense of integrated intelligence.
    • Seamless Language: Language detection (de/en/fr/es) ensures the AI responds appropriately from the start.
  • Auto-Pilot Mode (autoRound()): This is where the magic of continuous AI dialogue happens. The autoRound() function allows the system to generate a response from the next provider in rotation without requiring user input. This powers our "auto-continue" feature, letting AIs debate amongst themselves.
  • Cancellation: Proper AbortSignal support means we can cleanly stop streaming responses if the user decides to intervene or close the discussion.

Real-time Streaming with SSE

To make the auto-continue and live AI responses feel truly dynamic, we leaned into Server-Sent Events (SSE).

  • src/app/api/v1/events/discussions/[id]/route.ts: This endpoint now supports a ?auto=1 query parameter, enabling the auto-continue loop. We also integrated AbortController with ReadableStream.cancel() for robust cancellation.

The Conductor's Baton: tRPC Router

Our tRPC router (src/server/trpc/routers/discussions.ts) got a significant upgrade to support the new features:

  • projectId is now part of the create input, linking discussions to projects from inception.
  • project relation is included in get queries, making project context easily accessible.
  • A new updatePersona mutation allows users to switch the active AI persona mid-chat, dynamically altering the AI's approach and tone.

The Stage: Discussion Detail Page

The frontend experience on src/app/(dashboard)/dashboard/discussions/[id]/page.tsx was completely reimagined.

  • Redesigned Header: We moved from flat badges to a more functional two-row layout:
    • Row 1: Title, mode, live status, markdown download, and the crucial Auto play/stop controls.
    • Row 2: The star of the show – our StreamFlow visualization and project context indicators.
  • StreamFlow Component: This visualizer is a game-changer. Imagine KIMI ··· > NyxCore < ··· OPENAI – with animated dots pulsing during streaming, the active provider highlighted, and the central persona (e.g., NyxCore) glowing. It provides instant visual feedback on who's "thinking" and who's contributing. Clicking the persona in the flow opens a dropdown, allowing mid-chat persona switching.
  • Auto-Continue Loop: A prominent "Play" button starts the AI debate, "Stop" halts it, and sending a manual message also pauses the auto-loop, giving the user full control.
  • saveAsMarkdown(): A simple but powerful feature to download the entire discussion as a .md file for offline review or sharing.
  • Auto-Scroll: During streaming, the chat now intelligently auto-scrolls when near the bottom, ensuring the user always sees the latest AI output without manual intervention.

Lessons Learned (from the "Pain Log")

No significant development session is complete without hitting a few snags. These are the moments where real learning happens.

  1. DOM Manipulation vs. React Refs for Reliability:

    • The Problem: I initially tried to implement the code block copy functionality using document.querySelector("pre:hover") to grab the content. This failed spectacularly. querySelector often returns the wrong element when multiple code blocks are visible, and innerText isn't directly available on a generic Element type. It was brittle and unreliable.
    • The Fix: The correct React way is to use useRef. By attaching a useRef<HTMLPreElement> directly to the <pre> element, we get a direct, reliable reference to the DOM node. Reading preRef.current?.innerText gives us exactly the text we need, every time.
    • Takeaway: For interacting with specific DOM elements in React, useRef is almost always the cleaner, more robust solution compared to global document.querySelector calls. It ensures you're targeting the correct instance of a rendered component.
  2. Schema Sanity Checks and ORM Field Names:

    • The Problem: When loading project context, I was trying to access blogPost.summary and blogPost.publishedAt. My schema, however, defined these fields as excerpt and updatedAt. A classic case of mental model vs. reality.
    • The Fix: A quick glance at the Prisma schema file (schema.prisma) and a correction to blogPost.excerpt and blogPost.updatedAt resolved it.
    • Takeaway: Even seasoned developers make these mistakes. Always double-check your schema or ORM definitions when fetching or mapping data. A few seconds verifying field names can save minutes of debugging "undefined" errors.
  3. The Importance of ORM Client Refresh:

    • The Problem: After adding projectId to the Discussion model in Prisma, my dev server kept complaining that projectId wasn't a valid field when trying to create a new discussion. The schema was updated, but the application wasn't picking it up.
    • The Fix: The Prisma client was stale. Running npm run db:push (to apply schema changes to the database), npm run db:generate (to regenerate the Prisma client based on the new schema), and then restarting the dev server (npm run dev) immediately resolved the issue.
    • Takeaway: When making schema changes with an ORM like Prisma, always remember the three-step dance: db:push (or db:migrate), db:generate, and a full application restart. The generated client code is what your application interacts with, and it needs to be up-to-date.

What's Next?

While the core functionality is solid, there are always immediate next steps:

  1. Commit all the uncommitted changes – a big one!
  2. Thoroughly test the auto-continue flow end-to-end.
  3. Verify persona switching mid-chat works seamlessly.
  4. Ensure the .md download preserves all formatting.
  5. Confirm project context injection is working as expected.
  6. Start thinking about token/cost tracking and displaying it.
  7. Perhaps a "rounds" counter during auto-play to give a sense of progress.

This session was a huge leap forward. We've laid the groundwork for a truly intelligent and interactive AI assistant, transforming a simple chat into a dynamic platform for collaborative AI intelligence. The journey continues, but for tonight, we celebrate a significant milestone.

json
{
  "thingsDone": [
    "Implemented modern Markdown Chat UX with streaming indicators and copy buttons.",
    "Overhauled Discussion Engine for multi-agent consensus, language detection, and project context injection.",
    "Integrated auto-continue functionality via SSE endpoint.",
    "Redesigned Discussion Detail Page with StreamFlow visualization and interactive controls (auto-play, persona switching, markdown download).",
    "Updated Prisma schema and tRPC router to support new discussion features (projectId, language, persona updates).",
    "Created a test discussion demonstrating multi-agent, persona-driven, context-aware dialogue in German."
  ],
  "pains": [
    "Reliably extracting text from code blocks for copy functionality (resolved with useRef).",
    "Mismatched schema field names when loading project context (resolved by checking schema).",
    "Stale Prisma client after schema changes requiring regeneration and server restart."
  ],
  "successes": [
    "Achieved type-checking clean implementation across all new features.",
    "Successfully implemented neutral participant names for consensus mode.",
    "Developed a visually engaging and informative StreamFlow component.",
    "Created a robust auto-continue mechanism for AI-to-AI dialogue.",
    "Ensured comprehensive project context injection into AI system prompts."
  ],
  "techStack": [
    "TypeScript",
    "React",
    "Next.js",
    "Prisma",
    "PostgreSQL",
    "Redis",
    "tRPC",
    "Server-Sent Events (SSE)",
    "rehype-highlight",
    "highlight.js"
  ]
}