nyxcore-systems

From Silent Bots to Streaming Chats: Wiring Up Our LLM Discussion Service

Dive into the trenches of a recent development session, where we tackled stubborn runtime errors, wired up our LLM discussion service with Bring Your Own Key (BYOK) support, and mastered the art of SSE reconnects in Next.js 14.

Next.js · LLM · Debugging · Authentication · SSE · BYOK · TypeScript · Prisma · NextAuth

Another development session, another round of exhilarating debugging and feature implementation! In our latest sprint (Session 3, dated 2026-02-21), our primary mission was clear: get our LLM discussion partners to actually respond and iron out persistent runtime errors. It was a journey filled with head-scratching moments and triumphant breakthroughs, culminating in a system ready for end-to-end LLM discussion testing.

Let's break down how we moved from a silent application to one poised for vibrant AI conversations.

The Core Challenge: Bringing Our LLM Partners to Life

At the heart of our application is the ability for users to engage in discussions with various LLMs. This involves securely managing API keys (Bring Your Own Key - BYOK), streaming responses, and ensuring the conversation flows naturally. This session was all about making those pieces click.

Securing & Activating LLM Providers with BYOK

One of the biggest hurdles was enabling the LLM discussion service to correctly identify, load, and use provider API keys. Initially, we envisioned a static provider registry, but the reality of BYOK keys – fetched from the database and encrypted – meant a more dynamic approach was needed.

The Problem: Our LLM service was failing with "Provider not registered: anthropic." The static registry was empty, as BYOK keys aren't known at compile-time.

The Solution: We introduced a new resolveProvider() function in src/server/services/discussion-service.ts. This function now intelligently:

  1. Fetches the encrypted API key for the desired provider from the database.
  2. Decrypts it using our crypto.decrypt() utility.
  3. Creates a new provider instance on the fly using createProviderWithKey().

This dynamic resolution ensures that whether we're streaming from a single provider (streamSingleProvider) or orchestrating parallel responses (streamParallelProviders), the LLM service always has a valid, decrypted API key to work with.

typescript
// src/server/services/discussion-service.ts (simplified)
async function resolveProvider(providerName: string): Promise<LLMProvider> {
  // currentUserId comes from the authenticated session (plumbing omitted here)
  const apiKeyRecord = await db.apiKey.findUnique({
    where: { userId_provider: { userId: currentUserId, provider: providerName } },
  });

  if (!apiKeyRecord || !apiKeyRecord.encryptedKey) {
    throw new Error(`API key not found for provider: ${providerName}`);
  }

  const decryptedKey = crypto.decrypt(apiKeyRecord.encryptedKey);
  return createProviderWithKey(providerName, decryptedKey);
}

// Replaced getProvider() with resolveProvider() in streaming functions
async function streamSingleProvider(...) {
  const provider = await resolveProvider(providerName);
  // ... stream logic
}
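The crypto.decrypt() utility referenced above isn't shown in our service code, but here's a minimal sketch of what such a utility could look like, using Node's built-in AES-256-GCM. The key source and the iv:tag:ciphertext storage format are assumptions for illustration, not necessarily what our project uses:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// In practice the key would be derived from an environment secret,
// not generated per-process; this is purely illustrative.
const KEY = randomBytes(32);

function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // GCM standard nonce length
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // must be read after final()
  // Pack iv, auth tag, and ciphertext into one storable string.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(":");
}

function decrypt(payload: string): string {
  const [iv, tag, ciphertext] = payload.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM is a good default here because a tampered ciphertext (or wrong key) fails loudly at decipher.final() instead of silently yielding a garbage API key.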

The Dance of SSE: Ensuring Continuous Conversation

Our LLM responses are streamed using Server-Sent Events (SSE), providing a real-time chat experience. However, we hit a snag: after an initial connection, the SSE stream would often close, and subsequent user messages wouldn't trigger new LLM responses.

The Problem: When a user sent a follow-up message via a continue mutation, the existing SSE stream was already closed. The processDiscussion() function, which orchestrates LLM calls, wasn't being re-triggered.

The Solution: We implemented an SSE reconnect mechanism on the client-side. In src/app/(dashboard)/dashboard/discussions/[id]/page.tsx:

  1. A new sseKey state counter was added.
  2. Whenever the continue mutation (sending a new user message) successfully completes, sseKey is incremented.
  3. The SSE URL now includes this key as a query parameter (e.g., ?k=${sseKey}).
  4. Our useSSE hook, which depends on the URL, detects this change, forces a disconnect of the old stream, and establishes a new SSE connection. This new connection, in turn, triggers processDiscussion() on the server, which then sees the latest user message and prompts the LLM for a response.

We also added a crucial "needs response" guard within processDiscussion() to ensure the LLM is only called if the last message in the discussion is from the user.

typescript
// src/app/(dashboard)/dashboard/discussions/[id]/page.tsx (simplified)
const [sseKey, setSseKey] = useState(0);

// ... inside continue mutation success handler
onSuccess: () => {
  setSseKey(prev => prev + 1); // Force SSE reconnect
  // ... other logic
}

const sseUrl = `/api/v1/events/discussions/${id}?k=${sseKey}`;
useSSE(sseUrl, true, handleSseMessage);
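The "needs response" guard itself can be distilled into a small pure helper. This is a sketch; the Message shape and role values are assumptions rather than our actual Prisma types:

```typescript
// Hypothetical message shape; the real project uses its Prisma model.
type Message = { role: "user" | "assistant"; content: string };

// Returns true only when the latest message came from the user.
// Without this check, every SSE reconnect would re-trigger the LLM
// and produce duplicate assistant replies.
function needsResponse(messages: Message[]): boolean {
  const last = messages[messages.length - 1];
  return last !== undefined && last.role === "user";
}
```

processDiscussion() can then bail out early when needsResponse() is false, making reconnects idempotent.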

Navigating Next.js 14 & Authentication Quirks

It wouldn't be a proper dev session without encountering some framework-specific surprises!

GitHub OAuth: The Elusive Email Address

The Problem: During GitHub OAuth, we frequently encountered Argument 'email' is missing errors. NextAuth v5's PrismaAdapter, when creating a new user, expected an email field. While our profile() callback attempted to provide a fallback email, the adapter primarily uses the raw OAuth user info, which often returns null for email if the user hasn't made it public.

The Workaround: The simplest and most robust solution was to make the email field optional (String?) in our prisma/schema.prisma User model.

prisma
// prisma/schema.prisma
model User {
  id            String    @id @default(cuid())
  name          String?
  email         String?   @unique // <-- Changed from String to String?
  emailVerified DateTime?
  image         String?
  accounts      Account[]
  sessions      Session[]
  createdAt     DateTime  @default(now())
  updatedAt     DateTime  @updatedAt
  role          UserRole  @default(USER)
}

After this, we ran npx prisma db push to apply the schema change. This resolved the immediate error and allowed users without public GitHub emails to sign up.

Next.js 14 Dynamic Routes: use(params) vs. Destructuring

The Problem: In our dynamic routes for discussions and workflows, we initially tried using use(params) to access route parameters in client components (e.g., src/app/(dashboard)/dashboard/discussions/[id]/page.tsx). This resulted in An unsupported type was passed to use(): [object Object].

The Lesson Learned: The use(Promise) pattern is a feature slated for Next.js 15+. In Next.js 14, params is a plain object, not a Promise.

The Solution: A straightforward destructuring was all that was needed:

tsx
// src/app/(dashboard)/dashboard/discussions/[id]/page.tsx
type DiscussionPageProps = {
  params: { id: string };
};

export default function DiscussionPage({ params }: DiscussionPageProps) {
  const { id } = params; // <-- Correct way for Next.js 14
  // ... rest of component
}

We applied the same fix to src/app/(dashboard)/dashboard/workflows/[id]/page.tsx. Interestingly, for server route handlers (src/app/api/v1/events/discussions/[id]/route.ts), the await params pattern is correct for accessing parameters within an async function. Context matters!
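The reason await params is safe in route handlers comes down to a plain JavaScript rule: awaiting a non-Promise value resolves to the value itself. A tiny sketch (the helper name is illustrative) shows the same code handling both the Next.js 14 shape (plain object) and the Next.js 15 shape (Promise):

```typescript
// Accepts params as either a plain object or a Promise of one.
async function readId(params: { id: string } | Promise<{ id: string }>): Promise<string> {
  // For a plain object, this await is effectively a no-op.
  const { id } = await params;
  return id;
}

(async () => {
  const fromObject = await readId({ id: "a1" });            // Next.js 14 style
  const fromPromise = await readId(Promise.resolve({ id: "b2" })); // Next.js 15 style
  console.log(fromObject, fromPromise);
})();
```

This is also why the await params pattern is the more forward-compatible choice in async server code, while client components in Next.js 14 must stick to destructuring.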

What's Next? The Moment of Truth!

With these critical fixes and new wiring in place, our application is finally ready for its first end-to-end LLM discussion tests. The immediate next steps involve:

  1. Testing LLM discussion end-to-end: Creating a new discussion and verifying Anthropic responds via streaming.
  2. Testing the continue flow: Sending follow-up messages and confirming SSE reconnects and LLM responses.
  3. Exploring advanced modes: Testing parallel and consensus discussion modes.
  4. Broader system checks: Verifying workflow, wardrobe, and memory CRUD operations.
  5. Minor housekeeping: Adding missing icons, creating a proper error page for /api/auth/error, and ensuring a clean production build (npm run build).

This session was a significant leap forward, transforming a collection of components into a truly interactive LLM discussion platform. The journey continues, but the core conversational engine is now humming!