nyxcore-systems
9 min read

Bringing AI to Life: A Debugging Odyssey to Get LLMs Talking

From silent AI partners to real-time conversations, this post details the trenches of wiring up an LLM discussion system, battling Next.js quirks, and making BYOK keys sing.

Next.js · LLM · Authentication · SSE · Debugging · TypeScript · Prisma · BYOK · NextAuth

Building an application that leverages Large Language Models (LLMs) isn't just about calling an API; it's about orchestrating a symphony of authentication, real-time communication, and robust data handling. Our latest development session was a deep dive into precisely that: getting our LLM discussion system to finally talk. We weren't just fixing bugs; we were breathing life into silent AI partners.

The goal was clear: squash lingering runtime errors and, most critically, wire up the LLM discussion service so that chat partners actually respond. We needed to move from a static UI to a dynamic, interactive experience where users could bring their own LLM keys (BYOK) and engage in real-time conversations.

The Journey to a Conversational AI

This session was about connecting the dots. We had the database, the UI, and the concept, but the actual flow of messages from user to LLM and back was broken in several places. It was a classic case of "the pieces are there, but they're not talking."

Let's break down the key challenges we faced and the solutions that brought our AI discussion feature to life.


Lessons from the Trenches: When Things Go Sideways

Every developer knows the "pain log" – that list of unexpected hurdles that consume hours. For us, these were the critical turning points that led to significant architectural and code adjustments.

1. The Elusive Email: GitHub OAuth & NextAuth v5

The Problem: We were using GitHub OAuth for user authentication, but new users signing up without a public email address on GitHub were hitting a wall. Our prisma/schema.prisma User model expected an email field, but GitHub often returns null for this. We tried to patch it with NextAuth's profile() callback, which allowed us to construct a fallback email like <id>+<login>@users.noreply.github.com.
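
That attempted patch looked roughly like this (a conceptual sketch of our NextAuth v5 config; PrismaAdapter wiring omitted for brevity):

typescript
// auth.ts (conceptual): the profile() fallback we tried first
import NextAuth from 'next-auth';
import GitHub from 'next-auth/providers/github';

export const { handlers, auth } = NextAuth({
  // (PrismaAdapter wiring omitted for brevity)
  providers: [
    GitHub({
      profile(profile) {
        return {
          id: String(profile.id),
          name: profile.name ?? profile.login,
          // Fallback for accounts with no public email
          email:
            profile.email ??
            `${profile.id}+${profile.login}@users.noreply.github.com`,
          image: profile.avatar_url,
        };
      },
    }),
  ],
});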

The Discovery: Despite our profile() callback generating a valid email, the PrismaAdapter for NextAuth v5 doesn't use the return value of the profile() callback for user creation. Instead, it directly uses the raw OAuth user info provided by GitHub. If that raw info has email: null, the adapter tries to create a user with a null email, and Prisma (expecting String) throws an error.

The Solution: The most straightforward fix, given NextAuth v5's behavior, was to make the email field optional in our Prisma schema.

prisma
// prisma/schema.prisma
model User {
  id            String    @id @default(cuid())
  name          String?
  email         String?   @unique // The crucial change: String?
  emailVerified DateTime?
  image         String?
  accounts      Account[]
  sessions      Session[]
  createdAt     DateTime  @default(now())
  updatedAt     DateTime  @updatedAt
  role          Role      @default(USER)
}

After running npx prisma db push, our GitHub OAuth flow finally allowed users without public emails to sign up.

Lesson Learned: Always double-check how your authentication adapter interacts with raw OAuth data versus custom profile callbacks. Sometimes, the adapter's internal logic overrides your custom transformations. When dealing with external providers, be prepared for missing or optional data and model your database accordingly.

2. Next.js 14 Dynamic Routes: use(params) is Not Your Friend (Yet)

The Problem: In our dynamic routes for discussions and workflows (e.g., /dashboard/discussions/[id]), we instinctively reached for use(params) to extract the id in our client components. This felt like the "React Server Components" way.

The Discovery: This immediately threw an error: An unsupported type was passed to use(): [object Object]. Turns out, in Next.js 14, params is a plain JavaScript object, not a Promise. The use(Promise) pattern is a feature slated for Next.js 15+ (or experimental versions), not the standard for 14.

The Solution: The good old-fashioned destructuring works perfectly:

typescript
// src/app/(dashboard)/dashboard/discussions/[id]/page.tsx
interface DiscussionPageProps {
  params: { id: string };
}

export default function DiscussionPage({ params }: DiscussionPageProps) {
  const { id } = params; // Simple destructuring, no `use()` needed
  // ... rest of your component logic
}

Lesson Learned: Stay vigilant about framework version-specific features. What might be idiomatic in an upcoming version or experimental branch could be a source of frustration in your current stable release. Always refer to the documentation for your specific Next.js version regarding server components and client component interactions.
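
For contrast, here's what the Next.js 15+ pattern looks like, where params really is a Promise (shown for comparison only; this is not what we shipped on 14):

typescript
// Next.js 15+ (for comparison only): params arrives as a Promise,
// so use() is the right tool in a client component
'use client';
import { use } from 'react';

interface DiscussionPageProps {
  params: Promise<{ id: string }>;
}

export default function DiscussionPage({ params }: DiscussionPageProps) {
  const { id } = use(params); // unwraps the Promise in Next.js 15+
  // ... rest of your component logic
}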

3. The Silent Partner: Wiring Up BYOK LLM Providers

The Problem: Our LLM discussion service was designed to use Bring Your Own Key (BYOK) providers, allowing users to select their preferred LLM (e.g., Anthropic) and provide their API key. However, when a discussion started, we'd get Provider not registered: anthropic. Our initial approach involved a static provider registry, but no providers were ever being loaded into it at runtime from the user's saved keys.

The Discovery: The static registry pattern works well for pre-configured, application-wide providers. But for user-provided, encrypted API keys, we needed a dynamic approach. The keys were stored in the database, encrypted, and needed to be fetched and decrypted just-in-time for each LLM call.
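
Our crypto service isn't shown in this post, but a minimal decrypt() along these lines, assuming AES-256-GCM via Node's built-in crypto module and a hex-encoded "iv:authTag:data" storage format, captures the idea:

typescript
// src/server/lib/crypto.ts (hypothetical sketch, AES-256-GCM)
import { createDecipheriv } from 'crypto';

// Assumes ciphertext is stored as "iv:authTag:data", all hex-encoded,
// and key is a 32-byte Buffer loaded from the environment
export function decrypt(encrypted: string, key: Buffer): string {
  const [ivHex, tagHex, dataHex] = encrypted.split(':');
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(ivHex, 'hex'));
  decipher.setAuthTag(Buffer.from(tagHex, 'hex')); // integrity is verified on final()
  return Buffer.concat([
    decipher.update(Buffer.from(dataHex, 'hex')),
    decipher.final(),
  ]).toString('utf8');
}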

The Solution: We introduced a new resolveProvider() function. This function takes the provider name, fetches the corresponding encrypted API key from the database for the current user, decrypts it using our crypto service, and then uses a createProviderWithKey() utility to instantiate the specific LLM provider (e.g., Anthropic, OpenAI) with that decrypted key.

typescript
// src/server/services/discussion-service.ts (conceptual)
import { createProviderWithKey } from './llm-provider-factory'; // A factory for LLM instances
import { getApiKeyForUser, decrypt } from '../lib/crypto'; // Our crypto utilities

async function resolveProvider(userId: string, providerName: string) {
  const encryptedApiKey = await getApiKeyForUser(userId, providerName); // Fetch from DB
  if (!encryptedApiKey) {
    throw new Error(`API key for provider ${providerName} not found.`);
  }
  const decryptedKey = decrypt(encryptedApiKey); // Decrypt it
  return createProviderWithKey(providerName, decryptedKey); // Create provider instance
}

// ... then, in streamSingleProvider or streamParallelProviders
const provider = await resolveProvider(userId, 'anthropic');
// ... use provider to stream LLM responses

Lesson Learned: Dynamic, user-specific resources (like BYOK API keys) require a dynamic resolution mechanism. A static registry is great for application-level configuration, but for per-user or per-request resources, you need a runtime fetch-and-instantiate pattern. Centralizing this logic within a dedicated resolveProvider function keeps your service clean and secure.
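
For completeness, the factory itself can stay tiny. Here's a sketch of createProviderWithKey() assuming the official Anthropic and OpenAI SDKs; the post doesn't show its internals, so treat the details as illustrative:

typescript
// src/server/services/llm-provider-factory.ts (hypothetical sketch)
import Anthropic from '@anthropic-ai/sdk';
import OpenAI from 'openai';

export function createProviderWithKey(providerName: string, apiKey: string) {
  switch (providerName) {
    case 'anthropic':
      return new Anthropic({ apiKey }); // Anthropic SDK client
    case 'openai':
      return new OpenAI({ apiKey }); // OpenAI SDK client
    default:
      throw new Error(`Provider not registered: ${providerName}`);
  }
}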

4. The Disappearing Conversation: SSE Reconnect Logic

The Problem: We were using Server-Sent Events (SSE) for real-time LLM streaming. The initial connection worked, the LLM would respond, and the stream would close. However, when the user sent a follow-up message via our continue mutation, no new LLM processing was triggered because the SSE stream was already closed and not reconnecting.

The Discovery: Our useSSE hook was configured to depend on [url, enabled], so it only tears down and re-opens its EventSource when one of those values changes. After a continue action, neither value changed, which meant the already-closed stream was never re-established. To trigger a new LLM response, the SSE URL needed to change, forcing the hook to re-evaluate and reconnect.
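
You can see the mechanics in a stripped-down sketch of such a hook (simplified; our real hook also parses events and surfaces errors):

typescript
// src/hooks/useSSE.ts (simplified sketch)
import { useEffect } from 'react';

export function useSSE(url: string, { enabled = true }: { enabled?: boolean } = {}) {
  useEffect(() => {
    if (!enabled) return;
    const source = new EventSource(url);
    source.onmessage = (event) => {
      // ...parse and dispatch incoming stream chunks (elided)
    };
    return () => source.close(); // tear down the old connection on cleanup
  }, [url, enabled]); // a new connection is only opened when url or enabled changes
}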

The Solution: We introduced a simple sseKey state variable in our discussion page component. This sseKey is incremented after a successful continue mutation. The SSE URL then includes this key as a query parameter:

typescript
// src/app/(dashboard)/dashboard/discussions/[id]/page.tsx (conceptual)
import { useState } from 'react';
import { useMutation } from '@tanstack/react-query'; // Assuming react-query for mutations
import { useSSE } from '@/hooks/useSSE'; // Our custom SSE hook

export default function DiscussionPage({ params }: DiscussionPageProps) {
  const { id } = params;
  const [sseKey, setSseKey] = useState(0); // State to force SSE reconnect

  const continueMutation = useMutation({
    mutationFn: (message: string) => sendUserMessage(id, message), // Your API call
    onSuccess: () => {
      setSseKey(prev => prev + 1); // Increment key on success to force SSE reconnect
    },
  });

  const sseUrl = `/api/v1/events/discussions/${id}?k=${sseKey}`;
  useSSE(sseUrl, { enabled: true }); // useSSE will re-run when sseUrl changes

  // ... rest of your component
}

On the server side, our SSE endpoint (src/app/api/v1/events/discussions/[id]/route.ts) now awaits params defensively: awaiting a plain object is a no-op, so the handler works on Next.js 14 today and stays compatible with the Promise-based params coming to route handlers in Next.js 15. Each new connection therefore processes the latest discussion state, and a "needs response" guard in processDiscussion() (sketched below) ensures the LLM is only called if the last message is from the user.
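
That guard reduces to checking who spoke last. A conceptual sketch (the prisma.message model and role field are assumptions about our schema):

typescript
// Inside processDiscussion() (conceptual): only invoke the LLM
// when the most recent message came from the user
const lastMessage = await prisma.message.findFirst({
  where: { discussionId },
  orderBy: { createdAt: 'desc' },
});
if (lastMessage?.role !== 'user') {
  return; // the LLM spoke last (or no messages yet); skip the redundant reply
}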

Lesson Learned: For dynamic real-time streams that need to re-trigger based on user actions, a simple cache-busting mechanism (like a changing query parameter) can effectively force client-side hooks to re-evaluate and re-establish connections. Pair this with server-side logic that understands the state change.


Bringing It All Together: The "Done" List

With these critical fixes in place, we've made significant progress:

  • Robust GitHub OAuth: Users can now sign up regardless of their public email settings.
  • Stable Next.js Routing: Dynamic routes work as expected across client and server components.
  • Functional BYOK LLM Integration: The discussion service can now fetch, decrypt, and instantiate LLM providers using user-supplied keys from the database.
  • Real-time, Responsive Discussions: The SSE reconnect mechanism ensures that after a user sends a message, the LLM is prompted to respond, and its output streams back in real-time.
  • Intelligent LLM Pacing: A "needs response" guard prevents the LLM from generating redundant responses if it was already the last speaker.

Our discussion service now resolves BYOK API keys from the DB, decrypts them, and creates provider instances. The SSE reconnect logic is solid on the client. We're finally ready to test the entire end-to-end LLM discussion flow!

What's Next on the Horizon

With the core LLM discussion working, our immediate next steps involve thorough testing and expanding functionality:

  1. End-to-End LLM Discussion Testing: Verify Anthropic (and other providers) respond via streaming for new discussions and follow-ups.
  2. Parallel & Consensus Modes: Test our advanced discussion modes where multiple LLMs contribute.
  3. Workflow & Wardrobe Feature Testing: Ensure other core features are stable.
  4. Polish & Infrastructure: Add missing icons, create error pages, and verify a clean production build.

This session was a testament to the iterative nature of development. Each "pain" point, though frustrating in the moment, led to a deeper understanding of our stack and a more robust solution. Our AI partners are finally ready to talk, and we're excited to see the conversations unfold.


json
{
  "thingsDone": [
    "Fixed GitHub OAuth email missing by making email optional in Prisma schema.",
    "Fixed GitHub OAuth profile callback issues by understanding PrismaAdapter behavior.",
    "Fixed use(params) error in Next.js 14 dynamic routes by using direct destructuring.",
    "Wired up LLM discussion service to use BYOK API keys via a new resolveProvider function.",
    "Added 'needs response' guard in processDiscussion to prevent redundant LLM calls.",
    "Added SSE reconnect mechanism using a sseKey counter to force client-side re-connections.",
    "Fixed SSE endpoint params handling in server route handlers for Next.js 14."
  ],
  "pains": [
    "GitHub OAuth `email` missing due to NextAuth v5 PrismaAdapter not using profile callback email.",
    "`use(params)` error in Next.js 14 client components due to `params` being an object, not a Promise.",
    "LLM discussion failing with `Provider not registered` because BYOK keys weren't dynamically loaded.",
    "SSE endpoint closing after first response and not reconnecting for subsequent user messages."
  ],
  "successes": [
    "Seamless GitHub OAuth for all users.",
    "Stable dynamic routing in Next.js 14.",
    "Functional BYOK LLM provider integration.",
    "Robust real-time LLM discussion streaming with auto-reconnect.",
    "Efficient LLM call management."
  ],
  "techStack": [
    "Next.js 14",
    "React",
    "TypeScript",
    "Prisma",
    "PostgreSQL",
    "Redis",
    "NextAuth v5",
    "Server-Sent Events (SSE)",
    "LLM APIs (Anthropic, OpenAI)",
    "Docker"
  ]
}