nyxcore-systems

Wrestling with Runtime: A Journey Through Next.js 14, OAuth, and LLM Integration

A deep dive into solving runtime errors when building an AI-powered discussion platform, from OAuth email quirks to SSE reconnection patterns.

nextjs · oauth · llm · debugging · typescript · prisma

Building modern web applications often feels like assembling a puzzle where the pieces keep changing shape. Today, I want to share a debugging session that perfectly captures this reality—a journey through OAuth authentication, Server-Sent Events, and LLM integration that taught me some valuable lessons about assumptions and runtime behavior.

The Mission: Bringing AI Discussions to Life

The goal seemed straightforward: wire up an LLM discussion system where users could chat with AI providers using their own API keys (BYOK - Bring Your Own Key). Users should be able to start conversations, get streaming responses, and continue discussions seamlessly.

What followed was a masterclass in why "it works in development" doesn't always translate to "it works in production."

Challenge #1: When GitHub Won't Share Emails

The first roadblock hit during GitHub OAuth integration. Users were experiencing authentication failures with the cryptic error: Argument 'email' is missing.

The Problem

GitHub's OAuth API doesn't always return public email addresses. When a user's email is private, the OAuth response contains email: null, but our Prisma schema required emails to be non-null.

The Solution

prisma
// Before: email was required
model User {
  id            String    @id @default(cuid())
  email         String    @unique  // This broke when GitHub returned null
  // ... other fields
}

// After: email became optional
model User {
  id            String    @id @default(cuid())
  email         String?   @unique  // Now handles null emails gracefully
  // ... other fields
}

I also added a profile callback to generate fallback emails:

typescript
// In src/server/auth.ts
profile(profile) {
  return {
    id: profile.id.toString(),
    name: profile.name || profile.login,
    email: profile.email || `${profile.id}+${profile.login}@users.noreply.github.com`,
    image: profile.avatar_url,
  }
}

The Lesson

NextAuth's PrismaAdapter creates users from raw OAuth data, not from the profile callback transformation. The profile callback is primarily for customizing the session object, not for fixing missing required fields in the database.
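If you would rather surface a real address than a synthetic one, GitHub also exposes a /user/emails endpoint when the OAuth app requests the user:email scope. Here is a hedged sketch of selecting from that response — the pickEmail helper and GitHubEmail shape are illustrative, not part of NextAuth:

```typescript
// GitHub's /user/emails endpoint (with the `user:email` scope) returns
// entries shaped roughly like this.
interface GitHubEmail {
  email: string;
  primary: boolean;
  verified: boolean;
}

// Prefer the primary verified address, then any verified one, then fall
// back to GitHub's noreply format — the same fallback used above.
export function pickEmail(
  emails: GitHubEmail[],
  id: number,
  login: string
): string {
  const primary = emails.find((e) => e.primary && e.verified);
  const verified = emails.find((e) => e.verified);
  return (
    primary?.email ??
    verified?.email ??
    `${id}+${login}@users.noreply.github.com`
  );
}
```

A helper like this would be called with the JSON body of a fetch against https://api.github.com/user/emails using the user's access token.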

Challenge #2: The Next.js 14 Params Puzzle

Moving to dynamic routes, I encountered another head-scratcher: An unsupported type was passed to use(): [object Object].

The Problem

I was using the use() hook pattern from Next.js 15 documentation on a Next.js 14 project:

typescript
// This doesn't work in Next.js 14
export default function DiscussionPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = use(params);  // Error: params is not a Promise in v14
  // ...
}

The Solution

In Next.js 14, params are plain objects, not Promises:

typescript
// The correct Next.js 14 approach
export default function DiscussionPage({ params }: { params: { id: string } }) {
  const { id } = params;  // Direct destructuring works perfectly
  // ...
}

The Lesson

Framework version differences matter more than you think. The use(params) pattern is a Next.js 15+ feature. Always check which version you're actually running, not just the latest documentation.
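If a codebase needs to straddle both versions during an upgrade, one small workaround (my own sketch, not an official Next.js API) is a helper that accepts either shape — awaiting a plain object simply resolves to the object itself:

```typescript
// Accepts params as a plain object (Next.js 14) or a Promise (Next.js 15+).
// `await` on a non-Promise value just returns the value, so both work.
export async function resolveParams<T extends object>(
  params: T | Promise<T>
): Promise<T> {
  return await params;
}

// Usage in an async server component (sketch):
// const { id } = await resolveParams(params);
```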

Challenge #3: The Great Provider Registry Mystery

The most puzzling issue came when testing LLM integration. Despite saving API keys to the database, the system threw Provider not registered: anthropic errors.

The Problem

The code was trying to use a provider registry pattern:

typescript
// This failed because the registry was empty at runtime
const provider = getProvider("anthropic");

The registry was never populated because BYOK keys stored in the database weren't being loaded into the in-memory registry on startup.

The Solution

I bypassed the registry entirely for BYOK scenarios:

typescript
async function resolveProvider(providerId: string, userId: string) {
  // Fetch the encrypted API key from database
  const apiKey = await prisma.apiKey.findFirst({
    where: { userId, provider: providerId }
  });
  
  if (!apiKey) {
    throw new Error(`No API key found for provider: ${providerId}`);
  }
  
  // Decrypt and create provider instance directly
  const decryptedKey = crypto.decrypt(apiKey.encryptedKey);
  return createProviderWithKey(providerId, decryptedKey);
}
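The crypto.decrypt call above is the project's own helper. For illustration, here is one common way such a helper can be built on Node's built-in AES-256-GCM — the iv:tag:ciphertext hex encoding is an assumption of this sketch, not necessarily the post's actual format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypts to "iv:authTag:ciphertext" (all hex). Key must be 32 bytes.
export function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return [iv, cipher.getAuthTag(), ciphertext]
    .map((b) => b.toString("hex"))
    .join(":");
}

export function decrypt(payload: string, key: Buffer): string {
  const [iv, tag, ciphertext] = payload
    .split(":")
    .map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates: tampering throws at final()
  return Buffer.concat([
    decipher.update(ciphertext),
    decipher.final(),
  ]).toString("utf8");
}
```

GCM is a good fit for API keys because decryption fails loudly if the stored ciphertext was corrupted or tampered with, rather than silently producing garbage.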

The Lesson

Static registries work great for pre-configured providers, but user-specific configurations need dynamic resolution. Don't assume your elegant architecture will handle all use cases—sometimes you need different patterns for different scenarios.

Challenge #4: The SSE Reconnection Dance

The final challenge involved Server-Sent Events (SSE). Users could start discussions, but follow-up messages wouldn't trigger new LLM responses because the SSE connection had already closed.

The Problem

The SSE connection would process one message, close the stream, and never reconnect when users sent additional messages via the continue mutation.

The Solution

I implemented a reconnection mechanism using a state counter:

typescript
const [sseKey, setSseKey] = useState(0);

// SSE hook depends on the key, forcing reconnection when it changes
const { data: events } = useSSE(
  `/api/v1/events/discussions/${id}?k=${sseKey}`,
  enabled
);

// After sending a message, increment the key to force reconnection
const continueMutation = useMutation({
  mutationFn: (content: string) => continueDiscussion(id, content),
  onSuccess: () => {
    setSseKey(prev => prev + 1);  // This triggers SSE reconnection
  },
});
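Stripped of React, the pattern boils down to closing the current stream and opening a new one whose URL carries a changed key. A minimal framework-free sketch — the SseChannel class and injected connect factory are illustrative, not the post's actual hook:

```typescript
// Minimal EventSource-like handle so the pattern is testable outside a browser.
interface StreamHandle {
  close(): void;
}

export class SseChannel {
  private key = 0;
  private current: StreamHandle | null = null;

  constructor(
    private baseUrl: string,
    // Injected so tests can supply a fake; in the browser this would be
    // (url) => new EventSource(url).
    private connect: (url: string) => StreamHandle
  ) {}

  // Close any existing stream and open one for the current key.
  open(): string {
    this.current?.close();
    const url = `${this.baseUrl}?k=${this.key}`;
    this.current = this.connect(url);
    return url;
  }

  // Called after a successful `continue` mutation: bump the key and
  // reconnect so the server starts a fresh stream.
  bump(): string {
    this.key += 1;
    return this.open();
  }
}
```

The changing query parameter is what makes the reconnection observable: each bump produces a distinct URL, so caches and hooks keyed on the URL treat it as a brand-new subscription.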

The Lesson

SSE connections are stateful and don't automatically know when new work is available. Sometimes you need creative solutions like forced reconnections rather than trying to keep connections alive indefinitely.

The Bigger Picture

Each of these challenges taught me something valuable about building complex web applications:

  1. Third-party APIs are unpredictable - Always handle null/missing data gracefully
  2. Framework versions matter - Don't mix patterns from different versions
  3. Runtime behavior differs from design-time assumptions - Test your architecture with real data
  4. State management in real-time features is tricky - Sometimes simple solutions (like reconnecting) work better than complex ones

What's Next?

With these runtime issues resolved, the discussion system is finally ready for end-to-end testing. The path from "it should work" to "it actually works" was longer than expected, but each challenge revealed important insights about building robust, user-facing applications.

The next phase involves testing the complete flow: parallel discussions, consensus modes, and workflow automation. But that's a story for another debugging session.


Have you encountered similar runtime surprises in your projects? I'd love to hear about your debugging adventures in the comments below.