nyxcore-systems

Navigating the Night: Deduplication Gotchas, LLM Quirks, and Progressive Auth Blueprints

A late-night development sprint reveals critical lessons in API integration, frontend data handling, and robust authentication design, from battling deprecated LLM parameters to crafting a future-proof auth spec.

TypeScript · OpenAI · LLM · Authentication · WebAuthn · tRPC · Frontend · Backend · DevOps · Lessons Learned

It’s 3 AM. The glow of the terminal is my only companion, and the coffee, long since gone cold, is a distant memory. This isn't just a late-night coding session; it’s a microcosm of modern full-stack development – a rapid-fire sequence of bug fixes, feature builds, and architectural design, each with its own set of challenges and lessons.

This "session handoff" isn't just for my future self; it's a testament to the unpredictable, multi-faceted nature of our work. Let's dive into what got done, what broke, and what we learned.

Wearing Many Hats: A Developer's Typical Night

The goal was ambitious: tackle everything from internal workflow fixes to a critical batch import feature, squash some pesky OpenAI API errors, and lay the groundwork for a robust new authentication system. It's the kind of night where you context-switch faster than a CPU under heavy load.

Here's the rundown of what we shipped:

  • Workflow Persona Refinement: Corrected 7 persona assignments in a critical clarait-auth workflow via direct SQL updates on production, ensuring our internal systems route tasks correctly. A quick win, but vital for process integrity.
  • nyxCore Description: Penned a 200-word German description for our nyxCore system, demonstrating that even late at night, documentation and communication are key.

But the real meat of the session lay in a few key areas that demanded deeper technical dives.

Streamlining Data: Building a Batch URL Import for Axiom

One of our immediate needs was to simplify the ingestion of large sets of URLs into our Axiom system. Manual entry is tedious and error-prone. The solution? A Batch URL Import.

We built this out with a classic full-stack approach:

  • Backend (tRPC): A new batchFetchUrls tRPC mutation was added to src/server/trpc/routers/axiom.ts. This handles the heavy lifting of processing and storing the incoming URLs.
  • Frontend (React / Next.js): A collapsible "Batch URL Import" section was integrated into our AxiomTab (src/app/(dashboard)/dashboard/projects/[id]/page.tsx). It provides a user-friendly interface with a textarea for pasting raw text (like a list of URLs) and a file upload option.
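The extraction step behind that textarea can be sketched roughly like this. The helper name and the regex are illustrative assumptions, not the actual code in AxiomTab; real-world URL matching has more edge cases than this pattern handles.

```typescript
// Hypothetical helper: pull URLs out of pasted raw text and deduplicate them.
export function extractUrls(rawText: string): string[] {
  // Simple illustrative pattern: http(s) scheme up to the next whitespace/quote/bracket.
  const urlPattern = /https?:\/\/[^\s"'<>]+/g;
  const matches = rawText.match(urlPattern) ?? [];
  // Array.from(new Set(...)) deduplicates while preserving first-seen order.
  return Array.from(new Set(matches));
}
```

The deduplicated list can then be handed straight to the batchFetchUrls mutation.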

Lesson Learned: TypeScript & Deduplication Gotchas

During the frontend implementation, a common TypeScript pitfall emerged. My initial instinct for deduplicating extracted URLs was [...new Set(extractedUrls)]. Simple, elegant, and often correct. However, when the compilation target is below ES2015 and downlevelIteration is not enabled, spreading a Set triggers a TypeScript error asking for --downlevelIteration.

The Workaround: Array.from(new Set(extractedUrls)) sidesteps the iteration requirement entirely, because it accepts any iterable without the spread protocol being downleveled. It compiles cleanly even on older targets, as long as your lib settings include the ES2015 declarations for Array.from.

typescript
// Before (potential TS error without --downlevelIteration):
// const uniqueUrls = [...new Set(extractedUrls)];

// After (robust and universally compatible):
const uniqueUrls = Array.from(new Set(extractedUrls));

This tiny detail can save you a frustrating debugging session.

Taming the LLM Beast: OpenAI API Nuances

Working with large language models is exhilarating, but their APIs are a moving target. This session brought two critical lessons to light, especially with the introduction of newer models like GPT-5.

Lesson Learned 1: Deprecated Parameters

We encountered 400 Bad Request errors when calling the OpenAI API, specifically with the max_tokens parameter. OpenAI has deprecated max_tokens in favor of max_completion_tokens, and newer models reject the old parameter outright. This seemingly minor change can break your integration if not updated across all calls.

The Fix: We updated our src/server/services/llm/adapters/openai.ts to use max_completion_tokens for all OpenAI models, ensuring future compatibility.

Lesson Learned 2: Model-Specific Restrictions (GPT-5)

Even after fixing the max_tokens issue, GPT-5 continued to be a challenge. Attempting to set temperature: 0.3 for a more deterministic output resulted in another 400 error: "Only the default (1) value is supported."

This highlights a crucial point: newer, more advanced models might have stricter parameter constraints.

The Fix: We added gpt-5 to our isReasoningModel() check within the adapter. This check now intelligently skips sending custom temperature and top_p values when interacting with GPT-5 or other models known to have such restrictions, preventing API errors.

typescript
// src/server/services/llm/adapters/openai.ts

const completionParams: Record<string, unknown> = {
    // ... other common parameters
    max_completion_tokens: config.maxCompletionTokens, // always used now; 'max_tokens' is deprecated
};

// GPT-5 and other reasoning models only support the default temperature/top_p (1).
// Sending custom values triggers a 400: "Only the default (1) value is supported".
// Since gpt-5 is now included in isReasoningModel(), one check covers them all.
if (!isReasoningModel(modelId)) {
    // For non-reasoning models, apply custom temperature/top_p if configured
    completionParams.temperature = config.temperature;
    completionParams.top_p = config.topP;
}

// ... proceed with the API call using completionParams

Architecting for the Future: Progressive Authentication

Authentication is a cornerstone of any secure application, and designing it right from the start is paramount. This session involved drafting the Progressive Auth Design Spec (docs/superpowers/specs/2026-03-18-progressive-auth-design.md).

Our strategy is to offer a robust and flexible authentication experience:

  • Social Logins: Seamless integration with Google and GitHub.
  • Passkeys (WebAuthn): Embracing the future of passwordless, secure authentication.
  • Magic Link Fallback: A reliable option for users who prefer email-based access or for recovery.
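As a back-of-the-envelope sketch of the magic-link leg (the helper name and TTL here are assumptions, not the spec's actual values): mint a random single-use token, persist only its hash, and email the raw token to the user.

```typescript
import { randomBytes, createHash } from "node:crypto";

// Hypothetical sketch: issue a magic-link token. Only the SHA-256 hash is
// stored server-side, so a leaked table doesn't yield usable login links.
export function issueMagicLinkToken(ttlMinutes = 15): {
  token: string; // emailed to the user inside the link
  tokenHash: string; // persisted for later comparison
  expiresAt: Date; // after this, the link must be rejected
} {
  const token = randomBytes(32).toString("base64url");
  const tokenHash = createHash("sha256").update(token).digest("hex");
  return { token, tokenHash, expiresAt: new Date(Date.now() + ttlMinutes * 60_000) };
}
```

On click, the server hashes the incoming token, compares it to the stored hash, checks expiry, and deletes the record so the link works exactly once.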

The Power of Design Review (Even with Agents!)

What's particularly noteworthy is that this spec was reviewed by our internal code-reviewer agent. This proactive step caught three critical issues before any code was written:

  1. Account Linking Security: Highlighted potential dangers with allowDangerousEmailAccountLinking.
  2. Passkey Session Creation: Ensured secure JWT session creation via @auth/core/jwt for Passkeys.
  3. Challenge Storage: Emphasized the necessity of Redis for secure, ephemeral challenge storage.
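To make the third finding concrete, here is a hedged sketch of single-use, expiring challenge storage. An in-memory Map stands in for Redis (where SET with an EX expiry would supply the TTL and survive process restarts); the class name and TTL are illustrative, not the spec's actual design.

```typescript
type Entry = { challenge: string; expiresAt: number };

// Hypothetical sketch: WebAuthn challenges must be ephemeral and single-use,
// otherwise a captured challenge could be replayed.
export class ChallengeStore {
  private entries = new Map<string, Entry>();

  store(userId: string, challenge: string, ttlMs = 300_000): void {
    this.entries.set(userId, { challenge, expiresAt: Date.now() + ttlMs });
  }

  // Returns the challenge once, then deletes it; null if missing or expired.
  consume(userId: string): string | null {
    const entry = this.entries.get(userId);
    this.entries.delete(userId); // single use, even on the expiry path
    if (!entry || entry.expiresAt < Date.now()) return null;
    return entry.challenge;
  }
}
```

The delete-before-return pattern is the key property Redis gives you cheaply: a second verification attempt with the same challenge always fails.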

This early design review, even from an agent, saved us significant architectural refactoring and potential security vulnerabilities down the line. It underscores the value of catching issues at the design phase.

What's Next on the Horizon

While much was accomplished, the development journey is continuous:

  • The progressive auth spec is written but not yet committed to git, and it still needs a user review before implementation planning.
  • We need to verify the impact of the OpenAI fixes on workflow 565e6345 and ensure the clarait-auth workflow 335a4785 completed successfully.
  • Confirming that the Axiom batch import handles the ~50 compliance URLs is another key follow-up.
  • And, of course, remembering to top up those Anthropic API credits!

This session, running into the early hours, was a whirlwind of technical challenges and rewarding solutions. It's a reminder that every line of code, every design decision, and every bug squashed contributes to a more robust, user-friendly, and secure system.

json
{
  "thingsDone": [
    "Fixed 7 persona assignments on workflow 335a4785 (clarait-auth) via SQL UPDATE on production",
    "Built Batch URL Import for Axiom (backend tRPC mutation, frontend UI with textarea/file upload, regex URL extraction, Array.from(new Set(...)) deduplication)",
    "Fixed OpenAI API adapter: replaced 'max_tokens' with 'max_completion_tokens'",
    "Fixed OpenAI API adapter: added 'gpt-5' to isReasoningModel() to skip temperature/top_p for GPT-5",
    "Wrote Progressive Auth Design Spec (Social, Passkeys, Magic Link) with agent-reviewed fixes",
    "Wrote nyxCore description (German, ~200 words)"
  ],
  "pains": [
    "TypeScript error with [...new Set()] for deduplication (required --downlevelIteration)",
    "OpenAI 400 error: 'max_tokens' unsupported by newer models",
    "OpenAI 400 error: GPT-5 only supports default temperature (1)"
  ],
  "successes": [
    "Successfully implemented Array.from(new Set(...)) for robust deduplication",
    "Adapted OpenAI API calls for deprecated parameters and model-specific constraints",
    "Proactive design review by code-reviewer agent caught critical auth spec issues early",
    "Shipped a valuable batch import feature for Axiom"
  ],
  "techStack": [
    "TypeScript",
    "React",
    "Next.js",
    "tRPC",
    "OpenAI API",
    "SQL",
    "WebAuthn (Passkeys)",
    "Redis",
    "@auth/core/jwt"
  ]
}