nyxcore-systems
6 min read

Unleashing LLM Flexibility: Building a UI Chooser for AutoFix & Refactor

We just shipped a major UX improvement, empowering users to select their preferred LLM provider and model directly from our AutoFix and Refactor pipeline dialogs, with no code edits required. Dive into the technical implementation and lessons learned.

LLM · TypeScript · Next.js · tRPC · Prisma · UX · Developer Experience · Frontend · Backend

Ever felt constrained by hardcoded configurations, especially when dealing with rapidly evolving technologies like Large Language Models? Our AutoFix and Refactor pipelines, critical tools in our development toolkit, were facing just this challenge. While powerful, switching the underlying LLM provider or model required a trip to the codebase. Not ideal for a smooth developer experience.

That's why our latest session focused on a clear goal: integrate a user-friendly LLM provider and model chooser directly into our AutoFix & Refactor dialogs. The idea was simple: give users the power to choose their LLM backend on the fly, unlocking flexibility for cost optimization, performance tuning, or leveraging specific model capabilities.

I'm happy to report that this mission is accomplished, committed to main in aae220a. Let's break down how we tackled it and the insights gained along the way.

Empowering Choice: The Frontend Experience

The core of this feature lies in the user interface. We needed a clean, intuitive way for users to make their selections.

The Provider/Model Selector

We introduced an LLM_PROVIDERS button group, allowing quick selection of major providers: Anthropic, OpenAI, Google, Kimi, and Ollama. Alongside it, an optional text input gives granular control over the exact model name.

These controls were integrated into the New Scan dialogs for both src/app/(dashboard)/dashboard/auto-fix/page.tsx and src/app/(dashboard)/dashboard/refactor/page.tsx.

tsx
// Simplified example within a dialog component
<div className="flex flex-col gap-4">
  <label className="text-sm font-medium">LLM Provider</label>
  <ButtonGroup
    options={['anthropic', 'openai', 'google', 'kimi', 'ollama']}
    value={selectedProvider}
    onChange={handleProviderChange}
  />

  <label className="text-sm font-medium">Model (optional)</label>
  <Input
    type="text"
    placeholder={DEFAULT_MODELS[selectedProvider]}
    value={modelInput}
    onChange={(e) => setModelInput(e.target.value)}
  />
</div>

Dynamic Defaults for Better UX

A subtle but crucial UX detail is the dynamic model placeholder. We maintain a DEFAULT_MODELS map:

typescript
const DEFAULT_MODELS = {
  anthropic: 'claude-sonnet-4-20250514',
  openai: 'gpt-4o-mini',
  google: 'gemini-2.0-flash',
  kimi: 'kimi-k2-0711-preview',
  ollama: 'llama3',
};

When a user switches providers, the model input's placeholder automatically updates to suggest a sensible default for that provider. This also means we clear the model input on provider switch, ensuring the placeholder accurately reflects the new default. This little touch significantly reduces cognitive load for the user.
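To make that interplay concrete, here's a pure-function sketch of the behavior (the names `switchProvider`, `resolveModel`, and `LlmSelection` are illustrative, not the actual component code, which lives in React state handlers):

```typescript
type Provider = 'anthropic' | 'openai' | 'google' | 'kimi' | 'ollama';

interface LlmSelection {
  provider: Provider;
  modelInput: string; // empty string means "fall back to the provider default"
}

// Switching providers clears the model input so the new default placeholder shows.
function switchProvider(_current: LlmSelection, next: Provider): LlmSelection {
  return { provider: next, modelInput: '' };
}

// The effective model is the user's input, or the provider's default when empty.
function resolveModel(sel: LlmSelection, defaults: Record<Provider, string>): string {
  return sel.modelInput || defaults[sel.provider];
}
```

The key design choice is that an empty input is meaningful: it signals "use the default," which is exactly what the placeholder displays.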

Bridging Frontend to Backend: tRPC Integration

Once the user makes their selection, the provider and model values need to be sent to the backend to kick off the pipeline. This was handled by passing these parameters directly to their respective tRPC start mutations. tRPC's end-to-end type safety made this a breeze, ensuring that our backend expected exactly what the frontend was sending.

typescript
// Conceptual tRPC mutation call
const startMutation = trpc.autoFix.start.useMutation({
  onSuccess: () => { /* navigate to detail page */ },
});

// Inside the dialog's submit handler:
startMutation.mutate({
  scanId: newScanId,
  // ... other parameters
  provider: selectedProvider,
  model: modelInput || DEFAULT_MODELS[selectedProvider], // use input or provider default
});

Visibility is Key: Keeping Users Informed

After a scan is initiated, users need to quickly see which LLM configuration was used. We implemented two visual cues:

  1. Detail Page Headers: On the auto-fix/[id]/page.tsx and refactor/[id]/page.tsx detail pages, we added a clear Badge displaying the chosen provider and model (e.g., "OpenAI / gpt-4o-mini").
  2. Run List Cards: For a quick overview, a small text-[10px] font-mono provider label now appears on the individual run list cards in both listing pages.

Crucially, we ensured backward compatibility. Existing runs in the database that predated this feature (and thus didn't have an explicit provider in their config) gracefully default to displaying "anthropic," providing a consistent experience without data migration.
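In code, that fallback reduces to a tiny helper. This is a sketch (`providerLabel` and `badgeText` are illustrative names), assuming the config JSON stores optional `provider` and `model` strings:

```typescript
type RunConfig = { provider?: string; model?: string } | null;

// Runs created before this feature have no provider in their config,
// so we fall back to 'anthropic' rather than migrating old rows.
function providerLabel(config: RunConfig): string {
  return config?.provider ?? 'anthropic';
}

// Badge text like "openai / gpt-4o-mini"; omits the model when none is stored.
function badgeText(config: RunConfig): string {
  const provider = providerLabel(config);
  return config?.model ? `${provider} / ${config.model}` : provider;
}
```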

Lessons Learned: Navigating Prisma's Json? with TypeScript

While the session was largely smooth sailing, one recurring "pain" point surfaced, common when working with dynamic data structures in a strictly typed environment: Prisma's Json? field.

Our config field in Prisma is defined as Json?. This is incredibly flexible for storing unstructured data, but it presents a challenge in TypeScript. When querying, the inferred type of config is often just Json | null, obscuring the actual object structure within.

The Problem in Practice:

Accessing nested properties like config.provider directly would lead to TypeScript errors because Json doesn't inherently know about provider.

typescript
// This would error: Property 'provider' does not exist on type 'Json'.
const provider = run.config.provider;

The Solution: Careful Type Casting

We had to resort to explicit type casting to inform TypeScript about the expected structure.

  1. For Direct Access: When we knew the config would be an object with string keys and values (like provider and model), we cast it to Record<string, string> | null:

    typescript
    const config = run.config as Record<string, string> | null;
    const provider = config?.provider || 'default-fallback';
    
  2. For Nested Access on List Pages: In scenarios where the config was part of a larger object inferred from a Prisma.RunInclude type, and we only cared about its potential existence and specific properties, we used a slightly different approach to avoid overly broad casts:

    typescript
    // `run` here might be inferred as `Run & { config: Json | null }`
    // We need to tell TS that if config exists, it's an object with a 'provider' key.
    const typedRun = run as unknown as { config?: { provider?: string } };
    const providerLabel = typedRun.config?.provider || 'anthropic';
    

This as unknown as ... pattern is a common workaround when TypeScript's inference engine can't fully peek into the structure of Json fields, and you need to assert a specific shape for safe access. It's a reminder that while Json offers flexibility, it trades off some of TypeScript's strictness, requiring developers to be more explicit about data shapes.
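If the casts start spreading, a small runtime guard is a safer alternative worth considering (a sketch, not what we shipped; `getConfigString` is a hypothetical helper):

```typescript
// Narrows an unknown JSON value to a string property, returning undefined
// for null configs, non-object values, or non-string properties.
function getConfigString(config: unknown, key: string): string | undefined {
  if (typeof config === 'object' && config !== null && key in config) {
    const value = (config as Record<string, unknown>)[key];
    return typeof value === 'string' ? value : undefined;
  }
  return undefined;
}

// Usage sketch:
// const provider = getConfigString(run.config, 'provider') ?? 'anthropic';
```

Unlike a bare cast, this also protects at runtime against malformed config rows, not just at compile time.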

What's Next?

With the core functionality shipped, the immediate next steps involve thorough manual testing to ensure everything works as expected across different provider selections and scenarios.

One minor refactoring consideration is extracting the DEFAULT_MODELS map to src/lib/constants.ts. As our LLM integrations grow, centralizing such configurations will improve maintainability and discoverability.
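The extraction itself would be small; here's a sketch of what the proposed src/lib/constants.ts could look like (the as const / derived-type pattern is a suggestion, not committed code):

```typescript
// Proposed src/lib/constants.ts (sketch)
export const LLM_PROVIDERS = ['anthropic', 'openai', 'google', 'kimi', 'ollama'] as const;
export type LlmProvider = (typeof LLM_PROVIDERS)[number];

export const DEFAULT_MODELS: Record<LlmProvider, string> = {
  anthropic: 'claude-sonnet-4-20250514',
  openai: 'gpt-4o-mini',
  google: 'gemini-2.0-flash',
  kimi: 'kimi-k2-0711-preview',
  ollama: 'llama3',
};
```

Deriving `LlmProvider` from the array would keep the button group and the defaults map from drifting apart.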

This session was a solid step forward in enhancing the user experience of our LLM-powered tools. Giving users direct control over their LLM choices not only improves flexibility but also empowers them to experiment and optimize their workflows more effectively.


json
{"thingsDone":[
  "Added LLM provider button group to AutoFix New Scan dialog.",
  "Added LLM provider button group to Refactor New Scan dialog.",
  "Integrated optional model text input for custom model selection.",
  "Passed `provider` and `model` parameters to tRPC `start` mutations for both pipelines.",
  "Implemented dynamic model placeholder updates based on selected provider via `DEFAULT_MODELS` map.",
  "Ensured model input clears when switching providers for correct placeholder display.",
  "Added provider/model `Badge` to AutoFix and Refactor detail page headers.",
  "Added provider label to run list cards for quick identification.",
  "Implemented graceful fallback for existing runs without explicit provider data."
],"pains":[
  "Handling Prisma's `Json?` type in TypeScript, requiring explicit type casting (`as Record<string, string> | null` and `as unknown as { config?: Record<string, string> }`) to access nested properties safely."
],"successes":[
  "Achieved the primary goal of adding a user-friendly LLM chooser UI.",
  "Maintained backward compatibility for existing data.",
  "Improved developer experience by centralizing LLM configuration through UI.",
  "Smooth integration with existing tRPC backend."
],"techStack":[
  "TypeScript",
  "Next.js",
  "React",
  "tRPC",
  "Prisma",
  "TailwindCSS"
]}