nyxcore-systems

Building a Dynamic LLM Provider Selector: From Hardcoded to User Choice

How we transformed our AI-powered code analysis tool from a single-provider system to a flexible multi-LLM interface, giving users the power to choose their preferred AI model on the fly.

ui-development · llm-integration · user-experience · react · typescript


Ever found yourself locked into a single AI provider in your application, wishing you could easily switch between OpenAI, Anthropic, Google's Gemini, or even local models? That's exactly the challenge we tackled this week while improving our AutoFix and Refactor pipeline tools.

The Problem: Choice, But Only for Developers

Our code analysis platform had a classic "developer convenience" problem. Users could technically switch between different LLM providers (OpenAI, Anthropic Claude, Google Gemini, Kimi, and local Ollama models), but only if they were willing to dive into configuration files or, worse, edit code directly.

As any UX-conscious developer knows, if a feature requires code changes, it might as well not exist for most users.

The Solution: Provider Selection in the UI

We decided to add a clean, intuitive provider selector directly to our "New Scan" dialogs. Here's what we built:

1. Dynamic Provider Selection

```tsx
// Button group for provider selection
<div className="button-group">
  {LLM_PROVIDERS.map(provider => (
    <button
      key={provider}
      className={selectedProvider === provider ? 'active' : ''}
      onClick={() => handleProviderChange(provider)}
    >
      {provider}
    </button>
  ))}
</div>
```

The beauty is in the simplicity. Users see buttons for anthropic, openai, google, kimi, and ollama right in the dialog where they're starting their scan.
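One way to keep that button list honest is to derive the provider type from the array itself, so adding a provider is a one-line change. This is a sketch under assumed names (`LLM_PROVIDERS`, `Provider`, `isProvider` are illustrative, not necessarily our actual identifiers):

```typescript
// Hypothetical sketch: keep the provider list and its union type in sync.
const LLM_PROVIDERS = ['anthropic', 'openai', 'google', 'kimi', 'ollama'] as const;

// Derived union type: 'anthropic' | 'openai' | 'google' | 'kimi' | 'ollama'
type Provider = (typeof LLM_PROVIDERS)[number];

// Runtime type guard, useful for narrowing untyped values (e.g. config
// read back from the database) to the Provider union.
function isProvider(value: string): value is Provider {
  return (LLM_PROVIDERS as readonly string[]).includes(value);
}
```

With this shape, the `.map` in the dialog, the type checker, and any runtime validation all read from the same single source of truth.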

2. Smart Model Defaults

Each provider has different model naming conventions. Rather than confuse users, we implemented a smart placeholder system:

```tsx
const DEFAULT_MODELS = {
  anthropic: 'claude-sonnet-4-20250514',
  openai: 'gpt-4o-mini',
  google: 'gemini-2.0-flash',
  kimi: 'kimi-k2-0711-preview',
  ollama: 'llama3'
}
```

When users switch providers, the model input field automatically updates its placeholder to show the most common model for that provider. It's a small touch that eliminates the "what model should I use?" friction.
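The switch logic can be sketched as a pure state transition (names like `applyProviderChange` and `ScanFormState` are assumptions for illustration; the real handler would feed this result into React state, and `DEFAULT_MODELS` is repeated here only to keep the sketch self-contained):

```typescript
// Mirrors the default-model table above.
const DEFAULT_MODELS: Record<string, string> = {
  anthropic: 'claude-sonnet-4-20250514',
  openai: 'gpt-4o-mini',
  google: 'gemini-2.0-flash',
  kimi: 'kimi-k2-0711-preview',
  ollama: 'llama3',
};

interface ScanFormState {
  provider: string;
  modelPlaceholder: string;
  model: string; // the user's explicit model input, if any
}

// Switching providers updates the placeholder and clears any stale model
// input, so an openai model name is never accidentally sent to anthropic.
function applyProviderChange(state: ScanFormState, provider: string): ScanFormState {
  return {
    provider,
    modelPlaceholder: DEFAULT_MODELS[provider] ?? '',
    model: '', // reset; the placeholder now suggests a sensible default
  };
}
```

Keeping the transition pure also makes this exact "placeholder follows provider" behavior trivially unit-testable.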

3. Visual Feedback Throughout

We didn't stop at the input dialog. The selected provider and model now appear as badges in:

  • Detail pages: So users know exactly which AI analyzed their code
  • Run history: Quick visual scanning of past analyses with tiny font-mono labels
  • List views: At-a-glance provider identification
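The badge text in all three places can come from one tiny helper, so detail pages, run history, and list views never drift apart. A minimal sketch (the helper name is hypothetical):

```typescript
// Produce the "provider · model" label shown in the font-mono badges,
// falling back gracefully when fields are missing from older runs.
function formatRunBadge(config: Record<string, string> | null): string {
  const provider = config?.provider ?? 'anthropic'; // historical default
  const model = config?.model;
  return model ? `${provider} · ${model}` : provider;
}
```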

Implementation Insights

The Data Layer Challenge

One interesting technical hurdle was working with Prisma's Json? field type for storing configuration. Since JSON fields don't have compile-time type safety, we had to be creative with type casting:

```tsx
// On detail pages
const config = run.config as Record<string, string> | null

// On list pages with complex includes
const runWithConfig = run as unknown as {
  config?: Record<string, string>
}
```

It's not the prettiest code, but it handles the reality that our config field needs to be flexible while still giving us type safety where possible.
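If the casting ever becomes a liability, one alternative (a sketch, not our production code) is to narrow the unknown JSON at runtime instead of asserting its shape:

```typescript
// The fields we expect inside the Json? column.
interface RunConfig {
  provider?: string;
  model?: string;
}

// Runtime narrowing: returns only the fields that actually hold strings,
// or null when the stored value isn't a plain object at all.
function parseRunConfig(raw: unknown): RunConfig | null {
  if (raw === null || typeof raw !== 'object' || Array.isArray(raw)) return null;
  const obj = raw as Record<string, unknown>;
  const config: RunConfig = {};
  if (typeof obj.provider === 'string') config.provider = obj.provider;
  if (typeof obj.model === 'string') config.model = obj.model;
  return config;
}
```

The cast stays in one place, and malformed rows degrade to `null` instead of surfacing as mysteriously wrong badge text.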

Graceful Backwards Compatibility

Existing scans didn't have provider information stored, so we implemented a sensible default:

tsx
const displayProvider = config?.provider || 'anthropic'

Users with historical data see "anthropic" as the provider, which was our previous default. No broken interfaces, no confused users.

Lessons Learned

JSON Fields Require Extra Type Care: Prisma's Json? type is flexible but requires manual type casting. Consider the tradeoffs between flexibility and type safety when designing your schema.

Small UX Details Matter: The auto-updating model placeholder seems minor, but it eliminates cognitive load for users switching between providers with different naming conventions.

Think Beyond the Input: Adding provider visibility to list views and detail pages created a cohesive experience where users always know which AI they're working with.

The Result

What started as a "quick UI improvement" turned into a comprehensive user experience upgrade. Users can now:

  1. ✅ Choose their preferred LLM provider without leaving the interface
  2. ✅ Get smart model suggestions based on their provider choice
  3. ✅ See provider information throughout their workflow
  4. ✅ Switch providers per-scan based on their specific needs

Sometimes the best features are the ones that remove friction rather than add functionality. In this case, we took something that was technically possible but practically difficult and made it genuinely usable.

The next time you're building AI-powered features, consider: are you giving your users real choice, or just technical possibility? There's a big difference, and your users will definitely notice which one you pick.


Want to see more posts about building user-friendly AI interfaces? Follow along as we continue improving our code analysis platform, one UX decision at a time.