nyxcore-systems

Beyond Defaults: Unleashing Smart LLM Provider & Model Selection

We've just shipped a major upgrade, bringing intelligent LLM provider and model selection directly into our platform. This post dives into how we built a flexible, resilient, and user-friendly system for choosing the right AI for every conversation.

AI · LLM · Developer Experience · Frontend · Backend · TypeScript · Prisma · tRPC · Next.js · Product Development

In the rapidly evolving world of AI, sticking to a single LLM provider or model can feel like bringing a knife to a gunfight when you really need a bazooka (or sometimes, just a spork). We've all been there: one model excels at creative writing, another at precise code generation, and yet another offers the best cost-performance for simple queries.

That's why we're thrilled to announce a significant platform upgrade: smart LLM provider and model selection. This isn't just about picking a different model; it's about empowering users with unprecedented control, resilience, and flexibility over their AI interactions, from initial discussion setup to mid-conversation model switching, all while allowing tenant administrators to set intelligent defaults.

We've just pushed a comprehensive feature set (commit 6744e4a) that makes this a reality, and I wanted to share the journey from concept to code.

The Vision: Why Flexibility Matters

Our goal was clear:

  1. Resilience: If a provider goes down or a model fails, users should have an immediate, intuitive way to switch.
  2. Optimization: Allow users to choose the best model for their specific task, considering cost, speed, and capability.
  3. Customization: Enable tenant administrators to define default providers and models, streamlining workflows for their teams.
  4. Seamless Experience: Integrate these choices naturally into the UI, making powerful options feel effortlessly accessible.

This vision translated into core features like fallback UX, mid-discussion switching, tenant-level defaults, and a rich model catalog with helpful hints.

Under the Hood: Architecting for Choice

Building this required touching almost every layer of our stack.

Database Schema Evolution

First, we extended our data models to support these new capabilities:

  • Tenant Model: Now includes defaultProvider and defaultModel fields. This is crucial for administrators to set organization-wide preferences, ensuring new discussions start with sensible defaults.
  • Discussion Model: Gained a model_override field. This allows a specific discussion to "lock in" a model, overriding any tenant defaults or even the user's mid-discussion choices if needed for consistency.
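In Prisma terms, the additions look roughly like this (field names come from the description above; the types, nullability, and `@default` attributes are assumptions, and the existing fields are elided):

```prisma
model Tenant {
  id              String  @id @default(cuid())
  // Organization-wide LLM preferences; null means "use platform defaults"
  defaultProvider String?
  defaultModel    String?
  // ...existing fields
}

model Discussion {
  id             String  @id @default(cuid())
  // Locks this discussion to a specific model, overriding tenant defaults
  model_override String?
  // ...existing fields
}
```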

After these changes, a quick prisma db push and prisma generate updated our database and regenerated the Prisma client, keeping everything in sync.

The Source of Truth: Our Model Catalog

To present choices to users, we needed a central, static source of information about available LLMs. We introduced:

  • ModelInfo interface: Defines properties like id, name, provider, costPerInputToken, costPerOutputToken, speed, and bestFor.
  • MODEL_CATALOG: A comprehensive list in src/lib/constants.ts that currently includes 6 models across Anthropic, OpenAI, Google, and Kimi. This serves as our single source of truth for model capabilities and metadata.
  • Helper functions: getModelsForProvider, getDefaultModel, getModelInfo make it easy to query this catalog programmatically.
```typescript
// src/lib/constants.ts (simplified example)
interface ModelInfo {
  id: string;
  name: string;
  provider: 'OpenAI' | 'Anthropic' | 'Google' | 'Kimi';
  costPerInputToken: number; // e.g., $0.0000005 per token
  costPerOutputToken: number;
  speed: 'fast' | 'medium' | 'slow';
  bestFor: string[];
}

export const MODEL_CATALOG: ModelInfo[] = [
  { id: 'gpt-4o', name: 'GPT-4o', provider: 'OpenAI', /* ... */ bestFor: ['complex reasoning', 'multimodal'] },
  { id: 'claude-3-opus', name: 'Claude 3 Opus', provider: 'Anthropic', /* ... */ bestFor: ['creative writing', 'long context'] },
  // ... more models
];
```

This static approach simplifies deployment and ensures consistency, though we'll consider API-driven model listing for future dynamic expansion.
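The helper functions themselves are thin wrappers over the catalog. A minimal sketch (catalog truncated to two entries, and the "first catalog entry for a provider is its default" rule is an assumption, not necessarily the real implementation):

```typescript
interface ModelInfo {
  id: string;
  name: string;
  provider: 'OpenAI' | 'Anthropic' | 'Google' | 'Kimi';
}

const MODEL_CATALOG: ModelInfo[] = [
  { id: 'gpt-4o', name: 'GPT-4o', provider: 'OpenAI' },
  { id: 'claude-3-opus', name: 'Claude 3 Opus', provider: 'Anthropic' },
];

// All models offered by a given provider.
function getModelsForProvider(provider: ModelInfo['provider']): ModelInfo[] {
  return MODEL_CATALOG.filter((m) => m.provider === provider);
}

// Catalog lookup by model id; undefined if the id is unknown.
function getModelInfo(id: string): ModelInfo | undefined {
  return MODEL_CATALOG.find((m) => m.id === id);
}

// Assumption: the first catalog entry for a provider is its default.
function getDefaultModel(provider: ModelInfo['provider']): ModelInfo | undefined {
  return getModelsForProvider(provider)[0];
}
```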

Backend Logic: tRPC Powering the Choices

Our tRPC API was extended to handle the new selection logic:

  • discussions.availableProviders query: Checks which providers have valid API keys configured for the current tenant, ensuring users only see options they can actually use.
  • discussions.updateProvider and discussions.updateModel mutations: Allow users to change their selection mid-discussion.
  • discussions.create: Now accepts an optional modelOverride input, letting users specify a model from the get-go.
  • admin.getDefaults query and admin.updateDefaults mutation: Provide the interface for administrators to manage tenant-wide LLM settings. Importantly, getDefaults uses enforceTenant for read access (any authenticated user can see defaults), but updateDefaults uses enforceRole to restrict write access to admins only, ensuring security.
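At its core, the availability filter behind discussions.availableProviders is a simple predicate over the tenant's configured keys. Here's a hypothetical, framework-free reduction of it (the real query also scopes to the current tenant via tRPC context, and the key-storage shape is an assumption):

```typescript
type Provider = 'OpenAI' | 'Anthropic' | 'Google' | 'Kimi';

// A tenant only sees providers for which a non-empty API key is configured.
function availableProviders(apiKeys: Partial<Record<Provider, string>>): Provider[] {
  const all: Provider[] = ['OpenAI', 'Anthropic', 'Google', 'Kimi'];
  return all.filter((p) => Boolean(apiKeys[p]?.trim()));
}
```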

Crucially, our discussion-service.ts was updated. All four core discussion modes (single, parallel, consensus, autoRound) now correctly pass the chosen model_override as part of LLMCompletionOptions to the underlying provider stream/complete calls. This ensures that the selected model is truly utilized for every AI interaction.
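The precedence rule at the heart of this can be sketched as a pure function (names here are illustrative, not the actual service internals; the exact ordering of tenant default vs. platform fallback is our reading of the behavior described above):

```typescript
// An explicit discussion override wins, then the tenant default,
// then a platform-wide fallback model.
function resolveModel(opts: {
  discussionOverride?: string | null;
  tenantDefault?: string | null;
  platformFallback: string;
}): string {
  return opts.discussionOverride ?? opts.tenantDefault ?? opts.platformFallback;
}
```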

Crafting the User Experience: The ProviderPicker

The frontend integration was where all these backend pieces came together in a tangible way.

The ProviderPicker Component

We built src/components/discussion/provider-picker.tsx as a reusable, robust dropdown component. It features:

  • Provider Grouping: Models are logically grouped under their respective providers.
  • Model Listing: Displays all available models from MODEL_CATALOG.
  • Cost/Speed Badges: Visual hints to help users make informed decisions.
  • Availability Filtering: Only shows providers for which the tenant has configured API keys.
  • defaultOpen/onClose props: Key for flexible embedding (more on this in "Lessons Learned").
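The grouping step behind the first bullet is straightforward; a hypothetical sketch of what the component derives from MODEL_CATALOG before rendering (types simplified, names illustrative):

```typescript
interface CatalogEntry {
  id: string;
  provider: string;
}

// Bucket models under their provider, preserving catalog order within each group.
function groupByProvider(models: CatalogEntry[]): Map<string, CatalogEntry[]> {
  const groups = new Map<string, CatalogEntry[]>();
  for (const model of models) {
    const bucket = groups.get(model.provider) ?? [];
    bucket.push(model);
    groups.set(model.provider, bucket);
  }
  return groups;
}
```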

Integration Across the App

  • New Discussion Page (new/page.tsx): This is where new conversations begin. The ProviderPicker here pre-selects tenant defaults and clearly shows provider availability, along with model selector hints (cost, speed, bestFor).
  • Discussion Detail Page ([id]/page.tsx):
    • Mid-Discussion Switching: Clickable provider labels within our StreamFlow UI now open the ProviderPicker, allowing users to change models on the fly for subsequent messages.
    • Inline Error Retry: If an LLM call fails (e.g., due to an invalid API key or service outage), an inline error retry UI appears. This includes the ProviderPicker (pre-opened for immediate selection) and a "Retry same" button for convenience, making recovery seamless.
  • Admin Page: A brand new "LLM Defaults" tab provides an intuitive interface for administrators to select default providers and models for their tenant using selection cards and a save button.

Navigating Challenges: A Double-Click Dilemma

No complex feature ships without its quirks. Our main challenge revolved around the ProviderPicker component's interaction with parent components.

The Problem: Initially, we rendered the ProviderPicker conditionally within a showProviderPicker state wrapper on the discussion detail page. The idea was: click a button, showProviderPicker becomes true, the ProviderPicker renders. However, because the ProviderPicker manages its own open/close state internally (via click-outside detection), this led to a "double-click" issue:

  1. User clicks "Change Model" button.
  2. showProviderPicker becomes true, ProviderPicker renders.
  3. The initial click that rendered the ProviderPicker is immediately registered by its internal click-outside detection, causing it to close itself right after rendering.
  4. The user then has to click again on the rendered ProviderPicker to open its internal dropdown. Frustrating!

The Solution: We added defaultOpen and onClose props to the ProviderPicker. Now, when we want the picker to open immediately upon rendering (like in an error retry scenario), we simply pass defaultOpen={true}. The component opens its internal dropdown without requiring an extra click, and onClose notifies the parent when the user has finished their selection or clicked away. This small but crucial design change made a world of difference in the user experience.
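The timing rule at the core of the fix can be modeled without React. This hypothetical sketch (illustrative names, not the actual component internals) captures it: a picker mounted with defaultOpen starts open, and the click-outside handler ignores the very click that mounted it, since that click fires in the same instant as mounting:

```typescript
function createPickerState(opts: { defaultOpen?: boolean; mountedAt: number }) {
  let open = opts.defaultOpen ?? false;

  return {
    isOpen: () => open,
    // Ignore outside clicks from the mounting instant itself; only a
    // genuinely later outside click closes the dropdown.
    handleOutsideClick: (at: number) => {
      if (at <= opts.mountedAt) return;
      open = false;
    },
  };
}
```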

What's Next?

With commit 6744e4a now ready for prime time (it's on main, just waiting for a git push!), our immediate next steps are:

  1. Push to Origin: Get this feature out for broader testing.
  2. End-to-End Testing:
    • Verify tenant defaults: Set "kimi" as default in admin, ensure new discussions pre-select it.
    • Test error retry: Use an invalid API key, confirm the error UI with the picker appears, and switching providers works.
    • Test mid-discussion switch: Click a provider in StreamFlow, select a different model, and send a message to confirm the change.
  3. Future Model Expansion: Consider adding Ollama models to our MODEL_CATALOG for local-first AI experiences.
  4. Message-Level Persistence: Explore persisting the selected model in the DiscussionMessage.model field when a stream completes. (The model field already exists on DiscussionMessage, so this is a natural extension).

This journey of building smart LLM selection has been a fantastic exploration into balancing powerful backend logic with intuitive frontend design. We believe this new level of control will significantly enhance how users interact with AI on our platform, making it more robust, flexible, and tailored to their needs.