From Chaos to Clarity: Unifying AI Model Selection Across nyxCore
We tackled the sprawl of inconsistent AI provider and model selection UIs across nyxCore by introducing a single, self-fetching React component, drastically improving developer experience and consistency.
In the fast-paced world of AI-powered applications, flexibility in choosing the right large language model (LLM) or provider is paramount. But as any system grows, so does the potential for UI fragmentation. At nyxCore, we found ourselves in a classic "good intentions, messy reality" scenario when it came to selecting AI providers and models.
Multiple pages, each with its own bespoke implementation for picking an LLM provider and model, had become a source of inconsistency, duplicated effort, and maintenance headaches. Our recent mission: consolidate this sprawl into a single, elegant, and highly reusable component.
The Problem: A Patchwork of Pickers
Imagine a system where every new feature requiring AI model selection meant developers had to:
- Write a new data fetching query for available providers.
- Implement their own UI for displaying providers (buttons, dropdowns, grids).
- Implement their own UI for selecting models specific to a chosen provider.
- Manage the state for these selections.
This wasn't just hypothetical; it was our reality. From creating new personas to setting up evaluations or even configuring admin settings, each part of nyxCore had its own flavor of this interaction. The result? Inconsistent user experiences, brittle code, and a significant drag on development velocity.
The Solution: A Self-Fetching ProviderModelPicker
Our goal was clear: create a single source of truth for AI provider and model selection. The answer came in the form of the new ProviderModelPicker component. But we didn't just want a UI component; we wanted one that was smart.
The core innovation lies in its self-fetching architecture. Leveraging tRPC, we introduced a useAvailableProviders() hook that queries trpc.dashboard.availableProviders.useQuery() with a sensible staleTime. This means that if you simply drop <ProviderModelPicker /> into your page without passing any providers prop, it intelligently fetches the data it needs to render itself.
```tsx
// src/components/shared/provider-model-picker.tsx (simplified)
import type React from 'react';
import { useAvailableProviders } from '~/hooks/useAvailableProviders'; // our new custom hook
import type { LLMProvider } from '~/types/llm'; // illustrative import path

interface ProviderModelPickerProps {
  providers?: LLMProvider[]; // optional: if provided, the component uses these instead of self-fetching
  // ... other props
}

const ProviderModelPicker: React.FC<ProviderModelPickerProps> = ({
  providers: propProviders,
  ...props
}) => {
  // Only fetch when the caller did not supply providers
  const { data: fetchedProviders } = useAvailableProviders(propProviders === undefined);
  const activeProviders = propProviders || fetchedProviders || [];
  // ... rest of the component logic to display providers and models
};
```
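That prop-versus-fetch precedence is the kind of one-liner that is easy to get subtly wrong, so it helps to think of it as a tiny pure function. The sketch below uses hypothetical names (and a stubbed `LLMProvider` type), not the actual nyxCore helper:

```typescript
// Stub of the provider type; the real definition lives in nyxCore.
interface LLMProvider {
  id: string;
  name: string;
}

// Explicit props win; otherwise fall back to fetched data, then to an empty list.
// Mirrors `activeProviders = propProviders || fetchedProviders || []` above.
function resolveProviders(
  propProviders?: LLMProvider[],
  fetchedProviders?: LLMProvider[],
): LLMProvider[] {
  return propProviders || fetchedProviders || [];
}
```

Keeping the precedence in one expression means a caller can always override the self-fetched list without the component juggling two competing sources of truth.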
This design brings several key benefits:
- Reduced Boilerplate: Developers no longer need to write data fetching logic on every consuming page.
- Single Source of Truth: All provider and model display logic resides in one place, ensuring consistency.
- Improved Developer Experience: Drop it in, and it just works.
- Flexibility: It can still accept a providers prop for edge cases where a filtered list is required (e.g., an admin panel showing primary vs. fallback providers).
The Migration: Six Pages, Significant Simplification
The rollout involved migrating six different consumer pages across nyxCore. The impact was immediate and substantial, resulting in a net reduction of 162 lines of code across these pages.
Let's look at a couple of highlights:
- New Persona Creation (personas/new/page.tsx): This page previously housed a sprawling ~80 lines of custom UI for provider and model selection. We replaced it with a single <ProviderModelPicker> instance. State management was also simplified, consolidating selectedProvider and selectedModel into a single generationTarget object of type ProviderModelSelection.
- Admin Settings (admin/page.tsx): This was a particularly complex page, featuring two native <select> elements and two separate model button grids, totaling around 150 lines of UI and logic. It was refactored to use two ProviderModelPicker instances: one for the primary target and another for a fallback, with the fallback picker filtered to exclude the primary provider. This also allowed us to remove several getModelsForProvider and LLMProviderName imports, cleaning up dependencies.
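The fallback-picker filtering on the admin page boils down to a single list operation. Here is a hedged sketch of the idea; the helper name and types are mine, not the actual admin/page.tsx code:

```typescript
// Stub of the provider type; the real definition lives in nyxCore.
interface LLMProvider {
  id: string;
  name: string;
}

// Providers eligible for the fallback picker: everything except the current
// primary. Passing null (no primary chosen yet) leaves the list unfiltered.
function fallbackCandidates(all: LLMProvider[], primaryId: string | null): LLMProvider[] {
  return all.filter((p) => p.id !== primaryId);
}
```

The filtered array is exactly what the second ProviderModelPicker instance receives via its providers prop, which is why the prop escape hatch earns its keep.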
The migration wasn't just about deleting code; it was about elevating the developer experience and ensuring a consistent, robust foundation for future AI integrations.
Beyond the UI: A Quick Detour into AI Provider Integration
As part of our commitment to robustness, we also performed a direct production test of our Anthropic API key. It's one thing for keys to decrypt correctly, but another for the API itself to function as expected.
Our test revealed a crucial operational detail: while the key decrypted fine, Anthropic returned a 400 error: "credit balance is too low." A quick check of the account confirmed a balance of $6.74 currently available. This highlights the importance of end-to-end testing: it catches real-world issues before they impact users. We're now just waiting for those pending credits to clear!
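To make that check repeatable, the error body can be classified in code rather than eyeballed. This sketch assumes Anthropic's error envelope shape ({ type: 'error', error: { type, message } }); the helper name is hypothetical:

```typescript
// Assumed shape of an Anthropic API error response body.
interface AnthropicErrorBody {
  type: 'error';
  error: { type: string; message: string };
}

// Distinguish the "credit balance is too low" 400 from a genuinely
// malformed request, so monitoring can alert on billing separately.
function isLowCreditError(status: number, body: AnthropicErrorBody): boolean {
  return (
    status === 400 &&
    body.type === 'error' &&
    body.error.message.toLowerCase().includes('credit balance is too low')
  );
}
```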
Lessons Learned from the Trenches
No development session is complete without its share of unexpected hurdles. Here are a few "pain points" that turned into valuable lessons:
1. Environment Mismatches and Module Paths
- The Challenge: When trying to run a test script directly in a production container, we hit a Cannot find module '@prisma/client' error.
- The Lesson: Production Docker containers often have optimized, minimal environments. A script run from /tmp will resolve modules relative to its own location (e.g., /tmp/node_modules), which may not exist. Always verify the NODE_PATH or the actual location of node_modules within your container (in our case, /app/). Copying the script to /app/ resolved the issue.
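When a container throws Cannot find module, Node can tell you exactly where it looked. A small diagnostic sketch using only Node built-ins (no nyxCore specifics):

```typescript
import { createRequire } from 'node:module';

// Build a require anchored at the current working directory, so resolution
// behaves as it would for a script located there.
const requireFromCwd = createRequire(process.cwd() + '/');

// List every directory Node would search when resolving a package name.
// Returns an empty list for core modules, which need no search.
function searchPathsFor(pkg: string): string[] {
  return requireFromCwd.resolve.paths(pkg) ?? [];
}
```

Printing searchPathsFor('@prisma/client') inside the container shows immediately whether /app/node_modules is on the candidate list or the script is resolving from the wrong directory.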
2. Bash/SSH Escaping Nightmares
- The Challenge: Attempting to execute a Node.js script inline via SSH using node -e "if (!key) { ... }" resulted in bash/SSH escaping corrupting the ! character in if (!key).
- The Lesson: Be extremely cautious with special characters in inline shell commands, especially when they are nested within quotes or passed through SSH. The safest approach for anything beyond trivial scripts is to write the script to a local file, securely copy it to the server, and then execute it. This avoids complex escaping rules.
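When the runner itself is Node, there is also a way to sidestep quoting entirely: pass the program as an argv element rather than through a shell. execFileSync spawns the child directly with no shell in between, so ! never reaches bash's history expansion. A sketch (the inline script is illustrative, not the one we shipped):

```typescript
import { execFileSync } from 'node:child_process';

// The script contains the very `!` that broke under SSH quoting.
const script =
  'const key = process.env.API_KEY; if (!key) { console.log("no key"); } else { console.log("key present"); }';

// Simulate the missing-key branch by stripping the variable from the env.
const env = { ...process.env };
delete env.API_KEY;

// argv entries are handed to the OS verbatim -- no shell, no escaping rules.
const output = execFileSync(process.execPath, ['-e', script], { env, encoding: 'utf8' });
```

Over SSH you still traverse the remote shell, so the copy-the-file approach above remains the robust fix; this pattern helps once you control the spawning process directly.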
3. API/Library Specific Data Formats
- The Challenge: Our initial decryption function expected a 3-part format (iv:tag:data), but the actual encrypted data was in a 4-part format (v1:iv:tag:data). This led to an "Invalid initialization vector" error.
- The Lesson: Even with established encryption patterns, subtle version prefixes or format variations can exist. Always double-check the exact data format expected by your encryption/decryption library or API. Destructuring [, ivHex, tagHex, encHex] was the simple fix, skipping the version prefix.
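The format issue is easy to reproduce with Node's crypto module. Below is a self-contained sketch of the 4-part layout; the function names and key handling are illustrative, not nyxCore's real code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Encrypt to the versioned "v1:iv:tag:data" wire format (AES-256-GCM).
function encryptV1(plain: string, key: Buffer): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plain, 'utf8'), cipher.final()]);
  return ['v1', iv.toString('hex'), cipher.getAuthTag().toString('hex'), data.toString('hex')].join(':');
}

// Decrypt, skipping the leading version segment -- the destructuring fix
// from the lesson above. A 3-part parser would treat "v1" as the IV.
function decryptV1(token: string, key: Buffer): string {
  const [, ivHex, tagHex, encHex] = token.split(':');
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(ivHex, 'hex'));
  decipher.setAuthTag(Buffer.from(tagHex, 'hex'));
  return Buffer.concat([decipher.update(Buffer.from(encHex, 'hex')), decipher.final()]).toString('utf8');
}
```

Keeping the version segment in the format is what makes future layout changes possible without guessing, so the right fix was to skip it deliberately, not to strip it before storage.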
What's Next? Solidifying the Foundation
With the ProviderModelPicker successfully deployed, our immediate next steps involve:
- Verify Anthropic Credits: Rerun the test script on production once the $39.99 payment clears to confirm full API functionality.
- Complete ProviderPicker Sunset: Migrate the last few consumers (discussions/[id] and workflows/[id]) from the old ProviderPicker to the new ProviderModelPicker. This might require adding defaultOpen / onClose props or a slight UX rethink for these specific contexts.
- Cleanup: Delete the deprecated src/components/discussion/provider-picker.tsx and the discussions.availableProviders server procedure (which now has zero client consumers).
- Beyond UI: Run persona evaluations with Anthropic once credits are active, and continue work on the Rent-a-Persona API and RLS policies for persona_profiles.
This session marked a significant step forward in bringing consistency and efficiency to nyxCore's AI capabilities. By investing in shared components and tackling architectural debt, we're building a more robust and enjoyable platform for both our users and our development team.