Beyond the Default: Seamless LLM Provider & Model Switching Arrives in AutoFix & Refactor
We've just rolled out a significant update, empowering users to seamlessly switch between LLM providers and models directly within our AutoFix and Refactor pipelines. Dive into the details of how we built this flexible new UI!
The world of Large Language Models (LLMs) is evolving at lightning speed. New providers emerge, existing ones release more powerful or specialized models, and what was cutting-edge yesterday might be a baseline today. As developers, we understand the critical need for flexibility and choice when leveraging these powerful tools.
That's why we're thrilled to announce a major enhancement to our AutoFix and Refactor pipelines: you can now dynamically select your preferred LLM provider and model directly from the UI, without touching a single line of code!
Empowering Your Workflow: The New LLM Chooser
Our primary goal with this update was to put more control directly into your hands. Previously, switching LLM providers or models might have involved a dive into configuration files or even code. No more!
We've integrated a sleek new provider/model selector into the "New Scan" dialogs for both the AutoFix and Refactor features. Here’s what that means for your workflow:
- Intuitive Provider Selection: A prominent button group now allows you to easily switch between supported LLM providers like Anthropic, OpenAI, Google, Kimi, or even local Ollama instances.
- Flexible Model Input: Alongside the provider buttons, an optional text input allows you to specify a particular model. This is especially useful for experimenting with different versions or custom models offered by your chosen provider.
- Seamless Integration: These selectors are present in both the `src/app/(dashboard)/dashboard/auto-fix/page.tsx` and `src/app/(dashboard)/dashboard/refactor/page.tsx` "New Scan" dialogs, ensuring a consistent experience across our core LLM-powered features.
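To make the selector concrete, here is a minimal sketch of the kind of payload the "New Scan" dialog might assemble. The union type, the `NewScanConfig` shape, and the `buildScanConfig` helper are illustrative assumptions, not the exact types from the codebase:

```typescript
// Hypothetical types: the provider union and scan payload shape are
// assumptions based on this post, not the project's real definitions.
type LlmProvider = "anthropic" | "openai" | "google" | "kimi" | "ollama";

interface NewScanConfig {
  provider: LlmProvider;
  // Optional: omitted when the user leaves the model input blank,
  // letting the provider's default model apply.
  model?: string;
}

// Illustrative helper: only persist a model if the user actually typed one.
function buildScanConfig(provider: LlmProvider, modelInput: string): NewScanConfig {
  const trimmed = modelInput.trim();
  return trimmed ? { provider, model: trimmed } : { provider };
}
```

Keeping `model` optional rather than storing an empty string keeps the stored config unambiguous about whether the user made an explicit choice.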
Under the Hood: Smart Defaults & Dynamic Updates
Building a user-friendly interface isn't just about adding buttons; it's about anticipating user needs and providing intelligent defaults.
- Dynamic Model Placeholders: We've implemented a `DEFAULT_MODELS` map that updates the model input's placeholder text based on your selected provider. For instance, selecting "OpenAI" will suggest `gpt-4o-mini`, while "Anthropic" will hint at `claude-sonnet-4-20250514`. This guides you towards common and recommended models.
- Clean Slate on Switch: To ensure clarity and prevent accidental model selections, switching providers automatically clears the model input. This allows the new provider's default placeholder to take effect immediately, ready for your specific input.
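The two behaviors above can be sketched as plain functions. The map entries for OpenAI and Anthropic mirror the defaults mentioned in this post; the other entries and the names `onProviderChange` / `placeholderFor` are illustrative, not the actual implementation:

```typescript
// Defaults for OpenAI and Anthropic come from the post; the fallback
// string for other providers is an assumption.
const DEFAULT_MODELS: Record<string, string> = {
  anthropic: "claude-sonnet-4-20250514",
  openai: "gpt-4o-mini",
};

interface SelectorState {
  provider: string;
  model: string; // current value of the optional model text input
}

// Switching providers clears the model input so the new provider's
// default placeholder shows through immediately.
function onProviderChange(state: SelectorState, next: string): SelectorState {
  return { provider: next, model: "" };
}

function placeholderFor(provider: string): string {
  return DEFAULT_MODELS[provider] ?? "model name";
}
```

In a React page this would typically live in `useState` handlers, with `placeholderFor` feeding the input's `placeholder` prop.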
Clarity at a Glance: Knowing Your LLM's Origin
Once a scan is initiated, it's crucial to know exactly which LLM was at work. We've added clear visual indicators to help you track this:
- Detail Page Badges: On the individual detail pages (`auto-fix/[id]/page.tsx` and `refactor/[id]/page.tsx`), a new `Badge` displays the chosen LLM provider and model prominently in the header.
- Run List Labels: For quick identification, each run card in the main listing pages now includes a subtle, monospaced label (`text-[10px] font-mono`) indicating the LLM provider used for that specific execution.
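As a rough illustration of the badge text, a helper like the following could derive the label from a run's stored provider and model. `llmBadgeLabel` is a hypothetical name; the real pages render a `Badge` component directly:

```typescript
// Illustrative only: derive a display label such as
// "anthropic · claude-sonnet-4-20250514" from a run's stored config.
function llmBadgeLabel(provider: string, model?: string): string {
  return model ? `${provider} · ${model}` : provider;
}
```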
Graceful Evolution: Handling Legacy Data
What about existing runs that predate this feature? We've ensured a smooth transition:
- Backward Compatibility: Older runs in your history that don't explicitly have a `provider` specified in their `config` will gracefully default to showing "anthropic" as their provider. This ensures all your historical data remains readable and consistent.
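The fallback reduces to a single nullish-coalescing expression. A minimal sketch, assuming the `config` shape described in this post (`providerForRun` is an illustrative name):

```typescript
// Runs created before this feature have no provider in their config,
// so display them as "anthropic", the previous hard-coded provider.
function providerForRun(config: { provider?: string } | null): string {
  return config?.provider ?? "anthropic";
}
```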
Lessons Learned: Navigating Dynamic Data Types
While the implementation was generally smooth, we did encounter a common challenge inherent in working with flexible data schemas:
- Prisma's `Json?` Type: Our `config` field in Prisma is defined as `Json?`, allowing for flexible, unstructured data. While incredibly powerful, accessing nested properties from this field in TypeScript requires explicit type casting. For instance, we used `as Record<string, string> | null` to safely interact with the `config` object's internal structure, and `as unknown as { config?: Record<string, string> }` on list pages where the inferred type from `include` didn't immediately expose the `Json` internals. This served as a good reminder of the balance between schema flexibility and strong typing, and how careful casting is key.
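A minimal sketch of that casting pattern, with `RunRow` standing in for the Prisma result type (the field names are assumptions; only the cast itself comes from the post):

```typescript
// Stand-in for Prisma.JsonValue: a Json? column surfaces as an opaque
// value that TypeScript won't let you index without narrowing.
type JsonValue = unknown;

interface RunRow {
  id: string;
  config: JsonValue; // Json? column: may be null or any JSON shape
}

// Narrow the opaque Json value before reading keys, as described above.
function readProvider(run: RunRow): string | undefined {
  const config = run.config as Record<string, string> | null;
  return config?.provider;
}
```

A stricter alternative would be a runtime validator (e.g. a type guard or a schema library) instead of a bare cast, at the cost of extra code for a field this small.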
What's Next?
This feature is now live on main (commit aae220a). We've thoroughly tested the new UI and data flow to ensure everything works as expected, from selecting a provider to verifying the correct badges appear on detail pages.
We believe this update significantly enhances your ability to experiment, optimize, and leverage the best of the LLM ecosystem within our platform. Give it a try and let us know what you think!