Leveling Up Our AI Dev Workflow: Choice, Clarity, and Closed-Loop Learning
A deep dive into our recent dev sprint, covering UI enhancements for LLM provider selection, tackling tricky state synchronization bugs, and kicking off our journey towards a self-improving AI development pipeline.
The world of AI-powered development tools is evolving at a breathtaking pace, and so are we. In our latest sprint, we tackled two exciting challenges: empowering users with more control over their AI models and laying the groundwork for a truly intelligent, self-improving development pipeline.
This post will walk you through the key achievements, the unexpected hurdles we overcame, and the ambitious next steps for our AutoFix and Refactor systems.
Empowering Choice: The LLM Provider & Model Selector
One of the most frequent requests we've heard is for more flexibility in choosing which Large Language Model (LLM) powers our AutoFix and Refactor tools. Different models excel at different tasks, come with varying cost structures, and sometimes, you just want to experiment!
Our primary goal for this session was to integrate a robust provider and model chooser directly into our AutoFix and Refactor dialogs. Here's what we shipped:
- Intuitive UI for Selection: We added a dedicated `LLM_PROVIDERS` button group and a model input field to the `auto-fix/page.tsx` and `refactor/page.tsx` dialogs. This gives users immediate visibility and control over their choice.
- Clear Status Indicators: To ensure transparency, we integrated a provider/model badge into the detail headers of `auto-fix/[id]/page.tsx` and `refactor/[id]/page.tsx`. Now, at a glance, you can see exactly which AI brain is powering your current operation.
- Run List Visibility: On the main listing pages, each run card now prominently displays the provider label, making it easy to track and compare results from different models.
- Seamless Backend Integration: All these frontend selections are now correctly passed to our tRPC `start` mutations. Our backend was already primed to accept these parameters, making the integration smooth.
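To make the hand-off concrete, here is a minimal sketch of guarding the user's selection before it reaches the `start` mutation. The `LLM_PROVIDERS` values, the `toStartInput` helper, and the input shape are illustrative assumptions, not our actual schema.

```typescript
// Hypothetical provider list; the real LLM_PROVIDERS constant may differ.
const LLM_PROVIDERS = ["openai", "anthropic", "google"] as const;
type Provider = (typeof LLM_PROVIDERS)[number];

// Assumed shape of the tRPC start-mutation input.
interface StartInput {
  provider: Provider;
  model: string;
}

// Validate the raw UI selection before handing it to the mutation.
function toStartInput(provider: string, model: string): StartInput {
  if (!(LLM_PROVIDERS as readonly string[]).includes(provider)) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  return { provider: provider as Provider, model };
}
```

Validating on the frontend keeps bad selections from ever reaching the backend, while the tRPC input schema still acts as the final gate.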
This feature significantly enhances the user experience, giving developers the power to tailor their AI assistance to their specific needs and preferences.
Navigating the Nuances: Lessons from State Management
Even with the best planning, development always throws a curveball or two. This session's came in the form of a subtle but frustrating state synchronization bug.
The Case of the Desynchronized Phases
Our AutoFix and Refactor runs progress through various phases (e.g., "scanning", "detecting", "improving"). Initially, we set the `currentPhase` state using `useState<RefactorPhase>("scan")`. This seemed logical: start at the beginning.
However, we quickly discovered a problem:
- If a user started a run, navigated away (e.g., to another tab or page), and then returned to the active run's detail page, the UI would often show the wrong phase, reverting to an earlier phase like "detecting" instead of the run's actual current phase, such as "improving". This created confusion and a broken user experience.
The useEffect Lifeline
The fix, a common pattern in React, involved leveraging the useEffect hook. We implemented a mechanism that:
- Reads the actual `run.status` from our database query on component mount.
- Uses a `statusToPhase` map to translate the backend status into the correct frontend phase.
- Sets the `currentPhase` state using this mapped value.
```tsx
// Example snippet (simplified for blog).
// statusToPhase maps backend run statuses to frontend phase values.
useEffect(() => {
  if (run?.status) {
    setCurrentPhase(statusToPhase[run.status]);
  }
}, [run?.status]);
```
This ensures that whenever a detail page loads or run.status changes, the UI accurately reflects the true state of the operation. Crucially, our Server-Sent Events (SSE) stream still overrides this local state as new events arrive, providing real-time updates without conflict. This subtle interplay of initial state, database sync, and real-time updates is vital for robust reactive applications.
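The precedence rule described above can be captured as a tiny pure function. This is a simplified sketch, assuming SSE events arrive as an ordered array of phase values; the names and types are illustrative, not our production code.

```typescript
// Assumed phase values; the real RefactorPhase union may differ.
type Phase = "scan" | "detect" | "improve" | "done";

// The latest SSE event wins; otherwise fall back to the phase
// derived from run.status via the statusToPhase mapping.
function resolvePhase(dbPhase: Phase, sseEvents: Phase[]): Phase {
  return sseEvents.length > 0 ? sseEvents[sseEvents.length - 1] : dbPhase;
}
```

Keeping this resolution logic pure makes the "DB sync first, SSE overrides later" behavior easy to unit-test in isolation from React.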
The Json? Casting Conundrum
Another minor but persistent challenge involved working with `Json?` fields in Prisma. While incredibly flexible for storing arbitrary JSON data, accessing nested properties often requires explicit type casting (e.g., `as Record<string, string> | null` or `as unknown as`). This is a good reminder that while schemaless fields offer flexibility, they introduce a small amount of type juggling in a strongly-typed environment like TypeScript.
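One way to tame the casting is a small narrowing helper instead of scattering `as` assertions. The sketch below is a minimal example under the assumption that the field holds a JSON object; `getStringField` is a hypothetical helper, not part of Prisma.

```typescript
// Structural equivalent of Prisma's JSON value type.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

// Safely extract a string property from an untyped Json? value,
// returning null instead of throwing when the shape doesn't match.
function getStringField(value: Json | null, key: string): string | null {
  if (value && typeof value === "object" && !Array.isArray(value)) {
    const field = (value as Record<string, Json>)[key];
    return typeof field === "string" ? field : null;
  }
  return null;
}
```

Centralizing the narrowing in one place keeps the `as` casts out of component code and makes the null-handling explicit.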
The Road Ahead: Towards a Self-Learning System
Beyond immediate user features, a significant part of this session was dedicated to kicking off an ambitious long-term goal: building a closed-loop learning system. Imagine an AI development pipeline that not only fixes and refactors your code but learns from every run, continuously improving its suggestions and understanding of your codebase.
We've officially created the learning-loop team and are moving into the research phase. The vision is to feed pipeline findings (e.g., successful fixes, failed refactors, performance metrics) back into a memory system that can then inform future AI prompts, making our tools smarter and more context-aware.
Our immediate next steps for this exciting journey include:
- Researching Insight & Memory Architecture: How do we store and retrieve meaningful "lessons learned"?
- Designing Pipeline Data Flow: How do we extract relevant data from each `AutoFix` and `Refactor` run?
- Implementing Insight Extraction: Building the mechanisms to turn raw run data into actionable insights.
- Implementing Insight Injection: Figuring out how to dynamically modify AI prompts based on these insights.
- End-to-End Verification: Ensuring the entire loop functions flawlessly and delivers measurable improvements.
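To give a flavor of the insight-extraction step, here is a deliberately simple sketch. Every name here (`RunRecord`, `Insight`, `extractInsights`, and the failure-only heuristic) is a hypothetical illustration from the research phase, not the real learning-loop schema.

```typescript
// Assumed minimal record of a completed pipeline run.
interface RunRecord {
  tool: "AutoFix" | "Refactor";
  succeeded: boolean;
  summary: string;
}

// An extracted lesson that could later be injected into AI prompts.
interface Insight {
  tool: string;
  lesson: string;
}

// One possible heuristic: mine failed runs for things to avoid next time.
function extractInsights(runs: RunRecord[]): Insight[] {
  return runs
    .filter((r) => !r.succeeded)
    .map((r) => ({ tool: r.tool, lesson: `Avoid: ${r.summary}` }));
}
```

A real implementation would also mine successful runs and attach richer context, but even this toy version shows the shape of the loop: structured run data in, prompt-ready lessons out.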
Wrapping Up
This sprint was a blend of immediate user value and foundational work for the future. We've put more power into developers' hands with the LLM provider selector, solidified our UI's reliability by tackling state synchronization, and embarked on the exciting path towards a truly self-improving AI development assistant.
Stay tuned as we continue to push the boundaries of what's possible in AI-assisted development!