Building Better Code Review UX: From Cards to Tables and the Art of Progressive Disclosure
How we transformed a clunky card-based review interface into a clean table layout and learned why sometimes less is more when it comes to showing information upfront.
Late evening development sessions often yield the most honest insights about what works and what doesn't in UI design. Last night was one of those sessions where I found myself completely rethinking how we display code review findings in our workflow dashboard.
The Problem: When Cards Don't Cut It
Our review interface was using a card-based layout to display key findings from code analysis. Each finding had a severity level, category, title, and various action buttons. Sounds reasonable, right?
The reality was messier. Here's what we were dealing with:
// The old approach - each card was independently laid out
<div className="space-y-4">
  {keyPoints.map(point => (
    <Card key={point.id}>
      <div className="flex items-center gap-3">
        <Badge variant={getSeverityVariant(point.severity)}>
          {point.severity}
        </Badge>
        <span className="text-sm text-muted-foreground">
          {point.category}
        </span>
        <h4 className="font-medium">{point.finding}</h4>
        {/* Action buttons */}
      </div>
    </Card>
  ))}
</div>
The problem? Nothing aligned. Severity badges had different widths ("Critical" vs "Low"), categories varied in length, and the visual hierarchy was all over the place. It looked more like a messy list than a scannable review interface.
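For context, the `getSeverityVariant` helper referenced above just maps a severity label to a badge style. A minimal sketch of what it might look like (the specific severity levels and variant names here are assumptions, not the original mapping):

```typescript
// Sketch: map a severity label to a shadcn/ui Badge variant.
// Severity levels and variant choices are illustrative assumptions.
type Severity = "Critical" | "High" | "Medium" | "Low";
type BadgeVariant = "destructive" | "default" | "secondary" | "outline";

const severityVariants: Record<Severity, BadgeVariant> = {
  Critical: "destructive", // red, attention-grabbing
  High: "default",
  Medium: "secondary",
  Low: "outline", // visually quiet
};

function getSeverityVariant(severity: Severity): BadgeVariant {
  return severityVariants[severity];
}
```

Note that the mapping changes the badge's *color*, not its width, which is exactly why "Critical" and "Low" rendered at different sizes and broke alignment.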
Solution: Embrace the Table (But Make It Pretty)
Sometimes the old ways are the right ways. Instead of fighting against the natural tabular structure of our data, I decided to lean into it using CSS Grid:
// The new approach - CSS Grid with explicit column sizing
<div className="grid grid-cols-[16px_72px_90px_1fr_auto] gap-3 items-center">
  <ChevronIcon />           {/* 16px - expansion indicator */}
  <Badge>{severity}</Badge> {/* 72px - enough for "Critical" */}
  <span>{category}</span>   {/* 90px - consistent category width */}
  <h4>{finding}</h4>        {/* 1fr - flexible title space */}
  <StatusIcon />            {/* auto - just what the icon needs */}
</div>
The magic is in those column definitions: grid-cols-[16px_72px_90px_1fr_auto]. By giving fixed widths to the predictable elements and flexible space to the content that varies, every row aligns perfectly.
The Art of Progressive Disclosure
The second major insight came from user feedback. Our AI analysis output was verbose and detailed – great for debugging, but it was drowning out the actual review interface. Users wanted to focus on the findings first, then dive into the analysis details if needed.
Enter progressive disclosure:
// Analysis section - collapsed by default
<Collapsible
  open={expandedPrompts[step.id + '-analysis'] ?? false}
  onOpenChange={(open) => togglePrompt(step.id + '-analysis', open)}
>
  <CollapsibleTrigger className="flex items-center gap-2 text-sm text-muted-foreground">
    <ChevronRight className="h-4 w-4" />
    Analysis Details
    <span className="text-xs">
      ({tokens} tokens, ${cost}, {duration}ms)
    </span>
  </CollapsibleTrigger>
  <CollapsibleContent>
    {/* Detailed analysis content */}
  </CollapsibleContent>
</Collapsible>
By defaulting to collapsed and showing key metadata (token count, cost, duration) in the header, users get just enough information to decide if they want to dig deeper.
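The `expandedPrompts` map and `togglePrompt` handler the snippet assumes are just a keyed boolean store. Here's a minimal sketch of that logic outside React (in the real component this would live in a `useState` hook; the shape is an assumption based on the snippet above):

```typescript
// Sketch of the expansion state the Collapsible reads from.
// In the component this would be React state; a plain object here
// keeps the toggle logic easy to see in isolation.
type ExpandedMap = Record<string, boolean>;

let expandedPrompts: ExpandedMap = {};

function togglePrompt(key: string, open: boolean): void {
  // Replace rather than mutate, mirroring an immutable state update.
  expandedPrompts = { ...expandedPrompts, [key]: open };
}

function isOpen(key: string): boolean {
  // Collapsed by default: unknown keys read as false,
  // matching the `?? false` fallback in the JSX.
  return expandedPrompts[key] ?? false;
}
```

Keying by `step.id + '-analysis'` means each step's analysis section expands independently, so opening one never disturbs the others.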
Lessons Learned: When Flexibility Fights You
The journey to this solution wasn't a straight line. Here are the key lessons:
Flex vs Grid: Know Your Tool
What I tried first: Using flexbox with various justify and align properties to create consistent spacing.
Why it failed: Flex is great for one-dimensional layouts, but when you need consistent alignment across multiple rows, CSS Grid is your friend. Each flex container was making independent decisions about spacing.
The fix: CSS Grid with explicit column definitions gives you the control you need for tabular data.
Default States Matter
What I tried first: Showing all analysis details expanded by default – more information is better, right?
Why it failed: Information overload. The primary interaction (reviewing and acting on findings) was buried under walls of AI analysis text.
The fix: Progressive disclosure with smart defaults. Show enough context to be useful, hide complexity until requested.
The Iteration Loop
One of the most satisfying parts of this interface is how it enables iterative improvement. Each finding can be individually recreated using our recreateFromKeyPoint mutation:
const handleRecreateItem = async (keyPointId: string) => {
  await recreateFromKeyPoint({
    workflowId,
    keyPointId,
    mode: "recreate_with_hints",
  });
};
This triggers a targeted regeneration of just that part of the code, followed by a fresh review cycle. It's the kind of tight feedback loop that makes code review feel more like a conversation than a judgment.
What's Next
The interface is working well, but there's always room for improvement:
- Component extraction - The workflow detail page is approaching 1650 lines. Time to break out that review panel into its own component.
- Type consolidation - We're still duplicating the ReviewKeyPoint type between server and client code. Shared types file, here we come.
- Real-world testing - Ten backfilled key points is a good start, but how does this scale to 50? 100?
The Bigger Picture
This wasn't just about making a prettier interface. It was about understanding how information hierarchy affects user behavior. When you're reviewing code, you want to scan quickly, identify issues, and take action. Every pixel that doesn't serve that goal is working against you.
Sometimes the best UI improvements come from stepping back and asking: "What is the user actually trying to do here?" In our case, they weren't trying to read AI analysis – they were trying to ship better code. Everything else should get out of the way.
Building developer tools is equal parts engineering and empathy. The code has to work, but it also has to feel right. Late-night refactoring sessions like this remind me why both matter.