Persisting AI Insights: Bringing AutoFix, Refactor, and Workflow Reports to Life
We tackled the challenge of making AI-generated development insights persistent and actionable, building out a robust system for saving, viewing, and managing AutoFix, Refactor, and Workflow reports right within our project dashboard.
AI-powered development tools are incredible. They can suggest refactors, identify bugs, and even automate complex workflows. But often, these brilliant insights are ephemeral – generated, displayed, and then gone. How do you track them? How do you revisit a particularly useful suggestion from last week? How do you integrate them into a persistent workflow?
That was the core problem we set out to solve in a recent development sprint. Our goal: take the intelligent reports generated by our AutoFix, Refactor, and Workflow modules, persist them to a database, and then display them in a user-friendly, project-specific dashboard with full viewing and deletion capabilities.
It was a session packed with schema changes, API integrations, UI overhauls, and a few classic database gotchas. Here's how we did it, and what we learned along the way.
Building the Foundation: A Home for Our Reports
The first step, as always, was defining our data model. We needed a Report entity that could capture all the rich metadata associated with an AI-generated insight. This wasn't just about saving the content, but about understanding its context, origin, and even its cost.
Here's a glimpse of our Report model in prisma/schema.prisma:
model Report {
  id           String      @id @default(cuid())
  tenantId     String
  userId       String?
  projectId    String?
  title        String
  content      String      @db.Text
  type         ReportType  // "autofix", "refactor", "workflow"
  style        ReportStyle // e.g., "brief", "detailed", "technical"
  sourceId     String?     // ID of the specific AutoFix run, Refactor session, etc.
  provider     String?     // e.g., "openai", "anthropic"
  model        String?     // e.g., "gpt-4-turbo", "claude-3-opus"
  tokenUsage   Int?
  costEstimate Decimal?    @db.Decimal(10, 4)
  personaId    String?
  personaName  String?
  createdAt    DateTime    @default(now())

  // Reverse relations for easy lookup
  project Project? @relation(fields: [projectId], references: [id])
  tenant  Tenant   @relation(fields: [tenantId], references: [id])
  user    User?    @relation(fields: [userId], references: [id])

  @@index([projectId])
  @@index([tenantId])
}

enum ReportType {
  autofix
  refactor
  workflow
}

enum ReportStyle {
  brief
  standard
  detailed
  technical
  // ... more styles as needed
}
Notice the tenantId, userId, and projectId for multi-tenancy and context, along with type, style, sourceId, provider, model, tokenUsage, and costEstimate. These fields are crucial for future analytics, filtering, and understanding the value and origin of each report.
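As a rough illustration of how tokenUsage and costEstimate relate (a sketch only; the rate below is a made-up placeholder, not a real provider price, and real pricing typically splits input and output tokens):

```typescript
// Hypothetical sketch: derive a costEstimate (as stored on the Report row)
// from tokenUsage and a per-1K-token rate. The rate is a placeholder.
function estimateCost(tokenUsage: number, ratePer1kTokens: number): number {
  const raw = (tokenUsage / 1000) * ratePer1kTokens;
  // Match the Decimal(10, 4) column by rounding to 4 decimal places.
  return Math.round(raw * 10_000) / 10_000;
}

// Example: 2,500 tokens at a hypothetical $0.02 per 1K tokens.
const cost = estimateCost(2_500, 0.02);
```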
The API Layer: tRPC to the Rescue
With the schema defined, we built out the necessary API endpoints using tRPC. Our src/server/trpc/routers/reports.ts now includes:
- list: To fetch reports, filterable by projectId and type.
- get: To retrieve the full content of a specific report.
- delete: To remove a report.
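A minimal sketch of the filtering behind list (the helper name and input shape are assumptions for illustration; the real router builds its query inside the tRPC procedure):

```typescript
// Hypothetical helper: build a Prisma-style `where` clause for the reports
// `list` query from its optional filters. Both filters are optional, so an
// unfiltered call returns all of the tenant's reports.
type ReportType = "autofix" | "refactor" | "workflow";

interface ListFilters {
  projectId?: string;
  type?: ReportType;
}

function buildReportsWhere(tenantId: string, filters: ListFilters) {
  return {
    tenantId, // always scope to the caller's tenant
    ...(filters.projectId ? { projectId: filters.projectId } : {}),
    ...(filters.type ? { type: filters.type } : {}),
  };
}
```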
The real magic happened when we integrated this persistence into our existing AI generation flows. We updated the generateReport mutations within our auto-fix.ts, refactor.ts, and workflows.ts routers. Now, after the AI successfully generates its output, that output (along with all its rich metadata) is saved directly to the Report table. Each mutation automatically assigns the correct type ("autofix", "refactor", or "workflow") and accepts an optional projectId to link the report to a specific project context.
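Conceptually, each mutation assembles a row like the one below before handing it to Prisma (buildReportRecord and the GenerationResult shape are illustrative assumptions, not our actual code):

```typescript
// Hypothetical sketch of assembling the Report row saved after an AI run.
// Field names mirror the Prisma model above.
type ReportType = "autofix" | "refactor" | "workflow";

interface GenerationResult {
  title: string;
  content: string;
  style: string;
  provider?: string;
  model?: string;
  tokenUsage?: number;
  costEstimate?: number;
}

function buildReportRecord(
  result: GenerationResult,
  ctx: { tenantId: string; userId?: string },
  type: ReportType,   // each mutation hard-codes its own type
  projectId?: string, // optional project context
  sourceId?: string,  // ID of the AutoFix run / Refactor session, etc.
) {
  return {
    tenantId: ctx.tenantId,
    userId: ctx.userId ?? null,
    projectId: projectId ?? null,
    sourceId: sourceId ?? null,
    type,
    ...result,
  };
}
```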
Bringing it to Life: The Project Reports Tab
The frontend needed a significant overhaul to display these new persistent reports. We completely rewrote the ReportsTab component in src/app/(dashboard)/dashboard/projects/[id]/page.tsx.
Now, when you navigate to a project's reports tab, you're greeted with:
- Grouped Reports: Reports are intuitively grouped by their type (AutoFix, Refactor, Workflow).
- Rich Metadata Display: Each report entry shows key details like its style badge (e.g., "Standard", "Technical"), generation date, the persona used, and an estimated cost.
- Interactive Viewer: Clicking on a report opens a Sheet panel, rendering the full report content using our MarkdownRenderer. This is particularly exciting because it means our MERMAID_GUIDANCE for generating diagrams now comes to life right within the report viewer!
- Deletion Capability: A dedicated delete button for each report ensures users can manage their saved insights.
- New Report Generation: We've also included a "Generate New Report" section, allowing users to initiate new AI runs directly from this tab, leveraging available completed runs as context.
- Dynamic Updates: Crucially, the report list auto-invalidates and refreshes after a report is saved or deleted, ensuring the UI is always up-to-date.
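The grouping in the first bullet boils down to a single pass over the fetched reports; a sketch (names are illustrative, not our exact component code):

```typescript
// Hypothetical sketch of grouping fetched reports by type for the
// ReportsTab, so each section (AutoFix, Refactor, Workflow) renders its
// own list.
type ReportType = "autofix" | "refactor" | "workflow";

interface ReportSummary {
  id: string;
  title: string;
  type: ReportType;
}

function groupReportsByType(
  reports: ReportSummary[],
): Record<ReportType, ReportSummary[]> {
  const groups: Record<ReportType, ReportSummary[]> = {
    autofix: [],
    refactor: [],
    workflow: [],
  };
  for (const report of reports) {
    groups[report.type].push(report);
  }
  return groups;
}
```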
We also enhanced other tabs earlier in the session, adding an AutoFix tab with severity badges and PR action items, further integrating these AI insights into the developer workflow.
The Unsung Hero: Row-Level Security (RLS)
For any multi-tenant application, security is paramount. We applied Row-Level Security (RLS) to our new reports table. This ensures that a user can only ever access reports that belong to their tenantId.
The RLS policy is straightforward but critical:
ALTER TABLE reports ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation_policy ON reports
  USING (tenant_id = current_setting('app.tenant_id')::text);

ALTER TABLE reports FORCE ROW LEVEL SECURITY;
This policy, stored in prisma/rls.sql and applied on deployment, uses a current_setting to dynamically filter rows based on the active tenant, preventing data leakage between organizations.
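How does current_setting('app.tenant_id') get its value? A common pattern (a sketch, not necessarily our exact implementation) is to issue a SET LOCAL at the start of each request's transaction, so the setting is scoped to that transaction and concurrent requests for other tenants are unaffected. A helper to build that statement safely might look like:

```typescript
// Hypothetical helper: produce the SET LOCAL statement that populates
// current_setting('app.tenant_id') for the current transaction.
// Single quotes are doubled so the literal stays valid SQL.
function setTenantSql(tenantId: string): string {
  const escaped = tenantId.replace(/'/g, "''");
  return `SET LOCAL app.tenant_id = '${escaped}'`;
}
```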
Lessons Learned: Navigating the Trenches
No development session is complete without its challenges, and this one had a few doozies. Turning these "pains" into "lessons" is key:
- Prisma and Custom Column Types: We hit a snag where db:push dropped our embedding vector(1536) column on the workflow_insights table. This happens when Prisma doesn't fully understand a custom type (like vector for embeddings) during schema migrations, especially when using --accept-data-loss.
  - Lesson: For custom database types, or when db:push behaves unexpectedly, always have a raw SQL fallback. We restored the column manually. In the future, we'll look into custom migration scripts for these edge cases, or into ensuring Prisma client extensions properly handle such types.
- Debugging prisma db execute: When trying to verify data or RLS policies, prisma db execute --stdin doesn't display SELECT query output.
  - Lesson: For interactive SQL queries and debugging, nothing beats direct psql. Don't fight the tools; use the right one for the job.
- Data Seeding and Relationships: Many of our existing repositories didn't have projectId set, requiring manual UPDATE statements to link them up.
  - Lesson: As your application grows, robust data seeding and migration scripts become essential. Plan for these relationships from the start, especially when introducing new foreign keys to existing data. Automating initial data setup (e.g., linking existing repos to a default project) would have saved time.
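For reference, the manual restore of the dropped column amounts to a single statement along these lines (raw SQL, since Prisma couldn't express the pgvector type; assumes the pgvector extension is already installed):

```sql
-- Restore the pgvector column that db:push dropped.
ALTER TABLE workflow_insights
  ADD COLUMN IF NOT EXISTS embedding vector(1536);
```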
What's Next? Solidifying the Wins
With the core functionality in place, our immediate next steps are all about quality assurance and consolidation:
- Commit and Push: Get these changes into version control!
- QA - Reports Tab: Generate a report from the Reports tab; ensure it saves and appears correctly.
- QA - Viewer: Verify clicking a saved report opens the Sheet viewer, with full content and Mermaid charts rendering properly.
- QA - Delete: Confirm deleting a report removes it from the list.
- QA - Standalone Generation: Test generating reports from standalone AutoFix/Refactor detail pages to ensure they still work, even without explicitly passing a projectId.
Conclusion
Making AI-generated insights persistent and manageable is a game-changer for developer tools. By carefully designing our data model, integrating with our existing API layer, and building an intuitive frontend, we've transformed ephemeral suggestions into actionable, trackable assets. The journey wasn't without its bumps, but the lessons learned around schema migrations, debugging, and data integrity will undoubtedly make our next sprints smoother.
Now, our development teams can truly leverage the power of AI, not just as a fleeting suggestion, but as an integrated, evolving part of their project's knowledge base.
{
"thingsDone": [
"Added Report model to Prisma schema with rich metadata",
"Created tRPC reports router (list, get, delete)",
"Integrated report persistence into AutoFix, Refactor, and Workflow generation mutations",
"Rewrote Project ReportsTab to display, view, and delete saved reports",
"Implemented RLS for tenant isolation on the reports table",
"Linked existing repositories to projects in the database"
],
"pains": [
"Prisma db:push dropped custom 'embedding' column, requiring raw SQL restore",
"prisma db execute --stdin does not show SELECT output, necessitating direct psql",
"Manual database updates required to link existing repositories to projects due to missing projectId"
],
"successes": [
"Achieved full persistence for AI-generated reports",
"Created a dynamic and interactive UI for report management",
"Ensured multi-tenant security with RLS",
"Successfully integrated new features into existing generation flows",
"Enabled rich report viewing with Markdown and Mermaid chart support"
],
"techStack": [
"Next.js",
"tRPC",
"Prisma",
"PostgreSQL",
"TypeScript",
"React",
"SQL (RLS)",
"AI/Generative AI (conceptual)"
]
}