From Ephemeral Insights to Enduring Records: Building a Robust AI Report System
We just shipped a major upgrade, transforming fleeting AI-generated insights into persistent, categorized, and interactive reports within our project dashboards. Discover the journey, the tech, and the lessons learned.
Every developer knows the thrill of a tool that just works. But what happens when those brilliant, AI-generated insights—like an automated code refactor suggestion or a workflow analysis—vanish after you close the tab? That's the problem we set out to solve this week.
Our goal was clear: take those powerful, ephemeral reports, give them a permanent home in our database, and then present them beautifully and functionally within each project's dashboard. This isn't just about storage; it's about creating a historical record, enabling collaboration, and making those AI insights truly actionable and reusable.
Giving AI Reports a Permanent Address
The core of this enhancement revolved around introducing a new Report model. We meticulously designed its schema to capture everything relevant, not just the raw content:
- `id`, `tenantId`, `userId`, `projectId`: Standard identifiers for ownership and context.
- `title`, `content`: The essence of the report, with `content` often holding rich Markdown.
- `type`: Crucial for categorization (e.g., "autofix", "refactor", "workflow").
- `style`: Think 'tone' or 'verbosity' – a badge to quickly identify the report's approach (e.g., `E` for 'Executive Summary', `S` for 'Standard', `M` for 'Detailed', `T` for 'Technical').
- `sourceId`, `provider`, `model`, `tokenUsage`, `costEstimate`: Vital metadata for understanding how the report was generated, its resource consumption, and its provenance.
- `personaId`, `personaName`: To track which AI persona (e.g., 'Security Expert', 'Refactor Guru') generated the report, adding another layer of context.
- `createdAt`: For chronological tracking.
```prisma
// Excerpt from prisma/schema.prisma
model Report {
  id           String      @id @default(cuid())
  tenantId     String
  userId       String
  projectId    String?     // Optional, for reports not tied to a specific project initially
  title        String
  content      String
  type         ReportType  // Enum: autofix, refactor, workflow
  style        ReportStyle // Enum: E, S, M, T
  sourceId     String?
  provider     String?
  model        String?
  tokenUsage   Int?
  costEstimate Float?
  personaId    String?
  personaName  String?
  createdAt    DateTime    @default(now())
  // ... relations to User, Project, Tenant
}
```
With the schema defined, we built out a dedicated tRPC router (src/server/trpc/routers/reports.ts) providing the necessary API endpoints:
- `list`: To fetch reports, filterable by `projectId` and `type`.
- `get`: To retrieve the full content of a specific report.
- `delete`: To remove reports, ensuring users have control over their data.
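The filtering behind the `list` endpoint boils down to something like the sketch below. This is an illustrative plain function, not our actual tRPC code; the field names mirror the schema above, but `listReports` and `ReportRow` are hypothetical names:

```typescript
// Illustrative core of the `list` endpoint: optional filters, newest first.
type ReportType = "autofix" | "refactor" | "workflow";

interface ReportRow {
  id: string;
  projectId: string | null; // nullable, matching the optional projectId in the schema
  type: ReportType;
  createdAt: Date;
}

function listReports(
  rows: ReportRow[],
  filter: { projectId?: string; type?: ReportType }
): ReportRow[] {
  return rows
    .filter((r) => filter.projectId === undefined || r.projectId === filter.projectId)
    .filter((r) => filter.type === undefined || r.type === filter.type)
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}
```

In the real router the same filters translate into a Prisma `where` clause, but the contract is the same: omit a filter and you get everything for the tenant, newest first.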
Weaving Persistence into the Workflow
The real magic happened when we integrated this persistence layer into our existing AI generation pipelines. Previously, generateReport mutations in our auto-fix, refactor, and workflows routers would simply return the generated content. Now, they also save it to the database, accepting an optional projectId to link the report directly to its context.
This means whether you're generating an AutoFix suggestion on a specific file, a refactor plan for a module, or a high-level workflow summary, that valuable output is now automatically archived.
A Transformed Reports Dashboard
The most visible change for our users comes in the revamped ReportsTab within each project dashboard. We completely rewrote this section to be a hub for all saved reports:
- Categorized Views: Reports are now elegantly grouped by their `type` (AutoFix, Refactor, Workflow).
- Rich Previews: Each report entry displays key metadata like its `style` badge, generation date, associated `persona`, and estimated `cost`.
- Interactive Viewer: Clicking a report opens a sleek Sheet panel, powered by a `MarkdownRenderer`, allowing users to view the full content. Crucially, this viewer also supports Mermaid charts, bringing diagrams and flowcharts to life directly within the report.
- Direct Control: A prominent "Delete" button accompanies each report, giving users immediate control.
- Seamless Interaction: We wired up auto-invalidation, so saving a new report or deleting an old one instantly refreshes the list, providing a fluid user experience.
- Generate New: A dedicated section to initiate new report generations from available completed runs, streamlining the report creation process.
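The categorized view above reduces to a simple group-by on the `type` field. As a minimal sketch (the function name is illustrative, not our component code):

```typescript
// Group reports by their `type` for the categorized dashboard sections.
type ReportType = "autofix" | "refactor" | "workflow";

function groupByType<T extends { type: ReportType }>(
  reports: T[]
): Record<ReportType, T[]> {
  // Start with an empty bucket per type so every section renders,
  // even when a category has no reports yet.
  const groups: Record<ReportType, T[]> = { autofix: [], refactor: [], workflow: [] };
  for (const r of reports) groups[r.type].push(r);
  return groups;
}
```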
This transformation provides not just a history, but a dynamic, interactive knowledge base for every project.
Broader Contextual Enhancements
This session wasn't just about reports. We also laid groundwork for richer project insights:
- An `AutoFix` tab on the project page now displays runs with severity badges, PR action items, and expandable details.
- We introduced `FINDING_FORMAT` for structured, per-finding reports (title, category, description, solution, code) and `MERMAID_GUIDANCE` to ensure our AI models generate beautiful diagrams. These lay the foundation for even more sophisticated and actionable AI outputs.
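To make the per-finding structure concrete, here is a hypothetical sketch of the shape `FINDING_FORMAT` implies and a renderer that turns one finding into the kind of Markdown our reports store. The `Finding` interface and `renderFinding` are illustrative names, not our actual implementation:

```typescript
// Shape implied by FINDING_FORMAT: title, category, description, solution, code.
interface Finding {
  title: string;
  category: string;
  description: string;
  solution: string;
  code?: string; // optional code snippet accompanying the fix
}

// Render one finding as a Markdown section, as stored in report `content`.
function renderFinding(f: Finding): string {
  const lines = [
    `### ${f.title}`,
    `**Category:** ${f.category}`,
    "",
    f.description,
    "",
    `**Suggested fix:** ${f.solution}`,
  ];
  // Use tilde fences for the snippet so nesting inside other Markdown is safe.
  if (f.code) lines.push("", "~~~", f.code, "~~~");
  return lines.join("\n");
}
```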
Navigating the Rapids: Lessons Learned
Even with a clear plan, development always presents its unique challenges. This session was no exception:
- The Elusive `embedding` Column: Our `db:push` command, while powerful, sometimes has a mind of its own when it comes to specific column types. It dropped our `embedding vector(1536)` column on the `workflow_insights` table. This is a recurring reminder that for critical, non-standard column types (like vector embeddings), having raw SQL handy for restoration or specific migration scripts is essential. It reinforces the need for vigilance when dealing with schema changes, especially with `--accept-data-loss`.
- Prisma's `db execute` vs. Raw `psql`: While `prisma db execute --stdin` is great for applying DDL, we discovered it doesn't output `SELECT` results. For quick data inspection or debugging queries, falling back to direct `psql` (`psql "postgresql://nyxcore:nyxcore_dev@localhost:5454/nyxcore"`) remains the most efficient way to see what's actually happening in the database. A classic case of knowing when to use the right tool for the job.
- Data Seeding and Integrity: We found that several repositories lacked `projectId` links. This necessitated a manual `UPDATE` to correctly associate them. This experience highlighted the importance of robust data seeding, migration scripts, or initial setup processes to ensure foundational data relationships are correctly established from the start, preventing manual fixes down the line.
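For the record, the kind of restoration snippet worth keeping handy for the dropped column looks roughly like this. This is a sketch assuming the `pgvector` extension is installed; the exact DDL should be checked against the real table definition:

```sql
-- Recreate the vector column dropped by db:push (assumes pgvector is installed)
ALTER TABLE workflow_insights
  ADD COLUMN IF NOT EXISTS embedding vector(1536);
```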
What's Next?
All the code is in, TypeScript is clean, the schema is pushed, and RLS (Row Level Security) is correctly applied to the reports table to ensure tenant isolation. The changes are staged and ready for commit.
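As a rough illustration of what tenant isolation on the `reports` table entails, a Postgres RLS policy has this general shape. The policy name, quoted column name, and the `app.tenant_id` session setting here are assumptions for the sketch, not our actual policy:

```sql
-- Hypothetical RLS sketch: only rows for the current tenant are visible
ALTER TABLE reports ENABLE ROW LEVEL SECURITY;
CREATE POLICY reports_tenant_isolation ON reports
  USING ("tenantId" = current_setting('app.tenant_id', true));
```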
Our immediate next steps are focused on thorough QA:
- Commit and push: Get these changes into the codebase.
- Generate & Verify: Create a new report from the Reports tab and confirm it saves and appears correctly.
- View & Inspect: Open a saved report, ensuring the Sheet viewer displays the full content, including any Mermaid charts.
- Delete & Confirm: Test the delete functionality to ensure reports disappear as expected.
- Standalone Generation: Verify that generating reports from standalone AutoFix/Refactor detail pages still works seamlessly, even without an explicit `projectId` being passed initially (they should still save).
This session marks a significant leap forward in making our AI-powered development tools more powerful, persistent, and user-friendly. We're excited about the historical context and enhanced collaboration this new reporting system will bring to every project!