From Ephemeral Insights to Persistent Knowledge: Leveling Up Our AI-Generated Reports
Discover how we transformed our AI-generated reports from transient outputs into a robust, persistent system, complete with database storage, a sleek UI, and valuable lessons learned along the way.
In the fast-paced world of AI-powered development, generating insightful reports is one thing; making those insights accessible, manageable, and persistent is another challenge entirely. We recently tackled this head-on, moving our AI-generated reports from fleeting outputs to a fully integrated, persistent system. This post walks through the journey: how we built our new report management capabilities, the technical decisions behind them, and the lessons we learned.
The Vision: Persistent & Actionable Insights
Our goal was clear: empower users to generate reports (whether from AutoFix suggestions, Refactor analyses, or Workflow executions), save them to the database, and then easily view, categorize, and manage these reports within their project dashboard. No more losing valuable context after a page refresh or having to re-run an analysis just to recall a past finding.
This meant building a robust backend for persistence, an efficient API for retrieval and management, and an intuitive frontend for display and interaction.
Building the Foundation: The Report Model
The first step was defining how a report would live in our database. We introduced a new Report model in our Prisma schema, designed to capture all essential metadata alongside the report's content.
```prisma
model Report {
  id           String      @id @default(uuid())
  tenantId     String
  userId       String
  projectId    String?
  title        String
  content      String      @db.Text
  type         ReportType  // Enum: AutoFix, Refactor, Workflow
  style        ReportStyle // Enum: Elegant, Simple, Markdown, Technical
  sourceId     String?     // ID of the original run/source
  provider     String?     // e.g., "OpenAI"
  model        String?     // e.g., "gpt-4-turbo"
  tokenUsage   Int?
  costEstimate Float?
  personaId    String?
  personaName  String?
  createdAt    DateTime    @default(now())

  tenant  Tenant   @relation(fields: [tenantId], references: [id])
  user    User     @relation(fields: [userId], references: [id])
  project Project? @relation(fields: [projectId], references: [id])

  @@index([projectId, type])
}

enum ReportType {
  AUTOFIX
  REFACTOR
  WORKFLOW
}

enum ReportStyle {
  ELEGANT
  SIMPLE
  MARKDOWN
  TECHNICAL
}
```
This model provides a rich structure: we can store not just the report content but also categorize it, track its origin, estimate its cost, and link it directly to specific projects, users, and tenants.
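To make the `tokenUsage`/`costEstimate` pair concrete, here is a minimal sketch of how a cost estimate might be derived at save time. The price table is a hypothetical placeholder, not real model pricing:

```typescript
// Hypothetical pricing table; real per-token prices differ and change.
const PRICE_PER_1K_TOKENS = new Map<string, number>([
  ["gpt-4-turbo", 0.01], // illustrative USD per 1K tokens
]);

// Derive costEstimate from the model name and tokenUsage; returns null
// (stored as a null costEstimate) when the model's price is unknown.
function estimateCost(model: string, tokenUsage: number): number | null {
  const perK = PRICE_PER_1K_TOKENS.get(model);
  if (perK === undefined) return null;
  return (tokenUsage / 1000) * perK;
}

console.log(estimateCost("gpt-4-turbo", 1000)); // logs 0.01
console.log(estimateCost("unknown-model", 1000)); // logs null
```

Keeping the estimate nullable means an unpriced model simply leaves `costEstimate` empty rather than storing a misleading zero.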
API & Integration: Connecting the Dots
With the database model in place, we exposed the necessary functionality through our tRPC API:
- `reports.list`: A flexible query to fetch saved reports, filterable by `projectId` and `type`.
- `reports.get`: Retrieves the full content of a specific report.
- `reports.delete`: Removes reports that are no longer needed.
```ts
// src/server/trpc/routers/reports.ts
export const reportsRouter = createTRPCRouter({
  list: protectedProcedure
    .input(z.object({
      projectId: z.string().optional(),
      type: z.nativeEnum(ReportType).optional(),
    }))
    .query(async ({ ctx, input }) => { /* ... fetch reports ... */ }),
  get: protectedProcedure
    .input(z.object({ id: z.string() }))
    .query(async ({ ctx, input }) => { /* ... fetch single report ... */ }),
  delete: protectedProcedure
    .input(z.object({ id: z.string() }))
    .mutation(async ({ ctx, input }) => { /* ... delete report ... */ }),
});
```
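To show what the elided `list` resolver boils down to, here is a self-contained, in-memory analogue of its filtering logic. The real resolver hands an equivalent `where` clause to Prisma; the shapes and data below are illustrative:

```typescript
// In-memory analogue of reports.list: tenant scoping plus optional
// projectId/type filters, mirroring the Prisma where clause.
type ReportType = "AUTOFIX" | "REFACTOR" | "WORKFLOW";

interface ReportRow {
  id: string;
  tenantId: string;
  projectId?: string;
  type: ReportType;
  title: string;
}

function listReports(
  rows: ReportRow[],
  tenantId: string,
  filter: { projectId?: string; type?: ReportType }
): ReportRow[] {
  return rows.filter(
    (r) =>
      r.tenantId === tenantId && // tenant scoping, as RLS also enforces
      (filter.projectId === undefined || r.projectId === filter.projectId) &&
      (filter.type === undefined || r.type === filter.type)
  );
}

// Illustrative data: two tenants, mixed report types.
const rows: ReportRow[] = [
  { id: "1", tenantId: "t1", projectId: "p1", type: "AUTOFIX", title: "Fix run" },
  { id: "2", tenantId: "t1", projectId: "p2", type: "REFACTOR", title: "Refactor pass" },
  { id: "3", tenantId: "t2", projectId: "p1", type: "AUTOFIX", title: "Other tenant" },
];

// Only tenant t1's AutoFix report survives the filter.
console.log(listReports(rows, "t1", { type: "AUTOFIX" }).map((r) => r.id));
```

An omitted filter field simply widens the query, which is why both inputs are optional in the router's Zod schema.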
The real magic came from integrating the persistence logic directly into our existing report generation mutations. Now, when an AutoFix, Refactor, or Workflow report is generated, it's automatically saved to the database with its respective type and an optional `projectId`. This ensures a seamless experience where generating a report also means saving it for future reference.
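The wiring itself is simple enough to sketch. Below, `generate` and `save` are injected stand-ins with hypothetical signatures for the real model call and the Prisma `report.create`, which keeps the save-on-generate flow testable in isolation:

```typescript
// Sketch of the save-on-generate flow: generating a report always
// persists it, so the user never has a separate "save" step.
type Draft = { title: string; content: string };
type SaveInput = Draft & { type: string; projectId?: string };

async function generateAndPersist(
  generate: () => Promise<Draft>,
  save: (input: SaveInput) => Promise<string>, // returns the new report id
  type: string,
  projectId?: string
): Promise<string> {
  const draft = await generate();
  // Persistence is part of generation; projectId stays optional for
  // standalone (non-project) generation paths.
  return save({ ...draft, type, projectId });
}

// Usage with in-memory stand-ins:
const saved: SaveInput[] = [];
generateAndPersist(
  async () => ({ title: "AutoFix summary", content: "# Findings" }),
  async (input) => {
    saved.push(input);
    return `report-${saved.length}`;
  },
  "AUTOFIX",
  "project-1"
).then((id) => console.log(id)); // logs "report-1"
```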
Bringing it to Life: The Project Reports Tab
The frontend was crucial for making these saved reports truly useful. We completely rewrote the ReportsTab within our project dashboard (src/app/(dashboard)/dashboard/projects/[id]/page.tsx).
The new tab now:
- Groups reports by type: AutoFix, Refactor, and Workflow reports are logically organized.
- Displays rich metadata: Each report entry shows its style badge (e.g., Elegant, Simple), date, persona, and estimated cost.
- Provides a viewer: Clicking on a report opens a Sheet panel, rendering the full Markdown content, including support for Mermaid diagrams!
- Enables easy deletion: A delete button is available for each report.
- Offers "Generate New Report": Users can easily kick off new reports based on available completed runs.
- Auto-invalidates: The report list automatically refreshes after a save or delete operation, ensuring the UI is always up-to-date.
This transformation turns the Reports tab into a central hub for all project-related AI-generated insights.
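Under the hood, the auto-invalidation behavior is just a contract: any mutation that changes the report set marks the cached list stale, so the next read refetches. Here is a toy, self-contained model of that contract; in the real app, tRPC's React Query integration provides it for us:

```typescript
// Toy model of invalidate-on-mutate: a stale flag forces a refetch
// on the next read, which is all "auto-invalidation" amounts to.
class QueryCache<T> {
  private data: T | undefined;
  private stale = true;

  constructor(private fetcher: () => T) {}

  read(): T {
    if (this.stale || this.data === undefined) {
      this.data = this.fetcher(); // refetch on stale read
      this.stale = false;
    }
    return this.data;
  }

  invalidate(): void {
    this.stale = true;
  }
}

// Usage: deleting a report invalidates the list so the UI refetches.
let reports = ["r1", "r2"];
const listCache = new QueryCache(() => [...reports]);

console.log(listCache.read().length); // logs 2
reports = reports.filter((id) => id !== "r1"); // the delete mutation
listCache.invalidate(); // mutation onSuccess: invalidate the list
console.log(listCache.read().length); // logs 1
```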
Beyond the Basics: Richer Reports
Even before embarking on this persistence journey, we'd been enhancing the quality and structure of our reports.
- We introduced `FINDING_FORMAT` for structured, per-finding reports, allowing us to break down complex analyses into actionable titles, categories, descriptions, solutions, and code snippets.
- For visual learners, we added `MERMAID_GUIDANCE`, enabling our AI models to generate Mermaid charts directly within reports, bringing diagrams to life.

These earlier enhancements made the persisted reports even more valuable.
Navigating the Trenches: Lessons Learned
No development session is without its challenges. Here are a few critical lessons we picked up along the way:
- Database Migration Woes (`embedding` column): During a `db:push` with Prisma, the `embedding vector(1536)` column on our `workflow_insights` table was unexpectedly dropped. While `--accept-data-loss` is convenient, it's a stark reminder to always be cautious and have a recovery plan. We had to restore the column manually via raw SQL, emphasizing the importance of understanding underlying database operations even when using ORMs.

  ```sql
  ALTER TABLE workflow_insights ADD COLUMN embedding vector(1536);
  ```

- Prisma `db execute` Limitations: When debugging database issues, `prisma db execute --stdin` is handy for running SQL. However, it doesn't display `SELECT` query output. For querying data directly, reverting to `psql` (or your database client of choice) is often more efficient:

  ```bash
  psql "postgresql://nyxcore:nyxcore_dev@localhost:5454/nyxcore"
  ```

- Data Consistency & Linking: We discovered that many of our existing repositories lacked `projectId` links. This required a manual `UPDATE` query to associate them with their respective projects. This highlights the need for either robust initial data seeding or automated linking mechanisms to maintain data integrity across related entities.
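For illustration, the repository backfill took roughly this shape. The table and column names in the match condition are assumptions about the schema, so verify them against your own before running anything similar:

```sql
-- Hypothetical shape of the projectId backfill; the join condition
-- (matching repositories to projects) must be verified per schema.
UPDATE repositories r
SET "projectId" = p.id
FROM projects p
WHERE r."projectId" IS NULL
  AND p."repoUrl" = r.url;
```

Running it with a `SELECT` first (same `FROM`/`WHERE`, no `UPDATE`) is a cheap way to confirm the match before mutating anything.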
Security First: Row-Level Security (RLS)
A critical component of data persistence, especially in multi-tenant applications, is ensuring data privacy and isolation. We applied Row-Level Security (RLS) to the new reports table. This involved:
- Enabling RLS on the table.
- Setting `FORCE ROW LEVEL SECURITY` to ensure policies are always applied.
- Implementing a `tenant_isolation` policy to guarantee that users can only access reports belonging to their `tenantId`.
- Adding the RLS entry to our `prisma/rls.sql` script for version control.
This ensures that even if an attacker bypasses our application logic, the database itself enforces data segregation.
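Concretely, the `reports` RLS entry looks roughly like this. The `app.tenant_id` session setting is an assumption about how the connection layer passes the current tenant; your setup may use a different variable:

```sql
-- Sketch of the reports RLS entry; app.tenant_id is an assumed
-- per-request session setting supplied by the application layer.
ALTER TABLE reports ENABLE ROW LEVEL SECURITY;
ALTER TABLE reports FORCE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON reports
  USING ("tenantId" = current_setting('app.tenant_id', true));
```

Passing `true` as the second argument to `current_setting` makes it return NULL instead of erroring when the setting is absent, so a connection with no tenant context simply sees no rows.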
What's Next: Polishing and QA
All the code is written, the TypeScript compiles cleanly, the schema is pushed, and RLS is applied. The next immediate steps involve thorough quality assurance:
- Commit and Push: Get these changes into version control!
- QA - Generate from Reports Tab: Verify that generating a report from the new Reports tab successfully saves it and it appears in the list.
- QA - View Saved Report: Confirm that clicking a saved report opens the Sheet viewer and renders the full content, including Mermaid charts.
- QA - Delete Report: Ensure deleting a saved report correctly removes it from the list.
- QA - Standalone Generation: Verify that generating reports from standalone AutoFix/Refactor detail pages (where `projectId` might not be explicitly passed) still works as expected.
Conclusion
Transforming our AI-generated reports from ephemeral outputs to a persistent, manageable system has been a significant step forward. This not only enhances the user experience by providing a centralized knowledge base but also lays the groundwork for future features like report sharing, collaboration, and deeper analytical insights. While the journey had its share of database migration quirks and data consistency challenges, the end result is a more robust, intelligent, and user-friendly platform.