From Ephemeral to Enduring: Building Robust Report Persistence and PDF Export
Dive into how we tackled report persistence, implemented seamless PDF exports with `md2pdf-mermaid`, and navigated a tricky `vector` column challenge with Prisma, all while enhancing our project dashboard.
Every now and then, a development session feels like a full sprint, culminating in a satisfying moment of "it all just works." This past session was precisely that. Our mission? To transform our project reports from fleeting, on-demand insights into persistent, downloadable artifacts. We wanted to empower users to generate, save, and export their critical project data as beautiful, branded PDFs. And I'm thrilled to report: mission accomplished.
Let's unpack the journey.
The Core Mission: Making Reports Endure
Before this session, our generated reports were ephemeral – useful for immediate consumption, but gone once the modal closed. The first, and arguably most foundational, step was to introduce proper persistence.
This involved several key architectural changes:
- Prisma Model & RLS: We introduced a new `Report` model in `prisma/schema.prisma` to store the generated report content, metadata, and associated project ID. Crucially, we implemented Row Level Security (RLS) in `prisma/rls.sql` to ensure that users can only access reports belonging to projects they have permission for. Data security, first and foremost!
- tRPC Router for Reports: A dedicated tRPC router (`src/server/trpc/routers/reports.ts`) was created, providing API endpoints for listing, retrieving, and deleting these saved reports. This keeps our API clean and well organized.
- Persisting Generative AI Outputs: The heart of the matter was updating our existing `generateReport` mutations (for auto-fix, refactor, and workflows). Now, whenever a report is generated, it is automatically saved to the database through these mutations.
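For illustration, here is a minimal sketch of what such a `Report` model could look like in `prisma/schema.prisma`. The field names and the `Project` relation shape are assumptions for the sake of the example, not the actual schema:

```prisma
model Report {
  id        String   @id @default(cuid())
  projectId String
  type      String   // e.g. "auto-fix", "refactor", "workflow"
  title     String
  content   String   // the generated Markdown
  createdAt DateTime @default(now())

  project   Project  @relation(fields: [projectId], references: [id])
}
```

With a model along these lines, the RLS policies in `prisma/rls.sql` can gate access on `projectId`, so the database itself enforces that a user only ever sees reports for projects they can access.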
With persistence in place, we laid the groundwork for a much richer reporting experience.
Bringing Reports to Life: Seamless PDF Export
Having reports saved is great, but what's even better is being able to take them with you. Our next big leap was enabling PDF export, complete with proper styling and our brand's touch.
The challenge here was converting our Markdown-based reports (which often include Mermaid diagrams) into visually appealing PDFs. We opted for a Python-based solution:
- `md2pdf-mermaid` and a Custom Script: We leveraged the excellent `md2pdf-mermaid` library, which handles Markdown-to-PDF conversion and renders Mermaid diagrams beautifully. A Python script (`scripts/md2pdf.py`) was created to wrap this functionality, adding a branded footer to each generated PDF.
- PDF Generation API Endpoint: To integrate this Python script into our Next.js application, we built an API route (`src/app/api/v1/reports/pdf/route.ts`). This endpoint takes the Markdown content, calls our Python script, and streams the generated PDF back to the client. We even hardcoded the path to the Python venv's interpreter for robustness (a detail we'll streamline later with better setup docs).
- Frontend Integration: The `ReportGeneratorModal` and the main `ReportsTab` were updated significantly. We added dual download buttons – one for the raw Markdown, and another for the newly enabled PDF export. Functions like `buildFilename()`, `downloadFile()`, and `downloadPdf()` now handle the client-side logic for fetching and downloading these files, complete with a `pdfLoading` state for a smooth spinner experience.
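To make the client-side helpers concrete, here is a hypothetical sketch of what a `buildFilename()` helper might look like. The real function's signature and slug rules may differ; the name pattern below (project slug, report type, date) is an assumption:

```typescript
// Hypothetical sketch of buildFilename(): turn a project name, report type,
// and extension into a safe, descriptive download filename.
function buildFilename(projectName: string, reportType: string, ext: string): string {
  const slug = projectName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into dashes
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
  const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  return `${slug}-${reportType}-${date}.${ext}`;
}
```

A `downloadPdf()` built on top of this would then `fetch` the PDF route with the Markdown payload, read the response as a `Blob`, and hand it to the browser via a temporary object URL, flipping `pdfLoading` on either side of the request.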
Enhancing the User Experience on the Dashboard
Beyond the core persistence and export, we dedicated significant effort to refining the user interface, particularly within the project detail page:
- Overhauled `ReportsTab`: The `ReportsTab` itself received a major rewrite. Saved reports are now intelligently grouped by type, making them easier to navigate. A new viewer Sheet panel allows users to review saved reports directly within the dashboard, and a dedicated section enables generating new reports.
- Contextual Data Passing: We ensured that the `projectName` prop is correctly passed from `ReportsTab` down to the `ReportGeneratorModal`, allowing for more context-aware report generation and naming.
- New AutoFix Tab: As part of a broader effort (though mostly completed in prior sessions), we integrated an AutoFix tab into the project detail page, complete with severity badges, PR Action Items, and expandable run cards. This provides rich context for the reports being generated.
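The grouping step in `ReportsTab` boils down to bucketing reports by their type before rendering one section per kind. A minimal sketch, assuming a simplified `SavedReport` shape (the real record has more fields):

```typescript
// Assumed, simplified shape of a saved report row.
interface SavedReport {
  id: string;
  type: string; // e.g. "auto-fix" | "refactor" | "workflow"
  title: string;
}

// Bucket reports by type; each Map entry becomes one section in the UI.
function groupReportsByType(reports: SavedReport[]): Map<string, SavedReport[]> {
  const groups = new Map<string, SavedReport[]>();
  for (const report of reports) {
    const bucket = groups.get(report.type) ?? [];
    bucket.push(report);
    groups.set(report.type, bucket);
  }
  return groups;
}
```

Using a `Map` keeps insertion order stable, so the sections render in a predictable order for a given query result.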
Navigating the Treacherous Waters: Lessons Learned
No significant development session is without its bumps. Here are a couple of key challenges we faced and how we overcame them, offering valuable lessons for future work.
Prisma's `vector` Conundrum
This was a recurring headache that demanded a specific workaround.
- The Problem: Our `workflow_insights` table uses a PostgreSQL `vector(1536)` column for embeddings. While incredibly powerful, Prisma currently marks the `vector` type as `Unsupported`. This means that `prisma db push` – our go-to for schema migrations – would consistently drop this column every time we pushed schema changes.
- The Failed Attempt: Repeatedly running `prisma db push --accept-data-loss` (necessary due to schema changes) would leave us with a broken `workflow_insights` table, missing its crucial `embedding` column.
- The Workaround: After every `db:push`, we now have a mandatory manual step: restore the column with raw SQL (`ALTER TABLE workflow_insights ADD COLUMN IF NOT EXISTS embedding vector(1536);`) and then recreate its associated HNSW index. This ensures our embeddings remain intact.
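Concretely, the restore step looks something like the following. The `ALTER TABLE` statement is the one described above; the index name and the `vector_cosine_ops` operator class are assumptions for the example (use whichever distance operator your similarity queries rely on):

```sql
-- Restore the embedding column that `prisma db push` dropped
ALTER TABLE workflow_insights
  ADD COLUMN IF NOT EXISTS embedding vector(1536);

-- Recreate the HNSW index (name and operator class are illustrative)
CREATE INDEX IF NOT EXISTS workflow_insights_embedding_idx
  ON workflow_insights
  USING hnsw (embedding vector_cosine_ops);
```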
Lesson Learned: While Prisma is fantastic, for `Unsupported` types, be prepared to manage schema changes and data integrity with direct SQL commands post-migration. Documenting these steps thoroughly is crucial for team members.
The Path Less Traveled (or Mis-Traveled)
A classic developer pitfall, but a good reminder of attention to detail.
- The Problem: In our new PDF generation API route (`src/app/api/v1/reports/pdf/route.ts`), I initially used an import path of `../../../middleware` to reach our middleware. This resulted in a `TS2307: Cannot find module` error.
- The Diagnosis: The path was simply incorrect. The middleware file (`src/app/api/v1/middleware.ts`) was only two levels up, not three.
- The Solution: Correcting the import path to `../../middleware` resolved the issue immediately.
Lesson Learned: Always double-check relative import paths, especially when creating new files deep within a directory structure. IDE auto-imports can sometimes be a lifesaver, but manual paths require careful counting!
Behind the Scenes: The Tech Stack & Setup
A quick note on the environment for anyone looking to contribute or replicate:
- Our Python environment for PDF generation lives in a `.venv/` directory, managed by `pip`. It contains `md2pdf-mermaid` version 1.4.3, Playwright, and the Chromium browser (which Playwright uses to render the PDFs).
- The `.venv/` directory is, of course, in `.gitignore`. For new clones, the setup involves:

  ```bash
  python3 -m venv .venv
  .venv/bin/pip install md2pdf-mermaid playwright
  .venv/bin/playwright install chromium
  ```

- PostgreSQL powers our data, with the new `reports` table and RLS actively in use.
- All repositories are now correctly linked to projects via a `projectId` foreign key, a crucial piece of our data model.
What's Next on the Horizon
With this massive chunk of work behind us, we're already looking ahead:
- Context-Aware AutoFix & Refactor Pipelines: Our next major focus is on enhancing our generative AI pipelines with more context, involving further schema changes, a robust pipeline context loader, and a refined frontend project selector.
- Expanding PDF Downloads: While the project dashboard and `ReportGeneratorModal` now support PDF downloads, we'll consider adding this functionality to standalone auto-fix, refactor, and workflow detail pages for broader accessibility.
- Streamlining Setup: We'll create a dedicated setup script or comprehensive documentation for the Python venv and Playwright dependencies to make onboarding new contributors even smoother.
This session was a significant step forward, bringing our project reports to a new level of utility and user experience. It's exciting to see these features come to life, and I'm already looking forward to the next set of challenges!