From Chat to Action: Bridging Conversations to Workflows with AI-Powered Knowledge Export
We just shipped a new feature that transforms raw discussion data into actionable Workflow Insights, leveraging LLMs to bridge the gap between unstructured conversation and structured workflows.
In the fast-paced world of software development and collaboration, valuable insights often emerge from the most unstructured of places: conversations. Whether it's a brainstorming session, a client feedback discussion, or an internal technical debate, these exchanges are brimming with context, decisions, and potential actions. The challenge, however, is making this ephemeral knowledge actionable and integrating it seamlessly into our structured workflows.
That's precisely the problem we set out to solve with our latest feature: Discussion Knowledge Export. Our goal was ambitious: to build a bridge between the free-flowing nature of discussions and the structured execution of workflows. By leveraging advanced AI, we can now extract key insights from any discussion and transform them into reusable WorkflowInsight records, feeding directly into our existing MemoryPicker + {{memory}} pipeline.
This means no more insights getting lost in the scrollback!
The Journey: Building the Knowledge Bridge
The implementation of Discussion Knowledge Export was a multi-faceted effort, touching various layers of our stack. Here’s a rundown of the key components we brought to life:
1. Laying the Foundation: Schema & Core Services
At the heart of any new data-driven feature are the schema changes. We augmented our Discussion model in prisma/schema.prisma with three crucial nullable fields: summary, usefulnessScore, and exportedAt. These fields provide essential metadata, allowing us to track when a discussion was processed, assess the quality of the extracted insights, and present a concise overview to users. A quick db:push and generate brought our database up to speed without needing complex migrations.
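For the curious, the new fields look roughly like this (the field names come from above; the exact types and comments are our best-guess sketch, not a verbatim excerpt of the schema):

```prisma
model Discussion {
  // ...existing fields...
  summary         String?   // concise AI-generated digest of the discussion
  usefulnessScore Float?    // quality score assigned to the extracted insights
  exportedAt      DateTime? // set when a knowledge export completes
}
```

Making all three nullable is what let us skip a formal migration: existing rows simply read as "never exported".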
The true intelligence of this feature resides in src/server/services/discussion-knowledge.ts. This is where the magic happens:
- We make two parallel calls to Haiku (Anthropic's LLM): one to generate a concise digest of the discussion, and another to extract specific, actionable insights. This parallelization helps optimize latency.
- Extracted insights are then persisted as `WorkflowInsight` records, complete with embeddings for future retrieval and similarity searches.
- Crucially, we implemented tenant-scoped cleanup for re-exports: when a discussion is re-processed, old insights are gracefully replaced, maintaining data integrity and preventing duplication.
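The parallelization pattern at the core of the service is just `Promise.all` over two independent LLM calls. Here's a minimal sketch; `callHaiku` below is a stand-in stub for the real Anthropic client call, not our actual helper:

```typescript
// Stand-in for the real Anthropic client call; the actual service sends the
// discussion transcript to Haiku with two different prompts.
async function callHaiku(prompt: string, transcript: string): Promise<string> {
  return `${prompt}: ${transcript.slice(0, 20)}`; // stubbed response
}

async function exportDiscussionKnowledge(transcript: string) {
  // Fire both LLM calls concurrently; total latency is the slower of the
  // two calls rather than their sum.
  const [summary, insightsRaw] = await Promise.all([
    callHaiku("Summarize this discussion", transcript),
    callHaiku("Extract actionable insights", transcript),
  ]);
  return { summary, insightsRaw };
}
```

Since neither call depends on the other's output, running them sequentially would simply waste latency.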
2. The API Gateway: tRPC Procedures
To expose this new functionality to our frontend, we added three new tRPC procedures to src/server/trpc/routers/discussions.ts:
- `exportKnowledge` (mutation): The main trigger for the LLM processing pipeline. We've implemented LLM rate limiting here to ensure responsible API usage.
- `byProject` (query): Fetches the discussions linked to a specific project, essential for our new project-level view.
- `getExportedInsights` (query): Retrieves the `WorkflowInsight` records associated with a discussion.
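The LLM rate limiting on `exportKnowledge` can be pictured as a per-tenant fixed-window counter. This is a simplified sketch of the idea, not our exact implementation (the class name and windowing details are assumptions):

```typescript
// Minimal per-tenant fixed-window rate limiter for expensive LLM calls.
class LlmRateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the tenant may make another LLM call right now.
  tryAcquire(tenantId: string, now = Date.now()): boolean {
    const entry = this.counts.get(tenantId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // Start a fresh window for this tenant.
      this.counts.set(tenantId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false; // over budget for this window
    entry.count += 1;
    return true;
  }
}
```

The mutation checks the limiter before touching the Anthropic API and returns a friendly error instead of burning tokens when a tenant is over budget.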
3. Bringing it to Life: User Interface & Integrations
A powerful backend is only as good as its user experience. We focused on seamless integration across the application:
- `usefulness-badge.tsx`: A visually intuitive, color-coded badge (<30% gray, 30-60% yellow, 60-80% green, 80%+ accent) that displays the `usefulnessScore` of exported discussions, giving users quick feedback on the quality of extracted insights.
- `export-knowledge-dialog.tsx`: A user-friendly bottom sheet dialog that guides users through the export process, letting them select a target project and showing a success state with the generated summary, score, and insight count.
- `discussions/[id]/page.tsx`: The individual discussion page now features an "Export Knowledge" button in the header and, once the discussion has been exported, a summary bar below the header.
- `discussions/page.tsx`: In the main discussions list view, exported discussions proudly display their `UsefulnessBadge`.
- `projects/[id]/page.tsx`: We introduced a brand-new "Conversations" tab between "Workflows" and "Settings", providing a dedicated space to view all discussions linked to a project, complete with their scores and summaries.
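The badge thresholds above boil down to a tiny pure function (the color names are illustrative of our theme tokens):

```typescript
// Map a usefulness score (0-100) to the badge color tiers described above:
// <30 gray, 30-60 yellow, 60-80 green, 80+ accent.
type BadgeColor = "gray" | "yellow" | "green" | "accent";

function usefulnessColor(score: number): BadgeColor {
  if (score >= 80) return "accent";
  if (score >= 60) return "green";
  if (score >= 30) return "yellow";
  return "gray";
}
```

Keeping the mapping in one pure function means the badge component and the discussions list can't drift apart on the thresholds.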
4. Robustness & Security First
Throughout the development process, security and reliability were paramount. Our code review process helped us catch and implement several critical fixes:
- Ensuring tenant-scoped raw SQL cleanup to prevent data leakage across tenants.
- Utilizing the correct Prisma JSON filter syntax with `AND` arrays for complex metadata queries.
- Applying `tenantId` to `updateMany` operations so data modifications are always scoped.
- Implementing project ownership validation on mutations to prevent unauthorized actions.
- Adding mutation reset logic when the export dialog re-opens, ensuring a clean state for subsequent exports.
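The tenant-scoping fix amounts to never building a write's `where` clause without the tenant. A small helper (hypothetical name, sketched here for illustration) makes the invariant hard to forget:

```typescript
// Merge the tenant scope into any where clause so an updateMany/deleteMany
// can never touch another tenant's rows.
function tenantScoped<T extends Record<string, unknown>>(
  tenantId: string,
  where: T,
): T & { tenantId: string } {
  if (!tenantId) throw new Error("tenantId is required for scoped writes");
  return { ...where, tenantId };
}
```

A call site might then read `updateMany({ where: tenantScoped(ctx.tenantId, { discussionId }), ... })`, making an unscoped write a type-level and runtime error rather than a silent data leak.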
Lessons Learned & Challenges Overcome
Even with careful planning, development always presents its unique puzzles. These moments are invaluable for growth:
- The Whitespace Whammy with `replace_all`: While updating multiple call sites for our `cleanupOldInsights` function, a `replace_all` command in my editor only caught two of three instances. The culprit? A subtle difference in leading whitespace (two-space vs. four-space indentation) at the third call site.
  - Lesson: Always double-check and manually verify the results of global search-and-replace operations, especially when code formatting varies. A quick type check immediately caught the missing argument, saving further headaches.
- Prisma's Nuances: JSON Filtering: Early attempts at filtering JSON fields in Prisma with `where.metadata` and then `AND.metadata` were flagged by a vigilant code reviewer as potentially problematic (the second filter might shadow the first).
  - Lesson: For complex `AND` conditions on the same field, especially with JSON path filters, the explicit `AND: [{ filter1 }, { filter2 }]` array syntax is the robust and correct approach. Code review proved its worth once again in ensuring query integrity.
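Concretely, the corrected filter shape looks like this (the field names and values here are illustrative, not our actual metadata keys). The point is that a second `metadata` key in a plain object literal would silently overwrite the first, while the `AND` array keeps both conditions:

```typescript
// Both JSON path conditions survive because they live in separate
// elements of the AND array rather than competing for one object key.
const where = {
  tenantId: "t1",
  AND: [
    { metadata: { path: ["sourceType"], equals: "discussion" } },
    { metadata: { path: ["discussionId"], equals: "d1" } },
  ],
};
```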
What's Next: Validation & Evolution
With the feature fully implemented and committed (commit 01be669), our dev server is humming along at localhost:3000, ready for rigorous manual testing. Our immediate next steps involve:
- Manual Test: Open a discussion with 5+ messages, click "Export Knowledge", and verify that the summary, score, and insights appear correctly.
- Manual Test: Re-export the same discussion to confirm that old insights are replaced, not duplicated.
- Manual Test: Check the project detail "Conversations" tab to ensure it correctly displays linked discussions with their scores and summaries.
- Manual Test: Create a new workflow, open the `MemoryPicker`, and verify that discussion insights appear alongside workflow insights.
- Manual Test: Run a workflow with selected discussion insights to confirm that `{{memory}}` correctly contains the discussion knowledge.
- Future Consideration: The `byProject` query currently returns all discussions. As flagged in review, we'll consider adding pagination in the near future to handle large datasets more efficiently.
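For the pagination work, a cursor-based shape is the likely direction. This is purely a sketch of the idea over an in-memory array; the real version would translate to a Prisma `cursor`/`take` query:

```typescript
// Cursor pagination: return one page of items plus the cursor for the
// next page (null when the list is exhausted).
function paginate<T extends { id: string }>(
  items: T[],
  pageSize: number,
  cursor?: string,
): { page: T[]; nextCursor: string | null } {
  const start = cursor ? items.findIndex((i) => i.id === cursor) + 1 : 0;
  const page = items.slice(start, start + pageSize);
  const last = page[page.length - 1];
  const nextCursor = last && start + pageSize < items.length ? last.id : null;
  return { page, nextCursor };
}
```

Cursor pagination avoids the drifting-offset problem when new discussions land between page fetches, which matters for an actively used project.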
This feature marks a significant step forward in making our platform even more intelligent and productive. By transforming unstructured discussions into actionable insights, we're empowering users to truly bridge the gap between conversation and execution. We're excited to see how this enhances collaboration and workflow efficiency!