Automating the AI Prompt: Building a Self-Generating Implementation Spec for LLMs
To bridge the gap between high-level product plans and the concrete technical specifications LLMs need, we built a feature that auto-generates Claude-ready implementation prompts directly from our workflow engine.
As developers, we often find ourselves translating high-level product requirements into actionable technical specifications. This process can be a creative dance, but it's also a significant time sink, especially when you're trying to leverage powerful Large Language Models (LLMs) like Claude to assist with code generation. LLMs are amazing, but they need context – not just a "make me a feature" request, but a deep understanding of your existing codebase, data models, and API structure.
This exact challenge hit us head-on during a recent session. Our goal was ambitious: to analyze the output of a "rent-a-persona" workflow, transform it into a codebase-grounded implementation prompt, and then automate that entire process. The vision? Every workflow should, as its final step, auto-generate a paste-ready Claude Code prompt.
From PM Plan to Code Prompt: The Genesis of Automation
The trigger for this project was a specific "rent-a-persona" workflow (ID 0cc75d29). The output, while excellent from a product perspective, was a high-level PM plan – not a technical spec. It described what the persona feature should do, but lacked the crucial details for an LLM to generate meaningful code: specifics about our PersonaApiToken model, how tokens are served, existing REST endpoints, or our tRPC routes.
This meant a manual step was required. We crafted a comprehensive implementation prompt, meticulously documented at docs/prompts/rent-a-persona-implementation.md. This prompt wasn't just a generic "write code for me" request; it was deeply grounded in our existing codebase, referencing specific models and services.
This manual effort, while necessary, sparked an idea: What if we could automate this translation? What if our system could generate these codebase-aware prompts itself?
Designing the "Auto Implementation Prompt" Feature
The idea was compelling: a simple toggle on workflow creation that, when activated, would add a "virtual" final step to the workflow, generating that crucial implementation prompt. We laid out the design in docs/plans/2026-03-11-auto-implementation-prompt-design.md and docs/plans/2026-03-11-auto-implementation-prompt.md, outlining the necessary changes across our stack.
The Implementation Journey: Bringing it to Life
With a clear plan, we dove into the code. The feature involved touching several layers of our application, from the database schema to the UI and the core workflow engine. Here's a walkthrough of the key steps:
1. Database Schema Update: Opt-in for Prompt Generation
First, we needed a way to flag workflows that should generate a prompt. We added a new boolean field to our Workflow model in prisma/schema.prisma:
```prisma
// prisma/schema.prisma
model Workflow {
  // ... other fields
  generatePrompt Boolean @default(true) // New field!
  // ...
}
```
Setting @default(true) was a deliberate choice. It ensures that existing workflows, upon deployment, will implicitly have prompt generation enabled, making the rollout smoother and allowing us to disable it explicitly if needed. This is a safe, additive migration.
2. Wiring Through tRPC: API Integration
Next, we exposed this new field through our tRPC API. The generatePrompt boolean was wired into the create procedure for workflows in src/server/trpc/routers/workflows.ts, allowing the frontend to send this preference to the backend.
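The actual router code isn't reproduced here, but the shape of the change is small: accept an optional boolean on the create input and default it to match the schema. A minimal sketch of the defaulting logic (the `normalizeCreateWorkflowInput` helper and every field besides `generatePrompt` are illustrative, not our real tRPC code, which uses a validation schema):

```typescript
// Hypothetical wire shape for the workflows.create procedure input.
interface CreateWorkflowInput {
  name: string;
  generatePrompt?: boolean; // optional on the wire
}

interface CreateWorkflowData {
  name: string;
  generatePrompt: boolean; // always resolved before hitting the DB
}

// Mirror the Prisma-level @default(true): an omitted flag means "on",
// so older clients that never send the field keep prompt generation.
function normalizeCreateWorkflowInput(
  input: CreateWorkflowInput,
): CreateWorkflowData {
  return {
    name: input.name,
    generatePrompt: input.generatePrompt ?? true,
  };
}
```

Keeping the API default aligned with the schema default means the feature behaves the same whether a client omits the flag or the database fills it in.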
3. UI Toggle: User Control
A feature isn't complete without a way for users to interact with it. We added a simple checkbox toggle in the workflow creation form (src/app/(dashboard)/dashboard/workflows/new/page.tsx). This gave users direct control over whether their workflow should conclude with an implementation prompt.
4. The Brain: implementation-prompt-generator.ts
This was the core logic. We created src/server/services/implementation-prompt-generator.ts. This service contains:
- `buildImplementationPromptInput()`: A function responsible for gathering all necessary context from the workflow, its outputs, and our codebase to construct a rich, detailed input for the LLM.
- `IMPLEMENTATION_PROMPT_SYSTEM`: A constant holding the system-level instructions for Claude, guiding it on how to interpret the input and format the technical specification.
We backed this new service with 3 dedicated unit tests to ensure its prompt-building logic was sound.
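To make the shape of the service concrete, here is a toy sketch of those two exports. The system-prompt wording, the input types, and the `CodebaseContext` parameter are all illustrative assumptions; the real service derives its context from the repository and workflow records rather than taking it as arguments:

```typescript
// Illustrative stand-in for the real system prompt constant.
const IMPLEMENTATION_PROMPT_SYSTEM =
  "You are a senior engineer. Turn the workflow output below into a " +
  "concrete, codebase-grounded implementation spec. Reference the " +
  "listed files, models, and routes explicitly; do not invent APIs.";

interface PromptWorkflow {
  name: string;
  steps: { title: string; output: string }[];
}

// Hypothetical codebase context passed in for clarity; the real
// service gathers this itself.
interface CodebaseContext {
  models: string[];
  routes: string[];
}

// Assemble workflow outputs plus codebase context into one LLM input.
function buildImplementationPromptInput(
  workflow: PromptWorkflow,
  ctx: CodebaseContext,
): string {
  const outputs = workflow.steps
    .map((s, i) => `## Step ${i + 1}: ${s.title}\n${s.output}`)
    .join("\n\n");
  return [
    `# Workflow: ${workflow.name}`,
    outputs,
    `# Relevant models\n${ctx.models.join(", ")}`,
    `# Relevant routes\n${ctx.routes.join(", ")}`,
  ].join("\n\n");
}
```

The key idea is that the prompt input is deterministic string assembly, which is exactly what makes it cheap to cover with unit tests.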
5. Integrating with the Workflow Engine: The Virtual Step
The magic truly happened in src/server/services/workflow-engine.ts. We integrated the prompt generation as a "virtual step" that executes after all configured workflow steps have completed. This approach is elegant because it doesn't require modifying the existing workflow step configuration or introducing a new step type visible to the user mid-flow. It simply adds a final, behind-the-scenes processing stage (around lines 2470-2580).
A minor but important fix during this stage was ensuring durationMs tracking correctly measured the actual elapsed time for this new virtual step, rather than hardcoding it to 0. Small details, big difference for observability!
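A toy model of the engine's tail end shows both ideas at once: the virtual step runs after every configured step, only when the workflow opted in, and its `durationMs` is measured rather than hardcoded. The step-function shape, record fields, and names here are illustrative, not the engine's actual types:

```typescript
interface StepRecord {
  name: string;
  durationMs: number;
}

interface WorkflowRun {
  generatePrompt: boolean;
  steps: Array<() => string>;
}

function runWorkflow(run: WorkflowRun): StepRecord[] {
  const records: StepRecord[] = [];
  // Configured steps run first, each timed individually.
  for (let i = 0; i < run.steps.length; i++) {
    const start = Date.now();
    run.steps[i]();
    records.push({ name: `step-${i + 1}`, durationMs: Date.now() - start });
  }
  // Virtual final step: invisible in the workflow's configuration,
  // appended only when prompt generation is enabled.
  if (run.generatePrompt) {
    const start = Date.now();
    // ...build and persist the implementation prompt here...
    records.push({
      name: "implementation-prompt",
      // Measure actual elapsed time instead of hardcoding 0.
      durationMs: Date.now() - start,
    });
  }
  return records;
}
```

Because the virtual step is appended after the loop over configured steps, no existing step configuration or user-visible step type has to change.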
6. Duplication Edge Case: Preserving generatePrompt
Finally, we ensured that when a workflow is duplicated, the generatePrompt setting is correctly carried over. This prevents users from having to re-enable the feature for cloned workflows.
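The fix amounts to copying the flag explicitly instead of letting the clone fall back to a default. A minimal sketch (field names other than `generatePrompt` are illustrative):

```typescript
interface WorkflowSettings {
  name: string;
  generatePrompt: boolean;
}

// Clone a workflow's settings, carrying generatePrompt over verbatim
// rather than re-applying the schema default.
function duplicateWorkflow(source: WorkflowSettings): WorkflowSettings {
  return {
    name: `${source.name} (copy)`,
    generatePrompt: source.generatePrompt, // preserved, not re-defaulted
  };
}
```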
All in all, this involved 7 focused commits on main, and I'm happy to report that all 321 tests passed with flying colors.
Lessons from the Trenches: What We Learned
No development session is without its quirks. Here are a few "pain points" that turned into valuable lessons:
UI Component Limitations: The Case of the Missing outline Badge
- The Problem: I tried to use `<Badge variant="outline">` in the workflow creation form for a specific visual style.
- The Reality: Our `Badge` component only supported `"default" | "success" | "accent" | "warning" | "danger"`. There was no `"outline"` variant defined.
- The Workaround: I ended up using `variant="default"`, which thankfully renders with border styling that was visually close enough to what I wanted.
- Lesson Learned: Always check a component's API and available props/variants early. While it's tempting to assume common styles, component libraries often have specific implementations. Sometimes a "good enough" workaround is better for velocity, but it's worth noting for future component enhancements or custom styling needs.
Database Migrations & Local Dev Setup
- The Current State: The `db:push` command hadn't been run yet because Docker wasn't running on my dev machine. This meant the new `generatePrompt` column wasn't physically present in my local database.
- The Resolution: The next `db:push` (or `./scripts/dev-start.sh`) will handle this automatically. For production, a manual `db:push` will be required during deployment.
- Lesson Learned: Ensure your local development environment (especially Docker for database services) is consistently up and running to avoid surprises with schema migrations. It's a reminder that even safe, additive migrations still require a database update step.
Query Optimization: include vs. select
- The Observation: Our existing `list` and `get` queries for workflows use `include` statements (fetching related data) rather than explicit `select` statements (fetching only specific columns).
- The Benefit: This meant the new `generatePrompt` column was automatically returned by these queries without needing any modifications.
- Lesson Learned: While `include` can fetch more data than strictly necessary (potentially impacting performance on very large datasets), in this case it simplified schema evolution by making the new column immediately accessible without any query changes. It's a trade-off to be aware of, but here it was a win for development speed.
What's Next?
The feature is complete, tested, and ready for prime time. My immediate next steps are:
- Deploy to Production: A `db:push`, build, and restart will get this into users' hands.
- Live Test: Create a new workflow with "Generate Implementation Prompt" checked, run it, and verify the final step outputs a well-formed prompt.
- Negative Test: Create a workflow with the toggle unchecked to ensure no extra step is generated.
- Feature Implementation: Use the generated prompt for the "rent-a-persona" feature itself – dogfooding our own automation!
- Refinement: Based on the quality of the live output, we'll iterate on and refine our `IMPLEMENTATION_PROMPT_SYSTEM` to make it even more effective.
Automating the generation of codebase-grounded implementation prompts is a significant step forward in streamlining our development workflow and maximizing the utility of LLMs. It bridges the gap between high-level ideas and concrete code, allowing our AI tools to be truly effective partners in building features.
{"thingsDone":[
"Analyzed rent-a-persona workflow output and identified need for technical spec.",
"Created a comprehensive, codebase-grounded manual implementation prompt for 'rent-a-persona'.",
"Designed the 'Auto Implementation Prompt' feature to automate prompt generation.",
"Added `generatePrompt` boolean field to `Workflow` model in Prisma schema.",
"Wired `generatePrompt` through tRPC for workflow creation.",
"Implemented UI toggle checkbox for 'Generate Implementation Prompt' in workflow creation form.",
"Developed `implementation-prompt-generator.ts` service for building LLM prompts.",
"Integrated prompt generation as a virtual final step into the workflow engine.",
"Fixed `durationMs` tracking for the new virtual workflow step.",
"Ensured `generatePrompt` setting is preserved during workflow duplication.",
"All 321 tests pass with 7 commits on main."
],"pains":[
"UI Badge component lacked an 'outline' variant, requiring a workaround with 'default'.",
"Local `db:push` was pending due to Docker not running, delaying schema migration locally."
],"successes":[
"Successfully automated the generation of codebase-grounded LLM implementation prompts.",
"Seamless integration of a new feature across DB, API, UI, and core engine layers.",
"Additive Prisma migration with `@default(true)` facilitated a smooth rollout.",
"Existing tRPC queries using `include` automatically picked up the new column without modification.",
"All unit and integration tests passed, ensuring feature stability."
],"techStack":[
"Prisma (ORM)",
"tRPC (Type-safe APIs)",
"Next.js (Frontend/Backend framework)",
"TypeScript",
"Docker (for local database)",
"Claude (LLM)",
"Custom Workflow Engine"
]}