nyxcore-systems
5 min read

From Vision to Code: Automating Implementation Prompts with AI

We tackled the challenge of translating high-level project plans into actionable code specifications by building a new feature that automatically generates Claude-ready implementation prompts for our development team.

AI, prompt engineering, workflow automation, fullstack, typescript, prisma, trpc, nextjs, devops

Every developer knows the drill: a product manager hands you a high-level project plan with a fantastic vision, and then comes the crucial, often time-consuming step of translating that vision into concrete, technical instructions for implementation. What if an AI could help bridge that gap, generating a ready-to-use coding prompt directly from your workflow output? That's precisely the problem we set out to solve this week.

Our latest feature, the "Auto Implementation Prompt," is designed to do just that. It's a significant step towards streamlining our development workflows, ensuring that every completed workflow can, if desired, culminate in a paste-ready Claude Code prompt, grounded in our existing codebase.

The Genesis of an Idea: Bridging the PM-Dev Gap

The catalyst for this innovation came from analyzing a recent "rent-a-persona" workflow. The output was brilliant – a comprehensive product management plan, rich with user stories and business logic. But for a developer, it was still several steps removed from a technical specification. We needed a bridge, a way to transform this high-level output into an actionable implementation plan, complete with references to our existing PersonaApiToken model, token services, and REST/tRPC endpoints.

Our goal was clear: build a system that could analyze these outputs and, using our codebase as context, generate a robust, detailed implementation prompt. This would not only save valuable developer time but also ensure consistency and accuracy in how new features integrate with our existing architecture.

Under the Hood: The Implementation Journey

Bringing the "Auto Implementation Prompt" to life involved a series of interconnected steps, touching various parts of our full-stack application:

  • Enabling the Feature Flag: First, we needed a way to toggle this feature. We introduced a new boolean field, generatePrompt Boolean @default(true), to our Workflow model in prisma/schema.prisma. This simple addition allows us to control whether a workflow should produce an implementation prompt.

  • User Control in the UI: With the database schema updated, the next logical step was to expose this control to our users. We wired generatePrompt through our tRPC create procedure and added a straightforward checkbox toggle to the workflow creation form in our Next.js dashboard UI. Now, users can decide at the point of workflow creation whether they want an auto-generated prompt.

  • The Brain: implementation-prompt-generator.ts: The core intelligence of this feature resides in a new service: src/server/services/implementation-prompt-generator.ts. It exposes a buildImplementationPromptInput() function and houses our IMPLEMENTATION_PROMPT_SYSTEM constant. It's designed to take the workflow output and contextualize it with our codebase, producing a structured input for the AI. We covered this critical piece with three dedicated unit tests.

  • Integrating with the Workflow Engine: The prompt generation isn't a standalone process; it's an integral part of our existing workflow engine. We wired the new service into src/server/services/workflow-engine.ts, creating a "virtual step" that executes after all other configured workflow steps are complete. This ensures the prompt is generated only when the necessary inputs from the preceding steps are available.

  • Refinement and Robustness: During integration, we caught and fixed a minor bug where workflow duration tracking was hardcoded to 0ms for this new virtual step; it now accurately measures the elapsed time. We also made sure that when a workflow is duplicated, the generatePrompt setting is correctly preserved, maintaining user intent.

  • Architectural Wins: A pleasant discovery during this process was how our existing data fetching logic handled the new generatePrompt column. Because our queries (for list and get operations) predominantly use include rather than select for related data, the new column was automatically returned without requiring any query modifications. This speaks to the robustness of our data access layer and saved us from additional query refactoring.
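To make the schema change from the first step concrete, here is a sketch of how the new flag might sit on the Workflow model in prisma/schema.prisma. The surrounding fields are illustrative placeholders, not our actual model:

```prisma
model Workflow {
  id             String   @id @default(cuid())
  name           String
  // Feature flag: should this workflow emit an implementation prompt?
  generatePrompt Boolean  @default(true)
  createdAt      DateTime @default(now())
}
```

Because the field carries a default, existing rows pick up `true` automatically when the schema is pushed, which is what makes the change additive and safe.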
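The prompt-builder service can be sketched as a pure function. The shapes and the system-prompt text below are illustrative stand-ins; the real IMPLEMENTATION_PROMPT_SYSTEM and input types in src/server/services/implementation-prompt-generator.ts are richer:

```typescript
// Illustrative sketch of the prompt-generator service; names of the real
// internals (beyond buildImplementationPromptInput and
// IMPLEMENTATION_PROMPT_SYSTEM) are hypothetical.

interface StepResult {
  stepName: string;
  output: string;
}

// Stand-in for the real system prompt constant.
const IMPLEMENTATION_PROMPT_SYSTEM =
  "You are a senior engineer. Turn the plan below into a concrete, " +
  "codebase-grounded implementation prompt.";

// Collapse the outputs of all completed workflow steps, plus codebase
// context, into one structured input for the model.
function buildImplementationPromptInput(
  steps: StepResult[],
  codebaseContext: string,
): string {
  const planSections = steps
    .map((s) => `## ${s.stepName}\n\n${s.output}`)
    .join("\n\n");
  return [
    "# Workflow output",
    planSections,
    "# Codebase context",
    codebaseContext,
  ].join("\n\n");
}
```

Keeping this a pure string-in, string-out function is also what made it easy to pin down with a handful of unit tests.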
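The "virtual step" wiring and the duration-tracking fix can be sketched together. This is a simplified model of the logic, not the actual workflow-engine API:

```typescript
// Illustrative sketch: run the prompt generation as a virtual step after all
// configured steps complete. All names here are hypothetical.

interface CompletedStep {
  name: string;
  output: string;
  durationMs: number;
}

function runVirtualPromptStep(
  workflow: { generatePrompt: boolean },
  completed: CompletedStep[],
  generate: (outputs: string[]) => string,
): CompletedStep | null {
  // Respect the per-workflow feature flag.
  if (!workflow.generatePrompt) return null;

  const started = Date.now();
  const prompt = generate(completed.map((s) => s.output));
  return {
    name: "implementation-prompt",
    output: prompt,
    // The bug fix: measure real elapsed time instead of hardcoding 0ms.
    durationMs: Date.now() - started,
  };
}
```

Running it strictly after the configured steps guarantees that every preceding output is available when the prompt is built, and returning null when the flag is off keeps the opt-out path trivial.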

Navigating the Bumps: Challenges & Learnings

No development journey is without its minor bumps. Here are a couple of insights from this session:

  • UI Component Quirks: We initially tried to use Badge variant="outline" in our workflow creation form for a specific visual style. However, our Badge component only supports a predefined set of variants (default, success, accent, warning, danger), and 'outline' wasn't among them. We quickly adapted by using variant="default", which fortunately renders with a border, providing a visually similar outcome. A good reminder to always check component API documentation thoroughly!

  • Deployment Considerations: While the feature is complete and all 321 tests pass, it's not yet live in production. The new generatePrompt column requires a db:push operation to update the database schema. This will be handled during our next deployment or can be triggered manually. The good news is that this is an additive change with a default value, making it a safe migration.
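For reference, in a setup like ours db:push is typically an npm script wrapping Prisma's schema-sync command. Assuming a standard configuration (the exact script definition is our convention, not shown in this post):

```shell
# Push the updated schema, including the new generatePrompt column,
# to the database without generating a migration file.
npm run db:push   # typically wraps: prisma db push
```

Since generatePrompt is additive and carries @default(true), existing workflows are backfilled safely with the feature enabled.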

What's Next?

With the code merged to main and all tests green, our immediate next steps are focused on deployment and validation:

  1. Production Deployment: Get this feature live by running db:push, building, and restarting our production environment.
  2. Live Testing: Create new workflows with the "Generate Implementation Prompt" toggle both checked and unchecked, verifying that the prompt step runs in the first case and is skipped in the second.
  3. Real-World Application: We're excited to use this feature for the very rent-a-persona implementation that inspired it, leveraging the generated prompt to kickstart development.
  4. Prompt Refinement: Continuously monitor the quality of the generated prompts and refine our IMPLEMENTATION_PROMPT_SYSTEM based on real-world output and developer feedback.

This "Auto Implementation Prompt" feature marks a significant step in our commitment to developer efficiency and intelligent automation. By leveraging AI to translate high-level product visions into concrete, codebase-grounded implementation prompts, we're not just saving time; we're empowering our development team to focus more on building and less on translation. We're eager to see the impact of this feature in action!