Unlocking Developer Memories: Building an AI-Powered Blog Generator from GitHub
We just shipped an end-to-end pipeline that transforms raw GitHub 'memory' files into polished blog posts, complete with a project-based UI and real-time generation. Here's how we built it, along with the hard-won lessons we picked up on the way.
From Code Commits to Compelling Content: Our AI-Powered Blog Generation Journey
The dream of every developer is to spend less time on mundane tasks and more time building cool stuff. For me, that often means streamlining content creation, especially when it comes to documenting my development process. What if your raw development "memories" – simple markdown files detailing progress, thoughts, and decisions – could magically transform into engaging blog posts?
That's precisely the vision we just brought to life. In a recent session, we pushed an end-to-end GitHub-to-Blog pipeline, allowing users to import their development memories from GitHub, manage them within projects, and generate blog posts with the click of a button. It's fully functional, tested, and live, marking a significant step towards truly automated content creation.
The Vision: A Seamless Content Pipeline
Our goal was ambitious: build a project-based UI that connects directly to GitHub, imports specific markdown "memory" files, and then uses an AI to convert them into public-ready blog posts. All within a mobile-first, responsive interface.
Here’s the high-level flow we aimed for:
- Project Creation: Users define a "project" within our application.
- GitHub Integration (BYOK): Connect their GitHub account (Bring Your Own Key) and select a repository.
- Memory Discovery: Specify a path within the repo where their `*.md` memory files reside.
- Database Sync: Import these files, syncing their content and metadata to our Prisma-backed database.
- AI Generation: Select imported memories and trigger an AI (Anthropic, in this case) to generate a blog post.
- UI & Publishing: View, manage, and eventually publish these generated posts, complete with beautiful markdown rendering.
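Stripped of auth and persistence, the flow above reduces to two stages: discover `*.md` memory files under a path, then hand each one to a generator. A toy sketch of those stages — every type and function name here is illustrative, not our actual API:

```typescript
// Toy model of the pipeline stages; RepoFile/Memory/BlogPost are
// illustrative shapes, not our real Prisma models.
interface RepoFile { path: string; content: string }
interface Memory { filename: string; content: string }
interface BlogPost { title: string; body: string }

// Stage 1: memory discovery — keep only *.md files under the chosen path.
function discoverMemories(files: RepoFile[], memoryPath: string): Memory[] {
  return files
    .filter((f) => f.path.startsWith(memoryPath + "/") && f.path.endsWith(".md"))
    .map((f) => ({
      filename: f.path.slice(memoryPath.length + 1),
      content: f.content,
    }));
}

// Stage 2: generation — the AI call is injected so the pipeline stays testable.
async function generatePosts(
  memories: Memory[],
  generate: (m: Memory) => Promise<BlogPost>,
): Promise<BlogPost[]> {
  const posts: BlogPost[] = [];
  for (const m of memories) {
    posts.push(await generate(m)); // sequential, like the real batch flow
  }
  return posts;
}
```

Injecting the generator function keeps the expensive Anthropic call at the edge, so the discovery and sync logic can be unit-tested with plain fixtures.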
What We Shipped: A Feature-Rich Foundation
The session culminated in three solid commits to main, bringing this vision to life. Here's a peek under the hood at what's now fully operational:
- Prisma Schema: Robust `Project` and `BlogPost` models, establishing clear tenant/user relationships.
- GitHub Connector (`src/server/services/github-connector.ts`): This is the heart of our integration. It handles everything from resolving user tokens and fetching repositories to checking memory paths, listing files, fetching content, and syncing it all to our database. It's a full BYOK (Bring Your Own Key) implementation, giving users control over their GitHub access.
- Blog Generator (`src/server/services/blog-generator.ts`): A TypeScript port of our `blog_gen.py` script, leveraging Anthropic's API to turn raw memory content into engaging blog narratives.
- tRPC Routers (`src/server/trpc/routers/projects.ts`): A comprehensive API layer for projects and blog posts, covering CRUD operations, GitHub integration (repos, files, import), and all blog post lifecycle actions (list, get, generate, batch generate, update, delete, view unblogged memories).
- Markdown Renderer (`src/components/markdown-renderer.tsx`): A custom component built with `react-markdown` and `remark-gfm`, styled with our `nyx` theme for beautiful, consistent markdown display.
- User Interface: Four new pages covering a projects list, new project creation, a detailed project view (with tabs for memories and blog posts), and a dedicated blog post viewer.
- Navigation: Seamless integration into the sidebar and a mobile-friendly bottom navigation.
We even squashed a few critical bugs and made significant UX improvements during the session, like completely restyling the "Generate More" sheet for better selection, content previews, and sticky controls.
Lessons from the Trenches: Overcoming Development Hurdles
No significant feature ships without its share of head-scratching moments. Here are some of the key lessons we learned, turning "pain" into "progress":
1. The Elusive refetch() on Disabled Queries
The Challenge: We wanted a button to manually trigger a data fetch, even if the query's enabled state was initially false.
The Pitfall: Calling reposQuery.refetch() on a tRPC query (backed by React Query v5) that was enabled: false did absolutely nothing. It's a common misconception that refetch() overrides the enabled flag.
The Solution: Instead of relying on refetch(), we managed the enabled state directly. We introduced a useState(false) variable, set it to true on button click, and passed this state directly to the enabled option of our useQuery hook. This forced the query to run when needed.
Takeaway: For on-demand fetches with React Query, explicitly control the enabled flag via component state rather than trying to force a refetch() on a disabled query.
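Framework aside, the pattern is simple: the fetch is gated by a boolean you own, and the button flips that boolean instead of calling `refetch()`. A dependency-free sketch of the same idea (in the real React component, `enabled` lives in `useState` and is passed to `useQuery`; `GatedQuery` here is just a stand-in to show why the no-op happens):

```typescript
// Minimal stand-in for a query gated by `enabled`, mirroring React Query's
// behavior: refetch() on a disabled query is a no-op.
class GatedQuery<T> {
  enabled = false;
  data: T | undefined;

  constructor(private fetcher: () => T) {}

  async refetch(): Promise<void> {
    if (!this.enabled) return; // disabled queries ignore refetch()
    this.data = this.fetcher();
  }

  // The fix: flip the flag first, then fetch.
  async enableAndFetch(): Promise<void> {
    this.enabled = true;
    await this.refetch();
  }
}
```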
2. Real-time Progress for Batch Operations
The Challenge: We implemented a generateBatch mutation, expecting to generate multiple blog posts sequentially on the server. The client needed to see live progress (e.g., "3/10 posts generated").
The Pitfall: The client was stuck at "0/10" until all posts were generated, then received the final result. Furthermore, our Zod `.max(10)` validation on the server-side batch input prevented selecting more than 10 memories, which wasn't the intended UX.
The Solution: We shifted the sequential generation logic to the client. Instead of a single generateBatch mutation, we used generateSingle.mutateAsync() within a for loop. After each successful generation, we updated a local state variable, providing immediate feedback to the user.
```tsx
// Example client-side sequential generation
// (projectId comes from the surrounding component's scope)
const [progress, setProgress] = useState(0);
const generateSingleMutation = trpc.projects.blogPosts.generateSingle.useMutation();

const handleBatchGenerate = async (memoryIds: string[]) => {
  setProgress(0);
  for (const memoryId of memoryIds) {
    await generateSingleMutation.mutateAsync({ projectId, memoryId });
    setProgress((prev) => prev + 1);
    // Optionally refetch the list of blog posts here
  }
};
```
Takeaway: For operations requiring real-time progress updates, consider offloading the sequential processing to the client or implementing a more sophisticated server-side streaming/polling mechanism. For simple cases, client-side loops with individual mutations are highly effective.
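Extracted from the React specifics, this is just a sequential loop that reports after each await. A hedged sketch, where `generateOne` stands in for `generateSingle.mutateAsync` and `onProgress` stands in for the `setProgress` state setter:

```typescript
// Sequential batch processing with per-item progress reporting.
// `generateOne` and `onProgress` are illustrative stand-ins for the
// tRPC mutation and React state setter in the real component.
async function generateSequentially(
  memoryIds: string[],
  generateOne: (id: string) => Promise<void>,
  onProgress: (done: number, total: number) => void,
): Promise<void> {
  let done = 0;
  onProgress(done, memoryIds.length); // start at 0/N
  for (const id of memoryIds) {
    await generateOne(id); // one mutation at a time
    onProgress(++done, memoryIds.length); // 1/N, 2/N, ...
  }
}
```

Because each mutation is awaited before the next begins, the progress callback fires in strict order — exactly the "3/10 posts generated" feedback the server-side batch couldn't deliver.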
3. The Mysterious .next Cache Corruption
The Challenge: After making Prisma schema changes, we ran prisma generate and restarted the dev server.
The Pitfall: We encountered clientModules errors, indicating internal Next.js cache corruption. Simply deleting .next while the dev server was running didn't fix it.
The Solution: The crucial step is to stop the dev server FIRST, then `rm -rf .next`, then `prisma generate` (if the schema changed), and finally `npm run dev`.
Takeaway: When dealing with deep cache issues in Next.js, always ensure your development server is completely stopped before clearing critical cache directories like .next.
4. TypeScript's Inference Limits with tRPC Return Types
The Challenge: We wanted to type a component prop using the return type of a tRPC query, specifically ReturnType<typeof trpc.projects.blogPosts.unblogged.useQuery>.
The Pitfall: TypeScript inferred the data property as {} (an empty object), preventing us from accessing properties like .length or using .map() on it. This happens when TypeScript can't fully resolve the generic types or complex nested structures from the useQuery hook.
The Solution: We defined an explicit interface for the expected data structure.
```tsx
// Explicit interface for unblogged memory entries
interface UnbloggedEntry {
  id: string;
  filename: string;
  contentLength: number;
  // ... any other properties expected from the query
}

// Then use it to type your component prop
interface UnbloggedMemoriesProps {
  unbloggedData: UnbloggedEntry[];
}
```
Takeaway: While TypeScript's inference is powerful, for complex tRPC query return types that are passed as props, defining explicit interfaces or types often leads to clearer code and avoids inference pitfalls.
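One caveat with hand-written interfaces like `UnbloggedEntry` is that nothing stops the API's actual shape from drifting away from them. A cheap safeguard is a small runtime guard alongside the interface, so a mismatch fails loudly instead of silently — a sketch, reusing the illustrative fields from above:

```typescript
interface UnbloggedEntry {
  id: string;
  filename: string;
  contentLength: number;
}

// Runtime check that an unknown value matches the hand-written interface,
// catching shape drift between the API and the declared type.
function isUnbloggedEntry(value: unknown): value is UnbloggedEntry {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.filename === "string" &&
    typeof v.contentLength === "number"
  );
}
```

(tRPC also ships type helpers such as `inferRouterOutputs` for deriving output types directly from the router, which is worth trying before falling back to manual interfaces.)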
What's Next?
With the core pipeline functional, our immediate focus shifts to refinement and robustness:
- Mobile Layout: Thoroughly test at 375px to verify sticky buttons, touch targets, and collapsible blog cards.
- Error Handling: Implement proper error toast notifications for failed generations or GitHub API issues.
- Pagination: Consider pagination for projects with many blog posts to improve performance and UX.
- Minor Bug Fixes: Address pre-existing UI issues, like a `Badge` variant error in a different dashboard section.
- Edit Mode: Test the raw markdown textarea toggle for blog posts and ensure the update flow works correctly.
- Regeneration Flow: Verify that regenerating a blog post from a memory entry correctly replaces the existing content.
This session was a huge step forward in automating content creation directly from developer workflows. It's exciting to see the pieces come together, and I'm looking forward to continuing to refine this powerful tool!