Stop Hoarding Prompts: Build Reusable AI Workflows That Survive Model Changes

Transition from individual prompts to systematic workflows for lasting AI effectiveness.

The Problem with Prompt Hoarding

If you’re an AI enthusiast or ML engineer working with large language models, you’ve likely experienced this:

  • You craft the perfect prompt for a specific task
  • You save it in a document or note somewhere
  • The model gets updated or you switch providers
  • Your prompt no longer works optimally
  • You start from scratch, creating a new collection of prompts

This cycle is inefficient, frustrating, and ultimately unsustainable as models continue to evolve. What we need instead is a more structured approach to working with AI systems.

The Solution: Systematic AI Workflows

Rather than focusing solely on the exact wording of prompts, we need to build systematic workflows with:

  1. Reusable templates that can be adapted to different models
  2. Variables that make prompts configurable and flexible
  3. Checklists that ensure consistency regardless of the underlying model
  4. Context engineering techniques that work across model architectures

Let’s explore each of these components in detail.

1. Templates: Beyond Text Strings

Templates are structured frameworks for prompts that separate the core instructions from the specific inputs. Microsoft recently released Prompt Orchestration Markup Language (POML), which brings HTML/XML-inspired structure to prompt engineering.

<poml>
<role>You are a data analyst specializing in market trends.</role>
<task>Analyze the following quarterly sales data and identify key patterns.</task>
<data>{{ sales_data }}</data>
<output-format>
Provide 3-5 key insights followed by recommended actions.
</output-format>
</poml>

This template approach offers several advantages:

  • Clear separation between roles, tasks, data, and output formats
  • Easy modification of individual components
  • Ability to version control your templates
  • Reusability across different models and providers
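To make the last point concrete, here is a minimal sketch of the same role/task/data/output-format structure expressed as a plain Python template. This is not the official POML SDK — the renderer is hand-rolled for illustration — but it shows how the structure stays fixed while the data slot varies:

```python
from string import Template

# The same sections as the POML example above, as a plain string template.
# Only $sales_data varies between runs; the structure is version-controllable.
ANALYSIS_TEMPLATE = Template(
    "Role: You are a data analyst specializing in market trends.\n"
    "Task: Analyze the following quarterly sales data and identify key patterns.\n"
    "Data:\n$sales_data\n"
    "Output format: Provide 3-5 key insights followed by recommended actions.\n"
)

def render_prompt(sales_data: str) -> str:
    """Fill the template's variable slots; the surrounding structure is fixed."""
    return ANALYSIS_TEMPLATE.substitute(sales_data=sales_data)

prompt = render_prompt("Q1: 120k, Q2: 135k, Q3: 160k")
```

Because the rendered string is just text, the same template can be sent to any provider's API unchanged — swapping models means swapping the transport, not the template.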

2. Variables: Making Prompts Configurable

Variables allow you to parameterize your prompts, making them adaptable to different scenarios without changing the core structure.

For example, using LangChain’s prompt template system:

from langchain.prompts import PromptTemplate

analysis_template = PromptTemplate(
    # Note: "data" must be declared too, since the template references {data}
    input_variables=["data_type", "time_period", "analysis_depth", "data", "output_format"],
    template="""
Analyze the {data_type} for {time_period} at {analysis_depth} depth.

DATA:
{data}

Provide insights in {output_format} format.
""",
)

# Can now be used with different parameters
quarterly_report = analysis_template.format(
    data_type="sales figures",
    time_period="Q3 2025",
    analysis_depth="detailed",
    output_format="executive summary",
    data=quarterly_data,
)

By parameterizing your prompts, you create flexible tools rather than rigid instructions that break when models change.

3. Checklists: Ensuring Consistency

Checklists provide structure that helps ensure consistency across model versions and vendors. They work particularly well because they tap into a fundamental capability of language models: following multi-step instructions.

Instead of crafting the perfect “magic prompt,” build a checklist into your workflow:

# Code Review Checklist

1. [ ] Identify potential security vulnerabilities
2. [ ] Check for performance bottlenecks
3. [ ] Verify error handling
4. [ ] Assess code readability and documentation
5. [ ] Suggest specific improvements with code examples

This approach has several benefits:

  • Makes expected outputs explicit and verifiable
  • Works across different model versions and providers
  • Creates a consistent pattern that models can follow
  • Enables quality control of AI outputs
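The "verifiable" part can be automated. Here is a minimal sketch (the helper names are illustrative, not from any particular library) that builds a review prompt from the checklist above and then crudely checks which numbered items a model's response actually addressed:

```python
CODE_REVIEW_CHECKLIST = [
    "Identify potential security vulnerabilities",
    "Check for performance bottlenecks",
    "Verify error handling",
    "Assess code readability and documentation",
    "Suggest specific improvements with code examples",
]

def build_checklist_prompt(code: str, checklist: list[str]) -> str:
    """Embed the checklist as numbered instructions the model must follow."""
    items = "\n".join(f"{i}. {item}" for i, item in enumerate(checklist, 1))
    return (
        "Review the code below. Address every numbered item, "
        "repeating its number in your answer.\n\n"
        f"{items}\n\nCODE:\n{code}"
    )

def covered_items(response: str, checklist: list[str]) -> list[int]:
    """Crude quality check: which item numbers appear in the response."""
    return [i for i in range(1, len(checklist) + 1) if f"{i}." in response]
```

The same checklist file can be reused verbatim against a new model version — only the `covered_items` report tells you whether quality held.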

4. Context Engineering: Optimizing Input Information

Context engineering focuses on providing the right information in the right format to get optimal results. As GitHub explains in their recent blog post, this involves:

  • Session splitting: Using separate agent sessions for different phases (planning, implementation, testing)
  • Modular instructions: Applying targeted instructions to specific file types or contexts
  • Memory-driven development: Maintaining project knowledge across sessions
  • Context optimization: Using helper files to accelerate information retrieval

For example, rather than cramming all context into one massive prompt, you might split it:

# Planning Phase
<relevant architectural documents>

# Implementation Phase
<specific code examples and patterns>

# Testing Phase
<test frameworks and validation criteria>
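The phase split above can be sketched in code. This is a hypothetical arrangement — the file names and message shape are placeholders, not a real framework API — showing each session receiving only the context for its phase:

```python
# Hypothetical phase-scoped contexts: each agent session sees only the
# material relevant to its phase, instead of one monolithic prompt.
PHASE_CONTEXT = {
    "planning": ["architecture.md"],
    "implementation": ["patterns.md", "examples.py"],
    "testing": ["test_strategy.md"],
}

def build_session_messages(phase: str, task: str) -> list[dict]:
    """Assemble a chat-style message list for one phase of the workflow."""
    docs = PHASE_CONTEXT.get(phase, [])
    context = "\n".join(f"[context file: {name}]" for name in docs)
    return [
        {"role": "system", "content": f"Phase: {phase}\n{context}"},
        {"role": "user", "content": task},
    ]

msgs = build_session_messages("planning", "Design the auth flow")
```

Keeping each phase's context small also leaves more of the model's context window for the actual task.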

Building Your First Reusable Workflow

Let’s apply these concepts to create a practical, model-agnostic workflow for content generation:

1. Create a Content Generation Template

# Content Generation Template

## Role Definition
You are a content specialist focusing on {content_type} for {target_audience}.

## Context
Topic: {topic}
Key points: {key_points}
Tone: {tone}
Length: {length}

## Output Format
- Introduction that hooks the reader
- Main sections covering key points
- Practical examples or applications
- Conclusion with call to action

2. Implement a Systematic Workflow

# Example implementation using a hypothetical AI workflow framework

from ai_workflow import Workflow, Step, Template

content_workflow = Workflow("Content Generation")

# Step 1: Research and outline
content_workflow.add_step(
    Step(
        "Research",
        template=Template("research_template.md"),
        variables={
            "topic": "${topic}",
            "depth": "comprehensive",
            "output_format": "bullet points",
        },
        output_field="research_notes",
    )
)

# Step 2: Draft content
content_workflow.add_step(
    Step(
        "Draft",
        template=Template("content_template.md"),
        variables={
            "content_type": "${content_type}",
            "target_audience": "${audience}",
            "topic": "${topic}",
            "key_points": "${research_notes}",
            "tone": "${tone}",
            "length": "${length}",
        },
        output_field="draft_content",
    )
)

# Step 3: Edit and refine
content_workflow.add_step(
    Step(
        "Edit",
        template=Template("editing_template.md"),
        variables={
            "content": "${draft_content}",
            "style_guide": "${style_guide}",
            "tone": "${tone}",
        },
        output_field="final_content",
    )
)

# Execute workflow with specific parameters
result = content_workflow.execute({
    "topic": "AI workflow automation",
    "content_type": "technical blog post",
    "audience": "ML engineers",
    "tone": "professional but conversational",
    "length": "1500 words",
    "style_guide": "APA",
})

print(result.final_content)

This approach:

  • Separates concerns into discrete steps
  • Makes each component reusable and configurable
  • Creates a systematic process rather than relying on a single perfect prompt
  • Adapts to different models or model versions

Practical Implementation with Agent Primitives

GitHub’s agent primitives framework offers a structured approach to implementing these concepts:

  1. Instructions files (.instructions.md): For modular guidance with targeted scope
  2. Chat modes (.chatmode.md): For role-based expertise with clear boundaries
  3. Agentic workflows (.prompt.md): For reusable prompts with validation
  4. Specification files (.spec.md): For implementation-ready blueprints

Here’s how you might organize a project using this approach:

project/
├── .github/
│   ├── copilot-instructions.md        # Global repository rules
│   ├── instructions/
│   │   ├── frontend.instructions.md   # Frontend-specific guidance
│   │   └── backend.instructions.md    # Backend-specific guidance
│   └── chatmodes/
│       ├── architect.chatmode.md      # Planning specialist mode
│       └── engineer.chatmode.md       # Implementation specialist mode
├── prompts/
│   ├── code-review.prompt.md          # Reusable code review workflow
│   └── feature-spec.prompt.md         # Feature specification workflow
└── specs/
    └── auth-system.spec.md            # Authentication system specification

This structure creates a reusable system that can evolve independently of specific model changes.
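One way to see why this layout helps: tooling can resolve which instructions apply to a given file. The sketch below is hypothetical — real tools (e.g. Copilot's instructions frontmatter) express scoping differently — but it captures the idea of global rules plus targeted overlays:

```python
from fnmatch import fnmatch

# Hypothetical mapping from path patterns to scoped instructions files,
# mirroring the project layout above.
INSTRUCTION_SCOPES = {
    "frontend/*": ".github/instructions/frontend.instructions.md",
    "backend/*": ".github/instructions/backend.instructions.md",
}

def instructions_for(path: str) -> list[str]:
    """Global repository rules always apply; scoped files apply on a match."""
    matched = [f for pattern, f in INSTRUCTION_SCOPES.items() if fnmatch(path, pattern)]
    return [".github/copilot-instructions.md"] + matched
```

Because scope lives in the file layout rather than inside any one prompt, adding a new area of the codebase means adding a file, not rewriting prompts.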

Tools for Building Reusable Workflows

Several emerging tools can help you implement these concepts:

  1. LangChain: Offers prompt templates and workflow components
  2. Microsoft’s POML: Provides XML-inspired structure for prompt orchestration
  3. GitHub’s Agent CLI: Enables execution of agent primitives from the command line
  4. GuardrailsAI: Helps validate outputs against defined schemas

Beyond Prompts: The Future of AI Interaction

As we move beyond prompt engineering toward systematic AI workflows, several emerging patterns are worth noting:

  1. Prompt as Code: Version-controlled, tested, and deployed like traditional software
  2. Agent Orchestration: Coordinating multiple specialized AI agents in a workflow
  3. Feedback Loops: Systematically incorporating user feedback to improve workflows
  4. Cross-Modal Integration: Combining text, image, and code understanding in unified workflows

Conclusion: From Prompts to Systems

The shift from collecting individual prompts to building systematic workflows represents a maturation in how we work with AI. By focusing on templates, variables, checklists, and context engineering, you can create resilient systems that adapt to model changes rather than breaking when the underlying AI evolves.

This approach not only makes your AI interactions more reliable but also more:

  • Maintainable: Easy to update and improve over time
  • Shareable: Clear enough for team collaboration
  • Scalable: Applicable across multiple projects and use cases
  • Durable: Resistant to model changes and updates

As models continue to evolve at a rapid pace, investing in systematic workflows rather than perfect prompts will pay dividends in the long run. Stop hoarding prompts — start building systems.

👋 Hey, I’m Dani García — Senior ML Engineer working across startups, academia, and consulting.
I write practical guides and build tools to help you get faster results in ML.

💡 If this post helped you, clap and subscribe so you don’t miss the next one.

🚀 Take the next step:

  • 🎁 Free “ML Second Brain” Template
    The Notion system I use to track experiments & ideas.
    Grab your free copy
  • 📬 Spanish Data Science Newsletter
    Weekly deep dives & tutorials in your inbox.
    Join here
  • 📘 Full-Stack ML Engineer Guide
    Learn to build real-world ML systems end-to-end.
    Get the guide
  • 🤝 Work with Me
    Need help with ML, automation, or AI strategy?
    Let’s talk
  • 🔗 Connect on LinkedIn
    Share ideas, collaborate, or just say hi.
    Connect
