Transforming Infrastructure Analysis: How We Built a Terraform Advisor Powered by AI Agents and Video-Based Memory
Roald Brunell


A question-driven approach to analyzing Terraform infrastructure with specialized AI agents


In the ever-evolving landscape of cloud infrastructure, Terraform has become the gold standard for Infrastructure as Code (IaC). However, as our codebases grow in complexity and our teams scale, one critical challenge remains: How do we efficiently ensure our infrastructure is, and stays, aligned with industry standards and best practices?

Today, I'm excited to introduce Terraform Advisor by ACP Engineering - an AI-powered solution that analyzes Terraform GitHub repositories to answer specific infrastructure questions by combining code analysis with AWS Well-Architected Framework guidance through specialized AI agents.

The Challenge: Answering Infrastructure Questions at Scale

Traditional infrastructure analysis often involves manual research across multiple sources, making it difficult to get comprehensive answers to specific questions. Development teams frequently struggle with:

  • Question Research Time: Hours spent researching specific infrastructure concerns
  • Best Practice Application: Difficulty finding relevant AWS guidance for specific scenarios
  • Knowledge Gaps: Lack of expertise to properly interpret infrastructure patterns
  • Context Switching: Jumping between documentation, code, and recommendations
  • Inconsistent Analysis: Different approaches to similar infrastructure questions

Our Solution: Question-Focused AI Analysis

The Architecture: Revolutionary Multi-Agent AI Collaboration

At the heart of Terraform Advisor lies a multi-agent AI system where autonomous AI agents collaborate like a distributed team of experts.

This isn't just parallel processing; it's emergent AI intelligence where agents:

  • Autonomously plan their research strategies
  • Share contextual knowledge across agent boundaries
  • Synthesize insights that no single agent could achieve alone
  • Adapt their approach based on real-time findings

How is this different from traditional AI systems?

  • Beyond Prompting: We've moved past simple prompt engineering to agent orchestration
  • Distributed AI Intelligence: Each agent maintains its own expertise domain and decision-making capabilities
  • Dynamic Collaboration: Agents adjust their strategies based on what other agents discover

The Technology Stack

1. Flexible GitHub Repository Analysis Tool

The system can analyze any Terraform repository by configuring:

GITHUB_REPO: "your-org/terraform-infrastructure"
GITHUB_TOKEN: "your-github-token"

The system then:

  • Searches for code patterns relevant to your specific questions
  • Identifies resources and configurations that address what you're asking
  • Documents current implementation approaches
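
The document later names GithubSearchTool as the tool behind this step, so here is a minimal sketch of how the configured repository and token could be wired into it. This assumes the crewai_tools implementation of GithubSearchTool; exact parameter names, the expected repository format (slug vs. full URL), and the example search query are illustrative and may differ from the actual project.

# Minimal sketch: wiring the configured repository into the search tool.
# Assumes the crewai_tools GithubSearchTool; parameters may differ by version.
import os

from crewai_tools import GithubSearchTool

github_search_tool = GithubSearchTool(
    github_repo=os.getenv("GITHUB_REPO"),   # e.g. "your-org/terraform-infrastructure"
    gh_token=os.getenv("GITHUB_TOKEN"),     # token with read access to the repository
    content_types=["code"],                 # restrict semantic search to code files
)

# The agent can then run question-focused searches, for example (illustrative query):
# github_search_tool.run(search_query="aws_s3_bucket encryption configuration")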

2. Memvid: Video-Based AI Memory for AWS Documentation Tool

One innovative aspect of our solution is the integration of Memvid - a ground-breaking technology that enables efficient semantic search through large documentation sets.

How Memvid Works:

  • Encodes AWS Well-Architected Framework documentation into video format
  • Enables fast semantic search across thousands of pages of AWS guidance
  • Works offline once the knowledge base is prepared
  • Provides fast retrieval of relevant AWS best practices
  • Can easily be shared across teams and organizations
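
To make this concrete, here is a minimal sketch of preparing and querying such a knowledge base, assuming the open-source memvid package's documented MemvidEncoder/MemvidRetriever API. The chunk variable, file names, and query are illustrative, not the project's actual pipeline.

# Minimal sketch: building and querying the AWS documentation memory with memvid.
# Assumes the open-source memvid package; names and file paths are illustrative.
from memvid import MemvidEncoder, MemvidRetriever

# One-time preparation: encode Well-Architected Framework text chunks into a video.
waf_documentation_chunks = ["...chunk 1...", "...chunk 2..."]  # hypothetical pre-split text
encoder = MemvidEncoder()
encoder.add_chunks(waf_documentation_chunks)
encoder.build_video("waf_docs.mp4", "waf_docs_index.json")

# At analysis time: fast, offline semantic search over the encoded knowledge base.
retriever = MemvidRetriever("waf_docs.mp4", "waf_docs_index.json")
results = retriever.search("S3 bucket encryption best practices", top_k=5)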

🎥 The AWS Knowledge Base in Action

[Animation: waf_docs.gif]

An excerpt from the AWS Well-Architected Framework documentation encoded using memvid technology. Each frame contains encoded documentation chunks that enable near-instant semantic search across thousands of pages of AWS best practices.

3. Model-Agnostic Architecture

The system supports different AI models based on your needs:

import os

# Environment-driven model selection
llm = os.getenv("MODEL")  # e.g. OpenAI GPT-4, Claude, Llama

# Tool-specific model configuration
config = dict(
    llm=dict(
        provider="openai",
        config=dict(model=os.getenv("TOOL_MODEL")),
    ),
    embedder=dict(
        provider="openai",
        config=dict(model=os.getenv("EMBEDDING_MODEL")),
    ),
)

This flexibility allows organizations to:

  • Balance Cost and Quality: Use appropriate models for different analysis depths
  • Maintain Privacy: Use local models for sensitive codebases
  • Scale Efficiently: Adjust model complexity based on question complexity

The Question-Driven Analysis Workflow

Our system processes specific infrastructure questions through a configuration-driven, three-phase approach, where each agent is defined in a YAML configuration file and uses specialized tools to contribute to answering the question you've asked.

Agent Configuration Architecture

Each agent is defined in config/agents.yaml with specific roles and tools:

# From config/agents.yaml (simplified for readability)
terraform_agent:
  role: "Senior Terraform Infrastructure Analyst"
  goal: "Answer questions by analyzing Terraform codebases"
  backstory: "You're a specialized Infrastructure Code Analyst who ..."

aws_agent:
  role: "AWS Well-Architected Framework Authority"  
  goal: "Research AWS best practices for specific questions"
  backstory: "You're a Principal AWS Solutions Architect who ..."

response_agent:
  role: "Solutions Architecture Integration Specialist"
  goal: "Synthesize analysis into actionable answers"
  backstory: "You're a Senior Solutions Architecture Integration Specialist who ..."

Note: All YAML examples in this section are simplified for readability. The actual configuration files contain much more comprehensive specifications, including detailed backstories, step-by-step methodologies, specific tool usage instructions, structured output formats, and quality standards.
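
This agents.yaml/tasks.yaml layout follows CrewAI's project convention, so here is a minimal sketch of how such a configuration is typically wired into a running crew using CrewAI's @CrewBase decorators. The class name, the single agent/task shown, and the tool attachment are illustrative; the real project defines all three agents and tasks the same way.

# Minimal sketch of loading the YAML configuration into agents and tasks,
# assuming CrewAI's @CrewBase project convention; names are illustrative.
import os

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import GithubSearchTool


@CrewBase
class TerraformAdvisorCrew:
    """Illustrative crew class; the real project structure may differ."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def terraform_agent(self) -> Agent:
        # Role, goal and backstory come from agents.yaml; tools are attached in code.
        return Agent(
            config=self.agents_config["terraform_agent"],
            tools=[GithubSearchTool(
                github_repo=os.getenv("GITHUB_REPO"),
                gh_token=os.getenv("GITHUB_TOKEN"),
            )],
        )

    @task
    def codebase_research_task(self) -> Task:
        return Task(config=self.tasks_config["codebase_research_task"])

    @crew
    def crew(self) -> Crew:
        # Sequential execution mirrors the three-phase workflow described below.
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)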

Task Execution Workflow

Each phase corresponds to a task defined in config/tasks.yaml that orchestrates the agent's work:

Phase 1: Codebase Research Task

The terraform_agent executes the codebase_research_task:

# From config/tasks.yaml (simplified for readability)
codebase_research_task:
  description: "Analyze the Terraform codebase to answer: {query}"
  expected_output: "Evidence-based answer with code examples and current implementation details"
  agent: terraform_agent

What it does: Uses GithubSearchTool to search for Terraform configurations that address your specific question.

Phase 2: AWS Research Task

The aws_agent executes the aws_research_task:

# From config/tasks.yaml (simplified for readability)
aws_research_task:
  description: "Research AWS best practices to answer: {query}"
  expected_output: "Authoritative AWS guidance with specific recommendations and requirements"
  agent: aws_agent

What it does: Uses MemvidSearchTool to search AWS Well-Architected Framework documentation for guidance relevant to your question.
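
MemvidSearchTool is presumably a custom tool wrapping the memvid retriever; here is a minimal sketch of what such a wrapper could look like, assuming CrewAI's BaseTool interface (depending on the CrewAI version, BaseTool lives in crewai.tools or crewai_tools). The class, field values, and file paths are illustrative.

# Minimal sketch of a custom agent tool wrapping memvid; names are illustrative.
from crewai.tools import BaseTool
from memvid import MemvidRetriever


class MemvidSearchTool(BaseTool):
    name: str = "AWS Well-Architected Framework Search"
    description: str = "Semantic search over the encoded AWS Well-Architected documentation."

    def _run(self, query: str) -> str:
        retriever = MemvidRetriever("waf_docs.mp4", "waf_docs_index.json")
        chunks = retriever.search(query, top_k=5)  # assumed to return matching text chunks
        return "\n\n".join(chunks)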

Phase 3: Response Synthesis Task

The response_agent executes the response_task:

# From config/tasks.yaml (simplified for readability)
response_task:
  description: "Synthesize analysis to provide comprehensive answer to: {query}"
  expected_output: "Complete answer with current state, AWS recommendations, and action plan"
  agent: response_agent
  context: [aws_research_task, codebase_research_task]

What it does: Combines the codebase analysis and AWS research into a comprehensive answer with specific recommendations.
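
Tying the three phases together, a question enters the workflow as the {query} input that the task descriptions above interpolate. A minimal sketch, assuming CrewAI's kickoff API and reusing the illustrative crew class sketched earlier; the question text is an example.

# Minimal sketch: running the three-phase workflow for one specific question.
question = "Are our S3 buckets configured according to AWS encryption best practices?"

result = TerraformAdvisorCrew().crew().kickoff(inputs={"query": question})
print(result)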

How Configuration Drives Execution

The system uses this YAML-driven approach to ensure each agent:

  1. Knows Its Role: Clear role definition and expertise areas
  2. Focuses on the Question: All agents and tasks focus on the specific question being asked to avoid generic recommendations
  3. Uses Specific Tools: Each agent has access to the appropriate tools it needs to answer the question
  4. Produces Structured Output: Expected output format defined for each task
  5. Maintains Context: Agents can be given context from previous tasks to help them answer the question

[Screenshot: execution_excerpt.png]

Above: Example of how the three agents collaborate to answer a specific infrastructure question using their specialized tools.

Why This Approach Works

1. Question-Focused Analysis

  • Avoids generic recommendations by focusing on what you actually asked
  • Provides evidence-based answers using your actual code
  • Combines current state analysis with authoritative best practices
  • Delivers actionable improvements specific to your question

2. Repository Flexibility

  • Works with any Terraform repository structure
  • Analyzes both public and private repositories
  • Supports different Terraform patterns and module structures
  • No vendor lock-in or proprietary requirements

3. AI Model Adaptability

  • Choose models based on budget and accuracy requirements
  • Support for multiple AI providers (OpenAI, Anthropic, local models)
  • Separate model configuration for different components
  • Easy to upgrade as new models become available

4. Efficient Knowledge Access

  • Fast semantic search through comprehensive AWS documentation
  • Offline-capable analysis once the knowledge base is prepared
  • Up-to-date AWS guidance whenever the knowledge base is refreshed
  • No dependency on external APIs during analysis

Conclusion: The Future of Infrastructure Analysis

Terraform Advisor isn't just another AI tool - it's a glimpse into the future of infrastructure analysis. By pioneering multi-agent AI orchestration and video-based AI memory, we're not just solving today's problems; we're building the foundation for tomorrow's intelligent infrastructure systems.

What This Means for the Industry:

  • Agent-First AI: Moving beyond prompt engineering to autonomous AI collaboration
  • Visual AI Memory: Transforming how AI systems store and access knowledge
  • Emergent Intelligence: Demonstrating how collaborative AI agents create insights impossible with single models

The technologies we've integrated represent the cutting edge of AI research, productized for real-world infrastructure challenges. This is where AI-powered infrastructure analysis is heading - and we're leading the way.


Explore our GitHub repository to get started, or reach out to discuss how Terraform Advisor can help answer your infrastructure questions.

Built with ❤️ by the team at ACP Engineering
