ADR-042: Dual Deployment Paths (CI/CD + Direct)

Status: Accepted
Date: 2025-10-17
Decision Makers: Engineering Team
Related Issues: #162, #163, #161

Context

The Loan Defenders infrastructure supports multiple deployment scenarios requiring different orchestration approaches while maintaining a single source of truth for infrastructure definitions.

Deployment Scenarios

  1. Production Deployments - Require automated, audited, and gated deployments through CI/CD
  2. Development Deployments - Need quick, interactive deployments for testing and iteration
  3. Disaster Recovery - May require manual deployment from local machines when CI/CD is unavailable
  4. Multi-Developer Environments - Developers need isolated environments for parallel development

The Challenge

How do we support both automated CI/CD deployments and direct local deployments without:

  • Duplicating infrastructure code
  • Maintaining separate deployment logic
  • Risking behavior divergence between deployment paths
  • Losing the benefits of Infrastructure as Code (IaC)

Decision

We implement a dual deployment path architecture with Bicep modules as the single source of truth:

Infrastructure as Code (Single Source of Truth)
infrastructure/bicep/modules/*.bicep
   ┌────────────┴────────────┐
   │                         │
   ↓                         ↓
GitHub Actions         Bicep Layer Files
(CI/CD Path)          (Direct Path)
   │                         │
   ↓                         ↓
Orchestrate modules    Orchestrate modules
with workflow YAML     with Bicep imports
   │                         │
   ↓                         ↓
Production Deploy      Dev/Local Deploy

Path 1: GitHub Actions (CI/CD)

GitHub workflows directly orchestrate Bicep modules with workflow-level control:

Structure:

.github/workflows/
├── deploy-infrastructure.yml   # Orchestrates foundation modules
├── deploy-platform.yml         # Orchestrates platform modules
└── deploy-apps.yml             # Orchestrates app modules

Characteristics:

  • Workflow YAML defines the deployment sequence
  • Direct calls to individual Bicep modules
  • Granular control over deployment stages
  • Automated testing and validation gates
  • OIDC authentication for security
  • Deployment history via the GitHub UI

Example:

steps:
  - name: Deploy Networking
    run: |
      az deployment group create \
        --template-file infrastructure/bicep/modules/networking.bicep \
        --parameters infrastructure/bicep/environments/dev.bicepparam

  - name: Deploy Security
    run: |
      az deployment group create \
        --template-file infrastructure/bicep/modules/security.bicep \
        --parameters infrastructure/bicep/environments/dev.bicepparam

Path 2: Bicep Layer Files (Direct)

Bicep orchestration files import and coordinate modules for simplified deployment:

Structure:

infrastructure/bicep/
├── layer1-foundation.bicep    # Imports: networking, security, ai-services, vpn
├── layer2-platform.bicep      # Imports: container-platform (ACR, Container Apps Env)
├── layer3-apps.bicep          # Imports: container apps (UI, API, MCP servers)
└── all-in-one.bicep           # Imports: all three layers for complete deployment

Characteristics:

  • Bicep modules define the deployment sequence
  • Single-command deployment
  • Idempotent and resumable
  • Interactive parameter prompts
  • Local authentication (Azure CLI)
  • Deployment logs via the Azure Portal

Example:

# Deploy complete infrastructure
az deployment group create \
  --template-file infrastructure/bicep/all-in-one.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

# Or deploy individual layers
az deployment group create \
  --template-file infrastructure/bicep/layer1-foundation.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

Shared Infrastructure (Single Source of Truth)

All Bicep modules remain in infrastructure/bicep/modules/:

infrastructure/bicep/modules/
├── networking.bicep           # VNet, NSGs, Route Tables
├── security.bicep             # Key Vault, Storage, Managed Identity
├── ai-services.bicep          # Azure AI Services
├── vpn-gateway.bicep          # VPN Gateway (optional)
├── container-platform.bicep   # ACR, Container Apps Environment
├── container-app-ui.bicep     # UI Container App
├── container-app-api.bicep    # API Container App
└── container-apps-mcp-servers.bicep  # MCP Server Container Apps

Critical Principle: Modules NEVER know which path is calling them. They are pure infrastructure definitions.
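A module that honors this principle exposes nothing but a parameter/output contract. A minimal sketch of what such a path-agnostic module might look like (the parameter names, resource name, and output are illustrative, not the actual networking.bicep interface):

```bicep
// Hypothetical excerpt of a path-agnostic module. It declares inputs
// and outputs only; nothing here reveals whether a GitHub Actions
// workflow or a layer file invoked it.
@description('Deployment location')
param location string = resourceGroup().location

@description('VNet address space, e.g. 10.0.0.0/16')
param vnetAddressPrefix string

resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: 'vnet-main' // illustrative name
  location: location
  properties: {
    addressSpace: { addressPrefixes: [ vnetAddressPrefix ] }
  }
}

// Outputs form the module contract consumed by either path.
output vnetId string = vnet.id
```

Either orchestrator supplies the parameters and consumes the outputs; the module itself stays a pure infrastructure definition.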

Rationale

Why Two Paths?

  1. Different Use Cases Require Different Orchestration
     • CI/CD needs granular control for gates, approvals, and testing
     • Local deployment needs simplicity and speed
     • Both need the same infrastructure outcomes

  2. Industry Standard Pattern
     • Terraform + Terragrunt: core modules + orchestration layers
     • Kubernetes + Helm: Kubernetes manifests + Helm charts
     • AWS CDK + CloudFormation: CDK code + generated templates
     • Pulumi + IaC: Pulumi programs + underlying providers

  3. No Code Duplication
     • Bicep modules are written once
     • Both paths import the same modules
     • Bug fixes apply to both paths automatically

  4. Flexibility Without Fragmentation
     • Teams can choose the right path for their scenario
     • New deployment methods can be added without changing modules
     • Infrastructure evolves independently of orchestration

Why NOT a Single Script Approach?

We explicitly rejected unifying both paths into a single PowerShell/Bash script because:

  1. GitHub Actions Native Features: Workflow features (gates, approvals, matrix deployments) don't translate to scripts
  2. Authentication Complexity: OIDC vs interactive auth require different approaches
  3. Visibility: GitHub UI shows deployment progress better than log parsing
  4. Maintenance: Maintaining a universal script that handles both paths adds complexity

Consequences

Positive

Single Source of Truth: All infrastructure definitions in Bicep modules
Flexibility: Right orchestration for each use case
No Duplication: Modules written once, used multiple ways
Industry Alignment: Follows established patterns (Terraform/Terragrunt, Helm/K8s)
Future-Proof: Easy to add new deployment methods (Terraform CDK, Pulumi, etc.)
Testability: Can test modules independently of orchestration
Incremental Updates: Layer-based deployment enables targeted updates

Negative

⚠️ Two Orchestration Sets: Must maintain both workflow YAML and layer Bicep files
⚠️ Sync Risk: Changes to module dependencies must be reflected in both paths
⚠️ Documentation: Must document both deployment approaches
⚠️ Learning Curve: Team must understand both approaches

Mitigation Strategies

  1. Automated Testing: CI tests ensure both paths work correctly
  2. Module Contracts: Well-defined module parameters prevent orchestration issues
  3. Documentation: Clear deployment decision tree helps users choose the right path
  4. Parameter Files: Shared .bicepparam files ensure consistency
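A shared parameter file might take the following shape (a sketch only: the parameter names are hypothetical, and `using none` is assumed here to keep the file template-agnostic so both orchestration paths can pass it to different templates):

```bicep
// infrastructure/bicep/environments/dev.bicepparam (illustrative)
// Defined once, passed by both the workflow steps and the layer
// files, so environment settings never diverge between paths.
using none

param location = 'eastus'
param environmentName = 'dev'
param deployVpnGateway = false
```

Because every `az deployment group create` invocation on either path references this one file, a setting changed here propagates to both paths automatically.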

Implementation

GitHub Actions Path (CI/CD)

Use When:

  • Deploying to production or staging
  • Deployment approvals or gates are needed
  • Automated testing should run before deployment
  • An audit trail in GitHub is required
  • Deploying as part of a release process

Workflows:

# .github/workflows/deploy-infrastructure.yml
name: Deploy Infrastructure (Foundation)
on: workflow_dispatch
jobs:
  deploy-foundation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy Networking
        run: az deployment group create --template-file modules/networking.bicep
      - name: Deploy Security
        run: az deployment group create --template-file modules/security.bicep
      - name: Deploy AI Services
        run: az deployment group create --template-file modules/ai-services.bicep

Bicep Layers Path (Direct)

Use When:

  • Developing and testing infrastructure changes
  • Deploying a personal dev environment
  • Quick iterations on Bicep code
  • Disaster recovery scenarios
  • Learning/experimenting with infrastructure

Layer Files:

// infrastructure/bicep/layer1-foundation.bicep
module networking './modules/networking.bicep' = {
  name: 'networking-deployment'
  params: { /* ... */ }
}

module security './modules/security.bicep' = {
  name: 'security-deployment'
  params: { /* ... */ }
  dependsOn: [ networking ]
}

module aiServices './modules/ai-services.bicep' = {
  name: 'ai-services-deployment'
  params: { /* ... */ }
  dependsOn: [ networking, security ]
}

Deployment:

# Full deployment
az deployment group create \
  --template-file infrastructure/bicep/all-in-one.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

# Layer-by-layer deployment
az deployment group create \
  --template-file infrastructure/bicep/layer1-foundation.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam
az deployment group create \
  --template-file infrastructure/bicep/layer2-platform.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam
az deployment group create \
  --template-file infrastructure/bicep/layer3-apps.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

4-Layer Architecture

Both deployment paths use the same layering strategy:

  1. Layer 1: Foundation (~10-15 min without VPN, ~45 min with VPN)
     • Networking (VNet, NSGs, Subnets)
     • Security (Key Vault, Storage, Managed Identity)
     • AI Services (Azure AI with Foundry Hub)
     • VPN Gateway (optional, for developer access)

  2. Layer 2: Platform (~5 min)
     • Azure Container Registry (ACR)
     • Container Apps Environment

  3. Layer 3: Applications (~2-3 min per app)
     • UI Container App
     • API Container App
     • MCP Server Container Apps (3 apps)

  4. All-in-One (~20 min total)
     • Deploys all three layers in sequence
     • Used for complete environment provisioning
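The all-in-one entry point can be sketched as a thin composition of the three layer files (file names follow the structure above; the parameter wiring is illustrative, not the actual all-in-one.bicep):

```bicep
// infrastructure/bicep/all-in-one.bicep (illustrative sketch)
targetScope = 'resourceGroup'

param location string = resourceGroup().location

module foundation './layer1-foundation.bicep' = {
  name: 'layer1-foundation'
  params: { location: location }
}

module platform './layer2-platform.bicep' = {
  name: 'layer2-platform'
  params: { location: location }
  dependsOn: [ foundation ]
}

module apps './layer3-apps.bicep' = {
  name: 'layer3-apps'
  params: { location: location }
  dependsOn: [ platform ]
}
```

Since the file only composes layers, which in turn only compose modules, it adds no infrastructure logic of its own.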

Selective Deployment

Layer 3 supports selective app deployment via parameters:

@description('Deploy UI app')
param deployUI bool = true

@description('Deploy API app')
param deployAPI bool = true

@description('Deploy MCP servers')
param deployMCP bool = true

Benefits:

  • Rebuild only changed applications
  • Faster iteration cycles
  • Reduced deployment risk
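Inside layer3-apps.bicep, these flags would gate each module with Bicep's conditional deployment syntax (module paths as in the structure above; the wiring is a sketch, not the actual file):

```bicep
// Illustrative: each app module deploys only when its flag is true.
param deployUI bool = true
param deployAPI bool = true
param deployMCP bool = true

module ui './modules/container-app-ui.bicep' = if (deployUI) {
  name: 'ui-deployment'
  params: { /* ... */ }
}

module api './modules/container-app-api.bicep' = if (deployAPI) {
  name: 'api-deployment'
  params: { /* ... */ }
}

module mcp './modules/container-apps-mcp-servers.bicep' = if (deployMCP) {
  name: 'mcp-deployment'
  params: { /* ... */ }
}
```

A deployment such as `az deployment group create ... --parameters deployUI=false deployMCP=false` would then redeploy only the API app, leaving the others untouched.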

Comparison to Industry Standards

Terraform + Terragrunt

Terraform:

  • Infrastructure modules (like our Bicep modules)
  • Reusable, composable components

Terragrunt:

  • Orchestration layer (like our layer files + workflows)
  • Handles dependencies, remote state, DRY configuration

Our Approach:

  • Bicep modules = Terraform modules
  • Layer files = Terragrunt for local
  • GitHub Actions = Terragrunt for CI/CD

Kubernetes + Helm

Kubernetes:

  • Raw YAML manifests (like our Bicep modules)
  • Direct infrastructure definition

Helm:

  • Chart orchestration (like our layer files)
  • Templating and packaging

Our Approach:

  • Bicep modules = Kubernetes manifests
  • Layer files = Helm charts
  • GitHub Actions = kubectl apply automation

AWS CDK + CloudFormation

AWS CDK:

  • High-level code (TypeScript, Python)
  • Synthesizes to CloudFormation

CloudFormation:

  • Underlying infrastructure engine

Our Approach:

  • Bicep modules = CloudFormation templates
  • Layer files = CDK apps (local)
  • GitHub Actions = CDK pipelines (CI/CD)

Decision Tree

Developers should use this decision tree to choose the right deployment path:

┌─────────────────────────────────────────┐
│ What are you deploying?                  │
└───────────────┬─────────────────────────┘
    ┌───────────┴──────────┐
    │                      │
    ▼                      ▼
Production/Staging?   Development?
    │                      │
    ├─ YES → Use CI/CD     ├─ First time? → all-in-one.bicep
    │        (GitHub       │
    │        Actions)      ├─ Infra change? → layer1-foundation.bicep
    │                      │
    └─ NO → Continue       ├─ Platform change? → layer2-platform.bicep
                           ├─ App change? → layer3-apps.bicep
                           └─ Quick test? → Specific layer

Maintenance Strategy

When Adding New Infrastructure

  1. Create/modify Bicep module in infrastructure/bicep/modules/
  2. Update layer file imports if new module
  3. Update GitHub Actions workflow if new module
  4. Update parameter files
  5. Test both deployment paths
  6. Update documentation

When Changing Module Dependencies

  1. Update module code
  2. Update layer file dependsOn clauses
  3. Update GitHub Actions workflow step order
  4. Test both deployment paths
  5. Update documentation
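One way to shrink the sync burden of step 2 is to express dependencies through module outputs instead of explicit dependsOn clauses, so ordering follows from the data flow (the output and parameter names below are hypothetical, used only to illustrate the pattern):

```bicep
// Illustrative: consuming an output creates an implicit dependency,
// so Bicep deploys security after networking without dependsOn.
module networking './modules/networking.bicep' = {
  name: 'networking-deployment'
  params: { /* ... */ }
}

module security './modules/security.bicep' = {
  name: 'security-deployment'
  params: {
    // Hypothetical output/parameter pair: the reference itself
    // encodes the dependency in the layer file.
    subnetId: networking.outputs.defaultSubnetId
  }
}
```

The GitHub Actions workflow order would still need updating by hand, but the layer files then track dependency changes automatically.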

Quality Gates

Before merging infrastructure changes:

  - [ ] Bicep linting passes (az bicep build)
  - [ ] Module can be deployed standalone (unit test)
  - [ ] Layer file deployment works (integration test)
  - [ ] GitHub Actions workflow works (E2E test)
  - [ ] Documentation updated
  - [ ] ADR updated if architecture changes

Monitoring and Validation

Both Paths Should:

  • Deploy identical infrastructure
  • Use same parameter files
  • Respect module dependencies
  • Be idempotent (safe to re-run)
  • Provide clear error messages

Validation Approach:

# Deploy via layer file
az deployment group create --template-file layer1-foundation.bicep --parameters dev.bicepparam

# Verify resources exist
az resource list --resource-group ldfdev-rg

# Deploy via GitHub Actions
# (trigger workflow)

# Verify identical outcome
az deployment group list --resource-group ldfdev-rg

References

  • Deployment Guide: docs/deployment/direct-azure-deployment.md
  • Layer Architecture: docs/deployment/layer-deployment-guide.md
  • GitHub Actions: .github/workflows/README.md
  • Bicep Modules: infrastructure/bicep/modules/README.md
  • Issue #162: Deployment script improvements
  • Issue #163: This ADR

Revision History

  • 2025-10-17: Initial ADR approved
  • Status: Accepted and implemented