# ADR-042: Dual Deployment Paths (CI/CD + Direct)

- **Status:** Accepted
- **Date:** 2025-10-17
- **Decision Makers:** Engineering Team
- **Related Issues:** #162, #163, #161
## Context

The Loan Defenders infrastructure supports multiple deployment scenarios requiring different orchestration approaches while maintaining a single source of truth for infrastructure definitions.

### Deployment Scenarios

- **Production Deployments**: require automated, audited, and gated deployments through CI/CD
- **Development Deployments**: need quick, interactive deployments for testing and iteration
- **Disaster Recovery**: may require manual deployment from local machines when CI/CD is unavailable
- **Multi-Developer Environments**: developers need isolated environments for parallel development

### The Challenge

How do we support both automated CI/CD deployments and direct local deployments without:

- Duplicating infrastructure code
- Maintaining separate deployment logic
- Risking behavior divergence between deployment paths
- Losing the benefits of Infrastructure as Code (IaC)
## Decision

We implement a dual deployment path architecture with Bicep modules as the single source of truth:

```
     Infrastructure as Code (Single Source of Truth)
                        ↓
          infrastructure/bicep/modules/*.bicep
                        ↓
           ┌────────────┴────────────┐
           │                         │
           ↓                         ↓
    GitHub Actions           Bicep Layer Files
     (CI/CD Path)              (Direct Path)
           │                         │
           ↓                         ↓
   Orchestrate modules       Orchestrate modules
   with workflow YAML        with Bicep imports
           │                         │
           ↓                         ↓
    Production Deploy         Dev/Local Deploy
```
### Path 1: GitHub Actions (CI/CD)

GitHub workflows directly orchestrate Bicep modules with workflow-level control.

**Structure:**

```
.github/workflows/
├── deploy-infrastructure.yml   # Orchestrates foundation modules
├── deploy-platform.yml         # Orchestrates platform modules
└── deploy-apps.yml             # Orchestrates app modules
```

**Characteristics:**

- Workflow YAML defines deployment sequence
- Direct calls to individual Bicep modules
- Granular control over deployment stages
- Automated testing and validation gates
- OIDC authentication for security
- Deployment history via GitHub UI

**Example** (multi-line `run` commands use a YAML block scalar so the shell line continuations are valid):

```yaml
steps:
  - name: Deploy Networking
    run: |
      az deployment group create \
        --template-file infrastructure/bicep/modules/networking.bicep \
        --parameters infrastructure/bicep/environments/dev.bicepparam
  - name: Deploy Security
    run: |
      az deployment group create \
        --template-file infrastructure/bicep/modules/security.bicep \
        --parameters infrastructure/bicep/environments/dev.bicepparam
```
### Path 2: Bicep Layer Files (Direct)

Bicep orchestration files import and coordinate modules for simplified deployment.

**Structure:**

```
infrastructure/bicep/
├── layer1-foundation.bicep   # Imports: networking, security, ai-services, vpn
├── layer2-platform.bicep     # Imports: container-platform (ACR, Container Apps Env)
├── layer3-apps.bicep         # Imports: container apps (UI, API, MCP servers)
└── all-in-one.bicep          # Imports: all three layers for complete deployment
```

**Characteristics:**

- Bicep modules define deployment sequence
- Single-command deployment
- Idempotent and resumable
- Interactive parameter prompts
- Local authentication (Azure CLI)
- Deployment logs via Azure Portal

**Example:**

```bash
# Deploy complete infrastructure
az deployment group create \
  --template-file infrastructure/bicep/all-in-one.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

# Or deploy individual layers
az deployment group create \
  --template-file infrastructure/bicep/layer1-foundation.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam
```
### Shared Infrastructure (Single Source of Truth)

All Bicep modules remain in `infrastructure/bicep/modules/`:

```
infrastructure/bicep/modules/
├── networking.bicep                  # VNet, NSGs, Route Tables
├── security.bicep                    # Key Vault, Storage, Managed Identity
├── ai-services.bicep                 # Azure AI Services
├── vpn-gateway.bicep                 # VPN Gateway (optional)
├── container-platform.bicep          # ACR, Container Apps Environment
├── container-app-ui.bicep            # UI Container App
├── container-app-api.bicep           # API Container App
└── container-apps-mcp-servers.bicep  # MCP Server Container Apps
```

**Critical Principle:** Modules NEVER know which path is calling them. They are pure infrastructure definitions.
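In practice, a path-agnostic module exposes only parameters and outputs and carries no knowledge of its caller. The sketch below illustrates the idea; the parameter names, API version, and output are hypothetical, not the actual contract of `networking.bicep`:

```bicep
// Hypothetical excerpt of a path-agnostic module (names are illustrative).
// Whether GitHub Actions or a layer file invokes it, the module only sees
// its parameters and returns its outputs.

@description('Name of the virtual network')
param vnetName string

@description('Address space for the virtual network')
param addressPrefix string = '10.0.0.0/16'

resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: vnetName
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [addressPrefix]
    }
  }
}

// Outputs form the contract that both orchestration paths consume
output vnetId string = vnet.id
```

Because both paths pass parameters in and read outputs back, neither orchestrator needs (or gets) special treatment from the module.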
## Rationale

### Why Two Paths?

1. **Different use cases require different orchestration**
   - CI/CD needs granular control for gates, approvals, and testing
   - Local deployment needs simplicity and speed
   - Both need the same infrastructure outcomes
2. **Industry-standard pattern**
   - Terraform + Terragrunt: core modules + orchestration layers
   - Kubernetes + Helm: Kubernetes manifests + Helm charts
   - AWS CDK + CloudFormation: CDK code + generated templates
   - Pulumi + IaC: Pulumi programs + underlying providers
3. **No code duplication**
   - Bicep modules are written once
   - Both paths import the same modules
   - Bug fixes apply to both paths automatically
4. **Flexibility without fragmentation**
   - Teams can choose the right path for their scenario
   - New deployment methods can be added without changing modules
   - Infrastructure evolves independently of orchestration
### Why NOT a Single Script Approach?

We explicitly rejected unifying both paths into a single PowerShell/Bash script because:

- **GitHub Actions native features**: workflow features (gates, approvals, matrix deployments) don't translate to scripts
- **Authentication complexity**: OIDC and interactive auth require different approaches
- **Visibility**: the GitHub UI shows deployment progress better than log parsing
- **Maintenance**: a universal script that handles both paths adds complexity
## Consequences

### Positive

- ✅ **Single Source of Truth**: all infrastructure definitions live in Bicep modules
- ✅ **Flexibility**: the right orchestration for each use case
- ✅ **No Duplication**: modules are written once and used multiple ways
- ✅ **Industry Alignment**: follows established patterns (Terraform/Terragrunt, Helm/K8s)
- ✅ **Future-Proof**: easy to add new deployment methods (Terraform CDK, Pulumi, etc.)
- ✅ **Testability**: modules can be tested independently of orchestration
- ✅ **Incremental Updates**: layer-based deployment enables targeted updates

### Negative

- ⚠️ **Two Orchestration Sets**: must maintain both workflow YAML and layer Bicep files
- ⚠️ **Sync Risk**: changes to module dependencies must be reflected in both paths
- ⚠️ **Documentation**: must document both deployment approaches
- ⚠️ **Learning Curve**: the team must understand both approaches
### Mitigation Strategies

- **Automated Testing**: CI tests ensure both paths work correctly
- **Module Contracts**: well-defined module parameters prevent orchestration issues
- **Documentation**: a clear deployment decision tree helps users choose the right path
- **Parameter Files**: shared `.bicepparam` files ensure consistency
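A shared parameter file keeps environment settings defined exactly once for both paths. The sketch below assumes hypothetical parameter names (`environmentName`, `location`, `deployVpnGateway`) purely for illustration:

```bicep
// Hypothetical sketch of infrastructure/bicep/environments/dev.bicepparam
// (parameter names and values are illustrative, not the project's actual
// configuration). Both the GitHub Actions workflows and the layer files
// reference this same file, so the two paths cannot drift on settings.
using './layer1-foundation.bicep'

param environmentName = 'dev'
param location = 'eastus2'
param deployVpnGateway = false
```

The `using` statement binds the parameter file to a specific template, so the Bicep tooling can validate parameter names and types at build time rather than at deployment time.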
## Implementation

### GitHub Actions Path (CI/CD)

**Use When:**

- Deploying to production or staging
- Deployment approvals or gates are needed
- Automated testing is wanted before deployment
- An audit trail in GitHub is needed
- Deploying as part of the release process
**Workflows:**

```yaml
# .github/workflows/deploy-infrastructure.yml
name: Deploy Infrastructure (Foundation)
on: workflow_dispatch

jobs:
  deploy-foundation:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Networking
        run: az deployment group create --template-file modules/networking.bicep
      - name: Deploy Security
        run: az deployment group create --template-file modules/security.bicep
      - name: Deploy AI Services
        run: az deployment group create --template-file modules/ai-services.bicep
```
### Bicep Layers Path (Direct)

**Use When:**

- Developing and testing infrastructure changes
- Deploying a personal dev environment
- Iterating quickly on Bicep code
- Disaster recovery scenarios
- Learning or experimenting with infrastructure
**Layer Files:**

```bicep
// infrastructure/bicep/layer1-foundation.bicep
module networking './modules/networking.bicep' = {
  name: 'networking-deployment'
  params: { /* ... */ }
}

module security './modules/security.bicep' = {
  name: 'security-deployment'
  params: { /* ... */ }
  dependsOn: [ networking ]
}

module aiServices './modules/ai-services.bicep' = {
  name: 'ai-services-deployment'
  params: { /* ... */ }
  dependsOn: [ networking, security ]
}
```
**Deployment:**

```bash
# Full deployment
az deployment group create \
  --template-file infrastructure/bicep/all-in-one.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

# Layer-by-layer deployment
az deployment group create \
  --template-file infrastructure/bicep/layer1-foundation.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

az deployment group create \
  --template-file infrastructure/bicep/layer2-platform.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam

az deployment group create \
  --template-file infrastructure/bicep/layer3-apps.bicep \
  --parameters infrastructure/bicep/environments/dev.bicepparam
```
### 4-Layer Architecture

Both deployment paths use the same layering strategy:

1. **Layer 1: Foundation** (~10-15 min without VPN, ~45 min with VPN)
   - Networking (VNet, NSGs, Subnets)
   - Security (Key Vault, Storage, Managed Identity)
   - AI Services (Azure AI with Foundry Hub)
   - VPN Gateway (optional, for developer access)
2. **Layer 2: Platform** (~5 min)
   - Azure Container Registry (ACR)
   - Container Apps Environment
3. **Layer 3: Applications** (~2-3 min per app)
   - UI Container App
   - API Container App
   - MCP Server Container Apps (3 apps)
4. **All-in-One** (~20 min total)
   - Deploys all three layers in sequence
   - Used for complete environment provisioning
### Selective Deployment

Layer 3 supports selective app deployment via parameters:

```bicep
@description('Deploy UI app')
param deployUI bool = true

@description('Deploy API app')
param deployAPI bool = true

@description('Deploy MCP servers')
param deployMCP bool = true
```

**Benefits:**

- Rebuild only changed applications
- Faster iteration cycles
- Reduced deployment risk
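Inside the layer file, each flag would typically gate its module with a Bicep `if` condition. The following is a sketch of that pattern, not the actual contents of `layer3-apps.bicep`:

```bicep
// Hypothetical sketch of how a flag gates a module in layer3-apps.bicep.
// A module guarded by `if (...)` is skipped entirely when its flag is
// false, so a run with deployUI=false touches only the other apps.

@description('Deploy UI app')
param deployUI bool = true

module uiApp './modules/container-app-ui.bicep' = if (deployUI) {
  name: 'ui-app-deployment'
  params: { /* ... */ }
}
```

Because the condition lives in the orchestration layer rather than the module, `container-app-ui.bicep` itself stays path-agnostic, consistent with the Critical Principle above.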
## Comparison to Industry Standards

### Terraform + Terragrunt

**Terraform:**

- Infrastructure modules (like our Bicep modules)
- Reusable, composable components

**Terragrunt:**

- Orchestration layer (like our layer files + workflows)
- Handles dependencies, remote state, DRY configuration

**Our Approach:**

- Bicep modules = Terraform modules
- Layer files = Terragrunt for local
- GitHub Actions = Terragrunt for CI/CD

### Kubernetes + Helm

**Kubernetes:**

- Raw YAML manifests (like our Bicep modules)
- Direct infrastructure definition

**Helm:**

- Chart orchestration (like our layer files)
- Templating and packaging

**Our Approach:**

- Bicep modules = Kubernetes manifests
- Layer files = Helm charts
- GitHub Actions = kubectl apply automation

### AWS CDK + CloudFormation

**AWS CDK:**

- High-level code (TypeScript, Python)
- Synthesizes to CloudFormation

**CloudFormation:**

- Underlying infrastructure engine

**Our Approach:**

- Bicep modules = CloudFormation templates
- Layer files = CDK apps (local)
- GitHub Actions = CDK pipelines (CI/CD)
## Decision Tree

Developers should use this decision tree to choose the right deployment path:

```
┌─────────────────────────────────────────┐
│        What are you deploying?          │
└───────────────┬─────────────────────────┘
                │
     ┌──────────┴───────────┐
     │                      │
     ▼                      ▼
Production/Staging?     Development?
     │                      │
     ├─ YES → Use CI/CD     ├─ First time?      → all-in-one.bicep
     │        (GitHub       │
     │        Actions)      ├─ Infra change?    → layer1-foundation.bicep
     │                      │
     └─ NO → Continue       ├─ Platform change? → layer2-platform.bicep
                            │
                            ├─ App change?      → layer3-apps.bicep
                            │
                            └─ Quick test?      → Specific layer
```
## Maintenance Strategy

### When Adding New Infrastructure

1. Create or modify the Bicep module in `infrastructure/bicep/modules/`
2. Update layer file imports if the module is new
3. Update the GitHub Actions workflow if the module is new
4. Update parameter files
5. Test both deployment paths
6. Update documentation

### When Changing Module Dependencies

1. Update the module code
2. Update layer file `dependsOn` clauses
3. Update GitHub Actions workflow step order
4. Test both deployment paths
5. Update documentation
## Quality Gates

Before merging infrastructure changes:

- [ ] Bicep linting passes (`az bicep build`)
- [ ] Module can be deployed standalone (unit test)
- [ ] Layer file deployment works (integration test)
- [ ] GitHub Actions workflow works (E2E test)
- [ ] Documentation updated
- [ ] ADR updated if architecture changes
## Monitoring and Validation

**Both Paths Should:**

- Deploy identical infrastructure
- Use the same parameter files
- Respect module dependencies
- Be idempotent (safe to re-run)
- Provide clear error messages

**Validation Approach:**

```bash
# Deploy via layer file
az deployment group create --template-file layer1-foundation.bicep --parameters dev.bicepparam

# Verify resources exist
az resource list --resource-group ldfdev-rg

# Deploy via GitHub Actions
# (trigger workflow)

# Verify identical outcome
az deployment group list --resource-group ldfdev-rg
```
## Related Documentation

- Deployment Guide: `docs/deployment/direct-azure-deployment.md`
- Layer Architecture: `docs/deployment/layer-deployment-guide.md`
- GitHub Actions: `.github/workflows/README.md`
- Bicep Modules: `infrastructure/bicep/modules/README.md`
- Issue #162: Deployment script improvements
- Issue #163: This ADR
## References

- Azure Verified Modules
- Bicep Modules Documentation
- Terraform + Terragrunt Pattern
- Helm Charts Pattern
- AWS CDK Documentation

## Revision History

- 2025-10-17: Initial ADR approved
- Status: Accepted and implemented