
Bicep Module Organization & Deployment Strategy

✅ Final Architecture (Implemented)

Clear Separation of Concerns

infrastructure/bicep/
├── main-avm.bicep                          # Main orchestrator (foundation + platform)
│   ├── networking.bicep                    # VNet, subnets, NSGs
│   ├── security.bicep                      # Key Vault, Storage, Managed Identity
│   ├── ai-services.bicep                   # Azure AI Services, AI Hub
│   ├── container-platform.bicep            # ACR + Container Apps Environment (ONLY)
│   └── vpn-gateway.bicep                   # VPN Gateway (dev only)
└── modules/
    ├── container-apps-mcp-servers.bicep    # All 3 MCP servers (independent deployment)
    │   └── container-app-mcp-server.bicep  # Reusable MCP server template
    ├── container-app-ui.bicep              # UI app (TODO)
    └── container-app-api.bicep             # API app (TODO)

Key Design Decisions

1. container-platform.bicep = Platform ONLY

Contents: Azure Container Registry + Container Apps Environment
No applications - just the shared infrastructure

Why?

  • Platform is stable, apps change frequently
  • Can deploy the platform once, update apps independently
  • Clear responsibility: platform vs applications
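A minimal sketch of what the platform-only module might look like, using AVM for both resources. The module versions and some parameter names below are illustrative, not copied from the real file:

```bicep
// Hypothetical sketch of container-platform.bicep: platform resources only.
// AVM module versions and parameter names here are illustrative.
param location string = resourceGroup().location
param namePrefix string
param logAnalyticsWorkspaceResourceId string

module acr 'br/public:avm/res/container-registry/registry:0.5.1' = {
  name: 'acr-deployment'
  params: {
    name: '${namePrefix}acr' // registry names allow alphanumerics only
    location: location
    acrSku: 'Standard'
  }
}

module containerEnv 'br/public:avm/res/app/managed-environment:0.8.0' = {
  name: 'cae-deployment'
  params: {
    name: '${namePrefix}-cae'
    location: location
    logAnalyticsWorkspaceResourceId: logAnalyticsWorkspaceResourceId
  }
}

// Downstream app modules consume these outputs instead of redeploying the platform
output acrLoginServer string = acr.outputs.loginServer
output environmentResourceId string = containerEnv.outputs.resourceId
```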

2. container-apps-mcp-servers.bicep = All MCP Servers

Contents: All 3 MCP server container apps
Independent deployment - can update without touching UI/API

Why?

  • MCP servers evolve together (new tools, shared patterns)
  • Can deploy/update frequently without affecting UI/API
  • Logical grouping: all backend tools in one module
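One way to sketch that grouping: a single array drives three instances of the reusable template. Parameter names are assumptions; the server names and ports come from this document:

```bicep
// Hypothetical sketch of container-apps-mcp-servers.bicep.
param environmentId string
param acrLoginServer string
param imageTag string = 'latest'

// Adding a fourth server is a one-line change to this array.
var mcpServers = [
  { name: 'application-verification', port: 8010 }
  { name: 'document-processing', port: 8011 }
  { name: 'financial-calculations', port: 8012 }
]

module mcpApps 'container-app-mcp-server.bicep' = [for server in mcpServers: {
  name: 'mcp-${server.name}'
  params: {
    appName: 'mcp-${server.name}'
    targetPort: server.port
    environmentId: environmentId
    image: '${acrLoginServer}/mcp-${server.name}:${imageTag}'
  }
}]
```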

3. Future: Separate UI and API Modules

Pattern: container-app-ui.bicep and container-app-api.bicep
Benefit: Each app can be deployed independently


Deployment Strategy

Phase 1: Foundation + Platform (One-Time Setup)

# Deploy infrastructure (VNet, AI, ACR, Container Environment)
./infrastructure/scripts/deploy.sh dev

What gets deployed:

  • Networking (VNet, subnets, NSGs)
  • Security (Managed Identity, Storage)
  • AI Services (Azure AI Hub, models)
  • Container Platform (ACR + Container Apps Environment)
  • VPN Gateway (dev only, optional)

Frequency: Once per environment, rarely updated
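Under the hood, deploy.sh presumably wraps a resource-group deployment of the orchestrator template. A minimal sketch, shown here building the command into a variable rather than running it; the rg-ldf-&lt;env&gt; resource-group naming is an assumption, not taken from the real script:

```shell
#!/bin/sh
# Hypothetical core of deploy.sh (resource-group naming is an assumption).
ENVIRONMENT="${1:-dev}"
RESOURCE_GROUP="rg-ldf-${ENVIRONMENT}"
TEMPLATE="infrastructure/bicep/main-avm.bicep"
PARAMS="infrastructure/bicep/environments/${ENVIRONMENT}.parameters.json"

# Incremental mode (the default) leaves unchanged resources alone.
DEPLOY_CMD="az deployment group create --resource-group ${RESOURCE_GROUP} --template-file ${TEMPLATE} --parameters @${PARAMS} --mode Incremental"
echo "$DEPLOY_CMD"
```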


Phase 2: MCP Servers (Independent, Frequent Updates)

# Build and push MCP server images
./infrastructure/scripts/build-and-push-mcp-images.sh dev latest

# Deploy MCP servers
./infrastructure/scripts/deploy-mcp-servers.sh dev

What gets deployed:

  • Application Verification MCP Server (port 8010)
  • Document Processing MCP Server (port 8011)
  • Financial Calculations MCP Server (port 8012)

Frequency: Frequent (as tools/features are added)
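The reusable template each server shares might look roughly like this. Parameter names, the API version, and the resource limits are assumptions for illustration:

```bicep
// Hypothetical sketch of container-app-mcp-server.bicep.
param appName string
param targetPort int
param environmentId string
param image string
param location string = resourceGroup().location

resource mcpApp 'Microsoft.App/containerApps@2024-03-01' = {
  name: appName
  location: location
  identity: { type: 'SystemAssigned' }
  properties: {
    environmentId: environmentId
    configuration: {
      ingress: {
        external: false // internal only: reached by the API, not the internet
        targetPort: targetPort
      }
    }
    template: {
      containers: [
        {
          name: appName
          image: image
          resources: { cpu: json('0.5'), memory: '1Gi' }
        }
      ]
      scale: { minReplicas: 1, maxReplicas: 3 }
    }
  }
}
```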


Phase 3: UI and API (TODO - Future)

# Build and push UI/API images (TODO: create script)
./infrastructure/scripts/build-and-push-app-images.sh dev latest

# Deploy UI and API (TODO: create script)
./infrastructure/scripts/deploy-apps.sh dev

What gets deployed:

  • UI Container App (external ingress)
  • API Container App (internal ingress)

Frequency: Regular (as features are developed)


Module Responsibilities

Module                            Purpose                           Updates   Deployment Command
container-platform.bicep          ACR + Container Apps Environment  Rare      ./deploy.sh dev
container-apps-mcp-servers.bicep  All MCP servers                   Frequent  ./deploy-mcp-servers.sh dev
container-app-ui.bicep            UI application                    Regular   ./deploy-apps.sh dev (future)
container-app-api.bicep           API application                   Regular   ./deploy-apps.sh dev (future)

Benefits of This Architecture

1. Independent Deployment Cycles

  • Update MCP servers without redeploying platform or UI/API
  • Update UI/API without affecting MCP servers
  • Platform changes are rare and isolated

2. Fast Iteration

# Make MCP tool changes → rebuild → redeploy (< 5 minutes)
./build-and-push-mcp-images.sh dev latest
./deploy-mcp-servers.sh dev

3. Clear Separation

  • Platform team owns container-platform.bicep
  • Backend team owns container-apps-mcp-servers.bicep
  • Frontend team owns container-app-ui.bicep
  • API team owns container-app-api.bicep

4. Rollback Support

# Rollback MCP servers independently
./deploy-mcp-servers.sh dev --image-tag v1.2.3

# Rollback to previous revision
az containerapp revision copy --name <app-name> --resource-group <resource-group> --from-revision <previous-revision>

5. Cost Optimization

  • Deploy only what you need
  • Test MCP servers in dev without full stack
  • Scale each component independently

File Structure

infrastructure/
├── bicep/
│   ├── main-avm.bicep                          # Foundation orchestrator
│   ├── modules/
│   │   ├── networking.bicep                     # VNet infrastructure
│   │   ├── security.bicep                       # Security resources
│   │   ├── ai-services.bicep                    # AI services
│   │   ├── container-platform.bicep             # ACR + Environment ONLY
│   │   ├── container-apps-mcp-servers.bicep     # All MCP servers
│   │   ├── container-app-mcp-server.bicep       # Reusable MCP template
│   │   ├── container-app-ui.bicep               # UI (TODO)
│   │   ├── container-app-api.bicep              # API (TODO)
│   │   ├── rbac.bicep                           # Role assignments
│   │   └── vpn-gateway.bicep                    # VPN (dev only)
│   └── environments/
│       ├── dev.parameters.json                  # Foundation parameters
│       ├── dev-container-platform.parameters.json  # Platform parameters
│       └── dev-mcp-servers.parameters.json      # MCP servers parameters
├── scripts/
│   ├── deploy.sh                                # Deploy foundation + platform
│   ├── deploy-mcp-servers.sh                    # Deploy MCP servers
│   ├── build-and-push-mcp-images.sh             # Build MCP images
│   └── deploy-models.sh                         # Deploy AI models

Common Workflows

New MCP Tool Added

# 1. Add tool to MCP server code
# 2. Rebuild image
./infrastructure/scripts/build-and-push-mcp-images.sh dev latest

# 3. Redeploy MCP servers
./infrastructure/scripts/deploy-mcp-servers.sh dev

# Done! (< 5 minutes)

Update Resource Allocation

# 1. Edit dev-mcp-servers.parameters.json
#    Change cpu: "0.5" → "1.0"

# 2. Redeploy
./infrastructure/scripts/deploy-mcp-servers.sh dev

# Bicep updates only the changed resources
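The edit in step 1 touches a parameter value like the following. This is a hypothetical shape for dev-mcp-servers.parameters.json; the real file's parameter names may differ:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "mcpServerCpu": {
      "value": "1.0"
    },
    "mcpServerMemory": {
      "value": "2Gi"
    }
  }
}
```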

Add New MCP Server

# 1. Create Dockerfile in apps/mcp_servers/new_server/
# 2. Add to container-apps-mcp-servers.bicep
# 3. Update dev-mcp-servers.parameters.json
# 4. Build and deploy
./infrastructure/scripts/build-and-push-mcp-images.sh dev latest
./infrastructure/scripts/deploy-mcp-servers.sh dev

Migration Path (from old to new)

Before (Confusing)

container-platform.bicep = ACR + Environment + MCP Servers (all mixed)
container-apps.bicep = Environment only (unused?)

After (Clear)

container-platform.bicep = ACR + Environment (platform only)
container-apps-mcp-servers.bicep = MCP Servers (applications)

Migration Steps

  1. ✅ Remove MCP server deployment from container-platform.bicep
  2. ✅ Create container-apps-mcp-servers.bicep
  3. ✅ Update parameter files
  4. ✅ Create deploy-mcp-servers.sh script
  5. ⏳ Test deployment end-to-end
  6. ⏳ Deprecate old container-apps.bicep

Summary

Platform (Stable): ACR + Container Apps Environment
Applications (Dynamic): MCP Servers, UI, API (deployed separately)

Result: Fast iterations, clear responsibilities, independent deployments

Module Hierarchy

main-avm.bicep (orchestrator)
├── networking.bicep          - VNet, subnets, NSGs
├── security.bicep            - Key Vault, Storage, Managed Identity
├── ai-services.bicep         - Azure AI Services, AI Hub
├── container-apps.bicep      - Container Apps Environment (shared platform)
├── container-platform.bicep  - ACR + Container Apps Environment (alternative)
└── vpn-gateway.bicep         - VPN Gateway (dev only)

Current Confusion: Two Similar Modules

1. container-apps.bicep (Legacy/Original)

Purpose: Creates only the Container Apps Environment
Resources:

  • Container Apps Environment (using the native Bicep resource)
  • Requires the Log Analytics workspace customer ID and shared key

When used: Called from main-avm.bicep if deployApps flag is true

Limitations:

  • Only creates the environment, not the ACR
  • Doesn't deploy individual container apps (UI, API, MCP servers)
  • Uses the native resource (not an AVM module)
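For context, the native-resource shape that container-apps.bicep uses looks roughly like this sketch (parameter names and the API version are assumptions), which is why the workspace customer ID and shared key must be passed in:

```bicep
param name string
param location string = resourceGroup().location
param logAnalyticsCustomerId string
@secure()
param logAnalyticsSharedKey string

resource containerEnv 'Microsoft.App/managedEnvironments@2024-03-01' = {
  name: name
  location: location
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsCustomerId
        sharedKey: logAnalyticsSharedKey // secret: not retrievable from the workspace resource ID alone
      }
    }
  }
}
```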

2. container-platform.bicep (Enhanced - What We Just Updated)

Purpose: Creates ACR + Container Apps Environment + MCP Servers
Resources:

  • Azure Container Registry (using an AVM module)
  • Container Apps Environment (using an AVM module)
  • 3 MCP Server Container Apps (using our parameterized module)

When used: Can be deployed independently for container platform setup

Advantages:

  • Uses Azure Verified Modules (AVM) for the ACR and the Environment
  • Includes the MCP server deployments
  • More complete solution for container-based workloads


The Problem: Overlapping Modules

Issue: We have two modules that create Container Apps Environments:

  1. container-apps.bicep - standalone environment
  2. container-platform.bicep - environment + ACR + MCP servers

This creates confusion about:

  • Which module to use when?
  • How to deploy UI and API apps?
  • Where do MCP servers fit in the architecture?


Option A: Use main-avm.bicep as Single Entry Point (RECOMMENDED)

Approach: Update main-avm.bicep to orchestrate everything

main-avm.bicep
├── networking.bicep
├── security.bicep
├── ai-services.bicep
├── container-platform.bicep     // Deploys ACR + Environment + MCP servers
│   └── container-app-mcp-server.bicep (×3)
├── container-app-ui.bicep       // NEW: Deploy UI
├── container-app-api.bicep      // NEW: Deploy API
└── vpn-gateway.bicep

Benefits:

  • Single deployment command for the entire infrastructure
  • Clear dependency order
  • All resources deployed together
  • Easier to maintain

Steps:

  1. Remove the old container-apps.bicep (or mark it deprecated)
  2. Update main-avm.bicep to call container-platform.bicep
  3. Create the container-app-ui.bicep module
  4. Create the container-app-api.bicep module
  5. Update main-avm.bicep to deploy the UI and API apps

Deployment:

./infrastructure/scripts/deploy.sh dev
# Deploys everything: network, security, AI, ACR, environment, UI, API, MCP servers


Option B: Separate Modules for Different Concerns (MODULAR)

Approach: Keep infrastructure and apps separate

Phase 1: Infrastructure
  ./deploy.sh dev --stage foundation
  → network, security, AI, ACR, Container Apps Environment

Phase 2: Applications
  ./deploy-apps.sh dev
  → UI, API, MCP servers

Benefits:

  • Can deploy infrastructure without apps
  • Can redeploy apps without touching infrastructure
  • Easier to test app changes
  • Faster deployment cycles for apps

Structure:

infrastructure/bicep/
├── main-avm.bicep              // Infrastructure only
│   ├── networking.bicep
│   ├── security.bicep
│   ├── ai-services.bicep
│   ├── container-platform.bicep  // ACR + Environment (no apps)
│   └── vpn-gateway.bicep
└── apps-deployment.bicep        // Applications only
    ├── container-app-ui.bicep
    ├── container-app-api.bicep
    └── container-app-mcp-server.bicep (×3)

Deployment:

# Deploy infrastructure
./infrastructure/scripts/deploy.sh dev

# Deploy applications separately
./infrastructure/scripts/deploy-apps.sh dev


Option C: Keep Current Structure, Clarify Roles (MINIMAL CHANGE)

Approach: Document and clarify existing modules

Changes:

  1. Rename container-apps.bicep → container-apps-environment-only.bicep
  2. Update main-avm.bicep to use container-platform.bicep instead
  3. Add UI and API deployments to container-platform.bicep

Structure:

infrastructure/bicep/modules/
├── container-platform.bicep        // ACR + Environment + ALL APPS
│   ├── container-app-ui.bicep
│   ├── container-app-api.bicep
│   └── container-app-mcp-server.bicep (×3)
├── container-apps-environment-only.bicep  // DEPRECATED
└── ... other modules


My Recommendation: Option A (Single Entry Point)

Why?

  1. Simplicity: One command deploys everything
  2. Consistency: All resources managed together
  3. Dependencies: Bicep handles dependency order automatically
  4. Azure Best Practice: Incremental mode means only changes are deployed
  5. Faster iteration: Change one module, redeploy main - only that module updates

Implementation Plan

Step 1: Create UI and API Container App Modules

infrastructure/bicep/modules/
├── container-app-ui.bicep       // NEW
└── container-app-api.bicep      // NEW

Step 2: Update main-avm.bicep

// Remove the old container-apps.bicep reference, add the new modules
module containerPlatform 'modules/container-platform.bicep' = {
  name: 'container-platform'
  // Deploys ACR + Environment + MCP servers
  params: { /* ... */ }
}

module uiApp 'modules/container-app-ui.bicep' = {
  name: 'ui-app'
  dependsOn: [containerPlatform]
  // Deploys the UI container app
  params: { /* ... */ }
}

module apiApp 'modules/container-app-api.bicep' = {
  name: 'api-app'
  dependsOn: [containerPlatform]
  // Deploys the API container app
  params: { /* ... */ }
}

Step 3: Update Deployment Script

# No changes needed - deploy.sh already calls main-avm.bicep
./infrastructure/scripts/deploy.sh dev

Step 4: Mark Old Module as Deprecated

// container-apps.bicep
// ⚠️ DEPRECATED: Use container-platform.bicep instead
// This module is kept for backwards compatibility only

Deployment Flow (After Refactoring)

Full Deployment (Initial Setup)

# 1. Build and push images to ACR
./infrastructure/scripts/build-and-push-mcp-images.sh dev latest

# 2. Deploy all infrastructure + apps
./infrastructure/scripts/deploy.sh dev
# → Deploys: Network, Security, AI, ACR, Container Apps Environment,
#            UI, API, MCP Servers (all in one command)

# 3. Deploy AI models (separate, long-running)
./infrastructure/scripts/deploy-models.sh dev

Incremental Updates (Day-to-Day Development)

Update MCP servers:

# 1. Rebuild images
./infrastructure/scripts/build-and-push-mcp-images.sh dev latest

# 2. Redeploy (only container-platform module updates)
./infrastructure/scripts/deploy.sh dev

Update UI or API:

# 1. Rebuild specific image
docker build -t ldfdevacr.azurecr.io/ui:latest -f apps/ui/Dockerfile .
docker push ldfdevacr.azurecr.io/ui:latest

# 2. Redeploy (only UI module updates)
./infrastructure/scripts/deploy.sh dev

Update infrastructure only:

# Just redeploy - Bicep's incremental mode handles it
./infrastructure/scripts/deploy.sh dev


Benefits of Single Entry Point

1. Atomic Deployments

All resources deployed together, ensuring consistency

2. Dependency Management

Bicep automatically handles dependencies between modules

3. Rollback Support

Can rollback entire deployment to previous version

4. CI/CD Simplification

One GitHub Action workflow for everything

5. Faster Development

Change one module, redeploy main - Bicep only updates changed resources


Current Issue: UI and API Not in Bicep

Problem: We have Dockerfiles for UI and API, but no Bicep modules to deploy them

Need to Create:

  1. container-app-ui.bicep - deploys the UI with external ingress
  2. container-app-api.bicep - deploys the API with internal ingress (but accessible from the UI)

These can follow the same pattern as container-app-mcp-server.bicep:

  • System-assigned managed identity
  • Health probes
  • Auto-scaling
  • Environment variables
  • ACR integration
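Applying that pattern, container-app-ui.bicep might look roughly like the sketch below, with external ingress and identity-based ACR pulls. The app name, port, and health-probe path are illustrative assumptions:

```bicep
// Hypothetical sketch of container-app-ui.bicep (names/ports illustrative).
param environmentId string
param acrLoginServer string
param image string
param location string = resourceGroup().location

resource uiApp 'Microsoft.App/containerApps@2024-03-01' = {
  name: 'ui'
  location: location
  identity: { type: 'SystemAssigned' }
  properties: {
    environmentId: environmentId
    configuration: {
      ingress: {
        external: true // the UI is reachable from the internet
        targetPort: 3000
      }
      registries: [
        {
          server: acrLoginServer
          identity: 'system' // pull images via the system-assigned identity
        }
      ]
    }
    template: {
      containers: [
        {
          name: 'ui'
          image: image
          probes: [
            { type: 'Liveness', httpGet: { path: '/health', port: 3000 } }
          ]
        }
      ]
    }
  }
}
```

The API module would follow the same shape with `external: false`, which keeps it reachable from the UI inside the environment but not from the internet.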


Next Steps

Immediate (This Session)

  1. ✅ Understand current module structure (DONE)
  2. ⏳ Decide on deployment strategy (Option A recommended)
  3. ⏳ Create container-app-ui.bicep
  4. ⏳ Create container-app-api.bicep
  5. ⏳ Update main-avm.bicep to orchestrate everything

Short-term

  1. ⏳ Test full deployment end-to-end
  2. ⏳ Update documentation
  3. ⏳ Deprecate old container-apps.bicep

Long-term

  1. ⏳ Create GitHub Actions workflow for automated deployments
  2. ⏳ Add integration tests
  3. ⏳ Set up monitoring and alerts

Summary

Current State:

  • container-apps.bicep creates only the Container Apps Environment
  • container-platform.bicep creates ACR + Environment + MCP servers
  • UI and API have no Bicep modules (not deployed via IaC)

Recommended State:

  • main-avm.bicep orchestrates everything
  • container-platform.bicep creates ACR + Environment + MCP servers
  • container-app-ui.bicep deploys the UI
  • container-app-api.bicep deploys the API
  • One command: ./deploy.sh dev deploys everything

Deployment Strategy: Single entry point with modular Bicep files, leveraging Bicep's incremental deployment for fast iterations.