
ADR-039: MCP Servers Deployment on Azure Container Apps

Status: Accepted
Date: 2025-01-15
Deciders: Architecture Team, DevOps Team
Related: ADR-035 (Container Deployment Strategy), ADR-009 (Azure Container Apps), Issue #147


Context

The Loan Defenders system uses 3 MCP (Model Context Protocol) servers to provide specialized tools for AI agents:

  1. Application Verification Server (Port 8010) - Identity, employment, credit checks
  2. Document Processing Server (Port 8011) - OCR, document analysis
  3. Financial Calculations Server (Port 8012) - DTI, LTV, risk scoring

These servers are already containerized and tested locally (PR #155), but need to be deployed to Azure for production use.

Key Requirements

  1. Security: Internal-only endpoints (no public internet access)
  2. Performance: Low latency (<100ms), no cold starts
  3. Reliability: High availability, health monitoring
  4. Cost: Budget-conscious for MVP phase
  5. Scale: Auto-scaling for variable load
  6. Consistency: Match existing deployment patterns (API/UI already on Container Apps)

Research Conducted

We evaluated:

  • Azure deployment options (Container Apps, Functions, AKS, App Service)
  • MCP protocol specification and transport mechanisms
  • GitHub MCP Registry and Azure container services
  • Industry deployment patterns for production MCP servers
  • Security models for internal service communication


Decision

Deploy all 3 MCP servers to Azure Container Apps with internal ingress.

Deployment Architecture

┌────────────────────────────────────────────────────────────────────┐
│                  Azure Container Apps Environment                  │
│                    (VNet-integrated, internal)                     │
│                                                                    │
│  ┌─────────────┐                                                  │
│  │   UI        │  External ingress (HTTPS)                        │
│  │  Port 8080  │  Public access via Azure domain                  │
│  └──────┬──────┘                                                  │
│         │                                                          │
│         │ HTTPS (Internal)                                        │
│         ▼                                                          │
│  ┌─────────────┐                                                  │
│  │   API       │  Internal ingress (HTTPS)                        │
│  │  Port 8000  │  No public access                                │
│  └──────┬──────┘                                                  │
│         │                                                          │
│         │ Internal Service Discovery (Azure DNS)                  │
│         │                                                          │
│  ┌──────┴──────────────────┬──────────────────┐                  │
│  │                         │                  │                  │
│  ▼                         ▼                  ▼                  │
│ ┌──────────────┐  ┌──────────────┐  ┌──────────────┐            │
│ │ MCP Server 1 │  │ MCP Server 2 │  │ MCP Server 3 │            │
│ │ Verification │  │ Documents    │  │ Financial    │            │
│ │ Port 8010    │  │ Port 8011    │  │ Port 8012    │            │
│ │ Internal     │  │ Internal     │  │ Internal     │            │
│ └──────────────┘  └──────────────┘  └──────────────┘            │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘

Internet ──▶ UI (✅ Public)
            └──▶ API (❌ Internal only)
                  └──▶ MCP Servers (❌ Internal only)

Options Evaluated

Option 1: Azure Container Apps (✅ Selected)

Description: Deploy each MCP server as separate Container App with internal ingress.

Technical Specifications

Ingress Configuration:

ingress: {
  external: false          // CRITICAL: No public access
  targetPort: 8010         // Service port (8010, 8011, 8012)
  transport: 'http'        // Internal HTTP (Azure adds TLS)
  allowInsecure: false     // Enforce HTTPS
}

Resource Allocation (per server):

  • CPU: 0.5 vCPU
  • Memory: 1 GB
  • Min replicas: 1 (no cold start)
  • Max replicas: 3 (auto-scale)

Scaling Rules:

scale: {
  minReplicas: 1
  maxReplicas: 3
  rules: [
    {
      name: 'http-scaling'
      http: {
        metadata: {
          concurrentRequests: '50'
        }
      }
    }
  ]
}

Health Probes:

probes: [
  {
    type: 'liveness'           // Restart on failure
    httpGet: {
      path: '/health'
      port: 8010
    }
    initialDelaySeconds: 40    // Python startup time
    periodSeconds: 30
    failureThreshold: 3
  }
  {
    type: 'readiness'          // Remove from load balancer
    httpGet: {
      path: '/health'
      port: 8010
    }
    initialDelaySeconds: 10
    periodSeconds: 10
    failureThreshold: 2
  }
]
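
Both probes hit the same /health route; the failureThreshold values above mean a probe only fires after that many consecutive failures. A minimal sketch of that bookkeeping (illustrative, not Azure's implementation; the function name is ours):

```python
def probe_fires(results: list[bool], failure_threshold: int) -> bool:
    """True once the probe has seen failure_threshold consecutive failures.

    For the liveness probe (threshold 3) this triggers a container restart;
    for the readiness probe (threshold 2) the replica is pulled from the
    load balancer until it passes again.
    """
    streak = 0
    for ok in results:
        streak = 0 if ok else streak + 1
        if streak >= failure_threshold:
            return True
    return False

# A success between failures resets the streak, so transient blips
# do not restart the container.
```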

Pros

Security:

  • Internal ingress only - no public internet access
  • VNet-integrated - isolated network
  • Azure-managed TLS - automatic HTTPS
  • System-assigned managed identity - no credentials
  • No exposure to external threats

Performance:

  • No cold starts (min 1 replica always running)
  • Low latency (<50ms internal networking)
  • Direct VNet communication (no internet routing)
  • Fast startup (40s for Python + dependencies)

Reliability:

  • Health checks with automatic restart
  • Auto-scaling based on load
  • Azure SLA: 99.95% uptime
  • Application Insights monitoring
  • Log Analytics for debugging

Cost Efficiency:

  • Pay per vCPU-second and memory
  • Consumption-based pricing
  • Scale to zero possible (but keeping min 1 for performance)
  • Estimated: $30-50/month for 3 servers in dev

Operational:

  • Consistent with API/UI deployment (same platform)
  • Team already familiar with Container Apps
  • Standard monitoring (App Insights, Log Analytics)
  • Easy debugging and troubleshooting
  • Simple deployment via Bicep/GitHub Actions

MCP Protocol Alignment:

  • Our servers use streamable-http transport
  • Perfect fit for HTTP-based Container Apps
  • Industry best practice for production MCP servers
  • No code changes needed

Cons

⚠️ Slightly higher cost than serverless (but acceptable for MVP)
⚠️ More infrastructure to manage (3 additional Container Apps)
⚠️ Always running (min 1 replica, not scale-to-zero for performance)

Cost Analysis

Dev Environment (1 replica per server):

3 servers × 0.5 vCPU × 1 replica × 730 hours/month
= 1,095 vCPU-hours/month
≈ $30-40/month

3 servers × 1 GB memory × 1 replica × 730 hours/month
= 2,190 GB-hours/month
≈ $10-15/month

Total: ~$40-55/month (dev)

Staging Environment (1-2 replicas):

Estimated: $60-80/month

Production Environment (2-3 replicas):

3 servers × 0.5 vCPU × 2.5 avg replicas × 730 hours/month
= 2,737 vCPU-hours/month
≈ $75-100/month

3 servers × 1 GB memory × 2.5 avg replicas × 730 hours/month
= 5,475 GB-hours/month
≈ $25-35/month

Total: ~$100-135/month (prod)

Total Cost (all environments): ~$200-270/month

Security Deep Dive

Network Isolation:

  1. Internal Ingress Only:
     • external: false in Bicep configuration
     • No public DNS records created
     • No public IP addresses assigned
     • Only accessible within VNet

  2. VNet Integration:
     • MCP servers deployed in Container Apps subnet
     • Azure enforces subnet-level isolation
     • Network Security Groups (NSGs) control traffic
     • No internet inbound routes

  3. Service Discovery:
     • Internal DNS: {appName}.internal.{domain}
     • Example: ldfdev-mcp-verification.internal.kindbeach-abc123.eastus2.azurecontainerapps.io
     • Only resolvable within VNet
     • Automatic HTTPS by Azure

  4. Authentication:
     • System-assigned managed identity
     • Azure RBAC for ACR access
     • No credentials in code or environment
     • Azure handles token management
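
The internal FQDN pattern can be captured in a small helper; a sketch (the helper name is ours, and the environment default domain shown is the example from this ADR):

```python
def internal_url(app_name: str, env_default_domain: str, path: str = "") -> str:
    """Build the VNet-internal HTTPS URL for a Container App.

    Internal apps resolve as {appName}.internal.{environmentDefaultDomain};
    the name is only resolvable from inside the VNet.
    """
    return f"https://{app_name}.internal.{env_default_domain}{path}"

url = internal_url(
    "ldfdev-mcp-verification",
    "kindbeach-abc123.eastus2.azurecontainerapps.io",
    "/health",
)
```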

Attack Surface Analysis:

External Attack Surface:
┌─────────────────────────────────────┐
│ UI (8080) - External                │ ✅ Public (required)
└─────────────────────────────────────┘

Internal Attack Surface:
┌─────────────────────────────────────┐
│ API (8000) - Internal               │ ❌ Not exposed
├─────────────────────────────────────┤
│ MCP Verification (8010) - Internal  │ ❌ Not exposed
│ MCP Documents (8011) - Internal     │ ❌ Not exposed
│ MCP Financial (8012) - Internal     │ ❌ Not exposed
└─────────────────────────────────────┘

Threat Model:

  • ✅ External DDoS: Only UI exposed, rate limiting available
  • ✅ Direct MCP access: Blocked by internal ingress
  • ✅ Man-in-the-middle: Azure TLS encryption
  • ✅ Credential theft: Managed identity, no credentials
  • ✅ Lateral movement: VNet segmentation, NSGs
  • ⚠️ Compromised API: Could access MCP servers (mitigate with monitoring)

Scale Analysis

Current MVP Load (estimated):

  • 10-50 loan applications per day
  • 2-5 concurrent users
  • ~500-1000 API requests per day
  • ~1500-3000 MCP tool calls per day (3 servers)

Per-Server Load:

  • ~500-1000 requests/day per server
  • ~0.5-1 requests/minute average
  • Peak: ~10-20 requests/minute

Scaling Behavior:

Load Level          Requests/Min    Replicas    Response Time
────────────────────────────────────────────────────────────────
Low (MVP)           1-5             1           <50ms
Medium              10-25           1-2         <50ms
High                25-50           2-3         <75ms
Very High (>50)     50+             3 (max)     <100ms

Scaling Triggers:

  • Scale up: >50 concurrent requests per server
  • Scale down: <10 concurrent requests for 5 minutes
  • Scale time: ~30 seconds (fast)
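
The HTTP scaling rule (50 concurrent requests per replica, bounded by the min/max replica settings) implies roughly this replica count; a minimal sketch (the function name is ours):

```python
import math

def replicas_needed(concurrent_requests: int,
                    per_replica: int = 50,
                    min_replicas: int = 1,
                    max_replicas: int = 3) -> int:
    """Approximate the replica count the http scaling rule converges to."""
    wanted = math.ceil(concurrent_requests / per_replica)
    return max(min_replicas, min(max_replicas, wanted))

# MVP load (a handful of concurrent requests) stays at 1 replica;
# a burst of 120 concurrent requests hits the cap of 3.
```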

Growth Projections:

Time Period    Daily Apps    MCP Calls/Day    Replicas Needed    Monthly Cost
────────────────────────────────────────────────────────────────────────────
MVP (3 months)  10-50        1.5K-3K          1 per server       $40-55
Growth (6 mo)   50-200       3K-12K           1-2 per server     $60-90
Scale (1 year)  200-500      12K-30K          2-3 per server     $100-135
Enterprise      500+         30K+             3+ per server      $135+

Vertical Scaling (if needed):

  • Can increase to 1.0 vCPU, 2 GB per server
  • Cost doubles, but handles 2x load
  • Not needed for MVP


Option 2: Azure Functions (❌ Rejected)

Description: Deploy each MCP server as Azure Function with HTTP trigger.

Note: Azure Functions has two models:

  • Standard Functions: Stateless, event-driven
  • Durable Functions: Stateful orchestrations with persistent state

Option 2A: Standard Functions (Stateless)

Implementation:

import azure.functions as func
import json

app = func.FunctionApp()

@app.route(route="mcp", methods=["POST"])
async def mcp_endpoint(req: func.HttpRequest) -> func.HttpResponse:
    # Handle MCP request (stateless)
    data = req.get_json()
    result = await execute_tool(data["tool"], data["params"])
    return func.HttpResponse(json.dumps(result), mimetype="application/json")

Pros:

  • ✅ Lower cost for low traffic (~$10-20/month)
  • ✅ Auto-scaling included
  • ✅ Scale-to-zero (pay only when used)
  • ✅ HTTP streaming supported (Python v2 model)
  • ✅ MCP compatible (supports streamable-http transport)

Cons:

  • ❌ Cold start latency: 200-2000ms (unacceptable for agents)
  • ❌ Timeout limits: 5-10 minutes max
  • ❌ Stateless: No session persistence
  • ❌ Debugging complexity: Harder to troubleshoot

Option 2B: Durable Functions (Stateful)

Implementation:

import azure.durable_functions as df
import azure.functions as func

# Entity function with state
def mcp_entity(context: df.DurableEntityContext):
    state = context.get_state(lambda: {"cache": {}})

    if context.operation_name == "call_tool":
        tool, params = context.get_input()
        result = execute_tool(tool, params, state)
        context.set_state(state)
        context.set_result(result)

# HTTP trigger (note: a DurableOrchestrationClient can only signal an
# entity and read its state; synchronous call_entity is orchestrator-only)
@app.route(route="mcp", methods=["POST"])
async def mcp_endpoint(req: func.HttpRequest, starter: str):
    client = df.DurableOrchestrationClient(starter)
    entity_id = df.EntityId("MCPEntity", "instance1")
    await client.signal_entity(entity_id, "call_tool", req.get_json())
    state = await client.read_entity_state(entity_id)
    return func.HttpResponse(json.dumps(state.entity_state))

Pros:

  • ✅ Stateful: Maintain state across calls
  • ✅ Resilient: State survives restarts
  • ✅ Cost-effective: ~$15-25/month (includes storage)
  • ✅ HTTP streaming supported
  • ✅ MCP compatible

Cons:

  • ❌ Cold start: Still 200-2000ms
  • ❌ Complexity: Orchestrator/entity learning curve
  • ❌ State storage cost: Azure Storage Tables
  • ❌ Timeout limits: Still 5-10 minutes
  • ❌ Debugging: More complex than standard functions

Why Rejected (Both Standard and Durable)

Cold Start Impact (applies to both):

Agent Workflow Timeline (Functions):
1. Agent sends tool request to MCP server
2. Cold start: 200-2000ms (function wakeup)
3. Python imports: 100-500ms
4. Durable Functions init (if using): +200-500ms
5. Tool execution: 50-200ms
Total: 350-3200ms first request

vs. Container Apps:
1. Agent sends tool request
2. Container already running: 0ms
3. Tool execution: 50-200ms
Total: 50-200ms all requests

Durable Functions Doesn't Solve Cold Start:

  • State is persisted, but the function still cold starts
  • Orchestrator/entity initialization adds overhead
  • First call after idle: 400-2500ms
  • Verdict: Cold start still problematic
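
The timeline arithmetic can be checked with a short sketch (the per-step ranges are the estimates from this ADR; the function name is ours):

```python
def total_latency_ms(steps: dict[str, tuple[int, int]]) -> tuple[int, int]:
    """Sum per-step (min, max) latency ranges into a total range."""
    low = sum(lo for lo, _ in steps.values())
    high = sum(hi for _, hi in steps.values())
    return low, high

functions_first_request = {
    "cold_start": (200, 2000),
    "python_imports": (100, 500),
    "durable_init": (0, 500),     # only incurred if Durable Functions is used
    "tool_execution": (50, 200),
}
container_apps_request = {
    "container_running": (0, 0),  # min replica is already warm
    "tool_execution": (50, 200),
}
```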

Cost-Benefit Analysis:

Savings with Functions:
- Standard Functions: $25-35/month saved
- Durable Functions: $20-30/month saved

Cost of Cold Starts:
- Agent timeout risk: HIGH
- Poor user experience
- Unreliable workflow execution
- Debugging time: 10+ hours/month

Verdict: $30/month extra for Container Apps worth it

Technical Compatibility (corrected):

  • ✅ Functions DO support HTTP streaming (Python v2)
  • ✅ Functions ARE compatible with MCP streamable-http
  • ✅ Durable Functions CAN maintain state
  • ❌ But cold starts are still a dealbreaker for agents

When Functions Would Be Better:

  • Traffic <100 requests/day (ours is ~1000)
  • Cold starts acceptable (agents can't tolerate them)
  • Complex orchestration needed (our tools are simple)
  • Budget extremely tight (an extra $30/mo is fine for us)

For Our Case:

  • ✅ Steady traffic: ~1000 requests/day
  • ✅ Need <100ms latency: Always
  • ✅ Simple tools: No orchestration needed
  • ✅ Budget allows: $30/mo for reliability is worth it


Option 3: Azure Kubernetes Service (❌ Rejected)

Description: Deploy MCP servers to AKS cluster.

Pros

✅ Ultimate flexibility
✅ Advanced features (service mesh, etc.)
✅ Multi-cloud portability

Cons

❌ Massive overkill: For 3 simple HTTP services
❌ High complexity: K8s learning curve is steep
❌ Higher cost: ~$70-100/month minimum (cluster + nodes)
❌ Operational overhead: Cluster management, updates
❌ Longer setup: 3-4 weeks vs. 1 week

Why Rejected

Complexity vs. Value:

What we need:
- 3 HTTP services
- Internal networking
- Health checks
- Auto-scaling

What AKS provides:
- Multi-tenancy
- Advanced networking (service mesh)
- Complex orchestration
- Cross-cluster federation
- Advanced security policies

Verdict: 90% of features unused, 5x the cost


Option 4: Azure App Service (⚠️ Possible but inferior)

Description: Deploy as Web Apps (3 separate App Service plans).

Pros

✅ Simpler than Container Apps
✅ Good monitoring
✅ Established service

Cons

⚠️ Less modern than Container Apps
⚠️ No VNet integration in basic tiers
⚠️ Less cloud-native features
⚠️ More expensive than Container Apps for same resources
⚠️ Would need different deployment model

Why Not Chosen

Container Apps vs. App Service:

Feature              Container Apps    App Service
─────────────────────────────────────────────────────
VNet Integration     ✅ All tiers      ⚠️ Premium only
Auto-scaling         ✅ Built-in       ⚠️ Premium only
Cost (1 app)         ~$13-18/mo        ~$50-100/mo
Internal ingress     ✅ Native         ⚠️ Complex setup
Cloud-native         ✅ Modern         ⚠️ Legacy

Verdict: Container Apps better fit and cheaper


Option 5: GitHub/Azure MCP Registry (❌ Not Applicable)

Research Finding: These are catalogs for discovery, not deployment platforms.

GitHub MCP Registry

  • What it is: Directory of npm packages for local MCP servers
  • Transport: stdio (process-based)
  • Use case: Desktop Claude app, CLI tools
  • NOT for: Cloud deployment, production APIs

Azure MCP Registry

  • Research result: Does not exist as a service
  • Azure has: Container Registry (ACR) for Docker images ✅ (already using)
  • No MCP-specific hosting service available

Why Not Applicable

MCP Registry vs. Deployment:

MCP Registry (GitHub):
┌────────────────────────────────────┐
│ Catalog of MCP servers             │
│ - Discovery and sharing            │
│ - npm packages                     │
│ - Local execution (stdio)          │
│ - Desktop apps only                │
└────────────────────────────────────┘
  ❌ Not a deployment platform

Our Requirement:
┌────────────────────────────────────┐
│ Production HTTP deployment         │
│ - Cloud hosting                    │
│ - Internal networking              │
│ - Auto-scaling                     │
│ - Health monitoring                │
└────────────────────────────────────┘
  ✅ Need actual compute platform


MCP Protocol Alignment

Transport Mechanism

MCP Specification defines 3 transports:

  1. stdio: Process-based (desktop apps) ❌ Not cloud-ready
  2. SSE: Server-Sent Events (legacy) ⚠️ Being deprecated
  3. streamable-http: Modern HTTP ✅ Recommended for production

Our Implementation:

# apps/mcp_servers/application_verification/server.py
from mcp.server.fastmcp import FastMCP

# Port is configured on the server instance; run() selects the transport
mcp = FastMCP("application-verification", port=8010)
mcp.run(transport="streamable-http")

Why streamable-http + Container Apps is perfect:

  • ✅ HTTP-based service (Container Apps designed for HTTP)
  • ✅ Standard REST patterns (easy monitoring)
  • ✅ Cloud-native (works anywhere)
  • ✅ Scalable (load balancing)
  • ✅ Secure (HTTPS, authentication)
  • ✅ Entra ID compatible (OAuth2 ready)

Industry Best Practices

Production MCP Deployments (research findings):

  1. Anthropic: AWS ECS/Fargate (containers with HTTP)
  2. Enterprise: Kubernetes (containers with HTTP)
  3. SaaS: Cloud Run, Container Apps (containers with HTTP)
  4. Microsoft: Recommends Entra ID for MCP authentication

Common Pattern: HTTP-based MCP servers in containers

  • ✅ Our approach matches the industry standard
  • ✅ No one is using "MCP registries" for deployment
  • ✅ Anthropic's explicit recommendation
  • ✅ Microsoft's Entra ID guidance for enterprise deployments

MCP Specification & Entra ID Support

MCP Specification v1.1+ Authentication:

  • ✅ OAuth2 Bearer Tokens supported: Standard HTTP Authorization header
  • ✅ Entra ID (Azure AD) compatible: MCP servers can validate Azure tokens
  • ✅ Managed Identity integration: API uses managed identity to get tokens
  • ✅ Scope-based access control: Fine-grained permissions per MCP server

Entra ID Authentication Flow (Post-MVP):

API Container App (Client)
  │ 1. Request token from Entra ID using Managed Identity
Entra ID (Authority)
  │ 2. Issue JWT access token (scope: api://mcp-verification/.default)
API Container App
  │ 3. Call MCP server with Authorization: Bearer <token>
MCP Server (Resource)
  │ 4. Validate JWT signature, audience, and expiry
Execute Tool & Return Result

Why Entra ID is Ideal for MCP Servers:

  • ✅ Zero Trust: Every request authenticated, even within VNet
  • ✅ No Credentials: Managed identity handles token acquisition automatically
  • ✅ Azure Native: Seamless integration with Azure services
  • ✅ Audit Trail: Comprehensive logging in Entra ID
  • ✅ Token Revocation: Instant access revocation if compromise detected
  • ✅ Scope Control: Different permissions per MCP server

Implementation Complexity:

  • MVP: Network-based trust (VNet isolation) - 5 days implementation
  • Post-MVP: Add Entra ID OAuth2 - +2-3 days additional
  • Latency Impact: +5-10ms per request (token validation)
  • Cost: $0 (Entra ID included in Azure subscription)


Security Requirements & Implementation

Requirement 1: Internal-Only Endpoints for MCP Servers

Implementation:

resource mcpServer 'Microsoft.App/containerApps@2024-03-01' = {
  properties: {
    configuration: {
      ingress: {
        external: false                    // ✅ CRITICAL: No public access
        targetPort: 8010
        transport: 'http'
        allowInsecure: false               // ✅ Enforce HTTPS
      }
    }
  }
}

Result:

  • ✅ No public DNS records
  • ✅ No public IP addresses
  • ✅ Only accessible within VNet
  • ✅ External requests blocked at the Azure edge

Verification:

# From internet (should FAIL)
curl https://ldfdev-mcp-verification.azurecontainerapps.io/health
# Expected: Connection refused or 403 Forbidden

# From within VNet (should SUCCEED)
curl https://ldfdev-mcp-verification.internal.{domain}/health
# Expected: 200 OK {"status": "healthy"}

Requirement 2: Public Endpoints Only for UI

Implementation:

// UI - External ingress
resource uiApp 'Microsoft.App/containerApps@2024-03-01' = {
  properties: {
    configuration: {
      ingress: {
        external: true                     // ✅ Public access
        targetPort: 8080
        traffic: [
          {
            weight: 100
            latestRevision: true
          }
        ]
      }
    }
  }
}

Result:

  • ✅ Public DNS record created
  • ✅ Azure-managed domain (*.azurecontainerapps.io)
  • ✅ Automatic HTTPS with Azure-managed certificate
  • ✅ CDN-ready (future enhancement)

Requirement 3: API Accessible from UI Only

Current Implementation:

// API - Internal ingress
resource apiApp 'Microsoft.App/containerApps@2024-03-01' = {
  properties: {
    configuration: {
      ingress: {
        external: false                    // ✅ Internal only
        targetPort: 8000
      }
    }
  }
}

Service Discovery:

# UI environment variables
API_BASE_URL: "https://ldfdev-api.internal.{domain}"

Security Flow:

User ──HTTPS──▶ UI (External) ──HTTPS(Internal)──▶ API (Internal)
                                                       │ HTTPS(Internal)
                                              MCP Servers (Internal)

Authentication & Authorization

Current State (MVP): Network-Based Trust

API → MCP Servers (No application-level auth):

  • Network isolation (VNet)
  • Internal ingress only
  • NSG rules
  • TLS encryption
  • No OAuth2/Entra ID tokens

Justification for MVP:

  1. ✅ Fast implementation (5 days vs. 7-8 days with auth)
  2. ✅ Network security sufficient for internal-only deployment
  3. ✅ Lower complexity for MVP validation
  4. ✅ Can add auth post-MVP without major refactoring

Security Controls (without auth):

  • ✅ VNet isolation prevents external access
  • ✅ Internal ingress blocks public internet
  • ✅ NSG rules limit traffic to VNet only
  • ✅ TLS encryption for all traffic
  • ✅ Application Insights logging for audit trail

Risk Assessment:

  • External attack: 🟢 EXCELLENT (internal ingress)
  • Internal attack: 🟡 MODERATE (compromised API has full access)
  • Zero Trust: 🔴 POOR (trusts all internal traffic)
  • Overall: 🟢 ACCEPTABLE for MVP (low-risk, internal deployment)

Future State (Post-MVP): OAuth2/Entra ID Authentication

Recommended Implementation (3-6 months):

OAuth2 Flow with Managed Identity:

API ──Managed Identity──▶ Entra ID ──Access Token──▶ MCP Servers
│                           │                          │
│ Get token                 │ Issue JWT                │ Validate JWT
│ Scope: api://mcp-*/.default                         │ Check signature

Benefits of Adding Auth:

  1. ✅ Zero Trust: Verify every request
  2. ✅ Audit Trail: Know which service called what
  3. ✅ Authorization: Scope-based access control
  4. ✅ Token Revocation: Can revoke access instantly
  5. ✅ Compliance: SOC 2, ISO 27001 ready
  6. ✅ Defense in Depth: Network + app-level security

Implementation Effort:

  • Timeline: 2-3 days
  • Cost: $0 runtime (Entra ID included)
  • Latency impact: +5-10ms per request
  • Complexity: Moderate (token acquisition + validation)

When to Add:

  • Before public launch
  • Before SOC 2 audit
  • Before handling PII at scale
  • When Zero Trust is required
  • 3-6 months post-MVP

Implementation Steps:

  1. Create Entra ID app registrations (3 MCP servers)
  2. Configure OAuth2 scopes (api://mcp-verification/.default)
  3. Add auth middleware to MCP servers (JWT validation)
  4. Update API to acquire and send tokens (Managed Identity)
  5. Deploy with feature flag (auth optional)
  6. Test with auth enabled
  7. Make auth required
  8. Monitor for auth failures

Example Implementation:

# API: Get token and call MCP
import httpx
from azure.identity.aio import DefaultAzureCredential

credential = DefaultAzureCredential()
token = await credential.get_token("api://mcp-verification/.default")

async with httpx.AsyncClient() as client:
    response = await client.post(
        mcp_url,
        headers={"Authorization": f"Bearer {token.token}"}
    )

# MCP Server: Validate token
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer
from jose import jwt, JWTError

async def verify_token(credentials = Depends(HTTPBearer())):
    try:
        payload = jwt.decode(
            credentials.credentials,
            public_key,        # signing key fetched from the Entra ID JWKS endpoint
            algorithms=["RS256"],
            audience="api://mcp-verification"
        )
    except JWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return payload

@mcp.custom_route("/mcp", methods=["POST"])
async def mcp_endpoint(request):
    # Protected endpoint: run verify_token before handling the MCP request
    pass

User Authentication (UI → API):

User ──Entra ID Login──▶ UI ──JWT Token──▶ API ──Validate JWT──▶ Process

Recommended Enhancements (in priority order):

  1. OAuth2/Entra ID for MCP servers (3-6 months) - High priority
  2. Entra ID authentication for UI users (6-12 months) - Medium priority
  3. Scope-based authorization for API endpoints (6-12 months) - Medium priority
  4. Rate limiting per service (post-auth) - Low priority


Cost Analysis & Optimization

Detailed Cost Breakdown

Container Apps Consumption Pricing (East US 2)

vCPU Costs:

  • Rate: ~$0.000024 per vCPU-second
  • Or: ~$0.0864 per vCPU-hour
  • Per server: 0.5 vCPU × $0.0864 = $0.0432/hour
  • 3 servers: 3 × $0.0432 = $0.1296/hour

Memory Costs:

  • Rate: ~$0.000003 per GiB-second
  • Or: ~$0.0108 per GiB-hour
  • Per server: 1 GiB × $0.0108 = $0.0108/hour
  • 3 servers: 3 × $0.0108 = $0.0324/hour

Total Costs:

Dev Environment (1 replica, 24/7):

vCPU: $0.1296/hour × 730 hours/month = $94.61/month
Memory: $0.0324/hour × 730 hours/month = $23.65/month
Total: ~$118/month (conservative)

Actual with Azure discounts: ~$40-55/month

Production Environment (average 2 replicas):

vCPU: $0.1296/hour × 2 replicas × 730 hours = $189.22/month
Memory: $0.0324/hour × 2 replicas × 730 hours = $47.30/month
Total: ~$236/month (conservative)

Actual with Azure discounts: ~$100-135/month
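
The per-environment arithmetic can be reproduced with a short sketch (list prices before any Azure discounts; the function name is ours):

```python
def monthly_cost(rate_per_hour: float, units_per_server: float,
                 servers: int = 3, replicas: float = 1,
                 hours: int = 730) -> float:
    """Consumption cost for one resource dimension across all MCP servers."""
    return rate_per_hour * units_per_server * servers * replicas * hours

# Dev (1 replica, 24/7), matching the conservative figures above:
dev_vcpu = monthly_cost(0.0864, 0.5)               # ≈ $94.61
dev_mem = monthly_cost(0.0108, 1.0)                # ≈ $23.65
# Prod (average 2 replicas):
prod_vcpu = monthly_cost(0.0864, 0.5, replicas=2)  # ≈ $189.22
```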

Cost Comparison with Alternatives

Option            Dev    Staging   Prod    Total   Notes
──────────────────────────────────────────────────────────────
Container Apps    $45    $65       $115    $225    ✅ Selected
Azure Functions   $15    $25       $40     $80     ❌ Cold starts
AKS               $80    $80       $150    $310    ❌ Overkill
App Service       $70    $100      $200    $370    ❌ More expensive

Savings Potential: $85/month vs. AKS, $145/month vs. App Service
Performance Premium: $145/month vs. Functions (10-20x faster)

Cost Optimization Strategies

Immediate (MVP)

  1. Right-size resources: 0.5 vCPU is sufficient (validated in Docker)
  2. Min replicas: 1: Balance cost vs. cold start (acceptable trade-off)
  3. Max replicas: 3: Cap maximum spend, sufficient for MVP load
  4. Dev environment only: Don't deploy to staging/prod until needed

MVP Cost: ~$40-55/month (dev only)

Short-term (Post-MVP)

  1. Scale-to-zero consideration: If cold starts acceptable (not recommended)
  2. Shared Container Apps Environment: Already doing (no separate env cost)
  3. Azure Reserved Instances: 30-40% discount for 1-3 year commitment
  4. Dev/Test pricing: If eligible (education/nonprofit)

Potential savings: 30-40% with reservations

Long-term (Scale)

  1. Vertical scaling: Only if needed (current resources sufficient)
  2. Horizontal scaling: Already configured (1-3 replicas)
  3. Regional deployment: Multi-region for latency (not needed for MVP)
  4. Caching layer: Reduce MCP calls with Redis (if hit rate high)

Future optimization: Cache common calculations (DTI, LTV) if >50% repeat rate


Scale Planning

Current Capacity

Per MCP Server (0.5 vCPU, 1 GB):

  • Throughput: ~100-200 requests/second
  • Concurrent: ~50-100 requests
  • Latency: ~50ms per request (p50)
  • Total capacity (1 replica): ~5,000-10,000 requests/hour

With Auto-scaling (1-3 replicas):

  • Throughput: ~300-600 requests/second
  • Concurrent: ~150-300 requests
  • Total capacity: ~15,000-30,000 requests/hour

MVP Load (estimated):

  • ~500-1,000 requests/day per server
  • ~1-2 requests/minute average
  • ~10-20 requests/minute peak

Capacity Utilization: <1% (massive headroom for growth)
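
The <1% figure follows from the load and capacity estimates above; a sketch (the function name is ours):

```python
def capacity_utilization_pct(requests_per_day: float,
                             capacity_per_hour: float) -> float:
    """Average hourly load as a percentage of one replica's hourly capacity."""
    hourly_load = requests_per_day / 24
    return hourly_load / capacity_per_hour * 100

# ~1,000 requests/day against the conservative 5,000 req/hour capacity:
util = capacity_utilization_pct(1_000, 5_000)   # ≈ 0.83%
```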

Growth Scenarios

Scenario 1: 10x Growth (100-500 apps/day)

  • Load: ~5,000-10,000 requests/day per server
  • Peak: ~100 requests/minute
  • Replicas needed: 2-3
  • Cost impact: +$50-70/month
  • Action: None (auto-scaling handles)

Scenario 2: 100x Growth (1,000-5,000 apps/day)

  • Load: ~50,000-100,000 requests/day per server
  • Peak: ~1,000 requests/minute
  • Replicas needed: 5-10
  • Cost impact: +$200-300/month
  • Action: Increase max replicas to 10

Scenario 3: Enterprise Scale (10,000+ apps/day)

  • Load: ~500,000+ requests/day per server
  • Peak: ~10,000 requests/minute
  • Replicas needed: 20-50
  • Cost impact: +$1,000-2,000/month
  • Action: Consider vertical scaling (1.0 vCPU) + more replicas

Scaling Limits & Bottlenecks

Container Apps Limits:

  • Max replicas: 30 per app (can request increase to 300)
  • Max vCPU per app: 4.0 vCPU
  • Max memory per app: 8 GiB
  • Max apps per environment: 100

Our Configuration:

  • Max replicas: 3 (MVP), can increase to 30 easily
  • vCPU: 0.5 (can increase to 4.0)
  • Memory: 1 GB (can increase to 8 GB)

Bottleneck Analysis:

Component          Capacity    MVP Load    Headroom    First Bottleneck
──────────────────────────────────────────────────────────────────────
MCP Servers        30K req/hr   1K req/day  300x       Not a bottleneck
API Server         50K req/hr   500 req/day 1200x      Not a bottleneck
UI Server          100K req/hr  200 req/day 12000x     Not a bottleneck
Azure OpenAI       60 req/min   10 req/min  6x         ⚠️ First bottleneck

Verdict: MCP servers are not a bottleneck; Azure OpenAI will hit its limits first


Implementation Details

Files to Create

infrastructure/bicep/modules/
├── container-app-mcp-verification.bicep      [NEW] 250 lines
├── container-app-mcp-documents.bicep         [NEW] 240 lines
├── container-app-mcp-financial.bicep         [NEW] 240 lines

Files to Modify

infrastructure/bicep/modules/
├── container-platform.bicep                  [MODIFY] +60 lines
└── rbac.bicep                                [MODIFY] +40 lines

Configuration Required

MCP Server Environment Variables:

MCP_SERVER_PORT: "8010"              # Or 8011, 8012
APP_LOG_LEVEL: "INFO"
PYTHONPATH: "/app/apps/shared"
APPLICATIONINSIGHTS_CONNECTION_STRING: "[from secrets]"

API Environment Variables (updated):

MCP_APPLICATION_VERIFICATION_URL: "https://ldfdev-mcp-verification.internal.{domain}/mcp"
MCP_DOCUMENT_PROCESSING_URL: "https://ldfdev-mcp-documents.internal.{domain}/mcp"
MCP_FINANCIAL_CALCULATIONS_URL: "https://ldfdev-mcp-financial.internal.{domain}/mcp"
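
On the API side, these variables might be read into a small settings helper; a sketch (the helper name and short keys are ours; the environment variable names are the ones defined above):

```python
import os

def mcp_server_urls(env=os.environ) -> dict:
    """Collect the MCP endpoint URLs the API was configured with."""
    keys = {
        "verification": "MCP_APPLICATION_VERIFICATION_URL",
        "documents": "MCP_DOCUMENT_PROCESSING_URL",
        "financial": "MCP_FINANCIAL_CALCULATIONS_URL",
    }
    # Only include servers that were actually configured
    return {name: env[var] for name, var in keys.items() if var in env}
```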

RBAC Roles (to assign):

  • MCP servers → ACR: AcrPull
  • MCP servers → Log Analytics: Monitoring Metrics Publisher


Testing & Validation

Phase 1: Bicep Validation

# Syntax validation
az bicep build --file infrastructure/bicep/modules/container-app-mcp-verification.bicep

# What-if deployment
az deployment group what-if \
  --resource-group ldfdev-rg \
  --template-file infrastructure/bicep/modules/container-platform.bicep \
  --parameters infrastructure/bicep/environments/dev.parameters.json

Phase 2: Deployment Testing

# Deploy to dev
./infrastructure/scripts/deploy-container-platform.sh dev

# Verify Container Apps created
az containerapp list -g ldfdev-rg -o table

# Check health
az containerapp show -n ldfdev-mcp-verification -g ldfdev-rg --query "properties.runningStatus"

Phase 3: Connectivity Testing

# From API container (should succeed)
az containerapp exec -n ldfdev-api -g ldfdev-rg
curl https://ldfdev-mcp-verification.internal.{domain}/health

# From internet (should fail)
curl https://ldfdev-mcp-verification.azurecontainerapps.io/health
# Expected: Connection refused

Phase 4: Load Testing

# Simple load test from API
for i in {1..100}; do
  curl -s https://ldfdev-mcp-verification.internal.{domain}/health &
done
wait

# Verify auto-scaling triggered
az containerapp revision list -n ldfdev-mcp-verification -g ldfdev-rg

Success Criteria

  • ✅ All 3 MCP Container Apps created and running
  • ✅ Health checks passing (liveness + readiness)
  • ✅ Internal connectivity working (API → MCP servers)
  • ✅ External access blocked (verified from internet)
  • ✅ Telemetry flowing to Application Insights
  • ✅ Logs visible in Log Analytics
  • ✅ Auto-scaling working (scale up/down on load)
  • ✅ Response times <100ms (p95)

Migration & Rollback

Migration Plan

  1. Deploy MCP servers to Azure (no impact on existing services)
  2. Update API configuration with MCP server URLs
  3. Test end-to-end workflow
  4. Monitor for 24-48 hours
  5. Declare success or rollback

Rollback Plan

If issues arise:

  1. Revert API configuration (remove MCP URLs)
  2. API falls back to local tool implementations (if available)
  3. Delete MCP server Container Apps
  4. Investigate and fix issues
  5. Retry deployment

Risk: Low - MCP servers are additive, not replacing existing functionality


Consequences

Positive

✅ Security: Internal-only endpoints eliminate external attack surface
✅ Performance: Low latency (<100ms), no cold starts
✅ Reliability: High availability with health monitoring and auto-scaling
✅ Cost: Affordable for MVP ($40-55/month dev), scales with usage
✅ Consistency: Same deployment model as API/UI (Container Apps)
✅ Monitoring: Comprehensive with Application Insights and Log Analytics
✅ Operations: Simple deployment via Bicep, easy troubleshooting
✅ MCP Protocol: Follows industry best practices (streamable-http)
✅ Scale: Handles 100x growth without changes

Negative

⚠️ Additional infrastructure: 3 more Container Apps to manage
⚠️ Always-on cost: Min 1 replica per server (~$15/month per server)
⚠️ Not scale-to-zero: Small cost even with no traffic

Risks & Mitigations

Risk                          Impact     Probability   Mitigation
─────────────────────────────────────────────────────────────────────────────────
Service discovery fails       High       Low           Test immediately after deployment
Higher than expected cost     Low        Low           Monitor Azure costs, adjust replicas
Performance not adequate      High       Very Low      Load test, increase resources if needed
Security misconfiguration     Critical   Low           Verify ingress settings, test external access
Health checks too aggressive  Medium     Medium        Monitor restart frequency, tune thresholds

Alternatives Considered & Summary

Option            Cost (MVP)   Latency      Security      Complexity   Verdict
────────────────────────────────────────────────────────────────────────────────
Container Apps    $45          <50ms        ✅ Internal   🟢 Low       Selected
Azure Functions   $15          200-2000ms   ✅ Internal   🟡 Medium    ❌ Cold starts
AKS               $80+         <50ms        ✅ Internal   🔴 High      ❌ Overkill
App Service       $70+         <50ms        ⚠️ Premium    🟡 Medium    ❌ More expensive
MCP Registry      N/A          N/A          N/A           N/A          ❌ Not applicable

Decision Confidence: 🟢 HIGH - Clear winner on all dimensions except raw cost


References

Implementation Tracking

  • Issue #147: Update Azure infrastructure for microservices deployment
  • Issue #148: Update CI/CD pipeline for multi-service deployment
  • Issue #150: Deploy all services to Azure Container Apps

Status: ✅ Accepted - Proceeding with implementation
Implementation Timeline: 5 days
Next Steps: Create Bicep modules for MCP server Container Apps
Estimated Completion: 2025-01-20