Architecture
A health check pattern architecture diagram with load balancer probes, deep health checks verifying database, cache, disk, and dependency status, automatic instance rotation, and alerting integration with PagerDuty for consecutive failures. This template models the health monitoring infrastructure that enables self-healing systems, where unhealthy instances are automatically removed from rotation and operations teams are alerted. Key for building production-ready services with proper observability.
Full FlowZap Code
LoadBalancer { # Load Balancer
n1: circle label:"Health Check Timer"
n2: rectangle label:"Send Health Probe"
n3: rectangle label:"Evaluate Response"
n4: diamond label:"Instance Healthy?"
n5: rectangle label:"Keep in Rotation"
n6: rectangle label:"Remove from Rotation"
n7: rectangle label:"Update Routing Table"
n1.handle(right) -> n2.handle(left) [label="Every 10s"]
n2.handle(bottom) -> ServiceA.n8.handle(top) [label="GET /health"]
n2.handle(bottom) -> ServiceB.n12.handle(top) [label="GET /health"]
n3.handle(right) -> n4.handle(left)
n4.handle(right) -> n5.handle(left) [label="200 OK"]
n4.handle(bottom) -> n6.handle(top) [label="Timeout/5xx"]
n5.handle(right) -> n7.handle(left)
n6.handle(right) -> n7.handle(left)
}
ServiceA { # Service A
n8: rectangle label:"Check Database Connection"
n9: rectangle label:"Check Redis Connection"
n10: rectangle label:"Check Disk Space"
n11: rectangle label:"Return Health Status"
n8.handle(right) -> n9.handle(left) [label="DB OK"]
n9.handle(right) -> n10.handle(left) [label="Cache OK"]
n10.handle(right) -> n11.handle(left) [label="Disk OK"]
n11.handle(top) -> LoadBalancer.n3.handle(bottom) [label="Status JSON"]
}
ServiceB { # Service B
n12: rectangle label:"Check API Dependencies"
n13: rectangle label:"Check Message Queue"
n14: rectangle label:"Check Memory Usage"
n15: rectangle label:"Return Health Status"
n12.handle(right) -> n13.handle(left) [label="APIs OK"]
n13.handle(right) -> n14.handle(left) [label="Queue OK"]
n14.handle(right) -> n15.handle(left) [label="Memory OK"]
n15.handle(top) -> LoadBalancer.n3.handle(bottom) [label="Status JSON"]
}
Alerting { # Alerting System
n16: rectangle label:"Monitor Health Trends"
n17: diamond label:"Consecutive Failures?"
n18: rectangle label:"Send PagerDuty Alert"
n19: rectangle label:"Log Health Metric"
n16.handle(right) -> n17.handle(left)
n17.handle(right) -> n18.handle(left) [label="3+ Failures"]
n17.handle(bottom) -> n19.handle(top) [label="Below Threshold"]
LoadBalancer.n7.handle(bottom) -> n16.handle(top) [label="Status Change"]
}
Why This Workflow?
Load balancers that route traffic to unhealthy instances cause user-facing errors. Deep health checks verify not just that the process is running, but that all critical dependencies (database, cache, disk, external APIs) are functioning—enabling automatic instance rotation and proactive alerting before users are affected.
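As a rough illustration, a deep /health endpoint could look like the Python sketch below. It uses Flask; the check_database and check_redis helpers are hypothetical placeholders for real dependency probes, and the disk threshold is illustrative.

```python
# Minimal sketch of a deep health check endpoint, assuming Flask.
# check_database() and check_redis() are hypothetical placeholders for real
# dependency probes (e.g. "SELECT 1" against the pool, PING against the cache).
import shutil

from flask import Flask, jsonify

app = Flask(__name__)

def check_database() -> bool:
    return True  # placeholder: run a trivial query against the primary database

def check_redis() -> bool:
    return True  # placeholder: issue a PING against the cache

def check_disk(min_free_ratio: float = 0.10) -> bool:
    usage = shutil.disk_usage("/")
    return usage.free / usage.total >= min_free_ratio

@app.route("/health")
def health():
    checks = {
        "database": check_database(),
        "cache": check_redis(),
        "disk": check_disk(),
    }
    healthy = all(checks.values())
    # 200 keeps the instance in rotation; 503 signals the load balancer to remove it.
    return jsonify({"status": "ok" if healthy else "fail", "checks": checks}), (200 if healthy else 503)
```

Returning a non-200 status for any failed dependency is what lets the load balancer treat the whole instance as unhealthy, even though the process itself is still running.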
How It Works
- Step 1: The load balancer sends a periodic health probe (GET /health) to each service instance (see the probe-loop sketch after this list).
- Step 2: Each instance runs its own dependency checks: database, cache, and disk for Service A; external APIs, message queue, and memory for Service B.
- Step 3: A JSON health status is returned with individual component statuses.
- Step 4: Healthy instances remain in the load balancer rotation; unhealthy ones are removed.
- Step 5: The alerting system monitors health trends and fires alerts on consecutive failures.
- Step 6: PagerDuty notifications are sent when failure thresholds are exceeded.
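The probe side of Steps 1, 4, 5, and 6 could be sketched as follows. This uses the 10-second interval and three-failure threshold from the diagram; the instance URLs, the use of the `requests` library, and the alerting placeholder are illustrative rather than tied to any particular load balancer.

```python
# Illustrative probe loop on the load balancer side (not a specific product's API).
# Uses the third-party `requests` library; instance URLs and alerting are placeholders.
import time
import requests

INSTANCES = ["http://service-a:8080", "http://service-b:8080"]  # placeholder endpoints
PROBE_INTERVAL_SECONDS = 10   # matches the "Every 10s" edge in the diagram
FAILURE_THRESHOLD = 3         # consecutive failures before paging (Step 6)

failures = {url: 0 for url in INSTANCES}
in_rotation = set(INSTANCES)

def probe(url: str) -> bool:
    try:
        return requests.get(f"{url}/health", timeout=2).status_code == 200
    except requests.RequestException:  # connection errors and timeouts count as failures
        return False

while True:
    for url in INSTANCES:
        if probe(url):
            failures[url] = 0
            in_rotation.add(url)       # keep (or restore) the instance in rotation
        else:
            failures[url] += 1
            in_rotation.discard(url)   # remove the instance from rotation
            if failures[url] == FAILURE_THRESHOLD:
                # placeholder for a PagerDuty notification (e.g. via its Events API)
                print(f"ALERT: {url} failed {FAILURE_THRESHOLD} consecutive health probes")
    time.sleep(PROBE_INTERVAL_SECONDS)
```

Resetting the failure counter on the first successful probe keeps transient blips from paging the on-call engineer, while sustained failures both pull the instance from rotation and trigger the alert.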
Alternatives
Simple TCP health checks only verify the process is listening. Liveness probes in Kubernetes restart unhealthy pods. Readiness probes control traffic routing. This template shows deep application-level health checks with alerting integration.
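For contrast, a shallow TCP check amounts to little more than the sketch below (the hostname and port are illustrative): it confirms something is listening on the port but says nothing about the dependencies behind it.

```python
import socket

# Shallow TCP check: succeeds as long as anything accepts the connection,
# even if the database or cache behind the service is down.
def tcp_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_alive("service-a", 8080))  # illustrative hostname and port
```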
Key Facts
| Template Name | Health Check Pattern Architecture |
| Category | Architecture |
| Steps | 6 workflow steps |
| Format | FlowZap Code (.fz file) |
Related templates
Architecture
A service discovery architecture diagram with Consul or Eureka registry, client-side load balancing, health check heartbeats, and automatic instance registration and deregistration. This template visualizes how microservices dynamically locate each other without hardcoded endpoints, enabling elastic scaling and self-healing infrastructure. Key for platform teams building resilient service-to-service communication.
Architecture
A microservices API gateway architecture diagram showing request routing, JWT authentication, rate limiting, service discovery, and response aggregation across distributed backend services. This template models the entry point for all client traffic in a microservices ecosystem, enforcing security policies before requests reach internal services. Ideal for platform engineers designing scalable API infrastructure with centralized cross-cutting concerns.
Architecture
A service mesh architecture diagram with Istio or Linkerd sidecar proxies handling mTLS encryption, traffic policies, circuit breaking, and distributed tracing across microservices. This template visualizes how a service mesh abstracts networking concerns away from application code, enabling zero-trust communication between services. Essential for teams adopting service mesh infrastructure to improve observability and security.
Architecture
A database-per-service architecture diagram where each microservice owns its dedicated data store, with event-driven synchronization via Kafka for cross-service data consistency. This template demonstrates the core microservices data isolation principle, showing how PostgreSQL and MongoDB coexist in a polyglot persistence strategy. Critical for architects enforcing service autonomy while maintaining eventual consistency.
Architecture
A microservices decomposition architecture diagram organized by business capabilities: Identity, Product Catalog, Pricing, and Order Fulfillment, each with independent data stores and APIs. This template shows how to break a monolith into services aligned with business domains, using a Backend-for-Frontend (BFF) pattern for client-specific aggregation. Useful for architects planning domain-driven microservice boundaries.
Architecture
A strangler fig migration architecture diagram showing the incremental replacement of a legacy monolith with new microservices, using a routing layer to split traffic between old and new systems. This template models the proven migration strategy where new features are built as microservices while legacy endpoints are gradually retired. Essential for teams modernizing legacy systems without risky big-bang rewrites.