Architecture
A Change Data Capture architecture diagram using Debezium to tail database transaction logs and publish change events to Kafka, feeding downstream consumers that update search indexes, invalidate caches, load data warehouses, and write audit logs. This template shows how CDC eliminates dual-write problems by capturing changes at the database level without modifying application code. Critical for data synchronization across heterogeneous systems.
Full FlowZap Code
SourceDB { # Source Database
n1: circle label:"Data Write Operation"
n2: rectangle label:"Transaction Log (WAL)"
n3: rectangle label:"Commit to Database"
n1.handle(right) -> n2.handle(left)
n2.handle(right) -> n3.handle(left) [label="Persist"]
n2.handle(bottom) -> CDC.n4.handle(top) [label="Log Tail"]
}
CDC { # CDC Engine (Debezium)
n4: rectangle label:"Read Transaction Log"
n5: rectangle label:"Detect Change Event"
n6: rectangle label:"Serialize to Avro/JSON"
n7: diamond label:"Schema Changed?"
n8: rectangle label:"Update Schema Registry"
n9: rectangle label:"Publish Change Event"
n4.handle(right) -> n5.handle(left) [label="INSERT/UPDATE/DELETE"]
n5.handle(right) -> n6.handle(left) [label="Before + After"]
n6.handle(right) -> n7.handle(left)
n7.handle(right) -> n9.handle(left) [label="No"]
n7.handle(bottom) -> n8.handle(top) [label="Yes"]
n8.handle(right) -> n9.handle(left) [label="Registered"]
n9.handle(bottom) -> Consumers.n10.handle(top) [label="Kafka Topic"]
n9.handle(bottom) -> Consumers.n12.handle(top) [label="Kafka Topic"]
}
Consumers { # Downstream Consumers
n10: rectangle label:"Search Index Updater"
n11: rectangle label:"Cache Invalidator"
n12: rectangle label:"Data Warehouse Loader"
n13: rectangle label:"Audit Log Writer"
n10.handle(right) -> n11.handle(left)
n12.handle(right) -> n13.handle(left)
}
TargetDB { # Target Systems
n14: rectangle label:"Elasticsearch Index"
n15: rectangle label:"Redis Cache"
n16: rectangle label:"Data Warehouse"
n17: rectangle label:"Audit Database"
Consumers.n10.handle(bottom) -> n14.handle(top) [label="Upsert Doc"]
Consumers.n11.handle(bottom) -> n15.handle(top) [label="Invalidate"]
Consumers.n12.handle(bottom) -> n16.handle(top) [label="Load"]
Consumers.n13.handle(bottom) -> n17.handle(top) [label="Append"]
}
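For reference, the change event published at n9 follows Debezium's envelope format: an op code plus the row's before and after images and source metadata. Below is a sketch of what an UPDATE on a hypothetical customers table would carry; all field values are illustrative.

# Illustrative Debezium-style change event for an UPDATE on a
# hypothetical "customers" table (all values are made up).
change_event = {
    "op": "u",  # c = create, u = update, d = delete, r = snapshot read
    "before": {"id": 42, "email": "old@example.com"},
    "after": {"id": 42, "email": "new@example.com"},
    "source": {"table": "customers", "lsn": 391874, "ts_ms": 1700000000000},
}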
Why This Workflow?
Application-level dual writes (updating a database and publishing an event) are inherently unreliable: if either operation fails, the two systems drift apart and data becomes inconsistent. CDC instead captures changes directly from the database transaction log, so every committed write is guaranteed to be captured and published, with no changes to application code.
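To make the "no application changes" point concrete: enabling CDC is a one-time configuration call to Kafka Connect, not a code change. Here is a minimal sketch of registering a Debezium PostgreSQL connector, assuming a Connect worker on localhost:8083 and placeholder database names and credentials.

import requests  # pip install requests

# Register a Debezium Postgres connector via the Kafka Connect REST API.
# Hostnames, credentials, and table names below are placeholders.
connector = {
    "name": "inventory-cdc",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",         # Postgres logical decoding plugin
        "database.hostname": "localhost",
        "database.port": "5432",
        "database.user": "cdc_user",
        "database.password": "cdc_password",
        "database.dbname": "inventory",
        "topic.prefix": "inventory",       # Debezium 2.x; topics become inventory.<schema>.<table>
        "table.include.list": "public.customers,public.orders",
    },
}
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()

From this point on, every committed write to the listed tables flows into Kafka without the application knowing CDC exists.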
How It Works
- Step 1: Application writes data to the source database as normal.
- Step 2: Debezium tails the database transaction log (WAL) to detect changes.
- Step 3: Each INSERT, UPDATE, or DELETE is captured as a change event with before/after state.
- Step 4: Events are serialized and published to Kafka topics.
- Step 5: Downstream consumers update search indexes, invalidate caches, and load data warehouses (see the consumer sketch after this list).
- Step 6: Schema changes are detected and registered in the Schema Registry.
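The consumers in Step 5 subscribe to the Debezium topics and dispatch on the event's op code. Below is a minimal sketch of the Search Index Updater (n10), assuming kafka-python, an assumed topic name, and hypothetical stand-ins for the actual index calls.

import json
from kafka import KafkaConsumer  # pip install kafka-python

def upsert_search_document(row):
    # Hypothetical stand-in for the real search-index upsert call.
    print("upsert doc", row)

def delete_search_document(doc_id):
    # Hypothetical stand-in for the real search-index delete call.
    print("delete doc", doc_id)

# Debezium names topics <topic.prefix>.<schema>.<table>; this one is assumed.
consumer = KafkaConsumer(
    "inventory.public.customers",
    bootstrap_servers="localhost:9092",
    group_id="search-index-updater",
    value_deserializer=lambda v: json.loads(v) if v else None,
)

for msg in consumer:
    if msg.value is None:
        continue  # tombstone record emitted after a delete, for log compaction
    payload = msg.value.get("payload", msg.value)  # JSON converter may add an envelope
    if payload["op"] in ("c", "u", "r"):   # create, update, snapshot read
        upsert_search_document(payload["after"])
    elif payload["op"] == "d":             # delete
        delete_search_document(payload["before"]["id"])

The Cache Invalidator, Warehouse Loader, and Audit Writer follow the same shape; only the action taken per op code differs.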
Alternatives
Application-level event publishing requires code changes and risks dual-write inconsistency. Polling-based CDC is simpler to set up but adds latency, and deletes and intermediate states between polls go unseen. Log-based CDC with Debezium on Kafka Connect captures every committed change with low latency, which is why this template models the full log-based pipeline.
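For contrast, the polling alternative is easy to sketch, and its drawbacks show up directly in the code: changes surface only once per poll interval, and a deleted row leaves nothing behind to find. A rough sketch with psycopg2, where the DSN, table, and updated_at watermark column are all assumptions:

import time
import psycopg2  # pip install psycopg2-binary

# Hypothetical DSN and table; the updated_at watermark column is an assumption.
conn = psycopg2.connect("dbname=inventory user=cdc_user password=cdc_password")
last_seen = "1970-01-01"

while True:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, email, updated_at FROM customers "
            "WHERE updated_at > %s ORDER BY updated_at",
            (last_seen,),
        )
        for row_id, email, updated_at in cur.fetchall():
            print("change:", row_id, email)  # hand off to downstream consumers
            last_seen = updated_at
    conn.commit()
    time.sleep(5)  # poll interval: the floor on replication latency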
Key Facts
| Template Name | Event-Driven CDC (Change Data Capture) Architecture |
| Category | Architecture |
| Steps | 6 workflow steps |
| Format | FlowZap Code (.fz file) |
Related templates
- A transactional outbox architecture diagram where domain writes and event publishing happen in the same database transaction, with a relay process polling the outbox table and publishing events to a message broker with guaranteed delivery. This template solves the dual-write problem where updating a database and publishing an event are not atomic, ensuring exactly-once event delivery without distributed transactions. Critical for event-driven architectures requiring reliable message publishing.
- A microservices API gateway architecture diagram showing request routing, JWT authentication, rate limiting, service discovery, and response aggregation across distributed backend services. This template models the entry point for all client traffic in a microservices ecosystem, enforcing security policies before requests reach internal services. Ideal for platform engineers designing scalable API infrastructure with centralized cross-cutting concerns.
- A service mesh architecture diagram with Istio or Linkerd sidecar proxies handling mTLS encryption, traffic policies, circuit breaking, and distributed tracing across microservices. This template visualizes how a service mesh abstracts networking concerns away from application code, enabling zero-trust communication between services. Essential for teams adopting service mesh infrastructure to improve observability and security.
- A database-per-service architecture diagram where each microservice owns its dedicated data store, with event-driven synchronization via Kafka for cross-service data consistency. This template demonstrates the core microservices data isolation principle, showing how PostgreSQL and MongoDB coexist in a polyglot persistence strategy. Critical for architects enforcing service autonomy while maintaining eventual consistency.
- A microservices decomposition architecture diagram organized by business capabilities: Identity, Product Catalog, Pricing, and Order Fulfillment, each with independent data stores and APIs. This template shows how to break a monolith into services aligned with business domains, using a Backend-for-Frontend (BFF) pattern for client-specific aggregation. Useful for architects planning domain-driven microservice boundaries.
- A strangler fig migration architecture diagram showing the incremental replacement of a legacy monolith with new microservices, using a routing layer to split traffic between old and new systems. This template models the proven migration strategy where new features are built as microservices while legacy endpoints are gradually retired. Essential for teams modernizing legacy systems without risky big-bang rewrites.