Architecture
A parallel fan-out architecture runs multiple agents simultaneously on independent checks (style, security, performance) and then merges their results. It is a standard multi-agent design for throughput, mapping cleanly to CI/CD, incident response, and research workflows. The subtle part is fan-in reconciliation: deciding how to combine reports that may disagree or arrive at different times.
Full FlowZap Code
Trigger { # Trigger
n1: circle label:"Start"
n2: rectangle label:"PR submitted for review"
n1.handle(right) -> n2.handle(left)
n2.handle(bottom) -> Coordinator.n3.handle(top) [label="PR payload"]
}
Coordinator { # Coordinator Agent
n3: rectangle label:"Fan out to reviewers"
n4: rectangle label:"Gather all reviews"
n5: diamond label:"All passed?"
n6: rectangle label:"Approve PR"
n7: rectangle label:"Request changes"
n8: circle label:"Done"
n3.handle(bottom) -> Reviewers.n9.handle(top) [label="Style check"]
n3.handle(bottom) -> Reviewers.n10.handle(top) [label="Security audit"]
n3.handle(bottom) -> Reviewers.n11.handle(top) [label="Perf analysis"]
n4.handle(right) -> n5.handle(left)
n5.handle(right) -> n6.handle(left) [label="Yes"]
n5.handle(bottom) -> n7.handle(top) [label="No"]
n6.handle(right) -> n8.handle(left)
n7.handle(right) -> n8.handle(bottom)
}
Reviewers { # Parallel Review Agents
n9: rectangle label:"Style Agent"
n10: rectangle label:"Security Agent"
n11: rectangle label:"Performance Agent"
n9.handle(top) -> Coordinator.n4.handle(bottom) [label="Style report"]
n10.handle(top) -> Coordinator.n4.handle(bottom) [label="Security report"]
n11.handle(top) -> Coordinator.n4.handle(bottom) [label="Perf report"]
}
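The coordinator's fan-out/fan-in logic can be sketched in Python with asyncio. This is a minimal sketch, not part of the template: the reviewer functions, the `Review` fields, and the pass/fail rules are illustrative assumptions standing in for real LLM or linter calls.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Review:
    agent: str
    passed: bool
    notes: str

# Illustrative reviewer agents; real ones would call an LLM, linter, or scanner.
async def style_agent(pr: dict) -> Review:
    return Review("style", True, "formatting ok")

async def security_agent(pr: dict) -> Review:
    return Review("security", "secrets" not in pr.get("diff", ""), "scanned diff")

async def perf_agent(pr: dict) -> Review:
    return Review("performance", True, "no hot-path changes")

async def coordinator(pr: dict) -> str:
    # Fan out: run all reviewers concurrently on the same PR payload.
    reviews = await asyncio.gather(
        style_agent(pr), security_agent(pr), perf_agent(pr)
    )
    # Fan in: approve only if every independent check passed.
    if all(r.passed for r in reviews):
        return "approve"
    failed = [r.agent for r in reviews if not r.passed]
    return f"request changes: {', '.join(failed)}"

result = asyncio.run(coordinator({"diff": "def f(): pass"}))
print(result)  # approve
```

Note that `asyncio.gather` waits for every reviewer before the fan-in step runs; a production coordinator would also need timeouts and a policy for partial results.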
Related templates
Architecture
A single-agent AI architecture where one agent handles everything: parsing requests, reasoning, calling tools via MCP, and generating responses. This is the default architecture for prototypes and simple automations: easy to debug, but it hits context-window limits quickly and is hard to parallelize. Ideal for MVPs and solo builders shipping fast.
Architecture
A sequential pipeline architecture chaining multiple agents in a fixed order (parse → enrich → analyze → format), which is a common 'LLM microservices' setup when each step can be isolated. This structure is often used in document processing and ETL-like workflows because each step is testable and predictable.
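The sequential pipeline above can be sketched as a chain of plain functions. The stage names and their outputs here are hypothetical placeholders for real agent calls.

```python
from functools import reduce

# Hypothetical stages; each would be a separate agent call in practice.
def parse(doc):
    return {"text": doc.strip()}

def enrich(state):
    return {**state, "lang": "en"}

def analyze(state):
    return {**state, "sentiment": "neutral"}

def format_out(state):
    return f"[{state['lang']}] {state['text']} ({state['sentiment']})"

PIPELINE = [parse, enrich, analyze, format_out]

def run(doc):
    # Feed each stage's output into the next, in fixed order.
    return reduce(lambda state, stage: stage(state), PIPELINE, doc)

print(run("  hello world  "))  # [en] hello world (neutral)
```

Because each stage is an isolated function, it can be unit-tested on its own, which is the property that makes this layout attractive for document processing and ETL-like workflows.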
Architecture
An orchestrator-worker architecture where an orchestrator agent breaks a goal into subtasks, dispatches to specialized workers, then synthesizes a final response. This is the most common 'agent orchestration' architecture: powerful, but the orchestrator can become a bottleneck as the number of workers grows. Frameworks like LangGraph focus on explicit routing/state to manage this.
Architecture
A hierarchical multi-agent architecture that scales orchestration by stacking supervisors and team leads (a tree structure), which mirrors enterprise org structures and helps partition context. This is the 'enterprise-grade agentic AI architecture' when a single orchestrator cannot manage all workers directly. Ideal for large enterprises and multi-domain workflows.
Architecture
An event-driven agentic AI architecture that replaces the central orchestrator with Kafka/PubSub topics: agents subscribe, react, and publish new events. This aligns multi-agent systems with proven microservices choreography and is ideal for real-time, high-throughput systems and 'agent mesh' setups.
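The choreography described above can be sketched with a tiny in-memory topic bus standing in for Kafka/PubSub. The topic names and event payloads are illustrative assumptions; the point is that agents subscribe, react, and publish without a central orchestrator.

```python
from collections import defaultdict

class Bus:
    """Minimal in-memory stand-in for a Kafka/PubSub broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.log = []

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        self.log.append((topic, event))
        for handler in self.subscribers[topic]:
            handler(event)

bus = Bus()

# Each agent reacts to one topic and publishes a new event; no coordinator.
bus.subscribe("pr.opened",
              lambda e: bus.publish("review.done", {"pr": e["pr"], "passed": True}))
bus.subscribe("review.done",
              lambda e: bus.publish("pr.approved", {"pr": e["pr"]}) if e["passed"] else None)

bus.publish("pr.opened", {"pr": 42})
print(bus.log[-1])  # ('pr.approved', {'pr': 42})
```

A real broker adds durability, ordering, and consumer groups, but the control flow is the same: the event log, not an orchestrator, drives which agent runs next.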
Architecture
A competitive / generator-critic architecture where multiple generators produce independent answers, then an evaluator agent scores and selects the best output. This approach improves quality and reduces single-model brittleness. It's costlier (multiple LLM calls) but pays off when correctness or creativity matters more than latency.
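A minimal sketch of the generator-critic selection step, under loud assumptions: the generators and the critic's length-based score are stand-ins for separate LLM calls and a real evaluator.

```python
# Stand-in generators; in practice each would be an independent LLM call.
def generator_a(prompt: str) -> str:
    return f"Answer A to: {prompt}"

def generator_b(prompt: str) -> str:
    return f"A much more detailed answer B to: {prompt}"

# Stand-in critic: scores each candidate (here, trivially, by length).
def critic_score(prompt: str, candidate: str) -> float:
    return float(len(candidate))

def best_of_n(prompt: str, generators) -> str:
    # Generate independent candidates, then let the critic pick the winner.
    candidates = [g(prompt) for g in generators]
    return max(candidates, key=lambda c: critic_score(prompt, c))

print(best_of_n("optimize this loop", [generator_a, generator_b]))
```

The cost structure is visible in the code: N generator calls plus N critic scorings per request, which is why this pattern suits quality-critical paths rather than latency-critical ones.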