Welcome to FlowZap, the app for diagramming with speed, clarity, and control.

AI Orchestration - Orchestrator-Worker

Architecture

A conductor-style architecture where one orchestrator agent receives a complex task, breaks it into subtasks, dispatches each to specialist worker agents (research, code, review), collects results, and synthesizes the final answer. Best for complex multi-step tasks with dynamic decomposition.

Full FlowZap Code

User { # User
  n1: circle label="Start"
  n2: rectangle label="Submit complex task"
  n1.handle(right) -> n2.handle(left)
  n2.handle(bottom) -> Orchestrator.n3.handle(top) [label="Task"]
}

Orchestrator { # Orchestrator Agent
  n3: rectangle label="Receive task"
  n4: rectangle label="Break into subtasks"
  n5: rectangle label="Dispatch subtasks"
  n6: rectangle label="Collect results"
  n7: rectangle label="Synthesize final answer"
  n8: circle label="Done"
  n3.handle(right) -> n4.handle(left)
  n4.handle(right) -> n5.handle(left)
  n5.handle(bottom) -> Research.n9.handle(top) [label="Research subtask"]
  n5.handle(bottom) -> Code.n11.handle(top) [label="Code subtask"]
  n5.handle(bottom) -> Review.n13.handle(top) [label="Review subtask"]
  n6.handle(right) -> n7.handle(left)
  n7.handle(right) -> n8.handle(left)
  n7.handle(top) -> User.n2.handle(bottom) [label="Response"]
}

Research { # Research Agent
  n9: rectangle label="Search sources"
  n10: rectangle label="Summarize findings"
  n9.handle(right) -> n10.handle(left)
  n10.handle(top) -> Orchestrator.n6.handle(bottom) [label="Research result"]
}

Code { # Code Agent
  n11: rectangle label="Generate code"
  n12: rectangle label="Run tests"
  n11.handle(right) -> n12.handle(left)
  n12.handle(top) -> Orchestrator.n6.handle(bottom) [label="Code result"]
}

Review { # Review Agent
  n13: rectangle label="Evaluate quality"
  n14: rectangle label="Flag issues"
  n13.handle(right) -> n14.handle(left)
  n14.handle(top) -> Orchestrator.n6.handle(bottom) [label="Review result"]
}
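The orchestrator-worker flow in the diagram above maps onto a simple runtime pattern. Here is a minimal Python sketch; the three worker functions and the string-based decomposition are hypothetical stand-ins for real specialist agents and an LLM planning step:

```python
# Minimal orchestrator-worker sketch. The worker functions below are
# hypothetical placeholders for real specialist agents (research, code, review).

def research_worker(subtask: str) -> str:
    return f"research result for {subtask!r}"

def code_worker(subtask: str) -> str:
    return f"code result for {subtask!r}"

def review_worker(subtask: str) -> str:
    return f"review result for {subtask!r}"

WORKERS = {"research": research_worker, "code": code_worker, "review": review_worker}

def orchestrate(task: str) -> str:
    # Break the task into (worker, subtask) pairs. A real orchestrator
    # would use an LLM for this decomposition step.
    subtasks = [(kind, f"{kind} part of {task}") for kind in WORKERS]
    # Dispatch each subtask to its specialist and collect the results.
    results = [WORKERS[kind](sub) for kind, sub in subtasks]
    # Synthesize the final answer from the collected results.
    return " | ".join(results)
```

The synthesis step here is a plain join; in practice it would be another LLM call that merges the workers' outputs into one response.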

Related templates

AI-Native Orchestrator-Worker Architecture

Architecture

An orchestrator-worker architecture where an orchestrator agent breaks a goal into subtasks, dispatches them to specialized workers, then synthesizes a final response. This is the most common 'agent orchestration' architecture: powerful, but the orchestrator can become a bottleneck as the number of workers grows. Frameworks like LangGraph focus on explicit routing and state to manage this.

AI Orchestration - Hierarchical (Org Chart)

Architecture

An org-chart architecture with a multi-level structure: a top-level supervisor manages team leads, who each manage their own pool of specialist workers. Teams within teams. Best for enterprise-scale automation with 10+ specialized agents spanning multiple domains.
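The two-level delegation this template depicts can be sketched in a few lines of Python. The team names, worker names, and routing are illustrative assumptions, not part of the template:

```python
# Hierarchical (org-chart) sketch: a supervisor routes a goal to team
# leads, each of which manages its own pool of specialist workers.
# All agent names here are illustrative assumptions.

def make_worker(name):
    def worker(task):
        return f"{name}: {task}"
    return worker

def make_lead(team, workers):
    def lead(goal):
        # A team lead splits its slice of the goal across its workers.
        return [w(f"{team} subtask of {goal}") for w in workers]
    return lead

TEAMS = {
    "engineering": make_lead("engineering", [make_worker("coder"), make_worker("tester")]),
    "research": make_lead("research", [make_worker("searcher"), make_worker("summarizer")]),
}

def supervisor(goal):
    # The top-level supervisor talks only to team leads, never to
    # individual workers; that is what keeps its context partitioned.
    return {team: lead(goal) for team, lead in TEAMS.items()}
```

Adding a level is just another layer of `make_lead`, which is how the tree scales past what one orchestrator can manage.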

AI Orchestration - Parallel Fan-Out (Map-Reduce)

Architecture

A map-reduce style architecture where a coordinator fans out tasks to multiple parallel worker agents (style check, security audit, performance analysis), gathers all results, and makes an aggregate decision. Best for PR reviews, code reviews, and multi-dimensional analysis.
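The fan-out/gather step described above can be sketched with Python's standard thread pool; the three check functions are hypothetical placeholders for real analysis agents:

```python
# Map-reduce fan-out sketch: the coordinator runs three independent
# checks in parallel, then reduces their verdicts to one decision.
from concurrent.futures import ThreadPoolExecutor

def style_check(diff): return ("style", "pass")
def security_audit(diff): return ("security", "pass")
def performance_analysis(diff): return ("performance", "pass")

CHECKS = [style_check, security_audit, performance_analysis]

def review_pr(diff: str) -> dict:
    # Map: fan the same input out to every checker in parallel.
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        results = dict(pool.map(lambda check: check(diff), CHECKS))
    # Reduce: approve only if every dimension passed.
    decision = "approve" if all(v == "pass" for v in results.values()) else "request changes"
    results["decision"] = decision
    return results
```

Because the checks share no state, they parallelize cleanly; only the reduce step needs all results, which is what makes this pattern a good fit for multi-dimensional PR review.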

AI Orchestration - Competitive Generator-Critic

Architecture

A tournament-mode architecture where multiple generator agents produce independent outputs in parallel, then an evaluator agent scores and selects the best. Quality threshold checking with refinement loops. Best when correctness or creativity matters more than latency.
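The tournament loop above can be sketched as follows. The two generators and the length-based critic are deliberately crude stand-ins for independent LLM generators and an LLM judge:

```python
# Generator-critic sketch: several generators answer independently, an
# evaluator scores each answer, and the best answer is returned if it
# clears a quality threshold; otherwise the prompt is refined and retried.

def gen_short(prompt): return f"{prompt}: brief answer"
def gen_long(prompt): return f"{prompt}: a longer, more detailed answer"

GENERATORS = [gen_short, gen_long]

def evaluate(answer: str) -> float:
    # Placeholder critic: prefer longer answers. A real system would
    # score with a judge model or a rubric.
    return float(len(answer))

def best_of(prompt: str, threshold: float = 10.0, max_rounds: int = 3) -> str:
    for round_ in range(max_rounds):
        candidates = [g(prompt) for g in GENERATORS]  # parallel in spirit
        best = max(candidates, key=evaluate)
        if evaluate(best) >= threshold:
            return best
        prompt = f"{prompt} (refine attempt {round_ + 1})"  # refinement loop
    return best
```

Each round costs one call per generator plus the evaluation, which is the latency/cost trade the template description mentions.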

AI-Native Hierarchical Architecture

Architecture

A hierarchical multi-agent architecture that scales orchestration by stacking supervisors and team leads (a tree structure), which mirrors enterprise org structures and helps partition context. This is the 'enterprise-grade agentic AI architecture' when a single orchestrator cannot manage all workers directly. Ideal for large enterprises and multi-domain workflows.

AI-Native Generator-Critic Architecture

Architecture

A competitive / generator-critic architecture where multiple generators produce independent answers, then an evaluator agent scores and selects the best output. This approach improves quality and reduces single-model brittleness. It's costlier (multiple LLM calls) but pays off when correctness or creativity matters more than latency.
