Architecture
Short-term context pattern where the channel sends new messages plus recent history. The agent runtime merges that with local session state, assembles the prompt, and persists the response back into history. Simple to build, but prompt cost and latency grow with history length.
Full FlowZap Code
User { # User
n1: circle label="User sends message"
n2: rectangle label="See agent reply"
n1.handle(right) -> Agent.n3.handle(left)
Agent.n8.handle(right) -> n2.handle(left)
}
Agent { # Agent
n3: rectangle label="Receive message"
n4: rectangle label="Load session history"
n5: rectangle label="Assemble prompt"
n6: rectangle label="Call LLM"
n7: rectangle label="Receive LLM reply"
n8: rectangle label="Return answer to user"
n9: rectangle label="Persist updated session"
n3.handle(right) -> n4.handle(left)
n4.handle(right) -> n5.handle(left)
n5.handle(right) -> n6.handle(left)
n6.handle(right) -> LLM.n10.handle(left)
n7.handle(right) -> n8.handle(left)
n8.handle(bottom) -> n9.handle(top) [label="Save session"]
}
Memory { # Session Store
n11: rectangle label="Read session state"
n12: rectangle label="Write session state"
Agent.n3.handle(bottom) -> n11.handle(top) [label="Get history"]
n11.handle(right) -> Agent.n4.handle(left)
Agent.n9.handle(bottom) -> n12.handle(top) [label="Store history"]
}
LLM { # LLM
n10: rectangle label="Generate answer"
n10.handle(right) -> Agent.n7.handle(left)
}
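The loop the diagram describes can be sketched in a few lines. This is a minimal illustration, assuming an in-memory session store and a stubbed call_llm in place of a real model call; all names here are hypothetical.

```python
# Short-term context loop: load history, assemble prompt from recent
# turns, call the model, persist the reply back into the session.

SESSIONS = {}    # session_id -> list of {"role", "content"} turns
MAX_TURNS = 20   # only the most recent turns go into the prompt

def call_llm(messages):
    # Stand-in for a real LLM call; echoes the last user message.
    return "echo: " + messages[-1]["content"]

def handle_message(session_id, user_text):
    history = SESSIONS.setdefault(session_id, [])        # load session history
    history.append({"role": "user", "content": user_text})
    prompt = history[-MAX_TURNS:]                        # assemble prompt
    reply = call_llm(prompt)                             # call LLM
    history.append({"role": "assistant", "content": reply})  # persist session
    return reply

print(handle_message("s1", "hello"))   # -> echo: hello
```

Because the whole history rides along in every prompt, each turn makes the next one larger, which is exactly the cost/latency growth noted above.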
Related templates
Architecture
Compressed history pattern that keeps full history for a while, then, when a threshold is hit, summarizes the older turns and replaces them with a shorter summary message. Dramatically reduces prompt size on long-running chats while maintaining gist continuity.
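The threshold-and-summarize step can be sketched as follows; this is an illustrative sketch with a stubbed summarize() standing in for a real summarization call, and the threshold values are arbitrary.

```python
# History compression: once the turn count passes THRESHOLD, fold the
# older turns into one summary message and keep only the recent ones.

THRESHOLD = 8     # compress once history exceeds this many turns
KEEP_RECENT = 4   # detailed turns to keep after compression

def summarize(turns):
    # Stand-in for an LLM summarization call.
    return "summary of %d earlier turns" % len(turns)

def compress(history):
    if len(history) <= THRESHOLD:
        return history
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    summary_msg = {"role": "system", "content": summarize(old)}
    return [summary_msg] + recent

history = [{"role": "user", "content": "turn %d" % i} for i in range(10)]
compact = compress(history)
print(len(compact))   # -> 5 (one summary message plus four recent turns)
```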
Architecture
Identity-style memory pattern where profile data is loaded at session start. Each prompt combines system persona, user profile, and current message. New facts can be written back to profile memory. Small predictable overhead with big UX lift — the agent remembers your name, stack, tone, and constraints.
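A minimal sketch of the load-combine-write-back cycle, assuming a plain dict as the profile store; the field names and persona string are illustrative, not a fixed schema.

```python
# Identity-style memory: load the profile at session start, fold it into
# every prompt alongside the persona, and write new facts back.

PROFILES = {"alice": {"name": "Alice", "stack": "Python", "tone": "concise"}}

def build_prompt(user_id, message, persona="You are a helpful assistant."):
    profile = PROFILES.get(user_id, {})                    # load at session start
    facts = "; ".join("%s=%s" % kv for kv in sorted(profile.items()))
    return "%s\nUser profile: %s\nUser: %s" % (persona, facts, message)

def remember(user_id, key, value):
    PROFILES.setdefault(user_id, {})[key] = value          # write fact back

remember("alice", "editor", "vim")
print("editor=vim" in build_prompt("alice", "help me refactor this"))  # -> True
```

The overhead is one profile read per session and a few hundred tokens per prompt, which is why it stays small and predictable regardless of chat length.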
Architecture
Vector-based memory pattern where text is chunked, embedded, and stored in a vector database. On query, the question is embedded, a vector search is run, candidates are reranked, and top results are injected into the prompt. This is where the agent feels like it remembers everything without hallucinating.
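The chunk-embed-store-retrieve flow can be shown end to end with a toy bag-of-words embedding and cosine similarity standing in for a real embedding model and vector database; the data and chunk size are made up for illustration.

```python
# Vector-memory flow: chunk documents, embed and store them, then embed
# the query, score by cosine similarity, and return the top chunks.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())   # toy embedding: word counts

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

STORE = []  # list of (embedding, chunk) pairs -- the "vector database"

def index(doc, chunk_size=8):
    words = doc.split()
    for i in range(0, len(words), chunk_size):     # chunk
        chunk = " ".join(words[i:i + chunk_size])
        STORE.append((embed(chunk), chunk))        # embed and store

def retrieve(question, k=2):
    q = embed(question)                            # embed the query
    ranked = sorted(STORE, key=lambda e: cosine(q, e[0]), reverse=True)
    return [chunk for _, chunk in ranked[:k]]      # top results for the prompt

index("the billing service retries failed charges three times before alerting")
index("the auth service issues short lived tokens for each session")
print(retrieve("how does billing handle failed charges", k=1))
```

In a real deployment the Counter embedding becomes a dense vector from an embedding model, and the sorted scan becomes an approximate nearest-neighbor search plus a reranking pass.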
Architecture
Learning-from-experience pattern where every task run becomes an episode with input, actions, and outcome. Before tackling a new task, the agent fetches similar episodes and uses them as hints. Over time, the agent feels like it's learning instead of repeating the same failed plans.
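The episode store and hint lookup can be sketched like this; a trivial word-overlap similarity stands in for embedding-based episode retrieval, and all names are illustrative.

```python
# Learning from experience: record each run as an episode, then surface
# similar past episodes as hints before tackling a new task.

EPISODES = []  # each episode: {"task", "actions", "outcome"}

def record(task, actions, outcome):
    EPISODES.append({"task": task, "actions": actions, "outcome": outcome})

def similar_episodes(task, k=2):
    words = set(task.lower().split())
    return sorted(EPISODES,
                  key=lambda e: len(words & set(e["task"].lower().split())),
                  reverse=True)[:k]

def hints_for(task):
    # Turn similar past runs into prompt hints, including what went wrong.
    return ["previously '%s': %s (%s)" % (e["task"], e["actions"], e["outcome"])
            for e in similar_episodes(task)]

record("deploy api service", "skipped health check, rolled back", "failure")
record("deploy web service", "ran migrations first, then deployed", "success")
print(hints_for("deploy payments service"))
```

Feeding both successes and failures back in is the point: the failed plans become negative examples instead of being rediscovered on every run.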
Architecture
Multi-strategy retrieval pattern combining semantic search, exact/keyword search, and recency search in parallel. Results are merged and reranked into a single context set. Much higher recall because the agent can find both fuzzy references and exact entities. Essential for comprehensive knowledge retrieval.
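A compact sketch of the merge-and-rerank step, with word overlap standing in for semantic search and a simple additive score standing in for a real reranker; the corpus and weights are made up.

```python
# Multi-strategy retrieval: score every document on exact keyword hits,
# fuzzy word overlap, and recency, then merge into one ranked list.

DOCS = [  # (timestamp, text) -- newest last
    (1, "ticket 4812 closed after the cache fix"),
    (2, "users report slow dashboard loads"),
    (3, "cache hit rate dropped after the deploy"),
]

def hybrid_search(query, k=2):
    q_words = set(query.lower().split())
    newest = max(ts for ts, _ in DOCS)
    scored = []
    for ts, text in DOCS:
        words = set(text.lower().split())
        keyword = 2.0 if any(w in text for w in query.split()) else 0.0  # exact terms
        fuzzy = len(q_words & words) / max(len(q_words), 1)   # semantic stand-in
        recency = ts / newest                                 # newer scores higher
        scored.append((keyword + fuzzy + recency, text))      # merge the signals
    return [t for _, t in sorted(scored, reverse=True)[:k]]   # rerank

print(hybrid_search("cache hit rate"))
```

The exact-match signal catches entities like ticket numbers that embeddings blur, while the fuzzy signal catches paraphrases the keyword pass misses; combining them is what lifts recall.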
Architecture
Multi-agent coordination pattern where an orchestrator breaks work into subtasks, specialist agents pull from and push to a shared state store, and the orchestrator composes the final answer from that shared state. Multi-agent setups feel coherent instead of each assistant having its own inconsistent memory.
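The orchestrator-plus-shared-state shape can be sketched as below; the specialist agents are plain functions standing in for LLM-backed workers, and the state keys are illustrative.

```python
# Multi-agent coordination: specialists pull from and push to one shared
# state store, and the orchestrator composes the final answer from it.

SHARED_STATE = {}   # the store every agent reads from and writes to

def research_agent(task):
    SHARED_STATE["findings"] = "notes on %s" % task        # push result

def writing_agent():
    findings = SHARED_STATE["findings"]                    # pull prior result
    SHARED_STATE["draft"] = "draft based on %s" % findings

def orchestrate(task):
    # Break work into subtasks, run specialists in order, then compose
    # the final answer from shared state.
    research_agent(task)
    writing_agent()
    return SHARED_STATE["draft"]

print(orchestrate("quarterly report"))
# -> draft based on notes on quarterly report
```

Because every agent reads and writes the same store, later specialists see earlier results automatically, which is what keeps the overall answer coherent.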