Welcome to FlowZap, the app to diagram with speed, clarity, and control.

Context Management - Episodic Memory

Architecture

Learning-from-experience pattern where every task run becomes an episode with input, actions, and outcome. Before tackling a new task, the agent fetches similar episodes and uses them as hints. Over time, the agent feels like it's learning instead of repeating the same failed plans.

Full FlowZap Code

User {
  n1: circle label="1. Submit task"
  n2: rectangle label="12. See result"
  n1.handle(right) -> Agent.n3.handle(left)
  Agent.n9.handle(right) -> n2.handle(left)
}

Agent {
  n3: rectangle label="1. Receive task"
  n4: rectangle label="2. Search similar episodes"
  n5: rectangle label="4. Past lessons"
  n6: rectangle label="5. Execute plan"
  n7: rectangle label="6. Capture outcome"
  n8: rectangle label="7. Send episode for scoring"
  n9: rectangle label="11. Return result"
  n3.handle(right) -> n4.handle(left)
  n5.handle(right) -> n6.handle(left)
  n6.handle(right) -> n7.handle(left)
  n7.handle(right) -> n8.handle(left)
  n8.handle(right) -> Evaluator.n13.handle(left) [label="8. Outcome"]
  EpisodeStore.n12.handle(right) -> n9.handle(left) [label="11.2 Outcome"]
}

EpisodeStore {
  n10: rectangle label="3. Episode search"
  n11: rectangle label="3.2 Search similar episodes"
  n12: rectangle label="10. Episode + lesson"
  Agent.n4.handle(right) -> n10.handle(left) [label="2. Search similar episodes"]
  n10.handle(right) -> n11.handle(left) [label="3. Episode search"]
  n11.handle(right) -> Agent.n5.handle(left) [label="4. Past lessons"]
  Evaluator.n14.handle(right) -> n12.handle(left) [label="10. Episode + lesson"]
}

Evaluator {
  n13: rectangle label="9. Score outcome"
  n14: rectangle label="9.2 Extract lesson"
  n13.handle(right) -> n14.handle(left)
}
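The loop in the diagram can be sketched in code. This is a minimal Python sketch, not FlowZap output: the `Episode`/`EpisodeStore` names, the word-overlap similarity, and the canned outcome are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    actions: list
    outcome: str
    lesson: str = ""

@dataclass
class EpisodeStore:
    episodes: list = field(default_factory=list)

    def search(self, task, k=2):
        # Toy similarity: count of shared words between task strings.
        def overlap(ep):
            return len(set(ep.task.split()) & set(task.split()))
        return sorted(self.episodes, key=overlap, reverse=True)[:k]

    def add(self, episode):
        self.episodes.append(episode)

def run_task(task, store):
    # Fetch past lessons as hints before planning.
    lessons = [ep.lesson for ep in store.search(task)]
    actions = [f"plan for {task!r} informed by {len(lessons)} past lesson(s)"]
    outcome = "success"  # canned outcome for the sketch
    # Extract a lesson and persist the whole episode for future runs.
    lesson = f"'{task.split()[0]}' tasks succeed with this plan"
    store.add(Episode(task, actions, outcome, lesson))
    return outcome, lessons
```

On a second, similar task the store returns the first run's lesson, which is the "learning" effect the pattern is after.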

Related templates

Context Management - Session Memory

Architecture

Short-term context pattern where the channel sends new messages plus recent history. The agent runtime merges that with local session state, assembles the prompt, and persists the response back into history. Simple, but cost and latency grow with history length.
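A minimal Python sketch of this pattern, assuming a stub `reply` function in place of a real model call:

```python
def reply(prompt: str) -> str:
    # Placeholder model: echoes the last prompt line.
    return f"echo: {prompt.splitlines()[-1]}"

class Session:
    def __init__(self, max_turns=10):
        self.history = []            # persisted turn-by-turn state
        self.max_turns = max_turns

    def ask(self, message: str) -> str:
        recent = self.history[-self.max_turns:]            # recent history
        prompt = "\n".join(recent + [f"user: {message}"])  # assemble prompt
        answer = reply(prompt)
        # Persist both sides of the turn back into history.
        self.history += [f"user: {message}", f"agent: {answer}"]
        return answer
```

The prompt string grows with every turn, which is exactly where the cost and latency problem comes from.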

Context Management - Rolling Summary Memory

Architecture

Compressed history pattern that keeps full history for a while; once a threshold is hit, it summarizes the oldest chunk of turns and replaces those detailed turns with a shorter summary message. Dramatically reduces prompt size in long-running chats while maintaining gist continuity.
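The threshold-and-compress step can be sketched as follows; `summarize` is a stub standing in for a model-written summary, and the threshold and chunk sizes are arbitrary:

```python
def summarize(turns):
    # Stand-in for a model-generated summary of the given turns.
    return f"[summary of {len(turns)} turns]"

class RollingHistory:
    def __init__(self, threshold=6, chunk=4):
        self.turns = []
        self.threshold = threshold  # when to compress
        self.chunk = chunk          # how many oldest turns to fold away

    def add(self, turn: str):
        self.turns.append(turn)
        if len(self.turns) >= self.threshold:
            head, tail = self.turns[:self.chunk], self.turns[self.chunk:]
            # Replace detailed old turns with one short summary message.
            self.turns = [summarize(head)] + tail
```

After compression the prompt carries one summary message plus the recent verbatim turns, so its size stays roughly bounded.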

Context Management - Profile Memory

Architecture

Identity-style memory pattern where profile data is loaded at session start. Each prompt combines the system persona, user profile, and current message. New facts can be written back to profile memory. Small, predictable overhead with a big UX lift: the agent remembers your name, stack, tone, and constraints.
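A sketch under invented assumptions: the persona string, the profile fields, and the "my X is Y" write-back rule below are all illustrative.

```python
PERSONA = "You are a concise assistant."

def build_prompt(profile: dict, message: str) -> str:
    # Combine system persona + user profile + current message.
    facts = "; ".join(f"{k}={v}" for k, v in sorted(profile.items()))
    return f"{PERSONA}\n[profile: {facts}]\nuser: {message}"

def maybe_learn(profile: dict, message: str):
    # Toy write-back rule: "my <key> is <value>" becomes a profile fact.
    words = message.lower().split()
    if len(words) >= 4 and words[0] == "my" and words[2] == "is":
        profile[words[1]] = words[3]

profile = {"name": "Ada"}                     # loaded at session start
maybe_learn(profile, "my stack is python")    # new fact written back
prompt = build_prompt(profile, "write a linter")
```

The profile adds a fixed-size block to every prompt, which is why the overhead stays small and predictable.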

Context Management - Semantic Memory

Architecture

Vector-based memory pattern where text is chunked, embedded, and stored in a vector database. On query, the question is embedded, a vector search is run, candidates are reranked, and top results are injected into the prompt. The result is an agent that feels like it remembers everything without hallucinating.
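The chunk-embed-store-query pipeline can be sketched with a toy word-count "embedding" standing in for real learned vectors, and a plain list standing in for a vector database:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding vector: word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class VectorStore:
    def __init__(self):
        self.chunks = []  # (embedding, text) pairs

    def add(self, text: str):
        self.chunks.append((embed(text), text))

    def query(self, question: str, k=2):
        q = embed(question)
        # Rank candidates by similarity and keep the top results.
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[0]), reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add("the deploy script lives in infra/deploy.sh")
store.add("lunch is at noon on fridays")
context = store.query("where is the deploy script?", k=1)
```

The returned `context` chunks are what would be injected into the prompt; a production system would add a separate reranking model on top of the raw similarity scores.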

Context Management - Hybrid Retrieval Memory

Architecture

Multi-strategy retrieval pattern combining semantic search, exact/keyword search, and recency search in parallel. Results are merged and reranked into a single context set. Much higher recall because the agent can find both fuzzy references and exact entities. Essential for comprehensive knowledge retrieval.
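A sketch of the merge-and-rerank step, assuming three toy retrievers and reciprocal-rank fusion as the merge rule; the documents and scoring below are invented for illustration.

```python
docs = [
    {"id": 1, "text": "fix login bug in auth module", "age": 3},
    {"id": 2, "text": "login page redesign notes", "age": 1},
    {"id": 3, "text": "database backup checklist", "age": 0},
]

def semantic(query):
    # Stand-in for vector search: rank by word overlap with the query.
    return sorted(docs, key=lambda d: -len(set(d["text"].split()) & set(query.split())))

def keyword(query):
    # Exact-match search: documents containing the first query term come first.
    return sorted(docs, key=lambda d: query.split()[0] not in d["text"])

def recency(query):
    # Recency search: newest documents first.
    return sorted(docs, key=lambda d: d["age"])

def hybrid(query, k=2):
    # Run all retrievers, then merge with reciprocal-rank fusion.
    scores = {}
    for retriever in (semantic, keyword, recency):
        for rank, doc in enumerate(retriever(query)):
            scores[doc["id"]] = scores.get(doc["id"], 0) + 1 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Reciprocal-rank fusion is one common merge rule; it rewards documents that rank well under any strategy, which is where the recall gain over a single retriever comes from.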

Context Management - Shared Memory

Architecture

Multi-agent coordination pattern where an orchestrator breaks work into subtasks, specialist agents pull from and push to a shared state store, and the orchestrator composes the final answer from that shared state. Multi-agent setups feel coherent instead of each assistant having its own inconsistent memory.
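A minimal sketch with two invented specialists sharing one dict as the state store:

```python
shared = {}  # shared state store

def researcher(task):
    # Specialist pushes its findings into shared state.
    shared["facts"] = f"facts about {task}"

def writer(task):
    # Specialist pulls another agent's output from shared state.
    facts = shared.get("facts", "")
    shared["draft"] = f"report on {task} using {facts}"

def orchestrate(task):
    # Orchestrator runs subtasks, then composes the answer from shared state.
    for specialist in (researcher, writer):
        specialist(task)
    return shared["draft"]

answer = orchestrate("solar panels")
```

Because every specialist reads and writes the same store, later agents see earlier agents' work, which is what keeps the composed answer coherent.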
