Welcome to FlowZap, the app for diagramming with speed, clarity, and control.

Context Management - Semantic Memory

Architecture

Vector-based memory pattern where text is chunked, embedded, and stored in a vector database. On query, the question is embedded, a vector search is run, candidates are reranked, and the top results are injected into the prompt, so the agent feels like it remembers everything without hallucinating.
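The chunk-embed-store-recall loop above can be sketched in a few lines of Python. This is a toy illustration, not a production retriever: a bag-of-words counter stands in for a real embedding model, a plain list stands in for the vector index, and all names are illustrative.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words term counts (stands in for a real model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticMemory:
    def __init__(self):
        self.store = []  # list of (embedding, chunk) pairs: the "vector index"

    def remember(self, text, chunk_size=8):
        # Chunk the text, embed each chunk, and store it.
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.store.append((embed(chunk), chunk))

    def recall(self, query, k=2):
        # Embed the query, run a vector search, and keep the top-k candidates.
        scored = [(cosine(embed(query), vec), chunk) for vec, chunk in self.store]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [chunk for score, chunk in scored[:k] if score > 0]

memory = SemanticMemory()
memory.remember("The deploy pipeline runs on Fridays. Alice owns the billing service. "
                "The vector index is rebuilt nightly from the docs folder.")
recalled = memory.recall("who owns billing")
# Recalled chunks are injected into the prompt ahead of the question.
prompt = "Context:\n" + "\n".join(recalled) + "\n\nQuestion: who owns billing?"
```

A real deployment would swap in an embedding model and a vector database, and add a dedicated reranking step between search and injection.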

Full FlowZap Code

User {
  n1: circle label="Ask question"
  n2: rectangle label="See answer with references"
  n1.handle(right) -> Agent.n3.handle(left)
  Agent.n12.handle(right) -> n2.handle(left)
}

Agent {
  n3: rectangle label="Receive query"
  n4: rectangle label="Build memory search query"
  n5: rectangle label="Send to retriever"
  n6: rectangle label="Inject recalled memories"
  n7: rectangle label="Assemble final prompt"
  n8: rectangle label="Call LLM"
  n12: rectangle label="Return answer"
  n3.handle(right) -> n4.handle(left)
  n4.handle(right) -> n5.handle(left)
  n5.handle(bottom) -> Retriever.n9.handle(top) [label="Semantic query"]
  n6.handle(right) -> n7.handle(left)
  n7.handle(right) -> n8.handle(left)
  n8.handle(right) -> LLM.n14.handle(left)
}

Retriever {
  n9: rectangle label="Embed query"
  n10: rectangle label="Search vector store"
  n11: rectangle label="Rerank top memories"
  n9.handle(right) -> n10.handle(left)
  n10.handle(right) -> n11.handle(left)
  n11.handle(top) -> Agent.n6.handle(bottom) [label="Top memories"]
}

VectorDB {
  n13: rectangle label="Vector index"
  Retriever.n10.handle(bottom) -> n13.handle(top) [label="Similarity search"]
}

LLM {
  n14: rectangle label="Answer using recalled facts"
  n14.handle(right) -> Agent.n12.handle(left)
}

Related templates

Context Management - Session Memory

Architecture

Short-term context pattern where the channel sends new messages plus recent history. The agent runtime merges that with local session state, assembles the prompt, and persists the response back into history. Simple, but cost and latency grow with history length.
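A minimal sketch of the session-memory loop, assuming an in-process turn list stands in for the session store (class and parameter names are illustrative):

```python
class SessionMemory:
    def __init__(self, window=6):
        self.history = []     # persisted turn list for this session
        self.window = window  # how many recent turns go into each prompt

    def build_prompt(self, user_message, system="You are a helpful assistant."):
        # Merge the new message with recent history into one prompt.
        recent = self.history[-self.window:]
        lines = [f"system: {system}"]
        lines += [f"{role}: {text}" for role, text in recent]
        lines.append(f"user: {user_message}")
        return "\n".join(lines)

    def record(self, user_message, answer):
        # Persist both sides of the exchange back into history.
        self.history.append(("user", user_message))
        self.history.append(("assistant", answer))

session = SessionMemory(window=4)
session.record("My name is Dana.", "Nice to meet you, Dana!")
prompt = session.build_prompt("What is my name?")
```

Note that every stored turn inside the window is resent on each call, which is exactly why cost and latency grow with history length.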

Context Management - Rolling Summary Memory

Architecture

Compressed history pattern that keeps full history until a threshold is hit, then summarizes the oldest chunk of turns and replaces them with a single shorter summary message. Dramatically reduces prompt size on long-running chats while maintaining gist continuity.
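The threshold-and-compress step can be sketched as follows; a trivial string truncation stands in for the LLM summarization call, and the threshold and retention counts are illustrative:

```python
def summarize(turns):
    # Stand-in for an LLM summarization call: keep the first clause of each turn.
    return "Summary: " + "; ".join(text.split(".")[0] for _, text in turns)

class RollingSummaryMemory:
    def __init__(self, threshold=6, keep_recent=2):
        self.turns = []
        self.threshold = threshold      # compress once history exceeds this
        self.keep_recent = keep_recent  # turns kept verbatim after compression

    def add(self, role, text):
        self.turns.append((role, text))
        if len(self.turns) > self.threshold:
            # Replace the oldest detailed turns with one summary message,
            # keeping the most recent turns verbatim for continuity.
            old = self.turns[:-self.keep_recent]
            recent = self.turns[-self.keep_recent:]
            self.turns = [("summary", summarize(old))] + recent

mem = RollingSummaryMemory(threshold=6, keep_recent=2)
for i in range(8):
    mem.add("user", f"turn {i}.")
```

After eight turns the history holds one summary message plus the latest turns instead of eight full entries, which is where the prompt-size savings come from.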

Context Management - Profile Memory

Architecture

Identity-style memory pattern where profile data is loaded at session start. Each prompt combines system persona, user profile, and current message. New facts can be written back to profile memory. Small, predictable overhead with a big UX lift: the agent remembers your name, stack, tone, and constraints.
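A minimal sketch of the persona + profile + message assembly, with write-back of newly observed facts (the key-value profile shape and all field names are assumptions for illustration):

```python
class ProfileMemory:
    def __init__(self, profile=None):
        # Loaded once at session start; small, predictable token overhead.
        self.profile = profile or {}

    def build_prompt(self, persona, message):
        # Every prompt combines system persona, user profile, and the message.
        profile_lines = [f"- {k}: {v}" for k, v in self.profile.items()]
        return "\n".join([
            f"system: {persona}",
            "User profile:",
            *profile_lines,
            f"user: {message}",
        ])

    def learn(self, key, value):
        # Write a newly observed fact back into profile memory.
        self.profile[key] = value

pm = ProfileMemory({"name": "Sam", "stack": "Go + Postgres"})
pm.learn("tone", "concise")
prompt = pm.build_prompt("You are a code reviewer.", "Review my PR")
```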

Context Management - Episodic Memory

Architecture

Learning-from-experience pattern where every task run becomes an episode with input, actions, and outcome. Before tackling a new task, the agent fetches similar episodes and uses them as hints. Over time, the agent feels like it's learning instead of repeating the same failed plans.
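The episode-record-and-recall cycle can be sketched as below; word overlap stands in for a real similarity search over episode embeddings, and the episode fields are illustrative:

```python
class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # each task run recorded as input, actions, outcome

    def record(self, task, actions, outcome):
        self.episodes.append({"task": task, "actions": actions, "outcome": outcome})

    def similar(self, task, k=2):
        # Toy similarity: shared-word overlap with past task descriptions.
        def overlap(a, b):
            return len(set(a.lower().split()) & set(b.lower().split()))
        ranked = sorted(self.episodes,
                        key=lambda e: overlap(task, e["task"]), reverse=True)
        return ranked[:k]

    def hints(self, task):
        # Turn similar past episodes into prompt hints before a new attempt.
        return [f"Previously, '{e['task']}' -> {e['outcome']}"
                for e in self.similar(task)]

em = EpisodicMemory()
em.record("deploy api service", ["build", "push", "migrate"],
          "failed: migration ran before push")
em.record("deploy web frontend", ["build", "push"], "succeeded")
hints = em.hints("deploy api service again")
```

Injecting the failed episode as a hint is what keeps the agent from repeating the same broken plan on the next similar task.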

Context Management - Hybrid Retrieval Memory

Architecture

Multi-strategy retrieval pattern combining semantic search, exact/keyword search, and recency search in parallel. Results are merged and reranked into a single context set. Much higher recall because the agent can find both fuzzy references and exact entities. Essential for comprehensive knowledge retrieval.
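A sketch of the three parallel searches plus merge-and-rerank step. Word-overlap Jaccard stands in for real embedding similarity, substring match for the keyword index, and the merge weights are arbitrary illustrative choices:

```python
def semantic_score(query, text):
    # Toy stand-in for embedding similarity: word-overlap Jaccard.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

def hybrid_search(query, docs, k=3):
    """docs: list of (timestamp, text). Runs three searches and merges them."""
    semantic = {i: semantic_score(query, t) for i, (_, t) in enumerate(docs)}
    keyword = {i: 1.0 if query.lower() in t.lower() else 0.0
               for i, (_, t) in enumerate(docs)}
    newest = max(ts for ts, _ in docs)
    recency = {i: ts / newest for i, (ts, _) in enumerate(docs)}
    # Merge: a weighted sum acts as the reranker over the union of candidates.
    merged = {i: 0.5 * semantic[i] + 0.3 * keyword[i] + 0.2 * recency[i]
              for i in semantic}
    ranked = sorted(merged, key=merged.get, reverse=True)
    return [docs[i][1] for i in ranked[:k]]

docs = [(1, "ACME-1234 ticket about login outage"),
        (2, "notes on user authentication flows"),
        (3, "lunch menu for friday")]
results = hybrid_search("ACME-1234", docs, k=2)
```

The exact-match channel is what lets an identifier like a ticket number beat fuzzier but more recent documents, which pure semantic search often misses.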

Context Management - Shared Memory

Architecture

Multi-agent coordination pattern where an orchestrator breaks work into subtasks, specialist agents pull from and push to a shared state store, and the orchestrator composes the final answer from that shared state. Multi-agent setups feel coherent instead of each assistant having its own inconsistent memory.
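As a minimal sketch of the orchestrator and specialists coordinating through one shared store (the two specialist roles and all key names are hypothetical):

```python
class SharedMemory:
    # The shared state store that every agent reads from and writes to.
    def __init__(self):
        self.state = {}

    def write(self, key, value, author):
        self.state[key] = {"value": value, "author": author}

    def read(self, key):
        entry = self.state.get(key)
        return entry["value"] if entry else None

def researcher(memory, topic):
    # Specialist: handles its subtask and pushes findings into shared state.
    memory.write("findings", f"facts about {topic}", author="researcher")

def writer(memory):
    # Specialist: pulls another agent's output from the same store.
    findings = memory.read("findings")
    memory.write("draft", f"Report based on {findings}", author="writer")

def orchestrator(topic):
    # Breaks the work into subtasks, runs the specialists, and composes
    # the final answer from shared state rather than private memories.
    memory = SharedMemory()
    researcher(memory, topic)
    writer(memory)
    return memory.read("draft")

answer = orchestrator("vector databases")
```

Because both specialists work against the same store, the composed answer stays consistent with what each agent actually produced.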
