Welcome to FlowZap, the app for diagramming with speed, clarity, and control.

Context Management - Profile Memory

Architecture

Identity-style memory pattern where profile data is loaded at session start. Each prompt combines the system persona, the user profile, and the current message, and new facts can be written back to profile memory. Small, predictable overhead with a big UX lift: the agent remembers your name, stack, tone, and constraints.

Full FlowZap Code

User {
  n1: circle label="Start session"
  n2: rectangle label="Send request"
  n3: rectangle label="See personalized reply"
  n1.handle(right) -> n2.handle(left)
  n2.handle(right) -> Agent.n4.handle(left)
  Agent.n9.handle(right) -> n3.handle(left)
}

Agent {
  n4: rectangle label="Identify user"
  n5: rectangle label="Load profile"
  n6: rectangle label="Build prompt with profile"
  n7: rectangle label="Call LLM"
  n8: rectangle label="Optionally update profile"
  n9: rectangle label="Return answer"
  n4.handle(right) -> n5.handle(left)
  n5.handle(right) -> n6.handle(left)
  n6.handle(right) -> n7.handle(left)
  n7.handle(right) -> LLM.n10.handle(left)
  n7.handle(bottom) -> n8.handle(top)
  n8.handle(right) -> ProfileStore.n11.handle(left) [label="Write profile"]
}

ProfileStore {
  n11: rectangle label="Read/Write profile"
  Agent.n5.handle(bottom) -> n11.handle(top) [label="Read profile"]
  n11.handle(right) -> Agent.n5.handle(left)
}

LLM {
  n10: rectangle label="Generate answer using profile"
  n10.handle(right) -> Agent.n9.handle(left)
}
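The flow in the diagram can be sketched as a short Python loop. `ProfileStore`, `build_prompt`, and the echoed answer below are hypothetical stand-ins for a real store and a real LLM call, not FlowZap or production APIs:

```python
class ProfileStore:
    """Persistent per-user profile memory (an in-memory dict in this sketch)."""

    def __init__(self):
        self._profiles = {}

    def read(self, user_id):
        return self._profiles.get(user_id, {})

    def write(self, user_id, facts):
        self._profiles.setdefault(user_id, {}).update(facts)


def build_prompt(profile, message):
    # n6: combine system persona, user profile, and the current message.
    facts = "; ".join(f"{k}={v}" for k, v in profile.items())
    return f"[persona: helpful assistant] [profile: {facts}] user: {message}"


def handle_request(store, user_id, message, new_facts=None):
    profile = store.read(user_id)            # n5: load profile
    prompt = build_prompt(profile, message)  # n6: build prompt with profile
    answer = f"echo({prompt})"               # n7/n10: stand-in for the LLM call
    if new_facts:                            # n8: optionally update profile
        store.write(user_id, new_facts)
    return answer                            # n9: return answer


store = ProfileStore()
handle_request(store, "u1", "hi", new_facts={"name": "Ada", "stack": "Python"})
print(handle_request(store, "u1", "help me refactor"))
```

The second request already sees the facts written during the first, which is the whole UX lift of the pattern.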

Related templates

Context Management - Session Memory

Architecture

Short-term context pattern where the channel sends new messages plus recent history. The agent runtime merges that with local session state, assembles the prompt, and persists the response back into history. Simple, but cost and latency grow with history length.
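A minimal Python sketch of the pattern, assuming an in-memory history store and a hypothetical stand-in for the model call:

```python
MAX_TURNS = 6  # size of the recent-history window sent with each request

sessions = {}  # session_id -> list of (role, text) turns


def chat(session_id, message):
    history = sessions.setdefault(session_id, [])
    # Prompt = recent history window plus the new message; note that cost
    # and latency grow with how much history is included.
    window = history[-MAX_TURNS:]
    prompt = "\n".join(f"{role}: {text}" for role, text in window)
    prompt += f"\nuser: {message}"
    reply = f"reply-to:{message}"  # stand-in for the real LLM call on `prompt`
    # Persist both turns back into session history for the next request.
    history.append(("user", message))
    history.append(("assistant", reply))
    return reply


chat("s1", "hello")
chat("s1", "and then?")
```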

Context Management - Rolling Summary Memory

Architecture

Compressed-history pattern that keeps full history until a threshold is hit, then summarizes the oldest chunk and replaces those detailed turns with a shorter summary message. Dramatically reduces prompt size in long-running chats while maintaining gist continuity.
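A sketch of the compression step, with a hypothetical `summarize()` standing in for an LLM summarizer:

```python
THRESHOLD = 8  # compress once history exceeds this many turns
CHUNK = 4      # how many of the oldest turns to fold into the summary


def summarize(turns):
    # Stand-in: a real implementation would ask an LLM for a short summary.
    return "summary of " + ", ".join(text for _, text in turns)


def append_turn(history, role, text):
    history.append((role, text))
    if len(history) > THRESHOLD:
        oldest, rest = history[:CHUNK], history[CHUNK:]
        # Replace the detailed oldest turns with one shorter summary message.
        history[:] = [("system", summarize(oldest))] + rest


history = []
for i in range(10):
    append_turn(history, "user", f"msg{i}")
```

After ten turns the prompt holds one summary message plus the recent detail, instead of all ten turns.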

Context Management - Semantic Memory

Architecture

Vector-based memory pattern where text is chunked, embedded, and stored in a vector database. On query, the question is embedded, a vector search is run, candidates are reranked, and the top results are injected into the prompt. This is where the agent feels like it remembers everything without hallucinating.
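A toy end-to-end sketch: a bag-of-words `embed()` and a list scan stand in for a real embedding model and vector database, and sorting by score stands in for a cross-encoder reranker:

```python
import math
from collections import Counter


def embed(text):
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


class VectorStore:
    """Chunks are embedded on write and scanned on query (toy vector DB)."""

    def __init__(self):
        self.items = []  # (vector, chunk)

    def add(self, chunk):
        self.items.append((embed(chunk), chunk))

    def search(self, query, k=2):
        query_vec = embed(query)
        # Score all candidates, rerank, return the top-k for prompt injection.
        ranked = sorted(self.items,
                        key=lambda item: cosine(query_vec, item[0]),
                        reverse=True)
        return [chunk for _, chunk in ranked[:k]]


store = VectorStore()
for chunk in ["the deploy runs on fridays",
              "alice owns the billing service",
              "use python 3.12 everywhere"]:
    store.add(chunk)
top = store.search("who owns billing")
```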

Context Management - Episodic Memory

Architecture

Learning-from-experience pattern where every task run becomes an episode with input, actions, and outcome. Before tackling a new task, the agent fetches similar episodes and uses them as hints. Over time, the agent feels like it's learning instead of repeating the same failed plans.
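A sketch under toy assumptions: word overlap stands in for a real similarity retriever, and episodes live in a plain list:

```python
episodes = []  # each episode: {"input": ..., "actions": [...], "outcome": ...}


def record_episode(task, actions, outcome):
    # Every task run becomes an episode with input, actions, and outcome.
    episodes.append({"input": task, "actions": actions, "outcome": outcome})


def similar_episodes(task, k=2):
    words = set(task.lower().split())

    def overlap(episode):
        return len(words & set(episode["input"].lower().split()))

    return sorted(episodes, key=overlap, reverse=True)[:k]


def plan(task):
    # Fetch similar past episodes and reuse a plan that previously succeeded,
    # instead of repeating a failed one.
    hints = [ep["actions"] for ep in similar_episodes(task)
             if ep["outcome"] == "success"]
    return hints[0] if hints else ["explore from scratch"]


record_episode("deploy api service", ["build", "test", "deploy"], "success")
record_episode("deploy web frontend", ["build", "deploy"], "failure")
```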

Context Management - Hybrid Retrieval Memory

Architecture

Multi-strategy retrieval pattern combining semantic search, exact/keyword search, and recency search in parallel. Results are merged and reranked into a single context set. Much higher recall, because the agent can find both fuzzy references and exact entities. Essential for comprehensive knowledge retrieval.
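A sketch with three toy retrievers (word overlap for "semantic", substring match for exact entities, timestamps for recency); merging by vote count stands in for a real reranker:

```python
from collections import Counter

docs = [  # (text, timestamp)
    ("ticket INC-421 closed by bob", 1),
    ("the cache layer uses redis", 2),
    ("INC-421 root cause was a bad config", 3),
]


def semantic_search(query, k=2):
    # Fuzzy retrieval: rank by word overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(words & set(d[0].lower().split())),
                    reverse=True)
    return [text for text, _ in ranked[:k]]


def keyword_search(entity):
    # Exact retrieval: finds literal entities like ticket IDs.
    return [text for text, _ in docs if entity in text]


def recency_search(k=2):
    return [text for text, _ in
            sorted(docs, key=lambda d: d[1], reverse=True)[:k]]


def hybrid_search(query, entity, k=3):
    # Run all three retrievers, then merge and rerank by how many of them
    # returned each document.
    votes = Counter(semantic_search(query)
                    + keyword_search(entity)
                    + recency_search())
    return [text for text, _ in votes.most_common(k)]


results = hybrid_search("root cause of the incident", "INC-421")
```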

Context Management - Shared Memory

Architecture

Multi-agent coordination pattern where an orchestrator breaks work into subtasks, specialist agents pull from and push to a shared state store, and the orchestrator composes the final answer from that shared state. Multi-agent setups feel coherent instead of each assistant having its own inconsistent memory.
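A sketch with two hypothetical specialist agents and a plain dict as the shared state store:

```python
shared_state = {}  # the shared store every agent reads from and writes to


def research_agent(topic):
    # Specialist 1 pushes its result into shared state, not private memory.
    shared_state["facts"] = f"facts about {topic}"


def writer_agent():
    # Specialist 2 pulls what the researcher pushed, then pushes a draft.
    shared_state["draft"] = f"report using {shared_state['facts']}"


def orchestrate(topic):
    # The orchestrator breaks the work into subtasks...
    research_agent(topic)
    writer_agent()
    # ...and composes the final answer from shared state.
    return shared_state["draft"]


answer = orchestrate("memory patterns")
```

Because both specialists work against the same store, the final answer stays consistent with everything each of them produced.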
