Implementing Persistent Memory in .NET AI Assistants
The article describes the author's experience building an AI chat assistant in Blazor that maintains user context across sessions. It introduces an open-source .NET library called BlazorMemory that extracts facts from conversations, stores them as vector embeddings, and injects relevant memories into future prompts.
Why it matters
The library addresses a common shortcoming of AI assistants: they forget user context between sessions. Giving them persistent memory can make AI-powered chat applications noticeably more useful.
Key Points
- BlazorMemory is a .NET library that provides a memory layer for AI assistants
- It extracts facts from conversations, stores them as vector embeddings, and injects relevant memories into future prompts
- The library solves the problem of AI assistants forgetting user context between sessions
- It uses a two-step pipeline to consolidate similar memories and avoid duplication
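The consolidation step in the last point can be sketched in a language-agnostic way. This is not BlazorMemory's actual API; the function names, the similarity threshold, and the toy embeddings below are all illustrative. The idea is to drop a new fact if its embedding is nearly identical to one already kept:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def consolidate(memories, threshold=0.9):
    """Merge near-duplicate facts: keep only the first of any pair whose
    embeddings are more similar than `threshold` (illustrative sketch)."""
    kept = []
    for fact, emb in memories:
        if all(cosine(emb, e) < threshold for _, e in kept):
            kept.append((fact, emb))
    return [fact for fact, _ in kept]

# Toy 3-dimensional embeddings; real ones come from an embedding model.
memories = [
    ("User's name is Ana",     [0.90, 0.10, 0.00]),
    ("The user is called Ana", [0.88, 0.12, 0.01]),  # near-duplicate, dropped
    ("User prefers dark mode", [0.10, 0.90, 0.20]),
]
print(consolidate(memories))
# → ["User's name is Ana", 'User prefers dark mode']
```

A fixed threshold is the simplest policy; a library like the one described could instead ask the LLM to merge or rewrite the overlapping facts.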
Details
The author was building an AI chat assistant in Blazor, but found that each new conversation started from scratch, with the assistant forgetting previous context about the user. To address this, the author developed BlazorMemory, an open-source .NET library that sits between the chat logic and the language model. BlazorMemory extracts facts from conversations using an LLM, stores them as vector embeddings, and injects relevant memories into future prompts. The key challenge was making the stored memories useful, which the author solved with a two-step pipeline: first, extracting discrete facts from the conversation, and then consolidating similar memories to avoid duplication. The library works in Blazor WASM with zero backend, using IndexedDB for storage, and also supports server-side use with EF Core.
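The retrieve-and-inject step described above can be sketched as follows. BlazorMemory itself is C# and its real API is not shown in the article, so the names and prompt format here are hypothetical; the underlying idea is to rank stored facts by similarity to the new message's embedding and prepend the top matches to the prompt:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def relevant_memories(query_emb, store, top_k=2):
    """Return the top_k stored facts most similar to the query embedding."""
    ranked = sorted(store, key=lambda m: cosine(query_emb, m[1]), reverse=True)
    return [fact for fact, _ in ranked[:top_k]]

def build_prompt(user_message, facts):
    """Prepend retrieved facts as context (hypothetical prompt format)."""
    context = "\n".join(f"- {f}" for f in facts)
    return f"Known facts about the user:\n{context}\n\nUser: {user_message}"

# Toy store of (fact, embedding); real embeddings come from an embedding model.
store = [
    ("User's name is Ana",     [0.9, 0.1, 0.0]),
    ("User prefers dark mode", [0.1, 0.9, 0.2]),
    ("User lives in Lisbon",   [0.2, 0.1, 0.9]),
]
query_emb = [0.15, 0.85, 0.10]  # stands in for embedding("What theme do I use?")
print(build_prompt("What theme do I use?",
                   relevant_memories(query_emb, store, top_k=1)))
```

In a Blazor WASM deployment the `store` lookup would run against IndexedDB, and server-side against an EF Core-backed table, but the ranking logic is the same.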