Using Vault as a Knowledge Engine
Retrieval-Augmented Generation (RAG) is the technique of retrieving relevant knowledge and injecting it into a prompt before it is sent to the LLM. The Vault serves as your RAG knowledge base: every document you upload can be semantically searched and injected.
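The pattern itself fits in a few lines. This is a toy sketch, not the Vault's actual implementation: the vault is a plain dict, and `search_vault` uses naive keyword overlap where the real system uses semantic search.

```python
# Minimal sketch of the RAG pattern: retrieve relevant text, then
# prepend it to the prompt before calling the model. The dict and
# keyword-overlap search are stand-ins for real vector retrieval.
VAULT = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "shipping-faq": "Standard shipping takes 3-5 business days.",
}

def search_vault(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(VAULT.values(), key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(question: str) -> str:
    context = search_vault(question)
    return f"Knowledge Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do refunds take?")
```

The question about refunds retrieves the refund policy, so the model answers from your document rather than from its training data.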
Upload Your Knowledge
Upload company documents, FAQs, product manuals, and policies. Each document is automatically chunked and embedded into vectors.
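The chunk-and-embed step can be pictured like this. The word-count "embedding" below is a deliberate toy standing in for the learned embedding model the real pipeline uses; the chunk size is illustrative.

```python
# Illustrative chunk-and-embed pipeline. Real systems produce dense
# vectors from an embedding model; a bag-of-words Counter stands in
# here so the shape of the pipeline is visible.
from collections import Counter

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split text into segments of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(segment: str) -> Counter:
    """Toy 'embedding': a word-count vector for the segment."""
    return Counter(segment.lower().split())

doc = " ".join(f"w{i}" for i in range(120))   # a 120-word document
chunks = chunk(doc)                            # three segments of <= 50 words
vectors = [embed(c) for c in chunks]           # one vector per segment
```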
Reference in Prompts
When creating a prompt in the Architect, click 'Inject from Vault' and search for relevant documents; the system pulls the most relevant chunks into your prompt.
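"Most relevant chunks" typically means nearest neighbours by cosine similarity between the query embedding and the chunk embeddings. A minimal version, with hand-made 3-d vectors standing in for real embeddings:

```python
# Top-k retrieval by cosine similarity (a common relevance measure
# for embeddings; the vectors below are hand-made for illustration).
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Return indices of the k chunks most similar to the query."""
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

chunk_vecs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
query_vec = [1.0, 0.05, 0.0]
best = top_k(query_vec, chunk_vecs, k=2)   # the two chunks nearest the query
```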
Automatic Context
The injected content appears in a special 'Knowledge Context' section of your Master Prompt, clearly delineated from instructions.
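The delineation step can be as simple as wrapping retrieved chunks in a labelled block. The markers below are illustrative, not the platform's exact format:

```python
def assemble_master_prompt(instructions: str, chunks: list[str]) -> str:
    """Wrap retrieved chunks in a labelled Knowledge Context block so
    the model can tell injected context apart from instructions.
    (The marker strings are illustrative.)"""
    context = "\n\n".join(chunks)
    return (
        "### Knowledge Context ###\n"
        f"{context}\n"
        "### End Knowledge Context ###\n\n"
        f"{instructions}"
    )

prompt = assemble_master_prompt(
    "Answer using only the context above.",
    ["Refunds are issued within 14 days.", "Shipping takes 3-5 days."],
)
```

Keeping context and instructions visibly separate reduces the chance of the model treating retrieved text as a command.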
Dynamic Updates
When you update a Vault document, all prompts referencing it automatically get the updated content on next execution.
The system automatically chunks documents into ~512-token segments, which works well for technical documentation, where a segment typically captures a complete topic. For legal documents, where relevant context often spans multiple paragraphs, structure your documents with clear section headers; the chunker uses headers as natural break points.
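The header-aware behaviour can be sketched as a two-stage split: break on headers first, then enforce the token budget inside each section. Word count stands in for token count here, and the markdown-style `#` header convention is an assumption for the example:

```python
# Two-stage chunking sketch: split on headers, then cap each section
# at a word budget (standing in for the ~512-token limit).
import re

def header_chunks(doc: str, budget: int = 512) -> list[str]:
    """Split on markdown-style headers, keeping each header with its
    body, then cap every section at `budget` words."""
    sections = re.split(r"(?m)^(?=#)", doc)   # lookahead keeps the header line
    out = []
    for sec in sections:
        words = sec.split()
        for i in range(0, len(words), budget):
            out.append(" ".join(words[i:i + budget]))
    return out

doc = "# Intro\nShort intro.\n# Terms\n" + " ".join(["clause"] * 600)
chunks = header_chunks(doc)
```

A short section stays whole, while an oversized section is split at the budget rather than bleeding into its neighbour, which is why clear headers help the chunker respect your document's structure.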