## Adding an embeddings model

Click Add Embeddings Model and fill in the following fields:

| Field | Description |
|---|---|
| Name | Display name used when selecting embeddings in agent creation |
| Class Path | The LangChain class the Runtime uses to call the embeddings service |
| Model URL | The endpoint URL for the embeddings service |
### Common class paths
| Provider | Class Path |
|---|---|
| Ollama (local) | langchain_community.embeddings.OllamaEmbeddings |
| OpenAI | langchain_openai.OpenAIEmbeddings |
| Cohere | langchain_cohere.CohereEmbeddings |
| HuggingFace | langchain_community.embeddings.HuggingFaceEmbeddings |
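The Class Path is a dotted Python path, which suggests the Runtime imports the embeddings class dynamically at call time. A minimal sketch of that resolution (the helper name is illustrative, and a stdlib class stands in for the LangChain classes, which may not be installed):

```python
from importlib import import_module

def resolve_class_path(class_path: str):
    """Split a dotted class path into module and class name, then import it."""
    module_path, _, class_name = class_path.rpartition(".")
    return getattr(import_module(module_path), class_name)

# Stand-in for e.g. "langchain_community.embeddings.OllamaEmbeddings"
cls = resolve_class_path("collections.OrderedDict")
```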
### Ollama embeddings (local)
If you started the stack with `--profile with-local-models` and included an embedding model in `OLLAMA_MODELS`, it is already registered: the `nomic-embed-text` model is registered automatically as an embeddings model, and the others are registered as chat models.
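As an illustration only (the exact variable format and compose file in your setup may differ), the startup described above could look like:

```shell
# .env — include an embedding model alongside chat models (model names illustrative)
OLLAMA_MODELS=llama3,nomic-embed-text

# Start the stack with local models enabled
docker compose --profile with-local-models up -d
```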
For manually added Ollama embeddings:

- Class Path: langchain_community.embeddings.OllamaEmbeddings
- Model URL: http://ollama:11434 (inside the Docker network) or http://localhost:11434 (from the host)
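To check that the Model URL is reachable, you can call Ollama's `/api/embeddings` endpoint directly. A minimal standard-library sketch (the function names are illustrative; actually sending the request requires a running Ollama server at the given URL):

```python
import json
from urllib import request

def build_embeddings_request(text, model="nomic-embed-text",
                             base_url="http://localhost:11434"):
    """Build a POST request for Ollama's /api/embeddings endpoint."""
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    return request.Request(
        f"{base_url}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def embed(text, **kwargs):
    """Send the request and return the embedding vector (needs a live server)."""
    with request.urlopen(build_embeddings_request(text, **kwargs)) as resp:
        return json.loads(resp.read())["embedding"]
```

From inside the Docker network, pass `base_url="http://ollama:11434"` instead of the host default.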
### Required for Knowledge Base

An agent cannot use the Knowledge Base feature without an embeddings model configured here and selected for that agent. If the Knowledge Base section in agent creation shows no embeddings model selector, return here and add one first. For long-term memory, configure Memory instead.

## Next steps
- Knowledge Base: Configure document retrieval (RAG) for an agent.
- Memory: Configure short-term or long-term conversation memory (Neuralyzer or CoD Summarizer).