Clarification

This commit is contained in:
Nicolas Mowen 2026-03-08 07:26:39 -06:00
parent ced2e31857
commit 6a77ee6db6


@@ -76,7 +76,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
 :::
-### GenAI Provider (llama.cpp)
+### GenAI Provider
 Frigate can use a GenAI provider for semantic search embeddings when that provider has the `embeddings` role. Currently, only **llama.cpp** supports multimodal embeddings (both text and images).
@@ -102,7 +102,7 @@ semantic_search:
     model: default
 ```
-The llama.cpp server must be started with `--embeddings` for the embeddings API, and `--mmproj <mmproj.gguf>` when using image embeddings. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
+The llama.cpp server must be started with `--embeddings` for the embeddings API, and a multi-modal embeddings model. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
 :::note
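For reference, the server setup the changed paragraph describes can be sketched as a launch command. This is a hedged example, not part of the commit: the model and projector file paths are placeholders, and the exact flags to use are defined by the llama.cpp server documentation linked above.

```shell
# Sketch: start a llama.cpp server that exposes the embeddings API.
# Paths below are hypothetical; substitute your own GGUF files.
llama-server \
  -m /models/embedding-model.gguf \
  --mmproj /models/mmproj.gguf \
  --embeddings \
  --port 8080
```

The `--embeddings` flag enables the embeddings endpoint, and `--mmproj` supplies the multimodal projector needed for image embeddings, matching the pre-change wording of the paragraph in the diff.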