Clarification

commit 6a77ee6db6 (parent ced2e31857)
@@ -76,7 +76,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
 
 :::
 
-### GenAI Provider (llama.cpp)
+### GenAI Provider
 
 Frigate can use a GenAI provider for semantic search embeddings when that provider has the `embeddings` role. Currently, only **llama.cpp** supports multimodal embeddings (both text and images).
 
@@ -102,7 +102,7 @@ semantic_search:
   model: default
 ```
 
-The llama.cpp server must be started with `--embeddings` for the embeddings API, and `--mmproj <mmproj.gguf>` when using image embeddings. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
+The llama.cpp server must be started with `--embeddings` for the embeddings API, and a multi-modal embeddings model. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
 
 :::note
 
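For context on the changed sentence, a minimal llama.cpp server start-up consistent with both wordings might look like the sketch below. Only the `--embeddings` and `--mmproj` flags come from the text above; the model and projector file paths, host, and port are illustrative placeholders, not values from the Frigate docs.

```bash
# Sketch: start llama.cpp's server with the embeddings API enabled.
# --embeddings exposes the embeddings API; --mmproj loads the multimodal
# projector needed for image embeddings. File paths, host, and port are
# placeholders; substitute a multimodal embeddings model of your choice.
llama-server \
  -m /models/embedding-model.gguf \
  --mmproj /models/mmproj.gguf \
  --embeddings \
  --host 0.0.0.0 \
  --port 8080
```

Once the server is running, Frigate's `semantic_search` configuration (as in the hunk above) can use it for embeddings; the exact endpoint behavior is covered in the linked llama.cpp server README.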