Add specific information about GPUs that are supported for semantic search

This commit is contained in:
Nicolas Mowen 2024-10-30 06:26:48 -06:00 committed by GitHub
parent d10fea6012
commit a921aa5859


@@ -54,10 +54,23 @@ semantic_search:
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
:::info
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be automatically detected and used.
**AMD**

- ROCm will automatically be detected and used for semantic search in the `-rocm` Frigate image.

**Intel**

- OpenVINO will automatically be detected and used for semantic search in the default Frigate image.

**Nvidia**

- Nvidia GPUs will automatically be detected and used for semantic search in the `-tensorrt` Frigate image.
:::
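
For example, selecting the build that matches your GPU comes down to the image tag in your Compose file. The sketch below is illustrative (the service name and `stable-tensorrt` tag are examples, not a definitive setup):

```yaml
# Illustrative docker-compose.yml excerpt: the image tag suffix selects
# the hardware support baked into the Frigate build:
#   -rocm      -> AMD GPUs (ROCm)
#   (default)  -> Intel GPUs (OpenVINO)
#   -tensorrt  -> Nvidia GPUs
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```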
```yaml
semantic_search:
  enabled: True