From 5540db4e1e3c3dde2fe90bca75baf406d85a1da7 Mon Sep 17 00:00:00 2001
From: Nicolas Mowen
Date: Wed, 9 Oct 2024 16:06:50 -0600
Subject: [PATCH] Add tip to docs about GPU acceleration

---
 docs/docs/configuration/semantic_search.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md
index 8e9c4abc2..7f0368596 100644
--- a/docs/docs/configuration/semantic_search.md
+++ b/docs/docs/configuration/semantic_search.md
@@ -29,6 +29,12 @@ If you are enabling the Search feature for the first time, be advised that Friga
 ### Jina AI CLIP
 
+:::tip
+
+The CLIP models are downloaded in ONNX format, which means they will be accelerated using GPU hardware when available, depending on the Docker build that is used. See [the object detector docs](../configuration/object_detectors.md) for more information.
+
+:::
+
 The vision model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
 
 The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.