diff --git a/docs/docs/configuration/hardware_acceleration_enrichments.md b/docs/docs/configuration/hardware_acceleration_enrichments.md index fac2ffa61..fc246df98 100644 --- a/docs/docs/configuration/hardware_acceleration_enrichments.md +++ b/docs/docs/configuration/hardware_acceleration_enrichments.md @@ -12,23 +12,20 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation. - **AMD** - - ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active. - **Intel** - - OpenVINO will automatically be detected and used for enrichments in the default Frigate image. - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available. - **Nvidia** - - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image. - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image. - **RockChip** - RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image. -Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. 
For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments. +Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image to accelerate enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and an Intel iGPU for enrichments. :::note diff --git a/docs/docs/configuration/index.md b/docs/docs/configuration/index.md index b1fa876f9..2144ef7ea 100644 --- a/docs/docs/configuration/index.md +++ b/docs/docs/configuration/index.md @@ -29,12 +29,12 @@ cameras: When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/` in the host, where `` is specific to the variant of the Frigate Add-on you are running. 
-| Add-on Variant | Configuration directory | -| -------------------------- | -------------------------------------------- | -| Frigate | `/addon_configs/ccab4aaf_frigate` | -| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` | -| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` | -| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` | +| Add-on Variant | Configuration directory | +| -------------------------- | ----------------------------------------- | +| Frigate | `/addon_configs/ccab4aaf_frigate` | +| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` | +| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` | +| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` | **Whenever you see `/config` in the documentation, it refers to this directory.** @@ -109,15 +109,16 @@ detectors: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True @@ -165,15 +166,16 @@ detectors: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True @@ -231,15 +233,16 @@ model: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 8a8cccd02..7d24196dc 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -34,7 +34,7 @@ Frigate supports multiple different detectors that work on different types of ha **Nvidia GPU** -- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured. 
+- [ONNX](#onnx): Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured. **Nvidia Jetson** @@ -70,7 +70,7 @@ This does not affect using hardware for accelerating other tasks such as [semant # Officially Supported Detectors -Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras. +Frigate provides a number of builtin detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras. ## Edge TPU Detector @@ -162,7 +162,13 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` #### YOLOv9 -YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes. +YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. See the [instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral. 
+ +:::tip + +**Frigate+ Users:** Follow the [instructions](../integrations/plus#use-models) to set a model ID in your config file. + +:::
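For reference, a minimal config using the Coral-compatible YOLOv9 model might look like the sketch below. The detector name `coral` and the `/config` mount paths are illustrative assumptions; only `model.path` for the bind-mounted model and the 17-class labelmap come from the instructions referenced above, so check the Edge TPU detector documentation for the exact keys your Frigate version expects.

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb

model:
  # path to the bind-mounted YOLOv9 tflite model (path is an assumption)
  path: /config/yolov9-s-relu6-best_320_int8_edgetpu.tflite
  # the linked model requires the 17-class COCO labelmap
  labelmap_path: /config/labels-coco17.txt
  width: 320
  height: 320
```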
YOLOv9 Setup & Config @@ -659,11 +665,9 @@ ONNX is an open format for building machine learning models, Frigate supports ru If the correct build is used for your GPU then the GPU will be detected and used automatically. - **AMD** - - ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image. - **Intel** - - OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image. - **Nvidia** @@ -1554,11 +1558,11 @@ RF-DETR can be exported as ONNX by running the command below. You can copy and p ```sh docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF' -FROM python:3.11 AS build +FROM python:3.12 AS build RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/* -COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ +COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/ WORKDIR /rfdetr -RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript +RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 transformers==4.57.6 onnxscript ARG MODEL_SIZE RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)" FROM scratch @@ -1596,19 +1600,23 @@ cd tensorrt_demos/yolo python3 yolo_to_onnx.py -m yolov7-320 ``` -#### YOLOv9 +#### YOLOv9 for Google Coral Support + +[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes. + +#### YOLOv9 for Other Detectors YOLOv9 model can be exported as ONNX using the command below. 
You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`). ```sh docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF' FROM python:3.11 AS build -RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/* -COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ +RUN apt-get update && apt-get install --no-install-recommends -y cmake libgl1 && rm -rf /var/lib/apt/lists/* +COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/ WORKDIR /yolov9 ADD https://github.com/WongKinYiu/yolov9.git . RUN uv pip install --system -r requirements.txt -RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript +RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier==0.4.* onnxscript ARG MODEL_SIZE ARG IMG_SIZE ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt diff --git a/docs/docs/configuration/snapshots.md b/docs/docs/configuration/snapshots.md index 815e301ba..01c034a04 100644 --- a/docs/docs/configuration/snapshots.md +++ b/docs/docs/configuration/snapshots.md @@ -9,4 +9,25 @@ Snapshots are accessible in the UI in the Explore pane. 
This allows for quick submission to the Frigate+ service. To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones) -Snapshots sent via MQTT are configured in the [config file](https://docs.frigate.video/configuration/) under `cameras -> your_camera -> mqtt` +Snapshots sent via MQTT are configured in the [config file](/configuration) under `cameras -> your_camera -> mqtt` + +## Frame Selection + +Frigate does not save every frame; it picks a single "best" frame for each tracked object and uses it for both the snapshot and clean copy. As the object is tracked across frames, Frigate continuously evaluates whether the current frame is better than the previous best based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. The snapshot is written to disk once tracking ends using whichever frame was determined to be the best. + +MQTT snapshots are published more frequently: each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`. + +## Clean Copy + +Frigate can produce up to two snapshot files per event, each used in different places: + +| Version | File | Annotations | Used by | + | --- | --- | --- | --- | +| **Regular snapshot** | `-.jpg` | Respects your `timestamp`, `bounding_box`, `crop`, and `height` settings | API (`/api/events//snapshot.jpg`), MQTT (`/
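The frame-selection behavior described above can be sketched as a simple comparison loop. The class, function names, and exact tie-breaking order below are illustrative assumptions for explanation, not Frigate's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameCandidate:
    score: float          # detection confidence for this frame
    area: int             # object bounding-box area in pixels
    has_attribute: bool   # a key attribute (face, license plate) was found
    touches_edge: bool    # object touches the frame border

def is_better(candidate: FrameCandidate, best: Optional[FrameCandidate]) -> bool:
    """Return True if `candidate` should replace the current best frame."""
    if best is None:
        return True
    # Frames where the object touches the edge are deprioritized.
    if candidate.touches_edge and not best.touches_edge:
        return False
    if best.touches_edge and not candidate.touches_edge:
        return True
    # Key attributes win over plain detections.
    if candidate.has_attribute != best.has_attribute:
        return candidate.has_attribute
    # Otherwise prefer higher confidence, then larger objects.
    return (candidate.score, candidate.area) > (best.score, best.area)

def select_best(frames: list) -> FrameCandidate:
    """Track the running best frame, as Frigate does while an object is tracked."""
    best = None
    for frame in frames:
        if is_better(frame, best):
            best = frame
    return best
```

In this sketch a lower-confidence frame can still win if it contains a key attribute or keeps the object fully in frame, matching the prioritization described in the Frame Selection section.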