From fdbeba113a47537b0a84b0b913fe10d044c95627 Mon Sep 17 00:00:00 2001
From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Date: Thu, 1 May 2025 07:30:20 -0500
Subject: [PATCH] docs updates

---
 docs/docs/configuration/face_recognition.md          |  3 ++-
 docs/docs/configuration/license_plate_recognition.md | 11 ++++++-----
 docs/docs/configuration/object_detectors.md          |  4 ++--
 docs/docs/configuration/semantic_search.md           |  4 ++--
 4 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/docs/docs/configuration/face_recognition.md b/docs/docs/configuration/face_recognition.md
index 4ca57f9e13..80aac0fe0e 100644
--- a/docs/docs/configuration/face_recognition.md
+++ b/docs/docs/configuration/face_recognition.md
@@ -26,7 +26,7 @@ In both cases, a lightweight face landmark detection model is also used to align
 The `small` model is optimized for efficiency and runs on the CPU, most CPUs should run the model efficiently.
 
-The `large` model is optimized for accuracy, an integrated or discrete GPU is highly recommended.
+The `large` model is optimized for accuracy, an integrated or discrete GPU is highly recommended. Intel users should use the default Docker image to run on an iGPU with OpenVINO and Nvidia users should use the `-tensorrt` Docker image to run on dedicated Nvidia GPUs.
 
 Face recognition does not run on a Google Coral.
 
 ## Configuration
@@ -133,6 +133,7 @@ No, using another face recognition service will interfere with Frigate's built i
 ### Does face recognition run on the recording stream?
 
 Face recognition does not run on the recording stream, this would be suboptimal for many reasons:
+
 1. The latency of accessing the recordings means the notifications would not include the names of recognized people because recognition would not complete until after.
 2. The embedding models used run on a set image size, so larger images will be scaled down to match this anyway.
 3. Motion clarity is much more important than extra pixels, over-compression and motion blur are much more detrimental to results than resolution.
diff --git a/docs/docs/configuration/license_plate_recognition.md b/docs/docs/configuration/license_plate_recognition.md
index c5565fed5a..26522f1068 100644
--- a/docs/docs/configuration/license_plate_recognition.md
+++ b/docs/docs/configuration/license_plate_recognition.md
@@ -7,13 +7,14 @@ Frigate can recognize license plates on vehicles and automatically add the detec
 
 LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
 
-When a plate is recognized, the recognized name is:
+When a plate is recognized, the details are:
 
 - Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
 - Viewable in the Review Item Details pane in Review (sub labels).
 - Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
 - Filterable through the More Filters menu in Explore.
 - Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` tracked object.
+- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
 
 ## Model Requirements
 
@@ -68,10 +69,10 @@ Fine-tune the LPR feature using these optional parameters at the global level of
   - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
 - **`device`**: Device to use to run license plate recognition models.
   - Default: `CPU`
-  - This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model.
+  - This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model. Intel users should use the default Docker image to run on an iGPU with OpenVINO and Nvidia users should use the `-tensorrt` Docker image to run on dedicated Nvidia GPUs.
 - **`model_size`**: The size of the model used to detect text on plates.
   - Default: `small`
-  - This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large does not improve _text recognition_, but it may improve _text detection_.
+  - This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large model does not improve _text recognition_, but it may improve _text detection_.
 
 ### Recognition
 
@@ -184,7 +185,7 @@ cameras:
     ffmpeg: ... # add your streams
     detect:
       enabled: True
-      fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 15 is unnecessary and is not recommended.
+      fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 10 is unnecessary and is not recommended.
       min_initialized: 2
      width: 1920
      height: 1080
@@ -228,7 +229,7 @@ An example configuration for a dedicated LPR camera using the secondary pipeline
 # LPR global configuration
 lpr:
   enabled: True
-  device: CPU # can also be GPU if available
+  device: CPU # can also be GPU if available and correct Docker image is used
   detection_threshold: 0.7 # change if necessary
 
 # Dedicated LPR camera configuration
diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md
index c61e74c20b..9193b2925e 100644
--- a/docs/docs/configuration/object_detectors.md
+++ b/docs/docs/configuration/object_detectors.md
@@ -610,7 +610,7 @@ If the correct build is used for your GPU then the GPU will be detected and used
 - **Nvidia**
 
   - Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` Frigate image.
-  - Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp(4/5)` Frigate image.
+  - Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp6` Frigate image.
 
 :::
 
@@ -659,7 +659,7 @@ YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) model
 
 :::tip
 
-The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate.
+The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate. 
 
 :::
diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md
index 07e2cbfb25..74363c0b3c 100644
--- a/docs/docs/configuration/semantic_search.md
+++ b/docs/docs/configuration/semantic_search.md
@@ -90,7 +90,7 @@ semantic_search:
 
 If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
 
-**NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
+**NOTE:** Object detection and Semantic Search (as well as Frigate's other enrichments) are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
 
 - **AMD**
 
@@ -102,7 +102,7 @@ If the correct build is used for your GPU and the `large` model is configured, t
 
 - **Nvidia**
 
   - Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
-  - Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
+  - Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp6` Frigate image.
 
 :::
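
---

The options this patch documents might be combined in a single Frigate config roughly as sketched below. This is an illustrative fragment only, not part of the patch: the camera name is hypothetical, the values are examples rather than recommendations, and the option keys are taken from the docs changed above.

```yaml
# Hypothetical sketch combining the options described in the patched docs.
semantic_search:
  enabled: True
  model_size: large # GPU recommended; pick the Docker image matching your GPU

face_recognition:
  enabled: True
  model_size: large # Intel: default image (OpenVINO iGPU); Nvidia: -tensorrt image

lpr:
  enabled: True
  device: CPU # can also be GPU if available and the correct Docker image is used
  model_size: small # large improves text detection (e.g. multi-line plates), not recognition

cameras:
  driveway: # hypothetical camera name
    ffmpeg: ... # add your streams
    detect:
      enabled: True
      fps: 5 # increase to 10 if vehicles move quickly; higher is unnecessary
      width: 1920
      height: 1080
```

Note that per the semantic_search.md change above, enabling a GPU for these enrichments is a Docker-image choice independent of the object detector configuration.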