From 8cdaef307a4a0359ab26976ca79fd916823a3adb Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Sun, 28 Sep 2025 10:31:59 -0600 Subject: [PATCH 01/17] Update face rec docs (#20256) * Update face rec docs * clarify Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> --------- Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> --- docs/docs/configuration/face_recognition.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/docs/configuration/face_recognition.md b/docs/docs/configuration/face_recognition.md index 3026615d4..d72b66639 100644 --- a/docs/docs/configuration/face_recognition.md +++ b/docs/docs/configuration/face_recognition.md @@ -158,6 +158,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod Accuracy is definitely a going to be improved with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera that a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest that face recognition will realistically work. +Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR). + ### Why can't I bulk upload photos? It is important to methodically add photos to the library, bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance. From b94ebda9e51193948466fe218b0ce268f3ed74e1 Mon Sep 17 00:00:00 2001 From: AmirHossein_Omidi <151873319+AmirHoseinOmidi@users.noreply.github.com> Date: Wed, 1 Oct 2025 16:48:47 +0330 Subject: [PATCH 02/17] Update license_plate_recognition.md (#20306) * Update license_plate_recognition.md Add PaddleOCR description for license plate recognition in Frigate docs * Update docs/docs/configuration/license_plate_recognition.md Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> * Update docs/docs/configuration/license_plate_recognition.md Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> --------- Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> --- docs/docs/configuration/license_plate_recognition.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/docs/configuration/license_plate_recognition.md b/docs/docs/configuration/license_plate_recognition.md index 933fd72d3..36e8b7dad 100644 --- a/docs/docs/configuration/license_plate_recognition.md +++ b/docs/docs/configuration/license_plate_recognition.md @@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle` ## Minimum System Requirements -License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required. - +License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required. 
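For reference, enabling the feature is a small configuration change. The sketch below is illustrative only: `lpr.enabled` is the documented switch, while the commented `device` line is an assumption borrowed from Frigate's other enrichment features and should be checked against the full configuration reference for your version.

```yaml
# Minimal sketch: enable license plate recognition.
lpr:
  enabled: True
  # Assumption: a device option similar to other enrichments may be available
  # to pin the models to a GPU; verify the exact key before relying on it.
  # device: GPU
```

The Configuration section that follows covers the supported options in detail.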
## Configuration License plate recognition is disabled by default. Enable it in your config file: From 20e5e3bdc067634a70a67d3a14a4237c627056d9 Mon Sep 17 00:00:00 2001 From: mpking828 Date: Fri, 3 Oct 2025 10:49:51 -0400 Subject: [PATCH 03/17] Update camera_specific.md to fix 2 way audio example for Reolink (#20343) Update camera_specific.md to fix 2 way audio example for Reolink --- docs/docs/configuration/camera_specific.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docs/configuration/camera_specific.md b/docs/docs/configuration/camera_specific.md index 3a3809605..ca31604c8 100644 --- a/docs/docs/configuration/camera_specific.md +++ b/docs/docs/configuration/camera_specific.md @@ -213,7 +213,7 @@ go2rtc: streams: your_reolink_doorbell: - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus" - - rtsp://reolink_ip/Preview_01_sub + - rtsp://username:password@reolink_ip/Preview_01_sub your_reolink_doorbell_sub: - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password" ``` From 59102794e879d6dfbb654547dc9088df0fbf7cc9 Mon Sep 17 00:00:00 2001 From: Sean Kelly Date: Sat, 11 Oct 2025 09:43:41 -0700 Subject: [PATCH 04/17] Add keyboard shortcut for switching to previous label (#20426) * Add keyboard shortcut for switching to previous label * Update docs/docs/plus/annotating.md Co-authored-by: Blake Blackshear --------- Co-authored-by: Blake Blackshear --- docs/docs/plus/annotating.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/docs/plus/annotating.md b/docs/docs/plus/annotating.md index 102e4a489..dc8e571be 100644 --- a/docs/docs/plus/annotating.md +++ b/docs/docs/plus/annotating.md @@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso | `w` | Add box | | `d` | Toggle difficult | | `s` | Switch to the next label | +| `Shift + s` | Switch to the previous label | | `tab` | Select next largest box | | `del` | Delete current box | | `esc` | Deselect/Cancel | From 925bf78811d4d35c77fb90934eec94cd176429c2 Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Sun, 12 Oct 2025 06:28:08 -0600 Subject: [PATCH 05/17] Update review topic description (#20445) --- docs/docs/integrations/mqtt.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/docs/docs/integrations/mqtt.md b/docs/docs/integrations/mqtt.md index 3ad435b81..78b4b849c 100644 --- a/docs/docs/integrations/mqtt.md +++ b/docs/docs/integrations/mqtt.md @@ -161,7 +161,14 @@ Message published for updates to tracked object metadata, for example: ### `frigate/reviews` -Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish a, `update` message with the same id. When the review activity has ended a final `end` message is published. +Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. + +An `update` with the same ID will be published when: +- The severity changes from `detection` to `alert` +- Additional objects are detected +- An object is recognized via face, lpr, etc. + +When the review activity has ended a final `end` message is published. 
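For consumers of this topic, the `type` field (`new`, `update`, `end`) is usually the first thing to branch on. The Home Assistant automation below is a hedged sketch: it assumes the long-standing MQTT trigger and template condition syntax, and the notify service name is a placeholder to replace with your own. The full payload structure is shown in the example that follows.

```yaml
# Sketch: notify when an alert-severity review item ends.
automation:
  - alias: "Frigate alert review ended"
    trigger:
      - platform: mqtt
        topic: frigate/reviews
    condition:
      - condition: template
        value_template: >-
          {{ trigger.payload_json.type == 'end'
             and trigger.payload_json.after.severity == 'alert' }}
    action:
      - service: notify.mobile_app_phone # placeholder notify target
        data:
          message: "Alert ended on {{ trigger.payload_json.after.camera }}"
```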
```json { From 2a271c0f5ec6c0aeafd89026b47039beea656bf1 Mon Sep 17 00:00:00 2001 From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> Date: Mon, 13 Oct 2025 11:00:21 -0500 Subject: [PATCH 06/17] Update GenAI docs for Gemini model deprecation (#20462) --- docs/docs/configuration/genai.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/docs/configuration/genai.md b/docs/docs/configuration/genai.md index f76c075b7..9279e459d 100644 --- a/docs/docs/configuration/genai.md +++ b/docs/docs/configuration/genai.md @@ -18,10 +18,10 @@ genai: enabled: True provider: gemini api_key: "{FRIGATE_GEMINI_API_KEY}" - model: gemini-1.5-flash + model: gemini-2.0-flash cameras: - front_camera: + front_camera: genai: enabled: True # <- enable GenAI for your front camera use_snapshot: True @@ -30,7 +30,7 @@ cameras: required_zones: - steps indoor_camera: - genai: + genai: enabled: False # <- disable GenAI for your indoor camera ``` @@ -78,7 +78,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google ### Supported Models -You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`. +You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). ### Get API Key @@ -96,7 +96,7 @@ genai: enabled: True provider: gemini api_key: "{FRIGATE_GEMINI_API_KEY}" - model: gemini-1.5-flash + model: gemini-2.0-flash ``` :::note @@ -202,7 +202,7 @@ genai: car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company." ``` -Prompts can also be overriden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. +Prompts can also be overriden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. 
```yaml cameras: From e0a8445bac5d95c8f355c4dd1b3b7a03e351665d Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Tue, 14 Oct 2025 07:32:44 -0600 Subject: [PATCH 07/17] Improve rf-detr export (#20485) --- docs/docs/configuration/object_detectors.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 1e68d6ff4..6d5ea07c8 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -1012,9 +1012,9 @@ FROM python:3.11 AS build RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/* COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ WORKDIR /rfdetr -RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon +RUN uv pip install --system rfdetr[onnxexport] ARG MODEL_SIZE -RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()" +RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)" FROM scratch ARG MODEL_SIZE COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx From 4d582062fba09f69fb40658fbe02b4f527dc46af Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Tue, 14 Oct 2025 15:29:20 -0600 Subject: [PATCH 08/17] Ensure that a user must provide an image in an expected location (#20491) * Ensure that a user must provide an image in an expected location * Use const --- frigate/api/export.py | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/frigate/api/export.py b/frigate/api/export.py index 160434c68..44ec05c15 100644 --- a/frigate/api/export.py +++ b/frigate/api/export.py @@ -8,6 +8,7 @@ from pathlib import Path import psutil from fastapi import APIRouter, Depends, Request from fastapi.responses import JSONResponse +from pathvalidate import sanitize_filepath from peewee import DoesNotExist from playhouse.shortcuts import model_to_dict @@ -15,7 +16,7 @@ from frigate.api.auth import require_role from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody from frigate.api.defs.request.export_rename_body import ExportRenameBody from frigate.api.defs.tags import Tags -from frigate.const import EXPORT_DIR +from frigate.const import CLIPS_DIR, EXPORT_DIR from frigate.models import Export, Previews, Recordings from frigate.record.export import ( PlaybackFactorEnum, @@ -54,7 +55,14 @@ def export_recording( playback_factor = body.playback playback_source = body.source friendly_name = body.name - existing_image = body.image_path + existing_image = sanitize_filepath(body.image_path) if body.image_path else None + + # Ensure that existing_image is a valid path + if existing_image and not existing_image.startswith(CLIPS_DIR): + return JSONResponse( + content=({"success": False, "message": "Invalid image path"}), + status_code=400, + ) if playback_source == "recordings": recordings_count = ( From 942a61ddfbff715d9268b9c4373ba832ebcc993b Mon Sep 17 00:00:00 2001 From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> Date: Wed, 15 Oct 2025 06:53:31 -0500 Subject: [PATCH 09/17] version bump in docs (#20501) --- docs/docs/frigate/updating.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/docs/frigate/updating.md b/docs/docs/frigate/updating.md index fdfbf906b..d95ae83c5 100644 --- a/docs/docs/frigate/updating.md +++ b/docs/docs/frigate/updating.md @@ 
-5,7 +5,7 @@ title: Updating # Updating Frigate -The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1). +The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2). Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups. @@ -33,21 +33,21 @@ If you’re running Frigate via Docker (recommended method), follow these steps: 2. **Update and Pull the Latest Image**: - If using Docker Compose: - - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example: + - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example: ```yaml services: frigate: - image: ghcr.io/blakeblackshear/frigate:0.16.1 + image: ghcr.io/blakeblackshear/frigate:0.16.2 ``` - Then pull the image: ```bash - docker pull ghcr.io/blakeblackshear/frigate:0.16.1 + docker pull ghcr.io/blakeblackshear/frigate:0.16.2 ``` - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling. - If using `docker run`: - - Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`): + - Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`): ```bash - docker pull ghcr.io/blakeblackshear/frigate:0.16.1 + docker pull ghcr.io/blakeblackshear/frigate:0.16.2 ``` 3. **Start the Container**: From a4764563a5e980289fcad714d236b0ce87e95875 Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Thu, 16 Oct 2025 06:56:37 -0600 Subject: [PATCH 10/17] Fix YOLOv9 export script (#20514) --- docs/docs/configuration/object_detectors.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 6d5ea07c8..088ffc46c 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -1062,7 +1062,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ WORKDIR /yolov9 ADD https://github.com/WongKinYiu/yolov9.git . 
RUN uv pip install --system -r requirements.txt -RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 +RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript ARG MODEL_SIZE ARG IMG_SIZE ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt From 0302db1c4389526f8bc2ab1d8d5755d48f288632 Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Fri, 17 Oct 2025 06:16:30 -0600 Subject: [PATCH 11/17] Fix model exports (#20540) --- docs/docs/configuration/object_detectors.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 088ffc46c..e0faaf7fc 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -988,7 +988,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ WORKDIR /dfine RUN git clone https://github.com/Peterande/D-FINE.git . RUN uv pip install --system -r requirements.txt -RUN uv pip install --system onnx onnxruntime onnxsim +RUN uv pip install --system onnx onnxruntime onnxsim onnxscript # Create output directory and download checkpoint RUN mkdir -p output ARG MODEL_SIZE @@ -1012,7 +1012,7 @@ FROM python:3.11 AS build RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/* COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ WORKDIR /rfdetr -RUN uv pip install --system rfdetr[onnxexport] +RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnxscript ARG MODEL_SIZE RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)" FROM scratch From 5dc8a85f2f0a6ecc84e3a32d2719046a8850c7b9 Mon Sep 17 00:00:00 2001 From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> Date: Sat, 18 Oct 2025 07:44:26 -0500 Subject: [PATCH 12/17] Update Azure OpenAI genai docs (#20549) * Update azure openai genai docs * tweak url --- docs/docs/configuration/genai.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/docs/docs/configuration/genai.md b/docs/docs/configuration/genai.md index 9279e459d..6e1d42c34 100644 --- a/docs/docs/configuration/genai.md +++ b/docs/docs/configuration/genai.md @@ -111,7 +111,7 @@ OpenAI does not have a free tier for their API. With the release of gpt-4o, pric ### Supported Models -You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`. +You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). ### Get API Key @@ -139,11 +139,11 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r ### Supported Models -You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`. +You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). 
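The configuration example further below references `{FRIGATE_OPENAI_API_KEY}`. Frigate substitutes environment variables whose names begin with `FRIGATE_` into the config file, so the key can be supplied from your compose file instead of being stored in the config itself. A minimal sketch, with the image tag and key value as placeholders:

```yaml
# docker-compose sketch: pass the Azure OpenAI key as an environment variable.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      FRIGATE_OPENAI_API_KEY: "replace-with-your-azure-openai-key"
```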
### Create Resource and Get API Key -To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource. +To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below). ### Configuration @@ -151,7 +151,8 @@ To start using Azure OpenAI, you must first [create a resource](https://learn.mi genai: enabled: True provider: azure_openai - base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview + base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview + model: gpt-5-mini api_key: "{FRIGATE_OPENAI_API_KEY}" ``` From c5fe354552b7d6c68e2fa7a30f17e462e56da43a Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Tue, 21 Oct 2025 16:20:41 -0600 Subject: [PATCH 13/17] Improve Reolink Camera Documentation (#20605) * Improve Reolink Camera Documentation * Update Reolink configuration link in live.md --- docs/docs/configuration/camera_specific.md | 39 +++++++++++++--------- docs/docs/configuration/live.md | 2 +- 2 files changed, 24 insertions(+), 17 deletions(-) diff --git a/docs/docs/configuration/camera_specific.md b/docs/docs/configuration/camera_specific.md index ca31604c8..334e3682b 100644 --- a/docs/docs/configuration/camera_specific.md +++ b/docs/docs/configuration/camera_specific.md @@ -164,13 +164,35 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels. The setup of main stream can be also done via RTSP, but isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras. +
+ Example Config + +:::tip + +Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability, a secondary rtsp stream can be added that will be using for the two way audio only. + +NOTE: The RTSP stream can not be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio. + +Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk). + +::: + ```yaml go2rtc: streams: + # example for connecting to a standard Reolink camera your_reolink_camera: - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus" your_reolink_camera_sub: - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password" + # example for connectin to a Reolink camera that supports two way talk + your_reolink_camera_twt: + - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus" + - "rtsp://username:password@reolink_ip/Preview_01_sub + your_reolink_camera_twt_sub: + - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password" + - "rtsp://username:password@reolink_ip/Preview_01_sub + # example for connecting to a Reolink NVR your_reolink_camera_via_nvr: - "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15 - "ffmpeg:your_reolink_camera_via_nvr#audio=aac" @@ -201,22 +223,7 @@ cameras: roles: - detect ``` - -#### Reolink Doorbell - -The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability, a secondary rtsp stream can be added that will be using for the two way audio only. - -Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk). - -```yaml -go2rtc: - streams: - your_reolink_doorbell: - - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus" - - rtsp://username:password@reolink_ip/Preview_01_sub - your_reolink_doorbell_sub: - - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password" -``` +
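To surface the two way talk capable streams in the Frigate UI, the go2rtc stream names are referenced from the camera's live settings. The snippet below is a sketch only; it reuses the stream names from the example above and assumes the per-camera `live` / `streams` options described on the Live view page.

```yaml
# Sketch: expose the two way talk streams to the Frigate live view.
cameras:
  your_reolink_camera_twt:
    live:
      streams:
        Main: your_reolink_camera_twt
        Sub: your_reolink_camera_twt_sub
```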
### Unifi Protect Cameras diff --git a/docs/docs/configuration/live.md b/docs/docs/configuration/live.md index d9bc107f2..11339d584 100644 --- a/docs/docs/configuration/live.md +++ b/docs/docs/configuration/live.md @@ -174,7 +174,7 @@ For devices that support two way talk, Frigate can be configured to use the feat - Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)). - For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source. -To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell) +To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras) ### Streaming options on camera group dashboards From 0a6b9f98ed648258316dfb70ff13cf59a9b263b8 Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Sat, 25 Oct 2025 15:40:04 -0600 Subject: [PATCH 14/17] Various fixes (#20666) * Remove nvidia pyindex * Improve prompt --- docker/main/requirements.txt | 1 - frigate/genai/__init__.py | 32 ++++++++++++++++++++++---------- 2 files changed, 22 insertions(+), 11 deletions(-) diff --git a/docker/main/requirements.txt b/docker/main/requirements.txt index 3ae420d07..f1ba7d9ad 100644 --- a/docker/main/requirements.txt +++ b/docker/main/requirements.txt @@ -1,2 +1 @@ scikit-build == 0.18.* -nvidia-pyindex diff --git a/frigate/genai/__init__.py b/frigate/genai/__init__.py index 7bc1bbf75..e02c35d4b 100644 --- a/frigate/genai/__init__.py +++ b/frigate/genai/__init__.py @@ -63,18 +63,24 @@ class GenAIClient: else: return "" - def get_verified_objects() -> str: + def get_verified_object_prompt() -> str: if review_data["recognized_objects"]: - return " - " + "\n - ".join(review_data["recognized_objects"]) + object_list = " - " + "\n - ".join(review_data["recognized_objects"]) + return f"""## Verified Objects (USE THESE NAMES) +When any of the following verified objects are present in the scene, you MUST use these exact names in your title and scene description: +{object_list} +""" else: - return " None" + return "" context_prompt = f""" -Please analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera. +Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera. -**Normal activity patterns for this property:** +## Normal Activity Patterns for This Property {activity_context_prompt} +## Task Instructions + Your task is to provide a clear, accurate description of the scene that: 1. States exactly what is happening based on observable actions and movements. 2. Evaluates whether the observable evidence suggests normal activity for this property or genuine security concerns. @@ -82,6 +88,8 @@ Your task is to provide a clear, accurate description of the scene that: **IMPORTANT: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. 
Only consider higher threat levels if the activity clearly deviates from normal patterns or shows genuine security concerns.** +## Analysis Guidelines + When forming your description: - **CRITICAL: Only describe objects explicitly listed in "Detected objects" below.** Do not infer or mention additional people, vehicles, or objects not present in the detected objects list, even if visual patterns suggest them. If only a car is detected, do not describe a person interacting with it unless "person" is also in the detected objects list. - **Only describe actions actually visible in the frames.** Do not assume or infer actions that you don't observe happening. If someone walks toward furniture but you never see them sit, do not say they sat. Stick to what you can see across the sequence. @@ -92,6 +100,8 @@ When forming your description: - Identify patterns that suggest genuine security concerns: testing doors/windows on vehicles or buildings, accessing unauthorized areas, attempting to conceal actions, extended loitering without apparent purpose, taking items, behavior that clearly doesn't align with the zone context and detected objects. - **Weigh all evidence holistically**: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. Only consider Level 1 if the activity clearly deviates from normal patterns or shows genuine security concerns that warrant attention. +## Response Format + Your response MUST be a flat JSON object with: - `title` (string): A concise, one-sentence title that captures the main activity. Include any verified recognized objects (from the "Verified recognized objects" list below) and key detected objects. Examples: "Joe walking dog in backyard", "Unknown person testing car doors at night". - `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign. @@ -99,20 +109,22 @@ Your response MUST be a flat JSON object with: - `potential_threat_level` (integer): 0, 1, or 2 as defined below. Your threat level must be consistent with your scene description and the guidance above. {get_concern_prompt()} -Threat-level definitions: +## Threat Level Definitions + - 0 — **Normal activity (DEFAULT)**: What you observe matches the normal activity patterns above or is consistent with expected activity for this property type. The observable evidence—considering zone context, detected objects, and timing together—supports a benign explanation. **Use this level for routine activities even if minor ambiguous elements exist.** - 1 — **Potentially suspicious**: Observable behavior raises genuine security concerns that warrant human review. The evidence doesn't support a routine explanation and clearly deviates from the normal patterns above. Examples: testing doors/windows on vehicles or structures, accessing areas that don't align with the activity, taking items that likely don't belong to them, behavior clearly inconsistent with the zone and context, or activity that lacks any visible legitimate indicators. 
**Only use this level when the activity clearly doesn't match normal patterns.** - 2 — **Immediate threat**: Clear evidence of forced entry, break-in, vandalism, aggression, weapons, theft in progress, or active property damage. -Sequence details: +## Sequence Details + - Frame 1 = earliest, Frame {len(thumbnails)} = latest - Activity started at {review_data["start"]} and lasted {review_data["duration"]} seconds - Detected objects: {", ".join(review_data["objects"])} -- Verified recognized objects (use these names when describing these objects): -{get_verified_objects()} - Zones involved: {", ".join(z.replace("_", " ").title() for z in review_data["zones"]) or "None"} -**IMPORTANT:** +{get_verified_object_prompt()} + +## Important Notes - Values must be plain strings, floats, or integers — no nested objects, no extra commentary. - Only describe objects from the "Detected objects" list above. Do not hallucinate additional objects. {get_language_prompt()} From 63042b9c08a66efea17cad5b6805202bcf2c055f Mon Sep 17 00:00:00 2001 From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> Date: Sat, 25 Oct 2025 17:15:36 -0500 Subject: [PATCH 15/17] Review stream tweaks (#20662) * tweak api to fetch multiple timelines * support multiple selected objects in context * rework context provider * use toggle in detail stream * use toggle in menu * plot multiple object tracks * verified icon, recognized plate, and clicking tweaks * add plate to object lifecycle * close menu before opening frigate+ dialog * clean up * normal text case for tooltip * capitalization * use flexbox for recording view --- frigate/api/app.py | 6 +- .../components/overlay/ObjectTrackOverlay.tsx | 501 ++++++++++-------- .../overlay/detail/ObjectLifecycle.tsx | 26 +- web/src/components/player/HlsVideoPlayer.tsx | 21 +- .../player/dynamic/DynamicVideoPlayer.tsx | 10 +- web/src/components/timeline/DetailStream.tsx | 91 ++-- web/src/components/timeline/EventMenu.tsx | 21 +- web/src/context/detail-stream-context.tsx | 31 +- web/src/types/event.ts | 1 + web/src/utils/lifecycleUtil.ts | 3 +- web/src/views/recording/RecordingView.tsx | 150 +++--- 11 files changed, 470 insertions(+), 391 deletions(-) diff --git a/frigate/api/app.py b/frigate/api/app.py index 5d09ecf00..5efb8b523 100644 --- a/frigate/api/app.py +++ b/frigate/api/app.py @@ -696,7 +696,11 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N clauses.append((Timeline.camera == camera)) if source_id: - clauses.append((Timeline.source_id == source_id)) + source_ids = [sid.strip() for sid in source_id.split(",")] + if len(source_ids) == 1: + clauses.append((Timeline.source_id == source_ids[0])) + else: + clauses.append((Timeline.source_id.in_(source_ids))) if len(clauses) == 0: clauses.append((True)) diff --git a/web/src/components/overlay/ObjectTrackOverlay.tsx b/web/src/components/overlay/ObjectTrackOverlay.tsx index 2bd355306..50cf92781 100644 --- a/web/src/components/overlay/ObjectTrackOverlay.tsx +++ b/web/src/components/overlay/ObjectTrackOverlay.tsx @@ -11,38 +11,80 @@ import { import { TooltipPortal } from "@radix-ui/react-tooltip"; import { cn } from "@/lib/utils"; import { useTranslation } from "react-i18next"; +import { Event } from "@/types/event"; type ObjectTrackOverlayProps = { camera: string; - selectedObjectId: string; showBoundingBoxes?: boolean; currentTime: number; videoWidth: number; videoHeight: number; className?: string; onSeekToTime?: (timestamp: number, play?: boolean) => void; - objectTimeline?: 
ObjectLifecycleSequence[]; +}; + +type PathPoint = { + x: number; + y: number; + timestamp: number; + lifecycle_item?: ObjectLifecycleSequence; + objectId: string; +}; + +type ObjectData = { + objectId: string; + label: string; + color: string; + pathPoints: PathPoint[]; + currentZones: string[]; + currentBox?: number[]; }; export default function ObjectTrackOverlay({ camera, - selectedObjectId, showBoundingBoxes = false, currentTime, videoWidth, videoHeight, className, onSeekToTime, - objectTimeline, }: ObjectTrackOverlayProps) { const { t } = useTranslation("views/events"); const { data: config } = useSWR("config"); - const { annotationOffset } = useDetailStream(); + const { annotationOffset, selectedObjectIds } = useDetailStream(); const effectiveCurrentTime = currentTime - annotationOffset / 1000; - // Fetch the full event data to get saved path points - const { data: eventData } = useSWR(["event_ids", { ids: selectedObjectId }]); + // Fetch all event data in a single request (CSV ids) + const { data: eventsData } = useSWR( + selectedObjectIds.length > 0 + ? ["event_ids", { ids: selectedObjectIds.join(",") }] + : null, + ); + + // Fetch timeline data for each object ID using fixed number of hooks + const { data: timelineData } = useSWR( + selectedObjectIds.length > 0 + ? `timeline?source_id=${selectedObjectIds.join(",")}&limit=1000` + : null, + { revalidateOnFocus: false }, + ); + + const timelineResults = useMemo(() => { + // Group timeline entries by source_id + if (!timelineData) return selectedObjectIds.map(() => []); + + const grouped: Record = {}; + for (const entry of timelineData) { + if (!grouped[entry.source_id]) { + grouped[entry.source_id] = []; + } + grouped[entry.source_id].push(entry); + } + + // Return timeline arrays in the same order as selectedObjectIds + return selectedObjectIds.map((id) => grouped[id] || []); + }, [selectedObjectIds, timelineData]); const typeColorMap = useMemo( () => ({ @@ -58,16 +100,18 @@ export default function ObjectTrackOverlay({ [], ); - const getObjectColor = useMemo(() => { - return (label: string) => { + const getObjectColor = useCallback( + (label: string, objectId: string) => { const objectColor = config?.model?.colormap[label]; if (objectColor) { const reversed = [...objectColor].reverse(); return `rgb(${reversed.join(",")})`; } - return "rgb(255, 0, 0)"; // fallback red - }; - }, [config]); + // Fallback to deterministic color based on object ID + return generateColorFromId(objectId); + }, + [config], + ); const getZoneColor = useCallback( (zoneName: string) => { @@ -81,125 +125,121 @@ export default function ObjectTrackOverlay({ [config, camera], ); - const currentObjectZones = useMemo(() => { - if (!objectTimeline) return []; - - // Find the most recent timeline event at or before effective current time - const relevantEvents = objectTimeline - .filter((event) => event.timestamp <= effectiveCurrentTime) - .sort((a, b) => b.timestamp - a.timestamp); // Most recent first - - // Get zones from the most recent event - return relevantEvents[0]?.data?.zones || []; - }, [objectTimeline, effectiveCurrentTime]); - - const zones = useMemo(() => { - if (!config?.cameras?.[camera]?.zones || !currentObjectZones.length) + // Build per-object data structures + const objectsData = useMemo(() => { + if (!eventsData || !Array.isArray(eventsData)) return []; + if (config?.cameras[camera]?.onvif.autotracking.enabled_in_config) return []; + return selectedObjectIds + .map((objectId, index) => { + const eventData = eventsData.find((e) => e.id === 
objectId); + const timelineData = timelineResults[index]; + + // get saved path points from event + const savedPathPoints: PathPoint[] = + eventData?.data?.path_data?.map( + ([coords, timestamp]: [number[], number]) => ({ + x: coords[0], + y: coords[1], + timestamp, + lifecycle_item: undefined, + objectId, + }), + ) || []; + + // timeline points for this object + const eventSequencePoints: PathPoint[] = + timelineData + ?.filter( + (event: ObjectLifecycleSequence) => event.data.box !== undefined, + ) + .map((event: ObjectLifecycleSequence) => { + const [left, top, width, height] = event.data.box!; + return { + x: left + width / 2, // Center x + y: top + height, // Bottom y + timestamp: event.timestamp, + lifecycle_item: event, + objectId, + }; + }) || []; + + // show full path once current time has reached the object's start time + const combinedPoints = [...savedPathPoints, ...eventSequencePoints] + .sort((a, b) => a.timestamp - b.timestamp) + .filter( + (point) => + currentTime >= (eventData?.start_time ?? 0) && + point.timestamp >= (eventData?.start_time ?? 0) && + point.timestamp <= (eventData?.end_time ?? Infinity), + ); + + // Get color for this object + const label = eventData?.label || "unknown"; + const color = getObjectColor(label, objectId); + + // Get current zones + const currentZones = + timelineData + ?.filter( + (event: ObjectLifecycleSequence) => + event.timestamp <= effectiveCurrentTime, + ) + .sort( + (a: ObjectLifecycleSequence, b: ObjectLifecycleSequence) => + b.timestamp - a.timestamp, + )[0]?.data?.zones || []; + + // Get current bounding box + const currentBox = timelineData + ?.filter( + (event: ObjectLifecycleSequence) => + event.timestamp <= effectiveCurrentTime && event.data.box, + ) + .sort( + (a: ObjectLifecycleSequence, b: ObjectLifecycleSequence) => + b.timestamp - a.timestamp, + )[0]?.data?.box; + + return { + objectId, + label, + color, + pathPoints: combinedPoints, + currentZones, + currentBox, + }; + }) + .filter((obj: ObjectData) => obj.pathPoints.length > 0); // Only include objects with path data + }, [ + eventsData, + selectedObjectIds, + timelineResults, + currentTime, + effectiveCurrentTime, + getObjectColor, + config, + camera, + ]); + + // Collect all zones across all objects + const allZones = useMemo(() => { + if (!config?.cameras?.[camera]?.zones) return []; + + const zoneNames = new Set(); + objectsData.forEach((obj) => { + obj.currentZones.forEach((zone) => zoneNames.add(zone)); + }); + return Object.entries(config.cameras[camera].zones) - .filter(([name]) => currentObjectZones.includes(name)) + .filter(([name]) => zoneNames.has(name)) .map(([name, zone]) => ({ name, coordinates: zone.coordinates, color: getZoneColor(name), })); - }, [config, camera, getZoneColor, currentObjectZones]); - - // get saved path points from event - const savedPathPoints = useMemo(() => { - return ( - eventData?.[0].data?.path_data?.map( - ([coords, timestamp]: [number[], number]) => ({ - x: coords[0], - y: coords[1], - timestamp, - lifecycle_item: undefined, - }), - ) || [] - ); - }, [eventData]); - - // timeline points for selected event - const eventSequencePoints = useMemo(() => { - return ( - objectTimeline - ?.filter((event) => event.data.box !== undefined) - .map((event) => { - const [left, top, width, height] = event.data.box!; - - return { - x: left + width / 2, // Center x - y: top + height, // Bottom y - timestamp: event.timestamp, - lifecycle_item: event, - }; - }) || [] - ); - }, [objectTimeline]); - - // final object path with timeline points 
included - const pathPoints = useMemo(() => { - // don't display a path for autotracking cameras - if (config?.cameras[camera]?.onvif.autotracking.enabled_in_config) - return []; - - const combinedPoints = [...savedPathPoints, ...eventSequencePoints].sort( - (a, b) => a.timestamp - b.timestamp, - ); - - // Filter points around current time (within a reasonable window) - const timeWindow = 30; // 30 seconds window - return combinedPoints.filter( - (point) => - point.timestamp >= currentTime - timeWindow && - point.timestamp <= currentTime + timeWindow, - ); - }, [savedPathPoints, eventSequencePoints, config, camera, currentTime]); - - // get absolute positions on the svg canvas for each point - const absolutePositions = useMemo(() => { - if (!pathPoints) return []; - - return pathPoints.map((point) => { - // Find the corresponding timeline entry for this point - const timelineEntry = objectTimeline?.find( - (entry) => entry.timestamp == point.timestamp, - ); - return { - x: point.x * videoWidth, - y: point.y * videoHeight, - timestamp: point.timestamp, - lifecycle_item: - timelineEntry || - (point.box // normal path point - ? { - timestamp: point.timestamp, - camera: camera, - source: "tracked_object", - source_id: selectedObjectId, - class_type: "visible" as LifecycleClassType, - data: { - camera: camera, - label: point.label, - sub_label: "", - box: point.box, - region: [0, 0, 0, 0], // placeholder - attribute: "", - zones: [], - }, - } - : undefined), - }; - }); - }, [ - pathPoints, - videoWidth, - videoHeight, - objectTimeline, - camera, - selectedObjectId, - ]); + }, [config, camera, objectsData, getZoneColor]); const generateStraightPath = useCallback( (points: { x: number; y: number }[]) => { @@ -214,15 +254,20 @@ export default function ObjectTrackOverlay({ ); const getPointColor = useCallback( - (baseColor: number[], type?: string) => { + (baseColorString: string, type?: string) => { if (type && typeColorMap[type as keyof typeof typeColorMap]) { const typeColor = typeColorMap[type as keyof typeof typeColorMap]; if (typeColor) { return `rgb(${typeColor.join(",")})`; } } - // normal path point - return `rgb(${baseColor.map((c) => Math.max(0, c - 10)).join(",")})`; + // Parse and darken base color slightly for path points + const match = baseColorString.match(/\d+/g); + if (match) { + const [r, g, b] = match.map(Number); + return `rgb(${Math.max(0, r - 10)}, ${Math.max(0, g - 10)}, ${Math.max(0, b - 10)})`; + } + return baseColorString; }, [typeColorMap], ); @@ -234,49 +279,8 @@ export default function ObjectTrackOverlay({ [onSeekToTime], ); - // render bounding box for object at current time if we have a timeline entry - const currentBoundingBox = useMemo(() => { - if (!objectTimeline) return null; - - // Find the most recent timeline event at or before effective current time with a bounding box - const relevantEvents = objectTimeline - .filter( - (event) => event.timestamp <= effectiveCurrentTime && event.data.box, - ) - .sort((a, b) => b.timestamp - a.timestamp); // Most recent first - - const currentEvent = relevantEvents[0]; - - if (!currentEvent?.data.box) return null; - - const [left, top, width, height] = currentEvent.data.box; - return { - left, - top, - width, - height, - centerX: left + width / 2, - centerY: top + height, - }; - }, [objectTimeline, effectiveCurrentTime]); - - const objectColor = useMemo(() => { - return pathPoints[0]?.label - ? 
getObjectColor(pathPoints[0].label) - : "rgb(255, 0, 0)"; - }, [pathPoints, getObjectColor]); - - const objectColorArray = useMemo(() => { - return pathPoints[0]?.label - ? getObjectColor(pathPoints[0].label).match(/\d+/g)?.map(Number) || [ - 255, 0, 0, - ] - : [255, 0, 0]; - }, [pathPoints, getObjectColor]); - - // render any zones for object at current time const zonePolygons = useMemo(() => { - return zones.map((zone) => { + return allZones.map((zone) => { // Convert zone coordinates from normalized (0-1) to pixel coordinates const points = zone.coordinates .split(",") @@ -298,9 +302,9 @@ export default function ObjectTrackOverlay({ stroke: zone.color, }; }); - }, [zones, videoWidth, videoHeight]); + }, [allZones, videoWidth, videoHeight]); - if (!pathPoints.length || !config) { + if (objectsData.length === 0 || !config) { return null; } @@ -325,73 +329,102 @@ export default function ObjectTrackOverlay({ /> ))} - {absolutePositions.length > 1 && ( - - )} + {objectsData.map((objData) => { + const absolutePositions = objData.pathPoints.map((point) => ({ + x: point.x * videoWidth, + y: point.y * videoHeight, + timestamp: point.timestamp, + lifecycle_item: point.lifecycle_item, + })); - {absolutePositions.map((pos, index) => ( - - - handlePointClick(pos.timestamp)} - /> - - - - {pos.lifecycle_item - ? `${pos.lifecycle_item.class_type.replace("_", " ")} at ${new Date(pos.timestamp * 1000).toLocaleTimeString()}` - : t("objectTrack.trackedPoint")} - {onSeekToTime && ( -
- {t("objectTrack.clickToSeek")} -
- )} -
-
-
- ))} + return ( + + {absolutePositions.length > 1 && ( + + )} - {currentBoundingBox && showBoundingBoxes && ( - - + {absolutePositions.map((pos, index) => ( + + + handlePointClick(pos.timestamp)} + /> + + + + {pos.lifecycle_item + ? `${pos.lifecycle_item.class_type.replace("_", " ")} at ${new Date(pos.timestamp * 1000).toLocaleTimeString()}` + : t("objectTrack.trackedPoint")} + {onSeekToTime && ( +
+ {t("objectTrack.clickToSeek")} +
+ )} +
+
+
+ ))} - -
- )} + {objData.currentBox && showBoundingBoxes && ( + + + + + )} +
+ ); + })} ); } + +// Generate a deterministic HSL color from a string (object ID) +function generateColorFromId(id: string): string { + let hash = 0; + for (let i = 0; i < id.length; i++) { + hash = id.charCodeAt(i) + ((hash << 5) - hash); + } + // Use golden ratio to distribute hues evenly + const hue = (hash * 137.508) % 360; + return `hsl(${hue}, 70%, 50%)`; +} diff --git a/web/src/components/overlay/detail/ObjectLifecycle.tsx b/web/src/components/overlay/detail/ObjectLifecycle.tsx index 0f1eaadf5..761be65ae 100644 --- a/web/src/components/overlay/detail/ObjectLifecycle.tsx +++ b/web/src/components/overlay/detail/ObjectLifecycle.tsx @@ -94,6 +94,10 @@ export default function ObjectLifecycle({ ); }, [config, event]); + const label = event.sub_label + ? event.sub_label + : getTranslatedLabel(event.label); + const getZoneColor = useCallback( (zoneName: string) => { const zoneColor = @@ -628,17 +632,29 @@ export default function ObjectLifecycle({ }} role="button" > -
+
{getIconForLabel( - event.label, - "size-6 text-primary dark:text-white", + event.sub_label ? event.label + "-verified" : event.label, + "size-4 text-white", )}
-
- {getTranslatedLabel(event.label)} +
+ {label} {formattedStart ?? ""} - {formattedEnd ?? ""} + {event.data?.recognized_license_plate && ( + <> + ·{" "} + + {event.data.recognized_license_plate} + + + )}
diff --git a/web/src/components/player/HlsVideoPlayer.tsx b/web/src/components/player/HlsVideoPlayer.tsx index a41d31db2..fad88815b 100644 --- a/web/src/components/player/HlsVideoPlayer.tsx +++ b/web/src/components/player/HlsVideoPlayer.tsx @@ -20,7 +20,6 @@ import { cn } from "@/lib/utils"; import { ASPECT_VERTICAL_LAYOUT, RecordingPlayerError } from "@/types/record"; import { useTranslation } from "react-i18next"; import ObjectTrackOverlay from "@/components/overlay/ObjectTrackOverlay"; -import { DetailStreamContextType } from "@/context/detail-stream-context"; // Android native hls does not seek correctly const USE_NATIVE_HLS = !isAndroid; @@ -54,8 +53,11 @@ type HlsVideoPlayerProps = { onUploadFrame?: (playTime: number) => Promise | undefined; toggleFullscreen?: () => void; onError?: (error: RecordingPlayerError) => void; - detail?: Partial; + isDetailMode?: boolean; + camera?: string; + currentTimeOverride?: number; }; + export default function HlsVideoPlayer({ videoRef, containerRef, @@ -75,17 +77,15 @@ export default function HlsVideoPlayer({ onUploadFrame, toggleFullscreen, onError, - detail, + isDetailMode = false, + camera, + currentTimeOverride, }: HlsVideoPlayerProps) { const { t } = useTranslation("components/player"); const { data: config } = useSWR("config"); // for detail stream context in History - const selectedObjectId = detail?.selectedObjectId; - const selectedObjectTimeline = detail?.selectedObjectTimeline; - const currentTime = detail?.currentTime; - const camera = detail?.camera; - const isDetailMode = detail?.isDetailMode ?? false; + const currentTime = currentTimeOverride; // playback @@ -316,16 +316,14 @@ export default function HlsVideoPlayer({ }} > {isDetailMode && - selectedObjectId && camera && currentTime && videoDimensions.width > 0 && videoDimensions.height > 0 && (
)} diff --git a/web/src/components/player/dynamic/DynamicVideoPlayer.tsx b/web/src/components/player/dynamic/DynamicVideoPlayer.tsx index 1b7689804..2a6f3a1cf 100644 --- a/web/src/components/player/dynamic/DynamicVideoPlayer.tsx +++ b/web/src/components/player/dynamic/DynamicVideoPlayer.tsx @@ -61,7 +61,11 @@ export default function DynamicVideoPlayer({ const { data: config } = useSWR("config"); // for detail stream context in History - const detail = useDetailStream(); + const { + isDetailMode, + camera: contextCamera, + currentTime, + } = useDetailStream(); // controlling playback @@ -295,7 +299,9 @@ export default function DynamicVideoPlayer({ setIsBuffering(true); } }} - detail={detail} + isDetailMode={isDetailMode} + camera={contextCamera || camera} + currentTimeOverride={currentTime} /> setUpload(undefined)} - onEventUploaded={() => setUpload(undefined)} + onEventUploaded={() => { + if (upload) { + upload.plus_id = "new_upload"; + } + }} />
e.label) + ? fetchedEvents.map((e) => + e.sub_label ? e.label + "-verified" : e.label, + ) : (review.data?.objects ?? [])), ...(review.data?.audio ?? []), ]; @@ -317,7 +323,7 @@ function ReviewGroup({
{displayTime}
-
+
{iconLabels.slice(0, 5).map((lbl, idx) => (
("config"); - const { selectedObjectId, setSelectedObjectId } = useDetailStream(); + const { selectedObjectIds, toggleObjectSelection } = useDetailStream(); + + const isSelected = selectedObjectIds.includes(event.id); + + const label = event.sub_label || getTranslatedLabel(event.label); const handleObjectSelect = (event: Event | undefined) => { if (event) { - onSeek(event.start_time ?? 0); - setSelectedObjectId(event.id); + // onSeek(event.start_time ?? 0); + toggleObjectSelection(event.id); } else { - setSelectedObjectId(undefined); + toggleObjectSelection(undefined); } }; - // Clear selectedObjectId when effectiveTime has passed this event's end_time + // Clear selection when effectiveTime has passed this event's end_time useEffect(() => { - if (selectedObjectId === event.id && effectiveTime && event.end_time) { + if (isSelected && effectiveTime && event.end_time) { if (effectiveTime >= event.end_time) { - setSelectedObjectId(undefined); + toggleObjectSelection(event.id); } } }, [ - selectedObjectId, + isSelected, event.id, event.end_time, effectiveTime, - setSelectedObjectId, + toggleObjectSelection, ]); return ( @@ -454,48 +464,59 @@ function EventList({
= (event.start_time ?? 0) - 0.5 && (effectiveTime ?? 0) <= (event.end_time ?? event.start_time ?? 0) + 0.5 && "bg-secondary-highlight", )} > -
-
{ - e.stopPropagation(); - handleObjectSelect( - event.id == selectedObjectId ? undefined : event, - ); - }} - role="button" - > +
+
{ + e.stopPropagation(); + handleObjectSelect(isSelected ? undefined : event); + }} > - {getIconForLabel(event.label, "size-3 text-white")} + {getIconForLabel( + event.sub_label ? event.label + "-verified" : event.label, + "size-3 text-white", + )}
-
- {getTranslatedLabel(event.label)} +
{ + e.stopPropagation(); + onSeek(event.start_time ?? 0); + }} + role="button" + > + {label} + {event.data?.recognized_license_plate && ( + <> + ·{" "} + + {event.data.recognized_license_plate} + + + )}
-
+
onOpenUpload?.(e)} - selectedObjectId={selectedObjectId} - setSelectedObjectId={handleObjectSelect} + isSelected={isSelected} + onToggleSelection={handleObjectSelect} />
diff --git a/web/src/components/timeline/EventMenu.tsx b/web/src/components/timeline/EventMenu.tsx index ac98a8ebc..1caed65e4 100644 --- a/web/src/components/timeline/EventMenu.tsx +++ b/web/src/components/timeline/EventMenu.tsx @@ -12,14 +12,15 @@ import { useNavigate } from "react-router-dom"; import { useTranslation } from "react-i18next"; import { Event } from "@/types/event"; import { FrigateConfig } from "@/types/frigateConfig"; +import { useState } from "react"; type EventMenuProps = { event: Event; config?: FrigateConfig; onOpenUpload?: (e: Event) => void; onOpenSimilarity?: (e: Event) => void; - selectedObjectId?: string; - setSelectedObjectId?: (event: Event | undefined) => void; + isSelected?: boolean; + onToggleSelection?: (event: Event | undefined) => void; }; export default function EventMenu({ @@ -27,25 +28,26 @@ export default function EventMenu({ config, onOpenUpload, onOpenSimilarity, - selectedObjectId, - setSelectedObjectId, + isSelected = false, + onToggleSelection, }: EventMenuProps) { const apiHost = useApiHost(); const navigate = useNavigate(); const { t } = useTranslation("views/explore"); + const [isOpen, setIsOpen] = useState(false); const handleObjectSelect = () => { - if (event.id === selectedObjectId) { - setSelectedObjectId?.(undefined); + if (isSelected) { + onToggleSelection?.(undefined); } else { - setSelectedObjectId?.(event); + onToggleSelection?.(event); } }; return ( <> - +
@@ -54,7 +56,7 @@ export default function EventMenu({ - {event.id === selectedObjectId + {isSelected ? t("itemMenu.hideObjectDetails.label") : t("itemMenu.showObjectDetails.label")} @@ -85,6 +87,7 @@ export default function EventMenu({ config?.plus?.enabled && ( { + setIsOpen(false); onOpenUpload?.(event); }} > diff --git a/web/src/context/detail-stream-context.tsx b/web/src/context/detail-stream-context.tsx index 12d7df592..aa7b2478b 100644 --- a/web/src/context/detail-stream-context.tsx +++ b/web/src/context/detail-stream-context.tsx @@ -1,16 +1,14 @@ import React, { createContext, useContext, useState, useEffect } from "react"; import { FrigateConfig } from "@/types/frigateConfig"; import useSWR from "swr"; -import { ObjectLifecycleSequence } from "@/types/timeline"; export interface DetailStreamContextType { - selectedObjectId: string | undefined; - selectedObjectTimeline?: ObjectLifecycleSequence[]; + selectedObjectIds: string[]; currentTime: number; camera: string; annotationOffset: number; // milliseconds setAnnotationOffset: (ms: number) => void; - setSelectedObjectId: (id: string | undefined) => void; + toggleObjectSelection: (id: string | undefined) => void; isDetailMode: boolean; } @@ -31,13 +29,21 @@ export function DetailStreamProvider({ currentTime, camera, }: DetailStreamProviderProps) { - const [selectedObjectId, setSelectedObjectId] = useState< - string | undefined - >(); + const [selectedObjectIds, setSelectedObjectIds] = useState([]); - const { data: selectedObjectTimeline } = useSWR( - selectedObjectId ? ["timeline", { source_id: selectedObjectId }] : null, - ); + const toggleObjectSelection = (id: string | undefined) => { + if (id === undefined) { + setSelectedObjectIds([]); + } else { + setSelectedObjectIds((prev) => { + if (prev.includes(id)) { + return prev.filter((existingId) => existingId !== id); + } else { + return [...prev, id]; + } + }); + } + }; const { data: config } = useSWR("config"); @@ -53,13 +59,12 @@ export function DetailStreamProvider({ }, [config, camera]); const value: DetailStreamContextType = { - selectedObjectId, - selectedObjectTimeline, + selectedObjectIds, currentTime, camera, annotationOffset, setAnnotationOffset, - setSelectedObjectId, + toggleObjectSelection, isDetailMode, }; diff --git a/web/src/types/event.ts b/web/src/types/event.ts index d7c8ca665..cef53475a 100644 --- a/web/src/types/event.ts +++ b/web/src/types/event.ts @@ -22,6 +22,7 @@ export interface Event { area: number; ratio: number; type: "object" | "audio" | "manual"; + recognized_license_plate?: string; path_data: [number[], number][]; }; } diff --git a/web/src/utils/lifecycleUtil.ts b/web/src/utils/lifecycleUtil.ts index edb46b969..e0016ccd8 100644 --- a/web/src/utils/lifecycleUtil.ts +++ b/web/src/utils/lifecycleUtil.ts @@ -1,6 +1,7 @@ import { ObjectLifecycleSequence } from "@/types/timeline"; import { t } from "i18next"; import { getTranslatedLabel } from "./i18n"; +import { capitalizeFirstLetter } from "./stringUtil"; export function getLifecycleItemDescription( lifecycleItem: ObjectLifecycleSequence, @@ -10,7 +11,7 @@ export function getLifecycleItemDescription( : lifecycleItem.data.sub_label || lifecycleItem.data.label; const label = lifecycleItem.data.sub_label - ? rawLabel + ? 
capitalizeFirstLetter(rawLabel) : getTranslatedLabel(rawLabel); switch (lifecycleItem.class_type) { diff --git a/web/src/views/recording/RecordingView.tsx b/web/src/views/recording/RecordingView.tsx index 3b001cb16..bde6c6d43 100644 --- a/web/src/views/recording/RecordingView.tsx +++ b/web/src/views/recording/RecordingView.tsx @@ -11,6 +11,7 @@ import DetailStream from "@/components/timeline/DetailStream"; import { Button } from "@/components/ui/button"; import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group"; import { useOverlayState } from "@/hooks/use-overlay-state"; +import { useResizeObserver } from "@/hooks/resize-observer"; import { ExportMode } from "@/types/filter"; import { FrigateConfig } from "@/types/frigateConfig"; import { Preview } from "@/types/preview"; @@ -31,12 +32,7 @@ import { useRef, useState, } from "react"; -import { - isDesktop, - isMobile, - isMobileOnly, - isTablet, -} from "react-device-detect"; +import { isDesktop, isMobile } from "react-device-detect"; import { IoMdArrowRoundBack } from "react-icons/io"; import { useNavigate } from "react-router-dom"; import { Toaster } from "@/components/ui/sonner"; @@ -55,7 +51,6 @@ import { RecordingSegment, RecordingStartingPoint, } from "@/types/record"; -import { useResizeObserver } from "@/hooks/resize-observer"; import { cn } from "@/lib/utils"; import { useFullscreen } from "@/hooks/use-fullscreen"; import { useTimezone } from "@/hooks/use-date-utils"; @@ -399,49 +394,47 @@ export function RecordingView({ } }, [mainCameraAspect]); - const [{ width: mainWidth, height: mainHeight }] = + // use a resize observer to determine whether to use w-full or h-full based on container aspect ratio + const [{ width: containerWidth, height: containerHeight }] = useResizeObserver(cameraLayoutRef); + const [{ width: previewRowWidth, height: previewRowHeight }] = + useResizeObserver(previewRowRef); - const mainCameraStyle = useMemo(() => { - if (isMobile || mainCameraAspect != "normal" || !config) { - return undefined; + const useHeightBased = useMemo(() => { + if (!containerWidth || !containerHeight) { + return false; } - const camera = config.cameras[mainCamera]; - - if (!camera) { - return undefined; + const cameraAspectRatio = getCameraAspect(mainCamera); + if (!cameraAspectRatio) { + return false; } - const aspect = getCameraAspect(mainCamera); + // Calculate available space for camera after accounting for preview row + // For tall cameras: preview row is side-by-side (takes width) + // For wide/normal cameras: preview row is stacked (takes height) + const availableWidth = + mainCameraAspect == "tall" && previewRowWidth + ? containerWidth - previewRowWidth + : containerWidth; + const availableHeight = + mainCameraAspect != "tall" && previewRowHeight + ? containerHeight - previewRowHeight + : containerHeight; - if (!aspect) { - return undefined; - } + const availableAspectRatio = availableWidth / availableHeight; - const availableHeight = mainHeight - 112; - - let percent; - if (mainWidth / availableHeight < aspect) { - percent = 100; - } else { - const availableWidth = aspect * availableHeight; - percent = - (mainWidth < availableWidth - ? 
mainWidth / availableWidth - : availableWidth / mainWidth) * 100; - } - - return { - width: `${Math.round(percent)}%`, - }; + // If available space is wider than camera aspect, constrain by height (h-full) + // If available space is taller than camera aspect, constrain by width (w-full) + return availableAspectRatio >= cameraAspectRatio; }, [ - config, - mainCameraAspect, - mainWidth, - mainHeight, - mainCamera, + containerWidth, + containerHeight, + previewRowWidth, + previewRowHeight, getCameraAspect, + mainCamera, + mainCameraAspect, ]); const previewRowOverflows = useMemo(() => { @@ -685,19 +678,17 @@ export function RecordingView({
{isDesktop && ( @@ -782,10 +761,10 @@ export function RecordingView({
{isMobile && ( From 1fb21a4dacf7e1e8c51ea777d8bc94fbab94fffe Mon Sep 17 00:00:00 2001 From: Nicolas Mowen Date: Sat, 25 Oct 2025 16:15:49 -0600 Subject: [PATCH 16/17] Classification improvements (#20665) * Don't classify objects that are ended * Use weighted scoring for object classification * Implement state verification --- .../real_time/custom_classification.py | 165 +++++++++++++++--- 1 file changed, 140 insertions(+), 25 deletions(-) diff --git a/frigate/data_processing/real_time/custom_classification.py b/frigate/data_processing/real_time/custom_classification.py index ac6387785..46929041f 100644 --- a/frigate/data_processing/real_time/custom_classification.py +++ b/frigate/data_processing/real_time/custom_classification.py @@ -53,6 +53,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi): self.tensor_output_details: dict[str, Any] | None = None self.labelmap: dict[int, str] = {} self.classifications_per_second = EventsPerSecond() + self.state_history: dict[str, dict[str, Any]] = {} if ( self.metrics @@ -94,6 +95,42 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi): if self.inference_speed: self.inference_speed.update(duration) + def verify_state_change(self, camera: str, detected_state: str) -> str | None: + """ + Verify state change requires 3 consecutive identical states before publishing. + Returns state to publish or None if verification not complete. + """ + if camera not in self.state_history: + self.state_history[camera] = { + "current_state": None, + "pending_state": None, + "consecutive_count": 0, + } + + verification = self.state_history[camera] + + if detected_state == verification["current_state"]: + verification["pending_state"] = None + verification["consecutive_count"] = 0 + return None + + if detected_state == verification["pending_state"]: + verification["consecutive_count"] += 1 + + if verification["consecutive_count"] >= 3: + verification["current_state"] = detected_state + verification["pending_state"] = None + verification["consecutive_count"] = 0 + return detected_state + else: + verification["pending_state"] = detected_state + verification["consecutive_count"] = 1 + logger.debug( + f"New state '{detected_state}' detected for {camera}, need {3 - verification['consecutive_count']} more consecutive detections" + ) + + return None + def process_frame(self, frame_data: dict[str, Any], frame: np.ndarray): if self.metrics and self.model_config.name in self.metrics.classification_cps: self.metrics.classification_cps[ @@ -131,6 +168,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi): self.last_run = now should_run = True + # Shortcut: always run if we have a pending state verification to complete + if ( + not should_run + and camera in self.state_history + and self.state_history[camera]["pending_state"] is not None + and now > self.last_run + 0.5 + ): + self.last_run = now + should_run = True + logger.debug( + f"Running verification check for pending state: {self.state_history[camera]['pending_state']} ({self.state_history[camera]['consecutive_count']}/3)" + ) + if not should_run: return @@ -188,10 +238,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi): score, ) - if score >= self.model_config.threshold: + if score < self.model_config.threshold: + logger.debug( + f"Score {score} below threshold {self.model_config.threshold}, skipping verification" + ) + return + + detected_state = self.labelmap[best_id] + verified_state = self.verify_state_change(camera, detected_state) + + if verified_state is not 
None: self.requestor.send_data( f"{camera}/classification/{self.model_config.name}", - self.labelmap[best_id], + verified_state, ) def handle_request(self, topic, request_data): @@ -230,7 +289,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): self.sub_label_publisher = sub_label_publisher self.tensor_input_details: dict[str, Any] | None = None self.tensor_output_details: dict[str, Any] | None = None - self.detected_objects: dict[str, float] = {} + self.classification_history: dict[str, list[tuple[str, float, float]]] = {} self.labelmap: dict[int, str] = {} self.classifications_per_second = EventsPerSecond() @@ -272,6 +331,56 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): if self.inference_speed: self.inference_speed.update(duration) + def get_weighted_score( + self, + object_id: str, + current_label: str, + current_score: float, + current_time: float, + ) -> tuple[str | None, float]: + """ + Determine weighted score based on history to prevent false positives/negatives. + Requires 60% of attempts to agree on a label before publishing. + Returns (weighted_label, weighted_score) or (None, 0.0) if no weighted score. + """ + if object_id not in self.classification_history: + self.classification_history[object_id] = [] + + self.classification_history[object_id].append( + (current_label, current_score, current_time) + ) + + history = self.classification_history[object_id] + + if len(history) < 3: + return None, 0.0 + + label_counts = {} + label_scores = {} + total_attempts = len(history) + + for label, score, timestamp in history: + if label not in label_counts: + label_counts[label] = 0 + label_scores[label] = [] + + label_counts[label] += 1 + label_scores[label].append(score) + + best_label = max(label_counts, key=label_counts.get) + best_count = label_counts[best_label] + + consensus_threshold = total_attempts * 0.6 + if best_count < consensus_threshold: + return None, 0.0 + + avg_score = sum(label_scores[best_label]) / len(label_scores[best_label]) + + if best_label == "none": + return None, 0.0 + + return best_label, avg_score + def process_frame(self, obj_data, frame): if self.metrics and self.model_config.name in self.metrics.classification_cps: self.metrics.classification_cps[ @@ -284,6 +393,9 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): if obj_data["label"] not in self.model_config.object_config.objects: return + if obj_data.get("end_time") is not None: + return + now = datetime.datetime.now().timestamp() x, y, x2, y2 = calculate_region( frame.shape, @@ -331,7 +443,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): probs = res / res.sum(axis=0) best_id = np.argmax(probs) score = round(probs[best_id], 2) - previous_score = self.detected_objects.get(obj_data["id"], 0.0) self.__update_metrics(datetime.datetime.now().timestamp() - now) write_classification_attempt( @@ -347,30 +458,34 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): logger.debug(f"Score {score} is less than threshold.") return - if score <= previous_score: - logger.debug(f"Score {score} is worse than previous score {previous_score}") - return - sub_label = self.labelmap[best_id] - self.detected_objects[obj_data["id"]] = score - if ( - self.model_config.object_config.classification_type - == ObjectClassificationType.sub_label - ): - if sub_label != "none": + consensus_label, consensus_score = self.get_weighted_score( + obj_data["id"], sub_label, score, now + ) + + if consensus_label is not None: + if ( + 
self.model_config.object_config.classification_type + == ObjectClassificationType.sub_label + ): self.sub_label_publisher.publish( - (obj_data["id"], sub_label, score), + (obj_data["id"], consensus_label, consensus_score), EventMetadataTypeEnum.sub_label, ) - elif ( - self.model_config.object_config.classification_type - == ObjectClassificationType.attribute - ): - self.sub_label_publisher.publish( - (obj_data["id"], self.model_config.name, sub_label, score), - EventMetadataTypeEnum.attribute.value, - ) + elif ( + self.model_config.object_config.classification_type + == ObjectClassificationType.attribute + ): + self.sub_label_publisher.publish( + ( + obj_data["id"], + self.model_config.name, + consensus_label, + consensus_score, + ), + EventMetadataTypeEnum.attribute.value, + ) def handle_request(self, topic, request_data): if topic == EmbeddingsRequestEnum.reload_classification_model.value: @@ -388,8 +503,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi): return None def expire_object(self, object_id, camera): - if object_id in self.detected_objects: - self.detected_objects.pop(object_id) + if object_id in self.classification_history: + self.classification_history.pop(object_id) @staticmethod From 2c480b9a895f0dea8bac254ce00aba5863671738 Mon Sep 17 00:00:00 2001 From: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com> Date: Sat, 25 Oct 2025 19:44:06 -0500 Subject: [PATCH 17/17] Fix History layout for mobile portrait cameras (#20669) --- web/src/views/recording/RecordingView.tsx | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/web/src/views/recording/RecordingView.tsx b/web/src/views/recording/RecordingView.tsx index bde6c6d43..44a3d0aab 100644 --- a/web/src/views/recording/RecordingView.tsx +++ b/web/src/views/recording/RecordingView.tsx @@ -710,13 +710,12 @@ export function RecordingView({ ? "h-full" : "w-full" : cn( - "flex-shrink-0 pt-2", + "flex-shrink-0 portrait:w-full landscape:h-full", mainCameraAspect == "wide" ? "aspect-wide" : mainCameraAspect == "tall" - ? "aspect-tall" + ? "aspect-tall portrait:h-full" : "aspect-video", - "portrait:w-full landscape:h-full", ), )} style={{