Compare commits

...

10 Commits

Author SHA1 Message Date
Josh Hawkins
5b85c67f29
Merge 4e64c94ee1 into 704ee9667c 2026-05-06 03:23:49 +00:00
Nicolas Mowen
4e64c94ee1 Add note about speed zones unit when enabled 2026-05-05 21:23:44 -06:00
Vinnie Esposito
704ee9667c
fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch (#23123)
* fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch

Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`.
YuNet is trained on BGR; feeding it RGB silently degrades detection
confidence by more than 10× on typical person crops, causing face_recognition to
emit no `sub_label` and produce no `train/` entries. There is no log signal
because the detector simply returns 0 faces; from outside the box it looks
like nobody is walking past any camera.

The same file already does the YUV→BGR conversion correctly in the
else-branch (was line 271, now line 285); only the manual-detection
branch was missed.

## Reproduction

Verified in-pod against the running Frigate's models on identical
person crops (snapshot pulled from a real person event):

    BGR (correct):  cv2.FaceDetectorYN → confidence 0.744 ✓
    RGB (current):  cv2.FaceDetectorYN → confidence 0.047 ✗

The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything
under 0.5 at the detector layer, so the RGB-degraded crops never reach
the user-configurable `detection_threshold`. Result: silent outage.
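The failure mode can be illustrated with a toy numpy sketch (this is not Frigate's code and not YuNet; the "detector" below is a deliberately simple stand-in that, like any BGR-trained model, expects red at channel index 2):

```python
import numpy as np

# Toy stand-in for a BGR-trained detector: it scores a crop by how
# red-dominant channel index 2 is (red sits at index 2 in BGR order).
def toy_bgr_score(img: np.ndarray) -> float:
    return float(img[..., 2].mean() / (img.mean() + 1e-9))

# A reddish crop in BGR order: B=80, G=120, R=200.
bgr_crop = np.zeros((32, 32, 3), dtype=np.float32)
bgr_crop[..., 0] = 80.0   # blue
bgr_crop[..., 1] = 120.0  # green
bgr_crop[..., 2] = 200.0  # red

# The buggy path hands the detector the same pixels in RGB order,
# which is just the channel axis reversed.
rgb_crop = bgr_crop[..., ::-1]

score_bgr = toy_bgr_score(bgr_crop)  # high: red where the model expects it
score_rgb = toy_bgr_score(rgb_crop)  # low: blue sits where red should be
assert score_bgr > score_rgb
```

The drop is silent: nothing raises, the detector simply scores lower, mirroring how the real crops fell under `score_threshold=0.5` and never surfaced anywhere.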

## Fix

Three changes in `frigate/data_processing/real_time/face.py`:

1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)`
   block, since `face_frame` is already BGR after the upstream conversion change

Net diff: +6 / -7. Pure Python, no new dependencies.
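Change 3 follows from change 1: `cv2.COLOR_RGB2BGR` amounts to reversing the channel axis, so applying the old swap to a frame that is already BGR would hand the detector RGB again. A numpy sketch of that identity (illustrative only; real frames go through `cv2.cvtColor`):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)  # pretend BGR

# cv2.COLOR_RGB2BGR / cv2.COLOR_BGR2RGB both reduce to reversing the
# last axis, so the swap is an involution: applying it twice is a no-op.
def swap_channels(img: np.ndarray) -> np.ndarray:
    return img[..., ::-1]

# Keeping the old RGB2BGR block "for safety" on an already-BGR frame
# would therefore reintroduce the exact bug being fixed.
assert np.array_equal(swap_channels(swap_channels(frame)), frame)
```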

## How a deployment confirms the fix

After this change, walking past a camera produces:
- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>

* Cleanup comment

---------

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-05-05 19:10:24 -05:00
Nicolas Mowen
76a1230885
ROCm Optimizations (#23118)
* Update to ROCm 7.2.3

* Add inference time for 9060XT

* Update times

* Update hardware info for latest ROCm

* Add env vars to save kernels and miopen database

* re-enable face recognition for ROCm

* Update

* Save LLVM cache
2026-05-05 16:33:43 -05:00
Josh Hawkins
52a3301726
Miscellaneous fixes (#23111)
* return 404 from /api/login if auth is disabled

* locale sort object label switches

* enable search on object switches field

* add profiles docs link
2026-05-05 09:03:49 -06:00
Josh Hawkins
f448b259a2
Settings UI improvements (#23109)
* use badge with popover to show which cameras override each global config section

* don't use shorthand

* use label i18n
2026-05-04 09:50:00 -06:00
Nicolas Mowen
ef9d7e07b7
Rewrite intel stats (#23108)
* Rewrite intel GPU stats to use file descriptors instead of intel_gpu_top, leading to significantly better API for interaction and more accurate results

* Update tests

* Update docs

* Adjust approach

* Update strings
2026-05-04 10:36:32 -05:00
Josh Hawkins
814c497bef
Use Job infrastructure for Debug Replay (#23099)
* use ReplayState enum

* extract shared ffmpeg progress helper

* make start call non-blocking with worker thread

* expose replay state on status endpoint and return 202 from start

* cancel in-flight ffmpeg when stop is called during preparation

* add replay i18n strings for preparing and error states

* show status in replay UI

* navigate immediately on 202 from debug replay menus and dialog

* remove unused

* simplify to use Job infrastructure

* tests

* cleanup and tweaks

* fetch schema

* update api spec

* formatting

* fix e2e test

* mypy

* clean up

* formatting

* fix

* fix test

* don't try to show camera image until status reports ready

* simplify loading logic

* fix race in latest_frame on debug replay shutdown

* remove toast when successfully stopping

it gets hidden almost immediately
2026-05-03 14:54:20 -06:00
Josh Hawkins
5bc15d4aa9
chapter and thumbnail fixes (#23100)
- Skip null end_time when building export chapter metadata
- Use plain seconds for export thumbnail ffmpeg seek
2026-05-03 13:25:53 -06:00
Josh Hawkins
7ad233ef15
fix malformed svg from breaking docs build (#23102) 2026-05-03 13:21:22 -06:00
42 changed files with 2799 additions and 739 deletions


@@ -13,7 +13,7 @@ ARG ROCM
 RUN apt update -qq && \
     apt install -y wget gpg && \
-    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.2/ubuntu/jammy/amdgpu-install_7.2.70200-1_all.deb && \
+    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.2.3/ubuntu/jammy/amdgpu-install_7.2.3.70203-1_all.deb && \
     apt install -y ./rocm.deb && \
     apt update && \
     apt install -qq -y rocm
@@ -78,6 +78,10 @@ ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
 ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
 ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
 ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
+ENV MIOPEN_CUSTOM_CACHE_DIR=/config/model_cache/migraphx
+ENV MIOPEN_USER_DB_PATH=/config/model_cache/migraphx
+ENV AMD_COMGR_CACHE=1
+ENV AMD_COMGR_CACHE_DIR=/config/model_cache/migraphx
 COPY --from=rocm-dist / /


@@ -1 +1 @@
-onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.2.0/onnxruntime_migraphx-1.23.1-cp311-cp311-linux_x86_64.whl
+onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.2.3-1/onnxruntime_migraphx-1.24.4-cp311-cp311-linux_x86_64.whl


@@ -1,5 +1,5 @@
 variable "ROCM" {
-  default = "7.2.0"
+  default = "7.2.3"
 }
 variable "HSA_OVERRIDE_GFX_VERSION" {
   default = ""


@@ -136,90 +136,32 @@ ffmpeg:
 </TabItem>
 </ConfigTabs>
-### Configuring Intel GPU Stats in Docker
-Additional configuration is needed for the Docker container to be able to access the `intel_gpu_top` command for GPU stats. There are two options:
-1. Run the container as privileged.
-2. Add the `CAP_PERFMON` capability (note: you might need to set the `perf_event_paranoid` low enough to allow access to the performance event system.)
-#### Run as privileged
-This method works, but it gives more permissions to the container than are actually needed.
-##### Docker Compose - Privileged
-```yaml
-services:
-  frigate:
-    ...
-    image: ghcr.io/blakeblackshear/frigate:stable
-    # highlight-next-line
-    privileged: true
-```
-##### Docker Run CLI - Privileged
-```bash {4}
-docker run -d \
-  --name frigate \
-  ...
-  --privileged \
-  ghcr.io/blakeblackshear/frigate:stable
-```
-#### CAP_PERFMON
-Only recent versions of Docker support the `CAP_PERFMON` capability. You can test to see if yours supports it by running: `docker run --cap-add=CAP_PERFMON hello-world`
-##### Docker Compose - CAP_PERFMON
-```yaml {5,6}
-services:
-  frigate:
-    ...
-    image: ghcr.io/blakeblackshear/frigate:stable
-    cap_add:
-      - CAP_PERFMON
-```
-##### Docker Run CLI - CAP_PERFMON
-```bash {4}
-docker run -d \
-  --name frigate \
-  ...
-  --cap-add=CAP_PERFMON \
-  ghcr.io/blakeblackshear/frigate:stable
-```
-#### perf_event_paranoid
-_Note: This setting must be changed for the entire system._
-For more information on the various values across different distributions, see https://askubuntu.com/questions/1400874/what-does-perf-paranoia-level-four-do.
-Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
-#### Stats for SR-IOV or other devices
-When using virtualized GPUs via SR-IOV, you need to specify the device path to use to gather stats from `intel_gpu_top`. This example may work for some systems using SR-IOV:
+### Configuring Intel GPU Stats
+Frigate reads Intel GPU utilization directly from the kernel's per-client DRM usage counters exposed at `/proc/<pid>/fdinfo/<fd>`. This requires:
+- Linux kernel **5.19 or newer** for the `i915` driver, or any release of the `xe` driver.
+- Frigate running with permission to read other processes' fdinfo. Running as root inside the container (the default) satisfies this; non-root setups may need `CAP_SYS_PTRACE`.
+No `intel_gpu_top` binary, `CAP_PERFMON`, privileged mode, or `perf_event_paranoid` tuning is required.
+#### Stats for SR-IOV or specific devices
+If the host has more than one Intel GPU (e.g. an iGPU plus a discrete GPU, or SR-IOV virtual functions), pin stats collection to a specific device by setting `intel_gpu_device` to either its PCI bus address or a DRM card/render-node path:
 ```yaml
 telemetry:
   stats:
-    intel_gpu_device: "sriov"
+    intel_gpu_device: "0000:00:02.0"
 ```
-For other virtualized GPUs, try specifying the direct path to the device instead:
 ```yaml
 telemetry:
   stats:
-    intel_gpu_device: "drm:/dev/dri/card0"
+    intel_gpu_device: "/dev/dri/card1"
 ```
-If you are passing in a device path, make sure you've passed the device through to the container.
+When passing a device path, make sure the device is also passed through to the container.
 ## AMD-based CPUs
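The per-client counters these docs describe follow the kernel's drm-usage-stats convention: each `drm-engine-<name>:` key in an fdinfo blob reports cumulative busy time in nanoseconds, and utilization is the delta between two samples divided by wall time. A minimal parser sketch (field names per the kernel format; the helper functions are hypothetical, not Frigate's actual implementation):

```python
import re

def parse_drm_engines(fdinfo_text: str) -> dict[str, int]:
    """Extract cumulative busy time (ns) per engine from a DRM fdinfo blob."""
    engines = {}
    for line in fdinfo_text.splitlines():
        m = re.match(r"drm-engine-(\w+):\s+(\d+) ns", line)
        if m:
            engines[m.group(1)] = int(m.group(2))
    return engines

def utilization(prev: dict, curr: dict, elapsed_ns: int) -> float:
    """Percent busy averaged across engines between two samples."""
    busy = sum(curr.get(k, 0) - prev.get(k, 0) for k in curr)
    return 100.0 * busy / (elapsed_ns * max(len(curr), 1))

# Two hypothetical fdinfo samples taken one second apart:
sample_a = "drm-driver:\ti915\ndrm-engine-render:\t1000000000 ns\ndrm-engine-video:\t0 ns"
sample_b = "drm-driver:\ti915\ndrm-engine-render:\t1500000000 ns\ndrm-engine-video:\t0 ns"

prev, curr = parse_drm_engines(sample_a), parse_drm_engines(sample_b)
util = utilization(prev, curr, elapsed_ns=1_000_000_000)
# render was 50% busy and video 0% over the interval -> 25% average
assert abs(util - 25.0) < 1e-6
```

Because these counters belong to the process that opened the DRM file descriptor, reading another process's fdinfo is what drives the permission requirement noted above.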


@@ -1022,12 +1022,12 @@ detectors:
 ### ONNX Supported Models
 | Model                                | Nvidia GPU | AMD GPU | Notes                                               |
 | ------------------------------------ | ---------- | ------- | --------------------------------------------------- |
 | [YOLOv9](#yolo-v3-v4-v7-v9-2)        | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
-| [RF-DETR](#rf-detr)                  | ✅         | ❌      | Supports CUDA Graphs for optimal Nvidia performance |
+| [RF-DETR](#rf-detr)                  | ✅         | ⚠️      | Supports CUDA Graphs for optimal Nvidia performance |
 | [YOLO-NAS](#yolo-nas-1)              | ⚠️         | ⚠️      | Not supported by CUDA Graphs                        |
 | [YOLOX](#yolox-1)                    | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
 | [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️         | ❌      | Not supported by CUDA Graphs                        |
 There is no default model provided, the following formats are supported:


@@ -223,10 +223,11 @@ Apple Silicon can not run within a container, so a ZMQ proxy is utilized to comm
 With the [ROCm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
-| Name      | YOLOv9 Inference Time       | YOLO-NAS Inference Time   |
-| --------- | --------------------------- | ------------------------- |
-| AMD 780M  | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
-| AMD 8700G |                             | 320: ~ 20 ms 640: ~ 40 ms |
+| Name           | YOLOv9 Inference Time       | YOLO-NAS Inference Time   | RF-DETR Inference Time |
+| -------------- | --------------------------- | ------------------------- | ---------------------- |
+| AMD 780M       | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |                        |
+| AMD 8700G      |                             | 320: ~ 20 ms 640: ~ 40 ms |                        |
+| AMD 9060XT 16G | t-320: ~ 4 ms s-320: 5 ms   | 320: ~ 6 ms               | Nano-320: ~ 90 ms      |
 ## Community Supported Detectors


@@ -12,7 +12,7 @@ devices:
   - id: "intel"
     name: "Intel Device"
     description: "Intel GPU / NPU"
-    icon: '<svg xmlns="http://www.w3.org/2000/svg"width="41" height="25" viewBox="0 0 50 12" fill="none" "viewBox="0 0 51 20"fill="none"><g clip-path="url(#clip0_5432_16259)">[... identical path data elided ...]</g><defs><clipPath id="clip0_5432_16259"><rect width="51"height="20"fill="white"></rect></clipPath></defs></svg>'
+    icon: '<svg xmlns="http://www.w3.org/2000/svg" width="41" height="25" viewBox="0 0 51 20" fill="none"><g clip-path="url(#clip0_5432_16259)">[... identical path data elided ...]</g><defs><clipPath id="clip0_5432_16259"><rect width="51"height="20"fill="white"></rect></clipPath></defs></svg>'
     imageTag: "stable"
     autoHardware:
       - "gpu"


@@ -5997,7 +5997,10 @@ paths:
       tags:
         - App
       summary: Start debug replay
-      description: Start a debug replay session from camera recordings.
+      description:
+        Start a debug replay session from camera recordings. Returns
+        immediately while clip generation runs as a background job; subscribe
+        to the 'debug_replay' job_state WS topic to track progress.
       operationId: start_debug_replay_debug_replay_start_post
       requestBody:
         required: true
@@ -6006,12 +6009,16 @@ paths:
             schema:
               $ref: "#/components/schemas/DebugReplayStartBody"
       responses:
-        "200":
+        "202":
           description: Successful Response
           content:
             application/json:
               schema:
                 $ref: "#/components/schemas/DebugReplayStartResponse"
+        "400":
+          description: Invalid camera, time range, or no recordings
+        "409":
+          description: A replay session is already active
         "422":
           description: Validation Error
           content:
@@ -6272,10 +6279,14 @@ components:
         replay_camera:
           type: string
           title: Replay Camera
+        job_id:
+          type: string
+          title: Job Id
       type: object
       required:
         - success
         - replay_camera
+        - job_id
       title: DebugReplayStartResponse
       description: Response for starting a debug replay session.
     DebugReplayStatusResponse:
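A client consuming the new contract should treat 202 as "accepted, watch the job" rather than "replay is ready". A sketch of that branching (status codes and the `job_id` field come from the spec; the function and action strings are hypothetical, and the transport layer is stubbed out entirely):

```python
def handle_start_response(status_code: int, body: dict) -> str:
    """Map a /debug_replay/start response to a client-side action string."""
    if status_code == 202:
        # Accepted: clip generation runs as a background job. The client
        # should track the returned job_id (e.g. via the 'debug_replay'
        # job_state WS topic) instead of assuming the replay is ready.
        return f"track:{body['job_id']}"
    if status_code == 409:
        return "already_active"   # another replay session is running
    if status_code == 400:
        return "invalid_request"  # bad camera, time range, or no recordings
    return "error"

assert handle_start_response(202, {"success": True, "job_id": "j1"}) == "track:j1"
assert handle_start_response(409, {}) == "already_active"
```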


@@ -812,6 +812,11 @@ limiter = Limiter(key_func=get_remote_addr)
 )
 @limiter.limit(limit_value=rateLimiter.get_limit)
 def login(request: Request, body: AppPostLoginBody):
+    if not request.app.frigate_config.auth.enabled:
+        return JSONResponse(
+            content={"message": "Authentication is disabled"}, status_code=404
+        )
     JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
     JWT_COOKIE_SECURE = request.app.frigate_config.auth.cookie_secure
     JWT_SESSION_LENGTH = request.app.frigate_config.auth.session_length


@@ -37,6 +37,7 @@ from frigate.api.defs.response.chat_response import (
 from frigate.api.defs.tags import Tags
 from frigate.api.event import events
 from frigate.config import FrigateConfig
+from frigate.config.ui import UnitSystemEnum
 from frigate.genai.utils import build_assistant_message_for_conversation
 from frigate.jobs.vlm_watch import (
     get_vlm_watch_job,
@@ -1301,6 +1302,7 @@ async def chat_completion(
     cameras_info = []
     config = request.app.frigate_config
+    has_speed_zone = False
     for camera_id in allowed_cameras:
         if camera_id not in config.cameras:
             continue
@@ -1311,6 +1313,10 @@ async def chat_completion(
             else camera_id.replace("_", " ").title()
         )
         zone_names = list(camera_config.zones.keys())
+        if not has_speed_zone:
+            has_speed_zone = any(
+                zone.distances for zone in camera_config.zones.values()
+            )
         if zone_names:
             cameras_info.append(
                 f" - {friendly_name} (ID: {camera_id}, zones: {', '.join(zone_names)})"
@@ -1326,6 +1332,13 @@ async def chat_completion(
         + "\n\nWhen users refer to cameras by their friendly name (e.g., 'Back Deck Camera'), use the corresponding camera ID (e.g., 'back_deck_cam') in tool calls."
     )
+    speed_units_section = ""
+    if has_speed_zone:
+        speed_unit = (
+            "mph" if config.ui.unit_system == UnitSystemEnum.imperial else "km/h"
+        )
+        speed_units_section = f"\n\nReport object speeds to the user in {speed_unit}."
     system_prompt = f"""You are a helpful assistant for Frigate, a security camera NVR system. You help users answer questions about their cameras, detected objects, and events.
 Current server local date and time: {current_date_str} at {current_time_str}
@@ -1337,7 +1350,7 @@ When users ask about "today", "yesterday", "this week", etc., use the current da
 When searching for objects or events, use ISO 8601 format for dates (e.g., {current_date_str}T00:00:00Z for the start of today).
 Always be accurate with time calculations based on the current date provided.
-When a user refers to a specific object they have seen or describe with identifying details ("that green car", "the person in the red jacket", "a package left today"), prefer the find_similar_objects tool over search_objects. Use search_objects first only to locate the anchor event, then pass its id to find_similar_objects. For generic queries like "show me all cars today", keep using search_objects. If a user message begins with [attached_event:<id>], treat that event id as the anchor for any similarity or "tell me more" request in the same message and call find_similar_objects with that id.{cameras_section}"""
+When a user refers to a specific object they have seen or describe with identifying details ("that green car", "the person in the red jacket", "a package left today"), prefer the find_similar_objects tool over search_objects. Use search_objects first only to locate the anchor event, then pass its id to find_similar_objects. For generic queries like "show me all cars today", keep using search_objects. If a user message begins with [attached_event:<id>], treat that event id as the anchor for any similarity or "tell me more" request in the same message and call find_similar_objects with that id.{cameras_section}{speed_units_section}"""
     conversation.append(
         {


@@ -10,6 +10,7 @@ from pydantic import BaseModel, Field
 from frigate.api.auth import require_role
 from frigate.api.defs.tags import Tags
+from frigate.jobs.debug_replay import start_debug_replay_job

 logger = logging.getLogger(__name__)
@@ -29,10 +30,17 @@ class DebugReplayStartResponse(BaseModel):
     success: bool
     replay_camera: str
+    job_id: str

 class DebugReplayStatusResponse(BaseModel):
-    """Response for debug replay status."""
+    """Response for debug replay status.
+
+    Returns only session-presence fields. Startup progress and error
+    details flow through the job_state WebSocket topic via the
+    debug_replay job (see frigate.jobs.debug_replay); the
+    Replay page subscribes there with useJobStatus("debug_replay").
+    """

     active: bool
     replay_camera: str | None = None
@@ -51,15 +59,32 @@ class DebugReplayStopResponse(BaseModel):
 @router.post(
     "/debug_replay/start",
     response_model=DebugReplayStartResponse,
+    status_code=202,
+    responses={
+        400: {"description": "Invalid camera, time range, or no recordings"},
+        409: {"description": "A replay session is already active"},
+    },
     dependencies=[Depends(require_role(["admin"]))],
     summary="Start debug replay",
-    description="Start a debug replay session from camera recordings.",
+    description="Start a debug replay session from camera recordings. Returns "
+    "immediately while clip generation runs as a background job; subscribe "
+    "to the 'debug_replay' job_state WS topic to track progress.",
 )
 async def start_debug_replay(request: Request, body: DebugReplayStartBody):
-    """Start a debug replay session."""
+    """Start a debug replay session asynchronously."""
     replay_manager = request.app.replay_manager

-    if replay_manager.active:
+    try:
+        job_id = await asyncio.to_thread(
+            start_debug_replay_job,
+            source_camera=body.camera,
+            start_ts=body.start_time,
+            end_ts=body.end_time,
+            frigate_config=request.app.frigate_config,
+            config_publisher=request.app.config_publisher,
+            replay_manager=replay_manager,
+        )
+    except RuntimeError:
         return JSONResponse(
             content={
                 "success": False,
@@ -67,38 +92,23 @@ async def start_debug_replay(request: Request, body: DebugReplayStartBody):
             },
             status_code=409,
         )
-    try:
-        replay_camera = await asyncio.to_thread(
-            replay_manager.start,
-            source_camera=body.camera,
-            start_ts=body.start_time,
-            end_ts=body.end_time,
-            frigate_config=request.app.frigate_config,
-            config_publisher=request.app.config_publisher,
-        )
     except ValueError:
-        logger.exception("Invalid parameters for debug replay start request")
+        logger.exception("Rejected debug replay start request")
         return JSONResponse(
             content={
                 "success": False,
-                "message": "Invalid debug replay request parameters",
+                "message": "Invalid debug replay parameters",
             },
             status_code=400,
         )
-    except RuntimeError:
-        logger.exception("Error while starting debug replay session")
-        return JSONResponse(
-            content={
-                "success": False,
-                "message": "An internal error occurred while starting debug replay",
-            },
-            status_code=500,
-        )

-    return DebugReplayStartResponse(
-        success=True,
-        replay_camera=replay_camera,
+    return JSONResponse(
+        content={
+            "success": True,
+            "replay_camera": replay_manager.replay_camera_name,
+            "job_id": job_id,
+        },
+        status_code=202,
     )
@@ -118,12 +128,16 @@ def get_debug_replay_status(request: Request):
     if replay_manager.active and replay_camera:
         frame_processor = request.app.detected_frames_processor
-        frame = frame_processor.get_current_frame(replay_camera)
+        frame = (
+            frame_processor.get_current_frame(replay_camera)
+            if frame_processor is not None
+            else None
+        )
         if frame is not None:
             frame_time = frame_processor.get_current_frame_time(replay_camera)
             camera_config = request.app.frigate_config.cameras.get(replay_camera)
-            retry_interval = 10
+            retry_interval = 10.0
             if camera_config is not None:
                 retry_interval = float(camera_config.ffmpeg.retry_interval or 10)
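The rewritten handler above folds the start-failure modes into HTTP statuses: RuntimeError (session already active) becomes 409, ValueError (bad parameters) becomes 400, and success returns 202 with the job id. A minimal FastAPI-free sketch of that mapping, where `start_job` is a hypothetical callable standing in for `start_debug_replay_job`:

```python
def start_with_http_mapping(start_job) -> tuple[int, dict]:
    """Map job-start exceptions onto (status_code, body) pairs.

    RuntimeError means a session already exists (409); ValueError means
    the request parameters were rejected (400); success returns 202.
    """
    try:
        job_id = start_job()
    except RuntimeError:
        return 409, {"success": False, "message": "A replay session is already active"}
    except ValueError:
        return 400, {"success": False, "message": "Invalid debug replay parameters"}
    return 202, {"success": True, "job_id": job_id}


# success path returns 202 plus the job id for the caller to poll
status, body = start_with_http_mapping(lambda: "job-1")
assert status == 202 and body["job_id"] == "job-1"
```

Ordering the `except RuntimeError` clause before `except ValueError` mirrors the diff; since the two exception types are unrelated, the order does not change behavior here.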

View File

@@ -174,12 +174,10 @@ async def latest_frame(
     }
     quality_params = get_image_quality_params(extension.value, params.quality)

-    if camera_name in request.app.frigate_config.cameras:
+    camera_config = request.app.frigate_config.cameras.get(camera_name)
+    if camera_config is not None:
         frame = frame_processor.get_current_frame(camera_name, draw_options)
-        retry_interval = float(
-            request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
-            or 10
-        )
+        retry_interval = float(camera_config.ffmpeg.retry_interval or 10)
         is_offline = False
         if frame is None or datetime.now().timestamp() > (

View File

@@ -25,8 +25,8 @@ class StatsConfig(FrigateBaseModel):
     )
     intel_gpu_device: Optional[str] = Field(
         default=None,
-        title="SR-IOV device",
-        description="Device identifier used when treating Intel GPUs as SR-IOV to fix GPU stats.",
+        title="Intel GPU device",
+        description="PCI bus address or DRM device path (e.g. /dev/dri/card1) used to pin Intel GPU stats to a specific device when multiple are present.",
     )

View File

@@ -229,9 +229,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
             logger.debug(f"No person box available for {id}")
             return

-        rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
+        # YuNet (cv2.FaceDetectorYN) is trained on BGR
+        bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
         left, top, right, bottom = person_box
-        person = rgb[top:bottom, left:right]
+        person = bgr[top:bottom, left:right]
         face_box = self.__detect_face(person, self.face_config.detection_threshold)

         if not face_box:
@@ -250,11 +251,6 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
             )
             return

-        try:
-            face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
-        except Exception as e:
-            logger.debug(f"Failed to convert face frame color for {id}: {e}")
-            return
         else:
             # don't run for object without attributes
             if not obj_data.get("current_attributes"):
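The fix above is purely a channel-order change: `COLOR_YUV2RGB_I420` and `COLOR_YUV2BGR_I420` yield the same pixel values with the first and third channels swapped, so a BGR-trained detector fed RGB sees red and blue exchanged. A dependency-free sketch of that relationship, using plain nested lists as a stand-in for image arrays:

```python
def swap_channels(image):
    """Reverse the channel order of every pixel (RGB <-> BGR)."""
    return [[px[::-1] for px in row] for row in image]


# a 1x2 "image" whose pixels are (R, G, B) tuples
rgb = [[(255, 0, 0), (0, 0, 255)]]  # pure red, pure blue
bgr = swap_channels(rgb)

# the swap is its own inverse, and red/blue trade places
assert swap_channels(bgr) == rgb
assert bgr[0][0] == (0, 0, 255)  # a red pixel reads as "blue" to a BGR consumer
```

This is why the commit's reproduction shows confidence collapsing rather than erroring: the detector receives structurally valid input with skin tones shifted toward blue, so it simply scores faces below the threshold.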

View File

@@ -1,9 +1,13 @@
-"""Debug replay camera management for replaying recordings with detection overlays."""
+"""Debug replay camera management for replaying recordings with detection overlays.
+
+The startup work (ffmpeg concat + camera config publish) lives in
+frigate.jobs.debug_replay. This module owns only session presence
+(active), session metadata, and post-session cleanup.
+"""

 import logging
 import os
 import shutil
-import subprocess as sp
 import threading

 from ruamel.yaml import YAML
@@ -21,7 +25,7 @@ from frigate.const import (
     REPLAY_DIR,
     THUMB_DIR,
 )
-from frigate.models import Recordings
+from frigate.jobs.debug_replay import cancel_debug_replay_job, wait_for_runner
 from frigate.util.camera_cleanup import cleanup_camera_db, cleanup_camera_files
 from frigate.util.config import find_config_file
@@ -29,7 +33,14 @@ logger = logging.getLogger(__name__)

 class DebugReplayManager:
-    """Manages a single debug replay session."""
+    """Owns the lifecycle pointers for a single debug replay session.
+
+    A session exists from the moment mark_starting is called (synchronously,
+    inside the API handler) until clear_session runs (on success cleanup,
+    failure, or stop). The active property is the source of truth that the
+    status bar consumes broader than the startup job, which only covers the
+    preparing_clip / starting_camera window.
+    """

     def __init__(self) -> None:
         self._lock = threading.Lock()
@@ -41,144 +52,66 @@
     @property
     def active(self) -> bool:
-        """Whether a replay session is currently active."""
+        """True from mark_starting until clear_session."""
         return self.replay_camera_name is not None

-    def start(
+    def mark_starting(
         self,
         source_camera: str,
+        replay_camera_name: str,
         start_ts: float,
         end_ts: float,
-        frigate_config: FrigateConfig,
-        config_publisher: CameraConfigUpdatePublisher,
-    ) -> str:
-        """Start a debug replay session.
+    ) -> None:
+        """Synchronously claim the session before the job runner starts.

-        Args:
-            source_camera: Name of the source camera to replay
-            start_ts: Start timestamp
-            end_ts: End timestamp
-            frigate_config: Current Frigate configuration
-            config_publisher: Publisher for camera config updates
-
-        Returns:
-            The replay camera name
-
-        Raises:
-            ValueError: If a session is already active or parameters are invalid
-            RuntimeError: If clip generation fails
+        Called inside the API handler so the status bar sees active=True
+        immediately, before the worker thread does any ffmpeg work.
         """
         with self._lock:
-            return self._start_locked(
-                source_camera, start_ts, end_ts, frigate_config, config_publisher
-            )
+            self.replay_camera_name = replay_camera_name
+            self.source_camera = source_camera
+            self.start_ts = start_ts
+            self.end_ts = end_ts
+            self.clip_path = None

-    def _start_locked(
+    def mark_session_ready(self, clip_path: str) -> None:
+        """Record the on-disk clip path after the camera has been published."""
+        with self._lock:
+            self.clip_path = clip_path
+
+    def clear_session(self) -> None:
+        """Reset session pointers without publishing camera removal.
+
+        Used by the job runner on failure paths. stop() does the camera
+        teardown plus this clear in one step.
+        """
+        with self._lock:
+            self._clear_locked()
+
+    def _clear_locked(self) -> None:
+        self.replay_camera_name = None
+        self.source_camera = None
+        self.clip_path = None
+        self.start_ts = None
+        self.end_ts = None
+
+    def publish_camera(
         self,
         source_camera: str,
-        start_ts: float,
-        end_ts: float,
+        replay_name: str,
+        clip_path: str,
         frigate_config: FrigateConfig,
         config_publisher: CameraConfigUpdatePublisher,
-    ) -> str:
-        if self.active:
-            raise ValueError("A replay session is already active")
+    ) -> None:
+        """Build the in-memory replay camera config and publish the add event.

-        if source_camera not in frigate_config.cameras:
-            raise ValueError(f"Camera '{source_camera}' not found")
-
-        if end_ts <= start_ts:
-            raise ValueError("End time must be after start time")
-
-        # Query recordings for the source camera in the time range
-        recordings = (
-            Recordings.select(
-                Recordings.path,
-                Recordings.start_time,
-                Recordings.end_time,
-            )
-            .where(
-                Recordings.start_time.between(start_ts, end_ts)
-                | Recordings.end_time.between(start_ts, end_ts)
-                | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
-            )
-            .where(Recordings.camera == source_camera)
-            .order_by(Recordings.start_time.asc())
-        )
-
-        if not recordings.count():
-            raise ValueError(
-                f"No recordings found for camera '{source_camera}' in the specified time range"
-            )
-
-        # Create replay directory
-        os.makedirs(REPLAY_DIR, exist_ok=True)
-
-        # Generate replay camera name
-        replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
-
-        # Build concat file for ffmpeg
-        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
-        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")
-
-        with open(concat_file, "w") as f:
-            for recording in recordings:
-                f.write(f"file '{recording.path}'\n")
-
-        # Concatenate recordings into a single clip with -c copy (fast)
-        ffmpeg_cmd = [
-            frigate_config.ffmpeg.ffmpeg_path,
-            "-hide_banner",
-            "-y",
-            "-f",
-            "concat",
-            "-safe",
-            "0",
-            "-i",
-            concat_file,
-            "-c",
-            "copy",
-            "-movflags",
-            "+faststart",
-            clip_path,
-        ]
-
-        logger.info(
-            "Generating replay clip for %s (%.1f - %.1f)",
-            source_camera,
-            start_ts,
-            end_ts,
-        )
-
-        try:
-            result = sp.run(
-                ffmpeg_cmd,
-                capture_output=True,
-                text=True,
-                timeout=120,
-            )
-
-            if result.returncode != 0:
-                logger.error("FFmpeg error: %s", result.stderr)
-                raise RuntimeError(
-                    f"Failed to generate replay clip: {result.stderr[-500:]}"
-                )
-        except sp.TimeoutExpired:
-            raise RuntimeError("Clip generation timed out")
-        finally:
-            # Clean up concat file
-            if os.path.exists(concat_file):
-                os.remove(concat_file)
-
-        if not os.path.exists(clip_path):
-            raise RuntimeError("Clip file was not created")
-
-        # Build camera config dict for the replay camera
+        Called by the job runner during the starting_camera phase.
+        """
         source_config = frigate_config.cameras[source_camera]
         camera_dict = self._build_camera_config_dict(
             source_config, replay_name, clip_path
         )

-        # Build an in-memory config with the replay camera added
         config_file = find_config_file()
         yaml_parser = YAML()
         with open(config_file, "r") as f:
@@ -191,75 +124,48 @@
         try:
             new_config = FrigateConfig.parse_object(config_data)
         except Exception as e:
-            raise RuntimeError(f"Failed to validate replay camera config: {e}")
+            raise RuntimeError(f"Failed to validate replay camera config: {e}") from e

-        # Update the running config
         frigate_config.cameras[replay_name] = new_config.cameras[replay_name]

-        # Publish the add event
         config_publisher.publish_update(
             CameraConfigUpdateTopic(CameraConfigUpdateEnum.add, replay_name),
             new_config.cameras[replay_name],
         )

-        # Store session state
-        self.replay_camera_name = replay_name
-        self.source_camera = source_camera
-        self.clip_path = clip_path
-        self.start_ts = start_ts
-        self.end_ts = end_ts
-
-        logger.info("Debug replay started: %s -> %s", source_camera, replay_name)
-        return replay_name
-
     def stop(
         self,
         frigate_config: FrigateConfig,
         config_publisher: CameraConfigUpdatePublisher,
     ) -> None:
-        """Stop the active replay session and clean up all artifacts.
+        """Cancel any in-flight startup job and tear down the active session.

-        Args:
-            frigate_config: Current Frigate configuration
-            config_publisher: Publisher for camera config updates
+        Safe to call when no session is active (no-op with a warning).
         """
+        cancel_debug_replay_job()
+        wait_for_runner(timeout=2.0)
+
         with self._lock:
-            self._stop_locked(frigate_config, config_publisher)
+            if not self.active:
+                logger.warning("No active replay session to stop")
+                return

-    def _stop_locked(
-        self,
-        frigate_config: FrigateConfig,
-        config_publisher: CameraConfigUpdatePublisher,
-    ) -> None:
-        if not self.active:
-            logger.warning("No active replay session to stop")
-            return
+            replay_name = self.replay_camera_name

-        replay_name = self.replay_camera_name
+            # Only publish remove if the camera was actually added to the live
+            # config (i.e. the runner reached the starting_camera phase).
+            if replay_name is not None and replay_name in frigate_config.cameras:
+                config_publisher.publish_update(
+                    CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
+                    frigate_config.cameras[replay_name],
+                )

-        # Publish remove event so subscribers stop and remove from their config
-        if replay_name in frigate_config.cameras:
-            config_publisher.publish_update(
-                CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
-                frigate_config.cameras[replay_name],
-            )
-        # Do NOT pop here — let subscribers handle removal from the shared
-        # config dict when they process the ZMQ message to avoid race conditions
+            if replay_name is not None:
+                self._cleanup_db(replay_name)
+                self._cleanup_files(replay_name)

-        # Defensive DB cleanup
-        self._cleanup_db(replay_name)
-        # Remove filesystem artifacts
-        self._cleanup_files(replay_name)
+            self._clear_locked()

-        # Reset state
-        self.replay_camera_name = None
-        self.source_camera = None
-        self.clip_path = None
-        self.start_ts = None
-        self.end_ts = None
-
         logger.info("Debug replay stopped and cleaned up: %s", replay_name)

     def _build_camera_config_dict(
         self,
@@ -267,16 +173,7 @@
         replay_name: str,
         clip_path: str,
     ) -> dict:
-        """Build a camera config dictionary for the replay camera.
-
-        Args:
-            source_config: Source camera's CameraConfig
-            replay_name: Name for the replay camera
-            clip_path: Path to the replay clip file
-
-        Returns:
-            Camera config as a dictionary
-        """
+        """Build a camera config dictionary for the replay camera."""
         # Extract detect config (exclude computed fields)
         detect_dict = source_config.detect.model_dump(
             exclude={"min_initialized", "max_disappeared", "enabled_in_config"}
@@ -311,7 +208,6 @@
         zone_dump = zone_config.model_dump(
             exclude={"contour", "color"}, exclude_defaults=True
         )
-        # Always include required fields
         zone_dump.setdefault("coordinates", zone_config.coordinates)
         zones_dict[zone_name] = zone_dump

View File

@@ -132,7 +132,6 @@ class ONNXModelRunner(BaseModelRunner):
         return model_type in [
             EnrichmentModelTypeEnum.paddleocr.value,
             EnrichmentModelTypeEnum.jina_v2.value,
-            EnrichmentModelTypeEnum.arcface.value,
             ModelTypeEnum.rfdetr.value,
             ModelTypeEnum.dfine.value,
         ]

View File

@@ -0,0 +1,386 @@
"""Debug replay startup job: ffmpeg concat + camera config publish.

The runner orchestrates the async portion of starting a debug replay
session. The DebugReplayManager (in frigate.debug_replay) owns session
presence so the status bar can keep reading a single `active` flag from
/debug_replay/status for the entire session window which is broader
than this job's lifetime.
"""
import logging
import os
import subprocess as sp
import threading
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Optional, cast

from peewee import ModelSelect

from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.const import REPLAY_CAMERA_PREFIX, REPLAY_DIR
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.job import Job
from frigate.jobs.manager import job_is_running, set_current_job
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum
from frigate.util.ffmpeg import run_ffmpeg_with_progress

if TYPE_CHECKING:
    from frigate.debug_replay import DebugReplayManager

logger = logging.getLogger(__name__)

# Coalesce frequent ffmpeg progress callbacks so the WS isn't flooded.
PROGRESS_BROADCAST_MIN_INTERVAL = 1.0

JOB_TYPE = "debug_replay"
STEP_PREPARING_CLIP = "preparing_clip"
STEP_STARTING_CAMERA = "starting_camera"
_active_runner: Optional["DebugReplayJobRunner"] = None
_runner_lock = threading.Lock()


def _set_active_runner(runner: Optional["DebugReplayJobRunner"]) -> None:
    global _active_runner
    with _runner_lock:
        _active_runner = runner


def get_active_runner() -> Optional["DebugReplayJobRunner"]:
    with _runner_lock:
        return _active_runner
@dataclass
class DebugReplayJob(Job):
    """Job state for a debug replay startup."""

    job_type: str = JOB_TYPE
    source_camera: str = ""
    replay_camera_name: str = ""
    start_ts: float = 0.0
    end_ts: float = 0.0
    current_step: Optional[str] = None
    progress_percent: float = 0.0

    def to_dict(self) -> dict[str, Any]:
        """Whitelisted payload for the job_state WS topic.

        Replay-specific fields land in results so the frontend's
        generic Job<TResults> type can be parameterised cleanly.
        """
        return {
            "id": self.id,
            "job_type": self.job_type,
            "status": self.status,
            "start_time": self.start_time,
            "end_time": self.end_time,
            "error_message": self.error_message,
            "results": {
                "current_step": self.current_step,
                "progress_percent": self.progress_percent,
                "source_camera": self.source_camera,
                "replay_camera_name": self.replay_camera_name,
                "start_ts": self.start_ts,
                "end_ts": self.end_ts,
            },
        }
def query_recordings(source_camera: str, start_ts: float, end_ts: float) -> ModelSelect:
    """Return the Recordings query for the time range.

    Module-level so tests can patch it without instantiating a runner.
    """
    query = (
        Recordings.select(
            Recordings.path,
            Recordings.start_time,
            Recordings.end_time,
        )
        .where(
            Recordings.start_time.between(start_ts, end_ts)
            | Recordings.end_time.between(start_ts, end_ts)
            | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
        )
        .where(Recordings.camera == source_camera)
        .order_by(Recordings.start_time.asc())
    )
    return cast(ModelSelect, query)
class DebugReplayJobRunner(threading.Thread):
    """Worker thread that drives the startup job to completion.

    Owns the live ffmpeg Popen reference for cancellation. Cancellation
    is two-step (threading.Event + proc.terminate()) so the runner
    both knows it should stop and is unblocked from its blocking subprocess
    wait.
    """

    def __init__(
        self,
        job: DebugReplayJob,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
        replay_manager: "DebugReplayManager",
        publisher: Optional[JobStatePublisher] = None,
    ) -> None:
        super().__init__(daemon=True, name=f"debug_replay_{job.id}")
        self.job = job
        self.frigate_config = frigate_config
        self.config_publisher = config_publisher
        self.replay_manager = replay_manager
        self.publisher = publisher if publisher is not None else JobStatePublisher()
        self._cancel_event = threading.Event()
        self._active_process: sp.Popen | None = None
        self._proc_lock = threading.Lock()
        self._last_broadcast_monotonic: float = 0.0

    def cancel(self) -> None:
        """Request cancellation. Idempotent."""
        self._cancel_event.set()
        with self._proc_lock:
            proc = self._active_process
        if proc is not None:
            try:
                proc.terminate()
            except Exception as exc:
                logger.warning("Failed to terminate ffmpeg subprocess: %s", exc)

    def is_cancelled(self) -> bool:
        return self._cancel_event.is_set()

    def _record_proc(self, proc: sp.Popen) -> None:
        with self._proc_lock:
            self._active_process = proc
        # Race: cancel arrived between Popen and _record_proc.
        if self._cancel_event.is_set():
            try:
                proc.terminate()
            except Exception:
                pass

    def _broadcast(self, force: bool = False) -> None:
        now = time.monotonic()
        if (
            not force
            and now - self._last_broadcast_monotonic < PROGRESS_BROADCAST_MIN_INTERVAL
        ):
            return
        self._last_broadcast_monotonic = now
        try:
            self.publisher.publish(self.job.to_dict())
        except Exception as err:
            logger.warning("Publisher raised during job state broadcast: %s", err)
    def run(self) -> None:
        replay_name = self.job.replay_camera_name
        os.makedirs(REPLAY_DIR, exist_ok=True)
        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")

        self.job.status = JobStatusTypesEnum.running
        self.job.start_time = time.time()
        self.job.current_step = STEP_PREPARING_CLIP
        self._broadcast(force=True)

        try:
            recordings = query_recordings(
                self.job.source_camera, self.job.start_ts, self.job.end_ts
            )
            with open(concat_file, "w") as f:
                for recording in recordings:
                    f.write(f"file '{recording.path}'\n")

            ffmpeg_cmd = [
                self.frigate_config.ffmpeg.ffmpeg_path,
                "-hide_banner",
                "-y",
                "-f",
                "concat",
                "-safe",
                "0",
                "-i",
                concat_file,
                "-c",
                "copy",
                "-movflags",
                "+faststart",
                clip_path,
            ]

            logger.info(
                "Generating replay clip for %s (%.1f - %.1f)",
                self.job.source_camera,
                self.job.start_ts,
                self.job.end_ts,
            )

            def _on_progress(percent: float) -> None:
                self.job.progress_percent = percent
                self._broadcast()

            try:
                returncode, stderr = run_ffmpeg_with_progress(
                    ffmpeg_cmd,
                    expected_duration_seconds=max(
                        0.0, self.job.end_ts - self.job.start_ts
                    ),
                    on_progress=_on_progress,
                    process_started=self._record_proc,
                    use_low_priority=True,
                )
            finally:
                with self._proc_lock:
                    self._active_process = None

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            if returncode != 0:
                raise RuntimeError(f"FFmpeg failed: {stderr[-500:]}")

            if not os.path.exists(clip_path):
                raise RuntimeError("Clip file was not created")

            self.job.current_step = STEP_STARTING_CAMERA
            self.job.progress_percent = 100.0
            self._broadcast(force=True)

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            self.replay_manager.publish_camera(
                source_camera=self.job.source_camera,
                replay_name=replay_name,
                clip_path=clip_path,
                frigate_config=self.frigate_config,
                config_publisher=self.config_publisher,
            )
            self.replay_manager.mark_session_ready(clip_path)

            self.job.status = JobStatusTypesEnum.success
            self.job.end_time = time.time()
            self._broadcast(force=True)
            logger.info(
                "Debug replay started: %s -> %s",
                self.job.source_camera,
                replay_name,
            )
        except Exception as exc:
            logger.exception("Debug replay startup failed")
            self.job.status = JobStatusTypesEnum.failed
            self.job.error_message = str(exc)
            self.job.end_time = time.time()
            self._broadcast(force=True)
            self.replay_manager.clear_session()
            _remove_silent(clip_path)
        finally:
            _remove_silent(concat_file)
            _set_active_runner(None)

    def _finalize_cancelled(self, clip_path: str) -> None:
        logger.info("Debug replay startup cancelled")
        self.job.status = JobStatusTypesEnum.cancelled
        self.job.end_time = time.time()
        self._broadcast(force=True)
        # The caller of cancel_debug_replay_job (DebugReplayManager.stop) owns
        # session cleanup — db rows, filesystem artifacts, clear_session. We
        # only clean up the partial concat output we created.
        _remove_silent(clip_path)


def _remove_silent(path: str) -> None:
    try:
        if os.path.exists(path):
            os.remove(path)
    except OSError:
        pass
def start_debug_replay_job(
    *,
    source_camera: str,
    start_ts: float,
    end_ts: float,
    frigate_config: FrigateConfig,
    config_publisher: CameraConfigUpdatePublisher,
    replay_manager: "DebugReplayManager",
) -> str:
    """Validate, create job, start runner. Returns the job id.

    Raises ValueError for bad params (camera missing, time range
    invalid, no recordings) and RuntimeError if a session is already
    active.
    """
    if job_is_running(JOB_TYPE) or replay_manager.active:
        raise RuntimeError("A replay session is already active")

    if source_camera not in frigate_config.cameras:
        raise ValueError(f"Camera '{source_camera}' not found")

    if end_ts <= start_ts:
        raise ValueError("End time must be after start time")

    recordings = query_recordings(source_camera, start_ts, end_ts)
    if not recordings.count():
        raise ValueError(
            f"No recordings found for camera '{source_camera}' in the specified time range"
        )

    replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
    replay_manager.mark_starting(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )

    job = DebugReplayJob(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )
    set_current_job(job)

    runner = DebugReplayJobRunner(
        job=job,
        frigate_config=frigate_config,
        config_publisher=config_publisher,
        replay_manager=replay_manager,
    )
    _set_active_runner(runner)
    runner.start()
    return job.id


def cancel_debug_replay_job() -> bool:
    """Signal the active runner to cancel.

    Returns True if a runner was signalled, False if no job was active.
    """
    runner = get_active_runner()
    if runner is None:
        return False
    runner.cancel()
    return True


def wait_for_runner(timeout: float = 2.0) -> bool:
    """Join the active runner. Returns True if the runner ended in time."""
    runner = get_active_runner()
    if runner is None:
        return True
    runner.join(timeout=timeout)
    return not runner.is_alive()
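The `query_recordings` filter above selects any recording whose interval overlaps [start_ts, end_ts] via three clauses: the recording starts inside the range, ends inside it, or spans the whole range. The same predicate in plain Python, useful for sanity-checking the SQL:

```python
def overlaps(rec_start: float, rec_end: float, start_ts: float, end_ts: float) -> bool:
    """Mirror the three-clause Recordings filter used by query_recordings."""
    return (
        start_ts <= rec_start <= end_ts        # recording starts inside the range
        or start_ts <= rec_end <= end_ts       # recording ends inside the range
        or (rec_start < start_ts and end_ts < rec_end)  # recording spans the range
    )


assert overlaps(5, 15, 10, 20)     # ends inside
assert overlaps(12, 25, 10, 20)    # starts inside
assert overlaps(0, 30, 10, 20)     # spans the whole range
assert not overlaps(0, 5, 10, 20)  # disjoint
```

Note the `between` clauses are inclusive while the spanning clause is strict, matching the peewee expression above.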

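The runner's two-step cancellation (set a `threading.Event`, then `terminate()` the live subprocess) can be sketched without ffmpeg. Here a sleep loop stands in for the blocking subprocess wait; the real second step is what unblocks an in-flight `Popen`:

```python
import threading
import time


class CancellableWorker(threading.Thread):
    """Worker that checks a cancel flag between blocking steps."""

    def __init__(self) -> None:
        super().__init__(daemon=True)
        self.cancelled = threading.Event()
        self.finished_steps = 0

    def cancel(self) -> None:
        # Step 1 of the real pattern; step 2 would be proc.terminate(),
        # which interrupts the blocking wait the flag alone cannot.
        self.cancelled.set()

    def run(self) -> None:
        for _ in range(100):
            if self.cancelled.is_set():
                return
            time.sleep(0.01)  # stand-in for a blocking subprocess wait
            self.finished_steps += 1


worker = CancellableWorker()
worker.start()
worker.cancel()
worker.join(timeout=2.0)
assert not worker.is_alive()
assert worker.finished_steps < 100  # stopped well short of completion
```

The flag alone suffices here because the "blocking" step is short; with a long-running ffmpeg process, the terminate step is what makes cancellation prompt.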
View File

@ -23,13 +23,13 @@ from frigate.const import (
EXPORT_DIR, EXPORT_DIR,
MAX_PLAYLIST_SECONDS, MAX_PLAYLIST_SECONDS,
PREVIEW_FRAME_TYPE, PREVIEW_FRAME_TYPE,
PROCESS_PRIORITY_LOW,
) )
from frigate.ffmpeg_presets import ( from frigate.ffmpeg_presets import (
EncodeTypeEnum, EncodeTypeEnum,
parse_preset_hardware_acceleration_encode, parse_preset_hardware_acceleration_encode,
) )
from frigate.models import Export, Previews, Recordings, ReviewSegment from frigate.models import Export, Previews, Recordings, ReviewSegment
from frigate.util.ffmpeg import run_ffmpeg_with_progress
from frigate.util.time import is_current_hour from frigate.util.time import is_current_hour
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -243,107 +243,29 @@ class RecordingExporter(threading.Thread):
return total return total
def _inject_progress_flags(self, ffmpeg_cmd: list[str]) -> list[str]:
"""Insert FFmpeg progress reporting flags before the output path.
``-progress pipe:2`` writes structured key=value lines to stderr,
``-nostats`` suppresses the noisy default stats output.
"""
if not ffmpeg_cmd:
return ffmpeg_cmd
return ffmpeg_cmd[:-1] + ["-progress", "pipe:2", "-nostats", ffmpeg_cmd[-1]]
def _run_ffmpeg_with_progress( def _run_ffmpeg_with_progress(
self, self,
ffmpeg_cmd: list[str], ffmpeg_cmd: list[str],
playlist_lines: str | list[str], playlist_lines: str | list[str],
step: str = "encoding", step: str = "encoding",
) -> tuple[int, str]: ) -> tuple[int, str]:
"""Run an FFmpeg export command, parsing progress events from stderr. """Delegate to the shared helper, mapping percent → (step, percent).
Returns ``(returncode, captured_stderr)``. Stdout is left attached to Returns ``(returncode, captured_stderr)``.
the parent process so we don't have to drain it (and risk a deadlock
if the buffer fills). Progress percent is computed against the
expected output duration; values are clamped to [0, 100] inside
:py:meth:`_emit_progress`.
""" """
cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + self._inject_progress_flags(
ffmpeg_cmd
)
if isinstance(playlist_lines, list): if isinstance(playlist_lines, list):
stdin_payload = "\n".join(playlist_lines) stdin_payload = "\n".join(playlist_lines)
else: else:
stdin_payload = playlist_lines stdin_payload = playlist_lines
expected_duration = self._expected_output_duration_seconds() return run_ffmpeg_with_progress(
ffmpeg_cmd,
self._emit_progress(step, 0.0) expected_duration_seconds=self._expected_output_duration_seconds(),
on_progress=lambda percent: self._emit_progress(step, percent),
proc = sp.Popen( stdin_payload=stdin_payload,
cmd, use_low_priority=True,
stdin=sp.PIPE,
stderr=sp.PIPE,
text=True,
encoding="ascii",
errors="replace",
) )
assert proc.stdin is not None
assert proc.stderr is not None
try:
proc.stdin.write(stdin_payload)
except (BrokenPipeError, OSError):
# FFmpeg may have rejected the input early; still wait for it
# to terminate so the returncode is meaningful.
pass
finally:
try:
proc.stdin.close()
except (BrokenPipeError, OSError):
pass
captured: list[str] = []
try:
for raw_line in proc.stderr:
captured.append(raw_line)
line = raw_line.strip()
if not line:
continue
if line.startswith("out_time_us="):
if expected_duration <= 0:
continue
try:
out_time_us = int(line.split("=", 1)[1])
except (ValueError, IndexError):
continue
if out_time_us < 0:
continue
out_seconds = out_time_us / 1_000_000.0
percent = (out_seconds / expected_duration) * 100.0
self._emit_progress(step, percent)
elif line == "progress=end":
self._emit_progress(step, 100.0)
break
except Exception:
logger.exception("Failed reading FFmpeg progress for %s", self.export_id)
proc.wait()
# Drain any remaining stderr so callers can log it on failure.
try:
remaining = proc.stderr.read()
if remaining:
captured.append(remaining)
except Exception:
pass
return proc.returncode, "".join(captured)
def get_datetime_from_timestamp(self, timestamp: int) -> str: def get_datetime_from_timestamp(self, timestamp: int) -> str:
# return in iso format using the configured ui.timezone when set, # return in iso format using the configured ui.timezone when set,
# so the auto-generated export name reflects local time rather # so the auto-generated export name reflects local time rather
@ -420,6 +342,7 @@ class RecordingExporter(threading.Thread):
return None return None
total_output = windows[-1][2] + (windows[-1][1] - windows[-1][0]) total_output = windows[-1][2] + (windows[-1][1] - windows[-1][0])
last_recorded_end = windows[-1][1]
def wall_to_output(t: float) -> float:
t = max(float(self.start_time), min(float(self.end_time), t))
@@ -432,8 +355,18 @@ class RecordingExporter(threading.Thread):
chapter_blocks: list[str] = []
for review in review_rows:
if review.start_time is None:
continue
# In-progress segments have a NULL end_time until the activity
# closes; clamp to the last recorded second so the chapter never
# extends past the actual video.
review_end = (
float(review.end_time)
if review.end_time is not None
else last_recorded_end
)
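The clamp above reduces to a one-liner; a minimal sketch of the same rule (function name is hypothetical, types follow the surrounding code):

```python
from typing import Optional


def clamp_review_end(end_time: Optional[float], last_recorded_end: float) -> float:
    # In-progress review segments keep end_time=NULL until the activity
    # closes; fall back to the last recorded second so the chapter never
    # extends past the actual video.
    return float(end_time) if end_time is not None else last_recorded_end
```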
start_out = wall_to_output(float(review.start_time))
-end_out = wall_to_output(float(review.end_time))
+end_out = wall_to_output(review_end)
# Drop chapters that fall entirely in a recording gap, or are
# too short to be navigable in a player.
@@ -516,16 +449,14 @@ class RecordingExporter(threading.Thread):
except DoesNotExist:
return ""
-diff = self.start_time - preview.start_time
-minutes = int(diff / 60)
-seconds = int(diff % 60)
+diff = max(0.0, float(self.start_time) - float(preview.start_time))
ffmpeg_cmd = [
"/usr/lib/ffmpeg/7.0/bin/ffmpeg",  # hardcode path for exports thumbnail due to missing libwebp support
"-hide_banner",
"-loglevel",
"warning",
"-ss",
-f"00:{minutes}:{seconds}",
+f"{diff:.3f}",
"-i",
preview.path,
"-frames",
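The `-ss` change above swaps a hand-built `00:{minutes}:{seconds}` string for plain seconds: the old form can emit a minutes field outside the 0-59 range ffmpeg expects for offsets of an hour or more, and a negative diff produces a malformed timestamp. A sketch of the replacement rule (helper name is hypothetical):

```python
def thumbnail_seek(start_time: float, preview_start: float) -> str:
    # Seconds with millisecond precision is always a valid ffmpeg -ss value,
    # regardless of magnitude, and the max() guards against a preview that
    # starts after the export window (which would otherwise yield a
    # negative seek).
    diff = max(0.0, float(start_time) - float(preview_start))
    return f"{diff:.3f}"
```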

View File

@@ -0,0 +1,123 @@
"""Tests for /debug_replay API endpoints."""
from unittest.mock import patch
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestDebugReplayAPI(BaseTestHttp):
def setUp(self):
super().setUp([Event, Recordings, ReviewSegment])
self.app = self.create_app()
def test_start_returns_202_with_job_id(self):
# Stub the factory to skip validation/threading and just record the
# name on the manager the way the real factory's mark_starting would.
def fake_start(**kwargs):
kwargs["replay_manager"].mark_starting(
source_camera=kwargs["source_camera"],
replay_camera_name="_replay_front",
start_ts=kwargs["start_ts"],
end_ts=kwargs["end_ts"],
)
return "job-1234"
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=fake_start,
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 202)
body = resp.json()
self.assertTrue(body["success"])
self.assertEqual(body["job_id"], "job-1234")
self.assertEqual(body["replay_camera"], "_replay_front")
def test_start_returns_400_on_validation_error(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=ValueError("Camera 'missing' not found"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "missing",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 400)
body = resp.json()
self.assertFalse(body["success"])
# Message is hard-coded so we don't echo exception text back to clients
# (CodeQL: information exposure through an exception).
self.assertEqual(body["message"], "Invalid debug replay parameters")
def test_start_returns_409_when_session_already_active(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=RuntimeError("A replay session is already active"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 409)
body = resp.json()
self.assertFalse(body["success"])
def test_status_inactive_when_no_session(self):
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertFalse(body["active"])
self.assertIsNone(body["replay_camera"])
self.assertIsNone(body["source_camera"])
self.assertIsNone(body["start_time"])
self.assertIsNone(body["end_time"])
self.assertFalse(body["live_ready"])
# Make sure deprecated fields are gone
self.assertNotIn("state", body)
self.assertNotIn("progress_percent", body)
self.assertNotIn("error_message", body)
def test_status_active_after_mark_starting(self):
manager = self.app.replay_manager
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertTrue(body["active"])
self.assertEqual(body["replay_camera"], "_replay_front")
self.assertEqual(body["source_camera"], "front")
self.assertEqual(body["start_time"], 100.0)
self.assertEqual(body["end_time"], 200.0)
self.assertFalse(body["live_ready"])

View File

@@ -0,0 +1,242 @@
"""Tests for the simplified DebugReplayManager.
Startup orchestration lives in ``frigate.jobs.debug_replay`` (covered by
``test_debug_replay_job``). The manager owns only session presence and
cleanup.
"""
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
class TestDebugReplayManagerSession(unittest.TestCase):
def test_inactive_by_default(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
def test_mark_starting_sets_session_pointers_and_active(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
self.assertTrue(manager.active)
self.assertEqual(manager.replay_camera_name, "_replay_front")
self.assertEqual(manager.source_camera, "front")
self.assertEqual(manager.start_ts, 100.0)
self.assertEqual(manager.end_ts, 200.0)
self.assertIsNone(manager.clip_path)
def test_mark_session_ready_sets_clip_path(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready(clip_path="/tmp/replay/_replay_front.mp4")
self.assertEqual(manager.clip_path, "/tmp/replay/_replay_front.mp4")
self.assertTrue(manager.active)
def test_clear_session_resets_all_pointers(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/clip.mp4")
manager.clear_session()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
class TestDebugReplayManagerStop(unittest.TestCase):
def test_stop_when_inactive_is_a_noop(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
# Should not raise; should not publish any events.
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
def test_stop_publishes_remove_when_camera_was_published(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/_replay_front.mp4")
camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"_replay_front": camera_config}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=False),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
# One publish_update call with a remove topic.
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.remove)
self.assertFalse(manager.active)
def test_stop_skips_remove_publish_when_camera_not_in_config(self) -> None:
"""Cancellation during preparing_clip: no camera was published yet."""
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
# clip_path stays None because we cancelled before camera publish.
frigate_config = MagicMock()
frigate_config.cameras = {} # _replay_front not present
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=True),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
self.assertFalse(manager.active)
def test_stop_calls_cancel_debug_replay_job(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch(
"frigate.debug_replay.cancel_debug_replay_job",
return_value=True,
) as mock_cancel,
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
mock_cancel.assert_called_once()
class TestDebugReplayManagerPublishCamera(unittest.TestCase):
def test_publish_camera_invokes_publisher_with_add_topic(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
source_config = MagicMock()
new_camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"front": source_config}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch("frigate.debug_replay.FrigateConfig.parse_object") as parse_object,
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_instance = yaml_cls.return_value
yaml_instance.load.return_value = {"cameras": {}}
parsed = MagicMock()
parsed.cameras = {"_replay_front": new_camera_config}
parse_object.return_value = parsed
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
# Camera registered into the live config dict
self.assertIn("_replay_front", frigate_config.cameras)
# Publisher invoked with an add topic
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.add)
def test_publish_camera_wraps_parse_failure_in_runtime_error(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {"front": MagicMock()}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch(
"frigate.debug_replay.FrigateConfig.parse_object",
side_effect=ValueError("zone foo has invalid coordinates"),
),
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_cls.return_value.load.return_value = {"cameras": {}}
with self.assertRaises(RuntimeError) as ctx:
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
self.assertIn("replay camera config", str(ctx.exception))
self.assertIn("invalid coordinates", str(ctx.exception))
publisher.publish_update.assert_not_called()
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,460 @@
"""Tests for the debug replay job runner and factory."""
import threading
import time
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
from frigate.debug_replay import DebugReplayManager
from frigate.jobs.debug_replay import (
DebugReplayJob,
cancel_debug_replay_job,
get_active_runner,
start_debug_replay_job,
)
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.manager import _completed_jobs, _current_jobs
from frigate.types import JobStatusTypesEnum
def _reset_job_manager() -> None:
"""Clear the global job manager state between tests."""
_current_jobs.clear()
_completed_jobs.clear()
def _patch_publisher(test_case: unittest.TestCase) -> None:
"""Replace JobStatePublisher.publish with a no-op to avoid hanging on IPC."""
publisher_patch = patch.object(
JobStatePublisher, "publish", lambda self, payload: None
)
publisher_patch.start()
test_case.addCleanup(publisher_patch.stop)
class TestDebugReplayJob(unittest.TestCase):
def test_default_fields(self) -> None:
job = DebugReplayJob()
self.assertEqual(job.job_type, "debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.queued)
self.assertIsNone(job.current_step)
self.assertEqual(job.progress_percent, 0.0)
def test_to_dict_whitelist(self) -> None:
job = DebugReplayJob(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
job.current_step = "preparing_clip"
job.progress_percent = 42.5
payload = job.to_dict()
# Top-level matches the standard Job<TResults> shape.
for key in (
"id",
"job_type",
"status",
"start_time",
"end_time",
"error_message",
"results",
):
self.assertIn(key, payload, f"missing top-level field: {key}")
results = payload["results"]
self.assertEqual(results["source_camera"], "front")
self.assertEqual(results["replay_camera_name"], "_replay_front")
self.assertEqual(results["current_step"], "preparing_clip")
self.assertEqual(results["progress_percent"], 42.5)
self.assertEqual(results["start_ts"], 100.0)
self.assertEqual(results["end_ts"], 200.0)
class TestStartDebugReplayJob(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def test_rejects_unknown_camera(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="missing",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_invalid_time_range(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=200.0,
end_ts=100.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_when_no_recordings(self) -> None:
empty_qs = MagicMock()
empty_qs.count.return_value = 0
with patch("frigate.jobs.debug_replay.query_recordings", return_value=empty_qs):
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_returns_job_id_and_marks_session_starting(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
job_id = start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertIsInstance(job_id, str)
self.assertTrue(self.manager.active)
self.assertEqual(self.manager.replay_camera_name, "_replay_front")
self.assertEqual(self.manager.source_camera, "front")
block.set()
def test_rejects_concurrent_calls(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
with self.assertRaises(RuntimeError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
block.set()
class TestRunnerHappyPath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_progress_callback_updates_job_percent(self) -> None:
captured: list[float] = []
def fake_helper(cmd, *, on_progress=None, **kwargs):
on_progress(0.0)
on_progress(50.0)
on_progress(100.0)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch.object(
self.manager,
"publish_camera",
side_effect=lambda *a, **kw: captured.append("published"),
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.success)
self.assertEqual(job.progress_percent, 100.0)
self.assertEqual(captured, ["published"])
# Manager should have been told the session is ready with the clip path.
self.assertIsNotNone(self.manager.clip_path)
class TestRunnerFailurePath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_ffmpeg_failure_marks_job_failed_and_clears_session(self) -> None:
def failing_helper(cmd, **kwargs):
return 1, "ffmpeg exploded"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=failing_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.failed)
self.assertIsNotNone(job.error_message)
self.assertIn("ffmpeg", job.error_message.lower())
# Session cleared so a new /start is allowed
self.assertFalse(self.manager.active)
class TestRunnerCancellation(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_cancel_terminates_ffmpeg_and_marks_cancelled(self) -> None:
terminated = threading.Event()
fake_proc = MagicMock()
fake_proc.terminate = MagicMock(side_effect=lambda: terminated.set())
def fake_helper(cmd, *, process_started=None, **kwargs):
if process_started is not None:
process_started(fake_proc)
terminated.wait(timeout=5)
return -15, "killed"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
# Wait for the runner to register the active process.
self.assertTrue(
self._wait_for(
lambda: (
get_active_runner() is not None
and get_active_runner()._active_process is fake_proc
)
)
)
cancelled = cancel_debug_replay_job()
self.assertTrue(cancelled)
self.assertTrue(fake_proc.terminate.called)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.cancelled)
# Runner must not clear the manager session on cancellation —
# that belongs to the caller of cancel_debug_replay_job (stop()).
# If the runner cleared it, stop() would log "no active session"
# and skip its cleanup_db / cleanup_files calls.
self.assertTrue(self.manager.active)
if __name__ == "__main__":
unittest.main()

View File

@@ -14,6 +14,7 @@ from frigate.jobs.export import (
)
from frigate.record.export import PlaybackSourceEnum, RecordingExporter
from frigate.types import JobStatusTypesEnum
+from frigate.util.ffmpeg import inject_progress_flags
def _make_exporter(
@ -118,10 +119,9 @@ class TestExpectedOutputDuration(unittest.TestCase):
class TestProgressFlagInjection(unittest.TestCase):
def test_inserts_before_output_path(self) -> None:
-exporter = _make_exporter()
cmd = ["ffmpeg", "-i", "input.m3u8", "-c", "copy", "/tmp/output.mp4"]
-result = exporter._inject_progress_flags(cmd)
+result = inject_progress_flags(cmd)
assert result == [
"ffmpeg",
@@ -136,8 +136,7 @@ class TestProgressFlagInjection(unittest.TestCase):
]
def test_handles_empty_cmd(self) -> None:
-exporter = _make_exporter()
-assert exporter._inject_progress_flags([]) == []
+assert inject_progress_flags([]) == []
class TestFfmpegProgressParsing(unittest.TestCase):
@@ -167,7 +166,7 @@ class TestFfmpegProgressParsing(unittest.TestCase):
fake_proc.returncode = 0
fake_proc.wait = MagicMock(return_value=0)
-with patch("frigate.record.export.sp.Popen", return_value=fake_proc):
+with patch("frigate.util.ffmpeg.sp.Popen", return_value=fake_proc):
returncode, _stderr = exporter._run_ffmpeg_with_progress(
["ffmpeg", "-i", "x.m3u8", "/tmp/out.mp4"], "playlist", step="encoding"
)
@@ -499,5 +498,56 @@ class TestSchedulesCleanup(unittest.TestCase):
assert job.id not in manager.jobs
class TestChapterMetadataInProgressReview(unittest.TestCase):
"""Regression: in-progress review segments have end_time=NULL until the
activity closes. The chapter builder must clamp the chapter end to the
last recorded second instead of crashing on float(None)."""
def _fake_select_returning(self, rows: list) -> MagicMock:
mock_query = MagicMock()
mock_query.where.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.iterator.return_value = iter(rows)
return mock_query
def test_in_progress_review_does_not_crash_and_clamps_to_last_recording(
self,
) -> None:
exporter = _make_exporter(end_minus_start=200)
# Recordings cover [1000, 1150]; export window is [1000, 1200] so
# the last recorded second is 1150 (a 50s gap at the tail).
recordings = [
MagicMock(start_time=1000.0, end_time=1150.0),
]
in_progress = MagicMock(
start_time=1100.0,
end_time=None,
severity="alert",
data={"objects": ["person"]},
)
with tempfile.TemporaryDirectory() as tmpdir:
chapter_path = os.path.join(tmpdir, "chapters.txt")
exporter._chapter_metadata_path = lambda: chapter_path # type: ignore[method-assign]
with patch(
"frigate.record.export.ReviewSegment.select",
return_value=self._fake_select_returning([in_progress]),
):
result = exporter._build_chapter_metadata_file(recordings)
assert result == chapter_path
with open(chapter_path) as f:
content = f.read()
# Output time is windows[-1][1] - windows[-1][0] = 150s.
# Review starts at wall=1100, output offset = 100s -> 100000ms.
# Clamped end = last_recorded_end (1150) -> output offset = 150s -> 150000ms.
assert "[CHAPTER]" in content
assert "START=100000" in content
assert "END=150000" in content
assert "title=Alert: person" in content
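The START/END assertions above target ffmpeg's FFMETADATA chapter syntax. A minimal sketch of one chapter entry (the function name is hypothetical, and `TIMEBASE=1/1000` is an assumption inferred from the millisecond values the test expects):

```python
def chapter_entry(start_ms: int, end_ms: int, title: str) -> str:
    # One [CHAPTER] block in an FFMETADATA1 file; START/END are expressed
    # in TIMEBASE units (here milliseconds).
    return (
        "[CHAPTER]\n"
        "TIMEBASE=1/1000\n"
        f"START={start_ms}\n"
        f"END={end_ms}\n"
        f"title={title}\n"
    )
```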
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,111 @@
"""Tests for the shared ffmpeg progress helper."""
import unittest
from unittest.mock import MagicMock, patch
from frigate.util.ffmpeg import inject_progress_flags, run_ffmpeg_with_progress
class TestInjectProgressFlags(unittest.TestCase):
def test_inserts_flags_before_output_path(self):
cmd = ["ffmpeg", "-i", "in.mp4", "-c", "copy", "out.mp4"]
result = inject_progress_flags(cmd)
self.assertEqual(
result,
[
"ffmpeg",
"-i",
"in.mp4",
"-c",
"copy",
"-progress",
"pipe:2",
"-nostats",
"out.mp4",
],
)
def test_empty_cmd_returns_empty(self):
self.assertEqual(inject_progress_flags([]), [])
class TestRunFfmpegWithProgress(unittest.TestCase):
def _make_fake_proc(self, stderr_lines, returncode=0):
proc = MagicMock()
proc.stderr = iter(stderr_lines)
proc.stdin = MagicMock()
proc.returncode = returncode
proc.wait = MagicMock()
return proc
def test_emits_percent_from_out_time_us_lines(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = [
"out_time_us=1000000\n",
"out_time_us=5000000\n",
"progress=end\n",
]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
returncode, _stderr = run_ffmpeg_with_progress(
["ffmpeg", "-i", "in", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
self.assertEqual(returncode, 0)
self.assertEqual(len(captured), 4) # initial 0.0 + two parsed + final 100.0
self.assertAlmostEqual(captured[0], 0.0)
self.assertAlmostEqual(captured[1], 10.0)
self.assertAlmostEqual(captured[2], 50.0)
self.assertAlmostEqual(captured[3], 100.0)
def test_passes_started_process_to_callback(self):
proc = self._make_fake_proc([])
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter([])
proc.stderr.read = MagicMock(return_value="")
seen: list = []
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=1.0,
process_started=lambda p: seen.append(p),
use_low_priority=False,
)
self.assertEqual(seen, [proc])
def test_clamps_percent_to_0_100(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = ["out_time_us=999999999999\n"]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
# initial 0.0 then a clamped reading
self.assertEqual(captured[-1], 100.0)

View File

@@ -7,8 +7,6 @@ from frigate.util.services import get_amd_gpu_stats, get_intel_gpu_stats
 class TestGpuStats(unittest.TestCase):
     def setUp(self):
         self.amd_results = "Unknown Radeon card. <= R500 won't work, new cards might.\nDumping to -, line limit 1.\n1664070990.607556: bus 10, gpu 4.17%, ee 0.00%, vgt 0.00%, ta 0.00%, tc 0.00%, sx 0.00%, sh 0.00%, spi 0.83%, smx 0.00%, cr 0.00%, sc 0.00%, pa 0.00%, db 0.00%, cb 0.00%, vram 60.37% 294.04mb, gtt 0.33% 52.21mb, mclk 100.00% 1.800ghz, sclk 26.65% 0.533ghz\n"
-        self.intel_results = """{"period":{"duration":1.194033,"unit":"ms"},"frequency":{"requested":0.000000,"actual":0.000000,"unit":"MHz"},"interrupts":{"count":3349.991164,"unit":"irq/s"},"rc6":{"value":47.844741,"unit":"%"},"engines":{"Render/3D/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"Blitter/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"Video/0":{"busy":4.533124,"sema":0.000000,"wait":0.000000,"unit":"%"},"Video/1":{"busy":6.194385,"sema":0.000000,"wait":0.000000,"unit":"%"},"VideoEnhance/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"}}},{"period":{"duration":1.189291,"unit":"ms"},"frequency":{"requested":0.000000,"actual":0.000000,"unit":"MHz"},"interrupts":{"count":0.000000,"unit":"irq/s"},"rc6":{"value":100.000000,"unit":"%"},"engines":{"Render/3D/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"Blitter/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"Video/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"Video/1":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"},"VideoEnhance/0":{"busy":0.000000,"sema":0.000000,"wait":0.000000,"unit":"%"}}}"""
-        self.nvidia_results = "name, utilization.gpu [%], memory.used [MiB], memory.total [MiB]\nNVIDIA GeForce RTX 3050, 42 %, 5036 MiB, 8192 MiB\n"
 
     @patch("subprocess.run")
     def test_amd_gpu_stats(self, sp):
@@ -19,32 +17,76 @@ class TestGpuStats(unittest.TestCase):
         amd_stats = get_amd_gpu_stats()
         assert amd_stats == {"gpu": "4.17%", "mem": "60.37%"}
 
-    # @patch("subprocess.run")
-    # def test_nvidia_gpu_stats(self, sp):
-    #     process = MagicMock()
-    #     process.returncode = 0
-    #     process.stdout = self.nvidia_results
-    #     sp.return_value = process
-    #     nvidia_stats = get_nvidia_gpu_stats()
-    #     assert nvidia_stats == {
-    #         "name": "NVIDIA GeForce RTX 3050",
-    #         "gpu": "42 %",
-    #         "mem": "61.5 %",
-    #     }
-
-    @patch("subprocess.run")
-    def test_intel_gpu_stats(self, sp):
-        process = MagicMock()
-        process.returncode = 124
-        process.stdout = self.intel_results
-        sp.return_value = process
-        intel_stats = get_intel_gpu_stats(False)
-        # rc6 values: 47.844741 and 100.0 → avg 73.92 → gpu = 100 - 73.92 = 26.08%
-        # Render/3D/0: 0.0 and 0.0 → enc = 0.0%
-        # Video/0: 4.533124 and 0.0 → dec = 2.27%
-        assert intel_stats == {
-            "gpu": "26.08%",
-            "mem": "-%",
-            "compute": "0.0%",
-            "dec": "2.27%",
-        }
+    @patch("frigate.util.services.time.sleep")
+    @patch("frigate.util.services.time.monotonic")
+    @patch("frigate.util.services._read_intel_drm_fdinfo")
+    def test_intel_gpu_stats_fdinfo(self, read_fdinfo, monotonic, sleep):
+        # 1 second of wall clock between snapshots
+        monotonic.side_effect = [0.0, 1.0]
+
+        # Two i915 clients on the same iGPU. Engine values are cumulative ns.
+        # Deltas over the 1s window:
+        #   client A (pid 100): render +200_000_000 (20%), video +500_000_000 (50%),
+        #     video-enhance +100_000_000 (10%)
+        #   client B (pid 200): compute +100_000_000 (10%)
+        # Engine totals → render 20, video 50, video-enhance 10, compute 10
+        #   → compute = render + compute = 30
+        #   → dec = video + video-enhance = 60
+        #   → gpu = compute + dec = 90
+        snapshot_a = {
+            ("0000:00:02.0", "1", "100"): {
+                "driver": "i915",
+                "pid": "100",
+                "engines": {
+                    "render": (1_000_000_000, 0),
+                    "video": (5_000_000_000, 0),
+                    "video-enhance": (200_000_000, 0),
+                    "compute": (0, 0),
+                },
+            },
+            ("0000:00:02.0", "2", "200"): {
+                "driver": "i915",
+                "pid": "200",
+                "engines": {
+                    "render": (0, 0),
+                    "compute": (2_000_000_000, 0),
+                },
+            },
+        }
+        snapshot_b = {
+            ("0000:00:02.0", "1", "100"): {
+                "driver": "i915",
+                "pid": "100",
+                "engines": {
+                    "render": (1_200_000_000, 0),
+                    "video": (5_500_000_000, 0),
+                    "video-enhance": (300_000_000, 0),
+                    "compute": (0, 0),
+                },
+            },
+            ("0000:00:02.0", "2", "200"): {
+                "driver": "i915",
+                "pid": "200",
+                "engines": {
+                    "render": (0, 0),
+                    "compute": (2_100_000_000, 0),
+                },
+            },
+        }
+        read_fdinfo.side_effect = [snapshot_a, snapshot_b]
+
+        intel_stats = get_intel_gpu_stats(None)
+
+        sleep.assert_called_once()
+        assert intel_stats == {
+            "gpu": "90.0%",
+            "mem": "-%",
+            "compute": "30.0%",
+            "dec": "60.0%",
+            "clients": {"100": "80.0%", "200": "10.0%"},
+        }
+
+    @patch("frigate.util.services._read_intel_drm_fdinfo")
+    def test_intel_gpu_stats_no_clients(self, read_fdinfo):
+        read_fdinfo.return_value = {}
+        assert get_intel_gpu_stats(None) is None

View File

@@ -2,8 +2,9 @@
 import logging
 import subprocess as sp
-from typing import Any
+from typing import Any, Callable, Optional
 
+from frigate.const import PROCESS_PRIORITY_LOW
 from frigate.log import LogPipe
@@ -46,3 +47,124 @@ def start_or_restart_ffmpeg(
         start_new_session=True,
     )
     return process

logger = logging.getLogger(__name__)


def inject_progress_flags(cmd: list[str]) -> list[str]:
    """Insert `-progress pipe:2 -nostats` immediately before the output path.

    `-progress pipe:2` writes structured key=value lines to stderr;
    `-nostats` suppresses the noisy default stats output. The output path
    is conventionally the last token in an FFmpeg argv.
    """
    if not cmd:
        return cmd
    return cmd[:-1] + ["-progress", "pipe:2", "-nostats", cmd[-1]]


def run_ffmpeg_with_progress(
    cmd: list[str],
    *,
    expected_duration_seconds: float,
    on_progress: Optional[Callable[[float], None]] = None,
    stdin_payload: Optional[str] = None,
    process_started: Optional[Callable[[sp.Popen], None]] = None,
    use_low_priority: bool = True,
) -> tuple[int, str]:
    """Run an ffmpeg command, streaming progress via `-progress pipe:2`.

    Args:
        cmd: ffmpeg argv. Output path must be the last token.
        expected_duration_seconds: Duration of the expected output clip in
            seconds. Used to convert ffmpeg's `out_time_us` into a percent.
        on_progress: Optional callback invoked with a percent in [0, 100].
            Called once with 0.0 at start, again on each `out_time_us=`
            stderr line, and once with 100.0 on `progress=end`.
        stdin_payload: Optional string written to ffmpeg stdin (used by
            export for concat playlists).
        process_started: Optional callback invoked with the live `Popen`
            once spawned; lets callers store the ref for cancellation.
        use_low_priority: When True, prepend `nice -n PROCESS_PRIORITY_LOW`
            so concat doesn't starve detection.

    Returns:
        Tuple of `(returncode, captured_stderr)`. Stdout is left attached
        to the parent process to avoid buffer-full deadlocks.
    """
    full_cmd = inject_progress_flags(cmd)
    if use_low_priority:
        full_cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + full_cmd

    def emit(percent: float) -> None:
        if on_progress is None:
            return
        try:
            on_progress(max(0.0, min(100.0, percent)))
        except Exception:
            logger.exception("FFmpeg progress callback failed")

    emit(0.0)

    proc = sp.Popen(
        full_cmd,
        stdin=sp.PIPE if stdin_payload is not None else None,
        stderr=sp.PIPE,
        text=True,
        encoding="ascii",
        errors="replace",
    )

    if process_started is not None:
        try:
            process_started(proc)
        except Exception:
            logger.exception("FFmpeg process_started callback failed")

    if stdin_payload is not None and proc.stdin is not None:
        try:
            proc.stdin.write(stdin_payload)
        except (BrokenPipeError, OSError):
            pass
        finally:
            try:
                proc.stdin.close()
            except (BrokenPipeError, OSError):
                pass

    captured: list[str] = []
    if proc.stderr is not None:
        try:
            for raw_line in proc.stderr:
                captured.append(raw_line)
                line = raw_line.strip()
                if not line:
                    continue
                if line.startswith("out_time_us="):
                    if expected_duration_seconds <= 0:
                        continue
                    try:
                        out_time_us = int(line.split("=", 1)[1])
                    except (ValueError, IndexError):
                        continue
                    if out_time_us < 0:
                        continue
                    out_seconds = out_time_us / 1_000_000.0
                    emit((out_seconds / expected_duration_seconds) * 100.0)
                elif line == "progress=end":
                    emit(100.0)
                    break
        except Exception:
            logger.exception("Failed reading FFmpeg progress stream")

    proc.wait()

    if proc.stderr is not None:
        try:
            remaining = proc.stderr.read()
            if remaining:
                captured.append(remaining)
        except Exception:
            pass

    return proc.returncode or 0, "".join(captured)
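
The `out_time_us` arithmetic and the clamp applied by `emit()` can be restated as a standalone helper; `progress_percent` is a hypothetical name used only for illustration, not part of the module above.

```python
def progress_percent(out_time_us: int, expected_duration_seconds: float) -> float:
    """Convert ffmpeg's cumulative `out_time_us` (microseconds of output
    written so far) into a clamped completion percentage."""
    out_seconds = out_time_us / 1_000_000.0
    pct = (out_seconds / expected_duration_seconds) * 100.0
    return max(0.0, min(100.0, pct))  # clamp to [0, 100], like emit() above


# 5 s written of an expected 10 s clip → 50%
assert progress_percent(5_000_000, 10.0) == 50.0
# absurdly large readings are clamped rather than passed through
assert progress_percent(999_999_999_999, 10.0) == 100.0
```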

View File

@@ -264,156 +264,214 @@ def get_amd_gpu_stats() -> Optional[dict[str, str]]:
     return results
 
 
-def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, str]]:
-    """Get stats using intel_gpu_top.
-
-    Returns overall GPU usage derived from rc6 residency (idle time),
-    plus individual engine breakdowns:
-    - enc: Render/3D engine (compute/shader encoder, used by QSV)
-    - dec: Video engines (fixed-function codec, used by VAAPI)
-    """
-
-    def get_stats_manually(output: str) -> dict[str, str]:
-        """Find global stats via regex when json fails to parse."""
-        reading = "".join(output)
-        results: dict[str, str] = {}
-
-        # rc6 residency for overall GPU usage
-        rc6_match = re.search(r'"rc6":\{"value":([\d.]+)', reading)
-        if rc6_match:
-            rc6_value = float(rc6_match.group(1))
-            results["gpu"] = f"{round(100.0 - rc6_value, 2)}%"
-        else:
-            results["gpu"] = "-%"
-        results["mem"] = "-%"
-
-        # Render/3D is the compute/encode engine
-        render = []
-        for result in re.findall(r'"Render/3D/0":{[a-z":\d.,%]+}', reading):
-            packet = json.loads(result[14:])
-            single = packet.get("busy", 0.0)
-            render.append(float(single))
-        if render:
-            results["compute"] = f"{round(sum(render) / len(render), 2)}%"
-
-        # Video engines are the fixed-function decode engines
-        video = []
-        for result in re.findall(r'"Video/\d":{[a-z":\d.,%]+}', reading):
-            packet = json.loads(result[10:])
-            single = packet.get("busy", 0.0)
-            video.append(float(single))
-        if video:
-            results["dec"] = f"{round(sum(video) / len(video), 2)}%"
-
-        return results
-
-    intel_gpu_top_command = [
-        "timeout",
-        "0.5s",
-        "intel_gpu_top",
-        "-J",
-        "-o",
-        "-",
-        "-s",
-        "1000",  # Intel changed this from seconds to milliseconds in 2024+ versions
-    ]
-
-    if intel_gpu_device:
-        intel_gpu_top_command += ["-d", intel_gpu_device]
-
-    try:
-        p = sp.run(
-            intel_gpu_top_command,
-            encoding="ascii",
-            capture_output=True,
-        )
-    except UnicodeDecodeError:
-        return None
-
-    # timeout has a non-zero returncode when timeout is reached
-    if p.returncode != 124:
-        logger.error(f"Unable to poll intel GPU stats: {p.stderr}")
-        return None
-    else:
-        output = "".join(p.stdout.split())
-
-        try:
-            data = json.loads(f"[{output}]")
-        except json.JSONDecodeError:
-            return get_stats_manually(output)
-
-        results: dict[str, str] = {}
-        rc6_values = []
-        render_global = []
-        video_global = []
-        # per-client: {pid: [total_busy_per_sample, ...]}
-        client_usages: dict[str, list[float]] = {}
-
-        for block in data:
-            # rc6 residency: percentage of time GPU is idle
-            rc6 = block.get("rc6", {}).get("value")
-            if rc6 is not None:
-                rc6_values.append(float(rc6))
-
-            global_engine = block.get("engines")
-
-            if global_engine:
-                render_frame = global_engine.get("Render/3D/0", {}).get("busy")
-                video_frame = global_engine.get("Video/0", {}).get("busy")
-
-                if render_frame is not None:
-                    render_global.append(float(render_frame))
-
-                if video_frame is not None:
-                    video_global.append(float(video_frame))
-
-            clients = block.get("clients", {})
-
-            if clients:
-                for client_block in clients.values():
-                    pid = client_block["pid"]
-
-                    if pid not in client_usages:
-                        client_usages[pid] = []
-
-                    # Sum all engine-class busy values for this client
-                    total_busy = 0.0
-                    for engine in client_block.get("engine-classes", {}).values():
-                        busy = engine.get("busy")
-                        if busy is not None:
-                            total_busy += float(busy)
-
-                    client_usages[pid].append(total_busy)
-
-        # Overall GPU usage from rc6 (idle) residency
-        if rc6_values:
-            rc6_avg = sum(rc6_values) / len(rc6_values)
-            results["gpu"] = f"{round(100.0 - rc6_avg, 2)}%"
-        results["mem"] = "-%"
-
-        # Compute: Render/3D engine (compute/shader workloads and QSV encode)
-        if render_global:
-            results["compute"] = f"{round(sum(render_global) / len(render_global), 2)}%"
-
-        # Decoder: Video engine (fixed-function codec)
-        if video_global:
-            results["dec"] = f"{round(sum(video_global) / len(video_global), 2)}%"
-
-        # Per-client GPU usage (sum of all engines per process)
-        if client_usages:
-            results["clients"] = {}
-            for pid, samples in client_usages.items():
-                if samples:
-                    results["clients"][pid] = (
-                        f"{round(sum(samples) / len(samples), 2)}%"
-                    )
-
-        return results
+_INTEL_FDINFO_SAMPLE_SECONDS = 1.0
+
+# Engines we track. Render/3D and Compute are pooled into "compute"; Video and
+# VideoEnhance into "dec" (VideoEnhance is the post-process engine that handles
+# VAAPI scaling/deinterlace/CSC, e.g. ffmpeg `-vf scale_vaapi=...`). The Copy
+# (DMA blitter) engine is intentionally ignored — it represents transparent
+# memory transfers, not user-visible GPU work.
+
+# i915 fdinfo keys (cumulative ns) → logical engine name.
+_I915_ENGINE_KEYS = {
+    "drm-engine-render": "render",
+    "drm-engine-video": "video",
+    "drm-engine-video-enhance": "video-enhance",
+    "drm-engine-compute": "compute",
+}
+
+# Xe fdinfo suffixes (cumulative cycles, paired with drm-total-cycles-*).
+_XE_ENGINE_KEYS = {
+    "rcs": "render",
+    "vcs": "video",
+    "vecs": "video-enhance",
+    "ccs": "compute",
+}
+
+
+def _resolve_intel_gpu_pdev(device: Optional[str]) -> Optional[str]:
+    """Map a configured GPU hint (/dev/dri/card1, renderD128, or a PCI bus
+    address) to its drm-pdev string so we can filter fdinfo entries to that
+    device. Returns None when no hint is supplied or it cannot be resolved."""
+    if not device:
+        return None
+    if re.match(r"^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F]$", device):
+        return device
+    name = os.path.basename(device.rstrip("/"))
+    try:
+        return os.path.basename(os.path.realpath(f"/sys/class/drm/{name}/device"))
+    except OSError:
+        return None
+
+
+def _read_intel_drm_fdinfo(target_pdev: Optional[str]) -> dict:
+    """Snapshot DRM fdinfo for every Intel client visible in /proc.
+
+    Returns a dict keyed by (pdev, drm-client-id, pid) so the same context
+    seen via multiple file descriptors on a single process collapses to one
+    entry.
+    """
+    snapshot: dict = {}
+
+    try:
+        proc_entries = os.listdir("/proc")
+    except OSError:
+        return snapshot
+
+    for entry in proc_entries:
+        if not entry.isdigit():
+            continue
+
+        fdinfo_dir = f"/proc/{entry}/fdinfo"
+        try:
+            fds = os.listdir(fdinfo_dir)
+        except (FileNotFoundError, PermissionError, NotADirectoryError, OSError):
+            continue
+
+        for fd in fds:
+            try:
+                with open(f"{fdinfo_dir}/{fd}") as f:
+                    content = f.read()
+            except (FileNotFoundError, PermissionError, OSError):
+                continue
+
+            if "drm-driver" not in content:
+                continue
+
+            fields: dict[str, str] = {}
+            for line in content.splitlines():
+                key, sep, value = line.partition(":")
+                if sep:
+                    fields[key.strip()] = value.strip()
+
+            driver = fields.get("drm-driver")
+            if driver not in ("i915", "xe"):
+                continue
+
+            pdev = fields.get("drm-pdev", "")
+            if target_pdev and pdev != target_pdev:
+                continue
+
+            client_id = fields.get("drm-client-id")
+            if not client_id:
+                continue
+
+            key = (pdev, client_id, entry)
+            if key in snapshot:
+                continue
+
+            engines: dict[str, tuple[int, int]] = {}
+
+            if driver == "i915":
+                for fkey, engine in _I915_ENGINE_KEYS.items():
+                    raw = fields.get(fkey)
+                    if not raw:
+                        continue
+                    try:
+                        engines[engine] = (int(raw.split()[0]), 0)
+                    except (ValueError, IndexError):
+                        continue
+            else:
+                for suffix, engine in _XE_ENGINE_KEYS.items():
+                    busy_raw = fields.get(f"drm-cycles-{suffix}")
+                    total_raw = fields.get(f"drm-total-cycles-{suffix}")
+                    if not (busy_raw and total_raw):
+                        continue
+                    try:
+                        engines[engine] = (
+                            int(busy_raw.split()[0]),
+                            int(total_raw.split()[0]),
+                        )
+                    except (ValueError, IndexError):
+                        continue
+
+            if not engines:
+                continue
+
+            snapshot[key] = {"driver": driver, "pid": entry, "engines": engines}
+
+    return snapshot
+
+
+def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, Any]]:
+    """Get stats by reading DRM fdinfo files.
+
+    Each DRM client FD exposes monotonic per-engine busy counters via
+    /proc/<pid>/fdinfo/<fd> (i915 since kernel 5.19, Xe since first release).
+    We sample twice and divide busy-time deltas by wall-clock to derive
+    utilization. Render/3D and Compute are pooled into "compute"; Video and
+    VideoEnhance into "dec". Overall "gpu" is the sum of those pools (clamped
+    to 100%).
+    """
+    target_pdev = _resolve_intel_gpu_pdev(intel_gpu_device)
+
+    snapshot_a = _read_intel_drm_fdinfo(target_pdev)
+    if not snapshot_a:
+        return None
+
+    start = time.monotonic()
+    time.sleep(_INTEL_FDINFO_SAMPLE_SECONDS)
+    elapsed_ns = (time.monotonic() - start) * 1e9
+
+    snapshot_b = _read_intel_drm_fdinfo(target_pdev)
+    if not snapshot_b or elapsed_ns <= 0:
+        return None
+
+    engine_pct: dict[str, float] = {
+        "render": 0.0,
+        "video": 0.0,
+        "video-enhance": 0.0,
+        "compute": 0.0,
+    }
+    pid_pct: dict[str, float] = {}
+
+    for key, data_b in snapshot_b.items():
+        data_a = snapshot_a.get(key)
+        if not data_a or data_a["driver"] != data_b["driver"]:
+            continue
+
+        client_total = 0.0
+        for engine, (busy_b, total_b) in data_b["engines"].items():
+            if engine not in engine_pct:
+                continue
+            busy_a, total_a = data_a["engines"].get(engine, (busy_b, total_b))
+
+            if data_b["driver"] == "i915":
+                delta = max(0, busy_b - busy_a)
+                pct = min(100.0, delta / elapsed_ns * 100.0)
+            else:
+                delta_busy = max(0, busy_b - busy_a)
+                delta_total = total_b - total_a
+                if delta_total <= 0:
+                    continue
+                pct = min(100.0, delta_busy / delta_total * 100.0)
+
+            engine_pct[engine] += pct
+            client_total += pct
+
+        pid_pct[data_b["pid"]] = pid_pct.get(data_b["pid"], 0.0) + client_total
+
+    for engine in engine_pct:
+        engine_pct[engine] = min(100.0, engine_pct[engine])
+
+    compute_pct = min(100.0, engine_pct["render"] + engine_pct["compute"])
+    dec_pct = min(100.0, engine_pct["video"] + engine_pct["video-enhance"])
+    overall_pct = min(100.0, compute_pct + dec_pct)
+
+    results: dict[str, Any] = {
+        "gpu": f"{round(overall_pct, 2)}%",
+        "mem": "-%",
+        "compute": f"{round(compute_pct, 2)}%",
+        "dec": f"{round(dec_pct, 2)}%",
+    }
+
+    if pid_pct:
+        results["clients"] = {
+            pid: f"{round(min(100.0, pct), 2)}%" for pid, pct in pid_pct.items()
+        }
+
+    return results
 
 
 def get_openvino_npu_stats() -> Optional[dict[str, str]]:
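
The two utilization formulas used for fdinfo sampling can be restated standalone: i915 reports cumulative busy nanoseconds, so utilization over the window is the busy delta divided by elapsed wall-clock nanoseconds, while Xe reports busy/total cycle pairs, so the ratio is busy-cycle delta over total-cycle delta. The helper names below are hypothetical, for illustration only.

```python
def i915_engine_pct(busy_a_ns: int, busy_b_ns: int, elapsed_ns: float) -> float:
    """Utilization from two cumulative busy-ns samples over elapsed_ns."""
    return min(100.0, max(0, busy_b_ns - busy_a_ns) / elapsed_ns * 100.0)


def xe_engine_pct(busy_a: int, busy_b: int, total_a: int, total_b: int) -> float:
    """Utilization from cumulative busy/total cycle counter pairs."""
    delta_total = total_b - total_a
    if delta_total <= 0:
        return 0.0
    return min(100.0, max(0, busy_b - busy_a) / delta_total * 100.0)


# 200 ms of busy time accrued over a 1 s window → 20%
assert i915_engine_pct(1_000_000_000, 1_200_000_000, 1e9) == 20.0
# 500 busy cycles out of 1000 elapsed total cycles → 50%
assert xe_engine_pct(0, 500, 0, 1000) == 50.0
```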

View File

@@ -31,7 +31,7 @@ test.describe("Replay — no active session @medium", () => {
     await expect(
       frigateApp.page.getByRole("heading", {
         level: 2,
-        name: /No Active Replay Session/i,
+        name: /No Active Debug Replay Session/i,
       }),
     ).toBeVisible({ timeout: 10_000 });
     const goButton = frigateApp.page.getByRole("button", {
@@ -48,7 +48,7 @@ test.describe("Replay — no active session @medium", () => {
     await expect(
       frigateApp.page.getByRole("heading", {
         level: 2,
-        name: /No Active Replay Session/i,
+        name: /No Active Debug Replay Session/i,
       }),
     ).toBeVisible({ timeout: 10_000 });
    await frigateApp.page
@@ -297,7 +297,7 @@ test.describe("Replay — mobile @medium @mobile", () => {
     await expect(
       frigateApp.page.getByRole("heading", {
         level: 2,
-        name: /No Active Replay Session/i,
+        name: /No Active Debug Replay Session/i,
       }),
     ).toBeVisible({ timeout: 10_000 });
   });

View File

@@ -485,6 +485,10 @@
     "hwaccel_args": {
       "label": "Export hwaccel args",
       "description": "Hardware acceleration args to use for export/transcode operations."
+    },
+    "max_concurrent": {
+      "label": "Maximum concurrent exports",
+      "description": "Maximum number of export jobs to process at the same time."
     }
   },
   "preview": {

View File

@@ -242,8 +242,8 @@
       "description": "Enable per-process network bandwidth monitoring for camera ffmpeg processes and detectors (requires capabilities)."
     },
     "intel_gpu_device": {
-      "label": "SR-IOV device",
-      "description": "Device identifier used when treating Intel GPUs as SR-IOV to fix GPU stats."
+      "label": "Intel GPU device",
+      "description": "PCI bus address or DRM device path (e.g. /dev/dri/card1) used to pin Intel GPU stats to a specific device when multiple are present."
     }
   },
   "version_check": {
@@ -1000,6 +1000,10 @@
     "hwaccel_args": {
       "label": "Export hwaccel args",
       "description": "Hardware acceleration args to use for export/transcode operations."
+    },
+    "max_concurrent": {
+      "label": "Maximum concurrent exports",
+      "description": "Maximum number of export jobs to process at the same time."
     }
   },
   "preview": {

View File

@@ -19,26 +19,31 @@
     "startLabel": "Start",
     "endLabel": "End",
     "toast": {
-      "success": "Debug replay started successfully",
       "error": "Failed to start debug replay: {{error}}",
       "alreadyActive": "A replay session is already active",
-      "stopped": "Debug replay stopped",
       "stopError": "Failed to stop debug replay: {{error}}",
       "goToReplay": "Go to Replay"
     }
   },
   "page": {
-    "noSession": "No Active Replay Session",
-    "noSessionDesc": "Start a debug replay from the History view by clicking the Debug Replay button in the toolbar.",
+    "noSession": "No Active Debug Replay Session",
+    "noSessionDesc": "Start a Debug Replay from History view by clicking the Actions button in the toolbar and choosing Debug Replay.",
     "goToRecordings": "Go to History",
+    "preparingClip": "Preparing clip…",
+    "preparingClipDesc": "Frigate is stitching together recordings for the selected time range. This can take a minute for longer ranges.",
+    "startingCamera": "Starting Debug Replay…",
+    "startError": {
+      "title": "Failed to start Debug Replay",
+      "back": "Back to History"
+    },
     "sourceCamera": "Source Camera",
     "replayCamera": "Replay Camera",
-    "initializingReplay": "Initializing replay...",
-    "stoppingReplay": "Stopping replay...",
+    "initializingReplay": "Initializing Debug Replay...",
+    "stoppingReplay": "Stopping Debug Replay...",
     "stopReplay": "Stop Replay",
     "confirmStop": {
       "title": "Stop Debug Replay?",
-      "description": "This will stop the replay session and clean up all temporary data. Are you sure?",
+      "description": "This will stop the session and clean up all temporary data. Are you sure?",
       "confirm": "Stop Replay",
       "cancel": "Cancel"
     },
@@ -49,6 +54,6 @@
     "activeTracking": "Active tracking",
     "noActiveTracking": "No active tracking",
     "configuration": "Configuration",
-    "configurationDesc": "Fine tune motion detection and object tracking settings for the debug replay camera. No changes are saved to your Frigate configuration file."
+    "configurationDesc": "Fine tune motion detection and object tracking settings for the Debug Replay camera. No changes are saved to your Frigate configuration file."
   }
 }

View File

@@ -20,7 +20,18 @@
     "overriddenGlobal": "Overridden (Global)",
     "overriddenGlobalTooltip": "This camera overrides global configuration settings in this section",
     "overriddenBaseConfig": "Overridden (Base Config)",
-    "overriddenBaseConfigTooltip": "The {{profile}} profile overrides configuration settings in this section"
+    "overriddenBaseConfigTooltip": "The {{profile}} profile overrides configuration settings in this section",
+    "overriddenInCameras": {
+      "label_one": "Overridden in {{count}} camera",
+      "label_other": "Overridden in {{count}} cameras",
+      "tooltip_one": "{{count}} camera overrides values in this section. Click to see details.",
+      "tooltip_other": "{{count}} cameras override values in this section. Click to see details.",
+      "heading_one": "This global section has fields that are overridden in {{count}} camera.",
+      "heading_other": "This global section has fields that are overridden in {{count}} cameras.",
+      "othersField_one": "{{count}} other",
+      "othersField_other": "{{count}} others",
+      "profilePrefix": "{{profile}} profile: {{fields}}"
+    }
   },
   "menu": {
     "general": "General",
View File

@@ -25,6 +25,7 @@ import {
 } from "./section-special-cases";
 import { getSectionValidation } from "../section-validations";
 import { useConfigOverride } from "@/hooks/use-config-override";
+import { CameraOverridesBadge } from "./CameraOverridesBadge";
 import { useSectionSchema } from "@/hooks/use-config-schema";
 import type { FrigateConfig } from "@/types/frigateConfig";
 import { Badge } from "@/components/ui/badge";
@@ -1263,6 +1264,9 @@ export function ConfigSection({
             </TooltipContent>
           </Tooltip>
         )}
+        {showOverrideIndicator && effectiveLevel === "global" && (
+          <CameraOverridesBadge sectionPath={sectionPath} />
+        )}
         {hasChanges && (
           <Badge variant="outline" className="text-xs">
             {t("button.modified", {
@@ -1334,6 +1338,9 @@ export function ConfigSection({
             </TooltipContent>
           </Tooltip>
         )}
+        {showOverrideIndicator && effectiveLevel === "global" && (
+          <CameraOverridesBadge sectionPath={sectionPath} />
+        )}
         {hasChanges && (
           <Badge
             variant="secondary"
View File

@ -0,0 +1,303 @@
import useSWR from "swr";
import { useMemo } from "react";
import { useTranslation } from "react-i18next";
import { Link } from "react-router-dom";
import { LuChevronDown } from "react-icons/lu";
import { Badge } from "@/components/ui/badge";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import {
CameraOverrideEntry,
FieldDelta,
useCamerasOverridingSection,
} from "@/hooks/use-config-override";
import type { FrigateConfig } from "@/types/frigateConfig";
import type { ProfilesApiResponse } from "@/types/profile";
import { humanizeKey } from "@/components/config-form/theme/utils/i18n";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { formatList } from "@/utils/stringUtil";
import { getSectionConfig } from "@/utils/configUtil";
const CAMERA_PAGE_BY_SECTION: Record<string, string> = {
detect: "cameraDetect",
ffmpeg: "cameraFfmpeg",
record: "cameraRecording",
snapshots: "cameraSnapshots",
motion: "cameraMotion",
objects: "cameraObjects",
review: "cameraReview",
audio: "cameraAudioEvents",
audio_transcription: "cameraAudioTranscription",
notifications: "cameraNotifications",
live: "cameraLivePlayback",
birdseye: "cameraBirdseye",
face_recognition: "cameraFaceRecognition",
lpr: "cameraLpr",
timestamp_style: "cameraTimestampStyle",
};
const MAX_FIELDS_PER_CAMERA = 5;
/**
* Enrichment sections where the cross-camera override badge should be
* suppressed because they're effectively global-only (or per-camera
* configuration there isn't a useful affordance to surface here).
* Face recognition and LPR are intentionally omitted so the badge does show
* on those enrichment pages.
*/
const SECTIONS_WITHOUT_OVERRIDE_BADGE = new Set([
"semantic_search",
"genai",
"classification",
"audio_transcription",
]);
/**
* Match a delta path against a hidden-field pattern. Supports literal prefixes
* (so a hidden field "streams" also hides "streams.foo.bar") and `*` wildcards
* matching exactly one path segment (e.g. "filters.*.mask").
*/
function pathMatchesHiddenPattern(path: string, pattern: string): boolean {
if (!pattern) return false;
if (!pattern.includes("*")) {
return path === pattern || path.startsWith(`${pattern}.`);
}
const patternSegments = pattern.split(".");
const pathSegments = path.split(".");
if (pathSegments.length < patternSegments.length) return false;
for (let i = 0; i < patternSegments.length; i += 1) {
if (patternSegments[i] === "*") continue;
if (patternSegments[i] !== pathSegments[i]) return false;
}
return true;
}
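
The matching rule above (a literal pattern hides itself and any deeper path under it; a `*` segment matches exactly one path segment, with trailing path segments ignored) can be restated in Python for clarity. This is a hypothetical re-implementation for illustration, not code from the component.

```python
def path_matches_hidden_pattern(path: str, pattern: str) -> bool:
    """Mirror of pathMatchesHiddenPattern: literal prefix match, or
    segment-wise match where "*" consumes exactly one segment."""
    if not pattern:
        return False
    if "*" not in pattern:
        # literal: exact match or any deeper dotted path under it
        return path == pattern or path.startswith(pattern + ".")
    pat = pattern.split(".")
    seg = path.split(".")
    if len(seg) < len(pat):
        return False
    # only the first len(pat) path segments are checked
    return all(p == "*" or p == s for p, s in zip(pat, seg))


assert path_matches_hidden_pattern("streams.foo.bar", "streams")
assert path_matches_hidden_pattern("filters.person.mask", "filters.*.mask")
assert not path_matches_hidden_pattern("filters.mask", "filters.*.mask")
```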
type CameraEntryProps = {
sectionPath: string;
entry: CameraOverrideEntry;
cameraPage?: string;
};
type SourceGroup = {
/** undefined → camera-level; string → profile name */
profileName: string | undefined;
deltas: FieldDelta[];
};
function groupDeltasBySource(deltas: FieldDelta[]): SourceGroup[] {
const cameraDeltas: FieldDelta[] = [];
const byProfile = new Map<string, FieldDelta[]>();
for (const delta of deltas) {
if (delta.profileName) {
const arr = byProfile.get(delta.profileName) ?? [];
arr.push(delta);
byProfile.set(delta.profileName, arr);
} else {
cameraDeltas.push(delta);
}
}
const groups: SourceGroup[] = [];
if (cameraDeltas.length > 0) {
groups.push({ profileName: undefined, deltas: cameraDeltas });
}
for (const [profileName, group] of byProfile) {
groups.push({ profileName, deltas: group });
}
return groups;
}
function CameraEntry({ sectionPath, entry, cameraPage }: CameraEntryProps) {
const { t, i18n } = useTranslation([
"config/global",
"views/settings",
"objects",
]);
const friendlyName = useCameraFriendlyName(entry.camera);
const { data: profilesData } = useSWR<ProfilesApiResponse>("profiles");
const profileFriendlyNames = useMemo(() => {
const map = new Map<string, string>();
profilesData?.profiles?.forEach((p) => map.set(p.name, p.friendly_name));
return map;
}, [profilesData]);
const fieldLabel = (fieldPath: string) => {
if (!fieldPath) {
const sectionKey = `${sectionPath}.label`;
return i18n.exists(sectionKey, { ns: "config/global" })
? t(sectionKey, { ns: "config/global" })
: humanizeKey(sectionPath);
}
const segments = fieldPath.split(".");
// Most specific: try the full nested path
const fullKey = `${sectionPath}.${fieldPath}.label`;
if (i18n.exists(fullKey, { ns: "config/global" })) {
return t(fullKey, { ns: "config/global" });
}
// Try dropping each intermediate segment in turn — those are typically
// user-defined dict keys (object class names, zone names, etc.) that
// don't have their own label entries. Prepend the dropped segment as
// context to disambiguate (e.g. "Person · Minimum object area").
for (let i = 0; i < segments.length; i++) {
const reduced = [...segments.slice(0, i), ...segments.slice(i + 1)].join(
".",
);
if (!reduced) continue;
const reducedKey = `${sectionPath}.${reduced}.label`;
if (i18n.exists(reducedKey, { ns: "config/global" })) {
const resolvedLabel = t(reducedKey, { ns: "config/global" });
const dropped = segments[i];
// Object class names ("person", "car", "fox") have translations in
// the `objects` namespace; fall back to humanizing the raw key for
// anything that isn't a known label.
const droppedLabel = i18n.exists(dropped, { ns: "objects" })
? t(dropped, { ns: "objects" })
: humanizeKey(dropped);
return `${droppedLabel} · ${resolvedLabel}`;
}
}
// Last resort: humanize the leaf segment
return humanizeKey(segments[segments.length - 1]);
};
const formatDeltas = (deltas: FieldDelta[]) => {
const visibleLabels = deltas
.slice(0, MAX_FIELDS_PER_CAMERA)
.map((delta) => fieldLabel(delta.fieldPath));
const hiddenCount = deltas.length - visibleLabels.length;
const labelsForList =
hiddenCount > 0
? [
...visibleLabels,
t("button.overriddenInCameras.othersField", {
ns: "views/settings",
count: hiddenCount,
}),
]
: visibleLabels;
return formatList(labelsForList);
};
const groups = groupDeltasBySource(entry.fieldDeltas);
return (
<div className="flex flex-col gap-0.5 text-xs">
{cameraPage ? (
<Link
to={`/settings?page=${cameraPage}&camera=${encodeURIComponent(entry.camera)}`}
className="font-medium hover:underline"
>
{friendlyName}
</Link>
) : (
<span className="font-medium">{friendlyName}</span>
)}
{groups.map((group) => (
<span
key={group.profileName ?? "__camera__"}
className="ml-2 text-muted-foreground"
>
{group.profileName
? t("button.overriddenInCameras.profilePrefix", {
ns: "views/settings",
profile:
profileFriendlyNames.get(group.profileName) ??
group.profileName,
fields: formatDeltas(group.deltas),
})
: formatDeltas(group.deltas)}
</span>
))}
</div>
);
}
type Props = {
sectionPath: string;
className?: string;
};
export function CameraOverridesBadge({ sectionPath, className }: Props) {
const { data: config } = useSWR<FrigateConfig>("config");
const { t } = useTranslation(["views/settings"]);
const rawEntries = useCamerasOverridingSection(config, sectionPath);
const entries = useMemo(() => {
const hiddenFields =
getSectionConfig(sectionPath, "global").hiddenFields ?? [];
if (hiddenFields.length === 0) return rawEntries;
return rawEntries
.map((entry) => ({
...entry,
fieldDeltas: entry.fieldDeltas.filter(
(delta) =>
!hiddenFields.some((pattern) =>
pathMatchesHiddenPattern(delta.fieldPath, pattern),
),
),
}))
.filter((entry) => entry.fieldDeltas.length > 0);
}, [rawEntries, sectionPath]);
if (SECTIONS_WITHOUT_OVERRIDE_BADGE.has(sectionPath)) {
return null;
}
if (entries.length === 0) {
return null;
}
const cameraPage = CAMERA_PAGE_BY_SECTION[sectionPath];
const count = entries.length;
return (
<Popover>
<PopoverTrigger asChild>
<Badge
variant="secondary"
className={`cursor-pointer border-2 border-selected text-xs text-primary-variant ${className ?? ""}`}
aria-label={t("button.overriddenInCameras.tooltip", {
ns: "views/settings",
count: count,
})}
>
<span>
{t("button.overriddenInCameras.label", {
ns: "views/settings",
count: count,
})}
</span>
<LuChevronDown className="ml-1 size-3" />
</Badge>
</PopoverTrigger>
<PopoverContent align="start" className="w-80 max-w-[90vw] pr-0">
<div className="flex flex-col gap-3">
<div className="pr-4 text-xs text-primary-variant">
{t("button.overriddenInCameras.heading", {
ns: "views/settings",
count: count,
})}
</div>
<div className="scrollbar-container flex max-h-[40dvh] flex-col gap-2 overflow-y-auto pr-4">
{entries.map((entry) => (
<CameraEntry
key={entry.camera}
sectionPath={sectionPath}
entry={entry}
cameraPage={cameraPage}
/>
))}
</div>
</div>
</PopoverContent>
</Popover>
);
}
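The segment-dropping fallback in `fieldLabel` above can be exercised in isolation. A minimal self-contained sketch, using a plain `Set` of known keys as a hypothetical stand-in for the `i18n.exists()` lookup (names here are illustrative, not Frigate's API):

```typescript
// Sketch of the fieldLabel fallback: drop each intermediate path segment
// in turn until a known label key matches, then prepend the dropped
// (user-defined) segment as context. `known` stands in for i18n.exists().
function resolveLabel(
  sectionPath: string,
  fieldPath: string,
  known: Set<string>,
): string {
  const segments = fieldPath.split(".");
  const fullKey = `${sectionPath}.${fieldPath}.label`;
  if (known.has(fullKey)) return fullKey;
  for (let i = 0; i < segments.length; i++) {
    const reduced = [
      ...segments.slice(0, i),
      ...segments.slice(i + 1),
    ].join(".");
    if (!reduced) continue;
    const reducedKey = `${sectionPath}.${reduced}.label`;
    if (known.has(reducedKey)) {
      return `${segments[i]} · ${reducedKey}`;
    }
  }
  // Last resort: the leaf segment itself
  return segments[segments.length - 1];
}

// "person" is a user-defined dict key with no label entry of its own,
// so it is dropped and the remaining path resolves.
const known = new Set(["objects.filters.min_area.label"]);
console.log(resolveLabel("objects", "filters.person.min_area", known));
```

This mirrors the component's behavior of surfacing "Person · Minimum object area" for per-class filter overrides, minus the translation layer.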


@@ -2,7 +2,7 @@
import { WidgetProps } from "@rjsf/utils";
import { SwitchesWidget } from "./SwitchesWidget";
import { FormContext } from "./SwitchesWidget";
-import { getTranslatedLabel } from "@/utils/i18n";
+import i18n, { getTranslatedLabel } from "@/utils/i18n";
import { FrigateConfig } from "@/types/frigateConfig";
import { JsonObject } from "@/types/configForm";
@@ -76,7 +76,12 @@ function getObjectLabels(context: FormContext): string[] {
    ...sourceLabels,
    ...formDataLabels,
  ]);
-  return [...combinedLabels].sort();
+  return [...combinedLabels].sort((a, b) =>
+    getObjectLabelDisplayName(a).localeCompare(
+      getObjectLabelDisplayName(b),
+      i18n.language,
+    ),
+  );
}
function getObjectLabelDisplayName(label: string): string {
@@ -94,6 +99,7 @@ export function ObjectLabelSwitchesWidget(props: WidgetProps) {
        i18nKey: "objectLabels",
        listClassName:
          "relative max-h-none overflow-visible md:max-h-64 md:overflow-y-auto md:overscroll-contain md:scrollbar-container",
+       enableSearch: true,
      }}
    />
  );
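The sort change above orders labels by their translated display name rather than their raw key. A small illustration of why that matters (the `displayNames` map is a hypothetical stand-in for `getObjectLabelDisplayName`):

```typescript
// Sorting raw keys vs. sorting by display name: the two orders diverge
// as soon as translations don't share the keys' alphabetical order.
const displayNames = new Map<string, string>([
  ["car", "Voiture"],
  ["person", "Personne"],
  ["dog", "Chien"],
]);
const labels = ["person", "car", "dog"];

// Naive key sort: car, dog, person
const byKey = [...labels].sort();

// Display-name sort with an explicit locale, like the patched code:
// Chien (dog), Personne (person), Voiture (car)
const byDisplay = [...labels].sort((a, b) =>
  (displayNames.get(a) ?? a).localeCompare(displayNames.get(b) ?? b, "fr"),
);

console.log(byKey);
console.log(byDisplay);
```

Passing `i18n.language` as the locale also makes `localeCompare` handle accented characters correctly for the active language.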


@@ -90,10 +90,6 @@ export default function SearchResultActions({
  const handleDebugReplay = useCallback(
    (event: SearchResult) => {
      setIsStarting(true);
-     const toastId = toast.loading(
-       t("dialog.starting", { ns: "views/replay" }),
-       { position: "top-center" },
-     );
      axios
        .post("debug_replay/start", {
@@ -102,11 +98,7 @@
          end_time: event.end_time,
        })
        .then((response) => {
-         if (response.status === 200) {
-           toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-             id: toastId,
-             position: "top-center",
-           });
+         if (response.status === 202 || response.status === 200) {
            navigate("/replay");
          }
        })
@@ -120,7 +112,6 @@
          toast.error(
            t("dialog.toast.alreadyActive", { ns: "views/replay" }),
            {
-             id: toastId,
              position: "top-center",
              closeButton: true,
              dismissible: false,
@@ -135,7 +126,6 @@
          );
        } else {
          toast.error(t("dialog.toast.error", { error: errorMessage }), {
-           id: toastId,
            position: "top-center",
          });
        }


@@ -209,10 +209,7 @@ export default function DebugReplayDialog({
        end_time: range.before,
      })
      .then((response) => {
-       if (response.status === 200) {
-         toast.success(t("dialog.toast.success"), {
-           position: "top-center",
-         });
+       if (response.status === 202 || response.status === 200) {
          setMode("none");
          setRange(undefined);
          navigate("/replay");


@@ -262,10 +262,7 @@ export default function MobileReviewSettingsDrawer({
        end_time: debugReplayRange.before,
      });
-     if (response.status === 200) {
-       toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-         position: "top-center",
-       });
+     if (response.status === 202 || response.status === 200) {
        setDebugReplayMode("none");
        setDebugReplayRange(undefined);
        setDrawerMode("none");


@@ -53,10 +53,6 @@ export default function EventMenu({
  const handleDebugReplay = useCallback(
    (event: Event) => {
      setIsStarting(true);
-     const toastId = toast.loading(
-       t("dialog.starting", { ns: "views/replay" }),
-       { position: "top-center" },
-     );
      axios
        .post("debug_replay/start", {
@@ -65,11 +61,7 @@
          end_time: event.end_time,
        })
        .then((response) => {
-         if (response.status === 200) {
-           toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-             id: toastId,
-             position: "top-center",
-           });
+         if (response.status === 202 || response.status === 200) {
            navigate("/replay");
          }
        })
@@ -83,7 +75,6 @@
          toast.error(
            t("dialog.toast.alreadyActive", { ns: "views/replay" }),
            {
-             id: toastId,
              position: "top-center",
              closeButton: true,
              dismissible: false,
@@ -98,7 +89,6 @@
          );
        } else {
          toast.error(t("dialog.toast.error", { error: errorMessage }), {
-           id: toastId,
            position: "top-center",
          });
        }


@@ -202,6 +202,49 @@ export function useConfigOverride({
  }, [config, cameraName, sectionPath, compareFields]);
}
/**
* Sections that can be overridden per-camera, with optional compareFields
* filters that scope the override comparison to a subset of fields.
*/
export const OVERRIDABLE_SECTIONS: ReadonlyArray<{
key: string;
compareFields?: string[];
}> = [
{ key: "detect" },
{ key: "record" },
{ key: "snapshots" },
{ key: "motion" },
{ key: "objects" },
{ key: "review" },
{ key: "audio" },
{ key: "notifications" },
{ key: "live" },
{ key: "timestamp_style" },
{
key: "audio_transcription",
compareFields: ["enabled", "live_enabled"],
},
{ key: "birdseye", compareFields: ["enabled", "mode"] },
{ key: "face_recognition", compareFields: ["enabled", "min_area"] },
{
key: "ffmpeg",
compareFields: [
"path",
"global_args",
"hwaccel_args",
"input_args",
"output_args",
"retry_interval",
"apple_compatibility",
"gpu",
],
},
{
key: "lpr",
compareFields: ["enabled", "min_area", "enhancement"],
},
];
/**
 * Hook to get all overridden fields for a camera
 */
@@ -221,47 +264,7 @@ export function useAllCameraOverrides(
    const overriddenSections: string[] = [];
-   // Check each section that can be overridden
-   const sectionsToCheck: Array<{
-     key: string;
-     compareFields?: string[];
-   }> = [
-     { key: "detect" },
-     { key: "record" },
-     { key: "snapshots" },
-     { key: "motion" },
-     { key: "objects" },
-     { key: "review" },
-     { key: "audio" },
-     { key: "notifications" },
-     { key: "live" },
-     { key: "timestamp_style" },
-     {
-       key: "audio_transcription",
-       compareFields: ["enabled", "live_enabled"],
-     },
-     { key: "birdseye", compareFields: ["enabled", "mode"] },
-     { key: "face_recognition", compareFields: ["enabled", "min_area"] },
-     {
-       key: "ffmpeg",
-       compareFields: [
-         "path",
-         "global_args",
-         "hwaccel_args",
-         "input_args",
-         "output_args",
-         "retry_interval",
-         "apple_compatibility",
-         "gpu",
-       ],
-     },
-     {
-       key: "lpr",
-       compareFields: ["enabled", "min_area", "enhancement"],
-     },
-   ];
-   for (const { key, compareFields } of sectionsToCheck) {
+   for (const { key, compareFields } of OVERRIDABLE_SECTIONS) {
    const globalValue = normalizeConfigValue(get(config, key));
    const cameraValue = normalizeConfigValue(
      getBaseCameraSectionValue(config, cameraName, key),
@@ -286,3 +289,252 @@
    return overriddenSections;
  }, [config, cameraName]);
}
export interface FieldDelta {
/** Path relative to the section (e.g. "genai.enabled") */
fieldPath: string;
globalValue: unknown;
cameraValue: unknown;
/** Profile name when the override originates from a profile; undefined for camera-level overrides */
profileName?: string;
}
export interface CameraOverrideEntry {
camera: string;
fieldDeltas: FieldDelta[];
}
/**
* Collect leaf-level field differences between a global section value
* and a camera section value. When compareFields is provided, only those
* paths are compared; otherwise the objects are walked recursively.
*/
function collectFieldDeltas(
globalValue: JsonValue,
cameraValue: JsonValue,
compareFields?: string[],
pathPrefix = "",
): FieldDelta[] {
if (compareFields) {
if (compareFields.length === 0) {
return [];
}
const deltas: FieldDelta[] = [];
for (const path of compareFields) {
const g = get(globalValue, path);
const c = get(cameraValue, path);
if (!isEqual(g, c)) {
deltas.push({ fieldPath: path, globalValue: g, cameraValue: c });
}
}
return deltas;
}
if (isJsonObject(globalValue) && isJsonObject(cameraValue)) {
const deltas: FieldDelta[] = [];
const keys = new Set([
...Object.keys(globalValue),
...Object.keys(cameraValue),
]);
for (const key of keys) {
const g = (globalValue as JsonObject)[key];
const c = (cameraValue as JsonObject)[key];
if (isEqual(g, c)) continue;
const childPath = pathPrefix ? `${pathPrefix}.${key}` : key;
if (isJsonObject(g) && isJsonObject(c)) {
deltas.push(...collectFieldDeltas(g, c, undefined, childPath));
} else {
deltas.push({ fieldPath: childPath, globalValue: g, cameraValue: c });
}
}
return deltas;
}
if (!isEqual(globalValue, cameraValue)) {
return [{ fieldPath: pathPrefix, globalValue, cameraValue }];
}
return [];
}
/**
* Walk a partial config object and return the dot-paths of every leaf value
* (primitive or array) actually defined on it. Used to limit profile-vs-global
* diffs to keys the profile actually sets, avoiding false "undefined" deltas
* for fields the profile leaves unspecified.
*/
function collectDefinedLeafPaths(value: JsonValue, prefix = ""): string[] {
if (!isJsonObject(value)) {
return prefix ? [prefix] : [];
}
const paths: string[] = [];
for (const [key, val] of Object.entries(value as JsonObject)) {
const childPath = prefix ? `${prefix}.${key}` : key;
if (isJsonObject(val)) {
paths.push(...collectDefinedLeafPaths(val as JsonValue, childPath));
} else {
paths.push(childPath);
}
}
return paths;
}
function isPathAllowed(path: string, compareFields?: string[]): boolean {
if (!compareFields) return true;
return compareFields.some(
(allowed) => path === allowed || path.startsWith(`${allowed}.`),
);
}
/**
* Some Frigate sections (notably `motion`) are dumped by the backend with
* `exclude_unset=True`, so when the user hasn't explicitly written the section
* in their global YAML the API returns null even though every camera still
* gets defaults applied at runtime. To still detect cross-camera differences
* in those sections we synthesize a baseline by taking the modal (most common)
 * value at each leaf path across cameras; cameras whose value diverges from
* the modal are treated as overriding.
*/
function deriveSyntheticGlobalValue(
cameraSectionValues: JsonValue[],
compareFields?: string[],
): JsonObject {
const cameras = cameraSectionValues.filter(isJsonObject) as JsonObject[];
if (cameras.length === 0) return {};
const allPaths = new Set<string>();
for (const cam of cameras) {
for (const path of collectDefinedLeafPaths(cam as JsonValue)) {
if (!isPathAllowed(path, compareFields)) continue;
allPaths.add(path);
}
}
const baseline: JsonObject = {};
for (const path of allPaths) {
const counts = new Map<string, { value: unknown; count: number }>();
for (const cam of cameras) {
const v = get(cam, path);
const key = JSON.stringify(v ?? null);
const existing = counts.get(key);
if (existing) {
existing.count += 1;
} else {
counts.set(key, { value: v, count: 1 });
}
}
let modal: { value: unknown; count: number } | undefined;
for (const entry of counts.values()) {
if (!modal || entry.count > modal.count) modal = entry;
}
if (modal) {
set(baseline, path, modal.value);
}
}
return baseline;
}
/**
* Paths that are intentionally hidden from the cross-camera override summary
* because they're inherently per-camera (mask polygons, zone definitions) and
* would otherwise dominate the popover with noise. Excludes any path where
* `mask` appears as a path segment, so nested keys under a mask dict (e.g.
* `mask.global_object_mask_1.coordinates`) are also filtered.
*/
function isCrossCameraIgnoredPath(path: string): boolean {
if (!path) return false;
return path.split(".").includes("mask");
}
/**
* Hook to find every camera that overrides a given global section. Returns
* one entry per overriding camera with the specific field-level deltas.
* Considers both the camera's own (pre-profile) section value and any of its
* defined profiles, so a field overridden only inside a profile still surfaces.
*
* @example
* ```tsx
* const entries = useCamerasOverridingSection(config, "review");
* // [{ camera: "front_door", fieldDeltas: [{ fieldPath: "genai.enabled", ... }] }]
* ```
*/
export function useCamerasOverridingSection(
config: FrigateConfig | undefined,
sectionPath: string,
): CameraOverrideEntry[] {
return useMemo(() => {
if (!config?.cameras || !sectionPath) {
return [];
}
const sectionMeta = OVERRIDABLE_SECTIONS.find((s) => s.key === sectionPath);
const compareFields = sectionMeta?.compareFields;
const cameraNames = Object.keys(config.cameras);
const cameraSectionValues = cameraNames.map((name) =>
normalizeConfigValue(
getBaseCameraSectionValue(config, name, sectionPath),
),
);
const rawGlobalValue = get(config, sectionPath);
const globalValue: JsonValue =
rawGlobalValue == null
? deriveSyntheticGlobalValue(cameraSectionValues, compareFields)
: normalizeConfigValue(rawGlobalValue);
const entries: CameraOverrideEntry[] = [];
for (let idx = 0; idx < cameraNames.length; idx += 1) {
const cameraName = cameraNames[idx];
const cameraConfig = config.cameras[cameraName];
const deltasByPath = new Map<string, FieldDelta>();
// 1. Camera-level overrides (uses base_config when a profile is active)
const cameraValue = cameraSectionValues[idx];
for (const delta of collectFieldDeltas(
globalValue,
cameraValue,
compareFields,
)) {
if (isCrossCameraIgnoredPath(delta.fieldPath)) continue;
deltasByPath.set(delta.fieldPath, delta);
}
// 2. Profile-level overrides — diff only the paths each profile actually
// defines, so unspecified-in-profile fields don't register as deltas.
const profiles = cameraConfig?.profiles ?? {};
for (const profileName of Object.keys(profiles)) {
const profileSection = (
profiles[profileName] as Record<string, unknown> | undefined
)?.[sectionPath];
if (profileSection === undefined) continue;
const normalizedProfile = normalizeConfigValue(
profileSection as JsonValue,
);
for (const path of collectDefinedLeafPaths(normalizedProfile)) {
if (deltasByPath.has(path)) continue;
if (isCrossCameraIgnoredPath(path)) continue;
if (!isPathAllowed(path, compareFields)) continue;
const g = get(globalValue, path);
const p = get(normalizedProfile, path);
if (!isEqual(g, p)) {
deltasByPath.set(path, {
fieldPath: path,
globalValue: g,
cameraValue: p,
profileName,
});
}
}
}
if (deltasByPath.size > 0) {
entries.push({
camera: cameraName,
fieldDeltas: Array.from(deltasByPath.values()),
});
}
}
return entries;
}, [config, sectionPath]);
}
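The modal-baseline derivation in `deriveSyntheticGlobalValue` above can be exercised with a toy input: given three cameras where two share a threshold and one diverges, the most common value becomes the synthetic "global", so only the divergent camera registers a delta. A simplified self-contained sketch (flat objects only, no lodash, illustrative rather than Frigate's actual helper):

```typescript
// Simplified modal-baseline derivation over flat camera sections.
// The most common value per key wins, mirroring the JSON.stringify
// counting approach used in deriveSyntheticGlobalValue.
function modalBaseline(
  cameras: Record<string, unknown>[],
): Record<string, unknown> {
  const baseline: Record<string, unknown> = {};
  const keys = new Set(cameras.flatMap((c) => Object.keys(c)));
  for (const key of keys) {
    const counts = new Map<string, { value: unknown; count: number }>();
    for (const cam of cameras) {
      const v = cam[key];
      // Serialize so object/array values can be counted by equality
      const k = JSON.stringify(v ?? null);
      const e = counts.get(k);
      if (e) e.count += 1;
      else counts.set(k, { value: v, count: 1 });
    }
    let modal: { value: unknown; count: number } | undefined;
    for (const e of counts.values()) {
      if (!modal || e.count > modal.count) modal = e;
    }
    if (modal) baseline[key] = modal.value;
  }
  return baseline;
}

const base = modalBaseline([
  { threshold: 30 },
  { threshold: 30 },
  { threshold: 50 },
]);
console.log(base.threshold); // 30: the modal value wins
```

Diffing each camera against this baseline then flags only the `threshold: 50` camera as overriding, even though no global `motion` section exists in the config.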


@@ -42,7 +42,9 @@ import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { getIconForLabel } from "@/utils/iconUtil";
import { getTranslatedLabel } from "@/utils/i18n";
import { Card } from "@/components/ui/card";
+import { Progress } from "@/components/ui/progress";
import { ObjectType } from "@/types/ws";
+import { useJobStatus } from "@/api/ws";
import WsMessageFeed from "@/components/ws/WsMessageFeed";
import { ConfigSectionTemplate } from "@/components/config-form/sections/ConfigSectionTemplate";
@@ -53,6 +55,7 @@ import { isDesktop, isMobile } from "react-device-detect";
import Logo from "@/components/Logo";
import { Separator } from "@/components/ui/separator";
import { useDocDomain } from "@/hooks/use-doc-domain";
+import { useConfigSchema } from "@/hooks/use-config-schema";
import DebugDrawingLayer from "@/components/overlay/DebugDrawingLayer";
import { IoMdArrowRoundBack } from "react-icons/io";
@@ -65,6 +68,15 @@
  live_ready: boolean;
};
+type DebugReplayJobResults = {
+  current_step: "preparing_clip" | "starting_camera" | null;
+  progress_percent: number | null;
+  source_camera: string | null;
+  replay_camera_name: string | null;
+  start_ts: number | null;
+  end_ts: number | null;
+};
type DebugOptions = {
  bbox: boolean;
  timestamp: boolean;
@@ -105,8 +117,6 @@ const DEBUG_OPTION_I18N_KEY: Record<keyof DebugOptions, string> = {
  paths: "paths",
};
-const REPLAY_INIT_SKELETON_TIMEOUT_MS = 8000;
export default function Replay() {
  const { t } = useTranslation(["views/replay", "views/settings", "common"]);
  const navigate = useNavigate();
@@ -119,6 +129,9 @@
  } = useSWR<DebugReplayStatus>("debug_replay/status", {
    refreshInterval: 1000,
  });
+ const { payload: replayJob } =
+   useJobStatus<DebugReplayJobResults>("debug_replay");
+ const configSchema = useConfigSchema();
  const [isInitializing, setIsInitializing] = useState(true);
  // Refresh status immediately on mount to avoid showing "no session" briefly
@@ -130,12 +143,6 @@
    initializeStatus();
  }, [refreshStatus]);
- useEffect(() => {
-   if (status?.live_ready) {
-     setShowReplayInitSkeleton(false);
-   }
- }, [status?.live_ready]);
  const [options, setOptions] = useState<DebugOptions>(DEFAULT_OPTIONS);
  const [isStopping, setIsStopping] = useState(false);
  const [configDialogOpen, setConfigDialogOpen] = useState(false);
@@ -160,11 +167,7 @@
    axios
      .post("debug_replay/stop")
      .then(() => {
-       toast.success(t("dialog.toast.stopped"), {
-         position: "top-center",
-       });
        refreshStatus();
-       navigate("/review");
      })
      .catch((error) => {
        const errorMessage =
@@ -178,7 +181,7 @@
      .finally(() => {
        setIsStopping(false);
      });
- }, [navigate, refreshStatus, t]);
+ }, [refreshStatus, t]);
  // Camera activity for the replay camera
  const { data: config } = useSWR<FrigateConfig>("config", {
@@ -191,35 +194,10 @@
  const { objects } = useCameraActivity(replayCameraConfig);
- const [showReplayInitSkeleton, setShowReplayInitSkeleton] = useState(false);
  // debug draw
  const containerRef = useRef<HTMLDivElement>(null);
  const [debugDraw, setDebugDraw] = useState(false);
- useEffect(() => {
-   if (!status?.active || !status.replay_camera) {
-     setShowReplayInitSkeleton(false);
-     return;
-   }
-   setShowReplayInitSkeleton(true);
-   const timeout = window.setTimeout(() => {
-     setShowReplayInitSkeleton(false);
-   }, REPLAY_INIT_SKELETON_TIMEOUT_MS);
-   return () => {
-     window.clearTimeout(timeout);
-   };
- }, [status?.active, status?.replay_camera]);
- useEffect(() => {
-   if (status?.live_ready) {
-     setShowReplayInitSkeleton(false);
-   }
- }, [status?.live_ready]);
  // Format time range for display
  const timeRangeDisplay = useMemo(() => {
    if (!status?.start_time || !status?.end_time) return "";
@@ -237,8 +215,39 @@
    );
  }
- // No active session
- if (!status?.active) {
+ // Startup error (job failed). Only show when status.active is also true so
+ // we don't surface stale failed jobs after a session ended cleanly.
+ if (replayJob?.status === "failed" && status?.active) {
return (
<div className="flex size-full flex-col items-center justify-center gap-4 p-8">
<Heading as="h2" className="text-center">
{t("page.startError.title")}
</Heading>
{replayJob.error_message && (
<p className="max-w-xl text-center text-sm text-muted-foreground">
{replayJob.error_message}
</p>
)}
<Button
variant="default"
onClick={() => {
axios
.post("debug_replay/stop")
.catch(() => {})
.finally(() => navigate("/review"));
}}
>
{t("page.startError.back")}
</Button>
</div>
);
}
// No active session. Also covers the brief window between the runner
// pushing job.status = "cancelled" via WS and the next SWR refresh
// flipping status.active to false — without this, render falls through
// to the full replay UI and you see a flash of it before stop completes.
if (!status?.active || replayJob?.status === "cancelled") {
    return (
      <div className="flex size-full flex-col items-center justify-center gap-4 p-8">
        <MdReplay className="size-12" />
@@ -255,6 +264,52 @@
    );
  }
// Startup in progress (job is running). The session is active but the
// replay camera isn't ready yet; show progress / phase from the job.
const startupStep =
replayJob?.status === "running"
? (replayJob.results?.current_step ?? null)
: null;
if (startupStep === "preparing_clip" || startupStep === "starting_camera") {
const phaseTitle =
startupStep === "preparing_clip"
? t("page.preparingClip")
: t("page.startingCamera");
const progressPercent = replayJob?.results?.progress_percent ?? null;
const showProgressBar =
startupStep === "preparing_clip" && progressPercent != null;
return (
<div className="flex size-full flex-col items-center justify-center gap-4 p-8">
{showProgressBar ? (
<div className="flex w-64 flex-col items-center gap-2">
<Progress value={progressPercent ?? 0} />
<div className="text-xs text-muted-foreground">
{Math.round(progressPercent ?? 0)}%
</div>
</div>
) : (
<ActivityIndicator className="size-8" />
)}
<Heading as="h3" className="text-center">
{phaseTitle}
</Heading>
{startupStep === "preparing_clip" && (
<p className="max-w-md text-center text-sm text-muted-foreground">
{t("page.preparingClipDesc")}
</p>
)}
<Button
variant="outline"
size="sm"
disabled={isStopping}
onClick={handleStop}
>
{t("button.cancel", { ns: "common" })}
</Button>
</div>
);
}
  return (
    <div className="flex size-full flex-col overflow-hidden">
      <Toaster position="top-center" closeButton={true} />
@@ -345,27 +400,30 @@
      ) : (
        status.replay_camera && (
          <div className="relative size-full min-h-10" ref={containerRef}>
-           <AutoUpdatingCameraImage
-             className="size-full"
-             cameraClasses="relative w-full h-full flex flex-col justify-start"
-             searchParams={searchParams}
-             camera={status.replay_camera}
-             showFps={false}
-           />
-           {debugDraw && (
-             <DebugDrawingLayer
-               containerRef={containerRef}
-               cameraWidth={
-                 config?.cameras?.[status.source_camera ?? ""]?.detect
-                   .width ?? 1280
-               }
-               cameraHeight={
-                 config?.cameras?.[status.source_camera ?? ""]?.detect
-                   .height ?? 720
-               }
-             />
-           )}
-           {showReplayInitSkeleton && (
+           {status.live_ready ? (
+             <>
+               <AutoUpdatingCameraImage
+                 className="size-full"
+                 cameraClasses="relative w-full h-full flex flex-col justify-start"
+                 searchParams={searchParams}
+                 camera={status.replay_camera}
+                 showFps={false}
+               />
+               {debugDraw && (
+                 <DebugDrawingLayer
+                   containerRef={containerRef}
+                   cameraWidth={
+                     config?.cameras?.[status.source_camera ?? ""]?.detect
+                       .width ?? 1280
+                   }
+                   cameraHeight={
+                     config?.cameras?.[status.source_camera ?? ""]?.detect
+                       .height ?? 720
+                   }
+                 />
+               )}
+             </>
+           ) : (
              <div className="pointer-events-none absolute inset-0 z-10 size-full rounded-lg bg-background">
                <Skeleton className="size-full rounded-lg" />
                <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center gap-2">
@@ -595,32 +653,38 @@
          {t("page.configurationDesc")}
        </DialogDescription>
      </DialogHeader>
-     <div className="space-y-6">
-       <ConfigSectionTemplate
-         sectionKey="motion"
-         level="replay"
-         cameraName={status.replay_camera ?? undefined}
-         skipSave
-         noStickyButtons
-         requiresRestart={false}
-         collapsible
-         defaultCollapsed={false}
-         showTitle
-         showOverrideIndicator={false}
-       />
-       <ConfigSectionTemplate
-         sectionKey="objects"
-         level="replay"
-         cameraName={status.replay_camera ?? undefined}
-         skipSave
-         noStickyButtons
-         requiresRestart={false}
-         collapsible
-         defaultCollapsed={false}
-         showTitle
-         showOverrideIndicator={false}
-       />
-     </div>
+     {configSchema == null ? (
+       <div className="flex h-40 items-center justify-center">
+         <ActivityIndicator />
+       </div>
+     ) : (
+       <div className="space-y-6">
+         <ConfigSectionTemplate
+           sectionKey="motion"
+           level="replay"
+           cameraName={status.replay_camera ?? undefined}
+           skipSave
+           noStickyButtons
+           requiresRestart={false}
+           collapsible
+           defaultCollapsed={false}
+           showTitle
+           showOverrideIndicator={false}
+         />
+         <ConfigSectionTemplate
+           sectionKey="objects"
+           level="replay"
+           cameraName={status.replay_camera ?? undefined}
+           skipSave
+           noStickyButtons
+           requiresRestart={false}
+           collapsible
+           defaultCollapsed={false}
+           showTitle
+           showOverrideIndicator={false}
+         />
+       </div>
+     )}
    </DialogContent>
  </Dialog>
</div>


@ -7,13 +7,20 @@ import useSWR from "swr";
import axios from "axios"; import axios from "axios";
import { toast } from "sonner"; import { toast } from "sonner";
import { Pencil, Trash2 } from "lucide-react"; import { Pencil, Trash2 } from "lucide-react";
import { LuChevronDown, LuChevronRight, LuPlus } from "react-icons/lu"; import {
LuChevronDown,
LuChevronRight,
LuExternalLink,
LuPlus,
} from "react-icons/lu";
import { Link } from "react-router-dom";
import type { FrigateConfig } from "@/types/frigateConfig"; import type { FrigateConfig } from "@/types/frigateConfig";
import type { JsonObject } from "@/types/configForm"; import type { JsonObject } from "@/types/configForm";
import type { ProfileState, ProfilesApiResponse } from "@/types/profile"; import type { ProfileState, ProfilesApiResponse } from "@/types/profile";
import { getProfileColor } from "@/utils/profileColors"; import { getProfileColor } from "@/utils/profileColors";
import { PROFILE_ELIGIBLE_SECTIONS } from "@/utils/configUtil"; import { PROFILE_ELIGIBLE_SECTIONS } from "@/utils/configUtil";
import { resolveCameraName } from "@/hooks/use-camera-friendly-name"; import { resolveCameraName } from "@/hooks/use-camera-friendly-name";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import Heading from "@/components/ui/heading"; import Heading from "@/components/ui/heading";
import { Button } from "@/components/ui/button"; import { Button } from "@/components/ui/button";
@ -66,6 +73,7 @@ export default function ProfilesView({
setProfilesUIEnabled, setProfilesUIEnabled,
}: ProfilesViewProps) { }: ProfilesViewProps) {
const { t } = useTranslation(["views/settings", "common"]); const { t } = useTranslation(["views/settings", "common"]);
const { getLocaleDocUrl } = useDocDomain();
const { data: config, mutate: updateConfig } = const { data: config, mutate: updateConfig } =
useSWR<FrigateConfig>("config"); useSWR<FrigateConfig>("config");
const { data: profilesData, mutate: updateProfiles } = const { data: profilesData, mutate: updateProfiles } =
@ -360,6 +368,17 @@ export default function ProfilesView({
<div className="my-1 text-sm text-muted-foreground"> <div className="my-1 text-sm text-muted-foreground">
{t("profiles.disabledDescription", { ns: "views/settings" })} {t("profiles.disabledDescription", { ns: "views/settings" })}
</div> </div>
<div className="flex items-center text-sm text-primary-variant">
<Link
to={getLocaleDocUrl("configuration/profiles")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
{/* Enable Profiles Toggle — shown only when no profiles exist */}
{!hasProfiles && setProfilesUIEnabled && (


@@ -2,6 +2,7 @@ import { useCallback, useMemo, useState } from "react";
import { useTranslation } from "react-i18next";
import type { SectionConfig } from "@/components/config-form/sections";
import { ConfigSectionTemplate } from "@/components/config-form/sections";
import { CameraOverridesBadge } from "@/components/config-form/sections/CameraOverridesBadge";
import type { PolygonType } from "@/types/canvas";
import { Badge } from "@/components/ui/badge";
import {
@@ -167,6 +168,9 @@ export function SingleSectionPage({
</div>
{/* Desktop: badge inline next to title */}
<div className="hidden shrink-0 sm:flex sm:flex-wrap sm:items-center sm:gap-2">
{level === "global" && showOverrideIndicator && (
<CameraOverridesBadge sectionPath={sectionKey} />
)}
{level === "camera" &&
showOverrideIndicator &&
sectionStatus.isOverridden && (
@@ -224,6 +228,9 @@ export function SingleSectionPage({
</div>
{/* Mobile: badge below title/description */}
<div className="flex flex-wrap items-center gap-2 sm:hidden">
{level === "global" && showOverrideIndicator && (
<CameraOverridesBadge sectionPath={sectionKey} />
)}
{level === "camera" &&
showOverrideIndicator &&
sectionStatus.isOverridden && (