Compare commits


17 Commits

Author SHA1 Message Date
dependabot[bot]
e28ae67f0c
Merge 5753b5b77c into 814c497bef 2026-05-04 08:48:55 -03:00
Josh Hawkins
814c497bef
Use Job infrastructure for Debug Replay (#23099)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* use ReplayState enum

* extract shared ffmpeg progress helper

* make start call non-blocking with worker thread

* expose replay state on status endpoint and return 202 from start

* cancel in-flight ffmpeg when stop is called during preparation

* add replay i18n strings for preparing and error states

* show status in replay UI

* navigate immediately on 202 from debug replay menus and dialog

* remove unused

* simplify to use Job infrastructure

* tests

* cleanup and tweaks

* fetch schema

* update api spec

* formatting

* fix e2e test

* mypy

* clean up

* formatting

* fix

* fix test

* don't try to show camera image until status reports ready

* simplify loading logic

* fix race in latest_frame on debug replay shutdown

* remove toast when successfully stopping

it gets hidden almost immediately
2026-05-03 14:54:20 -06:00
Josh Hawkins
5bc15d4aa9
chapter and thumbnail fixes (#23100)
- Skip null end_time when building export chapter metadata
- Use plain seconds for export thumbnail ffmpeg seek
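The first fix above — skipping a null `end_time` when building chapter metadata — amounts to filtering in-progress events before emitting chapters. A minimal sketch (the field names and millisecond units are assumptions for illustration, not Frigate's exact export code):

```python
def build_chapters(events):
    """Build chapter entries for export metadata, skipping events whose
    end_time is missing (in-progress events have no end yet)."""
    chapters = []
    for event in events:
        if event.get("end_time") is None:
            continue  # a null end_time would produce invalid chapter metadata
        chapters.append(
            {
                "start": int(event["start_time"] * 1000),
                "end": int(event["end_time"] * 1000),
                "title": event.get("label", "event"),
            }
        )
    return chapters
```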
2026-05-03 13:25:53 -06:00
Josh Hawkins
7ad233ef15
fix malformed svg from breaking docs build (#23102) 2026-05-03 13:21:22 -06:00
GuoQing Liu
882b3a8ffd
docs: add docker compose generator (#22956)
* docs: add docker compose generator

* docs: add more icon support

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Rename heading from 'Generic Hardware Acceleration' to 'Generic Hardware Devices'

* Remove port 5000 configuration for security reasons

Removed unauthenticated Web UI port 5000 from configuration due to security risks.

* docs: remove 5000 port tips

* docs: improve NVIDIA GPU count input

* docs: add docker compose tabs

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/OtherOptions.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/StoragePaths.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/StoragePaths.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* docs: Adjust the position of the RTSP password variable option

* docs: timezone change to select

* docs: add hailo and memryX mx3 driver tips

* docs: RTSP password is optional

* docs: fix select style

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-05-03 13:56:51 -05:00
Pedro Diogo
b6fd86a066
feat(genai): add api_key auth support for ollama cloud (#23096)
- Add _auth_headers() helper to pass Bearer token when api_key is set
- Wire headers into all Ollama client instantiations (sync + async)
- Update docs with Ollama Cloud direct connection example and yaml config
2026-05-02 17:55:25 -06:00
Josh Hawkins
147cd5cc2b
Miscellaneous fixes (#23092)
* lpr fixes

- remove duplicate code
- fix min_area check for non frigate+ code path
- move log outside of non frigate+ code path

* only show chat link when a genai provider is configured with the chat role

* respect ui.timezone when generating fallback export names

* reapply radix pointer events fix to call sites that use navigate()

* formatting

* fall back to prior preview frame for short export thumbnails

* fix typing

* fix e2e test for chat navigation

* batch annotation offset to seek atomically and throttle slider drag

* add debug replay loading toast for explore actions

* Improve handling of webpush missing shortSummary

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-05-02 16:35:42 -06:00
Blake Blackshear
6a2b914b10
Merge remote-tracking branch 'origin/master' into dev
2026-05-02 10:08:36 -05:00
Josh Hawkins
45213d0420
Miscellaneous fixes (#23082)
* openvino log message and preview directory checks

* restrict config vars for viewer users

* recording timestamp fix

when startTime is exactly on an hour boundary, findIndex returns the first matching chunk, which is the previous hour's chunk (where before == startTime), instead of the correct chunk (where after == startTime)

the bug shows up when using the share timestamp feature and sharing a specific timestamp on the exact hour mark. when accessing the shared link, the timeline would jump to the incorrect hour
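The boundary bug described above is easy to reproduce: with inclusive matching on both ends, a timestamp exactly on an hour boundary satisfies the previous chunk first (its `before` equals the timestamp). Matching on a half-open interval assigns the boundary to the chunk that starts there. The chunk shape below is an assumption for illustration:

```python
def find_chunk_buggy(chunks, t):
    # inclusive on both ends: at an exact hour boundary this matches the
    # PREVIOUS chunk first (where before == t)
    return next(i for i, c in enumerate(chunks) if c["after"] <= t <= c["before"])


def find_chunk_fixed(chunks, t):
    # half-open interval [after, before): the boundary timestamp belongs
    # to the chunk that STARTS at t
    return next(i for i, c in enumerate(chunks) if c["after"] <= t < c["before"])


HOUR = 3600
chunks = [
    {"after": 0, "before": HOUR},
    {"after": HOUR, "before": 2 * HOUR},
]
```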

* use helper for chunked time range

* Adjustments to contributing docs

* tweak

* Improve wording

* tweak

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-05-01 11:25:26 -06:00
Josh Hawkins
2cfb530dbf
fix yolonas colab notebook (#22936)
2026-04-21 11:08:10 -06:00
Josh Hawkins
81b0d94793
fix broken docs links with hash fragments that resolve wrong on reload (#22925) 2026-04-18 16:50:28 -06:00
Mark
67837f61d0
Update restream.md docs and clarify output config (#22860)
* Update restream.md

Clarified that exec output must be wrapped in curly braces ONLY in the case of RTSP output, not pipe output, as per the go2rtc docs. Added an additional example use case for the exec function (rpi5b cam set-up).

* Cleanup

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-04-14 14:00:18 -05:00
Josh Hawkins
58c93c2e9e
clarify emergency cleanup (#22864) 2026-04-13 07:07:09 -06:00
Abinila Siva
6b71feffab
Memryx docs update (#22746)
* docs: update MemryX documentation section

* docs: update MemryX documentation section
2026-04-03 11:32:32 -06:00
Abinila Siva
1c26bc289e
docs: update MemryX docs (#22712)
2026-03-31 12:22:23 -05:00
Josh Hawkins
0371b60c71
limit access to admin-only websocket topics for viewer users (#22710) 2026-03-31 08:51:55 -05:00
Nicolas Mowen
01392e03ac
Update docs for DEIMv2 support (#22598) 2026-03-23 16:16:54 -06:00
67 changed files with 4858 additions and 651 deletions

.gitignore

@@ -22,3 +22,8 @@ core
!/web/**/*.ts
.idea/*
.ipynb_checkpoints
# Auto-generated Docker Compose Generator config files
docs/src/components/DockerComposeGenerator/config/devices.ts
docs/src/components/DockerComposeGenerator/config/hardware.ts
docs/src/components/DockerComposeGenerator/config/ports.ts


@@ -10,11 +10,14 @@ If you've found a bug and want to fix it, go for it. Link to the relevant issue
### New features
Every new feature adds scope that the maintainers must test, maintain, and support long-term. Before writing code for a new feature:
A pull request is more than just code — it's a request for the maintainers to review, integrate, and support the change long-term. We're selective about what we take on, and prioritize changes that align with the project's direction and can be responsibly maintained in the long term.
**Large or highly-requested features** raise the bar even higher. Popularity signals demand, but it doesn't pre-approve any particular implementation. The bigger the change, the higher the long-term cost, and the more important it is that we're aligned on scope and approach before any code is written. A large PR that lands without prior discussion is unlikely to be merged as-is, no matter how well it's implemented.
Before writing code for a new feature:
1. **Check for existing discussion.** Search [feature requests](https://github.com/blakeblackshear/frigate/issues) and [discussions](https://github.com/blakeblackshear/frigate/discussions) to see if it's been proposed or discussed. Feature requests tagged with "planned" are on our radar — we plan to get to them, but we don't maintain a public roadmap or timeline. Check in with us first if you have interest in contributing to one.
2. **Start a discussion or feature request first.** This helps ensure your idea aligns with Frigate's direction before you invest time building it. Community interest in a feature request helps us gauge demand, though a great idea is a great idea even without a crowd behind it.
3. **Be open to "no".** We try to be thoughtful about what we take on, and sometimes that means saying no to good code if the feature isn't the right fit for the project. These calls are sometimes subjective, and we won't always get them right. We're happy to discuss and reconsider.
## AI usage policy
@@ -39,6 +42,8 @@ We're not trying to gatekeep how you write code. Use whatever tools make you pro
Some honest context: when we review a PR, we're not just evaluating whether the code works today. We're evaluating whether we can maintain it, debug it, and extend it long-term — often without the original author's involvement. Code that the author doesn't deeply understand is code that nobody understands, and that's a liability.
One more thing worth saying directly: most maintainers already have access to the same AI tools you do. A PR that's entirely AI-generated — where the author can't explain the design, debug issues independently, or engage substantively in design discussions — doesn't offer something we couldn't produce ourselves. What makes a contribution genuinely valuable is the human judgment and domain understanding behind it, as well as the engagement during review that shapes it into something we can confidently take on long-term.
## Pull request guidelines
### Before submitting


@@ -19,7 +19,7 @@ Face recognition requires a one-time internet connection to download detection a
### Face Detection
When running a Frigate+ model (or any custom model that natively detects faces) should ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.
When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/index.md#available-label-types) either globally or for a specific camera. This allows face detection to run at the same time as object detection and be more efficient.
When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.


@@ -201,7 +201,7 @@ Cloud Generative AI providers require an active internet connection to send imag
### Ollama Cloud
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
Ollama also supports [cloud models](https://ollama.com/cloud), where model inference is performed in the cloud. You can connect directly to Ollama Cloud by setting `base_url` to `https://ollama.com` and providing an API key. Alternatively, you can run Ollama locally and use a cloud model name so your local instance forwards requests to the cloud. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
#### Configuration
@@ -210,7 +210,8 @@ Ollama also supports [cloud models](https://ollama.com/cloud), where your local
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `ollama`
- Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`)
- Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`) or `https://ollama.com` for direct cloud inference
- Set **API key** if required by your endpoint (e.g., when using `https://ollama.com`)
- Set **Model** to the cloud model name
</TabItem>
@@ -223,6 +224,16 @@ genai:
model: cloud-model-name
```
or when using Ollama Cloud directly
```yaml
genai:
provider: ollama
base_url: https://ollama.com
model: cloud-model-name
api_key: your-api-key
```
</TabItem>
</ConfigTabs>


@@ -494,7 +494,7 @@ detectors:
| [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| [YOLOX](#yolox) | ✅ | ? | |
| [D-FINE](#d-fine) | ❌ | ❌ | |
| [D-FINE / DEIMv2](#d-fine--deimv2) | ❌ | ❌ | |
#### SSDLite MobileNet v2
@@ -710,13 +710,13 @@ model:
</details>
#### D-FINE
#### D-FINE / DEIMv2
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
:::warning
Currently D-FINE models only run on OpenVINO in CPU mode, GPUs currently fail to compile the model
Currently, D-FINE / DEIMv2 models only run on OpenVINO in CPU mode; GPUs fail to compile the model
:::
@@ -766,6 +766,31 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
</details>
<details>
<summary>DEIMv2 Setup & Config</summary>
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
```yaml
detectors:
ov:
type: openvino
device: CPU
model:
model_type: dfine
width: 640
height: 640
input_tensor: nchw
input_dtype: float
path: /config/model_cache/deimv2_hgnetv2_n.onnx
labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
</details>
## Apple Silicon detector
The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be setup. It is recommended to use the Frigate docker image with `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
@@ -947,7 +972,7 @@ The AMD GPU kernel is known problematic especially when converting models to mxr
See [ONNX supported models](#supported-models) for supported models, there are some caveats:
- D-FINE models are not supported
- D-FINE / DEIMv2 models are not supported
- YOLO-NAS models are known to not run well on integrated GPUs
## ONNX
@@ -1003,7 +1028,7 @@ detectors:
| [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
| [D-FINE](#d-fine) | ⚠️ | ❌ | Not supported by CUDA Graphs |
| [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️ | ❌ | Not supported by CUDA Graphs |
There is no default model provided; the following formats are supported:
@@ -1215,9 +1240,9 @@ model:
</details>
#### D-FINE
#### D-FINE / DEIMv2
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
<details>
<summary>D-FINE Setup & Config</summary>
@@ -1262,6 +1287,28 @@ model:
</details>
<details>
<summary>DEIMv2 Setup & Config</summary>
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
```yaml
detectors:
onnx:
type: onnx
model:
model_type: dfine
width: 640
height: 640
input_tensor: nchw
input_dtype: float
path: /config/model_cache/deimv2_hgnetv2_n.onnx
labelmap_path: /labelmap/coco-80.txt
```
</details>
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@@ -1405,7 +1452,7 @@ MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the
#### YOLO-NAS
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
**Note:** The default model for the MemryX detector is YOLO-NAS 320x320.
@@ -1459,7 +1506,7 @@ model:
#### YOLOv9
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
##### Configuration
@@ -1601,19 +1648,39 @@ model:
#### Using a Custom Model
To use your own model:
To use your own custom model, first compile it into a [.dfp](https://developer.memryx.com/2p1/specs/files.html#dataflow-program) file, which is the format used by MemryX.
1. Package your compiled model into a `.zip` file.
#### Compile the Model
2. The `.zip` must contain the compiled `.dfp` file.
Custom models must be compiled using **MemryX SDK 2.1**.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
Before compiling your model, install the MemryX Neural Compiler tools from the
[Install Tools](https://developer.memryx.com/2p1/get_started/install_tools.html) page on the **host**.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
> **Note:** Compile the model on the host machine, or on another separate machine, rather than inside the Frigate Docker container; installing the compiler inside Docker may conflict with container packages. It is recommended to create a Python virtual environment and install the compiler there.
5. Update the `labelmap_path` to match your custom model's labels.
Once the SDK 2.1 environment is set up, follow the
[MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) documentation to compile your model.
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
Example:
```bash
mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
```
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/2p1/tutorials/tutorials.html).
#### Package the Compiled Model
1. Package your compiled model into a `.zip` file.
2. The `.zip` file must contain the compiled `.dfp` file.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
5. Update `labelmap_path` to match your custom model's labels.
```yaml
# The detector automatically selects the default model if nothing is provided in the config.
@@ -2274,6 +2341,49 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL
EOF
```
### Downloading DEIMv2 Model
[DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) can be exported as ONNX by running the command below. Pretrained weights are available on Hugging Face for two backbone families:
- **HGNetv2** (smaller/faster): `atto`, `femto`, `pico`, `n`
- **DINOv3** (larger/more accurate): `s`, `m`, `l`, `x`
Set `BACKBONE` and `MODEL_SIZE` in the first line to match your desired variant. Hugging Face model names use uppercase (e.g. `HGNetv2_N`, `DINOv3_S`), while config files use lowercase (e.g. `hgnetv2_n`, `dinov3_s`).
```sh
docker build . --rm --build-arg BACKBONE=hgnetv2 --build-arg MODEL_SIZE=n --output . -f- <<'EOF'
FROM python:3.11-slim AS build
RUN apt-get update && apt-get install --no-install-recommends -y git libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /deimv2
RUN git clone https://github.com/Intellindust-AI-Lab/DEIMv2.git .
# Install CPU-only PyTorch first to avoid pulling CUDA variant
RUN uv pip install --no-cache --system torch torchvision --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache --system -r requirements.txt
RUN uv pip install --no-cache --system onnx safetensors huggingface_hub
RUN mkdir -p output
ARG BACKBONE
ARG MODEL_SIZE
# Download from Hugging Face and convert safetensors to pth
RUN python3 -c "\
from huggingface_hub import hf_hub_download; \
from safetensors.torch import load_file; \
import torch; \
backbone = '${BACKBONE}'.replace('hgnetv2','HGNetv2').replace('dinov3','DINOv3'); \
size = '${MODEL_SIZE}'.upper(); \
st = load_file(hf_hub_download('Intellindust/DEIMv2_' + backbone + '_' + size + '_COCO', 'model.safetensors')); \
torch.save({'model': st}, 'output/deimv2.pth')"
RUN sed -i "s/data = torch.rand(2/data = torch.rand(1/" tools/deployment/export_onnx.py
# HuggingFace safetensors omits frozen constants that the model constructor initializes
RUN sed -i "s/cfg.model.load_state_dict(state)/cfg.model.load_state_dict(state, strict=False)/" tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/deimv2/deimv2_${BACKBONE}_${MODEL_SIZE}_coco.yml -r output/deimv2.pth
FROM scratch
ARG BACKBONE
ARG MODEL_SIZE
COPY --from=build /deimv2/output/deimv2.onnx /deimv2_${BACKBONE}_${MODEL_SIZE}.onnx
EOF
```
### Downloading RF-DETR Model
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, changing `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium`.


@@ -195,7 +195,7 @@ Pre and post capture footage is included in the **recording timeline**, visible
## Will Frigate delete old recordings if my storage runs out?
As of Frigate 0.12 if there is less than an hour left of storage, the oldest 2 hours of recordings will be deleted.
If there is less than an hour left of storage, the oldest hour of recordings will be deleted and a message will be printed in the Frigate logs. This emergency cleanup deletes the oldest recordings first regardless of retention settings to reclaim space as quickly as possible.
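The emergency cleanup described above can be sketched as: once free space drops below roughly an hour's worth of recordings, delete the oldest recordings first, ignoring retention settings. The record shape and hour-based accounting below are assumptions for illustration, not Frigate's actual cleanup code:

```python
def emergency_cleanup(recordings, free_hours, needed_hours=1.0):
    """Delete the oldest recordings (regardless of retention settings)
    until roughly an hour of storage is reclaimed.

    Each recording dict has a start_time and the hours of storage it
    occupies. Returns (remaining, deleted)."""
    if free_hours >= needed_hours:
        return recordings, []
    deleted = []
    remaining = sorted(recordings, key=lambda r: r["start_time"])
    while remaining and free_hours < needed_hours:
        oldest = remaining.pop(0)  # oldest first, retention ignored
        deleted.append(oldest)
        free_hours += oldest["hours"]
    return remaining, deleted
```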
## Configuring Recording Retention


@@ -236,7 +236,7 @@ Enabling arbitrary exec sources allows execution of arbitrary commands through g
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands and other applications. An example is below:
:::warning
@@ -244,16 +244,11 @@ The `exec:`, `echo:`, and `expr:` sources are disabled by default for security.
:::
:::warning
The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
:::
NOTE: The output will need to be passed with two curly braces `{{output}}`
NOTE: RTSP output will need to be passed with two curly braces `{{output}}`, whereas pipe output must be passed without curly braces.
```yaml
go2rtc:
streams:
stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
stream2: exec:rpicam-vid -t 0 --libav-format h264 -o -
```


@@ -4,12 +4,15 @@ title: Installation
---
import ShmCalculator from '@site/src/components/ShmCalculator'
import DockerComposeGenerator from '@site/src/components/DockerComposeGenerator'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Frigate is a Docker container that can be run on any Docker host including as a [Home Assistant App](https://www.home-assistant.io/apps/). Note that the Home Assistant App is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant, whether you are running Frigate as a standalone Docker container or as a Home Assistant App.
:::tip
If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started#configuring-frigate) to configure Frigate.
If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started.md#configuring-frigate) to configure Frigate.
:::
@@ -286,7 +289,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/install_hardware.html).
Then follow these steps for installing the correct driver/runtime configuration:
@@ -295,6 +298,12 @@ Then follow these steps for installing the correct driver/runtime configuration:
3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.
:::warning
For manual setup, use **MemryX SDK 2.1** only. Other SDK versions are not supported for this setup. See the [SDK 2.1 documentation](https://developer.memryx.com/2p1/index.html)
:::
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
@@ -468,6 +477,16 @@ Finally, configure [hardware object detection](/configuration/object_detectors#a
Running through Docker with Docker Compose is the recommended install method.
<Tabs>
<TabItem value="domestic" label="Docker Compose Generator" default>
Generate a Frigate Docker Compose configuration based on your hardware and requirements.
<DockerComposeGenerator/>
</TabItem>
<TabItem value="original" label="Example Docker Compose File">
```yaml
services:
frigate:
@@ -501,6 +520,10 @@ services:
environment:
FRIGATE_RTSP_PASSWORD: "password"
```
</TabItem>
</Tabs>
**Docker CLI**
If you can't use Docker Compose, you can run the container with something similar to this:


@@ -14,9 +14,11 @@
"@docusaurus/theme-mermaid": "^3.7.0",
"@inkeep/docusaurus": "^2.0.16",
"@mdx-js/react": "^3.1.0",
"@types/js-yaml": "^4.0.9",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.5.1",
"docusaurus-theme-openapi-docs": "^4.5.1",
"js-yaml": "^4.1.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",
@@ -5747,6 +5749,11 @@
"@types/istanbul-lib-report": "*"
}
},
"node_modules/@types/js-yaml": {
"version": "4.0.9",
"resolved": "https://mirrors.tencent.com/npm/@types/js-yaml/-/js-yaml-4.0.9.tgz",
"integrity": "sha512-k4MGaQl5TGo/iipqb2UDG2UwjXziSWkh0uysQelTlJpX1qGlpUZYm8PnO4DxG1qBomtJUdYJ6qR6xdIah10JLg=="
},
"node_modules/@types/json-schema": {
"version": "7.0.15",
"resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz",
@@ -12883,7 +12890,7 @@
},
"node_modules/js-yaml": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
"resolved": "https://mirrors.tencent.com/npm/js-yaml/-/js-yaml-4.1.1.tgz",
"integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
"license": "MIT",
"dependencies": {

@ -3,9 +3,10 @@
"version": "0.0.0",
"private": true,
"scripts": {
"build:config": "node scripts/build-config.mjs",
"docusaurus": "docusaurus",
"start": "npm run regen-docs && docusaurus start --host 0.0.0.0",
"build": "npm run regen-docs && docusaurus build",
"start": "npm run build:config && npm run regen-docs && docusaurus start --host 0.0.0.0",
"build": "npm run build:config && npm run regen-docs && docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
@ -23,9 +24,11 @@
"@docusaurus/theme-mermaid": "^3.7.0",
"@inkeep/docusaurus": "^2.0.16",
"@mdx-js/react": "^3.1.0",
"@types/js-yaml": "^4.0.9",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.5.1",
"docusaurus-theme-openapi-docs": "^4.5.1",
"js-yaml": "^4.1.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",

@ -0,0 +1,64 @@
#!/usr/bin/env node
/**
* Build script: reads config.yaml and generates TypeScript files
* for the Docker Compose Generator.
*
* Usage: node scripts/build-config.mjs
*/
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
import yaml from "js-yaml";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const CONFIG_DIR = path.resolve(__dirname, "../src/components/DockerComposeGenerator/config");
const YAML_PATH = path.join(CONFIG_DIR, "config.yaml");
// Read & parse YAML
const raw = fs.readFileSync(YAML_PATH, "utf8");
const config = yaml.load(raw);
if (!config.devices || !config.hardware || !config.ports) {
console.error("config.yaml must contain 'devices', 'hardware', and 'ports' sections.");
process.exit(1);
}
/**
* Generate a .ts file from a section of the YAML config.
*/
function generateTsFile(sectionName, items, typeName, varName, mapVarName, yamlFilename) {
const jsonItems = JSON.stringify(items, null, 2);
// Indent JSON to fit inside the array literal
const indented = jsonItems
.split("\n")
.map((line, i) => (i === 0 ? line : " " + line))
.join("\n");
const content = `/**
* AUTO-GENERATED FILE; do not edit directly.
* Source: ${yamlFilename}
* To update, edit the YAML file and run: npm run build:config
*/
import type { ${typeName} } from "./types";
export const ${varName}: ${typeName}[] = ${indented};
/** Lookup map for quick access by ID */
export const ${mapVarName}: Map<string, ${typeName}> = new Map(${varName}.map((item) => [item.id, item]));
`;
const outPath = path.join(CONFIG_DIR, `${sectionName}.ts`);
fs.writeFileSync(outPath, content, "utf8");
console.log(` ✓ Generated ${sectionName}.ts (${items.length} items)`);
}
console.log("Building config from config.yaml...");
generateTsFile("devices", config.devices, "DeviceConfig", "devices", "deviceMap", "config.yaml");
generateTsFile("hardware", config.hardware, "HardwareOption", "hardwareOptions", "hardwareMap", "config.yaml");
generateTsFile("ports", config.ports, "PortConfig", "ports", "portMap", "config.yaml");
console.log("Done!");
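The JSON-indentation step inside `generateTsFile` can be exercised on its own. A standalone sketch (a two-space shift is assumed here, matching the nesting of the emitted array literal):

```typescript
// Standalone copy of the indentation logic from generateTsFile:
// every JSON line after the first is shifted right so the literal
// lines up inside `export const ... = [...]` in the generated file.
function indentJson(items: unknown): string {
  const json = JSON.stringify(items, null, 2);
  return json
    .split("\n")
    .map((line, i) => (i === 0 ? line : "  " + line))
    .join("\n");
}

const out = indentJson([{ id: "intel" }]);
// The first line keeps its column; nested lines gain two extra spaces.
console.log(out);
```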

@ -0,0 +1,108 @@
import React from "react";
import Admonition from "@theme/Admonition";
import DeviceSelector from "./components/DeviceSelector";
import HardwareOptions from "./components/HardwareOptions";
import PortConfigSection from "./components/PortConfig";
import StoragePaths from "./components/StoragePaths";
import NvidiaGpuConfig from "./components/NvidiaGpuConfig";
import OtherOptions from "./components/OtherOptions";
import GeneratedOutput from "./components/GeneratedOutput";
import { useConfigGenerator } from "./hooks/useConfigGenerator";
import styles from "./styles.module.css";
/**
* Simple markdown-link-to-React renderer for help text.
* Only supports [text](url) syntax; no nested brackets.
*/
function renderHelpText(text: string): React.ReactNode {
const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
return parts.map((part, i) => {
const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
if (match) {
return (
<a key={i} href={match[2]}>
{match[1]}
</a>
);
}
return <React.Fragment key={i}>{part}</React.Fragment>;
});
}
export default function DockerComposeGenerator() {
const {
deviceId, device, hardwareEnabled,
portEnabled,
nvidiaGpuCount, nvidiaGpuDeviceId,
configPath, mediaPath, rtspPassword, timezone, shmSize,
shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
hasAnyHardware, generatedYaml,
selectDevice, toggleHardware, togglePort,
handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
setRtspPassword, setTimezone, isHardwareDisabled,
} = useConfigGenerator();
return (
<div className={styles.generator}>
<div className={styles.card}>
<DeviceSelector selectedId={deviceId} onSelect={selectDevice} />
{device.helpText && (
<Admonition type={device.helpType || "info"}>
{renderHelpText(device.helpText)}
</Admonition>
)}
{device.needsNvidiaConfig && (
<NvidiaGpuConfig
gpuCount={nvidiaGpuCount}
gpuDeviceId={nvidiaGpuDeviceId}
gpuDeviceIdError={gpuDeviceIdError}
onGpuCountChange={handleNvidiaGpuCountChange}
onGpuDeviceIdChange={handleNvidiaGpuDeviceIdChange}
/>
)}
<HardwareOptions
deviceId={deviceId}
hardwareEnabled={hardwareEnabled}
onToggle={toggleHardware}
isDisabled={isHardwareDisabled}
/>
<StoragePaths
configPath={configPath}
mediaPath={mediaPath}
configPathError={configPathError}
mediaPathError={mediaPathError}
onConfigPathChange={handleConfigPathChange}
onMediaPathChange={handleMediaPathChange}
/>
<PortConfigSection
portEnabled={portEnabled}
onTogglePort={togglePort}
/>
<OtherOptions
rtspPassword={rtspPassword}
timezone={timezone}
shmSize={shmSize}
shmSizeError={shmSizeError}
onRtspPasswordChange={setRtspPassword}
onTimezoneChange={setTimezone}
onShmSizeChange={handleShmSizeChange}
/>
<GeneratedOutput
yaml={generatedYaml}
configPath={configPath}
mediaPath={mediaPath}
hasAnyHardware={hasAnyHardware}
deviceId={deviceId}
/>
</div>
</div>
);
}
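`renderHelpText` hinges on splitting with a capturing group so the link tokens survive the split. A minimal standalone sketch of that parsing step (hypothetical `tokenize` helper, not shared code):

```typescript
// Same regexes as renderHelpText: the capturing group in the split
// pattern keeps each [text](url) token in the resulting array.
const LINK_TOKEN = /(\[[^\]]+\]\([^)]+\))/g;
const LINK_PARTS = /^\[([^\]]+)\]\(([^)]+)\)$/;

function tokenize(text: string): Array<{ text: string; href?: string }> {
  return text
    .split(LINK_TOKEN)
    .filter((part) => part.length > 0)
    .map((part) => {
      const m = part.match(LINK_PARTS);
      return m ? { text: m[1], href: m[2] } : { text: part };
    });
}

// Plain text and link tokens come back in document order.
console.log(tokenize("See the [docs](/frigate/installation) for details."));
```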

@ -0,0 +1,147 @@
import React from "react";
import { useColorMode } from "@docusaurus/theme-common";
import { devices } from "../config";
import type { DeviceConfig } from "../config";
import styles from "../styles.module.css";
interface Props {
selectedId: string;
onSelect: (id: string) => void;
}
/**
* Determine the icon type from the icon string:
* - Starts with "<svg": inline SVG
* - Starts with "/" or "http": image URL/path
* - Otherwise: emoji text
*/
function getIconType(icon: string): "svg" | "image" | "emoji" {
const trimmed = icon.trim();
if (trimmed.startsWith("<svg")) return "svg";
if (trimmed.startsWith("/") || trimmed.startsWith("http://") || trimmed.startsWith("https://")) return "image";
return "emoji";
}
/**
* Check if the style object contains background-* properties,
* indicating the image should be rendered as a CSS background-image
* rather than an <img> tag.
*/
function hasBackgroundProps(style: React.CSSProperties | undefined): boolean {
if (!style) return false;
return Object.keys(style).some((key) => {
const k = key.toLowerCase().replace(/-/g, "");
return k === "backgroundsize" || k === "backgroundposition" || k === "backgroundrepeat" || k === "backgroundimage";
});
}
/**
* Convert a style object to CSS custom properties (e.g. { width: "24px" } -> { "--svg-width": "24px" })
* so they can be consumed by CSS rules targeting child elements like <svg>.
*/
function toCssVars(style: React.CSSProperties | undefined, prefix: string): React.CSSProperties {
if (!style) return {};
const vars: Record<string, string> = {};
for (const [key, value] of Object.entries(style)) {
const cssKey = key.replace(/([A-Z])/g, "-$1").toLowerCase();
vars[`--${prefix}-${cssKey}`] = value;
}
return vars as React.CSSProperties;
}
function DeviceIcon({ device }: { device: DeviceConfig }) {
const { isDarkTheme } = useColorMode();
const iconStr = isDarkTheme && device.iconDark ? device.iconDark : device.icon;
const iconStyle = (isDarkTheme && device.iconDarkStyle
? device.iconDarkStyle
: device.iconStyle) as React.CSSProperties | undefined;
const svgStyle = (isDarkTheme && device.svgDarkStyle
? device.svgDarkStyle
: device.svgStyle) as React.CSSProperties | undefined;
const iconType = getIconType(iconStr);
if (iconType === "svg") {
return (
<div
className={styles.deviceIconSvg}
style={{ ...iconStyle, ...toCssVars(svgStyle, "svg") }}
dangerouslySetInnerHTML={{ __html: iconStr }}
/>
);
}
if (iconType === "image") {
// When iconStyle contains background-* properties, render as background-image
// on the container div instead of an <img> tag, enabling background-size/position control.
if (hasBackgroundProps(iconStyle)) {
return (
<div
className={styles.deviceIconImage}
style={{
backgroundImage: `url(${iconStr})`,
backgroundRepeat: "no-repeat",
backgroundPosition: "center",
backgroundSize: "contain",
...iconStyle,
}}
/>
);
}
return (
<div className={styles.deviceIconImage}>
<img src={iconStr} alt={device.name} style={iconStyle} />
</div>
);
}
return (
<div className={styles.deviceIcon} style={iconStyle}>
{iconStr}
</div>
);
}
function DeviceCard({
device,
active,
onClick,
}: {
device: DeviceConfig;
active: boolean;
onClick: () => void;
}) {
return (
<div
className={`${styles.deviceCard} ${active ? styles.deviceCardActive : ""}`}
onClick={onClick}
role="button"
tabIndex={0}
onKeyDown={(e) => {
if (e.key === "Enter" || e.key === " ") onClick();
}}
>
<DeviceIcon device={device} />
<div className={styles.deviceName}>{device.name}</div>
<div className={styles.deviceDesc}>{device.description}</div>
</div>
);
}
export default function DeviceSelector({ selectedId, onSelect }: Props) {
return (
<div className={styles.formSection}>
<h4>Device Type</h4>
<div className={styles.deviceGrid}>
{devices.map((d) => (
<DeviceCard
key={d.id}
device={d}
active={selectedId === d.id}
onClick={() => onSelect(d.id)}
/>
))}
</div>
</div>
);
}
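The camelCase-to-custom-property mapping in `toCssVars` is the piece that lets stylesheet rules like `svg { width: var(--svg-width) }` pick up per-device values. A standalone sketch of that conversion:

```typescript
// Standalone copy of the toCssVars mapping: React-style camelCase keys
// become kebab-case CSS custom properties under the given prefix.
function toCssVars(
  style: Record<string, string>,
  prefix: string
): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const [key, value] of Object.entries(style)) {
    const cssKey = key.replace(/([A-Z])/g, "-$1").toLowerCase();
    vars[`--${prefix}-${cssKey}`] = value;
  }
  return vars;
}

console.log(toCssVars({ backgroundSize: "contain", width: "24px" }, "svg"));
// { "--svg-background-size": "contain", "--svg-width": "24px" }
```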

@ -0,0 +1,60 @@
import React, { useState, useCallback } from "react";
import CodeBlock from "@theme/CodeBlock";
import Admonition from "@theme/Admonition";
import styles from "../styles.module.css";
interface Props {
yaml: string;
configPath: string;
mediaPath: string;
hasAnyHardware: boolean;
deviceId: string;
}
export default function GeneratedOutput({
yaml,
configPath,
mediaPath,
hasAnyHardware,
deviceId,
}: Props) {
const [copied, setCopied] = useState(false);
const handleCopy = useCallback(() => {
navigator.clipboard.writeText(yaml).then(() => {
setCopied(true);
setTimeout(() => setCopied(false), 2000);
});
}, [yaml]);
return (
<div className={styles.resultSection}>
<div className={styles.resultHeader}>
<h4>Generated Configuration</h4>
<button className="button button--primary button--sm" onClick={handleCopy}>
{copied ? "Copied!" : "Copy"}
</button>
</div>
{!configPath && (
<Admonition type="tip">
<p>You haven&apos;t specified a config file directory. You may want to modify the default path.</p>
</Admonition>
)}
{!mediaPath && (
<Admonition type="tip">
<p>You haven&apos;t specified a recording storage directory. You may want to modify the default path.</p>
</Admonition>
)}
{deviceId === "stable" && !hasAnyHardware && (
<Admonition type="warning">
<p>You haven&apos;t selected any hardware acceleration. Please check if you have supported hardware available.</p>
</Admonition>
)}
<CodeBlock language="yaml" title="docker-compose.yml">
{yaml}
</CodeBlock>
</div>
);
}

@ -0,0 +1,62 @@
import React from "react";
import { hardwareOptions } from "../config";
import type { HardwareOption } from "../config";
import styles from "../styles.module.css";
interface Props {
deviceId: string;
hardwareEnabled: Record<string, boolean>;
onToggle: (hwId: string) => void;
isDisabled: (hwId: string) => boolean;
}
function renderDescription(text: string): React.ReactNode {
const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
return parts.map((part, i) => {
const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
if (match) {
return <a key={i} href={match[2]}>{match[1]}</a>;
}
return <React.Fragment key={i}>{part}</React.Fragment>;
});
}
function HardwareCheckbox({
hw, disabled, checked, onToggle,
}: {
hw: HardwareOption; disabled: boolean; checked: boolean; onToggle: () => void;
}) {
return (
<div className={styles.hardwareItem}>
<label className={`${styles.checkboxLabel} ${disabled ? styles.checkboxDisabled : ""}`}>
<input type="checkbox" checked={checked} onChange={onToggle} disabled={disabled} />
<span>{hw.label}</span>
</label>
{checked && hw.description && (
<div className={styles.hardwareDescription}>{renderDescription(hw.description)}</div>
)}
</div>
);
}
export default function HardwareOptions({ deviceId, hardwareEnabled, onToggle, isDisabled }: Props) {
return (
<div className={styles.formSection}>
<h4>Generic Hardware Devices</h4>
{deviceId !== "stable" && (
<p className={styles.helpText}>
Some options have been auto-configured based on your device type.
</p>
)}
<div className={styles.checkboxGrid}>
{hardwareOptions.map((hw) => {
const disabled = isDisabled(hw.id);
const checked = disabled ? false : !!hardwareEnabled[hw.id];
return (
<HardwareCheckbox key={hw.id} hw={hw} disabled={disabled} checked={checked} onToggle={() => onToggle(hw.id)} />
);
})}
</div>
</div>
);
}

@ -0,0 +1,64 @@
import React from "react";
import styles from "../styles.module.css";
interface Props {
gpuCount: string;
gpuDeviceId: string;
gpuDeviceIdError: boolean;
onGpuCountChange: (value: string) => void;
onGpuDeviceIdChange: (value: string) => void;
}
export default function NvidiaGpuConfig({
gpuCount,
gpuDeviceId,
gpuDeviceIdError,
onGpuCountChange,
onGpuDeviceIdChange,
}: Props) {
const showDeviceId = gpuCount !== "";
return (
<div className={styles.nvidiaConfig}>
<div className={styles.formGroup}>
<label htmlFor="dcg-gpu-count" className={styles.label}>
GPU count:
</label>
<input
id="dcg-gpu-count"
type="text"
inputMode="numeric"
pattern="[0-9]*"
className={styles.input}
value={gpuCount}
placeholder="all"
onChange={(e) => onGpuCountChange(e.target.value.replace(/\D/g, ""))}
/>
</div>
{showDeviceId && (
<div className={styles.formGroup}>
<label htmlFor="dcg-gpu-device-id" className={styles.label}>
GPU device IDs (required, comma-separated):
</label>
<input
id="dcg-gpu-device-id"
type="text"
className={`${styles.input} ${gpuDeviceIdError ? styles.inputError : ""}`}
value={gpuDeviceId}
placeholder="0"
onChange={(e) => onGpuDeviceIdChange(e.target.value)}
/>
{gpuDeviceIdError ? (
<p className={styles.helpText}>
GPU device IDs are required when GPU count is a number
</p>
) : (
<p className={styles.helpText}>
Single GPU: 0 &nbsp;|&nbsp; Multiple GPUs: 0,1,2
</p>
)}
</div>
)}
</div>
);
}

@ -0,0 +1,122 @@
import React, { useMemo } from "react";
import CodeInline from "@theme/CodeInline";
import styles from "../styles.module.css";
const AUTO_TIMEZONE_VALUE = "__auto__";
function getTimezoneList(): string[] {
if (typeof Intl !== "undefined") {
const intl = Intl as typeof Intl & {
supportedValuesOf?: (key: string) => string[];
};
const supported = intl.supportedValuesOf?.("timeZone");
if (supported && supported.length > 0) {
return [...supported].sort();
}
}
const fallback = Intl.DateTimeFormat().resolvedOptions().timeZone;
return fallback ? [fallback] : ["UTC"];
}
interface Props {
rtspPassword: string;
timezone: string;
shmSize: string;
shmSizeError: boolean;
onRtspPasswordChange: (value: string) => void;
onTimezoneChange: (value: string) => void;
onShmSizeChange: (value: string) => void;
}
export default function OtherOptions({
rtspPassword,
timezone,
shmSize,
shmSizeError,
onRtspPasswordChange,
onTimezoneChange,
onShmSizeChange,
}: Props) {
const timezones = useMemo(() => getTimezoneList(), []);
const systemTimezone =
Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC";
const selectedValue = timezone || AUTO_TIMEZONE_VALUE;
return (
<div className={styles.formSection}>
<h4>Other Options</h4>
<div className={styles.formGrid}>
<div className={styles.formGroup}>
<label htmlFor="dcg-timezone" className={styles.label}>
Timezone:
</label>
<select
id="dcg-timezone"
className={`${styles.input} ${styles.select}`}
value={selectedValue}
onChange={(e) =>
onTimezoneChange(
e.target.value === AUTO_TIMEZONE_VALUE ? "" : e.target.value
)
}
>
<option value={AUTO_TIMEZONE_VALUE}>
Use browser timezone ({systemTimezone})
</option>
{timezones.map((tz) => (
<option key={tz} value={tz}>
{tz}
</option>
))}
</select>
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-shm-size" className={styles.label}>
Shared memory (SHM):
</label>
<input
id="dcg-shm-size"
type="text"
className={`${styles.input} ${shmSizeError ? styles.inputError : ""}`}
value={shmSize}
placeholder="512mb"
onChange={(e) => onShmSizeChange(e.target.value)}
/>
{shmSizeError ? (
<p className={styles.helpText}>
Invalid format. Use a number followed by a unit (e.g. 512mb, 1gb)
</p>
) : (
<p className={styles.helpText}>
See{" "}
<a href="/frigate/installation#calculating-required-shm-size">
calculating required SHM size
</a>{" "}
for the correct value.
</p>
)}
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-rtsp-password" className={styles.label}>
RTSP password:
</label>
<input
id="dcg-rtsp-password"
type="text"
className={styles.input}
value={rtspPassword}
placeholder="password"
onChange={(e) => onRtspPasswordChange(e.target.value)}
/>
<p className={styles.helpText}>
Optional. You can specify{" "}
<CodeInline>{"{FRIGATE_RTSP_PASSWORD}"}</CodeInline>{" "}
in the config file to reference camera stream passwords. This is NOT
the Frigate login password.
</p>
</div>
</div>
</div>
);
}

@ -0,0 +1,71 @@
import React from "react";
import Admonition from "@theme/Admonition";
import { ports } from "../config";
import styles from "../styles.module.css";
interface Props {
portEnabled: Record<string, boolean>;
onTogglePort: (portId: string) => void;
}
function PortItem({
port,
enabled,
onToggle,
}: {
port: typeof ports[number];
enabled: boolean;
onToggle: () => void;
}) {
const showWarning = port.warningContent && (
port.warningWhen === "checked" ? enabled :
port.warningWhen === "unchecked" ? !enabled : enabled
);
return (
<div className={styles.hardwareItem}>
<label className={`${styles.checkboxLabel} ${port.locked ? styles.checkboxDisabled : ""}`}>
<input
type="checkbox"
checked={enabled}
onChange={onToggle}
disabled={port.locked}
/>
<span>
{port.locked && "🔒 "}
Port {port.host}
{port.protocol !== "tcp" && `/${port.protocol}`}
</span>
</label>
{port.description && (
<div className={styles.hardwareDescription}>{port.description}</div>
)}
{showWarning && (
<Admonition type={port.warningType || "warning"}>
{port.warningContent}
</Admonition>
)}
</div>
);
}
export default function PortConfigSection({
portEnabled,
onTogglePort,
}: Props) {
return (
<div className={styles.formSection}>
<h4>Port Configuration</h4>
<div className={styles.checkboxGrid}>
{ports.map((port) => (
<PortItem
key={port.id}
port={port}
enabled={!!portEnabled[port.id]}
onToggle={() => onTogglePort(port.id)}
/>
))}
</div>
</div>
);
}
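The ternary chain controlling `showWarning` in `PortItem` is easy to misread. Extracted as a standalone predicate (a sketch of the same rule, not shared code): warnings default to showing when the port is checked unless `warningWhen` says otherwise.

```typescript
type WarningWhen = "checked" | "unchecked" | undefined;

// Mirror of the PortItem visibility rule: no content means no warning;
// "unchecked" inverts the condition; "checked" or unset follow `enabled`.
function showWarning(
  hasContent: boolean,
  when: WarningWhen,
  enabled: boolean
): boolean {
  if (!hasContent) return false;
  if (when === "unchecked") return !enabled;
  return enabled;
}

console.log(showWarning(true, "unchecked", false)); // warns while the port is off
```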

@ -0,0 +1,66 @@
import React from "react";
import styles from "../styles.module.css";
interface Props {
configPath: string;
mediaPath: string;
configPathError: boolean;
mediaPathError: boolean;
onConfigPathChange: (value: string) => void;
onMediaPathChange: (value: string) => void;
}
export default function StoragePaths({
configPath,
mediaPath,
configPathError,
mediaPathError,
onConfigPathChange,
onMediaPathChange,
}: Props) {
return (
<div className={styles.formSection}>
<h4>Storage Paths</h4>
<div className={styles.formGrid}>
<div className={styles.formGroup}>
<label htmlFor="dcg-config-path" className={styles.label}>
Config / DB / model cache directory (on your host):
</label>
<input
id="dcg-config-path"
type="text"
className={`${styles.input} ${configPathError ? styles.inputError : ""}`}
value={configPath}
placeholder="/path/to/your/config"
onChange={(e) => onConfigPathChange(e.target.value)}
/>
{configPathError && (
<p className={styles.helpText}>
Path contains invalid characters. Only letters, numbers,
underscores, hyphens, slashes, and dots are allowed.
</p>
)}
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-media-path" className={styles.label}>
Recording storage directory (on your host):
</label>
<input
id="dcg-media-path"
type="text"
className={`${styles.input} ${mediaPathError ? styles.inputError : ""}`}
value={mediaPath}
placeholder="/path/to/your/storage"
onChange={(e) => onMediaPathChange(e.target.value)}
/>
{mediaPathError && (
<p className={styles.helpText}>
Path contains invalid characters. Only letters, numbers,
underscores, hyphens, slashes, and dots are allowed.
</p>
)}
</div>
</div>
</div>
);
}

File diff suppressed because one or more lines are too long

@ -0,0 +1,12 @@
export { devices, deviceMap } from "./devices";
export { hardwareOptions, hardwareMap } from "./hardware";
export { ports, portMap } from "./ports";
export type {
DeviceConfig,
DeviceMapping,
VolumeMapping,
HardwareOption,
PortConfig,
NvidiaDeployConfig,
} from "./types";

@ -0,0 +1,154 @@
/**
* Type definitions for the Docker Compose Generator configuration.
* All device, hardware, and port options are declaratively defined
* so that adding a new device only requires editing config files.
*/
/** A single device mapping entry (e.g. /dev/dri:/dev/dri) */
export interface DeviceMapping {
/** Host device path */
host: string;
/** Container device path (defaults to host if omitted) */
container?: string;
/** Inline comment for this device line */
comment?: string;
}
/** A single volume mapping entry */
export interface VolumeMapping {
/** Host path */
host: string;
/** Container path */
container: string;
/** Whether the mount is read-only */
readOnly?: boolean;
/** Inline comment */
comment?: string;
}
/** NVIDIA deploy configuration for docker-compose */
export interface NvidiaDeployConfig {
/** "all" or a specific number */
count: string;
/** Specific GPU device IDs (when count is a number) */
deviceIds?: string[];
}
/** Full device type definition */
export interface DeviceConfig {
/** Unique identifier, e.g. "intel" */
id: string;
/** Display name, e.g. "Intel GPU" */
name: string;
/** Short description */
description: string;
/**
* Icon for the device card. Supports:
* - Emoji string (e.g. "🖥️")
* - Image URL or static path (e.g. "/img/intel.svg", "https://example.com/icon.png")
* - Inline SVG markup (e.g. "<svg>...</svg>")
*/
icon: string;
/**
* Additional CSS properties applied to the icon element.
* - For image-type icons: if any `background-*` property (e.g. `background-size`,
* `background-position`) is present, the image is rendered as a CSS `background-image`
* on the container div, enabling full background positioning control.
* Otherwise the image is rendered as an `<img>` tag and styles apply to it.
* - For emoji/SVG icons: styles apply to the container div.
*/
iconStyle?: Record<string, string>;
/**
* Additional CSS properties applied directly to the inner `<svg>` element
* when the icon is an inline SVG. Use this to override the default
* `width: 100%; height: 100%` or set `fill`, `transform`, etc.
* Ignored for emoji and image-type icons.
*/
svgStyle?: Record<string, string>;
/**
* Icon for dark mode. Same format as `icon`. When provided, this icon
* replaces `icon` when the user is in dark mode.
*/
iconDark?: string;
/** Additional CSS properties for the dark mode icon container */
iconDarkStyle?: Record<string, string>;
/**
* SVG-specific styles for dark mode. Same as `svgStyle` but applied
* when dark mode is active. Merged over `svgStyle` in dark mode.
*/
svgDarkStyle?: Record<string, string>;
/** Docker image tag, e.g. "stable" */
imageTag: string;
/**
* Image tag suffix appended to the base tag.
* e.g. "-standard-arm64" produces "stable-standard-arm64"
*/
imageTagSuffix?: string;
/** Hardware option IDs to auto-enable when this device is selected */
autoHardware: string[];
/** Help text shown as an admonition when this device is selected */
helpText?: string;
/** Admonition type for help text */
helpType?: "info" | "warning" | "danger";
/** Device mappings always added for this device type */
devices?: DeviceMapping[];
/** Volume mappings always added for this device type */
volumes?: VolumeMapping[];
/** Extra environment variables for this device type */
env?: Record<string, string>;
/** NVIDIA deploy config (only for tensorrt) */
nvidiaDeploy?: NvidiaDeployConfig;
/** Runtime setting, e.g. "nvidia" for Jetson */
runtime?: string;
/** Extra hosts entries, e.g. "host.docker.internal:host-gateway" */
extraHosts?: string[];
/** Security options, e.g. ["apparmor=unconfined"] */
securityOpt?: string[];
/** Whether this device type needs the NVIDIA GPU config UI */
needsNvidiaConfig?: boolean;
}
/** Generic hardware acceleration option definition */
export interface HardwareOption {
/** Unique identifier, e.g. "usbCoral" */
id: string;
/** Display label */
label: string;
/**
* Description shown below the checkbox when this option is enabled.
* Supports markdown link syntax: [text](url)
*/
description?: string;
/** Device IDs that disable this option */
disabledWhen?: string[];
/** Device mappings added when this option is enabled */
devices?: DeviceMapping[];
/** Volume mappings added when this option is enabled */
volumes?: VolumeMapping[];
/** Extra environment variables */
env?: Record<string, string>;
}
/** Port definition */
export interface PortConfig {
/** Unique identifier (also the default host port as string) */
id: string;
/** Host port number */
host: number;
/** Container port number */
container: number;
/** Protocol */
protocol?: "tcp" | "udp";
/** Description of the port's purpose */
description: string;
/** Whether enabled by default */
defaultEnabled: boolean;
/** Whether this port is locked (always enabled, cannot be toggled off) */
locked?: boolean;
/** Admonition type for the warning */
warningType?: "warning" | "danger";
/** Warning content (markdown) */
warningContent?: string;
/** When to show the warning: when the port is checked or unchecked */
warningWhen?: "checked" | "unchecked";
}
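To make the declarative model concrete, here is a hypothetical `config.yaml` device entry exercising a few of the fields above. Field names come from the `DeviceConfig` interface; the ids, paths, and text are illustrative, not taken from the shipped config:

```yaml
devices:
  - id: intel
    name: Intel GPU
    description: Intel iGPU via VAAPI / QSV
    icon: "🖥️" # emoji form; an image path or inline <svg> also works
    imageTag: stable
    autoHardware:
      - gpu # pre-checks the generic GPU hardware option
    helpText: "See [hardware acceleration](/configuration/hardware_acceleration) for driver setup."
    helpType: info
    devices:
      - host: /dev/dri
        comment: Intel GPU for hardware acceleration
```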

@ -0,0 +1,250 @@
import type {
DeviceConfig,
DeviceMapping,
VolumeMapping,
} from "../config/types";
import { hardwareMap } from "../config";
// ---------------------------------------------------------------------------
// Input type
// ---------------------------------------------------------------------------
export interface GeneratorInput {
device: DeviceConfig;
selectedHardware: string[];
enabledPorts: string[];
configPath: string;
mediaPath: string;
rtspPassword?: string;
timezone: string;
shmSize: string;
nvidiaGpuCount?: string;
nvidiaGpuDeviceId?: string;
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function deviceLine(dm: DeviceMapping): string {
const host = dm.host;
const container = dm.container ?? dm.host;
const mapping = host === container ? host : `${host}:${container}`;
const comment = dm.comment ? ` # ${dm.comment}` : "";
return ` - ${mapping}${comment}`;
}
function volumeLine(vm: VolumeMapping): string {
const ro = vm.readOnly ? ":ro" : "";
const comment = vm.comment ? ` # ${vm.comment}` : "";
return ` - ${vm.host}:${vm.container}${ro}${comment}`;
}
// ---------------------------------------------------------------------------
// YAML builder — each section returns an array of lines
// ---------------------------------------------------------------------------
function buildImage(device: DeviceConfig): string[] {
const tag = device.imageTagSuffix
? `${device.imageTag}${device.imageTagSuffix}`
: device.imageTag;
return [` image: ghcr.io/blakeblackshear/frigate:${tag}`];
}
function buildDevices(
device: DeviceConfig,
hwDevices: DeviceMapping[]
): string[] {
const all: DeviceMapping[] = [
...(device.devices ?? []),
...hwDevices,
];
if (all.length === 0) return [];
return [
" devices:",
...all.map(deviceLine),
];
}
function buildVolumes(
device: DeviceConfig,
hwVolumes: VolumeMapping[],
configPath: string,
mediaPath: string
): string[] {
const all: VolumeMapping[] = [
...(device.volumes ?? []),
...hwVolumes,
];
return [
" volumes:",
" - /etc/localtime:/etc/localtime:ro # Sync host time",
` - ${configPath}:/config # Config file directory`,
` - ${mediaPath}:/media/frigate # Recording storage directory`,
" - type: tmpfs # 1GB in-memory filesystem for recording segment storage",
" target: /tmp/cache",
" tmpfs:",
" size: 1000000000",
...all.map(volumeLine),
];
}
function buildPorts(enabledPorts: string[]): string[] {
return [
" ports:",
...enabledPorts,
];
}
function buildEnvironment(
device: DeviceConfig,
hwEnv: Record<string, string>,
rtspPassword: string | undefined,
timezone: string
): string[] {
const allEnv: Record<string, string> = {
...hwEnv,
...(device.env ?? {}),
};
const lines: string[] = [" environment:"];
if (rtspPassword) {
lines.push(
` FRIGATE_RTSP_PASSWORD: "${rtspPassword}" # RTSP password — change to your own`
);
}
lines.push(` TZ: "${timezone}" # Timezone`);
for (const [key, value] of Object.entries(allEnv)) {
lines.push(` ${key}: "${value}"`);
}
return lines;
}
function buildDeploy(device: DeviceConfig, input: GeneratorInput): string[] {
if (device.id === "stable-tensorrt") {
const count = input.nvidiaGpuCount || "all";
const isAll = count === "all";
const deviceId = input.nvidiaGpuDeviceId?.trim();
if (isAll) {
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
" count: all # Use all GPUs",
" capabilities: [gpu]",
];
}
if (deviceId) {
const ids = deviceId
.split(",")
.map((s) => s.trim())
.filter(Boolean)
.map((s) => `'${s}'`)
.join(", ");
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
` device_ids: [${ids}] # GPU device IDs`,
` count: ${count} # GPU count`,
" capabilities: [gpu]",
];
}
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
` count: ${count} # GPU count`,
" capabilities: [gpu]",
];
}
return [];
}
function buildRuntime(device: DeviceConfig): string[] {
if (device.runtime) {
return [` runtime: ${device.runtime}`];
}
return [];
}
function buildExtraHosts(device: DeviceConfig): string[] {
if (!device.extraHosts?.length) return [];
return [
" extra_hosts:",
...device.extraHosts.map(
(h, i) =>
` - "${h}"${i === 0 ? " # Required to talk to the NPU detector" : ""}`
),
];
}
function buildSecurityOpt(device: DeviceConfig): string[] {
if (!device.securityOpt?.length) return [];
return [
" security_opt:",
...device.securityOpt.map((s) => ` - ${s}`),
];
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
/**
* Generate a docker-compose YAML string from the given input.
* The output is pure YAML with inline comments (no Shiki annotations).
*/
export function generateDockerCompose(input: GeneratorInput): string {
const { device } = input;
// Collect hardware-level devices, volumes, and env
const hwDevices: DeviceMapping[] = [];
const hwVolumes: VolumeMapping[] = [];
const hwEnv: Record<string, string> = {};
for (const hwId of input.selectedHardware) {
const hw = hardwareMap.get(hwId);
if (!hw) continue;
// Skip GPU device mapping for tensorrt images (it uses deploy instead)
if (hw.id === "gpu" && device.imageTag === "stable-tensorrt") continue;
hwDevices.push(...(hw.devices ?? []));
hwVolumes.push(...(hw.volumes ?? []));
Object.assign(hwEnv, hw.env ?? {});
}
const lines: string[] = [
"services:",
" frigate:",
" container_name: frigate",
" privileged: true # This may not be necessary for all setups",
" restart: unless-stopped",
" stop_grace_period: 30s # Allow enough time to shut down the various services",
...buildImage(device),
` shm_size: "${input.shmSize || "512mb"}" # Update for your cameras based on SHM calculation`,
...buildRuntime(device),
...buildDeploy(device, input),
...buildExtraHosts(device),
...buildSecurityOpt(device),
...buildDevices(device, hwDevices),
...buildVolumes(device, hwVolumes, input.configPath, input.mediaPath),
...buildPorts(input.enabledPorts),
...buildEnvironment(device, hwEnv, input.rtspPassword, input.timezone),
];
return lines.join("\n");
}

View File

@ -0,0 +1,195 @@
import { useState, useCallback, useMemo } from "react";
import { deviceMap, hardwareMap, portMap } from "../config";
import { generateDockerCompose } from "../generator";
import type { GeneratorInput } from "../generator";
/**
* Main hook that holds all form state and generates the Docker Compose output.
* Configuration is loaded synchronously from build-time generated .ts files.
*/
export function useConfigGenerator() {
const [deviceId, setDeviceId] = useState("stable");
const [hardwareEnabled, setHardwareEnabled] = useState<Record<string, boolean>>(() => {
const defaultDevice = deviceMap.get("stable");
const initial: Record<string, boolean> = {};
if (defaultDevice) {
for (const hwId of defaultDevice.autoHardware) {
initial[hwId] = true;
}
}
return initial;
});
const [portEnabled, setPortEnabled] = useState<Record<string, boolean>>(() => {
const initial: Record<string, boolean> = {};
for (const p of portMap.values()) {
initial[p.id] = p.defaultEnabled;
}
return initial;
});
const [nvidiaGpuCount, setNvidiaGpuCount] = useState("");
const [nvidiaGpuDeviceId, setNvidiaGpuDeviceId] = useState("");
const [configPath, setConfigPath] = useState("");
const [mediaPath, setMediaPath] = useState("");
const [rtspPassword, setRtspPassword] = useState("");
const [timezone, setTimezone] = useState("");
const [shmSize, setShmSize] = useState("512mb");
const [shmSizeError, setShmSizeError] = useState(false);
const [gpuDeviceIdError, setGpuDeviceIdError] = useState(false);
const [configPathError, setConfigPathError] = useState(false);
const [mediaPathError, setMediaPathError] = useState(false);
const device = useMemo(() => deviceMap.get(deviceId)!, [deviceId]);
const selectDevice = useCallback((id: string) => {
const newDevice = deviceMap.get(id);
if (!newDevice) return;
setDeviceId(id);
setHardwareEnabled(() => {
const next: Record<string, boolean> = {};
for (const hwId of newDevice.autoHardware) {
next[hwId] = true;
}
return next;
});
setNvidiaGpuCount("");
setNvidiaGpuDeviceId("");
setGpuDeviceIdError(false);
}, []);
const toggleHardware = useCallback((hwId: string) => {
setHardwareEnabled((prev) => ({ ...prev, [hwId]: !prev[hwId] }));
}, []);
const togglePort = useCallback((portId: string) => {
const port = portMap.get(portId);
if (port?.locked) return;
setPortEnabled((prev) => ({ ...prev, [portId]: !prev[portId] }));
}, []);
const isHardwareDisabled = useCallback(
(hwId: string): boolean => {
const hw = hardwareMap.get(hwId);
if (!hw) return false;
return hw.disabledWhen?.includes(deviceId) ?? false;
},
[deviceId]
);
const validateShmSize = useCallback((value: string): boolean => {
if (!value) return true;
return /^\d+(\.\d+)?[bkmgBKMG]{1,2}$/.test(value);
}, []);
const validatePath = useCallback((value: string): boolean => {
if (!value) return true;
return /^[a-zA-Z0-9_\-/.]+$/.test(value);
}, []);
const handleShmSizeChange = useCallback(
(value: string) => {
const filtered = value.replace(/[^0-9.bkmgBKMG]/g, "");
const valid = validateShmSize(filtered);
setShmSize(filtered);
setShmSizeError(!valid && filtered !== "");
},
[validateShmSize]
);
const handleConfigPathChange = useCallback(
(value: string) => {
const filtered = value.replace(/[^a-zA-Z0-9_\-/.]/g, "");
const valid = validatePath(filtered);
setConfigPath(filtered);
setConfigPathError(!valid && filtered !== "");
},
[validatePath]
);
const handleMediaPathChange = useCallback(
(value: string) => {
const filtered = value.replace(/[^a-zA-Z0-9_\-/.]/g, "");
const valid = validatePath(filtered);
setMediaPath(filtered);
setMediaPathError(!valid && filtered !== "");
},
[validatePath]
);
const handleNvidiaGpuCountChange = useCallback((value: string) => {
// Only allow digits
const filtered = value.replace(/[^0-9]/g, "");
setNvidiaGpuCount(filtered);
if (filtered === "") {
setNvidiaGpuDeviceId("");
}
setGpuDeviceIdError(false);
}, []);
const handleNvidiaGpuDeviceIdChange = useCallback((value: string) => {
setNvidiaGpuDeviceId(value.trim());
setGpuDeviceIdError(false);
}, []);
const enabledPortLines = useMemo(() => {
const lines: string[] = [];
for (const [id, enabled] of Object.entries(portEnabled)) {
if (!enabled) continue;
const p = portMap.get(id);
if (!p) continue;
const proto = p.protocol && p.protocol !== "tcp" ? `/${p.protocol}` : "";
const comment = p.description ? ` # ${p.description}` : "";
lines.push(` - "${p.host}:${p.container}${proto}"${comment}`);
}
return lines;
}, [portEnabled]);
const selectedHardwareIds = useMemo(() => {
return Object.entries(hardwareEnabled)
.filter(([id, enabled]) => {
if (!enabled) return false;
const hw = hardwareMap.get(id);
if (!hw) return false;
if (hw.disabledWhen?.includes(deviceId)) return false;
return true;
})
.map(([id]) => id);
}, [hardwareEnabled, deviceId]);
const generatedYaml = useMemo(() => {
const input: GeneratorInput = {
device,
selectedHardware: selectedHardwareIds,
enabledPorts: enabledPortLines,
configPath: configPath || "/path/to/your/config",
mediaPath: mediaPath || "/path/to/your/storage",
rtspPassword,
timezone: timezone || Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC",
shmSize: shmSize || "512mb",
nvidiaGpuCount,
nvidiaGpuDeviceId,
};
return generateDockerCompose(input);
}, [
device, selectedHardwareIds, enabledPortLines,
configPath, mediaPath, rtspPassword, timezone, shmSize,
nvidiaGpuCount, nvidiaGpuDeviceId,
]);
const hasAnyHardware = selectedHardwareIds.length > 0 || !!device?.devices?.length;
return {
deviceId, device, hardwareEnabled, portEnabled,
nvidiaGpuCount, nvidiaGpuDeviceId,
configPath, mediaPath, rtspPassword, timezone, shmSize,
shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
hasAnyHardware, generatedYaml,
selectDevice, toggleHardware, togglePort,
handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
setRtspPassword, setTimezone, isHardwareDisabled,
};
}

View File

@ -0,0 +1 @@
export { default } from "./DockerComposeGenerator";

View File

@ -0,0 +1,381 @@
/* ===================================================================
Docker Compose Generator styles
Uses Docusaurus / Infima CSS variables for theme compatibility.
=================================================================== */
.generator {
margin: 2rem 0;
}
.card {
background: var(--ifm-background-surface-color);
border: 1px solid var(--ifm-color-emphasis-400);
border-radius: 12px;
padding: 2rem;
box-shadow: var(--ifm-global-shadow-lw);
}
[data-theme="light"] .card {
background: var(--ifm-color-emphasis-100);
border: 1px solid var(--ifm-color-emphasis-300);
}
/* --- Form sections --- */
.formSection {
margin-bottom: 1.5rem;
padding-bottom: 1.5rem;
border-bottom: 1px solid var(--ifm-color-emphasis-400);
}
.formSection:last-child {
border-bottom: none;
margin-bottom: 0;
padding-bottom: 0;
}
.formSection h4 {
margin: 0 0 1rem 0;
color: var(--ifm-font-color-base);
font-size: 1.1rem;
font-weight: var(--ifm-font-weight-semibold);
}
/* --- Form controls --- */
.formGroup {
margin-bottom: 1rem;
}
.formGroup:last-child {
margin-bottom: 0;
}
.label {
display: block;
margin-bottom: 0.25rem;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
font-size: 0.9rem;
}
.input {
width: 100%;
padding: 0.5rem 0.75rem;
border: 1px solid var(--ifm-color-emphasis-400);
border-radius: 6px;
background: var(--ifm-background-color);
color: var(--ifm-font-color-base);
font-size: 0.95rem;
transition: border-color 0.2s, box-shadow 0.2s;
}
[data-theme="light"] .input {
background: #fff;
border: 1px solid #d0d7de;
}
.input:focus {
outline: none;
border-color: var(--ifm-color-primary);
box-shadow: 0 0 0 3px var(--ifm-color-primary-lightest);
}
[data-theme="dark"] .input {
border-color: var(--ifm-color-emphasis-300);
}
.inputError {
border-color: #e74c3c;
animation: shake 0.3s ease-in-out;
}
@keyframes shake {
0%,
100% {
transform: translateX(0);
}
25% {
transform: translateX(-5px);
}
75% {
transform: translateX(5px);
}
}
/* --- Select dropdown --- */
.select {
cursor: pointer;
appearance: none;
-moz-appearance: none;
-webkit-appearance: none;
background: var(--ifm-background-color)
url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23666' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
no-repeat right 0.75rem center / 12px 12px;
padding-right: 2rem;
}
[data-theme="light"] .select {
background: #fff
url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23555' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
no-repeat right 0.75rem center / 12px 12px;
}
.helpText {
margin: 0.5rem 0 0 0;
font-size: 0.85rem;
color: var(--ifm-font-color-secondary);
line-height: 1.5;
}
.helpText a {
color: var(--ifm-color-primary);
}
/* --- Device grid --- */
.deviceGrid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
gap: 0.75rem;
margin-top: 0.5rem;
}
.deviceCard {
padding: 0.75rem;
border: 2px solid var(--ifm-color-emphasis-400);
border-radius: 12px;
cursor: pointer;
transition: all 0.2s;
text-align: center;
background: var(--ifm-background-color);
display: flex;
flex-direction: column;
align-items: center;
}
[data-theme="light"] .deviceCard {
border: 2px solid #d0d7de;
background: #fff;
}
.deviceCard:hover {
border-color: var(--ifm-color-primary);
background: var(--ifm-color-emphasis-100);
transform: translateY(-2px);
}
.deviceCardActive {
border-color: var(--ifm-color-primary);
background: var(--ifm-color-primary-lightest);
box-shadow: 0 0 0 1px var(--ifm-color-primary);
}
[data-theme="light"] .deviceCardActive {
background: color-mix(in srgb, var(--ifm-color-primary) 12%, #fff);
}
[data-theme="dark"] .deviceCardActive {
background: color-mix(in srgb, var(--ifm-color-primary) 25%, #1b1b1b);
}
[data-theme="dark"] .deviceCardActive .deviceName {
color: var(--ifm-color-primary-light);
}
[data-theme="dark"] .deviceCardActive .deviceDesc {
color: var(--ifm-color-primary-light);
opacity: 0.85;
}
.deviceIcon {
font-size: 2rem;
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
}
.deviceIconSvg {
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
overflow: visible;
/* Allow iconStyle width/height to override */
flex-shrink: 0;
}
.deviceIconSvg svg {
width: var(--svg-width, 100%);
height: var(--svg-height, 100%);
fill: var(--svg-fill, currentColor);
transform: var(--svg-transform, none);
}
.deviceIconImage {
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
}
.deviceIconImage img {
max-width: 100%;
max-height: 100%;
object-fit: contain;
}
.deviceName {
font-weight: var(--ifm-font-weight-semibold);
color: var(--ifm-font-color-base);
margin-bottom: 0.15rem;
font-size: 0.9rem;
}
.deviceDesc {
font-size: 0.75rem;
color: var(--ifm-font-color-secondary);
line-height: 1.3;
}
/* --- Checkbox grid --- */
.checkboxGrid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 0.5rem;
}
@media (max-width: 576px) {
.checkboxGrid {
grid-template-columns: 1fr;
}
}
.hardwareItem {
margin-bottom: 0;
}
.hardwareDescription {
margin: 0.15rem 0 0.4rem 1.6rem;
font-size: 0.8rem;
color: var(--ifm-font-color-secondary);
line-height: 1.5;
}
.hardwareDescription a {
color: var(--ifm-color-primary);
text-decoration: underline;
text-underline-offset: 2px;
}
.checkboxLabel {
display: flex;
align-items: center;
gap: 0.5rem;
cursor: pointer;
padding: 0.4rem 0.5rem;
border-radius: 6px;
transition: background-color 0.2s;
font-size: 0.9rem;
}
.checkboxLabel:hover {
background: var(--ifm-color-emphasis-100);
}
.checkboxLabel input[type="checkbox"] {
width: 1.1rem;
height: 1.1rem;
cursor: pointer;
flex-shrink: 0;
}
.checkboxLabel span {
color: var(--ifm-font-color-base);
}
.checkboxDisabled {
cursor: not-allowed;
}
.checkboxDisabled:hover {
background: transparent;
}
.checkboxDisabled input[type="checkbox"] {
cursor: not-allowed;
opacity: 0.5;
}
/* --- Form grid (side-by-side) --- */
.formGrid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 1rem;
}
@media (max-width: 576px) {
.formGrid {
grid-template-columns: 1fr;
}
}
.formGrid .formGroup {
margin-bottom: 0;
}
/* --- Port section --- */
.portSection {
margin-bottom: 0.75rem;
}
.warningBadge {
margin-left: auto;
color: #e67e22;
font-size: 0.85rem;
}
/* --- NVIDIA config --- */
.nvidiaConfig {
margin-top: 1rem;
margin-bottom: 1.5rem;
padding: 1rem;
background: var(--ifm-background-color);
border-radius: 8px;
border-left: 3px solid var(--ifm-color-primary);
}
[data-theme="light"] .nvidiaConfig {
background: #f6f8fa;
border-left: 3px solid var(--ifm-color-primary);
}
/* --- Result section --- */
.resultSection {
margin-top: 2rem;
}
.resultHeader {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.resultHeader h4 {
margin: 0;
color: var(--ifm-font-color-base);
}

View File

@ -5997,7 +5997,10 @@ paths:
tags:
- App
summary: Start debug replay
description: Start a debug replay session from camera recordings.
description:
Start a debug replay session from camera recordings. Returns
immediately while clip generation runs as a background job; subscribe
to the 'debug_replay' job_state WS topic to track progress.
operationId: start_debug_replay_debug_replay_start_post
requestBody:
required: true
@ -6006,12 +6009,16 @@ paths:
schema:
$ref: "#/components/schemas/DebugReplayStartBody"
responses:
"200":
"202":
description: Successful Response
content:
application/json:
schema:
$ref: "#/components/schemas/DebugReplayStartResponse"
"400":
description: Invalid camera, time range, or no recordings
"409":
description: A replay session is already active
"422":
description: Validation Error
content:
@ -6272,10 +6279,14 @@ components:
replay_camera:
type: string
title: Replay Camera
job_id:
type: string
title: Job Id
type: object
required:
- success
- replay_camera
- job_id
title: DebugReplayStartResponse
description: Response for starting a debug replay session.
DebugReplayStatusResponse:

View File

@ -146,8 +146,13 @@ def config(request: Request):
for name, detector in config_obj.detectors.items()
}
# remove the mqtt password
# remove environment_vars for non-admin users
if request.headers.get("remote-role") != "admin":
config.pop("environment_vars", None)
# remove mqtt credentials
config["mqtt"].pop("password", None)
config["mqtt"].pop("user", None)
# remove the proxy secret
config["proxy"].pop("auth_secret", None)

View File

@ -10,6 +10,7 @@ from pydantic import BaseModel, Field
from frigate.api.auth import require_role
from frigate.api.defs.tags import Tags
from frigate.jobs.debug_replay import start_debug_replay_job
logger = logging.getLogger(__name__)
@ -29,10 +30,17 @@ class DebugReplayStartResponse(BaseModel):
success: bool
replay_camera: str
job_id: str
class DebugReplayStatusResponse(BaseModel):
"""Response for debug replay status."""
"""Response for debug replay status.
Returns only session-presence fields. Startup progress and error
details flow through the job_state WebSocket topic via the
debug_replay job (see frigate.jobs.debug_replay); the
Replay page subscribes there with useJobStatus("debug_replay").
"""
active: bool
replay_camera: str | None = None
@ -51,15 +59,32 @@ class DebugReplayStopResponse(BaseModel):
@router.post(
"/debug_replay/start",
response_model=DebugReplayStartResponse,
status_code=202,
responses={
400: {"description": "Invalid camera, time range, or no recordings"},
409: {"description": "A replay session is already active"},
},
dependencies=[Depends(require_role(["admin"]))],
summary="Start debug replay",
description="Start a debug replay session from camera recordings.",
description="Start a debug replay session from camera recordings. Returns "
"immediately while clip generation runs as a background job; subscribe "
"to the 'debug_replay' job_state WS topic to track progress.",
)
async def start_debug_replay(request: Request, body: DebugReplayStartBody):
"""Start a debug replay session."""
"""Start a debug replay session asynchronously."""
replay_manager = request.app.replay_manager
if replay_manager.active:
try:
job_id = await asyncio.to_thread(
start_debug_replay_job,
source_camera=body.camera,
start_ts=body.start_time,
end_ts=body.end_time,
frigate_config=request.app.frigate_config,
config_publisher=request.app.config_publisher,
replay_manager=replay_manager,
)
except RuntimeError:
return JSONResponse(
content={
"success": False,
@ -67,38 +92,23 @@ async def start_debug_replay(request: Request, body: DebugReplayStartBody):
},
status_code=409,
)
try:
replay_camera = await asyncio.to_thread(
replay_manager.start,
source_camera=body.camera,
start_ts=body.start_time,
end_ts=body.end_time,
frigate_config=request.app.frigate_config,
config_publisher=request.app.config_publisher,
)
except ValueError:
logger.exception("Invalid parameters for debug replay start request")
logger.exception("Rejected debug replay start request")
return JSONResponse(
content={
"success": False,
"message": "Invalid debug replay request parameters",
"message": "Invalid debug replay parameters",
},
status_code=400,
)
except RuntimeError:
logger.exception("Error while starting debug replay session")
return JSONResponse(
content={
"success": False,
"message": "An internal error occurred while starting debug replay",
},
status_code=500,
)
return DebugReplayStartResponse(
success=True,
replay_camera=replay_camera,
return JSONResponse(
content={
"success": True,
"replay_camera": replay_manager.replay_camera_name,
"job_id": job_id,
},
status_code=202,
)
@ -118,12 +128,16 @@ def get_debug_replay_status(request: Request):
if replay_manager.active and replay_camera:
frame_processor = request.app.detected_frames_processor
frame = frame_processor.get_current_frame(replay_camera)
frame = (
frame_processor.get_current_frame(replay_camera)
if frame_processor is not None
else None
)
if frame is not None:
frame_time = frame_processor.get_current_frame_time(replay_camera)
camera_config = request.app.frigate_config.cameras.get(replay_camera)
retry_interval = 10
retry_interval = 10.0
if camera_config is not None:
retry_interval = float(camera_config.ffmpeg.retry_interval or 10)

View File

@ -174,12 +174,10 @@ async def latest_frame(
}
quality_params = get_image_quality_params(extension.value, params.quality)
if camera_name in request.app.frigate_config.cameras:
camera_config = request.app.frigate_config.cameras.get(camera_name)
if camera_config is not None:
frame = frame_processor.get_current_frame(camera_name, draw_options)
retry_interval = float(
request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
or 10
)
retry_interval = float(camera_config.ffmpeg.retry_interval or 10)
is_offline = False
if frame is None or datetime.now().timestamp() > (

View File

@ -429,7 +429,10 @@ class WebPushClient(Communicator):
else:
title = base_title
message = payload["after"]["data"]["metadata"]["shortSummary"]
if payload["after"]["data"]["metadata"].get("shortSummary"):
message = payload["after"]["data"]["metadata"]["shortSummary"]
else:
message = f"Detected on {camera_name}"
else:
zone_names = payload["after"]["data"]["zones"]
formatted_zone_names = []

View File

@ -17,9 +17,90 @@ from ws4py.websocket import WebSocket as WebSocket_
from frigate.comms.base_communicator import Communicator
from frigate.config import FrigateConfig
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
EXPIRE_AUDIO_ACTIVITY,
INSERT_MANY_RECORDINGS,
INSERT_PREVIEW,
NOTIFICATION_TEST,
REQUEST_REGION_GRID,
UPDATE_AUDIO_ACTIVITY,
UPDATE_AUDIO_TRANSCRIPTION_STATE,
UPDATE_BIRDSEYE_LAYOUT,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_MODEL_STATE,
UPDATE_REVIEW_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
)
logger = logging.getLogger(__name__)
# Internal IPC topics — NEVER allowed from WebSocket, regardless of role
_WS_BLOCKED_TOPICS = frozenset(
{
INSERT_MANY_RECORDINGS,
INSERT_PREVIEW,
REQUEST_REGION_GRID,
UPSERT_REVIEW_SEGMENT,
CLEAR_ONGOING_REVIEW_SEGMENTS,
UPDATE_CAMERA_ACTIVITY,
UPDATE_AUDIO_ACTIVITY,
EXPIRE_AUDIO_ACTIVITY,
UPDATE_EVENT_DESCRIPTION,
UPDATE_REVIEW_DESCRIPTION,
UPDATE_MODEL_STATE,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_BIRDSEYE_LAYOUT,
UPDATE_AUDIO_TRANSCRIPTION_STATE,
NOTIFICATION_TEST,
}
)
# Read-only topics any authenticated user (including viewer) can send
_WS_VIEWER_TOPICS = frozenset(
{
"onConnect",
"modelState",
"audioTranscriptionState",
"birdseyeLayout",
"embeddingsReindexProgress",
}
)
def _check_ws_authorization(
topic: str,
role_header: str | None,
separator: str,
) -> bool:
"""Check if a WebSocket message is authorized.
Args:
topic: The message topic.
role_header: The HTTP_REMOTE_ROLE header value, or None.
separator: The role separator character from proxy config.
Returns:
True if authorized, False if blocked.
"""
# Block IPC-only topics unconditionally
if topic in _WS_BLOCKED_TOPICS:
return False
# No role header: default to viewer (fail-closed)
if role_header is None:
return topic in _WS_VIEWER_TOPICS
# Check if any role is admin
roles = [r.strip() for r in role_header.split(separator)]
if "admin" in roles:
return True
# Non-admin: only viewer topics allowed
return topic in _WS_VIEWER_TOPICS
class WebSocket(WebSocket_): # type: ignore[misc]
def unhandled_error(self, error: Any) -> None:
@ -49,6 +130,7 @@ class WebSocketClient(Communicator):
class _WebSocketHandler(WebSocket):
receiver = self._dispatcher
role_separator = self.config.proxy.separator or ","
def received_message(self, message: WebSocket.received_message) -> None: # type: ignore[name-defined]
try:
@ -63,11 +145,25 @@ class WebSocketClient(Communicator):
)
return
logger.debug(
f"Publishing mqtt message from websockets at {json_message['topic']}."
topic = json_message["topic"]
# Authorization check (skip when environ is None — direct internal connection)
role_header = (
self.environ.get("HTTP_REMOTE_ROLE") if self.environ else None
)
if self.environ is not None and not _check_ws_authorization(
topic, role_header, self.role_separator
):
logger.warning(
"Blocked unauthorized WebSocket message: topic=%s, role=%s",
topic,
role_header,
)
return
logger.debug(f"Publishing mqtt message from websockets at {topic}.")
self.receiver(
json_message["topic"],
topic,
json_message["payload"],
)

View File

@ -1073,10 +1073,6 @@ class LicensePlateProcessingMixin:
top_score = score
top_box = bbox
if score > top_score:
top_score = score
top_box = bbox
# Return the top scoring bounding box if found
if top_box is not None:
# expand box by 5% to help with OCR
@ -1092,9 +1088,6 @@ class LicensePlateProcessingMixin:
]
).clip(0, [input.shape[1], input.shape[0]] * 2)
logger.debug(
f"{camera}: Found license plate. Bounding box: {expanded_box.astype(int)}"
)
return tuple(int(x) for x in expanded_box) # type: ignore[return-value]
else:
return None # No detection above the threshold
@ -1360,8 +1353,8 @@ class LicensePlateProcessingMixin:
)
# check that license plate is valid
# double the value because we've doubled the size of the car
if license_plate_area < self.config.cameras[camera].lpr.min_area * 2:
# quadruple the value because we've doubled both dimensions of the car
if license_plate_area < self.config.cameras[camera].lpr.min_area * 4:
logger.debug(f"{camera}: License plate is less than min_area")
return
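The comment fix in this hunk is simple area scaling: doubling both dimensions of a crop multiplies its area by four, so a threshold defined against the original image must be quadrupled, not doubled, to stay equivalent. A quick check with illustrative numbers:

```python
w, h = 120, 80                      # original car crop (illustrative)
area = w * h
doubled_area = (2 * w) * (2 * h)
assert doubled_area == 4 * area     # 2x each dimension -> 4x area

min_area = 500
# The quadrupled threshold keeps the same relative cutoff in the
# doubled crop's coordinate space as min_area did in the original.
assert (min_area * 4) / doubled_area == min_area / area
```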
@ -1465,6 +1458,7 @@ class LicensePlateProcessingMixin:
license_plate_frame,
)
logger.debug(f"{camera}: Found license plate. Bounding box: {list(plate_box)}")
logger.debug(f"{camera}: Running plate recognition for id: {id}.")
# run detection, returns results sorted by confidence, best first

View File

@ -1,9 +1,13 @@
"""Debug replay camera management for replaying recordings with detection overlays."""
"""Debug replay camera management for replaying recordings with detection overlays.
The startup work (ffmpeg concat + camera config publish) lives in
frigate.jobs.debug_replay. This module owns only session presence
(active), session metadata, and post-session cleanup.
"""
import logging
import os
import shutil
import subprocess as sp
import threading
from ruamel.yaml import YAML
@ -21,7 +25,7 @@ from frigate.const import (
REPLAY_DIR,
THUMB_DIR,
)
from frigate.models import Recordings
from frigate.jobs.debug_replay import cancel_debug_replay_job, wait_for_runner
from frigate.util.camera_cleanup import cleanup_camera_db, cleanup_camera_files
from frigate.util.config import find_config_file
@ -29,7 +33,14 @@ logger = logging.getLogger(__name__)
class DebugReplayManager:
"""Manages a single debug replay session."""
"""Owns the lifecycle pointers for a single debug replay session.
A session exists from the moment mark_starting is called (synchronously,
inside the API handler) until clear_session runs (on success cleanup,
failure, or stop). The active property is the source of truth the
status bar consumes; it spans a broader window than the startup job,
which only covers the preparing_clip / starting_camera window.
"""
def __init__(self) -> None:
self._lock = threading.Lock()
@ -41,144 +52,66 @@ class DebugReplayManager:
@property
def active(self) -> bool:
"""Whether a replay session is currently active."""
"""True from mark_starting until clear_session."""
return self.replay_camera_name is not None
def start(
def mark_starting(
self,
source_camera: str,
replay_camera_name: str,
start_ts: float,
end_ts: float,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
) -> str:
"""Start a debug replay session.
) -> None:
"""Synchronously claim the session before the job runner starts.
Args:
source_camera: Name of the source camera to replay
start_ts: Start timestamp
end_ts: End timestamp
frigate_config: Current Frigate configuration
config_publisher: Publisher for camera config updates
Returns:
The replay camera name
Raises:
ValueError: If a session is already active or parameters are invalid
RuntimeError: If clip generation fails
Called inside the API handler so the status bar sees active=True
immediately, before the worker thread does any ffmpeg work.
"""
with self._lock:
return self._start_locked(
source_camera, start_ts, end_ts, frigate_config, config_publisher
)
self.replay_camera_name = replay_camera_name
self.source_camera = source_camera
self.start_ts = start_ts
self.end_ts = end_ts
self.clip_path = None
def _start_locked(
def mark_session_ready(self, clip_path: str) -> None:
"""Record the on-disk clip path after the camera has been published."""
with self._lock:
self.clip_path = clip_path
def clear_session(self) -> None:
"""Reset session pointers without publishing camera removal.
Used by the job runner on failure paths. stop() does the camera
teardown plus this clear in one step.
"""
with self._lock:
self._clear_locked()
def _clear_locked(self) -> None:
self.replay_camera_name = None
self.source_camera = None
self.clip_path = None
self.start_ts = None
self.end_ts = None
def publish_camera(
self,
source_camera: str,
start_ts: float,
end_ts: float,
replay_name: str,
clip_path: str,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
) -> str:
if self.active:
raise ValueError("A replay session is already active")
) -> None:
"""Build the in-memory replay camera config and publish the add event.
if source_camera not in frigate_config.cameras:
raise ValueError(f"Camera '{source_camera}' not found")
if end_ts <= start_ts:
raise ValueError("End time must be after start time")
# Query recordings for the source camera in the time range
recordings = (
Recordings.select(
Recordings.path,
Recordings.start_time,
Recordings.end_time,
)
.where(
Recordings.start_time.between(start_ts, end_ts)
| Recordings.end_time.between(start_ts, end_ts)
| ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
)
.where(Recordings.camera == source_camera)
.order_by(Recordings.start_time.asc())
)
if not recordings.count():
raise ValueError(
f"No recordings found for camera '{source_camera}' in the specified time range"
)
# Create replay directory
os.makedirs(REPLAY_DIR, exist_ok=True)
# Generate replay camera name
replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
# Build concat file for ffmpeg
concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")
with open(concat_file, "w") as f:
for recording in recordings:
f.write(f"file '{recording.path}'\n")
# Concatenate recordings into a single clip with -c copy (fast)
ffmpeg_cmd = [
frigate_config.ffmpeg.ffmpeg_path,
"-hide_banner",
"-y",
"-f",
"concat",
"-safe",
"0",
"-i",
concat_file,
"-c",
"copy",
"-movflags",
"+faststart",
clip_path,
]
logger.info(
"Generating replay clip for %s (%.1f - %.1f)",
source_camera,
start_ts,
end_ts,
)
try:
result = sp.run(
ffmpeg_cmd,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
logger.error("FFmpeg error: %s", result.stderr)
raise RuntimeError(
f"Failed to generate replay clip: {result.stderr[-500:]}"
)
except sp.TimeoutExpired:
raise RuntimeError("Clip generation timed out")
finally:
# Clean up concat file
if os.path.exists(concat_file):
os.remove(concat_file)
if not os.path.exists(clip_path):
raise RuntimeError("Clip file was not created")
# Build camera config dict for the replay camera
Called by the job runner during the starting_camera phase.
"""
source_config = frigate_config.cameras[source_camera]
camera_dict = self._build_camera_config_dict(
source_config, replay_name, clip_path
)
# Build an in-memory config with the replay camera added
config_file = find_config_file()
yaml_parser = YAML()
with open(config_file, "r") as f:
@ -191,75 +124,48 @@ class DebugReplayManager:
try:
new_config = FrigateConfig.parse_object(config_data)
except Exception as e:
raise RuntimeError(f"Failed to validate replay camera config: {e}")
# Update the running config
raise RuntimeError(f"Failed to validate replay camera config: {e}") from e
frigate_config.cameras[replay_name] = new_config.cameras[replay_name]
# Publish the add event
config_publisher.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.add, replay_name),
new_config.cameras[replay_name],
)
# Store session state
self.replay_camera_name = replay_name
self.source_camera = source_camera
self.clip_path = clip_path
self.start_ts = start_ts
self.end_ts = end_ts
logger.info("Debug replay started: %s -> %s", source_camera, replay_name)
return replay_name
def stop(
self,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
) -> None:
"""Stop the active replay session and clean up all artifacts.
"""Cancel any in-flight startup job and tear down the active session.
Args:
frigate_config: Current Frigate configuration
config_publisher: Publisher for camera config updates
Safe to call when no session is active (no-op with a warning).
"""
cancel_debug_replay_job()
wait_for_runner(timeout=2.0)
with self._lock:
self._stop_locked(frigate_config, config_publisher)
if not self.active:
logger.warning("No active replay session to stop")
return
def _stop_locked(
self,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
) -> None:
if not self.active:
logger.warning("No active replay session to stop")
return
replay_name = self.replay_camera_name
# Only publish remove if the camera was actually added to the live
# config (i.e. the runner reached the starting_camera phase).
if replay_name is not None and replay_name in frigate_config.cameras:
config_publisher.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
frigate_config.cameras[replay_name],
)
# Do NOT pop here — let subscribers handle removal from the shared
# config dict when they process the ZMQ message to avoid race conditions
if replay_name is not None:
self._cleanup_db(replay_name)
self._cleanup_files(replay_name)
self._clear_locked()
logger.info("Debug replay stopped and cleaned up: %s", replay_name)
def _build_camera_config_dict(
self,
@ -267,16 +173,7 @@ class DebugReplayManager:
replay_name: str,
clip_path: str,
) -> dict:
"""Build a camera config dictionary for the replay camera."""
# Extract detect config (exclude computed fields)
detect_dict = source_config.detect.model_dump(
exclude={"min_initialized", "max_disappeared", "enabled_in_config"}
@ -311,7 +208,6 @@ class DebugReplayManager:
zone_dump = zone_config.model_dump(
exclude={"contour", "color"}, exclude_defaults=True
)
zone_dump.setdefault("coordinates", zone_config.coordinates)
zones_dict[zone_name] = zone_dump


@ -52,6 +52,12 @@ class OvDetector(DetectionApi):
self.h = detector_config.model.height
self.w = detector_config.model.width
logger.info(
"Loading OpenVINO model %s on device %s",
detector_config.model.path,
detector_config.device,
)
self.runner = OpenVINOModelRunner(
model_path=detector_config.model.path,
device=detector_config.device,


@ -31,6 +31,12 @@ class OllamaClient(GenAIClient):
provider: ApiClient | None
provider_options: dict[str, Any]
def _auth_headers(self) -> dict | None:
if self.genai_config.api_key:
return {"Authorization": "Bearer " + self.genai_config.api_key}
return None
def _init_provider(self) -> ApiClient | None:
"""Initialize the client."""
self.provider_options = {
@ -39,7 +45,11 @@ class OllamaClient(GenAIClient):
}
try:
client = ApiClient(
host=self.genai_config.base_url,
timeout=self.timeout,
headers=self._auth_headers(),
)
# ensure the model is available locally
response = client.show(self.genai_config.model)
if response.get("error"):
@ -166,7 +176,9 @@ class OllamaClient(GenAIClient):
return []
try:
client = ApiClient(
host=self.genai_config.base_url,
timeout=self.timeout,
headers=self._auth_headers(),
)
except Exception:
return []
@ -344,6 +356,7 @@ class OllamaClient(GenAIClient):
async_client = OllamaAsyncClient(
host=self.genai_config.base_url,
timeout=self.timeout,
headers=self._auth_headers(),
)
response = await async_client.chat(**request_params)
result = self._message_from_response(response)
@ -359,6 +372,7 @@ class OllamaClient(GenAIClient):
async_client = OllamaAsyncClient(
host=self.genai_config.base_url,
timeout=self.timeout,
headers=self._auth_headers(),
)
content_parts: list[str] = []
final_message: dict[str, Any] | None = None
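The `_auth_headers` pattern in this file — emit an `Authorization: Bearer` header only when an API key is configured, and return `None` otherwise so the client omits the header entirely — can be sketched as a standalone function (an illustrative sketch; `auth_headers` is a hypothetical name):

```python
from typing import Optional


def auth_headers(api_key: Optional[str]) -> Optional[dict]:
    # Only attach an Authorization header when a key is configured;
    # empty strings and None both mean "no auth".
    if api_key:
        return {"Authorization": "Bearer " + api_key}
    return None
```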


@ -0,0 +1,386 @@
"""Debug replay startup job: ffmpeg concat + camera config publish.
The runner orchestrates the async portion of starting a debug replay
session. The DebugReplayManager (in frigate.debug_replay) owns session
presence so the status bar can keep reading a single `active` flag from
/debug_replay/status for the entire session window, which is broader
than this job's lifetime.
"""
import logging
import os
import subprocess as sp
import threading
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Optional, cast
from peewee import ModelSelect
from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.const import REPLAY_CAMERA_PREFIX, REPLAY_DIR
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.job import Job
from frigate.jobs.manager import job_is_running, set_current_job
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum
from frigate.util.ffmpeg import run_ffmpeg_with_progress
if TYPE_CHECKING:
from frigate.debug_replay import DebugReplayManager
logger = logging.getLogger(__name__)
# Coalesce frequent ffmpeg progress callbacks so the WS isn't flooded.
PROGRESS_BROADCAST_MIN_INTERVAL = 1.0
JOB_TYPE = "debug_replay"
STEP_PREPARING_CLIP = "preparing_clip"
STEP_STARTING_CAMERA = "starting_camera"
_active_runner: Optional["DebugReplayJobRunner"] = None
_runner_lock = threading.Lock()
def _set_active_runner(runner: Optional["DebugReplayJobRunner"]) -> None:
global _active_runner
with _runner_lock:
_active_runner = runner
def get_active_runner() -> Optional["DebugReplayJobRunner"]:
with _runner_lock:
return _active_runner
@dataclass
class DebugReplayJob(Job):
"""Job state for a debug replay startup."""
job_type: str = JOB_TYPE
source_camera: str = ""
replay_camera_name: str = ""
start_ts: float = 0.0
end_ts: float = 0.0
current_step: Optional[str] = None
progress_percent: float = 0.0
def to_dict(self) -> dict[str, Any]:
"""Whitelisted payload for the job_state WS topic.
Replay-specific fields land in results so the frontend's
generic Job<TResults> type can be parameterised cleanly.
"""
return {
"id": self.id,
"job_type": self.job_type,
"status": self.status,
"start_time": self.start_time,
"end_time": self.end_time,
"error_message": self.error_message,
"results": {
"current_step": self.current_step,
"progress_percent": self.progress_percent,
"source_camera": self.source_camera,
"replay_camera_name": self.replay_camera_name,
"start_ts": self.start_ts,
"end_ts": self.end_ts,
},
}
def query_recordings(source_camera: str, start_ts: float, end_ts: float) -> ModelSelect:
"""Return the Recordings query for the time range.
Module-level so tests can patch it without instantiating a runner.
"""
query = (
Recordings.select(
Recordings.path,
Recordings.start_time,
Recordings.end_time,
)
.where(
Recordings.start_time.between(start_ts, end_ts)
| Recordings.end_time.between(start_ts, end_ts)
| ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
)
.where(Recordings.camera == source_camera)
.order_by(Recordings.start_time.asc())
)
return cast(ModelSelect, query)
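The three-clause `where` above selects any recording that overlaps the requested window: it starts inside it, ends inside it, or fully contains it. The same predicate as a pure-Python sketch (hypothetical helper name):

```python
def recording_overlaps(
    rec_start: float, rec_end: float, start_ts: float, end_ts: float
) -> bool:
    # Same three clauses as the peewee query: recording starts in the
    # window, ends in the window, or fully contains the window.
    return (
        start_ts <= rec_start <= end_ts
        or start_ts <= rec_end <= end_ts
        or (rec_start < start_ts and rec_end > end_ts)
    )
```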
class DebugReplayJobRunner(threading.Thread):
"""Worker thread that drives the startup job to completion.
Owns the live ffmpeg Popen reference for cancellation. Cancellation
is two-step (threading.Event + proc.terminate()) so the runner
both knows it should stop and is unblocked from its blocking subprocess
wait.
"""
def __init__(
self,
job: DebugReplayJob,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
replay_manager: "DebugReplayManager",
publisher: Optional[JobStatePublisher] = None,
) -> None:
super().__init__(daemon=True, name=f"debug_replay_{job.id}")
self.job = job
self.frigate_config = frigate_config
self.config_publisher = config_publisher
self.replay_manager = replay_manager
self.publisher = publisher if publisher is not None else JobStatePublisher()
self._cancel_event = threading.Event()
self._active_process: sp.Popen | None = None
self._proc_lock = threading.Lock()
self._last_broadcast_monotonic: float = 0.0
def cancel(self) -> None:
"""Request cancellation. Idempotent."""
self._cancel_event.set()
with self._proc_lock:
proc = self._active_process
if proc is not None:
try:
proc.terminate()
except Exception as exc:
logger.warning("Failed to terminate ffmpeg subprocess: %s", exc)
def is_cancelled(self) -> bool:
return self._cancel_event.is_set()
def _record_proc(self, proc: sp.Popen) -> None:
with self._proc_lock:
self._active_process = proc
# Race: cancel arrived between Popen and _record_proc.
if self._cancel_event.is_set():
try:
proc.terminate()
except Exception:
pass
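The `cancel`/`_record_proc` pair above implements two-step cancellation with a race guard: the event tells the worker to stop, `terminate()` unblocks it from a blocking wait, and if `cancel()` lands between `Popen` and registration, the registration side terminates the process itself. A minimal standalone sketch under those assumptions (`CancellableProcHolder` is a hypothetical name):

```python
import threading


class CancellableProcHolder:
    """Two-step cancellation: an Event tells the worker loop to stop,
    and terminate() unblocks it from a blocking subprocess wait."""

    def __init__(self) -> None:
        self._cancel = threading.Event()
        self._proc = None
        self._lock = threading.Lock()

    def cancel(self) -> None:
        self._cancel.set()
        with self._lock:
            if self._proc is not None:
                self._proc.terminate()

    def record_proc(self, proc) -> None:
        with self._lock:
            self._proc = proc
            # Close the race where cancel() ran between Popen and here.
            if self._cancel.is_set():
                proc.terminate()
```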
def _broadcast(self, force: bool = False) -> None:
now = time.monotonic()
if (
not force
and now - self._last_broadcast_monotonic < PROGRESS_BROADCAST_MIN_INTERVAL
):
return
self._last_broadcast_monotonic = now
try:
self.publisher.publish(self.job.to_dict())
except Exception as err:
logger.warning("Publisher raised during job state broadcast: %s", err)
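`_broadcast` coalesces bursts via `time.monotonic`, forwarding at most one payload per `PROGRESS_BROADCAST_MIN_INTERVAL` unless forced (start, end, and error transitions are always sent). The same pattern as a standalone sketch (`ThrottledPublisher` is a hypothetical name):

```python
import time
from typing import Optional


class ThrottledPublisher:
    """Coalesce bursts: forward at most one payload per min_interval,
    unless the caller forces the broadcast."""

    def __init__(self, publish, min_interval: float = 1.0) -> None:
        self._publish = publish
        self._min_interval = min_interval
        self._last: Optional[float] = None  # monotonic time of last send

    def send(self, payload, force: bool = False) -> bool:
        now = time.monotonic()
        if (
            not force
            and self._last is not None
            and now - self._last < self._min_interval
        ):
            return False  # dropped; a later call carries fresher state
        self._last = now
        self._publish(payload)
        return True
```

A monotonic clock is used deliberately: wall-clock adjustments would otherwise let bursts slip through or starve broadcasts.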
def run(self) -> None:
replay_name = self.job.replay_camera_name
os.makedirs(REPLAY_DIR, exist_ok=True)
concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")
self.job.status = JobStatusTypesEnum.running
self.job.start_time = time.time()
self.job.current_step = STEP_PREPARING_CLIP
self._broadcast(force=True)
try:
recordings = query_recordings(
self.job.source_camera, self.job.start_ts, self.job.end_ts
)
with open(concat_file, "w") as f:
for recording in recordings:
f.write(f"file '{recording.path}'\n")
ffmpeg_cmd = [
self.frigate_config.ffmpeg.ffmpeg_path,
"-hide_banner",
"-y",
"-f",
"concat",
"-safe",
"0",
"-i",
concat_file,
"-c",
"copy",
"-movflags",
"+faststart",
clip_path,
]
logger.info(
"Generating replay clip for %s (%.1f - %.1f)",
self.job.source_camera,
self.job.start_ts,
self.job.end_ts,
)
def _on_progress(percent: float) -> None:
self.job.progress_percent = percent
self._broadcast()
try:
returncode, stderr = run_ffmpeg_with_progress(
ffmpeg_cmd,
expected_duration_seconds=max(
0.0, self.job.end_ts - self.job.start_ts
),
on_progress=_on_progress,
process_started=self._record_proc,
use_low_priority=True,
)
finally:
with self._proc_lock:
self._active_process = None
if self._cancel_event.is_set():
self._finalize_cancelled(clip_path)
return
if returncode != 0:
raise RuntimeError(f"FFmpeg failed: {stderr[-500:]}")
if not os.path.exists(clip_path):
raise RuntimeError("Clip file was not created")
self.job.current_step = STEP_STARTING_CAMERA
self.job.progress_percent = 100.0
self._broadcast(force=True)
if self._cancel_event.is_set():
self._finalize_cancelled(clip_path)
return
self.replay_manager.publish_camera(
source_camera=self.job.source_camera,
replay_name=replay_name,
clip_path=clip_path,
frigate_config=self.frigate_config,
config_publisher=self.config_publisher,
)
self.replay_manager.mark_session_ready(clip_path)
self.job.status = JobStatusTypesEnum.success
self.job.end_time = time.time()
self._broadcast(force=True)
logger.info(
"Debug replay started: %s -> %s",
self.job.source_camera,
replay_name,
)
except Exception as exc:
logger.exception("Debug replay startup failed")
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = str(exc)
self.job.end_time = time.time()
self._broadcast(force=True)
self.replay_manager.clear_session()
_remove_silent(clip_path)
finally:
_remove_silent(concat_file)
_set_active_runner(None)
def _finalize_cancelled(self, clip_path: str) -> None:
logger.info("Debug replay startup cancelled")
self.job.status = JobStatusTypesEnum.cancelled
self.job.end_time = time.time()
self._broadcast(force=True)
# The caller of cancel_debug_replay_job (DebugReplayManager.stop) owns
# session cleanup — db rows, filesystem artifacts, clear_session. We
# only clean up the partial concat output we created.
_remove_silent(clip_path)
def _remove_silent(path: str) -> None:
try:
if os.path.exists(path):
os.remove(path)
except OSError:
pass
def start_debug_replay_job(
*,
source_camera: str,
start_ts: float,
end_ts: float,
frigate_config: FrigateConfig,
config_publisher: CameraConfigUpdatePublisher,
replay_manager: "DebugReplayManager",
) -> str:
"""Validate, create job, start runner. Returns the job id.
Raises ValueError for bad params (camera missing, time range
invalid, no recordings) and RuntimeError if a session is already
active.
"""
if job_is_running(JOB_TYPE) or replay_manager.active:
raise RuntimeError("A replay session is already active")
if source_camera not in frigate_config.cameras:
raise ValueError(f"Camera '{source_camera}' not found")
if end_ts <= start_ts:
raise ValueError("End time must be after start time")
recordings = query_recordings(source_camera, start_ts, end_ts)
if not recordings.count():
raise ValueError(
f"No recordings found for camera '{source_camera}' in the specified time range"
)
replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
replay_manager.mark_starting(
source_camera=source_camera,
replay_camera_name=replay_name,
start_ts=start_ts,
end_ts=end_ts,
)
job = DebugReplayJob(
source_camera=source_camera,
replay_camera_name=replay_name,
start_ts=start_ts,
end_ts=end_ts,
)
set_current_job(job)
runner = DebugReplayJobRunner(
job=job,
frigate_config=frigate_config,
config_publisher=config_publisher,
replay_manager=replay_manager,
)
_set_active_runner(runner)
runner.start()
return job.id
def cancel_debug_replay_job() -> bool:
"""Signal the active runner to cancel.
Returns True if a runner was signalled, False if no job was active.
"""
runner = get_active_runner()
if runner is None:
return False
runner.cancel()
return True
def wait_for_runner(timeout: float = 2.0) -> bool:
"""Join the active runner. Returns True if the runner ended in time."""
runner = get_active_runner()
if runner is None:
return True
runner.join(timeout=timeout)
return not runner.is_alive()


@ -349,6 +349,13 @@ def move_preview_frames(loc: str) -> None:
if not os.path.exists(preview_holdover):
return
if not os.access(preview_holdover, os.R_OK | os.W_OK):
logger.error(
"Insufficient permissions on preview restart cache at %s",
preview_holdover,
)
return
shutil.move(preview_holdover, preview_cache)
except shutil.Error:
logger.error("Failed to restore preview cache.")
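The permission check added above fails fast before `shutil.move` can die partway through a multi-file move. A standalone sketch of the same guard (`safe_move` is a hypothetical helper name):

```python
import logging
import os
import shutil


def safe_move(src: str, dst: str) -> bool:
    # Check both readability and writability up front, mirroring the
    # os.access guard above, so the move either happens fully or not at all.
    if not os.path.exists(src):
        return False
    if not os.access(src, os.R_OK | os.W_OK):
        logging.getLogger(__name__).error(
            "Insufficient permissions on %s", src
        )
        return False
    shutil.move(src, dst)
    return True
```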


@ -361,14 +361,17 @@ class PreviewRecorder:
small_frame,
cv2.COLOR_YUV2BGR_I420,
)
cache_path = get_cache_image_name(self.camera_name, frame_time)
if not cv2.imwrite(
cache_path,
small_frame,
[
int(cv2.IMWRITE_WEBP_QUALITY),
PREVIEW_QUALITY_WEBP[self.config.record.preview.quality],
],
):
logger.error("Failed to write preview frame to %s", cache_path)
def write_data(
self,


@ -13,6 +13,7 @@ from enum import Enum
from pathlib import Path
from typing import Callable, Optional
import pytz # type: ignore[import-untyped]
from peewee import DoesNotExist
from frigate.config import FfmpegConfig, FrigateConfig
@ -22,13 +23,13 @@ from frigate.const import (
EXPORT_DIR,
MAX_PLAYLIST_SECONDS,
PREVIEW_FRAME_TYPE,
)
from frigate.ffmpeg_presets import (
EncodeTypeEnum,
parse_preset_hardware_acceleration_encode,
)
from frigate.models import Export, Previews, Recordings, ReviewSegment
from frigate.util.ffmpeg import run_ffmpeg_with_progress
from frigate.util.time import is_current_hour
logger = logging.getLogger(__name__)
@ -242,109 +243,43 @@ class RecordingExporter(threading.Thread):
return total
def _run_ffmpeg_with_progress(
self,
ffmpeg_cmd: list[str],
playlist_lines: str | list[str],
step: str = "encoding",
) -> tuple[int, str]:
"""Delegate to the shared helper, mapping percent → (step, percent).
Returns ``(returncode, captured_stderr)``.
"""
if isinstance(playlist_lines, list):
stdin_payload = "\n".join(playlist_lines)
else:
stdin_payload = playlist_lines
return run_ffmpeg_with_progress(
ffmpeg_cmd,
expected_duration_seconds=self._expected_output_duration_seconds(),
on_progress=lambda percent: self._emit_progress(step, percent),
stdin_payload=stdin_payload,
use_low_priority=True,
)
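ffmpeg's `-progress pipe:2` output is a stream of `key=value` lines. Turning one such line into a completion percentage, as the shared helper does, can be sketched like this (an illustrative sketch, clamped to [0, 100]; `progress_percent` is a hypothetical name):

```python
from typing import Optional


def progress_percent(line: str, expected_duration: float) -> Optional[float]:
    # Parse one line of ffmpeg's ``-progress pipe:2`` key=value stream
    # into a completion percentage; None means "no progress info here".
    line = line.strip()
    if line == "progress=end":
        return 100.0
    if not line.startswith("out_time_us=") or expected_duration <= 0:
        return None
    try:
        out_time_us = int(line.split("=", 1)[1])
    except ValueError:
        return None
    if out_time_us < 0:
        return None
    return min(100.0, (out_time_us / 1_000_000.0) / expected_duration * 100.0)
```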
def get_datetime_from_timestamp(self, timestamp: int) -> str:
# return in iso format using the configured ui.timezone when set,
# so the auto-generated export name reflects local time rather
# than the container's UTC clock
tz_name = self.config.ui.timezone
if tz_name:
try:
tz = pytz.timezone(tz_name)
except pytz.UnknownTimeZoneError:
tz = None
if tz is not None:
return datetime.datetime.fromtimestamp(timestamp, tz=tz).strftime(
"%Y-%m-%d %H:%M:%S"
)
return datetime.datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")
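The same local-time formatting can be sketched with the stdlib `zoneinfo` instead of pytz (an assumption for illustration — Frigate itself uses pytz here), falling back to the system clock when the name is unset or unknown:

```python
import datetime
from typing import Optional
from zoneinfo import ZoneInfo


def format_local(timestamp: float, tz_name: Optional[str]) -> str:
    # Resolve the configured timezone; fall back to the system clock
    # when no name is set or the name is unknown, mirroring the
    # pytz-based method above.
    tz = None
    if tz_name:
        try:
            tz = ZoneInfo(tz_name)
        except Exception:
            tz = None
    return datetime.datetime.fromtimestamp(timestamp, tz=tz).strftime(
        "%Y-%m-%d %H:%M:%S"
    )
```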
def _chapter_metadata_path(self) -> str:
@ -407,6 +342,7 @@ class RecordingExporter(threading.Thread):
return None
total_output = windows[-1][2] + (windows[-1][1] - windows[-1][0])
last_recorded_end = windows[-1][1]
def wall_to_output(t: float) -> float:
t = max(float(self.start_time), min(float(self.end_time), t))
@ -419,8 +355,18 @@ class RecordingExporter(threading.Thread):
chapter_blocks: list[str] = []
for review in review_rows:
if review.start_time is None:
continue
# In-progress segments have a NULL end_time until the activity
# closes; clamp to the last recorded second so the chapter never
# extends past the actual video.
review_end = (
float(review.end_time)
if review.end_time is not None
else last_recorded_end
)
start_out = wall_to_output(float(review.start_time))
end_out = wall_to_output(review_end)
# Drop chapters that fall entirely in a recording gap, or are
# too short to be navigable in a player.
@ -503,16 +449,14 @@ class RecordingExporter(threading.Thread):
except DoesNotExist:
return ""
diff = max(0.0, float(self.start_time) - float(preview.start_time))
ffmpeg_cmd = [
"/usr/lib/ffmpeg/7.0/bin/ffmpeg", # hardcode path for exports thumbnail due to missing libwebp support
"-hide_banner",
"-loglevel",
"warning",
"-ss",
f"{diff:.3f}",
"-i",
preview.path,
"-frames",
@ -538,12 +482,18 @@ class RecordingExporter(threading.Thread):
start_file = f"{file_start}{self.start_time}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}{self.end_time}.{PREVIEW_FRAME_TYPE}"
selected_preview = None
# Preview frames are written at most 1-2 fps during activity
# and as little as one every 30s during quiet periods, so a
# short export window can contain zero frames. Track the most
# recent frame before the window as a fallback.
fallback_preview = None
for file in sorted(os.listdir(preview_dir)):
if not file.startswith(file_start):
continue
if file < start_file:
fallback_preview = os.path.join(preview_dir, file)
continue
if file > end_file:
@ -552,6 +502,9 @@ class RecordingExporter(threading.Thread):
selected_preview = os.path.join(preview_dir, file)
break
if not selected_preview:
selected_preview = fallback_preview
if not selected_preview:
return ""
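The frame selection with fallback above reduces to a small pure function over sorted filenames (compared lexically, as in the directory scan): prefer the first frame inside the window, else keep the most recent frame written before it. A hypothetical standalone sketch:

```python
from typing import Optional


def pick_preview_frame(
    names: list, start_name: str, end_name: str
) -> Optional[str]:
    # Prefer the first frame inside [start_name, end_name]; otherwise
    # fall back to the most recent frame written before the window.
    fallback = None
    for name in sorted(names):
        if name < start_name:
            fallback = name
            continue
        if name > end_name:
            break
        return name
    return fallback
```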


@ -0,0 +1,123 @@
"""Tests for /debug_replay API endpoints."""
from unittest.mock import patch
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestDebugReplayAPI(BaseTestHttp):
def setUp(self):
super().setUp([Event, Recordings, ReviewSegment])
self.app = self.create_app()
def test_start_returns_202_with_job_id(self):
# Stub the factory to skip validation/threading and just record the
# name on the manager the way the real factory's mark_starting would.
def fake_start(**kwargs):
kwargs["replay_manager"].mark_starting(
source_camera=kwargs["source_camera"],
replay_camera_name="_replay_front",
start_ts=kwargs["start_ts"],
end_ts=kwargs["end_ts"],
)
return "job-1234"
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=fake_start,
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 202)
body = resp.json()
self.assertTrue(body["success"])
self.assertEqual(body["job_id"], "job-1234")
self.assertEqual(body["replay_camera"], "_replay_front")
def test_start_returns_400_on_validation_error(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=ValueError("Camera 'missing' not found"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "missing",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 400)
body = resp.json()
self.assertFalse(body["success"])
# Message is hard-coded so we don't echo exception text back to clients
# (CodeQL: information exposure through an exception).
self.assertEqual(body["message"], "Invalid debug replay parameters")
def test_start_returns_409_when_session_already_active(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=RuntimeError("A replay session is already active"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 409)
body = resp.json()
self.assertFalse(body["success"])
def test_status_inactive_when_no_session(self):
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertFalse(body["active"])
self.assertIsNone(body["replay_camera"])
self.assertIsNone(body["source_camera"])
self.assertIsNone(body["start_time"])
self.assertIsNone(body["end_time"])
self.assertFalse(body["live_ready"])
# Make sure deprecated fields are gone
self.assertNotIn("state", body)
self.assertNotIn("progress_percent", body)
self.assertNotIn("error_message", body)
def test_status_active_after_mark_starting(self):
manager = self.app.replay_manager
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertTrue(body["active"])
self.assertEqual(body["replay_camera"], "_replay_front")
self.assertEqual(body["source_camera"], "front")
self.assertEqual(body["start_time"], 100.0)
self.assertEqual(body["end_time"], 200.0)
self.assertFalse(body["live_ready"])


@ -0,0 +1,242 @@
"""Tests for the simplified DebugReplayManager.
Startup orchestration lives in ``frigate.jobs.debug_replay`` (covered by
``test_debug_replay_job``). The manager owns only session presence and
cleanup.
"""
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
class TestDebugReplayManagerSession(unittest.TestCase):
def test_inactive_by_default(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
def test_mark_starting_sets_session_pointers_and_active(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
self.assertTrue(manager.active)
self.assertEqual(manager.replay_camera_name, "_replay_front")
self.assertEqual(manager.source_camera, "front")
self.assertEqual(manager.start_ts, 100.0)
self.assertEqual(manager.end_ts, 200.0)
self.assertIsNone(manager.clip_path)
def test_mark_session_ready_sets_clip_path(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready(clip_path="/tmp/replay/_replay_front.mp4")
self.assertEqual(manager.clip_path, "/tmp/replay/_replay_front.mp4")
self.assertTrue(manager.active)
def test_clear_session_resets_all_pointers(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/clip.mp4")
manager.clear_session()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
class TestDebugReplayManagerStop(unittest.TestCase):
def test_stop_when_inactive_is_a_noop(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
# Should not raise; should not publish any events.
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
def test_stop_publishes_remove_when_camera_was_published(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/_replay_front.mp4")
camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"_replay_front": camera_config}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=False),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
# One publish_update call with a remove topic.
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.remove)
self.assertFalse(manager.active)
def test_stop_skips_remove_publish_when_camera_not_in_config(self) -> None:
"""Cancellation during preparing_clip: no camera was published yet."""
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
# clip_path stays None because we cancelled before camera publish.
frigate_config = MagicMock()
frigate_config.cameras = {} # _replay_front not present
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=True),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
self.assertFalse(manager.active)
def test_stop_calls_cancel_debug_replay_job(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch(
"frigate.debug_replay.cancel_debug_replay_job",
return_value=True,
) as mock_cancel,
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
mock_cancel.assert_called_once()
class TestDebugReplayManagerPublishCamera(unittest.TestCase):
def test_publish_camera_invokes_publisher_with_add_topic(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
source_config = MagicMock()
new_camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"front": source_config}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch("frigate.debug_replay.FrigateConfig.parse_object") as parse_object,
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_instance = yaml_cls.return_value
yaml_instance.load.return_value = {"cameras": {}}
parsed = MagicMock()
parsed.cameras = {"_replay_front": new_camera_config}
parse_object.return_value = parsed
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
# Camera registered into the live config dict
self.assertIn("_replay_front", frigate_config.cameras)
# Publisher invoked with an add topic
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.add)
def test_publish_camera_wraps_parse_failure_in_runtime_error(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {"front": MagicMock()}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch(
"frigate.debug_replay.FrigateConfig.parse_object",
side_effect=ValueError("zone foo has invalid coordinates"),
),
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_cls.return_value.load.return_value = {"cameras": {}}
with self.assertRaises(RuntimeError) as ctx:
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
self.assertIn("replay camera config", str(ctx.exception))
self.assertIn("invalid coordinates", str(ctx.exception))
publisher.publish_update.assert_not_called()
if __name__ == "__main__":
unittest.main()


@ -0,0 +1,460 @@
"""Tests for the debug replay job runner and factory."""
import threading
import time
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
from frigate.debug_replay import DebugReplayManager
from frigate.jobs.debug_replay import (
DebugReplayJob,
cancel_debug_replay_job,
get_active_runner,
start_debug_replay_job,
)
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.manager import _completed_jobs, _current_jobs
from frigate.types import JobStatusTypesEnum
def _reset_job_manager() -> None:
"""Clear the global job manager state between tests."""
_current_jobs.clear()
_completed_jobs.clear()
def _patch_publisher(test_case: unittest.TestCase) -> None:
"""Replace JobStatePublisher.publish with a no-op to avoid hanging on IPC."""
publisher_patch = patch.object(
JobStatePublisher, "publish", lambda self, payload: None
)
publisher_patch.start()
test_case.addCleanup(publisher_patch.stop)
class TestDebugReplayJob(unittest.TestCase):
def test_default_fields(self) -> None:
job = DebugReplayJob()
self.assertEqual(job.job_type, "debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.queued)
self.assertIsNone(job.current_step)
self.assertEqual(job.progress_percent, 0.0)
def test_to_dict_whitelist(self) -> None:
job = DebugReplayJob(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
job.current_step = "preparing_clip"
job.progress_percent = 42.5
payload = job.to_dict()
# Top-level matches the standard Job<TResults> shape.
for key in (
"id",
"job_type",
"status",
"start_time",
"end_time",
"error_message",
"results",
):
self.assertIn(key, payload, f"missing top-level field: {key}")
results = payload["results"]
self.assertEqual(results["source_camera"], "front")
self.assertEqual(results["replay_camera_name"], "_replay_front")
self.assertEqual(results["current_step"], "preparing_clip")
self.assertEqual(results["progress_percent"], 42.5)
self.assertEqual(results["start_ts"], 100.0)
self.assertEqual(results["end_ts"], 200.0)
class TestStartDebugReplayJob(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def test_rejects_unknown_camera(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="missing",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_invalid_time_range(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=200.0,
end_ts=100.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_when_no_recordings(self) -> None:
empty_qs = MagicMock()
empty_qs.count.return_value = 0
with patch("frigate.jobs.debug_replay.query_recordings", return_value=empty_qs):
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_returns_job_id_and_marks_session_starting(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
job_id = start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertIsInstance(job_id, str)
self.assertTrue(self.manager.active)
self.assertEqual(self.manager.replay_camera_name, "_replay_front")
self.assertEqual(self.manager.source_camera, "front")
block.set()
def test_rejects_concurrent_calls(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
with self.assertRaises(RuntimeError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
block.set()
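The concurrent-call rejection asserted above is the classic single-flight guard: at most one active job, with a second start raising `RuntimeError` until the first releases. A minimal thread-safe sketch of the idea (hypothetical class, not the actual job manager):

```python
import threading


class SingleJobGuard:
    """Allow at most one active job; a second start raises RuntimeError."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._active = False

    def acquire(self) -> None:
        # Check-and-set under the lock so two racing starts can't both win.
        with self._lock:
            if self._active:
                raise RuntimeError("a debug replay job is already running")
            self._active = True

    def release(self) -> None:
        with self._lock:
            self._active = False
```

After `release()` (the runner finishing, failing, or being cancelled), a new start is allowed again, mirroring the behavior the failure-path test below also relies on.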
class TestRunnerHappyPath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_progress_callback_updates_job_percent(self) -> None:
        captured: list[str] = []
def fake_helper(cmd, *, on_progress=None, **kwargs):
on_progress(0.0)
on_progress(50.0)
on_progress(100.0)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch.object(
self.manager,
"publish_camera",
side_effect=lambda *a, **kw: captured.append("published"),
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.success)
self.assertEqual(job.progress_percent, 100.0)
self.assertEqual(captured, ["published"])
# Manager should have been told the session is ready with the clip path.
self.assertIsNotNone(self.manager.clip_path)
class TestRunnerFailurePath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_ffmpeg_failure_marks_job_failed_and_clears_session(self) -> None:
def failing_helper(cmd, **kwargs):
return 1, "ffmpeg exploded"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=failing_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.failed)
self.assertIsNotNone(job.error_message)
self.assertIn("ffmpeg", job.error_message.lower())
# Session cleared so a new /start is allowed
self.assertFalse(self.manager.active)
class TestRunnerCancellation(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_cancel_terminates_ffmpeg_and_marks_cancelled(self) -> None:
terminated = threading.Event()
fake_proc = MagicMock()
fake_proc.terminate = MagicMock(side_effect=lambda: terminated.set())
def fake_helper(cmd, *, process_started=None, **kwargs):
if process_started is not None:
process_started(fake_proc)
terminated.wait(timeout=5)
return -15, "killed"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
# Wait for the runner to register the active process.
self.assertTrue(
self._wait_for(
lambda: (
get_active_runner() is not None
and get_active_runner()._active_process is fake_proc
)
)
)
cancelled = cancel_debug_replay_job()
self.assertTrue(cancelled)
self.assertTrue(fake_proc.terminate.called)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.cancelled)
# Runner must not clear the manager session on cancellation —
# that belongs to the caller of cancel_debug_replay_job (stop()).
# If the runner cleared it, stop() would log "no active session"
# and skip its cleanup_db / cleanup_files calls.
self.assertTrue(self.manager.active)
if __name__ == "__main__":
unittest.main()

View File

@@ -1,6 +1,9 @@
"""Tests for export progress tracking, broadcast, and FFmpeg parsing."""
import io
import os
import shutil
import tempfile
import unittest
from unittest.mock import MagicMock, patch
@@ -11,6 +14,7 @@ from frigate.jobs.export import (
)
from frigate.record.export import PlaybackSourceEnum, RecordingExporter
from frigate.types import JobStatusTypesEnum
from frigate.util.ffmpeg import inject_progress_flags
def _make_exporter(
@@ -115,10 +119,9 @@ class TestExpectedOutputDuration(unittest.TestCase):
class TestProgressFlagInjection(unittest.TestCase):
def test_inserts_before_output_path(self) -> None:
exporter = _make_exporter()
cmd = ["ffmpeg", "-i", "input.m3u8", "-c", "copy", "/tmp/output.mp4"]
result = exporter._inject_progress_flags(cmd)
result = inject_progress_flags(cmd)
assert result == [
"ffmpeg",
@@ -133,8 +136,7 @@ class TestProgressFlagInjection(unittest.TestCase):
]
def test_handles_empty_cmd(self) -> None:
exporter = _make_exporter()
assert exporter._inject_progress_flags([]) == []
assert inject_progress_flags([]) == []
class TestFfmpegProgressParsing(unittest.TestCase):
@@ -164,7 +166,7 @@ class TestFfmpegProgressParsing(unittest.TestCase):
fake_proc.returncode = 0
fake_proc.wait = MagicMock(return_value=0)
with patch("frigate.record.export.sp.Popen", return_value=fake_proc):
with patch("frigate.util.ffmpeg.sp.Popen", return_value=fake_proc):
returncode, _stderr = exporter._run_ffmpeg_with_progress(
["ffmpeg", "-i", "x.m3u8", "/tmp/out.mp4"], "playlist", step="encoding"
)
@@ -363,6 +365,121 @@ class TestBroadcastAggregation(unittest.TestCase):
assert job.progress_percent == 33.0
class TestGetDatetimeFromTimestamp(unittest.TestCase):
"""Auto-generated export name should honor config.ui.timezone, not
fall back to the container's UTC clock when a timezone is configured.
"""
def test_uses_configured_ui_timezone(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = "America/New_York"
# 2025-01-15 12:00:00 UTC is 07:00:00 EST
assert exporter.get_datetime_from_timestamp(1736942400) == "2025-01-15 07:00:00"
def test_falls_back_to_local_when_timezone_unset(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = None
# No assertion on the exact wall-clock value — just confirm no
# exception and that pytz isn't required when the field is unset.
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
def test_invalid_timezone_falls_back_to_local(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = "Not/A_Real_Zone"
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
class TestSaveThumbnailFromPreviewFrames(unittest.TestCase):
"""Short exports in the current hour can fall between preview frame
writes (1-2 fps during activity, every 30s otherwise). When no frame
falls inside the export window, save_thumbnail should fall back to
the most recent prior frame instead of returning no thumbnail."""
def setUp(self) -> None:
self.tmp_root = tempfile.mkdtemp(prefix="frigate_thumb_test_")
self.preview_dir = os.path.join(self.tmp_root, "cache", "preview_frames")
self.export_clips = os.path.join(self.tmp_root, "clips", "export")
os.makedirs(self.preview_dir, exist_ok=True)
os.makedirs(self.export_clips, exist_ok=True)
def tearDown(self) -> None:
shutil.rmtree(self.tmp_root, ignore_errors=True)
def _write_frame(self, camera: str, frame_time: float) -> str:
path = os.path.join(self.preview_dir, f"preview_{camera}-{frame_time}.webp")
with open(path, "wb") as f:
f.write(b"fake-webp-bytes")
return path
def _make_short_current_hour_exporter(self) -> RecordingExporter:
# Use a "now-ish" timestamp so save_thumbnail's start-of-hour
# comparison takes the current-hour branch (preview frames).
import datetime
now = datetime.datetime.now(datetime.timezone.utc).timestamp()
exporter = _make_exporter()
exporter.export_id = "thumb_short"
exporter.start_time = now
exporter.end_time = now + 3
return exporter
def test_short_export_falls_back_to_prior_preview_frame(self) -> None:
exporter = self._make_short_current_hour_exporter()
# Most recent preview frame is 10s before the export window
prior = self._write_frame(exporter.camera, exporter.start_time - 10.0)
thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == thumb_target
assert os.path.isfile(thumb_target)
with open(thumb_target, "rb") as f, open(prior, "rb") as src:
assert f.read() == src.read()
def test_returns_empty_when_no_preview_frames_exist(self) -> None:
exporter = self._make_short_current_hour_exporter()
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == ""
def test_prefers_in_window_frame_over_prior_frame(self) -> None:
exporter = self._make_short_current_hour_exporter()
self._write_frame(exporter.camera, exporter.start_time - 10.0)
in_window = self._write_frame(exporter.camera, exporter.start_time + 1.0)
thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == thumb_target
with open(thumb_target, "rb") as f, open(in_window, "rb") as src:
assert f.read() == src.read()
class TestSchedulesCleanup(unittest.TestCase):
def test_schedule_job_cleanup_removes_after_delay(self) -> None:
config = MagicMock()
@@ -381,5 +498,56 @@ class TestSchedulesCleanup(unittest.TestCase):
assert job.id not in manager.jobs
class TestChapterMetadataInProgressReview(unittest.TestCase):
"""Regression: in-progress review segments have end_time=NULL until the
activity closes. The chapter builder must clamp the chapter end to the
last recorded second instead of crashing on float(None)."""
def _fake_select_returning(self, rows: list) -> MagicMock:
mock_query = MagicMock()
mock_query.where.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.iterator.return_value = iter(rows)
return mock_query
def test_in_progress_review_does_not_crash_and_clamps_to_last_recording(
self,
) -> None:
exporter = _make_exporter(end_minus_start=200)
# Recordings cover [1000, 1150]; export window is [1000, 1200] so
# the last recorded second is 1150 (a 50s gap at the tail).
recordings = [
MagicMock(start_time=1000.0, end_time=1150.0),
]
in_progress = MagicMock(
start_time=1100.0,
end_time=None,
severity="alert",
data={"objects": ["person"]},
)
with tempfile.TemporaryDirectory() as tmpdir:
chapter_path = os.path.join(tmpdir, "chapters.txt")
exporter._chapter_metadata_path = lambda: chapter_path # type: ignore[method-assign]
with patch(
"frigate.record.export.ReviewSegment.select",
return_value=self._fake_select_returning([in_progress]),
):
result = exporter._build_chapter_metadata_file(recordings)
assert result == chapter_path
with open(chapter_path) as f:
content = f.read()
# Output time is windows[-1][1] - windows[-1][0] = 150s.
# Review starts at wall=1100, output offset = 100s -> 100000ms.
# Clamped end = last_recorded_end (1150) -> output offset = 150s -> 150000ms.
assert "[CHAPTER]" in content
assert "START=100000" in content
assert "END=150000" in content
assert "title=Alert: person" in content
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,111 @@
"""Tests for the shared ffmpeg progress helper."""
import unittest
from unittest.mock import MagicMock, patch
from frigate.util.ffmpeg import inject_progress_flags, run_ffmpeg_with_progress
class TestInjectProgressFlags(unittest.TestCase):
def test_inserts_flags_before_output_path(self):
cmd = ["ffmpeg", "-i", "in.mp4", "-c", "copy", "out.mp4"]
result = inject_progress_flags(cmd)
self.assertEqual(
result,
[
"ffmpeg",
"-i",
"in.mp4",
"-c",
"copy",
"-progress",
"pipe:2",
"-nostats",
"out.mp4",
],
)
def test_empty_cmd_returns_empty(self):
self.assertEqual(inject_progress_flags([]), [])
class TestRunFfmpegWithProgress(unittest.TestCase):
def _make_fake_proc(self, stderr_lines, returncode=0):
proc = MagicMock()
proc.stderr = iter(stderr_lines)
proc.stdin = MagicMock()
proc.returncode = returncode
proc.wait = MagicMock()
return proc
def test_emits_percent_from_out_time_us_lines(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = [
"out_time_us=1000000\n",
"out_time_us=5000000\n",
"progress=end\n",
]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
returncode, _stderr = run_ffmpeg_with_progress(
["ffmpeg", "-i", "in", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
self.assertEqual(returncode, 0)
self.assertEqual(len(captured), 4) # initial 0.0 + two parsed + final 100.0
self.assertAlmostEqual(captured[0], 0.0)
self.assertAlmostEqual(captured[1], 10.0)
self.assertAlmostEqual(captured[2], 50.0)
self.assertAlmostEqual(captured[3], 100.0)
def test_passes_started_process_to_callback(self):
proc = self._make_fake_proc([])
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter([])
proc.stderr.read = MagicMock(return_value="")
seen: list = []
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=1.0,
process_started=lambda p: seen.append(p),
use_low_priority=False,
)
self.assertEqual(seen, [proc])
def test_clamps_percent_to_0_100(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = ["out_time_us=999999999999\n"]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
# initial 0.0 then a clamped reading
self.assertEqual(captured[-1], 100.0)

View File

@@ -0,0 +1,166 @@
"""Tests for WebSocket authorization checks."""
import unittest
from frigate.comms.ws import _check_ws_authorization
from frigate.const import INSERT_MANY_RECORDINGS, UPDATE_CAMERA_ACTIVITY
class TestCheckWsAuthorization(unittest.TestCase):
"""Tests for the _check_ws_authorization pure function."""
DEFAULT_SEPARATOR = ","
# --- IPC topic blocking (unconditional, regardless of role) ---
def test_ipc_topic_blocked_for_admin(self):
self.assertFalse(
_check_ws_authorization(
INSERT_MANY_RECORDINGS, "admin", self.DEFAULT_SEPARATOR
)
)
def test_ipc_topic_blocked_for_viewer(self):
self.assertFalse(
_check_ws_authorization(
UPDATE_CAMERA_ACTIVITY, "viewer", self.DEFAULT_SEPARATOR
)
)
def test_ipc_topic_blocked_when_no_role(self):
self.assertFalse(
_check_ws_authorization(
INSERT_MANY_RECORDINGS, None, self.DEFAULT_SEPARATOR
)
)
# --- Viewer allowed topics ---
def test_viewer_can_send_on_connect(self):
self.assertTrue(
_check_ws_authorization("onConnect", "viewer", self.DEFAULT_SEPARATOR)
)
def test_viewer_can_send_model_state(self):
self.assertTrue(
_check_ws_authorization("modelState", "viewer", self.DEFAULT_SEPARATOR)
)
def test_viewer_can_send_audio_transcription_state(self):
self.assertTrue(
_check_ws_authorization(
"audioTranscriptionState", "viewer", self.DEFAULT_SEPARATOR
)
)
def test_viewer_can_send_birdseye_layout(self):
self.assertTrue(
_check_ws_authorization("birdseyeLayout", "viewer", self.DEFAULT_SEPARATOR)
)
def test_viewer_can_send_embeddings_reindex_progress(self):
self.assertTrue(
_check_ws_authorization(
"embeddingsReindexProgress", "viewer", self.DEFAULT_SEPARATOR
)
)
# --- Viewer blocked from admin topics ---
def test_viewer_blocked_from_restart(self):
self.assertFalse(
_check_ws_authorization("restart", "viewer", self.DEFAULT_SEPARATOR)
)
def test_viewer_blocked_from_camera_detect_set(self):
self.assertFalse(
_check_ws_authorization(
"front_door/detect/set", "viewer", self.DEFAULT_SEPARATOR
)
)
def test_viewer_blocked_from_camera_ptz(self):
self.assertFalse(
_check_ws_authorization("front_door/ptz", "viewer", self.DEFAULT_SEPARATOR)
)
def test_viewer_blocked_from_global_notifications_set(self):
self.assertFalse(
_check_ws_authorization(
"notifications/set", "viewer", self.DEFAULT_SEPARATOR
)
)
def test_viewer_blocked_from_camera_notifications_suspend(self):
self.assertFalse(
_check_ws_authorization(
"front_door/notifications/suspend", "viewer", self.DEFAULT_SEPARATOR
)
)
def test_viewer_blocked_from_arbitrary_unknown_topic(self):
self.assertFalse(
_check_ws_authorization(
"some_random_topic", "viewer", self.DEFAULT_SEPARATOR
)
)
# --- Admin access ---
def test_admin_can_send_restart(self):
self.assertTrue(
_check_ws_authorization("restart", "admin", self.DEFAULT_SEPARATOR)
)
def test_admin_can_send_camera_detect_set(self):
self.assertTrue(
_check_ws_authorization(
"front_door/detect/set", "admin", self.DEFAULT_SEPARATOR
)
)
def test_admin_can_send_camera_ptz(self):
self.assertTrue(
_check_ws_authorization("front_door/ptz", "admin", self.DEFAULT_SEPARATOR)
)
# --- Comma-separated roles ---
def test_comma_separated_admin_viewer_grants_admin(self):
self.assertTrue(
_check_ws_authorization("restart", "admin,viewer", self.DEFAULT_SEPARATOR)
)
def test_comma_separated_viewer_admin_grants_admin(self):
self.assertTrue(
_check_ws_authorization("restart", "viewer,admin", self.DEFAULT_SEPARATOR)
)
def test_comma_separated_with_spaces(self):
self.assertTrue(
_check_ws_authorization("restart", "viewer, admin", self.DEFAULT_SEPARATOR)
)
# --- Custom separator ---
def test_pipe_separator(self):
self.assertTrue(_check_ws_authorization("restart", "viewer|admin", "|"))
def test_pipe_separator_no_admin(self):
self.assertFalse(_check_ws_authorization("restart", "viewer|editor", "|"))
# --- No role header (fail-closed) ---
def test_no_role_header_blocks_admin_topics(self):
self.assertFalse(
_check_ws_authorization("restart", None, self.DEFAULT_SEPARATOR)
)
def test_no_role_header_allows_viewer_topics(self):
self.assertTrue(
_check_ws_authorization("onConnect", None, self.DEFAULT_SEPARATOR)
)
if __name__ == "__main__":
unittest.main()

View File

@@ -2,8 +2,9 @@
import logging
import subprocess as sp
from typing import Any
from typing import Any, Callable, Optional
from frigate.const import PROCESS_PRIORITY_LOW
from frigate.log import LogPipe
@@ -46,3 +47,124 @@ def start_or_restart_ffmpeg(
start_new_session=True,
)
return process
logger = logging.getLogger(__name__)
def inject_progress_flags(cmd: list[str]) -> list[str]:
"""Insert `-progress pipe:2 -nostats` immediately before the output path.
`-progress pipe:2` writes structured key=value lines to stderr;
`-nostats` suppresses the noisy default stats output. The output path
is conventionally the last token in an FFmpeg argv.
"""
if not cmd:
return cmd
return cmd[:-1] + ["-progress", "pipe:2", "-nostats", cmd[-1]]
def run_ffmpeg_with_progress(
cmd: list[str],
*,
expected_duration_seconds: float,
on_progress: Optional[Callable[[float], None]] = None,
stdin_payload: Optional[str] = None,
process_started: Optional[Callable[[sp.Popen], None]] = None,
use_low_priority: bool = True,
) -> tuple[int, str]:
"""Run an ffmpeg command, streaming progress via `-progress pipe:2`.
Args:
cmd: ffmpeg argv. Output path must be the last token.
expected_duration_seconds: Duration of the expected output clip in
seconds. Used to convert ffmpeg's `out_time_us` into a percent.
on_progress: Optional callback invoked with a percent in [0, 100].
Called once with 0.0 at start, again on each `out_time_us=`
stderr line, and once with 100.0 on `progress=end`.
stdin_payload: Optional string written to ffmpeg stdin (used by
export for concat playlists).
        process_started: Optional callback invoked with the live `Popen`
            once it is spawned; lets callers store the ref for cancellation.
use_low_priority: When True, prepend `nice -n PROCESS_PRIORITY_LOW`
so concat doesn't starve detection.
Returns:
Tuple of `(returncode, captured_stderr)`. Stdout is left attached
to the parent process to avoid buffer-full deadlocks.
"""
full_cmd = inject_progress_flags(cmd)
if use_low_priority:
full_cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + full_cmd
def emit(percent: float) -> None:
if on_progress is None:
return
try:
on_progress(max(0.0, min(100.0, percent)))
except Exception:
logger.exception("FFmpeg progress callback failed")
emit(0.0)
proc = sp.Popen(
full_cmd,
stdin=sp.PIPE if stdin_payload is not None else None,
stderr=sp.PIPE,
text=True,
encoding="ascii",
errors="replace",
)
if process_started is not None:
try:
process_started(proc)
except Exception:
logger.exception("FFmpeg process_started callback failed")
if stdin_payload is not None and proc.stdin is not None:
try:
proc.stdin.write(stdin_payload)
except (BrokenPipeError, OSError):
pass
finally:
try:
proc.stdin.close()
except (BrokenPipeError, OSError):
pass
captured: list[str] = []
if proc.stderr is not None:
try:
for raw_line in proc.stderr:
captured.append(raw_line)
line = raw_line.strip()
if not line:
continue
if line.startswith("out_time_us="):
if expected_duration_seconds <= 0:
continue
try:
out_time_us = int(line.split("=", 1)[1])
except (ValueError, IndexError):
continue
if out_time_us < 0:
continue
out_seconds = out_time_us / 1_000_000.0
emit((out_seconds / expected_duration_seconds) * 100.0)
elif line == "progress=end":
emit(100.0)
break
except Exception:
logger.exception("Failed reading FFmpeg progress stream")
proc.wait()
if proc.stderr is not None:
try:
remaining = proc.stderr.read()
if remaining:
captured.append(remaining)
except Exception:
pass
return proc.returncode or 0, "".join(captured)

View File

@@ -1,88 +1,95 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rmuF9iKWTbdk"
},
"outputs": [],
"source": [
"! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NiRCt917KKcL"
},
"outputs": [],
"source": [
"! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/pretrained_models.py\n",
"! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dTB0jy_NNSFz"
},
"outputs": [],
"source": [
"from super_gradients.common.object_names import Models\n",
"from super_gradients.conversion import DetectionOutputFormatMode\n",
"from super_gradients.training import models\n",
"\n",
"model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GymUghyCNXem"
},
"outputs": [],
"source": [
"# export the model for compatibility with Frigate\n",
"\n",
"model.export(\"yolo_nas_s.onnx\",\n",
" output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
" max_predictions_per_image=20,\n",
" num_pre_nms_predictions=300,\n",
" confidence_threshold=0.4,\n",
" input_image_shape=(320,320),\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uBhXV5g4Nh42"
},
"outputs": [],
"source": [
"from google.colab import files\n",
"\n",
"files.download('yolo_nas_s.onnx')"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "runtime-notice"
   },
   "source": [
    "**Before running:** go to **Runtime → Change runtime type → Fallback runtime version: 2025.07** (Python 3.11). The current Colab default (Python 3.12+) is incompatible with `super-gradients`."
   ]
  },
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rmuF9iKWTbdk"
},
"outputs": [],
"source": [
"! pip install -q \"jedi>=0.16\"\n",
"! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NiRCt917KKcL"
},
"outputs": [],
"source": "! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/pretrained_models.py\n! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dTB0jy_NNSFz"
},
"outputs": [],
"source": [
"from super_gradients.common.object_names import Models\n",
"from super_gradients.conversion import DetectionOutputFormatMode\n",
"from super_gradients.training import models\n",
"\n",
"model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GymUghyCNXem"
},
"outputs": [],
"source": [
"# export the model for compatibility with Frigate\n",
"\n",
"model.export(\"yolo_nas_s.onnx\",\n",
" output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
" max_predictions_per_image=20,\n",
" num_pre_nms_predictions=300,\n",
" confidence_threshold=0.4,\n",
" input_image_shape=(320,320),\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uBhXV5g4Nh42"
},
"outputs": [],
"source": [
"from google.colab import files\n",
"\n",
"files.download('yolo_nas_s.onnx')"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@ -69,17 +69,18 @@ test.describe("Navigation — conditional items @critical", () => {
).toBeVisible();
});
test("/chat is hidden when genai.model is none (desktop)", async ({
test("/chat is hidden when no agent has the chat role (desktop)", async ({
frigateApp,
}) => {
test.skip(frigateApp.isMobile, "Desktop sidebar");
await frigateApp.installDefaults({
config: {
genai: {
enabled: false,
provider: "ollama",
model: "none",
base_url: "",
descriptions_only: {
provider: "ollama",
model: "llava",
roles: ["descriptions"],
},
},
},
});
@ -89,12 +90,20 @@ test.describe("Navigation — conditional items @critical", () => {
).toHaveCount(0);
});
test("/chat is visible when genai.model is set (desktop)", async ({
test("/chat is visible when an agent has the chat role (desktop)", async ({
frigateApp,
}) => {
test.skip(frigateApp.isMobile, "Desktop sidebar");
await frigateApp.installDefaults({
config: { genai: { enabled: true, model: "llava" } },
config: {
genai: {
chat_agent: {
provider: "ollama",
model: "llava",
roles: ["chat"],
},
},
},
});
await frigateApp.goto("/");
await expect(

View File

@ -31,7 +31,7 @@ test.describe("Replay — no active session @medium", () => {
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
name: /No Active Replay Session/i,
name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
const goButton = frigateApp.page.getByRole("button", {
@ -48,7 +48,7 @@ test.describe("Replay — no active session @medium", () => {
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
name: /No Active Replay Session/i,
name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
await frigateApp.page
@ -297,7 +297,7 @@ test.describe("Replay — mobile @medium @mobile", () => {
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
name: /No Active Replay Session/i,
name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
});

View File

@ -19,26 +19,31 @@
"startLabel": "Start",
"endLabel": "End",
"toast": {
"success": "Debug replay started successfully",
"error": "Failed to start debug replay: {{error}}",
"alreadyActive": "A replay session is already active",
"stopped": "Debug replay stopped",
"stopError": "Failed to stop debug replay: {{error}}",
"goToReplay": "Go to Replay"
}
},
"page": {
"noSession": "No Active Replay Session",
"noSessionDesc": "Start a debug replay from the History view by clicking the Debug Replay button in the toolbar.",
"noSession": "No Active Debug Replay Session",
"noSessionDesc": "Start a Debug Replay from History view by clicking the Actions button in the toolbar and choosing Debug Replay.",
"goToRecordings": "Go to History",
"preparingClip": "Preparing clip…",
"preparingClipDesc": "Frigate is stitching together recordings for the selected time range. This can take a minute for longer ranges.",
"startingCamera": "Starting Debug Replay…",
"startError": {
"title": "Failed to start Debug Replay",
"back": "Back to History"
},
"sourceCamera": "Source Camera",
"replayCamera": "Replay Camera",
"initializingReplay": "Initializing replay...",
"stoppingReplay": "Stopping replay...",
"initializingReplay": "Initializing Debug Replay...",
"stoppingReplay": "Stopping Debug Replay...",
"stopReplay": "Stop Replay",
"confirmStop": {
"title": "Stop Debug Replay?",
"description": "This will stop the replay session and clean up all temporary data. Are you sure?",
"description": "This will stop the session and clean up all temporary data. Are you sure?",
"confirm": "Stop Replay",
"cancel": "Cancel"
},
@ -49,6 +54,6 @@
"activeTracking": "Active tracking",
"noActiveTracking": "No active tracking",
"configuration": "Configuration",
"configurationDesc": "Fine tune motion detection and object tracking settings for the debug replay camera. No changes are saved to your Frigate configuration file."
"configurationDesc": "Fine tune motion detection and object tracking settings for the Debug Replay camera. No changes are saved to your Frigate configuration file."
}
}

View File

@ -93,6 +93,14 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
useSWR<ProfilesApiResponse>("profiles");
const logoutUrl = config?.proxy?.logout_url || "/api/logout";
const hasChatAgent = useMemo(
() =>
Object.values(config?.genai ?? {}).some((agent) =>
agent?.roles?.includes("chat"),
),
[config?.genai],
);
// languages
const languages = useMemo(() => {
@ -511,7 +519,7 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
<span>{t("menu.classification")}</span>
</MenuItem>
</Link>
{config?.genai?.model !== "none" && (
{hasChatAgent && (
<Link to="/chat">
<MenuItem
className="flex w-full items-center p-2 text-sm"

View File

@ -98,10 +98,7 @@ export default function SearchResultActions({
end_time: event.end_time,
})
.then((response) => {
if (response.status === 200) {
toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
position: "top-center",
});
if (response.status === 202 || response.status === 200) {
navigate("/replay");
}
})

View File

@ -209,10 +209,7 @@ export default function DebugReplayDialog({
end_time: range.before,
})
.then((response) => {
if (response.status === 200) {
toast.success(t("dialog.toast.success"), {
position: "top-center",
});
if (response.status === 202 || response.status === 200) {
setMode("none");
setRange(undefined);
navigate("/replay");

View File

@ -262,10 +262,7 @@ export default function MobileReviewSettingsDrawer({
end_time: debugReplayRange.before,
});
if (response.status === 200) {
toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
position: "top-center",
});
if (response.status === 202 || response.status === 200) {
setDebugReplayMode("none");
setDebugReplayRange(undefined);
setDrawerMode("none");

View File

@ -1,4 +1,6 @@
import { useCallback, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { flushSync } from "react-dom";
import { throttle } from "lodash";
import { Slider } from "@/components/ui/slider";
import { Button } from "@/components/ui/button";
import { Popover, PopoverContent, PopoverTrigger } from "../../ui/popover";
@ -19,11 +21,21 @@ import { useIsAdmin } from "@/hooks/use-is-admin";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { Link } from "react-router-dom";
const SLIDER_DRAG_THROTTLE_MS = 80;
type Props = {
className?: string;
// Optional side-effect invoked atomically with setAnnotationOffset (inside
// flushSync) so callers like the timeline panel can re-seek the video in the
// same React commit as the offset state update — preventing a one-frame
// overlay mismatch where annotationOffset has changed but currentTime has not.
onApplyOffset?: (newOffset: number) => void;
};
export default function AnnotationOffsetSlider({ className }: Props) {
export default function AnnotationOffsetSlider({
className,
onApplyOffset,
}: Props) {
const { annotationOffset, setAnnotationOffset, camera } = useDetailStream();
const isAdmin = useIsAdmin();
const { getLocaleDocUrl } = useDocDomain();
@ -31,31 +43,62 @@ export default function AnnotationOffsetSlider({ className }: Props) {
const { t } = useTranslation(["views/explore"]);
const [isSaving, setIsSaving] = useState(false);
const applyOffset = useCallback(
(newOffset: number) => {
flushSync(() => {
setAnnotationOffset(newOffset);
onApplyOffset?.(newOffset);
});
},
[setAnnotationOffset, onApplyOffset],
);
const throttledApplyOffset = useMemo(
() =>
throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
leading: true,
trailing: true,
}),
[applyOffset],
);
useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);
const handleChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
const valueMs = values[0];
setAnnotationOffset(valueMs);
throttledApplyOffset(values[0]);
},
[setAnnotationOffset],
[throttledApplyOffset],
);
const handleCommit = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
// Ensure the final value lands even if it would otherwise be discarded
// by the trailing edge of the throttle window.
throttledApplyOffset.cancel();
applyOffset(values[0]);
},
[throttledApplyOffset, applyOffset],
);
const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, next),
);
});
const next = Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
);
throttledApplyOffset.cancel();
applyOffset(next);
},
[setAnnotationOffset],
[annotationOffset, applyOffset, throttledApplyOffset],
);
const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
throttledApplyOffset.cancel();
applyOffset(0);
}, [applyOffset, throttledApplyOffset]);
const save = useCallback(async () => {
setIsSaving(true);
@ -130,6 +173,7 @@ export default function AnnotationOffsetSlider({ className }: Props) {
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleChange}
onValueCommit={handleCommit}
/>
</div>
<Button
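The drag-throttle plus cancel-on-commit pattern used by both annotation sliders above can be sketched without React or lodash. The `throttle` below is a simplified stand-in for lodash's (leading and trailing edges enabled, as in the diff); `flushSync` is omitted since it needs a React tree, but the commit path shows why the trailing edge must be cancelled before the final value is applied:

```typescript
// Simplified stand-in for lodash.throttle with leading + trailing edges
// and a .cancel() that drops any pending trailing call.
function throttle<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let last = 0;
  let timer: ReturnType<typeof setTimeout> | null = null;
  let pending: A | null = null;
  const invoke = (args: A) => {
    last = Date.now();
    fn(...args);
  };
  const run = (...args: A) => {
    const remaining = waitMs - (Date.now() - last);
    if (remaining <= 0) {
      invoke(args); // leading edge: fire immediately
    } else if (timer == null) {
      pending = args;
      timer = setTimeout(() => {
        timer = null;
        if (pending != null) {
          const a = pending;
          pending = null;
          invoke(a); // trailing edge: fire the last queued value
        }
      }, remaining);
    } else {
      pending = args; // newer value replaces the queued trailing call
    }
  };
  return Object.assign(run, {
    cancel() {
      if (timer != null) clearTimeout(timer);
      timer = null;
      pending = null;
    },
  });
}

// During a drag, intermediate values are rate-limited; on commit the
// throttle is cancelled and the final value is applied directly, so a
// stale trailing call can never overwrite the committed value.
const applied: number[] = [];
const applyOffset = (v: number) => applied.push(v);
const throttledApply = throttle(applyOffset, 80);

throttledApply(10); // leading edge fires immediately
throttledApply(20); // queued for the trailing edge
throttledApply.cancel(); // commit path: drop the stale trailing value
applyOffset(30); // apply the final slider value directly
```

Without the `cancel()` in the commit handler, the trailing call for `20` could land after the committed `30`, which is exactly the race the `onValueCommit` handlers in the diff guard against.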

View File

@ -1,7 +1,9 @@
import { Event } from "@/types/event";
import { FrigateConfig } from "@/types/frigateConfig";
import axios from "axios";
import { useCallback, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { flushSync } from "react-dom";
import { throttle } from "lodash";
import { LuExternalLink, LuMinus, LuPlus } from "react-icons/lu";
import { Link } from "react-router-dom";
import { toast } from "sonner";
@ -19,6 +21,8 @@ import {
ANNOTATION_OFFSET_STEP,
} from "@/lib/const";
const SLIDER_DRAG_THROTTLE_MS = 80;
type AnnotationSettingsPaneProps = {
event: Event;
annotationOffset: number;
@ -38,30 +42,64 @@ export function AnnotationSettingsPane({
const [isLoading, setIsLoading] = useState(false);
const handleSliderChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
setAnnotationOffset(values[0]);
},
[setAnnotationOffset],
);
const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, next),
);
// flushSync ensures setAnnotationOffset commits synchronously so the
// useLayoutEffect in TrackingDetails (which seeks the video and sets
// currentTime in response) runs before the browser paints — preventing a
// one-frame overlay mismatch where annotationOffset has changed but
// currentTime has not.
const applyOffset = useCallback(
(newOffset: number) => {
flushSync(() => {
setAnnotationOffset(newOffset);
});
},
[setAnnotationOffset],
);
const throttledApplyOffset = useMemo(
() =>
throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
leading: true,
trailing: true,
}),
[applyOffset],
);
useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);
const handleSliderChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
throttledApplyOffset(values[0]);
},
[throttledApplyOffset],
);
const handleSliderCommit = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
throttledApplyOffset.cancel();
applyOffset(values[0]);
},
[throttledApplyOffset, applyOffset],
);
const stepOffset = useCallback(
(delta: number) => {
const next = Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
);
throttledApplyOffset.cancel();
applyOffset(next);
},
[annotationOffset, applyOffset, throttledApplyOffset],
);
const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
throttledApplyOffset.cancel();
applyOffset(0);
}, [applyOffset, throttledApplyOffset]);
const saveToConfig = useCallback(async () => {
if (!config || !event) return;
@ -143,6 +181,7 @@ export function AnnotationSettingsPane({
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleSliderChange}
onValueCommit={handleSliderCommit}
className="flex-1"
/>
<Button

View File

@ -73,7 +73,7 @@ export default function DetailActionsMenu({
}
return (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

View File

@ -957,8 +957,9 @@ function ObjectDetailsTab({
toast.success(
t("details.item.toast.success.regenerate", {
provider: capitalizeAll(
config?.genai.provider.replaceAll("_", " ") ??
t("generativeAI"),
Object.values(config?.genai ?? {})
.find((agent) => agent?.roles?.includes("descriptions"))
?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
}),
{
@ -976,8 +977,9 @@ function ObjectDetailsTab({
toast.error(
t("details.item.toast.error.regenerate", {
provider: capitalizeAll(
config?.genai.provider.replaceAll("_", " ") ??
t("generativeAI"),
Object.values(config?.genai ?? {})
.find((agent) => agent?.roles?.includes("descriptions"))
?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
errorMessage,
}),

View File

@ -1,5 +1,13 @@
import useSWR from "swr";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import {
useCallback,
useEffect,
useLayoutEffect,
useMemo,
useRef,
useState,
} from "react";
import { flushSync } from "react-dom";
import { useResizeObserver } from "@/hooks/resize-observer";
import { useFullscreen } from "@/hooks/use-fullscreen";
import { Event } from "@/types/event";
@ -389,7 +397,12 @@ export function TrackingDetails({
// When the pinned timestamp or offset changes, re-seek the video and
// explicitly update currentTime so the overlay shows the pinned event's box.
useEffect(() => {
// useLayoutEffect + flushSync force the setCurrentTime commit to land before
// the browser paints, so the overlay never shows a frame where
// annotationOffset has changed but currentTime has not — that mismatch would
// resolve effectiveCurrentTime away from the pinned detect timestamp and
// make the bounding box disappear or jump for one frame.
useLayoutEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!isAnnotationSettingsOpen || pinned == null) return;
if (!videoRef.current || displaySource !== "video") return;
@ -398,10 +411,9 @@ export function TrackingDetails({
const relativeTime = timestampToVideoTime(targetTimeRecord);
videoRef.current.currentTime = relativeTime;
// Explicitly update currentTime state so the overlay's effectiveCurrentTime
// resolves back to the pinned detect timestamp:
// effectiveCurrentTime = targetTimeRecord - annotationOffset/1000 = pinned
setCurrentTime(targetTimeRecord);
flushSync(() => {
setCurrentTime(targetTimeRecord);
});
}, [
isAnnotationSettingsOpen,
annotationOffset,
@ -1204,7 +1216,11 @@ function LifecycleIconRow({
<div className="flex flex-row items-center gap-3">
<div className="whitespace-nowrap">{formattedEventTimestamp}</div>
{isAdmin && (config?.plus?.enabled || item.data.box) && (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu
modal={false}
open={isOpen}
onOpenChange={setIsOpen}
>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

View File

@ -126,13 +126,20 @@ export default function DetailStream({
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [controlsExpanded]);
// Re-seek on annotation offset change while settings panel is open
useEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!controlsExpanded || pinned == null) return;
const recordTime = pinned + annotationOffset / 1000;
onSeek(recordTime, false);
}, [controlsExpanded, annotationOffset, onSeek]);
// The slider invokes this atomically with setAnnotationOffset (inside the
// same flushSync) so currentTime advances in the same React commit as the
// offset. Without this, the overlay would render one frame with the new
// offset but the old currentTime, briefly resolving effectiveCurrentTime to
// the wrong detect-stream timestamp and making the bounding box vanish or
// jump.
const handleApplyOffset = useCallback(
(newOffset: number) => {
const pinned = pinnedDetectTimestampRef.current;
if (!controlsExpanded || pinned == null) return;
onSeek(pinned + newOffset / 1000, false);
},
[controlsExpanded, onSeek],
);
// Ensure we initialize the active review when reviewItems first arrive.
// This helps when the component mounts while the video is already
@ -337,7 +344,7 @@ export default function DetailStream({
</button>
{controlsExpanded && (
<div className="space-y-4 px-3 pb-5 pt-2">
<AnnotationOffsetSlider />
<AnnotationOffsetSlider onApplyOffset={handleApplyOffset} />
<Separator />
<div className="flex flex-col gap-1">
<div className="flex items-center justify-between">

View File

@ -61,10 +61,7 @@ export default function EventMenu({
end_time: event.end_time,
})
.then((response) => {
if (response.status === 200) {
toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
position: "top-center",
});
if (response.status === 202 || response.status === 200) {
navigate("/replay");
}
})
@ -106,7 +103,7 @@ export default function EventMenu({
return (
<>
<span tabIndex={0} className="sr-only" />
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

View File

@ -28,6 +28,14 @@ export default function useNavigation(
});
const isAdmin = useIsAdmin();
const hasChatAgent = useMemo(
() =>
Object.values(config?.genai ?? {}).some((agent) =>
agent?.roles?.includes("chat"),
),
[config?.genai],
);
return useMemo(
() =>
[
@ -89,9 +97,9 @@ export default function useNavigation(
icon: MdChat,
title: "menu.chat",
url: "/chat",
enabled: isDesktop && isAdmin && config?.genai?.model !== "none",
enabled: isDesktop && isAdmin && hasChatAgent,
},
] as NavData[],
[config?.face_recognition?.enabled, config?.genai?.model, variant, isAdmin],
[config?.face_recognition?.enabled, hasChatAgent, variant, isAdmin],
);
}

View File

@ -42,7 +42,9 @@ import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { getIconForLabel } from "@/utils/iconUtil";
import { getTranslatedLabel } from "@/utils/i18n";
import { Card } from "@/components/ui/card";
import { Progress } from "@/components/ui/progress";
import { ObjectType } from "@/types/ws";
import { useJobStatus } from "@/api/ws";
import WsMessageFeed from "@/components/ws/WsMessageFeed";
import { ConfigSectionTemplate } from "@/components/config-form/sections/ConfigSectionTemplate";
@ -53,6 +55,7 @@ import { isDesktop, isMobile } from "react-device-detect";
import Logo from "@/components/Logo";
import { Separator } from "@/components/ui/separator";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { useConfigSchema } from "@/hooks/use-config-schema";
import DebugDrawingLayer from "@/components/overlay/DebugDrawingLayer";
import { IoMdArrowRoundBack } from "react-icons/io";
@ -65,6 +68,15 @@ type DebugReplayStatus = {
live_ready: boolean;
};
type DebugReplayJobResults = {
current_step: "preparing_clip" | "starting_camera" | null;
progress_percent: number | null;
source_camera: string | null;
replay_camera_name: string | null;
start_ts: number | null;
end_ts: number | null;
};
type DebugOptions = {
bbox: boolean;
timestamp: boolean;
@ -105,8 +117,6 @@ const DEBUG_OPTION_I18N_KEY: Record<keyof DebugOptions, string> = {
paths: "paths",
};
const REPLAY_INIT_SKELETON_TIMEOUT_MS = 8000;
export default function Replay() {
const { t } = useTranslation(["views/replay", "views/settings", "common"]);
const navigate = useNavigate();
@ -119,6 +129,9 @@ export default function Replay() {
} = useSWR<DebugReplayStatus>("debug_replay/status", {
refreshInterval: 1000,
});
const { payload: replayJob } =
useJobStatus<DebugReplayJobResults>("debug_replay");
const configSchema = useConfigSchema();
const [isInitializing, setIsInitializing] = useState(true);
// Refresh status immediately on mount to avoid showing "no session" briefly
@ -130,12 +143,6 @@ export default function Replay() {
initializeStatus();
}, [refreshStatus]);
useEffect(() => {
if (status?.live_ready) {
setShowReplayInitSkeleton(false);
}
}, [status?.live_ready]);
const [options, setOptions] = useState<DebugOptions>(DEFAULT_OPTIONS);
const [isStopping, setIsStopping] = useState(false);
const [configDialogOpen, setConfigDialogOpen] = useState(false);
@ -160,11 +167,7 @@ export default function Replay() {
axios
.post("debug_replay/stop")
.then(() => {
toast.success(t("dialog.toast.stopped"), {
position: "top-center",
});
refreshStatus();
navigate("/review");
})
.catch((error) => {
const errorMessage =
@ -178,7 +181,7 @@ export default function Replay() {
.finally(() => {
setIsStopping(false);
});
}, [navigate, refreshStatus, t]);
}, [refreshStatus, t]);
// Camera activity for the replay camera
const { data: config } = useSWR<FrigateConfig>("config", {
@ -191,35 +194,10 @@ export default function Replay() {
const { objects } = useCameraActivity(replayCameraConfig);
const [showReplayInitSkeleton, setShowReplayInitSkeleton] = useState(false);
// debug draw
const containerRef = useRef<HTMLDivElement>(null);
const [debugDraw, setDebugDraw] = useState(false);
useEffect(() => {
if (!status?.active || !status.replay_camera) {
setShowReplayInitSkeleton(false);
return;
}
setShowReplayInitSkeleton(true);
const timeout = window.setTimeout(() => {
setShowReplayInitSkeleton(false);
}, REPLAY_INIT_SKELETON_TIMEOUT_MS);
return () => {
window.clearTimeout(timeout);
};
}, [status?.active, status?.replay_camera]);
useEffect(() => {
if (status?.live_ready) {
setShowReplayInitSkeleton(false);
}
}, [status?.live_ready]);
// Format time range for display
const timeRangeDisplay = useMemo(() => {
if (!status?.start_time || !status?.end_time) return "";
@ -237,8 +215,39 @@ export default function Replay() {
);
}
// No active session
if (!status?.active) {
// Startup error (job failed). Only show when status.active is also true so
// we don't surface stale failed jobs after a session ended cleanly.
if (replayJob?.status === "failed" && status?.active) {
return (
<div className="flex size-full flex-col items-center justify-center gap-4 p-8">
<Heading as="h2" className="text-center">
{t("page.startError.title")}
</Heading>
{replayJob.error_message && (
<p className="max-w-xl text-center text-sm text-muted-foreground">
{replayJob.error_message}
</p>
)}
<Button
variant="default"
onClick={() => {
axios
.post("debug_replay/stop")
.catch(() => {})
.finally(() => navigate("/review"));
}}
>
{t("page.startError.back")}
</Button>
</div>
);
}
// No active session. Also covers the brief window between the runner
// pushing job.status = "cancelled" via WS and the next SWR refresh
// flipping status.active to false — without this, render falls through
// to the full replay UI and you see a flash of it before stop completes.
if (!status?.active || replayJob?.status === "cancelled") {
return (
<div className="flex size-full flex-col items-center justify-center gap-4 p-8">
<MdReplay className="size-12" />
@ -255,6 +264,52 @@ export default function Replay() {
);
}
// Startup in progress (job is running). The session is active but the
// replay camera isn't ready yet; show progress / phase from the job.
const startupStep =
replayJob?.status === "running"
? (replayJob.results?.current_step ?? null)
: null;
if (startupStep === "preparing_clip" || startupStep === "starting_camera") {
const phaseTitle =
startupStep === "preparing_clip"
? t("page.preparingClip")
: t("page.startingCamera");
const progressPercent = replayJob?.results?.progress_percent ?? null;
const showProgressBar =
startupStep === "preparing_clip" && progressPercent != null;
return (
<div className="flex size-full flex-col items-center justify-center gap-4 p-8">
{showProgressBar ? (
<div className="flex w-64 flex-col items-center gap-2">
<Progress value={progressPercent ?? 0} />
<div className="text-xs text-muted-foreground">
{Math.round(progressPercent ?? 0)}%
</div>
</div>
) : (
<ActivityIndicator className="size-8" />
)}
<Heading as="h3" className="text-center">
{phaseTitle}
</Heading>
{startupStep === "preparing_clip" && (
<p className="max-w-md text-center text-sm text-muted-foreground">
{t("page.preparingClipDesc")}
</p>
)}
<Button
variant="outline"
size="sm"
disabled={isStopping}
onClick={handleStop}
>
{t("button.cancel", { ns: "common" })}
</Button>
</div>
);
}
return (
<div className="flex size-full flex-col overflow-hidden">
<Toaster position="top-center" closeButton={true} />
@ -345,27 +400,30 @@ export default function Replay() {
) : (
status.replay_camera && (
<div className="relative size-full min-h-10" ref={containerRef}>
<AutoUpdatingCameraImage
className="size-full"
cameraClasses="relative w-full h-full flex flex-col justify-start"
searchParams={searchParams}
camera={status.replay_camera}
showFps={false}
/>
{debugDraw && (
<DebugDrawingLayer
containerRef={containerRef}
cameraWidth={
config?.cameras?.[status.source_camera ?? ""]?.detect
.width ?? 1280
}
cameraHeight={
config?.cameras?.[status.source_camera ?? ""]?.detect
.height ?? 720
}
/>
)}
{showReplayInitSkeleton && (
{status.live_ready ? (
<>
<AutoUpdatingCameraImage
className="size-full"
cameraClasses="relative w-full h-full flex flex-col justify-start"
searchParams={searchParams}
camera={status.replay_camera}
showFps={false}
/>
{debugDraw && (
<DebugDrawingLayer
containerRef={containerRef}
cameraWidth={
config?.cameras?.[status.source_camera ?? ""]?.detect
.width ?? 1280
}
cameraHeight={
config?.cameras?.[status.source_camera ?? ""]?.detect
.height ?? 720
}
/>
)}
</>
) : (
<div className="pointer-events-none absolute inset-0 z-10 size-full rounded-lg bg-background">
<Skeleton className="size-full rounded-lg" />
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center gap-2">
@ -595,32 +653,38 @@ export default function Replay() {
{t("page.configurationDesc")}
</DialogDescription>
</DialogHeader>
<div className="space-y-6">
<ConfigSectionTemplate
sectionKey="motion"
level="replay"
cameraName={status.replay_camera ?? undefined}
skipSave
noStickyButtons
requiresRestart={false}
collapsible
defaultCollapsed={false}
showTitle
showOverrideIndicator={false}
/>
<ConfigSectionTemplate
sectionKey="objects"
level="replay"
cameraName={status.replay_camera ?? undefined}
skipSave
noStickyButtons
requiresRestart={false}
collapsible
defaultCollapsed={false}
showTitle
showOverrideIndicator={false}
/>
</div>
{configSchema == null ? (
<div className="flex h-40 items-center justify-center">
<ActivityIndicator />
</div>
) : (
<div className="space-y-6">
<ConfigSectionTemplate
sectionKey="motion"
level="replay"
cameraName={status.replay_camera ?? undefined}
skipSave
noStickyButtons
requiresRestart={false}
collapsible
defaultCollapsed={false}
showTitle
showOverrideIndicator={false}
/>
<ConfigSectionTemplate
sectionKey="objects"
level="replay"
cameraName={status.replay_camera ?? undefined}
skipSave
noStickyButtons
requiresRestart={false}
collapsible
defaultCollapsed={false}
showTitle
showOverrideIndicator={false}
/>
</div>
)}
</DialogContent>
</Dialog>
</div>
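The render branching above (failed job, cancelled job, startup steps, then the live UI) can be distilled into a pure function. This is a sketch of the decision order with simplified types, not the component's actual code:

```typescript
type JobStatus = "running" | "failed" | "cancelled" | "succeeded";
type StartupStep = "preparing_clip" | "starting_camera" | null;

type ReplayPhase =
  | "start_error"
  | "no_session"
  | "preparing_clip"
  | "starting_camera"
  | "replay_ui";

// Mirrors the render order in the Replay view: a failed job only counts
// while the session is still active (so stale failures don't resurface),
// and a cancelled job short-circuits to "no session" before the next SWR
// refresh flips status.active to false.
function replayPhase(
  statusActive: boolean,
  job: { status: JobStatus; currentStep?: StartupStep } | null,
): ReplayPhase {
  if (job?.status === "failed" && statusActive) return "start_error";
  if (!statusActive || job?.status === "cancelled") return "no_session";
  const step = job && job.status === "running" ? (job.currentStep ?? null) : null;
  if (step === "preparing_clip" || step === "starting_camera") return step;
  return "replay_ui";
}

console.log(replayPhase(true, { status: "failed" })); // → "start_error"
console.log(replayPhase(true, { status: "cancelled" })); // → "no_session"
console.log(replayPhase(true, { status: "running", currentStep: "preparing_clip" })); // → "preparing_clip"
console.log(replayPhase(true, null)); // → "replay_ui"
```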

View File

@ -382,6 +382,18 @@ export type AllGroupsStreamingSettings = {
[groupName: string]: GroupStreamingSettings;
};
export type GenAIRole = "chat" | "descriptions" | "embeddings";
export type GenAIAgentConfig = {
api_key?: string;
base_url?: string;
model: string;
provider?: string;
roles: GenAIRole[];
provider_options?: Record<string, unknown>;
runtime_options?: Record<string, unknown>;
};
export interface FrigateConfig {
version: string;
safe_mode: boolean;
@ -478,12 +490,7 @@ export interface FrigateConfig {
retry_interval: number;
};
genai: {
provider: string;
base_url?: string;
api_key?: string;
model: string;
};
genai: Record<string, GenAIAgentConfig>;
go2rtc: {
streams: Record<string, string | string[]>;
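The switch from a single `genai` object to a keyed record of agents is what the `hasChatAgent` memos elsewhere in this changeset rely on. A minimal sketch of that predicate against the new shape (the agent names and a trimmed-down `GenAIAgentConfig` are illustrative):

```typescript
type GenAIRole = "chat" | "descriptions" | "embeddings";

type GenAIAgentConfig = {
  model: string;
  provider?: string;
  roles: GenAIRole[];
};

// The config's genai section is now a record of named agents.
const genai: Record<string, GenAIAgentConfig> = {
  descriptions_only: { provider: "ollama", model: "llava", roles: ["descriptions"] },
  chat_agent: { provider: "ollama", model: "llava", roles: ["chat"] },
};

// Same shape as the hasChatAgent memo: does any agent hold the role?
const hasRole = (
  agents: Record<string, GenAIAgentConfig> | undefined,
  role: GenAIRole,
) => Object.values(agents ?? {}).some((agent) => agent?.roles?.includes(role));

console.log(hasRole(genai, "chat")); // → true
console.log(hasRole({ descriptions_only: genai.descriptions_only }, "chat")); // → false
console.log(hasRole(undefined, "chat")); // → false
```

This is why the navigation and settings menus stopped checking `config?.genai?.model !== "none"`: with multiple agents, visibility is a question of roles, not of a single model field.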

View File

@ -38,6 +38,22 @@ export function getChunkedTimeDay(timeRange: TimeRange): TimeRange[] {
return data;
}
/**
* Find the chunk index that contains the given timestamp.
* Uses half-open intervals [after, before) for all chunks except the last,
* which uses a closed interval [after, before] so the terminal boundary
* is always reachable.
*/
export function findChunkIndex(chunks: TimeRange[], timestamp: number): number {
return chunks.findIndex((chunk, i) => {
const isLast = i === chunks.length - 1;
return (
chunk.after <= timestamp &&
(isLast ? chunk.before >= timestamp : chunk.before > timestamp)
);
});
}
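The boundary behavior of `findChunkIndex` is easy to check in isolation; a self-contained sketch with the `TimeRange` type reduced to its two fields:

```typescript
type TimeRange = { after: number; before: number };

// Same logic as the helper above: half-open [after, before) for every
// chunk except the last, which is closed [after, before] so the terminal
// boundary is still reachable.
function findChunkIndex(chunks: TimeRange[], timestamp: number): number {
  return chunks.findIndex((chunk, i) => {
    const isLast = i === chunks.length - 1;
    return (
      chunk.after <= timestamp &&
      (isLast ? chunk.before >= timestamp : chunk.before > timestamp)
    );
  });
}

const chunks: TimeRange[] = [
  { after: 0, before: 600 },
  { after: 600, before: 1200 },
];

// A shared boundary resolves to exactly one chunk (the later one),
// which the old inclusive-on-both-ends findIndex did not guarantee.
console.log(findChunkIndex(chunks, 600)); // → 1
// The terminal boundary is reachable because the last chunk is closed.
console.log(findChunkIndex(chunks, 1200)); // → 1
console.log(findChunkIndex(chunks, 1201)); // → -1
```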
export function getChunkedTimeRange(
startTimestamp: number,
endTimestamp: number,

View File

@ -26,7 +26,7 @@ import {
ReviewSummary,
ZoomLevel,
} from "@/types/review";
import { getChunkedTimeDay } from "@/utils/timelineUtil";
import { findChunkIndex, getChunkedTimeDay } from "@/utils/timelineUtil";
import {
MutableRefObject,
useCallback,
@ -169,9 +169,7 @@ export function RecordingView({
[timeRange],
);
const [selectedRangeIdx, setSelectedRangeIdx] = useState(
chunkedTimeRange.findIndex((chunk) => {
return chunk.after <= startTime && chunk.before >= startTime;
}),
findChunkIndex(chunkedTimeRange, startTime),
);
const currentTimeRange = useMemo<TimeRange>(
() =>
@ -274,9 +272,7 @@ export function RecordingView({
const updateSelectedSegment = useCallback(
(currentTime: number, updateStartTime: boolean) => {
const index = chunkedTimeRange.findIndex(
(seg) => seg.after <= currentTime && seg.before >= currentTime,
);
const index = findChunkIndex(chunkedTimeRange, currentTime);
if (index != -1) {
if (updateStartTime) {