mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-05-03 20:17:42 +03:00
Merge remote-tracking branch 'origin/master' into dev
This commit is contained in: commit 6a2b914b10
@ -19,7 +19,7 @@ Face recognition requires a one-time internet connection to download detection a
### Face Detection
When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/index.md#available-label-types), either globally or for a specific camera. This allows face detection to run at the same time as object detection and be more efficient.
When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
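As an illustrative sketch (the objects listed besides `face` are examples, not requirements), adding `face` to the tracked-objects list might look like:

```yaml
objects:
  track:
    - person
    - face
```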
@ -494,7 +494,7 @@ detectors:
| [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| [YOLOX](#yolox) | ✅ | ? | |
| [D-FINE / DEIMv2](#d-fine--deimv2) | ❌ | ❌ | |
#### SSDLite MobileNet v2
@ -710,13 +710,13 @@ model:

</details>
#### D-FINE / DEIMv2
[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
:::warning
Currently, D-FINE / DEIMv2 models only run on OpenVINO in CPU mode; GPUs fail to compile the model.
:::
@ -766,6 +766,31 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl

</details>
<details>
<summary>DEIMv2 Setup & Config</summary>

After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

```yaml
detectors:
  ov:
    type: openvino
    device: CPU

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/deimv2_hgnetv2_n.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

</details>
## Apple Silicon detector
The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with the `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
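As a hedged sketch (the service name and volume paths are illustrative assumptions, not from the docs), pulling that image with Docker Compose might look like:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-standard-arm64
    restart: unless-stopped
    volumes:
      - ./config:/config
```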
@ -947,7 +972,7 @@ The AMD GPU kernel is known problematic especially when converting models to mxr
See [ONNX supported models](#supported-models) for supported models; there are some caveats:
- D-FINE / DEIMv2 models are not supported
- YOLO-NAS models are known to not run well on integrated GPUs
## ONNX
@ -1003,7 +1028,7 @@ detectors:

| [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
| [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️ | ❌ | Not supported by CUDA Graphs |

There is no default model provided, the following formats are supported:
@ -1215,9 +1240,9 @@ model:

</details>
#### D-FINE / DEIMv2
[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
<details>
<summary>D-FINE Setup & Config</summary>
@ -1262,6 +1287,28 @@ model:

</details>
<details>
<summary>DEIMv2 Setup & Config</summary>

After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/deimv2_hgnetv2_n.onnx
  labelmap_path: /labelmap/coco-80.txt
```

</details>

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@ -1405,7 +1452,7 @@ MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the

#### YOLO-NAS
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
**Note:** The default model for the MemryX detector is YOLO-NAS 320x320.
@ -1459,7 +1506,7 @@ model:

#### YOLOv9
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
##### Configuration
@ -1601,19 +1648,39 @@ model:

#### Using a Custom Model
To use your own custom model, first compile it into a [.dfp](https://developer.memryx.com/2p1/specs/files.html#dataflow-program) file, which is the format used by MemryX.

#### Compile the Model

Custom models must be compiled using **MemryX SDK 2.1**.

Before compiling your model, install the MemryX Neural Compiler tools from the [Install Tools](https://developer.memryx.com/2p1/get_started/install_tools.html) page on the **host**.

> **Note:** It is recommended to compile the model on the host machine, or on another separate machine, rather than inside the Frigate Docker container, since installing the compiler inside Docker may conflict with container packages. Create a Python virtual environment and install the compiler there.

Once the SDK 2.1 environment is set up, follow the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) documentation to compile your model.

Example:

```bash
mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
```

For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/2p1/tutorials/tutorials.html).

#### Package the Compiled Model

1. Package your compiled model into a `.zip` file.
2. The `.zip` file must contain the compiled `.dfp` file.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
5. Update `labelmap_path` to match your custom model's labels.
```yaml
# The detector automatically selects the default model if nothing is provided in the config.
@ -2274,6 +2341,49 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL

EOF
```
### Downloading DEIMv2 Model

[DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) can be exported as ONNX by running the command below. Pretrained weights are available on Hugging Face for two backbone families:

- **HGNetv2** (smaller/faster): `atto`, `femto`, `pico`, `n`
- **DINOv3** (larger/more accurate): `s`, `m`, `l`, `x`

Set `BACKBONE` and `MODEL_SIZE` in the first line to match your desired variant. Hugging Face model names use uppercase (e.g. `HGNetv2_N`, `DINOv3_S`), while config files use lowercase (e.g. `hgnetv2_n`, `dinov3_s`).

```sh
docker build . --rm --build-arg BACKBONE=hgnetv2 --build-arg MODEL_SIZE=n --output . -f- <<'EOF'
FROM python:3.11-slim AS build
RUN apt-get update && apt-get install --no-install-recommends -y git libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /deimv2
RUN git clone https://github.com/Intellindust-AI-Lab/DEIMv2.git .
# Install CPU-only PyTorch first to avoid pulling CUDA variant
RUN uv pip install --no-cache --system torch torchvision --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache --system -r requirements.txt
RUN uv pip install --no-cache --system onnx safetensors huggingface_hub
RUN mkdir -p output
ARG BACKBONE
ARG MODEL_SIZE
# Download from Hugging Face and convert safetensors to pth
RUN python3 -c "\
from huggingface_hub import hf_hub_download; \
from safetensors.torch import load_file; \
import torch; \
backbone = '${BACKBONE}'.replace('hgnetv2','HGNetv2').replace('dinov3','DINOv3'); \
size = '${MODEL_SIZE}'.upper(); \
st = load_file(hf_hub_download('Intellindust/DEIMv2_' + backbone + '_' + size + '_COCO', 'model.safetensors')); \
torch.save({'model': st}, 'output/deimv2.pth')"
RUN sed -i "s/data = torch.rand(2/data = torch.rand(1/" tools/deployment/export_onnx.py
# HuggingFace safetensors omits frozen constants that the model constructor initializes
RUN sed -i "s/cfg.model.load_state_dict(state)/cfg.model.load_state_dict(state, strict=False)/" tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/deimv2/deimv2_${BACKBONE}_${MODEL_SIZE}_coco.yml -r output/deimv2.pth
FROM scratch
ARG BACKBONE
ARG MODEL_SIZE
COPY --from=build /deimv2/output/deimv2.onnx /deimv2_${BACKBONE}_${MODEL_SIZE}.onnx
EOF
```
### Downloading RF-DETR Model
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, altering `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium`.
@ -195,7 +195,7 @@ Pre and post capture footage is included in the **recording timeline**, visible
## Will Frigate delete old recordings if my storage runs out?
If there is less than an hour left of storage, the oldest hour of recordings will be deleted and a message will be printed in the Frigate logs. This emergency cleanup deletes the oldest recordings first regardless of retention settings to reclaim space as quickly as possible.
## Configuring Recording Retention
@ -236,7 +236,7 @@ Enabling arbitrary exec sources allows execution of arbitrary commands through g
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands and other applications. An example is below:
:::warning
@ -244,16 +244,11 @@ The `exec:`, `echo:`, and `expr:` sources are disabled by default.
:::

NOTE: RTSP output will need to be passed with two curly braces `{{output}}`, whereas pipe output must be passed without curly braces.
```yaml
go2rtc:
  streams:
    stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
    stream2: exec:rpicam-vid -t 0 --libav-format h264 -o -
```
@ -9,7 +9,7 @@ Frigate is a Docker container that can be run on any Docker host including as a
:::tip
If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started.md#configuring-frigate) to configure Frigate.
:::
@ -286,7 +286,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/install_hardware.html).
Then follow these steps for installing the correct driver/runtime configuration:
@ -295,6 +295,12 @@ Then follow these steps for installing the correct driver/runtime configuration:

3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.
:::warning

For manual setup, use **MemryX SDK 2.1** only. Other SDK versions are not supported for this setup. See the [SDK 2.1 documentation](https://developer.memryx.com/2p1/index.html).

:::
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
@ -17,9 +17,90 @@ from ws4py.websocket import WebSocket as WebSocket_
from frigate.comms.base_communicator import Communicator
from frigate.config import FrigateConfig
from frigate.const import (
    CLEAR_ONGOING_REVIEW_SEGMENTS,
    EXPIRE_AUDIO_ACTIVITY,
    INSERT_MANY_RECORDINGS,
    INSERT_PREVIEW,
    NOTIFICATION_TEST,
    REQUEST_REGION_GRID,
    UPDATE_AUDIO_ACTIVITY,
    UPDATE_AUDIO_TRANSCRIPTION_STATE,
    UPDATE_BIRDSEYE_LAYOUT,
    UPDATE_CAMERA_ACTIVITY,
    UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
    UPDATE_EVENT_DESCRIPTION,
    UPDATE_MODEL_STATE,
    UPDATE_REVIEW_DESCRIPTION,
    UPSERT_REVIEW_SEGMENT,
)

logger = logging.getLogger(__name__)

# Internal IPC topics — NEVER allowed from WebSocket, regardless of role
_WS_BLOCKED_TOPICS = frozenset(
    {
        INSERT_MANY_RECORDINGS,
        INSERT_PREVIEW,
        REQUEST_REGION_GRID,
        UPSERT_REVIEW_SEGMENT,
        CLEAR_ONGOING_REVIEW_SEGMENTS,
        UPDATE_CAMERA_ACTIVITY,
        UPDATE_AUDIO_ACTIVITY,
        EXPIRE_AUDIO_ACTIVITY,
        UPDATE_EVENT_DESCRIPTION,
        UPDATE_REVIEW_DESCRIPTION,
        UPDATE_MODEL_STATE,
        UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
        UPDATE_BIRDSEYE_LAYOUT,
        UPDATE_AUDIO_TRANSCRIPTION_STATE,
        NOTIFICATION_TEST,
    }
)

# Read-only topics any authenticated user (including viewer) can send
_WS_VIEWER_TOPICS = frozenset(
    {
        "onConnect",
        "modelState",
        "audioTranscriptionState",
        "birdseyeLayout",
        "embeddingsReindexProgress",
    }
)


def _check_ws_authorization(
    topic: str,
    role_header: str | None,
    separator: str,
) -> bool:
    """Check if a WebSocket message is authorized.

    Args:
        topic: The message topic.
        role_header: The HTTP_REMOTE_ROLE header value, or None.
        separator: The role separator character from proxy config.

    Returns:
        True if authorized, False if blocked.
    """
    # Block IPC-only topics unconditionally
    if topic in _WS_BLOCKED_TOPICS:
        return False

    # No role header: default to viewer (fail-closed)
    if role_header is None:
        return topic in _WS_VIEWER_TOPICS

    # Check if any role is admin
    roles = [r.strip() for r in role_header.split(separator)]
    if "admin" in roles:
        return True

    # Non-admin: only viewer topics allowed
    return topic in _WS_VIEWER_TOPICS


class WebSocket(WebSocket_):  # type: ignore[misc]
    def unhandled_error(self, error: Any) -> None:
@ -49,6 +130,7 @@ class WebSocketClient(Communicator):

        class _WebSocketHandler(WebSocket):
            receiver = self._dispatcher
            role_separator = self.config.proxy.separator or ","

            def received_message(self, message: WebSocket.received_message) -> None:  # type: ignore[name-defined]
                try:
@ -63,11 +145,25 @@ class WebSocketClient(Communicator):

                    )
                    return

                topic = json_message["topic"]

                # Authorization check (skip when environ is None — direct internal connection)
                role_header = (
                    self.environ.get("HTTP_REMOTE_ROLE") if self.environ else None
                )
                if self.environ is not None and not _check_ws_authorization(
                    topic, role_header, self.role_separator
                ):
                    logger.warning(
                        "Blocked unauthorized WebSocket message: topic=%s, role=%s",
                        topic,
                        role_header,
                    )
                    return

                logger.debug(f"Publishing mqtt message from websockets at {topic}.")
                self.receiver(
                    topic,
                    json_message["payload"],
                )
166 frigate/test/test_ws_auth.py Normal file
@ -0,0 +1,166 @@
"""Tests for WebSocket authorization checks."""

import unittest

from frigate.comms.ws import _check_ws_authorization
from frigate.const import INSERT_MANY_RECORDINGS, UPDATE_CAMERA_ACTIVITY


class TestCheckWsAuthorization(unittest.TestCase):
    """Tests for the _check_ws_authorization pure function."""

    DEFAULT_SEPARATOR = ","

    # --- IPC topic blocking (unconditional, regardless of role) ---

    def test_ipc_topic_blocked_for_admin(self):
        self.assertFalse(
            _check_ws_authorization(
                INSERT_MANY_RECORDINGS, "admin", self.DEFAULT_SEPARATOR
            )
        )

    def test_ipc_topic_blocked_for_viewer(self):
        self.assertFalse(
            _check_ws_authorization(
                UPDATE_CAMERA_ACTIVITY, "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_ipc_topic_blocked_when_no_role(self):
        self.assertFalse(
            _check_ws_authorization(
                INSERT_MANY_RECORDINGS, None, self.DEFAULT_SEPARATOR
            )
        )

    # --- Viewer allowed topics ---

    def test_viewer_can_send_on_connect(self):
        self.assertTrue(
            _check_ws_authorization("onConnect", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_model_state(self):
        self.assertTrue(
            _check_ws_authorization("modelState", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_audio_transcription_state(self):
        self.assertTrue(
            _check_ws_authorization(
                "audioTranscriptionState", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_can_send_birdseye_layout(self):
        self.assertTrue(
            _check_ws_authorization("birdseyeLayout", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_embeddings_reindex_progress(self):
        self.assertTrue(
            _check_ws_authorization(
                "embeddingsReindexProgress", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    # --- Viewer blocked from admin topics ---

    def test_viewer_blocked_from_restart(self):
        self.assertFalse(
            _check_ws_authorization("restart", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_blocked_from_camera_detect_set(self):
        self.assertFalse(
            _check_ws_authorization(
                "front_door/detect/set", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_camera_ptz(self):
        self.assertFalse(
            _check_ws_authorization("front_door/ptz", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_blocked_from_global_notifications_set(self):
        self.assertFalse(
            _check_ws_authorization(
                "notifications/set", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_camera_notifications_suspend(self):
        self.assertFalse(
            _check_ws_authorization(
                "front_door/notifications/suspend", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_arbitrary_unknown_topic(self):
        self.assertFalse(
            _check_ws_authorization(
                "some_random_topic", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    # --- Admin access ---

    def test_admin_can_send_restart(self):
        self.assertTrue(
            _check_ws_authorization("restart", "admin", self.DEFAULT_SEPARATOR)
        )

    def test_admin_can_send_camera_detect_set(self):
        self.assertTrue(
            _check_ws_authorization(
                "front_door/detect/set", "admin", self.DEFAULT_SEPARATOR
            )
        )

    def test_admin_can_send_camera_ptz(self):
        self.assertTrue(
            _check_ws_authorization("front_door/ptz", "admin", self.DEFAULT_SEPARATOR)
        )

    # --- Comma-separated roles ---

    def test_comma_separated_admin_viewer_grants_admin(self):
        self.assertTrue(
            _check_ws_authorization("restart", "admin,viewer", self.DEFAULT_SEPARATOR)
        )

    def test_comma_separated_viewer_admin_grants_admin(self):
        self.assertTrue(
            _check_ws_authorization("restart", "viewer,admin", self.DEFAULT_SEPARATOR)
        )

    def test_comma_separated_with_spaces(self):
        self.assertTrue(
            _check_ws_authorization("restart", "viewer, admin", self.DEFAULT_SEPARATOR)
        )

    # --- Custom separator ---

    def test_pipe_separator(self):
        self.assertTrue(_check_ws_authorization("restart", "viewer|admin", "|"))

    def test_pipe_separator_no_admin(self):
        self.assertFalse(_check_ws_authorization("restart", "viewer|editor", "|"))

    # --- No role header (fail-closed) ---

    def test_no_role_header_blocks_admin_topics(self):
        self.assertFalse(
            _check_ws_authorization("restart", None, self.DEFAULT_SEPARATOR)
        )

    def test_no_role_header_allows_viewer_topics(self):
        self.assertTrue(
            _check_ws_authorization("onConnect", None, self.DEFAULT_SEPARATOR)
        )


if __name__ == "__main__":
    unittest.main()
@@ -1,88 +1,95 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "runtime-notice"
      },
      "source": [
        "**Before running:** go to **Runtime → Change runtime type → Fallback runtime version: 2025.07** (Python 3.11). The current Colab default (Python 3.12+) is incompatible with `super-gradients`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rmuF9iKWTbdk"
      },
      "outputs": [],
      "source": [
        "! pip install -q \"jedi>=0.16\"\n",
        "! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NiRCt917KKcL"
      },
      "outputs": [],
      "source": "! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/pretrained_models.py\n! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dTB0jy_NNSFz"
      },
      "outputs": [],
      "source": [
        "from super_gradients.common.object_names import Models\n",
        "from super_gradients.conversion import DetectionOutputFormatMode\n",
        "from super_gradients.training import models\n",
        "\n",
        "model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GymUghyCNXem"
      },
      "outputs": [],
      "source": [
        "# export the model for compatibility with Frigate\n",
        "\n",
        "model.export(\"yolo_nas_s.onnx\",\n",
        "             output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
        "             max_predictions_per_image=20,\n",
        "             num_pre_nms_predictions=300,\n",
        "             confidence_threshold=0.4,\n",
        "             input_image_shape=(320,320),\n",
        "             )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uBhXV5g4Nh42"
      },
      "outputs": [],
      "source": [
        "from google.colab import files\n",
        "\n",
        "files.download('yolo_nas_s.onnx')"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
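The notebook exports with `output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT` and `confidence_threshold=0.4`, so the model emits one flat row per detection rather than per-image tensors. As an illustrative sketch only — the row layout `[batch_index, x1, y1, x2, y2, confidence, class_id]` is an assumption about super-gradients' flat format, not something defined in this repo, and `parse_flat_detections` is a hypothetical helper — downstream code might filter such rows like this:

```python
# Hypothetical consumer of FLAT_FORMAT detection rows. Assumed row layout:
# [batch_index, x1, y1, x2, y2, confidence, class_id].

def parse_flat_detections(rows, min_confidence=0.4):
    detections = []
    for batch_index, x1, y1, x2, y2, confidence, class_id in rows:
        # Drop rows below the confidence threshold used at export time.
        if confidence >= min_confidence:
            detections.append(
                {
                    "box": (x1, y1, x2, y2),
                    "confidence": confidence,
                    "class_id": int(class_id),
                }
            )
    return detections


# Dummy rows: only the first clears the 0.4 threshold.
rows = [
    [0, 12.0, 30.0, 96.0, 150.0, 0.87, 0],
    [0, 5.0, 5.0, 20.0, 20.0, 0.12, 2],
]
print(parse_flat_detections(rows))
```

Note that because `max_predictions_per_image=20` is set at export, a consumer like this would never see more than 20 rows per image.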