Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-05-07 05:55:27 +03:00)

Comparing 16 commits: 11ef0ef558 ... 37d36039e1

37d36039e1
d011bcf356
9377d81406
3524d661d0
b25be0180e
b6fd86a066
147cd5cc2b
6a2b914b10
2cfb530dbf
81b0d94793
67837f61d0
58c93c2e9e
6b71feffab
1c26bc289e
0371b60c71
01392e03ac
@@ -19,7 +19,7 @@ Face recognition requires a one-time internet connection to download detection a

 ### Face Detection

-When running a Frigate+ model (or any custom model that natively detects faces) should ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.
+When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/index.md#available-label-types) either globally or for a specific camera. This allows face detection to run at the same time as object detection and is more efficient.

 When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
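
For example, a minimal sketch of the relevant objects config (this assumes a Frigate+ or other face-capable model; keep whatever other labels you already track):

```yaml
objects:
  track:
    - person
    - face
```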
@@ -201,7 +201,7 @@ Cloud Generative AI providers require an active internet connection to send imag

 ### Ollama Cloud

-Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
+Ollama also supports [cloud models](https://ollama.com/cloud), where model inference is performed in the cloud. You can connect directly to Ollama Cloud by setting `base_url` to `https://ollama.com` and providing an API key. Alternatively, you can run Ollama locally and use a cloud model name so your local instance forwards requests to the cloud. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
@@ -210,7 +210,8 @@ Ollama also supports [cloud models](https://ollama.com/cloud), where your local

 1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
    - Set **Provider** to `ollama`
-   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`)
+   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`) or `https://ollama.com` for direct cloud inference
+   - Set **API key** if required by your endpoint (e.g., when using `https://ollama.com`)
    - Set **Model** to the cloud model name

 </TabItem>
@@ -223,6 +224,16 @@ genai:
   model: cloud-model-name
 ```

+or when using Ollama Cloud directly:
+
+```yaml
+genai:
+  provider: ollama
+  base_url: https://ollama.com
+  model: cloud-model-name
+  api_key: your-api-key
+```
+
 </TabItem>
 </ConfigTabs>
@@ -494,7 +494,7 @@ detectors:
 | [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
 | [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
 | [YOLOX](#yolox) | ✅ | ? | |
-| [D-FINE](#d-fine) | ❌ | ❌ | |
+| [D-FINE / DEIMv2](#d-fine--deimv2) | ❌ | ❌ | |

 #### SSDLite MobileNet v2
@@ -710,13 +710,13 @@ model:

 </details>

-#### D-FINE
+#### D-FINE / DEIMv2

-[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
+[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.

 :::warning

-Currently D-FINE models only run on OpenVINO in CPU mode, GPUs currently fail to compile the model
+Currently, D-FINE / DEIMv2 models run on OpenVINO in CPU mode only; GPUs fail to compile the model.

 :::
@@ -766,6 +766,31 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl

 </details>

+<details>
+<summary>DEIMv2 Setup & Config</summary>
+
+After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
+
+```yaml
+detectors:
+  ov:
+    type: openvino
+    device: CPU
+
+model:
+  model_type: dfine
+  width: 640
+  height: 640
+  input_tensor: nchw
+  input_dtype: float
+  path: /config/model_cache/deimv2_hgnetv2_n.onnx
+  labelmap_path: /labelmap/coco-80.txt
+```
+
+Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
+
+</details>
+
 ## Apple Silicon detector

 The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with the `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
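
For example, the relevant Compose entry might look like the following minimal sketch (the rest of the service definition follows the standard installation instructions):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-standard-arm64
    # remaining options per the standard Frigate installation docs
```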
@@ -947,7 +972,7 @@ The AMD GPU kernel is known problematic especially when converting models to mxr

 See [ONNX supported models](#supported-models) for supported models, there are some caveats:

-- D-FINE models are not supported
+- D-FINE / DEIMv2 models are not supported
 - YOLO-NAS models are known to not run well on integrated GPUs

 ## ONNX
@@ -1003,7 +1028,7 @@ detectors:
 | [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
 | [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
 | [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
-| [D-FINE](#d-fine) | ⚠️ | ❌ | Not supported by CUDA Graphs |
+| [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️ | ❌ | Not supported by CUDA Graphs |

 There is no default model provided, the following formats are supported:
@@ -1215,9 +1240,9 @@ model:

 </details>

-#### D-FINE
+#### D-FINE / DEIMv2

-[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
+[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.

 <details>
 <summary>D-FINE Setup & Config</summary>
@@ -1262,6 +1287,28 @@ model:

 </details>

+<details>
+<summary>DEIMv2 Setup & Config</summary>
+
+After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
+
+```yaml
+detectors:
+  onnx:
+    type: onnx
+
+model:
+  model_type: dfine
+  width: 640
+  height: 640
+  input_tensor: nchw
+  input_dtype: float
+  path: /config/model_cache/deimv2_hgnetv2_n.onnx
+  labelmap_path: /labelmap/coco-80.txt
+```
+
+</details>
+
 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

 ## CPU Detector (not recommended)
@@ -1405,7 +1452,7 @@ MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the

 #### YOLO-NAS

-The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
+The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).

 **Note:** The default model for the MemryX detector is YOLO-NAS 320x320.
@@ -1459,7 +1506,7 @@ model:

 #### YOLOv9

-The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
+The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) as in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).

 ##### Configuration
@@ -1601,19 +1648,39 @@ model:

 #### Using a Custom Model

-To use your own model:
+To use your own custom model, first compile it into a [.dfp](https://developer.memryx.com/2p1/specs/files.html#dataflow-program) file, which is the format used by MemryX.

-1. Package your compiled model into a `.zip` file.
+#### Compile the Model

-2. The `.zip` must contain the compiled `.dfp` file.
+Custom models must be compiled using **MemryX SDK 2.1**.

-3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
+Before compiling your model, install the MemryX Neural Compiler tools from the
+[Install Tools](https://developer.memryx.com/2p1/get_started/install_tools.html) page on the **host**.

-4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
+> **Note:** It is recommended to compile the model on the host machine, or on another separate machine, rather than inside the Frigate Docker container, since installing the compiler inside Docker may conflict with container packages. Create a Python virtual environment and install the compiler there, as sketched below.
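
A minimal sketch of that isolated environment (the exact SDK package names and install steps come from the MemryX Install Tools page, so the final step here is a placeholder):

```bash
# create and activate a clean environment on the host
python3 -m venv ~/memryx-sdk-venv
source ~/memryx-sdk-venv/bin/activate
# then install the SDK 2.1 Neural Compiler tools per the Install Tools page
```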
-5. Update the `labelmap_path` to match your custom model's labels.
+Once the SDK 2.1 environment is set up, follow the
+[MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) documentation to compile your model.

-For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
+Example:
+
+```bash
+mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
+```
+
+For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/2p1/tutorials/tutorials.html).
+
+#### Package the Compiled Model
+
+1. Package your compiled model into a `.zip` file.
+
+2. The `.zip` file must contain the compiled `.dfp` file.
+
+3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
+
+4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
+
+5. Update `labelmap_path` to match your custom model's labels.

 ```yaml
 # The detector automatically selects the default model if nothing is provided in the config.
@@ -2274,6 +2341,49 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL
 EOF
 ```

+### Downloading DEIMv2 Model
+
+[DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) can be exported as ONNX by running the command below. Pretrained weights are available on Hugging Face for two backbone families:
+
+- **HGNetv2** (smaller/faster): `atto`, `femto`, `pico`, `n`
+- **DINOv3** (larger/more accurate): `s`, `m`, `l`, `x`
+
+Set `BACKBONE` and `MODEL_SIZE` in the first line to match your desired variant. Hugging Face model names use uppercase (e.g. `HGNetv2_N`, `DINOv3_S`), while config files use lowercase (e.g. `hgnetv2_n`, `dinov3_s`).
+
+```sh
+docker build . --rm --build-arg BACKBONE=hgnetv2 --build-arg MODEL_SIZE=n --output . -f- <<'EOF'
+FROM python:3.11-slim AS build
+RUN apt-get update && apt-get install --no-install-recommends -y git libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*
+COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
+WORKDIR /deimv2
+RUN git clone https://github.com/Intellindust-AI-Lab/DEIMv2.git .
+# Install CPU-only PyTorch first to avoid pulling CUDA variant
+RUN uv pip install --no-cache --system torch torchvision --index-url https://download.pytorch.org/whl/cpu
+RUN uv pip install --no-cache --system -r requirements.txt
+RUN uv pip install --no-cache --system onnx safetensors huggingface_hub
+RUN mkdir -p output
+ARG BACKBONE
+ARG MODEL_SIZE
+# Download from Hugging Face and convert safetensors to pth
+RUN python3 -c "\
+from huggingface_hub import hf_hub_download; \
+from safetensors.torch import load_file; \
+import torch; \
+backbone = '${BACKBONE}'.replace('hgnetv2','HGNetv2').replace('dinov3','DINOv3'); \
+size = '${MODEL_SIZE}'.upper(); \
+st = load_file(hf_hub_download('Intellindust/DEIMv2_' + backbone + '_' + size + '_COCO', 'model.safetensors')); \
+torch.save({'model': st}, 'output/deimv2.pth')"
+RUN sed -i "s/data = torch.rand(2/data = torch.rand(1/" tools/deployment/export_onnx.py
+# HuggingFace safetensors omits frozen constants that the model constructor initializes
+RUN sed -i "s/cfg.model.load_state_dict(state)/cfg.model.load_state_dict(state, strict=False)/" tools/deployment/export_onnx.py
+RUN python3 tools/deployment/export_onnx.py -c configs/deimv2/deimv2_${BACKBONE}_${MODEL_SIZE}_coco.yml -r output/deimv2.pth
+FROM scratch
+ARG BACKBONE
+ARG MODEL_SIZE
+COPY --from=build /deimv2/output/deimv2.onnx /deimv2_${BACKBONE}_${MODEL_SIZE}.onnx
+EOF
+```
+
 ### Downloading RF-DETR Model

 RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, setting `MODEL_SIZE` in the first line to `Nano`, `Small`, or `Medium`.
@@ -195,7 +195,7 @@ Pre and post capture footage is included in the **recording timeline**, visible

 ## Will Frigate delete old recordings if my storage runs out?

-As of Frigate 0.12 if there is less than an hour left of storage, the oldest 2 hours of recordings will be deleted.
+If there is less than an hour left of storage, the oldest hour of recordings will be deleted and a message will be printed in the Frigate logs. This emergency cleanup deletes the oldest recordings first, regardless of retention settings, to reclaim space as quickly as possible.
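
Routine retention is configured separately from this emergency cleanup. As a minimal sketch (the values are illustrative; `mode` accepts the standard `all`/`motion`/`active_objects` options):

```yaml
record:
  enabled: true
  retain:
    days: 7       # keep recordings for a week under normal operation
    mode: motion  # only retain segments with detected motion
```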
## Configuring Recording Retention
@@ -236,7 +236,7 @@ Enabling arbitrary exec sources allows execution of arbitrary commands through g

 ## Advanced Restream Configurations

-The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
+The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands and other applications. An example is below:

 :::warning

@@ -244,16 +244,11 @@ The `exec:`, `echo:`, and `expr:` sources are disabled by default for security.

 :::

-:::warning
-
-The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
-
-:::
-
-NOTE: The output will need to be passed with two curly braces `{{output}}`
+NOTE: RTSP output will need to be passed with two curly braces `{{output}}`, whereas pipe output must be passed without curly braces.

 ```yaml
 go2rtc:
   streams:
     stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
+    stream2: exec:rpicam-vid -t 0 --libav-format h264 -o -
 ```
@@ -9,7 +9,7 @@ Frigate is a Docker container that can be run on any Docker host including as a

 :::tip

-If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started#configuring-frigate) to configure Frigate.
+If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started.md#configuring-frigate) to configure Frigate.

 :::
@@ -286,7 +286,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM

 #### Installation

-To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
+To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/install_hardware.html).

 Then follow these steps for installing the correct driver/runtime configuration:

@@ -295,6 +295,12 @@ Then follow these steps for installing the correct driver/runtime configuration:
 3. Run the script with `./user_installation.sh`
 4. **Restart your computer** to complete driver installation.

+:::warning
+
+For manual setup, use **MemryX SDK 2.1** only. Other SDK versions are not supported for this setup. See the [SDK 2.1 documentation](https://developer.memryx.com/2p1/index.html).
+
+:::
+
 #### Setup

 To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
@@ -429,7 +429,10 @@ class WebPushClient(Communicator):
             else:
                 title = base_title

-            message = payload["after"]["data"]["metadata"]["shortSummary"]
+            if payload["after"]["data"]["metadata"].get("shortSummary"):
+                message = payload["after"]["data"]["metadata"]["shortSummary"]
+            else:
+                message = f"Detected on {camera_name}"
         else:
             zone_names = payload["after"]["data"]["zones"]
             formatted_zone_names = []
@@ -17,9 +17,90 @@ from ws4py.websocket import WebSocket as WebSocket_

 from frigate.comms.base_communicator import Communicator
 from frigate.config import FrigateConfig
+from frigate.const import (
+    CLEAR_ONGOING_REVIEW_SEGMENTS,
+    EXPIRE_AUDIO_ACTIVITY,
+    INSERT_MANY_RECORDINGS,
+    INSERT_PREVIEW,
+    NOTIFICATION_TEST,
+    REQUEST_REGION_GRID,
+    UPDATE_AUDIO_ACTIVITY,
+    UPDATE_AUDIO_TRANSCRIPTION_STATE,
+    UPDATE_BIRDSEYE_LAYOUT,
+    UPDATE_CAMERA_ACTIVITY,
+    UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
+    UPDATE_EVENT_DESCRIPTION,
+    UPDATE_MODEL_STATE,
+    UPDATE_REVIEW_DESCRIPTION,
+    UPSERT_REVIEW_SEGMENT,
+)

 logger = logging.getLogger(__name__)

+# Internal IPC topics — NEVER allowed from WebSocket, regardless of role
+_WS_BLOCKED_TOPICS = frozenset(
+    {
+        INSERT_MANY_RECORDINGS,
+        INSERT_PREVIEW,
+        REQUEST_REGION_GRID,
+        UPSERT_REVIEW_SEGMENT,
+        CLEAR_ONGOING_REVIEW_SEGMENTS,
+        UPDATE_CAMERA_ACTIVITY,
+        UPDATE_AUDIO_ACTIVITY,
+        EXPIRE_AUDIO_ACTIVITY,
+        UPDATE_EVENT_DESCRIPTION,
+        UPDATE_REVIEW_DESCRIPTION,
+        UPDATE_MODEL_STATE,
+        UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
+        UPDATE_BIRDSEYE_LAYOUT,
+        UPDATE_AUDIO_TRANSCRIPTION_STATE,
+        NOTIFICATION_TEST,
+    }
+)
+
+# Read-only topics any authenticated user (including viewer) can send
+_WS_VIEWER_TOPICS = frozenset(
+    {
+        "onConnect",
+        "modelState",
+        "audioTranscriptionState",
+        "birdseyeLayout",
+        "embeddingsReindexProgress",
+    }
+)
+
+
+def _check_ws_authorization(
+    topic: str,
+    role_header: str | None,
+    separator: str,
+) -> bool:
+    """Check if a WebSocket message is authorized.
+
+    Args:
+        topic: The message topic.
+        role_header: The HTTP_REMOTE_ROLE header value, or None.
+        separator: The role separator character from proxy config.
+
+    Returns:
+        True if authorized, False if blocked.
+    """
+    # Block IPC-only topics unconditionally
+    if topic in _WS_BLOCKED_TOPICS:
+        return False
+
+    # No role header: default to viewer (fail-closed)
+    if role_header is None:
+        return topic in _WS_VIEWER_TOPICS
+
+    # Check if any role is admin
+    roles = [r.strip() for r in role_header.split(separator)]
+    if "admin" in roles:
+        return True
+
+    # Non-admin: only viewer topics allowed
+    return topic in _WS_VIEWER_TOPICS
+
+
 class WebSocket(WebSocket_):  # type: ignore[misc]
     def unhandled_error(self, error: Any) -> None:

@@ -49,6 +130,7 @@ class WebSocketClient(Communicator):

         class _WebSocketHandler(WebSocket):
             receiver = self._dispatcher
+            role_separator = self.config.proxy.separator or ","

             def received_message(self, message: WebSocket.received_message) -> None:  # type: ignore[name-defined]
                 try:

@@ -63,11 +145,25 @@ class WebSocketClient(Communicator):
                     )
                     return

-                logger.debug(
-                    f"Publishing mqtt message from websockets at {json_message['topic']}."
-                )
+                topic = json_message["topic"]
+
+                # Authorization check (skip when environ is None — direct internal connection)
+                role_header = (
+                    self.environ.get("HTTP_REMOTE_ROLE") if self.environ else None
+                )
+                if self.environ is not None and not _check_ws_authorization(
+                    topic, role_header, self.role_separator
+                ):
+                    logger.warning(
+                        "Blocked unauthorized WebSocket message: topic=%s, role=%s",
+                        topic,
+                        role_header,
+                    )
+                    return
+
+                logger.debug(f"Publishing mqtt message from websockets at {topic}.")
                 self.receiver(
-                    json_message["topic"],
+                    topic,
                     json_message["payload"],
                 )
@@ -1073,10 +1073,6 @@ class LicensePlateProcessingMixin:
                     top_score = score
                     top_box = bbox

-            if score > top_score:
-                top_score = score
-                top_box = bbox
-
         # Return the top scoring bounding box if found
         if top_box is not None:
             # expand box by 5% to help with OCR

@@ -1092,9 +1088,6 @@ class LicensePlateProcessingMixin:
                 ]
             ).clip(0, [input.shape[1], input.shape[0]] * 2)

-            logger.debug(
-                f"{camera}: Found license plate. Bounding box: {expanded_box.astype(int)}"
-            )
             return tuple(int(x) for x in expanded_box)  # type: ignore[return-value]
         else:
             return None  # No detection above the threshold
@@ -1360,8 +1353,8 @@ class LicensePlateProcessingMixin:
        )

        # check that license plate is valid
-       # double the value because we've doubled the size of the car
-       if license_plate_area < self.config.cameras[camera].lpr.min_area * 2:
+       # quadruple the value because we've doubled both dimensions of the car
+       if license_plate_area < self.config.cameras[camera].lpr.min_area * 4:
            logger.debug(f"{camera}: License plate is less than min_area")
            return
@@ -1465,6 +1458,7 @@ class LicensePlateProcessingMixin:
             license_plate_frame,
         )

+        logger.debug(f"{camera}: Found license plate. Bounding box: {list(plate_box)}")
         logger.debug(f"{camera}: Running plate recognition for id: {id}.")

         # run detection, returns results sorted by confidence, best first
@@ -31,6 +31,12 @@ class OllamaClient(GenAIClient):
     provider: ApiClient | None
     provider_options: dict[str, Any]

+    def _auth_headers(self) -> dict | None:
+        if self.genai_config.api_key:
+            return {"Authorization": "Bearer " + self.genai_config.api_key}
+
+        return None
+
     def _init_provider(self) -> ApiClient | None:
         """Initialize the client."""
         self.provider_options = {

@@ -39,7 +45,11 @@ class OllamaClient(GenAIClient):
         }

         try:
-            client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
+            client = ApiClient(
+                host=self.genai_config.base_url,
+                timeout=self.timeout,
+                headers=self._auth_headers(),
+            )
             # ensure the model is available locally
             response = client.show(self.genai_config.model)
             if response.get("error"):

@@ -166,7 +176,9 @@ class OllamaClient(GenAIClient):
             return []
         try:
             client = ApiClient(
-                host=self.genai_config.base_url, timeout=self.timeout
+                host=self.genai_config.base_url,
+                timeout=self.timeout,
+                headers=self._auth_headers(),
             )
         except Exception:
             return []

@@ -344,6 +356,7 @@ class OllamaClient(GenAIClient):
             async_client = OllamaAsyncClient(
                 host=self.genai_config.base_url,
                 timeout=self.timeout,
+                headers=self._auth_headers(),
             )
             response = await async_client.chat(**request_params)
             result = self._message_from_response(response)

@@ -359,6 +372,7 @@ class OllamaClient(GenAIClient):
             async_client = OllamaAsyncClient(
                 host=self.genai_config.base_url,
                 timeout=self.timeout,
+                headers=self._auth_headers(),
             )
             content_parts: list[str] = []
             final_message: dict[str, Any] | None = None
@@ -13,6 +13,7 @@ from enum import Enum
 from pathlib import Path
 from typing import Callable, Optional

+import pytz  # type: ignore[import-untyped]
 from peewee import DoesNotExist

 from frigate.config import FfmpegConfig, FrigateConfig

@@ -344,7 +345,19 @@ class RecordingExporter(threading.Thread):
         return proc.returncode, "".join(captured)

     def get_datetime_from_timestamp(self, timestamp: int) -> str:
-        # return in iso format
+        # return in iso format using the configured ui.timezone when set,
+        # so the auto-generated export name reflects local time rather
+        # than the container's UTC clock
+        tz_name = self.config.ui.timezone
+        if tz_name:
+            try:
+                tz = pytz.timezone(tz_name)
+            except pytz.UnknownTimeZoneError:
+                tz = None
+            if tz is not None:
+                return datetime.datetime.fromtimestamp(timestamp, tz=tz).strftime(
+                    "%Y-%m-%d %H:%M:%S"
+                )
         return datetime.datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")

     def _chapter_metadata_path(self) -> str:
@@ -538,12 +551,18 @@ class RecordingExporter(threading.Thread):
         start_file = f"{file_start}{self.start_time}.{PREVIEW_FRAME_TYPE}"
         end_file = f"{file_start}{self.end_time}.{PREVIEW_FRAME_TYPE}"
         selected_preview = None
+        # Preview frames are written at most 1-2 fps during activity
+        # and as little as one every 30s during quiet periods, so a
+        # short export window can contain zero frames. Track the most
+        # recent frame before the window as a fallback.
+        fallback_preview = None

         for file in sorted(os.listdir(preview_dir)):
             if not file.startswith(file_start):
                 continue

             if file < start_file:
+                fallback_preview = os.path.join(preview_dir, file)
                 continue

             if file > end_file:

@@ -552,6 +571,9 @@ class RecordingExporter(threading.Thread):
             selected_preview = os.path.join(preview_dir, file)
             break

+        if not selected_preview:
+            selected_preview = fallback_preview
+
         if not selected_preview:
             return ""
@@ -1,6 +1,9 @@
 """Tests for export progress tracking, broadcast, and FFmpeg parsing."""

 import io
+import os
+import shutil
+import tempfile
 import unittest
 from unittest.mock import MagicMock, patch

@@ -363,6 +366,121 @@ class TestBroadcastAggregation(unittest.TestCase):
         assert job.progress_percent == 33.0


+class TestGetDatetimeFromTimestamp(unittest.TestCase):
+    """Auto-generated export name should honor config.ui.timezone, not
+    fall back to the container's UTC clock when a timezone is configured.
+    """
+
+    def test_uses_configured_ui_timezone(self) -> None:
+        exporter = _make_exporter()
+        exporter.config.ui.timezone = "America/New_York"
+        # 2025-01-15 12:00:00 UTC is 07:00:00 EST
+        assert exporter.get_datetime_from_timestamp(1736942400) == "2025-01-15 07:00:00"
+
+    def test_falls_back_to_local_when_timezone_unset(self) -> None:
+        exporter = _make_exporter()
+        exporter.config.ui.timezone = None
+        # No assertion on the exact wall-clock value — just confirm no
+        # exception and that pytz isn't required when the field is unset.
+        assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
+
+    def test_invalid_timezone_falls_back_to_local(self) -> None:
+        exporter = _make_exporter()
+        exporter.config.ui.timezone = "Not/A_Real_Zone"
+        assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
+
+
+class TestSaveThumbnailFromPreviewFrames(unittest.TestCase):
+    """Short exports in the current hour can fall between preview frame
+    writes (1-2 fps during activity, every 30s otherwise). When no frame
+    falls inside the export window, save_thumbnail should fall back to
+    the most recent prior frame instead of returning no thumbnail."""
+
+    def setUp(self) -> None:
+        self.tmp_root = tempfile.mkdtemp(prefix="frigate_thumb_test_")
+        self.preview_dir = os.path.join(self.tmp_root, "cache", "preview_frames")
+        self.export_clips = os.path.join(self.tmp_root, "clips", "export")
+        os.makedirs(self.preview_dir, exist_ok=True)
+        os.makedirs(self.export_clips, exist_ok=True)
+
+    def tearDown(self) -> None:
+        shutil.rmtree(self.tmp_root, ignore_errors=True)
+
+    def _write_frame(self, camera: str, frame_time: float) -> str:
+        path = os.path.join(self.preview_dir, f"preview_{camera}-{frame_time}.webp")
+        with open(path, "wb") as f:
+            f.write(b"fake-webp-bytes")
+        return path
+
+    def _make_short_current_hour_exporter(self) -> RecordingExporter:
+        # Use a "now-ish" timestamp so save_thumbnail's start-of-hour
+        # comparison takes the current-hour branch (preview frames).
+        import datetime
+
+        now = datetime.datetime.now(datetime.timezone.utc).timestamp()
+        exporter = _make_exporter()
+        exporter.export_id = "thumb_short"
+        exporter.start_time = now
+        exporter.end_time = now + 3
+        return exporter
+
+    def test_short_export_falls_back_to_prior_preview_frame(self) -> None:
+        exporter = self._make_short_current_hour_exporter()
+        # Most recent preview frame is 10s before the export window
+        prior = self._write_frame(exporter.camera, exporter.start_time - 10.0)
+        thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
+
+        with (
+            patch(
+                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
+            ),
+            patch(
+                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
+            ),
+        ):
+            result = exporter.save_thumbnail(exporter.export_id)
+
+        assert result == thumb_target
+        assert os.path.isfile(thumb_target)
+        with open(thumb_target, "rb") as f, open(prior, "rb") as src:
+            assert f.read() == src.read()
+
+    def test_returns_empty_when_no_preview_frames_exist(self) -> None:
+        exporter = self._make_short_current_hour_exporter()
+
+        with (
+            patch(
+                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
+            ),
+            patch(
+                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
+            ),
+        ):
+            result = exporter.save_thumbnail(exporter.export_id)
+
+        assert result == ""
+
+    def test_prefers_in_window_frame_over_prior_frame(self) -> None:
+        exporter = self._make_short_current_hour_exporter()
+        self._write_frame(exporter.camera, exporter.start_time - 10.0)
+        in_window = self._write_frame(exporter.camera, exporter.start_time + 1.0)
+        thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
+
+        with (
+            patch(
+                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
+            ),
+            patch(
+                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
+            ),
+        ):
+            result = exporter.save_thumbnail(exporter.export_id)
+
+        assert result == thumb_target
+        with open(thumb_target, "rb") as f, open(in_window, "rb") as src:
+            assert f.read() == src.read()
+
+
 class TestSchedulesCleanup(unittest.TestCase):
     def test_schedule_job_cleanup_removes_after_delay(self) -> None:
         config = MagicMock()
frigate/test/test_ws_auth.py (new file, 166 lines)
@@ -0,0 +1,166 @@
+"""Tests for WebSocket authorization checks."""
+
+import unittest
+
+from frigate.comms.ws import _check_ws_authorization
+from frigate.const import INSERT_MANY_RECORDINGS, UPDATE_CAMERA_ACTIVITY
+
+
+class TestCheckWsAuthorization(unittest.TestCase):
+    """Tests for the _check_ws_authorization pure function."""
+
+    DEFAULT_SEPARATOR = ","
+
+    # --- IPC topic blocking (unconditional, regardless of role) ---
+
+    def test_ipc_topic_blocked_for_admin(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                INSERT_MANY_RECORDINGS, "admin", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_ipc_topic_blocked_for_viewer(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                UPDATE_CAMERA_ACTIVITY, "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_ipc_topic_blocked_when_no_role(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                INSERT_MANY_RECORDINGS, None, self.DEFAULT_SEPARATOR
+            )
+        )
+
+    # --- Viewer allowed topics ---
+
+    def test_viewer_can_send_on_connect(self):
+        self.assertTrue(
+            _check_ws_authorization("onConnect", "viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_viewer_can_send_model_state(self):
+        self.assertTrue(
+            _check_ws_authorization("modelState", "viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_viewer_can_send_audio_transcription_state(self):
+        self.assertTrue(
+            _check_ws_authorization(
+                "audioTranscriptionState", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_viewer_can_send_birdseye_layout(self):
+        self.assertTrue(
+            _check_ws_authorization("birdseyeLayout", "viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_viewer_can_send_embeddings_reindex_progress(self):
+        self.assertTrue(
+            _check_ws_authorization(
+                "embeddingsReindexProgress", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    # --- Viewer blocked from admin topics ---
+
+    def test_viewer_blocked_from_restart(self):
+        self.assertFalse(
+            _check_ws_authorization("restart", "viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_viewer_blocked_from_camera_detect_set(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                "front_door/detect/set", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_viewer_blocked_from_camera_ptz(self):
+        self.assertFalse(
+            _check_ws_authorization("front_door/ptz", "viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_viewer_blocked_from_global_notifications_set(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                "notifications/set", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_viewer_blocked_from_camera_notifications_suspend(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                "front_door/notifications/suspend", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_viewer_blocked_from_arbitrary_unknown_topic(self):
+        self.assertFalse(
+            _check_ws_authorization(
+                "some_random_topic", "viewer", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    # --- Admin access ---
+
+    def test_admin_can_send_restart(self):
+        self.assertTrue(
+            _check_ws_authorization("restart", "admin", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_admin_can_send_camera_detect_set(self):
+        self.assertTrue(
+            _check_ws_authorization(
+                "front_door/detect/set", "admin", self.DEFAULT_SEPARATOR
+            )
+        )
+
+    def test_admin_can_send_camera_ptz(self):
+        self.assertTrue(
+            _check_ws_authorization("front_door/ptz", "admin", self.DEFAULT_SEPARATOR)
+        )
+
+    # --- Comma-separated roles ---
+
+    def test_comma_separated_admin_viewer_grants_admin(self):
+        self.assertTrue(
+            _check_ws_authorization("restart", "admin,viewer", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_comma_separated_viewer_admin_grants_admin(self):
+        self.assertTrue(
+            _check_ws_authorization("restart", "viewer,admin", self.DEFAULT_SEPARATOR)
+        )
+
+    def test_comma_separated_with_spaces(self):
+        self.assertTrue(
+            _check_ws_authorization("restart", "viewer, admin", self.DEFAULT_SEPARATOR)
+        )
+
+    # --- Custom separator ---
+
+    def test_pipe_separator(self):
+        self.assertTrue(_check_ws_authorization("restart", "viewer|admin", "|"))
+
+    def test_pipe_separator_no_admin(self):
+        self.assertFalse(_check_ws_authorization("restart", "viewer|editor", "|"))
+
+    # --- No role header (fail-closed) ---
+
+    def test_no_role_header_blocks_admin_topics(self):
+        self.assertFalse(
+            _check_ws_authorization("restart", None, self.DEFAULT_SEPARATOR)
+        )
+
+    def test_no_role_header_allows_viewer_topics(self):
+        self.assertTrue(
+            _check_ws_authorization("onConnect", None, self.DEFAULT_SEPARATOR)
+        )
+
+
+if __name__ == "__main__":
+    unittest.main()
@@ -1,88 +1,95 @@
 {
   "cells": [
+    {
+      "cell_type": "markdown",
+      "metadata": {
+        "id": "runtime-notice"
+      },
+      "source": [
+        "**Before running:** go to **Runtime → Change runtime type → Fallback runtime version: 2025.07** (Python 3.11). The current Colab default (Python 3.12+) is incompatible with `super-gradients`."
+      ]
+    },
     {
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
         "id": "rmuF9iKWTbdk"
       },
       "outputs": [],
       "source": [
+        "! pip install -q \"jedi>=0.16\"\n",
         "! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
       ]
     },
     {
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
         "id": "NiRCt917KKcL"
       },
       "outputs": [],
-      "source": [
-        "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/pretrained_models.py\n",
-        "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
-      ]
+      "source": "! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/pretrained_models.py\n! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
     },
     {
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
         "id": "dTB0jy_NNSFz"
       },
       "outputs": [],
       "source": [
         "from super_gradients.common.object_names import Models\n",
         "from super_gradients.conversion import DetectionOutputFormatMode\n",
         "from super_gradients.training import models\n",
         "\n",
         "model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
       ]
     },
     {
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
         "id": "GymUghyCNXem"
       },
       "outputs": [],
       "source": [
         "# export the model for compatibility with Frigate\n",
         "\n",
         "model.export(\"yolo_nas_s.onnx\",\n",
         "             output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
         "             max_predictions_per_image=20,\n",
         "             num_pre_nms_predictions=300,\n",
         "             confidence_threshold=0.4,\n",
         "             input_image_shape=(320,320),\n",
         "             )"
       ]
     },
     {
       "cell_type": "code",
       "execution_count": null,
       "metadata": {
         "id": "uBhXV5g4Nh42"
       },
       "outputs": [],
       "source": [
         "from google.colab import files\n",
         "\n",
         "files.download('yolo_nas_s.onnx')"
       ]
     }
   ],
   "metadata": {
     "colab": {
       "provenance": []
     },
     "kernelspec": {
       "display_name": "Python 3",
       "name": "python3"
     },
     "language_info": {
       "name": "python"
     }
   },
   "nbformat": 4,
   "nbformat_minor": 0
 }
@@ -69,17 +69,18 @@ test.describe("Navigation — conditional items @critical", () => {
     ).toBeVisible();
   });

-  test("/chat is hidden when genai.model is none (desktop)", async ({
+  test("/chat is hidden when no agent has the chat role (desktop)", async ({
     frigateApp,
   }) => {
     test.skip(frigateApp.isMobile, "Desktop sidebar");
     await frigateApp.installDefaults({
       config: {
         genai: {
-          enabled: false,
-          provider: "ollama",
-          model: "none",
-          base_url: "",
+          descriptions_only: {
+            provider: "ollama",
+            model: "llava",
+            roles: ["descriptions"],
+          },
         },
       },
     });

@@ -89,12 +90,20 @@ test.describe("Navigation — conditional items @critical", () => {
     ).toHaveCount(0);
   });

-  test("/chat is visible when genai.model is set (desktop)", async ({
+  test("/chat is visible when an agent has the chat role (desktop)", async ({
     frigateApp,
   }) => {
     test.skip(frigateApp.isMobile, "Desktop sidebar");
     await frigateApp.installDefaults({
-      config: { genai: { enabled: true, model: "llava" } },
+      config: {
+        genai: {
+          chat_agent: {
+            provider: "ollama",
+            model: "llava",
+            roles: ["chat"],
+          },
+        },
+      },
     });
     await frigateApp.goto("/");
     await expect(
@@ -242,7 +242,7 @@
   "done": "Fet",
   "disabled": "Deshabilitat",
   "disable": "Deshabilitar",
-  "save": "Guardar",
+  "save": "Desa",
   "copy": "Copiar",
   "back": "Enrere",
   "pictureInPicture": "Imatge en Imatge",
@@ -38,8 +38,8 @@
   "s": "{{time}}s",
   "minute_one": "{{time}}minuutti",
   "minute_other": "{{time}}minuuttia",
-  "second_one": "{{time}}sekuntti",
-  "second_other": "{{time}}sekunttia",
+  "second_one": "{{time}} sekunti",
+  "second_other": "{{time}} sekuntia",
   "formattedTimestampHourMinute": {
     "24hour": "HH:mm"
   },
@@ -1,5 +1,5 @@
 {
-  "alerts": "Hälytyset",
+  "alerts": "Hälytykset",
   "empty": {
     "detection": "Ei havaintoja tarkastettavaksi",
     "motion": "Ei liiketietoja",
@@ -8,7 +8,7 @@
   "general": "Yleiset asetukset - Frigate",
   "frigatePlus": "Frigate+ asetukset - Frigate",
   "object": "Virheenjäljitys - Frigate",
-  "authentication": "Autentikointiuasetukset - Frigate",
+  "authentication": "Autentikointiasetukset - Frigate",
   "notifications": "Ilmoitusasetukset - Frigate",
   "enrichments": "Laajennusasetukset – Frigate",
   "cameraManagement": "Hallitse Kameroita - Frigate",
@@ -185,7 +185,8 @@
     "classification": "분류",
     "chat": "채팅",
     "actions": "작업",
-    "profiles": "프로필"
+    "profiles": "프로필",
+    "features": "기능"
   },
   "unit": {
     "speed": {
@@ -81,6 +81,7 @@
     "zones": "구역 (Zones)",
     "mask": "마스크",
     "motion": "움직임",
-    "regions": "영역 (Regions)"
+    "regions": "영역 (Regions)",
+    "paths": "경로"
   }
 }
@@ -1,7 +1,8 @@
 {
   "submitFrigatePlus": {
     "submit": "제출",
-    "title": "이 프레임을 Frigate+에 제출하시겠습니까?"
+    "title": "이 프레임을 Frigate+에 제출하시겠습니까?",
+    "previewError": "스냅샷 미리보기를 불러올 수 없습니다. 현재 녹화된 영상을 사용할 수 없을 수 있습니다."
   },
   "stats": {
     "bandwidth": {
@@ -38,5 +38,26 @@
     "enabled_in_config": {
       "label": "원래 오디오 상태"
     }
-  }
+  },
+  "mqtt": {
+    "label": "MQTT"
+  },
+  "notifications": {
+    "label": "알림",
+    "enabled": {
+      "label": "알림 활성화"
+    },
+    "email": {
+      "label": "알림 이메일",
+      "description": "푸시 알림에 사용되거나 특정 알림 제공업체에서 요구하는 이메일 주소입니다."
+    },
+    "cooldown": {
+      "label": "알림 재발송 대기 시간",
+      "description": "수신자에게 스팸 메일을 보내는 것을 방지하기 위해 알림 재발송 대기시간(초)을 설정합니다."
+    },
+    "enabled_in_config": {
+      "label": "초기 알림 활성 상태",
+      "description": "초기 구성에서 알림이 활성화되었는지 여부를 나타냅니다."
+    }
+  }
 }
@@ -60,7 +60,140 @@
       "description": "true로 설정하면 시작 시 관리자 비밀번호를 재설정하고 새 비밀번호를 로그에 출력합니다."
     },
     "cookie_name": {
-      "label": "JWT 쿠키 이름"
+      "label": "JWT 쿠키 이름",
+      "description": "자체 인증용 JWT 토큰을 저장할 쿠키 이름입니다."
+    },
+    "cookie_secure": {
+      "label": "보안 쿠키 설정",
+      "description": "인증 쿠키에 보안 플래그를 설정합니다. TLS를 사용하는 경우 'True'로 설정해야 합니다."
+    },
+    "session_length": {
+      "label": "세션 길이",
+      "description": "JWT 기반 세션의 유지 시간(초)입니다."
+    },
+    "refresh_time": {
+      "label": "세션 갱신 주기",
+      "description": "세션 만료까지 남은 시간이 몇 초 남지 않을 경우, 세션 시간을 다시 최대로 연장합니다."
+    },
+    "failed_login_rate_limit": {
+      "label": "로그인 실패 제한",
+      "description": "무차별 대입 공격을 줄이기 위해 로그인 시도 실패 횟수를 제한하는 규칙을 적용합니다."
+    },
+    "trusted_proxies": {
+      "label": "신뢰할 수 있는 프록시",
+      "description": "속도 제한을 위해 클라이언트 IP를 결정할 때 사용되는 신뢰할 수 있는 프록시 IP 목록입니다."
+    },
+    "hash_iterations": {
+      "label": "해시 반복 횟수",
+      "description": "사용자 암호를 해싱할 때 사용할 PBKDF2-SHA256 반복 횟수입니다."
+    },
+    "roles": {
+      "label": "역할 할당",
+      "description": "역할별로 접근 가능한 카메라 목록을 매핑합니다. 목록이 비어 있으면 해당 역할에 모든 카메라 접근 권한을 부여합니다."
+    },
+    "admin_first_time_login": {
+      "label": "관리자 초기 로그인 설정",
+      "description": "활성화 시, 관리자 비밀번호 초기화 후 로그인 방법 안내 링크가 로그인 페이지에 표시됩니다."
     }
-  }
+  },
+  "database": {
+    "label": "데이터베이스",
+    "description": "추적된 객체 및 녹화 메타데이터를 저장하는 SQLite 데이터베이스 설정입니다.",
+    "path": {
+      "label": "데이터베이스 경로",
+      "description": "Frigate SQLite 데이터베이스 파일이 저장될 파일 시스템 경로입니다."
+    }
+  },
+  "go2rtc": {
+    "label": "go2rtc",
+    "description": "라이브 스트림 중계 및 번역에 사용되는 통합 go2rtc 리스트리밍 서비스 설정입니다."
+  },
+  "mqtt": {
+    "label": "MQTT",
+    "description": "MQTT 브로커에 원격 측정 데이터, 스냅샷 및 이벤트 세부 정보를 연결하고 게시하기 위한 설정입니다.",
+    "enabled": {
+      "label": "MQTT 활성화",
+      "description": "상태, 이벤트 및 스냅샷에 대한 MQTT 통합을 활성화 또는 비활성화합니다."
+    },
+    "host": {
+      "label": "MQTT 호스트",
+      "description": "MQTT 브로커의 호스트 이름 또는 IP 주소입니다."
+    },
+    "port": {
+      "label": "MQTT 포트",
+      "description": "MQTT 브로커의 포트 번호입니다 (일반적인 포트는 1883입니다)."
+    },
+    "topic_prefix": {
+      "label": "토픽 접두사",
+      "description": "Frigate의 모든 MQTT 메시지에 사용할 접두사입니다. 여러 대의 Frigate를 실행하는 경우 각각 고유한 이름을 사용해야 합니다."
+    },
+    "client_id": {
+      "label": "클라이언트 ID",
+      "description": "MQTT 브로커 연결 시 사용하는 클라이언트 식별자입니다. 인스턴스마다 고유한 이름을 사용해야 합니다."
+    },
+    "stats_interval": {
+      "label": "통계 간격",
+      "description": "시스템 및 카메라 통계 정보를 MQTT로 전송하는 간격(초)입니다."
+    },
+    "user": {
+      "label": "MQTT 사용자 이름",
+      "description": "MQTT 사용자 이름(선택 사항)입니다. 환경 변수나 비밀 값(Secrets)을 통해 입력할 수 있습니다."
+    },
+    "password": {
+      "label": "MQTT 비밀번호",
+      "description": "MQTT 비밀번호(선택 사항)입니다. 환경 변수나 비밀 값(Secrets)을 통해 입력할 수 있습니다."
+    },
+    "tls_ca_certs": {
+      "label": "TLS CA 인증서",
+      "description": "브로커와의 TLS 연결에 사용할 CA 인증서 경로(자체 서명 인증서의 경우)."
+    },
+    "tls_client_cert": {
+      "label": "클라이언트 인증서",
+      "description": "TLS 상호 인증을 위한 클라이언트 인증서 경로입니다. 클라이언트 인증서를 사용할 때는 사용자 이름/암호를 설정하지 마십시오."
+    },
+    "tls_client_key": {
+      "label": "클라이언트 키",
+      "description": "클라이언트 인증서의 개인 키 경로입니다."
+    },
+    "tls_insecure": {
+      "label": "TLS 비보안 모드",
+      "description": "호스트 이름 확인을 건너뛰어 안전하지 않은 TLS 연결을 허용합니다(권장하지 않음)."
+    },
+    "qos": {
+      "label": "MQTT QoS",
+      "description": "MQTT 메시지 전송 및 구독에 대한 서비스 품질(QoS) 등급입니다 (0, 1, 2 중 선택)."
+    }
+  },
+  "notifications": {
+    "label": "알림",
+    "description": "모든 카메라에 대한 알림을 활성화하고 제어하는 설정입니다. 카메라별로 설정을 재정의할 수 있습니다.",
+    "enabled": {
+      "label": "알림 활성화",
+      "description": "모든 카메라에 대한 알림을 활성화 또는 비활성화할 수 있으며, 카메라별로 설정을 재정의할 수 있습니다."
+    },
+    "email": {
+      "label": "알림 이메일",
+      "description": "푸시 알림에 사용되거나 특정 알림 제공업체에서 요구하는 이메일 주소입니다."
+    },
+    "cooldown": {
+      "label": "알림 재발송 대기 시간",
+      "description": "수신자에게 스팸 메일을 보내는 것을 방지하기 위해 알림 재발송 대기시간(초)을 설정합니다."
+    },
+    "enabled_in_config": {
+      "label": "초기 알림 활성 상태",
+      "description": "초기 구성에서 알림이 활성화되었는지 여부를 나타냅니다."
+    }
+  },
+  "networking": {
+    "label": "네트워킹",
+    "description": "Frigate 엔드포인트에 대한 IPv6 활성화와 같은 네트워크 관련 설정입니다.",
+    "ipv6": {
+      "label": "IPv6 구성",
+      "description": "Frigate 네트워크 서비스에 대한 IPv6 관련 설정입니다.",
+      "enabled": {
+        "label": "IPv6 활성화",
+        "description": "Frigate 서비스(API 및 UI)에 IPv6 지원이 필요한 경우 활성화하십시오."
+      }
+    }
+  }
 }
@@ -59,7 +59,7 @@
   "toast": {
     "success": "Eksport startet. Se filen på eksportsiden.",
     "error": {
-      "failed": "Kune ikke legge eksport i kø: {{error}}",
+      "failed": "Kunne ikke legge eksport i kø: {{error}}",
       "noVaildTimeSelected": "Ingen gyldig tidsperiode valgt",
       "endTimeMustAfterStartTime": "Sluttid må være etter starttid"
     },
@@ -1 +1,6 @@
-{}
+{
+  "label": "Kamera konfiguration",
+  "name": {
+    "label": "Kameranamn"
+  }
+}
@@ -1 +1,5 @@
-{}
+{
+  "version": {
+    "label": "Nuvarande konfigurationsversion"
+  }
+}
@@ -1 +1,7 @@
-{}
+{
+  "audio": {
+    "global": {
+      "sensitivity": "Global känslighet"
+    }
+  }
+}
@@ -1 +1,3 @@
-{}
+{
+  "minimum": "Måste minst vara {{limit}}"
+}
@ -93,6 +93,14 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
|
||||
useSWR<ProfilesApiResponse>("profiles");
|
||||
const logoutUrl = config?.proxy?.logout_url || "/api/logout";
|
||||
|
||||
const hasChatAgent = useMemo(
|
||||
() =>
|
||||
Object.values(config?.genai ?? {}).some((agent) =>
|
||||
agent?.roles?.includes("chat"),
|
||||
),
|
||||
[config?.genai],
|
||||
);
|
||||
|
||||
// languages
|
||||
|
||||
const languages = useMemo(() => {
|
||||
@ -511,7 +519,7 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
|
||||
<span>{t("menu.classification")}</span>
|
||||
</MenuItem>
|
||||
</Link>
|
||||
{config?.genai?.model !== "none" && (
|
||||
{hasChatAgent && (
|
||||
<Link to="/chat">
|
||||
<MenuItem
|
||||
className="flex w-full items-center p-2 text-sm"
@ -90,6 +90,10 @@ export default function SearchResultActions({
const handleDebugReplay = useCallback(
(event: SearchResult) => {
setIsStarting(true);
const toastId = toast.loading(
t("dialog.starting", { ns: "views/replay" }),
{ position: "top-center" },
);

axios
.post("debug_replay/start", {
@ -100,6 +104,7 @@ export default function SearchResultActions({
.then((response) => {
if (response.status === 200) {
toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
id: toastId,
position: "top-center",
});
navigate("/replay");
@ -115,6 +120,7 @@ export default function SearchResultActions({
toast.error(
t("dialog.toast.alreadyActive", { ns: "views/replay" }),
{
id: toastId,
position: "top-center",
closeButton: true,
dismissible: false,
@ -129,6 +135,7 @@ export default function SearchResultActions({
);
} else {
toast.error(t("dialog.toast.error", { error: errorMessage }), {
id: toastId,
position: "top-center",
});
}

@ -1,4 +1,6 @@
import { useCallback, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { flushSync } from "react-dom";
import { throttle } from "lodash";
import { Slider } from "@/components/ui/slider";
import { Button } from "@/components/ui/button";
import { Popover, PopoverContent, PopoverTrigger } from "../../ui/popover";
@ -19,11 +21,21 @@ import { useIsAdmin } from "@/hooks/use-is-admin";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { Link } from "react-router-dom";

const SLIDER_DRAG_THROTTLE_MS = 80;

type Props = {
className?: string;
// Optional side-effect invoked atomically with setAnnotationOffset (inside
// flushSync) so callers like the timeline panel can re-seek the video in the
// same React commit as the offset state update — preventing a one-frame
// overlay mismatch where annotationOffset has changed but currentTime has not.
onApplyOffset?: (newOffset: number) => void;
};

export default function AnnotationOffsetSlider({ className }: Props) {
export default function AnnotationOffsetSlider({
className,
onApplyOffset,
}: Props) {
const { annotationOffset, setAnnotationOffset, camera } = useDetailStream();
const isAdmin = useIsAdmin();
const { getLocaleDocUrl } = useDocDomain();
@ -31,31 +43,62 @@ export default function AnnotationOffsetSlider({ className }: Props) {
const { t } = useTranslation(["views/explore"]);
const [isSaving, setIsSaving] = useState(false);

const applyOffset = useCallback(
(newOffset: number) => {
flushSync(() => {
setAnnotationOffset(newOffset);
onApplyOffset?.(newOffset);
});
},
[setAnnotationOffset, onApplyOffset],
);

const throttledApplyOffset = useMemo(
() =>
throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
leading: true,
trailing: true,
}),
[applyOffset],
);

useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);

const handleChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
const valueMs = values[0];
setAnnotationOffset(valueMs);
throttledApplyOffset(values[0]);
},
[setAnnotationOffset],
[throttledApplyOffset],
);

const handleCommit = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
// Ensure the final value lands even if it would otherwise be discarded
// by the trailing edge of the throttle window.
throttledApplyOffset.cancel();
applyOffset(values[0]);
},
[throttledApplyOffset, applyOffset],
);

const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, next),
);
});
const next = Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
);
throttledApplyOffset.cancel();
applyOffset(next);
},
[setAnnotationOffset],
[annotationOffset, applyOffset, throttledApplyOffset],
);

const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
throttledApplyOffset.cancel();
applyOffset(0);
}, [applyOffset, throttledApplyOffset]);

const save = useCallback(async () => {
setIsSaving(true);
@ -130,6 +173,7 @@ export default function AnnotationOffsetSlider({ className }: Props) {
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleChange}
onValueCommit={handleCommit}
/>
</div>
<Button
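
Editor's note: the slider changes above follow a drag-throttle-then-commit pattern: updates are throttled while dragging, and on release any pending trailing call is cancelled so the final value is applied exactly once. A minimal standalone sketch of that pattern, not part of this change; the hook name and constant are illustrative:

```ts
import { useCallback, useEffect, useMemo } from "react";
import { throttle } from "lodash";

// Illustrative constant; the diff uses SLIDER_DRAG_THROTTLE_MS = 80.
const DRAG_THROTTLE_MS = 80;

// Throttle updates while dragging, then cancel any pending trailing call and
// apply the final value exactly once when the user releases the slider.
export function useThrottledCommit(apply: (value: number) => void) {
  const throttledApply = useMemo(
    () => throttle(apply, DRAG_THROTTLE_MS, { leading: true, trailing: true }),
    [apply],
  );

  // Drop any pending trailing call if the component unmounts.
  useEffect(() => () => throttledApply.cancel(), [throttledApply]);

  const onDrag = useCallback(
    (value: number) => {
      throttledApply(value);
    },
    [throttledApply],
  );

  const onCommit = useCallback(
    (value: number) => {
      // Without cancel(), a trailing call with a stale value could fire after
      // the commit and overwrite it.
      throttledApply.cancel();
      apply(value);
    },
    [throttledApply, apply],
  );

  return { onDrag, onCommit };
}
```
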
@ -1,7 +1,9 @@
import { Event } from "@/types/event";
import { FrigateConfig } from "@/types/frigateConfig";
import axios from "axios";
import { useCallback, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { flushSync } from "react-dom";
import { throttle } from "lodash";
import { LuExternalLink, LuMinus, LuPlus } from "react-icons/lu";
import { Link } from "react-router-dom";
import { toast } from "sonner";
@ -19,6 +21,8 @@ import {
ANNOTATION_OFFSET_STEP,
} from "@/lib/const";

const SLIDER_DRAG_THROTTLE_MS = 80;

type AnnotationSettingsPaneProps = {
event: Event;
annotationOffset: number;
@ -38,30 +42,64 @@ export function AnnotationSettingsPane({

const [isLoading, setIsLoading] = useState(false);

const handleSliderChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
setAnnotationOffset(values[0]);
},
[setAnnotationOffset],
);

const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, next),
);
// flushSync ensures setAnnotationOffset commits synchronously so the
// useLayoutEffect in TrackingDetails (which seeks the video and sets
// currentTime in response) runs before the browser paints — preventing a
// one-frame overlay mismatch where annotationOffset has changed but
// currentTime has not.
const applyOffset = useCallback(
(newOffset: number) => {
flushSync(() => {
setAnnotationOffset(newOffset);
});
},
[setAnnotationOffset],
);

const throttledApplyOffset = useMemo(
() =>
throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
leading: true,
trailing: true,
}),
[applyOffset],
);
useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);

const handleSliderChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
throttledApplyOffset(values[0]);
},
[throttledApplyOffset],
);

const handleSliderCommit = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
throttledApplyOffset.cancel();
applyOffset(values[0]);
},
[throttledApplyOffset, applyOffset],
);

const stepOffset = useCallback(
(delta: number) => {
const next = Math.max(
ANNOTATION_OFFSET_MIN,
Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
);
throttledApplyOffset.cancel();
applyOffset(next);
},
[annotationOffset, applyOffset, throttledApplyOffset],
);

const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
throttledApplyOffset.cancel();
applyOffset(0);
}, [applyOffset, throttledApplyOffset]);

const saveToConfig = useCallback(async () => {
if (!config || !event) return;
@ -143,6 +181,7 @@ export function AnnotationSettingsPane({
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleSliderChange}
onValueCommit={handleSliderCommit}
className="flex-1"
/>
<Button

@ -73,7 +73,7 @@ export default function DetailActionsMenu({
}

return (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

@ -957,8 +957,9 @@ function ObjectDetailsTab({
toast.success(
t("details.item.toast.success.regenerate", {
provider: capitalizeAll(
config?.genai.provider.replaceAll("_", " ") ??
t("generativeAI"),
Object.values(config?.genai ?? {})
.find((agent) => agent?.roles?.includes("descriptions"))
?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
}),
{
@ -976,8 +977,9 @@ function ObjectDetailsTab({
toast.error(
t("details.item.toast.error.regenerate", {
provider: capitalizeAll(
config?.genai.provider.replaceAll("_", " ") ??
t("generativeAI"),
Object.values(config?.genai ?? {})
.find((agent) => agent?.roles?.includes("descriptions"))
?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
errorMessage,
}),
}),
@ -1,5 +1,13 @@
import useSWR from "swr";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import {
useCallback,
useEffect,
useLayoutEffect,
useMemo,
useRef,
useState,
} from "react";
import { flushSync } from "react-dom";
import { useResizeObserver } from "@/hooks/resize-observer";
import { useFullscreen } from "@/hooks/use-fullscreen";
import { Event } from "@/types/event";
@ -389,7 +397,12 @@ export function TrackingDetails({

// When the pinned timestamp or offset changes, re-seek the video and
// explicitly update currentTime so the overlay shows the pinned event's box.
useEffect(() => {
// useLayoutEffect + flushSync force the setCurrentTime commit to land before
// the browser paints, so the overlay never shows a frame where
// annotationOffset has changed but currentTime has not — that mismatch would
// resolve effectiveCurrentTime away from the pinned detect timestamp and
// make the bounding box disappear or jump for one frame.
useLayoutEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!isAnnotationSettingsOpen || pinned == null) return;
if (!videoRef.current || displaySource !== "video") return;
@ -398,10 +411,9 @@ export function TrackingDetails({
const relativeTime = timestampToVideoTime(targetTimeRecord);
videoRef.current.currentTime = relativeTime;

// Explicitly update currentTime state so the overlay's effectiveCurrentTime
// resolves back to the pinned detect timestamp:
// effectiveCurrentTime = targetTimeRecord - annotationOffset/1000 = pinned
setCurrentTime(targetTimeRecord);
flushSync(() => {
setCurrentTime(targetTimeRecord);
});
}, [
isAnnotationSettingsOpen,
annotationOffset,
@ -1204,7 +1216,11 @@ function LifecycleIconRow({
<div className="flex flex-row items-center gap-3">
<div className="whitespace-nowrap">{formattedEventTimestamp}</div>
{isAdmin && (config?.plus?.enabled || item.data.box) && (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu
modal={false}
open={isOpen}
onOpenChange={setIsOpen}
>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

@ -126,13 +126,20 @@ export default function DetailStream({
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [controlsExpanded]);

// Re-seek on annotation offset change while settings panel is open
useEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!controlsExpanded || pinned == null) return;
const recordTime = pinned + annotationOffset / 1000;
onSeek(recordTime, false);
}, [controlsExpanded, annotationOffset, onSeek]);
// The slider invokes this atomically with setAnnotationOffset (inside the
// same flushSync) so currentTime advances in the same React commit as the
// offset. Without this, the overlay would render one frame with the new
// offset but the old currentTime, briefly resolving effectiveCurrentTime to
// the wrong detect-stream timestamp and making the bounding box vanish or
// jump.
const handleApplyOffset = useCallback(
(newOffset: number) => {
const pinned = pinnedDetectTimestampRef.current;
if (!controlsExpanded || pinned == null) return;
onSeek(pinned + newOffset / 1000, false);
},
[controlsExpanded, onSeek],
);

// Ensure we initialize the active review when reviewItems first arrive.
// This helps when the component mounts while the video is already
@ -337,7 +344,7 @@ export default function DetailStream({
</button>
{controlsExpanded && (
<div className="space-y-4 px-3 pb-5 pt-2">
<AnnotationOffsetSlider />
<AnnotationOffsetSlider onApplyOffset={handleApplyOffset} />
<Separator />
<div className="flex flex-col gap-1">
<div className="flex items-center justify-between">

@ -53,6 +53,10 @@ export default function EventMenu({
const handleDebugReplay = useCallback(
(event: Event) => {
setIsStarting(true);
const toastId = toast.loading(
t("dialog.starting", { ns: "views/replay" }),
{ position: "top-center" },
);

axios
.post("debug_replay/start", {
@ -63,6 +67,7 @@ export default function EventMenu({
.then((response) => {
if (response.status === 200) {
toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
id: toastId,
position: "top-center",
});
navigate("/replay");
@ -78,6 +83,7 @@ export default function EventMenu({
toast.error(
t("dialog.toast.alreadyActive", { ns: "views/replay" }),
{
id: toastId,
position: "top-center",
closeButton: true,
dismissible: false,
@ -92,6 +98,7 @@ export default function EventMenu({
);
} else {
toast.error(t("dialog.toast.error", { error: errorMessage }), {
id: toastId,
position: "top-center",
});
}
@ -106,7 +113,7 @@ export default function EventMenu({
return (
<>
<span tabIndex={0} className="sr-only" />
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />
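
Editor's note: this debug-replay start flow duplicates the one in SearchResultActions earlier in this diff; both post to `debug_replay/start` and drive the same loading/success/error toasts before navigating to `/replay`. A sketch only of how a shared hook could look, with the request payload (elided in this diff) passed through opaquely and the already-active error branch omitted; the hook name is illustrative, not part of the change:

```ts
import { useCallback, useState } from "react";
import axios from "axios";
import { toast } from "sonner";
import { useNavigate } from "react-router-dom";
import { useTranslation } from "react-i18next";

// Shared start-replay flow: loading toast, POST, success/error toast, navigate.
export function useDebugReplayStarter() {
  const { t } = useTranslation(["views/replay"]);
  const navigate = useNavigate();
  const [isStarting, setIsStarting] = useState(false);

  const start = useCallback(
    (payload: Record<string, unknown>) => {
      setIsStarting(true);
      const toastId = toast.loading(t("dialog.starting"), {
        position: "top-center",
      });

      axios
        .post("debug_replay/start", payload)
        .then((response) => {
          if (response.status === 200) {
            toast.success(t("dialog.toast.success"), {
              id: toastId,
              position: "top-center",
            });
            navigate("/replay");
          }
        })
        .catch((error) => {
          toast.error(t("dialog.toast.error", { error: error.message }), {
            id: toastId,
            position: "top-center",
          });
        })
        .finally(() => setIsStarting(false));
    },
    [navigate, t],
  );

  return { start, isStarting };
}
```
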
@ -28,6 +28,14 @@ export default function useNavigation(
});
const isAdmin = useIsAdmin();

const hasChatAgent = useMemo(
() =>
Object.values(config?.genai ?? {}).some((agent) =>
agent?.roles?.includes("chat"),
),
[config?.genai],
);

return useMemo(
() =>
[
@ -89,9 +97,9 @@
icon: MdChat,
title: "menu.chat",
url: "/chat",
enabled: isDesktop && isAdmin && config?.genai?.model !== "none",
enabled: isDesktop && isAdmin && hasChatAgent,
},
] as NavData[],
[config?.face_recognition?.enabled, config?.genai?.model, variant, isAdmin],
[config?.face_recognition?.enabled, hasChatAgent, variant, isAdmin],
);
}

@ -382,6 +382,18 @@ export type AllGroupsStreamingSettings = {
[groupName: string]: GroupStreamingSettings;
};

export type GenAIRole = "chat" | "descriptions" | "embeddings";

export type GenAIAgentConfig = {
api_key?: string;
base_url?: string;
model: string;
provider?: string;
roles: GenAIRole[];
provider_options?: Record<string, unknown>;
runtime_options?: Record<string, unknown>;
};

export interface FrigateConfig {
version: string;
safe_mode: boolean;
@ -478,12 +490,7 @@ export interface FrigateConfig {
retry_interval: number;
};

genai: {
provider: string;
base_url?: string;
api_key?: string;
model: string;
};
genai: Record<string, GenAIAgentConfig>;

go2rtc: {
streams: Record<string, string | string[]>;
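
Editor's note: with this type change, `config.genai` goes from a single provider object to a map of named agents, each declaring its roles. Call sites that previously read `config.genai.provider` or `config.genai.model` now select an agent by role first, as seen in the ObjectDetailsTab and navigation hunks above. A sketch of the migration under that assumption; the function name is illustrative, not part of the change:

```ts
import type {
  FrigateConfig,
  GenAIAgentConfig,
  GenAIRole,
} from "@/types/frigateConfig";

// Before: a single provider was assumed.
//   const provider = config.genai.provider;
// After: pick the agent that owns the role you need.
function agentForRole(
  config: FrigateConfig | undefined,
  role: GenAIRole,
): GenAIAgentConfig | undefined {
  return Object.values(config?.genai ?? {}).find((agent) =>
    agent.roles.includes(role),
  );
}

// Example: gate UI on a chat-capable agent (see the hasAgentWithRole sketch
// earlier) and label toasts by the descriptions agent's provider.
// const chatAgent = agentForRole(config, "chat");
// const descriptionsProvider = agentForRole(config, "descriptions")?.provider;
```
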