mirror of https://github.com/blakeblackshear/frigate.git
synced 2026-05-07 22:15:28 +03:00

Compare commits: 7f04c67d1b ... b2a1ac67b2 (1 commit: b2a1ac67b2)
.github/pull_request_template.md (vendored) | 2

@@ -26,7 +26,7 @@ _Please read the [contributing guidelines](https://github.com/blakeblackshear/fr
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to discussion with maintainers (**required** for any large or "planned" features):
- Link to discussion with maintainers (**required** for large/pinned features):

## For new features
.github/workflows/pr_template_check.yml (vendored) | 2

@@ -13,7 +13,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Check PR description against template
        uses: actions/github-script@v9
        uses: actions/github-script@v7
        with:
          script: |
            const maintainers = ['blakeblackshear', 'NickM-27', 'hawkeye217', 'dependabot[bot]', 'weblate'];
.github/workflows/pull_request.yml (vendored) | 2

@@ -72,7 +72,7 @@ jobs:
        run: npm run e2e
        working-directory: ./web
      - name: Upload test artifacts
        uses: actions/upload-artifact@v7
        uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
.github/workflows/stale.yml (vendored) | 6

@@ -18,9 +18,9 @@ jobs:
          close-issue-message: ""
          days-before-stale: 30
          days-before-close: 3
          exempt-draft-pr: false
          exempt-issue-labels: "planned,security"
          exempt-pr-labels: "planned,security,dependencies"
          exempt-draft-pr: true
          exempt-issue-labels: "pinned,security"
          exempt-pr-labels: "pinned,security,dependencies"
          operations-per-run: 120
      - name: Print outputs
        env:
.gitignore (vendored) | 5

@@ -22,8 +22,3 @@ core
!/web/**/*.ts
.idea/*
.ipynb_checkpoints

# Auto-generated Docker Compose Generator config files
docs/src/components/DockerComposeGenerator/config/devices.ts
docs/src/components/DockerComposeGenerator/config/hardware.ts
docs/src/components/DockerComposeGenerator/config/ports.ts
@@ -10,14 +10,11 @@ If you've found a bug and want to fix it, go for it. Link to the relevant issue

### New features

A pull request is more than just code — it's a request for the maintainers to review, integrate, and support the change long-term. We're selective about what we take on, and prioritize changes that align with the project's direction and can be responsibly maintained in the long term.
Every new feature adds scope that the maintainers must test, maintain, and support long-term. Before writing code for a new feature:

**Large or highly-requested features** raise the bar even higher. Popularity signals demand, but it doesn't pre-approve any particular implementation. The bigger the change, the higher the long-term cost, and the more important it is that we're aligned on scope and approach before any code is written. A large PR that lands without prior discussion is unlikely to be merged as-is, no matter how well it's implemented.

Before writing code for a new feature:

1. **Check for existing discussion.** Search [feature requests](https://github.com/blakeblackshear/frigate/issues) and [discussions](https://github.com/blakeblackshear/frigate/discussions) to see if it's been proposed or discussed. Feature requests tagged with "planned" are on our radar — we plan to get to them, but we don't maintain a public roadmap or timeline. Check in with us first if you have interest in contributing to one.
1. **Check for existing discussion.** Search [feature requests](https://github.com/blakeblackshear/frigate/issues) and [discussions](https://github.com/blakeblackshear/frigate/discussions) to see if it's been proposed or discussed. Pinned feature requests are on our radar — we plan to get to them, but we don't maintain a public roadmap or timeline. Check in with us first if you have interest in contributing to one.
2. **Start a discussion or feature request first.** This helps ensure your idea aligns with Frigate's direction before you invest time building it. Community interest in a feature request helps us gauge demand, though a great idea is a great idea even without a crowd behind it.
3. **Be open to "no".** We try to be thoughtful about what we take on, and sometimes that means saying no to good code if the feature isn't the right fit for the project. These calls are sometimes subjective, and we won't always get them right. We're happy to discuss and reconsider.

## AI usage policy

@@ -42,8 +39,6 @@ We're not trying to gatekeep how you write code. Use whatever tools make you pro

Some honest context: when we review a PR, we're not just evaluating whether the code works today. We're evaluating whether we can maintain it, debug it, and extend it long-term — often without the original author's involvement. Code that the author doesn't deeply understand is code that nobody understands, and that's a liability.

One more thing worth saying directly: most maintainers already have access to the same AI tools you do. A PR that's entirely AI-generated — where the author can't explain the design, debug issues independently, or engage substantively in design discussions — doesn't offer something we couldn't produce ourselves. What makes a contribution genuinely valuable is the human judgment and domain understanding behind it, as well as the engagement during review that shapes it into something we can confidently take on long-term.

## Pull request guidelines

### Before submitting
@@ -32,14 +32,11 @@ RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf

FROM deps AS deps-prelim

COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
# install_deps.sh upgraded libstdc++6 from trixie for Battlemage; the matching
# -dev package must also come from trixie or apt refuses to satisfy it.
RUN echo "deb http://deb.debian.org/debian trixie main" > /etc/apt/sources.list.d/trixie.list && \
    apt-get update && \
RUN apt-get update && \
    apt-get install -y libnuma1 && \
    apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers && \
    apt-get install -qq -y -t trixie libstdc++-14-dev && \
    rm -f /etc/apt/sources.list.d/trixie.list && \
    # Install C++ standard library headers for HIPRTC kernel compilation fallback
    apt-get install -qq -y libstdc++-12-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /opt/frigate
@@ -19,7 +19,7 @@ Face recognition requires a one-time internet connection to download detection a

### Face Detection

When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/index.md#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.
When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is added to the [list of objects to track](../plus/#available-label-types) either globally or for a specific camera. This will allow face detection to run at the same time as object detection and be more efficient.

When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
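For the Frigate+ case above, a minimal config sketch of adding `face` to the tracked objects (the other label shown is a placeholder, not part of this diff):

```yaml
objects:
  track:
    - person
    - face # only for models that natively detect faces
```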
@@ -171,7 +171,7 @@ When choosing images to include in the face training set it is recommended to al

- If it is difficult to make out details in a person's face, it will not be helpful in training.
- Avoid images with extreme under/over-exposure.
- Avoid blurry / pixelated images.
- Avoid training on infrared (gray-scale). The models are trained on color images and will not be able to extract features from gray-scale images.
- Avoid training on infrared (gray-scale). The models are trained on color images and will be able to extract features from gray-scale images.
- Using images of people wearing hats / sunglasses may confuse the model.
- Do not upload too many similar images at the same time; it is recommended to train no more than 4-6 similar images for each person to avoid over-fitting.
@@ -201,7 +201,7 @@ Cloud Generative AI providers require an active internet connection to send imag

### Ollama Cloud

Ollama also supports [cloud models](https://ollama.com/cloud), where model inference is performed in the cloud. You can connect directly to Ollama Cloud by setting `base_url` to `https://ollama.com` and providing an API key. Alternatively, you can run Ollama locally and use a cloud model name so your local instance forwards requests to the cloud. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

#### Configuration

@@ -210,8 +210,7 @@ Ollama also supports [cloud models](https://ollama.com/cloud), where model infer

1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
   - Set **Provider** to `ollama`
   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`) or `https://ollama.com` for direct cloud inference
   - Set **API key** if required by your endpoint (e.g., when using `https://ollama.com`)
   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`)
   - Set **Model** to the cloud model name

</TabItem>

@@ -224,16 +223,6 @@ genai:
  model: cloud-model-name
```

or when using Ollama Cloud directly

```yaml
genai:
  provider: ollama
  base_url: https://ollama.com
  model: cloud-model-name
  api_key: your-api-key
```

</TabItem>
</ConfigTabs>
@@ -494,7 +494,7 @@ detectors:
| [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| [YOLOX](#yolox) | ✅ | ? | |
| [D-FINE / DEIMv2](#d-fine--deimv2) | ❌ | ❌ | |
| [D-FINE](#d-fine) | ❌ | ❌ | |

#### SSDLite MobileNet v2

@@ -710,13 +710,13 @@ model:

</details>

#### D-FINE / DEIMv2
#### D-FINE

[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.

:::warning

Currently D-FINE / DEIMv2 models only run on OpenVINO in CPU mode; GPUs currently fail to compile the model.
Currently D-FINE models only run on OpenVINO in CPU mode; GPUs currently fail to compile the model.

:::
@@ -766,31 +766,6 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl

</details>

<details>
<summary>DEIMv2 Setup & Config</summary>

After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

```yaml
detectors:
  ov:
    type: openvino
    device: CPU

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/deimv2_hgnetv2_n.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

</details>

## Apple Silicon detector

The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
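As a rough sketch of running the recommended image from the paragraph above (the compose layout is an assumption, not part of this diff; the detector client itself is set up on the host per its own README):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-standard-arm64
    # The Apple NPU is reached via the host-side detector client linked
    # above, not via a device mapping into this container.
```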
@@ -972,7 +947,7 @@ The AMD GPU kernel is known problematic especially when converting models to mxr

See [ONNX supported models](#supported-models) for supported models; there are some caveats:

- D-FINE / DEIMv2 models are not supported
- D-FINE models are not supported
- YOLO-NAS models are known to not run well on integrated GPUs

## ONNX

@@ -1028,7 +1003,7 @@ detectors:
| [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
| [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️ | ❌ | Not supported by CUDA Graphs |
| [D-FINE](#d-fine) | ⚠️ | ❌ | Not supported by CUDA Graphs |

There is no default model provided; the following formats are supported:
@@ -1240,9 +1215,9 @@ model:

</details>

#### D-FINE / DEIMv2
#### D-FINE

[D-FINE](https://github.com/Peterande/D-FINE) and [DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) are DETR based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading [D-FINE](#downloading-d-fine-model) or [DEIMv2](#downloading-deimv2-model) for use in Frigate.
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.

<details>
<summary>D-FINE Setup & Config</summary>

@@ -1287,28 +1262,6 @@ model:

</details>

<details>
<summary>DEIMv2 Setup & Config</summary>

After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/deimv2_hgnetv2_n.onnx
  labelmap_path: /labelmap/coco-80.txt
```

</details>

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

## CPU Detector (not recommended)
@@ -1452,7 +1405,7 @@ MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the

#### YOLO-NAS

The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).

**Note:** The default model for the MemryX detector is YOLO-NAS 320x320.

@@ -1506,7 +1459,7 @@ model:

#### YOLOv9

The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage).
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).

##### Configuration

@@ -1648,39 +1601,19 @@ model:

#### Using a Custom Model

To use your own custom model, first compile it into a [.dfp](https://developer.memryx.com/2p1/specs/files.html#dataflow-program) file, which is the format used by MemryX.
To use your own model:

#### Compile the Model
1. Package your compiled model into a `.zip` file.

Custom models must be compiled using **MemryX SDK 2.1**.
2. The `.zip` must contain the compiled `.dfp` file.

Before compiling your model, install the MemryX Neural Compiler tools from the
[Install Tools](https://developer.memryx.com/2p1/get_started/install_tools.html) page on the **host**.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.

> **Note:** It is recommended to compile the model on the host machine, or on another separate machine, rather than inside the Frigate Docker container. Installing the compiler inside Docker may conflict with container packages. It is recommended to create a Python virtual environment and install the compiler there.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.

Once the SDK 2.1 environment is set up, follow the
[MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) documentation to compile your model.
5. Update the `labelmap_path` to match your custom model's labels.
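A minimal config sketch of steps 4-5 above, assuming a compiled model packaged as `yolonas.zip` and bind-mounted into `/config/model_cache` (the paths and labelmap filename here are illustrative, not from this diff):

```yaml
model:
  path: /config/model_cache/yolonas.zip # the bind-mounted .zip from step 4
  labelmap_path: /config/model_cache/custom_labels.txt # labels matching your model
```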
Example:

```bash
mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
```

For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/2p1/tutorials/tutorials.html).

#### Package the Compiled Model

1. Package your compiled model into a `.zip` file.

2. The `.zip` file must contain the compiled `.dfp` file.

3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.

4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.

5. Update `labelmap_path` to match your custom model's labels.
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).

```yaml
# The detector automatically selects the default model if nothing is provided in the config.
@@ -2341,49 +2274,6 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL
EOF
```

### Downloading DEIMv2 Model

[DEIMv2](https://github.com/Intellindust-AI-Lab/DEIMv2) can be exported as ONNX by running the command below. Pretrained weights are available on Hugging Face for two backbone families:

- **HGNetv2** (smaller/faster): `atto`, `femto`, `pico`, `n`
- **DINOv3** (larger/more accurate): `s`, `m`, `l`, `x`

Set `BACKBONE` and `MODEL_SIZE` in the first line to match your desired variant. Hugging Face model names use uppercase (e.g. `HGNetv2_N`, `DINOv3_S`), while config files use lowercase (e.g. `hgnetv2_n`, `dinov3_s`).

```sh
docker build . --rm --build-arg BACKBONE=hgnetv2 --build-arg MODEL_SIZE=n --output . -f- <<'EOF'
FROM python:3.11-slim AS build
RUN apt-get update && apt-get install --no-install-recommends -y git libgl1 libglib2.0-0 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /deimv2
RUN git clone https://github.com/Intellindust-AI-Lab/DEIMv2.git .
# Install CPU-only PyTorch first to avoid pulling CUDA variant
RUN uv pip install --no-cache --system torch torchvision --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache --system -r requirements.txt
RUN uv pip install --no-cache --system onnx safetensors huggingface_hub
RUN mkdir -p output
ARG BACKBONE
ARG MODEL_SIZE
# Download from Hugging Face and convert safetensors to pth
RUN python3 -c "\
from huggingface_hub import hf_hub_download; \
from safetensors.torch import load_file; \
import torch; \
backbone = '${BACKBONE}'.replace('hgnetv2','HGNetv2').replace('dinov3','DINOv3'); \
size = '${MODEL_SIZE}'.upper(); \
st = load_file(hf_hub_download('Intellindust/DEIMv2_' + backbone + '_' + size + '_COCO', 'model.safetensors')); \
torch.save({'model': st}, 'output/deimv2.pth')"
RUN sed -i "s/data = torch.rand(2/data = torch.rand(1/" tools/deployment/export_onnx.py
# HuggingFace safetensors omits frozen constants that the model constructor initializes
RUN sed -i "s/cfg.model.load_state_dict(state)/cfg.model.load_state_dict(state, strict=False)/" tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/deimv2/deimv2_${BACKBONE}_${MODEL_SIZE}_coco.yml -r output/deimv2.pth
FROM scratch
ARG BACKBONE
ARG MODEL_SIZE
COPY --from=build /deimv2/output/deimv2.onnx /deimv2_${BACKBONE}_${MODEL_SIZE}.onnx
EOF
```

### Downloading RF-DETR Model

RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, changing `MODEL_SIZE=Nano` in the first line to the `Nano`, `Small`, or `Medium` size.
@@ -195,7 +195,7 @@ Pre and post capture footage is included in the **recording timeline**, visible

## Will Frigate delete old recordings if my storage runs out?

If there is less than an hour left of storage, the oldest hour of recordings will be deleted and a message will be printed in the Frigate logs. This emergency cleanup deletes the oldest recordings first regardless of retention settings to reclaim space as quickly as possible.
As of Frigate 0.12, if there is less than an hour of storage left, the oldest 2 hours of recordings will be deleted.
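For reference, the retention settings that this emergency cleanup overrides are configured under `record.retain`; a minimal sketch with illustrative values (not part of this diff):

```yaml
record:
  enabled: true
  retain:
    days: 7 # continuous recording retention, in days
```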
## Configuring Recording Retention
@@ -236,7 +236,7 @@ Enabling arbitrary exec sources allows execution of arbitrary commands through g

## Advanced Restream Configurations

The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands and other applications. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:

:::warning

@@ -244,11 +244,16 @@ The `exec:`, `echo:`, and `expr:` sources are disabled by default for security.

:::

NOTE: RTSP output will need to be passed with two curly braces `{{output}}`, whereas pipe output must be passed without curly braces.
:::warning

The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.

:::

NOTE: The output will need to be passed with two curly braces `{{output}}`

```yaml
go2rtc:
  streams:
    stream1: exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}
    stream2: exec:rpicam-vid -t 0 --libav-format h264 -o -
```
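A minimal sketch of opting in to these sources, using the environment variable named in the warning above (the compose layout is an assumption, not part of this diff):

```yaml
services:
  frigate:
    environment:
      GO2RTC_ALLOW_ARBITRARY_EXEC: "true" # required for exec:/echo:/expr: sources
```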
@@ -4,15 +4,12 @@ title: Installation
---

import ShmCalculator from '@site/src/components/ShmCalculator'
import DockerComposeGenerator from '@site/src/components/DockerComposeGenerator'
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

Frigate is a Docker container that can be run on any Docker host including as a [Home Assistant App](https://www.home-assistant.io/apps/). Note that the Home Assistant App is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant, whether you are running Frigate as a standalone Docker container or as a Home Assistant App.

:::tip

If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started.md#configuring-frigate) to configure Frigate.
If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started#configuring-frigate) to configure Frigate.

:::
@@ -289,7 +286,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM

#### Installation

To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/install_hardware.html).
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).

Then follow these steps for installing the correct driver/runtime configuration:

@@ -298,12 +295,6 @@ Then follow these steps for installing the correct driver/runtime configuration:
3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.

:::warning

For manual setup, use **MemryX SDK 2.1** only. Other SDK versions are not supported for this setup. See the [SDK 2.1 documentation](https://developer.memryx.com/2p1/index.html).

:::

#### Setup

To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
@@ -477,16 +468,6 @@ Finally, configure [hardware object detection](/configuration/object_detectors#a

Running through Docker with Docker Compose is the recommended install method.

<Tabs>
<TabItem value="domestic" label="Docker Compose Generator" default>

Generate a Frigate Docker Compose configuration based on your hardware and requirements.

<DockerComposeGenerator/>

</TabItem>
<TabItem value="original" label="Example Docker Compose File">
```yaml
services:
  frigate:

@@ -520,10 +501,6 @@ services:
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
```
</TabItem>
</Tabs>

**Docker CLI**

If you can't use Docker Compose, you can run the container with something similar to this:
@@ -39,10 +39,6 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht

[Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.

## [kiosk-monitor](https://github.com/extremeshok/kiosk-monitor)

[kiosk-monitor](https://github.com/extremeshok/kiosk-monitor) is a Raspberry Pi watchdog that runs Chromium fullscreen on a Frigate dashboard (optionally with VLC on a second monitor for an RTSP camera stream), auto-restarts on frozen screens or unreachable URLs, and ships a Birdseye-aware Chromium helper that auto-sizes the grid to the display.

## [Periscope](https://github.com/maksz42/periscope)

[Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.
docs/package-lock.json (generated) | 9

@@ -14,11 +14,9 @@
        "@docusaurus/theme-mermaid": "^3.7.0",
        "@inkeep/docusaurus": "^2.0.16",
        "@mdx-js/react": "^3.1.0",
        "@types/js-yaml": "^4.0.9",
        "clsx": "^2.1.1",
        "docusaurus-plugin-openapi-docs": "^4.5.1",
        "docusaurus-theme-openapi-docs": "^4.5.1",
        "js-yaml": "^4.1.1",
        "prism-react-renderer": "^2.4.1",
        "raw-loader": "^4.0.2",
        "react": "^18.3.1",

@@ -5749,11 +5747,6 @@
        "@types/istanbul-lib-report": "*"
      }
    },
    "node_modules/@types/js-yaml": {
      "version": "4.0.9",
      "resolved": "https://mirrors.tencent.com/npm/@types/js-yaml/-/js-yaml-4.0.9.tgz",
      "integrity": "sha512-k4MGaQl5TGo/iipqb2UDG2UwjXziSWkh0uysQelTlJpX1qGlpUZYm8PnO4DxG1qBomtJUdYJ6qR6xdIah10JLg=="
    },
    "node_modules/@types/json-schema": {
      "version": "7.0.15",
      "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz",

@@ -12890,7 +12883,7 @@
    },
    "node_modules/js-yaml": {
      "version": "4.1.1",
      "resolved": "https://mirrors.tencent.com/npm/js-yaml/-/js-yaml-4.1.1.tgz",
      "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
      "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
      "license": "MIT",
      "dependencies": {
@@ -3,10 +3,9 @@
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "build:config": "node scripts/build-config.mjs",
    "docusaurus": "docusaurus",
    "start": "npm run build:config && npm run regen-docs && docusaurus start --host 0.0.0.0",
    "build": "npm run build:config && npm run regen-docs && docusaurus build",
    "start": "npm run regen-docs && docusaurus start --host 0.0.0.0",
    "build": "npm run regen-docs && docusaurus build",
    "swizzle": "docusaurus swizzle",
    "deploy": "docusaurus deploy",
    "clear": "docusaurus clear",

@@ -24,11 +23,9 @@
    "@docusaurus/theme-mermaid": "^3.7.0",
    "@inkeep/docusaurus": "^2.0.16",
    "@mdx-js/react": "^3.1.0",
    "@types/js-yaml": "^4.0.9",
    "clsx": "^2.1.1",
    "docusaurus-plugin-openapi-docs": "^4.5.1",
    "docusaurus-theme-openapi-docs": "^4.5.1",
    "js-yaml": "^4.1.1",
    "prism-react-renderer": "^2.4.1",
    "raw-loader": "^4.0.2",
    "react": "^18.3.1",
@@ -1,64 +0,0 @@
#!/usr/bin/env node

/**
 * Build script: reads config.yaml and generates TypeScript files
 * for the Docker Compose Generator.
 *
 * Usage: node scripts/build-config.mjs
 */

import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
import yaml from "js-yaml";

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const CONFIG_DIR = path.resolve(__dirname, "../src/components/DockerComposeGenerator/config");
const YAML_PATH = path.join(CONFIG_DIR, "config.yaml");

// Read & parse YAML
const raw = fs.readFileSync(YAML_PATH, "utf8");
const config = yaml.load(raw);

if (!config.devices || !config.hardware || !config.ports) {
  console.error("config.yaml must contain 'devices', 'hardware', and 'ports' sections.");
  process.exit(1);
}

/**
 * Generate a .ts file from a section of the YAML config.
 */
function generateTsFile(sectionName, items, typeName, varName, mapVarName, yamlFilename) {
  const jsonItems = JSON.stringify(items, null, 2);
  // Indent JSON to fit inside the array literal
  const indented = jsonItems
    .split("\n")
    .map((line, i) => (i === 0 ? line : " " + line))
    .join("\n");

  const content = `/**
 * AUTO-GENERATED FILE — do not edit directly.
 * Source: ${yamlFilename}
 * To update, edit the YAML file and run: npm run build:config
 */

import type { ${typeName} } from "./types";

export const ${varName}: ${typeName}[] = ${indented};

/** Lookup map for quick access by ID */
export const ${mapVarName}: Map<string, ${typeName}> = new Map(${varName}.map((item) => [item.id, item]));
`;

  const outPath = path.join(CONFIG_DIR, `${sectionName}.ts`);
  fs.writeFileSync(outPath, content, "utf8");
  console.log(`  ✓ Generated ${sectionName}.ts (${items.length} items)`);
}

console.log("Building config from config.yaml...");

generateTsFile("devices", config.devices, "DeviceConfig", "devices", "deviceMap", "config.yaml");
generateTsFile("hardware", config.hardware, "HardwareOption", "hardwareOptions", "hardwareMap", "config.yaml");
generateTsFile("ports", config.ports, "PortConfig", "ports", "portMap", "config.yaml");

console.log("Done!");
@@ -1,108 +0,0 @@
import React from "react";
import Admonition from "@theme/Admonition";
import DeviceSelector from "./components/DeviceSelector";
import HardwareOptions from "./components/HardwareOptions";
import PortConfigSection from "./components/PortConfig";
import StoragePaths from "./components/StoragePaths";
import NvidiaGpuConfig from "./components/NvidiaGpuConfig";
import OtherOptions from "./components/OtherOptions";
import GeneratedOutput from "./components/GeneratedOutput";
import { useConfigGenerator } from "./hooks/useConfigGenerator";
import styles from "./styles.module.css";

/**
 * Simple markdown-link-to-React renderer for help text.
 * Only supports [text](url) syntax — no nested brackets.
 */
function renderHelpText(text: string): React.ReactNode {
  const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
  return parts.map((part, i) => {
    const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
    if (match) {
      return (
        <a key={i} href={match[2]}>
          {match[1]}
        </a>
      );
    }
    return <React.Fragment key={i}>{part}</React.Fragment>;
  });
}

export default function DockerComposeGenerator() {
  const {
    deviceId, device, hardwareEnabled,
    portEnabled,
    nvidiaGpuCount, nvidiaGpuDeviceId,
    configPath, mediaPath, rtspPassword, timezone, shmSize,
    shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
    hasAnyHardware, generatedYaml,
    selectDevice, toggleHardware, togglePort,
    handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
    handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
    setRtspPassword, setTimezone, isHardwareDisabled,
  } = useConfigGenerator();

  return (
    <div className={styles.generator}>
      <div className={styles.card}>
        <DeviceSelector selectedId={deviceId} onSelect={selectDevice} />

        {device.helpText && (
          <Admonition type={device.helpType || "info"}>
            {renderHelpText(device.helpText)}
          </Admonition>
        )}

        {device.needsNvidiaConfig && (
          <NvidiaGpuConfig
            gpuCount={nvidiaGpuCount}
            gpuDeviceId={nvidiaGpuDeviceId}
            gpuDeviceIdError={gpuDeviceIdError}
            onGpuCountChange={handleNvidiaGpuCountChange}
            onGpuDeviceIdChange={handleNvidiaGpuDeviceIdChange}
          />
        )}

        <HardwareOptions
          deviceId={deviceId}
          hardwareEnabled={hardwareEnabled}
          onToggle={toggleHardware}
          isDisabled={isHardwareDisabled}
        />

        <StoragePaths
          configPath={configPath}
          mediaPath={mediaPath}
          configPathError={configPathError}
          mediaPathError={mediaPathError}
          onConfigPathChange={handleConfigPathChange}
          onMediaPathChange={handleMediaPathChange}
        />

        <PortConfigSection
          portEnabled={portEnabled}
          onTogglePort={togglePort}
        />

        <OtherOptions
          rtspPassword={rtspPassword}
          timezone={timezone}
          shmSize={shmSize}
          shmSizeError={shmSizeError}
          onRtspPasswordChange={setRtspPassword}
          onTimezoneChange={setTimezone}
          onShmSizeChange={handleShmSizeChange}
        />

        <GeneratedOutput
          yaml={generatedYaml}
          configPath={configPath}
          mediaPath={mediaPath}
          hasAnyHardware={hasAnyHardware}
          deviceId={deviceId}
        />
      </div>
    </div>
  );
}
@@ -1,147 +0,0 @@
import React from "react";
import { useColorMode } from "@docusaurus/theme-common";
import { devices } from "../config";
import type { DeviceConfig } from "../config";
import styles from "../styles.module.css";

interface Props {
  selectedId: string;
  onSelect: (id: string) => void;
}

/**
 * Determine the icon type from the icon string:
 * - Starts with "<svg" → inline SVG
 * - Starts with "/" or "http" → image URL/path
 * - Otherwise → emoji text
 */
function getIconType(icon: string): "svg" | "image" | "emoji" {
  const trimmed = icon.trim();
  if (trimmed.startsWith("<svg")) return "svg";
  if (trimmed.startsWith("/") || trimmed.startsWith("http://") || trimmed.startsWith("https://")) return "image";
  return "emoji";
}

/**
 * Check if the style object contains background-* properties,
 * indicating the image should be rendered as a CSS background-image
 * rather than an <img> tag.
 */
function hasBackgroundProps(style: React.CSSProperties | undefined): boolean {
  if (!style) return false;
  return Object.keys(style).some((key) => {
    const k = key.toLowerCase().replace(/-/g, "");
    return k === "backgroundsize" || k === "backgroundposition" || k === "backgroundrepeat" || k === "backgroundimage";
  });
}

/**
 * Convert a style object to CSS custom properties (e.g. { width: "24px" } → { "--svg-width": "24px" })
 * so they can be consumed by CSS rules targeting child elements like <svg>.
 */
function toCssVars(style: React.CSSProperties | undefined, prefix: string): React.CSSProperties {
  if (!style) return {};
  const vars: Record<string, string> = {};
  for (const [key, value] of Object.entries(style)) {
    const cssKey = key.replace(/([A-Z])/g, "-$1").toLowerCase();
    vars[`--${prefix}-${cssKey}`] = value;
  }
  return vars as React.CSSProperties;
}

function DeviceIcon({ device }: { device: DeviceConfig }) {
  const { isDarkTheme } = useColorMode();
  const iconStr = isDarkTheme && device.iconDark ? device.iconDark : device.icon;
  const iconStyle = (isDarkTheme && device.iconDarkStyle
    ? device.iconDarkStyle
    : device.iconStyle) as React.CSSProperties | undefined;
  const svgStyle = (isDarkTheme && device.svgDarkStyle
    ? device.svgDarkStyle
    : device.svgStyle) as React.CSSProperties | undefined;

  const iconType = getIconType(iconStr);

  if (iconType === "svg") {
    return (
      <div
        className={styles.deviceIconSvg}
        style={{ ...iconStyle, ...toCssVars(svgStyle, "svg") }}
        dangerouslySetInnerHTML={{ __html: iconStr }}
      />
    );
  }

  if (iconType === "image") {
    // When iconStyle contains background-* properties, render as background-image
    // on the container div instead of an <img> tag, enabling background-size/position control.
    if (hasBackgroundProps(iconStyle)) {
      return (
        <div
          className={styles.deviceIconImage}
          style={{
            backgroundImage: `url(${iconStr})`,
            backgroundRepeat: "no-repeat",
            backgroundPosition: "center",
            backgroundSize: "contain",
            ...iconStyle,
          }}
        />
      );
    }
    return (
      <div className={styles.deviceIconImage}>
        <img src={iconStr} alt={device.name} style={iconStyle} />
      </div>
    );
  }

  return (
    <div className={styles.deviceIcon} style={iconStyle}>
      {iconStr}
    </div>
  );
}

function DeviceCard({
  device,
  active,
  onClick,
}: {
  device: DeviceConfig;
  active: boolean;
  onClick: () => void;
}) {
  return (
    <div
      className={`${styles.deviceCard} ${active ? styles.deviceCardActive : ""}`}
      onClick={onClick}
      role="button"
      tabIndex={0}
      onKeyDown={(e) => {
        if (e.key === "Enter" || e.key === " ") onClick();
      }}
    >
      <DeviceIcon device={device} />
      <div className={styles.deviceName}>{device.name}</div>
      <div className={styles.deviceDesc}>{device.description}</div>
    </div>
  );
}

export default function DeviceSelector({ selectedId, onSelect }: Props) {
  return (
    <div className={styles.formSection}>
      <h4>Device Type</h4>
      <div className={styles.deviceGrid}>
        {devices.map((d) => (
          <DeviceCard
            key={d.id}
            device={d}
            active={selectedId === d.id}
            onClick={() => onSelect(d.id)}
          />
        ))}
      </div>
    </div>
  );
}
@@ -1,60 +0,0 @@
import React, { useState, useCallback } from "react";
import CodeBlock from "@theme/CodeBlock";
import Admonition from "@theme/Admonition";
import styles from "../styles.module.css";

interface Props {
  yaml: string;
  configPath: string;
  mediaPath: string;
  hasAnyHardware: boolean;
  deviceId: string;
}

export default function GeneratedOutput({
  yaml,
  configPath,
  mediaPath,
  hasAnyHardware,
  deviceId,
}: Props) {
  const [copied, setCopied] = useState(false);

  const handleCopy = useCallback(() => {
    navigator.clipboard.writeText(yaml).then(() => {
      setCopied(true);
      setTimeout(() => setCopied(false), 2000);
    });
  }, [yaml]);

  return (
    <div className={styles.resultSection}>
      <div className={styles.resultHeader}>
        <h4>Generated Configuration</h4>
        <button className="button button--primary button--sm" onClick={handleCopy}>
          {copied ? "Copied!" : "Copy"}
        </button>
      </div>

      {!configPath && (
        <Admonition type="tip">
          <p>You haven't specified a config file directory. You may want to modify the default path.</p>
        </Admonition>
      )}
      {!mediaPath && (
        <Admonition type="tip">
          <p>You haven't specified a recording storage directory. You may want to modify the default path.</p>
        </Admonition>
      )}
      {deviceId === "stable" && !hasAnyHardware && (
        <Admonition type="warning">
          <p>You haven't selected any hardware acceleration. Please check if you have supported hardware available.</p>
        </Admonition>
      )}

      <CodeBlock language="yaml" title="docker-compose.yml">
        {yaml}
      </CodeBlock>
    </div>
  );
}
@@ -1,62 +0,0 @@
import React from "react";
import { hardwareOptions } from "../config";
import type { HardwareOption } from "../config";
import styles from "../styles.module.css";

interface Props {
  deviceId: string;
  hardwareEnabled: Record<string, boolean>;
  onToggle: (hwId: string) => void;
  isDisabled: (hwId: string) => boolean;
}

function renderDescription(text: string): React.ReactNode {
  const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
  return parts.map((part, i) => {
    const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
    if (match) {
      return <a key={i} href={match[2]}>{match[1]}</a>;
    }
    return <React.Fragment key={i}>{part}</React.Fragment>;
  });
}

function HardwareCheckbox({
  hw, disabled, checked, onToggle,
}: {
  hw: HardwareOption; disabled: boolean; checked: boolean; onToggle: () => void;
}) {
  return (
    <div className={styles.hardwareItem}>
      <label className={`${styles.checkboxLabel} ${disabled ? styles.checkboxDisabled : ""}`}>
        <input type="checkbox" checked={checked} onChange={onToggle} disabled={disabled} />
        <span>{hw.label}</span>
      </label>
      {checked && hw.description && (
        <div className={styles.hardwareDescription}>{renderDescription(hw.description)}</div>
      )}
    </div>
  );
}

export default function HardwareOptions({ deviceId, hardwareEnabled, onToggle, isDisabled }: Props) {
  return (
    <div className={styles.formSection}>
      <h4>Generic Hardware Devices</h4>
      {deviceId !== "stable" && (
        <p className={styles.helpText}>
          Some options have been auto-configured based on your device type.
        </p>
      )}
      <div className={styles.checkboxGrid}>
        {hardwareOptions.map((hw) => {
          const disabled = isDisabled(hw.id);
          const checked = disabled ? false : !!hardwareEnabled[hw.id];
          return (
            <HardwareCheckbox key={hw.id} hw={hw} disabled={disabled} checked={checked} onToggle={() => onToggle(hw.id)} />
          );
        })}
      </div>
    </div>
  );
}
@@ -1,64 +0,0 @@
import React from "react";
import styles from "../styles.module.css";

interface Props {
  gpuCount: string;
  gpuDeviceId: string;
  gpuDeviceIdError: boolean;
  onGpuCountChange: (value: string) => void;
  onGpuDeviceIdChange: (value: string) => void;
}

export default function NvidiaGpuConfig({
  gpuCount,
  gpuDeviceId,
  gpuDeviceIdError,
  onGpuCountChange,
  onGpuDeviceIdChange,
}: Props) {
  const showDeviceId = gpuCount !== "";

  return (
    <div className={styles.nvidiaConfig}>
      <div className={styles.formGroup}>
        <label htmlFor="dcg-gpu-count" className={styles.label}>
          GPU count:
        </label>
        <input
          id="dcg-gpu-count"
          type="text"
          inputMode="numeric"
          pattern="[0-9]*"
          className={styles.input}
          value={gpuCount}
          placeholder="all"
          onChange={(e) => onGpuCountChange(e.target.value.replace(/\D/g, ""))}
        />
      </div>
      {showDeviceId && (
        <div className={styles.formGroup}>
          <label htmlFor="dcg-gpu-device-id" className={styles.label}>
            GPU device IDs (required, comma-separated):
          </label>
          <input
            id="dcg-gpu-device-id"
            type="text"
            className={`${styles.input} ${gpuDeviceIdError ? styles.inputError : ""}`}
            value={gpuDeviceId}
            placeholder="0"
            onChange={(e) => onGpuDeviceIdChange(e.target.value)}
          />
          {gpuDeviceIdError ? (
            <p className={styles.helpText}>
              ⚠️ GPU device IDs are required when GPU count is a number
            </p>
          ) : (
            <p className={styles.helpText}>
              Single GPU: 0 | Multiple GPUs: 0,1,2
            </p>
          )}
        </div>
      )}
    </div>
  );
}
@@ -1,122 +0,0 @@
import React, { useMemo } from "react";
import CodeInline from "@theme/CodeInline";
import styles from "../styles.module.css";

const AUTO_TIMEZONE_VALUE = "__auto__";

function getTimezoneList(): string[] {
  if (typeof Intl !== "undefined") {
    const intl = Intl as typeof Intl & {
      supportedValuesOf?: (key: string) => string[];
    };
    const supported = intl.supportedValuesOf?.("timeZone");
    if (supported && supported.length > 0) {
      return [...supported].sort();
    }
  }

  const fallback = Intl.DateTimeFormat().resolvedOptions().timeZone;
  return fallback ? [fallback] : ["UTC"];
}

interface Props {
  rtspPassword: string;
  timezone: string;
  shmSize: string;
  shmSizeError: boolean;
  onRtspPasswordChange: (value: string) => void;
  onTimezoneChange: (value: string) => void;
  onShmSizeChange: (value: string) => void;
}

export default function OtherOptions({
  rtspPassword,
  timezone,
  shmSize,
  shmSizeError,
  onRtspPasswordChange,
  onTimezoneChange,
  onShmSizeChange,
}: Props) {
  const timezones = useMemo(() => getTimezoneList(), []);
  const systemTimezone =
    Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC";
  const selectedValue = timezone || AUTO_TIMEZONE_VALUE;

  return (
    <div className={styles.formSection}>
      <h4>Other Options</h4>
      <div className={styles.formGrid}>
        <div className={styles.formGroup}>
          <label htmlFor="dcg-timezone" className={styles.label}>
            Timezone:
          </label>
          <select
            id="dcg-timezone"
            className={`${styles.input} ${styles.select}`}
            value={selectedValue}
            onChange={(e) =>
              onTimezoneChange(
                e.target.value === AUTO_TIMEZONE_VALUE ? "" : e.target.value
              )
            }
          >
            <option value={AUTO_TIMEZONE_VALUE}>
              Use browser timezone ({systemTimezone})
            </option>
            {timezones.map((tz) => (
              <option key={tz} value={tz}>
                {tz}
              </option>
            ))}
          </select>
        </div>
        <div className={styles.formGroup}>
          <label htmlFor="dcg-shm-size" className={styles.label}>
            Shared memory (SHM):
          </label>
          <input
            id="dcg-shm-size"
            type="text"
            className={`${styles.input} ${shmSizeError ? styles.inputError : ""}`}
            value={shmSize}
            placeholder="512mb"
            onChange={(e) => onShmSizeChange(e.target.value)}
          />
          {shmSizeError ? (
            <p className={styles.helpText}>
              ⚠️ Invalid format. Use a number followed by a unit (e.g. 512mb, 1gb)
            </p>
          ) : (
            <p className={styles.helpText}>
              See{" "}
              <a href="/frigate/installation#calculating-required-shm-size">
                calculating required SHM size
              </a>{" "}
              for the correct value.
            </p>
          )}
        </div>
        <div className={styles.formGroup}>
          <label htmlFor="dcg-rtsp-password" className={styles.label}>
            RTSP password:
          </label>
          <input
            id="dcg-rtsp-password"
            type="text"
            className={styles.input}
            value={rtspPassword}
            placeholder="password"
            onChange={(e) => onRtspPasswordChange(e.target.value)}
          />
          <p className={styles.helpText}>
            Optional. You can specify{" "}
            <CodeInline>{"{FRIGATE_RTSP_PASSWORD}"}</CodeInline>{" "}
            in the config file to reference camera stream passwords. This is NOT
            the Frigate login password.
          </p>
        </div>
      </div>
    </div>
  );
}
@@ -1,71 +0,0 @@
import React from "react";
import Admonition from "@theme/Admonition";
import { ports } from "../config";
import styles from "../styles.module.css";

interface Props {
  portEnabled: Record<string, boolean>;
  onTogglePort: (portId: string) => void;
}

function PortItem({
  port,
  enabled,
  onToggle,
}: {
  port: typeof ports[number];
  enabled: boolean;
  onToggle: () => void;
}) {
  const showWarning = port.warningContent && (
    port.warningWhen === "checked" ? enabled :
    port.warningWhen === "unchecked" ? !enabled : enabled
  );

  return (
    <div className={styles.hardwareItem}>
      <label className={`${styles.checkboxLabel} ${port.locked ? styles.checkboxDisabled : ""}`}>
        <input
          type="checkbox"
          checked={enabled}
          onChange={onToggle}
          disabled={port.locked}
        />
        <span>
          {port.locked && "🔒 "}
          Port {port.host}
          {port.protocol !== "tcp" && `/${port.protocol}`}
        </span>
      </label>
      {port.description && (
        <div className={styles.hardwareDescription}>{port.description}</div>
      )}
      {showWarning && (
        <Admonition type={port.warningType || "warning"}>
          {port.warningContent}
        </Admonition>
      )}
    </div>
  );
}

export default function PortConfigSection({
  portEnabled,
  onTogglePort,
}: Props) {
  return (
    <div className={styles.formSection}>
      <h4>Port Configuration</h4>
      <div className={styles.checkboxGrid}>
        {ports.map((port) => (
          <PortItem
            key={port.id}
            port={port}
            enabled={!!portEnabled[port.id]}
            onToggle={() => onTogglePort(port.id)}
          />
        ))}
      </div>
    </div>
  );
}
@@ -1,66 +0,0 @@
import React from "react";
import styles from "../styles.module.css";

interface Props {
  configPath: string;
  mediaPath: string;
  configPathError: boolean;
  mediaPathError: boolean;
  onConfigPathChange: (value: string) => void;
  onMediaPathChange: (value: string) => void;
}

export default function StoragePaths({
  configPath,
  mediaPath,
  configPathError,
  mediaPathError,
  onConfigPathChange,
  onMediaPathChange,
}: Props) {
  return (
    <div className={styles.formSection}>
      <h4>Storage Paths</h4>
      <div className={styles.formGrid}>
        <div className={styles.formGroup}>
          <label htmlFor="dcg-config-path" className={styles.label}>
            Config / DB / model cache directory (on your host):
          </label>
          <input
            id="dcg-config-path"
            type="text"
            className={`${styles.input} ${configPathError ? styles.inputError : ""}`}
            value={configPath}
            placeholder="/path/to/your/config"
            onChange={(e) => onConfigPathChange(e.target.value)}
          />
          {configPathError && (
            <p className={styles.helpText}>
              ⚠️ Path contains invalid characters. Only letters, numbers,
              underscores, hyphens, slashes, and dots are allowed.
            </p>
          )}
        </div>
        <div className={styles.formGroup}>
          <label htmlFor="dcg-media-path" className={styles.label}>
            Recording storage directory (on your host):
          </label>
          <input
            id="dcg-media-path"
            type="text"
            className={`${styles.input} ${mediaPathError ? styles.inputError : ""}`}
            value={mediaPath}
            placeholder="/path/to/your/storage"
            onChange={(e) => onMediaPathChange(e.target.value)}
          />
          {mediaPathError && (
            <p className={styles.helpText}>
              ⚠️ Path contains invalid characters. Only letters, numbers,
              underscores, hyphens, slashes, and dots are allowed.
            </p>
          )}
        </div>
      </div>
    </div>
  );
}
File diff suppressed because one or more lines are too long
@ -1,12 +0,0 @@
export { devices, deviceMap } from "./devices";
export { hardwareOptions, hardwareMap } from "./hardware";
export { ports, portMap } from "./ports";

export type {
  DeviceConfig,
  DeviceMapping,
  VolumeMapping,
  HardwareOption,
  PortConfig,
  NvidiaDeployConfig,
} from "./types";
@ -1,154 +0,0 @@
/**
 * Type definitions for the Docker Compose Generator configuration.
 * All device, hardware, and port options are declaratively defined
 * so that adding a new device only requires editing config files.
 */

/** A single device mapping entry (e.g. /dev/dri:/dev/dri) */
export interface DeviceMapping {
  /** Host device path */
  host: string;
  /** Container device path (defaults to host if omitted) */
  container?: string;
  /** Inline comment for this device line */
  comment?: string;
}

/** A single volume mapping entry */
export interface VolumeMapping {
  /** Host path */
  host: string;
  /** Container path */
  container: string;
  /** Whether the mount is read-only */
  readOnly?: boolean;
  /** Inline comment */
  comment?: string;
}

/** NVIDIA deploy configuration for docker-compose */
export interface NvidiaDeployConfig {
  /** "all" or a specific number */
  count: string;
  /** Specific GPU device IDs (when count is a number) */
  deviceIds?: string[];
}

/** Full device type definition */
export interface DeviceConfig {
  /** Unique identifier, e.g. "intel" */
  id: string;
  /** Display name, e.g. "Intel GPU" */
  name: string;
  /** Short description */
  description: string;
  /**
   * Icon for the device card. Supports:
   * - Emoji string (e.g. "🖥️")
   * - Image URL or static path (e.g. "/img/intel.svg", "https://example.com/icon.png")
   * - Inline SVG markup (e.g. "<svg>...</svg>")
   */
  icon: string;
  /**
   * Additional CSS properties applied to the icon element.
   * - For image-type icons: if any `background-*` property (e.g. `background-size`,
   *   `background-position`) is present, the image is rendered as a CSS `background-image`
   *   on the container div, enabling full background positioning control.
   *   Otherwise the image is rendered as an `<img>` tag and styles apply to it.
   * - For emoji/SVG icons: styles apply to the container div.
   */
  iconStyle?: Record<string, string>;
  /**
   * Additional CSS properties applied directly to the inner `<svg>` element
   * when the icon is an inline SVG. Use this to override the default
   * `width: 100%; height: 100%` or set `fill`, `transform`, etc.
   * Ignored for emoji and image-type icons.
   */
  svgStyle?: Record<string, string>;
  /**
   * Icon for dark mode. Same format as `icon`. When provided, this icon
   * replaces `icon` when the user is in dark mode.
   */
  iconDark?: string;
  /** Additional CSS properties for the dark mode icon container */
  iconDarkStyle?: Record<string, string>;
  /**
   * SVG-specific styles for dark mode. Same as `svgStyle` but applied
   * when dark mode is active. Merged over `svgStyle` in dark mode.
   */
  svgDarkStyle?: Record<string, string>;
  /** Docker image tag, e.g. "stable" */
  imageTag: string;
  /**
   * Image tag suffix appended to the base tag.
   * e.g. "-standard-arm64" produces "stable-standard-arm64"
   */
  imageTagSuffix?: string;
  /** Hardware option IDs to auto-enable when this device is selected */
  autoHardware: string[];
  /** Help text shown as an admonition when this device is selected */
  helpText?: string;
  /** Admonition type for help text */
  helpType?: "info" | "warning" | "danger";
  /** Device mappings always added for this device type */
  devices?: DeviceMapping[];
  /** Volume mappings always added for this device type */
  volumes?: VolumeMapping[];
  /** Extra environment variables for this device type */
  env?: Record<string, string>;
  /** NVIDIA deploy config (only for tensorrt) */
  nvidiaDeploy?: NvidiaDeployConfig;
  /** Runtime setting, e.g. "nvidia" for Jetson */
  runtime?: string;
  /** Extra hosts entries, e.g. "host.docker.internal:host-gateway" */
  extraHosts?: string[];
  /** Security options, e.g. ["apparmor=unconfined"] */
  securityOpt?: string[];
  /** Whether this device type needs the NVIDIA GPU config UI */
  needsNvidiaConfig?: boolean;
}

/** Generic hardware acceleration option definition */
export interface HardwareOption {
  /** Unique identifier, e.g. "usbCoral" */
  id: string;
  /** Display label */
  label: string;
  /**
   * Description shown below the checkbox when this option is enabled.
   * Supports markdown link syntax: [text](url)
   */
  description?: string;
  /** Device IDs that disable this option */
  disabledWhen?: string[];
  /** Device mappings added when this option is enabled */
  devices?: DeviceMapping[];
  /** Volume mappings added when this option is enabled */
  volumes?: VolumeMapping[];
  /** Extra environment variables */
  env?: Record<string, string>;
}

/** Port definition */
export interface PortConfig {
  /** Unique identifier (also the default host port as string) */
  id: string;
  /** Host port number */
  host: number;
  /** Container port number */
  container: number;
  /** Protocol */
  protocol?: "tcp" | "udp";
  /** Description of the port's purpose */
  description: string;
  /** Whether enabled by default */
  defaultEnabled: boolean;
  /** Whether this port is locked (always enabled, cannot be toggled off) */
  locked?: boolean;
  /** Admonition type for the warning */
  warningType?: "warning" | "danger";
  /** Warning content (markdown) */
  warningContent?: string;
  /** When to show the warning: when the port is checked or unchecked */
  warningWhen?: "checked" | "unchecked";
}
@ -1,250 +0,0 @@
import type {
  DeviceConfig,
  DeviceMapping,
  VolumeMapping,
} from "../config/types";
import { hardwareMap } from "../config";

// ---------------------------------------------------------------------------
// Input type
// ---------------------------------------------------------------------------

export interface GeneratorInput {
  device: DeviceConfig;
  selectedHardware: string[];
  enabledPorts: string[];
  configPath: string;
  mediaPath: string;
  rtspPassword?: string;
  timezone: string;
  shmSize: string;
  nvidiaGpuCount?: string;
  nvidiaGpuDeviceId?: string;
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function deviceLine(dm: DeviceMapping): string {
  const host = dm.host;
  const container = dm.container ?? dm.host;
  const mapping = host === container ? host : `${host}:${container}`;
  const comment = dm.comment ? ` # ${dm.comment}` : "";
  return `      - ${mapping}${comment}`;
}

function volumeLine(vm: VolumeMapping): string {
  const ro = vm.readOnly ? ":ro" : "";
  const comment = vm.comment ? ` # ${vm.comment}` : "";
  return `      - ${vm.host}:${vm.container}${ro}${comment}`;
}

// ---------------------------------------------------------------------------
// YAML builder — each section returns an array of lines
// ---------------------------------------------------------------------------

function buildImage(device: DeviceConfig): string[] {
  const tag = device.imageTagSuffix
    ? `${device.imageTag}${device.imageTagSuffix}`
    : device.imageTag;
  return [`    image: ghcr.io/blakeblackshear/frigate:${tag}`];
}

function buildDevices(
  device: DeviceConfig,
  hwDevices: DeviceMapping[]
): string[] {
  const all: DeviceMapping[] = [
    ...(device.devices ?? []),
    ...hwDevices,
  ];
  if (all.length === 0) return [];
  return [
    "    devices:",
    ...all.map(deviceLine),
  ];
}

function buildVolumes(
  device: DeviceConfig,
  hwVolumes: VolumeMapping[],
  configPath: string,
  mediaPath: string
): string[] {
  const all: VolumeMapping[] = [
    ...(device.volumes ?? []),
    ...hwVolumes,
  ];
  return [
    "    volumes:",
    "      - /etc/localtime:/etc/localtime:ro # Sync host time",
    `      - ${configPath}:/config # Config file directory`,
    `      - ${mediaPath}:/media/frigate # Recording storage directory`,
    "      - type: tmpfs # 1GB in-memory filesystem for recording segment storage",
    "        target: /tmp/cache",
    "        tmpfs:",
    "          size: 1000000000",
    ...all.map(volumeLine),
  ];
}

function buildPorts(enabledPorts: string[]): string[] {
  return [
    "    ports:",
    ...enabledPorts,
  ];
}

function buildEnvironment(
  device: DeviceConfig,
  hwEnv: Record<string, string>,
  rtspPassword: string | undefined,
  timezone: string
): string[] {
  const allEnv: Record<string, string> = {
    ...hwEnv,
    ...(device.env ?? {}),
  };

  const lines: string[] = ["    environment:"];

  if (rtspPassword) {
    lines.push(
      `      FRIGATE_RTSP_PASSWORD: "${rtspPassword}" # RTSP password — change to your own`
    );
  }

  lines.push(`      TZ: "${timezone}" # Timezone`);

  for (const [key, value] of Object.entries(allEnv)) {
    lines.push(`      ${key}: "${value}"`);
  }

  return lines;
}

function buildDeploy(device: DeviceConfig, input: GeneratorInput): string[] {
  if (device.id === "stable-tensorrt") {
    const count = input.nvidiaGpuCount || "all";
    const isAll = count === "all";
    const deviceId = input.nvidiaGpuDeviceId?.trim();

    if (isAll) {
      return [
        "    deploy:",
        "      resources:",
        "        reservations:",
        "          devices:",
        "            - driver: nvidia",
        "              count: all # Use all GPUs",
        "              capabilities: [gpu]",
      ];
    }

    if (deviceId) {
      const ids = deviceId
        .split(",")
        .map((s) => s.trim())
        .filter(Boolean)
        .map((s) => `'${s}'`)
        .join(", ");
      return [
        "    deploy:",
        "      resources:",
        "        reservations:",
        "          devices:",
        "            - driver: nvidia",
        `              device_ids: [${ids}] # GPU device IDs`,
        `              count: ${count} # GPU count`,
        "              capabilities: [gpu]",
      ];
    }

    return [
      "    deploy:",
      "      resources:",
      "        reservations:",
      "          devices:",
      "            - driver: nvidia",
      `              count: ${count} # GPU count`,
      "              capabilities: [gpu]",
    ];
  }

  return [];
}

function buildRuntime(device: DeviceConfig): string[] {
  if (device.runtime) {
    return [`    runtime: ${device.runtime}`];
  }
  return [];
}

function buildExtraHosts(device: DeviceConfig): string[] {
  if (!device.extraHosts?.length) return [];
  return [
    "    extra_hosts:",
    ...device.extraHosts.map(
      (h, i) =>
        `      - "${h}"${i === 0 ? " # Required to talk to the NPU detector" : ""}`
    ),
  ];
}

function buildSecurityOpt(device: DeviceConfig): string[] {
  if (!device.securityOpt?.length) return [];
  return [
    "    security_opt:",
    ...device.securityOpt.map((s) => `      - ${s}`),
  ];
}

// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------

/**
 * Generate a docker-compose YAML string from the given input.
 * The output is pure YAML with inline comments (no Shiki annotations).
 */
export function generateDockerCompose(input: GeneratorInput): string {
  const { device } = input;

  // Collect hardware-level devices, volumes, and env
  const hwDevices: DeviceMapping[] = [];
  const hwVolumes: VolumeMapping[] = [];
  const hwEnv: Record<string, string> = {};

  for (const hwId of input.selectedHardware) {
    const hw = hardwareMap.get(hwId);
    if (!hw) continue;
    // Skip GPU device mapping for tensorrt images (it uses deploy instead)
    if (hw.id === "gpu" && device.imageTag === "stable-tensorrt") continue;
    hwDevices.push(...(hw.devices ?? []));
    hwVolumes.push(...(hw.volumes ?? []));
    Object.assign(hwEnv, hw.env ?? {});
  }

  const lines: string[] = [
    "services:",
    "  frigate:",
    "    container_name: frigate",
    "    privileged: true # This may not be necessary for all setups",
    "    restart: unless-stopped",
    "    stop_grace_period: 30s # Allow enough time to shut down the various services",
    ...buildImage(device),
    `    shm_size: "${input.shmSize || "512mb"}" # Update for your cameras based on SHM calculation`,
    ...buildRuntime(device),
    ...buildDeploy(device, input),
    ...buildExtraHosts(device),
    ...buildSecurityOpt(device),
    ...buildDevices(device, hwDevices),
    ...buildVolumes(device, hwVolumes, input.configPath, input.mediaPath),
    ...buildPorts(input.enabledPorts),
    ...buildEnvironment(device, hwEnv, input.rtspPassword, input.timezone),
  ];

  return lines.join("\n");
}
@ -1,195 +0,0 @@
import { useState, useCallback, useMemo } from "react";
import { deviceMap, hardwareMap, portMap } from "../config";
import { generateDockerCompose } from "../generator";
import type { GeneratorInput } from "../generator";

/**
 * Main hook that holds all form state and generates the Docker Compose output.
 * Configuration is loaded synchronously from build-time generated .ts files.
 */
export function useConfigGenerator() {
  const [deviceId, setDeviceId] = useState("stable");

  const [hardwareEnabled, setHardwareEnabled] = useState<Record<string, boolean>>(() => {
    const defaultDevice = deviceMap.get("stable");
    const initial: Record<string, boolean> = {};
    if (defaultDevice) {
      for (const hwId of defaultDevice.autoHardware) {
        initial[hwId] = true;
      }
    }
    return initial;
  });

  const [portEnabled, setPortEnabled] = useState<Record<string, boolean>>(() => {
    const initial: Record<string, boolean> = {};
    for (const p of portMap.values()) {
      initial[p.id] = p.defaultEnabled;
    }
    return initial;
  });

  const [nvidiaGpuCount, setNvidiaGpuCount] = useState("");
  const [nvidiaGpuDeviceId, setNvidiaGpuDeviceId] = useState("");
  const [configPath, setConfigPath] = useState("");
  const [mediaPath, setMediaPath] = useState("");
  const [rtspPassword, setRtspPassword] = useState("");
  const [timezone, setTimezone] = useState("");
  const [shmSize, setShmSize] = useState("512mb");
  const [shmSizeError, setShmSizeError] = useState(false);
  const [gpuDeviceIdError, setGpuDeviceIdError] = useState(false);
  const [configPathError, setConfigPathError] = useState(false);
  const [mediaPathError, setMediaPathError] = useState(false);

  const device = useMemo(() => deviceMap.get(deviceId)!, [deviceId]);

  const selectDevice = useCallback((id: string) => {
    const newDevice = deviceMap.get(id);
    if (!newDevice) return;
    setDeviceId(id);
    setHardwareEnabled(() => {
      const next: Record<string, boolean> = {};
      for (const hwId of newDevice.autoHardware) {
        next[hwId] = true;
      }
      return next;
    });
    setNvidiaGpuCount("");
    setNvidiaGpuDeviceId("");
    setGpuDeviceIdError(false);
  }, []);

  const toggleHardware = useCallback((hwId: string) => {
    setHardwareEnabled((prev) => ({ ...prev, [hwId]: !prev[hwId] }));
  }, []);

  const togglePort = useCallback((portId: string) => {
    const port = portMap.get(portId);
    if (port?.locked) return;
    setPortEnabled((prev) => ({ ...prev, [portId]: !prev[portId] }));
  }, []);

  const isHardwareDisabled = useCallback(
    (hwId: string): boolean => {
      const hw = hardwareMap.get(hwId);
      if (!hw) return false;
      return hw.disabledWhen?.includes(deviceId) ?? false;
    },
    [deviceId]
  );

  const validateShmSize = useCallback((value: string): boolean => {
    if (!value) return true;
    return /^\d+(\.\d+)?[bkmgBKMG]{1,2}$/.test(value);
  }, []);

  const validatePath = useCallback((value: string): boolean => {
    if (!value) return true;
    return /^[a-zA-Z0-9_\-/./]+$/.test(value);
  }, []);

  const handleShmSizeChange = useCallback(
    (value: string) => {
      const filtered = value.replace(/[^0-9.bkmgBKMG]/g, "");
      const valid = validateShmSize(filtered);
      setShmSize(filtered);
      setShmSizeError(!valid && filtered !== "");
    },
    [validateShmSize]
  );

  const handleConfigPathChange = useCallback(
    (value: string) => {
      const filtered = value.replace(/[^a-zA-Z0-9_\-/./]/g, "");
      const valid = validatePath(filtered);
      setConfigPath(filtered);
      setConfigPathError(!valid && filtered !== "");
    },
    [validatePath]
  );

  const handleMediaPathChange = useCallback(
    (value: string) => {
      const filtered = value.replace(/[^a-zA-Z0-9_\-/./]/g, "");
      const valid = validatePath(filtered);
      setMediaPath(filtered);
      setMediaPathError(!valid && filtered !== "");
    },
    [validatePath]
  );

  const handleNvidiaGpuCountChange = useCallback((value: string) => {
    // Only allow digits
    setNvidiaGpuCount(value);
    if (value === "") {
      setNvidiaGpuDeviceId("");
      setGpuDeviceIdError(false);
    } else {
      setGpuDeviceIdError(false);
    }
  }, []);

  const handleNvidiaGpuDeviceIdChange = useCallback((value: string) => {
    setNvidiaGpuDeviceId(value.trim());
    setGpuDeviceIdError(false);
  }, []);

  const enabledPortLines = useMemo(() => {
    const lines: string[] = [];
    for (const [id, enabled] of Object.entries(portEnabled)) {
      if (!enabled) continue;
      const p = portMap.get(id);
      if (!p) continue;
      const proto = p.protocol && p.protocol !== "tcp" ? `/${p.protocol}` : "";
      const comment = p.description ? ` # ${p.description}` : "";
      lines.push(`      - "${p.host}:${p.container}${proto}"${comment}`);
    }
    return lines;
  }, [portEnabled]);

  const selectedHardwareIds = useMemo(() => {
    return Object.entries(hardwareEnabled)
      .filter(([id, enabled]) => {
        if (!enabled) return false;
        const hw = hardwareMap.get(id);
        if (!hw) return false;
        if (hw.disabledWhen?.includes(deviceId)) return false;
        return true;
      })
      .map(([id]) => id);
  }, [hardwareEnabled, deviceId]);

  const generatedYaml = useMemo(() => {
    const input: GeneratorInput = {
      device,
      selectedHardware: selectedHardwareIds,
      enabledPorts: enabledPortLines,
      configPath: configPath || "/path/to/your/config",
      mediaPath: mediaPath || "/path/to/your/storage",
      rtspPassword,
      timezone: timezone || Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC",
      shmSize: shmSize || "512mb",
      nvidiaGpuCount,
      nvidiaGpuDeviceId,
    };
    return generateDockerCompose(input);
  }, [
    device, selectedHardwareIds, enabledPortLines,
    configPath, mediaPath, rtspPassword, timezone, shmSize,
    nvidiaGpuCount, nvidiaGpuDeviceId,
  ]);

  const hasAnyHardware = selectedHardwareIds.length > 0 || !!device?.devices?.length;

  return {
    deviceId, device, hardwareEnabled, portEnabled,
    nvidiaGpuCount, nvidiaGpuDeviceId,
    configPath, mediaPath, rtspPassword, timezone, shmSize,
    shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
    hasAnyHardware, generatedYaml,
    selectDevice, toggleHardware, togglePort,
    handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
    handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
    setRtspPassword, setTimezone, isHardwareDisabled,
  };
}
@ -1 +0,0 @@
export { default } from "./DockerComposeGenerator";
@ -1,381 +0,0 @@
/* ===================================================================
   Docker Compose Generator — styles
   Uses Docusaurus / Infima CSS variables for theme compatibility.
   =================================================================== */

.generator {
  margin: 2rem 0;
}

.card {
  background: var(--ifm-background-surface-color);
  border: 1px solid var(--ifm-color-emphasis-400);
  border-radius: 12px;
  padding: 2rem;
  box-shadow: var(--ifm-global-shadow-lw);
}

[data-theme="light"] .card {
  background: var(--ifm-color-emphasis-100);
  border: 1px solid var(--ifm-color-emphasis-300);
}

/* --- Form sections --- */

.formSection {
  margin-bottom: 1.5rem;
  padding-bottom: 1.5rem;
  border-bottom: 1px solid var(--ifm-color-emphasis-400);
}

.formSection:last-child {
  border-bottom: none;
  margin-bottom: 0;
  padding-bottom: 0;
}

.formSection h4 {
  margin: 0 0 1rem 0;
  color: var(--ifm-font-color-base);
  font-size: 1.1rem;
  font-weight: var(--ifm-font-weight-semibold);
}

/* --- Form controls --- */

.formGroup {
  margin-bottom: 1rem;
}

.formGroup:last-child {
  margin-bottom: 0;
}

.label {
  display: block;
  margin-bottom: 0.25rem;
  color: var(--ifm-font-color-base);
  font-weight: var(--ifm-font-weight-semibold);
  font-size: 0.9rem;
}

.input {
  width: 100%;
  padding: 0.5rem 0.75rem;
  border: 1px solid var(--ifm-color-emphasis-400);
  border-radius: 6px;
  background: var(--ifm-background-color);
  color: var(--ifm-font-color-base);
  font-size: 0.95rem;
  transition: border-color 0.2s, box-shadow 0.2s;
}

[data-theme="light"] .input {
  background: #fff;
  border: 1px solid #d0d7de;
}

.input:focus {
  outline: none;
  border-color: var(--ifm-color-primary);
  box-shadow: 0 0 0 3px var(--ifm-color-primary-lightest);
}

[data-theme="dark"] .input {
  border-color: var(--ifm-color-emphasis-300);
}

.inputError {
  border-color: #e74c3c;
  animation: shake 0.3s ease-in-out;
}

@keyframes shake {
  0%,
  100% {
    transform: translateX(0);
  }
  25% {
    transform: translateX(-5px);
  }
  75% {
    transform: translateX(5px);
  }
}

/* --- Select dropdown --- */

.select {
  cursor: pointer;
  appearance: none;
  -moz-appearance: none;
  -webkit-appearance: none;
  background: var(--ifm-background-color)
    url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23666' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
    no-repeat right 0.75rem center / 12px 12px;
  padding-right: 2rem;
}

[data-theme="light"] .select {
  background: #fff
    url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23555' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
    no-repeat right 0.75rem center / 12px 12px;
}

.helpText {
  margin: 0.5rem 0 0 0;
  font-size: 0.85rem;
  color: var(--ifm-font-color-secondary);
  line-height: 1.5;
}

.helpText a {
  color: var(--ifm-color-primary);
}

/* --- Device grid --- */

.deviceGrid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
  gap: 0.75rem;
  margin-top: 0.5rem;
}

.deviceCard {
  padding: 0.75rem;
  border: 2px solid var(--ifm-color-emphasis-400);
  border-radius: 12px;
  cursor: pointer;
  transition: all 0.2s;
  text-align: center;
  background: var(--ifm-background-color);
  display: flex;
  flex-direction: column;
  align-items: center;
}

[data-theme="light"] .deviceCard {
  border: 2px solid #d0d7de;
  background: #fff;
}

.deviceCard:hover {
  border-color: var(--ifm-color-primary);
  background: var(--ifm-color-emphasis-100);
  transform: translateY(-2px);
}

.deviceCardActive {
  border-color: var(--ifm-color-primary);
  background: var(--ifm-color-primary-lightest);
  box-shadow: 0 0 0 1px var(--ifm-color-primary);
}

[data-theme="light"] .deviceCardActive {
  background: color-mix(in srgb, var(--ifm-color-primary) 12%, #fff);
}

[data-theme="dark"] .deviceCardActive {
  background: color-mix(in srgb, var(--ifm-color-primary) 25%, #1b1b1b);
}

[data-theme="dark"] .deviceCardActive .deviceName {
  color: var(--ifm-color-primary-light);
}

[data-theme="dark"] .deviceCardActive .deviceDesc {
  color: var(--ifm-color-primary-light);
  opacity: 0.85;
}

.deviceIcon {
  font-size: 2rem;
  margin-bottom: 0.25rem;
  height: 40px;
  width: 50px;
  display: flex;
  align-items: center;
  justify-content: center;
}

.deviceIconSvg {
  margin-bottom: 0.25rem;
  height: 40px;
  width: 50px;
  display: flex;
  align-items: center;
  justify-content: center;
  overflow: visible;
  /* Allow iconStyle width/height to override */
  flex-shrink: 0;
}

.deviceIconSvg svg {
  width: var(--svg-width, 100%);
  height: var(--svg-height, 100%);
  fill: var(--svg-fill, currentColor);
  transform: var(--svg-transform, none);
}

.deviceIconImage {
  margin-bottom: 0.25rem;
  height: 40px;
  width: 50px;
  display: flex;
  align-items: center;
  justify-content: center;
}

.deviceIconImage img {
  max-width: 100%;
  max-height: 100%;
  object-fit: contain;
}

.deviceName {
  font-weight: var(--ifm-font-weight-semibold);
  color: var(--ifm-font-color-base);
  margin-bottom: 0.15rem;
  font-size: 0.9rem;
}

.deviceDesc {
  font-size: 0.75rem;
  color: var(--ifm-font-color-secondary);
  line-height: 1.3;
}

/* --- Checkbox grid --- */

.checkboxGrid {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
  gap: 0.5rem;
}

@media (max-width: 576px) {
  .checkboxGrid {
    grid-template-columns: 1fr;
  }
}

.hardwareItem {
  margin-bottom: 0;
}

.hardwareDescription {
  margin: 0.15rem 0 0.4rem 1.6rem;
  font-size: 0.8rem;
  color: var(--ifm-font-color-secondary);
  line-height: 1.5;
}

.hardwareDescription a {
  color: var(--ifm-color-primary);
  text-decoration: underline;
  text-underline-offset: 2px;
}

.checkboxLabel {
  display: flex;
  align-items: center;
  gap: 0.5rem;
  cursor: pointer;
  padding: 0.4rem 0.5rem;
  border-radius: 6px;
  transition: background-color 0.2s;
  font-size: 0.9rem;
}

.checkboxLabel:hover {
  background: var(--ifm-color-emphasis-100);
}

.checkboxLabel input[type="checkbox"] {
  width: 1.1rem;
  height: 1.1rem;
  cursor: pointer;
  flex-shrink: 0;
}

.checkboxLabel span {
  color: var(--ifm-font-color-base);
}

.checkboxDisabled {
  cursor: not-allowed;
}

.checkboxDisabled:hover {
  background: transparent;
}

.checkboxDisabled input[type="checkbox"] {
  cursor: not-allowed;
  opacity: 0.5;
}

/* --- Form grid (side-by-side) --- */

.formGrid {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
  gap: 1rem;
}

@media (max-width: 576px) {
  .formGrid {
    grid-template-columns: 1fr;
  }
}

.formGrid .formGroup {
  margin-bottom: 0;
}

/* --- Port section --- */

.portSection {
  margin-bottom: 0.75rem;
}

.warningBadge {
  margin-left: auto;
  color: #e67e22;
  font-size: 0.85rem;
}

/* --- NVIDIA config --- */

.nvidiaConfig {
  margin-top: 1rem;
  margin-bottom: 1.5rem;
  padding: 1rem;
  background: var(--ifm-background-color);
  border-radius: 8px;
  border-left: 3px solid var(--ifm-color-primary);
}

[data-theme="light"] .nvidiaConfig {
  background: #f6f8fa;
  border-left: 3px solid var(--ifm-color-primary);
}

/* --- Result section --- */

.resultSection {
  margin-top: 2rem;
}

.resultHeader {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 1rem;
}

.resultHeader h4 {
  margin: 0;
  color: var(--ifm-font-color-base);
}
15
docs/static/frigate-api.yaml
vendored
@ -5997,10 +5997,7 @@ paths:
      tags:
        - App
      summary: Start debug replay
      description:
        Start a debug replay session from camera recordings. Returns
        immediately while clip generation runs as a background job; subscribe
        to the 'debug_replay' job_state WS topic to track progress.
      description: Start a debug replay session from camera recordings.
      operationId: start_debug_replay_debug_replay_start_post
      requestBody:
        required: true
@ -6009,16 +6006,12 @@ paths:
          schema:
            $ref: "#/components/schemas/DebugReplayStartBody"
      responses:
        "202":
        "200":
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/DebugReplayStartResponse"
        "400":
          description: Invalid camera, time range, or no recordings
        "409":
          description: A replay session is already active
        "422":
          description: Validation Error
          content:
@ -6279,14 +6272,10 @@ components:
        replay_camera:
          type: string
          title: Replay Camera
        job_id:
          type: string
          title: Job Id
      type: object
      required:
        - success
        - replay_camera
        - job_id
      title: DebugReplayStartResponse
      description: Response for starting a debug replay session.
    DebugReplayStatusResponse:

@ -146,13 +146,8 @@ def config(request: Request):
        for name, detector in config_obj.detectors.items()
    }

    # remove environment_vars for non-admin users
    if request.headers.get("remote-role") != "admin":
        config.pop("environment_vars", None)

    # remove mqtt credentials
    # remove the mqtt password
    config["mqtt"].pop("password", None)
    config["mqtt"].pop("user", None)

    # remove the proxy secret
    config["proxy"].pop("auth_secret", None)

@ -36,7 +36,6 @@ from frigate.api.defs.response.chat_response import (
)
from frigate.api.defs.tags import Tags
from frigate.api.event import events
from frigate.config import FrigateConfig
from frigate.genai.utils import build_assistant_message_for_conversation
from frigate.jobs.vlm_watch import (
    get_vlm_watch_job,
@ -402,38 +401,9 @@ def get_tools() -> JSONResponse:
    return JSONResponse(content={"tools": tools})


def _resolve_zones(
    zones: List[str],
    config: FrigateConfig,
    target_cameras: List[str],
) -> List[str]:
    """Map zone names to their canonical config keys, case-insensitively.

    LLMs frequently echo a user's casing ("Front Yard") instead of the
    configured key ("front_yard"). The downstream zone filter is a SQLite GLOB
    over the JSON-encoded zones column, which is case-sensitive — so an
    unnormalized name silently returns zero matches. Build a lookup over the
    relevant cameras' configured zones and substitute when we find a match;
    unknown names pass through so behavior matches what the model asked for.
    """
    if not zones:
        return zones

    lookup: Dict[str, str] = {}
    for camera_id in target_cameras:
        camera_config = config.cameras.get(camera_id)
        if camera_config is None:
            continue
        for zone_name in camera_config.zones.keys():
            lookup.setdefault(zone_name.lower(), zone_name)

    return [lookup.get(z.lower(), z) for z in zones]


async def _execute_search_objects(
    arguments: Dict[str, Any],
    allowed_cameras: List[str],
    config: FrigateConfig,
) -> JSONResponse:
    """
    Execute the search_objects tool.
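The removed `_resolve_zones` helper is small enough to verify in isolation. Below is a minimal standalone sketch of the same case-insensitive mapping; the `zones_by_camera` dict is a hypothetical stand-in for the real FrigateConfig camera objects:

from typing import Dict, List

def resolve_zones(zones: List[str], zones_by_camera: Dict[str, List[str]]) -> List[str]:
    # Lower-cased name -> canonical configured key; setdefault keeps the
    # first camera's spelling on collisions, as in the removed helper.
    lookup: Dict[str, str] = {}
    for camera_zones in zones_by_camera.values():
        for zone_name in camera_zones:
            lookup.setdefault(zone_name.lower(), zone_name)
    # Unknown names pass through unchanged.
    return [lookup.get(z.lower(), z) for z in zones]

# "Front_Yard" is normalized to the configured key; "driveway" is unknown
# and passes through as-is.
print(resolve_zones(["Front_Yard", "driveway"], {"front": ["front_yard", "porch"]}))
# -> ['front_yard', 'driveway']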
@ -467,11 +437,6 @@ async def _execute_search_objects(
    # Convert zones array to comma-separated string if provided
    zones = arguments.get("zones")
    if isinstance(zones, list):
        camera_arg = arguments.get("camera")
        target_cameras = (
            [camera_arg] if camera_arg and camera_arg != "all" else allowed_cameras
        )
        zones = _resolve_zones(zones, config, target_cameras)
        zones = ",".join(zones)
    elif zones is None:
        zones = "all"
@ -563,11 +528,6 @@ async def _execute_find_similar_objects(
    sub_labels = arguments.get("sub_labels")
    zones = arguments.get("zones")

    if zones:
        zones = _resolve_zones(
            zones, request.app.frigate_config, cameras or list(allowed_cameras)
        )

    similarity_mode = arguments.get("similarity_mode", "fused")
    if similarity_mode not in ("visual", "semantic", "fused"):
        similarity_mode = "fused"
@ -695,9 +655,7 @@ async def execute_tool(
    logger.debug(f"Executing tool: {tool_name} with arguments: {arguments}")

    if tool_name == "search_objects":
        return await _execute_search_objects(
            arguments, allowed_cameras, request.app.frigate_config
        )
        return await _execute_search_objects(arguments, allowed_cameras)

    if tool_name == "find_similar_objects":
        result = await _execute_find_similar_objects(
@ -877,9 +835,7 @@ async def _execute_tool_internal(
    This is used by the chat completion endpoint to execute tools.
    """
    if tool_name == "search_objects":
        response = await _execute_search_objects(
            arguments, allowed_cameras, request.app.frigate_config
        )
        response = await _execute_search_objects(arguments, allowed_cameras)
    try:
        if hasattr(response, "body"):
            body_str = response.body.decode("utf-8")
@ -943,9 +899,6 @@ async def _execute_start_camera_watch(

    await require_camera_access(camera, request=request)

    if zones:
        zones = _resolve_zones(zones, config, [camera])

    genai_manager = request.app.genai_manager
    chat_client = genai_manager.chat_client
    if chat_client is None or not chat_client.supports_vision:

@ -10,7 +10,6 @@ from pydantic import BaseModel, Field

from frigate.api.auth import require_role
from frigate.api.defs.tags import Tags
from frigate.jobs.debug_replay import start_debug_replay_job

logger = logging.getLogger(__name__)

@ -30,17 +29,10 @@ class DebugReplayStartResponse(BaseModel):

    success: bool
    replay_camera: str
    job_id: str


class DebugReplayStatusResponse(BaseModel):
    """Response for debug replay status.

    Returns only session-presence fields. Startup progress and error
    details flow through the job_state WebSocket topic via the
    debug_replay job (see frigate.jobs.debug_replay); the
    Replay page subscribes there with useJobStatus("debug_replay").
    """
    """Response for debug replay status."""

    active: bool
    replay_camera: str | None = None
@ -59,32 +51,15 @@ class DebugReplayStopResponse(BaseModel):
@router.post(
    "/debug_replay/start",
    response_model=DebugReplayStartResponse,
    status_code=202,
    responses={
        400: {"description": "Invalid camera, time range, or no recordings"},
        409: {"description": "A replay session is already active"},
    },
    dependencies=[Depends(require_role(["admin"]))],
    summary="Start debug replay",
    description="Start a debug replay session from camera recordings. Returns "
    "immediately while clip generation runs as a background job; subscribe "
    "to the 'debug_replay' job_state WS topic to track progress.",
    description="Start a debug replay session from camera recordings.",
)
async def start_debug_replay(request: Request, body: DebugReplayStartBody):
    """Start a debug replay session asynchronously."""
    """Start a debug replay session."""
    replay_manager = request.app.replay_manager

    try:
        job_id = await asyncio.to_thread(
            start_debug_replay_job,
            source_camera=body.camera,
            start_ts=body.start_time,
            end_ts=body.end_time,
            frigate_config=request.app.frigate_config,
            config_publisher=request.app.config_publisher,
            replay_manager=replay_manager,
        )
    except RuntimeError:
        if replay_manager.active:
            return JSONResponse(
                content={
                    "success": False,
@ -92,23 +67,38 @@ async def start_debug_replay(request: Request, body: DebugReplayStartBody):
                },
                status_code=409,
            )

    try:
        replay_camera = await asyncio.to_thread(
            replay_manager.start,
            source_camera=body.camera,
            start_ts=body.start_time,
            end_ts=body.end_time,
            frigate_config=request.app.frigate_config,
            config_publisher=request.app.config_publisher,
        )
    except ValueError:
        logger.exception("Rejected debug replay start request")
        logger.exception("Invalid parameters for debug replay start request")
        return JSONResponse(
            content={
                "success": False,
                "message": "Invalid debug replay parameters",
                "message": "Invalid debug replay request parameters",
            },
            status_code=400,
        )
    except RuntimeError:
        logger.exception("Error while starting debug replay session")
        return JSONResponse(
            content={
                "success": False,
                "message": "An internal error occurred while starting debug replay",
            },
            status_code=500,
        )

    return JSONResponse(
        content={
            "success": True,
            "replay_camera": replay_manager.replay_camera_name,
            "job_id": job_id,
        },
        status_code=202,
    return DebugReplayStartResponse(
        success=True,
        replay_camera=replay_camera,
    )


@ -128,16 +118,12 @@ def get_debug_replay_status(request: Request):

    if replay_manager.active and replay_camera:
        frame_processor = request.app.detected_frames_processor
        frame = (
            frame_processor.get_current_frame(replay_camera)
            if frame_processor is not None
            else None
        )
        frame = frame_processor.get_current_frame(replay_camera)

        if frame is not None:
            frame_time = frame_processor.get_current_frame_time(replay_camera)
    camera_config = request.app.frigate_config.cameras.get(replay_camera)
    retry_interval = 10.0
    retry_interval = 10

    if camera_config is not None:
        retry_interval = float(camera_config.ffmpeg.retry_interval or 10)

@ -754,15 +754,6 @@ def events_search(
            status_code=404,
        )

    if search_event.camera not in allowed_cameras:
        return JSONResponse(
            content={
                "success": False,
                "message": "Event not found",
            },
            status_code=404,
        )

    thumb_result = context.search_thumbnail(search_event)
    thumb_ids = {result[0]: result[1] for result in thumb_result}
    search_results = {

@ -5,15 +5,13 @@ import logging
import random
import string
import time
import zipfile
from collections import deque
from pathlib import Path
from typing import Iterator, List, Optional
from typing import List, Optional

import psutil
from fastapi import APIRouter, Depends, Query, Request
from fastapi.responses import JSONResponse, StreamingResponse
from pathvalidate import sanitize_filename, sanitize_filepath
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict

@ -363,136 +361,6 @@ def get_export_case(case_id: str):
    )


_ZIP_STREAM_CHUNK_SIZE = 1024 * 1024  # 1 MiB


class _StreamingZipBuffer:
    """File-like sink for ZipFile that exposes written bytes via drain().

    ZipFile writes synchronously into this buffer; the generator drains the
    queue between writes so StreamingResponse can yield bytes without
    materializing the whole archive in memory.
    """

    def __init__(self) -> None:
        self._queue: deque[bytes] = deque()
        self._offset = 0

    def write(self, data: bytes) -> int:
        if data:
            self._queue.append(bytes(data))
            self._offset += len(data)
        return len(data)

    def tell(self) -> int:
        return self._offset

    def flush(self) -> None:
        pass

    def drain(self) -> Iterator[bytes]:
        while self._queue:
            yield self._queue.popleft()


def _unique_archive_name(export: Export, used: set[str]) -> str:
    base = sanitize_filename(export.name) if export.name else None
    if not base:
        base = f"{export.camera}_{int(datetime.datetime.timestamp(export.date))}"

    candidate = f"{base}.mp4"
    counter = 1
    while candidate in used:
        candidate = f"{base}_{counter}.mp4"
        counter += 1

    used.add(candidate)
    return candidate


def _stream_case_archive(exports: List[Export]) -> Iterator[bytes]:
    """Yield bytes of a zip archive built from the given exports' mp4 files."""
    buffer = _StreamingZipBuffer()
    used_names: set[str] = set()

    # ZIP_STORED: mp4 is already compressed, recompressing wastes CPU for ~0% size win.
    with zipfile.ZipFile(
        buffer,
        mode="w",
        compression=zipfile.ZIP_STORED,
        allowZip64=True,
    ) as archive:
        for export in exports:
            source = Path(export.video_path)
            if not source.exists():
                continue

            arcname = _unique_archive_name(export, used_names)

            with (
                archive.open(arcname, mode="w", force_zip64=True) as entry,
                source.open("rb") as src,
            ):
                while True:
                    chunk = src.read(_ZIP_STREAM_CHUNK_SIZE)
                    if not chunk:
                        break

                    entry.write(chunk)
                    yield from buffer.drain()

            yield from buffer.drain()

    yield from buffer.drain()


@router.get(
    "/cases/{case_id}/download",
    dependencies=[Depends(allow_any_authenticated())],
    summary="Download export case as zip",
    description="Streams a zip archive containing every completed export's mp4 for the given case.",
)
def download_export_case(
    case_id: str,
    allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
    try:
        case = ExportCase.get(ExportCase.id == case_id)
    except DoesNotExist:
        return JSONResponse(
            content={"success": False, "message": "Export case not found"},
            status_code=404,
        )

    exports = list(
        Export.select()
        .where(
            Export.export_case == case_id,
            ~Export.in_progress,
            Export.camera << allowed_cameras,
        )
        .order_by(Export.date.asc())
    )

    if not exports:
        return JSONResponse(
            content={"success": False, "message": "No exports available to download."},
            status_code=404,
        )

    archive_base = sanitize_filename(case.name) if case.name else ""
    if not archive_base:
        archive_base = case_id

    return StreamingResponse(
        _stream_case_archive(exports),
        media_type="application/zip",
        headers={
            "Content-Disposition": f'attachment; filename="{archive_base}.zip"',
        },
    )


@router.patch(
    "/cases/{case_id}",
    response_model=GenericResponse,

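The streaming pattern used by the removed endpoint generalizes beyond Frigate: hand `zipfile` a write-only file object and drain its bytes from a generator, so the archive never has to fit in memory. A self-contained sketch under the same ZIP_STORED assumption (file paths here are hypothetical):

import zipfile
from collections import deque
from pathlib import Path
from typing import Iterator, List

class ZipSink:
    """Write-only sink for ZipFile; drain() hands written bytes to the caller."""
    def __init__(self) -> None:
        self._queue: deque[bytes] = deque()
        self._offset = 0
    def write(self, data: bytes) -> int:
        if data:
            self._queue.append(bytes(data))
            self._offset += len(data)
        return len(data)
    def tell(self) -> int:
        return self._offset  # ZipFile records entry offsets via tell()
    def flush(self) -> None:
        pass
    def drain(self) -> Iterator[bytes]:
        while self._queue:
            yield self._queue.popleft()

def stream_zip(paths: List[Path], chunk_size: int = 1024 * 1024) -> Iterator[bytes]:
    sink = ZipSink()
    # ZIP_STORED: skip recompression for already-compressed payloads like mp4.
    with zipfile.ZipFile(sink, mode="w", compression=zipfile.ZIP_STORED) as archive:
        for path in paths:
            # force_zip64 because entry sizes are unknown up front on an
            # unseekable sink, mirroring the removed code.
            with archive.open(path.name, mode="w", force_zip64=True) as entry, \
                 path.open("rb") as src:
                while chunk := src.read(chunk_size):
                    entry.write(chunk)
                    yield from sink.drain()  # keep memory bounded
    yield from sink.drain()  # central directory written on close

Because the sink has no seek(), ZipFile falls back to data descriptors, which is exactly what lets the archive be emitted strictly front to back.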
@ -174,10 +174,12 @@ async def latest_frame(
        }
        quality_params = get_image_quality_params(extension.value, params.quality)

        camera_config = request.app.frigate_config.cameras.get(camera_name)
        if camera_config is not None:
        if camera_name in request.app.frigate_config.cameras:
            frame = frame_processor.get_current_frame(camera_name, draw_options)
            retry_interval = float(camera_config.ffmpeg.retry_interval or 10)
            retry_interval = float(
                request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
                or 10
            )

            is_offline = False
            if frame is None or datetime.now().timestamp() > (
@ -1366,17 +1368,12 @@ def preview_gif(
    file_start = f"preview_{camera_name}-"
    start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
    end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"

    camera_files = [
        entry.name
        for entry in os.scandir(preview_dir)
        if entry.name.startswith(file_start)
    ]
    camera_files.sort()

    selected_previews = []

    for file in camera_files:
    for file in sorted(os.listdir(preview_dir)):
        if not file.startswith(file_start):
            continue

        if file < start_file:
            continue

@ -1553,17 +1550,12 @@ def preview_mp4(
    file_start = f"preview_{camera_name}-"
    start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
    end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"

    camera_files = [
        entry.name
        for entry in os.scandir(preview_dir)
        if entry.name.startswith(file_start)
    ]
    camera_files.sort()

    selected_previews = []

    for file in camera_files:
    for file in sorted(os.listdir(preview_dir)):
        if not file.startswith(file_start):
            continue

        if file < start_file:
            continue

@ -148,17 +148,12 @@ def get_preview_frames_from_cache(camera_name: str, start_ts: float, end_ts: flo
    file_start = f"preview_{camera_name}-"
    start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
    end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"

    camera_files = [
        entry.name
        for entry in os.scandir(preview_dir)
        if entry.name.startswith(file_start)
    ]
    camera_files.sort()

    selected_previews = []

    for file in camera_files:
    for file in sorted(os.listdir(preview_dir)):
        if not file.startswith(file_start):
            continue

        if file < start_file:
            continue

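All three hunks revert the same pattern: an `os.scandir` prefilter building `camera_files` gives way to a sorted `os.listdir` loop. Either way the range test is plain string comparison, which works because the timestamped names compare lexicographically. A toy sketch of the selection (hypothetical names; the end-of-range check is assumed, since the hunks are truncated before it):

# Select preview frames whose names fall between start_file and end_file.
file_start = "preview_cam-"
start_file = f"{file_start}100.0.webp"
end_file = f"{file_start}200.0.webp"

listing = ["preview_cam-050.0.webp", "preview_cam-150.0.webp",
           "preview_cam-250.0.webp", "preview_other-150.0.webp"]

selected = []
for name in sorted(listing):
    if not name.startswith(file_start):
        continue
    if name < start_file:       # lexicographic compare works for same-width stamps
        continue
    if name > end_file:         # assumed continuation of the truncated hunk
        break
    selected.append(name)

print(selected)  # ['preview_cam-150.0.webp']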
@ -35,7 +35,7 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.recordings])


@router.get("/recordings/storage", dependencies=[Depends(require_role(["admin"]))])
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
def get_recordings_storage_usage(request: Request):
    recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
        "storage"

@ -429,10 +429,7 @@ class WebPushClient(Communicator):
            else:
                title = base_title

            if payload["after"]["data"]["metadata"].get("shortSummary"):
                message = payload["after"]["data"]["metadata"]["shortSummary"]
            else:
                message = f"Detected on {camera_name}"
            message = payload["after"]["data"]["metadata"]["shortSummary"]
        else:
            zone_names = payload["after"]["data"]["zones"]
            formatted_zone_names = []
@ -552,14 +549,6 @@ class WebPushClient(Communicator):
        logger.debug(f"Sending camera monitoring push notification for {camera_name}")

        for user in self.web_pushers:
            if not self._user_has_camera_access(user, camera):
                logger.debug(
                    "Skipping notification for user %s - no access to camera %s",
                    user,
                    camera,
                )
                continue

            self.send_push_notification(
                user=user,
                payload=payload,

@ -17,90 +17,9 @@ from ws4py.websocket import WebSocket as WebSocket_

from frigate.comms.base_communicator import Communicator
from frigate.config import FrigateConfig
from frigate.const import (
    CLEAR_ONGOING_REVIEW_SEGMENTS,
    EXPIRE_AUDIO_ACTIVITY,
    INSERT_MANY_RECORDINGS,
    INSERT_PREVIEW,
    NOTIFICATION_TEST,
    REQUEST_REGION_GRID,
    UPDATE_AUDIO_ACTIVITY,
    UPDATE_AUDIO_TRANSCRIPTION_STATE,
    UPDATE_BIRDSEYE_LAYOUT,
    UPDATE_CAMERA_ACTIVITY,
    UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
    UPDATE_EVENT_DESCRIPTION,
    UPDATE_MODEL_STATE,
    UPDATE_REVIEW_DESCRIPTION,
    UPSERT_REVIEW_SEGMENT,
)

logger = logging.getLogger(__name__)

# Internal IPC topics — NEVER allowed from WebSocket, regardless of role
_WS_BLOCKED_TOPICS = frozenset(
    {
        INSERT_MANY_RECORDINGS,
        INSERT_PREVIEW,
        REQUEST_REGION_GRID,
        UPSERT_REVIEW_SEGMENT,
        CLEAR_ONGOING_REVIEW_SEGMENTS,
        UPDATE_CAMERA_ACTIVITY,
        UPDATE_AUDIO_ACTIVITY,
        EXPIRE_AUDIO_ACTIVITY,
        UPDATE_EVENT_DESCRIPTION,
        UPDATE_REVIEW_DESCRIPTION,
        UPDATE_MODEL_STATE,
        UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
        UPDATE_BIRDSEYE_LAYOUT,
        UPDATE_AUDIO_TRANSCRIPTION_STATE,
        NOTIFICATION_TEST,
    }
)

# Read-only topics any authenticated user (including viewer) can send
_WS_VIEWER_TOPICS = frozenset(
    {
        "onConnect",
        "modelState",
        "audioTranscriptionState",
        "birdseyeLayout",
        "embeddingsReindexProgress",
    }
)


def _check_ws_authorization(
    topic: str,
    role_header: str | None,
    separator: str,
) -> bool:
    """Check if a WebSocket message is authorized.

    Args:
        topic: The message topic.
        role_header: The HTTP_REMOTE_ROLE header value, or None.
        separator: The role separator character from proxy config.

    Returns:
        True if authorized, False if blocked.
    """
    # Block IPC-only topics unconditionally
    if topic in _WS_BLOCKED_TOPICS:
        return False

    # No role header: default to viewer (fail-closed)
    if role_header is None:
        return topic in _WS_VIEWER_TOPICS

    # Check if any role is admin
    roles = [r.strip() for r in role_header.split(separator)]
    if "admin" in roles:
        return True

    # Non-admin: only viewer topics allowed
    return topic in _WS_VIEWER_TOPICS


class WebSocket(WebSocket_):  # type: ignore[misc]
    def unhandled_error(self, error: Any) -> None:
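Because `_check_ws_authorization` is a pure function of the topic and the role header, the removed policy can be table-tested directly. A compact sketch with stand-in topic sets (the real code uses the frozensets defined above):

BLOCKED = frozenset({"insert_many_recordings", "update_model_state"})  # IPC-only stand-ins
VIEWER = frozenset({"onConnect", "modelState"})  # read-only stand-ins

def check_ws_authorization(topic: str, role_header: str | None, separator: str = ",") -> bool:
    if topic in BLOCKED:        # never allowed from a WebSocket, any role
        return False
    if role_header is None:     # fail closed: treat missing header as viewer
        return topic in VIEWER
    roles = [r.strip() for r in role_header.split(separator)]
    if "admin" in roles:
        return True
    return topic in VIEWER      # non-admin roles get read-only topics

assert check_ws_authorization("onConnect", None)
assert not check_ws_authorization("insert_many_recordings", "admin")
assert check_ws_authorization("ptz", "viewer,admin")
assert not check_ws_authorization("ptz", "viewer")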
@ -130,7 +49,6 @@ class WebSocketClient(Communicator):

        class _WebSocketHandler(WebSocket):
            receiver = self._dispatcher
            role_separator = self.config.proxy.separator or ","

            def received_message(self, message: WebSocket.received_message) -> None:  # type: ignore[name-defined]
                try:
@ -145,25 +63,11 @@ class WebSocketClient(Communicator):
                    )
                    return

                topic = json_message["topic"]

                # Authorization check (skip when environ is None — direct internal connection)
                role_header = (
                    self.environ.get("HTTP_REMOTE_ROLE") if self.environ else None
                logger.debug(
                    f"Publishing mqtt message from websockets at {json_message['topic']}."
                )
                if self.environ is not None and not _check_ws_authorization(
                    topic, role_header, self.role_separator
                ):
                    logger.warning(
                        "Blocked unauthorized WebSocket message: topic=%s, role=%s",
                        topic,
                        role_header,
                    )
                    return

                logger.debug(f"Publishing mqtt message from websockets at {topic}.")
                self.receiver(
                    topic,
                    json_message["topic"],
                    json_message["payload"],
                )

@ -20,7 +20,6 @@ class CameraConfigUpdateEnum(str, Enum):
    ffmpeg = "ffmpeg"
    live = "live"
    motion = "motion"  # includes motion and motion masks
    mqtt = "mqtt"
    notifications = "notifications"
    objects = "objects"
    object_genai = "object_genai"
@ -34,7 +33,6 @@ class CameraConfigUpdateEnum(str, Enum):
    lpr = "lpr"
    snapshots = "snapshots"
    timestamp_style = "timestamp_style"
    ui = "ui"
    zones = "zones"

@ -15,7 +15,7 @@ TRIGGER_DIR = f"{CLIPS_DIR}/triggers"
BIRDSEYE_PIPE = "/tmp/cache/birdseye"
CACHE_DIR = "/tmp/cache"
REPLAY_CAMERA_PREFIX = "_replay_"
REPLAY_DIR = os.path.join(CLIPS_DIR, "replay")
REPLAY_DIR = os.path.join(CACHE_DIR, "replay")
PLUS_ENV_VAR = "PLUS_API_KEY"
PLUS_API_HOST = "https://api.frigate.video"

@ -133,61 +133,6 @@ class FaceRecognizer(ABC):
        return 0.0


def build_class_mean(
    embs: list[np.ndarray],
    trim: float = 0.15,
    outlier_threshold: float = 0.30,
    min_keep_frac: float = 0.7,
    max_iters: int = 3,
) -> np.ndarray:
    """Build a class-mean embedding with two-layer outlier protection.

    Layer 1 (iterative, vector-wise): drop whole embeddings whose cosine
    similarity to the current class mean is below ``outlier_threshold``.
    Catches mislabeled or corrupted training samples (wrong face in the
    folder, full-frame screenshots, extreme crops) that per-dimension
    trimming cannot detect.

    Layer 2 (per-dimension): ``scipy.stats.trim_mean`` on the retained set
    to smooth per-component noise (lighting, expression, alignment jitter).

    Collections with fewer than 5 images bypass outlier rejection — too few
    samples to establish a reliable class center.
    """
    arr = np.stack(embs, axis=0)

    if len(arr) < 5:
        return np.asarray(stats.trim_mean(arr, trim, axis=0))

    keep = np.ones(len(arr), dtype=bool)
    floor = max(5, int(np.ceil(min_keep_frac * len(arr))))

    for _ in range(max_iters):
        mean = stats.trim_mean(arr[keep], trim, axis=0)
        m_norm = mean / (np.linalg.norm(mean) + 1e-9)
        e_norms = arr / (np.linalg.norm(arr, axis=1, keepdims=True) + 1e-9)
        cos = e_norms @ m_norm
        new_keep = cos >= outlier_threshold

        if new_keep.sum() < floor:
            top = np.argsort(-cos)[:floor]
            new_keep = np.zeros(len(arr), dtype=bool)
            new_keep[top] = True

        if np.array_equal(new_keep, keep):
            break
        keep = new_keep

    dropped = int((~keep).sum())

    if dropped:
        logger.debug(
            f"Vector-wise outlier filter dropped {dropped}/{len(arr)} embeddings"
        )

    return np.asarray(stats.trim_mean(arr[keep], trim, axis=0))


def similarity_to_confidence(
    cosine_similarity: float,
    median: float = 0.3,
@ -284,7 +229,7 @@ class FaceNetRecognizer(FaceRecognizer):

        for name, embs in face_embeddings_map.items():
            if embs:
                self.mean_embs[name] = build_class_mean(embs)
                self.mean_embs[name] = stats.trim_mean(embs, 0.15)

        logger.debug("Finished building ArcFace model")

@ -395,7 +340,7 @@ class ArcFaceRecognizer(FaceRecognizer):

        for name, embs in face_embeddings_map.items():
            if embs:
                self.mean_embs[name] = build_class_mean(embs)
                self.mean_embs[name] = stats.trim_mean(embs, 0.15)

        logger.debug("Finished building ArcFace model")
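
A small self-contained sketch of the two-layer idea in the docstring above, on synthetic data: six noisy copies of one identity direction plus one unrelated vector standing in for a mislabeled image. The data and the 0.30 cutoff mirror the diff's defaults, but everything here is made up for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
center = rng.normal(size=128)
center /= np.linalg.norm(center)
good = [center + rng.normal(scale=0.05, size=128) for _ in range(6)]
bad = rng.normal(size=128)  # stands in for a mislabeled training sample
embs = np.stack(good + [bad])

mean = stats.trim_mean(embs, 0.15, axis=0)              # layer 2 alone
m = mean / np.linalg.norm(mean)
e = embs / np.linalg.norm(embs, axis=1, keepdims=True)
cos = e @ m

keep = cos >= 0.30                                       # layer 1: vector-wise cut
print(keep)  # the unrelated vector has cosine near 0 and falls below threshold
clean = stats.trim_mean(embs[keep], 0.15, axis=0)        # layer 2 on the retained set

Per-dimension trimming alone cannot reject the bad sample because it is only one of seven values in every dimension; the cosine cut removes it wholesale before the trimmed mean runs.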
@ -1073,6 +1073,10 @@ class LicensePlateProcessingMixin:
                top_score = score
                top_box = bbox

            if score > top_score:
                top_score = score
                top_box = bbox

        # Return the top scoring bounding box if found
        if top_box is not None:
            # expand box by 5% to help with OCR
@ -1088,6 +1092,9 @@ class LicensePlateProcessingMixin:
                ]
            ).clip(0, [input.shape[1], input.shape[0]] * 2)

            logger.debug(
                f"{camera}: Found license plate. Bounding box: {expanded_box.astype(int)}"
            )
            return tuple(int(x) for x in expanded_box)  # type: ignore[return-value]
        else:
            return None  # No detection above the threshold
@ -1353,8 +1360,8 @@ class LicensePlateProcessingMixin:
            )

            # check that license plate is valid
            # quadruple the value because we've doubled both dimensions of the car
            if license_plate_area < self.config.cameras[camera].lpr.min_area * 4:
            # double the value because we've doubled the size of the car
            if license_plate_area < self.config.cameras[camera].lpr.min_area * 2:
                logger.debug(f"{camera}: License plate is less than min_area")
                return
@ -1458,7 +1465,6 @@ class LicensePlateProcessingMixin:
                license_plate_frame,
            )

            logger.debug(f"{camera}: Found license plate. Bounding box: {list(plate_box)}")
            logger.debug(f"{camera}: Running plate recognition for id: {id}.")

            # run detection, returns results sorted by confidence, best first
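
The min_area comments in that hunk come down to simple area scaling: doubling both dimensions of a crop multiplies its area by 2 x 2 = 4, so a threshold defined on the original frame must be scaled by 4, not 2, after the car crop is doubled in each dimension. For example, a 40 x 20 px plate (800 px squared) becomes 80 x 40 px (3200 px squared) in the doubled crop, which is why the "* 4" side of the diff is the correct factor.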
@ -39,8 +39,6 @@ logger = logging.getLogger(__name__)

RECORDING_BUFFER_EXTENSION_PERCENT = 0.10
MIN_RECORDING_DURATION = 10
MAX_IMAGE_TOKENS = 24000
MAX_FRAMES_PER_SECOND = 1


class ReviewDescriptionProcessor(PostProcessorApi):
@ -62,22 +60,14 @@ class ReviewDescriptionProcessor(PostProcessorApi):
    def calculate_frame_count(
        self,
        camera: str,
        duration: float,
        image_source: ImageSourceEnum = ImageSourceEnum.preview,
        height: int = 480,
    ) -> int:
        """Calculate optimal number of frames based on event duration, context size,
        image source, and resolution.
        """Calculate optimal number of frames based on context size, image source, and resolution.

        Per-image token cost is asked of the GenAI provider so providers that know
        their model's true cost (e.g. llama.cpp can probe the loaded mmproj) can
        diverge from the default ~1-token-per-1250-pixels heuristic. The frame
        budget is bounded by:
        - remaining context window after prompt + response reservations
        - a fixed MAX_IMAGE_TOKENS ceiling
        - MAX_FRAMES_PER_SECOND x duration, to avoid drowning short events in
          near-duplicate frames where the model latches onto the redundant middle
          and skips the start/end action
        Token usage varies by resolution: larger images (ultra-wide aspect ratios) use more tokens.
        Estimates ~1 token per 1250 pixels. Targets 98% context utilization with safety margin.
        Capped at 20 frames.
        """
        client = self.genai_manager.description_client

@ -115,15 +105,14 @@ class ReviewDescriptionProcessor(PostProcessorApi):
            width = target_width
            height = int(target_width / aspect_ratio)

        tokens_per_image = client.estimate_image_tokens(width, height)
        pixels_per_image = width * height
        tokens_per_image = pixels_per_image / 1250
        prompt_tokens = 3800
        response_tokens = 300
        context_budget = context_size - prompt_tokens - response_tokens
        image_token_budget = min(context_budget, MAX_IMAGE_TOKENS)
        max_frames_by_tokens = int(image_token_budget / tokens_per_image)
        max_frames_by_duration = int(duration * MAX_FRAMES_PER_SECOND)
        max_frames = min(max_frames_by_tokens, max_frames_by_duration)
        return max(max_frames, 3)
        available_tokens = context_size - prompt_tokens - response_tokens
        max_frames = int(available_tokens / tokens_per_image)

        return min(max(max_frames, 3), 20)
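
A worked sketch of the budget arithmetic in the removed branch, with assumed numbers (a 32k context window, 640x480 frames, a 30 second event):

context_size = 32768                          # assumed provider context window
tokens_per_image = (640 * 480) / 1250         # default heuristic, ~245.8
context_budget = context_size - 3800 - 300    # 28668 after prompt/response reservations
image_token_budget = min(context_budget, 24000)  # MAX_IMAGE_TOKENS ceiling wins
max_frames_by_tokens = int(image_token_budget / tokens_per_image)   # 97
max_frames_by_duration = int(30 * 1)          # MAX_FRAMES_PER_SECOND cap -> 30
frames = max(min(max_frames_by_tokens, max_frames_by_duration), 3)  # 30

Here the duration cap, not the token budget, is binding: a 30 second event gets 30 frames even though nearly 100 would fit in the context window.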
    def process_data(
        self, data: dict[str, Any], data_type: PostProcessDataEnum
@ -366,17 +355,12 @@ class ReviewDescriptionProcessor(PostProcessorApi):
        file_start = f"preview_{camera}-"
        start_file = f"{file_start}{start_time}.webp"
        end_file = f"{file_start}{end_time}.webp"

        camera_files = [
            entry.name
            for entry in os.scandir(preview_dir)
            if entry.name.startswith(file_start)
        ]
        camera_files.sort()

        all_frames: list[str] = []

        for file in camera_files:
        for file in sorted(os.listdir(preview_dir)):
            if not file.startswith(file_start):
                continue

            if file < start_file:
                if len(all_frames):
                    all_frames[0] = os.path.join(preview_dir, file)
@ -392,9 +376,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
                all_frames.append(os.path.join(preview_dir, file))

        frame_count = len(all_frames)
        desired_frame_count = self.calculate_frame_count(
            camera, duration=end_time - start_time
        )
        desired_frame_count = self.calculate_frame_count(camera)

        if frame_count <= desired_frame_count:
            return all_frames
@ -418,7 +400,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
        """Get frames from recordings at specified timestamps."""
        duration = end_time - start_time
        desired_frame_count = self.calculate_frame_count(
            camera, duration, ImageSourceEnum.recordings, height
            camera, ImageSourceEnum.recordings, height
        )

        # Calculate evenly spaced timestamps throughout the duration
@ -1,48 +1,21 @@
from typing import Annotated

from pydantic import BaseModel, ConfigDict, Field, StringConstraints

ObservationItem = Annotated[str, StringConstraints(min_length=20, max_length=160)]
from pydantic import BaseModel, ConfigDict, Field


class ReviewMetadata(BaseModel):
    model_config = ConfigDict(extra="ignore", protected_namespaces=())

    observations: list[ObservationItem] = Field(
        ...,
        min_length=3,
        max_length=15,
        description=(
            "Enumerate the significant observations across all frames, in "
            "chronological order, BEFORE composing the scene narrative. "
            "Include the very start of the activity — for example, a vehicle "
            "entering the frame or pulling into the driveway — even if it "
            "lasts only a few frames and the rest of the clip is dominated "
            "by a longer activity. Include each arrival, departure, motion "
            "event, object handled, and notable change in position or state. "
            "Each item is a single concrete fact written as a complete "
            "sentence. Do not summarize, interpret, or assign meaning here — "
            "that belongs in the scene field."
        ),
    )
    title: str = Field(
        max_length=80,
        description="Under 10 words. Name the apparent purpose or outcome of the activity together with the location involved. Do not narrate or list the sequence of actions step by step.",
        description="A short title characterizing what took place and where, under 10 words."
    )
    scene: str = Field(
        min_length=150,
        max_length=600,
        description="A chronological narrative of what happens from start to finish, drawing directly from the items in observations.",
        description="A chronological narrative of what happens from start to finish."
    )
    shortSummary: str = Field(
        min_length=70,
        max_length=120,
        description="A brief 2-sentence summary of the scene, suitable for notifications.",
        description="A brief 2-sentence summary of the scene, suitable for notifications."
    )
    confidence: float = Field(
        ge=0.0,
        le=1.0,
        description="Confidence in the analysis as a decimal between 0.0 and 1.0, where 0.0 means no confidence and 1.0 means complete confidence. Express ONLY as a decimal.",
        description="Confidence in the analysis, from 0 to 1.",
    )
    potential_threat_level: int = Field(
        ge=0,
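
For reference, a hypothetical response that would satisfy the constrained side of this schema, written as a Python dict (each observation 20-160 characters, 3-15 items, confidence as a decimal; the scene and shortSummary bodies are elided here):

raw = {
    "observations": [
        "A white sedan pulls into the driveway from the street at the start of the clip.",
        "The driver steps out and walks toward the front porch carrying a package.",
        "The sedan reverses out of the driveway and leaves the frame to the left.",
    ],
    "title": "Courier drops off package at front door",
    "scene": "...",          # 150-600 chars in a real response
    "shortSummary": "...",   # 70-120 chars in a real response
    "confidence": 0.85,      # decimal, not a percentage
    "potential_threat_level": 0,
}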
@ -1,13 +1,9 @@
"""Debug replay camera management for replaying recordings with detection overlays.

The startup work (ffmpeg concat + camera config publish) lives in
frigate.jobs.debug_replay. This module owns only session presence
(active), session metadata, and post-session cleanup.
"""
"""Debug replay camera management for replaying recordings with detection overlays."""

import logging
import os
import shutil
import subprocess as sp
import threading

from ruamel.yaml import YAML
@ -25,7 +21,7 @@ from frigate.const import (
    REPLAY_DIR,
    THUMB_DIR,
)
from frigate.jobs.debug_replay import cancel_debug_replay_job, wait_for_runner
from frigate.models import Recordings
from frigate.util.camera_cleanup import cleanup_camera_db, cleanup_camera_files
from frigate.util.config import find_config_file

@ -33,14 +29,7 @@ logger = logging.getLogger(__name__)


class DebugReplayManager:
    """Owns the lifecycle pointers for a single debug replay session.

    A session exists from the moment mark_starting is called (synchronously,
    inside the API handler) until clear_session runs (on success cleanup,
    failure, or stop). The active property is the source of truth that the
    status bar consumes — broader than the startup job, which only covers the
    preparing_clip / starting_camera window.
    """
    """Manages a single debug replay session."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
@ -52,66 +41,144 @@ class DebugReplayManager:

    @property
    def active(self) -> bool:
        """True from mark_starting until clear_session."""
        """Whether a replay session is currently active."""
        return self.replay_camera_name is not None

    def mark_starting(
    def start(
        self,
        source_camera: str,
        replay_camera_name: str,
        start_ts: float,
        end_ts: float,
    ) -> None:
        """Synchronously claim the session before the job runner starts.

        Called inside the API handler so the status bar sees active=True
        immediately, before the worker thread does any ffmpeg work.
        """
        with self._lock:
            self.replay_camera_name = replay_camera_name
            self.source_camera = source_camera
            self.start_ts = start_ts
            self.end_ts = end_ts
            self.clip_path = None

    def mark_session_ready(self, clip_path: str) -> None:
        """Record the on-disk clip path after the camera has been published."""
        with self._lock:
            self.clip_path = clip_path

    def clear_session(self) -> None:
        """Reset session pointers without publishing camera removal.

        Used by the job runner on failure paths. stop() does the camera
        teardown plus this clear in one step.
        """
        with self._lock:
            self._clear_locked()

    def _clear_locked(self) -> None:
        self.replay_camera_name = None
        self.source_camera = None
        self.clip_path = None
        self.start_ts = None
        self.end_ts = None

    def publish_camera(
        self,
        source_camera: str,
        replay_name: str,
        clip_path: str,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
    ) -> None:
        """Build the in-memory replay camera config and publish the add event.
    ) -> str:
        """Start a debug replay session.

        Called by the job runner during the starting_camera phase.
        Args:
            source_camera: Name of the source camera to replay
            start_ts: Start timestamp
            end_ts: End timestamp
            frigate_config: Current Frigate configuration
            config_publisher: Publisher for camera config updates

        Returns:
            The replay camera name

        Raises:
            ValueError: If a session is already active or parameters are invalid
            RuntimeError: If clip generation fails
        """
        with self._lock:
            return self._start_locked(
                source_camera, start_ts, end_ts, frigate_config, config_publisher
            )

    def _start_locked(
        self,
        source_camera: str,
        start_ts: float,
        end_ts: float,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
    ) -> str:
        if self.active:
            raise ValueError("A replay session is already active")

        if source_camera not in frigate_config.cameras:
            raise ValueError(f"Camera '{source_camera}' not found")

        if end_ts <= start_ts:
            raise ValueError("End time must be after start time")

        # Query recordings for the source camera in the time range
        recordings = (
            Recordings.select(
                Recordings.path,
                Recordings.start_time,
                Recordings.end_time,
            )
            .where(
                Recordings.start_time.between(start_ts, end_ts)
                | Recordings.end_time.between(start_ts, end_ts)
                | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
            )
            .where(Recordings.camera == source_camera)
            .order_by(Recordings.start_time.asc())
        )

        if not recordings.count():
            raise ValueError(
                f"No recordings found for camera '{source_camera}' in the specified time range"
            )

        # Create replay directory
        os.makedirs(REPLAY_DIR, exist_ok=True)

        # Generate replay camera name
        replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"

        # Build concat file for ffmpeg
        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")

        with open(concat_file, "w") as f:
            for recording in recordings:
                f.write(f"file '{recording.path}'\n")

        # Concatenate recordings into a single clip with -c copy (fast)
        ffmpeg_cmd = [
            frigate_config.ffmpeg.ffmpeg_path,
            "-hide_banner",
            "-y",
            "-f",
            "concat",
            "-safe",
            "0",
            "-i",
            concat_file,
            "-c",
            "copy",
            "-movflags",
            "+faststart",
            clip_path,
        ]

        logger.info(
            "Generating replay clip for %s (%.1f - %.1f)",
            source_camera,
            start_ts,
            end_ts,
        )

        try:
            result = sp.run(
                ffmpeg_cmd,
                capture_output=True,
                text=True,
                timeout=120,
            )
            if result.returncode != 0:
                logger.error("FFmpeg error: %s", result.stderr)
                raise RuntimeError(
                    f"Failed to generate replay clip: {result.stderr[-500:]}"
                )
        except sp.TimeoutExpired:
            raise RuntimeError("Clip generation timed out")
        finally:
            # Clean up concat file
            if os.path.exists(concat_file):
                os.remove(concat_file)

        if not os.path.exists(clip_path):
            raise RuntimeError("Clip file was not created")

        # Build camera config dict for the replay camera
        source_config = frigate_config.cameras[source_camera]
        camera_dict = self._build_camera_config_dict(
            source_config, replay_name, clip_path
        )

        # Build an in-memory config with the replay camera added
        config_file = find_config_file()
        yaml_parser = YAML()
        with open(config_file, "r") as f:
@ -124,48 +191,75 @@ class DebugReplayManager:
        try:
            new_config = FrigateConfig.parse_object(config_data)
        except Exception as e:
            raise RuntimeError(f"Failed to validate replay camera config: {e}") from e
            raise RuntimeError(f"Failed to validate replay camera config: {e}")

        # Update the running config
        frigate_config.cameras[replay_name] = new_config.cameras[replay_name]

        # Publish the add event
        config_publisher.publish_update(
            CameraConfigUpdateTopic(CameraConfigUpdateEnum.add, replay_name),
            new_config.cameras[replay_name],
        )

        # Store session state
        self.replay_camera_name = replay_name
        self.source_camera = source_camera
        self.clip_path = clip_path
        self.start_ts = start_ts
        self.end_ts = end_ts

        logger.info("Debug replay started: %s -> %s", source_camera, replay_name)
        return replay_name

    def stop(
        self,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
    ) -> None:
        """Cancel any in-flight startup job and tear down the active session.
        """Stop the active replay session and clean up all artifacts.

        Safe to call when no session is active (no-op with a warning).
        Args:
            frigate_config: Current Frigate configuration
            config_publisher: Publisher for camera config updates
        """
        cancel_debug_replay_job()
        wait_for_runner(timeout=2.0)

        with self._lock:
            if not self.active:
                logger.warning("No active replay session to stop")
                return
            self._stop_locked(frigate_config, config_publisher)

            replay_name = self.replay_camera_name
    def _stop_locked(
        self,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
    ) -> None:
        if not self.active:
            logger.warning("No active replay session to stop")
            return

            # Only publish remove if the camera was actually added to the live
            # config (i.e. the runner reached the starting_camera phase).
            if replay_name is not None and replay_name in frigate_config.cameras:
                config_publisher.publish_update(
                    CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
                    frigate_config.cameras[replay_name],
                )
        replay_name = self.replay_camera_name

            if replay_name is not None:
                self._cleanup_db(replay_name)
                self._cleanup_files(replay_name)
        # Publish remove event so subscribers stop and remove from their config
        if replay_name in frigate_config.cameras:
            config_publisher.publish_update(
                CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
                frigate_config.cameras[replay_name],
            )
            # Do NOT pop here — let subscribers handle removal from the shared
            # config dict when they process the ZMQ message to avoid race conditions

            self._clear_locked()
        # Defensive DB cleanup
        self._cleanup_db(replay_name)

            logger.info("Debug replay stopped and cleaned up: %s", replay_name)
        # Remove filesystem artifacts
        self._cleanup_files(replay_name)

        # Reset state
        self.replay_camera_name = None
        self.source_camera = None
        self.clip_path = None
        self.start_ts = None
        self.end_ts = None

        logger.info("Debug replay stopped and cleaned up: %s", replay_name)

    def _build_camera_config_dict(
        self,
@ -173,7 +267,16 @@ class DebugReplayManager:
        replay_name: str,
        clip_path: str,
    ) -> dict:
        """Build a camera config dictionary for the replay camera."""
        """Build a camera config dictionary for the replay camera.

        Args:
            source_config: Source camera's CameraConfig
            replay_name: Name for the replay camera
            clip_path: Path to the replay clip file

        Returns:
            Camera config as a dictionary
        """
        # Extract detect config (exclude computed fields)
        detect_dict = source_config.detect.model_dump(
            exclude={"min_initialized", "max_disappeared", "enabled_in_config"}
@ -208,6 +311,7 @@ class DebugReplayManager:
            zone_dump = zone_config.model_dump(
                exclude={"contour", "color"}, exclude_defaults=True
            )
            # Always include required fields
            zone_dump.setdefault("coordinates", zone_config.coordinates)
            zones_dict[zone_name] = zone_dump

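A minimal standalone sketch of the concat-demuxer technique both sides of this diff rely on, with assumed file paths. "-c copy" remuxes without re-encoding (near-instant), and "-safe 0" is needed because the list file uses absolute paths:

import subprocess

segments = ["/media/frigate/recordings/a.mp4", "/media/frigate/recordings/b.mp4"]  # assumed
with open("/tmp/concat.txt", "w") as f:
    for path in segments:
        f.write(f"file '{path}'\n")

subprocess.run(
    [
        "ffmpeg", "-hide_banner", "-y",
        "-f", "concat", "-safe", "0", "-i", "/tmp/concat.txt",
        "-c", "copy",                # stream copy: no re-encode
        "-movflags", "+faststart",   # moov atom up front for immediate playback
        "/tmp/replay.mp4",
    ],
    check=True,
    timeout=120,
)

Stream copy only works cleanly when all segments share codec parameters, which holds here because they come from the same camera's recordings.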
@ -52,12 +52,6 @@ class OvDetector(DetectionApi):
        self.h = detector_config.model.height
        self.w = detector_config.model.width

        logger.info(
            "Loading OpenVINO model %s on device %s",
            detector_config.model.path,
            detector_config.device,
        )

        self.runner = OpenVINOModelRunner(
            model_path=detector_config.model.path,
            device=detector_config.device,

@ -4,7 +4,6 @@ import base64
import json
import logging
import os
import sys
import threading
from json.decoder import JSONDecodeError
from multiprocessing.synchronize import Event as MpEvent
@ -53,14 +52,6 @@ class EmbeddingProcess(FrigateProcess):
            self.stop_event,
        )
        maintainer.start()
        maintainer.join()

        # If the maintainer thread exited but no shutdown was requested, it
        # crashed. Surface as a non-zero exit so the watchdog restarts us
        # instead of treating the silent thread death as a clean shutdown.
        if not self.stop_event.is_set():
            logger.error("Embeddings maintainer thread exited unexpectedly")
            sys.exit(1)


class EmbeddingsContext:

@ -517,16 +517,10 @@ class EmbeddingMaintainer(threading.Thread):
            try:
                event: Event = Event.get(Event.id == event_id)
            except DoesNotExist:
                for processor in self.post_processors:
                    if isinstance(processor, ObjectDescriptionProcessor):
                        processor.cleanup_event(event_id)
                continue

            # Skip the event if not an object
            if event.data.get("type") != "object":
                for processor in self.post_processors:
                    if isinstance(processor, ObjectDescriptionProcessor):
                        processor.cleanup_event(event_id)
                continue

            # Extract valid thumbnail

@ -205,7 +205,6 @@ class AudioEventMaintainer(threading.Thread):
            self.transcription_thread.start()

        self.was_enabled = camera.enabled
        self.was_audio_enabled = camera.audio.enabled

    def detect_audio(self, audio: np.ndarray) -> None:
        if not self.camera_config.audio.enabled or self.stop_event.is_set():
@ -364,17 +363,6 @@ class AudioEventMaintainer(threading.Thread):
                time.sleep(0.1)
                continue

            audio_enabled = self.camera_config.audio.enabled
            if audio_enabled != self.was_audio_enabled:
                if not audio_enabled:
                    self.logger.debug(
                        f"Disabling audio detections for {self.camera_config.name}, ending events"
                    )
                    self.requestor.send_data(
                        EXPIRE_AUDIO_ACTIVITY, self.camera_config.name
                    )
                self.was_audio_enabled = audio_enabled

            self.read_audio()

        if self.audio_listener:
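
The removed audio block above is an edge trigger: compare the current flag against the last-seen value and act only on the transition, not on every loop pass. A generic sketch of the pattern, with hypothetical names:

was_enabled = True

def poll(enabled: bool) -> None:
    global was_enabled
    if enabled != was_enabled:  # fires once per toggle, not every tick
        if not enabled:
            print("feature turned off, expire outstanding state")
        was_enabled = enabled

poll(True)   # no-op
poll(False)  # transition: the off-path runs exactly once
poll(False)  # no-op again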
@ -2,7 +2,6 @@

import datetime
import importlib
import json
import logging
import os
import re
@ -10,7 +9,6 @@ from typing import Any, Callable, Optional

import numpy as np
from playhouse.shortcuts import model_to_dict
from pydantic import ValidationError

from frigate.config import CameraConfig, GenAIConfig, GenAIProviderEnum
from frigate.const import CLIPS_DIR
@ -153,6 +151,9 @@ Each line represents a detection state, not necessarily unique individuals. The
        if "other_concerns" in schema.get("required", []):
            schema["required"].remove("other_concerns")

        # OpenAI strict mode requires additionalProperties: false on all objects
        schema["additionalProperties"] = False

        response_format = {
            "type": "json_schema",
            "json_schema": {
@ -180,36 +181,7 @@ Each line represents a detection state, not necessarily unique individuals. The

            try:
                metadata = ReviewMetadata.model_validate_json(clean_json)
            except ValidationError as ve:
                # Constraint violations (length, item count, ranges) are logged
                # at debug and the response is kept anyway — a slightly
                # off-spec answer is still usable, and dropping the whole
                # response loses the narrative content the model produced.
                for err in ve.errors():
                    loc = ".".join(str(p) for p in err["loc"]) or "<root>"
                    logger.debug(
                        "Review metadata soft validation: %s — %s (input: %r)",
                        loc,
                        err["msg"],
                        err.get("input"),
                    )
                try:
                    raw = json.loads(clean_json)
                except json.JSONDecodeError as je:
                    logger.error("Failed to parse review description JSON: %s", je)
                    return None
                # observations and confidence are required on the model; fill an empty default
                # if the response omitted it so attribute access stays safe.
                raw.setdefault("observations", [])
                raw.setdefault("confidence", 0.0)
                metadata = ReviewMetadata.model_construct(**raw)
            except Exception as e:
                logger.error(
                    f"Failed to parse review description as the response did not match expected format. {e}"
                )
                return None

            try:
                # Normalize confidence if model returned a percentage (e.g. 85 instead of 0.85)
                if metadata.confidence > 1.0:
                    metadata.confidence = min(metadata.confidence / 100.0, 1.0)
@ -222,7 +194,10 @@ Each line represents a detection state, not necessarily unique individuals. The
                metadata.time = review_data["start"]
                return metadata
            except Exception as e:
                logger.error(f"Failed to post-process review metadata: {e}")
                # rarely LLMs can fail to follow directions on output format
                logger.warning(
                    f"Failed to parse review description as the response did not match expected format. {e}"
                )
                return None
        else:
            logger.debug(
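
The removed soft-validation path leans on a pydantic detail worth spelling out: model_construct builds an instance without running validators, so an off-spec but parseable response still yields a usable object. A minimal sketch with a hypothetical model:

from pydantic import BaseModel, Field, ValidationError

class Meta(BaseModel):
    title: str = Field(max_length=10)
    confidence: float = Field(ge=0.0, le=1.0)

raw = {"title": "a title that is too long", "confidence": 0.9}
try:
    meta = Meta.model_validate(raw)
except ValidationError:
    meta = Meta.model_construct(**raw)  # keep the data, skip the constraints

print(meta.title)  # attribute access still works on the unvalidated instance

The trade-off is that downstream code can no longer assume the constraints hold, which is why the diffed code backfills required fields like observations and confidence before constructing.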
@ -369,14 +344,6 @@ Guidelines:
        """Get the context window size for this provider in tokens."""
        return 4096

    def estimate_image_tokens(self, width: int, height: int) -> float:
        """Estimate prompt tokens consumed by a single image of the given dimensions.

        Default heuristic: ~1 token per 1250 pixels. Providers that can measure or
        know their model's exact image-token cost should override.
        """
        return (width * height) / 1250

    def embed(
        self,
        texts: list[str] | None = None,

@ -136,44 +136,22 @@ class GeminiClient(GenAIClient):
                    )
                )
            elif role == "assistant":
                parts: list[types.Part] = []
                if content:
                    parts.append(types.Part.from_text(text=content))
                for tc in msg.get("tool_calls") or []:
                    func = tc.get("function") or {}
                    tc_name = func.get("name") or ""
                    tc_args: Any = func.get("arguments")
                    if isinstance(tc_args, str):
                        try:
                            tc_args = json.loads(tc_args)
                        except (json.JSONDecodeError, TypeError):
                            tc_args = {}
                    if not isinstance(tc_args, dict):
                        tc_args = {}
                    if tc_name:
                        parts.append(
                            types.Part.from_function_call(
                                name=tc_name, args=tc_args
                            )
                        )
                if not parts:
                    parts.append(types.Part.from_text(text=" "))
                gemini_messages.append(types.Content(role="model", parts=parts))
                gemini_messages.append(
                    types.Content(
                        role="model", parts=[types.Part.from_text(text=content)]
                    )
                )
            elif role == "tool":
                # Handle tool response
                response_payload = (
                    content if isinstance(content, dict) else {"result": content}
                )
                function_response = {
                    "name": msg.get("name", ""),
                    "response": content,
                }
                gemini_messages.append(
                    types.Content(
                        role="function",
                        parts=[
                            types.Part.from_function_response(
                                name=msg.get("name")
                                or msg.get("tool_call_id")
                                or "",
                                response=response_payload,
                            )
                            types.Part.from_function_response(function_response)  # type: ignore[misc,call-arg,arg-type]
                        ],
                    )
                )
@ -365,44 +343,22 @@ class GeminiClient(GenAIClient):
                    )
                )
            elif role == "assistant":
                parts: list[types.Part] = []
                if content:
                    parts.append(types.Part.from_text(text=content))
                for tc in msg.get("tool_calls") or []:
                    func = tc.get("function") or {}
                    tc_name = func.get("name") or ""
                    tc_args: Any = func.get("arguments")
                    if isinstance(tc_args, str):
                        try:
                            tc_args = json.loads(tc_args)
                        except (json.JSONDecodeError, TypeError):
                            tc_args = {}
                    if not isinstance(tc_args, dict):
                        tc_args = {}
                    if tc_name:
                        parts.append(
                            types.Part.from_function_call(
                                name=tc_name, args=tc_args
                            )
                        )
                if not parts:
                    parts.append(types.Part.from_text(text=" "))
                gemini_messages.append(types.Content(role="model", parts=parts))
                gemini_messages.append(
                    types.Content(
                        role="model", parts=[types.Part.from_text(text=content)]
                    )
                )
            elif role == "tool":
                # Handle tool response
                response_payload = (
                    content if isinstance(content, dict) else {"result": content}
                )
                function_response = {
                    "name": msg.get("name", ""),
                    "response": content,
                }
                gemini_messages.append(
                    types.Content(
                        role="function",
                        parts=[
                            types.Part.from_function_response(
                                name=msg.get("name")
                                or msg.get("tool_call_id")
                                or "",
                                response=response_payload,
                            )
                            types.Part.from_function_response(function_response)  # type: ignore[misc,call-arg,arg-type]
                        ],
                    )
                )

@ -42,9 +42,6 @@ class LlamaCppClient(GenAIClient):
    _supports_vision: bool
    _supports_audio: bool
    _supports_tools: bool
    _image_token_cache: dict[tuple[int, int], int]
    _text_baseline_tokens: int | None
    _media_marker: str

    def _init_provider(self) -> str | None:
        """Initialize the client and query model metadata from the server."""
@ -55,9 +52,6 @@ class LlamaCppClient(GenAIClient):
        self._supports_vision = False
        self._supports_audio = False
        self._supports_tools = False
        self._image_token_cache = {}
        self._text_baseline_tokens = None
        self._media_marker = "<__media__>"

        base_url = (
            self.genai_config.base_url.rstrip("/")
@ -143,13 +137,6 @@ class LlamaCppClient(GenAIClient):
        chat_caps = props.get("chat_template_caps", {})
        self._supports_tools = chat_caps.get("supports_tools", False)

        # Media marker for multimodal embeddings; the server randomizes this
        # per startup unless LLAMA_MEDIA_MARKER is set, so we must read it
        # from /props rather than hardcoding "<__media__>".
        media_marker = props.get("media_marker")
        if isinstance(media_marker, str) and media_marker:
            self._media_marker = media_marker

        logger.info(
            "llama.cpp model '%s' initialized — context: %s, vision: %s, audio: %s, tools: %s",
            configured_model,
@ -285,91 +272,6 @@ class LlamaCppClient(GenAIClient):
            return self._context_size
        return 4096

    def estimate_image_tokens(self, width: int, height: int) -> float:
        """Probe the llama.cpp server to learn the model's image-token cost at the
        requested dimensions.

        llama.cpp's image tokenization is a deterministic function of dimensions and
        the loaded mmproj, so the result is cached per (width, height) for the
        lifetime of the process. Falls back to the base pixel heuristic if the
        server is unreachable or the response is malformed.
        """
        if self.provider is None:
            return super().estimate_image_tokens(width, height)

        cached = self._image_token_cache.get((width, height))

        if cached is not None:
            return cached

        try:
            baseline = self._probe_baseline_tokens()
            with_image = self._probe_image_prompt_tokens(width, height)
            tokens = max(1, with_image - baseline)
        except Exception as e:
            logger.debug(
                "llama.cpp image-token probe failed for %dx%d (%s); using heuristic",
                width,
                height,
                e,
            )
            return super().estimate_image_tokens(width, height)

        self._image_token_cache[(width, height)] = tokens
        logger.debug(
            "llama.cpp model '%s' uses ~%d tokens for %dx%d images",
            self.genai_config.model,
            tokens,
            width,
            height,
        )
        return tokens

    def _probe_baseline_tokens(self) -> int:
        """Return prompt_tokens for a minimal text-only request. Cached after first call."""
        if self._text_baseline_tokens is not None:
            return self._text_baseline_tokens

        self._text_baseline_tokens = self._probe_prompt_tokens(
            [{"type": "text", "text": "."}]
        )
        return self._text_baseline_tokens

    def _probe_image_prompt_tokens(self, width: int, height: int) -> int:
        """Return prompt_tokens for a single synthetic image plus minimal text."""
        img = Image.new("RGB", (width, height), (128, 128, 128))
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=60)
        encoded = base64.b64encode(buf.getvalue()).decode("utf-8")
        return self._probe_prompt_tokens(
            [
                {"type": "text", "text": "."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                },
            ]
        )

    def _probe_prompt_tokens(self, content: list[dict[str, Any]]) -> int:
        """POST a 1-token chat completion and return reported prompt_tokens.

        Uses a generous timeout to absorb a cold model load on the first probe
        when the server lazily loads models on demand (e.g. llama-swap).
        """
        payload = {
            "model": self.genai_config.model,
            "messages": [{"role": "user", "content": content}],
            "max_tokens": 1,
        }
        response = requests.post(
            f"{self.provider}/v1/chat/completions",
            json=payload,
            timeout=60,
        )
        response.raise_for_status()
        return int(response.json()["usage"]["prompt_tokens"])

    def _build_payload(
        self,
        messages: list[dict[str, Any]],
@ -474,11 +376,10 @@ class LlamaCppClient(GenAIClient):
            jpeg_bytes = _to_jpeg(img)
            to_encode = jpeg_bytes if jpeg_bytes is not None else img
            encoded = base64.b64encode(to_encode).decode("utf-8")
            # prompt_string must contain the server's media marker placeholder.
            # The marker is randomized per server startup (read from /props).
            # prompt_string must contain <__media__> placeholder for image tokenization
            content.append(
                {
                    "prompt_string": f"{self._media_marker}\n",
                    "prompt_string": "<__media__>\n",
                    "multimodal_data": [encoded],  # type: ignore[dict-item]
                }
            )

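The removed probe is a differential measurement: send one text-only request and one request carrying a synthetic image, and the per-image cost is the difference in reported prompt_tokens. A standalone sketch against an assumed local llama.cpp server (the endpoint and usage fields follow the OpenAI-compatible API):

import base64
import io

import requests
from PIL import Image

BASE = "http://localhost:8080"  # assumed llama.cpp server address

def prompt_tokens(content: list[dict]) -> int:
    r = requests.post(
        f"{BASE}/v1/chat/completions",
        json={"messages": [{"role": "user", "content": content}], "max_tokens": 1},
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["usage"]["prompt_tokens"]

img = Image.new("RGB", (640, 480), (128, 128, 128))  # flat gray test image
buf = io.BytesIO()
img.save(buf, format="JPEG")
b64 = base64.b64encode(buf.getvalue()).decode()

baseline = prompt_tokens([{"type": "text", "text": "."}])
with_image = prompt_tokens(
    [
        {"type": "text", "text": "."},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
    ]
)
print("tokens per 640x480 image:", with_image - baseline)

Because the image content is irrelevant to the tokenizer (only dimensions and the loaded projector matter), a flat gray image is enough, and the answer can be cached per resolution.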
@ -31,12 +31,6 @@ class OllamaClient(GenAIClient):
    provider: ApiClient | None
    provider_options: dict[str, Any]

    def _auth_headers(self) -> dict | None:
        if self.genai_config.api_key:
            return {"Authorization": "Bearer " + self.genai_config.api_key}

        return None

    def _init_provider(self) -> ApiClient | None:
        """Initialize the client."""
        self.provider_options = {
@ -45,11 +39,7 @@ class OllamaClient(GenAIClient):
        }

        try:
            client = ApiClient(
                host=self.genai_config.base_url,
                timeout=self.timeout,
                headers=self._auth_headers(),
            )
            client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
            # ensure the model is available locally
            response = client.show(self.genai_config.model)
            if response.get("error"):
@ -176,9 +166,7 @@ class OllamaClient(GenAIClient):
            return []
        try:
            client = ApiClient(
                host=self.genai_config.base_url,
                timeout=self.timeout,
                headers=self._auth_headers(),
                host=self.genai_config.base_url, timeout=self.timeout
            )
        except Exception:
            return []
@ -356,7 +344,6 @@ class OllamaClient(GenAIClient):
            async_client = OllamaAsyncClient(
                host=self.genai_config.base_url,
                timeout=self.timeout,
                headers=self._auth_headers(),
            )
            response = await async_client.chat(**request_params)
            result = self._message_from_response(response)
@ -372,7 +359,6 @@ class OllamaClient(GenAIClient):
            async_client = OllamaAsyncClient(
                host=self.genai_config.base_url,
                timeout=self.timeout,
                headers=self._auth_headers(),
            )
            content_parts: list[str] = []
            final_message: dict[str, Any] | None = None

@ -73,17 +73,8 @@ class OpenAIClient(GenAIClient):
            **self.genai_config.runtime_options,
        }
        if response_format:
            # OpenAI strict mode requires additionalProperties: false on the schema
            if response_format.get("type") == "json_schema" and response_format.get(
                "json_schema", {}
            ).get("strict"):
                schema = response_format.get("json_schema", {}).get("schema")
                if isinstance(schema, dict):
                    schema["additionalProperties"] = False
            request_params["response_format"] = response_format

        result = self.provider.chat.completions.create(**request_params)

        if (
            result is not None
            and hasattr(result, "choices")

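For context on the strict-mode handling in that hunk, this is the shape of payload being patched. Under OpenAI's strict structured outputs, every object schema must carry additionalProperties: false, which is what both sides of this diff enforce in different places (the schema name and fields here are hypothetical):

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "review_metadata",  # hypothetical schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "confidence": {"type": "number"},
            },
            "required": ["title", "confidence"],
            "additionalProperties": False,  # required by strict mode
        },
    },
}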
@ -1,386 +0,0 @@
"""Debug replay startup job: ffmpeg concat + camera config publish.

The runner orchestrates the async portion of starting a debug replay
session. The DebugReplayManager (in frigate.debug_replay) owns session
presence so the status bar can keep reading a single `active` flag from
/debug_replay/status for the entire session window — which is broader
than this job's lifetime.
"""

import logging
import os
import subprocess as sp
import threading
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Optional, cast

from peewee import ModelSelect

from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.const import REPLAY_CAMERA_PREFIX, REPLAY_DIR
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.job import Job
from frigate.jobs.manager import job_is_running, set_current_job
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum
from frigate.util.ffmpeg import run_ffmpeg_with_progress

if TYPE_CHECKING:
    from frigate.debug_replay import DebugReplayManager

logger = logging.getLogger(__name__)

# Coalesce frequent ffmpeg progress callbacks so the WS isn't flooded.
PROGRESS_BROADCAST_MIN_INTERVAL = 1.0

JOB_TYPE = "debug_replay"

STEP_PREPARING_CLIP = "preparing_clip"
STEP_STARTING_CAMERA = "starting_camera"


_active_runner: Optional["DebugReplayJobRunner"] = None
_runner_lock = threading.Lock()


def _set_active_runner(runner: Optional["DebugReplayJobRunner"]) -> None:
    global _active_runner
    with _runner_lock:
        _active_runner = runner


def get_active_runner() -> Optional["DebugReplayJobRunner"]:
    with _runner_lock:
        return _active_runner


@dataclass
class DebugReplayJob(Job):
    """Job state for a debug replay startup."""

    job_type: str = JOB_TYPE
    source_camera: str = ""
    replay_camera_name: str = ""
    start_ts: float = 0.0
    end_ts: float = 0.0
    current_step: Optional[str] = None
    progress_percent: float = 0.0

    def to_dict(self) -> dict[str, Any]:
        """Whitelisted payload for the job_state WS topic.

        Replay-specific fields land in results so the frontend's
        generic Job<TResults> type can be parameterised cleanly.
        """
        return {
            "id": self.id,
            "job_type": self.job_type,
            "status": self.status,
            "start_time": self.start_time,
            "end_time": self.end_time,
            "error_message": self.error_message,
            "results": {
                "current_step": self.current_step,
                "progress_percent": self.progress_percent,
                "source_camera": self.source_camera,
                "replay_camera_name": self.replay_camera_name,
                "start_ts": self.start_ts,
                "end_ts": self.end_ts,
            },
        }


def query_recordings(source_camera: str, start_ts: float, end_ts: float) -> ModelSelect:
    """Return the Recordings query for the time range.

    Module-level so tests can patch it without instantiating a runner.
    """
    query = (
        Recordings.select(
            Recordings.path,
            Recordings.start_time,
            Recordings.end_time,
        )
        .where(
            Recordings.start_time.between(start_ts, end_ts)
            | Recordings.end_time.between(start_ts, end_ts)
            | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
        )
        .where(Recordings.camera == source_camera)
        .order_by(Recordings.start_time.asc())
    )
    return cast(ModelSelect, query)


class DebugReplayJobRunner(threading.Thread):
    """Worker thread that drives the startup job to completion.

    Owns the live ffmpeg Popen reference for cancellation. Cancellation
    is two-step (threading.Event + proc.terminate()) so the runner
    both knows it should stop and is unblocked from its blocking subprocess
    wait.
    """

    def __init__(
        self,
        job: DebugReplayJob,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
        replay_manager: "DebugReplayManager",
        publisher: Optional[JobStatePublisher] = None,
    ) -> None:
        super().__init__(daemon=True, name=f"debug_replay_{job.id}")
        self.job = job
        self.frigate_config = frigate_config
        self.config_publisher = config_publisher
        self.replay_manager = replay_manager
        self.publisher = publisher if publisher is not None else JobStatePublisher()
        self._cancel_event = threading.Event()
        self._active_process: sp.Popen | None = None
        self._proc_lock = threading.Lock()
        self._last_broadcast_monotonic: float = 0.0

    def cancel(self) -> None:
        """Request cancellation. Idempotent."""
        self._cancel_event.set()
        with self._proc_lock:
            proc = self._active_process
        if proc is not None:
            try:
                proc.terminate()
            except Exception as exc:
                logger.warning("Failed to terminate ffmpeg subprocess: %s", exc)

    def is_cancelled(self) -> bool:
        return self._cancel_event.is_set()

    def _record_proc(self, proc: sp.Popen) -> None:
        with self._proc_lock:
            self._active_process = proc
            # Race: cancel arrived between Popen and _record_proc.
            if self._cancel_event.is_set():
                try:
                    proc.terminate()
                except Exception:
                    pass

    def _broadcast(self, force: bool = False) -> None:
        now = time.monotonic()
        if (
            not force
            and now - self._last_broadcast_monotonic < PROGRESS_BROADCAST_MIN_INTERVAL
        ):
            return
        self._last_broadcast_monotonic = now

        try:
            self.publisher.publish(self.job.to_dict())
        except Exception as err:
            logger.warning("Publisher raised during job state broadcast: %s", err)

    def run(self) -> None:
        replay_name = self.job.replay_camera_name
        os.makedirs(REPLAY_DIR, exist_ok=True)
        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")

        self.job.status = JobStatusTypesEnum.running
        self.job.start_time = time.time()
        self.job.current_step = STEP_PREPARING_CLIP
        self._broadcast(force=True)

        try:
            recordings = query_recordings(
                self.job.source_camera, self.job.start_ts, self.job.end_ts
            )
            with open(concat_file, "w") as f:
                for recording in recordings:
                    f.write(f"file '{recording.path}'\n")

            ffmpeg_cmd = [
                self.frigate_config.ffmpeg.ffmpeg_path,
                "-hide_banner",
                "-y",
                "-f",
                "concat",
                "-safe",
                "0",
                "-i",
                concat_file,
                "-c",
                "copy",
                "-movflags",
                "+faststart",
                clip_path,
            ]

            logger.info(
                "Generating replay clip for %s (%.1f - %.1f)",
                self.job.source_camera,
                self.job.start_ts,
                self.job.end_ts,
            )

            def _on_progress(percent: float) -> None:
                self.job.progress_percent = percent
                self._broadcast()

            try:
                returncode, stderr = run_ffmpeg_with_progress(
                    ffmpeg_cmd,
                    expected_duration_seconds=max(
                        0.0, self.job.end_ts - self.job.start_ts
                    ),
                    on_progress=_on_progress,
                    process_started=self._record_proc,
                    use_low_priority=True,
                )
            finally:
                with self._proc_lock:
                    self._active_process = None

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            if returncode != 0:
                raise RuntimeError(f"FFmpeg failed: {stderr[-500:]}")

            if not os.path.exists(clip_path):
                raise RuntimeError("Clip file was not created")

            self.job.current_step = STEP_STARTING_CAMERA
            self.job.progress_percent = 100.0
            self._broadcast(force=True)

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            self.replay_manager.publish_camera(
                source_camera=self.job.source_camera,
                replay_name=replay_name,
                clip_path=clip_path,
                frigate_config=self.frigate_config,
                config_publisher=self.config_publisher,
            )
            self.replay_manager.mark_session_ready(clip_path)

            self.job.status = JobStatusTypesEnum.success
            self.job.end_time = time.time()
            self._broadcast(force=True)
            logger.info(
                "Debug replay started: %s -> %s",
                self.job.source_camera,
                replay_name,
            )
        except Exception as exc:
            logger.exception("Debug replay startup failed")
            self.job.status = JobStatusTypesEnum.failed
            self.job.error_message = str(exc)
            self.job.end_time = time.time()
            self._broadcast(force=True)
            self.replay_manager.clear_session()
            _remove_silent(clip_path)
        finally:
            _remove_silent(concat_file)
            _set_active_runner(None)

    def _finalize_cancelled(self, clip_path: str) -> None:
        logger.info("Debug replay startup cancelled")
        self.job.status = JobStatusTypesEnum.cancelled
        self.job.end_time = time.time()
        self._broadcast(force=True)
        # The caller of cancel_debug_replay_job (DebugReplayManager.stop) owns
        # session cleanup — db rows, filesystem artifacts, clear_session. We
        # only clean up the partial concat output we created.
        _remove_silent(clip_path)


def _remove_silent(path: str) -> None:
    try:
        if os.path.exists(path):
            os.remove(path)
    except OSError:
        pass


def start_debug_replay_job(
    *,
    source_camera: str,
    start_ts: float,
    end_ts: float,
    frigate_config: FrigateConfig,
    config_publisher: CameraConfigUpdatePublisher,
    replay_manager: "DebugReplayManager",
) -> str:
    """Validate, create job, start runner. Returns the job id.

    Raises ValueError for bad params (camera missing, time range
    invalid, no recordings) and RuntimeError if a session is already
    active.
    """
    if job_is_running(JOB_TYPE) or replay_manager.active:
        raise RuntimeError("A replay session is already active")

    if source_camera not in frigate_config.cameras:
        raise ValueError(f"Camera '{source_camera}' not found")

    if end_ts <= start_ts:
        raise ValueError("End time must be after start time")

    recordings = query_recordings(source_camera, start_ts, end_ts)
    if not recordings.count():
        raise ValueError(
            f"No recordings found for camera '{source_camera}' in the specified time range"
        )

    replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
    replay_manager.mark_starting(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )

    job = DebugReplayJob(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )
    set_current_job(job)

    runner = DebugReplayJobRunner(
        job=job,
        frigate_config=frigate_config,
        config_publisher=config_publisher,
        replay_manager=replay_manager,
    )
    _set_active_runner(runner)
    runner.start()

    return job.id


def cancel_debug_replay_job() -> bool:
    """Signal the active runner to cancel.

    Returns True if a runner was signalled, False if no job was active.
    """
    runner = get_active_runner()
    if runner is None:
        return False
    runner.cancel()
    return True


def wait_for_runner(timeout: float = 2.0) -> bool:
    """Join the active runner. Returns True if the runner ended in time."""
    runner = get_active_runner()
    if runner is None:
        return True
    runner.join(timeout=timeout)
    return not runner.is_alive()
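
The two-step cancellation in the deleted runner generalizes: a threading.Event tells the loop it should stop, and terminating the child process unblocks any blocking wait so the event is actually observed. A minimal sketch under those assumptions (POSIX, using sleep as a stand-in for ffmpeg):

import subprocess
import threading

cancel = threading.Event()
proc_lock = threading.Lock()
active: subprocess.Popen | None = None

def worker() -> None:
    global active
    with proc_lock:
        active = subprocess.Popen(["sleep", "60"])
        if cancel.is_set():       # close the Popen-vs-cancel race
            active.terminate()
    active.wait()                 # returns early if terminate() fired

def cancel_worker() -> None:
    cancel.set()                  # step 1: flag intent
    with proc_lock:
        if active is not None:
            active.terminate()    # step 2: unblock the blocking wait

Setting the event alone would leave the worker stuck in wait(); terminating alone would leave the worker unsure whether the exit was a cancel or a crash. Doing both resolves each ambiguity.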
@ -8,6 +8,7 @@ import os
import queue
import subprocess as sp
import threading
import time
import traceback
from multiprocessing.synchronize import Event as MpEvent
from typing import Any, Optional
@ -18,7 +19,6 @@ import numpy as np

from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import BirdseyeModeEnum, FfmpegConfig, FrigateConfig
from frigate.const import BASE_DIR, BIRDSEYE_PIPE, INSTALL_DIR, UPDATE_BIRDSEYE_LAYOUT
from frigate.output.ws_auth import ws_has_camera_access
from frigate.util.image import (
    SharedMemoryFrameManager,
    copy_yuv_to_position,
@ -236,14 +236,12 @@ class BroadcastThread(threading.Thread):
        converter: FFMpegConverter,
        websocket_server: Any,
        stop_event: MpEvent,
        config: FrigateConfig,
    ):
        super().__init__()
        self.camera = camera
        self.converter = converter
        self.websocket_server = websocket_server
        self.stop_event = stop_event
        self.config = config

    def run(self) -> None:
        while not self.stop_event.is_set():
@ -258,7 +256,6 @@ class BroadcastThread(threading.Thread):
                if (
                    not ws.terminated
                    and ws.environ["PATH_INFO"] == f"/{self.camera}"
                    and ws_has_camera_access(ws, self.camera, self.config)
                ):
                    try:
                        ws.send(buf, binary=True)
@ -809,11 +806,7 @@ class Birdseye:
            config.birdseye.restream,
        )
        self.broadcaster = BroadcastThread(
            "birdseye",
            self.converter,
            websocket_server,
            stop_event,
            config,
            "birdseye", self.converter, websocket_server, stop_event
        )
        self.birdseye_manager = BirdsEyeFrameManager(self.config, stop_event)
        self.frame_manager = SharedMemoryFrameManager()
@ -881,7 +874,7 @@ class Birdseye:
        coordinates = self.birdseye_manager.get_camera_coordinates()
        self.requestor.send_data(UPDATE_BIRDSEYE_LAYOUT, coordinates)
        if self._idle_interval:
            now = datetime.datetime.now().timestamp()
            now = time.monotonic()
            is_idle = len(self.birdseye_manager.camera_layout) == 0
            if (
                is_idle
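
The now = time.monotonic() vs datetime.now().timestamp() pair above reflects a standard choice for measuring intervals: wall-clock time can jump backward or forward (NTP adjustments, DST), while the monotonic clock only moves forward. A sketch of interval checking under that assumption, with a hypothetical period:

import time

IDLE_INTERVAL = 30.0  # hypothetical check period, seconds
last_check = time.monotonic()

def maybe_check() -> bool:
    global last_check
    now = time.monotonic()  # immune to wall-clock jumps
    if now - last_check >= IDLE_INTERVAL:
        last_check = now
        return True
    return False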
@ -7,8 +7,7 @@ import threading
from multiprocessing.synchronize import Event as MpEvent
from typing import Any

from frigate.config import CameraConfig, FfmpegConfig, FrigateConfig
from frigate.output.ws_auth import ws_has_camera_access
from frigate.config import CameraConfig, FfmpegConfig

logger = logging.getLogger(__name__)

@ -103,14 +102,12 @@ class BroadcastThread(threading.Thread):
        converter: FFMpegConverter,
        websocket_server: Any,
        stop_event: MpEvent,
        config: FrigateConfig,
    ):
        super().__init__()
        self.camera = camera
        self.converter = converter
        self.websocket_server = websocket_server
        self.stop_event = stop_event
        self.config = config

    def run(self) -> None:
        while not self.stop_event.is_set():
@ -125,7 +122,6 @@ class BroadcastThread(threading.Thread):
                    if (
                        not ws.terminated
                        and ws.environ["PATH_INFO"] == f"/{self.camera}"
                        and ws_has_camera_access(ws, self.camera, self.config)
                    ):
                        try:
                            ws.send(buf, binary=True)
@ -139,11 +135,7 @@ class BroadcastThread(threading.Thread):

class JsmpegCamera:
    def __init__(
        self,
        config: CameraConfig,
        frigate_config: FrigateConfig,
        stop_event: MpEvent,
        websocket_server: Any,
        self, config: CameraConfig, stop_event: MpEvent, websocket_server: Any
    ) -> None:
        self.config = config
        self.input: queue.Queue[bytes] = queue.Queue(maxsize=config.detect.fps)
@ -162,11 +154,7 @@ class JsmpegCamera:
            config.live.quality,
        )
        self.broadcaster = BroadcastThread(
            config.name or "",
            self.converter,
            websocket_server,
            stop_event,
            frigate_config,
            config.name or "", self.converter, websocket_server, stop_event
        )

        self.converter.start()

@ -32,7 +32,6 @@ from frigate.const import (
from frigate.output.birdseye import Birdseye
from frigate.output.camera import JsmpegCamera
from frigate.output.preview import PreviewRecorder
from frigate.output.ws_auth import ws_has_camera_access
from frigate.util.image import SharedMemoryFrameManager, get_blank_yuv_frame
from frigate.util.process import FrigateProcess

@ -103,7 +102,7 @@ class OutputProcess(FrigateProcess):
    ) -> None:
        camera_config = self.config.cameras[camera]
        jsmpeg_cameras[camera] = JsmpegCamera(
            camera_config, self.config, self.stop_event, websocket_server
            camera_config, self.stop_event, websocket_server
        )
        preview_recorders[camera] = PreviewRecorder(camera_config)
        preview_write_times[camera] = 0
@ -263,7 +262,6 @@ class OutputProcess(FrigateProcess):
            # send camera frame to ffmpeg process if websockets are connected
            if any(
                ws.environ["PATH_INFO"].endswith(camera)
                and ws_has_camera_access(ws, camera, self.config)
                for ws in websocket_server.manager
            ):
                # write to the converter for the camera if clients are listening to the specific camera
@ -277,7 +275,6 @@ class OutputProcess(FrigateProcess):
                self.config.birdseye.restream
                or any(
                    ws.environ["PATH_INFO"].endswith("birdseye")
                    and ws_has_camera_access(ws, "birdseye", self.config)
                    for ws in websocket_server.manager
                )
            )
@ -349,13 +346,6 @@ def move_preview_frames(loc: str) -> None:
        if not os.path.exists(preview_holdover):
            return

        if not os.access(preview_holdover, os.R_OK | os.W_OK):
            logger.error(
                "Insufficient permissions on preview restart cache at %s",
                preview_holdover,
            )
            return

        shutil.move(preview_holdover, preview_cache)
    except shutil.Error:
        logger.error("Failed to restore preview cache.")

@ -361,17 +361,14 @@ class PreviewRecorder:
                small_frame,
                cv2.COLOR_YUV2BGR_I420,
            )
            cache_path = get_cache_image_name(self.camera_name, frame_time)

            if not cv2.imwrite(
                cache_path,
            cv2.imwrite(
                get_cache_image_name(self.camera_name, frame_time),
                small_frame,
                [
                    int(cv2.IMWRITE_WEBP_QUALITY),
                    PREVIEW_QUALITY_WEBP[self.config.record.preview.quality],
                ],
            ):
                logger.error("Failed to write preview frame to %s", cache_path)
            )

    def write_data(
        self,

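The `if not cv2.imwrite(...)` guard in this hunk exists because OpenCV signals most write failures through the boolean return value rather than an exception. A self-contained sketch, with a dummy frame and a hypothetical path:

    import cv2
    import numpy as np

    frame = np.zeros((180, 320, 3), dtype=np.uint8)  # placeholder BGR frame
    ok = cv2.imwrite("/tmp/preview.webp", frame, [int(cv2.IMWRITE_WEBP_QUALITY), 60])
    if not ok:
        # a bad path, a full disk, or a missing encoder all land here silently
        print("preview write failed")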
@ -1,43 +0,0 @@
"""Authorization helpers for JSMPEG websocket clients."""

from typing import Any

from frigate.config import FrigateConfig
from frigate.models import User


def _get_valid_ws_roles(ws: Any, config: FrigateConfig) -> list[str]:
    role_header = ws.environ.get("HTTP_REMOTE_ROLE", "")
    roles = [
        role.strip()
        for role in role_header.split(config.proxy.separator)
        if role.strip()
    ]
    return [role for role in roles if role in config.auth.roles]


def ws_has_camera_access(ws: Any, camera_name: str, config: FrigateConfig) -> bool:
    """Return True when a websocket client is authorized for the camera path."""
    roles = _get_valid_ws_roles(ws, config)

    if not roles:
        return False

    roles_dict = config.auth.roles

    # Birdseye is a composite stream, so only users with unrestricted access
    # should receive it.
    if camera_name == "birdseye":
        return any(role == "admin" or not roles_dict.get(role) for role in roles)

    all_camera_names = set(config.cameras.keys())

    for role in roles:
        if role == "admin" or not roles_dict.get(role):
            return True

        allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
        if camera_name in allowed_cameras:
            return True

    return False
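
To make the removed helper's header handling concrete, here is a worked example of the parsing it performed, assuming the proxy separator is "," and only "admin" and "viewer" are configured roles:

    role_header = " admin, viewer , ghost "
    roles = [r.strip() for r in role_header.split(",") if r.strip()]
    # -> ["admin", "viewer", "ghost"]
    configured = {"admin": [], "viewer": ["front"]}
    valid = [r for r in roles if r in configured]
    # -> ["admin", "viewer"]; "ghost" is dropped as unconfigured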
@ -13,7 +13,6 @@ from enum import Enum
from pathlib import Path
from typing import Callable, Optional

import pytz  # type: ignore[import-untyped]
from peewee import DoesNotExist

from frigate.config import FfmpegConfig, FrigateConfig
@ -23,13 +22,13 @@ from frigate.const import (
    EXPORT_DIR,
    MAX_PLAYLIST_SECONDS,
    PREVIEW_FRAME_TYPE,
    PROCESS_PRIORITY_LOW,
)
from frigate.ffmpeg_presets import (
    EncodeTypeEnum,
    parse_preset_hardware_acceleration_encode,
)
from frigate.models import Export, Previews, Recordings, ReviewSegment
from frigate.util.ffmpeg import run_ffmpeg_with_progress
from frigate.models import Export, Previews, Recordings
from frigate.util.time import is_current_hour

logger = logging.getLogger(__name__)
@ -243,171 +242,110 @@ class RecordingExporter(threading.Thread):

        return total

    def _inject_progress_flags(self, ffmpeg_cmd: list[str]) -> list[str]:
        """Insert FFmpeg progress reporting flags before the output path.

        ``-progress pipe:2`` writes structured key=value lines to stderr,
        ``-nostats`` suppresses the noisy default stats output.
        """
        if not ffmpeg_cmd:
            return ffmpeg_cmd
        return ffmpeg_cmd[:-1] + ["-progress", "pipe:2", "-nostats", ffmpeg_cmd[-1]]

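With those flags, FFmpeg writes lines such as `out_time_us=45000000` and a final `progress=end` to stderr. A worked instance of the percent mapping the parser further down applies to those lines (numbers hypothetical):

    expected_duration = 120.0              # seconds the export should contain
    line = "out_time_us=45000000"          # 45s of output encoded so far
    out_seconds = int(line.split("=", 1)[1]) / 1_000_000.0
    percent = (out_seconds / expected_duration) * 100.0  # -> 37.5
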
    def _run_ffmpeg_with_progress(
        self,
        ffmpeg_cmd: list[str],
        playlist_lines: str | list[str],
        step: str = "encoding",
    ) -> tuple[int, str]:
        """Delegate to the shared helper, mapping percent → (step, percent).
        """Run an FFmpeg export command, parsing progress events from stderr.

        Returns ``(returncode, captured_stderr)``.
        Returns ``(returncode, captured_stderr)``. Stdout is left attached to
        the parent process so we don't have to drain it (and risk a deadlock
        if the buffer fills). Progress percent is computed against the
        expected output duration; values are clamped to [0, 100] inside
        :py:meth:`_emit_progress`.
        """
        cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + self._inject_progress_flags(
            ffmpeg_cmd
        )

        if isinstance(playlist_lines, list):
            stdin_payload = "\n".join(playlist_lines)
        else:
            stdin_payload = playlist_lines

        return run_ffmpeg_with_progress(
            ffmpeg_cmd,
            expected_duration_seconds=self._expected_output_duration_seconds(),
            on_progress=lambda percent: self._emit_progress(step, percent),
            stdin_payload=stdin_payload,
            use_low_priority=True,
        expected_duration = self._expected_output_duration_seconds()

        self._emit_progress(step, 0.0)

        proc = sp.Popen(
            cmd,
            stdin=sp.PIPE,
            stderr=sp.PIPE,
            text=True,
            encoding="ascii",
            errors="replace",
        )

    def get_datetime_from_timestamp(self, timestamp: int) -> str:
        # return in iso format using the configured ui.timezone when set,
        # so the auto-generated export name reflects local time rather
        # than the container's UTC clock
        tz_name = self.config.ui.timezone
        if tz_name:
        assert proc.stdin is not None
        assert proc.stderr is not None

        try:
            proc.stdin.write(stdin_payload)
        except (BrokenPipeError, OSError):
            # FFmpeg may have rejected the input early; still wait for it
            # to terminate so the returncode is meaningful.
            pass
        finally:
            try:
                tz = pytz.timezone(tz_name)
            except pytz.UnknownTimeZoneError:
                tz = None
            if tz is not None:
                return datetime.datetime.fromtimestamp(timestamp, tz=tz).strftime(
                    "%Y-%m-%d %H:%M:%S"
                )
        return datetime.datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")
                proc.stdin.close()
            except (BrokenPipeError, OSError):
                pass

    def _chapter_metadata_path(self) -> str:
        return os.path.join(CACHE_DIR, f"export_chapters_{self.export_id}.txt")

    def _build_chapter_metadata_file(self, recordings: list) -> Optional[str]:
        """Write an FFmpeg metadata file with chapters for review items in range.

        Chapter offsets are computed in *output time*: the VOD endpoint
        concatenates recording clips back-to-back, so wall-clock gaps
        between recordings collapse in the produced video. We walk the
        same recording rows that feed the playlist and convert each
        review item's wall-clock boundaries into output-time offsets.
        Returns ``None`` when there are no recordings, no review items,
        or any chapter would have zero output duration.
        """
        if not recordings:
            return None

        windows: list[tuple[float, float, float]] = []
        output_offset = 0.0
        for rec in recordings:
            clipped_start = max(float(rec.start_time), float(self.start_time))
            clipped_end = min(float(rec.end_time), float(self.end_time))
            if clipped_end <= clipped_start:
                continue
            windows.append((clipped_start, clipped_end, output_offset))
            output_offset += clipped_end - clipped_start

        if not windows:
            return None
        captured: list[str] = []

        try:
            review_rows = list(
                ReviewSegment.select(
                    ReviewSegment.start_time,
                    ReviewSegment.end_time,
                    ReviewSegment.severity,
                    ReviewSegment.data,
                )
                .where(
                    ReviewSegment.start_time.between(self.start_time, self.end_time)
                    | ReviewSegment.end_time.between(self.start_time, self.end_time)
                    | (
                        (self.start_time > ReviewSegment.start_time)
                        & (self.end_time < ReviewSegment.end_time)
                    )
                )
                .where(ReviewSegment.camera == self.camera)
                .order_by(ReviewSegment.start_time.asc())
                .iterator()
            )
            for raw_line in proc.stderr:
                captured.append(raw_line)
                line = raw_line.strip()

                if not line:
                    continue

                if line.startswith("out_time_us="):
                    if expected_duration <= 0:
                        continue
                    try:
                        out_time_us = int(line.split("=", 1)[1])
                    except (ValueError, IndexError):
                        continue
                    if out_time_us < 0:
                        continue
                    out_seconds = out_time_us / 1_000_000.0
                    percent = (out_seconds / expected_duration) * 100.0
                    self._emit_progress(step, percent)
                elif line == "progress=end":
                    self._emit_progress(step, 100.0)
                    break
        except Exception:
            logger.exception(
                "Failed to query review segments for export %s", self.export_id
            )
            return None
            logger.exception("Failed reading FFmpeg progress for %s", self.export_id)

        if not review_rows:
            return None
        proc.wait()

        total_output = windows[-1][2] + (windows[-1][1] - windows[-1][0])
        last_recorded_end = windows[-1][1]

        def wall_to_output(t: float) -> float:
            t = max(float(self.start_time), min(float(self.end_time), t))
            for w_start, w_end, w_offset in windows:
                if t < w_start:
                    return w_offset
                if t <= w_end:
                    return w_offset + (t - w_start)
            return total_output

        chapter_blocks: list[str] = []
        for review in review_rows:
            if review.start_time is None:
                continue
            # In-progress segments have a NULL end_time until the activity
            # closes; clamp to the last recorded second so the chapter never
            # extends past the actual video.
            review_end = (
                float(review.end_time)
                if review.end_time is not None
                else last_recorded_end
            )
            start_out = wall_to_output(float(review.start_time))
            end_out = wall_to_output(review_end)

            # Drop chapters that fall entirely in a recording gap, or are
            # too short to be navigable in a player.
            if end_out - start_out < 1.0:
                continue

            data = review.data or {}
            labels: list[str] = []
            for obj in data.get("objects") or []:
                label = str(obj).split("-")[0]
                if label and label not in labels:
                    labels.append(label)

            title = str(review.severity).capitalize()
            if labels:
                title = f"{title}: {', '.join(labels)}"

            chapter_blocks.append(
                "[CHAPTER]\n"
                "TIMEBASE=1/1000\n"
                f"START={int(start_out * 1000)}\n"
                f"END={int(end_out * 1000)}\n"
                f"title={title}"
            )

        if not chapter_blocks:
            return None

        meta_path = self._chapter_metadata_path()
        # Drain any remaining stderr so callers can log it on failure.
        try:
            with open(meta_path, "w", encoding="utf-8") as f:
                f.write(";FFMETADATA1\n")
                f.write("\n".join(chapter_blocks))
                f.write("\n")
        except OSError:
            logger.exception(
                "Failed to write chapter metadata file for export %s", self.export_id
            )
            return None
            remaining = proc.stderr.read()
            if remaining:
                captured.append(remaining)
        except Exception:
            pass

        return meta_path
        return proc.returncode, "".join(captured)

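Rendered to disk, the metadata file this builder writes follows FFmpeg's FFMETADATA1 chapter syntax. A small illustrative instance with one chapter, offsets hypothetical and in milliseconds per the `TIMEBASE=1/1000` above:

    ;FFMETADATA1
    [CHAPTER]
    TIMEBASE=1/1000
    START=12000
    END=47500
    title=Alert: person
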
    def get_datetime_from_timestamp(self, timestamp: int) -> str:
        # return in iso format
        return datetime.datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")

    def save_thumbnail(self, id: str) -> str:
        thumb_path = os.path.join(CLIPS_DIR, f"export/{id}.webp")
@ -449,14 +387,16 @@ class RecordingExporter(threading.Thread):
        except DoesNotExist:
            return ""

        diff = max(0.0, float(self.start_time) - float(preview.start_time))
        diff = self.start_time - preview.start_time
        minutes = int(diff / 60)
        seconds = int(diff % 60)
        ffmpeg_cmd = [
            "/usr/lib/ffmpeg/7.0/bin/ffmpeg",  # hardcode path for exports thumbnail due to missing libwebp support
            "-hide_banner",
            "-loglevel",
            "warning",
            "-ss",
            f"{diff:.3f}",
            f"00:{minutes}:{seconds}",
            "-i",
            preview.path,
            "-frames",
@ -482,18 +422,12 @@ class RecordingExporter(threading.Thread):
        start_file = f"{file_start}{self.start_time}.{PREVIEW_FRAME_TYPE}"
        end_file = f"{file_start}{self.end_time}.{PREVIEW_FRAME_TYPE}"
        selected_preview = None
        # Preview frames are written at most 1-2 fps during activity
        # and as little as one every 30s during quiet periods, so a
        # short export window can contain zero frames. Track the most
        # recent frame before the window as a fallback.
        fallback_preview = None

        for file in sorted(os.listdir(preview_dir)):
            if not file.startswith(file_start):
                continue

            if file < start_file:
                fallback_preview = os.path.join(preview_dir, file)
                continue

            if file > end_file:
@ -502,9 +436,6 @@ class RecordingExporter(threading.Thread):
            selected_preview = os.path.join(preview_dir, file)
            break

        if not selected_preview:
            selected_preview = fallback_preview

        if not selected_preview:
            return ""

@ -520,24 +451,6 @@ class RecordingExporter(threading.Thread):
        if type(internal_port) is str:
            internal_port = int(internal_port.split(":")[-1])

        recordings = list(
            Recordings.select(
                Recordings.start_time,
                Recordings.end_time,
            )
            .where(
                Recordings.start_time.between(self.start_time, self.end_time)
                | Recordings.end_time.between(self.start_time, self.end_time)
                | (
                    (self.start_time > Recordings.start_time)
                    & (self.end_time < Recordings.end_time)
                )
            )
            .where(Recordings.camera == self.camera)
            .order_by(Recordings.start_time.asc())
            .iterator()
        )

        playlist_lines: list[str] = []
        if (self.end_time - self.start_time) <= MAX_PLAYLIST_SECONDS:
            playlist_url = f"http://127.0.0.1:{internal_port}/vod/{self.camera}/start/{self.start_time}/end/{self.end_time}/index.m3u8"
@ -545,13 +458,32 @@ class RecordingExporter(threading.Thread):
                f"-y -protocol_whitelist pipe,file,http,tcp -i {playlist_url}"
            )
        else:
            # Chunk the recording rows into pages so each playlist line
            # references a bounded sub-range rather than the full export.
            # get full set of recordings
            export_recordings = (
                Recordings.select(
                    Recordings.start_time,
                    Recordings.end_time,
                )
                .where(
                    Recordings.start_time.between(self.start_time, self.end_time)
                    | Recordings.end_time.between(self.start_time, self.end_time)
                    | (
                        (self.start_time > Recordings.start_time)
                        & (self.end_time < Recordings.end_time)
                    )
                )
                .where(Recordings.camera == self.camera)
                .order_by(Recordings.start_time.asc())
            )

            # Use pagination to process records in chunks
            page_size = 1000
            for i in range(0, len(recordings), page_size):
                chunk = recordings[i : i + page_size]
            num_pages = (export_recordings.count() + page_size - 1) // page_size

            for page in range(1, num_pages + 1):
                playlist = export_recordings.paginate(page, page_size)
                playlist_lines.append(
                    f"file 'http://127.0.0.1:{internal_port}/vod/{self.camera}/start/{float(chunk[0].start_time)}/end/{float(chunk[-1].end_time)}/index.m3u8'"
                    f"file 'http://127.0.0.1:{internal_port}/vod/{self.camera}/start/{float(playlist[0].start_time)}/end/{float(playlist[-1].end_time)}/index.m3u8'"
                )

            ffmpeg_input = "-y -protocol_whitelist pipe,file,http,tcp -f concat -safe 0 -i /dev/stdin"
@ -572,12 +504,8 @@ class RecordingExporter(threading.Thread):
            )
        ).split(" ")
        else:
        chapters_path = self._build_chapter_metadata_file(recordings)
        chapter_args = (
            f" -i {chapters_path} -map 0 -map_metadata 1" if chapters_path else ""
        )
        ffmpeg_cmd = (
            f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input}{chapter_args} -c copy -movflags +faststart"
            f"{self.config.ffmpeg.ffmpeg_path} -hide_banner {ffmpeg_input} -c copy -movflags +faststart"
        ).split(" ")

        # add metadata
@ -763,8 +691,6 @@ class RecordingExporter(threading.Thread):
            ffmpeg_cmd, playlist_lines, step="encoding_retry"
        )

        Path(self._chapter_metadata_path()).unlink(missing_ok=True)

        if returncode != 0:
            logger.error(
                f"Failed to export {self.playback_source.value} for command {' '.join(ffmpeg_cmd)}"

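The paginated playlist branch above sizes its loop with ceiling division. A worked instance of the arithmetic (row count hypothetical):

    page_size = 1000
    count = 2500  # hypothetical recording rows in the export window
    num_pages = (count + page_size - 1) // page_size  # -> 3
    # pages 1 and 2 hold 1000 rows each; page 3 holds the remaining 500
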
@ -1,123 +0,0 @@
"""Tests for /debug_replay API endpoints."""

from unittest.mock import patch

from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp


class TestDebugReplayAPI(BaseTestHttp):
    def setUp(self):
        super().setUp([Event, Recordings, ReviewSegment])
        self.app = self.create_app()

    def test_start_returns_202_with_job_id(self):
        # Stub the factory to skip validation/threading and just record the
        # name on the manager the way the real factory's mark_starting would.
        def fake_start(**kwargs):
            kwargs["replay_manager"].mark_starting(
                source_camera=kwargs["source_camera"],
                replay_camera_name="_replay_front",
                start_ts=kwargs["start_ts"],
                end_ts=kwargs["end_ts"],
            )
            return "job-1234"

        with patch(
            "frigate.api.debug_replay.start_debug_replay_job",
            side_effect=fake_start,
        ):
            with AuthTestClient(self.app) as client:
                resp = client.post(
                    "/debug_replay/start",
                    json={
                        "camera": "front",
                        "start_time": 100,
                        "end_time": 200,
                    },
                )

        self.assertEqual(resp.status_code, 202)
        body = resp.json()
        self.assertTrue(body["success"])
        self.assertEqual(body["job_id"], "job-1234")
        self.assertEqual(body["replay_camera"], "_replay_front")

    def test_start_returns_400_on_validation_error(self):
        with patch(
            "frigate.api.debug_replay.start_debug_replay_job",
            side_effect=ValueError("Camera 'missing' not found"),
        ):
            with AuthTestClient(self.app) as client:
                resp = client.post(
                    "/debug_replay/start",
                    json={
                        "camera": "missing",
                        "start_time": 100,
                        "end_time": 200,
                    },
                )

        self.assertEqual(resp.status_code, 400)
        body = resp.json()
        self.assertFalse(body["success"])
        # Message is hard-coded so we don't echo exception text back to clients
        # (CodeQL: information exposure through an exception).
        self.assertEqual(body["message"], "Invalid debug replay parameters")

    def test_start_returns_409_when_session_already_active(self):
        with patch(
            "frigate.api.debug_replay.start_debug_replay_job",
            side_effect=RuntimeError("A replay session is already active"),
        ):
            with AuthTestClient(self.app) as client:
                resp = client.post(
                    "/debug_replay/start",
                    json={
                        "camera": "front",
                        "start_time": 100,
                        "end_time": 200,
                    },
                )

        self.assertEqual(resp.status_code, 409)
        body = resp.json()
        self.assertFalse(body["success"])

    def test_status_inactive_when_no_session(self):
        with AuthTestClient(self.app) as client:
            resp = client.get("/debug_replay/status")

        self.assertEqual(resp.status_code, 200)
        body = resp.json()
        self.assertFalse(body["active"])
        self.assertIsNone(body["replay_camera"])
        self.assertIsNone(body["source_camera"])
        self.assertIsNone(body["start_time"])
        self.assertIsNone(body["end_time"])
        self.assertFalse(body["live_ready"])
        # Make sure deprecated fields are gone
        self.assertNotIn("state", body)
        self.assertNotIn("progress_percent", body)
        self.assertNotIn("error_message", body)

    def test_status_active_after_mark_starting(self):
        manager = self.app.replay_manager
        manager.mark_starting(
            source_camera="front",
            replay_camera_name="_replay_front",
            start_ts=100.0,
            end_ts=200.0,
        )

        with AuthTestClient(self.app) as client:
            resp = client.get("/debug_replay/status")

        self.assertEqual(resp.status_code, 200)
        body = resp.json()
        self.assertTrue(body["active"])
        self.assertEqual(body["replay_camera"], "_replay_front")
        self.assertEqual(body["source_camera"], "front")
        self.assertEqual(body["start_time"], 100.0)
        self.assertEqual(body["end_time"], 200.0)
        self.assertFalse(body["live_ready"])
@ -23,26 +23,6 @@ class TestHttpApp(BaseTestHttp):
        response_json = response.json()
        assert response_json == self.test_stats

    def test_recordings_storage_requires_admin(self):
        stats = Mock(spec=StatsEmitter)
        stats.get_latest_stats.return_value = self.test_stats
        app = super().create_app(stats)
        app.storage_maintainer = Mock()
        app.storage_maintainer.calculate_camera_usages.return_value = {
            "front_door": {"usage": 2.0},
        }

        with AuthTestClient(app) as client:
            response = client.get(
                "/recordings/storage",
                headers={"remote-user": "viewer", "remote-role": "viewer"},
            )
            assert response.status_code == 403

            response = client.get("/recordings/storage")
            assert response.status_code == 200
            assert response.json()["front_door"]["usage_percent"] == 25.0

    def test_config_set_in_memory_replaces_objects_track_list(self):
        self.minimal_config["cameras"]["front_door"]["objects"] = {
            "track": ["person", "car"],

@ -219,25 +219,6 @@ class TestHttpApp(BaseTestHttp):
        assert len(events) == 1
        assert events[0]["id"] == event_id

    def test_similarity_search_hides_unauthorized_anchor_event(self):
        mock_embeddings = Mock()
        self.app.frigate_config.semantic_search.enabled = True
        self.app.embeddings = mock_embeddings

        with AuthTestClient(self.app) as client:
            super().insert_mock_event("hidden.anchor", camera="back_door")
            response = client.get(
                "/events/search",
                params={
                    "search_type": "similarity",
                    "event_id": "hidden.anchor",
                },
            )

        assert response.status_code == 404
        assert response.json()["message"] == "Event not found"
        mock_embeddings.search_thumbnail.assert_not_called()

    def test_get_good_event(self):
        id = "123456.random"


@ -145,12 +145,9 @@ class TestExecuteFindSimilarObjects(unittest.TestCase):
            embeddings=embeddings,
            frigate_config=SimpleNamespace(
                semantic_search=SimpleNamespace(enabled=semantic_enabled),
                cameras={"driveway": object()},
                auth=SimpleNamespace(roles={"admin": [], "viewer": ["driveway"]}),
                proxy=SimpleNamespace(separator=","),
            ),
        )
        return SimpleNamespace(app=app, headers={})
        return SimpleNamespace(app=app)

    def test_semantic_search_disabled_returns_error(self):
        req = self._make_request(semantic_enabled=False)
@ -183,7 +180,7 @@ class TestExecuteFindSimilarObjects(unittest.TestCase):
            _execute_find_similar_objects(
                req,
                {"event_id": "anchor", "cameras": ["nonexistent_cam"]},
                allowed_cameras=["driveway"],
                allowed_cameras=["nonexistent_cam"],
            )
        )
        self.assertEqual(result["results"], [])

@ -1,242 +0,0 @@
"""Tests for the simplified DebugReplayManager.

Startup orchestration lives in ``frigate.jobs.debug_replay`` (covered by
``test_debug_replay_job``). The manager owns only session presence and
cleanup.
"""

import unittest
import unittest.mock
from unittest.mock import MagicMock, patch


class TestDebugReplayManagerSession(unittest.TestCase):
    def test_inactive_by_default(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()

        self.assertFalse(manager.active)
        self.assertIsNone(manager.replay_camera_name)
        self.assertIsNone(manager.source_camera)
        self.assertIsNone(manager.clip_path)
        self.assertIsNone(manager.start_ts)
        self.assertIsNone(manager.end_ts)

    def test_mark_starting_sets_session_pointers_and_active(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()

        manager.mark_starting(
            source_camera="front",
            replay_camera_name="_replay_front",
            start_ts=100.0,
            end_ts=200.0,
        )

        self.assertTrue(manager.active)
        self.assertEqual(manager.replay_camera_name, "_replay_front")
        self.assertEqual(manager.source_camera, "front")
        self.assertEqual(manager.start_ts, 100.0)
        self.assertEqual(manager.end_ts, 200.0)
        self.assertIsNone(manager.clip_path)

    def test_mark_session_ready_sets_clip_path(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        manager.mark_starting("front", "_replay_front", 100.0, 200.0)

        manager.mark_session_ready(clip_path="/tmp/replay/_replay_front.mp4")

        self.assertEqual(manager.clip_path, "/tmp/replay/_replay_front.mp4")
        self.assertTrue(manager.active)

    def test_clear_session_resets_all_pointers(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        manager.mark_starting("front", "_replay_front", 100.0, 200.0)
        manager.mark_session_ready("/tmp/replay/clip.mp4")

        manager.clear_session()

        self.assertFalse(manager.active)
        self.assertIsNone(manager.replay_camera_name)
        self.assertIsNone(manager.source_camera)
        self.assertIsNone(manager.clip_path)
        self.assertIsNone(manager.start_ts)
        self.assertIsNone(manager.end_ts)


class TestDebugReplayManagerStop(unittest.TestCase):
    def test_stop_when_inactive_is_a_noop(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        frigate_config = MagicMock()
        frigate_config.cameras = {}
        publisher = MagicMock()

        # Should not raise; should not publish any events.
        manager.stop(frigate_config=frigate_config, config_publisher=publisher)

        publisher.publish_update.assert_not_called()

    def test_stop_publishes_remove_when_camera_was_published(self) -> None:
        from frigate.config.camera.updater import CameraConfigUpdateEnum
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        manager.mark_starting("front", "_replay_front", 100.0, 200.0)
        manager.mark_session_ready("/tmp/replay/_replay_front.mp4")

        camera_config = MagicMock()
        frigate_config = MagicMock()
        frigate_config.cameras = {"_replay_front": camera_config}
        publisher = MagicMock()

        with (
            patch.object(manager, "_cleanup_db"),
            patch.object(manager, "_cleanup_files"),
            patch("frigate.debug_replay.cancel_debug_replay_job", return_value=False),
        ):
            manager.stop(frigate_config=frigate_config, config_publisher=publisher)

        # One publish_update call with a remove topic.
        self.assertEqual(publisher.publish_update.call_count, 1)
        topic_arg = publisher.publish_update.call_args.args[0]
        self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.remove)
        self.assertFalse(manager.active)

    def test_stop_skips_remove_publish_when_camera_not_in_config(self) -> None:
        """Cancellation during preparing_clip: no camera was published yet."""
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        manager.mark_starting("front", "_replay_front", 100.0, 200.0)
        # clip_path stays None because we cancelled before camera publish.

        frigate_config = MagicMock()
        frigate_config.cameras = {}  # _replay_front not present
        publisher = MagicMock()

        with (
            patch.object(manager, "_cleanup_db"),
            patch.object(manager, "_cleanup_files"),
            patch("frigate.debug_replay.cancel_debug_replay_job", return_value=True),
        ):
            manager.stop(frigate_config=frigate_config, config_publisher=publisher)

        publisher.publish_update.assert_not_called()
        self.assertFalse(manager.active)

    def test_stop_calls_cancel_debug_replay_job(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        manager.mark_starting("front", "_replay_front", 100.0, 200.0)

        frigate_config = MagicMock()
        frigate_config.cameras = {}
        publisher = MagicMock()

        with (
            patch.object(manager, "_cleanup_db"),
            patch.object(manager, "_cleanup_files"),
            patch(
                "frigate.debug_replay.cancel_debug_replay_job",
                return_value=True,
            ) as mock_cancel,
        ):
            manager.stop(frigate_config=frigate_config, config_publisher=publisher)

        mock_cancel.assert_called_once()


class TestDebugReplayManagerPublishCamera(unittest.TestCase):
    def test_publish_camera_invokes_publisher_with_add_topic(self) -> None:
        from frigate.config.camera.updater import CameraConfigUpdateEnum
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()

        source_config = MagicMock()
        new_camera_config = MagicMock()
        frigate_config = MagicMock()
        frigate_config.cameras = {"front": source_config}
        publisher = MagicMock()

        with (
            patch.object(
                manager,
                "_build_camera_config_dict",
                return_value={"enabled": True},
            ),
            patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
            patch("frigate.debug_replay.YAML") as yaml_cls,
            patch("frigate.debug_replay.FrigateConfig.parse_object") as parse_object,
            patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
        ):
            yaml_instance = yaml_cls.return_value
            yaml_instance.load.return_value = {"cameras": {}}
            parsed = MagicMock()
            parsed.cameras = {"_replay_front": new_camera_config}
            parse_object.return_value = parsed

            manager.publish_camera(
                source_camera="front",
                replay_name="_replay_front",
                clip_path="/tmp/clip.mp4",
                frigate_config=frigate_config,
                config_publisher=publisher,
            )

        # Camera registered into the live config dict
        self.assertIn("_replay_front", frigate_config.cameras)
        # Publisher invoked with an add topic
        self.assertEqual(publisher.publish_update.call_count, 1)
        topic_arg = publisher.publish_update.call_args.args[0]
        self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.add)

    def test_publish_camera_wraps_parse_failure_in_runtime_error(self) -> None:
        from frigate.debug_replay import DebugReplayManager

        manager = DebugReplayManager()
        frigate_config = MagicMock()
        frigate_config.cameras = {"front": MagicMock()}
        publisher = MagicMock()

        with (
            patch.object(
                manager,
                "_build_camera_config_dict",
                return_value={"enabled": True},
            ),
            patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
            patch("frigate.debug_replay.YAML") as yaml_cls,
            patch(
                "frigate.debug_replay.FrigateConfig.parse_object",
                side_effect=ValueError("zone foo has invalid coordinates"),
            ),
            patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
        ):
            yaml_cls.return_value.load.return_value = {"cameras": {}}

            with self.assertRaises(RuntimeError) as ctx:
                manager.publish_camera(
                    source_camera="front",
                    replay_name="_replay_front",
                    clip_path="/tmp/clip.mp4",
                    frigate_config=frigate_config,
                    config_publisher=publisher,
                )

        self.assertIn("replay camera config", str(ctx.exception))
        self.assertIn("invalid coordinates", str(ctx.exception))
        publisher.publish_update.assert_not_called()


if __name__ == "__main__":
    unittest.main()
@ -1,460 +0,0 @@
"""Tests for the debug replay job runner and factory."""

import threading
import time
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch

from frigate.debug_replay import DebugReplayManager
from frigate.jobs.debug_replay import (
    DebugReplayJob,
    cancel_debug_replay_job,
    get_active_runner,
    start_debug_replay_job,
)
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.manager import _completed_jobs, _current_jobs
from frigate.types import JobStatusTypesEnum


def _reset_job_manager() -> None:
    """Clear the global job manager state between tests."""
    _current_jobs.clear()
    _completed_jobs.clear()


def _patch_publisher(test_case: unittest.TestCase) -> None:
    """Replace JobStatePublisher.publish with a no-op to avoid hanging on IPC."""
    publisher_patch = patch.object(
        JobStatePublisher, "publish", lambda self, payload: None
    )
    publisher_patch.start()
    test_case.addCleanup(publisher_patch.stop)


class TestDebugReplayJob(unittest.TestCase):
    def test_default_fields(self) -> None:
        job = DebugReplayJob()

        self.assertEqual(job.job_type, "debug_replay")
        self.assertEqual(job.status, JobStatusTypesEnum.queued)
        self.assertIsNone(job.current_step)
        self.assertEqual(job.progress_percent, 0.0)

    def test_to_dict_whitelist(self) -> None:
        job = DebugReplayJob(
            source_camera="front",
            replay_camera_name="_replay_front",
            start_ts=100.0,
            end_ts=200.0,
        )
        job.current_step = "preparing_clip"
        job.progress_percent = 42.5

        payload = job.to_dict()

        # Top-level matches the standard Job<TResults> shape.
        for key in (
            "id",
            "job_type",
            "status",
            "start_time",
            "end_time",
            "error_message",
            "results",
        ):
            self.assertIn(key, payload, f"missing top-level field: {key}")

        results = payload["results"]
        self.assertEqual(results["source_camera"], "front")
        self.assertEqual(results["replay_camera_name"], "_replay_front")
        self.assertEqual(results["current_step"], "preparing_clip")
        self.assertEqual(results["progress_percent"], 42.5)
        self.assertEqual(results["start_ts"], 100.0)
        self.assertEqual(results["end_ts"], 200.0)


class TestStartDebugReplayJob(unittest.TestCase):
    def setUp(self) -> None:
        _reset_job_manager()
        _patch_publisher(self)
        self.manager = DebugReplayManager()
        self.frigate_config = MagicMock()
        self.frigate_config.cameras = {"front": MagicMock()}
        self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
        self.publisher = MagicMock()

        self.recordings_qs = MagicMock()
        self.recordings_qs.count.return_value = 1
        self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])

    def tearDown(self) -> None:
        runner = get_active_runner()
        if runner is not None:
            runner.cancel()
            runner.join(timeout=2.0)
        _reset_job_manager()

    def test_rejects_unknown_camera(self) -> None:
        with self.assertRaises(ValueError):
            start_debug_replay_job(
                source_camera="missing",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

    def test_rejects_invalid_time_range(self) -> None:
        with self.assertRaises(ValueError):
            start_debug_replay_job(
                source_camera="front",
                start_ts=200.0,
                end_ts=100.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

    def test_rejects_when_no_recordings(self) -> None:
        empty_qs = MagicMock()
        empty_qs.count.return_value = 0
        with patch("frigate.jobs.debug_replay.query_recordings", return_value=empty_qs):
            with self.assertRaises(ValueError):
                start_debug_replay_job(
                    source_camera="front",
                    start_ts=100.0,
                    end_ts=200.0,
                    frigate_config=self.frigate_config,
                    config_publisher=self.publisher,
                    replay_manager=self.manager,
                )

    def test_returns_job_id_and_marks_session_starting(self) -> None:
        block = threading.Event()

        def slow_helper(cmd, **kwargs):
            block.wait(timeout=5)
            return 0, ""

        with (
            patch(
                "frigate.jobs.debug_replay.query_recordings",
                return_value=self.recordings_qs,
            ),
            patch(
                "frigate.jobs.debug_replay.run_ffmpeg_with_progress",
                side_effect=slow_helper,
            ),
            patch.object(self.manager, "publish_camera"),
            patch("os.path.exists", return_value=True),
            patch("os.makedirs"),
            patch("builtins.open", unittest.mock.mock_open()),
        ):
            job_id = start_debug_replay_job(
                source_camera="front",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

            self.assertIsInstance(job_id, str)
            self.assertTrue(self.manager.active)
            self.assertEqual(self.manager.replay_camera_name, "_replay_front")
            self.assertEqual(self.manager.source_camera, "front")

            block.set()

    def test_rejects_concurrent_calls(self) -> None:
        block = threading.Event()

        def slow_helper(cmd, **kwargs):
            block.wait(timeout=5)
            return 0, ""

        with (
            patch(
                "frigate.jobs.debug_replay.query_recordings",
                return_value=self.recordings_qs,
            ),
            patch(
                "frigate.jobs.debug_replay.run_ffmpeg_with_progress",
                side_effect=slow_helper,
            ),
            patch.object(self.manager, "publish_camera"),
            patch("os.path.exists", return_value=True),
            patch("os.makedirs"),
            patch("builtins.open", unittest.mock.mock_open()),
        ):
            start_debug_replay_job(
                source_camera="front",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

            with self.assertRaises(RuntimeError):
                start_debug_replay_job(
                    source_camera="front",
                    start_ts=100.0,
                    end_ts=200.0,
                    frigate_config=self.frigate_config,
                    config_publisher=self.publisher,
                    replay_manager=self.manager,
                )

            block.set()


class TestRunnerHappyPath(unittest.TestCase):
    def setUp(self) -> None:
        _reset_job_manager()
        _patch_publisher(self)
        self.manager = DebugReplayManager()
        self.frigate_config = MagicMock()
        self.frigate_config.cameras = {"front": MagicMock()}
        self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
        self.publisher = MagicMock()

        self.recordings_qs = MagicMock()
        self.recordings_qs.count.return_value = 1
        self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])

    def tearDown(self) -> None:
        runner = get_active_runner()
        if runner is not None:
            runner.cancel()
            runner.join(timeout=2.0)
        _reset_job_manager()

    def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(0.02)
        return False

    def test_progress_callback_updates_job_percent(self) -> None:
        captured: list[float] = []

        def fake_helper(cmd, *, on_progress=None, **kwargs):
            on_progress(0.0)
            on_progress(50.0)
            on_progress(100.0)
            return 0, ""

        with (
            patch(
                "frigate.jobs.debug_replay.query_recordings",
                return_value=self.recordings_qs,
            ),
            patch(
                "frigate.jobs.debug_replay.run_ffmpeg_with_progress",
                side_effect=fake_helper,
            ),
            patch.object(
                self.manager,
                "publish_camera",
                side_effect=lambda *a, **kw: captured.append("published"),
            ),
            patch("os.path.exists", return_value=True),
            patch("os.makedirs"),
            patch("builtins.open", unittest.mock.mock_open()),
        ):
            start_debug_replay_job(
                source_camera="front",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

            self.assertTrue(
                self._wait_for(lambda: get_active_runner() is None),
                "runner did not finish",
            )

        from frigate.jobs.manager import get_current_job

        job = get_current_job("debug_replay")
        self.assertIsNotNone(job)
        self.assertEqual(job.status, JobStatusTypesEnum.success)
        self.assertEqual(job.progress_percent, 100.0)
        self.assertEqual(captured, ["published"])
        # Manager should have been told the session is ready with the clip path.
        self.assertIsNotNone(self.manager.clip_path)


class TestRunnerFailurePath(unittest.TestCase):
    def setUp(self) -> None:
        _reset_job_manager()
        _patch_publisher(self)
        self.manager = DebugReplayManager()
        self.frigate_config = MagicMock()
        self.frigate_config.cameras = {"front": MagicMock()}
        self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
        self.publisher = MagicMock()
        self.recordings_qs = MagicMock()
        self.recordings_qs.count.return_value = 1
        self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])

    def tearDown(self) -> None:
        runner = get_active_runner()
        if runner is not None:
            runner.cancel()
            runner.join(timeout=2.0)
        _reset_job_manager()

    def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(0.02)
        return False

    def test_ffmpeg_failure_marks_job_failed_and_clears_session(self) -> None:
        def failing_helper(cmd, **kwargs):
            return 1, "ffmpeg exploded"

        with (
            patch(
                "frigate.jobs.debug_replay.query_recordings",
                return_value=self.recordings_qs,
            ),
            patch(
                "frigate.jobs.debug_replay.run_ffmpeg_with_progress",
                side_effect=failing_helper,
            ),
            patch("os.path.exists", return_value=True),
            patch("os.makedirs"),
            patch("os.remove"),
            patch("builtins.open", unittest.mock.mock_open()),
        ):
            start_debug_replay_job(
                source_camera="front",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

            self.assertTrue(
                self._wait_for(lambda: get_active_runner() is None),
                "runner did not finish",
            )

        from frigate.jobs.manager import get_current_job

        job = get_current_job("debug_replay")
        self.assertIsNotNone(job)
        self.assertEqual(job.status, JobStatusTypesEnum.failed)
        self.assertIsNotNone(job.error_message)
        self.assertIn("ffmpeg", job.error_message.lower())
        # Session cleared so a new /start is allowed
        self.assertFalse(self.manager.active)


class TestRunnerCancellation(unittest.TestCase):
    def setUp(self) -> None:
        _reset_job_manager()
        _patch_publisher(self)
        self.manager = DebugReplayManager()
        self.frigate_config = MagicMock()
        self.frigate_config.cameras = {"front": MagicMock()}
        self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
        self.publisher = MagicMock()
        self.recordings_qs = MagicMock()
        self.recordings_qs.count.return_value = 1
        self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])

    def tearDown(self) -> None:
        runner = get_active_runner()
        if runner is not None:
            runner.cancel()
            runner.join(timeout=2.0)
        _reset_job_manager()

    def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(0.02)
        return False

    def test_cancel_terminates_ffmpeg_and_marks_cancelled(self) -> None:
        terminated = threading.Event()
        fake_proc = MagicMock()
        fake_proc.terminate = MagicMock(side_effect=lambda: terminated.set())

        def fake_helper(cmd, *, process_started=None, **kwargs):
            if process_started is not None:
                process_started(fake_proc)
            terminated.wait(timeout=5)
            return -15, "killed"

        with (
            patch(
                "frigate.jobs.debug_replay.query_recordings",
                return_value=self.recordings_qs,
            ),
            patch(
                "frigate.jobs.debug_replay.run_ffmpeg_with_progress",
                side_effect=fake_helper,
            ),
            patch("os.path.exists", return_value=True),
            patch("os.makedirs"),
            patch("os.remove"),
            patch("builtins.open", unittest.mock.mock_open()),
        ):
            start_debug_replay_job(
                source_camera="front",
                start_ts=100.0,
                end_ts=200.0,
                frigate_config=self.frigate_config,
                config_publisher=self.publisher,
                replay_manager=self.manager,
            )

            # Wait for the runner to register the active process.
            self.assertTrue(
                self._wait_for(
                    lambda: (
                        get_active_runner() is not None
                        and get_active_runner()._active_process is fake_proc
                    )
                )
            )

            cancelled = cancel_debug_replay_job()
            self.assertTrue(cancelled)
            self.assertTrue(fake_proc.terminate.called)

            self.assertTrue(
                self._wait_for(lambda: get_active_runner() is None),
                "runner did not finish",
            )

        from frigate.jobs.manager import get_current_job

        job = get_current_job("debug_replay")
        self.assertEqual(job.status, JobStatusTypesEnum.cancelled)
        # Runner must not clear the manager session on cancellation —
        # that belongs to the caller of cancel_debug_replay_job (stop()).
        # If the runner cleared it, stop() would log "no active session"
        # and skip its cleanup_db / cleanup_files calls.
        self.assertTrue(self.manager.active)


if __name__ == "__main__":
    unittest.main()
@ -1,9 +1,6 @@
|
||||
"""Tests for export progress tracking, broadcast, and FFmpeg parsing."""
|
||||
|
||||
import io
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
import unittest
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
@ -14,7 +11,6 @@ from frigate.jobs.export import (
|
||||
)
|
||||
from frigate.record.export import PlaybackSourceEnum, RecordingExporter
|
||||
from frigate.types import JobStatusTypesEnum
|
||||
from frigate.util.ffmpeg import inject_progress_flags
|
||||
|
||||
|
||||
def _make_exporter(
|
||||
@ -119,9 +115,10 @@ class TestExpectedOutputDuration(unittest.TestCase):
|
||||
|
||||
class TestProgressFlagInjection(unittest.TestCase):
|
||||
def test_inserts_before_output_path(self) -> None:
|
||||
exporter = _make_exporter()
|
||||
cmd = ["ffmpeg", "-i", "input.m3u8", "-c", "copy", "/tmp/output.mp4"]
|
||||
|
||||
result = inject_progress_flags(cmd)
|
||||
result = exporter._inject_progress_flags(cmd)
|
||||
|
||||
assert result == [
|
||||
"ffmpeg",
|
||||
@ -136,7 +133,8 @@ class TestProgressFlagInjection(unittest.TestCase):
|
||||
]
|
||||
|
||||
def test_handles_empty_cmd(self) -> None:
|
||||
assert inject_progress_flags([]) == []
|
||||
exporter = _make_exporter()
|
||||
assert exporter._inject_progress_flags([]) == []
|
||||
|
||||
|
||||
class TestFfmpegProgressParsing(unittest.TestCase):
|
||||
@ -166,7 +164,7 @@ class TestFfmpegProgressParsing(unittest.TestCase):
|
||||
fake_proc.returncode = 0
|
||||
fake_proc.wait = MagicMock(return_value=0)
|
||||
|
||||
with patch("frigate.util.ffmpeg.sp.Popen", return_value=fake_proc):
|
||||
with patch("frigate.record.export.sp.Popen", return_value=fake_proc):
|
||||
returncode, _stderr = exporter._run_ffmpeg_with_progress(
|
||||
["ffmpeg", "-i", "x.m3u8", "/tmp/out.mp4"], "playlist", step="encoding"
|
||||
)
|
||||
@ -365,121 +363,6 @@ class TestBroadcastAggregation(unittest.TestCase):
|
||||
assert job.progress_percent == 33.0
|
||||
|
||||
|
||||
class TestGetDatetimeFromTimestamp(unittest.TestCase):
|
||||
"""Auto-generated export name should honor config.ui.timezone, not
|
||||
fall back to the container's UTC clock when a timezone is configured.
|
||||
"""
|
||||
|
||||
def test_uses_configured_ui_timezone(self) -> None:
|
||||
exporter = _make_exporter()
|
||||
exporter.config.ui.timezone = "America/New_York"
|
||||
# 2025-01-15 12:00:00 UTC is 07:00:00 EST
|
||||
assert exporter.get_datetime_from_timestamp(1736942400) == "2025-01-15 07:00:00"
|
||||
|
||||
def test_falls_back_to_local_when_timezone_unset(self) -> None:
|
||||
exporter = _make_exporter()
|
||||
exporter.config.ui.timezone = None
|
||||
# No assertion on the exact wall-clock value — just confirm no
|
||||
# exception and that pytz isn't required when the field is unset.
|
||||
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
|
||||
|
||||
def test_invalid_timezone_falls_back_to_local(self) -> None:
|
||||
exporter = _make_exporter()
|
||||
exporter.config.ui.timezone = "Not/A_Real_Zone"
|
||||
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
|
||||
|
||||
|
||||
class TestSaveThumbnailFromPreviewFrames(unittest.TestCase):
    """Short exports in the current hour can fall between preview frame
    writes (1-2 fps during activity, every 30s otherwise). When no frame
    falls inside the export window, save_thumbnail should fall back to
    the most recent prior frame instead of returning no thumbnail."""

    def setUp(self) -> None:
        self.tmp_root = tempfile.mkdtemp(prefix="frigate_thumb_test_")
        self.preview_dir = os.path.join(self.tmp_root, "cache", "preview_frames")
        self.export_clips = os.path.join(self.tmp_root, "clips", "export")
        os.makedirs(self.preview_dir, exist_ok=True)
        os.makedirs(self.export_clips, exist_ok=True)

    def tearDown(self) -> None:
        shutil.rmtree(self.tmp_root, ignore_errors=True)

    def _write_frame(self, camera: str, frame_time: float) -> str:
        path = os.path.join(self.preview_dir, f"preview_{camera}-{frame_time}.webp")
        with open(path, "wb") as f:
            f.write(b"fake-webp-bytes")
        return path

    def _make_short_current_hour_exporter(self) -> RecordingExporter:
        # Use a "now-ish" timestamp so save_thumbnail's start-of-hour
        # comparison takes the current-hour branch (preview frames).
        import datetime

        now = datetime.datetime.now(datetime.timezone.utc).timestamp()
        exporter = _make_exporter()
        exporter.export_id = "thumb_short"
        exporter.start_time = now
        exporter.end_time = now + 3
        return exporter

    def test_short_export_falls_back_to_prior_preview_frame(self) -> None:
        exporter = self._make_short_current_hour_exporter()
        # Most recent preview frame is 10s before the export window
        prior = self._write_frame(exporter.camera, exporter.start_time - 10.0)
        thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")

        with (
            patch(
                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
            ),
            patch(
                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
            ),
        ):
            result = exporter.save_thumbnail(exporter.export_id)

        assert result == thumb_target
        assert os.path.isfile(thumb_target)
        with open(thumb_target, "rb") as f, open(prior, "rb") as src:
            assert f.read() == src.read()

    def test_returns_empty_when_no_preview_frames_exist(self) -> None:
        exporter = self._make_short_current_hour_exporter()

        with (
            patch(
                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
            ),
            patch(
                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
            ),
        ):
            result = exporter.save_thumbnail(exporter.export_id)

        assert result == ""

    def test_prefers_in_window_frame_over_prior_frame(self) -> None:
        exporter = self._make_short_current_hour_exporter()
        self._write_frame(exporter.camera, exporter.start_time - 10.0)
        in_window = self._write_frame(exporter.camera, exporter.start_time + 1.0)
        thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")

        with (
            patch(
                "frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
            ),
            patch(
                "frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
            ),
        ):
            result = exporter.save_thumbnail(exporter.export_id)

        assert result == thumb_target
        with open(thumb_target, "rb") as f, open(in_window, "rb") as src:
            assert f.read() == src.read()


class TestSchedulesCleanup(unittest.TestCase):
    def test_schedule_job_cleanup_removes_after_delay(self) -> None:
        config = MagicMock()
@ -498,56 +381,5 @@ class TestSchedulesCleanup(unittest.TestCase):
        assert job.id not in manager.jobs


class TestChapterMetadataInProgressReview(unittest.TestCase):
    """Regression: in-progress review segments have end_time=NULL until the
    activity closes. The chapter builder must clamp the chapter end to the
    last recorded second instead of crashing on float(None)."""

    def _fake_select_returning(self, rows: list) -> MagicMock:
        mock_query = MagicMock()
        mock_query.where.return_value = mock_query
        mock_query.order_by.return_value = mock_query
        mock_query.iterator.return_value = iter(rows)
        return mock_query

    def test_in_progress_review_does_not_crash_and_clamps_to_last_recording(
        self,
    ) -> None:
        exporter = _make_exporter(end_minus_start=200)
        # Recordings cover [1000, 1150]; export window is [1000, 1200] so
        # the last recorded second is 1150 (a 50s gap at the tail).
        recordings = [
            MagicMock(start_time=1000.0, end_time=1150.0),
        ]
        in_progress = MagicMock(
            start_time=1100.0,
            end_time=None,
            severity="alert",
            data={"objects": ["person"]},
        )

        with tempfile.TemporaryDirectory() as tmpdir:
            chapter_path = os.path.join(tmpdir, "chapters.txt")
            exporter._chapter_metadata_path = lambda: chapter_path  # type: ignore[method-assign]

            with patch(
                "frigate.record.export.ReviewSegment.select",
                return_value=self._fake_select_returning([in_progress]),
            ):
                result = exporter._build_chapter_metadata_file(recordings)

        assert result == chapter_path
        with open(chapter_path) as f:
            content = f.read()

        # Output time is windows[-1][1] - windows[-1][0] = 150s.
        # Review starts at wall=1100, output offset = 100s -> 100000ms.
        # Clamped end = last_recorded_end (1150) -> output offset = 150s -> 150000ms.
        assert "[CHAPTER]" in content
        assert "START=100000" in content
        assert "END=150000" in content
        assert "title=Alert: person" in content


if __name__ == "__main__":
    unittest.main()
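
For orientation, a hedged sketch of the ffmpeg chapter metadata format the assertions above imply (the TIMEBASE line and function name are assumptions; the real builder is _build_chapter_metadata_file in frigate/record/export.py):

# Sketch: write an FFMETADATA chapter file with millisecond offsets.
def write_chapters(path: str, chapters: list[tuple[int, int, str]]) -> None:
    with open(path, "w") as f:
        f.write(";FFMETADATA1\n")
        for start_ms, end_ms, title in chapters:
            f.write("[CHAPTER]\n")
            f.write("TIMEBASE=1/1000\n")  # assumption: ms units
            f.write(f"START={start_ms}\n")
            f.write(f"END={end_ms}\n")
            f.write(f"title={title}\n")

# Matches the values the regression test checks for:
write_chapters("chapters.txt", [(100000, 150000, "Alert: person")])
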
@ -1,111 +0,0 @@
"""Tests for the shared ffmpeg progress helper."""

import unittest
from unittest.mock import MagicMock, patch

from frigate.util.ffmpeg import inject_progress_flags, run_ffmpeg_with_progress


class TestInjectProgressFlags(unittest.TestCase):
    def test_inserts_flags_before_output_path(self):
        cmd = ["ffmpeg", "-i", "in.mp4", "-c", "copy", "out.mp4"]
        result = inject_progress_flags(cmd)
        self.assertEqual(
            result,
            [
                "ffmpeg",
                "-i",
                "in.mp4",
                "-c",
                "copy",
                "-progress",
                "pipe:2",
                "-nostats",
                "out.mp4",
            ],
        )

    def test_empty_cmd_returns_empty(self):
        self.assertEqual(inject_progress_flags([]), [])


class TestRunFfmpegWithProgress(unittest.TestCase):
    def _make_fake_proc(self, stderr_lines, returncode=0):
        proc = MagicMock()
        proc.stderr = iter(stderr_lines)
        proc.stdin = MagicMock()
        proc.returncode = returncode
        proc.wait = MagicMock()
        return proc

    def test_emits_percent_from_out_time_us_lines(self):
        captured: list[float] = []

        def on_progress(percent: float) -> None:
            captured.append(percent)

        stderr_lines = [
            "out_time_us=1000000\n",
            "out_time_us=5000000\n",
            "progress=end\n",
        ]
        proc = self._make_fake_proc(stderr_lines)
        proc.stderr = MagicMock()
        proc.stderr.__iter__ = lambda self: iter(stderr_lines)
        proc.stderr.read = MagicMock(return_value="")

        with patch("subprocess.Popen", return_value=proc):
            returncode, _stderr = run_ffmpeg_with_progress(
                ["ffmpeg", "-i", "in", "out"],
                expected_duration_seconds=10.0,
                on_progress=on_progress,
                use_low_priority=False,
            )

        self.assertEqual(returncode, 0)
        self.assertEqual(len(captured), 4)  # initial 0.0 + two parsed + final 100.0
        self.assertAlmostEqual(captured[0], 0.0)
        self.assertAlmostEqual(captured[1], 10.0)
        self.assertAlmostEqual(captured[2], 50.0)
        self.assertAlmostEqual(captured[3], 100.0)

    def test_passes_started_process_to_callback(self):
        proc = self._make_fake_proc([])
        proc.stderr = MagicMock()
        proc.stderr.__iter__ = lambda self: iter([])
        proc.stderr.read = MagicMock(return_value="")

        seen: list = []

        with patch("subprocess.Popen", return_value=proc):
            run_ffmpeg_with_progress(
                ["ffmpeg", "out"],
                expected_duration_seconds=1.0,
                process_started=lambda p: seen.append(p),
                use_low_priority=False,
            )

        self.assertEqual(seen, [proc])

    def test_clamps_percent_to_0_100(self):
        captured: list[float] = []

        def on_progress(percent: float) -> None:
            captured.append(percent)

        stderr_lines = ["out_time_us=999999999999\n"]
        proc = self._make_fake_proc(stderr_lines)
        proc.stderr = MagicMock()
        proc.stderr.__iter__ = lambda self: iter(stderr_lines)
        proc.stderr.read = MagicMock(return_value="")

        with patch("subprocess.Popen", return_value=proc):
            run_ffmpeg_with_progress(
                ["ffmpeg", "out"],
                expected_duration_seconds=10.0,
                on_progress=on_progress,
                use_low_priority=False,
            )

        # initial 0.0 then a clamped reading
        self.assertEqual(captured[-1], 100.0)
@ -1,57 +0,0 @@
"""Tests for JSMPEG websocket authorization."""

import unittest
from types import SimpleNamespace

from frigate.config import FrigateConfig
from frigate.output.ws_auth import ws_has_camera_access


class TestWsHasCameraAccess(unittest.TestCase):
    def setUp(self):
        self.config = FrigateConfig(
            mqtt={"host": "mqtt"},
            auth={"roles": {"limited_user": ["front_door"]}},
            cameras={
                "front_door": {
                    "ffmpeg": {
                        "inputs": [
                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                        ]
                    },
                    "detect": {"height": 1080, "width": 1920, "fps": 5},
                },
                "back_door": {
                    "ffmpeg": {
                        "inputs": [
                            {"path": "rtsp://10.0.0.2:554/video", "roles": ["detect"]}
                        ]
                    },
                    "detect": {"height": 1080, "width": 1920, "fps": 5},
                },
            },
        )

    def _make_ws(self, role: str):
        return SimpleNamespace(environ={"HTTP_REMOTE_ROLE": role})

    def test_restricted_role_only_gets_allowed_camera(self):
        ws = self._make_ws("limited_user")
        self.assertTrue(ws_has_camera_access(ws, "front_door", self.config))
        self.assertFalse(ws_has_camera_access(ws, "back_door", self.config))

    def test_unrestricted_role_can_access_any_camera(self):
        ws = self._make_ws("viewer")
        self.assertTrue(ws_has_camera_access(ws, "front_door", self.config))
        self.assertTrue(ws_has_camera_access(ws, "back_door", self.config))

    def test_birdseye_requires_unrestricted_access(self):
        self.assertTrue(
            ws_has_camera_access(self._make_ws("admin"), "birdseye", self.config)
        )
        self.assertTrue(
            ws_has_camera_access(self._make_ws("viewer"), "birdseye", self.config)
        )
        self.assertFalse(
            ws_has_camera_access(self._make_ws("limited_user"), "birdseye", self.config)
        )
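
A sketch of the policy these tests encode, not the module's exact code (the config.auth.roles dict shape is inferred from the setUp above): roles listed in config.auth.roles are restricted to their camera list, any other role is unrestricted, and birdseye requires an unrestricted role because it aggregates all cameras.

def ws_has_camera_access_sketch(ws, camera_name: str, config) -> bool:
    role = ws.environ.get("HTTP_REMOTE_ROLE")
    allowed = config.auth.roles.get(role) if role else None
    if allowed is None:
        return True  # role carries no camera restriction
    if camera_name == "birdseye":
        return False  # birdseye can include cameras the role may not see
    return camera_name in allowed
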
@ -1,29 +0,0 @@
"""Tests for camera monitoring notification authorization."""

import unittest
from types import SimpleNamespace
from unittest.mock import MagicMock

from frigate.comms.webpush import WebPushClient


class TestCameraMonitoringNotifications(unittest.TestCase):
    def test_send_camera_monitoring_filters_by_camera_access(self):
        client = WebPushClient.__new__(WebPushClient)
        client.config = SimpleNamespace(
            cameras={"front_door": SimpleNamespace(friendly_name=None)}
        )
        client.web_pushers = {"allowed": [], "denied": []}
        client.user_cameras = {"allowed": {"front_door"}, "denied": set()}
        client.check_registrations = MagicMock()
        client.cleanup_registrations = MagicMock()
        client.send_push_notification = MagicMock()

        client.send_camera_monitoring(
            {"camera": "front_door", "message": "Monitoring condition met"}
        )

        self.assertEqual(client.send_push_notification.call_count, 1)
        self.assertEqual(
            client.send_push_notification.call_args.kwargs["user"], "allowed"
        )
@ -1,166 +0,0 @@
"""Tests for WebSocket authorization checks."""

import unittest

from frigate.comms.ws import _check_ws_authorization
from frigate.const import INSERT_MANY_RECORDINGS, UPDATE_CAMERA_ACTIVITY


class TestCheckWsAuthorization(unittest.TestCase):
    """Tests for the _check_ws_authorization pure function."""

    DEFAULT_SEPARATOR = ","

    # --- IPC topic blocking (unconditional, regardless of role) ---

    def test_ipc_topic_blocked_for_admin(self):
        self.assertFalse(
            _check_ws_authorization(
                INSERT_MANY_RECORDINGS, "admin", self.DEFAULT_SEPARATOR
            )
        )

    def test_ipc_topic_blocked_for_viewer(self):
        self.assertFalse(
            _check_ws_authorization(
                UPDATE_CAMERA_ACTIVITY, "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_ipc_topic_blocked_when_no_role(self):
        self.assertFalse(
            _check_ws_authorization(
                INSERT_MANY_RECORDINGS, None, self.DEFAULT_SEPARATOR
            )
        )

    # --- Viewer allowed topics ---

    def test_viewer_can_send_on_connect(self):
        self.assertTrue(
            _check_ws_authorization("onConnect", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_model_state(self):
        self.assertTrue(
            _check_ws_authorization("modelState", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_audio_transcription_state(self):
        self.assertTrue(
            _check_ws_authorization(
                "audioTranscriptionState", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_can_send_birdseye_layout(self):
        self.assertTrue(
            _check_ws_authorization("birdseyeLayout", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_can_send_embeddings_reindex_progress(self):
        self.assertTrue(
            _check_ws_authorization(
                "embeddingsReindexProgress", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    # --- Viewer blocked from admin topics ---

    def test_viewer_blocked_from_restart(self):
        self.assertFalse(
            _check_ws_authorization("restart", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_blocked_from_camera_detect_set(self):
        self.assertFalse(
            _check_ws_authorization(
                "front_door/detect/set", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_camera_ptz(self):
        self.assertFalse(
            _check_ws_authorization("front_door/ptz", "viewer", self.DEFAULT_SEPARATOR)
        )

    def test_viewer_blocked_from_global_notifications_set(self):
        self.assertFalse(
            _check_ws_authorization(
                "notifications/set", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_camera_notifications_suspend(self):
        self.assertFalse(
            _check_ws_authorization(
                "front_door/notifications/suspend", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    def test_viewer_blocked_from_arbitrary_unknown_topic(self):
        self.assertFalse(
            _check_ws_authorization(
                "some_random_topic", "viewer", self.DEFAULT_SEPARATOR
            )
        )

    # --- Admin access ---

    def test_admin_can_send_restart(self):
        self.assertTrue(
            _check_ws_authorization("restart", "admin", self.DEFAULT_SEPARATOR)
        )

    def test_admin_can_send_camera_detect_set(self):
        self.assertTrue(
            _check_ws_authorization(
                "front_door/detect/set", "admin", self.DEFAULT_SEPARATOR
            )
        )

    def test_admin_can_send_camera_ptz(self):
        self.assertTrue(
            _check_ws_authorization("front_door/ptz", "admin", self.DEFAULT_SEPARATOR)
        )

    # --- Comma-separated roles ---

    def test_comma_separated_admin_viewer_grants_admin(self):
        self.assertTrue(
            _check_ws_authorization("restart", "admin,viewer", self.DEFAULT_SEPARATOR)
        )

    def test_comma_separated_viewer_admin_grants_admin(self):
        self.assertTrue(
            _check_ws_authorization("restart", "viewer,admin", self.DEFAULT_SEPARATOR)
        )

    def test_comma_separated_with_spaces(self):
        self.assertTrue(
            _check_ws_authorization("restart", "viewer, admin", self.DEFAULT_SEPARATOR)
        )

    # --- Custom separator ---

    def test_pipe_separator(self):
        self.assertTrue(_check_ws_authorization("restart", "viewer|admin", "|"))

    def test_pipe_separator_no_admin(self):
        self.assertFalse(_check_ws_authorization("restart", "viewer|editor", "|"))

    # --- No role header (fail-closed) ---

    def test_no_role_header_blocks_admin_topics(self):
        self.assertFalse(
            _check_ws_authorization("restart", None, self.DEFAULT_SEPARATOR)
        )

    def test_no_role_header_allows_viewer_topics(self):
        self.assertTrue(
            _check_ws_authorization("onConnect", None, self.DEFAULT_SEPARATOR)
        )


if __name__ == "__main__":
    unittest.main()
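
A sketch of the fail-closed policy these tests pin down (assumed structure; the real function is _check_ws_authorization in frigate/comms/ws.py): IPC topics are always blocked, a small allowlist is open to any connection, and everything else requires an "admin" entry in the separator-delimited role header.

IPC_TOPICS = {INSERT_MANY_RECORDINGS, UPDATE_CAMERA_ACTIVITY}
VIEWER_TOPICS = {
    "onConnect",
    "modelState",
    "audioTranscriptionState",
    "birdseyeLayout",
    "embeddingsReindexProgress",
}

def check_ws_authorization_sketch(topic: str, role: str | None, separator: str) -> bool:
    if topic in IPC_TOPICS:
        return False  # internal topics: never writable over the websocket
    if topic in VIEWER_TOPICS:
        return True  # read-oriented topics any connection may send
    if not role:
        return False  # missing role header fails closed on admin topics
    return "admin" in (r.strip() for r in role.split(separator))
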
@ -24,12 +24,8 @@ from frigate.log import redirect_output_to_logger, suppress_stderr_during
from frigate.models import Event, Recordings, ReviewSegment
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
from frigate.util.file import get_event_thumbnail_bytes, load_event_snapshot_image
from frigate.util.image import (
    calculate_region,
    get_image_from_recording,
    relative_box_to_absolute,
)
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.process import FrigateProcess

BATCH_SIZE = 16
@ -717,7 +713,7 @@ def collect_object_classification_examples(
    This function:
    1. Queries events for the specified label
    2. Selects 100 balanced events across different cameras and times
    3. Crops each event's clean snapshot around the object bounding box
    3. Retrieves thumbnails for selected events (with 33% center crop applied)
    4. Selects 24 most visually distinct thumbnails
    5. Saves to dataset directory

@ -836,106 +832,66 @@ def _select_balanced_events(

def _extract_event_thumbnails(events: list[Event], output_dir: str) -> list[str]:
    """
    Extract a training image for each event.

    Preferred path: load the full-frame clean snapshot and crop around the
    stored bounding box with the same calculate_region(..., max(w, h), 1.0)
    call the live ObjectClassificationProcessor uses, so wizard examples
    are framed like inference-time inputs.

    Fallback: if no clean snapshot exists (snapshots disabled, or only a
    legacy annotated JPG is on disk), center-crop the stored thumbnail
    using a step ladder sized from the box/region area ratio.
    Extract thumbnails from events and save to disk.

    Args:
        events: List of Event objects
        output_dir: Directory to save crops
        output_dir: Directory to save thumbnails

    Returns:
        List of paths to successfully extracted images
        List of paths to successfully extracted thumbnail images
    """
    image_paths = []
    thumbnail_paths = []

    for idx, event in enumerate(events):
        try:
            img = _load_event_classification_crop(event)
            if img is None:
                continue
            thumbnail_bytes = get_event_thumbnail_bytes(event)

            resized = cv2.resize(img, (224, 224))
            output_path = os.path.join(output_dir, f"thumbnail_{idx:04d}.jpg")
            cv2.imwrite(output_path, resized)
            image_paths.append(output_path)
            if thumbnail_bytes:
                nparr = np.frombuffer(thumbnail_bytes, np.uint8)
                img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

                if img is not None:
                    height, width = img.shape[:2]

                    crop_size = 1.0
                    if event.data and "box" in event.data and "region" in event.data:
                        box = event.data["box"]
                        region = event.data["region"]

                        if len(box) == 4 and len(region) == 4:
                            box_w, box_h = box[2], box[3]
                            region_w, region_h = region[2], region[3]

                            box_area = (box_w * box_h) / (region_w * region_h)

                            if box_area < 0.05:
                                crop_size = 0.4
                            elif box_area < 0.10:
                                crop_size = 0.5
                            elif box_area < 0.20:
                                crop_size = 0.65
                            elif box_area < 0.35:
                                crop_size = 0.80
                            else:
                                crop_size = 0.95

                    crop_width = int(width * crop_size)
                    crop_height = int(height * crop_size)

                    x1 = (width - crop_width) // 2
                    y1 = (height - crop_height) // 2
                    x2 = x1 + crop_width
                    y2 = y1 + crop_height

                    cropped = img[y1:y2, x1:x2]
                    resized = cv2.resize(cropped, (224, 224))
                    output_path = os.path.join(output_dir, f"thumbnail_{idx:04d}.jpg")
                    cv2.imwrite(output_path, resized)
                    thumbnail_paths.append(output_path)

        except Exception as e:
            logger.debug(f"Failed to extract image for event {event.id}: {e}")
            logger.debug(f"Failed to extract thumbnail for event {event.id}: {e}")
            continue

    return image_paths


def _load_event_classification_crop(event: Event) -> np.ndarray | None:
    """Prefer a snapshot-based object crop; fall back to a center-cropped thumbnail."""
    if event.data and "box" in event.data:
        snapshot, _ = load_event_snapshot_image(event, clean_only=True)
        if snapshot is not None:
            abs_box = relative_box_to_absolute(snapshot.shape, event.data["box"])
            if abs_box is not None:
                xmin, ymin, xmax, ymax = abs_box
                box_w = xmax - xmin
                box_h = ymax - ymin
                if box_w > 0 and box_h > 0:
                    x1, y1, x2, y2 = calculate_region(
                        snapshot.shape,
                        xmin,
                        ymin,
                        xmax,
                        ymax,
                        max(box_w, box_h),
                        1.0,
                    )
                    cropped = snapshot[y1:y2, x1:x2]
                    if cropped.size > 0:
                        return cropped

    thumbnail_bytes = get_event_thumbnail_bytes(event)
    if not thumbnail_bytes:
        return None

    nparr = np.frombuffer(thumbnail_bytes, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    if img is None or img.size == 0:
        return None

    height, width = img.shape[:2]
    crop_size = 1.0

    if event.data and "box" in event.data and "region" in event.data:
        box = event.data["box"]
        region = event.data["region"]

        if len(box) == 4 and len(region) == 4:
            box_w, box_h = box[2], box[3]
            region_w, region_h = region[2], region[3]
            box_area = (box_w * box_h) / (region_w * region_h)

            if box_area < 0.05:
                crop_size = 0.4
            elif box_area < 0.10:
                crop_size = 0.5
            elif box_area < 0.20:
                crop_size = 0.65
            elif box_area < 0.35:
                crop_size = 0.80
            else:
                crop_size = 0.95

    crop_width = int(width * crop_size)
    crop_height = int(height * crop_size)
    x1 = (width - crop_width) // 2
    y1 = (height - crop_height) // 2
    cropped = img[y1 : y1 + crop_height, x1 : x1 + crop_width]
    if cropped.size == 0:
        return None

    return cropped
    return thumbnail_paths
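
A quick worked example of the box/region step ladder used above (hypothetical numbers): the smaller the object is relative to its region, the tighter the center crop, so a tiny object still fills a useful share of the 224x224 training image.

box_w, box_h = 36.0, 40.0          # hypothetical object box (region-relative px)
region_w, region_h = 120.0, 150.0  # hypothetical region
box_area = (box_w * box_h) / (region_w * region_h)  # 1440 / 18000 = 0.08
assert 0.05 <= box_area < 0.10     # ladder selects crop_size = 0.5,
                                   # i.e. a 180x180 thumbnail keeps its central 90x90
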
@ -2,9 +2,8 @@

import logging
import subprocess as sp
from typing import Any, Callable, Optional
from typing import Any

from frigate.const import PROCESS_PRIORITY_LOW
from frigate.log import LogPipe


@ -47,124 +46,3 @@ def start_or_restart_ffmpeg(
        start_new_session=True,
    )
    return process


logger = logging.getLogger(__name__)


def inject_progress_flags(cmd: list[str]) -> list[str]:
    """Insert `-progress pipe:2 -nostats` immediately before the output path.

    `-progress pipe:2` writes structured key=value lines to stderr;
    `-nostats` suppresses the noisy default stats output. The output path
    is conventionally the last token in an FFmpeg argv.
    """
    if not cmd:
        return cmd
    return cmd[:-1] + ["-progress", "pipe:2", "-nostats", cmd[-1]]


def run_ffmpeg_with_progress(
    cmd: list[str],
    *,
    expected_duration_seconds: float,
    on_progress: Optional[Callable[[float], None]] = None,
    stdin_payload: Optional[str] = None,
    process_started: Optional[Callable[[sp.Popen], None]] = None,
    use_low_priority: bool = True,
) -> tuple[int, str]:
    """Run an ffmpeg command, streaming progress via `-progress pipe:2`.

    Args:
        cmd: ffmpeg argv. Output path must be the last token.
        expected_duration_seconds: Duration of the expected output clip in
            seconds. Used to convert ffmpeg's `out_time_us` into a percent.
        on_progress: Optional callback invoked with a percent in [0, 100].
            Called once with 0.0 at start, again on each `out_time_us=`
            stderr line, and once with 100.0 on `progress=end`.
        stdin_payload: Optional string written to ffmpeg stdin (used by
            export for concat playlists).
        process_started: Optional callback invoked with the live `Popen`
            once spawned — lets callers store the ref for cancellation.
        use_low_priority: When True, prepend `nice -n PROCESS_PRIORITY_LOW`
            so concat doesn't starve detection.

    Returns:
        Tuple of `(returncode, captured_stderr)`. Stdout is left attached
        to the parent process to avoid buffer-full deadlocks.
    """
    full_cmd = inject_progress_flags(cmd)
    if use_low_priority:
        full_cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + full_cmd

    def emit(percent: float) -> None:
        if on_progress is None:
            return
        try:
            on_progress(max(0.0, min(100.0, percent)))
        except Exception:
            logger.exception("FFmpeg progress callback failed")

    emit(0.0)

    proc = sp.Popen(
        full_cmd,
        stdin=sp.PIPE if stdin_payload is not None else None,
        stderr=sp.PIPE,
        text=True,
        encoding="ascii",
        errors="replace",
    )
    if process_started is not None:
        try:
            process_started(proc)
        except Exception:
            logger.exception("FFmpeg process_started callback failed")

    if stdin_payload is not None and proc.stdin is not None:
        try:
            proc.stdin.write(stdin_payload)
        except (BrokenPipeError, OSError):
            pass
        finally:
            try:
                proc.stdin.close()
            except (BrokenPipeError, OSError):
                pass

    captured: list[str] = []
    if proc.stderr is not None:
        try:
            for raw_line in proc.stderr:
                captured.append(raw_line)
                line = raw_line.strip()
                if not line:
                    continue
                if line.startswith("out_time_us="):
                    if expected_duration_seconds <= 0:
                        continue
                    try:
                        out_time_us = int(line.split("=", 1)[1])
                    except (ValueError, IndexError):
                        continue
                    if out_time_us < 0:
                        continue
                    out_seconds = out_time_us / 1_000_000.0
                    emit((out_seconds / expected_duration_seconds) * 100.0)
                elif line == "progress=end":
                    emit(100.0)
                    break
        except Exception:
            logger.exception("Failed reading FFmpeg progress stream")

    proc.wait()

    if proc.stderr is not None:
        try:
            remaining = proc.stderr.read()
            if remaining:
                captured.append(remaining)
        except Exception:
            pass

    return proc.returncode or 0, "".join(captured)
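
An illustrative call of the helper above (hypothetical paths and duration): progress percents are forwarded to a callback, and the live process handle is stashed so a cancel request can terminate the encode mid-run.

import subprocess as sp

procs: list[sp.Popen] = []
rc, stderr = run_ffmpeg_with_progress(
    ["ffmpeg", "-i", "/tmp/in.mp4", "-c:v", "libx264", "/tmp/out.mp4"],
    expected_duration_seconds=60.0,
    on_progress=lambda pct: print(f"export {pct:.0f}%"),
    process_started=procs.append,  # keep the Popen ref for cancellation
)
if rc != 0:
    print(stderr)
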
@ -711,44 +711,23 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedProcess:
    else:
        format_entries = None

    def run(rtsp_transport: Optional[str] = None) -> sp.CompletedProcess:
        cmd = [ffmpeg.ffprobe_path]
        if rtsp_transport:
            cmd += ["-rtsp_transport", rtsp_transport]
        cmd += [
            "-timeout",
            "1000000",
            "-print_format",
            "json",
            "-show_entries",
            f"stream={stream_entries}",
        ]
        if detailed and format_entries:
            cmd.extend(["-show_entries", f"format={format_entries}"])
        cmd.extend(["-loglevel", "error", clean_path])
        try:
            return sp.run(cmd, capture_output=True, timeout=6)
        except sp.TimeoutExpired as e:
            logger.info(
                "ffprobe timed out while probing %s (transport=%s)",
                clean_camera_user_pass(path),
                rtsp_transport or "default",
            )
            return sp.CompletedProcess(
                args=cmd,
                returncode=1,
                stdout=e.stdout or b"",
                stderr=(e.stderr or b"") + b"\nffprobe timed out",
            )
    ffprobe_cmd = [
        ffmpeg.ffprobe_path,
        "-timeout",
        "1000000",
        "-print_format",
        "json",
        "-show_entries",
        f"stream={stream_entries}",
    ]

    result = run()
    # Add format entries for detailed mode
    if detailed and format_entries:
        ffprobe_cmd.extend(["-show_entries", f"format={format_entries}"])

    # For RTSP: retry with explicit TCP transport if the first attempt failed
    # (default UDP may be blocked)
    if result.returncode != 0 and clean_path.startswith("rtsp://"):
        result = run(rtsp_transport="tcp")
    ffprobe_cmd.extend(["-loglevel", "error", clean_path])

    return result
    return sp.run(ffprobe_cmd, capture_output=True)


def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
@ -845,23 +824,11 @@ async def get_video_properties(
        "-show_streams",
        url,
    ]
    proc = None
    try:
        proc = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
        )
        try:
            stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=6)
        except asyncio.TimeoutError:
            logger.info(
                "ffprobe timed out while probing %s (transport=%s)",
                clean_camera_user_pass(url),
                rtsp_transport or "default",
            )
            proc.kill()
            await proc.wait()
            return False, 0, 0, None, -1

        stdout, _ = await proc.communicate()
        if proc.returncode != 0:
            return False, 0, 0, None, -1


@ -24,7 +24,7 @@ from frigate.config.camera.updater import (
)
from frigate.const import PROCESS_PRIORITY_HIGH
from frigate.log import LogPipe
from frigate.util.builtin import EventsPerSecond, get_ffmpeg_arg_list
from frigate.util.builtin import EventsPerSecond
from frigate.util.ffmpeg import start_or_restart_ffmpeg, stop_ffmpeg
from frigate.util.image import (
    FrameManager,
@ -34,23 +34,6 @@ from frigate.util.process import FrigateProcess

logger = logging.getLogger(__name__)

# all built-in record presets use this segment_time
DEFAULT_RECORD_SEGMENT_TIME = 10


def _get_record_segment_time(config: CameraConfig) -> int:
    """Extract -segment_time from the camera's record output args."""
    record_args = get_ffmpeg_arg_list(config.ffmpeg.output_args.record)

    if record_args and record_args[0].startswith("preset"):
        return DEFAULT_RECORD_SEGMENT_TIME

    try:
        idx = record_args.index("-segment_time")
        return int(record_args[idx + 1])
    except (ValueError, IndexError):
        return DEFAULT_RECORD_SEGMENT_TIME
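
Worked example of the staleness padding built on this helper (see the watchdog code below): with the default 10s segments the 120s floor wins; a camera configured with -segment_time 60 widens the window instead.

assert max(120, 2 * 10 + 30) == 120   # default preset: floor applies
assert max(120, 2 * 60 + 30) == 150   # long segments: padded threshold applies
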
def capture_frames(
    ffmpeg_process: sp.Popen[Any],
@ -181,12 +164,6 @@ class CameraWatchdog(threading.Thread):
        self.latest_cache_segment_time: float = 0
        self.record_enable_time: datetime | None = None

        # `valid` segments are published with the segment's start time, so the
        # gap between consecutive publishes can reach 2 * segment_time. Pad the
        # staleness threshold so it's never tighter than that worst case.
        segment_time = _get_record_segment_time(self.config)
        self.record_stale_threshold = max(120, 2 * segment_time + 30)

        # Stall tracking (based on last processed frame)
        self._stall_timestamps: deque[float] = deque()
        self._stall_active: bool = False
@ -340,16 +317,16 @@ class CameraWatchdog(threading.Thread):
            if camera != self.config.name:
                continue

            if topic.endswith(RecordingsDataTypeEnum.invalid.value):
                self.logger.warning(
                    f"Invalid recording segment detected for {camera} at {segment_time}"
                )
                self.latest_invalid_segment_time = segment_time
            elif topic.endswith(RecordingsDataTypeEnum.valid.value):
            if topic.endswith(RecordingsDataTypeEnum.valid.value):
                self.logger.debug(
                    f"Latest valid recording segment time on {camera}: {segment_time}"
                )
                self.latest_valid_segment_time = segment_time
            elif topic.endswith(RecordingsDataTypeEnum.invalid.value):
                self.logger.warning(
                    f"Invalid recording segment detected for {camera} at {segment_time}"
                )
                self.latest_invalid_segment_time = segment_time
            elif topic.endswith(RecordingsDataTypeEnum.latest.value):
                if segment_time is not None:
                    self.latest_cache_segment_time = segment_time
@ -436,17 +413,16 @@ class CameraWatchdog(threading.Thread):

        # ensure segments are still being created and that they have valid video data
        # Skip checks during grace period to allow segments to start being created
        stale_window = timedelta(seconds=self.record_stale_threshold)
        cache_stale = not in_grace_period and now_utc > (
            latest_cache_dt + stale_window
            latest_cache_dt + timedelta(seconds=120)
        )
        valid_stale = not in_grace_period and now_utc > (
            latest_valid_dt + stale_window
            latest_valid_dt + timedelta(seconds=120)
        )
        invalid_stale_condition = (
            self.latest_invalid_segment_time > 0
            and not in_grace_period
            and now_utc > (latest_invalid_dt + stale_window)
            and now_utc > (latest_invalid_dt + timedelta(seconds=120))
            and self.latest_valid_segment_time
            <= self.latest_invalid_segment_time
        )
@ -463,7 +439,7 @@ class CameraWatchdog(threading.Thread):
        )

        self.logger.error(
            f"{reason} for {self.config.name} in the last {self.record_stale_threshold}s. Restarting the ffmpeg record process..."
            f"{reason} for {self.config.name} in the last 120s. Restarting the ffmpeg record process..."
        )
        p["process"] = start_or_restart_ffmpeg(
            p["cmd"],

@ -28,7 +28,6 @@ class MonitoredProcess:
    restart_timestamps: deque[float] = field(
        default_factory=lambda: deque(maxlen=MAX_RESTARTS)
    )
    clean_exit_logged: bool = False

    def is_restarting_too_fast(self, now: float) -> bool:
        while (
@ -73,9 +72,7 @@ class FrigateWatchdog(threading.Thread):

        exitcode = entry.process.exitcode
        if exitcode == 0:
            if not entry.clean_exit_logged:
                logger.info("Process %s exited cleanly, not restarting", entry.name)
                entry.clean_exit_logged = True
            logger.info("Process %s exited cleanly, not restarting", entry.name)
            return

        logger.warning(

@ -1,95 +1,88 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "runtime-notice"
   },
   "source": [
    "**Before running:** go to **Runtime → Change runtime type → Fallback runtime version: 2025.07** (Python 3.11). The current Colab default (Python 3.12+) is incompatible with `super-gradients`."
   ]
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rmuF9iKWTbdk"
   },
   "outputs": [],
   "source": [
    "! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NiRCt917KKcL"
   },
   "outputs": [],
   "source": [
    "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/pretrained_models.py\n",
    "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.12/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dTB0jy_NNSFz"
   },
   "outputs": [],
   "source": [
    "from super_gradients.common.object_names import Models\n",
    "from super_gradients.conversion import DetectionOutputFormatMode\n",
    "from super_gradients.training import models\n",
    "\n",
    "model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "GymUghyCNXem"
   },
   "outputs": [],
   "source": [
    "# export the model for compatibility with Frigate\n",
    "\n",
    "model.export(\"yolo_nas_s.onnx\",\n",
    "    output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
    "    max_predictions_per_image=20,\n",
    "    num_pre_nms_predictions=300,\n",
    "    confidence_threshold=0.4,\n",
    "    input_image_shape=(320,320),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uBhXV5g4Nh42"
   },
   "outputs": [],
   "source": [
    "from google.colab import files\n",
    "\n",
    "files.download('yolo_nas_s.onnx')"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rmuF9iKWTbdk"
   },
   "outputs": [],
   "source": [
    "! pip install -q \"jedi>=0.16\"\n",
    "! pip install -q git+https://github.com/Deci-AI/super-gradients.git"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NiRCt917KKcL"
   },
   "outputs": [],
   "source": "! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/pretrained_models.py\n! sed -i 's/sghub\\.deci\\.ai/d2gjn4b69gu75n.cloudfront.net/g; s/sg-hub-nv\\.s3\\.amazonaws\\.com/d2gjn4b69gu75n.cloudfront.net/g' /usr/local/lib/python*/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dTB0jy_NNSFz"
   },
   "outputs": [],
   "source": [
    "from super_gradients.common.object_names import Models\n",
    "from super_gradients.conversion import DetectionOutputFormatMode\n",
    "from super_gradients.training import models\n",
    "\n",
    "model = models.get(Models.YOLO_NAS_S, pretrained_weights=\"coco\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "GymUghyCNXem"
   },
   "outputs": [],
   "source": [
    "# export the model for compatibility with Frigate\n",
    "\n",
    "model.export(\"yolo_nas_s.onnx\",\n",
    "    output_predictions_format=DetectionOutputFormatMode.FLAT_FORMAT,\n",
    "    max_predictions_per_image=20,\n",
    "    num_pre_nms_predictions=300,\n",
    "    confidence_threshold=0.4,\n",
    "    input_image_shape=(320,320),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uBhXV5g4Nh42"
   },
   "outputs": [],
   "source": [
    "from google.colab import files\n",
    "\n",
    "files.download('yolo_nas_s.onnx')"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
 "nbformat": 4,
 "nbformat_minor": 0
}
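
An optional sanity check after the export above (a sketch, not part of the notebook): load the ONNX file with onnxruntime and push one dummy 320x320 frame through it to confirm the graph is valid. The dtype probe is an assumption, since the exported model's input type depends on whether preprocessing was fused.

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolo_nas_s.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# exported super-gradients models commonly take uint8; fall back to float32
dtype = np.uint8 if "uint8" in inp.type else np.float32
dummy = np.zeros((1, 3, 320, 320), dtype=dtype)
outputs = sess.run(None, {inp.name: dummy})
print([o.name for o in sess.get_outputs()], outputs[0].shape)
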
@ -1,376 +0,0 @@
#!/usr/bin/env python3
"""Analyze keyframe and timestamp structure of Frigate recording segments.

This is a diagnostic tool for investigating seek precision / GOP behavior on
recorded segments. It does not modify anything.

ffprobe is only available inside the Frigate container, at
/usr/lib/ffmpeg/$DEFAULT_FFMPEG_VERSION/bin/ffprobe
This script auto-resolves that path from the DEFAULT_FFMPEG_VERSION env var
(or falls back to scanning /usr/lib/ffmpeg/*/bin/ffprobe). Pass --ffprobe to
override if needed.

All recording segments on the filesystem are in UTC. The --timestamp flag
expects a UTC Unix timestamp.

Typical use:
    # Inside the Frigate container (or wherever recordings are mounted)
    python3 analyze_recording_keyframes.py <camera_name>

    # Analyze 10 most recent segments
    python3 analyze_recording_keyframes.py <camera_name> --count 10

    # Locate the segment that contains a specific UTC Unix timestamp and
    # show it plus surrounding segments
    python3 analyze_recording_keyframes.py <camera> --timestamp 1713471234.567

    # Custom recordings directory
    python3 analyze_recording_keyframes.py <camera> --recordings-dir /media/frigate/recordings

    # Override the ffprobe path explicitly
    python3 analyze_recording_keyframes.py <camera> --ffprobe /usr/lib/ffmpeg/7.0/bin/ffprobe
"""

import argparse
import datetime
import json
import os
import subprocess
import sys
from pathlib import Path
from statistics import mean, median, stdev


def resolve_ffprobe_path(override: str | None) -> str:
    """Resolve the ffprobe binary path.

    Inside the Frigate container, ffprobe lives at
    /usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffprobe — the exact version
    depends on the image build and is exposed as an env var.
    """
    if override:
        return override
    version = os.environ.get("DEFAULT_FFMPEG_VERSION", "")
    if version:
        path = f"/usr/lib/ffmpeg/{version}/bin/ffprobe"
        if Path(path).is_file():
            return path
    # Fall back to scanning the Frigate ffmpeg install root.
    for candidate in sorted(Path("/usr/lib/ffmpeg").glob("*/bin/ffprobe")):
        if candidate.is_file():
            return str(candidate)
    print(
        "Could not locate ffprobe. Pass --ffprobe <path> or set "
        "DEFAULT_FFMPEG_VERSION.",
        file=sys.stderr,
    )
    sys.exit(1)


def find_recent_segments(recordings_dir: Path, camera: str, count: int) -> list[Path]:
    """Return the N most recent .mp4 segments for the given camera.

    Expected layout: <recordings_dir>/<YYYY-MM-DD>/<HH>/<camera>/<MM>.<SS>.mp4
    """
    pattern = f"*/*/{camera}/*.mp4"
    segments = sorted(recordings_dir.glob(pattern))
    return segments[-count:]


def find_segments_near_timestamp(
    recordings_dir: Path, camera: str, target_ts: float, count: int
) -> tuple[list[Path], Path | None]:
    """Return `count` segments centered on the one containing `target_ts`.

    Also returns the specific segment that should contain the timestamp, so
    callers can highlight it in output.
    """
    pattern = f"*/*/{camera}/*.mp4"
    with_ts: list[tuple[float, Path]] = []
    for seg in sorted(recordings_dir.glob(pattern)):
        ts = filename_to_timestamp(seg)
        if ts is not None:
            with_ts.append((ts, seg))

    if not with_ts:
        return [], None

    # Largest filename_ts that is <= target_ts — that's the segment that
    # should contain the timestamp (Frigate catalogs segments by filename).
    target_idx = -1
    for i, (ts, _) in enumerate(with_ts):
        if ts <= target_ts:
            target_idx = i
        else:
            break

    if target_idx < 0:
        # target_ts is before the earliest segment we have — just return the
        # first `count` segments so the user can see what's available.
        window = with_ts[:count]
        return [seg for _, seg in window], None

    half = count // 2
    start = max(0, target_idx - half)
    end = min(len(with_ts), start + count)
    start = max(0, end - count)

    window = with_ts[start:end]
    return [seg for _, seg in window], with_ts[target_idx][1]


def filename_to_timestamp(segment: Path) -> float | None:
    """Parse the wall-clock time from Frigate's segment path layout."""
    try:
        date = segment.parent.parent.parent.name  # YYYY-MM-DD
        hour = segment.parent.parent.name  # HH
        mm_ss = segment.stem  # MM.SS
        minute, second = mm_ss.split(".")
        dt = datetime.datetime.strptime(
            f"{date} {hour}:{minute}:{second}",
            "%Y-%m-%d %H:%M:%S",
        ).replace(tzinfo=datetime.timezone.utc)
        return dt.timestamp()
    except (ValueError, IndexError):
        return None
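
Example of the path convention filename_to_timestamp decodes (hypothetical segment): the date and hour come from parent folders, the minute and second from the file stem.

ts = filename_to_timestamp(
    Path("/media/frigate/recordings/2024-04-18/20/front_door/13.54.mp4")
)
assert ts == datetime.datetime(
    2024, 4, 18, 20, 13, 54, tzinfo=datetime.timezone.utc
).timestamp()  # 2024-04-18 20:13:54 UTC
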
def run_ffprobe(ffprobe: str, args: list[str]) -> dict:
|
||||
"""Run ffprobe and return parsed JSON, or empty dict on failure."""
|
||||
result = subprocess.run(
|
||||
[ffprobe, "-v", "error", *args, "-of", "json"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
check=False,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
print(f" ffprobe error: {result.stderr.strip()}", file=sys.stderr)
|
||||
return {}
|
||||
try:
|
||||
return json.loads(result.stdout)
|
||||
except json.JSONDecodeError:
|
||||
return {}
|
||||
|
||||
|
||||
def get_format_info(ffprobe: str, segment: Path) -> tuple[dict, dict]:
|
||||
"""Return (format_dict, stream_dict) for the first video stream."""
|
||||
data = run_ffprobe(
|
||||
ffprobe,
|
||||
[
|
||||
"-show_entries",
|
||||
"format=duration,start_time",
|
||||
"-show_entries",
|
||||
"stream=codec_name,profile,r_frame_rate,width,height",
|
||||
"-select_streams",
|
||||
"v:0",
|
||||
str(segment),
|
||||
],
|
||||
)
|
||||
fmt = data.get("format", {})
|
||||
streams = data.get("streams") or [{}]
|
||||
return fmt, streams[0]
|
||||
|
||||
|
||||
def get_video_packets(ffprobe: str, segment: Path) -> list[dict]:
|
||||
"""Return video packets with pts_time and flags."""
|
||||
data = run_ffprobe(
|
||||
ffprobe,
|
||||
[
|
||||
"-select_streams",
|
||||
"v",
|
||||
"-show_entries",
|
||||
"packet=pts_time,dts_time,flags",
|
||||
str(segment),
|
||||
],
|
||||
)
|
||||
return data.get("packets", [])
|
||||
|
||||
|
||||
def analyze(ffprobe: str, segment: Path, highlight: bool = False) -> None:
|
||||
marker = " <-- contains target timestamp" if highlight else ""
|
||||
print(f"\n=== {segment} ==={marker}")
|
||||
|
||||
fmt, stream = get_format_info(ffprobe, segment)
|
||||
duration = float(fmt.get("duration", 0) or 0)
|
||||
start_time = float(fmt.get("start_time", 0) or 0)
|
||||
codec = stream.get("codec_name", "?")
|
||||
profile = stream.get("profile", "?")
|
||||
width = stream.get("width", "?")
|
||||
height = stream.get("height", "?")
|
||||
fps = stream.get("r_frame_rate", "?/1")
|
||||
|
||||
filename_ts = filename_to_timestamp(segment)
|
||||
filename_iso = (
|
||||
datetime.datetime.fromtimestamp(
|
||||
filename_ts, tz=datetime.timezone.utc
|
||||
).isoformat()
|
||||
if filename_ts is not None
|
||||
else "?"
|
||||
)
|
||||
|
||||
print(f" Codec: {codec} ({profile}) {width}x{height} {fps}")
|
||||
print(f" Filename time: {filename_ts} ({filename_iso})")
|
||||
print(f" Format duration: {duration:.3f}s")
|
||||
print(f" Format start: {start_time:.3f}s (PTS offset of first packet)")
|
||||
|
||||
packets = get_video_packets(ffprobe, segment)
|
||||
if not packets:
|
||||
print(" (no video packets)")
|
||||
return
|
||||
|
||||
keyframe_times: list[float] = []
|
||||
first_pts: float | None = None
|
||||
last_pts: float | None = None
|
||||
|
||||
for pkt in packets:
|
||||
pts_str = pkt.get("pts_time")
|
||||
if pts_str is None or pts_str == "N/A":
|
||||
continue
|
||||
pts = float(pts_str)
|
||||
if first_pts is None:
|
||||
first_pts = pts
|
||||
last_pts = pts
|
||||
if "K" in pkt.get("flags", ""):
|
||||
keyframe_times.append(pts)
|
||||
|
||||
total_packets = len(packets)
|
||||
kf_count = len(keyframe_times)
|
||||
|
||||
print(f" Video packets: {total_packets}")
|
||||
print(f" Keyframes: {kf_count}")
|
||||
if first_pts is not None and last_pts is not None:
|
||||
print(
|
||||
f" Packet PTS: first={first_pts:.3f}s last={last_pts:.3f}s "
|
||||
f"span={last_pts - first_pts:.3f}s"
|
||||
)
|
||||
|
||||
if keyframe_times:
|
||||
print(
|
||||
f" Keyframe PTS: first={keyframe_times[0]:.3f}s "
|
||||
f"last={keyframe_times[-1]:.3f}s"
|
||||
)
|
||||
formatted = ", ".join(f"{t:.3f}" for t in keyframe_times)
|
||||
print(f" Keyframe times: [{formatted}]")
|
||||
|
||||
if len(keyframe_times) >= 2:
|
||||
gaps = [b - a for a, b in zip(keyframe_times, keyframe_times[1:])]
|
||||
avg_fps_estimate = (
|
||||
total_packets / (last_pts - first_pts)
|
||||
if last_pts and first_pts is not None and last_pts > first_pts
|
||||
else 0
|
||||
)
|
||||
print(
|
||||
f" GOP gaps (s): min={min(gaps):.3f} max={max(gaps):.3f} "
|
||||
f"mean={mean(gaps):.3f} median={median(gaps):.3f}"
|
||||
)
|
||||
if len(gaps) > 1:
|
||||
print(f" stdev={stdev(gaps):.3f}")
|
||||
print(
|
||||
f" Est. mean GOP: ~{mean(gaps) * avg_fps_estimate:.1f} frames"
|
||||
if avg_fps_estimate
|
||||
else ""
|
||||
)
|
||||
if max(gaps) > 5:
|
||||
print(
|
||||
" !! Max GOP > 5s — consistent with adaptive/smart codec "
|
||||
"(even if 'Smart Codec' is off in the UI, some cameras still "
|
||||
"produce irregular GOPs under specific encoder profiles)"
|
||||
)
|
||||
elif kf_count == 1:
|
||||
print(" !! Only one keyframe in segment — very long GOP")
|
||||
|
||||
# Report how well filename time aligns with first-packet PTS.
|
||||
# (Filename time is what Frigate uses as recording.start_time in the DB.)
|
||||
if filename_ts is not None and first_pts is not None:
|
||||
print(
|
||||
f" Notes: first packet PTS is {first_pts:.3f}s into the file; "
|
||||
f"Frigate treats filename time as PTS=0 for seek math."
|
||||
)
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(
|
||||
description=__doc__,
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
)
|
||||
parser.add_argument("camera", help="Camera name (matches the recordings subfolder)")
|
||||
parser.add_argument(
|
||||
"--count",
|
||||
type=int,
|
||||
default=5,
|
||||
help="Number of most recent segments to analyze (default: 5)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--recordings-dir",
|
||||
default="/media/frigate/recordings",
|
||||
help="Path to the recordings directory (default: /media/frigate/recordings)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--ffprobe",
|
||||
default=None,
|
||||
help=(
|
||||
"Full path to the ffprobe binary. Defaults to the Frigate-bundled "
|
||||
"binary at /usr/lib/ffmpeg/$DEFAULT_FFMPEG_VERSION/bin/ffprobe."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--timestamp",
|
||||
type=float,
|
||||
default=None,
|
||||
help=(
|
||||
"Unix timestamp (UTC seconds, decimals allowed) to locate. The "
|
||||
"script finds the segment that should contain this time and "
|
||||
"analyzes it plus surrounding segments (count controls the "
|
||||
"window). All on-disk segments are stored in UTC, so pass a UTC "
|
||||
"Unix timestamp."
|
||||
),
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
ffprobe = resolve_ffprobe_path(args.ffprobe)
|
||||
|
||||
recordings_dir = Path(args.recordings_dir)
|
||||
if not recordings_dir.is_dir():
|
||||
print(
|
||||
f"Recordings directory not found: {recordings_dir}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
sys.exit(1)
|
||||
|
||||
target_segment: Path | None = None
|
||||
if args.timestamp is not None:
|
||||
segments, target_segment = find_segments_near_timestamp(
|
||||
recordings_dir, args.camera, args.timestamp, args.count
|
||||
)
|
||||
target_iso = datetime.datetime.fromtimestamp(
|
||||
args.timestamp, tz=datetime.timezone.utc
|
||||
).isoformat()
|
||||
mode = f"around timestamp {args.timestamp} ({target_iso})"
|
||||
else:
|
||||
segments = find_recent_segments(recordings_dir, args.camera, args.count)
|
||||
mode = "most recent"
|
||||
|
||||
if not segments:
|
||||
print(
|
||||
f"No segments found for camera '{args.camera}' under {recordings_dir}",
|
||||
file=sys.stderr,
|
||||
)
|
||||
sys.exit(1)
|
||||
|
||||
if args.timestamp is not None and target_segment is None:
|
||||
print(
|
||||
f"!! Target timestamp {args.timestamp} is before the earliest "
            f"segment on disk; showing the earliest available segments instead.",
            file=sys.stderr,
        )

    print(
        f"Analyzing {len(segments)} {mode} segment(s) for camera "
        f"'{args.camera}' under {recordings_dir} (ffprobe: {ffprobe})"
    )
    for segment in segments:
        analyze(ffprobe, segment, highlight=(segment == target_segment))


if __name__ == "__main__":
    main()
@ -1,783 +0,0 @@
"""
Face recognition investigation script.

Standalone replica of Frigate's ArcFace pipeline (see
frigate/data_processing/common/face/model.py and
frigate/embeddings/onnx/face_embedding.py) for analyzing a face collection
outside the running service. Useful for:

- Diagnosing why a person's collection produces false positives
- Finding outlier/contaminating training images
- Inspecting the effect of the shipped vector-wise outlier filter

Layout:
- Core pipeline: LandmarkAligner, ArcFaceEmbedder, arcface_preprocess,
  similarity_to_confidence, blur_reduction — all mirroring the production
  code exactly
- Default run: summarize positive and negative sets against a baseline
  trim_mean class representation
- Optional diagnostics (flags): vector-outlier filter behavior, degenerate
  "tiny crop" embedding clustering, and multi-identity contamination

Usage:
    python3 face_investigate.py \\
        --positive <positive_folder> \\
        --negative <negative_folder> \\
        [--model-cache /path/to/model_cache] \\
        [--vector-outlier] [--degenerate] [--contamination]

The positive folder should contain training images for a single identity
(same layout as FACE_DIR/<name>/*.webp). The negative folder should contain
runtime crops to test against — a mix of true matches and misfires.
"""

from __future__ import annotations

import argparse
import os
import sys
from dataclasses import dataclass
from typing import Iterable

import cv2
import numpy as np
import onnxruntime as ort
from PIL import Image
from scipy import stats

ARCFACE_INPUT_SIZE = 112


# ---------------------------------------------------------------------------
# Replicated Frigate pipeline
# ---------------------------------------------------------------------------


def _process_image_frigate(image: np.ndarray) -> Image.Image:
    """Mirror BaseEmbedding._process_image for an ndarray input.

    NOTE: Frigate passes the output of `cv2.imread` (BGR) directly in. PIL's
    `Image.fromarray` does NOT reorder channels, so the embedder effectively
    receives a BGR-ordered tensor. We replicate that faithfully here. (Tested
    — swapping to RGB produces near-identical embeddings; this model is
    robust to channel order.)
    """
    return Image.fromarray(image)


def arcface_preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Mirror ArcfaceEmbedding._preprocess_inputs."""
    pil = _process_image_frigate(image_bgr)

    width, height = pil.size
    if width != ARCFACE_INPUT_SIZE or height != ARCFACE_INPUT_SIZE:
        if width > height:
            new_height = int(((height / width) * ARCFACE_INPUT_SIZE) // 4 * 4)
            pil = pil.resize((ARCFACE_INPUT_SIZE, new_height))
        else:
            new_width = int(((width / height) * ARCFACE_INPUT_SIZE) // 4 * 4)
            pil = pil.resize((new_width, ARCFACE_INPUT_SIZE))

    og = np.array(pil).astype(np.float32)
    og_h, og_w, channels = og.shape

    frame = np.zeros(
        (ARCFACE_INPUT_SIZE, ARCFACE_INPUT_SIZE, channels), dtype=np.float32
    )
    x_center = (ARCFACE_INPUT_SIZE - og_w) // 2
    y_center = (ARCFACE_INPUT_SIZE - og_h) // 2
    frame[y_center : y_center + og_h, x_center : x_center + og_w] = og

    frame = (frame / 127.5) - 1.0
    frame = np.transpose(frame, (2, 0, 1))
    frame = np.expand_dims(frame, axis=0)
    return frame

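# Editor's sketch (illustrative, not part of the original file): the shape
# flow above for a hypothetical 112x56 BGR crop — resized to preserve aspect,
# letterboxed into the 112x112 canvas, scaled to [-1, 1], and emitted NCHW:
#
#     crop = np.zeros((56, 112, 3), dtype=np.uint8)
#     tensor = arcface_preprocess(crop)
#     assert tensor.shape == (1, 3, 112, 112)
#     assert tensor.min() == -1.0  # zero pixels map to -1 after scaling
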
class LandmarkAligner:
    """Mirror FaceRecognizer.align_face."""

    def __init__(self, landmark_model_path: str):
        if not os.path.exists(landmark_model_path):
            raise FileNotFoundError(landmark_model_path)
        self.detector = cv2.face.createFacemarkLBF()
        self.detector.loadModel(landmark_model_path)

    def align(
        self, image: np.ndarray, out_w: int, out_h: int
    ) -> tuple[np.ndarray, dict]:
        land_image = (
            cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
        )
        _, lands = self.detector.fit(
            land_image, np.array([(0, 0, land_image.shape[1], land_image.shape[0])])
        )
        landmarks = lands[0][0]

        leftEyePts = landmarks[42:48]
        rightEyePts = landmarks[36:42]
        leftEyeCenter = leftEyePts.mean(axis=0).astype("int")
        rightEyeCenter = rightEyePts.mean(axis=0).astype("int")

        dY = rightEyeCenter[1] - leftEyeCenter[1]
        dX = rightEyeCenter[0] - leftEyeCenter[0]
        angle = np.degrees(np.arctan2(dY, dX)) - 180
        dist = float(np.sqrt((dX**2) + (dY**2)))

        desiredRightEyeX = 1.0 - 0.35
        desiredDist = (desiredRightEyeX - 0.35) * out_w
        scale = desiredDist / dist if dist > 0 else 1.0

        eyesCenter = (
            int((leftEyeCenter[0] + rightEyeCenter[0]) // 2),
            int((leftEyeCenter[1] + rightEyeCenter[1]) // 2),
        )
        M = cv2.getRotationMatrix2D(eyesCenter, angle, scale)
        tX = out_w * 0.5
        tY = out_h * 0.35
        M[0, 2] += tX - eyesCenter[0]
        M[1, 2] += tY - eyesCenter[1]

        aligned = cv2.warpAffine(
            image, M, (out_w, out_h), flags=cv2.INTER_CUBIC
        )
        info = dict(
            angle=float(angle),
            eye_dist_px=dist,
            scale=float(scale),
            landmarks=landmarks,
        )
        return aligned, info


class ArcFaceEmbedder:
    def __init__(self, model_path: str):
        self.session = ort.InferenceSession(
            model_path, providers=["CPUExecutionProvider"]
        )
        self.input_name = self.session.get_inputs()[0].name

    def embed(self, image_bgr: np.ndarray) -> np.ndarray:
        tensor = arcface_preprocess(image_bgr)
        out = self.session.run(None, {self.input_name: tensor})[0]
        return out.squeeze()

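# Editor's sketch (illustrative, not part of the original file): how the two
# classes compose, mirroring what load_folder()/main() below do with the
# default --model-cache of /config/model_cache ("crop.webp" is hypothetical):
#
#     aligner = LandmarkAligner("/config/model_cache/facedet/landmarkdet.yaml")
#     embedder = ArcFaceEmbedder("/config/model_cache/facedet/arcface.onnx")
#     img = cv2.imread("crop.webp")  # BGR, any size
#     aligned, info = aligner.align(img, img.shape[1], img.shape[0])
#     emb = embedder.embed(aligned)  # 1-D embedding vector
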
def similarity_to_confidence(
    cos_sim: float,
    median: float = 0.3,
    range_width: float = 0.6,
    slope_factor: float = 12,
) -> float:
    slope = slope_factor / range_width
    return float(1.0 / (1.0 + np.exp(-slope * (cos_sim - median))))

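# Editor's note (illustrative, not part of the original file): with the
# defaults above the slope is 12 / 0.6 = 20, so confidence is a sigmoid
# centered on the median similarity of 0.3:
#
#     similarity_to_confidence(0.3)  # 0.500 — the midpoint
#     similarity_to_confidence(0.4)  # ~0.881
#     similarity_to_confidence(0.2)  # ~0.119
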
def laplacian_variance(image: np.ndarray) -> float:
    return float(cv2.Laplacian(image, cv2.CV_64F).var())


def blur_reduction(variance: float) -> float:
    if variance < 120:
        return 0.06
    elif variance < 160:
        return 0.04
    elif variance < 200:
        return 0.02
    elif variance < 250:
        return 0.01
    return 0.0


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return 0.0
    return float(np.dot(a, b) / denom)


def l2(v: np.ndarray) -> np.ndarray:
    return v / (np.linalg.norm(v) + 1e-9)

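# Editor's note (illustrative, not part of the original file): for unit
# vectors cosine similarity reduces to a dot product, which is why the
# analyses below L2-normalize rows once and then take `norms @ norms.T`
# to get a full pairwise-cosine matrix:
#
#     a, b = np.array([3.0, 4.0]), np.array([4.0, 3.0])
#     cosine(a, b)          # 0.96
#     float(l2(a) @ l2(b))  # 0.96 (same value, up to the 1e-9 epsilon)
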
# ---------------------------------------------------------------------------
# Sample loading
# ---------------------------------------------------------------------------


@dataclass
class FaceSample:
    path: str
    shape: tuple[int, int]
    embedding: np.ndarray
    blur_var: float
    align_info: dict


def load_folder(
    folder: str, aligner: LandmarkAligner, embedder: ArcFaceEmbedder
) -> list[FaceSample]:
    samples: list[FaceSample] = []
    names = sorted(os.listdir(folder))
    for name in names:
        if name.startswith("."):
            continue
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        img = cv2.imread(path)
        if img is None:
            print(f"  [skip unreadable] {name}")
            continue
        aligned, info = aligner.align(img, img.shape[1], img.shape[0])
        emb = embedder.embed(aligned)
        samples.append(
            FaceSample(
                path=path,
                shape=(img.shape[1], img.shape[0]),
                embedding=emb,
                blur_var=laplacian_variance(img),
                align_info=info,
            )
        )
    return samples


def trimmed_mean(embs: Iterable[np.ndarray], trim: float = 0.15) -> np.ndarray:
    arr = np.stack(list(embs), axis=0)
    return stats.trim_mean(arr, trim, axis=0)

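# Editor's note (illustrative, not part of the original file): with
# trim=0.15, scipy's trim_mean sorts each coordinate and cuts int(n * 0.15)
# values from both ends before averaging, so a single wild embedding cannot
# drag the class mean:
#
#     stats.trim_mean(np.arange(10), 0.15)  # drops 0 and 9 -> mean(1..8) = 4.5
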
# ---------------------------------------------------------------------------
# Baseline analyses (always run)
# ---------------------------------------------------------------------------


def summarize_positive(samples: list[FaceSample], mean_emb: np.ndarray) -> None:
    """Summary of training set: per-sample cos to class mean, intra-class stats.

    Outliers with cos far below the rest are likely degrading the mean —
    they'd be the first candidates the shipped vector-outlier filter drops.
    """
    print("\n" + "=" * 78)
    print(f"POSITIVE SET ANALYSIS ({len(samples)} images)")
    print("=" * 78)

    rows = []
    for s in samples:
        cs = cosine(s.embedding, mean_emb)
        conf = similarity_to_confidence(cs)
        red = blur_reduction(s.blur_var)
        rows.append(
            dict(
                name=os.path.basename(s.path),
                shape=f"{s.shape[0]}x{s.shape[1]}",
                eye_px=s.align_info["eye_dist_px"],
                angle=s.align_info["angle"] + 180,
                blur=s.blur_var,
                cos=cs,
                conf=conf,
                red=red,
                adj_conf=max(0.0, conf - red),
            )
        )

    rows.sort(key=lambda r: r["cos"])
    sims = np.array([r["cos"] for r in rows])
    print(
        f"\nCosine-to-trimmed-mean: mean={sims.mean():.3f} std={sims.std():.3f} "
        f"min={sims.min():.3f} max={sims.max():.3f}"
    )

    print("\n-- Worst matches (bottom 10, most likely hurting the mean) --")
    print(
        f"{'cos':>6} {'conf':>6} {'blur':>7} {'eyes':>6} "
        f"{'angle':>6} {'shape':>9} name"
    )
    for r in rows[:10]:
        print(
            f"{r['cos']:6.3f} {r['conf']:6.3f} {r['blur']:7.1f} "
            f"{r['eye_px']:6.1f} {r['angle']:6.1f} {r['shape']:>9} {r['name']}"
        )

    print("\n-- Best matches (top 5) --")
    for r in rows[-5:][::-1]:
        print(
            f"{r['cos']:6.3f} {r['conf']:6.3f} {r['blur']:7.1f} "
            f"{r['eye_px']:6.1f} {r['angle']:6.1f} {r['shape']:>9} {r['name']}"
        )

    # Pairwise analysis — flags embeddings poorly correlated with the rest
    print("\n-- Pairwise intra-class similarity (mean cos vs. other positives) --")
    embs = np.stack([s.embedding for s in samples], axis=0)
    norms = embs / (np.linalg.norm(embs, axis=1, keepdims=True) + 1e-9)
    sim_matrix = norms @ norms.T
    np.fill_diagonal(sim_matrix, np.nan)
    mean_pairwise = np.nanmean(sim_matrix, axis=1)
    names = [os.path.basename(s.path) for s in samples]
    ordered = sorted(zip(names, mean_pairwise), key=lambda t: t[1])
    print(f"{'mean_cos':>9} name")
    for nm, mp in ordered[:10]:
        print(f"{mp:9.3f} {nm}")
    print(f"\n  overall mean pairwise cos: {np.nanmean(sim_matrix):.3f}")
    print(f"  median pairwise cos: {np.nanmedian(sim_matrix):.3f}")


def summarize_negative(
    neg_samples: list[FaceSample],
    mean_emb: np.ndarray,
    pos_samples: list[FaceSample],
) -> None:
    """Score each negative against the class mean, then show its top-3
    nearest positives. High-scoring negatives that match specific outlier
    positives hint at training-set contamination.
    """
    print("\n" + "=" * 78)
    print(f"NEGATIVE SET ANALYSIS ({len(neg_samples)} images)")
    print("=" * 78)
    print(
        f"\n{'cos':>6} {'conf':>6} {'red':>5} {'adj':>5} "
        f"{'blur':>7} {'eyes':>6} {'shape':>9} name"
    )
    for s in neg_samples:
        cs = cosine(s.embedding, mean_emb)
        conf = similarity_to_confidence(cs)
        red = blur_reduction(s.blur_var)
        print(
            f"{cs:6.3f} {conf:6.3f} {red:5.2f} {max(0, conf - red):5.2f} "
            f"{s.blur_var:7.1f} {s.align_info['eye_dist_px']:6.1f} "
            f"{s.shape[0]}x{s.shape[1]:<5} {os.path.basename(s.path)}"
        )

    print("\n-- For each negative, top-3 most similar positives --")
    pos_embs = np.stack([p.embedding for p in pos_samples])
    pos_norm = pos_embs / (np.linalg.norm(pos_embs, axis=1, keepdims=True) + 1e-9)
    for s in neg_samples:
        v = s.embedding / (np.linalg.norm(s.embedding) + 1e-9)
        sims = pos_norm @ v
        idx = np.argsort(-sims)[:3]
        print(f"\n  {os.path.basename(s.path)}:")
        for i in idx:
            print(
                f"    {sims[i]:6.3f} {os.path.basename(pos_samples[i].path)} "
                f"blur={pos_samples[i].blur_var:.1f} "
                f"eyes={pos_samples[i].align_info['eye_dist_px']:.1f}"
            )


# ---------------------------------------------------------------------------
# Optional diagnostics
# ---------------------------------------------------------------------------


def vector_outlier_test(
    pos: list[FaceSample], neg: list[FaceSample], base_trim: float = 0.15
) -> None:
    """Measure the shipped vector-wise outlier filter at various thresholds.

    The production filter at `build_class_mean` in
    frigate/data_processing/common/face/model.py uses T=0.30. This test
    sweeps T so you can see which images would be dropped on a new collection
    and how that affects the negative scores.

    Algorithm: iteratively recompute trim_mean on the kept set, drop any
    embedding with cos < T to that mean, repeat until converged. Floor at
    50% of the collection to avoid collapse.
    """
    print("\n" + "=" * 78)
    print("VECTOR-WISE OUTLIER PRE-FILTER — layered on trim_mean(0.15)")
    print("=" * 78)

    all_embs = np.stack([s.embedding for s in pos])

    def iterative_mean(
        embs: np.ndarray,
        threshold: float,
        iters: int = 3,
        min_keep_frac: float = 0.5,
    ) -> tuple[np.ndarray, np.ndarray]:
        keep = np.ones(len(embs), dtype=bool)
        floor = max(5, int(np.ceil(min_keep_frac * len(embs))))
        for _ in range(iters):
            m = stats.trim_mean(embs[keep], base_trim, axis=0)
            m_norm = m / (np.linalg.norm(m) + 1e-9)
            e_norms = embs / (np.linalg.norm(embs, axis=1, keepdims=True) + 1e-9)
            cos_to_mean = e_norms @ m_norm
            new_keep = cos_to_mean >= threshold
            if new_keep.sum() < floor:
                top_idx = np.argsort(-cos_to_mean)[:floor]
                new_keep = np.zeros_like(new_keep)
                new_keep[top_idx] = True
            if np.array_equal(new_keep, keep):
                break
            keep = new_keep
        final = stats.trim_mean(embs[keep], base_trim, axis=0)
        return final, keep

    provisional = stats.trim_mean(all_embs, base_trim, axis=0)
    p_norm = provisional / (np.linalg.norm(provisional) + 1e-9)
    e_norms_all = all_embs / (np.linalg.norm(all_embs, axis=1, keepdims=True) + 1e-9)
    cos_to_prov = e_norms_all @ p_norm
    print("\nDistribution of cos(positive, provisional trim_mean):")
    print(
        f"  min={cos_to_prov.min():.3f} p10={np.percentile(cos_to_prov, 10):.3f} "
        f"p25={np.percentile(cos_to_prov, 25):.3f} "
        f"median={np.median(cos_to_prov):.3f} "
        f"p75={np.percentile(cos_to_prov, 75):.3f} max={cos_to_prov.max():.3f}"
    )

    baseline_mean = stats.trim_mean(all_embs, base_trim, axis=0)
    baseline_pos = np.array([cosine(p.embedding, baseline_mean) for p in pos])
    baseline_neg = (
        np.array([cosine(n.embedding, baseline_mean) for n in neg])
        if neg
        else np.array([])
    )
    baseline_conf_neg = np.array(
        [similarity_to_confidence(c) for c in baseline_neg]
    )

    print(
        f"\nBaseline (trim_mean only, {len(pos)} images):"
        f"\n  pos cos min={baseline_pos.min():.3f} "
        f"mean={baseline_pos.mean():.3f} max={baseline_pos.max():.3f}"
    )
    if len(neg):
        print(
            f"  neg cos min={baseline_neg.min():.3f} "
            f"mean={baseline_neg.mean():.3f} max={baseline_neg.max():.3f}"
        )
        print(
            f"  neg conf min={baseline_conf_neg.min():.3f} "
            f"mean={baseline_conf_neg.mean():.3f} max={baseline_conf_neg.max():.3f}"
        )
        print(
            f"  margin (pos.min - neg.max): "
            f"{baseline_pos.min() - baseline_neg.max():+.3f}"
        )

    print("\nIterative (refine mean → drop vectors with cos<T → repeat):")
    print(
        f"\n{'T':>5} {'kept':>6} {'pos min':>7} {'pos mean':>8} "
        f"{'neg max':>7} {'neg mean':>8} {'neg conf.max':>12} {'margin':>7}"
    )
    for T in [0.15, 0.20, 0.25, 0.28, 0.30, 0.33, 0.36, 0.40]:
        mean, keep = iterative_mean(all_embs, T)
        pos_sims = np.array([cosine(p.embedding, mean) for p in pos])
        neg_sims = (
            np.array([cosine(n.embedding, mean) for n in neg])
            if neg
            else np.array([])
        )
        neg_conf = np.array([similarity_to_confidence(c) for c in neg_sims])
        margin = pos_sims.min() - (neg_sims.max() if len(neg_sims) else 0)
        print(
            f"{T:5.2f} {int(keep.sum()):>3}/{len(pos):<2} "
            f"{pos_sims.min():7.3f} {pos_sims.mean():8.3f} "
            f"{neg_sims.max() if len(neg_sims) else float('nan'):7.3f} "
            f"{neg_sims.mean() if len(neg_sims) else float('nan'):8.3f} "
            f"{neg_conf.max() if len(neg_conf) else float('nan'):12.3f} "
            f"{margin:+7.3f}"
        )

    # Show which images get dropped at the shipped threshold + neighbors
    for T_show in (0.25, 0.30, 0.33):
        _, keep = iterative_mean(all_embs, T_show)
        print(
            f"\nAt T={T_show}, the {int((~keep).sum())} dropped positives are:"
        )
        final_mean = stats.trim_mean(all_embs[keep], base_trim, axis=0)
        m_n = final_mean / (np.linalg.norm(final_mean) + 1e-9)
        for i, (p, k) in enumerate(zip(pos, keep)):
            if not k:
                e_n = p.embedding / (np.linalg.norm(p.embedding) + 1e-9)
                cos_final = float(e_n @ m_n)
                print(
                    f"  cos_to_clean_mean={cos_final:6.3f} "
                    f"shape={p.shape[0]}x{p.shape[1]} "
                    f"eyes={p.align_info['eye_dist_px']:6.1f} "
                    f"blur={p.blur_var:7.1f} "
                    f"{os.path.basename(p.path)}"
                )


def degenerate_embedding_test(
    pos: list[FaceSample], neg: list[FaceSample]
) -> None:
    """Detect whether negatives and low-quality positives share a degenerate
    'tiny/noisy face' region of the embedding space.

    Signal: if neg-to-neg cos is higher than pos-to-pos cos, the negatives
    aren't really per-identity embeddings — they're dominated by upsample /
    low-resolution artifacts that all map to a similar corner of embedding
    space regardless of who the face belongs to.

    Also rebuilds the mean using only high-intra-similarity positives to
    show whether a cleaner training set separates the negatives.
    """
    print("\n" + "=" * 78)
    print("DEGENERATE-EMBEDDING TEST")
    print("=" * 78)

    pos_embs = np.stack([l2(s.embedding) for s in pos])
    neg_embs = np.stack([l2(s.embedding) for s in neg])

    nn = neg_embs @ neg_embs.T
    np.fill_diagonal(nn, np.nan)
    pp = pos_embs @ pos_embs.T
    np.fill_diagonal(pp, np.nan)
    pn = pos_embs @ neg_embs.T

    print(
        f"\n  neg<->neg mean cos : {np.nanmean(nn):.3f} "
        f"(how tightly negatives cluster together)"
    )
    print(
        f"  pos<->pos mean cos : {np.nanmean(pp):.3f} "
        f"(how tightly positives cluster)"
    )
    print(
        f"  pos<->neg mean cos : {pn.mean():.3f} "
        f"(cross-class — should be low for a clean class)"
    )
    if np.nanmean(nn) > np.nanmean(pp):
        print(
            "\n  >> neg<->neg > pos<->pos: negatives cluster more tightly than\n"
            "     positives. This is the degenerate-embedding signature —\n"
            "     upsampled tiny crops share a common 'face-like blob' region\n"
            "     regardless of identity."
        )

    mean_intra = np.nanmean(pp, axis=1)
    for thresh in (0.30, 0.33, 0.36):
        keep = mean_intra >= thresh
        if keep.sum() < 5:
            continue
        clean_embs = [pos[i].embedding for i in range(len(pos)) if keep[i]]
        clean_mean = stats.trim_mean(np.stack(clean_embs), 0.15, axis=0)
        neg_scores = np.array([cosine(n.embedding, clean_mean) for n in neg])
        neg_confs = np.array([similarity_to_confidence(c) for c in neg_scores])
        pos_scores = np.array(
            [
                cosine(pos[i].embedding, clean_mean)
                for i in range(len(pos))
                if keep[i]
            ]
        )
        print(
            f"\n  mean_intra >= {thresh}: keeping {int(keep.sum())}/{len(pos)} positives"
        )
        print(
            f"    pos cos vs mean : min={pos_scores.min():.3f} "
            f"mean={pos_scores.mean():.3f} max={pos_scores.max():.3f}"
        )
        print(
            f"    neg cos vs mean : min={neg_scores.min():.3f} "
            f"mean={neg_scores.mean():.3f} max={neg_scores.max():.3f}"
        )
        print(
            f"    neg conf        : min={neg_confs.min():.3f} "
            f"mean={neg_confs.mean():.3f} max={neg_confs.max():.3f}"
        )
        print(
            f"    margin (pos.min - neg.max): "
            f"{pos_scores.min() - neg_scores.max():+.3f}"
        )


def contamination_analysis(
    pos: list[FaceSample], neg: list[FaceSample]
) -> None:
    """Check whether the positive collection contains a second identity.

    Two signals:
    (a) Per-positive: if an image is closer to at least one negative than
        to the rest of the positive class, it's likely a mislabeled face.
    (b) 2-means split of the positive embeddings: if one cluster center
        lands close to the negative mean, that cluster is a contaminating
        sub-identity that's pulling the class mean toward the negatives.
    """
    print("\n" + "=" * 78)
    print("CONTAMINATION ANALYSIS")
    print("=" * 78)

    pos_embs = np.stack([l2(s.embedding) for s in pos])
    neg_embs = np.stack([l2(s.embedding) for s in neg])
    pos_names = [os.path.basename(s.path) for s in pos]

    pos_pos = pos_embs @ pos_embs.T
    np.fill_diagonal(pos_pos, np.nan)
    pos_neg = pos_embs @ neg_embs.T

    mean_intra = np.nanmean(pos_pos, axis=1)
    max_to_neg = pos_neg.max(axis=1)
    mean_to_neg = pos_neg.mean(axis=1)

    print(
        "\nPositives closer to a negative than to their own class avg"
        "\n(these are candidates for mislabeled images):"
    )
    print(
        f"\n{'max_neg':>7} {'mean_neg':>8} {'mean_intra':>10} "
        f"{'delta':>6} name"
    )
    rows = list(zip(pos_names, max_to_neg, mean_to_neg, mean_intra))
    rows.sort(key=lambda r: -(r[1] - r[3]))
    for nm, mxn, mnn, mi in rows[:15]:
        delta = mxn - mi
        marker = " <<" if delta > 0 else ""
        print(f"{mxn:7.3f} {mnn:8.3f} {mi:10.3f} {delta:6.3f} {nm}{marker}")

    # 2-means in cosine space (no sklearn dependency).
    print("\n2-means split of positive embeddings (cosine space):")
    rng = np.random.default_rng(0)
    best = None
    for _ in range(5):
        idx = rng.choice(len(pos_embs), 2, replace=False)
        centers = pos_embs[idx].copy()
        for _ in range(50):
            sims = pos_embs @ centers.T
            labels = np.argmax(sims, axis=1)
            new_centers = np.stack(
                [
                    l2(pos_embs[labels == k].mean(axis=0))
                    if np.any(labels == k)
                    else centers[k]
                    for k in range(2)
                ]
            )
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        tight = float(np.mean([sims[i, labels[i]] for i in range(len(labels))]))
        if best is None or tight > best[0]:
            best = (tight, labels.copy(), centers.copy())

    _, labels, centers = best
    sizes = [int((labels == k).sum()) for k in range(2)]
    neg_mean = l2(neg_embs.mean(axis=0))
    print(
        f"  cluster 0: size={sizes[0]:>2} "
        f"center<->other_center_cos={float(centers[0] @ centers[1]):.3f} "
        f"center<->neg_mean_cos={float(centers[0] @ neg_mean):.3f}"
    )
    print(
        f"  cluster 1: size={sizes[1]:>2} "
        f"center<->neg_mean_cos={float(centers[1] @ neg_mean):.3f}"
    )

    neg_aligned = 0 if centers[0] @ neg_mean > centers[1] @ neg_mean else 1
    print(
        f"\n  cluster {neg_aligned} is more similar to the negatives — "
        f"its members are the contamination candidates:"
    )
    for i, lbl in enumerate(labels):
        if lbl == neg_aligned:
            print(
                f"    max_to_neg={max_to_neg[i]:.3f} "
                f"mean_intra={mean_intra[i]:.3f} {pos_names[i]}"
            )

    keep_mask = labels != neg_aligned
    if keep_mask.sum() >= 3:
        clean_embs = [pos[i].embedding for i in range(len(pos)) if keep_mask[i]]
        clean_mean = stats.trim_mean(np.stack(clean_embs), 0.15, axis=0)
        print(
            f"\n  Rebuilding class mean from the OTHER cluster "
            f"({keep_mask.sum()} images):"
        )
        print(f"  {'cos':>6} {'conf':>6} name")
        for n in neg:
            cs = cosine(n.embedding, clean_mean)
            cf = similarity_to_confidence(cs)
            print(f"  {cs:6.3f} {cf:6.3f} {os.path.basename(n.path)}")


# ---------------------------------------------------------------------------
# main
# ---------------------------------------------------------------------------


def main() -> int:
    ap = argparse.ArgumentParser(
        description="Analyze a face recognition collection outside Frigate.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    ap.add_argument("--positive", required=True, help="Training folder for one identity")
    ap.add_argument(
        "--negative",
        default=None,
        help="Runtime-crop folder to score against (optional)",
    )
    ap.add_argument(
        "--model-cache",
        default="/config/model_cache",
        help="Directory containing facedet/arcface.onnx and facedet/landmarkdet.yaml",
    )
    ap.add_argument(
        "--trim",
        type=float,
        default=0.15,
        help="trim_mean proportion (Frigate uses 0.15)",
    )
    ap.add_argument(
        "--vector-outlier",
        action="store_true",
        help="Sweep the vector-wise outlier filter threshold",
    )
    ap.add_argument(
        "--degenerate",
        action="store_true",
        help="Test whether negatives share a degenerate embedding region",
    )
    ap.add_argument(
        "--contamination",
        action="store_true",
        help="Check whether the positive folder contains a second identity",
    )
    args = ap.parse_args()

    arcface_path = os.path.join(args.model_cache, "facedet", "arcface.onnx")
    landmark_path = os.path.join(args.model_cache, "facedet", "landmarkdet.yaml")
    for p in (arcface_path, landmark_path):
        if not os.path.exists(p):
            print(f"ERROR: model file not found: {p}")
            return 1

    print(f"Loading ArcFace from {arcface_path}")
    embedder = ArcFaceEmbedder(arcface_path)
    print(f"Loading landmark model from {landmark_path}")
    aligner = LandmarkAligner(landmark_path)

    print(f"\nLoading positives from {args.positive} ...")
    pos = load_folder(args.positive, aligner, embedder)
    print(f"  {len(pos)} positives loaded")

    neg: list[FaceSample] = []
    if args.negative:
        print(f"\nLoading negatives from {args.negative} ...")
        neg = load_folder(args.negative, aligner, embedder)
        print(f"  {len(neg)} negatives loaded")

    if not pos:
        print("no positive samples — aborting")
        return 1

    mean_emb = trimmed_mean([s.embedding for s in pos], trim=args.trim)
    summarize_positive(pos, mean_emb)
    if neg:
        summarize_negative(neg, mean_emb, pos)

    if args.vector_outlier:
        vector_outlier_test(pos, neg, args.trim)
    if args.degenerate and neg:
        degenerate_embedding_test(pos, neg)
    if args.contamination and neg:
        contamination_analysis(pos, neg)

    return 0


if __name__ == "__main__":
    sys.exit(main())
@ -1,114 +1,4 @@
import { test, expect } from "../fixtures/frigate-test";
import {
  expectBodyInteractive,
  waitForBodyInteractive,
} from "../helpers/overlay-interaction";

test.describe("Export Page - Delete race @high", () => {
  // Empirical guard for radix-ui/primitives#3445: when a modal DropdownMenu
  // opens an AlertDialog and the AlertDialog's confirm action causes the
  // parent's optimistic cache update to unmount the card, we want to know
  // whether the deduped react-dismissable-layer (1.1.11) handles the
  // pointer-events stack cleanup or whether `modal={false}` is still
  // required on the DropdownMenu. The classic "canonical" pattern, distinct
  // from the FaceSelectionDialog auto-unmount race already covered by
  // face-library.spec.ts.
  test("deleting an export via dropdown→alert→confirm leaves body interactive", async ({
    frigateApp,
  }) => {
    if (frigateApp.isMobile) {
      test.skip();
      return;
    }

    const initialExports = [
      {
        id: "export-race-001",
        camera: "front_door",
        name: "Race - Test Export",
        date: 1775490731.3863528,
        video_path: "/exports/export-race-001.mp4",
        thumb_path: "/exports/export-race-001-thumb.jpg",
        in_progress: false,
        export_case_id: null,
      },
    ];
    let deleted = false;

    await frigateApp.installDefaults({
      exports: initialExports,
    });

    // Flip /api/export to empty after the delete POST is observed so the
    // page's SWR mutate sees the export gone.
    await frigateApp.page.route("**/api/export**", async (route) => {
      const payload = deleted ? [] : initialExports;
      await route.fulfill({ json: payload });
    });
    await frigateApp.page.route("**/api/exports/delete", async (route) => {
      deleted = true;
      const delayMs = Number(
        (globalThis as { process?: { env?: Record<string, string> } }).process
          ?.env?.DELETE_DELAY_MS ?? "100",
      );
      if (delayMs > 0) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
      await route.fulfill({ json: { success: true } });
    });

    await frigateApp.goto("/export");
    await expect(frigateApp.page.getByText("Race - Test Export")).toBeVisible({
      timeout: 5_000,
    });

    // Open the kebab menu on the export card. The kebab uses the
    // (misleading) aria-label "Edit name" from ExportCard's source — it
    // wraps the FiMoreVertical icon. There is exactly one such button on
    // the page once we have a single export rendered.
    const kebab = frigateApp.page
      .getByRole("button", { name: /edit name/i })
      .first();
    await expect(kebab).toBeVisible({ timeout: 5_000 });
    await kebab.click();

    const menu = frigateApp.page
      .locator('[role="menu"], [data-radix-menu-content]')
      .first();
    await expect(menu).toBeVisible({ timeout: 3_000 });

    // Delete Export
    await menu
      .getByRole("menuitem", { name: /delete export/i })
      .first()
      .click();

    // AlertDialog at page level. The confirm button's accessible name is
    // "Delete Export" (its aria-label), the visible text is just "Delete".
    const confirm = frigateApp.page.getByRole("alertdialog");
    await expect(confirm).toBeVisible({ timeout: 3_000 });
    await confirm
      .getByRole("button", { name: /^delete export$/i })
      .first()
      .click();

    // The card optimistically disappears, the dialog closes, and body
    // pointer-events must come unstuck.
    await expect(
      frigateApp.page.getByText("Race - Test Export"),
    ).not.toBeVisible({ timeout: 5_000 });
    await waitForBodyInteractive(frigateApp.page, 5_000);
    await expectBodyInteractive(frigateApp.page);

    // Sanity: another page-level button still responds.
    const newCase = frigateApp.page.getByRole("button", { name: /new case/i });
    await expect(newCase).toBeVisible({ timeout: 3_000 });
    await newCase.click();
    await expect(
      frigateApp.page.getByRole("dialog").filter({ hasText: /create case/i }),
    ).toBeVisible({ timeout: 3_000 });
  });
});

test.describe("Export Page - Overview @high", () => {
  test("renders uncategorized exports and case cards from mock data", async ({
@ -358,158 +358,6 @@ test.describe("FaceSelectionDialog @high", () => {
    await frigateApp.page.keyboard.press("Escape");
    await expect(menu).not.toBeVisible({ timeout: 3_000 });
  });

  test("classifying the last image in a group leaves body interactive", async ({
    frigateApp,
  }) => {
    // Regression guard for the stuck body pointer-events bug when the
    // last image in a grouped-recognition detail Dialog is classified.
    // Tracked upstream at radix-ui/primitives#3445.
    //
    // Root cause: when the user clicks a FaceSelectionDialog menu item,
    // the modal DropdownMenu enters its exit animation (Radix's Presence
    // keeps it in the DOM with data-state="closed" until animationend).
    // While that is in flight the classify axios resolves, SWR removes
    // the image from /api/faces, the parent's map no longer renders the
    // grouped card, and React unmounts the subtree — including the still-
    // animating DropdownMenu's Presence container. DismissableLayer's
    // shared modal-layer stack can't reconcile the interrupted exit, so
    // the `body { pointer-events: none }` entry it put on mount is never
    // popped and the rest of the UI becomes unclickable.
    //
    // The fix is `modal={false}` on the FaceSelectionDialog's
    // DropdownMenu (desktop path only). With modal=false the DropdownMenu
    // never puts an entry on DismissableLayer's body-pointer-events stack
    // in the first place, so there's nothing to leak when its Presence is
    // torn down mid-animation. The Radix-community-documented workaround
    // for #3445.
    //
    // The bug only reproduces when the mock resolves fast enough that
    // the parent unmounts before the dropdown's exit animation finishes.
    // Measured window via a 3x sweep on the pre-fix build: 0–200 ms
    // triggers it; 300 ms+ no longer reproduces. Production LAN networks
    // sit comfortably inside the bad window, while `npm run dev` seems
    // to mask it via React StrictMode's double-effect scheduling.
    const EVENT_ID = "1775487131.3863528-race";
    const initialFaces = withGroupedTrainingAttempt(basicFacesMock(), {
      eventId: EVENT_ID,
      attempts: [
        { timestamp: 1775487131.3863528, label: "unknown", score: 0.95 },
      ],
    });

    let classified = false;

    await frigateApp.installDefaults({
      faces: initialFaces,
      events: [
        {
          id: EVENT_ID,
          label: "person",
          sub_label: null,
          camera: "front_door",
          start_time: 1775487131.3863528,
          end_time: 1775487161.3863528,
          false_positive: false,
          zones: ["front_yard"],
          thumbnail: null,
          has_clip: true,
          has_snapshot: true,
          retain_indefinitely: false,
          plus_id: null,
          model_hash: "abc123",
          detector_type: "cpu",
          model_type: "ssd",
          data: {
            top_score: 0.92,
            score: 0.92,
            region: [0.1, 0.1, 0.5, 0.8],
            box: [0.2, 0.15, 0.45, 0.75],
            area: 0.18,
            ratio: 0.6,
            type: "object",
            path_data: [],
          },
        },
      ],
    });

    // Re-route /api/faces to flip to the "train empty" payload once the
    // classify POST has been received. Registered AFTER installDefaults so
    // Playwright's LIFO route matching hits this handler first.
    await frigateApp.page.route("**/api/faces", async (route) => {
      const payload = classified ? basicFacesMock() : initialFaces;
      await route.fulfill({ json: payload });
    });

    // Hold the classify POST briefly. The race opens when the parent
    // unmounts before the dropdown's exit animation finishes (~200ms
    // in Radix). 100ms keeps us comfortably inside that window and
    // reliably triggered the bug in a 3x sweep across 0/50/100/200ms
    // on the pre-fix build. CLASSIFY_DELAY_MS overrides for local sweeps.
    const delayMs = Number(
      (globalThis as { process?: { env?: Record<string, string> } }).process
        ?.env?.CLASSIFY_DELAY_MS ?? "100",
    );
    await frigateApp.page.route(
      "**/api/faces/train/*/classify",
      async (route) => {
        classified = true;
        if (delayMs > 0) {
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
        await route.fulfill({ json: { success: true } });
      },
    );

    await frigateApp.goto("/faces");

    // Open the grouped detail Dialog.
    const groupedImage = frigateApp.page
      .locator('img[src*="clips/faces/train/"]')
      .first();
    await expect(groupedImage).toBeVisible({ timeout: 5_000 });
    await groupedImage.locator("xpath=..").click();
    const dialog = frigateApp.page
      .getByRole("dialog")
      .filter({
        has: frigateApp.page.locator('img[src*="clips/faces/train/"]'),
      })
      .first();
    await expect(dialog).toBeVisible({ timeout: 5_000 });

    // Single attempt → single `+` trigger.
    const triggers = dialog.locator('[aria-haspopup="menu"]');
    await expect(triggers).toHaveCount(1);
    await triggers.first().click();

    const menu = frigateApp.page
      .locator('[role="menu"], [data-radix-menu-content]')
      .first();
    await expect(menu).toBeVisible({ timeout: 5_000 });
    await menu.getByRole("menuitem", { name: /^alice$/i }).click();

    // The Dialog must leave the tree cleanly, and body must recover.
    await expect(dialog).not.toBeVisible({ timeout: 5_000 });

    // Give Radix's exit animation + cleanup a comfortable margin on top of
    // the ~300ms simulated network delay.
    await waitForBodyInteractive(frigateApp.page, 5_000);
    await expectBodyInteractive(frigateApp.page);

    // User-visible confirmation: click something outside the dialog
    // and assert it actually responds.
    const librarySelector = frigateApp.page
      .getByRole("button")
      .filter({ hasText: /\(\d+\)/ })
      .first();
    await librarySelector.click();
    await expect(
      frigateApp.page
        .locator('[role="menu"], [data-radix-menu-content]')
        .first(),
    ).toBeVisible({ timeout: 3_000 });
  });
});

test.describe("Face Library — mobile @high @mobile", () => {
@ -69,18 +69,17 @@ test.describe("Navigation — conditional items @critical", () => {
    ).toBeVisible();
  });

  test("/chat is hidden when no agent has the chat role (desktop)", async ({
  test("/chat is hidden when genai.model is none (desktop)", async ({
    frigateApp,
  }) => {
    test.skip(frigateApp.isMobile, "Desktop sidebar");
    await frigateApp.installDefaults({
      config: {
        genai: {
          descriptions_only: {
            provider: "ollama",
            model: "llava",
            roles: ["descriptions"],
          },
          enabled: false,
          provider: "ollama",
          model: "none",
          base_url: "",
        },
      },
    });
@ -90,20 +89,12 @@
    ).toHaveCount(0);
  });

  test("/chat is visible when an agent has the chat role (desktop)", async ({
  test("/chat is visible when genai.model is set (desktop)", async ({
    frigateApp,
  }) => {
    test.skip(frigateApp.isMobile, "Desktop sidebar");
    await frigateApp.installDefaults({
      config: {
        genai: {
          chat_agent: {
            provider: "ollama",
            model: "llava",
            roles: ["chat"],
          },
        },
      },
      config: { genai: { enabled: true, model: "llava" } },
    });
    await frigateApp.goto("/");
    await expect(

@ -31,7 +31,7 @@ test.describe("Replay — no active session @medium", () => {
    await expect(
      frigateApp.page.getByRole("heading", {
        level: 2,
        name: /No Active Debug Replay Session/i,
        name: /No Active Replay Session/i,
      }),
    ).toBeVisible({ timeout: 10_000 });
    const goButton = frigateApp.page.getByRole("button", {
@ -48,7 +48,7 @@ test.describe("Replay — no active session @medium", () => {
    await expect(
      frigateApp.page.getByRole("heading", {
        level: 2,
        name: /No Active Debug Replay Session/i,
        name: /No Active Replay Session/i,
      }),
    ).toBeVisible({ timeout: 10_000 });
    await frigateApp.page
@ -297,7 +297,7 @@ test.describe("Replay — mobile @medium @mobile", () => {
    await expect(
      frigateApp.page.getByRole("heading", {
        level: 2,
        name: /No Active Debug Replay Session/i,
        name: /No Active Replay Session/i,
      }),
    ).toBeVisible({ timeout: 10_000 });
  });
6 web/package-lock.json generated
@ -9642,9 +9642,9 @@
      "license": "MIT"
    },
    "node_modules/lodash-es": {
      "version": "4.18.1",
      "resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.18.1.tgz",
      "integrity": "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A==",
      "version": "4.17.23",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
      "license": "MIT"
    },
    "node_modules/lodash.merge": {
@ -1,8 +1 @@
{
  "auth": {
    "label": "Автентикация",
    "session_length": {
      "label": "Продължителност на сесията"
    }
  }
}
{}
@ -109,8 +109,7 @@
    "classification": "Classificació",
    "chat": "Xat",
    "actions": "Accions",
    "profiles": "Perfils",
    "features": "Característiques"
    "profiles": "Perfils"
  },
  "pagination": {
    "previous": {

@ -60,76 +60,15 @@
      "noVaildTimeSelected": "No s'ha seleccionat un rang de temps vàlid",
      "failed": "No s'ha pogut inciar l'exportació: {{error}}"
    },
    "view": "Vista",
    "queued": "Exporta a la cua. Mostra el progrés a la pàgina d'exportacions.",
    "batchSuccess_one": "S'ha iniciat l'exportació 1. Obrint el cas ara.",
    "batchSuccess_many": "S'han iniciat {{count}} exportacions. Obrint el cas ara.",
    "batchSuccess_other": "S'han iniciat {{count}} exportacions. Obrint el cas ara.",
    "batchPartial": "S'han iniciat {{successful}} de {{total}} exportacions. Càmeres fallides: {{failedCameras}}",
    "batchFailed": "No s'han pogut iniciar {{total}} exportacions. Càmeres fallides: {{failedCameras}}",
    "batchQueuedSuccess_one": "Exporta a la cua 1. Obrint el cas ara.",
    "batchQueuedSuccess_many": "{{count}} exportacions a la cua. Obrint el cas ara.",
    "batchQueuedSuccess_other": "{{count}} exportacions a la cua. Obrint el cas ara.",
    "batchQueuedPartial": "{{successful}} de {{total}} exportacions a la cua. Càmeres fallides: {{failedCameras}}",
    "batchQueueFailed": "No s'han pogut posar a la cua {{total}} exportacions. Càmeres fallides: {{failedCameras}}"
    "view": "Vista"
  },
  "fromTimeline": {
    "saveExport": "Guardar exportació",
    "previewExport": "Previsualitzar exportació",
    "queueingExport": "S'està fent la cua de l'exportació...",
    "useThisRange": "Utilitza aquest interval"
    "previewExport": "Previsualitzar exportació"
  },
  "case": {
    "label": "Cas",
    "placeholder": "Selecciona un cas",
    "newCaseOption": "Crea un cas no",
    "newCaseNamePlaceholder": "Nom de cas nou",
    "newCaseDescriptionPlaceholder": "Descripció del cas",
    "nonAdminHelp": "Es crearà un nou cas per a aquestes exportacions."
  },
  "queueing": "S'està fent la cua de l'exportació...",
  "tabs": {
    "export": "Càmera única",
    "multiCamera": "Multicàmera"
  },
  "multiCamera": {
    "timeRange": "Interval de temps",
    "selectFromTimeline": "Selecciona des de la línia de temps",
    "cameraSelection": "Càmeres",
    "cameraSelectionHelp": "Les càmeres amb objectes rastrejats en aquest interval de temps estan preseleccionades",
    "checkingActivity": "Comprovant l'activitat de la càmera...",
    "noCameras": "No hi ha càmeres disponibles",
    "detectionCount_one": "1 objecte rastrejat",
    "detectionCount_many": "{{count}} objectes rastrejats",
    "detectionCount_other": "{{count}} objectes rastrejats",
    "nameLabel": "Nom de l'exportació",
    "namePlaceholder": "Nom base opcional per a aquestes exportacions",
    "queueingButton": "S'estan posant a la cua les exportacions...",
    "exportButton_one": "Exporta 1 càmera",
    "exportButton_many": "Exporta {{count}} càmeres",
    "exportButton_other": "Exporta {{count}} càmeres"
  },
  "multi": {
    "title_one": "Exporta {{count}} ressenyes",
    "title_many": "Exporta {{count}} ressenyes",
    "title_other": "Exporta {{count}} ressenyes",
    "description": "Exporta cada revisió seleccionada. Totes les exportacions s'agruparan en un sol cas.",
    "descriptionNoCase": "Exporta cada revisió seleccionada.",
    "caseNamePlaceholder": "Exporta la revisió - {{date}}",
    "exportButton_one": "Exporta {{count}} ressenyes",
    "exportButton_many": "Exporta {{count}} ressenyes",
    "exportButton_other": "Exporta {{count}} ressenyes",
    "exportingButton": "S'està exportant...",
    "toast": {
      "started_one": "S'ha iniciat l'exportació 1. Obrint el cas ara.",
      "started_many": "S'han iniciat {{count}} exportacions. Obrint el cas ara.",
      "started_other": "S'han iniciat {{count}} exportacions. Obrint el cas ara.",
      "startedNoCase_one": "S'ha iniciat l'exportació 1.",
      "startedNoCase_many": "S'han iniciat {{count}} exportacions.",
      "startedNoCase_other": "S'han iniciat {{count}} exportacions.",
      "partial": "S'han iniciat {{successful}} de {{total}} exportacions. Ha fallat: {{failedItems}}",
      "failed": "No s'han pogut iniciar {{total}} exportacions. Ha fallat: {{failedItems}}"
    }
    "placeholder": "Selecciona un cas"
  }
},
"streaming": {
@ -177,14 +116,6 @@
      "success": "Els enregistraments de vídeo associats als elements de revisió seleccionats s’han suprimit correctament.",
      "error": "No s'ha pogut suprimir: {{error}}"
    }
  },
  "shareTimestamp": {
    "label": "Comparteix la marca horària",
    "title": "Comparteix la marca horària",
    "description": "Comparteix un URL amb marca horària de la posició actual del jugador o tria una marca horària personalitzada. Tingueu en compte que aquest no és un URL de compartició pública i només és accessible per als usuaris amb accés a Frigate i aquesta càmera.",
    "custom": "Marca horària personalitzada",
    "button": "Comparteix l'URL de la marca horària",
    "shareTitle": "Marca de temps de revisió de Frigate: {{camera}}"
  }
},
"imagePicker": {

@ -32,8 +32,7 @@
  "noPreviewFoundFor": "No s'ha trobat cap previsualització per a {{cameraName}}",
  "submitFrigatePlus": {
    "title": "Enviar aquesta imatge a Frigate+?",
    "submit": "Enviar",
    "previewError": "No s'ha pogut carregar la vista prèvia de la instantània. És possible que l'enregistrament no estigui disponible en aquest moment."
    "submit": "Enviar"
  },
  "livePlayerRequiredIOSVersion": "Es requereix iOS 17.1 o superior per a aquest tipus de reproducció en directe.",
  "streamOffline": {

@ -1951,7 +1951,7 @@
  },
  "roles": {
    "label": "Rols",
    "description": "Rols de GenAI (xat, descripcions, incrustacions); un proveïdor per rol."
    "description": "Funcions genAI (eines, visió, incrustacions); un proveïdor per rol."
  },
  "provider_options": {
    "label": "Opcions del proveïdor",

@ -27,9 +27,7 @@
  },
  "documentTitle": "Revisió - Frigate",
  "recordings": {
    "documentTitle": "Enregistraments - Frigate",
    "invalidSharedLink": "No s'ha pogut obrir l'enllaç d'enregistrament amb marques de temps a causa d'un error d'anàlisi.",
    "invalidSharedCamera": "No s'ha pogut obrir l'enllaç d'enregistrament amb marques de temps a causa d'una càmera desconeguda o no autoritzada."
    "documentTitle": "Enregistraments - Frigate"
  },
  "calendarFilter": {
    "last24Hours": "Últimes 24 hores"

@ -248,7 +248,7 @@
  "dialog": {
    "confirmDelete": {
      "title": "Confirmar la supressió",
      "desc": "Suprimir aquest objecte rastrejat elimina la instantània, qualsevol incrustació desada, i qualsevol entrada de detalls de seguiment associada. Les imatges gravades d'aquest objecte seguit en l'historial <em>NO</em> seràn eliminades.<br /><br />Estas segur que vols continuar?"
      "desc": "Eliminant aquest objecte seguit borrarà l'snapshot, qualsevol embedding gravat, i qualsevol detall de seguiment. Les imatges gravades d'aquest objecte seguit en l'historial <em>NO</em> seràn eliminades.<br /><br />Estas segur que vols continuar?"
    },
    "toast": {
      "error": "S'ha produït un error en suprimir aquest objecte rastrejat: {{errorMessage}}"
@ -289,10 +289,7 @@
    "zones": "Zones",
    "ratio": "Ràtio",
    "area": "Àrea",
    "score": "Puntuació",
    "computedScore": "Puntuació calculada",
    "topScore": "Puntuació superior",
    "toggleAdvancedScores": "Commuta les puntuacions avançades"
    "score": "Puntuació"
  }
},
"annotationSettings": {
Some files were not shown because too many files have changed in this diff.