Merge branch 'dev' into feature/share-review-timestamp

0x464e 2026-03-24 18:33:06 +02:00
commit 20494333f6
No known key found for this signature in database
GPG Key ID: E6D221DF6CBFBFFA
382 changed files with 29408 additions and 4486 deletions

@ -1,17 +1,17 @@
_Please read the [contributing guidelines](https://github.com/blakeblackshear/frigate/blob/dev/CONTRIBUTING.md) before submitting a PR._
## Proposed change
<!--
Thank you!
Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc.
If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
Frigate's development goals.
Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc. that are
made in this pull request.
-->
## Type of change
- [ ] Dependency upgrade
@ -26,6 +26,44 @@
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
## For new features
<!--
Every new feature adds scope that maintainers must test, maintain, and support long-term.
We try to be thoughtful about what we take on, and sometimes that means saying no to
good code if the feature isn't the right fit — or saying yes to something we weren't sure
about. These calls are sometimes subjective, and we won't always get them right. We're
happy to discuss and reconsider.
Linking to an existing feature request or discussion with community interest helps us
understand demand, but a great idea is a great idea even without a crowd behind it.
You can delete this section for bugfixes and non-feature changes.
-->
- [ ] There is an existing feature request or discussion with community interest for this change.
- Link:
## AI disclosure
<!--
We welcome contributions that use AI tools, but we need to understand your relationship
with the code you're submitting. See our AI usage policy in CONTRIBUTING.md for details.
Be honest — this won't disqualify your PR. Trust matters more than method.
-->
- [ ] No AI tools were used in this PR.
- [ ] AI tools were used in this PR. Details below:
**AI tool(s) used** (e.g., Claude, Copilot, ChatGPT, Cursor):
**How AI was used** (e.g., code generation, code review, debugging, documentation):
**Extent of AI involvement** (e.g., generated entire implementation, assisted with specific functions, suggested fixes):
**Human oversight**: Describe what manual review, testing, and validation you performed on the AI-generated portions.
## Checklist
<!--
@ -35,5 +73,6 @@
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I can explain every line of code in this PR if asked.
- [ ] UI changes including text have used i18n keys and have been added to the `en` locale.
- [ ] The code has been formatted using Ruff (`ruff format frigate`)

@ -27,6 +27,9 @@ jobs:
- name: Lint
run: npm run lint
working-directory: ./web
- name: Check i18n keys
run: npm run i18n:extract:ci
working-directory: ./web
web_test:
name: Web - Test

.vscode/launch.json vendored

@ -6,6 +6,23 @@
"type": "debugpy",
"request": "launch",
"module": "frigate"
},
{
"type": "editor-browser",
"request": "launch",
"name": "Vite: Launch in integrated browser",
"url": "http://localhost:5173"
},
{
"type": "editor-browser",
"request": "launch",
"name": "Nginx: Launch in integrated browser",
"url": "http://localhost:5000"
},
{
"type": "editor-browser",
"request": "attach",
"name": "Attach to integrated browser"
}
]
}

CONTRIBUTING.md (new file)

@ -0,0 +1,140 @@
# Contributing to Frigate
Thank you for your interest in contributing to Frigate. This document covers the expectations and guidelines for contributions. Please read it before submitting a pull request.
## Before you start
### Bugfixes
If you've found a bug and want to fix it, go for it. Link to the relevant issue in your PR if one exists, or describe the bug in the PR description.
### New features
Every new feature adds scope that the maintainers must test, maintain, and support long-term. Before writing code for a new feature:
1. **Check for existing discussion.** Search [feature requests](https://github.com/blakeblackshear/frigate/issues) and [discussions](https://github.com/blakeblackshear/frigate/discussions) to see if it's been proposed or discussed. Pinned feature requests are on our radar — we plan to get to them, but we don't maintain a public roadmap or timeline. Check in with us first if you have interest in contributing to one.
2. **Start a discussion or feature request first.** This helps ensure your idea aligns with Frigate's direction before you invest time building it. Community interest in a feature request helps us gauge demand, though a great idea is a great idea even without a crowd behind it.
3. **Be open to "no".** We try to be thoughtful about what we take on, and sometimes that means saying no to good code if the feature isn't the right fit for the project. These calls are sometimes subjective, and we won't always get them right. We're happy to discuss and reconsider.
## AI usage policy
AI tools are a reality of modern development and we're not opposed to their use. But we need to understand your relationship with the code you're submitting. The more AI was involved, the more important it is that you've genuinely reviewed, tested, and understood what it produced.
### Requirements when AI is used
If AI is used to generate any portion of the code, contributors must adhere to the following requirements:
1. **Explicitly disclose the manner in which AI was employed.** The PR template asks for this. Be honest — this won't automatically disqualify your PR. We'd rather have an honest disclosure than find out later. Trust matters more than method.
2. **Perform a comprehensive manual review prior to submitting the pull request.** Don't submit code you haven't read carefully and tested locally.
3. **Be prepared to explain every line of code you submit when asked by a maintainer.** If you can't explain why something works the way it does, you're not ready to submit it.
4. **It is strictly prohibited to use AI to write your posts for you** (bug reports, feature requests, pull request descriptions, GitHub discussions, responding to humans, etc.). We need to hear from _you_, not your AI assistant. These are the spaces where we build trust and understanding with contributors, and that only works if we're talking to each other.
### Established contributors
Contributors with a long history of thoughtful, quality contributions to Frigate have earned trust through that track record. The level of scrutiny we apply to AI usage naturally reflects that trust. This isn't a formal exemption — it's just how trust works. If you've been around, we know how you think and how you work. If you're new, we're still getting to know you, and clear disclosure helps build that relationship.
### What this means in practice
We're not trying to gatekeep how you write code. Use whatever tools make you productive. But there's a difference between using AI as a tool to implement something you understand and handing a feature request to an AI and submitting whatever comes back. The former is fine. The latter creates maintenance risk for the project.
Some honest context: when we review a PR, we're not just evaluating whether the code works today. We're evaluating whether we can maintain it, debug it, and extend it long-term — often without the original author's involvement. Code that the author doesn't deeply understand is code that nobody understands, and that's a liability.
## Pull request guidelines
### Before submitting
- **Search for existing PRs** to avoid duplicating effort.
- **Test your changes locally.** Your PR cannot be merged unless tests pass.
- **Format your code.** Run `ruff format frigate` for Python and `npm run prettier:write` from the `web/` directory for frontend changes.
- **Run the linter.** Run `ruff check frigate` for Python and `npm run lint` from `web/` for frontend.
- **One concern per PR.** Don't combine unrelated changes. A bugfix and a new feature should be separate PRs.
### What we look for in review
- **Does it work?** Tested locally, tests pass, no regressions.
- **Is it maintainable?** Clear code, appropriate complexity, good separation of concerns.
- **Does it fit?** Consistent with Frigate's architecture and design philosophy.
- **Is it scoped well?** Solves the stated problem without unnecessary additions.
### After submitting
- Be responsive to review feedback. We may ask for changes.
- Expect honest, direct feedback. We try to be respectful, but we also try to be efficient.
- If your PR goes stale, rebase it on the latest `dev` branch.
## Coding standards
### Python (backend)
- **Python** — use modern language features (type hints, pattern matching, f-strings, dataclasses)
- **Formatting**: Ruff (configured in `pyproject.toml`)
- **Linting**: Ruff
- **Testing**: `python3 -u -m unittest`
- **Logging**: Use module-level `logger = logging.getLogger(__name__)` with lazy formatting
- **Async**: All external I/O must be async. No blocking calls in async functions.
- **Error handling**: Use specific exception types. Keep try blocks minimal.
- **Language**: American English for all code, comments, and documentation
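As a minimal sketch of the logging and error-handling conventions above (the function and values are hypothetical, not Frigate code):

```python
import logging

# Module-level logger, per the convention above
logger = logging.getLogger(__name__)


def read_score(raw: str) -> float:
    """Parse and clamp a detection score, keeping the try block minimal."""
    try:
        score = float(raw)  # only the risky call sits inside the try block
    except ValueError:  # a specific exception type, never a bare except
        logger.warning("Invalid score value: %s", raw)  # lazy %s formatting
        return 0.0
    return min(max(score, 0.0), 1.0)
```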
### TypeScript/React (frontend)
- **Linting**: ESLint (`npm run lint` from `web/`)
- **Formatting**: Prettier (`npm run prettier:write` from `web/`)
- **Type safety**: TypeScript strict mode. Avoid `any`.
- **i18n**: All user-facing strings must use `react-i18next`. Never hardcode display text in components. Add English strings to the appropriate files in `web/public/locales/en/`.
- **Components**: Use Radix UI/shadcn primitives and TailwindCSS with the `cn()` utility.
### Development commands
```bash
# Python
python3 -u -m unittest # Run all tests
python3 -u -m unittest frigate.test.test_ffmpeg_presets # Run specific test
ruff format frigate # Format
ruff check frigate # Lint
# Frontend (from web/ directory)
npm run build # Build
npm run lint # Lint
npm run lint:fix # Lint + fix
npm run prettier:write # Format
```
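A backend test runnable by the `unittest` commands above follows the stdlib shape (the class name and assertion here are hypothetical; real tests live under `frigate/test/`):

```python
import unittest


class TestScores(unittest.TestCase):
    """Minimal shape of a backend test module."""

    def test_clamp(self):
        # clamp 1.5 into the [0.0, 1.0] range
        self.assertEqual(min(max(1.5, 0.0), 1.0), 1.0)
```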
## Project structure
```
frigate/ # Python backend
api/ # FastAPI route handlers
config/ # Configuration parsing and validation
detectors/ # Object detection backends
events/ # Event management and storage
test/ # Backend tests
util/ # Shared utilities
web/ # React/TypeScript frontend
src/
api/ # API client functions
components/ # Reusable components
hooks/ # Custom React hooks
pages/ # Route components
types/ # TypeScript type definitions
views/ # Complex view components
docker/ # Docker build files
docs/ # Documentation site
migrations/ # Database migrations
```
## Translations
Frigate uses [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) for managing language translations. If you'd like to help translate Frigate into your language:
1. Visit the [Frigate project on Weblate](https://hosted.weblate.org/projects/frigate-nvr/).
2. Create an account or log in.
3. Browse the available languages and select the one you'd like to contribute to, or request a new language.
4. Translate strings directly in the Weblate interface — no code changes or pull requests needed.
Translation contributions through Weblate are automatically synced to the repository. Please do not submit pull requests for translation changes — use Weblate instead so that translations are properly tracked and coordinated.
## Resources
- [Documentation](https://docs.frigate.video)
- [Discussions, Support, and Bug Reports](https://github.com/blakeblackshear/frigate/discussions)
- [Feature Requests](https://github.com/blakeblackshear/frigate/issues)

@ -52,7 +52,7 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2026-03-19-13-03/ffmpeg-n7.1.3-43-g5a1f107b4c-linux64-gpl-7.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
fi
@ -64,7 +64,7 @@ if [[ "${TARGETARCH}" == "arm64" ]]; then
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2026-03-19-13-03/ffmpeg-n7.1.3-43-g5a1f107b4c-linuxarm64-gpl-7.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
fi

@ -55,7 +55,7 @@ function setup_homekit_config() {
if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty config file for HomeKit..."
echo '{}' > "${config_path}"
: > "${config_path}"
fi
# Convert YAML to JSON for jq processing
@ -65,23 +65,25 @@ function setup_homekit_config() {
return 0
}
# Use jq to filter and keep only the homekit section
local cleaned_json="/tmp/cache/homekit_cleaned.json"
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
echo '{}' > "${cleaned_json}"
}
# Use jq to extract the homekit section, if it exists
local homekit_json
homekit_json=$(jq '
if has("homekit") then {homekit: .homekit} else null end
' "${temp_json}" 2>/dev/null) || homekit_json="null"
# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo '{}' > "${config_path}"
}
# If no homekit section, write an empty config file
if [[ "${homekit_json}" == "null" ]]; then
: > "${config_path}"
else
# Convert homekit JSON back to YAML and write to the config file
echo "${homekit_json}" | yq eval -P - > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
: > "${config_path}"
}
fi
# Clean up temp files
rm -f "${temp_json}" "${cleaned_json}"
rm -f "${temp_json}"
}
set_libva_version

@ -59,12 +59,14 @@ ARG ROCM
# Copy HIP headers required for MIOpen JIT (BuildHip) / HIPRTC at runtime
COPY --from=rocm /opt/rocm-${ROCM}/include/ /opt/rocm-${ROCM}/include/
COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
# Copy MIOpen database files for gfx10xx and gfx11xx only (RDNA2/RDNA3)
# Copy MIOpen database files for gfx10xx, gfx11xx, and gfx12xx only (RDNA2/RDNA3/RDNA4)
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx10* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx11* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx and gfx11xx only
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx12* /opt/rocm-$ROCM/share/miopen/db/
# Copy rocBLAS library files for gfx10xx, gfx11xx, and gfx12xx only
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx10* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx11* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*gfx12* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
#######################################################################

@ -1,18 +1,18 @@
# Nvidia ONNX Runtime GPU Support
--extra-index-url 'https://pypi.nvidia.com'
cython==3.0.*; platform_machine == 'x86_64'
nvidia-cuda-cupti-cu12==12.9.79; platform_machine == 'x86_64'
nvidia-cublas-cu12==12.9.1.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12==9.19.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.4.1.*; platform_machine == 'x86_64'
nvidia-curand-cu12==10.3.10.*; platform_machine == 'x86_64'
nvidia-cuda-nvcc-cu12==12.9.86; platform_machine == 'x86_64'
nvidia-cuda-nvrtc-cu12==12.9.86; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12==12.9.79; platform_machine == 'x86_64'
nvidia-cusolver-cu12==11.7.5.*; platform_machine == 'x86_64'
nvidia-cusparse-cu12==12.5.10.*; platform_machine == 'x86_64'
nvidia-nccl-cu12==2.29.7; platform_machine == 'x86_64'
nvidia-nvjitlink-cu12==12.9.86; platform_machine == 'x86_64'
nvidia-cuda-cupti-cu12==12.8.90; platform_machine == 'x86_64'
nvidia-cublas-cu12==12.8.4.1; platform_machine == 'x86_64'
nvidia-cudnn-cu12==9.8.0.87; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.3.3.83; platform_machine == 'x86_64'
nvidia-curand-cu12==10.3.9.90; platform_machine == 'x86_64'
nvidia-cuda-nvcc-cu12==12.8.93; platform_machine == 'x86_64'
nvidia-cuda-nvrtc-cu12==12.8.93; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12==12.8.90; platform_machine == 'x86_64'
nvidia-cusolver-cu12==11.7.3.90; platform_machine == 'x86_64'
nvidia-cusparse-cu12==12.5.8.93; platform_machine == 'x86_64'
nvidia-nccl-cu12==2.26.2.post1; platform_machine == 'x86_64'
nvidia-nvjitlink-cu12==12.8.93; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.24.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

@ -44,13 +44,21 @@ go2rtc:
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container, like within Home Assistant OS.
This section can be used to set environment variables for those unable to modify the environment of the container, like within Home Assistant OS. Docker users should set environment variables in their `docker run` command (`-e FRIGATE_MQTT_PASSWORD=secret`) or `docker-compose.yml` file (`environment:` section) instead. Note that values set here are stored in plain text in your config file, so if the goal is to keep credentials out of your configuration, use Docker environment variables or Docker secrets instead.
Variables prefixed with `FRIGATE_` can be referenced in config fields that support environment variable substitution (such as MQTT host and credentials, camera stream URLs, and ONVIF host and credentials) using the `{FRIGATE_VARIABLE_NAME}` syntax.
Example:
```yaml
environment_vars:
VARIABLE_NAME: variable_value
FRIGATE_MQTT_USER: my_mqtt_user
FRIGATE_MQTT_PASSWORD: my_mqtt_password
mqtt:
host: "{FRIGATE_MQTT_HOST}"
user: "{FRIGATE_MQTT_USER}"
password: "{FRIGATE_MQTT_PASSWORD}"
```
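For comparison, Docker users would set the same variables outside the config file; a `docker-compose.yml` sketch (the service name and values are placeholders):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      - FRIGATE_MQTT_USER=my_mqtt_user
      - FRIGATE_MQTT_PASSWORD=my_mqtt_password
```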
#### TensorFlow Thread Configuration

@ -86,7 +86,7 @@ Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET`
2. A file named `FRIGATE_JWT_SECRET` in the directory specified by the `CREDENTIALS_DIRECTORY` environment variable (defaults to the Docker Secrets directory: `/run/secrets/`)
3. A `jwt_secret` option from the Home Assistant Add-on options
3. A `jwt_secret` option from the Home Assistant App options
4. A `.jwt_secret` file in the config directory
If no secret is found on startup, Frigate generates one and stores it in a `.jwt_secret` file in the config directory.
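Given the lookup order above, a Docker secret is one way to supply the token secret; a `docker-compose.yml` sketch (the secret file path is a placeholder):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    secrets:
      - FRIGATE_JWT_SECRET # mounted at /run/secrets/FRIGATE_JWT_SECRET

secrets:
  FRIGATE_JWT_SECRET:
    file: ./jwt_secret.txt
```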
@ -232,7 +232,7 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust
### Role Configuration Example
```yaml
```yaml {11-16}
cameras:
front_door:
# ... camera config

@ -24,7 +24,7 @@ A custom icon can be added to the birdseye background by providing a 180x180 ima
If you want to include a camera in Birdseye view only for specific circumstances, or just don't include it at all, the Birdseye setting can be set at the camera level.
```yaml
```yaml {8-10,12-14}
# Include all cameras by default in Birdseye view
birdseye:
enabled: True
@ -48,6 +48,7 @@ By default birdseye shows all cameras that have had the configured activity in t
```yaml
birdseye:
enabled: True
# highlight-next-line
inactivity_threshold: 15
```
@ -78,9 +79,11 @@ birdseye:
cameras:
front:
birdseye:
# highlight-next-line
order: 1
back:
birdseye:
# highlight-next-line
order: 2
```
@ -92,7 +95,7 @@ It is possible to limit the number of cameras shown on birdseye at one time. Whe
For example, this can be configured to only show the most recently active camera.
```yaml
```yaml {3-4}
birdseye:
enabled: True
layout:
@ -103,7 +106,7 @@ birdseye:
By default birdseye tries to fit 2 cameras in each row and then double in size until a suitable layout is found. The scaling can be configured with a value between 1.0 and 5.0 depending on use case.
```yaml
```yaml {3-4}
birdseye:
enabled: True
layout:

@ -23,6 +23,7 @@ Some cameras support h265 with different formats, but Safari only supports the a
cameras:
h265_cam: # <------ Doesn't matter what the camera is called
ffmpeg:
# highlight-next-line
apple_compatibility: true # <- Adds compatibility with macOS and iPhone
```
@ -30,7 +31,7 @@ cameras:
Note that mjpeg cameras require encoding the video into h264 for the record and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
```yaml
```yaml {3,10}
go2rtc:
streams:
mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
@ -96,6 +97,7 @@ This camera is H.265 only. To be able to play clips on some devices (like MacOs
cameras:
annkec800: # <------ Name the camera
ffmpeg:
# highlight-next-line
apple_compatibility: true # <- Adds compatibility with macOS and iPhone
output_args:
record: preset-record-generic-audio-aac
@ -274,7 +276,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's
- In your Frigate Configuration File, add the go2rtc stream and roles as appropriate:
```
```yaml {4,11-12}
go2rtc:
streams:
usb_camera:

@ -66,7 +66,7 @@ Not every PTZ supports ONVIF, which is the standard protocol Frigate uses to com
Add the onvif section to your camera in your configuration file:
```yaml
```yaml {4-8}
cameras:
back:
ffmpeg: ...

@ -7,11 +7,11 @@ Object classification allows you to train a custom MobileNetV2 classification mo
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Object classification models are lightweight and run very fast on CPU.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
A CPU with AVX + AVX2 instructions is required for training and inference.
## Classes
@ -27,7 +27,6 @@ For object classification:
### Classification Type
- **Sub label**:
- Applied to the objects `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
@ -119,6 +118,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
logger:
default: info
logs:
# highlight-next-line
frigate.data_processing.real_time.custom_classification: debug
```

@ -7,11 +7,11 @@ State classification allows you to train a custom MobileNetV2 classification mod
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
State classification models are lightweight and run very fast on CPU.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
A CPU with AVX + AVX2 instructions is required for training and inference.
## Classes
@ -85,6 +85,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
logger:
default: info
logs:
# highlight-next-line
frigate.data_processing.real_time.custom_classification: debug
```

@ -32,6 +32,8 @@ All of these features run locally on your system.
## Minimum System Requirements
A CPU with AVX + AVX2 instructions is required to run Face Recognition.
The `small` model is optimized for efficiency and runs on the CPU; most CPUs should run the model efficiently.
The `large` model is optimized for accuracy; an integrated or discrete GPU / NPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
@ -143,17 +145,14 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `face` is being detected along with `person`.
- You may need to adjust the `min_score` for the `face` object if faces are not being detected.
If you are **not** using a Frigate+ or `face` detecting model:
- Check your `detect` stream resolution and ensure it is sufficiently high enough to capture face details on `person` objects.
- You may need to lower your `detection_threshold` if faces are not being detected.
2. Any detected faces will then be _recognized_.
- Make sure you have trained at least one face per the recommendations above.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).

@ -187,7 +187,7 @@ genai:
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:
```
```yaml {4,5}
genai:
provider: gemini
...
@ -220,6 +220,29 @@ genai:
model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
:::tip
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml {5,6}
genai:
provider: openai
base_url: http://your-llama-server
model: your-model-name
provider_options:
context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
:::
### Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.

@ -80,6 +80,7 @@ By default, review summaries use preview images (cached preview frames) which ha
review:
genai:
enabled: true
# highlight-next-line
image_source: recordings # Options: "preview" (default) or "recordings"
```
@ -104,7 +105,7 @@ If recordings are not available for a given time period, the system will automat
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
```yaml
```yaml {4,5}
review:
genai:
enabled: true
@ -116,7 +117,7 @@ review:
By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:
```yaml
```yaml {4}
review:
genai:
enabled: true

@ -10,6 +10,7 @@ import CommunityBadge from '@site/src/components/CommunityBadge';
It is highly recommended to use an integrated or discrete GPU for hardware accelerated video decoding in Frigate.
Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg. To verify that hardware acceleration is working:
- Check the logs: A message will either say that hardware acceleration was automatically detected, or there will be a warning that no hardware acceleration was automatically detected
- If hardware acceleration is specified in the config, verification can be done by ensuring the logs are free from errors. There is no CPU fallback for hardware acceleration.
@ -67,7 +68,7 @@ Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video
:::note
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `config.yml` for HA App users](advanced.md#environment_vars).
See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/000005505/processors.html) to figure out what generation your CPU is.
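For example, the environment variable can be set in `docker-compose.yml` like this (a sketch; the service name is assumed):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      - LIBVA_DRIVER_NAME=i965
```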
@ -116,12 +117,13 @@ services:
frigate:
...
image: ghcr.io/blakeblackshear/frigate:stable
# highlight-next-line
privileged: true
```
##### Docker Run CLI - Privileged
```bash
```bash {4}
docker run -d \
--name frigate \
...
@ -135,7 +137,7 @@ Only recent versions of Docker support the `CAP_PERFMON` capability. You can tes
##### Docker Compose - CAP_PERFMON
```yaml
```yaml {5,6}
services:
frigate:
...
@ -146,7 +148,7 @@ services:
##### Docker Run CLI - CAP_PERFMON
```bash
```bash {4}
docker run -d \
--name frigate \
...
@ -188,7 +190,7 @@ Frigate can utilize modern AMD integrated GPUs and AMD GPUs to accelerate video
### Configuring Radeon Driver
You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).
You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA App users](advanced.md#environment_vars).
### Via VAAPI
@ -213,7 +215,7 @@ Additional configuration is needed for the Docker container to be able to access
#### Docker Compose - Nvidia GPU
```yaml
```yaml {5-12}
services:
frigate:
...
@ -230,7 +232,7 @@ services:
#### Docker Run CLI - Nvidia GPU
```bash
```bash {4}
docker run -d \
--name frigate \
...
@ -292,7 +294,7 @@ These instructions were originally based on the [Jellyfin documentation](https:/
## Raspberry Pi 3/4
Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
If you are using the HA App, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
```yaml
# if you want to decode a h264 stream
@ -309,7 +311,7 @@ ffmpeg:
If running Frigate through Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker Compose add:
```yaml
```yaml {4-5}
services:
frigate:
...
@ -319,7 +321,7 @@ services:
Or with `docker run`:
```bash
```bash {4}
docker run -d \
--name frigate \
...
@ -351,7 +353,7 @@ You will need to use the image with the nvidia container runtime:
### Docker Run CLI - Jetson
```bash
```bash {3}
docker run -d \
...
--runtime nvidia
@ -360,7 +362,7 @@ docker run -d \
### Docker Compose - Jetson
```yaml
```yaml {5}
services:
frigate:
...
@ -451,14 +453,14 @@ Restarting ffmpeg...
you should try to upgrade to FFmpeg 7. This can be done using this config option:
```
```yaml
ffmpeg:
path: "7.0"
```
You can set this option globally to use FFmpeg 7 for all cameras or on camera level to use it only for specific cameras. Do not confuse this option with:
```
```yaml
cameras:
name:
ffmpeg:
@ -480,7 +482,7 @@ Make sure to follow the [Synaptics specific installation instructions](/frigate/
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
```yaml {2}
ffmpeg:
hwaccel_args: -c:v h264_v4l2m2m
input_args: preset-rtsp-restream
View File
@ -3,7 +3,7 @@ id: index
title: Frigate Configuration
---
For Home Assistant Add-on installations, the config file should be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running. See the list of directories [here](#accessing-add-on-config-dir).
For Home Assistant App installations, the config file should be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate App you are running. See the list of directories [here](#accessing-app-config-dir).
For all other installation types, the config file should be mapped to `/config/config.yml` inside the container.
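For Docker installs, that mapping is declared in your compose file. A minimal sketch (the host path `/path/to/your/config` is a placeholder for your actual config directory):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      # map the host config directory to /config inside the container
      - /path/to/your/config:/config
```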
@ -25,11 +25,11 @@ cameras:
- detect
```
## Accessing the Home Assistant Add-on configuration directory {#accessing-add-on-config-dir}
## Accessing the Home Assistant App configuration directory {#accessing-app-config-dir}
When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running.
When running Frigate through the HA App, the Frigate `/config` directory is mapped to `/addon_configs/<addon_directory>` in the host, where `<addon_directory>` is specific to the variant of the Frigate App you are running.
| Add-on Variant | Configuration directory |
| App Variant | Configuration directory |
| -------------------------- | ----------------------------------------- |
| Frigate | `/addon_configs/ccab4aaf_frigate` |
| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` |
@ -38,11 +38,11 @@ When running Frigate through the HA Add-on, the Frigate `/config` directory is m
**Whenever you see `/config` in the documentation, it refers to this directory.**
If for example you are running the standard Add-on variant and use the [VS Code Add-on](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file.
If for example you are running the standard App variant and use the [VS Code App](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file.
## VS Code Configuration Schema
VS Code supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VS Code and Frigate as an Add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VS Code on another machine.
VS Code supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VS Code and Frigate as an App, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VS Code on another machine.
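For example, the very first lines of the config file would then look like this (the `mqtt` section below is purely illustrative):

```yaml
# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json
mqtt:
  host: mqtt.server.com
```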
## Environment Variable Substitution
@ -50,6 +50,7 @@ Frigate supports the use of environment variables starting with `FRIGATE_` **onl
```yaml
mqtt:
host: "{FRIGATE_MQTT_HOST}"
user: "{FRIGATE_MQTT_USER}"
password: "{FRIGATE_MQTT_PASSWORD}"
```
@ -60,7 +61,7 @@ mqtt:
```yaml
onvif:
host: 10.0.10.10
host: "192.168.1.12"
port: 8000
user: "{FRIGATE_RTSP_USER}"
password: "{FRIGATE_RTSP_PASSWORD}"
@ -82,10 +83,10 @@ genai:
Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values.
### Raspberry Pi Home Assistant Add-on with USB Coral
### Raspberry Pi Home Assistant App with USB Coral
- Single camera with 720p, 5fps stream for detect
- MQTT connected to the Home Assistant Mosquitto Add-on
- MQTT connected to the Home Assistant Mosquitto App
- Hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
View File
@ -30,7 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM and a CPU with AVX + AVX2 instructions are required.
## Configuration
@ -43,7 +43,7 @@ lpr:
Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You should disable it for specific cameras at the camera level if you don't want to run LPR on cars on those cameras:
```yaml
```yaml {4,5}
cameras:
garage:
...
@ -375,7 +375,6 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Start with a simplified LPR config.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
@ -386,31 +385,28 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
default: info
logs:
# highlight-next-line
frigate.data_processing.common.license_plate: debug
```
3. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected.
- View MQTT messages for `frigate/events` to verify detected plates.
- You may need to adjust your `min_score` and/or `threshold` for the `license_plate` object if your plates are not being detected.
If you are **not** using a Frigate+ or `license_plate` detecting model:
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
4. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).
View File
@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
@ -77,7 +77,7 @@ Configure the `streams` option with a "friendly name" for your stream followed b
Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the `streams` configuration, only go2rtc stream names.
```yaml
```yaml {3,6,8,25-29}
go2rtc:
streams:
test_cam:
@ -114,9 +114,9 @@ cameras:
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:
- For external access, over the internet, setup your router to forward port `8555` to port `8555` on the Frigate device, for both TCP and UDP.
- For internal/local access, unless you are running through the HA Add-on, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate:
- For internal/local access, unless you are running through the HA App, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate:
```yaml title="config.yml"
```yaml title="config.yml" {4-7}
go2rtc:
streams:
test_cam: ...
@ -128,13 +128,13 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
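Putting the notes above together, a manually defined candidate list could be sketched like this (both IP addresses are placeholders for your own LAN and Tailscale addresses):

```yaml
go2rtc:
  webrtc:
    candidates:
      - 192.168.1.10:8555 # LAN IP of the Frigate host
      - 100.64.0.1:8555 # Tailscale IP of the Frigate host
      - stun:8555
```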
:::tip
This extra configuration may not be required if Frigate has been installed as a Home Assistant Add-on, as Frigate uses the Supervisor's API to generate a WebRTC candidate.
This extra configuration may not be required if Frigate has been installed as a Home Assistant App, as Frigate uses the Supervisor's API to generate a WebRTC candidate.
However, it is recommended if issues occur to define the candidates manually. You should do this if the Frigate Add-on fails to generate a valid candidate. If an error occurs you will see some warnings like the below in the Add-on logs page during the initialization:
However, if issues occur, it is recommended to define the candidates manually. You should do this if the Frigate App fails to generate a valid candidate. If an error occurs, you will see warnings like the ones below in the App logs page during initialization:
```log
[WARN] Failed to get IP address from supervisor
@ -154,7 +154,7 @@ If not running in host mode, port 8555 will need to be mapped for the container:
docker-compose.yml
```yaml
```yaml {4-6}
services:
frigate:
...
@ -222,34 +222,28 @@ Note that disabling a camera through the config file (`enabled: False`) removes
When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. They indicate playback, codec, or network issues on the client/browser side, not a server-side problem with Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
- **startup**
- What it means: The player failed to initialize or connect to the live stream (network or startup error).
- What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
- Possible console messages from the player code:
- `Error opening MediaSource.`
- `Browser reported a network error.`
- `Max error count ${errorCount} exceeded.` (the numeric value will vary)
- **mse-decode**
- What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
- What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- Possible console messages from the player code:
- `Safari cannot open MediaSource.`
- `Safari reported InvalidStateError.`
- `Safari reported decoding errors.`
- **stalled**
- What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
- What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
- Possible console messages from the player code:
- `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
- `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
@ -270,21 +264,18 @@ When your browser runs into problems playing back your camera streams, it will l
If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again.
Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. These errors may include:
- Network issues (e.g., MSE or WebRTC network connection problems).
- Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers).
- Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg.
- Browser compatibility problems (e.g., iOS Safari limitations with MSE).
To view browser console logs:
1. Open the Frigate Live View in your browser.
2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab).
3. Reproduce the error (e.g., load a problematic stream or simulate network issues).
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured).
@ -324,9 +315,7 @@ When your browser runs into problems playing back your camera streams, it will l
To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.
Example: Resolutions from two streams
- Mismatched (may cause aspect ratio switching on the dashboard):
- Live/go2rtc stream: 1920x1080 (16:9)
- Detect stream: 640x352 (~1.82:1, not 16:9)
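By contrast, a matched pair keeps both streams at the same aspect ratio; for example (camera name and values illustrative):

```yaml
cameras:
  front_door:
    detect:
      # 640x360 is 16:9, matching a 1920x1080 live/go2rtc stream
      width: 640
      height: 360
```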
View File
@ -166,7 +166,7 @@ YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are s
:::tip
**Frigate+ Users:** Follow the [instructions](../integrations/plus#use-models) to set a model ID in your config file.
**Frigate+ Users:** Follow the [instructions](/integrations/plus#use-models) to set a model ID in your config file.
:::
@ -577,7 +577,7 @@ $ docker run --device=/dev/kfd --device=/dev/dri \
When using Docker Compose:
```yaml
```yaml {4-6}
services:
frigate:
...
@ -608,7 +608,7 @@ $ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \
When using Docker Compose:
```yaml
```yaml {4-5}
services:
frigate:
...
@ -1514,7 +1514,8 @@ model:
model_type: yolo-generic
width: 320
height: 320
tensor_format: bgr
input_dtype: int
input_pixel_format: bgr
labelmap_path: /labelmap/coco-80.txt
```
View File
@ -0,0 +1,188 @@
---
id: profiles
title: Profiles
---
Profiles allow you to define named sets of camera configuration overrides that can be activated and deactivated at runtime without restarting Frigate. This is useful for scenarios like switching between "Home" and "Away" modes, daytime and nighttime configurations, or any situation where you want to quickly change how multiple cameras behave.
## How Profiles Work
Profiles operate as a two-level system:
1. **Profile definitions** are declared at the top level of your config under `profiles`. Each definition has a machine name (the key) and a `friendly_name` for display in the UI.
2. **Camera profile overrides** are declared under each camera's `profiles` section, keyed by the profile name. Only the settings you want to change need to be specified — everything else is inherited from the camera's base configuration.
When a profile is activated, Frigate merges each camera's profile overrides on top of its base config. When the profile is deactivated, all cameras revert to their original settings. Only one profile can be active at a time.
:::info
Profile changes are applied in-memory and take effect immediately — no restart is required. The active profile is persisted across Frigate restarts (stored in the `/config/.active_profile` file).
:::
## Configuration
The easiest way to define profiles is to use the Frigate UI. Profiles can also be configured manually in your configuration file.
### Using the UI
To create and manage profiles from the UI, open **Settings**. From there you can:
1. **Create a profile** — Navigate to **Profiles**. Click the **Add Profile** button, enter a name (and optionally a profile ID).
2. **Configure overrides** — Navigate to a camera configuration section (e.g. Motion detection, Record, Notifications). In the top right, two buttons will appear: choose a camera and a profile from the profile selector to edit overrides for that camera and section. Only the fields you change will be stored as overrides — fields that require a restart are hidden since profiles are applied at runtime. You can click the **Remove Profile Override** button to clear the stored overrides for that camera and section.
3. **Activate a profile** — Use the **Profiles** option in Frigate's main menu to choose a profile. Alternatively, in Settings, navigate to **Profiles**, then choose a profile in the Active Profile dropdown to activate it. The active profile is also shown in the status bar at the bottom of the screen on desktop browsers.
4. **Delete a profile** — Navigate to **Profiles**, then click the trash icon for a profile. This removes the profile definition and all camera overrides associated with it.
### Defining Profiles in YAML
First, define your profiles at the top level of your Frigate config. Every profile name referenced by a camera must be defined here.
```yaml
profiles:
home:
friendly_name: Home
away:
friendly_name: Away
night:
friendly_name: Night Mode
```
### Camera Profile Overrides
Under each camera, add a `profiles` section with overrides for each profile. You only need to include the settings you want to change.
```yaml
cameras:
front_door:
ffmpeg:
inputs:
- path: rtsp://camera:554/stream
roles:
- detect
- record
detect:
enabled: true
record:
enabled: true
profiles:
away:
detect:
enabled: true
notifications:
enabled: true
objects:
track:
- person
- car
- package
review:
alerts:
labels:
- person
- car
- package
home:
detect:
enabled: true
notifications:
enabled: false
objects:
track:
- person
```
### Supported Override Sections
The following camera configuration sections can be overridden in a profile:
| Section | Description |
| ------------------ | ----------------------------------------- |
| `enabled` | Enable or disable the camera entirely |
| `audio` | Audio detection settings |
| `birdseye` | Birdseye view settings |
| `detect` | Object detection settings |
| `face_recognition` | Face recognition settings |
| `lpr` | License plate recognition settings |
| `motion` | Motion detection settings |
| `notifications` | Notification settings |
| `objects` | Object tracking and filter settings |
| `record` | Recording settings |
| `review` | Review alert and detection settings |
| `snapshots` | Snapshot settings |
| `zones` | Zone definitions (merged with base zones) |
:::note
Only the fields you explicitly set in a profile override are applied. All other fields retain their base configuration values. For zones, profile zones are merged with the camera's base zones — any zone defined in the profile will override or add to the base zones.
:::
## Activating Profiles
Profiles can be activated and deactivated from the Frigate UI. Open the Settings cog and select **Profiles** from the submenu to see all defined profiles. From there you can activate any profile or deactivate the current one. The active profile is indicated in the UI so you always know which profile is in effect.
## Example: Home / Away Setup
A common use case is having different detection and notification settings based on whether you are home or away.
```yaml
profiles:
home:
friendly_name: Home
away:
friendly_name: Away
cameras:
front_door:
ffmpeg:
inputs:
- path: rtsp://camera:554/stream
roles:
- detect
- record
detect:
enabled: true
record:
enabled: true
notifications:
enabled: false
profiles:
away:
notifications:
enabled: true
review:
alerts:
labels:
- person
- car
home:
notifications:
enabled: false
indoor_cam:
ffmpeg:
inputs:
- path: rtsp://camera:554/indoor
roles:
- detect
- record
detect:
enabled: false
record:
enabled: false
profiles:
away:
enabled: true
detect:
enabled: true
record:
enabled: true
home:
enabled: false
```
In this example:
- **Away profile**: The front door camera enables notifications and tracks specific alert labels. The indoor camera is fully enabled with detection and recording.
- **Home profile**: The front door camera disables notifications. The indoor camera is completely disabled for privacy.
- **No profile active**: All cameras use their base configuration values.
View File
@ -130,7 +130,7 @@ When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means
To configure the speed-up factor, the frame rate and further custom settings, the configuration parameter `timelapse_args` can be used. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS:
```yaml
```yaml {3-4}
record:
enabled: True
export:
@ -162,6 +162,8 @@ Normal operation may leave small numbers of orphaned files until Frigate's sched
The Maintenance pane in the Frigate UI or an API endpoint `POST /api/media/sync` can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint.
Setting `verbose: true` writes a detailed report of every orphaned file and database entry to `/config/media_sync/<job_id>.txt`. For recordings, the report separates orphaned database entries (DB records whose files are missing from disk) from orphaned files (files on disk with no corresponding database record).
:::warning
This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended.
View File
@ -16,6 +16,8 @@ mqtt:
# Optional: Enable mqtt server (default: shown below)
enabled: True
# Required: host name
# NOTE: MQTT host can be specified with an environment variable or docker secrets that must begin with 'FRIGATE_'.
# e.g. host: '{FRIGATE_MQTT_HOST}'
host: mqtt.server.com
# Optional: port (default: shown below)
port: 1883
@ -616,13 +618,12 @@ record:
# never stored, so setting the mode to "all" here won't bring them back.
mode: motion
# Optional: Configuration for the jpg snapshots written to the clips directory for each tracked object
# Optional: Configuration for the snapshots written to the clips directory for each tracked object
# Timestamp, bounding_box, crop and height settings are applied by default to API requests for snapshots.
# NOTE: Can be overridden at the camera level
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
# Optional: Enable writing snapshot images to /media/frigate/clips (default: shown below)
enabled: False
# Optional: save a clean copy of the snapshot image (default: shown below)
clean_copy: True
# Optional: print a timestamp on the snapshots (default: shown below)
timestamp: False
# Optional: draw bounding box on the snapshots (default: shown below)
@ -640,8 +641,8 @@ snapshots:
# Optional: Per object retention days
objects:
person: 15
# Optional: quality of the encoded jpeg, 0-100 (default: shown below)
quality: 70
# Optional: quality of the encoded snapshot image, 0-100 (default: shown below)
quality: 60
# Optional: Configuration for semantic search capability
semantic_search:
@ -950,6 +951,8 @@ cameras:
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
# NOTE: ONVIF user and password can be specified with environment variables or docker secrets
# that must begin with 'FRIGATE_'. e.g. user: '{FRIGATE_ONVIF_USERNAME}'
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@ -1026,6 +1029,49 @@ cameras:
actions:
- notification
# Optional: Named config profiles with partial overrides that can be activated at runtime.
# NOTE: Profile names must be defined in the top-level 'profiles' section.
profiles:
# Required: name of the profile (must match a top-level profile definition)
away:
# Optional: Enable or disable the camera when this profile is active (default: not set, inherits base)
enabled: true
# Optional: Override audio settings
audio:
enabled: true
# Optional: Override birdseye settings
# birdseye:
# Optional: Override detect settings
detect:
enabled: true
# Optional: Override face_recognition settings
# face_recognition:
# Optional: Override lpr settings
# lpr:
# Optional: Override motion settings
# motion:
# Optional: Override notification settings
notifications:
enabled: true
# Optional: Override objects settings
objects:
track:
- person
- car
# Optional: Override record settings
record:
enabled: true
# Optional: Override review settings
review:
alerts:
labels:
- person
- car
# Optional: Override snapshot settings
# snapshots:
# Optional: Override or add zones (merged with base zones)
# zones:
# Optional
ui:
# Optional: Set a timezone to use in the UI (default: use browser local time)
@ -1092,4 +1138,14 @@ camera_groups:
icon: LuCar
# Required: index of this group
order: 0
# Optional: Profile definitions for named config overrides
# NOTE: Profile names defined here can be referenced in camera profiles sections
profiles:
# Required: name of the profile (machine name used internally)
home:
# Required: display name shown in the UI
friendly_name: Home
away:
friendly_name: Away
```
View File
@ -34,7 +34,7 @@ To improve connection speed when using Birdseye via restream you can enable a sm
The go2rtc restream can be secured with RTSP based username / password authentication. Ex:
```yaml
```yaml {2-4}
go2rtc:
rtsp:
username: "admin"
@ -147,6 +147,7 @@ For example:
```yaml
go2rtc:
streams:
# highlight-error-line
my_camera: rtsp://username:$@foo%@192.168.1.100
```
@ -155,6 +156,7 @@ becomes
```yaml
go2rtc:
streams:
# highlight-next-line
my_camera: rtsp://username:$%40foo%25@192.168.1.100
```
View File
@ -71,7 +71,7 @@ To exclude a specific camera from alerts or detections, simply provide an empty
For example, to exclude objects on the camera _gatecamera_ from any detections, include this in your config:
```yaml
```yaml {3-5}
cameras:
gatecamera:
review:
View File
@ -13,7 +13,7 @@ Semantic Search is accessed via the _Explore_ view in the Frigate UI.
Semantic Search works by running a large AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably or at all.
A minimum of 8GB of RAM is required to use Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems.
A minimum of 8GB of RAM is required to use Semantic Search. A CPU with AVX + AVX2 instructions is required to run Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems.
For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
View File
@ -3,7 +3,7 @@ id: snapshots
title: Snapshots
---
Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `<camera>-<id>.jpg`. They are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx)
Frigate can save a snapshot image to `/media/frigate/clips` for each detected object, named `<camera>-<id>-clean.webp`. Snapshots are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx).
Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service.
@ -13,21 +13,19 @@ Snapshots sent via MQTT are configured in the [config file](/configuration) unde
## Frame Selection
Frigate does not save every frame — it picks a single "best" frame for each tracked object and uses it for both the snapshot and clean copy. As the object is tracked across frames, Frigate continuously evaluates whether the current frame is better than the previous best based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. The snapshot is written to disk once tracking ends using whichever frame was determined to be the best.
Frigate does not save every frame. It picks a single "best" frame for each tracked object based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. That best frame is written to disk once tracking ends.
MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`.
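
As a sketch, both knobs live in the camera config (values are illustrative; `best_image_timeout` is the camera-level option named above, and the `mqtt` block holds the per-camera annotation settings):

```yaml
cameras:
  name_of_your_camera:
    # Re-publish the MQTT snapshot if the current best image is older than 120s
    best_image_timeout: 120
    mqtt:
      timestamp: True
      bounding_box: True
```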
## Clean Copy
## Rendering
Frigate can produce up to two snapshot files per event, each used in different places:
Frigate stores a single clean snapshot on disk:
| Version | File | Annotations | Used by |
| --- | --- | --- | --- |
| **Regular snapshot** | `<camera>-<id>.jpg` | Respects your `timestamp`, `bounding_box`, `crop`, and `height` settings | API (`/api/events/<id>/snapshot.jpg`), MQTT (`<camera>/<label>/snapshot`), Explore pane in the UI |
| **Clean copy** | `<camera>-<id>-clean.webp` | Always unannotated — no bounding box, no timestamp, no crop, full resolution | API (`/api/events/<id>/snapshot-clean.webp`), [Frigate+](/plus/first_model) submissions, "Download Clean Snapshot" in the UI |
| API / Use | Result |
| ---------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| Stored file | `<camera>-<id>-clean.webp`, always unannotated |
| `/api/events/<id>/snapshot.jpg` | Starts from the camera's `snapshots` defaults, then applies any query param overrides at request time |
| `/api/events/<id>/snapshot-clean.webp` | Returns the same stored snapshot without annotations |
| [Frigate+](/plus/first_model) submission | Uses the same stored clean snapshot |
MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the clean copy.
The clean copy is required for submitting events to [Frigate+](/plus/first_model) — if you plan to use Frigate+, keep `clean_copy` enabled regardless of your other snapshot settings.
If you are not using Frigate+ and `timestamp`, `bounding_box`, and `crop` are all disabled, the regular snapshot is already effectively clean, so `clean_copy` provides no benefit and only uses additional disk space. You can safely set `clean_copy: False` in this case.
MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the stored event snapshot.
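
For example, the `snapshots` defaults that seed `/api/events/<id>/snapshot.jpg` might look like this (illustrative values; the option names are those listed above):

```yaml
cameras:
  name_of_your_camera:
    snapshots:
      enabled: True
      # Defaults applied when rendering /api/events/<id>/snapshot.jpg;
      # query params can override them at request time
      timestamp: False
      bounding_box: True
      crop: False
```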

View File

@ -20,7 +20,7 @@ tls:
TLS certificates can be mounted at `/etc/letsencrypt/live/frigate` using a bind mount or docker volume.
```yaml
```yaml {3-4}
frigate:
...
volumes:
@ -32,7 +32,7 @@ Within the folder, the private key is expected to be named `privkey.pem` and the
Note that certbot uses symlinks, and those can't be followed by the container unless it has access to the targets as well, so if using certbot you'll also have to mount the `archive` folder for your domain, e.g.:
```yaml
```yaml {3-5}
frigate:
...
volumes:
@ -46,7 +46,7 @@ Frigate automatically compares the fingerprint of the certificate at `/etc/letse
If you issue Frigate valid certificates, you will likely want to configure it to run on port 443 so you can access it without a port number (e.g., `https://your-frigate-domain.com`) by mapping 8971 to 443.
```yaml
```yaml {3-4}
frigate:
...
ports:

View File

@ -22,7 +22,7 @@ To create a zone, follow [the steps for a "Motion mask"](masks.md), but use the
Often you will only want alerts to be created when an object enters areas of interest. This is done using zones along with setting `required_zones`. Let's say you only want an alert created when an object enters your `entire_yard` zone; the config would be:
```yaml
```yaml {6,8}
cameras:
name_of_your_camera:
review:
@ -108,6 +108,7 @@ cameras:
name_of_your_camera:
zones:
sidewalk:
# highlight-next-line
loitering_time: 4 # unit is in seconds
objects:
- person
@ -122,6 +123,7 @@ cameras:
name_of_your_camera:
zones:
front_yard:
# highlight-next-line
inertia: 3
objects:
- person
@ -134,6 +136,7 @@ cameras:
name_of_your_camera:
zones:
driveway_entrance:
# highlight-next-line
inertia: 1
objects:
- car
@ -196,5 +199,6 @@ cameras:
coordinates: ...
distances: ...
inertia: 1
# highlight-next-line
speed_threshold: 20 # unit is in kph or mph, depending on how unit_system is set (see above)
```

View File

@ -17,15 +17,15 @@ From here, follow the guides for:
- [Web Interface](#web-interface)
- [Documentation](#documentation)
### Frigate Home Assistant Add-on
### Frigate Home Assistant App
This repository holds the Home Assistant Add-on, for use with Home Assistant OS and compatible installations. It is the piece that allows you to run Frigate from your Home Assistant Supervisor tab.
This repository holds the Home Assistant App, for use with Home Assistant OS and compatible installations. It is the piece that allows you to run Frigate from your Home Assistant Supervisor tab.
Fork [blakeblackshear/frigate-hass-addons](https://github.com/blakeblackshear/frigate-hass-addons) to your own Github profile, then clone the forked repo to your local machine.
### Frigate Home Assistant Integration
This repository holds the custom integration that allows your Home Assistant installation to automatically create entities for your Frigate instance, whether you are running Frigate as a standalone Docker container or as a [Home Assistant Add-on](#frigate-home-assistant-add-on).
This repository holds the custom integration that allows your Home Assistant installation to automatically create entities for your Frigate instance, whether you are running Frigate as a standalone Docker container or as a [Home Assistant App](#frigate-home-assistant-app).
Fork [blakeblackshear/frigate-hass-integration](https://github.com/blakeblackshear/frigate-hass-integration) to your own GitHub profile, then clone the forked repo to your local machine.
@ -89,6 +89,14 @@ After closing VS Code, you may still have containers running. To close everythin
### Testing
#### Unit Tests
GitHub will execute unit tests on new PRs. You must ensure that all tests pass.
```shell
python3 -u -m unittest
```
#### FFMPEG Hardware Acceleration
The following commands are used inside the container to ensure hardware acceleration is working properly.
@ -125,6 +133,28 @@ ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format
ffmpeg -c:v h264_qsv -re -stream_loop -1 -i https://streams.videolan.org/ffmpeg/incoming/720p60.mp4 -f rawvideo -pix_fmt yuv420p pipe: > /dev/null
```
### Submitting a pull request
Code must be formatted, linted, and type-checked. GitHub runs these checks on pull requests, so it is advised to run them yourself before opening one.
**Formatting**
```shell
ruff format frigate migrations docker *.py
```
**Linting**
```shell
ruff check frigate migrations docker *.py
```
**MyPy Static Typing**
```shell
python3 -u -m mypy --config-file frigate/mypy.ini frigate
```
## Web Interface
### Prerequisites

View File

@ -26,7 +26,7 @@ I may earn a small commission for my endorsement, recommendation, testimonial, o
## Server
My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral, Hailo, or other AI accelerators.
My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU (with AVX + AVX2 instructions) and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral, Hailo, or other AI accelerators.
Note that many of these mini PCs come with Windows pre-installed, and you will need to install Linux according to the [getting started guide](../guides/getting_started.md).
@ -205,7 +205,7 @@ Inference is done with the `onnx` detector type. Speeds will vary greatly depend
| GTX 1070 | s-320: 16 ms | | 320: 14 ms |
| RTX 3050 | t-320: 8 ms s-320: 10 ms s-640: 28 ms | Nano-320: ~ 12 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 | t-320: 6 ms s-320: 8 ms s-640: 25 ms | Nano-320: ~ 9 ms | 320: ~ 8 ms 640: ~ 14 ms |
| RTX 5060 Ti | t-320: 5 ms s-320: 7 ms s-640: 22 ms | Nano-320: ~ 6 ms | |
| RTX 5060 Ti | t-320: 5 ms s-320: 7 ms s-640: 22 ms | Nano-320: ~ 4 ms | |
| RTX A4000 | | | 320: ~ 15 ms |
| Tesla P40 | | | 320: ~ 105 ms |

View File

@ -3,11 +3,13 @@ id: installation
title: Installation
---
Frigate is a Docker container that can be run on any Docker host including as a [Home Assistant Add-on](https://www.home-assistant.io/addons/). Note that the Home Assistant Add-on is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant, whether you are running Frigate as a standalone Docker container or as a Home Assistant Add-on.
import ShmCalculator from '@site/src/components/ShmCalculator'
Frigate is a Docker container that can be run on any Docker host including as a [Home Assistant App](https://www.home-assistant.io/apps/). Note that the Home Assistant App is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant, whether you are running Frigate as a standalone Docker container or as a Home Assistant App.
:::tip
If you already have Frigate installed as a Home Assistant Add-on, check out the [getting started guide](../guides/getting_started#configuring-frigate) to configure Frigate.
If you already have Frigate installed as a Home Assistant App, check out the [getting started guide](../guides/getting_started#configuring-frigate) to configure Frigate.
:::
@ -77,22 +79,9 @@ The default shm size of **128MB** is fine for setups with **2 cameras** detectin
The Frigate container also stores logs in shm, which can take up to **40MB**, so make sure to take this into account in your math as well.
You can calculate the **minimum** shm size for each camera with the following formula using the resolution specified for detect:
<ShmCalculator/>
```console
# Template for one camera without logs, replace <width> and <height>
$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 20 + 270480) / 1048576))'
# Example for 1280x720, including logs
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576 + 40))'
66.63MB
# Example for eight cameras detecting at 1280x720, including logs
$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 8 + 40))'
253MB
```
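
For scripting, the same arithmetic as the calculator can be expressed as a small Python helper (constants taken from the formula above: 1.5 bytes/pixel × 20 frames + 270480 bytes overhead per camera, plus ~40MB for logs):

```python
def min_shm_mb(width: int, height: int, cameras: int = 1, logs_mb: int = 40) -> float:
    """Minimum /dev/shm size in MB for the given detect resolution."""
    per_camera = (width * height * 1.5 * 20 + 270480) / 1048576
    return per_camera * cameras + logs_mb

# One camera detecting at 1280x720, including logs
print(f"{min_shm_mb(1280, 720):.2f}MB")  # 66.63MB
```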
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
The shm size cannot be set per container for Home Assistant Apps. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
## Extra Steps for Specific Hardware
@ -464,6 +453,9 @@ devices:
- /dev/axcl_host
- /dev/ax_mmb_dev
- /dev/msg_userdev
volumes:
- /usr/bin/axcl:/usr/bin/axcl
- /usr/lib/axcl:/usr/lib/axcl
```
If you are using `docker run`, add this option to your command `--device /dev/axcl_host --device /dev/ax_mmb_dev --device /dev/msg_userdev`
@ -543,7 +535,7 @@ The community supported docker image tags for the current stable version are:
- `stable-tensorrt-jp6` - Frigate build optimized for Nvidia Jetson devices running Jetpack 6
- `stable-rk` - Frigate build for SBCs with Rockchip SoC
## Home Assistant Add-on
## Home Assistant App
:::warning
@ -554,7 +546,7 @@ There are important limitations in HA OS to be aware of:
- Separate local storage for media is not yet supported by Home Assistant
- AMD GPUs are not supported because HA OS does not include the mesa driver.
- Intel NPUs are not supported because HA OS does not include the NPU firmware.
- Nvidia GPUs are not supported because addons do not support the Nvidia runtime.
- Nvidia GPUs are not supported because HA Apps do not support the Nvidia runtime.
:::
@ -564,27 +556,27 @@ See [the network storage guide](/guides/ha_network_storage.md) for instructions
:::
Home Assistant OS users can install via the Add-on repository.
Home Assistant OS users can install via the App repository.
1. In Home Assistant, navigate to _Settings_ > _Add-ons_ > _Add-on Store_ > _Repositories_
1. In Home Assistant, navigate to _Settings_ > _Apps_ > _App Store_ > _Repositories_
2. Add `https://github.com/blakeblackshear/frigate-hass-addons`
3. Install the desired variant of the Frigate Add-on (see below)
3. Install the desired variant of the Frigate App (see below)
4. Setup your network configuration in the `Configuration` tab
5. Start the Add-on
5. Start the App
6. Use the _Open Web UI_ button to access the Frigate UI, then click in the _cog icon_ > _Configuration editor_ and configure Frigate to your liking
There are several variants of the Add-on available:
There are several variants of the App available:
| Add-on Variant | Description |
| App Variant | Description |
| -------------------------- | ---------------------------------------------------------- |
| Frigate | Current release with protection mode on |
| Frigate (Full Access) | Current release with the option to disable protection mode |
| Frigate Beta | Beta release with protection mode on |
| Frigate Beta (Full Access) | Beta release with the option to disable protection mode |
If you are using hardware acceleration for ffmpeg, you **may** need to use the _Full Access_ variant of the Add-on. This is because the Frigate Add-on runs in a container with limited access to the host system. The _Full Access_ variant allows you to disable _Protection mode_ and give Frigate full access to the host system.
If you are using hardware acceleration for ffmpeg, you **may** need to use the _Full Access_ variant of the App. This is because the Frigate App runs in a container with limited access to the host system. The _Full Access_ variant allows you to disable _Protection mode_ and give Frigate full access to the host system.
You can also edit the Frigate configuration file through the [VS Code Add-on](https://github.com/hassio-addons/addon-vscode) or similar. In that case, the configuration file will be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate Add-on you are running. See the list of directories [here](../configuration/index.md#accessing-add-on-config-dir).
You can also edit the Frigate configuration file through the [VS Code App](https://github.com/hassio-addons/addon-vscode) or similar. In that case, the configuration file will be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate App you are running. See the list of directories [here](../configuration/index.md#accessing-app-config-dir).
## Kubernetes

View File

@ -34,11 +34,14 @@ For commercial installations it is important to verify the number of supported c
There are many different hardware options for object detection depending on priorities and available hardware. See [the recommended hardware page](./hardware.md#detectors) for more specifics on what hardware is recommended for object detection.
### CPU
Frigate requires a CPU with AVX + AVX2 instructions. Most modern CPUs (post-2011) support AVX and AVX2, but it is generally absent in low-power or budget-oriented processors, particularly older Intel Pentium, Celeron, and Atom-based chips. Specifically, Intel Celeron and Pentium models prior to the 2020 Tiger Lake generation typically lack AVX. Older Intel Xeon models may have AVX, but may lack AVX2.
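
On Linux, the quickest check is whether `/proc/cpuinfo` lists the `avx` and `avx2` flags; a small sketch (the helper name is ours):

```python
def has_avx_avx2(cpuinfo_text: str) -> bool:
    """Return True if a /proc/cpuinfo dump reports both AVX and AVX2."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return {"avx", "avx2"} <= flags
    return False

# On a Linux host:
# print(has_avx_avx2(open("/proc/cpuinfo").read()))
```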
### Storage
Storage is an important consideration when planning a new installation. To get a more precise estimate of your storage requirements, you can use an IP camera storage calculator. Websites like [IPConfigure Storage Calculator](https://calculator.ipconfigure.com/) can help you determine the necessary disk space based on your camera settings.
#### SSDs (Solid State Drives)
SSDs are an excellent choice for Frigate, offering high speed and responsiveness. The older concern that SSDs would quickly "wear out" from constant video recording is largely no longer valid for modern consumer and enterprise-grade SSDs.
@ -71,4 +74,4 @@ While supported, using network-attached storage (NAS) for recordings can introdu
- **Basic Minimum: 4GB RAM**: This is generally sufficient for a very basic Frigate setup with a few cameras and a dedicated object detection accelerator, without running any enrichments. Performance might be tight, especially with higher resolution streams or numerous detections.
- **Minimum for Enrichments: 8GB RAM**: If you plan to utilize Frigate's enrichment features (e.g., facial recognition, license plate recognition, or other AI models that run alongside standard object detection), 8GB of RAM should be considered the minimum. Enrichments require additional memory to load and process their respective models and data.
- **Recommended: 16GB RAM**: For most users, especially those with many cameras (8+) or who plan to heavily leverage enrichments, 16GB of RAM is highly recommended. This provides ample headroom for smooth operation, reduces the likelihood of swapping to disk (which can impact performance), and allows for future expansion.

View File

@ -7,7 +7,7 @@ title: Updating
The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant App, etc.). Below are instructions for the most common setups.
## Before You Begin
@ -67,30 +67,30 @@ If you're running Frigate via Docker (recommended method), follow these steps:
- If you've customized other settings (e.g., `shm-size`), ensure they're still appropriate after the update.
- Docker will automatically use the updated image when you restart the container, as long as you pulled the correct version.
## Updating the Home Assistant Addon
## Updating the Home Assistant App (formerly Addon)
For users running Frigate as a Home Assistant Addon:
For users running Frigate as a Home Assistant App:
1. **Check for Updates**:
- Navigate to **Settings > Add-ons** in Home Assistant.
- Find your installed Frigate addon (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
- Navigate to **Settings > Apps** in Home Assistant.
- Find your installed Frigate app (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
- If an update is available, you'll see an "Update" button.
2. **Update the Addon**:
- Click the "Update" button next to the Frigate addon.
2. **Update the App**:
- Click the "Update" button next to the Frigate app.
- Wait for the process to complete. Home Assistant will handle downloading and installing the new version.
3. **Restart the Addon**:
- After updating, go to the addons page and click "Restart" to apply the changes.
3. **Restart the App**:
- After updating, go to the apps page and click "Restart" to apply the changes.
4. **Verify the Update**:
- Check the addon logs (under the "Log" tab) to ensure Frigate starts without errors.
- Check the app logs (under the "Log" tab) to ensure Frigate starts without errors.
- Access the Frigate Web UI to confirm the new version is running.
### Notes
- Ensure your `/config/frigate.yml` is compatible with the new version by reviewing the [Release notes](https://github.com/blakeblackshear/frigate/releases).
- If using custom hardware (e.g., Coral or GPU), verify that configurations still work, as addon updates don't modify your hardware settings.
- If using custom hardware (e.g., Coral or GPU), verify that configurations still work, as app updates don't modify your hardware settings.
## Rolling Back
@ -101,7 +101,7 @@ If an update causes issues:
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.4`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
- For Home Assistant: Restore from the app/addon backup you took before you updated.
4. Verify the old version is running again.
## Troubleshooting

View File

@ -37,18 +37,18 @@ The following diagram adds a lot more detail than the simple view explained befo
%%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%
flowchart TD
RecStore[(Recording\nstore)]
SnapStore[(Snapshot\nstore)]
RecStore[(Recording<br>store)]
SnapStore[(Snapshot<br>store)]
subgraph Acquisition
Cam["Camera"] -->|FFmpeg supported| Stream
Cam -->|"Other streaming\nprotocols"| go2rtc
Cam -->|"Other streaming<br>protocols"| go2rtc
go2rtc("go2rtc") --> Stream
Stream[Capture main and\nsub streams] --> |detect stream|Decode(Decode and\ndownscale)
Stream[Capture main and<br>sub streams] --> |detect stream|Decode(Decode and<br>downscale)
end
subgraph Motion
Decode --> MotionM(Apply\nmotion masks)
MotionM --> MotionD(Motion\ndetection)
Decode --> MotionM(Apply<br>motion masks)
MotionM --> MotionD(Motion<br>detection)
end
subgraph Detection
MotionD --> |motion regions| ObjectD(Object detection)
@ -60,8 +60,8 @@ flowchart TD
MotionD --> |motion event|Birdseye
ObjectZ --> |object event|Birdseye
MotionD --> |"video segments\n(retain motion)"|RecStore
MotionD --> |"video segments<br>(retain motion)"|RecStore
ObjectZ --> |detection clip|RecStore
Stream -->|"video segments\n(retain all)"| RecStore
Stream -->|"video segments<br>(retain all)"| RecStore
ObjectZ --> |detection snapshot|SnapStore
```

View File

@ -33,19 +33,16 @@ After adding this to the config, restart Frigate and try to watch the live strea
### What if my video doesn't play?
- Check Logs:
- Access the go2rtc logs in the Frigate UI under Logs in the sidebar.
- If go2rtc is having difficulty connecting to your camera, you should see some error messages in the log.
- Check go2rtc Web Interface: if you don't see any errors in the logs, try viewing the camera through go2rtc's web interface.
- Navigate to port 1984 in your browser to access go2rtc's web interface.
- If using Frigate through Home Assistant, enable the web interface at port 1984.
- If using Docker, forward port 1984 before accessing the web interface.
- Click `stream` for the specific camera to see if the camera's stream is being received.
- Check Video Codec:
- If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
@ -58,7 +55,6 @@ After adding this to the config, restart Frigate and try to watch the live strea
```
- Switch to FFmpeg if needed:
- Some camera streams may need to use the ffmpeg module in go2rtc. This has the downside of slower startup times, but has compatibility with more stream types.
```yaml
@ -101,9 +97,9 @@ After adding this to the config, restart Frigate and try to watch the live strea
:::warning
To access the go2rtc stream externally when utilizing the Frigate Add-On (for
To access the go2rtc stream externally when utilizing the Frigate App (for
instance through VLC), you must first enable the RTSP Restream port.
You can do this by visiting the Frigate Add-On configuration page within Home
You can do this by visiting the Frigate App configuration page within Home
Assistant and revealing the hidden options under the "Show disabled ports"
section.

View File

@ -9,7 +9,7 @@ title: Getting started
If you already have an environment with Linux and Docker installed, you can continue to [Installing Frigate](#installing-frigate) below.
If you already have Frigate installed through Docker or through a Home Assistant Add-on, you can continue to [Configuring Frigate](#configuring-frigate) below.
If you already have Frigate installed through Docker or through a Home Assistant App, you can continue to [Configuring Frigate](#configuring-frigate) below.
:::
@ -81,7 +81,7 @@ Now you have a minimal Debian server that requires very little maintenance.
## Installing Frigate
This section shows how to create a minimal directory structure for a Docker installation on Debian. If you have installed Frigate as a Home Assistant Add-on or another way, you can continue to [Configuring Frigate](#configuring-frigate).
This section shows how to create a minimal directory structure for a Docker installation on Debian. If you have installed Frigate as a Home Assistant App or another way, you can continue to [Configuring Frigate](#configuring-frigate).
### Setup directories
@ -150,7 +150,7 @@ Here is an example configuration with hardware acceleration configured to work w
`docker-compose.yml` (after modifying, you will need to run `docker compose up -d` to apply changes)
```yaml
```yaml {4,5}
services:
frigate:
...
@ -168,17 +168,57 @@ cameras:
name_of_your_camera:
ffmpeg:
inputs: ...
# highlight-next-line
hwaccel_args: preset-vaapi
detect: ...
```
### Step 4: Configure detectors
By default, Frigate will use a single CPU detector. If you have a USB Coral, you will need to add a detectors section to your config.
By default, Frigate will use a single CPU detector.
In many cases, the integrated graphics on Intel CPUs provides sufficient performance for typical Frigate setups. If you have an Intel processor, you can follow the configuration below.
<details>
<summary>Use Intel OpenVINO detector</summary>
You need to refer to **Configure hardware acceleration** above to enable the container to use the GPU.
```yaml {3-6,9-15,20-21}
mqtt: ...
detectors: # <---- add detectors
ov:
type: openvino # <---- use openvino detector
device: GPU
# We will use the default MobileNet_v2 model from OpenVINO.
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
cameras:
name_of_your_camera:
ffmpeg: ...
detect:
enabled: True # <---- turn on detection
...
```
</details>
If you have a USB Coral, you will need to add a detectors section to your config.
<details>
<summary>Use USB Coral detector</summary>
`docker-compose.yml` (after modifying, you will need to run `docker compose up -d` to apply changes)
```yaml
```yaml {4-6}
services:
frigate:
...
@ -188,7 +228,7 @@ services:
...
```
```yaml
```yaml {3-6,11-12}
mqtt: ...
detectors: # <---- add detectors
@ -204,6 +244,8 @@ cameras:
...
```
</details>
More details on available detectors can be found [here](../configuration/object_detectors.md).
Restart Frigate and you should start seeing detections for `person`. If you want to track other objects, they will need to be added according to the [configuration file reference](../configuration/reference.md).
@ -222,7 +264,7 @@ Note that motion masks should not be used to mark out areas where you do not wan
Your configuration should look similar to this now.
```yaml
```yaml {16-18}
mqtt:
enabled: False
@ -252,7 +294,7 @@ In order to review activity in the Frigate UI, recordings need to be enabled.
To enable recording video, add the `record` role to a stream and enable it in the config. If record is disabled in the config, it won't be possible to enable it in the UI.
```yaml
```yaml {16-17}
mqtt: ...
detectors: ...

View File

@ -3,7 +3,7 @@ id: ha_network_storage
title: Home Assistant network storage
---
As of Home Assistant 2023.6, Network Mounted Storage is supported for Add-ons.
As of Home Assistant 2023.6, Network Mounted Storage is supported for Apps.
## Setting Up Remote Storage For Frigate
@ -14,7 +14,7 @@ As of Home Assistant 2023.6, Network Mounted Storage is supported for Add-ons.
### Initial Setup
1. Stop the Frigate Add-on
1. Stop the Frigate App
### Move current data
@ -37,4 +37,4 @@ Keeping the current data is optional, but the data will need to be moved regardl
4. Fill out the additional required info for your particular NAS
5. Connect
6. Move files from `/media/frigate_tmp` to `/media/frigate` if they were kept in previous step
7. Start the Frigate Add-on
7. Start the Frigate App

View File

@ -99,11 +99,11 @@ services:
...
```
### Home Assistant Add-on
### Home Assistant App
If you are using the Home Assistant Add-on, the URL should be one of the following depending on which Add-on variant you are using. Note that if you are using the Proxy Add-on, you should NOT point the integration at the proxy URL. Just enter the same URL used to access Frigate directly from your network.
If you are using the Home Assistant App, the URL should be one of the following depending on which App variant you are using. Note that if you are using the Proxy App, you should NOT point the integration at the proxy URL. Just enter the same URL used to access Frigate directly from your network.
| Add-on Variant | URL |
| App Variant | URL |
| -------------------------- | -------------------------------------- |
| Frigate | `http://ccab4aaf-frigate:5000` |
| Frigate (Full Access) | `http://ccab4aaf-frigate-fa:5000` |

View File

@ -11,7 +11,8 @@ These are the MQTT messages generated by Frigate. The default topic_prefix is `f
Designed to be used as an availability topic with Home Assistant. Possible messages are:
"online": published when Frigate is running (on startup)
"offline": published after Frigate has stopped
"stopped": published when Frigate is stopped normally
"offline": published automatically by the MQTT broker if Frigate disconnects unexpectedly (via MQTT Will Message)
### `frigate/restart`
@ -275,6 +276,14 @@ Same data available at `/api/stats` published at a configurable interval.
Returns data about each camera, its current features, and whether it is detecting motion, objects, etc. Can be triggered by publishing to `frigate/onConnect`
### `frigate/profile/set`
Topic to activate or deactivate a [profile](/configuration/profiles). Publish a profile name to activate it, or `none` to deactivate the current profile.
### `frigate/profile/state`
Topic with the currently active profile name. Published value is the profile name or `none` if no profile is active. This topic is retained.
### `frigate/notifications/set`
Topic to turn notifications on and off. Expected values are `ON` and `OFF`.

View File

@ -19,11 +19,11 @@ Once logged in, you can generate an API key for Frigate in Settings.
### Set your API key
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the `Frigate+` buttons on the Explore page. Home Assistant Addon users can set it under Settings > Add-ons > Frigate > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the `Frigate+` buttons on the Explore page. Home Assistant App users can set it under Settings > Apps > Frigate > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
:::warning
You cannot use the `environment_vars` section of your Frigate configuration file to set this environment variable. It must be defined as an environment variable in the docker config or Home Assistant Add-on config.
You cannot use the `environment_vars` section of your Frigate configuration file to set this environment variable. It must be defined as an environment variable in the docker config or Home Assistant App config.
:::

View File

@ -42,3 +42,7 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
## [Scrypted - Frigate bridge plugin](https://github.com/apocaliss92/scrypted-frigate-bridge)
[Scrypted - Frigate bridge](https://github.com/apocaliss92/scrypted-frigate-bridge) is a plugin that ingests Frigate detections, motion, and video clips into Scrypted, and provides templates to export rebroadcast configurations to Frigate.
## [Strix](https://github.com/eduard256/Strix)
[Strix](https://github.com/eduard256/Strix) auto-discovers working stream URLs for IP cameras and generates ready-to-use Frigate configs. It tests thousands of URL patterns against your camera and supports cameras without RTSP or ONVIF, covering 67K+ camera models from 3.6K+ brands.

View File

@ -25,10 +25,9 @@ Yes. Subscriptions to Frigate+ provide access to the infrastructure used to trai
### Why can't I submit images to Frigate+?
If you've configured your API key and the Frigate+ Settings page in the UI shows that the key is active, you need to ensure that you've enabled both snapshots and `clean_copy` snapshots for the cameras you'd like to submit images for. Note that `clean_copy` is enabled by default when snapshots are enabled.
If you've configured your API key and the Frigate+ Settings page in the UI shows that the key is active, you need to ensure that snapshots are enabled for the cameras you'd like to submit images for.
```yaml
snapshots:
enabled: true
clean_copy: true
```

View File

@ -32,7 +32,7 @@ The USB coral can draw up to 900mA and this can be too much for some on-device U
The USB coral has different IDs when it is uninitialized and initialized.
- When running Frigate in a VM, Proxmox lxc, etc. you must ensure both device IDs are mapped.
- When running through the Home Assistant OS you may need to run the Full Access variant of the Frigate Add-on with the _Protection mode_ switch disabled so that the coral can be accessed.
- When running through the Home Assistant OS you may need to run the Full Access variant of the Frigate App with the _Protection mode_ switch disabled so that the coral can be accessed.
### Synology 716+II running DSM 7.2.1-69057 Update 5

View File

@ -83,6 +83,17 @@ const config: Config = {
},
},
prism: {
magicComments: [
{
className: 'theme-code-block-highlighted-line',
line: 'highlight-next-line',
block: {start: 'highlight-start', end: 'highlight-end'},
},
{
className: 'code-block-error-line',
line: 'highlight-error-line',
},
],
additionalLanguages: ["bash", "json"],
},
languageTabs: [

View File

@ -12313,9 +12313,9 @@
}
},
"node_modules/immutable": {
"version": "5.1.4",
"resolved": "https://registry.npmjs.org/immutable/-/immutable-5.1.4.tgz",
"integrity": "sha512-p6u1bG3YSnINT5RQmx/yRZBpenIl30kVxkTLDyHLIMk0gict704Q9n+thfDI7lTRm9vXdDYutVzXhzcThxTnXA==",
"version": "5.1.5",
"resolved": "https://registry.npmjs.org/immutable/-/immutable-5.1.5.tgz",
"integrity": "sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A==",
"license": "MIT"
},
"node_modules/import-fresh": {

View File

@ -94,6 +94,7 @@ const sidebars: SidebarsConfig = {
"Extra Configuration": [
"configuration/authentication",
"configuration/notifications",
"configuration/profiles",
"configuration/ffmpeg_presets",
"configuration/pwa",
"configuration/tls",

View File

@ -0,0 +1,201 @@
import React, { useState, useEffect } from "react";
import Admonition from "@theme/Admonition";
import styles from "./styles.module.css";
const ShmCalculator = () => {
const [width, setWidth] = useState(1280);
const [height, setHeight] = useState(720);
const [cameraCount, setCameraCount] = useState(1);
  const [result, setResult] = useState("66.63mb");
  const [singleCameraShm, setSingleCameraShm] = useState("26.63mb");
  const [totalShm, setTotalShm] = useState("66.63mb");
const calculate = () => {
if (!width || !height || !cameraCount) {
setResult("Please enter valid values");
setSingleCameraShm("-");
setTotalShm("-");
return;
}
// Single camera base SHM calculation (excluding logs)
// Formula: (width * height * 1.5 * 20 + 270480) / 1048576
const singleCameraBase =
(width * height * 1.5 * 20 + 270480) / 1048576;
setSingleCameraShm(`${singleCameraBase.toFixed(2)}mb`);
// Total SHM calculation (multiple cameras, including logs)
const totalBase = singleCameraBase * cameraCount;
const finalResult = totalBase + 40; // Default includes logs +40mb
setTotalShm(`${(totalBase + 40).toFixed(2)}mb`);
// Format result
if (finalResult < 1) {
setResult(`${(finalResult * 1024).toFixed(2)}kb`);
} else if (finalResult >= 1024) {
setResult(`${(finalResult / 1024).toFixed(2)}gb`);
} else {
setResult(`${finalResult.toFixed(2)}mb`);
}
};
const formatWithUnit = (value) => {
const match = value.match(/^([\d.]+)(mb|kb|gb)$/i);
if (match) {
return (
<>
{match[1]}<span className={styles.unit}>{match[2]}</span>
</>
);
}
return value;
};
  const applyPreset = (w, h, count) => {
    setWidth(w);
    setHeight(h);
    setCameraCount(count);
    // No explicit calculate() here: state updates are asynchronous, so it
    // would read stale values; the useEffect below recalculates once the
    // new width/height/count are committed.
  };
useEffect(() => {
calculate();
}, [width, height, cameraCount]);
return (
<div className={styles.shmCalculator}>
<div className={styles.card}>
<h3 className={styles.title}>SHM Calculator</h3>
<p className={styles.description}>
Calculate required shared memory (SHM) based on camera resolution and
count
</p>
<Admonition type="note">
The resolution below is the <strong>detect</strong> stream resolution,
not the <strong>record</strong> stream resolution. SHM size is
determined by the detect resolution used for object detection.{" "}
<a href="/frigate/camera_setup#choosing-a-detect-resolution">
Learn more about choosing a detect resolution.
</a>
</Admonition>
{width * height > 1280 * 720 && (
<Admonition type="warning">
Using a detect resolution higher than 720p is not recommended.
Higher resolutions do not improve object detection accuracy and will
consume significantly more resources.
</Admonition>
)}
<div className="row">
<div className="col col--6">
<div className={styles.formGroup}>
<label htmlFor="width" className={styles.label}>
Width:
</label>
<input
id="width"
type="number"
min="1"
placeholder="e.g.: 1280"
className={styles.input}
value={width}
onChange={(e) => setWidth(Number(e.target.value))}
/>
</div>
</div>
<div className="col col--6">
<div className={styles.formGroup}>
<label htmlFor="height" className={styles.label}>
Height:
</label>
<input
id="height"
type="number"
min="1"
placeholder="e.g.: 720"
className={styles.input}
value={height}
onChange={(e) => setHeight(Number(e.target.value))}
/>
</div>
</div>
</div>
<div className={styles.formGroup}>
<label htmlFor="cameraCount" className={styles.label}>
Camera Count:
</label>
<input
id="cameraCount"
type="number"
min="1"
placeholder="e.g.: 8"
className={styles.input}
value={cameraCount}
onChange={(e) => setCameraCount(Number(e.target.value))}
/>
</div>
<div className={styles.resultSection}>
<h4>Calculation Result</h4>
<div className={styles.resultValue}>
<span className={styles.resultNumber}>{formatWithUnit(result)}</span>
</div>
<div className={styles.formulaDisplay}>
<p>
<strong>Single Camera:</strong> {formatWithUnit(singleCameraShm)}
</p>
<p>
<strong>Formula:</strong> (width × height × 1.5 × 20 + 270480) ÷
1048576
</p>
{cameraCount > 1 && (
<p>
<strong>Total ({cameraCount} cameras):</strong> {formatWithUnit(totalShm)}
</p>
)}
<p>
<strong>With Logs:</strong> + 40<span className={styles.unit}>mb</span>
</p>
</div>
</div>
<div className={styles.presets}>
<h4>Common Presets</h4>
<div className={styles.presetButtons}>
<button
className="button button--outline button--primary button--sm"
onClick={() => applyPreset(640, 360, 1)}
>
640x360 × 1
</button>
<button
className="button button--outline button--primary button--sm"
onClick={() => applyPreset(1280, 720, 1)}
>
1280x720 × 1
</button>
<button
className="button button--outline button--primary button--sm"
onClick={() => applyPreset(1280, 720, 4)}
>
1280x720 × 4
</button>
<button
className="button button--outline button--primary button--sm"
onClick={() => applyPreset(1280, 720, 8)}
>
1280x720 × 8
</button>
</div>
</div>
</div>
</div>
);
};
export default ShmCalculator;
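The component's formula can be expressed as a standalone sketch (same constants as above, MiB-based, with Frigate's default 40mb log allowance):

```python
def shm_size_mb(width: int, height: int, cameras: int = 1, logs_mb: float = 40.0) -> float:
    # One YUV420 frame is width * height * 1.5 bytes; the calculator
    # assumes a 20-frame buffer per camera plus a fixed 270480-byte
    # overhead, converted to MiB, then a flat allowance for logs.
    per_camera = (width * height * 1.5 * 20 + 270480) / 1048576
    return per_camera * cameras + logs_mb

print(f"{shm_size_mb(1280, 720):.2f}mb")  # 66.63mb for one 720p detect stream
```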

View File

@ -0,0 +1,131 @@
.shmCalculator {
margin: 2rem 0;
max-width: 600px;
}
.card {
background: var(--ifm-background-surface-color);
border: 1px solid var(--ifm-border-color);
border-radius: 12px;
padding: 2rem;
box-shadow: var(--ifm-global-shadow-lw);
}
[data-theme='light'] .card {
background: var(--ifm-color-emphasis-100);
border: 1px solid var(--ifm-color-emphasis-300);
}
.title {
margin: 0 0 0.5rem 0;
font-size: 1.5rem;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
}
.description {
margin: 0 0 1.5rem 0;
color: var(--ifm-font-color-secondary);
font-size: 0.9rem;
}
.formGroup {
margin-bottom: 1rem;
}
.label {
display: block;
margin-bottom: 0.25rem;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
font-size: 0.9rem;
}
.input {
width: 100%;
padding: 0.5rem 0.75rem;
border: 1px solid var(--ifm-border-color);
border-radius: 6px;
background: var(--ifm-background-color);
color: var(--ifm-font-color-base);
font-size: 0.95rem;
transition: border-color 0.2s, box-shadow 0.2s;
}
[data-theme='light'] .input {
background: #fff;
border: 1px solid #d0d7de;
}
.input:focus {
outline: none;
border-color: var(--ifm-color-primary);
box-shadow: 0 0 0 3px var(--ifm-color-primary-lightest);
}
.resultSection {
margin-top: 1rem;
padding: 1.5rem;
background: var(--ifm-background-color);
border-radius: 8px;
border: 1px solid var(--ifm-border-color);
}
[data-theme='light'] .resultSection {
background: #f6f8fa;
border: 1px solid #d0d7de;
}
.resultSection h4 {
margin: 0 0 1rem 0;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
}
.resultValue {
text-align: center;
padding: 1rem;
background: var(--ifm-color-primary);
border-radius: 6px;
margin-bottom: 1rem;
}
.resultNumber {
font-size: 2rem;
font-weight: var(--ifm-font-weight-bold);
color: #fff;
}
.formulaDisplay {
font-size: 0.85rem;
color: var(--ifm-font-color-secondary);
line-height: 1.6;
}
.formulaDisplay p {
margin: 0.25rem 0;
}
.formulaDisplay strong {
color: var(--ifm-font-color-base);
}
.unit {
text-transform: uppercase;
}
.presets {
margin-top: 1.5rem;
}
.presets h4 {
margin: 0 0 0.75rem 0;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
}
.presetButtons {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
}

View File

@ -234,3 +234,11 @@
content: "schema";
color: var(--ifm-color-secondary-contrast-foreground);
}
.code-block-error-line {
background-color: #ff000020;
display: block;
margin: 0 calc(-1 * var(--ifm-pre-padding));
padding: 0 var(--ifm-pre-padding);
border-left: 3px solid #ff000080;
}

File diff suppressed because it is too large

View File

@ -5,6 +5,7 @@ import copy
import json
import logging
import os
import platform
import traceback
import urllib
from datetime import datetime, timedelta
@ -31,7 +32,10 @@ from frigate.api.auth import (
require_role,
)
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
from frigate.api.defs.request.app_body import (
AppConfigSetBody,
MediaSyncBody,
)
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
@ -154,6 +158,31 @@ def config(request: Request):
for zone_name, zone in config_obj.cameras[camera_name].zones.items():
camera_dict["zones"][zone_name]["color"] = zone.color
# Re-dump profile overrides with exclude_unset so that only
# explicitly-set fields are returned (not Pydantic defaults).
# Without this, the frontend merges defaults (e.g. threshold=30)
# over the camera's actual base values (e.g. threshold=20).
if camera.profiles:
for profile_name, profile_config in camera.profiles.items():
camera_dict.setdefault("profiles", {})[profile_name] = (
profile_config.model_dump(
mode="json", warnings="none", exclude_unset=True
)
)
# When a profile is active, the top-level camera sections contain
# profile-merged (effective) values. Include the original base
# configs so the frontend settings can display them separately.
if (
config_obj.active_profile is not None
and request.app.profile_manager is not None
):
base_sections = request.app.profile_manager.get_base_configs_for_api(
camera_name
)
if base_sections:
camera_dict["base_config"] = base_sections
# remove go2rtc stream passwords
go2rtc: dict[str, Any] = config_obj.go2rtc.model_dump(
mode="json", warnings="none", exclude_none=True
@ -201,22 +230,43 @@ def config(request: Request):
return JSONResponse(content=config)
@router.get("/profiles", dependencies=[Depends(allow_any_authenticated())])
def get_profiles(request: Request):
"""List all available profiles and the currently active profile."""
profile_manager = request.app.profile_manager
return JSONResponse(content=profile_manager.get_profile_info())
@router.get("/profile/active", dependencies=[Depends(allow_any_authenticated())])
def get_active_profile(request: Request):
"""Get the currently active profile."""
config_obj: FrigateConfig = request.app.frigate_config
return JSONResponse(content={"active_profile": config_obj.active_profile})
@router.get("/ffmpeg/presets", dependencies=[Depends(allow_any_authenticated())])
def ffmpeg_presets():
"""Return available ffmpeg preset keys for config UI usage."""
machine = platform.machine().lower()
is_arm64 = machine in ("aarch64", "arm64", "armv8", "armv7l")
if is_arm64:
hwaccel_presets = [
"preset-rpi-64-h264",
"preset-rpi-64-h265",
"preset-jetson-h264",
"preset-jetson-h265",
"preset-rkmpp",
"preset-vaapi",
]
else:
hwaccel_presets = [
"preset-vaapi",
"preset-intel-qsv-h264",
"preset-intel-qsv-h265",
"preset-nvidia",
]
# Whitelist based on documented presets in ffmpeg_presets.md
hwaccel_presets = [
"preset-rpi-64-h264",
"preset-rpi-64-h265",
"preset-vaapi",
"preset-intel-qsv-h264",
"preset-intel-qsv-h265",
"preset-nvidia",
"preset-jetson-h264",
"preset-jetson-h265",
"preset-rkmpp",
]
input_presets = [
"preset-http-jpeg-generic",
"preset-http-mjpeg-generic",
@ -279,7 +329,7 @@ def config_raw_paths(request: Request):
return JSONResponse(content=raw_paths)
@router.get("/config/raw", dependencies=[Depends(allow_any_authenticated())])
@router.get("/config/raw", dependencies=[Depends(require_role(["admin"]))])
def config_raw():
config_file = find_config_file()
@ -589,6 +639,9 @@ def config_set(request: Request, body: AppConfigSetBody):
request.app.frigate_config = config
request.app.genai_manager.update_config(config)
if request.app.profile_manager is not None:
request.app.profile_manager.update_config(config)
if request.app.stats_emitter is not None:
request.app.stats_emitter.config = config
@ -819,7 +872,10 @@ def sync_media(body: MediaSyncBody = Body(...)):
202 Accepted with job_id, or 409 Conflict if job already running.
"""
job_id = start_media_sync_job(
dry_run=body.dry_run, media_types=body.media_types, force=body.force
dry_run=body.dry_run,
media_types=body.media_types,
force=body.force,
verbose=body.verbose,
)
if job_id is None:
@ -1028,7 +1084,12 @@ def get_recognized_license_plates(
@router.get("/timeline", dependencies=[Depends(allow_any_authenticated())])
def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = None):
def timeline(
camera: str = "all",
limit: int = 100,
source_id: Optional[str] = None,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
clauses = []
selected_columns = [
@ -1050,6 +1111,9 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
else:
clauses.append((Timeline.source_id.in_(source_ids)))
# Enforce per-camera access control
clauses.append((Timeline.camera << allowed_cameras))
if len(clauses) == 0:
clauses.append((True))
@ -1065,7 +1129,10 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
@router.get("/timeline/hourly", dependencies=[Depends(allow_any_authenticated())])
def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
def hourly_timeline(
params: AppTimelineHourlyQueryParameters = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get hourly summary for timeline."""
cameras = params.cameras
labels = params.labels
@ -1083,6 +1150,9 @@ def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
camera_list = cameras.split(",")
clauses.append((Timeline.camera << camera_list))
# Enforce per-camera access control
clauses.append((Timeline.camera << allowed_cameras))
if labels != "all":
label_list = labels.split(",")
clauses.append((Timeline.data["label"] << label_list))
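The access-control clause added above always intersects results with the caller's allowed cameras (peewee's `<<` builds an IN clause). As a plain-Python sketch of the same filtering order, with illustrative row dicts:

```python
def filter_timeline(rows: list[dict], camera: str, allowed_cameras: list[str]) -> list[dict]:
    # An explicit camera filter narrows first; the allowed-cameras clause
    # is then applied unconditionally, so a caller can never widen results
    # past their role's camera list.
    if camera != "all":
        rows = [r for r in rows if r["camera"] == camera]
    return [r for r in rows if r["camera"] in allowed_cameras]

rows = [{"camera": "front"}, {"camera": "back"}]
print(filter_timeline(rows, "all", ["front"]))  # [{'camera': 'front'}]
```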

View File

@ -73,7 +73,6 @@ def require_admin_by_default():
"/stats",
"/stats/history",
"/config",
"/config/raw",
"/vainfo",
"/nvinfo",
"/labels",
@ -896,6 +895,7 @@ def create_user(
User.notification_tokens: [],
}
).execute()
request.app.config_publisher.publisher.publish("config/auth", None)
return JSONResponse(content={"username": body.username})
@ -913,6 +913,7 @@ def delete_user(request: Request, username: str):
)
User.delete_by_id(username)
request.app.config_publisher.publisher.publish("config/auth", None)
return JSONResponse(content={"success": True})
@ -1032,6 +1033,7 @@ async def update_role(
)
User.set_by_id(username, {User.role: body.role})
request.app.config_publisher.publisher.publish("config/auth", None)
return JSONResponse(content={"success": True})
@ -1045,7 +1047,16 @@ async def require_camera_access(
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
return current_user
detail = "Authentication required"
try:
error_payload = json.loads(current_user.body)
detail = (
error_payload.get("message") or error_payload.get("detail") or detail
)
except Exception:
pass
raise HTTPException(status_code=current_user.status_code, detail=detail)
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
@ -1063,6 +1074,61 @@ async def require_camera_access(
)
def _get_stream_owner_cameras(request: Request, stream_name: str) -> set[str]:
owner_cameras: set[str] = set()
for camera_name, camera in request.app.frigate_config.cameras.items():
if stream_name == camera_name:
owner_cameras.add(camera_name)
continue
if stream_name in camera.live.streams.values():
owner_cameras.add(camera_name)
return owner_cameras
async def require_go2rtc_stream_access(
stream_name: Optional[str] = None,
request: Request = None,
):
"""Dependency to enforce go2rtc stream access based on owning camera access."""
if stream_name is None:
return
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
detail = "Authentication required"
try:
error_payload = json.loads(current_user.body)
detail = (
error_payload.get("message") or error_payload.get("detail") or detail
)
except Exception:
pass
raise HTTPException(status_code=current_user.status_code, detail=detail)
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
# Admin or full access bypasses
if role == "admin" or not roles_dict.get(role):
return
owner_cameras = _get_stream_owner_cameras(request, stream_name)
if owner_cameras & set(allowed_cameras):
return
raise HTTPException(
status_code=403,
detail=f"Access denied to camera '{stream_name}'. Allowed: {allowed_cameras}",
)
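The decision above reduces to a set intersection between the stream's owning cameras and the user's allowed cameras (sketch; `role_restricted` stands in for `bool(roles_dict.get(role))`):

```python
def stream_access_allowed(role: str, role_restricted: bool,
                          owner_cameras: set[str], allowed_cameras: set[str]) -> bool:
    # Admins, and roles with no camera restriction, always pass.
    if role == "admin" or not role_restricted:
        return True
    # Otherwise the stream must belong to at least one allowed camera.
    return bool(owner_cameras & allowed_cameras)

print(stream_access_allowed("viewer", True, {"front"}, {"front", "back"}))  # True
```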
async def get_allowed_cameras_for_filter(request: Request):
"""Dependency to get allowed_cameras for filtering lists."""
current_user = await get_current_user(request)

View File

@ -20,9 +20,10 @@ from zeep.transports import AsyncTransport
from frigate.api.auth import (
allow_any_authenticated,
require_camera_access,
require_go2rtc_stream_access,
require_role,
)
from frigate.api.defs.request.app_body import CameraSetBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
@ -80,14 +81,27 @@ def go2rtc_streams():
@router.get(
"/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
"/go2rtc/streams/{stream_name}",
dependencies=[Depends(require_go2rtc_stream_access)],
)
def go2rtc_camera_stream(request: Request, camera_name: str):
def go2rtc_camera_stream(request: Request, stream_name: str):
r = requests.get(
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone"
"http://127.0.0.1:1984/api/streams",
params={
"src": stream_name,
"video": "all",
"audio": "all",
"microphone": "",
},
)
if not r.ok:
camera_config = request.app.frigate_config.cameras.get(camera_name)
camera_config = request.app.frigate_config.cameras.get(stream_name)
if camera_config is None:
for camera_name, camera in request.app.frigate_config.cameras.items():
if stream_name in camera.live.streams.values():
camera_config = request.app.frigate_config.cameras.get(camera_name)
break
if camera_config and camera_config.enabled:
logger.error("Failed to fetch streams from go2rtc")
@ -1155,3 +1169,76 @@ async def delete_camera(
},
status_code=200,
)
_SUB_COMMAND_FEATURES = {"motion_mask", "object_mask", "zone"}
@router.put(
"/camera/{camera_name}/set/{feature}",
dependencies=[Depends(require_role(["admin"]))],
)
@router.put(
"/camera/{camera_name}/set/{feature}/{sub_command}",
dependencies=[Depends(require_role(["admin"]))],
)
def camera_set(
request: Request,
camera_name: str,
feature: str,
body: CameraSetBody,
sub_command: str | None = None,
):
"""Set a camera feature state. Use camera_name='*' to target all cameras."""
dispatcher = request.app.dispatcher
frigate_config: FrigateConfig = request.app.frigate_config
if feature == "profile":
if camera_name != "*":
return JSONResponse(
content={
"success": False,
"message": "Profile feature requires camera_name='*'",
},
status_code=400,
)
dispatcher._receive("profile/set", body.value)
return JSONResponse(content={"success": True})
if feature not in dispatcher._camera_settings_handlers:
return JSONResponse(
content={"success": False, "message": f"Unknown feature: {feature}"},
status_code=400,
)
if sub_command and feature not in _SUB_COMMAND_FEATURES:
return JSONResponse(
content={
"success": False,
"message": f"Feature '{feature}' does not support sub-commands",
},
status_code=400,
)
if camera_name == "*":
cameras = list(frigate_config.cameras.keys())
elif camera_name not in frigate_config.cameras:
return JSONResponse(
content={
"success": False,
"message": f"Camera '{camera_name}' not found",
},
status_code=404,
)
else:
cameras = [camera_name]
for cam in cameras:
topic = (
f"{cam}/{feature}/{sub_command}/set"
if sub_command
else f"{cam}/{feature}/set"
)
dispatcher._receive(topic, body.value)
return JSONResponse(content={"success": True})

View File

@ -26,6 +26,11 @@ from frigate.api.defs.response.chat_response import (
from frigate.api.defs.tags import Tags
from frigate.api.event import events
from frigate.genai.utils import build_assistant_message_for_conversation
from frigate.jobs.vlm_watch import (
get_vlm_watch_job,
start_vlm_watch_job,
stop_vlm_watch_job,
)
logger = logging.getLogger(__name__)
@ -82,6 +87,16 @@ class ToolExecuteRequest(BaseModel):
arguments: Dict[str, Any]
class VLMMonitorRequest(BaseModel):
"""Request model for starting a VLM watch job."""
camera: str
condition: str
max_duration_minutes: int = 60
labels: List[str] = []
zones: List[str] = []
def get_tool_definitions() -> List[Dict[str, Any]]:
"""
Get OpenAI-compatible tool definitions for Frigate.
@ -95,9 +110,11 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
"function": {
"name": "search_objects",
"description": (
"Search for detected objects in Frigate by camera, object label, time range, "
"zones, and other filters. Use this to answer questions about when "
"objects were detected, what objects appeared, or to find specific object detections. "
"Search the historical record of detected objects in Frigate. "
"Use this ONLY for questions about the PAST — e.g. 'did anyone come by today?', "
"'when was the last car?', 'show me detections from yesterday'. "
"Do NOT use this for monitoring or alerting requests about future events — "
"use start_camera_watch instead for those. "
"An 'object' in Frigate represents a tracked detection (e.g., a person, package, car). "
"When the user asks about a specific name (person, delivery company, animal, etc.), "
"filter by sub_label only and do not set label."
@ -140,15 +157,70 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
"required": [],
},
},
{
"type": "function",
"function": {
"name": "set_camera_state",
"description": (
"Change a camera's feature state (e.g., turn detection on/off, enable/disable recordings). "
"Use camera='*' to apply to all cameras at once. "
"Only call this tool when the user explicitly asks to change a camera setting. "
"Requires admin privileges."
),
"parameters": {
"type": "object",
"properties": {
"camera": {
"type": "string",
"description": "Camera name to target, or '*' to target all cameras.",
},
"feature": {
"type": "string",
"enum": [
"detect",
"record",
"snapshots",
"audio",
"motion",
"enabled",
"birdseye",
"birdseye_mode",
"improve_contrast",
"ptz_autotracker",
"motion_contour_area",
"motion_threshold",
"notifications",
"audio_transcription",
"review_alerts",
"review_detections",
"object_descriptions",
"review_descriptions",
"profile",
],
"description": (
"The feature to change. Most features accept ON or OFF. "
"birdseye_mode accepts CONTINUOUS, MOTION, or OBJECTS. "
"motion_contour_area and motion_threshold accept a number. "
"profile accepts a profile name or 'none' to deactivate (requires camera='*')."
),
},
"value": {
"type": "string",
"description": "The value to set. ON or OFF for toggles, a number for thresholds, a profile name or 'none' for profile.",
},
},
"required": ["camera", "feature", "value"],
},
},
},
{
"type": "function",
"function": {
"name": "get_live_context",
"description": (
"Get the current detection information for a camera: objects being tracked, "
"Get the current live image and detection information for a camera: objects being tracked, "
"zones, timestamps. Use this to understand what is visible in the live view. "
"Call this when the user has included a live image (via include_live_image) or "
"when answering questions about what is happening right now on a specific camera."
"Call this when answering questions about what is happening right now on a specific camera."
),
"parameters": {
"type": "object",
@ -162,6 +234,65 @@ def get_tool_definitions() -> List[Dict[str, Any]]:
},
},
},
{
"type": "function",
"function": {
"name": "start_camera_watch",
"description": (
"Start a continuous VLM watch job that monitors a camera and sends a notification "
"when a specified condition is met. Use this when the user wants to be alerted about "
"a future event, e.g. 'tell me when guests arrive' or 'notify me when the package is picked up'. "
"Only one watch job can run at a time. Returns a job ID."
),
"parameters": {
"type": "object",
"properties": {
"camera": {
"type": "string",
"description": "Camera ID to monitor.",
},
"condition": {
"type": "string",
"description": (
"Natural-language description of the condition to watch for, "
"e.g. 'a person arrives at the front door'."
),
},
"max_duration_minutes": {
"type": "integer",
"description": "Maximum time to watch before giving up (minutes, default 60).",
"default": 60,
},
"labels": {
"type": "array",
"items": {"type": "string"},
"description": "Object labels that should trigger a VLM check (e.g. ['person', 'car']). If omitted, any detection on the camera triggers a check.",
},
"zones": {
"type": "array",
"items": {"type": "string"},
"description": "Zone names to filter by. If specified, only detections in these zones trigger a VLM check.",
},
},
"required": ["camera", "condition"],
},
},
},
{
"type": "function",
"function": {
"name": "stop_camera_watch",
"description": (
"Cancel the currently running VLM watch job. Use this when the user wants to "
"stop a previously started watch, e.g. 'stop watching the front door'."
),
"parameters": {
"type": "object",
"properties": {},
"required": [],
},
},
},
]
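A minimal validator for `start_camera_watch` arguments against the schema above (a hypothetical helper, not part of Frigate; values are illustrative):

```python
def validate_watch_args(args: dict) -> dict:
    # camera and condition are required; the rest fall back to the
    # schema defaults declared in the tool definition.
    missing = {"camera", "condition"} - args.keys()
    if missing:
        raise ValueError(f"missing required arguments: {sorted(missing)}")
    return {
        "camera": args["camera"],
        "condition": args["condition"],
        "max_duration_minutes": args.get("max_duration_minutes", 60),
        "labels": args.get("labels", []),
        "zones": args.get("zones", []),
    }

print(validate_watch_args({"camera": "front_door",
                           "condition": "a person arrives"})["max_duration_minutes"])  # 60
```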
@ -255,6 +386,7 @@ async def _execute_search_objects(
description="Execute a tool function call from an LLM.",
)
async def execute_tool(
request: Request,
body: ToolExecuteRequest = Body(...),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
) -> JSONResponse:
@ -272,6 +404,12 @@ async def execute_tool(
if tool_name == "search_objects":
return await _execute_search_objects(arguments, allowed_cameras)
if tool_name == "set_camera_state":
result = await _execute_set_camera_state(request, arguments)
return JSONResponse(
content=result, status_code=200 if result.get("success") else 400
)
return JSONResponse(
content={
"success": False,
@ -321,12 +459,54 @@ async def _execute_get_live_context(
"stationary": obj_dict.get("stationary", False),
}
return {
result: Dict[str, Any] = {
"camera": camera,
"timestamp": frame_time,
"detections": list(tracked_objects_dict.values()),
}
# Grab live frame and handle based on provider configuration
image_url = await _get_live_frame_image_url(request, camera, allowed_cameras)
if image_url:
genai_manager = request.app.genai_manager
if genai_manager.tool_client is genai_manager.vision_client:
# Same provider handles both roles — pass image URL so it can
# be injected as a user message (images can't be in tool results)
result["_image_url"] = image_url
elif genai_manager.vision_client is not None:
# Separate vision provider — have it describe the image,
# providing detection context so it knows what to focus on
frame_bytes = _decode_data_url(image_url)
if frame_bytes:
detections = result.get("detections", [])
if detections:
detection_lines = []
for d in detections:
parts = [d.get("label", "unknown")]
if d.get("sub_label"):
parts.append(f"({d['sub_label']})")
if d.get("zones"):
parts.append(f"in {', '.join(d['zones'])}")
detection_lines.append(" ".join(parts))
context = (
"The following objects are currently being tracked: "
+ "; ".join(detection_lines)
+ "."
)
else:
context = "No objects are currently being tracked."
description = genai_manager.vision_client._send(
f"Describe what you see in this security camera image. "
f"{context} Focus on the scene, any visible activity, "
f"and details about the tracked objects.",
[frame_bytes],
)
if description:
result["image_description"] = description
return result
except Exception as e:
logger.error(f"Error executing get_live_context: {e}", exc_info=True)
return {
@ -342,8 +522,8 @@ async def _get_live_frame_image_url(
"""
Fetch the current live frame for a camera as a base64 data URL.
Returns None if the frame cannot be retrieved. Used by get_live_context
to attach the live image to the conversation.
"""
if (
camera not in allowed_cameras
@ -358,12 +538,12 @@ async def _get_live_frame_image_url(
if frame is None:
return None
height, width = frame.shape[:2]
target_height = 480
if height > target_height:
scale = target_height / height
frame = cv2.resize(
frame,
(int(width * scale), target_height),
interpolation=cv2.INTER_AREA,
)
_, img_encoded = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 85])
@ -374,6 +554,57 @@ async def _get_live_frame_image_url(
return None
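The frame fetch above downscales live frames to a 480px target height before JPEG encoding. A standalone sketch of just that scaling math (plain Python, no OpenCV; the function name is illustrative):

```python
def scaled_size(width: int, height: int, target_height: int = 480) -> tuple[int, int]:
    """Return (width, height) after scaling down to target_height; never upscale."""
    if height <= target_height:
        return (width, height)
    scale = target_height / height
    # width is rounded down; height lands exactly on the target
    return (int(width * scale), target_height)
```

Scaling by height alone keeps the aspect ratio while bounding payload size for the LLM request.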
def _decode_data_url(data_url: str) -> Optional[bytes]:
"""Decode a base64 data URL to raw bytes."""
try:
# Format: data:image/jpeg;base64,<data>
_, encoded = data_url.split(",", 1)
return base64.b64decode(encoded)
except Exception as e:
logger.debug("Failed to decode data URL: %s", e)
return None
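`_decode_data_url` inverts the data-URL wrapping applied when a frame is JPEG-encoded and base64-embedded. A self-contained round-trip sketch (stdlib only; the encode helper is illustrative, not part of the change above):

```python
import base64
from typing import Optional

def encode_data_url(raw: bytes, mime: str = "image/jpeg") -> str:
    """Wrap raw bytes as a base64 data URL: data:<mime>;base64,<data>."""
    return f"data:{mime};base64,{base64.b64encode(raw).decode('ascii')}"

def decode_data_url(data_url: str) -> Optional[bytes]:
    """Recover the raw bytes, or None if the string is not a data URL."""
    try:
        _, encoded = data_url.split(",", 1)
        return base64.b64decode(encoded)
    except (ValueError, TypeError):
        return None
```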
async def _execute_set_camera_state(
request: Request,
arguments: Dict[str, Any],
) -> Dict[str, Any]:
role = request.headers.get("remote-role", "")
if "admin" not in [r.strip() for r in role.split(",")]:
return {"error": "Admin privileges required to change camera settings."}
camera = arguments.get("camera", "").strip()
feature = arguments.get("feature", "").strip()
value = arguments.get("value", "").strip()
if not camera or not feature or not value:
return {"error": "camera, feature, and value are all required."}
dispatcher = request.app.dispatcher
frigate_config = request.app.frigate_config
if feature == "profile":
if camera != "*":
return {"error": "Profile feature requires camera='*'."}
dispatcher._receive("profile/set", value)
return {"success": True, "camera": camera, "feature": feature, "value": value}
if feature not in dispatcher._camera_settings_handlers:
return {"error": f"Unknown feature: {feature}"}
if camera == "*":
cameras = list(frigate_config.cameras.keys())
elif camera not in frigate_config.cameras:
return {"error": f"Camera '{camera}' not found."}
else:
cameras = [camera]
for cam in cameras:
dispatcher._receive(f"{cam}/{feature}/set", value)
return {"success": True, "camera": camera, "feature": feature, "value": value}
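The wildcard handling above fans a single `*` request out to every configured camera. A minimal sketch of just that resolution step (plain Python; the function name is illustrative):

```python
from typing import List, Optional

def resolve_target_cameras(camera: str, known_cameras: List[str]) -> Optional[List[str]]:
    """Return the cameras a set-state call should touch, or None if unknown."""
    if camera == "*":
        return list(known_cameras)  # fan out to every camera
    if camera not in known_cameras:
        return None  # caller turns this into an error response
    return [camera]
```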
async def _execute_tool_internal(
tool_name: str,
arguments: Dict[str, Any],
@ -398,6 +629,8 @@ async def _execute_tool_internal(
except (json.JSONDecodeError, AttributeError) as e:
logger.warning(f"Failed to extract tool result: {e}")
return {"error": "Failed to parse tool result"}
elif tool_name == "set_camera_state":
return await _execute_set_camera_state(request, arguments)
elif tool_name == "get_live_context":
camera = arguments.get("camera")
if not camera:
@ -408,26 +641,91 @@ async def _execute_tool_internal(
)
return {"error": "Camera parameter is required"}
return await _execute_get_live_context(request, camera, allowed_cameras)
elif tool_name == "start_camera_watch":
return await _execute_start_camera_watch(request, arguments)
elif tool_name == "stop_camera_watch":
return _execute_stop_camera_watch()
else:
logger.error(
"Tool call failed: unknown tool %r. Expected one of: search_objects, get_live_context, "
"start_camera_watch, stop_camera_watch. Arguments received: %s",
tool_name,
json.dumps(arguments),
)
return {"error": f"Unknown tool: {tool_name}"}
async def _execute_start_camera_watch(
request: Request,
arguments: Dict[str, Any],
) -> Dict[str, Any]:
camera = arguments.get("camera", "").strip()
condition = arguments.get("condition", "").strip()
max_duration_minutes = int(arguments.get("max_duration_minutes", 60))
labels = arguments.get("labels") or []
zones = arguments.get("zones") or []
if not camera or not condition:
return {"error": "camera and condition are required."}
config = request.app.frigate_config
if camera not in config.cameras:
return {"error": f"Camera '{camera}' not found."}
genai_manager = request.app.genai_manager
vision_client = genai_manager.vision_client or genai_manager.tool_client
if vision_client is None:
return {"error": "No vision/GenAI provider configured."}
try:
job_id = start_vlm_watch_job(
camera=camera,
condition=condition,
max_duration_minutes=max_duration_minutes,
config=config,
frame_processor=request.app.detected_frames_processor,
genai_manager=genai_manager,
dispatcher=request.app.dispatcher,
labels=labels,
zones=zones,
)
except RuntimeError as e:
logger.error("Failed to start VLM watch job: %s", e, exc_info=True)
return {"error": "Failed to start VLM watch job."}
return {
"success": True,
"job_id": job_id,
"message": (
f"Now watching '{camera}' for: {condition}. "
f"You'll receive a notification when the condition is met (timeout: {max_duration_minutes} min)."
),
}
def _execute_stop_camera_watch() -> Dict[str, Any]:
cancelled = stop_vlm_watch_job()
if cancelled:
return {"success": True, "message": "Watch job cancelled."}
return {"success": False, "message": "No active watch job to cancel."}
async def _execute_pending_tools(
pending_tool_calls: List[Dict[str, Any]],
request: Request,
allowed_cameras: List[str],
) -> tuple[List[ToolCall], List[Dict[str, Any]], List[Dict[str, Any]]]:
"""
Execute a list of tool calls.
Returns:
(ToolCall list for API response,
tool result dicts for conversation,
extra messages to inject after tool results, e.g. user messages with images)
"""
tool_calls_out: List[ToolCall] = []
tool_results: List[Dict[str, Any]] = []
extra_messages: List[Dict[str, Any]] = []
for tool_call in pending_tool_calls:
tool_name = tool_call["name"]
tool_args = tool_call.get("arguments") or {}
@ -464,6 +762,27 @@ async def _execute_pending_tools(
for evt in tool_result
if isinstance(evt, dict)
]
# Extract _image_url from get_live_context results — images can
# only be sent in user messages, not tool results
if isinstance(tool_result, dict) and "_image_url" in tool_result:
image_url = tool_result.pop("_image_url")
extra_messages.append(
{
"role": "user",
"content": [
{
"type": "text",
"text": f"Here is the current live image from camera '{tool_result.get('camera', 'unknown')}'.",
},
{
"type": "image_url",
"image_url": {"url": image_url},
},
],
}
)
result_content = (
json.dumps(tool_result)
if isinstance(tool_result, (dict, list))
@ -499,7 +818,7 @@ async def _execute_pending_tools(
"content": error_content,
}
)
return (tool_calls_out, tool_results, extra_messages)
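Because most chat APIs reject images inside tool results, the loop above pops the private `_image_url` key out of a `get_live_context` result and re-emits it as a multimodal user message. A standalone sketch of that extraction (plain Python; the helper name is illustrative):

```python
from typing import Any, Dict, List, Tuple

def split_image_message(
    tool_result: Dict[str, Any],
) -> Tuple[Dict[str, Any], List[Dict[str, Any]]]:
    """Pop _image_url from a tool result and build a user message carrying the image."""
    extra: List[Dict[str, Any]] = []
    if "_image_url" in tool_result:
        url = tool_result.pop("_image_url")
        camera = tool_result.get("camera", "unknown")
        extra.append({
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Here is the current live image from camera '{camera}'."},
                {"type": "image_url", "image_url": {"url": url}},
            ],
        })
    return tool_result, extra
```

The cleaned result is what gets JSON-serialized into the tool message; the extra message is appended to the conversation after it.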
@router.post(
@ -555,7 +874,13 @@ async def chat_completion(
if camera_config.friendly_name
else camera_id.replace("_", " ").title()
)
zone_names = list(camera_config.zones.keys())
if zone_names:
cameras_info.append(
f" - {friendly_name} (ID: {camera_id}, zones: {', '.join(zone_names)})"
)
else:
cameras_info.append(f" - {friendly_name} (ID: {camera_id})")
cameras_section = ""
if cameras_info:
@ -565,14 +890,6 @@ async def chat_completion(
+ "\n\nWhen users refer to cameras by their friendly name (e.g., 'Back Deck Camera'), use the corresponding camera ID (e.g., 'back_deck_cam') in tool calls."
)
system_prompt = f"""You are a helpful assistant for Frigate, a security camera NVR system. You help users answer questions about their cameras, detected objects, and events.
Current server local date and time: {current_date_str} at {current_time_str}
@ -582,7 +899,7 @@ Do not start your response with phrases like "I will check...", "Let me see...",
Always present times to the user in the server's local timezone. When tool results include start_time_local and end_time_local, use those exact strings when listing or describing detection times—do not convert or invent timestamps. Do not use UTC or ISO format with Z for the user-facing answer unless the tool result only provides Unix timestamps without local time fields.
When users ask about "today", "yesterday", "this week", etc., use the current date above as reference.
When searching for objects or events, use ISO 8601 format for dates (e.g., {current_date_str}T00:00:00Z for the start of today).
Always be accurate with time calculations based on the current date provided.{cameras_section}"""
conversation.append(
{
@ -591,7 +908,6 @@ Always be accurate with time calculations based on the current date provided.{ca
}
)
for msg in body.messages:
msg_dict = {
"role": msg.role,
@ -602,21 +918,6 @@ Always be accurate with time calculations based on the current date provided.{ca
if msg.name:
msg_dict["name"] = msg.name
conversation.append(msg_dict)
tool_iterations = 0
@ -674,11 +975,16 @@ Always be accurate with time calculations based on the current date provided.{ca
msg.get("content"), pending
)
)
(
executed_calls,
tool_results,
extra_msgs,
) = await _execute_pending_tools(
pending, request, allowed_cameras
)
stream_tool_calls.extend(executed_calls)
conversation.extend(tool_results)
conversation.extend(extra_msgs)
yield (
json.dumps(
{
@ -785,11 +1091,12 @@ Always be accurate with time calculations based on the current date provided.{ca
f"Tool calls detected (iteration {tool_iterations}/{max_iterations}): "
f"{len(pending_tool_calls)} tool(s) to execute"
)
executed_calls, tool_results, extra_msgs = await _execute_pending_tools(
pending_tool_calls, request, allowed_cameras
)
tool_calls.extend(executed_calls)
conversation.extend(tool_results)
conversation.extend(extra_msgs)
logger.debug(
f"Added {len(tool_results)} tool result(s) to conversation. "
f"Continuing with next LLM call..."
@ -819,3 +1126,95 @@ Always be accurate with time calculations based on the current date provided.{ca
},
status_code=500,
)
# ---------------------------------------------------------------------------
# VLM Monitor endpoints
# ---------------------------------------------------------------------------
@router.post(
"/vlm/monitor",
dependencies=[Depends(allow_any_authenticated())],
summary="Start a VLM watch job",
description=(
"Start monitoring a camera with the vision provider. "
"The VLM analyzes live frames until the specified condition is met, "
"then sends a notification. Only one watch job can run at a time."
),
)
async def start_vlm_monitor(
request: Request,
body: VLMMonitorRequest,
) -> JSONResponse:
config = request.app.frigate_config
genai_manager = request.app.genai_manager
if body.camera not in config.cameras:
return JSONResponse(
content={"success": False, "message": f"Camera '{body.camera}' not found."},
status_code=404,
)
vision_client = genai_manager.vision_client or genai_manager.tool_client
if vision_client is None:
return JSONResponse(
content={
"success": False,
"message": "No vision/GenAI provider configured.",
},
status_code=400,
)
try:
job_id = start_vlm_watch_job(
camera=body.camera,
condition=body.condition,
max_duration_minutes=body.max_duration_minutes,
config=config,
frame_processor=request.app.detected_frames_processor,
genai_manager=genai_manager,
dispatcher=request.app.dispatcher,
labels=body.labels,
zones=body.zones,
)
except RuntimeError as e:
logger.error("Failed to start VLM watch job: %s", e, exc_info=True)
return JSONResponse(
content={"success": False, "message": "Failed to start VLM watch job."},
status_code=409,
)
return JSONResponse(
content={"success": True, "job_id": job_id},
status_code=201,
)
@router.get(
"/vlm/monitor",
dependencies=[Depends(allow_any_authenticated())],
summary="Get current VLM watch job",
description="Returns the current (or most recently completed) VLM watch job.",
)
async def get_vlm_monitor() -> JSONResponse:
job = get_vlm_watch_job()
if job is None:
return JSONResponse(content={"active": False}, status_code=200)
return JSONResponse(content={"active": True, **job.to_dict()}, status_code=200)
@router.delete(
"/vlm/monitor",
dependencies=[Depends(allow_any_authenticated())],
summary="Cancel the current VLM watch job",
description="Cancels the running watch job if one exists.",
)
async def cancel_vlm_monitor() -> JSONResponse:
cancelled = stop_vlm_watch_job()
if not cancelled:
return JSONResponse(
content={"success": False, "message": "No active watch job to cancel."},
status_code=404,
)
return JSONResponse(content={"success": True}, status_code=200)
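The three endpoints above give the watch job a simple lifecycle: POST to start, GET to inspect, DELETE to cancel. A hedged sketch of the JSON body a client might POST (field names follow `VLMMonitorRequest` as used above; the builder function itself is illustrative, not part of the API):

```python
import json

def build_monitor_body(camera: str, condition: str, max_duration_minutes: int = 60,
                       labels=None, zones=None) -> str:
    """Serialize a /vlm/monitor request body, enforcing the required fields."""
    if not camera or not condition:
        raise ValueError("camera and condition are required")
    return json.dumps({
        "camera": camera,
        "condition": condition,
        "max_duration_minutes": max_duration_minutes,
        "labels": labels or [],
        "zones": zones or [],
    })
```

A 409 from the POST means a watch job is already running; cancel it via DELETE first.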

View File

@ -338,6 +338,82 @@ async def recognize_face(request: Request, file: UploadFile):
)
@router.post(
"/faces/{name}/reclassify",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Reclassify a face image to a different name",
description="""Moves a single face image from one person's folder to another.
The image is moved and renamed, and the face classifier is cleared to
incorporate the change. Returns a success message or an error if the
image or target name is invalid.""",
)
def reclassify_face_image(request: Request, name: str, body: dict = None):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
payload: dict[str, Any] = body or {}
image_id = sanitize_filename(payload.get("id", ""))
new_name = sanitize_filename(payload.get("new_name", ""))
if not image_id or not new_name:
return JSONResponse(
content=(
{
"success": False,
"message": "Both 'id' and 'new_name' are required.",
}
),
status_code=400,
)
if new_name == name:
return JSONResponse(
content=(
{
"success": False,
"message": "New name must differ from the current name.",
}
),
status_code=400,
)
source_folder = os.path.join(FACE_DIR, sanitize_filename(name))
source_file = os.path.join(source_folder, image_id)
if not os.path.isfile(source_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Image not found: {image_id}",
}
),
status_code=404,
)
target_filename = f"{new_name}-{datetime.datetime.now().timestamp()}.webp"
target_folder = os.path.join(FACE_DIR, new_name)
os.makedirs(target_folder, exist_ok=True)
shutil.move(source_file, os.path.join(target_folder, target_filename))
# Clean up empty source folder
if os.path.exists(source_folder) and not os.listdir(source_folder):
os.rmdir(source_folder)
context: EmbeddingsContext = request.app.embeddings
context.clear_face_classifier()
return JSONResponse(
content=({"success": True, "message": "Successfully reclassified face."}),
status_code=200,
)
@router.post(
"/faces/{name}/delete",
response_model=GenericResponse,
@ -787,6 +863,101 @@ def delete_classification_dataset_images(
)
@router.post(
"/classification/{name}/dataset/{category}/reclassify",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Reclassify a dataset image to a different category",
description="""Moves a single dataset image from one category to another.
The image is re-saved as PNG in the target category and removed from the source.""",
)
def reclassify_classification_image(
request: Request, name: str, category: str, body: dict = None
):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
payload: dict[str, Any] = body or {}
image_id = sanitize_filename(payload.get("id", ""))
new_category = sanitize_filename(payload.get("new_category", ""))
if not image_id or not new_category:
return JSONResponse(
content=(
{
"success": False,
"message": "Both 'id' and 'new_category' are required.",
}
),
status_code=400,
)
if new_category == category:
return JSONResponse(
content=(
{
"success": False,
"message": "New category must differ from the current category.",
}
),
status_code=400,
)
sanitized_name = sanitize_filename(name)
source_folder = os.path.join(
CLIPS_DIR, sanitized_name, "dataset", sanitize_filename(category)
)
source_file = os.path.join(source_folder, image_id)
if not os.path.isfile(source_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Image not found: {image_id}",
}
),
status_code=404,
)
random_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
timestamp = datetime.datetime.now().timestamp()
new_name = f"{new_category}-{timestamp}-{random_id}.png"
target_folder = os.path.join(CLIPS_DIR, sanitized_name, "dataset", new_category)
os.makedirs(target_folder, exist_ok=True)
img = cv2.imread(source_file)
if img is None:
return JSONResponse(content={"success": False, "message": "Unable to read source image."}, status_code=500)
cv2.imwrite(os.path.join(target_folder, new_name), img)
os.unlink(source_file)
# Clean up empty source folder (unless it is "none")
if (
os.path.exists(source_folder)
and not os.listdir(source_folder)
and category.lower() != "none"
):
os.rmdir(source_folder)
# Mark dataset as changed so UI knows retraining is needed
write_training_metadata(sanitized_name, 0)
return JSONResponse(
content=({"success": True, "message": "Successfully reclassified image."}),
status_code=200,
)
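Collision-free dataset filenames above combine the category, a Unix timestamp, and a 6-character random suffix. A standalone sketch of that naming scheme (the function name is illustrative):

```python
import random
import string
import time

def unique_dataset_filename(category: str, ext: str = "png") -> str:
    """Build category-<unix timestamp>-<random suffix>.<ext> for a reclassified image."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
    return f"{category}-{time.time()}-{suffix}.{ext}"
```

The timestamp keeps names roughly sortable by creation time; the random suffix guards against two reclassifications landing in the same timestamp tick.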
@router.put(
"/classification/{name}/dataset/{old_category}/rename",
response_model=GenericResponse,

View File

@ -35,7 +35,7 @@ class MediaEventsSnapshotQueryParams(BaseModel):
bbox: Optional[int] = None
crop: Optional[int] = None
height: Optional[int] = None
quality: Optional[int] = None
class MediaMjpegFeedQueryParams(BaseModel):

View File

@ -30,6 +30,10 @@ class AppPutRoleBody(BaseModel):
role: str
class CameraSetBody(BaseModel):
value: str = Field(..., description="The value to set for the feature")
class MediaSyncBody(BaseModel):
dry_run: bool = Field(
default=True, description="If True, only report orphans without deleting them"
@ -41,3 +45,7 @@ class MediaSyncBody(BaseModel):
force: bool = Field(
default=False, description="If True, bypass safety threshold checks"
)
verbose: bool = Field(
default=False,
description="If True, write full orphan file list to disk",
)

View File

@ -32,13 +32,6 @@ class ChatCompletionRequest(BaseModel):
le=10,
description="Maximum number of tool call iterations (default: 5)",
)
stream: bool = Field(
default=False,
description="If true, stream the final assistant response in the body as newline-delimited JSON.",

View File

@ -13,7 +13,6 @@ from pathlib import Path
from typing import List
from urllib.parse import unquote
import cv2
import numpy as np
from fastapi import APIRouter, Request
from fastapi.params import Depends
@ -62,7 +61,7 @@ from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
from frigate.track.object_processing import TrackedObject
from frigate.util.file import get_event_thumbnail_bytes, load_event_snapshot_image
from frigate.util.time import get_dst_transitions, get_tz_modifiers
logger = logging.getLogger(__name__)
@ -1082,30 +1081,8 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
content=({"success": False, "message": message}), status_code=400
)
# load clean.webp or clean.png (legacy)
try:
image, is_clean_snapshot = load_event_snapshot_image(event, clean_only=True)
except Exception:
logger.error(f"Unable to load clean snapshot for event: {event.id}")
return JSONResponse(
@ -1115,11 +1092,14 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
status_code=400,
)
if not is_clean_snapshot or image is None or image.size == 0:
logger.error(f"Unable to find clean snapshot for event: {event.id}")
return JSONResponse(
content=(
{
"success": False,
"message": "Unable to find clean snapshot for event",
}
}
),
status_code=400,
)

View File

@ -46,6 +46,7 @@ from frigate.record.export import (
DEFAULT_TIME_LAPSE_FFMPEG_ARGS,
PlaybackSourceEnum,
RecordingExporter,
validate_ffmpeg_args,
)
from frigate.util.time import is_current_hour
@ -547,6 +548,24 @@ def export_recording_custom(
export_id = f"{camera_name}_{''.join(random.choices(string.ascii_lowercase + string.digits, k=6))}"
# Validate user-provided ffmpeg args to prevent injection
for args_label, args_value in [
("input", ffmpeg_input_args),
("output", ffmpeg_output_args),
]:
if args_value is not None:
valid, message = validate_ffmpeg_args(args_value)
if not valid:
return JSONResponse(
content=(
{
"success": False,
"message": f"Invalid ffmpeg {args_label} arguments: {message}",
}
),
status_code=400,
)
# Set default values if not provided (timelapse defaults)
if ffmpeg_input_args is None:
ffmpeg_input_args = ""

View File

@ -29,11 +29,13 @@ from frigate.api import (
review,
)
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
from frigate.comms.dispatcher import Dispatcher
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
)
from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.config.profile_manager import ProfileManager
from frigate.debug_replay import DebugReplayManager
from frigate.embeddings import EmbeddingsContext
from frigate.genai import GenAIClientManager
@ -69,6 +71,8 @@ def create_fastapi_app(
event_metadata_updater: EventMetadataPublisher,
config_publisher: CameraConfigUpdatePublisher,
replay_manager: DebugReplayManager,
dispatcher: Optional[Dispatcher] = None,
profile_manager: Optional[ProfileManager] = None,
enforce_default_admin: bool = True,
):
logger.info("Starting FastAPI app")
@ -151,6 +155,8 @@ def create_fastapi_app(
app.event_metadata_updater = event_metadata_updater
app.config_publisher = config_publisher
app.replay_manager = replay_manager
app.dispatcher = dispatcher
app.profile_manager = profile_manager
if frigate_config.auth.enabled:
secret = get_jwt_secret()

View File

@ -35,9 +35,9 @@ from frigate.api.defs.query.media_query_parameters import (
from frigate.api.defs.tags import Tags
from frigate.camera.state import CameraState
from frigate.config import FrigateConfig
from frigate.config.camera.snapshots import SnapshotsConfig
from frigate.const import (
CACHE_DIR,
CLIPS_DIR,
INSTALL_DIR,
MAX_SEGMENT_DURATION,
PREVIEW_FRAME_TYPE,
@ -45,11 +45,18 @@ from frigate.const import (
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.output.preview import get_most_recent_preview_frame
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.file import (
get_event_snapshot_bytes,
get_event_snapshot_path,
get_event_thumbnail_bytes,
load_event_snapshot_image,
)
from frigate.util.image import get_image_from_recording, get_image_quality_params
from frigate.util.media import get_keyframe_before
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.media])
@ -110,6 +117,24 @@ def imagestream(
)
def _resolve_snapshot_settings(
snapshot_config: SnapshotsConfig, params: MediaEventsSnapshotQueryParams
) -> dict[str, Any]:
return {
"timestamp": snapshot_config.timestamp
if params.timestamp is None
else bool(params.timestamp),
"bounding_box": snapshot_config.bounding_box
if params.bbox is None
else bool(params.bbox),
"crop": snapshot_config.crop if params.crop is None else bool(params.crop),
"height": snapshot_config.height if params.height is None else params.height,
"quality": snapshot_config.quality
if params.quality is None
else params.quality,
}
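`_resolve_snapshot_settings` applies a "query param overrides config, None means fall back" rule per field. A standalone sketch of the same precedence (the dataclass is an illustrative stand-in for `SnapshotsConfig`, not the real model):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class SnapshotDefaults:
    """Illustrative stand-in for a camera's configured snapshot settings."""
    timestamp: bool = False
    bounding_box: bool = True
    crop: bool = False
    height: Optional[int] = None
    quality: int = 70

def resolve_snapshot_settings(defaults: SnapshotDefaults, **params: Any) -> Dict[str, Any]:
    """A query param of None falls back to the configured default."""
    def pick(default, key, cast=None):
        value = params.get(key)
        if value is None:
            return default
        return cast(value) if cast else value
    return {
        "timestamp": pick(defaults.timestamp, "timestamp", bool),
        "bounding_box": pick(defaults.bounding_box, "bbox", bool),
        "crop": pick(defaults.crop, "crop", bool),
        "height": pick(defaults.height, "height"),
        "quality": pick(defaults.quality, "quality"),
    }
```

Int flags like `bbox=0` coerce cleanly to `False`, matching the `bool(params.bbox)` casts above.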
@router.get("/{camera_name}/ptz/info", dependencies=[Depends(require_camera_access)])
async def camera_ptz_info(request: Request, camera_name: str):
if camera_name in request.app.frigate_config.cameras:
@ -147,14 +172,7 @@ async def latest_frame(
"paths": params.paths,
"regions": params.regions,
}
quality_params = get_image_quality_params(extension.value, params.quality)
if camera_name in request.app.frigate_config.cameras:
frame = frame_processor.get_current_frame(camera_name, draw_options)
@ -592,6 +610,33 @@ async def vod_ts(
if recording.end_time > end_ts:
duration -= int((recording.end_time - end_ts) * 1000)
# nginx-vod-module pushes clipFrom forward to the next keyframe,
# which can leave too few frames and produce an empty/unplayable
# segment. Snap clipFrom back to the preceding keyframe so the
# segment always starts with a decodable frame.
if "clipFrom" in clip:
keyframe_ms = get_keyframe_before(recording.path, clip["clipFrom"])
if keyframe_ms is not None:
gained = clip["clipFrom"] - keyframe_ms
clip["clipFrom"] = keyframe_ms
duration += gained
logger.debug(
"VOD: snapped clipFrom to keyframe at %sms for %s, duration now %sms",
keyframe_ms,
recording.path,
duration,
)
else:
# could not read keyframes, remove clipFrom to use full recording
logger.debug(
"VOD: no keyframe info for %s, removing clipFrom to use full recording",
recording.path,
)
del clip["clipFrom"]
duration = int(recording.duration * 1000)
if recording.end_time > end_ts:
duration -= int((recording.end_time - end_ts) * 1000)
if duration < min_duration_ms:
# skip if the clip has no valid duration (too short to contain frames)
logger.debug(
@ -729,7 +774,7 @@ async def vod_clip(
@router.get(
"/events/{event_id}/snapshot.jpg",
description="Returns a snapshot image for the specified object id.",
)
async def event_snapshot(
request: Request,
@ -748,11 +793,22 @@ async def event_snapshot(
content={"success": False, "message": "Snapshot not available"},
status_code=404,
)
snapshot_settings = _resolve_snapshot_settings(
request.app.frigate_config.cameras[event.camera].snapshots, params
)
jpg_bytes, frame_time = get_event_snapshot_bytes(
event,
ext="jpg",
timestamp=snapshot_settings["timestamp"],
bounding_box=snapshot_settings["bounding_box"],
crop=snapshot_settings["crop"],
height=snapshot_settings["height"],
quality=snapshot_settings["quality"],
timestamp_style=request.app.frigate_config.cameras[
event.camera
].timestamp_style,
colormap=request.app.frigate_config.model.colormap,
)
except DoesNotExist:
# see if the object is currently being tracked
try:
@ -763,13 +819,16 @@ async def event_snapshot(
if event_id in camera_state.tracked_objects:
tracked_obj = camera_state.tracked_objects.get(event_id)
if tracked_obj is not None:
snapshot_settings = _resolve_snapshot_settings(
camera_state.camera_config.snapshots, params
)
jpg_bytes, frame_time = tracked_obj.get_img_bytes(
ext="jpg",
timestamp=snapshot_settings["timestamp"],
bounding_box=snapshot_settings["bounding_box"],
crop=snapshot_settings["crop"],
height=snapshot_settings["height"],
quality=snapshot_settings["quality"],
)
await require_camera_access(camera_state.name, request=request)
except Exception:
@ -807,7 +866,6 @@ async def event_snapshot(
@router.get(
"/events/{event_id}/thumbnail.{extension}",
)
async def event_thumbnail(
request: Request,
@ -851,11 +909,12 @@ async def event_thumbnail(
status_code=404,
)
# android notifications prefer a 2:1 ratio
if format == "android":
img_as_np = np.frombuffer(thumbnail_bytes, dtype=np.uint8)
img = cv2.imdecode(img_as_np, flags=1)
img = cv2.copyMakeBorder(
img,
0,
0,
@ -865,14 +924,14 @@ async def event_thumbnail(
(0, 0, 0),
)
quality_params = None
if extension in (Extension.jpg, Extension.jpeg):
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
elif extension == Extension.webp:
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]

_, encoded = cv2.imencode(f".{extension.value}", img, quality_params)
thumbnail_bytes = encoded.tobytes()
return Response(
thumbnail_bytes,
@ -1025,18 +1084,20 @@ def clear_region_grid(request: Request, camera_name: str):
@router.get(
"/events/{event_id}/snapshot-clean.webp",
dependencies=[Depends(require_camera_access)],
)
async def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
webp_bytes = None
event_complete = False
try:
event = Event.get(Event.id == event_id)
event_complete = event.end_time is not None
await require_camera_access(event.camera, request=request)
snapshot_config = request.app.frigate_config.cameras[event.camera].snapshots
if not (snapshot_config.enabled and event.has_snapshot):
return JSONResponse(
content={
"success": False,
"message": "Snapshots must be enabled in the config",
},
status_code=404,
)
@ -1068,54 +1129,10 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
)
if webp_bytes is None:
try:
image_path, is_clean_snapshot = get_event_snapshot_path(
event, clean_only=True
)
if not is_clean_snapshot or image_path is None:
return JSONResponse(
content={
"success": False,
@ -1123,6 +1140,34 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
},
status_code=404,
)
if image_path.endswith(".webp"):
with open(image_path, "rb") as image_file:
webp_bytes = image_file.read()
else:
image = load_event_snapshot_image(event, clean_only=True)[0]
if image is None:
return JSONResponse(
content={
"success": False,
"message": "Unable to load clean snapshot for event",
},
status_code=400,
)
ret, webp_data = cv2.imencode(
".webp", image, get_image_quality_params("webp", None)
)
if not ret:
return JSONResponse(
content={
"success": False,
"message": "Unable to convert snapshot to webp",
},
status_code=400,
)
webp_bytes = webp_data.tobytes()
except Exception:
logger.error(f"Unable to load clean snapshot for event: {event.id}")
return JSONResponse(
@ -1135,7 +1180,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
headers = {
"Content-Type": "image/webp",
"Cache-Control": "private, max-age=31536000" if event_complete else "no-cache",
}
if download:
@ -1151,7 +1196,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get(
"/events/{event_id}/clip.mp4",
)
async def event_clip(
request: Request,
@ -1165,6 +1210,8 @@ async def event_clip(
content={"success": False, "message": "Event not found"}, status_code=404
)
await require_camera_access(event.camera, request=request)
if not event.has_clip:
return JSONResponse(
content={"success": False, "message": "Clip not available"}, status_code=404
@ -1181,9 +1228,36 @@ async def event_clip(
@router.get(
"/review/{review_id}/clip.mp4",
)
async def review_clip(
request: Request,
review_id: str,
padding: int = Query(0, description="Padding to apply to clip."),
):
try:
review: ReviewSegment = ReviewSegment.get(ReviewSegment.id == review_id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Review not found"}, status_code=404
)
await require_camera_access(review.camera, request=request)
end_ts = (
datetime.now().timestamp()
if review.end_time is None
else review.end_time + padding
)
return await recording_clip(
request, review.camera, review.start_time - padding, end_ts
)
@router.get(
"/events/{event_id}/preview.gif",
)
async def event_preview(request: Request, event_id: str):
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
@ -1191,6 +1265,8 @@ def event_preview(request: Request, event_id: str):
content={"success": False, "message": "Event not found"}, status_code=404
)
await require_camera_access(event.camera, request=request)
start_ts = event.start_time
end_ts = start_ts + (
min(event.end_time - event.start_time, 20) if event.end_time else 20
@ -1213,25 +1289,25 @@ def preview_gif(
):
if datetime.fromtimestamp(start_ts) < datetime.now().replace(minute=0, second=0):
# has preview mp4
try:
preview: Previews = (
Previews.select(
Previews.camera,
Previews.path,
Previews.duration,
Previews.start_time,
Previews.end_time,
)
.where(
Previews.start_time.between(start_ts, end_ts)
| Previews.end_time.between(start_ts, end_ts)
| ((start_ts > Previews.start_time) & (end_ts < Previews.end_time))
)
.where(Previews.camera == camera_name)
.limit(1)
.get()
)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Preview not found"},
status_code=404,
@ -1288,9 +1364,9 @@ def preview_gif(
status_code=404,
)
file_start = f"preview_{camera_name}-"
start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"
selected_previews = []
for file in sorted(os.listdir(preview_dir)):
@ -1470,9 +1546,9 @@ def preview_mp4(
status_code=404,
)
file_start = f"preview_{camera_name}-"
start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"
selected_previews = []
for file in sorted(os.listdir(preview_dir)):
@ -1549,8 +1625,8 @@ def preview_mp4(
)
@router.get("/review/{event_id}/preview")
async def review_preview(
request: Request,
event_id: str,
format: str = Query(default="gif", enum=["gif", "mp4"]),
@ -1563,6 +1639,8 @@ def review_preview(
status_code=404,
)
await require_camera_access(review.camera, request=request)
padding = 8
start_ts = review.start_time - padding
end_ts = (
@ -1576,12 +1654,14 @@ def review_preview(
@router.get(
"/preview/{file_name}/thumbnail.jpg",
dependencies=[Depends(allow_any_authenticated())],
)
@router.get(
"/preview/{file_name}/thumbnail.webp",
dependencies=[Depends(allow_any_authenticated())],
)
async def preview_thumbnail(request: Request, file_name: str):
"""Get a thumbnail from the cached preview frames."""
if len(file_name) > 1000:
return JSONResponse(
@ -1591,6 +1671,17 @@ def preview_thumbnail(file_name: str):
status_code=403,
)
# Extract camera name from preview filename (format: preview_{camera}-{timestamp}.ext)
if not file_name.startswith("preview_"):
return JSONResponse(
content={"success": False, "message": "Invalid preview filename"},
status_code=400,
)
# Use rsplit to handle camera names containing dashes (e.g. front-door)
name_part = file_name[len("preview_") :].rsplit(".", 1)[0] # strip extension
camera_name = name_part.rsplit("-", 1)[0] # split off timestamp
await require_camera_access(camera_name, request=request)
safe_file_name_current = sanitize_filename(file_name)
preview_dir = os.path.join(CACHE_DIR, "preview_frames")
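The camera-name extraction above can be sketched as a standalone helper (the helper itself is hypothetical; the parsing steps mirror the diff, which uses `rsplit` so camera names containing dashes survive):

```python
def camera_from_preview_filename(file_name: str):
    # Preview filenames follow: preview_{camera}-{timestamp}.{ext}
    if not file_name.startswith("preview_"):
        return None
    # Strip the prefix, then the extension (last dot only, since the
    # timestamp itself may contain a dot).
    name_part = file_name[len("preview_"):].rsplit(".", 1)[0]
    # Split on the LAST dash so "front-door" stays intact.
    return name_part.rsplit("-", 1)[0]

print(camera_from_preview_filename("preview_front-door-1712345678.123456.webp"))  # → front-door
```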

View File

@ -145,9 +145,9 @@ def preview_hour(
def get_preview_frames_from_cache(camera_name: str, start_ts: float, end_ts: float):
"""Get list of cached preview frames"""
preview_dir = os.path.join(CACHE_DIR, "preview_frames")
file_start = f"preview_{camera_name}-"
start_file = f"{file_start}{start_ts}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}{end_ts}.{PREVIEW_FRAME_TYPE}"
selected_previews = []
for file in sorted(os.listdir(preview_dir)):
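The trailing dash added to `file_start` matters for the prefix matching this loop performs: without it, one camera's prefix can also match a camera whose name merely extends it. A minimal illustration (filenames are invented):

```python
files = [
    "preview_front-1000.0.webp",        # camera "front"
    "preview_front_yard-1000.0.webp",   # camera "front_yard"
]
# Without the trailing dash, "front" also matches "front_yard" files.
loose = [f for f in files if f.startswith("preview_front")]
strict = [f for f in files if f.startswith("preview_front-")]
print(len(loose), len(strict))  # → 2 1
```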

View File

@ -8,7 +8,7 @@ from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
from typing import Callable, Optional
import psutil
import uvicorn
@ -30,6 +30,7 @@ from frigate.comms.ws import WebSocketClient
from frigate.comms.zmq_proxy import ZmqProxy
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.config.config import FrigateConfig
from frigate.config.profile_manager import ProfileManager
from frigate.const import (
CACHE_DIR,
CLIPS_DIR,
@ -80,6 +81,7 @@ from frigate.timeline import TimelineProcessor
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import UntrackedSharedMemory
from frigate.util.process import FrigateProcess
from frigate.util.services import set_file_limit
from frigate.version import VERSION
from frigate.watchdog import FrigateWatchdog
@ -118,6 +120,7 @@ class FrigateApp:
self.ptz_metrics: dict[str, PTZMetrics] = {}
self.processes: dict[str, int] = {}
self.embeddings: Optional[EmbeddingsContext] = None
self.profile_manager: Optional[ProfileManager] = None
self.config = config
def ensure_dirs(self) -> None:
@ -349,6 +352,19 @@ class FrigateApp:
comms,
)
def init_profile_manager(self) -> None:
self.profile_manager = ProfileManager(
self.config, self.inter_config_updater, self.dispatcher
)
self.dispatcher.profile_manager = self.profile_manager
persisted = ProfileManager.load_persisted_profile()
if persisted and any(
persisted in cam.profiles for cam in self.config.cameras.values()
):
logger.info("Restoring persisted profile '%s'", persisted)
self.profile_manager.activate_profile(persisted)
def start_detectors(self) -> None:
for name in self.config.cameras.keys():
try:
@ -482,6 +498,47 @@ class FrigateApp:
def start_watchdog(self) -> None:
self.frigate_watchdog = FrigateWatchdog(self.detectors, self.stop_event)
# (attribute on self, key in self.processes, factory)
specs: list[tuple[str, str, Callable[[], FrigateProcess]]] = [
(
"embedding_process",
"embeddings",
lambda: EmbeddingProcess(
self.config, self.embeddings_metrics, self.stop_event
),
),
(
"recording_process",
"recording",
lambda: RecordProcess(self.config, self.stop_event),
),
(
"review_segment_process",
"review_segment",
lambda: ReviewProcess(self.config, self.stop_event),
),
(
"output_processor",
"output",
lambda: OutputProcess(self.config, self.stop_event),
),
]
for attr, key, factory in specs:
if not hasattr(self, attr):
continue
def on_restart(
proc: FrigateProcess, _attr: str = attr, _key: str = key
) -> None:
setattr(self, _attr, proc)
self.processes[_key] = proc.pid or 0
self.frigate_watchdog.register(
key, getattr(self, attr), factory, on_restart
)
self.frigate_watchdog.start()
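The `_attr: str = attr, _key: str = key` defaults in `on_restart` are the standard idiom for binding loop variables at definition time; a plain closure would see only the final loop values. A minimal sketch of the difference:

```python
# Closures capture variables by reference: every "late" lambda sees the
# final value of `name`, while default arguments snapshot it per iteration.
late = [lambda: name for name in ["a", "b", "c"]]
bound = [lambda _name=name: _name for name in ["a", "b", "c"]]

print([f() for f in late])   # → ['c', 'c', 'c']
print([f() for f in bound])  # → ['a', 'b', 'c']
```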
def init_auth(self) -> None:
@ -557,6 +614,7 @@ class FrigateApp:
self.init_inter_process_communicator()
self.start_detectors()
self.init_dispatcher()
self.init_profile_manager()
self.init_embeddings_client()
self.start_video_output_processor()
self.start_ptz_autotracker()
@ -586,6 +644,8 @@ class FrigateApp:
self.event_metadata_updater,
self.inter_config_updater,
self.replay_manager,
self.dispatcher,
self.profile_manager,
),
host="127.0.0.1",
port=5001,

View File

@ -532,48 +532,19 @@ class CameraState:
) -> None:
img_frame = frame if frame is not None else self.get_current_frame()
ret, webp = cv2.imencode(
".webp", img_frame, [int(cv2.IMWRITE_WEBP_QUALITY), 80]
)
if ret:
with open(
os.path.join(
CLIPS_DIR,
f"{self.camera_config.name}-{event_id}-clean.webp",
),
"wb",
) as p:
p.write(webp.tobytes())
# create thumbnail with max height of 175 and save
width = int(175 * img_frame.shape[1] / img_frame.shape[0])

View File

@ -16,6 +16,7 @@ from frigate.config.camera.updater import (
CameraConfigUpdateTopic,
)
from frigate.config.config import RuntimeFilterConfig, RuntimeMotionConfig
from frigate.config.profile_manager import ProfileManager
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
EXPIRE_AUDIO_ACTIVITY,
@ -91,7 +92,9 @@ class Dispatcher:
}
self._global_settings_handlers: dict[str, Callable] = {
"notifications": self._on_global_notification_command,
"profile": self._on_profile_command,
}
self.profile_manager: Optional[ProfileManager] = None
for comm in self.comms:
comm.subscribe(self._receive)
@ -298,6 +301,11 @@ class Dispatcher:
)
self.publish("birdseye_layout", json.dumps(self.birdseye_layout.copy()))
self.publish("audio_detections", json.dumps(audio_detections))
self.publish(
"profile/state",
self.config.active_profile or "none",
retain=True,
)
def handle_notification_test() -> None:
self.publish("notification_test", "Test notification")
@ -556,6 +564,22 @@ class Dispatcher:
)
self.publish("notifications/state", payload, retain=True)
def _on_profile_command(self, payload: str) -> None:
"""Callback for profile/set topic."""
if self.profile_manager is None:
logger.error("Profile manager not initialized")
return
profile_name = (
payload.strip() if payload.strip() not in ("", "none", "None") else None
)
err = self.profile_manager.activate_profile(profile_name)
if err:
logger.error("Failed to activate profile: %s", err)
return
self.publish("profile/state", payload.strip() or "none", retain=True)
def _on_audio_command(self, camera_name: str, payload: str) -> None:
"""Callback for audio topic."""
audio_settings = self.config.cameras[camera_name].audio

View File

@ -38,6 +38,7 @@ class MqttClient(Communicator):
)
def stop(self) -> None:
self.publish("available", "stopped", retain=True)
self.client.disconnect()
def _set_initial_topics(self) -> None:
@ -163,6 +164,11 @@ class MqttClient(Communicator):
retain=True,
)
self.publish(
"profile/state",
self.config.active_profile or "none",
retain=True,
)
self.publish("available", "online", retain=True)
def on_mqtt_command(
@ -289,6 +295,11 @@ class MqttClient(Communicator):
self.on_mqtt_command,
)
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/profile/set",
self.on_mqtt_command,
)
self.client.message_callback_add(
f"{self.mqtt_config.topic_prefix}/onConnect", self.on_mqtt_command
)

View File

@ -17,6 +17,7 @@ from titlecase import titlecase
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigSubscriber
from frigate.config import FrigateConfig
from frigate.config.auth import AuthConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
@ -58,6 +59,7 @@ class WebPushClient(Communicator):
for c in self.config.cameras.values()
}
self.last_notification_time: float = 0
self.user_cameras: dict[str, set[str]] = {}
self.notification_queue: queue.Queue[PushNotification] = queue.Queue()
self.notification_thread = threading.Thread(
target=self._process_notifications, daemon=True
@ -78,13 +80,12 @@ class WebPushClient(Communicator):
for sub in user["notification_tokens"]:
self.web_pushers[user["username"]].append(WebPusher(sub))
# notification and auth config updater
self.global_config_subscriber = ConfigSubscriber("config/")
self.config_subscriber = CameraConfigUpdateSubscriber(
self.config, self.config.cameras, [CameraConfigUpdateEnum.notifications]
)
self._refresh_user_cameras()
def subscribe(self, receiver: Callable) -> None:
"""Wrapper for allowing dispatcher to subscribe."""
@ -164,13 +165,19 @@ class WebPushClient(Communicator):
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Wrapper for publishing when client is in valid state."""
# check for updated global config (notifications, auth)
while True:
config_topic, config_payload = (
self.global_config_subscriber.check_for_update()
)
if config_topic is None:
break
if config_topic == "config/notifications" and config_payload:
self.config.notifications = config_payload
elif config_topic == "config/auth":
if isinstance(config_payload, AuthConfig):
self.config.auth = config_payload
self._refresh_user_cameras()
updates = self.config_subscriber.check_for_updates()
@ -210,6 +217,15 @@ class WebPushClient(Communicator):
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_trigger(decoded)
elif topic == "camera_monitoring":
decoded = json.loads(payload)
camera = decoded["camera"]
if not self.config.cameras[camera].notifications.enabled:
return
if self.is_camera_suspended(camera):
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_camera_monitoring(decoded)
elif topic == "notification_test":
if not self.config.notifications.enabled and not any(
cam.notifications.enabled for cam in self.config.cameras.values()
@ -291,6 +307,31 @@ class WebPushClient(Communicator):
except Exception as e:
logger.error(f"Error processing notification: {str(e)}")
def _refresh_user_cameras(self) -> None:
"""Rebuild the user-to-cameras access cache from the database."""
all_camera_names = set(self.config.cameras.keys())
roles_dict = self.config.auth.roles
updated: dict[str, set[str]] = {}
for user in User.select(User.username, User.role).dicts().iterator():
allowed = User.get_allowed_cameras(
user["role"], roles_dict, all_camera_names
)
updated[user["username"]] = set(allowed)
logger.debug(
"User %s has access to cameras: %s",
user["username"],
", ".join(allowed),
)
self.user_cameras = updated
def _user_has_camera_access(self, username: str, camera: str) -> bool:
"""Check if a user has access to a specific camera based on cached roles."""
allowed = self.user_cameras.get(username)
if allowed is None:
logger.debug(f"No camera access information found for user {username}")
return False
return camera in allowed
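The lookup above fails closed: a user missing from the cache is denied rather than granted access. A self-contained sketch of that semantics (names and data are illustrative, not Frigate's API):

```python
# Cached user -> allowed-camera sets, as _refresh_user_cameras would build.
user_cameras = {"alice": {"front", "back"}}

def has_access(username: str, camera: str) -> bool:
    allowed = user_cameras.get(username)
    if allowed is None:
        return False  # no cached info for this user: deny, don't assume
    return camera in allowed

print(has_access("alice", "front"), has_access("bob", "front"))  # → True False
```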
def _within_cooldown(self, camera: str) -> bool:
now = datetime.datetime.now().timestamp()
if now - self.last_notification_time < self.config.notifications.cooldown:
@ -418,6 +459,14 @@ class WebPushClient(Communicator):
logger.debug(f"Sending push notification for {camera}, review ID {reviewId}")
for user in self.web_pushers:
if not self._user_has_camera_access(user, camera):
logger.debug(
"Skipping notification for user %s - no access to camera %s",
user,
camera,
)
continue
self.send_push_notification(
user=user,
payload=payload,
@ -465,6 +514,14 @@ class WebPushClient(Communicator):
)
for user in self.web_pushers:
if not self._user_has_camera_access(user, camera):
logger.debug(
"Skipping notification for user %s - no access to camera %s",
user,
camera,
)
continue
self.send_push_notification(
user=user,
payload=payload,
@ -477,6 +534,30 @@ class WebPushClient(Communicator):
self.cleanup_registrations()
def send_camera_monitoring(self, payload: dict[str, Any]) -> None:
camera: str = payload["camera"]
camera_name: str = getattr(
self.config.cameras[camera], "friendly_name", None
) or titlecase(camera.replace("_", " "))
self.check_registrations()
reasoning: str = payload.get("reasoning", "")
title = f"{camera_name}: Monitoring Alert"
message = (reasoning[:197] + "...") if len(reasoning) > 200 else reasoning
logger.debug(f"Sending camera monitoring push notification for {camera_name}")
for user in self.web_pushers:
self.send_push_notification(
user=user,
payload=payload,
title=title,
message=message,
)
self.cleanup_registrations()
def stop(self) -> None:
logger.info("Closing notification queue")
self.notification_thread.join()

View File

@ -34,6 +34,7 @@ from .mqtt import CameraMqttConfig
from .notification import NotificationConfig
from .objects import ObjectConfig
from .onvif import OnvifConfig
from .profile import CameraProfileConfig
from .record import RecordConfig
from .review import ReviewConfig
from .snapshots import SnapshotsConfig
@ -140,7 +141,7 @@ class CameraConfig(FrigateBaseModel):
snapshots: SnapshotsConfig = Field(
default_factory=SnapshotsConfig,
title="Snapshots",
description="Settings for API-generated snapshots of tracked objects for this camera.",
)
timestamp_style: TimestampStyleConfig = Field(
default_factory=TimestampStyleConfig,
@ -184,6 +185,12 @@ class CameraConfig(FrigateBaseModel):
title="Camera URL",
description="URL to visit the camera directly from system page",
)
profiles: dict[str, CameraProfileConfig] = Field(
default_factory=dict,
title="Profiles",
description="Named config profiles with partial overrides that can be activated at runtime.",
)
zones: dict[str, ZoneConfig] = Field(
default_factory=dict,
title="Zones",

View File

@ -49,8 +49,8 @@ class StationaryConfig(FrigateBaseModel):
class DetectConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable object detection",
description="Enable or disable object detection for all cameras; can be overridden per-camera.",
)
height: Optional[int] = Field(
default=None,

View File

@ -92,7 +92,7 @@ class PtzAutotrackConfig(FrigateBaseModel):
class OnvifConfig(FrigateBaseModel):
host: EnvString = Field(
default="",
title="ONVIF host",
description="Host (and optional scheme) for the ONVIF service for this camera.",

View File

@ -0,0 +1,44 @@
"""Camera profile configuration for named config overrides."""
from typing import Optional
from ..base import FrigateBaseModel
from ..classification import (
CameraFaceRecognitionConfig,
CameraLicensePlateRecognitionConfig,
)
from .audio import AudioConfig
from .birdseye import BirdseyeCameraConfig
from .detect import DetectConfig
from .motion import MotionConfig
from .notification import NotificationConfig
from .objects import ObjectConfig
from .record import RecordConfig
from .review import ReviewConfig
from .snapshots import SnapshotsConfig
from .zone import ZoneConfig
__all__ = ["CameraProfileConfig"]
class CameraProfileConfig(FrigateBaseModel):
"""A named profile containing partial camera config overrides.
Sections set to None inherit from the camera's base config.
Sections that are defined get Pydantic-validated, then only
explicitly-set fields are used as overrides via exclude_unset.
"""
enabled: Optional[bool] = None
audio: Optional[AudioConfig] = None
birdseye: Optional[BirdseyeCameraConfig] = None
detect: Optional[DetectConfig] = None
face_recognition: Optional[CameraFaceRecognitionConfig] = None
lpr: Optional[CameraLicensePlateRecognitionConfig] = None
motion: Optional[MotionConfig] = None
notifications: Optional[NotificationConfig] = None
objects: Optional[ObjectConfig] = None
record: Optional[RecordConfig] = None
review: Optional[ReviewConfig] = None
snapshots: Optional[SnapshotsConfig] = None
zones: Optional[dict[str, ZoneConfig]] = None
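The docstring's override semantics can be sketched with plain dicts (the real merge lives in `ProfileManager` and may differ; this only illustrates "None inherits, explicitly-set fields win"):

```python
def merge_profile(base: dict, overrides):
    if overrides is None:
        return dict(base)  # section not defined in the profile: inherit base
    merged = dict(base)
    merged.update(overrides)  # only keys the profile actually sets override
    return merged

base_detect = {"enabled": True, "fps": 10}
print(merge_profile(base_detect, None))                # base inherited unchanged
print(merge_profile(base_detect, {"enabled": False}))  # only 'enabled' overridden
```

With Pydantic models, the same effect comes from `model_dump(exclude_unset=True)` on the profile section, which is what the docstring's `exclude_unset` remark refers to.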

View File

@ -29,28 +29,23 @@ class RetainConfig(FrigateBaseModel):
class SnapshotsConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable snapshots",
description="Enable or disable saving snapshots for all cameras; can be overridden per-camera.",
)
timestamp: bool = Field(
default=False,
title="Timestamp overlay",
description="Overlay a timestamp on snapshots from API.",
)
bounding_box: bool = Field(
default=True,
title="Bounding box overlay",
description="Draw bounding boxes for tracked objects on snapshots from API.",
)
crop: bool = Field(
default=False,
title="Crop snapshot",
description="Crop snapshots from API to the detected object's bounding box.",
)
required_zones: list[str] = Field(
default_factory=list,
@ -60,17 +55,17 @@ class SnapshotsConfig(FrigateBaseModel):
height: Optional[int] = Field(
default=None,
title="Snapshot height",
description="Height (pixels) to resize snapshots from API to; leave empty to preserve original size.",
)
retain: RetainConfig = Field(
default_factory=RetainConfig,
title="Snapshot retention",
description="Retention settings for snapshots including default days and per-object overrides.",
)
quality: int = Field(
default=60,
title="Snapshot quality",
description="Encode quality for saved snapshots (0-100).",
ge=0,
le=100,
)

View File

@ -18,6 +18,7 @@ class CameraConfigUpdateEnum(str, Enum):
detect = "detect"
enabled = "enabled"
ffmpeg = "ffmpeg"
live = "live"
motion = "motion" # includes motion and motion masks
notifications = "notifications"
objects = "objects"
@ -27,6 +28,8 @@ class CameraConfigUpdateEnum(str, Enum):
review = "review"
review_genai = "review_genai"
semantic_search = "semantic_search" # for semantic search triggers
face_recognition = "face_recognition"
lpr = "lpr"
snapshots = "snapshots"
zones = "zones"
@ -105,6 +108,8 @@ class CameraConfigUpdateSubscriber:
config.enabled = updated_config
elif update_type == CameraConfigUpdateEnum.object_genai:
config.objects.genai = updated_config
elif update_type == CameraConfigUpdateEnum.live:
config.live = updated_config
elif update_type == CameraConfigUpdateEnum.motion:
config.motion = updated_config
elif update_type == CameraConfigUpdateEnum.notifications:
@ -119,6 +124,10 @@ class CameraConfigUpdateSubscriber:
config.review.genai = updated_config
elif update_type == CameraConfigUpdateEnum.semantic_search:
config.semantic_search = updated_config
elif update_type == CameraConfigUpdateEnum.face_recognition:
config.face_recognition = updated_config
elif update_type == CameraConfigUpdateEnum.lpr:
config.lpr = updated_config
elif update_type == CameraConfigUpdateEnum.snapshots:
config.snapshots = updated_config
elif update_type == CameraConfigUpdateEnum.zones:

View File

@ -12,7 +12,6 @@ from pydantic import (
Field,
TypeAdapter,
ValidationInfo,
field_validator,
model_validator,
)
@ -68,6 +67,7 @@ from .env import EnvVars
from .logger import LoggerConfig
from .mqtt import MqttConfig
from .network import NetworkingConfig
from .profile import ProfileDefinitionConfig
from .proxy import ProxyConfig
from .telemetry import TelemetryConfig
from .tls import TlsConfig
@ -97,8 +97,7 @@ stream_info_retriever = StreamInfoRetriever()
class RuntimeMotionConfig(MotionConfig):
"""Runtime version of MotionConfig with rasterized masks."""
# The rasterized numpy mask (combination of all enabled masks)
rasterized_mask: np.ndarray = Field(default=None, exclude=True)
def __init__(self, **config):
frame_shape = config.get("frame_shape", (1, 1))
@ -144,24 +143,13 @@ class RuntimeMotionConfig(MotionConfig):
empty_mask[:] = 255
self.rasterized_mask = empty_mask
model_config = ConfigDict(arbitrary_types_allowed=True, extra="ignore")
class RuntimeFilterConfig(FilterConfig):
"""Runtime version of FilterConfig with rasterized masks."""
# The rasterized numpy mask (combination of all enabled masks)
rasterized_mask: Optional[np.ndarray] = Field(default=None, exclude=True)
def __init__(self, **config):
frame_shape = config.get("frame_shape", (1, 1))
@ -225,16 +213,6 @@ class RuntimeFilterConfig(FilterConfig):
else:
self.rasterized_mask = None
model_config = ConfigDict(arbitrary_types_allowed=True, extra="ignore")
@ -466,7 +444,7 @@ class FrigateConfig(FrigateBaseModel):
# GenAI config (named provider configs: name -> GenAIConfig)
genai: Dict[str, GenAIConfig] = Field(
default_factory=dict,
title="Generative AI configuration",
description="Settings for integrated generative AI providers used to generate object descriptions and review summaries.",
)
@ -520,7 +498,7 @@ class FrigateConfig(FrigateBaseModel):
snapshots: SnapshotsConfig = Field(
default_factory=SnapshotsConfig,
title="Snapshots",
description="Settings for API-generated snapshots of tracked objects for all cameras; can be overridden per-camera.",
)
timestamp_style: TimestampStyleConfig = Field(
default_factory=TimestampStyleConfig,
@ -561,6 +539,19 @@ class FrigateConfig(FrigateBaseModel):
description="Configuration for named camera groups used to organize cameras in the UI.",
)
profiles: Dict[str, ProfileDefinitionConfig] = Field(
default_factory=dict,
title="Profiles",
description="Named profile definitions with friendly names. Camera profiles must reference names defined here.",
)
active_profile: Optional[str] = Field(
default=None,
title="Active profile",
description="Currently active profile name. Runtime-only, not persisted in YAML.",
exclude=True,
)
_plus_api: PlusApi
@property
@ -910,6 +901,15 @@ class FrigateConfig(FrigateBaseModel):
verify_objects_track(camera_config, labelmap_objects)
verify_lpr_and_face(self, camera_config)
# Validate camera profiles reference top-level profile definitions
for cam_name, cam_config in self.cameras.items():
for profile_name in cam_config.profiles:
if profile_name not in self.profiles:
raise ValueError(
f"Camera '{cam_name}' references profile '{profile_name}' "
f"which is not defined in the top-level 'profiles' section"
)
# set names on classification configs
for name, config in self.classification.custom.items():
config.name = name
@ -933,11 +933,6 @@ class FrigateConfig(FrigateBaseModel):
f"Camera {camera.name} has audio transcription enabled, but audio detection is not enabled for this camera. Audio detection must be enabled for cameras with audio transcription when it is disabled globally."
)
if self.plus_api and not self.snapshots.clean_copy:
logger.warning(
"Frigate+ is configured but clean snapshots are not enabled, submissions to Frigate+ will not be possible./"
)
# Validate auth roles against cameras
camera_names = set(self.cameras.keys())

View File

@ -24,8 +24,10 @@ EnvString = Annotated[str, AfterValidator(validate_env_string)]
def validate_env_vars(v: dict[str, str], info: ValidationInfo) -> dict[str, str]:
if isinstance(info.context, dict) and info.context.get("install", False):
for k, v in v.items():
os.environ[k] = v
for k, val in v.items():
os.environ[k] = val
if k.startswith("FRIGATE_"):
FRIGATE_ENV_VARS[k] = val
return v
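The rename above matters because the original loop rebound the dict being validated. A minimal sketch of the shadowing pitfall (hypothetical `broken`/`fixed` helpers, not Frigate code):

```python
def broken(env: dict) -> dict:
    v = env
    for k, v in v.items():  # rebinds v to each value as the loop runs
        pass
    return v  # returns the last value seen, not the dict

def fixed(env: dict) -> dict:
    v = env
    for k, val in v.items():  # separate name leaves v untouched
        pass
    return v  # still the original dict
```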

View File

@ -17,7 +17,7 @@ class MqttConfig(FrigateBaseModel):
title="Enable MQTT",
description="Enable or disable MQTT integration for state, events, and snapshots.",
)
host: str = Field(
host: EnvString = Field(
default="",
title="MQTT host",
description="Hostname or IP address of the MQTT broker.",

frigate/config/profile.py Normal file
View File

@ -0,0 +1,20 @@
"""Top-level profile definition configuration."""
from pydantic import Field
from .base import FrigateBaseModel
__all__ = ["ProfileDefinitionConfig"]
class ProfileDefinitionConfig(FrigateBaseModel):
"""Defines a named profile with a human-readable display name.
The dict key is the machine name used internally; friendly_name
is the label shown in the UI and API responses.
"""
friendly_name: str = Field(
title="Friendly name",
description="Display name for this profile shown in the UI.",
)

View File

@ -0,0 +1,334 @@
"""Profile manager for activating/deactivating named config profiles."""
import copy
import logging
from pathlib import Path
from typing import Optional
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdatePublisher,
CameraConfigUpdateTopic,
)
from frigate.config.camera.zone import ZoneConfig
from frigate.const import CONFIG_DIR
from frigate.util.builtin import deep_merge
from frigate.util.config import apply_section_update
logger = logging.getLogger(__name__)
PROFILE_SECTION_UPDATES: dict[str, CameraConfigUpdateEnum] = {
"audio": CameraConfigUpdateEnum.audio,
"birdseye": CameraConfigUpdateEnum.birdseye,
"detect": CameraConfigUpdateEnum.detect,
"face_recognition": CameraConfigUpdateEnum.face_recognition,
"lpr": CameraConfigUpdateEnum.lpr,
"motion": CameraConfigUpdateEnum.motion,
"notifications": CameraConfigUpdateEnum.notifications,
"objects": CameraConfigUpdateEnum.objects,
"record": CameraConfigUpdateEnum.record,
"review": CameraConfigUpdateEnum.review,
"snapshots": CameraConfigUpdateEnum.snapshots,
"zones": CameraConfigUpdateEnum.zones,
}
PERSISTENCE_FILE = Path(CONFIG_DIR) / ".active_profile"
class ProfileManager:
"""Manages profile activation, persistence, and config application."""
def __init__(
self,
config,
config_updater: CameraConfigUpdatePublisher,
dispatcher=None,
):
from frigate.config.config import FrigateConfig
self.config: FrigateConfig = config
self.config_updater = config_updater
self.dispatcher = dispatcher
self._base_configs: dict[str, dict[str, dict]] = {}
self._base_api_configs: dict[str, dict[str, dict]] = {}
self._base_enabled: dict[str, bool] = {}
self._base_zones: dict[str, dict[str, ZoneConfig]] = {}
self._snapshot_base_configs()
def _snapshot_base_configs(self) -> None:
"""Snapshot each camera's current section configs, enabled, and zones."""
for cam_name, cam_config in self.config.cameras.items():
self._base_configs[cam_name] = {}
self._base_api_configs[cam_name] = {}
self._base_enabled[cam_name] = cam_config.enabled
self._base_zones[cam_name] = copy.deepcopy(cam_config.zones)
for section in PROFILE_SECTION_UPDATES:
section_value = getattr(cam_config, section, None)
if section_value is None:
continue
if section == "zones":
# zones is a dict of ZoneConfig models
self._base_configs[cam_name][section] = {
name: zone.model_dump() for name, zone in section_value.items()
}
self._base_api_configs[cam_name][section] = {
name: {
**zone.model_dump(
mode="json",
warnings="none",
exclude_none=True,
),
"color": zone.color,
}
for name, zone in section_value.items()
}
else:
self._base_configs[cam_name][section] = section_value.model_dump()
self._base_api_configs[cam_name][section] = (
section_value.model_dump(
mode="json",
warnings="none",
exclude_none=True,
)
)
def update_config(self, new_config) -> None:
"""Update config reference after config/set replaces the in-memory config.
Preserves active profile state: re-snapshots base configs from the new
(freshly parsed) config, then re-applies profile overrides if a profile
was active.
"""
current_active = self.config.active_profile
self.config = new_config
# Re-snapshot base configs from the new config (which has base values)
self._base_configs.clear()
self._base_api_configs.clear()
self._base_enabled.clear()
self._base_zones.clear()
self._snapshot_base_configs()
# Re-apply profile overrides without publishing ZMQ updates
# (the config/set caller handles its own ZMQ publishing)
if current_active is not None:
if current_active in self.config.profiles:
changed: dict[str, set[str]] = {}
self._apply_profile_overrides(current_active, changed)
self.config.active_profile = current_active
else:
# Profile was deleted — deactivate
self.config.active_profile = None
self._persist_active_profile(None)
def activate_profile(self, profile_name: Optional[str]) -> Optional[str]:
"""Activate a profile by name, or deactivate if None.
Args:
profile_name: Profile name to activate, or None to deactivate.
Returns:
None on success, or an error message string on failure.
"""
if profile_name is not None:
if profile_name not in self.config.profiles:
return (
f"Profile '{profile_name}' is not defined in the profiles section"
)
# Track which camera/section pairs get changed for ZMQ publishing
changed: dict[str, set[str]] = {}
# Reset all cameras to base config
self._reset_to_base(changed)
# Apply new profile overrides if activating
if profile_name is not None:
err = self._apply_profile_overrides(profile_name, changed)
if err:
return err
# Publish ZMQ updates only for sections that actually changed
self._publish_updates(changed)
self.config.active_profile = profile_name
self._persist_active_profile(profile_name)
logger.info(
"Profile %s",
f"'{profile_name}' activated" if profile_name else "deactivated",
)
return None
def _reset_to_base(self, changed: dict[str, set[str]]) -> None:
"""Reset all cameras to their base (no-profile) config."""
for cam_name, cam_config in self.config.cameras.items():
# Restore enabled state
base_enabled = self._base_enabled.get(cam_name)
if base_enabled is not None and cam_config.enabled != base_enabled:
cam_config.enabled = base_enabled
changed.setdefault(cam_name, set()).add("enabled")
# Restore zones (always restore from snapshot; direct Pydantic
# comparison fails when ZoneConfig contains numpy arrays)
base_zones = self._base_zones.get(cam_name)
if base_zones is not None:
cam_config.zones = copy.deepcopy(base_zones)
changed.setdefault(cam_name, set()).add("zones")
# Restore section configs (zones handled above)
base = self._base_configs.get(cam_name, {})
for section in PROFILE_SECTION_UPDATES:
if section == "zones":
continue
base_data = base.get(section)
if base_data is None:
continue
err = apply_section_update(cam_config, section, base_data)
if err:
logger.error(
"Failed to reset section '%s' on camera '%s': %s",
section,
cam_name,
err,
)
else:
changed.setdefault(cam_name, set()).add(section)
def _apply_profile_overrides(
self, profile_name: str, changed: dict[str, set[str]]
) -> Optional[str]:
"""Apply profile overrides for all cameras that have the named profile."""
for cam_name, cam_config in self.config.cameras.items():
profile = cam_config.profiles.get(profile_name)
if profile is None:
continue
# Apply enabled override
if profile.enabled is not None and cam_config.enabled != profile.enabled:
cam_config.enabled = profile.enabled
changed.setdefault(cam_name, set()).add("enabled")
# Apply zones override — merge profile zones into base zones
if profile.zones is not None:
base_zones = self._base_zones.get(cam_name, {})
merged_zones = copy.deepcopy(base_zones)
merged_zones.update(profile.zones)
# Profile zone objects are parsed without colors or contours
# (those are set during CameraConfig init / post-validation).
# Inherit the base zone's color when available, and ensure
# every zone has a valid contour for rendering.
for name, zone in merged_zones.items():
if zone.contour.size == 0:
zone.generate_contour(cam_config.frame_shape)
if zone.color == (0, 0, 0) and name in base_zones:
zone._color = base_zones[name].color
cam_config.zones = merged_zones
changed.setdefault(cam_name, set()).add("zones")
base = self._base_configs.get(cam_name, {})
for section in PROFILE_SECTION_UPDATES:
if section == "zones":
continue
profile_section = getattr(profile, section, None)
if profile_section is None:
continue
overrides = profile_section.model_dump(exclude_unset=True)
if not overrides:
continue
base_data = base.get(section, {})
merged = deep_merge(overrides, base_data)
err = apply_section_update(cam_config, section, merged)
if err:
return f"Failed to apply profile '{profile_name}' section '{section}' on camera '{cam_name}': {err}"
changed.setdefault(cam_name, set()).add(section)
return None
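The `deep_merge(overrides, base_data)` call order implies the first argument (the profile overrides) wins over the base section. A simplified sketch of that merge semantics with a hypothetical `merge` helper (not the actual `frigate.util.builtin.deep_merge`):

```python
def merge(overrides: dict, base: dict) -> dict:
    # keys in overrides win; nested dicts are merged recursively
    result = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(value, result[key])
        else:
            result[key] = value
    return result

base = {"enabled": True, "retain": {"days": 7}}
profile = {"retain": {"days": 1}}
merged = merge(profile, base)
```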
def _publish_updates(self, changed: dict[str, set[str]]) -> None:
"""Publish ZMQ config updates only for sections that changed."""
for cam_name, sections in changed.items():
cam_config = self.config.cameras.get(cam_name)
if cam_config is None:
continue
for section in sections:
if section == "enabled":
self.config_updater.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum.enabled, cam_name
),
cam_config.enabled,
)
if self.dispatcher is not None:
self.dispatcher.publish(
f"{cam_name}/enabled/state",
"ON" if cam_config.enabled else "OFF",
retain=True,
)
continue
if section == "zones":
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.zones, cam_name),
cam_config.zones,
)
continue
update_enum = PROFILE_SECTION_UPDATES.get(section)
if update_enum is None:
continue
settings = getattr(cam_config, section, None)
if settings is not None:
self.config_updater.publish_update(
CameraConfigUpdateTopic(update_enum, cam_name),
settings,
)
def _persist_active_profile(self, profile_name: Optional[str]) -> None:
"""Persist the active profile name to disk."""
try:
if profile_name is None:
PERSISTENCE_FILE.unlink(missing_ok=True)
else:
PERSISTENCE_FILE.write_text(profile_name)
except OSError:
logger.exception("Failed to persist active profile")
@staticmethod
def load_persisted_profile() -> Optional[str]:
"""Load the persisted active profile name from disk."""
try:
if PERSISTENCE_FILE.exists():
name = PERSISTENCE_FILE.read_text().strip()
return name if name else None
except OSError:
logger.exception("Failed to load persisted profile")
return None
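`unlink(missing_ok=True)` makes deactivation idempotent, so deactivating twice is harmless. A quick round trip of the persistence pattern in a temp directory:

```python
import tempfile
from pathlib import Path

state = Path(tempfile.mkdtemp()) / ".active_profile"
state.write_text("night")                  # activate a profile
active = state.read_text().strip() or None
state.unlink(missing_ok=True)              # deactivate
state.unlink(missing_ok=True)              # safe to call again, no FileNotFoundError
```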
def get_base_configs_for_api(self, camera_name: str) -> dict[str, dict]:
"""Return base (pre-profile) section configs for a camera.
These are JSON-serializable dicts suitable for direct inclusion in
the /api/config response, with None values already excluded.
"""
return self._base_api_configs.get(camera_name, {})
def get_available_profiles(self) -> list[dict[str, str]]:
"""Get list of all profile definitions from the top-level config."""
return [
{"name": name, "friendly_name": defn.friendly_name}
for name, defn in sorted(self.config.profiles.items())
]
def get_profile_info(self) -> dict:
"""Get profile state info for API responses."""
return {
"profiles": self.get_available_profiles(),
"active_profile": self.config.active_profile,
}

View File

@ -20,7 +20,7 @@ from frigate.genai import GenAIClient
from frigate.models import Event
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.file import get_event_thumbnail_bytes, load_event_snapshot_image
from frigate.util.image import create_thumbnail, ensure_jpeg_bytes
if TYPE_CHECKING:
@ -103,16 +103,19 @@ class ObjectDescriptionProcessor(PostProcessorApi):
logger.debug(f"{camera} sending early request to GenAI")
self.early_request_sent[data["id"]] = True
# Copy thumbnails to avoid holding references after cleanup
thumbnails_copy = [
data["thumbnail"][:] if data.get("thumbnail") else None
for data in self.tracked_events[data["id"]]
if data.get("thumbnail")
]
threading.Thread(
target=self._genai_embed_description,
name=f"_genai_embed_description_{event.id}",
daemon=True,
args=(
event,
[
data["thumbnail"]
for data in self.tracked_events[data["id"]]
],
thumbnails_copy,
),
).start()
@ -172,8 +175,13 @@ class ObjectDescriptionProcessor(PostProcessorApi):
embed_image = (
[snapshot_image]
if event.has_snapshot and source == "snapshot"
# Copy thumbnails to avoid holding references
else (
[data["thumbnail"] for data in self.tracked_events[event_id]]
[
data["thumbnail"][:] if data.get("thumbnail") else None
for data in self.tracked_events[event_id]
if data.get("thumbnail")
]
if len(self.tracked_events.get(event_id, [])) > 0
else [thumbnail]
)
@ -224,39 +232,28 @@ class ObjectDescriptionProcessor(PostProcessorApi):
def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
"""Read, decode, and crop the snapshot image."""
snapshot_file = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg")
if not os.path.isfile(snapshot_file):
logger.error(
f"Cannot load snapshot for {event.id}, file not found: {snapshot_file}"
)
return None
try:
with open(snapshot_file, "rb") as image_file:
snapshot_image = image_file.read()
img, _ = load_event_snapshot_image(event)
if img is None:
logger.error(f"Cannot load snapshot for {event.id}, file not found")
return None
img = cv2.imdecode(
np.frombuffer(snapshot_image, dtype=np.int8),
cv2.IMREAD_COLOR,
)
# Crop snapshot based on region
# provide full image if region doesn't exist (manual events)
height, width = img.shape[:2]
x1_rel, y1_rel, width_rel, height_rel = event.data.get(
"region", [0, 0, 1, 1]
)
x1, y1 = int(x1_rel * width), int(y1_rel * height)
# Crop snapshot based on region
# provide full image if region doesn't exist (manual events)
height, width = img.shape[:2]
x1_rel, y1_rel, width_rel, height_rel = event.data.get(
"region", [0, 0, 1, 1]
)
x1, y1 = int(x1_rel * width), int(y1_rel * height)
cropped_image = img[
y1 : y1 + int(height_rel * height),
x1 : x1 + int(width_rel * width),
]
cropped_image = img[
y1 : y1 + int(height_rel * height),
x1 : x1 + int(width_rel * width),
]
_, buffer = cv2.imencode(".jpg", cropped_image)
_, buffer = cv2.imencode(".jpg", cropped_image)
return buffer.tobytes()
return buffer.tobytes()
except Exception:
return None
@ -276,8 +273,13 @@ class ObjectDescriptionProcessor(PostProcessorApi):
embed_image = (
[snapshot_image]
if event.has_snapshot and camera_config.objects.genai.use_snapshot
# Copy thumbnails to avoid holding references after cleanup
else (
[data["thumbnail"] for data in self.tracked_events[event.id]]
[
data["thumbnail"][:] if data.get("thumbnail") else None
for data in self.tracked_events[event.id]
if data.get("thumbnail")
]
if num_thumbnails > 0
else [thumbnail]
)

View File

@ -324,9 +324,9 @@ class ReviewDescriptionProcessor(PostProcessorApi):
end_time: float,
) -> list[str]:
preview_dir = os.path.join(CACHE_DIR, "preview_frames")
file_start = f"preview_{camera}"
start_file = f"{file_start}-{start_time}.webp"
end_file = f"{file_start}-{end_time}.webp"
file_start = f"preview_{camera}-"
start_file = f"{file_start}{start_time}.webp"
end_file = f"{file_start}{end_time}.webp"
all_frames = []
for file in sorted(os.listdir(preview_dir)):
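The trailing dash in the prefix matters: without it, files for a camera whose name is a prefix of another camera's name (e.g. `yard` vs `yard2`) would also match. A quick sketch:

```python
files = ["preview_yard-100.0.webp", "preview_yard2-100.0.webp"]

loose = [f for f in files if f.startswith("preview_yard")]    # matches both cameras
strict = [f for f in files if f.startswith("preview_yard-")]  # matches only "yard"
```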
@ -463,6 +463,13 @@ class ReviewDescriptionProcessor(PostProcessorApi):
thumbs = []
for idx, thumb_path in enumerate(frame_paths):
thumb_data = cv2.imread(thumb_path)
if thumb_data is None:
logger.warning(
"Could not read preview frame at %s, skipping", thumb_path
)
continue
ret, jpg = cv2.imencode(
".jpg", thumb_data, [int(cv2.IMWRITE_JPEG_QUALITY), 100]
)
@ -521,7 +528,7 @@ def run_analysis(
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
object_type = verified_label.replace("-verified", "").replace("_", " ")
name = titlecase(sub_labels_list[i].replace("_", " "))
unified_objects.append(f"{name} ({object_type})")
unified_objects.append(f"{name} ← {object_type}")
for label in objects_list:
if "-verified" in label:

View File

@ -527,6 +527,17 @@ class RKNNModelRunner(BaseModelRunner):
# Transpose from NCHW to NHWC
pixel_data = np.transpose(pixel_data, (0, 2, 3, 1))
rknn_inputs.append(pixel_data)
elif name == "data":
# ArcFace: undo Python normalisation to uint8 [0,255]
# RKNN runtime applies mean=127.5/std=127.5 internally before first layer
face_data = inputs[name]
if len(face_data.shape) == 4 and face_data.shape[1] == 3:
# Transpose from NCHW to NHWC
face_data = np.transpose(face_data, (0, 2, 3, 1))
face_data = (
((face_data + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
)
rknn_inputs.append(face_data)
else:
rknn_inputs.append(inputs[name])
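The ArcFace undo step maps the [-1, 1] normalised floats back to [0, 255] uint8 before handing them to the RKNN runtime. A small numpy check of that round trip (note `astype(np.uint8)` truncates, so 127.5 lands on 127):

```python
import numpy as np

norm = np.array([-1.0, 0.0, 1.0], dtype=np.float32)  # ArcFace-normalised range
restored = ((norm + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
# -1.0 -> 0, 0.0 -> 127 (truncated from 127.5), 1.0 -> 255
```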

View File

@ -4,7 +4,7 @@ import re
import urllib.request
from typing import Literal
import axengine as axe
from pydantic import ConfigDict
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_api import DetectionApi
@ -23,6 +23,12 @@ model_cache_dir = os.path.join(MODEL_CACHE_DIR, "axengine_cache/")
class AxengineDetectorConfig(BaseDetectorConfig):
"""AXERA AX650N/AX8850N NPU detector running compiled .axmodel files via the AXEngine runtime."""
model_config = ConfigDict(
title="AXEngine NPU",
)
type: Literal[DETECTOR_KEY]
@ -30,6 +36,12 @@ class Axengine(DetectionApi):
type_key = DETECTOR_KEY
def __init__(self, config: AxengineDetectorConfig):
try:
import axengine as axe
except ModuleNotFoundError:
raise ImportError("AXEngine is not installed.")
logger.info("__init__ axengine")
super().__init__(config)
self.height = config.model.height

View File

@ -205,14 +205,14 @@ class EmbeddingsContext:
)
def get_face_ids(self, name: str) -> list[str]:
sql_query = f"""
sql_query = """
SELECT
id
FROM vec_descriptions
WHERE id LIKE '%{name}%'
WHERE id LIKE ?
"""
return self.db.execute_sql(sql_query).fetchall()
return self.db.execute_sql(sql_query, (f"%{name}%",)).fetchall()
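Binding the pattern as a parameter keeps the user-supplied name out of the SQL string entirely. A minimal sqlite3 sketch of the same pattern (hypothetical table contents):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vec_descriptions (id TEXT)")
conn.executemany(
    "INSERT INTO vec_descriptions VALUES (?)",
    [("face-alice-1",), ("face-bob-1",)],
)
# the LIKE pattern is bound as a parameter, never interpolated into the SQL
name = "alice"
rows = conn.execute(
    "SELECT id FROM vec_descriptions WHERE id LIKE ?", (f"%{name}%",)
).fetchall()
```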
def reprocess_face(self, face_file: str) -> dict[str, Any]:
return self.requestor.send_data(

View File

@ -266,7 +266,7 @@ class Embeddings:
)
duration = datetime.datetime.now().timestamp() - start
self.text_inference_speed.update(duration / len(valid_ids))
self.image_inference_speed.update(duration / len(valid_ids))
return embeddings

View File

@ -705,4 +705,7 @@ class EmbeddingMaintainer(threading.Thread):
if not self.config.semantic_search.enabled:
return
self.embeddings.embed_thumbnail(event_id, thumbnail)
try:
self.embeddings.embed_thumbnail(event_id, thumbnail)
except ValueError:
logger.warning(f"Failed to embed thumbnail for event {event_id}")

View File

@ -321,6 +321,9 @@ class AudioEventMaintainer(threading.Thread):
self.start_or_restart_ffmpeg()
while not self.stop_event.is_set():
# check if there is an updated config
self.config_subscriber.check_for_updates()
enabled = self.camera_config.enabled
if enabled != self.was_enabled:
if enabled:
@ -347,9 +350,6 @@ class AudioEventMaintainer(threading.Thread):
time.sleep(0.1)
continue
# check if there is an updated config
self.config_subscriber.check_for_updates()
self.read_audio()
if self.audio_listener:

View File

@ -326,6 +326,10 @@ class EventCleanup(threading.Thread):
return events_to_update
def run(self) -> None:
if self.config.safe_mode:
logger.info("Safe mode enabled, skipping event cleanup")
return
# only expire events every 5 minutes
while not self.stop_event.wait(300):
events_with_expired_clips = self.expire_clips()

View File

@ -158,36 +158,33 @@ class EventProcessor(threading.Thread):
end_time = (
None if event_data["end_time"] is None else event_data["end_time"]
)
snapshot = event_data["snapshot"]
# score of the snapshot
score = (
None
if event_data["snapshot"] is None
else event_data["snapshot"]["score"]
)
score = None if snapshot is None else snapshot["score"]
# detection region in the snapshot
region = (
None
if event_data["snapshot"] is None
if snapshot is None
else to_relative_box(
width,
height,
event_data["snapshot"]["region"],
snapshot["region"],
)
)
# bounding box for the snapshot
box = (
None
if event_data["snapshot"] is None
if snapshot is None
else to_relative_box(
width,
height,
event_data["snapshot"]["box"],
snapshot["box"],
)
)
attributes = (
None
if event_data["snapshot"] is None
if snapshot is None
else [
{
"box": to_relative_box(
@ -198,9 +195,14 @@ class EventProcessor(threading.Thread):
"label": a["label"],
"score": a["score"],
}
for a in event_data["snapshot"]["attributes"]
for a in snapshot["attributes"]
]
)
snapshot_frame_time = None if snapshot is None else snapshot["frame_time"]
snapshot_area = None if snapshot is None else snapshot["area"]
snapshot_estimated_speed = (
None if snapshot is None else snapshot["current_estimated_speed"]
)
# keep these from being set back to false because the event
# may have started while recordings/snapshots/alerts/detections were enabled
@ -229,6 +231,10 @@ class EventProcessor(threading.Thread):
"score": score,
"top_score": event_data["top_score"],
"attributes": attributes,
"snapshot_clean": event_data.get("snapshot_clean", False),
"snapshot_frame_time": snapshot_frame_time,
"snapshot_area": snapshot_area,
"snapshot_estimated_speed": snapshot_estimated_speed,
"average_estimated_speed": event_data["average_estimated_speed"],
"velocity_angle": event_data["velocity_angle"],
"type": "object",
@ -306,8 +312,11 @@ class EventProcessor(threading.Thread):
"type": event_data["type"],
"score": event_data["score"],
"top_score": event_data["score"],
"snapshot_clean": event_data.get("snapshot_clean", False),
},
}
if event_data.get("draw") is not None:
event[Event.data]["draw"] = event_data["draw"]
if event_data.get("recognized_license_plate") is not None:
event[Event.data]["recognized_license_plate"] = event_data[
"recognized_license_plate"

View File

@ -120,10 +120,10 @@ PRESETS_HW_ACCEL_DECODE["preset-rk-h265"] = PRESETS_HW_ACCEL_DECODE[
PRESETS_HW_ACCEL_SCALE = {
"preset-rpi-64-h264": "-r {0} -vf fps={0},scale={1}:{2}",
"preset-rpi-64-h265": "-r {0} -vf fps={0},scale={1}:{2}",
FFMPEG_HWACCEL_VAAPI: "-r {0} -vf fps={0},scale_vaapi=w={1}:h={2},hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5",
"preset-intel-qsv-h264": "-r {0} -vf vpp_qsv=framerate={0}:w={1}:h={2}:format=nv12,hwdownload,format=nv12,format=yuv420p",
"preset-intel-qsv-h265": "-r {0} -vf vpp_qsv=framerate={0}:w={1}:h={2}:format=nv12,hwdownload,format=nv12,format=yuv420p",
FFMPEG_HWACCEL_NVIDIA: "-r {0} -vf fps={0},scale_cuda=w={1}:h={2},hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5",
FFMPEG_HWACCEL_VAAPI: "-r {0} -vf fps={0},scale_vaapi=w={1}:h={2},hwdownload,format=nv12",
"preset-intel-qsv-h264": "-r {0} -vf vpp_qsv=w={1}:h={2}:format=nv12,hwdownload,format=nv12,fps={0},format=yuv420p",
"preset-intel-qsv-h265": "-r {0} -vf vpp_qsv=w={1}:h={2}:format=nv12,hwdownload,format=nv12,fps={0},format=yuv420p",
FFMPEG_HWACCEL_NVIDIA: "-r {0} -vf fps={0},scale_cuda=w={1}:h={2},hwdownload,format=nv12",
"preset-jetson-h264": "-r {0}", # scaled in decoder
"preset-jetson-h265": "-r {0}", # scaled in decoder
FFMPEG_HWACCEL_RKMPP: "-r {0} -vf scale_rkrga=w={1}:h={2}:format=yuv420p:force_original_aspect_ratio=0,hwmap=mode=read,format=yuv420p",
@ -242,15 +242,6 @@ def parse_preset_hardware_acceleration_scale(
else:
scale = PRESETS_HW_ACCEL_SCALE.get(arg, PRESETS_HW_ACCEL_SCALE["default"])
if (
",hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5" in scale
and os.environ.get("FFMPEG_DISABLE_GAMMA_EQUALIZER") is not None
):
scale = scale.replace(
",hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5",
":format=nv12,hwdownload,format=nv12,format=yuv420p",
)
scale = scale.format(fps, width, height).split(" ")
scale.extend(detect_args)
return scale

View File

@ -106,7 +106,7 @@ When forming your description:
## Response Field Guidelines
Respond with a JSON object matching the provided schema. Field-specific guidance:
- `scene`: Describe how the sequence begins, then the progression of events — all significant movements and actions in order. For example, if a vehicle arrives and then a person exits, describe both sequentially. Your description should align with and support the threat level you assign.
- `scene`: Describe how the sequence begins, then the progression of events — all significant movements and actions in order. For example, if a vehicle arrives and then a person exits, describe both sequentially. Always use subject names from "Objects in Scene" — do not replace named subjects with generic terms like "a person" or "the individual". Your description should align with and support the threat level you assign.
- `title`: Characterize **what took place and where** — interpret the overall purpose or outcome, do not simply compress the scene description into fewer words. Include the relevant location (zone, area, or entry point). Always include subject names from "Objects in Scene" — do not replace named subjects with generic terms. No editorial qualifiers like "routine" or "suspicious."
- `potential_threat_level`: Must be consistent with your scene description and the activity patterns above.
{get_concern_prompt()}
@ -120,9 +120,7 @@ Respond with a JSON object matching the provided schema. Field-specific guidance
## Objects in Scene
Each line represents a detection state, not necessarily unique individuals. Parentheses indicate object type or category, use only the name/label in your response, not the parentheses.
**CRITICAL: When you see both recognized and unrecognized entries of the same type (e.g., "Joe (person)" and "Person"), visually count how many distinct people/objects you actually see based on appearance and clothing. If you observe only ONE person throughout the sequence, use ONLY the recognized name (e.g., "Joe"). The same person may be recognized in some frames but not others. Only describe both if you visually see MULTIPLE distinct people with clearly different appearances.**
Each line represents a detection state, not necessarily unique individuals. The `←` symbol separates a recognized subject's name from their object type — use only the name (before the `←`) in your response, not the type after it. The same subject may appear across multiple lines if detected multiple times.
**Note: Unidentified objects (without names) are NOT indicators of suspicious activity — they simply mean the system hasn't identified that object.**
{get_objects_list()}
@ -188,8 +186,8 @@ Each line represents a detection state, not necessarily unique individuals. Pare
if metadata.confidence > 1.0:
metadata.confidence = min(metadata.confidence / 100.0, 1.0)
# If any verified objects (contain parentheses with name), set to 0
if any("(" in obj for obj in review_data["unified_objects"]):
# If any verified objects (contain ← separator), set to 0
if any("" in obj for obj in review_data["unified_objects"]):
metadata.potential_threat_level = 0
metadata.time = review_data["start"]

View File

@ -397,13 +397,13 @@ class GeminiClient(GenAIClient):
tool_calls_by_index: dict[int, dict[str, Any]] = {}
finish_reason = "stop"
response = self.provider.models.generate_content_stream(
stream = await self.provider.aio.models.generate_content_stream(
model=self.genai_config.model,
contents=gemini_messages,
config=types.GenerateContentConfig(**config_params),
)
async for chunk in response:
async for chunk in stream:
if not chunk or not chunk.candidates:
continue

View File

@ -64,7 +64,12 @@ class LlamaCppClient(GenAIClient):
return None
try:
content = []
content = [
{
"type": "text",
"text": prompt,
}
]
for image in images:
encoded_image = base64.b64encode(image).decode("utf-8")
content.append(
@ -75,12 +80,6 @@ class LlamaCppClient(GenAIClient):
},
}
)
content.append(
{
"type": "text",
"text": prompt,
}
)
# Build request payload with llama.cpp native options
payload = {

View File

@ -53,6 +53,45 @@ class OllamaClient(GenAIClient):
logger.warning("Error initializing Ollama: %s", str(e))
return None
@staticmethod
def _clean_schema_for_ollama(schema: dict, *, _is_properties: bool = False) -> dict:
"""Strip Pydantic metadata from a JSON schema for Ollama compatibility.
Ollama's grammar-based constrained generation works best with minimal
schemas. Pydantic adds title/description/constraint fields that can
cause the grammar generator to silently skip required fields.
Keys inside a ``properties`` dict are actual field names and must never
be stripped, even if they collide with a metadata key name (e.g. a
model field called ``title``).
"""
STRIP_KEYS = {
"title",
"description",
"minimum",
"maximum",
"exclusiveMinimum",
"exclusiveMaximum",
}
result = {}
for key, value in schema.items():
if not _is_properties and key in STRIP_KEYS:
continue
if isinstance(value, dict):
result[key] = OllamaClient._clean_schema_for_ollama(
value, _is_properties=(key == "properties")
)
elif isinstance(value, list):
result[key] = [
OllamaClient._clean_schema_for_ollama(item)
if isinstance(item, dict)
else item
for item in value
]
else:
result[key] = value
return result
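The `_is_properties` guard is what protects real field names. A quick check of the behaviour with a standalone restatement of the cleaner and a hypothetical schema whose model has a field literally named `title`:

```python
STRIP_KEYS = {"title", "description", "minimum", "maximum",
              "exclusiveMinimum", "exclusiveMaximum"}

def clean_schema(schema: dict, *, _is_properties: bool = False) -> dict:
    # simplified restatement of _clean_schema_for_ollama for illustration
    result = {}
    for key, value in schema.items():
        if not _is_properties and key in STRIP_KEYS:
            continue  # drop Pydantic metadata outside of a properties dict
        if isinstance(value, dict):
            result[key] = clean_schema(value, _is_properties=(key == "properties"))
        elif isinstance(value, list):
            result[key] = [clean_schema(v) if isinstance(v, dict) else v for v in value]
        else:
            result[key] = value
    return result

schema = {
    "title": "ReviewMetadata",  # model metadata: stripped
    "type": "object",
    "properties": {
        "title": {"type": "string", "description": "stripped too"},  # real field: kept
    },
    "required": ["title"],
}
cleaned = clean_schema(schema)
```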
def _send(
self,
prompt: str,
@ -73,7 +112,7 @@ class OllamaClient(GenAIClient):
if response_format and response_format.get("type") == "json_schema":
schema = response_format.get("json_schema", {}).get("schema")
if schema:
ollama_options["format"] = schema
ollama_options["format"] = self._clean_schema_for_ollama(schema)
result = self.provider.generate(
self.genai_config.model,
prompt,

View File

@ -1,13 +1,14 @@
"""Media sync job management with background execution."""
import logging
import os
import threading
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_JOB_STATE
from frigate.const import CONFIG_DIR, UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
get_current_job,
@ -16,7 +17,7 @@ from frigate.jobs.manager import (
set_current_job,
)
from frigate.types import JobStatusTypesEnum
from frigate.util.media import sync_all_media
from frigate.util.media import sync_all_media, write_orphan_report
logger = logging.getLogger(__name__)
@ -29,6 +30,7 @@ class MediaSyncJob(Job):
dry_run: bool = False
media_types: list[str] = field(default_factory=lambda: ["all"])
force: bool = False
verbose: bool = False
class MediaSyncRunner(threading.Thread):
@ -61,6 +63,21 @@ class MediaSyncRunner(threading.Thread):
force=self.job.force,
)
# Write verbose report if requested
if self.job.verbose:
report_dir = os.path.join(CONFIG_DIR, "media_sync")
os.makedirs(report_dir, exist_ok=True)
report_path = os.path.join(report_dir, f"{self.job.id}.txt")
write_orphan_report(
results,
report_path,
job_id=self.job.id,
dry_run=self.job.dry_run,
)
logger.info(
"Media sync verbose orphan report written to %s", report_path
)
# Store results and mark as complete
self.job.results = results.to_dict()
self.job.status = JobStatusTypesEnum.success
@@ -95,6 +112,7 @@ def start_media_sync_job(
dry_run: bool = False,
media_types: Optional[list[str]] = None,
force: bool = False,
verbose: bool = False,
) -> Optional[str]:
"""Start a new media sync job if none is currently running.
@@ -113,6 +131,7 @@
dry_run=dry_run,
media_types=media_types or ["all"],
force=force,
verbose=verbose,
)
logger.debug(f"Creating new media sync job: {job.id}")
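The verbose-report path logic added above is small enough to sketch on its own. A minimal helper, under the assumption that the base directory is passed in (Frigate reads it from `frigate.const.CONFIG_DIR`); `report_path_for` is a hypothetical name used only for this illustration:

```python
import os

def report_path_for(config_dir: str, job_id: str) -> str:
    # Mirrors how MediaSyncRunner builds the verbose report location:
    # <config_dir>/media_sync/<job_id>.txt, creating the directory if needed.
    report_dir = os.path.join(config_dir, "media_sync")
    os.makedirs(report_dir, exist_ok=True)
    return os.path.join(report_dir, f"{job_id}.txt")
```

Keeping one file per job id means a re-run with a new job produces a fresh report instead of overwriting the previous one.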

frigate/jobs/vlm_watch.py (new file, 405 lines)

@@ -0,0 +1,405 @@
"""VLM watch job: continuously monitors a camera and notifies when a condition is met."""
import base64
import json
import logging
import re
import threading
import time
from dataclasses import asdict, dataclass, field
from datetime import datetime
from typing import Any, Optional
import cv2
from frigate.comms.detections_updater import DetectionSubscriber, DetectionTypeEnum
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.types import JobStatusTypesEnum
logger = logging.getLogger(__name__)
# Polling interval bounds (seconds)
_MIN_INTERVAL = 1
_MAX_INTERVAL = 300
# Max user/assistant turn pairs to keep in conversation history
_MAX_HISTORY = 10
@dataclass
class VLMWatchJob(Job):
"""Job state for a VLM watch monitor."""
job_type: str = "vlm_watch"
camera: str = ""
condition: str = ""
max_duration_minutes: int = 60
labels: list = field(default_factory=list)
zones: list = field(default_factory=list)
last_reasoning: str = ""
iteration_count: int = 0
def to_dict(self) -> dict[str, Any]:
return asdict(self)
class VLMWatchRunner(threading.Thread):
"""Background thread that polls a camera with the vision client until a condition is met."""
def __init__(
self,
job: VLMWatchJob,
config: FrigateConfig,
cancel_event: threading.Event,
frame_processor,
genai_manager,
dispatcher,
) -> None:
super().__init__(daemon=True, name=f"vlm_watch_{job.id}")
self.job = job
self.config = config
self.cancel_event = cancel_event
self.frame_processor = frame_processor
self.genai_manager = genai_manager
self.dispatcher = dispatcher
self.requestor = InterProcessRequestor()
self.detection_subscriber = DetectionSubscriber(DetectionTypeEnum.video.value)
self.conversation: list[dict[str, Any]] = []
def run(self) -> None:
self.job.status = JobStatusTypesEnum.running
self.job.start_time = time.time()
self._broadcast_status()
self.conversation = [{"role": "system", "content": self._build_system_prompt()}]
max_end_time = self.job.start_time + self.job.max_duration_minutes * 60
try:
while not self.cancel_event.is_set():
if time.time() > max_end_time:
logger.debug(
"VLM watch job %s timed out after %d minutes",
self.job.id,
self.job.max_duration_minutes,
)
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = f"Monitor timed out after {self.job.max_duration_minutes} minutes"
break
next_run_in = self._run_iteration()
if self.job.status == JobStatusTypesEnum.success:
break
self._wait_for_trigger(next_run_in)
except Exception as e:
logger.exception("VLM watch job %s failed: %s", self.job.id, e)
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = str(e)
finally:
if self.job.status == JobStatusTypesEnum.running:
self.job.status = JobStatusTypesEnum.cancelled
self.job.end_time = time.time()
self._broadcast_status()
try:
self.detection_subscriber.stop()
except Exception:
pass
try:
self.requestor.stop()
except Exception:
pass
def _run_iteration(self) -> float:
"""Run one VLM analysis iteration. Returns seconds until next run."""
vision_client = (
self.genai_manager.vision_client or self.genai_manager.tool_client
)
if vision_client is None:
logger.warning("VLM watch job %s: no vision client available", self.job.id)
return 30
frame = self.frame_processor.get_current_frame(self.job.camera, {})
if frame is None:
logger.debug(
"VLM watch job %s: frame unavailable for camera %s",
self.job.id,
self.job.camera,
)
self.job.last_reasoning = "Camera frame unavailable"
return 10
# Downscale frame to 480p max height
h, w = frame.shape[:2]
if h > 480:
scale = 480.0 / h
frame = cv2.resize(
frame, (int(w * scale), 480), interpolation=cv2.INTER_AREA
)
_, enc = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 85])
b64 = base64.b64encode(enc.tobytes()).decode()
timestamp = datetime.now().strftime("%H:%M:%S")
self.conversation.append(
{
"role": "user",
"content": [
{"type": "text", "text": f"Frame captured at {timestamp}."},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{b64}"},
},
],
}
)
response = vision_client.chat_with_tools(
messages=self.conversation,
tools=None,
tool_choice=None,
)
response_str = response.get("content") or ""
if not response_str:
logger.warning(
"VLM watch job %s: empty response from vision client", self.job.id
)
# Remove the user message we just added so we don't leave a dangling turn
self.conversation.pop()
return 30
logger.debug("VLM watch job %s response: %s", self.job.id, response_str)
self.conversation.append({"role": "assistant", "content": response_str})
# Keep system prompt + last _MAX_HISTORY user/assistant pairs
max_msgs = 1 + _MAX_HISTORY * 2
if len(self.conversation) > max_msgs:
self.conversation = [self.conversation[0]] + self.conversation[
-(max_msgs - 1) :
]
try:
clean = re.sub(
r"\n?```$", "", re.sub(r"^```[a-zA-Z0-9]*\n?", "", response_str)
)
parsed = json.loads(clean)
condition_met = bool(parsed.get("condition_met", False))
next_run_in = max(
_MIN_INTERVAL,
min(_MAX_INTERVAL, int(parsed.get("next_run_in", 30))),
)
reasoning = str(parsed.get("reasoning", ""))
except (json.JSONDecodeError, ValueError, TypeError) as e:
logger.warning(
"VLM watch job %s: failed to parse VLM response: %s", self.job.id, e
)
return 30
self.job.last_reasoning = reasoning
self.job.iteration_count += 1
self._broadcast_status()
if condition_met:
logger.debug(
"VLM watch job %s: condition met on camera %s: %s",
self.job.id,
self.job.camera,
reasoning,
)
self._send_notification(reasoning)
self.job.status = JobStatusTypesEnum.success
return 0
return next_run_in
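The history cap applied inside `_run_iteration` (the system prompt plus at most the last `_MAX_HISTORY` user/assistant pairs) can be expressed as a pure function. `trim_history` is a hypothetical helper name for illustration, not part of the class:

```python
_MAX_HISTORY = 10  # same cap as the module constant above

def trim_history(conversation: list[dict]) -> list[dict]:
    # Keep the system prompt (index 0) plus at most the last
    # _MAX_HISTORY user/assistant pairs, as _run_iteration does.
    max_msgs = 1 + _MAX_HISTORY * 2
    if len(conversation) > max_msgs:
        return [conversation[0]] + conversation[-(max_msgs - 1):]
    return conversation
```

Bounding the history keeps each VLM request's token cost roughly constant no matter how long the watch runs.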
def _wait_for_trigger(self, max_wait: float) -> None:
"""Wait up to max_wait seconds, returning early if a relevant detection fires on the target camera."""
deadline = time.time() + max_wait
while not self.cancel_event.is_set():
remaining = deadline - time.time()
if remaining <= 0:
break
topic, payload = self.detection_subscriber.check_for_update(
timeout=min(1.0, remaining)
)
if topic is None or payload is None:
continue
# payload = (camera, frame_name, frame_time, tracked_objects, motion_boxes, regions)
cam = payload[0]
tracked_objects = payload[3]
logger.debug(
"VLM watch job %s: detection event cam=%s (want %s), objects=%s",
self.job.id,
cam,
self.job.camera,
[
{"label": o.get("label"), "zones": o.get("current_zones")}
for o in (tracked_objects or [])
],
)
if cam != self.job.camera or not tracked_objects:
continue
if self._detection_matches_filters(tracked_objects):
logger.debug(
"VLM watch job %s: woken early by detection event on %s",
self.job.id,
self.job.camera,
)
break
def _detection_matches_filters(self, tracked_objects: list) -> bool:
"""Return True if any tracked object passes the label and zone filters."""
labels = self.job.labels
zones = self.job.zones
for obj in tracked_objects:
label_ok = not labels or obj.get("label") in labels
zone_ok = not zones or bool(set(obj.get("current_zones", [])) & set(zones))
if label_ok and zone_ok:
return True
return False
def _build_system_prompt(self) -> str:
focus_text = ""
if self.job.labels or self.job.zones:
parts = []
if self.job.labels:
parts.append(f"object types: {', '.join(self.job.labels)}")
if self.job.zones:
parts.append(f"zones: {', '.join(self.job.zones)}")
focus_text = f"\nFocus on {' and '.join(parts)}.\n"
return (
f'You are monitoring a security camera. Your task: determine when "{self.job.condition}" occurs.\n'
f"{focus_text}\n"
f"You will receive a sequence of frames over time. Use the conversation history to understand "
f"what is stationary vs. actively changing.\n\n"
f"For each frame respond with JSON only:\n"
f'{{"condition_met": <true/false>, "next_run_in": <integer seconds 1-300>, "reasoning": "<brief explanation>"}}\n\n'
f"Guidelines for next_run_in:\n"
f"- Scene is empty / nothing of interest visible: 60-300.\n"
f"- Relevant object(s) visible anywhere in frame (even outside the target zone): 3-10. "
f"They may be moving toward the zone.\n"
f"- Condition is actively forming (object approaching zone or threshold): 1-5.\n"
f"- Set condition_met to true only when you are confident the condition is currently met.\n"
f"- Keep reasoning to 1-2 sentences."
)
def _send_notification(self, reasoning: str) -> None:
"""Publish a camera_monitoring event so downstream handlers (web push, MQTT) can notify users."""
payload = {
"camera": self.job.camera,
"condition": self.job.condition,
"reasoning": reasoning,
"job_id": self.job.id,
}
if self.dispatcher:
try:
self.dispatcher.publish("camera_monitoring", json.dumps(payload))
except Exception as e:
logger.warning(
"VLM watch job %s: failed to publish alert: %s", self.job.id, e
)
def _broadcast_status(self) -> None:
try:
self.requestor.send_data(UPDATE_JOB_STATE, self.job.to_dict())
except Exception as e:
logger.warning(
"VLM watch job %s: failed to broadcast status: %s", self.job.id, e
)
# Module-level singleton (only one watch job at a time)
_current_job: Optional[VLMWatchJob] = None
_cancel_event: Optional[threading.Event] = None
_job_lock = threading.Lock()
def start_vlm_watch_job(
camera: str,
condition: str,
max_duration_minutes: int,
config: FrigateConfig,
frame_processor,
genai_manager,
dispatcher,
labels: list[str] | None = None,
zones: list[str] | None = None,
) -> str:
"""Start a new VLM watch job. Returns the job ID.
Raises RuntimeError if a job is already running.
"""
global _current_job, _cancel_event
with _job_lock:
if _current_job is not None and _current_job.status in (
JobStatusTypesEnum.queued,
JobStatusTypesEnum.running,
):
raise RuntimeError(
f"A VLM watch job is already running (id={_current_job.id}). "
"Cancel it before starting a new one."
)
job = VLMWatchJob(
camera=camera,
condition=condition,
max_duration_minutes=max_duration_minutes,
labels=labels or [],
zones=zones or [],
)
cancel_ev = threading.Event()
_current_job = job
_cancel_event = cancel_ev
runner = VLMWatchRunner(
job=job,
config=config,
cancel_event=cancel_ev,
frame_processor=frame_processor,
genai_manager=genai_manager,
dispatcher=dispatcher,
)
runner.start()
logger.debug(
"Started VLM watch job %s: camera=%s, condition=%r, max_duration=%dm",
job.id,
camera,
condition,
max_duration_minutes,
)
return job.id
def stop_vlm_watch_job() -> bool:
"""Cancel the current VLM watch job. Returns True if a job was cancelled."""
global _current_job, _cancel_event
with _job_lock:
if _current_job is None or _current_job.status not in (
JobStatusTypesEnum.queued,
JobStatusTypesEnum.running,
):
return False
if _cancel_event:
_cancel_event.set()
_current_job.status = JobStatusTypesEnum.cancelled
logger.debug("Cancelled VLM watch job %s", _current_job.id)
return True
def get_vlm_watch_job() -> Optional[VLMWatchJob]:
"""Return the current (or most recent) VLM watch job."""
return _current_job
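The reply-parsing step in `_run_iteration` — stripping an optional Markdown code fence, decoding the JSON, and clamping `next_run_in` to the polling bounds — can be factored out as a sketch. `parse_vlm_reply` is a hypothetical name used only here; the regexes and clamping match the code above:

```python
import json
import re

_MIN_INTERVAL = 1
_MAX_INTERVAL = 300

def parse_vlm_reply(response_str: str) -> tuple[bool, int, str]:
    # Strip an optional Markdown code fence (```json ... ```) that the model
    # may wrap its reply in, then parse the JSON and clamp next_run_in.
    clean = re.sub(r"\n?```$", "", re.sub(r"^```[a-zA-Z0-9]*\n?", "", response_str))
    parsed = json.loads(clean)
    condition_met = bool(parsed.get("condition_met", False))
    next_run_in = max(_MIN_INTERVAL, min(_MAX_INTERVAL, int(parsed.get("next_run_in", 30))))
    reasoning = str(parsed.get("reasoning", ""))
    return condition_met, next_run_in, reasoning
```

A `json.JSONDecodeError`, `ValueError`, or `TypeError` escapes to the caller, which is why `_run_iteration` wraps this logic in a try/except and falls back to a 30-second retry.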

Some files were not shown because too many files have changed in this diff.