Compare commits

..

7 Commits

Author SHA1 Message Date
dependabot[bot]
1b8e7c9f25
Merge 7af9209e1e into 90b14f1a32 2026-01-22 00:46:23 +00:00
dependabot[bot]
90b14f1a32
Bump lodash from 4.17.21 to 4.17.23 in /web (#21749)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 17:46:06 -07:00
Josh Hawkins
d633c7d966
Miscellaneous fixes (0.17 beta) (#21699)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* tracking details tweaks

- fix 4:3 layout
- get and use aspect of record stream if different from detect stream

* aspect ratio docs tip

* spacing

* fix

* i18n fix

* additional logs on ffmpeg exit

* improve no camera view

instead of showing an "add camera" message, show a specific message for empty camera groups when frigate already has cameras added

* add note about separate onvif accounts in some camera firmware

* clarify review summary report docs

* review settings tweaks

- remove horizontal divider
- update description language for switches
- keep save button disabled until review classification settings change

* use correct Toaster component from shadcn

* clarify support for intel b-series (battlemage) gpus

* add clarifying comment to dummy camera docs
2026-01-20 08:17:58 -07:00
Josh Hawkins
0a8f499640
Miscellaneous fixes (0.17 beta) (#21683)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* misc triggers tweaks

i18n fixes
fix toaster color
fix clicking on labels selecting incorrect checkbox

* update copilot instructions

* lpr docs tweaks

* add retry params to gemini

* i18n fix

* ensure users only see recognized plates from accessible cameras in explore

* ensure all zone filters are converted to pixels

zone-level filters were never converted from percentage area to pixels. RuntimeFilterConfig was only applied to filters at the camera level, not zone.filters.

Fixes https://github.com/blakeblackshear/frigate/discussions/21694

* add test for percentage based zone filters

* use export id for key instead of name

* update gemini docs
2026-01-18 06:36:27 -07:00
Kirill Kulakov
cfeb86646f
fix(recording): handle unexpected filenames in cache maintainer to prevent crash (#21676)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* fix(recording): handle unexpected filenames in cache maintainer to prevent crash

* test(recording): add test for maintainer cache file parsing

* Prevent log spam from unexpected cache files

Addresses PR review feedback: Add deduplication to prevent warning
messages from being logged repeatedly for the same unexpected file
in the cache directory. Each unexpected filename is only logged once
per RecordingMaintainer instance lifecycle.

Also adds test to verify warning is only emitted once per filename.

* Fix code formatting for test_maintainer.py

* fixes + ruff
2026-01-16 19:23:23 -07:00
Nicolas Mowen
bf099c3edd
Miscellaneous fixes (0.17 beta) (#21655)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* Fix jetson stats reading

* Return result

* Avoid unknown class for cover image

* fix double encoding of passwords in camera wizard

* formatting

* empty homekit config fixes

* add locks to jina v1 embeddings

protect tokenizer and feature extractor in jina_v1_embedding with per-instance thread lock to avoid the "Already borrowed" RuntimeError during concurrent tokenization

* Capitalize correctly

* replace deprecated google-generativeai with google-genai

update gemini genai provider with new calls from SDK
provider_options specifies any http options
suppress unneeded info logging

* fix attribute area on detail stream hover

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-01-15 07:08:49 -07:00
dependabot[bot]
2e1706baa0
Bump @remix-run/router and react-router-dom in /web (#21580)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
Bumps [@remix-run/router](https://github.com/remix-run/react-router/tree/HEAD/packages/router) to 1.23.2 and updates ancestor dependency [react-router-dom](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router-dom). These dependencies need to be updated together.


Updates `@remix-run/router` from 1.19.0 to 1.23.2
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/@remix-run/router@1.23.2/packages/router/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/@remix-run/router@1.23.2/packages/router)

Updates `react-router-dom` from 6.26.0 to 6.30.3
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/main/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/react-router-dom@6.30.3/packages/react-router-dom)

---
updated-dependencies:
- dependency-name: "@remix-run/router"
  dependency-version: 1.23.2
  dependency-type: indirect
- dependency-name: react-router-dom
  dependency-version: 6.30.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-14 07:26:56 -06:00
43 changed files with 384 additions and 146 deletions

View File

@ -1,2 +1,3 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file. - For Frigate NVR, never write strings in the frontend directly. Since the project uses `react-i18next`, use `t()` and write the English string in the relevant translations file in `web/public/locales/en`.
Always conform new and refactored code to the existing coding style in the project. - Always conform new and refactored code to the existing coding style in the project.
- Always have a way to test your work and confirm your changes. When running backend tests, use `python3 -u -m unittest`.

View File

@ -47,7 +47,7 @@ onnxruntime == 1.22.*
# Embeddings # Embeddings
transformers == 4.45.* transformers == 4.45.*
# Generative AI # Generative AI
google-generativeai == 0.8.* google-genai == 1.58.*
ollama == 0.6.* ollama == 0.6.*
openai == 1.65.* openai == 1.65.*
# push notifications # push notifications

View File

@ -69,15 +69,15 @@ function setup_homekit_config() {
local cleaned_json="/tmp/cache/homekit_cleaned.json" local cleaned_json="/tmp/cache/homekit_cleaned.json"
jq ' jq '
# Keep only the homekit section if it exists, otherwise empty object # Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end if has("homekit") then {homekit: .homekit} else {} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || { ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
echo '{"homekit": {}}' > "${cleaned_json}" echo '{}' > "${cleaned_json}"
} }
# Convert back to YAML and write to the config file # Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || { yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config" echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo 'homekit: {}' > "${config_path}" echo '{}' > "${config_path}"
} }
# Clean up temp files # Clean up temp files

View File

@ -79,6 +79,12 @@ cameras:
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI. If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.
:::
:::tip :::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`. If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
@ -95,7 +101,7 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
| Brand or specific camera | PTZ Controls | Autotracking | Notes | | Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking | | Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 | | Amcrest ASH21 | ✅ | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. | | Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. | | Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |

View File

@ -66,8 +66,6 @@ Some models are labeled as **hybrid** (capable of both thinking and instruct tas
**Recommendation:** **Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model providers documentation or model library for guidance on the correct model variant to use. Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model providers documentation or model library for guidance on the correct model variant to use.
### Supported Models ### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag. You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
@ -93,7 +91,7 @@ genai:
## Google Gemini ## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage. Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models ### Supported Models
@ -114,7 +112,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai: genai:
provider: gemini provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}" api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.0-flash model: gemini-2.5-flash
``` ```
:::note :::note

View File

@ -125,10 +125,10 @@ review:
## Review Reports ## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, you can get a daily report while on a vacation of any suspicious activity or other concerns that may require review. Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
### Requesting Reports Programmatically ### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api#review-summarization) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps. Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs. For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.

View File

@ -68,8 +68,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image. - Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates. - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models. - **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `CPU` - Default: `None`
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU. - This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates. - **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small` - Default: `small`
- This can be `small` or `large`. - This can be `small` or `large`.
@ -432,6 +432,6 @@ If you are using a model that natively detects `license_plate`, add an _object m
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required. If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
### I see "Error running ... model" in my logs. How can I fix this? ### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs. This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.

View File

@ -11,6 +11,12 @@ Cameras configured to output H.264 video and AAC audio will offer the most compa
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections. - **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.
:::
### Choosing a detect resolution ### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions. The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.

View File

@ -42,7 +42,7 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- | | ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. | | Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP) | Can handle a large number of 1080p cameras with high activity | | | Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ | | Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors ## Detectors
@ -55,12 +55,10 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware** **Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices offering a wide range of compatibility with devices. - [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices offering a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#configuration) - [Supports many model architectures](../../configuration/object_detectors#configuration)
- Runs best with tiny or small size models - Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices. - [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector) - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices. - <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
@ -89,7 +87,6 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia** **Nvidia**
- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection. - [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models) - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large - Runs well with any size models including large
@ -152,9 +149,7 @@ The OpenVINO detector type is able to run on:
:::note :::note
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported. Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.
::: :::

View File

@ -37,7 +37,7 @@ cameras:
## Steps ## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). 1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later. 2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior. - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate. 3. Restart Frigate.

View File

@ -23,7 +23,12 @@ from markupsafe import escape
from peewee import SQL, fn, operator from peewee import SQL, fn, operator
from pydantic import ValidationError from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role from frigate.api.auth import (
allow_any_authenticated,
allow_public,
get_allowed_cameras_for_filter,
require_role,
)
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags from frigate.api.defs.tags import Tags
@ -687,13 +692,19 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
@router.get( @router.get(
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())] "/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
) )
def get_recognized_license_plates(split_joined: Optional[int] = None): def get_recognized_license_plates(
split_joined: Optional[int] = None,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
try: try:
query = ( query = (
Event.select( Event.select(
SQL("json_extract(data, '$.recognized_license_plate') AS plate") SQL("json_extract(data, '$.recognized_license_plate') AS plate")
) )
.where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL")) .where(
(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
& (Event.camera << allowed_cameras)
)
.distinct() .distinct()
) )
recognized_license_plates = [row[0] for row in query.tuples()] recognized_license_plates = [row[0] for row in query.tuples()]

View File

@ -848,9 +848,10 @@ async def onvif_probe(
try: try:
if isinstance(uri, str) and uri.startswith("rtsp://"): if isinstance(uri, str) and uri.startswith("rtsp://"):
if username and password and "@" not in uri: if username and password and "@" not in uri:
# Inject URL-encoded credentials and add only the # Inject raw credentials and add only the
# authenticated version. # authenticated version. The credentials will be encoded
cred = f"{quote_plus(username)}:{quote_plus(password)}@" # later by ffprobe_stream or the config system.
cred = f"{username}:{password}@"
injected = uri.replace( injected = uri.replace(
"rtsp://", f"rtsp://{cred}", 1 "rtsp://", f"rtsp://{cred}", 1
) )
@ -903,12 +904,8 @@ async def onvif_probe(
"/cam/realmonitor?channel=1&subtype=0", "/cam/realmonitor?channel=1&subtype=0",
"/11", "/11",
] ]
# Use URL-encoded credentials for pattern fallback URIs when provided # Use raw credentials for pattern fallback URIs when provided
auth_str = ( auth_str = f"{username}:{password}@" if username and password else ""
f"{quote_plus(username)}:{quote_plus(password)}@"
if username and password
else ""
)
rtsp_port = 554 rtsp_port = 554
for path in common_paths: for path in common_paths:
uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}" uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}"
@ -930,7 +927,7 @@ async def onvif_probe(
and uri.startswith("rtsp://") and uri.startswith("rtsp://")
and "@" not in uri and "@" not in uri
): ):
cred = f"{quote_plus(username)}:{quote_plus(password)}@" cred = f"{username}:{password}@"
cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1) cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1)
if cred_uri not in to_test: if cred_uri not in to_test:
to_test.append(cred_uri) to_test.append(cred_uri)

View File

@ -662,6 +662,13 @@ class FrigateConfig(FrigateBaseModel):
# generate zone contours # generate zone contours
if len(camera_config.zones) > 0: if len(camera_config.zones) > 0:
for zone in camera_config.zones.values(): for zone in camera_config.zones.values():
if zone.filters:
for object_name, filter_config in zone.filters.items():
zone.filters[object_name] = RuntimeFilterConfig(
frame_shape=camera_config.frame_shape,
**filter_config.model_dump(exclude_unset=True),
)
zone.generate_contour(camera_config.frame_shape) zone.generate_contour(camera_config.frame_shape)
# Set live view stream if none is set # Set live view stream if none is set

View File

@ -2,6 +2,7 @@
import logging import logging
import os import os
import threading
import warnings import warnings
from transformers import AutoFeatureExtractor, AutoTokenizer from transformers import AutoFeatureExtractor, AutoTokenizer
@ -54,6 +55,7 @@ class JinaV1TextEmbedding(BaseEmbedding):
self.tokenizer = None self.tokenizer = None
self.feature_extractor = None self.feature_extractor = None
self.runner = None self.runner = None
self._lock = threading.Lock()
files_names = list(self.download_urls.keys()) + [self.tokenizer_file] files_names = list(self.download_urls.keys()) + [self.tokenizer_file]
if not all( if not all(
@ -134,17 +136,18 @@ class JinaV1TextEmbedding(BaseEmbedding):
) )
def _preprocess_inputs(self, raw_inputs): def _preprocess_inputs(self, raw_inputs):
max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs) with self._lock:
return [ max_length = max(len(self.tokenizer.encode(text)) for text in raw_inputs)
self.tokenizer( return [
text, self.tokenizer(
padding="max_length", text,
truncation=True, padding="max_length",
max_length=max_length, truncation=True,
return_tensors="np", max_length=max_length,
) return_tensors="np",
for text in raw_inputs )
] for text in raw_inputs
]
class JinaV1ImageEmbedding(BaseEmbedding): class JinaV1ImageEmbedding(BaseEmbedding):
@ -174,6 +177,7 @@ class JinaV1ImageEmbedding(BaseEmbedding):
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name) self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
self.feature_extractor = None self.feature_extractor = None
self.runner: BaseModelRunner | None = None self.runner: BaseModelRunner | None = None
self._lock = threading.Lock()
files_names = list(self.download_urls.keys()) files_names = list(self.download_urls.keys())
if not all( if not all(
os.path.exists(os.path.join(self.download_path, n)) for n in files_names os.path.exists(os.path.join(self.download_path, n)) for n in files_names
@ -216,8 +220,9 @@ class JinaV1ImageEmbedding(BaseEmbedding):
) )
def _preprocess_inputs(self, raw_inputs): def _preprocess_inputs(self, raw_inputs):
processed_images = [self._process_image(img) for img in raw_inputs] with self._lock:
return [ processed_images = [self._process_image(img) for img in raw_inputs]
self.feature_extractor(images=image, return_tensors="np") return [
for image in processed_images self.feature_extractor(images=image, return_tensors="np")
] for image in processed_images
]

View File

@ -3,8 +3,8 @@
import logging import logging
from typing import Optional from typing import Optional
import google.generativeai as genai from google import genai
from google.api_core.exceptions import GoogleAPICallError from google.genai import errors, types
from frigate.config import GenAIProviderEnum from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider from frigate.genai import GenAIClient, register_genai_provider
@ -16,44 +16,59 @@ logger = logging.getLogger(__name__)
class GeminiClient(GenAIClient): class GeminiClient(GenAIClient):
"""Generative AI client for Frigate using Gemini.""" """Generative AI client for Frigate using Gemini."""
provider: genai.GenerativeModel provider: genai.Client
def _init_provider(self): def _init_provider(self):
"""Initialize the client.""" """Initialize the client."""
genai.configure(api_key=self.genai_config.api_key) # Merge provider_options into HttpOptions
return genai.GenerativeModel( http_options_dict = {
self.genai_config.model, **self.genai_config.provider_options "api_version": "v1",
"timeout": int(self.timeout * 1000), # requires milliseconds
"retry_options": types.HttpRetryOptions(
attempts=3,
initial_delay=1.0,
max_delay=60.0,
exp_base=2.0,
jitter=1.0,
http_status_codes=[429, 500, 502, 503, 504],
),
}
if isinstance(self.genai_config.provider_options, dict):
http_options_dict.update(self.genai_config.provider_options)
return genai.Client(
api_key=self.genai_config.api_key,
http_options=types.HttpOptions(**http_options_dict),
) )
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]: def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Gemini.""" """Submit a request to Gemini."""
data = [ contents = [
{ types.Part.from_bytes(data=img, mime_type="image/jpeg") for img in images
"mime_type": "image/jpeg",
"data": img,
}
for img in images
] + [prompt] ] + [prompt]
try: try:
# Merge runtime_options into generation_config if provided # Merge runtime_options into generation_config if provided
generation_config_dict = {"candidate_count": 1} generation_config_dict = {"candidate_count": 1}
generation_config_dict.update(self.genai_config.runtime_options) generation_config_dict.update(self.genai_config.runtime_options)
response = self.provider.generate_content( response = self.provider.models.generate_content(
data, model=self.genai_config.model,
generation_config=genai.types.GenerationConfig( contents=contents,
**generation_config_dict config=types.GenerateContentConfig(
), **generation_config_dict,
request_options=genai.types.RequestOptions(
timeout=self.timeout,
), ),
) )
except GoogleAPICallError as e: except errors.APIError as e:
logger.warning("Gemini returned an error: %s", str(e)) logger.warning("Gemini returned an error: %s", str(e))
return None return None
except Exception as e:
logger.warning("An unexpected error occurred with Gemini: %s", str(e))
return None
try: try:
description = response.text.strip() description = response.text.strip()
except ValueError: except (ValueError, AttributeError):
# No description was generated # No description was generated
return None return None
return description return description

View File

@ -89,6 +89,7 @@ def apply_log_levels(default: str, log_levels: dict[str, LogLevel]) -> None:
"ws4py": LogLevel.error, "ws4py": LogLevel.error,
"PIL": LogLevel.warning, "PIL": LogLevel.warning,
"numba": LogLevel.warning, "numba": LogLevel.warning,
"google_genai.models": LogLevel.warning,
**log_levels, **log_levels,
} }

View File

@ -97,6 +97,7 @@ class RecordingMaintainer(threading.Thread):
self.object_recordings_info: dict[str, list] = defaultdict(list) self.object_recordings_info: dict[str, list] = defaultdict(list)
self.audio_recordings_info: dict[str, list] = defaultdict(list) self.audio_recordings_info: dict[str, list] = defaultdict(list)
self.end_time_cache: dict[str, Tuple[datetime.datetime, float]] = {} self.end_time_cache: dict[str, Tuple[datetime.datetime, float]] = {}
self.unexpected_cache_files_logged: bool = False
async def move_files(self) -> None: async def move_files(self) -> None:
cache_files = [ cache_files = [
@ -112,7 +113,14 @@ class RecordingMaintainer(threading.Thread):
for cache in cache_files: for cache in cache_files:
cache_path = os.path.join(CACHE_DIR, cache) cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0] basename = os.path.splitext(cache)[0]
camera, date = basename.rsplit("@", maxsplit=1) try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
start_time = datetime.datetime.strptime( start_time = datetime.datetime.strptime(
date, CACHE_SEGMENT_FORMAT date, CACHE_SEGMENT_FORMAT
).astimezone(datetime.timezone.utc) ).astimezone(datetime.timezone.utc)
@ -164,7 +172,13 @@ class RecordingMaintainer(threading.Thread):
cache_path = os.path.join(CACHE_DIR, cache) cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0] basename = os.path.splitext(cache)[0]
camera, date = basename.rsplit("@", maxsplit=1) try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
# important that start_time is utc because recordings are stored and compared in utc # important that start_time is utc because recordings are stored and compared in utc
start_time = datetime.datetime.strptime( start_time = datetime.datetime.strptime(

View File

@ -632,6 +632,49 @@ class TestConfig(unittest.TestCase):
) )
assert frigate_config.cameras["back"].zones["test"].color != (0, 0, 0) assert frigate_config.cameras["back"].zones["test"].color != (0, 0, 0)
def test_zone_filter_area_percent_converts_to_pixels(self):
config = {
"mqtt": {"host": "mqtt"},
"record": {
"alerts": {
"retain": {
"days": 20,
}
}
},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"detect": {
"height": 1080,
"width": 1920,
"fps": 5,
},
"zones": {
"notification": {
"coordinates": "0.03,1,0.025,0,0.626,0,0.643,1",
"objects": ["person"],
"filters": {"person": {"min_area": 0.1}},
}
},
}
},
}
frigate_config = FrigateConfig(**config)
expected_min_area = int(1080 * 1920 * 0.1)
assert (
frigate_config.cameras["back"]
.zones["notification"]
.filters["person"]
.min_area
== expected_min_area
)
def test_zone_relative_matches_explicit(self): def test_zone_relative_matches_explicit(self):
config = { config = {
"mqtt": {"host": "mqtt"}, "mqtt": {"host": "mqtt"},

View File

@ -0,0 +1,66 @@
import sys
import unittest
from unittest.mock import MagicMock, patch
# Mock complex imports before importing maintainer
sys.modules["frigate.comms.inter_process"] = MagicMock()
sys.modules["frigate.comms.detections_updater"] = MagicMock()
sys.modules["frigate.comms.recordings_updater"] = MagicMock()
sys.modules["frigate.config.camera.updater"] = MagicMock()
# Now import the class under test
from frigate.config import FrigateConfig # noqa: E402
from frigate.record.maintainer import RecordingMaintainer # noqa: E402
class TestMaintainer(unittest.IsolatedAsyncioTestCase):
async def test_move_files_survives_bad_filename(self):
config = MagicMock(spec=FrigateConfig)
config.cameras = {}
stop_event = MagicMock()
maintainer = RecordingMaintainer(config, stop_event)
# We need to mock end_time_cache to avoid key errors if logic proceeds
maintainer.end_time_cache = {}
# Mock filesystem
# One bad file, one good file
files = ["bad_filename.mp4", "camera@20210101000000+0000.mp4"]
with patch("os.listdir", return_value=files):
with patch("os.path.isfile", return_value=True):
with patch(
"frigate.record.maintainer.psutil.process_iter", return_value=[]
):
with patch("frigate.record.maintainer.logger.warning") as warn:
# Mock validate_and_move_segment to avoid further logic
maintainer.validate_and_move_segment = MagicMock()
try:
await maintainer.move_files()
except ValueError as e:
if "not enough values to unpack" in str(e):
self.fail("move_files() crashed on bad filename!")
raise e
except Exception:
# Ignore other errors (like DB connection) as we only care about the unpack crash
pass
# The bad filename is encountered in multiple loops, but should only warn once.
matching = [
c
for c in warn.call_args_list
if c.args
and isinstance(c.args[0], str)
and "Skipping unexpected files in cache" in c.args[0]
]
self.assertEqual(
1,
len(matching),
f"Expected a single warning for unexpected files, got {len(matching)}",
)
if __name__ == "__main__":
unittest.main()

View File

@ -540,9 +540,16 @@ def get_jetson_stats() -> Optional[dict[int, dict]]:
try: try:
results["mem"] = "-" # no discrete gpu memory results["mem"] = "-" # no discrete gpu memory
with open("/sys/devices/gpu.0/load", "r") as f: if os.path.exists("/sys/devices/gpu.0/load"):
gpuload = float(f.readline()) / 10 with open("/sys/devices/gpu.0/load", "r") as f:
results["gpu"] = f"{gpuload}%" gpuload = float(f.readline()) / 10
results["gpu"] = f"{gpuload}%"
elif os.path.exists("/sys/devices/platform/gpu.0/load"):
with open("/sys/devices/platform/gpu.0/load", "r") as f:
gpuload = float(f.readline()) / 10
results["gpu"] = f"{gpuload}%"
else:
results["gpu"] = "-"
except Exception: except Exception:
return None return None

View File

@ -64,10 +64,12 @@ def stop_ffmpeg(ffmpeg_process: sp.Popen[Any], logger: logging.Logger):
try: try:
logger.info("Waiting for ffmpeg to exit gracefully...") logger.info("Waiting for ffmpeg to exit gracefully...")
ffmpeg_process.communicate(timeout=30) ffmpeg_process.communicate(timeout=30)
logger.info("FFmpeg has exited")
except sp.TimeoutExpired: except sp.TimeoutExpired:
logger.info("FFmpeg didn't exit. Force killing...") logger.info("FFmpeg didn't exit. Force killing...")
ffmpeg_process.kill() ffmpeg_process.kill()
ffmpeg_process.communicate() ffmpeg_process.communicate()
logger.info("FFmpeg has been killed")
ffmpeg_process = None ffmpeg_process = None

35
web/package-lock.json generated
View File

@ -48,7 +48,7 @@
"idb-keyval": "^6.2.1", "idb-keyval": "^6.2.1",
"immer": "^10.1.1", "immer": "^10.1.1",
"konva": "^9.3.18", "konva": "^9.3.18",
"lodash": "^4.17.21", "lodash": "^4.17.23",
"lucide-react": "^0.477.0", "lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1", "monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0", "next-themes": "^0.3.0",
@ -64,7 +64,7 @@
"react-i18next": "^15.2.0", "react-i18next": "^15.2.0",
"react-icons": "^5.5.0", "react-icons": "^5.5.0",
"react-konva": "^18.2.10", "react-konva": "^18.2.10",
"react-router-dom": "^6.26.0", "react-router-dom": "^6.30.3",
"react-swipeable": "^7.0.2", "react-swipeable": "^7.0.2",
"react-tracked": "^2.0.1", "react-tracked": "^2.0.1",
"react-transition-group": "^4.4.5", "react-transition-group": "^4.4.5",
@ -3293,9 +3293,9 @@
"license": "MIT" "license": "MIT"
}, },
"node_modules/@remix-run/router": { "node_modules/@remix-run/router": {
"version": "1.19.0", "version": "1.23.2",
"resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.19.0.tgz", "resolved": "https://registry.npmjs.org/@remix-run/router/-/router-1.23.2.tgz",
"integrity": "sha512-zDICCLKEwbVYTS6TjYaWtHXxkdoUvD/QXvyVZjGCsWz5vyH7aFeONlPffPdW+Y/t6KT0MgXb2Mfjun9YpWN1dA==", "integrity": "sha512-Ic6m2U/rMjTkhERIa/0ZtXJP17QUi2CbWE7cqx4J58M8aA3QTfW+2UlQ4psvTX9IO1RfNVhK3pcpdjej7L+t2w==",
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=14.0.0" "node": ">=14.0.0"
@ -7210,9 +7210,10 @@
} }
}, },
"node_modules/lodash": { "node_modules/lodash": {
"version": "4.17.21", "version": "4.17.23",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==" "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
"license": "MIT"
}, },
"node_modules/lodash.merge": { "node_modules/lodash.merge": {
"version": "4.6.2", "version": "4.6.2",
@ -8617,12 +8618,12 @@
} }
}, },
"node_modules/react-router": { "node_modules/react-router": {
"version": "6.26.0", "version": "6.30.3",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-6.26.0.tgz", "resolved": "https://registry.npmjs.org/react-router/-/react-router-6.30.3.tgz",
"integrity": "sha512-wVQq0/iFYd3iZ9H2l3N3k4PL8EEHcb0XlU2Na8nEwmiXgIUElEH6gaJDtUQxJ+JFzmIXaQjfdpcGWaM6IoQGxg==", "integrity": "sha512-XRnlbKMTmktBkjCLE8/XcZFlnHvr2Ltdr1eJX4idL55/9BbORzyZEaIkBFDhFGCEWBBItsVrDxwx3gnisMitdw==",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@remix-run/router": "1.19.0" "@remix-run/router": "1.23.2"
}, },
"engines": { "engines": {
"node": ">=14.0.0" "node": ">=14.0.0"
@ -8632,13 +8633,13 @@
} }
}, },
"node_modules/react-router-dom": { "node_modules/react-router-dom": {
"version": "6.26.0", "version": "6.30.3",
"resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.26.0.tgz", "resolved": "https://registry.npmjs.org/react-router-dom/-/react-router-dom-6.30.3.tgz",
"integrity": "sha512-RRGUIiDtLrkX3uYcFiCIxKFWMcWQGMojpYZfcstc63A1+sSnVgILGIm9gNUA6na3Fm1QuPGSBQH2EMbAZOnMsQ==", "integrity": "sha512-pxPcv1AczD4vso7G4Z3TKcvlxK7g7TNt3/FNGMhfqyntocvYKj+GCatfigGDjbLozC4baguJ0ReCigoDJXb0ag==",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@remix-run/router": "1.19.0", "@remix-run/router": "1.23.2",
"react-router": "6.26.0" "react-router": "6.30.3"
}, },
"engines": { "engines": {
"node": ">=14.0.0" "node": ">=14.0.0"

View File

@ -54,7 +54,7 @@
"idb-keyval": "^6.2.1", "idb-keyval": "^6.2.1",
"immer": "^10.1.1", "immer": "^10.1.1",
"konva": "^9.3.18", "konva": "^9.3.18",
"lodash": "^4.17.21", "lodash": "^4.17.23",
"lucide-react": "^0.477.0", "lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1", "monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0", "next-themes": "^0.3.0",
@ -70,7 +70,7 @@
"react-i18next": "^15.2.0", "react-i18next": "^15.2.0",
"react-icons": "^5.5.0", "react-icons": "^5.5.0",
"react-konva": "^18.2.10", "react-konva": "^18.2.10",
"react-router-dom": "^6.26.0", "react-router-dom": "^6.30.3",
"react-swipeable": "^7.0.2", "react-swipeable": "^7.0.2",
"react-tracked": "^2.0.1", "react-tracked": "^2.0.1",
"react-transition-group": "^4.4.5", "react-transition-group": "^4.4.5",

View File

@ -3,6 +3,7 @@
"untilForTime": "Until {{time}}", "untilForTime": "Until {{time}}",
"untilForRestart": "Until Frigate restarts.", "untilForRestart": "Until Frigate restarts.",
"untilRestart": "Until restart", "untilRestart": "Until restart",
"never": "Never",
"ago": "{{timeAgo}} ago", "ago": "{{timeAgo}} ago",
"justNow": "Just now", "justNow": "Just now",
"today": "Today", "today": "Today",

View File

@ -181,6 +181,16 @@
"restricted": { "restricted": {
"title": "No Cameras Available", "title": "No Cameras Available",
"description": "You don't have permission to view any cameras in this group." "description": "You don't have permission to view any cameras in this group."
},
"default": {
"title": "No Cameras Configured",
"description": "Get started by connecting a camera to Frigate.",
"buttonText": "Add Camera"
},
"group": {
"title": "No Cameras in Group",
"description": "This camera group has no assigned or enabled cameras.",
"buttonText": "Manage Groups"
} }
} }
} }

View File

@ -386,11 +386,11 @@
"title": "Camera Review Settings", "title": "Camera Review Settings",
"object_descriptions": { "object_descriptions": {
"title": "Generative AI Object Descriptions", "title": "Generative AI Object Descriptions",
"desc": "Temporarily enable/disable Generative AI object descriptions for this camera. When disabled, AI generated descriptions will not be requested for tracked objects on this camera." "desc": "Temporarily enable/disable Generative AI object descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for tracked objects on this camera."
}, },
"review_descriptions": { "review_descriptions": {
"title": "Generative AI Review Descriptions", "title": "Generative AI Review Descriptions",
"desc": "Temporarily enable/disable Generative AI review descriptions for this camera. When disabled, AI generated descriptions will not be requested for review items on this camera." "desc": "Temporarily enable/disable Generative AI review descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for review items on this camera."
}, },
"review": { "review": {
"title": "Review", "title": "Review",

View File

@ -35,7 +35,9 @@ export function EmptyCard({
{icon} {icon}
{TitleComponent} {TitleComponent}
{description && ( {description && (
<div className="mb-3 text-secondary-foreground">{description}</div> <div className="mb-3 text-center text-secondary-foreground">
{description}
</div>
)} )}
{buttonText?.length && ( {buttonText?.length && (
<Button size="sm" variant="select"> <Button size="sm" variant="select">

View File

@ -268,7 +268,7 @@ export default function CreateTriggerDialog({
<FormItem className="flex flex-row items-center justify-between"> <FormItem className="flex flex-row items-center justify-between">
<div className="space-y-0.5"> <div className="space-y-0.5">
<FormLabel className="text-base"> <FormLabel className="text-base">
{t("enabled", { ns: "common" })} {t("button.enabled", { ns: "common" })}
</FormLabel> </FormLabel>
<div className="text-sm text-muted-foreground"> <div className="text-sm text-muted-foreground">
{t("triggers.dialog.form.enabled.description")} {t("triggers.dialog.form.enabled.description")}
@ -394,7 +394,10 @@ export default function CreateTriggerDialog({
</FormLabel> </FormLabel>
<div className="space-y-2"> <div className="space-y-2">
{availableActions.map((action) => ( {availableActions.map((action) => (
<div key={action} className="flex items-center space-x-2"> <label
key={action}
className="flex cursor-pointer items-center space-x-2"
>
<FormControl> <FormControl>
<Checkbox <Checkbox
checked={form checked={form
@ -416,10 +419,10 @@ export default function CreateTriggerDialog({
}} }}
/> />
</FormControl> </FormControl>
<FormLabel className="text-sm font-normal"> <span className="text-sm font-normal">
{t(`triggers.actions.${action}`)} {t(`triggers.actions.${action}`)}
</FormLabel> </span>
</div> </label>
))} ))}
</div> </div>
<FormDescription> <FormDescription>

View File

@ -13,7 +13,7 @@ import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl"; import { baseUrl } from "@/api/baseUrl";
import { REVIEW_PADDING } from "@/types/review"; import { REVIEW_PADDING } from "@/types/review";
import { import {
ASPECT_VERTICAL_LAYOUT, ASPECT_PORTRAIT_LAYOUT,
ASPECT_WIDE_LAYOUT, ASPECT_WIDE_LAYOUT,
Recording, Recording,
} from "@/types/record"; } from "@/types/record";
@ -39,6 +39,7 @@ import { useApiHost } from "@/api";
import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator"; import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
import ObjectTrackOverlay from "../ObjectTrackOverlay"; import ObjectTrackOverlay from "../ObjectTrackOverlay";
import { useIsAdmin } from "@/hooks/use-is-admin"; import { useIsAdmin } from "@/hooks/use-is-admin";
import { VideoResolutionType } from "@/types/live";
type TrackingDetailsProps = { type TrackingDetailsProps = {
className?: string; className?: string;
@ -253,16 +254,25 @@ export function TrackingDetails({
const [timelineSize] = useResizeObserver(timelineContainerRef); const [timelineSize] = useResizeObserver(timelineContainerRef);
const [fullResolution, setFullResolution] = useState<VideoResolutionType>({
width: 0,
height: 0,
});
const aspectRatio = useMemo(() => { const aspectRatio = useMemo(() => {
if (!config) { if (!config) {
return 16 / 9; return 16 / 9;
} }
if (fullResolution.width && fullResolution.height) {
return fullResolution.width / fullResolution.height;
}
return ( return (
config.cameras[event.camera].detect.width / config.cameras[event.camera].detect.width /
config.cameras[event.camera].detect.height config.cameras[event.camera].detect.height
); );
}, [config, event]); }, [config, event, fullResolution]);
const label = event.sub_label const label = event.sub_label
? event.sub_label ? event.sub_label
@ -460,7 +470,7 @@ export function TrackingDetails({
return "normal"; return "normal";
} else if (aspectRatio > ASPECT_WIDE_LAYOUT) { } else if (aspectRatio > ASPECT_WIDE_LAYOUT) {
return "wide"; return "wide";
} else if (aspectRatio < ASPECT_VERTICAL_LAYOUT) { } else if (aspectRatio < ASPECT_PORTRAIT_LAYOUT) {
return "tall"; return "tall";
} else { } else {
return "normal"; return "normal";
@ -556,6 +566,7 @@ export function TrackingDetails({
onSeekToTime={handleSeekToTime} onSeekToTime={handleSeekToTime}
onUploadFrame={onUploadFrameToPlus} onUploadFrame={onUploadFrameToPlus}
onPlaying={() => setIsVideoLoading(false)} onPlaying={() => setIsVideoLoading(false)}
setFullResolution={setFullResolution}
isDetailMode={true} isDetailMode={true}
camera={event.camera} camera={event.camera}
currentTimeOverride={currentTime} currentTimeOverride={currentTime}
@ -623,7 +634,7 @@ export function TrackingDetails({
<div <div
className={cn( className={cn(
isDesktop && "justify-start overflow-hidden", isDesktop && "justify-start overflow-hidden",
aspectRatio > 1 && aspectRatio < 1.5 aspectRatio > 1 && aspectRatio < ASPECT_PORTRAIT_LAYOUT
? "lg:basis-3/5" ? "lg:basis-3/5"
: "lg:basis-2/5", : "lg:basis-2/5",
)} )}

View File

@@ -16,7 +16,7 @@ import { zodResolver } from "@hookform/resolvers/zod";
 import { useForm, useFieldArray } from "react-hook-form";
 import { z } from "zod";
 import axios from "axios";
-import { toast, Toaster } from "sonner";
+import { toast } from "sonner";
 import { useTranslation } from "react-i18next";
 import { useState, useMemo, useEffect } from "react";
 import { LuTrash2, LuPlus } from "react-icons/lu";
@@ -26,6 +26,7 @@ import useSWR from "swr";
 import { processCameraName } from "@/utils/cameraUtil";
 import { Label } from "@/components/ui/label";
 import { ConfigSetBody } from "@/types/cameraWizard";
+import { Toaster } from "../ui/sonner";

 const RoleEnum = z.enum(["audio", "detect", "record"]);
 type Role = z.infer<typeof RoleEnum>;


@@ -887,7 +887,10 @@ function LifecycleItem({
                 </span>
                 <span className="font-medium text-foreground">
                   {attributeAreaPx}{" "}
-                  {t("information.pixels", { ns: "common" })}{" "}
+                  {t("information.pixels", {
+                    ns: "common",
+                    area: attributeAreaPx,
+                  })}{" "}
                   <span className="text-secondary-foreground">·</span>{" "}
                   {attributeAreaPct}%
                 </span>
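Worth noting for translators: the hunk above now forwards the pixel area into the translation call. A minimal sketch of what that enables, assuming a locale entry that interpolates `{{area}}` (the resource strings below are illustrative, not Frigate's actual locale files):

```ts
import i18next from "i18next";

// Assumed locale entry — not Frigate's real translation file.
await i18next.init({
  lng: "en",
  resources: {
    en: {
      common: {
        information: {
          // locales can now reference the numeric value directly
          pixels: "pixels ({{area}} px)",
        },
      },
    },
  },
});

// mirrors t("information.pixels", { ns: "common", area: attributeAreaPx })
i18next.t("information.pixels", { ns: "common", area: 4096 });
// -> "pixels (4096 px)"
```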


@@ -142,7 +142,10 @@ export default function Step3ThresholdAndActions({
           <FormLabel>{t("triggers.dialog.form.actions.title")}</FormLabel>
           <div className="space-y-2">
             {availableActions.map((action) => (
-              <div key={action} className="flex items-center space-x-2">
+              <label
+                key={action}
+                className="flex cursor-pointer items-center space-x-2"
+              >
                 <FormControl>
                   <Checkbox
                     checked={form
@@ -164,10 +167,10 @@ export default function Step3ThresholdAndActions({
                     }}
                   />
                 </FormControl>
-                <FormLabel className="text-sm font-normal">
+                <span className="text-sm font-normal">
                   {t(`triggers.actions.${action}`)}
-                </FormLabel>
-              </div>
+                </span>
+              </label>
             ))}
           </div>
           <FormDescription>
@@ -197,9 +200,7 @@ export default function Step3ThresholdAndActions({
             {isLoading && <ActivityIndicator className="mr-2 size-5" />}
             {isLoading
               ? t("button.saving", { ns: "common" })
-              : t("triggers.dialog.form.save", {
-                  defaultValue: "Save Trigger",
-                })}
+              : t("button.save", { ns: "common" })}
           </Button>
         </div>
       </form>


@@ -206,7 +206,7 @@ function Exports() {
             >
               {Object.values(exports).map((item) => (
                 <ExportCard
-                  key={item.name}
+                  key={item.id}
                   className={
                     search == "" || filteredExports.includes(item) ? "" : "hidden"
                   }


@@ -44,4 +44,5 @@ export type RecordingStartingPoint = {
 export type RecordingPlayerError = "stalled" | "startup";

 export const ASPECT_VERTICAL_LAYOUT = 1.5;
+export const ASPECT_PORTRAIT_LAYOUT = 1.333;
 export const ASPECT_WIDE_LAYOUT = 2;
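To see why 1.333 works as the portrait cutoff, here is a hedged sketch of the layout classification used in the TrackingDetails hunk above (only the two constants come from this diff; the `classify` helper is an illustrative simplification):

```ts
const ASPECT_PORTRAIT_LAYOUT = 1.333;
const ASPECT_WIDE_LAYOUT = 2;

// Illustrative helper mirroring the aspectRatio branch in TrackingDetails.
function classify(aspectRatio: number): "wide" | "tall" | "normal" {
  if (aspectRatio > ASPECT_WIDE_LAYOUT) return "wide";
  if (aspectRatio < ASPECT_PORTRAIT_LAYOUT) return "tall";
  return "normal";
}

classify(16 / 9); // ≈ 1.778  -> "normal"
classify(4 / 3);  // ≈ 1.3333 -> "normal" (the old 1.5 cutoff classified 4:3 as "tall")
classify(9 / 16); // ≈ 0.5625 -> "tall"
classify(32 / 9); // ≈ 3.556  -> "wide"
```

The constant sits just below 4/3 ≈ 1.3333, so true 4:3 streams land in the "normal" bucket while anything squarer or taller is treated as portrait.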


@@ -81,7 +81,8 @@ export async function detectReolinkCamera(
 export function maskUri(uri: string): string {
   try {
     // Handle RTSP URLs with user:pass@host format
-    const rtspMatch = uri.match(/rtsp:\/\/([^:]+):([^@]+)@(.+)/);
+    // Use greedy match for password to handle passwords with @
+    const rtspMatch = uri.match(/rtsp:\/\/([^:]+):(.+)@(.+)/);
     if (rtspMatch) {
       return `rtsp://${rtspMatch[1]}:${"*".repeat(4)}@${rtspMatch[3]}`;
     }
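A quick sketch of why the greedy group matters when the password itself contains an `@` (the stripped-down `maskUri` below drops the try/catch and non-RTSP branches of the real function):

```ts
const maskUri = (uri: string): string => {
  // greedy (.+) backtracks to the LAST "@", so the whole password is captured
  const rtspMatch = uri.match(/rtsp:\/\/([^:]+):(.+)@(.+)/);
  return rtspMatch
    ? `rtsp://${rtspMatch[1]}:${"*".repeat(4)}@${rtspMatch[3]}`
    : uri;
};

maskUri("rtsp://admin:p@ssw0rd@192.168.1.10:554/stream");
// -> "rtsp://admin:****@192.168.1.10:554/stream"

// The old ([^@]+) group stopped at the FIRST "@", leaking part of the
// password into the host: "rtsp://admin:****@ssw0rd@192.168.1.10:554/stream"
```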


@@ -266,7 +266,10 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
     return undefined;
   }

-  const keys = Object.keys(dataset.categories).filter((key) => key != "none");
+  const keys = Object.keys(dataset.categories).filter(
+    (key) => key != "none" && key.toLowerCase() != "unknown",
+  );
   if (keys.length === 0) {
     return undefined;
   }


@@ -75,6 +75,7 @@ import SearchDetailDialog, {
 } from "@/components/overlay/detail/SearchDetailDialog";
 import { SearchResult } from "@/types/search";
 import { HiSparkles } from "react-icons/hi";
+import { capitalizeFirstLetter } from "@/utils/stringUtil";

 type ModelTrainingViewProps = {
   model: CustomClassificationModelConfig;
@@ -88,7 +89,7 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
   // title
   useEffect(() => {
-    document.title = `${model.name.toUpperCase()} - ${t("documentTitle")}`;
+    document.title = `${capitalizeFirstLetter(model.name)} - ${t("documentTitle")}`;
   }, [model.name, t]);

   // model state


@@ -447,7 +447,7 @@ export default function LiveDashboardView({
       )}

       {cameras.length == 0 && !includeBirdseye ? (
-        <NoCameraView />
+        <NoCameraView cameraGroup={cameraGroup} />
       ) : (
         <>
           {!fullscreen && events && events.length > 0 && (
@@ -666,28 +666,39 @@ export default function LiveDashboardView({
   );
 }

-function NoCameraView() {
+function NoCameraView({ cameraGroup }: { cameraGroup?: string }) {
   const { t } = useTranslation(["views/live"]);
   const { auth } = useContext(AuthContext);
   const isAdmin = useIsAdmin();

-  // Check if this is a restricted user with no cameras in this group
+  const isDefault = cameraGroup === "default";
   const isRestricted = !isAdmin && auth.isAuthenticated;

+  let type: "default" | "group" | "restricted";
+  if (isRestricted) {
+    type = "restricted";
+  } else if (isDefault) {
+    type = "default";
+  } else {
+    type = "group";
+  }
+
   return (
     <div className="flex size-full items-center justify-center">
       <EmptyCard
         icon={<BsFillCameraVideoOffFill className="size-8" />}
-        title={
-          isRestricted ? t("noCameras.restricted.title") : t("noCameras.title")
-        }
-        description={
-          isRestricted
-            ? t("noCameras.restricted.description")
-            : t("noCameras.description")
-        }
-        buttonText={!isRestricted ? t("noCameras.buttonText") : undefined}
-        link={!isRestricted ? "/settings?page=cameraManagement" : undefined}
+        title={t(`noCameras.${type}.title`)}
+        description={t(`noCameras.${type}.description`)}
+        buttonText={
+          type !== "restricted" && isDefault
+            ? t(`noCameras.${type}.buttonText`)
+            : undefined
+        }
+        link={
+          type !== "restricted" && isDefault
+            ? "/settings?page=cameraManagement"
+            : undefined
+        }
       />
     </div>
   );


@@ -1,6 +1,6 @@
 import Heading from "@/components/ui/heading";
 import { useCallback, useEffect, useMemo, useState } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { Button } from "@/components/ui/button";
 import useSWR from "swr";
 import { FrigateConfig } from "@/types/frigateConfig";


@@ -1,6 +1,7 @@
 import Heading from "@/components/ui/heading";
 import { useCallback, useContext, useEffect, useMemo, useState } from "react";
-import { Toaster, toast } from "sonner";
+import { toast } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import {
   Form,
   FormControl,
@@ -158,11 +159,12 @@ export default function CameraReviewSettingsView({
         });
       }
       setChangedValue(true);
+      setUnsavedChanges(true);
       setSelectDetections(isChecked as boolean);
     },
     // we know that these deps are correct
     // eslint-disable-next-line react-hooks/exhaustive-deps
-    [watchedAlertsZones],
+    [watchedAlertsZones, setUnsavedChanges],
   );

   const saveToConfig = useCallback(
@@ -197,6 +199,8 @@ export default function CameraReviewSettingsView({
             position: "top-center",
           },
         );
+        setChangedValue(false);
+        setUnsavedChanges(false);
         updateConfig();
       } else {
         toast.error(
@@ -229,7 +233,14 @@ export default function CameraReviewSettingsView({
         setIsLoading(false);
       });
     },
-    [updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
+    [
+      updateConfig,
+      setIsLoading,
+      selectedCamera,
+      cameraConfig,
+      t,
+      setUnsavedChanges,
+    ],
   );

   const onCancel = useCallback(() => {
@@ -495,6 +506,7 @@ export default function CameraReviewSettingsView({
                         )}
                         onCheckedChange={(checked) => {
                           setChangedValue(true);
+                          setUnsavedChanges(true);
                           return checked
                             ? field.onChange([
                                 ...field.value,
@@ -600,6 +612,8 @@ export default function CameraReviewSettingsView({
                           zone.name,
                         )}
                         onCheckedChange={(checked) => {
+                          setChangedValue(true);
+                          setUnsavedChanges(true);
                           return checked
                             ? field.onChange([
                                 ...field.value,
@@ -699,7 +713,6 @@ export default function CameraReviewSettingsView({
               )}
             />
           </div>
-          <Separator className="my-2 flex bg-secondary" />

           <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
             <Button
@@ -712,7 +725,7 @@ export default function CameraReviewSettingsView({
             </Button>
             <Button
               variant="select"
-              disabled={isLoading}
+              disabled={!changedValue || isLoading}
               className="flex flex-1"
               aria-label={t("button.save", { ns: "common" })}
               type="submit"


@@ -1,7 +1,7 @@
 import Heading from "@/components/ui/heading";
 import { Label } from "@/components/ui/label";
 import { useCallback, useContext, useEffect, useState } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { Separator } from "../../components/ui/separator";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { toast } from "sonner";


@@ -1,6 +1,7 @@
 import { useCallback, useEffect, useMemo, useState } from "react";
 import { Trans, useTranslation } from "react-i18next";
-import { Toaster, toast } from "sonner";
+import { toast } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import useSWR from "swr";
 import axios from "axios";
 import { Button } from "@/components/ui/button";
@@ -598,7 +599,7 @@ export default function TriggerView({
                             date_style: "medium",
                           },
                         )
-                      : "Never"}
+                      : t("never", { ns: "common" })}
                   </span>
                   {trigger_status?.triggers[trigger.name]
                     ?.triggering_event_id && (
@@ -663,7 +664,7 @@ export default function TriggerView({
               <TableHeader className="sticky top-0 bg-muted/50">
                 <TableRow>
                   <TableHead className="w-4"></TableHead>
-                  <TableHead>{t("name", { ns: "common" })}</TableHead>
+                  <TableHead>{t("triggers.table.name")}</TableHead>
                   <TableHead>{t("triggers.table.type")}</TableHead>
                   <TableHead>
                     {t("triggers.table.lastTriggered")}
@@ -759,7 +760,7 @@ export default function TriggerView({
                             date_style: "medium",
                           },
                         )
-                      : "Never"}
+                      : t("time.never", { ns: "common" })}
                   </span>
                   {trigger_status?.triggers[trigger.name]
                     ?.triggering_event_id && (


@@ -2,7 +2,7 @@ import Heading from "@/components/ui/heading";
 import { Label } from "@/components/ui/label";
 import { Switch } from "@/components/ui/switch";
 import { useCallback, useContext, useEffect } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { toast } from "sonner";
 import { Separator } from "../../components/ui/separator";
 import { Button } from "../../components/ui/button";