Compare commits

...

6 Commits

Author SHA1 Message Date
Josh Hawkins
7b1b02de17 fix 2026-01-18 08:26:11 -06:00
Josh Hawkins
a42b28bb5e spacing 2026-01-18 08:26:01 -06:00
Josh Hawkins
39914cc3ad aspect ratio docs tip 2026-01-18 08:24:57 -06:00
Josh Hawkins
3f61ae1b78 tracking details tweaks
- fix 4:3 layout
- get and use aspect of record stream if different from detect stream
2026-01-18 07:50:19 -06:00
Josh Hawkins
0a8f499640
Miscellaneous fixes (0.17 beta) (#21683)
* misc triggers tweaks

i18n fixes
fix toaster color
fix clicking on labels selecting incorrect checkbox

* update copilot instructions

* lpr docs tweaks

* add retry params to gemini

* i18n fix

* ensure users only see recognized plates from accessible cameras in explore

* ensure all zone filters are converted to pixels

Zone-level filters were never converted from percentage-based area values to pixels. RuntimeFilterConfig was applied only to camera-level filters, not to zone.filters.

Fixes https://github.com/blakeblackshear/frigate/discussions/21694
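For context, a minimal sketch of the conversion this fix applies, assuming (as the new test below suggests) that filter areas below 1.0 are interpreted as fractions of the detect frame area. This is an illustration, not Frigate's actual `RuntimeFilterConfig`:

```python
def to_pixel_area(area: float, frame_w: int, frame_h: int) -> int:
    # Values below 1.0 are treated as a fraction of the frame area;
    # values >= 1.0 are assumed to already be in pixels.
    if area < 1.0:
        return int(frame_w * frame_h * area)
    return int(area)

# Same numbers as the new test: a 0.1 min_area on a 1920x1080 detect stream.
print(to_pixel_area(0.1, 1920, 1080))  # 207360
```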

* add test for percentage based zone filters

* use export id for key instead of name

* update gemini docs
2026-01-18 06:36:27 -07:00
Kirill Kulakov
cfeb86646f
fix(recording): handle unexpected filenames in cache maintainer to prevent crash (#21676)
* fix(recording): handle unexpected filenames in cache maintainer to prevent crash

* test(recording): add test for maintainer cache file parsing

* Prevent log spam from unexpected cache files

Addresses PR review feedback: Add deduplication to prevent warning
messages from being logged repeatedly for the same unexpected file
in the cache directory. Each unexpected filename is only logged once
per RecordingMaintainer instance lifecycle.

Also adds test to verify warning is only emitted once per filename.
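A minimal sketch of the per-filename deduplication described here (the final diff below simplifies this to a single boolean flag per maintainer instance):

```python
import logging

logger = logging.getLogger(__name__)


class CacheScanner:
    # Illustrative only; not Frigate's RecordingMaintainer.
    def __init__(self) -> None:
        self._warned: set[str] = set()  # filenames already warned about

    def parse(self, basename: str) -> tuple[str, str] | None:
        try:
            camera, date = basename.rsplit("@", maxsplit=1)
            return camera, date
        except ValueError:
            # Warn once per unexpected filename, then stay quiet.
            if basename not in self._warned:
                logger.warning("Skipping unexpected file in cache: %s", basename)
                self._warned.add(basename)
            return None
```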

* Fix code formatting for test_maintainer.py

* fixes + ruff
2026-01-16 19:23:23 -07:00
17 changed files with 209 additions and 35 deletions


@@ -1,2 +1,3 @@
Never write strings in the frontend directly, always write to and reference the relevant translations file.
Always conform new and refactored code to the existing coding style in the project.
- For Frigate NVR, never write strings in the frontend directly. Since the project uses `react-i18next`, use `t()` and write the English string in the relevant translations file in `web/public/locales/en`.
- Always conform new and refactored code to the existing coding style in the project.
- Always have a way to test your work and confirm your changes. When running backend tests, use `python3 -u -m unittest`.


@@ -66,8 +66,6 @@ Some models are labeled as **hybrid** (capable of both thinking and instruct tasks)
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models
You must use a vision-capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config; you must first download it to your local instance of Ollama, e.g. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
@@ -93,7 +91,7 @@ genai:
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API; however, the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
@@ -114,7 +112,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-2.0-flash
model: gemini-2.5-flash
```
:::note


@@ -68,8 +68,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates (see the sizing sketch after this list).
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `CPU`
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- Default: `None`
- This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no reported performance gain from running LPR on a GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
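As a rough sizing aid for the area threshold above (the numbers below are illustrative assumptions, not recommendations):

```python
# The threshold is an area in pixels (length x width) on the detect stream.
plate_w, plate_h = 64, 32     # assumed smallest plate worth reading, in pixels
print(plate_w * plate_h)      # 2048 -> plates smaller than ~64x32 px are ignored
```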
@@ -432,6 +432,6 @@ If you are using a model that natively detects `license_plate`, add an _object m
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
### I see "Error running ... model" in my logs. How can I fix this?
### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.


@@ -11,6 +11,12 @@ Cameras configured to output H.264 video and AAC audio will offer the most compa
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recording playback.
:::
### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
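A small illustration of why extra detect resolution is lost in the resize; this is a sketch of the geometry, not Frigate's cropping code:

```python
MODEL_SIZE = 320  # detection model input size, per the paragraph above

def downscale_factor(region_w: int, region_h: int) -> float:
    # Motion regions larger than the model input must be shrunk before detection.
    return max(region_w, region_h) / MODEL_SIZE

# A 960x960 motion region is shrunk 3x, so a 60 px tall object
# reaches the detector at roughly 20 px.
print(downscale_factor(960, 960))  # 3.0
```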


@@ -23,7 +23,12 @@ from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.auth import (
allow_any_authenticated,
allow_public,
get_allowed_cameras_for_filter,
require_role,
)
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
@@ -687,13 +692,19 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
@router.get(
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
)
def get_recognized_license_plates(split_joined: Optional[int] = None):
def get_recognized_license_plates(
split_joined: Optional[int] = None,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
try:
query = (
Event.select(
SQL("json_extract(data, '$.recognized_license_plate') AS plate")
)
.where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
.where(
(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
& (Event.camera << allowed_cameras)
)
.distinct()
)
recognized_license_plates = [row[0] for row in query.tuples()]
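For context, peewee's `<<` operator compiles to a SQL `IN` clause, so plates from cameras outside the injected `allowed_cameras` list are excluded at query time. A self-contained sketch with hypothetical table and camera names:

```python
from peewee import CharField, Model, SqliteDatabase

db = SqliteDatabase(":memory:")


class Event(Model):
    camera = CharField()

    class Meta:
        database = db


db.create_tables([Event])
Event.create(camera="front")
Event.create(camera="garage")

allowed = ["front", "back"]  # hypothetical per-user camera list
# `<<` compiles to: WHERE camera IN ('front', 'back')
print([e.camera for e in Event.select().where(Event.camera << allowed)])  # ['front']
```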


@@ -662,6 +662,13 @@ class FrigateConfig(FrigateBaseModel):
# generate zone contours
if len(camera_config.zones) > 0:
for zone in camera_config.zones.values():
if zone.filters:
for object_name, filter_config in zone.filters.items():
zone.filters[object_name] = RuntimeFilterConfig(
frame_shape=camera_config.frame_shape,
**filter_config.model_dump(exclude_unset=True),
)
zone.generate_contour(camera_config.frame_shape)
# Set live view stream if none is set


@@ -24,6 +24,14 @@ class GeminiClient(GenAIClient):
http_options_dict = {
"api_version": "v1",
"timeout": int(self.timeout * 1000), # requires milliseconds
"retry_options": types.HttpRetryOptions(
attempts=3,
initial_delay=1.0,
max_delay=60.0,
exp_base=2.0,
jitter=1.0,
http_status_codes=[429, 500, 502, 503, 504],
),
}
if isinstance(self.genai_config.provider_options, dict):
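For a feel of the schedule these options produce, a sketch assuming textbook exponential backoff with additive jitter; the exact semantics are up to the `google-genai` client's `HttpRetryOptions`:

```python
import random

attempts, initial_delay, max_delay, exp_base, jitter = 3, 1.0, 60.0, 2.0, 1.0

for retry in range(attempts - 1):  # retries after the initial attempt
    delay = min(initial_delay * exp_base**retry, max_delay) + random.uniform(0, jitter)
    print(f"retry {retry + 1}: sleep ~{delay:.1f}s on HTTP 429/5xx")
```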


@@ -97,6 +97,7 @@ class RecordingMaintainer(threading.Thread):
self.object_recordings_info: dict[str, list] = defaultdict(list)
self.audio_recordings_info: dict[str, list] = defaultdict(list)
self.end_time_cache: dict[str, Tuple[datetime.datetime, float]] = {}
self.unexpected_cache_files_logged: bool = False
async def move_files(self) -> None:
cache_files = [
@@ -112,7 +113,14 @@
for cache in cache_files:
cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0]
camera, date = basename.rsplit("@", maxsplit=1)
try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
start_time = datetime.datetime.strptime(
date, CACHE_SEGMENT_FORMAT
).astimezone(datetime.timezone.utc)
@@ -164,7 +172,13 @@
cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0]
camera, date = basename.rsplit("@", maxsplit=1)
try:
camera, date = basename.rsplit("@", maxsplit=1)
except ValueError:
if not self.unexpected_cache_files_logged:
logger.warning("Skipping unexpected files in cache")
self.unexpected_cache_files_logged = True
continue
# important that start_time is utc because recordings are stored and compared in utc
start_time = datetime.datetime.strptime(
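For illustration, the parsing this change guards with try/except, using the cache naming visible in the test fixture below (`camera@20210101000000+0000.mp4`); the format string is inferred from that fixture, not copied from Frigate's constants:

```python
import datetime

CACHE_SEGMENT_FORMAT = "%Y%m%d%H%M%S%z"  # inferred from the test fixture

for basename in ("back@20210101000000+0000", "bad_filename"):
    try:
        camera, date = basename.rsplit("@", maxsplit=1)
        start = datetime.datetime.strptime(date, CACHE_SEGMENT_FORMAT).astimezone(
            datetime.timezone.utc
        )
        print(camera, start.isoformat())
    except ValueError:
        print(f"skipping unexpected cache file: {basename}")
```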


@@ -632,6 +632,49 @@ class TestConfig(unittest.TestCase):
)
assert frigate_config.cameras["back"].zones["test"].color != (0, 0, 0)
def test_zone_filter_area_percent_converts_to_pixels(self):
config = {
"mqtt": {"host": "mqtt"},
"record": {
"alerts": {
"retain": {
"days": 20,
}
}
},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"detect": {
"height": 1080,
"width": 1920,
"fps": 5,
},
"zones": {
"notification": {
"coordinates": "0.03,1,0.025,0,0.626,0,0.643,1",
"objects": ["person"],
"filters": {"person": {"min_area": 0.1}},
}
},
}
},
}
frigate_config = FrigateConfig(**config)
expected_min_area = int(1080 * 1920 * 0.1)
assert (
frigate_config.cameras["back"]
.zones["notification"]
.filters["person"]
.min_area
== expected_min_area
)
def test_zone_relative_matches_explicit(self):
config = {
"mqtt": {"host": "mqtt"},


@@ -0,0 +1,66 @@
import sys
import unittest
from unittest.mock import MagicMock, patch
# Mock complex imports before importing maintainer
sys.modules["frigate.comms.inter_process"] = MagicMock()
sys.modules["frigate.comms.detections_updater"] = MagicMock()
sys.modules["frigate.comms.recordings_updater"] = MagicMock()
sys.modules["frigate.config.camera.updater"] = MagicMock()
# Now import the class under test
from frigate.config import FrigateConfig # noqa: E402
from frigate.record.maintainer import RecordingMaintainer # noqa: E402
class TestMaintainer(unittest.IsolatedAsyncioTestCase):
async def test_move_files_survives_bad_filename(self):
config = MagicMock(spec=FrigateConfig)
config.cameras = {}
stop_event = MagicMock()
maintainer = RecordingMaintainer(config, stop_event)
# We need to mock end_time_cache to avoid key errors if logic proceeds
maintainer.end_time_cache = {}
# Mock filesystem
# One bad file, one good file
files = ["bad_filename.mp4", "camera@20210101000000+0000.mp4"]
with patch("os.listdir", return_value=files):
with patch("os.path.isfile", return_value=True):
with patch(
"frigate.record.maintainer.psutil.process_iter", return_value=[]
):
with patch("frigate.record.maintainer.logger.warning") as warn:
# Mock validate_and_move_segment to avoid further logic
maintainer.validate_and_move_segment = MagicMock()
try:
await maintainer.move_files()
except ValueError as e:
if "not enough values to unpack" in str(e):
self.fail("move_files() crashed on bad filename!")
raise e
except Exception:
# Ignore other errors (like DB connection) as we only care about the unpack crash
pass
# The bad filename is encountered in multiple loops, but should only warn once.
matching = [
c
for c in warn.call_args_list
if c.args
and isinstance(c.args[0], str)
and "Skipping unexpected files in cache" in c.args[0]
]
self.assertEqual(
1,
len(matching),
f"Expected a single warning for unexpected files, got {len(matching)}",
)
if __name__ == "__main__":
unittest.main()


@@ -3,6 +3,7 @@
"untilForTime": "Until {{time}}",
"untilForRestart": "Until Frigate restarts.",
"untilRestart": "Until restart",
"never": "Never",
"ago": "{{timeAgo}} ago",
"justNow": "Just now",
"today": "Today",


@@ -268,7 +268,7 @@ export default function CreateTriggerDialog({
<FormItem className="flex flex-row items-center justify-between">
<div className="space-y-0.5">
<FormLabel className="text-base">
{t("enabled", { ns: "common" })}
{t("button.enabled", { ns: "common" })}
</FormLabel>
<div className="text-sm text-muted-foreground">
{t("triggers.dialog.form.enabled.description")}
@@ -394,7 +394,10 @@
</FormLabel>
<div className="space-y-2">
{availableActions.map((action) => (
<div key={action} className="flex items-center space-x-2">
<label
key={action}
className="flex cursor-pointer items-center space-x-2"
>
<FormControl>
<Checkbox
checked={form
@@ -416,10 +419,10 @@
}}
/>
</FormControl>
<FormLabel className="text-sm font-normal">
<span className="text-sm font-normal">
{t(`triggers.actions.${action}`)}
</FormLabel>
</div>
</span>
</label>
))}
</div>
<FormDescription>


@@ -13,7 +13,7 @@ import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl";
import { REVIEW_PADDING } from "@/types/review";
import {
ASPECT_VERTICAL_LAYOUT,
ASPECT_PORTRAIT_LAYOUT,
ASPECT_WIDE_LAYOUT,
Recording,
} from "@/types/record";
@@ -39,6 +39,7 @@ import { useApiHost } from "@/api";
import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
import ObjectTrackOverlay from "../ObjectTrackOverlay";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { VideoResolutionType } from "@/types/live";
type TrackingDetailsProps = {
className?: string;
@@ -253,16 +254,25 @@
const [timelineSize] = useResizeObserver(timelineContainerRef);
const [fullResolution, setFullResolution] = useState<VideoResolutionType>({
width: 0,
height: 0,
});
const aspectRatio = useMemo(() => {
if (!config) {
return 16 / 9;
}
if (fullResolution.width && fullResolution.height) {
return fullResolution.width / fullResolution.height;
}
return (
config.cameras[event.camera].detect.width /
config.cameras[event.camera].detect.height
);
}, [config, event]);
}, [config, event, fullResolution]);
const label = event.sub_label
? event.sub_label
@@ -460,7 +470,7 @@
return "normal";
} else if (aspectRatio > ASPECT_WIDE_LAYOUT) {
return "wide";
} else if (aspectRatio < ASPECT_VERTICAL_LAYOUT) {
} else if (aspectRatio < ASPECT_PORTRAIT_LAYOUT) {
return "tall";
} else {
return "normal";
@@ -556,6 +566,7 @@
onSeekToTime={handleSeekToTime}
onUploadFrame={onUploadFrameToPlus}
onPlaying={() => setIsVideoLoading(false)}
setFullResolution={setFullResolution}
isDetailMode={true}
camera={event.camera}
currentTimeOverride={currentTime}
@@ -623,7 +634,7 @@
<div
className={cn(
isDesktop && "justify-start overflow-hidden",
aspectRatio > 1 && aspectRatio < 1.5
aspectRatio > 1 && aspectRatio < ASPECT_PORTRAIT_LAYOUT
? "lg:basis-3/5"
: "lg:basis-2/5",
)}


@@ -142,7 +142,10 @@ export default function Step3ThresholdAndActions({
<FormLabel>{t("triggers.dialog.form.actions.title")}</FormLabel>
<div className="space-y-2">
{availableActions.map((action) => (
<div key={action} className="flex items-center space-x-2">
<label
key={action}
className="flex cursor-pointer items-center space-x-2"
>
<FormControl>
<Checkbox
checked={form
@@ -164,10 +167,10 @@
}}
/>
</FormControl>
<FormLabel className="text-sm font-normal">
<span className="text-sm font-normal">
{t(`triggers.actions.${action}`)}
</FormLabel>
</div>
</span>
</label>
))}
</div>
<FormDescription>
@@ -197,9 +200,7 @@
{isLoading && <ActivityIndicator className="mr-2 size-5" />}
{isLoading
? t("button.saving", { ns: "common" })
: t("triggers.dialog.form.save", {
defaultValue: "Save Trigger",
})}
: t("button.save", { ns: "common" })}
</Button>
</div>
</form>


@@ -206,7 +206,7 @@ function Exports() {
>
{Object.values(exports).map((item) => (
<ExportCard
key={item.name}
key={item.id}
className={
search == "" || filteredExports.includes(item) ? "" : "hidden"
}


@@ -44,4 +44,5 @@ export type RecordingStartingPoint = {
export type RecordingPlayerError = "stalled" | "startup";
export const ASPECT_VERTICAL_LAYOUT = 1.5;
export const ASPECT_PORTRAIT_LAYOUT = 1.333;
export const ASPECT_WIDE_LAYOUT = 2;


@@ -1,6 +1,7 @@
import { useCallback, useEffect, useMemo, useState } from "react";
import { Trans, useTranslation } from "react-i18next";
import { Toaster, toast } from "sonner";
import { toast } from "sonner";
import { Toaster } from "@/components/ui/sonner";
import useSWR from "swr";
import axios from "axios";
import { Button } from "@/components/ui/button";
@@ -598,7 +599,7 @@
date_style: "medium",
},
)
: "Never"}
: t("never", { ns: "common" })}
</span>
{trigger_status?.triggers[trigger.name]
?.triggering_event_id && (
@@ -663,7 +664,9 @@
<TableHeader className="sticky top-0 bg-muted/50">
<TableRow>
<TableHead className="w-4"></TableHead>
<TableHead>{t("name", { ns: "common" })}</TableHead>
<TableHead>
{t("name", { ns: "triggers.table.name" })}
</TableHead>
<TableHead>{t("triggers.table.type")}</TableHead>
<TableHead>
{t("triggers.table.lastTriggered")}
@@ -759,7 +762,7 @@
date_style: "medium",
},
)
: "Never"}
: t("time.never", { ns: "common" })}
</span>
{trigger_status?.triggers[trigger.name]
?.triggering_event_id && (