Compare commits

...

17 Commits

Author SHA1 Message Date
ryzendigo
c6995b4d1d
Merge fb721b3ec9 into 8fc1e97df5 2026-04-22 22:31:36 +01:00
Josh Hawkins
8fc1e97df5
Stream probe fallback (#22971)
* fall back to tcp transport when rtsp probes fail over udp

* tweak wizard message
2026-04-22 14:38:54 -06:00
eXtremeSHOK
0a332cada9
Update third_party_extensions.md (#22973) 2026-04-22 14:38:36 -06:00
dependabot[bot]
ba499201e6
Bump lodash-es from 4.17.23 to 4.18.1 in /web (#22733)
Bumps [lodash-es](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash-es
  dependency-version: 4.18.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 15:03:43 -05:00
dependabot[bot]
c244e6582a
Bump path-to-regexp from 0.1.12 to 0.1.13 in /docs (#22683)
Bumps [path-to-regexp](https://github.com/pillarjs/path-to-regexp) from 0.1.12 to 0.1.13.
- [Release notes](https://github.com/pillarjs/path-to-regexp/releases)
- [Changelog](https://github.com/pillarjs/path-to-regexp/blob/v.0.1.13/History.md)
- [Commits](https://github.com/pillarjs/path-to-regexp/compare/v0.1.12...v.0.1.13)

---
updated-dependencies:
- dependency-name: path-to-regexp
  dependency-version: 0.1.13
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:39:46 -05:00
dependabot[bot]
fff3594553
Bump lodash from 4.17.23 to 4.18.1 in /web (#22787)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.18.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:39:08 -05:00
dependabot[bot]
25bfb2c481
Bump python-multipart from 0.0.20 to 0.0.26 in /docker/main (#22894)
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.20 to 0.0.26.
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.26)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-version: 0.0.26
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:38:56 -05:00
Nicolas Mowen
b7261c8e70
GenAI Tweaks (#22968)
* Add debug logs

* refresh embeddings maintainer genai clients on config update

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-04-22 09:55:54 -06:00
Josh Hawkins
ad9092d0da
Tweaks (#22965)
* use ffmpeg to probe rtsp urls instead of cv2

cv2 is faster (no subprocess launch) and will continue to be used for recording segments

* tweak faq

* change unsaved color to orange

avoids confusion with validation errors (red)

* don't use any variant of orange as a profile color

avoids confusion with unsaved changes

* more unsaved color tweaks
2026-04-22 09:19:30 -06:00
Nicolas Mowen
20705a3e97
Update oneVPL (#22966) 2026-04-22 08:50:37 -06:00
Josh Hawkins
f4ac063b37
Add camera wizard improvements (#22963)
* warn in camera wizard when detect stream resolution cannot be determined

* add timeout and tcp fallback for rtsp urls only
2026-04-22 08:15:17 -05:00
Abhilash Kishore
2dcaeb6809
fix: bump OpenVINO to 2025.4.x to resolve LXC container detector crash (#22859)
* fix: bump OpenVINO to 2025.4.x to resolve LXC container crash

* fix: replace openvino + onnxruntime with onnxruntime-openvino 1.24.*

onnxruntime-openvino 1.24.* bundles OpenVINO 2025.4.1, which fixes a
crash in constrained CPU environments (e.g. Proxmox LXC) where
lin_system_conf.cpp calls stoi("") on empty strings read from offline
CPU sysfs entries.

Consolidating to onnxruntime-openvino also ensures the OpenVINO runtime
and ONNX Runtime OpenVINO EP are always compatible versions.

* revert: restore onnxruntime, keep openvino bump

Reverting onnxruntime-openvino consolidation - onnxruntime is used with
multiple execution providers (CUDA, TensorRT, MIGraphX, CPU) and cannot
be replaced wholesale with the openvino-specific wheel.
2026-04-22 07:12:14 -06:00
ryzendigo
fb721b3ec9 fix: correct balance_groups test to match actual algorithm behavior 2026-03-21 17:52:25 +08:00
ryzendigo
0115265cb6 fix: use dependencies=[] for auth deps, fix balance test 2026-03-21 17:21:48 +08:00
ryzendigo
2b3f32e5df fix: clean up comment formatting 2026-03-21 16:42:18 +08:00
ryzendigo
17e5211991 feat(recap): add auto-generation scheduler and more config options
New config options:
- auto_generate: trigger daily recap at a scheduled time
- schedule_time: HH:MM for when to run (default 02:00)
- cameras: list of cameras to process (empty = all)
- speed: playback speed multiplier (1-8x, default 2)
- max_per_group: how many events play simultaneously (1-10, default 3)

New scheduler thread (recap/scheduler.py) checks once per minute,
generates yesterday's recap for each configured camera when the
scheduled time hits. Validated schedule_time format in config.
2026-03-21 16:40:49 +08:00
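The once-per-minute scheduler described in this commit can be sketched roughly as follows. This is an illustrative approximation, not the actual `recap/scheduler.py`; the class and callback names are hypothetical:

```python
import datetime
import threading
import time


class RecapScheduler(threading.Thread):
    """Minute-granularity scheduler: fires once per day at schedule_time."""

    def __init__(self, schedule_time: str, cameras: list[str], generate):
        super().__init__(daemon=True)
        # schedule_time is validated "HH:MM" (24h)
        self.hour, self.minute = (int(p) for p in schedule_time.split(":"))
        self.cameras = cameras
        self.generate = generate  # callback(camera, start_ts, end_ts)
        self.last_run_date = None

    def due(self, now: datetime.datetime) -> bool:
        # fire when the wall clock matches and we haven't run today
        return (
            now.hour == self.hour
            and now.minute == self.minute
            and self.last_run_date != now.date()
        )

    def run(self):
        while True:
            now = datetime.datetime.now()
            if self.due(now):
                # yesterday's midnight-to-midnight window
                today = datetime.datetime.combine(now.date(), datetime.time())
                start = (today - datetime.timedelta(days=1)).timestamp()
                for camera in self.cameras:
                    self.generate(camera, start, today.timestamp())
                self.last_run_date = now.date()
            time.sleep(60)
```

Tracking `last_run_date` keeps a once-per-minute poll from triggering the same recap twice within the scheduled minute.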
ryzendigo
717b878956 feat: add daily recap video generation
Adds a new recap feature that composites detected people from throughout
the day onto a clean background, producing a short summary video of all
activity for a given camera.

How it works:
- Builds a clean background plate via median of sampled frames
- Extracts clip frames for each person event from recordings
- Uses per-event background subtraction (first frame of clip as reference)
  within a soft spotlight region to isolate the person
- Groups non-overlapping events to play simultaneously
- Balances groups by duration so the video stays even
- Renders at 2x speed, stitches groups into final output

New files:
- frigate/recap/ — core generation module
- frigate/api/recap.py — POST /recap/{camera}, GET /recap/{camera}
- frigate/config/recap.py — recap config section (enabled, fps, etc)
- frigate/test/test_recap.py — unit tests
- web/src/components/overlay/RecapDialog.tsx — UI component (not yet wired)

Config example:
  recap:
    enabled: true
    default_label: person
    output_fps: 10
    video_duration: 30
    background_samples: 30

Relates to #54
2026-03-21 16:36:39 +08:00
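The "median of sampled frames" background plate from the commit message can be illustrated with a tiny NumPy sketch using synthetic frames (this is not Frigate's actual sampling code, which pulls frames from recordings):

```python
import numpy as np


def build_background(frames: list[np.ndarray]) -> np.ndarray:
    """Per-pixel median removes transient objects, keeps the static scene."""
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)


# synthetic example: a static gray scene with a bright "object" that
# occupies a different pixel in each sampled frame
h, w = 8, 8
frames = []
for i in range(5):
    f = np.full((h, w, 3), 100, np.uint8)
    f[i, i] = 255  # the moving object
    frames.append(f)

bg = build_background(frames)
# every pixel sees the object in at most 1 of 5 samples, so the
# median recovers the clean 100-valued background everywhere
```

Because each pixel is covered by a moving object in only a minority of samples, the median converges to the static scene without any segmentation model.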
35 changed files with 1577 additions and 103 deletions

View File

@@ -87,43 +87,43 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
# intel packages use zst compression so we need to update dpkg
apt-get install -y dpkg
# use intel apt intel packages
# use intel apt repo for libmfx1 (legacy QSV, pre-Gen12)
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
apt-get -qq update
# intel-media-va-driver-non-free is built from source in the
# intel-media-driver Dockerfile stage for Battlemage (Xe2) support
apt-get -qq install --no-install-recommends --no-install-suggests -y \
libmfx1 libmfxgen1 libvpl2
libmfx1
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
# upgrade libva2, oneVPL runtime, and libvpl2 from trixie for Battlemage support
echo "deb http://deb.debian.org/debian trixie main" > /etc/apt/sources.list.d/trixie.list
apt-get -qq update
apt-get -qq install -y -t trixie libva2 libva-drm2 libzstd1
apt-get -qq install -y -t trixie libmfx-gen1.2 libvpl2
rm -f /etc/apt/sources.list.d/trixie.list
apt-get -qq update
apt-get -qq install -y ocl-icd-libopencl1
# install libtbb12 for NPU support
apt-get -qq install -y libtbb12
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
# install legacy and standard intel icd and level-zero-gpu
# install legacy and standard intel compute packages
# see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
# newer intel packages (gmmlib 22.9+, igc 2.32+) require libstdc++ >= 13.1 and libzstd >= 1.5.5
echo "deb http://deb.debian.org/debian trixie main" > /etc/apt/sources.list.d/trixie.list
apt-get -qq update
apt-get -qq install -y -t trixie libstdc++6 libzstd1
rm -f /etc/apt/sources.list.d/trixie.list
apt-get -qq update
# needed core package
wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/libigdgmm12_22.9.0_amd64.deb
dpkg -i libigdgmm12_22.9.0_amd64.deb
rm libigdgmm12_22.9.0_amd64.deb
# legacy packages
# legacy compute-runtime packages
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
# standard packages
# standard compute-runtime packages
wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/intel-opencl-icd_26.14.37833.4-0_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/libze-intel-gpu1_26.14.37833.4-0_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.32.7/intel-igc-opencl-2_2.32.7+21184_amd64.deb
@@ -137,6 +137,10 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
dpkg -i *.deb
rm *.deb
apt-get -qq install -f -y
# Battlemage uses the xe kernel driver, but the VA-API driver is still iHD.
# The oneVPL runtime may look for a driver named after the kernel module.
ln -sf /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so /usr/lib/x86_64-linux-gnu/dri/xe_drv_video.so
fi
if [[ "${TARGETARCH}" == "arm64" ]]; then

View File

@@ -11,7 +11,7 @@ joserfc == 1.2.*
cryptography == 44.0.*
pathvalidate == 3.3.*
markupsafe == 3.0.*
python-multipart == 0.0.20
python-multipart == 0.0.26
# Classification Model Training
tensorflow == 2.19.* ; platform_machine == 'aarch64'
tensorflow-cpu == 2.19.* ; platform_machine == 'x86_64'
@@ -42,7 +42,7 @@ opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.16.*
# OpenVino & ONNX
openvino == 2025.3.*
openvino == 2025.4.*
onnxruntime == 1.22.*
# Embeddings
transformers == 4.45.*

View File

@@ -39,6 +39,10 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
[Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.
## [kiosk-monitor](https://github.com/extremeshok/kiosk-monitor)
[kiosk-monitor](https://github.com/extremeshok/kiosk-monitor) is a Raspberry Pi watchdog that runs Chromium fullscreen on a Frigate dashboard (optionally with VLC on a second monitor for an RTSP camera stream), auto-restarts on frozen screens or unreachable URLs, and ships a Birdseye-aware Chromium helper that auto-sizes the grid to the display.
## [Periscope](https://github.com/maksz42/periscope)
[Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.

View File

@@ -111,26 +111,16 @@ TCP ensures that all data packets arrive in the correct order. This is crucial f
You can still configure Frigate to use UDP by using ffmpeg input args or the preset `preset-rtsp-udp`. See the [ffmpeg presets](/configuration/ffmpeg_presets) documentation.
### Frigate hangs on startup with a "probing detect stream" message in the logs
### Frigate is slow to start up with a "probing detect stream" message in the logs
On startup, Frigate probes each camera's detect stream with OpenCV to auto-detect its resolution. OpenCV's FFmpeg backend may attempt RTSP over UDP during this probe regardless of the `-rtsp_transport tcp` in your `input_args` or preset. For cameras that do not respond to UDP (common on some Reolink models and others behind firewalls that block UDP), the probe can hang indefinitely and block Frigate from finishing startup, or it can return zeroed-out dimensions that show up as width `0` and height `0` in Camera Probe Info under System Metrics.
When `detect.width` and `detect.height` are not set, Frigate probes each camera's detect stream on startup (and when saving the config) to auto-detect its resolution. For RTSP streams Frigate probes with ffprobe and automatically retries over TCP if UDP doesn't respond, with a 5 second timeout per attempt. A camera that cannot be reached over either transport will add up to ~10 seconds to startup before Frigate falls through with default dimensions, which may show up as width `0` and height `0` in Camera Probe Info under System Metrics.
There are two ways to avoid this:
To skip the probe entirely and make startup instant, set `detect.width` and `detect.height` explicitly in your camera config:
1. Set `detect.width` and `detect.height` explicitly in your camera config. When both are set, Frigate skips the auto-detect probe entirely:
```yaml
cameras:
  my_camera:
    detect:
      width: 1280
      height: 720
```
2. Force OpenCV's FFmpeg backend to use TCP for RTSP by setting the environment variable on your Frigate container:
```
OPENCV_FFMPEG_CAPTURE_OPTIONS=rtsp_transport;tcp
```
This is a process-wide setting and applies to all cameras. If you have any cameras that require `preset-rtsp-udp`, use option 1 instead.
```yaml
cameras:
  my_camera:
    detect:
      width: 1280
      height: 720
```
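The probe-with-TCP-fallback behavior described in this FAQ entry can be sketched with ffprobe directly. This is a rough illustration of the documented behavior (UDP first, retry over TCP, 5 second timeout per attempt), not Frigate's actual implementation:

```python
import subprocess


def probe_resolution(ffprobe: str, url: str, timeout: float = 5.0):
    """Probe an RTSP stream's resolution, falling back from UDP to TCP."""
    for transport in ("udp", "tcp"):
        cmd = [
            ffprobe,
            "-rtsp_transport", transport,
            "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=width,height",
            "-of", "csv=p=0",
            url,
        ]
        try:
            out = subprocess.run(
                cmd, capture_output=True, timeout=timeout
            ).stdout.decode().strip()
        except subprocess.TimeoutExpired:
            # camera didn't answer on this transport; try the next one
            continue
        if out:
            w, h = out.split(",")[:2]
            return int(w), int(h)
    return None  # probe failed on both transports
```

A camera unreachable on both transports costs up to two timeouts here, which matches the "up to ~10 seconds added to startup" figure in the text above.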

View File

@@ -10897,9 +10897,9 @@
"license": "MIT"
},
"node_modules/express/node_modules/path-to-regexp": {
"version": "0.1.12",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==",
"version": "0.1.13",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.13.tgz",
"integrity": "sha512-A/AGNMFN3c8bOlvV9RreMdrv7jsmF9XIfDeCd87+I8RNg6s78BhJxMu69NEMHBSJFxKidViTEdruRwEk/WIKqA==",
"license": "MIT"
},
"node_modules/express/node_modules/range-parser": {

View File

@@ -15,4 +15,5 @@ class Tags(Enum):
notifications = "Notifications"
preview = "Preview"
recordings = "Recordings"
recap = "Recap"
review = "Review"

View File

@@ -25,6 +25,7 @@ from frigate.api import (
motion_search,
notification,
preview,
recap,
record,
review,
)
@@ -138,6 +139,7 @@ def create_fastapi_app(
app.include_router(preview.router)
app.include_router(notification.router)
app.include_router(export.router)
app.include_router(recap.router)
app.include_router(event.router)
app.include_router(media.router)
app.include_router(motion_search.router)

frigate/api/recap.py (new file, 100 lines)
View File

@@ -0,0 +1,100 @@
"""Recap API endpoints."""
import logging
import random
import string
from typing import Optional
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from frigate.api.auth import require_camera_access, require_role
from frigate.api.defs.tags import Tags
from frigate.models import Export
from frigate.recap.recap import RecapGenerator
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.recap])
@router.post(
"/recap/{camera_name}",
summary="Generate a time-stacked recap video",
description="Creates a video showing all detected objects from the given time range "
"composited onto a clean background. Each detection appears at its real "
"position with a timestamp label.",
dependencies=[Depends(require_role(["admin"]))],
)
def generate_recap(
request: Request,
camera_name: str,
start_time: float,
end_time: float,
label: Optional[str] = None,
):
config = request.app.frigate_config
if not config.recap.enabled:
return JSONResponse(
content={
"success": False,
"message": "recap generation is not enabled in config",
},
status_code=400,
)
if camera_name not in config.cameras:
return JSONResponse(
content={"success": False, "message": f"unknown camera: {camera_name}"},
status_code=404,
)
if end_time <= start_time:
return JSONResponse(
content={"success": False, "message": "end_time must be after start_time"},
status_code=400,
)
use_label = label or config.recap.default_label
export_id = (
f"{camera_name}_recap_"
f"{''.join(random.choices(string.ascii_lowercase + string.digits, k=6))}"
)
generator = RecapGenerator(
config=config,
export_id=export_id,
camera=camera_name,
start_time=start_time,
end_time=end_time,
label=use_label,
)
generator.start()
return JSONResponse(
content={
"success": True,
"message": "recap generation started",
"export_id": export_id,
}
)
@router.get(
"/recap/{camera_name}",
summary="List recap exports for a camera",
dependencies=[Depends(require_camera_access)],
)
def get_recaps(
request: Request,
camera_name: str,
):
recaps = (
Export.select()
.where(Export.camera == camera_name)
.where(Export.id.contains("_recap_"))
.order_by(Export.date.desc())
.dicts()
)
return list(recaps)

View File

@@ -10,6 +10,7 @@ from .logger import * # noqa: F403
from .mqtt import * # noqa: F403
from .network import * # noqa: F403
from .proxy import * # noqa: F403
from .recap import * # noqa: F403
from .telemetry import * # noqa: F403
from .tls import * # noqa: F403
from .ui import * # noqa: F403

View File

@@ -70,6 +70,7 @@ from .mqtt import MqttConfig
from .network import NetworkingConfig
from .profile import ProfileDefinitionConfig
from .proxy import ProxyConfig
from .recap import RecapConfig
from .telemetry import TelemetryConfig
from .tls import TlsConfig
from .ui import UIConfig
@@ -414,6 +415,11 @@ class FrigateConfig(FrigateBaseModel):
title="Proxy",
description="Settings for integrating Frigate behind a reverse proxy that passes authenticated user headers.",
)
recap: RecapConfig = Field(
default_factory=RecapConfig,
title="Recap",
description="Settings for time-stacked recap video generation that composites detected objects onto a clean background.",
)
telemetry: TelemetryConfig = Field(
default_factory=TelemetryConfig,
title="Telemetry",

frigate/config/recap.py (new file, 89 lines)
View File

@@ -0,0 +1,89 @@
from pydantic import Field, field_validator
from .base import FrigateBaseModel
__all__ = ["RecapConfig"]
class RecapConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable recaps",
description="Allow generation of time-stacked recap videos that composite detected objects onto a clean background.",
)
auto_generate: bool = Field(
default=False,
title="Auto-generate daily",
description="Automatically generate a recap for the previous day at the scheduled time.",
)
schedule_time: str = Field(
default="02:00",
title="Schedule time",
description="Time of day (HH:MM, 24h format) to auto-generate the previous day's recap. Only used when auto_generate is true.",
)
cameras: list[str] = Field(
default=[],
title="Cameras",
description="List of camera names to generate recaps for. Empty list means all cameras.",
)
default_label: str = Field(
default="person",
title="Default object label",
description="The object type to include in recaps.",
)
speed: int = Field(
default=2,
title="Playback speed",
description="Speed multiplier for the output video.",
ge=1,
le=8,
)
max_per_group: int = Field(
default=3,
title="Max events per group",
description="Maximum number of events to composite simultaneously. Higher values pack more into the video but can get crowded.",
ge=1,
le=10,
)
ghost_duration: float = Field(
default=3.0,
title="Ghost visibility duration",
description="How long (in seconds of video time) each detection stays visible when path data is unavailable.",
ge=0.5,
le=30.0,
)
output_fps: int = Field(
default=10,
title="Output frame rate",
description="Frame rate of the generated recap video.",
ge=1,
le=30,
)
video_duration: int = Field(
default=30,
title="Minimum video duration",
description="Minimum length in seconds for the output video. Actual length depends on event count and durations.",
ge=5,
le=300,
)
background_samples: int = Field(
default=30,
title="Background sample count",
description="Number of frames sampled across the time range to build the clean background plate via median.",
ge=5,
le=100,
)
@field_validator("schedule_time")
@classmethod
def validate_schedule_time(cls, v: str) -> str:
parts = v.split(":")
if len(parts) != 2:
raise ValueError("schedule_time must be HH:MM format")
try:
h, m = int(parts[0]), int(parts[1])
except ValueError:
raise ValueError("schedule_time must be HH:MM format")
if not (0 <= h <= 23 and 0 <= m <= 59):
raise ValueError("schedule_time hours must be 0-23 and minutes 0-59")
return v

View File

@@ -310,6 +310,10 @@ class EmbeddingMaintainer(threading.Thread):
self._handle_custom_classification_update(topic, payload)
return
if topic == "config/genai":
self.config.genai = payload
self.genai_manager.update_config(self.config)
# Broadcast to all processors — each decides if the topic is relevant
for processor in self.realtime_processors:
processor.update_config(topic, payload)

View File

@@ -113,6 +113,15 @@ class OllamaClient(GenAIClient):
schema = response_format.get("json_schema", {}).get("schema")
if schema:
ollama_options["format"] = self._clean_schema_for_ollama(schema)
logger.debug(
"Ollama generate request: model=%s, prompt_len=%s, image_count=%s, "
"has_format=%s, options=%s",
self.genai_config.model,
len(prompt),
len(images) if images else 0,
"format" in ollama_options,
{k: v for k, v in ollama_options.items() if k != "format"},
)
result = self.provider.generate(
self.genai_config.model,
prompt,
@@ -120,9 +129,24 @@
**ollama_options,
)
logger.debug(
f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
"Ollama generate response: done=%s, done_reason=%s, eval_count=%s, "
"prompt_eval_count=%s, response_len=%s",
result.get("done"),
result.get("done_reason"),
result.get("eval_count"),
result.get("prompt_eval_count"),
len(result.get("response", "") or ""),
)
return str(result["response"]).strip()
response_text = str(result["response"]).strip()
if not response_text:
logger.warning(
"Ollama returned a blank response for model %s (done_reason=%s, "
"eval_count=%s). Check model output, ensure thinking is disabled.",
self.genai_config.model,
result.get("done_reason"),
result.get("eval_count"),
)
return response_text
except (
TimeoutException,
ResponseError,

View File

@@ -80,7 +80,23 @@ class OpenAIClient(GenAIClient):
and hasattr(result, "choices")
and len(result.choices) > 0
):
return str(result.choices[0].message.content.strip())
message = result.choices[0].message
content = message.content
if not content:
# When reasoning is enabled for some OpenAI backends the actual response
# is incorrectly placed in reasoning_content instead of content.
# This is buggy/incorrect behavior — reasoning should not be
# enabled for these models.
reasoning_content = getattr(message, "reasoning_content", None)
if reasoning_content:
logger.warning(
"Response content was empty but reasoning_content was provided; "
"reasoning appears to be enabled and should be disabled for this model."
)
content = reasoning_content
return str(content.strip()) if content else None
return None
except (TimeoutException, Exception) as e:
logger.warning("OpenAI returned an error: %s", str(e))

View File

frigate/recap/recap.py (new file, 658 lines)
View File

@@ -0,0 +1,658 @@
"""Time-stacked recap video generator.
Composites detected people from throughout the day onto a single clean
background. Multiple non-overlapping events play simultaneously so you
can see all the day's activity in a short video.
Each person is extracted from their recording clip using per-event
background subtraction within a spotlight region, producing clean cutouts
without needing a segmentation model.
"""
import datetime
import logging
import os
import re
import subprocess as sp
import threading
import time
from pathlib import Path
from typing import Optional
import cv2
import numpy as np
from peewee import DoesNotExist
from frigate.config import FrigateConfig
from frigate.const import (
CACHE_DIR,
CLIPS_DIR,
EXPORT_DIR,
PROCESS_PRIORITY_LOW,
)
from frigate.models import Event, Export, Recordings
logger = logging.getLogger(__name__)
RECAP_CACHE = os.path.join(CACHE_DIR, "recap")
OUTPUT_CRF = "23"
# bg subtraction within per-event spotlight - threshold can be low
# because the reference frame matches the event's lighting exactly
BG_DIFF_THRESHOLD = 25
DILATE_ITERATIONS = 2
# spotlight params: generous area, bg sub handles the rest
SPOTLIGHT_PAD = 1.5
SPOTLIGHT_BLUR = 25
def _lower_priority():
os.nice(PROCESS_PRIORITY_LOW)
def _get_recording_at(camera: str, ts: float) -> Optional[tuple[str, float]]:
"""Find the recording segment covering a timestamp.
Returns (path, offset_into_file) or None.
"""
try:
rec = (
Recordings.select(Recordings.path, Recordings.start_time)
.where(Recordings.camera == camera)
.where(Recordings.start_time <= ts)
.where(Recordings.end_time >= ts)
.get()
)
return rec.path, ts - float(rec.start_time)
except DoesNotExist:
return None
def _probe_resolution(ffmpeg_path: str, path: str) -> Optional[tuple[int, int]]:
probe = sp.run(
[ffmpeg_path, "-hide_banner", "-i", path, "-f", "null", "-"],
capture_output=True,
timeout=10,
preexec_fn=_lower_priority,
)
match = re.search(r"(\d{2,5})x(\d{2,5})", probe.stderr.decode(errors="replace"))
if not match:
return None
return int(match.group(1)), int(match.group(2))
def _extract_frame(
ffmpeg_path: str, path: str, offset: float, w: int, h: int
) -> Optional[np.ndarray]:
p = sp.run(
[
ffmpeg_path,
"-hide_banner",
"-loglevel",
"error",
"-ss",
f"{offset:.3f}",
"-i",
path,
"-frames:v",
"1",
"-f",
"rawvideo",
"-pix_fmt",
"bgr24",
"pipe:1",
],
capture_output=True,
timeout=15,
preexec_fn=_lower_priority,
)
if p.returncode != 0 or len(p.stdout) == 0:
return None
expected = w * h * 3
if len(p.stdout) < expected:
return None
return np.frombuffer(p.stdout, dtype=np.uint8)[:expected].reshape((h, w, 3))
def _extract_frames_range(
ffmpeg_path: str,
path: str,
offset: float,
duration: float,
fps: int,
w: int,
h: int,
) -> list[np.ndarray]:
"""Pull multiple frames from a recording at a given fps."""
p = sp.run(
[
ffmpeg_path,
"-hide_banner",
"-loglevel",
"error",
"-ss",
f"{offset:.3f}",
"-t",
f"{duration:.3f}",
"-i",
path,
"-vf",
f"fps={fps}",
"-f",
"rawvideo",
"-pix_fmt",
"bgr24",
"pipe:1",
],
capture_output=True,
timeout=max(30, int(duration) + 15),
preexec_fn=_lower_priority,
)
if p.returncode != 0 or len(p.stdout) == 0:
return []
frame_size = w * h * 3
return [
np.frombuffer(p.stdout[i : i + frame_size], dtype=np.uint8).reshape((h, w, 3))
for i in range(0, len(p.stdout) - frame_size + 1, frame_size)
]
def _build_background(
ffmpeg_path: str,
camera: str,
start_time: float,
end_time: float,
sample_count: int,
) -> Optional[np.ndarray]:
"""Median of sampled frames - removes moving objects, keeps the static scene."""
duration = end_time - start_time
step = duration / (sample_count + 1)
resolution = None
frames = []
for i in range(1, sample_count + 1):
ts = start_time + step * i
result = _get_recording_at(camera, ts)
if result is None:
continue
rec_path, offset = result
if not os.path.isfile(rec_path):
continue
if resolution is None:
resolution = _probe_resolution(ffmpeg_path, rec_path)
if resolution is None:
continue
w, h = resolution
frame = _extract_frame(ffmpeg_path, rec_path, offset, w, h)
if frame is not None and frame.shape == (h, w, 3):
frames.append(frame)
if len(frames) < 3:
logger.warning("only got %d bg frames, need 3+", len(frames))
return None
return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)
def _relative_box_to_pixels(
box: list[float], w: int, h: int
) -> tuple[int, int, int, int]:
"""Normalized [x, y, w, h] -> pixel [x1, y1, x2, y2]."""
x1 = max(0, int(box[0] * w))
y1 = max(0, int(box[1] * h))
x2 = min(w, int((box[0] + box[2]) * w))
y2 = min(h, int((box[1] + box[3]) * h))
return x1, y1, x2, y2
def _make_spotlight(w: int, h: int, cx: int, cy: int, rx: int, ry: int) -> np.ndarray:
"""Soft elliptical spotlight mask, float32 0-1."""
m = np.zeros((h, w), np.uint8)
cv2.ellipse(m, (cx, cy), (rx, ry), 0, 0, 360, 255, -1)
m = cv2.GaussianBlur(m, (SPOTLIGHT_BLUR, SPOTLIGHT_BLUR), 0)
return m.astype(np.float32) / 255.0
def _person_mask(
frame: np.ndarray, ref_bg: np.ndarray, spotlight: np.ndarray
) -> np.ndarray:
"""Extract person by diffing against the per-event reference frame,
then AND with the spotlight to contain it to the detection area.
"""
diff = cv2.absdiff(frame, ref_bg)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, fg = cv2.threshold(gray, BG_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
fg = cv2.dilate(fg, None, iterations=DILATE_ITERATIONS)
fg = cv2.erode(fg, None, iterations=1)
return (fg.astype(np.float32) / 255.0) * spotlight
def _mask_centroid(m: np.ndarray) -> Optional[tuple[int, int]]:
coords = np.argwhere(m > 0.3)
if len(coords) == 0:
return None
return int(coords[:, 1].mean()), int(coords[:, 0].mean())
def _interpolate_path(
path_data: list, t: float, w: int, h: int
) -> Optional[tuple[int, int]]:
"""Interpolate person position from path_data at time t."""
if not path_data or len(path_data) < 1:
return None
prev = None
for coord, ts in path_data:
if ts > t:
if prev is None:
return int(coord[0] * w), int(coord[1] * h)
pc, pt = prev
dt = ts - pt
if dt <= 0:
return int(coord[0] * w), int(coord[1] * h)
f = (t - pt) / dt
ix = pc[0] + (coord[0] - pc[0]) * f
iy = pc[1] + (coord[1] - pc[1]) * f
return int(ix * w), int(iy * h)
prev = (coord, ts)
if prev:
return int(prev[0][0] * w), int(prev[0][1] * h)
return None
def _draw_label(frame: np.ndarray, text: str, x: int, y: int):
font = cv2.FONT_HERSHEY_SIMPLEX
scale = 0.28
thickness = 1
(tw, th), _ = cv2.getTextSize(text, font, scale, thickness)
lx = max(0, min(x - tw // 2, frame.shape[1] - tw - 3))
ly = max(th + 3, min(y, frame.shape[0] - 2))
cv2.rectangle(frame, (lx, ly - th - 2), (lx + tw + 2, ly + 2), (0, 0, 0), -1)
cv2.putText(frame, text, (lx + 1, ly), font, scale, (255, 255, 255), thickness)
def _balance_groups(events: list[dict], max_per: int) -> list[list[dict]]:
"""Spread events across groups so durations are roughly even.
Longest events get their own group first, shorter ones fill in.
"""
by_len = sorted(events, key=lambda e: len(e["frames"]), reverse=True)
groups: list[list[dict]] = []
lengths: list[int] = []
for ev in by_len:
best = None
best_len = float("inf")
for i, g in enumerate(groups):
if len(g) < max_per and lengths[i] < best_len:
best = i
best_len = lengths[i]
if best is not None:
groups[best].append(ev)
lengths[best] = max(lengths[best], len(ev["frames"]))
else:
groups.append([ev])
lengths.append(len(ev["frames"]))
for g in groups:
g.sort(key=lambda e: e["time"])
return groups
class RecapGenerator(threading.Thread):
def __init__(
self,
config: FrigateConfig,
export_id: str,
camera: str,
start_time: float,
end_time: float,
label: str = "person",
):
super().__init__(daemon=True)
self.config = config
self.export_id = export_id
self.camera = camera
self.start_time = start_time
self.end_time = end_time
self.label = label
self.ffmpeg_path = config.ffmpeg.ffmpeg_path
recap_cfg = config.recap
self.output_fps = recap_cfg.output_fps
self.speed = recap_cfg.speed
self.max_per_group = recap_cfg.max_per_group
self.video_duration = recap_cfg.video_duration
self.background_samples = recap_cfg.background_samples
Path(RECAP_CACHE).mkdir(parents=True, exist_ok=True)
Path(os.path.join(CLIPS_DIR, "export")).mkdir(exist_ok=True)
def _get_events(self) -> list[dict]:
return list(
Event.select(
Event.id,
Event.start_time,
Event.end_time,
Event.label,
Event.data,
Event.box,
Event.top_score,
)
.where(Event.camera == self.camera)
.where(Event.label == self.label)
.where(Event.start_time >= self.start_time)
.where(Event.start_time <= self.end_time)
.where(Event.false_positive == False) # noqa: E712
.order_by(Event.start_time.asc())
.dicts()
)
def run(self):
logger.info(
"generating recap for %s (%s to %s)",
self.camera,
datetime.datetime.fromtimestamp(self.start_time).isoformat(),
datetime.datetime.fromtimestamp(self.end_time).isoformat(),
)
wall_start = time.monotonic()
start_dt = datetime.datetime.fromtimestamp(self.start_time)
end_dt = datetime.datetime.fromtimestamp(self.end_time)
export_name = f"{self.camera} recap {start_dt.strftime('%Y-%m-%d')}"
filename = (
f"{self.camera}_recap_{start_dt.strftime('%Y%m%d_%H%M%S')}-"
f"{end_dt.strftime('%Y%m%d_%H%M%S')}_{self.export_id.split('_')[-1]}.mp4"
)
video_path = os.path.join(EXPORT_DIR, filename)
Export.insert(
{
Export.id: self.export_id,
Export.camera: self.camera,
Export.name: export_name,
Export.date: self.start_time,
Export.video_path: video_path,
Export.thumb_path: "",
Export.in_progress: True,
}
).execute()
try:
self._generate(video_path)
except Exception:
logger.exception("recap failed for %s", self.camera)
Path(video_path).unlink(missing_ok=True)
Export.delete().where(Export.id == self.export_id).execute()
return
logger.info(
"recap for %s done in %.1fs -> %s",
self.camera,
time.monotonic() - wall_start,
video_path,
)
def _generate(self, out_path: str):
events = self._get_events()
if not events:
logger.info("no %s events for %s, nothing to do", self.label, self.camera)
Export.delete().where(Export.id == self.export_id).execute()
return
logger.info("found %d %s events", len(events), self.label)
background = _build_background(
self.ffmpeg_path,
self.camera,
self.start_time,
self.end_time,
self.background_samples,
)
if background is None:
logger.error("couldn't build background for %s", self.camera)
Export.delete().where(Export.id == self.export_id).execute()
return
bg_h, bg_w = background.shape[:2]
bg_f = background.astype(np.float32)
# build clip data for each event
prepped = []
for ev in events:
data = ev.get("data") or {}
box = data.get("box") or ev.get("box")
if not box or len(box) != 4:
continue
ev_time = float(ev["start_time"])
ev_end = float(ev.get("end_time") or ev_time)
ev_dur = max(ev_end - ev_time, 0.5)
result = _get_recording_at(self.camera, ev_time)
if result is None:
continue
rec_path, offset = result
if not os.path.isfile(rec_path):
continue
frames = _extract_frames_range(
self.ffmpeg_path,
rec_path,
offset,
ev_dur,
self.output_fps,
bg_w,
bg_h,
)
if len(frames) < 3:
continue
# first frame is from pre-capture - use as per-event bg reference
ref_bg = frames[0]
event_frames = frames[2:]
if not event_frames:
continue
pbox = _relative_box_to_pixels(box, bg_w, bg_h)
ts_str = datetime.datetime.fromtimestamp(ev_time).strftime("%H:%M:%S")
prepped.append(
{
"frames": event_frames,
"ref_bg": ref_bg,
"pbox": pbox,
"path": data.get("path_data"),
"ts_str": ts_str,
"time": ev_time,
}
)
if not prepped:
logger.warning("no usable clips for %s", self.camera)
Export.delete().where(Export.id == self.export_id).execute()
return
groups = _balance_groups(prepped, self.max_per_group)
logger.info(
"%d events -> %d groups (max %d/group)",
len(prepped),
len(groups),
self.max_per_group,
)
# render each group to a temp file, then concat
tmp_dir = os.path.join(RECAP_CACHE, self.export_id)
Path(tmp_dir).mkdir(parents=True, exist_ok=True)
seg_paths = []
for gi, group in enumerate(groups):
max_frames = max(len(e["frames"]) for e in group)
seg_path = os.path.join(tmp_dir, f"seg_{gi:04d}.mp4")
proc = sp.Popen(
[
self.ffmpeg_path,
"-hide_banner",
"-loglevel",
"error",
"-y",
"-f",
"rawvideo",
"-pix_fmt",
"bgr24",
"-s",
f"{bg_w}x{bg_h}",
"-r",
str(self.output_fps * self.speed),
"-i",
"pipe:0",
"-c:v",
"libx264",
"-preset",
"fast",
"-crf",
OUTPUT_CRF,
"-pix_fmt",
"yuv420p",
"-movflags",
"+faststart",
seg_path,
],
stdin=sp.PIPE,
stdout=sp.PIPE,
stderr=sp.PIPE,
preexec_fn=_lower_priority,
)
try:
for fi in range(max_frames):
canvas = bg_f.copy()
label_info = []
for ev in group:
if fi >= len(ev["frames"]):
continue
src = ev["frames"][fi]
src_f = src.astype(np.float32)
bx1, by1, bx2, by2 = ev["pbox"]
bw = bx2 - bx1
bh = by2 - by1
ft = ev["time"] + fi / self.output_fps
pos = None
if ev["path"] and len(ev["path"]) >= 2:
pos = _interpolate_path(ev["path"], ft, bg_w, bg_h)
cx, cy = pos if pos else ((bx1 + bx2) // 2, (by1 + by2) // 2)
rx = max(20, int(bw * SPOTLIGHT_PAD))
ry = max(25, int(bh * SPOTLIGHT_PAD))
sl = _make_spotlight(bg_w, bg_h, cx, cy, rx, ry)
mask = _person_mask(src, ev["ref_bg"], sl)
m3 = mask[:, :, np.newaxis]
canvas = src_f * m3 + canvas * (1.0 - m3)
ctr = _mask_centroid(mask)
if ctr:
label_info.append(
(ev["ts_str"], ctr[0], ctr[1] - int(bh * 0.5))
)
else:
label_info.append((ev["ts_str"], cx, cy - int(bh * 0.5)))
cu8 = canvas.astype(np.uint8)
for ts, lx, ly in label_info:
_draw_label(cu8, ts, lx, ly)
cv2.rectangle(
cu8,
(0, bg_h - 2),
(int(bg_w * fi / max_frames), bg_h),
(0, 180, 255),
-1,
)
proc.stdin.write(cu8.tobytes())
proc.stdin.close()
proc.wait(timeout=120)
except Exception:
proc.kill()
proc.wait()
raise
            if proc.returncode == 0:
                seg_paths.append(seg_path)
            else:
                logger.warning(
                    "segment %d failed to render for %s", gi, self.camera
                )

            # free memory as we go
            for ev in group:
                ev["frames"] = None
                ev["ref_bg"] = None
if not seg_paths:
logger.error("no segments rendered for %s", self.camera)
Export.delete().where(Export.id == self.export_id).execute()
return
# concat all segments
concat_file = os.path.join(tmp_dir, "concat.txt")
with open(concat_file, "w") as f:
for p in seg_paths:
f.write(f"file '{p}'\n")
sp.run(
[
self.ffmpeg_path,
"-hide_banner",
"-loglevel",
"error",
"-f",
"concat",
"-safe",
"0",
"-i",
concat_file,
"-c",
"copy",
"-movflags",
"+faststart",
"-y",
out_path,
],
capture_output=True,
timeout=300,
preexec_fn=_lower_priority,
)
# cleanup temp files
for p in seg_paths:
Path(p).unlink(missing_ok=True)
Path(concat_file).unlink(missing_ok=True)
Path(tmp_dir).rmdir()
        # thumbnail from a representative frame; the per-event frame lists
        # were freed after rendering, so let ffmpeg's thumbnail filter pick
        # a frame instead of counting frames from the freed lists
        thumb_path = os.path.join(CLIPS_DIR, f"export/{self.export_id}.webp")
        sp.run(
            [
                self.ffmpeg_path,
                "-hide_banner",
                "-loglevel",
                "error",
                "-i",
                out_path,
                "-vf",
                "thumbnail",
                "-frames:v",
                "1",
                "-c:v",
                "libwebp",
                "-y",
                thumb_path,
            ],
            capture_output=True,
            timeout=30,
            preexec_fn=_lower_priority,
        )
Export.update({Export.in_progress: False, Export.thumb_path: thumb_path}).where(
Export.id == self.export_id
).execute()


@@ -0,0 +1,94 @@
"""Scheduled daily recap generation.

Runs as a background thread and periodically checks whether it is time
to generate recaps for the previous day.
"""
import logging
import random
import string
import threading
import time
from datetime import datetime, timedelta
from frigate.config import FrigateConfig
from frigate.recap.recap import RecapGenerator
logger = logging.getLogger(__name__)
class RecapScheduler(threading.Thread):
"""Triggers daily recap generation at the configured time."""
def __init__(self, config: FrigateConfig):
super().__init__(daemon=True, name="recap_scheduler")
self.config = config
self._last_run_date = None
def run(self):
recap_cfg = self.config.recap
if not recap_cfg.enabled or not recap_cfg.auto_generate:
logger.info("recap scheduler not enabled, exiting")
return
hour, minute = (int(x) for x in recap_cfg.schedule_time.split(":"))
logger.info(
"recap scheduler started, will run daily at %02d:%02d", hour, minute
)
while True:
now = datetime.now()
today = now.date()
# check if it's time and we haven't already run today
if (
now.hour == hour
and now.minute == minute
and self._last_run_date != today
):
self._last_run_date = today
self._generate_all()
            # sleep about half a minute so clock drift can't skip the target
            # minute; _last_run_date prevents firing twice in the same minute
            time.sleep(30)
def _generate_all(self):
recap_cfg = self.config.recap
yesterday = datetime.now() - timedelta(days=1)
start = yesterday.replace(hour=0, minute=0, second=0, microsecond=0)
end = start + timedelta(days=1)
# figure out which cameras to process
camera_names = (
list(recap_cfg.cameras)
if recap_cfg.cameras
else list(self.config.cameras.keys())
)
logger.info(
"auto-generating recaps for %d cameras (%s)",
len(camera_names),
start.strftime("%Y-%m-%d"),
)
for camera in camera_names:
if camera not in self.config.cameras:
logger.warning("recap: camera %s not found, skipping", camera)
continue
export_id = (
f"{camera}_recap_"
f"{''.join(random.choices(string.ascii_lowercase + string.digits, k=6))}"
)
generator = RecapGenerator(
config=self.config,
export_id=export_id,
camera=camera,
start_time=start.timestamp(),
end_time=end.timestamp(),
label=recap_cfg.default_label,
)
generator.start()
logger.info("recap started for %s (export_id=%s)", camera, export_id)
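The scheduler's once-per-day firing condition boils down to: wake periodically, fire when the wall clock matches the configured hour and minute, and remember the date of the last run so the same day never fires twice. A standalone sketch of that guard:

```python
# Firing guard sketch: returns (should_fire, new_last_run_date). A caller
# loops, sleeping between checks, and fires at most once per calendar day.
from datetime import datetime

def should_fire(now: datetime, hour: int, minute: int, last_run_date):
    if now.hour == hour and now.minute == minute and last_run_date != now.date():
        return True, now.date()
    return False, last_run_date

last = None
fired, last = should_fire(datetime(2026, 4, 23, 2, 0, 30), 2, 0, last)
print(fired)  # True
fired, _ = should_fire(datetime(2026, 4, 23, 2, 0, 59), 2, 0, last)
print(fired)  # False: already ran today
```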

frigate/test/test_recap.py Normal file

@@ -0,0 +1,269 @@
import unittest
from unittest.mock import patch
import numpy as np
from frigate.recap.recap import (
_balance_groups,
_build_background,
_draw_label,
_interpolate_path,
_make_spotlight,
_mask_centroid,
_person_mask,
_relative_box_to_pixels,
)
class TestRelativeBoxConversion(unittest.TestCase):
def test_basic(self):
x1, y1, x2, y2 = _relative_box_to_pixels([0.5, 0.25, 0.1, 0.2], 1920, 1080)
self.assertEqual(x1, 960)
self.assertEqual(y1, 270)
self.assertEqual(x2, 1152)
self.assertEqual(y2, 486)
def test_clamps(self):
_, _, x2, y2 = _relative_box_to_pixels([0.9, 0.9, 0.2, 0.2], 100, 100)
self.assertEqual(x2, 100)
self.assertEqual(y2, 100)
def test_full_frame(self):
x1, y1, x2, y2 = _relative_box_to_pixels([0.0, 0.0, 1.0, 1.0], 1920, 1080)
self.assertEqual((x1, y1, x2, y2), (0, 0, 1920, 1080))
def test_real_frigate_data(self):
x1, y1, x2, y2 = _relative_box_to_pixels([0.65, 0.117, 0.025, 0.089], 640, 360)
self.assertEqual(x1, 416)
self.assertEqual(y1, 42)
self.assertGreater(x2, x1)
self.assertGreater(y2, y1)
class TestSpotlight(unittest.TestCase):
def test_shape_and_range(self):
sl = _make_spotlight(100, 100, 50, 50, 20, 20)
self.assertEqual(sl.shape, (100, 100))
self.assertGreater(sl[50, 50], 0.5)
self.assertAlmostEqual(sl[0, 0], 0.0, places=1)
def test_off_center(self):
sl = _make_spotlight(200, 200, 10, 10, 15, 15)
self.assertGreater(sl[10, 10], 0.5)
self.assertAlmostEqual(sl[199, 199], 0.0, places=1)
class TestPersonMask(unittest.TestCase):
def test_identical_frames_empty_mask(self):
frame = np.full((100, 100, 3), 128, np.uint8)
ref = frame.copy()
sl = _make_spotlight(100, 100, 50, 50, 30, 30)
mask = _person_mask(frame, ref, sl)
self.assertEqual(mask.sum(), 0.0)
def test_different_region_shows_fg(self):
ref = np.full((100, 100, 3), 50, np.uint8)
frame = ref.copy()
frame[40:60, 40:60] = 200 # person-sized bright block
sl = _make_spotlight(100, 100, 50, 50, 30, 30)
mask = _person_mask(frame, ref, sl)
self.assertGreater(mask[50, 50], 0.0)
class TestMaskCentroid(unittest.TestCase):
def test_centered_blob(self):
m = np.zeros((100, 100), np.float32)
m[40:60, 40:60] = 1.0
cx, cy = _mask_centroid(m)
self.assertAlmostEqual(cx, 50, delta=2)
self.assertAlmostEqual(cy, 50, delta=2)
def test_empty_mask(self):
m = np.zeros((100, 100), np.float32)
self.assertIsNone(_mask_centroid(m))
class TestInterpolatePath(unittest.TestCase):
def test_empty(self):
self.assertIsNone(_interpolate_path([], 1.0, 100, 100))
self.assertIsNone(_interpolate_path(None, 1.0, 100, 100))
def test_midpoint(self):
path = [((0.0, 0.0), 10.0), ((1.0, 1.0), 20.0)]
self.assertEqual(_interpolate_path(path, 15.0, 100, 100), (50, 50))
def test_before_first(self):
path = [((0.25, 0.75), 10.0), ((0.5, 0.5), 20.0)]
self.assertEqual(_interpolate_path(path, 5.0, 100, 100), (25, 75))
def test_after_last(self):
path = [((0.1, 0.2), 10.0), ((0.3, 0.4), 20.0)]
self.assertEqual(_interpolate_path(path, 30.0, 1000, 1000), (300, 400))
def test_real_path(self):
path = [
([0.6219, 0.2028], 1774057715.808),
([0.6297, 0.2028], 1774057716.008),
([0.7078, 0.2167], 1774057720.019),
]
pos = _interpolate_path(path, 1774057718.0, 640, 360)
self.assertIsNotNone(pos)
self.assertGreater(pos[0], int(0.6297 * 640))
self.assertLess(pos[0], int(0.7078 * 640))
class TestDrawLabel(unittest.TestCase):
def test_draws(self):
f = np.zeros((200, 300, 3), np.uint8)
_draw_label(f, "12:34:56", 100, 100)
self.assertFalse(np.all(f == 0))
def test_edge(self):
f = np.zeros((50, 50, 3), np.uint8)
_draw_label(f, "test", 0, 5)
self.assertFalse(np.all(f == 0))
class TestBalanceGroups(unittest.TestCase):
def test_single_event(self):
events = [{"frames": [1] * 10, "time": 0}]
groups = _balance_groups(events, 3)
self.assertEqual(len(groups), 1)
self.assertEqual(len(groups[0]), 1)
def test_even_split(self):
events = [{"frames": [1] * 100, "time": i} for i in range(6)]
groups = _balance_groups(events, 3)
self.assertEqual(len(groups), 2)
self.assertEqual(len(groups[0]), 3)
self.assertEqual(len(groups[1]), 3)
def test_long_events_packed_with_short(self):
events = [
{"frames": [1] * 500, "time": 0},
{"frames": [1] * 400, "time": 1},
{"frames": [1] * 10, "time": 2},
{"frames": [1] * 10, "time": 3},
]
groups = _balance_groups(events, 2)
# the algorithm packs into the shortest available group,
# so 500 and 400 end up together (both long), short ones together
self.assertEqual(len(groups), 2)
all_lengths = sorted(
[len(e["frames"]) for g in groups for e in g], reverse=True
)
self.assertEqual(all_lengths, [500, 400, 10, 10])
def test_sorted_by_time(self):
events = [
{"frames": [1] * 10, "time": 30},
{"frames": [1] * 10, "time": 10},
{"frames": [1] * 10, "time": 20},
]
groups = _balance_groups(events, 3)
times = [e["time"] for e in groups[0]]
self.assertEqual(times, sorted(times))
class TestBuildBackground(unittest.TestCase):
@patch("frigate.recap.recap._extract_frame")
@patch("frigate.recap.recap._probe_resolution")
@patch("frigate.recap.recap._get_recording_at")
def test_too_few(self, mock_rec, mock_probe, mock_extract):
mock_rec.return_value = ("/fake.mp4", 0.0)
mock_probe.return_value = (100, 100)
mock_extract.return_value = None
self.assertIsNone(_build_background("/usr/bin/ffmpeg", "cam", 0.0, 100.0, 10))
@patch("frigate.recap.recap.os.path.isfile", return_value=True)
@patch("frigate.recap.recap._extract_frame")
@patch("frigate.recap.recap._probe_resolution")
@patch("frigate.recap.recap._get_recording_at")
def test_median(self, mock_rec, mock_probe, mock_extract, mock_isfile):
mock_rec.return_value = ("/fake.mp4", 0.0)
mock_probe.return_value = (4, 4)
frames = [np.full((4, 4, 3), v, np.uint8) for v in [0, 100, 200]]
idx = [0]
def side_effect(*a, **kw):
r = frames[idx[0] % 3]
idx[0] += 1
return r
mock_extract.side_effect = side_effect
result = _build_background("/usr/bin/ffmpeg", "cam", 0.0, 100.0, 5)
self.assertIsNotNone(result)
self.assertEqual(result[0, 0, 0], 100)
class TestRecapConfig(unittest.TestCase):
def test_defaults(self):
from frigate.config.recap import RecapConfig
cfg = RecapConfig()
self.assertFalse(cfg.enabled)
self.assertFalse(cfg.auto_generate)
self.assertEqual(cfg.schedule_time, "02:00")
self.assertEqual(cfg.cameras, [])
self.assertEqual(cfg.default_label, "person")
self.assertEqual(cfg.speed, 2)
self.assertEqual(cfg.max_per_group, 3)
self.assertEqual(cfg.video_duration, 30)
def test_custom_values(self):
from frigate.config.recap import RecapConfig
cfg = RecapConfig(
enabled=True,
auto_generate=True,
schedule_time="03:30",
cameras=["front", "back"],
speed=4,
max_per_group=5,
)
self.assertTrue(cfg.auto_generate)
self.assertEqual(cfg.schedule_time, "03:30")
self.assertEqual(cfg.cameras, ["front", "back"])
self.assertEqual(cfg.speed, 4)
self.assertEqual(cfg.max_per_group, 5)
def test_validation_ranges(self):
from pydantic import ValidationError
from frigate.config.recap import RecapConfig
with self.assertRaises(ValidationError):
RecapConfig(ghost_duration=0.1)
with self.assertRaises(ValidationError):
RecapConfig(output_fps=60)
with self.assertRaises(ValidationError):
RecapConfig(video_duration=2)
with self.assertRaises(ValidationError):
RecapConfig(background_samples=2)
with self.assertRaises(ValidationError):
RecapConfig(speed=0)
with self.assertRaises(ValidationError):
RecapConfig(speed=10)
with self.assertRaises(ValidationError):
RecapConfig(max_per_group=0)
def test_schedule_time_validation(self):
from pydantic import ValidationError
from frigate.config.recap import RecapConfig
with self.assertRaises(ValidationError):
RecapConfig(schedule_time="25:00")
with self.assertRaises(ValidationError):
RecapConfig(schedule_time="abc")
with self.assertRaises(ValidationError):
RecapConfig(schedule_time="12:60")
# valid edge cases
cfg = RecapConfig(schedule_time="00:00")
self.assertEqual(cfg.schedule_time, "00:00")
cfg = RecapConfig(schedule_time="23:59")
self.assertEqual(cfg.schedule_time, "23:59")
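These assertions imply an HH:MM pattern check on `schedule_time`. A hypothetical standalone validator that would satisfy them (the real `RecapConfig` field validator is not shown in this diff, so the pattern here is an assumption):

```python
# Hypothetical HH:MM validator: two-digit 00-23 hour, two-digit 00-59 minute.
import re

def validate_schedule_time(value: str) -> str:
    if not re.fullmatch(r"([01]\d|2[0-3]):[0-5]\d", value):
        raise ValueError(f"invalid schedule_time: {value!r}")
    return value

print(validate_schedule_time("23:59"))  # 23:59
```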
if __name__ == "__main__":
unittest.main()


@@ -711,23 +711,31 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedProcess:
     else:
         format_entries = None

-    ffprobe_cmd = [
-        ffmpeg.ffprobe_path,
-        "-timeout",
-        "1000000",
-        "-print_format",
-        "json",
-        "-show_entries",
-        f"stream={stream_entries}",
-    ]
+    def run(rtsp_transport: Optional[str] = None) -> sp.CompletedProcess:
+        cmd = [ffmpeg.ffprobe_path]
+        if rtsp_transport:
+            cmd += ["-rtsp_transport", rtsp_transport]
+        cmd += [
+            "-timeout",
+            "1000000",
+            "-print_format",
+            "json",
+            "-show_entries",
+            f"stream={stream_entries}",
+        ]
+        if detailed and format_entries:
+            cmd.extend(["-show_entries", f"format={format_entries}"])
+        cmd.extend(["-loglevel", "error", clean_path])
+        return sp.run(cmd, capture_output=True)

-    # Add format entries for detailed mode
-    if detailed and format_entries:
-        ffprobe_cmd.extend(["-show_entries", f"format={format_entries}"])
+    result = run()

-    ffprobe_cmd.extend(["-loglevel", "error", clean_path])
+    # For RTSP: retry with explicit TCP transport if the first attempt failed
+    # (default UDP may be blocked)
+    if result.returncode != 0 and clean_path.startswith("rtsp://"):
+        result = run(rtsp_transport="tcp")

-    return sp.run(ffprobe_cmd, capture_output=True)
+    return result


 def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
@@ -807,10 +815,15 @@ async def get_video_properties(
 ) -> dict[str, Any]:
     async def probe_with_ffprobe(
         url: str,
+        rtsp_transport: Optional[str] = None,
     ) -> tuple[bool, int, int, Optional[str], float]:
         """Fallback using ffprobe: returns (valid, width, height, codec, duration)."""
-        cmd = [
-            ffmpeg.ffprobe_path,
+        cmd = [ffmpeg.ffprobe_path]
+        if rtsp_transport:
+            cmd += ["-rtsp_transport", rtsp_transport]
+        cmd += [
             "-rw_timeout",
             "5000000",
             "-v",
             "quiet",
             "-print_format",
@@ -872,12 +885,26 @@ async def get_video_properties(
         cap.release()
         return valid, width, height, fourcc, duration

-    # try cv2 first
-    has_video, width, height, fourcc, duration = probe_with_cv2(url)
+    is_rtsp = url.startswith("rtsp://")

-    # fallback to ffprobe if needed
-    if not has_video or (get_duration and duration < 0):
-        has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+    if is_rtsp:
+        # skip cv2 for RTSP: its FFmpeg backend has a hardcoded ~30s internal
+        # timeout that cannot be shortened per-call, and ffprobe bounded by
+        # -rw_timeout handles RTSP probing reliably
+        has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+    else:
+        # try cv2 first for local files, HTTP, RTMP
+        has_video, width, height, fourcc, duration = probe_with_cv2(url)
+
+        # fallback to ffprobe if needed
+        if not has_video or (get_duration and duration < 0):
+            has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+
+    # last resort for RTSP: try TCP transport, since default UDP may be blocked
+    if (not has_video or (get_duration and duration < 0)) and is_rtsp:
+        has_video, width, height, fourcc, duration = await probe_with_ffprobe(
+            url, rtsp_transport="tcp"
+        )

     result: dict[str, Any] = {"has_valid_video": has_video}
     if has_video:
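The fallback logic in these hunks reduces to: probe once with the default transport, and only if that fails for an rtsp:// URL, probe again forcing TCP. In isolation (with `fake_probe` standing in for the real ffprobe invocation, so no actual process is spawned):

```python
# Retry-with-TCP sketch: `probe` is any callable returning a 0/1 exit status.
def probe_with_fallback(url, probe):
    result = probe(url, transport=None)
    if result != 0 and url.startswith("rtsp://"):
        # default transport (UDP) failed; retry forcing TCP
        result = probe(url, transport="tcp")
    return result

def fake_probe(url, transport):
    # pretend UDP is blocked: only TCP succeeds for RTSP URLs
    return 0 if transport == "tcp" or not url.startswith("rtsp://") else 1

print(probe_with_fallback("rtsp://cam/stream", fake_probe))  # 0 (succeeded via TCP)
```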

web/package-lock.json generated

@@ -54,7 +54,7 @@
     "immer": "^10.1.1",
     "js-yaml": "^4.1.1",
     "konva": "^10.2.3",
-    "lodash": "^4.17.23",
+    "lodash": "^4.18.1",
     "lucide-react": "^0.577.0",
     "monaco-yaml": "^5.4.1",
     "next-themes": "^0.4.6",
@@ -9636,15 +9636,15 @@
       }
     },
     "node_modules/lodash": {
-      "version": "4.17.23",
-      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
+      "version": "4.18.1",
+      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.18.1.tgz",
+      "integrity": "sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==",
       "license": "MIT"
     },
     "node_modules/lodash-es": {
-      "version": "4.17.23",
-      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
+      "version": "4.18.1",
+      "resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.18.1.tgz",
+      "integrity": "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A==",
       "license": "MIT"
     },
     "node_modules/lodash.merge": {


@@ -68,7 +68,7 @@
     "immer": "^10.1.1",
     "js-yaml": "^4.1.1",
     "konva": "^10.2.3",
-    "lodash": "^4.17.23",
+    "lodash": "^4.18.1",
     "lucide-react": "^0.577.0",
     "monaco-yaml": "^5.4.1",
     "next-themes": "^0.4.6",


@@ -415,6 +415,7 @@
     "audioCodecGood": "Audio codec is {{codec}}.",
     "resolutionHigh": "A resolution of {{resolution}} may cause increased resource usage.",
     "resolutionLow": "A resolution of {{resolution}} may be too low for reliable detection of small objects.",
+    "resolutionUnknown": "The resolution of this stream could not be probed. You should manually set the detect resolution in Settings or your config.",
     "noAudioWarning": "No audio detected for this stream, recordings will not have audio.",
     "audioCodecRecordError": "The AAC audio codec is required to support audio in recordings.",
     "audioCodecRequired": "An audio stream is required to support audio detection.",


@@ -218,7 +218,7 @@ export default function CameraReviewClassification({
           <Label
             className={cn(
               "flex flex-row items-center text-base",
-              alertsZonesModified && "text-danger",
+              alertsZonesModified && "text-unsaved",
             )}
           >
             <Trans ns="views/settings">cameraReview.review.alerts</Trans>
@@ -286,7 +286,7 @@ export default function CameraReviewClassification({
           <Label
             className={cn(
               "flex flex-row items-center text-base",
-              detectionsZonesModified && "text-danger",
+              detectionsZonesModified && "text-unsaved",
             )}
           >
             <Trans ns="views/settings">


@@ -1012,7 +1012,7 @@ export function ConfigSection({
       >
         {hasChanges && (
           <div className="flex items-center gap-2">
-            <span className="text-sm text-danger">
+            <span className="text-sm text-unsaved">
               {t("unsavedChanges", {
                 ns: "views/settings",
                 defaultValue: "You have unsaved changes",
@@ -1299,7 +1299,7 @@ export function ConfigSection({
         {hasChanges && (
           <Badge
             variant="secondary"
-            className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+            className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
           >
             {t("button.modified", {
               ns: "common",


@@ -154,7 +154,7 @@ export function KnownPlatesField(props: FieldProps) {
       <div className="flex items-center justify-between">
         <div>
           <CardTitle
-            className={cn("text-sm", isModified && "text-danger")}
+            className={cn("text-sm", isModified && "text-unsaved")}
           >
             {title}
           </CardTitle>


@@ -142,7 +142,7 @@ export function ReplaceRulesField(props: FieldProps) {
       <div className="flex items-center justify-between">
         <div>
           <CardTitle
-            className={cn("text-sm", isModified && "text-danger")}
+            className={cn("text-sm", isModified && "text-unsaved")}
           >
             {title}
           </CardTitle>


@@ -497,7 +497,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
         htmlFor={id}
         className={cn(
           "text-sm font-medium",
-          isModified && "text-danger",
+          isModified && "text-unsaved",
           hasFieldErrors && "text-destructive",
         )}
       >
@@ -516,7 +516,7 @@
     return (
       <Label
         htmlFor={id}
-        className={cn("text-sm font-medium", isModified && "text-danger")}
+        className={cn("text-sm font-medium", isModified && "text-unsaved")}
       >
         {finalLabel}
         {required && <span className="ml-1 text-destructive">*</span>}
@@ -535,7 +535,7 @@
         htmlFor={id}
         className={cn(
           "text-sm font-medium",
-          isModified && "text-danger",
+          isModified && "text-unsaved",
           hasFieldErrors && "text-destructive",
         )}
       >


@@ -467,7 +467,7 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {
       <CardTitle
         className={cn(
           "flex items-center text-sm",
-          hasModifiedDescendants && "text-danger",
+          hasModifiedDescendants && "text-unsaved",
         )}
       >
         {inferredLabel}


@@ -0,0 +1,166 @@
import { useCallback, useState } from "react";
import {
Dialog,
DialogContent,
DialogFooter,
DialogHeader,
DialogTitle,
} from "../ui/dialog";
import { Button } from "../ui/button";
import { Label } from "../ui/label";
import { RadioGroup, RadioGroupItem } from "../ui/radio-group";
import { Input } from "../ui/input";
import { SelectSeparator } from "../ui/select";
import axios from "axios";
import { toast } from "sonner";
import { isDesktop } from "react-device-detect";
import { Drawer, DrawerContent } from "../ui/drawer";
import ActivityIndicator from "../indicators/activity-indicator";
const RECAP_PERIODS = ["24", "12", "8", "4", "1"] as const;
type RecapPeriod = (typeof RECAP_PERIODS)[number];
type RecapDialogProps = {
camera: string;
open: boolean;
onOpenChange: (open: boolean) => void;
};
export default function RecapDialog({
camera,
open,
onOpenChange,
}: RecapDialogProps) {
const [selectedPeriod, setSelectedPeriod] = useState<RecapPeriod>("24");
const [label, setLabel] = useState("person");
const [isGenerating, setIsGenerating] = useState(false);
const onGenerate = useCallback(() => {
const now = Date.now() / 1000;
const hours = parseInt(selectedPeriod);
const startTime = now - hours * 3600;
setIsGenerating(true);
axios
.post(`recap/${camera}`, null, {
params: {
start_time: startTime,
end_time: now,
label,
},
})
.then((response) => {
if (response.status === 200 && response.data.success) {
toast.success("Recap generation started", {
position: "top-center",
description: "Check Exports when it's done.",
});
onOpenChange(false);
}
})
.catch((error) => {
const msg =
error.response?.data?.message ||
error.response?.data?.detail ||
"Unknown error";
toast.error(`Recap failed: ${msg}`, { position: "top-center" });
})
.finally(() => {
setIsGenerating(false);
});
}, [camera, selectedPeriod, label, onOpenChange]);
const Overlay = isDesktop ? Dialog : Drawer;
const Content = isDesktop ? DialogContent : DrawerContent;
return (
<Overlay open={open} onOpenChange={onOpenChange}>
<Content
className={
isDesktop
? "sm:rounded-lg md:rounded-2xl"
: "mx-4 rounded-lg px-4 pb-4 md:rounded-2xl"
}
>
<div className="w-full">
{isDesktop && (
<>
<DialogHeader>
<DialogTitle>Generate Recap</DialogTitle>
</DialogHeader>
<SelectSeparator className="my-4 bg-secondary" />
</>
)}
<div className={`flex flex-col gap-4 ${isDesktop ? "" : "mt-4"}`}>
<Label className="text-sm font-medium">Time period</Label>
              <RadioGroup
                className="flex flex-col gap-3"
                value={selectedPeriod}
                onValueChange={(v) => setSelectedPeriod(v as RecapPeriod)}
              >
{RECAP_PERIODS.map((period) => (
<div key={period} className="flex items-center gap-2">
<RadioGroupItem
className={
period === selectedPeriod
? "bg-selected from-selected/50 to-selected/90 text-selected"
: "bg-secondary from-secondary/50 to-secondary/90 text-secondary"
}
id={`recap-${period}`}
value={period}
/>
<Label
className="cursor-pointer"
htmlFor={`recap-${period}`}
>
Last {period} {parseInt(period) === 1 ? "hour" : "hours"}
</Label>
</div>
))}
</RadioGroup>
<div className="mt-2">
<Label className="text-sm text-secondary-foreground">
Object type
</Label>
<Input
className="text-md mt-2"
type="text"
value={label}
onChange={(e) => setLabel(e.target.value)}
placeholder="person"
/>
</div>
</div>
{isDesktop && <SelectSeparator className="my-4 bg-secondary" />}
<DialogFooter
className={isDesktop ? "" : "mt-6 flex flex-col-reverse gap-4"}
>
<div
className={`cursor-pointer p-2 text-center ${isDesktop ? "" : "w-full"}`}
onClick={() => onOpenChange(false)}
>
Cancel
</div>
<Button
className={isDesktop ? "" : "w-full"}
variant="select"
size="sm"
disabled={isGenerating}
onClick={onGenerate}
>
{isGenerating && (
<ActivityIndicator className="mr-2 h-4 w-4" />
)}
Generate Recap
</Button>
</DialogFooter>
</div>
</Content>
</Overlay>
);
}


@@ -607,23 +607,38 @@ function StreamIssues({
       }
     }

-    if (stream.roles.includes("detect") && stream.resolution) {
-      const [width, height] = stream.resolution.split("x").map(Number);
-      if (!isNaN(width) && !isNaN(height) && width > 0 && height > 0) {
-        const minDimension = Math.min(width, height);
-        const maxDimension = Math.max(width, height);
+    if (stream.roles.includes("detect") && stream.testResult) {
+      const probedResolution = stream.testResult.resolution;
+      let probedWidth = 0;
+      let probedHeight = 0;
+      if (probedResolution) {
+        const [w, h] = probedResolution.split("x").map(Number);
+        if (!isNaN(w) && !isNaN(h)) {
+          probedWidth = w;
+          probedHeight = h;
+        }
+      }
+      if (probedWidth <= 0 || probedHeight <= 0) {
+        result.push({
+          type: "error",
+          message: t("cameraWizard.step4.issues.resolutionUnknown"),
+        });
+      } else {
+        const minDimension = Math.min(probedWidth, probedHeight);
+        const maxDimension = Math.max(probedWidth, probedHeight);
         if (minDimension > 1080) {
           result.push({
             type: "warning",
             message: t("cameraWizard.step4.issues.resolutionHigh", {
-              resolution: stream.resolution,
+              resolution: probedResolution,
             }),
           });
         } else if (maxDimension < 640) {
           result.push({
             type: "error",
             message: t("cameraWizard.step4.issues.resolutionLow", {
-              resolution: stream.resolution,
+              resolution: probedResolution,
             }),
           });
         }


@@ -1435,7 +1435,7 @@ export default function Settings() {
               />
             )}
             {showUnsavedDot && (
-              <span className="inline-block size-2 rounded-full bg-danger" />
+              <span className="inline-block size-2 rounded-full bg-unsaved" />
             )}
           </div>
         )}
@@ -1516,7 +1516,7 @@
         <div className="sticky bottom-0 z-50 mt-2 bg-background p-4">
           <div className="flex flex-col items-center gap-2">
             <div className="flex items-center gap-2">
-              <span className="text-sm text-danger">
+              <span className="text-sm text-unsaved">
                 {t("unsavedChanges", {
                   ns: "views/settings",
                   defaultValue: "You have unsaved changes",


@@ -79,11 +79,11 @@ const PROFILE_COLORS: ProfileColor[] = [
     bgMuted: "bg-green-400/20",
   },
   {
-    bg: "bg-amber-400",
-    text: "text-amber-400",
-    dot: "bg-amber-400",
-    border: "border-amber-400",
-    bgMuted: "bg-amber-400/20",
+    bg: "bg-fuchsia-500",
+    text: "text-fuchsia-500",
+    dot: "bg-fuchsia-500",
+    border: "border-fuchsia-500",
+    bgMuted: "bg-fuchsia-500/20",
   },
   {
     bg: "bg-slate-400",
@@ -93,11 +93,11 @@ const PROFILE_COLORS: ProfileColor[] = [
     bgMuted: "bg-slate-400/20",
   },
   {
-    bg: "bg-orange-300",
-    text: "text-orange-300",
-    dot: "bg-orange-300",
-    border: "border-orange-300",
-    bgMuted: "bg-orange-300/20",
+    bg: "bg-stone-500",
+    text: "text-stone-500",
+    dot: "bg-stone-500",
+    border: "border-stone-500",
+    bgMuted: "bg-stone-500/20",
   },
   {
     bg: "bg-blue-300",


@@ -380,7 +380,9 @@ export default function Go2RtcStreamsSettingsView({
       >
         {hasChanges && (
           <div className="flex items-center gap-2">
-            <span className="text-sm text-danger">{t("unsavedChanges")}</span>
+            <span className="text-sm text-unsaved">
+              {t("unsavedChanges")}
+            </span>
           </div>
         )}
         <div className="flex w-full items-center gap-2 md:w-auto">


@@ -212,7 +212,7 @@ export function SingleSectionPage({
       {sectionStatus.hasChanges && (
         <Badge
           variant="secondary"
-          className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+          className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
         >
           {t("button.modified", {
             ns: "common",
@@ -250,7 +250,7 @@
       {sectionStatus.hasChanges && (
         <Badge
           variant="secondary"
-          className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+          className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
         >
           {t("button.modified", { ns: "common", defaultValue: "Modified" })}
         </Badge>


@@ -65,6 +65,7 @@ module.exports = {
         ring: "hsl(var(--ring))",
         danger: "#ef4444",
         success: "#22c55e",
+        unsaved: "#f59e0b",
         background: "hsl(var(--background))",
         background_alt: "hsl(var(--background-alt))",
         foreground: "hsl(var(--foreground))",