Compare commits

...

13 Commits

Author SHA1 Message Date
GuoQing Liu
178df21acf
Merge bf0d985b9d into acdfed40a9 2026-03-07 19:19:42 -05:00
Josh Hawkins
acdfed40a9
Improve annotation offset UX (#22310)
* keep nav buttons visible

nav buttons would be hidden when closing and reopening the dialog after selecting the tracking details pane

* better ux in tracking details

actually pause the video and seek when annotation offset changes to make it easier to visually line up the bounding box

* improve detail stream ux

* update dummy camera docs

* fix docs link
2026-03-07 07:50:00 -06:00
Josh Hawkins
889dfca36c
Frontend fixes (#22309)
* prevent unnecessary reloads in useUserPersistence hook

* always render ProtectedRoute (handling undefined roles internally) and add Suspense fallback

* add missing i18n namespaces

React 19 enforces Suspense more strictly, so components using useTranslation() with unloaded namespaces would suspend, blanking the content behind the empty Suspense fallback

* add missing namespace

* remove unneeded

* remove modal from actions dropdown
2026-03-07 06:43:00 -07:00
Josh Hawkins
dda9f7bfed
apply filters after clustering (#22308)
apply length and format filters to the clustered representative plate rather than individual OCR readings, so noisy variants still contribute to clustering even when they don't pass on their own
2026-03-07 06:42:27 -07:00
Josh Hawkins
c2e667c0dd
Add dynamic configuration for more fields (#22295)
* face recognition dynamic config

* lpr dynamic config

* safe changes for birdseye dynamic config

* bird classification dynamic config

* always assign new config to stats emitter to make telemetry fields dynamic

* add wildcard support for camera config updates in config_set

* update restart required fields for global sections

* add test

* fix rebase issue

* collapsible settings sidebar

use the preexisting control available with shadcn's sidebar (cmd/ctrl-B) to give users more space to set masks/zones on smaller screens

* dynamic ffmpeg

* ensure previews dir exists

When ffmpeg processes restart, there's a brief window where the preview frame generation pipeline is torn down and restarted. Before these changes, ffmpeg only restarted on crash/stall recovery or a full Frigate restart. Now that ffmpeg restarts happen on demand via config changes, there's a higher chance a frontend request hits the preview_mp4 or preview_gif endpoints during that brief restart window, when the directory might not exist yet. The existing os.listdir() call would throw FileNotFoundError without a directory existence check. This fix checks whether the directory exists and returns 404 if not, exactly how preview_thumbnail already handles the same scenario a few lines below

* global ffmpeg section

* clean up

* tweak

* fix test
2026-03-06 13:45:39 -07:00
Josh Hawkins
c9bd907721
Frontend fixes (#22294)
* fix useImageLoaded hook running on every render

* fix volume not applying for all cameras

* Fix maximum update depth exceeded errors on Review page

- use-overlay-state: use refs for location to keep setter identity
  stable across renders, preventing cascading re-render loops when
  effects depend on the setter. Add Object.is bail-out guard to skip
  redundant navigate calls. Move setPersistedValue after bail-out to
  avoid unnecessary IndexedDB writes.

* don't try to fetch previews when motion search dialog is open

* revert unneeded changes

re-rendering was caused by the overlay state hook, not this one

* filter dicts to only use id field in sync recordings
2026-03-06 13:41:15 -07:00
Josh Hawkins
34cc1208a6
Skip motion threshold configuration (#22255)
* backend

* frontend

* i18n

* docs

* add test

* clean up

* clean up motion detection docs

* formatting

* make optional
2026-03-05 18:20:03 -06:00
Josh Hawkins
2babfd2ec9
Improve motion review and add motion search (#22253)
* implement motion search and motion previews

* tweaks

* fix merge issue

* fix copilot instructions
2026-03-05 17:53:48 -06:00
Josh Hawkins
229436c94a
Add ability to clear region grids from the frontend (#22277)
* backend

* frontend

* i18n

* tweaks
2026-03-05 16:19:30 -07:00
Josh Hawkins
02678f4a09
show log when anonymous users log in (#22254)
based on a cache key built from remote_addr and user agent; entries expire after 7 days by default
2026-03-05 16:17:41 -07:00
Josh Hawkins
65db9b0aec
Fixes (#22280)
* fix ollama chat tool calling

handle dict arguments, streaming fallback, and message format

* pin setuptools<81 to ensure pkg_resources remains available

When ensure_torch_dependencies() installs torch/torchvision via pip, it can upgrade setuptools to >=81.0.0, which removed the pkg_resources module. rknn-toolkit2 depends on pkg_resources internally, so subsequent RKNN conversion fails with No module named 'pkg_resources'.
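The pin itself is a one-line version constraint; a minimal sketch of the idea as a requirements entry (the exact requirements file in the repo isn't shown here):

```
# setuptools 81 removed pkg_resources, which rknn-toolkit2 still imports
setuptools<81
```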
2026-03-05 14:11:32 -06:00
Josh Hawkins
2782931c72
Update frontend to React 19 (#22275)
* remove unused RecoilRoot and fix implicit ref callback

Remove the vestigial recoil dependency (zero consumers) and convert
the implicit-return ref callback in SearchView to block form to
prevent React 19 from interpreting it as a cleanup function.

* replace react-transition-group with framer-motion in Chip

Replace CSSTransition with framer-motion AnimatePresence + motion.div
for React 19 compatibility (react-transition-group uses findDOMNode).
framer-motion is already a project dependency.

* migrate react-grid-layout v1 to v2

- Replace WidthProvider(Responsive) HOC with useContainerWidth hook
- Update types: Layout (single item) → LayoutItem, Layout[] → Layout
- Replace isDraggable/isResizable/resizeHandles with dragConfig/resizeConfig
- Update EventCallback signature for v2 API
- Remove @types/react-grid-layout (v2 includes its own types)

* upgrade vaul, next-themes, framer-motion, react-zoom-pan-pinch

- vaul: ^0.9.1 → ^1.1.2
- next-themes: ^0.3.0 → ^0.4.6
- framer-motion: ^11.5.4 → ^12.35.0 (React 19 native support)
- react-zoom-pan-pinch: 3.4.4 → latest

* upgrade to React 19, react-konva v19, eslint-plugin-react-hooks v5

Core React 19 upgrade with all necessary type fixes:
- Update RefObject types to accept T | null (React 19 refs always nullable)
- Add JSX namespace imports (no longer global in React 19)
- Add initial values to useRef calls (required in React 19)
- Fix ReactElement.props unknown type in config-form components
- Fix IconWrapper interface to use HTMLAttributes instead of index signature
- Add monaco-editor as dev dependency for type declarations
- Upgrade react-konva to v19, eslint-plugin-react-hooks to v5

* upgrade typescript to 5.9.3

* modernize Context.Provider to React 19 shorthand

Replace <Context.Provider value={...}> with <Context value={...}>
across all project-owned context providers. External library contexts
(react-icons IconContext, radix TooltipPrimitive) left unchanged.

* add runtime patches for React 19 compatibility

- Patch @radix-ui/react-compose-refs@1.1.2: stabilize useComposedRefs
  to prevent infinite render loops from unstable ref callbacks
  https://github.com/radix-ui/primitives/issues/3799
- Patch @radix-ui/react-slot@1.2.4: use useComposedRefs hook in
  SlotClone instead of inline composeRefs to prevent re-render cycles
  https://github.com/radix-ui/primitives/pull/3804
- Patch react-use-websocket@4.8.1: remove flushSync wrappers that
  cause "Maximum update depth exceeded" with React 19 auto-batching
  https://github.com/facebook/react/issues/27613
- Add npm overrides to ensure single hoisted copies of compose-refs
  and react-slot across all Radix packages
- Add postinstall script for patch-package
- Remove leftover react-transition-group dependency

* formatting

* use availableWidth instead of useContainerWidth for grid layout

The useContainerWidth hook from react-grid-layout v2 returns raw
container width without accounting for scrollbar width, causing the
grid to not fill the full available space. Use the existing
availableWidth value from useResizeObserver which already compensates
for scrollbar width, matching the working implementation.

* remove unused carousel component and fix React 19 peer deps

Remove embla-carousel-react and its unused Carousel UI component.
Upgrade sonner v1 → v2 for native React 19 support. Remove
@types/react-icons stub (react-icons bundles its own types).
These changes eliminate all peer dependency conflicts, so
npm install works without --legacy-peer-deps.

* fix React 19 infinite re-render loop on live dashboard

The "Maximum update depth exceeded" error was caused by two issues:

1. useDeferredStreamMetadata returned a new `{}` default on every render
   when SWR data was undefined, creating an unstable reference that
   triggered the useEffect in useCameraLiveMode on every render cycle.
   Fixed by using a stable module-level EMPTY_METADATA constant.

2. useResizeObserver's rest parameter `...refs` created a new array on
   every render, causing its useEffect to re-run and re-observe elements
   continuously. Fixed by stabilizing refs with useRef and only
   reconnecting the observer when actual DOM elements change.
2026-03-05 07:42:38 -07:00
ZhaiSoul
bf0d985b9d fix: go2rtc functionality becomes unavailable after setting the HTTP_PROXY environment variable 2026-02-05 15:38:30 +00:00
133 changed files with 10133 additions and 3738 deletions

View File

@ -38,7 +38,6 @@ Remember that motion detection is just used to determine when object detection s
The threshold value dictates how much of a change in a pixel's luminance is required to be considered motion.
```yaml
motion:
# Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
# Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
@ -53,7 +52,6 @@ Watching the motion boxes in the debug view, increase the threshold until you on
### Contour Area
```yaml
motion:
# Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
# Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
@ -81,27 +79,49 @@ However, if the preferred day settings do not work well at night it is recommend
## Tuning For Large Changes In Motion
### Lightning Threshold
```yaml
motion:
# Optional: The percentage of the image used to detect lightning or
# other substantial changes where motion detection needs to
# recalibrate. (default: shown below)
# Increasing this value will make motion detection more likely
# to consider lightning or IR mode changes as valid motion.
# Decreasing this value will make motion detection more likely
# to ignore large amounts of motion such as a person
# approaching a doorbell camera.
lightning_threshold: 0.8
```
Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. `lightning_threshold` defines the percentage of the image used to detect these substantial changes. Increasing this value makes motion detection more likely to treat large changes (like IR mode switches) as valid motion. Decreasing it makes motion detection more likely to ignore large amounts of motion, such as a person approaching a doorbell camera.
Note that `lightning_threshold` does **not** stop motion-based recordings from being saved — it only prevents additional motion analysis after the threshold is exceeded, reducing false positive object detections during high-motion periods (e.g. storms or PTZ sweeps) without interfering with recordings.
:::warning
Some cameras, like doorbell cameras, may have missed detections when someone walks directly in front of the camera and the `lightning_threshold` causes motion detection to recalibrate. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.
:::
### Skip Motion On Large Scene Changes
```yaml
motion:
# Optional: Fraction of the frame that must change in a single update
# before Frigate will completely ignore any motion in that frame.
# Values range between 0.0 and 1.0, leave unset (null) to disable.
# Setting this to 0.7 would cause Frigate to **skip** reporting
# motion boxes when more than 70% of the image appears to change
# (e.g. during lightning storms, IR/color mode switches, or other
# sudden lighting events).
skip_motion_threshold: 0.7
```
This option is handy when you want to prevent large transient changes from triggering recordings or object detection. It differs from `lightning_threshold` because it completely suppresses motion instead of just forcing a recalibration.
:::warning
When the skip threshold is exceeded, **no motion is reported** for that frame, meaning **nothing is recorded** for that frame. You can therefore miss something important, like a PTZ camera auto-tracking an object or activity while the camera is moving. If you prefer to guarantee that every frame is saved, leave this unset and accept occasional recordings containing scene noise — they typically only take up a few megabytes and are quick to scan in the timeline UI.
:::

View File

@ -480,12 +480,16 @@ motion:
# Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
# The value should be between 1 and 255.
threshold: 30
# Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection needs
# to recalibrate and motion checks stop for that frame. Recordings are unaffected. (default: shown below)
# Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion.
# Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.
lightning_threshold: 0.8
# Optional: Fraction of the frame that must change in a single update before motion boxes are completely
# ignored. Values range between 0.0 and 1.0. When exceeded, no motion boxes are reported and **no motion
# recording** is created for that frame. Leave unset (null) to disable this feature. Use with care on PTZ
# cameras or other situations where you require guaranteed frame capture.
skip_motion_threshold: null
# Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
# Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
# make motion detection more sensitive to smaller moving objects.

View File

@ -3,17 +3,67 @@ id: dummy-camera
title: Analyzing Object Detection
---
Frigate provides several tools for investigating object detection and tracking behavior: reviewing recorded detections through the UI, using the built-in Debug Replay feature, and manually setting up a dummy camera for advanced scenarios.
## Reviewing Detections in the UI
Before setting up a replay, you can often diagnose detection issues by reviewing existing recordings directly in the Frigate UI.
### Detail View (History)
The **Detail Stream** view in History shows recorded video with detection overlays (bounding boxes, path points, and zone highlights) drawn on top. Select a review item to see its tracked objects and lifecycle events. Clicking a lifecycle event seeks the video to that point so you can see exactly what the detector saw.
### Tracking Details (Explore)
In **Explore**, clicking a thumbnail opens the **Tracking Details** pane, which shows the full lifecycle of a single tracked object: every detection, zone entry/exit, and attribute change. The video plays back with the bounding box overlaid, letting you step through the object's entire lifecycle.
### Annotation Offset
Both views support an **Annotation Offset** setting (`detect.annotation_offset` in your camera config) that shifts the detection overlay in time relative to the recorded video. This compensates for the timing drift between the `detect` and `record` pipelines.
These streams use fundamentally different clocks with different buffering and latency characteristics, so the detection data and the recorded video are never perfectly synchronized. The annotation offset shifts the overlay to visually align the bounding boxes with the objects in the recorded video.
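In practice this is a small per-camera setting; a minimal sketch, assuming a hypothetical camera named `front` (the value is in milliseconds and is tuned by eye):

```yaml
cameras:
  front: # hypothetical camera name
    detect:
      # Shifts the detection overlay in time relative to the recording.
      # Increase or decrease while watching playback until the bounding
      # boxes line up with the objects.
      annotation_offset: 0
```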
#### Why the offset varies between clips
The base timing drift between detect and record is roughly constant for a given camera, so a single offset value works well on average. However, you may notice the alignment is not pixel-perfect in every clip. This is normal and caused by several factors:
- **Keyframe-constrained seeking**: When the browser seeks to a timestamp, it can only land on the nearest keyframe. Each recording segment has keyframes at different positions relative to the detection timestamps, so the same offset may land slightly early in one clip and slightly late in another.
- **Segment boundary trimming**: When a recording range starts mid-segment, the video is trimmed to the requested start point. This trim may not align with a keyframe, shifting the effective reference point.
- **Capture-time jitter**: Network buffering, camera buffer flushes, and ffmpeg's own buffering mean the system-clock timestamp and the corresponding recorded frame are not always offset by exactly the same amount.
The per-clip variation is typically quite low and is mostly an artifact of keyframe granularity rather than a change in the true drift. A "perfect" alignment would require per-frame, keyframe-aware offset compensation, which is not practical. Treat the annotation offset as a best-effort average for your camera.
## Debug Replay
Debug Replay lets you re-run Frigate's detection pipeline against a section of recorded video without manually configuring a dummy camera. It automatically extracts the recording, creates a temporary camera with the same detection settings as the original, and loops the clip through the pipeline so you can observe detections in real time.
### When to use
- Reproducing a detection or tracking issue from a specific time range
- Testing configuration changes (model settings, zones, filters, motion) against a known clip
- Gathering logs and debug overlays for a bug report
:::note
Only one replay session can be active at a time. If a session is already running, you will be prompted to navigate to it or stop it first.
:::
### Variables to consider
- The replay will not always produce identical results to the original run. Different frames may be selected on replay, which can change detections and tracking.
- Motion detection depends on the exact frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not fully deterministic: models and post-processing can yield slightly different results across runs.
Treat the replay as a close approximation rather than an exact reproduction. Run multiple loops and examine the debug overlays and logs to understand the behavior.
## Manual Dummy Camera
For advanced scenarios — such as testing with a clip from a different source, debugging ffmpeg behavior, or running a clip through a completely custom configuration — you can set up a dummy camera manually.
### Example config
Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml`:
```yaml
cameras:
@ -32,10 +82,10 @@ cameras:
enabled: false
```
- `-re -stream_loop -1` tells ffmpeg to play the file in real time and loop indefinitely.
- `-fflags +genpts` generates presentation timestamps when they are missing in the file.
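Putting it together, a minimal sketch of such a temporary camera (the name and clip path are placeholders; the truncated diff above shows the project's actual example):

```yaml
cameras:
  test: # placeholder name, easy to find and remove later
    ffmpeg:
      inputs:
        - path: /media/frigate/clip.mp4 # placeholder path to your exported clip
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
    detect:
      enabled: true
```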
### Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
@ -45,16 +95,8 @@ cameras:
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
### Troubleshooting
- **No video**: verify the file path is correct and accessible from the Frigate process/container.
- **FFmpeg errors**: check the log output and adjust `input_args` for your file format. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- **No detections**: confirm the camera `roles` include `detect` and that the model/detector configuration is enabled.

View File

@ -589,23 +589,38 @@ def config_set(request: Request, body: AppConfigSetBody):
request.app.frigate_config = config
request.app.genai_manager.update_config(config)
if request.app.stats_emitter is not None:
request.app.stats_emitter.config = config
if body.update_topic:
if body.update_topic.startswith("config/cameras/"):
_, _, camera, field = body.update_topic.split("/")
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
if camera == "*":
# Wildcard: fan out update to all cameras
enum_value = CameraConfigUpdateEnum[field]
for camera_name in config.cameras:
settings = config.get_nested_object(
f"config/cameras/{camera_name}/{field}"
)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(enum_value, camera_name),
settings,
)
else:
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
else:
settings = config.get_nested_object(body.update_topic)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum[field], camera
),
settings,
)
else:
# Generic handling for global config updates
settings = config.get_nested_object(body.update_topic)

View File

@ -32,6 +32,12 @@ from frigate.models import User
logger = logging.getLogger(__name__)
# In-memory cache to track which clients we've logged for an anonymous access event.
# Keyed by a hashed value combining remote address + user-agent. The value is
# an expiration timestamp (float).
FIRST_LOAD_TTL_SECONDS = 60 * 60 * 24 * 7 # 7 days
_first_load_seen: dict[str, float] = {}
def require_admin_by_default():
"""
@ -284,6 +290,15 @@ def get_remote_addr(request: Request):
return remote_addr or "127.0.0.1"
def _cleanup_first_load_seen() -> None:
"""Cleanup expired entries in the in-memory first-load cache."""
now = time.time()
# Build list for removal to avoid mutating dict during iteration
expired = [k for k, exp in _first_load_seen.items() if exp <= now]
for k in expired:
del _first_load_seen[k]
def get_jwt_secret() -> str:
jwt_secret = None
# check env var
@ -744,10 +759,30 @@ def profile(request: Request):
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
response = JSONResponse(
content={"username": username, "role": role, "allowed_cameras": allowed_cameras}
)
if username == "anonymous":
try:
remote_addr = get_remote_addr(request)
except Exception:
remote_addr = (
request.client.host if hasattr(request, "client") else "unknown"
)
ua = request.headers.get("user-agent", "")
key_material = f"{remote_addr}|{ua}"
cache_key = hashlib.sha256(key_material.encode()).hexdigest()
_cleanup_first_load_seen()
now = time.time()
if cache_key not in _first_load_seen:
_first_load_seen[cache_key] = now + FIRST_LOAD_TTL_SECONDS
logger.info(f"Anonymous user access from {remote_addr} ua={ua[:200]}")
return response
@router.get(
"/logout",

View File

@ -56,7 +56,9 @@ def _is_valid_host(host: str) -> bool:
@router.get("/go2rtc/streams", dependencies=[Depends(allow_any_authenticated())])
def go2rtc_streams():
r = requests.get("http://127.0.0.1:1984/api/streams")
r = requests.get(
"http://127.0.0.1:1984/api/streams", proxies={"http": None, "https": None}
)
if not r.ok:
logger.error("Failed to fetch streams from go2rtc")
return JSONResponse(
@ -75,7 +77,8 @@ def go2rtc_streams():
)
def go2rtc_camera_stream(request: Request, camera_name: str):
r = requests.get(
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone"
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=all&microphone",
proxies={"http": None, "https": None},
)
if not r.ok:
camera_config = request.app.frigate_config.cameras.get(camera_name)
@ -107,6 +110,7 @@ def go2rtc_add_stream(request: Request, stream_name: str, src: str = ""):
"http://127.0.0.1:1984/api/streams",
params=params,
timeout=10,
proxies={"http": None, "https": None},
)
if not r.ok:
logger.error(f"Failed to add go2rtc stream {stream_name}: {r.text}")
@ -142,6 +146,7 @@ def go2rtc_delete_stream(stream_name: str):
"http://127.0.0.1:1984/api/streams",
params={"src": stream_name},
timeout=10,
proxies={"http": None, "https": None},
)
if not r.ok:
logger.error(f"Failed to delete go2rtc stream {stream_name}: {r.text}")

View File

@ -11,6 +11,7 @@ class Tags(Enum):
classification = "Classification"
logs = "Logs"
media = "Media"
motion_search = "Motion Search"
notifications = "Notifications"
preview = "Preview"
recordings = "Recordings"

View File

@ -22,6 +22,7 @@ from frigate.api import (
event,
export,
media,
motion_search,
notification,
preview,
record,
@ -135,6 +136,7 @@ def create_fastapi_app(
app.include_router(export.router)
app.include_router(event.router)
app.include_router(media.router)
app.include_router(motion_search.router)
app.include_router(record.router)
app.include_router(debug_replay.router)
# App Properties

View File

@ -24,6 +24,7 @@ from tzlocal import get_localzone_name
from frigate.api.auth import (
allow_any_authenticated,
require_camera_access,
require_role,
)
from frigate.api.defs.query.media_query_parameters import (
Extension,
@ -1005,6 +1006,23 @@ def grid_snapshot(
)
@router.delete(
"/{camera_name}/region_grid", dependencies=[Depends(require_role("admin"))]
)
def clear_region_grid(request: Request, camera_name: str):
"""Clear the region grid for a camera."""
if camera_name not in request.app.frigate_config.cameras:
return JSONResponse(
content={"success": False, "message": "Camera not found"},
status_code=404,
)
Regions.delete().where(Regions.camera == camera_name).execute()
return JSONResponse(
content={"success": True, "message": "Region grid cleared"},
)
@router.get(
"/events/{event_id}/snapshot-clean.webp",
dependencies=[Depends(require_camera_access)],
@ -1263,6 +1281,13 @@ def preview_gif(
else:
# need to generate from existing images
preview_dir = os.path.join(CACHE_DIR, "preview_frames")
if not os.path.isdir(preview_dir):
return JSONResponse(
content={"success": False, "message": "Preview not found"},
status_code=404,
)
file_start = f"preview_{camera_name}"
start_file = f"{file_start}-{start_ts}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}-{end_ts}.{PREVIEW_FRAME_TYPE}"
@ -1438,6 +1463,13 @@ def preview_mp4(
else:
# need to generate from existing images
preview_dir = os.path.join(CACHE_DIR, "preview_frames")
if not os.path.isdir(preview_dir):
return JSONResponse(
content={"success": False, "message": "Preview not found"},
status_code=404,
)
file_start = f"preview_{camera_name}"
start_file = f"{file_start}-{start_ts}.{PREVIEW_FRAME_TYPE}"
end_file = f"{file_start}-{end_ts}.{PREVIEW_FRAME_TYPE}"

View File

@ -0,0 +1,292 @@
"""Motion search API for detecting changes within a region of interest."""
import logging
from typing import Any, List, Optional
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pydantic import BaseModel, Field
from frigate.api.auth import require_camera_access
from frigate.api.defs.tags import Tags
from frigate.jobs.motion_search import (
cancel_motion_search_job,
get_motion_search_job,
start_motion_search_job,
)
from frigate.types import JobStatusTypesEnum
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.motion_search])
class MotionSearchRequest(BaseModel):
"""Request body for motion search."""
start_time: float = Field(description="Start timestamp for the search range")
end_time: float = Field(description="End timestamp for the search range")
polygon_points: List[List[float]] = Field(
description="List of [x, y] normalized coordinates (0-1) defining the ROI polygon"
)
threshold: int = Field(
default=30,
ge=1,
le=255,
description="Pixel difference threshold (1-255)",
)
min_area: float = Field(
default=5.0,
ge=0.1,
le=100.0,
description="Minimum change area as a percentage of the ROI",
)
frame_skip: int = Field(
default=5,
ge=1,
le=30,
description="Process every Nth frame (1=all frames, 5=every 5th frame)",
)
parallel: bool = Field(
default=False,
description="Enable parallel scanning across segments",
)
max_results: int = Field(
default=25,
ge=1,
le=200,
description="Maximum number of search results to return",
)
class MotionSearchResult(BaseModel):
"""A single search result with timestamp and change info."""
timestamp: float = Field(description="Timestamp where change was detected")
change_percentage: float = Field(description="Percentage of ROI area that changed")
class MotionSearchMetricsResponse(BaseModel):
"""Metrics collected during motion search execution."""
segments_scanned: int = 0
segments_processed: int = 0
metadata_inactive_segments: int = 0
heatmap_roi_skip_segments: int = 0
fallback_full_range_segments: int = 0
frames_decoded: int = 0
wall_time_seconds: float = 0.0
segments_with_errors: int = 0
class MotionSearchStartResponse(BaseModel):
"""Response when motion search job starts."""
success: bool
message: str
job_id: str
class MotionSearchStatusResponse(BaseModel):
"""Response containing job status and results."""
success: bool
message: str
status: str # "queued", "running", "success", "failed", or "cancelled"
results: Optional[List[MotionSearchResult]] = None
total_frames_processed: Optional[int] = None
error_message: Optional[str] = None
metrics: Optional[MotionSearchMetricsResponse] = None
@router.post(
"/{camera_name}/search/motion",
response_model=MotionSearchStartResponse,
dependencies=[Depends(require_camera_access)],
summary="Start motion search job",
description="""Starts an asynchronous search for significant motion changes within
a user-defined Region of Interest (ROI) over a specified time range. Returns a job_id
that can be used to poll for results.""",
)
async def start_motion_search(
request: Request,
camera_name: str,
body: MotionSearchRequest,
):
"""Start an async motion search job."""
config = request.app.frigate_config
if camera_name not in config.cameras:
return JSONResponse(
content={"success": False, "message": f"Camera {camera_name} not found"},
status_code=404,
)
# Validate polygon has at least 3 points
if len(body.polygon_points) < 3:
return JSONResponse(
content={
"success": False,
"message": "Polygon must have at least 3 points",
},
status_code=400,
)
# Validate time range
if body.start_time >= body.end_time:
return JSONResponse(
content={
"success": False,
"message": "Start time must be before end time",
},
status_code=400,
)
# Start the job using the jobs module
job_id = start_motion_search_job(
config=config,
camera_name=camera_name,
start_time=body.start_time,
end_time=body.end_time,
polygon_points=body.polygon_points,
threshold=body.threshold,
min_area=body.min_area,
frame_skip=body.frame_skip,
parallel=body.parallel,
max_results=body.max_results,
)
return JSONResponse(
content={
"success": True,
"message": "Search job started",
"job_id": job_id,
}
)
@router.get(
"/{camera_name}/search/motion/{job_id}",
response_model=MotionSearchStatusResponse,
dependencies=[Depends(require_camera_access)],
summary="Get motion search job status",
description="Returns the status and results (if complete) of a motion search job.",
)
async def get_motion_search_status_endpoint(
request: Request,
camera_name: str,
job_id: str,
):
"""Get the status of a motion search job."""
config = request.app.frigate_config
if camera_name not in config.cameras:
return JSONResponse(
content={"success": False, "message": f"Camera {camera_name} not found"},
status_code=404,
)
job = get_motion_search_job(job_id)
if not job:
return JSONResponse(
content={"success": False, "message": "Job not found"},
status_code=404,
)
api_status = job.status
# Build response content
response_content: dict[str, Any] = {
"success": api_status != JobStatusTypesEnum.failed,
"status": api_status,
}
if api_status == JobStatusTypesEnum.failed:
response_content["message"] = job.error_message or "Search failed"
response_content["error_message"] = job.error_message
elif api_status == JobStatusTypesEnum.cancelled:
response_content["message"] = "Search cancelled"
response_content["total_frames_processed"] = job.total_frames_processed
elif api_status == JobStatusTypesEnum.success:
response_content["message"] = "Search complete"
if job.results:
response_content["results"] = job.results.get("results", [])
response_content["total_frames_processed"] = job.results.get(
"total_frames_processed", job.total_frames_processed
)
else:
response_content["results"] = []
response_content["total_frames_processed"] = job.total_frames_processed
else:
response_content["message"] = "Job processing"
response_content["total_frames_processed"] = job.total_frames_processed
# Include partial results if available (streaming)
if job.results:
response_content["results"] = job.results.get("results", [])
response_content["total_frames_processed"] = job.results.get(
"total_frames_processed", job.total_frames_processed
)
# Include metrics if available
if job.metrics:
response_content["metrics"] = job.metrics.to_dict()
return JSONResponse(content=response_content)
@router.post(
"/{camera_name}/search/motion/{job_id}/cancel",
dependencies=[Depends(require_camera_access)],
summary="Cancel motion search job",
description="Cancels an active motion search job if it is still processing.",
)
async def cancel_motion_search_endpoint(
request: Request,
camera_name: str,
job_id: str,
):
"""Cancel an active motion search job."""
config = request.app.frigate_config
if camera_name not in config.cameras:
return JSONResponse(
content={"success": False, "message": f"Camera {camera_name} not found"},
status_code=404,
)
job = get_motion_search_job(job_id)
if not job:
return JSONResponse(
content={"success": False, "message": "Job not found"},
status_code=404,
)
# Check if already finished
api_status = job.status
if api_status not in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
return JSONResponse(
content={
"success": True,
"message": "Job already finished",
"status": api_status,
}
)
# Request cancellation
cancelled = cancel_motion_search_job(job_id)
if cancelled:
return JSONResponse(
content={
"success": True,
"message": "Search cancelled",
"status": "cancelled",
}
)
return JSONResponse(
content={
"success": False,
"message": "Failed to cancel job",
},
status_code=500,
)

View File

@ -261,6 +261,7 @@ async def recordings(
Recordings.segment_size,
Recordings.motion,
Recordings.objects,
Recordings.motion_heatmap,
Recordings.duration,
)
.where(

View File

@ -51,6 +51,7 @@ from frigate.embeddings import EmbeddingProcess, EmbeddingsContext
from frigate.events.audio import AudioProcessor
from frigate.events.cleanup import EventCleanup
from frigate.events.maintainer import EventProcessor
from frigate.jobs.motion_search import stop_all_motion_search_jobs
from frigate.log import _stop_logging
from frigate.models import (
Event,
@ -599,6 +600,9 @@ class FrigateApp:
# used by the docker healthcheck
Path("/dev/shm/.frigate-is-stopping").touch()
# Cancel any running motion search jobs before setting stop_event
stop_all_motion_search_jobs()
self.stop_event.set()
# set an end_time on entries without an end_time before exiting

View File

@ -242,6 +242,14 @@ class CameraConfig(FrigateBaseModel):
def create_ffmpeg_cmds(self):
if "_ffmpeg_cmds" in self:
return
self._build_ffmpeg_cmds()
def recreate_ffmpeg_cmds(self):
"""Force regeneration of ffmpeg commands from current config."""
self._build_ffmpeg_cmds()
def _build_ffmpeg_cmds(self):
"""Build ffmpeg commands from the current ffmpeg config."""
ffmpeg_cmds = []
for ffmpeg_input in self.ffmpeg.inputs:
ffmpeg_cmd = self._get_ffmpeg_cmd(ffmpeg_input)

View File

@ -24,10 +24,17 @@ class MotionConfig(FrigateBaseModel):
lightning_threshold: float = Field(
default=0.8,
title="Lightning threshold",
description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0).",
description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events.",
ge=0.3,
le=1.0,
)
skip_motion_threshold: Optional[float] = Field(
default=None,
title="Skip motion threshold",
description="If set to a value between 0.0 and 1.0, and more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera autotracking an object. The tradeoff is between dropping a few megabytes of recordings versus reviewing a couple short clips. Leave unset (None) to disable this feature.",
ge=0.0,
le=1.0,
)
improve_contrast: bool = Field(
default=True,
title="Improve contrast",

View File

@ -17,6 +17,7 @@ class CameraConfigUpdateEnum(str, Enum):
birdseye = "birdseye"
detect = "detect"
enabled = "enabled"
ffmpeg = "ffmpeg"
motion = "motion" # includes motion and motion masks
notifications = "notifications"
objects = "objects"
@ -91,6 +92,9 @@ class CameraConfigUpdateSubscriber:
if update_type == CameraConfigUpdateEnum.audio:
config.audio = updated_config
elif update_type == CameraConfigUpdateEnum.ffmpeg:
config.ffmpeg = updated_config
config.recreate_ffmpeg_cmds()
elif update_type == CameraConfigUpdateEnum.audio_transcription:
config.audio_transcription = updated_config
elif update_type == CameraConfigUpdateEnum.birdseye:

View File

@ -401,35 +401,10 @@ class LicensePlateProcessingMixin:
all_confidences.append(flat_confidences)
all_areas.append(combined_area)
# Step 3: Sort the combined plates
if all_license_plates:
sorted_data = sorted(
zip(all_license_plates, all_confidences, all_areas),
key=lambda x: (x[2], len(x[0]), sum(x[1]) / len(x[1]) if x[1] else 0),
reverse=True,
)
@ -1557,6 +1532,27 @@ class LicensePlateProcessingMixin:
f"{camera}: Clustering changed top plate '{top_plate}' (conf: {avg_confidence:.3f}) to rep '{rep_plate}' (conf: {rep_conf:.3f})"
)
# Apply length and format filters to the clustered representative
# rather than individual OCR readings, so noisy variants still
# contribute to clustering even when they don't pass on their own.
if len(rep_plate) < self.lpr_config.min_plate_length:
logger.debug(
f"{camera}: Filtered out clustered plate '{rep_plate}' due to length ({len(rep_plate)} < {self.lpr_config.min_plate_length})"
)
return
if self.lpr_config.format:
try:
if not re.fullmatch(self.lpr_config.format, rep_plate):
logger.debug(
f"{camera}: Filtered out clustered plate '{rep_plate}' due to format mismatch"
)
return
except re.error:
logger.error(
f"{camera}: Invalid regex in LPR format configuration: {self.lpr_config.format}"
)
# Update stored rep
self.detected_license_plates[id].update(
{

View File

@ -12,6 +12,7 @@ from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.classification import LicensePlateRecognitionConfig
from frigate.data_processing.common.license_plate.mixin import (
WRITE_DEBUG_IMAGES,
LicensePlateProcessingMixin,
@ -47,6 +48,11 @@ class LicensePlatePostProcessor(LicensePlateProcessingMixin, PostProcessorApi):
self.sub_label_publisher = sub_label_publisher
super().__init__(config, metrics, model_runner)
def update_config(self, lpr_config: LicensePlateRecognitionConfig) -> None:
"""Update LPR config at runtime."""
self.lpr_config = lpr_config
logger.debug("LPR config updated dynamically")
def process_data(
self, data: dict[str, Any], data_type: PostProcessDataEnum
) -> None:

View File

@ -19,6 +19,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.classification import FaceRecognitionConfig
from frigate.const import FACE_DIR, MODEL_CACHE_DIR
from frigate.data_processing.common.face.model import (
ArcFaceRecognizer,
@ -95,6 +96,11 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
self.recognizer.build()
def update_config(self, face_config: FaceRecognitionConfig) -> None:
"""Update face recognition config at runtime."""
self.face_config = face_config
logger.debug("Face recognition config updated dynamically")
def __download_models(self, path: str) -> None:
try:
file_name = os.path.basename(path)

View File

@ -8,6 +8,7 @@ import numpy as np
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.classification import LicensePlateRecognitionConfig
from frigate.data_processing.common.license_plate.mixin import (
LicensePlateProcessingMixin,
)
@ -40,6 +41,11 @@ class LicensePlateRealTimeProcessor(LicensePlateProcessingMixin, RealTimeProcess
self.camera_current_cars: dict[str, list[str]] = {}
super().__init__(config, metrics)
def update_config(self, lpr_config: LicensePlateRecognitionConfig) -> None:
"""Update LPR config at runtime."""
self.lpr_config = lpr_config
logger.debug("LPR config updated dynamically")
def process_frame(
self,
obj_data: dict[str, Any],

View File

@ -99,6 +99,13 @@ class EmbeddingMaintainer(threading.Thread):
self.classification_config_subscriber = ConfigSubscriber(
"config/classification/custom/"
)
self.bird_classification_config_subscriber = ConfigSubscriber(
"config/classification", exact=True
)
self.face_recognition_config_subscriber = ConfigSubscriber(
"config/face_recognition", exact=True
)
self.lpr_config_subscriber = ConfigSubscriber("config/lpr", exact=True)
# Configure Frigate DB
db = SqliteVecQueueDatabase(
@ -273,6 +280,9 @@ class EmbeddingMaintainer(threading.Thread):
while not self.stop_event.is_set():
self.config_updater.check_for_updates()
self._check_classification_config_updates()
self._check_bird_classification_config_updates()
self._check_face_recognition_config_updates()
self._check_lpr_config_updates()
self._process_requests()
self._process_updates()
self._process_recordings_updates()
@ -284,6 +294,9 @@ class EmbeddingMaintainer(threading.Thread):
self.config_updater.stop()
self.classification_config_subscriber.stop()
self.bird_classification_config_subscriber.stop()
self.face_recognition_config_subscriber.stop()
self.lpr_config_subscriber.stop()
self.event_subscriber.stop()
self.event_end_subscriber.stop()
self.recordings_subscriber.stop()
@ -356,6 +369,62 @@ class EmbeddingMaintainer(threading.Thread):
f"Added classification processor for model: {model_name} (type: {type(processor).__name__})"
)
def _check_bird_classification_config_updates(self) -> None:
"""Check for bird classification config updates."""
topic, classification_config = (
self.bird_classification_config_subscriber.check_for_update()
)
if topic is None:
return
self.config.classification = classification_config
logger.debug("Applied dynamic bird classification config update")
def _check_face_recognition_config_updates(self) -> None:
"""Check for face recognition config updates."""
topic, face_config = self.face_recognition_config_subscriber.check_for_update()
if topic is None:
return
previous_min_area = self.config.face_recognition.min_area
self.config.face_recognition = face_config
for camera_config in self.config.cameras.values():
if camera_config.face_recognition.min_area == previous_min_area:
camera_config.face_recognition.min_area = face_config.min_area
for processor in self.realtime_processors:
if isinstance(processor, FaceRealTimeProcessor):
processor.update_config(face_config)
logger.debug("Applied dynamic face recognition config update")
def _check_lpr_config_updates(self) -> None:
"""Check for LPR config updates."""
topic, lpr_config = self.lpr_config_subscriber.check_for_update()
if topic is None:
return
previous_min_area = self.config.lpr.min_area
self.config.lpr = lpr_config
for camera_config in self.config.cameras.values():
if camera_config.lpr.min_area == previous_min_area:
camera_config.lpr.min_area = lpr_config.min_area
for processor in self.realtime_processors:
if isinstance(processor, LicensePlateRealTimeProcessor):
processor.update_config(lpr_config)
for processor in self.post_processors:
if isinstance(processor, LicensePlatePostProcessor):
processor.update_config(lpr_config)
logger.debug("Applied dynamic LPR config update")
def _process_requests(self) -> None:
"""Process embeddings requests"""

View File

@ -1,5 +1,6 @@
"""Ollama Provider for Frigate AI."""
import json
import logging
from typing import Any, Optional
@ -108,7 +109,22 @@ class OllamaClient(GenAIClient):
if msg.get("name"):
msg_dict["name"] = msg["name"]
if msg.get("tool_calls"):
msg_dict["tool_calls"] = msg["tool_calls"]
# Ollama requires tool call arguments as dicts, but the
# conversation format (OpenAI-style) stores them as JSON
# strings. Convert back to dicts for Ollama.
ollama_tool_calls = []
for tc in msg["tool_calls"]:
func = tc.get("function") or {}
args = func.get("arguments") or {}
if isinstance(args, str):
try:
args = json.loads(args)
except (json.JSONDecodeError, TypeError):
args = {}
ollama_tool_calls.append(
{"function": {"name": func.get("name", ""), "arguments": args}}
)
msg_dict["tool_calls"] = ollama_tool_calls
request_messages.append(msg_dict)
request_params: dict[str, Any] = {
@ -120,25 +136,27 @@ class OllamaClient(GenAIClient):
request_params["stream"] = True
if tools:
request_params["tools"] = tools
if tool_choice:
request_params["tool_choice"] = (
"none"
if tool_choice == "none"
else "required"
if tool_choice == "required"
else "auto"
)
return request_params
def _message_from_response(self, response: dict[str, Any]) -> dict[str, Any]:
"""Parse Ollama chat response into {content, tool_calls, finish_reason}."""
if not response or "message" not in response:
logger.debug("Ollama response empty or missing 'message' key")
return {
"content": None,
"tool_calls": None,
"finish_reason": "error",
}
message = response["message"]
logger.debug(
"Ollama response message keys: %s, content_len=%s, thinking_len=%s, "
"tool_calls=%s, done=%s",
list(message.keys()) if hasattr(message, "keys") else "N/A",
len(message.get("content", "") or "") if message.get("content") else 0,
len(message.get("thinking", "") or "") if message.get("thinking") else 0,
bool(message.get("tool_calls")),
response.get("done"),
)
content = message.get("content", "").strip() if message.get("content") else None
tool_calls = parse_tool_calls_from_message(message)
finish_reason = "error"
@ -198,7 +216,13 @@ class OllamaClient(GenAIClient):
tools: Optional[list[dict[str, Any]]] = None,
tool_choice: Optional[str] = "auto",
):
"""Stream chat with tools; yields content deltas then final message."""
"""Stream chat with tools; yields content deltas then final message.
When tools are provided, Ollama streaming does not include tool_calls
in the response chunks. To work around this, we use a non-streaming
call when tools are present to ensure tool calls are captured, then
emit the content as a single delta followed by the final message.
"""
if self.provider is None:
logger.warning(
"Ollama provider has not been initialized. Check your Ollama configuration."
@ -213,6 +237,27 @@ class OllamaClient(GenAIClient):
)
return
try:
# Ollama does not return tool_calls in streaming mode, so fall
# back to a non-streaming call when tools are provided.
if tools:
logger.debug(
"Ollama: tools provided, using non-streaming call for tool support"
)
request_params = self._build_request_params(
messages, tools, tool_choice, stream=False
)
async_client = OllamaAsyncClient(
host=self.genai_config.base_url,
timeout=self.timeout,
)
response = await async_client.chat(**request_params)
result = self._message_from_response(response)
content = result.get("content")
if content:
yield ("content_delta", content)
yield ("message", result)
return
request_params = self._build_request_params(
messages, tools, tool_choice, stream=True
)
@ -233,11 +278,10 @@ class OllamaClient(GenAIClient):
yield ("content_delta", delta)
if chunk.get("done"):
full_content = "".join(content_parts).strip() or None
final_message = {
"content": full_content,
"tool_calls": tool_calls,
"finish_reason": "tool_calls" if tool_calls else "stop",
"tool_calls": None,
"finish_reason": "stop",
}
break

View File

@ -23,21 +23,26 @@ def parse_tool_calls_from_message(
if not raw or not isinstance(raw, list):
return None
result = []
for idx, tool_call in enumerate(raw):
function_data = tool_call.get("function") or {}
raw_arguments = function_data.get("arguments") or {}
if isinstance(raw_arguments, dict):
arguments = raw_arguments
elif isinstance(raw_arguments, str):
try:
arguments = json.loads(raw_arguments)
except (json.JSONDecodeError, KeyError, TypeError) as e:
logger.warning(
"Failed to parse tool call arguments: %s, tool: %s",
e,
function_data.get("name", "unknown"),
)
arguments = {}
else:
arguments = {}
result.append(
{
"id": tool_call.get("id", ""),
"id": tool_call.get("id", "") or f"call_{idx}",
"name": function_data.get("name", ""),
"arguments": arguments,
}
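# Illustrative sketch (not part of this change), assuming the raw tool calls
# come from message["tool_calls"]: dict arguments pass through unchanged, JSON
# strings are parsed, and a missing id falls back to a positional placeholder.
#
# message = {"tool_calls": [
#     {"function": {"name": "get_time", "arguments": {"tz": "UTC"}}},
#     {"function": {"name": "get_time", "arguments": '{"tz": "UTC"}'}},
# ]}
# parse_tool_calls_from_message(message)
# # -> [{"id": "call_0", "name": "get_time", "arguments": {"tz": "UTC"}},
# #     {"id": "call_1", "name": "get_time", "arguments": {"tz": "UTC"}}]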


@ -0,0 +1,864 @@
"""Motion search job management with background execution and parallel verification."""
import logging
import os
import threading
from concurrent.futures import Future, ThreadPoolExecutor, as_completed
from dataclasses import asdict, dataclass, field
from datetime import datetime
from typing import Any, Optional
import cv2
import numpy as np
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
get_job_by_id,
set_current_job,
)
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum
logger = logging.getLogger(__name__)
# Constants
HEATMAP_GRID_SIZE = 16
@dataclass
class MotionSearchMetrics:
"""Metrics collected during motion search execution."""
segments_scanned: int = 0
segments_processed: int = 0
metadata_inactive_segments: int = 0
heatmap_roi_skip_segments: int = 0
fallback_full_range_segments: int = 0
frames_decoded: int = 0
wall_time_seconds: float = 0.0
segments_with_errors: int = 0
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary."""
return asdict(self)
@dataclass
class MotionSearchResult:
"""A single search result with timestamp and change info."""
timestamp: float
change_percentage: float
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary."""
return asdict(self)
@dataclass
class MotionSearchJob(Job):
"""Job state for motion search operations."""
job_type: str = "motion_search"
camera: str = ""
start_time_range: float = 0.0
end_time_range: float = 0.0
polygon_points: list[list[float]] = field(default_factory=list)
threshold: int = 30
min_area: float = 5.0
frame_skip: int = 5
parallel: bool = False
max_results: int = 25
# Track progress
total_frames_processed: int = 0
# Metrics for observability
metrics: Optional[MotionSearchMetrics] = None
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for WebSocket transmission."""
d = asdict(self)
if self.metrics:
d["metrics"] = self.metrics.to_dict()
return d
def create_polygon_mask(
polygon_points: list[list[float]], frame_width: int, frame_height: int
) -> np.ndarray:
"""Create a binary mask from normalized polygon coordinates."""
motion_points = np.array(
[[int(p[0] * frame_width), int(p[1] * frame_height)] for p in polygon_points],
dtype=np.int32,
)
mask = np.zeros((frame_height, frame_width), dtype=np.uint8)
cv2.fillPoly(mask, [motion_points], 255)
return mask
def compute_roi_bbox_normalized(
polygon_points: list[list[float]],
) -> tuple[float, float, float, float]:
"""Compute the bounding box of the ROI in normalized coordinates (0-1).
Returns (x_min, y_min, x_max, y_max) in normalized coordinates.
"""
if not polygon_points:
return (0.0, 0.0, 1.0, 1.0)
x_coords = [p[0] for p in polygon_points]
y_coords = [p[1] for p in polygon_points]
return (min(x_coords), min(y_coords), max(x_coords), max(y_coords))
def heatmap_overlaps_roi(
heatmap: dict[str, int], roi_bbox: tuple[float, float, float, float]
) -> bool:
"""Check if a sparse motion heatmap has any overlap with the ROI bounding box.
Args:
heatmap: Sparse dict mapping cell index (str) to intensity (1-255).
roi_bbox: (x_min, y_min, x_max, y_max) in normalized coordinates (0-1).
Returns:
True if there is overlap (any active cell in the ROI region).
"""
if not isinstance(heatmap, dict):
# Invalid heatmap, assume overlap to be safe
return True
x_min, y_min, x_max, y_max = roi_bbox
# Convert normalized coordinates to grid cells (0-15)
grid_x_min = max(0, int(x_min * HEATMAP_GRID_SIZE))
grid_y_min = max(0, int(y_min * HEATMAP_GRID_SIZE))
grid_x_max = min(HEATMAP_GRID_SIZE - 1, int(x_max * HEATMAP_GRID_SIZE))
grid_y_max = min(HEATMAP_GRID_SIZE - 1, int(y_max * HEATMAP_GRID_SIZE))
# Check each cell in the ROI bbox
for y in range(grid_y_min, grid_y_max + 1):
for x in range(grid_x_min, grid_x_max + 1):
idx = str(y * HEATMAP_GRID_SIZE + x)
if idx in heatmap:
return True
return False
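# Illustrative example (not part of this change): cell index 45 decodes to
# row 45 // 16 = 2, column 45 % 16 = 13. An ROI bbox of (0.0, 0.0, 0.25, 0.25)
# covers grid columns 0-4 and rows 0-4, so heatmap {"45": 3} does not overlap
# (returns False); an ROI of (0.75, 0.0, 1.0, 0.25) covers columns 12-15 and
# rows 0-4, which includes (row 2, column 13), so it overlaps (returns True).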
def segment_passes_activity_gate(recording: Recordings) -> bool:
"""Check if a segment passes the activity gate.
Returns True if any of motion, objects, or regions is non-zero/non-null.
Returns True if all are null (old segments without data).
"""
motion = recording.motion
objects = recording.objects
regions = recording.regions
# Old segments without metadata - pass through (conservative)
if motion is None and objects is None and regions is None:
return True
# Pass if any activity indicator is positive
return bool(motion) or bool(objects) or bool(regions)
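# Illustrative truth table (not part of this change):
#   motion=None, objects=None, regions=None -> True (legacy segment, no data)
#   motion=0,    objects=0,    regions=0    -> False (known-inactive segment)
#   motion=3,    objects=0,    regions=0    -> True (any positive indicator)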
def segment_passes_heatmap_gate(
recording: Recordings, roi_bbox: tuple[float, float, float, float]
) -> bool:
"""Check if a segment passes the heatmap overlap gate.
Returns True if:
- No heatmap is stored (old segments).
- The heatmap overlaps with the ROI bbox.
"""
heatmap = getattr(recording, "motion_heatmap", None)
if heatmap is None:
# No heatmap stored, fall back to activity gate
return True
return heatmap_overlaps_roi(heatmap, roi_bbox)
class MotionSearchRunner(threading.Thread):
"""Thread-based runner for motion search jobs with parallel verification."""
def __init__(
self,
job: MotionSearchJob,
config: FrigateConfig,
cancel_event: threading.Event,
) -> None:
super().__init__(daemon=True, name=f"motion_search_{job.id}")
self.job = job
self.config = config
self.cancel_event = cancel_event
self.internal_stop_event = threading.Event()
self.requestor = InterProcessRequestor()
self.metrics = MotionSearchMetrics()
self.job.metrics = self.metrics
# Worker cap: min(4, cpu_count)
cpu_count = os.cpu_count() or 1
self.max_workers = min(4, cpu_count)
def run(self) -> None:
"""Execute the motion search job."""
try:
self.job.status = JobStatusTypesEnum.running
self.job.start_time = datetime.now().timestamp()
self._broadcast_status()
results = self._execute_search()
if self.cancel_event.is_set():
self.job.status = JobStatusTypesEnum.cancelled
else:
self.job.status = JobStatusTypesEnum.success
self.job.results = {
"results": [r.to_dict() for r in results],
"total_frames_processed": self.job.total_frames_processed,
}
self.job.end_time = datetime.now().timestamp()
self.metrics.wall_time_seconds = self.job.end_time - self.job.start_time
self.job.metrics = self.metrics
logger.debug(
"Motion search job %s completed: status=%s, results=%d, frames=%d",
self.job.id,
self.job.status,
len(results),
self.job.total_frames_processed,
)
self._broadcast_status()
except Exception as e:
logger.exception("Motion search job %s failed: %s", self.job.id, e)
self.job.status = JobStatusTypesEnum.failed
self.job.error_message = str(e)
self.job.end_time = datetime.now().timestamp()
self.metrics.wall_time_seconds = self.job.end_time - (
self.job.start_time or 0
)
self.job.metrics = self.metrics
self._broadcast_status()
finally:
if self.requestor:
self.requestor.stop()
def _broadcast_status(self) -> None:
"""Broadcast job status update via IPC to WebSocket subscribers."""
if self.job.status == JobStatusTypesEnum.running and self.job.start_time:
self.metrics.wall_time_seconds = (
datetime.now().timestamp() - self.job.start_time
)
try:
self.requestor.send_data(UPDATE_JOB_STATE, self.job.to_dict())
except Exception as e:
logger.warning("Failed to broadcast motion search status: %s", e)
def _should_stop(self) -> bool:
"""Check if processing should stop due to cancellation or internal limits."""
return self.cancel_event.is_set() or self.internal_stop_event.is_set()
def _execute_search(self) -> list[MotionSearchResult]:
"""Main search execution logic."""
camera_name = self.job.camera
camera_config = self.config.cameras.get(camera_name)
if not camera_config:
raise ValueError(f"Camera {camera_name} not found")
frame_width = camera_config.detect.width
frame_height = camera_config.detect.height
# Create polygon mask
polygon_mask = create_polygon_mask(
self.job.polygon_points, frame_width, frame_height
)
if np.count_nonzero(polygon_mask) == 0:
logger.warning("Polygon mask is empty for job %s", self.job.id)
return []
# Compute ROI bbox in normalized coordinates for heatmap gate
roi_bbox = compute_roi_bbox_normalized(self.job.polygon_points)
# Query recordings
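# A segment is relevant if it starts within the search range, ends within
# it, or fully spans it (starts before the range and ends after it).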
recordings = list(
Recordings.select()
.where(
(
Recordings.start_time.between(
self.job.start_time_range, self.job.end_time_range
)
)
| (
Recordings.end_time.between(
self.job.start_time_range, self.job.end_time_range
)
)
| (
(self.job.start_time_range > Recordings.start_time)
& (self.job.end_time_range < Recordings.end_time)
)
)
.where(Recordings.camera == camera_name)
.order_by(Recordings.start_time.asc())
)
if not recordings:
logger.debug("No recordings found for motion search job %s", self.job.id)
return []
logger.debug(
"Motion search job %s: queried %d recording segments for camera %s "
"(range %.1f - %.1f)",
self.job.id,
len(recordings),
camera_name,
self.job.start_time_range,
self.job.end_time_range,
)
self.metrics.segments_scanned = len(recordings)
# Apply activity and heatmap gates
filtered_recordings = []
for recording in recordings:
if not segment_passes_activity_gate(recording):
self.metrics.metadata_inactive_segments += 1
self.metrics.segments_processed += 1
logger.debug(
"Motion search job %s: segment %s skipped by activity gate "
"(motion=%s, objects=%s, regions=%s)",
self.job.id,
recording.id,
recording.motion,
recording.objects,
recording.regions,
)
continue
if not segment_passes_heatmap_gate(recording, roi_bbox):
self.metrics.heatmap_roi_skip_segments += 1
self.metrics.segments_processed += 1
logger.debug(
"Motion search job %s: segment %s skipped by heatmap gate "
"(heatmap present=%s, roi_bbox=%s)",
self.job.id,
recording.id,
recording.motion_heatmap is not None,
roi_bbox,
)
continue
filtered_recordings.append(recording)
self._broadcast_status()
# Fallback: if all segments were filtered out, scan all segments
# This allows motion search to find things the detector missed
if not filtered_recordings and recordings:
logger.info(
"All %d segments filtered by gates, falling back to full scan",
len(recordings),
)
self.metrics.fallback_full_range_segments = len(recordings)
filtered_recordings = recordings
logger.debug(
"Motion search job %s: %d/%d segments passed gates "
"(activity_skipped=%d, heatmap_skipped=%d)",
self.job.id,
len(filtered_recordings),
len(recordings),
self.metrics.metadata_inactive_segments,
self.metrics.heatmap_roi_skip_segments,
)
if self.job.parallel:
return self._search_motion_parallel(filtered_recordings, polygon_mask)
return self._search_motion_sequential(filtered_recordings, polygon_mask)
def _search_motion_parallel(
self,
recordings: list[Recordings],
polygon_mask: np.ndarray,
) -> list[MotionSearchResult]:
"""Search for motion in parallel across segments, streaming results."""
all_results: list[MotionSearchResult] = []
total_frames = 0
next_recording_idx_to_merge = 0
logger.debug(
"Motion search job %s: starting motion search with %d workers "
"across %d segments",
self.job.id,
self.max_workers,
len(recordings),
)
# Initialize partial results on the job so they stream to the frontend
self.job.results = {"results": [], "total_frames_processed": 0}
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
futures: dict[Future, int] = {}
completed_segments: dict[int, tuple[list[MotionSearchResult], int]] = {}
for idx, recording in enumerate(recordings):
if self._should_stop():
break
future = executor.submit(
self._process_recording_for_motion,
recording.path,
recording.start_time,
recording.end_time,
self.job.start_time_range,
self.job.end_time_range,
polygon_mask,
self.job.threshold,
self.job.min_area,
self.job.frame_skip,
)
futures[future] = idx
for future in as_completed(futures):
if self._should_stop():
# Cancel remaining futures
for f in futures:
f.cancel()
break
recording_idx = futures[future]
recording = recordings[recording_idx]
try:
results, frames = future.result()
self.metrics.segments_processed += 1
completed_segments[recording_idx] = (results, frames)
while next_recording_idx_to_merge in completed_segments:
segment_results, segment_frames = completed_segments.pop(
next_recording_idx_to_merge
)
all_results.extend(segment_results)
total_frames += segment_frames
self.job.total_frames_processed = total_frames
self.metrics.frames_decoded = total_frames
if segment_results:
deduped = self._deduplicate_results(all_results)
self.job.results = {
"results": [
r.to_dict() for r in deduped[: self.job.max_results]
],
"total_frames_processed": total_frames,
}
self._broadcast_status()
if segment_results and len(deduped) >= self.job.max_results:
self.internal_stop_event.set()
for pending_future in futures:
pending_future.cancel()
break
next_recording_idx_to_merge += 1
if self.internal_stop_event.is_set():
break
except Exception as e:
self.metrics.segments_processed += 1
self.metrics.segments_with_errors += 1
self._broadcast_status()
logger.warning(
"Error processing segment %s: %s",
recording.path,
e,
)
self.job.total_frames_processed = total_frames
self.metrics.frames_decoded = total_frames
logger.debug(
"Motion search job %s: motion search complete, "
"found %d raw results, decoded %d frames, %d segment errors",
self.job.id,
len(all_results),
total_frames,
self.metrics.segments_with_errors,
)
# Sort and deduplicate results
all_results.sort(key=lambda x: x.timestamp)
return self._deduplicate_results(all_results)[: self.job.max_results]
def _search_motion_sequential(
self,
recordings: list[Recordings],
polygon_mask: np.ndarray,
) -> list[MotionSearchResult]:
"""Search for motion sequentially across segments, streaming results."""
all_results: list[MotionSearchResult] = []
total_frames = 0
logger.debug(
"Motion search job %s: starting sequential motion search across %d segments",
self.job.id,
len(recordings),
)
self.job.results = {"results": [], "total_frames_processed": 0}
for recording in recordings:
if self.cancel_event.is_set():
break
try:
results, frames = self._process_recording_for_motion(
recording.path,
recording.start_time,
recording.end_time,
self.job.start_time_range,
self.job.end_time_range,
polygon_mask,
self.job.threshold,
self.job.min_area,
self.job.frame_skip,
)
all_results.extend(results)
total_frames += frames
self.job.total_frames_processed = total_frames
self.metrics.frames_decoded = total_frames
self.metrics.segments_processed += 1
if results:
all_results.sort(key=lambda x: x.timestamp)
deduped = self._deduplicate_results(all_results)[
: self.job.max_results
]
self.job.results = {
"results": [r.to_dict() for r in deduped],
"total_frames_processed": total_frames,
}
self._broadcast_status()
if results and len(deduped) >= self.job.max_results:
break
except Exception as e:
self.metrics.segments_processed += 1
self.metrics.segments_with_errors += 1
self._broadcast_status()
logger.warning("Error processing segment %s: %s", recording.path, e)
self.job.total_frames_processed = total_frames
self.metrics.frames_decoded = total_frames
logger.debug(
"Motion search job %s: sequential motion search complete, "
"found %d raw results, decoded %d frames, %d segment errors",
self.job.id,
len(all_results),
total_frames,
self.metrics.segments_with_errors,
)
all_results.sort(key=lambda x: x.timestamp)
return self._deduplicate_results(all_results)[: self.job.max_results]
def _deduplicate_results(
self, results: list[MotionSearchResult], min_gap: float = 1.0
) -> list[MotionSearchResult]:
"""Deduplicate results that are too close together."""
if not results:
return results
deduplicated: list[MotionSearchResult] = []
last_timestamp = 0.0
for result in results:
if result.timestamp - last_timestamp >= min_gap:
deduplicated.append(result)
last_timestamp = result.timestamp
return deduplicated
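# Illustrative example (not part of this change): with min_gap=1.0,
# timestamps [10.0, 10.5, 11.2] reduce to [10.0, 11.2] -- 10.5 falls within
# 1s of the last kept result, while 11.2 does not.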
def _process_recording_for_motion(
self,
recording_path: str,
recording_start: float,
recording_end: float,
search_start: float,
search_end: float,
polygon_mask: np.ndarray,
threshold: int,
min_area: float,
frame_skip: int,
) -> tuple[list[MotionSearchResult], int]:
"""Process a single recording file for motion detection.
This method is designed to be called from a thread pool.
Args:
min_area: Minimum change area as a percentage of the ROI (0-100).
"""
results: list[MotionSearchResult] = []
frames_processed = 0
if not os.path.exists(recording_path):
logger.warning("Recording file not found: %s", recording_path)
return results, frames_processed
cap = cv2.VideoCapture(recording_path)
if not cap.isOpened():
logger.error("Could not open recording: %s", recording_path)
return results, frames_processed
try:
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
recording_duration = recording_end - recording_start
# Calculate frame range
start_offset = max(0, search_start - recording_start)
end_offset = min(recording_duration, search_end - recording_start)
start_frame = int(start_offset * fps)
end_frame = int(end_offset * fps)
start_frame = max(0, min(start_frame, total_frames - 1))
end_frame = max(0, min(end_frame, total_frames))
if start_frame >= end_frame:
return results, frames_processed
cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
# Get ROI bounding box
roi_bbox = cv2.boundingRect(polygon_mask)
roi_x, roi_y, roi_w, roi_h = roi_bbox
prev_frame_gray = None
frame_step = max(frame_skip, 1)
frame_idx = start_frame
while frame_idx < end_frame:
if self._should_stop():
break
ret, frame = cap.read()
if not ret:
frame_idx += 1
continue
if (frame_idx - start_frame) % frame_step != 0:
frame_idx += 1
continue
frames_processed += 1
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Handle frame dimension changes
if gray.shape != polygon_mask.shape:
resized_mask = cv2.resize(
polygon_mask, (gray.shape[1], gray.shape[0]), interpolation=cv2.INTER_NEAREST
)
current_bbox = cv2.boundingRect(resized_mask)
else:
resized_mask = polygon_mask
current_bbox = roi_bbox
roi_x, roi_y, roi_w, roi_h = current_bbox
cropped_gray = gray[roi_y : roi_y + roi_h, roi_x : roi_x + roi_w]
cropped_mask = resized_mask[
roi_y : roi_y + roi_h, roi_x : roi_x + roi_w
]
cropped_mask_area = np.count_nonzero(cropped_mask)
if cropped_mask_area == 0:
frame_idx += 1
continue
# Convert percentage to pixel count for this ROI
min_area_pixels = int((min_area / 100.0) * cropped_mask_area)
masked_gray = cv2.bitwise_and(
cropped_gray, cropped_gray, mask=cropped_mask
)
if prev_frame_gray is not None:
diff = cv2.absdiff(prev_frame_gray, masked_gray)
diff_blurred = cv2.GaussianBlur(diff, (3, 3), 0)
_, thresh = cv2.threshold(
diff_blurred, threshold, 255, cv2.THRESH_BINARY
)
thresh_dilated = cv2.dilate(thresh, None, iterations=1)
thresh_masked = cv2.bitwise_and(
thresh_dilated, thresh_dilated, mask=cropped_mask
)
change_pixels = cv2.countNonZero(thresh_masked)
if change_pixels > min_area_pixels:
contours, _ = cv2.findContours(
thresh_masked, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
total_change_area = sum(
cv2.contourArea(c)
for c in contours
if cv2.contourArea(c) >= min_area_pixels
)
if total_change_area > 0:
frame_time_offset = (frame_idx - start_frame) / fps
timestamp = (
recording_start + start_offset + frame_time_offset
)
change_percentage = (
total_change_area / cropped_mask_area
) * 100
results.append(
MotionSearchResult(
timestamp=timestamp,
change_percentage=round(change_percentage, 2),
)
)
prev_frame_gray = masked_gray
frame_idx += 1
finally:
cap.release()
logger.debug(
"Motion search segment complete: %s, %d frames processed, %d results found",
recording_path,
frames_processed,
len(results),
)
return results, frames_processed
# Module-level state for managing per-camera jobs
_motion_search_jobs: dict[str, tuple[MotionSearchJob, threading.Event]] = {}
_jobs_lock = threading.Lock()
def stop_all_motion_search_jobs() -> None:
"""Cancel all running motion search jobs for clean shutdown."""
with _jobs_lock:
for job_id, (job, cancel_event) in _motion_search_jobs.items():
if job.status in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
cancel_event.set()
logger.debug("Signalling motion search job %s to stop", job_id)
def start_motion_search_job(
config: FrigateConfig,
camera_name: str,
start_time: float,
end_time: float,
polygon_points: list[list[float]],
threshold: int = 30,
min_area: float = 5.0,
frame_skip: int = 5,
parallel: bool = False,
max_results: int = 25,
) -> str:
"""Start a new motion search job.
Returns the job ID.
"""
job = MotionSearchJob(
camera=camera_name,
start_time_range=start_time,
end_time_range=end_time,
polygon_points=polygon_points,
threshold=threshold,
min_area=min_area,
frame_skip=frame_skip,
parallel=parallel,
max_results=max_results,
)
cancel_event = threading.Event()
with _jobs_lock:
_motion_search_jobs[job.id] = (job, cancel_event)
set_current_job(job)
runner = MotionSearchRunner(job, config, cancel_event)
runner.start()
logger.debug(
"Started motion search job %s for camera %s: "
"time_range=%.1f-%.1f, threshold=%d, min_area=%.1f%%, "
"frame_skip=%d, parallel=%s, max_results=%d, polygon_points=%d vertices",
job.id,
camera_name,
start_time,
end_time,
threshold,
min_area,
frame_skip,
parallel,
max_results,
len(polygon_points),
)
return job.id
def get_motion_search_job(job_id: str) -> Optional[MotionSearchJob]:
"""Get a motion search job by ID."""
with _jobs_lock:
job_entry = _motion_search_jobs.get(job_id)
if job_entry:
return job_entry[0]
# Check completed jobs via manager
return get_job_by_id("motion_search", job_id)
def cancel_motion_search_job(job_id: str) -> bool:
"""Cancel a motion search job.
Returns True if cancellation was initiated, False if job not found.
"""
with _jobs_lock:
job_entry = _motion_search_jobs.get(job_id)
if not job_entry:
return False
job, cancel_event = job_entry
if job.status not in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
# Already finished
return True
cancel_event.set()
job.status = JobStatusTypesEnum.cancelled
job_payload = job.to_dict()
logger.info("Cancelled motion search job %s", job_id)
requestor: Optional[InterProcessRequestor] = None
try:
requestor = InterProcessRequestor()
requestor.send_data(UPDATE_JOB_STATE, job_payload)
except Exception as e:
logger.warning(
"Failed to broadcast cancelled motion search job %s: %s", job_id, e
)
finally:
if requestor:
requestor.stop()
return True
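# Illustrative sketch (not part of this change): typical lifecycle of this
# module-level API, assuming a loaded FrigateConfig and epoch timestamps.
#
# job_id = start_motion_search_job(
#     config,
#     "front_door",
#     start_time=1700000000.0,
#     end_time=1700000600.0,
#     polygon_points=[[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]],
# )
# job = get_motion_search_job(job_id)  # poll status and partial results
# cancel_motion_search_job(job_id)  # or cancel before completion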


@ -78,6 +78,7 @@ class Recordings(Model):
dBFS = IntegerField(null=True)
segment_size = FloatField(default=0) # this should be stored as MB
regions = IntegerField(null=True)
motion_heatmap = JSONField(null=True) # 16x16 grid, 256 values (0-255)
class ExportCase(Model):


@ -176,11 +176,32 @@ class ImprovedMotionDetector(MotionDetector):
motion_boxes = []
pct_motion = 0
# skip motion entirely if the scene change percentage exceeds configured
# threshold. this is useful to ignore lightning storms, IR mode switches,
# etc. rather than registering them as brief motion and then recalibrating.
# note: skipping means the frame is dropped and **no recording will be
# created**, which could hide a legitimate object if the camera is actively
# autotracking. the alternative is to allow motion and accept a small
# recording that can be reviewed in the timeline. disabled by default (None).
if (
self.config.skip_motion_threshold is not None
and pct_motion > self.config.skip_motion_threshold
):
# force a recalibration so we transition to the new background
self.calibrating = True
return []
# once the motion is less than 5% and the number of contours is < 4, assume it's calibrated
if pct_motion < 0.05 and len(motion_boxes) <= 4:
self.calibrating = False
# if calibrating or the motion contours are > 80% of the image area (lightning, ir, ptz) recalibrate
# if calibrating or the motion contours are > 80% of the image area
# (lightning, ir, ptz) recalibrate. the lightning threshold does **not**
# stop motion detection entirely; it simply halts additional processing for
# the current frame once the percentage crosses the threshold. this helps
# reduce false positive object detections and CPU usage during high-motion
# events. recordings continue to be generated because users expect data
# while a PTZ camera is moving.
if self.calibrating or pct_motion > self.config.lightning_threshold:
self.calibrating = True


@ -273,17 +273,13 @@ class BirdsEyeFrameManager:
stop_event: mp.Event,
):
self.config = config
self.mode = config.birdseye.mode
width, height = get_canvas_shape(config.birdseye.width, config.birdseye.height)
self.frame_shape = (height, width)
self.yuv_shape = (height * 3 // 2, width)
self.frame = np.ndarray(self.yuv_shape, dtype=np.uint8)
self.canvas = Canvas(width, height, config.birdseye.layout.scaling_factor)
self.stop_event = stop_event
self.inactivity_threshold = config.birdseye.inactivity_threshold
if config.birdseye.layout.max_cameras:
self.last_refresh_time = 0
self.last_refresh_time = 0
# initialize the frame as black and with the Frigate logo
self.blank_frame = np.zeros(self.yuv_shape, np.uint8)
@ -426,7 +422,7 @@ class BirdsEyeFrameManager:
and self.config.cameras[cam].enabled
and cam_data["last_active_frame"] > 0
and cam_data["current_frame_time"] - cam_data["last_active_frame"]
< self.inactivity_threshold
< self.config.birdseye.inactivity_threshold
]
)
logger.debug(f"Active cameras: {active_cameras}")


@ -15,6 +15,7 @@ from ws4py.server.wsgirefserver import (
)
from ws4py.server.wsgiutils import WebSocketWSGIApplication
from frigate.comms.config_updater import ConfigSubscriber
from frigate.comms.detections_updater import DetectionSubscriber, DetectionTypeEnum
from frigate.comms.ws import WebSocket
from frigate.config import FrigateConfig
@ -138,6 +139,7 @@ class OutputProcess(FrigateProcess):
CameraConfigUpdateEnum.record,
],
)
birdseye_config_subscriber = ConfigSubscriber("config/birdseye", exact=True)
jsmpeg_cameras: dict[str, JsmpegCamera] = {}
birdseye: Birdseye | None = None
@ -167,6 +169,20 @@ class OutputProcess(FrigateProcess):
websocket_thread.start()
while not self.stop_event.is_set():
update_topic, birdseye_config = (
birdseye_config_subscriber.check_for_update()
)
if update_topic is not None:
previous_global_mode = self.config.birdseye.mode
self.config.birdseye = birdseye_config
for camera_config in self.config.cameras.values():
if camera_config.birdseye.mode == previous_global_mode:
camera_config.birdseye.mode = birdseye_config.mode
logger.debug("Applied dynamic birdseye config update")
# check if there is an updated config
updates = config_subscriber.check_for_updates()
@ -297,6 +313,7 @@ class OutputProcess(FrigateProcess):
birdseye.stop()
config_subscriber.stop()
birdseye_config_subscriber.stop()
websocket_server.manager.close_all()
websocket_server.manager.stop()
websocket_server.manager.join()


@ -50,11 +50,13 @@ class SegmentInfo:
active_object_count: int,
region_count: int,
average_dBFS: int,
motion_heatmap: dict[str, int] | None = None,
) -> None:
self.motion_count = motion_count
self.active_object_count = active_object_count
self.region_count = region_count
self.average_dBFS = average_dBFS
self.motion_heatmap = motion_heatmap
def should_discard_segment(self, retain_mode: RetainModeEnum) -> bool:
keep = False
@ -454,6 +456,59 @@ class RecordingMaintainer(threading.Thread):
if end_time < retain_cutoff:
self.drop_segment(cache_path)
def _compute_motion_heatmap(
self, camera: str, motion_boxes: list[tuple[int, int, int, int]]
) -> dict[str, int] | None:
"""Compute a 16x16 motion intensity heatmap from motion boxes.
Returns a sparse dict mapping cell index (as string) to intensity (1-255).
Only cells with motion are included.
Args:
camera: Camera name to get detect dimensions from.
motion_boxes: List of (x1, y1, x2, y2) pixel coordinates.
Returns:
Sparse dict like {"45": 3, "46": 5}, or None if no boxes.
"""
if not motion_boxes:
return None
camera_config = self.config.cameras.get(camera)
if not camera_config:
return None
frame_width = camera_config.detect.width
frame_height = camera_config.detect.height
if frame_width <= 0 or frame_height <= 0:
return None
GRID_SIZE = 16
counts: dict[int, int] = {}
for box in motion_boxes:
if len(box) < 4:
continue
x1, y1, x2, y2 = box
# Convert pixel coordinates to grid cells
grid_x1 = max(0, int((x1 / frame_width) * GRID_SIZE))
grid_y1 = max(0, int((y1 / frame_height) * GRID_SIZE))
grid_x2 = min(GRID_SIZE - 1, int((x2 / frame_width) * GRID_SIZE))
grid_y2 = min(GRID_SIZE - 1, int((y2 / frame_height) * GRID_SIZE))
for y in range(grid_y1, grid_y2 + 1):
for x in range(grid_x1, grid_x2 + 1):
idx = y * GRID_SIZE + x
counts[idx] = min(255, counts.get(idx, 0) + 1)
if not counts:
return None
# Convert to string keys for JSON storage
return {str(k): v for k, v in counts.items()}
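# Illustrative example (not part of this change): at a 1920x1080 detect
# resolution, a single motion box (0, 0, 120, 120) maps to grid cells
# (0,0)-(1,1) and yields {"0": 1, "1": 1, "16": 1, "17": 1}.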
def segment_stats(
self, camera: str, start_time: datetime.datetime, end_time: datetime.datetime
) -> SegmentInfo:
@ -461,6 +516,8 @@ class RecordingMaintainer(threading.Thread):
active_count = 0
region_count = 0
motion_count = 0
all_motion_boxes: list[tuple[int, int, int, int]] = []
for frame in self.object_recordings_info[camera]:
# frame is after end time of segment
if frame[0] > end_time.timestamp():
@ -479,6 +536,8 @@ class RecordingMaintainer(threading.Thread):
)
motion_count += len(frame[2])
region_count += len(frame[3])
# Collect motion boxes for heatmap computation
all_motion_boxes.extend(frame[2])
audio_values = []
for frame in self.audio_recordings_info[camera]:
@ -498,8 +557,14 @@ class RecordingMaintainer(threading.Thread):
average_dBFS = 0 if not audio_values else np.average(audio_values)
motion_heatmap = self._compute_motion_heatmap(camera, all_motion_boxes)
return SegmentInfo(
motion_count, active_count, region_count, round(average_dBFS)
motion_count,
active_count,
region_count,
round(average_dBFS),
motion_heatmap,
)
async def move_segment(
@ -590,6 +655,7 @@ class RecordingMaintainer(threading.Thread):
Recordings.regions.name: segment_info.region_count,
Recordings.dBFS.name: segment_info.average_dBFS,
Recordings.segment_size.name: segment_size,
Recordings.motion_heatmap.name: segment_info.motion_heatmap,
}
except Exception as e:
logger.error(f"Unable to store recording segment {cache_path}")


@ -0,0 +1,261 @@
"""Tests for the config_set endpoint's wildcard camera propagation."""
import os
import tempfile
import unittest
from unittest.mock import MagicMock, Mock, patch
import ruamel.yaml
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdatePublisher,
CameraConfigUpdateTopic,
)
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestConfigSetWildcardPropagation(BaseTestHttp):
"""Test that wildcard camera updates fan out to all cameras."""
def setUp(self):
super().setUp(models=[Event, Recordings, ReviewSegment])
self.minimal_config = {
"mqtt": {"host": "mqtt"},
"cameras": {
"front_door": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"detect": {
"height": 1080,
"width": 1920,
"fps": 5,
},
},
"back_yard": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.2:554/video", "roles": ["detect"]}
]
},
"detect": {
"height": 720,
"width": 1280,
"fps": 10,
},
},
},
}
def _create_app_with_publisher(self):
"""Create app with a mocked config publisher."""
from fastapi import Request
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.api.fastapi_app import create_fastapi_app
mock_publisher = Mock(spec=CameraConfigUpdatePublisher)
mock_publisher.publisher = MagicMock()
app = create_fastapi_app(
FrigateConfig(**self.minimal_config),
self.db,
None,
None,
None,
None,
None,
None,
mock_publisher,
None,
enforce_default_admin=False,
)
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
return {"username": username, "role": role}
async def mock_get_allowed_cameras_for_filter(request: Request):
return list(self.minimal_config.get("cameras", {}).keys())
app.dependency_overrides[get_current_user] = mock_get_current_user
app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)
return app, mock_publisher
def _write_config_file(self):
"""Write the minimal config to a temp YAML file and return the path."""
yaml = ruamel.yaml.YAML()
f = tempfile.NamedTemporaryFile(mode="w", suffix=".yml", delete=False)
yaml.dump(self.minimal_config, f)
f.close()
return f.name
@patch("frigate.api.app.find_config_file")
def test_wildcard_detect_update_fans_out_to_all_cameras(self, mock_find_config):
"""config/cameras/*/detect fans out to all cameras."""
config_path = self._write_config_file()
mock_find_config.return_value = config_path
try:
app, mock_publisher = self._create_app_with_publisher()
with AuthTestClient(app) as client:
resp = client.put(
"/config/set",
json={
"config_data": {"detect": {"fps": 15}},
"update_topic": "config/cameras/*/detect",
"requires_restart": 0,
},
)
self.assertEqual(resp.status_code, 200)
data = resp.json()
self.assertTrue(data["success"])
# Verify publish_update called for each camera
self.assertEqual(mock_publisher.publish_update.call_count, 2)
published_cameras = set()
for c in mock_publisher.publish_update.call_args_list:
topic = c[0][0]
self.assertIsInstance(topic, CameraConfigUpdateTopic)
self.assertEqual(topic.update_type, CameraConfigUpdateEnum.detect)
published_cameras.add(topic.camera)
self.assertEqual(published_cameras, {"front_door", "back_yard"})
# Global publisher should NOT be called for wildcard
mock_publisher.publisher.publish.assert_not_called()
finally:
os.unlink(config_path)
@patch("frigate.api.app.find_config_file")
def test_wildcard_motion_update_fans_out(self, mock_find_config):
"""config/cameras/*/motion fans out to all cameras."""
config_path = self._write_config_file()
mock_find_config.return_value = config_path
try:
app, mock_publisher = self._create_app_with_publisher()
with AuthTestClient(app) as client:
resp = client.put(
"/config/set",
json={
"config_data": {"motion": {"threshold": 30}},
"update_topic": "config/cameras/*/motion",
"requires_restart": 0,
},
)
self.assertEqual(resp.status_code, 200)
published_cameras = set()
for c in mock_publisher.publish_update.call_args_list:
topic = c[0][0]
self.assertEqual(topic.update_type, CameraConfigUpdateEnum.motion)
published_cameras.add(topic.camera)
self.assertEqual(published_cameras, {"front_door", "back_yard"})
finally:
os.unlink(config_path)
@patch("frigate.api.app.find_config_file")
def test_camera_specific_topic_only_updates_one_camera(self, mock_find_config):
"""config/cameras/front_door/detect only updates front_door."""
config_path = self._write_config_file()
mock_find_config.return_value = config_path
try:
app, mock_publisher = self._create_app_with_publisher()
with AuthTestClient(app) as client:
resp = client.put(
"/config/set",
json={
"config_data": {
"cameras": {"front_door": {"detect": {"fps": 20}}}
},
"update_topic": "config/cameras/front_door/detect",
"requires_restart": 0,
},
)
self.assertEqual(resp.status_code, 200)
# Only one camera updated
self.assertEqual(mock_publisher.publish_update.call_count, 1)
topic = mock_publisher.publish_update.call_args[0][0]
self.assertEqual(topic.camera, "front_door")
self.assertEqual(topic.update_type, CameraConfigUpdateEnum.detect)
# Global publisher should NOT be called
mock_publisher.publisher.publish.assert_not_called()
finally:
os.unlink(config_path)
@patch("frigate.api.app.find_config_file")
def test_wildcard_sends_merged_per_camera_config(self, mock_find_config):
"""Wildcard fan-out sends each camera's own merged config."""
config_path = self._write_config_file()
mock_find_config.return_value = config_path
try:
app, mock_publisher = self._create_app_with_publisher()
with AuthTestClient(app) as client:
resp = client.put(
"/config/set",
json={
"config_data": {"detect": {"fps": 15}},
"update_topic": "config/cameras/*/detect",
"requires_restart": 0,
},
)
self.assertEqual(resp.status_code, 200)
for c in mock_publisher.publish_update.call_args_list:
camera_detect_config = c[0][1]
self.assertIsNotNone(camera_detect_config)
self.assertTrue(hasattr(camera_detect_config, "fps"))
finally:
os.unlink(config_path)
@patch("frigate.api.app.find_config_file")
def test_non_camera_global_topic_uses_generic_publish(self, mock_find_config):
"""Non-camera topics (e.g. config/live) use the generic publisher."""
config_path = self._write_config_file()
mock_find_config.return_value = config_path
try:
app, mock_publisher = self._create_app_with_publisher()
with AuthTestClient(app) as client:
resp = client.put(
"/config/set",
json={
"config_data": {"live": {"height": 720}},
"update_topic": "config/live",
"requires_restart": 0,
},
)
self.assertEqual(resp.status_code, 200)
# Global topic publisher called
mock_publisher.publisher.publish.assert_called_once()
# Camera-level publish_update NOT called
mock_publisher.publish_update.assert_not_called()
finally:
os.unlink(config_path)
if __name__ == "__main__":
unittest.main()


@ -0,0 +1,91 @@
import unittest
import numpy as np
from frigate.config.camera.motion import MotionConfig
from frigate.motion.improved_motion import ImprovedMotionDetector
class TestImprovedMotionDetector(unittest.TestCase):
def setUp(self):
# small frame for testing; actual frames are grayscale
self.frame_shape = (100, 100) # height, width
self.config = MotionConfig()
# motion detector assumes a rasterized_mask attribute exists on config
# when update_mask() is called; add one manually by bypassing pydantic.
object.__setattr__(
self.config,
"rasterized_mask",
np.ones((self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8),
)
# create minimal PTZ metrics stub to satisfy detector checks
class _Stub:
def __init__(self, value=False):
self.value = value
def is_set(self):
return bool(self.value)
class DummyPTZ:
def __init__(self):
self.autotracker_enabled = _Stub(False)
self.motor_stopped = _Stub(False)
self.stop_time = _Stub(0)
self.detector = ImprovedMotionDetector(
self.frame_shape, self.config, fps=30, ptz_metrics=DummyPTZ()
)
# establish a baseline frame (all zeros)
base_frame = np.zeros(
(self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8
)
self.detector.detect(base_frame)
def _half_change_frame(self) -> np.ndarray:
"""Produce a frame where roughly half of the pixels are different."""
frame = np.zeros((self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8)
# flip the top half to white
frame[: self.frame_shape[0] // 2, :] = 255
return frame
def test_skip_motion_threshold_default(self):
"""With the default (None) setting, motion should always be reported."""
frame = self._half_change_frame()
boxes = self.detector.detect(frame)
self.assertTrue(
boxes, "Expected motion boxes when skip threshold is unset (disabled)"
)
def test_skip_motion_threshold_applied(self):
"""Setting a low skip threshold should prevent any boxes from being returned."""
# change the config and update the detector reference
self.config.skip_motion_threshold = 0.4
self.detector.config = self.config
self.detector.update_mask()
frame = self._half_change_frame()
boxes = self.detector.detect(frame)
self.assertEqual(
boxes,
[],
"Motion boxes should be empty when scene change exceeds skip threshold",
)
def test_skip_motion_threshold_does_not_affect_calibration(self):
"""Even when skipping, the detector should go into calibrating state."""
self.config.skip_motion_threshold = 0.4
self.detector.config = self.config
self.detector.update_mask()
frame = self._half_change_frame()
_ = self.detector.detect(frame)
self.assertTrue(
self.detector.calibrating,
"Detector should be in calibrating state after skip event",
)
if __name__ == "__main__":
unittest.main()


@ -151,7 +151,9 @@ def sync_recordings(
max_inserts = 1000
for batch in chunked(recordings_to_delete, max_inserts):
RecordingsToDelete.insert_many(batch).execute()
RecordingsToDelete.insert_many(
[{"id": r["id"]} for r in batch]
).execute()
try:
deleted = (


@ -110,6 +110,7 @@ def ensure_torch_dependencies() -> bool:
"pip",
"install",
"--break-system-packages",
"setuptools<81",
"torch",
"torchvision",
],


@ -711,7 +711,11 @@ def auto_detect_hwaccel() -> str:
try:
cuda = False
vaapi = False
resp = requests.get("http://127.0.0.1:1984/api/ffmpeg/hardware", timeout=3)
resp = requests.get(
"http://127.0.0.1:1984/api/ffmpeg/hardware",
timeout=3,
proxies={"http": None, "https": None},
)
if resp.status_code == 200:
data: dict[str, list[dict[str, str]]] = resp.json()


@ -214,7 +214,11 @@ class CameraWatchdog(threading.Thread):
self.config_subscriber = CameraConfigUpdateSubscriber(
None,
{config.name: config},
[CameraConfigUpdateEnum.enabled, CameraConfigUpdateEnum.record],
[
CameraConfigUpdateEnum.enabled,
CameraConfigUpdateEnum.ffmpeg,
CameraConfigUpdateEnum.record,
],
)
self.requestor = InterProcessRequestor()
self.was_enabled = self.config.enabled
@ -254,9 +258,13 @@ class CameraWatchdog(threading.Thread):
self._last_record_status = status
self._last_status_update_time = now
def _check_config_updates(self) -> dict[str, list[str]]:
"""Check for config updates and return the update dict."""
return self.config_subscriber.check_for_updates()
def _update_enabled_state(self) -> bool:
"""Fetch the latest config and update enabled state."""
self.config_subscriber.check_for_updates()
self._check_config_updates()
return self.config.enabled
def reset_capture_thread(
@ -317,7 +325,24 @@ class CameraWatchdog(threading.Thread):
# 1 second watchdog loop
while not self.stop_event.wait(1):
enabled = self._update_enabled_state()
updates = self._check_config_updates()
# Handle ffmpeg config changes by restarting all ffmpeg processes
if "ffmpeg" in updates and self.config.enabled:
self.logger.debug(
"FFmpeg config updated for %s, restarting ffmpeg processes",
self.config.name,
)
self.stop_all_ffmpeg()
self.start_all_ffmpeg()
self.latest_valid_segment_time = 0
self.latest_invalid_segment_time = 0
self.latest_cache_segment_time = 0
self.record_enable_time = datetime.now().astimezone(timezone.utc)
last_restart_time = datetime.now().timestamp()
continue
enabled = self.config.enabled
if enabled != self.was_enabled:
if enabled:
self.logger.debug(f"Enabling camera {self.config.name}")


@ -0,0 +1,34 @@
"""Peewee migrations -- 035_add_motion_heatmap.py.
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
migrator.sql('ALTER TABLE "recordings" ADD COLUMN "motion_heatmap" TEXT NULL')
def rollback(migrator, database, fake=False, **kwargs):
pass

web/package-lock.json (generated): 4,594-line diff suppressed because it is too large.


@ -5,6 +5,7 @@
"type": "module",
"scripts": {
"dev": "vite --host",
"postinstall": "patch-package",
"build": "tsc && vite build --base=/BASE_PATH/",
"lint": "eslint --ext .jsx,.js,.tsx,.ts --ignore-path .gitignore .",
"lint:fix": "eslint --ext .jsx,.js,.tsx,.ts --ignore-path .gitignore --fix .",
@ -27,12 +28,13 @@
"@radix-ui/react-hover-card": "^1.1.6",
"@radix-ui/react-label": "^2.1.2",
"@radix-ui/react-popover": "^1.1.6",
"@radix-ui/react-progress": "^1.1.8",
"@radix-ui/react-radio-group": "^1.2.3",
"@radix-ui/react-scroll-area": "^1.2.3",
"@radix-ui/react-select": "^2.1.6",
"@radix-ui/react-separator": "^1.1.7",
"@radix-ui/react-slider": "^1.2.3",
"@radix-ui/react-slot": "^1.2.3",
"@radix-ui/react-slot": "1.2.4",
"@radix-ui/react-switch": "^1.1.3",
"@radix-ui/react-tabs": "^1.1.3",
"@radix-ui/react-toggle": "^1.1.2",
@ -50,8 +52,7 @@
"copy-to-clipboard": "^3.3.3",
"date-fns": "^3.6.0",
"date-fns-tz": "^3.2.0",
"embla-carousel-react": "^8.2.0",
"framer-motion": "^11.5.4",
"framer-motion": "^12.35.0",
"hls.js": "^1.5.20",
"i18next": "^24.2.0",
"i18next-http-backend": "^3.0.1",
@ -61,30 +62,28 @@
"lodash": "^4.17.23",
"lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0",
"next-themes": "^0.4.6",
"nosleep.js": "^0.12.0",
"react": "^18.3.1",
"react": "^19.2.4",
"react-apexcharts": "^1.4.1",
"react-day-picker": "^9.7.0",
"react-device-detect": "^2.2.3",
"react-dom": "^18.3.1",
"react-dom": "^19.2.4",
"react-dropzone": "^14.3.8",
"react-grid-layout": "^1.5.0",
"react-grid-layout": "^2.2.2",
"react-hook-form": "^7.52.1",
"react-i18next": "^15.2.0",
"react-icons": "^5.5.0",
"react-konva": "^18.2.10",
"react-router-dom": "^6.30.3",
"react-konva": "^19.2.3",
"react-markdown": "^9.0.1",
"remark-gfm": "^4.0.0",
"react-router-dom": "^6.30.3",
"react-swipeable": "^7.0.2",
"react-tracked": "^2.0.1",
"react-transition-group": "^4.4.5",
"react-use-websocket": "^4.8.1",
"react-zoom-pan-pinch": "3.4.4",
"recoil": "^0.7.7",
"react-zoom-pan-pinch": "^3.7.0",
"remark-gfm": "^4.0.0",
"scroll-into-view-if-needed": "^3.1.0",
"sonner": "^1.5.0",
"sonner": "^2.0.7",
"sort-by": "^1.2.0",
"strftime": "^0.10.3",
"swr": "^2.3.2",
@ -92,7 +91,7 @@
"tailwind-scrollbar": "^3.1.0",
"tailwindcss-animate": "^1.0.7",
"use-long-press": "^3.2.0",
"vaul": "^0.9.1",
"vaul": "^1.1.2",
"vite-plugin-monaco-editor": "^1.1.0",
"zod": "^3.23.8"
},
@ -101,11 +100,8 @@
"@testing-library/jest-dom": "^6.6.2",
"@types/lodash": "^4.17.12",
"@types/node": "^20.14.10",
"@types/react": "^18.3.2",
"@types/react-dom": "^18.3.0",
"@types/react-grid-layout": "^1.3.5",
"@types/react-icons": "^3.0.0",
"@types/react-transition-group": "^4.4.10",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"@types/strftime": "^0.9.8",
"@typescript-eslint/eslint-plugin": "^7.5.0",
"@typescript-eslint/parser": "^7.5.0",
@ -116,19 +112,26 @@
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-jest": "^28.2.0",
"eslint-plugin-prettier": "^5.0.1",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-react-hooks": "^5.2.0",
"eslint-plugin-react-refresh": "^0.4.8",
"eslint-plugin-vitest-globals": "^1.5.0",
"fake-indexeddb": "^6.0.0",
"jest-websocket-mock": "^2.5.0",
"jsdom": "^24.1.1",
"monaco-editor": "^0.52.0",
"msw": "^2.3.5",
"patch-package": "^8.0.1",
"postcss": "^8.4.47",
"prettier": "^3.3.3",
"prettier-plugin-tailwindcss": "^0.6.5",
"tailwindcss": "^3.4.9",
"typescript": "^5.8.2",
"typescript": "^5.9.3",
"vite": "^6.4.1",
"vitest": "^3.0.7"
},
"overrides": {
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-popper": "1.2.8",
"@radix-ui/react-slot": "1.2.4"
}
}


@ -0,0 +1,75 @@
diff --git a/node_modules/@radix-ui/react-compose-refs/dist/index.js b/node_modules/@radix-ui/react-compose-refs/dist/index.js
index 5ba7a95..65aa7be 100644
--- a/node_modules/@radix-ui/react-compose-refs/dist/index.js
+++ b/node_modules/@radix-ui/react-compose-refs/dist/index.js
@@ -69,6 +69,31 @@ function composeRefs(...refs) {
};
}
function useComposedRefs(...refs) {
- return React.useCallback(composeRefs(...refs), refs);
+ const refsRef = React.useRef(refs);
+ React.useLayoutEffect(() => {
+ refsRef.current = refs;
+ });
+ return React.useCallback((node) => {
+ let hasCleanup = false;
+ const cleanups = refsRef.current.map((ref) => {
+ const cleanup = setRef(ref, node);
+ if (!hasCleanup && typeof cleanup === "function") {
+ hasCleanup = true;
+ }
+ return cleanup;
+ });
+ if (hasCleanup) {
+ return () => {
+ for (let i = 0; i < cleanups.length; i++) {
+ const cleanup = cleanups[i];
+ if (typeof cleanup === "function") {
+ cleanup();
+ } else {
+ setRef(refsRef.current[i], null);
+ }
+ }
+ };
+ }
+ }, []);
}
//# sourceMappingURL=index.js.map
diff --git a/node_modules/@radix-ui/react-compose-refs/dist/index.mjs b/node_modules/@radix-ui/react-compose-refs/dist/index.mjs
index 7dd9172..d1b53a5 100644
--- a/node_modules/@radix-ui/react-compose-refs/dist/index.mjs
+++ b/node_modules/@radix-ui/react-compose-refs/dist/index.mjs
@@ -32,7 +32,32 @@ function composeRefs(...refs) {
};
}
function useComposedRefs(...refs) {
- return React.useCallback(composeRefs(...refs), refs);
+ const refsRef = React.useRef(refs);
+ React.useLayoutEffect(() => {
+ refsRef.current = refs;
+ });
+ return React.useCallback((node) => {
+ let hasCleanup = false;
+ const cleanups = refsRef.current.map((ref) => {
+ const cleanup = setRef(ref, node);
+ if (!hasCleanup && typeof cleanup === "function") {
+ hasCleanup = true;
+ }
+ return cleanup;
+ });
+ if (hasCleanup) {
+ return () => {
+ for (let i = 0; i < cleanups.length; i++) {
+ const cleanup = cleanups[i];
+ if (typeof cleanup === "function") {
+ cleanup();
+ } else {
+ setRef(refsRef.current[i], null);
+ }
+ }
+ };
+ }
+ }, []);
}
export {
composeRefs,


@ -0,0 +1,46 @@
diff --git a/node_modules/@radix-ui/react-slot/dist/index.js b/node_modules/@radix-ui/react-slot/dist/index.js
index 3691205..3b62ea8 100644
--- a/node_modules/@radix-ui/react-slot/dist/index.js
+++ b/node_modules/@radix-ui/react-slot/dist/index.js
@@ -85,11 +85,12 @@ function createSlotClone(ownerName) {
if (isLazyComponent(children) && typeof use === "function") {
children = use(children._payload);
}
+ const childrenRef = React.isValidElement(children) ? getElementRef(children) : null;
+ const composedRef = (0, import_react_compose_refs.useComposedRefs)(forwardedRef, childrenRef);
if (React.isValidElement(children)) {
- const childrenRef = getElementRef(children);
const props2 = mergeProps(slotProps, children.props);
if (children.type !== React.Fragment) {
- props2.ref = forwardedRef ? (0, import_react_compose_refs.composeRefs)(forwardedRef, childrenRef) : childrenRef;
+ props2.ref = forwardedRef ? composedRef : childrenRef;
}
return React.cloneElement(children, props2);
}
diff --git a/node_modules/@radix-ui/react-slot/dist/index.mjs b/node_modules/@radix-ui/react-slot/dist/index.mjs
index d7ea374..a990150 100644
--- a/node_modules/@radix-ui/react-slot/dist/index.mjs
+++ b/node_modules/@radix-ui/react-slot/dist/index.mjs
@@ -1,6 +1,6 @@
// src/slot.tsx
import * as React from "react";
-import { composeRefs } from "@radix-ui/react-compose-refs";
+import { composeRefs, useComposedRefs } from "@radix-ui/react-compose-refs";
import { Fragment as Fragment2, jsx } from "react/jsx-runtime";
var REACT_LAZY_TYPE = Symbol.for("react.lazy");
var use = React[" use ".trim().toString()];
@@ -45,11 +45,12 @@ function createSlotClone(ownerName) {
if (isLazyComponent(children) && typeof use === "function") {
children = use(children._payload);
}
+ const childrenRef = React.isValidElement(children) ? getElementRef(children) : null;
+ const composedRef = useComposedRefs(forwardedRef, childrenRef);
if (React.isValidElement(children)) {
- const childrenRef = getElementRef(children);
const props2 = mergeProps(slotProps, children.props);
if (children.type !== React.Fragment) {
- props2.ref = forwardedRef ? composeRefs(forwardedRef, childrenRef) : childrenRef;
+ props2.ref = forwardedRef ? composedRef : childrenRef;
}
return React.cloneElement(children, props2);
}

View File

@ -0,0 +1,23 @@
diff --git a/node_modules/react-use-websocket/dist/lib/use-websocket.js b/node_modules/react-use-websocket/dist/lib/use-websocket.js
index f01db48..b30aff2 100644
--- a/node_modules/react-use-websocket/dist/lib/use-websocket.js
+++ b/node_modules/react-use-websocket/dist/lib/use-websocket.js
@@ -139,15 +139,15 @@ var useWebSocket = function (url, options, connect) {
}
protectedSetLastMessage = function (message) {
if (!expectClose_1) {
- (0, react_dom_1.flushSync)(function () { return setLastMessage(message); });
+ setLastMessage(message);
}
};
protectedSetReadyState = function (state) {
if (!expectClose_1) {
- (0, react_dom_1.flushSync)(function () { return setReadyState(function (prev) {
+ setReadyState(function (prev) {
var _a;
return (__assign(__assign({}, prev), (convertedUrl.current && (_a = {}, _a[convertedUrl.current] = state, _a))));
- }); });
+ });
}
};
if (createOrJoin_1) {


@ -264,7 +264,11 @@
},
"lightning_threshold": {
"label": "Lightning threshold",
"description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0)."
"description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events."
},
"skip_motion_threshold": {
"label": "Skip motion threshold",
"description": "If more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera autotracking an object. The tradeoff is between dropping a few megabytes of recordings versus reviewing a couple short clips. Range 0.0 to 1.0."
},
"improve_contrast": {
"label": "Improve contrast",
@ -864,7 +868,8 @@
"description": "A user-friendly name for the zone, displayed in the Frigate UI. If not set, a formatted version of the zone name will be used."
},
"enabled": {
"label": "Whether this zone is active. Disabled zones are ignored at runtime."
"label": "Enabled",
"description": "Enable or disable this zone. Disabled zones are ignored at runtime."
},
"enabled_in_config": {
"label": "Keep track of original state of zone."


@ -1391,7 +1391,11 @@
},
"lightning_threshold": {
"label": "Lightning threshold",
"description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0)."
"description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events."
},
"skip_motion_threshold": {
"label": "Skip motion threshold",
"description": "If more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera autotracking an object. The tradeoff is between dropping a few megabytes of recordings versus reviewing a couple short clips. Range 0.0 to 1.0."
},
"improve_contrast": {
"label": "Improve contrast",


@ -61,5 +61,25 @@
"detected": "detected",
"normalActivity": "Normal",
"needsReview": "Needs review",
"securityConcern": "Security concern"
"securityConcern": "Security concern",
"motionSearch": {
"menuItem": "Motion search",
"openMenu": "Camera options"
},
"motionPreviews": {
"menuItem": "View motion previews",
"title": "Motion previews: {{camera}}",
"mobileSettingsTitle": "Motion Preview Settings",
"mobileSettingsDesc": "Adjust playback speed and dimming, and choose a date to review motion-only clips.",
"dim": "Dim",
"dimAria": "Adjust dimming intensity",
"dimDesc": "Increase dimming to increase motion area visibility.",
"speed": "Speed",
"speedAria": "Select preview playback speed",
"speedDesc": "Choose how quickly preview clips play.",
"back": "Back",
"empty": "No previews available",
"noPreview": "Preview unavailable",
"seekAria": "Seek {{camera}} player to {{time}}"
}
}


@ -0,0 +1,75 @@
{
"documentTitle": "Motion Search - Frigate",
"title": "Motion Search",
"description": "Draw a polygon to define the region of interest, and specify a time range to search for motion changes within that region.",
"selectCamera": "Motion Search is loading",
"startSearch": "Start Search",
"searchStarted": "Search started",
"searchCancelled": "Search cancelled",
"cancelSearch": "Cancel",
"searching": "Search in progress.",
"searchComplete": "Search complete",
"noResultsYet": "Run a search to find motion changes in the selected region",
"noChangesFound": "No pixel changes detected in the selected region",
"changesFound_one": "Found {{count}} motion change",
"changesFound_other": "Found {{count}} motion changes",
"framesProcessed": "{{count}} frames processed",
"jumpToTime": "Jump to this time",
"results": "Results",
"showSegmentHeatmap": "Heatmap",
"newSearch": "New Search",
"clearResults": "Clear Results",
"clearROI": "Clear polygon",
"polygonControls": {
"points_one": "{{count}} point",
"points_other": "{{count}} points",
"undo": "Undo last point",
"reset": "Reset polygon"
},
"motionHeatmapLabel": "Motion Heatmap",
"dialog": {
"title": "Motion Search",
"cameraLabel": "Camera",
"previewAlt": "Camera preview for {{camera}}"
},
"timeRange": {
"title": "Search Range",
"start": "Start time",
"end": "End time"
},
"settings": {
"title": "Search Settings",
"parallelMode": "Parallel mode",
"parallelModeDesc": "Scan multiple recording segments at the same time (faster, but significantly more CPU intensive)",
"threshold": "Sensitivity Threshold",
"thresholdDesc": "Lower values detect smaller changes (1-255)",
"minArea": "Minimum Change Area",
"minAreaDesc": "Minimum percentage of the region of interest that must change to be considered significant",
"frameSkip": "Frame Skip",
"frameSkipDesc": "Process every Nth frame. Set this to your camera's frame rate to process one frame per second (e.g. 5 for a 5 FPS camera, 30 for a 30 FPS camera). Higher values will be faster, but may miss short motion events.",
"maxResults": "Maximum Results",
"maxResultsDesc": "Stop after this many matching timestamps"
},
"errors": {
"noCamera": "Please select a camera",
"noROI": "Please draw a region of interest",
"noTimeRange": "Please select a time range",
"invalidTimeRange": "End time must be after start time",
"searchFailed": "Search failed: {{message}}",
"polygonTooSmall": "Polygon must have at least 3 points",
"unknown": "Unknown error"
},
"changePercentage": "{{percentage}}% changed",
"metrics": {
"title": "Search Metrics",
"segmentsScanned": "Segments scanned",
"segmentsProcessed": "Processed",
"segmentsSkippedInactive": "Skipped (no activity)",
"segmentsSkippedHeatmap": "Skipped (no ROI overlap)",
"fallbackFullRange": "Fallback full-range scan",
"framesDecoded": "Frames decoded",
"wallTime": "Search time",
"segmentErrors": "Segment errors",
"seconds": "{{seconds}}s"
}
}
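A worked example of the Frame Skip arithmetic described in frameSkipDesc above (the helper name is invented for illustration):

// Processing every Nth frame yields cameraFps / frameSkip analyzed frames
// per second.
function analyzedFps(cameraFps: number, frameSkip: number): number {
  return cameraFps / frameSkip;
}

analyzedFps(5, 5); // => 1 frame/s on a 5 FPS camera
analyzedFps(30, 30); // => 1 frame/s on a 30 FPS camera
analyzedFps(30, 10); // => 3 frames/s (slower search, finer granularity)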

View File

@ -83,7 +83,8 @@
"triggers": "Triggers",
"debug": "Debug",
"frigateplus": "Frigate+",
"maintenance": "Maintenance"
"mediaSync": "Media sync",
"regionGrid": "Region grid"
},
"dialog": {
"unsavedChanges": {
@ -1232,6 +1233,16 @@
"previews": "Previews",
"exports": "Exports",
"recordings": "Recordings"
},
"regionGrid": {
"title": "Region Grid",
"desc": "The region grid is an optimization that learns where objects of different sizes typically appear in each camera's field of view. Frigate uses this data to efficiently size detection regions. The grid is automatically built over time from tracked object data.",
"clear": "Clear region grid",
"clearConfirmTitle": "Clear Region Grid",
"clearConfirmDesc": "Clearing the region grid is not recommended unless you have recently changed your detector model size or have changed your camera's physical position and are having object tracking issues. The grid will be automatically rebuilt over time as objects are tracked. A Frigate restart is required for changes to take effect.",
"clearSuccess": "Region grid cleared successfully",
"clearError": "Failed to clear region grid",
"restartRequired": "Restart required for region grid changes to take effect"
}
},
"configForm": {

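Conceptually, the region grid described above can be pictured as a per-camera grid of cells, each accumulating the typical sizes of objects tracked there. A sketch with hypothetical types — not Frigate's actual data structure:

// Hypothetical sketch: each cell of a grid over the camera frame keeps a
// running mean of tracked-object box areas, which could then inform how
// large a detection region to use in that part of the frame.
interface GridCell {
  sampleCount: number;
  meanObjectArea: number; // running mean of tracked-object box areas
}

type RegionGrid = GridCell[][]; // grid[row][col]

function updateCell(cell: GridCell, objectArea: number): GridCell {
  const sampleCount = cell.sampleCount + 1;
  return {
    sampleCount,
    meanObjectArea:
      cell.meanObjectArea + (objectArea - cell.meanObjectArea) / sampleCount,
  };
}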
View File

@ -11,7 +11,6 @@ import { Redirect } from "./components/navigation/Redirect";
import { cn } from "./lib/utils";
import { isPWA } from "./utils/isPWA";
import ProtectedRoute from "@/components/auth/ProtectedRoute";
import { AuthProvider } from "@/context/auth-context";
import useSWR from "swr";
import { FrigateConfig } from "./types/frigateConfig";
import ActivityIndicator from "@/components/indicators/activity-indicator";
@ -39,13 +38,11 @@ function App() {
return (
<Providers>
<AuthProvider>
<BrowserRouter basename={window.baseUrl}>
<Wrapper>
{config?.safe_mode ? <SafeAppView /> : <DefaultAppView />}
</Wrapper>
</BrowserRouter>
</AuthProvider>
<BrowserRouter basename={window.baseUrl}>
<Wrapper>
{config?.safe_mode ? <SafeAppView /> : <DefaultAppView />}
</Wrapper>
</BrowserRouter>
</Providers>
);
}
@ -85,17 +82,13 @@ function DefaultAppView() {
: "bottom-8 left-[52px]",
)}
>
<Suspense>
<Suspense
fallback={
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
}
>
<Routes>
<Route
element={
mainRouteRoles ? (
<ProtectedRoute requiredRoles={mainRouteRoles} />
) : (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
)
}
>
<Route element={<ProtectedRoute requiredRoles={mainRouteRoles} />}>
<Route index element={<Live />} />
<Route path="/review" element={<Events />} />
<Route path="/explore" element={<Explore />} />

View File

@ -10,7 +10,7 @@ import {
export default function ProtectedRoute({
requiredRoles,
}: {
requiredRoles: string[];
requiredRoles?: string[];
}) {
const { auth } = useContext(AuthContext);
@ -36,6 +36,13 @@ export default function ProtectedRoute({
);
}
// Wait for config to provide required roles
if (!requiredRoles) {
return (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
);
}
if (auth.isLoading) {
return (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />

View File

@ -25,14 +25,7 @@ const audio: SectionConfigOverrides = {
},
},
global: {
restartRequired: [
"enabled",
"listen",
"filters",
"min_volume",
"max_not_heard",
"num_threads",
],
restartRequired: ["num_threads"],
},
camera: {
restartRequired: ["num_threads"],

View File

@ -28,10 +28,7 @@ const birdseye: SectionConfigOverrides = {
"width",
"height",
"quality",
"mode",
"layout.scaling_factor",
"inactivity_threshold",
"layout.max_cameras",
"idle_heartbeat_fps",
],
uiSchema: {

View File

@ -3,7 +3,7 @@ import type { SectionConfigOverrides } from "./types";
const classification: SectionConfigOverrides = {
base: {
sectionDocs: "/configuration/custom_classification/object_classification",
restartRequired: ["bird.enabled", "bird.threshold"],
restartRequired: ["bird.enabled"],
hiddenFields: ["custom"],
advancedFields: [],
},

View File

@ -30,16 +30,7 @@ const detect: SectionConfigOverrides = {
],
},
global: {
restartRequired: [
"enabled",
"width",
"height",
"fps",
"min_initialized",
"max_disappeared",
"annotation_offset",
"stationary",
],
restartRequired: ["width", "height", "min_initialized", "max_disappeared"],
},
camera: {
restartRequired: ["width", "height", "min_initialized", "max_disappeared"],

View File

@ -32,18 +32,7 @@ const faceRecognition: SectionConfigOverrides = {
"blur_confidence_filter",
"device",
],
restartRequired: [
"enabled",
"model_size",
"unknown_score",
"detection_threshold",
"recognition_threshold",
"min_area",
"min_faces",
"save_attempts",
"blur_confidence_filter",
"device",
],
restartRequired: ["enabled", "model_size", "device"],
},
};

View File

@ -116,16 +116,7 @@ const ffmpeg: SectionConfigOverrides = {
},
},
global: {
restartRequired: [
"path",
"global_args",
"hwaccel_args",
"input_args",
"output_args",
"retry_interval",
"apple_compatibility",
"gpu",
],
restartRequired: [],
fieldOrder: [
"hwaccel_args",
"path",
@ -162,17 +153,7 @@ const ffmpeg: SectionConfigOverrides = {
fieldGroups: {
cameraFfmpeg: ["input_args", "hwaccel_args", "output_args"],
},
restartRequired: [
"inputs",
"path",
"global_args",
"hwaccel_args",
"input_args",
"output_args",
"retry_interval",
"apple_compatibility",
"gpu",
],
restartRequired: [],
},
};
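Across these section overrides the pattern is the same: fields pruned from restartRequired can now be applied live, so only the remainder should prompt a restart. A sketch of how a settings form might consume the list (hypothetical helper, not the actual ConfigSection code):

// Hypothetical helper: decide whether a pending save needs a Frigate restart.
// restartRequired comes from the section override, e.g. [] for ffmpeg after
// this change, or ["enabled", "model_size", "device"] for face recognition.
function saveNeedsRestart(
  changedFields: string[],
  restartRequired: string[],
): boolean {
  return changedFields.some((field) => restartRequired.includes(field));
}

saveNeedsRestart(["hwaccel_args"], []); // => false: applied dynamically
saveNeedsRestart(["device"], ["enabled", "model_size", "device"]); // => true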

View File

@ -40,21 +40,7 @@ const lpr: SectionConfigOverrides = {
"device",
"replace_rules",
],
restartRequired: [
"enabled",
"model_size",
"detection_threshold",
"min_area",
"recognition_threshold",
"min_plate_length",
"format",
"match_distance",
"known_plates",
"enhancement",
"debug_save_plates",
"device",
"replace_rules",
],
restartRequired: ["model_size", "enhancement", "device"],
uiSchema: {
format: {
"ui:options": { size: "md" },

View File

@ -8,6 +8,7 @@ const motion: SectionConfigOverrides = {
"enabled",
"threshold",
"lightning_threshold",
"skip_motion_threshold",
"improve_contrast",
"contour_area",
"delta_alpha",
@ -22,6 +23,7 @@ const motion: SectionConfigOverrides = {
hiddenFields: ["enabled_in_config", "mask", "raw_mask"],
advancedFields: [
"lightning_threshold",
"skip_motion_threshold",
"delta_alpha",
"frame_alpha",
"frame_height",
@ -29,17 +31,7 @@ const motion: SectionConfigOverrides = {
],
},
global: {
restartRequired: [
"enabled",
"threshold",
"lightning_threshold",
"improve_contrast",
"contour_area",
"delta_alpha",
"frame_alpha",
"frame_height",
"mqtt_off_delay",
],
restartRequired: ["frame_height"],
},
camera: {
restartRequired: ["frame_height"],

View File

@ -83,7 +83,7 @@ const objects: SectionConfigOverrides = {
},
},
global: {
restartRequired: ["track", "alert", "detect", "filters", "genai"],
restartRequired: [],
hiddenFields: [
"enabled_in_config",
"mask",

View File

@ -29,16 +29,7 @@ const record: SectionConfigOverrides = {
},
},
global: {
restartRequired: [
"enabled",
"expire_interval",
"continuous",
"motion",
"alerts",
"detections",
"preview",
"export",
],
restartRequired: [],
},
camera: {
restartRequired: [],

View File

@ -44,7 +44,7 @@ const review: SectionConfigOverrides = {
},
},
global: {
restartRequired: ["alerts", "detections", "genai"],
restartRequired: [],
},
camera: {
restartRequired: [],

View File

@ -27,14 +27,7 @@ const snapshots: SectionConfigOverrides = {
},
},
global: {
restartRequired: [
"enabled",
"bounding_box",
"crop",
"quality",
"timestamp",
"retain",
],
restartRequired: [],
hiddenFields: ["enabled_in_config", "required_zones"],
},
camera: {

View File

@ -3,14 +3,7 @@ import type { SectionConfigOverrides } from "./types";
const telemetry: SectionConfigOverrides = {
base: {
sectionDocs: "/configuration/reference",
restartRequired: [
"network_interfaces",
"stats.amd_gpu_stats",
"stats.intel_gpu_stats",
"stats.intel_gpu_device",
"stats.network_bandwidth",
"version_check",
],
restartRequired: ["version_check"],
fieldOrder: ["network_interfaces", "stats", "version_check"],
advancedFields: [],
},

View File

@ -56,6 +56,7 @@ import ActivityIndicator from "@/components/indicators/activity-indicator";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import {
cameraUpdateTopicMap,
globalCameraDefaultSections,
buildOverrides,
buildConfigDataForPath,
sanitizeSectionData as sharedSanitizeSectionData,
@ -234,7 +235,10 @@ export function ConfigSection({
? cameraUpdateTopicMap[sectionPath]
? `config/cameras/${cameraName}/${cameraUpdateTopicMap[sectionPath]}`
: undefined
: `config/${sectionPath}`;
: globalCameraDefaultSections.has(sectionPath) &&
cameraUpdateTopicMap[sectionPath]
? `config/cameras/*/${cameraUpdateTopicMap[sectionPath]}`
: `config/${sectionPath}`;
// Default: show title for camera level (since it might be collapsible), hide for global
const shouldShowTitle = showTitle ?? effectiveLevel === "camera";
@ -827,7 +831,7 @@ export function ConfigSection({
<div
className={cn(
"w-full border-t border-secondary bg-background pb-5 pt-0",
"w-full border-t border-secondary bg-background pt-0",
!noStickyButtons && "sticky bottom-0 z-50",
)}
>

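For reference, the topic ternary added to ConfigSection earlier in this file produces three shapes of update topic; the camera and section names below are hypothetical:

// Illustrative topic strings (hypothetical camera/section names):
const cameraTopic = "config/cameras/front_door/motion"; // camera level
const wildcardTopic = "config/cameras/*/motion"; // global section that defaults per camera
const globalTopic = "config/motion"; // plain global section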
View File

@ -114,10 +114,17 @@ interface PropertyElement {
content: React.ReactElement;
}
/** Shape of the props that RJSF injects into each property element. */
interface RjsfElementProps {
schema?: { type?: string | string[] };
uiSchema?: Record<string, unknown> & {
"ui:widget"?: string;
"ui:options"?: Record<string, unknown>;
};
}
function isObjectLikeElement(item: PropertyElement) {
const fieldSchema = item.content.props?.schema as
| { type?: string | string[] }
| undefined;
const fieldSchema = (item.content.props as RjsfElementProps)?.schema;
return fieldSchema?.type === "object";
}
@ -163,16 +170,21 @@ function GridLayoutObjectFieldTemplate(
// Override the properties rendering with grid layout
const isHiddenProp = (prop: (typeof properties)[number]) =>
prop.content.props.uiSchema?.["ui:widget"] === "hidden";
(prop.content.props as RjsfElementProps).uiSchema?.["ui:widget"] ===
"hidden";
const visibleProps = properties.filter((prop) => !isHiddenProp(prop));
// Separate regular and advanced properties
const advancedProps = visibleProps.filter(
(p) => p.content.props.uiSchema?.["ui:options"]?.advanced === true,
(p) =>
(p.content.props as RjsfElementProps).uiSchema?.["ui:options"]
?.advanced === true,
);
const regularProps = visibleProps.filter(
(p) => p.content.props.uiSchema?.["ui:options"]?.advanced !== true,
(p) =>
(p.content.props as RjsfElementProps).uiSchema?.["ui:options"]
?.advanced !== true,
);
const hasModifiedAdvanced = advancedProps.some((prop) =>
isPathModified([...fieldPath, prop.name]),

View File

@ -448,6 +448,9 @@ export function FieldTemplate(props: FieldTemplateProps) {
);
};
const errorsProps = errors?.props as { errors?: unknown[] } | undefined;
const hasFieldErrors = !!errors && (errorsProps?.errors?.length ?? 0) > 0;
const renderStandardLabel = () => {
if (!shouldRenderStandardLabel) {
return null;
@ -459,7 +462,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
className={cn(
"text-sm font-medium",
isModified && "text-danger",
errors && errors.props?.errors?.length > 0 && "text-destructive",
hasFieldErrors && "text-destructive",
)}
>
{finalLabel}
@ -497,7 +500,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
className={cn(
"text-sm font-medium",
isModified && "text-danger",
errors && errors.props?.errors?.length > 0 && "text-destructive",
hasFieldErrors && "text-destructive",
)}
>
{finalLabel}

View File

@ -1,6 +1,7 @@
// Custom MultiSchemaFieldTemplate to handle anyOf [Type, null] fields
// Renders simple nullable types as single inputs instead of dropdowns
import type { JSX } from "react";
import {
MultiSchemaFieldTemplateProps,
StrictRJSFSchema,

View File

@ -25,6 +25,15 @@ import {
import get from "lodash/get";
import { AddPropertyButton, AdvancedCollapsible } from "../components";
/** Shape of the props that RJSF injects into each property element. */
interface RjsfElementProps {
schema?: { type?: string | string[] };
uiSchema?: Record<string, unknown> & {
"ui:widget"?: string;
"ui:options"?: Record<string, unknown>;
};
}
export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {
const {
title,
@ -182,16 +191,21 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {
uiSchema?.["ui:options"]?.disableNestedCard === true;
const isHiddenProp = (prop: (typeof properties)[number]) =>
prop.content.props.uiSchema?.["ui:widget"] === "hidden";
(prop.content.props as RjsfElementProps).uiSchema?.["ui:widget"] ===
"hidden";
const visibleProps = properties.filter((prop) => !isHiddenProp(prop));
// Check for advanced section grouping
const advancedProps = visibleProps.filter(
(p) => p.content.props.uiSchema?.["ui:options"]?.advanced === true,
(p) =>
(p.content.props as RjsfElementProps).uiSchema?.["ui:options"]
?.advanced === true,
);
const regularProps = visibleProps.filter(
(p) => p.content.props.uiSchema?.["ui:options"]?.advanced !== true,
(p) =>
(p.content.props as RjsfElementProps).uiSchema?.["ui:options"]
?.advanced !== true,
);
const hasModifiedAdvanced = advancedProps.some((prop) =>
checkSubtreeModified([...fieldPath, prop.name]),
@ -333,9 +347,7 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {
const ungrouped = items.filter((item) => !grouped.has(item.name));
const isObjectLikeField = (item: (typeof properties)[number]) => {
const fieldSchema = item.content.props.schema as
| { type?: string | string[] }
| undefined;
const fieldSchema = (item.content.props as RjsfElementProps)?.schema;
return fieldSchema?.type === "object";
};

View File

@ -1,4 +1,5 @@
import { t } from "i18next";
import type { JSX } from "react";
import { FunctionComponent, useEffect, useMemo, useState } from "react";
interface IProp {

View File

@ -1,8 +1,8 @@
import { cn } from "@/lib/utils";
import { LogSeverity } from "@/types/log";
import { ReactNode, useMemo, useRef } from "react";
import { ReactNode, useMemo } from "react";
import { isIOS } from "react-device-detect";
import { CSSTransition } from "react-transition-group";
import { AnimatePresence, motion } from "framer-motion";
type ChipProps = {
className?: string;
@ -17,39 +17,31 @@ export default function Chip({
in: inProp = true,
onClick,
}: ChipProps) {
const nodeRef = useRef(null);
return (
<CSSTransition
in={inProp}
nodeRef={nodeRef}
timeout={500}
classNames={{
enter: "opacity-0",
enterActive: "opacity-100 transition-opacity duration-500 ease-in-out",
exit: "opacity-100",
exitActive: "opacity-0 transition-opacity duration-500 ease-in-out",
}}
unmountOnExit
>
<div
ref={nodeRef}
className={cn(
"flex items-center rounded-2xl px-2 py-1.5",
className,
!isIOS && "z-10",
)}
onClick={(e) => {
e.stopPropagation();
<AnimatePresence>
{inProp && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
transition={{ duration: 0.5, ease: "easeInOut" }}
className={cn(
"flex items-center rounded-2xl px-2 py-1.5",
className,
!isIOS && "z-10",
)}
onClick={(e) => {
e.stopPropagation();
if (onClick) {
onClick();
}
}}
>
{children}
</div>
</CSSTransition>
if (onClick) {
onClick();
}
}}
>
{children}
</motion.div>
)}
</AnimatePresence>
);
}
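Usage is unchanged by the framer-motion migration; a minimal sketch (the import path is assumed, and the props follow the ChipProps type above):

import Chip from "@/components/indicators/Chip"; // path assumed

function ExampleChip({ visible }: { visible: boolean }) {
  // AnimatePresence fades the chip over 500ms when `visible` toggles,
  // replacing the old CSSTransition enter/exit classes.
  return (
    <Chip in={visible} className="bg-secondary">
      <span>Detected</span>
    </Chip>
  );
}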

View File

@ -55,9 +55,9 @@ export function MobilePage({
});
return (
<MobilePageContext.Provider value={{ open, onOpenChange: setOpen }}>
<MobilePageContext value={{ open, onOpenChange: setOpen }}>
{children}
</MobilePageContext.Provider>
</MobilePageContext>
);
}
@ -102,7 +102,7 @@ export function MobilePagePortal({
type MobilePageContentProps = {
children: React.ReactNode;
className?: string;
scrollerRef?: React.RefObject<HTMLDivElement>;
scrollerRef?: React.RefObject<HTMLDivElement | null>;
};
export function MobilePageContent({

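The change above relies on React 19 allowing a Context object to be rendered directly as its provider; the two forms are equivalent. Similarly, the RefObject<T | null> widenings in this and later files reflect React 19's typings, where useRef<T>(null) now returns RefObject<T | null>:

// React <= 18: explicit .Provider
<MobilePageContext.Provider value={{ open, onOpenChange: setOpen }}>
  {children}
</MobilePageContext.Provider>

// React 19: the Context itself acts as the provider
<MobilePageContext value={{ open, onOpenChange: setOpen }}>
  {children}
</MobilePageContext>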
View File

@ -20,7 +20,7 @@ export default function ActionsDropdown({
const { t } = useTranslation(["components/dialog", "views/replay", "common"]);
return (
<DropdownMenu>
<DropdownMenu modal={false}>
<DropdownMenuTrigger asChild>
<Button
className="flex items-center gap-2"

View File

@ -10,7 +10,7 @@ import Konva from "konva";
import { useResizeObserver } from "@/hooks/resize-observer";
type DebugDrawingLayerProps = {
containerRef: React.RefObject<HTMLDivElement>;
containerRef: React.RefObject<HTMLDivElement | null>;
cameraWidth: number;
cameraHeight: number;
};

View File

@ -7,10 +7,16 @@ import axios from "axios";
import { useSWRConfig } from "swr";
import { toast } from "sonner";
import { Trans, useTranslation } from "react-i18next";
import { LuInfo } from "react-icons/lu";
import { LuExternalLink, LuInfo, LuMinus, LuPlus } from "react-icons/lu";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { Link } from "react-router-dom";
const OFFSET_MIN = -2500;
const OFFSET_MAX = 2500;
const OFFSET_STEP = 50;
type Props = {
className?: string;
@ -19,6 +25,7 @@ type Props = {
export default function AnnotationOffsetSlider({ className }: Props) {
const { annotationOffset, setAnnotationOffset, camera } = useDetailStream();
const isAdmin = useIsAdmin();
const { getLocaleDocUrl } = useDocDomain();
const { mutate } = useSWRConfig();
const { t } = useTranslation(["views/explore"]);
const [isSaving, setIsSaving] = useState(false);
@ -32,6 +39,16 @@ export default function AnnotationOffsetSlider({ className }: Props) {
[setAnnotationOffset],
);
const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(OFFSET_MIN, Math.min(OFFSET_MAX, next));
});
},
[setAnnotationOffset],
);
const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
@ -72,11 +89,18 @@ export default function AnnotationOffsetSlider({ className }: Props) {
return (
<div
className={cn(
"flex flex-col gap-0.5",
"flex flex-col gap-1.5",
isMobile && "landscape:gap-3",
className,
)}
>
<div className="flex items-center gap-2 text-sm">
<span>{t("trackingDetails.annotationSettings.offset.label")}:</span>
<span className="font-mono tabular-nums text-primary-variant">
{annotationOffset > 0 ? "+" : ""}
{annotationOffset}ms
</span>
</div>
<div
className={cn(
"flex items-center gap-3",
@ -84,57 +108,81 @@ export default function AnnotationOffsetSlider({ className }: Props) {
"landscape:flex-col landscape:items-start landscape:gap-4",
)}
>
<div className="flex max-w-28 flex-row items-center gap-2 text-sm md:max-w-48">
<span className="max-w-24 md:max-w-44">
{t("trackingDetails.annotationSettings.offset.label")}:
</span>
<span className="text-primary-variant">{annotationOffset}</span>
</div>
<Button
type="button"
variant="outline"
size="icon"
className="size-8 shrink-0"
aria-label="-50ms"
onClick={() => stepOffset(-OFFSET_STEP)}
disabled={annotationOffset <= OFFSET_MIN}
>
<LuMinus className="size-4" />
</Button>
<div className="w-full flex-1 landscape:flex">
<Slider
value={[annotationOffset]}
min={-2500}
max={2500}
step={50}
min={OFFSET_MIN}
max={OFFSET_MAX}
step={OFFSET_STEP}
onValueChange={handleChange}
/>
</div>
<div className="flex items-center gap-2">
<Button size="sm" variant="ghost" onClick={reset}>
{t("button.reset", { ns: "common" })}
</Button>
{isAdmin && (
<Button size="sm" onClick={save} disabled={isSaving}>
{isSaving
? t("button.saving", { ns: "common" })
: t("button.save", { ns: "common" })}
</Button>
)}
</div>
<Button
type="button"
variant="outline"
size="icon"
className="size-8 shrink-0"
aria-label="+50ms"
onClick={() => stepOffset(OFFSET_STEP)}
disabled={annotationOffset >= OFFSET_MAX}
>
<LuPlus className="size-4" />
</Button>
</div>
<div
className={cn(
"flex items-center gap-2 text-xs text-muted-foreground",
isMobile && "landscape:flex-col landscape:items-start",
)}
>
<div className="flex items-start gap-1.5 text-xs text-muted-foreground">
<Trans ns="views/explore">
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
<Popover>
<PopoverTrigger asChild>
<button
className="focus:outline-none"
className="mt-px shrink-0 focus:outline-none"
aria-label={t("trackingDetails.annotationSettings.offset.tips")}
>
<LuInfo className="size-4" />
<LuInfo className="size-3.5" />
</button>
</PopoverTrigger>
<PopoverContent className="w-80 text-sm">
{t("trackingDetails.annotationSettings.offset.tips")}
<div className="mt-2 flex items-center text-primary-variant">
<Link
to={getLocaleDocUrl(
"troubleshooting/dummy-camera#annotation-offset",
)}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</PopoverContent>
</Popover>
</div>
<div className="flex items-center justify-end gap-2">
<Button size="sm" variant="ghost" onClick={reset}>
{t("button.reset", { ns: "common" })}
</Button>
{isAdmin && (
<Button size="sm" onClick={save} disabled={isSaving}>
{isSaving
? t("button.saving", { ns: "common" })
: t("button.save", { ns: "common" })}
</Button>
)}
</div>
</div>
);
}

View File

@ -1,31 +1,23 @@
import { Event } from "@/types/event";
import { FrigateConfig } from "@/types/frigateConfig";
import { zodResolver } from "@hookform/resolvers/zod";
import axios from "axios";
import { useCallback, useState } from "react";
import { useForm } from "react-hook-form";
import { LuExternalLink } from "react-icons/lu";
import { LuExternalLink, LuMinus, LuPlus } from "react-icons/lu";
import { Link } from "react-router-dom";
import { toast } from "sonner";
import useSWR from "swr";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import { z } from "zod";
import { Button } from "@/components/ui/button";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { Input } from "@/components/ui/input";
import { Separator } from "@/components/ui/separator";
import { Slider } from "@/components/ui/slider";
import { Trans, useTranslation } from "react-i18next";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { useIsAdmin } from "@/hooks/use-is-admin";
const OFFSET_MIN = -2500;
const OFFSET_MAX = 2500;
const OFFSET_STEP = 50;
type AnnotationSettingsPaneProps = {
event: Event;
annotationOffset: number;
@ -45,93 +37,69 @@ export function AnnotationSettingsPane({
const [isLoading, setIsLoading] = useState(false);
const formSchema = z.object({
annotationOffset: z.coerce.number().optional().or(z.literal("")),
});
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
mode: "onChange",
defaultValues: {
annotationOffset: annotationOffset,
const handleSliderChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
setAnnotationOffset(values[0]);
},
});
const saveToConfig = useCallback(
async (annotation_offset: number | string) => {
if (!config || !event) {
return;
}
axios
.put(
`config/set?cameras.${event?.camera}.detect.annotation_offset=${annotation_offset}`,
{
requires_restart: 0,
},
)
.then((res) => {
if (res.status === 200) {
toast.success(
t("trackingDetails.annotationSettings.offset.toast.success", {
camera: event?.camera,
}),
{
position: "top-center",
},
);
updateConfig();
} else {
toast.error(
t("toast.save.error.title", {
errorMessage: res.statusText,
ns: "common",
}),
{
position: "top-center",
},
);
}
})
.catch((error) => {
const errorMessage =
error.response?.data?.message ||
error.response?.data?.detail ||
"Unknown error";
toast.error(
t("toast.save.error.title", { errorMessage, ns: "common" }),
{
position: "top-center",
},
);
})
.finally(() => {
setIsLoading(false);
});
},
[updateConfig, config, event, t],
[setAnnotationOffset],
);
function onSubmit(values: z.infer<typeof formSchema>) {
if (!values || values.annotationOffset == null || !config) {
return;
}
const stepOffset = useCallback(
(delta: number) => {
setAnnotationOffset((prev) => {
const next = prev + delta;
return Math.max(OFFSET_MIN, Math.min(OFFSET_MAX, next));
});
},
[setAnnotationOffset],
);
const reset = useCallback(() => {
setAnnotationOffset(0);
}, [setAnnotationOffset]);
const saveToConfig = useCallback(async () => {
if (!config || !event) return;
setIsLoading(true);
saveToConfig(values.annotationOffset);
}
function onApply(values: z.infer<typeof formSchema>) {
if (
!values ||
values.annotationOffset === null ||
values.annotationOffset === "" ||
!config
) {
return;
try {
const res = await axios.put(
`config/set?cameras.${event.camera}.detect.annotation_offset=${annotationOffset}`,
{ requires_restart: 0 },
);
if (res.status === 200) {
toast.success(
t("trackingDetails.annotationSettings.offset.toast.success", {
camera: event.camera,
}),
{ position: "top-center" },
);
updateConfig();
} else {
toast.error(
t("toast.save.error.title", {
errorMessage: res.statusText,
ns: "common",
}),
{ position: "top-center" },
);
}
} catch (error: unknown) {
const err = error as {
response?: { data?: { message?: string; detail?: string } };
};
const errorMessage =
err?.response?.data?.message ||
err?.response?.data?.detail ||
"Unknown error";
toast.error(t("toast.save.error.title", { errorMessage, ns: "common" }), {
position: "top-center",
});
} finally {
setIsLoading(false);
}
setAnnotationOffset(values.annotationOffset ?? 0);
}
}, [annotationOffset, config, event, updateConfig, t]);
return (
<div className="p-4">
@ -140,91 +108,100 @@ export function AnnotationSettingsPane({
</div>
<Separator className="mb-4 flex bg-secondary" />
<Form {...form}>
<form
onSubmit={form.handleSubmit(onSubmit)}
className="flex flex-1 flex-col space-y-3"
>
<FormField
control={form.control}
name="annotationOffset"
render={({ field }) => (
<>
<FormItem className="flex flex-row items-start justify-between space-x-2">
<div className="flex flex-col gap-1">
<FormLabel>
{t("trackingDetails.annotationSettings.offset.label")}
</FormLabel>
<FormDescription>
<Trans ns="views/explore">
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
<FormMessage />
</FormDescription>
</div>
<div className="flex flex-col gap-3">
<div className="min-w-24">
<FormControl>
<Input
className="text-md w-full border border-input bg-background p-2 text-center hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
placeholder="0"
{...field}
/>
</FormControl>
</div>
</div>
</FormItem>
<div className="mt-1 text-sm text-secondary-foreground">
{t("trackingDetails.annotationSettings.offset.tips")}
<div className="mt-2 flex items-center text-primary-variant">
<Link
to={getLocaleDocUrl("configuration/reference")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</>
)}
/>
<div className="flex flex-1 flex-col justify-end">
<div className="flex flex-row gap-2 pt-5">
<Button
className="flex flex-1"
variant="default"
aria-label={t("button.apply", { ns: "common" })}
type="button"
onClick={form.handleSubmit(onApply)}
>
{t("button.apply", { ns: "common" })}
</Button>
{isAdmin && (
<Button
variant="select"
aria-label={t("button.save", { ns: "common" })}
disabled={isLoading}
className="flex flex-1"
type="submit"
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>{t("button.saving", { ns: "common" })}</span>
</div>
) : (
t("button.save", { ns: "common" })
)}
</Button>
)}
</div>
<div className="flex flex-col gap-4">
<div className="flex flex-col gap-1">
<div className="text-sm font-medium">
{t("trackingDetails.annotationSettings.offset.label")}
</div>
</form>
</Form>
<div className="text-sm text-muted-foreground">
<Trans ns="views/explore">
trackingDetails.annotationSettings.offset.millisecondsToOffset
</Trans>
</div>
</div>
<div className="flex items-center gap-3">
<Button
type="button"
variant="outline"
size="icon"
className="size-8 shrink-0"
aria-label="-50ms"
onClick={() => stepOffset(-OFFSET_STEP)}
disabled={annotationOffset <= OFFSET_MIN}
>
<LuMinus className="size-4" />
</Button>
<Slider
value={[annotationOffset]}
min={OFFSET_MIN}
max={OFFSET_MAX}
step={OFFSET_STEP}
onValueChange={handleSliderChange}
className="flex-1"
/>
<Button
type="button"
variant="outline"
size="icon"
className="size-8 shrink-0"
aria-label="+50ms"
onClick={() => stepOffset(OFFSET_STEP)}
disabled={annotationOffset >= OFFSET_MAX}
>
<LuPlus className="size-4" />
</Button>
</div>
<div className="flex items-center justify-between">
<span className="font-mono text-sm tabular-nums text-primary-variant">
{annotationOffset > 0 ? "+" : ""}
{annotationOffset}ms
</span>
<Button type="button" variant="ghost" size="sm" onClick={reset}>
{t("button.reset", { ns: "common" })}
</Button>
</div>
<div className="text-sm text-secondary-foreground">
{t("trackingDetails.annotationSettings.offset.tips")}
<div className="mt-2 flex items-center text-primary-variant">
<Link
to={getLocaleDocUrl(
"troubleshooting/dummy-camera#annotation-offset",
)}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
{isAdmin && (
<>
<Separator className="bg-secondary" />
<Button
variant="select"
aria-label={t("button.save", { ns: "common" })}
disabled={isLoading}
onClick={saveToConfig}
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>{t("button.saving", { ns: "common" })}</span>
</div>
) : (
t("button.save", { ns: "common" })
)}
</Button>
</>
)}
</div>
</div>
);
}

View File

@ -17,7 +17,7 @@ type ObjectPathProps = {
color?: number[];
width?: number;
pointRadius?: number;
imgRef: React.RefObject<HTMLImageElement>;
imgRef: React.RefObject<HTMLImageElement | null>;
onPointClick?: (index: number) => void;
visible?: boolean;
};

View File

@ -323,6 +323,7 @@ function DialogContentComponent({
<TrackingDetails
className={cn(isDesktop ? "size-full" : "flex flex-col gap-4")}
event={search as unknown as Event}
isAnnotationSettingsOpen={isPopoverOpen}
tabs={
isDesktop ? (
<TabsWithActions
@ -495,6 +496,15 @@ export default function SearchDetailDialog({
}
}, [search]);
useEffect(() => {
if (!isDesktop || !onPrevious || !onNext) {
setShowNavigationButtons(false);
return;
}
setShowNavigationButtons(isOpen);
}, [isOpen, onNext, onPrevious]);
// show/hide annotation settings is handled inside TabsWithActions
const searchTabs = useMemo(() => {

View File

@ -47,12 +47,14 @@ type TrackingDetailsProps = {
event: Event;
fullscreen?: boolean;
tabs?: React.ReactNode;
isAnnotationSettingsOpen?: boolean;
};
export function TrackingDetails({
className,
event,
tabs,
isAnnotationSettingsOpen = false,
}: TrackingDetailsProps) {
const videoRef = useRef<HTMLVideoElement | null>(null);
const { t } = useTranslation(["views/explore"]);
@ -69,6 +71,14 @@ export function TrackingDetails({
// user (eg, clicking a lifecycle row). When null we display `currentTime`.
const [manualOverride, setManualOverride] = useState<number | null>(null);
// Capture the annotation offset used for building the video source URL.
// This only updates when the event changes, NOT on every slider drag,
// so the HLS player doesn't reload while the user is adjusting the offset.
const sourceOffsetRef = useRef(annotationOffset);
useEffect(() => {
sourceOffsetRef.current = annotationOffset;
}, [event.id]); // eslint-disable-line react-hooks/exhaustive-deps
// event.start_time is detect time, convert to record, then subtract padding
const [currentTime, setCurrentTime] = useState(
(event.start_time ?? 0) + annotationOffset / 1000 - REVIEW_PADDING,
@ -90,14 +100,19 @@ export function TrackingDetails({
const { data: config } = useSWR<FrigateConfig>("config");
// Fetch recording segments for the event's time range to handle motion-only gaps
// Fetch recording segments for the event's time range to handle motion-only gaps.
// Use the source offset (stable per event) so recordings don't refetch on every
// slider drag while adjusting annotation offset.
const eventStartRecord = useMemo(
() => (event.start_time ?? 0) + annotationOffset / 1000,
[event.start_time, annotationOffset],
() => (event.start_time ?? 0) + sourceOffsetRef.current / 1000,
// eslint-disable-next-line react-hooks/exhaustive-deps
[event.start_time, event.id],
);
const eventEndRecord = useMemo(
() => (event.end_time ?? Date.now() / 1000) + annotationOffset / 1000,
[event.end_time, annotationOffset],
() =>
(event.end_time ?? Date.now() / 1000) + sourceOffsetRef.current / 1000,
// eslint-disable-next-line react-hooks/exhaustive-deps
[event.end_time, event.id],
);
const { data: recordings } = useSWR<Recording[]>(
@ -298,6 +313,53 @@ export function TrackingDetails({
setSelectedObjectIds([event.id]);
}, [event.id, setSelectedObjectIds]);
// When the annotation settings popover is open, pin the video to a specific
// lifecycle event (detect-stream timestamp). As the user drags the offset
// slider, the video re-seeks to show the recording frame at
// pinnedTimestamp + newOffset, while the bounding box stays fixed at the
// pinned detect timestamp. This lets the user visually align the box to
// the car in the video.
const pinnedDetectTimestampRef = useRef<number | null>(null);
const wasAnnotationOpenRef = useRef(false);
// On popover open: pause, pin first lifecycle item, and seek.
useEffect(() => {
if (isAnnotationSettingsOpen && !wasAnnotationOpenRef.current) {
if (videoRef.current && displaySource === "video") {
videoRef.current.pause();
}
if (eventSequence && eventSequence.length > 0) {
pinnedDetectTimestampRef.current = eventSequence[0].timestamp;
}
}
if (!isAnnotationSettingsOpen) {
pinnedDetectTimestampRef.current = null;
}
wasAnnotationOpenRef.current = isAnnotationSettingsOpen;
}, [isAnnotationSettingsOpen, displaySource, eventSequence]);
// When the pinned timestamp or offset changes, re-seek the video and
// explicitly update currentTime so the overlay shows the pinned event's box.
useEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!isAnnotationSettingsOpen || pinned == null) return;
if (!videoRef.current || displaySource !== "video") return;
const targetTimeRecord = pinned + annotationOffset / 1000;
const relativeTime = timestampToVideoTime(targetTimeRecord);
videoRef.current.currentTime = relativeTime;
// Explicitly update currentTime state so the overlay's effectiveCurrentTime
// resolves back to the pinned detect timestamp:
// effectiveCurrentTime = targetTimeRecord - annotationOffset/1000 = pinned
setCurrentTime(targetTimeRecord);
}, [
isAnnotationSettingsOpen,
annotationOffset,
displaySource,
timestampToVideoTime,
]);
const handleLifecycleClick = useCallback(
(item: TrackingDetailsSequence) => {
if (!videoRef.current && !imgRef.current) return;
@ -453,19 +515,23 @@ export function TrackingDetails({
const videoSource = useMemo(() => {
// event.start_time and event.end_time are in DETECT stream time
// Convert to record stream time, then create video clip with padding
const eventStartRecord = event.start_time + annotationOffset / 1000;
const eventEndRecord =
(event.end_time ?? Date.now() / 1000) + annotationOffset / 1000;
const startTime = eventStartRecord - REVIEW_PADDING;
const endTime = eventEndRecord + REVIEW_PADDING;
// Convert to record stream time, then create video clip with padding.
// Use sourceOffsetRef (stable per event) so the HLS player doesn't
// reload while the user is dragging the annotation offset slider.
const sourceOffset = sourceOffsetRef.current;
const eventStartRec = event.start_time + sourceOffset / 1000;
const eventEndRec =
(event.end_time ?? Date.now() / 1000) + sourceOffset / 1000;
const startTime = eventStartRec - REVIEW_PADDING;
const endTime = eventEndRec + REVIEW_PADDING;
const playlist = `${baseUrl}vod/clip/${event.camera}/start/${startTime}/end/${endTime}/index.m3u8`;
return {
playlist,
startPosition: 0,
};
}, [event, annotationOffset]);
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [event]);
// Determine camera aspect ratio category
const cameraAspect = useMemo(() => {

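A worked example of the pinned-seek arithmetic in the effect above (timestamps and offset invented for illustration):

// Suppose the pinned lifecycle event is at detect time 1700000100.0 s and
// the user drags the offset slider to +500 ms:
const pinned = 1700000100.0; // detect-stream timestamp (s)
const annotationOffset = 500; // ms
const targetTimeRecord = pinned + annotationOffset / 1000; // 1700000100.5
// The video seeks to the recording frame at 1700000100.5, while the overlay
// resolves effectiveCurrentTime = targetTimeRecord - annotationOffset / 1000
// = 1700000100.0, so the bounding box stays fixed at the pinned detect
// timestamp while the user lines the video frame up underneath it.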
View File

@ -22,6 +22,7 @@ import {
} from "@/components/ui/sheet";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import type { JSX } from "react";
import { useRef } from "react";
type PlatformAwareDialogProps = {

View File

@ -1,5 +1,6 @@
import {
MutableRefObject,
ReactNode,
useCallback,
useEffect,
useRef,
@ -57,6 +58,7 @@ type HlsVideoPlayerProps = {
isDetailMode?: boolean;
camera?: string;
currentTimeOverride?: number;
transformedOverlay?: ReactNode;
};
export default function HlsVideoPlayer({
@ -81,6 +83,7 @@ export default function HlsVideoPlayer({
isDetailMode = false,
camera,
currentTimeOverride,
transformedOverlay,
}: HlsVideoPlayerProps) {
const { t } = useTranslation("components/player");
const { data: config } = useSWR<FrigateConfig>("config");
@ -91,7 +94,7 @@ export default function HlsVideoPlayer({
// playback
const hlsRef = useRef<Hls>();
const hlsRef = useRef<Hls>(undefined);
const [useHlsCompat, setUseHlsCompat] = useState(false);
const [loadedMetadata, setLoadedMetadata] = useState(false);
const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();
@ -350,157 +353,162 @@ export default function HlsVideoPlayer({
height: isMobile ? "100%" : undefined,
}}
>
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div
className={cn(
"absolute inset-0 z-50",
isDesktop
? "size-full"
: "mx-auto flex items-center justify-center portrait:max-h-[50dvh]",
)}
style={{
aspectRatio: `${videoDimensions.width} / ${videoDimensions.height}`,
}}
>
<ObjectTrackOverlay
key={`overlay-${currentTime}`}
camera={camera}
showBoundingBoxes={!isPlaying}
currentTime={currentTime}
videoWidth={videoDimensions.width}
videoHeight={videoDimensions.height}
className="absolute inset-0 z-10"
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
<div className="relative size-full">
{transformedOverlay}
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div
className={cn(
"absolute inset-0 z-50",
isDesktop
? "size-full"
: "mx-auto flex items-center justify-center portrait:max-h-[50dvh]",
)}
style={{
aspectRatio: `${videoDimensions.width} / ${videoDimensions.height}`,
}}
/>
</div>
)}
<video
ref={videoRef}
className={`size-full rounded-lg bg-black md:rounded-2xl ${loadedMetadata ? "" : "invisible"} cursor-pointer`}
preload="auto"
autoPlay
controls={!frigateControls}
playsInline
muted={muted}
onClick={
isDesktop
? () => {
if (zoomScale == 1.0) onPlayPause(!isPlaying);
}
: undefined
}
onVolumeChange={() => {
setVolume(videoRef.current?.volume ?? 1.0, true);
if (!frigateControls) {
setMuted(videoRef.current?.muted);
}
}}
onPlay={() => {
setIsPlaying(true);
if (isMobile) {
setControls(true);
setMobileCtrlTimeout(setTimeout(() => setControls(false), 4000));
}
}}
onPlaying={onPlaying}
onPause={() => {
setIsPlaying(false);
clearTimeout(bufferTimeout);
if (isMobile && mobileCtrlTimeout) {
clearTimeout(mobileCtrlTimeout);
}
}}
onWaiting={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}
setBufferTimeout(
setTimeout(() => {
if (
document.visibilityState === "visible" &&
videoRef.current
) {
onError("stalled");
>
<ObjectTrackOverlay
key={`overlay-${currentTime}`}
camera={camera}
showBoundingBoxes={!isPlaying}
currentTime={currentTime}
videoWidth={videoDimensions.width}
videoHeight={videoDimensions.height}
className="absolute inset-0 z-10"
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
}}
/>
</div>
)}
<video
ref={videoRef}
className={`size-full rounded-lg bg-black md:rounded-2xl ${loadedMetadata ? "" : "invisible"} cursor-pointer`}
preload="auto"
autoPlay
controls={!frigateControls}
playsInline
muted={muted}
onClick={
isDesktop
? () => {
if (zoomScale == 1.0) onPlayPause(!isPlaying);
}
}, 3000),
);
: undefined
}
}}
onProgress={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
onVolumeChange={() => {
setVolume(videoRef.current?.volume ?? 1.0, true);
if (!frigateControls) {
setMuted(videoRef.current?.muted);
}
}}
onPlay={() => {
setIsPlaying(true);
if (isMobile) {
setControls(true);
setMobileCtrlTimeout(
setTimeout(() => setControls(false), 4000),
);
}
}}
onPlaying={onPlaying}
onPause={() => {
setIsPlaying(false);
clearTimeout(bufferTimeout);
if (isMobile && mobileCtrlTimeout) {
clearTimeout(mobileCtrlTimeout);
}
}}
onWaiting={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}
setBufferTimeout(
setTimeout(() => {
if (
document.visibilityState === "visible" &&
videoRef.current
) {
onError("stalled");
}
}, 3000),
);
}
}}
onProgress={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}
if (bufferTimeout) {
clearTimeout(bufferTimeout);
setBufferTimeout(undefined);
}
}
}}
onTimeUpdate={() => {
if (!onTimeUpdate) {
return;
}
if (bufferTimeout) {
clearTimeout(bufferTimeout);
setBufferTimeout(undefined);
const frameTime = getVideoTime();
if (frameTime) {
onTimeUpdate(frameTime);
}
}
}}
onTimeUpdate={() => {
if (!onTimeUpdate) {
return;
}
}}
onLoadedData={() => {
onPlayerLoaded?.();
handleLoadedMetadata();
const frameTime = getVideoTime();
if (videoRef.current) {
if (playbackRate) {
videoRef.current.playbackRate = playbackRate;
}
if (frameTime) {
onTimeUpdate(frameTime);
}
}}
onLoadedData={() => {
onPlayerLoaded?.();
handleLoadedMetadata();
if (videoRef.current) {
if (playbackRate) {
videoRef.current.playbackRate = playbackRate;
if (volume) {
videoRef.current.volume = volume;
}
}
if (volume) {
videoRef.current.volume = volume;
}}
onEnded={() => {
if (onClipEnded) {
onClipEnded(getVideoTime() ?? 0);
}
}
}}
onEnded={() => {
if (onClipEnded) {
onClipEnded(getVideoTime() ?? 0);
}
}}
onError={(e) => {
if (
!hlsRef.current &&
// @ts-expect-error code does exist
unsupportedErrorCodes.includes(e.target.error.code) &&
videoRef.current
) {
setLoadedMetadata(false);
setUseHlsCompat(true);
} else {
toast.error(
}}
onError={(e) => {
if (
!hlsRef.current &&
// @ts-expect-error code does exist
`Failed to play recordings (error ${e.target.error.code}): ${e.target.error.message}`,
{
position: "top-center",
},
);
}
}}
/>
unsupportedErrorCodes.includes(e.target.error.code) &&
videoRef.current
) {
setLoadedMetadata(false);
setUseHlsCompat(true);
} else {
toast.error(
// @ts-expect-error code does exist
`Failed to play recordings (error ${e.target.error.code}): ${e.target.error.message}`,
{
position: "top-center",
},
);
}
}}
/>
</div>
</TransformComponent>
</TransformWrapper>
);

View File

@ -51,10 +51,10 @@ export default function WebRtcPlayer({
// camera states
const pcRef = useRef<RTCPeerConnection | undefined>();
const pcRef = useRef<RTCPeerConnection | undefined>(undefined);
const videoRef = useRef<HTMLVideoElement | null>(null);
const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();
const videoLoadTimeoutRef = useRef<NodeJS.Timeout>();
const videoLoadTimeoutRef = useRef<NodeJS.Timeout>(undefined);
const PeerConnection = useCallback(
async (media: string) => {

View File

@ -1,4 +1,11 @@
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import {
ReactNode,
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import { useApiHost } from "@/api";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
@ -40,6 +47,7 @@ type DynamicVideoPlayerProps = {
setFullResolution: React.Dispatch<React.SetStateAction<VideoResolutionType>>;
toggleFullscreen: () => void;
containerRef?: React.MutableRefObject<HTMLDivElement | null>;
transformedOverlay?: ReactNode;
};
export default function DynamicVideoPlayer({
className,
@ -58,6 +66,7 @@ export default function DynamicVideoPlayer({
setFullResolution,
toggleFullscreen,
containerRef,
transformedOverlay,
}: DynamicVideoPlayerProps) {
const { t } = useTranslation(["components/player"]);
const apiHost = useApiHost();
@ -312,6 +321,7 @@ export default function DynamicVideoPlayer({
isDetailMode={isDetailMode}
camera={contextCamera || camera}
currentTimeOverride={currentTime}
transformedOverlay={transformedOverlay}
/>
)}
<PreviewPlayer

View File

@ -10,7 +10,7 @@ import { snapPointToLines } from "@/utils/canvasUtil";
import { usePolygonStates } from "@/hooks/use-polygon-states";
type PolygonCanvasProps = {
containerRef: RefObject<HTMLDivElement>;
containerRef: RefObject<HTMLDivElement | null>;
camera: string;
width: number;
height: number;

View File

@ -18,7 +18,7 @@ import Konva from "konva";
import { Vector2d } from "konva/lib/types";
type PolygonDrawerProps = {
stageRef: RefObject<Konva.Stage>;
stageRef: RefObject<Konva.Stage | null>;
points: number[][];
distances: number[];
isActive: boolean;

View File

@ -1,4 +1,4 @@
import { useEffect, useMemo, useRef, useState } from "react";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { TrackingDetailsSequence } from "@/types/timeline";
import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
import { useDetailStream } from "@/context/detail-stream-context";
@ -33,6 +33,7 @@ import { MdAutoAwesome } from "react-icons/md";
import { isPWA } from "@/utils/isPWA";
import { isInIframe } from "@/utils/isIFrame";
import { GenAISummaryDialog } from "../overlay/chip/GenAISummaryChip";
import { Separator } from "../ui/separator";
type DetailStreamProps = {
reviewItems?: ReviewSegment[];
@ -49,7 +50,8 @@ export default function DetailStream({
}: DetailStreamProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const { t } = useTranslation("views/events");
const { annotationOffset } = useDetailStream();
const { annotationOffset, selectedObjectIds, setSelectedObjectIds } =
useDetailStream();
const scrollRef = useRef<HTMLDivElement>(null);
const [activeReviewId, setActiveReviewId] = useState<string | undefined>(
@ -67,9 +69,69 @@ export default function DetailStream({
true,
);
const onSeekCheckPlaying = (timestamp: number) => {
onSeek(timestamp, isPlaying);
};
// When the settings panel opens, pin to the nearest review with detections
// so the user can visually align the bounding box using the offset slider
const pinnedDetectTimestampRef = useRef<number | null>(null);
const wasControlsExpandedRef = useRef(false);
const selectedBeforeExpandRef = useRef<string[]>([]);
const onSeekCheckPlaying = useCallback(
(timestamp: number) => {
onSeek(timestamp, isPlaying);
},
[onSeek, isPlaying],
);
useEffect(() => {
if (controlsExpanded && !wasControlsExpandedRef.current) {
selectedBeforeExpandRef.current = selectedObjectIds;
const items = (reviewItems ?? []).filter(
(r) => r.data?.detections?.length > 0,
);
if (items.length > 0) {
// Pick the nearest review to current effective time
let nearest = items[0];
let minDiff = Math.abs(effectiveTime - nearest.start_time);
for (const r of items) {
const diff = Math.abs(effectiveTime - r.start_time);
if (diff < minDiff) {
nearest = r;
minDiff = diff;
}
}
const nearestId = `review-${nearest.id ?? nearest.start_time ?? Math.floor(nearest.start_time ?? 0)}`;
setActiveReviewId(nearestId);
const detectionId = nearest.data.detections[0];
setSelectedObjectIds([detectionId]);
// Use the detection's actual start timestamp (parsed from its ID)
// rather than review.start_time, which can be >10ms away from any
// lifecycle event and would fail the bounding-box TOLERANCE check.
const detectTimestamp = parseFloat(detectionId);
pinnedDetectTimestampRef.current = detectTimestamp;
const recordTime = detectTimestamp + annotationOffset / 1000;
onSeek(recordTime, false);
}
}
if (!controlsExpanded && wasControlsExpandedRef.current) {
pinnedDetectTimestampRef.current = null;
setSelectedObjectIds(selectedBeforeExpandRef.current);
}
wasControlsExpandedRef.current = controlsExpanded;
// Only trigger on expand/collapse transition
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [controlsExpanded]);
// Re-seek on annotation offset change while settings panel is open
useEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!controlsExpanded || pinned == null) return;
const recordTime = pinned + annotationOffset / 1000;
onSeek(recordTime, false);
}, [controlsExpanded, annotationOffset, onSeek]);
// Ensure we initialize the active review when reviewItems first arrive.
// This helps when the component mounts while the video is already
@ -214,6 +276,12 @@ export default function DetailStream({
/>
<div className="relative flex h-full flex-col">
{controlsExpanded && (
<div
className="absolute inset-0 z-20 cursor-pointer bg-black/50"
onClick={() => setControlsExpanded(false)}
/>
)}
<div
ref={scrollRef}
className="scrollbar-container flex-1 overflow-y-auto overflow-x-hidden pb-14"
@ -267,8 +335,9 @@ export default function DetailStream({
)}
</button>
{controlsExpanded && (
<div className="space-y-3 px-3 pb-3">
<div className="space-y-4 px-3 pb-5 pt-2">
<AnnotationOffsetSlider />
<Separator />
<div className="flex flex-col gap-1">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">

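The parseFloat call above works because tracked-object IDs begin with their detect timestamp; a short sketch (the example ID is hypothetical, with its format assumed from the comment in the diff):

// parseFloat reads the leading numeric prefix and stops at the first
// non-numeric character, recovering the detect timestamp from the ID.
const detectionId = "1700000123.456789-abcd12";
const detectTimestamp = parseFloat(detectionId); // => 1700000123.456789
const annotationOffset = 250; // ms
const recordTime = detectTimestamp + annotationOffset / 1000; // seek target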
View File

@ -37,8 +37,8 @@ export type EventReviewTimelineProps = {
events: ReviewSegment[];
visibleTimestamps?: number[];
severityType: ReviewSeverity;
timelineRef?: RefObject<HTMLDivElement>;
contentRef: RefObject<HTMLDivElement>;
timelineRef?: RefObject<HTMLDivElement | null>;
contentRef: RefObject<HTMLDivElement | null>;
onHandlebarDraggingChange?: (isDragging: boolean) => void;
isZooming: boolean;
zoomDirection: TimelineZoomDirection;

View File

@ -28,7 +28,7 @@ type EventSegmentProps = {
minimapStartTime?: number;
minimapEndTime?: number;
severityType: ReviewSeverity;
contentRef: RefObject<HTMLDivElement>;
contentRef: RefObject<HTMLDivElement | null>;
setHandlebarTime?: React.Dispatch<React.SetStateAction<number>>;
scrollToSegment: (segmentTime: number, ifNeeded?: boolean) => void;
dense: boolean;

View File

@ -25,6 +25,7 @@ export type MotionReviewTimelineProps = {
timestampSpread: number;
timelineStart: number;
timelineEnd: number;
scrollToTime?: number;
showHandlebar?: boolean;
handlebarTime?: number;
setHandlebarTime?: React.Dispatch<React.SetStateAction<number>>;
@ -41,8 +42,8 @@ export type MotionReviewTimelineProps = {
events: ReviewSegment[];
motion_events: MotionData[];
noRecordingRanges?: RecordingSegment[];
contentRef: RefObject<HTMLDivElement>;
timelineRef?: RefObject<HTMLDivElement>;
contentRef: RefObject<HTMLDivElement | null>;
timelineRef?: RefObject<HTMLDivElement | null>;
onHandlebarDraggingChange?: (isDragging: boolean) => void;
dense?: boolean;
isZooming: boolean;
@ -58,6 +59,7 @@ export function MotionReviewTimeline({
timestampSpread,
timelineStart,
timelineEnd,
scrollToTime,
showHandlebar = false,
handlebarTime,
setHandlebarTime,
@ -176,6 +178,15 @@ export function MotionReviewTimeline({
[],
);
// allow callers to request the timeline center on a specific time
useEffect(() => {
if (scrollToTime == undefined) return;
setTimeout(() => {
scrollToSegment(alignStartDateToTimeline(scrollToTime), true, "auto");
}, 0);
}, [scrollToTime, scrollToSegment, alignStartDateToTimeline]);
// keep handlebar centered when zooming
useEffect(() => {
setTimeout(() => {

View File

@ -20,8 +20,8 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { TooltipPortal } from "@radix-ui/react-tooltip";
export type ReviewTimelineProps = {
timelineRef: RefObject<HTMLDivElement>;
contentRef: RefObject<HTMLDivElement>;
timelineRef: RefObject<HTMLDivElement | null>;
contentRef: RefObject<HTMLDivElement | null>;
segmentDuration: number;
timelineDuration: number;
timelineStartAligned: number;
@ -343,9 +343,12 @@ export function ReviewTimeline({
useEffect(() => {
if (onHandlebarDraggingChange) {
onHandlebarDraggingChange(isDraggingHandlebar);
// Keep existing callback name but treat it as a generic dragging signal.
// This allows consumers (e.g. export-handle timelines) to correctly
// enable preview scrubbing while dragging export handles.
onHandlebarDraggingChange(isDragging);
}
}, [isDraggingHandlebar, onHandlebarDraggingChange]);
}, [isDragging, onHandlebarDraggingChange]);
const isHandlebarInNoRecordingPeriod = useMemo(() => {
if (!getRecordingAvailability || handlebarTime === undefined) return false;

View File

@ -14,7 +14,7 @@ import {
import { useTimelineUtils } from "@/hooks/use-timeline-utils";
export type SummaryTimelineProps = {
reviewTimelineRef: React.RefObject<HTMLDivElement>;
reviewTimelineRef: React.RefObject<HTMLDivElement | null>;
timelineStart: number;
timelineEnd: number;
segmentDuration: number;

View File

@ -10,7 +10,7 @@ import { EventSegment } from "./EventSegment";
import { ReviewSegment, ReviewSeverity } from "@/types/review";
type VirtualizedEventSegmentsProps = {
timelineRef: React.RefObject<HTMLDivElement>;
timelineRef: React.RefObject<HTMLDivElement | null>;
segments: number[];
events: ReviewSegment[];
segmentDuration: number;
@ -19,7 +19,7 @@ type VirtualizedEventSegmentsProps = {
minimapStartTime?: number;
minimapEndTime?: number;
severityType: ReviewSeverity;
contentRef: React.RefObject<HTMLDivElement>;
contentRef: React.RefObject<HTMLDivElement | null>;
setHandlebarTime?: React.Dispatch<React.SetStateAction<number>>;
dense: boolean;
alignStartDateToTimeline: (timestamp: number) => number;

View File

@ -10,7 +10,7 @@ import MotionSegment from "./MotionSegment";
import { ReviewSegment, MotionData } from "@/types/review";
type VirtualizedMotionSegmentsProps = {
timelineRef: React.RefObject<HTMLDivElement>;
timelineRef: React.RefObject<HTMLDivElement | null>;
segments: number[];
events: ReviewSegment[];
motion_events: MotionData[];
@ -19,7 +19,7 @@ type VirtualizedMotionSegmentsProps = {
showMinimap: boolean;
minimapStartTime?: number;
minimapEndTime?: number;
contentRef: React.RefObject<HTMLDivElement>;
contentRef: React.RefObject<HTMLDivElement | null>;
setHandlebarTime?: React.Dispatch<React.SetStateAction<number>>;
dense: boolean;
motionOnly: boolean;

View File

@ -1,3 +1,4 @@
import type { JSX } from "react";
import { useState, useEffect, useRef } from "react";
import { Button } from "./button";
import { Calendar } from "./calendar";
@ -124,8 +125,8 @@ export function DateRangePicker({
);
// Refs to store the values of range and rangeCompare when the date picker is opened
const openedRangeRef = useRef<DateRange | undefined>();
const openedRangeCompareRef = useRef<DateRange | undefined>();
const openedRangeRef = useRef<DateRange | undefined>(undefined);
const openedRangeCompareRef = useRef<DateRange | undefined>(undefined);
const [selectedPreset, setSelectedPreset] = useState<string | undefined>(
undefined,

View File

@ -1,265 +0,0 @@
import * as React from "react";
import useEmblaCarousel, {
type UseEmblaCarouselType,
} from "embla-carousel-react";
import { ArrowLeft, ArrowRight } from "lucide-react";
import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
import { useTranslation } from "react-i18next";
type CarouselApi = UseEmblaCarouselType[1];
type UseCarouselParameters = Parameters<typeof useEmblaCarousel>;
type CarouselOptions = UseCarouselParameters[0];
type CarouselPlugin = UseCarouselParameters[1];
type CarouselProps = {
opts?: CarouselOptions;
plugins?: CarouselPlugin;
orientation?: "horizontal" | "vertical";
setApi?: (api: CarouselApi) => void;
};
type CarouselContextProps = {
carouselRef: ReturnType<typeof useEmblaCarousel>[0];
api: ReturnType<typeof useEmblaCarousel>[1];
scrollPrev: () => void;
scrollNext: () => void;
canScrollPrev: boolean;
canScrollNext: boolean;
} & CarouselProps;
const CarouselContext = React.createContext<CarouselContextProps | null>(null);
function useCarousel() {
const context = React.useContext(CarouselContext);
if (!context) {
throw new Error("useCarousel must be used within a <Carousel />");
}
return context;
}
const Carousel = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement> & CarouselProps
>(
(
{
orientation = "horizontal",
opts,
setApi,
plugins,
className,
children,
...props
},
ref,
) => {
const [carouselRef, api] = useEmblaCarousel(
{
...opts,
axis: orientation === "horizontal" ? "x" : "y",
},
plugins,
);
const [canScrollPrev, setCanScrollPrev] = React.useState(false);
const [canScrollNext, setCanScrollNext] = React.useState(false);
const onSelect = React.useCallback((api: CarouselApi) => {
if (!api) {
return;
}
setCanScrollPrev(api.canScrollPrev());
setCanScrollNext(api.canScrollNext());
}, []);
const scrollPrev = React.useCallback(() => {
api?.scrollPrev();
}, [api]);
const scrollNext = React.useCallback(() => {
api?.scrollNext();
}, [api]);
const handleKeyDown = React.useCallback(
(event: React.KeyboardEvent<HTMLDivElement>) => {
if (event.key === "ArrowLeft") {
event.preventDefault();
scrollPrev();
} else if (event.key === "ArrowRight") {
event.preventDefault();
scrollNext();
}
},
[scrollPrev, scrollNext],
);
React.useEffect(() => {
if (!api || !setApi) {
return;
}
setApi(api);
}, [api, setApi]);
React.useEffect(() => {
if (!api) {
return;
}
onSelect(api);
api.on("reInit", onSelect);
api.on("select", onSelect);
return () => {
api?.off("select", onSelect);
};
}, [api, onSelect]);
return (
<CarouselContext.Provider
value={{
carouselRef,
api: api,
opts,
orientation:
orientation || (opts?.axis === "y" ? "vertical" : "horizontal"),
scrollPrev,
scrollNext,
canScrollPrev,
canScrollNext,
}}
>
<div
ref={ref}
onKeyDownCapture={handleKeyDown}
className={cn("relative", className)}
role="region"
aria-roledescription="carousel"
{...props}
>
{children}
</div>
</CarouselContext.Provider>
);
},
);
Carousel.displayName = "Carousel";
const CarouselContent = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => {
const { carouselRef, orientation } = useCarousel();
return (
<div ref={carouselRef} className="overflow-hidden">
<div
ref={ref}
className={cn(
"flex",
orientation === "horizontal" ? "-ml-4" : "-mt-4 flex-col",
className,
)}
{...props}
/>
</div>
);
});
CarouselContent.displayName = "CarouselContent";
const CarouselItem = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => {
const { orientation } = useCarousel();
return (
<div
ref={ref}
role="group"
aria-roledescription="slide"
className={cn(
"min-w-0 shrink-0 grow-0 basis-full",
orientation === "horizontal" ? "pl-4" : "pt-4",
className,
)}
{...props}
/>
);
});
CarouselItem.displayName = "CarouselItem";
const CarouselPrevious = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<typeof Button>
>(({ className, variant = "outline", size = "icon", ...props }, ref) => {
const { t } = useTranslation(["views/explore"]);
const { orientation, scrollPrev, canScrollPrev } = useCarousel();
return (
<Button
ref={ref}
variant={variant}
size={size}
className={cn(
"absolute h-8 w-8 rounded-full",
orientation === "horizontal"
? "-left-12 top-1/2 -translate-y-1/2"
: "-top-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
aria-label={t("trackingDetails.carousel.previous")}
disabled={!canScrollPrev}
onClick={scrollPrev}
{...props}
>
<ArrowLeft className="h-4 w-4" />
<span className="sr-only">{t("trackingDetails.carousel.previous")}</span>
</Button>
);
});
CarouselPrevious.displayName = "CarouselPrevious";
const CarouselNext = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<typeof Button>
>(({ className, variant = "outline", size = "icon", ...props }, ref) => {
const { t } = useTranslation(["views/explore"]);
const { orientation, scrollNext, canScrollNext } = useCarousel();
return (
<Button
ref={ref}
variant={variant}
size={size}
className={cn(
"absolute h-8 w-8 rounded-full",
orientation === "horizontal"
? "-right-12 top-1/2 -translate-y-1/2"
: "-bottom-12 left-1/2 -translate-x-1/2 rotate-90",
className,
)}
aria-label={t("trackingDetails.carousel.next")}
disabled={!canScrollNext}
onClick={scrollNext}
{...props}
>
<ArrowRight className="h-4 w-4" />
<span className="sr-only">{t("trackingDetails.carousel.next")}</span>
</Button>
);
});
CarouselNext.displayName = "CarouselNext";
export {
type CarouselApi,
Carousel,
CarouselContent,
CarouselItem,
CarouselPrevious,
CarouselNext,
};

View File

@@ -33,9 +33,9 @@ const FormField = <
...props
}: ControllerProps<TFieldValues, TName>) => {
return (
<FormFieldContext.Provider value={{ name: props.name }}>
<FormFieldContext value={{ name: props.name }}>
<Controller {...props} />
</FormFieldContext.Provider>
</FormFieldContext>
);
};
@@ -77,9 +77,9 @@ const FormItem = React.forwardRef<
const id = React.useId();
return (
<FormItemContext.Provider value={{ id }}>
<FormItemContext value={{ id }}>
<div ref={ref} className={cn("space-y-1", className)} {...props} />
</FormItemContext.Provider>
</FormItemContext>
);
});
FormItem.displayName = "FormItem";
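These rewrites rely on React 19 allowing a Context to be rendered directly as its own provider, so <Context value={...}> replaces <Context.Provider value={...}>; the same pattern recurs in the sidebar, toggle group, and provider files below. A self-contained sketch (illustrative names, not the shadcn form internals):
import { createContext, useContext } from "react";
import type { ReactNode } from "react";
type FieldValue = { name: string };
const FieldContext = createContext<FieldValue | null>(null);
// React 19: the Context itself acts as the provider element.
function FieldScope({ name, children }: { name: string; children: ReactNode }) {
  return <FieldContext value={{ name }}>{children}</FieldContext>;
}
function FieldName() {
  const field = useContext(FieldContext);
  return <span>{field?.name ?? ""}</span>;
}
export function Example() {
  return (
    <FieldScope name="camera">
      <FieldName />
    </FieldScope>
  );
}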

View File

@@ -1,10 +1,10 @@
import { ForwardedRef, forwardRef } from "react";
import { IconType } from "react-icons";
interface IconWrapperProps {
interface IconWrapperProps extends React.HTMLAttributes<HTMLDivElement> {
icon: IconType;
className?: string;
[key: string]: any;
disabled?: boolean;
}
const IconWrapper = forwardRef(
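Extending React's HTMLAttributes<HTMLDivElement> replaces the untyped [key: string]: any index signature, so passed-through DOM props (onClick, aria-*, className) are type-checked while remaining spreadable. A sketch of how the component plausibly reads under the new props type (the real body may differ):
import { forwardRef } from "react";
import type { HTMLAttributes } from "react";
import type { IconType } from "react-icons";
interface IconWrapperProps extends HTMLAttributes<HTMLDivElement> {
  icon: IconType;
  disabled?: boolean;
}
const IconWrapper = forwardRef<HTMLDivElement, IconWrapperProps>(
  ({ icon: Icon, disabled, ...rest }, ref) => (
    // rest is now a checked set of div attributes, not an any-bag.
    <div ref={ref} aria-disabled={disabled} {...rest}>
      <Icon />
    </div>
  ),
);
IconWrapper.displayName = "IconWrapper";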

View File

@@ -0,0 +1,26 @@
import * as React from "react"
import * as ProgressPrimitive from "@radix-ui/react-progress"
import { cn } from "@/lib/utils"
const Progress = React.forwardRef<
React.ElementRef<typeof ProgressPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof ProgressPrimitive.Root>
>(({ className, value, ...props }, ref) => (
<ProgressPrimitive.Root
ref={ref}
className={cn(
"relative h-4 w-full overflow-hidden rounded-full bg-secondary",
className
)}
{...props}
>
<ProgressPrimitive.Indicator
className="h-full w-full flex-1 bg-primary transition-all"
style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
/>
</ProgressPrimitive.Root>
))
Progress.displayName = ProgressPrimitive.Root.displayName
export { Progress }
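A quick usage sketch for the new Progress primitive; value is a 0-100 percentage, and the import path is assumed to follow the repo's ui component convention:
import { Progress } from "@/components/ui/progress"; // path assumed
// 42 renders a bar 42% filled: the indicator is shifted left by the
// remaining 58%.
export function UploadStatus({ percent }: { percent: number }) {
  return <Progress value={percent} className="h-2" />;
}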

View File

@@ -141,7 +141,7 @@ const SidebarProvider = React.forwardRef<
);
return (
<SidebarContext.Provider value={contextValue}>
<SidebarContext value={contextValue}>
<TooltipProvider delayDuration={0}>
<div
style={
@@ -161,7 +161,7 @@ const SidebarProvider = React.forwardRef<
{children}
</div>
</TooltipProvider>
</SidebarContext.Provider>
</SidebarContext>
);
},
);

View File

@@ -22,9 +22,9 @@ const ToggleGroup = React.forwardRef<
className={cn("flex items-center justify-center gap-1", className)}
{...props}
>
<ToggleGroupContext.Provider value={{ variant, size }}>
<ToggleGroupContext value={{ variant, size }}>
{children}
</ToggleGroupContext.Provider>
</ToggleGroupContext>
</ToggleGroupPrimitive.Root>
))

View File

@@ -102,9 +102,5 @@ export function AuthProvider({ children }: { children: React.ReactNode }) {
axios.get("/logout", { withCredentials: true });
};
return (
<AuthContext.Provider value={{ auth, login, logout }}>
{children}
</AuthContext.Provider>
);
return <AuthContext value={{ auth, login, logout }}>{children}</AuthContext>;
}

View File

@@ -125,11 +125,7 @@ export function DetailStreamProvider({
isDetailMode,
};
return (
<DetailStreamContext.Provider value={value}>
{children}
</DetailStreamContext.Provider>
);
return <DetailStreamContext value={value}>{children}</DetailStreamContext>;
}
// eslint-disable-next-line react-refresh/only-export-components

View File

@@ -77,9 +77,9 @@ export function LanguageProvider({
};
return (
<LanguageProviderContext.Provider {...props} value={value}>
<LanguageProviderContext {...props} value={value}>
{children}
</LanguageProviderContext.Provider>
</LanguageProviderContext>
);
}

View File

@@ -1,6 +1,5 @@
import { ReactNode } from "react";
import { ThemeProvider } from "@/context/theme-provider";
import { RecoilRoot } from "recoil";
import { ApiProvider } from "@/api";
import { IconContext } from "react-icons";
import { TooltipProvider } from "@/components/ui/tooltip";
@@ -15,25 +14,23 @@ type TProvidersProps = {
function providers({ children }: TProvidersProps) {
return (
<RecoilRoot>
<AuthProvider>
<ApiProvider>
<ThemeProvider defaultTheme="system" storageKey="frigate-ui-theme">
<LanguageProvider>
<TooltipProvider>
<IconContext.Provider value={{ size: "20" }}>
<StatusBarMessagesProvider>
<StreamingSettingsProvider>
{children}
</StreamingSettingsProvider>
</StatusBarMessagesProvider>
</IconContext.Provider>
</TooltipProvider>
</LanguageProvider>
</ThemeProvider>
</ApiProvider>
</AuthProvider>
</RecoilRoot>
<AuthProvider>
<ApiProvider>
<ThemeProvider defaultTheme="system" storageKey="frigate-ui-theme">
<LanguageProvider>
<TooltipProvider>
<IconContext.Provider value={{ size: "20" }}>
<StatusBarMessagesProvider>
<StreamingSettingsProvider>
{children}
</StreamingSettingsProvider>
</StatusBarMessagesProvider>
</IconContext.Provider>
</TooltipProvider>
</LanguageProvider>
</ThemeProvider>
</ApiProvider>
</AuthProvider>
);
}

Some files were not shown because too many files have changed in this diff.