mirror of
https://github.com/blakeblackshear/frigate.git
synced 2026-03-20 15:18:21 +03:00
Compare commits: 47 commits (eb762f70ce ... 856ee99f90)
| Author | SHA1 | Date |
|---|---|---|
| | 856ee99f90 | |
| | 3072636684 | |
| | c3298f63bf | |
| | 9fc1a18a6f | |
| | de70f3b6cd | |
| | 6486d218fe | |
| | a4137a4888 | |
| | f7c8a1346a | |
| | 006c3dfa01 | |
| | 68c0e0c7fd | |
| | 8fb44b4349 | |
| | afd2138e13 | |
| | 7d0adcc62f | |
| | 4e1237f2d1 | |
| | eb16a3221d | |
| | 5c5193ee48 | |
| | ae58bbe550 | |
| | 734f392849 | |
| | 076ec8153b | |
| | c7cb4157f3 | |
| | 1813eaa9ea | |
| | ace13c1062 | |
| | 04a5633d8c | |
| | ea79e1453c | |
| | 12754a2a0d | |
| | a5e88b312f | |
| | 789bd156aa | |
| | fb61c1128a | |
| | 60faaec312 | |
| | 925d1f38b1 | |
| | 372488939a | |
| | 28b0b0b911 | |
| | d1bb6ad76f | |
| | 35d176468b | |
| | 223c183b85 | |
| | 79d5e4891a | |
| | 82cb5f01b2 | |
| | 2ff7d05044 | |
| | e560becbec | |
| | 2d0afb4267 | |
| | 2075ec71f9 | |
| | 573d08236c | |
| | d4aeb2ed12 | |
| | 3a67688206 | |
| | fcc5c1c6a2 | |
| | 590a0397a9 | |
| | 42b3bfa6da | |
@@ -38,6 +38,7 @@ Remember that motion detection is just used to determine when object detection s
The threshold value dictates how much of a change in a pixel's luminance is required to be considered motion.

```yaml
# default threshold value
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
```

@@ -52,6 +53,7 @@ Watching the motion boxes in the debug view, increase the threshold until you on

### Contour Area

```yaml
# default contour_area value
motion:
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
```
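The per-pixel check that `threshold` performs (and that `contour_area` then filters by region size) can be sketched with NumPy. This is an illustrative approximation of the `cv2.threshold` step applied to the frame delta, not Frigate's actual implementation:

```python
import numpy as np

def count_motion_pixels(prev: np.ndarray, curr: np.ndarray, threshold: int = 30) -> int:
    """Count pixels whose luminance changed by more than `threshold`,
    mimicking the cv2.threshold step applied to the frame delta."""
    delta = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return int(np.count_nonzero(delta > threshold))

# Two 4x4 grayscale frames; one pixel jumps by 100, another by only 10.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 100  # well above the default threshold of 30
curr[1, 1] = 10   # below the threshold, so it is ignored
print(count_motion_pixels(prev, curr))  # 1
```

Raising `threshold` shrinks the set of pixels counted as changed; `contour_area` then discards connected regions smaller than the configured size.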
@@ -79,49 +81,27 @@ However, if the preferred day settings do not work well at night it is recommend

## Tuning For Large Changes In Motion

### Lightning Threshold

```yaml
# default lightning_threshold:
motion:
  # Optional: The percentage of the image used to detect lightning or
  # other substantial changes where motion detection needs to
  # recalibrate. (default: shown below)
  # Increasing this value will make motion detection more likely
  # to consider lightning or IR mode changes as valid motion.
  # Decreasing this value will make motion detection more likely
  # to ignore large amounts of motion such as a person
  # approaching a doorbell camera.
  # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection
  # needs to recalibrate. (default: shown below)
  # Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion.
  # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching
  # a doorbell camera.
  lightning_threshold: 0.8
```

Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. `lightning_threshold` defines the percentage of the image used to detect these substantial changes. Increasing this value makes motion detection more likely to treat large changes (like IR mode switches) as valid motion. Decreasing it makes motion detection more likely to ignore large amounts of motion, such as a person approaching a doorbell camera.

Note that `lightning_threshold` does **not** stop motion-based recordings from being saved: it only prevents additional motion analysis after the threshold is exceeded, reducing false positive object detections during high-motion periods (e.g. storms or PTZ sweeps) without interfering with recordings.

:::warning

Some cameras, like doorbell cameras, may have missed detections when someone walks directly in front of the camera and the `lightning_threshold` causes motion detection to recalibrate. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.

Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.

:::

### Skip Motion On Large Scene Changes

:::note

```yaml
motion:
  # Optional: Fraction of the frame that must change in a single update
  # before Frigate will completely ignore any motion in that frame.
  # Values range between 0.0 and 1.0, leave unset (null) to disable.
  # Setting this to 0.7 would cause Frigate to **skip** reporting
  # motion boxes when more than 70% of the image appears to change
  # (e.g. during lightning storms, IR/color mode switches, or other
  # sudden lighting events).
  skip_motion_threshold: 0.7
```

This option is handy when you want to prevent large transient changes from triggering recordings or object detection. It differs from `lightning_threshold` because it completely suppresses motion instead of just forcing a recalibration.

:::warning

When the skip threshold is exceeded, **no motion is reported** for that frame, meaning **nothing is recorded** for that frame. That means you can miss something important, like a PTZ camera auto-tracking an object or activity while the camera is moving. If you prefer to guarantee that every frame is saved, leave this unset and accept occasional recordings containing scene noise; they typically only take up a few megabytes and are quick to scan in the timeline UI.

Lightning threshold does not stop motion based recordings from being saved.

:::

Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.

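How the two thresholds interact can be sketched as a small decision function. The names mirror the config keys, but the control flow here is an assumption for illustration, not Frigate's exact detector logic:

```python
from typing import Optional

def classify_motion(changed_fraction: float,
                    lightning_threshold: float = 0.8,
                    skip_motion_threshold: Optional[float] = None) -> str:
    """Hypothetical sketch: skip_motion_threshold suppresses motion entirely,
    while lightning_threshold only forces a recalibration (recordings continue)."""
    if skip_motion_threshold is not None and changed_fraction > skip_motion_threshold:
        return "skip"         # no motion boxes, no motion recording for this frame
    if changed_fraction > lightning_threshold:
        return "recalibrate"  # analysis pauses, motion recordings are still saved
    return "report"           # normal motion reporting

print(classify_motion(0.95))                             # recalibrate
print(classify_motion(0.95, skip_motion_threshold=0.7))  # skip
print(classify_motion(0.2))                              # report
```

With both thresholds set, the skip check wins for very large scene changes, since it is the stronger of the two behaviors.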
@@ -480,16 +480,12 @@ motion:
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 30
  # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection needs
  # to recalibrate and motion checks stop for that frame. Recordings are unaffected. (default: shown below)
  # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection
  # needs to recalibrate. (default: shown below)
  # Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion.
  # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.
  # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching
  # a doorbell camera.
  lightning_threshold: 0.8
  # Optional: Fraction of the frame that must change in a single update before motion boxes are completely
  # ignored. Values range between 0.0 and 1.0. When exceeded, no motion boxes are reported and **no motion
  # recording** is created for that frame. Leave unset (null) to disable this feature. Use with care on PTZ
  # cameras or other situations where you require guaranteed frame capture.
  skip_motion_threshold: None
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.

@@ -32,12 +32,6 @@ from frigate.models import User

logger = logging.getLogger(__name__)


# In-memory cache to track which clients we've logged for an anonymous access event.
# Keyed by a hashed value combining remote address + user-agent. The value is
# an expiration timestamp (float).
FIRST_LOAD_TTL_SECONDS = 60 * 60 * 24 * 7  # 7 days
_first_load_seen: dict[str, float] = {}


def require_admin_by_default():
    """
@@ -290,15 +284,6 @@ def get_remote_addr(request: Request):
    return remote_addr or "127.0.0.1"


def _cleanup_first_load_seen() -> None:
    """Cleanup expired entries in the in-memory first-load cache."""
    now = time.time()
    # Build list for removal to avoid mutating dict during iteration
    expired = [k for k, exp in _first_load_seen.items() if exp <= now]
    for k in expired:
        del _first_load_seen[k]


def get_jwt_secret() -> str:
    jwt_secret = None
    # check env var
@@ -759,30 +744,10 @@ def profile(request: Request):
    roles_dict = request.app.frigate_config.auth.roles
    allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)

    response = JSONResponse(
    return JSONResponse(
        content={"username": username, "role": role, "allowed_cameras": allowed_cameras}
    )

    if username == "anonymous":
        try:
            remote_addr = get_remote_addr(request)
        except Exception:
            remote_addr = (
                request.client.host if hasattr(request, "client") else "unknown"
            )

        ua = request.headers.get("user-agent", "")
        key_material = f"{remote_addr}|{ua}"
        cache_key = hashlib.sha256(key_material.encode()).hexdigest()

        _cleanup_first_load_seen()
        now = time.time()
        if cache_key not in _first_load_seen:
            _first_load_seen[cache_key] = now + FIRST_LOAD_TTL_SECONDS
            logger.info(f"Anonymous user access from {remote_addr} ua={ua[:200]}")

    return response


@router.get(
    "/logout",

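The `_first_load_seen` pattern removed above (a TTL-bounded in-memory cache keyed by a hash of address + user-agent, used to log each anonymous client only once per window) reduces to a few lines. The `should_log` helper below is an illustrative standalone sketch, not part of Frigate:

```python
FIRST_LOAD_TTL_SECONDS = 5  # short TTL for the demo; the code above uses 7 days

_seen: dict[str, float] = {}

def should_log(key: str, now: float) -> bool:
    """Return True only for the first sighting of `key` within the TTL window,
    pruning expired entries first (mirrors _cleanup_first_load_seen)."""
    for k in [k for k, exp in _seen.items() if exp <= now]:
        del _seen[k]
    if key in _seen:
        return False
    _seen[key] = now + FIRST_LOAD_TTL_SECONDS
    return True

print(should_log("addr|ua", 0.0))   # True: first sighting, gets logged
print(should_log("addr|ua", 1.0))   # False: still within the TTL
print(should_log("addr|ua", 10.0))  # True: entry expired and was pruned
```

Pruning before the membership check keeps the dict bounded without a background task, at the cost of an O(n) scan per lookup.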
@@ -11,7 +11,6 @@ class Tags(Enum):
    classification = "Classification"
    logs = "Logs"
    media = "Media"
    motion_search = "Motion Search"
    notifications = "Notifications"
    preview = "Preview"
    recordings = "Recordings"

@@ -22,7 +22,6 @@ from frigate.api import (
    event,
    export,
    media,
    motion_search,
    notification,
    preview,
    record,
@@ -136,7 +135,6 @@ def create_fastapi_app(
    app.include_router(export.router)
    app.include_router(event.router)
    app.include_router(media.router)
    app.include_router(motion_search.router)
    app.include_router(record.router)
    app.include_router(debug_replay.router)
    # App Properties

@@ -24,7 +24,6 @@ from tzlocal import get_localzone_name
from frigate.api.auth import (
    allow_any_authenticated,
    require_camera_access,
    require_role,
)
from frigate.api.defs.query.media_query_parameters import (
    Extension,
@@ -1006,23 +1005,6 @@ def grid_snapshot(
    )


@router.delete(
    "/{camera_name}/region_grid", dependencies=[Depends(require_role("admin"))]
)
def clear_region_grid(request: Request, camera_name: str):
    """Clear the region grid for a camera."""
    if camera_name not in request.app.frigate_config.cameras:
        return JSONResponse(
            content={"success": False, "message": "Camera not found"},
            status_code=404,
        )

    Regions.delete().where(Regions.camera == camera_name).execute()
    return JSONResponse(
        content={"success": True, "message": "Region grid cleared"},
    )


@router.get(
    "/events/{event_id}/snapshot-clean.webp",
    dependencies=[Depends(require_camera_access)],

@@ -1,292 +0,0 @@
"""Motion search API for detecting changes within a region of interest."""

import logging
from typing import Any, List, Optional

from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pydantic import BaseModel, Field

from frigate.api.auth import require_camera_access
from frigate.api.defs.tags import Tags
from frigate.jobs.motion_search import (
    cancel_motion_search_job,
    get_motion_search_job,
    start_motion_search_job,
)
from frigate.types import JobStatusTypesEnum

logger = logging.getLogger(__name__)

router = APIRouter(tags=[Tags.motion_search])


class MotionSearchRequest(BaseModel):
    """Request body for motion search."""

    start_time: float = Field(description="Start timestamp for the search range")
    end_time: float = Field(description="End timestamp for the search range")
    polygon_points: List[List[float]] = Field(
        description="List of [x, y] normalized coordinates (0-1) defining the ROI polygon"
    )
    threshold: int = Field(
        default=30,
        ge=1,
        le=255,
        description="Pixel difference threshold (1-255)",
    )
    min_area: float = Field(
        default=5.0,
        ge=0.1,
        le=100.0,
        description="Minimum change area as a percentage of the ROI",
    )
    frame_skip: int = Field(
        default=5,
        ge=1,
        le=30,
        description="Process every Nth frame (1=all frames, 5=every 5th frame)",
    )
    parallel: bool = Field(
        default=False,
        description="Enable parallel scanning across segments",
    )
    max_results: int = Field(
        default=25,
        ge=1,
        le=200,
        description="Maximum number of search results to return",
    )


class MotionSearchResult(BaseModel):
    """A single search result with timestamp and change info."""

    timestamp: float = Field(description="Timestamp where change was detected")
    change_percentage: float = Field(description="Percentage of ROI area that changed")


class MotionSearchMetricsResponse(BaseModel):
    """Metrics collected during motion search execution."""

    segments_scanned: int = 0
    segments_processed: int = 0
    metadata_inactive_segments: int = 0
    heatmap_roi_skip_segments: int = 0
    fallback_full_range_segments: int = 0
    frames_decoded: int = 0
    wall_time_seconds: float = 0.0
    segments_with_errors: int = 0


class MotionSearchStartResponse(BaseModel):
    """Response when motion search job starts."""

    success: bool
    message: str
    job_id: str


class MotionSearchStatusResponse(BaseModel):
    """Response containing job status and results."""

    success: bool
    message: str
    status: str  # "queued", "running", "success", "failed", or "cancelled"
    results: Optional[List[MotionSearchResult]] = None
    total_frames_processed: Optional[int] = None
    error_message: Optional[str] = None
    metrics: Optional[MotionSearchMetricsResponse] = None


@router.post(
    "/{camera_name}/search/motion",
    response_model=MotionSearchStartResponse,
    dependencies=[Depends(require_camera_access)],
    summary="Start motion search job",
    description="""Starts an asynchronous search for significant motion changes within
    a user-defined Region of Interest (ROI) over a specified time range. Returns a job_id
    that can be used to poll for results.""",
)
async def start_motion_search(
    request: Request,
    camera_name: str,
    body: MotionSearchRequest,
):
    """Start an async motion search job."""
    config = request.app.frigate_config

    if camera_name not in config.cameras:
        return JSONResponse(
            content={"success": False, "message": f"Camera {camera_name} not found"},
            status_code=404,
        )

    # Validate polygon has at least 3 points
    if len(body.polygon_points) < 3:
        return JSONResponse(
            content={
                "success": False,
                "message": "Polygon must have at least 3 points",
            },
            status_code=400,
        )

    # Validate time range
    if body.start_time >= body.end_time:
        return JSONResponse(
            content={
                "success": False,
                "message": "Start time must be before end time",
            },
            status_code=400,
        )

    # Start the job using the jobs module
    job_id = start_motion_search_job(
        config=config,
        camera_name=camera_name,
        start_time=body.start_time,
        end_time=body.end_time,
        polygon_points=body.polygon_points,
        threshold=body.threshold,
        min_area=body.min_area,
        frame_skip=body.frame_skip,
        parallel=body.parallel,
        max_results=body.max_results,
    )

    return JSONResponse(
        content={
            "success": True,
            "message": "Search job started",
            "job_id": job_id,
        }
    )


@router.get(
    "/{camera_name}/search/motion/{job_id}",
    response_model=MotionSearchStatusResponse,
    dependencies=[Depends(require_camera_access)],
    summary="Get motion search job status",
    description="Returns the status and results (if complete) of a motion search job.",
)
async def get_motion_search_status_endpoint(
    request: Request,
    camera_name: str,
    job_id: str,
):
    """Get the status of a motion search job."""
    config = request.app.frigate_config

    if camera_name not in config.cameras:
        return JSONResponse(
            content={"success": False, "message": f"Camera {camera_name} not found"},
            status_code=404,
        )

    job = get_motion_search_job(job_id)
    if not job:
        return JSONResponse(
            content={"success": False, "message": "Job not found"},
            status_code=404,
        )

    api_status = job.status

    # Build response content
    response_content: dict[str, Any] = {
        "success": api_status != JobStatusTypesEnum.failed,
        "status": api_status,
    }

    if api_status == JobStatusTypesEnum.failed:
        response_content["message"] = job.error_message or "Search failed"
        response_content["error_message"] = job.error_message
    elif api_status == JobStatusTypesEnum.cancelled:
        response_content["message"] = "Search cancelled"
        response_content["total_frames_processed"] = job.total_frames_processed
    elif api_status == JobStatusTypesEnum.success:
        response_content["message"] = "Search complete"
        if job.results:
            response_content["results"] = job.results.get("results", [])
            response_content["total_frames_processed"] = job.results.get(
                "total_frames_processed", job.total_frames_processed
            )
        else:
            response_content["results"] = []
            response_content["total_frames_processed"] = job.total_frames_processed
    else:
        response_content["message"] = "Job processing"
        response_content["total_frames_processed"] = job.total_frames_processed
        # Include partial results if available (streaming)
        if job.results:
            response_content["results"] = job.results.get("results", [])
            response_content["total_frames_processed"] = job.results.get(
                "total_frames_processed", job.total_frames_processed
            )

    # Include metrics if available
    if job.metrics:
        response_content["metrics"] = job.metrics.to_dict()

    return JSONResponse(content=response_content)


@router.post(
    "/{camera_name}/search/motion/{job_id}/cancel",
    dependencies=[Depends(require_camera_access)],
    summary="Cancel motion search job",
    description="Cancels an active motion search job if it is still processing.",
)
async def cancel_motion_search_endpoint(
    request: Request,
    camera_name: str,
    job_id: str,
):
    """Cancel an active motion search job."""
    config = request.app.frigate_config

    if camera_name not in config.cameras:
        return JSONResponse(
            content={"success": False, "message": f"Camera {camera_name} not found"},
            status_code=404,
        )

    job = get_motion_search_job(job_id)
    if not job:
        return JSONResponse(
            content={"success": False, "message": "Job not found"},
            status_code=404,
        )

    # Check if already finished
    api_status = job.status
    if api_status not in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
        return JSONResponse(
            content={
                "success": True,
                "message": "Job already finished",
                "status": api_status,
            }
        )

    # Request cancellation
    cancelled = cancel_motion_search_job(job_id)
    if cancelled:
        return JSONResponse(
            content={
                "success": True,
                "message": "Search cancelled",
                "status": "cancelled",
            }
        )

    return JSONResponse(
        content={
            "success": False,
            "message": "Failed to cancel job",
        },
        status_code=500,
    )
@@ -261,7 +261,6 @@ async def recordings(
            Recordings.segment_size,
            Recordings.motion,
            Recordings.objects,
            Recordings.motion_heatmap,
            Recordings.duration,
        )
        .where(

@@ -51,7 +51,6 @@ from frigate.embeddings import EmbeddingProcess, EmbeddingsContext
from frigate.events.audio import AudioProcessor
from frigate.events.cleanup import EventCleanup
from frigate.events.maintainer import EventProcessor
from frigate.jobs.motion_search import stop_all_motion_search_jobs
from frigate.log import _stop_logging
from frigate.models import (
    Event,
@@ -600,9 +599,6 @@ class FrigateApp:
        # used by the docker healthcheck
        Path("/dev/shm/.frigate-is-stopping").touch()

        # Cancel any running motion search jobs before setting stop_event
        stop_all_motion_search_jobs()

        self.stop_event.set()

        # set an end_time on entries without an end_time before exiting

@@ -24,17 +24,10 @@ class MotionConfig(FrigateBaseModel):
    lightning_threshold: float = Field(
        default=0.8,
        title="Lightning threshold",
        description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events.",
        description="Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0).",
        ge=0.3,
        le=1.0,
    )
    skip_motion_threshold: Optional[float] = Field(
        default=None,
        title="Skip motion threshold",
        description="If set to a value between 0.0 and 1.0, and more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera auto-tracking an object. The trade-off is between dropping a few megabytes of recordings versus reviewing a couple short clips. Leave unset (None) to disable this feature.",
        ge=0.0,
        le=1.0,
    )
    improve_contrast: bool = Field(
        default=True,
        title="Improve contrast",

@@ -1,6 +1,5 @@
"""Ollama Provider for Frigate AI."""

import json
import logging
from typing import Any, Optional

@@ -109,22 +108,7 @@ class OllamaClient(GenAIClient):
            if msg.get("name"):
                msg_dict["name"] = msg["name"]
            if msg.get("tool_calls"):
                # Ollama requires tool call arguments as dicts, but the
                # conversation format (OpenAI-style) stores them as JSON
                # strings. Convert back to dicts for Ollama.
                ollama_tool_calls = []
                for tc in msg["tool_calls"]:
                    func = tc.get("function") or {}
                    args = func.get("arguments") or {}
                    if isinstance(args, str):
                        try:
                            args = json.loads(args)
                        except (json.JSONDecodeError, TypeError):
                            args = {}
                    ollama_tool_calls.append(
                        {"function": {"name": func.get("name", ""), "arguments": args}}
                    )
                msg_dict["tool_calls"] = ollama_tool_calls
                msg_dict["tool_calls"] = msg["tool_calls"]
            request_messages.append(msg_dict)

        request_params: dict[str, Any] = {
@@ -136,27 +120,25 @@ class OllamaClient(GenAIClient):
            request_params["stream"] = True
        if tools:
            request_params["tools"] = tools
        if tool_choice:
            request_params["tool_choice"] = (
                "none"
                if tool_choice == "none"
                else "required"
                if tool_choice == "required"
                else "auto"
            )
        return request_params

    def _message_from_response(self, response: dict[str, Any]) -> dict[str, Any]:
        """Parse Ollama chat response into {content, tool_calls, finish_reason}."""
        if not response or "message" not in response:
            logger.debug("Ollama response empty or missing 'message' key")
            return {
                "content": None,
                "tool_calls": None,
                "finish_reason": "error",
            }
        message = response["message"]
        logger.debug(
            "Ollama response message keys: %s, content_len=%s, thinking_len=%s, "
            "tool_calls=%s, done=%s",
            list(message.keys()) if hasattr(message, "keys") else "N/A",
            len(message.get("content", "") or "") if message.get("content") else 0,
            len(message.get("thinking", "") or "") if message.get("thinking") else 0,
            bool(message.get("tool_calls")),
            response.get("done"),
        )
        content = message.get("content", "").strip() if message.get("content") else None
        tool_calls = parse_tool_calls_from_message(message)
        finish_reason = "error"
@@ -216,13 +198,7 @@ class OllamaClient(GenAIClient):
        tools: Optional[list[dict[str, Any]]] = None,
        tool_choice: Optional[str] = "auto",
    ):
        """Stream chat with tools; yields content deltas then final message.

        When tools are provided, Ollama streaming does not include tool_calls
        in the response chunks. To work around this, we use a non-streaming
        call when tools are present to ensure tool calls are captured, then
        emit the content as a single delta followed by the final message.
        """
        """Stream chat with tools; yields content deltas then final message."""
        if self.provider is None:
            logger.warning(
                "Ollama provider has not been initialized. Check your Ollama configuration."
@@ -237,27 +213,6 @@ class OllamaClient(GenAIClient):
            )
            return
        try:
            # Ollama does not return tool_calls in streaming mode, so fall
            # back to a non-streaming call when tools are provided.
            if tools:
                logger.debug(
                    "Ollama: tools provided, using non-streaming call for tool support"
                )
                request_params = self._build_request_params(
                    messages, tools, tool_choice, stream=False
                )
                async_client = OllamaAsyncClient(
                    host=self.genai_config.base_url,
                    timeout=self.timeout,
                )
                response = await async_client.chat(**request_params)
                result = self._message_from_response(response)
                content = result.get("content")
                if content:
                    yield ("content_delta", content)
                yield ("message", result)
                return

            request_params = self._build_request_params(
                messages, tools, tool_choice, stream=True
            )
@@ -278,10 +233,11 @@ class OllamaClient(GenAIClient):
                    yield ("content_delta", delta)
                if chunk.get("done"):
                    full_content = "".join(content_parts).strip() or None
                    tool_calls = parse_tool_calls_from_message(msg)
                    final_message = {
                        "content": full_content,
                        "tool_calls": None,
                        "finish_reason": "stop",
                        "tool_calls": tool_calls,
                        "finish_reason": "tool_calls" if tool_calls else "stop",
                    }
                    break

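The workaround removed in the diff above (fall back to a single non-streaming call when tools are supplied, because Ollama's streamed chunks omit tool calls) can be sketched as an async generator. The names and the simulated non-streaming call below are illustrative assumptions, not the Ollama client API:

```python
import asyncio

async def stream_with_tool_fallback(chunks, tools=None):
    """Sketch: with tools, emit one combined delta plus a final tool-call
    message (simulating the non-streaming call); otherwise stream deltas."""
    if tools:
        content = "".join(chunks)  # stands in for the non-streaming response
        yield ("content_delta", content)
        yield ("message", {"content": content, "finish_reason": "tool_calls"})
        return
    for chunk in chunks:  # normal streaming path: one delta per chunk
        yield ("content_delta", chunk)
    yield ("message", {"content": "".join(chunks), "finish_reason": "stop"})

async def demo():
    return [event async for event in
            stream_with_tool_fallback(["Hel", "lo"], tools=[{"name": "t"}])]

events = asyncio.run(demo())
print(events[0])  # ('content_delta', 'Hello')
```

Collapsing the stream to one delta preserves the consumer's interface (deltas followed by a final message) while guaranteeing the tool calls are not lost.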
@@ -23,26 +23,21 @@ def parse_tool_calls_from_message(
    if not raw or not isinstance(raw, list):
        return None
    result = []
    for idx, tool_call in enumerate(raw):
    for tool_call in raw:
        function_data = tool_call.get("function") or {}
        raw_arguments = function_data.get("arguments") or {}
        if isinstance(raw_arguments, dict):
            arguments = raw_arguments
        elif isinstance(raw_arguments, str):
            try:
                arguments = json.loads(raw_arguments)
            except (json.JSONDecodeError, KeyError, TypeError) as e:
                logger.warning(
                    "Failed to parse tool call arguments: %s, tool: %s",
                    e,
                    function_data.get("name", "unknown"),
                )
                arguments = {}
        else:
        try:
            arguments_str = function_data.get("arguments") or "{}"
            arguments = json.loads(arguments_str)
        except (json.JSONDecodeError, KeyError, TypeError) as e:
            logger.warning(
                "Failed to parse tool call arguments: %s, tool: %s",
                e,
                function_data.get("name", "unknown"),
            )
            arguments = {}
        result.append(
            {
                "id": tool_call.get("id", "") or f"call_{idx}",
                "id": tool_call.get("id", ""),
                "name": function_data.get("name", ""),
                "arguments": arguments,
            }

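The argument-normalization rule both versions of the loop implement (dicts pass through, JSON strings are decoded, anything else or bad JSON falls back to an empty dict) can be isolated into one helper. This is an illustrative reduction, not the function in the diff:

```python
import json
from typing import Any

def parse_arguments(raw: Any) -> dict:
    """Normalize tool-call arguments: dict passes through, a JSON string is
    decoded, and anything else (or invalid JSON) becomes an empty dict."""
    if isinstance(raw, dict):
        return raw
    if isinstance(raw, str):
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return {}
    return {}

print(parse_arguments({"a": 1}))    # {'a': 1}
print(parse_arguments('{"a": 1}'))  # {'a': 1}
print(parse_arguments("not json"))  # {}
```

Accepting both shapes at the boundary keeps downstream code free of type checks, which is why both branches of the diff converge on this rule.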
@@ -1,864 +0,0 @@
"""Motion search job management with background execution and parallel verification."""

import logging
import os
import threading
from concurrent.futures import Future, ThreadPoolExecutor, as_completed
from dataclasses import asdict, dataclass, field
from datetime import datetime
from typing import Any, Optional

import cv2
import numpy as np

from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import UPDATE_JOB_STATE
from frigate.jobs.job import Job
from frigate.jobs.manager import (
    get_job_by_id,
    set_current_job,
)
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum

logger = logging.getLogger(__name__)

# Constants
HEATMAP_GRID_SIZE = 16


@dataclass
class MotionSearchMetrics:
    """Metrics collected during motion search execution."""

    segments_scanned: int = 0
    segments_processed: int = 0
    metadata_inactive_segments: int = 0
    heatmap_roi_skip_segments: int = 0
    fallback_full_range_segments: int = 0
    frames_decoded: int = 0
    wall_time_seconds: float = 0.0
    segments_with_errors: int = 0

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary."""
        return asdict(self)


@dataclass
class MotionSearchResult:
    """A single search result with timestamp and change info."""

    timestamp: float
    change_percentage: float

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary."""
        return asdict(self)


@dataclass
class MotionSearchJob(Job):
    """Job state for motion search operations."""

    job_type: str = "motion_search"
    camera: str = ""
    start_time_range: float = 0.0
    end_time_range: float = 0.0
    polygon_points: list[list[float]] = field(default_factory=list)
    threshold: int = 30
    min_area: float = 5.0
    frame_skip: int = 5
    parallel: bool = False
    max_results: int = 25

    # Track progress
    total_frames_processed: int = 0

    # Metrics for observability
    metrics: Optional[MotionSearchMetrics] = None

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary for WebSocket transmission."""
        d = asdict(self)
        if self.metrics:
            d["metrics"] = self.metrics.to_dict()
        return d


def create_polygon_mask(
    polygon_points: list[list[float]], frame_width: int, frame_height: int
) -> np.ndarray:
    """Create a binary mask from normalized polygon coordinates."""
    motion_points = np.array(
        [[int(p[0] * frame_width), int(p[1] * frame_height)] for p in polygon_points],
|
||||
dtype=np.int32,
|
||||
)
|
||||
mask = np.zeros((frame_height, frame_width), dtype=np.uint8)
|
||||
cv2.fillPoly(mask, [motion_points], 255)
|
||||
return mask
|
||||
|
||||
|
||||
def compute_roi_bbox_normalized(
|
||||
polygon_points: list[list[float]],
|
||||
) -> tuple[float, float, float, float]:
|
||||
"""Compute the bounding box of the ROI in normalized coordinates (0-1).
|
||||
|
||||
Returns (x_min, y_min, x_max, y_max) in normalized coordinates.
|
||||
"""
|
||||
if not polygon_points:
|
||||
return (0.0, 0.0, 1.0, 1.0)
|
||||
|
||||
x_coords = [p[0] for p in polygon_points]
|
||||
y_coords = [p[1] for p in polygon_points]
|
||||
return (min(x_coords), min(y_coords), max(x_coords), max(y_coords))
|
||||
|
||||
|
||||
def heatmap_overlaps_roi(
|
||||
heatmap: dict[str, int], roi_bbox: tuple[float, float, float, float]
|
||||
) -> bool:
|
||||
"""Check if a sparse motion heatmap has any overlap with the ROI bounding box.
|
||||
|
||||
Args:
|
||||
heatmap: Sparse dict mapping cell index (str) to intensity (1-255).
|
||||
roi_bbox: (x_min, y_min, x_max, y_max) in normalized coordinates (0-1).
|
||||
|
||||
Returns:
|
||||
True if there is overlap (any active cell in the ROI region).
|
||||
"""
|
||||
if not isinstance(heatmap, dict):
|
||||
# Invalid heatmap, assume overlap to be safe
|
||||
return True
|
||||
|
||||
x_min, y_min, x_max, y_max = roi_bbox
|
||||
|
||||
# Convert normalized coordinates to grid cells (0-15)
|
||||
grid_x_min = max(0, int(x_min * HEATMAP_GRID_SIZE))
|
||||
grid_y_min = max(0, int(y_min * HEATMAP_GRID_SIZE))
|
||||
grid_x_max = min(HEATMAP_GRID_SIZE - 1, int(x_max * HEATMAP_GRID_SIZE))
|
||||
grid_y_max = min(HEATMAP_GRID_SIZE - 1, int(y_max * HEATMAP_GRID_SIZE))
|
||||
|
||||
# Check each cell in the ROI bbox
|
||||
for y in range(grid_y_min, grid_y_max + 1):
|
||||
for x in range(grid_x_min, grid_x_max + 1):
|
||||
idx = str(y * HEATMAP_GRID_SIZE + x)
|
||||
if idx in heatmap:
|
||||
return True
|
||||
|
||||
return False
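
As a sanity check on the gate above, the sparse heatmap/ROI overlap test can be reproduced standalone. This is an illustrative sketch (a hypothetical `overlaps` helper mirroring `heatmap_overlaps_roi`, not code from the diff); the cell math matches `idx = y * HEATMAP_GRID_SIZE + x` with a 16x16 grid:

```python
# Sketch: map a normalized ROI bbox onto 16x16 grid cells and check whether
# any active cell of the sparse heatmap falls inside it.
GRID = 16

def overlaps(heatmap: dict[str, int], bbox: tuple[float, float, float, float]) -> bool:
    x_min, y_min, x_max, y_max = bbox
    gx0, gy0 = max(0, int(x_min * GRID)), max(0, int(y_min * GRID))
    gx1 = min(GRID - 1, int(x_max * GRID))
    gy1 = min(GRID - 1, int(y_max * GRID))
    return any(
        str(y * GRID + x) in heatmap
        for y in range(gy0, gy1 + 1)
        for x in range(gx0, gx1 + 1)
    )

# Cell 45 is row 2, col 13 (45 = 2 * 16 + 13), i.e. the upper-right area.
print(overlaps({"45": 3}, (0.75, 0.0, 1.0, 0.25)))  # True
print(overlaps({"45": 3}, (0.0, 0.5, 0.25, 1.0)))   # False
```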
|
||||
|
||||
|
||||
def segment_passes_activity_gate(recording: Recordings) -> bool:
|
||||
"""Check if a segment passes the activity gate.
|
||||
|
||||
Returns True if any of motion, objects, or regions is non-zero/non-null.
|
||||
Returns True if all are null (old segments without data).
|
||||
"""
|
||||
motion = recording.motion
|
||||
objects = recording.objects
|
||||
regions = recording.regions
|
||||
|
||||
# Old segments without metadata - pass through (conservative)
|
||||
if motion is None and objects is None and regions is None:
|
||||
return True
|
||||
|
||||
# Pass if any activity indicator is positive
|
||||
return bool(motion) or bool(objects) or bool(regions)
|
||||
|
||||
|
||||
def segment_passes_heatmap_gate(
|
||||
recording: Recordings, roi_bbox: tuple[float, float, float, float]
|
||||
) -> bool:
|
||||
"""Check if a segment passes the heatmap overlap gate.
|
||||
|
||||
Returns True if:
|
||||
- No heatmap is stored (old segments).
|
||||
- The heatmap overlaps with the ROI bbox.
|
||||
"""
|
||||
heatmap = getattr(recording, "motion_heatmap", None)
|
||||
if heatmap is None:
|
||||
# No heatmap stored, fall back to activity gate
|
||||
return True
|
||||
|
||||
return heatmap_overlaps_roi(heatmap, roi_bbox)
|
||||
|
||||
|
||||
class MotionSearchRunner(threading.Thread):
|
||||
"""Thread-based runner for motion search jobs with parallel verification."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
job: MotionSearchJob,
|
||||
config: FrigateConfig,
|
||||
cancel_event: threading.Event,
|
||||
) -> None:
|
||||
super().__init__(daemon=True, name=f"motion_search_{job.id}")
|
||||
self.job = job
|
||||
self.config = config
|
||||
self.cancel_event = cancel_event
|
||||
self.internal_stop_event = threading.Event()
|
||||
self.requestor = InterProcessRequestor()
|
||||
self.metrics = MotionSearchMetrics()
|
||||
self.job.metrics = self.metrics
|
||||
|
||||
# Worker cap: min(4, cpu_count)
|
||||
cpu_count = os.cpu_count() or 1
|
||||
self.max_workers = min(4, cpu_count)
|
||||
|
||||
def run(self) -> None:
|
||||
"""Execute the motion search job."""
|
||||
try:
|
||||
self.job.status = JobStatusTypesEnum.running
|
||||
self.job.start_time = datetime.now().timestamp()
|
||||
self._broadcast_status()
|
||||
|
||||
results = self._execute_search()
|
||||
|
||||
if self.cancel_event.is_set():
|
||||
self.job.status = JobStatusTypesEnum.cancelled
|
||||
else:
|
||||
self.job.status = JobStatusTypesEnum.success
|
||||
self.job.results = {
|
||||
"results": [r.to_dict() for r in results],
|
||||
"total_frames_processed": self.job.total_frames_processed,
|
||||
}
|
||||
|
||||
self.job.end_time = datetime.now().timestamp()
|
||||
self.metrics.wall_time_seconds = self.job.end_time - self.job.start_time
|
||||
self.job.metrics = self.metrics
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s completed: status=%s, results=%d, frames=%d",
|
||||
self.job.id,
|
||||
self.job.status,
|
||||
len(results),
|
||||
self.job.total_frames_processed,
|
||||
)
|
||||
self._broadcast_status()
|
||||
|
||||
except Exception as e:
|
||||
logger.exception("Motion search job %s failed: %s", self.job.id, e)
|
||||
self.job.status = JobStatusTypesEnum.failed
|
||||
self.job.error_message = str(e)
|
||||
self.job.end_time = datetime.now().timestamp()
|
||||
self.metrics.wall_time_seconds = self.job.end_time - (
|
||||
self.job.start_time or 0
|
||||
)
|
||||
self.job.metrics = self.metrics
|
||||
self._broadcast_status()
|
||||
|
||||
finally:
|
||||
if self.requestor:
|
||||
self.requestor.stop()
|
||||
|
||||
def _broadcast_status(self) -> None:
|
||||
"""Broadcast job status update via IPC to WebSocket subscribers."""
|
||||
if self.job.status == JobStatusTypesEnum.running and self.job.start_time:
|
||||
self.metrics.wall_time_seconds = (
|
||||
datetime.now().timestamp() - self.job.start_time
|
||||
)
|
||||
|
||||
try:
|
||||
self.requestor.send_data(UPDATE_JOB_STATE, self.job.to_dict())
|
||||
except Exception as e:
|
||||
logger.warning("Failed to broadcast motion search status: %s", e)
|
||||
|
||||
def _should_stop(self) -> bool:
|
||||
"""Check if processing should stop due to cancellation or internal limits."""
|
||||
return self.cancel_event.is_set() or self.internal_stop_event.is_set()
|
||||
|
||||
def _execute_search(self) -> list[MotionSearchResult]:
|
||||
"""Main search execution logic."""
|
||||
camera_name = self.job.camera
|
||||
camera_config = self.config.cameras.get(camera_name)
|
||||
if not camera_config:
|
||||
raise ValueError(f"Camera {camera_name} not found")
|
||||
|
||||
frame_width = camera_config.detect.width
|
||||
frame_height = camera_config.detect.height
|
||||
|
||||
# Create polygon mask
|
||||
polygon_mask = create_polygon_mask(
|
||||
self.job.polygon_points, frame_width, frame_height
|
||||
)
|
||||
|
||||
if np.count_nonzero(polygon_mask) == 0:
|
||||
logger.warning("Polygon mask is empty for job %s", self.job.id)
|
||||
return []
|
||||
|
||||
# Compute ROI bbox in normalized coordinates for heatmap gate
|
||||
roi_bbox = compute_roi_bbox_normalized(self.job.polygon_points)
|
||||
|
||||
# Query recordings
|
||||
recordings = list(
|
||||
Recordings.select()
|
||||
.where(
|
||||
(
|
||||
Recordings.start_time.between(
|
||||
self.job.start_time_range, self.job.end_time_range
|
||||
)
|
||||
)
|
||||
| (
|
||||
Recordings.end_time.between(
|
||||
self.job.start_time_range, self.job.end_time_range
|
||||
)
|
||||
)
|
||||
| (
|
||||
(self.job.start_time_range > Recordings.start_time)
|
||||
& (self.job.end_time_range < Recordings.end_time)
|
||||
)
|
||||
)
|
||||
.where(Recordings.camera == camera_name)
|
||||
.order_by(Recordings.start_time.asc())
|
||||
)
|
||||
|
||||
if not recordings:
|
||||
logger.debug("No recordings found for motion search job %s", self.job.id)
|
||||
return []
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: queried %d recording segments for camera %s "
|
||||
"(range %.1f - %.1f)",
|
||||
self.job.id,
|
||||
len(recordings),
|
||||
camera_name,
|
||||
self.job.start_time_range,
|
||||
self.job.end_time_range,
|
||||
)
|
||||
|
||||
self.metrics.segments_scanned = len(recordings)
|
||||
|
||||
# Apply activity and heatmap gates
|
||||
filtered_recordings = []
|
||||
for recording in recordings:
|
||||
if not segment_passes_activity_gate(recording):
|
||||
self.metrics.metadata_inactive_segments += 1
|
||||
self.metrics.segments_processed += 1
|
||||
logger.debug(
|
||||
"Motion search job %s: segment %s skipped by activity gate "
|
||||
"(motion=%s, objects=%s, regions=%s)",
|
||||
self.job.id,
|
||||
recording.id,
|
||||
recording.motion,
|
||||
recording.objects,
|
||||
recording.regions,
|
||||
)
|
||||
continue
|
||||
if not segment_passes_heatmap_gate(recording, roi_bbox):
|
||||
self.metrics.heatmap_roi_skip_segments += 1
|
||||
self.metrics.segments_processed += 1
|
||||
logger.debug(
|
||||
"Motion search job %s: segment %s skipped by heatmap gate "
|
||||
"(heatmap present=%s, roi_bbox=%s)",
|
||||
self.job.id,
|
||||
recording.id,
|
||||
recording.motion_heatmap is not None,
|
||||
roi_bbox,
|
||||
)
|
||||
continue
|
||||
filtered_recordings.append(recording)
|
||||
|
||||
self._broadcast_status()
|
||||
|
||||
# Fallback: if all segments were filtered out, scan all segments
|
||||
# This allows motion search to find things the detector missed
|
||||
if not filtered_recordings and recordings:
|
||||
logger.info(
|
||||
"All %d segments filtered by gates, falling back to full scan",
|
||||
len(recordings),
|
||||
)
|
||||
self.metrics.fallback_full_range_segments = len(recordings)
|
||||
filtered_recordings = recordings
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: %d/%d segments passed gates "
|
||||
"(activity_skipped=%d, heatmap_skipped=%d)",
|
||||
self.job.id,
|
||||
len(filtered_recordings),
|
||||
len(recordings),
|
||||
self.metrics.metadata_inactive_segments,
|
||||
self.metrics.heatmap_roi_skip_segments,
|
||||
)
|
||||
|
||||
if self.job.parallel:
|
||||
return self._search_motion_parallel(filtered_recordings, polygon_mask)
|
||||
|
||||
return self._search_motion_sequential(filtered_recordings, polygon_mask)
|
||||
|
||||
def _search_motion_parallel(
|
||||
self,
|
||||
recordings: list[Recordings],
|
||||
polygon_mask: np.ndarray,
|
||||
) -> list[MotionSearchResult]:
|
||||
"""Search for motion in parallel across segments, streaming results."""
|
||||
all_results: list[MotionSearchResult] = []
|
||||
total_frames = 0
|
||||
next_recording_idx_to_merge = 0
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: starting motion search with %d workers "
|
||||
"across %d segments",
|
||||
self.job.id,
|
||||
self.max_workers,
|
||||
len(recordings),
|
||||
)
|
||||
|
||||
# Initialize partial results on the job so they stream to the frontend
|
||||
self.job.results = {"results": [], "total_frames_processed": 0}
|
||||
|
||||
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
|
||||
futures: dict[Future, int] = {}
|
||||
completed_segments: dict[int, tuple[list[MotionSearchResult], int]] = {}
|
||||
|
||||
for idx, recording in enumerate(recordings):
|
||||
if self._should_stop():
|
||||
break
|
||||
|
||||
future = executor.submit(
|
||||
self._process_recording_for_motion,
|
||||
recording.path,
|
||||
recording.start_time,
|
||||
recording.end_time,
|
||||
self.job.start_time_range,
|
||||
self.job.end_time_range,
|
||||
polygon_mask,
|
||||
self.job.threshold,
|
||||
self.job.min_area,
|
||||
self.job.frame_skip,
|
||||
)
|
||||
futures[future] = idx
|
||||
|
||||
for future in as_completed(futures):
|
||||
if self._should_stop():
|
||||
# Cancel remaining futures
|
||||
for f in futures:
|
||||
f.cancel()
|
||||
break
|
||||
|
||||
recording_idx = futures[future]
|
||||
recording = recordings[recording_idx]
|
||||
|
||||
try:
|
||||
results, frames = future.result()
|
||||
self.metrics.segments_processed += 1
|
||||
completed_segments[recording_idx] = (results, frames)
|
||||
|
||||
while next_recording_idx_to_merge in completed_segments:
|
||||
segment_results, segment_frames = completed_segments.pop(
|
||||
next_recording_idx_to_merge
|
||||
)
|
||||
|
||||
all_results.extend(segment_results)
|
||||
total_frames += segment_frames
|
||||
self.job.total_frames_processed = total_frames
|
||||
self.metrics.frames_decoded = total_frames
|
||||
|
||||
if segment_results:
|
||||
deduped = self._deduplicate_results(all_results)
|
||||
self.job.results = {
|
||||
"results": [
|
||||
r.to_dict() for r in deduped[: self.job.max_results]
|
||||
],
|
||||
"total_frames_processed": total_frames,
|
||||
}
|
||||
|
||||
self._broadcast_status()
|
||||
|
||||
if segment_results and len(deduped) >= self.job.max_results:
|
||||
self.internal_stop_event.set()
|
||||
for pending_future in futures:
|
||||
pending_future.cancel()
|
||||
break
|
||||
|
||||
next_recording_idx_to_merge += 1
|
||||
|
||||
if self.internal_stop_event.is_set():
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
self.metrics.segments_processed += 1
|
||||
self.metrics.segments_with_errors += 1
|
||||
self._broadcast_status()
|
||||
logger.warning(
|
||||
"Error processing segment %s: %s",
|
||||
recording.path,
|
||||
e,
|
||||
)
|
||||
|
||||
self.job.total_frames_processed = total_frames
|
||||
self.metrics.frames_decoded = total_frames
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: motion search complete, "
|
||||
"found %d raw results, decoded %d frames, %d segment errors",
|
||||
self.job.id,
|
||||
len(all_results),
|
||||
total_frames,
|
||||
self.metrics.segments_with_errors,
|
||||
)
|
||||
|
||||
# Sort and deduplicate results
|
||||
all_results.sort(key=lambda x: x.timestamp)
|
||||
return self._deduplicate_results(all_results)[: self.job.max_results]
|
||||
|
||||
def _search_motion_sequential(
|
||||
self,
|
||||
recordings: list[Recordings],
|
||||
polygon_mask: np.ndarray,
|
||||
) -> list[MotionSearchResult]:
|
||||
"""Search for motion sequentially across segments, streaming results."""
|
||||
all_results: list[MotionSearchResult] = []
|
||||
total_frames = 0
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: starting sequential motion search across %d segments",
|
||||
self.job.id,
|
||||
len(recordings),
|
||||
)
|
||||
|
||||
self.job.results = {"results": [], "total_frames_processed": 0}
|
||||
|
||||
for recording in recordings:
|
||||
if self.cancel_event.is_set():
|
||||
break
|
||||
|
||||
try:
|
||||
results, frames = self._process_recording_for_motion(
|
||||
recording.path,
|
||||
recording.start_time,
|
||||
recording.end_time,
|
||||
self.job.start_time_range,
|
||||
self.job.end_time_range,
|
||||
polygon_mask,
|
||||
self.job.threshold,
|
||||
self.job.min_area,
|
||||
self.job.frame_skip,
|
||||
)
|
||||
all_results.extend(results)
|
||||
total_frames += frames
|
||||
|
||||
self.job.total_frames_processed = total_frames
|
||||
self.metrics.frames_decoded = total_frames
|
||||
self.metrics.segments_processed += 1
|
||||
|
||||
if results:
|
||||
all_results.sort(key=lambda x: x.timestamp)
|
||||
deduped = self._deduplicate_results(all_results)[
|
||||
: self.job.max_results
|
||||
]
|
||||
self.job.results = {
|
||||
"results": [r.to_dict() for r in deduped],
|
||||
"total_frames_processed": total_frames,
|
||||
}
|
||||
|
||||
self._broadcast_status()
|
||||
|
||||
if results and len(deduped) >= self.job.max_results:
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
self.metrics.segments_processed += 1
|
||||
self.metrics.segments_with_errors += 1
|
||||
self._broadcast_status()
|
||||
logger.warning("Error processing segment %s: %s", recording.path, e)
|
||||
|
||||
self.job.total_frames_processed = total_frames
|
||||
self.metrics.frames_decoded = total_frames
|
||||
|
||||
logger.debug(
|
||||
"Motion search job %s: sequential motion search complete, "
|
||||
"found %d raw results, decoded %d frames, %d segment errors",
|
||||
self.job.id,
|
||||
len(all_results),
|
||||
total_frames,
|
||||
self.metrics.segments_with_errors,
|
||||
)
|
||||
|
||||
all_results.sort(key=lambda x: x.timestamp)
|
||||
return self._deduplicate_results(all_results)[: self.job.max_results]
|
||||
|
||||
def _deduplicate_results(
|
||||
self, results: list[MotionSearchResult], min_gap: float = 1.0
|
||||
) -> list[MotionSearchResult]:
|
||||
"""Deduplicate results that are too close together."""
|
||||
if not results:
|
||||
return results
|
||||
|
||||
deduplicated: list[MotionSearchResult] = []
|
||||
last_timestamp = 0.0
|
||||
|
||||
for result in results:
|
||||
if result.timestamp - last_timestamp >= min_gap:
|
||||
deduplicated.append(result)
|
||||
last_timestamp = result.timestamp
|
||||
|
||||
return deduplicated
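
The `_deduplicate_results` pass above keeps only results spaced at least `min_gap` seconds apart. A minimal self-contained sketch of that logic, using a hypothetical `dedupe` helper over plain timestamps instead of `MotionSearchResult` objects:

```python
# Sketch: given timestamps sorted ascending, keep a timestamp only if it is
# at least `min_gap` seconds after the last kept one.
def dedupe(timestamps: list[float], min_gap: float = 1.0) -> list[float]:
    kept: list[float] = []
    last = float("-inf")
    for t in timestamps:
        if t - last >= min_gap:
            kept.append(t)
            last = t
    return kept

print(dedupe([10.0, 10.4, 10.9, 12.0, 12.5, 14.0]))  # [10.0, 12.0, 14.0]
```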
|
||||
|
||||
def _process_recording_for_motion(
|
||||
self,
|
||||
recording_path: str,
|
||||
recording_start: float,
|
||||
recording_end: float,
|
||||
search_start: float,
|
||||
search_end: float,
|
||||
polygon_mask: np.ndarray,
|
||||
threshold: int,
|
||||
min_area: float,
|
||||
frame_skip: int,
|
||||
) -> tuple[list[MotionSearchResult], int]:
|
||||
"""Process a single recording file for motion detection.
|
||||
|
||||
This method is designed to be called from a thread pool.
|
||||
|
||||
Args:
|
||||
min_area: Minimum change area as a percentage of the ROI (0-100).
|
||||
"""
|
||||
results: list[MotionSearchResult] = []
|
||||
frames_processed = 0
|
||||
|
||||
if not os.path.exists(recording_path):
|
||||
logger.warning("Recording file not found: %s", recording_path)
|
||||
return results, frames_processed
|
||||
|
||||
cap = cv2.VideoCapture(recording_path)
|
||||
if not cap.isOpened():
|
||||
logger.error("Could not open recording: %s", recording_path)
|
||||
return results, frames_processed
|
||||
|
||||
try:
|
||||
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
|
||||
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
|
||||
recording_duration = recording_end - recording_start
|
||||
|
||||
# Calculate frame range
|
||||
start_offset = max(0, search_start - recording_start)
|
||||
end_offset = min(recording_duration, search_end - recording_start)
|
||||
start_frame = int(start_offset * fps)
|
||||
end_frame = int(end_offset * fps)
|
||||
start_frame = max(0, min(start_frame, total_frames - 1))
|
||||
end_frame = max(0, min(end_frame, total_frames))
|
||||
|
||||
if start_frame >= end_frame:
|
||||
return results, frames_processed
|
||||
|
||||
cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
|
||||
|
||||
# Get ROI bounding box
|
||||
roi_bbox = cv2.boundingRect(polygon_mask)
|
||||
roi_x, roi_y, roi_w, roi_h = roi_bbox
|
||||
|
||||
prev_frame_gray = None
|
||||
frame_step = max(frame_skip, 1)
|
||||
frame_idx = start_frame
|
||||
|
||||
while frame_idx < end_frame:
|
||||
if self._should_stop():
|
||||
break
|
||||
|
||||
ret, frame = cap.read()
|
||||
if not ret:
|
||||
frame_idx += 1
|
||||
continue
|
||||
|
||||
if (frame_idx - start_frame) % frame_step != 0:
|
||||
frame_idx += 1
|
||||
continue
|
||||
|
||||
frames_processed += 1
|
||||
|
||||
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
|
||||
|
||||
# Handle frame dimension changes
|
||||
if gray.shape != polygon_mask.shape:
|
||||
resized_mask = cv2.resize(
    polygon_mask, (gray.shape[1], gray.shape[0]), interpolation=cv2.INTER_NEAREST
)
|
||||
current_bbox = cv2.boundingRect(resized_mask)
|
||||
else:
|
||||
resized_mask = polygon_mask
|
||||
current_bbox = roi_bbox
|
||||
|
||||
roi_x, roi_y, roi_w, roi_h = current_bbox
|
||||
cropped_gray = gray[roi_y : roi_y + roi_h, roi_x : roi_x + roi_w]
|
||||
cropped_mask = resized_mask[
|
||||
roi_y : roi_y + roi_h, roi_x : roi_x + roi_w
|
||||
]
|
||||
|
||||
cropped_mask_area = np.count_nonzero(cropped_mask)
|
||||
if cropped_mask_area == 0:
|
||||
frame_idx += 1
|
||||
continue
|
||||
|
||||
# Convert percentage to pixel count for this ROI
|
||||
min_area_pixels = int((min_area / 100.0) * cropped_mask_area)
|
||||
|
||||
masked_gray = cv2.bitwise_and(
|
||||
cropped_gray, cropped_gray, mask=cropped_mask
|
||||
)
|
||||
|
||||
if prev_frame_gray is not None:
|
||||
diff = cv2.absdiff(prev_frame_gray, masked_gray)
|
||||
diff_blurred = cv2.GaussianBlur(diff, (3, 3), 0)
|
||||
_, thresh = cv2.threshold(
|
||||
diff_blurred, threshold, 255, cv2.THRESH_BINARY
|
||||
)
|
||||
thresh_dilated = cv2.dilate(thresh, None, iterations=1)
|
||||
thresh_masked = cv2.bitwise_and(
|
||||
thresh_dilated, thresh_dilated, mask=cropped_mask
|
||||
)
|
||||
|
||||
change_pixels = cv2.countNonZero(thresh_masked)
|
||||
if change_pixels > min_area_pixels:
|
||||
contours, _ = cv2.findContours(
|
||||
thresh_masked, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
|
||||
)
|
||||
total_change_area = sum(
|
||||
cv2.contourArea(c)
|
||||
for c in contours
|
||||
if cv2.contourArea(c) >= min_area_pixels
|
||||
)
|
||||
if total_change_area > 0:
|
||||
frame_time_offset = (frame_idx - start_frame) / fps
|
||||
timestamp = (
|
||||
recording_start + start_offset + frame_time_offset
|
||||
)
|
||||
change_percentage = (
|
||||
total_change_area / cropped_mask_area
|
||||
) * 100
|
||||
results.append(
|
||||
MotionSearchResult(
|
||||
timestamp=timestamp,
|
||||
change_percentage=round(change_percentage, 2),
|
||||
)
|
||||
)
|
||||
|
||||
prev_frame_gray = masked_gray
|
||||
frame_idx += 1
|
||||
|
||||
finally:
|
||||
cap.release()
|
||||
|
||||
logger.debug(
|
||||
"Motion search segment complete: %s, %d frames processed, %d results found",
|
||||
recording_path,
|
||||
frames_processed,
|
||||
len(results),
|
||||
)
|
||||
return results, frames_processed
|
||||
|
||||
|
||||
# Module-level state for managing per-camera jobs
|
||||
_motion_search_jobs: dict[str, tuple[MotionSearchJob, threading.Event]] = {}
|
||||
_jobs_lock = threading.Lock()
|
||||
|
||||
|
||||
def stop_all_motion_search_jobs() -> None:
|
||||
"""Cancel all running motion search jobs for clean shutdown."""
|
||||
with _jobs_lock:
|
||||
for job_id, (job, cancel_event) in _motion_search_jobs.items():
|
||||
if job.status in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
|
||||
cancel_event.set()
|
||||
logger.debug("Signalling motion search job %s to stop", job_id)
|
||||
|
||||
|
||||
def start_motion_search_job(
|
||||
config: FrigateConfig,
|
||||
camera_name: str,
|
||||
start_time: float,
|
||||
end_time: float,
|
||||
polygon_points: list[list[float]],
|
||||
threshold: int = 30,
|
||||
min_area: float = 5.0,
|
||||
frame_skip: int = 5,
|
||||
parallel: bool = False,
|
||||
max_results: int = 25,
|
||||
) -> str:
|
||||
"""Start a new motion search job.
|
||||
|
||||
Returns the job ID.
|
||||
"""
|
||||
job = MotionSearchJob(
|
||||
camera=camera_name,
|
||||
start_time_range=start_time,
|
||||
end_time_range=end_time,
|
||||
polygon_points=polygon_points,
|
||||
threshold=threshold,
|
||||
min_area=min_area,
|
||||
frame_skip=frame_skip,
|
||||
parallel=parallel,
|
||||
max_results=max_results,
|
||||
)
|
||||
|
||||
cancel_event = threading.Event()
|
||||
|
||||
with _jobs_lock:
|
||||
_motion_search_jobs[job.id] = (job, cancel_event)
|
||||
|
||||
set_current_job(job)
|
||||
|
||||
runner = MotionSearchRunner(job, config, cancel_event)
|
||||
runner.start()
|
||||
|
||||
logger.debug(
|
||||
"Started motion search job %s for camera %s: "
|
||||
"time_range=%.1f-%.1f, threshold=%d, min_area=%.1f%%, "
|
||||
"frame_skip=%d, parallel=%s, max_results=%d, polygon_points=%d vertices",
|
||||
job.id,
|
||||
camera_name,
|
||||
start_time,
|
||||
end_time,
|
||||
threshold,
|
||||
min_area,
|
||||
frame_skip,
|
||||
parallel,
|
||||
max_results,
|
||||
len(polygon_points),
|
||||
)
|
||||
return job.id
|
||||
|
||||
|
||||
def get_motion_search_job(job_id: str) -> Optional[MotionSearchJob]:
|
||||
"""Get a motion search job by ID."""
|
||||
with _jobs_lock:
|
||||
job_entry = _motion_search_jobs.get(job_id)
|
||||
if job_entry:
|
||||
return job_entry[0]
|
||||
# Check completed jobs via manager
|
||||
return get_job_by_id("motion_search", job_id)
|
||||
|
||||
|
||||
def cancel_motion_search_job(job_id: str) -> bool:
|
||||
"""Cancel a motion search job.
|
||||
|
||||
Returns True if cancellation was initiated, False if job not found.
|
||||
"""
|
||||
with _jobs_lock:
|
||||
job_entry = _motion_search_jobs.get(job_id)
|
||||
if not job_entry:
|
||||
return False
|
||||
|
||||
job, cancel_event = job_entry
|
||||
|
||||
if job.status not in (JobStatusTypesEnum.queued, JobStatusTypesEnum.running):
|
||||
# Already finished
|
||||
return True
|
||||
|
||||
cancel_event.set()
|
||||
job.status = JobStatusTypesEnum.cancelled
|
||||
job_payload = job.to_dict()
|
||||
logger.info("Cancelled motion search job %s", job_id)
|
||||
|
||||
requestor: Optional[InterProcessRequestor] = None
|
||||
try:
|
||||
requestor = InterProcessRequestor()
|
||||
requestor.send_data(UPDATE_JOB_STATE, job_payload)
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
"Failed to broadcast cancelled motion search job %s: %s", job_id, e
|
||||
)
|
||||
finally:
|
||||
if requestor:
|
||||
requestor.stop()
|
||||
|
||||
return True
|
||||
@@ -78,7 +78,6 @@ class Recordings(Model):
|
||||
dBFS = IntegerField(null=True)
|
||||
segment_size = FloatField(default=0) # this should be stored as MB
|
||||
regions = IntegerField(null=True)
|
||||
motion_heatmap = JSONField(null=True) # 16x16 grid, 256 values (0-255)
|
||||
|
||||
|
||||
class ExportCase(Model):
|
||||
|
||||
@@ -176,32 +176,11 @@ class ImprovedMotionDetector(MotionDetector):
|
||||
motion_boxes = []
|
||||
pct_motion = 0
|
||||
|
||||
# skip motion entirely if the scene change percentage exceeds configured
|
||||
# threshold. this is useful to ignore lighting storms, IR mode switches,
|
||||
# etc. rather than registering them as brief motion and then recalibrating.
|
||||
# note: skipping means the frame is dropped and **no recording will be
|
||||
# created**, which could hide a legitimate object if the camera is actively
|
||||
# auto‑tracking. the alternative is to allow motion and accept a small
|
||||
# recording that can be reviewed in the timeline. disabled by default (None).
|
||||
if (
|
||||
self.config.skip_motion_threshold is not None
|
||||
and pct_motion > self.config.skip_motion_threshold
|
||||
):
|
||||
# force a recalibration so we transition to the new background
|
||||
self.calibrating = True
|
||||
return []
|
||||
|
||||
# once the motion is less than 5% and the number of contours is <= 4, assume it's calibrated
if pct_motion < 0.05 and len(motion_boxes) <= 4:
|
||||
self.calibrating = False
|
||||
|
||||
# if calibrating or the motion contours are > 80% of the image area
|
||||
# (lightning, ir, ptz) recalibrate. the lightning threshold does **not**
|
||||
# stop motion detection entirely; it simply halts additional processing for
|
||||
# the current frame once the percentage crosses the threshold. this helps
|
||||
# reduce false positive object detections and CPU usage during high‑motion
|
||||
# events. recordings continue to be generated because users expect data
|
||||
# while a PTZ camera is moving.
|
||||
if self.calibrating or pct_motion > self.config.lightning_threshold:
|
||||
self.calibrating = True
|
||||
|
||||
|
||||
@@ -50,13 +50,11 @@ class SegmentInfo:
|
||||
active_object_count: int,
|
||||
region_count: int,
|
||||
average_dBFS: int,
|
||||
motion_heatmap: dict[str, int] | None = None,
|
||||
) -> None:
|
||||
self.motion_count = motion_count
|
||||
self.active_object_count = active_object_count
|
||||
self.region_count = region_count
|
||||
self.average_dBFS = average_dBFS
|
||||
self.motion_heatmap = motion_heatmap
|
||||
|
||||
def should_discard_segment(self, retain_mode: RetainModeEnum) -> bool:
|
||||
keep = False
|
||||
@@ -456,59 +454,6 @@ class RecordingMaintainer(threading.Thread):
|
||||
if end_time < retain_cutoff:
|
||||
self.drop_segment(cache_path)
|
||||
|
||||
def _compute_motion_heatmap(
|
||||
self, camera: str, motion_boxes: list[tuple[int, int, int, int]]
|
||||
) -> dict[str, int] | None:
|
||||
"""Compute a 16x16 motion intensity heatmap from motion boxes.
|
||||
|
||||
Returns a sparse dict mapping cell index (as string) to intensity (1-255).
|
||||
Only cells with motion are included.
|
||||
|
||||
Args:
|
||||
camera: Camera name to get detect dimensions from.
|
||||
motion_boxes: List of (x1, y1, x2, y2) pixel coordinates.
|
||||
|
||||
Returns:
|
||||
Sparse dict like {"45": 3, "46": 5}, or None if no boxes.
|
||||
"""
|
||||
if not motion_boxes:
|
||||
return None
|
||||
|
||||
camera_config = self.config.cameras.get(camera)
|
||||
if not camera_config:
|
||||
return None
|
||||
|
||||
frame_width = camera_config.detect.width
|
||||
frame_height = camera_config.detect.height
|
||||
|
||||
if frame_width <= 0 or frame_height <= 0:
|
||||
return None
|
||||
|
||||
GRID_SIZE = 16
|
||||
counts: dict[int, int] = {}
|
||||
|
||||
for box in motion_boxes:
|
||||
if len(box) < 4:
|
||||
continue
|
||||
x1, y1, x2, y2 = box
|
||||
|
||||
# Convert pixel coordinates to grid cells
|
||||
grid_x1 = max(0, int((x1 / frame_width) * GRID_SIZE))
|
||||
grid_y1 = max(0, int((y1 / frame_height) * GRID_SIZE))
|
||||
grid_x2 = min(GRID_SIZE - 1, int((x2 / frame_width) * GRID_SIZE))
|
||||
grid_y2 = min(GRID_SIZE - 1, int((y2 / frame_height) * GRID_SIZE))
|
||||
|
||||
for y in range(grid_y1, grid_y2 + 1):
|
||||
for x in range(grid_x1, grid_x2 + 1):
|
||||
idx = y * GRID_SIZE + x
|
||||
counts[idx] = min(255, counts.get(idx, 0) + 1)
|
||||
|
||||
if not counts:
|
||||
return None
|
||||
|
||||
# Convert to string keys for JSON storage
|
||||
return {str(k): v for k, v in counts.items()}
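
For reference, the grid-bucketing math in `_compute_motion_heatmap` can be exercised in isolation. An illustrative sketch using a hypothetical standalone `heatmap_from_boxes` helper and an assumed 640x480 detect resolution (neither is code from the diff):

```python
# Sketch: bucket motion boxes in pixel coordinates into the sparse 16x16
# heatmap format stored per recording segment ({"cell_index": count}).
GRID_SIZE = 16

def heatmap_from_boxes(boxes, frame_w, frame_h):
    counts: dict[int, int] = {}
    for x1, y1, x2, y2 in boxes:
        gx1 = max(0, int((x1 / frame_w) * GRID_SIZE))
        gy1 = max(0, int((y1 / frame_h) * GRID_SIZE))
        gx2 = min(GRID_SIZE - 1, int((x2 / frame_w) * GRID_SIZE))
        gy2 = min(GRID_SIZE - 1, int((y2 / frame_h) * GRID_SIZE))
        for y in range(gy1, gy2 + 1):
            for x in range(gx1, gx2 + 1):
                idx = y * GRID_SIZE + x
                counts[idx] = min(255, counts.get(idx, 0) + 1)
    return {str(k): v for k, v in counts.items()}

# A 60x20 box at the top-left of a 640x480 frame covers cells 0 and 1.
print(heatmap_from_boxes([(0, 0, 60, 20)], 640, 480))
```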
|
||||
|
||||
def segment_stats(
|
||||
self, camera: str, start_time: datetime.datetime, end_time: datetime.datetime
|
||||
) -> SegmentInfo:
|
||||
@@ -516,8 +461,6 @@ class RecordingMaintainer(threading.Thread):
|
||||
active_count = 0
|
||||
region_count = 0
|
||||
motion_count = 0
|
||||
all_motion_boxes: list[tuple[int, int, int, int]] = []
|
||||
|
||||
for frame in self.object_recordings_info[camera]:
|
||||
# frame is after end time of segment
|
||||
if frame[0] > end_time.timestamp():
|
||||
@ -536,8 +479,6 @@ class RecordingMaintainer(threading.Thread):
|
||||
)
|
||||
motion_count += len(frame[2])
|
||||
region_count += len(frame[3])
|
||||
# Collect motion boxes for heatmap computation
|
||||
all_motion_boxes.extend(frame[2])
|
||||
|
||||
audio_values = []
|
||||
for frame in self.audio_recordings_info[camera]:
|
||||
@ -557,14 +498,8 @@ class RecordingMaintainer(threading.Thread):
|
||||
|
||||
average_dBFS = 0 if not audio_values else np.average(audio_values)
|
||||
|
||||
motion_heatmap = self._compute_motion_heatmap(camera, all_motion_boxes)
|
||||
|
||||
return SegmentInfo(
|
||||
motion_count,
|
||||
active_count,
|
||||
region_count,
|
||||
round(average_dBFS),
|
||||
motion_heatmap,
|
||||
motion_count, active_count, region_count, round(average_dBFS)
|
||||
)
|
||||
|
||||
async def move_segment(
|
||||
@ -655,7 +590,6 @@ class RecordingMaintainer(threading.Thread):
|
||||
Recordings.regions.name: segment_info.region_count,
|
||||
Recordings.dBFS.name: segment_info.average_dBFS,
|
||||
Recordings.segment_size.name: segment_size,
|
||||
Recordings.motion_heatmap.name: segment_info.motion_heatmap,
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Unable to store recording segment {cache_path}")
|
||||
|
||||
@@ -1,91 +0,0 @@
import unittest

import numpy as np

from frigate.config.camera.motion import MotionConfig
from frigate.motion.improved_motion import ImprovedMotionDetector


class TestImprovedMotionDetector(unittest.TestCase):
    def setUp(self):
        # small frame for testing; actual frames are grayscale
        self.frame_shape = (100, 100)  # height, width
        self.config = MotionConfig()
        # motion detector assumes a rasterized_mask attribute exists on config
        # when update_mask() is called; add one manually by bypassing pydantic.
        object.__setattr__(
            self.config,
            "rasterized_mask",
            np.ones((self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8),
        )

        # create minimal PTZ metrics stub to satisfy detector checks
        class _Stub:
            def __init__(self, value=False):
                self.value = value

            def is_set(self):
                return bool(self.value)

        class DummyPTZ:
            def __init__(self):
                self.autotracker_enabled = _Stub(False)
                self.motor_stopped = _Stub(False)
                self.stop_time = _Stub(0)

        self.detector = ImprovedMotionDetector(
            self.frame_shape, self.config, fps=30, ptz_metrics=DummyPTZ()
        )

        # establish a baseline frame (all zeros)
        base_frame = np.zeros(
            (self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8
        )
        self.detector.detect(base_frame)

    def _half_change_frame(self) -> np.ndarray:
        """Produce a frame where roughly half of the pixels are different."""
        frame = np.zeros((self.frame_shape[0], self.frame_shape[1]), dtype=np.uint8)
        # flip the top half to white
        frame[: self.frame_shape[0] // 2, :] = 255
        return frame

    def test_skip_motion_threshold_default(self):
        """With the default (None) setting, motion should always be reported."""
        frame = self._half_change_frame()
        boxes = self.detector.detect(frame)
        self.assertTrue(
            boxes, "Expected motion boxes when skip threshold is unset (disabled)"
        )

    def test_skip_motion_threshold_applied(self):
        """Setting a low skip threshold should prevent any boxes from being returned."""
        # change the config and update the detector reference
        self.config.skip_motion_threshold = 0.4
        self.detector.config = self.config
        self.detector.update_mask()

        frame = self._half_change_frame()
        boxes = self.detector.detect(frame)
        self.assertEqual(
            boxes,
            [],
            "Motion boxes should be empty when scene change exceeds skip threshold",
        )

    def test_skip_motion_threshold_does_not_affect_calibration(self):
        """Even when skipping, the detector should go into calibrating state."""
        self.config.skip_motion_threshold = 0.4
        self.detector.config = self.config
        self.detector.update_mask()

        frame = self._half_change_frame()
        _ = self.detector.detect(frame)
        self.assertTrue(
            self.detector.calibrating,
            "Detector should be in calibrating state after skip event",
        )


if __name__ == "__main__":
    unittest.main()
@@ -110,7 +110,6 @@ def ensure_torch_dependencies() -> bool:
                "pip",
                "install",
                "--break-system-packages",
                "setuptools<81",
                "torch",
                "torchvision",
            ],
@@ -1,34 +0,0 @@
"""Peewee migrations -- 035_add_motion_heatmap.py.

Some examples (model - class or model name)::

    > Model = migrator.orm['model_name']            # Return model in current state by name

    > migrator.sql(sql)                             # Run custom SQL
    > migrator.python(func, *args, **kwargs)        # Run python code
    > migrator.create_model(Model)                  # Create a model (could be used as decorator)
    > migrator.remove_model(model, cascade=True)    # Remove a model
    > migrator.add_fields(model, **fields)          # Add fields to a model
    > migrator.change_fields(model, **fields)       # Change fields
    > migrator.remove_fields(model, *field_names, cascade=True)
    > migrator.rename_field(model, old_field_name, new_field_name)
    > migrator.rename_table(model, new_table_name)
    > migrator.add_index(model, *col_names, unique=False)
    > migrator.drop_index(model, *col_names)
    > migrator.add_not_null(model, *field_names)
    > migrator.drop_not_null(model, *field_names)
    > migrator.add_default(model, field_name, default)

"""

import peewee as pw

SQL = pw.SQL


def migrate(migrator, database, fake=False, **kwargs):
    migrator.sql('ALTER TABLE "recordings" ADD COLUMN "motion_heatmap" TEXT NULL')


def rollback(migrator, database, fake=False, **kwargs):
    pass
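The removed migration above adds a single nullable TEXT column. As a minimal standalone sketch of the resulting round trip, the simplified schema below and the JSON encoding of the heatmap are assumptions for illustration; only the ALTER TABLE statement comes from the diff:

```python
import json
import sqlite3

# In-memory stand-in for Frigate's recordings table (simplified schema;
# the real table has many more columns).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recordings (id TEXT PRIMARY KEY, camera TEXT)")

# The migration from the diff: add a nullable TEXT column for the heatmap.
conn.execute('ALTER TABLE "recordings" ADD COLUMN "motion_heatmap" TEXT NULL')

# The sparse heatmap dict is stored as JSON text in that column.
heatmap = {"45": 3, "46": 5}
conn.execute(
    "INSERT INTO recordings (id, camera, motion_heatmap) VALUES (?, ?, ?)",
    ("seg1", "front", json.dumps(heatmap)),
)
row = conn.execute(
    "SELECT motion_heatmap FROM recordings WHERE id = 'seg1'"
).fetchone()
```

Because the column is nullable, segments with no motion simply leave it NULL and older rows need no backfill.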
web/package-lock.json (generated): 63 changes
@@ -22,7 +22,6 @@
        "@radix-ui/react-hover-card": "^1.1.6",
        "@radix-ui/react-label": "^2.1.2",
        "@radix-ui/react-popover": "^1.1.6",
        "@radix-ui/react-progress": "^1.1.8",
        "@radix-ui/react-radio-group": "^1.2.3",
        "@radix-ui/react-scroll-area": "^1.2.3",
        "@radix-ui/react-select": "^2.1.6",
@@ -2923,68 +2922,6 @@
      }
    },
    "node_modules/@radix-ui/react-progress": {
      "version": "1.1.8",
      "resolved": "https://registry.npmjs.org/@radix-ui/react-progress/-/react-progress-1.1.8.tgz",
      "integrity": "sha512-+gISHcSPUJ7ktBy9RnTqbdKW78bcGke3t6taawyZ71pio1JewwGSJizycs7rLhGTvMJYCQB1DBK4KQsxs7U8dA==",
      "license": "MIT",
      "dependencies": {
        "@radix-ui/react-context": "1.1.3",
        "@radix-ui/react-primitive": "2.1.4"
      },
      "peerDependencies": {
        "@types/react": "*",
        "@types/react-dom": "*",
        "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
        "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
      },
      "peerDependenciesMeta": {
        "@types/react": {
          "optional": true
        },
        "@types/react-dom": {
          "optional": true
        }
      }
    },
    "node_modules/@radix-ui/react-progress/node_modules/@radix-ui/react-context": {
      "version": "1.1.3",
      "resolved": "https://registry.npmjs.org/@radix-ui/react-context/-/react-context-1.1.3.tgz",
      "integrity": "sha512-ieIFACdMpYfMEjF0rEf5KLvfVyIkOz6PDGyNnP+u+4xQ6jny3VCgA4OgXOwNx2aUkxn8zx9fiVcM8CfFYv9Lxw==",
      "license": "MIT",
      "peerDependencies": {
        "@types/react": "*",
        "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
      },
      "peerDependenciesMeta": {
        "@types/react": {
          "optional": true
        }
      }
    },
    "node_modules/@radix-ui/react-progress/node_modules/@radix-ui/react-primitive": {
      "version": "2.1.4",
      "resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.4.tgz",
      "integrity": "sha512-9hQc4+GNVtJAIEPEqlYqW5RiYdrr8ea5XQ0ZOnD6fgru+83kqT15mq2OCcbe8KnjRZl5vF3ks69AKz3kh1jrhg==",
      "license": "MIT",
      "dependencies": {
        "@radix-ui/react-slot": "1.2.4"
      },
      "peerDependencies": {
        "@types/react": "*",
        "@types/react-dom": "*",
        "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
        "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
      },
      "peerDependenciesMeta": {
        "@types/react": {
          "optional": true
        },
        "@types/react-dom": {
          "optional": true
        }
      }
    },
    "node_modules/@radix-ui/react-radio-group": {
      "version": "1.3.8",
      "resolved": "https://registry.npmjs.org/@radix-ui/react-radio-group/-/react-radio-group-1.3.8.tgz",
@@ -28,7 +28,6 @@
    "@radix-ui/react-hover-card": "^1.1.6",
    "@radix-ui/react-label": "^2.1.2",
    "@radix-ui/react-popover": "^1.1.6",
    "@radix-ui/react-progress": "^1.1.8",
    "@radix-ui/react-radio-group": "^1.2.3",
    "@radix-ui/react-scroll-area": "^1.2.3",
    "@radix-ui/react-select": "^2.1.6",
@@ -92,10 +92,10 @@
    },
    "fps": {
      "label": "Detecta FPS",
      "description": "Fotogrames desitjats per segon per executar la detecció; els valors més baixos redueixen l'ús de la CPU (el valor recomanat és 5, només estableix més alt - com a màxim 10 - si el seguiment d'objectes en moviment extremadament ràpid)."
      "description": "Marcs desitjats per segon per executar la detecció; els valors més baixos redueixen l'ús de la CPU (el valor recomanat és 5, només estableix més alt - com a màxim 10 - si el seguiment d'objectes en moviment extremadament ràpid)."
    },
    "min_initialized": {
      "label": "Fotogrames d'inicialització mínims",
      "label": "Marcs d'inicialització mínims",
      "description": "Nombre d'incidències de detecció consecutives necessàries abans de crear un objecte rastrejat. Incrementa per a reduir les falses inicialitzacions. El valor per defecte és fps dividit per 2."
    },
    "max_disappeared": {
@@ -118,10 +118,10 @@
      "description": "Limita quant de temps es segueixen els objectes estacionaris abans de descartar-los.",
      "default": {
        "label": "Fotogrames màxims predeterminats",
        "description": "Fotogrames màxims predeterminats per a fer el seguiment d'un objecte estacionari abans d'aturar-se."
        "description": "Marcs màxims predeterminats per a fer el seguiment d'un objecte estacionari abans d'aturar-se."
      },
      "objects": {
        "label": "Fotogrames màxims de l'objecte",
        "label": "Marcs màxims de l'objecte",
        "description": "Sobreescriu l'objecte per als fotogrames màxims per fer un seguiment dels objectes estacionaris."
      }
    },

@@ -84,7 +84,7 @@
    }
  },
  "ui": {
    "label": "Interfície",
    "label": "interfície",
    "description": "Preferències de la interfície d'usuari com ara la zona horària, el format de l'hora/data i les unitats.",
    "timezone": {
      "label": "Zona horària",
@@ -801,7 +801,7 @@
    "description": "Configuració de la base de dades SQLite utilitzada per Frigate per emmagatzemar objectes rastrejats i enregistrar metadades.",
    "path": {
      "label": "Ruta a la base de dades",
      "description": "Ruta del sistema de fitxers on s'emmagatzemarà el fitxer de base de dades SQLite de Frigate."
      "description": "Ruta del sistema de fitxers on s'emmagatzemarà el fitxer de base de dades SQLite de la fragata."
    }
  },
  "go2rtc": {
@@ -825,7 +825,7 @@
    },
    "topic_prefix": {
      "label": "Prefix del tema",
      "description": "El prefix del tema MQTT per a tots els temes de Frigate; ha de ser únic si s'executen diverses instàncies."
      "description": "El prefix del tema MQTT per a tots els temes de la fragata; ha de ser únic si s'executen diverses instàncies."
    },
    "client_id": {
      "label": "ID del client",
@@ -892,7 +892,7 @@
    "description": "Configuració específica d'IPv6 per als serveis de xarxa de fragate.",
    "enabled": {
      "label": "Habilita IPv6",
      "description": "Activa el suport IPv6 per als serveis de Frigate (API i UI) quan sigui aplicable"
      "description": "Activa el suport IPv6 per als serveis de fragata (API i UI) quan sigui aplicable"
    }
  },
  "listen": {
@@ -900,20 +900,20 @@
    "description": "Configuració per a ports d'escolta interns i externs. Això és per a usuaris avançats. Per a la majoria de casos d'ús es recomana canviar la secció de ports del fitxer Docker.",
    "internal": {
      "label": "Port intern",
      "description": "Port d'escolta intern per a la Frigate (predeterminat 5000)."
      "description": "Port d'escolta intern per a la fragata (predeterminat 5000)."
    },
    "external": {
      "label": "Port extern",
      "description": "Port d'escolta extern per a la Frigate (predeterminat 8971)."
      "description": "Port d'escolta extern per a la fragata (predeterminat 8971)."
    }
  }
},
"proxy": {
  "label": "Proxy",
  "description": "Paràmetres per a integrar Frigate darrere d'un servidor intermediari invers que passa les capçaleres d'usuari autenticades.",
  "description": "Paràmetres per a integrar la fragata darrere d'un servidor intermediari invers que passa les capçaleres d'usuari autenticades.",
  "header_map": {
    "label": "Mapeig de capçaleres",
    "description": "Mapa les capçaleres del servidor intermediari entrant a l'usuari de Frigate i als camps de rol per a l'autenticació basada en el servidor intermediari.",
    "description": "Mapa les capçaleres del servidor intermediari entrant a l'usuari de la fragata i als camps de rol per a l'autenticació basada en el servidor intermediari.",
    "user": {
      "label": "Capçalera d'usuari",
      "description": "Capçalera que conté el nom d'usuari autenticat proporcionat pel servidor intermediari de la font."
@@ -973,7 +973,7 @@
    },
    "version_check": {
      "label": "Comprovació de versió",
      "description": "Activa una comprovació de sortida per detectar si hi ha disponible una versió de Frigate més nova."
      "description": "Activa una comprovació de sortida per detectar si hi ha disponible una versió de la fragata més nova."
    }
  },
  "tls": {
@@ -1927,7 +1927,7 @@
    },
    "idle_heartbeat_fps": {
      "label": "FPS de batec cardíac inactiu",
      "description": "Fotogrames per segon per a tornar a enviar l'últim fotograma compost Birdseye quan estigui inactiu; establert a 0 per a desactivar."
      "description": "Marcs per segon per a tornar a enviar l'últim fotograma compost Birdseye quan estigui inactiu; establert a 0 per a desactivar."
    },
    "order": {
      "label": "Posició",
@@ -1952,10 +1952,10 @@
    },
    "fps": {
      "label": "Detecta FPS",
      "description": "Fotogrames desitjats per segon per executar la detecció; els valors més baixos redueixen l'ús de la CPU (el valor recomanat és 5, només estableix més alt - com a màxim 10 - si el seguiment d'objectes en moviment extremadament ràpid)."
      "description": "Marcs desitjats per segon per executar la detecció; els valors més baixos redueixen l'ús de la CPU (el valor recomanat és 5, només estableix més alt - com a màxim 10 - si el seguiment d'objectes en moviment extremadament ràpid)."
    },
    "min_initialized": {
      "label": "Fotogrames d'inicialització mínims",
      "label": "Marcs d'inicialització mínims",
      "description": "Nombre d'incidències de detecció consecutives necessàries abans de crear un objecte rastrejat. Incrementa per a reduir les falses inicialitzacions. El valor per defecte és fps dividit per 2."
    },
    "max_disappeared": {
@@ -1978,10 +1978,10 @@
      "description": "Limita quant de temps es segueixen els objectes estacionaris abans de descartar-los.",
      "default": {
        "label": "Fotogrames màxims predeterminats",
        "description": "Fotogrames màxims predeterminats per a fer el seguiment d'un objecte estacionari abans d'aturar-se."
        "description": "Marcs màxims predeterminats per a fer el seguiment d'un objecte estacionari abans d'aturar-se."
      },
      "objects": {
        "label": "Fotogrames màxims de l'objecte",
        "label": "Marcs màxims de l'objecte",
        "description": "Sobreescriu l'objecte per als fotogrames màxims per fer un seguiment dels objectes estacionaris."
      }
    },
@@ -2042,7 +2042,7 @@
    "state_config": {
      "cameras": {
        "label": "Càmeres de classificació",
        "description": "Retalla per càmera i configuració per executar la classificació d'estat.",
        "description": "Escapçament per càmera i configuració per executar la classificació d'estat.",
        "crop": {
          "label": "Retalla la classificació",
          "description": "Retalla les coordenades a usar per a executar la classificació en aquesta càmera."
@@ -2050,7 +2050,7 @@
        },
        "motion": {
          "label": "Executa en moviment",
          "description": "Si és cert, executeu la classificació quan es detecti el moviment dins del retall especificat."
          "description": "Si és cert, executeu la classificació quan es detecti el moviment dins de l'escapçat especificat."
        },
        "interval": {
          "label": "Interval de classificació",
@@ -2104,7 +2104,7 @@
    },
    "debug_save_plates": {
      "label": "Desa les plaques de depuració",
      "description": "Desa les imatges retallades de la matrícula per a depurar el rendiment LPR."
      "description": "Desa les imatges d'escapçament de la placa per a depurar el rendiment LPR."
    },
    "device": {
      "label": "Dispositiu",
@@ -2158,7 +2158,7 @@
    },
    "crop": {
      "label": "Retalla la imatge",
      "description": "Retalla les imatges publicades a MQTT segons el quadre de delimitació de l'objecte detectat."
      "description": "Escapça les imatges publicades a MQTT a la caixa contenidora de l'objecte detectat."
    },
    "height": {
      "label": "Alçada de la imatge",
@@ -2182,7 +2182,7 @@
    },
    "dashboard": {
      "label": "Mostra a la interfície",
      "description": "Estableix si aquesta càmera és visible a tot arreu a la interfície d'usuari de Frigate. Desactivar això requerirà editar manualment la configuració per tornar a veure aquesta càmera a la interfície d'usuari."
      "description": "Estableix si aquesta càmera és visible a tot arreu a la interfície d'usuari de la Fragata. Desactivar això requerirà editar manualment la configuració per tornar a veure aquesta càmera a la interfície d'usuari."
    }
  }
}
@@ -123,18 +123,7 @@
    "on": "AN",
    "suspended": "Pausierte",
    "unsuspended": "fortsetzen",
    "continue": "Weiter",
    "add": "Hinzufügen",
    "applying": "Wird angewendet…",
    "undo": "Rückgängig",
    "copiedToClipboard": "In die Zwischenablage kopiert",
    "modified": "Verändert",
    "overridden": "Überschrieben",
    "resetToGlobal": "Auf Global zurückgesetzen",
    "resetToDefault": "Auf Werkseinstellungen zurücksetzten",
    "saveAll": "Alle speichern",
    "savingAll": "Alle werden gespeichert…",
    "undoAll": "Alle rückgängig"
    "continue": "Weiter"
  },
  "label": {
    "back": "Zurück",
@@ -1,32 +1 @@
{
  "label": "KameraEinstellungen",
  "name": {
    "label": "Name der Kamera",
    "description": "Kameraname ist erforderlich"
  },
  "enabled": {
    "label": "Aktiviert",
    "description": "Aktiviert"
  },
  "audio": {
    "label": "Audioereignisse",
    "description": "Einstellungen für audiobasierte Ereigniserkennung für diese Kamera.",
    "enabled": {
      "label": "Aktivieren der Audioerkennung",
      "description": "Aktivieren / Deaktivieren der audiobasierten Ereigniserkennung für diese Kamera."
    },
    "min_volume": {
      "label": "Mindestlautstärke"
    },
    "listen": {
      "description": "Liste der zu erkennenden Audioereignisse (z.B: bellen, Feueralarm, schreien, sprechen, rufen)."
    },
    "filters": {
      "label": "Audiofilter"
    }
  },
  "friendly_name": {
    "label": "Anzeigename",
    "description": "Kamera-Anzeigename in der Frigate-Benutzeroberfläche"
  }
}
{}
@@ -1,32 +1 @@
{
  "version": {
    "label": "Aktuelle Version der Konfiguration",
    "description": "Die Version Numerisch oder als Zeichenketten der aktiven Konfiguration, um Migrationen oder Formatänderungen zu erkennen."
  },
  "safe_mode": {
    "label": "abgesicherter Modus",
    "description": "Wenn aktiviert, starte Frigate im abgesicherten Modus mit reduzierten Features für die Fehlersuche."
  },
  "audio": {
    "label": "Audioereignisse",
    "enabled": {
      "label": "Aktivieren der Audioerkennung"
    },
    "min_volume": {
      "label": "Mindestlautstärke"
    },
    "listen": {
      "description": "Liste der zu erkennenden Audioereignisse (z.B: bellen, Feueralarm, schreien, sprechen, rufen)."
    },
    "filters": {
      "label": "Audiofilter"
    }
  },
  "environment_vars": {
    "label": "Umgebungsvariablen",
    "description": "Schlüssel-/Wertpaare für Umgebungsvariablen des Frigate-Prozesses in Home Assistant OS. Nicht-HAOS Benutzer müssen anstatt dessen Docker Umgebungsvariablen nutzen."
  },
  "logger": {
    "label": "Protokollierung"
  }
}
{}
@@ -1,26 +1 @@
{
  "audio": {
    "global": {
      "detection": "Globale Erkennung",
      "sensitivity": "Globale Empfindlichkeit"
    },
    "cameras": {
      "detection": "Erkennung",
      "sensitivity": "Empfindlichkeit"
    }
  },
  "timestamp_style": {
    "global": {
      "appearance": "Globale Darstellung"
    },
    "cameras": {
      "appearance": "Erscheinungsbild"
    }
  },
  "motion": {
    "global": {
      "sensitivity": "Globale Empfindlichkeit",
      "algorithm": "Globaler Algorithmus"
    }
  }
}
{}
@@ -1,32 +1 @@
{
  "maximum": "Darf nicht größer sein als {{limit}}",
  "minimum": "Darf nicht kleiner sein als {{limit}}",
  "exclusiveMinimum": "Muss größer sein als {{limit}}",
  "minLength": "Muss mindestens {{limit}} Zeichen lang sein",
  "maxLength": "Muss maximal {{limit}} Zeichen lang sein",
  "minItems": "Muss mindestens {{limit}} mal vorkommen",
  "exclusiveMaximum": "Muss kleiner sein als {{limit}}",
  "maxItems": "Muss maximal {{limit}} mal vorkommen",
  "pattern": "Ungültiges Format",
  "required": "Pflichtfeld",
  "type": "Ungültiger Wertetyp",
  "enum": "Muss einer der erlaubten Werte sein",
  "const": "Wert stimmt nicht mit erwarteter Konstante überein",
  "uniqueItems": "Alle Einträge müssen eindeutig sein",
  "format": "Ungültiges Format",
  "additionalProperties": "Unbekannte Eigenschaft ist nicht erlaubt",
  "oneOf": "Muss exakt mit einem der erlaubten Schemas übereinstimmen",
  "anyOf": "Muss mindestens mit einem der erlaubten Schemas übereinstimmen",
  "proxy": {
    "header_map": {
      "roleHeaderRequired": "Rollen-Header muss angegeben werden, wenn Rollen-Zuordnungen konfiguriert sind."
    }
  },
  "ffmpeg": {
    "inputs": {
      "rolesUnique": "Jede Rolle kann nur einem input stream zugeteilt werden.",
      "detectRequired": "Es muss mindestens ein input stream die Rolle 'erkennen' tragen.",
      "hwaccelDetectOnly": "Nur der input-stream mit der Rolle 'erkennen' kann Hardwarebeschleunigungs Argumente definieren."
    }
  }
}
{}
@@ -19,9 +19,5 @@
    "downloadVideo": "Video herunterladen",
    "editName": "Name ändern",
    "deleteExport": "Export löschen"
  },
  "headings": {
    "cases": "Fälle",
    "uncategorizedExports": "Unkategorisierte Exporte"
  }
}
@@ -5,7 +5,7 @@
  "camera": "Kameraeinstellungen - Frigate",
  "masksAndZones": "Masken- und Zoneneditor – Frigate",
  "object": "Debug - Frigate",
  "general": "Profileinstellungen - Frigate",
  "general": "UI-Einstellungen - Frigate",
  "frigatePlus": "Frigate+ Einstellungen – Frigate",
  "classification": "Klassifizierungseinstellungen – Frigate",
  "motionTuner": "Bewegungserkennungs-Optimierer – Frigate",
@@ -28,8 +28,7 @@
    "triggers": "Auslöser",
    "roles": "Rollen",
    "cameraManagement": "Verwaltung",
    "cameraReview": "Überprüfung",
    "system": "System"
    "cameraReview": "Überprüfung"
  },
  "dialog": {
    "unsavedChanges": {
@@ -42,7 +41,7 @@
    "noCamera": "Keine Kamera"
  },
  "general": {
    "title": "Profileinstellungen",
    "title": "Einstellungen der Benutzeroberfläche",
    "liveDashboard": {
      "title": "Live Übersicht",
      "playAlertVideos": {
@@ -409,7 +408,7 @@
      }
    },
    "motionMaskLabel": "Bewegungsmaske {{number}}",
    "objectMaskLabel": "Objektmaske {{number}}"
    "objectMaskLabel": "Objektmaske {{number}} ({{label}})"
  },
  "debug": {
    "objectShapeFilterDrawing": {
@@ -264,11 +264,7 @@
    },
    "lightning_threshold": {
      "label": "Lightning threshold",
      "description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events."
    },
    "skip_motion_threshold": {
      "label": "Skip motion threshold",
      "description": "If more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera auto-tracking an object. The trade-off is between dropping a few megabytes of recordings versus reviewing a couple short clips. Range 0.0 to 1.0."
      "description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0)."
    },
    "improve_contrast": {
      "label": "Improve contrast",
@@ -868,8 +864,7 @@
      "description": "A user-friendly name for the zone, displayed in the Frigate UI. If not set, a formatted version of the zone name will be used."
    },
    "enabled": {
      "label": "Enabled",
      "description": "Enable or disable this zone. Disabled zones are ignored at runtime."
      "label": "Whether this zone is active. Disabled zones are ignored at runtime."
    },
    "enabled_in_config": {
      "label": "Keep track of original state of zone."

@@ -1391,11 +1391,7 @@
    },
    "lightning_threshold": {
      "label": "Lightning threshold",
      "description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0). This does not prevent motion detection entirely; it merely causes the detector to stop analyzing additional frames once the threshold is exceeded. Motion-based recordings are still created during these events."
    },
    "skip_motion_threshold": {
      "label": "Skip motion threshold",
      "description": "If more than this fraction of the image changes in a single frame, the detector will return no motion boxes and immediately recalibrate. This can save CPU and reduce false positives during lightning, storms, etc., but may miss real events such as a PTZ camera auto-tracking an object. The trade-off is between dropping a few megabytes of recordings versus reviewing a couple short clips. Range 0.0 to 1.0."
      "description": "Threshold to detect and ignore brief lighting spikes (lower is more sensitive, values between 0.3 and 1.0)."
    },
    "improve_contrast": {
      "label": "Improve contrast",
@@ -61,25 +61,5 @@
  "detected": "detected",
  "normalActivity": "Normal",
  "needsReview": "Needs review",
  "securityConcern": "Security concern",
  "motionSearch": {
    "menuItem": "Motion search",
    "openMenu": "Camera options"
  },
  "motionPreviews": {
    "menuItem": "View motion previews",
    "title": "Motion previews: {{camera}}",
    "mobileSettingsTitle": "Motion Preview Settings",
    "mobileSettingsDesc": "Adjust playback speed and dimming, and choose a date to review motion-only clips.",
    "dim": "Dim",
    "dimAria": "Adjust dimming intensity",
    "dimDesc": "Increase dimming to increase motion area visibility.",
    "speed": "Speed",
    "speedAria": "Select preview playback speed",
    "speedDesc": "Choose how quickly preview clips play.",
    "back": "Back",
    "empty": "No previews available",
    "noPreview": "Preview unavailable",
    "seekAria": "Seek {{camera}} player to {{time}}"
  }
  "securityConcern": "Security concern"
}
@ -1,75 +0,0 @@
|
||||
{
|
||||
"documentTitle": "Motion Search - Frigate",
|
||||
"title": "Motion Search",
|
||||
"description": "Draw a polygon to define the region of interest, and specify a time range to search for motion changes within that region.",
|
||||
"selectCamera": "Motion Search is loading",
|
||||
"startSearch": "Start Search",
|
||||
"searchStarted": "Search started",
|
||||
"searchCancelled": "Search cancelled",
|
||||
"cancelSearch": "Cancel",
|
||||
"searching": "Search in progress.",
|
||||
"searchComplete": "Search complete",
|
||||
"noResultsYet": "Run a search to find motion changes in the selected region",
|
||||
"noChangesFound": "No pixel changes detected in the selected region",
|
||||
"changesFound_one": "Found {{count}} motion change",
|
||||
"changesFound_other": "Found {{count}} motion changes",
|
||||
"framesProcessed": "{{count}} frames processed",
|
||||
"jumpToTime": "Jump to this time",
|
||||
"results": "Results",
|
||||
"showSegmentHeatmap": "Heatmap",
|
||||
"newSearch": "New Search",
|
||||
"clearResults": "Clear Results",
|
||||
"clearROI": "Clear polygon",
|
||||
"polygonControls": {
|
||||
"points_one": "{{count}} point",
|
||||
"points_other": "{{count}} points",
|
||||
"undo": "Undo last point",
|
||||
"reset": "Reset polygon"
|
||||
},
|
||||
"motionHeatmapLabel": "Motion Heatmap",
|
||||
"dialog": {
|
||||
"title": "Motion Search",
|
||||
"cameraLabel": "Camera",
|
||||
"previewAlt": "Camera preview for {{camera}}"
|
||||
},
|
||||
"timeRange": {
|
||||
"title": "Search Range",
|
||||
"start": "Start time",
|
||||
"end": "End time"
|
||||
},
|
||||
"settings": {
|
||||
"title": "Search Settings",
|
||||
"parallelMode": "Parallel mode",
|
||||
"parallelModeDesc": "Scan multiple recording segments at the same time (faster, but significantly more CPU intensive)",
|
||||
"threshold": "Sensitivity Threshold",
|
||||
"thresholdDesc": "Lower values detect smaller changes (1-255)",
|
||||
"minArea": "Minimum Change Area",
|
||||
"minAreaDesc": "Minimum percentage of the region of interest that must change to be considered significant",
|
||||
"frameSkip": "Frame Skip",
|
||||
"frameSkipDesc": "Process every Nth frame. Set this to your camera's frame rate to process one frame per second (e.g. 5 for a 5 FPS camera, 30 for a 30 FPS camera). Higher values will be faster, but may miss short motion events.",
|
||||
"maxResults": "Maximum Results",
|
||||
"maxResultsDesc": "Stop after this many matching timestamps"
|
||||
},
|
||||
"errors": {
|
||||
"noCamera": "Please select a camera",
|
||||
"noROI": "Please draw a region of interest",
|
||||
"noTimeRange": "Please select a time range",
|
||||
"invalidTimeRange": "End time must be after start time",
|
||||
"searchFailed": "Search failed: {{message}}",
|
||||
"polygonTooSmall": "Polygon must have at least 3 points",
|
||||
"unknown": "Unknown error"
|
||||
},
|
||||
"changePercentage": "{{percentage}}% changed",
|
||||
"metrics": {
|
||||
"title": "Search Metrics",
|
||||
"segmentsScanned": "Segments scanned",
|
||||
"segmentsProcessed": "Processed",
|
||||
"segmentsSkippedInactive": "Skipped (no activity)",
|
||||
"segmentsSkippedHeatmap": "Skipped (no ROI overlap)",
|
||||
"fallbackFullRange": "Fallback full-range scan",
|
||||
"framesDecoded": "Frames decoded",
|
||||
"wallTime": "Search time",
|
||||
"segmentErrors": "Segment errors",
|
||||
"seconds": "{{seconds}}s"
|
||||
}
|
||||
}
|
@@ -83,8 +83,7 @@
"triggers": "Triggers",
"debug": "Debug",
"frigateplus": "Frigate+",
"mediaSync": "Media sync",
"regionGrid": "Region grid"
"maintenance": "Maintenance"
},
"dialog": {
"unsavedChanges": {
@@ -1233,16 +1232,6 @@
"previews": "Previews",
"exports": "Exports",
"recordings": "Recordings"
},
"regionGrid": {
"title": "Region Grid",
"desc": "The region grid is an optimization that learns where objects of different sizes typically appear in each camera's field of view. Frigate uses this data to efficiently size detection regions. The grid is automatically built over time from tracked object data.",
"clear": "Clear region grid",
"clearConfirmTitle": "Clear Region Grid",
"clearConfirmDesc": "Clearing the region grid is not recommended unless you have recently changed your detector model size or have changed your camera's physical position and are having object tracking issues. The grid will be automatically rebuilt over time as objects are tracked. A Frigate restart is required for changes to take effect.",
"clearSuccess": "Region grid cleared successfully",
"clearError": "Failed to clear region grid",
"restartRequired": "Restart required for region grid changes to take effect"
}
},
"configForm": {

@@ -1,29 +1 @@
{
"name": {
"label": "Nombre de cámara",
"description": "El nombre de la cámara es necesario"
},
"enabled": {
"label": "Habilitado",
"description": "Habilitado"
},
"audio": {
"label": "Eventos de audio",
"description": "Configuración para la detección de eventos basada en audio para esta cámara.",
"enabled": {
"label": "Habilitar la detección de audio",
"description": "Activar o deshabilitar la detección de eventos de audio para esta cámara."
},
"max_not_heard": {
"label": "Finalizar el tiempo de espera",
"description": "Cantidad de segundos sin el tipo de audio configurado antes de que finalice el evento de audio."
},
"min_volume": {
"label": "Volumen mínimo"
}
},
"friendly_name": {
"label": "Nombre descriptivo",
"description": "Nombre descriptivo de la cámara utilizado en la interfaz de usuario de Frigate"
}
}
{}

@@ -1,43 +1 @@
{
"version": {
"label": "Versión de configuración actual",
"description": "Versión numérica o de cadena de la configuración activa para ayudar a detectar migraciones o cambios de formato."
},
"safe_mode": {
"label": "Modo seguro",
"description": "Cuando está habilitado, inicia Frigate en modo seguro con funciones reducidas para la solución de problemas."
},
"environment_vars": {
"label": "Variables de entorno",
"description": "Pares clave/valor de variables de entorno para establecer para el proceso de Frigate en el sistema operativo Home Assistant. Los usuarios que no son de HAOS deben usar la configuración de variables de entorno de Docker."
},
"logger": {
"label": "Registro",
"description": "Controla la verbosidad de registro predeterminada y la sobre-escritura de nivel de registro por componente.",
"default": {
"label": "Nivel de registro",
"description": "Nivel de detalle global predeterminada del registro (depuración, información, advertencia, error)."
},
"logs": {
"label": "Nivel de registro por proceso",
"description": "Sobre-escribir el nivel de registro por componente para aumentar o disminuir el nivel de detalle de módulos específicos."
}
},
"audio": {
"label": "Eventos de audio",
"enabled": {
"label": "Habilitar la detección de audio"
},
"max_not_heard": {
"label": "Finalizar el tiempo de espera",
"description": "Cantidad de segundos sin el tipo de audio configurado antes de que finalice el evento de audio."
},
"min_volume": {
"label": "Volumen mínimo"
}
},
"auth": {
"label": "Autenticación",
"description": "Configuración relacionada con la autenticación y la sesión, incluidas las opciones de cookies y límite de peticiones."
}
}
{}

@@ -1,44 +1 @@
{
"audio": {
"global": {
"detection": "Detección Global",
"sensitivity": "Sensibilidad Global"
},
"cameras": {
"detection": "Detección",
"sensitivity": "Sensibilidad"
}
},
"timestamp_style": {
"global": {
"appearance": "Apariencia Global"
},
"cameras": {
"appearance": "Apariencia"
}
},
"motion": {
"global": {
"sensitivity": "Sensibilidad Global",
"algorithm": "Algoritmo Global"
},
"cameras": {
"sensitivity": "Sensibilidad",
"algorithm": "Algoritmo"
}
},
"snapshots": {
"global": {
"display": "Pantalla Global"
},
"cameras": {
"display": "Pantalla"
}
},
"detect": {
"global": {
"resolution": "Resolución Global",
"tracking": "Seguimiento Global"
}
}
}
{}

@@ -1,16 +1 @@
{
"minimum": "Debe ser al menos {{limit}}",
"maximum": "Debe ser como mucho {{limit}}",
"exclusiveMinimum": "Debe ser mayor que {{limit}}",
"exclusiveMaximum": "Debe ser menor que {{limit}}",
"minLength": "Debe ser al menos {{limit}} carácter(es)",
"maxLength": "Debe ser como máximo {{limit}} carácter(es)",
"minItems": "Debe tener al menos {{limit}} objetos",
"maxItems": "Debe tener como máximo {{limit}} objetos",
"pattern": "Formato no válido",
"required": "Este campo es requerido",
"type": "Tipo de valor no válido",
"enum": "Debe ser uno de los valores permitidos",
"const": "El valor no coincide con la constante esperada",
"uniqueItems": "Todos los objetos deben ser únicos"
}
{}

@@ -18,11 +18,6 @@
"shareExport": "Compartir exportación",
"downloadVideo": "Descargar video",
"editName": "Editar nombre",
"deleteExport": "Eliminar exportación",
"assignToCase": "Añadir al caso"
},
"headings": {
"cases": "Casos",
"uncategorizedExports": "Exportaciones sin categorizar"
"deleteExport": "Eliminar exportación"
}
}

@@ -12,10 +12,7 @@
"notifications": "Configuración de Notificaciones - Frigate",
"enrichments": "Configuración de Análisis Avanzado - Frigate",
"cameraManagement": "Administrar Cámaras - Frigate",
"cameraReview": "Revisar Configuración de Cámaras - Frigate",
"globalConfig": "Configuración Global - Frigate",
"cameraConfig": "Configuración de la cámara - Frigate",
"maintenance": "Mantenimiento - Frigate"
"cameraReview": "Revisar Configuración de Cámaras - Frigate"
},
"menu": {
"cameras": "Configuración de Cámara",

@@ -5,8 +5,7 @@
"logs": {
"frigate": "Registros de Frigate - Frigate",
"go2rtc": "Registros de Go2RTC - Frigate",
"nginx": "Registros de Nginx - Frigate",
"websocket": "Mensajes Logs - Frigata"
"nginx": "Registros de Nginx - Frigate"
},
"cameras": "Estadísticas de cámaras - Frigate",
"enrichments": "Estadísticas de Enriquecimientos - Frigate"
@@ -32,12 +31,6 @@
},
"download": {
"label": "Descargar registros"
},
"websocket": {
"label": "Mensajes",
"pause": "Pausar",
"resume": "Continuar",
"clear": "Limpiar"
}
},
"title": "Sistema",

@@ -1,5 +1 @@
{
"version": {
"label": "Version actuelle de la configuration"
}
}
{}

@@ -1,7 +1 @@
{
"audio": {
"global": {
"detection": "Détection globale"
}
}
}
{}

@@ -1,3 +1 @@
{
"minimum": "Doit être au minimum {{limit}}"
}
{}

@@ -1,130 +1 @@
{
"label": "Camera Config",
"name": {
"label": "Camera naam",
"description": "Camera naam is verplicht"
},
"friendly_name": {
"description": "Camera naam te gebruiken in de Frigate UI",
"label": "Herkenbare naam"
},
"enabled": {
"label": "Geactiveerd",
"description": "Geactiveerd"
},
"audio": {
"label": "Audiogebeurtenissen",
"description": "Audio-instellingen voor gebeurtenisdetectie van deze camera.",
"enabled": {
"label": "Geluiddetectie inschakelen",
"description": "Audio‑gebeurtenisdetectie voor deze camera in- of uitschakelen."
},
"max_not_heard": {
"label": "Einde timeout",
"description": "Hoeveelheid secondes zonder de geconfigureerde audio soort, voordat de geluids gebeurtenis is beindigd."
},
"min_volume": {
"label": "Minimale volume",
"description": "Minimale RMS-volumedrempel die nodig is om audiodetectie te starten; Hoe lager de waarde, hoe gevoeliger de detectie (bijvoorbeeld, 200 hoog, 500 gemiddeld, 1000 laag)."
},
"listen": {
"label": "Luistercategorieën",
"description": "Lijst van luistercategorie gebeurtenissen voor detectie (zoals: blaffen, band_alarm, schreeuw, praten, roepen)."
},
"filters": {
"label": "Geluids filters",
"description": "Instellingen per audiotype, waaronder betrouwbaarheidsdrempels, ter vermindering van foutieve detecties."
},
"enabled_in_config": {
"label": "Originele audio-instelling",
"description": "Geeft aan of audiodetectie oorspronkelijk was geactiveerd in het statische configuratiebestand."
},
"num_threads": {
"label": "Detectiethreads",
"description": "Aantal threads voor audiodetectieverwerking."
}
},
"audio_transcription": {
"label": "Audio‑transcriptie",
"description": "Instellingen voor live en spraakgestuurde audiotranscriptie voor gebeurtenissen en live ondertitels.",
"enabled": {
"label": "Spraaktranscriptie inschakelen",
"description": "Schakel transcriptie van handmatig getriggerde audiogebeurtenissen in of uit."
},
"enabled_in_config": {
"label": "Originele transcriptiestatus"
},
"live_enabled": {
"label": "Live transcriptie",
"description": "Live streaming‑transcriptie van audio inschakelen tijdens ontvangst."
}
},
"birdseye": {
"label": "Overzichtsweergave",
"description": "Instellingen voor de overzichtsweergave die meerdere camerafeeds combineert tot één lay‑out.",
"enabled": {
"label": "Activeer overzichtsweergave",
"description": "De overzichtsweergavefunctie in- of uitschakelen."
},
"mode": {
"label": "Volgmodus",
"description": "Modus voor het opnemen van camera’s in overzichtsweergave: ‘objecten’, ‘beweging’ of ‘continu’."
},
"order": {
"label": "Positie",
"description": "Numerieke positie die de volgorde van de camera in de overzichtsweergave lay-out bepaalt."
}
},
"detect": {
"label": "Detectie object",
"description": "Instellingen voor de detectierol om objecten te detecteren en trackers te starten.",
"enabled": {
"label": "Detectie aan",
"description": "Objectdetectie voor deze camera in- of uitschakelen. Detectie moet zijn ingeschakeld om objecttracking te laten werken."
},
"height": {
"label": "Detectie hoogte",
"description": "De hoogte in pixels van frames voor de detectiestream. Laat dit veld leeg om de standaardresolutie te gebruiken."
},
"width": {
"label": "Detectie breedte",
"description": "De breedte in pixels van frames voor de detectiestream. Laat dit veld leeg om de standaardresolutie te gebruiken."
},
"fps": {
"label": "Detectie‑FPS",
"description": "Gewenst aantal frames per seconde waarop detectie wordt uitgevoerd; lagere waarden verlagen het CPU‑gebruik (aanbevolen waarde is 5, stel alleen hoger in — maximaal 10 — bij het volgen van extreem snel bewegende objecten)."
},
"min_initialized": {
"label": "Minimale initialisatieframes",
"description": "Aantal opeenvolgende detectieresultaten dat vereist is voordat een gevolgd object wordt aangemaakt. Verhoog deze waarde om valse initialisaties te verminderen. De standaardwaarde is FPS gedeeld door 2."
},
"max_disappeared": {
"label": "Maximaal aantal verdwenen frames",
"description": "Aantal frames zonder detectie voordat een gevolgd object als verdwenen wordt beschouwd."
},
"stationary": {
"label": "Instellingen voor stilstaande objecten",
"description": "Instellingen voor het detecteren en beheren van objecten die gedurende een bepaalde tijd stil blijven staan.",
"interval": {
"label": "Interval voor stilstaande objecten",
"description": "Frequentie (in frames) waarmee detectie wordt gecontroleerd om stilstaande objecten te bevestigen."
},
"threshold": {
"label": "Drempel voor stilstaande objecten",
"description": "Het aantal frames waarin geen positieverandering wordt gedetecteerd voordat een object als stilstaand wordt beschouwd."
},
"max_frames": {
"label": "Maximaal aantal frames",
"description": "Stelt een limiet aan de duur van tracking van stilstaande objecten.",
"default": {
"label": "Standaard maximaal aantal frames",
"description": "Standaardlimiet voor het aantal frames dat een stilstaand object wordt gevolgd voordat wordt gestopt."
},
"objects": {
"label": "Object‑maximum aantal frames",
"description": "Per‑object overschrijden voor het maximum aantal frames voor tracking van stationaire objecten."
}
}
}
}
}
{}

@@ -1,107 +1 @@
{
"audio": {
"label": "Audiogebeurtenissen",
"enabled": {
"label": "Geluiddetectie inschakelen"
},
"max_not_heard": {
"label": "Einde timeout",
"description": "Hoeveelheid secondes zonder de geconfigureerde audio soort, voordat de geluids gebeurtenis is beindigd."
},
"min_volume": {
"label": "Minimale volume",
"description": "Minimale RMS-volumedrempel die nodig is om audiodetectie te starten; Hoe lager de waarde, hoe gevoeliger de detectie (bijvoorbeeld, 200 hoog, 500 gemiddeld, 1000 laag)."
},
"listen": {
"label": "Luistercategorieën",
"description": "Lijst van luistercategorie gebeurtenissen voor detectie (zoals: blaffen, band_alarm, schreeuw, praten, roepen)."
},
"filters": {
"label": "Geluids filters",
"description": "Instellingen per audiotype, waaronder betrouwbaarheidsdrempels, ter vermindering van foutieve detecties."
},
"enabled_in_config": {
"label": "Originele audio-instelling",
"description": "Geeft aan of audiodetectie oorspronkelijk was geactiveerd in het statische configuratiebestand."
},
"num_threads": {
"label": "Detectiethreads",
"description": "Aantal threads voor audiodetectieverwerking."
}
},
"audio_transcription": {
"label": "Audio‑transcriptie",
"description": "Instellingen voor live en spraakgestuurde audiotranscriptie voor gebeurtenissen en live ondertitels.",
"live_enabled": {
"label": "Live transcriptie",
"description": "Live streaming‑transcriptie van audio inschakelen tijdens ontvangst."
}
},
"birdseye": {
"label": "Overzichtsweergave",
"description": "Instellingen voor de overzichtsweergave die meerdere camerafeeds combineert tot één lay‑out.",
"enabled": {
"label": "Activeer overzichtsweergave",
"description": "De overzichtsweergavefunctie in- of uitschakelen."
},
"mode": {
"label": "Volgmodus",
"description": "Modus voor het opnemen van camera’s in overzichtsweergave: ‘objecten’, ‘beweging’ of ‘continu’."
},
"order": {
"label": "Positie",
"description": "Numerieke positie die de volgorde van de camera in de overzichtsweergave lay-out bepaalt."
}
},
"detect": {
"label": "Detectie object",
"description": "Instellingen voor de detectierol om objecten te detecteren en trackers te starten.",
"enabled": {
"label": "Detectie aan"
},
"height": {
"label": "Detectie hoogte",
"description": "De hoogte in pixels van frames voor de detectiestream. Laat dit veld leeg om de standaardresolutie te gebruiken."
},
"width": {
"label": "Detectie breedte",
"description": "De breedte in pixels van frames voor de detectiestream. Laat dit veld leeg om de standaardresolutie te gebruiken."
},
"fps": {
"label": "Detectie‑FPS",
"description": "Gewenst aantal frames per seconde waarop detectie wordt uitgevoerd; lagere waarden verlagen het CPU‑gebruik (aanbevolen waarde is 5, stel alleen hoger in — maximaal 10 — bij het volgen van extreem snel bewegende objecten)."
},
"min_initialized": {
"label": "Minimale initialisatieframes",
"description": "Aantal opeenvolgende detectieresultaten dat vereist is voordat een gevolgd object wordt aangemaakt. Verhoog deze waarde om valse initialisaties te verminderen. De standaardwaarde is FPS gedeeld door 2."
},
"max_disappeared": {
"label": "Maximaal aantal verdwenen frames",
"description": "Aantal frames zonder detectie voordat een gevolgd object als verdwenen wordt beschouwd."
},
"stationary": {
"label": "Instellingen voor stilstaande objecten",
"description": "Instellingen voor het detecteren en beheren van objecten die gedurende een bepaalde tijd stil blijven staan.",
"interval": {
"label": "Interval voor stilstaande objecten",
"description": "Frequentie (in frames) waarmee detectie wordt gecontroleerd om stilstaande objecten te bevestigen."
},
"threshold": {
"label": "Drempel voor stilstaande objecten",
"description": "Het aantal frames waarin geen positieverandering wordt gedetecteerd voordat een object als stilstaand wordt beschouwd."
},
"max_frames": {
"label": "Maximaal aantal frames",
"description": "Stelt een limiet aan de duur van tracking van stilstaande objecten.",
"default": {
"label": "Standaard maximaal aantal frames",
"description": "Standaardlimiet voor het aantal frames dat een stilstaand object wordt gevolgd voordat wordt gestopt."
},
"objects": {
"label": "Object‑maximum aantal frames",
"description": "Per‑object overschrijden voor het maximum aantal frames voor tracking van stationaire objecten."
}
}
}
}
}
{}

@@ -186,8 +186,7 @@
"restart": "Repornește Frigate",
"review": "Revizuire",
"classification": "Clasificare",
"chat": "Chat",
"actions": "Acțiuni"
"chat": "Chat"
},
"button": {
"cameraAudio": "Sunet cameră",
@@ -235,8 +234,7 @@
"resetToDefault": "Resetare la valori implicite",
"saveAll": "Salvează toate",
"savingAll": "Se salvează toate…",
"undoAll": "Anulează toate",
"applying": "Se aplică…"
"undoAll": "Anulează toate"
},
"unit": {
"speed": {

@@ -1,936 +1 @@
{
"label": "Configurație Cameră",
"name": {
"label": "Nume cameră",
"description": "Numele camerei este obligatoriu"
},
"friendly_name": {
"label": "Nume prietenos",
"description": "Numele camerei afișat în interfața Frigate"
},
"enabled": {
"label": "Activată",
"description": "Activată"
},
"audio": {
"label": "Evenimente audio",
"description": "Setări pentru detectarea evenimentelor bazate pe sunet pentru această cameră.",
"enabled": {
"label": "Activare detecție audio",
"description": "Activează sau dezactivează detecția evenimentelor audio pentru această cameră."
},
"max_not_heard": {
"label": "Timeout final",
"description": "Secunde fără tipul audio configurat înainte ca evenimentul să fie încheiat."
},
"min_volume": {
"label": "Volum minim",
"description": "Pragul minim de volum RMS; valorile mici cresc sensibilitatea (ex: 200 ridicată, 500 medie, 1000 scăzută)."
},
"listen": {
"label": "Tipuri ascultate",
"description": "Lista de evenimente audio de detectat (ex: lătrat, alarmă_incendiu, țipăt, vorbire)."
},
"filters": {
"label": "Filtre audio",
"description": "Setări de filtrare per tip audio, cum ar fi pragul de încredere."
},
"enabled_in_config": {
"label": "Stare audio originală",
"description": "Indică dacă detecția audio a fost activată inițial în fișierul de configurare static."
},
"num_threads": {
"label": "Thread-uri detecție",
"description": "Numărul de thread-uri pentru procesarea detecției audio."
}
},
"audio_transcription": {
"label": "Transcriere audio",
"description": "Setări pentru transcrierea audio live și a vorbirii pentru evenimente.",
"enabled": {
"label": "Activare transcriere",
"description": "Activează sau dezactivează transcrierea declanșată manual pentru evenimentele audio."
},
"enabled_in_config": {
"label": "Stare transcriere originală"
},
"live_enabled": {
"label": "Transcriere live",
"description": "Activează streaming-ul de transcriere live pe măsură ce sunetul e recepționat."
}
},
"birdseye": {
"label": "Birdseye",
"description": "Setări pentru vizualizarea compusă Birdseye care combină mai multe stream-uri într-un singur layout.",
"enabled": {
"label": "Activare Birdseye",
"description": "Activează sau dezactivează funcția Birdseye."
},
"mode": {
"label": "Mod urmărire",
"description": "Modul de includere a camerelor în Birdseye: 'objects', 'motion' sau 'continuous'."
},
"order": {
"label": "Poziție",
"description": "Poziția numerică ce controlează ordinea camerei în layout-ul Birdseye."
}
},
"detect": {
"label": "Detecție obiecte",
"description": "Setări pentru rolul de detecție folosit pentru a rula recunoașterea obiectelor și trackerele.",
"enabled": {
"label": "Detecție activată",
"description": "Activează sau dezactivează detecția obiectelor pentru această cameră. Detecția trebuie să fie activă pentru ca urmărirea obiectelor să funcționeze."
},
"height": {
"label": "Înălțime detect",
"description": "Înălțimea cadrelor pentru stream-ul de detect; lasă gol pentru rezoluția nativă."
},
"width": {
"label": "Lățime detect",
"description": "Lățimea cadrelor pentru stream-ul de detect; lasă gol pentru rezoluția nativă."
},
"fps": {
"label": "FPS detect",
"description": "FPS-ul dorit pentru detecție; valori mici reduc consumul CPU (recomandat 5, max 10 pentru obiecte foarte rapide)."
},
"min_initialized": {
"label": "Cadre minime inițializare",
"description": "Numărul de detecții consecutive necesare înainte de a crea un obiect urmărit. Crește valoarea pentru a reduce alarmele false."
},
"max_disappeared": {
"label": "Cadre maxime dispariție",
"description": "Numărul de cadre fără detecție înainte ca un obiect urmărit să fie considerat dispărut."
},
"stationary": {
"label": "Configurație obiecte staționare",
"description": "Setări pentru gestionarea obiectelor care rămân nemișcate o perioadă.",
"interval": {
"label": "Interval staționar",
"description": "Cât de des (în cadre) se verifică prezența unui obiect staționar."
},
"threshold": {
"label": "Prag staționar",
"description": "Numărul de cadre fără schimbare de poziție pentru a marca un obiect ca staționar."
},
"max_frames": {
"label": "Cadre maxime",
"description": "Limitează cât timp sunt urmărite obiectele staționare înainte de a fi ignorate.",
"default": {
"label": "Cadre maxime implicit",
"description": "Valoarea implicită pentru urmărirea obiectelor staționare."
},
"objects": {
"label": "Cadre maxime per obiect",
"description": "Suprascrieri per obiect pentru durata urmăririi staționare."
}
},
"classifier": {
"label": "Activare clasificator vizual",
"description": "Folosește un clasificator vizual pentru a detecta obiectele cu adevărat staționare, chiar dacă chenarul oscilează."
}
},
"annotation_offset": {
"label": "Offset adnotare",
"description": "Milisecunde pentru a decala adnotările de detecție pentru a alinia mai bine chenarele cu înregistrarea."
}
},
"face_recognition": {
"label": "Recunoaștere facială",
"description": "Setări pentru detecția și recunoașterea fețelor pentru această cameră.",
"enabled": {
"label": "Activare recunoaștere facială",
"description": "Activează sau dezactivează recunoașterea facială."
},
"min_area": {
"label": "Arie minimă față",
"description": "Aria minimă (pixeli) pentru a încerca recunoașterea."
}
},
"ffmpeg": {
"label": "FFmpeg",
"description": "Setări FFmpeg: cale binar, argumente, accelerare hardware și ieșiri per rol.",
"path": {
"label": "Cale FFmpeg",
"description": "Calea către binarul FFmpeg sau un alias de versiune (\"5.0\" sau \"7.0\")."
},
"global_args": {
"label": "Argumente globale FFmpeg",
"description": "Argumente globale pasate proceselor FFmpeg."
},
"hwaccel_args": {
"label": "Argumente accelerare hardware",
"description": "Argumente pentru accelerarea hardware. Se recomandă presetările specifice furnizorului."
},
"input_args": {
"label": "Argumente intrare",
"description": "Argumente aplicate stream-urilor de intrare FFmpeg."
},
"output_args": {
"label": "Argumente ieșire",
"description": "Argumente de ieșire implicite pentru diverse roluri (detect, record).",
"detect": {
"label": "Argumente ieșire detect",
"description": "Argumente implicite pentru stream-urile cu rol detect."
},
"record": {
"label": "Argumente ieșire record",
"description": "Argumente implicite pentru stream-urile cu rol record."
}
},
"retry_interval": {
"label": "Timp reîncercare FFmpeg",
"description": "Secunde de așteptare înainte de reconectarea unui stream după o eroare. Implicit 10."
},
"apple_compatibility": {
"label": "Compatibilitate Apple",
"description": "Activează tag-ul HEVC pentru compatibilitate mai bună cu playerele Apple la înregistrările H.265."
},
"gpu": {
"label": "Index GPU",
"description": "Indexul GPU implicit folosit pentru accelerarea hardware."
},
"inputs": {
"label": "Intrări cameră",
"description": "Listă de definiții pentru stream-urile de intrare (căi și roluri).",
"path": {
"label": "Cale intrare",
"description": "URL-ul sau calea stream-ului de intrare al camerei."
},
"roles": {
"label": "Roluri intrare",
"description": "Rolurile atribuite acestui stream de intrare."
},
"global_args": {
"label": "Argumente globale FFmpeg",
"description": "Argumente globale pentru acest stream de intrare."
},
"hwaccel_args": {
"label": "Argumente accelerare hardware",
"description": "Argumente de accelerare hardware pentru acest stream."
},
"input_args": {
"label": "Argumente intrare",
"description": "Argumente specifice acestui stream."
}
}
},
"live": {
"label": "Redare live",
"description": "Setări folosite de interfața web pentru a controla selecția, rezoluția și calitatea stream-ului live.",
"streams": {
"label": "Nume stream-uri live",
"description": "Maparea numelor de stream-uri configurate către numele restream/go2rtc folosite live."
},
"height": {
"label": "Înălțime live",
"description": "Înălțimea (pixeli) pentru redarea jsmpeg în UI; trebuie să fie <= înălțimea stream-ului de detect."
},
"quality": {
"label": "Calitate live",
"description": "Calitatea encodării pentru stream-ul jsmpeg (1 maxim, 31 minim)."
}
},
"lpr": {
"label": "Recunoaștere numere înmatriculare",
"description": "Setări pentru recunoașterea numerelor de înmatriculare, inclusiv praguri de detecție, formatare și numere cunoscute.",
"enabled": {
"label": "Activare LPR",
"description": "Activează sau dezactivează LPR pe această cameră."
},
"expire_time": {
"label": "Secunde expirare",
"description": "Timpul în secunde după care un număr nevăzut este expirat din tracker (doar pentru camerele LPR dedicate)."
},
"min_area": {
"label": "Arie minimă plăcuță",
"description": "Aria minimă (pixeli) pentru a încerca recunoașterea."
},
"enhancement": {
"label": "Nivel îmbunătățire",
"description": "Nivelul de îmbunătățire (0-10) aplicat decupajelor cu numere înainte de OCR; valorile mai mari nu îmbunătățesc mereu rezultatele, iar nivelurile peste 5 pot funcționa doar cu numerele pe timp de noapte și trebuie folosite cu atenție."
}
},
"motion": {
"label": "Detecție mișcare",
"description": "Setări implicite pentru detecția mișcării pentru această cameră.",
"enabled": {
"label": "Activare detecție mișcare",
"description": "Activează sau dezactivează detecția mișcării pentru această cameră."
},
"threshold": {
"label": "Prag mișcare",
"description": "Pragul de diferență între pixeli; valorile mari reduc sensibilitatea (1-255)."
},
"lightning_threshold": {
"label": "Prag fulger/lumină",
"description": "Prag pentru a ignora salturile bruște de lumină (sensibilitate între 0.3 și 1.0)."
},
"improve_contrast": {
"label": "Îmbunătățire contrast",
"description": "Aplică o îmbunătățire a contrastului înainte de analiza mișcării pentru a ajuta detecția."
},
"contour_area": {
"label": "Arie contur",
"description": "Aria minimă a conturului în pixeli pentru a fi considerat mișcare."
},
"delta_alpha": {
"label": "Delta alpha",
"description": "Factor de blending alpha folosit în diferențierea cadrelor."
},
"frame_alpha": {
|
||||
"label": "Cadru alfa",
|
||||
"description": "Valoarea alpha pentru amestecarea cadrelor la preprocesarea mișcării."
|
||||
},
|
||||
"frame_height": {
|
||||
"label": "Înălțime cadru",
|
||||
"description": "Înălțimea la care sunt scalate cadrele pentru calculul mișcării."
|
||||
},
|
||||
"mask": {
|
||||
"label": "Coordonate mască",
|
||||
"description": "Coordonate x,y care definesc poligonul măștii de mișcare."
|
||||
},
|
||||
"mqtt_off_delay": {
|
||||
"label": "Întârziere MQTT off",
|
||||
"description": "Secunde de așteptare după ultima mișcare înainte de a trimite starea 'off' prin MQTT."
|
||||
},
|
||||
"enabled_in_config": {
|
||||
"label": "Stare mișcare originală",
|
||||
"description": "Indică dacă detecția mișcării a fost activă în configurația inițială."
|
||||
},
|
||||
"raw_mask": {
|
||||
"label": "Mască brută"
|
||||
}
|
||||
},
|
||||
"objects": {
|
||||
"label": "Obiecte",
|
||||
"description": "Setări implicite pentru urmărire, inclusiv ce etichete se urmăresc și filtrele per obiect.",
|
||||
"track": {
|
||||
"label": "Obiecte de urmărit",
|
||||
"description": "Lista etichetelor de obiecte de urmărit pentru această cameră."
|
||||
},
|
||||
"filters": {
|
||||
"label": "Filtre obiecte",
|
||||
"description": "Filtre pentru a reduce alarmele false (arie, raport, încredere).",
|
||||
"min_area": {
|
||||
"label": "Arie minimă obiect",
|
||||
"description": "Aria minimă a chenarului (pixeli sau procent)."
|
||||
},
|
||||
"max_area": {
|
||||
"label": "Arie maximă obiect",
|
||||
"description": "Aria maximă a chenarului (pixeli sau procent)."
|
||||
},
|
||||
"min_ratio": {
|
||||
"label": "Raport aspect minim",
|
||||
"description": "Raportul minim lățime/înălțime pentru chenar."
|
||||
},
|
||||
"max_ratio": {
|
||||
"label": "Raport aspect maxim",
|
||||
"description": "Raportul maxim lățime/înălțime pentru chenar."
|
||||
},
|
||||
"threshold": {
|
||||
"label": "Prag încredere",
|
||||
"description": "Încrederea medie necesară pentru a considera obiectul valid."
|
||||
},
|
||||
"min_score": {
|
||||
"label": "Scor minim",
|
||||
"description": "Încrederea minimă la un singur cadru pentru a număra obiectul."
|
||||
},
|
||||
"mask": {
|
||||
"label": "Mască filtru",
|
||||
"description": "Poligonul unde se aplică acest filtru în cadru."
|
||||
},
|
||||
"raw_mask": {
|
||||
"label": "Mască brută"
|
||||
}
|
||||
},
|
||||
"mask": {
|
||||
"label": "Mască obiect",
|
||||
"description": "Mască pentru a preveni detecția obiectelor în anumite zone."
|
||||
},
|
||||
"raw_mask": {
|
||||
"label": "Mască brută"
|
||||
},
|
||||
"genai": {
|
||||
"label": "Configurație obiecte GenAI",
|
||||
"description": "Opțiuni GenAI pentru descrierea obiectelor urmărite și trimiterea cadrelor.",
|
||||
"enabled": {
|
||||
"label": "Activare GenAI",
|
||||
"description": "Activează generarea de descrieri prin GenAI pentru obiectele urmărite."
|
||||
},
|
||||
"use_snapshot": {
|
||||
"label": "Folosește snapshot-uri",
|
||||
"description": "Folosește snapshot-urile obiectelor în loc de miniaturi pentru GenAI."
|
||||
},
|
||||
"prompt": {
|
||||
"label": "Prompt descriere",
|
||||
"description": "Șablonul de prompt implicit pentru descrierile GenAI."
|
||||
},
|
||||
"object_prompts": {
|
||||
"label": "Prompt-uri per obiect",
|
||||
"description": "Prompt-uri personalizate pentru anumite etichete de obiecte."
|
||||
},
|
||||
"objects": {
|
||||
"label": "Obiecte GenAI",
|
||||
"description": "Lista etichetelor de obiecte care vor fi trimise la GenAI."
|
||||
},
|
||||
"required_zones": {
|
||||
"label": "Zone obligatorii",
|
||||
"description": "Zonele prin care trebuie să treacă obiectele pentru a genera descrieri."
|
||||
},
|
||||
"debug_save_thumbnails": {
|
||||
"label": "Salvează miniaturile",
|
||||
"description": "Salvează miniaturile trimise la GenAI pentru depanare."
|
||||
},
|
||||
"send_triggers": {
|
||||
"label": "Trigger-e GenAI",
|
||||
"description": "Definește când sunt trimise cadrele la GenAI (la final, după actualizări etc.).",
|
||||
"tracked_object_end": {
|
||||
"label": "Trimite la final",
|
||||
"description": "Trimite cererea la GenAI când urmărirea obiectului s-a terminat."
|
||||
},
|
||||
"after_significant_updates": {
|
||||
"label": "Trigger GenAI timpuriu",
|
||||
"description": "Trimite la GenAI după un număr de actualizări semnificative ale obiectului."
|
||||
}
|
||||
},
|
||||
"enabled_in_config": {
|
||||
"label": "Stare GenAI originală",
|
||||
"description": "Indică dacă GenAI a fost activat în configurația inițială."
|
||||
}
|
||||
}
|
||||
},
|
||||
"record": {
|
||||
"label": "Înregistrare",
|
||||
"description": "Setări de înregistrare și retenție pentru această cameră.",
|
||||
"enabled": {
|
||||
"label": "Activare înregistrare",
|
||||
"description": "Activează sau dezactivează înregistrarea pentru această cameră."
|
||||
},
|
||||
"expire_interval": {
|
||||
"label": "Interval curățare înregistrări",
|
||||
"description": "Minute între trecerile de curățare a segmentelor expirate."
|
||||
},
|
||||
"continuous": {
|
||||
"label": "Retenție continuă",
|
||||
"description": "Zile de păstrare a înregistrărilor indiferent de obiecte sau mișcare. Pune 0 pentru a păstra doar alerte/detecții.",
|
||||
"days": {
|
||||
"label": "Zile retenție",
|
||||
"description": "Numărul de zile pentru păstrare."
|
||||
}
|
||||
},
|
||||
"motion": {
|
||||
"label": "Retenție mișcare",
|
||||
"description": "Zile de păstrare pentru înregistrările declanșate de mișcare.",
|
||||
"days": {
|
||||
"label": "Zile retenție",
|
||||
"description": "Numărul de zile pentru păstrare."
|
||||
}
|
||||
},
|
||||
"detections": {
|
||||
"label": "Retenție detecții",
|
||||
"description": "Setări pentru evenimentele de detecție, inclusiv duratele pre/post captură.",
|
||||
"pre_capture": {
|
||||
"label": "Secunde pre-captură",
|
||||
"description": "Secunde incluse înainte de evenimentul detectat."
|
||||
},
|
||||
"post_capture": {
|
||||
"label": "Secunde post-captură",
|
||||
"description": "Secunde incluse după încheierea evenimentului."
|
||||
},
|
||||
"retain": {
|
||||
"label": "Retenție eveniment",
|
||||
"description": "Setări de retenție pentru clipurile cu detecții.",
|
||||
"days": {
|
||||
"label": "Zile retenție",
|
||||
"description": "Numărul de zile de păstrare."
|
||||
},
|
||||
"mode": {
|
||||
"label": "Mod retenție",
|
||||
"description": "Mod: 'all' (tot), 'motion' (doar segmente cu mișcare) sau 'active_objects' (doar cu obiecte active)."
|
||||
}
|
||||
}
|
||||
},
|
||||
"alerts": {
|
||||
"label": "Retenție alerte",
|
||||
"description": "Setări de retenție pentru evenimentele de tip alertă.",
|
||||
"pre_capture": {
|
||||
"label": "Secunde pre-captură",
|
||||
"description": "Secunde incluse înainte de alertă."
|
||||
},
|
||||
"post_capture": {
|
||||
"label": "Secunde post-captură",
|
||||
"description": "Secunde incluse după alertă."
|
||||
},
|
||||
"retain": {
|
||||
"label": "Retenție eveniment",
|
||||
"description": "Setări de păstrare pentru alerte.",
|
||||
"days": {
|
||||
"label": "Zile retenție",
|
||||
"description": "Numărul de zile pentru păstrare."
|
||||
},
|
||||
"mode": {
|
||||
"label": "Mod retenție",
|
||||
"description": "Modul de păstrare a segmentelor."
|
||||
}
|
||||
}
|
||||
},
|
||||
"export": {
|
||||
"label": "Configurație export",
|
||||
"description": "Setări pentru exportul înregistrărilor (timelapse, accelerare hardware).",
|
||||
"hwaccel_args": {
|
||||
"label": "Argumente hwaccel export",
|
||||
"description": "Argumente de accelerare hardware pentru operațiunile de export/transcodare."
|
||||
}
|
||||
},
|
||||
"preview": {
|
||||
"label": "Configurație preview",
|
||||
"description": "Setări pentru calitatea preview-urilor din interfață.",
|
||||
"quality": {
|
||||
"label": "Calitate preview",
|
||||
"description": "Nivel calitate (foarte_scăzută, scăzută, medie, ridicată, foarte ridicată)."
|
||||
}
},
"enabled_in_config": {
"label": "Stare înregistrare originală",
"description": "Indică dacă înregistrarea a fost activă în configurația inițială."
}
},
"review": {
"label": "Revizuire",
"description": "Setări care controlează alertele, detecțiile și rezumatele de tip GenAI folosite de interfață și stocare pentru această cameră.",
"alerts": {
"label": "Configurație alerte",
"description": "Setări pentru obiectele care generează alerte și modul lor de retenție.",
"enabled": {
"label": "Activare alerte",
"description": "Activează sau dezactivează generarea de alerte pentru această cameră."
},
"labels": {
"label": "Etichete alerte",
"description": "Obiecte care sunt considerate alerte (ex: om, mașină)."
},
"required_zones": {
"label": "Zone obligatorii",
"description": "Zonele necesare pentru a declanșa o alertă."
},
"enabled_in_config": {
"label": "Stare alerte originală",
"description": "Dacă alertele au fost active inițial în fișierul de config."
},
"cutoff_time": {
"label": "Timp limită alerte",
"description": "Secunde de așteptare după încetarea activității înainte de a încheia alerta."
}
},
"detections": {
"label": "Configurație detecții",
"description": "Setări pentru evenimentele de detecție (non-alertă).",
"enabled": {
"label": "Activare detecții",
"description": "Activează sau dezactivează evenimentele de detecție pentru această cameră."
},
"labels": {
"label": "Etichete detecții",
"description": "Obiecte care se consideră detecții."
},
"required_zones": {
"label": "Zone obligatorii",
"description": "Zonele necesare pentru o detecție."
},
"cutoff_time": {
"label": "Timp limită detecții",
"description": "Secunde de așteptare înainte de a încheia o detecție."
},
"enabled_in_config": {
"label": "Stare detecții originală",
"description": "Dacă detecțiile au fost active în configurația inițială."
}
},
"genai": {
"label": "Configurație GenAI",
"description": "Controlul AI-ului generativ pentru descrieri și rezumate în review.",
"enabled": {
"label": "Activare descrieri GenAI",
"description": "Activează descrierile și rezumatele generate de AI pentru elementele de review."
},
"alerts": {
"label": "GenAI pentru alerte",
"description": "Folosește GenAI pentru descrierea alertelor."
},
"detections": {
"label": "GenAI pentru detecții",
"description": "Folosește GenAI pentru descrierea detecțiilor."
},
"image_source": {
"label": "Sursă imagine review",
"description": "Sursa imaginilor ('preview' sau 'recordings'); 'recordings' e mai calitativ dar consumă mai multe token-uri."
},
"additional_concerns": {
"label": "Preocupări suplimentare",
"description": "Listă de note sau griji pe care GenAI să le considere când evaluează activitatea pe cameră."
},
"debug_save_thumbnails": {
"label": "Salvează miniaturile",
"description": "Salvează miniaturile trimise la furnizorul GenAI pentru depanare."
},
"enabled_in_config": {
"label": "Stare GenAI originală",
"description": "Dacă review-ul GenAI a fost activ inițial."
},
"preferred_language": {
"label": "Limbă preferată",
"description": "Limba în care vrei ca GenAI să genereze răspunsurile."
},
"activity_context_prompt": {
"label": "Prompt context activitate",
"description": "Prompt personalizat care descrie ce este suspect și ce nu pentru rezumatele GenAI."
}
}
},
"semantic_search": {
"label": "Căutare semantică",
"description": "Setări pentru căutarea semantică care construiește și interoghează înglobări de obiecte pentru a găsi elemente similare.",
"triggers": {
"label": "Trigger-e",
"description": "Acțiuni și criterii pentru trigger-ele de căutare semantică specifice camerelor.",
"friendly_name": {
"label": "Nume sugestiv",
"description": "Nume opțional afișat în UI pentru acest trigger."
},
"enabled": {
"label": "Activare trigger",
"description": "Activează sau dezactivează acest trigger."
},
"type": {
"label": "Tip trigger",
"description": "Tip: 'thumbnail' (compară cu imagine) sau 'description' (compară cu text)."
},
"data": {
"label": "Conținut trigger",
"description": "Textul sau ID-ul miniaturii de comparat cu obiectele urmărite."
},
"threshold": {
"label": "Prag trigger",
"description": "Scorul minim de similitudine (0-1) pentru activare."
},
"actions": {
"label": "Acțiuni trigger",
"description": "Lista de acțiuni (notificare, sub_label, atribut) la activare."
}
}
},
"snapshots": {
"label": "Snapshot-uri",
"description": "Setări pentru snapshot-urile JPEG salvate ale obiectelor monitorizate de această cameră.",
"enabled": {
"label": "Snapshot-uri activate",
"description": "Activează sau dezactivează salvarea de snapshots pentru această cameră."
},
"clean_copy": {
"label": "Salvează copie curată",
"description": "Salvează și o copie fără adnotări a snapshot-ului."
},
"timestamp": {
"label": "Overlay timestamp",
"description": "Pune data și ora pe snapshot-urile salvate."
},
"bounding_box": {
"label": "Overlay chenar",
"description": "Desenează chenarele obiectelor pe snapshot-uri."
},
"crop": {
"label": "Decupează snapshot-ul",
"description": "Decupează snapshot-ul pe mărimea obiectului detectat."
},
"required_zones": {
"label": "Zone obligatorii",
"description": "Zonele prin care trebuie să treacă un obiect pentru a salva un snapshot."
},
"height": {
"label": "Înălțime snapshot",
"description": "Înălțimea la care se redimensionează snapshot-ul; lasă gol pentru dimensiunea originală."
},
"retain": {
"label": "Retenție snapshot-uri",
"description": "Setări pentru păstrarea snapshot-urilor.",
"default": {
"label": "Retenție implicită",
"description": "Numărul implicit de zile pentru păstrare."
},
"mode": {
"label": "Mod retenție",
"description": "Mod retenție: 'all', 'motion' sau 'active_objects'."
},
"objects": {
"label": "Retenție per obiect",
"description": "Suprascrieri pentru zilele de retenție ale snapshot-urilor per obiect."
}
},
"quality": {
"label": "Calitate JPEG",
"description": "Calitatea encodării JPEG pentru snapshot-uri (0-100)."
}
},
"timestamp_style": {
"label": "Stil timestamp",
"description": "Opțiuni de stilizare pentru timestamp-ul din flux, aplicate înregistrărilor și snapshot-urilor.",
"position": {
"label": "Poziție timestamp",
"description": "Unde apare data/ora pe imagine (stânga-sus/dreapta-sus etc.)."
},
"format": {
"label": "Format timestamp",
"description": "Formatul datei (coduri Python datetime)."
},
"color": {
"label": "Culoare timestamp",
"description": "Valori RGB pentru textul datei.",
"red": {
"label": "Roșu",
"description": "Componenta roșie (0-255)."
},
"green": {
"label": "Verde",
"description": "Componenta verde (0-255)."
},
"blue": {
"label": "Albastru",
"description": "Componenta albastră (0-255)."
}
},
"thickness": {
"label": "Grosime timestamp",
"description": "Grosimea liniei textului."
},
"effect": {
"label": "Efect timestamp",
"description": "Efect vizual pentru text (fără, solid, umbră)."
}
},
"best_image_timeout": {
"label": "Timp limită cea mai bună imagine",
"description": "Cât timp să se aștepte pentru imaginea cu cel mai mare scor de încredere."
},
"mqtt": {
"label": "MQTT",
"description": "Setări de publicare a imaginilor prin MQTT.",
"enabled": {
"label": "Trimite imagine",
"description": "Activează publicarea de snapshot-uri cu imagini pentru obiecte pe topic-urile MQTT pentru această cameră."
},
"timestamp": {
"label": "Adaugă timestamp",
"description": "Suprapune un timestamp pe imaginile publicate pe MQTT."
},
"bounding_box": {
"label": "Adaugă bounding box",
"description": "Desenează chenare pe imaginile publicate prin MQTT."
},
"crop": {
"label": "Decupează imaginea",
"description": "Decupează imaginile publicate pe MQTT la dimensiunea chenarului obiectului detectat."
},
"height": {
"label": "Înălțime imagine",
"description": "Înălțimea (pixeli) la care să fie redimensionate imaginile publicate prin MQTT."
},
"required_zones": {
"label": "Zone obligatorii",
"description": "Zonele în care trebuie să intre un obiect pentru ca o imagine MQTT să fie publicată."
},
"quality": {
"label": "Calitate JPEG",
"description": "Calitatea JPEG pentru imaginile publicate pe MQTT (0-100)."
}
},
"notifications": {
"label": "Notificări",
"description": "Setări pentru activarea și controlul notificărilor pentru această cameră.",
"enabled": {
"label": "Activează notificările",
"description": "Activează sau dezactivează notificările pentru această cameră."
},
"email": {
"label": "Email notificare",
"description": "Adresa de email folosită pentru notificări push sau cerută de anumiți furnizori de notificări."
},
"cooldown": {
"label": "Perioadă de răcire",
"description": "Timpul de așteptare (secunde) între notificări pentru a evita spamarea destinatarilor."
},
"enabled_in_config": {
"label": "Stare originală notificări",
"description": "Indică dacă notificările au fost activate în configurația statică originală."
}
},
"onvif": {
"label": "ONVIF",
"description": "Setări pentru conexiunea ONVIF și autotracking PTZ pentru această cameră.",
"host": {
"label": "Gazdă ONVIF",
"description": "Gazda (și schema opțională) pentru serviciul ONVIF al acestei camere."
},
"port": {
"label": "Port ONVIF",
"description": "Numărul portului pentru serviciul ONVIF."
},
"user": {
"label": "Utilizator ONVIF",
"description": "Utilizator pentru autentificarea ONVIF; unele dispozitive necesită un utilizator administrator pentru ONVIF."
},
"password": {
"label": "Parolă ONVIF",
"description": "Parola pentru autentificarea ONVIF."
},
"tls_insecure": {
"label": "Dezactivează verificare TLS",
"description": "Sari peste verificarea TLS și dezactivează autentificarea digest pentru ONVIF (nesigur; a se utiliza doar în rețele sigure)."
},
"autotracking": {
"label": "Urmărire automată",
"description": "Urmărește automat obiectele în mișcare și menține-le centrate în cadru folosind mișcările camerei PTZ.",
"enabled": {
"label": "Activează Autotracking",
"description": "Activează sau dezactivează urmărirea automată PTZ a obiectelor detectate."
},
"calibrate_on_startup": {
"label": "Calibrare la pornire",
"description": "Măsoară vitezele motorului PTZ la pornire pentru a îmbunătăți precizia urmăririi. Frigate va actualiza config-ul cu movement_weights după calibrare."
},
"zooming": {
"label": "Mod zoom",
"description": "Controlează comportamentul zoom-ului: dezactivat (doar pan/tilt), absolut (cel mai compatibil) sau relativ (pan/tilt/zoom concurent)."
},
"zoom_factor": {
"label": "Factor zoom",
"description": "Controlează nivelul de zoom pe obiectele urmărite. Valorile mai mici păstrează mai mult din scenă; valorile mai mari fac zoom mai aproape, dar pot pierde urmărirea. Valori între 0.1 și 0.75."
},
"track": {
"label": "Obiecte urmărite",
"description": "Listă de tipuri de obiecte care ar trebui să declanșeze autotracking-ul."
},
"required_zones": {
"label": "Zone obligatorii",
"description": "Obiectele trebuie să intre în una dintre aceste zone înainte ca autotracking-ul să înceapă."
},
"return_preset": {
"label": "Preset de întoarcere",
"description": "Numele preset-ului ONVIF configurat în firmware-ul camerei pentru întoarcere după ce urmărirea se termină."
},
"timeout": {
"label": "Timeout întoarcere",
"description": "Așteaptă acest număr de secunde după pierderea urmăririi înainte de a returna camera la poziția presetată."
},
"movement_weights": {
"label": "Ponderi mișcare",
"description": "Valori de calibrare generate automat de calibrarea camerei. Nu modifica manual."
},
"enabled_in_config": {
"label": "Stare originală autotrack",
"description": "Câmp intern pentru a urmări dacă autotracking-ul a fost activat în configurație."
}
},
"ignore_time_mismatch": {
"label": "Ignoră decalaj timp",
"description": "Ignoră diferențele de sincronizare a timpului între cameră și serverul Frigate pentru comunicarea ONVIF."
}
},
"type": {
"label": "Tip cameră",
"description": "Tipul camerei"
},
"ui": {
"label": "Interfață cameră",
"description": "Ordinea de afișare și vizibilitatea pentru această cameră în interfață. Ordinea afectează dashboard-ul implicit. Pentru control mai granular, folosește grupuri de camere.",
"order": {
"label": "Ordine interfață",
"description": "Ordine numerică folosită pentru sortarea camerei în interfață (dashboard și liste); numerele mai mari apar mai târziu."
},
"dashboard": {
"label": "Arată în interfață",
"description": "Comută vizibilitatea acestei camere peste tot în interfața Frigate. Dezactivarea acestei opțiuni va necesita editarea manuală a configurației pentru a vedea din nou camera în interfață."
}
},
"webui_url": {
"label": "URL cameră",
"description": "URL pentru a vizita camera direct din pagina de sistem"
},
"zones": {
"label": "Zone",
"description": "Zonele îți permit să definești o arie specifică în cadru pentru a determina dacă un obiect se află sau nu într-un anumit loc.",
"friendly_name": {
"label": "Nume zonă",
"description": "Un nume ușor de recunoscut pentru zonă, afișat în interfața Frigate. Dacă nu este setat, se va folosi o versiune formatată a numelui zonei."
},
"enabled": {
|
||||
"label": "Dacă această zonă este activă. Zonele dezactivate sunt ignorate la rulare."
|
||||
},
|
||||
"enabled_in_config": {
|
||||
"label": "Păstrează starea originală a zonei."
|
||||
},
"filters": {
"label": "Filtre zonă",
"description": "Filtre aplicate obiectelor din această zonă. Folosite pentru a reduce alarmele false sau pentru a restricționa ce obiecte sunt considerate prezente în zonă.",
"min_area": {
"label": "Aria minimă obiect",
"description": "Aria minimă a chenarului (pixeli sau procentaj) necesară pentru acest tip de obiect. Poate fi în pixeli (int) sau procentaj (float între 0.000001 și 0.99)."
},
"max_area": {
"label": "Aria maximă obiect",
"description": "Aria maximă a chenarului (pixeli sau procentaj) permisă pentru acest tip de obiect. Poate fi în pixeli (int) sau procentaj (float între 0.000001 și 0.99)."
},
"min_ratio": {
"label": "Raport aspect minim",
"description": "Raportul minim lățime/înălțime cerut pentru ca chenarul să se califice."
},
"max_ratio": {
"label": "Raport aspect maxim",
"description": "Raportul maxim lățime/înălțime permis pentru ca chenarul să se califice."
},
"threshold": {
"label": "Prag de încredere",
"description": "Pragul mediu de încredere a detecției necesar pentru ca obiectul să fie considerat un rezultat real."
},
"min_score": {
"label": "Încredere minimă",
"description": "Încrederea minimă a detecției pe un singur cadru necesară pentru ca obiectul să fie numărat."
},
"mask": {
"label": "Mască filtru",
"description": "Coordonatele poligonului care definesc unde se aplică acest filtru în cadrul imaginii."
},
"raw_mask": {
"label": "Mască brută"
}
},
"coordinates": {
"label": "Coordonate",
"description": "Coordonatele poligonului care definesc aria zonei. Poate fi un șir separat prin virgule sau o listă de șiruri de coordonate. Coordonatele trebuie să fie relative (0-1) sau absolute (legacy)."
},
"distances": {
"label": "Distanțe reale",
"description": "Distanțe reale opționale pentru fiecare latură a patrulaterului zonei, folosite pentru calcule de viteză sau distanță. Trebuie să aibă exact 4 valori dacă este setat."
},
"inertia": {
"label": "Cadre de inerție",
"description": "Numărul de cadre consecutive în care un obiect trebuie detectat în zonă înainte de a fi considerat prezent. Ajută la filtrarea detecțiilor trecătoare."
},
"loitering_time": {
"label": "Secunde staționare",
"description": "Numărul de secunde în care un obiect trebuie să rămână în zonă pentru a fi considerat în staționare (loitering). Setează pe 0 pentru a dezactiva detecția staționării."
},
"speed_threshold": {
"label": "Viteză minimă",
"description": "Viteza minimă (în unități reale dacă distanțele sunt setate) necesară pentru ca un obiect să fie considerat prezent în zonă. Folosit pentru trigger-e de zonă bazate pe viteză."
},
"objects": {
"label": "Obiecte trigger",
"description": "Lista tipurilor de obiecte (din labelmap) care pot declanșa această zonă. Poate fi un șir sau o listă de șiruri. Dacă este gol, toate obiectele sunt luate în considerare."
}
},
"enabled_in_config": {
"label": "Stare inițială cameră",
"description": "Păstrează starea originală a camerei."
}
}
{}
File diff suppressed because it is too large
@@ -1,73 +1 @@
{
"audio": {
"global": {
"detection": "Detectare globală",
"sensitivity": "Sensibilitate globală"
},
"cameras": {
"detection": "Detectare",
"sensitivity": "Sensibilitate"
}
},
"timestamp_style": {
"global": {
"appearance": "Aspect global"
},
"cameras": {
"appearance": "Aspect"
}
},
"motion": {
"global": {
"sensitivity": "Sensibilitate globală",
"algorithm": "Algoritm global"
},
"cameras": {
"sensitivity": "Sensibilitate",
"algorithm": "Algoritm"
}
},
"snapshots": {
"global": {
"display": "Afișare snapshot-uri globală"
},
"cameras": {
"display": "Afișare snapshot-uri"
}
},
"detect": {
"global": {
"resolution": "Rezoluție globală",
"tracking": "Urmărire globală"
},
"cameras": {
"resolution": "Rezoluție",
"tracking": "Urmărire"
}
},
"objects": {
"global": {
"tracking": "Urmărire globală",
"filtering": "Filtrare globală"
},
"cameras": {
"tracking": "Urmărire",
"filtering": "Filtrare"
}
},
"record": {
"global": {
"retention": "Păstrare globală",
"events": "Evenimente globale"
},
"cameras": {
"retention": "Păstrare",
"events": "Evenimente"
}
},
"ffmpeg": {
"cameras": {
"cameraFfmpeg": "Argumente FFmpeg specifice camerei"
}
}
}
{}
@@ -1,32 +1 @@
{
"minimum": "Trebuie să fie cel puțin {{limit}}",
"maximum": "Trebuie să fie cel mult {{limit}}",
"exclusiveMinimum": "Trebuie să fie mai mare de {{limit}}",
"exclusiveMaximum": "Trebuie să fie mai mic de {{limit}}",
"minLength": "Trebuie să aibă cel puțin {{limit}} caracter(e)",
"maxLength": "Trebuie să aibă cel mult {{limit}} caracter(e)",
"minItems": "Trebuie să conțină cel puțin {{limit}} elemente",
"maxItems": "Trebuie să conțină cel mult {{limit}} elemente",
"pattern": "Format nevalid",
"required": "Acest câmp este obligatoriu",
"type": "Tip de valoare nevalid",
"enum": "Trebuie să fie una dintre valorile permise",
"const": "Valoarea nu corespunde constantei așteptate",
"uniqueItems": "Toate elementele trebuie să fie unice",
"format": "Format nevalid",
"additionalProperties": "Proprietatea necunoscută nu este permisă",
"oneOf": "Trebuie să corespundă exact uneia dintre schemele permise",
"anyOf": "Trebuie să corespundă cel puțin uneia dintre schemele permise",
"proxy": {
"header_map": {
"roleHeaderRequired": "Header-ul de rol este obligatoriu atunci când sunt configurate mapări de roluri."
}
},
"ffmpeg": {
"inputs": {
"rolesUnique": "Fiecare rol poate fi atribuit unui singur stream.",
"detectRequired": "Cel puțin un stream trebuie să aibă atribuit rolul 'detect'.",
"hwaccelDetectOnly": "Doar stream-ul cu rolul 'detect' poate defini argumente pentru accelerare hardware."
}
}
}
{}
@@ -226,10 +226,6 @@
"downloadCleanSnapshot": {
"label": "Descarcă un snapshot curat",
"aria": "Descarcă snapshot curat"
},
"debugReplay": {
"label": "Reluare de depanare",
"aria": "Vezi acest obiect urmărit în vizualizarea de reluare de depanare"
}
},
"dialog": {
@@ -1587,8 +1587,7 @@
"saveAllPartial_one": "{{successCount}} din {{totalCount}} secțiune salvată. {{failCount}} eșuate.",
"saveAllPartial_few": "{{successCount}} din {{totalCount}} secțiuni salvate. {{failCount}} eșuate.",
"saveAllPartial_other": "{{successCount}} din {{totalCount}} de secțiuni salvate. {{failCount}} eșuate.",
"saveAllFailure": "Eroare la salvarea tuturor secțiunilor.",
"applied": "Setările au fost aplicate cu succes"
"saveAllFailure": "Eroare la salvarea tuturor secțiunilor."
},
"unsavedChanges": "Ai modificări nesalvate",
"confirmReset": "Confirmă Resetarea",
@@ -6,8 +6,7 @@
"logs": {
"go2rtc": "Jurnale Go2RTC - Frigate",
"nginx": "Jurnale Nginx - Frigate",
"frigate": "Jurnale Frigate - Frigate",
"websocket": "Jurnale de mesaje - Frigate"
"frigate": "Jurnale Frigate - Frigate"
},
"enrichments": "Statistici Procesări Avansate - Frigate"
},
@@ -122,32 +121,6 @@
"fetchingLogsFailed": "Eroare la preluarea jurnalelor: {{errorMessage}}",
"whileStreamingLogs": "Eroare în timpul transmiterii jurnalelor: {{errorMessage}}"
}
},
"websocket": {
"label": "Mesaje",
"pause": "Pauză",
"resume": "Reluare",
"clear": "Șterge",
"filter": {
"all": "Toate subiectele",
"topics": "Subiecte",
"events": "Evenimente",
"reviews": "Revizuiri",
"classification": "Clasificare",
"face_recognition": "Recunoaștere facială",
"lpr": "Recunoașterea numerelor de înmatriculare (LPR)",
"camera_activity": "Activitate cameră",
"system": "Sistem",
"camera": "Cameră",
"all_cameras": "Toate camerele",
"cameras_count_one": "{{count}} Cameră",
"cameras_count_other": "{{count}} Camere"
},
"empty": "Niciun mesaj capturat încă",
"count": "{{count}} mesaje",
"expanded": {
"payload": "Conținut"
}
}
},
"metrics": "Metrici sistem",
@@ -243,8 +216,7 @@
"ffmpegHighCpuUsage": "{{camera}} are o utilizare ridicată CPU FFmpeg ({{ffmpegAvg}}%)",
"cameraIsOffline": "{{camera}} este offline",
"healthy": "Sistemul este sănătos",
"shmTooLow": "Alocarea /dev/shm ({{total}} MB) ar trebui mărită la cel puțin {{min}} MB.",
"debugReplayActive": "Sesiunea de reluare de depanare este activă"
"shmTooLow": "Alocarea /dev/shm ({{total}} MB) ar trebui mărită la cel puțin {{min}} MB."
},
"lastRefreshed": "Ultima actualizare: "
}
@@ -8,7 +8,6 @@ const motion: SectionConfigOverrides = {
"enabled",
"threshold",
"lightning_threshold",
"skip_motion_threshold",
"improve_contrast",
"contour_area",
"delta_alpha",
@@ -23,7 +22,6 @@ const motion: SectionConfigOverrides = {
hiddenFields: ["enabled_in_config", "mask", "raw_mask"],
advancedFields: [
"lightning_threshold",
"skip_motion_threshold",
"delta_alpha",
"frame_alpha",
"frame_height",
@@ -35,7 +33,6 @@ const motion: SectionConfigOverrides = {
"enabled",
"threshold",
"lightning_threshold",
"skip_motion_threshold",
"improve_contrast",
"contour_area",
"delta_alpha",
@@ -1,6 +1,5 @@
import {
MutableRefObject,
ReactNode,
useCallback,
useEffect,
useRef,
@@ -58,7 +57,6 @@ type HlsVideoPlayerProps = {
isDetailMode?: boolean;
camera?: string;
currentTimeOverride?: number;
transformedOverlay?: ReactNode;
};

export default function HlsVideoPlayer({
@@ -83,7 +81,6 @@ export default function HlsVideoPlayer({
isDetailMode = false,
camera,
currentTimeOverride,
transformedOverlay,
}: HlsVideoPlayerProps) {
const { t } = useTranslation("components/player");
const { data: config } = useSWR<FrigateConfig>("config");
@@ -353,162 +350,157 @@ export default function HlsVideoPlayer({
height: isMobile ? "100%" : undefined,
}}
>
<div className="relative size-full">
{transformedOverlay}
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div
className={cn(
"absolute inset-0 z-50",
isDesktop
? "size-full"
: "mx-auto flex items-center justify-center portrait:max-h-[50dvh]",
)}
style={{
aspectRatio: `${videoDimensions.width} / ${videoDimensions.height}`,
}}
>
<ObjectTrackOverlay
key={`overlay-${currentTime}`}
camera={camera}
showBoundingBoxes={!isPlaying}
currentTime={currentTime}
videoWidth={videoDimensions.width}
videoHeight={videoDimensions.height}
className="absolute inset-0 z-10"
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
}}
/>
</div>
)}
<video
ref={videoRef}
className={`size-full rounded-lg bg-black md:rounded-2xl ${loadedMetadata ? "" : "invisible"} cursor-pointer`}
preload="auto"
autoPlay
controls={!frigateControls}
playsInline
muted={muted}
onClick={
isDesktop
? () => {
if (zoomScale == 1.0) onPlayPause(!isPlaying);
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div
className={cn(
"absolute inset-0 z-50",
isDesktop
? "size-full"
: "mx-auto flex items-center justify-center portrait:max-h-[50dvh]",
)}
style={{
aspectRatio: `${videoDimensions.width} / ${videoDimensions.height}`,
}}
>
<ObjectTrackOverlay
key={`overlay-${currentTime}`}
camera={camera}
showBoundingBoxes={!isPlaying}
currentTime={currentTime}
videoWidth={videoDimensions.width}
videoHeight={videoDimensions.height}
className="absolute inset-0 z-10"
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
: undefined
}}
/>
</div>
)}
<video
ref={videoRef}
className={`size-full rounded-lg bg-black md:rounded-2xl ${loadedMetadata ? "" : "invisible"} cursor-pointer`}
preload="auto"
autoPlay
controls={!frigateControls}
playsInline
muted={muted}
onClick={
isDesktop
? () => {
if (zoomScale == 1.0) onPlayPause(!isPlaying);
}
: undefined
}
onVolumeChange={() => {
setVolume(videoRef.current?.volume ?? 1.0, true);
if (!frigateControls) {
setMuted(videoRef.current?.muted);
}
onVolumeChange={() => {
setVolume(videoRef.current?.volume ?? 1.0, true);
if (!frigateControls) {
setMuted(videoRef.current?.muted);
}
}}
onPlay={() => {
setIsPlaying(true);
}}
onPlay={() => {
setIsPlaying(true);

if (isMobile) {
setControls(true);
setMobileCtrlTimeout(
setTimeout(() => setControls(false), 4000),
);
}
}}
onPlaying={onPlaying}
onPause={() => {
setIsPlaying(false);
clearTimeout(bufferTimeout);
if (isMobile) {
setControls(true);
setMobileCtrlTimeout(setTimeout(() => setControls(false), 4000));
}
}}
onPlaying={onPlaying}
onPause={() => {
setIsPlaying(false);
clearTimeout(bufferTimeout);

if (isMobile && mobileCtrlTimeout) {
clearTimeout(mobileCtrlTimeout);
}
}}
onWaiting={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}

setBufferTimeout(
setTimeout(() => {
if (
document.visibilityState === "visible" &&
videoRef.current
) {
onError("stalled");
}
}, 3000),
);
}
}}
onProgress={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}

if (bufferTimeout) {
clearTimeout(bufferTimeout);
setBufferTimeout(undefined);
}
}
}}
onTimeUpdate={() => {
if (!onTimeUpdate) {
if (isMobile && mobileCtrlTimeout) {
clearTimeout(mobileCtrlTimeout);
}
}}
onWaiting={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}

const frameTime = getVideoTime();

if (frameTime) {
onTimeUpdate(frameTime);
setBufferTimeout(
setTimeout(() => {
if (
document.visibilityState === "visible" &&
videoRef.current
) {
onError("stalled");
}
}, 3000),
);
}
}}
onProgress={() => {
if (onError != undefined) {
if (videoRef.current?.paused) {
return;
}
}}
onLoadedData={() => {
onPlayerLoaded?.();
handleLoadedMetadata();

if (videoRef.current) {
if (playbackRate) {
videoRef.current.playbackRate = playbackRate;
}
if (bufferTimeout) {
clearTimeout(bufferTimeout);
setBufferTimeout(undefined);
}
}
}}
onTimeUpdate={() => {
if (!onTimeUpdate) {
return;
}

if (volume) {
videoRef.current.volume = volume;
}
const frameTime = getVideoTime();

if (frameTime) {
onTimeUpdate(frameTime);
}
}}
onLoadedData={() => {
onPlayerLoaded?.();
handleLoadedMetadata();

if (videoRef.current) {
if (playbackRate) {
videoRef.current.playbackRate = playbackRate;
}
}}
onEnded={() => {
if (onClipEnded) {
onClipEnded(getVideoTime() ?? 0);

if (volume) {
videoRef.current.volume = volume;
}
}}
onError={(e) => {
if (
!hlsRef.current &&
}
}}
onEnded={() => {
if (onClipEnded) {
onClipEnded(getVideoTime() ?? 0);
}
}}
onError={(e) => {
if (
!hlsRef.current &&
// @ts-expect-error code does exist
unsupportedErrorCodes.includes(e.target.error.code) &&
videoRef.current
) {
setLoadedMetadata(false);
setUseHlsCompat(true);
} else {
toast.error(
// @ts-expect-error code does exist
unsupportedErrorCodes.includes(e.target.error.code) &&
videoRef.current
) {
setLoadedMetadata(false);
setUseHlsCompat(true);
} else {
toast.error(
// @ts-expect-error code does exist
`Failed to play recordings (error ${e.target.error.code}): ${e.target.error.message}`,
{
position: "top-center",
},
);
}
}}
/>
</div>
`Failed to play recordings (error ${e.target.error.code}): ${e.target.error.message}`,
{
position: "top-center",
},
);
}
}}
/>
</TransformComponent>
</TransformWrapper>
);
@@ -1,11 +1,4 @@
import {
ReactNode,
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useApiHost } from "@/api";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
@@ -47,7 +40,6 @@ type DynamicVideoPlayerProps = {
setFullResolution: React.Dispatch<React.SetStateAction<VideoResolutionType>>;
toggleFullscreen: () => void;
containerRef?: React.MutableRefObject<HTMLDivElement | null>;
transformedOverlay?: ReactNode;
};
export default function DynamicVideoPlayer({
className,
@@ -66,7 +58,6 @@ export default function DynamicVideoPlayer({
setFullResolution,
toggleFullscreen,
containerRef,
transformedOverlay,
}: DynamicVideoPlayerProps) {
const { t } = useTranslation(["components/player"]);
const apiHost = useApiHost();
@@ -321,7 +312,6 @@ export default function DynamicVideoPlayer({
isDetailMode={isDetailMode}
camera={contextCamera || camera}
currentTimeOverride={currentTime}
transformedOverlay={transformedOverlay}
/>
)}
<PreviewPlayer
@@ -25,7 +25,6 @@ export type MotionReviewTimelineProps = {
timestampSpread: number;
timelineStart: number;
timelineEnd: number;
scrollToTime?: number;
showHandlebar?: boolean;
handlebarTime?: number;
setHandlebarTime?: React.Dispatch<React.SetStateAction<number>>;
@@ -59,7 +58,6 @@ export function MotionReviewTimeline({
timestampSpread,
timelineStart,
timelineEnd,
scrollToTime,
showHandlebar = false,
handlebarTime,
setHandlebarTime,
@@ -178,15 +176,6 @@ export function MotionReviewTimeline({
[],
);

// allow callers to request the timeline center on a specific time
useEffect(() => {
if (scrollToTime == undefined) return;

setTimeout(() => {
scrollToSegment(alignStartDateToTimeline(scrollToTime), true, "auto");
}, 0);
}, [scrollToTime, scrollToSegment, alignStartDateToTimeline]);

// keep handlebar centered when zooming
useEffect(() => {
setTimeout(() => {

@@ -343,12 +343,9 @@ export function ReviewTimeline({

useEffect(() => {
if (onHandlebarDraggingChange) {
// Keep existing callback name but treat it as a generic dragging signal.
// This allows consumers (e.g. export-handle timelines) to correctly
// enable preview scrubbing while dragging export handles.
onHandlebarDraggingChange(isDragging);
onHandlebarDraggingChange(isDraggingHandlebar);
}
}, [isDragging, onHandlebarDraggingChange]);
}, [isDraggingHandlebar, onHandlebarDraggingChange]);

const isHandlebarInNoRecordingPeriod = useMemo(() => {
if (!getRecordingAvailability || handlebarTime === undefined) return false;
@@ -1,26 +0,0 @@
import * as React from "react"
import * as ProgressPrimitive from "@radix-ui/react-progress"

import { cn } from "@/lib/utils"

const Progress = React.forwardRef<
React.ElementRef<typeof ProgressPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof ProgressPrimitive.Root>
>(({ className, value, ...props }, ref) => (
<ProgressPrimitive.Root
ref={ref}
className={cn(
"relative h-4 w-full overflow-hidden rounded-full bg-secondary",
className
)}
{...props}
>
<ProgressPrimitive.Indicator
className="h-full w-full flex-1 bg-primary transition-all"
style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
/>
</ProgressPrimitive.Root>
))
Progress.displayName = ProgressPrimitive.Root.displayName

export { Progress }
@@ -8,19 +8,14 @@ import {
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { MotionData, ReviewSegment } from "@/types/review";
import { useCallback, useEffect, useMemo, useState } from "react";
import { AudioDetection, ObjectType } from "@/types/ws";
import { useTimelineUtils } from "./use-timeline-utils";
import { AudioDetection, ObjectType } from "@/types/ws";
import useDeepMemo from "./use-deep-memo";
import { isEqual } from "lodash";
import { useAutoFrigateStats } from "./use-stats";
import useSWR from "swr";
import { getAttributeLabels } from "@/utils/iconUtil";

export type MotionOnlyRange = {
start_time: number;
end_time: number;
};

type useCameraActivityReturn = {
enabled?: boolean;
activeTracking: boolean;
@@ -209,9 +204,9 @@ export function useCameraMotionNextTimestamp(
return [];
}

const ranges: [number, number][] = [];
let currentSegmentStart: number | null = null;
let currentSegmentEnd: number | null = null;
const ranges = [];
let currentSegmentStart = null;
let currentSegmentEnd = null;

// align motion start to timeline start
const offset =
@@ -220,19 +215,13 @@ export function useCameraMotionNextTimestamp(
segmentDuration;

const startIndex = Math.abs(Math.floor(offset / 15));
const now = Date.now() / 1000;

for (
let i = startIndex;
i < motionData.length;
i = i + segmentDuration / 15
) {
const motionStart = motionData[i]?.start_time;

if (motionStart == undefined) {
continue;
}

const motionStart = motionData[i].start_time;
const motionEnd = motionStart + segmentDuration;

const segmentMotion = motionData
@@ -241,10 +230,10 @@ export function useCameraMotionNextTimestamp(
const overlappingReviewItems = reviewItems.some(
(item) =>
(item.start_time >= motionStart && item.start_time < motionEnd) ||
((item.end_time ?? now) > motionStart &&
(item.end_time ?? now) <= motionEnd) ||
((item.end_time ?? Date.now() / 1000) > motionStart &&
(item.end_time ?? Date.now() / 1000) <= motionEnd) ||
(item.start_time <= motionStart &&
(item.end_time ?? now) >= motionEnd),
(item.end_time ?? Date.now() / 1000) >= motionEnd),
);

if (!segmentMotion || overlappingReviewItems) {
@@ -252,14 +241,16 @@ export function useCameraMotionNextTimestamp(
currentSegmentStart = motionStart;
}
currentSegmentEnd = motionEnd;
} else if (currentSegmentStart !== null && currentSegmentEnd !== null) {
ranges.push([currentSegmentStart, currentSegmentEnd]);
currentSegmentStart = null;
currentSegmentEnd = null;
} else {
if (currentSegmentStart !== null) {
ranges.push([currentSegmentStart, currentSegmentEnd]);
currentSegmentStart = null;
currentSegmentEnd = null;
}
}
}

if (currentSegmentStart !== null && currentSegmentEnd !== null) {
if (currentSegmentStart !== null) {
ranges.push([currentSegmentStart, currentSegmentEnd]);
}

@@ -313,93 +304,3 @@ export function useCameraMotionNextTimestamp(

return nextTimestamp;
}

export function useCameraMotionOnlyRanges(
segmentDuration: number,
reviewItems: ReviewSegment[],
motionData: MotionData[],
) {
const motionOnlyRanges = useMemo(() => {
if (!motionData?.length || !reviewItems) {
return [];
}

const fallbackBucketDuration = Math.max(1, segmentDuration / 2);
const normalizedMotionData = Array.from(
motionData
.reduce((accumulator, item) => {
const currentMotion = accumulator.get(item.start_time) ?? 0;
accumulator.set(
item.start_time,
Math.max(currentMotion, item.motion ?? 0),
);
return accumulator;
}, new Map<number, number>())
.entries(),
)
.map(([start_time, motion]) => ({ start_time, motion }))
.sort((left, right) => left.start_time - right.start_time);

const bucketRanges: MotionOnlyRange[] = [];
const now = Date.now() / 1000;

for (let i = 0; i < normalizedMotionData.length; i++) {
const motionStart = normalizedMotionData[i].start_time;
const motionEnd = motionStart + fallbackBucketDuration;

const overlappingReviewItems = reviewItems.some(
(item) =>
(item.start_time >= motionStart && item.start_time < motionEnd) ||
((item.end_time ?? now) > motionStart &&
(item.end_time ?? now) <= motionEnd) ||
(item.start_time <= motionStart &&
(item.end_time ?? now) >= motionEnd),
);

const isMotionOnlySegment =
(normalizedMotionData[i].motion ?? 0) > 0 && !overlappingReviewItems;

if (!isMotionOnlySegment) {
continue;
}

bucketRanges.push({
start_time: motionStart,
end_time: motionEnd,
});
}

if (!bucketRanges.length) {
return [];
}

const mergedRanges = bucketRanges.reduce<MotionOnlyRange[]>(
(ranges, range) => {
if (!ranges.length) {
return [range];
}

const previousRange = ranges[ranges.length - 1];
const isContiguous =
range.start_time <= previousRange.end_time + 0.001 &&
range.start_time >= previousRange.end_time - 0.001;

if (isContiguous) {
previousRange.end_time = Math.max(
previousRange.end_time,
range.end_time,
);
return ranges;
}

ranges.push(range);
return ranges;
},
[],
);

return mergedRanges;
}, [motionData, reviewItems, segmentDuration]);

return motionOnlyRanges;
}
@@ -1,6 +1,5 @@
import ActivityIndicator from "@/components/indicators/activity-indicator";
import useApiFilter from "@/hooks/use-api-filter";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useCameraPreviews } from "@/hooks/use-camera-previews";
import { useTimezone } from "@/hooks/use-date-utils";
import { useOverlayState, useSearchEffect } from "@/hooks/use-overlay-state";
@@ -22,7 +21,6 @@ import {
getEndOfDayTimestamp,
} from "@/utils/dateUtil";
import EventView from "@/views/events/EventView";
import MotionSearchView from "@/views/motion-search/MotionSearchView";
import { RecordingView } from "@/views/recording/RecordingView";
import axios from "axios";
import { useCallback, useEffect, useMemo, useState } from "react";
@@ -36,7 +34,6 @@ export default function Events() {
revalidateOnFocus: false,
});
const timezone = useTimezone(config);
const allowedCameras = useAllowedCameras();

// recordings viewer

@@ -55,74 +52,6 @@ export default function Events() {
undefined,
false,
);
const [motionPreviewsCamera, setMotionPreviewsCamera] = useOverlayState<
string | undefined
>("motionPreviewsCamera", undefined);

const [motionSearchCamera, setMotionSearchCamera] = useState<string | null>(
null,
);
const [motionSearchDay, setMotionSearchDay] = useState<Date | undefined>(
undefined,
);

const motionSearchCameras = useMemo(() => {
if (!config?.cameras) {
return [] as string[];
}

return Object.keys(config.cameras).filter((cam) =>
allowedCameras.includes(cam),
);
}, [allowedCameras, config?.cameras]);

const selectedMotionSearchCamera = useMemo(() => {
if (!motionSearchCamera) {
return null;
}

if (motionSearchCameras.includes(motionSearchCamera)) {
return motionSearchCamera;
}

return motionSearchCameras[0] ?? null;
}, [motionSearchCamera, motionSearchCameras]);

const motionSearchTimeRange = useMemo(() => {
if (motionSearchDay) {
return {
after: getBeginningOfDayTimestamp(new Date(motionSearchDay)),
before: getEndOfDayTimestamp(new Date(motionSearchDay)),
};
}

const now = Date.now() / 1000;
return {
after: now - 86400,
before: now,
};
}, [motionSearchDay]);

const closeMotionSearch = useCallback(() => {
setMotionSearchCamera(null);
setMotionSearchDay(undefined);
setBeforeTs(Date.now() / 1000);
}, []);

const handleMotionSearchCameraSelect = useCallback((camera: string) => {
setMotionSearchCamera(camera);
}, []);

const handleMotionSearchDaySelect = useCallback((day: Date | undefined) => {
if (day == undefined) {
setMotionSearchDay(undefined);
return;
}

const normalizedDay = new Date(day);
normalizedDay.setHours(0, 0, 0, 0);
setMotionSearchDay(normalizedDay);
}, []);

const [notificationTab, setNotificationTab] =
useState<TimelineType>("timeline");
@@ -579,24 +508,7 @@ export default function Events() {
);
}
} else {
return motionSearchCamera ? (
!config || !selectedMotionSearchCamera ? (
<ActivityIndicator />
) : (
<MotionSearchView
config={config}
cameras={motionSearchCameras}
selectedCamera={selectedMotionSearchCamera}
onCameraSelect={handleMotionSearchCameraSelect}
cameraLocked={true}
selectedDay={motionSearchDay}
onDaySelect={handleMotionSearchDaySelect}
timeRange={motionSearchTimeRange}
timezone={timezone}
onBack={closeMotionSearch}
/>
)
) : (
return (
<EventView
reviewItems={reviewItems}
currentReviewItems={currentItems}
@@ -613,11 +525,6 @@ export default function Events() {
markItemAsReviewed={markItemAsReviewed}
markAllItemsAsReviewed={markAllItemsAsReviewed}
onOpenRecording={setRecording}
motionPreviewsCamera={motionPreviewsCamera ?? null}
setMotionPreviewsCamera={(camera) =>
setMotionPreviewsCamera(camera ?? undefined)
}
setMotionSearchCamera={setMotionSearchCamera}
pullLatestData={reloadData}
updateFilter={onUpdateFilter}
/>
@@ -1,112 +0,0 @@
import { useEffect, useMemo, useState, useCallback } from "react";
import { useTranslation } from "react-i18next";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { useTimezone } from "@/hooks/use-date-utils";
import MotionSearchView from "@/views/motion-search/MotionSearchView";
import {
getBeginningOfDayTimestamp,
getEndOfDayTimestamp,
} from "@/utils/dateUtil";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useSearchEffect } from "@/hooks/use-overlay-state";
import ActivityIndicator from "@/components/indicators/activity-indicator";

export default function MotionSearch() {
const { t } = useTranslation(["views/motionSearch"]);

const { data: config } = useSWR<FrigateConfig>("config", {
revalidateOnFocus: false,
});

const timezone = useTimezone(config);

useEffect(() => {
document.title = t("documentTitle");
}, [t]);

// Get allowed cameras
const allowedCameras = useAllowedCameras();

const cameras = useMemo(() => {
if (!config?.cameras) return [];
return Object.keys(config.cameras).filter((cam) =>
allowedCameras.includes(cam),
);
}, [config?.cameras, allowedCameras]);

// Selected camera state
const [selectedCamera, setSelectedCamera] = useState<string | null>(null);
const [cameraLocked, setCameraLocked] = useState(false);

useSearchEffect("camera", (camera: string) => {
if (cameras.length > 0 && cameras.includes(camera)) {
setSelectedCamera(camera);
setCameraLocked(true);
}
return false;
});

// Initialize with first camera when available (only if not set by camera param)
useEffect(() => {
if (cameras.length === 0) return;
if (!selectedCamera) {
setSelectedCamera(cameras[0]);
}
}, [cameras, selectedCamera]);

// Time range state - default to last 24 hours
const [selectedDay, setSelectedDay] = useState<Date | undefined>(undefined);

const timeRange = useMemo(() => {
if (selectedDay) {
return {
after: getBeginningOfDayTimestamp(new Date(selectedDay)),
before: getEndOfDayTimestamp(new Date(selectedDay)),
};
}
// Default to last 24 hours
const now = Date.now() / 1000;
return {
after: now - 86400,
before: now,
};
}, [selectedDay]);

const handleCameraSelect = useCallback((camera: string) => {
setSelectedCamera(camera);
}, []);

const handleDaySelect = useCallback((day: Date | undefined) => {
if (day == undefined) {
setSelectedDay(undefined);
return;
}

const normalizedDay = new Date(day);
normalizedDay.setHours(0, 0, 0, 0);
setSelectedDay(normalizedDay);
}, []);

if (!config || cameras.length === 0) {
return (
<div className="flex size-full items-center justify-center">
<ActivityIndicator />
</div>
);
}

return (
<MotionSearchView
config={config}
cameras={cameras}
selectedCamera={selectedCamera ?? null}
onCameraSelect={handleCameraSelect}
cameraLocked={cameraLocked}
selectedDay={selectedDay}
onDaySelect={handleDaySelect}
timeRange={timeRange}
timezone={timezone}
/>
);
}
@@ -40,8 +40,7 @@ import UsersView from "@/views/settings/UsersView";
import RolesView from "@/views/settings/RolesView";
import UiSettingsView from "@/views/settings/UiSettingsView";
import FrigatePlusSettingsView from "@/views/settings/FrigatePlusSettingsView";
import MediaSyncSettingsView from "@/views/settings/MediaSyncSettingsView";
import RegionGridSettingsView from "@/views/settings/RegionGridSettingsView";
import MaintenanceSettingsView from "@/views/settings/MaintenanceSettingsView";
import SystemDetectionModelSettingsView from "@/views/settings/SystemDetectionModelSettingsView";
import {
SingleSectionPage,
@@ -155,8 +154,7 @@ const allSettingsViews = [
"roles",
"notifications",
"frigateplus",
"mediaSync",
"regionGrid",
"maintenance",
] as const;
type SettingsType = (typeof allSettingsViews)[number];

@@ -446,10 +444,7 @@ const settingsGroups = [
},
{
label: "maintenance",
items: [
{ key: "mediaSync", component: MediaSyncSettingsView },
{ key: "regionGrid", component: RegionGridSettingsView },
],
items: [{ key: "maintenance", component: MaintenanceSettingsView }],
},
];

@@ -476,7 +471,6 @@ const CAMERA_SELECT_BUTTON_PAGES = [
"masksAndZones",
"motionTuner",
"triggers",
"regionGrid",
];

const ALLOWED_VIEWS_FOR_VIEWER = ["ui", "debug", "notifications"];
@@ -484,8 +478,7 @@ const ALLOWED_VIEWS_FOR_VIEWER = ["ui", "debug", "notifications"];
const LARGE_BOTTOM_MARGIN_PAGES = [
"masksAndZones",
"motionTuner",
"mediaSync",
"regionGrid",
"maintenance",
];

// keys for camera sections
@ -106,7 +106,6 @@ export interface CameraConfig {
|
||||
frame_height: number;
|
||||
improve_contrast: boolean;
|
||||
lightning_threshold: number;
|
||||
skip_motion_threshold: number | null;
|
||||
mask: {
|
||||
[maskId: string]: {
|
||||
friendly_name?: string;
|
||||
|
||||
@@ -1,46 +0,0 @@
/**
 * Types for the Motion Search feature
 */

export interface MotionSearchResult {
  timestamp: number;
  change_percentage: number;
}

export interface MotionSearchRequest {
  start_time: number;
  end_time: number;
  polygon_points: number[][];
  parallel?: boolean;
  threshold?: number;
  min_area?: number;
  frame_skip?: number;
  max_results?: number;
}

export interface MotionSearchStartResponse {
  success: boolean;
  message: string;
  job_id: string;
}

export interface MotionSearchMetrics {
  segments_scanned: number;
  segments_processed: number;
  metadata_inactive_segments: number;
  heatmap_roi_skip_segments: number;
  fallback_full_range_segments: number;
  frames_decoded: number;
  wall_time_seconds: number;
  segments_with_errors: number;
}

export interface MotionSearchStatusResponse {
  success: boolean;
  message: string;
  status: "queued" | "running" | "success" | "failed" | "cancelled";
  results?: MotionSearchResult[];
  total_frames_processed?: number;
  error_message?: string;
  metrics?: MotionSearchMetrics;
}
@@ -11,7 +11,6 @@ export type Recording = {
  duration: number;
  motion: number;
  objects: number;
  motion_heatmap?: Record<string, number> | null;
  dBFS: number;
};


File diff suppressed because it is too large
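The deleted `MotionSearchStatusResponse` above models an asynchronous search job whose `status` is one of five string literals. A client polling such a job only needs to distinguish terminal from non-terminal states; a minimal sketch (the union mirrors the deleted type; `isTerminal` is a hypothetical helper, not from the codebase):

```typescript
// Mirrors the status union from the deleted MotionSearchStatusResponse type.
type MotionSearchStatus =
  | "queued"
  | "running"
  | "success"
  | "failed"
  | "cancelled";

// A job is terminal once its status can no longer change (hypothetical helper).
function isTerminal(status: MotionSearchStatus): boolean {
  return status === "success" || status === "failed" || status === "cancelled";
}
```

A polling loop would stop (and read `results`, `error_message`, or `metrics`) once `isTerminal` returns true.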
@@ -1,898 +0,0 @@
import { MotionOnlyRange } from "@/hooks/use-camera-activity";
import { Preview } from "@/types/preview";
import {
  MutableRefObject,
  useCallback,
  useEffect,
  useMemo,
  useRef,
  useState,
} from "react";
import { isCurrentHour } from "@/utils/dateUtil";
import { useTranslation } from "react-i18next";
import { CameraConfig } from "@/types/frigateConfig";
import useSWR from "swr";
import { baseUrl } from "@/api/baseUrl";
import { Recording } from "@/types/record";
import { useResizeObserver } from "@/hooks/resize-observer";
import { Skeleton } from "@/components/ui/skeleton";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import TimeAgo from "@/components/dynamic/TimeAgo";
import { useFormattedTimestamp } from "@/hooks/use-date-utils";
import { FrigateConfig } from "@/types/frigateConfig";

const MOTION_HEATMAP_GRID_SIZE = 16;
const MIN_MOTION_CELL_ALPHA = 0.06;

function getPreviewForMotionRange(
  cameraPreviews: Preview[],
  cameraName: string,
  range: MotionOnlyRange,
) {
  const matchingPreviews = cameraPreviews.filter(
    (preview) =>
      preview.camera === cameraName &&
      preview.end > range.start_time &&
      preview.start < range.end_time,
  );

  if (!matchingPreviews.length) {
    return;
  }

  const getOverlap = (preview: Preview) =>
    Math.max(
      0,
      Math.min(preview.end, range.end_time) -
        Math.max(preview.start, range.start_time),
    );

  return matchingPreviews.reduce((best, current) => {
    return getOverlap(current) > getOverlap(best) ? current : best;
  });
}

function getRangeOverlapSeconds(
  rangeStart: number,
  rangeEnd: number,
  recordingStart: number,
  recordingEnd: number,
) {
  return Math.max(
    0,
    Math.min(rangeeEnd, recordingEnd) - Math.max(rangeStart, recordingStart),
  );
}

function getMotionHeatmapForRange(
  recordings: Recording[],
  range: MotionOnlyRange,
) {
  const weightedHeatmap = new Map<number, number>();
  let totalWeight = 0;

  recordings.forEach((recording) => {
    const overlapSeconds = getRangeOverlapSeconds(
      range.start_time,
      range.end_time,
      recording.start_time,
      recording.end_time,
    );

    if (overlapSeconds <= 0) {
      return;
    }

    totalWeight += overlapSeconds;

    if (!recording.motion_heatmap) {
      return;
    }

    Object.entries(recording.motion_heatmap).forEach(
      ([cellIndex, intensity]) => {
        const index = Number(cellIndex);
        const level = Number(intensity);

        if (Number.isNaN(index) || Number.isNaN(level) || level <= 0) {
          return;
        }

        const existingWeight = weightedHeatmap.get(index) ?? 0;
        weightedHeatmap.set(index, existingWeight + level * overlapSeconds);
      },
    );
  });

  if (!totalWeight || weightedHeatmap.size === 0) {
    return null;
  }

  const mergedHeatmap: Record<string, number> = {};
  weightedHeatmap.forEach((weightedLevel, index) => {
    const normalizedLevel = Math.max(
      0,
      Math.min(255, Math.round(weightedLevel / totalWeight)),
    );

    if (normalizedLevel > 0) {
      mergedHeatmap[index.toString()] = normalizedLevel;
    }
  });

  return Object.keys(mergedHeatmap).length > 0 ? mergedHeatmap : null;
}

type MotionPreviewClipProps = {
  cameraName: string;
  range: MotionOnlyRange;
  playbackRate: number;
  preview?: Preview;
  fallbackFrameTimes?: number[];
  motionHeatmap?: Record<string, number> | null;
  nonMotionAlpha: number;
  isVisible: boolean;
  onSeek: (timestamp: number) => void;
};

function MotionPreviewClip({
  cameraName,
  range,
  playbackRate,
  preview,
  fallbackFrameTimes,
  motionHeatmap,
  nonMotionAlpha,
  isVisible,
  onSeek,
}: MotionPreviewClipProps) {
  const { t } = useTranslation(["views/events", "common"]);
  const { data: config } = useSWR<FrigateConfig>("config");
  const videoRef = useRef<HTMLVideoElement | null>(null);
  const dimOverlayCanvasRef = useRef<HTMLCanvasElement | null>(null);
  const overlayContainerRef = useRef<HTMLDivElement | null>(null);
  const [{ width: overlayWidth, height: overlayHeight }] =
    useResizeObserver(overlayContainerRef);
  const [videoLoaded, setVideoLoaded] = useState(false);
  const [videoPlaying, setVideoPlaying] = useState(false);
  const [fallbackImageLoaded, setFallbackImageLoaded] = useState(false);
  const [mediaDimensions, setMediaDimensions] = useState<{
    width: number;
    height: number;
  } | null>(null);

  const [fallbackFrameIndex, setFallbackFrameIndex] = useState(0);
  const [fallbackFramesReady, setFallbackFramesReady] = useState(false);

  const formattedDate = useFormattedTimestamp(
    range.start_time,
    config?.ui.time_format == "24hour"
      ? t("time.formattedTimestampMonthDayHourMinute.24hour", {
          ns: "common",
        })
      : t("time.formattedTimestampMonthDayHourMinute.12hour", {
          ns: "common",
        }),
    config?.ui.timezone,
  );
  const fallbackFrameSrcs = useMemo(() => {
    if (!fallbackFrameTimes || fallbackFrameTimes.length === 0) {
      return [] as string[];
    }

    return fallbackFrameTimes.map(
      (frameTime) =>
        `${baseUrl}api/preview/preview_${cameraName}-${frameTime}.webp/thumbnail.webp`,
    );
  }, [cameraName, fallbackFrameTimes]);

  useEffect(() => {
    setFallbackFrameIndex(0);
    setFallbackFramesReady(false);
  }, [range.start_time, range.end_time, fallbackFrameTimes]);

  useEffect(() => {
    if (fallbackFrameSrcs.length === 0) {
      setFallbackFramesReady(false);
      return;
    }

    let cancelled = false;

    const preloadFrames = async () => {
      await Promise.allSettled(
        fallbackFrameSrcs.map(
          (src) =>
            new Promise<void>((resolve) => {
              const image = new Image();
              image.onload = () => resolve();
              image.onerror = () => resolve();
              image.src = src;
            }),
        ),
      );

      if (!cancelled) {
        setFallbackFramesReady(true);
      }
    };

    void preloadFrames();

    return () => {
      cancelled = true;
    };
  }, [fallbackFrameSrcs]);

  useEffect(() => {
    if (!fallbackFramesReady || fallbackFrameSrcs.length <= 1 || !isVisible) {
      return;
    }

    const intervalMs = Math.max(
      50,
      Math.round(1000 / Math.max(1, playbackRate)),
    );
    const intervalId = window.setInterval(() => {
      setFallbackFrameIndex((previous) => {
        return (previous + 1) % fallbackFrameSrcs.length;
      });
    }, intervalMs);

    return () => {
      window.clearInterval(intervalId);
    };
  }, [fallbackFrameSrcs.length, fallbackFramesReady, isVisible, playbackRate]);

  const fallbackFrameSrc = useMemo(() => {
    if (fallbackFrameSrcs.length === 0) {
      return undefined;
    }

    return fallbackFrameSrcs[fallbackFrameIndex] ?? fallbackFrameSrcs[0];
  }, [fallbackFrameIndex, fallbackFrameSrcs]);

  useEffect(() => {
    setVideoLoaded(false);
    setVideoPlaying(false);
    setMediaDimensions(null);
  }, [preview?.src]);

  useEffect(() => {
    if (!preview || !isVisible || videoLoaded || !videoRef.current) {
      return;
    }

    if (videoRef.current.currentSrc || videoRef.current.error) {
      setVideoLoaded(true);
    }
  }, [isVisible, preview, videoLoaded]);

  useEffect(() => {
    setFallbackImageLoaded(false);
    setMediaDimensions(null);
  }, [fallbackFrameSrcs]);

  useEffect(() => {
    if (!fallbackFrameSrc || !isVisible || !fallbackFramesReady) {
      return;
    }

    setFallbackImageLoaded(true);
  }, [fallbackFrameSrc, fallbackFramesReady, isVisible]);

  const showLoadingIndicator =
    (preview != undefined && isVisible && !videoPlaying) ||
    (fallbackFrameSrc != undefined && isVisible && !fallbackImageLoaded);

  const clipStart = useMemo(() => {
    if (!preview) {
      return 0;
    }

    return Math.max(0, range.start_time - preview.start);
  }, [preview, range.start_time]);

  const clipEnd = useMemo(() => {
    if (!preview) {
      return 0;
    }

    const previewDuration = preview.end - preview.start;
    return Math.min(
      previewDuration,
      Math.max(clipStart + 0.1, range.end_time - preview.start),
    );
  }, [clipStart, preview, range.end_time]);

  const resetPlayback = useCallback(() => {
    if (!videoRef.current || !preview) {
      return;
    }

    videoRef.current.currentTime = clipStart;
    videoRef.current.playbackRate = playbackRate;
  }, [clipStart, playbackRate, preview]);

  useEffect(() => {
    if (!videoRef.current || !preview) {
      return;
    }

    if (!isVisible) {
      videoRef.current.pause();
      videoRef.current.currentTime = clipStart;
      return;
    }

    if (videoRef.current.readyState >= 2) {
      resetPlayback();
      void videoRef.current.play().catch(() => undefined);
    }
  }, [clipStart, isVisible, preview, resetPlayback]);

  const drawDimOverlay = useCallback(() => {
    if (!dimOverlayCanvasRef.current) {
      return;
    }

    const canvas = dimOverlayCanvasRef.current;
    const context = canvas.getContext("2d");

    if (!context) {
      return;
    }

    if (overlayWidth <= 0 || overlayHeight <= 0) {
      return;
    }

    const width = Math.max(1, overlayWidth);
    const height = Math.max(1, overlayHeight);
    const dpr = window.devicePixelRatio || 1;
    const pixelWidth = Math.max(1, Math.round(width * dpr));
    const pixelHeight = Math.max(1, Math.round(height * dpr));

    if (canvas.width !== pixelWidth || canvas.height !== pixelHeight) {
      canvas.width = pixelWidth;
      canvas.height = pixelHeight;
    }

    canvas.style.width = `${width}px`;
    canvas.style.height = `${height}px`;

    context.setTransform(dpr, 0, 0, dpr, 0, 0);
    context.clearRect(0, 0, width, height);

    if (!motionHeatmap) {
      return;
    }

    // Calculate the actual rendered media area (object-contain letterboxing)
    let drawX = 0;
    let drawY = 0;
    let drawWidth = width;
    let drawHeight = height;

    if (
      mediaDimensions &&
      mediaDimensions.width > 0 &&
      mediaDimensions.height > 0
    ) {
      const containerAspect = width / height;
      const mediaAspect = mediaDimensions.width / mediaDimensions.height;

      if (mediaAspect < containerAspect) {
        // Portrait / tall: constrained by height, bars on left and right
        drawHeight = height;
        drawWidth = height * mediaAspect;
        drawX = (width - drawWidth) / 2;
        drawY = 0;
      } else {
        // Wide / landscape: constrained by width, bars on top and bottom
        drawWidth = width;
        drawHeight = width / mediaAspect;
        drawX = 0;
        drawY = (height - drawHeight) / 2;
      }
    }

    const heatmapLevels = Object.values(motionHeatmap)
      .map((value) => Number(value))
      .filter((value) => Number.isFinite(value) && value > 0);

    const maxHeatmapLevel =
      heatmapLevels.length > 0 ? Math.max(...heatmapLevels) : 0;

    const maskCanvas = document.createElement("canvas");
    maskCanvas.width = MOTION_HEATMAP_GRID_SIZE;
    maskCanvas.height = MOTION_HEATMAP_GRID_SIZE;

    const maskContext = maskCanvas.getContext("2d");
    if (!maskContext) {
      return;
    }

    const imageData = maskContext.createImageData(
      MOTION_HEATMAP_GRID_SIZE,
      MOTION_HEATMAP_GRID_SIZE,
    );

    for (let index = 0; index < MOTION_HEATMAP_GRID_SIZE ** 2; index++) {
      const level = Number(motionHeatmap[index.toString()] ?? 0);
      const normalizedLevel =
        maxHeatmapLevel > 0
          ? Math.min(1, Math.max(0, level / maxHeatmapLevel))
          : 0;
      const boostedLevel = Math.sqrt(normalizedLevel);
      const alpha =
        nonMotionAlpha -
        boostedLevel * (nonMotionAlpha - MIN_MOTION_CELL_ALPHA);

      const pixelOffset = index * 4;
      imageData.data[pixelOffset] = 0;
      imageData.data[pixelOffset + 1] = 0;
      imageData.data[pixelOffset + 2] = 0;
      imageData.data[pixelOffset + 3] = Math.round(
        Math.max(0, Math.min(1, alpha)) * 255,
      );
    }

    maskContext.putImageData(imageData, 0, 0);
    context.imageSmoothingEnabled = true;
    context.imageSmoothingQuality = "high";
    context.drawImage(maskCanvas, drawX, drawY, drawWidth, drawHeight);
  }, [
    motionHeatmap,
    nonMotionAlpha,
    overlayHeight,
    overlayWidth,
    mediaDimensions,
  ]);

  useEffect(() => {
    drawDimOverlay();
  }, [drawDimOverlay]);

  return (
    <div
      ref={overlayContainerRef}
      className="relative aspect-video size-full cursor-pointer overflow-hidden rounded-lg bg-black md:rounded-2xl"
      onClick={() => onSeek(range.start_time)}
    >
      {showLoadingIndicator && (
        <Skeleton className="absolute inset-0 z-10 rounded-lg md:rounded-2xl" />
      )}
      {preview ? (
        <>
          <video
            ref={videoRef}
            className="size-full bg-black object-contain"
            playsInline
            preload={isVisible ? "metadata" : "none"}
            muted
            autoPlay={isVisible}
            onLoadedMetadata={() => {
              setVideoLoaded(true);

              if (videoRef.current) {
                setMediaDimensions({
                  width: videoRef.current.videoWidth,
                  height: videoRef.current.videoHeight,
                });
              }

              if (!isVisible) {
                return;
              }

              resetPlayback();

              if (videoRef.current) {
                void videoRef.current.play().catch(() => undefined);
              }
            }}
            onCanPlay={() => {
              setVideoLoaded(true);

              if (!isVisible) {
                return;
              }

              if (videoRef.current) {
                void videoRef.current.play().catch(() => undefined);
              }
            }}
            onPlay={() => setVideoPlaying(true)}
            onLoadedData={() => setVideoLoaded(true)}
            onError={() => {
              setVideoLoaded(true);
              setVideoPlaying(true);
            }}
            onTimeUpdate={() => {
              if (!videoRef.current || !preview || !isVisible) {
                return;
              }

              if (videoRef.current.currentTime >= clipEnd) {
                videoRef.current.currentTime = clipStart;
              }
            }}
          >
            {isVisible && (
              <source
                src={`${baseUrl}${preview.src.substring(1)}`}
                type={preview.type}
              />
            )}
          </video>
          {motionHeatmap && (
            <canvas
              ref={dimOverlayCanvasRef}
              className="pointer-events-none absolute inset-0"
              aria-hidden="true"
            />
          )}
        </>
      ) : fallbackFrameSrc ? (
        <>
          <img
            src={fallbackFrameSrc}
            className="size-full bg-black object-contain"
            loading="lazy"
            alt=""
            onLoad={(e) => {
              setFallbackImageLoaded(true);
              const img = e.currentTarget;
              if (img.naturalWidth > 0 && img.naturalHeight > 0) {
                setMediaDimensions({
                  width: img.naturalWidth,
                  height: img.naturalHeight,
                });
              }
            }}
            onError={() => setFallbackImageLoaded(true)}
          />
          {motionHeatmap && (
            <canvas
              ref={dimOverlayCanvasRef}
              className="pointer-events-none absolute inset-0"
              aria-hidden="true"
            />
          )}
        </>
      ) : (
        <div className="flex size-full items-center justify-center text-sm text-muted-foreground">
          {t("motionPreviews.noPreview")}
        </div>
      )}

      <div className="pointer-events-none absolute bottom-0 left-0 right-0 z-30 p-2">
        <div className="flex flex-col items-start text-xs text-white/90 drop-shadow-lg">
          {range.end_time ? (
            <TimeAgo time={range.start_time * 1000} dense />
          ) : (
            <ActivityIndicator size={14} />
          )}
          {formattedDate}
        </div>
      </div>
    </div>
  );
}

type MotionPreviewsPaneProps = {
  camera: CameraConfig;
  contentRef: MutableRefObject<HTMLDivElement | null>;
  cameraPreviews: Preview[];
  motionRanges: MotionOnlyRange[];
  isLoadingMotionRanges?: boolean;
  playbackRate: number;
  nonMotionAlpha: number;
  onSeek: (timestamp: number) => void;
};

export default function MotionPreviewsPane({
  camera,
  contentRef,
  cameraPreviews,
  motionRanges,
  isLoadingMotionRanges = false,
  playbackRate,
  nonMotionAlpha,
  onSeek,
}: MotionPreviewsPaneProps) {
  const { t } = useTranslation(["views/events"]);
  const [scrollContainer, setScrollContainer] = useState<HTMLDivElement | null>(
    null,
  );

  const [windowVisible, setWindowVisible] = useState(true);
  useEffect(() => {
    const visibilityListener = () => {
      setWindowVisible(document.visibilityState == "visible");
    };

    addEventListener("visibilitychange", visibilityListener);

    return () => {
      removeEventListener("visibilitychange", visibilityListener);
    };
  }, []);

  const [visibleClips, setVisibleClips] = useState<string[]>([]);
  const [hasVisibilityData, setHasVisibilityData] = useState(false);
  const clipObserver = useRef<IntersectionObserver | null>(null);

  const recordingTimeRange = useMemo(() => {
    if (!motionRanges.length) {
      return null;
    }

    return motionRanges.reduce(
      (bounds, range) => ({
        after: Math.min(bounds.after, range.start_time),
        before: Math.max(bounds.before, range.end_time),
      }),
      {
        after: motionRanges[0].start_time,
        before: motionRanges[0].end_time,
      },
    );
  }, [motionRanges]);

  const { data: cameraRecordings } = useSWR<Recording[]>(
    recordingTimeRange
      ? [
          `${camera.name}/recordings`,
          {
            after: Math.floor(recordingTimeRange.after),
            before: Math.ceil(recordingTimeRange.before),
          },
        ]
      : null,
    {
      revalidateOnFocus: false,
      revalidateOnReconnect: false,
    },
  );
  const { data: previewFrames } = useSWR<string[]>(
    recordingTimeRange
      ? `preview/${camera.name}/start/${Math.floor(recordingTimeRange.after)}/end/${Math.ceil(recordingTimeRange.before)}/frames`
      : null,
    {
      revalidateOnFocus: false,
      revalidateOnReconnect: false,
    },
  );

  const previewFrameTimes = useMemo(() => {
    if (!previewFrames) {
      return [] as number[];
    }

    return previewFrames
      .map((frame) => {
        const timestampPart = frame.split("-").at(-1)?.replace(".webp", "");
        return timestampPart ? Number(timestampPart) : NaN;
      })
      .filter((value) => Number.isFinite(value))
      .sort((a, b) => a - b);
  }, [previewFrames]);

  const getFallbackFrameTimesForRange = useCallback(
    (range: MotionOnlyRange) => {
      if (!isCurrentHour(range.end_time) || previewFrameTimes.length === 0) {
        return [] as number[];
      }

      const inRangeFrames = previewFrameTimes.filter(
        (frameTime) =>
          frameTime >= range.start_time && frameTime <= range.end_time,
      );

      // Use all in-range frames when enough data exists for natural animation
      if (inRangeFrames.length > 1) {
        return inRangeFrames;
      }

      // If sparse, keep the single in-range frame and add only the next 2 frames
      if (inRangeFrames.length === 1) {
        const inRangeFrame = inRangeFrames[0];
        const nextFrames = previewFrameTimes
          .filter((frameTime) => frameTime > inRangeFrame)
          .slice(0, 2);

        return [inRangeFrame, ...nextFrames];
      }

      const nextFramesFromStart = previewFrameTimes
        .filter((frameTime) => frameTime >= range.start_time)
        .slice(0, 3);
      // If no in-range frame exists, take up to 3 frames starting at clip start
      if (nextFramesFromStart.length > 0) {
        return nextFramesFromStart;
      }

      const lastFrame = previewFrameTimes.at(-1);
      return lastFrame != undefined ? [lastFrame] : [];
    },
    [previewFrameTimes],
  );

  const setContentNode = useCallback(
    (node: HTMLDivElement | null) => {
      contentRef.current = node;
      setScrollContainer(node);
    },
    [contentRef],
  );

  useEffect(() => {
    if (!scrollContainer) {
      return;
    }

    const visibleClipIds = new Set<string>();
    clipObserver.current = new IntersectionObserver(
      (entries) => {
        setHasVisibilityData(true);

        entries.forEach((entry) => {
          const clipId = (entry.target as HTMLElement).dataset.clipId;

          if (!clipId) {
            return;
          }

          if (entry.isIntersecting) {
            visibleClipIds.add(clipId);
          } else {
            visibleClipIds.delete(clipId);
          }
        });

        const rootRect = scrollContainer.getBoundingClientRect();
        const prunedVisibleClipIds = [...visibleClipIds].filter((clipId) => {
          const clipElement = scrollContainer.querySelector<HTMLElement>(
            `[data-clip-id="${clipId}"]`,
          );

          if (!clipElement) {
            return false;
          }

          const clipRect = clipElement.getBoundingClientRect();

          return (
            clipRect.bottom > rootRect.top && clipRect.top < rootRect.bottom
          );
        });

        setVisibleClips(prunedVisibleClipIds);
      },
      {
        root: scrollContainer,
        threshold: 0,
      },
    );

    scrollContainer
      .querySelectorAll<HTMLElement>("[data-clip-id]")
      .forEach((node) => {
        clipObserver.current?.observe(node);
      });

    return () => {
      clipObserver.current?.disconnect();
    };
  }, [scrollContainer]);

  const clipRef = useCallback((node: HTMLElement | null) => {
    if (!clipObserver.current) {
      return;
    }

    try {
      if (node) {
        clipObserver.current.observe(node);
      }
    } catch {
      // no op
    }
  }, []);

  const clipData = useMemo(
    () =>
      motionRanges
        .filter((range) => range.end_time > range.start_time)
        .sort((left, right) => right.start_time - left.start_time)
        .map((range) => {
          const preview = getPreviewForMotionRange(
            cameraPreviews,
            camera.name,
            range,
          );

          return {
            range,
            preview,
            fallbackFrameTimes: !preview
              ? getFallbackFrameTimesForRange(range)
              : undefined,
            motionHeatmap: getMotionHeatmapForRange(
              cameraRecordings ?? [],
              range,
            ),
          };
        }),
    [
      cameraPreviews,
      camera.name,
      cameraRecordings,
      getFallbackFrameTimesForRange,
      motionRanges,
    ],
  );

  const hasCurrentHourRanges = useMemo(
    () => motionRanges.some((range) => isCurrentHour(range.end_time)),
    [motionRanges],
  );

  const isLoadingPane =
    isLoadingMotionRanges ||
    (motionRanges.length > 0 && cameraRecordings == undefined) ||
    (hasCurrentHourRanges && previewFrames == undefined);

  if (isLoadingPane) {
    return (
      <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
    );
  }

  return (
    <div className="flex min-h-0 flex-1 flex-col gap-3 overflow-hidden px-1 md:mx-2 md:gap-4">
      <div
        ref={setContentNode}
        className="no-scrollbar min-h-0 flex-1 overflow-y-auto"
      >
        {clipData.length === 0 ? (
          <div className="flex h-full items-center justify-center text-lg text-primary">
            {t("motionPreviews.empty")}
          </div>
        ) : (
          <div className="grid grid-cols-1 gap-2 pb-2 sm:grid-cols-2 md:gap-4 xl:grid-cols-4">
            {clipData.map(
              ({ range, preview, fallbackFrameTimes, motionHeatmap }, idx) => (
                <div
                  key={`${camera.name}-${range.start_time}-${range.end_time}-${preview?.src ?? "none"}-${idx}`}
                  data-clip-id={`${camera.name}-${range.start_time}-${range.end_time}-${idx}`}
                  ref={clipRef}
                >
                  <MotionPreviewClip
                    cameraName={camera.name}
                    range={range}
                    playbackRate={playbackRate}
                    preview={preview}
                    fallbackFrameTimes={fallbackFrameTimes}
                    motionHeatmap={motionHeatmap}
                    nonMotionAlpha={nonMotionAlpha}
                    isVisible={
                      windowVisible &&
                      (visibleClips.includes(
                        `${camera.name}-${range.start_time}-${range.end_time}-${idx}`,
                      ) ||
                        (!hasVisibilityData && idx < 8))
                    }
                    onSeek={onSeek}
                  />
                </div>
              ),
            )}
          </div>
        )}
      </div>
    </div>
  );
}
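The deleted `MotionPreviewsPane` above weights each recording's motion heatmap by its overlap, in seconds, with the motion range (`getRangeOverlapSeconds`). That overlap computation is self-contained and can be reproduced directly as a sketch:

```typescript
// Seconds of overlap between [rangeStart, rangeEnd] and
// [recordingStart, recordingEnd]; 0 when the intervals are disjoint.
function overlapSeconds(
  rangeStart: number,
  rangeEnd: number,
  recordingStart: number,
  recordingEnd: number,
): number {
  return Math.max(
    0,
    Math.min(rangeEnd, recordingEnd) - Math.max(rangeStart, recordingStart),
  );
}
```

Clamping the difference at zero is what lets the heatmap merge simply skip recordings that do not intersect the range.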
@@ -1,708 +0,0 @@
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useTranslation } from "react-i18next";
import { isDesktop, isIOS, isMobile } from "react-device-detect";
import { FaArrowRight, FaCalendarAlt, FaCheckCircle } from "react-icons/fa";
import { MdOutlineRestartAlt, MdUndo } from "react-icons/md";

import { FrigateConfig } from "@/types/frigateConfig";
import { TimeRange } from "@/types/timeline";

import { Button } from "@/components/ui/button";
import {
  Dialog,
  DialogContent,
  DialogDescription,
  DialogHeader,
  DialogTitle,
} from "@/components/ui/dialog";
import { Drawer, DrawerContent } from "@/components/ui/drawer";
import { Label } from "@/components/ui/label";
import { Slider } from "@/components/ui/slider";
import { Switch } from "@/components/ui/switch";
import {
  Select,
  SelectContent,
  SelectItem,
  SelectTrigger,
  SelectValue,
} from "@/components/ui/select";
import {
  Popover,
  PopoverContent,
  PopoverTrigger,
} from "@/components/ui/popover";
import { SelectSeparator } from "@/components/ui/select";
import {
  Tooltip,
  TooltipContent,
  TooltipTrigger,
} from "@/components/ui/tooltip";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { CameraNameLabel } from "@/components/camera/FriendlyNameLabel";
import { TimezoneAwareCalendar } from "@/components/overlay/ReviewActivityCalendar";

import { useApiHost } from "@/api";
import { useResizeObserver } from "@/hooks/resize-observer";
import { useFormattedTimestamp } from "@/hooks/use-date-utils";
import { getUTCOffset } from "@/utils/dateUtil";
import { cn } from "@/lib/utils";
import MotionSearchROICanvas from "./MotionSearchROICanvas";
import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";

type MotionSearchDialogProps = {
  open: boolean;
  onOpenChange: (open: boolean) => void;
  config: FrigateConfig;
  cameras: string[];
  selectedCamera: string | null;
  onCameraSelect: (camera: string) => void;
  cameraLocked?: boolean;
  polygonPoints: number[][];
  setPolygonPoints: React.Dispatch<React.SetStateAction<number[][]>>;
  isDrawingROI: boolean;
  setIsDrawingROI: React.Dispatch<React.SetStateAction<boolean>>;
  parallelMode: boolean;
  setParallelMode: React.Dispatch<React.SetStateAction<boolean>>;
  threshold: number;
  setThreshold: React.Dispatch<React.SetStateAction<number>>;
  minArea: number;
  setMinArea: React.Dispatch<React.SetStateAction<number>>;
  frameSkip: number;
  setFrameSkip: React.Dispatch<React.SetStateAction<number>>;
  maxResults: number;
  setMaxResults: React.Dispatch<React.SetStateAction<number>>;
  searchRange?: TimeRange;
  setSearchRange: React.Dispatch<React.SetStateAction<TimeRange | undefined>>;
  defaultRange: TimeRange;
  isSearching: boolean;
  canStartSearch: boolean;
  onStartSearch: () => void;
  timezone?: string;
};

export default function MotionSearchDialog({
  open,
  onOpenChange,
  config,
  cameras,
  selectedCamera,
  onCameraSelect,
  cameraLocked = false,
  polygonPoints,
  setPolygonPoints,
  isDrawingROI,
  setIsDrawingROI,
  parallelMode,
  setParallelMode,
  threshold,
  setThreshold,
  minArea,
  setMinArea,
  frameSkip,
  setFrameSkip,
  maxResults,
  setMaxResults,
  searchRange,
  setSearchRange,
  defaultRange,
  isSearching,
  canStartSearch,
  onStartSearch,
  timezone,
}: MotionSearchDialogProps) {
  const { t } = useTranslation(["views/motionSearch", "common"]);
  const apiHost = useApiHost();
  const containerRef = useRef<HTMLDivElement>(null);
  const [{ width: containerWidth, height: containerHeight }] =
    useResizeObserver(containerRef);
  const [imageLoaded, setImageLoaded] = useState(false);

  const cameraConfig = useMemo(() => {
    if (!selectedCamera) return undefined;
    return config.cameras[selectedCamera];
  }, [config, selectedCamera]);

  const polygonClosed = useMemo(
    () => !isDrawingROI && polygonPoints.length >= 3,
|
||||
[isDrawingROI, polygonPoints.length],
|
||||
);
|
||||
|
||||
const undoPolygonPoint = useCallback(() => {
|
||||
if (polygonPoints.length === 0 || isSearching) {
|
||||
return;
|
||||
}
|
||||
|
||||
setPolygonPoints((prev) => prev.slice(0, -1));
|
||||
setIsDrawingROI(true);
|
||||
}, [isSearching, setIsDrawingROI, setPolygonPoints, polygonPoints.length]);
|
||||
|
||||
const resetPolygon = useCallback(() => {
|
||||
if (polygonPoints.length === 0 || isSearching) {
|
||||
return;
|
||||
}
|
||||
|
||||
setPolygonPoints([]);
|
||||
setIsDrawingROI(true);
|
||||
}, [isSearching, polygonPoints.length, setIsDrawingROI, setPolygonPoints]);
|
||||
|
||||
const imageSize = useMemo(() => {
|
||||
if (!containerWidth || !containerHeight || !cameraConfig) {
|
||||
return { width: 0, height: 0 };
|
||||
}
|
||||
|
||||
const cameraAspectRatio =
|
||||
cameraConfig.detect.width / cameraConfig.detect.height;
|
||||
const availableAspectRatio = containerWidth / containerHeight;
|
||||
|
||||
if (availableAspectRatio >= cameraAspectRatio) {
|
||||
return {
|
||||
width: containerHeight * cameraAspectRatio,
|
||||
height: containerHeight,
|
||||
};
|
||||
}
|
||||
|
||||
return {
|
||||
width: containerWidth,
|
||||
height: containerWidth / cameraAspectRatio,
|
||||
};
|
||||
}, [containerWidth, containerHeight, cameraConfig]);
|
||||
|
||||
useEffect(() => {
|
||||
setImageLoaded(false);
|
||||
}, [selectedCamera]);
|
||||
|
||||
const Overlay = isDesktop ? Dialog : Drawer;
|
||||
const Content = isDesktop ? DialogContent : DrawerContent;
|
||||
|
||||
return (
|
||||
<Overlay open={open} onOpenChange={onOpenChange}>
|
||||
<Content
|
||||
{...(isDesktop
|
||||
? {
|
||||
onOpenAutoFocus: (event: Event) => event.preventDefault(),
|
||||
}
|
||||
: {})}
|
||||
className={cn(
|
||||
isDesktop
|
||||
? "scrollbar-container max-h-[90dvh] overflow-y-auto sm:max-w-[75%]"
|
||||
: "flex max-h-[90dvh] flex-col overflow-hidden rounded-lg pb-4",
|
||||
)}
|
||||
>
|
||||
<div
|
||||
className={cn(
|
||||
!isDesktop &&
|
||||
"scrollbar-container flex min-h-0 w-full flex-col gap-4 overflow-y-auto overflow-x-hidden px-4",
|
||||
)}
|
||||
>
|
||||
<DialogHeader>
|
||||
<DialogTitle className="mt-4 md:mt-auto">
|
||||
{t("dialog.title")}
|
||||
</DialogTitle>
|
||||
<p className="my-1 text-sm text-muted-foreground">
|
||||
{t("description")}
|
||||
</p>
|
||||
</DialogHeader>
|
||||
<DialogDescription className="hidden" />
|
||||
<div
|
||||
className={cn(
|
||||
"flex gap-4",
|
||||
isDesktop ? "mt-4 flex-row" : "flex-col landscape:flex-row",
|
||||
)}
|
||||
>
|
||||
<div
|
||||
className={cn("flex flex-1 flex-col", !isDesktop && "min-w-0")}
|
||||
>
|
||||
{(!cameraLocked || !selectedCamera) && (
|
||||
<div className="flex items-end justify-between gap-2">
|
||||
<div className="mt-2 md:min-w-64">
|
||||
<div className="grid gap-2">
|
||||
<Label htmlFor="motion-search-camera">
|
||||
{t("dialog.cameraLabel")}
|
||||
</Label>
|
||||
<Select
|
||||
value={selectedCamera ?? undefined}
|
||||
onValueChange={(value) => onCameraSelect(value)}
|
||||
>
|
||||
<SelectTrigger id="motion-search-camera">
|
||||
<SelectValue placeholder={t("selectCamera")} />
|
||||
</SelectTrigger>
|
||||
<SelectContent>
|
||||
{cameras.map((camera) => (
|
||||
<SelectItem
|
||||
key={camera}
|
||||
value={camera}
|
||||
className="cursor-pointer hover:bg-accent hover:text-accent-foreground"
|
||||
>
|
||||
<CameraNameLabel camera={camera} />
|
||||
</SelectItem>
|
||||
))}
|
||||
</SelectContent>
|
||||
</Select>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
<TransformWrapper minScale={1.0} wheel={{ smoothStep: 0.005 }}>
|
||||
<div className="flex flex-col gap-2">
|
||||
<TransformComponent
|
||||
wrapperStyle={{
|
||||
width: "100%",
|
||||
height: isDesktop ? "100%" : "auto",
|
||||
}}
|
||||
contentStyle={{
|
||||
position: "relative",
|
||||
width: "100%",
|
||||
height: "100%",
|
||||
}}
|
||||
>
|
||||
<div
|
||||
ref={containerRef}
|
||||
className="relative flex w-full items-center justify-center overflow-hidden rounded-lg border bg-secondary"
|
||||
style={{ aspectRatio: "16 / 9" }}
|
||||
>
|
||||
{selectedCamera && cameraConfig && imageSize.width > 0 ? (
|
||||
<div
|
||||
className="relative"
|
||||
style={{
|
||||
width: imageSize.width,
|
||||
height: imageSize.height,
|
||||
}}
|
||||
>
|
||||
<img
|
||||
alt={t("dialog.previewAlt", {
|
||||
camera: selectedCamera,
|
||||
})}
|
||||
src={`${apiHost}api/${selectedCamera}/latest.jpg?h=500`}
|
||||
className="h-full w-full object-contain"
|
||||
onLoad={() => setImageLoaded(true)}
|
||||
/>
|
||||
{!imageLoaded && (
|
||||
<div className="absolute inset-0 flex items-center justify-center">
|
||||
<ActivityIndicator className="h-8 w-8" />
|
||||
</div>
|
||||
)}
|
||||
<MotionSearchROICanvas
|
||||
camera={selectedCamera}
|
||||
width={cameraConfig.detect.width}
|
||||
height={cameraConfig.detect.height}
|
||||
polygonPoints={polygonPoints}
|
||||
setPolygonPoints={setPolygonPoints}
|
||||
isDrawing={isDrawingROI}
|
||||
setIsDrawing={setIsDrawingROI}
|
||||
isInteractive={true}
|
||||
/>
|
||||
</div>
|
||||
) : (
|
||||
<div className="flex h-full w-full items-center justify-center text-sm text-muted-foreground">
|
||||
{t("selectCamera")}
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</TransformComponent>
|
||||
</div>
|
||||
</TransformWrapper>
|
||||
|
||||
{selectedCamera && (
|
||||
<div className="my-2 flex w-full flex-row justify-between rounded-md bg-background_alt p-2 text-sm">
|
||||
<div className="my-1 inline-flex items-center">
|
||||
{t("polygonControls.points", {
|
||||
count: polygonPoints.length,
|
||||
})}
|
||||
{polygonClosed && <FaCheckCircle className="ml-2 size-5" />}
|
||||
</div>
|
||||
<div className="flex flex-row justify-center gap-2">
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<Button
|
||||
variant="default"
|
||||
className="size-6 rounded-md p-1"
|
||||
aria-label={t("polygonControls.undo")}
|
||||
disabled={polygonPoints.length === 0 || isSearching}
|
||||
onClick={undoPolygonPoint}
|
||||
>
|
||||
<MdUndo className="text-secondary-foreground" />
|
||||
</Button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent>
|
||||
{t("polygonControls.undo")}
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<Button
|
||||
variant="default"
|
||||
className="size-6 rounded-md p-1"
|
||||
aria-label={t("polygonControls.reset")}
|
||||
disabled={polygonPoints.length === 0 || isSearching}
|
||||
onClick={resetPolygon}
|
||||
>
|
||||
<MdOutlineRestartAlt className="text-secondary-foreground" />
|
||||
</Button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent>
|
||||
{t("polygonControls.reset")}
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<div
|
||||
className={cn(
|
||||
"flex w-full flex-col gap-4 space-y-4 lg:w-[340px]",
|
||||
isMobile && "landscape:w-[40%] landscape:flex-shrink-0",
|
||||
)}
|
||||
>
|
||||
<div className="grid gap-3">
|
||||
<h4 className="mb-4 font-medium leading-none">
|
||||
{t("settings.title")}
|
||||
</h4>
|
||||
<div className="grid gap-4 space-y-2">
|
||||
<div className="grid gap-2">
|
||||
<Label htmlFor="threshold">{t("settings.threshold")}</Label>
|
||||
<div className="flex items-center gap-2">
|
||||
<Slider
|
||||
id="threshold"
|
||||
min={1}
|
||||
max={255}
|
||||
step={1}
|
||||
value={[threshold]}
|
||||
onValueChange={([value]) => setThreshold(value)}
|
||||
/>
|
||||
<span className="w-12 text-sm">{threshold}</span>
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{t("settings.thresholdDesc")}
|
||||
</p>
|
||||
</div>
|
||||
<div className="grid gap-2">
|
||||
<Label htmlFor="minArea">{t("settings.minArea")}</Label>
|
||||
<div className="flex items-center gap-2">
|
||||
<Slider
|
||||
id="minArea"
|
||||
min={1}
|
||||
max={100}
|
||||
step={1}
|
||||
value={[minArea]}
|
||||
onValueChange={([value]) => setMinArea(value)}
|
||||
/>
|
||||
<span className="w-12 text-sm">{minArea}%</span>
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{t("settings.minAreaDesc")}
|
||||
</p>
|
||||
</div>
|
||||
<div className="grid gap-2">
|
||||
<Label htmlFor="frameSkip">{t("settings.frameSkip")}</Label>
|
||||
<div className="flex items-center gap-2">
|
||||
<Slider
|
||||
id="frameSkip"
|
||||
min={1}
|
||||
max={60}
|
||||
step={1}
|
||||
value={[frameSkip]}
|
||||
onValueChange={([value]) => setFrameSkip(value)}
|
||||
/>
|
||||
<span className="w-12 text-sm">{frameSkip}</span>
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{t("settings.frameSkipDesc")}
|
||||
</p>
|
||||
</div>
|
||||
<div className="grid gap-2">
|
||||
<div className="flex items-center justify-between gap-2">
|
||||
<Label htmlFor="parallelMode">
|
||||
{t("settings.parallelMode")}
|
||||
</Label>
|
||||
<Switch
|
||||
id="parallelMode"
|
||||
checked={parallelMode}
|
||||
onCheckedChange={setParallelMode}
|
||||
/>
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{t("settings.parallelModeDesc")}
|
||||
</p>
|
||||
</div>
|
||||
<div className="grid gap-2">
|
||||
<Label htmlFor="maxResults">
|
||||
{t("settings.maxResults")}
|
||||
</Label>
|
||||
<div className="flex items-center gap-2">
|
||||
<Slider
|
||||
id="maxResults"
|
||||
min={1}
|
||||
max={200}
|
||||
step={1}
|
||||
value={[maxResults]}
|
||||
onValueChange={([value]) => setMaxResults(value)}
|
||||
/>
|
||||
<span className="w-12 text-sm">{maxResults}</span>
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{t("settings.maxResultsDesc")}
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<SearchRangeSelector
|
||||
range={searchRange}
|
||||
setRange={setSearchRange}
|
||||
defaultRange={defaultRange}
|
||||
timeFormat={config.ui?.time_format}
|
||||
timezone={timezone}
|
||||
/>
|
||||
|
||||
<Button
|
||||
className="w-full"
|
||||
variant="select"
|
||||
onClick={onStartSearch}
|
||||
disabled={!canStartSearch || isSearching}
|
||||
>
|
||||
{t("startSearch")}
|
||||
</Button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</Content>
|
||||
</Overlay>
|
||||
);
|
||||
}
|
||||
|
||||
type SearchRangeSelectorProps = {
|
||||
range?: TimeRange;
|
||||
setRange: React.Dispatch<React.SetStateAction<TimeRange | undefined>>;
|
||||
defaultRange: TimeRange;
|
||||
timeFormat?: "browser" | "12hour" | "24hour";
|
||||
timezone?: string;
|
||||
};
|
||||
|
||||
function SearchRangeSelector({
|
||||
range,
|
||||
setRange,
|
||||
defaultRange,
|
||||
timeFormat,
|
||||
timezone,
|
||||
}: SearchRangeSelectorProps) {
|
||||
const { t } = useTranslation(["views/motionSearch", "common"]);
|
||||
const [startOpen, setStartOpen] = useState(false);
|
||||
const [endOpen, setEndOpen] = useState(false);
|
||||
|
||||
const timezoneOffset = useMemo(
|
||||
() =>
|
||||
timezone ? Math.round(getUTCOffset(new Date(), timezone)) : undefined,
|
||||
[timezone],
|
||||
);
|
||||
const localTimeOffset = useMemo(
|
||||
() =>
|
||||
Math.round(
|
||||
getUTCOffset(
|
||||
new Date(),
|
||||
Intl.DateTimeFormat().resolvedOptions().timeZone,
|
||||
),
|
||||
),
|
||||
[],
|
||||
);
|
||||
|
||||
const startTime = useMemo(() => {
|
||||
let time = range?.after ?? defaultRange.after;
|
||||
|
||||
if (timezoneOffset !== undefined) {
|
||||
time = time + (timezoneOffset - localTimeOffset) * 60;
|
||||
}
|
||||
|
||||
return time;
|
||||
}, [range, defaultRange, timezoneOffset, localTimeOffset]);
|
||||
|
||||
const endTime = useMemo(() => {
|
||||
let time = range?.before ?? defaultRange.before;
|
||||
|
||||
if (timezoneOffset !== undefined) {
|
||||
time = time + (timezoneOffset - localTimeOffset) * 60;
|
||||
}
|
||||
|
||||
return time;
|
||||
}, [range, defaultRange, timezoneOffset, localTimeOffset]);
|
||||
|
||||
const formattedStart = useFormattedTimestamp(
|
||||
startTime,
|
||||
timeFormat === "24hour"
|
||||
? t("time.formattedTimestamp.24hour", { ns: "common" })
|
||||
: t("time.formattedTimestamp.12hour", { ns: "common" }),
|
||||
);
|
||||
const formattedEnd = useFormattedTimestamp(
|
||||
endTime,
|
||||
timeFormat === "24hour"
|
||||
? t("time.formattedTimestamp.24hour", { ns: "common" })
|
||||
: t("time.formattedTimestamp.12hour", { ns: "common" }),
|
||||
);
|
||||
|
||||
const startClock = useMemo(() => {
|
||||
const date = new Date(startTime * 1000);
|
||||
return `${date.getHours().toString().padStart(2, "0")}:${date
|
||||
.getMinutes()
|
||||
.toString()
|
||||
.padStart(2, "0")}:${date.getSeconds().toString().padStart(2, "0")}`;
|
||||
}, [startTime]);
|
||||
|
||||
const endClock = useMemo(() => {
|
||||
const date = new Date(endTime * 1000);
|
||||
return `${date.getHours().toString().padStart(2, "0")}:${date
|
||||
.getMinutes()
|
||||
.toString()
|
||||
.padStart(2, "0")}:${date.getSeconds().toString().padStart(2, "0")}`;
|
||||
}, [endTime]);
|
||||
|
||||
return (
|
||||
<div className="grid gap-2">
|
||||
<Label>{t("timeRange.title")}</Label>
|
||||
<div className="flex items-center rounded-lg bg-secondary px-2 py-1 text-secondary-foreground">
|
||||
<FaCalendarAlt />
|
||||
<div className="flex flex-wrap items-center">
|
||||
<Popover
|
||||
open={startOpen}
|
||||
onOpenChange={(open) => {
|
||||
if (!open) {
|
||||
setStartOpen(false);
|
||||
}
|
||||
}}
|
||||
modal={false}
|
||||
>
|
||||
<PopoverTrigger asChild>
|
||||
<Button
|
||||
className="text-primary"
|
||||
aria-label={t("timeRange.start")}
|
||||
variant={startOpen ? "select" : "default"}
|
||||
size="sm"
|
||||
onClick={() => {
|
||||
setStartOpen(true);
|
||||
setEndOpen(false);
|
||||
}}
|
||||
>
|
||||
{formattedStart}
|
||||
</Button>
|
||||
</PopoverTrigger>
|
||||
<PopoverContent
|
||||
disablePortal
|
||||
className="flex flex-col items-center"
|
||||
>
|
||||
<TimezoneAwareCalendar
|
||||
timezone={timezone}
|
||||
selectedDay={new Date(startTime * 1000)}
|
||||
onSelect={(day) => {
|
||||
if (!day) {
|
||||
return;
|
||||
}
|
||||
|
||||
setRange({
|
||||
before: endTime,
|
||||
after: day.getTime() / 1000 + 1,
|
||||
});
|
||||
}}
|
||||
/>
|
||||
<SelectSeparator className="bg-secondary" />
|
||||
<input
|
||||
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
|
||||
id="startTime"
|
||||
type="time"
|
||||
value={startClock}
|
||||
step={isIOS ? "60" : "1"}
|
||||
onChange={(e) => {
|
||||
const clock = e.target.value;
|
||||
const [hour, minute, second] = isIOS
|
||||
? [...clock.split(":"), "00"]
|
||||
: clock.split(":");
|
||||
|
||||
const start = new Date(startTime * 1000);
|
||||
start.setHours(
|
||||
parseInt(hour),
|
||||
parseInt(minute),
|
||||
parseInt(second ?? 0),
|
||||
0,
|
||||
);
|
||||
setRange({
|
||||
before: endTime,
|
||||
after: start.getTime() / 1000,
|
||||
});
|
||||
}}
|
||||
/>
|
||||
</PopoverContent>
|
||||
</Popover>
|
||||
<FaArrowRight className="size-4 text-primary" />
|
||||
<Popover
|
||||
open={endOpen}
|
||||
onOpenChange={(open) => {
|
||||
if (!open) {
|
||||
setEndOpen(false);
|
||||
}
|
||||
}}
|
||||
modal={false}
|
||||
>
|
||||
<PopoverTrigger asChild>
|
||||
<Button
|
||||
className="text-primary"
|
||||
aria-label={t("timeRange.end")}
|
||||
variant={endOpen ? "select" : "default"}
|
||||
size="sm"
|
||||
onClick={() => {
|
||||
setEndOpen(true);
|
||||
setStartOpen(false);
|
||||
}}
|
||||
>
|
||||
{formattedEnd}
|
||||
</Button>
|
||||
</PopoverTrigger>
|
||||
<PopoverContent
|
||||
disablePortal
|
||||
className="flex flex-col items-center"
|
||||
>
|
||||
<TimezoneAwareCalendar
|
||||
timezone={timezone}
|
||||
selectedDay={new Date(endTime * 1000)}
|
||||
onSelect={(day) => {
|
||||
if (!day) {
|
||||
return;
|
||||
}
|
||||
|
||||
setRange({
|
||||
after: startTime,
|
||||
before: day.getTime() / 1000,
|
||||
});
|
||||
}}
|
||||
/>
|
||||
<SelectSeparator className="bg-secondary" />
|
||||
<input
|
||||
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
|
||||
id="endTime"
|
||||
type="time"
|
||||
value={endClock}
|
||||
step={isIOS ? "60" : "1"}
|
||||
onChange={(e) => {
|
||||
const clock = e.target.value;
|
||||
const [hour, minute, second] = isIOS
|
||||
? [...clock.split(":"), "00"]
|
||||
: clock.split(":");
|
||||
|
||||
const end = new Date(endTime * 1000);
|
||||
end.setHours(
|
||||
parseInt(hour),
|
||||
parseInt(minute),
|
||||
parseInt(second ?? 0),
|
||||
0,
|
||||
);
|
||||
setRange({
|
||||
before: end.getTime() / 1000,
|
||||
after: startTime,
|
||||
});
|
||||
}}
|
||||
/>
|
||||
</PopoverContent>
|
||||
</Popover>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
@@ -1,398 +0,0 @@
import { useCallback, useMemo, useRef } from "react";
import { Stage, Layer, Line, Circle, Image } from "react-konva";
import Konva from "konva";
import type { KonvaEventObject } from "konva/lib/Node";
import { flattenPoints } from "@/utils/canvasUtil";
import { cn } from "@/lib/utils";
import { useResizeObserver } from "@/hooks/resize-observer";

type MotionSearchROICanvasProps = {
  camera: string;
  width: number;
  height: number;
  polygonPoints: number[][];
  setPolygonPoints: React.Dispatch<React.SetStateAction<number[][]>>;
  isDrawing: boolean;
  setIsDrawing: React.Dispatch<React.SetStateAction<boolean>>;
  isInteractive?: boolean;
  motionHeatmap?: Record<string, number> | null;
  showMotionHeatmap?: boolean;
};

export default function MotionSearchROICanvas({
  width,
  height,
  polygonPoints,
  setPolygonPoints,
  isDrawing,
  setIsDrawing,
  isInteractive = true,
  motionHeatmap,
  showMotionHeatmap = false,
}: MotionSearchROICanvasProps) {
  const containerRef = useRef<HTMLDivElement>(null);
  const stageRef = useRef<Konva.Stage>(null);
  const [{ width: containerWidth, height: containerHeight }] =
    useResizeObserver(containerRef);

  const stageSize = useMemo(
    () => ({
      width: containerWidth > 0 ? Math.ceil(containerWidth) : 0,
      height: containerHeight > 0 ? Math.ceil(containerHeight) : 0,
    }),
    [containerHeight, containerWidth],
  );

  const videoRect = useMemo(() => {
    const stageWidth = stageSize.width;
    const stageHeight = stageSize.height;
    const sourceWidth = width > 0 ? width : 1;
    const sourceHeight = height > 0 ? height : 1;

    if (stageWidth <= 0 || stageHeight <= 0) {
      return { x: 0, y: 0, width: 0, height: 0 };
    }

    const stageAspect = stageWidth / stageHeight;
    const sourceAspect = sourceWidth / sourceHeight;

    if (stageAspect > sourceAspect) {
      const fittedHeight = stageHeight;
      const fittedWidth = fittedHeight * sourceAspect;
      return {
        x: (stageWidth - fittedWidth) / 2,
        y: 0,
        width: fittedWidth,
        height: fittedHeight,
      };
    }

    const fittedWidth = stageWidth;
    const fittedHeight = fittedWidth / sourceAspect;
    return {
      x: 0,
      y: (stageHeight - fittedHeight) / 2,
      width: fittedWidth,
      height: fittedHeight,
    };
  }, [height, stageSize.height, stageSize.width, width]);

  // Convert normalized points to stage coordinates
  const scaledPoints = useMemo(() => {
    return polygonPoints.map((point) => [
      videoRect.x + point[0] * videoRect.width,
      videoRect.y + point[1] * videoRect.height,
    ]);
  }, [
    polygonPoints,
    videoRect.height,
    videoRect.width,
    videoRect.x,
    videoRect.y,
  ]);

  const flattenedPoints = useMemo(
    () => flattenPoints(scaledPoints),
    [scaledPoints],
  );

  const heatmapOverlayCanvas = useMemo(() => {
    if (
      !showMotionHeatmap ||
      !motionHeatmap ||
      videoRect.width === 0 ||
      videoRect.height === 0
    ) {
      return null;
    }

    const gridSize = 16;
    const heatmapLevels = Object.values(motionHeatmap)
      .map((value) => Number(value))
      .filter((value) => Number.isFinite(value) && value > 0);

    const maxHeatmapLevel =
      heatmapLevels.length > 0 ? Math.max(...heatmapLevels) : 0;

    if (maxHeatmapLevel <= 0) {
      return null;
    }

    const maskCanvas = document.createElement("canvas");
    maskCanvas.width = gridSize;
    maskCanvas.height = gridSize;

    const maskContext = maskCanvas.getContext("2d");
    if (!maskContext) {
      return null;
    }

    const imageData = maskContext.createImageData(gridSize, gridSize);
    const heatmapStops = [
      { t: 0, r: 0, g: 0, b: 255 },
      { t: 0.25, r: 0, g: 255, b: 255 },
      { t: 0.5, r: 0, g: 255, b: 0 },
      { t: 0.75, r: 255, g: 255, b: 0 },
      { t: 1, r: 255, g: 0, b: 0 },
    ];

    const getHeatmapColor = (value: number) => {
      const clampedValue = Math.min(1, Math.max(0, value));

      const upperIndex = heatmapStops.findIndex(
        (stop) => stop.t >= clampedValue,
      );
      if (upperIndex <= 0) {
        return heatmapStops[0];
      }

      const lower = heatmapStops[upperIndex - 1];
      const upper = heatmapStops[upperIndex];
      const range = upper.t - lower.t;
      const blend = range > 0 ? (clampedValue - lower.t) / range : 0;

      return {
        r: Math.round(lower.r + (upper.r - lower.r) * blend),
        g: Math.round(lower.g + (upper.g - lower.g) * blend),
        b: Math.round(lower.b + (upper.b - lower.b) * blend),
      };
    };

    for (let index = 0; index < gridSize ** 2; index++) {
      const level = Number(motionHeatmap[index.toString()] ?? 0);
      const normalizedLevel =
        level > 0 ? Math.min(1, Math.max(0, level / maxHeatmapLevel)) : 0;
      const alpha =
        level > 0
          ? Math.min(0.95, Math.max(0.1, 0.15 + normalizedLevel * 0.5))
          : 0;
      const color = getHeatmapColor(normalizedLevel);

      const pixelOffset = index * 4;
      imageData.data[pixelOffset] = color.r;
      imageData.data[pixelOffset + 1] = color.g;
      imageData.data[pixelOffset + 2] = color.b;
      imageData.data[pixelOffset + 3] = Math.round(alpha * 255);
    }

    maskContext.putImageData(imageData, 0, 0);

    return maskCanvas;
  }, [motionHeatmap, showMotionHeatmap, videoRect.height, videoRect.width]);

  // Handle mouse click to add point
  const handleMouseDown = useCallback(
    (e: KonvaEventObject<MouseEvent | TouchEvent>) => {
      if (!isInteractive || !isDrawing) return;
      if (videoRect.width <= 0 || videoRect.height <= 0) return;

      const stage = e.target.getStage();
      if (!stage) return;

      const mousePos = stage.getPointerPosition();
      if (!mousePos) return;

      const intersection = stage.getIntersection(mousePos);

      // If clicking on first point and we have at least 3 points, close the polygon
      if (polygonPoints.length >= 3 && intersection?.name() === "point-0") {
        setIsDrawing(false);
        return;
      }

      // Only add point if not clicking on an existing point
      if (intersection?.getClassName() !== "Circle") {
        const clampedX = Math.min(
          Math.max(mousePos.x, videoRect.x),
          videoRect.x + videoRect.width,
        );
        const clampedY = Math.min(
          Math.max(mousePos.y, videoRect.y),
          videoRect.y + videoRect.height,
        );

        // Convert to normalized coordinates (0-1)
        const normalizedX = (clampedX - videoRect.x) / videoRect.width;
        const normalizedY = (clampedY - videoRect.y) / videoRect.height;

        setPolygonPoints([...polygonPoints, [normalizedX, normalizedY]]);
      }
    },
    [
      isDrawing,
      polygonPoints,
      setPolygonPoints,
      setIsDrawing,
      isInteractive,
      videoRect.height,
      videoRect.width,
      videoRect.x,
      videoRect.y,
    ],
  );

  // Handle point drag
  const handlePointDragMove = useCallback(
    (e: KonvaEventObject<MouseEvent | TouchEvent>, index: number) => {
      if (!isInteractive) return;
      const stage = e.target.getStage();
      if (!stage) return;

      const pos = { x: e.target.x(), y: e.target.y() };

      // Constrain to fitted video boundaries
      pos.x = Math.max(
        videoRect.x,
        Math.min(pos.x, videoRect.x + videoRect.width),
      );
      pos.y = Math.max(
        videoRect.y,
        Math.min(pos.y, videoRect.y + videoRect.height),
      );

      // Convert to normalized coordinates
      const normalizedX = (pos.x - videoRect.x) / videoRect.width;
      const normalizedY = (pos.y - videoRect.y) / videoRect.height;

      const newPoints = [...polygonPoints];
      newPoints[index] = [normalizedX, normalizedY];
      setPolygonPoints(newPoints);
    },
    [
      polygonPoints,
      setPolygonPoints,
      isInteractive,
      videoRect.height,
      videoRect.width,
      videoRect.x,
      videoRect.y,
    ],
  );

  // Handle right-click to delete point
  const handleContextMenu = useCallback(
    (e: KonvaEventObject<MouseEvent>, index: number) => {
      if (!isInteractive) return;
      e.evt.preventDefault();

      if (polygonPoints.length <= 3 && !isDrawing) {
        // Don't delete if we have a closed polygon with minimum points
        return;
      }

      const newPoints = polygonPoints.filter((_, i) => i !== index);
      setPolygonPoints(newPoints);

      // If we deleted enough points, go back to drawing mode
      if (newPoints.length < 3) {
        setIsDrawing(true);
      }
    },
    [polygonPoints, isDrawing, setPolygonPoints, setIsDrawing, isInteractive],
  );

  // Handle mouse hover on first point
  const handleMouseOverPoint = useCallback(
    (e: KonvaEventObject<MouseEvent | TouchEvent>, index: number) => {
      if (!isInteractive) return;
      if (!isDrawing || polygonPoints.length < 3 || index !== 0) return;
      e.target.scale({ x: 2, y: 2 });
    },
    [isDrawing, isInteractive, polygonPoints.length],
  );

  const handleMouseOutPoint = useCallback(
    (e: KonvaEventObject<MouseEvent | TouchEvent>, index: number) => {
      if (!isInteractive) return;
      if (index === 0) {
        e.target.scale({ x: 1, y: 1 });
      }
    },
    [isInteractive],
  );

  const vertexRadius = 6;
  const polygonColorString = "rgba(66, 135, 245, 0.8)";
  const polygonFillColor = "rgba(66, 135, 245, 0.2)";

  return (
    <div
      ref={containerRef}
      className={cn(
        "absolute inset-0 z-10",
        isInteractive ? "pointer-events-auto" : "pointer-events-none",
      )}
      style={{ cursor: isDrawing ? "crosshair" : "default" }}
    >
      {stageSize.width > 0 && stageSize.height > 0 && (
        <Stage
          ref={stageRef}
          width={stageSize.width}
          height={stageSize.height}
          onMouseDown={handleMouseDown}
          onTouchStart={handleMouseDown}
          onContextMenu={(e) => e.evt.preventDefault()}
          className="absolute inset-0"
        >
          <Layer>
            {/* Segment heatmap overlay */}
            {heatmapOverlayCanvas && (
              <Image
                image={heatmapOverlayCanvas}
                x={videoRect.x}
                y={videoRect.y}
                width={videoRect.width}
                height={videoRect.height}
                listening={false}
              />
            )}

            {/* Polygon outline */}
            {scaledPoints.length > 0 && (
              <Line
                points={flattenedPoints}
                stroke={polygonColorString}
                strokeWidth={2}
                closed={!isDrawing && scaledPoints.length >= 3}
                fill={
                  !isDrawing && scaledPoints.length >= 3
                    ? polygonFillColor
                    : undefined
                }
              />
            )}

            {/* Draw line from last point to cursor when drawing */}
            {isDrawing && scaledPoints.length > 0 && (
              <Line
                points={flattenedPoints}
                stroke={polygonColorString}
                strokeWidth={2}
                dash={[5, 5]}
              />
            )}

            {/* Vertex points */}
            {scaledPoints.map((point, index) => (
              <Circle
                key={index}
                name={`point-${index}`}
                x={point[0]}
                y={point[1]}
                radius={vertexRadius}
                fill={polygonColorString}
                stroke="white"
                strokeWidth={2}
                draggable={!isDrawing && isInteractive}
                onDragMove={(e) => handlePointDragMove(e, index)}
                onMouseOver={(e) => handleMouseOverPoint(e, index)}
                onMouseOut={(e) => handleMouseOutPoint(e, index)}
                onContextMenu={(e) => handleContextMenu(e, index)}
              />
            ))}
          </Layer>
        </Stage>
      )}
    </div>
  );
}
File diff suppressed because it is too large
@@ -15,7 +15,7 @@ import { cn } from "@/lib/utils";
 import { formatUnixTimestampToDateTime } from "@/utils/dateUtil";
 import { MediaSyncStats } from "@/types/ws";
 
-export default function MediaSyncSettingsView() {
+export default function MaintenanceSettingsView() {
   const { t } = useTranslation("views/settings");
   const [selectedMediaTypes, setSelectedMediaTypes] = useState<string[]>([
     "all",
@@ -103,7 +103,7 @@ export default function MediaSyncSettingsView() {
|
||||
<div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
|
||||
<div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
|
||||
<div className="col-span-1">
|
||||
<Heading as="h4" className="mb-2 hidden md:block">
|
||||
<Heading as="h4" className="mb-2">
|
||||
{t("maintenance.sync.title")}
|
||||
</Heading>
|
||||
|
||||
@@ -1,124 +0,0 @@
import Heading from "@/components/ui/heading";
import { Button, buttonVariants } from "@/components/ui/button";
import {
  AlertDialog,
  AlertDialogAction,
  AlertDialogCancel,
  AlertDialogContent,
  AlertDialogDescription,
  AlertDialogFooter,
  AlertDialogHeader,
  AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import { Toaster } from "@/components/ui/sonner";
import { useCallback, useContext, useState } from "react";
import { useTranslation } from "react-i18next";
import axios from "axios";
import { toast } from "sonner";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import { cn } from "@/lib/utils";

type RegionGridSettingsViewProps = {
  selectedCamera: string;
};

export default function RegionGridSettingsView({
  selectedCamera,
}: RegionGridSettingsViewProps) {
  const { t } = useTranslation("views/settings");
  const { addMessage } = useContext(StatusBarMessagesContext)!;
  const [isConfirmOpen, setIsConfirmOpen] = useState(false);
  const [isClearing, setIsClearing] = useState(false);
  const [imageKey, setImageKey] = useState(0);

  const handleClear = useCallback(async () => {
    setIsClearing(true);

    try {
      await axios.delete(`${selectedCamera}/region_grid`);
      toast.success(t("maintenance.regionGrid.clearSuccess"), {
        position: "top-center",
      });
      setImageKey((prev) => prev + 1);
      addMessage(
        "region_grid_restart",
        t("maintenance.regionGrid.restartRequired"),
        undefined,
        "region_grid_settings",
      );
    } catch {
      toast.error(t("maintenance.regionGrid.clearError"), {
        position: "top-center",
      });
    } finally {
      setIsClearing(false);
      setIsConfirmOpen(false);
    }
  }, [selectedCamera, t, addMessage]);

  return (
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
          <Heading as="h4" className="mb-2 hidden md:block">
            {t("maintenance.regionGrid.title")}
          </Heading>

          <div className="max-w-6xl">
            <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-muted-foreground">
              <p>{t("maintenance.regionGrid.desc")}</p>
            </div>
          </div>

          <div className="mb-4 max-w-5xl rounded-lg border border-secondary">
            <img
              key={imageKey}
              src={`api/${selectedCamera}/grid.jpg?cache=${imageKey}`}
              alt={t("maintenance.regionGrid.title")}
              className="w-full"
            />
          </div>

          <div className="flex w-full flex-row items-center gap-2 py-2 md:w-[50%]">
            <Button
              onClick={() => setIsConfirmOpen(true)}
              disabled={isClearing}
              variant="destructive"
              className="flex flex-1 text-white md:max-w-sm"
            >
              {t("maintenance.regionGrid.clear")}
            </Button>
          </div>
        </div>
      </div>

      <AlertDialog open={isConfirmOpen} onOpenChange={setIsConfirmOpen}>
        <AlertDialogContent>
          <AlertDialogHeader>
            <AlertDialogTitle>
              {t("maintenance.regionGrid.clearConfirmTitle")}
            </AlertDialogTitle>
            <AlertDialogDescription>
              {t("maintenance.regionGrid.clearConfirmDesc")}
            </AlertDialogDescription>
          </AlertDialogHeader>
          <AlertDialogFooter>
            <AlertDialogCancel>
              {t("button.cancel", { ns: "common" })}
            </AlertDialogCancel>
            <AlertDialogAction
              className={cn(
                buttonVariants({ variant: "destructive" }),
                "text-white",
              )}
              onClick={handleClear}
            >
              {t("maintenance.regionGrid.clear")}
            </AlertDialogAction>
          </AlertDialogFooter>
        </AlertDialogContent>
      </AlertDialog>
    </>
  );
}