mirror of https://github.com/blakeblackshear/frigate.git
synced 2026-05-10 07:25:27 +03:00
Compare commits
No commits in common. "ec7040bed5f668a81a4e99d00602ff9c3a2b09c0" and "74c89beaf99ae4b3bcf0b26f4823c9a6ce71f762" have entirely different histories.
@ -616,12 +616,13 @@ record:
#       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion

# Optional: Configuration for the snapshots written to the clips directory for each tracked object
# Timestamp, bounding_box, crop and height settings are applied by default to API requests for snapshots.
# Optional: Configuration for the jpg snapshots written to the clips directory for each tracked object
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing snapshot images to /media/frigate/clips (default: shown below)
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  enabled: False
  # Optional: save a clean copy of the snapshot image (default: shown below)
  clean_copy: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: False
  # Optional: draw bounding box on the snapshots (default: shown below)
@ -639,8 +640,8 @@ snapshots:
    # Optional: Per object retention days
    objects:
      person: 15
  # Optional: quality of the encoded snapshot image, 0-100 (default: shown below)
  quality: 60
  # Optional: quality of the encoded jpeg, 0-100 (default: shown below)
  quality: 70

# Optional: Configuration for semantic search capability
semantic_search:
@ -3,7 +3,7 @@ id: snapshots
title: Snapshots
---

Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `<camera>-<id>-clean.webp`. They are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx)
Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `<camera>-<id>.jpg`. They are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx)

Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service.

@ -13,19 +13,21 @@ Snapshots sent via MQTT are configured in the [config file](/configuration) unde

## Frame Selection

Frigate does not save every frame. It picks a single "best" frame for each tracked object based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. That best frame is written to disk once tracking ends.
Frigate does not save every frame — it picks a single "best" frame for each tracked object and uses it for both the snapshot and clean copy. As the object is tracked across frames, Frigate continuously evaluates whether the current frame is better than the previous best based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. The snapshot is written to disk once tracking ends using whichever frame was determined to be the best.

MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`.
## Rendering
## Clean Copy

Frigate stores a single clean snapshot on disk:
Frigate can produce up to two snapshot files per event, each used in different places:

| API / Use | Result |
| ---------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| Stored file | `<camera>-<id>-clean.webp`, always unannotated |
| `/api/events/<id>/snapshot.jpg` | Starts from the camera's `snapshots` defaults, then applies any query param overrides at request time |
| `/api/events/<id>/snapshot-clean.webp` | Returns the same stored snapshot without annotations |
| [Frigate+](/plus/first_model) submission | Uses the same stored clean snapshot |

| Version | File | Annotations | Used by |
| --- | --- | --- | --- |
| **Regular snapshot** | `<camera>-<id>.jpg` | Respects your `timestamp`, `bounding_box`, `crop`, and `height` settings | API (`/api/events/<id>/snapshot.jpg`), MQTT (`<camera>/<label>/snapshot`), Explore pane in the UI |
| **Clean copy** | `<camera>-<id>-clean.webp` | Always unannotated — no bounding box, no timestamp, no crop, full resolution | API (`/api/events/<id>/snapshot-clean.webp`), [Frigate+](/plus/first_model) submissions, "Download Clean Snapshot" in the UI |

MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the stored event snapshot.
MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the clean copy.

The clean copy is required for submitting events to [Frigate+](/plus/first_model) — if you plan to use Frigate+, keep `clean_copy` enabled regardless of your other snapshot settings.

If you are not using Frigate+ and `timestamp`, `bounding_box`, and `crop` are all disabled, the regular snapshot is already effectively clean, so `clean_copy` provides no benefit and only uses additional disk space. You can safely set `clean_copy: False` in this case.
@ -25,9 +25,10 @@ Yes. Subscriptions to Frigate+ provide access to the infrastructure used to trai

### Why can't I submit images to Frigate+?

If you've configured your API key and the Frigate+ Settings page in the UI shows that the key is active, you need to ensure that snapshots are enabled for the cameras you'd like to submit images for.
If you've configured your API key and the Frigate+ Settings page in the UI shows that the key is active, you need to ensure that you've enabled both snapshots and `clean_copy` snapshots for the cameras you'd like to submit images for. Note that `clean_copy` is enabled by default when snapshots are enabled.

```yaml
snapshots:
  enabled: true
  clean_copy: true
```
6 docs/static/frigate-api.yaml vendored
@ -4929,7 +4929,10 @@ paths:
      tags:
        - Media
      summary: Event Snapshot
      description: Returns a snapshot image for the specified object id.
      description: >-
        Returns a snapshot image for the specified object id. NOTE: The query
        params only take effect while the event is in-progress. Once the event
        has ended the snapshot configuration is used.
      operationId: event_snapshot_events__event_id__snapshot_jpg_get
      parameters:
        - name: event_id
@ -4986,6 +4989,7 @@ paths:
            anyOf:
              - type: integer
              - type: "null"
            default: 70
            title: Quality
      responses:
        "200":
@ -35,7 +35,7 @@ class MediaEventsSnapshotQueryParams(BaseModel):
    bbox: Optional[int] = None
    crop: Optional[int] = None
    height: Optional[int] = None
    quality: Optional[int] = None
    quality: Optional[int] = 70


class MediaMjpegFeedQueryParams(BaseModel):
@ -13,6 +13,7 @@ from pathlib import Path
from typing import List
from urllib.parse import unquote

import cv2
import numpy as np
from fastapi import APIRouter, Request
from fastapi.params import Depends
@ -61,7 +62,7 @@ from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
from frigate.track.object_processing import TrackedObject
from frigate.util.file import get_event_thumbnail_bytes, load_event_snapshot_image
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.time import get_dst_transitions, get_tz_modifiers

logger = logging.getLogger(__name__)
@ -1081,8 +1082,30 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
            content=({"success": False, "message": message}), status_code=400
        )

    # load clean.webp or clean.png (legacy)
    try:
        image, is_clean_snapshot = load_event_snapshot_image(event, clean_only=True)
        filename_webp = f"{event.camera}-{event.id}-clean.webp"
        filename_png = f"{event.camera}-{event.id}-clean.png"

        image_path = None
        if os.path.exists(os.path.join(CLIPS_DIR, filename_webp)):
            image_path = os.path.join(CLIPS_DIR, filename_webp)
        elif os.path.exists(os.path.join(CLIPS_DIR, filename_png)):
            image_path = os.path.join(CLIPS_DIR, filename_png)

        if image_path is None:
            logger.error(f"Unable to find clean snapshot for event: {event.id}")
            return JSONResponse(
                content=(
                    {
                        "success": False,
                        "message": "Unable to find clean snapshot for event",
                    }
                ),
                status_code=400,
            )

        image = cv2.imread(image_path)
    except Exception:
        logger.error(f"Unable to load clean snapshot for event: {event.id}")
        return JSONResponse(
@ -1092,14 +1115,11 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
            status_code=400,
        )

    if not is_clean_snapshot or image is None or image.size == 0:
        logger.error(f"Unable to find clean snapshot for event: {event.id}")
    if image is None or image.size == 0:
        logger.error(f"Unable to load clean snapshot for event: {event.id}")
        return JSONResponse(
            content=(
                {
                    "success": False,
                    "message": "Unable to find clean snapshot for event",
                }
                {"success": False, "message": "Unable to load clean snapshot for event"}
            ),
            status_code=400,
        )
@ -35,9 +35,9 @@ from frigate.api.defs.query.media_query_parameters import (
from frigate.api.defs.tags import Tags
from frigate.camera.state import CameraState
from frigate.config import FrigateConfig
from frigate.config.camera.snapshots import SnapshotsConfig
from frigate.const import (
    CACHE_DIR,
    CLIPS_DIR,
    INSTALL_DIR,
    MAX_SEGMENT_DURATION,
    PREVIEW_FRAME_TYPE,
@ -45,13 +45,8 @@ from frigate.const import (
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.output.preview import get_most_recent_preview_frame
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.file import (
    get_event_snapshot_bytes,
    get_event_snapshot_path,
    get_event_thumbnail_bytes,
    load_event_snapshot_image,
)
from frigate.util.image import get_image_from_recording, get_image_quality_params
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording

logger = logging.getLogger(__name__)
@ -115,24 +110,6 @@ def imagestream(
    )


def _resolve_snapshot_settings(
    snapshot_config: SnapshotsConfig, params: MediaEventsSnapshotQueryParams
) -> dict[str, Any]:
    return {
        "timestamp": snapshot_config.timestamp
        if params.timestamp is None
        else bool(params.timestamp),
        "bounding_box": snapshot_config.bounding_box
        if params.bbox is None
        else bool(params.bbox),
        "crop": snapshot_config.crop if params.crop is None else bool(params.crop),
        "height": snapshot_config.height if params.height is None else params.height,
        "quality": snapshot_config.quality
        if params.quality is None
        else params.quality,
    }


@router.get("/{camera_name}/ptz/info", dependencies=[Depends(require_camera_access)])
async def camera_ptz_info(request: Request, camera_name: str):
    if camera_name in request.app.frigate_config.cameras:
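The precedence implemented by `_resolve_snapshot_settings` above reduces to one rule: use the camera's configured value unless the request supplies a query parameter. The `SimpleNamespace` objects below are illustrative stand-ins for the Pydantic models:

```python
from types import SimpleNamespace


def resolve_setting(config_value, param_value):
    """A query param, when present, overrides the configured default."""
    return config_value if param_value is None else bool(param_value)


snapshot_config = SimpleNamespace(timestamp=False, bounding_box=True)
params = SimpleNamespace(timestamp=1, bbox=None)  # request sent ?timestamp=1 only

resolved = {
    "timestamp": resolve_setting(snapshot_config.timestamp, params.timestamp),
    "bounding_box": resolve_setting(snapshot_config.bounding_box, params.bbox),
}
```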
@ -170,7 +147,14 @@ async def latest_frame(
        "paths": params.paths,
        "regions": params.regions,
    }
    quality_params = get_image_quality_params(extension.value, params.quality)
    quality = params.quality

    if extension == Extension.png:
        quality_params = None
    elif extension == Extension.webp:
        quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
    else:  # jpg or jpeg
        quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]

    if camera_name in request.app.frigate_config.cameras:
        frame = frame_processor.get_current_frame(camera_name, draw_options)
@ -745,7 +729,7 @@ async def vod_clip(

@router.get(
    "/events/{event_id}/snapshot.jpg",
    description="Returns a snapshot image for the specified object id.",
    description="Returns a snapshot image for the specified object id. NOTE: The query params only take effect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
)
async def event_snapshot(
    request: Request,
@ -764,22 +748,11 @@ async def event_snapshot(
            content={"success": False, "message": "Snapshot not available"},
            status_code=404,
        )
    snapshot_settings = _resolve_snapshot_settings(
        request.app.frigate_config.cameras[event.camera].snapshots, params
    )
    jpg_bytes, frame_time = get_event_snapshot_bytes(
        event,
        ext="jpg",
        timestamp=snapshot_settings["timestamp"],
        bounding_box=snapshot_settings["bounding_box"],
        crop=snapshot_settings["crop"],
        height=snapshot_settings["height"],
        quality=snapshot_settings["quality"],
        timestamp_style=request.app.frigate_config.cameras[
            event.camera
        ].timestamp_style,
        colormap=request.app.frigate_config.model.colormap,
    )
    # read snapshot from disk
    with open(
        os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg"), "rb"
    ) as image_file:
        jpg_bytes = image_file.read()
except DoesNotExist:
    # see if the object is currently being tracked
    try:
@ -790,16 +763,13 @@ async def event_snapshot(
        if event_id in camera_state.tracked_objects:
            tracked_obj = camera_state.tracked_objects.get(event_id)
            if tracked_obj is not None:
                snapshot_settings = _resolve_snapshot_settings(
                    camera_state.camera_config.snapshots, params
                )
                jpg_bytes, frame_time = tracked_obj.get_img_bytes(
                    ext="jpg",
                    timestamp=snapshot_settings["timestamp"],
                    bounding_box=snapshot_settings["bounding_box"],
                    crop=snapshot_settings["crop"],
                    height=snapshot_settings["height"],
                    quality=snapshot_settings["quality"],
                    timestamp=params.timestamp,
                    bounding_box=params.bbox,
                    crop=params.crop,
                    height=params.height,
                    quality=params.quality,
                )
        await require_camera_access(camera_state.name, request=request)
    except Exception:
@ -895,11 +865,13 @@ async def event_thumbnail(
            (0, 0, 0),
        )

    _, img = cv2.imencode(
        f".{extension.value}",
        thumbnail,
        get_image_quality_params(extension.value, None),
    )
    quality_params = None
    if extension in (Extension.jpg, Extension.jpeg):
        quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
    elif extension == Extension.webp:
        quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]

    _, img = cv2.imencode(f".{extension.value}", thumbnail, quality_params)
    thumbnail_bytes = img.tobytes()

    return Response(
@ -1057,16 +1029,14 @@ def clear_region_grid(request: Request, camera_name: str):
)
def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
    webp_bytes = None
    event_complete = False
    try:
        event = Event.get(Event.id == event_id)
        event_complete = event.end_time is not None
        snapshot_config = request.app.frigate_config.cameras[event.camera].snapshots
        if not (snapshot_config.enabled and event.has_snapshot):
            return JSONResponse(
                content={
                    "success": False,
                    "message": "Snapshots must be enabled in the config",
                    "message": "Snapshots and clean_copy must be enabled in the config",
                },
                status_code=404,
            )
@ -1098,10 +1068,54 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
            )
        if webp_bytes is None:
            try:
                image_path, is_clean_snapshot = get_event_snapshot_path(
                    event, clean_only=True
                # webp
                clean_snapshot_path_webp = os.path.join(
                    CLIPS_DIR, f"{event.camera}-{event.id}-clean.webp"
                )
                if not is_clean_snapshot or image_path is None:
                # png (legacy)
                clean_snapshot_path_png = os.path.join(
                    CLIPS_DIR, f"{event.camera}-{event.id}-clean.png"
                )

                if os.path.exists(clean_snapshot_path_webp):
                    with open(clean_snapshot_path_webp, "rb") as image_file:
                        webp_bytes = image_file.read()
                elif os.path.exists(clean_snapshot_path_png):
                    # convert png to webp and save for future use
                    png_image = cv2.imread(clean_snapshot_path_png, cv2.IMREAD_UNCHANGED)
                    if png_image is None:
                        return JSONResponse(
                            content={
                                "success": False,
                                "message": "Invalid png snapshot",
                            },
                            status_code=400,
                        )

                    ret, webp_data = cv2.imencode(
                        ".webp", png_image, [int(cv2.IMWRITE_WEBP_QUALITY), 60]
                    )
                    if not ret:
                        return JSONResponse(
                            content={
                                "success": False,
                                "message": "Unable to convert png to webp",
                            },
                            status_code=400,
                        )

                    webp_bytes = webp_data.tobytes()

                    # save the converted webp for future requests
                    try:
                        with open(clean_snapshot_path_webp, "wb") as f:
                            f.write(webp_bytes)
                    except Exception as e:
                        logger.warning(
                            f"Failed to save converted webp for event {event.id}: {e}"
                        )
                        # continue since we now have the data to return
                else:
                    return JSONResponse(
                        content={
                            "success": False,
@ -1109,34 +1123,6 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
                        },
                        status_code=404,
                    )

                if image_path.endswith(".webp"):
                    with open(image_path, "rb") as image_file:
                        webp_bytes = image_file.read()
                else:
                    image = load_event_snapshot_image(event, clean_only=True)[0]
                    if image is None:
                        return JSONResponse(
                            content={
                                "success": False,
                                "message": "Unable to load clean snapshot for event",
                            },
                            status_code=400,
                        )

                    ret, webp_data = cv2.imencode(
                        ".webp", image, get_image_quality_params("webp", None)
                    )
                    if not ret:
                        return JSONResponse(
                            content={
                                "success": False,
                                "message": "Unable to convert snapshot to webp",
                            },
                            status_code=400,
                        )

                    webp_bytes = webp_data.tobytes()
    except Exception:
        logger.error(f"Unable to load clean snapshot for event: {event.id}")
        return JSONResponse(
@ -1149,7 +1135,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
        )

    headers = {
        "Content-Type": "image/webp",
        "Cache-Control": "private, max-age=31536000" if event_complete else "no-cache",
        "Cache-Control": "private, max-age=31536000",
    }

    if download:
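The `Cache-Control` change at the end of the hunk hinges on event completeness: an in-progress event may still get a better snapshot, so only completed events are safe to cache long-term. A sketch of that decision, not code from the diff:

```python
def clean_snapshot_headers(event_complete: bool) -> dict[str, str]:
    """Completed events get long-lived private caching; in-progress
    events are served with no-cache so clients re-fetch the latest frame."""
    return {
        "Content-Type": "image/webp",
        "Cache-Control": "private, max-age=31536000" if event_complete else "no-cache",
    }
```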
@ -532,6 +532,8 @@ class CameraState:
    ) -> None:
        img_frame = frame if frame is not None else self.get_current_frame()

        # write clean snapshot if enabled
        if self.camera_config.snapshots.clean_copy:
            ret, webp = cv2.imencode(
                ".webp", img_frame, [int(cv2.IMWRITE_WEBP_QUALITY), 80]
            )
@ -546,6 +548,33 @@ class CameraState:
        ) as p:
            p.write(webp.tobytes())

        # write jpg snapshot with optional annotations
        if draw.get("boxes") and isinstance(draw.get("boxes"), list):
            for box in draw.get("boxes"):
                x = int(box["box"][0] * self.camera_config.detect.width)
                y = int(box["box"][1] * self.camera_config.detect.height)
                width = int(box["box"][2] * self.camera_config.detect.width)
                height = int(box["box"][3] * self.camera_config.detect.height)

                draw_box_with_label(
                    img_frame,
                    x,
                    y,
                    x + width,
                    y + height,
                    label,
                    f"{box.get('score', '-')}% {int(width * height)}",
                    thickness=2,
                    color=box.get("color", (255, 0, 0)),
                )

        ret, jpg = cv2.imencode(".jpg", img_frame)
        with open(
            os.path.join(CLIPS_DIR, f"{self.camera_config.name}-{event_id}.jpg"),
            "wb",
        ) as j:
            j.write(jpg.tobytes())

        # create thumbnail with max height of 175 and save
        width = int(175 * img_frame.shape[1] / img_frame.shape[0])
        thumb = cv2.resize(img_frame, dsize=(width, 175), interpolation=cv2.INTER_AREA)
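The box arithmetic in the hunk above converts boxes stored in 0-1 coordinates (relative to the detect resolution) into pixel corners for drawing. A standalone sketch of that conversion, with a made-up example resolution:

```python
def denormalize_box(
    box: tuple[float, float, float, float], frame_width: int, frame_height: int
) -> tuple[int, int, int, int]:
    """Convert an (x, y, w, h) box in relative 0-1 coordinates into
    absolute (x1, y1, x2, y2) pixel corners for drawing."""
    x = int(box[0] * frame_width)
    y = int(box[1] * frame_height)
    w = int(box[2] * frame_width)
    h = int(box[3] * frame_height)
    return (x, y, x + w, y + h)


# a box covering the center quarter of a 1280x720 detect frame
corners = denormalize_box((0.25, 0.25, 0.5, 0.5), 1280, 720)
```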
@ -141,7 +141,7 @@ class CameraConfig(FrigateBaseModel):
    snapshots: SnapshotsConfig = Field(
        default_factory=SnapshotsConfig,
        title="Snapshots",
        description="Settings for API-generated snapshots of tracked objects for this camera.",
        description="Settings for saved JPEG snapshots of tracked objects for this camera.",
    )
    timestamp_style: TimestampStyleConfig = Field(
        default_factory=TimestampStyleConfig,
@ -32,20 +32,25 @@ class SnapshotsConfig(FrigateBaseModel):
        title="Enable snapshots",
        description="Enable or disable saving snapshots for all cameras; can be overridden per-camera.",
    )
    clean_copy: bool = Field(
        default=True,
        title="Save clean copy",
        description="Save an unannotated clean copy of snapshots in addition to annotated ones.",
    )
    timestamp: bool = Field(
        default=False,
        title="Timestamp overlay",
        description="Overlay a timestamp on snapshots from API.",
        description="Overlay a timestamp on saved snapshots.",
    )
    bounding_box: bool = Field(
        default=True,
        title="Bounding box overlay",
        description="Draw bounding boxes for tracked objects on snapshots from API.",
        description="Draw bounding boxes for tracked objects on saved snapshots.",
    )
    crop: bool = Field(
        default=False,
        title="Crop snapshot",
        description="Crop snapshots from API to the detected object's bounding box.",
        description="Crop saved snapshots to the detected object's bounding box.",
    )
    required_zones: list[str] = Field(
        default_factory=list,
@ -55,17 +60,17 @@ class SnapshotsConfig(FrigateBaseModel):
    height: Optional[int] = Field(
        default=None,
        title="Snapshot height",
        description="Height (pixels) to resize snapshots from API to; leave empty to preserve original size.",
        description="Height (pixels) to resize saved snapshots to; leave empty to preserve original size.",
    )
    retain: RetainConfig = Field(
        default_factory=RetainConfig,
        title="Snapshot retention",
        description="Retention settings for snapshots including default days and per-object overrides.",
        description="Retention settings for saved snapshots including default days and per-object overrides.",
    )
    quality: int = Field(
        default=60,
        title="Snapshot quality",
        description="Encode quality for saved snapshots (0-100).",
        default=70,
        title="JPEG quality",
        description="JPEG encode quality for saved snapshots (0-100).",
        ge=0,
        le=100,
    )
@ -498,7 +498,7 @@ class FrigateConfig(FrigateBaseModel):
    snapshots: SnapshotsConfig = Field(
        default_factory=SnapshotsConfig,
        title="Snapshots",
        description="Settings for API-generated snapshots of tracked objects for all cameras; can be overridden per-camera.",
        description="Settings for saved JPEG snapshots of tracked objects for all cameras; can be overridden per-camera.",
    )
    timestamp_style: TimestampStyleConfig = Field(
        default_factory=TimestampStyleConfig,
@ -933,6 +933,11 @@ class FrigateConfig(FrigateBaseModel):
                f"Camera {camera.name} has audio transcription enabled, but audio detection is not enabled for this camera. Audio detection must be enabled for cameras with audio transcription when it is disabled globally."
            )

        if self.plus_api and not self.snapshots.clean_copy:
            logger.warning(
                "Frigate+ is configured but clean snapshots are not enabled, submissions to Frigate+ will not be possible."
            )

        # Validate auth roles against cameras
        camera_names = set(self.cameras.keys())
@ -20,7 +20,7 @@ from frigate.genai import GenAIClient
from frigate.models import Event
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed
from frigate.util.file import get_event_thumbnail_bytes, load_event_snapshot_image
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import create_thumbnail, ensure_jpeg_bytes

if TYPE_CHECKING:
@ -224,12 +224,23 @@ class ObjectDescriptionProcessor(PostProcessorApi):
    def _read_and_crop_snapshot(self, event: Event) -> bytes | None:
        """Read, decode, and crop the snapshot image."""

        try:
            img, _ = load_event_snapshot_image(event)
            if img is None:
                logger.error(f"Cannot load snapshot for {event.id}, file not found")
        snapshot_file = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg")

        if not os.path.isfile(snapshot_file):
            logger.error(
                f"Cannot load snapshot for {event.id}, file not found: {snapshot_file}"
            )
            return None

        try:
            with open(snapshot_file, "rb") as image_file:
                snapshot_image = image_file.read()

            img = cv2.imdecode(
                np.frombuffer(snapshot_image, dtype=np.int8),
                cv2.IMREAD_COLOR,
            )

            # Crop snapshot based on region
            # provide full image if region doesn't exist (manual events)
            height, width = img.shape[:2]
@ -158,33 +158,36 @@ class EventProcessor(threading.Thread):
            end_time = (
                None if event_data["end_time"] is None else event_data["end_time"]
            )
            snapshot = event_data["snapshot"]
            # score of the snapshot
            score = None if snapshot is None else snapshot["score"]
            score = (
                None
                if event_data["snapshot"] is None
                else event_data["snapshot"]["score"]
            )
            # detection region in the snapshot
            region = (
                None
                if snapshot is None
                if event_data["snapshot"] is None
                else to_relative_box(
                    width,
                    height,
                    snapshot["region"],
                    event_data["snapshot"]["region"],
                )
            )
            # bounding box for the snapshot
            box = (
                None
                if snapshot is None
                if event_data["snapshot"] is None
                else to_relative_box(
                    width,
                    height,
                    snapshot["box"],
                    event_data["snapshot"]["box"],
                )
            )

            attributes = (
                None
                if snapshot is None
                if event_data["snapshot"] is None
                else [
                    {
                        "box": to_relative_box(
@ -195,14 +198,9 @@ class EventProcessor(threading.Thread):
                        "label": a["label"],
                        "score": a["score"],
                    }
                    for a in snapshot["attributes"]
                    for a in event_data["snapshot"]["attributes"]
                ]
            )
            snapshot_frame_time = None if snapshot is None else snapshot["frame_time"]
            snapshot_area = None if snapshot is None else snapshot["area"]
            snapshot_estimated_speed = (
                None if snapshot is None else snapshot["current_estimated_speed"]
            )

            # keep these from being set back to false because the event
            # may have started while recordings/snapshots/alerts/detections were enabled
@ -231,10 +229,6 @@ class EventProcessor(threading.Thread):
                "score": score,
                "top_score": event_data["top_score"],
                "attributes": attributes,
                "snapshot_clean": event_data.get("snapshot_clean", False),
                "snapshot_frame_time": snapshot_frame_time,
                "snapshot_area": snapshot_area,
                "snapshot_estimated_speed": snapshot_estimated_speed,
                "average_estimated_speed": event_data["average_estimated_speed"],
                "velocity_angle": event_data["velocity_angle"],
                "type": "object",
@ -312,11 +306,8 @@ class EventProcessor(threading.Thread):
                "type": event_data["type"],
                "score": event_data["score"],
                "top_score": event_data["score"],
                "snapshot_clean": event_data.get("snapshot_clean", False),
            },
        }
        if event_data.get("draw") is not None:
            event[Event.data]["draw"] = event_data["draw"]
        if event_data.get("recognized_license_plate") is not None:
            event[Event.data]["recognized_license_plate"] = event_data[
                "recognized_license_plate"
@ -1208,7 +1208,7 @@ class TestConfig(unittest.TestCase):

        frigate_config = FrigateConfig(**config)
        assert frigate_config.cameras["back"].snapshots.bounding_box
        assert frigate_config.cameras["back"].snapshots.quality == 60
        assert frigate_config.cameras["back"].snapshots.quality == 70

    def test_global_snapshots_merge(self):
        config = {
@ -1,72 +0,0 @@
import os
import tempfile
from types import SimpleNamespace
from unittest import TestCase
from unittest.mock import patch

import cv2
import numpy as np

from frigate.util import file as file_util


class TestFileUtils(TestCase):
    def _write_clean_snapshot(
        self, clips_dir: str, event_id: str, image: np.ndarray
    ) -> None:
        assert cv2.imwrite(
            os.path.join(clips_dir, f"front_door-{event_id}-clean.webp"),
            image,
        )

    def test_get_event_snapshot_bytes_reads_clean_webp(self):
        event_id = "clean-webp"
        image = np.zeros((100, 200, 3), np.uint8)
        event = SimpleNamespace(
            id=event_id,
            camera="front_door",
            label="Mock",
            top_score=100,
            score=0,
            start_time=0,
            data={
                "box": [0.25, 0.25, 0.25, 0.5],
                "score": 0.85,
                "attributes": [],
            },
        )

        with (
            tempfile.TemporaryDirectory() as clips_dir,
            patch.object(file_util, "CLIPS_DIR", clips_dir),
        ):
            self._write_clean_snapshot(clips_dir, event_id, image)

            snapshot_image, is_clean = file_util.load_event_snapshot_image(
                event, clean_only=True
            )

            assert is_clean
            assert snapshot_image is not None
            assert snapshot_image.shape[:2] == image.shape[:2]

            rendered_bytes, _ = file_util.get_event_snapshot_bytes(
                event,
                ext="jpg",
                timestamp=False,
                bounding_box=True,
                crop=False,
                height=40,
                quality=None,
                timestamp_style=None,
                colormap={},
            )
            assert rendered_bytes is not None

            rendered_image = cv2.imdecode(
                np.frombuffer(rendered_bytes, dtype=np.uint8),
                cv2.IMREAD_COLOR,
            )
            assert rendered_image is not None
            assert rendered_image.shape[0] == 40
            assert rendered_image.max() > 0
@@ -547,10 +547,7 @@ class TrackedObjectProcessor(threading.Thread):
                        "has_clip": self.config.cameras[camera_name].record.enabled
                        and include_recording,
                        "has_snapshot": True,
                        "snapshot_clean": True,
                        "snapshot_frame_time": frame_time,
                        "type": source_type,
                        "draw": draw,
                    },
                )
            )

@@ -606,7 +603,6 @@ class TrackedObjectProcessor(threading.Thread):
                        "has_clip": self.config.cameras[camera_name].record.enabled
                        and include_recording,
                        "has_snapshot": True,
                        "snapshot_clean": True,
                        "type": "api",
                        "recognized_license_plate": plate,
                        "recognized_license_plate_score": score,
@@ -13,6 +13,7 @@ import numpy as np
from frigate.config import (
    CameraConfig,
    FilterConfig,
    SnapshotsConfig,
    UIConfig,
)
from frigate.const import CLIPS_DIR, REPLAY_CAMERA_PREFIX, THUMB_DIR

@@ -21,7 +22,9 @@ from frigate.review.types import SeverityEnum
from frigate.util.builtin import sanitize_float
from frigate.util.image import (
    area,
    get_snapshot_bytes,
    calculate_region,
    draw_box_with_label,
    draw_timestamp,
    is_better_thumbnail,
)
from frigate.util.object import box_inside

@@ -390,7 +393,6 @@ class TrackedObject:
            "camera": self.camera_config.name,
            "frame_time": self.obj_data["frame_time"],
            "snapshot": self.thumbnail_data,
            "snapshot_clean": True,
            "label": self.obj_data["label"],
            "sub_label": self.obj_data.get("sub_label"),
            "top_score": self.top_score,

@@ -447,15 +449,27 @@ class TrackedObject:
        return img.tobytes()

    def get_clean_webp(self) -> bytes | None:
        webp_bytes, _ = self.get_img_bytes(
            ext="webp",
            timestamp=False,
            bounding_box=False,
            crop=False,
            height=None,
            quality=self.camera_config.snapshots.quality,
        if self.thumbnail_data is None:
            return None

        try:
            best_frame = cv2.cvtColor(
                self.frame_cache[self.thumbnail_data["frame_time"]]["frame"],
                cv2.COLOR_YUV2BGR_I420,
            )
        return webp_bytes
        except KeyError:
            logger.warning(
                f"Unable to create clean webp because frame {self.thumbnail_data['frame_time']} is not in the cache"
            )
            return None

        ret, webp = cv2.imencode(
            ".webp", best_frame, [int(cv2.IMWRITE_WEBP_QUALITY), 60]
        )
        if ret:
            return webp.tobytes()
        else:
            return None

    def get_img_bytes(
        self,

@@ -477,33 +491,122 @@ class TrackedObject:
            )
        except KeyError:
            logger.warning(
                f"Unable to create snapshot because frame {frame_time} is not in the cache"
                f"Unable to create jpg because frame {frame_time} is not in the cache"
            )
            return None, None

        return get_snapshot_bytes(
        if bounding_box:
            thickness = 2
            color = self.colormap.get(self.obj_data["label"], (255, 255, 255))

            # draw the bounding boxes on the frame
            box = self.thumbnail_data["box"]
            draw_box_with_label(
                best_frame,
            frame_time,
            ext=ext,
            timestamp=timestamp,
            bounding_box=bounding_box,
            crop=crop,
            height=height,
            quality=quality,
            label=self.obj_data["label"],
            box=self.thumbnail_data["box"],
            score=self.thumbnail_data["score"],
            area=self.thumbnail_data["area"],
            attributes=self.thumbnail_data["attributes"],
            color=self.colormap.get(self.obj_data["label"], (255, 255, 255)),
            timestamp_style=self.camera_config.timestamp_style,
            estimated_speed=self.thumbnail_data["current_estimated_speed"],
                box[0],
                box[1],
                box[2],
                box[3],
                self.obj_data["label"],
                f"{int(self.thumbnail_data['score'] * 100)}% {int(self.thumbnail_data['area'])}"
                + (
                    f" {self.thumbnail_data['current_estimated_speed']:.1f}"
                    if self.thumbnail_data["current_estimated_speed"] != 0
                    else ""
                ),
                thickness=thickness,
                color=color,
            )

            # draw any attributes
            for attribute in self.thumbnail_data["attributes"]:
                box = attribute["box"]
                box_area = int((box[2] - box[0]) * (box[3] - box[1]))
                draw_box_with_label(
                    best_frame,
                    box[0],
                    box[1],
                    box[2],
                    box[3],
                    attribute["label"],
                    f"{attribute['score']:.0%} {str(box_area)}",
                    thickness=thickness,
                    color=color,
                )

        if crop:
            box = self.thumbnail_data["box"]
            box_size = 300
            region = calculate_region(
                best_frame.shape,
                box[0],
                box[1],
                box[2],
                box[3],
                box_size,
                multiplier=1.1,
            )
            best_frame = best_frame[region[1] : region[3], region[0] : region[2]]

        if height:
            width = int(height * best_frame.shape[1] / best_frame.shape[0])
            best_frame = cv2.resize(
                best_frame, dsize=(width, height), interpolation=cv2.INTER_AREA
            )
        if timestamp:
            colors = self.camera_config.timestamp_style.color
            draw_timestamp(
                best_frame,
                self.thumbnail_data["frame_time"],
                self.camera_config.timestamp_style.format,
                font_effect=self.camera_config.timestamp_style.effect,
                font_thickness=self.camera_config.timestamp_style.thickness,
                font_color=(colors.blue, colors.green, colors.red),
                position=self.camera_config.timestamp_style.position,
            )

        quality_params = []

        if ext == "jpg":
            quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality or 70]
        elif ext == "webp":
            quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality or 60]

        ret, jpg = cv2.imencode(f".{ext}", best_frame, quality_params)

        if ret:
            return jpg.tobytes(), frame_time
        else:
            return None, None

    def write_snapshot_to_disk(self) -> None:
        snapshot_config: SnapshotsConfig = self.camera_config.snapshots
        jpg_bytes, _ = self.get_img_bytes(
            ext="jpg",
            timestamp=snapshot_config.timestamp,
            bounding_box=snapshot_config.bounding_box,
            crop=snapshot_config.crop,
            height=snapshot_config.height,
            quality=snapshot_config.quality,
        )
        if jpg_bytes is None:
            logger.warning(f"Unable to save snapshot for {self.obj_data['id']}.")
        else:
            with open(
                os.path.join(
                    CLIPS_DIR, f"{self.camera_config.name}-{self.obj_data['id']}.jpg"
                ),
                "wb",
            ) as j:
                j.write(jpg_bytes)

        # write clean snapshot if enabled
        if snapshot_config.clean_copy:
            webp_bytes = self.get_clean_webp()
            if webp_bytes is None:
                logger.warning(f"Unable to save snapshot for {self.obj_data['id']}.")
                logger.warning(
                    f"Unable to save clean snapshot for {self.obj_data['id']}."
                )
            else:
                with open(
                    os.path.join(
@@ -133,18 +133,6 @@ def cleanup_camera_files(
        except Exception as e:
            logger.error("Failed to remove snapshot %s: %s", snapshot, e)

    for snapshot in glob.glob(os.path.join(CLIPS_DIR, f"{camera_name}-*-clean.webp")):
        try:
            os.remove(snapshot)
        except Exception as e:
            logger.error("Failed to remove snapshot %s: %s", snapshot, e)

    for snapshot in glob.glob(os.path.join(CLIPS_DIR, f"{camera_name}-*-clean.png")):
        try:
            os.remove(snapshot)
        except Exception as e:
            logger.error("Failed to remove snapshot %s: %s", snapshot, e)

    # Remove review thumbnail files
    for thumb in glob.glob(
        os.path.join(CLIPS_DIR, "review", f"thumb-{camera_name}-*.webp")
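The removal loops in the hunk above are plain glob-and-delete. A standalone sketch of the same pattern (the helper name and file names here are hypothetical; pure stdlib, exercised against a temporary directory):

```python
import glob
import os
import tempfile


def remove_clean_snapshots(clips_dir: str, camera_name: str) -> list:
    """Delete any -clean.webp / -clean.png snapshots for one camera."""
    removed = []
    for pattern in (f"{camera_name}-*-clean.webp", f"{camera_name}-*-clean.png"):
        for snapshot in glob.glob(os.path.join(clips_dir, pattern)):
            os.remove(snapshot)
            removed.append(os.path.basename(snapshot))
    return sorted(removed)


with tempfile.TemporaryDirectory() as clips_dir:
    for name in ("back-1-clean.webp", "back-2-clean.png", "front-3-clean.webp"):
        open(os.path.join(clips_dir, name), "w").close()
    removed = remove_clean_snapshots(clips_dir, "back")
    remaining = sorted(os.listdir(clips_dir))

print(removed)    # ['back-1-clean.webp', 'back-2-clean.png']
print(remaining)  # ['front-3-clean.webp']
```

Files for other cameras are untouched because the camera name is baked into the glob pattern.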
@@ -586,23 +586,6 @@ def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]

        new_config["cameras"][name] = camera_config

    # Remove deprecated clean_copy from global snapshots config
    if new_config.get("snapshots", {}).get("clean_copy") is not None:
        del new_config["snapshots"]["clean_copy"]
        if not new_config["snapshots"]:
            del new_config["snapshots"]

    # Remove deprecated clean_copy from camera snapshots configs
    for name, camera in new_config.get("cameras", {}).items():
        camera_config: dict[str, dict[str, Any]] = camera.copy()

        if camera_config.get("snapshots", {}).get("clean_copy") is not None:
            del camera_config["snapshots"]["clean_copy"]
            if not camera_config["snapshots"]:
                del camera_config["snapshots"]

        new_config["cameras"][name] = camera_config

    new_config["version"] = "0.18-0"
    return new_config
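The migration hunk above drops the deprecated `snapshots.clean_copy` key globally and per camera, deleting any `snapshots` dict that ends up empty. A minimal sketch of the same transformation (the `strip_clean_copy` name and the sample config are hypothetical):

```python
from typing import Any


def strip_clean_copy(config: dict) -> dict:
    """Drop deprecated snapshots.clean_copy globally and per camera,
    removing any snapshots dict left empty (mirrors the migration hunk)."""
    new_config: dict[str, Any] = {**config}
    if new_config.get("snapshots", {}).get("clean_copy") is not None:
        del new_config["snapshots"]["clean_copy"]
        if not new_config["snapshots"]:
            del new_config["snapshots"]
    for name, camera in new_config.get("cameras", {}).items():
        camera_config = {**camera}
        if camera_config.get("snapshots", {}).get("clean_copy") is not None:
            camera_config["snapshots"] = {
                k: v
                for k, v in camera_config["snapshots"].items()
                if k != "clean_copy"
            }
            if not camera_config["snapshots"]:
                del camera_config["snapshots"]
        new_config["cameras"][name] = camera_config
    return new_config


cfg = {
    "snapshots": {"clean_copy": True},
    "cameras": {"back": {"snapshots": {"clean_copy": False, "enabled": True}}},
}
migrated = strip_clean_copy(cfg)
print("snapshots" in migrated)  # False: global dict became empty and was removed
print(migrated["cameras"]["back"]["snapshots"])  # {'enabled': True}
```

Note the `is not None` check: a stored `clean_copy: False` still counts as present and is removed.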
@@ -5,16 +5,14 @@ import fcntl
import logging
import os
import time
from datetime import datetime
from pathlib import Path
from typing import Any, Optional
from typing import Optional

import cv2
from numpy import ndarray

from frigate.const import CLIPS_DIR, THUMB_DIR
from frigate.models import Event
from frigate.util.image import get_snapshot_bytes, relative_box_to_absolute

logger = logging.getLogger(__name__)
@@ -32,207 +30,9 @@ def get_event_thumbnail_bytes(event: Event) -> bytes | None:
    return None


def get_event_snapshot(event: Event) -> ndarray | None:
    image, _ = load_event_snapshot_image(event)
    return image


def get_event_snapshot_path(
    event: Event, *, clean_only: bool = False
) -> tuple[str | None, bool]:
    clean_snapshot_paths = [
        os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}-clean.webp"),
        os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}-clean.png"),
    ]

    for image_path in clean_snapshot_paths:
        if os.path.exists(image_path):
            return image_path, True

    snapshot_path = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg")
    if not os.path.exists(snapshot_path):
        return None, False

    # Legacy JPG snapshots may already include overlays, so they should never
    # be treated as clean input for additional rendering.
    if clean_only:
        return None, False

    return snapshot_path, False
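The lookup order in `get_event_snapshot_path` can be reproduced standalone: clean webp/png variants win, the legacy jpg is only a fallback, and `clean_only` refuses the jpg entirely. A sketch (the `resolve_snapshot_path` helper is hypothetical, standing in for the real function's `CLIPS_DIR`/`Event` dependencies):

```python
import os
import tempfile


def resolve_snapshot_path(clips_dir, camera, event_id, clean_only=False):
    # Prefer the clean webp/png written for the event; fall back to the
    # legacy annotated jpg unless clean_only was requested.
    for ext in ("webp", "png"):
        path = os.path.join(clips_dir, f"{camera}-{event_id}-clean.{ext}")
        if os.path.exists(path):
            return path, True
    legacy = os.path.join(clips_dir, f"{camera}-{event_id}.jpg")
    if os.path.exists(legacy) and not clean_only:
        return legacy, False
    return None, False


with tempfile.TemporaryDirectory() as clips_dir:
    # only the legacy jpg exists at first
    open(os.path.join(clips_dir, "front_door-abc.jpg"), "w").close()
    legacy_path, legacy_clean = resolve_snapshot_path(clips_dir, "front_door", "abc")
    none_path, _ = resolve_snapshot_path(
        clips_dir, "front_door", "abc", clean_only=True
    )
    # once a clean webp appears it takes precedence
    open(os.path.join(clips_dir, "front_door-abc-clean.webp"), "w").close()
    clean_path, is_clean = resolve_snapshot_path(clips_dir, "front_door", "abc")
```

With only the jpg on disk, `clean_only=True` returns nothing rather than treating an annotated snapshot as clean input.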
def load_event_snapshot_image(
    event: Event, *, clean_only: bool = False
) -> tuple[ndarray | None, bool]:
    image_path, is_clean_snapshot = get_event_snapshot_path(
        event, clean_only=clean_only
    )
    if image_path is None:
        return None, False

    image = cv2.imread(image_path)
    if image is None:
        logger.warning("Unable to load snapshot from %s", image_path)
        return None, False

    return image, is_clean_snapshot
def _get_event_snapshot_overlay_boxes(
    frame_shape: tuple[int, ...], event: Event
) -> list[dict[str, Any]]:
    overlay_boxes: list[dict[str, Any]] = []
    draw_data = event.data.get("draw") if event.data else {}
    draw_boxes = draw_data.get("boxes", []) if isinstance(draw_data, dict) else []

    for draw_box in draw_boxes:
        box = relative_box_to_absolute(frame_shape, draw_box.get("box"))
        if box is None:
            continue

        draw_color = draw_box.get("color", (255, 0, 0))
        color = (
            tuple(draw_color) if isinstance(draw_color, (list, tuple)) else (255, 0, 0)
        )
        overlay_boxes.append(
            {
                "box": box,
                "label": event.label,
                "score": draw_box.get("score"),
                "color": color,
            }
        )

    return overlay_boxes
def get_event_snapshot_bytes(
    event: Event,
    *,
    ext: str,
    timestamp: bool = False,
    bounding_box: bool = False,
    crop: bool = False,
    height: int | None = None,
    quality: int | None = None,
    timestamp_style: Any | None = None,
    colormap: dict[str, tuple[int, int, int]] | None = None,
) -> tuple[bytes | None, float]:
    best_frame, is_clean_snapshot = load_event_snapshot_image(event)
    if best_frame is None:
        return None, 0

    frame_time = _get_event_snapshot_frame_time(event)
    box = relative_box_to_absolute(
        best_frame.shape,
        event.data.get("box") if event.data else None,
    )
    overlay_boxes = _get_event_snapshot_overlay_boxes(best_frame.shape, event)

    if (bounding_box or crop or timestamp) and not is_clean_snapshot:
        logger.warning(
            "Unable to fully honor snapshot query parameters for completed event %s because the clean snapshot is unavailable.",
            event.id,
        )

    return get_snapshot_bytes(
        best_frame,
        frame_time,
        ext=ext,
        timestamp=timestamp and is_clean_snapshot,
        bounding_box=bounding_box and is_clean_snapshot,
        crop=crop and is_clean_snapshot,
        height=height,
        quality=quality,
        label=event.label,
        box=box,
        score=_get_event_snapshot_score(event),
        area=_get_event_snapshot_area(event),
        attributes=_get_event_snapshot_attributes(
            best_frame.shape,
            event.data.get("attributes") if event.data else None,
        ),
        color=(colormap or {}).get(event.label, (255, 255, 255)),
        overlay_boxes=overlay_boxes,
        timestamp_style=timestamp_style,
        estimated_speed=_get_event_snapshot_estimated_speed(event),
    )
def _as_timestamp(value: Any) -> float:
    if isinstance(value, datetime):
        return value.timestamp()

    return float(value)


def _get_event_snapshot_frame_time(event: Event) -> float:
    if event.data:
        snapshot_frame_time = event.data.get("snapshot_frame_time")
        if snapshot_frame_time is not None:
            return _as_timestamp(snapshot_frame_time)

        frame_time = event.data.get("frame_time")
        if frame_time is not None:
            return _as_timestamp(frame_time)

    return _as_timestamp(event.start_time)
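The frame-time resolution above is a three-step fallback: `data["snapshot_frame_time"]`, then `data["frame_time"]`, then the event's `start_time`, with `datetime` values normalized to epoch seconds. A self-contained sketch using `SimpleNamespace` as a stand-in for the `Event` model (the helper names are hypothetical):

```python
from datetime import datetime, timezone
from types import SimpleNamespace


def as_timestamp(value):
    # datetime values are normalized to epoch seconds; everything else to float
    if isinstance(value, datetime):
        return value.timestamp()
    return float(value)


def snapshot_frame_time(event):
    # fallback order: snapshot_frame_time -> frame_time -> event start_time
    if event.data:
        for key in ("snapshot_frame_time", "frame_time"):
            value = event.data.get(key)
            if value is not None:
                return as_timestamp(value)
    return as_timestamp(event.start_time)


event_a = SimpleNamespace(data={"frame_time": 1700000123.5}, start_time=1700000000)
print(snapshot_frame_time(event_a))  # 1700000123.5

moment = datetime(2024, 1, 1, tzinfo=timezone.utc)
event_b = SimpleNamespace(data={"snapshot_frame_time": moment}, start_time=0)
print(snapshot_frame_time(event_b) == moment.timestamp())  # True
```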
def _get_event_snapshot_attributes(
    frame_shape: tuple[int, ...], attributes: list[dict[str, Any]] | None
) -> list[dict[str, Any]]:
    absolute_attributes: list[dict[str, Any]] = []

    for attribute in attributes or []:
        box = relative_box_to_absolute(frame_shape, attribute.get("box"))
        if box is None:
            continue

        absolute_attributes.append(
            {
                "box": box,
                "label": attribute.get("label", "attribute"),
                "score": attribute.get("score", 0),
            }
        )

    return absolute_attributes
def _get_event_snapshot_score(event: Event) -> float:
    if event.data:
        score = event.data.get("score")
        if score is not None:
            return score

        top_score = event.data.get("top_score")
        if top_score is not None:
            return top_score

    return event.top_score or event.score or 0
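The score lookup follows the same precedence idea: stored event data first, then model fields. A sketch with `SimpleNamespace` standing in for `Event` (the `snapshot_score` name is hypothetical):

```python
from types import SimpleNamespace


def snapshot_score(event):
    # precedence: data["score"] -> data["top_score"] -> top_score -> score -> 0
    if event.data:
        for key in ("score", "top_score"):
            value = event.data.get(key)
            if value is not None:
                return value
    return event.top_score or event.score or 0


a = SimpleNamespace(data={"score": 0.91, "top_score": 0.95}, top_score=0.5, score=0.4)
b = SimpleNamespace(data={}, top_score=None, score=0.4)
print(snapshot_score(a))  # 0.91
print(snapshot_score(b))  # 0.4
```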
def _get_event_snapshot_area(event: Event) -> int | None:
    if event.data:
        area = event.data.get("snapshot_area")
        if area is not None:
            return int(area)

    return None


def _get_event_snapshot_estimated_speed(event: Event) -> float:
    if event.data:
        estimated_speed = event.data.get("snapshot_estimated_speed")
        if estimated_speed is not None:
            return float(estimated_speed)

        average_speed = event.data.get("average_estimated_speed")
        if average_speed is not None:
            return float(average_speed)

    return 0
def get_event_snapshot(event: Event) -> ndarray:
    media_name = f"{event.camera}-{event.id}"
    return cv2.imread(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")


### Deletion
@@ -270,229 +270,6 @@ def draw_box_with_label(
    )


def get_image_quality_params(ext: str, quality: Optional[int]) -> list[int]:
    if ext in ("jpg", "jpeg"):
        return [int(cv2.IMWRITE_JPEG_QUALITY), quality if quality is not None else 70]

    if ext == "webp":
        return [int(cv2.IMWRITE_WEBP_QUALITY), quality if quality is not None else 60]

    return []
def relative_box_to_absolute(
    frame_shape: tuple[int, ...], box: list[float] | tuple[float, ...] | None
) -> tuple[int, int, int, int] | None:
    if box is None or len(box) != 4:
        return None

    frame_height = frame_shape[0]
    frame_width = frame_shape[1]
    x_min = int(box[0] * frame_width)
    y_min = int(box[1] * frame_height)
    x_max = x_min + int(box[2] * frame_width)
    y_max = y_min + int(box[3] * frame_height)

    x_min = max(0, min(frame_width - 1, x_min))
    y_min = max(0, min(frame_height - 1, y_min))
    x_max = max(x_min + 1, min(frame_width - 1, x_max))
    y_max = max(y_min + 1, min(frame_height - 1, y_max))

    return (x_min, y_min, x_max, y_max)
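`relative_box_to_absolute` takes a `(height, width, ...)` frame shape and an `[x, y, w, h]` box in 0–1 relative units, and returns clamped pixel coordinates guaranteeing at least a 1px box. The same logic can be exercised standalone (a sketch without the numpy/typing machinery of the real module):

```python
def relative_box_to_absolute(frame_shape, box):
    # box is [x, y, w, h] in relative (0-1) units; frame_shape is (height, width, ...)
    if box is None or len(box) != 4:
        return None
    frame_height, frame_width = frame_shape[0], frame_shape[1]
    x_min = int(box[0] * frame_width)
    y_min = int(box[1] * frame_height)
    x_max = x_min + int(box[2] * frame_width)
    y_max = y_min + int(box[3] * frame_height)
    # clamp to the frame while keeping at least a 1px box
    x_min = max(0, min(frame_width - 1, x_min))
    y_min = max(0, min(frame_height - 1, y_min))
    x_max = max(x_min + 1, min(frame_width - 1, x_max))
    y_max = max(y_min + 1, min(frame_height - 1, y_max))
    return (x_min, y_min, x_max, y_max)


# on a 100x200 frame, a box at (0.25, 0.25) sized 0.25x0.5
print(relative_box_to_absolute((100, 200), [0.25, 0.25, 0.25, 0.5]))  # (50, 25, 100, 75)
print(relative_box_to_absolute((100, 200), None))  # None
```

These are the same relative-box semantics the deleted test exercised with `"box": [0.25, 0.25, 0.25, 0.5]` on a 100x200 image.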
def _format_snapshot_label(
    score: float | None,
    area: int | None,
    box: tuple[int, int, int, int] | None,
    estimated_speed: float = 0,
) -> str:
    score_value = score or 0
    score_text = (
        f"{int(score_value * 100)}%" if score_value <= 1 else f"{int(score_value)}%"
    )

    if area is None and box is not None:
        area = int((box[2] - box[0]) * (box[3] - box[1]))

    label = f"{score_text} {int(area or 0)}"
    if estimated_speed:
        label = f"{label} {estimated_speed:.1f}"

    return label
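The label formatter treats scores at or below 1 as ratios and larger values as already-computed percentages, derives the pixel area from the box when none was stored, and only appends a speed when it is non-zero. A standalone sketch of the same formatting:

```python
def format_snapshot_label(score, area, box, estimated_speed=0):
    # score <= 1 is treated as a ratio, larger values as an existing percentage
    score_value = score or 0
    score_text = (
        f"{int(score_value * 100)}%" if score_value <= 1 else f"{int(score_value)}%"
    )
    if area is None and box is not None:
        # derive the pixel area from the absolute box when none was stored
        area = int((box[2] - box[0]) * (box[3] - box[1]))
    label = f"{score_text} {int(area or 0)}"
    if estimated_speed:
        label = f"{label} {estimated_speed:.1f}"
    return label


print(format_snapshot_label(0.85, None, (50, 25, 100, 75)))  # 85% 2500
print(format_snapshot_label(0.85, 1200, None, estimated_speed=12.34))  # 85% 1200 12.3
```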
def draw_snapshot_bounding_boxes(
    frame: np.ndarray,
    label: str,
    box: tuple[int, int, int, int] | None,
    score: float | None,
    area: int | None,
    attributes: list[dict[str, Any]] | None,
    color: tuple[int, int, int],
    estimated_speed: float = 0,
) -> None:
    if box is None:
        return

    draw_box_with_label(
        frame,
        box[0],
        box[1],
        box[2],
        box[3],
        label,
        _format_snapshot_label(score, area, box, estimated_speed),
        thickness=2,
        color=color,
    )

    for attribute in attributes or []:
        attribute_box = attribute.get("box")
        if attribute_box is None:
            continue

        box_area = int(
            (attribute_box[2] - attribute_box[0])
            * (attribute_box[3] - attribute_box[1])
        )
        draw_box_with_label(
            frame,
            attribute_box[0],
            attribute_box[1],
            attribute_box[2],
            attribute_box[3],
            attribute.get("label", "attribute"),
            f"{attribute.get('score', 0):.0%} {box_area}",
            thickness=2,
            color=color,
        )
def _get_snapshot_overlay_box_label(
    score: float | int | None, box: tuple[int, int, int, int]
) -> str:
    area = int((box[2] - box[0]) * (box[3] - box[1]))

    if score is None:
        return f"- {area}"

    score_value = float(score)
    score_text = (
        f"{int(score_value * 100)}%" if score_value <= 1 else f"{int(score_value)}%"
    )
    return f"{score_text} {area}"
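Unlike the main label formatter, the overlay variant renders a `-` placeholder when no score is available at all. A standalone sketch of the three branches:

```python
def overlay_box_label(score, box):
    area = int((box[2] - box[0]) * (box[3] - box[1]))
    if score is None:
        # manually-drawn overlay boxes may carry no score at all
        return f"- {area}"
    score_value = float(score)
    score_text = (
        f"{int(score_value * 100)}%" if score_value <= 1 else f"{int(score_value)}%"
    )
    return f"{score_text} {area}"


print(overlay_box_label(None, (0, 0, 10, 10)))  # - 100
print(overlay_box_label(0.5, (0, 0, 10, 10)))   # 50% 100
print(overlay_box_label(85, (0, 0, 10, 10)))    # 85% 100
```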
def draw_snapshot_overlay_boxes(
    frame: np.ndarray,
    overlay_boxes: list[dict[str, Any]] | None,
    default_label: str,
    default_color: tuple[int, int, int],
) -> None:
    for overlay_box in overlay_boxes or []:
        box = overlay_box.get("box")
        if box is None:
            continue

        box_color = overlay_box.get("color", default_color)
        color = (
            tuple(box_color) if isinstance(box_color, (list, tuple)) else default_color
        )
        draw_box_with_label(
            frame,
            box[0],
            box[1],
            box[2],
            box[3],
            overlay_box.get("label", default_label),
            _get_snapshot_overlay_box_label(overlay_box.get("score"), box),
            thickness=2,
            color=color,
        )
def get_snapshot_bytes(
    frame: np.ndarray,
    frame_time: float,
    ext: str,
    *,
    timestamp: bool = False,
    bounding_box: bool = False,
    crop: bool = False,
    height: int | None = None,
    quality: int | None = None,
    label: str,
    box: tuple[int, int, int, int] | None,
    score: float | None,
    area: int | None,
    attributes: list[dict[str, Any]] | None,
    color: tuple[int, int, int],
    overlay_boxes: list[dict[str, Any]] | None = None,
    timestamp_style: Any | None = None,
    estimated_speed: float = 0,
) -> tuple[bytes | None, float]:
    best_frame = frame.copy()
    crop_box = box

    if crop_box is None and overlay_boxes and len(overlay_boxes) == 1:
        crop_box = overlay_boxes[0].get("box")

    if bounding_box and box:
        draw_snapshot_bounding_boxes(
            best_frame,
            label,
            box,
            score,
            area,
            attributes,
            color,
            estimated_speed,
        )

    if bounding_box and overlay_boxes:
        draw_snapshot_overlay_boxes(best_frame, overlay_boxes, label, color)

    if crop and crop_box:
        region = calculate_region(
            best_frame.shape,
            crop_box[0],
            crop_box[1],
            crop_box[2],
            crop_box[3],
            300,
            multiplier=1.1,
        )
        best_frame = best_frame[region[1] : region[3], region[0] : region[2]]

    if height:
        width = int(height * best_frame.shape[1] / best_frame.shape[0])
        best_frame = cv2.resize(
            best_frame, dsize=(width, height), interpolation=cv2.INTER_AREA
        )

    if timestamp and timestamp_style is not None:
        colors = timestamp_style.color
        draw_timestamp(
            best_frame,
            frame_time,
            timestamp_style.format,
            font_effect=timestamp_style.effect,
            font_thickness=timestamp_style.thickness,
            font_color=(colors.blue, colors.green, colors.red),
            position=timestamp_style.position,
        )

    ret, img = cv2.imencode(
        f".{ext}", best_frame, get_image_quality_params(ext, quality)
    )

    if ret:
        return img.tobytes(), frame_time

    return None, frame_time


def grab_cv2_contours(cnts):
    # if the length the contours tuple returned by cv2.findContours
    # is '2' then we are using either OpenCV v2.4, v4-beta, or
@@ -246,8 +246,8 @@ def sync_recordings(
def sync_event_snapshots(dry_run: bool = False, force: bool = False) -> SyncResult:
    """Sync event snapshots - delete files not referenced by any event.

    Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}-clean.webp
    Also checks legacy variants: {camera}-{event_id}.jpg and -clean.png
    Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}.jpg
    Also checks for clean variants: {camera}-{event_id}-clean.webp and -clean.png
    """
    result = SyncResult(media_type="event_snapshots")
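Both docstring variants above agree on the naming scheme the sync has to match; only which file is considered primary changes. A sketch of the three path shapes (the camera name and event id here are hypothetical; `/media/frigate/clips` is the clips path used elsewhere in the diff):

```python
CLIPS_DIR = "/media/frigate/clips"
camera, event_id = "front_door", "1700000123.5-abc123"  # hypothetical ids

annotated = f"{CLIPS_DIR}/{camera}-{event_id}.jpg"
clean_webp = f"{CLIPS_DIR}/{camera}-{event_id}-clean.webp"
clean_png = f"{CLIPS_DIR}/{camera}-{event_id}-clean.png"

print(annotated)   # /media/frigate/clips/front_door-1700000123.5-abc123.jpg
print(clean_webp)  # /media/frigate/clips/front_door-1700000123.5-abc123-clean.webp
```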
@@ -190,24 +190,20 @@ def generate_section_translation(config_class: type) -> Dict[str, Any]:

def get_detector_translations(
    config_schema: Dict[str, Any],
) -> tuple[Dict[str, Any], Dict[str, Any], set[str]]:
    """Build detector type translations with nested fields based on schema definitions.

    Returns a tuple of (type_translations, shared_fields, nested_field_keys).
    Shared fields (identical across all detector types) are returned separately
    to avoid duplication in the output.
    """
) -> tuple[Dict[str, Any], set[str]]:
    """Build detector type translations with nested fields based on schema definitions."""
    defs = config_schema.get("$defs", {})
    detector_schema = defs.get("DetectorConfig", {})
    discriminator = detector_schema.get("discriminator", {})
    mapping = discriminator.get("mapping", {})

    # First pass: collect all nested fields per detector type
    all_nested: Dict[str, Dict[str, Any]] = {}
    type_meta: Dict[str, Dict[str, str]] = {}

    type_translations: Dict[str, Any] = {}
    nested_field_keys: set[str] = set()
    for detector_type, ref in mapping.items():
        if not isinstance(ref, str) or not ref.startswith("#/$defs/"):
        if not isinstance(ref, str):
            continue

        if not ref.startswith("#/$defs/"):
            continue

        ref_name = ref.split("/")[-1]

@@ -215,49 +211,26 @@
        if not ref_schema:
            continue

        meta: Dict[str, str] = {}
        type_entry: Dict[str, str] = {}
        title = ref_schema.get("title")
        description = ref_schema.get("description")
        if title:
            meta["label"] = title
            type_entry["label"] = title
        if description:
            meta["description"] = description
        type_meta[detector_type] = meta
            type_entry["description"] = description

        nested = extract_translations_from_schema(ref_schema, defs=defs)
        all_nested[detector_type] = {
        nested_without_root = {
            k: v for k, v in nested.items() if k not in ("label", "description")
        }

    # Find fields that are identical across all types that have them
    shared_fields: Dict[str, Any] = {}
    if all_nested:
        # Collect all field keys across all types
        all_keys: set[str] = set()
        for nested in all_nested.values():
            all_keys.update(nested.keys())

        for key in all_keys:
            values = [nested[key] for nested in all_nested.values() if key in nested]
            if len(values) == len(all_nested) and all(v == values[0] for v in values):
                shared_fields[key] = values[0]

    # Build per-type translations with only unique (non-shared) fields
    type_translations: Dict[str, Any] = {}
    nested_field_keys: set[str] = set()
    for detector_type, nested in all_nested.items():
        type_entry: Dict[str, Any] = {}
        type_entry.update(type_meta.get(detector_type, {}))

        unique_fields = {k: v for k, v in nested.items() if k not in shared_fields}
        if unique_fields:
            type_entry.update(unique_fields)
            nested_field_keys.update(unique_fields.keys())
        if nested_without_root:
            type_entry.update(nested_without_root)
            nested_field_keys.update(nested_without_root.keys())

        if type_entry:
            type_translations[detector_type] = type_entry

    return type_translations, shared_fields, nested_field_keys
    return type_translations, nested_field_keys
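The "shared fields" pass in the older variant of `get_detector_translations` keeps a field at the base level only if every detector type has it with an identical value. That dedup step can be replayed on toy data (the detector names and field contents below are hypothetical stand-ins for the per-type nested translation dicts):

```python
from typing import Any, Dict

# toy stand-ins for the per-detector nested translation dicts
all_nested: Dict[str, Dict[str, Any]] = {
    "cpu": {"device": {"label": "Device"}, "num_threads": {"label": "Threads"}},
    "edgetpu": {"device": {"label": "Device"}},
}

# collect all field keys across all types
all_keys: set = set()
for nested in all_nested.values():
    all_keys.update(nested.keys())

# a field is shared only if present in every type with an identical value
shared_fields: Dict[str, Any] = {}
for key in all_keys:
    values = [nested[key] for nested in all_nested.values() if key in nested]
    if len(values) == len(all_nested) and all(v == values[0] for v in values):
        shared_fields[key] = values[0]

print(shared_fields)  # {'device': {'label': 'Device'}}
```

`num_threads` appears in only one type, so it stays in that type's entry; `device` is hoisted to the shared dict and stripped from each type.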
def main():

@@ -330,12 +303,9 @@ def main():
            section_data.update(nested_without_root)

        if field_name == "detectors":
            detector_types, shared_fields, detector_field_keys = (
                get_detector_translations(config_schema)
            detector_types, detector_field_keys = get_detector_translations(
                config_schema
            )
            # Add shared fields at the base detectors level
            section_data.update(shared_fields)
            # Add per-type translations (only unique fields per type)
            section_data.update(detector_types)
            for key in detector_field_keys:
                if key == "type":
@@ -626,22 +626,26 @@
    },
    "snapshots": {
      "label": "Snapshots",
      "description": "Settings for API-generated snapshots of tracked objects for this camera.",
      "description": "Settings for saved JPEG snapshots of tracked objects for this camera.",
      "enabled": {
        "label": "Enable snapshots",
        "description": "Enable or disable saving snapshots for this camera."
      },
      "clean_copy": {
        "label": "Save clean copy",
        "description": "Save an unannotated clean copy of snapshots in addition to annotated ones."
      },
      "timestamp": {
        "label": "Timestamp overlay",
        "description": "Overlay a timestamp on snapshots from API."
        "description": "Overlay a timestamp on saved snapshots."
      },
      "bounding_box": {
        "label": "Bounding box overlay",
        "description": "Draw bounding boxes for tracked objects on snapshots from API."
        "description": "Draw bounding boxes for tracked objects on saved snapshots."
      },
      "crop": {
        "label": "Crop snapshot",
        "description": "Crop snapshots from API to the detected object's bounding box."
        "description": "Crop saved snapshots to the detected object's bounding box."
      },
      "required_zones": {
        "label": "Required zones",

@@ -649,11 +653,11 @@
      },
      "height": {
        "label": "Snapshot height",
        "description": "Height (pixels) to resize snapshots from API to; leave empty to preserve original size."
        "description": "Height (pixels) to resize saved snapshots to; leave empty to preserve original size."
      },
      "retain": {
        "label": "Snapshot retention",
        "description": "Retention settings for snapshots including default days and per-object overrides.",
        "description": "Retention settings for saved snapshots including default days and per-object overrides.",
        "default": {
          "label": "Default retention",
          "description": "Default number of days to retain snapshots."

@@ -668,8 +672,8 @@
        }
      },
      "quality": {
        "label": "Snapshot quality",
        "description": "Encode quality for saved snapshots (0-100)."
        "label": "JPEG quality",
        "description": "JPEG encode quality for saved snapshots (0-100)."
      }
    },
    "timestamp_style": {
@@ -286,6 +286,68 @@
"detectors": {
"label": "Detector hardware",
"description": "Configuration for object detectors (CPU, GPU, ONNX backends) and any detector-specific model settings.",
"type": {
"label": "Detector Type",
"description": "Type of detector to use for object detection (for example 'cpu', 'edgetpu', 'openvino')."
},
"axengine": {
"label": "AXEngine NPU",
"description": "AXERA AX650N/AX8850N NPU detector running compiled .axmodel files via the AXEngine runtime.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
}
},
"cpu": {
"label": "CPU",
"description": "CPU TFLite detector that runs TensorFlow Lite models on the host CPU without hardware acceleration. Not recommended.",
"type": {
"label": "Type"
},
@@ -337,13 +399,6 @@
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"axengine": {
"label": "AXEngine NPU",
"description": "AXERA AX650N/AX8850N NPU detector running compiled .axmodel files via the AXEngine runtime."
},
"cpu": {
"label": "CPU",
"description": "CPU TFLite detector that runs TensorFlow Lite models on the host CPU without hardware acceleration. Not recommended.",
"num_threads": {
"label": "Number of detection threads",
"description": "The number of threads used for CPU-based inference."
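The detector keys described so far map onto Frigate's `detectors` config section. A minimal sketch, assuming the CPU detector — `cpu1` is an arbitrary instance name, not a reserved key:

```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3  # threads used for CPU-based inference
```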
@@ -352,6 +407,57 @@
"deepstack": {
"label": "DeepStack",
"description": "DeepStack/CodeProject.AI detector that sends images to a remote DeepStack HTTP API for inference. Not recommended.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"api_url": {
"label": "DeepStack API URL",
"description": "The URL of the DeepStack API."
@@ -368,6 +474,57 @@
"degirum": {
"label": "DeGirum",
"description": "DeGirum detector for running models via DeGirum cloud or local inference services.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"location": {
"label": "Inference Location",
"description": "Location of the DeGirum inference engine (e.g. '@cloud', '127.0.0.1')."
@@ -384,6 +541,57 @@
"edgetpu": {
"label": "EdgeTPU",
"description": "EdgeTPU detector that runs TensorFlow Lite models compiled for Coral EdgeTPU using the EdgeTPU delegate.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"device": {
"label": "Device Type",
"description": "The device to use for EdgeTPU inference (e.g. 'usb', 'pci')."
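The EdgeTPU keys above can be sketched as follows, assuming a Coral USB accelerator — `coral` is an arbitrary instance name:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb  # 'usb' or 'pci', per the device description above
```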
@@ -392,6 +600,57 @@
"hailo8l": {
"label": "Hailo-8/Hailo-8L",
"description": "Hailo-8/Hailo-8L detector using HEF models and the HailoRT SDK for inference on Hailo hardware.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"device": {
"label": "Device Type",
"description": "The device to use for Hailo inference (e.g. 'PCIe', 'M.2')."
@@ -400,6 +659,57 @@
"memryx": {
"label": "MemryX",
"description": "MemryX MX3 detector that runs compiled DFP models on MemryX accelerators.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"device": {
"label": "Device Path",
"description": "The device to use for MemryX inference (e.g. 'PCIe')."
@@ -408,6 +718,57 @@
"onnx": {
"label": "ONNX",
"description": "ONNX detector for running ONNX models; will use available acceleration backends (CUDA/ROCm/OpenVINO) when available.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"device": {
"label": "Device Type",
"description": "The device to use for ONNX inference (e.g. 'AUTO', 'CPU', 'GPU')."
@@ -416,6 +777,57 @@
"openvino": {
"label": "OpenVINO",
"description": "OpenVINO detector for AMD and Intel CPUs, Intel GPUs and Intel VPU hardware.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"device": {
"label": "Device Type",
"description": "The device to use for OpenVINO inference (e.g. 'CPU', 'GPU', 'NPU')."
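A sketch combining the OpenVINO keys with the detector-specific model options documented above — the path, input size, and model type values are illustrative, not defaults:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU  # 'CPU', 'GPU', or 'NPU'
    model:
      path: /config/model_cache/yolonas.onnx  # illustrative custom model path
      width: 320
      height: 320
      input_tensor: nhwc        # 'nhwc' or 'nchw'
      input_pixel_format: bgr   # 'rgb', 'bgr', or 'yuv'
      model_type: yolonas       # ssd, yolox, or yolonas
```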
@@ -424,6 +836,57 @@
"rknn": {
"label": "RKNN",
"description": "RKNN detector for Rockchip NPUs; runs compiled RKNN models on Rockchip hardware.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"num_cores": {
"label": "Number of NPU cores to use.",
"description": "The number of NPU cores to use (0 for auto)."
@@ -431,15 +894,168 @@
},
"synaptics": {
"label": "Synaptics",
"description": "Synaptics NPU detector for models in .synap format using the Synap SDK on Synaptics hardware."
"description": "Synaptics NPU detector for models in .synap format using the Synap SDK on Synaptics hardware.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
}
},
"teflon_tfl": {
|
||||
"label": "Teflon",
|
||||
"description": "Teflon delegate detector for TFLite using Mesa Teflon delegate library to accelerate inference on supported GPUs."
|
||||
"description": "Teflon delegate detector for TFLite using Mesa Teflon delegate library to accelerate inference on supported GPUs.",
|
||||
"type": {
|
||||
"label": "Type"
|
||||
},
|
||||
"model": {
|
||||
"label": "Detector specific model configuration",
|
||||
"description": "Detector-specific model configuration options (path, input size, etc.).",
|
||||
"path": {
|
||||
"label": "Custom Object detection model path",
|
||||
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
|
||||
},
|
||||
"labelmap_path": {
|
||||
"label": "Label map for custom object detector",
|
||||
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
|
||||
},
|
||||
"width": {
|
||||
"label": "Object detection model input width",
|
||||
"description": "Width of the model input tensor in pixels."
|
||||
},
|
||||
"height": {
|
||||
"label": "Object detection model input height",
|
||||
"description": "Height of the model input tensor in pixels."
|
||||
},
|
||||
"labelmap": {
|
||||
"label": "Labelmap customization",
|
||||
"description": "Overrides or remapping entries to merge into the standard labelmap."
|
||||
},
|
||||
"attributes_map": {
|
||||
"label": "Map of object labels to their attribute labels",
|
||||
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
|
||||
},
|
||||
"input_tensor": {
|
||||
"label": "Model Input Tensor Shape",
|
||||
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
|
||||
},
|
||||
"input_pixel_format": {
|
||||
"label": "Model Input Pixel Color Format",
|
||||
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
|
||||
},
|
||||
"input_dtype": {
|
||||
"label": "Model Input D Type",
|
||||
"description": "Data type of the model input tensor (for example 'float32')."
|
||||
},
|
||||
"model_type": {
|
||||
"label": "Object Detection Model Type",
|
||||
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
|
||||
}
|
||||
},
|
||||
"model_path": {
|
||||
"label": "Detector specific model path",
|
||||
"description": "File path to the detector model binary if required by the chosen detector."
|
||||
}
|
||||
},
|
||||
"tensorrt": {
|
||||
"label": "TensorRT",
|
||||
"description": "TensorRT detector for Nvidia Jetson devices using serialized TensorRT engines for accelerated inference.",
|
||||
"type": {
|
||||
"label": "Type"
|
||||
},
|
||||
"model": {
|
||||
"label": "Detector specific model configuration",
|
||||
"description": "Detector-specific model configuration options (path, input size, etc.).",
|
||||
"path": {
|
||||
"label": "Custom Object detection model path",
|
||||
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
|
||||
},
|
||||
"labelmap_path": {
|
||||
"label": "Label map for custom object detector",
|
||||
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
|
||||
},
|
||||
"width": {
|
||||
"label": "Object detection model input width",
|
||||
"description": "Width of the model input tensor in pixels."
|
||||
},
|
||||
"height": {
|
||||
"label": "Object detection model input height",
|
||||
"description": "Height of the model input tensor in pixels."
|
||||
},
|
||||
"labelmap": {
|
||||
"label": "Labelmap customization",
|
||||
"description": "Overrides or remapping entries to merge into the standard labelmap."
|
||||
},
|
||||
"attributes_map": {
|
||||
"label": "Map of object labels to their attribute labels",
|
||||
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
|
||||
},
|
||||
"input_tensor": {
|
||||
"label": "Model Input Tensor Shape",
|
||||
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
|
||||
},
|
||||
"input_pixel_format": {
|
||||
"label": "Model Input Pixel Color Format",
|
||||
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
|
||||
},
|
||||
"input_dtype": {
|
||||
"label": "Model Input D Type",
|
||||
"description": "Data type of the model input tensor (for example 'float32')."
|
||||
},
|
||||
"model_type": {
|
||||
"label": "Object Detection Model Type",
|
||||
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
|
||||
}
|
||||
},
|
||||
"model_path": {
|
||||
"label": "Detector specific model path",
|
||||
"description": "File path to the detector model binary if required by the chosen detector."
|
||||
},
|
||||
"device": {
|
||||
"label": "GPU Device Index",
|
||||
"description": "The GPU device index to use."
|
||||
@@ -448,6 +1064,57 @@
"zmq": {
"label": "ZMQ IPC",
"description": "ZMQ IPC detector that offloads inference to an external process via a ZeroMQ IPC endpoint.",
"type": {
"label": "Type"
},
"model": {
"label": "Detector specific model configuration",
"description": "Detector-specific model configuration options (path, input size, etc.).",
"path": {
"label": "Custom Object detection model path",
"description": "Path to a custom detection model file (or plus://<model_id> for Frigate+ models)."
},
"labelmap_path": {
"label": "Label map for custom object detector",
"description": "Path to a labelmap file that maps numeric classes to string labels for the detector."
},
"width": {
"label": "Object detection model input width",
"description": "Width of the model input tensor in pixels."
},
"height": {
"label": "Object detection model input height",
"description": "Height of the model input tensor in pixels."
},
"labelmap": {
"label": "Labelmap customization",
"description": "Overrides or remapping entries to merge into the standard labelmap."
},
"attributes_map": {
"label": "Map of object labels to their attribute labels",
"description": "Mapping from object labels to attribute labels used to attach metadata (for example 'car' -> ['license_plate'])."
},
"input_tensor": {
"label": "Model Input Tensor Shape",
"description": "Tensor format expected by the model: 'nhwc' or 'nchw'."
},
"input_pixel_format": {
"label": "Model Input Pixel Color Format",
"description": "Pixel colorspace expected by the model: 'rgb', 'bgr', or 'yuv'."
},
"input_dtype": {
"label": "Model Input D Type",
"description": "Data type of the model input tensor (for example 'float32')."
},
"model_type": {
"label": "Object Detection Model Type",
"description": "Detector model architecture type (ssd, yolox, yolonas) used by some detectors for optimization."
}
},
"model_path": {
"label": "Detector specific model path",
"description": "File path to the detector model binary if required by the chosen detector."
},
"endpoint": {
"label": "ZMQ IPC endpoint",
"description": "The ZMQ endpoint to connect to."
@ -1109,22 +1776,26 @@
|
||||
},
|
||||
"snapshots": {
|
||||
"label": "Snapshots",
|
||||
"description": "Settings for API-generated snapshots of tracked objects for all cameras; can be overridden per-camera.",
|
||||
"description": "Settings for saved JPEG snapshots of tracked objects for all cameras; can be overridden per-camera.",
|
||||
"enabled": {
|
||||
"label": "Enable snapshots",
|
||||
"description": "Enable or disable saving snapshots for all cameras; can be overridden per-camera."
|
||||
},
|
||||
"clean_copy": {
|
||||
"label": "Save clean copy",
|
||||
"description": "Save an unannotated clean copy of snapshots in addition to annotated ones."
|
||||
},
|
||||
"timestamp": {
|
||||
"label": "Timestamp overlay",
|
||||
"description": "Overlay a timestamp on snapshots from API."
|
||||
"description": "Overlay a timestamp on saved snapshots."
|
||||
},
|
||||
"bounding_box": {
|
||||
"label": "Bounding box overlay",
|
||||
"description": "Draw bounding boxes for tracked objects on snapshots from API."
|
||||
"description": "Draw bounding boxes for tracked objects on saved snapshots."
|
||||
},
|
||||
"crop": {
|
||||
"label": "Crop snapshot",
|
||||
"description": "Crop snapshots from API to the detected object's bounding box."
|
||||
"description": "Crop saved snapshots to the detected object's bounding box."
|
||||
},
|
||||
"required_zones": {
|
||||
"label": "Required zones",
|
||||
@ -1132,11 +1803,11 @@
|
||||
},
|
||||
"height": {
|
||||
"label": "Snapshot height",
|
||||
"description": "Height (pixels) to resize snapshots from API to; leave empty to preserve original size."
|
||||
"description": "Height (pixels) to resize saved snapshots to; leave empty to preserve original size."
|
||||
},
|
||||
"retain": {
|
||||
"label": "Snapshot retention",
|
||||
"description": "Retention settings for snapshots including default days and per-object overrides.",
|
||||
"description": "Retention settings for saved snapshots including default days and per-object overrides.",
|
||||
"default": {
|
||||
"label": "Default retention",
|
||||
"description": "Default number of days to retain snapshots."
|
||||
@ -1151,8 +1822,8 @@
|
||||
}
|
||||
},
|
||||
"quality": {
|
||||
"label": "Snapshot quality",
|
||||
"description": "Encode quality for saved snapshots (0-100)."
|
||||
"label": "JPEG quality",
|
||||
"description": "JPEG encode quality for saved snapshots (0-100)."
|
||||
}
|
||||
},
|
||||
"timestamp_style": {
|
||||
|
||||
@@ -116,10 +116,5 @@
"nzpost": "NZPost",
"postnord": "PostNord",
"gls": "GLS",
"dpd": "DPD",
"canada_post": "Canada Post",
"royal_mail": "Royal Mail",
"school_bus": "School Bus",
"skunk": "Skunk",
"kangaroo": "Kangaroo"
"dpd": "DPD"
}

@@ -92,7 +92,6 @@
"triggers": "Triggers",
"debug": "Debug",
"frigateplus": "Frigate+",
"maintenance": "Maintenance",
"mediaSync": "Media sync",
"regionGrid": "Region grid"
},
@@ -1060,11 +1059,12 @@
},
"snapshotConfig": {
"title": "Snapshot Configuration",
"desc": "Submitting to Frigate+ requires snapshots to be enabled in your config.",
"cleanCopyWarning": "Some cameras have snapshots disabled",
"desc": "Submitting to Frigate+ requires both snapshots and <code>clean_copy</code> snapshots to be enabled in your config.",
"cleanCopyWarning": "Some cameras have snapshots enabled but have the clean copy disabled. You need to enable <code>clean_copy</code> in your snapshot config to be able to submit images from these cameras to Frigate+.",
"table": {
"camera": "Camera",
"snapshots": "Snapshots"
"snapshots": "Snapshots",
"cleanCopySnapshots": "<code>clean_copy</code> Snapshots"
}
},
"modelInfo": {

@@ -75,9 +75,7 @@ export default function CameraReviewStatusToggles({
/>
<div className="space-y-0.5">
<Label htmlFor="detections-enabled">
<Trans ns="views/settings">
cameraReview.review.detections
</Trans>
<Trans ns="views/settings">camera.review.detections</Trans>
</Label>
</div>
</div>

@@ -1136,7 +1136,7 @@ export function ConfigSection({
)}
{hasChanges && (
<Badge variant="outline" className="text-xs">
{t("button.modified", {
{t("modified", {
ns: "common",
defaultValue: "Modified",
})}
@@ -1210,10 +1210,7 @@ export function ConfigSection({
variant="secondary"
className="cursor-default bg-danger text-xs text-white hover:bg-danger"
>
{t("button.modified", {
ns: "common",
defaultValue: "Modified",
})}
{t("modified", { ns: "common", defaultValue: "Modified" })}
</Badge>
)}
</div>

@@ -7,11 +7,7 @@ import type {
import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
import { LuCircleAlert } from "react-icons/lu";
import { useTranslation } from "react-i18next";
import {
buildTranslationPath,
resolveConfigTranslation,
humanizeKey,
} from "../utils";
import { buildTranslationPath, humanizeKey } from "../utils";
import type { ConfigFormContext } from "@/types/configForm";

type ErrorSchemaNode = RJSFSchema & {
@@ -118,15 +114,22 @@ const resolveErrorFieldLabel = ({
);

if (effectiveNamespace && translationPath) {
const translated = resolveConfigTranslation(
i18n,
t,
translationPath,
"label",
sectionI18nPrefix,
effectiveNamespace,
);
if (translated) return translated;
const prefixedTranslationKey =
sectionI18nPrefix && !translationPath.startsWith(`${sectionI18nPrefix}.`)
? `${sectionI18nPrefix}.${translationPath}.label`
: undefined;
const translationKey = `${translationPath}.label`;

if (
prefixedTranslationKey &&
i18n.exists(prefixedTranslationKey, { ns: effectiveNamespace })
) {
return t(prefixedTranslationKey, { ns: effectiveNamespace });
}

if (i18n.exists(translationKey, { ns: effectiveNamespace })) {
return t(translationKey, { ns: effectiveNamespace });
}
}

const schemaNode = resolveSchemaNodeForPath(schema, segments);

@@ -20,7 +20,6 @@ import { requiresRestartForFieldPath } from "@/utils/configUtil";
import RestartRequiredIndicator from "@/components/indicators/RestartRequiredIndicator";
import {
buildTranslationPath,
resolveConfigTranslation,
getFilterObjectLabel,
hasOverrideAtPath,
humanizeKey,
@@ -220,16 +219,20 @@ export function FieldTemplate(props: FieldTemplateProps) {
// Try to get translated label, falling back to schema title, then RJSF label
let finalLabel = label;
if (effectiveNamespace && translationPath) {
const translatedLabel = resolveConfigTranslation(
i18n,
t,
translationPath,
"label",
sectionI18nPrefix,
effectiveNamespace,
);
if (translatedLabel) {
finalLabel = translatedLabel;
// Prefer camera-scoped translations when a section prefix is provided
const prefixedTranslationKey =
sectionI18nPrefix && !translationPath.startsWith(`${sectionI18nPrefix}.`)
? `${sectionI18nPrefix}.${translationPath}.label`
: undefined;
const translationKey = `${translationPath}.label`;

if (
prefixedTranslationKey &&
i18n.exists(prefixedTranslationKey, { ns: effectiveNamespace })
) {
finalLabel = t(prefixedTranslationKey, { ns: effectiveNamespace });
} else if (i18n.exists(translationKey, { ns: effectiveNamespace })) {
finalLabel = t(translationKey, { ns: effectiveNamespace });
} else if (schemaTitle) {
finalLabel = schemaTitle;
} else if (translatedFilterObjectLabel) {
@@ -327,16 +330,18 @@ export function FieldTemplate(props: FieldTemplateProps) {
// Try to get translated description, falling back to schema description
let finalDescription = description || "";
if (effectiveNamespace && translationPath) {
const translatedDescription = resolveConfigTranslation(
i18n,
t,
translationPath,
"description",
sectionI18nPrefix,
effectiveNamespace,
);
if (translatedDescription) {
finalDescription = translatedDescription;
const prefixedDescriptionKey =
sectionI18nPrefix && !translationPath.startsWith(`${sectionI18nPrefix}.`)
? `${sectionI18nPrefix}.${translationPath}.description`
: undefined;
const descriptionKey = `${translationPath}.description`;
if (
prefixedDescriptionKey &&
i18n.exists(prefixedDescriptionKey, { ns: effectiveNamespace })
) {
finalDescription = t(prefixedDescriptionKey, { ns: effectiveNamespace });
} else if (i18n.exists(descriptionKey, { ns: effectiveNamespace })) {
finalDescription = t(descriptionKey, { ns: effectiveNamespace });
} else if (schemaDescription) {
finalDescription = schemaDescription;
}

@@ -17,7 +17,6 @@ import { requiresRestartForFieldPath } from "@/utils/configUtil";
import { ConfigFormContext } from "@/types/configForm";
import {
buildTranslationPath,
resolveConfigTranslation,
getDomainFromNamespace,
getFilterObjectLabel,
humanizeKey,
@@ -264,14 +263,16 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {

let inferredLabel: string | undefined;
if (i18nNs && translationPath) {
inferredLabel = resolveConfigTranslation(
i18n,
t,
translationPath,
"label",
sectionI18nPrefix,
i18nNs,
);
const prefixedLabelKey =
sectionI18nPrefix && !translationPath.startsWith(`${sectionI18nPrefix}.`)
? `${sectionI18nPrefix}.${translationPath}.label`
: undefined;
const labelKey = `${translationPath}.label`;
if (prefixedLabelKey && i18n.exists(prefixedLabelKey, { ns: i18nNs })) {
inferredLabel = t(prefixedLabelKey, { ns: i18nNs });
} else if (i18n.exists(labelKey, { ns: i18nNs })) {
inferredLabel = t(labelKey, { ns: i18nNs });
}
}
if (!inferredLabel && translatedFilterLabel) {
inferredLabel = translatedFilterLabel;
@@ -285,14 +286,19 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {

let inferredDescription: string | undefined;
if (i18nNs && translationPath) {
inferredDescription = resolveConfigTranslation(
i18n,
t,
translationPath,
"description",
sectionI18nPrefix,
i18nNs,
);
const prefixedDescriptionKey =
sectionI18nPrefix && !translationPath.startsWith(`${sectionI18nPrefix}.`)
? `${sectionI18nPrefix}.${translationPath}.description`
: undefined;
const descriptionKey = `${translationPath}.description`;
if (
prefixedDescriptionKey &&
i18n.exists(prefixedDescriptionKey, { ns: i18nNs })
) {
inferredDescription = t(prefixedDescriptionKey, { ns: i18nNs });
} else if (i18n.exists(descriptionKey, { ns: i18nNs })) {
inferredDescription = t(descriptionKey, { ns: i18nNs });
}
}
const schemaDescription = schema?.description;
const fallbackDescription =

@@ -124,50 +124,6 @@ export function buildTranslationPath(
return stringSegments.join(".");
}

/**
* Resolve a translated label or description for a config form field.
*
* Tries keys in priority order:
* 1. Type-specific prefixed key (e.g. "detectors.edgetpu.device.label")
* 2. Shared prefixed key with type stripped (e.g. "detectors.device.label")
* 3. Unprefixed key (e.g. "device.label")
*
* @returns The translated string, or undefined if no key matched.
*/
export function resolveConfigTranslation(
i18n: { exists: (key: string, opts?: Record<string, unknown>) => boolean },
t: (key: string, opts?: Record<string, unknown>) => string,
translationPath: string,
suffix: "label" | "description",
sectionI18nPrefix?: string,
ns?: string,
): string | undefined {
const opts = ns ? { ns } : undefined;

if (
sectionI18nPrefix &&
!translationPath.startsWith(`${sectionI18nPrefix}.`)
) {
// 1. Type-specific prefixed key (e.g. detectors.edgetpu.device.label)
const prefixed = `${sectionI18nPrefix}.${translationPath}.${suffix}`;
if (i18n.exists(prefixed, opts)) return t(prefixed, opts);

// 2. Shared prefixed key — strip leading type segment
// e.g. detectors.edgetpu.model.path → detectors.model.path
const dot = translationPath.indexOf(".");
if (dot !== -1) {
const shared = `${sectionI18nPrefix}.${translationPath.substring(dot + 1)}.${suffix}`;
if (i18n.exists(shared, opts)) return t(shared, opts);
}
}

// 3. Unprefixed key
const base = `${translationPath}.${suffix}`;
if (i18n.exists(base, opts)) return t(base, opts);

return undefined;
}

/**
* Extract the filter object label from a path containing "filters" segment.
* Returns the segment immediately after "filters".

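The three-tier fallback described in the resolveConfigTranslation docstring above can be exercised in isolation. This is a minimal sketch that swaps the i18next instance for a plain lookup table; the catalog keys and values below are invented for illustration and are not real Frigate locale entries.

```typescript
type Catalog = Record<string, string>;

// Mirrors the key-resolution order from the diff above:
// 1. type-specific prefixed key, 2. shared prefixed key with the
// leading type segment stripped, 3. unprefixed key.
function resolveKey(
  catalog: Catalog,
  translationPath: string,
  suffix: "label" | "description",
  sectionPrefix?: string,
): string | undefined {
  if (sectionPrefix && !translationPath.startsWith(`${sectionPrefix}.`)) {
    // 1. e.g. detectors.edgetpu.device.label
    const prefixed = `${sectionPrefix}.${translationPath}.${suffix}`;
    if (prefixed in catalog) return catalog[prefixed];

    // 2. e.g. detectors.edgetpu.model.path -> detectors.model.path
    const dot = translationPath.indexOf(".");
    if (dot !== -1) {
      const shared = `${sectionPrefix}.${translationPath.substring(dot + 1)}.${suffix}`;
      if (shared in catalog) return catalog[shared];
    }
  }

  // 3. e.g. device.label
  const base = `${translationPath}.${suffix}`;
  if (base in catalog) return catalog[base];

  return undefined;
}

// Hypothetical catalog: one type-specific key, one shared key, one unprefixed key.
const catalog: Catalog = {
  "detectors.edgetpu.device.label": "GPU Device Index",
  "detectors.model.path.label": "Custom Object detection model path",
  "device.label": "Device",
};

// Tier 1 wins when a type-specific key exists.
console.log(resolveKey(catalog, "edgetpu.device", "label", "detectors"));
// Tier 2 is used when only the shared (type-stripped) key exists.
console.log(resolveKey(catalog, "edgetpu.model.path", "label", "detectors"));
```

The point of tier 2 is that shared fields such as model.path do not need to be duplicated under every detector type in the locale files.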
@@ -4,7 +4,6 @@

export {
buildTranslationPath,
resolveConfigTranslation,
getFilterObjectLabel,
humanizeKey,
getDomainFromNamespace,

@@ -159,14 +159,15 @@ export default function SearchResultActions({
<MenuItem aria-label={t("itemMenu.downloadSnapshot.aria")}>
<a
className="flex items-center"
href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg?crop=0&bbox=1&timestamp=0`}
href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
download={`${searchResult.camera}_${searchResult.label}.jpg`}
>
<span>{t("itemMenu.downloadSnapshot.label")}</span>
</a>
</MenuItem>
)}
{searchResult.has_snapshot && (
{searchResult.has_snapshot &&
config?.cameras[searchResult.camera].snapshots.clean_copy && (
<MenuItem aria-label={t("itemMenu.downloadCleanSnapshot.aria")}>
<a
className="flex items-center"

@@ -85,7 +85,7 @@ export default function DetailActionsMenu({
<DropdownMenuItem>
<a
className="w-full"
href={`${baseUrl}api/events/${search.id}/snapshot.jpg?crop=0&bbox=1&timestamp=0`}
href={`${baseUrl}api/events/${search.id}/snapshot.jpg?bbox=1`}
download={`${search.camera}_${search.label}.jpg`}
>
<div className="flex cursor-pointer items-center gap-2">
@@ -94,7 +94,8 @@ export default function DetailActionsMenu({
</a>
</DropdownMenuItem>
)}
{search.has_snapshot && (
{search.has_snapshot &&
config?.cameras[search.camera].snapshots.clean_copy && (
<DropdownMenuItem>
<a
className="w-full"

@@ -1839,7 +1839,7 @@ export function ObjectSnapshotTab({
<img
ref={imgRef}
className="mx-auto max-h-[60dvh] rounded-lg bg-background object-contain"
src={`${baseUrl}api/events/${search?.id}/snapshot.jpg?crop=0&bbox=1&timestamp=0`}
src={`${baseUrl}api/events/${search?.id}/snapshot.jpg`}
alt={`${search?.label}`}
loading={isSafari ? "eager" : "lazy"}
onLoad={() => {

@@ -107,7 +107,7 @@ export function FrigatePlusDialog({
<img
ref={imgRef}
className="mx-auto max-h-[60dvh] rounded-lg bg-black object-contain"
src={`${baseUrl}api/events/${upload.id}/snapshot.jpg?crop=0&bbox=1&timestamp=0`}
src={`${baseUrl}api/events/${upload.id}/snapshot.jpg`}
alt={`${upload.label}`}
loading={isSafari ? "eager" : "lazy"}
onLoad={onImgLoad}

@@ -136,7 +136,7 @@ export default function EventMenu({
download
href={
event.has_snapshot
? `${apiHost}api/events/${event.id}/snapshot.jpg?crop=0&bbox=1&timestamp=0`
? `${apiHost}api/events/${event.id}/snapshot.jpg`
: `${apiHost}api/events/${event.id}/thumbnail.webp`
}
>

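The snapshot URLs removed across the hunks above all carry the same three query flags (crop, bbox, timestamp), each toggling one of the overlay options configurable under the snapshots section. A small sketch of how such a URL can be assembled; buildSnapshotUrl itself is illustrative and not part of Frigate's codebase.

```typescript
// Assemble a snapshot URL with the crop/bbox/timestamp flags seen in the
// diff. URLSearchParams preserves insertion order, so the flags come out
// in the same order as in the original hrefs.
function buildSnapshotUrl(
  apiHost: string,
  eventId: string,
  opts: { crop?: boolean; bbox?: boolean; timestamp?: boolean } = {},
): string {
  const params = new URLSearchParams({
    crop: opts.crop ? "1" : "0",
    bbox: opts.bbox ? "1" : "0",
    timestamp: opts.timestamp ? "1" : "0",
  });
  return `${apiHost}api/events/${eventId}/snapshot.jpg?${params.toString()}`;
}

// Matches the annotated-snapshot shape from the removed lines above.
console.log(buildSnapshotUrl("http://frigate.local/", "12345", { bbox: true }));
```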
@@ -273,6 +273,7 @@ export interface CameraConfig {
};
snapshots: {
bounding_box: boolean;
clean_copy: boolean;
crop: boolean;
enabled: boolean;
height: number | null;
@@ -614,6 +615,7 @@ export interface FrigateConfig {

snapshots: {
bounding_box: boolean;
clean_copy: boolean;
crop: boolean;
enabled: boolean;
height: number | null;

@@ -8,6 +8,7 @@ import axios from "axios";
import { FrigateConfig } from "@/types/frigateConfig";
import { CheckCircle2, XCircle } from "lucide-react";
import { Trans, useTranslation } from "react-i18next";
import { IoIosWarning } from "react-icons/io";
import { Button } from "@/components/ui/button";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
@@ -196,6 +197,15 @@ export default function FrigatePlusSettingsView({
document.title = t("documentTitle.frigatePlus");
}, [t]);

const needCleanSnapshots = () => {
if (!config) {
return false;
}
return Object.values(config.cameras).some(
(camera) => camera.snapshots.enabled && !camera.snapshots.clean_copy,
);
};

if (!config) {
return <ActivityIndicator />;
}
@@ -405,6 +415,11 @@
"frigatePlus.snapshotConfig.table.snapshots",
)}
</th>
<th className="px-4 py-2 text-center">
<Trans ns="views/settings">
frigatePlus.snapshotConfig.table.cleanCopySnapshots
</Trans>
</th>
</tr>
</thead>
<tbody>
@@ -424,12 +439,32 @@
<XCircle className="mx-auto size-5 text-danger" />
)}
</td>
<td className="px-4 py-2 text-center">
{camera.snapshots?.enabled &&
camera.snapshots?.clean_copy ? (
<CheckCircle2 className="mx-auto size-5 text-green-500" />
) : (
<XCircle className="mx-auto size-5 text-danger" />
)}
</td>
</tr>
),
)}
</tbody>
</table>
</div>
{needCleanSnapshots() && (
<div className="rounded-lg border border-secondary-foreground bg-secondary p-4 text-sm text-danger">
<div className="flex items-center gap-2">
<IoIosWarning className="mr-2 size-5 text-danger" />
<div className="max-w-[85%] text-sm">
<Trans ns="views/settings">
frigatePlus.snapshotConfig.cleanCopyWarning
</Trans>
</div>
</div>
</div>
)}
</div>
}
/>

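The needCleanSnapshots helper in the hunk above flags any camera that saves snapshots without a clean copy, which is what drives the warning banner. The same predicate can be sketched standalone; the types are simplified from the CameraConfig interface and the sample camera names are made up.

```typescript
interface SnapshotFlags {
  enabled: boolean;
  clean_copy: boolean;
}

// True when at least one camera has snapshots enabled but clean_copy off,
// i.e. it produces images that cannot be submitted to Frigate+.
function needCleanSnapshots(
  cameras: Record<string, { snapshots: SnapshotFlags }>,
): boolean {
  return Object.values(cameras).some(
    (camera) => camera.snapshots.enabled && !camera.snapshots.clean_copy,
  );
}

console.log(
  needCleanSnapshots({
    front: { snapshots: { enabled: true, clean_copy: false } },
    back: { snapshots: { enabled: true, clean_copy: true } },
  }),
); // "front" saves snapshots without a clean copy, so the warning shows
```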
@@ -207,10 +207,7 @@ export function SingleSectionPage({
variant="secondary"
className="cursor-default bg-danger text-xs text-white hover:bg-danger"
>
{t("button.modified", {
ns: "common",
defaultValue: "Modified",
})}
{t("modified", { ns: "common", defaultValue: "Modified" })}
</Badge>
)}
</div>
@@ -245,7 +242,7 @@ export function SingleSectionPage({
variant="secondary"
className="cursor-default bg-danger text-xs text-white hover:bg-danger"
>
{t("button.modified", { ns: "common", defaultValue: "Modified" })}
{t("modified", { ns: "common", defaultValue: "Modified" })}
</Badge>
)}
</div>