Miscellaneous Fixes (#20841)

* show id field when editing zone

* improve zone capitalization

* Update NPU models and docs

* fix MobilePage in tracked object details

* Use a thread lock for OpenVINO to avoid concurrent requests with JinaV2

* fix hashing function to avoid collisions

* remove extra flex div causing overflow

* ensure header stays on top of video controls

* don't smart capitalize friendly names

* Fix incorrect object classification crop

* don't display submit to plus if object doesn't have a snapshot

* check for snapshot and clip in actions menu

* frigate plus submission fix

still show the Frigate+ section if a snapshot has already been submitted, and run an optimistic update, since local state was being overridden
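
A minimal sketch of the SWR optimistic-update pattern this fix applies — patching every cached copy of the event instead of overwriting component-local state (hook name, `EventResult` shape, and the cache-key filter are simplified illustrations of the diff further down):

```ts
import { useSWRConfig } from "swr";

type EventResult = { id: string; plus_id?: string };

function useOptimisticPlusUpload(eventId: string) {
  const { mutate } = useSWRConfig();

  return () =>
    // Patch every cached "events" list in place instead of overwriting
    // local state, which a later revalidation would clobber.
    mutate(
      (key) => typeof key === "string" && key.includes("events"),
      (current: EventResult[] | undefined) =>
        current?.map((event) =>
          event.id === eventId ? { ...event, plus_id: "new_upload" } : event,
        ),
      { revalidate: false, rollbackOnError: true },
    );
}
```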

* Don't fail to show 0% when showing classification

* Don't fail on file system error

* Improve title and description for review genai

* fix overflowing truncated review item description in detail stream

* catch events whose review items start after the first timeline entry

review items may start later than the events within them, so subtract a padding from the start time in the filter so that the start of events is not incorrectly filtered out of the list in the detail stream

* also pad on review end_time
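
A minimal sketch of the padded filter these two bullets describe (the `REVIEW_PADDING` value here is illustrative; the real constant is imported from `@/types/review` in the diff further down):

```ts
// Illustrative value; the real REVIEW_PADDING constant lives in types/review.
const REVIEW_PADDING = 4;

type TimelineEntry = { timestamp: number };
type Review = { start_time: number; end_time?: number };

// Events inside a review item can begin before the item's start_time (and
// run past end_time), so widen the window by the padding on both sides
// instead of filtering on the raw bounds.
function timelineForReview(timeline: TimelineEntry[], review: Review) {
  return timeline.filter(
    (t) =>
      t.timestamp >= review.start_time - REVIEW_PADDING &&
      (review.end_time === undefined ||
        t.timestamp <= review.end_time + REVIEW_PADDING),
  );
}
```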

* fix

* change order of timeline zoom buttons on mobile

* use grid to ensure genai title does not cause overflow

* small tweaks

* Cleanup

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Josh Hawkins 2025-11-08 06:44:30 -06:00 committed by GitHub
parent ef19332fe5
commit 01452e4c51
15 changed files with 232 additions and 132 deletions

View File

@@ -5,7 +5,7 @@ title: Enrichments

 # Enrichments

-Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
+Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accelerated processing.

 ## Requirements
@@ -18,8 +18,10 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
 - **Intel**
   - OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
+  - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
 - **Nvidia**
   - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
   - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.

View File

@@ -261,6 +261,8 @@ OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will al

 :::tip

+**NPU + GPU Systems:** If you have both NPU and GPU available (Intel Core Ultra processors), use NPU for object detection and GPU for enrichments (semantic search, face recognition, etc.) for best performance and compatibility.
+
 When using many cameras one detector may not be enough to keep up. Multiple detectors can be defined assuming GPU resources are available. An example configuration would be:

 ```yaml
@@ -283,7 +285,7 @@ detectors:
 | [RF-DETR](#rf-detr) | ✅ | ✅ | Requires XE iGPU or Arc |
 | [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
 | [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
 | [YOLOX](#yolox) | ✅ | ? | |
 | [D-FINE](#d-fine) | ❌ | ❌ | |

 #### SSDLite MobileNet v2

View File

@@ -78,7 +78,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings

 ### GPU Acceleration

-The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
+The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.

 ```yaml
 semantic_search:
@@ -90,7 +90,7 @@ semantic_search:

 :::info

-If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
+If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU will be detected and used automatically.

 Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
 If you do not specify a device, the first available GPU will be used.

View File

@@ -418,8 +418,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                 obj_data["box"][2],
                 obj_data["box"][3],
                 max(
-                    obj_data["box"][1] - obj_data["box"][0],
-                    obj_data["box"][3] - obj_data["box"][2],
+                    obj_data["box"][2] - obj_data["box"][0],
+                    obj_data["box"][3] - obj_data["box"][1],
                 ),
                 1.0,
             )
@@ -546,5 +546,8 @@ def write_classification_attempt(
     )

     # delete oldest face image if maximum is reached
-    if len(files) > max_files:
-        os.unlink(os.path.join(folder, files[-1]))
+    try:
+        if len(files) > max_files:
+            os.unlink(os.path.join(folder, files[-1]))
+    except FileNotFoundError:
+        pass
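
The crop fix above swaps mismatched box indices: with a `[x1, y1, x2, y2]` box, width is `x2 - x1` and height is `y2 - y1`. A minimal sketch of the corrected square-crop sizing (the function and types are illustrative, not Frigate's API):

```ts
type Box = [number, number, number, number]; // [x1, y1, x2, y2]

// Side length of a square crop that contains the box: the larger of
// width (x2 - x1) and height (y2 - y1), floored at 1.0.
function squareCropSize([x1, y1, x2, y2]: Box): number {
  return Math.max(x2 - x1, y2 - y1, 1.0);
}

// The old code subtracted coordinates from different axes
// (y1 - x1 and y2 - x2), yielding meaningless crop sizes.
```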

View File

@@ -3,6 +3,7 @@
 import logging
 import os
 import platform
+import threading
 from abc import ABC, abstractmethod
 from typing import Any
@@ -161,12 +162,12 @@ class CudaGraphRunner(BaseModelRunner):
     """

     @staticmethod
-    def is_complex_model(model_type: str) -> bool:
+    def is_model_supported(model_type: str) -> bool:
         # Import here to avoid circular imports
         from frigate.detectors.detector_config import ModelTypeEnum
         from frigate.embeddings.types import EnrichmentModelTypeEnum

-        return model_type in [
+        return model_type not in [
             ModelTypeEnum.yolonas.value,
             EnrichmentModelTypeEnum.paddleocr.value,
             EnrichmentModelTypeEnum.jina_v1.value,
@@ -239,9 +240,30 @@ class OpenVINOModelRunner(BaseModelRunner):
             EnrichmentModelTypeEnum.jina_v2.value,
         ]

+    @staticmethod
+    def is_model_npu_supported(model_type: str) -> bool:
+        # Import here to avoid circular imports
+        from frigate.embeddings.types import EnrichmentModelTypeEnum
+
+        return model_type not in [
+            EnrichmentModelTypeEnum.paddleocr.value,
+            EnrichmentModelTypeEnum.jina_v1.value,
+            EnrichmentModelTypeEnum.jina_v2.value,
+            EnrichmentModelTypeEnum.arcface.value,
+        ]
+
     def __init__(self, model_path: str, device: str, model_type: str, **kwargs):
         self.model_path = model_path
         self.device = device
+
+        if device == "NPU" and not OpenVINOModelRunner.is_model_npu_supported(
+            model_type
+        ):
+            logger.warning(
+                f"OpenVINO model {model_type} is not supported on NPU, using GPU instead"
+            )
+            device = "GPU"
+
         self.complex_model = OpenVINOModelRunner.is_complex_model(model_type)

         if not os.path.isfile(model_path):
@@ -269,6 +291,10 @@ class OpenVINOModelRunner(BaseModelRunner):
         self.infer_request = self.compiled_model.create_infer_request()
         self.input_tensor: ov.Tensor | None = None

+        # Thread lock to prevent concurrent inference (needed for JinaV2 which shares
+        # one runner between text and vision embeddings called from different threads)
+        self._inference_lock = threading.Lock()
+
         if not self.complex_model:
             try:
                 input_shape = self.compiled_model.inputs[0].get_shape()
@@ -312,67 +338,70 @@ class OpenVINOModelRunner(BaseModelRunner):
         Returns:
             List of output tensors
         """
-        # Handle single input case for backward compatibility
-        if (
-            len(inputs) == 1
-            and len(self.compiled_model.inputs) == 1
-            and self.input_tensor is not None
-        ):
-            # Single input case - use the pre-allocated tensor for efficiency
-            input_data = list(inputs.values())[0]
-            np.copyto(self.input_tensor.data, input_data)
-            self.infer_request.infer(self.input_tensor)
-        else:
-            if self.complex_model:
-                try:
-                    # This ensures the model starts with a clean state for each sequence
-                    # Important for RNN models like PaddleOCR recognition
-                    self.infer_request.reset_state()
-                except Exception:
-                    # this will raise an exception for models with AUTO set as the device
-                    pass
-
-            # Multiple inputs case - set each input by name
-            for input_name, input_data in inputs.items():
-                # Find the input by name and its index
-                input_port = None
-                input_index = None
-                for idx, port in enumerate(self.compiled_model.inputs):
-                    if port.get_any_name() == input_name:
-                        input_port = port
-                        input_index = idx
-                        break
-
-                if input_port is None:
-                    raise ValueError(f"Input '{input_name}' not found in model")
-
-                # Create tensor with the correct element type
-                input_element_type = input_port.get_element_type()
-
-                # Ensure input data matches the expected dtype to prevent type mismatches
-                # that can occur with models like Jina-CLIP v2 running on OpenVINO
-                expected_dtype = input_element_type.to_dtype()
-                if input_data.dtype != expected_dtype:
-                    logger.debug(
-                        f"Converting input '{input_name}' from {input_data.dtype} to {expected_dtype}"
-                    )
-                    input_data = input_data.astype(expected_dtype)
-
-                input_tensor = ov.Tensor(input_element_type, input_data.shape)
-                np.copyto(input_tensor.data, input_data)
-
-                # Set the input tensor for the specific port index
-                self.infer_request.set_input_tensor(input_index, input_tensor)
-
-            # Run inference
-            self.infer_request.infer()
-
-        # Get all output tensors
-        outputs = []
-        for i in range(len(self.compiled_model.outputs)):
-            outputs.append(self.infer_request.get_output_tensor(i).data)
-
-        return outputs
+        # Lock prevents concurrent access to infer_request
+        # Needed for JinaV2: genai thread (text) + embeddings thread (vision)
+        with self._inference_lock:
+            # Handle single input case for backward compatibility
+            if (
+                len(inputs) == 1
+                and len(self.compiled_model.inputs) == 1
+                and self.input_tensor is not None
+            ):
+                # Single input case - use the pre-allocated tensor for efficiency
+                input_data = list(inputs.values())[0]
+                np.copyto(self.input_tensor.data, input_data)
+                self.infer_request.infer(self.input_tensor)
+            else:
+                if self.complex_model:
+                    try:
+                        # This ensures the model starts with a clean state for each sequence
+                        # Important for RNN models like PaddleOCR recognition
+                        self.infer_request.reset_state()
+                    except Exception:
+                        # this will raise an exception for models with AUTO set as the device
+                        pass

+                # Multiple inputs case - set each input by name
+                for input_name, input_data in inputs.items():
+                    # Find the input by name and its index
+                    input_port = None
+                    input_index = None
+                    for idx, port in enumerate(self.compiled_model.inputs):
+                        if port.get_any_name() == input_name:
+                            input_port = port
+                            input_index = idx
+                            break

+                    if input_port is None:
+                        raise ValueError(f"Input '{input_name}' not found in model")

+                    # Create tensor with the correct element type
+                    input_element_type = input_port.get_element_type()

+                    # Ensure input data matches the expected dtype to prevent type mismatches
+                    # that can occur with models like Jina-CLIP v2 running on OpenVINO
+                    expected_dtype = input_element_type.to_dtype()
+                    if input_data.dtype != expected_dtype:
+                        logger.debug(
+                            f"Converting input '{input_name}' from {input_data.dtype} to {expected_dtype}"
+                        )
+                        input_data = input_data.astype(expected_dtype)

+                    input_tensor = ov.Tensor(input_element_type, input_data.shape)
+                    np.copyto(input_tensor.data, input_data)

+                    # Set the input tensor for the specific port index
+                    self.infer_request.set_input_tensor(input_index, input_tensor)

+                # Run inference
+                self.infer_request.infer()

+            # Get all output tensors
+            outputs = []
+            for i in range(len(self.compiled_model.outputs)):
+                outputs.append(self.infer_request.get_output_tensor(i).data)

+            return outputs


 class RKNNModelRunner(BaseModelRunner):
@@ -500,7 +529,7 @@ def get_optimized_runner(
         return OpenVINOModelRunner(model_path, device, model_type, **kwargs)

     if (
-        not CudaGraphRunner.is_complex_model(model_type)
+        CudaGraphRunner.is_model_supported(model_type)
         and providers[0] == "CUDAExecutionProvider"
     ):
         options[0] = {
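
The `_inference_lock` change above serializes access to a single shared infer request. For illustration only, here is the same mutual-exclusion idea sketched in TypeScript as a promise-chain lock (all names hypothetical; the actual fix uses Python's `threading.Lock`):

```ts
// A minimal promise-chain mutex: each caller waits for the previous one to
// settle before running, so work on the shared resource never interleaves.
class AsyncLock {
  private tail: Promise<void> = Promise.resolve();

  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Keep the chain alive whether fn resolves or rejects.
    this.tail = result.then(
      () => undefined,
      () => undefined,
    );
    return result;
  }
}

// Hypothetical shared runner: text and vision embedding callers funnel
// through one lock, mirroring the role of _inference_lock above.
const inferenceLock = new AsyncLock();

async function runInference(input: Float32Array): Promise<Float32Array> {
  return inferenceLock.run(async () => {
    // Exclusive section: safe to touch the single shared infer request.
    return input; // placeholder for the real inference call
  });
}
```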

View File

@@ -113,8 +113,8 @@ When forming your description:

 ## Response Format
 Your response MUST be a flat JSON object with:
-- `title` (string): A concise, direct title that describes the purpose or overall action, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Joe accessing vehicle", "Person leaving porch for driveway", "Joe and person on front porch".
-- `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
+- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
+- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
 - `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
 - `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
 {get_concern_prompt()}

View File

@@ -148,13 +148,13 @@ export const ClassificationCard = forwardRef<
                 <div
                   className={cn(
                     "flex flex-col items-start text-white",
-                    data.score ? "text-xs" : "text-sm",
+                    data.score != undefined ? "text-xs" : "text-sm",
                   )}
                 >
                   <div className="smart-capitalize">
                     {data.name == "unknown" ? t("details.unknown") : data.name}
                   </div>
-                  {data.score && (
+                  {data.score != undefined && (
                     <div
                       className={cn(
                         "",

View File

@@ -55,29 +55,32 @@ export default function DetailActionsMenu({
       </DropdownMenuTrigger>
       <DropdownMenuPortal>
         <DropdownMenuContent align="end">
-          <DropdownMenuItem>
-            <a
-              className="w-full"
-              href={`${baseUrl}api/events/${search.id}/snapshot.jpg?bbox=1`}
-              download={`${search.camera}_${search.label}.jpg`}
-            >
-              <div className="flex cursor-pointer items-center gap-2">
-                <span>{t("itemMenu.downloadSnapshot.label")}</span>
-              </div>
-            </a>
-          </DropdownMenuItem>
-
-          <DropdownMenuItem>
-            <a
-              className="w-full"
-              href={`${baseUrl}api/${search.camera}/${clipTimeRange}/clip.mp4`}
-              download
-            >
-              <div className="flex cursor-pointer items-center gap-2">
-                <span>{t("itemMenu.downloadVideo.label")}</span>
-              </div>
-            </a>
-          </DropdownMenuItem>
+          {search.has_snapshot && (
+            <DropdownMenuItem>
+              <a
+                className="w-full"
+                href={`${baseUrl}api/events/${search.id}/snapshot.jpg?bbox=1`}
+                download={`${search.camera}_${search.label}.jpg`}
+              >
+                <div className="flex cursor-pointer items-center gap-2">
+                  <span>{t("itemMenu.downloadSnapshot.label")}</span>
+                </div>
+              </a>
+            </DropdownMenuItem>
+          )}
+          {search.has_clip && (
+            <DropdownMenuItem>
+              <a
+                className="w-full"
+                href={`${baseUrl}api/${search.camera}/${clipTimeRange}/clip.mp4`}
+                download
+              >
+                <div className="flex cursor-pointer items-center gap-2">
+                  <span>{t("itemMenu.downloadVideo.label")}</span>
+                </div>
+              </a>
+            </DropdownMenuItem>
+          )}
           {config?.semantic_search.enabled &&
             setSimilarity != undefined &&

View File

@@ -72,7 +72,12 @@ import {
   PopoverContent,
   PopoverTrigger,
 } from "@/components/ui/popover";
-import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
+import {
+  Drawer,
+  DrawerContent,
+  DrawerTitle,
+  DrawerTrigger,
+} from "@/components/ui/drawer";
 import { LuInfo } from "react-icons/lu";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { FaPencilAlt } from "react-icons/fa";
@@ -126,7 +131,7 @@ function TabsWithActions({
   return (
     <div className="flex items-center justify-between gap-1">
       <ScrollArea className="flex-1 whitespace-nowrap">
-        <div className="mb-2 flex flex-row md:mb-0">
+        <div className="mb-2 flex flex-row">
           <ToggleGroup
             className="*:rounded-md *:px-3 *:py-4"
             type="single"
@@ -224,6 +229,7 @@ function AnnotationSettings({
   const Overlay = isDesktop ? Popover : Drawer;
   const Trigger = isDesktop ? PopoverTrigger : DrawerTrigger;
   const Content = isDesktop ? PopoverContent : DrawerContent;
+  const Title = isDesktop ? "div" : DrawerTitle;
   const contentProps = isDesktop
     ? { align: "end" as const, container: container ?? undefined }
     : {};
@@ -248,7 +254,9 @@
           <PiSlidersHorizontalBold className="size-5" />
         </Button>
       </Trigger>
+      <Title className="sr-only">
+        {t("trackingDetails.adjustAnnotationSettings")}
+      </Title>
       <Content
         className={
           isDesktop
@@ -306,7 +314,7 @@ function DialogContentComponent({
   if (page === "tracking_details") {
     return (
       <TrackingDetails
-        className={cn("size-full", !isDesktop && "flex flex-col gap-4")}
+        className={cn(isDesktop ? "size-full" : "flex flex-col gap-4")}
         event={search as unknown as Event}
         tabs={
           isDesktop ? (
@@ -340,7 +348,7 @@
         }
       />
     ) : (
-      <div className={cn(!isDesktop ? "mb-4 w-full" : "size-full")}>
+      <div className={cn(!isDesktop ? "mb-4 w-full md:max-w-lg" : "size-full")}>
        <img
          className="w-full select-none rounded-lg object-contain transition-opacity"
          style={
@@ -584,8 +592,13 @@ export default function SearchDetailDialog({
             "scrollbar-container overflow-y-auto",
             isDesktop &&
               "max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-[70%]",
-            isMobile && "px-4",
+            isMobile && "flex h-full flex-col px-4",
           )}
+          onEscapeKeyDown={(event) => {
+            if (isPopoverOpen) {
+              event.preventDefault();
+            }
+          }}
           onInteractOutside={(e) => {
             if (isPopoverOpen) {
               e.preventDefault();
@@ -596,7 +609,7 @@
             }
           }}
         >
-          <Header>
+          <Header className={cn(!isDesktop && "top-0 z-[60] mb-0")}>
             <Title>{t("trackedObjectDetails")}</Title>
             <Description className="sr-only">
               {t("trackedObjectDetails")}
@@ -1078,12 +1091,31 @@ function ObjectDetailsTab({
       });

       setState("submitted");
-      setSearch({
-        ...search,
-        plus_id: "new_upload",
-      });
+      mutate(
+        (key) =>
+          typeof key === "string" &&
+          (key.includes("events") ||
+            key.includes("events/search") ||
+            key.includes("events/explore")),
+        (currentData: SearchResult[][] | SearchResult[] | undefined) => {
+          if (!currentData) return currentData;
+          // optimistic update
+          return currentData
+            .flat()
+            .map((event) =>
+              event.id === search.id
+                ? { ...event, plus_id: "new_upload" }
+                : event,
+            );
+        },
+        {
+          optimisticData: true,
+          rollbackOnError: true,
+          revalidate: false,
+        },
+      );
     },
-    [search, setSearch],
+    [search, mutate],
   );

   const popoverContainerRef = useRef<HTMLDivElement | null>(null);
@@ -1243,8 +1275,8 @@
             </div>

             {search.data.type === "object" &&
-              !search.plus_id &&
-              config?.plus?.enabled && (
+              config?.plus?.enabled &&
+              search.has_snapshot && (
                 <div
                   className={cn(
                     "my-2 flex w-full flex-col justify-between gap-1.5",

View File

@@ -352,7 +352,8 @@ export function TrackingDetails({
       className={cn(
         isDesktop
           ? "flex size-full justify-evenly gap-4 overflow-hidden"
-          : "flex size-full flex-col gap-2",
+          : "flex flex-col gap-2",
+        !isDesktop && cameraAspect === "tall" && "size-full",
         className,
       )}
     >
@@ -453,7 +454,7 @@
         )}
       >
         {isDesktop && tabs && (
-          <div className="mb-4 flex items-center justify-between">
+          <div className="mb-2 flex items-center justify-between">
             <div className="flex-1">{tabs}</div>
           </div>
         )}
@@ -719,9 +720,13 @@ function LifecycleIconRow({
               backgroundColor: `rgb(${color})`,
             }}
           />
-          <span className="smart-capitalize">
-            {item.data?.zones_friendly_names?.[zidx] ??
-              zone.replaceAll("_", " ")}
+          <span
+            className={cn(
+              item.data?.zones_friendly_names?.[zidx] === zone &&
+                "smart-capitalize",
+            )}
+          >
+            {item.data?.zones_friendly_names?.[zidx]}
           </span>
         </Badge>
       );

View File

@@ -576,6 +576,7 @@ export default function ZoneEditPane({
               control={form.control}
               nameField="friendly_name"
               idField="name"
+              idVisible={(polygon && polygon.name.length > 0) ?? false}
               nameLabel={t("masksAndZones.zones.name.title")}
               nameDescription={t("masksAndZones.zones.name.tips")}
               placeholderName={t("masksAndZones.zones.name.inputPlaceHolder")}

View File

@@ -15,7 +15,7 @@ import useSWR from "swr";
 import ActivityIndicator from "../indicators/activity-indicator";
 import { Event } from "@/types/event";
 import { getIconForLabel } from "@/utils/iconUtil";
-import { ReviewSegment } from "@/types/review";
+import { REVIEW_PADDING, ReviewSegment } from "@/types/review";
 import { LuChevronDown, LuCircle, LuChevronRight } from "react-icons/lu";
 import { getTranslatedLabel } from "@/utils/i18n";
 import EventMenu from "@/components/timeline/EventMenu";
@@ -391,8 +391,8 @@ function ReviewGroup({
           )}
         />
       </div>
-      <div className="mr-3 flex w-full justify-between">
-        <div className="ml-1 flex flex-col items-start gap-1.5">
+      <div className="mr-3 grid w-full grid-cols-[1fr_auto] gap-2">
+        <div className="ml-1 flex min-w-0 flex-col gap-1.5">
           <div className="flex flex-row gap-3">
             <div className="text-sm font-medium">{displayTime}</div>
             <div className="relative flex items-center gap-2 text-white">
@@ -408,7 +408,7 @@
           </div>
           <div className="flex flex-col gap-0.5">
             {review.data.metadata?.title && (
-              <div className="mb-1 flex items-center gap-1 text-sm text-primary-variant">
+              <div className="mb-1 flex min-w-0 items-center gap-1 text-sm text-primary-variant">
                 <MdAutoAwesome className="size-3 shrink-0" />
                 <span className="truncate">{review.data.metadata.title}</span>
               </div>
@@ -432,7 +432,7 @@
             e.stopPropagation();
             setOpen((v) => !v);
           }}
-          className="ml-2 inline-flex items-center justify-center rounded p-1 hover:bg-secondary/10"
+          className="inline-flex items-center justify-center self-center rounded p-1 hover:bg-secondary/10"
         >
           {open ? (
             <LuChevronDown className="size-4 text-primary-variant" />
@@ -803,8 +803,9 @@ function ObjectTimeline({
     return fullTimeline
       .filter(
         (t) =>
-          t.timestamp >= review.start_time &&
-          (review.end_time == undefined || t.timestamp <= review.end_time),
+          t.timestamp >= review.start_time - REVIEW_PADDING &&
+          (review.end_time == undefined ||
+            t.timestamp <= review.end_time + REVIEW_PADDING),
       )
       .map((event) => ({
         ...event,

View File

@@ -515,7 +515,7 @@ export function ReviewTimeline({
       <div
         className={`absolute z-30 flex gap-2 ${
           isMobile
-            ? "bottom-4 right-1 flex-col gap-3"
+            ? "bottom-4 right-1 flex-col-reverse gap-3"
             : "bottom-2 left-1/2 -translate-x-1/2"
         }`}
       >

View File

@@ -21,20 +21,30 @@ export const capitalizeAll = (text: string): string => {
  * @returns A valid camera identifier (lowercase, alphanumeric, max 8 chars)
  */
 export function generateFixedHash(name: string, prefix: string = "id"): string {
-  // Safely encode Unicode as UTF-8 bytes
+  // Use the full UTF-8 bytes of the name and compute an FNV-1a 32-bit hash.
+  // This is deterministic, fast, works with Unicode and avoids collisions from
+  // simple truncation of base64 output.
   const utf8Bytes = new TextEncoder().encode(name);

-  // Convert to base64 manually
-  let binary = "";
-  for (const byte of utf8Bytes) {
-    binary += String.fromCharCode(byte);
-  }
-  const base64 = btoa(binary);
+  // FNV-1a 32-bit hash algorithm
+  let hash = 0x811c9dc5; // FNV offset basis
+  for (let i = 0; i < utf8Bytes.length; i++) {
+    hash ^= utf8Bytes[i];
+    // Multiply by FNV prime (0x01000193) with 32-bit overflow
+    hash = (hash >>> 0) * 0x01000193;
+    // Ensure 32-bit unsigned integer
+    hash >>>= 0;
+  }

-  // Strip out non-alphanumeric characters and truncate
-  const cleanHash = base64.replace(/[^a-zA-Z0-9]/g, "").substring(0, 8);
+  // Convert to an 8-character lowercase hex string
+  const hashHex = (hash >>> 0).toString(16).padStart(8, "0").toLowerCase();

-  return `${prefix}_${cleanHash.toLowerCase()}`;
+  // Ensure the first character is a letter to avoid an identifier that's purely
+  // numeric (isValidId forbids all-digit IDs). If it starts with a digit,
+  // replace with 'a'. This is extremely unlikely but a simple safeguard.
+  const safeHash = /^[0-9]/.test(hashHex[0]) ? `a${hashHex.slice(1)}` : hashHex;
+
+  return `${prefix}_${safeHash}`;
 }

 /**
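
A quick usage sketch of the rewritten helper (the hex digits shown are illustrative — the exact value depends on the FNV-1a result):

```ts
// Same input always yields the same ID; names that previously collided
// under truncated base64 now map to distinct 32-bit hashes.
const id = generateFixedHash("Front Door Camera", "cam");
console.log(id); // e.g. "cam_3f9c2a1d" (prefix + 8 hex chars)

// The ID is stable across calls, so it can safely key config entries.
console.log(id === generateFixedHash("Front Door Camera", "cam")); // true
```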

View File

@@ -98,12 +98,12 @@ export default function CameraSettingsView({
       return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
         camera: cameraConfig.name,
         name,
-        friendly_name: getZoneName(name, cameraConfig.name),
+        friendly_name: cameraConfig.zones[name].friendly_name,
         objects: zoneData.objects,
         color: zoneData.color,
       }));
     }
-  }, [cameraConfig, getZoneName]);
+  }, [cameraConfig]);

   const alertsLabels = useMemo(() => {
     return cameraConfig?.review.alerts.labels
@@ -533,8 +533,14 @@
                               }}
                             />
                           </FormControl>
-                          <FormLabel className="font-normal smart-capitalize">
-                            {zone.friendly_name}
+                          <FormLabel
+                            className={cn(
+                              "font-normal",
+                              !zone.friendly_name &&
+                                "smart-capitalize",
+                            )}
+                          >
+                            {zone.friendly_name || zone.name}
                           </FormLabel>
                         </FormItem>
                       )}
@@ -632,8 +638,14 @@
                               }}
                             />
                           </FormControl>
-                          <FormLabel className="font-normal smart-capitalize">
-                            {zone.friendly_name}
+                          <FormLabel
+                            className={cn(
+                              "font-normal",
+                              !zone.friendly_name &&
+                                "smart-capitalize",
+                            )}
+                          >
+                            {zone.friendly_name || zone.name}
                           </FormLabel>
                         </FormItem>
                       )}