mirror of
https://github.com/blakeblackshear/frigate.git
synced 2025-12-06 05:24:11 +03:00
Miscellaneous Fixes (#20841)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* show id field when editing zone
* improve zone capitalization
* Update NPU models and docs
* fix mobilepage in tracked object details
* Use thread lock for openvino to avoid concurrent requests with JinaV2
* fix hashing function to avoid collisions
* remove extra flex div causing overflow
* ensure header stays on top of video controls
* don't smart capitalize friendly names
* Fix incorrect object classification crop
* don't display submit to plus if object doesn't have a snapshot
* check for snapshot and clip in actions menu
* frigate plus submission fix: still show frigate+ section if snapshot has already been submitted, and run optimistic update (local state was being overridden)
* Don't fail to show 0% when showing classification
* Don't fail on file system error
* Improve title and description for review genai
* fix overflowing truncated review item description in detail stream
* catch events with review items that start after the first timeline entry: review items may start later than events within them, so subtract a padding from the start time in the filter so the start of events are not incorrectly filtered out of the list in the detail stream
* also pad on review end_time
* fix
* change order of timeline zoom buttons on mobile
* use grid to ensure genai title does not cause overflow
* small tweaks
* Cleanup

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
This commit is contained in:
parent
ef19332fe5
commit
01452e4c51
@@ -5,7 +5,7 @@ title: Enrichments

 # Enrichments

-Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
+Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accelerated processing.

 ## Requirements

@@ -18,8 +18,10 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
 - **Intel**

   - OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
+  - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.

 - **Nvidia**

   - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
   - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
@@ -261,6 +261,8 @@ OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will al

 :::tip

+**NPU + GPU Systems:** If you have both NPU and GPU available (Intel Core Ultra processors), use NPU for object detection and GPU for enrichments (semantic search, face recognition, etc.) for best performance and compatibility.
+
 When using many cameras one detector may not be enough to keep up. Multiple detectors can be defined assuming GPU resources are available. An example configuration would be:

 ```yaml
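As a rough sketch of the NPU-for-detection, GPU-for-enrichments split recommended in the tip above (abbreviated and illustrative, not taken from this commit; the detector's model options and the rest of the config are omitted, and exact values depend on your hardware):

```yaml
detectors:
  ov:
    type: openvino
    device: NPU # object detection runs on the NPU

semantic_search:
  enabled: true
  model_size: large
  device: GPU # enrichments run on the integrated GPU, per the note above
```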
@@ -78,7 +78,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings

 ### GPU Acceleration

-The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
+The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.

 ```yaml
 semantic_search:
@@ -90,7 +90,7 @@ semantic_search:

 :::info

-If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
+If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU will be detected and used automatically.
 Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
 If you do not specify a device, the first available GPU will be used.
@@ -418,8 +418,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
             obj_data["box"][2],
             obj_data["box"][3],
             max(
-                obj_data["box"][1] - obj_data["box"][0],
-                obj_data["box"][3] - obj_data["box"][2],
+                obj_data["box"][2] - obj_data["box"][0],
+                obj_data["box"][3] - obj_data["box"][1],
             ),
             1.0,
         )
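The crop fix above is easiest to see with concrete numbers. A minimal sketch, assuming the box is stored Frigate-style as `[x1, y1, x2, y2]`:

```python
box = [120, 40, 320, 240]  # hypothetical [x1, y1, x2, y2] tracked-object box

# Before: the operands mixed axes (y1 - x1 and y2 - x2), so the computed
# "size" had no geometric meaning and could even be negative.
old_size = max(box[1] - box[0], box[3] - box[2])  # max(-80, -80) == -80

# After: the larger of the true width and height of the box.
new_size = max(box[2] - box[0], box[3] - box[1])  # max(200, 200) == 200

print(old_size, new_size)
```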
@@ -546,5 +546,8 @@ def write_classification_attempt(
     )

     # delete oldest face image if maximum is reached
+    try:
         if len(files) > max_files:
             os.unlink(os.path.join(folder, files[-1]))
+    except FileNotFoundError:
+        pass
@@ -3,6 +3,7 @@

 import logging
 import os
 import platform
+import threading
 from abc import ABC, abstractmethod
 from typing import Any

@@ -161,12 +162,12 @@ class CudaGraphRunner(BaseModelRunner):
     """

     @staticmethod
-    def is_complex_model(model_type: str) -> bool:
+    def is_model_supported(model_type: str) -> bool:
         # Import here to avoid circular imports
         from frigate.detectors.detector_config import ModelTypeEnum
         from frigate.embeddings.types import EnrichmentModelTypeEnum

-        return model_type in [
+        return model_type not in [
             ModelTypeEnum.yolonas.value,
             EnrichmentModelTypeEnum.paddleocr.value,
             EnrichmentModelTypeEnum.jina_v1.value,
@@ -239,9 +240,30 @@ class OpenVINOModelRunner(BaseModelRunner):
             EnrichmentModelTypeEnum.jina_v2.value,
         ]

+    @staticmethod
+    def is_model_npu_supported(model_type: str) -> bool:
+        # Import here to avoid circular imports
+        from frigate.embeddings.types import EnrichmentModelTypeEnum
+
+        return model_type not in [
+            EnrichmentModelTypeEnum.paddleocr.value,
+            EnrichmentModelTypeEnum.jina_v1.value,
+            EnrichmentModelTypeEnum.jina_v2.value,
+            EnrichmentModelTypeEnum.arcface.value,
+        ]
+
     def __init__(self, model_path: str, device: str, model_type: str, **kwargs):
         self.model_path = model_path
         self.device = device

+        if device == "NPU" and not OpenVINOModelRunner.is_model_npu_supported(
+            model_type
+        ):
+            logger.warning(
+                f"OpenVINO model {model_type} is not supported on NPU, using GPU instead"
+            )
+            device = "GPU"
+
         self.complex_model = OpenVINOModelRunner.is_complex_model(model_type)

         if not os.path.isfile(model_path):
@@ -269,6 +291,10 @@ OpenVINOModelRunner(BaseModelRunner):
         self.infer_request = self.compiled_model.create_infer_request()
         self.input_tensor: ov.Tensor | None = None

+        # Thread lock to prevent concurrent inference (needed for JinaV2 which shares
+        # one runner between text and vision embeddings called from different threads)
+        self._inference_lock = threading.Lock()
+
         if not self.complex_model:
             try:
                 input_shape = self.compiled_model.inputs[0].get_shape()
@@ -312,6 +338,9 @@
         Returns:
             List of output tensors
         """
+        # Lock prevents concurrent access to infer_request
+        # Needed for JinaV2: genai thread (text) + embeddings thread (vision)
+        with self._inference_lock:
             # Handle single input case for backward compatibility
             if (
                 len(inputs) == 1
@@ -500,7 +529,7 @@ def get_optimized_runner(
         return OpenVINOModelRunner(model_path, device, model_type, **kwargs)

     if (
-        not CudaGraphRunner.is_complex_model(model_type)
+        not CudaGraphRunner.is_model_supported(model_type)
         and providers[0] == "CUDAExecutionProvider"
     ):
         options[0] = {
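Why the lock above matters, as a minimal standalone sketch: in the JinaV2 case, two threads (text and vision embeddings) share one runner whose underlying infer request is not safe for concurrent use. The class and names below are illustrative, not Frigate's actual API:

```python
import threading


class SharedRunner:
    """Toy stand-in for a runner with a single, non-thread-safe infer request."""

    def __init__(self) -> None:
        self._inference_lock = threading.Lock()
        self._in_flight = False  # stands in for the infer request's internal state

    def run(self, payload: str) -> None:
        # Serialize access, as the diff above does with self._inference_lock.
        with self._inference_lock:
            assert not self._in_flight, "concurrent inference detected"
            self._in_flight = True
            # ... real code would copy inputs and run inference here ...
            self._in_flight = False


runner = SharedRunner()
threads = [
    threading.Thread(target=runner.run, args=(name,))
    for name in ("text embedding", "vision embedding")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```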
@@ -113,8 +113,8 @@ When forming your description:

 ## Response Format

 Your response MUST be a flat JSON object with:
-- `title` (string): A concise, direct title that describes the purpose or overall action, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Joe accessing vehicle", "Person leaving porch for driveway", "Joe and person on front porch".
-- `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
+- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
+- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
 - `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
 - `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
 {get_concern_prompt()}
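For reference, a response that satisfies this format might look like the following; the values are invented for illustration:

```json
{
  "title": "Vehicle arriving in driveway",
  "scene": "A sedan pulls into the driveway and stops near the garage. The driver's door opens and a person steps out, then walks toward the front porch.",
  "confidence": 0.85,
  "potential_threat_level": 0
}
```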
@@ -148,13 +148,13 @@ export const ClassificationCard = forwardRef<
         <div
           className={cn(
             "flex flex-col items-start text-white",
-            data.score ? "text-xs" : "text-sm",
+            data.score != undefined ? "text-xs" : "text-sm",
           )}
         >
           <div className="smart-capitalize">
             {data.name == "unknown" ? t("details.unknown") : data.name}
           </div>
-          {data.score && (
+          {data.score != undefined && (
            <div
              className={cn(
                "",
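This is the "Don't fail to show 0%" fix from the commit message: in JavaScript, `0` is falsy, so a legitimate 0% score was dropped by the truthiness checks. A minimal sketch with a hypothetical `score` value:

```ts
const score: number | undefined = 0; // a real, but zero, classification score

// Truthiness check: 0 is falsy, so nothing renders for a 0% score.
const before = score ? `${Math.round(score * 100)}%` : null; // null, wrong

// Explicit check: only a missing score (undefined/null) is skipped.
const after = score != undefined ? `${Math.round(score * 100)}%` : null; // "0%"
```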
@@ -55,6 +55,7 @@ export default function DetailActionsMenu({
       </DropdownMenuTrigger>
       <DropdownMenuPortal>
         <DropdownMenuContent align="end">
+          {search.has_snapshot && (
           <DropdownMenuItem>
             <a
               className="w-full"
@@ -66,7 +67,8 @@ export default function DetailActionsMenu({
               </div>
             </a>
           </DropdownMenuItem>
-
+          )}
+          {search.has_clip && (
           <DropdownMenuItem>
             <a
               className="w-full"
@@ -78,6 +80,7 @@ export default function DetailActionsMenu({
               </div>
             </a>
           </DropdownMenuItem>
+          )}

           {config?.semantic_search.enabled &&
             setSimilarity != undefined &&
@@ -72,7 +72,12 @@ import {
   PopoverContent,
   PopoverTrigger,
 } from "@/components/ui/popover";
-import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
+import {
+  Drawer,
+  DrawerContent,
+  DrawerTitle,
+  DrawerTrigger,
+} from "@/components/ui/drawer";
 import { LuInfo } from "react-icons/lu";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { FaPencilAlt } from "react-icons/fa";
@@ -126,7 +131,7 @@ function TabsWithActions({
   return (
     <div className="flex items-center justify-between gap-1">
       <ScrollArea className="flex-1 whitespace-nowrap">
-        <div className="mb-2 flex flex-row md:mb-0">
+        <div className="mb-2 flex flex-row">
           <ToggleGroup
             className="*:rounded-md *:px-3 *:py-4"
             type="single"
@@ -224,6 +229,7 @@ function AnnotationSettings({
   const Overlay = isDesktop ? Popover : Drawer;
   const Trigger = isDesktop ? PopoverTrigger : DrawerTrigger;
   const Content = isDesktop ? PopoverContent : DrawerContent;
+  const Title = isDesktop ? "div" : DrawerTitle;
   const contentProps = isDesktop
     ? { align: "end" as const, container: container ?? undefined }
     : {};
@@ -248,7 +254,9 @@
           <PiSlidersHorizontalBold className="size-5" />
         </Button>
       </Trigger>
-
+      <Title className="sr-only">
+        {t("trackingDetails.adjustAnnotationSettings")}
+      </Title>
       <Content
         className={
           isDesktop
@@ -306,7 +314,7 @@ function DialogContentComponent({
   if (page === "tracking_details") {
     return (
       <TrackingDetails
-        className={cn("size-full", !isDesktop && "flex flex-col gap-4")}
+        className={cn(isDesktop ? "size-full" : "flex flex-col gap-4")}
         event={search as unknown as Event}
         tabs={
           isDesktop ? (
@@ -340,7 +348,7 @@
         }
       />
     ) : (
-      <div className={cn(!isDesktop ? "mb-4 w-full" : "size-full")}>
+      <div className={cn(!isDesktop ? "mb-4 w-full md:max-w-lg" : "size-full")}>
        <img
          className="w-full select-none rounded-lg object-contain transition-opacity"
          style={
@@ -584,8 +592,13 @@ export default function SearchDetailDialog({
             "scrollbar-container overflow-y-auto",
             isDesktop &&
               "max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-[70%]",
-            isMobile && "px-4",
+            isMobile && "flex h-full flex-col px-4",
           )}
+          onEscapeKeyDown={(event) => {
+            if (isPopoverOpen) {
+              event.preventDefault();
+            }
+          }}
           onInteractOutside={(e) => {
             if (isPopoverOpen) {
               e.preventDefault();
@@ -596,7 +609,7 @@
             }
           }}
         >
-          <Header>
+          <Header className={cn(!isDesktop && "top-0 z-[60] mb-0")}>
            <Title>{t("trackedObjectDetails")}</Title>
            <Description className="sr-only">
              {t("trackedObjectDetails")}
@@ -1078,12 +1091,31 @@ function ObjectDetailsTab({
       });

       setState("submitted");
-      setSearch({
-        ...search,
-        plus_id: "new_upload",
-      });
+      mutate(
+        (key) =>
+          typeof key === "string" &&
+          (key.includes("events") ||
+            key.includes("events/search") ||
+            key.includes("events/explore")),
+        (currentData: SearchResult[][] | SearchResult[] | undefined) => {
+          if (!currentData) return currentData;
+          // optimistic update
+          return currentData
+            .flat()
+            .map((event) =>
+              event.id === search.id
+                ? { ...event, plus_id: "new_upload" }
+                : event,
+            );
+        },
+        {
+          optimisticData: true,
+          rollbackOnError: true,
+          revalidate: false,
+        },
+      );
     },
-    [search, setSearch],
+    [search, mutate],
   );

   const popoverContainerRef = useRef<HTMLDivElement | null>(null);
@@ -1243,8 +1275,8 @@ function ObjectDetailsTab({
         </div>

         {search.data.type === "object" &&
           !search.plus_id &&
-          config?.plus?.enabled && (
+          config?.plus?.enabled &&
+          search.has_snapshot && (
             <div
               className={cn(
                 "my-2 flex w-full flex-col justify-between gap-1.5",
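The `mutate` call above is the Frigate+ submission fix from the commit message: local state was being overridden, so instead of patching one local object, the handler now rewrites the matching event in every cached events list. A stripped-down sketch of the same SWR pattern, with hypothetical types and cache keys (the real diff additionally passes `optimisticData` and `rollbackOnError` so the caches roll back if the upload fails):

```ts
import { mutate } from "swr";

type Ev = { id: string; plus_id?: string }; // hypothetical event shape

// Rewrite the matching event in every cached "events*" list. The updater
// runs against each matching cache entry; revalidate: false keeps the
// patched data instead of refetching it.
function markUploaded(id: string) {
  mutate(
    (key) => typeof key === "string" && key.includes("events"),
    (current: Ev[] | undefined) =>
      current?.map((ev) =>
        ev.id === id ? { ...ev, plus_id: "new_upload" } : ev,
      ),
    { revalidate: false },
  );
}
```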
@@ -352,7 +352,8 @@ export function TrackingDetails({
       className={cn(
         isDesktop
           ? "flex size-full justify-evenly gap-4 overflow-hidden"
-          : "flex size-full flex-col gap-2",
+          : "flex flex-col gap-2",
+        !isDesktop && cameraAspect === "tall" && "size-full",
         className,
       )}
     >
@@ -453,7 +454,7 @@
       )}
     >
       {isDesktop && tabs && (
-        <div className="mb-4 flex items-center justify-between">
+        <div className="mb-2 flex items-center justify-between">
           <div className="flex-1">{tabs}</div>
         </div>
       )}
@@ -719,9 +720,13 @@ function LifecycleIconRow({
           backgroundColor: `rgb(${color})`,
         }}
       />
-      <span className="smart-capitalize">
-        {item.data?.zones_friendly_names?.[zidx] ??
-          zone.replaceAll("_", " ")}
+      <span
+        className={cn(
+          item.data?.zones_friendly_names?.[zidx] === zone &&
+            "smart-capitalize",
+        )}
+      >
+        {item.data?.zones_friendly_names?.[zidx]}
       </span>
     </Badge>
   );
@@ -576,6 +576,7 @@ export default function ZoneEditPane({
               control={form.control}
               nameField="friendly_name"
               idField="name"
+              idVisible={(polygon && polygon.name.length > 0) ?? false}
               nameLabel={t("masksAndZones.zones.name.title")}
               nameDescription={t("masksAndZones.zones.name.tips")}
               placeholderName={t("masksAndZones.zones.name.inputPlaceHolder")}
@@ -15,7 +15,7 @@ import useSWR from "swr";
 import ActivityIndicator from "../indicators/activity-indicator";
 import { Event } from "@/types/event";
 import { getIconForLabel } from "@/utils/iconUtil";
-import { ReviewSegment } from "@/types/review";
+import { REVIEW_PADDING, ReviewSegment } from "@/types/review";
 import { LuChevronDown, LuCircle, LuChevronRight } from "react-icons/lu";
 import { getTranslatedLabel } from "@/utils/i18n";
 import EventMenu from "@/components/timeline/EventMenu";
@@ -391,8 +391,8 @@ function ReviewGroup({
         )}
       />
     </div>
-    <div className="mr-3 flex w-full justify-between">
-      <div className="ml-1 flex flex-col items-start gap-1.5">
+    <div className="mr-3 grid w-full grid-cols-[1fr_auto] gap-2">
+      <div className="ml-1 flex min-w-0 flex-col gap-1.5">
         <div className="flex flex-row gap-3">
           <div className="text-sm font-medium">{displayTime}</div>
           <div className="relative flex items-center gap-2 text-white">
@@ -408,7 +408,7 @@
       </div>
       <div className="flex flex-col gap-0.5">
         {review.data.metadata?.title && (
-          <div className="mb-1 flex items-center gap-1 text-sm text-primary-variant">
+          <div className="mb-1 flex min-w-0 items-center gap-1 text-sm text-primary-variant">
             <MdAutoAwesome className="size-3 shrink-0" />
             <span className="truncate">{review.data.metadata.title}</span>
           </div>
@@ -432,7 +432,7 @@
           e.stopPropagation();
           setOpen((v) => !v);
         }}
-        className="ml-2 inline-flex items-center justify-center rounded p-1 hover:bg-secondary/10"
+        className="inline-flex items-center justify-center self-center rounded p-1 hover:bg-secondary/10"
       >
         {open ? (
           <LuChevronDown className="size-4 text-primary-variant" />
@@ -803,8 +803,9 @@ function ObjectTimeline({
     return fullTimeline
       .filter(
         (t) =>
-          t.timestamp >= review.start_time &&
-          (review.end_time == undefined || t.timestamp <= review.end_time),
+          t.timestamp >= review.start_time - REVIEW_PADDING &&
+          (review.end_time == undefined ||
+            t.timestamp <= review.end_time + REVIEW_PADDING),
       )
       .map((event) => ({
         ...event,
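A small worked example of the padded filter above: events inside a review item can begin slightly before the review item's own start time, and the strict comparison was dropping those first timeline entries. Times are in seconds, and the padding value here is assumed for the sketch:

```ts
const REVIEW_PADDING = 4; // assumed value for this illustration

const review = { start_time: 100, end_time: 160 };
const firstEntry = { timestamp: 98 }; // event starts 2s before the review item

// Strict filter: the event's first timeline entry is dropped.
const keptBefore = firstEntry.timestamp >= review.start_time; // false

// Padded filter: the entry is kept, so the detail stream shows the full event.
const keptAfter = firstEntry.timestamp >= review.start_time - REVIEW_PADDING; // true
```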
@@ -515,7 +515,7 @@ export function ReviewTimeline({
       <div
         className={`absolute z-30 flex gap-2 ${
           isMobile
-            ? "bottom-4 right-1 flex-col gap-3"
+            ? "bottom-4 right-1 flex-col-reverse gap-3"
            : "bottom-2 left-1/2 -translate-x-1/2"
        }`}
      >
@@ -21,20 +21,30 @@ export const capitalizeAll = (text: string): string => {
  * @returns A valid camera identifier (lowercase, alphanumeric, max 8 chars)
  */
 export function generateFixedHash(name: string, prefix: string = "id"): string {
-  // Safely encode Unicode as UTF-8 bytes
+  // Use the full UTF-8 bytes of the name and compute an FNV-1a 32-bit hash.
+  // This is deterministic, fast, works with Unicode and avoids collisions from
+  // simple truncation of base64 output.
   const utf8Bytes = new TextEncoder().encode(name);

-  // Convert to base64 manually
-  let binary = "";
-  for (const byte of utf8Bytes) {
-    binary += String.fromCharCode(byte);
+  // FNV-1a 32-bit hash algorithm
+  let hash = 0x811c9dc5; // FNV offset basis
+  for (let i = 0; i < utf8Bytes.length; i++) {
+    hash ^= utf8Bytes[i];
+    // Multiply by FNV prime (0x01000193) with 32-bit overflow
+    hash = (hash >>> 0) * 0x01000193;
+    // Ensure 32-bit unsigned integer
+    hash >>>= 0;
   }
-  const base64 = btoa(binary);

-  // Strip out non-alphanumeric characters and truncate
-  const cleanHash = base64.replace(/[^a-zA-Z0-9]/g, "").substring(0, 8);
+  // Convert to an 8-character lowercase hex string
+  const hashHex = (hash >>> 0).toString(16).padStart(8, "0").toLowerCase();

-  return `${prefix}_${cleanHash.toLowerCase()}`;
+  // Ensure the first character is a letter to avoid an identifier that's purely
+  // numeric (isValidId forbids all-digit IDs). If it starts with a digit,
+  // replace with 'a'. This is extremely unlikely but a simple safeguard.
+  const safeHash = /^[0-9]/.test(hashHex[0]) ? `a${hashHex.slice(1)}` : hashHex;
+
+  return `${prefix}_${safeHash}`;
 }

 /**
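A quick usage sketch of the reworked hash; the inputs are invented and the digests shown are placeholders, not computed values:

```ts
// Deterministic: the same friendly name always maps to the same id.
generateFixedHash("Front Door");   // e.g. "id_xxxxxxxx" (8 lowercase hex chars)
generateFixedHash("Front Door");   // same value again

// Unicode input is hashed over its UTF-8 bytes, so non-ASCII names work too.
generateFixedHash("Вход", "zone"); // e.g. "zone_xxxxxxxx"
```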
@@ -98,12 +98,12 @@ export default function CameraSettingsView({
     return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
       camera: cameraConfig.name,
       name,
-      friendly_name: getZoneName(name, cameraConfig.name),
+      friendly_name: cameraConfig.zones[name].friendly_name,
       objects: zoneData.objects,
       color: zoneData.color,
     }));
   }
-  }, [cameraConfig, getZoneName]);
+  }, [cameraConfig]);

   const alertsLabels = useMemo(() => {
     return cameraConfig?.review.alerts.labels
@@ -533,8 +533,14 @@ export default function CameraSettingsView({
                           }}
                         />
                       </FormControl>
-                      <FormLabel className="font-normal smart-capitalize">
-                        {zone.friendly_name}
+                      <FormLabel
+                        className={cn(
+                          "font-normal",
+                          !zone.friendly_name &&
+                            "smart-capitalize",
+                        )}
+                      >
+                        {zone.friendly_name || zone.name}
                       </FormLabel>
                     </FormItem>
                   )}
@@ -632,8 +638,14 @@ export default function CameraSettingsView({
                           }}
                         />
                       </FormControl>
-                      <FormLabel className="font-normal smart-capitalize">
-                        {zone.friendly_name}
+                      <FormLabel
+                        className={cn(
+                          "font-normal",
+                          !zone.friendly_name &&
+                            "smart-capitalize",
+                        )}
+                      >
+                        {zone.friendly_name || zone.name}
                       </FormLabel>
                     </FormItem>
                   )}