Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-06 21:44:13 +03:00)
Miscellaneous Fixes (#20897)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
* don't flatten the search result cache when updating; this would cause an infinite swr fetch if something was mutated and then fetch was called again
* Properly sort keys for recording summary in StorageMetrics
* tracked object description box tweaks
* Remove ability to right click on elements inside of face popup
* Update reprocess message
* don't show object track until video metadata is loaded
* fix blue line height calc for in progress events
* Use timeline tab by default for notifications but add a query arg for customization
* Try and improve notification opening behavior
* Reduce review item buffering behavior
* ensure logging config is passed to camera capture and tracker processes
* ensure on demand recording stops when browser closes
* improve active line progress height with resize observer
* remove icons and duplicate find similar link in explore context menu
* fix for initial broken image when creating trigger from explore
* display friendly names for triggers in toasts
* lpr and triggers docs updates
* remove icons from dropdowns in face and classification
* fix comma dangle linter issue
* re-add incorrectly removed face library button icons
* fix sidebar nav links on < 768px desktop layout
* allow text to wrap on mark as reviewed button
* match exact pixels
* clarify LPR docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Parent: 097673b845
Commit: fbf4388b37
@@ -3,18 +3,18 @@ id: license_plate_recognition
title: License Plate Recognition (LPR)
---

Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.

LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.

When a plate is recognized, the details are:

- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Review Item Details pane in Review (sub labels).
- Added as a `sub_label` (if [known](#matching)) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Details pane in Review/History.
- Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
- Published via the `frigate/events` MQTT topic as a `sub_label` ([known](#matching)) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if [known](#matching)) and `plate`.

## Model Requirements

@@ -31,6 +31,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements

License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

## Configuration

License plate recognition is disabled by default. Enable it in your config file:
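The config snippet itself falls outside this hunk; as a rough sketch (assuming the top-level `lpr` key, with all other options omitted), enabling it globally looks like:

```yaml
lpr:
  enabled: True
```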
@@ -73,8 +74,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- Default: `small`
- This can be `small` or `large`.
- The `small` model is fast and identifies groups of Latin and Chinese characters.
- The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
- For most users, the `small` model is recommended.
- The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
- If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.

### Recognition

@@ -177,7 +178,7 @@ lpr:

:::note

If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:

```yaml
cameras:
```

@@ -305,7 +306,7 @@ With this setup:
- Review items will always be classified as a `detection`.
- Snapshots will always be saved.
- Zones and object masks are **not** used.
- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a known plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a [known](#matching) plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
- License plate snapshots are saved at the highest-scoring moment and appear in Explore.
- Debug view will not show `license_plate` bounding boxes.
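The setup referenced by the list above is a dedicated LPR camera. As an illustrative sketch only, not the exact example from the docs (the `type: "lpr"` camera type and tracking `license_plate` directly are assumptions based on the dedicated LPR camera mode):

```yaml
cameras:
  dedicated_lpr_camera:
    type: "lpr" # assumption: dedicated LPR camera mode
    lpr:
      enabled: True
    objects:
      track:
        - license_plate # assumption: track plates directly without a car
```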
@@ -141,7 +141,7 @@ Triggers are best configured through the Frigate UI.
Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
5. Save the trigger to update the configuration and store the embedding in the database.

When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification.
When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification. Additionally, the UI will show the last date/time and tracked object ID that activated your trigger. The last triggered timestamp is not saved to the database or persisted through restarts of Frigate.

### Usage and Best Practices
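Triggers saved through the UI end up under the camera's `semantic_search.triggers` key, the path the UI code in this commit reads `friendly_name` from. A rough sketch of that shape follows; `friendly_name` is confirmed by this commit, while `type`, `data`, and `threshold` are assumed field names for illustration:

```yaml
cameras:
  front_door:
    semantic_search:
      triggers:
        red_car_alert:
          friendly_name: Red car in driveway
          type: description # assumption
          data: "a red car in the driveway" # assumption
          threshold: 0.7 # assumption
```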
@@ -1781,9 +1781,8 @@ def create_trigger_embedding(
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.error(e.with_traceback())
logger.error(
except Exception:
logger.exception(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)

@@ -1807,8 +1806,8 @@ def create_trigger_embedding(
status_code=200,
)

except Exception as e:
logger.error(e.with_traceback())
except Exception:
logger.exception("Error creating trigger embedding")
return JSONResponse(
content={
"success": False,
@@ -1917,9 +1916,8 @@ def update_trigger_embedding(
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.error(e.with_traceback())
logger.error(
except Exception:
logger.exception(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)

@@ -1958,9 +1956,8 @@ def update_trigger_embedding(
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.error(e.with_traceback())
logger.error(
except Exception:
logger.exception(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)

@@ -1972,8 +1969,8 @@ def update_trigger_embedding(
status_code=200,
)

except Exception as e:
logger.error(e.with_traceback())
except Exception:
logger.exception("Error updating trigger embedding")
return JSONResponse(
content={
"success": False,
@@ -2033,9 +2030,8 @@ def delete_trigger_embedding(
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.error(e.with_traceback())
logger.error(
except Exception:
logger.exception(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)

@@ -2047,8 +2043,8 @@ def delete_trigger_embedding(
status_code=200,
)

except Exception as e:
logger.error(e.with_traceback())
except Exception:
logger.exception("Error deleting trigger embedding")
return JSONResponse(
content={
"success": False,

@@ -136,6 +136,7 @@ class CameraMaintainer(threading.Thread):
self.ptz_metrics[name],
self.region_grids[name],
self.stop_event,
self.config.logger,
)
self.camera_processes[config.name] = camera_process
camera_process.start()
@@ -156,7 +157,11 @@ class CameraMaintainer(threading.Thread):
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)

capture_process = CameraCapture(
config, count, self.camera_metrics[name], self.stop_event
config,
count,
self.camera_metrics[name],
self.stop_event,
self.config.logger,
)
capture_process.daemon = True
self.capture_processes[name] = capture_process

@@ -132,17 +132,15 @@ class ReviewDescriptionProcessor(PostProcessorApi):

if image_source == ImageSourceEnum.recordings:
duration = final_data["end_time"] - final_data["start_time"]
buffer_extension = min(
10, max(2, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
)
buffer_extension = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)

# Ensure minimum total duration for short review items
# This provides better context for brief events
total_duration = duration + (2 * buffer_extension)
if total_duration < MIN_RECORDING_DURATION:
# Expand buffer to reach minimum duration, still respecting max of 10s per side
# Expand buffer to reach minimum duration, still respecting max of 5s per side
additional_buffer_per_side = (MIN_RECORDING_DURATION - duration) / 2
buffer_extension = min(10, additional_buffer_per_side)
buffer_extension = min(5, additional_buffer_per_side)

thumbs = self.get_recording_frames(
camera,

@@ -424,7 +424,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):

if not res:
return {
"message": "No face was recognized.",
"message": "Model is still training, please try again in a few moments.",
"success": False,
}

@@ -16,7 +16,7 @@ from frigate.comms.recordings_updater import (
RecordingsDataSubscriber,
RecordingsDataTypeEnum,
)
from frigate.config import CameraConfig, DetectConfig, ModelConfig
from frigate.config import CameraConfig, DetectConfig, LoggerConfig, ModelConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
@@ -539,6 +539,7 @@ class CameraCapture(FrigateProcess):
shm_frame_count: int,
camera_metrics: CameraMetrics,
stop_event: MpEvent,
log_config: LoggerConfig | None = None,
) -> None:
super().__init__(
stop_event,
@@ -549,9 +550,10 @@ class CameraCapture(FrigateProcess):
self.config = config
self.shm_frame_count = shm_frame_count
self.camera_metrics = camera_metrics
self.log_config = log_config

def run(self) -> None:
self.pre_run_setup()
self.pre_run_setup(self.log_config)
camera_watchdog = CameraWatchdog(
self.config,
self.shm_frame_count,
@@ -577,6 +579,7 @@ class CameraTracker(FrigateProcess):
ptz_metrics: PTZMetrics,
region_grid: list[list[dict[str, Any]]],
stop_event: MpEvent,
log_config: LoggerConfig | None = None,
) -> None:
super().__init__(
stop_event,
@@ -592,9 +595,10 @@ class CameraTracker(FrigateProcess):
self.camera_metrics = camera_metrics
self.ptz_metrics = ptz_metrics
self.region_grid = region_grid
self.log_config = log_config

def run(self) -> None:
self.pre_run_setup()
self.pre_run_setup(self.log_config)
frame_queue = self.camera_metrics.frame_queue
frame_shape = self.config.frame_shape
@@ -44,11 +44,16 @@ self.addEventListener("notificationclick", (event) => {
switch (event.action ?? "default") {
case "markReviewed":
if (event.notification.data) {
fetch("/api/reviews/viewed", {
method: "POST",
headers: { "Content-Type": "application/json", "X-CSRF-TOKEN": 1 },
body: JSON.stringify({ ids: [event.notification.data.id] }),
});
event.waitUntil(
fetch("/api/reviews/viewed", {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-CSRF-TOKEN": 1,
},
body: JSON.stringify({ ids: [event.notification.data.id] }),
}), // eslint-disable-line comma-dangle
);
}
break;
default:
@@ -58,7 +63,7 @@ self.addEventListener("notificationclick", (event) => {
// eslint-disable-next-line no-undef
if (clients.openWindow) {
// eslint-disable-next-line no-undef
return clients.openWindow(url);
event.waitUntil(clients.openWindow(url));
}
}
}

@@ -398,11 +398,7 @@ export function GroupedClassificationCard({
threshold={threshold}
selected={false}
i18nLibrary={i18nLibrary}
onClick={(data, meta) => {
if (meta || selectedItems.length > 0) {
onClick(data);
}
}}
onClick={() => {}}
>
{children?.(data)}
</ClassificationCard>

@@ -4,9 +4,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
import { baseUrl } from "@/api/baseUrl";
import { toast } from "sonner";
import axios from "axios";
import { LuCamera, LuDownload, LuTrash2 } from "react-icons/lu";
import { FiMoreVertical } from "react-icons/fi";
import { MdImageSearch } from "react-icons/md";
import { buttonVariants } from "@/components/ui/button";
import {
ContextMenu,
@@ -31,11 +29,8 @@ import {
AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import useSWR from "swr";

import { Trans, useTranslation } from "react-i18next";
import { BsFillLightningFill } from "react-icons/bs";
import BlurredIconButton from "../button/BlurredIconButton";
import { PiPath } from "react-icons/pi";

type SearchResultActionsProps = {
searchResult: SearchResult;
@@ -98,7 +93,6 @@ export default function SearchResultActions({
href={`${baseUrl}api/events/${searchResult.id}/clip.mp4`}
download={`${searchResult.camera}_${searchResult.label}.mp4`}
>
<LuDownload className="mr-2 size-4" />
<span>{t("itemMenu.downloadVideo.label")}</span>
</a>
</MenuItem>
@@ -110,7 +104,6 @@ export default function SearchResultActions({
href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
download={`${searchResult.camera}_${searchResult.label}.jpg`}
>
<LuCamera className="mr-2 size-4" />
<span>{t("itemMenu.downloadSnapshot.label")}</span>
</a>
</MenuItem>
@@ -120,44 +113,31 @@ export default function SearchResultActions({
aria-label={t("itemMenu.viewTrackingDetails.aria")}
onClick={showTrackingDetails}
>
<PiPath className="mr-2 size-4" />
<span>{t("itemMenu.viewTrackingDetails.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled && isContextMenu && (
<MenuItem
aria-label={t("itemMenu.findSimilar.aria")}
onClick={findSimilar}
>
<MdImageSearch className="mr-2 size-4" />
<span>{t("itemMenu.findSimilar.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.addTrigger.aria")}
onClick={addTrigger}
>
<BsFillLightningFill className="mr-2 size-4" />
<span>{t("itemMenu.addTrigger.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.findSimilar.aria")}
onClick={findSimilar}
>
<MdImageSearch className="mr-2 size-4" />
<span>{t("itemMenu.findSimilar.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.addTrigger.aria")}
onClick={addTrigger}
>
<span>{t("itemMenu.addTrigger.label")}</span>
</MenuItem>
)}
<MenuItem
aria-label={t("itemMenu.deleteTrackedObject.label")}
onClick={() => setDeleteDialogOpen(true)}
>
<LuTrash2 className="mr-2 size-4" />
<span>{t("button.delete", { ns: "common" })}</span>
</MenuItem>
</>

@@ -46,13 +46,13 @@ export default function NavItem({
onClick={onClick}
className={({ isActive }) =>
cn(
"flex flex-col items-center justify-center rounded-lg",
"flex flex-col items-center justify-center rounded-lg p-[6px]",
className,
variants[item.variant ?? "primary"][isActive ? "active" : "inactive"],
)
}
>
<Icon className="size-5 md:m-[6px]" />
<Icon className="size-5" />
</NavLink>
);

@@ -12,6 +12,7 @@ import {
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuLabel,
DropdownMenuSeparator,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";
import {
@@ -20,7 +21,6 @@ import {
TooltipTrigger,
} from "@/components/ui/tooltip";
import { isDesktop, isMobile } from "react-device-detect";
import { LuPlus, LuScanFace } from "react-icons/lu";
import { useTranslation } from "react-i18next";
import { cn } from "@/lib/utils";
import React, { ReactNode, useMemo, useState } from "react";
@@ -89,27 +89,26 @@ export default function FaceSelectionDialog({
<DropdownMenuLabel>{t("trainFaceAs")}</DropdownMenuLabel>
<div
className={cn(
"flex max-h-[40dvh] flex-col overflow-y-auto",
"flex max-h-[40dvh] flex-col overflow-y-auto overflow-x-hidden",
isMobile && "gap-2 pb-4",
)}
>
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewFace(true)}
>
<LuPlus />
{t("createFaceLibrary.new")}
</SelectorItem>
{faceNames.sort().map((faceName) => (
<SelectorItem
key={faceName}
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => onTrainAttempt(faceName)}
>
<LuScanFace />
{faceName}
</SelectorItem>
))}
<DropdownMenuSeparator />
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewFace(true)}
>
{t("createFaceLibrary.new")}
</SelectorItem>
</div>
</SelectorContent>
</Selector>

@@ -171,6 +171,18 @@ export default function ImagePicker({
alt={selectedImage?.label || "Selected image"}
className="size-16 rounded object-cover"
onLoad={() => handleImageLoad(selectedImageId || "")}
onError={(e) => {
// If trigger thumbnail fails to load, fall back to event thumbnail
if (!selectedImage) {
const target = e.target as HTMLImageElement;
if (
target.src.includes("clips/triggers") &&
selectedImageId
) {
target.src = `${apiHost}api/events/${selectedImageId}/thumbnail.webp`;
}
}
}}
loading="lazy"
/>
{selectedImageId && !loadedImages.has(selectedImageId) && (
@@ -683,6 +683,22 @@ function ObjectDetailsTab({

const mutate = useGlobalMutation();

// Helper to map over SWR cached search results while preserving
// either paginated format (SearchResult[][]) or flat format (SearchResult[])
const mapSearchResults = useCallback(
(
currentData: SearchResult[][] | SearchResult[] | undefined,
fn: (event: SearchResult) => SearchResult,
) => {
if (!currentData) return currentData;
if (Array.isArray(currentData[0])) {
return (currentData as SearchResult[][]).map((page) => page.map(fn));
}
return (currentData as SearchResult[]).map(fn);
},
[],
);

// users

const isAdmin = useIsAdmin();
@@ -810,17 +826,12 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
// optimistic update
return currentData
.flat()
.map((event) =>
event.id === search.id
? { ...event, data: { ...event.data, description: desc } }
: event,
);
},
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? { ...event, data: { ...event.data, description: desc } }
: event,
),
{
optimisticData: true,
rollbackOnError: true,
@@ -843,7 +854,7 @@ function ObjectDetailsTab({
);
setDesc(search.data.description);
});
}, [desc, search, mutate, t]);
}, [desc, search, mutate, t, mapSearchResults]);

const regenerateDescription = useCallback(
(source: "snapshot" | "thumbnails") => {
@@ -915,9 +926,8 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
return currentData.flat().map((event) =>
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? {
...event,
@@ -928,8 +938,7 @@ function ObjectDetailsTab({
},
}
: event,
);
},
),
{
optimisticData: true,
rollbackOnError: true,
@@ -963,7 +972,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t],
[search, apiHost, mutate, setSearch, t, mapSearchResults],
);

// recognized plate
@@ -992,9 +1001,8 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
return currentData.flat().map((event) =>
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? {
...event,
@@ -1005,8 +1013,7 @@ function ObjectDetailsTab({
},
}
: event,
);
},
),
{
optimisticData: true,
rollbackOnError: true,
@@ -1040,7 +1047,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t],
[search, apiHost, mutate, setSearch, t, mapSearchResults],
);

// speech transcription
@@ -1102,17 +1109,12 @@ function ObjectDetailsTab({
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(currentData: SearchResult[][] | SearchResult[] | undefined) => {
if (!currentData) return currentData;
// optimistic update
return currentData
.flat()
.map((event) =>
event.id === search.id
? { ...event, plus_id: "new_upload" }
: event,
);
},
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
? { ...event, plus_id: "new_upload" }
: event,
),
{
optimisticData: true,
rollbackOnError: true,
@@ -1120,7 +1122,7 @@ function ObjectDetailsTab({
},
);
},
[search, mutate],
[search, mutate, mapSearchResults],
);

const popoverContainerRef = useRef<HTMLDivElement | null>(null);
@@ -1503,7 +1505,7 @@ function ObjectDetailsTab({
) : (
<div className="flex flex-col gap-2">
<Textarea
className="text-md h-32"
className="text-md h-32 md:text-sm"
placeholder={t("details.description.placeholder")}
value={desc}
onChange={(e) => setDesc(e.target.value)}
@@ -1511,25 +1513,7 @@ function ObjectDetailsTab({
onBlur={handleDescriptionBlur}
autoFocus
/>
<div className="flex flex-row justify-end gap-4">
<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("button.save", { ns: "common" })}
className="text-primary/40 hover:text-primary/80"
onClick={() => {
setIsEditingDesc(false);
updateDescription();
}}
>
<FaCheck className="size-4" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.save", { ns: "common" })}
</TooltipContent>
</Tooltip>

<div className="mb-10 flex flex-row justify-end gap-5">
<Tooltip>
<TooltipTrigger asChild>
<button
@@ -1540,13 +1524,31 @@ function ObjectDetailsTab({
setDesc(originalDescRef.current ?? "");
}}
>
<FaTimes className="size-4" />
<FaTimes className="size-5" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.cancel", { ns: "common" })}
</TooltipContent>
</Tooltip>

<Tooltip>
<TooltipTrigger asChild>
<button
aria-label={t("button.save", { ns: "common" })}
className="text-primary/40 hover:text-primary/80"
onClick={() => {
setIsEditingDesc(false);
updateDescription();
}}
>
<FaCheck className="size-5" />
</button>
</TooltipTrigger>
<TooltipContent>
{t("button.save", { ns: "common" })}
</TooltipContent>
</Tooltip>
</div>
</div>
)}
@@ -1,5 +1,6 @@
import useSWR from "swr";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useResizeObserver } from "@/hooks/resize-observer";
import { Event } from "@/types/event";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { TrackingDetailsSequence } from "@/types/timeline";
@@ -89,9 +90,16 @@ export function TrackingDetails({
}, [manualOverride, currentTime, annotationOffset]);

const containerRef = useRef<HTMLDivElement | null>(null);
const timelineContainerRef = useRef<HTMLDivElement | null>(null);
const rowRefs = useRef<(HTMLDivElement | null)[]>([]);
const [_selectedZone, setSelectedZone] = useState("");
const [_lifecycleZones, setLifecycleZones] = useState<string[]>([]);
const [seekToTimestamp, setSeekToTimestamp] = useState<number | null>(null);
const [lineBottomOffsetPx, setLineBottomOffsetPx] = useState<number>(32);
const [lineTopOffsetPx, setLineTopOffsetPx] = useState<number>(8);
const [blueLineHeightPx, setBlueLineHeightPx] = useState<number>(0);

const [timelineSize] = useResizeObserver(timelineContainerRef);

const aspectRatio = useMemo(() => {
if (!config) {
@@ -221,60 +229,74 @@ export function TrackingDetails({
displaySource,
]);

const isWithinEventRange =
effectiveTime !== undefined &&
event.start_time !== undefined &&
event.end_time !== undefined &&
effectiveTime >= event.start_time &&
effectiveTime <= event.end_time;

// Calculate how far down the blue line should extend based on effectiveTime
const calculateLineHeight = useCallback(() => {
if (!eventSequence || eventSequence.length === 0 || !isWithinEventRange) {
return 0;
const isWithinEventRange = useMemo(() => {
if (effectiveTime === undefined || event.start_time === undefined) {
return false;
}

const currentTime = effectiveTime ?? 0;

// Find which events have been passed
let lastPassedIndex = -1;
for (let i = 0; i < eventSequence.length; i++) {
if (currentTime >= (eventSequence[i].timestamp ?? 0)) {
lastPassedIndex = i;
} else {
break;
// If an event has not ended yet, fall back to last timestamp in eventSequence
let eventEnd = event.end_time;
if (eventEnd == null && eventSequence && eventSequence.length > 0) {
const last = eventSequence[eventSequence.length - 1];
if (last && last.timestamp !== undefined) {
eventEnd = last.timestamp;
}
}

// No events passed yet
if (lastPassedIndex < 0) return 0;
if (eventEnd == null) {
return false;
}
return effectiveTime >= event.start_time && effectiveTime <= eventEnd;
}, [effectiveTime, event.start_time, event.end_time, eventSequence]);

// All events passed
if (lastPassedIndex >= eventSequence.length - 1) return 100;
// Dynamically compute pixel offsets so the timeline line starts at the
// first row midpoint and ends at the last row midpoint. For accuracy,
// measure the center Y of each lifecycle row and interpolate the current
// effective time into a pixel position; then set the blue line height
// so it reaches the center dot at the same time the dot becomes active.
useEffect(() => {
if (!timelineContainerRef.current || !eventSequence) return;

// Calculate percentage based on item position, not time
// Each item occupies an equal visual space regardless of time gaps
const itemPercentage = 100 / (eventSequence.length - 1);
const containerRect = timelineContainerRef.current.getBoundingClientRect();
const validRefs = rowRefs.current.filter((r) => r !== null);
if (validRefs.length === 0) return;

// Find progress between current and next event for smooth transition
const currentEvent = eventSequence[lastPassedIndex];
const nextEvent = eventSequence[lastPassedIndex + 1];
const currentTimestamp = currentEvent.timestamp ?? 0;
const nextTimestamp = nextEvent.timestamp ?? 0;
const centers = validRefs.map((n) => {
const r = n.getBoundingClientRect();
return r.top + r.height / 2 - containerRect.top;
});

// Calculate interpolation between the two events
const timeBetween = nextTimestamp - currentTimestamp;
const timeElapsed = currentTime - currentTimestamp;
const interpolation = timeBetween > 0 ? timeElapsed / timeBetween : 0;

// Base position plus interpolated progress to next item
return Math.min(
100,
lastPassedIndex * itemPercentage + interpolation * itemPercentage,
const topOffset = Math.max(0, centers[0]);
const bottomOffset = Math.max(
0,
containerRect.height - centers[centers.length - 1],
);
}, [eventSequence, effectiveTime, isWithinEventRange]);

const blueLineHeight = calculateLineHeight();
setLineTopOffsetPx(Math.round(topOffset));
setLineBottomOffsetPx(Math.round(bottomOffset));

const eff = effectiveTime ?? 0;
const timestamps = eventSequence.map((s) => s.timestamp ?? 0);

let pixelPos = centers[0];
if (eff <= timestamps[0]) {
pixelPos = centers[0];
} else if (eff >= timestamps[timestamps.length - 1]) {
pixelPos = centers[centers.length - 1];
} else {
for (let i = 0; i < timestamps.length - 1; i++) {
const t1 = timestamps[i];
const t2 = timestamps[i + 1];
if (eff >= t1 && eff <= t2) {
const ratio = t2 > t1 ? (eff - t1) / (t2 - t1) : 0;
pixelPos = centers[i] + ratio * (centers[i + 1] - centers[i]);
break;
}
}
}

const bluePx = Math.round(Math.max(0, pixelPos - topOffset));
setBlueLineHeightPx(bluePx);
}, [eventSequence, timelineSize.width, timelineSize.height, effectiveTime]);

const videoSource = useMemo(() => {
// event.start_time and event.end_time are in DETECT stream time
@@ -531,12 +553,21 @@ export function TrackingDetails({
{t("detail.noObjectDetailData", { ns: "views/events" })}
</div>
) : (
<div className="-pb-2 relative mx-0">
<div className="absolute -top-2 bottom-8 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground" />
<div
className="-pb-2 relative mx-0"
ref={timelineContainerRef}
>
<div
className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
style={{ bottom: lineBottomOffsetPx }}
/>
{isWithinEventRange && (
<div
className="absolute left-6 top-2 z-[5] max-h-[calc(100%-3rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{ height: `${blueLineHeight}%` }}
className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
style={{
top: `${lineTopOffsetPx}px`,
height: `${blueLineHeightPx}px`,
}}
/>
)}
<div className="space-y-2">
@@ -589,20 +620,26 @@ export function TrackingDetails({
: undefined;

return (
<LifecycleIconRow
<div
key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
item={item}
isActive={isActive}
formattedEventTimestamp={formattedEventTimestamp}
ratio={ratio}
areaPx={areaPx}
areaPct={areaPct}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
ref={(el) => {
rowRefs.current[idx] = el;
}}
>
<LifecycleIconRow
item={item}
isActive={isActive}
formattedEventTimestamp={formattedEventTimestamp}
ratio={ratio}
areaPx={areaPx}
areaPct={areaPct}
onClick={() => handleLifecycleClick(item)}
setSelectedZone={setSelectedZone}
getZoneColor={getZoneColor}
effectiveTime={effectiveTime}
isTimelineActive={isWithinEventRange}
/>
</div>
);
})}
</div>
@@ -318,6 +318,7 @@ export default function HlsVideoPlayer({
{isDetailMode &&
camera &&
currentTime &&
loadedMetadata &&
videoDimensions.width > 0 &&
videoDimensions.height > 0 && (
<div className="absolute z-50 size-full">

@@ -15,6 +15,7 @@ import {
ReviewSummary,
SegmentedReviewData,
} from "@/types/review";
import { TimelineType } from "@/types/timeline";
import {
getBeginningOfDayTimestamp,
getEndOfDayTimestamp,
@@ -49,6 +50,16 @@ export default function Events() {
false,
);

const [notificationTab, setNotificationTab] =
useState<TimelineType>("timeline");

useSearchEffect("tab", (tab: string) => {
if (tab === "timeline" || tab === "events" || tab === "detail") {
setNotificationTab(tab as TimelineType);
}
return true;
});

useSearchEffect("id", (reviewId: string) => {
axios
.get(`review/${reviewId}`)
@@ -66,7 +77,7 @@ export default function Events() {
camera: resp.data.camera,
startTime,
severity: resp.data.severity,
timelineType: "detail",
timelineType: notificationTab,
},
true,
);

@@ -1,4 +1,5 @@
import { ReviewSeverity } from "./review";
import { TimelineType } from "./timeline";

export type Recording = {
id: string;
@@ -37,7 +38,7 @@ export type RecordingStartingPoint = {
camera: string;
startTime: number;
severity: ReviewSeverity;
timelineType?: "timeline" | "events" | "detail";
timelineType?: TimelineType;
};

export type RecordingPlayerError = "stalled" | "startup";

@@ -16,7 +16,6 @@ import { useCallback, useEffect, useMemo, useState } from "react";
import { useTranslation } from "react-i18next";
import { FaFolderPlus } from "react-icons/fa";
import { MdModelTraining } from "react-icons/md";
import { LuPencil, LuTrash2 } from "react-icons/lu";
import { FiMoreVertical } from "react-icons/fi";
import useSWR from "swr";
import Heading from "@/components/ui/heading";
@@ -352,11 +351,9 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
onClick={(e) => e.stopPropagation()}
>
<DropdownMenuItem onClick={handleEditClick}>
<LuPencil className="mr-2 size-4" />
<span>{t("button.edit", { ns: "common" })}</span>
</DropdownMenuItem>
<DropdownMenuItem onClick={handleDeleteClick}>
<LuTrash2 className="mr-2 size-4" />
<span>{t("button.delete", { ns: "common" })}</span>
</DropdownMenuItem>
</DropdownMenuContent>

@@ -799,7 +799,7 @@ function DetectionReview({
(itemsToReview ?? 0) > 0 && (
<div className="col-span-full flex items-center justify-center">
<Button
className="text-white"
className="text-balance text-white"
aria-label={t("markTheseItemsAsReviewed")}
variant="select"
onClick={() => {

@@ -850,6 +850,29 @@ function FrigateCameraFeatures({
}
}, [activeToastId, t]);

const endEventViaBeacon = useCallback(() => {
if (!recordingEventIdRef.current) return;

const url = `${window.location.origin}/api/events/${recordingEventIdRef.current}/end`;
const payload = JSON.stringify({
end_time: Math.ceil(Date.now() / 1000),
});

// this needs to be a synchronous XMLHttpRequest to guarantee the PUT
// reaches the server before the browser kills the page
const xhr = new XMLHttpRequest();
try {
xhr.open("PUT", url, false);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.setRequestHeader("X-CSRF-TOKEN", "1");
xhr.setRequestHeader("X-CACHE-BYPASS", "1");
xhr.withCredentials = true;
xhr.send(payload);
} catch (e) {
// Silently ignore errors during unload
}
}, []);

const handleEventButtonClick = useCallback(() => {
if (isRecording) {
endEvent();
@@ -887,8 +910,19 @@ function FrigateCameraFeatures({
}, [camera.name, isRestreamed, preferredLiveMode, t]);

useEffect(() => {
// Handle page unload/close (browser close, tab close, refresh, navigation to external site)
const handleBeforeUnload = () => {
if (recordingEventIdRef.current) {
endEventViaBeacon();
}
};

window.addEventListener("beforeunload", handleBeforeUnload);

// ensure manual event is stopped when component unmounts
return () => {
window.removeEventListener("beforeunload", handleBeforeUnload);

if (recordingEventIdRef.current) {
endEvent();
}

@@ -201,12 +201,17 @@ export default function TriggerView({
.then((configResponse) => {
if (configResponse.status === 200) {
updateConfig();
const displayName =
friendly_name && friendly_name !== ""
? `${friendly_name} (${name})`
: name;

toast.success(
t(
isEdit
? "triggers.toast.success.updateTrigger"
: "triggers.toast.success.createTrigger",
{ name },
{ name: displayName },
),
{ position: "top-center" },
);
@@ -351,8 +356,19 @@ export default function TriggerView({
.then((configResponse) => {
if (configResponse.status === 200) {
updateConfig();
const friendly =
config?.cameras?.[selectedCamera]?.semantic_search
?.triggers?.[name]?.friendly_name;

const displayName =
friendly && friendly !== ""
? `${friendly} (${name})`
: name;

toast.success(
t("triggers.toast.success.deleteTrigger", { name }),
t("triggers.toast.success.deleteTrigger", {
name: displayName,
}),
{
position: "top-center",
},
@@ -381,7 +397,7 @@ export default function TriggerView({
setIsLoading(false);
});
},
[t, updateConfig, selectedCamera, setUnsavedChanges],
[t, updateConfig, selectedCamera, setUnsavedChanges, config],
);
useEffect(() => {
@@ -843,7 +859,14 @@ export default function TriggerView({
/>
<DeleteTriggerDialog
show={showDelete}
triggerName={selectedTrigger?.name ?? ""}
triggerName={
selectedTrigger
? selectedTrigger.friendly_name &&
selectedTrigger.friendly_name !== ""
? `${selectedTrigger.friendly_name} (${selectedTrigger.name})`
: selectedTrigger.name
: ""
}
isLoading={isLoading}
onCancel={() => {
setShowDelete(false);

@@ -72,8 +72,7 @@ export default function StorageMetrics({
const earliestDate = useMemo(() => {
const keys = Object.keys(recordingsSummary || {});
return keys.length
? new TZDate(keys[keys.length - 1] + "T00:00:00", timezone).getTime() /
1000
? new TZDate(keys[0] + "T00:00:00", timezone).getTime() / 1000
: null;
}, [recordingsSummary, timezone]);