Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-21 04:26:43 +03:00)

Compare commits: b7a9a6baf3 ... 774f76f75b (7 commits)
Commits in this comparison:

- 774f76f75b
- fbf4388b37
- 3620ef27db
- 5cf2ae0121
- 17d2bc240a
- 6fd7f862f5
- 5d038b5c75
@ -12,7 +12,7 @@

 A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

-Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
+Use of a GPU, Integrated GPU, or AI accelerator such as a [Hailo](https://hailo.ai/) is highly recommended. Dedicated hardware will outperform even the best CPUs with very little overhead.

 - Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
 - Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
@ -3,18 +3,18 @@ id: license_plate_recognition
 title: License Plate Recognition (LPR)
 ---

-Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
+Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.

 LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.

 When a plate is recognized, the details are:

-- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
-- Viewable in the Review Item Details pane in Review (sub labels).
+- Added as a `sub_label` (if [known](#matching)) or the `recognized_license_plate` field (if unknown) to a tracked object.
+- Viewable in the Details pane in Review/History.
 - Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
 - Filterable through the More Filters menu in Explore.
-- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
-- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
+- Published via the `frigate/events` MQTT topic as a `sub_label` ([known](#matching)) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
+- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if [known](#matching)) and `plate`.

 ## Model Requirements
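For readers wiring up the `frigate/tracked_object_update` topic documented in this hunk, a minimal Python subscriber sketch. The topic name and the `name`/`plate` fields come from the docs above; the broker address and any other payload fields are assumptions. Note the constructor below is the paho-mqtt 2.x form.

```python
# Minimal sketch: listen for LPR results on frigate/tracked_object_update.
# Assumes a local MQTT broker; payload is assumed to be JSON carrying the
# "name" (known plate) and "plate" (recognized characters) fields from the docs.
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    plate = data.get("plate")
    name = data.get("name")
    if plate or name:
        print(f"Recognized plate: {plate!r} (known name: {name!r})")


# paho-mqtt 2.x constructor; for 1.x use mqtt.Client() with no arguments
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```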
@ -31,6 +31,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
 ## Minimum System Requirements

 License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

 ## Configuration

 License plate recognition is disabled by default. Enable it in your config file:
@ -73,8 +74,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
 - Default: `small`
 - This can be `small` or `large`.
-- The `small` model is fast and identifies groups of Latin and Chinese characters.
-- The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
+- For most users, the `small` model is recommended.
+- The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
+- If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.

 ### Recognition
@ -177,7 +178,7 @@ lpr:

 :::note

-If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
+If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:

 ```yaml
 cameras:
@ -305,7 +306,7 @@ With this setup:
 - Review items will always be classified as a `detection`.
 - Snapshots will always be saved.
 - Zones and object masks are **not** used.
-- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a known plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
+- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a [known](#matching) plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
 - License plate snapshots are saved at the highest-scoring moment and appear in Explore.
 - Debug view will not show `license_plate` bounding boxes.
@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt
 | ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
 | mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
-| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
+| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |

 ### Camera Settings Recommendations
@ -127,7 +127,8 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
 ```

 - For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
-- Note that WebRTC does not support H.265.
+
+- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).

 :::tip
@ -11,7 +11,7 @@ This adds features including the ability to deep link directly into the app.

 In order to install Frigate as a PWA, the following requirements must be met:

-- Frigate must be accessed via a secure context (localhost, secure https, etc.)
+- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
 - On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
 - On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.

@ -22,3 +22,7 @@ Installation varies slightly based on the device that is being used:
 - Desktop: Use the install button typically found in right edge of the address bar
 - Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
 - iOS: Use the `Add to Homescreen` button in the share menu
+
+## Usage
+
+Once setup, the Frigate app can be used wherever it has access to Frigate. This means it can be setup as local-only, VPN-only, or fully accessible depending on your needs.
@ -141,7 +141,7 @@ Triggers are best configured through the Frigate UI.
    Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
 5. Save the trigger to update the configuration and store the embedding in the database.

-When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification.
+When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification. Additionally, the UI will show the last date/time and tracked object ID that activated your trigger. The last triggered timestamp is not saved to the database or persisted through restarts of Frigate.

 ### Usage and Best Practices
@ -36,9 +36,11 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat

 :::

-| Name | Coral Inference Speed | Coral Compatibility | Notes |
-| --- | --- | --- | --- |
-| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
+| Name | Capabilities | Notes |
+| --- | --- | --- |
+| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
+| Intel 1120p ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
+| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |

 ## Detectors
@ -129,10 +131,16 @@ In real-world deployments, even with multiple cameras running concurrently, Frig

 ### Google Coral TPU

+:::warning
+
+The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given it is still one of the most power-efficient devices for executing object detection models.
+
+:::
+
 Frigate supports both the USB and M.2 versions of the Google Coral.

 - The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
-- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
+- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.

 A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
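A quick Python sketch of the throughput arithmetic in the paragraph above:

```python
# Maximum detection throughput implied by the reported inference speed.
def max_detection_fps(inference_speed_ms: float) -> float:
    """E.g. 10 ms per inference -> 1000 / 10 = 100 detections per second."""
    return 1000 / inference_speed_ms


print(max_detection_fps(10))  # 100.0, matching the documented example
```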
@ -94,6 +94,10 @@ $ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 1.1))'

 The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.

+## Extra Steps for Specific Hardware
+
+The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.
+
 ### Raspberry Pi 3/4

 By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
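The one-liner in the hunk header generalizes to any detect resolution; a small sketch using the same constants (20 buffered YUV420 frames at 1.5 bytes per pixel, a fixed 270480-byte overhead, and a 10% margin):

```python
# Estimate the /dev/shm requirement for one camera at the detect resolution,
# mirroring the docs' python -c one-liner above.
def shm_mb(width: int, height: int, frames: int = 20) -> float:
    return ((width * height * 1.5 * frames + 270480) / 1048576) * 1.1


print(f"{shm_mb(1280, 720):.2f}MB")  # matches the documented 720p example
```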
@ -106,14 +110,107 @@ The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form

 #### Installation

-For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
-
-For other installations, follow these steps for installation:
-
-1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
-2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
-3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
-4. Run the script with `./user_installation.sh`
+:::warning
+
+The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
+
+:::
+
+1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
+
+   :::note
+
+   If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
+
+   :::
+
+   If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
+
+   ```bash
+   lsmod | grep hailo
+   ```
+
+   If it shows `hailo_pci`, unload it:
+
+   ```bash
+   sudo rmmod hailo_pci
+   ```
+
+   Now blacklist the driver to prevent it from loading on boot:
+
+   ```bash
+   echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
+   ```
+
+   Update initramfs to ensure the blacklist takes effect:
+
+   ```bash
+   sudo update-initramfs -u
+   ```
+
+   Reboot your Raspberry Pi:
+
+   ```bash
+   sudo reboot
+   ```
+
+   After rebooting, verify the built-in driver is not loaded:
+
+   ```bash
+   lsmod | grep hailo
+   ```
+
+   This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.
+
+2. **Run the installation script**:
+
+   Download the installation script:
+
+   ```bash
+   wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/hailo8l/user_installation.sh
+   ```
+
+   Make it executable:
+
+   ```bash
+   sudo chmod +x user_installation.sh
+   ```
+
+   Run the script:
+
+   ```bash
+   ./user_installation.sh
+   ```
+
+   The script will:
+
+   - Install necessary build dependencies
+   - Clone and build the Hailo driver from the official repository
+   - Install the driver
+   - Download and install the required firmware
+   - Set up udev rules
+
+3. **Reboot your system**:
+
+   After the script completes successfully, reboot to load the firmware:
+
+   ```bash
+   sudo reboot
+   ```
+
+4. **Verify the installation**:
+
+   After rebooting, verify that the Hailo device is available:
+
+   ```bash
+   ls -l /dev/hailo0
+   ```
+
+   You should see the device listed. You can also verify the driver is loaded:
+
+   ```bash
+   lsmod | grep hailo_pci
+   ```

 #### Setup
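If you prefer to script the verification in step 4, a minimal Python sketch (the device path `/dev/hailo0` and module name `hailo_pci` are the ones used in the steps above):

```python
# Check that the Hailo device node exists and the hailo_pci module is loaded.
import os


def hailo_ready() -> bool:
    device_ok = os.path.exists("/dev/hailo0")
    with open("/proc/modules") as f:
        module_ok = any(line.startswith("hailo_pci ") for line in f)
    return device_ok and module_ok


print("Hailo ready:", hailo_ready())
```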
@ -302,7 +399,7 @@ services:
     shm_size: "512mb" # update for your cameras based on calculation above
     devices:
       - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
-      - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
+      - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
       - /dev/video11:/dev/video11 # For Raspberry Pi 4B
       - /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
       - /dev/accel:/dev/accel # Intel NPU

@ -202,7 +202,7 @@ services:
     ...
     devices:
       - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
-      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
+      - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
     ...
 ```
@ -68,8 +68,7 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n

 The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on which OS and kernel are being run.

-- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
-- For some newer Linux distros (for example, Ubuntu 22.04+), https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
+- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.

 ## Attempting to load TPU as pci & Fatal Python error: Illegal instruction
@ -1781,9 +1781,8 @@ def create_trigger_embedding(
         logger.debug(
             f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
         )
-    except Exception as e:
-        logger.error(e.with_traceback())
-        logger.error(
+    except Exception:
+        logger.exception(
             f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
         )

@ -1807,8 +1806,8 @@ def create_trigger_embedding(
             status_code=200,
         )

-    except Exception as e:
-        logger.error(e.with_traceback())
+    except Exception:
+        logger.exception("Error creating trigger embedding")
         return JSONResponse(
             content={
                 "success": False,

@ -1917,9 +1916,8 @@ def update_trigger_embedding(
         logger.debug(
             f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
         )
-    except Exception as e:
-        logger.error(e.with_traceback())
-        logger.error(
+    except Exception:
+        logger.exception(
             f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
         )

@ -1958,9 +1956,8 @@ def update_trigger_embedding(
         logger.debug(
             f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
         )
-    except Exception as e:
-        logger.error(e.with_traceback())
-        logger.error(
+    except Exception:
+        logger.exception(
             f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
         )

@ -1972,8 +1969,8 @@ def update_trigger_embedding(
             status_code=200,
         )

-    except Exception as e:
-        logger.error(e.with_traceback())
+    except Exception:
+        logger.exception("Error updating trigger embedding")
         return JSONResponse(
             content={
                 "success": False,

@ -2033,9 +2030,8 @@ def delete_trigger_embedding(
         logger.debug(
             f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
         )
-    except Exception as e:
-        logger.error(e.with_traceback())
-        logger.error(
+    except Exception:
+        logger.exception(
             f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
         )

@ -2047,8 +2043,8 @@ def delete_trigger_embedding(
             status_code=200,
         )

-    except Exception as e:
-        logger.error(e.with_traceback())
+    except Exception:
+        logger.exception("Error deleting trigger embedding")
         return JSONResponse(
             content={
                 "success": False,
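These hunks fix a real bug: `BaseException.with_traceback()` requires a traceback argument, so `logger.error(e.with_traceback())` raises a `TypeError` inside the handler instead of logging anything. `logger.exception(...)` logs the message at ERROR level and appends the active traceback automatically. A minimal sketch of the pattern:

```python
# logger.exception() must be called from inside an except block; it picks up
# the in-flight exception and logs its traceback along with the message.
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

try:
    1 / 0
except Exception:
    logger.exception("Failed to divide")  # message plus full traceback
```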
@ -136,6 +136,7 @@ class CameraMaintainer(threading.Thread):
             self.ptz_metrics[name],
             self.region_grids[name],
             self.stop_event,
+            self.config.logger,
         )
         self.camera_processes[config.name] = camera_process
         camera_process.start()

@ -156,7 +157,11 @@ class CameraMaintainer(threading.Thread):
             self.frame_manager.create(f"{config.name}_frame{i}", frame_size)

         capture_process = CameraCapture(
-            config, count, self.camera_metrics[name], self.stop_event
+            config,
+            count,
+            self.camera_metrics[name],
+            self.stop_event,
+            self.config.logger,
         )
         capture_process.daemon = True
         self.capture_processes[name] = capture_process
@ -132,17 +132,15 @@ class ReviewDescriptionProcessor(PostProcessorApi):

         if image_source == ImageSourceEnum.recordings:
             duration = final_data["end_time"] - final_data["start_time"]
-            buffer_extension = min(
-                10, max(2, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
-            )
+            buffer_extension = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)

             # Ensure minimum total duration for short review items
             # This provides better context for brief events
             total_duration = duration + (2 * buffer_extension)
             if total_duration < MIN_RECORDING_DURATION:
-                # Expand buffer to reach minimum duration, still respecting max of 10s per side
+                # Expand buffer to reach minimum duration, still respecting max of 5s per side
                 additional_buffer_per_side = (MIN_RECORDING_DURATION - duration) / 2
-                buffer_extension = min(10, additional_buffer_per_side)
+                buffer_extension = min(5, additional_buffer_per_side)

             thumbs = self.get_recording_frames(
                 camera,
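A worked example of the new clamping logic, for intuition. The constant values below are stand-ins, not Frigate's real values; the actual `RECORDING_BUFFER_EXTENSION_PERCENT` and `MIN_RECORDING_DURATION` live in the Frigate source:

```python
# Sketch of the per-side buffer computed by the hunk above (5s cap per side).
RECORDING_BUFFER_EXTENSION_PERCENT = 0.1  # assumed value
MIN_RECORDING_DURATION = 10  # seconds, assumed value


def buffer_extension(duration: float) -> float:
    ext = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
    if duration + 2 * ext < MIN_RECORDING_DURATION:
        # expand to reach the minimum total duration, still capped at 5s per side
        ext = min(5, (MIN_RECORDING_DURATION - duration) / 2)
    return ext


print(buffer_extension(2))   # short event -> buffer expanded toward the minimum
print(buffer_extension(60))  # long event -> percentage-based, capped at 5s
```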
@ -424,7 +424,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):

         if not res:
             return {
-                "message": "No face was recognized.",
+                "message": "Model is still training, please try again in a few moments.",
                 "success": False,
             }
@ -16,7 +16,7 @@ from frigate.comms.recordings_updater import (
     RecordingsDataSubscriber,
     RecordingsDataTypeEnum,
 )
-from frigate.config import CameraConfig, DetectConfig, ModelConfig
+from frigate.config import CameraConfig, DetectConfig, LoggerConfig, ModelConfig
 from frigate.config.camera.camera import CameraTypeEnum
 from frigate.config.camera.updater import (
     CameraConfigUpdateEnum,

@ -539,6 +539,7 @@ class CameraCapture(FrigateProcess):
         shm_frame_count: int,
         camera_metrics: CameraMetrics,
         stop_event: MpEvent,
+        log_config: LoggerConfig | None = None,
     ) -> None:
         super().__init__(
             stop_event,

@ -549,9 +550,10 @@ class CameraCapture(FrigateProcess):
         self.config = config
         self.shm_frame_count = shm_frame_count
         self.camera_metrics = camera_metrics
+        self.log_config = log_config

     def run(self) -> None:
-        self.pre_run_setup()
+        self.pre_run_setup(self.log_config)
         camera_watchdog = CameraWatchdog(
             self.config,
             self.shm_frame_count,

@ -577,6 +579,7 @@ class CameraTracker(FrigateProcess):
         ptz_metrics: PTZMetrics,
         region_grid: list[list[dict[str, Any]]],
         stop_event: MpEvent,
+        log_config: LoggerConfig | None = None,
     ) -> None:
         super().__init__(
             stop_event,

@ -592,9 +595,10 @@ class CameraTracker(FrigateProcess):
         self.camera_metrics = camera_metrics
         self.ptz_metrics = ptz_metrics
         self.region_grid = region_grid
+        self.log_config = log_config

     def run(self) -> None:
-        self.pre_run_setup()
+        self.pre_run_setup(self.log_config)
         frame_queue = self.camera_metrics.frame_queue
         frame_shape = self.config.frame_shape
@ -44,11 +44,16 @@ self.addEventListener("notificationclick", (event) => {
     switch (event.action ?? "default") {
       case "markReviewed":
         if (event.notification.data) {
-          fetch("/api/reviews/viewed", {
-            method: "POST",
-            headers: { "Content-Type": "application/json", "X-CSRF-TOKEN": 1 },
-            body: JSON.stringify({ ids: [event.notification.data.id] }),
-          });
+          event.waitUntil(
+            fetch("/api/reviews/viewed", {
+              method: "POST",
+              headers: {
+                "Content-Type": "application/json",
+                "X-CSRF-TOKEN": 1,
+              },
+              body: JSON.stringify({ ids: [event.notification.data.id] }),
+            }), // eslint-disable-line comma-dangle
+          );
         }
         break;
       default:

@ -58,7 +63,7 @@ self.addEventListener("notificationclick", (event) => {
       // eslint-disable-next-line no-undef
       if (clients.openWindow) {
         // eslint-disable-next-line no-undef
-        return clients.openWindow(url);
+        event.waitUntil(clients.openWindow(url));
       }
     }
   }
@ -398,11 +398,7 @@ export function GroupedClassificationCard({
             threshold={threshold}
             selected={false}
             i18nLibrary={i18nLibrary}
-            onClick={(data, meta) => {
-              if (meta || selectedItems.length > 0) {
-                onClick(data);
-              }
-            }}
+            onClick={() => {}}
           >
             {children?.(data)}
           </ClassificationCard>
@ -4,9 +4,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
 import { baseUrl } from "@/api/baseUrl";
 import { toast } from "sonner";
 import axios from "axios";
 import { LuCamera, LuDownload, LuTrash2 } from "react-icons/lu";
 import { FiMoreVertical } from "react-icons/fi";
 import { MdImageSearch } from "react-icons/md";
 import { buttonVariants } from "@/components/ui/button";
 import {
   ContextMenu,

@ -31,11 +29,8 @@ import {
   AlertDialogTitle,
 } from "@/components/ui/alert-dialog";
 import useSWR from "swr";

 import { Trans, useTranslation } from "react-i18next";
-import { BsFillLightningFill } from "react-icons/bs";
 import BlurredIconButton from "../button/BlurredIconButton";
 import { PiPath } from "react-icons/pi";

 type SearchResultActionsProps = {
   searchResult: SearchResult;

@ -98,7 +93,6 @@ export default function SearchResultActions({
           href={`${baseUrl}api/events/${searchResult.id}/clip.mp4`}
           download={`${searchResult.camera}_${searchResult.label}.mp4`}
         >
           <LuDownload className="mr-2 size-4" />
           <span>{t("itemMenu.downloadVideo.label")}</span>
         </a>
       </MenuItem>

@ -110,7 +104,6 @@ export default function SearchResultActions({
           href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
           download={`${searchResult.camera}_${searchResult.label}.jpg`}
         >
           <LuCamera className="mr-2 size-4" />
           <span>{t("itemMenu.downloadSnapshot.label")}</span>
         </a>
       </MenuItem>

@ -120,44 +113,31 @@ export default function SearchResultActions({
           aria-label={t("itemMenu.viewTrackingDetails.aria")}
           onClick={showTrackingDetails}
         >
           <PiPath className="mr-2 size-4" />
           <span>{t("itemMenu.viewTrackingDetails.label")}</span>
         </MenuItem>
       )}
-      {config?.semantic_search?.enabled && isContextMenu && (
-        <MenuItem
-          aria-label={t("itemMenu.findSimilar.aria")}
-          onClick={findSimilar}
-        >
-          <MdImageSearch className="mr-2 size-4" />
-          <span>{t("itemMenu.findSimilar.label")}</span>
-        </MenuItem>
-      )}
-      {config?.semantic_search?.enabled &&
-        searchResult.data.type == "object" && (
-          <MenuItem
-            aria-label={t("itemMenu.addTrigger.aria")}
-            onClick={addTrigger}
-          >
-            <BsFillLightningFill className="mr-2 size-4" />
-            <span>{t("itemMenu.addTrigger.label")}</span>
-          </MenuItem>
-        )}
+      {config?.semantic_search?.enabled &&
+        searchResult.data.type == "object" && (
+          <MenuItem
+            aria-label={t("itemMenu.findSimilar.aria")}
+            onClick={findSimilar}
+          >
+            <MdImageSearch className="mr-2 size-4" />
+            <span>{t("itemMenu.findSimilar.label")}</span>
+          </MenuItem>
+        )}
+      {config?.semantic_search?.enabled &&
+        searchResult.data.type == "object" && (
+          <MenuItem
+            aria-label={t("itemMenu.addTrigger.aria")}
+            onClick={addTrigger}
+          >
+            <span>{t("itemMenu.addTrigger.label")}</span>
+          </MenuItem>
+        )}
       <MenuItem
         aria-label={t("itemMenu.deleteTrackedObject.label")}
         onClick={() => setDeleteDialogOpen(true)}
       >
         <LuTrash2 className="mr-2 size-4" />
         <span>{t("button.delete", { ns: "common" })}</span>
       </MenuItem>
     </>
@ -46,13 +46,13 @@ export default function NavItem({
       onClick={onClick}
       className={({ isActive }) =>
         cn(
-          "flex flex-col items-center justify-center rounded-lg",
+          "flex flex-col items-center justify-center rounded-lg p-[6px]",
           className,
           variants[item.variant ?? "primary"][isActive ? "active" : "inactive"],
         )
       }
     >
-      <Icon className="size-5 md:m-[6px]" />
+      <Icon className="size-5" />
     </NavLink>
   );
@ -12,6 +12,7 @@ import {
   DropdownMenuContent,
   DropdownMenuItem,
   DropdownMenuLabel,
+  DropdownMenuSeparator,
   DropdownMenuTrigger,
 } from "@/components/ui/dropdown-menu";
 import {

@ -20,7 +21,6 @@ import {
   TooltipTrigger,
 } from "@/components/ui/tooltip";
 import { isDesktop, isMobile } from "react-device-detect";
 import { LuPlus, LuScanFace } from "react-icons/lu";
 import { useTranslation } from "react-i18next";
 import { cn } from "@/lib/utils";
 import React, { ReactNode, useMemo, useState } from "react";

@ -89,27 +89,26 @@ export default function FaceSelectionDialog({
         <DropdownMenuLabel>{t("trainFaceAs")}</DropdownMenuLabel>
         <div
           className={cn(
-            "flex max-h-[40dvh] flex-col overflow-y-auto",
+            "flex max-h-[40dvh] flex-col overflow-y-auto overflow-x-hidden",
             isMobile && "gap-2 pb-4",
           )}
         >
-          <SelectorItem
-            className="flex cursor-pointer gap-2 smart-capitalize"
-            onClick={() => setNewFace(true)}
-          >
-            <LuPlus />
-            {t("createFaceLibrary.new")}
-          </SelectorItem>
           {faceNames.sort().map((faceName) => (
             <SelectorItem
               key={faceName}
               className="flex cursor-pointer gap-2 smart-capitalize"
               onClick={() => onTrainAttempt(faceName)}
             >
               <LuScanFace />
               {faceName}
             </SelectorItem>
           ))}
+          <DropdownMenuSeparator />
+          <SelectorItem
+            className="flex cursor-pointer gap-2 smart-capitalize"
+            onClick={() => setNewFace(true)}
+          >
+            {t("createFaceLibrary.new")}
+          </SelectorItem>
         </div>
       </SelectorContent>
     </Selector>
@ -171,6 +171,18 @@ export default function ImagePicker({
             alt={selectedImage?.label || "Selected image"}
             className="size-16 rounded object-cover"
             onLoad={() => handleImageLoad(selectedImageId || "")}
+            onError={(e) => {
+              // If trigger thumbnail fails to load, fall back to event thumbnail
+              if (!selectedImage) {
+                const target = e.target as HTMLImageElement;
+                if (
+                  target.src.includes("clips/triggers") &&
+                  selectedImageId
+                ) {
+                  target.src = `${apiHost}api/events/${selectedImageId}/thumbnail.webp`;
+                }
+              }
+            }}
             loading="lazy"
           />
           {selectedImageId && !loadedImages.has(selectedImageId) && (
@ -683,6 +683,22 @@ function ObjectDetailsTab({

   const mutate = useGlobalMutation();

+  // Helper to map over SWR cached search results while preserving
+  // either paginated format (SearchResult[][]) or flat format (SearchResult[])
+  const mapSearchResults = useCallback(
+    (
+      currentData: SearchResult[][] | SearchResult[] | undefined,
+      fn: (event: SearchResult) => SearchResult,
+    ) => {
+      if (!currentData) return currentData;
+      if (Array.isArray(currentData[0])) {
+        return (currentData as SearchResult[][]).map((page) => page.map(fn));
+      }
+      return (currentData as SearchResult[]).map(fn);
+    },
+    [],
+  );
+
   // users

   const isAdmin = useIsAdmin();
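The point of this helper, which the hunks below adopt: the previous optimistic updates flattened paginated SWR cache data with `.flat().map(...)`, which appears to replace a pages-of-results cache entry with a flat list, breaking the cached shape. A shape-preserving map keeps pages intact; the same idea in a Python sketch:

```python
# Shape-preserving map: apply fn to every event whether the cache holds a flat
# list of events or a list of pages (list of lists), mirroring mapSearchResults.
def map_search_results(current, fn):
    if current is None:
        return None
    if current and isinstance(current[0], list):
        return [[fn(e) for e in page] for page in current]
    return [fn(e) for e in current]


double = lambda e: e * 2
print(map_search_results([1, 2, 3], double))    # [2, 4, 6]
print(map_search_results([[1, 2], [3]], double))  # [[2, 4], [6]]
```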
@ -810,17 +826,12 @@ function ObjectDetailsTab({
         (key.includes("events") ||
           key.includes("events/search") ||
           key.includes("events/explore")),
-      (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-        if (!currentData) return currentData;
-        // optimistic update
-        return currentData
-          .flat()
-          .map((event) =>
-            event.id === search.id
-              ? { ...event, data: { ...event.data, description: desc } }
-              : event,
-          );
-      },
+      (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+        mapSearchResults(currentData, (event) =>
+          event.id === search.id
+            ? { ...event, data: { ...event.data, description: desc } }
+            : event,
+        ),
       {
         optimisticData: true,
         rollbackOnError: true,

@ -843,7 +854,7 @@ function ObjectDetailsTab({
       );
       setDesc(search.data.description);
     });
-  }, [desc, search, mutate, t]);
+  }, [desc, search, mutate, t, mapSearchResults]);

   const regenerateDescription = useCallback(
     (source: "snapshot" | "thumbnails") => {

@ -915,9 +926,8 @@ function ObjectDetailsTab({
         (key.includes("events") ||
           key.includes("events/search") ||
           key.includes("events/explore")),
-      (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-        if (!currentData) return currentData;
-        return currentData.flat().map((event) =>
+      (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+        mapSearchResults(currentData, (event) =>
           event.id === search.id
             ? {
                 ...event,

@ -928,8 +938,7 @@ function ObjectDetailsTab({
               },
             }
           : event,
-        );
-      },
+        ),
       {
         optimisticData: true,
         rollbackOnError: true,

@ -963,7 +972,7 @@ function ObjectDetailsTab({
       );
     });
     },
-    [search, apiHost, mutate, setSearch, t],
+    [search, apiHost, mutate, setSearch, t, mapSearchResults],
   );

   // recognized plate

@ -992,9 +1001,8 @@ function ObjectDetailsTab({
         (key.includes("events") ||
           key.includes("events/search") ||
           key.includes("events/explore")),
-      (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-        if (!currentData) return currentData;
-        return currentData.flat().map((event) =>
+      (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+        mapSearchResults(currentData, (event) =>
           event.id === search.id
             ? {
                 ...event,

@ -1005,8 +1013,7 @@ function ObjectDetailsTab({
               },
             }
           : event,
-        );
-      },
+        ),
       {
         optimisticData: true,
         rollbackOnError: true,

@ -1040,7 +1047,7 @@ function ObjectDetailsTab({
       );
     });
     },
-    [search, apiHost, mutate, setSearch, t],
+    [search, apiHost, mutate, setSearch, t, mapSearchResults],
   );

   // speech transcription

@ -1102,17 +1109,12 @@ function ObjectDetailsTab({
         (key.includes("events") ||
           key.includes("events/search") ||
           key.includes("events/explore")),
-      (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-        if (!currentData) return currentData;
-        // optimistic update
-        return currentData
-          .flat()
-          .map((event) =>
-            event.id === search.id
-              ? { ...event, plus_id: "new_upload" }
-              : event,
-          );
-      },
+      (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+        mapSearchResults(currentData, (event) =>
+          event.id === search.id
+            ? { ...event, plus_id: "new_upload" }
+            : event,
+        ),
       {
         optimisticData: true,
         rollbackOnError: true,

@ -1120,7 +1122,7 @@ function ObjectDetailsTab({
       },
     );
     },
-    [search, mutate],
+    [search, mutate, mapSearchResults],
   );

   const popoverContainerRef = useRef<HTMLDivElement | null>(null);
@ -1503,7 +1505,7 @@ function ObjectDetailsTab({
       ) : (
         <div className="flex flex-col gap-2">
           <Textarea
-            className="text-md h-32"
+            className="text-md h-32 md:text-sm"
             placeholder={t("details.description.placeholder")}
             value={desc}
             onChange={(e) => setDesc(e.target.value)}

@ -1511,25 +1513,7 @@ function ObjectDetailsTab({
             onBlur={handleDescriptionBlur}
             autoFocus
           />
-          <div className="flex flex-row justify-end gap-4">
-            <Tooltip>
-              <TooltipTrigger asChild>
-                <button
-                  aria-label={t("button.save", { ns: "common" })}
-                  className="text-primary/40 hover:text-primary/80"
-                  onClick={() => {
-                    setIsEditingDesc(false);
-                    updateDescription();
-                  }}
-                >
-                  <FaCheck className="size-4" />
-                </button>
-              </TooltipTrigger>
-              <TooltipContent>
-                {t("button.save", { ns: "common" })}
-              </TooltipContent>
-            </Tooltip>
-
+          <div className="mb-10 flex flex-row justify-end gap-5">
             <Tooltip>
               <TooltipTrigger asChild>
                 <button

@ -1540,13 +1524,31 @@ function ObjectDetailsTab({
                     setDesc(originalDescRef.current ?? "");
                   }}
                 >
-                  <FaTimes className="size-4" />
+                  <FaTimes className="size-5" />
                 </button>
               </TooltipTrigger>
               <TooltipContent>
                 {t("button.cancel", { ns: "common" })}
               </TooltipContent>
             </Tooltip>
+
+            <Tooltip>
+              <TooltipTrigger asChild>
+                <button
+                  aria-label={t("button.save", { ns: "common" })}
+                  className="text-primary/40 hover:text-primary/80"
+                  onClick={() => {
+                    setIsEditingDesc(false);
+                    updateDescription();
+                  }}
+                >
+                  <FaCheck className="size-5" />
+                </button>
+              </TooltipTrigger>
+              <TooltipContent>
+                {t("button.save", { ns: "common" })}
+              </TooltipContent>
+            </Tooltip>
           </div>
         </div>
       )}
@ -1,5 +1,6 @@
|
||||
import useSWR from "swr";
|
||||
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
|
||||
import { useResizeObserver } from "@/hooks/resize-observer";
|
||||
import { Event } from "@/types/event";
|
||||
import ActivityIndicator from "@/components/indicators/activity-indicator";
|
||||
import { TrackingDetailsSequence } from "@/types/timeline";
|
||||
@ -89,9 +90,16 @@ export function TrackingDetails({
|
||||
}, [manualOverride, currentTime, annotationOffset]);
|
||||
|
||||
const containerRef = useRef<HTMLDivElement | null>(null);
|
||||
const timelineContainerRef = useRef<HTMLDivElement | null>(null);
|
||||
const rowRefs = useRef<(HTMLDivElement | null)[]>([]);
|
||||
const [_selectedZone, setSelectedZone] = useState("");
|
||||
const [_lifecycleZones, setLifecycleZones] = useState<string[]>([]);
|
||||
const [seekToTimestamp, setSeekToTimestamp] = useState<number | null>(null);
|
||||
const [lineBottomOffsetPx, setLineBottomOffsetPx] = useState<number>(32);
|
||||
const [lineTopOffsetPx, setLineTopOffsetPx] = useState<number>(8);
|
||||
const [blueLineHeightPx, setBlueLineHeightPx] = useState<number>(0);
|
||||
|
||||
const [timelineSize] = useResizeObserver(timelineContainerRef);
|
||||
|
||||
const aspectRatio = useMemo(() => {
|
||||
if (!config) {
|
||||
@ -221,60 +229,74 @@ export function TrackingDetails({
|
||||
displaySource,
|
||||
]);
|
||||
|
||||
const isWithinEventRange =
|
||||
effectiveTime !== undefined &&
|
||||
event.start_time !== undefined &&
|
||||
event.end_time !== undefined &&
|
||||
effectiveTime >= event.start_time &&
|
||||
effectiveTime <= event.end_time;
|
||||
|
||||
// Calculate how far down the blue line should extend based on effectiveTime
|
||||
const calculateLineHeight = useCallback(() => {
|
||||
if (!eventSequence || eventSequence.length === 0 || !isWithinEventRange) {
|
||||
return 0;
|
||||
const isWithinEventRange = useMemo(() => {
|
||||
if (effectiveTime === undefined || event.start_time === undefined) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const currentTime = effectiveTime ?? 0;
|
||||
|
||||
// Find which events have been passed
|
||||
let lastPassedIndex = -1;
|
||||
for (let i = 0; i < eventSequence.length; i++) {
|
||||
if (currentTime >= (eventSequence[i].timestamp ?? 0)) {
|
||||
lastPassedIndex = i;
|
||||
} else {
|
||||
break;
|
||||
// If an event has not ended yet, fall back to last timestamp in eventSequence
|
||||
let eventEnd = event.end_time;
|
||||
if (eventEnd == null && eventSequence && eventSequence.length > 0) {
|
||||
const last = eventSequence[eventSequence.length - 1];
|
||||
if (last && last.timestamp !== undefined) {
|
||||
eventEnd = last.timestamp;
|
||||
}
|
||||
}
|
||||
|
||||
// No events passed yet
|
||||
if (lastPassedIndex < 0) return 0;
|
||||
if (eventEnd == null) {
|
||||
return false;
|
||||
}
|
||||
return effectiveTime >= event.start_time && effectiveTime <= eventEnd;
|
||||
}, [effectiveTime, event.start_time, event.end_time, eventSequence]);
|
||||
|
||||
// All events passed
|
||||
if (lastPassedIndex >= eventSequence.length - 1) return 100;
|
||||
// Dynamically compute pixel offsets so the timeline line starts at the
|
||||
// first row midpoint and ends at the last row midpoint. For accuracy,
|
||||
// measure the center Y of each lifecycle row and interpolate the current
|
||||
// effective time into a pixel position; then set the blue line height
|
||||
// so it reaches the center dot at the same time the dot becomes active.
|
||||
useEffect(() => {
|
||||
if (!timelineContainerRef.current || !eventSequence) return;
|
||||
|
||||
// Calculate percentage based on item position, not time
|
||||
// Each item occupies an equal visual space regardless of time gaps
|
||||
const itemPercentage = 100 / (eventSequence.length - 1);
|
||||
const containerRect = timelineContainerRef.current.getBoundingClientRect();
|
||||
const validRefs = rowRefs.current.filter((r) => r !== null);
|
||||
if (validRefs.length === 0) return;
|
||||
|
||||
// Find progress between current and next event for smooth transition
|
||||
const currentEvent = eventSequence[lastPassedIndex];
|
||||
const nextEvent = eventSequence[lastPassedIndex + 1];
|
||||
const currentTimestamp = currentEvent.timestamp ?? 0;
|
||||
const nextTimestamp = nextEvent.timestamp ?? 0;
|
||||
const centers = validRefs.map((n) => {
|
||||
const r = n.getBoundingClientRect();
|
||||
return r.top + r.height / 2 - containerRect.top;
|
||||
});
|
||||
|
||||
// Calculate interpolation between the two events
|
||||
const timeBetween = nextTimestamp - currentTimestamp;
|
||||
const timeElapsed = currentTime - currentTimestamp;
|
||||
const interpolation = timeBetween > 0 ? timeElapsed / timeBetween : 0;
|
||||
|
||||
// Base position plus interpolated progress to next item
|
||||
return Math.min(
|
||||
100,
|
||||
lastPassedIndex * itemPercentage + interpolation * itemPercentage,
|
||||
const topOffset = Math.max(0, centers[0]);
|
||||
const bottomOffset = Math.max(
|
||||
0,
|
||||
containerRect.height - centers[centers.length - 1],
|
||||
);
|
||||
}, [eventSequence, effectiveTime, isWithinEventRange]);
|
||||
|
||||
const blueLineHeight = calculateLineHeight();
|
||||
setLineTopOffsetPx(Math.round(topOffset));
|
||||
setLineBottomOffsetPx(Math.round(bottomOffset));
|
||||
|
||||
const eff = effectiveTime ?? 0;
|
||||
const timestamps = eventSequence.map((s) => s.timestamp ?? 0);
|
||||
|
||||
let pixelPos = centers[0];
|
||||
if (eff <= timestamps[0]) {
|
||||
pixelPos = centers[0];
|
||||
} else if (eff >= timestamps[timestamps.length - 1]) {
|
||||
pixelPos = centers[centers.length - 1];
|
||||
} else {
|
||||
for (let i = 0; i < timestamps.length - 1; i++) {
|
||||
const t1 = timestamps[i];
|
||||
const t2 = timestamps[i + 1];
|
||||
if (eff >= t1 && eff <= t2) {
|
||||
const ratio = t2 > t1 ? (eff - t1) / (t2 - t1) : 0;
|
||||
pixelPos = centers[i] + ratio * (centers[i + 1] - centers[i]);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const bluePx = Math.round(Math.max(0, pixelPos - topOffset));
|
||||
setBlueLineHeightPx(bluePx);
|
||||
}, [eventSequence, timelineSize.width, timelineSize.height, effectiveTime]);
|
||||
|
||||
const videoSource = useMemo(() => {
|
||||
// event.start_time and event.end_time are in DETECT stream time
|
||||
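The replacement maps the playhead time onto measured row midpoints instead of equal percentage slices. A Python sketch of the same interpolation, assuming precomputed row-center pixel positions:

```python
# Interpolate the current time into a pixel height along measured row centers.
# timestamps and centers are parallel, ascending lists (one entry per row).
def blue_line_height(eff: float, timestamps: list[float], centers: list[float]) -> float:
    if eff <= timestamps[0]:
        pos = centers[0]
    elif eff >= timestamps[-1]:
        pos = centers[-1]
    else:
        pos = centers[0]
        for i in range(len(timestamps) - 1):
            t1, t2 = timestamps[i], timestamps[i + 1]
            if t1 <= eff <= t2:
                ratio = (eff - t1) / (t2 - t1) if t2 > t1 else 0
                pos = centers[i] + ratio * (centers[i + 1] - centers[i])
                break
    return max(0.0, pos - centers[0])


print(blue_line_height(15, [10, 20, 30], [8, 58, 108]))  # 25.0 px
```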
@ -531,12 +553,21 @@ export function TrackingDetails({
           {t("detail.noObjectDetailData", { ns: "views/events" })}
         </div>
       ) : (
-        <div className="-pb-2 relative mx-0">
-          <div className="absolute -top-2 bottom-8 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground" />
+        <div
+          className="-pb-2 relative mx-0"
+          ref={timelineContainerRef}
+        >
+          <div
+            className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
+            style={{ bottom: lineBottomOffsetPx }}
+          />
           {isWithinEventRange && (
             <div
-              className="absolute left-6 top-2 z-[5] max-h-[calc(100%-3rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
-              style={{ height: `${blueLineHeight}%` }}
+              className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
+              style={{
+                top: `${lineTopOffsetPx}px`,
+                height: `${blueLineHeightPx}px`,
+              }}
             />
           )}
           <div className="space-y-2">
@ -589,20 +620,26 @@ export function TrackingDetails({
             : undefined;

           return (
-            <LifecycleIconRow
+            <div
               key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
-              item={item}
-              isActive={isActive}
-              formattedEventTimestamp={formattedEventTimestamp}
-              ratio={ratio}
-              areaPx={areaPx}
-              areaPct={areaPct}
-              onClick={() => handleLifecycleClick(item)}
-              setSelectedZone={setSelectedZone}
-              getZoneColor={getZoneColor}
-              effectiveTime={effectiveTime}
-              isTimelineActive={isWithinEventRange}
-            />
+              ref={(el) => {
+                rowRefs.current[idx] = el;
+              }}
+            >
+              <LifecycleIconRow
+                item={item}
+                isActive={isActive}
+                formattedEventTimestamp={formattedEventTimestamp}
+                ratio={ratio}
+                areaPx={areaPx}
+                areaPct={areaPct}
+                onClick={() => handleLifecycleClick(item)}
+                setSelectedZone={setSelectedZone}
+                getZoneColor={getZoneColor}
+                effectiveTime={effectiveTime}
+                isTimelineActive={isWithinEventRange}
+              />
+            </div>
           );
         })}
       </div>
@ -318,6 +318,7 @@ export default function HlsVideoPlayer({
       {isDetailMode &&
         camera &&
         currentTime &&
+        loadedMetadata &&
         videoDimensions.width > 0 &&
         videoDimensions.height > 0 && (
           <div className="absolute z-50 size-full">
@ -15,6 +15,7 @@ import {
   ReviewSummary,
   SegmentedReviewData,
 } from "@/types/review";
+import { TimelineType } from "@/types/timeline";
 import {
   getBeginningOfDayTimestamp,
   getEndOfDayTimestamp,

@ -49,6 +50,16 @@ export default function Events() {
     false,
   );

+  const [notificationTab, setNotificationTab] =
+    useState<TimelineType>("timeline");
+
+  useSearchEffect("tab", (tab: string) => {
+    if (tab === "timeline" || tab === "events" || tab === "detail") {
+      setNotificationTab(tab as TimelineType);
+    }
+    return true;
+  });
+
   useSearchEffect("id", (reviewId: string) => {
     axios
       .get(`review/${reviewId}`)

@ -66,7 +77,7 @@ export default function Events() {
           camera: resp.data.camera,
           startTime,
           severity: resp.data.severity,
-          timelineType: "detail",
+          timelineType: notificationTab,
         },
         true,
       );
@ -1,4 +1,5 @@
 import { ReviewSeverity } from "./review";
+import { TimelineType } from "./timeline";

 export type Recording = {
   id: string;

@ -37,7 +38,7 @@ export type RecordingStartingPoint = {
   camera: string;
   startTime: number;
   severity: ReviewSeverity;
-  timelineType?: "timeline" | "events" | "detail";
+  timelineType?: TimelineType;
 };

 export type RecordingPlayerError = "stalled" | "startup";
@ -16,7 +16,6 @@ import { useCallback, useEffect, useMemo, useState } from "react";
 import { useTranslation } from "react-i18next";
 import { FaFolderPlus } from "react-icons/fa";
 import { MdModelTraining } from "react-icons/md";
-import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { FiMoreVertical } from "react-icons/fi";
 import useSWR from "swr";
 import Heading from "@/components/ui/heading";

@ -352,11 +351,9 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
             onClick={(e) => e.stopPropagation()}
           >
             <DropdownMenuItem onClick={handleEditClick}>
-              <LuPencil className="mr-2 size-4" />
               <span>{t("button.edit", { ns: "common" })}</span>
             </DropdownMenuItem>
             <DropdownMenuItem onClick={handleDeleteClick}>
-              <LuTrash2 className="mr-2 size-4" />
               <span>{t("button.delete", { ns: "common" })}</span>
             </DropdownMenuItem>
           </DropdownMenuContent>
@ -799,7 +799,7 @@ function DetectionReview({
       (itemsToReview ?? 0) > 0 && (
         <div className="col-span-full flex items-center justify-center">
           <Button
-            className="text-white"
+            className="text-balance text-white"
             aria-label={t("markTheseItemsAsReviewed")}
             variant="select"
             onClick={() => {
@ -850,6 +850,29 @@ function FrigateCameraFeatures({
     }
   }, [activeToastId, t]);

+  const endEventViaBeacon = useCallback(() => {
+    if (!recordingEventIdRef.current) return;
+
+    const url = `${window.location.origin}/api/events/${recordingEventIdRef.current}/end`;
+    const payload = JSON.stringify({
+      end_time: Math.ceil(Date.now() / 1000),
+    });
+
+    // this needs to be a synchronous XMLHttpRequest to guarantee the PUT
+    // reaches the server before the browser kills the page
+    const xhr = new XMLHttpRequest();
+    try {
+      xhr.open("PUT", url, false);
+      xhr.setRequestHeader("Content-Type", "application/json");
+      xhr.setRequestHeader("X-CSRF-TOKEN", "1");
+      xhr.setRequestHeader("X-CACHE-BYPASS", "1");
+      xhr.withCredentials = true;
+      xhr.send(payload);
+    } catch (e) {
+      // Silently ignore errors during unload
+    }
+  }, []);
+
   const handleEventButtonClick = useCallback(() => {
     if (isRecording) {
       endEvent();

@ -887,8 +910,19 @@ function FrigateCameraFeatures({
   }, [camera.name, isRestreamed, preferredLiveMode, t]);

   useEffect(() => {
+    // Handle page unload/close (browser close, tab close, refresh, navigation to external site)
+    const handleBeforeUnload = () => {
+      if (recordingEventIdRef.current) {
+        endEventViaBeacon();
+      }
+    };
+
+    window.addEventListener("beforeunload", handleBeforeUnload);
+
     // ensure manual event is stopped when component unmounts
     return () => {
+      window.removeEventListener("beforeunload", handleBeforeUnload);
+
       if (recordingEventIdRef.current) {
         endEvent();
       }
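The unload handler above issues `PUT /api/events/<id>/end` with an `end_time` payload. For reference, the same call from a script; the endpoint and payload come from the diff, while the host, port, and event id below are assumptions:

```python
# End a manual Frigate event via the HTTP API, mirroring the beacon above.
import time

import requests

event_id = "1700000000.123456-abcdef"  # hypothetical manual event id
resp = requests.put(
    f"http://localhost:5000/api/events/{event_id}/end",  # assumed local instance
    json={"end_time": int(time.time())},
    timeout=5,
)
print(resp.status_code, resp.text)
```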
@ -201,12 +201,17 @@ export default function TriggerView({
         .then((configResponse) => {
           if (configResponse.status === 200) {
             updateConfig();
+            const displayName =
+              friendly_name && friendly_name !== ""
+                ? `${friendly_name} (${name})`
+                : name;
+
             toast.success(
               t(
                 isEdit
                   ? "triggers.toast.success.updateTrigger"
                   : "triggers.toast.success.createTrigger",
-                { name },
+                { name: displayName },
               ),
               { position: "top-center" },
             );

@ -351,8 +356,19 @@ export default function TriggerView({
         .then((configResponse) => {
           if (configResponse.status === 200) {
             updateConfig();
+            const friendly =
+              config?.cameras?.[selectedCamera]?.semantic_search
+                ?.triggers?.[name]?.friendly_name;
+
+            const displayName =
+              friendly && friendly !== ""
+                ? `${friendly} (${name})`
+                : name;
+
             toast.success(
-              t("triggers.toast.success.deleteTrigger", { name }),
+              t("triggers.toast.success.deleteTrigger", {
+                name: displayName,
+              }),
               {
                 position: "top-center",
               },

@ -381,7 +397,7 @@ export default function TriggerView({
           setIsLoading(false);
         });
     },
-    [t, updateConfig, selectedCamera, setUnsavedChanges],
+    [t, updateConfig, selectedCamera, setUnsavedChanges, config],
   );

   useEffect(() => {

@ -843,7 +859,14 @@ export default function TriggerView({
         />
         <DeleteTriggerDialog
           show={showDelete}
-          triggerName={selectedTrigger?.name ?? ""}
+          triggerName={
+            selectedTrigger
+              ? selectedTrigger.friendly_name &&
+                selectedTrigger.friendly_name !== ""
+                ? `${selectedTrigger.friendly_name} (${selectedTrigger.name})`
+                : selectedTrigger.name
+              : ""
+          }
           isLoading={isLoading}
           onCancel={() => {
             setShowDelete(false);
@ -72,8 +72,7 @@ export default function StorageMetrics({
   const earliestDate = useMemo(() => {
     const keys = Object.keys(recordingsSummary || {});
     return keys.length
-      ? new TZDate(keys[keys.length - 1] + "T00:00:00", timezone).getTime() /
-          1000
+      ? new TZDate(keys[0] + "T00:00:00", timezone).getTime() / 1000
       : null;
   }, [recordingsSummary, timezone]);