Compare commits
13 Commits: d7de519a13 ... c327cf6aa1

| Author | SHA1 | Date |
| ------ | ---------- | ---- |
| | c327cf6aa1 | |
| | 2d8b6c8301 | |
| | 84c3f98a09 | |
| | c87f89fcc1 | |
| | 815303922d | |
| | 224cbdc2d6 | |
| | 3f9b153758 | |
| | 8e8346099e | |
| | b0527df3c7 | |
| | 301e0a1a3a | |
| | 213a1fbd00 | |
| | fbf4388b37 | |
| | ad3c8f3f25 | |
LICENSE (4 changes)

@@ -1,6 +1,6 @@
 The MIT License

-Copyright (c) 2020 Blake Blackshear
+Copyright (c) 2025 Frigate LLC (Frigate™)

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
README.md (21 changes)

@@ -1,8 +1,10 @@
 <p align="center">
-  <img align="center" alt="logo" src="docs/static/img/frigate.png">
+  <img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
 </p>

-# Frigate - NVR With Realtime Object Detection for IP Cameras
+# Frigate NVR™ - Realtime Object Detection for IP Cameras

+[](https://opensource.org/licenses/MIT)
+
 <a href="https://hosted.weblate.org/engage/frigate-nvr/">
   <img src="https://hosted.weblate.org/widget/frigate-nvr/language-badge.svg" alt="Translation status" />
@@ -12,7 +14,7 @@

 A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

-Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
+Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported [object detectors](https://docs.frigate.video/configuration/object_detectors/).

 - Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
 - Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
@@ -33,6 +35,15 @@ View the documentation at https://docs.frigate.video

 If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).

+## License
+
+This project is licensed under the **MIT License**.
+
+- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
+- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
+
+Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
+
 ## Screenshots

 ### Live dashboard
@@ -66,3 +77,7 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la
 <a href="https://hosted.weblate.org/engage/frigate-nvr/">
   <img src="https://hosted.weblate.org/widget/frigate-nvr/multi-auto.svg" alt="Translation status" />
 </a>
+
+---
+
+**Copyright © 2025 Frigate LLC.**
TRADEMARK.md (new file, 58 lines)

@@ -0,0 +1,58 @@
+# Trademark Policy
+
+**Last Updated:** November 2025
+
+This document outlines the policy regarding the use of the trademarks associated with the Frigate NVR project.
+
+## 1. Our Trademarks
+
+The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:
+
+- **Frigate™**
+- **Frigate NVR™**
+- **Frigate+™**
+- **The Frigate Logo**
+
+**Note on Common Law Rights:**
+Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
+
+## 2. Interaction with the MIT License
+
+The software in this repository is licensed under the [MIT License](LICENSE).
+
+**Crucial Distinction:**
+
+- The **Code** is free to use, modify, and distribute under the MIT terms.
+- The **Brand (Trademarks)** is **NOT** licensed under MIT.
+
+You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.
+
+## 3. Acceptable Use
+
+You may use the Marks without prior written permission in the following specific contexts:
+
+- **Referential Use:** To truthfully refer to the software (e.g., _"I use Frigate NVR for my home security"_).
+- **Compatibility:** To indicate that your product or project works with the software (e.g., _"MyPlugin for Frigate NVR"_ or _"Compatible with Frigate"_).
+- **Commentary:** In news articles, blog posts, or tutorials discussing the software.
+
+## 4. Prohibited Use
+
+You may **NOT** use the Marks in the following ways:
+
+- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
+- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
+- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
+- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).
+
+## 5. The Logo
+
+The Frigate logo (the bird icon) is a visual trademark.
+
+- You generally **cannot** use the logo on your own website or product packaging without permission.
+- If you are building a dashboard or integration that interfaces with Frigate, you may use the logo only to represent the Frigate node/service, provided it does not imply you _are_ Frigate.
+
+## 6. Questions & Permissions
+
+If you are unsure if your intended use violates this policy, or if you wish to request a specific license to use the Marks (e.g., for a partnership), please contact us at:
+
+**help@frigate.video**
@@ -145,6 +145,6 @@ rm -rf /var/lib/apt/lists/*

 # Install yq, for frigate-prepare and go2rtc echo source
 curl -fsSL \
-    "https://github.com/mikefarah/yq/releases/download/v4.33.3/yq_linux_$(dpkg --print-architecture)" \
+    "https://github.com/mikefarah/yq/releases/download/v4.48.2/yq_linux_$(dpkg --print-architecture)" \
     --output /usr/local/bin/yq
 chmod +x /usr/local/bin/yq
@@ -320,6 +320,12 @@ http {
         add_header Cache-Control "public";
     }

+    location /fonts/ {
+        access_log off;
+        expires 1y;
+        add_header Cache-Control "public";
+    }
+
     location /locales/ {
         access_log off;
         add_header Cache-Control "public";
@@ -25,7 +25,7 @@ Examples of available modules are:

 - `frigate.app`
 - `frigate.mqtt`
-- `frigate.object_detection`
+- `frigate.object_detection.base`
 - `detector.<detector_name>`
 - `watchdog.<camera_name>`
 - `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
@@ -53,6 +53,17 @@ environment_vars:
   VARIABLE_NAME: variable_value
 ```

+#### TensorFlow Thread Configuration
+
+If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:
+
+```yaml
+environment_vars:
+  TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
+  TF_INTER_OP_PARALLELISM_THREADS: "2" # Threads between operations (0 = use default)
+  TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
+```
+
 ### `database`

 Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
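For reference, a minimal sketch of how such variables can map onto TensorFlow's threading API (this is an assumption about the mechanism; Frigate's actual wiring may differ):

```python
# Sketch: applying the env vars above to TensorFlow's threading config.
# Assumption only -- not Frigate's actual code.
import os
import tensorflow as tf

intra = int(os.environ.get("TF_INTRA_OP_PARALLELISM_THREADS", "0"))
inter = int(os.environ.get("TF_INTER_OP_PARALLELISM_THREADS", "0"))

if intra:  # 0 keeps TensorFlow's default
    tf.config.threading.set_intra_op_parallelism_threads(intra)
if inter:
    tf.config.threading.set_inter_op_parallelism_threads(inter)
```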
@@ -247,7 +258,7 @@ curl -X POST http://frigate_host:5000/api/config/save -d @config.json
 if you'd like you can use your yaml config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to json:

 ```bash
-yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
+yq -o=json '.' config.yaml | curl -X POST 'http://frigate_host:5000/api/config/save?save_option=saveonly' --data-binary @-
 ```

 ### Via Command Line
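A Python equivalent of the `yq | curl` pipeline above can look like this (a sketch assuming PyYAML and requests are installed; `frigate_host` is the same placeholder as in the curl example):

```python
# Sketch: convert a YAML config to JSON and POST it to the save endpoint.
# Assumes PyYAML and requests are installed; frigate_host is a placeholder.
import json
import requests
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

resp = requests.post(
    "http://frigate_host:5000/api/config/save",
    params={"save_option": "saveonly"},  # same query param as the curl example
    data=json.dumps(config),
)
print(resp.status_code, resp.text)
```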
@@ -35,6 +35,15 @@ For object classification:
 - Ideal when multiple attributes can coexist independently.
 - Example: Detecting if a `person` in a construction yard is wearing a helmet or not.

+## Assignment Requirements
+
+Sub labels and attributes are only assigned when both conditions are met:
+
+1. **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured `threshold` (default: `0.8`).
+2. **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.
+
+This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
+
 ## Example use cases

 ### Sub label
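The two-step check added above can be sketched in a few lines of Python; this is a hypothetical illustration of the documented rules (threshold, minimum attempts, 60% consensus), not Frigate's actual implementation:

```python
# Hypothetical sketch of the documented assignment rules, not Frigate's code.
from collections import Counter
from typing import Optional

def assign_class(attempts: list, threshold: float = 0.8) -> Optional[str]:
    """attempts holds (class_name, confidence) tuples, one per classification attempt."""
    if len(attempts) < 3:
        return None  # consensus requires at least 3 attempts
    # Rule 1: only attempts meeting the confidence threshold count
    confident = [name for name, score in attempts if score >= threshold]
    if not confident:
        return None
    # Rule 2: 60% of attempts must agree on one class, and it must not be `none`
    name, count = Counter(confident).most_common(1)[0]
    if name != "none" and count / len(attempts) >= 0.6:
        return name
    return None

# Example: 4 of 5 confident attempts agree on "helmet" -> assigned
print(assign_class([("helmet", 0.9), ("helmet", 0.85), ("none", 0.95),
                    ("helmet", 0.82), ("helmet", 0.88)]))  # prints "helmet"
```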
@@ -66,14 +75,18 @@ classification:

 ## Training the model

-Creating and training the model is done within the Frigate UI using the `Classification` page.
+Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:

-### Getting Started
+### Step 1: Name and Define

+Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+
+### Step 2: Assign Training Examples
+
+The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.
+
 When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

-// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
-
 ### Improving the Model

 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -48,13 +48,23 @@ classification:

 ## Training the model

-Creating and training the model is done within the Frigate UI using the `Classification` page.
+Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:

-### Getting Started
+### Step 1: Name and Define

-When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
+Enter a name for your model and define at least 2 classes (states) that represent mutually exclusive states. For example, `open` and `closed` for a door, or `on` and `off` for lights.

-// TODO add this section once UI is implemented. Explain process of selecting a crop.
+### Step 2: Select the Crop Area
+
+Choose one or more cameras and draw a rectangle over the area of interest for each camera. The crop should be tight around the region you want to classify to avoid extra signals unrelated to what is being classified. You can drag and resize the rectangle to adjust the crop area.
+
+### Step 3: Assign Training Examples
+
+The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
+
+**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
+
+Once all images are assigned, training will begin automatically.
+
 ### Improving the Model
@@ -70,7 +70,7 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
 genai:
   provider: ollama
   base_url: http://localhost:11434
-  model: llava:7b
+  model: qwen3-vl:4b
 ```

 ## Google Gemini
@@ -35,19 +35,18 @@ Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger s

 :::tip

-If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
+If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. qwen3-VL supports vision and tools simultaneously in Ollama.

 :::

 The following models are recommended:

 | Model | Notes |
-| ----------------- | ----------------------------------------------------------- |
-| `qwen3-vl` | Strong visual and situational understanding |
+| ----------------- | -------------------------------------------------------------------- |
+| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
 | `Intern3.5VL` | Relatively fast with good vision comprehension |
 | `gemma3` | Strong frame-to-frame understanding, slower inference times |
 | `qwen2.5-vl` | Fast but capable model with good vision comprehension |
-| `llava-phi3` | Lightweight and fast model with vision comprehension |

 :::note
@@ -3,18 +3,18 @@ id: license_plate_recognition
 title: License Plate Recognition (LPR)
 ---

-Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
+Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.

 LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.

 When a plate is recognized, the details are:

-- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
-- Viewable in the Review Item Details pane in Review (sub labels).
+- Added as a `sub_label` (if [known](#matching)) or the `recognized_license_plate` field (if unknown) to a tracked object.
+- Viewable in the Details pane in Review/History.
 - Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
 - Filterable through the More Filters menu in Explore.
-- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
-- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
+- Published via the `frigate/events` MQTT topic as a `sub_label` ([known](#matching)) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
+- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if [known](#matching)) and `plate`.

 ## Model Requirements

@@ -31,6 +31,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
 ## Minimum System Requirements

 License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

 ## Configuration

 License plate recognition is disabled by default. Enable it in your config file:
@@ -73,8 +74,8 @@ Fine-tune the LPR feature using these optional parameters at the global level of
 - Default: `small`
 - This can be `small` or `large`.
 - The `small` model is fast and identifies groups of Latin and Chinese characters.
-- The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
-- For most users, the `small` model is recommended.
+- The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
+- If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.

 ### Recognition

@@ -177,7 +178,7 @@ lpr:

 :::note

-If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
+If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:

 ```yaml
 cameras:
@@ -305,7 +306,7 @@ With this setup:
 - Review items will always be classified as a `detection`.
 - Snapshots will always be saved.
 - Zones and object masks are **not** used.
-- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a known plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
+- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a [known](#matching) plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
 - License plate snapshots are saved at the highest-scoring moment and appear in Explore.
 - Debug view will not show `license_plate` bounding boxes.
@@ -214,6 +214,42 @@ For restreamed cameras, go2rtc remains active but does not use system resources

 Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.

+### Live player error messages
+
+When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. These indicate playback, codec, or network issues on the client/browser side, not something server-side in Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
+
+- **startup**
+
+  - What it means: The player failed to initialize or connect to the live stream (network or startup error).
+  - What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
+
+  - Possible console messages from the player code:
+
+    - `Error opening MediaSource.`
+    - `Browser reported a network error.`
+    - `Max error count ${errorCount} exceeded.` (the numeric value will vary)
+
+- **mse-decode**
+
+  - What it means: The browser reported a decoding error while trying to play the stream, which is usually the result of a codec incompatibility or corrupted frames.
+  - What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
+
+  - Possible console messages from the player code:
+
+    - `Safari cannot open MediaSource.`
+    - `Safari reported InvalidStateError.`
+    - `Safari reported decoding errors.`
+
+- **stalled**
+
+  - What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
+  - What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
+
+  - Possible console messages from the player code:
+
+    - `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
+    - `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
+
 ## Live view FAQ

 1. **Why don't I have audio in my Live view?**
@@ -277,3 +313,38 @@ Note that disabling a camera through the config file (`enabled: False`) removes
 7. **My camera streams have lots of visual artifacts / distortion.**

    Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.
+
+8. **Why does my camera stream switch aspect ratios on the Live dashboard?**
+
+   Your camera may change aspect ratios on the dashboard because Frigate uses different streams for different purposes. With go2rtc and Smart Streaming, Frigate shows a static image from the `detect` stream when no activity is present, and switches to the live stream when motion is detected. The camera image will change size if your streams use different aspect ratios.
+
+   To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.
+
+   Example: Resolutions from two streams
+
+   - Mismatched (may cause aspect ratio switching on the dashboard):
+
+     - Live/go2rtc stream: 1920x1080 (16:9)
+     - Detect stream: 640x352 (~1.82:1, not 16:9)
+
+   - Matched (prevents switching):
+     - Live/go2rtc stream: 1920x1080 (16:9)
+     - Detect stream: 640x360 (16:9)
+
+   You can update the detect settings in your camera config to match the aspect ratio of your go2rtc live stream. For example:
+
+   ```yaml
+   cameras:
+     front_door:
+       detect:
+         width: 640
+         height: 360 # set this to 360 instead of 352
+       ffmpeg:
+         inputs:
+           - path: rtsp://127.0.0.1:8554/front_door # main stream 1920x1080
+             roles:
+               - record
+           - path: rtsp://127.0.0.1:8554/front_door_sub # sub stream 640x352
+             roles:
+               - detect
+   ```
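The aspect-ratio arithmetic in FAQ item 8 can be double-checked with a few lines of Python (illustrative only):

```python
# Quick check of the aspect-ratio claims in FAQ item 8.
from math import gcd

def aspect(w: int, h: int) -> str:
    g = gcd(w, h)
    return f"{w // g}:{h // g} (~{w / h:.2f}:1)"

print(aspect(1920, 1080))  # 16:9 (~1.78:1)
print(aspect(640, 360))    # 16:9 (~1.78:1) -> matches the live stream
print(aspect(640, 352))    # 20:11 (~1.82:1) -> mismatched
```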
@@ -3,6 +3,8 @@ id: object_detectors
 title: Object Detectors
 ---

+import CommunityBadge from '@site/src/components/CommunityBadge';
+
 # Supported Hardware

 :::info
@@ -13,8 +15,8 @@ Frigate supports multiple different detectors that work on different types of ha

 - [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
 - [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
-- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
-- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
+- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
+- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).

 **AMD**

@@ -34,16 +36,16 @@ Frigate supports multiple different detectors that work on different types of ha

 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.

-**Nvidia Jetson**
+**Nvidia Jetson** <CommunityBadge />

 - [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.

-**Rockchip**
+**Rockchip** <CommunityBadge />

 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

-**Synaptics**
+**Synaptics** <CommunityBadge />

 - [Synaptics](#synaptics): synap models can run on Synaptics devices(e.g astra machina) with included NPUs.

@@ -962,7 +964,6 @@ model:
 # path: /config/yolov9.zip
 # The .zip file must contain:
 # ├── yolov9.dfp (a file ending with .dfp)
-# └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
 ```

 #### YOLOX
@@ -246,7 +246,7 @@ birdseye:
 # Optional: ffmpeg configuration
 # More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
 ffmpeg:
-  # Optional: ffmpeg binry path (default: shown below)
+  # Optional: ffmpeg binary path (default: shown below)
   # can also be set to `7.0` or `5.0` to specify one of the included versions
   # or can be set to any path that holds `bin/ffmpeg` & `bin/ffprobe`
   path: "default"
@@ -141,7 +141,7 @@ Triggers are best configured through the Frigate UI.
   Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
 5. Save the trigger to update the configuration and store the embedding in the database.

-When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification.
+When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification. Additionally, the UI will show the last date/time and tracked object ID that activated your trigger. The last triggered timestamp is not saved to the database or persisted through restarts of Frigate.

 ### Usage and Best Practices
@@ -3,6 +3,8 @@ id: hardware
 title: Recommended hardware
 ---

+import CommunityBadge from '@site/src/components/CommunityBadge';
+
 ## Cameras

 Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.
@@ -59,7 +61,7 @@ Frigate supports multiple different detectors that work on different types of ha

   - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

-- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
+- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
   - [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
   - Runs best with tiny, small, or medium-size models

@@ -84,32 +86,26 @@ Frigate supports multiple different detectors that work on different types of ha

 **Nvidia**

-- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
+- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.

   - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
   - Runs well with any size models including large

-**Rockchip**
+- <CommunityBadge /> [Jetson](#nvidia-jetson): Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6.

+**Rockchip** <CommunityBadge />
+
 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
   - [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
   - Runs best with tiny or small size models
   - Runs efficiently on low power hardware

-**Synaptics**
+**Synaptics** <CommunityBadge />

 - [Synaptics](#synaptics): synap models can run on Synaptics devices(e.g astra machina) with included NPUs to provide efficient object detection.

 :::

-### Synaptics
-
-- **Synaptics** Default model is **mobilenet**
-
-| Name | Synaptics SL1680 Inference Time |
-| ---------------- | ------------------------------- |
-| ssd mobilenet | ~ 25 ms |
-| yolov5m | ~ 118 ms |

 ### Hailo-8

 Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn’t provided.
@@ -261,7 +257,7 @@ Inference speeds may vary depending on the host platform. The above data was mea

 ### Nvidia Jetson

-Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
+Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).

 Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.

@@ -282,6 +278,15 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard

 The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.

+### Synaptics
+
+- **Synaptics** Default model is **mobilenet**
+
+| Name | Synaptics SL1680 Inference Time |
+| ------------- | ------------------------------- |
+| ssd mobilenet | ~ 25 ms |
+| yolov5m | ~ 118 ms |
+
 ## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)

 This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.
@@ -56,7 +56,7 @@ services:
     volumes:
       - /path/to/your/config:/config
       - /path/to/your/storage:/media/frigate
-      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
+      - type: tmpfs # Recommended: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1000000000
@@ -310,7 +310,7 @@ services:
      - /etc/localtime:/etc/localtime:ro
      - /path/to/your/config:/config
      - /path/to/your/storage:/media/frigate
-     - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
+     - type: tmpfs # Recommended: 1GB of memory
       target: /tmp/cache
       tmpfs:
         size: 1000000000
@@ -159,11 +159,44 @@ Message published for updates to tracked object metadata, for example:
 }
 ```

+#### Object Classification Update
+
+Message published when [object classification](/configuration/custom_classification/object_classification) reaches consensus on a classification result.
+
+**Sub label type:**
+
+```json
+{
+  "type": "classification",
+  "id": "1607123955.475377-mxklsc",
+  "camera": "front_door_cam",
+  "timestamp": 1607123958.748393,
+  "model": "person_classifier",
+  "sub_label": "delivery_person",
+  "score": 0.87
+}
+```
+
+**Attribute type:**
+
+```json
+{
+  "type": "classification",
+  "id": "1607123955.475377-mxklsc",
+  "camera": "front_door_cam",
+  "timestamp": 1607123958.748393,
+  "model": "helmet_detector",
+  "attribute": "yes",
+  "score": 0.92
+}
+```
+
 ### `frigate/reviews`

 Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.

 An `update` with the same ID will be published when:

 - The severity changes from `detection` to `alert`
 - Additional objects are detected
 - An object is recognized via face, lpr, etc.
@@ -308,6 +341,11 @@ Publishes transcribed text for audio detected on this camera.

 **NOTE:** Requires audio detection and transcription to be enabled

+### `frigate/<camera_name>/classification/<model_name>`
+
+Publishes the current state detected by a state classification model for the camera. The topic name includes the model name as configured in your classification settings.
+The published value is the detected state class name (e.g., `open`, `closed`, `on`, `off`). The state is only published when it changes, helping to reduce unnecessary MQTT traffic.
+
 ### `frigate/<camera_name>/enabled/set`

 Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.
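To watch the new `frigate/<camera_name>/classification/<model_name>` topic added above, a minimal subscriber sketch (assuming the paho-mqtt 1.x client API and a broker at `localhost:1883`; both are assumptions for illustration):

```python
# Minimal subscriber for the classification topics described above.
# Assumes paho-mqtt 1.x and a broker reachable at localhost:1883.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. "frigate/front_door/classification/door_state" -> b"open"
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("frigate/+/classification/+")
client.loop_forever()
```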
@@ -10,7 +10,7 @@ const config: Config = {
   baseUrl: "/",
   onBrokenLinks: "throw",
   onBrokenMarkdownLinks: "warn",
-  favicon: "img/favicon.ico",
+  favicon: "img/branding/favicon.ico",
   organizationName: "blakeblackshear",
   projectName: "frigate",
   themes: [
@@ -116,8 +116,8 @@ const config: Config = {
     title: "Frigate",
     logo: {
       alt: "Frigate",
-      src: "img/logo.svg",
-      srcDark: "img/logo-dark.svg",
+      src: "img/branding/logo.svg",
+      srcDark: "img/branding/logo-dark.svg",
     },
     items: [
       {
@@ -170,7 +170,7 @@ const config: Config = {
       ],
     },
   ],
-  copyright: `Copyright © ${new Date().getFullYear()} Blake Blackshear`,
+  copyright: `Copyright © ${new Date().getFullYear()} Frigate LLC`,
   },
 },
 plugins: [
docs/src/components/CommunityBadge/index.jsx (new file, 23 lines)

@@ -0,0 +1,23 @@
+import React from "react";
+
+export default function CommunityBadge() {
+  return (
+    <span
+      title="This detector is maintained by community members who provide code, maintenance, and support. See the contributing boards documentation for more information."
+      style={{
+        display: "inline-block",
+        backgroundColor: "#f1f3f5",
+        color: "#24292f",
+        fontSize: "11px",
+        fontWeight: 600,
+        padding: "2px 6px",
+        borderRadius: "3px",
+        border: "1px solid #d1d9e0",
+        marginLeft: "4px",
+        cursor: "help",
+      }}
+    >
+      Community Supported
+    </span>
+  );
+}
docs/static/img/branding/LICENSE.md (new file, vendored, 30 lines)

@@ -0,0 +1,30 @@
+# COPYRIGHT AND TRADEMARK NOTICE
+
+The images, logos, and icons contained in this directory (the "Brand Assets") are
+proprietary to Frigate LLC and are NOT covered by the MIT License governing the
+rest of this repository.
+
+1. TRADEMARK STATUS
+   The "Frigate" name and the accompanying logo are common law trademarks™ of
+   Frigate LLC. Frigate LLC reserves all rights to these marks.
+
+2. LIMITED PERMISSION FOR USE
+   Permission is hereby granted to display these Brand Assets strictly for the
+   following purposes:
+   a. To execute the software interface on a local machine.
+   b. To identify the software in documentation or reviews (nominative use).
+
+3. RESTRICTIONS
+   You may NOT:
+   a. Use these Brand Assets to represent a derivative work (fork) as an official
+      product of Frigate LLC.
+   b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
+      commercial affiliation with Frigate LLC.
+   c. Modify or alter the Brand Assets.
+
+If you fork this repository with the intent to distribute a modified or competing
+version of the software, you must replace these Brand Assets with your own
+original content.
+
+ALL RIGHTS RESERVED.
+Copyright (c) 2025 Frigate LLC.
Binary image files (4 changed):

| Before | After |
| ------------ | ------------ |
| Size: 15 KiB | Size: 15 KiB |
| Size: 12 KiB | Size: 12 KiB |
| Size: 936 B | Size: 936 B |
| Size: 933 B | Size: 933 B |
@ -179,6 +179,36 @@ def config(request: Request):
    return JSONResponse(content=config)


@router.get("/config/raw_paths", dependencies=[Depends(require_role(["admin"]))])
def config_raw_paths(request: Request):
    """Admin-only endpoint that returns camera paths and go2rtc streams without credential masking."""
    config_obj: FrigateConfig = request.app.frigate_config

    raw_paths = {"cameras": {}, "go2rtc": {"streams": {}}}

    # Extract raw camera ffmpeg input paths
    for camera_name, camera in config_obj.cameras.items():
        raw_paths["cameras"][camera_name] = {
            "ffmpeg": {
                "inputs": [
                    {"path": input.path, "roles": input.roles}
                    for input in camera.ffmpeg.inputs
                ]
            }
        }

    # Extract raw go2rtc stream URLs
    go2rtc_config = config_obj.go2rtc.model_dump(
        mode="json", warnings="none", exclude_none=True
    )
    for stream_name, stream in go2rtc_config.get("streams", {}).items():
        if stream is None:
            continue
        raw_paths["go2rtc"]["streams"][stream_name] = stream

    return JSONResponse(content=raw_paths)


@router.get("/config/raw")
def config_raw():
    config_file = find_config_file()
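To verify the new endpoint end to end, a small client sketch can help. Everything below is illustrative: the host, port, and cookie name are assumptions about a typical deployment, not part of this change; the /api prefix is where Frigate normally mounts its HTTP API.

import requests

# Hypothetical deployment details; substitute your own host and admin session.
BASE = "http://frigate.local:5000"
resp = requests.get(
    f"{BASE}/api/config/raw_paths",
    cookies={"frigate_token": "<admin-session-token>"},  # hypothetical cookie name
)
resp.raise_for_status()
raw = resp.json()
for name, cam in raw["cameras"].items():
    for stream in cam["ffmpeg"]["inputs"]:
        # Paths come back unmasked, so credentials will be visible here.
        print(name, stream["roles"], stream["path"])

A non-admin session should be rejected by the require_role dependency rather than receiving this payload.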
@ -1781,9 +1781,8 @@ def create_trigger_embedding(
        logger.debug(
            f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
        )
    except Exception as e:
        logger.error(e.with_traceback())
        logger.error(
    except Exception:
        logger.exception(
            f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
        )

@ -1807,8 +1806,8 @@ def create_trigger_embedding(
            status_code=200,
        )

    except Exception as e:
        logger.error(e.with_traceback())
    except Exception:
        logger.exception("Error creating trigger embedding")
        return JSONResponse(
            content={
                "success": False,

@ -1917,9 +1916,8 @@ def update_trigger_embedding(
        logger.debug(
            f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
        )
    except Exception as e:
        logger.error(e.with_traceback())
        logger.error(
    except Exception:
        logger.exception(
            f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
        )

@ -1958,9 +1956,8 @@ def update_trigger_embedding(
        logger.debug(
            f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
        )
    except Exception as e:
        logger.error(e.with_traceback())
        logger.error(
    except Exception:
        logger.exception(
            f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
        )

@ -1972,8 +1969,8 @@ def update_trigger_embedding(
            status_code=200,
        )

    except Exception as e:
        logger.error(e.with_traceback())
    except Exception:
        logger.exception("Error updating trigger embedding")
        return JSONResponse(
            content={
                "success": False,

@ -2033,9 +2030,8 @@ def delete_trigger_embedding(
        logger.debug(
            f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
        )
    except Exception as e:
        logger.error(e.with_traceback())
        logger.error(
    except Exception:
        logger.exception(
            f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
        )

@ -2047,8 +2043,8 @@ def delete_trigger_embedding(
            status_code=200,
        )

    except Exception as e:
        logger.error(e.with_traceback())
    except Exception:
        logger.exception("Error deleting trigger embedding")
        return JSONResponse(
            content={
                "success": False,
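The logging change above also fixes a latent bug: Exception.with_traceback() requires a traceback argument (calling it with none raises TypeError) and returns the exception object rather than a formatted string, so the old handlers could themselves fail while logging. logger.exception() logs at ERROR level and appends the active traceback automatically. A minimal standalone sketch:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

try:
    1 / 0
except Exception:
    # Logs "boom" at ERROR level followed by the full traceback;
    # no reference to the exception object is needed.
    logger.exception("boom")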
@ -762,6 +762,15 @@ async def recording_clip(
        .order_by(Recordings.start_time.asc())
    )

    if recordings.count() == 0:
        return JSONResponse(
            content={
                "success": False,
                "message": "No recordings found for the specified time range",
            },
            status_code=400,
        )

    file_name = sanitize_filename(f"playlist_{camera_name}_{start_ts}-{end_ts}.txt")
    file_path = os.path.join(CACHE_DIR, file_name)
    with open(file_path, "w") as file:

@ -840,6 +849,7 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):

    clips = []
    durations = []
    min_duration_ms = 100  # Minimum 100ms to ensure at least one video frame
    max_duration_ms = MAX_SEGMENT_DURATION * 1000

    recording: Recordings

@ -857,11 +867,11 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
        if recording.end_time > end_ts:
            duration -= int((recording.end_time - end_ts) * 1000)

        if duration <= 0:
        if duration < min_duration_ms:
            # skip if the clip has no valid duration
            # skip if the clip has no valid duration (too short to contain frames)
            continue

        if 0 < duration < max_duration_ms:
        if min_duration_ms <= duration < max_duration_ms:
            clip["keyFrameDurations"] = [duration]
            clips.append(clip)
            durations.append(duration)
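The new floor exists because a clip shorter than one frame interval produces an unplayable entry in the VOD manifest. A worked example of the clamping (plain arithmetic; MAX_SEGMENT_DURATION's real value lives in frigate.const, and 10 seconds is assumed here purely for illustration):

min_duration_ms = 100
max_duration_ms = 10 * 1000  # assumed MAX_SEGMENT_DURATION = 10

# A recording trimmed to a 50 ms sliver of the requested range:
duration = 50
assert duration > 0                  # passed the old `duration <= 0` check
assert duration < min_duration_ms    # now skipped: too short to hold a frame

# A 400 ms clip still qualifies for a single key-frame duration entry:
duration = 400
assert min_duration_ms <= duration < max_duration_ms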
@ -136,6 +136,7 @@ class CameraMaintainer(threading.Thread):
            self.ptz_metrics[name],
            self.region_grids[name],
            self.stop_event,
            self.config.logger,
        )
        self.camera_processes[config.name] = camera_process
        camera_process.start()

@ -156,7 +157,11 @@ class CameraMaintainer(threading.Thread):
            self.frame_manager.create(f"{config.name}_frame{i}", frame_size)

        capture_process = CameraCapture(
            config, count, self.camera_metrics[name], self.stop_event
            config,
            count,
            self.camera_metrics[name],
            self.stop_event,
            self.config.logger,
        )
        capture_process.daemon = True
        self.capture_processes[name] = capture_process
@ -792,6 +792,10 @@ class FrigateConfig(FrigateBaseModel):
        # copy over auth and proxy config in case auth needs to be enforced
        safe_config["auth"] = config.get("auth", {})
        safe_config["proxy"] = config.get("proxy", {})

        # copy over database config for auth and so a new db is not created
        safe_config["database"] = config.get("database", {})

        return cls.parse_object(safe_config, **context)

    # Validate and return the config dict.
@ -132,17 +132,15 @@ class ReviewDescriptionProcessor(PostProcessorApi):

        if image_source == ImageSourceEnum.recordings:
            duration = final_data["end_time"] - final_data["start_time"]
            buffer_extension = min(
                10, max(2, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
            )
            buffer_extension = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)

            # Ensure minimum total duration for short review items
            # This provides better context for brief events
            total_duration = duration + (2 * buffer_extension)
            if total_duration < MIN_RECORDING_DURATION:
                # Expand buffer to reach minimum duration, still respecting max of 10s per side
                # Expand buffer to reach minimum duration, still respecting max of 5s per side
                additional_buffer_per_side = (MIN_RECORDING_DURATION - duration) / 2
                buffer_extension = min(10, additional_buffer_per_side)
                buffer_extension = min(5, additional_buffer_per_side)

            thumbs = self.get_recording_frames(
                camera,
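For intuition on the tightened buffer math, here is a worked example. The two constants' real values are defined elsewhere in the module; RECORDING_BUFFER_EXTENSION_PERCENT = 0.2 and MIN_RECORDING_DURATION = 10 are assumed here for illustration only:

RECORDING_BUFFER_EXTENSION_PERCENT = 0.2  # assumed for illustration
MIN_RECORDING_DURATION = 10               # assumed, in seconds

duration = 2.0  # a brief review item
buffer_extension = min(5, duration * RECORDING_BUFFER_EXTENSION_PERCENT)  # 0.4
total_duration = duration + 2 * buffer_extension                          # 2.8

if total_duration < MIN_RECORDING_DURATION:
    additional = (MIN_RECORDING_DURATION - duration) / 2  # 4.0 per side
    buffer_extension = min(5, additional)                 # capped at 5 s per side

print(buffer_extension)  # 4.0 -> frames come from a 10 s window around the event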
@ -1,6 +1,7 @@
"""Real time processor that works with classification tflite models."""

import datetime
import json
import logging
import os
from typing import Any

@ -21,6 +22,7 @@ from frigate.config.classification import (
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region

@ -284,6 +286,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
        config: FrigateConfig,
        model_config: CustomClassificationConfig,
        sub_label_publisher: EventMetadataPublisher,
        requestor: InterProcessRequestor,
        metrics: DataProcessorMetrics,
    ):
        super().__init__(config, metrics)

@ -292,6 +295,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
        self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
        self.interpreter: Interpreter | None = None
        self.sub_label_publisher = sub_label_publisher
        self.requestor = requestor
        self.tensor_input_details: dict[str, Any] | None = None
        self.tensor_output_details: dict[str, Any] | None = None
        self.classification_history: dict[str, list[tuple[str, float, float]]] = {}

@ -486,6 +490,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
            )

        if consensus_label is not None:
            camera = obj_data["camera"]

            if (
                self.model_config.object_config.classification_type
                == ObjectClassificationType.sub_label

@ -494,6 +500,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                    (object_id, consensus_label, consensus_score),
                    EventMetadataTypeEnum.sub_label,
                )
                self.requestor.send_data(
                    "tracked_object_update",
                    json.dumps(
                        {
                            "type": TrackedObjectUpdateTypesEnum.classification,
                            "id": object_id,
                            "camera": camera,
                            "timestamp": now,
                            "model": self.model_config.name,
                            "sub_label": consensus_label,
                            "score": consensus_score,
                        }
                    ),
                )
            elif (
                self.model_config.object_config.classification_type
                == ObjectClassificationType.attribute

@ -507,6 +527,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                    ),
                    EventMetadataTypeEnum.attribute.value,
                )
                self.requestor.send_data(
                    "tracked_object_update",
                    json.dumps(
                        {
                            "type": TrackedObjectUpdateTypesEnum.classification,
                            "id": object_id,
                            "camera": camera,
                            "timestamp": now,
                            "model": self.model_config.name,
                            "attribute": consensus_label,
                            "score": consensus_score,
                        }
                    ),
                )

    def handle_request(self, topic, request_data):
        if topic == EmbeddingsRequestEnum.reload_classification_model.value:
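For reference, a subscriber on the tracked_object_update topic would now receive a JSON message shaped like the following for a sub-label classification. The field names mirror the send_data call above; every value here is made up for illustration:

import json

message = json.dumps(
    {
        "type": "classification",           # TrackedObjectUpdateTypesEnum.classification
        "id": "1700000000.123456-abc123",   # tracked object id (illustrative)
        "camera": "front_door",             # illustrative camera name
        "timestamp": 1700000000.5,
        "model": "bird_classifier",         # illustrative model name
        "sub_label": "blue_jay",
        "score": 0.87,
    }
)

Attribute-type classifications carry an "attribute" key instead of "sub_label", as the second hunk shows.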
@ -424,7 +424,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):

        if not res:
            return {
                "message": "No face was recognized.",
                "message": "Model is still training, please try again in a few moments.",
                "success": False,
            }
@ -18,7 +18,6 @@ from frigate.detectors.detector_config import (
    ModelTypeEnum,
)
from frigate.util.file import FileLock
from frigate.util.model import post_process_yolo

logger = logging.getLogger(__name__)

@ -178,13 +177,6 @@ class MemryXDetector(DetectionApi):
            logger.error(f"Failed to initialize MemryX model: {e}")
            raise

    def load_yolo_constants(self):
        base = f"{self.cache_dir}/{self.model_folder}"
        # constants for yolov9 post-processing
        self.const_A = np.load(f"{base}/_model_22_Constant_9_output_0.npy")
        self.const_B = np.load(f"{base}/_model_22_Constant_10_output_0.npy")
        self.const_C = np.load(f"{base}/_model_22_Constant_12_output_0.npy")

    def check_and_prepare_model(self):
        if not os.path.exists(self.cache_dir):
            os.makedirs(self.cache_dir, exist_ok=True)

@ -236,7 +228,6 @@ class MemryXDetector(DetectionApi):

        # Handle post model requirements by model type
        if self.memx_model_type in [
            ModelTypeEnum.yologeneric,
            ModelTypeEnum.yolonas,
            ModelTypeEnum.ssd,
        ]:

@ -245,7 +236,10 @@ class MemryXDetector(DetectionApi):
                    f"No *_post.onnx file found in custom model zip for {self.memx_model_type.name}."
                )
            self.memx_post_model = post_candidates[0]
        elif self.memx_model_type == ModelTypeEnum.yolox:
        elif self.memx_model_type in [
            ModelTypeEnum.yolox,
            ModelTypeEnum.yologeneric,
        ]:
            # Explicitly ignore any post model even if present
            self.memx_post_model = None
        else:

@ -273,8 +267,6 @@ class MemryXDetector(DetectionApi):
            logger.info("Using cached models.")
            self.memx_model_path = dfp_path
            self.memx_post_model = post_path
            if self.memx_model_type == ModelTypeEnum.yologeneric:
                self.load_yolo_constants()
            return

        # ---------- CASE 3: download MemryX model (no cache) ----------

@ -303,9 +295,6 @@ class MemryXDetector(DetectionApi):
                else None
            )

            if self.memx_model_type == ModelTypeEnum.yologeneric:
                self.load_yolo_constants()

        finally:
            if os.path.exists(zip_path):
                try:

@ -600,127 +589,232 @@ class MemryXDetector(DetectionApi):

        self.output_queue.put(final_detections)

    def onnx_reshape_with_allowzero(
        self, data: np.ndarray, shape: np.ndarray, allowzero: int = 0
    ) -> np.ndarray:
        shape = shape.astype(int)
        input_shape = data.shape
        output_shape = []
        for i, dim in enumerate(shape):
            if dim == 0 and allowzero == 0:
                output_shape.append(input_shape[i])  # Copy dimension from input
            else:
                output_shape.append(dim)

        # Now let NumPy infer any -1 if needed
        reshaped = np.reshape(data, output_shape)
        return reshaped

    def _generate_anchors(self, sizes=[80, 40, 20]):
        """Generate anchor points for YOLOv9 style processing"""
        yscales = []
        xscales = []
        for s in sizes:
            r = np.arange(s) + 0.5
            yscales.append(np.repeat(r, s))
            xscales.append(np.repeat(r[None, ...], s, axis=0).flatten())

        yscales = np.concatenate(yscales)
        xscales = np.concatenate(xscales)
        anchors = np.stack([xscales, yscales], axis=1)
        return anchors

    def _generate_scales(self, sizes=[80, 40, 20]):
        """Generate scaling factors for each detection level"""
        factors = [8, 16, 32]
        s = np.concatenate([np.ones([int(s * s)]) * f for s, f in zip(sizes, factors)])
        return s[:, None]

    @staticmethod
    def _softmax(x: np.ndarray, axis: int) -> np.ndarray:
        """Efficient softmax implementation"""
        x = x - np.max(x, axis=axis, keepdims=True)
        np.exp(x, out=x)
        x /= np.sum(x, axis=axis, keepdims=True)
        return x

    def dfl(self, x: np.ndarray) -> np.ndarray:
        """Distribution Focal Loss decoding - YOLOv9 style"""
        x = x.reshape(-1, 4, 16)
        weights = np.arange(16, dtype=np.float32)
        p = self._softmax(x, axis=2)
        p = p * weights[None, None, :]
        out = np.sum(p, axis=2, keepdims=False)
        return out

    def dist2bbox(
        self, x: np.ndarray, anchors: np.ndarray, scales: np.ndarray
    ) -> np.ndarray:
        """Convert distances to bounding boxes - YOLOv9 style"""
        lt = x[:, :2]
        rb = x[:, 2:]

        x1y1 = anchors - lt
        x2y2 = anchors + rb

        wh = x2y2 - x1y1
        c_xy = (x1y1 + x2y2) / 2

        out = np.concatenate([c_xy, wh], axis=1)
        out = out * scales
        return out

    def post_process_yolo_optimized(self, outputs):
        """
        Custom YOLOv9 post-processing optimized for MemryX ONNX outputs.
        Implements DFL decoding, confidence filtering, and NMS in pure NumPy.
        """
        # YOLOv9 outputs: 6 outputs (lbox, lcls, mbox, mcls, sbox, scls)
        conv_out1, conv_out2, conv_out3, conv_out4, conv_out5, conv_out6 = outputs

        # Determine grid sizes based on input resolution
        # YOLOv9 uses 3 detection heads with strides [8, 16, 32]
        # Grid sizes = input_size / stride
        sizes = [
            self.memx_model_height // 8,   # finest grid (e.g., 80 for 640x640, 40 for 320x320)
            self.memx_model_height // 16,  # medium grid (e.g., 40 for 640x640, 20 for 320x320)
            self.memx_model_height // 32,  # coarsest grid (e.g., 20 for 640x640, 10 for 320x320)
        ]

        # Generate anchors and scales if not already done
        if not hasattr(self, "anchors"):
            self.anchors = self._generate_anchors(sizes)
            self.scales = self._generate_scales(sizes)

        # Process outputs in YOLOv9 format: reshape and moveaxis for ONNX format
        lbox = np.moveaxis(conv_out1, 1, -1)  # Large boxes
        lcls = np.moveaxis(conv_out2, 1, -1)  # Large classes
        mbox = np.moveaxis(conv_out3, 1, -1)  # Medium boxes
        mcls = np.moveaxis(conv_out4, 1, -1)  # Medium classes
        sbox = np.moveaxis(conv_out5, 1, -1)  # Small boxes
        scls = np.moveaxis(conv_out6, 1, -1)  # Small classes

        # Determine number of classes dynamically from the class output shape
        # lcls shape should be (batch, height, width, num_classes)
        num_classes = lcls.shape[-1]

        # Validate that all class outputs have the same number of classes
        if not (mcls.shape[-1] == num_classes and scls.shape[-1] == num_classes):
            raise ValueError(
                f"Class output shapes mismatch: lcls={lcls.shape}, mcls={mcls.shape}, scls={scls.shape}"
            )

        # Concatenate boxes and classes
        boxes = np.concatenate(
            [
                lbox.reshape(-1, 64),  # 64 is for 4 bbox coords * 16 DFL bins
                mbox.reshape(-1, 64),
                sbox.reshape(-1, 64),
            ],
            axis=0,
        )

        classes = np.concatenate(
            [
                lcls.reshape(-1, num_classes),
                mcls.reshape(-1, num_classes),
                scls.reshape(-1, num_classes),
            ],
            axis=0,
        )

        # Apply sigmoid to classes
        classes = self.sigmoid(classes)

        # Apply DFL to box predictions
        boxes = self.dfl(boxes)

        # YOLOv9 postprocessing with confidence filtering and NMS
        confidence_thres = 0.4
        iou_thres = 0.6

        # Find the class with the highest score for each detection
        max_scores = np.max(classes, axis=1)  # Maximum class score for each detection
        class_ids = np.argmax(classes, axis=1)  # Index of the best class

        # Filter out detections with scores below the confidence threshold
        valid_indices = np.where(max_scores >= confidence_thres)[0]
        if len(valid_indices) == 0:
            # Return empty detections array
            final_detections = np.zeros((20, 6), np.float32)
            return final_detections

        # Select only valid detections
        valid_boxes = boxes[valid_indices]
        valid_class_ids = class_ids[valid_indices]
        valid_scores = max_scores[valid_indices]

        # Convert distances to actual bounding boxes using anchors and scales
        valid_boxes = self.dist2bbox(
            valid_boxes, self.anchors[valid_indices], self.scales[valid_indices]
        )

        # Convert bounding box coordinates from (x_center, y_center, w, h) to (x_min, y_min, x_max, y_max)
        x_center, y_center, width, height = (
            valid_boxes[:, 0],
            valid_boxes[:, 1],
            valid_boxes[:, 2],
            valid_boxes[:, 3],
        )
        x_min = x_center - width / 2
        y_min = y_center - height / 2
        x_max = x_center + width / 2
        y_max = y_center + height / 2

        # Convert to format expected by cv2.dnn.NMSBoxes: [x, y, width, height]
        boxes_for_nms = []
        scores_for_nms = []

        for i in range(len(valid_indices)):
            # Ensure coordinates are within bounds and positive
            x_min_clipped = max(0, x_min[i])
            y_min_clipped = max(0, y_min[i])
            x_max_clipped = min(self.memx_model_width, x_max[i])
            y_max_clipped = min(self.memx_model_height, y_max[i])

            width_clipped = x_max_clipped - x_min_clipped
            height_clipped = y_max_clipped - y_min_clipped

            if width_clipped > 0 and height_clipped > 0:
                boxes_for_nms.append(
                    [x_min_clipped, y_min_clipped, width_clipped, height_clipped]
                )
                scores_for_nms.append(float(valid_scores[i]))

        final_detections = np.zeros((20, 6), np.float32)

        if len(boxes_for_nms) == 0:
            return final_detections

        # Apply NMS using OpenCV
        indices = cv2.dnn.NMSBoxes(
            boxes_for_nms, scores_for_nms, confidence_thres, iou_thres
        )

        if len(indices) > 0:
            # Flatten indices if they are returned as a list of arrays
            if isinstance(indices[0], list) or isinstance(indices[0], np.ndarray):
                indices = [i[0] for i in indices]

            # Limit to top 20 detections
            indices = indices[:20]

            # Convert to Frigate format: [class_id, confidence, y_min, x_min, y_max, x_max] (normalized)
            for i, idx in enumerate(indices):
                class_id = valid_class_ids[idx]
                confidence = valid_scores[idx]

                # Get the box coordinates
                box = boxes_for_nms[idx]
                x_min_norm = box[0] / self.memx_model_width
                y_min_norm = box[1] / self.memx_model_height
                x_max_norm = (box[0] + box[2]) / self.memx_model_width
                y_max_norm = (box[1] + box[3]) / self.memx_model_height

                final_detections[i] = [
                    class_id,
                    confidence,
                    y_min_norm,  # Frigate expects y_min first
                    x_min_norm,
                    y_max_norm,
                    x_max_norm,
                ]

        return final_detections

    def process_output(self, *outputs):
        """Output callback function -- receives frames from the MX3 and triggers post-processing"""
        if self.memx_model_type == ModelTypeEnum.yologeneric:
            if not self.memx_post_model:
                conv_out1 = outputs[0]
                conv_out2 = outputs[1]
                conv_out3 = outputs[2]
                conv_out4 = outputs[3]
                conv_out5 = outputs[4]
                conv_out6 = outputs[5]

                concat_1 = self.onnx_concat([conv_out1, conv_out2], axis=1)
                concat_2 = self.onnx_concat([conv_out3, conv_out4], axis=1)
                concat_3 = self.onnx_concat([conv_out5, conv_out6], axis=1)

                shape = np.array([1, 144, -1], dtype=np.int64)

                reshaped_1 = self.onnx_reshape_with_allowzero(concat_1, shape, allowzero=0)
                reshaped_2 = self.onnx_reshape_with_allowzero(concat_2, shape, allowzero=0)
                reshaped_3 = self.onnx_reshape_with_allowzero(concat_3, shape, allowzero=0)

                concat_4 = self.onnx_concat([reshaped_1, reshaped_2, reshaped_3], 2)

                axis = 1
                split_sizes = [64, 80]

                # Calculate indices at which to split
                indices = np.cumsum(split_sizes)[:-1]  # [64] — split before the second chunk

                # Perform split along axis 1
                split_0, split_1 = np.split(concat_4, indices, axis=axis)

                num_boxes = 2100 if self.memx_model_height == 320 else 8400
                shape1 = np.array([1, 4, 16, num_boxes])
                reshape_4 = self.onnx_reshape_with_allowzero(split_0, shape1, allowzero=0)

                transpose_1 = reshape_4.transpose(0, 2, 1, 3)

                axis = 1  # As per ONNX softmax node

                # Subtract max for numerical stability
                x_max = np.max(transpose_1, axis=axis, keepdims=True)
                x_exp = np.exp(transpose_1 - x_max)
                x_sum = np.sum(x_exp, axis=axis, keepdims=True)
                softmax_output = x_exp / x_sum

                # Weight W from the ONNX initializer (1, 16, 1, 1) with values 0 to 15
                W = np.arange(16, dtype=np.float32).reshape(1, 16, 1, 1)

                # Apply 1x1 convolution: this is a weighted sum over channels
                conv_output = np.sum(softmax_output * W, axis=1, keepdims=True)  # shape: (1, 1, 4, 8400)

                shape2 = np.array([1, 4, num_boxes])
                reshape_5 = self.onnx_reshape_with_allowzero(conv_output, shape2, allowzero=0)

                # ONNX Slice — get first 2 channels: [0:2] along axis 1
                slice_output1 = reshape_5[:, 0:2, :]  # Result: (1, 2, 8400)

                # Slice channels 2 to 4 → axis = 1
                slice_output2 = reshape_5[:, 2:4, :]

                # Perform Subtraction
                sub_output = self.const_A - slice_output1  # Equivalent to ONNX Sub

                # Perform the ONNX-style Add
                add_output = self.const_B + slice_output2

                sub1 = add_output - sub_output

                add1 = sub_output + add_output

                div_output = add1 / 2.0

                concat_5 = self.onnx_concat([div_output, sub1], axis=1)

                # Expand B to (1, 1, 8400) so it can broadcast across axis=1 (4 channels)
                const_C_expanded = self.const_C[:, np.newaxis, :]  # Shape: (1, 1, 8400)

                # Perform ONNX-style element-wise multiplication
                mul_output = concat_5 * const_C_expanded  # Result: (1, 4, 8400)

                sigmoid_output = self.sigmoid(split_1)
                outputs = self.onnx_concat([mul_output, sigmoid_output], axis=1)

                final_detections = post_process_yolo(
                    outputs, self.memx_model_width, self.memx_model_height
                )
            # Use complete YOLOv9-style postprocessing (includes NMS)
            final_detections = self.post_process_yolo_optimized(outputs)
            self.output_queue.put(final_detections)

        elif self.memx_model_type == ModelTypeEnum.yolonas:
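As a quick sanity check of the anchor layout the new code builds, the helper below reproduces _generate_anchors standalone for a 320x320 input (grid sizes 40/20/10, matching the num_boxes = 2100 case above):

import numpy as np

def generate_anchors(sizes=[40, 20, 10]):  # grids for a 320x320 input
    ys, xs = [], []
    for s in sizes:
        r = np.arange(s) + 0.5
        ys.append(np.repeat(r, s))
        xs.append(np.repeat(r[None, ...], s, axis=0).flatten())
    return np.stack([np.concatenate(xs), np.concatenate(ys)], axis=1)

anchors = generate_anchors()
print(anchors.shape)  # (2100, 2) == 40*40 + 20*20 + 10*10 grid-cell centers
print(anchors[0])     # [0.5 0.5] -> center of the top-left cell, in grid units

Multiplying each anchor by its per-level stride (8, 16, or 32, from _generate_scales) maps these grid-unit centers back to pixel coordinates, which is what dist2bbox relies on.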
@ -195,6 +195,7 @@ class EmbeddingMaintainer(threading.Thread):
                        self.config,
                        model_config,
                        self.event_metadata_publisher,
                        self.requestor,
                        self.metrics,
                    )
                )

@ -339,6 +340,7 @@ class EmbeddingMaintainer(threading.Thread):
                    self.config,
                    model_config,
                    self.event_metadata_publisher,
                    self.requestor,
                    self.metrics,
                )
@ -362,7 +362,7 @@ def stats_snapshot(
        stats["embeddings"]["review_description_speed"] = round(
            embeddings_metrics.review_desc_speed.value * 1000, 2
        )
        stats["embeddings"]["review_descriptions"] = round(
        stats["embeddings"]["review_description_events_per_second"] = round(
            embeddings_metrics.review_desc_dps.value, 2
        )

@ -370,7 +370,7 @@ def stats_snapshot(
        stats["embeddings"]["object_description_speed"] = round(
            embeddings_metrics.object_desc_speed.value * 1000, 2
        )
        stats["embeddings"]["object_descriptions"] = round(
        stats["embeddings"]["object_description_events_per_second"] = round(
            embeddings_metrics.object_desc_dps.value, 2
        )

@ -378,7 +378,7 @@ def stats_snapshot(
        stats["embeddings"][f"{key}_classification_speed"] = round(
            embeddings_metrics.classification_speeds[key].value * 1000, 2
        )
        stats["embeddings"][f"{key}_classification"] = round(
        stats["embeddings"][f"{key}_classification_events_per_second"] = round(
            embeddings_metrics.classification_cps[key].value, 2
        )
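After the rename, a stats snapshot would carry the new keys alongside the unchanged speed keys. A sketch of the relevant fragment, with made-up values:

# Illustrative fragment of the stats payload after this change:
stats = {
    "embeddings": {
        "review_description_speed": 412.37,            # ms per inference, unchanged key
        "review_description_events_per_second": 0.12,  # was "review_descriptions"
        "object_description_speed": 389.02,
        "object_description_events_per_second": 0.34,  # was "object_descriptions"
    }
}

Anything consuming the old "review_descriptions" / "object_descriptions" / "{key}_classification" keys needs updating; the frontend translation keys later in this diff follow suit.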
@ -113,6 +113,7 @@ class StorageMaintainer(threading.Thread):
        recordings: Recordings = (
            Recordings.select(
                Recordings.id,
                Recordings.camera,
                Recordings.start_time,
                Recordings.end_time,
                Recordings.segment_size,

@ -137,7 +138,7 @@ class StorageMaintainer(threading.Thread):
        )

        event_start = 0
        deleted_recordings = set()
        deleted_recordings = []
        for recording in recordings:
            # check if 1 hour of storage has been reclaimed
            if deleted_segments_size > hourly_bandwidth:

@ -172,7 +173,7 @@ class StorageMaintainer(threading.Thread):
            if not keep:
                try:
                    clear_and_unlink(Path(recording.path), missing_ok=False)
                    deleted_recordings.add(recording.id)
                    deleted_recordings.append(recording)
                    deleted_segments_size += recording.segment_size
                except FileNotFoundError:
                    # this file was not found so we must assume no space was cleaned up

@ -186,6 +187,9 @@ class StorageMaintainer(threading.Thread):
            recordings = (
                Recordings.select(
                    Recordings.id,
                    Recordings.camera,
                    Recordings.start_time,
                    Recordings.end_time,
                    Recordings.path,
                    Recordings.segment_size,
                )

@ -201,7 +205,7 @@ class StorageMaintainer(threading.Thread):
                try:
                    clear_and_unlink(Path(recording.path), missing_ok=False)
                    deleted_segments_size += recording.segment_size
                    deleted_recordings.add(recording.id)
                    deleted_recordings.append(recording)
                except FileNotFoundError:
                    # this file was not found so we must assume no space was cleaned up
                    pass

@ -211,7 +215,50 @@ class StorageMaintainer(threading.Thread):
        logger.debug(f"Expiring {len(deleted_recordings)} recordings")
        # delete up to 100,000 at a time
        max_deletes = 100000
        deleted_recordings_list = list(deleted_recordings)

        # Update has_clip for events that overlap with deleted recordings
        if deleted_recordings:
            # Group deleted recordings by camera
            camera_recordings = {}
            for recording in deleted_recordings:
                if recording.camera not in camera_recordings:
                    camera_recordings[recording.camera] = {
                        "min_start": recording.start_time,
                        "max_end": recording.end_time,
                    }
                else:
                    camera_recordings[recording.camera]["min_start"] = min(
                        camera_recordings[recording.camera]["min_start"],
                        recording.start_time,
                    )
                    camera_recordings[recording.camera]["max_end"] = max(
                        camera_recordings[recording.camera]["max_end"],
                        recording.end_time,
                    )

            # Find all events that overlap with deleted recordings time range per camera
            events_to_update = []
            for camera, time_range in camera_recordings.items():
                overlapping_events = Event.select(Event.id).where(
                    Event.camera == camera,
                    Event.has_clip == True,
                    Event.start_time < time_range["max_end"],
                    Event.end_time > time_range["min_start"],
                )

                for event in overlapping_events:
                    events_to_update.append(event.id)

            # Update has_clip to False for overlapping events
            if events_to_update:
                for i in range(0, len(events_to_update), max_deletes):
                    batch = events_to_update[i : i + max_deletes]
                    Event.update(has_clip=False).where(Event.id << batch).execute()
                logger.debug(
                    f"Updated has_clip to False for {len(events_to_update)} events"
                )

        deleted_recordings_list = [r.id for r in deleted_recordings]
        for i in range(0, len(deleted_recordings_list), max_deletes):
            Recordings.delete().where(
                Recordings.id << deleted_recordings_list[i : i + max_deletes]
            )
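The overlap test in the query above is the standard interval-intersection predicate: two time ranges intersect exactly when each starts before the other ends. A standalone sketch of the same logic:

# Intervals [a_start, a_end) and [b_start, b_end) overlap iff
# a_start < b_end and a_end > b_start — the same predicate the
# Event query applies per camera against the deleted time range.
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and a_end > b_start

assert overlaps(10, 20, 15, 30)      # partial overlap
assert not overlaps(10, 20, 20, 30)  # merely touching endpoints does not count

Note that grouping by camera uses one coarse [min_start, max_end] span per camera, so an event falling in a gap between two deleted segments can still be flagged; that trades a little precision for a single cheap query per camera.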
@ -30,3 +30,4 @@ class TrackedObjectUpdateTypesEnum(str, Enum):
    description = "description"
    face = "face"
    lpr = "lpr"
    classification = "classification"
@ -130,8 +130,13 @@ def get_soc_type() -> Optional[str]:
    """Get the SoC type from device tree."""
    try:
        with open("/proc/device-tree/compatible") as file:
            soc = file.read().split(",")[-1].strip("\x00")
            return soc
            content = file.read()

            # Check for Jetson devices
            if "nvidia" in content:
                return None

            return content.split(",")[-1].strip("\x00")
    except FileNotFoundError:
        logger.debug("Could not determine SoC type from device tree")
        return None
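A standalone sketch of the parsing, with made-up device-tree strings (real /proc/device-tree/compatible values vary by board):

def parse_soc(content: str):
    if "nvidia" in content:  # Jetson boards expose nvidia,* compatible entries
        return None
    # Compatible strings are NUL-separated "vendor,model" pairs; keep the
    # last model token, e.g. "rockchip,rk3588\x00" -> "rk3588".
    return content.split(",")[-1].strip("\x00")

print(parse_soc("rockchip,rk3588\x00"))        # -> "rk3588" (illustrative)
print(parse_soc("nvidia,jetson-orin\x00"))     # -> None (illustrative)

Returning None for NVIDIA entries avoids misreporting Jetson devices as a generic SoC, since they are handled by their own stats path.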
@ -16,7 +16,7 @@ from frigate.comms.recordings_updater import (
    RecordingsDataSubscriber,
    RecordingsDataTypeEnum,
)
from frigate.config import CameraConfig, DetectConfig, ModelConfig
from frigate.config import CameraConfig, DetectConfig, LoggerConfig, ModelConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.config.camera.updater import (
    CameraConfigUpdateEnum,

@ -539,6 +539,7 @@ class CameraCapture(FrigateProcess):
        shm_frame_count: int,
        camera_metrics: CameraMetrics,
        stop_event: MpEvent,
        log_config: LoggerConfig | None = None,
    ) -> None:
        super().__init__(
            stop_event,

@ -549,9 +550,10 @@ class CameraCapture(FrigateProcess):
        self.config = config
        self.shm_frame_count = shm_frame_count
        self.camera_metrics = camera_metrics
        self.log_config = log_config

    def run(self) -> None:
        self.pre_run_setup()
        self.pre_run_setup(self.log_config)
        camera_watchdog = CameraWatchdog(
            self.config,
            self.shm_frame_count,

@ -577,6 +579,7 @@ class CameraTracker(FrigateProcess):
        ptz_metrics: PTZMetrics,
        region_grid: list[list[dict[str, Any]]],
        stop_event: MpEvent,
        log_config: LoggerConfig | None = None,
    ) -> None:
        super().__init__(
            stop_event,

@ -592,9 +595,10 @@ class CameraTracker(FrigateProcess):
        self.camera_metrics = camera_metrics
        self.ptz_metrics = ptz_metrics
        self.region_grid = region_grid
        self.log_config = log_config

    def run(self) -> None:
        self.pre_run_setup()
        self.pre_run_setup(self.log_config)
        frame_queue = self.camera_metrics.frame_queue
        frame_shape = self.config.frame_shape
33
web/images/branding/LICENSE
Normal file
@ -0,0 +1,33 @@
# COPYRIGHT AND TRADEMARK NOTICE

The images, logos, and icons contained in this directory (the "Brand Assets") are
proprietary to Frigate LLC and are NOT covered by the MIT License governing the
rest of this repository.

1. TRADEMARK STATUS
The "Frigate" name and the accompanying logo are common law trademarks™ of
Frigate LLC. Frigate LLC reserves all rights to these marks.

2. LIMITED PERMISSION FOR USE
Permission is hereby granted to display these Brand Assets strictly for the
following purposes:
a. To execute the software interface on a local machine.
b. To identify the software in documentation or reviews (nominative use).

3. RESTRICTIONS
You may NOT:
a. Use these Brand Assets to represent a derivative work (fork) as an official
product of Frigate LLC.
b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
commercial affiliation with Frigate LLC.
c. Modify or alter the Brand Assets.

If you fork this repository with the intent to distribute a modified or competing
version of the software, you must replace these Brand Assets with your own
original content.

For full usage guidelines, see the TRADEMARK.md file in the
repository root.

ALL RIGHTS RESERVED.
Copyright (c) 2025 Frigate LLC.
Before Width: | Height: | Size: 3.9 KiB After Width: | Height: | Size: 3.9 KiB
Before Width: | Height: | Size: 558 B After Width: | Height: | Size: 558 B
Before Width: | Height: | Size: 800 B After Width: | Height: | Size: 800 B
Before Width: | Height: | Size: 15 KiB After Width: | Height: | Size: 15 KiB
Before Width: | Height: | Size: 12 KiB After Width: | Height: | Size: 12 KiB
Before Width: | Height: | Size: 2.9 KiB After Width: | Height: | Size: 2.9 KiB
Before Width: | Height: | Size: 2.6 KiB After Width: | Height: | Size: 2.6 KiB
@ -2,29 +2,29 @@
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="/images/favicon.ico" />
    <link rel="icon" href="/images/branding/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Frigate</title>
    <link
      rel="apple-touch-icon"
      sizes="180x180"
      href="/images/apple-touch-icon.png"
      href="/images/branding/apple-touch-icon.png"
    />
    <link
      rel="icon"
      type="image/png"
      sizes="32x32"
      href="/images/favicon-32x32.png"
      href="/images/branding/favicon-32x32.png"
    />
    <link
      rel="icon"
      type="image/png"
      sizes="16x16"
      href="/images/favicon-16x16.png"
      href="/images/branding/favicon-16x16.png"
    />
    <link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
    <link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
    <link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
    <link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
    <link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
    <meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
    <meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
  </head>
@ -2,29 +2,29 @@
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="/images/favicon.ico" />
    <link rel="icon" href="/images/branding/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Frigate</title>
    <link
      rel="apple-touch-icon"
      sizes="180x180"
      href="/images/apple-touch-icon.png"
      href="/images/branding/apple-touch-icon.png"
    />
    <link
      rel="icon"
      type="image/png"
      sizes="32x32"
      href="/images/favicon-32x32.png"
      href="/images/branding/favicon-32x32.png"
    />
    <link
      rel="icon"
      type="image/png"
      sizes="16x16"
      href="/images/favicon-16x16.png"
      href="/images/branding/favicon-16x16.png"
    />
    <link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
    <link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
    <link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
    <link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
    <link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
    <meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
    <meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
  </head>
92
web/package-lock.json
generated
@ -116,7 +116,7 @@
        "prettier-plugin-tailwindcss": "^0.6.5",
        "tailwindcss": "^3.4.9",
        "typescript": "^5.8.2",
        "vite": "^6.2.0",
        "vite": "^6.4.1",
        "vitest": "^3.0.7"
      }
    },

@ -9502,6 +9502,54 @@
      "dev": true,
      "license": "MIT"
    },
    "node_modules/tinyglobby": {
      "version": "0.2.15",
      "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
      "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "fdir": "^6.5.0",
        "picomatch": "^4.0.3"
      },
      "engines": {
        "node": ">=12.0.0"
      },
      "funding": {
        "url": "https://github.com/sponsors/SuperchupuDev"
      }
    },
    "node_modules/tinyglobby/node_modules/fdir": {
      "version": "6.5.0",
      "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
      "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=12.0.0"
      },
      "peerDependencies": {
        "picomatch": "^3 || ^4"
      },
      "peerDependenciesMeta": {
        "picomatch": {
          "optional": true
        }
      }
    },
    "node_modules/tinyglobby/node_modules/picomatch": {
      "version": "4.0.3",
      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
      "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=12"
      },
      "funding": {
        "url": "https://github.com/sponsors/jonschlinkert"
      }
    },
    "node_modules/tinypool": {
      "version": "1.0.2",
      "resolved": "https://registry.npmjs.org/tinypool/-/tinypool-1.0.2.tgz",

@ -9868,15 +9916,18 @@
      }
    },
    "node_modules/vite": {
      "version": "6.2.0",
      "version": "6.4.1",
      "resolved": "https://registry.npmjs.org/vite/-/vite-6.2.0.tgz",
      "resolved": "https://registry.npmjs.org/vite/-/vite-6.4.1.tgz",
      "integrity": "sha512-7dPxoo+WsT/64rDcwoOjk76XHj+TqNTIvHKcuMQ1k4/SeHDaQt5GFAeLYzrimZrMpn/O6DtdI03WUjdxuPM0oQ==",
      "integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "esbuild": "^0.25.0",
        "fdir": "^6.4.4",
        "picomatch": "^4.0.2",
        "postcss": "^8.5.3",
        "rollup": "^4.30.1"
        "rollup": "^4.34.9",
        "tinyglobby": "^0.2.13"
      },
      "bin": {
        "vite": "bin/vite.js"

@ -9970,6 +10021,37 @@
      "monaco-editor": ">=0.33.0"
    }
  },
    "node_modules/vite/node_modules/fdir": {
      "version": "6.5.0",
      "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
      "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=12.0.0"
      },
      "peerDependencies": {
        "picomatch": "^3 || ^4"
      },
      "peerDependenciesMeta": {
        "picomatch": {
          "optional": true
        }
      }
    },
    "node_modules/vite/node_modules/picomatch": {
      "version": "4.0.3",
      "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
      "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
      "dev": true,
      "license": "MIT",
      "engines": {
        "node": ">=12"
      },
      "funding": {
        "url": "https://github.com/sponsors/jonschlinkert"
      }
    },
    "node_modules/vitest": {
      "version": "3.0.7",
      "resolved": "https://registry.npmjs.org/vitest/-/vitest-3.0.7.tgz",
@ -122,7 +122,7 @@
    "prettier-plugin-tailwindcss": "^0.6.5",
    "tailwindcss": "^3.4.9",
    "typescript": "^5.8.2",
    "vite": "^6.2.0",
    "vite": "^6.4.1",
    "vitest": "^3.0.7"
  }
}
@ -103,7 +103,7 @@
    "regenerate": "A new description has been requested from {{provider}}. Depending on the speed of your provider, the new description may take some time to regenerate.",
    "updatedSublabel": "Successfully updated sub label.",
    "updatedLPR": "Successfully updated license plate.",
    "audioTranscription": "Successfully requested audio transcription."
    "audioTranscription": "Successfully requested audio transcription. Depending on the speed of your Frigate server, the transcription may take some time to complete."
  },
  "error": {
    "regenerate": "Failed to call {{provider}} for a new description: {{errorMessage}}",
@ -177,6 +177,10 @@
|
|||||||
"noCameras": {
|
"noCameras": {
|
||||||
"title": "No Cameras Configured",
|
"title": "No Cameras Configured",
|
||||||
"description": "Get started by connecting a camera to Frigate.",
|
"description": "Get started by connecting a camera to Frigate.",
|
||||||
"buttonText": "Add Camera"
|
"buttonText": "Add Camera",
|
||||||
|
"restricted": {
|
||||||
|
"title": "No Cameras Available",
|
||||||
|
"description": "You don't have permission to view any cameras in this group."
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
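The new `noCameras.restricted` strings are consumed through react-i18next. A minimal sketch of a consumer, assuming the `components/camera` namespace used elsewhere in this diff; the component name is hypothetical:

```tsx
import React from "react";
import { useTranslation } from "react-i18next";

// Hypothetical consumer of the new "noCameras.restricted" keys.
export function RestrictedCamerasNotice() {
  const { t } = useTranslation(["components/camera"]);
  return (
    <div>
      <h2>{t("noCameras.restricted.title")}</h2>
      <p>{t("noCameras.restricted.description")}</p>
    </div>
  );
}
```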
@ -76,7 +76,12 @@
       }
     },
     "npuUsage": "NPU Usage",
-    "npuMemory": "NPU Memory"
+    "npuMemory": "NPU Memory",
+    "intelGpuWarning": {
+      "title": "Intel GPU Stats Warning",
+      "message": "GPU stats unavailable",
+      "description": "This is a known bug in Intel's GPU stats reporting tools (intel_gpu_top) where it will break and repeatedly return a GPU usage of 0% even in cases where hardware acceleration and object detection are correctly running on the (i)GPU. This is not a Frigate bug. You can restart the host to temporarily fix the issue and confirm that the GPU is working correctly. This does not affect performance."
+    }
   },
   "otherProcesses": {
     "title": "Other Processes",

@ -169,6 +174,7 @@
   "enrichments": {
     "title": "Enrichments",
     "infPerSecond": "Inferences Per Second",
+    "averageInf": "Average Inference Time",
     "embeddings": {
       "image_embedding": "Image Embedding",
       "text_embedding": "Text Embedding",

@ -180,7 +186,13 @@
       "plate_recognition_speed": "Plate Recognition Speed",
       "text_embedding_speed": "Text Embedding Speed",
       "yolov9_plate_detection_speed": "YOLOv9 Plate Detection Speed",
-      "yolov9_plate_detection": "YOLOv9 Plate Detection"
+      "yolov9_plate_detection": "YOLOv9 Plate Detection",
+      "review_description": "Review Description",
+      "review_description_speed": "Review Description Speed",
+      "review_description_events_per_second": "Review Description",
+      "object_description": "Object Description",
+      "object_description_speed": "Object Description Speed",
+      "object_description_events_per_second": "Object Description"
     }
   }
 }
@ -44,11 +44,16 @@ self.addEventListener("notificationclick", (event) => {
   switch (event.action ?? "default") {
     case "markReviewed":
       if (event.notification.data) {
-        fetch("/api/reviews/viewed", {
-          method: "POST",
-          headers: { "Content-Type": "application/json", "X-CSRF-TOKEN": 1 },
-          body: JSON.stringify({ ids: [event.notification.data.id] }),
-        });
+        event.waitUntil(
+          fetch("/api/reviews/viewed", {
+            method: "POST",
+            headers: {
+              "Content-Type": "application/json",
+              "X-CSRF-TOKEN": 1,
+            },
+            body: JSON.stringify({ ids: [event.notification.data.id] }),
+          }), // eslint-disable-line comma-dangle
+        );
       }
       break;
     default:

@ -58,7 +63,7 @@ self.addEventListener("notificationclick", (event) => {
       // eslint-disable-next-line no-undef
       if (clients.openWindow) {
         // eslint-disable-next-line no-undef
-        return clients.openWindow(url);
+        event.waitUntil(clients.openWindow(url));
       }
     }
   }
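Both service-worker hunks above move async work inside `event.waitUntil()`. The browser may terminate a service worker as soon as an event handler returns; `waitUntil` extends the worker's lifetime until the wrapped promise settles, so the POST is no longer at risk of being cut off mid-flight. A minimal standalone sketch of the pattern (endpoint and payload mirror the diff; not a drop-in copy of the worker):

```ts
// Sketch: keep the service worker alive until the async work finishes.
self.addEventListener("notificationclick", (event: any) => {
  const work = fetch("/api/reviews/viewed", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids: [event.notification?.data?.id] }),
  });
  // Without waitUntil, the worker may be torn down before the fetch completes,
  // because the handler itself returns synchronously.
  event.waitUntil(work);
});
```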
@ -398,11 +398,7 @@ export function GroupedClassificationCard({
           threshold={threshold}
           selected={false}
           i18nLibrary={i18nLibrary}
-          onClick={(data, meta) => {
-            if (meta || selectedItems.length > 0) {
-              onClick(data);
-            }
-          }}
+          onClick={() => {}}
         >
           {children?.(data)}
         </ClassificationCard>
@ -9,7 +9,7 @@ import useSWR from "swr";
 import { MdHome } from "react-icons/md";
 import { usePersistedOverlayState } from "@/hooks/use-overlay-state";
 import { Button, buttonVariants } from "../ui/button";
-import { useCallback, useMemo, useState } from "react";
+import { useCallback, useEffect, useMemo, useState } from "react";
 import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
 import { LuPencil, LuPlus } from "react-icons/lu";
 import {

@ -87,6 +87,8 @@ type CameraGroupSelectorProps = {
 export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
   const { t } = useTranslation(["components/camera"]);
   const { data: config } = useSWR<FrigateConfig>("config");
+  const allowedCameras = useAllowedCameras();
+  const isCustomRole = useIsCustomRole();

   // tooltip

@ -119,10 +121,22 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
       return [];
     }

-    return Object.entries(config.camera_groups).sort(
-      (a, b) => a[1].order - b[1].order,
-    );
-  }, [config]);
+    const allGroups = Object.entries(config.camera_groups);
+
+    // If custom role, filter out groups where user has no accessible cameras
+    if (isCustomRole) {
+      return allGroups
+        .filter(([, groupConfig]) => {
+          // Check if user has access to at least one camera in this group
+          return groupConfig.cameras.some((cameraName) =>
+            allowedCameras.includes(cameraName),
+          );
+        })
+        .sort((a, b) => a[1].order - b[1].order);
+    }
+
+    return allGroups.sort((a, b) => a[1].order - b[1].order);
+  }, [config, allowedCameras, isCustomRole]);

   // add group

@ -139,6 +153,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
         activeGroup={group}
         setGroup={setGroup}
         deleteGroup={deleteGroup}
+        isCustomRole={isCustomRole}
       />
       <Scroller className={`${isMobile ? "whitespace-nowrap" : ""}`}>
         <div

@ -206,14 +221,16 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
             );
           })}

-          <Button
-            className="bg-secondary text-muted-foreground"
-            aria-label={t("group.add")}
-            size="xs"
-            onClick={() => setAddGroup(true)}
-          >
-            <LuPlus className="size-4 text-primary" />
-          </Button>
+          {!isCustomRole && (
+            <Button
+              className="bg-secondary text-muted-foreground"
+              aria-label={t("group.add")}
+              size="xs"
+              onClick={() => setAddGroup(true)}
+            >
+              <LuPlus className="size-4 text-primary" />
+            </Button>
+          )}
           {isMobile && <ScrollBar orientation="horizontal" className="h-0" />}
         </div>
       </Scroller>

@ -228,6 +245,7 @@ type NewGroupDialogProps = {
   activeGroup?: string;
   setGroup: (value: string | undefined, replace?: boolean | undefined) => void;
   deleteGroup: () => void;
+  isCustomRole?: boolean;
 };
 function NewGroupDialog({
   open,

@ -236,6 +254,7 @@ function NewGroupDialog({
   activeGroup,
   setGroup,
   deleteGroup,
+  isCustomRole,
 }: NewGroupDialogProps) {
   const { t } = useTranslation(["components/camera"]);
   const { mutate: updateConfig } = useSWR<FrigateConfig>("config");

@ -261,6 +280,12 @@ function NewGroupDialog({
     `${activeGroup}-draggable-layout`,
   );

+  useEffect(() => {
+    if (!open) {
+      setEditState("none");
+    }
+  }, [open]);
+
   // callbacks

   const onDeleteGroup = useCallback(

@ -349,13 +374,7 @@ function NewGroupDialog({
         position="top-center"
         closeButton={true}
       />
-      <Overlay
-        open={open}
-        onOpenChange={(open) => {
-          setEditState("none");
-          setOpen(open);
-        }}
-      >
+      <Overlay open={open} onOpenChange={setOpen}>
         <Content
           className={cn(
             "scrollbar-container overflow-y-auto",

@ -371,28 +390,30 @@ function NewGroupDialog({
         >
           <Title>{t("group.label")}</Title>
           <Description className="sr-only">{t("group.edit")}</Description>
-          <div
-            className={cn(
-              "absolute",
-              isDesktop && "right-6 top-10",
-              isMobile && "absolute right-0 top-4",
-            )}
-          >
-            <Button
-              size="sm"
-              className={cn(
-                isDesktop &&
-                  "size-6 rounded-md bg-secondary-foreground p-1 text-background",
-                isMobile && "text-secondary-foreground",
-              )}
-              aria-label={t("group.add")}
-              onClick={() => {
-                setEditState("add");
-              }}
-            >
-              <LuPlus />
-            </Button>
-          </div>
+          {!isCustomRole && (
+            <div
+              className={cn(
+                "absolute",
+                isDesktop && "right-6 top-10",
+                isMobile && "absolute right-0 top-4",
+              )}
+            >
+              <Button
+                size="sm"
+                className={cn(
+                  isDesktop &&
+                    "size-6 rounded-md bg-secondary-foreground p-1 text-background",
+                  isMobile && "text-secondary-foreground",
+                )}
+                aria-label={t("group.add")}
+                onClick={() => {
+                  setEditState("add");
+                }}
+              >
+                <LuPlus />
+              </Button>
+            </div>
+          )}
         </Header>
         <div className="flex flex-col gap-4 md:gap-3">
           {currentGroups.map((group) => (

@ -401,6 +422,7 @@ function NewGroupDialog({
               group={group}
               onDeleteGroup={() => onDeleteGroup(group[0])}
               onEditGroup={() => onEditGroup(group)}
+              isReadOnly={isCustomRole}
             />
           ))}
         </div>

@ -512,12 +534,14 @@ type CameraGroupRowProps = {
   group: [string, CameraGroupConfig];
   onDeleteGroup: () => void;
   onEditGroup: () => void;
+  isReadOnly?: boolean;
 };

 export function CameraGroupRow({
   group,
   onDeleteGroup,
   onEditGroup,
+  isReadOnly,
 }: CameraGroupRowProps) {
   const { t } = useTranslation(["components/camera"]);
   const [deleteDialogOpen, setDeleteDialogOpen] = useState(false);

@ -564,7 +588,7 @@ export function CameraGroupRow({
         </AlertDialogContent>
       </AlertDialog>

-      {isMobile && (
+      {isMobile && !isReadOnly && (
         <>
           <DropdownMenu modal={!isDesktop}>
             <DropdownMenuTrigger>

@ -589,7 +613,7 @@ export function CameraGroupRow({
           </DropdownMenu>
         </>
       )}
-      {!isMobile && (
+      {!isMobile && !isReadOnly && (
         <div className="flex flex-row items-center gap-2">
           <Tooltip>
             <TooltipTrigger asChild>
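The `@ -119` hunk's core logic is a filter-then-sort over `config.camera_groups`. A pure-function sketch of that logic, with types trimmed to the fields it touches (the real `CameraGroupConfig` has more fields):

```ts
// Sketch of the group visibility rule added above: custom-role users only
// see groups containing at least one camera they are allowed to view.
type GroupConfig = { cameras: string[]; order: number };

function visibleGroups(
  groups: Record<string, GroupConfig>,
  allowedCameras: string[],
  isCustomRole: boolean,
): [string, GroupConfig][] {
  const all = Object.entries(groups);
  const filtered = isCustomRole
    ? all.filter(([, g]) => g.cameras.some((c) => allowedCameras.includes(c)))
    : all;
  // Sort by the configured display order either way.
  return filtered.sort((a, b) => a[1].order - b[1].order);
}
```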
@ -572,9 +572,8 @@ export function SortTypeContent({
         className="w-full space-y-1"
       >
         {availableSortTypes.map((value) => (
-          <div className="flex flex-row gap-2">
+          <div key={value} className="flex flex-row gap-2">
             <RadioGroupItem
-              key={value}
               value={value}
               id={`sort-${value}`}
               className={
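The SortTypeContent fix moves `key` from the nested `RadioGroupItem` up to the element the `map` callback returns, which is where React needs it for list reconciliation. A minimal sketch of the rule:

```tsx
import React from "react";

const items = ["date", "score", "label"];

// Correct: the key goes on the outermost element returned from map(),
// not on a child nested inside it.
const list = items.map((value) => (
  <div key={value}>
    <input id={`sort-${value}`} value={value} type="radio" />
  </div>
));
```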
@ -4,9 +4,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
 import { baseUrl } from "@/api/baseUrl";
 import { toast } from "sonner";
 import axios from "axios";
-import { LuCamera, LuDownload, LuTrash2 } from "react-icons/lu";
 import { FiMoreVertical } from "react-icons/fi";
-import { MdImageSearch } from "react-icons/md";
 import { buttonVariants } from "@/components/ui/button";
 import {
   ContextMenu,

@ -31,11 +29,8 @@ import {
   AlertDialogTitle,
 } from "@/components/ui/alert-dialog";
 import useSWR from "swr";

 import { Trans, useTranslation } from "react-i18next";
-import { BsFillLightningFill } from "react-icons/bs";
 import BlurredIconButton from "../button/BlurredIconButton";
-import { PiPath } from "react-icons/pi";
-
 type SearchResultActionsProps = {
   searchResult: SearchResult;

@ -98,7 +93,6 @@ export default function SearchResultActions({
             href={`${baseUrl}api/events/${searchResult.id}/clip.mp4`}
             download={`${searchResult.camera}_${searchResult.label}.mp4`}
           >
-            <LuDownload className="mr-2 size-4" />
             <span>{t("itemMenu.downloadVideo.label")}</span>
           </a>
         </MenuItem>

@ -110,7 +104,6 @@ export default function SearchResultActions({
             href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
             download={`${searchResult.camera}_${searchResult.label}.jpg`}
           >
-            <LuCamera className="mr-2 size-4" />
             <span>{t("itemMenu.downloadSnapshot.label")}</span>
           </a>
         </MenuItem>

@ -120,44 +113,31 @@ export default function SearchResultActions({
           aria-label={t("itemMenu.viewTrackingDetails.aria")}
           onClick={showTrackingDetails}
         >
-          <PiPath className="mr-2 size-4" />
           <span>{t("itemMenu.viewTrackingDetails.label")}</span>
         </MenuItem>
       )}
-      {config?.semantic_search?.enabled && isContextMenu && (
-        <MenuItem
-          aria-label={t("itemMenu.findSimilar.aria")}
-          onClick={findSimilar}
-        >
-          <MdImageSearch className="mr-2 size-4" />
-          <span>{t("itemMenu.findSimilar.label")}</span>
-        </MenuItem>
-      )}
-      {config?.semantic_search?.enabled &&
-        searchResult.data.type == "object" && (
-          <MenuItem
-            aria-label={t("itemMenu.addTrigger.aria")}
-            onClick={addTrigger}
-          >
-            <BsFillLightningFill className="mr-2 size-4" />
-            <span>{t("itemMenu.addTrigger.label")}</span>
-          </MenuItem>
-        )}
       {config?.semantic_search?.enabled &&
         searchResult.data.type == "object" && (
           <MenuItem
             aria-label={t("itemMenu.findSimilar.aria")}
             onClick={findSimilar}
           >
-            <MdImageSearch className="mr-2 size-4" />
             <span>{t("itemMenu.findSimilar.label")}</span>
           </MenuItem>
         )}
+      {config?.semantic_search?.enabled &&
+        searchResult.data.type == "object" && (
+          <MenuItem
+            aria-label={t("itemMenu.addTrigger.aria")}
+            onClick={addTrigger}
+          >
+            <span>{t("itemMenu.addTrigger.label")}</span>
+          </MenuItem>
+        )}
       <MenuItem
         aria-label={t("itemMenu.deleteTrackedObject.label")}
         onClick={() => setDeleteDialogOpen(true)}
       >
-        <LuTrash2 className="mr-2 size-4" />
         <span>{t("button.delete", { ns: "common" })}</span>
       </MenuItem>
     </>
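The reordered menu items above are all gated the same way: render only when the relevant capability is enabled in config and the result type matches. A trimmed sketch of that gating pattern (shapes reduced for illustration):

```tsx
import React from "react";

// Sketch: a menu entry that renders only when semantic search is enabled
// and the tracked object is of the right type. Shapes are simplified.
function TriggerMenuItem({
  config,
  result,
}: {
  config?: { semantic_search?: { enabled: boolean } };
  result: { data: { type: string } };
}) {
  if (!(config?.semantic_search?.enabled && result.data.type === "object")) {
    return null;
  }
  return <button>Add Trigger</button>;
}
```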
@ -46,13 +46,13 @@ export default function NavItem({
       onClick={onClick}
       className={({ isActive }) =>
         cn(
-          "flex flex-col items-center justify-center rounded-lg",
+          "flex flex-col items-center justify-center rounded-lg p-[6px]",
           className,
           variants[item.variant ?? "primary"][isActive ? "active" : "inactive"],
         )
       }
     >
-      <Icon className="size-5 md:m-[6px]" />
+      <Icon className="size-5" />
     </NavLink>
   );
@ -13,7 +13,8 @@ import { zodResolver } from "@hookform/resolvers/zod";
 import { useForm } from "react-hook-form";
 import { z } from "zod";
 import ActivityIndicator from "../indicators/activity-indicator";
-import { useEffect, useState } from "react";
+import { useEffect, useState, useMemo } from "react";
+import useSWR from "swr";
 import {
   Dialog,
   DialogContent,

@ -35,6 +36,7 @@ import { LuCheck, LuX } from "react-icons/lu";
 import { useTranslation } from "react-i18next";
 import { isDesktop, isMobile } from "react-device-detect";
 import { cn } from "@/lib/utils";
+import { FrigateConfig } from "@/types/frigateConfig";
 import {
   MobilePage,
   MobilePageContent,

@ -54,9 +56,15 @@ export default function CreateUserDialog({
   onCreate,
   onCancel,
 }: CreateUserOverlayProps) {
+  const { data: config } = useSWR<FrigateConfig>("config");
   const { t } = useTranslation(["views/settings"]);
   const [isLoading, setIsLoading] = useState<boolean>(false);

+  const roles = useMemo(() => {
+    const existingRoles = config ? Object.keys(config.auth?.roles || {}) : [];
+    return Array.from(new Set(["admin", "viewer", ...(existingRoles || [])]));
+  }, [config]);
+
   const formSchema = z
     .object({
       user: z

@ -69,7 +77,7 @@ export default function CreateUserDialog({
       confirmPassword: z
         .string()
         .min(1, t("users.dialog.createUser.confirmPassword")),
-      role: z.enum(["admin", "viewer"]),
+      role: z.string().min(1),
     })
     .refine((data) => data.password === data.confirmPassword, {
       message: t("users.dialog.form.password.notMatch"),

@ -246,24 +254,22 @@ export default function CreateUserDialog({
                   </SelectTrigger>
                 </FormControl>
                 <SelectContent>
-                  <SelectItem
-                    value="admin"
-                    className="flex items-center gap-2"
-                  >
-                    <div className="flex items-center gap-2">
-                      <Shield className="h-4 w-4 text-primary" />
-                      <span>{t("role.admin", { ns: "common" })}</span>
-                    </div>
-                  </SelectItem>
-                  <SelectItem
-                    value="viewer"
-                    className="flex items-center gap-2"
-                  >
-                    <div className="flex items-center gap-2">
-                      <User className="h-4 w-4 text-muted-foreground" />
-                      <span>{t("role.viewer", { ns: "common" })}</span>
-                    </div>
-                  </SelectItem>
+                  {roles.map((r) => (
+                    <SelectItem
+                      value={r}
+                      key={r}
+                      className="flex items-center gap-2"
+                    >
+                      <div className="flex items-center gap-2">
+                        {r === "admin" ? (
+                          <Shield className="h-4 w-4 text-primary" />
+                        ) : (
+                          <User className="h-4 w-4 text-muted-foreground" />
+                        )}
+                        <span>{t(`role.${r}`, { ns: "common" }) || r}</span>
+                      </div>
+                    </SelectItem>
+                  ))}
                 </SelectContent>
               </Select>
               <FormDescription className="text-xs text-muted-foreground">
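The dialog now derives its selectable roles from config at runtime, so the schema is relaxed from a fixed `z.enum` to a non-empty string. A sketch of the idea; the `"operator"` role here is a hypothetical custom role, not something this diff defines:

```ts
import { z } from "zod";

// A fixed enum cannot represent roles discovered from config at runtime,
// so validation drops to "non-empty string" and the role list is
// deduplicated against the built-ins.
const builtinRoles = ["admin", "viewer"];
const configRoles = ["operator"]; // hypothetical role read from config

const roles = Array.from(new Set([...builtinRoles, ...configRoles]));

const formSchema = z.object({
  role: z.string().min(1),
});

console.log(roles); // ["admin", "viewer", "operator"]
console.log(formSchema.safeParse({ role: "operator" }).success); // true
```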
@ -12,6 +12,7 @@ import {
   DropdownMenuContent,
   DropdownMenuItem,
   DropdownMenuLabel,
+  DropdownMenuSeparator,
   DropdownMenuTrigger,
 } from "@/components/ui/dropdown-menu";
 import {

@ -20,7 +21,6 @@ import {
   TooltipTrigger,
 } from "@/components/ui/tooltip";
 import { isDesktop, isMobile } from "react-device-detect";
-import { LuPlus, LuScanFace } from "react-icons/lu";
 import { useTranslation } from "react-i18next";
 import { cn } from "@/lib/utils";
 import React, { ReactNode, useMemo, useState } from "react";

@ -89,27 +89,26 @@ export default function FaceSelectionDialog({
         <DropdownMenuLabel>{t("trainFaceAs")}</DropdownMenuLabel>
         <div
           className={cn(
-            "flex max-h-[40dvh] flex-col overflow-y-auto",
+            "flex max-h-[40dvh] flex-col overflow-y-auto overflow-x-hidden",
             isMobile && "gap-2 pb-4",
           )}
         >
-          <SelectorItem
-            className="flex cursor-pointer gap-2 smart-capitalize"
-            onClick={() => setNewFace(true)}
-          >
-            <LuPlus />
-            {t("createFaceLibrary.new")}
-          </SelectorItem>
           {faceNames.sort().map((faceName) => (
             <SelectorItem
               key={faceName}
               className="flex cursor-pointer gap-2 smart-capitalize"
               onClick={() => onTrainAttempt(faceName)}
             >
-              <LuScanFace />
               {faceName}
             </SelectorItem>
           ))}
+          <DropdownMenuSeparator />
+          <SelectorItem
+            className="flex cursor-pointer gap-2 smart-capitalize"
+            onClick={() => setNewFace(true)}
+          >
+            {t("createFaceLibrary.new")}
+          </SelectorItem>
         </div>
       </SelectorContent>
     </Selector>
@ -171,6 +171,18 @@ export default function ImagePicker({
                   alt={selectedImage?.label || "Selected image"}
                   className="size-16 rounded object-cover"
                   onLoad={() => handleImageLoad(selectedImageId || "")}
+                  onError={(e) => {
+                    // If trigger thumbnail fails to load, fall back to event thumbnail
+                    if (!selectedImage) {
+                      const target = e.target as HTMLImageElement;
+                      if (
+                        target.src.includes("clips/triggers") &&
+                        selectedImageId
+                      ) {
+                        target.src = `${apiHost}api/events/${selectedImageId}/thumbnail.webp`;
+                      }
+                    }
+                  }}
                   loading="lazy"
                 />
                 {selectedImageId && !loadedImages.has(selectedImageId) && (
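The ImagePicker hunk adds an `onError` fallback that swaps the `<img>` source when the primary thumbnail fails. A generic sketch of the pattern with placeholder URLs; the real code above additionally checks for the `clips/triggers` path before swapping:

```tsx
import React from "react";

// Sketch: swap to a secondary URL on load failure, guarding against an
// infinite error loop if the fallback also fails.
function Thumb({ primary, fallback }: { primary: string; fallback: string }) {
  return (
    <img
      src={primary}
      alt=""
      onError={(e) => {
        const img = e.target as HTMLImageElement;
        if (img.src !== fallback) {
          img.src = fallback; // triggers one more load attempt, then stops
        }
      }}
    />
  );
}
```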
@ -42,9 +42,10 @@ export default function DetailActionsMenu({
     return `start/${startTime}/end/${endTime}`;
   }, [search]);

-  const { data: reviewItem } = useSWR<ReviewSegment>([
-    `review/event/${search.id}`,
-  ]);
+  // currently, audio event ids are not saved in review items
+  const { data: reviewItem } = useSWR<ReviewSegment>(
+    search.data?.type === "audio" ? null : [`review/event/${search.id}`],
+  );

   return (
     <DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
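The DetailActionsMenu change uses SWR's conditional-fetching convention: a `null` key means no request is made at all, so audio events (which have no review item) never hit the endpoint. A minimal sketch, assuming a fetcher is configured globally as it is in this app:

```ts
import useSWR from "swr";

// Sketch: a null key tells SWR to skip the request entirely; `data` simply
// stays undefined instead of fetching and erroring.
function useReviewItem(eventId: string, isAudio: boolean) {
  return useSWR(isAudio ? null : [`review/event/${eventId}`]);
}
```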
@ -683,6 +683,22 @@ function ObjectDetailsTab({

   const mutate = useGlobalMutation();

+  // Helper to map over SWR cached search results while preserving
+  // either paginated format (SearchResult[][]) or flat format (SearchResult[])
+  const mapSearchResults = useCallback(
+    (
+      currentData: SearchResult[][] | SearchResult[] | undefined,
+      fn: (event: SearchResult) => SearchResult,
+    ) => {
+      if (!currentData) return currentData;
+      if (Array.isArray(currentData[0])) {
+        return (currentData as SearchResult[][]).map((page) => page.map(fn));
+      }
+      return (currentData as SearchResult[]).map(fn);
+    },
+    [],
+  );
+
   // users

   const isAdmin = useIsAdmin();

@ -791,6 +807,15 @@ function ObjectDetailsTab({
     }
   }, [search]);

+  const isEventsKey = useCallback((key: unknown): boolean => {
+    const candidate = Array.isArray(key) ? key[0] : key;
+    const EVENTS_KEY_PATTERNS = ["events", "events/search", "events/explore"];
+    return (
+      typeof candidate === "string" &&
+      EVENTS_KEY_PATTERNS.some((p) => candidate.includes(p))
+    );
+  }, []);
+
   const updateDescription = useCallback(() => {
     if (!search) {
       return;

@ -805,28 +830,20 @@ function ObjectDetailsTab({
           });
         }
         mutate(
-          (key) =>
-            typeof key === "string" &&
-            (key.includes("events") ||
-              key.includes("events/search") ||
-              key.includes("events/explore")),
-          (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-            if (!currentData) return currentData;
-            // optimistic update
-            return currentData
-              .flat()
-              .map((event) =>
-                event.id === search.id
-                  ? { ...event, data: { ...event.data, description: desc } }
-                  : event,
-              );
-          },
+          (key) => isEventsKey(key),
+          (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+            mapSearchResults(currentData, (event) =>
+              event.id === search.id
+                ? { ...event, data: { ...event.data, description: desc } }
+                : event,
+            ),
           {
            optimisticData: true,
            rollbackOnError: true,
            revalidate: false,
          },
        );
+        setSearch({ ...search, data: { ...search.data, description: desc } });
       })
       .catch((error) => {
         const errorMessage =

@ -843,7 +860,7 @@ function ObjectDetailsTab({
         );
         setDesc(search.data.description);
       });
-  }, [desc, search, mutate, t]);
+  }, [desc, search, mutate, t, mapSearchResults, isEventsKey, setSearch]);

   const regenerateDescription = useCallback(
     (source: "snapshot" | "thumbnails") => {

@ -910,14 +927,9 @@ function ObjectDetailsTab({
          });

          mutate(
-           (key) =>
-             typeof key === "string" &&
-             (key.includes("events") ||
-               key.includes("events/search") ||
-               key.includes("events/explore")),
-           (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-             if (!currentData) return currentData;
-             return currentData.flat().map((event) =>
+           (key) => isEventsKey(key),
+           (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+             mapSearchResults(currentData, (event) =>
                event.id === search.id
                  ? {
                      ...event,

@ -928,8 +940,7 @@ function ObjectDetailsTab({
                      },
                    }
                  : event,
-             );
-           },
+             ),
            {
              optimisticData: true,
              rollbackOnError: true,

@ -963,7 +974,7 @@ function ObjectDetailsTab({
            );
          });
     },
-    [search, apiHost, mutate, setSearch, t],
+    [search, apiHost, mutate, setSearch, t, mapSearchResults, isEventsKey],
   );

   // recognized plate

@ -987,14 +998,9 @@ function ObjectDetailsTab({
          });

          mutate(
-           (key) =>
-             typeof key === "string" &&
-             (key.includes("events") ||
-               key.includes("events/search") ||
-               key.includes("events/explore")),
-           (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-             if (!currentData) return currentData;
-             return currentData.flat().map((event) =>
+           (key) => isEventsKey(key),
+           (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+             mapSearchResults(currentData, (event) =>
                event.id === search.id
                  ? {
                      ...event,

@ -1005,8 +1011,7 @@ function ObjectDetailsTab({
                      },
                    }
                  : event,
-             );
-           },
+             ),
            {
              optimisticData: true,
              rollbackOnError: true,

@ -1040,7 +1045,7 @@ function ObjectDetailsTab({
            );
          });
     },
-    [search, apiHost, mutate, setSearch, t],
+    [search, apiHost, mutate, setSearch, t, mapSearchResults, isEventsKey],
   );

   // speech transcription

@ -1096,23 +1101,15 @@ function ObjectDetailsTab({
        });

      setState("submitted");
+     setSearch({ ...search, plus_id: "new_upload" });
      mutate(
-       (key) =>
-         typeof key === "string" &&
-         (key.includes("events") ||
-           key.includes("events/search") ||
-           key.includes("events/explore")),
-       (currentData: SearchResult[][] | SearchResult[] | undefined) => {
-         if (!currentData) return currentData;
-         // optimistic update
-         return currentData
-           .flat()
-           .map((event) =>
-             event.id === search.id
-               ? { ...event, plus_id: "new_upload" }
-               : event,
-           );
-       },
+       (key) => isEventsKey(key),
+       (currentData: SearchResult[][] | SearchResult[] | undefined) =>
+         mapSearchResults(currentData, (event) =>
+           event.id === search.id
+             ? { ...event, plus_id: "new_upload" }
+             : event,
+         ),
        {
          optimisticData: true,
          rollbackOnError: true,

@ -1120,7 +1117,7 @@ function ObjectDetailsTab({
        },
      );
    },
-   [search, mutate],
+   [search, mutate, mapSearchResults, setSearch, isEventsKey],
  );

  const popoverContainerRef = useRef<HTMLDivElement | null>(null);

@ -1298,6 +1295,7 @@ function ObjectDetailsTab({

      {search.data.type === "object" &&
        config?.plus?.enabled &&
+       search.end_time != undefined &&
        search.has_snapshot && (
          <div
            className={cn(

@ -1503,7 +1501,7 @@ function ObjectDetailsTab({
          ) : (
            <div className="flex flex-col gap-2">
              <Textarea
-               className="text-md h-32"
+               className="text-md h-32 md:text-sm"
                placeholder={t("details.description.placeholder")}
                value={desc}
                onChange={(e) => setDesc(e.target.value)}

@ -1511,25 +1509,7 @@ function ObjectDetailsTab({
                onBlur={handleDescriptionBlur}
                autoFocus
              />
-             <div className="flex flex-row justify-end gap-4">
-               <Tooltip>
-                 <TooltipTrigger asChild>
-                   <button
-                     aria-label={t("button.save", { ns: "common" })}
-                     className="text-primary/40 hover:text-primary/80"
-                     onClick={() => {
-                       setIsEditingDesc(false);
-                       updateDescription();
-                     }}
-                   >
-                     <FaCheck className="size-4" />
-                   </button>
-                 </TooltipTrigger>
-                 <TooltipContent>
-                   {t("button.save", { ns: "common" })}
-                 </TooltipContent>
-               </Tooltip>
-
+             <div className="mb-10 flex flex-row justify-end gap-5">
                <Tooltip>
                  <TooltipTrigger asChild>
                    <button

@ -1540,13 +1520,31 @@ function ObjectDetailsTab({
                       setDesc(originalDescRef.current ?? "");
                     }}
                   >
-                    <FaTimes className="size-4" />
+                    <FaTimes className="size-5" />
                   </button>
                 </TooltipTrigger>
                 <TooltipContent>
                   {t("button.cancel", { ns: "common" })}
                 </TooltipContent>
               </Tooltip>
+
+              <Tooltip>
+                <TooltipTrigger asChild>
+                  <button
+                    aria-label={t("button.save", { ns: "common" })}
+                    className="text-primary/40 hover:text-primary/80"
+                    onClick={() => {
+                      setIsEditingDesc(false);
+                      updateDescription();
+                    }}
+                  >
+                    <FaCheck className="size-5" />
+                  </button>
+                </TooltipTrigger>
+                <TooltipContent>
+                  {t("button.save", { ns: "common" })}
+                </TooltipContent>
+              </Tooltip>
             </div>
           </div>
         )}
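The ObjectDetailsTab refactor above replaces four copies of the same key filter and cache-mapping code with `isEventsKey` and `mapSearchResults`. A standalone sketch of the helper and how it plugs into an optimistic `mutate`; types are reduced to the fields the logic touches:

```ts
// Sketch: one helper maps over SWR cache entries whether the cache holds
// paginated data (T[][]) or a flat list (T[]). With optimisticData +
// rollbackOnError, SWR restores the previous cache if the request fails.
type Item = { id: string; description?: string };

function mapCache<T extends Item>(
  data: T[][] | T[] | undefined,
  fn: (item: T) => T,
): T[][] | T[] | undefined {
  if (!data) return data;
  if (Array.isArray(data[0])) {
    return (data as T[][]).map((page) => page.map(fn));
  }
  return (data as T[]).map(fn);
}

// Usage with a bound SWR mutate (signature abbreviated):
// mutate(
//   (key) => isEventsKey(key),
//   (cached) => mapCache(cached, (e) => (e.id === id ? { ...e, description } : e)),
//   { optimisticData: true, rollbackOnError: true, revalidate: false },
// );
```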
@ -1,5 +1,6 @@
|
|||||||
import useSWR from "swr";
|
import useSWR from "swr";
|
||||||
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
|
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
|
||||||
|
import { useResizeObserver } from "@/hooks/resize-observer";
|
||||||
import { Event } from "@/types/event";
|
import { Event } from "@/types/event";
|
||||||
import ActivityIndicator from "@/components/indicators/activity-indicator";
|
import ActivityIndicator from "@/components/indicators/activity-indicator";
|
||||||
import { TrackingDetailsSequence } from "@/types/timeline";
|
import { TrackingDetailsSequence } from "@/types/timeline";
|
||||||
@ -11,7 +12,11 @@ import { cn } from "@/lib/utils";
|
|||||||
import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
|
import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
|
||||||
import { baseUrl } from "@/api/baseUrl";
|
import { baseUrl } from "@/api/baseUrl";
|
||||||
import { REVIEW_PADDING } from "@/types/review";
|
import { REVIEW_PADDING } from "@/types/review";
|
||||||
import { ASPECT_VERTICAL_LAYOUT, ASPECT_WIDE_LAYOUT } from "@/types/record";
|
import {
|
||||||
|
ASPECT_VERTICAL_LAYOUT,
|
||||||
|
ASPECT_WIDE_LAYOUT,
|
||||||
|
Recording,
|
||||||
|
} from "@/types/record";
|
||||||
import {
|
import {
|
||||||
DropdownMenu,
|
DropdownMenu,
|
||||||
DropdownMenuTrigger,
|
DropdownMenuTrigger,
|
||||||
@ -51,6 +56,7 @@ export function TrackingDetails({
|
|||||||
const apiHost = useApiHost();
|
const apiHost = useApiHost();
|
||||||
const imgRef = useRef<HTMLImageElement | null>(null);
|
const imgRef = useRef<HTMLImageElement | null>(null);
|
||||||
const [imgLoaded, setImgLoaded] = useState(false);
|
const [imgLoaded, setImgLoaded] = useState(false);
|
||||||
|
const [isVideoLoading, setIsVideoLoading] = useState(true);
|
||||||
const [displaySource, _setDisplaySource] = useState<"video" | "image">(
|
const [displaySource, _setDisplaySource] = useState<"video" | "image">(
|
||||||
"video",
|
"video",
|
||||||
);
|
);
|
||||||
@ -65,6 +71,10 @@ export function TrackingDetails({
|
|||||||
(event.start_time ?? 0) + annotationOffset / 1000 - REVIEW_PADDING,
|
(event.start_time ?? 0) + annotationOffset / 1000 - REVIEW_PADDING,
|
||||||
);
|
);
|
||||||
|
|
||||||
|
useEffect(() => {
|
||||||
|
setIsVideoLoading(true);
|
||||||
|
}, [event.id]);
|
||||||
|
|
||||||
const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
|
const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
|
||||||
"timeline",
|
"timeline",
|
||||||
{
|
{
|
||||||
@ -74,6 +84,139 @@ export function TrackingDetails({
|
|||||||
|
|
||||||
const { data: config } = useSWR<FrigateConfig>("config");
|
const { data: config } = useSWR<FrigateConfig>("config");
|
||||||
|
|
||||||
|
// Fetch recording segments for the event's time range to handle motion-only gaps
|
||||||
|
const eventStartRecord = useMemo(
|
||||||
|
() => (event.start_time ?? 0) + annotationOffset / 1000,
|
||||||
|
[event.start_time, annotationOffset],
|
||||||
|
);
|
||||||
|
const eventEndRecord = useMemo(
|
||||||
|
() => (event.end_time ?? Date.now() / 1000) + annotationOffset / 1000,
|
||||||
|
[event.end_time, annotationOffset],
|
||||||
|
);
|
||||||
|
|
||||||
|
const { data: recordings } = useSWR<Recording[]>(
|
||||||
|
event.camera
|
||||||
|
? [
|
||||||
|
`${event.camera}/recordings`,
|
||||||
|
{
|
||||||
|
after: eventStartRecord - REVIEW_PADDING,
|
||||||
|
before: eventEndRecord + REVIEW_PADDING,
|
||||||
|
},
|
||||||
|
]
|
||||||
|
: null,
|
||||||
|
);
|
||||||
|
|
||||||
|
// Convert a timeline timestamp to actual video player time, accounting for
|
||||||
|
// motion-only recording gaps. Uses the same algorithm as DynamicVideoController.
|
||||||
|
const timestampToVideoTime = useCallback(
|
||||||
|
(timestamp: number): number => {
|
||||||
|
if (!recordings || recordings.length === 0) {
|
||||||
|
// Fallback to simple calculation if no recordings data
|
||||||
|
return timestamp - (eventStartRecord - REVIEW_PADDING);
|
||||||
|
}
|
||||||
|
|
||||||
|
const videoStartTime = eventStartRecord - REVIEW_PADDING;
|
||||||
|
|
||||||
|
// If timestamp is before video start, return 0
|
||||||
|
if (timestamp < videoStartTime) return 0;
|
||||||
|
|
||||||
|
// Check if timestamp is before the first recording or after the last
|
||||||
|
if (
|
||||||
|
timestamp < recordings[0].start_time ||
|
||||||
|
timestamp > recordings[recordings.length - 1].end_time
|
||||||
|
) {
|
||||||
|
// No recording available at this timestamp
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Calculate the inpoint offset - the HLS video may start partway through the first segment
|
||||||
|
let inpointOffset = 0;
|
||||||
|
if (
|
||||||
|
videoStartTime > recordings[0].start_time &&
|
||||||
|
videoStartTime < recordings[0].end_time
|
||||||
|
) {
|
||||||
|
inpointOffset = videoStartTime - recordings[0].start_time;
|
||||||
|
}
|
||||||
|
|
||||||
|
let seekSeconds = 0;
|
||||||
|
for (const segment of recordings) {
|
||||||
|
// Skip segments that end before our timestamp
|
||||||
|
if (segment.end_time <= timestamp) {
|
||||||
|
// Add this segment's duration, but subtract inpoint offset from first segment
|
||||||
|
if (segment === recordings[0]) {
|
||||||
|
seekSeconds += segment.duration - inpointOffset;
|
||||||
|
} else {
|
||||||
|
seekSeconds += segment.duration;
|
||||||
|
}
|
||||||
|
} else if (segment.start_time <= timestamp) {
|
||||||
|
// The timestamp is within this segment
|
||||||
|
if (segment === recordings[0]) {
|
||||||
|
// For the first segment, account for the inpoint offset
|
||||||
|
seekSeconds +=
|
||||||
|
timestamp - Math.max(segment.start_time, videoStartTime);
|
||||||
|
} else {
|
||||||
|
seekSeconds += timestamp - segment.start_time;
|
||||||
|
}
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return seekSeconds;
|
||||||
|
},
|
||||||
|
[recordings, eventStartRecord],
|
||||||
|
);
|
||||||
|
|
||||||
|
// Convert video player time back to timeline timestamp, accounting for
|
||||||
|
// motion-only recording gaps. Reverse of timestampToVideoTime.
|
||||||
|
const videoTimeToTimestamp = useCallback(
|
||||||
|
(playerTime: number): number => {
|
||||||
|
if (!recordings || recordings.length === 0) {
|
||||||
|
// Fallback to simple calculation if no recordings data
|
||||||
|
const videoStartTime = eventStartRecord - REVIEW_PADDING;
|
||||||
|
return playerTime + videoStartTime;
|
||||||
|
}
|
||||||
|
|
||||||
|
const videoStartTime = eventStartRecord - REVIEW_PADDING;
|
||||||
|
|
||||||
|
// Calculate the inpoint offset - the video may start partway through the first segment
|
||||||
|
let inpointOffset = 0;
|
||||||
|
if (
|
||||||
|
videoStartTime > recordings[0].start_time &&
|
||||||
|
videoStartTime < recordings[0].end_time
|
||||||
|
) {
|
||||||
|
inpointOffset = videoStartTime - recordings[0].start_time;
|
||||||
|
}
|
||||||
|
|
||||||
|
let timestamp = 0;
|
||||||
|
let totalTime = 0;
|
||||||
|
|
||||||
|
for (const segment of recordings) {
|
||||||
|
const segmentDuration =
|
||||||
|
segment === recordings[0]
|
||||||
|
? segment.duration - inpointOffset
|
||||||
|
: segment.duration;
|
||||||
|
|
||||||
|
if (totalTime + segmentDuration > playerTime) {
|
||||||
|
// The player time is within this segment
|
||||||
|
if (segment === recordings[0]) {
|
||||||
|
// For the first segment, add the inpoint offset
|
||||||
|
timestamp =
|
||||||
|
Math.max(segment.start_time, videoStartTime) +
|
||||||
|
(playerTime - totalTime);
|
||||||
|
} else {
|
||||||
|
timestamp = segment.start_time + (playerTime - totalTime);
|
||||||
|
}
|
||||||
|
break;
|
||||||
|
} else {
|
||||||
|
totalTime += segmentDuration;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return timestamp;
|
||||||
|
},
|
||||||
|
[recordings, eventStartRecord],
|
||||||
|
);
|
||||||
|
|
||||||
eventSequence?.map((event) => {
|
eventSequence?.map((event) => {
|
||||||
event.data.zones_friendly_names = event.data?.zones?.map((zone) => {
|
event.data.zones_friendly_names = event.data?.zones?.map((zone) => {
|
||||||
return resolveZoneName(config, zone);
|
return resolveZoneName(config, zone);
|
||||||
@ -89,9 +232,16 @@ export function TrackingDetails({
|
|||||||
}, [manualOverride, currentTime, annotationOffset]);
|
}, [manualOverride, currentTime, annotationOffset]);
|
||||||
|
|
||||||
const containerRef = useRef<HTMLDivElement | null>(null);
|
const containerRef = useRef<HTMLDivElement | null>(null);
|
||||||
|
const timelineContainerRef = useRef<HTMLDivElement | null>(null);
|
||||||
|
const rowRefs = useRef<(HTMLDivElement | null)[]>([]);
|
||||||
const [_selectedZone, setSelectedZone] = useState("");
|
const [_selectedZone, setSelectedZone] = useState("");
|
||||||
const [_lifecycleZones, setLifecycleZones] = useState<string[]>([]);
|
const [_lifecycleZones, setLifecycleZones] = useState<string[]>([]);
|
||||||
const [seekToTimestamp, setSeekToTimestamp] = useState<number | null>(null);
|
const [seekToTimestamp, setSeekToTimestamp] = useState<number | null>(null);
|
||||||
|
const [lineBottomOffsetPx, setLineBottomOffsetPx] = useState<number>(32);
|
||||||
|
const [lineTopOffsetPx, setLineTopOffsetPx] = useState<number>(8);
|
||||||
|
const [blueLineHeightPx, setBlueLineHeightPx] = useState<number>(0);
|
||||||
|
|
||||||
|
const [timelineSize] = useResizeObserver(timelineContainerRef);
|
||||||
|
|
||||||
const aspectRatio = useMemo(() => {
|
const aspectRatio = useMemo(() => {
|
||||||
if (!config) {
|
if (!config) {
|
||||||
@ -140,17 +290,14 @@ export function TrackingDetails({
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
// For video mode: convert to video-relative time and seek player
|
// For video mode: convert to video-relative time (accounting for motion-only gaps)
|
||||||
const eventStartRecord =
|
const relativeTime = timestampToVideoTime(targetTimeRecord);
|
||||||
(event.start_time ?? 0) + annotationOffset / 1000;
|
|
||||||
const videoStartTime = eventStartRecord - REVIEW_PADDING;
|
|
||||||
const relativeTime = targetTimeRecord - videoStartTime;
|
|
||||||
|
|
||||||
if (videoRef.current) {
|
if (videoRef.current) {
|
||||||
videoRef.current.currentTime = relativeTime;
|
videoRef.current.currentTime = relativeTime;
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
[event.start_time, annotationOffset, displaySource],
|
[annotationOffset, displaySource, timestampToVideoTime],
|
||||||
);
|
);
|
||||||
|
|
||||||
const formattedStart = config
|
const formattedStart = config
|
||||||
@ -169,21 +316,22 @@ export function TrackingDetails({
|
|||||||
})
|
})
|
||||||
: "";
|
: "";
|
||||||
|
|
||||||
const formattedEnd = config
|
const formattedEnd =
|
||||||
? formatUnixTimestampToDateTime(event.end_time ?? 0, {
|
config && event.end_time != null
|
||||||
timezone: config.ui.timezone,
|
? formatUnixTimestampToDateTime(event.end_time, {
|
||||||
date_format:
|
timezone: config.ui.timezone,
|
||||||
config.ui.time_format == "24hour"
|
date_format:
|
||||||
? t("time.formattedTimestamp.24hour", {
|
config.ui.time_format == "24hour"
|
||||||
ns: "common",
|
? t("time.formattedTimestamp.24hour", {
|
||||||
})
|
ns: "common",
|
||||||
: t("time.formattedTimestamp.12hour", {
|
})
|
||||||
ns: "common",
|
: t("time.formattedTimestamp.12hour", {
|
||||||
}),
|
ns: "common",
|
||||||
time_style: "medium",
|
}),
|
||||||
date_style: "medium",
|
time_style: "medium",
|
||||||
})
|
date_style: "medium",
|
||||||
: "";
|
})
|
||||||
|
: "";
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (!eventSequence || eventSequence.length === 0) return;
|
if (!eventSequence || eventSequence.length === 0) return;
|
||||||
@ -202,79 +350,83 @@ export function TrackingDetails({
|
|||||||
}
|
}
|
||||||
|
|
||||||
// seekToTimestamp is a record stream timestamp
|
// seekToTimestamp is a record stream timestamp
|
||||||
// event.start_time is detect stream time, convert to record
|
// Convert to video position (accounting for motion-only recording gaps)
|
||||||
// The video clip starts at (eventStartRecord - REVIEW_PADDING)
|
|
||||||
if (!videoRef.current) return;
|
if (!videoRef.current) return;
|
||||||
const eventStartRecord = event.start_time + annotationOffset / 1000;
|
const relativeTime = timestampToVideoTime(seekToTimestamp);
|
||||||
const videoStartTime = eventStartRecord - REVIEW_PADDING;
|
|
||||||
const relativeTime = seekToTimestamp - videoStartTime;
|
|
||||||
if (relativeTime >= 0) {
|
if (relativeTime >= 0) {
|
||||||
videoRef.current.currentTime = relativeTime;
|
videoRef.current.currentTime = relativeTime;
|
||||||
}
|
}
|
||||||
setSeekToTimestamp(null);
|
setSeekToTimestamp(null);
|
||||||
}, [
|
}, [seekToTimestamp, displaySource, timestampToVideoTime]);
|
||||||
seekToTimestamp,
|
|
||||||
event.start_time,
|
|
||||||
annotationOffset,
|
|
||||||
apiHost,
|
|
||||||
event.camera,
|
|
||||||
displaySource,
|
|
||||||
]);
|
|
||||||
|
|
||||||
const isWithinEventRange =
|
const isWithinEventRange = useMemo(() => {
|
||||||
effectiveTime !== undefined &&
|
if (effectiveTime === undefined || event.start_time === undefined) {
|
||||||
event.start_time !== undefined &&
|
return false;
|
||||||
event.end_time !== undefined &&
|
|
||||||
effectiveTime >= event.start_time &&
|
|
||||||
effectiveTime <= event.end_time;
|
|
||||||
|
|
||||||
// Calculate how far down the blue line should extend based on effectiveTime
|
|
||||||
const calculateLineHeight = useCallback(() => {
|
|
||||||
if (!eventSequence || eventSequence.length === 0 || !isWithinEventRange) {
|
|
||||||
return 0;
|
|
||||||
}
|
}
|
||||||
|
-    const currentTime = effectiveTime ?? 0;
-    // Find which events have been passed
-    let lastPassedIndex = -1;
-    for (let i = 0; i < eventSequence.length; i++) {
-      if (currentTime >= (eventSequence[i].timestamp ?? 0)) {
-        lastPassedIndex = i;
-      } else {
-        break;
-      }
-    }
-
-    // No events passed yet
-    if (lastPassedIndex < 0) return 0;
-
-    // All events passed
-    if (lastPassedIndex >= eventSequence.length - 1) return 100;
-
-    // Calculate percentage based on item position, not time
-    // Each item occupies an equal visual space regardless of time gaps
-    const itemPercentage = 100 / (eventSequence.length - 1);
-
-    // Find progress between current and next event for smooth transition
-    const currentEvent = eventSequence[lastPassedIndex];
-    const nextEvent = eventSequence[lastPassedIndex + 1];
-    const currentTimestamp = currentEvent.timestamp ?? 0;
-    const nextTimestamp = nextEvent.timestamp ?? 0;
-
-    // Calculate interpolation between the two events
-    const timeBetween = nextTimestamp - currentTimestamp;
-    const timeElapsed = currentTime - currentTimestamp;
-    const interpolation = timeBetween > 0 ? timeElapsed / timeBetween : 0;
-
-    // Base position plus interpolated progress to next item
-    return Math.min(
-      100,
-      lastPassedIndex * itemPercentage + interpolation * itemPercentage,
-    );
-  }, [eventSequence, effectiveTime, isWithinEventRange]);
-
-  const blueLineHeight = calculateLineHeight();
+    // If an event has not ended yet, fall back to last timestamp in eventSequence
+    let eventEnd = event.end_time;
+    if (eventEnd == null && eventSequence && eventSequence.length > 0) {
+      const last = eventSequence[eventSequence.length - 1];
+      if (last && last.timestamp !== undefined) {
+        eventEnd = last.timestamp;
+      }
+    }
+
+    if (eventEnd == null) {
+      return false;
+    }
+    return effectiveTime >= event.start_time && effectiveTime <= eventEnd;
+  }, [effectiveTime, event.start_time, event.end_time, eventSequence]);
+
+  // Dynamically compute pixel offsets so the timeline line starts at the
+  // first row midpoint and ends at the last row midpoint. For accuracy,
+  // measure the center Y of each lifecycle row and interpolate the current
+  // effective time into a pixel position; then set the blue line height
+  // so it reaches the center dot at the same time the dot becomes active.
+  useEffect(() => {
+    if (!timelineContainerRef.current || !eventSequence) return;
+
+    const containerRect = timelineContainerRef.current.getBoundingClientRect();
+    const validRefs = rowRefs.current.filter((r) => r !== null);
+    if (validRefs.length === 0) return;
+
+    const centers = validRefs.map((n) => {
+      const r = n.getBoundingClientRect();
+      return r.top + r.height / 2 - containerRect.top;
+    });
+
+    const topOffset = Math.max(0, centers[0]);
+    const bottomOffset = Math.max(
+      0,
+      containerRect.height - centers[centers.length - 1],
+    );
+
+    setLineTopOffsetPx(Math.round(topOffset));
+    setLineBottomOffsetPx(Math.round(bottomOffset));
+
+    const eff = effectiveTime ?? 0;
+    const timestamps = eventSequence.map((s) => s.timestamp ?? 0);
+
+    let pixelPos = centers[0];
+    if (eff <= timestamps[0]) {
+      pixelPos = centers[0];
+    } else if (eff >= timestamps[timestamps.length - 1]) {
+      pixelPos = centers[centers.length - 1];
+    } else {
+      for (let i = 0; i < timestamps.length - 1; i++) {
+        const t1 = timestamps[i];
+        const t2 = timestamps[i + 1];
+        if (eff >= t1 && eff <= t2) {
+          const ratio = t2 > t1 ? (eff - t1) / (t2 - t1) : 0;
+          pixelPos = centers[i] + ratio * (centers[i + 1] - centers[i]);
+          break;
+        }
+      }
+    }
+
+    const bluePx = Math.round(Math.max(0, pixelPos - topOffset));
+    setBlueLineHeightPx(bluePx);
+  }, [eventSequence, timelineSize.width, timelineSize.height, effectiveTime]);

   const videoSource = useMemo(() => {
     // event.start_time and event.end_time are in DETECT stream time
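The new effect introduced above replaces the old equal-spacing percentage math with a time-weighted, pixel-accurate mapping: row midpoints are measured from the DOM and the current time is interpolated between neighboring lifecycle timestamps. A minimal standalone sketch of that interpolation step (the function name is illustrative, not part of the codebase):

    // Piecewise-linear map from a timestamp to a pixel position between
    // measured row centers; mirrors the interpolation in the effect above.
    function interpolatePixelPos(
      eff: number,
      timestamps: number[],
      centers: number[],
    ): number {
      if (eff <= timestamps[0]) return centers[0];
      if (eff >= timestamps[timestamps.length - 1]) {
        return centers[centers.length - 1];
      }
      for (let i = 0; i < timestamps.length - 1; i++) {
        const t1 = timestamps[i];
        const t2 = timestamps[i + 1];
        if (eff >= t1 && eff <= t2) {
          const ratio = t2 > t1 ? (eff - t1) / (t2 - t1) : 0;
          return centers[i] + ratio * (centers[i + 1] - centers[i]);
        }
      }
      return centers[0];
    }

    // e.g. interpolatePixelPos(5, [0, 10], [12, 92]) === 52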
@@ -312,14 +464,13 @@ export function TrackingDetails({

   const handleTimeUpdate = useCallback(
     (time: number) => {
-      // event.start_time is detect stream time, convert to record
-      const eventStartRecord = event.start_time + annotationOffset / 1000;
-      const videoStartTime = eventStartRecord - REVIEW_PADDING;
-      const absoluteTime = time + videoStartTime;
+      // Convert video player time back to timeline timestamp
+      // accounting for motion-only recording gaps
+      const absoluteTime = videoTimeToTimestamp(time);

       setCurrentTime(absoluteTime);
     },
-    [event.start_time, annotationOffset],
+    [videoTimeToTimestamp],
   );

   const [src, setSrc] = useState(
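`videoTimeToTimestamp`, referenced above and defined elsewhere in the component, maps a player offset back through the recording segments instead of assuming one continuous clip. A hedged sketch of the mapping it implies, using only the segment fields that appear in this diff:

    type Segment = { start_time: number; end_time: number };

    // Sketch: convert seconds into the concatenated video back to an
    // absolute timestamp, skipping motion-only recording gaps.
    function videoTimeToTimestampSketch(
      time: number,
      segments: Segment[],
    ): number {
      let remaining = time;
      for (const seg of segments) {
        const duration = seg.end_time - seg.start_time;
        if (remaining <= duration) {
          return seg.start_time + remaining;
        }
        remaining -= duration;
      }
      // Past the last segment: clamp to the end of the recordings
      return segments.length ? segments[segments.length - 1].end_time : time;
    }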
@@ -381,22 +532,28 @@ export function TrackingDetails({
         )}
       >
         {displaySource == "video" && (
-          <HlsVideoPlayer
-            videoRef={videoRef}
-            containerRef={containerRef}
-            visible={true}
-            currentSource={videoSource}
-            hotKeys={false}
-            supportsFullscreen={false}
-            fullscreen={false}
-            frigateControls={true}
-            onTimeUpdate={handleTimeUpdate}
-            onSeekToTime={handleSeekToTime}
-            onUploadFrame={onUploadFrameToPlus}
-            isDetailMode={true}
-            camera={event.camera}
-            currentTimeOverride={currentTime}
-          />
+          <>
+            <HlsVideoPlayer
+              videoRef={videoRef}
+              containerRef={containerRef}
+              visible={true}
+              currentSource={videoSource}
+              hotKeys={false}
+              supportsFullscreen={false}
+              fullscreen={false}
+              frigateControls={true}
+              onTimeUpdate={handleTimeUpdate}
+              onSeekToTime={handleSeekToTime}
+              onUploadFrame={onUploadFrameToPlus}
+              onPlaying={() => setIsVideoLoading(false)}
+              isDetailMode={true}
+              camera={event.camera}
+              currentTimeOverride={currentTime}
+            />
+            {isVideoLoading && (
+              <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
+            )}
+          </>
         )}
         {displaySource == "image" && (
           <>

@@ -503,9 +660,16 @@ export function TrackingDetails({
                 </div>
                 <div className="flex items-center gap-2">
                   <span className="capitalize">{label}</span>
-                  <span className="md:text-md text-xs text-secondary-foreground">
-                    {formattedStart ?? ""} - {formattedEnd ?? ""}
-                  </span>
+                  <div className="md:text-md flex items-center text-xs text-secondary-foreground">
+                    {formattedStart ?? ""}
+                    {event.end_time != null ? (
+                      <> - {formattedEnd}</>
+                    ) : (
+                      <div className="inline-block">
+                        <ActivityIndicator className="ml-3 size-4" />
+                      </div>
+                    )}
+                  </div>
                   {event.data?.recognized_license_plate && (
                     <>
                       <span className="text-secondary-foreground">·</span>

@@ -531,12 +695,21 @@ export function TrackingDetails({
               {t("detail.noObjectDetailData", { ns: "views/events" })}
             </div>
           ) : (
-            <div className="-pb-2 relative mx-0">
-              <div className="absolute -top-2 bottom-8 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground" />
+            <div
+              className="-pb-2 relative mx-0"
+              ref={timelineContainerRef}
+            >
+              <div
+                className="absolute -top-2 left-6 z-0 w-0.5 -translate-x-1/2 bg-secondary-foreground"
+                style={{ bottom: lineBottomOffsetPx }}
+              />
               {isWithinEventRange && (
                 <div
-                  className="absolute left-6 top-2 z-[5] max-h-[calc(100%-3rem)] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
-                  style={{ height: `${blueLineHeight}%` }}
+                  className="absolute left-6 z-[5] w-0.5 -translate-x-1/2 bg-selected transition-all duration-300"
+                  style={{
+                    top: `${lineTopOffsetPx}px`,
+                    height: `${blueLineHeightPx}px`,
+                  }}
                 />
               )}
               <div className="space-y-2">

@@ -589,20 +762,26 @@ export function TrackingDetails({
                   : undefined;

                 return (
-                  <LifecycleIconRow
+                  <div
                     key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
-                    item={item}
-                    isActive={isActive}
-                    formattedEventTimestamp={formattedEventTimestamp}
-                    ratio={ratio}
-                    areaPx={areaPx}
-                    areaPct={areaPct}
-                    onClick={() => handleLifecycleClick(item)}
-                    setSelectedZone={setSelectedZone}
-                    getZoneColor={getZoneColor}
-                    effectiveTime={effectiveTime}
-                    isTimelineActive={isWithinEventRange}
-                  />
+                    ref={(el) => {
+                      rowRefs.current[idx] = el;
+                    }}
+                  >
+                    <LifecycleIconRow
+                      item={item}
+                      isActive={isActive}
+                      formattedEventTimestamp={formattedEventTimestamp}
+                      ratio={ratio}
+                      areaPx={areaPx}
+                      areaPct={areaPct}
+                      onClick={() => handleLifecycleClick(item)}
+                      setSelectedZone={setSelectedZone}
+                      getZoneColor={getZoneColor}
+                      effectiveTime={effectiveTime}
+                      isTimelineActive={isWithinEventRange}
+                    />
+                  </div>
                 );
               })}
             </div>
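Wrapping each `LifecycleIconRow` in a plain `<div>` with a callback ref is what populates `rowRefs.current` for the measurement effect earlier in the file. The pattern in isolation (component and prop names are illustrative):

    import { useRef } from "react";

    // Sketch: collect DOM nodes for a rendered list via callback refs so a
    // later effect can measure each row's midpoint.
    function MeasuredList({ items }: { items: { id: string }[] }) {
      const rowRefs = useRef<(HTMLDivElement | null)[]>([]);
      return (
        <div>
          {items.map((item, idx) => (
            <div
              key={item.id}
              ref={(el) => {
                rowRefs.current[idx] = el; // element on mount, null on unmount
              }}
            />
          ))}
        </div>
      );
    }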
@@ -6,51 +6,199 @@ import {
   DialogTitle,
 } from "@/components/ui/dialog";
 import { Event } from "@/types/event";
-import { isDesktop, isMobile } from "react-device-detect";
-import { ObjectSnapshotTab } from "../detail/SearchDetailDialog";
+import { isDesktop, isMobile, isSafari } from "react-device-detect";
 import { cn } from "@/lib/utils";
+import { useCallback, useEffect, useState } from "react";
+import axios from "axios";
+import { useTranslation, Trans } from "react-i18next";
+import { Button } from "@/components/ui/button";
+import ActivityIndicator from "@/components/indicators/activity-indicator";
+import { FaCheckCircle } from "react-icons/fa";
+import { Card, CardContent } from "@/components/ui/card";
+import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
+import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
+import { baseUrl } from "@/api/baseUrl";
+import { getTranslatedLabel } from "@/utils/i18n";
+import useImageLoaded from "@/hooks/use-image-loaded";

-type FrigatePlusDialogProps = {
+export type FrigatePlusDialogProps = {
   upload?: Event;
   dialog?: boolean;
   onClose: () => void;
   onEventUploaded: () => void;
 };

 export function FrigatePlusDialog({
   upload,
   dialog = true,
   onClose,
   onEventUploaded,
 }: FrigatePlusDialogProps) {
-  if (!upload) {
-    return;
-  }
-  if (dialog) {
-    return (
-      <Dialog
-        open={upload != undefined}
-        onOpenChange={(open) => (!open ? onClose() : null)}
-      >
-        <DialogContent
-          className={cn(
-            "scrollbar-container overflow-y-auto",
-            isDesktop &&
-              "max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-4xl xl:max-w-7xl",
-            isMobile && "px-4",
-          )}
-        >
-          <DialogHeader>
-            <DialogTitle className="sr-only">Submit to Frigate+</DialogTitle>
-            <DialogDescription className="sr-only">
-              Submit this snapshot to Frigate+
-            </DialogDescription>
-          </DialogHeader>
-          <ObjectSnapshotTab
-            search={upload}
-            onEventUploaded={onEventUploaded}
-          />
-        </DialogContent>
-      </Dialog>
-    );
-  }
+  const { t, i18n } = useTranslation(["components/dialog"]);
+
+  type SubmissionState = "reviewing" | "uploading" | "submitted";
+  const [state, setState] = useState<SubmissionState>(
+    upload?.plus_id ? "submitted" : "reviewing",
+  );
+  useEffect(() => {
+    setState(upload?.plus_id ? "submitted" : "reviewing");
+  }, [upload?.plus_id]);
+
+  const onSubmitToPlus = useCallback(
+    async (falsePositive: boolean) => {
+      if (!upload) return;
+      falsePositive
+        ? axios.put(`events/${upload.id}/false_positive`)
+        : axios.post(`events/${upload.id}/plus`, { include_annotation: 1 });
+      setState("submitted");
+      onEventUploaded();
+    },
+    [upload, onEventUploaded],
+  );
+
+  const [imgRef, imgLoaded, onImgLoad] = useImageLoaded();
+  const showCard =
+    !!upload &&
+    upload.data.type === "object" &&
+    upload.plus_id !== "not_enabled" &&
+    upload.end_time &&
+    upload.label !== "on_demand";
+
+  if (!dialog || !upload) return null;
+
+  return (
+    <Dialog open={true} onOpenChange={(open) => (!open ? onClose() : null)}>
+      <DialogContent
+        className={cn(
+          "scrollbar-container overflow-y-auto",
+          isDesktop &&
+            "max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-4xl xl:max-w-7xl",
+          isMobile && "px-4",
+        )}
+      >
+        <DialogHeader>
+          <DialogTitle className="sr-only">Submit to Frigate+</DialogTitle>
+          <DialogDescription className="sr-only">
+            Submit this snapshot to Frigate+
+          </DialogDescription>
+        </DialogHeader>
+        <div className="relative size-full">
+          <ImageLoadingIndicator
+            className="absolute inset-0 aspect-video min-h-[60dvh] w-full"
+            imgLoaded={imgLoaded}
+          />
+          <div className={imgLoaded ? "visible" : "invisible"}>
+            <TransformWrapper minScale={1.0} wheel={{ smoothStep: 0.005 }}>
+              <div className="flex flex-col space-y-3">
+                <TransformComponent
+                  wrapperStyle={{ width: "100%", height: "100%" }}
+                  contentStyle={{
+                    position: "relative",
+                    width: "100%",
+                    height: "100%",
+                  }}
+                >
+                  {upload.id && (
+                    <div className="relative mx-auto">
+                      <img
+                        ref={imgRef}
+                        className="mx-auto max-h-[60dvh] rounded-lg bg-black object-contain"
+                        src={`${baseUrl}api/events/${upload.id}/snapshot.jpg`}
+                        alt={`${upload.label}`}
+                        loading={isSafari ? "eager" : "lazy"}
+                        onLoad={onImgLoad}
+                      />
+                    </div>
+                  )}
+                </TransformComponent>
+
+                {showCard && (
+                  <Card className="p-1 text-sm md:p-2">
+                    <CardContent className="flex flex-col items-center justify-between gap-3 p-2 md:flex-row">
+                      <div className="flex flex-col space-y-3">
+                        <div className="text-lg leading-none">
+                          {t("explore.plus.submitToPlus.label")}
+                        </div>
+                        <div className="text-sm text-muted-foreground">
+                          {t("explore.plus.submitToPlus.desc")}
+                        </div>
+                      </div>
+                      <div className="flex w-full flex-1 flex-col justify-center gap-2 md:ml-8 md:w-auto md:justify-end">
+                        {state === "reviewing" && (
+                          <>
+                            <div>
+                              {i18n.language === "en" ? (
+                                /^[aeiou]/i.test(upload.label || "") ? (
+                                  <Trans
+                                    ns="components/dialog"
+                                    values={{ label: upload.label }}
+                                  >
+                                    explore.plus.review.question.ask_an
+                                  </Trans>
+                                ) : (
+                                  <Trans
+                                    ns="components/dialog"
+                                    values={{ label: upload.label }}
+                                  >
+                                    explore.plus.review.question.ask_a
+                                  </Trans>
+                                )
+                              ) : (
+                                <Trans
+                                  ns="components/dialog"
+                                  values={{
+                                    untranslatedLabel: upload.label,
+                                    translatedLabel: getTranslatedLabel(
+                                      upload.label,
+                                    ),
+                                  }}
+                                >
+                                  explore.plus.review.question.ask_full
+                                </Trans>
+                              )}
+                            </div>
+                            <div className="flex w-full flex-row gap-2">
+                              <Button
+                                className="flex-1 bg-success"
+                                aria-label={t("button.yes", { ns: "common" })}
+                                onClick={() => {
+                                  setState("uploading");
+                                  onSubmitToPlus(false);
+                                }}
+                              >
+                                {t("button.yes", { ns: "common" })}
+                              </Button>
+                              <Button
+                                className="flex-1 text-white"
+                                aria-label={t("button.no", { ns: "common" })}
+                                variant="destructive"
+                                onClick={() => {
+                                  setState("uploading");
+                                  onSubmitToPlus(true);
+                                }}
+                              >
+                                {t("button.no", { ns: "common" })}
+                              </Button>
+                            </div>
+                          </>
+                        )}
+                        {state === "uploading" && <ActivityIndicator />}
+                        {state === "submitted" && (
+                          <div className="flex flex-row items-center justify-center gap-2">
+                            <FaCheckCircle className="size-4 text-success" />
+                            {t("explore.plus.review.state.submitted")}
+                          </div>
+                        )}
+                      </div>
+                    </CardContent>
+                  </Card>
+                )}
+              </div>
+            </TransformWrapper>
+          </div>
+        </div>
+      </DialogContent>
+    </Dialog>
+  );
 }
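The rewritten dialog drives a three-state submission flow (reviewing, uploading, submitted) against the two event endpoints visible above. The same calls in isolation (a sketch, not a separate API client in the codebase):

    import axios from "axios";

    // "No" marks the event as a false positive; "Yes" submits the snapshot
    // with its annotation to Frigate+.
    async function submitToPlus(eventId: string, falsePositive: boolean) {
      if (falsePositive) {
        await axios.put(`events/${eventId}/false_positive`);
      } else {
        await axios.post(`events/${eventId}/plus`, { include_annotation: 1 });
      }
    }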
@@ -6,7 +6,7 @@ import {
   useState,
 } from "react";
 import Hls from "hls.js";
-import { isAndroid, isDesktop, isMobile } from "react-device-detect";
+import { isDesktop, isMobile } from "react-device-detect";
 import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
 import VideoControls from "./VideoControls";
 import { VideoResolutionType } from "@/types/live";

@@ -22,7 +22,7 @@ import { useTranslation } from "react-i18next";
 import ObjectTrackOverlay from "@/components/overlay/ObjectTrackOverlay";

 // Android native hls does not seek correctly
-const USE_NATIVE_HLS = !isAndroid;
+const USE_NATIVE_HLS = false;
 const HLS_MIME_TYPE = "application/vnd.apple.mpegurl" as const;
 const unsupportedErrorCodes = [
   MediaError.MEDIA_ERR_SRC_NOT_SUPPORTED,
@@ -94,24 +94,52 @@ export default function HlsVideoPlayer({
   const [loadedMetadata, setLoadedMetadata] = useState(false);
   const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();

+  const applyVideoDimensions = useCallback(
+    (width: number, height: number) => {
+      if (setFullResolution) {
+        setFullResolution({ width, height });
+      }
+      setVideoDimensions({ width, height });
+      if (height > 0) {
+        setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
+      }
+    },
+    [setFullResolution],
+  );
+
   const handleLoadedMetadata = useCallback(() => {
     setLoadedMetadata(true);
-    if (videoRef.current) {
-      const width = videoRef.current.videoWidth;
-      const height = videoRef.current.videoHeight;
-
-      if (setFullResolution) {
-        setFullResolution({
-          width,
-          height,
-        });
-      }
-
-      setVideoDimensions({ width, height });
-
-      setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
-    }
-  }, [videoRef, setFullResolution]);
+    if (!videoRef.current) {
+      return;
+    }
+
+    const width = videoRef.current.videoWidth;
+    const height = videoRef.current.videoHeight;
+
+    // iOS Safari occasionally reports 0x0 for videoWidth/videoHeight
+    // Poll with requestAnimationFrame until dimensions become available (or timeout).
+    if (width > 0 && height > 0) {
+      applyVideoDimensions(width, height);
+      return;
+    }
+
+    let attempts = 0;
+    const maxAttempts = 120; // ~2 seconds at 60fps
+    const tryGetDims = () => {
+      if (!videoRef.current) return;
+      const w = videoRef.current.videoWidth;
+      const h = videoRef.current.videoHeight;
+      if (w > 0 && h > 0) {
+        applyVideoDimensions(w, h);
+        return;
+      }
+      if (attempts < maxAttempts) {
+        attempts += 1;
+        requestAnimationFrame(tryGetDims);
+      }
+    };
+    requestAnimationFrame(tryGetDims);
+  }, [videoRef, applyVideoDimensions]);

   useEffect(() => {
     if (!videoRef.current) {
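The rAF loop above works around iOS Safari occasionally reporting 0x0 dimensions at loadedmetadata time. The same idea as a reusable helper (a sketch; the player inlines the loop instead):

    // Poll a getter on each animation frame until it yields a value or the
    // frame budget is exhausted (~2 seconds at 60fps, as in the player code).
    function pollForValue<T>(
      get: () => T | undefined,
      onValue: (value: T) => void,
      maxAttempts = 120,
    ) {
      let attempts = 0;
      const tick = () => {
        const value = get();
        if (value !== undefined) {
          onValue(value);
          return;
        }
        if (attempts++ < maxAttempts) {
          requestAnimationFrame(tick);
        }
      };
      requestAnimationFrame(tick);
    }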
@@ -130,6 +158,8 @@ export default function HlsVideoPlayer({
       return;
     }

+    setLoadedMetadata(false);
+
     const currentPlaybackRate = videoRef.current.playbackRate;

     if (!useHlsCompat) {

@@ -318,6 +348,7 @@ export default function HlsVideoPlayer({
       {isDetailMode &&
         camera &&
         currentTime &&
+        loadedMetadata &&
         videoDimensions.width > 0 &&
         videoDimensions.height > 0 && (
           <div className="absolute z-50 size-full">
@@ -91,7 +91,7 @@ function MSEPlayer({
     (error: LivePlayerError, description: string = "Unknown error") => {
       // eslint-disable-next-line no-console
       console.error(
-        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
+        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
       );
       onError?.(error);
     },

@@ -309,6 +309,7 @@ function PreviewVideoPlayer({
           playsInline
           muted
           disableRemotePlayback
+          disablePictureInPicture
           onSeeked={onPreviewSeeked}
           onLoadedData={() => {
             if (firstLoad) {

@@ -42,7 +42,7 @@ export default function WebRtcPlayer({
     (error: LivePlayerError, description: string = "Unknown error") => {
       // eslint-disable-next-line no-console
       console.error(
-        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
+        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
       );
       onError?.(error);
     },
@@ -2,7 +2,10 @@ import { Recording } from "@/types/record";
 import { DynamicPlayback } from "@/types/playback";
 import { PreviewController } from "../PreviewPlayer";
 import { TimeRange, TrackingDetailsSequence } from "@/types/timeline";
-import { calculateInpointOffset } from "@/utils/videoUtil";
+import {
+  calculateInpointOffset,
+  calculateSeekPosition,
+} from "@/utils/videoUtil";

 type PlayerMode = "playback" | "scrubbing";

@@ -72,38 +75,20 @@ export class DynamicVideoController {
       return;
     }

-    if (
-      this.recordings.length == 0 ||
-      time < this.recordings[0].start_time ||
-      time > this.recordings[this.recordings.length - 1].end_time
-    ) {
-      this.setNoRecording(true);
-      return;
-    }
-
     if (this.playerMode != "playback") {
       this.playerMode = "playback";
     }

-    let seekSeconds = 0;
-    (this.recordings || []).every((segment) => {
-      // if the next segment is past the desired time, stop calculating
-      if (segment.start_time > time) {
-        return false;
-      }
-
-      if (segment.end_time < time) {
-        seekSeconds += segment.end_time - segment.start_time;
-        return true;
-      }
-
-      seekSeconds +=
-        segment.end_time - segment.start_time - (segment.end_time - time);
-      return true;
-    });
-
-    // adjust for HLS inpoint offset
-    seekSeconds -= this.inpointOffset;
+    const seekSeconds = calculateSeekPosition(
+      time,
+      this.recordings,
+      this.inpointOffset,
+    );
+
+    if (seekSeconds === undefined) {
+      this.setNoRecording(true);
+      return;
+    }

     if (seekSeconds != 0) {
       this.playerController.currentTime = seekSeconds;
@@ -14,7 +14,10 @@ import { VideoResolutionType } from "@/types/live";
 import axios from "axios";
 import { cn } from "@/lib/utils";
 import { useTranslation } from "react-i18next";
-import { calculateInpointOffset } from "@/utils/videoUtil";
+import {
+  calculateInpointOffset,
+  calculateSeekPosition,
+} from "@/utils/videoUtil";
 import { isFirefox } from "react-device-detect";

 /**

@@ -109,10 +112,10 @@ export default function DynamicVideoPlayer({
   const [isLoading, setIsLoading] = useState(false);
   const [isBuffering, setIsBuffering] = useState(false);
   const [loadingTimeout, setLoadingTimeout] = useState<NodeJS.Timeout>();
-  const [source, setSource] = useState<HlsSource>({
-    playlist: `${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
-    startPosition: startTimestamp ? timeRange.after - startTimestamp : 0,
-  });
+  // Don't set source until recordings load - we need accurate startPosition
+  // to avoid hls.js clamping to video end when startPosition exceeds duration
+  const [source, setSource] = useState<HlsSource | undefined>(undefined);

   // start at correct time
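Starting with `source` undefined means no player is mounted until recordings arrive; otherwise hls.js would receive a `startPosition` computed against an unknown duration and clamp playback to the end of the clip. A sketch of the guard (types follow the `HlsSource` usage above):

    type HlsSource = { playlist: string; startPosition?: number };

    // Sketch: only build the HLS source once recordings are known, so
    // startPosition can be computed against real segment durations.
    function buildSource(
      playlist: string,
      startPosition: number | undefined,
      recordingsLoaded: boolean,
    ): HlsSource | undefined {
      if (!recordingsLoaded) {
        return undefined; // render no player yet
      }
      return { playlist, startPosition };
    }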
@@ -184,7 +187,7 @@ export default function DynamicVideoPlayer({
   );

   useEffect(() => {
-    if (!controller || !recordings?.length) {
+    if (!recordings?.length) {
       if (recordings?.length == 0) {
         setNoRecording(true);
       }

@@ -192,10 +195,6 @@ export default function DynamicVideoPlayer({
       return;
     }

-    if (playerRef.current) {
-      playerRef.current.autoplay = !isScrubbing;
-    }
-
     let startPosition = undefined;

     if (startTimestamp) {

@@ -203,14 +202,12 @@ export default function DynamicVideoPlayer({
         recordingParams.after,
         (recordings || [])[0],
       );
-      const idealStartPosition = Math.max(
-        0,
-        startTimestamp - timeRange.after - inpointOffset,
-      );
-
-      if (idealStartPosition >= recordings[0].start_time - timeRange.after) {
-        startPosition = idealStartPosition;
-      }
+      startPosition = calculateSeekPosition(
+        startTimestamp,
+        recordings,
+        inpointOffset,
+      );
     }

     setSource({

@@ -218,6 +215,18 @@ export default function DynamicVideoPlayer({
       startPosition,
     });

+    // eslint-disable-next-line react-hooks/exhaustive-deps
+  }, [recordings]);
+
+  useEffect(() => {
+    if (!controller || !recordings?.length) {
+      return;
+    }
+
+    if (playerRef.current) {
+      playerRef.current.autoplay = !isScrubbing;
+    }
+
     setLoadingTimeout(setTimeout(() => setIsLoading(true), 1000));

     controller.newPlayback({

@@ -225,7 +234,7 @@ export default function DynamicVideoPlayer({
       timeRange,
     });

-    // we only want this to change when recordings update
+    // we only want this to change when controller or recordings update
     // eslint-disable-next-line react-hooks/exhaustive-deps
   }, [controller, recordings]);
@@ -263,46 +272,48 @@ export default function DynamicVideoPlayer({

   return (
     <>
-      <HlsVideoPlayer
-        videoRef={playerRef}
-        containerRef={containerRef}
-        visible={!(isScrubbing || isLoading)}
-        currentSource={source}
-        hotKeys={hotKeys}
-        supportsFullscreen={supportsFullscreen}
-        fullscreen={fullscreen}
-        inpointOffset={inpointOffset}
-        onTimeUpdate={onTimeUpdate}
-        onPlayerLoaded={onPlayerLoaded}
-        onClipEnded={onValidateClipEnd}
-        onSeekToTime={(timestamp, play) => {
-          if (onSeekToTime) {
-            onSeekToTime(timestamp, play);
-          }
-        }}
-        onPlaying={() => {
-          if (isScrubbing) {
-            playerRef.current?.pause();
-          }
-
-          if (loadingTimeout) {
-            clearTimeout(loadingTimeout);
-          }
-
-          setNoRecording(false);
-        }}
-        setFullResolution={setFullResolution}
-        onUploadFrame={onUploadFrameToPlus}
-        toggleFullscreen={toggleFullscreen}
-        onError={(error) => {
-          if (error == "stalled" && !isScrubbing) {
-            setIsBuffering(true);
-          }
-        }}
-        isDetailMode={isDetailMode}
-        camera={contextCamera || camera}
-        currentTimeOverride={currentTime}
-      />
+      {source && (
+        <HlsVideoPlayer
+          videoRef={playerRef}
+          containerRef={containerRef}
+          visible={!(isScrubbing || isLoading)}
+          currentSource={source}
+          hotKeys={hotKeys}
+          supportsFullscreen={supportsFullscreen}
+          fullscreen={fullscreen}
+          inpointOffset={inpointOffset}
+          onTimeUpdate={onTimeUpdate}
+          onPlayerLoaded={onPlayerLoaded}
+          onClipEnded={onValidateClipEnd}
+          onSeekToTime={(timestamp, play) => {
+            if (onSeekToTime) {
+              onSeekToTime(timestamp, play);
+            }
+          }}
+          onPlaying={() => {
+            if (isScrubbing) {
+              playerRef.current?.pause();
+            }
+
+            if (loadingTimeout) {
+              clearTimeout(loadingTimeout);
+            }
+
+            setNoRecording(false);
+          }}
+          setFullResolution={setFullResolution}
+          onUploadFrame={onUploadFrameToPlus}
+          toggleFullscreen={toggleFullscreen}
+          onError={(error) => {
+            if (error == "stalled" && !isScrubbing) {
+              setIsBuffering(true);
+            }
+          }}
+          isDetailMode={isDetailMode}
+          camera={contextCamera || camera}
+          currentTimeOverride={currentTime}
+        />
+      )}
       <PreviewPlayer
         className={cn(
           className,
@@ -18,7 +18,7 @@ import { z } from "zod";
 import axios from "axios";
 import { toast, Toaster } from "sonner";
 import { useTranslation } from "react-i18next";
-import { useState, useMemo } from "react";
+import { useState, useMemo, useEffect } from "react";
 import { LuTrash2, LuPlus } from "react-icons/lu";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { FrigateConfig } from "@/types/frigateConfig";

@@ -42,7 +42,15 @@ export default function CameraEditForm({
   onCancel,
 }: CameraEditFormProps) {
   const { t } = useTranslation(["views/settings"]);
-  const { data: config } = useSWR<FrigateConfig>("config");
+  const { data: config, mutate: mutateConfig } =
+    useSWR<FrigateConfig>("config");
+  const { data: rawPaths, mutate: mutateRawPaths } = useSWR<{
+    cameras: Record<
+      string,
+      { ffmpeg: { inputs: { path: string; roles: string[] }[] } }
+    >;
+    go2rtc: { streams: Record<string, string | string[]> };
+  }>(cameraName ? "config/raw_paths" : null);
   const [isLoading, setIsLoading] = useState(false);

   const formSchema = useMemo(

@@ -145,14 +153,23 @@ export default function CameraEditForm({
     if (cameraName && config?.cameras[cameraName]) {
       const camera = config.cameras[cameraName];
       defaultValues.enabled = camera.enabled ?? true;
-      defaultValues.ffmpeg.inputs = camera.ffmpeg?.inputs?.length
-        ? camera.ffmpeg.inputs.map((input) => ({
+      // Use raw paths from the admin endpoint if available, otherwise fall back to masked paths
+      const rawCameraData = rawPaths?.cameras?.[cameraName];
+      defaultValues.ffmpeg.inputs = rawCameraData?.ffmpeg?.inputs?.length
+        ? rawCameraData.ffmpeg.inputs.map((input) => ({
             path: input.path,
             roles: input.roles as Role[],
           }))
-        : defaultValues.ffmpeg.inputs;
+        : camera.ffmpeg?.inputs?.length
+          ? camera.ffmpeg.inputs.map((input) => ({
+              path: input.path,
+              roles: input.roles as Role[],
+            }))
+          : defaultValues.ffmpeg.inputs;

-      const go2rtcStreams = config.go2rtc?.streams || {};
+      const go2rtcStreams =
+        rawPaths?.go2rtc?.streams || config.go2rtc?.streams || {};
       const cameraStreams: Record<string, string[]> = {};

       // get candidate stream names for this camera. could be the camera's own name,
@@ -196,6 +213,60 @@ export default function CameraEditForm({
     mode: "onChange",
   });

+  // Update form values when rawPaths loads
+  useEffect(() => {
+    if (
+      cameraName &&
+      config?.cameras[cameraName] &&
+      rawPaths?.cameras?.[cameraName]
+    ) {
+      const camera = config.cameras[cameraName];
+      const rawCameraData = rawPaths.cameras[cameraName];
+
+      // Update ffmpeg inputs with raw paths
+      if (rawCameraData.ffmpeg?.inputs?.length) {
+        form.setValue(
+          "ffmpeg.inputs",
+          rawCameraData.ffmpeg.inputs.map((input) => ({
+            path: input.path,
+            roles: input.roles as Role[],
+          })),
+        );
+      }
+
+      // Update go2rtc streams with raw URLs
+      if (rawPaths.go2rtc?.streams) {
+        const validNames = new Set<string>();
+        validNames.add(cameraName);
+
+        camera.ffmpeg?.inputs?.forEach((input) => {
+          const restreamMatch = input.path.match(
+            /^rtsp:\/\/127\.0\.0\.1:8554\/([^?#/]+)(?:[?#].*)?$/,
+          );
+          if (restreamMatch) {
+            validNames.add(restreamMatch[1]);
+          }
+        });
+
+        const liveStreams = camera?.live?.streams;
+        if (liveStreams) {
+          Object.keys(liveStreams).forEach((key) => validNames.add(key));
+        }
+
+        const cameraStreams: Record<string, string[]> = {};
+        Object.entries(rawPaths.go2rtc.streams).forEach(([name, urls]) => {
+          if (validNames.has(name)) {
+            cameraStreams[name] = Array.isArray(urls) ? urls : [urls];
+          }
+        });
+
+        if (Object.keys(cameraStreams).length > 0) {
+          form.setValue("go2rtcStreams", cameraStreams);
+        }
+      }
+    }
+  }, [cameraName, config, rawPaths, form]);
+
   const { fields, append, remove } = useFieldArray({
     control: form.control,
     name: "ffmpeg.inputs",
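The restream regex above only accepts inputs that point at the local go2rtc RTSP server and captures the stream name. For illustration (stream names are hypothetical):

    const restreamPattern = /^rtsp:\/\/127\.0\.0\.1:8554\/([^?#/]+)(?:[?#].*)?$/;

    "rtsp://127.0.0.1:8554/front_door".match(restreamPattern)?.[1];
    // -> "front_door"
    "rtsp://127.0.0.1:8554/front_door?video=copy".match(restreamPattern)?.[1];
    // -> "front_door" (a query string is allowed)
    "rtsp://192.168.1.10:554/stream".match(restreamPattern);
    // -> null (a real camera URL, not a go2rtc restream)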
@@ -268,6 +339,8 @@ export default function CameraEditForm({
             }),
             { position: "top-center" },
           );
+          mutateConfig();
+          mutateRawPaths();
           if (onSave) onSave();
         });
       } else {

@@ -277,6 +350,8 @@ export default function CameraEditForm({
             }),
             { position: "top-center" },
           );
+          mutateConfig();
+          mutateRawPaths();
           if (onSave) onSave();
         }
       } else {
@@ -377,7 +377,7 @@ export default function Step1NameCamera({
   );
   return selectedBrand &&
     selectedBrand.value != "other" ? (
-    <Popover>
+    <Popover modal={true}>
       <PopoverTrigger asChild>
         <Button
           variant="ghost"

@@ -600,7 +600,7 @@ export default function Step3StreamConfig({
               <Label className="text-sm font-medium text-primary-variant">
                 {t("cameraWizard.step3.roles")}
               </Label>
-              <Popover>
+              <Popover modal={true}>
                 <PopoverTrigger asChild>
                   <Button variant="ghost" size="sm" className="h-4 w-4 p-0">
                     <LuInfo className="size-3" />

@@ -670,7 +670,7 @@ export default function Step3StreamConfig({
               <Label className="text-sm font-medium text-primary-variant">
                 {t("cameraWizard.step3.featuresTitle")}
               </Label>
-              <Popover>
+              <Popover modal={true}>
                 <PopoverTrigger asChild>
                   <Button variant="ghost" size="sm" className="h-4 w-4 p-0">
                     <LuInfo className="size-3" />
@@ -15,6 +15,7 @@ import {
   ReviewSummary,
   SegmentedReviewData,
 } from "@/types/review";
+import { TimelineType } from "@/types/timeline";
 import {
   getBeginningOfDayTimestamp,
   getEndOfDayTimestamp,

@@ -49,6 +50,16 @@ export default function Events() {
     false,
   );

+  const [notificationTab, setNotificationTab] =
+    useState<TimelineType>("timeline");
+
+  useSearchEffect("tab", (tab: string) => {
+    if (tab === "timeline" || tab === "events" || tab === "detail") {
+      setNotificationTab(tab as TimelineType);
+    }
+    return true;
+  });
+
   useSearchEffect("id", (reviewId: string) => {
     axios
       .get(`review/${reviewId}`)
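Together with the existing `id` handler, the new `tab` param lets a deep link land on any of the three timeline tabs instead of always `detail`. Assuming the events page route, a link could look like this (the path and review id are illustrative):

    // tab must be one of "timeline" | "events" | "detail"
    const reviewId = "1718295000.123456-abcd12"; // hypothetical review id
    const href = `/review?id=${reviewId}&tab=events`;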
@@ -66,7 +77,7 @@ export default function Events() {
             camera: resp.data.camera,
             startTime,
             severity: resp.data.severity,
-            timelineType: "detail",
+            timelineType: notificationTab,
           },
           true,
         );
@@ -93,19 +93,23 @@ function Live() {
   const allowedCameras = useAllowedCameras();

   const includesBirdseye = useMemo(() => {
+    // Restricted users should never have access to birdseye
+    if (isCustomRole) {
+      return false;
+    }
+
     if (
       config &&
       Object.keys(config.camera_groups).length &&
       cameraGroup &&
       config.camera_groups[cameraGroup] &&
-      cameraGroup != "default" &&
-      (!isCustomRole || "birdseye" in allowedCameras)
+      cameraGroup != "default"
     ) {
       return config.camera_groups[cameraGroup].cameras.includes("birdseye");
     } else {
       return false;
     }
-  }, [config, cameraGroup, allowedCameras, isCustomRole]);
+  }, [config, cameraGroup, isCustomRole]);

   const cameras = useMemo(() => {
     if (!config) {
@@ -26,7 +26,7 @@ import useSWR from "swr";
 import FilterSwitch from "@/components/filter/FilterSwitch";
 import { ZoneMaskFilterButton } from "@/components/filter/ZoneMaskFilter";
 import { PolygonType } from "@/types/canvas";
-import CameraSettingsView from "@/views/settings/CameraSettingsView";
+import CameraReviewSettingsView from "@/views/settings/CameraReviewSettingsView";
 import CameraManagementView from "@/views/settings/CameraManagementView";
 import MotionTunerView from "@/views/settings/MotionTunerView";
 import MasksAndZonesView from "@/views/settings/MasksAndZonesView";

@@ -93,7 +93,7 @@ const settingsGroups = [
     label: "cameras",
     items: [
       { key: "cameraManagement", component: CameraManagementView },
-      { key: "cameraReview", component: CameraSettingsView },
+      { key: "cameraReview", component: CameraReviewSettingsView },
       { key: "masksAndZones", component: MasksAndZonesView },
       { key: "motionTuner", component: MotionTunerView },
     ],

@@ -1,4 +1,5 @@
 import { ReviewSeverity } from "./review";
+import { TimelineType } from "./timeline";

 export type Recording = {
   id: string;

@@ -37,7 +38,7 @@ export type RecordingStartingPoint = {
   camera: string;
   startTime: number;
   severity: ReviewSeverity;
-  timelineType?: "timeline" | "events" | "detail";
+  timelineType?: TimelineType;
 };

 export type RecordingPlayerError = "stalled" | "startup";
@@ -24,3 +24,57 @@ export function calculateInpointOffset(

   return 0;
 }
+
+/**
+ * Calculates the video player time (in seconds) for a given timestamp
+ * by iterating through recording segments and summing their durations.
+ * This accounts for the fact that the video is a concatenation of segments,
+ * not a single continuous stream.
+ *
+ * @param timestamp - The target timestamp to seek to
+ * @param recordings - Array of recording segments
+ * @param inpointOffset - HLS inpoint offset to subtract from the result
+ * @returns The calculated seek position in seconds, or undefined if timestamp is out of range
+ */
+export function calculateSeekPosition(
+  timestamp: number,
+  recordings: Recording[],
+  inpointOffset: number = 0,
+): number | undefined {
+  if (!recordings || recordings.length === 0) {
+    return undefined;
+  }
+
+  // Check if timestamp is within the recordings range
+  if (
+    timestamp < recordings[0].start_time ||
+    timestamp > recordings[recordings.length - 1].end_time
+  ) {
+    return undefined;
+  }
+
+  let seekSeconds = 0;
+
+  (recordings || []).every((segment) => {
+    // if the next segment is past the desired time, stop calculating
+    if (segment.start_time > timestamp) {
+      return false;
+    }
+
+    if (segment.end_time < timestamp) {
+      // Add the full duration of this segment
+      seekSeconds += segment.end_time - segment.start_time;
+      return true;
+    }
+
+    // We're in this segment - calculate position within it
+    seekSeconds +=
+      segment.end_time - segment.start_time - (segment.end_time - timestamp);
+    return true;
+  });
+
+  // Adjust for HLS inpoint offset
+  seekSeconds -= inpointOffset;
+
+  return seekSeconds >= 0 ? seekSeconds : undefined;
+}
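A worked example of `calculateSeekPosition` across a motion-only gap (timestamps are illustrative epoch seconds; only `start_time` and `end_time` are read from each segment):

    import { Recording } from "@/types/record";
    import { calculateSeekPosition } from "@/utils/videoUtil";

    const recordings = [
      { start_time: 1000, end_time: 1010 }, // 10s segment
      { start_time: 1020, end_time: 1030 }, // 10s segment after a 10s gap
    ] as unknown as Recording[];

    calculateSeekPosition(1025, recordings); // 15: full first segment (10s) + 5s into the second
    calculateSeekPosition(999, recordings); // undefined: before the first segment
    calculateSeekPosition(1035, recordings); // undefined: after the last segment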
@@ -16,7 +16,6 @@ import { useCallback, useEffect, useMemo, useState } from "react";
 import { useTranslation } from "react-i18next";
 import { FaFolderPlus } from "react-icons/fa";
 import { MdModelTraining } from "react-icons/md";
-import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { FiMoreVertical } from "react-icons/fi";
 import useSWR from "swr";
 import Heading from "@/components/ui/heading";

@@ -40,6 +39,7 @@ import {
   AlertDialogTitle,
 } from "@/components/ui/alert-dialog";
 import BlurredIconButton from "@/components/button/BlurredIconButton";
+import { Skeleton } from "@/components/ui/skeleton";

 const allModelTypes = ["objects", "states"] as const;
 type ModelType = (typeof allModelTypes)[number];

@@ -333,9 +333,7 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
           <ImageShadowOverlay lowerClassName="h-[30%] z-0" />
         </>
       ) : (
-        <div className="flex size-full items-center justify-center bg-background_alt">
-          <MdModelTraining className="size-16 text-muted-foreground" />
-        </div>
+        <Skeleton className="flex size-full items-center justify-center" />
       )}
       <div className="absolute bottom-2 left-3 text-lg text-white smart-capitalize">
         {config.name}

@@ -352,11 +350,9 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
           onClick={(e) => e.stopPropagation()}
         >
           <DropdownMenuItem onClick={handleEditClick}>
-            <LuPencil className="mr-2 size-4" />
             <span>{t("button.edit", { ns: "common" })}</span>
           </DropdownMenuItem>
           <DropdownMenuItem onClick={handleDeleteClick}>
-            <LuTrash2 className="mr-2 size-4" />
             <span>{t("button.delete", { ns: "common" })}</span>
           </DropdownMenuItem>
         </DropdownMenuContent>

@@ -799,7 +799,7 @@ function DetectionReview({
         (itemsToReview ?? 0) > 0 && (
           <div className="col-span-full flex items-center justify-center">
             <Button
-              className="text-white"
+              className="text-balance text-white"
               aria-label={t("markTheseItemsAsReviewed")}
               variant="select"
               onClick={() => {
@@ -16,7 +16,6 @@ import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator
 import useImageLoaded from "@/hooks/use-image-loaded";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { useTrackedObjectUpdate } from "@/api/ws";
-import { isEqual } from "lodash";
 import TimeAgo from "@/components/dynamic/TimeAgo";
 import SearchResultActions from "@/components/menu/SearchResultActions";
 import { SearchTab } from "@/components/overlay/detail/SearchDetailDialog";

@@ -25,14 +24,12 @@ import { useTranslation } from "react-i18next";
 import { getTranslatedLabel } from "@/utils/i18n";

 type ExploreViewProps = {
-  searchDetail: SearchResult | undefined;
   setSearchDetail: (search: SearchResult | undefined) => void;
   setSimilaritySearch: (search: SearchResult) => void;
   onSelectSearch: (item: SearchResult, ctrl: boolean, page?: SearchTab) => void;
 };

 export default function ExploreView({
-  searchDetail,
   setSearchDetail,
   setSimilaritySearch,
   onSelectSearch,

@@ -83,20 +80,6 @@ export default function ExploreView({
     }
   }, [wsUpdate, mutate]);

-  // update search detail when results change
-
-  useEffect(() => {
-    if (searchDetail && events) {
-      const updatedSearchDetail = events.find(
-        (result) => result.id === searchDetail.id,
-      );
-
-      if (updatedSearchDetail && !isEqual(updatedSearchDetail, searchDetail)) {
-        setSearchDetail(updatedSearchDetail);
-      }
-    }
-  }, [events, searchDetail, setSearchDetail]);
-
   if (isLoading) {
     return (
       <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
@@ -850,6 +850,29 @@ function FrigateCameraFeatures({
     }
   }, [activeToastId, t]);

+  const endEventViaBeacon = useCallback(() => {
+    if (!recordingEventIdRef.current) return;
+
+    const url = `${window.location.origin}/api/events/${recordingEventIdRef.current}/end`;
+    const payload = JSON.stringify({
+      end_time: Math.ceil(Date.now() / 1000),
+    });
+
+    // this needs to be a synchronous XMLHttpRequest to guarantee the PUT
+    // reaches the server before the browser kills the page
+    const xhr = new XMLHttpRequest();
+    try {
+      xhr.open("PUT", url, false);
+      xhr.setRequestHeader("Content-Type", "application/json");
+      xhr.setRequestHeader("X-CSRF-TOKEN", "1");
+      xhr.setRequestHeader("X-CACHE-BYPASS", "1");
+      xhr.withCredentials = true;
+      xhr.send(payload);
+    } catch (e) {
+      // Silently ignore errors during unload
+    }
+  }, []);
+
   const handleEventButtonClick = useCallback(() => {
     if (isRecording) {
       endEvent();
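The synchronous XHR above is deliberate: `navigator.sendBeacon` can only send POST and cannot set the CSRF or cache-bypass headers, and an ordinary async request may be cancelled while the page is torn down. A `fetch` with `keepalive` is the closest asynchronous alternative, with weaker delivery guarantees on unload; a sketch mirroring the endpoint and headers above:

    // Best-effort alternative using fetch keepalive.
    function endEventKeepalive(eventId: string) {
      fetch(`${window.location.origin}/api/events/${eventId}/end`, {
        method: "PUT",
        keepalive: true,
        credentials: "include",
        headers: {
          "Content-Type": "application/json",
          "X-CSRF-TOKEN": "1",
          "X-CACHE-BYPASS": "1",
        },
        body: JSON.stringify({ end_time: Math.ceil(Date.now() / 1000) }),
      }).catch(() => {
        // ignore failures during unload
      });
    }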
@@ -887,8 +910,19 @@ function FrigateCameraFeatures({
   }, [camera.name, isRestreamed, preferredLiveMode, t]);

   useEffect(() => {
+    // Handle page unload/close (browser close, tab close, refresh, navigation to external site)
+    const handleBeforeUnload = () => {
+      if (recordingEventIdRef.current) {
+        endEventViaBeacon();
+      }
+    };
+
+    window.addEventListener("beforeunload", handleBeforeUnload);
+
     // ensure manual event is stopped when component unmounts
     return () => {
+      window.removeEventListener("beforeunload", handleBeforeUnload);
+
       if (recordingEventIdRef.current) {
         endEvent();
       }
@@ -20,7 +20,14 @@ import {
   FrigateConfig,
 } from "@/types/frigateConfig";
 import { ReviewSegment } from "@/types/review";
-import { useCallback, useEffect, useMemo, useRef, useState } from "react";
+import {
+  useCallback,
+  useContext,
+  useEffect,
+  useMemo,
+  useRef,
+  useState,
+} from "react";
 import {
   isDesktop,
   isMobile,

@@ -46,6 +53,8 @@ import { useStreamingSettings } from "@/context/streaming-settings-provider";
 import { useTranslation } from "react-i18next";
 import { EmptyCard } from "@/components/card/EmptyCard";
 import { BsFillCameraVideoOffFill } from "react-icons/bs";
+import { AuthContext } from "@/context/auth-context";
+import { useIsCustomRole } from "@/hooks/use-is-custom-role";

 type LiveDashboardViewProps = {
   cameras: CameraConfig[];

@@ -374,10 +383,6 @@ export default function LiveDashboardView({
     onSaveMuting(true);
   };

-  if (cameras.length == 0 && !includeBirdseye) {
-    return <NoCameraView />;
-  }
-
   return (
     <div
       className="scrollbar-container size-full select-none overflow-y-auto px-1 pt-2 md:p-2"
@@ -439,198 +444,215 @@ export default function LiveDashboardView({
         </div>
       )}

+      {cameras.length == 0 && !includeBirdseye ? (
+        <NoCameraView />
+      ) : (
+        <>
           {!fullscreen && events && events.length > 0 && (
             <ScrollArea>
               <TooltipProvider>
                 <div className="flex items-center gap-2 px-1">
                   {events.map((event) => {
                     return (
                       <AnimatedEventCard
                         key={event.id}
                         event={event}
                         selectedGroup={cameraGroup}
                         updateEvents={updateEvents}
                       />
                     );
                   })}
                 </div>
               </TooltipProvider>
               <ScrollBar orientation="horizontal" />
             </ScrollArea>
           )}

           {!cameraGroup || cameraGroup == "default" || isMobileOnly ? (
             <>
               <div
                 className={cn(
                   "mt-2 grid grid-cols-1 gap-2 px-2 md:gap-4",
                   mobileLayout == "grid" &&
                     "grid-cols-2 xl:grid-cols-3 3xl:grid-cols-4",
                   isMobile && "px-0",
                 )}
               >
                 {includeBirdseye && birdseyeConfig?.enabled && (
                   <div
                     className={(() => {
                       const aspectRatio =
                         birdseyeConfig.width / birdseyeConfig.height;
                       if (aspectRatio > 2) {
                         return `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
                       } else if (aspectRatio < 1) {
                         return `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
                       } else {
                         return "aspect-video";
                       }
                     })()}
                     ref={birdseyeContainerRef}
                   >
                     <BirdseyeLivePlayer
                       birdseyeConfig={birdseyeConfig}
                       liveMode={birdseyeConfig.restream ? "mse" : "jsmpeg"}
                       onClick={() => onSelectCamera("birdseye")}
                       containerRef={birdseyeContainerRef}
                     />
                   </div>
                 )}
                 {cameras.map((camera) => {
                   let grow;
                   const aspectRatio =
                     camera.detect.width / camera.detect.height;
                   if (aspectRatio > 2) {
                     grow = `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
                   } else if (aspectRatio < 1) {
                     grow = `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
                   } else {
                     grow = "aspect-video";
                   }
                   const availableStreams = camera.live.streams || {};
                   const firstStreamEntry =
                     Object.values(availableStreams)[0] || "";

                   const streamNameFromSettings =
                     currentGroupStreamingSettings?.[camera.name]?.streamName ||
                     "";
                   const streamExists =
                     streamNameFromSettings &&
                     Object.values(availableStreams).includes(
                       streamNameFromSettings,
                     );

                   const streamName = streamExists
                     ? streamNameFromSettings
                     : firstStreamEntry;
                   const streamType =
                     currentGroupStreamingSettings?.[camera.name]?.streamType;
                   const autoLive =
                     streamType !== undefined
                       ? streamType !== "no-streaming"
                       : undefined;
                   const showStillWithoutActivity =
                     currentGroupStreamingSettings?.[camera.name]?.streamType !==
                     "continuous";
                   const useWebGL =
                     currentGroupStreamingSettings?.[camera.name]
                       ?.compatibilityMode || false;
                   return (
                     <LiveContextMenu
                       className={grow}
                       key={camera.name}
                       camera={camera.name}
                       cameraGroup={cameraGroup}
                       streamName={streamName}
                       preferredLiveMode={
                         preferredLiveModes[camera.name] ?? "mse"
                       }
                       isRestreamed={isRestreamedStates[camera.name]}
                       supportsAudio={
                         supportsAudioOutputStates[streamName]?.supportsAudio ??
                         false
                       }
                       audioState={audioStates[camera.name]}
                       toggleAudio={() => toggleAudio(camera.name)}
                       statsState={statsStates[camera.name]}
                       toggleStats={() => toggleStats(camera.name)}
                       volumeState={volumeStates[camera.name] ?? 1}
                       setVolumeState={(value) =>
                         setVolumeStates({
                           [camera.name]: value,
                         })
                       }
                       muteAll={muteAll}
                       unmuteAll={unmuteAll}
                       resetPreferredLiveMode={() =>
                         resetPreferredLiveMode(camera.name)
                       }
                       config={config}
                     >
                       <LivePlayer
                         cameraRef={cameraRef}
                         key={camera.name}
                         className={`${grow} rounded-lg bg-black md:rounded-2xl`}
                         windowVisible={
                           windowVisible && visibleCameras.includes(camera.name)
                         }
                         cameraConfig={camera}
                         preferredLiveMode={
                           preferredLiveModes[camera.name] ?? "mse"
                         }
                         autoLive={autoLive ?? globalAutoLive}
                         showStillWithoutActivity={
                           showStillWithoutActivity ?? true
                         }
                         alwaysShowCameraName={displayCameraNames}
                         useWebGL={useWebGL}
                         playInBackground={false}
                         showStats={statsStates[camera.name]}
                         streamName={streamName}
                         onClick={() => onSelectCamera(camera.name)}
                         onError={(e) => handleError(camera.name, e)}
                         onResetLiveMode={() =>
                           resetPreferredLiveMode(camera.name)
                         }
                         playAudio={audioStates[camera.name] ?? false}
                         volume={volumeStates[camera.name]}
                       />
                     </LiveContextMenu>
                   );
                 })}
               </div>
               {isDesktop && (
                 <div
                   className={cn(
                     "fixed",
                     isDesktop && "bottom-12 lg:bottom-9",
                     isMobile && "bottom-12 lg:bottom-16",
                     hasScrollbar && isDesktop ? "right-6" : "right-3",
                     "z-50 flex flex-row gap-2",
                   )}
                 >
                   <Tooltip>
                     <TooltipTrigger asChild>
                       <div
                         className="cursor-pointer rounded-lg bg-secondary text-secondary-foreground opacity-60 transition-all duration-300 hover:bg-muted hover:opacity-100"
                         onClick={toggleFullscreen}
                       >
                         {fullscreen ? (
                           <FaCompress className="size-5 md:m-[6px]" />
                         ) : (
                           <FaExpand className="size-5 md:m-[6px]" />
                         )}
                       </div>
                     </TooltipTrigger>
                     <TooltipContent>
                       {fullscreen
                         ? t("button.exitFullscreen", { ns: "common" })
                         : t("button.fullscreen", { ns: "common" })}
                     </TooltipContent>
                   </Tooltip>
                 </div>
               )}
             </>
           ) : (
             <DraggableGridLayout
               cameras={cameras}
               cameraGroup={cameraGroup}
               containerRef={containerRef}
               cameraRef={cameraRef}
               includeBirdseye={includeBirdseye}
               onSelectCamera={onSelectCamera}
               windowVisible={windowVisible}
               visibleCameras={visibleCameras}
               isEditMode={isEditMode}
               setIsEditMode={setIsEditMode}
               fullscreen={fullscreen}
               toggleFullscreen={toggleFullscreen}
             />
           )}
+        </>
+      )}
     </div>
   );
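For reference, the stream-selection fallback inside the camera map above reduces to a small rule: prefer the stream saved in the group's streaming settings, but only if the camera still advertises it. A worked example with illustrative stream names:

```ts
// Hypothetical values -- the real ones come from camera.live.streams and
// currentGroupStreamingSettings.
const availableStreams = { main: "front_main", sub: "front_sub" };
const saved = "front_hd"; // stale name left over from an older config

const streamName = Object.values(availableStreams).includes(saved)
  ? saved
  : Object.values(availableStreams)[0] || ""; // falls back to "front_main"
```

The same hunk also shows the tri-state `autoLive` pattern: `undefined` when the user never chose a stream type (so the global default applies via `autoLive ?? globalAutoLive`), and a concrete boolean otherwise.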
@@ -638,15 +660,26 @@ export default function LiveDashboardView({

 function NoCameraView() {
   const { t } = useTranslation(["views/live"]);
+  const { auth } = useContext(AuthContext);
+  const isCustomRole = useIsCustomRole();
+
+  // Check if this is a restricted user with no cameras in this group
+  const isRestricted = isCustomRole && auth.isAuthenticated;
+
   return (
     <div className="flex size-full items-center justify-center">
       <EmptyCard
         icon={<BsFillCameraVideoOffFill className="size-8" />}
-        title={t("noCameras.title")}
-        description={t("noCameras.description")}
-        buttonText={t("noCameras.buttonText")}
-        link="/settings?page=cameraManagement"
+        title={
+          isRestricted ? t("noCameras.restricted.title") : t("noCameras.title")
+        }
+        description={
+          isRestricted
+            ? t("noCameras.restricted.description")
+            : t("noCameras.description")
+        }
+        buttonText={!isRestricted ? t("noCameras.buttonText") : undefined}
+        link={!isRestricted ? "/settings?page=cameraManagement" : undefined}
       />
     </div>
   );
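The `useIsCustomRole` implementation is not part of this diff; only its import path appears above. A hypothetical sketch of the shape this view relies on (the field names on `auth` are assumptions, not the real context type):

```ts
import { useContext } from "react";
import { AuthContext } from "@/context/auth-context";

// Assumed behavior: any authenticated role other than the built-in
// admin/viewer roles counts as "custom", i.e. potentially restricted to a
// subset of cameras, so the settings button and link are hidden.
export function useIsCustomRole(): boolean {
  const { auth } = useContext(AuthContext);
  const role = auth.user?.role; // hypothetical field
  return !!role && !["admin", "viewer"].includes(role);
}
```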
@@ -19,7 +19,6 @@ import useKeyboardListener, {
 import scrollIntoView from "scroll-into-view-if-needed";
 import InputWithTags from "@/components/input/InputWithTags";
 import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
-import { isEqual } from "lodash";
 import { formatDateToLocaleString } from "@/utils/dateUtil";
 import SearchThumbnailFooter from "@/components/card/SearchThumbnailFooter";
 import ExploreSettings from "@/components/settings/SearchSettings";

@@ -213,7 +212,7 @@ export default function SearchView({

   // detail

-  const [searchDetail, setSearchDetail] = useState<SearchResult>();
+  const [selectedId, setSelectedId] = useState<string>();
   const [page, setPage] = useState<SearchTab>("snapshot");

   // remove duplicate event ids

@@ -229,6 +228,16 @@ export default function SearchView({
     return results;
   }, [searchResults]);

+  const searchDetail = useMemo(() => {
+    if (!selectedId) return undefined;
+    // summary view
+    if (defaultView === "summary" && exploreEvents) {
+      return exploreEvents.find((r) => r.id === selectedId);
+    }
+    // grid view
+    return uniqueResults.find((r) => r.id === selectedId);
+  }, [selectedId, uniqueResults, exploreEvents, defaultView]);
+
   // search interaction

   const [selectedObjects, setSelectedObjects] = useState<string[]>([]);
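This is the core of the refactor: the component now stores only the selected id and derives the full `SearchResult` on render, which is why the `isEqual` import and the synchronizing effect below can go away. A minimal sketch of the pattern with illustrative names:

```ts
// Store the id, derive the object. When `results` refreshes after a
// mutation, `selected` recomputes and always points at the latest copy of
// the row -- no sync effect, no deep-equality check, no stale detail dialog.
const [selectedId, setSelectedId] = useState<string>();
const selected = useMemo(
  () => results.find((r) => r.id === selectedId),
  [results, selectedId],
);
```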
@@ -256,7 +265,7 @@ export default function SearchView({
       }
     } else {
       setPage(page);
-      setSearchDetail(item);
+      setSelectedId(item.id);
     }
   },
   [selectedObjects],

@@ -295,26 +304,12 @@ export default function SearchView({
     }
   };

-  // update search detail when results change
-
+  // clear selected item when search results clear
   useEffect(() => {
-    if (searchDetail) {
-      const results =
-        defaultView === "summary" ? exploreEvents : searchResults?.flat();
-      if (results) {
-        const updatedSearchDetail = results.find(
-          (result) => result.id === searchDetail.id,
-        );
-
-        if (
-          updatedSearchDetail &&
-          !isEqual(updatedSearchDetail, searchDetail)
-        ) {
-          setSearchDetail(updatedSearchDetail);
-        }
-      }
+    if (!searchResults && !exploreEvents) {
+      setSelectedId(undefined);
     }
-  }, [searchResults, exploreEvents, searchDetail, defaultView]);
+  }, [searchResults, exploreEvents]);

   const hasExistingSearch = useMemo(
     () => searchResults != undefined || searchFilter != undefined,

@@ -340,7 +335,7 @@ export default function SearchView({
           ? results.length - 1
           : (currentIndex - 1 + results.length) % results.length;

-      setSearchDetail(results[newIndex]);
+      setSelectedId(results[newIndex].id);
     }
   }, [uniqueResults, exploreEvents, searchDetail, defaultView]);

@@ -357,7 +352,7 @@ export default function SearchView({
       const newIndex =
         currentIndex === -1 ? 0 : (currentIndex + 1) % results.length;

-      setSearchDetail(results[newIndex]);
+      setSelectedId(results[newIndex].id);
     }
   }, [uniqueResults, exploreEvents, searchDetail, defaultView]);
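The prev/next handlers above rely on standard wrap-around index arithmetic; a worked example assuming four results:

```ts
const n = 4; // results.length
// Previous from index 0 wraps to the end; adding n before the modulo keeps
// the operand non-negative (JavaScript's % preserves the dividend's sign):
const prev = (0 - 1 + n) % n; // 3
// Next from the last index wraps back to the start:
const next = (3 + 1) % n; // 0
```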
@@ -509,7 +504,7 @@ export default function SearchView({
       <SearchDetailDialog
         search={searchDetail}
         page={page}
-        setSearch={setSearchDetail}
+        setSearch={(item) => setSelectedId(item?.id)}
         setSearchPage={setPage}
         setSimilarity={
           searchDetail && (() => setSimilaritySearch(searchDetail))

@@ -629,7 +624,7 @@ export default function SearchView({
       detail: boolean,
     ) => {
       if (detail && selectedObjects.length == 0) {
-        setSearchDetail(value);
+        setSelectedId(value.id);
       } else {
         onSelectSearch(
           value,

@@ -724,8 +719,7 @@ export default function SearchView({
         defaultView == "summary" && (
           <div className="scrollbar-container flex size-full flex-col overflow-y-auto">
             <ExploreView
-              searchDetail={searchDetail}
-              setSearchDetail={setSearchDetail}
+              setSearchDetail={(item) => setSelectedId(item?.id)}
               setSimilaritySearch={setSimilaritySearch}
               onSelectSearch={onSelectSearch}
             />
@@ -5,17 +5,9 @@ import { Button } from "@/components/ui/button";
 import useSWR from "swr";
 import { FrigateConfig } from "@/types/frigateConfig";
 import { useTranslation } from "react-i18next";
-import { Label } from "@/components/ui/label";
 import CameraEditForm from "@/components/settings/CameraEditForm";
 import CameraWizardDialog from "@/components/settings/CameraWizardDialog";
 import { LuPlus } from "react-icons/lu";
-import {
-  Select,
-  SelectContent,
-  SelectItem,
-  SelectTrigger,
-  SelectValue,
-} from "@/components/ui/select";
 import { IoMdArrowRoundBack } from "react-icons/io";
 import { isDesktop } from "react-device-detect";
 import { CameraNameLabel } from "@/components/camera/FriendlyNameLabel";

@@ -90,31 +82,6 @@ export default function CameraManagementView({
           </Button>
           {cameras.length > 0 && (
             <>
-              <div className="my-4 flex flex-col gap-2">
-                <Label>{t("cameraManagement.editCamera")}</Label>
-                <Select
-                  onValueChange={(value) => {
-                    setEditCameraName(value);
-                    setViewMode("edit");
-                  }}
-                >
-                  <SelectTrigger className="w-[180px]">
-                    <SelectValue
-                      placeholder={t("cameraManagement.selectCamera")}
-                    />
-                  </SelectTrigger>
-                  <SelectContent>
-                    {cameras.map((camera) => {
-                      return (
-                        <SelectItem key={camera} value={camera}>
-                          <CameraNameLabel camera={camera} />
-                        </SelectItem>
-                      );
-                    })}
-                  </SelectContent>
-                </Select>
-              </div>
-
               <Separator className="my-2 flex bg-secondary" />
               <div className="max-w-7xl space-y-4">
                 <Heading as="h4" className="my-2">
web/src/views/settings/CameraReviewSettingsView.tsx (new file, 738 lines)
@@ -0,0 +1,738 @@
import Heading from "@/components/ui/heading";
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
import { Toaster, toast } from "sonner";
import {
  Form,
  FormControl,
  FormDescription,
  FormField,
  FormItem,
  FormLabel,
  FormMessage,
} from "@/components/ui/form";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import { Separator } from "@/components/ui/separator";
import { Button } from "@/components/ui/button";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { Checkbox } from "@/components/ui/checkbox";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import axios from "axios";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { MdCircle } from "react-icons/md";
import { cn } from "@/lib/utils";
import { Trans, useTranslation } from "react-i18next";
import { Switch } from "@/components/ui/switch";
import { Label } from "@/components/ui/label";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { getTranslatedLabel } from "@/utils/i18n";
import {
  useAlertsState,
  useDetectionsState,
  useObjectDescriptionState,
  useReviewDescriptionState,
} from "@/api/ws";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { formatList } from "@/utils/stringUtil";

type CameraReviewSettingsViewProps = {
  selectedCamera: string;
  setUnsavedChanges: React.Dispatch<React.SetStateAction<boolean>>;
};

type CameraReviewSettingsValueType = {
  alerts_zones: string[];
  detections_zones: string[];
};

export default function CameraReviewSettingsView({
  selectedCamera,
  setUnsavedChanges,
}: CameraReviewSettingsViewProps) {
  const { t } = useTranslation(["views/settings"]);
  const { getLocaleDocUrl } = useDocDomain();

  const { data: config, mutate: updateConfig } =
    useSWR<FrigateConfig>("config");

  const cameraConfig = useMemo(() => {
    if (config && selectedCamera) {
      return config.cameras[selectedCamera];
    }
  }, [config, selectedCamera]);

  const [changedValue, setChangedValue] = useState(false);
  const [isLoading, setIsLoading] = useState(false);
  const [selectDetections, setSelectDetections] = useState(false);

  const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;

  const selectCameraName = useCameraFriendlyName(selectedCamera);

  // zones and labels

  const getZoneName = useCallback(
    (zoneId: string, cameraId?: string) =>
      resolveZoneName(config, zoneId, cameraId),
    [config],
  );

  const zones = useMemo(() => {
    if (cameraConfig) {
      return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
        camera: cameraConfig.name,
        name,
        friendly_name: cameraConfig.zones[name].friendly_name,
        objects: zoneData.objects,
        color: zoneData.color,
      }));
    }
  }, [cameraConfig]);

  const alertsLabels = useMemo(() => {
    return cameraConfig?.review.alerts.labels
      ? formatList(
          cameraConfig.review.alerts.labels.map((label) =>
            getTranslatedLabel(
              label,
              cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
            ),
          ),
        )
      : "";
  }, [cameraConfig]);

  const detectionsLabels = useMemo(() => {
    return cameraConfig?.review.detections.labels
      ? formatList(
          cameraConfig.review.detections.labels.map((label) =>
            getTranslatedLabel(
              label,
              cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
            ),
          ),
        )
      : "";
  }, [cameraConfig]);
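To make the `zones` memo above concrete, here is the shape it produces for a hypothetical camera config (all field values are illustrative):

```ts
const cameraConfig = {
  name: "front",
  zones: {
    yard: { friendly_name: "Front Yard", objects: ["person"], color: [220, 0, 0] },
  },
};

// Object.entries(cameraConfig.zones) is mapped to:
// [
//   {
//     camera: "front",
//     name: "yard",
//     friendly_name: "Front Yard",
//     objects: ["person"],
//     color: [220, 0, 0],
//   },
// ]
```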
  // form

  const formSchema = z.object({
    alerts_zones: z.array(z.string()),
    detections_zones: z.array(z.string()),
  });

  const form = useForm<z.infer<typeof formSchema>>({
    resolver: zodResolver(formSchema),
    mode: "onChange",
    defaultValues: {
      alerts_zones: cameraConfig?.review.alerts.required_zones || [],
      detections_zones: cameraConfig?.review.detections.required_zones || [],
    },
  });
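Since the form runs with `mode: "onChange"`, the zod schema above is validated on every edit. A quick sketch of what the resolver enforces:

```ts
formSchema.safeParse({ alerts_zones: ["yard"], detections_zones: [] });
// => { success: true, data: { alerts_zones: ["yard"], detections_zones: [] } }

formSchema.safeParse({ alerts_zones: "yard", detections_zones: [] });
// => { success: false, ... } -- a bare string is rejected; both fields
//    must be arrays of strings
```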
  const watchedAlertsZones = form.watch("alerts_zones");
  const watchedDetectionsZones = form.watch("detections_zones");

  const { payload: alertsState, send: sendAlerts } =
    useAlertsState(selectedCamera);
  const { payload: detectionsState, send: sendDetections } =
    useDetectionsState(selectedCamera);

  const { payload: objDescState, send: sendObjDesc } =
    useObjectDescriptionState(selectedCamera);
  const { payload: revDescState, send: sendRevDesc } =
    useReviewDescriptionState(selectedCamera);

  const handleCheckedChange = useCallback(
    (isChecked: boolean) => {
      if (!isChecked) {
        form.reset({
          alerts_zones: watchedAlertsZones,
          detections_zones: [],
        });
      }
      setChangedValue(true);
      setSelectDetections(isChecked as boolean);
    },
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
    [watchedAlertsZones],
  );

  const saveToConfig = useCallback(
    async (
      { alerts_zones, detections_zones }: CameraReviewSettingsValueType, // values submitted via the form
    ) => {
      const createQuery = (zones: string[], type: "alerts" | "detections") =>
        zones.length
          ? zones
              .map(
                (zone) =>
                  `&cameras.${selectedCamera}.review.${type}.required_zones=${zone}`,
              )
              .join("")
          : cameraConfig?.review[type]?.required_zones &&
              cameraConfig?.review[type]?.required_zones.length > 0
            ? `&cameras.${selectedCamera}.review.${type}.required_zones`
            : "";

      const alertQueries = createQuery(alerts_zones, "alerts");
      const detectionQueries = createQuery(detections_zones, "detections");

      axios
        .put(`config/set?${alertQueries}${detectionQueries}`, {
          requires_restart: 0,
        })
        .then((res) => {
          if (res.status === 200) {
            toast.success(
              t("cameraReview.reviewClassification.toast.success"),
              {
                position: "top-center",
              },
            );
            updateConfig();
          } else {
            toast.error(
              t("toast.save.error.title", {
                errorMessage: res.statusText,
                ns: "common",
              }),
              {
                position: "top-center",
              },
            );
          }
        })
        .catch((error) => {
          const errorMessage =
            error.response?.data?.message ||
            error.response?.data?.detail ||
            "Unknown error";
          toast.error(
            t("toast.save.error.title", {
              errorMessage,
              ns: "common",
            }),
            {
              position: "top-center",
            },
          );
        })
        .finally(() => {
          setIsLoading(false);
        });
    },
    [updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
  );
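A worked example of the query string `saveToConfig` assembles, assuming `selectedCamera === "front"` and two alert zones:

```ts
createQuery(["yard", "porch"], "alerts");
// => "&cameras.front.review.alerts.required_zones=yard" +
//    "&cameras.front.review.alerts.required_zones=porch"

// With an empty selection but zones previously saved in the config, the
// bare key is emitted so the backend clears the setting:
createQuery([], "alerts");
// => "&cameras.front.review.alerts.required_zones"

// The final request then looks like:
// PUT config/set?&cameras.front.review.alerts.required_zones=yard&...
```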
  const onCancel = useCallback(() => {
    if (!cameraConfig) {
      return;
    }

    setChangedValue(false);
    setUnsavedChanges(false);
    removeMessage(
      "camera_settings",
      `review_classification_settings_${selectedCamera}`,
    );
    form.reset({
      alerts_zones: cameraConfig?.review.alerts.required_zones ?? [],
      detections_zones: cameraConfig?.review.detections.required_zones || [],
    });
    setSelectDetections(
      !!cameraConfig?.review.detections.required_zones?.length,
    );
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [removeMessage, selectedCamera, setUnsavedChanges, cameraConfig]);

  useEffect(() => {
    onCancel();
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [selectedCamera]);

  useEffect(() => {
    if (changedValue) {
      addMessage(
        "camera_settings",
        t("cameraReview.reviewClassification.unsavedChanges", {
          camera: selectedCamera,
        }),
        undefined,
        `review_classification_settings_${selectedCamera}`,
      );
    } else {
      removeMessage(
        "camera_settings",
        `review_classification_settings_${selectedCamera}`,
      );
    }
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [changedValue, selectedCamera]);

  function onSubmit(values: z.infer<typeof formSchema>) {
    setIsLoading(true);

    saveToConfig(values as CameraReviewSettingsValueType);
  }

  useEffect(() => {
    document.title = t("documentTitle.cameraReview");
  }, [t]);

  if (!cameraConfig && !selectedCamera) {
    return <ActivityIndicator />;
  }

  return (
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          <Heading as="h4" className="mb-2">
            {t("cameraReview.title")}
          </Heading>

          <Heading as="h4" className="my-2">
            <Trans ns="views/settings">cameraReview.review.title</Trans>
          </Heading>

          <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
            <div className="flex flex-row items-center">
              <Switch
                id="alerts-enabled"
                className="mr-3"
                checked={alertsState == "ON"}
                onCheckedChange={(isChecked) => {
                  sendAlerts(isChecked ? "ON" : "OFF");
                }}
              />
              <div className="space-y-0.5">
                <Label htmlFor="alerts-enabled">
                  <Trans ns="views/settings">cameraReview.review.alerts</Trans>
                </Label>
              </div>
            </div>
            <div className="flex flex-col">
              <div className="flex flex-row items-center">
                <Switch
                  id="detections-enabled"
                  className="mr-3"
                  checked={detectionsState == "ON"}
                  onCheckedChange={(isChecked) => {
                    sendDetections(isChecked ? "ON" : "OFF");
                  }}
                />
                <div className="space-y-0.5">
                  <Label htmlFor="detections-enabled">
                    <Trans ns="views/settings">camera.review.detections</Trans>
                  </Label>
                </div>
              </div>
              <div className="mt-3 text-sm text-muted-foreground">
                <Trans ns="views/settings">cameraReview.review.desc</Trans>
              </div>
            </div>
          </div>
          {cameraConfig?.objects?.genai?.enabled_in_config && (
            <>
              <Separator className="my-2 flex bg-secondary" />

              <Heading as="h4" className="my-2">
                <Trans ns="views/settings">
                  cameraReview.object_descriptions.title
                </Trans>
              </Heading>

              <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
                <div className="flex flex-row items-center">
                  <Switch
                    id="alerts-enabled"
                    className="mr-3"
                    checked={objDescState == "ON"}
                    onCheckedChange={(isChecked) => {
                      sendObjDesc(isChecked ? "ON" : "OFF");
                    }}
                  />
                  <div className="space-y-0.5">
                    <Label htmlFor="genai-enabled">
                      <Trans>button.enabled</Trans>
                    </Label>
                  </div>
                </div>
                <div className="mt-3 text-sm text-muted-foreground">
                  <Trans ns="views/settings">
                    cameraReview.object_descriptions.desc
                  </Trans>
                </div>
              </div>
            </>
          )}

          {cameraConfig?.review?.genai?.enabled_in_config && (
            <>
              <Separator className="my-2 flex bg-secondary" />

              <Heading as="h4" className="my-2">
                <Trans ns="views/settings">
                  cameraReview.review_descriptions.title
                </Trans>
              </Heading>

              <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
                <div className="flex flex-row items-center">
                  <Switch
                    id="alerts-enabled"
                    className="mr-3"
                    checked={revDescState == "ON"}
                    onCheckedChange={(isChecked) => {
                      sendRevDesc(isChecked ? "ON" : "OFF");
                    }}
                  />
                  <div className="space-y-0.5">
                    <Label htmlFor="genai-enabled">
                      <Trans>button.enabled</Trans>
                    </Label>
                  </div>
                </div>
                <div className="mt-3 text-sm text-muted-foreground">
                  <Trans ns="views/settings">
                    cameraReview.review_descriptions.desc
                  </Trans>
                </div>
              </div>
            </>
          )}

          <Separator className="my-2 flex bg-secondary" />

          <Heading as="h4" className="my-2">
            <Trans ns="views/settings">
              cameraReview.reviewClassification.title
            </Trans>
          </Heading>

          <div className="max-w-6xl">
            <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
              <p>
                <Trans ns="views/settings">
                  cameraReview.reviewClassification.desc
                </Trans>
              </p>
              <div className="flex items-center text-primary">
                <Link
                  to={getLocaleDocUrl("configuration/review")}
                  target="_blank"
                  rel="noopener noreferrer"
                  className="inline"
                >
                  {t("readTheDocumentation", { ns: "common" })}
                  <LuExternalLink className="ml-2 inline-flex size-3" />
                </Link>
              </div>
            </div>
          </div>

          <Form {...form}>
            <form
              onSubmit={form.handleSubmit(onSubmit)}
              className="mt-2 space-y-6"
            >
              <div
                className={cn(
                  "w-full max-w-5xl space-y-0",
                  zones &&
                    zones?.length > 0 &&
                    "grid items-start gap-5 md:grid-cols-2",
                )}
              >
                <FormField
                  control={form.control}
                  name="alerts_zones"
                  render={() => (
                    <FormItem>
                      {zones && zones?.length > 0 ? (
                        <>
                          <div className="mb-2">
                            <FormLabel className="flex flex-row items-center text-base">
                              <Trans ns="views/settings">
                                camera.review.alerts
                              </Trans>
                              <MdCircle className="ml-3 size-2 text-severity_alert" />
                            </FormLabel>
                            <FormDescription>
                              <Trans ns="views/settings">
                                cameraReview.reviewClassification.selectAlertsZones
                              </Trans>
                            </FormDescription>
                          </div>
                          <div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
                            {zones?.map((zone) => (
                              <FormField
                                key={zone.name}
                                control={form.control}
                                name="alerts_zones"
                                render={({ field }) => (
                                  <FormItem
                                    key={zone.name}
                                    className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
                                  >
                                    <FormControl>
                                      <Checkbox
                                        className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                                        checked={field.value?.includes(
                                          zone.name,
                                        )}
                                        onCheckedChange={(checked) => {
                                          setChangedValue(true);
                                          return checked
                                            ? field.onChange([
                                                ...field.value,
                                                zone.name,
                                              ])
                                            : field.onChange(
                                                field.value?.filter(
                                                  (value) =>
                                                    value !== zone.name,
                                                ),
                                              );
                                        }}
                                      />
                                    </FormControl>
                                    <FormLabel
                                      className={cn(
                                        "font-normal",
                                        !zone.friendly_name &&
                                          "smart-capitalize",
                                      )}
                                    >
                                      {zone.friendly_name || zone.name}
                                    </FormLabel>
                                  </FormItem>
                                )}
                              />
                            ))}
                          </div>
                        </>
                      ) : (
                        <div className="font-normal text-destructive">
                          <Trans ns="views/settings">
                            cameraReview.reviewClassification.noDefinedZones
                          </Trans>
                        </div>
                      )}
                      <FormMessage />
                      <div className="text-sm">
                        {watchedAlertsZones && watchedAlertsZones.length > 0
                          ? t(
                              "cameraReview.reviewClassification.zoneObjectAlertsTips",
                              {
                                alertsLabels,
                                zone: formatList(
                                  watchedAlertsZones.map((zone) =>
                                    getZoneName(zone),
                                  ),
                                ),
                                cameraName: selectCameraName,
                              },
                            )
                          : t(
                              "cameraReview.reviewClassification.objectAlertsTips",
                              {
                                alertsLabels,
                                cameraName: selectCameraName,
                              },
                            )}
                      </div>
                    </FormItem>
                  )}
                />

                <FormField
                  control={form.control}
                  name="detections_zones"
                  render={() => (
                    <FormItem>
                      {zones && zones?.length > 0 && (
                        <>
                          <div className="mb-2">
                            <FormLabel className="flex flex-row items-center text-base">
                              <Trans ns="views/settings">
                                camera.review.detections
                              </Trans>
                              <MdCircle className="ml-3 size-2 text-severity_detection" />
                            </FormLabel>
                            {selectDetections && (
                              <FormDescription>
                                <Trans ns="views/settings">
                                  cameraReview.reviewClassification.selectDetectionsZones
                                </Trans>
                              </FormDescription>
                            )}
                          </div>

                          {selectDetections && (
                            <div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
                              {zones?.map((zone) => (
                                <FormField
                                  key={zone.name}
                                  control={form.control}
                                  name="detections_zones"
                                  render={({ field }) => (
                                    <FormItem
                                      key={zone.name}
                                      className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
                                    >
                                      <FormControl>
                                        <Checkbox
                                          className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                                          checked={field.value?.includes(
                                            zone.name,
                                          )}
                                          onCheckedChange={(checked) => {
                                            return checked
                                              ? field.onChange([
                                                  ...field.value,
                                                  zone.name,
                                                ])
                                              : field.onChange(
                                                  field.value?.filter(
                                                    (value) =>
                                                      value !== zone.name,
                                                  ),
                                                );
                                          }}
                                        />
                                      </FormControl>
                                      <FormLabel
                                        className={cn(
                                          "font-normal",
                                          !zone.friendly_name &&
                                            "smart-capitalize",
                                        )}
                                      >
                                        {zone.friendly_name || zone.name}
                                      </FormLabel>
                                    </FormItem>
                                  )}
                                />
                              ))}
                            </div>
                          )}
                          <FormMessage />

                          <div className="mb-0 flex flex-row items-center gap-2">
                            <Checkbox
                              id="select-detections"
                              className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                              checked={selectDetections}
                              onCheckedChange={handleCheckedChange}
                            />
                            <div className="grid gap-1.5 leading-none">
                              <label
                                htmlFor="select-detections"
                                className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70"
                              >
                                <Trans ns="views/settings">
                                  cameraReview.reviewClassification.limitDetections
                                </Trans>
                              </label>
                            </div>
                          </div>
                        </>
                      )}

                      <div className="text-sm">
                        {watchedDetectionsZones &&
                        watchedDetectionsZones.length > 0 ? (
                          !selectDetections ? (
                            <Trans
                              i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.text"
                              values={{
                                detectionsLabels,
                                zone: formatList(
                                  watchedDetectionsZones.map((zone) =>
                                    getZoneName(zone),
                                  ),
                                ),
                                cameraName: selectCameraName,
                              }}
                              ns="views/settings"
                            />
                          ) : (
                            <Trans
                              i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.notSelectDetections"
                              values={{
                                detectionsLabels,
                                zone: formatList(
                                  watchedDetectionsZones.map((zone) =>
                                    getZoneName(zone),
                                  ),
                                ),
                                cameraName: selectCameraName,
                              }}
                              ns="views/settings"
                            />
                          )
                        ) : (
                          <Trans
                            i18nKey="cameraReview.reviewClassification.objectDetectionsTips"
                            values={{
                              detectionsLabels,
                              cameraName: selectCameraName,
                            }}
                            ns="views/settings"
                          />
                        )}
                      </div>
                    </FormItem>
                  )}
                />
              </div>
              <Separator className="my-2 flex bg-secondary" />

              <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
                <Button
                  className="flex flex-1"
                  aria-label={t("button.reset", { ns: "common" })}
                  onClick={onCancel}
                  type="button"
                >
                  <Trans>button.reset</Trans>
                </Button>
                <Button
                  variant="select"
                  disabled={isLoading}
                  className="flex flex-1"
                  aria-label={t("button.save", { ns: "common" })}
                  type="submit"
                >
                  {isLoading ? (
                    <div className="flex flex-row items-center gap-2">
                      <ActivityIndicator />
                      <span>
                        <Trans>button.saving</Trans>
                      </span>
                    </div>
                  ) : (
                    <Trans>button.save</Trans>
                  )}
                </Button>
              </div>
            </form>
          </Form>
        </div>
      </div>
    </>
  );
}
@ -1,794 +0,0 @@
|
|||||||
import Heading from "@/components/ui/heading";
|
|
||||||
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
|
|
||||||
import { Toaster, toast } from "sonner";
|
|
||||||
import {
|
|
||||||
Form,
|
|
||||||
FormControl,
|
|
||||||
FormDescription,
|
|
||||||
FormField,
|
|
||||||
FormItem,
|
|
||||||
FormLabel,
|
|
||||||
FormMessage,
|
|
||||||
} from "@/components/ui/form";
|
|
||||||
import { zodResolver } from "@hookform/resolvers/zod";
|
|
||||||
import { useForm } from "react-hook-form";
|
|
||||||
import { z } from "zod";
|
|
||||||
import { Separator } from "@/components/ui/separator";
|
|
||||||
import { Button } from "@/components/ui/button";
|
|
||||||
import useSWR from "swr";
|
|
||||||
import { FrigateConfig } from "@/types/frigateConfig";
|
|
||||||
import { Checkbox } from "@/components/ui/checkbox";
|
|
||||||
import ActivityIndicator from "@/components/indicators/activity-indicator";
|
|
||||||
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
|
|
||||||
import axios from "axios";
|
|
||||||
import { Link } from "react-router-dom";
|
|
||||||
import { LuExternalLink } from "react-icons/lu";
|
|
||||||
import { MdCircle } from "react-icons/md";
|
|
||||||
import { cn } from "@/lib/utils";
|
|
||||||
import { Trans, useTranslation } from "react-i18next";
|
|
||||||
import { Switch } from "@/components/ui/switch";
|
|
||||||
import { Label } from "@/components/ui/label";
|
|
||||||
import { useDocDomain } from "@/hooks/use-doc-domain";
|
|
||||||
import { getTranslatedLabel } from "@/utils/i18n";
|
|
||||||
import {
|
|
||||||
useAlertsState,
|
|
||||||
useDetectionsState,
|
|
||||||
useObjectDescriptionState,
|
|
||||||
useReviewDescriptionState,
|
|
||||||
} from "@/api/ws";
|
|
||||||
import CameraEditForm from "@/components/settings/CameraEditForm";
|
|
||||||
import CameraWizardDialog from "@/components/settings/CameraWizardDialog";
|
|
||||||
import { IoMdArrowRoundBack } from "react-icons/io";
|
|
||||||
import { isDesktop } from "react-device-detect";
|
|
||||||
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
|
|
||||||
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
|
|
||||||
import { formatList } from "@/utils/stringUtil";
|
|
||||||
|
|
||||||
type CameraSettingsViewProps = {
|
|
||||||
selectedCamera: string;
|
|
||||||
setUnsavedChanges: React.Dispatch<React.SetStateAction<boolean>>;
|
|
||||||
};
|
|
||||||
|
|
||||||
type CameraReviewSettingsValueType = {
|
|
||||||
alerts_zones: string[];
|
|
||||||
detections_zones: string[];
|
|
||||||
};
|
|
||||||
|
|
||||||
export default function CameraSettingsView({
|
|
||||||
selectedCamera,
|
|
||||||
setUnsavedChanges,
|
|
||||||
}: CameraSettingsViewProps) {
|
|
||||||
const { t } = useTranslation(["views/settings"]);
|
|
||||||
const { getLocaleDocUrl } = useDocDomain();
|
|
||||||
|
|
||||||
const { data: config, mutate: updateConfig } =
|
|
||||||
useSWR<FrigateConfig>("config");
|
|
||||||
|
|
||||||
const cameraConfig = useMemo(() => {
|
|
||||||
if (config && selectedCamera) {
|
|
||||||
return config.cameras[selectedCamera];
|
|
||||||
}
|
|
||||||
}, [config, selectedCamera]);
|
|
||||||
|
|
||||||
const [changedValue, setChangedValue] = useState(false);
|
|
||||||
const [isLoading, setIsLoading] = useState(false);
|
|
||||||
const [selectDetections, setSelectDetections] = useState(false);
|
|
||||||
const [viewMode, setViewMode] = useState<"settings" | "add" | "edit">(
|
|
||||||
"settings",
|
|
||||||
); // Control view state
|
|
||||||
const [editCameraName, setEditCameraName] = useState<string | undefined>(
|
|
||||||
undefined,
|
|
||||||
); // Track camera being edited
|
|
||||||
const [showWizard, setShowWizard] = useState(false);
|
|
||||||
|
|
||||||
const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;
|
|
||||||
|
|
||||||
const selectCameraName = useCameraFriendlyName(selectedCamera);
|
|
||||||
|
|
||||||
// zones and labels
|
|
||||||
|
|
||||||
const getZoneName = useCallback(
|
|
||||||
(zoneId: string, cameraId?: string) =>
|
|
||||||
resolveZoneName(config, zoneId, cameraId),
|
|
||||||
[config],
|
|
||||||
);
|
|
||||||
|
|
||||||
const zones = useMemo(() => {
|
|
||||||
if (cameraConfig) {
|
|
||||||
return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
|
|
||||||
camera: cameraConfig.name,
|
|
||||||
name,
|
|
||||||
friendly_name: cameraConfig.zones[name].friendly_name,
|
|
||||||
objects: zoneData.objects,
|
|
||||||
color: zoneData.color,
|
|
||||||
}));
|
|
||||||
}
|
|
||||||
}, [cameraConfig]);
|
|
||||||
|
|
||||||
const alertsLabels = useMemo(() => {
|
|
||||||
return cameraConfig?.review.alerts.labels
|
|
||||||
? formatList(
|
|
||||||
cameraConfig.review.alerts.labels.map((label) =>
|
|
||||||
getTranslatedLabel(
|
|
||||||
label,
|
|
||||||
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
|
|
||||||
),
|
|
||||||
),
|
|
||||||
)
|
|
||||||
: "";
|
|
||||||
}, [cameraConfig]);
|
|
||||||
|
|
||||||
const detectionsLabels = useMemo(() => {
|
|
||||||
return cameraConfig?.review.detections.labels
|
|
||||||
? formatList(
|
|
||||||
cameraConfig.review.detections.labels.map((label) =>
|
|
||||||
getTranslatedLabel(
|
|
||||||
label,
|
|
||||||
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
|
|
||||||
),
|
|
||||||
),
|
|
||||||
)
|
|
||||||
: "";
|
|
||||||
}, [cameraConfig]);
|
|
||||||
|
|
||||||
// form
|
|
||||||
|
|
||||||
const formSchema = z.object({
|
|
||||||
alerts_zones: z.array(z.string()),
|
|
||||||
detections_zones: z.array(z.string()),
|
|
||||||
});
|
|
||||||
|
|
||||||
const form = useForm<z.infer<typeof formSchema>>({
|
|
||||||
resolver: zodResolver(formSchema),
|
|
||||||
mode: "onChange",
|
|
||||||
defaultValues: {
|
|
||||||
alerts_zones: cameraConfig?.review.alerts.required_zones || [],
|
|
||||||
detections_zones: cameraConfig?.review.detections.required_zones || [],
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const watchedAlertsZones = form.watch("alerts_zones");
|
|
||||||
const watchedDetectionsZones = form.watch("detections_zones");
|
|
||||||
|
|
||||||
const { payload: alertsState, send: sendAlerts } =
|
|
||||||
useAlertsState(selectedCamera);
|
|
||||||
const { payload: detectionsState, send: sendDetections } =
|
|
||||||
useDetectionsState(selectedCamera);
|
|
||||||
|
|
||||||
const { payload: objDescState, send: sendObjDesc } =
|
|
||||||
useObjectDescriptionState(selectedCamera);
|
|
||||||
const { payload: revDescState, send: sendRevDesc } =
|
|
||||||
useReviewDescriptionState(selectedCamera);
|
|
||||||
|
|
||||||
const handleCheckedChange = useCallback(
|
|
||||||
(isChecked: boolean) => {
|
|
||||||
if (!isChecked) {
|
|
||||||
form.reset({
|
|
||||||
alerts_zones: watchedAlertsZones,
|
|
||||||
detections_zones: [],
|
|
||||||
});
|
|
||||||
}
|
|
||||||
setChangedValue(true);
|
|
||||||
setSelectDetections(isChecked as boolean);
|
|
||||||
},
|
|
||||||
// we know that these deps are correct
|
|
||||||
// eslint-disable-next-line react-hooks/exhaustive-deps
|
|
||||||
[watchedAlertsZones],
|
|
||||||
);
|
|
||||||
|
|
||||||
  const saveToConfig = useCallback(
    async (
      { alerts_zones, detections_zones }: CameraReviewSettingsValueType, // values submitted via the form
    ) => {
      const createQuery = (zones: string[], type: "alerts" | "detections") =>
        zones.length
          ? zones
              .map(
                (zone) =>
                  `&cameras.${selectedCamera}.review.${type}.required_zones=${zone}`,
              )
              .join("")
          : cameraConfig?.review[type]?.required_zones &&
              cameraConfig?.review[type]?.required_zones.length > 0
            ? `&cameras.${selectedCamera}.review.${type}.required_zones`
            : "";

      const alertQueries = createQuery(alerts_zones, "alerts");
      const detectionQueries = createQuery(detections_zones, "detections");

      axios
        .put(`config/set?${alertQueries}${detectionQueries}`, {
          requires_restart: 0,
        })
        .then((res) => {
          if (res.status === 200) {
            toast.success(
              t("cameraReview.reviewClassification.toast.success"),
              {
                position: "top-center",
              },
            );
            updateConfig();
          } else {
            toast.error(
              t("toast.save.error.title", {
                errorMessage: res.statusText,
                ns: "common",
              }),
              {
                position: "top-center",
              },
            );
          }
        })
        .catch((error) => {
          const errorMessage =
            error.response?.data?.message ||
            error.response?.data?.detail ||
            "Unknown error";
          toast.error(
            t("toast.save.error.title", {
              errorMessage,
              ns: "common",
            }),
            {
              position: "top-center",
            },
          );
        })
        .finally(() => {
          setIsLoading(false);
        });
    },
    [updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
  );
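  // Sketch of the query string createQuery produces (camera and zone names
  // below are hypothetical). With selectedCamera = "front" and
  // alerts_zones = ["porch", "yard"], the alerts portion is:
  //   &cameras.front.review.alerts.required_zones=porch&cameras.front.review.alerts.required_zones=yard
  // An empty zones array while zones were previously required emits the bare
  // key with no value, which clears required_zones for that type:
  //   &cameras.front.review.detections.required_zones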
  const onCancel = useCallback(() => {
    if (!cameraConfig) {
      return;
    }

    setChangedValue(false);
    setUnsavedChanges(false);
    removeMessage(
      "camera_settings",
      `review_classification_settings_${selectedCamera}`,
    );
    form.reset({
      alerts_zones: cameraConfig?.review.alerts.required_zones ?? [],
      detections_zones: cameraConfig?.review.detections.required_zones || [],
    });
    setSelectDetections(
      !!cameraConfig?.review.detections.required_zones?.length,
    );
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [removeMessage, selectedCamera, setUnsavedChanges, cameraConfig]);

  useEffect(() => {
    onCancel();
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [selectedCamera]);
  useEffect(() => {
    if (changedValue) {
      addMessage(
        "camera_settings",
        t("cameraReview.reviewClassification.unsavedChanges", {
          camera: selectedCamera,
        }),
        undefined,
        `review_classification_settings_${selectedCamera}`,
      );
    } else {
      removeMessage(
        "camera_settings",
        `review_classification_settings_${selectedCamera}`,
      );
    }
    // we know that these deps are correct
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [changedValue, selectedCamera]);

  function onSubmit(values: z.infer<typeof formSchema>) {
    setIsLoading(true);

    saveToConfig(values as CameraReviewSettingsValueType);
  }

  useEffect(() => {
    document.title = t("documentTitle.cameraReview");
  }, [t]);
  // Handle back navigation from add/edit form
  const handleBack = useCallback(() => {
    setViewMode("settings");
    setEditCameraName(undefined);
    updateConfig();
  }, [updateConfig]);

  if (!cameraConfig && !selectedCamera && viewMode === "settings") {
    return <ActivityIndicator />;
  }
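  // Render: the "settings" view shows the review toggles plus the zone
  // classification form; the "add"/"edit" view swaps in CameraEditForm behind
  // a back button (see handleBack above).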

  return (
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          {viewMode === "settings" ? (
            <>
              <Heading as="h4" className="mb-2">
                {t("cameraReview.title")}
              </Heading>

              <Heading as="h4" className="my-2">
                <Trans ns="views/settings">cameraReview.review.title</Trans>
              </Heading>

              <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
                <div className="flex flex-row items-center">
                  <Switch
                    id="alerts-enabled"
                    className="mr-3"
                    checked={alertsState == "ON"}
                    onCheckedChange={(isChecked) => {
                      sendAlerts(isChecked ? "ON" : "OFF");
                    }}
                  />
                  <div className="space-y-0.5">
                    <Label htmlFor="alerts-enabled">
                      <Trans ns="views/settings">
                        cameraReview.review.alerts
                      </Trans>
                    </Label>
                  </div>
                </div>
                <div className="flex flex-col">
                  <div className="flex flex-row items-center">
                    <Switch
                      id="detections-enabled"
                      className="mr-3"
                      checked={detectionsState == "ON"}
                      onCheckedChange={(isChecked) => {
                        sendDetections(isChecked ? "ON" : "OFF");
                      }}
                    />
                    <div className="space-y-0.5">
                      <Label htmlFor="detections-enabled">
                        <Trans ns="views/settings">
                          camera.review.detections
                        </Trans>
                      </Label>
                    </div>
                  </div>
                  <div className="mt-3 text-sm text-muted-foreground">
                    <Trans ns="views/settings">cameraReview.review.desc</Trans>
                  </div>
                </div>
              </div>
              {cameraConfig?.objects?.genai?.enabled_in_config && (
                <>
                  <Separator className="my-2 flex bg-secondary" />

                  <Heading as="h4" className="my-2">
                    <Trans ns="views/settings">
                      cameraReview.object_descriptions.title
                    </Trans>
                  </Heading>

                  <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
                    <div className="flex flex-row items-center">
                      <Switch
                        id="object-descriptions-enabled"
                        className="mr-3"
                        checked={objDescState == "ON"}
                        onCheckedChange={(isChecked) => {
                          sendObjDesc(isChecked ? "ON" : "OFF");
                        }}
                      />
                      <div className="space-y-0.5">
                        <Label htmlFor="object-descriptions-enabled">
                          <Trans>button.enabled</Trans>
                        </Label>
                      </div>
                    </div>
                    <div className="mt-3 text-sm text-muted-foreground">
                      <Trans ns="views/settings">
                        cameraReview.object_descriptions.desc
                      </Trans>
                    </div>
                  </div>
                </>
              )}

              {cameraConfig?.review?.genai?.enabled_in_config && (
                <>
                  <Separator className="my-2 flex bg-secondary" />

                  <Heading as="h4" className="my-2">
                    <Trans ns="views/settings">
                      cameraReview.review_descriptions.title
                    </Trans>
                  </Heading>

                  <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
                    <div className="flex flex-row items-center">
                      <Switch
                        id="review-descriptions-enabled"
                        className="mr-3"
                        checked={revDescState == "ON"}
                        onCheckedChange={(isChecked) => {
                          sendRevDesc(isChecked ? "ON" : "OFF");
                        }}
                      />
                      <div className="space-y-0.5">
                        <Label htmlFor="review-descriptions-enabled">
                          <Trans>button.enabled</Trans>
                        </Label>
                      </div>
                    </div>
                    <div className="mt-3 text-sm text-muted-foreground">
                      <Trans ns="views/settings">
                        cameraReview.review_descriptions.desc
                      </Trans>
                    </div>
                  </div>
                </>
              )}

              <Separator className="my-2 flex bg-secondary" />

              <Heading as="h4" className="my-2">
                <Trans ns="views/settings">
                  cameraReview.reviewClassification.title
                </Trans>
              </Heading>

              <div className="max-w-6xl">
                <div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
                  <p>
                    <Trans ns="views/settings">
                      cameraReview.reviewClassification.desc
                    </Trans>
                  </p>
                  <div className="flex items-center text-primary">
                    <Link
                      to={getLocaleDocUrl("configuration/review")}
                      target="_blank"
                      rel="noopener noreferrer"
                      className="inline"
                    >
                      {t("readTheDocumentation", { ns: "common" })}
                      <LuExternalLink className="ml-2 inline-flex size-3" />
                    </Link>
                  </div>
                </div>
              </div>

              <Form {...form}>
                <form
                  onSubmit={form.handleSubmit(onSubmit)}
                  className="mt-2 space-y-6"
                >
                  <div
                    className={cn(
                      "w-full max-w-5xl space-y-0",
                      zones &&
                        zones?.length > 0 &&
                        "grid items-start gap-5 md:grid-cols-2",
                    )}
                  >
                    <FormField
                      control={form.control}
                      name="alerts_zones"
                      render={() => (
                        <FormItem>
                          {zones && zones?.length > 0 ? (
                            <>
                              <div className="mb-2">
                                <FormLabel className="flex flex-row items-center text-base">
                                  <Trans ns="views/settings">
                                    camera.review.alerts
                                  </Trans>
                                  <MdCircle className="ml-3 size-2 text-severity_alert" />
                                </FormLabel>
                                <FormDescription>
                                  <Trans ns="views/settings">
                                    cameraReview.reviewClassification.selectAlertsZones
                                  </Trans>
                                </FormDescription>
                              </div>
                              <div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
                                {zones?.map((zone) => (
                                  <FormField
                                    key={zone.name}
                                    control={form.control}
                                    name="alerts_zones"
                                    render={({ field }) => (
                                      <FormItem
                                        key={zone.name}
                                        className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
                                      >
                                        <FormControl>
                                          <Checkbox
                                            className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                                            checked={field.value?.includes(
                                              zone.name,
                                            )}
                                            onCheckedChange={(checked) => {
                                              setChangedValue(true);
                                              return checked
                                                ? field.onChange([
                                                    ...field.value,
                                                    zone.name,
                                                  ])
                                                : field.onChange(
                                                    field.value?.filter(
                                                      (value) =>
                                                        value !== zone.name,
                                                    ),
                                                  );
                                            }}
                                          />
                                        </FormControl>
                                        <FormLabel
                                          className={cn(
                                            "font-normal",
                                            !zone.friendly_name &&
                                              "smart-capitalize",
                                          )}
                                        >
                                          {zone.friendly_name || zone.name}
                                        </FormLabel>
                                      </FormItem>
                                    )}
                                  />
                                ))}
                              </div>
                            </>
                          ) : (
                            <div className="font-normal text-destructive">
                              <Trans ns="views/settings">
                                cameraReview.reviewClassification.noDefinedZones
                              </Trans>
                            </div>
                          )}
                          <FormMessage />
                          <div className="text-sm">
                            {watchedAlertsZones && watchedAlertsZones.length > 0
                              ? t(
                                  "cameraReview.reviewClassification.zoneObjectAlertsTips",
                                  {
                                    alertsLabels,
                                    zone: formatList(
                                      watchedAlertsZones.map((zone) =>
                                        getZoneName(zone),
                                      ),
                                    ),
                                    cameraName: selectCameraName,
                                  },
                                )
                              : t(
                                  "cameraReview.reviewClassification.objectAlertsTips",
                                  {
                                    alertsLabels,
                                    cameraName: selectCameraName,
                                  },
                                )}
                          </div>
                        </FormItem>
                      )}
                    />

                    <FormField
                      control={form.control}
                      name="detections_zones"
                      render={() => (
                        <FormItem>
                          {zones && zones?.length > 0 && (
                            <>
                              <div className="mb-2">
                                <FormLabel className="flex flex-row items-center text-base">
                                  <Trans ns="views/settings">
                                    camera.review.detections
                                  </Trans>
                                  <MdCircle className="ml-3 size-2 text-severity_detection" />
                                </FormLabel>
                                {selectDetections && (
                                  <FormDescription>
                                    <Trans ns="views/settings">
                                      cameraReview.reviewClassification.selectDetectionsZones
                                    </Trans>
                                  </FormDescription>
                                )}
                              </div>

                              {selectDetections && (
                                <div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
                                  {zones?.map((zone) => (
                                    <FormField
                                      key={zone.name}
                                      control={form.control}
                                      name="detections_zones"
                                      render={({ field }) => (
                                        <FormItem
                                          key={zone.name}
                                          className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
                                        >
                                          <FormControl>
                                            <Checkbox
                                              className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                                              checked={field.value?.includes(
                                                zone.name,
                                              )}
                                              onCheckedChange={(checked) => {
                                                return checked
                                                  ? field.onChange([
                                                      ...field.value,
                                                      zone.name,
                                                    ])
                                                  : field.onChange(
                                                      field.value?.filter(
                                                        (value) =>
                                                          value !== zone.name,
                                                      ),
                                                    );
                                              }}
                                            />
                                          </FormControl>
                                          <FormLabel
                                            className={cn(
                                              "font-normal",
                                              !zone.friendly_name &&
                                                "smart-capitalize",
                                            )}
                                          >
                                            {zone.friendly_name || zone.name}
                                          </FormLabel>
                                        </FormItem>
                                      )}
                                    />
                                  ))}
                                </div>
                              )}
                              <FormMessage />

                              <div className="mb-0 flex flex-row items-center gap-2">
                                <Checkbox
                                  id="select-detections"
                                  className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
                                  checked={selectDetections}
                                  onCheckedChange={handleCheckedChange}
                                />
                                <div className="grid gap-1.5 leading-none">
                                  <label
                                    htmlFor="select-detections"
                                    className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70"
                                  >
                                    <Trans ns="views/settings">
                                      cameraReview.reviewClassification.limitDetections
                                    </Trans>
                                  </label>
                                </div>
                              </div>
                            </>
                          )}

                          <div className="text-sm">
                            {watchedDetectionsZones &&
                            watchedDetectionsZones.length > 0 ? (
                              !selectDetections ? (
                                <Trans
                                  i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.text"
                                  values={{
                                    detectionsLabels,
                                    zone: formatList(
                                      watchedDetectionsZones.map((zone) =>
                                        getZoneName(zone),
                                      ),
                                    ),
                                    cameraName: selectCameraName,
                                  }}
                                  ns="views/settings"
                                />
                              ) : (
                                <Trans
                                  i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.notSelectDetections"
                                  values={{
                                    detectionsLabels,
                                    zone: formatList(
                                      watchedDetectionsZones.map((zone) =>
                                        getZoneName(zone),
                                      ),
                                    ),
                                    cameraName: selectCameraName,
                                  }}
                                  ns="views/settings"
                                />
                              )
                            ) : (
                              <Trans
                                i18nKey="cameraReview.reviewClassification.objectDetectionsTips"
                                values={{
                                  detectionsLabels,
                                  cameraName: selectCameraName,
                                }}
                                ns="views/settings"
                              />
                            )}
                          </div>
                        </FormItem>
                      )}
                    />
                  </div>
                  <Separator className="my-2 flex bg-secondary" />

                  <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
                    <Button
                      className="flex flex-1"
                      aria-label={t("button.reset", { ns: "common" })}
                      onClick={onCancel}
                      type="button"
                    >
                      <Trans>button.reset</Trans>
                    </Button>
                    <Button
                      variant="select"
                      disabled={isLoading}
                      className="flex flex-1"
                      aria-label={t("button.save", { ns: "common" })}
                      type="submit"
                    >
                      {isLoading ? (
                        <div className="flex flex-row items-center gap-2">
                          <ActivityIndicator />
                          <span>
                            <Trans>button.saving</Trans>
                          </span>
                        </div>
                      ) : (
                        <Trans>button.save</Trans>
                      )}
                    </Button>
                  </div>
                </form>
              </Form>
            </>
          ) : (
            <>
              <div className="mb-4 flex items-center gap-2">
                <Button
                  className={`flex items-center gap-2.5 rounded-lg`}
                  aria-label={t("label.back", { ns: "common" })}
                  size="sm"
                  onClick={handleBack}
                >
                  <IoMdArrowRoundBack className="size-5 text-secondary-foreground" />
                  {isDesktop && (
                    <div className="text-primary">
                      {t("button.back", { ns: "common" })}
                    </div>
                  )}
                </Button>
              </div>
              <div className="md:max-w-5xl">
                <CameraEditForm
                  cameraName={viewMode === "edit" ? editCameraName : undefined}
                  onSave={handleBack}
                  onCancel={handleBack}
                />
              </div>
            </>
          )}
        </div>
      </div>

      <CameraWizardDialog
        open={showWizard}
        onClose={() => setShowWizard(false)}
      />
    </>
  );
}

@@ -198,15 +198,20 @@ export default function TriggerView({

     return axios
       .put("config/set", configBody)
-      .then((configResponse) => {
+      .then(async (configResponse) => {
         if (configResponse.status === 200) {
-          updateConfig();
+          await updateConfig();
+          const displayName =
+            friendly_name && friendly_name !== ""
+              ? `${friendly_name} (${name})`
+              : name;
+
           toast.success(
             t(
               isEdit
                 ? "triggers.toast.success.updateTrigger"
                 : "triggers.toast.success.createTrigger",
-              { name },
+              { name: displayName },
             ),
             { position: "top-center" },
           );
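With this change, the create/update toast labels a trigger as `friendly_name (name)` whenever a friendly name is set. For example (values illustrative), a trigger named `pkg_trigger` with the friendly name `Package drop-off` shows as `Package drop-off (pkg_trigger)`; without a friendly name the raw `name` is used unchanged. Awaiting `updateConfig()` also ensures the refreshed config is in place before the toast reports success.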
@@ -348,11 +353,22 @@ export default function TriggerView({

     return axios
       .put("config/set", configBody)
-      .then((configResponse) => {
+      .then(async (configResponse) => {
         if (configResponse.status === 200) {
-          updateConfig();
+          await updateConfig();
+          const friendly =
+            config?.cameras?.[selectedCamera]?.semantic_search
+              ?.triggers?.[name]?.friendly_name;
+
+          const displayName =
+            friendly && friendly !== ""
+              ? `${friendly} (${name})`
+              : name;
+
           toast.success(
-            t("triggers.toast.success.deleteTrigger", { name }),
+            t("triggers.toast.success.deleteTrigger", {
+              name: displayName,
+            }),
             {
               position: "top-center",
             },
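The delete path has no form state to read, so the friendly name is looked up from `config.cameras[selectedCamera].semantic_search.triggers[name].friendly_name` and formatted the same way; that config read is also why `config` joins the callback's dependency array in the next hunk.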
@@ -381,7 +397,7 @@ export default function TriggerView({
           setIsLoading(false);
         });
     },
-    [t, updateConfig, selectedCamera, setUnsavedChanges],
+    [t, updateConfig, selectedCamera, setUnsavedChanges, config],
   );

   useEffect(() => {
@@ -843,7 +859,14 @@ export default function TriggerView({
         />
         <DeleteTriggerDialog
           show={showDelete}
-          triggerName={selectedTrigger?.name ?? ""}
+          triggerName={
+            selectedTrigger
+              ? selectedTrigger.friendly_name &&
+                selectedTrigger.friendly_name !== ""
+                ? `${selectedTrigger.friendly_name} (${selectedTrigger.name})`
+                : selectedTrigger.name
+              : ""
+          }
           isLoading={isLoading}
           onCancel={() => {
             setShowDelete(false);
@@ -67,13 +67,14 @@ export default function EnrichmentMetrics({

   // features stats

-  const embeddingInferenceTimeSeries = useMemo(() => {
+  const groupedEnrichmentMetrics = useMemo(() => {
     if (!statsHistory) {
       return [];
     }

     const series: {
       [key: string]: {
+        rawKey: string;
         name: string;
         metrics: Threshold;
         data: { x: number; y: number }[];
@@ -90,6 +91,7 @@ export default function EnrichmentMetrics({

       if (!(key in series)) {
         series[key] = {
+          rawKey,
           name: t("enrichments.embeddings." + rawKey),
           metrics: getThreshold(rawKey),
           data: [],
@@ -99,7 +101,57 @@ export default function EnrichmentMetrics({
         series[key].data.push({ x: statsIdx + 1, y: stat });
       });
     });
-    return Object.values(series);
+
+    // Group series by category (extract base name from raw key)
+    const grouped: {
+      [category: string]: {
+        categoryName: string;
+        speedSeries?: {
+          name: string;
+          metrics: Threshold;
+          data: { x: number; y: number }[];
+        };
+        eventsSeries?: {
+          name: string;
+          metrics: Threshold;
+          data: { x: number; y: number }[];
+        };
+      };
+    } = {};
+
+    Object.values(series).forEach((s) => {
+      // Extract base category name from raw key
+      // All metrics follow the pattern: {base}_speed and {base}_events_per_second
+      let categoryKey = s.rawKey;
+      let isSpeed = false;
+
+      if (s.rawKey.endsWith("_speed")) {
+        categoryKey = s.rawKey.replace("_speed", "");
+        isSpeed = true;
+      } else if (s.rawKey.endsWith("_events_per_second")) {
+        categoryKey = s.rawKey.replace("_events_per_second", "");
+        isSpeed = false;
+      }
+
+      // Get translated category name
+      const categoryName = t("enrichments.embeddings." + categoryKey);
+
+      if (!(categoryKey in grouped)) {
+        grouped[categoryKey] = {
+          categoryName,
+          speedSeries: undefined,
+          eventsSeries: undefined,
+        };
+      }
+
+      if (isSpeed) {
+        grouped[categoryKey].speedSeries = s;
+      } else {
+        grouped[categoryKey].eventsSeries = s;
+      }
+    });
+
+    return Object.values(grouped);
   }, [statsHistory, t, getThreshold]);

   return (
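Concretely, paired metric keys collapse into one category per enrichment (the key names below are illustrative):

    "face_speed"              -> category "face", speedSeries
    "face_events_per_second"  -> category "face", eventsSeries

so each category now renders a single card holding both the inference-speed graph and the events-per-second graph, as the render hunk below shows.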
@@ -110,35 +162,42 @@ export default function EnrichmentMetrics({
         </div>
         <div
           className={cn(
-            "mt-4 grid w-full grid-cols-1 gap-2 sm:grid-cols-3",
-            embeddingInferenceTimeSeries && "sm:grid-cols-4",
+            "mt-4 grid w-full grid-cols-1 gap-2 sm:grid-cols-2 md:grid-cols-4",
           )}
         >
           {statsHistory.length != 0 ? (
             <>
-              {embeddingInferenceTimeSeries.map((series) => (
-                <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
-                  <div className="mb-5 smart-capitalize">{series.name}</div>
-                  {series.name.endsWith("Speed") ? (
-                    <ThresholdBarGraph
-                      key={series.name}
-                      graphId={`${series.name}-inference`}
-                      name={series.name}
-                      unit="ms"
-                      threshold={series.metrics}
-                      updateTimes={updateTimes}
-                      data={[series]}
-                    />
-                  ) : (
-                    <EventsPerSecondsLineGraph
-                      key={series.name}
-                      graphId={`${series.name}-fps`}
-                      unit=""
-                      name={t("enrichments.infPerSecond")}
-                      updateTimes={updateTimes}
-                      data={[series]}
-                    />
-                  )}
+              {groupedEnrichmentMetrics.map((group) => (
+                <div
+                  key={group.categoryName}
+                  className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl"
+                >
+                  <div className="mb-5 smart-capitalize">
+                    {group.categoryName}
+                  </div>
+                  <div className="space-y-4">
+                    {group.speedSeries && (
+                      <ThresholdBarGraph
+                        key={`${group.categoryName}-speed`}
+                        graphId={`${group.categoryName}-inference`}
+                        name={t("enrichments.averageInf")}
+                        unit="ms"
+                        threshold={group.speedSeries.metrics}
+                        updateTimes={updateTimes}
+                        data={[group.speedSeries]}
+                      />
+                    )}
+                    {group.eventsSeries && (
+                      <EventsPerSecondsLineGraph
+                        key={`${group.categoryName}-events`}
+                        graphId={`${group.categoryName}-fps`}
+                        unit=""
+                        name={t("enrichments.infPerSecond")}
+                        updateTimes={updateTimes}
+                        data={[group.eventsSeries]}
+                      />
+                    )}
+                  </div>
                 </div>
               ))}
             </>
@@ -375,6 +375,50 @@ export default function GeneralMetrics({
     return Object.keys(series).length > 0 ? Object.values(series) : undefined;
   }, [statsHistory]);

+  // Check if Intel GPU has all 0% usage values (known bug)
+  const showIntelGpuWarning = useMemo(() => {
+    if (!statsHistory || statsHistory.length < 3) {
+      return false;
+    }
+
+    const gpuKeys = Object.keys(statsHistory[0]?.gpu_usages ?? {});
+    const hasIntelGpu = gpuKeys.some(
+      (key) => key === "intel-vaapi" || key === "intel-qsv",
+    );
+
+    if (!hasIntelGpu) {
+      return false;
+    }
+
+    // Check if all GPU usage values are 0% across all stats
+    let allZero = true;
+    let hasDataPoints = false;
+
+    for (const stats of statsHistory) {
+      if (!stats) {
+        continue;
+      }
+
+      Object.entries(stats.gpu_usages || {}).forEach(([key, gpuStats]) => {
+        if (key === "intel-vaapi" || key === "intel-qsv") {
+          if (gpuStats.gpu) {
+            hasDataPoints = true;
+            const gpuValue = parseFloat(gpuStats.gpu.slice(0, -1));
+            if (!isNaN(gpuValue) && gpuValue > 0) {
+              allZero = false;
+            }
+          }
+        }
+      });
+
+      if (!allZero) {
+        break;
+      }
+    }
+
+    return hasDataPoints && allZero;
+  }, [statsHistory]);
+
   // npu stats

   const npuSeries = useMemo(() => {
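In the memo above, `gpuStats.gpu` evidently arrives as a percentage string (e.g. `"0.0%"`), so `slice(0, -1)` drops the trailing `%` before `parseFloat`. The warning only fires once at least three stats samples exist, an `intel-vaapi` or `intel-qsv` entry is present, and every one of its readings parsed to exactly 0.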
@@ -639,8 +683,46 @@ export default function GeneralMetrics({
       <>
         {statsHistory.length != 0 ? (
           <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
-            <div className="mb-5">
+            <div className="mb-5 flex flex-row items-center justify-between">
               {t("general.hardwareInfo.gpuUsage")}
+              {showIntelGpuWarning && (
+                <Popover>
+                  <PopoverTrigger asChild>
+                    <button
+                      className="flex flex-row items-center gap-1.5 text-yellow-600 focus:outline-none dark:text-yellow-500"
+                      aria-label={t(
+                        "general.hardwareInfo.intelGpuWarning.title",
+                      )}
+                    >
+                      <CiCircleAlert
+                        className="size-5"
+                        aria-label={t(
+                          "general.hardwareInfo.intelGpuWarning.title",
+                        )}
+                      />
+                      <span className="text-sm">
+                        {t(
+                          "general.hardwareInfo.intelGpuWarning.message",
+                        )}
+                      </span>
+                    </button>
+                  </PopoverTrigger>
+                  <PopoverContent className="w-80">
+                    <div className="space-y-2">
+                      <div className="font-semibold">
+                        {t(
+                          "general.hardwareInfo.intelGpuWarning.title",
+                        )}
+                      </div>
+                      <div>
+                        {t(
+                          "general.hardwareInfo.intelGpuWarning.description",
+                        )}
+                      </div>
+                    </div>
+                  </PopoverContent>
+                </Popover>
+              )}
             </div>
             {gpuSeries.map((series) => (
               <ThresholdBarGraph
@@ -729,33 +811,32 @@ export default function GeneralMetrics({
             ) : (
               <Skeleton className="aspect-video w-full" />
             )}
-          </>
-        )}
-        {statsHistory[0]?.npu_usages && (
-          <div
-            className={cn("mt-4 grid grid-cols-1 gap-2 sm:grid-cols-2")}
-          >
-            {statsHistory.length != 0 ? (
-              <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
-                <div className="mb-5">
-                  {t("general.hardwareInfo.npuUsage")}
-                </div>
-                {npuSeries.map((series) => (
-                  <ThresholdBarGraph
-                    key={series.name}
-                    graphId={`${series.name}-npu`}
-                    name={series.name}
-                    unit="%"
-                    threshold={GPUUsageThreshold}
-                    updateTimes={updateTimes}
-                    data={[series]}
-                  />
-                ))}
-              </div>
-            ) : (
-              <Skeleton className="aspect-video w-full" />
-            )}
-          </div>
-        )}
+
+            {statsHistory[0]?.npu_usages && (
+              <>
+                {statsHistory.length != 0 ? (
+                  <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
+                    <div className="mb-5">
+                      {t("general.hardwareInfo.npuUsage")}
+                    </div>
+                    {npuSeries.map((series) => (
+                      <ThresholdBarGraph
+                        key={series.name}
+                        graphId={`${series.name}-npu`}
+                        name={series.name}
+                        unit="%"
+                        threshold={GPUUsageThreshold}
+                        updateTimes={updateTimes}
+                        data={[series]}
+                      />
+                    ))}
+                  </div>
+                ) : (
+                  <Skeleton className="aspect-video w-full" />
+                )}
+              </>
+            )}
+          </>
+        )}
       </div>
     </>
@@ -72,8 +72,7 @@ export default function StorageMetrics({
   const earliestDate = useMemo(() => {
     const keys = Object.keys(recordingsSummary || {});
     return keys.length
-      ? new TZDate(keys[keys.length - 1] + "T00:00:00", timezone).getTime() /
-          1000
+      ? new TZDate(keys[0] + "T00:00:00", timezone).getTime() / 1000
       : null;
   }, [recordingsSummary, timezone]);
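Assuming the recordings summary's date keys run oldest-first, `keys[0]` is the earliest recorded day, so the storage view's earliest-date timestamp now points at the oldest recordings instead of the newest. For example (dates illustrative), with keys `["2025-01-01", "2025-01-02"]` the memo resolves to midnight of `2025-01-01` in the configured timezone.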