Compare commits
1 commit
7b591b255e...6450930116

| Author | SHA1 | Date |
| --- | --- | --- |
|  | 6450930116 |  |
LICENSE (2 changes)

@@ -1,6 +1,6 @@
 The MIT License

-Copyright (c) 2025 Frigate LLC (Frigate™)
+Copyright (c) 2020 Blake Blackshear

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
README.md (19 changes)

@@ -1,10 +1,8 @@
 <p align="center">
-  <img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
+  <img align="center" alt="logo" src="docs/static/img/frigate.png">
 </p>

-# Frigate NVR™ - Realtime Object Detection for IP Cameras
-
-[](https://opensource.org/licenses/MIT)
+# Frigate - NVR With Realtime Object Detection for IP Cameras

 <a href="https://hosted.weblate.org/engage/frigate-nvr/">
   <img src="https://hosted.weblate.org/widget/frigate-nvr/language-badge.svg" alt="Translation status" />
@@ -35,15 +33,6 @@ View the documentation at https://docs.frigate.video
 If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).

-## License
-
-This project is licensed under the **MIT License**.
-
-- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
-- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
-
-Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
-
 ## Screenshots

 ### Live dashboard
@@ -77,7 +66,3 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la
 <a href="https://hosted.weblate.org/engage/frigate-nvr/">
   <img src="https://hosted.weblate.org/widget/frigate-nvr/multi-auto.svg" alt="Translation status" />
 </a>
-
----
-
-**Copyright © 2025 Frigate LLC.**
TRADEMARK.md (58 changes, file deleted)

@@ -1,58 +0,0 @@
-# Trademark Policy
-
-**Last Updated:** November 2025
-
-This document outlines the policy regarding the use of the trademarks associated with the Frigate NVR project.
-
-## 1. Our Trademarks
-
-The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:
-
-- **Frigate™**
-- **Frigate NVR™**
-- **Frigate+™**
-- **The Frigate Logo**
-
-**Note on Common Law Rights:**
-Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
-
-## 2. Interaction with the MIT License
-
-The software in this repository is licensed under the [MIT License](LICENSE).
-
-**Crucial Distinction:**
-
-- The **Code** is free to use, modify, and distribute under the MIT terms.
-- The **Brand (Trademarks)** is **NOT** licensed under MIT.
-
-You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.
-
-## 3. Acceptable Use
-
-You may use the Marks without prior written permission in the following specific contexts:
-
-- **Referential Use:** To truthfully refer to the software (e.g., _"I use Frigate NVR for my home security"_).
-- **Compatibility:** To indicate that your product or project works with the software (e.g., _"MyPlugin for Frigate NVR"_ or _"Compatible with Frigate"_).
-- **Commentary:** In news articles, blog posts, or tutorials discussing the software.
-
-## 4. Prohibited Use
-
-You may **NOT** use the Marks in the following ways:
-
-- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
-- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
-- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
-- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).
-
-## 5. The Logo
-
-The Frigate logo (the bird icon) is a visual trademark.
-
-- You generally **cannot** use the logo on your own website or product packaging without permission.
-- If you are building a dashboard or integration that interfaces with Frigate, you may use the logo only to represent the Frigate node/service, provided it does not imply you _are_ Frigate.
-
-## 6. Questions & Permissions
-
-If you are unsure if your intended use violates this policy, or if you wish to request a specific license to use the Marks (e.g., for a partnership), please contact us at:
-
-**help@frigate.video**
@@ -145,6 +145,6 @@ rm -rf /var/lib/apt/lists/*

 # Install yq, for frigate-prepare and go2rtc echo source
 curl -fsSL \
-    "https://github.com/mikefarah/yq/releases/download/v4.48.2/yq_linux_$(dpkg --print-architecture)" \
+    "https://github.com/mikefarah/yq/releases/download/v4.33.3/yq_linux_$(dpkg --print-architecture)" \
     --output /usr/local/bin/yq
 chmod +x /usr/local/bin/yq
@@ -15,7 +15,7 @@ ARG AMDGPU

 RUN apt update -qq && \
     apt install -y wget gpg && \
-    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.1/ubuntu/jammy/amdgpu-install_7.1.70100-1_all.deb && \
+    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.0.2/ubuntu/jammy/amdgpu-install_7.0.2.70002-1_all.deb && \
     apt install -y ./rocm.deb && \
     apt update && \
     apt install -qq -y rocm
@@ -1 +1 @@
-onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.1.0/onnxruntime_migraphx-1.23.1-cp311-cp311-linux_x86_64.whl
+onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.0.2/onnxruntime_migraphx-1.23.1-cp311-cp311-linux_x86_64.whl
@@ -2,7 +2,7 @@ variable "AMDGPU" {
   default = "gfx900"
 }
 variable "ROCM" {
-  default = "7.1"
+  default = "7.0.2"
 }
 variable "HSA_OVERRIDE_GFX_VERSION" {
   default = ""
@@ -53,17 +53,6 @@ environment_vars:
   VARIABLE_NAME: variable_value
 ```
-
-#### TensorFlow Thread Configuration
-
-If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:
-
-```yaml
-environment_vars:
-  TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
-  TF_INTER_OP_PARALLELISM_THREADS: "2" # Threads between operations (0 = use default)
-  TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
-```

 ### `database`

 Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
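For reference, the thread limits documented in the removed section correspond to TensorFlow's standard threading knobs. A minimal sketch of the equivalent calls in TensorFlow's own public API (this is an illustration, not Frigate's internal implementation; the data-pipeline mapping via `tf.data.Options` is an approximation):

```python
import tensorflow as tf

# Equivalent explicit TensorFlow API calls (0 means "let TensorFlow decide").
# These must run before any op executes.
tf.config.threading.set_intra_op_parallelism_threads(2)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(2)  # threads across independent ops

# The data-pipeline limit maps roughly to a per-dataset private thread pool:
options = tf.data.Options()
options.threading.private_threadpool_size = 2
dataset = tf.data.Dataset.range(10).with_options(options)
```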
@@ -258,7 +247,7 @@ curl -X POST http://frigate_host:5000/api/config/save -d @config.json

 if you'd like you can use your yaml config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to json:

 ```bash
-yq -o=json '.' config.yaml | curl -X POST 'http://frigate_host:5000/api/config/save?save_option=saveonly' --data-binary @-
+yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
 ```

 ### Via Command Line
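As an alternative sketch of the same API call without `yq`, assuming the `/api/config/save` endpoint shown above, a local PyYAML install, and an illustrative `config.yaml` path:

```python
import json
import urllib.request

import yaml  # PyYAML

# Load the YAML config and re-serialize it as JSON, mirroring `yq -o=json`.
with open("config.yaml") as f:
    payload = json.dumps(yaml.safe_load(f)).encode()

# save_option=saveonly writes the config without restarting Frigate.
req = urllib.request.Request(
    "http://frigate_host:5000/api/config/save?save_option=saveonly",
    data=payload,
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```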
@@ -144,10 +144,4 @@ In order to use transcription and translation for past events, you must enable a

 The transcribed/translated speech will appear in the description box in the Tracked Object Details pane. If Semantic Search is enabled, embeddings are generated for the transcription text and are fully searchable using the description search type.

-:::note
-
-Only one `speech` event may be transcribed at a time. Frigate does not automatically transcribe `speech` events or implement a queue for long-running transcription model inference.
-
-:::
-
-Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a supported Nvidia GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.
+Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.
@@ -214,42 +214,6 @@ For restreamed cameras, go2rtc remains active but does not use system resources

 Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.

-### Live player error messages
-
-When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. They indicate playback, codec, or network issues on the client/browser side, not something server side with Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
-
-- **startup**
-
-  - What it means: The player failed to initialize or connect to the live stream (network or startup error).
-  - What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
-
-  - Possible console messages from the player code:
-
-    - `Error opening MediaSource.`
-    - `Browser reported a network error.`
-    - `Max error count ${errorCount} exceeded.` (the numeric value will vary)
-
-- **mse-decode**
-
-  - What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
-  - What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
-
-  - Possible console messages from the player code:
-
-    - `Safari cannot open MediaSource.`
-    - `Safari reported InvalidStateError.`
-    - `Safari reported decoding errors.`
-
-- **stalled**
-
-  - What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
-  - What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
-
-  - Possible console messages from the player code:
-
-    - `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
-    - `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
-
 ## Live view FAQ

 1. **Why don't I have audio in my Live view?**
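Several of the removed troubleshooting steps come down to verifying that a stream carries H.264 video and AAC audio. A quick way to check this with ffprobe, sketched here with a placeholder RTSP URL:

```python
import subprocess

# Ask ffprobe for the codec of each stream; H.264 video and AAC audio
# are the most broadly compatible for MSE playback in browsers.
url = "rtsp://127.0.0.1:8554/front_door"  # placeholder restream URL
result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-show_entries", "stream=codec_name,codec_type",
        "-of", "csv=p=0",
        url,
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # e.g. "h264,video" and "aac,audio" lines
```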
@@ -313,38 +277,3 @@ When your browser runs into problems playing back your camera streams, it will l
 7. **My camera streams have lots of visual artifacts / distortion.**

    Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.

-8. **Why does my camera stream switch aspect ratios on the Live dashboard?**
-
-   Your camera may change aspect ratios on the dashboard because Frigate uses different streams for different purposes. With go2rtc and Smart Streaming, Frigate shows a static image from the `detect` stream when no activity is present, and switches to the live stream when motion is detected. The camera image will change size if your streams use different aspect ratios.
-
-   To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.
-
-   Example: Resolutions from two streams
-
-   - Mismatched (may cause aspect ratio switching on the dashboard):
-
-     - Live/go2rtc stream: 1920x1080 (16:9)
-     - Detect stream: 640x352 (~1.82:1, not 16:9)
-
-   - Matched (prevents switching):
-     - Live/go2rtc stream: 1920x1080 (16:9)
-     - Detect stream: 640x360 (16:9)
-
-   You can update the detect settings in your camera config to match the aspect ratio of your go2rtc live stream. For example:
-
-   ```yaml
-   cameras:
-     front_door:
-       detect:
-         width: 640
-         height: 360 # set this to 360 instead of 352
-       ffmpeg:
-         inputs:
-           - path: rtsp://127.0.0.1:8554/front_door # main stream 1920x1080
-             roles:
-               - record
-           - path: rtsp://127.0.0.1:8554/front_door_sub # sub stream 640x352
-             roles:
-               - detect
-   ```
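The removed FAQ entry boils down to comparing stream aspect ratios. A tiny self-contained sketch of that check, using the example resolutions from the entry:

```python
from math import isclose

def aspect(width: int, height: int) -> float:
    return width / height

live = aspect(1920, 1080)        # go2rtc live stream, 16:9
candidates = {
    "640x352": aspect(640, 352),  # ~1.82:1, will cause switching
    "640x360": aspect(640, 360),  # 16:9, matches
}

for name, ratio in candidates.items():
    match = isclose(ratio, live, rel_tol=0.01)
    print(f"detect {name}: {'matches' if match else 'does not match'} the live stream")
```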
@@ -3,8 +3,6 @@ id: object_detectors
 title: Object Detectors
 ---

-import CommunityBadge from '@site/src/components/CommunityBadge';
-
 # Supported Hardware

 :::info
@@ -15,8 +13,8 @@ Frigate supports multiple different detectors that work on different types of ha

 - [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
 - [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
-- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
-- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
+- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
+- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).

 **AMD**

@@ -36,16 +34,16 @@ Frigate supports multiple different detectors that work on different types of ha

 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.

-**Nvidia Jetson** <CommunityBadge />
+**Nvidia Jetson**

 - [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.

-**Rockchip** <CommunityBadge />
+**Rockchip**

 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

-**Synaptics** <CommunityBadge />
+**Synaptics**

 - [Synaptics](#synaptics): synap models can run on Synaptics devices(e.g astra machina) with included NPUs.
@@ -246,7 +246,7 @@ birdseye:
 # Optional: ffmpeg configuration
 # More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
 ffmpeg:
-  # Optional: ffmpeg binary path (default: shown below)
+  # Optional: ffmpeg binry path (default: shown below)
   # can also be set to `7.0` or `5.0` to specify one of the included versions
   # or can be set to any path that holds `bin/ffmpeg` & `bin/ffprobe`
   path: "default"
@@ -700,11 +700,11 @@ genai:
 # Optional: Configuration for audio transcription
 # NOTE: only the enabled option can be overridden at the camera level
 audio_transcription:
-  # Optional: Enable live and speech event audio transcription (default: shown below)
+  # Optional: Enable license plate recognition (default: shown below)
   enabled: False
-  # Optional: The device to run the models on for live transcription. (default: shown below)
+  # Optional: The device to run the models on (default: shown below)
   device: CPU
-  # Optional: Set the model size used for live transcription. (default: shown below)
+  # Optional: Set the model size used for transcription. (default: shown below)
   model_size: small
   # Optional: Set the language used for transcription translation. (default: shown below)
   # List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
@@ -3,8 +3,6 @@ id: hardware
 title: Recommended hardware
 ---

-import CommunityBadge from '@site/src/components/CommunityBadge';
-
 ## Cameras

 Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.

@@ -61,7 +59,7 @@ Frigate supports multiple different detectors that work on different types of ha

   - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

-- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
+- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
   - [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
   - Runs best with tiny, small, or medium-size models

@@ -86,26 +84,32 @@ Frigate supports multiple different detectors that work on different types of ha

 **Nvidia**

-- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
+- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
   - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
   - Runs well with any size models including large

-- <CommunityBadge /> [Jetson](#nvidia-jetson): Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6.
-
-**Rockchip** <CommunityBadge />
+**Rockchip**

 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
   - [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
   - Runs best with tiny or small size models
   - Runs efficiently on low power hardware

-**Synaptics** <CommunityBadge />
+**Synaptics**

 - [Synaptics](#synaptics): synap models can run on Synaptics devices(e.g astra machina) with included NPUs to provide efficient object detection.

 :::

+### Synaptics
+
+- **Synaptics** Default model is **mobilenet**
+
+| Name             | Synaptics SL1680 Inference Time |
+| ---------------- | ------------------------------- |
+| ssd mobilenet    | ~ 25 ms                         |
+| yolov5m          | ~ 118 ms                        |
+
 ### Hailo-8

 Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@@ -257,7 +261,7 @@ Inference speeds may vary depending on the host platform. The above data was mea

 ### Nvidia Jetson

-Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
+Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).

 Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
@@ -278,15 +282,6 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard

 The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.

-### Synaptics
-
-- **Synaptics** Default model is **mobilenet**
-
-| Name          | Synaptics SL1680 Inference Time |
-| ------------- | ------------------------------- |
-| ssd mobilenet | ~ 25 ms                         |
-| yolov5m       | ~ 118 ms                        |
-
 ## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)

 This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.
@@ -159,44 +159,11 @@ Message published for updates to tracked object metadata, for example:
 }
 ```

-#### Object Classification Update
-
-Message published when [object classification](/configuration/custom_classification/object_classification) reaches consensus on a classification result.
-
-**Sub label type:**
-
-```json
-{
-  "type": "classification",
-  "id": "1607123955.475377-mxklsc",
-  "camera": "front_door_cam",
-  "timestamp": 1607123958.748393,
-  "model": "person_classifier",
-  "sub_label": "delivery_person",
-  "score": 0.87
-}
-```
-
-**Attribute type:**
-
-```json
-{
-  "type": "classification",
-  "id": "1607123955.475377-mxklsc",
-  "camera": "front_door_cam",
-  "timestamp": 1607123958.748393,
-  "model": "helmet_detector",
-  "attribute": "yes",
-  "score": 0.92
-}
-```
-
 ### `frigate/reviews`

 Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.

 An `update` with the same ID will be published when:

 - The severity changes from `detection` to `alert`
 - Additional objects are detected
 - An object is recognized via face, lpr, etc.
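For reference, a minimal sketch of consuming the classification payloads documented in the removed section, assuming a paho-mqtt client, Frigate's default `frigate/` topic prefix, and a placeholder broker address:

```python
import json

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("frigate/tracked_object_update")

def on_message(client, userdata, msg):
    update = json.loads(msg.payload)
    # Classification consensus payloads carry either "sub_label" or "attribute".
    if update.get("type") == "classification":
        label = update.get("sub_label") or update.get("attribute")
        print(f"{update['camera']}: {update['model']} -> {label} ({update['score']:.2f})")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt-broker.local", 1883)  # placeholder broker address
client.loop_forever()
```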
@@ -341,11 +308,6 @@ Publishes transcribed text for audio detected on this camera.

 **NOTE:** Requires audio detection and transcription to be enabled

-### `frigate/<camera_name>/classification/<model_name>`
-
-Publishes the current state detected by a state classification model for the camera. The topic name includes the model name as configured in your classification settings.
-The published value is the detected state class name (e.g., `open`, `closed`, `on`, `off`). The state is only published when it changes, helping to reduce unnecessary MQTT traffic.
-
 ### `frigate/<camera_name>/enabled/set`

 Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.
@@ -10,7 +10,7 @@ const config: Config = {
   baseUrl: "/",
   onBrokenLinks: "throw",
   onBrokenMarkdownLinks: "warn",
-  favicon: "img/branding/favicon.ico",
+  favicon: "img/favicon.ico",
   organizationName: "blakeblackshear",
   projectName: "frigate",
   themes: [
@@ -116,8 +116,8 @@ const config: Config = {
       title: "Frigate",
       logo: {
         alt: "Frigate",
-        src: "img/branding/logo.svg",
-        srcDark: "img/branding/logo-dark.svg",
+        src: "img/logo.svg",
+        srcDark: "img/logo-dark.svg",
       },
       items: [
         {
@@ -170,7 +170,7 @@ const config: Config = {
         ],
       },
     ],
-    copyright: `Copyright © ${new Date().getFullYear()} Frigate LLC`,
+    copyright: `Copyright © ${new Date().getFullYear()} Blake Blackshear`,
   },
 },
 plugins: [
@@ -1,23 +0,0 @@
-import React from "react";
-
-export default function CommunityBadge() {
-  return (
-    <span
-      title="This detector is maintained by community members who provide code, maintenance, and support. See the contributing boards documentation for more information."
-      style={{
-        display: "inline-block",
-        backgroundColor: "#f1f3f5",
-        color: "#24292f",
-        fontSize: "11px",
-        fontWeight: 600,
-        padding: "2px 6px",
-        borderRadius: "3px",
-        border: "1px solid #d1d9e0",
-        marginLeft: "4px",
-        cursor: "help",
-      }}
-    >
-      Community Supported
-    </span>
-  );
-}
docs/static/img/branding/LICENSE.md (30 changes, vendored, file deleted)

@@ -1,30 +0,0 @@
-# COPYRIGHT AND TRADEMARK NOTICE
-
-The images, logos, and icons contained in this directory (the "Brand Assets") are
-proprietary to Frigate LLC and are NOT covered by the MIT License governing the
-rest of this repository.
-
-1. TRADEMARK STATUS
-   The "Frigate" name and the accompanying logo are common law trademarks™ of
-   Frigate LLC. Frigate LLC reserves all rights to these marks.
-
-2. LIMITED PERMISSION FOR USE
-   Permission is hereby granted to display these Brand Assets strictly for the
-   following purposes:
-   a. To execute the software interface on a local machine.
-   b. To identify the software in documentation or reviews (nominative use).
-
-3. RESTRICTIONS
-   You may NOT:
-   a. Use these Brand Assets to represent a derivative work (fork) as an official
-      product of Frigate LLC.
-   b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
-      commercial affiliation with Frigate LLC.
-   c. Modify or alter the Brand Assets.
-
-If you fork this repository with the intent to distribute a modified or competing
-version of the software, you must replace these Brand Assets with your own
-original content.
-
-ALL RIGHTS RESERVED.
-Copyright (c) 2025 Frigate LLC.
[4 image files changed (binary); sizes unchanged: 15 KiB, 12 KiB, 936 B, 933 B]
@@ -542,7 +542,6 @@ def transcribe_audio(request: Request, body: AudioTranscriptionBody):
             status_code=409,  # 409 Conflict
         )
     else:
-        logger.debug(f"Failed to transcribe audio, response: {response}")
         return JSONResponse(
             content={
                 "success": False,
@@ -23,7 +23,6 @@ from frigate.const import (
     NOTIFICATION_TEST,
     REQUEST_REGION_GRID,
     UPDATE_AUDIO_ACTIVITY,
-    UPDATE_AUDIO_TRANSCRIPTION_STATE,
     UPDATE_BIRDSEYE_LAYOUT,
     UPDATE_CAMERA_ACTIVITY,
     UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
@@ -62,7 +61,6 @@ class Dispatcher:
         self.model_state: dict[str, ModelStatusTypesEnum] = {}
         self.embeddings_reindex: dict[str, Any] = {}
         self.birdseye_layout: dict[str, Any] = {}
-        self.audio_transcription_state: str = "idle"
         self._camera_settings_handlers: dict[str, Callable] = {
             "audio": self._on_audio_command,
             "audio_transcription": self._on_audio_transcription_command,
@@ -180,19 +178,6 @@ class Dispatcher:
         def handle_model_state() -> None:
             self.publish("model_state", json.dumps(self.model_state.copy()))

-        def handle_update_audio_transcription_state() -> None:
-            if payload:
-                self.audio_transcription_state = payload
-                self.publish(
-                    "audio_transcription_state",
-                    json.dumps(self.audio_transcription_state),
-                )
-
-        def handle_audio_transcription_state() -> None:
-            self.publish(
-                "audio_transcription_state", json.dumps(self.audio_transcription_state)
-            )
-
         def handle_update_embeddings_reindex_progress() -> None:
             self.embeddings_reindex = payload
             self.publish(
@@ -279,12 +264,10 @@ class Dispatcher:
             UPDATE_MODEL_STATE: handle_update_model_state,
             UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
             UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
-            UPDATE_AUDIO_TRANSCRIPTION_STATE: handle_update_audio_transcription_state,
             NOTIFICATION_TEST: handle_notification_test,
             "restart": handle_restart,
             "embeddingsReindexProgress": handle_embeddings_reindex_progress,
             "modelState": handle_model_state,
-            "audioTranscriptionState": handle_audio_transcription_state,
             "birdseyeLayout": handle_birdseye_layout,
             "onConnect": handle_on_connect,
         }
@@ -113,7 +113,6 @@ CLEAR_ONGOING_REVIEW_SEGMENTS = "clear_ongoing_review_segments"
 UPDATE_CAMERA_ACTIVITY = "update_camera_activity"
 UPDATE_AUDIO_ACTIVITY = "update_audio_activity"
 EXPIRE_AUDIO_ACTIVITY = "expire_audio_activity"
-UPDATE_AUDIO_TRANSCRIPTION_STATE = "update_audio_transcription_state"
 UPDATE_EVENT_DESCRIPTION = "update_event_description"
 UPDATE_REVIEW_DESCRIPTION = "update_review_description"
 UPDATE_MODEL_STATE = "update_model_state"
@@ -13,7 +13,6 @@ from frigate.config import FrigateConfig
 from frigate.const import (
     CACHE_DIR,
     MODEL_CACHE_DIR,
-    UPDATE_AUDIO_TRANSCRIPTION_STATE,
     UPDATE_EVENT_DESCRIPTION,
 )
 from frigate.data_processing.types import PostProcessDataEnum
@@ -191,8 +190,6 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
             self.transcription_running = False
             self.transcription_thread = None

-            self.requestor.send_data(UPDATE_AUDIO_TRANSCRIPTION_STATE, "idle")
-
     def handle_request(self, topic: str, request_data: dict[str, any]) -> str | None:
         if topic == "transcribe_audio":
             event = request_data["event"]
@@ -206,8 +203,6 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):

             # Mark as running and start the thread
             self.transcription_running = True
-            self.requestor.send_data(UPDATE_AUDIO_TRANSCRIPTION_STATE, "processing")
-
             self.transcription_thread = threading.Thread(
                 target=self._transcription_wrapper, args=(event,), daemon=True
             )
@@ -1,7 +1,6 @@
 """Real time processor that works with classification tflite models."""

 import datetime
-import json
 import logging
 import os
 from typing import Any
@@ -22,7 +21,6 @@ from frigate.config.classification import (
 )
 from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
 from frigate.log import redirect_output_to_logger
-from frigate.types import TrackedObjectUpdateTypesEnum
 from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
 from frigate.util.object import box_overlaps, calculate_region
@@ -286,7 +284,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         config: FrigateConfig,
         model_config: CustomClassificationConfig,
         sub_label_publisher: EventMetadataPublisher,
-        requestor: InterProcessRequestor,
         metrics: DataProcessorMetrics,
     ):
         super().__init__(config, metrics)
@@ -295,7 +292,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
         self.interpreter: Interpreter | None = None
         self.sub_label_publisher = sub_label_publisher
-        self.requestor = requestor
         self.tensor_input_details: dict[str, Any] | None = None
         self.tensor_output_details: dict[str, Any] | None = None
         self.classification_history: dict[str, list[tuple[str, float, float]]] = {}
@@ -490,8 +486,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         )

         if consensus_label is not None:
-            camera = obj_data["camera"]
-
             if (
                 self.model_config.object_config.classification_type
                 == ObjectClassificationType.sub_label
@@ -500,20 +494,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                     (object_id, consensus_label, consensus_score),
                     EventMetadataTypeEnum.sub_label,
                 )
-                self.requestor.send_data(
-                    "tracked_object_update",
-                    json.dumps(
-                        {
-                            "type": TrackedObjectUpdateTypesEnum.classification,
-                            "id": object_id,
-                            "camera": camera,
-                            "timestamp": now,
-                            "model": self.model_config.name,
-                            "sub_label": consensus_label,
-                            "score": consensus_score,
-                        }
-                    ),
-                )
             elif (
                 self.model_config.object_config.classification_type
                 == ObjectClassificationType.attribute
@@ -527,20 +507,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                     ),
                     EventMetadataTypeEnum.attribute.value,
                 )
-                self.requestor.send_data(
-                    "tracked_object_update",
-                    json.dumps(
-                        {
-                            "type": TrackedObjectUpdateTypesEnum.classification,
-                            "id": object_id,
-                            "camera": camera,
-                            "timestamp": now,
-                            "model": self.model_config.name,
-                            "attribute": consensus_label,
-                            "score": consensus_score,
-                        }
-                    ),
-                )

     def handle_request(self, topic, request_data):
         if topic == EmbeddingsRequestEnum.reload_classification_model.value:
@@ -195,7 +195,6 @@ class EmbeddingMaintainer(threading.Thread):
                     self.config,
                     model_config,
                     self.event_metadata_publisher,
-                    self.requestor,
                     self.metrics,
                 )
             )
@@ -340,7 +339,6 @@ class EmbeddingMaintainer(threading.Thread):
                 self.config,
                 model_config,
                 self.event_metadata_publisher,
-                self.requestor,
                 self.metrics,
             )
@@ -109,7 +109,6 @@ class TimelineProcessor(threading.Thread):
                     event_data["region"],
                 ),
-                "attribute": "",
                 "score": event_data["score"],
             },
         }
@@ -30,4 +30,3 @@ class TrackedObjectUpdateTypesEnum(str, Enum):
     description = "description"
     face = "face"
     lpr = "lpr"
-    classification = "classification"
@@ -130,13 +130,8 @@ def get_soc_type() -> Optional[str]:
     """Get the SoC type from device tree."""
     try:
         with open("/proc/device-tree/compatible") as file:
-            content = file.read()
-
-            # Check for Jetson devices
-            if "nvidia" in content:
-                return None
-
-            return content.split(",")[-1].strip("\x00")
+            soc = file.read().split(",")[-1].strip("\x00")
+            return soc
     except FileNotFoundError:
         logger.debug("Could not determine SoC type from device tree")
        return None
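To illustrate the behavioral difference between the two versions of this hunk, a condensed, self-contained paraphrase (the compatible strings are illustrative examples, not values from the diff):

```python
# Old behavior: Jetson boards ("nvidia" in the compatible string) were treated
# specially and returned None; new behavior: always return the last
# comma-separated compatible entry.
def get_soc_type_old(content: str):
    if "nvidia" in content:
        return None
    return content.split(",")[-1].strip("\x00")

def get_soc_type_new(content: str):
    return content.split(",")[-1].strip("\x00")

for compatible in ["rockchip,rk3588\x00", "nvidia,tegra234\x00"]:
    print(repr(compatible), "->", get_soc_type_old(compatible), "/", get_soc_type_new(compatible))
```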
[1 image file changed (binary); size unchanged: 3.9 KiB]
@@ -1,33 +0,0 @@
-# COPYRIGHT AND TRADEMARK NOTICE
-
-The images, logos, and icons contained in this directory (the "Brand Assets") are
-proprietary to Frigate LLC and are NOT covered by the MIT License governing the
-rest of this repository.
-
-1. TRADEMARK STATUS
-   The "Frigate" name and the accompanying logo are common law trademarks™ of
-   Frigate LLC. Frigate LLC reserves all rights to these marks.
-
-2. LIMITED PERMISSION FOR USE
-   Permission is hereby granted to display these Brand Assets strictly for the
-   following purposes:
-   a. To execute the software interface on a local machine.
-   b. To identify the software in documentation or reviews (nominative use).
-
-3. RESTRICTIONS
-   You may NOT:
-   a. Use these Brand Assets to represent a derivative work (fork) as an official
-      product of Frigate LLC.
-   b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
-      commercial affiliation with Frigate LLC.
-   c. Modify or alter the Brand Assets.
-
-If you fork this repository with the intent to distribute a modified or competing
-version of the software, you must replace these Brand Assets with your own
-original content.
-
-For full usage guidelines, strictly see the TRADEMARK.md file in the
-repository root.
-
-ALL RIGHTS RESERVED.
-Copyright (c) 2025 Frigate LLC.
[6 image files changed (binary); sizes unchanged: 558 B, 800 B, 15 KiB, 12 KiB, 2.9 KiB, 2.6 KiB]
@@ -2,29 +2,29 @@
 <html lang="en">
   <head>
     <meta charset="UTF-8" />
-    <link rel="icon" href="/images/branding/favicon.ico" />
+    <link rel="icon" href="/images/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Frigate</title>
     <link
       rel="apple-touch-icon"
       sizes="180x180"
-      href="/images/branding/apple-touch-icon.png"
+      href="/images/apple-touch-icon.png"
     />
     <link
       rel="icon"
       type="image/png"
       sizes="32x32"
-      href="/images/branding/favicon-32x32.png"
+      href="/images/favicon-32x32.png"
     />
     <link
       rel="icon"
       type="image/png"
       sizes="16x16"
-      href="/images/branding/favicon-16x16.png"
+      href="/images/favicon-16x16.png"
     />
-    <link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
+    <link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
     <link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
-    <link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
+    <link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
     <meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
     <meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
   </head>
@@ -2,29 +2,29 @@
 <html lang="en">
   <head>
     <meta charset="UTF-8" />
-    <link rel="icon" href="/images/branding/favicon.ico" />
+    <link rel="icon" href="/images/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Frigate</title>
     <link
       rel="apple-touch-icon"
       sizes="180x180"
-      href="/images/branding/apple-touch-icon.png"
+      href="/images/apple-touch-icon.png"
    />
     <link
       rel="icon"
       type="image/png"
       sizes="32x32"
-      href="/images/branding/favicon-32x32.png"
+      href="/images/favicon-32x32.png"
     />
     <link
       rel="icon"
       type="image/png"
       sizes="16x16"
-      href="/images/branding/favicon-16x16.png"
+      href="/images/favicon-16x16.png"
     />
-    <link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
+    <link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
     <link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
-    <link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
+    <link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
     <meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
     <meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
   </head>
@@ -103,7 +103,7 @@
     "regenerate": "A new description has been requested from {{provider}}. Depending on the speed of your provider, the new description may take some time to regenerate.",
     "updatedSublabel": "Successfully updated sub label.",
     "updatedLPR": "Successfully updated license plate.",
-    "audioTranscription": "Successfully requested audio transcription. Depending on the speed of your Frigate server, the transcription may take some time to complete."
+    "audioTranscription": "Successfully requested audio transcription."
   },
   "error": {
     "regenerate": "Failed to call {{provider}} for a new description: {{errorMessage}}",
@@ -461,40 +461,6 @@ export function useEmbeddingsReindexProgress(
   return { payload: data };
 }

-export function useAudioTranscriptionProcessState(
-  revalidateOnFocus: boolean = true,
-): { payload: string } {
-  const {
-    value: { payload },
-    send: sendCommand,
-  } = useWs("audio_transcription_state", "audioTranscriptionState");
-
-  const data = useDeepMemo(
-    payload ? (JSON.parse(payload as string) as string) : "idle",
-  );
-
-  useEffect(() => {
-    let listener = undefined;
-    if (revalidateOnFocus) {
-      sendCommand("audioTranscriptionState");
-      listener = () => {
-        if (document.visibilityState == "visible") {
-          sendCommand("audioTranscriptionState");
-        }
-      };
-      addEventListener("visibilitychange", listener);
-    }
-    return () => {
-      if (listener) {
-        removeEventListener("visibilitychange", listener);
-      }
-    };
-    // eslint-disable-next-line react-hooks/exhaustive-deps
-  }, [revalidateOnFocus]);
-
-  return { payload: data || "idle" };
-}
-
 export function useBirdseyeLayout(revalidateOnFocus: boolean = true): {
   payload: string;
 } {
@@ -572,8 +572,9 @@ export function SortTypeContent({
           className="w-full space-y-1"
         >
           {availableSortTypes.map((value) => (
-            <div key={value} className="flex flex-row gap-2">
+            <div className="flex flex-row gap-2">
               <RadioGroupItem
+                key={value}
                 value={value}
                 id={`sort-${value}`}
                 className={
@@ -42,7 +42,6 @@ type ObjectData = {
   pathPoints: PathPoint[];
   currentZones: string[];
   currentBox?: number[];
-  currentAttributeBox?: number[];
 };

 export default function ObjectTrackOverlay({
@@ -106,12 +105,6 @@ export default function ObjectTrackOverlay({
     selectedObjectIds.length > 0
       ? ["event_ids", { ids: selectedObjectIds.join(",") }]
       : null,
-    null,
-    {
-      revalidateOnFocus: false,
-      revalidateOnReconnect: false,
-      dedupingInterval: 30000,
-    },
   );

   // Fetch timeline data for each object ID using fixed number of hooks
@@ -119,12 +112,7 @@ export default function ObjectTrackOverlay({
     selectedObjectIds.length > 0
       ? `timeline?source_id=${selectedObjectIds.join(",")}&limit=1000`
       : null,
-    null,
-    {
-      revalidateOnFocus: false,
-      revalidateOnReconnect: false,
-      dedupingInterval: 30000,
-    },
+    { revalidateOnFocus: false },
   );

   const getZonesFriendlyNames = (zones: string[], config: FrigateConfig) => {
@@ -282,7 +270,6 @@ export default function ObjectTrackOverlay({
         );

         const currentBox = nearbyTimelineEvent?.data?.box;
-        const currentAttributeBox = nearbyTimelineEvent?.data?.attribute_box;

         return {
           objectId,
@@ -291,7 +278,6 @@ export default function ObjectTrackOverlay({
           pathPoints: combinedPoints,
           currentZones,
           currentBox,
-          currentAttributeBox,
         };
       })
       .filter((obj: ObjectData) => obj.pathPoints.length > 0); // Only include objects with path data
@@ -496,20 +482,6 @@ export default function ObjectTrackOverlay({
                 />
               </g>
             )}
-            {objData.currentAttributeBox && showBoundingBoxes && (
-              <g>
-                <rect
-                  x={objData.currentAttributeBox[0] * videoWidth}
-                  y={objData.currentAttributeBox[1] * videoHeight}
-                  width={objData.currentAttributeBox[2] * videoWidth}
-                  height={objData.currentAttributeBox[3] * videoHeight}
-                  fill="none"
-                  stroke={objData.color}
-                  strokeWidth={boxStroke}
-                  opacity="0.9"
-                />
-              </g>
-            )}
           </g>
         );
       })}
@@ -42,10 +42,9 @@ export default function DetailActionsMenu({
     return `start/${startTime}/end/${endTime}`;
   }, [search]);

-  // currently, audio event ids are not saved in review items
-  const { data: reviewItem } = useSWR<ReviewSegment>(
-    search.data?.type === "audio" ? null : [`review/event/${search.id}`],
-  );
+  const { data: reviewItem } = useSWR<ReviewSegment>([
+    `review/event/${search.id}`,
+  ]);

   return (
     <DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
@@ -92,7 +92,6 @@ import { DialogPortal } from "@radix-ui/react-dialog";
 import { useDetailStream } from "@/context/detail-stream-context";
 import { PiSlidersHorizontalBold } from "react-icons/pi";
 import { HiSparkles } from "react-icons/hi";
-import { useAudioTranscriptionProcessState } from "@/api/ws";

 const SEARCH_TABS = ["snapshot", "tracking_details"] as const;
 export type SearchTab = (typeof SEARCH_TABS)[number];
@@ -1077,11 +1076,6 @@ function ObjectDetailsTab({
     });
   }, [search, t]);

-  // audio transcription processing state
-
-  const { payload: audioTranscriptionProcessState } =
-    useAudioTranscriptionProcessState();
-
   // frigate+ submission

   type SubmissionState = "reviewing" | "uploading" | "submitted";
@@ -1301,7 +1295,6 @@ function ObjectDetailsTab({

       {search.data.type === "object" &&
         config?.plus?.enabled &&
-        search.end_time != undefined &&
         search.has_snapshot && (
           <div
             className={cn(
@@ -1437,20 +1430,10 @@ function ObjectDetailsTab({
               <TooltipTrigger asChild>
                 <button
                   aria-label={t("itemMenu.audioTranscription.label")}
-                  className={cn(
-                    "text-primary/40",
-                    audioTranscriptionProcessState === "processing"
-                      ? "cursor-not-allowed"
-                      : "hover:text-primary/80",
-                  )}
+                  className="text-primary/40 hover:text-primary/80"
                   onClick={onTranscribe}
-                  disabled={audioTranscriptionProcessState === "processing"}
                 >
-                  {audioTranscriptionProcessState === "processing" ? (
-                    <ActivityIndicator className="size-4" />
-                  ) : (
-                    <FaMicrophone className="size-4" />
-                  )}
+                  <FaMicrophone className="size-4" />
                 </button>
               </TooltipTrigger>
               <TooltipContent>
@@ -75,15 +75,12 @@ export function TrackingDetails({
     setIsVideoLoading(true);
   }, [event.id]);

-  const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>(
-    ["timeline", { source_id: event.id }],
-    null,
-    {
-      revalidateOnFocus: false,
-      revalidateOnReconnect: false,
-      dedupingInterval: 30000,
-    },
-  );
+  const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
+    "timeline",
+    {
+      source_id: event.id,
+    },
+  ]);

   const { data: config } = useSWR<FrigateConfig>("config");
@@ -107,12 +104,6 @@ export function TrackingDetails({
         },
       ]
     : null,
-    null,
-    {
-      revalidateOnFocus: false,
-      revalidateOnReconnect: false,
-      dedupingInterval: 30000,
-    },
   );

   // Convert a timeline timestamp to actual video player time, accounting for
@@ -723,6 +714,53 @@ export function TrackingDetails({
           )}
           <div className="space-y-2">
             {eventSequence.map((item, idx) => {
+              const isActive =
+                Math.abs(
+                  (effectiveTime ?? 0) - (item.timestamp ?? 0),
+                ) <= 0.5;
+              const formattedEventTimestamp = config
+                ? formatUnixTimestampToDateTime(item.timestamp ?? 0, {
+                    timezone: config.ui.timezone,
+                    date_format:
+                      config.ui.time_format == "24hour"
+                        ? t(
+                            "time.formattedTimestampHourMinuteSecond.24hour",
+                            { ns: "common" },
+                          )
+                        : t(
+                            "time.formattedTimestampHourMinuteSecond.12hour",
+                            { ns: "common" },
+                          ),
+                    time_style: "medium",
+                    date_style: "medium",
+                  })
+                : "";
+
+              const ratio =
+                Array.isArray(item.data.box) &&
+                item.data.box.length >= 4
+                  ? (
+                      aspectRatio *
+                      (item.data.box[2] / item.data.box[3])
+                    ).toFixed(2)
+                  : "N/A";
+              const areaPx =
+                Array.isArray(item.data.box) &&
+                item.data.box.length >= 4
+                  ? Math.round(
+                      (config.cameras[event.camera]?.detect?.width ??
+                        0) *
+                        (config.cameras[event.camera]?.detect
+                          ?.height ?? 0) *
+                        (item.data.box[2] * item.data.box[3]),
+                    )
+                  : undefined;
+              const areaPct =
+                Array.isArray(item.data.box) &&
+                item.data.box.length >= 4
+                  ? (item.data.box[2] * item.data.box[3]).toFixed(4)
+                  : undefined;
+
               return (
                 <div
                   key={`${item.timestamp}-${item.source_id ?? ""}-${idx}`}
@@ -732,7 +770,11 @@ export function TrackingDetails({
                 >
                   <LifecycleIconRow
                     item={item}
                     event={event}
+                    isActive={isActive}
+                    formattedEventTimestamp={formattedEventTimestamp}
+                    ratio={ratio}
+                    areaPx={areaPx}
+                    areaPct={areaPct}
                     onClick={() => handleLifecycleClick(item)}
                     setSelectedZone={setSelectedZone}
                     getZoneColor={getZoneColor}
@@ -756,7 +798,11 @@ type LifecycleIconRowProps = {
   item: TrackingDetailsSequence;
   event: Event;
+  isActive?: boolean;
+  formattedEventTimestamp: string;
+  ratio: string;
+  areaPx?: number;
+  areaPct?: string;
   onClick: () => void;
   setSelectedZone: (z: string) => void;
   getZoneColor: (zoneName: string) => number[] | undefined;
@@ -766,7 +812,11 @@ type LifecycleIconRowProps = {

 function LifecycleIconRow({
   item,
   event,
+  isActive,
+  formattedEventTimestamp,
+  ratio,
+  areaPx,
+  areaPct,
   onClick,
   setSelectedZone,
   getZoneColor,
@@ -776,101 +826,9 @@ function LifecycleIconRow({
   const { t } = useTranslation(["views/explore", "components/player"]);
   const { data: config } = useSWR<FrigateConfig>("config");
   const [isOpen, setIsOpen] = useState(false);

   const navigate = useNavigate();

-  const aspectRatio = useMemo(() => {
-    if (!config) {
-      return 16 / 9;
-    }
-
-    return (
-      config.cameras[event.camera].detect.width /
-      config.cameras[event.camera].detect.height
-    );
-  }, [config, event]);
-
-  const isActive = useMemo(
-    () => Math.abs((effectiveTime ?? 0) - (item.timestamp ?? 0)) <= 0.5,
-    [effectiveTime, item.timestamp],
-  );
-
-  const formattedEventTimestamp = useMemo(
-    () =>
-      config
-        ? formatUnixTimestampToDateTime(item.timestamp ?? 0, {
-            timezone: config.ui.timezone,
-            date_format:
-              config.ui.time_format == "24hour"
-                ? t("time.formattedTimestampHourMinuteSecond.24hour", {
-                    ns: "common",
-                  })
-                : t("time.formattedTimestampHourMinuteSecond.12hour", {
-                    ns: "common",
-                  }),
-            time_style: "medium",
-            date_style: "medium",
-          })
-        : "",
-    [config, item.timestamp, t],
-  );
-
-  const ratio = useMemo(
-    () =>
-      Array.isArray(item.data.box) && item.data.box.length >= 4
-        ? (aspectRatio * (item.data.box[2] / item.data.box[3])).toFixed(2)
-        : "N/A",
-    [aspectRatio, item.data.box],
-  );
-
-  const areaPx = useMemo(
-    () =>
-      Array.isArray(item.data.box) && item.data.box.length >= 4
-        ? Math.round(
-            (config?.cameras[event.camera]?.detect?.width ?? 0) *
-              (config?.cameras[event.camera]?.detect?.height ?? 0) *
-              (item.data.box[2] * item.data.box[3]),
-          )
-        : undefined,
-    [config, event.camera, item.data.box],
-  );
-
-  const attributeAreaPx = useMemo(
-    () =>
-      Array.isArray(item.data.attribute_box) &&
-      item.data.attribute_box.length >= 4
-        ? Math.round(
-            (config?.cameras[event.camera]?.detect?.width ?? 0) *
-              (config?.cameras[event.camera]?.detect?.height ?? 0) *
-              (item.data.attribute_box[2] * item.data.attribute_box[3]),
-          )
-        : undefined,
-    [config, event.camera, item.data.attribute_box],
-  );
-
-  const attributeAreaPct = useMemo(
-    () =>
-      Array.isArray(item.data.attribute_box) &&
-      item.data.attribute_box.length >= 4
-        ? (item.data.attribute_box[2] * item.data.attribute_box[3]).toFixed(4)
-        : undefined,
-    [item.data.attribute_box],
-  );
-
-  const areaPct = useMemo(
-    () =>
-      Array.isArray(item.data.box) && item.data.box.length >= 4
-        ? (item.data.box[2] * item.data.box[3]).toFixed(4)
-        : undefined,
-    [item.data.box],
-  );
-
-  const score = useMemo(() => {
-    if (item.data.score !== undefined) {
-      return (item.data.score * 100).toFixed(0) + "%";
-    }
-    return "N/A";
-  }, [item.data.score]);
-
   return (
     <div
       role="button"
@ -898,28 +856,16 @@ function LifecycleIconRow({
|
||||
<div className="text-md flex items-start break-words text-left">
|
||||
{getLifecycleItemDescription(item)}
|
||||
</div>
|
||||
<div className="my-2 ml-2 flex flex-col flex-wrap items-start gap-1.5 text-xs text-secondary-foreground">
|
||||
<div className="flex items-center gap-1.5">
|
||||
<span className="text-primary-variant">
|
||||
{t("trackingDetails.lifecycleItemDesc.header.score")}
|
||||
</span>
|
||||
<span className="font-medium text-primary">{score}</span>
|
||||
</div>
|
||||
<div className="flex items-center gap-1.5">
|
||||
<div className="mt-1 flex flex-wrap items-center gap-2 text-xs text-secondary-foreground md:gap-5">
|
||||
<div className="flex items-center gap-1">
|
||||
<span className="text-primary-variant">
|
||||
{t("trackingDetails.lifecycleItemDesc.header.ratio")}
|
||||
</span>
|
||||
<span className="font-medium text-primary">{ratio}</span>
|
||||
</div>
|
||||
<div className="flex items-center gap-1.5">
|
||||
<div className="flex items-center gap-1">
|
||||
<span className="text-primary-variant">
|
||||
{t("trackingDetails.lifecycleItemDesc.header.area")}{" "}
|
||||
{attributeAreaPx !== undefined &&
|
||||
attributeAreaPct !== undefined && (
|
||||
<span className="text-primary-variant">
|
||||
({getTranslatedLabel(item.data.label)})
|
||||
</span>
|
||||
)}
|
||||
{t("trackingDetails.lifecycleItemDesc.header.area")}
|
||||
</span>
|
||||
{areaPx !== undefined && areaPct !== undefined ? (
|
||||
<span className="font-medium text-primary">
|
||||
@ -930,25 +876,9 @@ function LifecycleIconRow({
|
||||
<span>N/A</span>
|
||||
)}
|
||||
</div>
|
||||
{attributeAreaPx !== undefined &&
|
||||
attributeAreaPct !== undefined && (
|
||||
<div className="flex items-center gap-1.5">
|
||||
<span className="text-primary-variant">
|
||||
{t("trackingDetails.lifecycleItemDesc.header.area")} (
|
||||
{getTranslatedLabel(item.data.attribute)})
|
||||
</span>
|
||||
<span className="font-medium text-primary">
|
||||
{t("information.pixels", {
|
||||
ns: "common",
|
||||
area: attributeAreaPx,
|
||||
})}{" "}
|
||||
· {attributeAreaPct}%
|
||||
</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{item.data?.zones && item.data.zones.length > 0 && (
|
||||
<div className="mt-1 flex flex-wrap items-center gap-2">
|
||||
<div className="flex flex-wrap items-center gap-2">
|
||||
{item.data.zones.map((zone, zidx) => {
|
||||
const color = getZoneColor(zone)?.join(",") ?? "0,0,0";
|
||||
return (
|
||||
|
||||
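
Note: one side of these hunks computes score, ratio, and area inside LifecycleIconRow; the other hoists those computations to the parent and passes them down as the new isActive, formattedEventTimestamp, ratio, areaPx, and areaPct props. Either way the box math is the same: box[2] and box[3] are width and height normalized to the camera's detect resolution. A minimal sketch of that conversion, with an illustrative helper name that is not part of the codebase:

```ts
// Sketch (hypothetical helper): converting a normalized [x, y, w, h] box
// into the values shown in the lifecycle row, mirroring the useMemo logic above.
type NormalizedBox = [number, number, number, number];

function boxMetrics(box: NormalizedBox, detectWidth: number, detectHeight: number) {
  const [, , w, h] = box;
  const aspectRatio = detectWidth / detectHeight;
  return {
    // width/height ratio corrected for the frame's aspect ratio
    ratio: (aspectRatio * (w / h)).toFixed(2),
    // absolute pixel area at the detect resolution
    areaPx: Math.round(detectWidth * detectHeight * (w * h)),
    // fraction of the frame covered by the box
    areaPct: (w * h).toFixed(4),
  };
}

// e.g. a box covering a quarter of a 1280x720 frame:
// boxMetrics([0.25, 0.25, 0.5, 0.5], 1280, 720)
//   -> { ratio: "1.78", areaPx: 230400, areaPct: "0.2500" }
```
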
@@ -94,52 +94,24 @@ export default function HlsVideoPlayer({
  const [loadedMetadata, setLoadedMetadata] = useState(false);
  const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();

  const applyVideoDimensions = useCallback(
    (width: number, height: number) => {
      if (setFullResolution) {
        setFullResolution({ width, height });
      }
      setVideoDimensions({ width, height });
      if (height > 0) {
        setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
      }
    },
    [setFullResolution],
  );

  const handleLoadedMetadata = useCallback(() => {
    setLoadedMetadata(true);
    if (!videoRef.current) {
      return;
    }
    if (videoRef.current) {
      const width = videoRef.current.videoWidth;
      const height = videoRef.current.videoHeight;

    const width = videoRef.current.videoWidth;
    const height = videoRef.current.videoHeight;

    // iOS Safari occasionally reports 0x0 for videoWidth/videoHeight
    // Poll with requestAnimationFrame until dimensions become available (or timeout).
    if (width > 0 && height > 0) {
      applyVideoDimensions(width, height);
      return;
    }

    let attempts = 0;
    const maxAttempts = 120; // ~2 seconds at 60fps
    const tryGetDims = () => {
      if (!videoRef.current) return;
      const w = videoRef.current.videoWidth;
      const h = videoRef.current.videoHeight;
      if (w > 0 && h > 0) {
        applyVideoDimensions(w, h);
        return;
      if (setFullResolution) {
        setFullResolution({
          width,
          height,
        });
      }
      if (attempts < maxAttempts) {
        attempts += 1;
        requestAnimationFrame(tryGetDims);
      }
    };
    requestAnimationFrame(tryGetDims);
  }, [videoRef, applyVideoDimensions]);

      setVideoDimensions({ width, height });

      setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
    }
  }, [videoRef, setFullResolution]);

  useEffect(() => {
    if (!videoRef.current) {
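
Note: the larger side of this hunk works around iOS Safari occasionally reporting 0x0 for videoWidth/videoHeight at loadedmetadata by polling with requestAnimationFrame until real dimensions appear, giving up after ~120 frames. The pattern in isolation, as a minimal sketch with illustrative names:

```ts
// Sketch of the rAF polling pattern used above (names are illustrative).
// Resolves with the first non-zero dimensions, or null after maxAttempts frames.
function waitForVideoDimensions(
  video: HTMLVideoElement,
  maxAttempts = 120, // ~2 seconds at 60fps
): Promise<{ width: number; height: number } | null> {
  return new Promise((resolve) => {
    let attempts = 0;
    const tick = () => {
      const { videoWidth: w, videoHeight: h } = video;
      if (w > 0 && h > 0) {
        resolve({ width: w, height: h });
      } else if (attempts < maxAttempts) {
        attempts += 1;
        requestAnimationFrame(tick);
      } else {
        resolve(null); // timed out; the caller keeps its fallback layout
      }
    };
    requestAnimationFrame(tick);
  });
}
```
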
@@ -82,7 +82,6 @@ function MSEPlayer({
    [key: string]: (msg: { value: string; type: string }) => void;
  }>({});
  const msRef = useRef<MediaSource | null>(null);
  const mseCodecRef = useRef<string | null>(null);

  const wsURL = useMemo(() => {
    return `${baseUrl.replace(/^http/, "ws")}live/mse/api/ws?src=${camera}`;
@@ -92,12 +91,8 @@ function MSEPlayer({
    (error: LivePlayerError, description: string = "Unknown error") => {
      // eslint-disable-next-line no-console
      console.error(
        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
      );
      if (mseCodecRef.current) {
        // eslint-disable-next-line no-console
        console.error(`${camera} - MSE codec in use: ${mseCodecRef.current}`);
      }
      onError?.(error);
    },
    [camera, onError],
@@ -304,9 +299,6 @@ function MSEPlayer({
    onmessageRef.current["mse"] = (msg) => {
      if (msg.type !== "mse") return;

      // Store the codec value for error logging
      mseCodecRef.current = msg.value;

      let sb: SourceBuffer | undefined;
      try {
        sb = msRef.current?.addSourceBuffer(msg.value);
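
Note: one side of these MSEPlayer hunks remembers the negotiated codec string in a ref when the "mse" websocket message arrives, so a later error log can report which codec was in play; the other side drops that bookkeeping and points the error message at the live-view FAQ anchor instead. The idea in isolation, as an illustrative sketch rather than Frigate's actual API:

```ts
// Illustrative sketch: remember the codec string handed to addSourceBuffer
// so error handlers can include it in diagnostics.
let lastCodec: string | null = null;

function attachSourceBuffer(ms: MediaSource, codec: string): SourceBuffer {
  lastCodec = codec; // e.g. 'video/mp4; codecs="avc1.64001f,mp4a.40.2"'
  return ms.addSourceBuffer(codec); // throws NotSupportedError if unsupported
}

function logPlayerError(camera: string, error: unknown) {
  console.error(`${camera} - MSE error: ${String(error)}`);
  if (lastCodec) {
    console.error(`${camera} - MSE codec in use: ${lastCodec}`);
  }
}
```
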
@@ -42,7 +42,7 @@ export default function WebRtcPlayer({
    (error: LivePlayerError, description: string = "Unknown error") => {
      // eslint-disable-next-line no-console
      console.error(
        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
      );
      onError?.(error);
    },

@@ -16,7 +16,6 @@ export type TrackingDetailsSequence = {
  data: {
    camera: string;
    label: string;
    score: number;
    sub_label: string;
    box?: [number, number, number, number];
    region: [number, number, number, number];

@@ -16,6 +16,7 @@ import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator
import useImageLoaded from "@/hooks/use-image-loaded";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useTrackedObjectUpdate } from "@/api/ws";
import { isEqual } from "lodash";
import TimeAgo from "@/components/dynamic/TimeAgo";
import SearchResultActions from "@/components/menu/SearchResultActions";
import { SearchTab } from "@/components/overlay/detail/SearchDetailDialog";
@@ -24,12 +25,14 @@ import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";

type ExploreViewProps = {
  searchDetail: SearchResult | undefined;
  setSearchDetail: (search: SearchResult | undefined) => void;
  setSimilaritySearch: (search: SearchResult) => void;
  onSelectSearch: (item: SearchResult, ctrl: boolean, page?: SearchTab) => void;
};

export default function ExploreView({
  searchDetail,
  setSearchDetail,
  setSimilaritySearch,
  onSelectSearch,
@@ -80,6 +83,20 @@ export default function ExploreView({
    }
  }, [wsUpdate, mutate]);

  // update search detail when results change

  useEffect(() => {
    if (searchDetail && events) {
      const updatedSearchDetail = events.find(
        (result) => result.id === searchDetail.id,
      );

      if (updatedSearchDetail && !isEqual(updatedSearchDetail, searchDetail)) {
        setSearchDetail(updatedSearchDetail);
      }
    }
  }, [events, searchDetail, setSearchDetail]);

  if (isLoading) {
    return (
      <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
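
Note: the ExploreView hunk keeps an open detail dialog in sync with refreshed SWR results: when the events list updates, it looks up the same id and replaces the stale object only if lodash's isEqual says the contents actually changed, which avoids re-render loops from referentially new but identical objects. A stripped-down sketch of that guard, with hypothetical types:

```ts
import { isEqual } from "lodash";

type Item = { id: string };

// Re-sync a selected item against a refreshed list; returns the value to
// store, or the current selection when nothing meaningful changed.
function syncSelection<T extends Item>(
  selected: T | undefined,
  results: T[] | undefined,
): T | undefined {
  if (!selected || !results) return selected;
  const updated = results.find((r) => r.id === selected.id);
  // Deep-compare before replacing so a new-but-identical object
  // doesn't trigger another render (and another effect run).
  return updated && !isEqual(updated, selected) ? updated : selected;
}
```
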
@@ -1376,343 +1376,329 @@ function FrigateCameraFeatures({
        title={t("cameraSettings.title", { camera })}
      />
    </DrawerTrigger>
    <DrawerContent className="max-h-[75dvh] overflow-hidden rounded-2xl">
      <div className="scrollbar-container mt-2 flex h-auto flex-col gap-2 overflow-y-auto px-2 py-4">
        <>
          {isAdmin && (
            <>
    <DrawerContent className="rounded-2xl px-2 py-4">
      <div className="mt-2 flex flex-col gap-2">
        {isAdmin && (
          <>
            <FilterSwitch
              label={t("cameraSettings.cameraEnabled")}
              isChecked={enabledState == "ON"}
              onCheckedChange={() =>
                sendEnabled(enabledState == "ON" ? "OFF" : "ON")
              }
            />
            <FilterSwitch
              label={t("cameraSettings.objectDetection")}
              isChecked={detectState == "ON"}
              onCheckedChange={() =>
                sendDetect(detectState == "ON" ? "OFF" : "ON")
              }
            />
            {recordingEnabled && (
              <FilterSwitch
                label={t("cameraSettings.cameraEnabled")}
                isChecked={enabledState == "ON"}
                label={t("cameraSettings.recording")}
                isChecked={recordState == "ON"}
                onCheckedChange={() =>
                  sendEnabled(enabledState == "ON" ? "OFF" : "ON")
                  sendRecord(recordState == "ON" ? "OFF" : "ON")
                }
              />
            )}
            <FilterSwitch
              label={t("cameraSettings.snapshots")}
              isChecked={snapshotState == "ON"}
              onCheckedChange={() =>
                sendSnapshot(snapshotState == "ON" ? "OFF" : "ON")
              }
            />
            {audioDetectEnabled && (
              <FilterSwitch
                label={t("cameraSettings.objectDetection")}
                isChecked={detectState == "ON"}
                label={t("cameraSettings.audioDetection")}
                isChecked={audioState == "ON"}
                onCheckedChange={() =>
                  sendDetect(detectState == "ON" ? "OFF" : "ON")
                  sendAudio(audioState == "ON" ? "OFF" : "ON")
                }
              />
            {recordingEnabled && (
              <FilterSwitch
                label={t("cameraSettings.recording")}
                isChecked={recordState == "ON"}
                onCheckedChange={() =>
                  sendRecord(recordState == "ON" ? "OFF" : "ON")
                }
              />
            )}
            )}
            {audioDetectEnabled && transcriptionEnabled && (
              <FilterSwitch
                label={t("cameraSettings.snapshots")}
                isChecked={snapshotState == "ON"}
                label={t("cameraSettings.transcription")}
                disabled={audioState == "OFF"}
                isChecked={transcriptionState == "ON"}
                onCheckedChange={() =>
                  sendSnapshot(snapshotState == "ON" ? "OFF" : "ON")
                  sendTranscription(transcriptionState == "ON" ? "OFF" : "ON")
                }
              />
            {audioDetectEnabled && (
              <FilterSwitch
                label={t("cameraSettings.audioDetection")}
                isChecked={audioState == "ON"}
                onCheckedChange={() =>
                  sendAudio(audioState == "ON" ? "OFF" : "ON")
                }
              />
            )}
            {audioDetectEnabled && transcriptionEnabled && (
              <FilterSwitch
                label={t("cameraSettings.transcription")}
                disabled={audioState == "OFF"}
                isChecked={transcriptionState == "ON"}
                onCheckedChange={() =>
                  sendTranscription(
                    transcriptionState == "ON" ? "OFF" : "ON",
                  )
                }
              />
            )}
            {autotrackingEnabled && (
              <FilterSwitch
                label={t("cameraSettings.autotracking")}
                isChecked={autotrackingState == "ON"}
                onCheckedChange={() =>
                  sendAutotracking(autotrackingState == "ON" ? "OFF" : "ON")
                }
              />
            )}
          </>
        )}
            )}
            {autotrackingEnabled && (
              <FilterSwitch
                label={t("cameraSettings.autotracking")}
                isChecked={autotrackingState == "ON"}
                onCheckedChange={() =>
                  sendAutotracking(autotrackingState == "ON" ? "OFF" : "ON")
                }
              />
            )}
          </>
        )}
      </div>

      <div className="mt-3 flex flex-col gap-5">
        {!isRestreamed && (
          <div className="flex flex-col gap-2 p-2">
            <Label>{t("stream.title")}</Label>
            <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
              <LuX className="size-4 text-danger" />
              <div>
                {t("streaming.restreaming.disabled", {
                  ns: "components/dialog",
                })}
      <div className="mt-3 flex flex-col gap-5">
        {!isRestreamed && (
          <div className="flex flex-col gap-2 p-2">
            <Label>{t("stream.title")}</Label>
            <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
              <LuX className="size-4 text-danger" />
              <div>
                {t("streaming.restreaming.disabled", {
                  ns: "components/dialog",
                })}
              </div>
              <Popover>
                <PopoverTrigger asChild>
                  <div className="cursor-pointer p-0">
                    <LuInfo className="size-4" />
                    <span className="sr-only">
                      {t("button.info", { ns: "common" })}
                    </span>
                  </div>
              <Popover>
                <PopoverTrigger asChild>
                  <div className="cursor-pointer p-0">
                    <LuInfo className="size-4" />
                    <span className="sr-only">
                      {t("button.info", { ns: "common" })}
                    </span>
                  </div>
                </PopoverTrigger>
                <PopoverContent className="w-80 text-xs">
                  {t("streaming.restreaming.desc.title", {
                    ns: "components/dialog",
                  })}
                  <div className="mt-2 flex items-center text-primary">
                    <Link
                      to={getLocaleDocUrl("configuration/live")}
                      target="_blank"
                      rel="noopener noreferrer"
                      className="inline"
                    >
                      {t("readTheDocumentation", { ns: "common" })}
                      <LuExternalLink className="ml-2 inline-flex size-3" />
                    </Link>
                  </div>
                </PopoverContent>
              </Popover>
            </div>
          </div>
        )}
        {isRestreamed &&
          Object.values(camera.live.streams).length > 0 && (
            <div className="mt-1 p-2">
              <div className="mb-1 text-sm">{t("stream.title")}</div>
              <Select
                value={streamName}
                onValueChange={(value) => {
                  setStreamName?.(value);
                }}
                disabled={debug}
              >
                <SelectTrigger className="w-full">
                  <SelectValue>
                    {Object.keys(camera.live.streams).find(
                      (key) => camera.live.streams[key] === streamName,
                    )}
                  </SelectValue>
                </SelectTrigger>

                <SelectContent>
                  <SelectGroup>
                    {Object.entries(camera.live.streams).map(
                      ([stream, name]) => (
                        <SelectItem
                          key={stream}
                          className="cursor-pointer"
                          value={name}
                        >
                          {stream}
                        </SelectItem>
                      ),
                    )}
                  </SelectGroup>
                </SelectContent>
              </Select>

              {debug && (
                <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
                  <>
                    <LuX className="size-8 text-danger" />
                    <div>{t("stream.debug.picker")}</div>
                  </>
                </div>
              )}

              {preferredLiveMode != "jsmpeg" &&
                !debug &&
                isRestreamed && (
                  <div className="mt-1 flex flex-row items-center gap-1 text-sm text-muted-foreground">
                    {supportsAudioOutput ? (
                      <>
                        <LuCheck className="size-4 text-success" />
                        <div>{t("stream.audio.available")}</div>
                      </>
                    ) : (
                      <>
                        <LuX className="size-4 text-danger" />
                        <div>{t("stream.audio.unavailable")}</div>
                        <Popover>
                          <PopoverTrigger asChild>
                            <div className="cursor-pointer p-0">
                              <LuInfo className="size-4" />
                              <span className="sr-only">
                                {t("button.info", { ns: "common" })}
                              </span>
                            </div>
                          </PopoverTrigger>
                          <PopoverContent className="w-52 text-xs">
                            {t("stream.audio.tips.title")}
                            <div className="mt-2 flex items-center text-primary">
                              <Link
                                to={getLocaleDocUrl("configuration/live")}
                                target="_blank"
                                rel="noopener noreferrer"
                                className="inline"
                              >
                                {t("readTheDocumentation", {
                                  ns: "common",
                                })}
                                <LuExternalLink className="ml-2 inline-flex size-3" />
                              </Link>
                            </div>
                          </PopoverContent>
                        </Popover>
                      </>
                    )}
                  </div>
                )}
              {preferredLiveMode != "jsmpeg" &&
                !debug &&
                isRestreamed &&
                supportsAudioOutput && (
                  <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
                    {supports2WayTalk ? (
                      <>
                        <LuCheck className="size-4 text-success" />
                        <div>{t("stream.twoWayTalk.available")}</div>
                      </>
                    ) : (
                      <>
                        <LuX className="size-4 text-danger" />
                        <div>{t("stream.twoWayTalk.unavailable")}</div>
                        <Popover>
                          <PopoverTrigger asChild>
                            <div className="cursor-pointer p-0">
                              <LuInfo className="size-4" />
                              <span className="sr-only">
                                {t("button.info", { ns: "common" })}
                              </span>
                            </div>
                          </PopoverTrigger>
                          <PopoverContent className="w-52 text-xs">
                            {t("stream.twoWayTalk.tips")}
                            <div className="mt-2 flex items-center text-primary">
                              <Link
                                to={getLocaleDocUrl(
                                  "configuration/live/#webrtc-extra-configuration",
                                )}
                                target="_blank"
                                rel="noopener noreferrer"
                                className="inline"
                              >
                                {t("readTheDocumentation", {
                                  ns: "common",
                                })}
                                <LuExternalLink className="ml-2 inline-flex size-3" />
                              </Link>
                            </div>
                          </PopoverContent>
                        </Popover>
                      </>
                    )}
                  </div>
                )}
              {preferredLiveMode == "jsmpeg" && isRestreamed && (
                <div className="mt-2 flex flex-col items-center gap-3">
                  <div className="flex flex-row items-center gap-2">
                    <IoIosWarning className="mr-1 size-8 text-danger" />
                    <p className="text-sm">
                      {t("stream.lowBandwidth.tips")}
                    </p>
                  </div>
                  <Button
                    className={`flex items-center gap-2.5 rounded-lg`}
                    aria-label={t("stream.lowBandwidth.resetStream")}
                    variant="outline"
                    size="sm"
                    disabled={debug}
                    onClick={() => setLowBandwidth(false)}
                  >
                    <MdOutlineRestartAlt className="size-5 text-primary-variant" />
                    <div className="text-primary-variant">
                      {t("stream.lowBandwidth.resetStream")}
                    </div>
                  </Button>
                </div>
              )}
            </div>
          )}
        <div className="flex flex-col gap-1 px-2">
          <div className="mb-1 text-sm font-medium leading-none">
            {t("manualRecording.title")}
          </div>
          <div className="flex flex-row items-stretch gap-2">
            <Button
              onClick={handleSnapshotClick}
              disabled={!cameraEnabled || debug || isSnapshotLoading}
              className="h-auto w-full whitespace-normal"
            >
              {isSnapshotLoading && (
                <ActivityIndicator className="mr-2 size-4" />
              )}
              {t("snapshot.takeSnapshot")}
            </Button>
            <Button
              onClick={handleEventButtonClick}
              className={cn(
                "h-auto w-full whitespace-normal",
                isRecording &&
                  "animate-pulse bg-red-500 hover:bg-red-600",
              )}
              disabled={debug}
            >
              {t("manualRecording." + (isRecording ? "end" : "start"))}
            </Button>
          </div>
          <p className="text-sm text-muted-foreground">
            {t("manualRecording.tips")}
          </p>
        </div>
        {isRestreamed && (
          <>
            <div className="flex flex-col gap-2">
              <FilterSwitch
                label={t("manualRecording.playInBackground.label")}
                isChecked={playInBackground}
                onCheckedChange={(checked) => {
                  setPlayInBackground(checked);
                }}
                disabled={debug}
              />
              <p className="mx-2 -mt-2 text-sm text-muted-foreground">
                {t("manualRecording.playInBackground.desc")}
              </p>
            </div>
            <div className="flex flex-col gap-2">
              <FilterSwitch
                label={t("manualRecording.showStats.label")}
                isChecked={showStats}
                onCheckedChange={(checked) => {
                  setShowStats(checked);
                }}
                disabled={debug}
              />
              <p className="mx-2 -mt-2 text-sm text-muted-foreground">
                {t("manualRecording.showStats.desc")}
              </p>
            </div>
          </>
        )}
        <div className="mb-3 flex flex-col">
          <FilterSwitch
            label={t("streaming.debugView", { ns: "components/dialog" })}
            isChecked={debug}
            onCheckedChange={(checked) => setDebug(checked)}
          />
                </PopoverTrigger>
                <PopoverContent className="w-80 text-xs">
                  {t("streaming.restreaming.desc.title", {
                    ns: "components/dialog",
                  })}
                  <div className="mt-2 flex items-center text-primary">
                    <Link
                      to={getLocaleDocUrl("configuration/live")}
                      target="_blank"
                      rel="noopener noreferrer"
                      className="inline"
                    >
                      {t("readTheDocumentation", { ns: "common" })}
                      <LuExternalLink className="ml-2 inline-flex size-3" />
                    </Link>
                  </div>
                </PopoverContent>
              </Popover>
            </div>
          </div>
        </>
      )}
      {isRestreamed && Object.values(camera.live.streams).length > 0 && (
        <div className="mt-1 p-2">
          <div className="mb-1 text-sm">{t("stream.title")}</div>
          <Select
            value={streamName}
            onValueChange={(value) => {
              setStreamName?.(value);
            }}
            disabled={debug}
          >
            <SelectTrigger className="w-full">
              <SelectValue>
                {Object.keys(camera.live.streams).find(
                  (key) => camera.live.streams[key] === streamName,
                )}
              </SelectValue>
            </SelectTrigger>

            <SelectContent>
              <SelectGroup>
                {Object.entries(camera.live.streams).map(
                  ([stream, name]) => (
                    <SelectItem
                      key={stream}
                      className="cursor-pointer"
                      value={name}
                    >
                      {stream}
                    </SelectItem>
                  ),
                )}
              </SelectGroup>
            </SelectContent>
          </Select>

          {debug && (
            <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
              <>
                <LuX className="size-8 text-danger" />
                <div>{t("stream.debug.picker")}</div>
              </>
            </div>
          )}

          {preferredLiveMode != "jsmpeg" && !debug && isRestreamed && (
            <div className="mt-1 flex flex-row items-center gap-1 text-sm text-muted-foreground">
              {supportsAudioOutput ? (
                <>
                  <LuCheck className="size-4 text-success" />
                  <div>{t("stream.audio.available")}</div>
                </>
              ) : (
                <>
                  <LuX className="size-4 text-danger" />
                  <div>{t("stream.audio.unavailable")}</div>
                  <Popover>
                    <PopoverTrigger asChild>
                      <div className="cursor-pointer p-0">
                        <LuInfo className="size-4" />
                        <span className="sr-only">
                          {t("button.info", { ns: "common" })}
                        </span>
                      </div>
                    </PopoverTrigger>
                    <PopoverContent className="w-52 text-xs">
                      {t("stream.audio.tips.title")}
                      <div className="mt-2 flex items-center text-primary">
                        <Link
                          to={getLocaleDocUrl("configuration/live")}
                          target="_blank"
                          rel="noopener noreferrer"
                          className="inline"
                        >
                          {t("readTheDocumentation", { ns: "common" })}
                          <LuExternalLink className="ml-2 inline-flex size-3" />
                        </Link>
                      </div>
                    </PopoverContent>
                  </Popover>
                </>
              )}
            </div>
          )}
          {preferredLiveMode != "jsmpeg" &&
            !debug &&
            isRestreamed &&
            supportsAudioOutput && (
              <div className="flex flex-row items-center gap-1 text-sm text-muted-foreground">
                {supports2WayTalk ? (
                  <>
                    <LuCheck className="size-4 text-success" />
                    <div>{t("stream.twoWayTalk.available")}</div>
                  </>
                ) : (
                  <>
                    <LuX className="size-4 text-danger" />
                    <div>{t("stream.twoWayTalk.unavailable")}</div>
                    <Popover>
                      <PopoverTrigger asChild>
                        <div className="cursor-pointer p-0">
                          <LuInfo className="size-4" />
                          <span className="sr-only">
                            {t("button.info", { ns: "common" })}
                          </span>
                        </div>
                      </PopoverTrigger>
                      <PopoverContent className="w-52 text-xs">
                        {t("stream.twoWayTalk.tips")}
                        <div className="mt-2 flex items-center text-primary">
                          <Link
                            to={getLocaleDocUrl(
                              "configuration/live/#webrtc-extra-configuration",
                            )}
                            target="_blank"
                            rel="noopener noreferrer"
                            className="inline"
                          >
                            {t("readTheDocumentation", { ns: "common" })}
                            <LuExternalLink className="ml-2 inline-flex size-3" />
                          </Link>
                        </div>
                      </PopoverContent>
                    </Popover>
                  </>
                )}
              </div>
            )}
          {preferredLiveMode == "jsmpeg" && isRestreamed && (
            <div className="mt-2 flex flex-col items-center gap-3">
              <div className="flex flex-row items-center gap-2">
                <IoIosWarning className="mr-1 size-8 text-danger" />
                <p className="text-sm">{t("stream.lowBandwidth.tips")}</p>
              </div>
              <Button
                className={`flex items-center gap-2.5 rounded-lg`}
                aria-label={t("stream.lowBandwidth.resetStream")}
                variant="outline"
                size="sm"
                disabled={debug}
                onClick={() => setLowBandwidth(false)}
              >
                <MdOutlineRestartAlt className="size-5 text-primary-variant" />
                <div className="text-primary-variant">
                  {t("stream.lowBandwidth.resetStream")}
                </div>
              </Button>
            </div>
          )}
        </div>
      )}
      <div className="flex flex-col gap-1 px-2">
        <div className="mb-1 text-sm font-medium leading-none">
          {t("manualRecording.title")}
        </div>
        <div className="flex flex-row items-stretch gap-2">
          <Button
            onClick={handleSnapshotClick}
            disabled={!cameraEnabled || debug || isSnapshotLoading}
            className="h-auto w-full whitespace-normal"
          >
            {isSnapshotLoading && (
              <ActivityIndicator className="mr-2 size-4" />
            )}
            {t("snapshot.takeSnapshot")}
          </Button>
          <Button
            onClick={handleEventButtonClick}
            className={cn(
              "h-auto w-full whitespace-normal",
              isRecording && "animate-pulse bg-red-500 hover:bg-red-600",
            )}
            disabled={debug}
          >
            {t("manualRecording." + (isRecording ? "end" : "start"))}
          </Button>
        </div>
        <p className="text-sm text-muted-foreground">
          {t("manualRecording.tips")}
        </p>
      </div>
      {isRestreamed && (
        <>
          <div className="flex flex-col gap-2">
            <FilterSwitch
              label={t("manualRecording.playInBackground.label")}
              isChecked={playInBackground}
              onCheckedChange={(checked) => {
                setPlayInBackground(checked);
              }}
              disabled={debug}
            />
            <p className="mx-2 -mt-2 text-sm text-muted-foreground">
              {t("manualRecording.playInBackground.desc")}
            </p>
          </div>
          <div className="flex flex-col gap-2">
            <FilterSwitch
              label={t("manualRecording.showStats.label")}
              isChecked={showStats}
              onCheckedChange={(checked) => {
                setShowStats(checked);
              }}
              disabled={debug}
            />
            <p className="mx-2 -mt-2 text-sm text-muted-foreground">
              {t("manualRecording.showStats.desc")}
            </p>
          </div>
        </>
      )}
      <div className="mb-3 flex flex-col">
        <FilterSwitch
          label={t("streaming.debugView", { ns: "components/dialog" })}
          isChecked={debug}
          onCheckedChange={(checked) => setDebug(checked)}
        />
      </div>
    </div>
  </DrawerContent>
</Drawer>
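
Note: despite its size, both sides of this drawer hunk wire every FilterSwitch the same way: the checked state mirrors an "ON"/"OFF" string coming over the websocket, and toggling publishes the inverse value instead of mutating local state, so the echoed message drives the UI. The convention in isolation, as a self-contained sketch (names are illustrative):

```ts
// Sketch of the ON/OFF toggle convention used by each FilterSwitch above.
type OnOff = "ON" | "OFF";

const invert = (s: OnOff): OnOff => (s === "ON" ? "OFF" : "ON");

// Toggling never flips local state directly; it publishes the inverse value
// (like sendDetect/sendRecord above) and lets the websocket echo update the
// checked state on the next render.
function onToggle(current: OnOff, send: (payload: OnOff) => void) {
  send(invert(current));
}
```
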
@@ -19,6 +19,7 @@ import useKeyboardListener, {
import scrollIntoView from "scroll-into-view-if-needed";
import InputWithTags from "@/components/input/InputWithTags";
import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
import { isEqual } from "lodash";
import { formatDateToLocaleString } from "@/utils/dateUtil";
import SearchThumbnailFooter from "@/components/card/SearchThumbnailFooter";
import ExploreSettings from "@/components/settings/SearchSettings";
@@ -212,7 +213,7 @@ export default function SearchView({

  // detail

  const [selectedId, setSelectedId] = useState<string>();
  const [searchDetail, setSearchDetail] = useState<SearchResult>();
  const [page, setPage] = useState<SearchTab>("snapshot");

  // remove duplicate event ids
@@ -228,16 +229,6 @@ export default function SearchView({
    return results;
  }, [searchResults]);

  const searchDetail = useMemo(() => {
    if (!selectedId) return undefined;
    // summary view
    if (defaultView === "summary" && exploreEvents) {
      return exploreEvents.find((r) => r.id === selectedId);
    }
    // grid view
    return uniqueResults.find((r) => r.id === selectedId);
  }, [selectedId, uniqueResults, exploreEvents, defaultView]);

  // search interaction

  const [selectedObjects, setSelectedObjects] = useState<string[]>([]);
@@ -265,7 +256,7 @@ export default function SearchView({
      }
    } else {
      setPage(page);
      setSelectedId(item.id);
      setSearchDetail(item);
    }
  },
  [selectedObjects],
@@ -304,12 +295,26 @@ export default function SearchView({
    }
  };

  // clear selected item when search results clear
  // update search detail when results change

  useEffect(() => {
    if (!searchResults && !exploreEvents) {
      setSelectedId(undefined);
    if (searchDetail) {
      const results =
        defaultView === "summary" ? exploreEvents : searchResults?.flat();
      if (results) {
        const updatedSearchDetail = results.find(
          (result) => result.id === searchDetail.id,
        );

        if (
          updatedSearchDetail &&
          !isEqual(updatedSearchDetail, searchDetail)
        ) {
          setSearchDetail(updatedSearchDetail);
        }
      }
    }
  }, [searchResults, exploreEvents]);
  }, [searchResults, exploreEvents, searchDetail, defaultView]);

  const hasExistingSearch = useMemo(
    () => searchResults != undefined || searchFilter != undefined,
@@ -335,7 +340,7 @@ export default function SearchView({
        ? results.length - 1
        : (currentIndex - 1 + results.length) % results.length;

      setSelectedId(results[newIndex].id);
      setSearchDetail(results[newIndex]);
    }
  }, [uniqueResults, exploreEvents, searchDetail, defaultView]);

@@ -352,7 +357,7 @@ export default function SearchView({
      const newIndex =
        currentIndex === -1 ? 0 : (currentIndex + 1) % results.length;

      setSelectedId(results[newIndex].id);
      setSearchDetail(results[newIndex]);
    }
  }, [uniqueResults, exploreEvents, searchDetail, defaultView]);

@@ -504,7 +509,7 @@ export default function SearchView({
      <SearchDetailDialog
        search={searchDetail}
        page={page}
        setSearch={(item) => setSelectedId(item?.id)}
        setSearch={setSearchDetail}
        setSearchPage={setPage}
        setSimilarity={
          searchDetail && (() => setSimilaritySearch(searchDetail))
@@ -624,7 +629,7 @@ export default function SearchView({
      detail: boolean,
    ) => {
      if (detail && selectedObjects.length == 0) {
        setSelectedId(value.id);
        setSearchDetail(value);
      } else {
        onSelectSearch(
          value,
@@ -719,7 +724,8 @@ export default function SearchView({
        defaultView == "summary" && (
          <div className="scrollbar-container flex size-full flex-col overflow-y-auto">
            <ExploreView
              setSearchDetail={(item) => setSelectedId(item?.id)}
              searchDetail={searchDetail}
              setSearchDetail={setSearchDetail}
              setSimilaritySearch={setSimilaritySearch}
              onSelectSearch={onSelectSearch}
            />
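
Note: the two competing approaches in SearchView are visible side by side in these hunks. One stores the full SearchResult object in state (setSearchDetail) and re-syncs it in an effect whenever results refresh; the other stores only selectedId and derives the detail with useMemo, so a refreshed results array can never leave the dialog holding a stale object. A stripped-down sketch of the derived approach, with hypothetical types:

```ts
import { useMemo, useState } from "react";

type Result = { id: string };

// Store only the id; derive the full object from whichever result set is
// active, so refreshed data is picked up without a sync effect.
function useSelectedDetail<T extends Result>(
  gridResults: T[],
  summaryResults: T[] | undefined,
  summaryView: boolean,
) {
  const [selectedId, setSelectedId] = useState<string>();

  const detail = useMemo(() => {
    if (!selectedId) return undefined;
    const source = summaryView && summaryResults ? summaryResults : gridResults;
    return source.find((r) => r.id === selectedId);
  }, [selectedId, gridResults, summaryResults, summaryView]);

  return { detail, setSelectedId };
}
```
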
@@ -784,7 +784,7 @@ export default function AuthenticationView({
  return (
    <div className="flex size-full flex-col">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none md:mr-3 md:mt-0">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none md:mr-3 md:mt-0">
        {section === "users" && UsersSection}
        {section === "roles" && RolesSection}
        {!section && (

@@ -65,7 +65,7 @@ export default function CameraManagementView({
        closeButton
      />
      <div className="flex size-full flex-col md:flex-row">
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          {viewMode === "settings" ? (
            <>
              <Heading as="h4" className="mb-2">

@@ -298,7 +298,7 @@ export default function CameraReviewSettingsView({
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          <Heading as="h4" className="mb-2">
            {t("cameraReview.title")}
          </Heading>

@@ -244,7 +244,7 @@ export default function EnrichmentsSettingsView({
  return (
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <Heading as="h4" className="mb-2">
          {t("enrichments.title")}
        </Heading>

@@ -211,7 +211,7 @@ export default function FrigatePlusSettingsView({
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          <Heading as="h4" className="mb-2">
            {t("frigatePlus.title")}
          </Heading>

@@ -434,7 +434,7 @@ export default function MasksAndZonesView({
    {cameraConfig && editingPolygons && (
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mr-3 md:mt-0 md:w-3/12">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mr-3 md:mt-0 md:w-3/12">
          {editPane == "zone" && (
            <ZoneEditPane
              polygons={editingPolygons}

@@ -191,7 +191,7 @@ export default function MotionTunerView({
  return (
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mr-3 md:mt-0 md:w-3/12">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mr-3 md:mt-0 md:w-3/12">
        <Heading as="h4" className="mb-2">
          {t("motionDetectionTuner.title")}
        </Heading>

@@ -331,7 +331,7 @@ export default function NotificationView({

  if (!("Notification" in window) || !window.isSecureContext) {
    return (
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
          <div className="col-span-1">
            <Heading as="h4" className="mb-2">
@@ -385,7 +385,7 @@ export default function NotificationView({
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto px-2 md:order-none">
          <div
            className={cn(
              isAdmin && "grid w-full grid-cols-1 gap-4 md:grid-cols-2",

@@ -164,7 +164,7 @@ export default function ObjectSettingsView({
  return (
    <div className="mt-1 flex size-full flex-col pb-2 md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0 md:w-3/12">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0 md:w-3/12">
        <Heading as="h4" className="mb-2">
          {t("debug.title")}
        </Heading>
@@ -434,7 +434,7 @@ function ObjectList({ cameraConfig, objects }: ObjectListProps) {
          {t("debug.objectShapeFilterDrawing.area")}
        </p>
        {obj.area ? (
          <div className="text-end">
          <>
            <div className="text-xs">
              px: {obj.area.toString()}
            </div>
@@ -448,7 +448,7 @@ function ObjectList({ cameraConfig, objects }: ObjectListProps) {
                .toFixed(4)
                .toString()}
            </div>
          </div>
          </>
        ) : (
          "-"
        )}

@@ -440,7 +440,7 @@ export default function TriggerView({
  return (
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none md:mr-3 md:mt-0">
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none md:mr-3 md:mt-0">
        {!isSemanticSearchEnabled ? (
          <div className="mb-5 flex flex-row items-center justify-between gap-2">
            <div className="flex flex-col items-start">

@@ -108,7 +108,7 @@ export default function UiSettingsView() {
    <>
      <div className="flex size-full flex-col md:flex-row">
        <Toaster position="top-center" closeButton={true} />
        <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
        <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
          <Heading as="h4" className="mb-2">
            {t("general.title")}
          </Heading>
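
Note: aside from the ObjectList fragment swap, the remaining hunks are one mechanical change repeated across the settings views (Authentication, CameraManagement, CameraReview, Enrichments, FrigatePlus, MasksAndZones, MotionTuner, Notification, ObjectSettings, Trigger, UiSettings): the scroll container's bottom margin moves between mb-2 and mb-10, which on Tailwind's default spacing scale is 0.5rem versus 2.5rem. No other classes or logic change in these hunks.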