Compare commits

..

10 Commits

Author SHA1 Message Date
dependabot[bot]
894d69181b
Merge 17010e50fc into 2d8b6c8301 2025-11-23 22:43:31 +00:00
icidi
2d8b6c8301
fix typo (#20969)
2025-11-23 08:43:15 -07:00
Blake Blackshear
84c3f98a09
clarify trademark and license interaction (#21019) 2025-11-23 08:42:48 -07:00
Jakub Sojka
c87f89fcc1
Chore/update yq to 4.48.2 (#20967)
* chore: update yq from 4.33.3 to 4.48.2

* fix: update yq v4 syntax for Frigate config upload command

---------

Co-authored-by: Jakub Sojka <jakub.sojka@mallgroup.com>
2025-11-23 08:41:44 -07:00
Josh Hawkins
815303922d
Miscellaneous Fixes (#21005)
* update live view docs

* use swr as single source of truth for searchDetail

Rather than maintaining separate state, derive the selected item from the SWR cache. Fixes WebSocket sync when regenerating descriptions or fetching transcriptions

* fix key warning in console

* don't try to fetch event from review item for audio events

* update audio transcription toast wording

* Add a community-supported badge to specific detectors in the info summaries to better separate them from officially supported detectors

* Make object classification publish to tracked object update and add examples for state classification

* Add item to advanced docs about tensorflow limiting

* Don't show submission for in progress objects

* fix for ios not reporting video dimensions on initial metadata load

in testing, polling with requestAnimationFrame finds the dimensions within 2 frames

* Catch jetson nvidia device tree

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-23 08:40:25 -07:00
Nicolas Mowen
224cbdc2d6
Miscellaneous Fixes (#20989)
* Include DB in safe mode config

Copy DB when going into safe mode to avoid creating a new one if a user has configured a separate location

* Fix documentation for example log module

* Set minimum duration for recording segments

Due to the inpoint logic, some recordings would get clipped at the end of a segment with a non-zero duration that was still too short to contain a frame. 100 ms is a safe minimum for any video at 10 fps or higher, since each frame spans at most 100 ms at that rate

* Add docs to explain object assignment for classification

* Add warning for Intel GPU stats bug

Add warning with explanation on GPU stats page when all Intel GPU values are 0

* Update docs with creation instructions

* reset loading state when moving through events in tracking details

* disable pip on preview players

* Improve HLS handling for startPosition

The startPosition was incorrectly calculated assuming continuous recordings, when it needs to account for the fact that only some segments exist. This extracts that logic into a utility so all players can use it.

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-21 15:40:58 -06:00
Abinila Siva
3f9b153758
[MemryX] Update YOLOv9 post-processing (#20980)
* Update optimized YOLOv9 post-processing

* remove unused import
2025-11-21 14:24:17 -07:00
Nicolas Mowen
8e8346099e
Miscellaneous Fixes (#20973)
2025-11-20 17:50:17 -06:00
Nicolas Mowen
b0527df3c7
HLS adjustments (#20983)
* Revert "Fix HLS jumping to end of timeChunk (#20982)"

This reverts commit 301e0a1a3a.

* Never use native HLS

* Fix inverse operation
2025-11-20 15:58:58 -07:00
Nicolas Mowen
301e0a1a3a
Fix HLS jumping to end of timeChunk (#20982)
* Fix HLS jumping to end

* Undo
2025-11-20 15:50:00 -06:00
66 changed files with 1609 additions and 726 deletions

View File

@ -1,6 +1,6 @@
The MIT License
Copyright (c) 2020 Blake Blackshear
Copyright (c) 2025 Frigate LLC (Frigate™)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@ -1,8 +1,10 @@
<p align="center">
<img align="center" alt="logo" src="docs/static/img/frigate.png">
<img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
</p>
# Frigate - NVR With Realtime Object Detection for IP Cameras
# Frigate NVR™ - Realtime Object Detection for IP Cameras
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
<a href="https://hosted.weblate.org/engage/frigate-nvr/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/language-badge.svg" alt="Translation status" />
@ -33,6 +35,15 @@ View the documentation at https://docs.frigate.video
If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).
## License
This project is licensed under the **MIT License**.
- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate LLC** and are **not** covered by the MIT License.
Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
## Screenshots
### Live dashboard
@ -66,3 +77,7 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la
<a href="https://hosted.weblate.org/engage/frigate-nvr/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/multi-auto.svg" alt="Translation status" />
</a>
---
**Copyright © 2025 Frigate LLC.**

TRADEMARK.md (new file, 58 lines added)
View File

@ -0,0 +1,58 @@
# Trademark Policy
**Last Updated:** November 2025
This document outlines the policy regarding the use of the trademarks associated with the Frigate NVR project.
## 1. Our Trademarks
The following terms and visual assets are trademarks (the "Marks") of **Frigate LLC**:
- **Frigate™**
- **Frigate NVR™**
- **Frigate+™**
- **The Frigate Logo**
**Note on Common Law Rights:**
Frigate LLC asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
## 2. Interaction with the MIT License
The software in this repository is licensed under the [MIT License](LICENSE).
**Crucial Distinction:**
- The **Code** is free to use, modify, and distribute under the MIT terms.
- The **Brand (Trademarks)** is **NOT** licensed under MIT.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate LLC.
## 3. Acceptable Use
You may use the Marks without prior written permission in the following specific contexts:
- **Referential Use:** To truthfully refer to the software (e.g., _"I use Frigate NVR for my home security"_).
- **Compatibility:** To indicate that your product or project works with the software (e.g., _"MyPlugin for Frigate NVR"_ or _"Compatible with Frigate"_).
- **Commentary:** In news articles, blog posts, or tutorials discussing the software.
## 4. Prohibited Use
You may **NOT** use the Marks in the following ways:
- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate LLC.
- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).
## 5. The Logo
The Frigate logo (the bird icon) is a visual trademark.
- You generally **cannot** use the logo on your own website or product packaging without permission.
- If you are building a dashboard or integration that interfaces with Frigate, you may use the logo only to represent the Frigate node/service, provided it does not imply you _are_ Frigate.
## 6. Questions & Permissions
If you are unsure if your intended use violates this policy, or if you wish to request a specific license to use the Marks (e.g., for a partnership), please contact us at:
**help@frigate.video**

View File

@ -145,6 +145,6 @@ rm -rf /var/lib/apt/lists/*
# Install yq, for frigate-prepare and go2rtc echo source
curl -fsSL \
"https://github.com/mikefarah/yq/releases/download/v4.33.3/yq_linux_$(dpkg --print-architecture)" \
"https://github.com/mikefarah/yq/releases/download/v4.48.2/yq_linux_$(dpkg --print-architecture)" \
--output /usr/local/bin/yq
chmod +x /usr/local/bin/yq

View File

@ -320,6 +320,12 @@ http {
add_header Cache-Control "public";
}
location /fonts/ {
access_log off;
expires 1y;
add_header Cache-Control "public";
}
location /locales/ {
access_log off;
add_header Cache-Control "public";

View File

@ -25,7 +25,7 @@ Examples of available modules are:
- `frigate.app`
- `frigate.mqtt`
- `frigate.object_detection`
- `frigate.object_detection.base`
- `detector.<detector_name>`
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
@ -53,6 +53,17 @@ environment_vars:
VARIABLE_NAME: variable_value
```
#### TensorFlow Thread Configuration
If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:
```yaml
environment_vars:
TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
TF_INTER_OP_PARALLELISM_THREADS: "2" # Threads between operations (0 = use default)
TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
```
### `database`
Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
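If you have configured the database to live somewhere other than the default, that location is set under the `database` section of the config; a minimal example with the default path shown (adjust to your setup):
```yaml
database:
  # Path to the sqlite database (default shown; change if you store it elsewhere)
  path: /config/frigate.db
```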
@ -247,7 +258,7 @@ curl -X POST http://frigate_host:5000/api/config/save -d @config.json
If you'd like, you can use your YAML config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to JSON:
```bash
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
yq -o=json '.' config.yaml | curl -X POST 'http://frigate_host:5000/api/config/save?save_option=saveonly' --data-binary @-
```
### Via Command Line

View File

@ -35,6 +35,15 @@ For object classification:
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Assignment Requirements
Sub labels and attributes are only assigned when both conditions are met:
1. **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured `threshold` (default: `0.8`).
2. **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.
This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
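To make the two-step rule concrete, here is a minimal illustrative sketch of the consensus check described above (one possible reading, not Frigate's actual implementation; the function name and attempt-history format are assumed):
```python
from collections import Counter

def consensus_label(attempts: list[tuple[str, float]], threshold: float = 0.8) -> str | None:
    """Return a class label only when the assignment rules above are satisfied.

    attempts: (class_name, confidence) pairs from individual classification runs.
    """
    # Rule 1: only count attempts whose confidence meets the configured threshold.
    confident = [label for label, score in attempts if score >= threshold]

    # Rule 2: require at least 3 attempts before evaluating consensus.
    if len(confident) < 3:
        return None

    # 60% of attempts must agree on the same class, and `none` never assigns.
    label, count = Counter(confident).most_common(1)[0]
    if count / len(confident) < 0.6 or label == "none":
        return None

    return label
```
With this reading, three confident `delivery_person` attempts yield an assignment, while three attempts split across different classes do not.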
## Example use cases
### Sub label
@ -66,14 +75,18 @@ classification:
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Getting Started
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
### Step 2: Assign Training Examples
The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.

View File

@ -48,13 +48,23 @@ classification:
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
### Getting Started
### Step 1: Name and Define
When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
Enter a name for your model and define at least 2 classes (states) that represent mutually exclusive states. For example, `open` and `closed` for a door, or `on` and `off` for lights.
// TODO add this section once UI is implemented. Explain process of selecting a crop.
### Step 2: Select the Crop Area
Choose one or more cameras and draw a rectangle over the area of interest for each camera. The crop should be tight around the region you want to classify to avoid extra signals unrelated to what is being classified. You can drag and resize the rectangle to adjust the crop area.
### Step 3: Assign Training Examples
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
Once all images are assigned, training will begin automatically.
### Improving the Model

View File

@ -70,7 +70,7 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
genai:
provider: ollama
base_url: http://localhost:11434
model: llava:7b
model: qwen3-vl:4b
```
## Google Gemini

View File

@ -35,19 +35,18 @@ Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger s
:::tip
If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
If you are trying to use a single model for Frigate and Home Assistant, it will need to support both vision and tool calling. `qwen3-vl` supports vision and tools simultaneously in Ollama.
:::
The following models are recommended:
| Model | Notes |
| ----------------- | ----------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
| `llava-phi3` | Lightweight and fast model with vision comprehension |
| Model | Notes |
| ----------------- | -------------------------------------------------------------------- |
| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
:::note

View File

@ -214,6 +214,42 @@ For restreamed cameras, go2rtc remains active but does not use system resources
Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.
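For example, using the `frigate/<camera_name>/enabled/set` topic documented in the MQTT reference, you can toggle a camera without touching the config; the broker host and camera name here are placeholders:
```bash
# Temporarily stop processing the "front_door" camera (placeholder broker host and camera name)
mosquitto_pub -h mqtt.local -t 'frigate/front_door/enabled/set' -m 'OFF'

# Turn it back on later
mosquitto_pub -h mqtt.local -t 'frigate/front_door/enabled/set' -m 'ON'
```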
### Live player error messages
When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. They indicate playback, codec, or network issues on the client/browser side, not a server-side problem with Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
- **startup**
- What it means: The player failed to initialize or connect to the live stream (network or startup error).
- What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
- Possible console messages from the player code:
- `Error opening MediaSource.`
- `Browser reported a network error.`
- `Max error count ${errorCount} exceeded.` (the numeric value will vary)
- **mse-decode**
- What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames.
- What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC (see the example config after this list). Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
- Possible console messages from the player code:
- `Safari cannot open MediaSource.`
- `Safari reported InvalidStateError.`
- `Safari reported decoding errors.`
- **stalled**
- What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
- What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
- Possible console messages from the player code:
- `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
- `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
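For the `mse-decode` case above, a minimal go2rtc sketch that copies the video and transcodes only the audio track to AAC (camera name, credentials, and URL are placeholders):
```yaml
go2rtc:
  streams:
    front_door:
      # Original camera stream (placeholder URL and credentials)
      - rtsp://user:password@192.168.1.10:554/stream1
      # Reuse the stream above, copying video and transcoding audio to AAC
      - "ffmpeg:front_door#video=copy#audio=aac"
```
With go2rtc handling the transcode, the browser receives AAC audio it can decode over MSE.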
## Live view FAQ
1. **Why don't I have audio in my Live view?**
@ -277,3 +313,38 @@ Note that disabling a camera through the config file (`enabled: False`) removes
7. **My camera streams have lots of visual artifacts / distortion.**
Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.
8. **Why does my camera stream switch aspect ratios on the Live dashboard?**
Your camera may change aspect ratios on the dashboard because Frigate uses different streams for different purposes. With go2rtc and Smart Streaming, Frigate shows a static image from the `detect` stream when no activity is present, and switches to the live stream when motion is detected. The camera image will change size if your streams use different aspect ratios.
To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.
Example: Resolutions from two streams
- Mismatched (may cause aspect ratio switching on the dashboard):
- Live/go2rtc stream: 1920x1080 (16:9)
- Detect stream: 640x352 (~1.82:1, not 16:9)
- Matched (prevents switching):
- Live/go2rtc stream: 1920x1080 (16:9)
- Detect stream: 640x360 (16:9)
You can update the detect settings in your camera config to match the aspect ratio of your go2rtc live stream. For example:
```yaml
cameras:
front_door:
detect:
width: 640
height: 360 # set this to 360 instead of 352
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/front_door # main stream 1920x1080
roles:
- record
- path: rtsp://127.0.0.1:8554/front_door_sub # sub stream 640x352
roles:
- detect
```

View File

@ -3,6 +3,8 @@ id: object_detectors
title: Object Detectors
---
import CommunityBadge from '@site/src/components/CommunityBadge';
# Supported Hardware
:::info
@ -13,8 +15,8 @@ Frigate supports multiple different detectors that work on different types of ha
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
**AMD**
@ -34,16 +36,16 @@ Frigate supports multiple different detectors that work on different types of ha
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
**Nvidia Jetson**
**Nvidia Jetson** <CommunityBadge />
- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.
**Rockchip**
**Rockchip** <CommunityBadge />
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**Synaptics**
**Synaptics** <CommunityBadge />
- [Synaptics](#synaptics): Synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
@ -962,7 +964,6 @@ model:
# path: /config/yolov9.zip
# The .zip file must contain:
# ├── yolov9.dfp (a file ending with .dfp)
# └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOX

View File

@ -246,7 +246,7 @@ birdseye:
# Optional: ffmpeg configuration
# More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
ffmpeg:
# Optional: ffmpeg binry path (default: shown below)
# Optional: ffmpeg binary path (default: shown below)
# can also be set to `7.0` or `5.0` to specify one of the included versions
# or can be set to any path that holds `bin/ffmpeg` & `bin/ffprobe`
path: "default"

View File

@ -3,6 +3,8 @@ id: hardware
title: Recommended hardware
---
import CommunityBadge from '@site/src/components/CommunityBadge';
## Cameras
Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.
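To verify what codecs your camera actually outputs, you can probe the stream; a quick check with `ffprobe` (the RTSP URL and credentials are placeholders):
```bash
# Print the codec of each video/audio stream from the camera (placeholder URL)
ffprobe -v error -show_entries stream=codec_type,codec_name \
  -of default=noprint_wrappers=1 "rtsp://user:password@192.168.1.10:554/stream1"
```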
@ -59,7 +61,7 @@ Frigate supports multiple different detectors that work on different types of ha
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
- Runs best with tiny, small, or medium-size models
@ -84,32 +86,26 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large
**Rockchip**
- <CommunityBadge /> [Jetson](#nvidia-jetson): Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6.
**Rockchip** <CommunityBadge />
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
- [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
**Synaptics**
**Synaptics** <CommunityBadge />
- [Synaptics](#synaptics): Synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.
:::
### Synaptics
- **Synaptics** Default model is **mobilenet**
| Name | Synaptics SL1680 Inference Time |
| ---------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@ -261,7 +257,7 @@ Inference speeds may vary depending on the host platform. The above data was mea
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. Frigate will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
@ -282,6 +278,15 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.
### Synaptics
- **Synaptics** Default model is **mobilenet**
| Name | Synaptics SL1680 Inference Time |
| ------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

View File

@ -159,11 +159,44 @@ Message published for updates to tracked object metadata, for example:
}
```
#### Object Classification Update
Message published when [object classification](/configuration/custom_classification/object_classification) reaches consensus on a classification result.
**Sub label type:**
```json
{
"type": "classification",
"id": "1607123955.475377-mxklsc",
"camera": "front_door_cam",
"timestamp": 1607123958.748393,
"model": "person_classifier",
"sub_label": "delivery_person",
"score": 0.87
}
```
**Attribute type:**
```json
{
"type": "classification",
"id": "1607123955.475377-mxklsc",
"camera": "front_door_cam",
"timestamp": 1607123958.748393,
"model": "helmet_detector",
"attribute": "yes",
"score": 0.92
}
```
### `frigate/reviews`
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
An `update` with the same ID will be published when:
- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
@ -308,6 +341,11 @@ Publishes transcribed text for audio detected on this camera.
**NOTE:** Requires audio detection and transcription to be enabled
### `frigate/<camera_name>/classification/<model_name>`
Publishes the current state detected by a state classification model for the camera. The topic name includes the model name as configured in your classification settings.
The published value is the detected state class name (e.g., `open`, `closed`, `on`, `off`). The state is only published when it changes, helping to reduce unnecessary MQTT traffic.
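For example, to watch state changes from a hypothetical model named `garage_door` on camera `garage` (broker host, camera, and model names are placeholders):
```bash
mosquitto_sub -h mqtt.local -t 'frigate/garage/classification/garage_door' -v
```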
### `frigate/<camera_name>/enabled/set`
Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.

View File

@ -10,7 +10,7 @@ const config: Config = {
baseUrl: "/",
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "warn",
favicon: "img/favicon.ico",
favicon: "img/branding/favicon.ico",
organizationName: "blakeblackshear",
projectName: "frigate",
themes: [
@ -116,8 +116,8 @@ const config: Config = {
title: "Frigate",
logo: {
alt: "Frigate",
src: "img/logo.svg",
srcDark: "img/logo-dark.svg",
src: "img/branding/logo.svg",
srcDark: "img/branding/logo-dark.svg",
},
items: [
{
@ -170,7 +170,7 @@ const config: Config = {
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} Blake Blackshear`,
copyright: `Copyright © ${new Date().getFullYear()} Frigate LLC`,
},
},
plugins: [

View File

@ -0,0 +1,23 @@
import React from "react";
export default function CommunityBadge() {
return (
<span
title="This detector is maintained by community members who provide code, maintenance, and support. See the contributing boards documentation for more information."
style={{
display: "inline-block",
backgroundColor: "#f1f3f5",
color: "#24292f",
fontSize: "11px",
fontWeight: 600,
padding: "2px 6px",
borderRadius: "3px",
border: "1px solid #d1d9e0",
marginLeft: "4px",
cursor: "help",
}}
>
Community Supported
</span>
);
}

docs/static/img/branding/LICENSE.md (new file, 30 lines added, vendored)
View File

@ -0,0 +1,30 @@
# COPYRIGHT AND TRADEMARK NOTICE
The images, logos, and icons contained in this directory (the "Brand Assets") are
proprietary to Frigate LLC and are NOT covered by the MIT License governing the
rest of this repository.
1. TRADEMARK STATUS
The "Frigate" name and the accompanying logo are common law trademarks™ of
Frigate LLC. Frigate LLC reserves all rights to these marks.
2. LIMITED PERMISSION FOR USE
Permission is hereby granted to display these Brand Assets strictly for the
following purposes:
a. To execute the software interface on a local machine.
b. To identify the software in documentation or reviews (nominative use).
3. RESTRICTIONS
You may NOT:
a. Use these Brand Assets to represent a derivative work (fork) as an official
product of Frigate LLC.
b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
commercial affiliation with Frigate LLC.
c. Modify or alter the Brand Assets.
If you fork this repository with the intent to distribute a modified or competing
version of the software, you must replace these Brand Assets with your own
original content.
ALL RIGHTS RESERVED.
Copyright (c) 2025 Frigate LLC.

View File

(Four binary image files; before/after sizes unchanged: 15 KiB, 12 KiB, 936 B, 933 B. Likely renamed or moved.)

View File

@ -849,6 +849,7 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
clips = []
durations = []
min_duration_ms = 100 # Minimum 100ms to ensure at least one video frame
max_duration_ms = MAX_SEGMENT_DURATION * 1000
recording: Recordings
@ -866,11 +867,11 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
if recording.end_time > end_ts:
duration -= int((recording.end_time - end_ts) * 1000)
if duration <= 0:
# skip if the clip has no valid duration
if duration < min_duration_ms:
# skip if the clip has no valid duration (too short to contain frames)
continue
if 0 < duration < max_duration_ms:
if min_duration_ms <= duration < max_duration_ms:
clip["keyFrameDurations"] = [duration]
clips.append(clip)
durations.append(duration)

View File

@ -792,6 +792,10 @@ class FrigateConfig(FrigateBaseModel):
# copy over auth and proxy config in case auth needs to be enforced
safe_config["auth"] = config.get("auth", {})
safe_config["proxy"] = config.get("proxy", {})
# copy over database config for auth and so a new db is not created
safe_config["database"] = config.get("database", {})
return cls.parse_object(safe_config, **context)
# Validate and return the config dict.

View File

@ -1,6 +1,7 @@
"""Real time processor that works with classification tflite models."""
import datetime
import json
import logging
import os
from typing import Any
@ -21,6 +22,7 @@ from frigate.config.classification import (
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@ -284,6 +286,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
config: FrigateConfig,
model_config: CustomClassificationConfig,
sub_label_publisher: EventMetadataPublisher,
requestor: InterProcessRequestor,
metrics: DataProcessorMetrics,
):
super().__init__(config, metrics)
@ -292,6 +295,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter | None = None
self.sub_label_publisher = sub_label_publisher
self.requestor = requestor
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.classification_history: dict[str, list[tuple[str, float, float]]] = {}
@ -486,6 +490,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
)
if consensus_label is not None:
camera = obj_data["camera"]
if (
self.model_config.object_config.classification_type
== ObjectClassificationType.sub_label
@ -494,6 +500,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
(object_id, consensus_label, consensus_score),
EventMetadataTypeEnum.sub_label,
)
self.requestor.send_data(
"tracked_object_update",
json.dumps(
{
"type": TrackedObjectUpdateTypesEnum.classification,
"id": object_id,
"camera": camera,
"timestamp": now,
"model": self.model_config.name,
"sub_label": consensus_label,
"score": consensus_score,
}
),
)
elif (
self.model_config.object_config.classification_type
== ObjectClassificationType.attribute
@ -507,6 +527,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
),
EventMetadataTypeEnum.attribute.value,
)
self.requestor.send_data(
"tracked_object_update",
json.dumps(
{
"type": TrackedObjectUpdateTypesEnum.classification,
"id": object_id,
"camera": camera,
"timestamp": now,
"model": self.model_config.name,
"attribute": consensus_label,
"score": consensus_score,
}
),
)
def handle_request(self, topic, request_data):
if topic == EmbeddingsRequestEnum.reload_classification_model.value:

View File

@ -18,7 +18,6 @@ from frigate.detectors.detector_config import (
ModelTypeEnum,
)
from frigate.util.file import FileLock
from frigate.util.model import post_process_yolo
logger = logging.getLogger(__name__)
@ -178,13 +177,6 @@ class MemryXDetector(DetectionApi):
logger.error(f"Failed to initialize MemryX model: {e}")
raise
def load_yolo_constants(self):
base = f"{self.cache_dir}/{self.model_folder}"
# constants for yolov9 post-processing
self.const_A = np.load(f"{base}/_model_22_Constant_9_output_0.npy")
self.const_B = np.load(f"{base}/_model_22_Constant_10_output_0.npy")
self.const_C = np.load(f"{base}/_model_22_Constant_12_output_0.npy")
def check_and_prepare_model(self):
if not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir, exist_ok=True)
@ -236,7 +228,6 @@ class MemryXDetector(DetectionApi):
# Handle post model requirements by model type
if self.memx_model_type in [
ModelTypeEnum.yologeneric,
ModelTypeEnum.yolonas,
ModelTypeEnum.ssd,
]:
@ -245,7 +236,10 @@ class MemryXDetector(DetectionApi):
f"No *_post.onnx file found in custom model zip for {self.memx_model_type.name}."
)
self.memx_post_model = post_candidates[0]
elif self.memx_model_type == ModelTypeEnum.yolox:
elif self.memx_model_type in [
ModelTypeEnum.yolox,
ModelTypeEnum.yologeneric,
]:
# Explicitly ignore any post model even if present
self.memx_post_model = None
else:
@ -273,8 +267,6 @@ class MemryXDetector(DetectionApi):
logger.info("Using cached models.")
self.memx_model_path = dfp_path
self.memx_post_model = post_path
if self.memx_model_type == ModelTypeEnum.yologeneric:
self.load_yolo_constants()
return
# ---------- CASE 3: download MemryX model (no cache) ----------
@ -303,9 +295,6 @@ class MemryXDetector(DetectionApi):
else None
)
if self.memx_model_type == ModelTypeEnum.yologeneric:
self.load_yolo_constants()
finally:
if os.path.exists(zip_path):
try:
@ -600,127 +589,232 @@ class MemryXDetector(DetectionApi):
self.output_queue.put(final_detections)
def onnx_reshape_with_allowzero(
self, data: np.ndarray, shape: np.ndarray, allowzero: int = 0
def _generate_anchors(self, sizes=[80, 40, 20]):
"""Generate anchor points for YOLOv9 style processing"""
yscales = []
xscales = []
for s in sizes:
r = np.arange(s) + 0.5
yscales.append(np.repeat(r, s))
xscales.append(np.repeat(r[None, ...], s, axis=0).flatten())
yscales = np.concatenate(yscales)
xscales = np.concatenate(xscales)
anchors = np.stack([xscales, yscales], axis=1)
return anchors
def _generate_scales(self, sizes=[80, 40, 20]):
"""Generate scaling factors for each detection level"""
factors = [8, 16, 32]
s = np.concatenate([np.ones([int(s * s)]) * f for s, f in zip(sizes, factors)])
return s[:, None]
@staticmethod
def _softmax(x: np.ndarray, axis: int) -> np.ndarray:
"""Efficient softmax implementation"""
x = x - np.max(x, axis=axis, keepdims=True)
np.exp(x, out=x)
x /= np.sum(x, axis=axis, keepdims=True)
return x
def dfl(self, x: np.ndarray) -> np.ndarray:
"""Distribution Focal Loss decoding - YOLOv9 style"""
x = x.reshape(-1, 4, 16)
weights = np.arange(16, dtype=np.float32)
p = self._softmax(x, axis=2)
p = p * weights[None, None, :]
out = np.sum(p, axis=2, keepdims=False)
return out
def dist2bbox(
self, x: np.ndarray, anchors: np.ndarray, scales: np.ndarray
) -> np.ndarray:
shape = shape.astype(int)
input_shape = data.shape
output_shape = []
"""Convert distances to bounding boxes - YOLOv9 style"""
lt = x[:, :2]
rb = x[:, 2:]
for i, dim in enumerate(shape):
if dim == 0 and allowzero == 0:
output_shape.append(input_shape[i]) # Copy dimension from input
else:
output_shape.append(dim)
x1y1 = anchors - lt
x2y2 = anchors + rb
# Now let NumPy infer any -1 if needed
reshaped = np.reshape(data, output_shape)
wh = x2y2 - x1y1
c_xy = (x1y1 + x2y2) / 2
return reshaped
out = np.concatenate([c_xy, wh], axis=1)
out = out * scales
return out
def post_process_yolo_optimized(self, outputs):
"""
Custom YOLOv9 post-processing optimized for MemryX ONNX outputs.
Implements DFL decoding, confidence filtering, and NMS in pure NumPy.
"""
# YOLOv9 outputs: 6 outputs (lbox, lcls, mbox, mcls, sbox, scls)
conv_out1, conv_out2, conv_out3, conv_out4, conv_out5, conv_out6 = outputs
# Determine grid sizes based on input resolution
# YOLOv9 uses 3 detection heads with strides [8, 16, 32]
# Grid sizes = input_size / stride
sizes = [
self.memx_model_height
// 8, # Large objects (e.g., 80 for 640x640, 40 for 320x320)
self.memx_model_height
// 16, # Medium objects (e.g., 40 for 640x640, 20 for 320x320)
self.memx_model_height
// 32, # Small objects (e.g., 20 for 640x640, 10 for 320x320)
]
# Generate anchors and scales if not already done
if not hasattr(self, "anchors"):
self.anchors = self._generate_anchors(sizes)
self.scales = self._generate_scales(sizes)
# Process outputs in YOLOv9 format: reshape and moveaxis for ONNX format
lbox = np.moveaxis(conv_out1, 1, -1) # Large boxes
lcls = np.moveaxis(conv_out2, 1, -1) # Large classes
mbox = np.moveaxis(conv_out3, 1, -1) # Medium boxes
mcls = np.moveaxis(conv_out4, 1, -1) # Medium classes
sbox = np.moveaxis(conv_out5, 1, -1) # Small boxes
scls = np.moveaxis(conv_out6, 1, -1) # Small classes
# Determine number of classes dynamically from the class output shape
# lcls shape should be (batch, height, width, num_classes)
num_classes = lcls.shape[-1]
# Validate that all class outputs have the same number of classes
if not (mcls.shape[-1] == num_classes and scls.shape[-1] == num_classes):
raise ValueError(
f"Class output shapes mismatch: lcls={lcls.shape}, mcls={mcls.shape}, scls={scls.shape}"
)
# Concatenate boxes and classes
boxes = np.concatenate(
[
lbox.reshape(-1, 64), # 64 is for 4 bbox coords * 16 DFL bins
mbox.reshape(-1, 64),
sbox.reshape(-1, 64),
],
axis=0,
)
classes = np.concatenate(
[
lcls.reshape(-1, num_classes),
mcls.reshape(-1, num_classes),
scls.reshape(-1, num_classes),
],
axis=0,
)
# Apply sigmoid to classes
classes = self.sigmoid(classes)
# Apply DFL to box predictions
boxes = self.dfl(boxes)
# YOLOv9 postprocessing with confidence filtering and NMS
confidence_thres = 0.4
iou_thres = 0.6
# Find the class with the highest score for each detection
max_scores = np.max(classes, axis=1) # Maximum class score for each detection
class_ids = np.argmax(classes, axis=1) # Index of the best class
# Filter out detections with scores below the confidence threshold
valid_indices = np.where(max_scores >= confidence_thres)[0]
if len(valid_indices) == 0:
# Return empty detections array
final_detections = np.zeros((20, 6), np.float32)
return final_detections
# Select only valid detections
valid_boxes = boxes[valid_indices]
valid_class_ids = class_ids[valid_indices]
valid_scores = max_scores[valid_indices]
# Convert distances to actual bounding boxes using anchors and scales
valid_boxes = self.dist2bbox(
valid_boxes, self.anchors[valid_indices], self.scales[valid_indices]
)
# Convert bounding box coordinates from (x_center, y_center, w, h) to (x_min, y_min, x_max, y_max)
x_center, y_center, width, height = (
valid_boxes[:, 0],
valid_boxes[:, 1],
valid_boxes[:, 2],
valid_boxes[:, 3],
)
x_min = x_center - width / 2
y_min = y_center - height / 2
x_max = x_center + width / 2
y_max = y_center + height / 2
# Convert to format expected by cv2.dnn.NMSBoxes: [x, y, width, height]
boxes_for_nms = []
scores_for_nms = []
for i in range(len(valid_indices)):
# Ensure coordinates are within bounds and positive
x_min_clipped = max(0, x_min[i])
y_min_clipped = max(0, y_min[i])
x_max_clipped = min(self.memx_model_width, x_max[i])
y_max_clipped = min(self.memx_model_height, y_max[i])
width_clipped = x_max_clipped - x_min_clipped
height_clipped = y_max_clipped - y_min_clipped
if width_clipped > 0 and height_clipped > 0:
boxes_for_nms.append(
[x_min_clipped, y_min_clipped, width_clipped, height_clipped]
)
scores_for_nms.append(float(valid_scores[i]))
final_detections = np.zeros((20, 6), np.float32)
if len(boxes_for_nms) == 0:
return final_detections
# Apply NMS using OpenCV
indices = cv2.dnn.NMSBoxes(
boxes_for_nms, scores_for_nms, confidence_thres, iou_thres
)
if len(indices) > 0:
# Flatten indices if they are returned as a list of arrays
if isinstance(indices[0], list) or isinstance(indices[0], np.ndarray):
indices = [i[0] for i in indices]
# Limit to top 20 detections
indices = indices[:20]
# Convert to Frigate format: [class_id, confidence, y_min, x_min, y_max, x_max] (normalized)
for i, idx in enumerate(indices):
class_id = valid_class_ids[idx]
confidence = valid_scores[idx]
# Get the box coordinates
box = boxes_for_nms[idx]
x_min_norm = box[0] / self.memx_model_width
y_min_norm = box[1] / self.memx_model_height
x_max_norm = (box[0] + box[2]) / self.memx_model_width
y_max_norm = (box[1] + box[3]) / self.memx_model_height
final_detections[i] = [
class_id,
confidence,
y_min_norm, # Frigate expects y_min first
x_min_norm,
y_max_norm,
x_max_norm,
]
return final_detections
def process_output(self, *outputs):
"""Output callback function -- receives frames from the MX3 and triggers post-processing"""
if self.memx_model_type == ModelTypeEnum.yologeneric:
if not self.memx_post_model:
conv_out1 = outputs[0]
conv_out2 = outputs[1]
conv_out3 = outputs[2]
conv_out4 = outputs[3]
conv_out5 = outputs[4]
conv_out6 = outputs[5]
# Use complete YOLOv9-style postprocessing (includes NMS)
final_detections = self.post_process_yolo_optimized(outputs)
concat_1 = self.onnx_concat([conv_out1, conv_out2], axis=1)
concat_2 = self.onnx_concat([conv_out3, conv_out4], axis=1)
concat_3 = self.onnx_concat([conv_out5, conv_out6], axis=1)
shape = np.array([1, 144, -1], dtype=np.int64)
reshaped_1 = self.onnx_reshape_with_allowzero(
concat_1, shape, allowzero=0
)
reshaped_2 = self.onnx_reshape_with_allowzero(
concat_2, shape, allowzero=0
)
reshaped_3 = self.onnx_reshape_with_allowzero(
concat_3, shape, allowzero=0
)
concat_4 = self.onnx_concat([reshaped_1, reshaped_2, reshaped_3], 2)
axis = 1
split_sizes = [64, 80]
# Calculate indices at which to split
indices = np.cumsum(split_sizes)[
:-1
] # [64] — split before the second chunk
# Perform split along axis 1
split_0, split_1 = np.split(concat_4, indices, axis=axis)
num_boxes = 2100 if self.memx_model_height == 320 else 8400
shape1 = np.array([1, 4, 16, num_boxes])
reshape_4 = self.onnx_reshape_with_allowzero(
split_0, shape1, allowzero=0
)
transpose_1 = reshape_4.transpose(0, 2, 1, 3)
axis = 1 # As per ONNX softmax node
# Subtract max for numerical stability
x_max = np.max(transpose_1, axis=axis, keepdims=True)
x_exp = np.exp(transpose_1 - x_max)
x_sum = np.sum(x_exp, axis=axis, keepdims=True)
softmax_output = x_exp / x_sum
# Weight W from the ONNX initializer (1, 16, 1, 1) with values 0 to 15
W = np.arange(16, dtype=np.float32).reshape(
1, 16, 1, 1
) # (1, 16, 1, 1)
# Apply 1x1 convolution: this is a weighted sum over channels
conv_output = np.sum(
softmax_output * W, axis=1, keepdims=True
) # shape: (1, 1, 4, 8400)
shape2 = np.array([1, 4, num_boxes])
reshape_5 = self.onnx_reshape_with_allowzero(
conv_output, shape2, allowzero=0
)
# ONNX Slice — get first 2 channels: [0:2] along axis 1
slice_output1 = reshape_5[:, 0:2, :] # Result: (1, 2, 8400)
# Slice channels 2 to 4 → axis = 1
slice_output2 = reshape_5[:, 2:4, :]
# Perform Subtraction
sub_output = self.const_A - slice_output1 # Equivalent to ONNX Sub
# Perform the ONNX-style Add
add_output = self.const_B + slice_output2
sub1 = add_output - sub_output
add1 = sub_output + add_output
div_output = add1 / 2.0
concat_5 = self.onnx_concat([div_output, sub1], axis=1)
# Expand B to (1, 1, 8400) so it can broadcast across axis=1 (4 channels)
const_C_expanded = self.const_C[:, np.newaxis, :] # Shape: (1, 1, 8400)
# Perform ONNX-style element-wise multiplication
mul_output = concat_5 * const_C_expanded # Result: (1, 4, 8400)
sigmoid_output = self.sigmoid(split_1)
outputs = self.onnx_concat([mul_output, sigmoid_output], axis=1)
final_detections = post_process_yolo(
outputs, self.memx_model_width, self.memx_model_height
)
self.output_queue.put(final_detections)
elif self.memx_model_type == ModelTypeEnum.yolonas:

View File

@ -195,6 +195,7 @@ class EmbeddingMaintainer(threading.Thread):
self.config,
model_config,
self.event_metadata_publisher,
self.requestor,
self.metrics,
)
)
@ -339,6 +340,7 @@ class EmbeddingMaintainer(threading.Thread):
self.config,
model_config,
self.event_metadata_publisher,
self.requestor,
self.metrics,
)

View File

@ -362,7 +362,7 @@ def stats_snapshot(
stats["embeddings"]["review_description_speed"] = round(
embeddings_metrics.review_desc_speed.value * 1000, 2
)
stats["embeddings"]["review_descriptions"] = round(
stats["embeddings"]["review_description_events_per_second"] = round(
embeddings_metrics.review_desc_dps.value, 2
)
@ -370,7 +370,7 @@ def stats_snapshot(
stats["embeddings"]["object_description_speed"] = round(
embeddings_metrics.object_desc_speed.value * 1000, 2
)
stats["embeddings"]["object_descriptions"] = round(
stats["embeddings"]["object_description_events_per_second"] = round(
embeddings_metrics.object_desc_dps.value, 2
)
@ -378,7 +378,7 @@ def stats_snapshot(
stats["embeddings"][f"{key}_classification_speed"] = round(
embeddings_metrics.classification_speeds[key].value * 1000, 2
)
stats["embeddings"][f"{key}_classification"] = round(
stats["embeddings"][f"{key}_classification_events_per_second"] = round(
embeddings_metrics.classification_cps[key].value, 2
)

View File

@ -30,3 +30,4 @@ class TrackedObjectUpdateTypesEnum(str, Enum):
description = "description"
face = "face"
lpr = "lpr"
classification = "classification"

View File

@ -130,8 +130,13 @@ def get_soc_type() -> Optional[str]:
"""Get the SoC type from device tree."""
try:
with open("/proc/device-tree/compatible") as file:
soc = file.read().split(",")[-1].strip("\x00")
return soc
content = file.read()
# Check for Jetson devices
if "nvidia" in content:
return None
return content.split(",")[-1].strip("\x00")
except FileNotFoundError:
logger.debug("Could not determine SoC type from device tree")
return None

View File

@ -0,0 +1,33 @@
# COPYRIGHT AND TRADEMARK NOTICE
The images, logos, and icons contained in this directory (the "Brand Assets") are
proprietary to Frigate LLC and are NOT covered by the MIT License governing the
rest of this repository.
1. TRADEMARK STATUS
The "Frigate" name and the accompanying logo are common law trademarks™ of
Frigate LLC. Frigate LLC reserves all rights to these marks.
2. LIMITED PERMISSION FOR USE
Permission is hereby granted to display these Brand Assets strictly for the
following purposes:
a. To execute the software interface on a local machine.
b. To identify the software in documentation or reviews (nominative use).
3. RESTRICTIONS
You may NOT:
a. Use these Brand Assets to represent a derivative work (fork) as an official
product of Frigate LLC.
b. Use these Brand Assets in a way that implies endorsement, sponsorship, or
commercial affiliation with Frigate LLC.
c. Modify or alter the Brand Assets.
If you fork this repository with the intent to distribute a modified or competing
version of the software, you must replace these Brand Assets with your own
original content.
For full usage guidelines, please see the TRADEMARK.md file in the repository root.
ALL RIGHTS RESERVED.
Copyright (c) 2025 Frigate LLC.

View File

(Seven binary image files; before/after sizes unchanged: 3.9 KiB, 558 B, 800 B, 15 KiB, 12 KiB, 2.9 KiB, 2.6 KiB. Likely renamed or moved.)

View File

@ -2,29 +2,29 @@
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" href="/images/favicon.ico" />
<link rel="icon" href="/images/branding/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Frigate</title>
<link
rel="apple-touch-icon"
sizes="180x180"
href="/images/apple-touch-icon.png"
href="/images/branding/apple-touch-icon.png"
/>
<link
rel="icon"
type="image/png"
sizes="32x32"
href="/images/favicon-32x32.png"
href="/images/branding/favicon-32x32.png"
/>
<link
rel="icon"
type="image/png"
sizes="16x16"
href="/images/favicon-16x16.png"
href="/images/branding/favicon-16x16.png"
/>
<link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
<link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
<link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
<link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
<link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
<meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
<meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
</head>

View File

@ -2,29 +2,29 @@
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" href="/images/favicon.ico" />
<link rel="icon" href="/images/branding/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Frigate</title>
<link
rel="apple-touch-icon"
sizes="180x180"
href="/images/apple-touch-icon.png"
href="/images/branding/apple-touch-icon.png"
/>
<link
rel="icon"
type="image/png"
sizes="32x32"
href="/images/favicon-32x32.png"
href="/images/branding/favicon-32x32.png"
/>
<link
rel="icon"
type="image/png"
sizes="16x16"
href="/images/favicon-16x16.png"
href="/images/branding/favicon-16x16.png"
/>
<link rel="icon" type="image/svg+xml" href="/images/favicon.svg" />
<link rel="icon" type="image/svg+xml" href="/images/branding/favicon.svg" />
<link rel="manifest" href="/site.webmanifest" crossorigin="use-credentials" />
<link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
<link rel="mask-icon" href="/images/branding/favicon.svg" color="#3b82f7" />
<meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
<meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)" />
</head>

View File

@ -103,7 +103,7 @@
"regenerate": "A new description has been requested from {{provider}}. Depending on the speed of your provider, the new description may take some time to regenerate.",
"updatedSublabel": "Successfully updated sub label.",
"updatedLPR": "Successfully updated license plate.",
"audioTranscription": "Successfully requested audio transcription."
"audioTranscription": "Successfully requested audio transcription. Depending on the speed of your Frigate server, the transcription may take some time to complete."
},
"error": {
"regenerate": "Failed to call {{provider}} for a new description: {{errorMessage}}",

View File

@ -177,6 +177,10 @@
"noCameras": {
"title": "No Cameras Configured",
"description": "Get started by connecting a camera to Frigate.",
"buttonText": "Add Camera"
"buttonText": "Add Camera",
"restricted": {
"title": "No Cameras Available",
"description": "You don't have permission to view any cameras in this group."
}
}
}

View File

@ -76,7 +76,12 @@
}
},
"npuUsage": "NPU Usage",
"npuMemory": "NPU Memory"
"npuMemory": "NPU Memory",
"intelGpuWarning": {
"title": "Intel GPU Stats Warning",
"message": "GPU stats unavailable",
"description": "This is a known bug in Intel's GPU stats reporting tools (intel_gpu_top) where it will break and repeatedly return a GPU usage of 0% even in cases where hardware acceleration and object detection are correctly running on the (i)GPU. This is not a Frigate bug. You can restart the host to temporarily fix the issue and confirm that the GPU is working correctly. This does not affect performance."
}
},
"otherProcesses": {
"title": "Other Processes",
@ -169,6 +174,7 @@
"enrichments": {
"title": "Enrichments",
"infPerSecond": "Inferences Per Second",
"averageInf": "Average Inference Time",
"embeddings": {
"image_embedding": "Image Embedding",
"text_embedding": "Text Embedding",
@ -180,7 +186,13 @@
"plate_recognition_speed": "Plate Recognition Speed",
"text_embedding_speed": "Text Embedding Speed",
"yolov9_plate_detection_speed": "YOLOv9 Plate Detection Speed",
"yolov9_plate_detection": "YOLOv9 Plate Detection"
"yolov9_plate_detection": "YOLOv9 Plate Detection",
"review_description": "Review Description",
"review_description_speed": "Review Description Speed",
"review_description_events_per_second": "Review Description",
"object_description": "Object Description",
"object_description_speed": "Object Description Speed",
"object_description_events_per_second": "Object Description"
}
}
}

View File

@ -9,7 +9,7 @@ import useSWR from "swr";
import { MdHome } from "react-icons/md";
import { usePersistedOverlayState } from "@/hooks/use-overlay-state";
import { Button, buttonVariants } from "../ui/button";
import { useCallback, useMemo, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { LuPencil, LuPlus } from "react-icons/lu";
import {
@ -87,6 +87,8 @@ type CameraGroupSelectorProps = {
export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
const { t } = useTranslation(["components/camera"]);
const { data: config } = useSWR<FrigateConfig>("config");
const allowedCameras = useAllowedCameras();
const isCustomRole = useIsCustomRole();
// tooltip
@ -119,10 +121,22 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
return [];
}
return Object.entries(config.camera_groups).sort(
(a, b) => a[1].order - b[1].order,
);
}, [config]);
const allGroups = Object.entries(config.camera_groups);
// If custom role, filter out groups where user has no accessible cameras
if (isCustomRole) {
return allGroups
.filter(([, groupConfig]) => {
// Check if user has access to at least one camera in this group
return groupConfig.cameras.some((cameraName) =>
allowedCameras.includes(cameraName),
);
})
.sort((a, b) => a[1].order - b[1].order);
}
return allGroups.sort((a, b) => a[1].order - b[1].order);
}, [config, allowedCameras, isCustomRole]);
// add group
@ -139,6 +153,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
activeGroup={group}
setGroup={setGroup}
deleteGroup={deleteGroup}
isCustomRole={isCustomRole}
/>
<Scroller className={`${isMobile ? "whitespace-nowrap" : ""}`}>
<div
@ -206,14 +221,16 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
);
})}
<Button
className="bg-secondary text-muted-foreground"
aria-label={t("group.add")}
size="xs"
onClick={() => setAddGroup(true)}
>
<LuPlus className="size-4 text-primary" />
</Button>
{!isCustomRole && (
<Button
className="bg-secondary text-muted-foreground"
aria-label={t("group.add")}
size="xs"
onClick={() => setAddGroup(true)}
>
<LuPlus className="size-4 text-primary" />
</Button>
)}
{isMobile && <ScrollBar orientation="horizontal" className="h-0" />}
</div>
</Scroller>
@ -228,6 +245,7 @@ type NewGroupDialogProps = {
activeGroup?: string;
setGroup: (value: string | undefined, replace?: boolean | undefined) => void;
deleteGroup: () => void;
isCustomRole?: boolean;
};
function NewGroupDialog({
open,
@ -236,6 +254,7 @@ function NewGroupDialog({
activeGroup,
setGroup,
deleteGroup,
isCustomRole,
}: NewGroupDialogProps) {
const { t } = useTranslation(["components/camera"]);
const { mutate: updateConfig } = useSWR<FrigateConfig>("config");
@ -261,6 +280,12 @@ function NewGroupDialog({
`${activeGroup}-draggable-layout`,
);
useEffect(() => {
if (!open) {
setEditState("none");
}
}, [open]);
// callbacks
const onDeleteGroup = useCallback(
@ -349,13 +374,7 @@ function NewGroupDialog({
position="top-center"
closeButton={true}
/>
<Overlay
open={open}
onOpenChange={(open) => {
setEditState("none");
setOpen(open);
}}
>
<Overlay open={open} onOpenChange={setOpen}>
<Content
className={cn(
"scrollbar-container overflow-y-auto",
@ -371,28 +390,30 @@ function NewGroupDialog({
>
<Title>{t("group.label")}</Title>
<Description className="sr-only">{t("group.edit")}</Description>
<div
className={cn(
"absolute",
isDesktop && "right-6 top-10",
isMobile && "absolute right-0 top-4",
)}
>
<Button
size="sm"
{!isCustomRole && (
<div
className={cn(
isDesktop &&
"size-6 rounded-md bg-secondary-foreground p-1 text-background",
isMobile && "text-secondary-foreground",
"absolute",
isDesktop && "right-6 top-10",
isMobile && "absolute right-0 top-4",
)}
aria-label={t("group.add")}
onClick={() => {
setEditState("add");
}}
>
<LuPlus />
</Button>
</div>
<Button
size="sm"
className={cn(
isDesktop &&
"size-6 rounded-md bg-secondary-foreground p-1 text-background",
isMobile && "text-secondary-foreground",
)}
aria-label={t("group.add")}
onClick={() => {
setEditState("add");
}}
>
<LuPlus />
</Button>
</div>
)}
</Header>
<div className="flex flex-col gap-4 md:gap-3">
{currentGroups.map((group) => (
@ -401,6 +422,7 @@ function NewGroupDialog({
group={group}
onDeleteGroup={() => onDeleteGroup(group[0])}
onEditGroup={() => onEditGroup(group)}
isReadOnly={isCustomRole}
/>
))}
</div>
@ -512,12 +534,14 @@ type CameraGroupRowProps = {
group: [string, CameraGroupConfig];
onDeleteGroup: () => void;
onEditGroup: () => void;
isReadOnly?: boolean;
};
export function CameraGroupRow({
group,
onDeleteGroup,
onEditGroup,
isReadOnly,
}: CameraGroupRowProps) {
const { t } = useTranslation(["components/camera"]);
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false);
@ -564,7 +588,7 @@ export function CameraGroupRow({
</AlertDialogContent>
</AlertDialog>
{isMobile && (
{isMobile && !isReadOnly && (
<>
<DropdownMenu modal={!isDesktop}>
<DropdownMenuTrigger>
@ -589,7 +613,7 @@ export function CameraGroupRow({
</DropdownMenu>
</>
)}
{!isMobile && (
{!isMobile && !isReadOnly && (
<div className="flex flex-row items-center gap-2">
<Tooltip>
<TooltipTrigger asChild>

View File

@ -572,9 +572,8 @@ export function SortTypeContent({
className="w-full space-y-1"
>
{availableSortTypes.map((value) => (
<div className="flex flex-row gap-2">
<div key={value} className="flex flex-row gap-2">
<RadioGroupItem
key={value}
value={value}
id={`sort-${value}`}
className={

View File

@ -42,9 +42,10 @@ export default function DetailActionsMenu({
return `start/${startTime}/end/${endTime}`;
}, [search]);
const { data: reviewItem } = useSWR<ReviewSegment>([
`review/event/${search.id}`,
]);
// currently, audio event ids are not saved in review items
const { data: reviewItem } = useSWR<ReviewSegment>(
search.data?.type === "audio" ? null : [`review/event/${search.id}`],
);
return (
<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>

View File

@ -807,6 +807,15 @@ function ObjectDetailsTab({
}
}, [search]);
const isEventsKey = useCallback((key: unknown): boolean => {
const candidate = Array.isArray(key) ? key[0] : key;
const EVENTS_KEY_PATTERNS = ["events", "events/search", "events/explore"];
return (
typeof candidate === "string" &&
EVENTS_KEY_PATTERNS.some((p) => candidate.includes(p))
);
}, []);
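The array-or-string handling above mirrors how SWR keys are used elsewhere in this file: some caches are keyed by plain strings and others by arrays whose first element is the route. A quick sketch of the matching behavior (the key shapes below are hypothetical):
// Array keys are matched on their first element; string keys are matched directly.
isEventsKey(["events", { cameras: "front" }]); // true
isEventsKey("events/explore?limit=10"); // true
isEventsKey("config"); // false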
const updateDescription = useCallback(() => {
if (!search) {
return;
@ -821,11 +830,7 @@ function ObjectDetailsTab({
});
}
mutate(
(key) =>
typeof key === "string" &&
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(key) => isEventsKey(key),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
@ -838,6 +843,7 @@ function ObjectDetailsTab({
revalidate: false,
},
);
setSearch({ ...search, data: { ...search.data, description: desc } });
})
.catch((error) => {
const errorMessage =
@ -854,7 +860,7 @@ function ObjectDetailsTab({
);
setDesc(search.data.description);
});
}, [desc, search, mutate, t, mapSearchResults]);
}, [desc, search, mutate, t, mapSearchResults, isEventsKey, setSearch]);
const regenerateDescription = useCallback(
(source: "snapshot" | "thumbnails") => {
@ -921,11 +927,7 @@ function ObjectDetailsTab({
});
mutate(
(key) =>
typeof key === "string" &&
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(key) => isEventsKey(key),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
@ -972,7 +974,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t, mapSearchResults],
[search, apiHost, mutate, setSearch, t, mapSearchResults, isEventsKey],
);
// recognized plate
@ -996,11 +998,7 @@ function ObjectDetailsTab({
});
mutate(
(key) =>
typeof key === "string" &&
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(key) => isEventsKey(key),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
@ -1047,7 +1045,7 @@ function ObjectDetailsTab({
);
});
},
[search, apiHost, mutate, setSearch, t, mapSearchResults],
[search, apiHost, mutate, setSearch, t, mapSearchResults, isEventsKey],
);
// speech transcription
@ -1103,12 +1101,9 @@ function ObjectDetailsTab({
});
setState("submitted");
setSearch({ ...search, plus_id: "new_upload" });
mutate(
(key) =>
typeof key === "string" &&
(key.includes("events") ||
key.includes("events/search") ||
key.includes("events/explore")),
(key) => isEventsKey(key),
(currentData: SearchResult[][] | SearchResult[] | undefined) =>
mapSearchResults(currentData, (event) =>
event.id === search.id
@ -1122,7 +1117,7 @@ function ObjectDetailsTab({
},
);
},
[search, mutate, mapSearchResults],
[search, mutate, mapSearchResults, setSearch, isEventsKey],
);
const popoverContainerRef = useRef<HTMLDivElement | null>(null);
@ -1300,6 +1295,7 @@ function ObjectDetailsTab({
{search.data.type === "object" &&
config?.plus?.enabled &&
search.end_time != undefined &&
search.has_snapshot && (
<div
className={cn(

View File

@ -56,6 +56,7 @@ export function TrackingDetails({
const apiHost = useApiHost();
const imgRef = useRef<HTMLImageElement | null>(null);
const [imgLoaded, setImgLoaded] = useState(false);
const [isVideoLoading, setIsVideoLoading] = useState(true);
const [displaySource, _setDisplaySource] = useState<"video" | "image">(
"video",
);
@ -70,6 +71,10 @@ export function TrackingDetails({
(event.start_time ?? 0) + annotationOffset / 1000 - REVIEW_PADDING,
);
useEffect(() => {
setIsVideoLoading(true);
}, [event.id]);
const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
"timeline",
{
@ -527,22 +532,28 @@ export function TrackingDetails({
)}
>
{displaySource == "video" && (
<HlsVideoPlayer
videoRef={videoRef}
containerRef={containerRef}
visible={true}
currentSource={videoSource}
hotKeys={false}
supportsFullscreen={false}
fullscreen={false}
frigateControls={true}
onTimeUpdate={handleTimeUpdate}
onSeekToTime={handleSeekToTime}
onUploadFrame={onUploadFrameToPlus}
isDetailMode={true}
camera={event.camera}
currentTimeOverride={currentTime}
/>
<>
<HlsVideoPlayer
videoRef={videoRef}
containerRef={containerRef}
visible={true}
currentSource={videoSource}
hotKeys={false}
supportsFullscreen={false}
fullscreen={false}
frigateControls={true}
onTimeUpdate={handleTimeUpdate}
onSeekToTime={handleSeekToTime}
onUploadFrame={onUploadFrameToPlus}
onPlaying={() => setIsVideoLoading(false)}
isDetailMode={true}
camera={event.camera}
currentTimeOverride={currentTime}
/>
{isVideoLoading && (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
)}
</>
)}
{displaySource == "image" && (
<>

View File

@ -6,51 +6,199 @@ import {
DialogTitle,
} from "@/components/ui/dialog";
import { Event } from "@/types/event";
import { isDesktop, isMobile } from "react-device-detect";
import { ObjectSnapshotTab } from "../detail/SearchDetailDialog";
import { isDesktop, isMobile, isSafari } from "react-device-detect";
import { cn } from "@/lib/utils";
import { useCallback, useEffect, useState } from "react";
import axios from "axios";
import { useTranslation, Trans } from "react-i18next";
import { Button } from "@/components/ui/button";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { FaCheckCircle } from "react-icons/fa";
import { Card, CardContent } from "@/components/ui/card";
import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
import { baseUrl } from "@/api/baseUrl";
import { getTranslatedLabel } from "@/utils/i18n";
import useImageLoaded from "@/hooks/use-image-loaded";
type FrigatePlusDialogProps = {
export type FrigatePlusDialogProps = {
upload?: Event;
dialog?: boolean;
onClose: () => void;
onEventUploaded: () => void;
};
export function FrigatePlusDialog({
upload,
dialog = true,
onClose,
onEventUploaded,
}: FrigatePlusDialogProps) {
if (!upload) {
return;
}
if (dialog) {
return (
<Dialog
open={upload != undefined}
onOpenChange={(open) => (!open ? onClose() : null)}
const { t, i18n } = useTranslation(["components/dialog"]);
type SubmissionState = "reviewing" | "uploading" | "submitted";
const [state, setState] = useState<SubmissionState>(
upload?.plus_id ? "submitted" : "reviewing",
);
useEffect(() => {
setState(upload?.plus_id ? "submitted" : "reviewing");
}, [upload?.plus_id]);
const onSubmitToPlus = useCallback(
async (falsePositive: boolean) => {
if (!upload) return;
falsePositive
? axios.put(`events/${upload.id}/false_positive`)
: axios.post(`events/${upload.id}/plus`, { include_annotation: 1 });
setState("submitted");
onEventUploaded();
},
[upload, onEventUploaded],
);
const [imgRef, imgLoaded, onImgLoad] = useImageLoaded();
const showCard =
!!upload &&
upload.data.type === "object" &&
upload.plus_id !== "not_enabled" &&
upload.end_time &&
upload.label !== "on_demand";
if (!dialog || !upload) return null;
return (
<Dialog open={true} onOpenChange={(open) => (!open ? onClose() : null)}>
<DialogContent
className={cn(
"scrollbar-container overflow-y-auto",
isDesktop &&
"max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-4xl xl:max-w-7xl",
isMobile && "px-4",
)}
>
<DialogContent
className={cn(
"scrollbar-container overflow-y-auto",
isDesktop &&
"max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-4xl xl:max-w-7xl",
isMobile && "px-4",
)}
>
<DialogHeader>
<DialogTitle className="sr-only">Submit to Frigate+</DialogTitle>
<DialogDescription className="sr-only">
Submit this snapshot to Frigate+
</DialogDescription>
</DialogHeader>
<ObjectSnapshotTab
search={upload}
onEventUploaded={onEventUploaded}
<DialogHeader>
<DialogTitle className="sr-only">Submit to Frigate+</DialogTitle>
<DialogDescription className="sr-only">
Submit this snapshot to Frigate+
</DialogDescription>
</DialogHeader>
<div className="relative size-full">
<ImageLoadingIndicator
className="absolute inset-0 aspect-video min-h-[60dvh] w-full"
imgLoaded={imgLoaded}
/>
</DialogContent>
</Dialog>
);
}
<div className={imgLoaded ? "visible" : "invisible"}>
<TransformWrapper minScale={1.0} wheel={{ smoothStep: 0.005 }}>
<div className="flex flex-col space-y-3">
<TransformComponent
wrapperStyle={{ width: "100%", height: "100%" }}
contentStyle={{
position: "relative",
width: "100%",
height: "100%",
}}
>
{upload.id && (
<div className="relative mx-auto">
<img
ref={imgRef}
className="mx-auto max-h-[60dvh] rounded-lg bg-black object-contain"
src={`${baseUrl}api/events/${upload.id}/snapshot.jpg`}
alt={`${upload.label}`}
loading={isSafari ? "eager" : "lazy"}
onLoad={onImgLoad}
/>
</div>
)}
</TransformComponent>
{showCard && (
<Card className="p-1 text-sm md:p-2">
<CardContent className="flex flex-col items-center justify-between gap-3 p-2 md:flex-row">
<div className="flex flex-col space-y-3">
<div className="text-lg leading-none">
{t("explore.plus.submitToPlus.label")}
</div>
<div className="text-sm text-muted-foreground">
{t("explore.plus.submitToPlus.desc")}
</div>
</div>
<div className="flex w-full flex-1 flex-col justify-center gap-2 md:ml-8 md:w-auto md:justify-end">
{state === "reviewing" && (
<>
<div>
{i18n.language === "en" ? (
/^[aeiou]/i.test(upload.label || "") ? (
<Trans
ns="components/dialog"
values={{ label: upload.label }}
>
explore.plus.review.question.ask_an
</Trans>
) : (
<Trans
ns="components/dialog"
values={{ label: upload.label }}
>
explore.plus.review.question.ask_a
</Trans>
)
) : (
<Trans
ns="components/dialog"
values={{
untranslatedLabel: upload.label,
translatedLabel: getTranslatedLabel(
upload.label,
),
}}
>
explore.plus.review.question.ask_full
</Trans>
)}
</div>
<div className="flex w-full flex-row gap-2">
<Button
className="flex-1 bg-success"
aria-label={t("button.yes", { ns: "common" })}
onClick={() => {
setState("uploading");
onSubmitToPlus(false);
}}
>
{t("button.yes", { ns: "common" })}
</Button>
<Button
className="flex-1 text-white"
aria-label={t("button.no", { ns: "common" })}
variant="destructive"
onClick={() => {
setState("uploading");
onSubmitToPlus(true);
}}
>
{t("button.no", { ns: "common" })}
</Button>
</div>
</>
)}
{state === "uploading" && <ActivityIndicator />}
{state === "submitted" && (
<div className="flex flex-row items-center justify-center gap-2">
<FaCheckCircle className="size-4 text-success" />
{t("explore.plus.review.state.submitted")}
</div>
)}
</div>
</CardContent>
</Card>
)}
</div>
</TransformWrapper>
</div>
</div>
</DialogContent>
</Dialog>
);
}

View File

@ -6,7 +6,7 @@ import {
useState,
} from "react";
import Hls from "hls.js";
import { isAndroid, isDesktop, isMobile } from "react-device-detect";
import { isDesktop, isMobile } from "react-device-detect";
import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
import VideoControls from "./VideoControls";
import { VideoResolutionType } from "@/types/live";
@ -22,7 +22,7 @@ import { useTranslation } from "react-i18next";
import ObjectTrackOverlay from "@/components/overlay/ObjectTrackOverlay";
// Android native hls does not seek correctly
const USE_NATIVE_HLS = !isAndroid;
const USE_NATIVE_HLS = false;
const HLS_MIME_TYPE = "application/vnd.apple.mpegurl" as const;
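With USE_NATIVE_HLS forced to false, HLS playlists are never handed to the browser's built-in player on any platform; a minimal sketch of the kind of gate this flag feeds, assuming videoEl is the target <video> element (the component itself works through videoRef.current, and the actual flag used below is useHlsCompat):
// Native playback would only be considered when the flag allows it and the browser
// claims support for the HLS MIME type; with the flag false, hls.js (MSE) is used
// whenever it is supported.
const canUseNativeHls =
  USE_NATIVE_HLS && videoEl.canPlayType(HLS_MIME_TYPE) !== "";
const useHlsJs = !canUseNativeHls && Hls.isSupported();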
const unsupportedErrorCodes = [
MediaError.MEDIA_ERR_SRC_NOT_SUPPORTED,
@ -94,24 +94,52 @@ export default function HlsVideoPlayer({
const [loadedMetadata, setLoadedMetadata] = useState(false);
const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();
const applyVideoDimensions = useCallback(
(width: number, height: number) => {
if (setFullResolution) {
setFullResolution({ width, height });
}
setVideoDimensions({ width, height });
if (height > 0) {
setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
}
},
[setFullResolution],
);
const handleLoadedMetadata = useCallback(() => {
setLoadedMetadata(true);
if (videoRef.current) {
const width = videoRef.current.videoWidth;
const height = videoRef.current.videoHeight;
if (setFullResolution) {
setFullResolution({
width,
height,
});
}
setVideoDimensions({ width, height });
setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
if (!videoRef.current) {
return;
}
}, [videoRef, setFullResolution]);
const width = videoRef.current.videoWidth;
const height = videoRef.current.videoHeight;
// iOS Safari occasionally reports 0x0 for videoWidth/videoHeight
// Poll with requestAnimationFrame until dimensions become available (or timeout).
if (width > 0 && height > 0) {
applyVideoDimensions(width, height);
return;
}
let attempts = 0;
const maxAttempts = 120; // ~2 seconds at 60fps
const tryGetDims = () => {
if (!videoRef.current) return;
const w = videoRef.current.videoWidth;
const h = videoRef.current.videoHeight;
if (w > 0 && h > 0) {
applyVideoDimensions(w, h);
return;
}
if (attempts < maxAttempts) {
attempts += 1;
requestAnimationFrame(tryGetDims);
}
};
requestAnimationFrame(tryGetDims);
}, [videoRef, applyVideoDimensions]);
useEffect(() => {
if (!videoRef.current) {
@ -130,6 +158,8 @@ export default function HlsVideoPlayer({
return;
}
setLoadedMetadata(false);
const currentPlaybackRate = videoRef.current.playbackRate;
if (!useHlsCompat) {

View File

@ -91,7 +91,7 @@ function MSEPlayer({
(error: LivePlayerError, description: string = "Unknown error") => {
// eslint-disable-next-line no-console
console.error(
`${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
`${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
);
onError?.(error);
},

View File

@ -309,6 +309,7 @@ function PreviewVideoPlayer({
playsInline
muted
disableRemotePlayback
disablePictureInPicture
onSeeked={onPreviewSeeked}
onLoadedData={() => {
if (firstLoad) {

View File

@ -42,7 +42,7 @@ export default function WebRtcPlayer({
(error: LivePlayerError, description: string = "Unknown error") => {
// eslint-disable-next-line no-console
console.error(
`${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
`${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
);
onError?.(error);
},

View File

@ -2,7 +2,10 @@ import { Recording } from "@/types/record";
import { DynamicPlayback } from "@/types/playback";
import { PreviewController } from "../PreviewPlayer";
import { TimeRange, TrackingDetailsSequence } from "@/types/timeline";
import { calculateInpointOffset } from "@/utils/videoUtil";
import {
calculateInpointOffset,
calculateSeekPosition,
} from "@/utils/videoUtil";
type PlayerMode = "playback" | "scrubbing";
@ -72,38 +75,20 @@ export class DynamicVideoController {
return;
}
if (
this.recordings.length == 0 ||
time < this.recordings[0].start_time ||
time > this.recordings[this.recordings.length - 1].end_time
) {
this.setNoRecording(true);
return;
}
if (this.playerMode != "playback") {
this.playerMode = "playback";
}
let seekSeconds = 0;
(this.recordings || []).every((segment) => {
// if the next segment is past the desired time, stop calculating
if (segment.start_time > time) {
return false;
}
const seekSeconds = calculateSeekPosition(
time,
this.recordings,
this.inpointOffset,
);
if (segment.end_time < time) {
seekSeconds += segment.end_time - segment.start_time;
return true;
}
seekSeconds +=
segment.end_time - segment.start_time - (segment.end_time - time);
return true;
});
// adjust for HLS inpoint offset
seekSeconds -= this.inpointOffset;
if (seekSeconds === undefined) {
this.setNoRecording(true);
return;
}
if (seekSeconds != 0) {
this.playerController.currentTime = seekSeconds;

View File

@ -14,7 +14,10 @@ import { VideoResolutionType } from "@/types/live";
import axios from "axios";
import { cn } from "@/lib/utils";
import { useTranslation } from "react-i18next";
import { calculateInpointOffset } from "@/utils/videoUtil";
import {
calculateInpointOffset,
calculateSeekPosition,
} from "@/utils/videoUtil";
import { isFirefox } from "react-device-detect";
/**
@ -109,10 +112,10 @@ export default function DynamicVideoPlayer({
const [isLoading, setIsLoading] = useState(false);
const [isBuffering, setIsBuffering] = useState(false);
const [loadingTimeout, setLoadingTimeout] = useState<NodeJS.Timeout>();
const [source, setSource] = useState<HlsSource>({
playlist: `${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
startPosition: startTimestamp ? timeRange.after - startTimestamp : 0,
});
// Don't set source until recordings load - we need accurate startPosition
// to avoid hls.js clamping to video end when startPosition exceeds duration
const [source, setSource] = useState<HlsSource | undefined>(undefined);
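A small numeric illustration of the clamping the comment above refers to, with made-up values: if the recordings only cover 90 seconds of playable video but the naive timestamp math yields a start position of 150 seconds, hls.js snaps playback to the end of the clip instead of the requested time.
// Hypothetical numbers; deferring source creation until recordings load means an
// accurate startPosition is always available and this case is never hit.
const playableSeconds = 90;
const naiveStartPosition = 150;
const effectiveStart = Math.min(naiveStartPosition, playableSeconds); // => 90, the video end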
// start at correct time
@ -184,7 +187,7 @@ export default function DynamicVideoPlayer({
);
useEffect(() => {
if (!controller || !recordings?.length) {
if (!recordings?.length) {
if (recordings?.length == 0) {
setNoRecording(true);
}
@ -192,10 +195,6 @@ export default function DynamicVideoPlayer({
return;
}
if (playerRef.current) {
playerRef.current.autoplay = !isScrubbing;
}
let startPosition = undefined;
if (startTimestamp) {
@ -203,14 +202,12 @@ export default function DynamicVideoPlayer({
recordingParams.after,
(recordings || [])[0],
);
const idealStartPosition = Math.max(
0,
startTimestamp - timeRange.after - inpointOffset,
);
if (idealStartPosition >= recordings[0].start_time - timeRange.after) {
startPosition = idealStartPosition;
}
startPosition = calculateSeekPosition(
startTimestamp,
recordings,
inpointOffset,
);
}
setSource({
@ -218,6 +215,18 @@ export default function DynamicVideoPlayer({
startPosition,
});
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [recordings]);
useEffect(() => {
if (!controller || !recordings?.length) {
return;
}
if (playerRef.current) {
playerRef.current.autoplay = !isScrubbing;
}
setLoadingTimeout(setTimeout(() => setIsLoading(true), 1000));
controller.newPlayback({
@ -225,7 +234,7 @@ export default function DynamicVideoPlayer({
timeRange,
});
// we only want this to change when recordings update
// we only want this to change when controller or recordings update
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [controller, recordings]);
@ -263,46 +272,48 @@ export default function DynamicVideoPlayer({
return (
<>
<HlsVideoPlayer
videoRef={playerRef}
containerRef={containerRef}
visible={!(isScrubbing || isLoading)}
currentSource={source}
hotKeys={hotKeys}
supportsFullscreen={supportsFullscreen}
fullscreen={fullscreen}
inpointOffset={inpointOffset}
onTimeUpdate={onTimeUpdate}
onPlayerLoaded={onPlayerLoaded}
onClipEnded={onValidateClipEnd}
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
}}
onPlaying={() => {
if (isScrubbing) {
playerRef.current?.pause();
}
{source && (
<HlsVideoPlayer
videoRef={playerRef}
containerRef={containerRef}
visible={!(isScrubbing || isLoading)}
currentSource={source}
hotKeys={hotKeys}
supportsFullscreen={supportsFullscreen}
fullscreen={fullscreen}
inpointOffset={inpointOffset}
onTimeUpdate={onTimeUpdate}
onPlayerLoaded={onPlayerLoaded}
onClipEnded={onValidateClipEnd}
onSeekToTime={(timestamp, play) => {
if (onSeekToTime) {
onSeekToTime(timestamp, play);
}
}}
onPlaying={() => {
if (isScrubbing) {
playerRef.current?.pause();
}
if (loadingTimeout) {
clearTimeout(loadingTimeout);
}
if (loadingTimeout) {
clearTimeout(loadingTimeout);
}
setNoRecording(false);
}}
setFullResolution={setFullResolution}
onUploadFrame={onUploadFrameToPlus}
toggleFullscreen={toggleFullscreen}
onError={(error) => {
if (error == "stalled" && !isScrubbing) {
setIsBuffering(true);
}
}}
isDetailMode={isDetailMode}
camera={contextCamera || camera}
currentTimeOverride={currentTime}
/>
setNoRecording(false);
}}
setFullResolution={setFullResolution}
onUploadFrame={onUploadFrameToPlus}
toggleFullscreen={toggleFullscreen}
onError={(error) => {
if (error == "stalled" && !isScrubbing) {
setIsBuffering(true);
}
}}
isDetailMode={isDetailMode}
camera={contextCamera || camera}
currentTimeOverride={currentTime}
/>
)}
<PreviewPlayer
className={cn(
className,

View File

@ -377,7 +377,7 @@ export default function Step1NameCamera({
);
return selectedBrand &&
selectedBrand.value != "other" ? (
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<Button
variant="ghost"

View File

@ -600,7 +600,7 @@ export default function Step3StreamConfig({
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step3.roles")}
</Label>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />
@ -670,7 +670,7 @@ export default function Step3StreamConfig({
<Label className="text-sm font-medium text-primary-variant">
{t("cameraWizard.step3.featuresTitle")}
</Label>
<Popover>
<Popover modal={true}>
<PopoverTrigger asChild>
<Button variant="ghost" size="sm" className="h-4 w-4 p-0">
<LuInfo className="size-3" />

View File

@ -93,19 +93,23 @@ function Live() {
const allowedCameras = useAllowedCameras();
const includesBirdseye = useMemo(() => {
// Restricted users should never have access to birdseye
if (isCustomRole) {
return false;
}
if (
config &&
Object.keys(config.camera_groups).length &&
cameraGroup &&
config.camera_groups[cameraGroup] &&
cameraGroup != "default" &&
(!isCustomRole || "birdseye" in allowedCameras)
cameraGroup != "default"
) {
return config.camera_groups[cameraGroup].cameras.includes("birdseye");
} else {
return false;
}
}, [config, cameraGroup, allowedCameras, isCustomRole]);
}, [config, cameraGroup, isCustomRole]);
const cameras = useMemo(() => {
if (!config) {

View File

@ -24,3 +24,57 @@ export function calculateInpointOffset(
return 0;
}
/**
* Calculates the video player time (in seconds) for a given timestamp
* by iterating through recording segments and summing their durations.
* This accounts for the fact that the video is a concatenation of segments,
* not a single continuous stream.
*
* @param timestamp - The target timestamp to seek to
* @param recordings - Array of recording segments
* @param inpointOffset - HLS inpoint offset to subtract from the result
* @returns The calculated seek position in seconds, or undefined if timestamp is out of range
*/
export function calculateSeekPosition(
timestamp: number,
recordings: Recording[],
inpointOffset: number = 0,
): number | undefined {
if (!recordings || recordings.length === 0) {
return undefined;
}
// Check if timestamp is within the recordings range
if (
timestamp < recordings[0].start_time ||
timestamp > recordings[recordings.length - 1].end_time
) {
return undefined;
}
let seekSeconds = 0;
(recordings || []).every((segment) => {
// if the next segment is past the desired time, stop calculating
if (segment.start_time > timestamp) {
return false;
}
if (segment.end_time < timestamp) {
// Add the full duration of this segment
seekSeconds += segment.end_time - segment.start_time;
return true;
}
// We're in this segment - calculate position within it
seekSeconds +=
segment.end_time - segment.start_time - (segment.end_time - timestamp);
return true;
});
// Adjust for HLS inpoint offset
seekSeconds -= inpointOffset;
return seekSeconds >= 0 ? seekSeconds : undefined;
}
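A worked example of the mapping this helper implements, using made-up segment times (the remaining Recording fields are elided with a cast): gaps between segments are not part of the concatenated video, so the seek position counts only the recorded seconds before the target timestamp.
// Two hypothetical segments: 10 s recorded, a 20 s gap, then 10 s more.
const exampleRecordings = [
  { start_time: 100, end_time: 110 },
  { start_time: 130, end_time: 140 },
] as Recording[];
// Seeking to timestamp 135 lands 5 s into the second segment:
// 10 s (first segment) + 5 s = 15 s of player time.
calculateSeekPosition(135, exampleRecordings, 0); // => 15
// Timestamps outside the recorded range return undefined.
calculateSeekPosition(90, exampleRecordings); // => undefined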

View File

@ -39,6 +39,7 @@ import {
AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import BlurredIconButton from "@/components/button/BlurredIconButton";
import { Skeleton } from "@/components/ui/skeleton";
const allModelTypes = ["objects", "states"] as const;
type ModelType = (typeof allModelTypes)[number];
@ -332,9 +333,7 @@ function ModelCard({ config, onClick, onUpdate, onDelete }: ModelCardProps) {
<ImageShadowOverlay lowerClassName="h-[30%] z-0" />
</>
) : (
<div className="flex size-full items-center justify-center bg-background_alt">
<MdModelTraining className="size-16 text-muted-foreground" />
</div>
<Skeleton className="flex size-full items-center justify-center" />
)}
<div className="absolute bottom-2 left-3 text-lg text-white smart-capitalize">
{config.name}

View File

@ -16,7 +16,6 @@ import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator
import useImageLoaded from "@/hooks/use-image-loaded";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useTrackedObjectUpdate } from "@/api/ws";
import { isEqual } from "lodash";
import TimeAgo from "@/components/dynamic/TimeAgo";
import SearchResultActions from "@/components/menu/SearchResultActions";
import { SearchTab } from "@/components/overlay/detail/SearchDetailDialog";
@ -25,14 +24,12 @@ import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";
type ExploreViewProps = {
searchDetail: SearchResult | undefined;
setSearchDetail: (search: SearchResult | undefined) => void;
setSimilaritySearch: (search: SearchResult) => void;
onSelectSearch: (item: SearchResult, ctrl: boolean, page?: SearchTab) => void;
};
export default function ExploreView({
searchDetail,
setSearchDetail,
setSimilaritySearch,
onSelectSearch,
@ -83,20 +80,6 @@ export default function ExploreView({
}
}, [wsUpdate, mutate]);
// update search detail when results change
useEffect(() => {
if (searchDetail && events) {
const updatedSearchDetail = events.find(
(result) => result.id === searchDetail.id,
);
if (updatedSearchDetail && !isEqual(updatedSearchDetail, searchDetail)) {
setSearchDetail(updatedSearchDetail);
}
}
}, [events, searchDetail, setSearchDetail]);
if (isLoading) {
return (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />

View File

@ -20,7 +20,14 @@ import {
FrigateConfig,
} from "@/types/frigateConfig";
import { ReviewSegment } from "@/types/review";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import {
useCallback,
useContext,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import {
isDesktop,
isMobile,
@ -46,6 +53,8 @@ import { useStreamingSettings } from "@/context/streaming-settings-provider";
import { useTranslation } from "react-i18next";
import { EmptyCard } from "@/components/card/EmptyCard";
import { BsFillCameraVideoOffFill } from "react-icons/bs";
import { AuthContext } from "@/context/auth-context";
import { useIsCustomRole } from "@/hooks/use-is-custom-role";
type LiveDashboardViewProps = {
cameras: CameraConfig[];
@ -374,10 +383,6 @@ export default function LiveDashboardView({
onSaveMuting(true);
};
if (cameras.length == 0 && !includeBirdseye) {
return <NoCameraView />;
}
return (
<div
className="scrollbar-container size-full select-none overflow-y-auto px-1 pt-2 md:p-2"
@ -439,198 +444,215 @@ export default function LiveDashboardView({
</div>
)}
{!fullscreen && events && events.length > 0 && (
<ScrollArea>
<TooltipProvider>
<div className="flex items-center gap-2 px-1">
{events.map((event) => {
return (
<AnimatedEventCard
key={event.id}
event={event}
selectedGroup={cameraGroup}
updateEvents={updateEvents}
/>
);
})}
</div>
</TooltipProvider>
<ScrollBar orientation="horizontal" />
</ScrollArea>
)}
{!cameraGroup || cameraGroup == "default" || isMobileOnly ? (
{cameras.length == 0 && !includeBirdseye ? (
<NoCameraView />
) : (
<>
<div
className={cn(
"mt-2 grid grid-cols-1 gap-2 px-2 md:gap-4",
mobileLayout == "grid" &&
"grid-cols-2 xl:grid-cols-3 3xl:grid-cols-4",
isMobile && "px-0",
)}
>
{includeBirdseye && birdseyeConfig?.enabled && (
{!fullscreen && events && events.length > 0 && (
<ScrollArea>
<TooltipProvider>
<div className="flex items-center gap-2 px-1">
{events.map((event) => {
return (
<AnimatedEventCard
key={event.id}
event={event}
selectedGroup={cameraGroup}
updateEvents={updateEvents}
/>
);
})}
</div>
</TooltipProvider>
<ScrollBar orientation="horizontal" />
</ScrollArea>
)}
{!cameraGroup || cameraGroup == "default" || isMobileOnly ? (
<>
<div
className={(() => {
const aspectRatio =
birdseyeConfig.width / birdseyeConfig.height;
if (aspectRatio > 2) {
return `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
} else if (aspectRatio < 1) {
return `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
} else {
return "aspect-video";
}
})()}
ref={birdseyeContainerRef}
className={cn(
"mt-2 grid grid-cols-1 gap-2 px-2 md:gap-4",
mobileLayout == "grid" &&
"grid-cols-2 xl:grid-cols-3 3xl:grid-cols-4",
isMobile && "px-0",
)}
>
<BirdseyeLivePlayer
birdseyeConfig={birdseyeConfig}
liveMode={birdseyeConfig.restream ? "mse" : "jsmpeg"}
onClick={() => onSelectCamera("birdseye")}
containerRef={birdseyeContainerRef}
/>
</div>
)}
{cameras.map((camera) => {
let grow;
const aspectRatio = camera.detect.width / camera.detect.height;
if (aspectRatio > 2) {
grow = `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
} else if (aspectRatio < 1) {
grow = `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
} else {
grow = "aspect-video";
}
const availableStreams = camera.live.streams || {};
const firstStreamEntry = Object.values(availableStreams)[0] || "";
const streamNameFromSettings =
currentGroupStreamingSettings?.[camera.name]?.streamName || "";
const streamExists =
streamNameFromSettings &&
Object.values(availableStreams).includes(
streamNameFromSettings,
);
const streamName = streamExists
? streamNameFromSettings
: firstStreamEntry;
const streamType =
currentGroupStreamingSettings?.[camera.name]?.streamType;
const autoLive =
streamType !== undefined
? streamType !== "no-streaming"
: undefined;
const showStillWithoutActivity =
currentGroupStreamingSettings?.[camera.name]?.streamType !==
"continuous";
const useWebGL =
currentGroupStreamingSettings?.[camera.name]
?.compatibilityMode || false;
return (
<LiveContextMenu
className={grow}
key={camera.name}
camera={camera.name}
cameraGroup={cameraGroup}
streamName={streamName}
preferredLiveMode={preferredLiveModes[camera.name] ?? "mse"}
isRestreamed={isRestreamedStates[camera.name]}
supportsAudio={
supportsAudioOutputStates[streamName]?.supportsAudio ??
false
}
audioState={audioStates[camera.name]}
toggleAudio={() => toggleAudio(camera.name)}
statsState={statsStates[camera.name]}
toggleStats={() => toggleStats(camera.name)}
volumeState={volumeStates[camera.name] ?? 1}
setVolumeState={(value) =>
setVolumeStates({
[camera.name]: value,
})
}
muteAll={muteAll}
unmuteAll={unmuteAll}
resetPreferredLiveMode={() =>
resetPreferredLiveMode(camera.name)
}
config={config}
>
<LivePlayer
cameraRef={cameraRef}
key={camera.name}
className={`${grow} rounded-lg bg-black md:rounded-2xl`}
windowVisible={
windowVisible && visibleCameras.includes(camera.name)
}
cameraConfig={camera}
preferredLiveMode={preferredLiveModes[camera.name] ?? "mse"}
autoLive={autoLive ?? globalAutoLive}
showStillWithoutActivity={showStillWithoutActivity ?? true}
alwaysShowCameraName={displayCameraNames}
useWebGL={useWebGL}
playInBackground={false}
showStats={statsStates[camera.name]}
streamName={streamName}
onClick={() => onSelectCamera(camera.name)}
onError={(e) => handleError(camera.name, e)}
onResetLiveMode={() => resetPreferredLiveMode(camera.name)}
playAudio={audioStates[camera.name] ?? false}
volume={volumeStates[camera.name]}
/>
</LiveContextMenu>
);
})}
</div>
{isDesktop && (
<div
className={cn(
"fixed",
isDesktop && "bottom-12 lg:bottom-9",
isMobile && "bottom-12 lg:bottom-16",
hasScrollbar && isDesktop ? "right-6" : "right-3",
"z-50 flex flex-row gap-2",
)}
>
<Tooltip>
<TooltipTrigger asChild>
{includeBirdseye && birdseyeConfig?.enabled && (
<div
className="cursor-pointer rounded-lg bg-secondary text-secondary-foreground opacity-60 transition-all duration-300 hover:bg-muted hover:opacity-100"
onClick={toggleFullscreen}
className={(() => {
const aspectRatio =
birdseyeConfig.width / birdseyeConfig.height;
if (aspectRatio > 2) {
return `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
} else if (aspectRatio < 1) {
return `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
} else {
return "aspect-video";
}
})()}
ref={birdseyeContainerRef}
>
{fullscreen ? (
<FaCompress className="size-5 md:m-[6px]" />
) : (
<FaExpand className="size-5 md:m-[6px]" />
)}
<BirdseyeLivePlayer
birdseyeConfig={birdseyeConfig}
liveMode={birdseyeConfig.restream ? "mse" : "jsmpeg"}
onClick={() => onSelectCamera("birdseye")}
containerRef={birdseyeContainerRef}
/>
</div>
</TooltipTrigger>
<TooltipContent>
{fullscreen
? t("button.exitFullscreen", { ns: "common" })
: t("button.fullscreen", { ns: "common" })}
</TooltipContent>
</Tooltip>
</div>
)}
{cameras.map((camera) => {
let grow;
const aspectRatio =
camera.detect.width / camera.detect.height;
if (aspectRatio > 2) {
grow = `${mobileLayout == "grid" && "col-span-2"} aspect-wide`;
} else if (aspectRatio < 1) {
grow = `${mobileLayout == "grid" && "row-span-2 h-full"} aspect-tall`;
} else {
grow = "aspect-video";
}
const availableStreams = camera.live.streams || {};
const firstStreamEntry =
Object.values(availableStreams)[0] || "";
const streamNameFromSettings =
currentGroupStreamingSettings?.[camera.name]?.streamName ||
"";
const streamExists =
streamNameFromSettings &&
Object.values(availableStreams).includes(
streamNameFromSettings,
);
const streamName = streamExists
? streamNameFromSettings
: firstStreamEntry;
const streamType =
currentGroupStreamingSettings?.[camera.name]?.streamType;
const autoLive =
streamType !== undefined
? streamType !== "no-streaming"
: undefined;
const showStillWithoutActivity =
currentGroupStreamingSettings?.[camera.name]?.streamType !==
"continuous";
const useWebGL =
currentGroupStreamingSettings?.[camera.name]
?.compatibilityMode || false;
return (
<LiveContextMenu
className={grow}
key={camera.name}
camera={camera.name}
cameraGroup={cameraGroup}
streamName={streamName}
preferredLiveMode={
preferredLiveModes[camera.name] ?? "mse"
}
isRestreamed={isRestreamedStates[camera.name]}
supportsAudio={
supportsAudioOutputStates[streamName]?.supportsAudio ??
false
}
audioState={audioStates[camera.name]}
toggleAudio={() => toggleAudio(camera.name)}
statsState={statsStates[camera.name]}
toggleStats={() => toggleStats(camera.name)}
volumeState={volumeStates[camera.name] ?? 1}
setVolumeState={(value) =>
setVolumeStates({
[camera.name]: value,
})
}
muteAll={muteAll}
unmuteAll={unmuteAll}
resetPreferredLiveMode={() =>
resetPreferredLiveMode(camera.name)
}
config={config}
>
<LivePlayer
cameraRef={cameraRef}
key={camera.name}
className={`${grow} rounded-lg bg-black md:rounded-2xl`}
windowVisible={
windowVisible && visibleCameras.includes(camera.name)
}
cameraConfig={camera}
preferredLiveMode={
preferredLiveModes[camera.name] ?? "mse"
}
autoLive={autoLive ?? globalAutoLive}
showStillWithoutActivity={
showStillWithoutActivity ?? true
}
alwaysShowCameraName={displayCameraNames}
useWebGL={useWebGL}
playInBackground={false}
showStats={statsStates[camera.name]}
streamName={streamName}
onClick={() => onSelectCamera(camera.name)}
onError={(e) => handleError(camera.name, e)}
onResetLiveMode={() =>
resetPreferredLiveMode(camera.name)
}
playAudio={audioStates[camera.name] ?? false}
volume={volumeStates[camera.name]}
/>
</LiveContextMenu>
);
})}
</div>
{isDesktop && (
<div
className={cn(
"fixed",
isDesktop && "bottom-12 lg:bottom-9",
isMobile && "bottom-12 lg:bottom-16",
hasScrollbar && isDesktop ? "right-6" : "right-3",
"z-50 flex flex-row gap-2",
)}
>
<Tooltip>
<TooltipTrigger asChild>
<div
className="cursor-pointer rounded-lg bg-secondary text-secondary-foreground opacity-60 transition-all duration-300 hover:bg-muted hover:opacity-100"
onClick={toggleFullscreen}
>
{fullscreen ? (
<FaCompress className="size-5 md:m-[6px]" />
) : (
<FaExpand className="size-5 md:m-[6px]" />
)}
</div>
</TooltipTrigger>
<TooltipContent>
{fullscreen
? t("button.exitFullscreen", { ns: "common" })
: t("button.fullscreen", { ns: "common" })}
</TooltipContent>
</Tooltip>
</div>
)}
</>
) : (
<DraggableGridLayout
cameras={cameras}
cameraGroup={cameraGroup}
containerRef={containerRef}
cameraRef={cameraRef}
includeBirdseye={includeBirdseye}
onSelectCamera={onSelectCamera}
windowVisible={windowVisible}
visibleCameras={visibleCameras}
isEditMode={isEditMode}
setIsEditMode={setIsEditMode}
fullscreen={fullscreen}
toggleFullscreen={toggleFullscreen}
/>
)}
</>
) : (
<DraggableGridLayout
cameras={cameras}
cameraGroup={cameraGroup}
containerRef={containerRef}
cameraRef={cameraRef}
includeBirdseye={includeBirdseye}
onSelectCamera={onSelectCamera}
windowVisible={windowVisible}
visibleCameras={visibleCameras}
isEditMode={isEditMode}
setIsEditMode={setIsEditMode}
fullscreen={fullscreen}
toggleFullscreen={toggleFullscreen}
/>
)}
</div>
);
@ -638,15 +660,26 @@ export default function LiveDashboardView({
function NoCameraView() {
const { t } = useTranslation(["views/live"]);
const { auth } = useContext(AuthContext);
const isCustomRole = useIsCustomRole();
// Check if this is a restricted user with no cameras in this group
const isRestricted = isCustomRole && auth.isAuthenticated;
return (
<div className="flex size-full items-center justify-center">
<EmptyCard
icon={<BsFillCameraVideoOffFill className="size-8" />}
title={t("noCameras.title")}
description={t("noCameras.description")}
buttonText={t("noCameras.buttonText")}
link="/settings?page=cameraManagement"
title={
isRestricted ? t("noCameras.restricted.title") : t("noCameras.title")
}
description={
isRestricted
? t("noCameras.restricted.description")
: t("noCameras.description")
}
buttonText={!isRestricted ? t("noCameras.buttonText") : undefined}
link={!isRestricted ? "/settings?page=cameraManagement" : undefined}
/>
</div>
);

View File

@ -19,7 +19,6 @@ import useKeyboardListener, {
import scrollIntoView from "scroll-into-view-if-needed";
import InputWithTags from "@/components/input/InputWithTags";
import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
import { isEqual } from "lodash";
import { formatDateToLocaleString } from "@/utils/dateUtil";
import SearchThumbnailFooter from "@/components/card/SearchThumbnailFooter";
import ExploreSettings from "@/components/settings/SearchSettings";
@ -213,7 +212,7 @@ export default function SearchView({
// detail
const [searchDetail, setSearchDetail] = useState<SearchResult>();
const [selectedId, setSelectedId] = useState<string>();
const [page, setPage] = useState<SearchTab>("snapshot");
// remove duplicate event ids
@ -229,6 +228,16 @@ export default function SearchView({
return results;
}, [searchResults]);
const searchDetail = useMemo(() => {
if (!selectedId) return undefined;
// summary view
if (defaultView === "summary" && exploreEvents) {
return exploreEvents.find((r) => r.id === selectedId);
}
// grid view
return uniqueResults.find((r) => r.id === selectedId);
}, [selectedId, uniqueResults, exploreEvents, defaultView]);
// search interaction
const [selectedObjects, setSelectedObjects] = useState<string[]>([]);
@ -256,7 +265,7 @@ export default function SearchView({
}
} else {
setPage(page);
setSearchDetail(item);
setSelectedId(item.id);
}
},
[selectedObjects],
@ -295,26 +304,12 @@ export default function SearchView({
}
};
// update search detail when results change
// clear selected item when search results clear
useEffect(() => {
if (searchDetail) {
const results =
defaultView === "summary" ? exploreEvents : searchResults?.flat();
if (results) {
const updatedSearchDetail = results.find(
(result) => result.id === searchDetail.id,
);
if (
updatedSearchDetail &&
!isEqual(updatedSearchDetail, searchDetail)
) {
setSearchDetail(updatedSearchDetail);
}
}
if (!searchResults && !exploreEvents) {
setSelectedId(undefined);
}
}, [searchResults, exploreEvents, searchDetail, defaultView]);
}, [searchResults, exploreEvents]);
const hasExistingSearch = useMemo(
() => searchResults != undefined || searchFilter != undefined,
@ -340,7 +335,7 @@ export default function SearchView({
? results.length - 1
: (currentIndex - 1 + results.length) % results.length;
setSearchDetail(results[newIndex]);
setSelectedId(results[newIndex].id);
}
}, [uniqueResults, exploreEvents, searchDetail, defaultView]);
@ -357,7 +352,7 @@ export default function SearchView({
const newIndex =
currentIndex === -1 ? 0 : (currentIndex + 1) % results.length;
setSearchDetail(results[newIndex]);
setSelectedId(results[newIndex].id);
}
}, [uniqueResults, exploreEvents, searchDetail, defaultView]);
@ -509,7 +504,7 @@ export default function SearchView({
<SearchDetailDialog
search={searchDetail}
page={page}
setSearch={setSearchDetail}
setSearch={(item) => setSelectedId(item?.id)}
setSearchPage={setPage}
setSimilarity={
searchDetail && (() => setSimilaritySearch(searchDetail))
@ -629,7 +624,7 @@ export default function SearchView({
detail: boolean,
) => {
if (detail && selectedObjects.length == 0) {
setSearchDetail(value);
setSelectedId(value.id);
} else {
onSelectSearch(
value,
@ -724,8 +719,7 @@ export default function SearchView({
defaultView == "summary" && (
<div className="scrollbar-container flex size-full flex-col overflow-y-auto">
<ExploreView
searchDetail={searchDetail}
setSearchDetail={setSearchDetail}
setSearchDetail={(item) => setSelectedId(item?.id)}
setSimilaritySearch={setSimilaritySearch}
onSelectSearch={onSelectSearch}
/>

View File

@ -198,9 +198,9 @@ export default function TriggerView({
return axios
.put("config/set", configBody)
.then((configResponse) => {
.then(async (configResponse) => {
if (configResponse.status === 200) {
updateConfig();
await updateConfig();
const displayName =
friendly_name && friendly_name !== ""
? `${friendly_name} (${name})`
@ -353,9 +353,9 @@ export default function TriggerView({
return axios
.put("config/set", configBody)
.then((configResponse) => {
.then(async (configResponse) => {
if (configResponse.status === 200) {
updateConfig();
await updateConfig();
const friendly =
config?.cameras?.[selectedCamera]?.semantic_search
?.triggers?.[name]?.friendly_name;

View File

@ -67,13 +67,14 @@ export default function EnrichmentMetrics({
// features stats
const embeddingInferenceTimeSeries = useMemo(() => {
const groupedEnrichmentMetrics = useMemo(() => {
if (!statsHistory) {
return [];
}
const series: {
[key: string]: {
rawKey: string;
name: string;
metrics: Threshold;
data: { x: number; y: number }[];
@ -90,6 +91,7 @@ export default function EnrichmentMetrics({
if (!(key in series)) {
series[key] = {
rawKey,
name: t("enrichments.embeddings." + rawKey),
metrics: getThreshold(rawKey),
data: [],
@ -99,7 +101,57 @@ export default function EnrichmentMetrics({
series[key].data.push({ x: statsIdx + 1, y: stat });
});
});
return Object.values(series);
// Group series by category (extract base name from raw key)
const grouped: {
[category: string]: {
categoryName: string;
speedSeries?: {
name: string;
metrics: Threshold;
data: { x: number; y: number }[];
};
eventsSeries?: {
name: string;
metrics: Threshold;
data: { x: number; y: number }[];
};
};
} = {};
Object.values(series).forEach((s) => {
// Extract base category name from raw key
// All metrics follow the pattern: {base}_speed and {base}_events_per_second
let categoryKey = s.rawKey;
let isSpeed = false;
if (s.rawKey.endsWith("_speed")) {
categoryKey = s.rawKey.replace("_speed", "");
isSpeed = true;
} else if (s.rawKey.endsWith("_events_per_second")) {
categoryKey = s.rawKey.replace("_events_per_second", "");
isSpeed = false;
}
// Get translated category name
const categoryName = t("enrichments.embeddings." + categoryKey);
if (!(categoryKey in grouped)) {
grouped[categoryKey] = {
categoryName,
speedSeries: undefined,
eventsSeries: undefined,
};
}
if (isSpeed) {
grouped[categoryKey].speedSeries = s;
} else {
grouped[categoryKey].eventsSeries = s;
}
});
return Object.values(grouped);
}, [statsHistory, t, getThreshold]);
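For instance, the paired raw keys added to the translation file above reduce to a single category and therefore render as one card holding both graphs; a minimal sketch of that reduction:
// Two related raw keys collapse to the same category key ("object_description"),
// producing one card with a speed graph and an events-per-second graph.
const exampleRawKeys = [
  "object_description_speed",
  "object_description_events_per_second",
];
const exampleCategories = exampleRawKeys.map((k) =>
  k.endsWith("_speed")
    ? k.replace("_speed", "")
    : k.replace("_events_per_second", ""),
);
// => ["object_description", "object_description"]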
return (
@ -110,35 +162,42 @@ export default function EnrichmentMetrics({
</div>
<div
className={cn(
"mt-4 grid w-full grid-cols-1 gap-2 sm:grid-cols-3",
embeddingInferenceTimeSeries && "sm:grid-cols-4",
"mt-4 grid w-full grid-cols-1 gap-2 sm:grid-cols-2 md:grid-cols-4",
)}
>
{statsHistory.length != 0 ? (
<>
{embeddingInferenceTimeSeries.map((series) => (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5 smart-capitalize">{series.name}</div>
{series.name.endsWith("Speed") ? (
<ThresholdBarGraph
key={series.name}
graphId={`${series.name}-inference`}
name={series.name}
unit="ms"
threshold={series.metrics}
updateTimes={updateTimes}
data={[series]}
/>
) : (
<EventsPerSecondsLineGraph
key={series.name}
graphId={`${series.name}-fps`}
unit=""
name={t("enrichments.infPerSecond")}
updateTimes={updateTimes}
data={[series]}
/>
)}
{groupedEnrichmentMetrics.map((group) => (
<div
key={group.categoryName}
className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl"
>
<div className="mb-5 smart-capitalize">
{group.categoryName}
</div>
<div className="space-y-4">
{group.speedSeries && (
<ThresholdBarGraph
key={`${group.categoryName}-speed`}
graphId={`${group.categoryName}-inference`}
name={t("enrichments.averageInf")}
unit="ms"
threshold={group.speedSeries.metrics}
updateTimes={updateTimes}
data={[group.speedSeries]}
/>
)}
{group.eventsSeries && (
<EventsPerSecondsLineGraph
key={`${group.categoryName}-events`}
graphId={`${group.categoryName}-fps`}
unit=""
name={t("enrichments.infPerSecond")}
updateTimes={updateTimes}
data={[group.eventsSeries]}
/>
)}
</div>
</div>
))}
</>

View File

@ -375,6 +375,50 @@ export default function GeneralMetrics({
return Object.keys(series).length > 0 ? Object.values(series) : undefined;
}, [statsHistory]);
// Check if Intel GPU has all 0% usage values (known bug)
const showIntelGpuWarning = useMemo(() => {
if (!statsHistory || statsHistory.length < 3) {
return false;
}
const gpuKeys = Object.keys(statsHistory[0]?.gpu_usages ?? {});
const hasIntelGpu = gpuKeys.some(
(key) => key === "intel-vaapi" || key === "intel-qsv",
);
if (!hasIntelGpu) {
return false;
}
// Check if all GPU usage values are 0% across all stats
let allZero = true;
let hasDataPoints = false;
for (const stats of statsHistory) {
if (!stats) {
continue;
}
Object.entries(stats.gpu_usages || {}).forEach(([key, gpuStats]) => {
if (key === "intel-vaapi" || key === "intel-qsv") {
if (gpuStats.gpu) {
hasDataPoints = true;
const gpuValue = parseFloat(gpuStats.gpu.slice(0, -1));
if (!isNaN(gpuValue) && gpuValue > 0) {
allZero = false;
}
}
}
});
if (!allZero) {
break;
}
}
return hasDataPoints && allZero;
}, [statsHistory]);
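The detection above assumes each Intel entry in gpu_usages reports usage as a percentage string, which is why the value is trimmed with slice(0, -1) before parseFloat; a tiny illustration with made-up samples of when the warning fires:
// Every sample reports "0%" for the Intel GPU, so data points exist and allZero
// stays true; the warning popover is then rendered next to the GPU usage header.
const exampleSamples = [{ gpu: "0%" }, { gpu: "0%" }, { gpu: "0%" }];
const exampleAllZero = exampleSamples.every(
  (s) => parseFloat(s.gpu.slice(0, -1)) === 0,
); // => true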
// npu stats
const npuSeries = useMemo(() => {
@ -639,8 +683,46 @@ export default function GeneralMetrics({
<>
{statsHistory.length != 0 ? (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5">
<div className="mb-5 flex flex-row items-center justify-between">
{t("general.hardwareInfo.gpuUsage")}
{showIntelGpuWarning && (
<Popover>
<PopoverTrigger asChild>
<button
className="flex flex-row items-center gap-1.5 text-yellow-600 focus:outline-none dark:text-yellow-500"
aria-label={t(
"general.hardwareInfo.intelGpuWarning.title",
)}
>
<CiCircleAlert
className="size-5"
aria-label={t(
"general.hardwareInfo.intelGpuWarning.title",
)}
/>
<span className="text-sm">
{t(
"general.hardwareInfo.intelGpuWarning.message",
)}
</span>
</button>
</PopoverTrigger>
<PopoverContent className="w-80">
<div className="space-y-2">
<div className="font-semibold">
{t(
"general.hardwareInfo.intelGpuWarning.title",
)}
</div>
<div>
{t(
"general.hardwareInfo.intelGpuWarning.description",
)}
</div>
</div>
</PopoverContent>
</Popover>
)}
</div>
{gpuSeries.map((series) => (
<ThresholdBarGraph
@ -729,33 +811,32 @@ export default function GeneralMetrics({
) : (
<Skeleton className="aspect-video w-full" />
)}
</>
)}
{statsHistory[0]?.npu_usages && (
<div
className={cn("mt-4 grid grid-cols-1 gap-2 sm:grid-cols-2")}
>
{statsHistory.length != 0 ? (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5">
{t("general.hardwareInfo.npuUsage")}
</div>
{npuSeries.map((series) => (
<ThresholdBarGraph
key={series.name}
graphId={`${series.name}-npu`}
name={series.name}
unit="%"
threshold={GPUUsageThreshold}
updateTimes={updateTimes}
data={[series]}
/>
))}
</div>
) : (
<Skeleton className="aspect-video w-full" />
{statsHistory[0]?.npu_usages && (
<>
{statsHistory.length != 0 ? (
<div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
<div className="mb-5">
{t("general.hardwareInfo.npuUsage")}
</div>
{npuSeries.map((series) => (
<ThresholdBarGraph
key={series.name}
graphId={`${series.name}-npu`}
name={series.name}
unit="%"
threshold={GPUUsageThreshold}
updateTimes={updateTimes}
data={[series]}
/>
))}
</div>
) : (
<Skeleton className="aspect-video w-full" />
)}
</>
)}
</div>
</>
)}
</div>
</>