mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-05-03 20:17:42 +03:00)

Merge branch 'dev' into motion_improvements

commit ff64ed4ade
.github/workflows/release.yml (vendored, 4 lines changed)
@@ -39,14 +39,14 @@ jobs:
 STABLE_TAG=${BASE}:stable
 PULL_TAG=${BASE}:${BUILD_TAG}
 docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
-for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
+for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
 docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
 done

 # stable tag
 if [[ "${BUILD_TYPE}" == "stable" ]]; then
 docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
-for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
+for variant in standard-arm64 tensorrt tensorrt-jp6 rk rocm; do
 docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
 done
 fi
README.md (20 lines changed)
@@ -4,11 +4,15 @@

 # Frigate - NVR With Realtime Object Detection for IP Cameras

-\[English\] | [简体中文](https://github.com/blakeblackshear/frigate/README_CN.md)
+<a href="https://hosted.weblate.org/engage/frigate-nvr/">
+<img src="https://hosted.weblate.org/widget/frigate-nvr/language-badge.svg" alt="Translation status" />
+</a>
+
+\[English\] | [简体中文](https://github.com/blakeblackshear/frigate/blob/dev/README_CN.md)

 A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

-Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but highly recommended. The Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.
+Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.

 - Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
 - Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary

@@ -32,21 +36,33 @@ If you would like to make a donation to support development, please use [Github

 ## Screenshots

 ### Live dashboard

 <div>
 <img width="800" alt="Live dashboard" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
 </div>

 ### Streamlined review workflow

 <div>
 <img width="800" alt="Streamlined review workflow" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
 </div>

 ### Multi-camera scrubbing

 <div>
 <img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
 </div>

 ### Built-in mask and zone editor

 <div>
 <img width="800" alt="Built-in mask and zone editor" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
 </div>

+## Translations
+
+We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support language translations. Contributions are always welcome.
+
+<a href="https://hosted.weblate.org/engage/frigate-nvr/">
+<img src="https://hosted.weblate.org/widget/frigate-nvr/multi-auto.svg" alt="Translation status" />
+</a>
README_CN.md (20 lines changed)
@@ -6,10 +6,13 @@

 [English](https://github.com/blakeblackshear/frigate) | \[简体中文\]

+<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
+<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
+</a>

 一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备AI物体检测功能。使用OpenCV和TensorFlow在本地为IP摄像头执行实时物体检测。

-强烈推荐使用可选配件:[Google Coral加速器](https://coral.ai/products/)。在该场景下,Coral的性能甚至超过目前的顶级CPU,并且可以以极低的电力开销轻松处理100 以上的画面帧。
+强烈推荐使用GPU或者AI加速器(例如[Google Coral加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/))。它们的性能甚至超过目前的顶级CPU,并且可以以极低的耗电实现更优的性能。

 - 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与Home Assistant紧密集成
 - 设计上通过仅在必要时和必要地点寻找物体,最大限度地减少资源使用并最大化性能
 - 大量利用多进程处理,强调实时性而非处理每一帧

@@ -21,9 +24,9 @@

 - 通过RTSP重新流传输以减少摄像头的连接数
 - 支持WebRTC和MSE,实现低延迟的实时观看

-## 文档(英文)
+## 社区中文翻译文档

-你可以在这里查看文档 https://docs.frigate.video
+你可以在这里查看文档 https://docs.frigate-cn.video

 ## 赞助

@@ -50,3 +53,12 @@

 <div>
 <img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
 </div>

+## 翻译
+
+我们使用 [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) 平台提供翻译支持,欢迎参与进来一起完善。
+
+## 非官方中文讨论社区
+
+欢迎加入中文讨论QQ群:1043861059
+Bilibili:https://space.bilibili.com/3546894915602564
@@ -6,7 +6,7 @@ import numpy as np

 import frigate.util as util
 from frigate.config import DetectorTypeEnum
-from frigate.object_detection import (
+from frigate.object_detection.base import (
     ObjectDetectProcess,
     RemoteObjectDetector,
     load_labels,
@@ -4,7 +4,7 @@

 sudo apt-get update
 sudo apt-get install -y build-essential cmake git wget

-hailo_version="4.20.0"
+hailo_version="4.20.1"
 arch=$(uname -m)

 if [[ $arch == "x86_64" ]]; then
@@ -260,7 +260,7 @@ ENTRYPOINT ["/init"]

 CMD []

 HEALTHCHECK --start-period=300s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
-    CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
+    CMD test -f /dev/shm/.frigate-is-stopping && exit 0; curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1

 # Frigate deps with Node.js and NPM for devcontainer
 FROM deps AS devcontainer
@@ -2,7 +2,7 @@

 set -euxo pipefail

-hailo_version="4.20.0"
+hailo_version="4.20.1"

 if [[ "${TARGETARCH}" == "amd64" ]]; then
     arch="x86_64"

@@ -10,5 +10,5 @@ elif [[ "${TARGETARCH}" == "arm64" ]]; then
     arch="aarch64"
 fi

-wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
+wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-debian12-${TARGETARCH}.tar.gz" | tar -C / -xzf -
 wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"
@@ -25,4 +25,7 @@ elif [[ "${exit_code_service}" -ne 0 ]]; then
     fi
 fi

+# used by the docker healthcheck
+touch /dev/shm/.frigate-is-stopping
+
 exec /run/s6/basedir/bin/halt
@@ -4,6 +4,11 @@

 set -o errexit -o nounset -o pipefail

+# opt out of openvino telemetry
+if [ -e /usr/local/bin/opt_in_out ]; then
+    /usr/local/bin/opt_in_out --opt_out
+fi
+
 # Logs should be sent to stdout so that s6 can collect them

 # Tell S6-Overlay not to restart this service
@@ -138,5 +138,9 @@ function migrate_db_from_media_to_config() {
     fi
 }

+# remove leftover from last run, not normally needed, but just in case
+# used by the docker healthcheck
+rm -f /dev/shm/.frigate-is-stopping
+
 migrate_addon_config_dir
 migrate_db_from_media_to_config
@@ -53,7 +53,7 @@ elif go2rtc_config["api"].get("origin") is None:

 # Need to set default location for HA config
 if go2rtc_config.get("hass") is None:
-    go2rtc_config["hass"] = {"config": "/config"}
+    go2rtc_config["hass"] = {"config": "/homeassistant"}

 # we want to ensure that logs are easy to read
 if go2rtc_config.get("log") is None:

@@ -102,7 +102,7 @@ elif go2rtc_config["ffmpeg"].get("bin") is None:

 # need to replace ffmpeg command when using ffmpeg4
 if LIBAVFORMAT_VERSION_MAJOR < 59:
-    rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
+    rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 10000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
 if go2rtc_config.get("ffmpeg") is None:
     go2rtc_config["ffmpeg"] = {"rtsp": rtsp_args}
 elif go2rtc_config["ffmpeg"].get("rtsp") is None:
@@ -82,7 +82,7 @@ http {

     aio on;

     # file upload size
-    client_max_body_size 10M;
+    client_max_body_size 20M;

     # https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
     vod_open_file_thread_pool default;
@@ -26,7 +26,7 @@ COPY --from=rootfs / /

 COPY docker/rockchip/COCO /COCO
 COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py

-ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
+ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.2/librknnrt.so /usr/lib/

 ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
 ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
@@ -1,2 +1,2 @@
-rknn-toolkit2 == 2.3.0
-rknn-toolkit-lite2 == 2.3.0
+rknn-toolkit2 == 2.3.2
+rknn-toolkit-lite2 == 2.3.2
@@ -77,7 +77,7 @@ Changing the secret will invalidate current tokens.

 Frigate can be configured to leverage features of common upstream authentication proxies such as Authelia, Authentik, oauth2_proxy, or traefik-forward-auth.

-If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
+If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication, as there is no correspondence between users in Frigate's database and users authenticated via the proxy. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
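A minimal sketch of the `auth_secret` wiring described above (the secret value is a placeholder you choose yourself):

```yaml
proxy:
  # Frigate side: must match the value the proxy sends in the X-Proxy-Secret header
  auth_secret: "change-me-to-a-long-random-string"
```

On the proxy side, configure it to attach the same value as an `X-Proxy-Secret` header on every request it forwards to Frigate.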
 Here is an example of how to disable Frigate's authentication and also ensure the requests come only from your known proxy.

@@ -109,6 +109,14 @@ proxy:

 Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.

+A default role can be provided. Any value in the mapped `role` header will override the default.
+
+```yaml
+proxy:
+  ...
+  default_role: viewer
+```

 #### Port Considerations

 **Authenticated Port (8971)**
@@ -15,6 +15,17 @@ Many cameras support encoding options which greatly affect the live view experie

 :::

+## H.265 Cameras via Safari
+
+Some cameras support h265 with different formats, but Safari only supports the annexb format. When using h265 camera streams for recording with devices that use the Safari browser, the `apple_compatibility` option should be used.
+
+```yaml
+cameras:
+  h265_cam: # <------ Doesn't matter what the camera is called
+    ffmpeg:
+      apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
+```

 ## MJPEG Cameras

 Note that mjpeg cameras require encoding the video into h264 for the recording and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg, as sketched below.
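A sketch of that pattern with go2rtc (the MJPEG URL and camera name are hypothetical; `#video=h264` asks go2rtc's ffmpeg source to transcode the stream, and `#hardware` requests hardware encoding where available):

```yaml
go2rtc:
  streams:
    mjpeg_cam: "ffmpeg:http://192.168.1.10/mjpeg#video=h264#hardware" # transcode MJPEG to h264

cameras:
  mjpeg_cam:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/mjpeg_cam # consume the h264 restream
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
```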
@@ -3,7 +3,7 @@ id: face_recognition

 title: Face Recognition
 ---

-Face recognition identifies known individuals by matching detected faces with previously learned facial data. When a known person is recognized, their name will be added as a `sub_label`. This information is included in the UI, filters, as well as in notifications.
+Face recognition identifies known individuals by matching detected faces with previously learned facial data. When a known `person` is recognized, their name will be added as a `sub_label`. This information is included in the UI, in filters, and in notifications.

 ## Model Requirements

@@ -13,6 +13,12 @@ When running a Frigate+ model (or any custom model that natively detects faces)

 When running a default COCO model or another model that does not include `face` as a detectable label, face detection will run via CV2 using a lightweight DNN model that runs on the CPU. In this case, you should _not_ define `face` in your list of objects to track.
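For models that do natively detect faces (the Frigate+ case referenced in the hunk context above), enabling face tracking is one extra list entry; a sketch, with the surrounding object list hypothetical:

```yaml
objects:
  track:
    - person
    - face # only for models that natively detect faces
```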
+:::note
+
+Frigate needs to first detect a `person` before it can detect and recognize a face.
+
+:::

 ### Face Recognition

 Frigate has support for two face recognition model types:

@@ -22,11 +28,13 @@ Frigate has support for two face recognition model types:

 In both cases, a lightweight face landmark detection model is also used to align faces before running recognition.

+All of these features run locally on your system.

 ## Minimum System Requirements

 The `small` model is optimized for efficiency and runs on the CPU; most CPUs should run the model efficiently.

-The `large` model is optimized for accuracy, an integrated or discrete GPU is highly recommended.
+The `large` model is optimized for accuracy; an integrated or discrete GPU is highly recommended. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.

 ## Configuration

@@ -39,7 +47,7 @@ face_recognition:

 ## Advanced Configuration

-Fine-tune face recognition with these optional parameters:
+Fine-tune face recognition with these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled` and `min_area`, as sketched below.
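A sketch of that global/camera split (camera names and values hypothetical):

```yaml
face_recognition:
  enabled: True
  model_size: small
cameras:
  doorbell:
    face_recognition:
      min_area: 2000 # require larger faces on this camera
  back_yard:
    face_recognition:
      enabled: False # skip face recognition entirely on this camera
```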
 ### Detection

@@ -62,6 +70,13 @@ Fine-tune face recognition with these optional parameters:

 - `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
   - Default: `True`.

+## Usage
+
+1. **Enable face recognition** in your configuration file and restart Frigate.
+2. **Upload your face** using the **Add Face** button's wizard in the Face Library section of the Frigate UI.
+3. When Frigate detects and attempts to recognize a face, it will appear in the **Train** tab of the Face Library, along with its associated recognition confidence.
+4. From the **Train** tab, you can **assign the face** to a new or existing person to improve recognition accuracy in the future.

 ## Creating a Robust Training Set

 The number of images needed for a sufficient training set for face recognition varies depending on several factors:

@@ -125,3 +140,19 @@ This can happen for a few different reasons, but this is usually an indicator th

 ### I see scores above the threshold in the train tab, but a sub label wasn't assigned?

 Frigate considers the recognition scores across all recognition attempts for each `person` object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned if the person is consistently recognized with high confidence. This avoids cases where a single high-confidence recognition would throw off the results.

+### Can I use other face recognition software like DoubleTake at the same time as the built-in face recognition?
+
+No, using another face recognition service will interfere with Frigate's built-in face recognition. When using Double Take, the sub_label feature must be disabled if the built-in face recognition is also desired.
+
+### Does face recognition run on the recording stream?
+
+Face recognition does not run on the recording stream; this would be suboptimal for several reasons:
+
+1. The latency of accessing the recordings means the notifications would not include the names of recognized people, because recognition would not complete until afterward.
+2. The embedding models used run on a set image size, so larger images will be scaled down to match this anyway.
+3. Motion clarity is much more important than extra pixels; over-compression and motion blur are much more detrimental to results than resolution.
+
+### I get an unknown error when taking a photo directly with my iPhone
+
+By default iOS devices will use HEIC (High Efficiency Image Container) for images, but this format is not supported for uploads. Choosing `large` as the format instead of `original` will use JPG, which will work correctly.
@@ -9,7 +9,7 @@ Some presets of FFmpeg args are provided by default to make the configuration ea

 It is highly recommended to use hwaccel presets in the config. These presets not only replace the longer args, but they also give Frigate hints about what hardware is available, allowing Frigate to make other optimizations using the GPU, such as when encoding the birdseye restream or when scaling a stream that has a size different from the native stream size.

-See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on how to setup hwaccel for your GPU / iGPU.
+See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more info on how to set up hwaccel for your GPU / iGPU.
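For reference, applying a preset is a single line under a camera's `ffmpeg` section (camera name and preset chosen here are illustrative; pick the preset matching your hardware from the table below):

```yaml
cameras:
  front_door:
    ffmpeg:
      hwaccel_args: preset-vaapi # hint Frigate about the available GPU
```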
 | Preset | Usage | Other Notes |
 | --------------------- | ------------------------------ | ----------------------------------------------------- |
docs/docs/configuration/hardware_acceleration_enrichments.md (new file, 32 lines)
@@ -0,0 +1,32 @@
+---
+id: hardware_acceleration_enrichments
+title: Enrichments
+---
+
+# Enrichments
+
+Some of Frigate's enrichments can use a discrete GPU for accelerated processing.
+
+## Requirements
+
+Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU and configure the enrichment according to its specific documentation.
+
+- **AMD**
+  - ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
+
+- **Intel**
+  - OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
+
+- **Nvidia**
+  - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
+  - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
+
+Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware for object detection, as sketched after the note below.
+
+:::note
+
+A Google Coral is a TPU (Tensor Processing Unit), not a dedicated GPU (Graphics Processing Unit), and therefore does not provide any kind of acceleration for Frigate's enrichments.
+
+:::
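A sketch of that split, assuming the `-tensorrt` image with a USB Coral handling object detection and the GPU handling an enrichment such as LPR (names and devices hypothetical):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb # object detection stays on the Coral TPU

lpr:
  enabled: True
  device: GPU # the LPR enrichment models run on the Nvidia GPU
```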
@@ -1,15 +1,15 @@
 ---
-id: hardware_acceleration
-title: Hardware Acceleration
+id: hardware_acceleration_video
+title: Video Decoding
 ---

-# Hardware Acceleration
+# Video Decoding

-It is highly recommended to use a GPU for hardware acceleration in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
+It is highly recommended to use a GPU for hardware accelerated video decoding in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.

 Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro

-# Officially Supported
+# Object Detection

 ## Raspberry Pi 3/4

@@ -69,12 +69,12 @@ Or map in all the `/dev/video*` devices.

 **Recommended hwaccel Preset**

-| CPU Generation | Intel Driver | Recommended Preset | Notes |
-| -------------- | ------------ | ------------------ | ----------------------------------- |
-| gen1 - gen7 | i965 | preset-vaapi | qsv is not supported |
-| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-* can also be used |
-| gen13+ | iHD / Xe | preset-intel-qsv-* | |
-| Intel Arc GPU | iHD / Xe | preset-intel-qsv-* | |
+| CPU Generation | Intel Driver | Recommended Preset | Notes |
+| -------------- | ------------ | ------------------- | ------------------------------------ |
+| gen1 - gen7 | i965 | preset-vaapi | qsv is not supported |
+| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
+| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
+| Intel Arc GPU | iHD / Xe | preset-intel-qsv-\* | |

 :::

@@ -295,8 +295,7 @@ These instructions were originally based on the [Jellyfin documentation](https:/

 ## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)

 A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
-with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 5.0+ use the `stable-tensorrt-jp5`
-tagged image, or if your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
+with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.

 You will need to use the image with the nvidia container runtime:

@@ -306,7 +305,7 @@ You will need to use the image with the nvidia container runtime:

 docker run -d \
   ...
   --runtime nvidia
-  ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
+  ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
 ```

 ### Docker Compose - Jetson

@@ -315,7 +314,7 @@ docker run -d \

 services:
   frigate:
     ...
-    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
+    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
     runtime: nvidia # Add this
 ```
@@ -3,33 +3,34 @@ id: license_plate_recognition

 title: License Plate Recognition (LPR)
 ---

-Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
+Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field, or a known name as a `sub_label`, to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.

 LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.

-When a plate is recognized, the recognized name is:
+When a plate is recognized, the details are:

 - Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
 - Viewable in the Review Item Details pane in Review (sub labels).
 - Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
 - Filterable through the More Filters menu in Explore.
-- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` tracked object.
+- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` or `motorcycle` tracked object.
 - Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.

 ## Model Requirements

 Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera, as sketched below. This will improve the accuracy and performance of the LPR model.
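Concretely, that is one extra entry in the tracked object list (camera name and scope hypothetical):

```yaml
cameras:
  driveway:
    objects:
      track:
        - car
        - license_plate # only when the model natively detects plates
```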
-Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
+Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that can be configured to run on your CPU or GPU. In this case, you should _not_ define `license_plate` in your list of objects to track.

 :::note

-In the default mode, Frigate's LPR needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera and have a zoomed-in view where a `car` will not be detected, you can still run LPR, but the configuration parameters will differ from the default mode. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section below.
+In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle` before it can recognize a license plate. If you're using a dedicated LPR camera and have a zoomed-in view where a `car` or `motorcycle` will not be detected, you can still run LPR, but the configuration parameters will differ from the default mode. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section below.

 :::

 ## Minimum System Requirements

-License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
+License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

 ## Configuration

@@ -40,23 +41,23 @@ lpr:

   enabled: True
 ```

-Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You can disable it for specific cameras at the camera level:
+Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You should disable it at the camera level for specific cameras where you don't want to run LPR:

 ```yaml
 cameras:
-  driveway:
+  garage:
     ...
     lpr:
       enabled: False
 ```

-For non-dedicated LPR cameras, ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
+For non-dedicated LPR cameras, ensure that your camera is configured to detect objects of type `car` or `motorcycle`, and that a car or motorcycle is actually being detected by Frigate. Otherwise, LPR will not run.

 Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.

 ## Advanced Configuration

-Fine-tune the LPR feature using these optional parameters:
+Fine-tune the LPR feature using these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled`, `min_area`, and `enhancement`.

 ### Detection

@@ -66,6 +67,12 @@ Fine-tune the LPR feature using these optional parameters:

 - **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
   - Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
   - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
+- **`device`**: Device to use to run license plate recognition models.
+  - Default: `CPU`
+  - This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
+- **`model_size`**: The size of the model used to detect text on plates.
+  - Default: `small`
+  - This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates, but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large model does not improve _text recognition_, but it may improve _text detection_. A combined sketch follows this list.
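Putting those two options together (values illustrative):

```yaml
lpr:
  enabled: True
  device: GPU # run the plate detection and recognition models on the GPU
  model_size: large # enhanced text detector, helps with multi-line plates
```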
 ### Recognition

@@ -80,7 +87,7 @@ Fine-tune the LPR feature using these optional parameters:

 ### Matching

-- **`known_plates`**: List of strings or regular expressions that assign custom a `sub_label` to `car` objects when a recognized plate matches a known value.
+- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` and `motorcycle` objects when a recognized plate matches a known value.
   - These labels appear in the UI, filters, and notifications.
   - Unknown plates are still saved but are added to the `recognized_license_plate` field rather than the `sub_label`.
 - **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.

@@ -89,11 +96,11 @@ Fine-tune the LPR feature using these optional parameters:

 ### Image Enhancement

-- **`enhancement`**: A value between **0 and 10** that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. This preprocessing step can sometimes improve accuracy but may also have the opposite effect.
-  - **Default:** `0` (no enhancement)
+- **`enhancement`**: A value between 0 and 10 that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. This preprocessing step can sometimes improve accuracy but may also have the opposite effect.
+  - Default: `0` (no enhancement)
   - Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters, actually making them much harder for Frigate to recognize.
-  - This setting is best adjusted **at the camera level** if running LPR on multiple cameras.
-  - If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at **5** and adjusting as needed. To preview how different enhancement levels affect your plates, use the `debug_save_plates` configuration option (see below).
+  - This setting is best adjusted at the camera level if running LPR on multiple cameras.
+  - If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at 5 and adjusting as needed. To see how different enhancement levels affect your plates, use the `debug_save_plates` configuration option (see below).

 ### Debugging

@@ -155,26 +162,30 @@ cameras:

 Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.

-Users can configure Frigate's LPR in two different ways depending on whether they are using a Frigate+ model:
+To mark a camera as a dedicated LPR camera, add `type: "lpr"` to the camera configuration.

-### Using a Frigate+ Model
+Users can configure Frigate's dedicated LPR mode in two different ways depending on whether a Frigate+ (or native `license_plate` detecting) model is used:
+
+### Using a Frigate+ (or Native `license_plate` Detecting) Model

 Users running a Frigate+ model (or any model that natively detects `license_plate`) can take advantage of `license_plate` detection. This allows license plates to be treated as standard objects in dedicated LPR mode, meaning that alerts, detections, snapshots, zones, and other Frigate features work as usual, and plates are detected efficiently through your configured object detector.

-An example configuration for a dedicated LPR camera using a Frigate+ model:
+An example configuration for a dedicated LPR camera using a `license_plate`-detecting model:

 ```yaml
 # LPR global configuration
 lpr:
   enabled: True
+  device: CPU # can also be GPU if available

 # Dedicated LPR camera configuration
 cameras:
   dedicated_lpr_camera:
     type: "lpr" # required to use dedicated LPR camera mode
     ffmpeg: ... # add your streams
     detect:
       enabled: True
-      fps: 5 # increase to 10 if vehicles move quickly across your frame
+      fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 10 is unnecessary and is not recommended.
       min_initialized: 2
       width: 1920
       height: 1080

@@ -206,7 +217,7 @@ With this setup:

 - Snapshots will have license plate bounding boxes on them.
 - The `frigate/events` MQTT topic will publish tracked object updates.
 - Debug view will display `license_plate` bounding boxes.
-- If you are using a Frigate+ model and want to submit images from your dedicated LPR camera for model training and fine-tuning, annotate both the `car` and the `license_plate` in the snapshots on the Frigate+ website, even if the car is barely visible.
+- If you are using a Frigate+ model and want to submit images from your dedicated LPR camera for model training and fine-tuning, annotate both the `car` / `motorcycle` and the `license_plate` in the snapshots on the Frigate+ website, even if the car is barely visible.

 ### Using the Secondary LPR Pipeline (Without Frigate+)

@@ -218,6 +229,7 @@ An example configuration for a dedicated LPR camera using the secondary pipeline

 # LPR global configuration
 lpr:
   enabled: True
+  device: CPU # can also be GPU if available and correct Docker image is used
   detection_threshold: 0.7 # change if necessary

 # Dedicated LPR camera configuration

@@ -227,7 +239,7 @@ cameras:

     lpr:
       enabled: True
       enhancement: 3 # optional, enhance the image before trying to recognize characters
-    ffmpeg: ...
+    ffmpeg: ... # add your streams
     detect:
       enabled: False # disable Frigate's standard object detection pipeline
       fps: 5 # increase if necessary, though high values may slow down Frigate's enrichments pipeline and use considerable CPU

@@ -256,7 +268,7 @@ With this setup:

 - Review items will always be classified as a `detection`.
 - Snapshots will always be saved.
 - Zones and object masks are **not** used.
-- The `frigate/events` MQTT topic will **not** publish tracked object updates, though `frigate/reviews` will if recordings are enabled.
+- The `frigate/events` MQTT topic will **not** publish tracked object updates with the license plate bounding box and score, though `frigate/reviews` will publish if recordings are enabled. If a plate is recognized as a known plate, publishing will occur with an updated `sub_label` field. If characters are recognized, publishing will occur with an updated `recognized_license_plate` field.
 - License plate snapshots are saved at the highest-scoring moment and appear in Explore.
 - Debug view will not show `license_plate` bounding boxes.

@@ -269,7 +281,7 @@ With this setup:

 | Object Detection | Standard Frigate+ detection applies | Bypasses standard object detection |
 | Zones & Object Masks | Supported | Not supported |
 | Debug View | May show `license_plate` bounding boxes | May **not** show `license_plate` bounding boxes |
-| MQTT `frigate/events` | Publishes tracked object updates | Does **not** publish tracked object updates |
+| MQTT `frigate/events` | Publishes tracked object updates | Publishes limited updates |
 | Explore | Recognized plates available in More Filters | Recognized plates available in More Filters |

 By selecting the appropriate configuration, users can optimize their dedicated LPR cameras based on whether they are using a Frigate+ model or the secondary LPR pipeline.

@@ -280,7 +292,7 @@ By selecting the appropriate configuration, users can optimize their dedicated L

 - Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
 - Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
 - For non-Frigate+ users, you may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
-- The secondary pipeline mode runs a local AI model on your CPU to detect plates. Increasing detect `fps` will increase CPU usage proportionally.
+- The secondary pipeline mode runs a local AI model on your CPU or GPU (depending on how `device` is configured) to detect plates. Increasing detect `fps` will increase resource usage proportionally.

 ## FAQ

@@ -299,9 +311,9 @@ Recognized plates will show as object labels in the debug view and will appear i

 If you are still having issues detecting plates, start with a basic configuration and see the debugging tips below.

-### Can I run LPR without detecting `car` objects?
+### Can I run LPR without detecting `car` or `motorcycle` objects?

-In normal LPR mode, Frigate requires a `car` to be detected first before recognizing a license plate. If you have a dedicated LPR camera, you can change the camera `type` to `"lpr"` to use the Dedicated LPR Camera algorithm. This comes with important caveats, though. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section above.
+In normal LPR mode, Frigate requires a `car` or `motorcycle` to be detected first before recognizing a license plate. If you have a dedicated LPR camera, you can change the camera `type` to `"lpr"` to use the Dedicated LPR Camera algorithm. This comes with important caveats, though. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section above.

 ### How can I improve detection accuracy?

@@ -313,6 +325,10 @@ In normal LPR mode, Frigate requires a `car` to be detected first before recogni

 Yes, but performance depends on camera quality, lighting, and infrared capabilities. Make sure your camera can capture clear images of plates at night.

+### Can I limit LPR to specific zones?
+
+LPR, like other Frigate enrichments, runs at the camera level rather than the zone level. While you can't restrict LPR to specific zones directly, you can control when recognition runs by setting a `min_area` value to filter out smaller detections.

 ### How can I match known plates with minor variations?

 Use `match_distance` to allow small character mismatches. Alternatively, define multiple variations in `known_plates`, as sketched below.
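A sketch of those two matching options together (plate values and names hypothetical):

```yaml
lpr:
  match_distance: 1 # tolerate one missing or misread character
  known_plates:
    delivery_van:
      - "ABC123"
      - "ABC-123" # alternate spacing of the same plate
    neighbor:
      - "7[A-Z]{2}\\d{3}" # regular expressions are also supported
```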
@@ -320,10 +336,10 @@ Use `match_distance` to allow small character mismatches. Alternatively, define

 ### How do I debug LPR issues?

 - View MQTT messages for `frigate/events` to verify detected plates.
-- If you are using a Frigate+ model or a model that detects license plates, watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected with a `car`.
-- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` label will change to the recognized plate when LPR is enabled and working.
+- If you are using a Frigate+ model or a model that detects license plates, watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected with a `car` or `motorcycle`.
+- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
 - Adjust `detection_threshold` and `recognition_threshold` settings per the suggestions [above](#advanced-configuration).
-- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`).
+- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
 - Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only enable this when necessary.

 ```yaml

@@ -335,4 +351,22 @@ Use `match_distance` to allow small character mismatches. Alternatively, define

 ### Will LPR slow down my system?

-LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.
+LPR's performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU or GPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.

+### I am seeing a YOLOv9 plate detection metric in Enrichment Metrics, but I have a Frigate+ or custom model that detects `license_plate`. Why is the YOLOv9 model running?
+
+The YOLOv9 license plate detector model will run (and the metric will appear) if you've enabled LPR but haven't defined `license_plate` as an object to track, either at the global or camera level.
+
+If you are detecting `car` or `motorcycle` on cameras where you don't want to run LPR, make sure you disable LPR at the camera level. And if you do want to run LPR on those cameras, make sure you define `license_plate` as an object to track.
+
+### It looks like Frigate picked up my camera's timestamp or overlay text as the license plate. How can I prevent this?
+
+This could happen if cars or motorcycles travel close to your camera's timestamp or overlay text. You could either move the text through your camera's firmware, or apply a mask to it in Frigate.
+
+If you are using a model that natively detects `license_plate`, add an _object mask_ of type `license_plate` and a _motion mask_ over your text.
+
+If you are not using a model that natively detects `license_plate`, or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
+
+### I see "Error running ... model" in my logs. How can I fix this?
+
+This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.
@@ -42,6 +42,16 @@ go2rtc:

     - "ffmpeg:http_cam#audio=opus" # <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
 ```

+If your camera does not support AAC audio or you are having problems with Live view, try transcoding to AAC audio directly:
+
+```yaml
+go2rtc:
+  streams:
+    rtsp_cam: # <- for RTSP streams
+      - "ffmpeg:rtsp://192.168.1.5:554/live0#video=copy#audio=aac" # <- copies video stream and transcodes to aac audio
+      - "ffmpeg:rtsp_cam#audio=opus" # <- provides support for WebRTC
+```

 If your camera does not have audio and you are having problems with Live view, you should have go2rtc send video only:

 ```yaml
@@ -27,7 +27,7 @@ Frigate supports multiple different detectors that work on different types of ha

 **Nvidia**

 - [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
-- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.
+- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp6` Frigate images when a supported ONNX model is configured.

 **Rockchip**

@@ -152,7 +152,7 @@ Use this configuration for YOLO-based models. When no custom model path or URL i

 ```yaml
 detectors:
-  hailo8l:
+  hailo:
     type: hailo8l
     device: PCIe

@@ -163,6 +163,7 @@ model:

   input_pixel_format: rgb
   input_dtype: int
   model_type: yolo-generic
+  labelmap_path: /labelmap/coco-80.txt

 # The detector automatically selects the default model based on your hardware:
 # - For Hailo-8 hardware: YOLOv6n (default: yolov6n.hef)

@@ -184,7 +185,7 @@ For SSD-based models, provide either a model path or URL to your compiled SSD mo

 ```yaml
 detectors:
-  hailo8l:
+  hailo:
     type: hailo8l
     device: PCIe

@@ -208,7 +209,7 @@ The Hailo detector supports all YOLO models compiled for Hailo hardware that inc

 ```yaml
 detectors:
-  hailo8l:
+  hailo:
     type: hailo8l
     device: PCIe

@@ -219,6 +220,7 @@ model:

   input_pixel_format: rgb
   input_dtype: int
   model_type: yolo-generic
+  labelmap_path: /labelmap/coco-80.txt
   # Optional: Specify a local model path.
   # path: /config/model_cache/hailo/custom_model.hef
   #

@@ -310,13 +312,13 @@ model:

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

-#### YOLOv9
+#### YOLO (v3, v4, v7, v9)

-[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
+YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.

 :::tip

-The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
+The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well.

 :::

@@ -329,12 +331,12 @@ detectors:

     device: GPU

 model:
-  model_type: yolov9
-  width: 640 # <--- should match the imgsize set during model export
-  height: 640 # <--- should match the imgsize set during model export
+  model_type: yolo-generic
+  width: 320 # <--- should match the imgsize set during model export
+  height: 320 # <--- should match the imgsize set during model export
   input_tensor: nchw
   input_dtype: float
-  path: /config/model_cache/yolov9-t.onnx
+  path: /config/model_cache/yolo.onnx
   labelmap_path: /labelmap/coco-80.txt
 ```

@@ -482,7 +484,7 @@ frigate:

 ### Configuration Parameters

-The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
+The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration_video.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.

 The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated, as in the sketch below.
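A hedged sketch of such a config (the `.trt` filename and dimensions here are hypothetical and must match the model you actually generated):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index as shown by nvidia-smi inside the container

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt # a generated .trt model file
  width: 320 # must match the generated model's dimensions
  height: 320
```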
@ -608,7 +610,7 @@ If the correct build is used for your GPU then the GPU will be detected and used
|
||||
|
||||
- **Nvidia**
|
||||
- Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` Frigate image.
|
||||
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp(4/5)` Frigate image.
|
||||
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp6` Frigate image.
|
||||
|
||||
:::
|
||||
|
||||
@ -651,13 +653,13 @@ model:
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
#### YOLOv9
|
||||
#### YOLO (v3, v4, v7, v9)
|
||||
|
||||
[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
|
||||
YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
|
||||
|
||||
:::tip
|
||||
|
||||
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
|
||||
The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate.
|
||||
|
||||
:::
|
||||
|
||||
@ -669,12 +671,35 @@ detectors:
|
||||
type: onnx
|
||||
|
||||
model:
|
||||
model_type: yolov9
|
||||
width: 640 # <--- should match the imgsize set during model export
|
||||
height: 640 # <--- should match the imgsize set during model export
|
||||
model_type: yolo-generic
|
||||
width: 320 # <--- should match the imgsize set during model export
|
||||
height: 320 # <--- should match the imgsize set during model export
|
||||
input_tensor: nchw
|
||||
input_dtype: float
|
||||
path: /config/model_cache/yolov9-t.onnx
|
||||
path: /config/model_cache/yolo.onnx
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
|
||||
|
||||
#### YOLOx
|
||||
|
||||
[YOLOx](https://github.com/Megvii-BaseDetection/YOLOX) models are supported, but not included by default. See [the models section](#downloading-yolo-models) for more information on downloading the YOLOx model for use in Frigate.
|
||||
|
||||
After placing the downloaded onnx model in your config folder, you can use the following configuration:
|
||||
|
||||
```yaml
|
||||
detectors:
|
||||
onnx:
|
||||
type: onnx
|
||||
|
||||
model:
|
||||
model_type: yolox
|
||||
width: 416 # <--- should match the imgsize set during model export
|
||||
height: 416 # <--- should match the imgsize set during model export
|
||||
input_tensor: nchw
|
||||
input_dtype: float_denorm
|
||||
path: /config/model_cache/yolox_tiny.onnx
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
@ -682,7 +707,7 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
|
||||
|
||||
#### RF-DETR
|
||||
|
||||
[RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more informatoin on downloading the RF-DETR model for use in Frigate.
|
||||
[RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more information on downloading the RF-DETR model for use in Frigate.
|
||||
|
||||
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
|
||||
|
||||
@ -786,66 +811,27 @@ Hardware accelerated object detection is supported on the following SoCs:
|
||||
- RK3576
|
||||
- RK3588
|
||||
|
||||
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.3.0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as object detection model.
|
||||
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.3.2.
|
||||
|
||||
### Prerequisites
|
||||
:::tip
|
||||
|
||||
Make sure to follow the [Rockchip specific installation instrucitions](/frigate/installation#rockchip-platform).
|
||||
|
||||
### Configuration
|
||||
|
||||
This `config.yml` shows all relevant options to configure the detector and explains them. All values shown are the default values (except for two). Lines that are required at least to use the detector are labeled as required, all other lines are optional.
|
||||
When using many cameras, one detector may not be enough to keep up. Multiple detectors can be defined, assuming NPU resources are available. An example configuration would be:
|
||||
|
||||
```yaml
|
||||
detectors: # required
|
||||
rknn: # required
|
||||
type: rknn # required
|
||||
# number of NPU cores to use
|
||||
# 0 means choose automatically
|
||||
# increase for better performance if you have a multicore NPU e.g. set to 3 on rk3588
|
||||
detectors:
|
||||
rknn_0:
|
||||
type: rknn
|
||||
num_cores: 0
|
||||
rknn_1:
|
||||
type: rknn
|
||||
num_cores: 0
|
||||
|
||||
model: # required
|
||||
# name of model (will be automatically downloaded) or path to your own .rknn model file
|
||||
# possible values are:
|
||||
# - deci-fp16-yolonas_s
|
||||
# - deci-fp16-yolonas_m
|
||||
# - deci-fp16-yolonas_l
|
||||
# - /config/model_cache/your_custom_model.rknn
|
||||
path: deci-fp16-yolonas_s
|
||||
# width and height of detection frames
|
||||
width: 320
|
||||
height: 320
|
||||
# pixel format of detection frame
|
||||
# default value is rgb but yolo models usually use bgr format
|
||||
input_pixel_format: bgr # required
|
||||
# shape of detection frame
|
||||
input_tensor: nhwc
|
||||
# needs to be adjusted to model, see below
|
||||
labelmap_path: /labelmap.txt # required
|
||||
```
|
||||
|
||||
The correct labelmap must be loaded for each model. If you use a custom model (see notes below), you must make sure to provide the correct labelmap. The table below lists the correct paths for the bundled models:
|
||||
|
||||
| `path` | `labelmap_path` |
|
||||
| --------------------- | --------------------- |
|
||||
| deci-fp16-yolonas\_\* | /labelmap/coco-80.txt |
|
||||
|
||||
### Choosing a model
|
||||
|
||||
:::warning
|
||||
|
||||
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
|
||||
|
||||
:::
|
||||
|
||||
The inference time was determined on a rk3588 with 3 NPU cores.
|
||||
### Prerequisites
|
||||
|
||||
| Model | Size in mb | Inference time in ms |
|
||||
| ------------------- | ---------- | -------------------- |
|
||||
| deci-fp16-yolonas_s | 24 | 25 |
|
||||
| deci-fp16-yolonas_m | 62 | 35 |
|
||||
| deci-fp16-yolonas_l | 81 | 45 |
|
||||
Make sure to follow the [Rockchip specific installation instructions](/frigate/installation#rockchip-platform).
|
||||
|
||||
:::tip
|
||||
|
||||
@ -858,9 +844,99 @@ $ cat /sys/kernel/debug/rknpu/load
|
||||
|
||||
:::
|
||||
|
||||
### Supported Models
|
||||
|
||||
This `config.yml` shows all relevant options to configure the detector and explains them. All values shown are the default values (except for two). Lines that are required to use the detector are labeled as required; all other lines are optional.
|
||||
|
||||
```yaml
|
||||
detectors: # required
|
||||
rknn: # required
|
||||
type: rknn # required
|
||||
# number of NPU cores to use
|
||||
# 0 means choose automatically
|
||||
# increase for better performance if you have a multicore NPU e.g. set to 3 on rk3588
|
||||
num_cores: 0
|
||||
```
|
||||
|
||||
The inference time was determined on a rk3588 with 3 NPU cores.
|
||||
|
||||
| Model | Size in mb | Inference time in ms |
|
||||
| --------------------- | ---------- | -------------------- |
|
||||
| deci-fp16-yolonas_s | 24 | 25 |
|
||||
| deci-fp16-yolonas_m | 62 | 35 |
|
||||
| deci-fp16-yolonas_l | 81 | 45 |
|
||||
| frigate-fp16-yolov9-t | 6 | 35 |
|
||||
| rock-i8-yolox_nano | 3 | 14 |
|
||||
| rock-i8_yolox_tiny | 6 | 18 |
|
||||
|
||||
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
|
||||
- You can also provide your own `.rknn` model. Do not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to the `.rknn` format, use `rknn-toolkit2` (requires an x86 machine) as described below. Note that post-processing is only available for the supported models.
|
||||
|
||||
#### YOLO-NAS
|
||||
|
||||
```yaml
|
||||
model: # required
|
||||
# name of model (will be automatically downloaded) or path to your own .rknn model file
|
||||
# possible values are:
|
||||
# - deci-fp16-yolonas_s
|
||||
# - deci-fp16-yolonas_m
|
||||
# - deci-fp16-yolonas_l
|
||||
# your yolonas_model.rknn
|
||||
path: deci-fp16-yolonas_s
|
||||
model_type: yolonas
|
||||
width: 320
|
||||
height: 320
|
||||
input_pixel_format: bgr
|
||||
input_tensor: nhwc
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
:::warning
|
||||
|
||||
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
|
||||
|
||||
:::
|
||||
|
||||
#### YOLO (v9)
|
||||
|
||||
```yaml
|
||||
model: # required
|
||||
# name of model (will be automatically downloaded) or path to your own .rknn model file
|
||||
# possible values are:
|
||||
# - frigate-fp16-yolov9-t
|
||||
# - frigate-fp16-yolov9-s
|
||||
# - frigate-fp16-yolov9-m
|
||||
# - frigate-fp16-yolov9-c
|
||||
# - frigate-fp16-yolov9-e
|
||||
# your yolo_model.rknn
|
||||
path: frigate-fp16-yolov9-t
|
||||
model_type: yolo-generic
|
||||
width: 320
|
||||
height: 320
|
||||
input_tensor: nhwc
|
||||
input_dtype: float
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
#### YOLOx
|
||||
|
||||
```yaml
|
||||
model: # required
|
||||
# name of model (will be automatically downloaded) or path to your own .rknn model file
|
||||
# possible values are:
|
||||
# - rock-i8-yolox_nano
|
||||
# - rock-i8-yolox_tiny
|
||||
# - rock-fp16-yolox_nano
|
||||
# - rock-fp16-yolox_tiny
|
||||
# your yolox_model.rknn
|
||||
path: rock-i8-yolox_nano
|
||||
model_type: yolox
|
||||
width: 416
|
||||
height: 416
|
||||
input_tensor: nhwc
|
||||
labelmap_path: /labelmap/coco-80.txt
|
||||
```
|
||||
|
||||
### Converting your own onnx model to rknn format
|
||||
|
||||
To convert an onnx model to the rknn format using the [rknn-toolkit2](https://github.com/airockchip/rknn-toolkit2/) you have to:
|
||||
@ -880,7 +956,7 @@ output_name: "{input_basename}"
|
||||
config:
|
||||
mean_values: [[0, 0, 0]]
|
||||
std_values: [[255, 255, 255]]
|
||||
quant_img_rgb2bgr: true
|
||||
quant_img_RGB2BGR: true
|
||||
```
|
||||
|
||||
Explanation of the parameters:
|
||||
@ -893,7 +969,7 @@ Explanation of the paramters:
|
||||
- `soc`: the SoC this model was built for (e.g. "rk3588")
|
||||
- `tk_version`: Version of `rknn-toolkit2` (e.g. "2.3.0")
|
||||
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
|
||||
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
|
||||
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.2/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.2_EN.pdf).
|
||||
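Putting the pieces above together, a conversion config sketch using only the keys shown in this excerpt (verify the full schema against the linked manual) could look like:

```yaml
output_name: "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"
config:
  mean_values: [[0, 0, 0]]
  std_values: [[255, 255, 255]]
  quant_img_RGB2BGR: true
```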
|
||||
# Models
|
||||
|
||||
@ -957,3 +1033,41 @@ The pre-trained YOLO-NAS weights from DeciAI are subject to their license and ca
|
||||
:::
|
||||
|
||||
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
|
||||
|
||||
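As a hedged sketch, a 320x320 YOLO-NAS export from the notebook could then be wired into the ONNX detector like this (the model filename is a placeholder):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320 # <--- should match the imgsize set during export
  height: 320 # <--- should match the imgsize set during export
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/model_cache/yolo_nas_s.onnx # placeholder filename
  labelmap_path: /labelmap/coco-80.txt
```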
### Downloading YOLO Models
|
||||
|
||||
#### YOLOx
|
||||
|
||||
YOLOx models can be downloaded [from the YOLOx repo](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ONNXRuntime).
|
||||
|
||||
#### YOLOv3, YOLOv4, and YOLOv7
|
||||
|
||||
To export as ONNX:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/NateMeyer/tensorrt_demos
|
||||
cd tensorrt_demos/yolo
|
||||
./download_yolo.sh
|
||||
python3 yolo_to_onnx.py -m yolov7-320
|
||||
```
|
||||
|
||||
#### YOLOv9
|
||||
|
||||
YOLOv9 models can be exported using the code below, or they [can be downloaded from Hugging Face](https://huggingface.co/Xenova/yolov9-onnx/tree/main)
|
||||
|
||||
```sh
|
||||
git clone https://github.com/WongKinYiu/yolov9
|
||||
cd yolov9
|
||||
|
||||
# setup the virtual environment so installation doesn't affect main system
|
||||
python3 -m venv ./
|
||||
bin/pip install -r requirements.txt
|
||||
bin/pip install onnx onnxruntime "onnx-simplifier>=0.4.1"
|
||||
|
||||
# download the weights
|
||||
wget -O yolov9-t.pt "https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-t-converted.pt"
|
||||
|
||||
# prepare and run export script
|
||||
sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" ./models/experimental.py
|
||||
bin/python3 export.py --weights ./yolov9-t.pt --imgsz 320 --simplify --include onnx
|
||||
```
|
||||
|
||||
@ -174,6 +174,10 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe
|
||||
|
||||
:::
|
||||
|
||||
## Apple Compatibility with H.265 Streams
|
||||
|
||||
Apple devices running the Safari browser may fail to playback h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
|
||||
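A minimal sketch of enabling this per camera, assuming the `apple_compatibility` option name from the linked camera-specific docs (the camera name is hypothetical):

```yaml
cameras:
  back_door: # hypothetical camera name
    ffmpeg:
      apple_compatibility: true # retag h.265 so Safari can play it back
```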
|
||||
## Syncing Recordings With Disk
|
||||
|
||||
In some cases recording files may be deleted, but Frigate will not know this has happened. Recordings sync can be enabled, which tells Frigate to check the file system and delete any db entries for files which don't exist.
|
||||
|
||||
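A minimal sketch of enabling it, assuming the record-level `sync_recordings` option:

```yaml
record:
  sync_recordings: True # check the file system and drop db entries for missing files
```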
@ -78,16 +78,19 @@ proxy:
|
||||
# Optional: Mapping for headers from upstream proxies. Only used if Frigate's auth
|
||||
# is disabled.
|
||||
# NOTE: Many authentication proxies pass a header downstream with the authenticated
|
||||
# user name. Not all values are supported. It must be a whitelisted header.
|
||||
# user name and role. Not all values are supported. It must be a whitelisted header.
|
||||
# See the docs for more info.
|
||||
header_map:
|
||||
user: x-forwarded-user
|
||||
role: x-forwarded-role
|
||||
# Optional: Url for logging out a user. This sets the location of the logout url in
|
||||
# the UI.
|
||||
logout_url: /api/logout
|
||||
# Optional: Auth secret that is checked against the X-Proxy-Secret header sent from
|
||||
# the proxy. If not set, all requests are trusted regardless of origin.
|
||||
auth_secret: None
|
||||
# Optional: The default role to use for proxy auth. Must be "admin" or "viewer"
|
||||
default_role: viewer
|
||||
|
||||
# Optional: Authentication configuration
|
||||
auth:
|
||||
@ -543,9 +546,9 @@ semantic_search:
|
||||
model_size: "small"
|
||||
|
||||
# Optional: Configuration for face recognition capability
|
||||
# NOTE: Can (enabled, min_area) be overridden at the camera level
|
||||
# NOTE: enabled, min_area can be overridden at the camera level
|
||||
face_recognition:
|
||||
# Optional: Enable semantic search (default: shown below)
|
||||
# Optional: Enable face recognition (default: shown below)
|
||||
enabled: False
|
||||
# Optional: Minimum face distance score required to mark as a potential match (default: shown below)
|
||||
unknown_score: 0.8
|
||||
@ -560,12 +563,18 @@ face_recognition:
|
||||
save_attempts: 100
|
||||
# Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
|
||||
blur_confidence_filter: True
|
||||
# Optional: Set the model size used face recognition. (default: shown below)
|
||||
model_size: small
|
||||
|
||||
# Optional: Configuration for license plate recognition capability
|
||||
# NOTE: enabled, min_area, and enhancement can be overridden at the camera level
|
||||
lpr:
|
||||
# Optional: Enable license plate recognition (default: shown below)
|
||||
enabled: False
|
||||
# Optional: The device to run the models on (default: shown below)
|
||||
device: CPU
|
||||
# Optional: Set the model size used for text detection. (default: shown below)
|
||||
model_size: small
|
||||
# Optional: License plate object confidence score required to begin running recognition (default: shown below)
|
||||
detection_threshold: 0.7
|
||||
# Optional: Minimum area of license plate to begin running recognition (default: shown below)
|
||||
|
||||
@ -152,7 +152,7 @@ go2rtc:
|
||||
my_camera: rtsp://username:$%40foo%25@192.168.1.100
|
||||
```
|
||||
|
||||
See [this comment(https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
|
||||
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
|
||||
|
||||
## Advanced Restream Configurations
|
||||
|
||||
|
||||
@ -90,19 +90,7 @@ semantic_search:
|
||||
|
||||
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
|
||||
|
||||
**NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
|
||||
|
||||
- **AMD**
|
||||
|
||||
- ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
|
||||
|
||||
- **Intel**
|
||||
|
||||
- OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
|
||||
|
||||
- **Nvidia**
|
||||
- Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
|
||||
- Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
|
||||
See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
|
||||
|
||||
:::
|
||||
|
||||
|
||||
@ -84,7 +84,13 @@ Only car objects can trigger the `front_yard_street` zone and only person can tr
|
||||
|
||||
### Zone Loitering
|
||||
|
||||
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time before the object will be considered in the zone.
|
||||
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time after which the object will be considered in the zone.
|
||||
|
||||
:::note
|
||||
|
||||
When using loitering zones, a review item will remain active until the object leaves. Loitering zones are only meant to be used in areas where loitering is not expected behavior.
|
||||
|
||||
:::
|
||||
|
||||
```yaml
|
||||
cameras:
|
||||
|
||||
@ -91,4 +91,4 @@ The `CODEOWNERS` file should be updated to include the `docker/board` along with
|
||||
|
||||
# Docs
|
||||
|
||||
At a minimum the `installation`, `object_detectors`, `hardware_acceleration`, and `ffmpeg-presets` docs should be updated (if applicable) to reflect the configuration of this community board.
|
||||
At a minimum the `installation`, `object_detectors`, `hardware_acceleration_video`, and `ffmpeg-presets` docs should be updated (if applicable) to reflect the configuration of this community board.
|
||||
|
||||
@ -239,11 +239,8 @@ sudo cp docker/main/rootfs/usr/local/nginx/conf/* /usr/local/nginx/conf/ && sudo
|
||||
|
||||
## Contributing translations of the Web UI
|
||||
|
||||
If you'd like to contribute translations to Frigate, please follow these steps:
|
||||
Frigate uses [Weblate](https://weblate.org) to manage translations of the Web UI. To contribute translation, sign up for an account at Weblate and navigate to the Frigate NVR project:
|
||||
|
||||
1. Fork the repository and create a new branch specifically for your translation work
|
||||
2. Locate the localization files in the web/public/locales directory
|
||||
3. Add or modify the appropriate language JSON files, maintaining the existing key structure while translating only the values
|
||||
4. Ensure your translations maintain proper formatting, including any placeholder variables (like `{{example}}`)
|
||||
5. Before submitting, thoroughly review the UI
|
||||
6. When creating your PR, include a brief description of the languages you've added or updated, and reference any related issues
|
||||
https://hosted.weblate.org/projects/frigate-nvr/
|
||||
|
||||
When translating, maintain the existing key structure while translating only the values. Ensure your translations maintain proper formatting, including any placeholder variables (like `{{example}}`).
|
||||
|
||||
@ -28,7 +28,7 @@ For the Dahua/Loryta 5442 camera, I use the following settings:
|
||||
- Encode Mode: H.264
|
||||
- Resolution: 2688\*1520
|
||||
- Frame Rate(FPS): 15
|
||||
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](../configuration/live) for more info)
|
||||
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](/configuration/live#camera_settings_recommendations) for more info)
|
||||
|
||||
**Sub Stream (Detection)**
|
||||
|
||||
|
||||
@ -38,6 +38,7 @@ Frigate supports multiple different detectors that work on different types of ha
|
||||
**Most Hardware**
|
||||
|
||||
- [Hailo](#hailo-8): The Hailo-8 and Hailo-8L AI acceleration modules are available in M.2 format and with a HAT for RPi devices, offering a wide range of compatibility.
|
||||
|
||||
- [Supports many model architectures](../../configuration/object_detectors#configuration)
|
||||
- Runs best with tiny or small size models
|
||||
|
||||
@ -73,10 +74,10 @@ Frigate supports multiple different detectors that work on different types of ha
|
||||
|
||||
### Hailo-8
|
||||
|
||||
|
||||
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn’t provided.
|
||||
|
||||
**Default Model Configuration:**
|
||||
|
||||
- **Hailo-8L:** Default model is **YOLOv6n**.
|
||||
- **Hailo-8:** Default model is **YOLOv6n**.
|
||||
|
||||
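A minimal detector sketch, assuming the `hailo8l` type string from the detector docs (per the paragraph above, the integration auto-detects whether a Hailo-8 or Hailo-8L is installed and picks the default model accordingly):

```yaml
detectors:
  hailo:
    type: hailo8l # covers both Hailo-8 and Hailo-8L; hardware is auto-detected
```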
@ -90,6 +91,7 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
|
||||
### Google Coral TPU
|
||||
|
||||
Frigate supports both the USB and M.2 versions of the Google Coral.
|
||||
|
||||
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
|
||||
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
|
||||
|
||||
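A minimal sketch for each form factor, assuming the `edgetpu` detector type and device strings from the detector docs:

```yaml
detectors:
  coral_usb:
    type: edgetpu
    device: usb # USB Coral; no host driver needed
  coral_pci:
    type: edgetpu
    device: pci # M.2 / PCIe Coral; requires the host driver from coral.ai
```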
@ -107,19 +109,17 @@ More information is available [in the detector docs](/configuration/object_detec
|
||||
|
||||
Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:
|
||||
|
||||
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
|
||||
| -------------------- | -------------------------- | ------------------------- | ------------------------- | -------------------------------------- |
|
||||
| Intel i3 6100T | 15 - 35 ms | | | Can only run one detector instance |
|
||||
| Intel i5 6500 | ~ 15 ms | | | |
|
||||
| Intel i5 7200u | 15 - 25 ms | | | |
|
||||
| Intel i5 7500 | ~ 15 ms | | | |
|
||||
| Intel i3 8100 | ~ 15 ms | | | |
|
||||
| Intel i5 1135G7 | 10 - 15 ms | | | |
|
||||
| Intel i3 12000 | | 320: ~ 19 ms 640: ~ 54 ms | | |
|
||||
| Intel i5 12600K | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
|
||||
| Intel i7 12650H | ~ 15 ms | 320: ~ 20 ms 640: ~ 42 ms | 336: 50 ms | |
|
||||
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms | | |
|
||||
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
|
||||
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
|
||||
| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
|
||||
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
|
||||
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
|
||||
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
|
||||
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
|
||||
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
|
||||
| Intel N100 | ~ 15 ms | 320: ~ 20 ms | | |
|
||||
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
|
||||
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
|
||||
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
|
||||
|
||||
### TensorRT - Nvidia GPU
|
||||
|
||||
@ -128,7 +128,7 @@ The TensortRT detector is able to run on x86 hosts that have an Nvidia GPU which
|
||||
Inference speeds will vary greatly depending on the GPU and the model used.
|
||||
`tiny` variants are faster than the equivalent non-tiny model; some known examples are below:
|
||||
|
||||
| Name | YoloV7 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
|
||||
| Name | YOLOv7 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
|
||||
| --------------- | --------------------- | ------------------------- | ------------------------- |
|
||||
| GTX 1060 6GB | ~ 7 ms | | |
|
||||
| GTX 1070 | ~ 6 ms | | |
|
||||
@ -143,15 +143,15 @@ Inference speeds will vary greatly depending on the GPU and the model used.
|
||||
|
||||
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
|
||||
|
||||
| Name | YoloV9 Inference Time | YOLO-NAS Inference Time |
|
||||
| --------------- | --------------------- | ------------------------- |
|
||||
| AMD 780M | ~ 14 ms | ~ 60 ms |
|
||||
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
|
||||
| -------- | --------------------- | ------------------------- |
|
||||
| AMD 780M | ~ 14 ms | 320: ~ 30 ms 640: ~ 60 ms |
|
||||
|
||||
## Community Supported Detectors
|
||||
|
||||
### Nvidia Jetson
|
||||
|
||||
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
|
||||
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
|
||||
|
||||
Inference speed will vary depending on the YOLO model, Jetson platform and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
|
||||
|
||||
@ -165,6 +165,11 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
|
||||
- RK3576
|
||||
- RK3588
|
||||
|
||||
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | YOLOx Inference Time |
|
||||
| -------------- | --------------------- | --------------------------- | ----------------------- |
|
||||
| rk3588 3 cores | tiny: ~ 35 ms | small: ~ 20 ms med: ~ 30 ms | nano: 14 ms tiny: 18 ms |
|
||||
| rk3566 1 core | | small: ~ 96 ms | |
|
||||
|
||||
The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.
|
||||
|
||||
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
|
||||
|
||||
@ -165,6 +165,8 @@ devices:
|
||||
- /dev/dma_heap
|
||||
- /dev/rga
|
||||
- /dev/mpp_service
|
||||
volumes:
|
||||
- /sys/:/sys/:ro
|
||||
```
|
||||
|
||||
or add these options to your `docker run` command:
|
||||
@ -175,12 +177,13 @@ or add these options to your `docker run` command:
|
||||
--device /dev/dri \
|
||||
--device /dev/dma_heap \
|
||||
--device /dev/rga \
|
||||
--device /dev/mpp_service
|
||||
--device /dev/mpp_service \
|
||||
--volume /sys/:/sys/:ro
|
||||
```
|
||||
|
||||
#### Configuration
|
||||
|
||||
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration#rockchip-platform).
|
||||
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
|
||||
|
||||
## Docker
|
||||
|
||||
@ -313,7 +316,8 @@ If you choose to run Frigate via LXC in Proxmox the setup can be complex so be p
|
||||
|
||||
:::
|
||||
|
||||
Suggestions include:
|
||||
Suggestions include:
|
||||
|
||||
- For Intel-based hardware acceleration, to allow access to the `/dev/dri/renderD128` device with major number 226 and minor number 128, add the following lines to the `/etc/pve/lxc/<id>.conf` LXC configuration:
|
||||
- `lxc.cgroup2.devices.allow: c 226:128 rwm`
|
||||
- `lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file`
|
||||
@ -404,7 +408,7 @@ mkdir -p /share/share_vol2/frigate/media
|
||||
# Also replace the time zone value for 'TZ' in the sample command.
|
||||
# Example command will create a docker container that uses at most 2 CPUs and 4G RAM.
|
||||
# You may need to add "--env=LIBVA_DRIVER_NAME=i965 \" to the following docker run command if you
|
||||
# have certain CPU (e.g., J4125). See https://docs.frigate.video/configuration/hardware_acceleration.
|
||||
# have certain CPU (e.g., J4125). See https://docs.frigate.video/configuration/hardware_acceleration_video.
|
||||
docker run \
|
||||
--name=frigate \
|
||||
--shm-size=256m \
|
||||
|
||||
@ -162,7 +162,7 @@ FFmpeg arguments for other types of cameras can be found [here](../configuration
|
||||
|
||||
### Step 3: Configure hardware acceleration (recommended)
|
||||
|
||||
Now that you have a working camera configuration, you want to setup hardware acceleration to minimize the CPU required to decode your video streams. See the [hardware acceleration](../configuration/hardware_acceleration.md) config reference for examples applicable to your hardware.
|
||||
Now that you have a working camera configuration, you want to setup hardware acceleration to minimize the CPU required to decode your video streams. See the [hardware acceleration](../configuration/hardware_acceleration_video.md) config reference for examples applicable to your hardware.
|
||||
|
||||
Here is an example configuration with hardware acceleration configured to work with most Intel processors with an integrated GPU using the [preset](../configuration/ffmpeg_presets.md):
|
||||
|
||||
@ -303,6 +303,7 @@ By default, Frigate will retain video of all tracked objects for 10 days. The fu
|
||||
### Step 7: Complete config
|
||||
|
||||
At this point you have a complete config with basic functionality.
|
||||
|
||||
- View [common configuration examples](../configuration/index.md#common-configuration-examples) for a list of common configuration examples.
|
||||
- View [full config reference](../configuration/reference.md) for a complete list of configuration options.
|
||||
|
||||
|
||||
@ -104,7 +104,9 @@ Message published for each changed tracked object. The first message is publishe
|
||||
|
||||
### `frigate/tracked_object_update`
|
||||
|
||||
Message published for updates to tracked object metadata, for example when GenAI runs and returns a tracked object description.
|
||||
Message published for updates to tracked object metadata, for example:
|
||||
|
||||
#### Generative AI Description Update
|
||||
|
||||
```json
|
||||
{
|
||||
@ -114,6 +116,33 @@ Message published for updates to tracked object metadata, for example when GenAI
|
||||
}
|
||||
```
|
||||
|
||||
#### Face Recognition Update
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "face",
|
||||
"id": "1607123955.475377-mxklsc",
|
||||
"name": "John",
|
||||
"score": 0.95,
|
||||
"camera": "front_door_cam",
|
||||
"timestamp": 1607123958.748393,
|
||||
}
|
||||
```
|
||||
|
||||
#### License Plate Recognition Update
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "lpr",
|
||||
"id": "1607123955.475377-mxklsc",
|
||||
"name": "John's Car",
|
||||
"plate": "123ABC",
|
||||
"score": 0.95,
|
||||
"camera": "driveway_cam",
|
||||
"timestamp": 1607123958.748393,
|
||||
}
|
||||
```
|
||||
|
||||
### `frigate/reviews`
|
||||
|
||||
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish an `update` message with the same id. When the review activity has ended, a final `end` message is published.
|
||||
|
||||
@ -34,7 +34,7 @@ Frigate generally [recommends cameras with configurable sub streams](/frigate/ha
|
||||
To do this efficiently the following setup is required:
|
||||
|
||||
1. A GPU or iGPU must be available to do the scaling.
|
||||
2. [ffmpeg presets for hwaccel](/configuration/hardware_acceleration.md) must be used
|
||||
2. [ffmpeg presets for hwaccel](/configuration/hardware_acceleration_video.md) must be used
|
||||
3. Set the desired detection resolution for `detect -> width` and `detect -> height`.
|
||||
|
||||
When this is done correctly, the GPU will do the decoding and scaling, which will result in a small increase in CPU usage but with better results.
|
||||
|
||||
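A sketch of those three pieces together, assuming an Intel iGPU with the VAAPI preset and a hypothetical camera and stream URL:

```yaml
ffmpeg:
  hwaccel_args: preset-vaapi # hwaccel preset so the GPU does the decode and scale

cameras:
  front: # hypothetical camera
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:554/main # placeholder stream URL
          roles:
            - detect
    detect:
      width: 1280 # desired (scaled-down) detection resolution
      height: 720
```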
@ -17,6 +17,15 @@ const config: Config = {
|
||||
markdown: {
|
||||
mermaid: true,
|
||||
},
|
||||
i18n: {
|
||||
defaultLocale: 'en',
|
||||
locales: ['en'],
|
||||
localeConfigs: {
|
||||
en: {
|
||||
label: 'English',
|
||||
}
|
||||
},
|
||||
},
|
||||
themeConfig: {
|
||||
algolia: {
|
||||
appId: 'WIURGBNBPY',
|
||||
@ -82,6 +91,16 @@ const config: Config = {
|
||||
label: 'Demo',
|
||||
position: 'right',
|
||||
},
|
||||
{
|
||||
type: 'localeDropdown',
|
||||
position: 'right',
|
||||
dropdownItemsAfter: [
|
||||
{
|
||||
label: '简体中文(社区翻译)',
|
||||
href: 'https://docs.frigate-cn.video',
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
href: 'https://github.com/blakeblackshear/frigate',
|
||||
label: 'GitHub',
|
||||
|
||||
7818 docs/package-lock.json (generated)
File diff suppressed because it is too large
@ -17,10 +17,10 @@
|
||||
"write-heading-ids": "docusaurus write-heading-ids"
|
||||
},
|
||||
"dependencies": {
|
||||
"@docusaurus/core": "^3.6.3",
|
||||
"@docusaurus/preset-classic": "^3.6.3",
|
||||
"@docusaurus/theme-mermaid": "^3.6.3",
|
||||
"@docusaurus/core": "^3.7.0",
|
||||
"@docusaurus/plugin-content-docs": "^3.6.3",
|
||||
"@docusaurus/preset-classic": "^3.7.0",
|
||||
"@docusaurus/theme-mermaid": "^3.6.3",
|
||||
"@mdx-js/react": "^3.1.0",
|
||||
"clsx": "^2.1.1",
|
||||
"docusaurus-plugin-openapi-docs": "^4.3.1",
|
||||
|
||||
@ -59,10 +59,13 @@ const sidebars: SidebarsConfig = {
|
||||
"configuration/objects",
|
||||
"configuration/stationary_objects",
|
||||
],
|
||||
"Hardware Acceleration": [
|
||||
"configuration/hardware_acceleration_video",
|
||||
"configuration/hardware_acceleration_enrichments",
|
||||
],
|
||||
"Extra Configuration": [
|
||||
"configuration/authentication",
|
||||
"configuration/notifications",
|
||||
"configuration/hardware_acceleration",
|
||||
"configuration/ffmpeg_presets",
|
||||
"configuration/pwa",
|
||||
"configuration/tls",
|
||||
|
||||
25 docs/src/components/LanguageAlert/index.jsx (new file)
@ -0,0 +1,25 @@
|
||||
import React, { useEffect, useState } from 'react';
|
||||
import { useLocation } from '@docusaurus/router';
|
||||
import styles from './styles.module.css';
|
||||
|
||||
export default function LanguageAlert() {
|
||||
const [showAlert, setShowAlert] = useState(false);
|
||||
const { pathname } = useLocation();
|
||||
|
||||
useEffect(() => {
|
||||
const userLanguage = navigator?.language || 'en';
|
||||
const isChineseUser = userLanguage.includes('zh');
|
||||
setShowAlert(isChineseUser);
|
||||
|
||||
}, [pathname]);
|
||||
|
||||
if (!showAlert) return null;
|
||||
|
||||
return (
|
||||
<div className={styles.alert}>
|
||||
<span>检测到您的主要语言为中文,您可以访问由中文社区翻译的</span>
|
||||
<a href={'https://docs.frigate-cn.video'+pathname}>中文文档</a>
|
||||
<span> 以获得更好的体验</span>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
13 docs/src/components/LanguageAlert/styles.module.css (new file)
@ -0,0 +1,13 @@
|
||||
.alert {
|
||||
padding: 12px;
|
||||
background: #fff8e6;
|
||||
border-bottom: 1px solid #ffd166;
|
||||
text-align: center;
|
||||
font-size: 15px;
|
||||
}
|
||||
|
||||
.alert a {
|
||||
color: #1890ff;
|
||||
font-weight: 500;
|
||||
margin-left: 6px;
|
||||
}
|
||||
15 docs/src/theme/Navbar/index.js (new file)
@ -0,0 +1,15 @@
|
||||
import React from 'react';
|
||||
import NavbarLayout from '@theme/Navbar/Layout';
|
||||
import NavbarContent from '@theme/Navbar/Content';
|
||||
import LanguageAlert from '../../components/LanguageAlert';
|
||||
|
||||
export default function Navbar() {
|
||||
return (
|
||||
<>
|
||||
<NavbarLayout>
|
||||
<NavbarContent />
|
||||
</NavbarLayout>
|
||||
<LanguageAlert />
|
||||
</>
|
||||
);
|
||||
}
|
||||
554 docs/static/frigate-api.yaml (vendored)
@ -161,6 +161,253 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/users/{username}/role":
|
||||
put:
|
||||
tags:
|
||||
- Auth
|
||||
summary: Update Role
|
||||
operationId: update_role_users__username__role_put
|
||||
parameters:
|
||||
- name: username
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Username
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/AppPutRoleBody"
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/faces:
|
||||
get:
|
||||
tags:
|
||||
- Events
|
||||
summary: Get Faces
|
||||
operationId: get_faces_faces_get
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
/faces/reprocess:
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Reclassify Face
|
||||
operationId: reclassify_face_faces_reprocess_post
|
||||
requestBody:
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
title: Body
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/faces/train/{name}/classify":
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Train Face
|
||||
operationId: train_face_faces_train__name__classify_post
|
||||
parameters:
|
||||
- name: name
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Name
|
||||
requestBody:
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
title: Body
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/faces/{name}/create":
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Create Face
|
||||
operationId: create_face_faces__name__create_post
|
||||
parameters:
|
||||
- name: name
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Name
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/faces/{name}/register":
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Register Face
|
||||
operationId: register_face_faces__name__register_post
|
||||
parameters:
|
||||
- name: name
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Name
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
multipart/form-data:
|
||||
schema:
|
||||
$ref: >-
|
||||
#/components/schemas/Body_register_face_faces__name__register_post
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/faces/recognize:
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Recognize Face
|
||||
operationId: recognize_face_faces_recognize_post
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
multipart/form-data:
|
||||
schema:
|
||||
$ref: "#/components/schemas/Body_recognize_face_faces_recognize_post"
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/faces/{name}/delete":
|
||||
post:
|
||||
tags:
|
||||
- Events
|
||||
summary: Deregister Faces
|
||||
operationId: deregister_faces_faces__name__delete_post
|
||||
parameters:
|
||||
- name: name
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Name
|
||||
requestBody:
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
title: Body
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/lpr/reprocess:
|
||||
put:
|
||||
tags:
|
||||
- Events
|
||||
summary: Reprocess License Plate
|
||||
operationId: reprocess_license_plate_lpr_reprocess_put
|
||||
parameters:
|
||||
- name: event_id
|
||||
in: query
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Event Id
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/reindex:
|
||||
put:
|
||||
tags:
|
||||
- Events
|
||||
summary: Reindex Embeddings
|
||||
operationId: reindex_embeddings_reindex_put
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
/review:
|
||||
get:
|
||||
tags:
|
||||
@ -206,9 +453,7 @@ paths:
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/SeverityEnum"
|
||||
title: Severity
|
||||
$ref: "#/components/schemas/SeverityEnum"
|
||||
- name: before
|
||||
in: query
|
||||
required: false
|
||||
@ -237,6 +482,35 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/review_ids:
|
||||
get:
|
||||
tags:
|
||||
- Review
|
||||
summary: Review Ids
|
||||
operationId: review_ids_review_ids_get
|
||||
parameters:
|
||||
- name: ids
|
||||
in: query
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Ids
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/ReviewSegmentResponse"
|
||||
title: Response Review Ids Review Ids Get
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/review/summary:
|
||||
get:
|
||||
tags:
|
||||
@ -575,6 +849,19 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/metrics:
|
||||
get:
|
||||
tags:
|
||||
- App
|
||||
summary: Metrics
|
||||
description: Expose Prometheus metrics endpoint and update metrics with latest stats
|
||||
operationId: metrics_metrics_get
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
/config:
|
||||
get:
|
||||
tags:
|
||||
@ -731,6 +1018,15 @@ paths:
|
||||
- type: string
|
||||
- type: "null"
|
||||
title: Download
|
||||
- name: stream
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: boolean
|
||||
- type: "null"
|
||||
default: false
|
||||
title: Stream
|
||||
- name: start
|
||||
in: query
|
||||
required: false
|
||||
@ -825,6 +1121,59 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/plus/models:
|
||||
get:
|
||||
tags:
|
||||
- App
|
||||
summary: Plusmodels
|
||||
operationId: plusModels_plus_models_get
|
||||
parameters:
|
||||
- name: filterByCurrentModelDetector
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
type: boolean
|
||||
default: false
|
||||
title: Filterbycurrentmodeldetector
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/recognized_license_plates:
|
||||
get:
|
||||
tags:
|
||||
- App
|
||||
summary: Get Recognized License Plates
|
||||
operationId: get_recognized_license_plates_recognized_license_plates_get
|
||||
parameters:
|
||||
- name: split_joined
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: integer
|
||||
- type: "null"
|
||||
title: Split Joined
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
/timeline:
|
||||
get:
|
||||
tags:
|
||||
@ -1158,12 +1507,12 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/export/{event_id}/{new_name}":
|
||||
"/export/{event_id}/rename":
|
||||
patch:
|
||||
tags:
|
||||
- Export
|
||||
summary: Export Rename
|
||||
operationId: export_rename_export__event_id___new_name__patch
|
||||
operationId: export_rename_export__event_id__rename_patch
|
||||
parameters:
|
||||
- name: event_id
|
||||
in: path
|
||||
@ -1171,12 +1520,12 @@ paths:
|
||||
schema:
|
||||
type: string
|
||||
title: Event Id
|
||||
- name: new_name
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: New Name
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/ExportRenameBody"
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
@ -1409,6 +1758,31 @@ paths:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Max Score
|
||||
- name: min_speed
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Min Speed
|
||||
- name: max_speed
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Max Speed
|
||||
- name: recognized_license_plate
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
default: all
|
||||
title: Recognized License Plate
|
||||
- name: is_submitted
|
||||
in: query
|
||||
required: false
|
||||
@ -1684,6 +2058,31 @@ paths:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Max Score
|
||||
- name: min_speed
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Min Speed
|
||||
- name: max_speed
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: number
|
||||
- type: "null"
|
||||
title: Max Speed
|
||||
- name: recognized_license_plate
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
default: all
|
||||
title: Recognized License Plate
|
||||
- name: sort
|
||||
in: query
|
||||
required: false
|
||||
@ -1867,9 +2266,7 @@ paths:
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/SubmitPlusBody"
|
||||
title: Body
|
||||
$ref: "#/components/schemas/SubmitPlusBody"
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
@ -2056,15 +2453,13 @@ paths:
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/EventsCreateBody"
|
||||
$ref: "#/components/schemas/EventsCreateBody"
|
||||
default:
|
||||
source_type: api
|
||||
score: 0
|
||||
duration: 30
|
||||
include_recording: true
|
||||
draw: {}
|
||||
title: Body
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
@ -2305,6 +2700,14 @@ paths:
|
||||
- type: integer
|
||||
- type: "null"
|
||||
title: Height
|
||||
- name: store
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: integer
|
||||
- type: "null"
|
||||
title: Store
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
@ -2407,6 +2810,42 @@ paths:
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
/recordings/summary:
|
||||
get:
|
||||
tags:
|
||||
- Media
|
||||
summary: All Recordings Summary
|
||||
description: Returns true/false by day indicating if recordings exist
|
||||
operationId: all_recordings_summary_recordings_summary_get
|
||||
parameters:
|
||||
- name: timezone
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
type: string
|
||||
default: utc
|
||||
title: Timezone
|
||||
- name: cameras
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
default: all
|
||||
title: Cameras
|
||||
responses:
|
||||
"200":
|
||||
description: Successful Response
|
||||
content:
|
||||
application/json:
|
||||
schema: {}
|
||||
"422":
|
||||
description: Validation Error
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/{camera_name}/recordings/summary":
|
||||
get:
|
||||
tags:
|
||||
@ -2461,14 +2900,14 @@ paths:
|
||||
required: false
|
||||
schema:
|
||||
type: number
|
||||
default: 1733228876.15567
|
||||
default: 1744227965.180043
|
||||
title: After
|
||||
- name: before
|
||||
in: query
|
||||
required: false
|
||||
schema:
|
||||
type: number
|
||||
default: 1733232476.15567
|
||||
default: 1744231565.180048
|
||||
title: Before
|
||||
responses:
|
||||
"200":
|
||||
@ -2487,6 +2926,8 @@ paths:
|
||||
tags:
|
||||
- Media
|
||||
summary: Recording Clip
|
||||
description: >-
|
||||
For iOS devices, use the master.m3u8 HLS link instead of clip.mp4. Safari does not reliably process progressive mp4 files.
|
||||
operationId: recording_clip__camera_name__start__start_ts__end__end_ts__clip_mp4_get
|
||||
parameters:
|
||||
- name: camera_name
|
||||
@ -2749,12 +3190,12 @@ paths:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/HTTPValidationError"
|
||||
"/events/{event_id}/thumbnail.jpg":
|
||||
"/events/{event_id}/thumbnail.{extension}":
|
||||
get:
|
||||
tags:
|
||||
- Media
|
||||
summary: Event Thumbnail
|
||||
operationId: event_thumbnail_events__event_id__thumbnail_jpg_get
|
||||
operationId: event_thumbnail_events__event_id__thumbnail__extension__get
|
||||
parameters:
|
||||
- name: event_id
|
||||
in: path
|
||||
@ -2762,6 +3203,12 @@ paths:
|
||||
schema:
|
||||
type: string
|
||||
title: Event Id
|
||||
- name: extension
|
||||
in: path
|
||||
required: true
|
||||
schema:
|
||||
type: string
|
||||
title: Extension
|
||||
- name: max_cache_age
|
||||
in: query
|
||||
required: false
|
||||
@ -3251,6 +3698,12 @@ components:
|
||||
password:
|
||||
type: string
|
||||
title: Password
|
||||
role:
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
title: Role
|
||||
default: viewer
|
||||
type: object
|
||||
required:
|
||||
- username
|
||||
@ -3265,6 +3718,35 @@ components:
|
||||
required:
|
||||
- password
|
||||
title: AppPutPasswordBody
|
||||
AppPutRoleBody:
|
||||
properties:
|
||||
role:
|
||||
type: string
|
||||
title: Role
|
||||
type: object
|
||||
required:
|
||||
- role
|
||||
title: AppPutRoleBody
|
||||
Body_recognize_face_faces_recognize_post:
|
||||
properties:
|
||||
file:
|
||||
type: string
|
||||
format: binary
|
||||
title: File
|
||||
type: object
|
||||
required:
|
||||
- file
|
||||
title: Body_recognize_face_faces_recognize_post
|
||||
Body_register_face_faces__name__register_post:
|
||||
properties:
|
||||
file:
|
||||
type: string
|
||||
format: binary
|
||||
title: File
|
||||
type: object
|
||||
required:
|
||||
- file
|
||||
title: Body_register_face_faces__name__register_post
|
||||
DayReview:
|
||||
properties:
|
||||
day:
|
||||
@ -3354,7 +3836,9 @@ components:
|
||||
- type: "null"
|
||||
title: End Time
|
||||
false_positive:
|
||||
type: boolean
|
||||
anyOf:
|
||||
- type: boolean
|
||||
- type: "null"
|
||||
title: False Positive
|
||||
zones:
|
||||
items:
|
||||
@ -3362,7 +3846,9 @@ components:
|
||||
type: array
|
||||
title: Zones
|
||||
thumbnail:
|
||||
type: string
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
title: Thumbnail
|
||||
has_clip:
|
||||
type: boolean
|
||||
@ -3394,6 +3880,7 @@ components:
|
||||
- type: "null"
|
||||
title: Model Type
|
||||
data:
|
||||
type: object
|
||||
title: Data
|
||||
type: object
|
||||
required:
|
||||
@ -3511,6 +3998,11 @@ components:
|
||||
exclusiveMinimum: 0
|
||||
- type: "null"
|
||||
title: Score for sub label
|
||||
camera:
|
||||
anyOf:
|
||||
- type: string
|
||||
- type: "null"
|
||||
title: Camera this object is detected on.
|
||||
type: object
|
||||
required:
|
||||
- subLabel
|
||||
@ -3518,13 +4010,11 @@ components:
|
||||
ExportRecordingsBody:
|
||||
properties:
|
||||
playback:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/PlaybackFactorEnum"
|
||||
$ref: "#/components/schemas/PlaybackFactorEnum"
|
||||
title: Playback factor
|
||||
default: realtime
|
||||
source:
|
||||
allOf:
|
||||
- $ref: "#/components/schemas/PlaybackSourceEnum"
|
||||
$ref: "#/components/schemas/PlaybackSourceEnum"
|
||||
title: Playback source
|
||||
default: recordings
|
||||
name:
|
||||
@ -3536,6 +4026,16 @@ components:
|
||||
title: Image Path
|
||||
type: object
|
||||
title: ExportRecordingsBody
|
||||
ExportRenameBody:
|
||||
properties:
|
||||
name:
|
||||
type: string
|
||||
maxLength: 256
|
||||
title: Friendly name
|
||||
type: object
|
||||
required:
|
||||
- name
|
||||
title: ExportRenameBody
|
||||
Extension:
|
||||
type: string
|
||||
enum:
|
||||
|
||||
@ -74,7 +74,7 @@ def go2rtc_streams():
|
||||
)
|
||||
stream_data = r.json()
|
||||
for data in stream_data.values():
|
||||
for producer in data.get("producers", []):
|
||||
for producer in data.get("producers") or []:
|
||||
producer["url"] = clean_camera_user_pass(producer.get("url", ""))
|
||||
return JSONResponse(content=stream_data)
|
||||
|
||||
|
||||
@ -261,14 +261,14 @@ def auth(request: Request):
|
||||
|
||||
role_header = proxy_config.header_map.role
|
||||
role = (
|
||||
request.headers.get(role_header, default="viewer")
|
||||
request.headers.get(role_header, default=proxy_config.default_role)
|
||||
if role_header
|
||||
else "viewer"
|
||||
else proxy_config.default_role
|
||||
)
|
||||
|
||||
# if comma-separated with "admin", use "admin", else "viewer"
|
||||
# if comma-separated with "admin", use "admin", else use default role
|
||||
success_response.headers["remote-role"] = (
|
||||
"admin" if role and "admin" in role else "viewer"
|
||||
"admin" if role and "admin" in role else proxy_config.default_role
|
||||
)
|
||||
|
||||
return success_response
|
||||
|
||||
@@ -14,6 +14,7 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict

from frigate.api.auth import require_role
from frigate.api.defs.request.classification_body import RenameFaceBody
from frigate.api.defs.tags import Tags
from frigate.config.camera import DetectConfig
from frigate.const import FACE_DIR
@@ -250,12 +251,6 @@ def deregister_faces(request: Request, name: str, body: dict = None):
    json: dict[str, any] = body or {}
    list_of_ids = json.get("ids", "")

    if not list_of_ids or len(list_of_ids) == 0:
        return JSONResponse(
            content=({"success": False, "message": "Not a valid list of ids"}),
            status_code=404,
        )

    context: EmbeddingsContext = request.app.embeddings
    context.delete_face_ids(
        name, map(lambda file: sanitize_filename(file), list_of_ids)
@@ -266,6 +261,35 @@ def deregister_faces(request: Request, name: str, body: dict = None):
    )


@router.put("/faces/{old_name}/rename", dependencies=[Depends(require_role(["admin"]))])
def rename_face(request: Request, old_name: str, body: RenameFaceBody):
    if not request.app.frigate_config.face_recognition.enabled:
        return JSONResponse(
            status_code=400,
            content={"message": "Face recognition is not enabled.", "success": False},
        )

    context: EmbeddingsContext = request.app.embeddings
    try:
        context.rename_face(old_name, body.new_name)
        return JSONResponse(
            content={
                "success": True,
                "message": f"Successfully renamed face to {body.new_name}.",
            },
            status_code=200,
        )
    except ValueError as e:
        logger.error(e)
        return JSONResponse(
            status_code=400,
            content={
                "message": "Error renaming face. Check Frigate logs.",
                "success": False,
            },
        )


@router.put("/lpr/reprocess")
def reprocess_license_plate(request: Request, event_id: str):
    if not request.app.frigate_config.lpr.enabled:
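A quick sketch of driving the new rename endpoint; it requires the admin role, and the payload shape comes from `RenameFaceBody` below. Base URL and face name are assumptions:

```python
import requests

r = requests.put(
    "http://localhost:5000/api/faces/bob/rename",  # assumed base URL
    json={"new_name": "robert"},                   # RenameFaceBody payload
)
# on success: {"success": true, "message": "Successfully renamed face to robert."}
print(r.status_code, r.json())
```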
5 frigate/api/defs/request/classification_body.py Normal file
@@ -0,0 +1,5 @@
from pydantic import BaseModel


class RenameFaceBody(BaseModel):
    new_name: str
@@ -13,6 +13,15 @@ class EventsSubLabelBody(BaseModel):
    )


class EventsLPRBody(BaseModel):
    recognizedLicensePlate: str = Field(
        title="Recognized License Plate", max_length=100
    )
    recognizedLicensePlateScore: Optional[float] = Field(
        title="Score for recognized license plate", default=None, gt=0.0, le=1.0
    )


class EventsDescriptionBody(BaseModel):
    description: Union[str, None] = Field(title="The description of the event")
@@ -31,6 +31,7 @@ from frigate.api.defs.request.events_body import (
    EventsDeleteBody,
    EventsDescriptionBody,
    EventsEndBody,
    EventsLPRBody,
    EventsSubLabelBody,
    SubmitPlusBody,
)
@@ -724,13 +725,15 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
    if (sort is None or sort == "relevance") and search_results:
        processed_events.sort(key=lambda x: x.get("search_distance", float("inf")))
    elif min_score is not None and max_score is not None and sort == "score_asc":
        processed_events.sort(key=lambda x: x["score"])
        processed_events.sort(key=lambda x: x["data"]["score"])
    elif min_score is not None and max_score is not None and sort == "score_desc":
        processed_events.sort(key=lambda x: x["score"], reverse=True)
        processed_events.sort(key=lambda x: x["data"]["score"], reverse=True)
    elif min_speed is not None and max_speed is not None and sort == "speed_asc":
        processed_events.sort(key=lambda x: x["average_estimated_speed"])
        processed_events.sort(key=lambda x: x["data"]["average_estimated_speed"])
    elif min_speed is not None and max_speed is not None and sort == "speed_desc":
        processed_events.sort(key=lambda x: x["average_estimated_speed"], reverse=True)
        processed_events.sort(
            key=lambda x: x["data"]["average_estimated_speed"], reverse=True
        )
    elif sort == "date_asc":
        processed_events.sort(key=lambda x: x["start_time"])
    else:
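The sort fix reaches into the nested `data` dict where scores and speeds actually live. A defensive variant (hypothetical, slightly looser than what the diff does) tolerates records missing the field:

```python
events = [
    {"id": "a", "data": {"score": 0.91}},
    {"id": "b", "data": {"score": 0.72}},
    {"id": "c", "data": {}},  # no score recorded
]
# missing scores sort last instead of raising KeyError
events.sort(key=lambda x: x.get("data", {}).get("score", 0.0), reverse=True)
print([e["id"] for e in events])  # ['a', 'b', 'c']
```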
@@ -1099,6 +1102,60 @@ def set_sub_label(
    )


@router.post(
    "/events/{event_id}/recognized_license_plate",
    response_model=GenericResponse,
    dependencies=[Depends(require_role(["admin"]))],
)
def set_plate(
    request: Request,
    event_id: str,
    body: EventsLPRBody,
):
    try:
        event: Event = Event.get(Event.id == event_id)
    except DoesNotExist:
        event = None

    if request.app.detected_frames_processor:
        tracked_obj: TrackedObject = None

        for state in request.app.detected_frames_processor.camera_states.values():
            tracked_obj = state.tracked_objects.get(event_id)

            if tracked_obj is not None:
                break
    else:
        tracked_obj = None

    if not event and not tracked_obj:
        return JSONResponse(
            content=(
                {"success": False, "message": "Event " + event_id + " not found."}
            ),
            status_code=404,
        )

    new_plate = body.recognizedLicensePlate
    new_score = body.recognizedLicensePlateScore

    if new_plate == "":
        new_plate = None
        new_score = None

    request.app.event_metadata_updater.publish(
        EventMetadataTypeEnum.recognized_license_plate, (event_id, new_plate, new_score)
    )

    return JSONResponse(
        content={
            "success": True,
            "message": f"Event {event_id} license plate set to {new_plate if new_plate is not None else 'None'}",
        },
        status_code=200,
    )


@router.post(
    "/events/{event_id}/description",
    response_model=GenericResponse,
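Sketch of calling the new endpoint (admin role required; event id and base URL are made-up examples). Per the handler above, an empty string clears both the plate and its score:

```python
import requests

requests.post(
    "http://localhost:5000/api/events/1718000000.123456-abcdef/recognized_license_plate",
    json={"recognizedLicensePlate": "ABC1234", "recognizedLicensePlateScore": 0.87},
)

# clear the plate again
requests.post(
    "http://localhost:5000/api/events/1718000000.123456-abcdef/recognized_license_plate",
    json={"recognizedLicensePlate": ""},
)
```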
@@ -1,5 +1,6 @@
"""Image and video apis."""

import asyncio
import glob
import logging
import math
@@ -110,9 +111,12 @@ def imagestream(
@router.get("/{camera_name}/ptz/info")
async def camera_ptz_info(request: Request, camera_name: str):
    if camera_name in request.app.frigate_config.cameras:
        return JSONResponse(
            content=await request.app.onvif.get_camera_info(camera_name),
        # Schedule get_camera_info in the OnvifController's event loop
        future = asyncio.run_coroutine_threadsafe(
            request.app.onvif.get_camera_info(camera_name), request.app.onvif.loop
        )
        result = future.result()
        return JSONResponse(content=result)
    else:
        return JSONResponse(
            content={"success": False, "message": "Camera not found"},
@@ -537,7 +541,10 @@ def recordings(
    return JSONResponse(content=list(recordings))


@router.get("/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4")
@router.get(
    "/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
    description="For iOS devices, use the master.m3u8 HLS link instead of clip.mp4. Safari does not reliably process progressive mp4 files.",
)
def recording_clip(
    request: Request,
    camera_name: str,
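The ptz/info change swaps a direct `await` for `asyncio.run_coroutine_threadsafe`, the standard way to submit a coroutine to an event loop owned by another thread and block on the result. A self-contained toy version of the pattern:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def get_info() -> dict:
    return {"pan": True, "tilt": True}

# schedule on the controller-style loop from a foreign (request) thread
future = asyncio.run_coroutine_threadsafe(get_info(), loop)
print(future.result(timeout=5))  # {'pan': True, 'tilt': True}
loop.call_soon_threadsafe(loop.stop)
```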
@@ -58,13 +58,9 @@ async def review(
    )

    clauses = [
        (
            (ReviewSegment.start_time > after)
            & (
                (ReviewSegment.end_time.is_null(True))
                | (ReviewSegment.end_time < before)
            )
        )
        (ReviewSegment.start_time > after)
        & (ReviewSegment.start_time < before)
        & ((ReviewSegment.end_time.is_null(True)) | (ReviewSegment.end_time < before))
    ]

    if cameras != "all":
@@ -176,7 +172,6 @@ async def review_summary(

    hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
    day_ago = (datetime.datetime.now() - datetime.timedelta(hours=24)).timestamp()
    month_ago = (datetime.datetime.now() - datetime.timedelta(days=30)).timestamp()

    cameras = params.cameras
    labels = params.labels
@@ -277,7 +272,7 @@ async def review_summary(
        .get()
    )

    clauses = [(ReviewSegment.start_time > month_ago)]
    clauses = []

    if cameras != "all":
        camera_list = cameras.split(",")
@@ -365,7 +360,7 @@ async def review_summary(
                & (UserReviewStatus.user_id == user_id)
            ),
        )
        .where(reduce(operator.and_, clauses))
        .where(reduce(operator.and_, clauses) if clauses else True)
        .group_by(
            (ReviewSegment.start_time + seconds_offset).cast("int") / day_in_seconds
        )
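`reduce(operator.and_, ...)` folds a list of peewee expressions into one WHERE clause, and it raises on an empty list, which is why the `if clauses else True` guard matters now that the month filter is gone. Pure-Python illustration with booleans standing in for clause objects:

```python
import operator
from functools import reduce

clauses = []
condition = reduce(operator.and_, clauses) if clauses else True
assert condition is True  # no filters -> match everything

clauses = [True, False]
assert reduce(operator.and_, clauses) is False  # True & False
```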
@ -55,7 +55,7 @@ from frigate.models import (
|
||||
Timeline,
|
||||
User,
|
||||
)
|
||||
from frigate.object_detection import ObjectDetectProcess
|
||||
from frigate.object_detection.base import ObjectDetectProcess
|
||||
from frigate.output.output import output_frames
|
||||
from frigate.ptz.autotrack import PtzAutoTrackerThread
|
||||
from frigate.ptz.onvif import OnvifController
|
||||
@ -699,6 +699,10 @@ class FrigateApp:
|
||||
self.audio_process.terminate()
|
||||
self.audio_process.join()
|
||||
|
||||
# stop the onvif controller
|
||||
if self.onvif_controller:
|
||||
self.onvif_controller.close()
|
||||
|
||||
# ensure the capture processes are done
|
||||
for camera, metrics in self.camera_metrics.items():
|
||||
capture_process = metrics.capture_process
|
||||
|
||||
@@ -53,17 +53,6 @@ class CameraState:
        self.callbacks = defaultdict(list)
        self.ptz_autotracker_thread = ptz_autotracker_thread
        self.prev_enabled = self.camera_config.enabled
        self.requires_face_detection = (
            self.config.face_recognition.enabled
            and "face" not in self.config.objects.all_objects
        )

    def get_max_update_frequency(self, obj: TrackedObject) -> int:
        return (
            1
            if self.requires_face_detection and obj.obj_data["label"] == "person"
            else 5
        )

    def get_current_frame(self, draw_options: dict[str, Any] = {}):
        with self.current_frame_lock:
@@ -274,14 +263,37 @@ class CameraState:
                current_detections[id],
            )

            # add initial frame to frame cache
            self.frame_cache[frame_time] = np.copy(current_frame)

            # save initial thumbnail data and best object
            thumbnail_data = {
                "frame_time": frame_time,
                "box": new_obj.obj_data["box"],
                "area": new_obj.obj_data["area"],
                "region": new_obj.obj_data["region"],
                "score": new_obj.obj_data["score"],
                "attributes": new_obj.obj_data["attributes"],
                "current_estimated_speed": 0,
                "velocity_angle": 0,
                "path_data": [],
                "recognized_license_plate": None,
                "recognized_license_plate_score": None,
            }
            new_obj.thumbnail_data = thumbnail_data
            tracked_objects[id].thumbnail_data = thumbnail_data
            self.best_objects[new_obj.obj_data["label"]] = new_obj

            # call event handlers
            for c in self.callbacks["start"]:
                c(self.name, new_obj, frame_name)

        for id in updated_ids:
            updated_obj = tracked_objects[id]
            thumb_update, significant_update, autotracker_update = updated_obj.update(
                frame_time, current_detections[id], current_frame is not None
            thumb_update, significant_update, path_update, autotracker_update = (
                updated_obj.update(
                    frame_time, current_detections[id], current_frame is not None
                )
            )

            if autotracker_update or significant_update:
@@ -298,14 +310,18 @@ class CameraState:

            updated_obj.last_updated = frame_time

            # if it has been more than max_update_frequency seconds since the last thumb update
            # if it has been more than 5 seconds since the last thumb update
            # and the last update is greater than the last publish or
            # the object has changed significantly
            # the object has changed significantly or
            # the object moved enough to update the path
            if (
                frame_time - updated_obj.last_published
                > self.get_max_update_frequency(updated_obj)
                and updated_obj.last_updated > updated_obj.last_published
            ) or significant_update:
                (
                    frame_time - updated_obj.last_published > 5
                    and updated_obj.last_updated > updated_obj.last_published
                )
                or significant_update
                or path_update
            ):
                # call event handlers
                for c in self.callbacks["update"]:
                    c(self.name, updated_obj, frame_name)
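The publish gate above reduces to a small predicate: publish when throttling allows it (5 s elapsed and something changed since the last publish), or immediately on a significant or path update. Distilled as a hypothetical helper:

```python
def should_publish(
    frame_time: float,
    last_published: float,
    last_updated: float,
    significant_update: bool,
    path_update: bool,
) -> bool:
    throttle_ok = frame_time - last_published > 5 and last_updated > last_published
    return throttle_ok or significant_update or path_update

assert should_publish(10.0, 3.0, 9.0, False, False)      # quiet object, 7 s elapsed
assert not should_publish(10.0, 8.0, 9.0, False, False)  # only 2 s elapsed
assert should_publish(10.0, 8.0, 9.0, False, True)       # path moved -> publish now
```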
@@ -135,6 +135,7 @@ class Dispatcher:
                        "type": TrackedObjectUpdateTypesEnum.description,
                        "id": event.id,
                        "description": event.data["description"],
                        "camera": event.camera,
                    }
                ),
            )

@@ -39,9 +39,6 @@ class EventMetadataSubscriber(Subscriber):
    def __init__(self, topic: EventMetadataTypeEnum) -> None:
        super().__init__(topic.value)

    def check_for_update(self, timeout: float = 1) -> tuple | None:
        return super().check_for_update(timeout)

    def _return_object(self, topic: str, payload: tuple) -> tuple:
        if payload is None:
            return (None, None)
@@ -6,6 +6,8 @@ from typing import Optional

import zmq

from frigate.const import FAST_QUEUE_TIMEOUT

SOCKET_PUB = "ipc:///tmp/cache/proxy_pub"
SOCKET_SUB = "ipc:///tmp/cache/proxy_sub"

@@ -77,7 +79,9 @@ class Subscriber:
        self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
        self.socket.connect(SOCKET_SUB)

    def check_for_update(self, timeout: float = 1) -> Optional[tuple[str, any]]:
    def check_for_update(
        self, timeout: float = FAST_QUEUE_TIMEOUT
    ) -> Optional[tuple[str, any]]:
        """Returns message or None if no update."""
        try:
            has_update, _, _ = zmq.select([self.socket], [], [], timeout)
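`zmq.select` mirrors `select.select` for ZMQ sockets; with `FAST_QUEUE_TIMEOUT` (10 µs) it becomes an effectively non-blocking poll. A toy PUB/SUB round trip showing the call shape (the sleep papers over subscription propagation and is not in the real code):

```python
import time
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://demo")
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://demo")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

time.sleep(0.1)  # let the subscription propagate
pub.send_string("topic payload")

readable, _, _ = zmq.select([sub], [], [], 0.00001)  # ~FAST_QUEUE_TIMEOUT
if readable:
    print(sub.recv_string())  # "topic payload"
```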
@@ -63,9 +63,9 @@ class PtzAutotrackConfig(FrigateBaseModel):
        else:
            raise ValueError("Invalid type for movement_weights")

        if len(weights) != 5:
        if len(weights) != 6:
            raise ValueError(
                "movement_weights must have exactly 5 floats, remove this line from your config and run autotracking calibration"
                "movement_weights must have exactly 6 floats, remove this line from your config and run autotracking calibration"
            )

        return weights
@@ -19,6 +19,11 @@ class SemanticSearchModelEnum(str, Enum):
    jinav2 = "jinav2"


class LPRDeviceEnum(str, Enum):
    GPU = "GPU"
    CPU = "CPU"


class BirdClassificationConfig(FrigateBaseModel):
    enabled: bool = Field(default=False, title="Enable bird classification.")
    threshold: float = Field(
@@ -89,11 +94,18 @@ class CameraFaceRecognitionConfig(FrigateBaseModel):
        default=500, title="Min area of face box to consider running face recognition."
    )

    model_config = ConfigDict(extra="ignore", protected_namespaces=())
    model_config = ConfigDict(extra="forbid", protected_namespaces=())


class LicensePlateRecognitionConfig(FrigateBaseModel):
    enabled: bool = Field(default=False, title="Enable license plate recognition.")
    device: Optional[LPRDeviceEnum] = Field(
        default=LPRDeviceEnum.CPU,
        title="The device used for license plate recognition.",
    )
    model_size: str = Field(
        default="small", title="The size of the embeddings model used."
    )
    detection_threshold: float = Field(
        default=0.7,
        title="License plate object confidence score required to begin running recognition.",
@@ -156,4 +168,4 @@ class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
        le=10,
    )

    model_config = ConfigDict(extra="ignore", protected_namespaces=())
    model_config = ConfigDict(extra="forbid", protected_namespaces=())
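The `extra="ignore"` to `extra="forbid"` flips above mean a typo in these config sections now fails validation instead of being dropped silently. Minimal pydantic v2 demonstration with a stand-in model:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class FaceConfig(BaseModel):  # stand-in, not the actual Frigate model
    model_config = ConfigDict(extra="forbid")
    enabled: bool = False
    min_area: int = 500

try:
    FaceConfig(enabled=True, min_aera=400)  # note the typo
except ValidationError as e:
    print(e.errors()[0]["type"])  # 'extra_forbidden'
```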
@@ -472,8 +472,24 @@ class FrigateConfig(FrigateBaseModel):
            )

        for name, camera in self.cameras.items():
            modified_global_config = global_config.copy()

            # only populate some fields down to the camera level for specific keys
            allowed_fields_map = {
                "face_recognition": ["enabled", "min_area"],
                "lpr": ["enabled", "expire_time", "min_area", "enhancement"],
            }

            for section in allowed_fields_map:
                if section in modified_global_config:
                    modified_global_config[section] = {
                        k: v
                        for k, v in modified_global_config[section].items()
                        if k in allowed_fields_map[section]
                    }

            merged_config = deep_merge(
                camera.model_dump(exclude_unset=True), global_config
                camera.model_dump(exclude_unset=True), modified_global_config
            )
            camera_config: CameraConfig = CameraConfig.model_validate(
                {"name": name, **merged_config}
@@ -513,10 +529,14 @@ class FrigateConfig(FrigateBaseModel):
            )

            # Warn if detect fps > 10
            if camera_config.detect.fps > 10:
            if camera_config.detect.fps > 10 and camera_config.type != "lpr":
                logger.warning(
                    f"{camera_config.name} detect fps is set to {camera_config.detect.fps}. This does NOT need to match your camera's frame rate. High values could lead to reduced performance. Recommended value is 5."
                )
            if camera_config.detect.fps > 15 and camera_config.type == "lpr":
                logger.warning(
                    f"{camera_config.name} detect fps is set to {camera_config.detect.fps}. This does NOT need to match your camera's frame rate. High values could lead to reduced performance. Recommended value for LPR cameras are between 5-15."
                )

            # Default min_initialized configuration
            min_initialized = int(camera_config.detect.fps / 2)
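A standalone rendition of the allow-list filtering above: only whitelisted keys from a global section are merged down to each camera, so camera-level models never see global-only fields like `model_size`:

```python
allowed_fields_map = {
    "face_recognition": ["enabled", "min_area"],
    "lpr": ["enabled", "expire_time", "min_area", "enhancement"],
}

global_config = {"lpr": {"enabled": True, "min_area": 500, "model_size": "small"}}

filtered = {
    section: {k: v for k, v in values.items() if k in allowed_fields_map.get(section, [])}
    for section, values in global_config.items()
}
print(filtered)  # {'lpr': {'enabled': True, 'min_area': 500}} - model_size stays global
```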
@@ -30,3 +30,6 @@ class ProxyConfig(FrigateBaseModel):
        default=None,
        title="Secret value for proxy authentication.",
    )
    default_role: Optional[str] = Field(
        default="viewer", title="Default role for proxy users."
    )
@@ -38,6 +38,7 @@ DEFAULT_ATTRIBUTE_LABEL_MAP = {
        "ups",
        "usps",
    ],
    "motorcycle": ["license_plate"],
}
LABEL_CONSOLIDATION_MAP = {
    "car": 0.8,
@@ -128,3 +129,7 @@ AUTOTRACKING_ZOOM_EDGE_THRESHOLD = 0.05

JWT_SECRET_ENV_VAR = "FRIGATE_JWT_SECRET"
PASSWORD_HASH_ALGORITHM = "pbkdf2_sha256"

# Queues

FAST_QUEUE_TIMEOUT = 0.00001  # seconds
@@ -2,6 +2,7 @@

import base64
import datetime
import json
import logging
import math
import os
@@ -23,6 +24,7 @@ from frigate.comms.event_metadata_updater import (
)
from frigate.const import CLIPS_DIR
from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond
from frigate.util.image import area

@@ -53,7 +55,7 @@ class LicensePlateProcessingMixin:

    def _detect(self, image: np.ndarray) -> List[np.ndarray]:
        """
        Detect possible license plates in the input image by first resizing and normalizing it,
        Detect possible areas of text in the input image by first resizing and normalizing it,
        running a detection model, and filtering out low-probability regions.

        Args:
@@ -77,9 +79,21 @@ class LicensePlateProcessingMixin:
            resized_image,
        )

        outputs = self.model_runner.detection_model([normalized_image])[0]
        try:
            outputs = self.model_runner.detection_model([normalized_image])[0]
        except Exception as e:
            logger.warning(f"Error running LPR box detection model: {e}")
            return []

        outputs = outputs[0, :, :]

        if False:
            current_time = int(datetime.datetime.now().timestamp())
            cv2.imwrite(
                f"debug/frames/probability_map_{current_time}.jpg",
                (outputs * 255).astype(np.uint8),
            )

        boxes, _ = self._boxes_from_bitmap(outputs, outputs > self.mask_thresh, w, h)
        return self._filter_polygon(boxes, (h, w))

@@ -106,7 +120,11 @@ class LicensePlateProcessingMixin:
            norm_img = norm_img[np.newaxis, :]
            norm_images.append(norm_img)

        outputs = self.model_runner.classification_model(norm_images)
        try:
            outputs = self.model_runner.classification_model(norm_images)
        except Exception as e:
            logger.warning(f"Error running LPR classification model: {e}")
            return

        return self._process_classification_output(images, outputs)

@@ -125,9 +143,6 @@ class LicensePlateProcessingMixin:
        input_shape = [3, 48, 320]
        num_images = len(images)

        # sort images by aspect ratio for processing
        indices = np.argsort(np.array([x.shape[1] / x.shape[0] for x in images]))

        for index in range(0, num_images, self.batch_size):
            input_h, input_w = input_shape[1], input_shape[2]
            max_wh_ratio = input_w / input_h
@@ -135,31 +150,38 @@ class LicensePlateProcessingMixin:

            # calculate the maximum aspect ratio in the current batch
            for i in range(index, min(num_images, index + self.batch_size)):
                h, w = images[indices[i]].shape[0:2]
                h, w = images[i].shape[0:2]
                max_wh_ratio = max(max_wh_ratio, w * 1.0 / h)

            # preprocess the images based on the max aspect ratio
            for i in range(index, min(num_images, index + self.batch_size)):
                norm_image = self._preprocess_recognition_image(
                    camera, images[indices[i]], max_wh_ratio
                    camera, images[i], max_wh_ratio
                )
                norm_image = norm_image[np.newaxis, :]
                norm_images.append(norm_image)

            outputs = self.model_runner.recognition_model(norm_images)
            try:
                outputs = self.model_runner.recognition_model(norm_images)
            except Exception as e:
                logger.warning(f"Error running LPR recognition model: {e}")
            return self.ctc_decoder(outputs)
    def _process_license_plate(
        self, camera: string, id: string, image: np.ndarray
    ) -> Tuple[List[str], List[float], List[int]]:
        self, camera: str, id: str, image: np.ndarray
    ) -> Tuple[List[str], List[List[float]], List[int]]:
        """
        Complete pipeline for detecting, classifying, and recognizing license plates in the input image.
        Combines multi-line plates into a single plate string, grouping boxes by vertical alignment and ordering top to bottom,
        but only combines boxes if their average confidence scores meet the threshold and their heights are similar.

        Args:
            camera (str): Camera identifier.
            id (str): Event identifier.
            image (np.ndarray): The input image in which to detect, classify, and recognize license plates.

        Returns:
            Tuple[List[str], List[float], List[int]]: Detected license plate texts, confidence scores, and areas of the plates.
            Tuple[List[str], List[List[float]], List[int]]: Detected license plate texts, character-level confidence scores for each plate (flattened into a single list per plate), and areas of the plates.
        """
        if (
            self.model_runner.detection_model.runner is None
@@ -175,69 +197,173 @@ class LicensePlateProcessingMixin:
            logger.debug("No boxes found by OCR detector model")
            return [], [], []

        boxes = self._sort_boxes(list(boxes))
        plate_images = [self._crop_license_plate(image, x) for x in boxes]
        if len(boxes) > 0:
            plate_left = np.min([np.min(box[:, 0]) for box in boxes])
            plate_right = np.max([np.max(box[:, 0]) for box in boxes])
            plate_width = plate_right - plate_left
        else:
            plate_width = 0

        boxes = self._merge_nearby_boxes(
            boxes, plate_width=plate_width, gap_fraction=0.1
        )

        current_time = int(datetime.datetime.now().timestamp())

        if WRITE_DEBUG_IMAGES:
            for i, img in enumerate(plate_images):
                cv2.imwrite(
                    f"debug/frames/license_plate_cropped_{current_time}_{i + 1}.jpg",
                    img,
            debug_image = image.copy()
            for box in boxes:
                box = box.astype(int)
                x_min, y_min = np.min(box[:, 0]), np.min(box[:, 1])
                x_max, y_max = np.max(box[:, 0]), np.max(box[:, 1])
                cv2.rectangle(
                    debug_image,
                    (x_min, y_min),
                    (x_max, y_max),
                    color=(0, 255, 0),
                    thickness=2,
                )

        if self.config.lpr.debug_save_plates:
            logger.debug(f"{camera}: Saving plates for event {id}")

            Path(os.path.join(CLIPS_DIR, f"lpr/{camera}/{id}")).mkdir(
                parents=True, exist_ok=True
            cv2.imwrite(
                f"debug/frames/license_plate_boxes_{current_time}.jpg", debug_image
            )

            for i, img in enumerate(plate_images):
                cv2.imwrite(
                    os.path.join(
                        CLIPS_DIR, f"lpr/{camera}/{id}/{current_time}_{i + 1}.jpg"
                    ),
                    img,
        boxes = self._sort_boxes(list(boxes))

        # Step 1: Compute box heights and group boxes by vertical alignment and height similarity
        box_info = []
        for i, box in enumerate(boxes):
            y_coords = box[:, 1]
            y_min, y_max = np.min(y_coords), np.max(y_coords)
            height = y_max - y_min
            box_info.append((y_min, y_max, height, i))

        # Initial grouping based on y-coordinate overlap and height similarity
        initial_groups = []
        current_group = [box_info[0]]
        height_tolerance = 0.25  # Allow 25% difference in height for grouping

        for i in range(1, len(box_info)):
            prev_y_min, prev_y_max, prev_height, _ = current_group[-1]
            curr_y_min, _, curr_height, _ = box_info[i]

            # Check y-coordinate overlap
            overlap_threshold = 0.1 * (prev_y_max - prev_y_min)
            overlaps = curr_y_min <= prev_y_max + overlap_threshold

            # Check height similarity
            height_ratio = min(prev_height, curr_height) / max(prev_height, curr_height)
            height_similar = height_ratio >= (1 - height_tolerance)

            if overlaps and height_similar:
                current_group.append(box_info[i])
            else:
                initial_groups.append(current_group)
                current_group = [box_info[i]]
        initial_groups.append(current_group)

        # Step 2: Process each initial group, filter by confidence
        all_license_plates = []
        all_confidences = []
        all_areas = []
        processed_indices = set()

        recognition_threshold = self.lpr_config.recognition_threshold

        for group in initial_groups:
            # Sort group by y-coordinate (top to bottom)
            group.sort(key=lambda x: x[0])
            group_indices = [item[3] for item in group]

            # Skip if all indices in this group have already been processed
            if all(idx in processed_indices for idx in group_indices):
                continue

            # Crop images for the group
            group_boxes = [boxes[i] for i in group_indices]
            group_plate_images = [
                self._crop_license_plate(image, box) for box in group_boxes
            ]

            if WRITE_DEBUG_IMAGES:
                for i, img in enumerate(group_plate_images):
                    cv2.imwrite(
                        f"debug/frames/license_plate_cropped_{current_time}_{group_indices[i] + 1}.jpg",
                        img,
                    )

            if self.config.lpr.debug_save_plates:
                logger.debug(f"{camera}: Saving plates for event {id}")
                Path(os.path.join(CLIPS_DIR, f"lpr/{camera}/{id}")).mkdir(
                    parents=True, exist_ok=True
                )
                for i, img in enumerate(group_plate_images):
                    cv2.imwrite(
                        os.path.join(
                            CLIPS_DIR,
                            f"lpr/{camera}/{id}/{current_time}_{group_indices[i] + 1}.jpg",
                        ),
                        img,
                    )

        # keep track of the index of each image for correct area calc later
        sorted_indices = np.argsort([x.shape[1] / x.shape[0] for x in plate_images])
        reverse_mapping = {
            idx: original_idx for original_idx, idx in enumerate(sorted_indices)
        }
            # Recognize text in each cropped image
            results, confidences = self._recognize(camera, group_plate_images)

        results, confidences = self._recognize(camera, plate_images)
            if not results:
                continue

        if results:
            license_plates = [""] * len(plate_images)
            average_confidences = [[0.0]] * len(plate_images)
            areas = [0] * len(plate_images)
            if not confidences:
                confidences = [[0.0] for _ in results]

            # map results back to original image order
            for i, (plate, conf) in enumerate(zip(results, confidences)):
                original_idx = reverse_mapping[i]
            # Compute average confidence for each box's recognized text
            avg_confidences = []
            for conf_list in confidences:
                avg_conf = sum(conf_list) / len(conf_list) if conf_list else 0.0
                avg_confidences.append(avg_conf)

                height, width = plate_images[original_idx].shape[:2]
                area = height * width
            # Filter boxes based on the recognition threshold
            qualifying_indices = []
            qualifying_results = []
            qualifying_confidences = []
            for i, (avg_conf, result, conf_list) in enumerate(
                zip(avg_confidences, results, confidences)
            ):
                if avg_conf >= recognition_threshold:
                    qualifying_indices.append(group_indices[i])
                    qualifying_results.append(result)
                    qualifying_confidences.append(conf_list)

                average_confidence = conf
            if not qualifying_results:
                continue

                # set to True to write each cropped image for debugging
                if False:
                    filename = f"debug/frames/plate_{original_idx}_{plate}_{area}.jpg"
                    cv2.imwrite(filename, plate_images[original_idx])
            processed_indices.update(qualifying_indices)

                license_plates[original_idx] = plate
                average_confidences[original_idx] = average_confidence
                areas[original_idx] = area
            # Combine the qualifying results into a single plate string
            combined_plate = " ".join(qualifying_results)

        # Filter out plates that have a length of less than min_plate_length characters
        # or that don't match the expected format (if defined)
        # Sort by area, then by plate length, then by confidence all desc
            flat_confidences = [
                conf for conf_list in qualifying_confidences for conf in conf_list
            ]

            # Compute the combined area for qualifying boxes
            qualifying_boxes = [boxes[i] for i in qualifying_indices]
            qualifying_plate_images = [
                self._crop_license_plate(image, box) for box in qualifying_boxes
            ]
            group_areas = [
                img.shape[0] * img.shape[1] for img in qualifying_plate_images
            ]
            combined_area = sum(group_areas)

            all_license_plates.append(combined_plate)
            all_confidences.append(flat_confidences)
            all_areas.append(combined_area)

        # Step 3: Filter and sort the combined plates
        if all_license_plates:
            filtered_data = []
            for plate, conf, area in zip(license_plates, average_confidences, areas):
            for plate, conf_list, area in zip(
                all_license_plates, all_confidences, all_areas
            ):
                if len(plate) < self.lpr_config.min_plate_length:
                    logger.debug(
                        f"Filtered out '{plate}' due to length ({len(plate)} < {self.lpr_config.min_plate_length})"
@@ -250,11 +376,11 @@ class LicensePlateProcessingMixin:
                    logger.debug(f"Filtered out '{plate}' due to format mismatch")
                    continue

                filtered_data.append((plate, conf, area))
                filtered_data.append((plate, conf_list, area))

            sorted_data = sorted(
                filtered_data,
                key=lambda x: (x[2], len(x[0]), x[1]),
                key=lambda x: (x[2], len(x[0]), sum(x[1]) / len(x[1]) if x[1] else 0),
                reverse=True,
            )
@@ -297,6 +423,92 @@ class LicensePlateProcessingMixin:
            cv2.multiply(image, std, image)
        return image.transpose((2, 0, 1))[np.newaxis, ...]

    def _merge_nearby_boxes(
        self,
        boxes: List[np.ndarray],
        plate_width: float,
        gap_fraction: float = 0.1,
        min_overlap_fraction: float = -0.2,
    ) -> List[np.ndarray]:
        """
        Merge bounding boxes that are likely part of the same license plate based on proximity,
        with a dynamic max_gap based on the provided width of the entire license plate.

        Args:
            boxes (List[np.ndarray]): List of bounding boxes with shape (n, 4, 2), where n is the number of boxes,
                each box has 4 corners, and each corner has (x, y) coordinates.
            plate_width (float): The width of the entire license plate in pixels, used to calculate max_gap.
            gap_fraction (float): Fraction of the plate width to use as the maximum gap.
                Default is 0.1 (10% of the plate width).

        Returns:
            List[np.ndarray]: List of merged bounding boxes.
        """
        if len(boxes) == 0:
            return []

        max_gap = plate_width * gap_fraction
        min_overlap = plate_width * min_overlap_fraction

        # Sort boxes by top left x
        sorted_boxes = sorted(boxes, key=lambda x: x[0][0])

        merged_boxes = []
        current_box = sorted_boxes[0]

        for i in range(1, len(sorted_boxes)):
            next_box = sorted_boxes[i]

            # Calculate the horizontal gap between the current box and the next box
            current_right = np.max(
                current_box[:, 0]
            )  # Rightmost x-coordinate of current box
            next_left = np.min(next_box[:, 0])  # Leftmost x-coordinate of next box
            horizontal_gap = next_left - current_right

            # Check if the boxes are vertically aligned (similar y-coordinates)
            current_top = np.min(current_box[:, 1])
            current_bottom = np.max(current_box[:, 1])
            next_top = np.min(next_box[:, 1])
            next_bottom = np.max(next_box[:, 1])

            # Consider boxes part of the same plate if they are close horizontally or overlap
            # within the allowed limit and their vertical positions overlap significantly
            if min_overlap <= horizontal_gap <= max_gap and max(
                current_top, next_top
            ) <= min(current_bottom, next_bottom):
                merged_points = np.vstack((current_box, next_box))
                new_box = np.array(
                    [
                        [
                            np.min(merged_points[:, 0]),
                            np.min(merged_points[:, 1]),
                        ],
                        [
                            np.max(merged_points[:, 0]),
                            np.min(merged_points[:, 1]),
                        ],
                        [
                            np.max(merged_points[:, 0]),
                            np.max(merged_points[:, 1]),
                        ],
                        [
                            np.min(merged_points[:, 0]),
                            np.max(merged_points[:, 1]),
                        ],
                    ]
                )
                current_box = new_box
            else:
                # If the boxes are not close enough or overlap too much, add the current box to the result
                merged_boxes.append(current_box)
                current_box = next_box

        # Add the last box
        merged_boxes.append(current_box)

        return np.array(merged_boxes, dtype=np.int32)
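A rough geometric check of the merge rule with plain numpy (made-up coordinates): two word boxes on one plate line, separated by less than 10% of the plate width and vertically overlapping, collapse into one axis-aligned box:

```python
import numpy as np

box_a = np.array([[0, 0], [40, 0], [40, 20], [0, 20]])
box_b = np.array([[45, 0], [90, 0], [90, 20], [45, 20]])

plate_width = 90
max_gap = plate_width * 0.1                      # 9 px allowed gap
gap = np.min(box_b[:, 0]) - np.max(box_a[:, 0])  # actual gap: 5 px

if gap <= max_gap:
    merged = np.vstack((box_a, box_b))
    x0, y0 = merged[:, 0].min(), merged[:, 1].min()
    x1, y1 = merged[:, 0].max(), merged[:, 1].max()
    print([x0, y0, x1, y1])  # [0, 0, 90, 20]
```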
    def _boxes_from_bitmap(
        self, output: np.ndarray, mask: np.ndarray, dest_width: int, dest_height: int
    ) -> Tuple[np.ndarray, List[float]]:
@@ -327,40 +539,34 @@ class LicensePlateProcessingMixin:
            contour = contours[index]

            # get minimum bounding box (rotated rectangle) around the contour and the smallest side length.
            points, min_side = self._get_min_boxes(contour)

            if min_side < self.min_size:
            points, sside = self._get_min_boxes(contour)
            if sside < self.min_size:
                continue

            points = np.array(points)
            points = np.array(points, dtype=np.float32)

            score = self._box_score(output, contour)
            if self.box_thresh > score:
                continue

            polygon = Polygon(points)
            distance = polygon.area / polygon.length
            points = self._expand_box(points)

            # Use pyclipper to shrink the polygon slightly based on the computed distance.
            offset = PyclipperOffset()
            offset.AddPath(points, JT_ROUND, ET_CLOSEDPOLYGON)
            points = np.array(offset.Execute(distance * 1.5)).reshape((-1, 1, 2))

            # get the minimum bounding box around the shrunken polygon.
            box, min_side = self._get_min_boxes(points)

            if min_side < self.min_size + 2:
            # Get the minimum area rectangle again after expansion
            points, sside = self._get_min_boxes(points.reshape(-1, 1, 2))
            if sside < self.min_size + 2:
                continue

            box = np.array(box)
            points = np.array(points, dtype=np.float32)

            # normalize and clip box coordinates to fit within the destination image size.
            box[:, 0] = np.clip(np.round(box[:, 0] / width * dest_width), 0, dest_width)
            box[:, 1] = np.clip(
                np.round(box[:, 1] / height * dest_height), 0, dest_height
            points[:, 0] = np.clip(
                np.round(points[:, 0] / width * dest_width), 0, dest_width
            )
            points[:, 1] = np.clip(
                np.round(points[:, 1] / height * dest_height), 0, dest_height
            )

            boxes.append(box.astype("int32"))
            boxes.append(points.astype("int32"))
            scores.append(score)

        return np.array(boxes, dtype="int32"), scores
@@ -694,9 +900,7 @@ class LicensePlateProcessingMixin:
        input_w = int(input_h * max_wh_ratio)

        # check for model-specific input width
        model_input_w = self.model_runner.recognition_model.runner.ort.get_inputs()[
            0
        ].shape[3]
        model_input_w = self.model_runner.recognition_model.runner.get_input_width()
        if isinstance(model_input_w, int) and model_input_w > 0:
            input_w = model_input_w

@@ -776,7 +980,11 @@ class LicensePlateProcessingMixin:

        Return the dimensions of the detected plate as [x1, y1, x2, y2].
        """
        predictions = self.model_runner.yolov9_detection_model(input)
        try:
            predictions = self.model_runner.yolov9_detection_model(input)
        except Exception as e:
            logger.warning(f"Error running YOLOv9 license plate detection model: {e}")
            return None

        confidence_threshold = self.lpr_config.detection_threshold
@@ -870,16 +1078,21 @@ class LicensePlateProcessingMixin:

        # Adjust length score based on confidence of extra characters
        conf_threshold = 0.75  # Minimum confidence for a character to be "trusted"
        if len(top_plate) > len(prev_plate):
            extra_conf = min(
                top_char_confidences[len(prev_plate) :]
            )  # Lowest extra char confidence
            if extra_conf < conf_threshold:
                curr_length_score *= extra_conf / conf_threshold  # Penalize if weak
        elif len(prev_plate) > len(top_plate):
            extra_conf = min(prev_char_confidences[len(top_plate) :])
            if extra_conf < conf_threshold:
                prev_length_score *= extra_conf / conf_threshold
        top_plate_char_count = len(top_plate.replace(" ", ""))
        prev_plate_char_count = len(prev_plate.replace(" ", ""))

        if top_plate_char_count > prev_plate_char_count:
            extra_confidences = top_char_confidences[prev_plate_char_count:]
            if extra_confidences:  # Ensure the slice is not empty
                extra_conf = min(extra_confidences)  # Lowest extra char confidence
                if extra_conf < conf_threshold:
                    curr_length_score *= extra_conf / conf_threshold  # Penalize if weak
        elif prev_plate_char_count > top_plate_char_count:
            extra_confidences = prev_char_confidences[top_plate_char_count:]
            if extra_confidences:  # Ensure the slice is not empty
                extra_conf = min(extra_confidences)
                if extra_conf < conf_threshold:
                    prev_length_score *= extra_conf / conf_threshold
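A worked example of the penalty just above (made-up confidences): a candidate that is longer only by one shaky trailing character loses most of its length advantage, so a shorter but confident plate can still win:

```python
conf_threshold = 0.75
top_char_confidences = [0.95, 0.93, 0.90, 0.40]  # "ABC7" - last char is weak
prev_char_confidences = [0.95, 0.93, 0.90]       # "ABC"

curr_length_score = 1.0   # hypothetical normalized length scores
prev_length_score = 0.75

extra = top_char_confidences[len(prev_char_confidences):]
if extra and min(extra) < conf_threshold:
    curr_length_score *= min(extra) / conf_threshold  # 1.0 * 0.40 / 0.75

print(round(curr_length_score, 2))  # 0.53 -> the extra char no longer dominates
```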
        # Area score: Normalize by max area
        max_area = max(top_area, prev_area)
@@ -934,7 +1147,7 @@ class LicensePlateProcessingMixin:
        # 4. Log the comparison
        logger.debug(
            f"Plate comparison - Current: {top_plate} (score: {curr_score:.3f}, min_conf: {curr_min_conf:.2f}) vs "
            f"Previous: {prev_plate} (score: {prev_score:.3f}, min_conf: {prev_min_conf:.2f})\n"
            f"Previous: {prev_plate} (score: {prev_score:.3f}, min_conf: {prev_min_conf:.2f}) "
            f"Metrics - Length: {len(top_plate)} vs {len(prev_plate)} (scores: {curr_length_score:.2f} vs {prev_length_score:.2f}), "
            f"Area: {top_area} vs {prev_area}, "
            f"Avg Conf: {avg_confidence:.2f} vs {prev_avg_confidence:.2f}, "
@@ -1026,7 +1239,7 @@ class LicensePlateProcessingMixin:
        license_plate_area = (license_plate[2] - license_plate[0]) * (
            license_plate[3] - license_plate[1]
        )
        if license_plate_area < self.lpr_config.min_area:
        if license_plate_area < self.config.cameras[camera].lpr.min_area:
            logger.debug(f"{camera}: License plate area below minimum threshold.")
            return

@@ -1047,28 +1260,29 @@ class LicensePlateProcessingMixin:
        else:
            id = obj_data["id"]

        # don't run for non car or non license plate (dedicated lpr with frigate+) objects
        # don't run for non car/motorcycle or non license plate (dedicated lpr with frigate+) objects
        if (
            obj_data.get("label") != "car"
            obj_data.get("label") not in ["car", "motorcycle"]
            and obj_data.get("label") != "license_plate"
        ):
            logger.debug(
                f"{camera}: Not a processing license plate for non car object."
                f"{camera}: Not a processing license plate for non car/motorcycle object."
            )
            return

        # don't run for stationary car objects
        if obj_data.get("stationary") == True:
            logger.debug(
                f"{camera}: Not a processing license plate for a stationary car object."
                f"{camera}: Not a processing license plate for a stationary car/motorcycle object."
            )
            return

        # don't overwrite sub label for objects that have a sub label
        # that is not a license plate
        if obj_data.get("sub_label") and id not in self.detected_license_plates:
        # don't run for objects with no position changes
        # this is the initial state after registering a new tracked object
        # LPR will run 2 frames after detect.min_initialized is reached
        if obj_data.get("position_changes", 0) == 0:
            logger.debug(
                f"{camera}: Not processing license plate due to existing sub label: {obj_data.get('sub_label')}."
                f"{camera}: Plate detected in {self.config.cameras[camera].detect.min_initialized + 1} concurrent frames, LPR frame threshold ({self.config.cameras[camera].detect.min_initialized + 2})"
            )
            return

@@ -1083,6 +1297,10 @@ class LicensePlateProcessingMixin:
            return

        rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)

        # apply motion mask
        rgb[self.config.cameras[camera].motion.mask == 0] = [0, 0, 0]

        left, top, right, bottom = car_box
        car = rgb[top:bottom, left:right]

@@ -1107,7 +1325,7 @@ class LicensePlateProcessingMixin:

        if not license_plate:
            logger.debug(
                f"{camera}: Detected no license plates for car object."
                f"{camera}: Detected no license plates for car/motorcycle object."
            )
            return

@@ -1119,10 +1337,7 @@ class LicensePlateProcessingMixin:

        # check that license plate is valid
        # double the value because we've doubled the size of the car
        if (
            license_plate_area
            < self.config.cameras[obj_data["camera"]].lpr.min_area * 2
        ):
        if license_plate_area < self.config.cameras[camera].lpr.min_area * 2:
            logger.debug(f"{camera}: License plate is less than min_area")
            return

@@ -1139,7 +1354,7 @@ class LicensePlateProcessingMixin:
            logger.debug(f"{camera}: No attributes to parse.")
            return

        if obj_data.get("label") == "car":
        if obj_data.get("label") in ["car", "motorcycle"]:
            attributes: list[dict[str, any]] = obj_data.get(
                "current_attributes", []
            )
@@ -1166,10 +1381,10 @@ class LicensePlateProcessingMixin:
        if (
            not license_plate_box
            or area(license_plate_box)
            < self.config.cameras[obj_data["camera"]].lpr.min_area
            < self.config.cameras[camera].lpr.min_area
        ):
            logger.debug(
                f"{camera}: Area for license plate box {area(license_plate_box)} is less than min_area {self.config.cameras[obj_data['camera']].lpr.min_area}"
                f"{camera}: Area for license plate box {area(license_plate_box)} is less than min_area {self.config.cameras[camera].lpr.min_area}"
            )
            return

@@ -1210,6 +1425,8 @@ class LicensePlateProcessingMixin:
            license_plate_frame,
        )

        logger.debug(f"{camera}: Running plate recognition.")

        # run detection, returns results sorted by confidence, best first
        start = datetime.datetime.now().timestamp()
        license_plates, confidences, areas = self._process_license_plate(
@@ -1245,7 +1462,7 @@ class LicensePlateProcessingMixin:
        # Check against minimum confidence threshold
        if avg_confidence < self.lpr_config.recognition_threshold:
            logger.debug(
                f"{camera}: Average confidence {avg_confidence} is less than threshold ({self.lpr_config.recognition_threshold})"
                f"{camera}: Average character confidence {avg_confidence} is less than recognition_threshold ({self.lpr_config.recognition_threshold})"
            )
            return

@@ -1314,6 +1531,21 @@ class LicensePlateProcessingMixin:
            EventMetadataTypeEnum.sub_label, (id, sub_label, avg_confidence)
        )

        # always publish to recognized_license_plate field
        self.requestor.send_data(
            "tracked_object_update",
            json.dumps(
                {
                    "type": TrackedObjectUpdateTypesEnum.lpr,
                    "name": sub_label,
                    "plate": top_plate,
                    "score": avg_confidence,
                    "id": id,
                    "camera": camera,
                    "timestamp": start,
                }
            ),
        )
        self.sub_label_publisher.publish(
            EventMetadataTypeEnum.recognized_license_plate,
            (id, top_plate, avg_confidence),
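For orientation, this is roughly what a consumer of that `tracked_object_update` message sees; the field names mirror the `json.dumps` call above, while the concrete values and the assumption that the enum serializes to `"lpr"` are illustrative only:

```python
import json

payload = json.loads(
    '{"type": "lpr", "name": "ABC1234", "plate": "ABC1234", "score": 0.91,'
    ' "id": "1718000000.1-abcd", "camera": "driveway", "timestamp": 1718000000.1}'
)
if payload["type"] == "lpr":
    print(f"{payload['camera']}: plate {payload['plate']} ({payload['score']:.0%})")
```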
@@ -9,7 +9,7 @@ from ...types import DataProcessorModelRunner


class LicensePlateModelRunner(DataProcessorModelRunner):
    def __init__(self, requestor, device: str = "CPU", model_size: str = "large"):
    def __init__(self, requestor, device: str = "CPU", model_size: str = "small"):
        super().__init__(requestor, device, model_size)
        self.detection_model = PaddleOCRDetection(
            model_size=model_size, requestor=requestor, device=device
@@ -9,6 +9,7 @@ from peewee import DoesNotExist

from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.data_processing.common.license_plate.mixin import (
    WRITE_DEBUG_IMAGES,
@@ -31,11 +32,13 @@ class LicensePlatePostProcessor(LicensePlateProcessingMixin, PostProcessorApi):
    def __init__(
        self,
        config: FrigateConfig,
        requestor: InterProcessRequestor,
        sub_label_publisher: EventMetadataPublisher,
        metrics: DataProcessorMetrics,
        model_runner: LicensePlateModelRunner,
        detected_license_plates: dict[str, dict[str, any]],
    ):
        self.requestor = requestor
        self.detected_license_plates = detected_license_plates
        self.model_runner = model_runner
        self.lpr_config = config.lpr
@@ -54,6 +57,9 @@ class LicensePlatePostProcessor(LicensePlateProcessingMixin, PostProcessorApi):
        Returns:
            None.
        """
        # don't run LPR post processing for now
        return

        event_id = data["event_id"]
        camera_name = data["camera"]
@@ -2,6 +2,7 @@

import base64
import datetime
import json
import logging
import os
import random
@@ -17,6 +18,7 @@ from frigate.comms.event_metadata_updater import (
    EventMetadataPublisher,
    EventMetadataTypeEnum,
)
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import FACE_DIR, MODEL_CACHE_DIR
from frigate.data_processing.common.face.model import (
@@ -24,6 +26,7 @@ from frigate.data_processing.common.face.model import (
    FaceNetRecognizer,
    FaceRecognizer,
)
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond
from frigate.util.image import area

@@ -42,11 +45,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
    def __init__(
        self,
        config: FrigateConfig,
        requestor: InterProcessRequestor,
        sub_label_publisher: EventMetadataPublisher,
        metrics: DataProcessorMetrics,
    ):
        super().__init__(config, metrics)
        self.face_config = config.face_recognition
        self.requestor = requestor
        self.sub_label_publisher = sub_label_publisher
        self.face_detector: cv2.FaceDetectorYN = None
        self.requires_face_detection = "face" not in self.config.objects.all_objects
@@ -157,8 +162,9 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
    def process_frame(self, obj_data: dict[str, any], frame: np.ndarray):
        """Look for faces in image."""
        self.metrics.face_rec_fps.value = self.faces_per_second.eps()
        camera = obj_data["camera"]

        if not self.config.cameras[obj_data["camera"]].face_recognition.enabled:
        if not self.config.cameras[camera].face_recognition.enabled:
            return

        start = datetime.datetime.now().timestamp()
@@ -245,7 +251,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
        if (
            not face_box
            or area(face_box)
            < self.config.cameras[obj_data["camera"]].face_recognition.min_area
            < self.config.cameras[camera].face_recognition.min_area
        ):
            logger.debug(f"Invalid face box {face}")
            return
@@ -286,6 +292,20 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                self.person_face_history[id]
            )

            self.requestor.send_data(
                "tracked_object_update",
                json.dumps(
                    {
                        "type": TrackedObjectUpdateTypesEnum.face,
                        "name": weighted_sub_label,
                        "score": weighted_score,
                        "id": id,
                        "camera": camera,
                        "timestamp": start,
                    }
                ),
            )

            if weighted_score >= self.face_config.recognition_threshold:
                self.sub_label_publisher.publish(
                    EventMetadataTypeEnum.sub_label,
@@ -393,6 +413,9 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
        if score <= self.face_config.unknown_score:
            sub_label = "unknown"

        if "-" in sub_label:
            sub_label = sub_label.replace("-", "_")

        if self.config.face_recognition.save_attempts:
            # write face to library
            folder = os.path.join(FACE_DIR, "train")
@@ -460,6 +483,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
        if self.config.face_recognition.save_attempts:
            # write face to library
            folder = os.path.join(FACE_DIR, "train")

            if "-" in sub_label:
                sub_label = sub_label.replace("-", "_")

            file = os.path.join(
                folder, f"{event_id}-{timestamp}-{sub_label}-{score}.webp"
            )
@@ -5,6 +5,7 @@ import logging
import numpy as np

from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.data_processing.common.license_plate.mixin import (
    LicensePlateProcessingMixin,
@@ -23,11 +24,13 @@ class LicensePlateRealTimeProcessor(LicensePlateProcessingMixin, RealTimeProcess
    def __init__(
        self,
        config: FrigateConfig,
        requestor: InterProcessRequestor,
        sub_label_publisher: EventMetadataPublisher,
        metrics: DataProcessorMetrics,
        model_runner: LicensePlateModelRunner,
        detected_license_plates: dict[str, dict[str, any]],
    ):
        self.requestor = requestor
        self.detected_license_plates = detected_license_plates
        self.model_runner = model_runner
        self.lpr_config = config.lpr
@@ -4,7 +4,7 @@ from typing import List

 import numpy as np

-from frigate.detectors.detector_config import ModelTypeEnum
+from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum

 logger = logging.getLogger(__name__)

@@ -14,9 +14,9 @@ class DetectionApi(ABC):
     supported_models: List[ModelTypeEnum]

     @abstractmethod
-    def __init__(self, detector_config):
+    def __init__(self, detector_config: BaseDetectorConfig):
         self.detector_config = detector_config
-        self.thresh = 0.5
+        self.thresh = 0.4
         self.height = detector_config.model.height
         self.width = detector_config.model.width

@@ -24,58 +24,34 @@ class DetectionApi(ABC):
     def detect_raw(self, tensor_input):
         pass

-    def post_process_yolonas(self, output):
-        """
-        @param output: output of inference
-        expected shape: [np.array(1, N, 4), np.array(1, N, 80)]
-        where N depends on the input size e.g. N=2100 for 320x320 images
-
-        @return: best results: np.array(20, 6) where each row is
-        in this order (class_id, score, y1/height, x1/width, y2/height, x2/width)
-        """
-        N = output[0].shape[1]
-
-        boxes = output[0].reshape(N, 4)
-        scores = output[1].reshape(N, 80)
-
-        class_ids = np.argmax(scores, axis=1)
-        scores = scores[np.arange(N), class_ids]
-
-        args_best = np.argwhere(scores > self.thresh)[:, 0]
-
-        num_matches = len(args_best)
-        if num_matches == 0:
-            return np.zeros((20, 6), np.float32)
-        elif num_matches > 20:
-            args_best20 = np.argpartition(scores[args_best], -20)[-20:]
-            args_best = args_best[args_best20]
-
-        boxes = boxes[args_best]
-        class_ids = class_ids[args_best]
-        scores = scores[args_best]
-
-        boxes = np.transpose(
-            np.vstack(
-                (
-                    boxes[:, 1] / self.height,
-                    boxes[:, 0] / self.width,
-                    boxes[:, 3] / self.height,
-                    boxes[:, 2] / self.width,
-                )
-            )
-        )
-
-        results = np.hstack(
-            (class_ids[..., np.newaxis], scores[..., np.newaxis], boxes)
-        )
-
-        return np.resize(results, (20, 6))
-
-    def post_process(self, output):
-        if self.detector_config.model.model_type == ModelTypeEnum.yolonas:
-            return self.post_process_yolonas(output)
-        else:
-            raise ValueError(
-                f'Model type "{self.detector_config.model.model_type}" is currently not supported.'
-            )
+    def calculate_grids_strides(self, expanded=True) -> None:
+        grids = []
+        expanded_strides = []
+
+        # decode and orient predictions
+        strides = [8, 16, 32]
+
+        hsizes = [self.height // stride for stride in strides]
+        wsizes = [self.width // stride for stride in strides]
+
+        for hsize, wsize, stride in zip(hsizes, wsizes, strides):
+            xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
+
+            if expanded:
+                grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
+                grids.append(grid)
+                shape = grid.shape[:2]
+                expanded_strides.append(np.full((*shape, 1), stride))
+            else:
+                xv = xv.reshape(1, 1, hsize, wsize)
+                yv = yv.reshape(1, 1, hsize, wsize)
+                grids.extend(np.concatenate((xv, yv), axis=1).tolist())
+                expanded_strides.extend(
+                    np.array([stride, stride]).reshape(1, 2, 1, 1).tolist()
+                )
+
+        if expanded:
+            self.grids = np.concatenate(grids, 1)
+            self.expanded_strides = np.concatenate(expanded_strides, 1)
+        else:
+            self.grids = grids
+            self.expanded_strides = expanded_strides
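
The new `calculate_grids_strides` helper precomputes, per output stride, the grid of cell coordinates and a matching stride map that YOLOX post-processing uses to project raw predictions back to pixel space. A minimal standalone sketch of the expanded form, assuming a 320x320 model input (this mirrors the helper above rather than calling it):

```python
import numpy as np

height, width = 320, 320
strides = [8, 16, 32]

grids, expanded_strides = [], []
for stride in strides:
    hsize, wsize = height // stride, width // stride
    xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
    grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
    grids.append(grid)
    expanded_strides.append(np.full((*grid.shape[:2], 1), stride))

grids = np.concatenate(grids, 1)
expanded_strides = np.concatenate(expanded_strides, 1)

# 40*40 + 20*20 + 10*10 = 2100 anchor positions for a 320x320 input
assert grids.shape == (1, 2100, 2)
assert expanded_strides.shape == (1, 2100, 1)
```

The `expanded=False` branch instead keeps the per-stride arrays separate, which is the layout the RKNN YOLOX post-processing further down consumes.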
@@ -25,10 +25,13 @@ class PixelFormatEnum(str, Enum):
 class InputTensorEnum(str, Enum):
     nchw = "nchw"
     nhwc = "nhwc"
+    hwnc = "hwnc"
+    hwcn = "hwcn"


 class InputDTypeEnum(str, Enum):
     float = "float"
+    float_denorm = "float_denorm"  # non-normalized float
     int = "int"


@@ -37,7 +40,6 @@ class ModelTypeEnum(str, Enum):
     rfdetr = "rfdetr"
     ssd = "ssd"
     yolox = "yolox"
-    yolov9 = "yolov9"
     yolonas = "yolonas"
     yologeneric = "yolo-generic"

@@ -1,6 +1,5 @@
 import logging
 import os
-import queue
 import subprocess
 import threading
 import urllib.request
@@ -28,37 +27,11 @@ from frigate.detectors.detection_api import DetectionApi
 from frigate.detectors.detector_config import (
     BaseDetectorConfig,
 )
+from frigate.object_detection.util import RequestStore, ResponseStore

 logger = logging.getLogger(__name__)


-# ----------------- ResponseStore Class ----------------- #
-class ResponseStore:
-    """
-    A thread-safe hash-based response store that maps request IDs
-    to their results. Threads can wait on the condition variable until
-    their request's result appears.
-    """
-
-    def __init__(self):
-        self.responses = {}  # Maps request_id -> (original_input, infer_results)
-        self.lock = threading.Lock()
-        self.cond = threading.Condition(self.lock)
-
-    def put(self, request_id, response):
-        with self.cond:
-            self.responses[request_id] = response
-            self.cond.notify_all()
-
-    def get(self, request_id, timeout=None):
-        with self.cond:
-            if not self.cond.wait_for(
-                lambda: request_id in self.responses, timeout=timeout
-            ):
-                raise TimeoutError(f"Timeout waiting for response {request_id}")
-            return self.responses.pop(request_id)
-
-
 # ----------------- Utility Functions ----------------- #


@@ -122,14 +95,14 @@ class HailoAsyncInference:
     def __init__(
         self,
         hef_path: str,
-        input_queue: queue.Queue,
+        input_store: RequestStore,
         output_store: ResponseStore,
         batch_size: int = 1,
         input_type: Optional[str] = None,
         output_type: Optional[Dict[str, str]] = None,
         send_original_frame: bool = False,
     ) -> None:
-        self.input_queue = input_queue
+        self.input_store = input_store
         self.output_store = output_store

         params = VDevice.create_params()
@@ -202,11 +175,14 @@ class HailoAsyncInference:
         return self.hef.get_input_vstream_infos()[0].shape

     def run(self) -> None:
+        job = None
         with self.infer_model.configure() as configured_infer_model:
             while True:
-                batch_data = self.input_queue.get()
+                batch_data = self.input_store.get()

                 if batch_data is None:
                     break

                 request_id, frame_data = batch_data
                 preprocessed_batch = [frame_data]
                 request_ids = [request_id]
@@ -227,7 +203,9 @@ class HailoAsyncInference:
                         bindings_list=bindings_list,
                     ),
                 )
-                job.wait(100)
+
+        if job is not None:
+            job.wait(100)


 # ----------------- HailoDetector Class ----------------- #
@@ -274,16 +252,14 @@ class HailoDetector(DetectionApi):
         self.working_model_path = self.check_and_prepare()

         self.batch_size = 1
-        self.input_queue = queue.Queue()
+        self.input_store = RequestStore()
         self.response_store = ResponseStore()
-        self.request_counter = 0
-        self.request_counter_lock = threading.Lock()

         try:
             logger.debug(f"[INIT] Loading HEF model from {self.working_model_path}")
             self.inference_engine = HailoAsyncInference(
                 self.working_model_path,
-                self.input_queue,
+                self.input_store,
                 self.response_store,
                 self.batch_size,
             )
@@ -364,26 +340,16 @@ class HailoDetector(DetectionApi):
             raise FileNotFoundError(f"Model file not found at: {self.model_path}")
         return cached_model_path

-    def _get_request_id(self) -> int:
-        with self.request_counter_lock:
-            request_id = self.request_counter
-            self.request_counter += 1
-            if self.request_counter > 1000000:
-                self.request_counter = 0
-        return request_id
-
     def detect_raw(self, tensor_input):
-        request_id = self._get_request_id()
-
         tensor_input = self.preprocess(tensor_input)

         if isinstance(tensor_input, np.ndarray) and len(tensor_input.shape) == 3:
             tensor_input = np.expand_dims(tensor_input, axis=0)

-        self.input_queue.put((request_id, tensor_input))
+        request_id = self.input_store.put(tensor_input)

         try:
-            original_input, infer_results = self.response_store.get(
-                request_id, timeout=10.0
-            )
+            _, infer_results = self.response_store.get(request_id, timeout=10.0)
         except TimeoutError:
             logger.error(
                 f"Timeout waiting for inference results for request {request_id}"
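
The Hailo detector now hands request bookkeeping to the shared `RequestStore`/`ResponseStore` pair instead of a bare queue plus a per-detector counter. A minimal sketch of that handshake, with a trivial worker standing in for the Hailo inference job:

```python
import queue
import threading

requests: "queue.Queue[tuple[int, int]]" = queue.Queue()
responses: dict[int, int] = {}
cond = threading.Condition()

def worker() -> None:
    while True:
        request_id, value = requests.get()
        with cond:
            responses[request_id] = value * 2  # stand-in for inference
            cond.notify_all()

threading.Thread(target=worker, daemon=True).start()

# caller side: enqueue under an id, then wait for that id to appear
requests.put((7, 21))
with cond:
    if not cond.wait_for(lambda: 7 in responses, timeout=10.0):
        raise TimeoutError("Timeout waiting for response 7")
    print(responses.pop(7))  # 42
```

As in `ResponseStore.get`, the caller raises `TimeoutError` rather than blocking forever if the worker never answers.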
@@ -13,7 +13,8 @@ from frigate.util.model import (
     get_ort_providers,
     post_process_dfine,
     post_process_rfdetr,
-    post_process_yolov9,
+    post_process_yolo,
+    post_process_yolox,
 )

 logger = logging.getLogger(__name__)
@@ -30,6 +31,8 @@ class ONNXDetector(DetectionApi):
     type_key = DETECTOR_KEY

     def __init__(self, detector_config: ONNXDetectorConfig):
+        super().__init__(detector_config)
+
         try:
             import onnxruntime as ort

@@ -51,13 +54,14 @@ class ONNXDetector(DetectionApi):
             path, providers=providers, provider_options=options
         )

-        self.h = detector_config.model.height
-        self.w = detector_config.model.width
         self.onnx_model_type = detector_config.model.model_type
         self.onnx_model_px = detector_config.model.input_pixel_format
         self.onnx_model_shape = detector_config.model.input_tensor
         path = detector_config.model.path

+        if self.onnx_model_type == ModelTypeEnum.yolox:
+            self.calculate_grids_strides()
+
         logger.info(f"ONNX: {path} loaded")

     def detect_raw(self, tensor_input: np.ndarray):
@@ -66,10 +70,12 @@ class ONNXDetector(DetectionApi):
                 None,
                 {
                     "images": tensor_input,
-                    "orig_target_sizes": np.array([[self.h, self.w]], dtype=np.int64),
+                    "orig_target_sizes": np.array(
+                        [[self.height, self.width]], dtype=np.int64
+                    ),
                 },
             )
-            return post_process_dfine(tensor_output, self.w, self.h)
+            return post_process_dfine(tensor_output, self.width, self.height)

         model_input_name = self.model.get_inputs()[0].name
         tensor_output = self.model.run(None, {model_input_name: tensor_input})
@@ -91,18 +97,22 @@ class ONNXDetector(DetectionApi):
                 detections[i] = [
                     class_id,
                     confidence,
-                    y_min / self.h,
-                    x_min / self.w,
-                    y_max / self.h,
-                    x_max / self.w,
+                    y_min / self.height,
+                    x_min / self.width,
+                    y_max / self.height,
+                    x_max / self.width,
                 ]
             return detections
-        elif (
-            self.onnx_model_type == ModelTypeEnum.yolov9
-            or self.onnx_model_type == ModelTypeEnum.yologeneric
-        ):
-            predictions: np.ndarray = tensor_output[0]
-            return post_process_yolov9(predictions, self.w, self.h)
+        elif self.onnx_model_type == ModelTypeEnum.yologeneric:
+            return post_process_yolo(tensor_output, self.width, self.height)
+        elif self.onnx_model_type == ModelTypeEnum.yolox:
+            return post_process_yolox(
+                tensor_output[0],
+                self.width,
+                self.height,
+                self.grids,
+                self.expanded_strides,
+            )
         else:
             raise Exception(
                 f"{self.onnx_model_type} is currently not supported for onnx. See the docs for more info on supported models."
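
For context, the precomputed grids and strides are what `post_process_yolox` consumes. A hedged sketch of the standard YOLOX decode step it is expected to perform (the function name and exact signature here are illustrative, not the `frigate.util.model` implementation):

```python
import numpy as np

def decode_yolox(outputs: np.ndarray, grids: np.ndarray, expanded_strides: np.ndarray) -> np.ndarray:
    # outputs: (1, N, 85) raw predictions; grids: (1, N, 2); strides: (1, N, 1)
    outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides   # center x/y in pixels
    outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides   # width/height in pixels
    return outputs
```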
@@ -13,7 +13,7 @@ from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
 from frigate.util.model import (
     post_process_dfine,
     post_process_rfdetr,
-    post_process_yolov9,
+    post_process_yolo,
 )

 logger = logging.getLogger(__name__)
@@ -33,12 +33,12 @@ class OvDetector(DetectionApi):
         ModelTypeEnum.rfdetr,
         ModelTypeEnum.ssd,
         ModelTypeEnum.yolonas,
-        ModelTypeEnum.yolov9,
         ModelTypeEnum.yologeneric,
         ModelTypeEnum.yolox,
     ]

     def __init__(self, detector_config: OvDetectorConfig):
+        super().__init__(detector_config)
         self.ov_core = ov.Core()
         self.ov_model_type = detector_config.model.model_type

@@ -134,25 +134,7 @@ class OvDetector(DetectionApi):
                 break
         self.num_classes = tensor_shape[2] - 5
         logger.info(f"YOLOX model has {self.num_classes} classes")
-        self.set_strides_grids()
-
-    def set_strides_grids(self):
-        grids = []
-        expanded_strides = []
-
-        strides = [8, 16, 32]
-
-        hsize_list = [self.h // stride for stride in strides]
-        wsize_list = [self.w // stride for stride in strides]
-
-        for hsize, wsize, stride in zip(hsize_list, wsize_list, strides):
-            xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
-            grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
-            grids.append(grid)
-            shape = grid.shape[:2]
-            expanded_strides.append(np.full((*shape, 1), stride))
-        self.grids = np.concatenate(grids, 1)
-        self.expanded_strides = np.concatenate(expanded_strides, 1)
+        self.calculate_grids_strides()

     ## Takes in class ID, confidence score, and array of [x, y, w, h] that describes detection position,
     ## returns an array that's easily passable back to Frigate.
@@ -232,12 +214,13 @@ class OvDetector(DetectionApi):
                     x_max / self.w,
                 ]
             return detections
-        elif (
-            self.ov_model_type == ModelTypeEnum.yolov9
-            or self.ov_model_type == ModelTypeEnum.yologeneric
-        ):
-            out_tensor = infer_request.get_output_tensor(0).data
-            return post_process_yolov9(out_tensor, self.w, self.h)
+        elif self.ov_model_type == ModelTypeEnum.yologeneric:
+            out_tensor = []
+
+            for item in infer_request.output_tensors:
+                out_tensor.append(item.data)
+
+            return post_process_yolo(out_tensor, self.w, self.h)
         elif self.ov_model_type == ModelTypeEnum.yolox:
             out_tensor = infer_request.get_output_tensor()
             # [x, y, h, w, box_score, class_no_1, ..., class_no_80],

@@ -4,11 +4,14 @@ import re
 import urllib.request
 from typing import Literal

+import cv2
+import numpy as np
 from pydantic import Field

 from frigate.const import MODEL_CACHE_DIR
 from frigate.detectors.detection_api import DetectionApi
 from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
+from frigate.util.model import post_process_yolo

 logger = logging.getLogger(__name__)

@@ -16,7 +19,11 @@ DETECTOR_KEY = "rknn"

 supported_socs = ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]

-supported_models = {ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$"}
+supported_models = {
+    ModelTypeEnum.yologeneric: "^frigate-fp16-yolov9-[cemst]$",
+    ModelTypeEnum.yolonas: "^deci-fp16-yolonas_[sml]$",
+    ModelTypeEnum.yolox: "^rock-(fp16|i8)-yolox_(nano|tiny)$",
+}

 model_cache_dir = os.path.join(MODEL_CACHE_DIR, "rknn_cache/")
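
The preset regexes above gate which model names are treated as downloadable presets. A small sketch of that matching (keys simplified to plain strings; `match_preset` is an illustrative helper, not a function in the module):

```python
import re

supported_models = {
    "yolo-generic": r"^frigate-fp16-yolov9-[cemst]$",
    "yolonas": r"^deci-fp16-yolonas_[sml]$",
    "yolox": r"^rock-(fp16|i8)-yolox_(nano|tiny)$",
}

def match_preset(model_path: str) -> str | None:
    for model_type, pattern in supported_models.items():
        if re.match(pattern, model_path):
            return model_type
    return None

assert match_preset("deci-fp16-yolonas_s") == "yolonas"
assert match_preset("rock-i8-yolox_tiny") == "yolox"
assert match_preset("unknown-model") is None
```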
@@ -40,6 +47,9 @@ class Rknn(DetectionApi):

         model_props = self.parse_model_input(model_path, soc)

+        if self.detector_config.model.model_type == ModelTypeEnum.yolox:
+            self.calculate_grids_strides(expanded=False)
+
         if model_props["preset"]:
             config.model.model_type = model_props["model_type"]

@@ -109,7 +119,7 @@ class Rknn(DetectionApi):
                 model_props["model_type"] = model_type

             if model_matched:
-                model_props["filename"] = model_path + f"-{soc}-v2.3.0-1.rknn"
+                model_props["filename"] = model_path + f"-{soc}-v2.3.2-1.rknn"

                 model_props["path"] = model_cache_dir + model_props["filename"]

@@ -130,7 +140,7 @@ class Rknn(DetectionApi):
             os.mkdir(model_cache_dir)

         urllib.request.urlretrieve(
-            f"https://github.com/MarcA711/rknn-models/releases/download/v2.3.0/{filename}",
+            f"https://github.com/MarcA711/rknn-models/releases/download/v2.3.2/{filename}",
             model_cache_dir + filename,
         )

@@ -150,6 +160,141 @@ class Rknn(DetectionApi):
                 'Make sure to set the model input_tensor to "nhwc" in your config.'
             )

+    def post_process_yolonas(self, output: list[np.ndarray]):
+        """
+        @param output: output of inference
+        expected shape: [np.array(1, N, 4), np.array(1, N, 80)]
+        where N depends on the input size e.g. N=2100 for 320x320 images
+
+        @return: best results: np.array(20, 6) where each row is
+        in this order (class_id, score, y1/height, x1/width, y2/height, x2/width)
+        """
+        N = output[0].shape[1]
+
+        boxes = output[0].reshape(N, 4)
+        scores = output[1].reshape(N, 80)
+
+        class_ids = np.argmax(scores, axis=1)
+        scores = scores[np.arange(N), class_ids]
+
+        args_best = np.argwhere(scores > self.thresh)[:, 0]
+
+        num_matches = len(args_best)
+        if num_matches == 0:
+            return np.zeros((20, 6), np.float32)
+        elif num_matches > 20:
+            args_best20 = np.argpartition(scores[args_best], -20)[-20:]
+            args_best = args_best[args_best20]
+
+        boxes = boxes[args_best]
+        class_ids = class_ids[args_best]
+        scores = scores[args_best]
+
+        boxes = np.transpose(
+            np.vstack(
+                (
+                    boxes[:, 1] / self.height,
+                    boxes[:, 0] / self.width,
+                    boxes[:, 3] / self.height,
+                    boxes[:, 2] / self.width,
+                )
+            )
+        )
+
+        results = np.hstack(
+            (class_ids[..., np.newaxis], scores[..., np.newaxis], boxes)
+        )
+
+        return np.resize(results, (20, 6))
+
+    def post_process_yolox(
+        self,
+        predictions: list[np.ndarray],
+        grids: np.ndarray,
+        expanded_strides: np.ndarray,
+    ) -> np.ndarray:
+        def sp_flatten(_in: np.ndarray):
+            ch = _in.shape[1]
+            _in = _in.transpose(0, 2, 3, 1)
+            return _in.reshape(-1, ch)
+
+        boxes, scores, classes_conf = [], [], []
+
+        input_data = [
+            _in.reshape([1, -1] + list(_in.shape[-2:])) for _in in predictions
+        ]
+
+        for i in range(len(input_data)):
+            unprocessed_box = input_data[i][:, :4, :, :]
+            box_xy = unprocessed_box[:, :2, :, :]
+            box_wh = np.exp(unprocessed_box[:, 2:4, :, :]) * expanded_strides[i]
+
+            box_xy += grids[i]
+            box_xy *= expanded_strides[i]
+            box = np.concatenate((box_xy, box_wh), axis=1)
+
+            # Convert [c_x, c_y, w, h] to [x1, y1, x2, y2]
+            xyxy = np.copy(box)
+            xyxy[:, 0, :, :] = box[:, 0, :, :] - box[:, 2, :, :] / 2  # top left x
+            xyxy[:, 1, :, :] = box[:, 1, :, :] - box[:, 3, :, :] / 2  # top left y
+            xyxy[:, 2, :, :] = box[:, 0, :, :] + box[:, 2, :, :] / 2  # bottom right x
+            xyxy[:, 3, :, :] = box[:, 1, :, :] + box[:, 3, :, :] / 2  # bottom right y
+
+            boxes.append(xyxy)
+            scores.append(input_data[i][:, 4:5, :, :])
+            classes_conf.append(input_data[i][:, 5:, :, :])
+
+        # flatten data
+        boxes = np.concatenate([sp_flatten(_v) for _v in boxes])
+        classes_conf = np.concatenate([sp_flatten(_v) for _v in classes_conf])
+        scores = np.concatenate([sp_flatten(_v) for _v in scores])
+
+        # reshape and filter boxes
+        box_confidences = scores.reshape(-1)
+        class_max_score = np.max(classes_conf, axis=-1)
+        classes = np.argmax(classes_conf, axis=-1)
+        _class_pos = np.where(class_max_score * box_confidences >= 0.4)
+        scores = (class_max_score * box_confidences)[_class_pos]
+        boxes = boxes[_class_pos]
+        classes = classes[_class_pos]
+
+        # run nms
+        indices = cv2.dnn.NMSBoxes(
+            bboxes=boxes,
+            scores=scores,
+            score_threshold=0.4,
+            nms_threshold=0.4,
+        )
+
+        results = np.zeros((20, 6), np.float32)
+
+        if len(indices) > 0:
+            for i, idx in enumerate(indices.flatten()[:20]):
+                box = boxes[idx]
+                results[i] = [
+                    classes[idx],
+                    scores[idx],
+                    box[1] / self.height,
+                    box[0] / self.width,
+                    box[3] / self.height,
+                    box[2] / self.width,
+                ]
+
+        return results
+
+    def post_process(self, output):
+        if self.detector_config.model.model_type == ModelTypeEnum.yolonas:
+            return self.post_process_yolonas(output)
+        elif self.detector_config.model.model_type == ModelTypeEnum.yologeneric:
+            return post_process_yolo(output, self.width, self.height)
+        elif self.detector_config.model.model_type == ModelTypeEnum.yolox:
+            return self.post_process_yolox(output, self.grids, self.expanded_strides)
+        else:
+            raise ValueError(
+                f'Model type "{self.detector_config.model.model_type}" is currently not supported.'
+            )
+
     def detect_raw(self, tensor_input):
         output = self.rknn.inference(
             [
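
The RKNN `post_process_yolox` above relies on OpenCV's NMS to drop overlapping candidates. A tiny self-contained demonstration of `cv2.dnn.NMSBoxes` (note that OpenCV documents the boxes argument as (x, y, w, h) rectangles):

```python
import cv2
import numpy as np

# Two near-duplicate boxes plus one disjoint box, as (x, y, w, h).
boxes = np.array(
    [[10, 10, 100, 100], [12, 12, 100, 100], [300, 300, 50, 50]], dtype=np.float32
)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)

kept = cv2.dnn.NMSBoxes(
    bboxes=boxes, scores=scores, score_threshold=0.4, nms_threshold=0.4
)
print(kept.flatten())  # [0 2]: the lower-scoring duplicate is suppressed
```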
@@ -5,11 +5,13 @@ import json
 import logging
 import multiprocessing as mp
 import os
+import re
 import signal
 import threading
 from types import FrameType
 from typing import Optional, Union

+from pathvalidate import ValidationError, sanitize_filename
 from setproctitle import setproctitle

 from frigate.comms.embeddings_updater import EmbeddingsRequestEnum, EmbeddingsRequestor
@@ -240,6 +242,42 @@ class EmbeddingsContext:
             EmbeddingsRequestEnum.clear_face_classifier.value, None
         )

+    def rename_face(self, old_name: str, new_name: str) -> None:
+        valid_name_pattern = r"^[a-zA-Z0-9\s_-]{1,50}$"
+
+        try:
+            sanitized_old_name = sanitize_filename(old_name, replacement_text="_")
+            sanitized_new_name = sanitize_filename(new_name, replacement_text="_")
+        except ValidationError as e:
+            raise ValueError(f"Invalid face name: {str(e)}")
+
+        if not re.match(valid_name_pattern, old_name):
+            raise ValueError(f"Invalid old face name: {old_name}")
+        if not re.match(valid_name_pattern, new_name):
+            raise ValueError(f"Invalid new face name: {new_name}")
+        if sanitized_old_name != old_name:
+            raise ValueError(f"Old face name contains invalid characters: {old_name}")
+        if sanitized_new_name != new_name:
+            raise ValueError(f"New face name contains invalid characters: {new_name}")
+
+        old_path = os.path.normpath(os.path.join(FACE_DIR, old_name))
+        new_path = os.path.normpath(os.path.join(FACE_DIR, new_name))
+
+        # Prevent path traversal
+        if not old_path.startswith(
+            os.path.normpath(FACE_DIR)
+        ) or not new_path.startswith(os.path.normpath(FACE_DIR)):
+            raise ValueError("Invalid path detected")
+
+        if not os.path.exists(old_path):
+            raise ValueError(f"Face {old_name} not found.")
+
+        os.rename(old_path, new_path)
+
+        self.requestor.send_data(
+            EmbeddingsRequestEnum.clear_face_classifier.value, None
+        )
+
     def update_description(self, event_id: str, description: str) -> None:
         self.requestor.send_data(
             EmbeddingsRequestEnum.embed_description.value,
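
The `rename_face` validation above layers a regex allow-list, `sanitize_filename`, and a `normpath` containment check. The last one is what stops path traversal; a quick illustration (the `FACE_DIR` value here is illustrative):

```python
import os

FACE_DIR = "/media/frigate/clips/faces"
candidate = os.path.normpath(os.path.join(FACE_DIR, "../admin"))
print(candidate)                                          # /media/frigate/clips/admin
print(candidate.startswith(os.path.normpath(FACE_DIR)))   # False -> rejected
```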
@@ -108,7 +108,11 @@ class EmbeddingMaintainer(threading.Thread):

         # model runners to share between realtime and post processors
         if self.config.lpr.enabled:
-            lpr_model_runner = LicensePlateModelRunner(self.requestor)
+            lpr_model_runner = LicensePlateModelRunner(
+                self.requestor,
+                device=self.config.lpr.device,
+                model_size=self.config.lpr.model_size,
+            )

         # realtime processors
         self.realtime_processors: list[RealTimeProcessorApi] = []
@@ -116,7 +120,7 @@ class EmbeddingMaintainer(threading.Thread):
         if self.config.face_recognition.enabled:
             self.realtime_processors.append(
                 FaceRealTimeProcessor(
-                    self.config, self.event_metadata_publisher, metrics
+                    self.config, self.requestor, self.event_metadata_publisher, metrics
                 )
             )

@@ -131,6 +135,7 @@ class EmbeddingMaintainer(threading.Thread):
             self.realtime_processors.append(
                 LicensePlateRealTimeProcessor(
                     self.config,
+                    self.requestor,
                     self.event_metadata_publisher,
                     metrics,
                     lpr_model_runner,
@@ -145,6 +150,7 @@ class EmbeddingMaintainer(threading.Thread):
             self.post_processors.append(
                 LicensePlatePostProcessor(
                     self.config,
+                    self.requestor,
                     self.event_metadata_publisher,
                     metrics,
                     lpr_model_runner,
@@ -225,7 +231,7 @@ class EmbeddingMaintainer(threading.Thread):

     def _process_updates(self) -> None:
         """Process event updates"""
-        update = self.event_subscriber.check_for_update(timeout=0.01)
+        update = self.event_subscriber.check_for_update()

         if update is None:
             return
@@ -318,7 +324,7 @@ class EmbeddingMaintainer(threading.Thread):
     def _process_finalized(self) -> None:
         """Process the end of an event."""
         while True:
-            ended = self.event_end_subscriber.check_for_update(timeout=0.01)
+            ended = self.event_end_subscriber.check_for_update()

             if ended == None:
                 break
@@ -414,7 +420,7 @@ class EmbeddingMaintainer(threading.Thread):
     def _process_recordings_updates(self) -> None:
         """Process recordings updates."""
         while True:
-            recordings_data = self.recordings_subscriber.check_for_update(timeout=0.01)
+            recordings_data = self.recordings_subscriber.check_for_update()

             if recordings_data == None:
                 break
@@ -431,7 +437,7 @@ class EmbeddingMaintainer(threading.Thread):

     def _process_event_metadata(self):
         # Check for regenerate description requests
-        (topic, payload) = self.event_metadata_subscriber.check_for_update(timeout=0.01)
+        (topic, payload) = self.event_metadata_subscriber.check_for_update()

         if topic is None:
             return
@@ -445,7 +451,7 @@ class EmbeddingMaintainer(threading.Thread):

     def _process_dedicated_lpr(self) -> None:
         """Process event updates"""
-        (topic, data) = self.detection_subscriber.check_for_update(timeout=0.01)
+        (topic, data) = self.detection_subscriber.check_for_update()

         if topic is None:
             return
@@ -579,6 +585,7 @@ class EmbeddingMaintainer(threading.Thread):
                 "type": TrackedObjectUpdateTypesEnum.description,
                 "id": event.id,
                 "description": description,
+                "camera": event.camera,
             },
         )


@@ -36,11 +36,12 @@ class JinaV1TextEmbedding(BaseEmbedding):
         requestor: InterProcessRequestor,
         device: str = "AUTO",
     ):
+        HF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
         super().__init__(
             model_name="jinaai/jina-clip-v1",
             model_file="text_model_fp16.onnx",
             download_urls={
-                "text_model_fp16.onnx": "https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/text_model_fp16.onnx",
+                "text_model_fp16.onnx": f"{HF_ENDPOINT}/jinaai/jina-clip-v1/resolve/main/onnx/text_model_fp16.onnx",
             },
         )
         self.tokenizer_file = "tokenizer"
@@ -156,12 +157,13 @@ class JinaV1ImageEmbedding(BaseEmbedding):
             if model_size == "large"
             else "vision_model_quantized.onnx"
         )
+        HF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
         super().__init__(
             model_name="jinaai/jina-clip-v1",
             model_file=model_file,
             download_urls={
-                model_file: f"https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/{model_file}",
-                "preprocessor_config.json": "https://huggingface.co/jinaai/jina-clip-v1/resolve/main/preprocessor_config.json",
+                model_file: f"{HF_ENDPOINT}/jinaai/jina-clip-v1/resolve/main/onnx/{model_file}",
+                "preprocessor_config.json": f"{HF_ENDPOINT}/jinaai/jina-clip-v1/resolve/main/preprocessor_config.json",
             },
         )
         self.requestor = requestor

@@ -34,12 +34,13 @@ class JinaV2Embedding(BaseEmbedding):
         model_file = (
             "model_fp16.onnx" if model_size == "large" else "model_quantized.onnx"
         )
+        HF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
         super().__init__(
             model_name="jinaai/jina-clip-v2",
             model_file=model_file,
             download_urls={
-                model_file: f"https://huggingface.co/jinaai/jina-clip-v2/resolve/main/onnx/{model_file}",
-                "preprocessor_config.json": "https://huggingface.co/jinaai/jina-clip-v2/resolve/main/preprocessor_config.json",
+                model_file: f"{HF_ENDPOINT}/jinaai/jina-clip-v2/resolve/main/onnx/{model_file}",
+                "preprocessor_config.json": f"{HF_ENDPOINT}/jinaai/jina-clip-v2/resolve/main/preprocessor_config.json",
             },
         )
         self.tokenizer_file = "tokenizer"
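
Deriving the download URLs from `HF_ENDPOINT` means a Hugging Face mirror can be used without code changes, e.g. by exporting the variable before Frigate starts (`export HF_ENDPOINT=https://hf-mirror.com`). The same pattern as the constructors above:

```python
import os

HF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
url = f"{HF_ENDPOINT}/jinaai/jina-clip-v1/resolve/main/onnx/text_model_fp16.onnx"
```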
@@ -31,11 +31,14 @@ class PaddleOCRDetection(BaseEmbedding):
         requestor: InterProcessRequestor,
         device: str = "AUTO",
     ):
+        model_file = (
+            "detection-large.onnx" if model_size == "large" else "detection-small.onnx"
+        )
         super().__init__(
             model_name="paddleocr-onnx",
-            model_file="detection.onnx",
+            model_file=model_file,
             download_urls={
-                "detection.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/detection.onnx"
+                model_file: f"https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{model_file}"
             },
         )
         self.requestor = requestor

@@ -65,10 +65,43 @@ class ONNXModelRunner:
         elif self.type == "ort":
             return [input.name for input in self.ort.get_inputs()]

+    def get_input_width(self):
+        """Get the input width of the model regardless of backend."""
+        if self.type == "ort":
+            return self.ort.get_inputs()[0].shape[3]
+        elif self.type == "ov":
+            input_info = self.interpreter.inputs
+            first_input = input_info[0]
+
+            try:
+                partial_shape = first_input.get_partial_shape()
+                # width dimension
+                if len(partial_shape) >= 4 and partial_shape[3].is_static:
+                    return partial_shape[3].get_length()
+
+                # If width is dynamic or we can't determine it
+                return -1
+            except Exception:
+                try:
+                    # gemini says some ov versions might still allow this
+                    input_shape = first_input.shape
+                    return input_shape[3] if len(input_shape) >= 4 else -1
+                except Exception:
+                    return -1
+        return -1
+
     def run(self, input: dict[str, Any]) -> Any:
         if self.type == "ov":
             infer_request = self.interpreter.create_infer_request()

+            try:
+                # This ensures the model starts with a clean state for each sequence
+                # Important for RNN models like PaddleOCR recognition
+                infer_request.reset_state()
+            except Exception:
+                # this will raise an exception for models with AUTO set as the device
+                pass
+
             outputs = infer_request.infer(input)

             return outputs

@@ -29,7 +29,7 @@ from frigate.const import (
 )
 from frigate.ffmpeg_presets import parse_preset_input
 from frigate.log import LogPipe
-from frigate.object_detection import load_labels
+from frigate.object_detection.base import load_labels
 from frigate.util.builtin import get_ffmpeg_arg_list
 from frigate.video import start_or_restart_ffmpeg, stop_ffmpeg


@@ -75,7 +75,7 @@ class EventProcessor(threading.Thread):
         ).execute()

         while not self.stop_event.is_set():
-            update = self.event_receiver.check_for_update()
+            update = self.event_receiver.check_for_update(timeout=1)

             if update == None:
                 continue

@@ -283,7 +283,7 @@ PRESETS_INPUT = {
         "-probesize",
         "1000M",
         "-rw_timeout",
-        "5000000",
+        "10000000",
     ],
     "preset-rtmp-generic": [
         "-avoid_negative_ts",
@@ -297,7 +297,7 @@ PRESETS_INPUT = {
         "-fflags",
         "+genpts+discardcorrupt",
         "-rw_timeout",
-        "5000000",
+        "10000000",
         "-use_wallclock_as_timestamps",
         "1",
         "-f",
@@ -312,7 +312,7 @@ PRESETS_INPUT = {
         "-rtsp_transport",
         "tcp",
         TIMEOUT_PARAM,
-        "5000000",
+        "10000000",
         "-use_wallclock_as_timestamps",
         "1",
     ],
@@ -321,14 +321,14 @@ PRESETS_INPUT = {
         "-rtsp_transport",
         "tcp",
         TIMEOUT_PARAM,
-        "5000000",
+        "10000000",
     ],
     "preset-rtsp-restream-low-latency": _user_agent_args
     + [
         "-rtsp_transport",
         "tcp",
         TIMEOUT_PARAM,
-        "5000000",
+        "10000000",
         "-fflags",
         "nobuffer",
         "-flags",
@@ -343,7 +343,7 @@ PRESETS_INPUT = {
         "-rtsp_transport",
         "udp",
         TIMEOUT_PARAM,
-        "5000000",
+        "10000000",
         "-use_wallclock_as_timestamps",
         "1",
     ],
@@ -362,7 +362,7 @@ PRESETS_INPUT = {
         "-rtsp_transport",
         "tcp",
         TIMEOUT_PARAM,
-        "5000000",
+        "10000000",
         "-use_wallclock_as_timestamps",
         "1",
     ],
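
ffmpeg's `-rw_timeout` and `-timeout` input options are expressed in microseconds, so this change doubles the stall allowance from 5 seconds to 10 seconds across the input presets:

```python
OLD_TIMEOUT_US = 5_000_000   # 5 seconds
NEW_TIMEOUT_US = 10_000_000  # 10 seconds
assert NEW_TIMEOUT_US / 1_000_000 == 10.0
```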
@@ -54,9 +54,13 @@ class OpenAIClient(GenAIClient):
                 ],
                 timeout=self.timeout,
             )
-        except TimeoutException as e:
+            if (
+                result is not None
+                and hasattr(result, "choices")
+                and len(result.choices) > 0
+            ):
+                return result.choices[0].message.content.strip()
+            return None
+        except (TimeoutException, Exception) as e:
             logger.warning("OpenAI returned an error: %s", str(e))
             return None
-        if len(result.choices) > 0:
-            return result.choices[0].message.content.strip()
-        return None

@@ -6,6 +6,8 @@ import queue
 import signal
 import threading
 from abc import ABC, abstractmethod
+from multiprocessing import Queue, Value
+from multiprocessing.synchronize import Event as MpEvent

 import numpy as np
 from setproctitle import setproctitle
@@ -15,12 +17,14 @@ from frigate.detectors import create_detector
 from frigate.detectors.detector_config import (
+    BaseDetectorConfig,
     InputDTypeEnum,
     InputTensorEnum,
+    ModelConfig,
 )
 from frigate.util.builtin import EventsPerSecond, load_labels
 from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
 from frigate.util.services import listen

+from .util import tensor_transform
+
 logger = logging.getLogger(__name__)


@@ -30,14 +34,6 @@ class ObjectDetector(ABC):
         pass


-def tensor_transform(desired_shape: InputTensorEnum):
-    # Currently this function only supports BHWC permutations
-    if desired_shape == InputTensorEnum.nhwc:
-        return None
-    elif desired_shape == InputTensorEnum.nchw:
-        return (0, 3, 1, 2)
-
-
 class LocalObjectDetector(ObjectDetector):
     def __init__(
         self,
@@ -84,17 +80,19 @@ class LocalObjectDetector(ObjectDetector):
         if self.dtype == InputDTypeEnum.float:
             tensor_input = tensor_input.astype(np.float32)
             tensor_input /= 255
+        elif self.dtype == InputDTypeEnum.float_denorm:
+            tensor_input = tensor_input.astype(np.float32)

         return self.detect_api.detect_raw(tensor_input=tensor_input)

 def run_detector(
     name: str,
-    detection_queue: mp.Queue,
-    out_events: dict[str, mp.Event],
-    avg_speed,
-    start,
-    detector_config,
+    detection_queue: Queue,
+    out_events: dict[str, MpEvent],
+    avg_speed: Value,
+    start: Value,
+    detector_config: BaseDetectorConfig,
 ):
     threading.current_thread().name = f"detector:{name}"
     logger = logging.getLogger(f"detector.{name}")
@@ -102,7 +100,7 @@ def run_detector(
     setproctitle(f"frigate.detector.{name}")
     listen()

-    stop_event = mp.Event()
+    stop_event: MpEvent = mp.Event()

     def receiveSignal(signalNumber, frame):
         stop_event.set()
@@ -150,17 +148,17 @@
 class ObjectDetectProcess:
     def __init__(
         self,
-        name,
-        detection_queue,
-        out_events,
-        detector_config,
+        name: str,
+        detection_queue: Queue,
+        out_events: dict[str, MpEvent],
+        detector_config: BaseDetectorConfig,
     ):
         self.name = name
         self.out_events = out_events
         self.detection_queue = detection_queue
-        self.avg_inference_speed = mp.Value("d", 0.01)
-        self.detection_start = mp.Value("d", 0.0)
-        self.detect_process = None
+        self.avg_inference_speed = Value("d", 0.01)
+        self.detection_start = Value("d", 0.0)
+        self.detect_process: util.Process | None = None
         self.detector_config = detector_config
         self.start_or_restart()

@@ -198,7 +196,15 @@ class ObjectDetectProcess:


 class RemoteObjectDetector:
-    def __init__(self, name, labels, detection_queue, event, model_config, stop_event):
+    def __init__(
+        self,
+        name: str,
+        labels: dict[int, str],
+        detection_queue: Queue,
+        event: MpEvent,
+        model_config: ModelConfig,
+        stop_event: MpEvent,
+    ):
         self.labels = labels
         self.name = name
         self.fps = EventsPerSecond()
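
The new `float_denorm` input dtype (handled in `LocalObjectDetector` above) casts to float32 without the `/255` normalization, for models exported to take raw 0-255 float inputs. A quick comparison:

```python
import numpy as np

frame = np.random.randint(0, 255, (1, 320, 320, 3), dtype=np.uint8)

as_float = frame.astype(np.float32) / 255    # InputDTypeEnum.float
as_float_denorm = frame.astype(np.float32)   # InputDTypeEnum.float_denorm
assert as_float.max() <= 1.0 and as_float_denorm.max() <= 255.0
```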
frigate/object_detection/util.py (new file, 77 lines)
@@ -0,0 +1,77 @@
+"""Object detection utilities."""
+
+import queue
+import threading
+
+from numpy import ndarray
+
+from frigate.detectors.detector_config import InputTensorEnum
+
+
+class RequestStore:
+    """
+    A thread-safe hash-based response store that handles creating requests.
+    """
+
+    def __init__(self):
+        self.request_counter = 0
+        self.request_counter_lock = threading.Lock()
+        self.input_queue = queue.Queue()
+
+    def __get_request_id(self) -> int:
+        with self.request_counter_lock:
+            request_id = self.request_counter
+            self.request_counter += 1
+            if self.request_counter > 1000000:
+                self.request_counter = 0
+            return request_id
+
+    def put(self, tensor_input: ndarray) -> int:
+        request_id = self.__get_request_id()
+        self.input_queue.put((request_id, tensor_input))
+        return request_id
+
+    def get(self) -> tuple[int, ndarray] | None:
+        try:
+            return self.input_queue.get()
+        except Exception:
+            return None
+
+
+class ResponseStore:
+    """
+    A thread-safe hash-based response store that maps request IDs
+    to their results. Threads can wait on the condition variable until
+    their request's result appears.
+    """
+
+    def __init__(self):
+        self.responses = {}  # Maps request_id -> (original_input, infer_results)
+        self.lock = threading.Lock()
+        self.cond = threading.Condition(self.lock)
+
+    def put(self, request_id: int, response: ndarray):
+        with self.cond:
+            self.responses[request_id] = response
+            self.cond.notify_all()
+
+    def get(self, request_id: int, timeout=None) -> ndarray:
+        with self.cond:
+            if not self.cond.wait_for(
+                lambda: request_id in self.responses, timeout=timeout
+            ):
+                raise TimeoutError(f"Timeout waiting for response {request_id}")
+
+            return self.responses.pop(request_id)
+
+
+def tensor_transform(desired_shape: InputTensorEnum):
+    # Currently this function only supports BHWC permutations
+    if desired_shape == InputTensorEnum.nhwc:
+        return None
+    elif desired_shape == InputTensorEnum.nchw:
+        return (0, 3, 1, 2)
+    elif desired_shape == InputTensorEnum.hwnc:
+        return (1, 2, 0, 3)
+    elif desired_shape == InputTensorEnum.hwcn:
+        return (1, 2, 3, 0)
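
The extended `tensor_transform` now covers four layouts. Applying each returned permutation with `np.transpose` confirms the axis order (a small check, not part of the module):

```python
import numpy as np

nhwc = np.zeros((1, 320, 320, 3))
assert np.transpose(nhwc, (0, 3, 1, 2)).shape == (1, 3, 320, 320)   # nchw
assert np.transpose(nhwc, (1, 2, 0, 3)).shape == (320, 320, 1, 3)   # hwnc
assert np.transpose(nhwc, (1, 2, 3, 0)).shape == (320, 320, 3, 1)   # hwcn
```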
@@ -754,7 +754,6 @@ class Birdseye:
             "birdseye", self.converter, websocket_server, stop_event
         )
         self.birdseye_manager = BirdsEyeFrameManager(config, stop_event)
-        self.config_enabled_subscriber = ConfigSubscriber("config/enabled/")
         self.birdseye_subscriber = ConfigSubscriber("config/birdseye/")
         self.frame_manager = SharedMemoryFrameManager()
         self.stop_event = stop_event
@@ -799,24 +798,13 @@ class Birdseye:
                 updated_birdseye_config,
             ) = self.birdseye_subscriber.check_for_update()

-            (
-                updated_enabled_topic,
-                updated_enabled_config,
-            ) = self.config_enabled_subscriber.check_for_update()
-
-            if not updated_birdseye_topic and not updated_enabled_topic:
+            if not updated_birdseye_topic:
                 break

             if updated_birdseye_config:
                 camera_name = updated_birdseye_topic.rpartition("/")[-1]
                 self.config.cameras[camera_name].birdseye = updated_birdseye_config

-            if updated_enabled_config:
-                camera_name = updated_enabled_topic.rpartition("/")[-1]
-                self.config.cameras[
-                    camera_name
-                ].enabled = updated_enabled_config.enabled
-
         if self.birdseye_manager.update(
             camera,
             len([o for o in current_tracked_objects if not o["stationary"]]),
@@ -828,6 +816,5 @@ class Birdseye:

     def stop(self) -> None:
         self.birdseye_subscriber.stop()
-        self.config_enabled_subscriber.stop()
         self.converter.join()
         self.broadcaster.join()

@@ -99,12 +99,7 @@ def output_frames(
     websocket_thread = threading.Thread(target=websocket_server.serve_forever)

     detection_subscriber = DetectionSubscriber(DetectionTypeEnum.video)

-    enabled_subscribers = {
-        camera: ConfigSubscriber(f"config/enabled/{camera}", True)
-        for camera in config.cameras.keys()
-        if config.cameras[camera].enabled_in_config
-    }
+    config_enabled_subscriber = ConfigSubscriber("config/enabled/")

     jsmpeg_cameras: dict[str, JsmpegCamera] = {}
     birdseye: Birdseye | None = None
@@ -128,16 +123,21 @@ def output_frames(

     websocket_thread.start()

-    def get_enabled_state(camera: str) -> bool:
-        _, config_data = enabled_subscribers[camera].check_for_update()
-
-        if config_data:
-            config.cameras[camera].enabled = config_data.enabled
-            return config_data.enabled
-
-        return config.cameras[camera].enabled
-
     while not stop_event.is_set():
+        # check if there is an updated config
+        while True:
+            (
+                updated_enabled_topic,
+                updated_enabled_config,
+            ) = config_enabled_subscriber.check_for_update()
+
+            if not updated_enabled_topic:
+                break
+
+            if updated_enabled_config:
+                camera_name = updated_enabled_topic.rpartition("/")[-1]
+                config.cameras[camera_name].enabled = updated_enabled_config.enabled
+
         (topic, data) = detection_subscriber.check_for_update(timeout=1)
         now = datetime.datetime.now().timestamp()

@@ -161,7 +161,7 @@ def output_frames(
             _,
         ) = data

-        if not get_enabled_state(camera):
+        if not config.cameras[camera].enabled:
             continue

         frame = frame_manager.get(frame_name, config.cameras[camera].frame_shape_yuv)
@@ -242,9 +242,7 @@ def output_frames(
     if birdseye is not None:
         birdseye.stop()

-    for subscriber in enabled_subscribers.values():
-        subscriber.stop()
-
+    config_enabled_subscriber.stop()
     websocket_server.manager.close_all()
     websocket_server.manager.stop()
     websocket_server.manager.join()

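
`output_frames` now owns a single wildcard `config/enabled/` subscriber and drains it in a loop, recovering the camera name from the topic suffix with `rpartition("/")`. A compact sketch of that drain pattern (the subscriber is assumed to return a falsy topic when nothing is queued):

```python
def drain_enabled_updates(subscriber, cameras: dict) -> None:
    # Apply every queued "config/enabled/<camera>" update before rendering.
    while True:
        topic, enabled_config = subscriber.check_for_update()
        if not topic:
            break
        if enabled_config:
            camera_name = topic.rpartition("/")[-1]  # "config/enabled/back_yard" -> "back_yard"
            cameras[camera_name].enabled = enabled_config.enabled
```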
@@ -3,11 +3,9 @@

 import asyncio
 import copy
 import logging
-import queue
 import threading
 import time
 from collections import deque
-from functools import partial
 from multiprocessing.synchronize import Event as MpEvent

 import cv2
@@ -169,7 +167,12 @@ class PtzAutoTrackerThread(threading.Thread):
                     continue

                 if camera_config.onvif.autotracking.enabled:
-                    self.ptz_autotracker.camera_maintenance(camera)
+                    future = asyncio.run_coroutine_threadsafe(
+                        self.ptz_autotracker.camera_maintenance(camera),
+                        self.ptz_autotracker.onvif.loop,
+                    )
+                    # Wait for the coroutine to complete
+                    future.result()
                 else:
                     # disabled dynamically by mqtt
                     if self.ptz_autotracker.tracked_object.get(camera):
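
The autotracker thread no longer calls ONVIF helpers synchronously; it schedules coroutines onto the ONVIF controller's event loop and blocks on the returned future. A minimal, self-contained sketch of that pattern:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def camera_maintenance(camera: str) -> str:
    await asyncio.sleep(0)  # stand-in for async ONVIF calls
    return f"{camera}: ok"

# Schedule onto the loop's thread and block this thread until done.
future = asyncio.run_coroutine_threadsafe(camera_maintenance("front"), loop)
print(future.result())
```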
@@ -206,6 +209,7 @@ class PtzAutoTracker:
         self.calibrating: dict[str, object] = {}
         self.intercept: dict[str, object] = {}
         self.move_coefficients: dict[str, object] = {}
+        self.zoom_time: dict[str, float] = {}
         self.zoom_factor: dict[str, object] = {}

         # if cam is set to autotrack, onvif should be set up
@@ -218,9 +222,13 @@ class PtzAutoTracker:
                 camera_config.onvif.autotracking.enabled
                 and camera_config.onvif.autotracking.enabled_in_config
             ):
-                self._autotracker_setup(camera_config, camera)
+                future = asyncio.run_coroutine_threadsafe(
+                    self._autotracker_setup(camera_config, camera), self.onvif.loop
+                )
+                # Wait for the coroutine to complete
+                future.result()

-    def _autotracker_setup(self, camera_config: CameraConfig, camera: str):
+    async def _autotracker_setup(self, camera_config: CameraConfig, camera: str):
         logger.debug(f"{camera}: Autotracker init")

         self.object_types[camera] = camera_config.onvif.autotracking.track
@@ -241,8 +249,8 @@ class PtzAutoTracker:
         self.intercept[camera] = None
         self.move_coefficients[camera] = []

-        self.move_queues[camera] = queue.Queue()
-        self.move_queue_locks[camera] = threading.Lock()
+        self.move_queues[camera] = asyncio.Queue()
+        self.move_queue_locks[camera] = asyncio.Lock()

         # handle onvif constructor failing due to no connection
         if camera not in self.onvif.cams:
@@ -254,7 +262,7 @@ class PtzAutoTracker:
             return

         if not self.onvif.cams[camera]["init"]:
-            if not asyncio.run(self.onvif._init_onvif(camera)):
+            if not await self.onvif._init_onvif(camera):
                 logger.warning(
                     f"Disabling autotracking for {camera}: Unable to initialize onvif"
                 )
@@ -270,9 +278,14 @@ class PtzAutoTracker:
                 self.ptz_metrics[camera].autotracker_enabled.value = False
                 return

-        move_status_supported = self.onvif.get_service_capabilities(camera)
+        move_status_supported = await self.onvif.get_service_capabilities(camera)

-        if move_status_supported is None or move_status_supported.lower() != "true":
+        if not (
+            isinstance(move_status_supported, bool) and move_status_supported
+        ) and not (
+            isinstance(move_status_supported, str)
+            and move_status_supported.lower() == "true"
+        ):
             logger.warning(
                 f"Disabling autotracking for {camera}: ONVIF MoveStatus not supported"
             )
@@ -281,18 +294,15 @@ class PtzAutoTracker:
             return

         if self.onvif.cams[camera]["init"]:
-            self.onvif.get_camera_status(camera)
+            await self.onvif.get_camera_status(camera)

-        # movement thread per camera
-        self.move_threads[camera] = threading.Thread(
-            name=f"ptz_move_thread_{camera}",
-            target=partial(self._process_move_queue, camera),
+        # movement queue with asyncio on OnvifController loop
+        asyncio.run_coroutine_threadsafe(
+            self._process_move_queue(camera), self.onvif.loop
         )
-        self.move_threads[camera].daemon = True
-        self.move_threads[camera].start()

         if camera_config.onvif.autotracking.movement_weights:
-            if len(camera_config.onvif.autotracking.movement_weights) == 5:
+            if len(camera_config.onvif.autotracking.movement_weights) == 6:
                 camera_config.onvif.autotracking.movement_weights = [
                     float(val)
                     for val in camera_config.onvif.autotracking.movement_weights
@@ -311,7 +321,10 @@ class PtzAutoTracker:
                     camera_config.onvif.autotracking.movement_weights[2]
                 )
                 self.move_coefficients[camera] = (
-                    camera_config.onvif.autotracking.movement_weights[3:]
+                    camera_config.onvif.autotracking.movement_weights[3:5]
                 )
+                self.zoom_time[camera] = (
+                    camera_config.onvif.autotracking.movement_weights[5]
+                )
             else:
                 camera_config.onvif.autotracking.enabled = False
@@ -321,7 +334,7 @@ class PtzAutoTracker:
                 )

         if camera_config.onvif.autotracking.calibrate_on_startup:
-            self._calibrate_camera(camera)
+            await self._calibrate_camera(camera)

         self.ptz_metrics[camera].tracking_active.clear()
         self.dispatcher.publish(f"{camera}/ptz_autotracker/active", "OFF", retain=False)
@@ -340,7 +353,7 @@ class PtzAutoTracker:
             self.config.cameras[camera].onvif.autotracking.movement_weights,
         )

-    def _calibrate_camera(self, camera):
+    async def _calibrate_camera(self, camera):
         # move the camera from the preset in steps and measure the time it takes to move that amount
         # this will allow us to predict movement times with a simple linear regression
         # start with 0 so we can determine a baseline (to be used as the intercept in the regression calc)
@@ -360,28 +373,29 @@ class PtzAutoTracker:
             != ZoomingModeEnum.disabled
         ):
             logger.info(f"Calibration for {camera} in progress: 0% complete")
+            self.zoom_time[camera] = 0

             for i in range(2):
                 # absolute move to 0 - fully zoomed out
-                self.onvif._zoom_absolute(
+                await self.onvif._zoom_absolute(
                     camera,
                     self.onvif.cams[camera]["absolute_zoom_range"]["XRange"]["Min"],
                     1,
                 )

                 while not self.ptz_metrics[camera].motor_stopped.is_set():
-                    self.onvif.get_camera_status(camera)
+                    await self.onvif.get_camera_status(camera)

                 zoom_out_values.append(self.ptz_metrics[camera].zoom_level.value)

-                self.onvif._zoom_absolute(
+                await self.onvif._zoom_absolute(
                     camera,
                     self.onvif.cams[camera]["absolute_zoom_range"]["XRange"]["Max"],
                     1,
                 )

                 while not self.ptz_metrics[camera].motor_stopped.is_set():
-                    self.onvif.get_camera_status(camera)
+                    await self.onvif.get_camera_status(camera)

                 zoom_in_values.append(self.ptz_metrics[camera].zoom_level.value)

@@ -390,7 +404,7 @@ class PtzAutoTracker:
                 == ZoomingModeEnum.relative
             ):
                 # relative move to -0.01
-                self.onvif._move_relative(
+                await self.onvif._move_relative(
                     camera,
                     0,
                     0,
@@ -399,12 +413,13 @@ class PtzAutoTracker:
                 )

                 while not self.ptz_metrics[camera].motor_stopped.is_set():
-                    self.onvif.get_camera_status(camera)
+                    await self.onvif.get_camera_status(camera)

                 zoom_out_values.append(self.ptz_metrics[camera].zoom_level.value)

+                zoom_start_time = time.time()
                 # relative move to 0.01
-                self.onvif._move_relative(
+                await self.onvif._move_relative(
                     camera,
                     0,
                     0,
@@ -413,7 +428,39 @@ class PtzAutoTracker:
                 )

                 while not self.ptz_metrics[camera].motor_stopped.is_set():
-                    self.onvif.get_camera_status(camera)
+                    await self.onvif.get_camera_status(camera)
+
+                zoom_stop_time = time.time()
+
+                full_relative_start_time = time.time()
+
+                await self.onvif._move_relative(
+                    camera,
+                    -1,
+                    -1,
+                    -1e-2,
+                    1,
+                )
+
+                while not self.ptz_metrics[camera].motor_stopped.is_set():
+                    await self.onvif.get_camera_status(camera)
+
+                full_relative_stop_time = time.time()
+
+                await self.onvif._move_relative(
+                    camera,
+                    1,
+                    1,
+                    1e-2,
+                    1,
+                )
+
+                while not self.ptz_metrics[camera].motor_stopped.is_set():
+                    await self.onvif.get_camera_status(camera)
+
+                self.zoom_time[camera] = (
+                    full_relative_stop_time - full_relative_start_time
+                ) - (zoom_stop_time - zoom_start_time)

                 zoom_in_values.append(self.ptz_metrics[camera].zoom_level.value)
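
The extra calibration pass above times a combined pan/tilt/zoom relative move against a zoom-only move, and stores the difference as the camera's zoom time. A worked example with illustrative timestamps:

```python
zoom_start_time, zoom_stop_time = 10.0, 11.5                      # zoom-only move: 1.5 s
full_relative_start_time, full_relative_stop_time = 20.0, 24.0    # pan/tilt/zoom move: 4.0 s

zoom_time = (full_relative_stop_time - full_relative_start_time) - (
    zoom_stop_time - zoom_start_time
)
assert zoom_time == 2.5  # seconds attributed to zooming
```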
@@ -421,14 +468,14 @@ class PtzAutoTracker:
                 self.ptz_metrics[camera].min_zoom.value = min(zoom_out_values)

                 logger.debug(
-                    f"{camera}: Calibration values: max zoom: {self.ptz_metrics[camera].max_zoom.value}, min zoom: {self.ptz_metrics[camera].min_zoom.value}"
+                    f"{camera}: Calibration values: max zoom: {self.ptz_metrics[camera].max_zoom.value}, min zoom: {self.ptz_metrics[camera].min_zoom.value}, zoom time: {self.zoom_time[camera]}"
                 )

         else:
             self.ptz_metrics[camera].max_zoom.value = 1
             self.ptz_metrics[camera].min_zoom.value = 0

-        self.onvif._move_to_preset(
+        await self.onvif._move_to_preset(
             camera,
             self.config.cameras[camera].onvif.autotracking.return_preset.lower(),
         )
@@ -437,18 +484,18 @@ class PtzAutoTracker:

         # Wait until the camera finishes moving
         while not self.ptz_metrics[camera].motor_stopped.is_set():
-            self.onvif.get_camera_status(camera)
+            await self.onvif.get_camera_status(camera)

         for step in range(num_steps):
             pan = step_sizes[step]
             tilt = step_sizes[step]

             start_time = time.time()
-            self.onvif._move_relative(camera, pan, tilt, 0, 1)
+            await self.onvif._move_relative(camera, pan, tilt, 0, 1)

             # Wait until the camera finishes moving
             while not self.ptz_metrics[camera].motor_stopped.is_set():
-                self.onvif.get_camera_status(camera)
+                await self.onvif.get_camera_status(camera)
             stop_time = time.time()

             self.move_metrics[camera].append(
@@ -460,7 +507,7 @@ class PtzAutoTracker:
                 }
             )

-            self.onvif._move_to_preset(
+            await self.onvif._move_to_preset(
                 camera,
                 self.config.cameras[camera].onvif.autotracking.return_preset.lower(),
             )
@@ -469,7 +516,7 @@ class PtzAutoTracker:

             # Wait until the camera finishes moving
             while not self.ptz_metrics[camera].motor_stopped.is_set():
-                self.onvif.get_camera_status(camera)
+                await self.onvif.get_camera_status(camera)

             logger.info(
                 f"Calibration for {camera} in progress: {round((step / num_steps) * 100)}% complete"
@@ -537,6 +584,7 @@ class PtzAutoTracker:
                 self.ptz_metrics[camera].max_zoom.value,
                 self.intercept[camera],
                 *self.move_coefficients[camera],
+                self.zoom_time[camera],
             ]
         )

@@ -665,18 +713,17 @@ class PtzAutoTracker:
             centroid_distance < self.tracked_object_metrics[camera]["distance"]
         )

-    def _process_move_queue(self, camera):
-        camera_config = self.config.cameras[camera]
-        camera_config.frame_shape[1]
-        camera_config.frame_shape[0]
+    async def _process_move_queue(self, camera):
+        move_queue = self.move_queues[camera]

         while not self.stop_event.is_set():
             try:
-                move_data = self.move_queues[camera].get(True, 0.1)
-            except queue.Empty:
+                # Asynchronously wait for move data with a timeout
+                move_data = await asyncio.wait_for(move_queue.get(), timeout=0.1)
+            except asyncio.TimeoutError:
                 continue

-            with self.move_queue_locks[camera]:
+            async with self.move_queue_locks[camera]:
                 frame_time, pan, tilt, zoom = move_data

                 # if we're receiving move requests during a PTZ move, ignore them
@@ -685,8 +732,6 @@ class PtzAutoTracker:
                     self.ptz_metrics[camera].start_time.value,
                     self.ptz_metrics[camera].stop_time.value,
                 ):
-                    # instead of dequeueing this might be a good place to preemptively move based
-                    # on an estimate - for fast moving objects, etc.
                     logger.debug(
                         f"{camera}: Move queue: PTZ moving, dequeueing move request - frame time: {frame_time}, final pan: {pan}, final tilt: {tilt}, final zoom: {zoom}"
                     )
@@ -697,25 +742,24 @@ class PtzAutoTracker:
                     self.config.cameras[camera].onvif.autotracking.zooming
                     == ZoomingModeEnum.relative
                 ):
-                    self.onvif._move_relative(camera, pan, tilt, zoom, 1)
-
+                    await self.onvif._move_relative(camera, pan, tilt, zoom, 1)
                 else:
                     if pan != 0 or tilt != 0:
-                        self.onvif._move_relative(camera, pan, tilt, 0, 1)
+                        await self.onvif._move_relative(camera, pan, tilt, 0, 1)

                         # Wait until the camera finishes moving
                         while not self.ptz_metrics[camera].motor_stopped.is_set():
-                            self.onvif.get_camera_status(camera)
+                            await self.onvif.get_camera_status(camera)

                     if (
                         zoom > 0
                         and self.ptz_metrics[camera].zoom_level.value != zoom
                     ):
-                        self.onvif._zoom_absolute(camera, zoom, 1)
+                        await self.onvif._zoom_absolute(camera, zoom, 1)

                         # Wait until the camera finishes moving
                         while not self.ptz_metrics[camera].motor_stopped.is_set():
-                            self.onvif.get_camera_status(camera)
+                            await self.onvif.get_camera_status(camera)

                 if self.config.cameras[camera].onvif.autotracking.movement_weights:
                     logger.debug(
@@ -752,6 +796,10 @@ class PtzAutoTracker:
                 # calculate new coefficients if we have enough data
                 self._calculate_move_coefficients(camera)

+        # Clean up the queue on exit
+        while not move_queue.empty():
+            await move_queue.get()
+
     def _enqueue_move(self, camera, frame_time, pan, tilt, zoom):
         def split_value(value, suppress_diff=True):
             clipped = np.clip(value, -1, 1)
@@ -780,7 +828,9 @@ class PtzAutoTracker:
             f"{camera}: Enqueue movement for frame time: {frame_time} pan: {pan}, tilt: {tilt}, zoom: {zoom}"
         )
         move_data = (frame_time, pan, tilt, zoom)
-        self.move_queues[camera].put(move_data)
+        self.onvif.loop.call_soon_threadsafe(
+            self.move_queues[camera].put_nowait, move_data
+        )

         # reset values to not split up large movements
         pan = 0
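
`_enqueue_move` can no longer call `put` directly because `asyncio.Queue` is not thread-safe; `call_soon_threadsafe` marshals the `put_nowait` onto the event loop that owns the queue. A minimal sketch:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

move_queue: "asyncio.Queue[tuple]" = asyncio.Queue()

# From any other thread, enqueue via the loop rather than calling put_nowait directly.
loop.call_soon_threadsafe(move_queue.put_nowait, (0.0, 0.1, -0.05, 0))
```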
@ -1061,6 +1111,7 @@ class PtzAutoTracker:
|
||||
|
||||
average_velocity = np.zeros((4,))
|
||||
predicted_box = obj.obj_data["box"]
|
||||
zoom_predicted_box = obj.obj_data["box"]
|
||||
|
||||
centroid_x = obj.obj_data["centroid"][0]
|
||||
centroid_y = obj.obj_data["centroid"][1]
|
||||
@ -1069,20 +1120,20 @@ class PtzAutoTracker:
|
||||
pan = ((centroid_x / camera_width) - 0.5) * 2
|
||||
tilt = (0.5 - (centroid_y / camera_height)) * 2
|
||||
|
||||
_, average_velocity = (
|
||||
self._get_valid_velocity(camera, obj)
|
||||
if "velocity" not in self.tracked_object_metrics[camera]
|
||||
else (
|
||||
self.tracked_object_metrics[camera]["valid_velocity"],
|
||||
self.tracked_object_metrics[camera]["velocity"],
|
||||
)
|
||||
)
|
||||
|
||||
if (
|
||||
camera_config.onvif.autotracking.movement_weights
|
||||
): # use estimates if we have available coefficients
|
||||
predicted_movement_time = self._predict_movement_time(camera, pan, tilt)
|
||||
|
||||
_, average_velocity = (
|
||||
self._get_valid_velocity(camera, obj)
|
||||
if "velocity" not in self.tracked_object_metrics[camera]
|
||||
else (
|
||||
self.tracked_object_metrics[camera]["valid_velocity"],
|
||||
self.tracked_object_metrics[camera]["velocity"],
|
||||
)
|
||||
)
|
||||
|
||||
if np.any(average_velocity):
|
||||
# this box could exceed the frame boundaries if velocity is high
|
||||
# but we'll handle that in _enqueue_move() as two separate moves
|
||||
@ -1111,6 +1162,34 @@ class PtzAutoTracker:
|
||||
camera, obj, predicted_box, predicted_movement_time, debug_zoom=True
|
||||
)
|
||||
|
||||
if (
|
||||
camera_config.onvif.autotracking.movement_weights
|
||||
and camera_config.onvif.autotracking.zooming == ZoomingModeEnum.relative
|
||||
and zoom != 0
|
||||
):
|
||||
zoom_predicted_movement_time = 0
|
||||
|
||||
if np.any(average_velocity):
|
||||
zoom_predicted_movement_time = abs(zoom) * self.zoom_time[camera]
|
||||
|
||||
zoom_predicted_box = (
|
||||
predicted_box
|
||||
+ camera_fps * zoom_predicted_movement_time * average_velocity
|
||||
)
|
||||
|
||||
zoom_predicted_box = np.round(zoom_predicted_box).astype(int)
|
||||
|
||||
centroid_x = round((zoom_predicted_box[0] + zoom_predicted_box[2]) / 2)
|
||||
centroid_y = round((zoom_predicted_box[1] + zoom_predicted_box[3]) / 2)
|
||||
|
||||
# recalculate pan and tilt with new centroid
|
||||
pan = ((centroid_x / camera_width) - 0.5) * 2
|
||||
tilt = (0.5 - (centroid_y / camera_height)) * 2
|
||||
|
||||
logger.debug(
|
||||
f"{camera}: Zoom amount: {zoom}, zoom predicted time: {zoom_predicted_movement_time}, zoom predicted box: {tuple(zoom_predicted_box)}"
|
||||
)
|
||||
|
||||
self._enqueue_move(camera, obj.obj_data["frame_time"], pan, tilt, zoom)
|
||||
|
||||
def _autotrack_move_zoom_only(self, camera, obj):
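The hunk above combines two pieces of arithmetic: the predicted box is shifted by `camera_fps * movement_time * velocity` to account for where the object will be once the lens finishes zooming, and the shifted centroid is mapped into the ONVIF-style [-1, 1] pan/tilt space. A worked sketch with made-up numbers (the frame size, velocity, and zoom timing below are illustrative, not Frigate defaults):

```python
import numpy as np

camera_width, camera_height, camera_fps = 1920, 1080, 10

# box as (xmin, ymin, xmax, ymax); velocity in pixels per frame per corner
predicted_box = np.array([800, 400, 1000, 800])
average_velocity = np.array([12, 0, 12, 0])  # moving right ~12 px/frame

zoom, zoom_time = 0.4, 1.5  # requested zoom and seconds per full zoom sweep
zoom_predicted_movement_time = abs(zoom) * zoom_time  # 0.6 s

# shift the box by the distance covered while the lens is still zooming:
# px/frame * (frames/s * s) = px
zoom_predicted_box = np.round(
    predicted_box + camera_fps * zoom_predicted_movement_time * average_velocity
).astype(int)  # -> [872 400 1072 800]

centroid_x = round((zoom_predicted_box[0] + zoom_predicted_box[2]) / 2)  # 972
centroid_y = round((zoom_predicted_box[1] + zoom_predicted_box[3]) / 2)  # 600

# map the centroid into the normalized [-1, 1] pan/tilt space
pan = ((centroid_x / camera_width) - 0.5) * 2    # ~0.0125 (slightly right)
tilt = (0.5 - (centroid_y / camera_height)) * 2  # ~-0.111 (tilt down a bit)
print(pan, tilt)
```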
@@ -1242,7 +1321,7 @@ class PtzAutoTracker:
return

# this is a brand new object that's on our camera, has our label, entered the zone,
# is not a false positive, and is not initially motionless
# is not a false positive, and is active
if (
# new object
self.tracked_object[camera] is None
@@ -1252,7 +1331,7 @@ class PtzAutoTracker:
and not obj.previous["false_positive"]
and not obj.false_positive
and not self.tracked_object_history[camera]
and obj.obj_data["motionless_count"] == 0
and obj.active
):
logger.debug(
f"{camera}: New object: {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
@@ -1347,7 +1426,7 @@ class PtzAutoTracker:
** (1 / self.zoom_factor[camera])
}

def camera_maintenance(self, camera):
async def camera_maintenance(self, camera):
# bail and don't check anything if we're calibrating or tracking an object
if (
not self.autotracker_init[camera]
@@ -1364,7 +1443,7 @@ class PtzAutoTracker:
self._autotracker_setup(self.config.cameras[camera], camera)
# regularly update camera status
if not self.ptz_metrics[camera].motor_stopped.is_set():
self.onvif.get_camera_status(camera)
await self.onvif.get_camera_status(camera)

# return to preset if tracking is over
if (
@@ -1382,22 +1461,18 @@ class PtzAutoTracker:
self.tracked_object[camera] = None
self.tracked_object_history[camera].clear()

# empty move queue
while not self.move_queues[camera].empty():
self.move_queues[camera].get()

self.ptz_metrics[camera].motor_stopped.wait()
logger.debug(
f"{camera}: Time is {self.ptz_metrics[camera].frame_time.value}, returning to preset: {autotracker_config.return_preset}"
)
self.onvif._move_to_preset(
await self.onvif._move_to_preset(
camera,
autotracker_config.return_preset.lower(),
)

# update stored zoom level from preset
if not self.ptz_metrics[camera].motor_stopped.is_set():
self.onvif.get_camera_status(camera)
await self.onvif.get_camera_status(camera)

self.ptz_metrics[camera].tracking_active.clear()
self.dispatcher.publish(

@@ -2,6 +2,7 @@

import asyncio
import logging
import threading
import time
from enum import Enum
from importlib.util import find_spec
@@ -39,27 +40,56 @@ class OnvifController:
def __init__(
self, config: FrigateConfig, ptz_metrics: dict[str, PTZMetrics]
) -> None:
self.cams: dict[str, ONVIFCamera] = {}
self.cams: dict[str, dict] = {}
self.failed_cams: dict[str, dict] = {}
self.max_retries = 5
self.reset_timeout = 900 # 15 minutes

self.config = config
self.ptz_metrics = ptz_metrics

# Create a dedicated event loop and run it in a separate thread
self.loop = asyncio.new_event_loop()
self.loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
self.loop_thread.start()

self.camera_configs = {}
for cam_name, cam in config.cameras.items():
if not cam.enabled:
continue

if cam.onvif.host:
result = self._create_onvif_camera(cam_name, cam)
if result:
self.cams[cam_name] = result
self.camera_configs[cam_name] = cam

def _create_onvif_camera(self, cam_name: str, cam) -> dict | None:
"""Create an ONVIF camera instance and handle failures."""
asyncio.run_coroutine_threadsafe(self._init_cameras(), self.loop)

def _run_event_loop(self) -> None:
"""Run the event loop in a separate thread."""
asyncio.set_event_loop(self.loop)
try:
return {
self.loop.run_forever()
except Exception as e:
logger.error(f"Onvif event loop terminated unexpectedly: {e}")

async def _init_cameras(self) -> None:
"""Initialize all configured cameras."""
for cam_name in self.camera_configs:
await self._init_single_camera(cam_name)

async def _init_single_camera(self, cam_name: str) -> bool:
"""Initialize a single camera by name.

Args:
cam_name: The name of the camera to initialize

Returns:
bool: True if initialization succeeded, False otherwise
"""
if cam_name not in self.camera_configs:
logger.error(f"No configuration found for camera {cam_name}")
return False

cam = self.camera_configs[cam_name]
try:
self.cams[cam_name] = {
"onvif": ONVIFCamera(
cam.onvif.host,
cam.onvif.port,
@@ -74,7 +104,8 @@ class OnvifController:
"features": [],
"presets": {},
}
except ONVIFError as e:
return True
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.error(f"Failed to create ONVIF camera instance for {cam_name}: {e}")
# track initial failures
self.failed_cams[cam_name] = {
@@ -82,7 +113,7 @@ class OnvifController:
"last_error": str(e),
"last_attempt": time.time(),
}
return None
return False

async def _init_onvif(self, camera_name: str) -> bool:
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
@@ -100,7 +131,7 @@ class OnvifController:
# this will fire an exception if camera is not a ptz
capabilities = onvif.get_definition("ptz")
logger.debug(f"Onvif capabilities for {camera_name}: {capabilities}")
except (ONVIFError, Fault, TransportError) as e:
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.error(
f"Unable to get Onvif capabilities for camera: {camera_name}: {e}"
)
@@ -109,7 +140,7 @@ class OnvifController:
try:
profiles = await media.GetProfiles()
logger.debug(f"Onvif profiles for {camera_name}: {profiles}")
except (ONVIFError, Fault, TransportError) as e:
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.error(
f"Unable to get Onvif media profiles for camera: {camera_name}: {e}"
)
@@ -240,12 +271,12 @@ class OnvifController:
logger.debug(
f"{camera_name}: Relative move request after deleting zoom: {move_request}"
)
except Exception:
except Exception as e:
self.config.cameras[
camera_name
].onvif.autotracking.zooming = ZoomingModeEnum.disabled
logger.warning(
f"Disabling autotracking zooming for {camera_name}: Relative zoom not supported"
f"Disabling autotracking zooming for {camera_name}: Relative zoom not supported. Exception: {e}"
)

if move_request.Speed is None:
@@ -263,7 +294,7 @@ class OnvifController:
# setup existing presets
try:
presets: list[dict] = await ptz.GetPresets({"ProfileToken": profile.token})
except ONVIFError as e:
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.warning(f"Unable to get presets from camera: {camera_name}: {e}")
presets = []

@@ -295,7 +326,7 @@ class OnvifController:
self.cams[camera_name]["relative_zoom_range"] = (
ptz_config.Spaces.RelativeZoomTranslationSpace[0]
)
except Exception:
except Exception as e:
if (
self.config.cameras[camera_name].onvif.autotracking.zooming
== ZoomingModeEnum.relative
@@ -304,7 +335,7 @@ class OnvifController:
camera_name
].onvif.autotracking.zooming = ZoomingModeEnum.disabled
logger.warning(
f"Disabling autotracking zooming for {camera_name}: Relative zoom not supported"
f"Disabling autotracking zooming for {camera_name}: Relative zoom not supported. Exception: {e}"
)

if configs.DefaultAbsoluteZoomPositionSpace:
@@ -319,13 +350,13 @@ class OnvifController:
ptz_config.Spaces.AbsoluteZoomPositionSpace[0]
)
self.cams[camera_name]["zoom_limits"] = configs.ZoomLimits
except Exception:
except Exception as e:
if self.config.cameras[camera_name].onvif.autotracking.zooming:
self.config.cameras[
camera_name
].onvif.autotracking.zooming = ZoomingModeEnum.disabled
logger.warning(
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported"
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported. Exception: {e}"
)

# set relative pan/tilt space for autotracker
@@ -344,25 +375,23 @@ class OnvifController:
self.cams[camera_name]["init"] = True
return True

def _stop(self, camera_name: str) -> None:
async def _stop(self, camera_name: str) -> None:
move_request = self.cams[camera_name]["move_request"]
asyncio.run(
self.cams[camera_name]["ptz"].Stop(
{
"ProfileToken": move_request.ProfileToken,
"PanTilt": True,
"Zoom": True,
}
)
await self.cams[camera_name]["ptz"].Stop(
{
"ProfileToken": move_request.ProfileToken,
"PanTilt": True,
"Zoom": True,
}
)
self.cams[camera_name]["active"] = False

def _move(self, camera_name: str, command: OnvifCommandEnum) -> None:
async def _move(self, camera_name: str, command: OnvifCommandEnum) -> None:
if self.cams[camera_name]["active"]:
logger.warning(
f"{camera_name} is already performing an action, stopping..."
)
self._stop(camera_name)
await self._stop(camera_name)

if "pt" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF pan/tilt movement.")
@@ -391,11 +420,11 @@ class OnvifController:
}

try:
asyncio.run(self.cams[camera_name]["ptz"].ContinuousMove(move_request))
except ONVIFError as e:
await self.cams[camera_name]["ptz"].ContinuousMove(move_request)
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.warning(f"Onvif sending move request to {camera_name} failed: {e}")

def _move_relative(self, camera_name: str, pan, tilt, zoom, speed) -> None:
async def _move_relative(self, camera_name: str, pan, tilt, zoom, speed) -> None:
if "pt-r-fov" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF RelativeMove (FOV).")
return
@@ -464,7 +493,7 @@ class OnvifController:
}
move_request.Translation.Zoom.x = zoom

asyncio.run(self.cams[camera_name]["ptz"].RelativeMove(move_request))
await self.cams[camera_name]["ptz"].RelativeMove(move_request)

# reset after the move request
move_request.Translation.PanTilt.x = 0
@@ -479,7 +508,7 @@ class OnvifController:

self.cams[camera_name]["active"] = False

def _move_to_preset(self, camera_name: str, preset: str) -> None:
async def _move_to_preset(self, camera_name: str, preset: str) -> None:
if preset not in self.cams[camera_name]["presets"]:
logger.error(f"{preset} is not a valid preset for {camera_name}")
return
@@ -489,23 +518,22 @@ class OnvifController:
self.ptz_metrics[camera_name].stop_time.value = 0
move_request = self.cams[camera_name]["move_request"]
preset_token = self.cams[camera_name]["presets"][preset]
asyncio.run(
self.cams[camera_name]["ptz"].GotoPreset(
{
"ProfileToken": move_request.ProfileToken,
"PresetToken": preset_token,
}
)

await self.cams[camera_name]["ptz"].GotoPreset(
{
"ProfileToken": move_request.ProfileToken,
"PresetToken": preset_token,
}
)

self.cams[camera_name]["active"] = False

def _zoom(self, camera_name: str, command: OnvifCommandEnum) -> None:
async def _zoom(self, camera_name: str, command: OnvifCommandEnum) -> None:
if self.cams[camera_name]["active"]:
logger.warning(
f"{camera_name} is already performing an action, stopping..."
)
self._stop(camera_name)
await self._stop(camera_name)

if "zoom" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF zooming.")
@@ -519,9 +547,9 @@ class OnvifController:
elif command == OnvifCommandEnum.zoom_out:
move_request.Velocity = {"Zoom": {"x": -0.5}}

asyncio.run(self.cams[camera_name]["ptz"].ContinuousMove(move_request))
await self.cams[camera_name]["ptz"].ContinuousMove(move_request)

def _zoom_absolute(self, camera_name: str, zoom, speed) -> None:
async def _zoom_absolute(self, camera_name: str, zoom, speed) -> None:
if "zoom-a" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF AbsoluteMove zooming.")
return
@@ -560,19 +588,20 @@ class OnvifController:

logger.debug(f"{camera_name}: Absolute zoom: {zoom}")

asyncio.run(self.cams[camera_name]["ptz"].AbsoluteMove(move_request))
await self.cams[camera_name]["ptz"].AbsoluteMove(move_request)

self.cams[camera_name]["active"] = False

def handle_command(
async def handle_command_async(
self, camera_name: str, command: OnvifCommandEnum, param: str = ""
) -> None:
"""Handle ONVIF commands asynchronously"""
if camera_name not in self.cams.keys():
logger.error(f"ONVIF is not configured for {camera_name}")
return

if not self.cams[camera_name]["init"]:
if not asyncio.run(self._init_onvif(camera_name)):
if not await self._init_onvif(camera_name):
return

try:
@@ -580,22 +609,43 @@ class OnvifController:
# already init
return
elif command == OnvifCommandEnum.stop:
self._stop(camera_name)
await self._stop(camera_name)
elif command == OnvifCommandEnum.preset:
self._move_to_preset(camera_name, param)
await self._move_to_preset(camera_name, param)
elif command == OnvifCommandEnum.move_relative:
_, pan, tilt = param.split("_")
self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
await self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
elif (
command == OnvifCommandEnum.zoom_in
or command == OnvifCommandEnum.zoom_out
):
self._zoom(camera_name, command)
await self._zoom(camera_name, command)
else:
self._move(camera_name, command)
except ONVIFError as e:
await self._move(camera_name, command)
except (Fault, ONVIFError, TransportError, Exception) as e:
logger.error(f"Unable to handle onvif command: {e}")

def handle_command(
self, camera_name: str, command: OnvifCommandEnum, param: str = ""
) -> None:
"""
Handle ONVIF commands by scheduling them in the event loop.
This is the synchronous interface that schedules async work.
"""
future = asyncio.run_coroutine_threadsafe(
self.handle_command_async(camera_name, command, param), self.loop
)

try:
# Wait with a timeout to prevent blocking indefinitely
future.result(timeout=10)
except asyncio.TimeoutError:
logger.error(f"Command {command} timed out for camera {camera_name}")
except Exception as e:
logger.error(
f"Error executing command {command} for camera {camera_name}: {e}"
)
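`handle_command` is the synchronous boundary: it schedules the coroutine on the controller's loop and blocks the calling thread for at most 10 seconds. A self-contained sketch of that boundary (the coroutine body is a stand-in, not a real ONVIF call). Note that on Python 3.11+, `asyncio.TimeoutError` and `concurrent.futures.TimeoutError` are both aliases of the builtin `TimeoutError`, which is why the `except asyncio.TimeoutError` above catches the future's timeout:

```python
import asyncio
import concurrent.futures
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def slow_ptz_call() -> str:
    await asyncio.sleep(0.1)  # stands in for an ONVIF SOAP round trip
    return "ok"

# Schedule on the private loop, then block the caller with a cap,
# mirroring handle_command's future.result(timeout=10)
future = asyncio.run_coroutine_threadsafe(slow_ptz_call(), loop)
try:
    print(future.result(timeout=10))  # "ok"
except concurrent.futures.TimeoutError:
    print("command timed out")
finally:
    loop.call_soon_threadsafe(loop.stop)
```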

async def get_camera_info(self, camera_name: str) -> dict[str, any]:
"""
Get ptz capabilities and presets, attempting to reconnect if ONVIF is configured
@@ -609,26 +659,23 @@ class OnvifController:
)
return {}

if camera_name not in self.cams and (
if camera_name not in self.cams.keys() and (
camera_name not in self.config.cameras
or not self.config.cameras[camera_name].onvif.host
):
logger.debug(f"ONVIF is not configured for {camera_name}")
return {}

if camera_name in self.cams and self.cams[camera_name]["init"]:
if camera_name in self.cams.keys() and self.cams[camera_name]["init"]:
return {
"name": camera_name,
"features": self.cams[camera_name]["features"],
"presets": list(self.cams[camera_name]["presets"].keys()),
}

if camera_name not in self.cams and camera_name in self.config.cameras:
cam = self.config.cameras[camera_name]
result = self._create_onvif_camera(camera_name, cam)
if result:
self.cams[camera_name] = result
else:
if camera_name not in self.cams.keys() and camera_name in self.config.cameras:
success = await self._init_single_camera(camera_name)
if not success:
return {}

# Reset retry count after timeout
@@ -681,23 +728,21 @@ class OnvifController:
logger.debug(f"Could not initialize ONVIF for {camera_name}")
return {}

def get_service_capabilities(self, camera_name: str) -> None:
async def get_service_capabilities(self, camera_name: str) -> None:
if camera_name not in self.cams.keys():
logger.error(f"ONVIF is not configured for {camera_name}")
return {}

if not self.cams[camera_name]["init"]:
asyncio.run(self._init_onvif(camera_name))
await self._init_onvif(camera_name)

service_capabilities_request = self.cams[camera_name][
"service_capabilities_request"
]
try:
service_capabilities = asyncio.run(
self.cams[camera_name]["ptz"].GetServiceCapabilities(
service_capabilities_request
)
)
service_capabilities = await self.cams[camera_name][
"ptz"
].GetServiceCapabilities(service_capabilities_request)

logger.debug(
f"Onvif service capabilities for {camera_name}: {service_capabilities}"
@@ -705,25 +750,24 @@ class OnvifController:

# MoveStatus is required for autotracking - should return "true" if supported
return find_by_key(vars(service_capabilities), "MoveStatus")
except Exception:
except Exception as e:
logger.warning(
f"Camera {camera_name} does not support the ONVIF GetServiceCapabilities method. Autotracking will not function correctly and must be disabled in your config."
f"Camera {camera_name} does not support the ONVIF GetServiceCapabilities method. Autotracking will not function correctly and must be disabled in your config. Exception: {e}"
)
return False

def get_camera_status(self, camera_name: str) -> None:
async def get_camera_status(self, camera_name: str) -> None:
if camera_name not in self.cams.keys():
logger.error(f"ONVIF is not configured for {camera_name}")
return {}
return

if not self.cams[camera_name]["init"]:
asyncio.run(self._init_onvif(camera_name))
if not await self._init_onvif(camera_name):
return

status_request = self.cams[camera_name]["status_request"]
try:
status = asyncio.run(
self.cams[camera_name]["ptz"].GetStatus(status_request)
)
status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
except Exception:
pass # We're unsupported, that'll be reported in the next check.

@@ -807,3 +851,22 @@ class OnvifController:
camera_name
].frame_time.value
logger.warning(f"Camera {camera_name} is still in ONVIF 'MOVING' status.")

def close(self) -> None:
"""Gracefully shut down the ONVIF controller."""
if not hasattr(self, "loop") or self.loop.is_closed():
logger.debug("ONVIF controller already closed")
return

logger.info("Exiting ONVIF controller...")

def stop_and_cleanup():
try:
self.loop.stop()
except Exception as e:
logger.error(f"Error during loop cleanup: {e}")

# Schedule stop and cleanup in the loop thread
self.loop.call_soon_threadsafe(stop_and_cleanup)

self.loop_thread.join()
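`close()` has to stop the loop from the loop's own thread, because `loop.stop()` is not thread-safe; joining afterwards guarantees `run_forever` has returned. A minimal sketch of the stop-and-join handshake; the final `loop.close()` here is an extra cleanup step under that assumption, not part of the patch:

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
loop_thread = threading.Thread(target=loop.run_forever, daemon=True)
loop_thread.start()

def close() -> None:
    if loop.is_closed():
        return
    # loop.stop() is not thread-safe; hand it to the loop thread
    loop.call_soon_threadsafe(loop.stop)
    loop_thread.join()  # run_forever returns once stop is processed
    loop.close()        # safe now that no thread is running the loop

close()
```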

@@ -27,6 +27,7 @@ from frigate.config import FrigateConfig, RetainModeEnum
from frigate.const import (
CACHE_DIR,
CACHE_SEGMENT_FORMAT,
FAST_QUEUE_TIMEOUT,
INSERT_MANY_RECORDINGS,
MAX_SEGMENT_DURATION,
MAX_SEGMENTS_IN_CACHE,
@@ -38,8 +39,6 @@ from frigate.util.services import get_video_properties

logger = logging.getLogger(__name__)

QUEUE_READ_TIMEOUT = 0.00001 # seconds


class SegmentInfo:
def __init__(
@@ -551,7 +550,7 @@ class RecordingMaintainer(threading.Thread):
# empty the object recordings info queue
while True:
(topic, data) = self.detection_subscriber.check_for_update(
timeout=QUEUE_READ_TIMEOUT
timeout=FAST_QUEUE_TIMEOUT
)

if not topic:

@@ -482,6 +482,10 @@ class ReviewSegmentMaintainer(threading.Thread):
camera_name = updated_record_topic.rpartition("/")[-1]
self.config.cameras[camera_name].record = updated_record_config

# immediately end segment
if not updated_record_config.enabled:
self.end_segment(camera_name)

if updated_review_topic:
camera_name = updated_review_topic.rpartition("/")[-1]
self.config.cameras[camera_name].review = updated_review_config
@@ -492,6 +496,10 @@ class ReviewSegmentMaintainer(threading.Thread):
camera_name
].enabled = updated_enabled_config.enabled

# immediately end segment as we may not get another update
if not updated_enabled_config.enabled:
self.end_segment(camera_name)

(topic, data) = self.detection_subscriber.check_for_update(timeout=1)

if not topic:
@@ -524,16 +532,14 @@ class ReviewSegmentMaintainer(threading.Thread):
if camera not in self.indefinite_events:
self.indefinite_events[camera] = {}

current_segment = self.active_review_segments.get(camera)

if (
not self.config.cameras[camera].enabled
or not self.config.cameras[camera].record.enabled
):
if current_segment:
self.end_segment(camera)
continue

current_segment = self.active_review_segments.get(camera)

# Check if the current segment should be processed based on enabled settings
if current_segment:
if (

@@ -15,7 +15,7 @@ from frigate.camera import CameraMetrics
from frigate.config import FrigateConfig
from frigate.const import CACHE_DIR, CLIPS_DIR, RECORD_DIR
from frigate.data_processing.types import DataProcessorMetrics
from frigate.object_detection import ObjectDetectProcess
from frigate.object_detection.base import ObjectDetectProcess
from frigate.types import StatsTrackingTypes
from frigate.util.services import (
get_amd_gpu_stats,
@@ -24,6 +24,8 @@ from frigate.util.services import (
get_intel_gpu_stats,
get_jetson_stats,
get_nvidia_gpu_stats,
get_rockchip_gpu_stats,
get_rockchip_npu_stats,
is_vaapi_amd_driver,
)
from frigate.version import VERSION
@@ -109,6 +111,7 @@ def get_processing_stats(
stats_tasks = [
asyncio.create_task(set_gpu_stats(config, stats, hwaccel_errors)),
asyncio.create_task(set_cpu_stats(stats)),
asyncio.create_task(set_npu_usages(config, stats)),
]

if config.telemetry.stats.network_bandwidth:
@@ -230,6 +233,11 @@ async def set_gpu_stats(
else:
stats["intel-vaapi"] = {"gpu": "", "mem": ""}
hwaccel_errors.append(args)
elif "preset-rk" in args:
rga_usage = get_rockchip_gpu_stats()

if rga_usage:
stats["rockchip"] = rga_usage
elif "v4l2m2m" in args or "rpi" in args:
# RPi v4l2m2m is currently not able to get usage stats
stats["rpi-v4l2m2m"] = {"gpu": "", "mem": ""}
@@ -238,6 +246,19 @@
all_stats["gpu_usages"] = stats


async def set_npu_usages(config: FrigateConfig, all_stats: dict[str, Any]) -> None:
stats: dict[str, dict] = {}

for detector in config.detectors.values():
if detector.type == "rknn":
# Rockchip NPU usage
rk_usage = get_rockchip_npu_stats()
stats["rockchip"] = rk_usage

if stats:
all_stats["npu_usages"] = stats


def stats_snapshot(
config: FrigateConfig, stats_tracking: StatsTrackingTypes, hwaccel_errors: list[str]
) -> dict[str, Any]:
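`set_npu_usages` follows the same convention as the GPU path: the `npu_usages` key is only attached when at least one detector actually reported, so consumers can distinguish "no NPU present" from "NPU idle". A tiny standalone sketch of that guard (the stats shape and values below are illustrative, not Frigate's real output):

```python
from typing import Any

def set_npu_usages(detector_types: list[str], all_stats: dict[str, Any]) -> None:
    stats: dict[str, dict] = {}
    for det_type in detector_types:
        if det_type == "rknn":
            # stand-in for a real call like get_rockchip_npu_stats()
            stats["rockchip"] = {"npu": "12%"}
    if stats:  # only add the key when something actually reported
        all_stats["npu_usages"] = stats

all_stats: dict[str, Any] = {}
set_npu_usages(["cpu"], all_stats)
print("npu_usages" in all_stats)  # False
set_npu_usages(["rknn"], all_stats)
print(all_stats)  # {'npu_usages': {'rockchip': {'npu': '12%'}}}
```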

@@ -280,82 +280,6 @@ class TestHttpReview(BaseTestHttp):
}
self.assertEqual(response_json, expected_response)

def test_get_review_summary_multiple_days_edge_cases(self):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
twenty_days_ago = datetime.today() - timedelta(days=20)
one_month_ago = datetime.today() - timedelta(days=30)
one_month_ago_ts = one_month_ago.timestamp()

with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now.timestamp())
super().insert_mock_review_segment(
"123457.random", five_days_ago.timestamp()
)
super().insert_mock_review_segment(
"123458.random",
twenty_days_ago.timestamp(),
None,
SeverityEnum.detection,
)
# One month ago plus 5 seconds fits within the condition (review.start_time > month_ago). Assuming that the endpoint does not take more than 5 seconds to be invoked
super().insert_mock_review_segment(
"123459.random",
one_month_ago_ts + 5,
None,
SeverityEnum.detection,
)
# This won't appear in the output since it's not within last month start_time clause (review.start_time > month_ago)
super().insert_mock_review_segment("123450.random", one_month_ago_ts)
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = now.strftime("%Y-%m-%d")
# e.g. '2024-11-19'
five_days_ago_formatted = five_days_ago.strftime("%Y-%m-%d")
# e.g. '2024-11-04'
twenty_days_ago_formatted = twenty_days_ago.strftime("%Y-%m-%d")
# e.g. '2024-10-24'
one_month_ago_formatted = one_month_ago.strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
five_days_ago_formatted: {
"day": five_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
twenty_days_ago_formatted: {
"day": twenty_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 0,
"total_detection": 1,
},
one_month_ago_formatted: {
"day": one_month_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 0,
"total_detection": 1,
},
}
self.assertEqual(response_json, expected_response)

def test_get_review_summary_multiple_in_same_day(self):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)

@@ -1491,7 +1491,9 @@ class TestConfig(unittest.TestCase):
"fps": 5,
},
"onvif": {
"autotracking": {"movement_weights": "0, 1, 1.23, 2.34, 0.50"}
"autotracking": {
"movement_weights": "0, 1, 1.23, 2.34, 0.50, 1"
}
},
}
},
@@ -1504,6 +1506,7 @@ class TestConfig(unittest.TestCase):
"1.23",
"2.34",
"0.5",
"1.0",
]

def test_fails_invalid_movement_weights(self):

@@ -5,7 +5,7 @@ import numpy as np
from pydantic import parse_obj_as

import frigate.detectors as detectors
import frigate.object_detection
import frigate.object_detection.base
from frigate.config import DetectorConfig, ModelConfig
from frigate.detectors import DetectorTypeEnum
from frigate.detectors.detector_config import InputTensorEnum
@@ -23,7 +23,7 @@ class TestLocalObjectDetector(unittest.TestCase):
DetectorConfig, ({"type": det_type, "model": {}})
)
test_cfg.model.path = "/test/modelpath"
test_obj = frigate.object_detection.LocalObjectDetector(
test_obj = frigate.object_detection.base.LocalObjectDetector(
detector_config=test_cfg
)

@@ -43,7 +43,7 @@ class TestLocalObjectDetector(unittest.TestCase):

TEST_DATA = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
TEST_DETECT_RESULT = np.ndarray([1, 2, 4, 8, 16, 32])
test_obj_detect = frigate.object_detection.LocalObjectDetector(
test_obj_detect = frigate.object_detection.base.LocalObjectDetector(
detector_config=parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
)

@@ -70,7 +70,7 @@ class TestLocalObjectDetector(unittest.TestCase):
test_cfg = parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
test_cfg.model.input_tensor = InputTensorEnum.nchw

test_obj_detect = frigate.object_detection.LocalObjectDetector(
test_obj_detect = frigate.object_detection.base.LocalObjectDetector(
detector_config=test_cfg
)

@@ -91,7 +91,7 @@ class TestLocalObjectDetector(unittest.TestCase):
"frigate.detectors.api_types",
{det_type: Mock() for det_type in DetectorTypeEnum},
)
@patch("frigate.object_detection.load_labels")
@patch("frigate.object_detection.base.load_labels")
def test_detect_given_tensor_input_should_return_lfiltered_detections(
self, mock_load_labels
):
@@ -118,7 +118,7 @@ class TestLocalObjectDetector(unittest.TestCase):

test_cfg = parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
test_cfg.model = ModelConfig()
test_obj_detect = frigate.object_detection.LocalObjectDetector(
test_obj_detect = frigate.object_detection.base.LocalObjectDetector(
detector_config=test_cfg,
labels=TEST_LABEL_FILE,
)

@@ -129,6 +129,11 @@ class NorfairTracker(ObjectTracker):
"distance_function": frigate_distance,
"distance_threshold": 2.5,
},
"license_plate": {
"filter_factory": OptimizedKalmanFilterFactory(R=2.5, Q=0.05),
"distance_function": frigate_distance,
"distance_threshold": 3.75,
},
}

# Define autotracking PTZ-specific configurations
@@ -273,17 +278,24 @@ class NorfairTracker(ObjectTracker):
)
self.tracked_objects[id] = obj
self.disappeared[id] = 0
if obj_match:
boxes = [p.data["box"] for p in obj_match.past_detections]
else:
boxes = [obj["box"]]

xmins, ymins, xmaxs, ymaxs = zip(*boxes)

self.positions[id] = {
"xmins": [],
"ymins": [],
"xmaxs": [],
"ymaxs": [],
"xmins": list(xmins),
"ymins": list(ymins),
"xmaxs": list(xmaxs),
"ymaxs": list(ymaxs),
"xmin": 0,
"ymin": 0,
"xmax": self.detect_config.width,
"ymax": self.detect_config.height,
}
self.stationary_box_history[id] = []
self.stationary_box_history[id] = boxes
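Registration now seeds the position history from a matched track's past detections rather than starting empty, so the stationary check has data immediately. The key idiom is `zip(*boxes)`, which transposes a list of (xmin, ymin, xmax, ymax) tuples into per-coordinate columns; a quick standalone sketch with made-up boxes:

```python
# boxes as (xmin, ymin, xmax, ymax), e.g. gathered from past_detections
boxes = [(100, 200, 150, 260), (104, 198, 156, 262), (110, 201, 160, 266)]

# zip(*boxes) transposes rows of tuples into per-coordinate columns
xmins, ymins, xmaxs, ymaxs = zip(*boxes)

position = {
    "xmins": list(xmins),  # [100, 104, 110]
    "ymins": list(ymins),  # [200, 198, 201]
    "xmaxs": list(xmaxs),  # [150, 156, 160]
    "ymaxs": list(ymaxs),  # [260, 262, 266]
}
print(position)
```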

def deregister(self, id, track_id):
obj = self.tracked_objects[id]
@@ -369,9 +381,9 @@ class NorfairTracker(ObjectTracker):
}
return False

# if there are less than 10 entries for the position, add the bounding box
# if there are more than 5 and less than 10 entries for the position, add the bounding box
# and recompute the position box
if len(position["xmins"]) < 10:
if 5 <= len(position["xmins"]) < 10:
position["xmins"].append(xmin)
position["ymins"].append(ymin)
position["xmaxs"].append(xmax)
@@ -599,7 +611,10 @@ class NorfairTracker(ObjectTracker):

# print a table to the console with norfair tracked object info
if False:
self.print_objects_as_table(self.trackers["person"]["ptz"].tracked_objects)
if len(self.trackers["license_plate"]["static"].tracked_objects) > 0:
self.print_objects_as_table(
self.trackers["license_plate"]["static"].tracked_objects
)

# Get tracked objects from type-specific trackers
for object_trackers in self.trackers.values():
@@ -644,3 +659,20 @@ class NorfairTracker(ObjectTracker):
color=(255, 0, 0),
thickness=None,
)

if False:
# draw the current formatted time on the frame
from datetime import datetime

formatted_time = datetime.fromtimestamp(frame_time).strftime(
"%m/%d/%Y %I:%M:%S %p"
)

frame = Drawer.text(
frame,
formatted_time,
position=(10, 50),
size=1.5,
color=(255, 255, 255),
thickness=None,
)

@@ -28,7 +28,7 @@ from frigate.config import (
RecordConfig,
SnapshotsConfig,
)
from frigate.const import UPDATE_CAMERA_ACTIVITY
from frigate.const import FAST_QUEUE_TIMEOUT, UPDATE_CAMERA_ACTIVITY
from frigate.events.types import EventStateEnum, EventTypeEnum
from frigate.models import Event, Timeline
from frigate.track.tracked_object import TrackedObject
@@ -684,7 +684,9 @@ class TrackedObjectProcessor(threading.Thread):

# cleanup event finished queue
while not self.stop_event.is_set():
update = self.event_end_subscriber.check_for_update(timeout=0.01)
update = self.event_end_subscriber.check_for_update(
timeout=FAST_QUEUE_TIMEOUT
)

if not update:
break
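Both this cleanup loop and the recording maintainer earlier now drain their subscribers with the shared `FAST_QUEUE_TIMEOUT` constant instead of ad-hoc magic numbers. A minimal sketch of the drain-until-empty pattern over a plain queue (the constant's value here is illustrative; the real one lives in `frigate.const`):

```python
import queue

FAST_QUEUE_TIMEOUT = 0.1  # illustrative value

def drain(q: "queue.Queue[str]") -> list[str]:
    # Pull updates until a read times out, then stop without blocking further
    drained = []
    while True:
        try:
            drained.append(q.get(timeout=FAST_QUEUE_TIMEOUT))
        except queue.Empty:
            break
    return drained

q: "queue.Queue[str]" = queue.Queue()
for item in ("event_a", "event_b"):
    q.put(item)
print(drain(q))  # ['event_a', 'event_b']
```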

@@ -117,6 +117,7 @@ class TrackedObject:
def update(self, current_frame_time: float, obj_data, has_valid_frame: bool):
thumb_update = False
significant_change = False
path_update = False
autotracker_update = False
# if the object is not in the current frame, add a 0.0 to the score history
if obj_data["frame_time"] != current_frame_time:
@@ -143,24 +144,29 @@ class TrackedObject:
obj_data,
self.camera_config.frame_shape,
):
self.thumbnail_data = {
"frame_time": current_frame_time,
"box": obj_data["box"],
"area": obj_data["area"],
"region": obj_data["region"],
"score": obj_data["score"],
"attributes": obj_data["attributes"],
"current_estimated_speed": self.current_estimated_speed,
"velocity_angle": self.velocity_angle,
"path_data": self.path_data,
"recognized_license_plate": obj_data.get(
"recognized_license_plate"
),
"recognized_license_plate_score": obj_data.get(
"recognized_license_plate_score"
),
}
thumb_update = True
if obj_data["frame_time"] == current_frame_time:
self.thumbnail_data = {
"frame_time": obj_data["frame_time"],
"box": obj_data["box"],
"area": obj_data["area"],
"region": obj_data["region"],
"score": obj_data["score"],
"attributes": obj_data["attributes"],
"current_estimated_speed": self.current_estimated_speed,
"velocity_angle": self.velocity_angle,
"path_data": self.path_data,
"recognized_license_plate": obj_data.get(
"recognized_license_plate"
),
"recognized_license_plate_score": obj_data.get(
"recognized_license_plate_score"
),
}
thumb_update = True
else:
logger.debug(
f"{self.camera_config.name}: Object frame time {obj_data['frame_time']} is not equal to the current frame time {current_frame_time}, not updating thumbnail"
)

# check zones
current_zones = []
@@ -272,7 +278,7 @@ class TrackedObject:
self.attributes[attr["label"]] = attr["score"]

# populate the sub_label for object with highest scoring logo
if self.obj_data["label"] in ["car", "package", "person"]:
if self.obj_data["label"] in ["car", "motorcycle", "package", "person"]:
recognized_logos = {
k: self.attributes[k] for k in self.logos if k in self.attributes
}
@@ -324,19 +330,21 @@ class TrackedObject:

if not self.path_data:
self.path_data.append((bottom_center, obj_data["frame_time"]))
path_update = True
elif (
math.dist(self.path_data[-1][0], bottom_center) >= threshold
or len(self.path_data) == 1
):
# check Euclidean distance before appending
self.path_data.append((bottom_center, obj_data["frame_time"]))
path_update = True
logger.debug(
f"Point tracking: {obj_data['id']}, {bottom_center}, {obj_data['frame_time']}"
)

self.obj_data.update(obj_data)
self.current_zones = current_zones
return (thumb_update, significant_change, autotracker_update)
return (thumb_update, significant_change, path_update, autotracker_update)
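Path points are only appended once the object's bottom-center has moved at least `threshold` pixels from the last stored point, which keeps `path_data` sparse for slow or stationary objects. A simplified standalone sketch of the distance gate (it omits the `len(path_data) == 1` special case above, and the threshold value is illustrative):

```python
import math

threshold = 10  # illustrative pixel distance
path_data: list[tuple[tuple[int, int], float]] = []

def maybe_append(bottom_center: tuple[int, int], frame_time: float) -> bool:
    # Always seed the path; afterwards require a minimum Euclidean step
    if not path_data or math.dist(path_data[-1][0], bottom_center) >= threshold:
        path_data.append((bottom_center, frame_time))
        return True
    return False

print(maybe_append((100, 300), 1.0))  # True  (first point)
print(maybe_append((104, 303), 2.0))  # False (moved only 5 px)
print(maybe_append((112, 309), 3.0))  # True  (moved 15 px)
```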

def to_dict(self):
event = {

@@ -3,7 +3,7 @@ from typing import TypedDict

from frigate.camera import CameraMetrics
from frigate.data_processing.types import DataProcessorMetrics
from frigate.object_detection import ObjectDetectProcess
from frigate.object_detection.base import ObjectDetectProcess


class StatsTrackingTypes(TypedDict):
@@ -25,3 +25,5 @@ class ModelStatusTypesEnum(str, Enum):

class TrackedObjectUpdateTypesEnum(str, Enum):
description = "description"
face = "face"
lpr = "lpr"

@@ -319,6 +319,21 @@ def migrate_016_0(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]

camera_config["live"] = live_config

# add another value to movement_weights for autotracking cams
onvif_config = camera_config.get("onvif", {})
if "autotracking" in onvif_config:
movement_weights = (
camera_config.get("onvif", {})
.get("autotracking")
.get("movement_weights", {})
)

if movement_weights and len(movement_weights.split(",")) == 5:
onvif_config["autotracking"]["movement_weights"] = (
movement_weights + ", 0"
)
camera_config["onvif"] = onvif_config

new_config["cameras"][name] = camera_config

new_config["version"] = "0.16-0"