Merge branch 'blakeblackshear:dev' into dev
commit b7270a8b04

@@ -33,7 +33,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
-ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.8.2/go2rtc_linux_${TARGETARCH}" go2rtc
+ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.8.3/go2rtc_linux_${TARGETARCH}" go2rtc


####

@@ -120,7 +120,7 @@ NOTE: The folder that is mapped from the host needs to be the folder that contai

## Custom go2rtc version

-Frigate currently includes go2rtc v1.8.2, there may be certain cases where you want to run a different version of go2rtc.
+Frigate currently includes go2rtc v1.8.3, there may be certain cases where you want to run a different version of go2rtc.

To do this:

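One way to run a different version is to mount your own go2rtc build over the bundled binary, which the Dockerfile hunk above installs at `/usr/local/go2rtc/bin/go2rtc`. A minimal docker-compose sketch, assuming an amd64 build downloaded next to the compose file (the service name and paths are illustrative, not part of this change):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      # override the bundled go2rtc v1.8.3 with a locally downloaded build
      - ./go2rtc_linux_amd64:/usr/local/go2rtc/bin/go2rtc:ro
```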

@@ -140,7 +140,7 @@ go2rtc:
      - rtspx://192.168.1.1:7441/abcdefghijk
```

-[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#source-rtsp)
+[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#source-rtsp)

In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record and rtmp if used directly with unifi protect.

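A rough sketch of the corresponding ffmpeg override, assuming the bundled `preset-record-ubiquiti` output preset (the preset name is an assumption, not shown in this hunk):

```yaml
ffmpeg:
  output_args:
    # resample audio from Unifi Protect cameras so recordings work with ffmpeg
    record: preset-record-ubiquiti
```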

@@ -14,6 +14,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on
| Preset | Usage | Other Notes |
| --------------------- | ------------------------------ | ----------------------------------------------------- |
| preset-rpi-64-h264 | 64 bit Rpi with h264 stream | |
+| preset-rpi-64-h265 | 64 bit Rpi with h265 stream | |
| preset-vaapi | Intel & AMD VAAPI | Check hwaccel docs to ensure correct driver is chosen |
| preset-intel-qsv-h264 | Intel QSV with h264 stream | If issues occur recommend using vaapi preset instead |
| preset-intel-qsv-h265 | Intel QSV with h265 stream | If issues occur recommend using vaapi preset instead |
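
For reference, a preset from this table is applied through `hwaccel_args`; a minimal sketch using the newly added preset (the camera name is illustrative):

```yaml
cameras:
  driveway:
    ffmpeg:
      hwaccel_args: preset-rpi-64-h265
```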

@@ -13,8 +13,13 @@ Ensure you increase the allocated RAM for your GPU to at least 128 (raspi-config
**NOTICE**: If you are using the addon, you may need to turn off `Protection mode` for hardware acceleration.

```yaml
+# if you want to decode a h264 stream
ffmpeg:
  hwaccel_args: preset-rpi-64-h264
+
+# if you want to decode a h265 (hevc) stream
+ffmpeg:
+  hwaccel_args: preset-rpi-64-h265
```

:::note

@@ -25,7 +25,7 @@ cameras:

VSCode (and VSCode addon) supports the JSON schemas which will automatically validate the config. This can be added by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the top of the config file. `frigate_host` being the IP address of Frigate or `ccab4aaf-frigate` if running in the addon.

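In practice the schema comment sits on the very first line of the config file; a minimal sketch (the `mqtt` block is illustrative filler):

```yaml
# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json
mqtt:
  host: mqtt.local
```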

-### Full configuration reference:
+### Full configuration reference

:::caution


@@ -438,7 +438,7 @@ rtmp:
  enabled: False

# Optional: Restream configuration
-# Uses https://github.com/AlexxIT/go2rtc (v1.8.2)
+# Uses https://github.com/AlexxIT/go2rtc (v1.8.3)
go2rtc:

# Optional: jsmpeg stream configuration for WebUI

@@ -116,4 +116,4 @@ services:

:::

-See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#module-webrtc) for more information about this.
+See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.
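
The linked module also covers WebRTC candidates; a minimal sketch of that kind of go2rtc configuration, assuming a LAN address and the default WebRTC port 8555 (both values are illustrative, not from this change):

```yaml
go2rtc:
  webrtc:
    candidates:
      - 192.168.1.10:8555
      - stun:8555
```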

@@ -7,7 +7,7 @@ title: Restream

Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.

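A minimal sketch of that pattern, with an assumed camera name and RTSP URL: the camera is published once through go2rtc, and the Frigate detect input reads the local restream instead of opening a second connection to the camera:

```yaml
go2rtc:
  streams:
    back_yard:
      - rtsp://user:password@192.168.1.10:554/stream1

cameras:
  back_yard:
    ffmpeg:
      inputs:
        # consume the go2rtc restream rather than the camera itself
        - path: rtsp://127.0.0.1:8554/back_yard
          roles:
            - detect
```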

-Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.8.2) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#configuration) for more advanced configurations and features.
+Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.8.3) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#configuration) for more advanced configurations and features.

:::note

@@ -138,7 +138,7 @@ cameras:

## Advanced Restream Configurations

-The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
+The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:

NOTE: The output will need to be passed with two curly braces `{{output}}`

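A minimal sketch of an exec source (the looping test file and ffmpeg flags are illustrative); note the `{{output}}` placeholder at the end:

```yaml
go2rtc:
  streams:
    test_pattern:
      # go2rtc substitutes its own output target for {{output}}
      - "exec:ffmpeg -hide_banner -re -stream_loop -1 -i /media/sample.mp4 -c copy -rtsp_transport tcp -f rtsp {{output}}"
```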

@@ -9,7 +9,7 @@ Cameras that output H.264 video and AAC audio will offer the most compatibility

I recommend Dahua, Hikvision, and Amcrest in that order. Dahua edges out Hikvision because they are easier to find and order, not because they are better cameras. I personally use Dahua cameras because they are easier to purchase directly. In my experience Dahua and Hikvision both have multiple streams with configurable resolutions and frame rates and rock solid streams. They also both have models with large sensors well known for excellent image quality at night. Not all the models are equal. Larger sensors are better than higher resolutions; especially at night. Amcrest is the fallback recommendation because they are rebranded Dahuas. They are rebranding the lower end models with smaller sensors or less configuration options.

-Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-410520-possibly-others). Wifi cameras are also not recommended. Their streams are less reliable and cause connection loss and/or lost video data.
+Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-cameras). Wifi cameras are also not recommended. Their streams are less reliable and cause connection loss and/or lost video data.

Here are some of the cameras I recommend:


@@ -11,7 +11,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect

# Setup a go2rtc stream

-First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. If you set the stream name under go2rtc to match the name of your camera, it will automatically be mapped and you will get additional live view options for the camera. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#module-streams), not just rtsp.
+First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. If you set the stream name under go2rtc to match the name of your camera, it will automatically be mapped and you will get additional live view options for the camera. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-streams), not just rtsp.

```yaml
go2rtc:
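  # Illustrative continuation (stream name and URL are assumptions, not from
  # this change): name the stream to match your Frigate camera so it is mapped
  # automatically for live view.
  streams:
    back_yard:
      - rtsp://user:password@192.168.1.10:554/stream1
```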

@@ -24,7 +24,7 @@ The easiest live view to get working is MSE. After adding this to the config, re

### What if my video doesn't play?

-If you are unable to see your video feed, first check the go2rtc logs in the Frigate UI under Logs in the sidebar. If go2rtc is having difficulty connecting to your camera, you should see some error messages in the log. If you do not see any errors, then the video codec of the stream may not be supported in your browser. If your camera stream is set to H265, try switching to H264. You can see more information about [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#codecs-madness) in the go2rtc documentation. If you are not able to switch your camera settings from H265 to H264 or your stream is a different format such as MJPEG, you can use go2rtc to re-encode the video using the [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.8.2#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view. Here is an example of a config that will re-encode the stream to H264 without hardware acceleration:
+If you are unable to see your video feed, first check the go2rtc logs in the Frigate UI under Logs in the sidebar. If go2rtc is having difficulty connecting to your camera, you should see some error messages in the log. If you do not see any errors, then the video codec of the stream may not be supported in your browser. If your camera stream is set to H265, try switching to H264. You can see more information about [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#codecs-madness) in the go2rtc documentation. If you are not able to switch your camera settings from H265 to H264 or your stream is a different format such as MJPEG, you can use go2rtc to re-encode the video using the [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view. Here is an example of a config that will re-encode the stream to H264 without hardware acceleration:

```yaml
go2rtc:
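  # Illustrative continuation (names and URL are assumptions): the second source
  # re-encodes the original stream to H264 via go2rtc's ffmpeg source, without
  # hardware acceleration.
  streams:
    my_camera:
      - rtsp://192.168.1.10:554/stream1
      - "ffmpeg:my_camera#video=h264"
```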

docs/docs/guides/video_pipeline.md (new file, 65 lines)
@@ -0,0 +1,65 @@
---
id: video_pipeline
title: The video pipeline
---
Frigate uses a sophisticated video pipeline that starts with the camera feed and progressively applies transformations to it (e.g. decoding, motion detection, etc.).

This guide provides an overview to help users understand some of the key Frigate concepts.

## Overview

At a high level, there are five processing steps that could be applied to a camera feed:

```mermaid
%%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%

flowchart LR
    Feed(Feed\nacquisition) --> Decode(Video\ndecoding)
    Decode --> Motion(Motion\ndetection)
    Motion --> Object(Object\ndetection)
    Feed --> Recording(Recording\nand\nvisualization)
    Motion --> Recording
    Object --> Recording
```
As the diagram shows, all feeds first need to be acquired. Depending on the data source, it may be as simple as using FFmpeg to connect to an RTSP source via TCP or something more involved like connecting to an Apple Homekit camera using go2rtc. A single camera can produce a main (i.e. high resolution) and a sub (i.e. lower resolution) video feed.

Typically, the sub-feed will be decoded to produce full-frame images. As part of this process, the resolution may be downscaled and an image sampling frequency may be imposed (e.g. keep 5 frames per second).

These frames will then be compared over time to detect movement areas (a.k.a. motion boxes). These motion boxes are combined into motion regions and are analyzed by a machine learning model to detect known objects. Finally, the snapshot and recording retention config will decide what video clips and events should be saved.

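In the Frigate config these pipeline knobs map to the `detect` and `record` sections; a minimal sketch with assumed values (camera name, resolution and retention are illustrative):

```yaml
cameras:
  front_door:
    detect:
      # decode the sub stream at this resolution, sampling 5 frames per second
      width: 1280
      height: 720
      fps: 5
    record:
      enabled: true
      retain:
        # keep continuous recordings for 3 days, limited to segments with motion
        days: 3
        mode: motion
```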

## Detailed view of the video pipeline

The following diagram adds a lot more detail than the simple view explained before. The goal is to show the detailed data paths between the processing steps.

```mermaid
%%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%

flowchart TD
    RecStore[(Recording\nstore)]
    SnapStore[(Snapshot\nstore)]

    subgraph Acquisition
        Cam["Camera"] -->|FFmpeg supported| Stream
        Cam -->|"Other streaming\nprotocols"| go2rtc
        go2rtc("go2rtc") --> Stream
        Stream[Capture main and\nsub streams] --> |detect stream|Decode(Decode and\ndownscale)
    end
    subgraph Motion
        Decode --> MotionM(Apply\nmotion masks)
        MotionM --> MotionD(Motion\ndetection)
    end
    subgraph Detection
        MotionD --> |motion regions| ObjectD(Object detection)
        Decode --> ObjectD
        ObjectD --> ObjectFilter(Apply object filters & zones)
        ObjectFilter --> ObjectZ(Track objects)
    end
    Decode --> |decoded frames|Birdseye
    MotionD --> |motion event|Birdseye
    ObjectZ --> |object event|Birdseye

    MotionD --> |"video segments\n(retain motion)"|RecStore
    ObjectZ --> |detection clip|RecStore
    Stream -->|"video segments\n(retain all)"| RecStore
    ObjectZ --> |detection snapshot|SnapStore
```

@@ -10,6 +10,10 @@ module.exports = {
  favicon: 'img/favicon.ico',
  organizationName: 'blakeblackshear',
  projectName: 'frigate',
+  markdown: {
+    mermaid: true,
+  },
+  themes: ['@docusaurus/theme-mermaid'],
  themeConfig: {
    algolia: {
      appId: 'WIURGBNBPY',

docs/package-lock.json (generated, 1059 lines): file diff suppressed because it is too large.

@@ -16,6 +16,7 @@
  "dependencies": {
    "@docusaurus/core": "^2.4.1",
    "@docusaurus/preset-classic": "^2.4.1",
+    "@docusaurus/theme-mermaid": "^2.4.1",
    "@mdx-js/react": "^1.6.22",
    "clsx": "^1.2.1",
    "prism-react-renderer": "^1.3.5",

@@ -14,6 +14,7 @@ module.exports = {
      "guides/ha_network_storage",
      "guides/stationary_objects",
      "guides/reverse_proxy",
+      "guides/video_pipeline",
    ],
    Configuration: {
      "Configuration Files": [

@@ -185,6 +185,13 @@ class Dispatcher:
        ptz_autotracker_settings = self.config.cameras[camera_name].onvif.autotracking

        if payload == "ON":
+            if not self.config.cameras[
+                camera_name
+            ].onvif.autotracking.enabled_in_config:
+                logger.error(
+                    "Autotracking must be enabled in the config to be turned on via MQTT."
+                )
+                return
            if not self.ptz_metrics[camera_name]["ptz_autotracker_enabled"].value:
                logger.info(f"Turning on ptz autotracker for {camera_name}")
                self.ptz_metrics[camera_name]["ptz_autotracker_enabled"].value = True
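
The `enabled_in_config` guard refers to the camera's ONVIF autotracking setting; a minimal sketch of the corresponding camera config, with assumed host and credentials (only `autotracking.enabled` is taken from this change):

```yaml
cameras:
  ptz_cam:
    onvif:
      host: 192.168.1.20
      port: 8000
      user: admin
      password: secret
      autotracking:
        # must be enabled here for the MQTT "ON" command to be accepted
        enabled: true
```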

@@ -71,7 +71,7 @@ class MqttClient(Communicator): # type: ignore[misc]
        )
        self.publish(
            f"{camera_name}/ptz_autotracker/state",
-            "ON" if camera.onvif.autotracking.enabled else "OFF",
+            "ON" if camera.onvif.autotracking.enabled_in_config else "OFF",
            retain=True,
        )
        self.publish(

@@ -184,6 +184,9 @@ class PtzAutotrackConfig(FrigateBaseModel):
        default=[],
        title="Internal value used for PTZ movements based on the speed of your camera's motor.",
    )
+    enabled_in_config: Optional[bool] = Field(
+        title="Keep track of original state of autotracking."
+    )

    @validator("movement_weights", pre=True)
    def validate_weights(cls, v):

@@ -1191,6 +1194,9 @@ class FrigateConfig(FrigateBaseModel):
            # set config pre-value
            camera_config.record.enabled_in_config = camera_config.record.enabled
            camera_config.audio.enabled_in_config = camera_config.audio.enabled
+            camera_config.onvif.autotracking.enabled_in_config = (
+                camera_config.onvif.autotracking.enabled
+            )

            # Add default filters
            object_keys = camera_config.objects.track

@@ -56,6 +56,7 @@ _user_agent_args = [

PRESETS_HW_ACCEL_DECODE = {
    "preset-rpi-64-h264": "-c:v:1 h264_v4l2m2m",
+    "preset-rpi-64-h265": "-c:v:1 hevc_v4l2m2m",
    "preset-vaapi": f"-hwaccel_flags allow_profile_mismatch -hwaccel vaapi -hwaccel_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format vaapi",
    "preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv",
    "preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v hevc_qsv",

@@ -70,6 +71,7 @@ PRESETS_HW_ACCEL_DECODE = {

PRESETS_HW_ACCEL_SCALE = {
    "preset-rpi-64-h264": "-r {0} -vf fps={0},scale={1}:{2}",
+    "preset-rpi-64-h265": "-r {0} -vf fps={0},scale={1}:{2}",
    "preset-vaapi": "-r {0} -vf fps={0},scale_vaapi=w={1}:h={2}:format=nv12,hwdownload,format=nv12,format=yuv420p",
    "preset-intel-qsv-h264": "-r {0} -vf vpp_qsv=framerate={0}:w={1}:h={2}:format=nv12,hwdownload,format=nv12,format=yuv420p",
    "preset-intel-qsv-h265": "-r {0} -vf vpp_qsv=framerate={0}:w={1}:h={2}:format=nv12,hwdownload,format=nv12,format=yuv420p",

@@ -84,6 +86,7 @@ PRESETS_HW_ACCEL_SCALE = {

PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
    "preset-rpi-64-h264": "ffmpeg -hide_banner {0} -c:v h264_v4l2m2m {1}",
+    "preset-rpi-64-h265": "ffmpeg -hide_banner {0} -c:v hevc_v4l2m2m {1}",
    "preset-vaapi": "ffmpeg -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {2} {0} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {1}",
    "preset-intel-qsv-h264": "ffmpeg -hide_banner {0} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {1}",
    "preset-intel-qsv-h265": "ffmpeg -hide_banner {0} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {1}",

@@ -98,6 +101,7 @@ PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {

PRESETS_HW_ACCEL_ENCODE_TIMELAPSE = {
    "preset-rpi-64-h264": "ffmpeg -hide_banner {0} -c:v h264_v4l2m2m -pix_fmt yuv420p {1}",
+    "preset-rpi-64-h265": "ffmpeg -hide_banner {0} -c:v hevc_v4l2m2m -pix_fmt yuv420p {1}",
    "preset-vaapi": "ffmpeg -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {2} {0} -c:v h264_vaapi {1}",
    "preset-intel-qsv-h264": "ffmpeg -hide_banner {0} -c:v h264_qsv -profile:v high -level:v 4.1 -async_depth:v 1 {1}",
    "preset-intel-qsv-h265": "ffmpeg -hide_banner {0} -c:v hevc_qsv -profile:v high -level:v 4.1 -async_depth:v 1 {1}",

@@ -208,7 +208,10 @@ class PtzAutoTracker:
                continue

            self.autotracker_init[camera] = False
-            if camera_config.onvif.autotracking.enabled:
+            if (
+                camera_config.onvif.autotracking.enabled
+                and camera_config.onvif.autotracking.enabled_in_config
+            ):
                self._autotracker_setup(camera_config, camera)

    def _autotracker_setup(self, camera_config, camera):