first pass

This commit is contained in:
Josh Hawkins 2026-03-27 08:22:01 -05:00
parent 772bca9375
commit 6c658050df
35 changed files with 3304 additions and 280 deletions


@ -4,12 +4,29 @@ title: Advanced Options
sidebar_label: Advanced Options
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
### Logging
#### Frigate `logger`
Change the default log level for troubleshooting purposes.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Logging" />.
| Field | Description |
| ------------------------- | ------------------------------------------------------- |
| **Logging level** | The default log level for all modules (default: `info`) |
| **Per-process log level** | Override the log level for specific modules |
</TabItem>
<TabItem value="yaml">
```yaml
logger:
# Optional: default log level (default: shown below)
@ -19,6 +36,9 @@ logger:
frigate.mqtt: error
```
</TabItem>
</ConfigTabs>
Available log levels are: `debug`, `info`, `warning`, `error`, and `critical`.
Examples of available modules are:
@ -48,7 +68,20 @@ This section can be used to set environment variables for those unable to modify
Variables prefixed with `FRIGATE_` can be referenced in config fields that support environment variable substitution (such as MQTT host and credentials, camera stream URLs, and ONVIF host and credentials) using the `{FRIGATE_VARIABLE_NAME}` syntax.
Example:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Environment variables" /> to add or edit environment variables.
| Field | Description |
| --------- | --------------------------------------------------------- |
| **Key** | The environment variable name (e.g., `FRIGATE_MQTT_USER`) |
| **Value** | The value for the variable |
Variables defined here can be referenced elsewhere in your configuration using the `{FRIGATE_VARIABLE_NAME}` syntax.
</TabItem>
<TabItem value="yaml">
```yaml
environment_vars:
@ -61,10 +94,27 @@ mqtt:
password: "{FRIGATE_MQTT_PASSWORD}"
```
</TabItem>
</ConfigTabs>
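As a sketch of how substitution ties together, variables defined under `environment_vars` can be referenced anywhere substitution is supported; the camera name, credentials, and stream URL below are illustrative placeholders, not real values:

```yaml
environment_vars:
  FRIGATE_RTSP_USER: "camuser" # illustrative placeholder
  FRIGATE_RTSP_PASSWORD: "changeme" # illustrative placeholder

cameras:
  front_camera:
    ffmpeg:
      inputs:
        # The {FRIGATE_...} tokens are substituted at runtime
        - path: rtsp://{FRIGATE_RTSP_USER}:{FRIGATE_RTSP_PASSWORD}@192.168.1.10:554/stream
          roles:
            - detect
```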
#### TensorFlow Thread Configuration
If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Environment variables" /> and add the following variables:
| Variable | Description |
| --------------------------------- | ---------------------------------------------- |
| `TF_INTRA_OP_PARALLELISM_THREADS` | Threads within operations (`0` = use default) |
| `TF_INTER_OP_PARALLELISM_THREADS` | Threads between operations (`0` = use default) |
| `TF_DATASET_THREAD_POOL_SIZE` | Data pipeline threads (`0` = use default) |
</TabItem>
<TabItem value="yaml">
```yaml
environment_vars:
TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
@ -72,19 +122,35 @@ environment_vars:
TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
```
</TabItem>
</ConfigTabs>
### `database`
Tracked object and recording information is managed in an SQLite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database if necessary.
This may need to be in a custom location if network storage is used for the media folder.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Database" />.
- Set **Database path** to the custom path for the Frigate database file (default: `/config/frigate.db`)
</TabItem>
<TabItem value="yaml">
```yaml
database:
path: /path/to/frigate.db
```
</TabItem>
</ConfigTabs>
### `model`
If using a custom model, the width and height will need to be specified.
@ -103,6 +169,22 @@ Custom models may also require different input tensor formats. The colorspace co
| "nhwc" |
| "nchw" |
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Detection model" /> to configure the model path, dimensions, and input format.
| Field | Description |
| --------------------------------------------- | ------------------------------------ |
| **Custom object detector model path** | Path to the custom model file |
| **Object detection model input width** | Model input width (default: 320) |
| **Object detection model input height** | Model input height (default: 320) |
| **Advanced > Model Input Tensor Shape** | Input tensor shape: `nhwc` or `nchw` |
| **Advanced > Model Input Pixel Color Format** | Pixel format: `rgb`, `bgr`, or `yuv` |
</TabItem>
<TabItem value="yaml">
```yaml
# Optional: model config
model:
@ -113,6 +195,9 @@ model:
input_pixel_format: "bgr"
```
</TabItem>
</ConfigTabs>
#### `labelmap`
:::warning
@ -163,7 +248,15 @@ services:
### Enabling IPv6
IPv6 is disabled by default. Enable it in the Frigate configuration.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Networking" /> and enable **IPv6**.
</TabItem>
<TabItem value="yaml">
```yaml
networking:
@ -171,11 +264,25 @@ networking:
enabled: True
```
</TabItem>
</ConfigTabs>
### Listen on different ports
You can change the ports Nginx uses for listening. The internal port (unauthenticated) and external port (authenticated) can be changed independently. You can also specify an IP address using the format `ip:port` if you wish to bind the port to a specific interface. This may be useful for example to prevent exposing the internal port outside the container.
For example:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Networking" /> to configure the listen ports.
| Field | Description |
| ----------------- | --------------------------------------------------------- |
| **Internal port** | The unauthenticated listen address/port (default: `5000`) |
| **External port** | The authenticated listen address/port (default: `8971`) |
</TabItem>
<TabItem value="yaml">
```yaml
networking:
@ -184,6 +291,9 @@ networking:
external: 8971
```
</TabItem>
</ConfigTabs>
:::warning
This setting is for advanced users. For the majority of use cases it's recommended to change the `ports` section of your Docker compose file or use the Docker `run` `--publish` option instead, e.g. `-p 443:8971`. Changing Frigate's ports may break some integrations.
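As a hedged Docker Compose sketch of that recommended approach — remapping the published port instead of changing Frigate's listen ports (the service name and image tag are assumptions, not prescribed values):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    ports:
      - "443:8971" # publish the authenticated port on host port 443
      - "5000:5000" # internal, unauthenticated port; keep on a trusted network only
```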


@ -3,6 +3,10 @@ id: audio_detectors
title: Audio Detectors
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Frigate provides a built-in audio detector that runs on the CPU. Compared to object detection in images, audio detection is a relatively lightweight operation, so the only option is to run detection on the CPU.
## Configuration
@ -11,7 +15,17 @@ Audio events work by detecting a type of audio and creating an event, the event
### Enabling Audio Events
Audio events can be enabled globally or for specific cameras.
<ConfigTabs>
<TabItem value="ui">
**Global:** Navigate to <NavPath path="Settings > Global configuration > Audio events" /> and set **Enabled** to on.
**Per-camera:** Navigate to <NavPath path="Settings > Camera configuration > Audio events" /> and set **Enabled** to on for the desired camera.
</TabItem>
<TabItem value="yaml">
```yaml
@ -26,6 +40,9 @@ cameras:
enabled: True # <- enable audio events for the front_camera
```
</TabItem>
</ConfigTabs>
If you are using multiple streams, you must set the `audio` role on the stream that will be used for audio detection. This can be any stream, but it must include audio.
:::note
@ -34,6 +51,14 @@ The ffmpeg process for capturing audio will be a separate connection to the came
:::
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and add an input with the `audio` role pointing to a stream that includes audio.
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
front_camera:
@ -48,6 +73,9 @@ cameras:
- detect
```
</TabItem>
</ConfigTabs>
### Configuring Minimum Volume
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what your camera's volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
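A minimal sketch, assuming an RMS floor observed in the Debug view's Audio tab (the value `500` is illustrative, not a recommendation):

```yaml
audio:
  min_volume: 500 # only run audio detection when RMS volume exceeds this (illustrative value)
```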
@ -62,6 +90,17 @@ Volume is considered motion for recordings, this means when the `record -> retai
The included audio model has over [500 different types](https://github.com/blakeblackshear/frigate/blob/dev/audio-labelmap.txt) of audio that can be detected, many of which are not practical. By default `bark`, `fire_alarm`, `scream`, `speech`, and `yell` are enabled but these can be customized.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Audio events" />.
- Set **Enable audio detection** to on
- Set **Listen types** to include the audio types you want to detect
</TabItem>
<TabItem value="yaml">
```yaml
audio:
enabled: True
@ -73,15 +112,32 @@ audio:
- yell
```
</TabItem>
</ConfigTabs>
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully-automatic speech transcription service — automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.
Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.
#### Configuration
To enable transcription, configure it globally and optionally disable for specific cameras. Audio detection must also be enabled as described above.
<ConfigTabs>
<TabItem value="ui">
**Global:** Navigate to <NavPath path="Settings > Enrichments > Audio transcription" />.
- Set **Enable audio transcription** to on
- Set **Transcription device** to the desired device
- Set **Model size** to the desired size
**Per-camera:** Navigate to <NavPath path="Settings > Camera configuration > Audio transcription" /> to enable or disable transcription for a specific camera.
</TabItem>
<TabItem value="yaml">
```yaml
audio_transcription:
@ -100,6 +156,9 @@ cameras:
enabled: False
```
</TabItem>
</ConfigTabs>
:::note
Audio detection must be enabled and configured as described above in order to use audio transcription features.
@ -146,7 +205,7 @@ If you have CUDA hardware, you can experiment with the `large` `whisper` model o
Any `speech` events in Explore can be transcribed and/or translated through the Transcribe button in the Tracked Object Details pane.
In order to use transcription and translation for past events, you must enable audio detection and define `speech` as an audio type to listen for. To have `speech` events translated into the language of your choice, set the `language` config parameter with the correct [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
The transcribed/translated speech will appear in the description box in the Tracked Object Details pane. If Semantic Search is enabled, embeddings are generated for the transcription text and are fully searchable using the description search type.
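For example, a hedged sketch that listens for `speech` and requests translation (field placement follows the config sections shown above; `es` is just an example language code):

```yaml
audio:
  enabled: True
  listen:
    - speech # required so speech events are created

audio_transcription:
  enabled: True
  language: es # translate transcriptions into Spanish (example language code)
```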
@ -162,16 +221,16 @@ Recorded `speech` events will always use a `whisper` model, regardless of the `m
1. Why doesn't Frigate automatically transcribe all `speech` events?
Frigate does not implement a queue mechanism for speech transcription, and adding one is not trivial. A proper queue would need backpressure, prioritization, memory/disk buffering, retry logic, crash recovery, and safeguards to prevent unbounded growth when events outpace processing. That's a significant amount of complexity for a feature that, in most real-world environments, would mostly just churn through low-value noise.
Because transcription is **serialized (one event at a time)** and speech events can be generated far faster than they can be processed, an auto-transcribe toggle would very quickly create an ever-growing backlog and degrade core functionality. For the amount of engineering and risk involved, it adds **very little practical value** for the majority of deployments, which are often on low-powered, edge hardware.
If you hear speech that's actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event - that keeps things explicit, reliable, and under your control.
Other options are being considered for future versions of Frigate to add transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.
2. Why don't you save live transcription text and use that for `speech` events?
There's no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.
Automatically persisting that data would often result in **misaligned, partial, or irrelevant transcripts**, while still incurring all of the CPU, storage, and privacy costs of transcription. That's why Frigate treats transcription as an **explicit, user-initiated action** rather than an automatic side-effect of every `speech` event.


@ -3,6 +3,10 @@ id: authentication
title: Authentication
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Authentication
Frigate stores user information in its database. Password hashes are generated using industry standard PBKDF2-SHA256 with 600,000 iterations. Upon successful login, a JWT token is issued with an expiration date and set as a cookie. The cookie is refreshed as needed automatically. This JWT token can also be passed in the Authorization header as a bearer token.
@ -22,13 +26,26 @@ On startup, an admin user and password are generated and printed in the logs. It
## Resetting admin password
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" />.
- Set **Reset admin password** to on to reset the admin password and print it in the logs on next startup
</TabItem>
<TabItem value="yaml">
```yaml
auth:
reset_admin_password: true
```
</TabItem>
</ConfigTabs>
## Password guidance
Constructing secure passwords and managing them properly is important. Frigate requires a minimum length of 12 characters. For guidance on password standards see [NIST SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html). To learn what makes a password truly secure, read this [article](https://medium.com/peerio/how-to-build-a-billion-dollar-password-3d92568d9277).
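If you need a compliant password, one hedged way to generate a random string comfortably above the 12-character minimum is with Python's standard `secrets` module (assumes `python3` is on your PATH):

```shell
# Print a URL-safe random password; token_urlsafe(16) yields ~22 characters,
# well above Frigate's 12-character minimum
python3 -c 'import secrets; print(secrets.token_urlsafe(16))'
```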
@ -47,7 +64,20 @@ Restarting Frigate will reset the rate limits.
If you are running Frigate behind a proxy, you will want to set `trusted_proxies` or these rate limits will apply to the upstream proxy IP address. This means that a brute force attack will rate limit login attempts from other devices and could temporarily lock you out of your instance. In order to ensure rate limits only apply to the actual IP address where the requests are coming from, you will need to list the upstream networks that you want to trust. These trusted proxies are checked against the `X-Forwarded-For` header when looking for the IP address where the request originated.
If you are running a reverse proxy in the same Docker Compose file as Frigate, configure rate limiting and trusted proxies as follows:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" />.
| Field | Description |
|-------|-------------|
| **Failed login limits** | Rate limit string for login failures (e.g., `1/second;5/minute;20/hour`) |
| **Trusted proxies** | List of upstream network CIDRs to trust for `X-Forwarded-For` (e.g., `172.18.0.0/16` for internal Docker Compose network) |
</TabItem>
<TabItem value="yaml">
```yaml
auth:
@ -56,6 +86,9 @@ auth:
- 172.18.0.0/16 # <---- this is the subnet for the internal Docker Compose network
```
</TabItem>
</ConfigTabs>
## Session Length
The default session length for user authentication in Frigate is 24 hours. This setting determines how long a user's authenticated session remains active before a token refresh is required — otherwise, the user will need to log in again.
@ -67,11 +100,24 @@ The default value of `86400` will expire the authentication session after 24 hou
- `0`: Requires the user to log in every time they access the application, or after a very short, immediate timeout.
- `604800`: Requires the user to log in again if the token is not refreshed for 7 days.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" />.
- Set **Session length** to the duration in seconds before the authentication session expires (default: 86400 / 24 hours)
</TabItem>
<TabItem value="yaml">
```yaml
auth:
session_length: 86400
```
</TabItem>
</ConfigTabs>
## JWT Token Secret
The JWT token secret needs to be kept secure. Anyone with this secret can generate valid JWT tokens to authenticate with Frigate. This should be a cryptographically random string of at least 64 characters.
@ -99,7 +145,18 @@ Frigate can be configured to leverage features of common upstream authentication
If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication as there is no correspondence between users in Frigate's database and users authenticated via the proxy. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
To disable Frigate's authentication and ensure requests come only from your known proxy:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > System > Authentication" />.
- Set **Enable authentication** to off
2. Navigate to <NavPath path="Settings > System > Proxy" />.
- Set **Proxy secret** to `<some random long string>`
</TabItem>
<TabItem value="yaml">
```yaml
auth:
@ -109,6 +166,9 @@ proxy:
auth_secret: <some random long string>
```
</TabItem>
</ConfigTabs>
You can use the following code to generate a random secret.
@ -119,6 +179,20 @@
```shell
python3 -c 'import secrets; print(secrets.token_hex(64))'
```
If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Groups` values. Header names are not case sensitive. Multiple values can be included in the role header. Frigate expects that the character separating the roles is a comma, but this can be specified using the `separator` config entry.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" /> and configure the proxy header mapping settings.
| Field | Description |
|-------|-------------|
| **Proxy > Separator** | Character separating multiple roles in the role header (default: comma). Authentik uses a pipe `\|`. |
| **Proxy > Header Map > User** | Header name for the authenticated username (e.g., `x-forwarded-user`) |
| **Proxy > Header Map > Role** | Header name for the authenticated role/groups (e.g., `x-forwarded-groups`) |
</TabItem>
<TabItem value="yaml">
```yaml
proxy:
...
@ -128,19 +202,49 @@ proxy:
role: x-forwarded-groups
```
</TabItem>
</ConfigTabs>
Frigate supports `admin`, `viewer`, and custom roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
A default role can be provided. Any value in the mapped `role` header will override the default.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" /> and set the default role under the proxy settings.
| Field | Description |
|-------|-------------|
| **Proxy > Default Role** | Fallback role when no role header is present (e.g., `viewer`) |
</TabItem>
<TabItem value="yaml">
```yaml
proxy:
...
default_role: viewer
```
</TabItem>
</ConfigTabs>
## Role mapping
In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do not pass a Frigate-compatible role directly, but instead pass one or more group claims. To handle this, Frigate supports a `role_map` that translates upstream group names into Frigate's internal roles (`admin`, `viewer`, or custom).
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" /> and configure the role mapping under the proxy header map settings.
| Field | Description |
|-------|-------------|
| **Proxy > Header Map > Role Map** | Maps upstream group names to Frigate roles. Each Frigate role (`admin`, `viewer`, or custom) maps to a list of upstream group names. |
</TabItem>
<TabItem value="yaml">
```yaml
proxy:
@ -158,6 +262,9 @@ proxy:
- operators
```
</TabItem>
</ConfigTabs>
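The YAML block above is truncated in this view; a sketch consistent with the explanation that follows might look like the following (the group names come from that explanation, and mapping `operators` to `viewer` is an assumption for illustration):

```yaml
proxy:
  header_map:
    user: x-forwarded-user
    role: x-forwarded-groups
    role_map:
      admin:
        - sysadmins
        - access-level-security
      viewer:
        - operators # assumption: illustrative viewer-level group
```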
In this example:
- If the proxy passes a role header containing `sysadmins` or `access-level-security`, the user is assigned the `admin` role.
@ -175,7 +282,7 @@ In this example:
**Authenticated Port (8971)**
- Header mapping is **fully supported**.
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- **Custom roles** → Read-only access limited to the cameras defined in `auth.roles[role]`.
@ -232,6 +339,18 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust
### Role Configuration Example
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" /> and configure roles under the **Roles** section.
| Field | Description |
|-------|-------------|
| **Roles** | Define custom roles and assign which cameras each role can access |
</TabItem>
<TabItem value="yaml">
```yaml {11-16}
cameras:
front_door:
@ -251,13 +370,16 @@ auth:
- side_yard
```
</TabItem>
</ConfigTabs>
If you want to provide access to all cameras to a specific user, just use the **viewer** role.
### Managing User Roles
1. Log in as an **admin** user via port `8971` (preferred), or unauthenticated via port `5000`.
2. Navigate to **Settings**.
3. In the **Users** section, edit a user's role by selecting from available roles (admin, viewer, or custom).
4. In the **Roles** section, add/edit/delete custom roles (select cameras via switches). Deleting a role auto-reassigns users to "viewer".
### Role Enforcement
@ -277,7 +399,7 @@ To use role-based access control, you must connect to Frigate via the **authenti
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.
## API Authentication Guide


@ -3,6 +3,10 @@ id: autotracking
title: Camera Autotracking
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
An ONVIF-capable, PTZ (pan-tilt-zoom) camera that supports relative movement within the field of view (FOV) can be configured to automatically track moving objects and keep them in the center of the frame.
![Autotracking example with zooming](/img/frigate-autotracking-example.gif)
@ -29,12 +33,45 @@ A growing list of cameras and brands that have been reported by users to work wi
First, set up a PTZ preset in your camera's firmware and give it a name. If you're unsure how to do this, consult the documentation for your camera manufacturer's firmware. Some tutorials for common brands: [Amcrest](https://www.youtube.com/watch?v=lJlE9-krmrM), [Reolink](https://www.youtube.com/watch?v=VAnxHUY5i5w), [Dahua](https://www.youtube.com/watch?v=7sNbc5U-k54).
Configure the ONVIF connection and autotracking parameters for your camera. Specify the object types to track, a required zone the object must enter to begin autotracking, and the camera preset name to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
An [ONVIF connection](cameras.md) is required for autotracking to function. Also, a [motion mask](masks.md) over your camera's timestamp and any overlay text is recommended to ensure they are completely excluded from scene change calculations when the camera is moving.
Note that `autotracking` is disabled by default but can be enabled in the configuration or via MQTT.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > ONVIF" /> for the desired camera.
**ONVIF Connection**
| Field | Description |
|-------|-------------|
| **Host** | Host of the camera being connected to. HTTP is assumed by default; prefix with `https://` for HTTPS. |
| **Port** | ONVIF port for device (default: 8000) |
| **User** | Username for login. Some devices require admin to access ONVIF. |
| **Password** | Password for login |
| **TLS Insecure** | Skip TLS verification from the ONVIF server (default: false) |
| **Profile** | ONVIF media profile to use for PTZ control, matched by token or name. If not set, the first profile with valid PTZ configuration is selected automatically. |
**Autotracking**
| Field | Description |
|-------|-------------|
| **Enabled** | Enable or disable object autotracking (default: false) |
| **Calibrate on Startup** | Calibrate the camera on startup by measuring PTZ motor speed (default: false) |
| **Zooming** | Zoom mode during autotracking: `disabled`, `absolute`, or `relative` (default: disabled) |
| **Zoom Factor** | Controls zoom behavior on tracked objects, between 0.1 and 0.75. Lower keeps more scene visible; higher zooms in more (default: 0.3) |
| **Track** | List of object types to track (default: person) |
| **Required Zones** | Zones an object must enter to begin autotracking |
| **Return Preset** | Name of ONVIF preset in camera firmware to return to when tracking ends (default: home) |
| **Timeout** | Seconds to delay before returning to preset (default: 10) |
| **Movement Weights** | Auto-generated calibration values. Do not modify manually. |
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
ptzcamera:
@ -92,13 +129,16 @@ cameras:
movement_weights: []
```
</TabItem>
</ConfigTabs>
## Calibration
PTZ motors operate at different speeds. Performing a calibration will direct Frigate to measure this speed over a variety of movements and use those measurements to better predict the amount of movement necessary to keep autotracked objects in the center of the frame.
Calibration is optional, but will greatly assist Frigate in autotracking objects that move across the camera's field of view more quickly.
To begin calibration, set the `calibrate_on_startup` for your camera to `True` and restart Frigate. Frigate will then make a series of small and large movements with your camera. Don't move the PTZ manually while calibration is in progress. Once complete, camera motion will stop and your config file will be automatically updated with a `movement_weights` parameter to be used in movement calculations. You should not modify this parameter manually.
To begin calibration, set `calibrate_on_startup` for your camera to `True` and restart Frigate. Frigate will then make a series of small and large movements with your camera. Don't move the PTZ manually while calibration is in progress. Once complete, camera motion will stop and your config file will be automatically updated with a `movement_weights` parameter to be used in movement calculations. You should not modify this parameter manually.
After calibration has ended, your PTZ will be moved to the preset specified by `return_preset`.
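As a sketch, a one-time calibration run could be triggered like this (the camera name and preset name are placeholders):

```yaml
cameras:
  ptzcamera: # placeholder camera name
    onvif:
      autotracking:
        enabled: true
        # Measure PTZ motor speed on the next restart; Frigate writes the
        # resulting movement_weights back to the config automatically
        calibrate_on_startup: true
        return_preset: home
```

Once `movement_weights` has been written, `calibrate_on_startup` can be set back to `false` so calibration does not run on every restart.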
@ -3,6 +3,10 @@ id: bird_classification
title: Bird Classification
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Bird classification identifies known birds using a quantized Tensorflow model. When a known bird is recognized, its common name will be added as a `sub_label`. This information is included in the UI, filters, as well as in notifications.
## Minimum System Requirements
@ -15,7 +19,18 @@ The classification model used is the MobileNet INat Bird Classification, [availa
## Configuration
Bird classification is disabled by default, it must be enabled in your config file before it can be used. Bird classification is a global configuration setting.
Bird classification is disabled by default and must be enabled before it can be used. Bird classification is a global configuration setting.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Object classification" />.
- Set **Bird classification config > Bird classification** to on
- Set **Bird classification config > Bird classification threshold** to the desired confidence score (default: 0.9)
</TabItem>
<TabItem value="yaml">
```yaml
classification:
@ -23,6 +38,9 @@ classification:
enabled: true
```
</TabItem>
</ConfigTabs>
## Advanced Configuration
Fine-tune bird classification with these optional parameters:
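As a minimal sketch, assuming the `threshold` field shown in the settings above:

```yaml
classification:
  bird:
    enabled: true
    # Minimum confidence score before a bird's common name is applied (default: 0.9)
    threshold: 0.9
```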
@ -1,5 +1,9 @@
# Birdseye
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
In addition to Frigate's Live camera dashboard, Birdseye provides a portable heads-up view of your cameras so you can see what is going on around your property or space without watching cameras where nothing is happening. Birdseye offers modes that intelligently show and hide cameras based on the activity you care about.
Birdseye can be viewed by adding the "Birdseye" camera to a Camera Group in the Web UI. Add a Camera Group by pressing the "+" icon on the Live page, and choose "Birdseye" as one of the cameras.
@ -22,7 +26,22 @@ A custom icon can be added to the birdseye background by providing a 180x180 ima
### Birdseye view override at camera level
If you want to include a camera in Birdseye view only for specific circumstances, or just don't include it at all, the Birdseye setting can be set at the camera level.
To include a camera in Birdseye view only for specific circumstances, or exclude it entirely, configure Birdseye at the camera level.
<ConfigTabs>
<TabItem value="ui">
**Global settings:** Navigate to <NavPath path="Settings > System > Birdseye" /> to configure the default Birdseye behavior for all cameras.
**Per-camera overrides:** Navigate to <NavPath path="Settings > Camera configuration > Birdseye" /> to override the mode or disable Birdseye for a specific camera.
| Field | Description |
|-------|-------------|
| **Enabled** | Whether this camera appears in Birdseye view |
| **Mode** | When to show the camera: `continuous`, `motion`, or `objects` |
</TabItem>
<TabItem value="yaml">
```yaml {8-10,12-14}
# Include all cameras by default in Birdseye view
@ -41,9 +60,24 @@ cameras:
enabled: False
```
</TabItem>
</ConfigTabs>
### Birdseye Inactivity
By default birdseye shows all cameras that have had the configured activity in the last 30 seconds, this can be configured:
By default birdseye shows all cameras that have had the configured activity in the last 30 seconds. This threshold can be configured.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Birdseye" />.
| Field | Description |
|-------|-------------|
| **Inactivity threshold** | Seconds of inactivity before a camera is hidden from Birdseye (default: 30) |
</TabItem>
<TabItem value="yaml">
```yaml
birdseye:
@ -52,12 +86,28 @@ birdseye:
inactivity_threshold: 15
```
</TabItem>
</ConfigTabs>
## Birdseye Layout
### Birdseye Dimensions
The resolution and aspect ratio of birdseye can be configured. Increasing the resolution improves quality but does not affect the layout. Changing the aspect ratio of birdseye does affect how cameras are laid out.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Birdseye" />.
| Field | Description |
|-------|-------------|
| **Width** | Birdseye output width in pixels (default: 1280) |
| **Height** | Birdseye output height in pixels (default: 720) |
</TabItem>
<TabItem value="yaml">
```yaml
birdseye:
enabled: True
@ -65,10 +115,20 @@ birdseye:
height: 720
```
</TabItem>
</ConfigTabs>
### Sorting cameras in the Birdseye view
It is possible to override the order of cameras that are being shown in the Birdseye view.
The order needs to be set at the camera level.
It is possible to override the order of cameras that are being shown in the Birdseye view. The order is set at the camera level.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > Birdseye" /> for each camera and set the **Order** field to control the display order.
</TabItem>
<TabItem value="yaml">
```yaml
# Include all cameras by default in Birdseye view
@ -87,13 +147,26 @@ cameras:
order: 2
```
</TabItem>
</ConfigTabs>
_Note_: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.
### Birdseye Cameras
It is possible to limit the number of cameras shown on birdseye at one time. When this is enabled, birdseye will show the cameras with the most recent activity. There is a cooldown to ensure that cameras do not switch too frequently.
For example, this can be configured to only show the most recently active camera.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Birdseye" />.
| Field | Description |
|-------|-------------|
| **Layout > Max cameras** | Maximum number of cameras shown at once (e.g., `1` for only the most active camera) |
</TabItem>
<TabItem value="yaml">
```yaml {3-4}
birdseye:
@ -102,13 +175,31 @@ birdseye:
max_cameras: 1
```
</TabItem>
</ConfigTabs>
### Birdseye Scaling
By default birdseye tries to fit 2 cameras in each row and then doubles in size until a suitable layout is found. The scaling can be configured with a value between 1.0 and 5.0 depending on your use case.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Birdseye" />.
| Field | Description |
|-------|-------------|
| **Layout > Scaling factor** | Camera scaling factor between 1.0 and 5.0 (default: 2.0) |
</TabItem>
<TabItem value="yaml">
```yaml {3-4}
birdseye:
enabled: True
layout:
scaling_factor: 3.0
```
</TabItem>
</ConfigTabs>
@ -3,6 +3,10 @@ id: cameras
title: Camera Configuration
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
## Setting Up Camera Inputs
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
@ -17,6 +21,24 @@ Each role can only be assigned to one input per camera. The options for roles ar
| `record` | Saves segments of the video feed based on configuration settings. [docs](record.md) |
| `audio` | Feed for audio based detection. [docs](audio_detectors.md) |
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
| Field | Description |
|-------|-------------|
| **Camera inputs** | List of input stream definitions (paths and roles) for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.
| Field | Description |
|-------|-------------|
| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
</TabItem>
<TabItem value="yaml">
```yaml
mqtt:
host: mqtt.server.com
@ -36,7 +58,18 @@ cameras:
height: 720 # <- optional, by default Frigate tries to automatically detect resolution
```
Additional cameras are simply added to the config under the `cameras` entry.
</TabItem>
</ConfigTabs>
Additional cameras are simply added under the camera configuration section.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > Management" /> and use the add camera button to configure each additional camera.
</TabItem>
<TabItem value="yaml">
```yaml
mqtt: ...
@ -46,6 +79,9 @@ cameras:
side: ...
```
</TabItem>
</ConfigTabs>
:::note
If you only define one stream in your `inputs` and do not assign a `detect` role to it, Frigate will automatically assign it the `detect` role. Frigate will always decode a stream to support motion detection, Birdseye, the API image endpoints, and other features, even if you have disabled object detection with `enabled: False` in your config's `detect` section.
@ -64,7 +100,21 @@ Not every PTZ supports ONVIF, which is the standard protocol Frigate uses to com
:::
Add the onvif section to your camera in your configuration file:
Configure the ONVIF connection for your camera to enable PTZ controls.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and select your camera.
- Set **Ffmpeg** to `...`
2. Navigate to <NavPath path="Settings > Camera configuration > ONVIF" /> and select your camera.
- Set **ONVIF host** to `10.0.10.10`
- Set **ONVIF port** to `8000`
- Set **ONVIF username** to `admin`
- Set **ONVIF password** to `password`
</TabItem>
<TabItem value="yaml">
```yaml {4-8}
cameras:
@ -77,6 +127,9 @@ cameras:
password: password
```
</TabItem>
</ConfigTabs>
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
@ -130,13 +183,20 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
## Setting up camera groups
:::tip

It is recommended to set up camera groups using the UI.

:::

Camera groups let you organize cameras together with a shared name and icon, making it easier to review and filter them. A default group for all cameras is always available.

<ConfigTabs>
<TabItem value="ui">

1. Navigate to <NavPath path="Settings > General > UI settings" />.
2. Under the camera groups section, create a new group:
   - Set the **group name** (e.g. `front`)
   - Select the **cameras** to include (e.g. `driveway_cam`, `garage_cam`)
   - Choose an **icon** (e.g. `LuCar`)
   - Set the **order** to control the display position
</TabItem>
<TabItem value="yaml">
```yaml
camera_groups:
@ -148,6 +208,9 @@ camera_groups:
order: 0
```
</TabItem>
</ConfigTabs>
## Two-Way Audio
See the guide [here](/configuration/live/#two-way-talk)
@ -3,13 +3,17 @@ id: object_classification
title: Object Classification
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU.
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX + AVX2 instructions is required for training and inference.
@ -27,7 +31,7 @@ For object classification:
### Classification Type
- **Sub label**:
- Applied to the objects `sub_label` field.
- Applied to the object's `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
@ -55,7 +59,7 @@ This two-step verification prevents false positives by requiring consistent pred
### Sub label
- **Known pet vs unknown**: For `dog` objects, set sub label to your pets name (e.g., `buddy`) or `none` for others.
- **Known pet vs unknown**: For `dog` objects, set sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
@ -68,7 +72,21 @@ This two-step verification prevents false positives by requiring consistent pred
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
Object classification is configured as a custom classification model. Each model has its own name and settings. Specify which object labels should be classified.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Object classification" />.
| Field | Description |
|-------|-------------|
| **Custom Classification Models > Dog > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) |
| **Custom Classification Models > Dog > Object Config > Objects** | Object labels to classify (e.g., `dog`, `person`, `car`) |
| **Custom Classification Models > Dog > Object Config > Classification Type** | Whether to assign results as a **sub label** or **attribute** |
</TabItem>
<TabItem value="yaml">
```yaml
classification:
@ -82,6 +100,9 @@ classification:
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
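For example, assuming the `classification.custom.<model name>` layout used above (the `dog` model name is illustrative):

```yaml
classification:
  custom:
    dog:
      # Keep the 100 most recent classification attempts (default: 200)
      save_attempts: 100
```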
</TabItem>
</ConfigTabs>
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
@ -111,11 +132,11 @@ For more detail, see [Frigate Tip: Best Practices for Training Face and Custom C
:::
- **Start small and iterate**: Begin with a small, representative set of images per class. Models often begin working well with surprisingly few examples and improve naturally over time.
- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90100% or those captured under new lighting, weather, or distance conditions.
- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90-100% or those captured under new lighting, weather, or distance conditions.
- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
- **The wizard is just the starting point**: You dont need to find and label every class upfront. Missing classes will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
- **The wizard is just the starting point**: You don't need to find and label every class upfront. Missing classes will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigates boxes; keep the subject centered.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
@ -125,6 +146,17 @@ To troubleshoot issues with object classification models, enable debug logging t
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Logging" />.
- Set **Logging level** to `debug`
- Set **Per-process log level > Frigate.Data Processing.Real Time.Custom Classification** to `debug` for verbose classification logging
</TabItem>
<TabItem value="yaml">
```yaml
logger:
default: info
@ -133,6 +165,9 @@ logger:
frigate.data_processing.real_time.custom_classification: debug
```
</TabItem>
</ConfigTabs>
The debug logs will show:
- Classification probabilities for each attempt
@ -3,13 +3,17 @@ id: state_classification
title: State Classification
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU.
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX + AVX2 instructions is required for training and inference.
@ -33,7 +37,22 @@ For state classification:
## Configuration
State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.
State classification is configured as a custom classification model. Each model has its own name and settings. Provide at least one camera crop under `state_config.cameras`.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Object classification" />.
| Field | Description |
|-------|-------------|
| **Custom Classification Models > Front Door > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) |
| **Custom Classification Models > Front Door > State Config > Motion** | Run classification when motion overlaps the crop area |
| **Custom Classification Models > Front Door > State Config > Interval** | Run classification every N seconds (optional) |
| **Custom Classification Models > Front Door > State Config > Cameras > Front > Crop** | The rectangular crop region on each camera to classify |
</TabItem>
<TabItem value="yaml">
```yaml
classification:
@ -50,6 +69,9 @@ classification:
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
</TabItem>
</ConfigTabs>
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@ -82,7 +104,7 @@ For more detail, see [Frigate Tip: Best Practices for Training Face and Custom C
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90100% or those captured under new conditions (e.g., first snow of the year, seasonal changes, objects temporarily in view, insects at night). These represent scenarios different from the default state and help prevent overfitting.
- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90-100% or those captured under new conditions (e.g., first snow of the year, seasonal changes, objects temporarily in view, insects at night). These represent scenarios different from the default state and help prevent overfitting.
- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
- **The wizard is just the starting point**: You don't need to find and label every state upfront. Missing states will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
@ -92,6 +114,17 @@ To troubleshoot issues with state classification models, enable debug logging to
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Logging" />.
- Set **Logging level** to `debug`
- Set **Per-process log level > Frigate.Data Processing.Real Time.Custom Classification** to `debug` for verbose classification logging
</TabItem>
<TabItem value="yaml">
```yaml
logger:
default: info
@ -100,6 +133,9 @@ logger:
frigate.data_processing.real_time.custom_classification: debug
```
</TabItem>
</ConfigTabs>
The debug logs will show:
- Classification probabilities for each attempt
@ -3,6 +3,10 @@ id: face_recognition
title: Face Recognition
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Face recognition identifies known individuals by matching detected faces with previously learned facial data. When a known `person` is recognized, their name will be added as a `sub_label`. This information is included in the UI, filters, as well as in notifications.
## Model Requirements
@ -40,50 +44,95 @@ The `large` model is optimized for accuracy, an integrated or discrete GPU / NPU
## Configuration
Face recognition is disabled by default, face recognition must be enabled in the UI or in your config file before it can be used. Face recognition is a global configuration setting.
Face recognition is disabled by default and must be enabled before it can be used. Face recognition is a global configuration setting.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Face recognition" />.
- Set **Enable face recognition** to on
</TabItem>
<TabItem value="yaml">
```yaml
face_recognition:
enabled: true
```
</TabItem>
</ConfigTabs>
Like the other real-time processors in Frigate, face recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune face recognition with these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled` and `min_area`.
Fine-tune face recognition with these optional parameters. The only optional parameters that can be set at the camera level are `enabled` and `min_area`.
### Detection
- `detection_threshold`: Face detection confidence score required before recognition runs:
- Default: `0.7`
- Note: This field only applies to the standalone face detection model; use `min_score` instead to filter results from models that have face detection built in.
- `min_area`: Defines the minimum size (in pixels) a face must be before recognition runs.
- Default: `500` pixels.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant faces.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Face recognition" />.
- Set **Enable face recognition** to on
- Set **Detection threshold** to `0.7`
- Set **Minimum face area** to `500`
</TabItem>
<TabItem value="yaml">
```yaml
face_recognition:
enabled: true
detection_threshold: 0.7
min_area: 500
```
</TabItem>
</ConfigTabs>
### Recognition
- `model_size`: Which model size to use, options are `small` or `large`
- `unknown_score`: Minimum score required to mark a person as a potential match; matches at or below this score will be marked as unknown.
- Default: `0.8`.
- `recognition_threshold`: Recognition confidence score required to add the face to the object as a sub label.
- Default: `0.9`.
- `min_faces`: Minimum number of face recognitions required before the sub label is applied to the person object.
- Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training.
- Default: `200`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
- Default: `None`.
- Note: This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Face recognition" />.
- Set **Enable face recognition** to on
- Set **Model size** to `small`
- Set **Unknown score threshold** to `0.8`
- Set **Recognition threshold** to `0.9`
- Set **Minimum faces** to `1`
- Set **Save attempts** to `200`
- Set **Blur confidence filter** to on
- Set **Device** to `None`
</TabItem>
<TabItem value="yaml">
```yaml
face_recognition:
enabled: true
model_size: small
unknown_score: 0.8
recognition_threshold: 0.9
min_faces: 1
save_attempts: 200
blur_confidence_filter: true
device: None
```
</TabItem>
</ConfigTabs>
## Usage
Follow these steps to begin:
1. **Enable face recognition** in your configuration file and restart Frigate.
1. **Enable face recognition** in your configuration and restart Frigate.
2. **Upload one face** using the **Add Face** button's wizard in the Face Library section of the Frigate UI. Read below for the best practices on expanding your training set.
3. When Frigate detects and attempts to recognize a face, it will appear in the **Train** tab of the Face Library, along with its associated recognition confidence.
4. From the **Train** tab, you can **assign the face** to a new or existing person to improve recognition accuracy for the future.
@ -3,6 +3,10 @@ id: ffmpeg_presets
title: FFmpeg presets
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Some presets of FFmpeg args are provided by default to make the configuration easier. All presets can be seen in [this file](https://github.com/blakeblackshear/frigate/blob/master/frigate/ffmpeg_presets.py).
### Hwaccel Presets
@ -23,6 +27,30 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
Select the appropriate hwaccel preset for your hardware.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to the appropriate preset for your hardware.
2. To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and set **Hardware acceleration arguments** for that camera.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-vaapi
cameras:
front_door:
ffmpeg:
hwaccel_args: preset-nvidia
```
</TabItem>
</ConfigTabs>
### Input Args Presets
Input args presets help make the config more readable and handle use cases for different types of streams to ensure maximum compatibility.
@ -72,7 +100,7 @@ Output args presets help make the config more readable and handle use cases for
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| preset-record-generic            | Record WITHOUT audio              | If your camera doesn’t have audio, or if you don’t want to record audio, use this option |
| preset-record-generic | Record WITHOUT audio | If your camera doesn't have audio, or if you don't want to record audio, use this option |
| preset-record-generic-audio-copy | Record WITH original audio | Use this to enable audio in recordings |
| preset-record-generic-audio-aac | Record WITH transcoded aac audio | This is the default when no option is specified. Use it to transcode audio to AAC. If the source is already in AAC format, use preset-record-generic-audio-copy instead to avoid unnecessary re-encoding |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
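For instance, selecting one of the record presets above might look like the following (a minimal sketch; choose the preset that matches your camera's audio setup):

```yaml
ffmpeg:
  output_args:
    # keep the camera's original audio track in recordings
    record: preset-record-generic-audio-copy
```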

View File

@ -3,6 +3,10 @@ id: genai_config
title: Configuring Generative AI
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
## Configuration
A Generative AI provider can be configured in the global config, which makes the Generative AI features available for use. There are currently four native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used; see the OpenAI-Compatible section below.
@ -69,6 +73,18 @@ You must use a vision capable model with Frigate. The llama.cpp server supports
All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `llamacpp`
- Set **Base URL** to your llama.cpp server address (e.g., `http://localhost:8080`)
- Set **Model** to the name of your model
- Under **Provider Options**, set `context_size` to tell Frigate your context size so it can send the appropriate amount of information
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: llamacpp
@ -78,6 +94,9 @@ genai:
context_size: 16000 # Tell Frigate your context size so it can send the appropriate amount of information.
```
</TabItem>
</ConfigTabs>
### Ollama
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
@ -96,6 +115,18 @@ Note that Frigate will not automatically download the model you specify in your
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `ollama`
- Set **Base URL** to your Ollama server address (e.g., `http://localhost:11434`)
- Set **Model** to the model tag (e.g., `qwen3-vl:4b`)
- Under **Provider Options**, set `keep_alive` (e.g., `-1`) and `options.num_ctx` to match your desired context size
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: ollama
@ -107,6 +138,9 @@ genai:
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
</TabItem>
</ConfigTabs>
### OpenAI-Compatible
Frigate supports any provider that implements the OpenAI API standard. This includes self-hosted solutions like [vLLM](https://docs.vllm.ai/), [LocalAI](https://localai.io/), and other OpenAI-compatible servers.
@ -130,6 +164,18 @@ This ensures Frigate uses the correct context window size when generating prompt
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `openai`
- Set **Base URL** to your server address (e.g., `http://your-server:port`)
- Set **API key** if required by your server
- Set **Model** to the model name
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: openai
@ -138,6 +184,9 @@ genai:
model: your-model-name
```
</TabItem>
</ConfigTabs>
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
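As a sketch, the variable could be set through Frigate's `environment_vars` section (the endpoint URL below is a placeholder; substitute your provider's actual API URL):

```yaml
# Hypothetical endpoint - replace with your provider's API URL
environment_vars:
  OPENAI_BASE_URL: "https://your-openai-compatible-server:8000/v1"
```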
## Cloud Providers
@ -150,6 +199,17 @@ Ollama also supports [cloud models](https://ollama.com/cloud), where your local
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `ollama`
- Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`)
- Set **Model** to the cloud model name
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: ollama
@ -157,6 +217,9 @@ genai:
model: cloud-model-name
```
</TabItem>
</ConfigTabs>
### Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API; however, the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
@ -176,6 +239,17 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `gemini`
- Set **API key** to your Gemini API key (or use an environment variable such as `{FRIGATE_GEMINI_API_KEY}`)
- Set **Model** to the desired model (e.g., `gemini-2.5-flash`)
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: gemini
@ -183,6 +257,9 @@ genai:
model: gemini-2.5-flash
```
</TabItem>
</ConfigTabs>
:::note
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:
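A minimal sketch of such an override (the endpoint URL is a placeholder for your provider's API URL):

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
  provider_options:
    # Hypothetical endpoint - replace with your Gemini-compatible API URL
    base_url: https://your-gemini-compatible-endpoint/
```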
@ -213,6 +290,17 @@ To start using OpenAI, you must first [create an API key](https://platform.opena
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `openai`
- Set **API key** to your OpenAI API key (or use an environment variable such as `{FRIGATE_OPENAI_API_KEY}`)
- Set **Model** to the desired model (e.g., `gpt-4o`)
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: openai
@ -220,6 +308,9 @@ genai:
model: gpt-4o
```
</TabItem>
</ConfigTabs>
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
@ -257,6 +348,18 @@ To start using Azure OpenAI, you must first [create a resource](https://learn.mi
#### Configuration
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
- Set **Provider** to `azure_openai`
- Set **Base URL** to your Azure resource URL including the `api-version` parameter (e.g., `https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview`)
- Set **Model** to your deployed model name (e.g., `gpt-5-mini`)
- Set **API key** to your Azure OpenAI API key (or use an environment variable such as `{FRIGATE_OPENAI_API_KEY}`)
</TabItem>
<TabItem value="yaml">
```yaml
genai:
provider: azure_openai
@ -264,3 +367,6 @@ genai:
model: gpt-5-mini
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
</TabItem>
</ConfigTabs>

View File

@ -3,6 +3,10 @@ id: genai_objects
title: Object Descriptions
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example, for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before its lifecycle ends, it will be overwritten by the generated response.
@ -15,9 +19,9 @@ Generative AI object descriptions can also be toggled dynamically for a camera v
## Usage and Best Practices
Frigate’s thumbnail search excels at identifying specific details about tracked objects — for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate’s default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
Frigate's thumbnail search excels at identifying specific details about tracked objects -- for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate’s default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what’s happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they’re moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation’s context.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
## Custom Prompts
@ -33,7 +37,18 @@ Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` t
:::
You are also able to define custom prompts in your configuration.
You can define custom prompts at the global level and per-object type. To configure custom prompts:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Global configuration > Objects" />.
- Expand the **GenAI** section
- Set **Prompt** to your custom prompt text
- Under **Object Prompts**, add entries keyed by object type (e.g., `person`, `car`) with custom prompts for each
</TabItem>
<TabItem value="yaml">
```yaml
genai:
@ -49,7 +64,25 @@ objects:
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
</TabItem>
</ConfigTabs>
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera. To configure camera-level overrides:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Objects" /> for the desired camera.
- Expand the **GenAI** section
- Set **Enabled** to on
- Set **Use Snapshot** to on if desired
- Set **Prompt** to a camera-specific prompt
- Under **Object Prompts**, add entries keyed by object type with camera-specific prompts
- Set **Objects** to the list of object types that should receive descriptions (e.g., `person`, `cat`)
- Set **Required Zones** to limit descriptions to objects in specific zones (e.g., `steps`)
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
@ -69,6 +102,9 @@ cameras:
- steps
```
</TabItem>
</ConfigTabs>
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.

View File

@ -3,6 +3,10 @@ id: genai_review
title: Review Summaries
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Generative AI can be used to automatically generate structured summaries of review items. These summaries will show up in Frigate's native notifications as well as in the UI. Generative AI can also be used to take a collection of summaries over a period of time and provide a report, which may be useful for getting a quick report of everything that happened while you were away.
A summary is requested automatically from your AI provider for alert review items when the activity has ended; summaries can optionally be enabled for detections as well.
@ -28,6 +32,30 @@ This will show in multiple places in the UI to give additional context about eac
Each installation, and even each camera, can have different parameters for what is considered suspicious activity. Frigate allows the `activity_context_prompt` to be defined globally and at the camera level, allowing you to define more specifically what should be considered normal activity. It is important that this is not overly specific, as it can sway the output of the response.
To configure the activity context prompt:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
- Set **GenAI config > Activity context prompt** to your custom activity context text
</TabItem>
<TabItem value="yaml">
```yaml
review:
genai:
activity_context_prompt: |
### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
...
```
</TabItem>
</ConfigTabs>
<details>
<summary>Default Activity Context Prompt</summary>
@ -74,7 +102,18 @@ review:
### Image Source
By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:
By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, configure Frigate to extract frames directly from recordings at a higher resolution.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
- Set **GenAI config > Enable GenAI descriptions** to on
- Set **GenAI config > Review image source** to `recordings` (default is `preview`)
</TabItem>
<TabItem value="yaml">
```yaml
review:
@ -84,6 +123,9 @@ review:
image_source: recordings # Options: "preview" (default) or "recordings"
```
</TabItem>
</ConfigTabs>
When using `recordings`, frames are extracted at 480px height while maintaining the camera's original aspect ratio, providing better detail for the LLM while being mindful of context window size. This is particularly useful for scenarios where fine details matter, such as identifying license plates, reading text, or analyzing distant objects.
The number of frames sent to the LLM is dynamically calculated based on:
@ -103,7 +145,17 @@ If recordings are not available for a given time period, the system will automat
### Additional Concerns
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. Configure these concerns so that review summaries will make note of them if the activity requires additional review.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
- Set **GenAI config > Additional concerns** to a list of your concerns (e.g., `animals in the garden`)
</TabItem>
<TabItem value="yaml">
```yaml {4,5}
review:
@ -113,9 +165,22 @@ review:
- animals in the garden
```
</TabItem>
</ConfigTabs>
### Preferred Language
By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:
By default, review summaries are generated in English. Configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
- Set **GenAI config > Preferred language** to the desired language (e.g., `Spanish`)
</TabItem>
<TabItem value="yaml">
```yaml {4}
review:
@ -124,6 +189,9 @@ review:
preferred_language: Spanish
```
</TabItem>
</ConfigTabs>
## Review Reports
Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).

View File

@ -4,6 +4,9 @@ title: Video Decoding
---
import CommunityBadge from '@site/src/components/CommunityBadge';
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Video Decoding
@ -78,27 +81,60 @@ See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/00
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-vaapi
```
</TabItem>
</ConfigTabs>
### Via Quicksync
#### H.264 streams
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Intel QuickSync (H.264)`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-intel-qsv-h264
```
</TabItem>
</ConfigTabs>
#### H.265 streams
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Intel QuickSync (H.265)`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-intel-qsv-h265
```
</TabItem>
</ConfigTabs>
### Configuring Intel GPU Stats in Docker
Additional configuration is needed for the Docker container to be able to access the `intel_gpu_top` command for GPU stats. There are two options:
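One of those options can be sketched in Compose as granting the container the performance-monitoring capability that `intel_gpu_top` relies on (an assumption for illustration; the alternative is running the container in privileged mode):

```yaml
# docker-compose sketch (assumption): grant the capability
# intel_gpu_top needs instead of running fully privileged
services:
  frigate:
    cap_add:
      - CAP_PERFMON
```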
@ -196,11 +232,22 @@ You need to change the driver to `radeonsi` by adding the following environment
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-vaapi
```
</TabItem>
</ConfigTabs>
## NVIDIA GPUs
While older GPUs may work, it is recommended to use modern, supported GPUs. NVIDIA provides a [matrix of supported GPUs and features](https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new). If your card is on the list and supports CUVID/NVDEC, it will most likely work with Frigate for decoding. However, you must also use [a driver version that will work with FFmpeg](https://github.com/FFmpeg/nv-codec-headers/blob/master/README). Older driver versions may be missing symbols and fail to work, and older cards are not supported by newer driver versions. The only way around this is to [provide your own FFmpeg](/configuration/advanced#custom-ffmpeg-build) that will work with your driver version, but this is unsupported and may not work well if at all.
@ -244,11 +291,22 @@ docker run -d \
Using `preset-nvidia` ffmpeg will automatically select the necessary profile for the incoming video, and will log an error if the profile is not supported by your GPU.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `NVIDIA GPU`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-nvidia
```
</TabItem>
</ConfigTabs>
If everything is working correctly, you should see a significant improvement in performance.
Verify that hardware decoding is working by running `nvidia-smi`, which should show `ffmpeg`
processes:
@ -296,6 +354,14 @@ These instructions were originally based on the [Jellyfin documentation](https:/
Ensure you increase the allocated RAM for your GPU to at least 128 MB (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA App, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Raspberry Pi (H.264)` (for H.264 streams) or `Raspberry Pi (H.265)` (for H.265/HEVC streams). For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
# if you want to decode a h264 stream
ffmpeg:
@ -306,6 +372,9 @@ ffmpeg:
hwaccel_args: preset-rpi-64-h265
```
</TabItem>
</ConfigTabs>
:::note
If running Frigate through Docker, you either need to run in privileged mode or
@ -405,11 +474,22 @@ A list of supported codecs (you can use `ffmpeg -decoders | grep nvmpi` in the c
For example, for H264 video, you'll select `preset-jetson-h264`.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `NVIDIA Jetson (H.264)` (or `NVIDIA Jetson (H.265)` for HEVC streams). For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-jetson-h264
```
</TabItem>
</ConfigTabs>
If everything is working correctly, you should see a significant reduction in ffmpeg CPU load and power consumption.
Verify that hardware decoding is working by running `jtop` (`sudo pip3 install -U jetson-stats`), which should show
that NVDEC/NVDEC1 are in use.
@ -424,13 +504,24 @@ Make sure to follow the [Rockchip specific installation instructions](/frigate/i
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
Set the FFmpeg hwaccel preset to enable hardware video processing.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Rockchip RKMPP`. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml
ffmpeg:
hwaccel_args: preset-rkmpp
```
</TabItem>
</ConfigTabs>
:::note
Make sure that your SoC supports hardware acceleration for your input stream. For example, if your camera streams with h265 encoding at a 4k resolution, your SoC must be able to decode and encode h265 at 4k resolution or higher. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
@ -480,7 +571,15 @@ Make sure to follow the [Synaptics specific installation instructions](/frigate/
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
Set the FFmpeg hwaccel args to enable hardware video processing.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and configure the hardware acceleration args and input args manually for Synaptics hardware. For per-camera overrides, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
</TabItem>
<TabItem value="yaml">
```yaml {2}
ffmpeg:
@ -490,6 +589,9 @@ output_args:
record: preset-record-generic-audio-aac
```
</TabItem>
</ConfigTabs>
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your input stream is h264 encoded. For example, if your camera streams with h264 encoding, your SoC must be able to decode and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.

View File

@ -3,13 +3,24 @@ id: index
title: Frigate Configuration
---
For Home Assistant App installations, the config file should be at `/addon_configs/<addon_directory>/config.yml`, where `<addon_directory>` is specific to the variant of the Frigate App you are running. See the list of directories [here](#accessing-app-config-dir).
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
For all other installation types, the config file should be mapped to `/config/config.yml` inside the container.
Frigate can be configured through the **Settings UI** or by editing the YAML configuration file directly. The Settings UI is the recommended approach — it provides validation and a guided experience for all configuration options.
It is recommended to start with a minimal configuration and add to it as described in [the getting started guide](../guides/getting_started.md).
## Configuration File Location
For users who prefer to edit the YAML configuration file directly:
- **Home Assistant App:** `/addon_configs/<addon_directory>/config.yml` — see [directory list](#accessing-app-config-dir)
- **All other installations:** Map to `/config/config.yml` inside the container
It can be named `config.yml` or `config.yaml`, but if both files exist `config.yml` will be preferred and `config.yaml` will be ignored.
It is recommended to start with a minimal configuration and add to it as described in [this guide](../guides/getting_started.md) and use the built in configuration editor in Frigate's UI which supports validation.
A minimal starting configuration:
```yaml
mqtt:
@ -38,7 +49,7 @@ When running Frigate through the HA App, the Frigate `/config` directory is mapp
**Whenever you see `/config` in the documentation, it refers to this directory.**
If for example you are running the standard App variant and use the [VS Code App](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file.
If for example you are running the standard App variant and use the [VS Code App](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in config editor in the Frigate UI.
## VS Code Configuration Schema
@ -81,7 +92,7 @@ genai:
## Common configuration examples
Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values.
Here are some common starter configuration examples. These can be configured through the Settings UI or via YAML. Refer to the [reference config](./reference.md) for detailed information about all config values.
### Raspberry Pi Home Assistant App with USB Coral
@ -94,6 +105,20 @@ Here are some common starter configuration examples. Refer to the [reference con
- Save snapshots for 30 days
- Motion mask for the camera timestamp
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > System > MQTT" /> and configure the MQTT connection to your Home Assistant Mosquitto broker
2. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Raspberry Pi (H.264)`
3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`
4. Navigate to <NavPath path="Settings > Global configuration > Recording" /> and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion`
5. Navigate to <NavPath path="Settings > Global configuration > Snapshots" /> and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30`
6. Navigate to <NavPath path="Settings > Camera configuration > Management" /> and add your camera with the appropriate RTSP stream URL
7. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> to add a motion mask for the camera timestamp
</TabItem>
<TabItem value="yaml">
```yaml
mqtt:
host: core-mosquitto
@ -145,10 +170,13 @@ cameras:
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
```
</TabItem>
</ConfigTabs>
### Standalone Intel Mini PC with USB Coral
- Single camera with 720p, 5fps stream for detect
- MQTT disabled (not integrated with home assistant)
- MQTT disabled (not integrated with Home Assistant)
- VAAPI hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
@ -156,6 +184,20 @@ cameras:
- Save snapshots for 30 days
- Motion mask for the camera timestamp
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > System > MQTT" /> and set **Enabled** to off
2. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)`
3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`
4. Navigate to <NavPath path="Settings > Global configuration > Recording" /> and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion`
5. Navigate to <NavPath path="Settings > Global configuration > Snapshots" /> and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30`
6. Navigate to <NavPath path="Settings > Camera configuration > Management" /> and add your camera with the appropriate RTSP stream URL
7. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> to add a motion mask for the camera timestamp
</TabItem>
<TabItem value="yaml">
```yaml
mqtt:
enabled: False
@ -205,17 +247,35 @@ cameras:
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
```
</TabItem>
</ConfigTabs>
### Home Assistant integrated Intel Mini PC with OpenVINO
- Single camera with 720p, 5fps stream for detect
- MQTT connected to same MQTT server as Home Assistant
- VAAPI hardware acceleration for decoding video
- OpenVINO detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > System > MQTT" /> and configure the connection to your MQTT broker
2. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)`
3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `openvino` and **Device** `AUTO`
4. Navigate to <NavPath path="Settings > System > Detection model" /> and configure the OpenVINO model path and settings
5. Navigate to <NavPath path="Settings > Global configuration > Recording" /> and set:
   - **Enable recording** to on
   - **Motion retention > Retention days** to `7`
   - **Alert retention > Event retention > Retention days** to `30`
   - **Alert retention > Event retention > Retention mode** to `motion`
   - **Detection retention > Event retention > Retention days** to `30`
   - **Detection retention > Event retention > Retention mode** to `motion`
6. Navigate to <NavPath path="Settings > Global configuration > Snapshots" /> and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30`
7. Navigate to <NavPath path="Settings > Camera configuration > Management" /> and add your camera with the appropriate RTSP stream URL
8. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> to add a motion mask for the camera timestamp
</TabItem>
<TabItem value="yaml">
```yaml
mqtt:
host: 192.168.X.X # <---- same mqtt broker that home assistant uses
@ -274,3 +334,6 @@ cameras:
enabled: true
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
```
</TabItem>
</ConfigTabs>


@ -3,6 +3,10 @@ id: license_plate_recognition
title: License Plate Recognition (LPR)
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a [known](#matching) name as a `sub_label` to tracked objects of type `car` or `motorcycle`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. When a vehicle becomes stationary, LPR continues to run for a short time after to attempt recognition.
@ -34,14 +38,35 @@ License plate recognition works by running AI models locally on your system. The
## Configuration
License plate recognition is disabled by default and must be enabled before it can be used.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
```
</TabItem>
</ConfigTabs>
Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. Disable it for specific cameras at the camera level if you don't want to run LPR on cars on those cameras.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enabled** toggle.
</TabItem>
<TabItem value="yaml">
```yaml {4,5}
cameras:
@ -51,65 +76,137 @@ cameras:
enabled: False
```
</TabItem>
</ConfigTabs>
For non-dedicated LPR cameras, ensure that your camera is configured to detect objects of type `car` or `motorcycle`, and that a car or motorcycle is actually being detected by Frigate. Otherwise, LPR will not run.
Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune the LPR feature using these optional parameters. The only optional parameters that can be set at the camera level are `enabled`, `min_area`, and `enhancement`.
### Detection
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
- Default: `0.7`
- Note: This field only applies to the standalone license plate detection model; the `threshold` and `min_score` object filters should be used instead for models like Frigate+ that have license plate detection built in.
- **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `None`
- This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
- The `small` model is fast and identifies groups of Latin and Chinese characters.
- The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. It is significantly slower than the `small` model.
- If your country or region does not use multi-line plates, you should use the `small` model as performance is much better for single-line plates.
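Because `min_area` is an area measurement (width × height), the gate it applies is easy to picture. The snippet below is an illustrative sketch only; the function name and bounding-box format are assumptions, not Frigate's actual code:

```python
def should_run_recognition(plate_box, min_area=1000):
    """Return True when a plate's bounding box is large enough for recognition.

    plate_box is (x1, y1, x2, y2) in pixels. min_area mirrors the `min_area`
    config option: the default of 1000 px is roughly a 32x32 pixel square.
    """
    x1, y1, x2, y2 = plate_box
    area = (x2 - x1) * (y2 - y1)
    return area >= min_area

# A 40x40 crop (1600 px) passes the default; a 20x20 crop (400 px) does not.
print(should_run_recognition((100, 100, 140, 140)))  # True
print(should_run_recognition((100, 100, 120, 120)))  # False
```

Raising `min_area` in your config simply raises the bar in this check, which is why distant plates stop triggering recognition.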
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Detection threshold** to `0.7`
- Set **Minimum plate area** to `1000`
- Set **Device** to `CPU`
- Set **Model size** to `small`
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
detection_threshold: 0.7
min_area: 1000
device: CPU
model_size: small
```
</TabItem>
</ConfigTabs>
### Recognition
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a `recognized_license_plate` and/or `sub_label`.
- Default: `0.9`.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a `recognized_license_plate` and/or `sub_label` to an object.
- Use this to filter out short, incomplete, or incorrect detections.
- **`format`**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded.
- `"^[A-Z]{1,3} [A-Z]{1,2} [0-9]{1,4}$"` matches plates like "B AB 1234" or "M X 7"
- `"^[A-Z]{2}[0-9]{2} [A-Z]{3}$"` matches plates like "AB12 XYZ" or "XY68 ABC"
- Websites like https://regex101.com/ can help test regular expressions for your plates.
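Taken together, these three filters behave roughly like the sketch below. The function name, default `min_plate_length`, and the ordering of the checks are illustrative assumptions, not Frigate's internals:

```python
import re

def accept_plate(plate, score, recognition_threshold=0.9,
                 min_plate_length=4, plate_format=None):
    """Apply the recognition filters in sequence: confidence score,
    minimum character count, then the optional format regex."""
    if score < recognition_threshold:
        return False
    if len(plate) < min_plate_length:
        return False
    if plate_format and not re.fullmatch(plate_format, plate):
        return False
    return True

uk_format = r"^[A-Z]{2}[0-9]{2} [A-Z]{3}$"
print(accept_plate("AB12 XYZ", 0.95, plate_format=uk_format))  # True
print(accept_plate("AB12 XYZ", 0.85, plate_format=uk_format))  # False (below threshold)
print(accept_plate("AB12-XYZ", 0.95, plate_format=uk_format))  # False (format mismatch)
```

A plate must pass all three gates before it is attached to the object as a `recognized_license_plate` or `sub_label`.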
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Recognition threshold** to `0.9`
- Set **Min plate length** to `4`
- Set **Plate format regex** to `^[A-Z]{2}[0-9]{2} [A-Z]{3}$`
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
recognition_threshold: 0.9
min_plate_length: 4
format: "^[A-Z]{2}[0-9]{2} [A-Z]{3}$"
```
</TabItem>
</ConfigTabs>
### Matching
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` and `motorcycle` objects when a recognized plate matches a known value.
- These labels appear in the UI, filters, and notifications.
- Unknown plates are still saved but are added to the `recognized_license_plate` field rather than the `sub_label`.
- **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will _not_ operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.
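`match_distance` is an edit-distance tolerance: a detected plate matches a known plate when they differ by at most that many insertions, deletions, or substitutions. A minimal sketch of the idea using Levenshtein distance (illustrative only, literal plates only, and not Frigate's actual matcher):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def match_known_plate(detected, known_plates, match_distance=1):
    """Return the first known-plate name within match_distance edits, else None."""
    for name, plates in known_plates.items():
        if any(levenshtein(detected, p) <= match_distance for p in plates):
            return name
    return None

known = {"Wife's Car": ["ABCDE"]}
print(match_known_plate("ABCBE", known))  # Wife's Car (one substitution)
print(match_known_plate("ABCD", known))   # Wife's Car (one deletion)
print(match_known_plate("XYZ12", known))  # None
```

This is why the examples above work: `ABCBE` and `ABCD` are both one edit away from `ABCDE`, while anything further off falls back to an unknown plate.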
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Match distance** to `1`
- Set **Known plates > Wife's Car** to `ABC-1234`
- Set **Known plates > Johnny** to `J*N-*234`
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
match_distance: 1
known_plates:
Wife's Car:
- "ABC-1234"
Johnny:
- "J*N-*234"
```
</TabItem>
</ConfigTabs>
### Image Enhancement
- **`enhancement`**: A value between 0 and 10 that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. This preprocessing step can sometimes improve accuracy but may also have the opposite effect.
- Default: `0` (no enhancement)
- Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters, actually making them much harder for Frigate to recognize.
- This setting is best adjusted at the camera level if running LPR on multiple cameras.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Enhancement level** to `5`
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
enhancement: 5
```
</TabItem>
</ConfigTabs>
If Frigate is already recognizing plates correctly, leave enhancement at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at 5 and adjusting as needed. Use the `debug_save_plates` configuration option (see below) to see how different enhancement levels affect your plates.
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially and are applied _before_ the `format` regex, if specified. Each rule must have a `pattern` (which can be a string or a regex) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
<ConfigTabs>
<TabItem value="ui">
These rules must be defined at the global level of your `lpr` config.
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
| Field | Description |
|-------|-------------|
| **Replacement rules** | Regex replacement rules used to normalize detected plate strings before matching. |
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
@ -126,6 +223,11 @@ lpr:
replacement: '\1-\2'
```
</TabItem>
</ConfigTabs>
These rules must be defined at the global level of your `lpr` config.
- Rules fire in order: In the example above: clean noise first, then separators, then swaps, then splits.
- Backrefs (`\1`, `\2`) allow dynamic replacements (e.g., capture groups).
- Any changes made by the rules are printed to the LPR debug log.
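Sequential rule application amounts to chained `re.sub` calls, something like the sketch below (an illustration of the behavior described above, not Frigate's code):

```python
import re

def normalize_plate(plate, replace_rules):
    """Apply each {pattern, replacement} rule in order via re.sub.
    Replacement strings may use backrefs like \\1 to keep captured groups."""
    for rule in replace_rules:
        plate = re.sub(rule["pattern"], rule["replacement"], plate)
    return plate

rules = [
    {"pattern": r"[^A-Z0-9]", "replacement": ""},   # strip noise and separators
    {"pattern": "O", "replacement": "0"},           # common OCR confusion O -> 0
    {"pattern": r"^([A-Z]{3})([0-9]{4})$", "replacement": r"\1-\2"},  # re-insert separator
]
print(normalize_plate("AB*C O123", rules))  # ABC-0123
```

Note how rule order matters: the noise strip runs first so the later pattern sees a clean string, and only the normalized result is compared against `format` and `known_plates`.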
@ -133,13 +235,51 @@ lpr:
### Debugging
- **`debug_save_plates`**: Set to `True` to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Save debug plates** to on
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: True
debug_save_plates: True
```
</TabItem>
</ConfigTabs>
The saved images are not full plates but rather the specific areas of text detected on the plates. It is normal for the text detection model to sometimes find multiple areas of text on the plate. Use them to analyze what text Frigate recognized and how image enhancement affects detection.
**Note:** Frigate does **not** automatically delete these debug images. Once LPR is functioning correctly, you should disable this option and manually remove the saved files to free up storage.
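One way to do that housekeeping is a small script along these lines. It is a sketch under the path layout described above; the age cutoff and function name are arbitrary choices, not anything Frigate ships:

```python
import time
from pathlib import Path

def prune_lpr_debug(root="/media/frigate/clips/lpr", max_age_days=7):
    """Delete debug crops older than max_age_days and remove emptied
    <camera>/<event_id> directories. Returns the number of files removed."""
    cutoff = time.time() - max_age_days * 86400
    root = Path(root)
    removed = 0
    if not root.exists():
        return removed
    for f in root.rglob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    # Remove now-empty event/camera directories, deepest first
    for d in sorted((p for p in root.rglob("*") if p.is_dir()),
                    key=lambda p: len(p.parts), reverse=True):
        if not any(d.iterdir()):
            d.rmdir()
    return removed
```

Run it (or something like it) from cron once `debug_save_plates` has served its purpose, or simply disable the option and delete the directory outright.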
## Configuration Examples
These configuration parameters are available at the global level. The only optional parameters that should be set at the camera level are `enabled`, `min_area`, and `enhancement`.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
| Field | Description |
|-------|-------------|
| **Enable LPR** | Enable or disable license plate recognition for all cameras; can be overridden per-camera. |
| **Minimum plate area** | Minimum plate area (pixels) required to attempt recognition. |
| **Min plate length** | Minimum number of characters a recognized plate must contain to be considered valid. |
| **Known plates > Wife's Car** | Plate values or regexes that label matching vehicles as "Wife's Car". |
| **Known plates > Johnny** | Plate values or regexes that label matching vehicles as "Johnny". |
| **Known plates > Sally** | Plate values or regexes that label matching vehicles as "Sally". |
| **Known plates > Work Trucks** | Plate values or regexes that label matching vehicles as "Work Trucks". |
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
@ -158,28 +298,21 @@ lpr:
- "EMP-[0-9]{3}[A-Z]" # Matches plates like EMP-123A, EMP-456Z
```
```yaml
lpr:
enabled: True
min_area: 4000 # Run recognition on larger plates only (4000 pixels represents a 63x63 pixel square in your image)
recognition_threshold: 0.85
format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
match_distance: 1 # Allow one character variation in plate matching
replace_rules:
- pattern: "O"
replacement: "0" # Replace the letter O with the number 0 in every plate
known_plates:
Delivery Van:
- "RJ K5678"
- "UP A1234"
Supervisor:
- "MN D3163"
```
</TabItem>
</ConfigTabs>
:::note
If a camera is configured to detect `car` or `motorcycle` but you don't want Frigate to run LPR for that camera, disable LPR at the camera level:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enabled** toggle.
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
side_yard:
@ -188,13 +321,16 @@ cameras:
...
```
</TabItem>
</ConfigTabs>
:::
## Dedicated LPR Cameras
Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.
To mark a camera as a dedicated LPR camera, set `type: "lpr"` in the camera configuration.
:::note
@ -210,6 +346,50 @@ Users running a Frigate+ model (or any model that natively detects `license_plat
An example configuration for a dedicated LPR camera using a `license_plate`-detecting model:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
| Field | Description |
|-------|-------------|
| **Ffmpeg** | Camera stream inputs and FFmpeg arguments for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.
| Field | Description |
|-------|-------------|
| **Enable object detection** | Enable or disable object detection for this camera. |
| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). |
| **Minimum initialization frames** | Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2. |
| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
Navigate to <NavPath path="Settings > Camera configuration > Objects" />.
| Field | Description |
|-------|-------------|
| **Objects to track** | List of object labels to track for this camera. |
| **Object filters > License Plate > Threshold** | Minimum confidence score required to keep detected `license_plate` objects. |
Navigate to <NavPath path="Settings > Camera configuration > Motion detection" />.
| Field | Description |
|-------|-------------|
| **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). |
| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. |
| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. |
Navigate to <NavPath path="Settings > Camera configuration > Recording" />.
| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Snapshots" />.
| Field | Description |
|-------|-------------|
| **Enable snapshots** | Enable or disable saving snapshots for this camera. |
</TabItem>
<TabItem value="yaml">
```yaml
# LPR global configuration
lpr:
@ -248,6 +428,9 @@ cameras:
- license_plate
```
</TabItem>
</ConfigTabs>
With this setup:
- License plates are treated as normal objects in Frigate.
@ -259,10 +442,59 @@ With this setup:
### Using the Secondary LPR Pipeline (Without Frigate+)
If you are not running a Frigate+ model, you can use Frigate's built-in secondary dedicated LPR pipeline. In this mode, Frigate bypasses the standard object detection pipeline and runs a local license plate detector model on the full frame whenever motion activity occurs.
An example configuration for a dedicated LPR camera using the secondary pipeline:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" />.
| Field | Description |
|-------|-------------|
| **Enable LPR** | Enable or disable LPR on this camera. |
| **Enhancement level** | Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results. Levels above 5 may only help with nighttime plates and should be used with caution. |
Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.
| Field | Description |
|-------|-------------|
| **Ffmpeg** | Camera stream inputs and FFmpeg arguments for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.
| Field | Description |
|-------|-------------|
| **Enable object detection** | Enable or disable object detection for this camera. |
| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). |
| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
Navigate to <NavPath path="Settings > Camera configuration > Objects" />.
| Field | Description |
|-------|-------------|
| **Objects to track** | List of object labels to track for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Motion detection" />.
| Field | Description |
|-------|-------------|
| **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). |
| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. |
| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. |
Navigate to <NavPath path="Settings > Camera configuration > Recording" />.
| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for this camera. |
Navigate to <NavPath path="Settings > Camera configuration > Review" />.
| Field | Description |
|-------|-------------|
| **Detections config > Enable detections** | Enable or disable detection events for this camera. |
| **Detections config > Retain > Default** | Default number of days to retain recordings of detection events. |
</TabItem>
<TabItem value="yaml">
```yaml
# LPR global configuration
lpr:
@ -299,6 +531,9 @@ cameras:
default: 7
```
</TabItem>
</ConfigTabs>
With this setup:
- The standard object detection pipeline is bypassed. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. You must **not** specify `license_plate` as an object to track.
@ -377,12 +612,27 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
1. Start with a simplified LPR config.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
lpr:
enabled: true
device: CPU
debug_save_plates: true
```
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Device** to `CPU`
- Set **Save debug plates** to on
</TabItem>
<TabItem value="yaml">
```yaml
lpr:
enabled: true
device: CPU
debug_save_plates: true
```
</TabItem>
</ConfigTabs>
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.


@ -3,6 +3,10 @@ id: live
title: Live View
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Frigate intelligently displays your camera streams on the Live view dashboard. By default, Frigate employs "smart streaming" where camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion or active objects are detected, cameras seamlessly switch to a live stream.
### Live View technologies
@ -63,19 +67,26 @@ go2rtc:
### Setting Streams For Live UI
You can configure Frigate to allow manual selection of the stream you want to view in the Live UI. For example, you may want to view your camera's substream on mobile devices, but the full resolution stream on desktop devices. Setting the streams list will populate a dropdown in the UI's Live view that allows you to choose between the streams. This stream setting is _per device_ and is saved in your browser's local storage.
Additionally, when creating and editing camera groups in the UI, you can choose the stream you want to use for your camera group's Live dashboard.
:::note
Frigate's default dashboard ("All Cameras") will always use the first entry you've defined in streams when playing live streams from your cameras.
:::
Configure a "friendly name" for your stream followed by the go2rtc stream name. Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the streams configuration, only go2rtc stream names.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Live playback" />, then select your camera.
- Under **Live stream names**, add entries mapping a friendly name to each go2rtc stream name (e.g., `Main Stream` mapped to `test_cam`, `Sub Stream` mapped to `test_cam_sub`).
</TabItem>
<TabItem value="yaml">
```yaml {3,6,8,25-29}
go2rtc:
@ -109,6 +120,9 @@ cameras:
Special Stream: test_cam_another_sub
```
</TabItem>
</ConfigTabs>
### WebRTC extra configuration:
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:
@ -185,7 +199,7 @@ To prevent go2rtc from blocking other applications from accessing your camera's
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
- Stream selection using the streams configuration option (see _Setting Streams For Live UI_ above)
- Streaming type:
- _No streaming_: Camera images will only update once per minute and no live streaming will occur.
- _Smart Streaming_ (default, recommended setting): Smart streaming will update your camera image once per minute when no detectable activity is occurring to conserve bandwidth and resources, since a static picture is the same as a streaming image with no motion or objects. When motion or objects are detected, the image seamlessly switches to a live stream.
@ -203,6 +217,40 @@ Use a camera group if you want to change any of these settings from the defaults
:::
### jsmpeg Stream Quality
The jsmpeg live view resolution and encoding quality can be adjusted globally or per camera. These settings only affect the jsmpeg player and do not apply when go2rtc is used for live view.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Live playback" /> for global defaults, or <NavPath path="Settings > Camera configuration > Live playback" /> and select a camera for per-camera overrides.
| Field | Description |
|-------|-------------|
| **Live height** | Height in pixels for the jsmpeg live stream; must be less than or equal to the detect stream height |
| **Live quality** | Encoding quality for the jsmpeg stream (1 = highest, 31 = lowest) |
</TabItem>
<TabItem value="yaml">
```yaml
# Global defaults
live:
height: 720
quality: 8
# Per-camera override
cameras:
front_door:
live:
height: 480
quality: 4
```
</TabItem>
</ConfigTabs>
### Disabling cameras
Cameras can be temporarily disabled through the Frigate UI and through [MQTT](/integrations/mqtt#frigatecamera_nameenabledset) to conserve system resources. When disabled, Frigate's ffmpeg processes are terminated — recording stops, object detection is paused, and the Live dashboard displays a blank image with a disabled message. Review items, tracked objects, and historical footage for disabled cameras can still be accessed via the UI.


@ -3,6 +3,10 @@ id: masks
title: Masks
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
## Motion masks
Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the Debug feed (Settings --> Debug) with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. _Over-masking will make it more difficult for objects to be tracked._
@ -17,17 +21,21 @@ Object filter masks can be used to filter out stubborn false positives in fixed
![object mask](/img/bottom-center-mask.jpg)
## Creating masks
To create a poly mask:
<ConfigTabs>
<TabItem value="ui">
1. Visit the Web UI
2. Click/tap the gear icon and open "Settings"
3. Select "Mask / zone editor"
4. At the top right, select the camera you wish to create a mask or zone for
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
Navigate to <NavPath path="Settings > Global configuration > Motion detection" />.
| Field | Description |
|-------|-------------|
| **Mask coordinates > Mask1 > Friendly Name** | Optional display name for the mask |
| **Mask coordinates > Mask1 > Enabled** | Toggle the mask on or off without removing it from the configuration |
| **Mask coordinates > Mask1 > Coordinates** | Relative coordinates of the polygon points defining the masked area |
</TabItem>
<TabItem value="yaml">
Your config file will be updated with the relative coordinates of the mask/zone:
@ -59,7 +67,7 @@ motion:
coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456"
```
Object filter masks can also be created through the UI or manually in the config. They are configured under the object filters section for each object type:
Object filter masks are configured under the object filters section for each object type:
```yaml
objects:
@ -78,6 +86,9 @@ objects:
coordinates: "0.000,0.700,1.000,0.700,1.000,1.000,0.000,1.000"
```
</TabItem>
</ConfigTabs>
## Enabling/Disabling Masks
Both motion masks and object filter masks can be toggled on or off without removing them from the configuration. Disabled masks are completely ignored at runtime - they will not affect motion detection or object filtering. This is useful for temporarily disabling a mask during certain seasons or times of day without modifying the configuration.
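For example, a seasonally disabled motion mask might look like this in YAML (a sketch following the mask structure used elsewhere on this page; the name and coordinates are illustrative):

```yaml
motion:
  mask:
    winter_snow_mask:
      friendly_name: "Snow drift area"
      enabled: false # ignored at runtime, but kept in the config
      coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781"
```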


@ -3,10 +3,31 @@ id: metrics
title: Metrics
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Metrics
Frigate exposes Prometheus metrics at the `/api/metrics` endpoint that can be used to monitor the performance and health of your Frigate instance.
## Enabling Telemetry
Prometheus metrics are exposed via the telemetry configuration. Enable or configure telemetry to control metric availability.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Telemetry" /> to configure metrics and telemetry settings.
</TabItem>
<TabItem value="yaml">
Metrics are available at `/api/metrics` by default. No additional Frigate configuration is required to expose them.
</TabItem>
</ConfigTabs>
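To actually collect these metrics, add a scrape job on the Prometheus side. A minimal `prometheus.yml` fragment, assuming Frigate is reachable at `frigate.local:5000`:

```yaml
scrape_configs:
  - job_name: frigate
    metrics_path: /api/metrics
    static_configs:
      - targets: ["frigate.local:5000"]
```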
## Available Metrics
### System Metrics


@ -3,6 +3,10 @@ id: motion_detection
title: Motion Detection
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Tuning Motion Detection
Frigate uses motion detection as a first line check to see if there is anything happening in the frame worth checking with object detection.
@ -21,7 +25,7 @@ First, mask areas with regular motion not caused by the objects you want to dete
## Prepare For Testing
The easiest way to tune motion detection is to use the Frigate UI under Settings > Motion Tuner. This screen allows the changing of motion detection values live to easily see the immediate effect on what is detected as motion.
The recommended way to tune motion detection is to use the built-in Motion Tuner. Navigate to <NavPath path="Settings > Camera configuration > Motion tuner" /> and select the camera you want to tune. This screen lets you adjust motion detection values live and immediately see the effect on what is detected as motion, making it the fastest way to find optimal settings for each camera.
## Tuning Motion Detection During The Day
@ -37,6 +41,20 @@ Remember that motion detection is just used to determine when object detection s
The threshold value dictates how much of a change in a pixel's luminance is required to be considered motion.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Motion detection" /> to set the threshold globally.
To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera, or use the <NavPath path="Settings > Camera configuration > Motion tuner" /> to adjust it live.
| Field | Description |
|-------|-------------|
| **Motion threshold** | The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive. The value should be between 1 and 255. (default: 30) |
</TabItem>
<TabItem value="yaml">
```yaml
motion:
# Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
@ -45,12 +63,29 @@ motion:
threshold: 30
```
</TabItem>
</ConfigTabs>
Lower values make motion detection more sensitive to changes in color, making it more likely, for example, to detect motion when a brown dog blends in with a brown fence or a person wearing a red shirt blends in with a red car. If the threshold is too low, however, things like grass blowing in the wind or shadows may be detected as motion.
Watching the motion boxes in the debug view, increase the threshold until you only see motion that is visible to the eye. Once this is done, it is important to test and ensure that desired motion is still detected.
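Conceptually, this first stage can be sketched in plain Python (a simplification; Frigate itself runs `cv2.threshold` on a resized, downscaled frame):

```python
def changed_pixels(prev_frame, curr_frame, threshold=30):
    """Flag pixels whose luminance changed by more than `threshold`.

    The default matches the config value shown above; frames are 2D lists
    of grayscale values (a stand-in for real frame data).
    """
    return [
        [abs(curr - prev) > threshold for prev, curr in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]
```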
### Contour Area
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Motion detection" /> to set the contour area globally.
To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera, or use the <NavPath path="Settings > Camera configuration > Motion tuner" /> to adjust it live.
| Field | Description |
|-------|-------------|
| **Contour area** | Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects. As a rule of thumb: 10 = high sensitivity, 30 = medium sensitivity, 50 = low sensitivity. (default: 10) |
</TabItem>
<TabItem value="yaml">
```yaml
motion:
# Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
@ -63,6 +98,9 @@ motion:
contour_area: 10
```
</TabItem>
</ConfigTabs>
Once the threshold calculation is run, the pixels that have changed are grouped together. The contour area value is used to decide which groups of changed pixels qualify as motion. Smaller values are more sensitive meaning people that are far away, small animals, etc. are more likely to be detected as motion, but it also means that small changes in shadows, leaves, etc. are detected as motion. Higher values are less sensitive meaning these things won't be detected as motion but with the risk that desired motion won't be detected until closer to the camera.
Watching the motion boxes in the debug view, adjust the contour area until there are no motion boxes smaller than the smallest object you'd expect Frigate to detect moving.
@ -81,6 +119,20 @@ However, if the preferred day settings do not work well at night it is recommend
### Lightning Threshold
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Motion detection" /> and expand the advanced fields to find the lightning threshold setting.
To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera.
| Field | Description |
|-------|-------------|
| **Lightning threshold** | The percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera. (default: 0.8) |
</TabItem>
<TabItem value="yaml">
```yaml
motion:
# Optional: The percentage of the image used to detect lightning or
@ -94,6 +146,9 @@ motion:
lightning_threshold: 0.8
```
</TabItem>
</ConfigTabs>
Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. `lightning_threshold` defines the percentage of the image used to detect these substantial changes. Increasing this value makes motion detection more likely to treat large changes (like IR mode switches) as valid motion. Decreasing it makes motion detection more likely to ignore large amounts of motion, such as a person approaching a doorbell camera.
Note that `lightning_threshold` does **not** stop motion-based recordings from being saved — it only prevents additional motion analysis after the threshold is exceeded, reducing false positive object detections during high-motion periods (e.g. storms or PTZ sweeps) without interfering with recordings.
@ -106,6 +161,20 @@ Some cameras, like doorbell cameras, may have missed detections when someone wal
### Skip Motion On Large Scene Changes
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Motion detection" /> and expand the advanced fields to find the skip motion threshold setting.
To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera.
| Field | Description |
|-------|-------------|
| **Skip motion threshold** | Fraction of the frame that must change in a single update before Frigate will completely ignore any motion in that frame. Values range between 0.0 and 1.0; leave unset (null) to disable. For example, setting this to 0.7 causes Frigate to skip reporting motion boxes when more than 70% of the image appears to change (e.g. during lightning storms, IR/color mode switches, or other sudden lighting events). |
</TabItem>
<TabItem value="yaml">
```yaml
motion:
# Optional: Fraction of the frame that must change in a single update
@ -118,6 +187,9 @@ motion:
skip_motion_threshold: 0.7
```
</TabItem>
</ConfigTabs>
This option is handy when you want to prevent large transient changes from triggering recordings or object detection. It differs from `lightning_threshold` because it completely suppresses motion instead of just forcing a recalibration.
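The difference between the two options can be sketched as follows (an illustrative helper, not Frigate's actual code; defaults follow the values above):

```python
def classify_frame_motion(changed_fraction, lightning_threshold=0.8,
                          skip_motion_threshold=None):
    # changed_fraction: fraction of the frame that changed this update (0.0-1.0).
    if skip_motion_threshold is not None and changed_fraction > skip_motion_threshold:
        return "skip"         # ignore all motion in this frame entirely
    if changed_fraction > lightning_threshold:
        return "recalibrate"  # treat as lightning/IR switch; recordings continue
    return "motion"
```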
:::warning


@ -3,6 +3,10 @@ id: notifications
title: Notifications
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Notifications
Frigate offers native notifications using the [WebPush Protocol](https://web.dev/articles/push-notifications-web-push-protocol) which uses the [VAPID spec](https://tools.ietf.org/html/draft-thomson-webpush-vapid) to deliver notifications to web apps using encryption.
@ -18,15 +22,28 @@ In order to use notifications the following requirements must be met:
### Configuration
To configure notifications, go to the Frigate WebUI -> Settings -> Notifications and enable, then fill out the fields and save.
Enable notifications and fill out the required fields.
Optionally, you can change the default cooldown period for notifications through the `cooldown` parameter in your config file. This parameter can also be overridden at the camera level.
Optionally, change the default cooldown period for notifications. The cooldown can also be overridden at the camera level.
Notifications will be prevented if either:
- The global cooldown period hasn't elapsed since any camera's last notification
- The camera-specific cooldown period hasn't elapsed for the specific camera
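These two checks can be sketched as (illustrative; timestamps in seconds, cooldown values taken from the examples on this page):

```python
def should_notify(now, last_any_camera, last_this_camera,
                  global_cooldown=10, camera_cooldown=30):
    # Suppress the notification if either cooldown window is still open.
    if now - last_any_camera < global_cooldown:
        return False  # global cooldown still active
    if now - last_this_camera < camera_cooldown:
        return False  # per-camera cooldown still active
    return True
```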
#### Global notifications
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Notifications > Notifications" />.
- Set **Enable notifications** to on
- Set **Notification email** to your email address
- Set **Cooldown period** to the desired number of seconds to wait before sending another notification from any camera (e.g. `10`)
</TabItem>
<TabItem value="yaml">
```yaml
notifications:
enabled: True
@ -34,6 +51,21 @@ notifications:
cooldown: 10 # wait 10 seconds before sending another notification from any camera
```
</TabItem>
</ConfigTabs>
#### Per-camera notifications
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Notifications" /> and select the desired camera.
- Set **Enabled** to on
- Set **Cooldown** to the desired number of seconds to wait before sending another notification from this camera (e.g. `30`)
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
doorbell:
@ -43,6 +75,9 @@ cameras:
cooldown: 30 # wait 30 seconds before sending another notification from the doorbell camera
```
</TabItem>
</ConfigTabs>
### Registration
Once notifications are enabled, press the `Register for Notifications` button on all devices that you would like to receive notifications on. This registers the background worker. After this, Frigate must be restarted, and notifications will then begin to be sent.

File diff suppressed because it is too large


@ -3,11 +3,15 @@ id: object_filters
title: Filters
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
There are several types of object filters that can be used to reduce false positive rates.
## Object Scores
For object filters in your configuration, any single detection below `min_score` will be ignored as a false positive. `threshold` is based on the median of the history of scores (padded to 3 values) for a tracked object. Consider the following frames when `min_score` is set to 0.6 and threshold is set to 0.85:
For object filters, any single detection below `min_score` will be ignored as a false positive. `threshold` is based on the median of the history of scores (padded to 3 values) for a tracked object. Consider the following frames when `min_score` is set to 0.6 and `threshold` is set to 0.85:
| Frame | Current Score | Score History | Computed Score | Detected Object |
| ----- | ------------- | --------------------------------- | -------------- | --------------- |
@ -28,6 +32,46 @@ Any detection below `min_score` will be immediately thrown out and never tracked
`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold`, the object is considered a true positive. If `threshold` is too low, some higher-scoring false positives may create a tracked object. If `threshold` is too high, true positive tracked objects may be missed because the object never scores high enough.
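The computed score can be sketched as (assuming, per the table above, that short histories are padded with zeros before taking the median):

```python
from statistics import median

def computed_score(score_history):
    # Pad the history to 3 values with zeros, then take the median.
    padded = list(score_history) + [0.0] * max(0, 3 - len(score_history))
    return median(padded)
```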
## Configuring Object Scores
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Objects" /> to set score filters globally.
| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Score** | Minimum score for a single detection to initiate tracking |
| **Object filters > Person > Threshold** | Minimum computed (median) score to be considered a true positive |
To override score filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" /> and select the camera.
</TabItem>
<TabItem value="yaml">
```yaml
objects:
filters:
person:
min_score: 0.5
threshold: 0.7
```
To override at the camera level:
```yaml
cameras:
front_door:
objects:
filters:
person:
min_score: 0.5
threshold: 0.7
```
</TabItem>
</ConfigTabs>
## Object Shape
False positives can also be reduced by filtering a detection based on its shape.
@ -46,6 +90,50 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "
:::
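How these constraints combine can be sketched as (an illustrative helper; default values taken from the YAML example below, box given as `(x1, y1, x2, y2)` in pixels):

```python
def passes_shape_filter(box, min_area=5000, max_area=100000,
                        min_ratio=0.5, max_ratio=2.0):
    x1, y1, x2, y2 = box
    width, height = x2 - x1, y2 - y1
    area = width * height
    ratio = width / height  # 1 = square, <1 = tall/skinny, >1 = wide/flat
    return min_area <= area <= max_area and min_ratio <= ratio <= max_ratio
```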
### Configuring Shape Filters
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Objects" /> to set shape filters globally.
| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |
To override shape filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" /> and select the camera.
</TabItem>
<TabItem value="yaml">
```yaml
objects:
filters:
person:
min_area: 5000
max_area: 100000
min_ratio: 0.5
max_ratio: 2.0
```
To override at the camera level:
```yaml
cameras:
front_door:
objects:
filters:
person:
min_area: 5000
max_area: 100000
```
</TabItem>
</ConfigTabs>
## Other Tools
### Zones
@ -54,4 +142,4 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "
### Object Masks
[Object Filter Masks](/configuration/masks) are a last resort but can be useful when false positives are in the relatively same place but can not be filtered due to their size or shape.
[Object Filter Masks](/configuration/masks) are a last resort but can be useful when false positives occur in relatively the same place and cannot be filtered due to their size or shape. Object filter masks can be configured in <NavPath path="Settings > Camera configuration > Masks / Zones" />.


@ -3,6 +3,9 @@ id: objects
title: Available Objects
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
import labels from "../../../labelmap.txt";
Frigate includes the object labels listed below from the Google Coral test data.
@ -10,7 +13,7 @@ Frigate includes the object labels listed below from the Google Coral test data.
Please note:
- `car` is listed twice because `truck` has been renamed to `car` by default. These object types are frequently confused.
- `person` is the only tracked object by default. See the [full configuration reference](reference.md) for an example of expanding the list of tracked objects.
- `person` is the only tracked object by default. To track additional objects, configure them in the objects settings.
<ul>
{labels.split("\n").map((label) => (
@ -18,6 +21,142 @@ Please note:
))}
</ul>
## Configuring Tracked Objects
By default, Frigate only tracks `person`. To track additional object types, add them to the tracked objects list.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Global configuration > Objects" />.
- Add the desired object types to the **Objects to track** list (e.g., `person`, `car`, `dog`)
To override the tracked objects list for a specific camera:
1. Navigate to <NavPath path="Settings > Camera configuration > Objects" />.
- Add the desired object types to the **Objects to track** list
</TabItem>
<TabItem value="yaml">
```yaml
objects:
track:
- person
- car
- dog
```
To override at the camera level:
```yaml
cameras:
front_door:
objects:
track:
- person
- car
```
</TabItem>
</ConfigTabs>
## Filtering Objects
Object filters help reduce false positives by constraining the size, shape, and confidence thresholds for each object type. Filters can be configured globally or per camera.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Objects" />.
| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |
| **Object filters > Person > Min Score** | Minimum score for the object to initiate tracking |
| **Object filters > Person > Threshold** | Minimum computed score to be considered a true positive |
To override filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" />.
</TabItem>
<TabItem value="yaml">
```yaml
objects:
filters:
person:
min_area: 5000
max_area: 100000
min_ratio: 0.5
max_ratio: 2.0
min_score: 0.5
threshold: 0.7
```
To override at the camera level:
```yaml
cameras:
front_door:
objects:
filters:
person:
min_area: 5000
threshold: 0.7
```
</TabItem>
</ConfigTabs>
## Object Filter Masks
Object filter masks prevent specific object types from being detected in certain areas of the camera frame. These masks check the bottom center of the bounding box. A global mask applies to all object types, while per-object masks apply only to the specified type.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Objects" />.
| Field | Description |
|-------|-------------|
| **Object mask > Mask1 > Friendly Name / Enabled / Coordinates** | Global object filter mask that applies to all object types |
| **Object filters > Person > Mask > Mask1 > Friendly Name / Enabled / Coordinates** | Per-object mask that applies only to the specified object type |
To configure masks for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" />.
</TabItem>
<TabItem value="yaml">
```yaml
objects:
# Global mask applied to all object types
mask:
mask1:
friendly_name: "Object filter mask area"
enabled: true
coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278"
# Per-object mask
filters:
person:
mask:
mask1:
friendly_name: "Person filter mask"
enabled: true
coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278"
```
</TabItem>
</ConfigTabs>
:::note
The global mask is combined with any object-specific mask. Both are checked based on the bottom center of the bounding box.
:::
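The anchor point used for these checks can be sketched as (an illustrative helper):

```python
def mask_anchor_point(box):
    # Masks are evaluated at the bottom center of the bounding box.
    x1, y1, x2, y2 = box
    return ((x1 + x2) // 2, y2)
```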
## Custom Models
Models for both CPU and EdgeTPU (Coral) are bundled in the image. You can use your own models with volume mounts:


@ -3,6 +3,10 @@ id: profiles
title: Profiles
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Profiles allow you to define named sets of camera configuration overrides that can be activated and deactivated at runtime without restarting Frigate. This is useful for scenarios like switching between "Home" and "Away" modes, daytime and nighttime configurations, or any situation where you want to quickly change how multiple cameras behave.
## How Profiles Work
@ -24,16 +28,18 @@ Profile changes are applied in-memory and take effect immediately — no restart
The easiest way to define profiles is to use the Frigate UI. Profiles can also be configured manually in your configuration file.
### Using the UI
### Creating and Managing Profiles
To create and manage profiles from the UI, open **Settings**. From there you can:
<ConfigTabs>
<TabItem value="ui">
1. **Create a profile** — Navigate to **Profiles**. Click the **Add Profile** button, enter a name (and optionally a profile ID).
2. **Configure overrides** — Navigate to a camera configuration section (e.g. Motion detection, Record, Notifications). In the top right, two buttons will appear - choose a camera and a profile from the profile selector to edit overrides for that camera and section. Only the fields you change will be stored as overrides — fields that require a restart are hidden since profiles are applied at runtime. You can click the **Remove Profile Override** button
3. **Activate a profile** — Use the **Profiles** option in Frigate's main menu to choose a profile. Alternatively, in Settings, navigate to **Profiles**, then choose a profile in the Active Profile dropdown to activate it. The active profile is also shown in the status bar at the bottom of the screen on desktop browsers.
4. **Delete a profile** — Navigate to **Profiles**, then click the trash icon for a profile. This removes the profile definition and all camera overrides associated with it.
1. **Create a profile** — Navigate to <NavPath path="Settings > Camera configuration > Profiles" />. Click the **Add Profile** button, enter a name (and optionally a profile ID).
2. **Configure overrides** — Navigate to a camera configuration section (e.g. Motion detection, Record, Notifications). In the top right, two buttons will appear - choose a camera and a profile from the profile selector to edit overrides for that camera and section. Only the fields you change will be stored as overrides — fields that require a restart are hidden since profiles are applied at runtime. You can click the **Remove Profile Override** button to clear overrides.
3. **Activate a profile** — Use the **Profiles** option in Frigate's main menu to choose a profile. Alternatively, in Settings, navigate to <NavPath path="Settings > Camera configuration > Profiles" />, then choose a profile in the Active Profile dropdown to activate it. The active profile is also shown in the status bar at the bottom of the screen on desktop browsers.
4. **Delete a profile** — Navigate to <NavPath path="Settings > Camera configuration > Profiles" />, then click the trash icon for a profile. This removes the profile definition and all camera overrides associated with it.
### Defining Profiles in YAML
</TabItem>
<TabItem value="yaml">
First, define your profiles at the top level of your Frigate config. Every profile name referenced by a camera must be defined here.
@ -47,8 +53,6 @@ profiles:
friendly_name: Night Mode
```
### Camera Profile Overrides
Under each camera, add a `profiles` section with overrides for each profile. You only need to include the settings you want to change.
```yaml
@ -91,6 +95,9 @@ cameras:
- person
```
</TabItem>
</ConfigTabs>
### Supported Override Sections
The following camera configuration sections can be overridden in a profile:
@ -125,6 +132,17 @@ Profiles can be activated and deactivated from the Frigate UI. Open the Settings
A common use case is having different detection and notification settings based on whether you are home or away.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Profiles" /> and create two profiles: **Home** and **Away**.
2. For the **front_door** camera, configure the **Away** profile to enable notifications and set alert labels to `person` and `car`. Configure the **Home** profile to disable notifications.
3. For the **indoor_cam** camera, configure the **Away** profile to enable the camera, detection, and recording. Configure the **Home** profile to disable the camera entirely for privacy.
4. Activate the desired profile from <NavPath path="Settings > Camera configuration > Profiles" /> or from the **Profiles** option in Frigate's main menu.
</TabItem>
<TabItem value="yaml">
```yaml
profiles:
home:
@ -181,6 +199,9 @@ cameras:
enabled: false
```
</TabItem>
</ConfigTabs>
In this example:
- **Away profile**: The front door camera enables notifications and tracks specific alert labels. The indoor camera is fully enabled with detection and recording.


@ -3,7 +3,11 @@ id: record
title: Recording
---
Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH/<camera_name>/MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the tracked object retention when determining if a recording should be removed.
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH/<camera_name>/MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy. Frigate chooses the largest matching retention value between the recording retention and the tracked object retention when determining if a recording should be removed.
New recording segments are written from the camera stream to cache and are only moved to disk if they match the configured recording retention policy.
@ -13,7 +17,23 @@ H265 recordings can be viewed in Chrome 108+, Edge and Safari only. All other br
### Most conservative: Ensure all video is saved
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion will be saved for 7 days. After 7 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following configuration will store all video for 3 days. After 3 days, only video containing motion will be saved for 7 days. After 7 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
- Set **Enable recording** to on
- Set **Continuous retention > Retention days** to `3`
- Set **Motion retention > Retention days** to `7`
- Set **Alert retention > Event retention > Retention days** to `30`
- Set **Alert retention > Event retention > Retention mode** to `all`
- Set **Detection retention > Event retention > Retention days** to `30`
- Set **Detection retention > Event retention > Retention mode** to `all`
</TabItem>
<TabItem value="yaml">
```yaml
record:
@ -32,9 +52,27 @@ record:
mode: all
```
</TabItem>
</ConfigTabs>
### Reduced storage: Only saving video when motion is detected
In order to reduce storage requirements, you can adjust your config to only retain video where motion / activity was detected.
To reduce storage requirements, configure recording to only retain video where motion or activity was detected.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
- Set **Enable recording** to on
- Set **Motion retention > Retention days** to `3`
- Set **Alert retention > Event retention > Retention days** to `30`
- Set **Alert retention > Event retention > Retention mode** to `motion`
- Set **Detection retention > Event retention > Retention days** to `30`
- Set **Detection retention > Event retention > Retention mode** to `motion`
</TabItem>
<TabItem value="yaml">
```yaml
record:
@ -51,9 +89,25 @@ record:
mode: motion
```
</TabItem>
</ConfigTabs>
### Minimum: Alerts only
If you only want to retain video that occurs during activity caused by tracked object(s), this config will discard video unless an alert is ongoing.
If you only want to retain video that occurs during activity caused by tracked object(s), this configuration will discard video unless an alert is ongoing.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
- Set **Enable recording** to on
- Set **Continuous retention > Retention days** to `0`
- Set **Alert retention > Event retention > Retention days** to `30`
- Set **Alert retention > Event retention > Retention mode** to `motion`
</TabItem>
<TabItem value="yaml">
```yaml
record:
@ -66,6 +120,9 @@ record:
mode: motion
```
</TabItem>
</ConfigTabs>
## Will Frigate delete old recordings if my storage runs out?
As of Frigate 0.12, if there is less than an hour of storage remaining, the oldest 2 hours of recordings will be deleted.
@ -82,7 +139,21 @@ Retention configs support decimals meaning they can be configured to retain `0.5
### Continuous and Motion Recording
The number of days to retain continuous and motion recordings can be configured. By default, continuous recording is disabled.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for all cameras |
| **Continuous retention > Retention days** | Number of days to keep continuous recordings |
| **Motion retention > Retention days** | Number of days to keep motion recordings |
</TabItem>
<TabItem value="yaml">
```yaml
record:
@ -93,11 +164,28 @@ record:
days: 2 # <- number of days to keep motion recordings
```
</TabItem>
</ConfigTabs>
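For reference, a sketch of the continuous and motion retention settings (key names are assumed from the UI labels above; day counts are illustrative):

```yaml
record:
  enabled: True
  continuous:
    days: 1 # <- number of days to keep continuous recordings
  motion:
    days: 2 # <- number of days to keep motion recordings
```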
Continuous recording supports different retention modes, [which are described below](#what-do-the-different-retain-modes-mean).
### Object Recording
The number of days to retain recordings for review items can be specified for items classified as alerts as well as tracked objects.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for all cameras |
| **Alert retention > Event retention > Retention days** | Number of days to keep alert recordings |
| **Detection retention > Event retention > Retention days** | Number of days to keep detection recordings |
</TabItem>
<TabItem value="yaml">
```yaml
record:
@ -110,9 +198,12 @@ record:
days: 10 # <- number of days to keep detections recordings
```
</TabItem>
</ConfigTabs>
This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.
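A sketch of the 10-day alert and detection retention described above (the `alerts`/`detections` key layout is assumed from the UI labels):

```yaml
record:
  enabled: True
  alerts:
    retain:
      days: 10 # <- number of days to keep alerts recordings
  detections:
    retain:
      days: 10 # <- number of days to keep detections recordings
```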
**WARNING**: Recordings must be enabled. If a camera has recordings disabled, enabling via the methods listed above will have no effect.
## Can I have "continuous" recordings, but only at certain times?
@ -128,7 +219,18 @@ Time lapse exporting is available only via the [HTTP API](../integrations/api/ex
When exporting a time-lapse, the default speed-up is 25x at 30 FPS. This means that every 25 seconds of (real-time) recording is condensed into 1 second of time-lapse video (always without audio) with a smoothness of 30 FPS.
To configure the speed-up factor, the frame rate and further custom settings, use the `timelapse_args` parameter. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Recording" />.
- Set **Enable recording** to on
- Set **Export config > Timelapse Args** to `-vf setpts=PTS/60 -r 25`
</TabItem>
<TabItem value="yaml">
```yaml {3-4}
record:
@ -137,9 +239,12 @@ record:
timelapse_args: "-vf setpts=PTS/60 -r 25"
```
</TabItem>
</ConfigTabs>
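A sketch of the full export settings implied above, with `timelapse_args` nested under the record export config:

```yaml
record:
  enabled: True
  export:
    timelapse_args: "-vf setpts=PTS/60 -r 25"
```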
:::tip
When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).
:::
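For example, a per-camera override might look like the following sketch (`my_camera` is a placeholder camera name; an empty string forces the libx264 software fallback):

```yaml
cameras:
  my_camera:
    record:
      export:
        hwaccel_args: "" # fall back to software encoding for this camera
```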
View File
@ -3,6 +3,10 @@ id: restream
title: Restream
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
## RTSP
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
@ -52,6 +56,16 @@ Some cameras only support one active connection or you may just want to have a s
A single connection is made to the camera for the restream, and `detect` and `record` connect to the restream.
Configure the go2rtc stream and point the camera inputs at the local restream.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > go2rtc streams" /> and add stream entries for each camera. Then navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> for each camera and set the input paths to use the local restream URL (`rtsp://127.0.0.1:8554/<camera_name>`).
</TabItem>
<TabItem value="yaml">
```yaml
go2rtc:
streams:
@ -87,10 +101,21 @@ cameras:
- audio # <- only necessary if audio detection is enabled
```
</TabItem>
</ConfigTabs>
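A sketch of this layout (camera name and RTSP URL are placeholders):

```yaml
go2rtc:
  streams:
    back:
      - rtsp://user:password@10.0.10.10:554/stream

cameras:
  back:
    ffmpeg:
      inputs:
        # a single input pointed at the local restream serves both roles
        - path: rtsp://127.0.0.1:8554/back
          input_args: preset-rtsp-restream
          roles:
            - record
            - detect
```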
### With Sub Stream
Two connections are made to the camera: one for the sub stream and one for the restream; `record` connects to the restream.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > go2rtc streams" /> and add stream entries for each camera and its sub stream. Then navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> for each camera and configure separate inputs for the main and sub streams using the local restream URLs.
</TabItem>
<TabItem value="yaml">
```yaml
go2rtc:
streams:
@ -138,6 +163,9 @@ cameras:
- detect
```
</TabItem>
</ConfigTabs>
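A sketch of this layout (camera name and RTSP URLs are placeholders):

```yaml
go2rtc:
  streams:
    back:
      - rtsp://user:password@10.0.10.10:554/stream # main stream
    back_sub:
      - rtsp://user:password@10.0.10.10:554/substream # sub stream

cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/back
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/back_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
```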
## Handling Complex Passwords
go2rtc expects URL-encoded passwords in the config; [urlencoder.org](https://urlencoder.org) can be used for this purpose.
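For example, a password such as `p@ss:word` would be written URL-encoded (`@` becomes `%40`, `:` becomes `%3A`); the stream name and URL below are placeholders:

```yaml
go2rtc:
  streams:
    my_camera:
      - rtsp://admin:p%40ss%3Aword@192.168.1.10:554/stream
```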
View File
@ -3,6 +3,10 @@ id: review
title: Review
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
The Review page of the Frigate UI is for quickly reviewing historical footage of interest from your cameras. _Review items_ are indicated on a vertical timeline and displayed as a grid of previews - bandwidth-optimized, low frame rate, low resolution videos. Hovering over or swiping a preview plays the video and marks it as reviewed. If more in-depth analysis is required, the preview can be clicked/tapped and the full frame rate, full resolution recording is displayed.
Review items are filterable by date, object type, and camera.
@ -38,7 +42,19 @@ See the [objects documentation](objects.md) for the list of objects that Frigate
## Restricting alerts to specific labels
By default a review item will only be marked as an alert if a person or car is detected. Configure the alert labels to include any object or audio label.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
| Field | Description |
|-------|-------------|
| **Alerts > Labels** | List of object or audio labels that qualify a review item as an alert |
</TabItem>
<TabItem value="yaml">
```yaml
# can be overridden at the camera level
@ -52,10 +68,25 @@ review:
- speech
```
</TabItem>
</ConfigTabs>
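A sketch of the alert labels config (the label list is illustrative):

```yaml
# can be overridden at the camera level
review:
  alerts:
    labels:
      - car
      - person
      - speech
```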
## Restricting detections to specific labels
By default, all detections that do not qualify as an alert qualify as a detection. However, detections can be further filtered to only include certain labels or certain zones.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Review" />.
| Field | Description |
|-------|-------------|
| **Detections > Labels** | List of labels to restrict which tracked objects qualify as detections |
</TabItem>
<TabItem value="yaml">
```yaml
# can be overridden at the camera level
review:
@ -65,11 +96,23 @@ review:
- dog
```
</TabItem>
</ConfigTabs>
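A sketch of the detection labels config (the label list is illustrative):

```yaml
# can be overridden at the camera level
review:
  detections:
    labels:
      - bark
      - dog
```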
## Excluding a camera from alerts or detections
To exclude a specific camera from alerts or detections, provide an empty list to the alerts or detections labels field at the camera level.
For example, to exclude objects on the camera _gatecamera_ from any detections:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Review" /> and select the **gatecamera** camera.
- Set **Detections > Labels** to an empty list
</TabItem>
<TabItem value="yaml">
```yaml {3-5}
cameras:
@ -79,6 +122,9 @@ cameras:
labels: []
```
</TabItem>
</ConfigTabs>
## Restricting review items to specific zones
By default a review item will be created if any `review -> alerts -> labels` and `review -> detections -> labels` are detected anywhere in the camera frame. You will likely want to configure review items to only be created when the object enters an area of interest, [see the zone docs for more information](./zones.md#restricting-alerts-and-detections-to-specific-zones)
View File
@ -3,6 +3,10 @@ id: semantic_search
title: Semantic Search
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate uses models from [Jina AI](https://huggingface.co/jinaai) to create and save embeddings to Frigate's database. All of this runs locally.
@ -19,7 +23,18 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
Semantic Search is disabled by default and must be enabled before it can be used. Semantic Search is a global configuration setting.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.
- Set **Enable semantic search** to on
- Set **Reindex on startup** to on if you want to reindex the embeddings database from existing tracked objects
</TabItem>
<TabItem value="yaml">
```yaml
semantic_search:
@ -27,6 +42,9 @@ semantic_search:
reindex: False
```
</TabItem>
</ConfigTabs>
:::tip
The embeddings database can be re-indexed from the existing tracked objects in your database by pressing the "Reindex" button in the Enrichments Settings in the UI or by adding `reindex: True` to your `semantic_search` configuration and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing.
@ -41,7 +59,20 @@ The [V1 model from Jina](https://huggingface.co/jinaai/jina-clip-v1) has a visio
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on thumbnail of a tracked object. See [the object description docs](/configuration/genai/objects.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted versions of the Jina models are available and can be selected by setting the model size.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.
| Field | Description |
|-------|-------------|
| **Model** | Select `jinav1` to use the Jina AI CLIP V1 model |
| **Model Size** | `small` (quantized, CPU-friendly) or `large` (full model, GPU-accelerated) |
</TabItem>
<TabItem value="yaml">
```yaml
semantic_search:
@ -50,6 +81,9 @@ semantic_search:
model_size: small
```
</TabItem>
</ConfigTabs>
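A sketch of the model selection config:

```yaml
semantic_search:
  enabled: True
  model: jinav1
  model_size: small
```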
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the Jina model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
@ -59,7 +93,20 @@ Frigate also supports the [V2 model from Jina](https://huggingface.co/jinaai/jin
V2 offers only a 3% performance improvement over V1 in both text-image and text-text retrieval tasks, an upgrade that is unlikely to yield noticeable real-world benefits. Additionally, V2 has _significantly_ higher RAM and GPU requirements, leading to increased inference time and memory usage. If you plan to use V2, ensure your system has ample RAM and a discrete GPU. CPU inference (with the `small` model) using V2 is not recommended.
To use the V2 model, set the model to `jinav2`.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.
| Field | Description |
|-------|-------------|
| **Model** | Select `jinav2` to use the Jina AI CLIP V2 model |
| **Model Size** | `large` is recommended for V2 (requires discrete GPU) |
</TabItem>
<TabItem value="yaml">
```yaml
semantic_search:
@ -68,6 +115,9 @@ semantic_search:
model_size: large
```
</TabItem>
</ConfigTabs>
For most users, especially native English speakers, the V1 model remains the recommended choice.
:::note
@ -82,9 +132,23 @@ Frigate can use a GenAI provider for semantic search embeddings when that provid
To use llama.cpp for semantic search:
1. Configure a GenAI provider with `embeddings` in its `roles`.
2. Set the semantic search model to the GenAI config key (e.g. `default`).
3. Start the llama.cpp server with `--embeddings` and `--mmproj` for image support.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.
| Field | Description |
|-------|-------------|
| **Model** | Set to the GenAI config key (e.g. `default`) to use a configured GenAI provider for embeddings |
The GenAI provider must also be configured with the `embeddings` role under <NavPath path="Settings > Enrichments > Generative AI" />.
</TabItem>
<TabItem value="yaml">
```yaml
genai:
@ -102,6 +166,9 @@ semantic_search:
model: default
```
</TabItem>
</ConfigTabs>
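A sketch of the elided config (the provider settings here are placeholders — use the provider type and URL of your llama.cpp server):

```yaml
genai:
  default:
    provider: llamacpp # assumed provider name
    base_url: http://localhost:8080
    roles:
      - embeddings

semantic_search:
  enabled: True
  model: default # <- the GenAI config key above
```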
The llama.cpp server must be started with `--embeddings` to enable the embeddings API, and with a multi-modal embeddings model loaded. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details.
:::note
@ -114,6 +181,19 @@ Switching between Jina models and a GenAI provider requires reindexing. Embeddin
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.
| Field | Description |
|-------|-------------|
| **Model Size** | Set to `large` to enable GPU acceleration |
| **Device** | (Optional) Specify a GPU device index in a multi-GPU system (e.g. `0`) |
</TabItem>
<TabItem value="yaml">
```yaml
semantic_search:
enabled: True
@ -122,6 +202,9 @@ semantic_search:
device: 0
```
</TabItem>
</ConfigTabs>
:::info
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU will be detected and used automatically.
@ -153,16 +236,15 @@ Semantic Search must be enabled to use Triggers.
### Configuration
Triggers are defined within the `semantic_search` configuration for each camera. Each trigger consists of a `friendly_name`, a `type` (either `thumbnail` or `description`), a `data` field (the reference image event ID or text), a `threshold` for similarity matching, and a list of `actions` to perform when the trigger fires - `notification`, `sub_label`, and `attribute`.
Triggers are best configured through the Frigate UI.
#### Managing Triggers in the UI
1. Navigate to <NavPath path="Settings > Triggers" /> and select a camera from the dropdown menu.
2. Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one.
3. In the **Create Trigger** wizard:
- Enter a **Name** for the trigger (e.g., "Red Car Alert").
- Enter a descriptive **Friendly Name** for the trigger (e.g., "Red car on the driveway camera").
- Select the **Type** (`Thumbnail` or `Description`).
@ -173,14 +255,14 @@ Triggers are best configured through the Frigate UI.
If native webpush notifications are enabled, check the `Send Notification` box to send a notification.
Check the `Add Sub Label` box to add the trigger's friendly name as a sub label to any triggering tracked objects.
Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
4. Save the trigger to update the configuration and store the embedding in the database.
When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification. Additionally, the UI will show the last date/time and tracked object ID that activated your trigger. The last triggered timestamp is not saved to the database or persisted through restarts of Frigate.
### Usage and Best Practices
1. **Thumbnail Triggers**: Select a representative image (event ID) from the Explore page that closely matches the object you want to detect. For best results, choose images where the object is prominent and fills most of the frame.
2. **Description Triggers**: Write concise, specific text descriptions (e.g., "Person in a red jacket") that align with the tracked object's description. Avoid vague terms to improve matching accuracy.
3. **Threshold Tuning**: Adjust the threshold to balance sensitivity and specificity. A higher threshold (e.g., 0.8) requires closer matches, reducing false positives but potentially missing similar objects. A lower threshold (e.g., 0.6) is more inclusive but may trigger more often.
4. **Using Explore**: Use the context menu or right-click / long-press on a tracked object in the Grid View in Explore to quickly add a trigger based on the tracked object's thumbnail.
5. **Editing triggers**: For the best experience, triggers should be edited via the UI. However, Frigate will ensure triggers edited in the config will be synced with triggers created and edited in the UI.
@ -195,6 +277,6 @@ When a trigger fires, the UI highlights the trigger with a blue dot for 3 second
#### Why can't I create a trigger on thumbnails for some text, like "person with a blue shirt" and have it trigger when a person with a blue shirt is detected?
TL;DR: Text-to-image triggers aren't supported because CLIP can confuse similar images and give inconsistent scores, making automation unreliable. The same word-image pair can give different scores and the score ranges can be too close together to set a clear cutoff.
Text-to-image triggers are not supported due to fundamental limitations of CLIP-based similarity search. While CLIP works well for exploratory, manual queries, it is unreliable for automated triggers based on a threshold. Issues include embedding drift (the same text-image pair can yield different cosine distances over time), lack of true semantic grounding (visually similar but incorrect matches), and unstable thresholding (distance distributions are dataset-dependent and often too tightly clustered to separate relevant from irrelevant results). Instead, it is recommended to set up a workflow with thumbnail triggers: first use text search to manually select 3-5 representative reference tracked objects, then configure thumbnail triggers based on that visual similarity. This provides robust automation without the semantic ambiguity of text to image matching.
View File
@ -3,19 +3,134 @@ id: snapshots
title: Snapshots
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Frigate can save a snapshot image to `/media/frigate/clips` for each detected object, named `<camera>-<id>-clean.webp`. Snapshots are also accessible [via the API](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx).
Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service.
To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)
Snapshots sent via MQTT are configured separately under the camera MQTT settings, not here.
## Enabling Snapshots
Enable snapshot saving and configure the default settings that apply to all cameras.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Snapshots" />.
- Set **Enable snapshots** to on
</TabItem>
<TabItem value="yaml">
```yaml
snapshots:
enabled: True
```
</TabItem>
</ConfigTabs>
To override snapshot settings for a specific camera:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > Snapshots" /> and select your camera.
- Set **Enable snapshots** to on
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
front_door:
snapshots:
enabled: True
```
</TabItem>
</ConfigTabs>
## Snapshot Options
Configure how snapshots are rendered and stored. These settings control the defaults applied when snapshots are requested via the API.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Snapshots" />.
| Field | Description |
|-------|-------------|
| **Enable snapshots** | Enable or disable saving snapshots for tracked objects |
| **Timestamp overlay** | Overlay a timestamp on snapshots from API |
| **Bounding box overlay** | Draw bounding boxes for tracked objects on snapshots from API |
| **Crop snapshot** | Crop snapshots from API to the detected object's bounding box |
| **Snapshot height** | Height in pixels to resize snapshots to; leave empty to preserve original size |
| **Snapshot quality** | Encode quality for saved snapshots (0-100) |
| **Required zones** | Zones an object must enter for a snapshot to be saved |
</TabItem>
<TabItem value="yaml">
```yaml
snapshots:
enabled: True
timestamp: False
bounding_box: True
crop: False
height: 175
required_zones: []
quality: 60
```
</TabItem>
</ConfigTabs>
## Snapshot Retention
Configure how long snapshots are retained on disk. Per-object retention overrides allow different retention periods for specific object types.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Snapshots" />.
| Field | Description |
|-------|-------------|
| **Snapshot retention > Default retention** | Number of days to retain snapshots (default: 10) |
| **Snapshot retention > Retention mode** | Retention mode: `all`, `motion`, or `active_objects` |
| **Snapshot retention > Object retention > Person** | Per-object overrides for retention days (e.g., keep `person` snapshots for 15 days) |
</TabItem>
<TabItem value="yaml">
```yaml
snapshots:
enabled: True
retain:
default: 10
mode: motion
objects:
person: 15
```
</TabItem>
</ConfigTabs>
## Frame Selection
Frigate does not save every frame. It picks a single "best" frame for each tracked object based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. That best frame is written to disk once tracking ends.
MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under the camera MQTT settings.
## Rendering
@ -28,4 +143,4 @@ Frigate stores a single clean snapshot on disk:
| `/api/events/<id>/snapshot-clean.webp` | Returns the same stored snapshot without annotations |
| [Frigate+](/plus/first_model) submission | Uses the same stored clean snapshot |
MQTT snapshots are configured separately under `cameras -> your_camera -> mqtt` and are unrelated to the stored event snapshot.
MQTT snapshots are configured separately under the camera MQTT settings and are unrelated to the stored event snapshot.
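For reference, the camera-level MQTT snapshot settings look roughly like the following sketch (the camera name and values are illustrative):

```yaml
cameras:
  front_door:
    mqtt:
      enabled: True
      timestamp: False
      bounding_box: True
      crop: True
      height: 270
      quality: 70
```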
View File
@ -1,5 +1,9 @@
# Stationary Objects
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
An object is considered stationary when it is being tracked and has been in a very similar position for a certain number of frames. This number is defined in the configuration under `detect -> stationary -> threshold`, and is 10x the frame rate (or 10 seconds) by default. Once an object is considered stationary, it will remain stationary until motion occurs within the object at which point object detection will start running again. If the object changes location, it will be considered active.
## Why does it matter if an object is stationary?
@ -8,7 +12,18 @@ Once an object becomes stationary, object detection will not be continually run
## Tuning stationary behavior
Configure how Frigate handles stationary objects.
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > Object detection" />.
- Set **Stationary objects config > Stationary interval** to the frequency for running detection on stationary objects (default: 50). Once stationary, detection runs every nth frame to verify the object is still present. There is no way to disable stationary object tracking with this value.
- Set **Stationary objects config > Stationary threshold** to the number of frames an object must remain relatively still before it is considered stationary (default: 50)
</TabItem>
<TabItem value="yaml">
```yaml
detect:
@ -17,11 +32,8 @@ detect:
threshold: 50
```
`interval` is defined as the frequency for running detection on stationary objects. This means that by default once an object is considered stationary, detection will not be run on it until motion is detected or until the interval (every 50th frame by default). With `interval >= 1`, every nth frames detection will be run to make sure the object is still there.
NOTE: There is no way to disable stationary object tracking with this value.
`threshold` is the number of frames an object needs to remain relatively still before it is considered stationary.
</TabItem>
</ConfigTabs>
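A complete sketch of the stationary settings described above:

```yaml
detect:
  stationary:
    interval: 50 # run detection on every 50th frame while an object is stationary
    threshold: 50 # frames an object must remain still before being considered stationary
```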
## Why does Frigate track stationary objects?
View File
@ -3,19 +3,36 @@ id: tls
title: TLS
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# TLS
Frigate's integrated NGINX server supports TLS certificates. By default, Frigate generates a self-signed certificate that is used for port 8971. Frigate is designed to make it easy to use whatever tool you prefer to manage certificates.

Frigate is often running behind a reverse proxy that manages TLS certificates for multiple services. You will likely need to set your reverse proxy to allow self-signed certificates, or you can disable TLS in Frigate's config. However, if you are running on a dedicated device that's separate from your proxy, or if you expose Frigate directly to the internet, you may want to configure TLS with valid certificates.
In many deployments, TLS will be unnecessary. Disable it as follows:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > TLS" />.
- Set **Enable TLS** to off if running behind a reverse proxy that handles TLS (default: on)
</TabItem>
<TabItem value="yaml">
```yaml
tls:
enabled: False
```
</TabItem>
</ConfigTabs>
## Certificates
TLS certificates can be mounted at `/etc/letsencrypt/live/frigate` using a bind mount or docker volume.
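For example, with Docker Compose the certificate directory can be bind-mounted read-only (the host path below is illustrative):

```yaml
services:
  frigate:
    volumes:
      # replace with the directory containing your certificate files
      - /path/on/host/certs:/etc/letsencrypt/live/frigate:ro
```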

View File

@ -3,6 +3,10 @@ id: zones
title: Zones
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Presence in a zone is evaluated based on the bottom center of the bounding box for the object. It does not matter how much of the bounding box overlaps with the zone.
For example, the cat in this image is currently in Zone 1, but **not** Zone 2.
@ -16,11 +20,51 @@ Zones can be toggled on or off without removing them from the configuration. Dis
During testing, enable the Zones option for the Debug view of your camera (Settings --> Debug) so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
## Creating a Zone
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Under the **Zones** section, click the plus icon to add a new zone.
3. Click on the camera's latest image to create the points for the zone boundary. Click the first point again to close the polygon.
4. Configure zone options such as **Friendly name**, **Objects**, **Loitering time**, and **Inertia** in the zone editor.
5. Press **Save** when finished.
</TabItem>
<TabItem value="yaml">
Follow [the steps for creating a mask](masks.md), but use the zone section of the web UI instead. Alternatively, define zones directly in your configuration file:
```yaml
cameras:
name_of_your_camera:
zones:
entire_yard:
friendly_name: Entire yard
coordinates: 0.123,0.456,0.789,0.012,...
```
</TabItem>
</ConfigTabs>
### Restricting alerts and detections to specific zones
Often you will only want alerts to be created when an object enters areas of interest. This is done by combining zones with required zones for review items.
To create an alert only when an object enters the `entire_yard` zone:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > Review" />.
| Field | Description |
|-------|-------------|
| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. |
</TabItem>
<TabItem value="yaml">
```yaml {6,8}
cameras:
@ -35,7 +79,23 @@ cameras:
coordinates: ...
```
</TabItem>
</ConfigTabs>
You may also want to filter detections to only be created when an object enters a secondary area of interest. For example, to trigger alerts when an object enters the inner area of the yard but detections when an object enters the edge of the yard:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Camera configuration > Review" />.
| Field | Description |
|-------|-------------|
| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. |
| **Detections config > Required zones** | Zones that an object must enter to be considered a detection; leave empty to allow any zone. |
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
@ -56,8 +116,22 @@ cameras:
coordinates: ...
```
</TabItem>
</ConfigTabs>
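A fuller sketch of the two-tier setup described above, assuming illustrative zone names `inner_yard` and `edge_yard`:

```yaml
cameras:
  name_of_your_camera:
    review:
      alerts:
        required_zones:
          - inner_yard
      detections:
        required_zones:
          - edge_yard
    zones:
      inner_yard:
        coordinates: ...
      edge_yard:
        coordinates: ...
```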
### Restricting snapshots to specific zones
To only save snapshots when an object enters a specific zone:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Snapshots" /> and select your camera.
- Set **Required zones** to `entire_yard`
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
name_of_your_camera:
@ -70,9 +144,24 @@ cameras:
coordinates: ...
```
</TabItem>
</ConfigTabs>
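A self-contained sketch of restricting snapshots to the `entire_yard` zone:

```yaml
cameras:
  name_of_your_camera:
    snapshots:
      enabled: true
      required_zones:
        - entire_yard
    zones:
      entire_yard:
        coordinates: ...
```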
### Restricting zones to specific objects
Sometimes you want to limit a zone to specific object types to have more granular control of when alerts, detections, and snapshots are saved. The following example limits one zone to person objects and the other to cars.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Create a zone named `entire_yard` covering everywhere you want to track a person.
- Under **Objects**, add `person`
3. Create a second zone named `front_yard_street` covering just the street.
- Under **Objects**, add `car`
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
@ -88,6 +177,9 @@ cameras:
- car
```
</TabItem>
</ConfigTabs>
Only `car` objects can trigger the `front_yard_street` zone and only `person` objects can trigger the `entire_yard` zone. Objects will be tracked for any `person` that enters anywhere in the yard, and for cars only if they enter the street.
### Zone Loitering
@ -103,6 +195,17 @@ When using loitering zones, a review item will behave in the following way:
:::
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Edit or create the zone (e.g., `sidewalk`).
- Set **Loitering time** to the desired number of seconds (e.g., `4`)
- Under **Objects**, add the relevant object types (e.g., `person`)
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
name_of_your_camera:
@ -114,9 +217,22 @@ cameras:
- person
```
</TabItem>
</ConfigTabs>
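For reference, a complete loitering-zone sketch using the example values above:

```yaml
cameras:
  name_of_your_camera:
    zones:
      sidewalk:
        # seconds an object must remain in the zone before it activates
        loitering_time: 4
        objects:
          - person
        coordinates: ...
```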
### Zone Inertia
Sometimes an object's bounding box may be slightly incorrect and the bottom center of the bounding box is inside the zone while the object is not actually in the zone. Zone inertia helps guard against this by requiring an object's bounding box to be within the zone for multiple consecutive frames.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Edit or create the zone (e.g., `front_yard`).
- Set **Inertia** to the desired number of consecutive frames (e.g., `3`)
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
@ -129,8 +245,21 @@ cameras:
- person
```
</TabItem>
</ConfigTabs>
There may also be cases where you expect an object to quickly enter and exit a zone, like when a car is pulling into the driveway, and you may want to have the object be considered present in the zone immediately:
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Edit or create the zone (e.g., `driveway_entrance`).
- Set **Inertia** to `1`
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
name_of_your_camera:
@ -142,6 +271,9 @@ cameras:
- car
```
</TabItem>
</ConfigTabs>
### Speed Estimation
Frigate can be configured to estimate the speed of objects moving through a zone. This works by combining data from Frigate's object tracker and "real world" distance measurements of the edges of the zone. The recommended use case for this feature is to track the speed of vehicles on a road as they move through the zone.
@ -152,7 +284,19 @@ Your zone must be defined with exactly 4 points and should be aligned to the gro
Speed estimation requires a minimum number of frames for your object to be tracked before a valid estimate can be calculated, so create your zone away from places where objects enter and exit for the best results. The object's bounding box must be stable and remain a constant size as it enters and exits the zone. _Your zone should not take up the full frame, and the zone does **not** need to be the same size or larger than the objects passing through it._ An object's speed is tracked while it passes through the zone and then saved to Frigate's database.
Accurate real-world distance measurements are required to estimate speeds. These distances can be specified through the `distances` field. Each number represents the real-world distance between consecutive points in the `coordinates` list. The fastest and most accurate way to configure this is through the Zone Editor in the Frigate UI.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Create or edit a zone with exactly 4 points aligned to the ground plane.
3. In the zone editor, enter the real-world **Distances** between each pair of consecutive points.
   - For example, the distance between the first and second points might be 10 meters, between the second and third 12 meters, and so on.
4. Distances are measured in meters (metric) or feet (imperial), depending on the **Unit system** setting.
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
@ -163,16 +307,34 @@ cameras:
distances: 10,12,11,13.5 # in meters or feet
```
So in the example above, the distance between the first two points ([0.033,0.306] and [0.324,0.138]) is 10. The distance between the second and third points ([0.324,0.138] and [0.439,0.185]) is 12, and so on.
</TabItem>
</ConfigTabs>
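Conceptually, this kind of speed estimation projects the object's bottom-center point onto the ground plane using a perspective (homography) transform derived from the four zone corners and their real-world side lengths, then divides displacement by elapsed time. The Python sketch below is a simplified illustration of the idea, not Frigate's implementation; every coordinate, distance, and timing is an invented example.

```python
import numpy as np

# Illustrative sketch only -- NOT Frigate's actual implementation.

def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def to_ground(H, pt):
    """Project an image point onto ground-plane coordinates (meters)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# A 4-point zone in normalized image coordinates, mapped onto a ground
# rectangle whose side lengths come from measured distances (10 m x 12 m).
zone_px = [(0.2, 0.8), (0.8, 0.8), (0.7, 0.4), (0.3, 0.4)]
ground_m = [(0.0, 0.0), (10.0, 0.0), (10.0, 12.0), (0.0, 12.0)]
H = homography(zone_px, ground_m)

# Bottom-center of an object's bounding box observed 0.5 s apart:
p_a = to_ground(H, (0.4, 0.6))
p_b = to_ground(H, (0.5, 0.6))
speed_kph = np.linalg.norm(p_b - p_a) / 0.5 * 3.6  # m/s -> kph
```

Because the four corners are pinned exactly, points inside the zone interpolate projectively, so displacement in meters falls out directly from image coordinates.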
The `distance` values are measured in meters (metric) or feet (imperial), depending on how `unit_system` is configured in your `ui` config:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > UI" />.
| Field | Description |
|-------|-------------|
| **Unit system** | Set to `metric` (kilometers per hour) or `imperial` (miles per hour) |
</TabItem>
<TabItem value="yaml">
```yaml
ui:
# can be "metric" or "imperial", default is metric
unit_system: metric
```
</TabItem>
</ConfigTabs>
The average speed of your object as it moved through your zone is saved in Frigate's database and can be seen in the UI in the Tracked Object Details pane in Explore. Current estimated speed can also be seen on the debug view as the third value in the object label (see the caveats below). Current estimated speed, average estimated speed, and velocity angle (the angle of the direction the object is moving relative to the frame) of tracked objects are also sent through the `events` MQTT topic. See the [MQTT docs](../integrations/mqtt.md#frigateevents).
These speed values are output as a number in miles per hour (mph) or kilometers per hour (kph). For miles per hour, set `unit_system` to `imperial`. For kilometers per hour, set `unit_system` to `metric`.
@ -191,6 +353,17 @@ These speed values are output as a number in miles per hour (mph) or kilometers
Zones can be configured with a minimum speed requirement, meaning an object must be moving at or above this speed to be considered inside the zone. Zone `distances` must be defined as described above.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select the desired camera.
2. Edit or create the zone with distances configured.
- Set **Speed threshold** to the desired minimum speed (e.g., `20`)
- The unit is kph or mph, depending on the **Unit system** setting
</TabItem>
<TabItem value="yaml">
```yaml
cameras:
name_of_your_camera:
@ -202,3 +375,6 @@ cameras:
# highlight-next-line
speed_threshold: 20 # unit is in kph or mph, depending on how unit_system is set (see above)
```
</TabItem>
</ConfigTabs>
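A self-contained sketch combining `distances` with a minimum speed (the zone name and values are illustrative):

```yaml
cameras:
  name_of_your_camera:
    zones:
      street:
        coordinates: ...
        distances: 10,12,11,13.5
        # ignore objects moving slower than 20 kph/mph in this zone
        speed_threshold: 20
```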

View File

@ -3,6 +3,10 @@ id: getting_started
title: Getting started
---
import ConfigTabs from "@site/src/components/ConfigTabs";
import TabItem from "@theme/TabItem";
import NavPath from "@site/src/components/NavPath";
# Getting Started
:::tip
@ -85,7 +89,7 @@ This section shows how to create a minimal directory structure for a Docker inst
### Setup directories
Frigate will create a config file if one does not exist on the initial startup. The following directory structure is the bare minimum to get started.
```
.
@ -128,7 +132,7 @@ services:
- "8554:8554" # RTSP feeds
```
Now you should be able to start Frigate by running `docker compose up -d` from within the folder containing `docker-compose.yml`. On startup, an admin user and password will be created and output in the logs. You can see this by running `docker logs frigate`. Frigate should now be accessible at `https://server_ip:8971`, where you can log in with the `admin` user and finish configuration using the Settings UI.
## Configuring Frigate
@ -140,15 +144,15 @@ At this point you should be able to start Frigate and a basic config will be cre
### Step 2: Add a camera
Click the **Add Camera** button in <NavPath path="Settings > Camera configuration > Management" /> to use the camera setup wizard to get your first camera added into Frigate.
### Step 3: Configure hardware acceleration (recommended)
Now that you have a working camera configuration, set up hardware acceleration to minimize the CPU required to decode your video streams. See the [hardware acceleration](../configuration/hardware_acceleration_video.md) docs for examples applicable to your hardware.
Here is an example configuration with hardware acceleration configured to work with most Intel processors with an integrated GPU using the [preset](../configuration/ffmpeg_presets.md):
:::note
Hardware acceleration requires passing the appropriate device to the Docker container. For Intel and AMD GPUs, add the device to your `docker-compose.yml`:
```yaml {4,5}
services:
@ -159,7 +163,17 @@ services:
...
```
After modifying, run `docker compose up -d` to apply changes.
:::
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to the appropriate preset for your hardware (e.g., `VAAPI (Intel/AMD GPU)` for most Intel processors).
</TabItem>
<TabItem value="yaml">
```yaml
mqtt: ...
@ -173,6 +187,9 @@ cameras:
detect: ...
```
</TabItem>
</ConfigTabs>
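For most Intel processors with an integrated GPU, the yaml equivalent uses the VAAPI preset:

```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```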
### Step 4: Configure detectors
By default, Frigate will use a single CPU detector.
@ -184,6 +201,15 @@ In many cases, the integrated graphics on Intel CPUs provides sufficient perform
Refer to **Configure hardware acceleration** above to enable the container to use the GPU.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `openvino` and **Device** `GPU`
2. Navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings for OpenVINO
</TabItem>
<TabItem value="yaml">
```yaml {3-6,9-15,20-21}
mqtt: ...
@ -209,6 +235,9 @@ cameras:
...
```
</TabItem>
</ConfigTabs>
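The minimal detector portion of that configuration might look like this (the detector name `ov` is arbitrary):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU
```

Depending on your setup, a matching `model` section is also required; see the [object detectors docs](../configuration/object_detectors.md).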
</details>
If you have a USB Coral, you will need to add a detectors section to your config.
@ -216,7 +245,9 @@ If you have a USB Coral, you will need to add a detectors section to your config
<details>
<summary>Use USB Coral detector</summary>
:::note
You need to pass the USB Coral device to the Docker container. Add the following to your `docker-compose.yml` and run `docker compose up -d`:
```yaml {4-6}
services:
@ -228,6 +259,16 @@ services:
...
```
:::
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`.
</TabItem>
<TabItem value="yaml">
```yaml {3-6,11-12}
mqtt: ...
@ -244,17 +285,20 @@ cameras:
...
```
</TabItem>
</ConfigTabs>
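The minimal detector portion for a USB Coral (the detector name `coral` is arbitrary):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```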
</details>
More details on available detectors can be found [here](../configuration/object_detectors.md).
Restart Frigate and you should start seeing detections for `person`. If you want to track other objects, they can be configured in <NavPath path="Settings > Global configuration > Objects" /> or via the [configuration file reference](../configuration/reference.md).
### Step 5: Setup motion masks
Now that you have optimized your configuration for decoding the video stream, you will want to check to see where to implement motion masks. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and enable the Debug view to see motion boxes. Watch for areas that continuously trigger unwanted motion to be detected. Common areas to mask include camera timestamps and trees that frequently blow in the wind. The goal is to avoid wasting object detection cycles looking at these areas.
Use the mask editor to draw polygon masks directly on the camera feed. More information about masks can be found [here](../configuration/masks.md).
:::warning
@ -262,37 +306,18 @@ Note that motion masks should not be used to mark out areas where you do not wan
:::
Your configuration should look similar to this now.
```yaml {16-18}
mqtt:
enabled: False
detectors:
coral:
type: edgetpu
device: usb
cameras:
name_of_your_camera:
ffmpeg:
inputs:
- path: rtsp://10.0.10.10:554/rtsp
roles:
- detect
motion:
mask:
motion_area:
friendly_name: "Motion mask"
enabled: true
coordinates: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
```
### Step 6: Enable recordings
In order to review activity in the Frigate UI, recordings need to be enabled.
To enable video recording, add the `record` role to a stream and enable recording in the config. If recording is disabled in the config, it can't be enabled from the UI.
<ConfigTabs>
<TabItem value="ui">
1. If you have separate streams for detect and record, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and add a second input with the `record` role pointing to your high-resolution stream
2. Navigate to <NavPath path="Settings > Global configuration > Recording" /> (or <NavPath path="Settings > Camera configuration > Recording" /> for a specific camera) and set **Enable recording** to on
</TabItem>
<TabItem value="yaml">
```yaml {16-17}
mqtt: ...
@ -315,6 +340,9 @@ cameras:
motion: ...
```
</TabItem>
</ConfigTabs>
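A sketch with separate detect and record streams (the stream URLs are illustrative):

```yaml
cameras:
  name_of_your_camera:
    ffmpeg:
      inputs:
        - path: rtsp://10.0.10.10:554/rtsp
          roles:
            - detect
        - path: rtsp://10.0.10.10:554/rtsp_hd # hypothetical high-resolution stream
          roles:
            - record
    record:
      enabled: true
```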
If you don't have separate streams for detect and record, just add the `record` role to the first input's list of roles.
:::note