This commit is contained in:
Josh Hawkins 2026-03-30 09:05:14 -05:00
parent d4cd5ac08d
commit 053aa6fe32
8 changed files with 73 additions and 54 deletions


@ -342,11 +342,7 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > System > Authentication" /> and configure roles under the **Roles** section.
| Field | Description |
| --------- | ----------------------------------------------------------------- |
| **Roles** | Define custom roles and assign which cameras each role can access |
Navigate to <NavPath path="Settings > Users > Roles" /> to define custom roles and assign which cameras each role can access.
</TabItem>
<TabItem value="yaml">


@ -33,7 +33,7 @@ A growing list of cameras and brands that have been reported by users to work wi
First, set up a PTZ preset in your camera's firmware and give it a name. If you're unsure how to do this, consult the documentation for your camera manufacturer's firmware. Some tutorials for common brands: [Amcrest](https://www.youtube.com/watch?v=lJlE9-krmrM), [Reolink](https://www.youtube.com/watch?v=VAnxHUY5i5w), [Dahua](https://www.youtube.com/watch?v=7sNbc5U-k54).
Configure the ONVIF connection and autotracking parameters for your camera. Specify the object types to track, a required zone the object must enter to begin autotracking, and the camera preset name to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
Configure the ONVIF connection and autotracking parameters for your camera. Specify the object types to track, a required zone the object must enter to begin autotracking, and the camera preset name you configured in your camera's firmware to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
An [ONVIF connection](cameras.md) is required for autotracking to function. Also, a [motion mask](masks.md) over your camera's timestamp and any overlay text is recommended to ensure they are completely excluded from scene change calculations when the camera is moving.
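The settings above map onto the camera-level `onvif` section of the config file. A minimal sketch (camera name, zone name, and preset name are placeholders; key names follow Frigate's reference config, so verify them against your installed version):

```yaml
cameras:
  ptz_camera: # placeholder camera name
    onvif:
      host: 10.0.10.10
      port: 8000
      user: admin
      password: password
      autotracking:
        enabled: True
        # object types to track
        track:
          - person
        # zone the object must enter before autotracking starts
        required_zones:
          - zone_a # placeholder zone name
        # preset name configured in the camera's firmware
        return_preset: home
        # optional delay (seconds) before returning to the preset
        timeout: 10
```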


@ -106,13 +106,11 @@ Configure the ONVIF connection for your camera to enable PTZ controls.
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and select your camera.
- Set **Ffmpeg** to `...`
2. Navigate to <NavPath path="Settings > Camera configuration > ONVIF" /> and select your camera.
- Set **ONVIF host** to `10.0.10.10`
- Set **ONVIF port** to `8000`
- Set **ONVIF username** to `admin`
- Set **ONVIF password** to `password`
1. Navigate to <NavPath path="Settings > Camera configuration > ONVIF" /> and select your camera.
- Set **ONVIF host** to your camera's IP address, e.g.: `10.0.10.10`
- Set **ONVIF port** to your camera's ONVIF port, e.g.: `8000`
- Set **ONVIF username** to your camera's ONVIF username, e.g.: `admin`
- Set **ONVIF password** to your camera's ONVIF password, e.g.: `password`
</TabItem>
<TabItem value="yaml">
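For reference, a minimal YAML equivalent of the UI fields above (a sketch per Frigate's reference config, not necessarily the exact contents of this file's elided YAML tab):

```yaml
cameras:
  name_of_your_camera:
    onvif:
      host: 10.0.10.10 # your camera's IP address
      port: 8000 # your camera's ONVIF port
      user: admin
      password: password
```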
@ -189,12 +187,7 @@ Camera groups let you organize cameras together with a shared name and icon, mak
<ConfigTabs>
<TabItem value="ui">
1. Navigate to <NavPath path="Settings > General > UI settings" />.
2. Under the camera groups section, create a new group:
- Set the **group name** (e.g. `front`)
- Select the **cameras** to include (e.g. `driveway_cam`, `garage_cam`)
- Choose an **icon** (e.g. `LuCar`)
- Set the **order** to control the display position
On the Live dashboard, press the **+** icon in the main navigation to add a new camera group. Configure the group name, select which cameras to include, choose an icon, and set the display order.
</TabItem>
<TabItem value="yaml">
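A minimal YAML sketch of the same camera group (key names per Frigate's reference config; verify against your version):

```yaml
camera_groups:
  front:
    order: 0 # controls the display position
    icon: LuCar
    cameras:
      - driveway_cam
      - garage_cam
```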


@ -123,7 +123,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
Navigate to <NavPath path="Settings > System > Logging" />.
- Set **Logging level** to `debug`
- Set **Per-process log level > Frigate.Data Processing.Real Time.Custom Classification** to `debug` for verbose classification logging
- Set **Per-process log level > `frigate.data_processing.real_time.custom_classification`** to `debug` for verbose classification logging
</TabItem>
<TabItem value="yaml">
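The equivalent YAML, as a sketch (key names per Frigate's reference config):

```yaml
logger:
  default: debug
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```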


@ -77,9 +77,10 @@ Fine-tune face recognition with these optional parameters. The only optional par
Navigate to <NavPath path="Settings > Enrichments > Face recognition" />.
- Set **Enable face recognition** to on
- Set **Detection threshold** to `0.7`
- Set **Minimum face area** to `500`
- **Detection threshold**: Face detection confidence score required before recognition runs. This field only applies to the standalone face detection model; `min_score` should be used to filter for models that have face detection built in.
- Default: `0.7`
- **Minimum face area**: Minimum size (in pixels) a face must be before recognition runs. Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant faces.
- Default: `500` pixels
</TabItem>
<TabItem value="yaml">
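A minimal YAML sketch of the fields above (key names follow Frigate's reference config; verify against your version):

```yaml
face_recognition:
  enabled: True
  detection_threshold: 0.7
  min_area: 500 # minimum face size in pixels
```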
@ -101,14 +102,19 @@ face_recognition:
Navigate to <NavPath path="Settings > Enrichments > Face recognition" />.
- Set **Enable face recognition** to on
- Set **Model size** to `small`
- Set **Unknown score threshold** to `0.8`
- Set **Recognition threshold** to `0.9`
- Set **Minimum faces** to `1`
- Set **Save attempts** to `200`
- Set **Blur confidence filter** to on
- Set **Device** to `None`
- **Model size**: Which model size to use, options are `small` or `large`.
- **Unknown score threshold**: Min score to mark a person as a potential match; matches at or below this will be marked as unknown.
- Default: `0.8`
- **Recognition threshold**: Recognition confidence score required to add the face to the object as a sub label.
- Default: `0.9`
- **Minimum faces**: Min face recognitions for the sub label to be applied to the person object.
- Default: `1`
- **Save attempts**: Number of images of recognized faces to save for training.
- Default: `200`
- **Blur confidence filter**: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`
- **Device**: Target a specific device to run the face recognition model on (multi-GPU installation). This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/).
- Default: `None`
</TabItem>
<TabItem value="yaml">
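The same parameters expressed in YAML, shown at their defaults as a sketch (key names per Frigate's reference config; verify against your version):

```yaml
face_recognition:
  enabled: True
  model_size: small # or large
  unknown_score: 0.8
  recognition_threshold: 0.9
  min_faces: 1
  save_attempts: 200
  blur_confidence_filter: True
```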


@ -92,13 +92,16 @@ Fine-tune the LPR feature using these optional parameters. The only optional par
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />. For example:
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Detection threshold** to `0.7` — minimum confidence for the plate detector to consider a region as a license plate
- Set **Minimum plate area** to `1000` — ignore plates smaller than 1000 pixels (length x width)
- Set **Device** to `CPU` — device to run the plate detection model on (can also be `GPU`)
- Set **Model size** to `small` — most users should use `small`
- **Detection threshold**: License plate object detection confidence score required before recognition runs. This field only applies to the standalone license plate detection model; `threshold` and `min_score` object filters should be used for models like Frigate+ that have license plate detection built in.
- Default: `0.7`
- **Minimum plate area**: Minimum area (in pixels) a license plate must be before recognition runs. This is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image. Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- Default: `1000` pixels
- **Device**: Device to use to run license plate detection _and_ recognition models. Auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
- Default: `None`
- **Model size**: The size of the model used to identify regions of text on plates. The `small` model is fast and identifies groups of Latin and Chinese characters. The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. If your country or region does not use multi-line plates, you should use the `small` model.
- Default: `small`
</TabItem>
<TabItem value="yaml">
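A minimal YAML sketch of the fields above (key names per Frigate's reference config; verify against your version):

```yaml
lpr:
  enabled: True
  detection_threshold: 0.7
  min_area: 1000 # area in pixels (length x width)
  device: CPU # or GPU; omit to let Frigate auto-select
  model_size: small
```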
@ -120,12 +123,12 @@ lpr:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />. For example:
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Recognition threshold** to `0.9` — minimum confidence for recognized text to be accepted as a valid plate
- Set **Min plate length** to `4` — only accept plates with 4 or more characters
- Set **Plate format regex** to `^[A-Z]{2}[0-9]{2} [A-Z]{3}$` — only accept plates matching this format (e.g., UK-style plates)
- **Recognition threshold**: Recognition confidence score required to add the plate to the object as a `recognized_license_plate` and/or `sub_label`.
- Default: `0.9`
- **Min plate length**: Minimum number of characters a detected license plate must have to be added as a `recognized_license_plate` and/or `sub_label`. Use this to filter out short, incomplete, or incorrect detections.
- **Plate format regex**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded. Websites like https://regex101.com/ can help test regular expressions for your plates.
</TabItem>
<TabItem value="yaml">
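As YAML, a sketch of the same fields (key names per Frigate's reference config; the regex is an example UK-style format, not a recommendation):

```yaml
lpr:
  enabled: True
  recognition_threshold: 0.9
  min_plate_length: 4
  format: "^[A-Z]{2}[0-9]{2} [A-Z]{3}$"
```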
@ -146,12 +149,10 @@ lpr:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />. For example:
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Match distance** to `1` — allow up to 1 character mismatch when comparing detected plates to known plates
- Set **Known plates > Wife's Car** to `ABC-1234` — exact plate number for this vehicle
- Set **Known plates > Johnny** to `J*N-*234` — wildcard pattern matching multiple plate variations
- **Known plates**: Assign custom `sub_label` values to `car` and `motorcycle` objects when a recognized plate matches a known value. These labels appear in the UI, filters, and notifications. Unknown plates are still saved but are added to the `recognized_license_plate` field rather than the `sub_label`.
- **Match distance**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate. For example, setting to `1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`. This parameter will _not_ operate on known plates that are defined as regular expressions.
</TabItem>
<TabItem value="yaml">
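A YAML sketch of known plates and match distance (key names per Frigate's reference config; the plate names and values are illustrative placeholders):

```yaml
lpr:
  enabled: True
  match_distance: 1
  known_plates:
    Wife's Car:
      - "ABC-1234"
    Johnny:
      - "J*N-*234" # pattern matching multiple plate variations
```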
@ -175,10 +176,10 @@ lpr:
<ConfigTabs>
<TabItem value="ui">
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />. For example:
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Enhancement level** to `1` — applies image enhancement (0-10) to plate crops before OCR to improve character recognition
- **Enhancement level**: A value between 0 and 10 that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters. This setting is best adjusted at the camera level if running LPR on multiple cameras.
- Default: `0` (no enhancement)
</TabItem>
<TabItem value="yaml">
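In YAML, a sketch (key name per Frigate's reference config; best set at the camera level when running LPR on multiple cameras):

```yaml
lpr:
  enabled: True
  enhancement: 1 # 0-10, default 0 (no enhancement)
```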
@ -246,8 +247,7 @@ These rules must be defined at the global level of your `lpr` config.
Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.
- Set **Enable LPR** to on
- Set **Save debug plates** to on
- **Save debug plates**: Set to on to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
</TabItem>
<TabItem value="yaml">
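The equivalent YAML, as a sketch (key name per Frigate's reference config):

```yaml
lpr:
  enabled: True
  debug_save_plates: True # images saved under /media/frigate/clips/lpr
```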


@ -203,8 +203,6 @@ record:
This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.
**WARNING**: Recordings must be enabled. If a camera has recordings disabled, enabling via the methods listed above will have no effect.
## Can I have "continuous" recordings, but only at certain times?
Using Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.


@ -315,6 +315,32 @@ Note that motion masks should not be used to mark out areas where you do not wan
:::
If you are using YAML to configure Frigate instead of the UI, your configuration should look similar to this now:
```yaml {16-18}
mqtt:
  enabled: False
detectors:
  coral:
    type: edgetpu
    device: usb
cameras:
  name_of_your_camera:
    ffmpeg:
      inputs:
        - path: rtsp://10.0.10.10:554/rtsp
          roles:
            - detect
    motion:
      mask:
        motion_area:
          friendly_name: "Motion mask"
          enabled: true
          coordinates: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
```
### Step 6: Enable recordings
In order to review activity in the Frigate UI, recordings need to be enabled.
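If you are configuring via YAML, this is a one-line addition at the camera or global level (a sketch per Frigate's reference config):

```yaml
record:
  enabled: True
```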