diff --git a/docs/docs/configuration/authentication.md b/docs/docs/configuration/authentication.md
index e72197f69..dcd5d84a1 100644
--- a/docs/docs/configuration/authentication.md
+++ b/docs/docs/configuration/authentication.md
@@ -342,11 +342,7 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust
-Navigate to and configure roles under the **Roles** section.
-
-| Field     | Description                                                       |
-| --------- | ----------------------------------------------------------------- |
-| **Roles** | Define custom roles and assign which cameras each role can access |
+Navigate to to define custom roles and assign which cameras each role can access.
diff --git a/docs/docs/configuration/autotracking.md b/docs/docs/configuration/autotracking.md
index 91c16019d..27312eaa9 100644
--- a/docs/docs/configuration/autotracking.md
+++ b/docs/docs/configuration/autotracking.md
@@ -33,7 +33,7 @@ A growing list of cameras and brands that have been reported by users to work wi
 First, set up a PTZ preset in your camera's firmware and give it a name. If you're unsure how to do this, consult the documentation for your camera manufacturer's firmware. Some tutorials for common brands: [Amcrest](https://www.youtube.com/watch?v=lJlE9-krmrM), [Reolink](https://www.youtube.com/watch?v=VAnxHUY5i5w), [Dahua](https://www.youtube.com/watch?v=7sNbc5U-k54).
-Configure the ONVIF connection and autotracking parameters for your camera. Specify the object types to track, a required zone the object must enter to begin autotracking, and the camera preset name to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
+Configure the ONVIF connection and autotracking parameters for your camera. Specify the object types to track, a required zone the object must enter to begin autotracking, and the camera preset name you configured in your camera's firmware to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
 An [ONVIF connection](cameras.md) is required for autotracking to function. Also, a [motion mask](masks.md) over your camera's timestamp and any overlay text is recommended to ensure they are completely excluded from scene change calculations when the camera is moving.
diff --git a/docs/docs/configuration/cameras.md b/docs/docs/configuration/cameras.md
index 56c988f3c..8094c9f1c 100644
--- a/docs/docs/configuration/cameras.md
+++ b/docs/docs/configuration/cameras.md
@@ -106,13 +106,11 @@ Configure the ONVIF connection for your camera to enable PTZ controls.
-1. Navigate to and select your camera.
-   - Set **Ffmpeg** to `...`
-2. Navigate to and select your camera.
-   - Set **ONVIF host** to `10.0.10.10`
-   - Set **ONVIF port** to `8000`
-   - Set **ONVIF username** to `admin`
-   - Set **ONVIF password** to `password`
+1. Navigate to and select your camera.
+   - Set **ONVIF host** to your camera's IP address, e.g.: `10.0.10.10`
+   - Set **ONVIF port** to your camera's ONVIF port, e.g.: `8000`
+   - Set **ONVIF username** to your camera's ONVIF username, e.g.: `admin`
+   - Set **ONVIF password** to your camera's ONVIF password, e.g.: `password`
@@ -189,12 +187,7 @@ Camera groups let you organize cameras together with a shared name and icon, mak
-1. Navigate to .
-2. Under the camera groups section, create a new group:
-   - Set the **group name** (e.g. `front`)
-   - Select the **cameras** to include (e.g. `driveway_cam`, `garage_cam`)
-   - Choose an **icon** (e.g. `LuCar`)
-   - Set the **order** to control the display position
+On the Live dashboard, press the **+** icon in the main navigation to add a new camera group. Configure the group name, select which cameras to include, choose an icon, and set the display order.
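> Reviewer note: for readers who configure via YAML rather than the UI, the camera group described in this hunk maps to Frigate's `camera_groups` config. A sketch, reusing the illustrative names from the bullets this hunk removes (`front`, `driveway_cam`, `garage_cam`, `LuCar`):

```yaml
camera_groups:
  front:
    cameras:
      - driveway_cam
      - garage_cam
    icon: LuCar
    order: 0
```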
diff --git a/docs/docs/configuration/custom_classification/state_classification.md b/docs/docs/configuration/custom_classification/state_classification.md
index d5b8a1295..688b8bb0d 100644
--- a/docs/docs/configuration/custom_classification/state_classification.md
+++ b/docs/docs/configuration/custom_classification/state_classification.md
@@ -123,7 +123,7 @@ Enable debug logs for classification models by adding `frigate.data_processing.r
 Navigate to .
 - Set **Logging level** to `debug`
-- Set **Per-process log level > Frigate.Data Processing.Real Time.Custom Classification** to `debug` for verbose classification logging
+- Set **Per-process log level > `frigate.data_processing.real_time.custom_classification`** to `debug` for verbose classification logging
diff --git a/docs/docs/configuration/face_recognition.md b/docs/docs/configuration/face_recognition.md
index 7a1041583..7c23884cc 100644
--- a/docs/docs/configuration/face_recognition.md
+++ b/docs/docs/configuration/face_recognition.md
@@ -77,9 +77,10 @@ Fine-tune face recognition with these optional parameters. The only optional par
 Navigate to .
-- Set **Enable face recognition** to on
-- Set **Detection threshold** to `0.7`
-- Set **Minimum face area** to `500`
+- **Detection threshold**: Face detection confidence score required before recognition runs. This field only applies to the standalone face detection model; `min_score` should be used to filter for models that have face detection built in.
+  - Default: `0.7`
+- **Minimum face area**: Minimum size (in pixels) a face must be before recognition runs. Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant faces.
+  - Default: `500` pixels
@@ -101,14 +102,19 @@ face_recognition:
 Navigate to .
-- Set **Enable face recognition** to on
-- Set **Model size** to `small`
-- Set **Unknown score threshold** to `0.8`
-- Set **Recognition threshold** to `0.9`
-- Set **Minimum faces** to `1`
-- Set **Save attempts** to `200`
-- Set **Blur confidence filter** to on
-- Set **Device** to `None`
+- **Model size**: Which model size to use, options are `small` or `large`.
+- **Unknown score threshold**: Min score to mark a person as a potential match; matches at or below this will be marked as unknown.
+  - Default: `0.8`
+- **Recognition threshold**: Recognition confidence score required to add the face to the object as a sub label.
+  - Default: `0.9`
+- **Minimum faces**: Min face recognitions for the sub label to be applied to the person object.
+  - Default: `1`
+- **Save attempts**: Number of images of recognized faces to save for training.
+  - Default: `200`
+- **Blur confidence filter**: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
+  - Default: `True`
+- **Device**: Target a specific device to run the face recognition model on (multi-GPU installation). This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/).
+  - Default: `None`
diff --git a/docs/docs/configuration/license_plate_recognition.md b/docs/docs/configuration/license_plate_recognition.md
index 1d7e98ee7..017cc5e16 100644
--- a/docs/docs/configuration/license_plate_recognition.md
+++ b/docs/docs/configuration/license_plate_recognition.md
@@ -92,13 +92,16 @@ Fine-tune the LPR feature using these optional parameters. The only optional par
-Navigate to . For example:
+Navigate to .
-- Set **Enable LPR** to on
-- Set **Detection threshold** to `0.7` — minimum confidence for the plate detector to consider a region as a license plate
-- Set **Minimum plate area** to `1000` — ignore plates smaller than 1000 pixels (length x width)
-- Set **Device** to `CPU` — device to run the plate detection model on (can also be `GPU`)
-- Set **Model size** to `small` — most users should use `small`
+- **Detection threshold**: License plate object detection confidence score required before recognition runs. This field only applies to the standalone license plate detection model; `threshold` and `min_score` object filters should be used for models like Frigate+ that have license plate detection built in.
+  - Default: `0.7`
+- **Minimum plate area**: Minimum area (in pixels) a license plate must be before recognition runs. This is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image. Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
+  - Default: `1000` pixels
+- **Device**: Device to use to run license plate detection _and_ recognition models. Auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
+  - Default: `None`
+- **Model size**: The size of the model used to identify regions of text on plates. The `small` model is fast and identifies groups of Latin and Chinese characters. The `large` model identifies Latin characters only, and uses an enhanced text detector to find characters on multi-line plates. If your country or region does not use multi-line plates, you should use the `small` model.
+  - Default: `small`
@@ -120,12 +123,12 @@ lpr:
-Navigate to . For example:
+Navigate to .
-- Set **Enable LPR** to on
-- Set **Recognition threshold** to `0.9` — minimum confidence for recognized text to be accepted as a valid plate
-- Set **Min plate length** to `4` — only accept plates with 4 or more characters
-- Set **Plate format regex** to `^[A-Z]{2}[0-9]{2} [A-Z]{3}$` — only accept plates matching this format (e.g., UK-style plates)
+- **Recognition threshold**: Recognition confidence score required to add the plate to the object as a `recognized_license_plate` and/or `sub_label`.
+  - Default: `0.9`
+- **Min plate length**: Minimum number of characters a detected license plate must have to be added as a `recognized_license_plate` and/or `sub_label`. Use this to filter out short, incomplete, or incorrect detections.
+- **Plate format regex**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded. Websites like https://regex101.com/ can help test regular expressions for your plates.
@@ -146,12 +149,10 @@ lpr:
-Navigate to . For example:
+Navigate to .
-- Set **Enable LPR** to on
-- Set **Match distance** to `1` — allow up to 1 character mismatch when comparing detected plates to known plates
-- Set **Known plates > Wife'S Car** to `ABC-1234` — exact plate number for this vehicle
-- Set **Known plates > Johnny** to `J*N-*234` — wildcard pattern matching multiple plate variations
+- **Known plates**: Assign custom `sub_label` values to `car` and `motorcycle` objects when a recognized plate matches a known value. These labels appear in the UI, filters, and notifications. Unknown plates are still saved but are added to the `recognized_license_plate` field rather than the `sub_label`.
+- **Match distance**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate. For example, setting to `1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`. This parameter will _not_ operate on known plates that are defined as regular expressions.
@@ -175,10 +176,10 @@ lpr:
-Navigate to . For example:
+Navigate to .
-- Set **Enable LPR** to on
-- Set **Enhancement level** to `1` — applies image enhancement (0-10) to plate crops before OCR to improve character recognition
+- **Enhancement level**: A value between 0 and 10 that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters. This setting is best adjusted at the camera level if running LPR on multiple cameras.
+  - Default: `0` (no enhancement)
@@ -246,8 +247,7 @@ These rules must be defined at the global level of your `lpr` config.
 Navigate to .
-- Set **Enable LPR** to on
-- Set **Save debug plates** to on
+- **Save debug plates**: Set to on to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `/`, and named based on the capture timestamp.
diff --git a/docs/docs/configuration/record.md b/docs/docs/configuration/record.md
index 998c6053e..d98f51491 100644
--- a/docs/docs/configuration/record.md
+++ b/docs/docs/configuration/record.md
@@ -203,8 +203,6 @@ record:
 This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.
-**WARNING**: Recordings must be enabled. If a camera has recordings disabled, enabling via the methods listed above will have no effect.
-
 ## Can I have "continuous" recordings, but only at certain times?
 Using Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.
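> Reviewer note: the 10-day retention behavior described in the record.md hunk corresponds to a `record` config along these lines. This is a sketch assuming the alerts/detections retention schema implied by the `record:` hunk context; the actual block sits just above the quoted lines and is not shown in this diff:

```yaml
record:
  enabled: true
  alerts:
    retain:
      days: 10
  detections:
    retain:
      days: 10
```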
diff --git a/docs/docs/guides/getting_started.md b/docs/docs/guides/getting_started.md
index 5e3ff665a..cd456f201 100644
--- a/docs/docs/guides/getting_started.md
+++ b/docs/docs/guides/getting_started.md
@@ -315,6 +315,32 @@ Note that motion masks should not be used to mark out areas where you do not wan
 :::
+If you are using YAML to configure Frigate instead of the UI, your configuration should look similar to this now:
+
+```yaml {16-18}
+mqtt:
+  enabled: False
+
+detectors:
+  coral:
+    type: edgetpu
+    device: usb
+
+cameras:
+  name_of_your_camera:
+    ffmpeg:
+      inputs:
+        - path: rtsp://10.0.10.10:554/rtsp
+          roles:
+            - detect
+    motion:
+      mask:
+        motion_area:
+          friendly_name: "Motion mask"
+          enabled: true
+          coordinates: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
+```
+
 ### Step 6: Enable recordings
 In order to review activity in the Frigate UI, recordings need to be enabled.
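> Reviewer note: the Step 6 text that follows the last hunk enables recordings; in YAML that is the `record.enabled` switch. A minimal sketch (retention is configured separately and is not shown here):

```yaml
record:
  enabled: true
```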