diff --git a/docs/docs/configuration/advanced.md b/docs/docs/configuration/advanced.md index 33d73f665..e6de72593 100644 --- a/docs/docs/configuration/advanced.md +++ b/docs/docs/configuration/advanced.md @@ -253,7 +253,7 @@ IPv6 is disabled by default. Enable it in the Frigate configuration. -Navigate to and enable **IPv6**. +Navigate to and expand **IPv6 configuration**, then enable **Enable IPv6**. diff --git a/docs/docs/configuration/audio_detectors.md b/docs/docs/configuration/audio_detectors.md index 151284dc6..bb646e677 100644 --- a/docs/docs/configuration/audio_detectors.md +++ b/docs/docs/configuration/audio_detectors.md @@ -20,9 +20,9 @@ Audio events can be enabled globally or for specific cameras. -**Global:** Navigate to and set **Enabled** to on. +**Global:** Navigate to and set **Enable audio detection** to on. -**Per-camera:** Navigate to and set **Enabled** to on for the desired camera. +**Per-camera:** Navigate to and set **Enable audio detection** to on for the desired camera. diff --git a/docs/docs/configuration/authentication.md b/docs/docs/configuration/authentication.md index 9ddb63b16..e72197f69 100644 --- a/docs/docs/configuration/authentication.md +++ b/docs/docs/configuration/authentication.md @@ -71,10 +71,10 @@ If you are running a reverse proxy in the same Docker Compose file as Frigate, c Navigate to . 
-| Field | Description | -|-------|-------------| -| **Failed login limits** | Rate limit string for login failures (e.g., `1/second;5/minute;20/hour`) | -| **Trusted proxies** | List of upstream network CIDRs to trust for `X-Forwarded-For` (e.g., `172.18.0.0/16` for internal Docker Compose network) | +| Field | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------- | +| **Failed login limits** | Rate limit string for login failures (e.g., `1/second;5/minute;20/hour`) | +| **Trusted proxies** | List of upstream network CIDRs to trust for `X-Forwarded-For` (e.g., `172.18.0.0/16` for internal Docker Compose network) | @@ -182,13 +182,13 @@ If you have disabled Frigate's authentication and your proxy supports passing a -Navigate to and configure the proxy header mapping settings. +Navigate to and configure the header mapping and separator settings. -| Field | Description | -|-------|-------------| -| **Proxy > Separator** | Character separating multiple roles in the role header (default: comma). Authentik uses a pipe `\|`. | -| **Proxy > Header Map > User** | Header name for the authenticated username (e.g., `x-forwarded-user`) | -| **Proxy > Header Map > Role** | Header name for the authenticated role/groups (e.g., `x-forwarded-groups`) | +| Field | Description | +| -------------------------------- | ---------------------------------------------------------------------------------------------------- | +| **Separator character** | Character separating multiple roles in the role header (default: comma). Authentik uses a pipe `\|`. | +| **Header mapping > User header** | Header name for the authenticated username (e.g., `x-forwarded-user`) | +| **Header mapping > Role header** | Header name for the authenticated role/groups (e.g., `x-forwarded-groups`) | @@ -212,11 +212,11 @@ A default role can be provided. 
Any value in the mapped `role` header will overr -Navigate to and set the default role under the proxy settings. +Navigate to and set the default role. -| Field | Description | -|-------|-------------| -| **Proxy > Default Role** | Fallback role when no role header is present (e.g., `viewer`) | +| Field | Description | +| ---------------- | ------------------------------------------------------------- | +| **Default role** | Fallback role when no role header is present (e.g., `viewer`) | @@ -237,11 +237,11 @@ In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do no -Navigate to and configure the role mapping under the proxy header map settings. +Navigate to and configure the role mapping under the header mapping settings. -| Field | Description | -|-------|-------------| -| **Proxy > Header Map > Role Map** | Maps upstream group names to Frigate roles. Each Frigate role (`admin`, `viewer`, or custom) maps to a list of upstream group names. | +| Field | Description | +| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| **Header mapping > Role map** | Maps upstream group names to Frigate roles. Each Frigate role (`admin`, `viewer`, or custom) maps to a list of upstream group names. | @@ -344,8 +344,8 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust Navigate to and configure roles under the **Roles** section. 
-| Field | Description | -|-------|-------------| +| Field | Description | +| --------- | ----------------------------------------------------------------- | | **Roles** | Define custom roles and assign which cameras each role can access | diff --git a/docs/docs/configuration/autotracking.md b/docs/docs/configuration/autotracking.md index 1e8e14f8e..91c16019d 100644 --- a/docs/docs/configuration/autotracking.md +++ b/docs/docs/configuration/autotracking.md @@ -46,28 +46,27 @@ Navigate to for the d **ONVIF Connection** -| Field | Description | -|-------|-------------| -| **Host** | Host of the camera being connected to. HTTP is assumed by default; prefix with `https://` for HTTPS. | -| **Port** | ONVIF port for device (default: 8000) | -| **User** | Username for login. Some devices require admin to access ONVIF. | -| **Password** | Password for login | -| **TLS Insecure** | Skip TLS verification from the ONVIF server (default: false) | -| **Profile** | ONVIF media profile to use for PTZ control, matched by token or name. If not set, the first profile with valid PTZ configuration is selected automatically. | +| Field | Description | +| ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **ONVIF host** | Host of the camera being connected to. HTTP is assumed by default; prefix with `https://` for HTTPS. | +| **ONVIF port** | ONVIF port for device (default: 8000) | +| **ONVIF username** | Username for login. Some devices require admin to access ONVIF. | +| **ONVIF password** | Password for login | +| **Disable TLS verify** | Skip TLS verification and disable digest auth for ONVIF (default: false) | +| **ONVIF profile** | ONVIF media profile to use for PTZ control, matched by token or name. If not set, the first profile with valid PTZ configuration is selected automatically. 
| **Autotracking** -| Field | Description | -|-------|-------------| -| **Enabled** | Enable or disable object autotracking (default: false) | -| **Calibrate on Startup** | Calibrate the camera on startup by measuring PTZ motor speed (default: false) | -| **Zooming** | Zoom mode during autotracking: `disabled`, `absolute`, or `relative` (default: disabled) | -| **Zoom Factor** | Controls zoom behavior on tracked objects, between 0.1 and 0.75. Lower keeps more scene visible; higher zooms in more (default: 0.3) | -| **Track** | List of object types to track (default: person) | -| **Required Zones** | Zones an object must enter to begin autotracking | -| **Return Preset** | Name of ONVIF preset in camera firmware to return to when tracking ends (default: home) | -| **Timeout** | Seconds to delay before returning to preset (default: 10) | -| **Movement Weights** | Auto-generated calibration values. Do not modify manually. | +| Field | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| **Enable Autotracking** | Enable or disable object autotracking (default: false) | +| **Calibrate on start** | Calibrate the camera on startup by measuring PTZ motor speed (default: false) | +| **Zoom mode** | Zoom mode during autotracking: `disabled`, `absolute`, or `relative` (default: disabled) | +| **Zoom Factor** | Controls zoom behavior on tracked objects, between 0.1 and 0.75. 
Lower keeps more scene visible; higher zooms in more (default: 0.3) | +| **Tracked objects** | List of object types to track (default: person) | +| **Required Zones** | Zones an object must enter to begin autotracking | +| **Return Preset** | Name of ONVIF preset in camera firmware to return to when tracking ends (default: home) | +| **Return timeout** | Seconds to delay before returning to preset (default: 10) | diff --git a/docs/docs/configuration/bird_classification.md b/docs/docs/configuration/bird_classification.md index 81dc65e84..75c0b8306 100644 --- a/docs/docs/configuration/bird_classification.md +++ b/docs/docs/configuration/bird_classification.md @@ -27,7 +27,7 @@ Bird classification is disabled by default and must be enabled before it can be Navigate to . - Set **Bird classification config > Bird classification** to on -- Set **Bird classification config > Bird classification threshold** to the desired confidence score (default: 0.9) +- Set **Bird classification config > Minimum score** to the desired confidence score (default: 0.9) diff --git a/docs/docs/configuration/birdseye.md b/docs/docs/configuration/birdseye.md index 82c7829e0..810449478 100644 --- a/docs/docs/configuration/birdseye.md +++ b/docs/docs/configuration/birdseye.md @@ -37,8 +37,8 @@ To include a camera in Birdseye view only for specific circumstances, or exclude | Field | Description | |-------|-------------| -| **Enabled** | Whether this camera appears in Birdseye view | -| **Mode** | When to show the camera: `continuous`, `motion`, or `objects` | +| **Enable Birdseye** | Whether this camera appears in Birdseye view | +| **Tracking mode** | When to show the camera: `continuous`, `motion`, or `objects` | @@ -125,7 +125,7 @@ It is possible to override the order of cameras that are being shown in the Bird -Navigate to for each camera and set the **Order** field to control the display order. +Navigate to for each camera and set the **Position** field to control the display order. 
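For reference, the ONVIF and autotracking fields in the tables above correspond to camera-level YAML roughly as follows — a sketch based on the Frigate reference config, with the camera name, credentials, and zone names purely illustrative:

```yaml
cameras:
  ptz_camera: # illustrative camera name
    onvif:
      host: 192.168.1.20
      port: 8000
      user: admin
      password: password
      autotracking:
        enabled: true
        calibrate_on_startup: true
        zooming: relative # disabled, absolute, or relative
        zoom_factor: 0.3
        track:
          - person
        required_zones:
          - driveway
        return_preset: home
        timeout: 10
```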
diff --git a/docs/docs/configuration/cameras.md b/docs/docs/configuration/cameras.md index a8f2df6c4..56c988f3c 100644 --- a/docs/docs/configuration/cameras.md +++ b/docs/docs/configuration/cameras.md @@ -26,14 +26,15 @@ Each role can only be assigned to one input per camera. The options for roles ar Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| ----------------- | ------------------------------------------------------------------- | | **Camera inputs** | List of input stream definitions (paths and roles) for this camera. | + Navigate to . -| Field | Description | -|-------|-------------| -| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | +| Field | Description | +| ----------------- | ------------------------------------------------------------------------------------------------------ | +| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | | **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | diff --git a/docs/docs/configuration/custom_classification/object_classification.md b/docs/docs/configuration/custom_classification/object_classification.md index e2ec25f65..6e68d4ba9 100644 --- a/docs/docs/configuration/custom_classification/object_classification.md +++ b/docs/docs/configuration/custom_classification/object_classification.md @@ -77,13 +77,19 @@ Object classification is configured as a custom classification model. Each model -Navigate to . +Navigate to the **Classification** page from the main navigation sidebar, then click **Add Classification**. 
-| Field | Description | -|-------|-------------| -| **Custom Classification Models > Dog > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) | -| **Custom Classification Models > Dog > Object Config > Objects** | Object labels to classify (e.g., `dog`, `person`, `car`) | -| **Custom Classification Models > Dog > Object Config > Classification Type** | Whether to assign results as a **sub label** or **attribute** | +In the **Create New Classification** dialog: + +| Field | Description | +| ----------------------- | ------------------------------------------------------------- | +| **Name** | A name for your classification model (e.g., `dog`) | +| **Type** | Select **Object** for object classification | +| **Object Label** | The object label to classify (e.g., `dog`, `person`, `car`) | +| **Classification Type** | Whether to assign results as a **Sub Label** or **Attribute** | +| **Classes** | The class names the model will learn to distinguish between | + +The `threshold` (default: `0.8`) can be adjusted in the YAML configuration. @@ -125,7 +131,7 @@ If examples for some of your classes do not appear in the grid, you can continue :::tip Diversity matters far more than volume -Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the class. **This is why Frigate does not implement bulk training in the UI.** +Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what _that exact moment_ looked like rather than what actually defines the class. 
**This is why Frigate does not implement bulk training in the UI.** For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374). diff --git a/docs/docs/configuration/custom_classification/state_classification.md b/docs/docs/configuration/custom_classification/state_classification.md index e683a52c6..d5b8a1295 100644 --- a/docs/docs/configuration/custom_classification/state_classification.md +++ b/docs/docs/configuration/custom_classification/state_classification.md @@ -42,14 +42,17 @@ State classification is configured as a custom classification model. Each model -Navigate to . +Navigate to the **Classification** page from the main navigation sidebar, select the **States** tab, then click **Add Classification**. -| Field | Description | -|-------|-------------| -| **Custom Classification Models > Front Door > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) | -| **Custom Classification Models > Front Door > State Config > Motion** | Run classification when motion overlaps the crop area | -| **Custom Classification Models > Front Door > State Config > Interval** | Run classification every N seconds (optional) | -| **Custom Classification Models > Front Door > State Config > Cameras > Front > Crop** | The rectangular crop region on each camera to classify | +In the **Create New Classification** dialog: + +| Field | Description | +| ----------- | ------------------------------------------------------------------------------------ | +| **Name** | A name for your state classification model (e.g., `front_door`) | +| **Type** | Select **State** for state classification | +| **Classes** | The state names the model will learn to distinguish between (e.g., `open`, `closed`) | + +After creating the model, the wizard will guide you through selecting the camera crop area and assigning training examples. 
The `threshold` (default: `0.8`), `motion`, and `interval` settings can be adjusted in the YAML configuration. @@ -94,7 +97,7 @@ Once some images are assigned, training will begin automatically. :::tip Diversity matters far more than volume -Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.** +Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what _that exact moment_ looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.** For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374). 
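A YAML sketch of those adjustable settings, assuming the custom classification section of the config (model name, crop coordinates, and values are illustrative):

```yaml
classification:
  custom:
    front_door: # illustrative model name
      threshold: 0.8 # minimum confidence for a classification attempt
      state_config:
        motion: true # classify when motion overlaps the crop
        interval: 10 # also classify every N seconds (optional)
        cameras:
          front:
            crop: [0, 180, 220, 400] # region written by the wizard
```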
diff --git a/docs/docs/configuration/ffmpeg_presets.md b/docs/docs/configuration/ffmpeg_presets.md index 00987e863..333388280 100644 --- a/docs/docs/configuration/ffmpeg_presets.md +++ b/docs/docs/configuration/ffmpeg_presets.md @@ -25,7 +25,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i | preset-nvidia | Nvidia GPU | | | preset-jetson-h264 | Nvidia Jetson with h264 stream | | | preset-jetson-h265 | Nvidia Jetson with h265 stream | | -| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode | +| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode | Select the appropriate hwaccel preset for your hardware. diff --git a/docs/docs/configuration/genai/objects.md b/docs/docs/configuration/genai/objects.md index 023e5823f..eb8dadef5 100644 --- a/docs/docs/configuration/genai/objects.md +++ b/docs/docs/configuration/genai/objects.md @@ -43,9 +43,9 @@ You can define custom prompts at the global level and per-object type. To config 1. Navigate to . - - Expand the **GenAI** section - - Set **Prompt** to your custom prompt text - - Under **Object Prompts**, add entries keyed by object type (e.g., `person`, `car`) with custom prompts for each + - Expand the **GenAI object config** section + - Set **Caption prompt** to your custom prompt text + - Under **Object prompts**, add entries keyed by object type (e.g., `person`, `car`) with custom prompts for each @@ -73,13 +73,13 @@ Prompts can also be overridden at the camera level to provide a more detailed pr 1. Navigate to for the desired camera. 
- - Expand the **GenAI** section - - Set **Enabled** to on - - Set **Use Snapshot** to on if desired - - Set **Prompt** to a camera-specific prompt - - Under **Object Prompts**, add entries keyed by object type with camera-specific prompts - - Set **Objects** to the list of object types that should receive descriptions (e.g., `person`, `cat`) - - Set **Required Zones** to limit descriptions to objects in specific zones (e.g., `steps`) + - Expand the **GenAI object config** section + - Set **Enable GenAI** to on + - Set **Use snapshots** to on if desired + - Set **Caption prompt** to a camera-specific prompt + - Under **Object prompts**, add entries keyed by object type with camera-specific prompts + - Set **GenAI objects** to the list of object types that should receive descriptions (e.g., `person`, `cat`) + - Set **Required zones** to limit descriptions to objects in specific zones (e.g., `steps`) diff --git a/docs/docs/configuration/index.md b/docs/docs/configuration/index.md index e4d790601..84f978078 100644 --- a/docs/docs/configuration/index.md +++ b/docs/docs/configuration/index.md @@ -110,7 +110,7 @@ Here are some common starter configuration examples. These can be configured thr 1. Navigate to and configure the MQTT connection to your Home Assistant Mosquitto broker 2. Navigate to and set **Hardware acceleration arguments** to `Raspberry Pi (H.264)` -3. Navigate to and add a detector with **Type** `edgetpu` and **Device** `usb` +3. Navigate to and add a detector with **Type** `EdgeTPU` and **Device** `usb` 4. Navigate to and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion` 5. 
Navigate to and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30` 6. Navigate to and add your camera with the appropriate RTSP stream URL @@ -187,9 +187,9 @@ cameras: -1. Navigate to and set **Enabled** to off +1. Navigate to and set **Enable MQTT** to off 2. Navigate to and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)` -3. Navigate to and add a detector with **Type** `edgetpu` and **Device** `usb` +3. Navigate to and add a detector with **Type** `EdgeTPU` and **Device** `usb` 4. Navigate to and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion` 5. Navigate to and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30` 6. Navigate to and add your camera with the appropriate RTSP stream URL diff --git a/docs/docs/configuration/license_plate_recognition.md b/docs/docs/configuration/license_plate_recognition.md index 09edaae4f..391fba02b 100644 --- a/docs/docs/configuration/license_plate_recognition.md +++ b/docs/docs/configuration/license_plate_recognition.md @@ -63,7 +63,7 @@ Like other enrichments in Frigate, LPR **must be enabled globally** to use the f -Navigate to for the desired camera and disable the **Enabled** toggle. +Navigate to for the desired camera and disable the **Enable LPR** toggle. @@ -201,8 +201,8 @@ If Frigate is already recognizing plates correctly, leave enhancement at the def Navigate to . 
-| Field | Description | -|-------|-------------| +| Field | Description | +| --------------------- | --------------------------------------------------------------------------------- | | **Replacement rules** | Regex replacement rules used to normalize detected plate strings before matching. | @@ -268,15 +268,15 @@ These configuration parameters are available at the global level. The only optio Navigate to . -| Field | Description | -|-------|-------------| -| **Enable LPR** | Enable or disable license plate recognition for all cameras; can be overridden per-camera. | -| **Minimum plate area** | Minimum plate area (pixels) required to attempt recognition. | -| **Min plate length** | Minimum number of characters a recognized plate must contain to be considered valid. | -| **Known plates > Wife'S Car** | | -| **Known plates > Johnny** | | -| **Known plates > Sally** | | -| **Known plates > Work Trucks** | | +| Field | Description | +| ------------------------------ | ------------------------------------------------------------------------------------------ | +| **Enable LPR** | Enable or disable license plate recognition for all cameras; can be overridden per-camera. | +| **Minimum plate area** | Minimum plate area (pixels) required to attempt recognition. | +| **Min plate length** | Minimum number of characters a recognized plate must contain to be considered valid. | +| **Known plates > Wife'S Car** | | +| **Known plates > Johnny** | | +| **Known plates > Sally** | | +| **Known plates > Work Trucks** | | @@ -308,7 +308,7 @@ If a camera is configured to detect `car` or `motorcycle` but you don't want Fri -Navigate to for the desired camera and disable the **Enabled** toggle. +Navigate to for the desired camera and disable the **Enable LPR** toggle. @@ -351,40 +351,45 @@ An example configuration for a dedicated LPR camera using a `license_plate`-dete Navigate to . 
-| Field | Description | -|-------|-------------| -| **Ffmpeg** | | +| Field | Description | +| ---------- | ----------- | +| **Ffmpeg** | | + Navigate to . -| Field | Description | -|-------|-------------| -| **Enable object detection** | Enable or disable object detection for this camera. | -| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). | -| **Minimum initialization frames** | Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2. | -| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | -| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | +| Field | Description | +| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Enable object detection** | Enable or disable object detection for this camera. | +| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). | +| **Minimum initialization frames** | Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2. | +| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | +| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | + Navigate to . 
-| Field | Description | -|-------|-------------| -| **Objects to track** | List of object labels to track for this camera. | -| **Object filters > License Plate > Threshold** | | +| Field | Description | +| ---------------------------------------------- | ----------------------------------------------- | +| **Objects to track** | List of object labels to track for this camera. | +| **Object filters > License Plate > Threshold** | | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | ------------------------------------------------------------------------------------------------------- | | **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). | -| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. | -| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. | +| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. | +| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | -------------------------------------------- | | **Enable recording** | Enable or disable recording for this camera. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | --------------------------------------------------- | | **Enable snapshots** | Enable or disable saving snapshots for this camera. | @@ -451,46 +456,52 @@ An example configuration for a dedicated LPR camera using the secondary pipeline Navigate to . -| Field | Description | -|-------|-------------| -| **Enable LPR** | Enable or disable LPR on this camera. 
| +| Field | Description | +| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Enable LPR** | Enable or disable LPR on this camera. | | **Enhancement level** | Enhancement level (0-10) to apply to plate crops prior to OCR; higher values may not always improve results, levels above 5 may only work with night time plates and should be used with caution. | + Navigate to . -| Field | Description | -|-------|-------------| -| **Ffmpeg** | | +| Field | Description | +| ---------- | ----------- | +| **Ffmpeg** | | + Navigate to . -| Field | Description | -|-------|-------------| -| **Enable object detection** | Enable or disable object detection for this camera. | -| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). | -| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | -| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | +| Field | Description | +| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Enable object detection** | Enable or disable object detection for this camera. | +| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). | +| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. 
| +| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | ----------------------------------------------- | | **Objects to track** | List of object labels to track for this camera. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | ------------------------------------------------------------------------------------------------------- | | **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). | -| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. | -| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. | +| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. | +| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | -------------------------------------------- | | **Enable recording** | Enable or disable recording for this camera. | + Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| ----------------------------------------- | --------------------------------------------------- | | **Detections config > Enable detections** | Enable or disable detection events for this camera. 
| -| **Detections config > Retain > Default** | | +| **Detections config > Retain > Default** | | @@ -641,7 +652,7 @@ lpr: logger: default: info logs: - # highlight-next-line + # highlight-next-line frigate.data_processing.common.license_plate: debug ``` diff --git a/docs/docs/configuration/live.md b/docs/docs/configuration/live.md index d4fd27b3f..4e4965204 100644 --- a/docs/docs/configuration/live.md +++ b/docs/docs/configuration/live.md @@ -226,10 +226,10 @@ The jsmpeg live view resolution and encoding quality can be adjusted globally or Navigate to for global defaults, or and select a camera for per-camera overrides. -| Field | Description | -|-------|-------------| -| **Live height** | Height in pixels for the jsmpeg live stream; must be less than or equal to the detect stream height | -| **Live quality** | Encoding quality for the jsmpeg stream (1 = highest, 31 = lowest) | +| Field | Description | +| ---------------- | --------------------------------------------------------------------------------------------------- | +| **Live height** | Height in pixels for the jsmpeg live stream; must be less than or equal to the detect stream height | +| **Live quality** | Encoding quality for the jsmpeg stream (1 = highest, 31 = lowest) | diff --git a/docs/docs/configuration/masks.md b/docs/docs/configuration/masks.md index f1fafd728..e497de2c1 100644 --- a/docs/docs/configuration/masks.md +++ b/docs/docs/configuration/masks.md @@ -26,13 +26,7 @@ Object filter masks can be used to filter out stubborn false positives in fixed -Navigate to . - -| Field | Description | -|-------|-------------| -| **Mask coordinates > Mask1 > Friendly Name** | | -| **Mask coordinates > Mask1 > Enabled** | | -| **Mask coordinates > Mask1 > Coordinates** | | +Navigate to and select a camera. Use the mask editor to draw motion masks and object filter masks directly on the camera feed. Each mask can be given a friendly name and toggled on or off. 
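Behind the scenes, the mask editor writes entries like the following into the camera config — a sketch with illustrative coordinates (motion masks and object filter masks use the same point-list format):

```yaml
cameras:
  front: # illustrative camera name
    motion:
      mask:
        - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781
    objects:
      filters:
        person:
          mask:
            - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781
```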
diff --git a/docs/docs/configuration/metrics.md b/docs/docs/configuration/metrics.md index 5bc711b4e..d857d5eee 100644 --- a/docs/docs/configuration/metrics.md +++ b/docs/docs/configuration/metrics.md @@ -31,12 +31,14 @@ Metrics are available at `/api/metrics` by default. No additional Frigate config ## Available Metrics ### System Metrics + - `frigate_cpu_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process CPU usage percentage - `frigate_mem_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process memory usage percentage - `frigate_gpu_usage_percent{gpu_name=""}` - GPU utilization percentage - `frigate_gpu_mem_usage_percent{gpu_name=""}` - GPU memory usage percentage ### Camera Metrics + - `frigate_camera_fps{camera_name=""}` - Frames per second being consumed from your camera - `frigate_detection_fps{camera_name=""}` - Number of times detection is run per second - `frigate_process_fps{camera_name=""}` - Frames per second being processed @@ -46,21 +48,25 @@ Metrics are available at `/api/metrics` by default. 
No additional Frigate config - `frigate_audio_rms{camera_name=""}` - Audio RMS for camera ### Detector Metrics + - `frigate_detector_inference_speed_seconds{name=""}` - Time spent running object detection in seconds - `frigate_detection_start{name=""}` - Detector start time (unix timestamp) ### Storage Metrics + - `frigate_storage_free_bytes{storage=""}` - Storage free bytes - `frigate_storage_total_bytes{storage=""}` - Storage total bytes - `frigate_storage_used_bytes{storage=""}` - Storage used bytes - `frigate_storage_mount_type{mount_type="", storage=""}` - Storage mount type info ### Service Metrics + - `frigate_service_uptime_seconds` - Uptime in seconds - `frigate_service_last_updated_timestamp` - Stats recorded time (unix timestamp) - `frigate_device_temperature{device=""}` - Device Temperature ### Event Metrics + - `frigate_camera_events{camera="", label=""}` - Count of camera events since exporter started ## Configuring Prometheus @@ -69,10 +75,10 @@ To scrape metrics from Frigate, add the following to your Prometheus configurati ```yaml scrape_configs: - - job_name: 'frigate' - metrics_path: '/api/metrics' + - job_name: "frigate" + metrics_path: "/api/metrics" static_configs: - - targets: ['frigate:5000'] + - targets: ["frigate:5000"] scrape_interval: 15s ``` diff --git a/docs/docs/configuration/motion_detection.md b/docs/docs/configuration/motion_detection.md index 0a643e376..3f31d27db 100644 --- a/docs/docs/configuration/motion_detection.md +++ b/docs/docs/configuration/motion_detection.md @@ -48,8 +48,8 @@ Navigate to and select the camera, or use the to adjust it live. 
-| Field | Description | -|-------|-------------| +| Field | Description | +| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Motion threshold** | The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive. The value should be between 1 and 255. (default: 30) | @@ -79,8 +79,8 @@ Navigate to and select the camera, or use the to adjust it live. -| Field | Description | -|-------|-------------| +| Field | Description | +| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Contour area** | Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects. As a rule of thumb: 10 = high sensitivity, 30 = medium sensitivity, 50 = low sensitivity. (default: 10) | @@ -126,8 +126,8 @@ Navigate to and select the camera. 
-| Field | Description | -|-------|-------------| +| Field | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Lightning threshold** | The percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera. (default: 0.8) | @@ -168,8 +168,8 @@ Navigate to and select the camera. -| Field | Description | -|-------|-------------| +| Field | Description | +| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Skip motion threshold** | Fraction of the frame that must change in a single update before Frigate will completely ignore any motion in that frame. Values range between 0.0 and 1.0; leave unset (null) to disable. For example, setting this to 0.7 causes Frigate to skip reporting motion boxes when more than 70% of the image appears to change (e.g. during lightning storms, IR/color mode switches, or other sudden lighting events). 
| diff --git a/docs/docs/configuration/notifications.md b/docs/docs/configuration/notifications.md index 18599af6d..0ba84b8aa 100644 --- a/docs/docs/configuration/notifications.md +++ b/docs/docs/configuration/notifications.md @@ -37,9 +37,8 @@ Notifications will be prevented if either: 1. Navigate to . - - Set **Enable notifications** to on - - Set **Notification email** to your email address - - Set **Cooldown period** to the desired number of seconds to wait before sending another notification from any camera (e.g. `10`) + - Set **Email** to your email address + - Enable notifications for the desired cameras @@ -60,8 +59,8 @@ notifications: 1. Navigate to and select the desired camera. - - Set **Enabled** to on - - Set **Cooldown** to the desired number of seconds to wait before sending another notification from this camera (e.g. `30`) + - Set **Enable notifications** to on + - Set **Cooldown period** to the desired number of seconds to wait before sending another notification from this camera (e.g. `30`) diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index ad3742886..f53a5a0a5 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -56,7 +56,6 @@ Frigate supports multiple different detectors that work on different types of ha - [AXEngine](#axera): axmodels can run on AXERA AI acceleration. - **For Testing** - [CPU Detector (not recommended for actual use](#cpu-detector-not-recommended): Use a CPU to run tflite model, this is not recommended and in most cases OpenVINO can be used in CPU mode with better results. @@ -92,7 +91,7 @@ See [common Edge TPU troubleshooting steps](/troubleshooting/edgetpu) if the Edg -Navigate to and select the **Edge TPU** detector type with device set to `usb`. +Navigate to and select the **EdgeTPU** detector type with device set to `usb`. 
@@ -137,7 +136,7 @@ _warning: may have [compatibility issues](https://github.com/blakeblackshear/fri -Navigate to and select the **Edge TPU** detector type with the device field left empty. +Navigate to and select the **EdgeTPU** detector type with the device field left empty. @@ -157,7 +156,7 @@ detectors: -Navigate to and select the **Edge TPU** detector type with device set to `pci`. +Navigate to and select the **EdgeTPU** detector type with device set to `pci`. @@ -247,15 +246,15 @@ After placing the downloaded files for the tflite model and labels in your confi -Navigate to and select the **Edge TPU** detector type with device set to `usb`. Then navigate to and configure the model settings: +Navigate to and select the **EdgeTPU** detector type with device set to `usb`. Then navigate to and configure the model settings: -| Field | Value | -|-------|-------| -| **Model type** | `yolo-generic` | -| **Width** | `320` (should match the imgsize of the model) | -| **Height** | `320` (should match the imgsize of the model) | -| **Path** | `/config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite` | -| **Labelmap path** | `/config/labels-coco17.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------------------------------------------------- | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` (should match the imgsize of the model) | +| **Object detection model input height** | `320` (should match the imgsize of the model) | +| **Custom object detector model path** | `/config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite` | +| **Label map for custom object detector** | `/config/labels-coco17.txt` | @@ -304,17 +303,17 @@ Use this configuration for YOLO-based models. When no custom model path or URL i -Navigate to and select the **Hailo-8L** detector type with device set to `PCIe`. 
Then navigate to and configure the model settings: +Navigate to and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to and configure the model settings: -| Field | Value | -|-------|-------| -| **Width** | `320` | -| **Height** | `320` | -| **Input tensor** | `nhwc` | -| **Input pixel format** | `rgb` | -| **Input dtype** | `int` | -| **Model type** | `yolo-generic` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------- | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Tensor Shape** | `nhwc` | +| **Model Input Pixel Color Format** | `rgb` | +| **Model Input D Type** | `int` | +| **Object Detection Model Type** | `yolo-generic` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | The detector automatically selects the default model based on your hardware. Optionally, specify a local model path or URL to override. @@ -360,15 +359,15 @@ For SSD-based models, provide either a model path or URL to your compiled SSD mo -Navigate to and select the **Hailo-8L** detector type with device set to `PCIe`. Then navigate to and configure the model settings: +Navigate to and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to and configure the model settings: -| Field | Value | -|-------|-------| -| **Width** | `300` | -| **Height** | `300` | -| **Input tensor** | `nhwc` | -| **Input pixel format** | `rgb` | -| **Model type** | `ssd` | +| Field | Value | +| --------------------------------------- | ------ | +| **Object detection model input width** | `300` | +| **Object detection model input height** | `300` | +| **Model Input Tensor Shape** | `nhwc` | +| **Model Input Pixel Color Format** | `rgb` | +| **Object Detection Model Type** | `ssd` | Specify the local model path or URL for SSD MobileNet v1. 
@@ -405,7 +404,7 @@ The Hailo detector supports all YOLO models compiled for Hailo hardware that inc -Navigate to and select the **Hailo-8L** detector type with device set to `PCIe`. Then navigate to and configure the model settings to match your custom model dimensions and format. +Navigate to and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to and configure the model settings to match your custom model dimensions and format. @@ -505,14 +504,14 @@ Use the model configuration shown below when using the OpenVINO detector with th Navigate to and select the **OpenVINO** detector type with device set to `GPU` (or `NPU`). Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Width** | `300` | -| **Height** | `300` | -| **Input tensor** | `nhwc` | -| **Input pixel format** | `bgr` | -| **Path** | `/openvino-model/ssdlite_mobilenet_v2.xml` | -| **Labelmap path** | `/openvino-model/coco_91cl_bkgr.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------ | +| **Object detection model input width** | `300` | +| **Object detection model input height** | `300` | +| **Model Input Tensor Shape** | `nhwc` | +| **Model Input Pixel Color Format** | `bgr` | +| **Custom object detector model path** | `/openvino-model/ssdlite_mobilenet_v2.xml` | +| **Label map for custom object detector** | `/openvino-model/coco_91cl_bkgr.txt` | @@ -555,15 +554,15 @@ After placing the downloaded onnx model in your config folder, use the following Navigate to and select the **OpenVINO** detector type with device set to `GPU`. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolonas` | -| **Width** | `320` (should match whatever was set in notebook) | -| **Height** | `320` (should match whatever was set in notebook) | -| **Input tensor** | `nchw` | -| **Input pixel format** | `bgr` | -| **Path** | `/config/yolo_nas_s.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------------- | +| **Object Detection Model Type** | `yolonas` | +| **Object detection model input width** | `320` (should match whatever was set in notebook) | +| **Object detection model input height** | `320` (should match whatever was set in notebook) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input Pixel Color Format** | `bgr` | +| **Custom object detector model path** | `/config/yolo_nas_s.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -617,15 +616,15 @@ After placing the downloaded onnx model in your config folder, use the following Navigate to and select the **OpenVINO** detector type with device set to `GPU` (or `NPU`). 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolo-generic` | -| **Width** | `320` (should match the imgsize set during model export) | -| **Height** | `320` (should match the imgsize set during model export) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/yolo.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | -------------------------------------------------------- | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` (should match the imgsize set during model export) | +| **Object detection model input height** | `320` (should match the imgsize set during model export) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/yolo.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -673,14 +672,14 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use Navigate to and select the **OpenVINO** detector type with device set to `GPU`. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `rfdetr` | -| **Width** | `320` | -| **Height** | `320` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/rfdetr.onnx` | +| Field | Value | +| --------------------------------------- | --------------------------------- | +| **Object Detection Model Type** | `rfdetr` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/rfdetr.onnx` | @@ -725,15 +724,15 @@ After placing the downloaded onnx model in your config/model_cache folder, use t Navigate to and select the **OpenVINO** detector type with device set to `CPU`. Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `dfine` | -| **Width** | `640` | -| **Height** | `640` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/dfine-s.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ---------------------------------- | +| **Object Detection Model Type** | `dfine` | +| **Object detection model input width** | `640` | +| **Object detection model input height** | `640` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/dfine-s.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -777,7 +776,7 @@ Using the detector config below will connect to the client: -Navigate to and select the **ZMQ** detector type with the endpoint set to `tcp://host.docker.internal:5555`. +Navigate to and select the **ZMQ IPC** detector type with the endpoint set to `tcp://host.docker.internal:5555`. 
@@ -811,17 +810,17 @@ When Frigate is started with the following config it will connect to the detecto -Navigate to and select the **ZMQ** detector type with the endpoint set to `tcp://host.docker.internal:5555`. Then navigate to and configure: +Navigate to and select the **ZMQ IPC** detector type with the endpoint set to `tcp://host.docker.internal:5555`. Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolo-generic` | -| **Width** | `320` (should match the imgsize set during model export) | -| **Height** | `320` (should match the imgsize set during model export) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/yolo.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | -------------------------------------------------------- | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` (should match the imgsize set during model export) | +| **Object detection model input height** | `320` (should match the imgsize set during model export) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/yolo.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1022,15 +1021,15 @@ After placing the downloaded onnx model in your config folder, use the following Navigate to and select the **ONNX** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolonas` | -| **Width** | `320` (should match whatever was set in notebook) | -| **Height** | `320` (should match whatever was set in notebook) | -| **Input pixel format** | `bgr` | -| **Input tensor** | `nchw` | -| **Path** | `/config/yolo_nas_s.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------------- | +| **Object Detection Model Type** | `yolonas` | +| **Object detection model input width** | `320` (should match whatever was set in notebook) | +| **Object detection model input height** | `320` (should match whatever was set in notebook) | +| **Model Input Pixel Color Format** | `bgr` | +| **Model Input Tensor Shape** | `nchw` | +| **Custom object detector model path** | `/config/yolo_nas_s.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1081,15 +1080,15 @@ After placing the downloaded onnx model in your config folder, use the following Navigate to and select the **ONNX** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolo-generic` | -| **Width** | `320` (should match the imgsize set during model export) | -| **Height** | `320` (should match the imgsize set during model export) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/yolo.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | -------------------------------------------------------- | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` (should match the imgsize set during model export) | +| **Object detection model input height** | `320` (should match the imgsize set during model export) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/yolo.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1130,15 +1129,15 @@ After placing the downloaded onnx model in your config folder, use the following Navigate to and select the **ONNX** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolox` | -| **Width** | `416` (should match the imgsize set during model export) | -| **Height** | `416` (should match the imgsize set during model export) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float_denorm` | -| **Path** | `/config/model_cache/yolox_tiny.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | -------------------------------------------------------- | +| **Object Detection Model Type** | `yolox` | +| **Object detection model input width** | `416` (should match the imgsize set during model export) | +| **Object detection model input height** | `416` (should match the imgsize set during model export) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float_denorm` | +| **Custom object detector model path** | `/config/model_cache/yolox_tiny.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1179,14 +1178,14 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use Navigate to and select the **ONNX** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `rfdetr` | -| **Width** | `320` | -| **Height** | `320` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/rfdetr.onnx` | +| Field | Value | +| --------------------------------------- | --------------------------------- | +| **Object Detection Model Type** | `rfdetr` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/rfdetr.onnx` | @@ -1224,15 +1223,15 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use Navigate to and select the **ONNX** detector type. Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `dfine` | -| **Width** | `640` | -| **Height** | `640` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Path** | `/config/model_cache/dfine_m_obj2coco.onnx` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------- | +| **Object Detection Model Type** | `dfine` | +| **Object detection model input width** | `640` | +| **Object detection model input height** | `640` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Custom object detector model path** | `/config/model_cache/dfine_m_obj2coco.onnx` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1312,7 +1311,7 @@ To integrate CodeProject.AI into Frigate, configure the detector as follows: -Navigate to and select the **Deepstack** detector type. Set the API URL to point to your CodeProject.AI server (e.g., `http://:/v1/vision/detection`). +Navigate to and select the **DeepStack** detector type. 
Set the API URL to point to your CodeProject.AI server (e.g., `http://<host>:<port>/v1/vision/detection`). @@ -1417,14 +1416,14 @@ Below is the recommended configuration for using the **YOLO-NAS** (small) model Navigate to and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolonas` | -| **Width** | `320` (can be set to `640` for higher resolution) | -| **Height** | `320` (can be set to `640` for higher resolution) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------------- | +| **Object Detection Model Type** | `yolonas` | +| **Object detection model input width** | `320` (can be set to `640` for higher resolution) | +| **Object detection model input height** | `320` (can be set to `640` for higher resolution) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1465,14 +1464,14 @@ Below is the recommended configuration for using the **YOLOv9** (small) model wi Navigate to and select the **MemryX** detector type with device set to `PCIe:0`. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolo-generic` | -| **Width** | `320` (can be set to `640` for higher resolution) | -| **Height** | `320` (can be set to `640` for higher resolution) | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ------------------------------------------------- | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` (can be set to `640` for higher resolution) | +| **Object detection model input height** | `320` (can be set to `640` for higher resolution) | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1512,14 +1511,14 @@ Below is the recommended configuration for using the **YOLOX** (small) model wit Navigate to and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `yolox` | -| **Width** | `640` | -| **Height** | `640` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float_denorm` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------- | +| **Object Detection Model Type** | `yolox` | +| **Object detection model input width** | `640` | +| **Object detection model input height** | `640` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float_denorm` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1559,14 +1558,14 @@ Below is the recommended configuration for using the **SSDLite MobileNet v2** mo Navigate to and select the **MemryX** detector type with device set to `PCIe:0`. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Model type** | `ssd` | -| **Width** | `320` | -| **Height** | `320` | -| **Input tensor** | `nchw` | -| **Input dtype** | `float` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------- | +| **Object Detection Model Type** | `ssd` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input D Type** | `float` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1698,14 +1697,14 @@ Use the config below to work with generated TRT models: Navigate to and select the **TensorRT** detector type with the device set to `0` (the default GPU index). Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `/config/model_cache/tensorrt/yolov7-320.trt` | -| **Labelmap path** | `/labelmap/coco-80.txt` | -| **Input tensor** | `nchw` | -| **Input pixel format** | `rgb` | -| **Width** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) | -| **Height** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) | +| Field | Value | +| ---------------------------------------- | ------------------------------------------------------------ | +| **Custom object detector model path** | `/config/model_cache/tensorrt/yolov7-320.trt` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | +| **Model Input Tensor Shape** | `nchw` | +| **Model Input Pixel Color Format** | `rgb` | +| **Object detection model input width** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) | +| **Object detection model input height** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) | @@ -1755,13 +1754,13 @@ Use the model configuration shown below when using the synaptics detector with t Navigate to and select the **Synaptics** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `/synaptics/mobilenet.synap` | -| **Width** | `224` | -| **Height** | `224` | -| **Tensor format** | `nhwc` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ---------------------------- | +| **Custom object detector model path** | `/synaptics/mobilenet.synap` | +| **Object detection model input width** | `224` | +| **Object detection model input height** | `224` | +| **Tensor format** | `nhwc` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1882,15 +1881,15 @@ The inference time was determined on a rk3588 with 3 NPU cores. Navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `deci-fp16-yolonas_s` (or `deci-fp16-yolonas_m`, `deci-fp16-yolonas_l`) | -| **Model type** | `yolonas` | -| **Width** | `320` | -| **Height** | `320` | -| **Input pixel format** | `bgr` | -| **Input tensor** | `nhwc` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------------------------------------------------------- | +| **Custom object detector model path** | `deci-fp16-yolonas_s` (or `deci-fp16-yolonas_m`, `deci-fp16-yolonas_l`) | +| **Object Detection Model Type** | `yolonas` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Pixel Color Format** | `bgr` | +| **Model Input Tensor Shape** | `nhwc` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1928,14 +1927,14 @@ The pre-trained YOLO-NAS weights from DeciAI are subject to their license and ca Navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `frigate-fp16-yolov9-t` (or other yolov9 variants) | -| **Model type** | `yolo-generic` | -| **Width** | `320` | -| **Height** | `320` | -| **Input tensor** | `nhwc` | -| **Labelmap 
path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | -------------------------------------------------- | +| **Custom object detector model path** | `frigate-fp16-yolov9-t` (or other yolov9 variants) | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input Tensor Shape** | `nhwc` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -1968,14 +1967,14 @@ model: # required Navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `rock-i8-yolox_nano` (or other yolox variants) | -| **Model type** | `yolox` | -| **Width** | `416` | -| **Height** | `416` | -| **Input tensor** | `nhwc` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ---------------------------------------------- | +| **Custom object detector model path** | `rock-i8-yolox_nano` (or other yolox variants) | +| **Object Detection Model Type** | `yolox` | +| **Object detection model input width** | `416` | +| **Object detection model input height** | `416` | +| **Model Input Tensor Shape** | `nhwc` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | @@ -2190,17 +2189,17 @@ Use the model configuration shown below when using the axengine detector with th -Navigate to and select the **AXEngine** detector type. Then navigate to and configure: +Navigate to and select the **AXEngine NPU** detector type. 
Then navigate to and configure: -| Field | Value | -|-------|-------| -| **Path** | `frigate-yolov9-tiny` | -| **Model type** | `yolo-generic` | -| **Width** | `320` | -| **Height** | `320` | -| **Input dtype** | `int` | -| **Input pixel format** | `bgr` | -| **Labelmap path** | `/labelmap/coco-80.txt` | +| Field | Value | +| ---------------------------------------- | ----------------------- | +| **Custom object detector model path** | `frigate-yolov9-tiny` | +| **Object Detection Model Type** | `yolo-generic` | +| **Object detection model input width** | `320` | +| **Object detection model input height** | `320` | +| **Model Input D Type** | `int` | +| **Model Input Pixel Color Format** | `bgr` | +| **Label map for custom object detector** | `/labelmap/coco-80.txt` | diff --git a/docs/docs/configuration/object_filters.md b/docs/docs/configuration/object_filters.md index 40482fc6d..dfea51804 100644 --- a/docs/docs/configuration/object_filters.md +++ b/docs/docs/configuration/object_filters.md @@ -39,9 +39,9 @@ Any detection below `min_score` will be immediately thrown out and never tracked Navigate to to set score filters globally. -| Field | Description | -|-------|-------------| -| **Object filters > Person > Min Score** | Minimum score for a single detection to initiate tracking | +| Field | Description | +| --------------------------------------- | ---------------------------------------------------------------- | +| **Object filters > Person > Min Score** | Minimum score for a single detection to initiate tracking | | **Object filters > Person > Threshold** | Minimum computed (median) score to be considered a true positive | To override score filters for a specific camera, navigate to and select the camera. @@ -97,12 +97,12 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a " Navigate to to set shape filters globally. 
-| Field | Description | -|-------|-------------| -| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) | -| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) | -| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box | -| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box | +| Field | Description | +| --------------------------------------- | ------------------------------------------------------------------------ | +| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) | +| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) | +| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box | +| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box | To override shape filters for a specific camera, navigate to and select the camera. diff --git a/docs/docs/configuration/objects.md b/docs/docs/configuration/objects.md index e56c94cb7..9925ae8fe 100644 --- a/docs/docs/configuration/objects.md +++ b/docs/docs/configuration/objects.md @@ -70,14 +70,14 @@ Object filters help reduce false positives by constraining the size, shape, and Navigate to . 
-| Field | Description | -|-------|-------------| -| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) | -| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) | -| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box | -| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box | -| **Object filters > Person > Min Score** | Minimum score for the object to initiate tracking | -| **Object filters > Person > Threshold** | Minimum computed score to be considered a true positive | +| Field | Description | +| --------------------------------------- | ------------------------------------------------------------------------ | +| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) | +| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) | +| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box | +| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box | +| **Object filters > Person > Min Score** | Minimum score for the object to initiate tracking | +| **Object filters > Person > Threshold** | Minimum computed score to be considered a true positive | To override filters for a specific camera, navigate to . @@ -118,14 +118,7 @@ Object filter masks prevent specific object types from being detected in certain -Navigate to . 
- -| Field | Description | -|-------|-------------| -| **Object mask > Mask1 > Friendly Name / Enabled / Coordinates** | Global object filter mask that applies to all object types | -| **Object filters > Person > Mask > Mask1 > Friendly Name / Enabled / Coordinates** | Per-object mask that applies only to the specified object type | - -To configure masks for a specific camera, navigate to . +Navigate to and select a camera. Use the mask editor to draw object filter masks directly on the camera feed. Global object masks and per-object masks can both be configured from this view. diff --git a/docs/docs/configuration/record.md b/docs/docs/configuration/record.md index 4201199e7..194647584 100644 --- a/docs/docs/configuration/record.md +++ b/docs/docs/configuration/record.md @@ -146,11 +146,11 @@ The number of days to retain continuous and motion recordings can be configured. Navigate to . -| Field | Description | -|-------|-------------| -| **Enable recording** | Enable or disable recording for all cameras | +| Field | Description | +| ----------------------------------------- | -------------------------------------------- | +| **Enable recording** | Enable or disable recording for all cameras | | **Continuous retention > Retention days** | Number of days to keep continuous recordings | -| **Motion retention > Retention days** | Number of days to keep motion recordings | +| **Motion retention > Retention days** | Number of days to keep motion recordings | @@ -178,10 +178,10 @@ The number of days to retain recordings for review items can be specified for it Navigate to . 
-| Field | Description | -|-------|-------------| -| **Enable recording** | Enable or disable recording for all cameras | -| **Alert retention > Event retention > Retention days** | Number of days to keep alert recordings | +| Field | Description | +| ---------------------------------------------------------- | ------------------------------------------- | +| **Enable recording** | Enable or disable recording for all cameras | +| **Alert retention > Event retention > Retention days** | Number of days to keep alert recordings | | **Detection retention > Event retention > Retention days** | Number of days to keep detection recordings | @@ -221,17 +221,6 @@ When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means To configure the speed-up factor, the frame rate and further custom settings, use the `timelapse_args` parameter. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS: - - - -Navigate to . - -- Set **Enable recording** to on -- Set **Export config > Timelapse Args** to `-vf setpts=PTS/60 -r 25` - - - - ```yaml {3-4} record: enabled: True @@ -239,9 +228,6 @@ record: timelapse_args: "-vf setpts=PTS/60 -r 25" ``` - - - :::tip When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set the camera-level export hwaccel_args with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264). 
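For readers who prefer editing the config file directly, the retention fields above map to YAML along these lines. This is a sketch: the day values are illustrative, and the key names are inferred from the UI field labels in the tables above, so verify them against the full configuration reference before use.

```yaml
record:
  enabled: True
  continuous:
    days: 3 # Continuous retention > Retention days
  motion:
    days: 7 # Motion retention > Retention days
  alerts:
    retain:
      days: 30 # Alert retention > Event retention > Retention days
  detections:
    retain:
      days: 30 # Detection retention > Event retention > Retention days
```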
diff --git a/docs/docs/configuration/review.md b/docs/docs/configuration/review.md index 879f0fa36..4f39611db 100644 --- a/docs/docs/configuration/review.md +++ b/docs/docs/configuration/review.md @@ -27,7 +27,7 @@ Not every segment of video captured by Frigate may be of the same level of inter :::note -Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add the following to your config: +Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add more labels via or and select your camera. Alternatively, add the following to your config: ```yaml objects: @@ -47,11 +47,9 @@ By default a review item will only be marked as an alert if a person or car is d -Navigate to . +Navigate to or and select your camera. -| Field | Description | -|-------|-------------| -| **Alerts > Labels** | List of object or audio labels that qualify a review item as an alert | +Expand **Alerts config** and configure which labels and zones should generate alerts. @@ -78,11 +76,9 @@ By default all detections that do not qualify as an alert qualify as a detection -Navigate to . +Navigate to or and select your camera. -| Field | Description | -|-------|-------------| -| **Detections > Labels** | List of labels to restrict which tracked objects qualify as detections | +Expand **Detections config** and configure which labels should qualify as detections. 
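The alert and detection label settings described above correspond to config of roughly this shape. The label values are examples only, and as the note above explains, an object must also be tracked under `objects.track` before it can qualify as an alert or detection.

```yaml
review:
  alerts:
    labels:
      - person
      - car
  detections:
    labels:
      - dog
      - package
```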
@@ -109,7 +105,7 @@ For example, to exclude objects on the camera _gatecamera_ from any detections: 1. Navigate to and select the **gatecamera** camera. - - Set **Detections > Labels** to an empty list + - Expand **Detections config** and turn off all of the object label switches. diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md index 41e7e0310..49e0db88a 100644 --- a/docs/docs/configuration/semantic_search.md +++ b/docs/docs/configuration/semantic_search.md @@ -31,7 +31,6 @@ Semantic Search is disabled by default and must be enabled before it can be used Navigate to . - Set **Enable semantic search** to on -- Set **Reindex on startup** to on if you want to reindex the embeddings database from existing tracked objects @@ -66,10 +65,10 @@ Differently weighted versions of the Jina models are available and can be select Navigate to . -| Field | Description | -|-------|-------------| -| **Model** | Select `jinav1` to use the Jina AI CLIP V1 model | -| **Model Size** | `small` (quantized, CPU-friendly) or `large` (full model, GPU-accelerated) | +| Field | Description | +| ------------------------------------------------ | -------------------------------------------------------------------------- | +| **Semantic search model or GenAI provider name** | Select `jinav1` to use the Jina AI CLIP V1 model | +| **Model size** | `small` (quantized, CPU-friendly) or `large` (full model, GPU-accelerated) | @@ -100,10 +99,10 @@ To use the V2 model, set the model to `jinav2`. Navigate to . 
-| Field | Description | -|-------|-------------| -| **Model** | Select `jinav2` to use the Jina AI CLIP V2 model | -| **Model Size** | `large` is recommended for V2 (requires discrete GPU) | +| Field | Description | +| ------------------------------------------------ | ----------------------------------------------------- | +| **Semantic search model or GenAI provider name** | Select `jinav2` to use the Jina AI CLIP V2 model | +| **Model size** | `large` is recommended for V2 (requires discrete GPU) | @@ -141,9 +140,9 @@ To use llama.cpp for semantic search: Navigate to . -| Field | Description | -|-------|-------------| -| **Model** | Set to the GenAI config key (e.g. `default`) to use a configured GenAI provider for embeddings | +| Field | Description | +| ------------------------------------------------ | ---------------------------------------------------------------------------------------------- | +| **Semantic search model or GenAI provider name** | Set to the GenAI config key (e.g. `default`) to use a configured GenAI provider for embeddings | The GenAI provider must also be configured with the `embeddings` role under . @@ -186,10 +185,10 @@ The CLIP models are downloaded in ONNX format, and the `large` model can be acce Navigate to . -| Field | Description | -|-------|-------------| -| **Model Size** | Set to `large` to enable GPU acceleration | -| **Device** | (Optional) Specify a GPU device index in a multi-GPU system (e.g. `0`) | +| Field | Description | +| -------------- | ---------------------------------------------------------------------- | +| **Model size** | Set to `large` to enable GPU acceleration | +| **Device** | (Optional) Specify a GPU device index in a multi-GPU system (e.g. `0`) | @@ -242,7 +241,7 @@ Triggers are best configured through the Frigate UI. #### Managing Triggers in the UI -1. Navigate to and select a camera from the dropdown menu. +1. Navigate to and select a camera from the dropdown menu. 2. 
Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one. 3. In the **Create Trigger** wizard: - Enter a **Name** for the trigger (e.g., "Red Car Alert"). diff --git a/docs/docs/configuration/snapshots.md b/docs/docs/configuration/snapshots.md index 2d0821c2e..675e68a9c 100644 --- a/docs/docs/configuration/snapshots.md +++ b/docs/docs/configuration/snapshots.md @@ -68,15 +68,15 @@ Configure how snapshots are rendered and stored. These settings control the defa Navigate to . -| Field | Description | -|-------|-------------| -| **Enable snapshots** | Enable or disable saving snapshots for tracked objects | -| **Timestamp overlay** | Overlay a timestamp on snapshots from API | -| **Bounding box overlay** | Draw bounding boxes for tracked objects on snapshots from API | -| **Crop snapshot** | Crop snapshots from API to the detected object's bounding box | -| **Snapshot height** | Height in pixels to resize snapshots to; leave empty to preserve original size | -| **Snapshot quality** | Encode quality for saved snapshots (0-100) | -| **Required zones** | Zones an object must enter for a snapshot to be saved | +| Field | Description | +| ------------------------ | ------------------------------------------------------------------------------ | +| **Enable snapshots** | Enable or disable saving snapshots for tracked objects | +| **Timestamp overlay** | Overlay a timestamp on snapshots from API | +| **Bounding box overlay** | Draw bounding boxes for tracked objects on snapshots from API | +| **Crop snapshot** | Crop snapshots from API to the detected object's bounding box | +| **Snapshot height** | Height in pixels to resize snapshots to; leave empty to preserve original size | +| **Snapshot quality** | Encode quality for saved snapshots (0-100) | +| **Required zones** | Zones an object must enter for a snapshot to be saved | @@ -104,10 +104,10 @@ Configure how long snapshots are retained on disk. 
Per-object retention override Navigate to . -| Field | Description | -|-------|-------------| -| **Snapshot retention > Default retention** | Number of days to retain snapshots (default: 10) | -| **Snapshot retention > Retention mode** | Retention mode: `all`, `motion`, or `active_objects` | +| Field | Description | +| -------------------------------------------------- | ----------------------------------------------------------------------------------- | +| **Snapshot retention > Default retention** | Number of days to retain snapshots (default: 10) | +| **Snapshot retention > Retention mode** | Retention mode: `all`, `motion`, or `active_objects` | | **Snapshot retention > Object retention > Person** | Per-object overrides for retention days (e.g., keep `person` snapshots for 15 days) | diff --git a/docs/docs/configuration/zones.md b/docs/docs/configuration/zones.md index e5c3a8964..2cb3c8ebe 100644 --- a/docs/docs/configuration/zones.md +++ b/docs/docs/configuration/zones.md @@ -59,8 +59,8 @@ To create an alert only when an object enters the `entire_yard` zone: Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| ---------------------------------- | ----------------------------------------------------------------------------------------- | | **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. | @@ -89,9 +89,9 @@ You may also want to filter detections to only be created when an object enters Navigate to . -| Field | Description | -|-------|-------------| -| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. 
| +| Field | Description | +| -------------------------------------- | -------------------------------------------------------------------------------------------- | +| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. | | **Detections config > Required zones** | Zones that an object must enter to be considered a detection; leave empty to allow any zone. | @@ -319,8 +319,8 @@ The `distance` values are measured in meters (metric) or feet (imperial), depend Navigate to . -| Field | Description | -|-------|-------------| +| Field | Description | +| --------------- | -------------------------------------------------------------------- | | **Unit system** | Set to `metric` (kilometers per hour) or `imperial` (miles per hour) | diff --git a/docs/docs/guides/getting_started.md b/docs/docs/guides/getting_started.md index bf8084180..5e3ff665a 100644 --- a/docs/docs/guides/getting_started.md +++ b/docs/docs/guides/getting_started.md @@ -204,8 +204,17 @@ You need to refer to **Configure hardware acceleration** above to enable the con -1. Navigate to and add a detector with **Type** `openvino` and **Device** `GPU` -2. Navigate to and configure the model settings for OpenVINO +1. Navigate to and add a detector with **Type** `OpenVINO` and **Device** `GPU` +2. Navigate to and configure the model settings for OpenVINO: + +| Field | Value | +| ---------------------------------------- | ------------------------------------------ | +| **Object detection model input width** | `300` | +| **Object detection model input height** | `300` | +| **Model Input Tensor Shape** | `nhwc` | +| **Model Input Pixel Color Format** | `bgr` | +| **Custom object detector model path** | `/openvino-model/ssdlite_mobilenet_v2.xml` | +| **Label map for custom object detector** | `/openvino-model/coco_91cl_bkgr.txt` | @@ -264,7 +273,7 @@ services: -Navigate to and add a detector with **Type** `edgetpu` and **Device** `usb`. 
+Navigate to and add a detector with **Type** `EdgeTPU` and **Device** `usb`. @@ -296,9 +305,9 @@ Restart Frigate and you should start seeing detections for `person`. If you want ### Step 5: Setup motion masks -Now that you have optimized your configuration for decoding the video stream, you will want to check to see where to implement motion masks. Navigate to and enable the Debug view to see motion boxes. Watch for areas that continuously trigger unwanted motion to be detected. Common areas to mask include camera timestamps and trees that frequently blow in the wind. The goal is to avoid wasting object detection cycles looking at these areas. +Now that you have optimized your configuration for decoding the video stream, you will want to check to see where to implement motion masks. Click on the camera from the main dashboard, then select the gear icon in the top right, enable Debug View, and finally enable the switch for Motion Boxes. Watch for areas that continuously trigger unwanted motion to be detected. Common areas to mask include camera timestamps and trees that frequently blow in the wind. The goal is to avoid wasting object detection cycles looking at these areas. -Use the mask editor to draw polygon masks directly on the camera feed. More information about masks can be found [here](../configuration/masks.md). +Use the mask editor to draw polygon masks directly on the camera feed. Navigate to and set up a motion mask over the area. More information about masks can be found [here](../configuration/masks.md). :::warning @@ -313,7 +322,7 @@ In order to review activity in the Frigate UI, recordings need to be enabled. -1. If you have separate streams for detect and record, navigate to and add a second input with the `record` role pointing to your high-resolution stream +1. If you have separate streams for detect and record, navigate to , select your camera, and add a second input with the `record` role pointing to your high-resolution stream 2. 
Navigate to (or for a specific camera) and set **Enable recording** to on
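Taken together, steps 1 and 2 above correspond to a camera config along these lines. The camera name and RTSP URLs are placeholders; substitute your own streams, with the high-resolution stream carrying the `record` role and the lower-resolution stream carrying `detect`.

```yaml
cameras:
  name_of_your_camera: # placeholder name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/high_res # placeholder URL
          roles:
            - record
        - path: rtsp://192.168.1.10:554/low_res # placeholder URL
          roles:
            - detect
record:
  enabled: True
```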