mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-04-03 13:54:55 +03:00)

second pass

parent 3385c86ded
commit 05109f535c
@@ -253,7 +253,7 @@ IPv6 is disabled by default. Enable it in the Frigate configuration.
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > System > Networking" /> and enable **IPv6**.
+Navigate to <NavPath path="Settings > System > Networking" /> and expand **IPv6 configuration**, then enable **Enable IPv6**.

 </TabItem>
 <TabItem value="yaml">
@@ -20,9 +20,9 @@ Audio events can be enabled globally or for specific cameras.
 <ConfigTabs>
 <TabItem value="ui">

-**Global:** Navigate to <NavPath path="Settings > Global configuration > Audio events" /> and set **Enabled** to on.
+**Global:** Navigate to <NavPath path="Settings > Global configuration > Audio events" /> and set **Enable audio detection** to on.

-**Per-camera:** Navigate to <NavPath path="Settings > Camera configuration > Audio events" /> and set **Enabled** to on for the desired camera.
+**Per-camera:** Navigate to <NavPath path="Settings > Camera configuration > Audio events" /> and set **Enable audio detection** to on for the desired camera.

 </TabItem>
 <TabItem value="yaml">
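For reference, the `yaml` tab's contents fall outside this hunk's context. A sketch of the equivalent YAML, based on Frigate's documented `audio.enabled` option (the camera name is a placeholder):

```yaml
# Global: enable audio event detection for all cameras
audio:
  enabled: true

cameras:
  front_door: # placeholder camera name
    audio:
      enabled: true # per-camera override
```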
@@ -71,10 +71,10 @@ If you are running a reverse proxy in the same Docker Compose file as Frigate, c

 Navigate to <NavPath path="Settings > System > Authentication" />.

-| Field | Description |
-|-------|-------------|
-| **Failed login limits** | Rate limit string for login failures (e.g., `1/second;5/minute;20/hour`) |
-| **Trusted proxies** | List of upstream network CIDRs to trust for `X-Forwarded-For` (e.g., `172.18.0.0/16` for internal Docker Compose network) |
+| Field                   | Description                                                                                                               |
+| ----------------------- | ------------------------------------------------------------------------------------------------------------------------- |
+| **Failed login limits** | Rate limit string for login failures (e.g., `1/second;5/minute;20/hour`)                                                  |
+| **Trusted proxies**     | List of upstream network CIDRs to trust for `X-Forwarded-For` (e.g., `172.18.0.0/16` for internal Docker Compose network) |

 </TabItem>
 <TabItem value="yaml">
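The `yaml` equivalent of the two fields above, sketched from Frigate's documented `auth` options (exact keys may differ by version):

```yaml
auth:
  failed_login_rate_limit: 1/second;5/minute;20/hour
  trusted_proxies:
    - 172.18.0.0/16 # internal Docker Compose network
```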
@@ -182,13 +182,13 @@ If you have disabled Frigate's authentication and your proxy supports passing a
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > System > Authentication" /> and configure the proxy header mapping settings.
+Navigate to <NavPath path="Settings > System > Proxy" /> and configure the header mapping and separator settings.

-| Field | Description |
-|-------|-------------|
-| **Proxy > Separator** | Character separating multiple roles in the role header (default: comma). Authentik uses a pipe `\|`. |
-| **Proxy > Header Map > User** | Header name for the authenticated username (e.g., `x-forwarded-user`) |
-| **Proxy > Header Map > Role** | Header name for the authenticated role/groups (e.g., `x-forwarded-groups`) |
+| Field                            | Description                                                                                           |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------- |
+| **Separator character**          | Character separating multiple roles in the role header (default: comma). Authentik uses a pipe `\|`. |
+| **Header mapping > User header** | Header name for the authenticated username (e.g., `x-forwarded-user`)                                 |
+| **Header mapping > Role header** | Header name for the authenticated role/groups (e.g., `x-forwarded-groups`)                            |

 </TabItem>
 <TabItem value="yaml">
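A sketch of the matching `yaml` tab, using the documented `proxy` options (header names follow the examples above):

```yaml
proxy:
  separator: "|" # Authentik sends pipe-separated roles; default is comma
  header_map:
    user: x-forwarded-user
    role: x-forwarded-groups
```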
@@ -212,11 +212,11 @@ A default role can be provided. Any value in the mapped `role` header will overr
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > System > Authentication" /> and set the default role under the proxy settings.
+Navigate to <NavPath path="Settings > System > Proxy" /> and set the default role.

-| Field | Description |
-|-------|-------------|
-| **Proxy > Default Role** | Fallback role when no role header is present (e.g., `viewer`) |
+| Field            | Description                                                   |
+| ---------------- | ------------------------------------------------------------- |
+| **Default role** | Fallback role when no role header is present (e.g., `viewer`) |

 </TabItem>
 <TabItem value="yaml">
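The corresponding `yaml` fragment is a single documented key; a minimal sketch:

```yaml
proxy:
  default_role: viewer # fallback when no role header is present
```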
@@ -237,11 +237,11 @@ In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do no
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > System > Authentication" /> and configure the role mapping under the proxy header map settings.
+Navigate to <NavPath path="Settings > System > Proxy" /> and configure the role mapping under the header mapping settings.

-| Field | Description |
-|-------|-------------|
-| **Proxy > Header Map > Role Map** | Maps upstream group names to Frigate roles. Each Frigate role (`admin`, `viewer`, or custom) maps to a list of upstream group names. |
+| Field                         | Description                                                                                                                          |
+| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| **Header mapping > Role map** | Maps upstream group names to Frigate roles. Each Frigate role (`admin`, `viewer`, or custom) maps to a list of upstream group names.  |

 </TabItem>
 <TabItem value="yaml">
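A hedged `yaml` sketch of the role mapping described above (the upstream group names are placeholders; the `role_map` key name is inferred from the field label):

```yaml
proxy:
  header_map:
    role: x-forwarded-groups
    role_map:
      admin:
        - sysadmins # placeholder upstream group
      viewer:
        - everyone # placeholder upstream group
```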
@@ -344,8 +344,8 @@ The viewer role provides read-only access to all cameras in the UI and API. Cust

 Navigate to <NavPath path="Settings > System > Authentication" /> and configure roles under the **Roles** section.

-| Field | Description |
-|-------|-------------|
+| Field     | Description                                                       |
+| --------- | ----------------------------------------------------------------- |
+| **Roles** | Define custom roles and assign which cameras each role can access |

 </TabItem>
@@ -46,28 +46,27 @@ Navigate to <NavPath path="Settings > Camera configuration > ONVIF" /> for the d

 **ONVIF Connection**

-| Field | Description |
-|-------|-------------|
-| **Host** | Host of the camera being connected to. HTTP is assumed by default; prefix with `https://` for HTTPS. |
-| **Port** | ONVIF port for device (default: 8000) |
-| **User** | Username for login. Some devices require admin to access ONVIF. |
-| **Password** | Password for login |
-| **TLS Insecure** | Skip TLS verification from the ONVIF server (default: false) |
-| **Profile** | ONVIF media profile to use for PTZ control, matched by token or name. If not set, the first profile with valid PTZ configuration is selected automatically. |
+| Field                  | Description                                                                                                                                                   |
+| ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **ONVIF host**         | Host of the camera being connected to. HTTP is assumed by default; prefix with `https://` for HTTPS.                                                          |
+| **ONVIF port**         | ONVIF port for device (default: 8000)                                                                                                                         |
+| **ONVIF username**     | Username for login. Some devices require admin to access ONVIF.                                                                                               |
+| **ONVIF password**     | Password for login                                                                                                                                            |
+| **Disable TLS verify** | Skip TLS verification and disable digest auth for ONVIF (default: false)                                                                                      |
+| **ONVIF profile**      | ONVIF media profile to use for PTZ control, matched by token or name. If not set, the first profile with valid PTZ configuration is selected automatically.   |

 **Autotracking**

-| Field | Description |
-|-------|-------------|
-| **Enabled** | Enable or disable object autotracking (default: false) |
-| **Calibrate on Startup** | Calibrate the camera on startup by measuring PTZ motor speed (default: false) |
-| **Zooming** | Zoom mode during autotracking: `disabled`, `absolute`, or `relative` (default: disabled) |
-| **Zoom Factor** | Controls zoom behavior on tracked objects, between 0.1 and 0.75. Lower keeps more scene visible; higher zooms in more (default: 0.3) |
-| **Track** | List of object types to track (default: person) |
-| **Required Zones** | Zones an object must enter to begin autotracking |
-| **Return Preset** | Name of ONVIF preset in camera firmware to return to when tracking ends (default: home) |
-| **Timeout** | Seconds to delay before returning to preset (default: 10) |
-| **Movement Weights** | Auto-generated calibration values. Do not modify manually. |
+| Field                   | Description                                                                                                                          |
+| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| **Enable Autotracking** | Enable or disable object autotracking (default: false)                                                                               |
+| **Calibrate on start**  | Calibrate the camera on startup by measuring PTZ motor speed (default: false)                                                        |
+| **Zoom mode**           | Zoom mode during autotracking: `disabled`, `absolute`, or `relative` (default: disabled)                                             |
+| **Zoom Factor**         | Controls zoom behavior on tracked objects, between 0.1 and 0.75. Lower keeps more scene visible; higher zooms in more (default: 0.3) |
+| **Tracked objects**     | List of object types to track (default: person)                                                                                      |
+| **Required Zones**      | Zones an object must enter to begin autotracking                                                                                     |
+| **Return Preset**       | Name of ONVIF preset in camera firmware to return to when tracking ends (default: home)                                              |
+| **Return timeout**      | Seconds to delay before returning to preset (default: 10)                                                                            |

 </TabItem>
 <TabItem value="yaml">
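A `yaml` sketch covering the ONVIF connection and autotracking fields above, using Frigate's documented `onvif` options (camera name, host, credentials, and zone are placeholders):

```yaml
cameras:
  ptz_camera: # placeholder camera name
    onvif:
      host: 192.168.1.50 # placeholder host
      port: 8000
      user: admin
      password: password
      autotracking:
        enabled: false
        calibrate_on_startup: false
        zooming: disabled # disabled, absolute, or relative
        zoom_factor: 0.3
        track:
          - person
        required_zones:
          - driveway # placeholder zone
        return_preset: home
        timeout: 10
```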
@@ -27,7 +27,7 @@ Bird classification is disabled by default and must be enabled before it can be
 Navigate to <NavPath path="Settings > Enrichments > Object classification" />.

 - Set **Bird classification config > Bird classification** to on
-- Set **Bird classification config > Bird classification threshold** to the desired confidence score (default: 0.9)
+- Set **Bird classification config > Minimum score** to the desired confidence score (default: 0.9)

 </TabItem>
 <TabItem value="yaml">
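The `yaml` counterpart to these toggles, sketched from Frigate's documented bird classification options:

```yaml
classification:
  bird:
    enabled: true
    threshold: 0.9 # minimum confidence score
```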
@@ -37,8 +37,8 @@ To include a camera in Birdseye view only for specific circumstances, or exclude

-| Field | Description |
-|-------|-------------|
-| **Enabled** | Whether this camera appears in Birdseye view |
-| **Mode** | When to show the camera: `continuous`, `motion`, or `objects` |
+| **Enable Birdseye** | Whether this camera appears in Birdseye view                  |
+| **Tracking mode**   | When to show the camera: `continuous`, `motion`, or `objects` |

 </TabItem>
 <TabItem value="yaml">
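The per-camera `yaml` equivalent, sketched from Frigate's documented `birdseye` camera options (camera name is a placeholder):

```yaml
cameras:
  back_yard: # placeholder camera name
    birdseye:
      enabled: true
      mode: motion # continuous, motion, or objects
```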
@@ -125,7 +125,7 @@ It is possible to override the order of cameras that are being shown in the Bird
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > Camera configuration > Birdseye" /> for each camera and set the **Order** field to control the display order.
+Navigate to <NavPath path="Settings > Camera configuration > Birdseye" /> for each camera and set the **Position** field to control the display order.

 </TabItem>
 <TabItem value="yaml">
@@ -26,14 +26,15 @@ Each role can only be assigned to one input per camera. The options for roles ar

 Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.

-| Field | Description |
-|-------|-------------|
+| Field             | Description                                                         |
+| ----------------- | ------------------------------------------------------------------- |
+| **Camera inputs** | List of input stream definitions (paths and roles) for this camera. |

 Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.

-| Field | Description |
-|-------|-------------|
-| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
+| Field             | Description                                                                                            |
+| ----------------- | ------------------------------------------------------------------------------------------------------ |
+| **Detect width**  | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.  |
+| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |

 </TabItem>
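A `yaml` sketch of input streams with roles plus the detect resolution, using Frigate's documented `ffmpeg.inputs` and `detect` options (camera name, URLs, and resolution are placeholders):

```yaml
cameras:
  front: # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/main # placeholder URL
          roles:
            - record
        - path: rtsp://192.168.1.10:554/sub # placeholder URL
          roles:
            - detect
    detect:
      width: 1280
      height: 720
```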
@@ -77,13 +77,19 @@ Object classification is configured as a custom classification model. Each model
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > Enrichments > Object classification" />.
+Navigate to the **Classification** page from the main navigation sidebar, then click **Add Classification**.

-| Field | Description |
-|-------|-------------|
-| **Custom Classification Models > Dog > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) |
-| **Custom Classification Models > Dog > Object Config > Objects** | Object labels to classify (e.g., `dog`, `person`, `car`) |
-| **Custom Classification Models > Dog > Object Config > Classification Type** | Whether to assign results as a **sub label** or **attribute** |
+In the **Create New Classification** dialog:
+
+| Field                   | Description                                                   |
+| ----------------------- | ------------------------------------------------------------- |
+| **Name**                | A name for your classification model (e.g., `dog`)            |
+| **Type**                | Select **Object** for object classification                   |
+| **Object Label**        | The object label to classify (e.g., `dog`, `person`, `car`)   |
+| **Classification Type** | Whether to assign results as a **Sub Label** or **Attribute** |
+| **Classes**             | The class names the model will learn to distinguish between   |
+
+The `threshold` (default: `0.8`) can be adjusted in the YAML configuration.

 </TabItem>
 <TabItem value="yaml">
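A heavily hedged `yaml` sketch of a custom object classification model: the key names below are inferred from the field labels in this hunk (`threshold`, object config, classification type) and may not match the exact schema in your Frigate version.

```yaml
classification:
  custom:
    dog: # model name from the example above
      threshold: 0.8
      object_config:
        objects:
          - dog
        classification_type: sub_label # or attribute (inferred key name)
```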
@@ -125,7 +131,7 @@ If examples for some of your classes do not appear in the grid, you can continue

 :::tip Diversity matters far more than volume

-Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the class. **This is why Frigate does not implement bulk training in the UI.**
+Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what _that exact moment_ looked like rather than what actually defines the class. **This is why Frigate does not implement bulk training in the UI.**

 For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
@@ -42,14 +42,17 @@ State classification is configured as a custom classification model. Each model
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > Enrichments > Object classification" />.
+Navigate to the **Classification** page from the main navigation sidebar, select the **States** tab, then click **Add Classification**.

-| Field | Description |
-|-------|-------------|
-| **Custom Classification Models > Front Door > Threshold** | Minimum confidence score for a classification attempt to count (default: `0.8`) |
-| **Custom Classification Models > Front Door > State Config > Motion** | Run classification when motion overlaps the crop area |
-| **Custom Classification Models > Front Door > State Config > Interval** | Run classification every N seconds (optional) |
-| **Custom Classification Models > Front Door > State Config > Cameras > Front > Crop** | The rectangular crop region on each camera to classify |
+In the **Create New Classification** dialog:
+
+| Field       | Description                                                                          |
+| ----------- | ------------------------------------------------------------------------------------ |
+| **Name**    | A name for your state classification model (e.g., `front_door`)                      |
+| **Type**    | Select **State** for state classification                                            |
+| **Classes** | The state names the model will learn to distinguish between (e.g., `open`, `closed`) |
+
+After creating the model, the wizard will guide you through selecting the camera crop area and assigning training examples. The `threshold` (default: `0.8`), `motion`, and `interval` settings can be adjusted in the YAML configuration.

 </TabItem>
 <TabItem value="yaml">
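A heavily hedged `yaml` sketch of a state classification model: key names are inferred from the field labels in this hunk (`state_config`, `motion`, `interval`, per-camera `crop`), and the crop coordinates are placeholders.

```yaml
classification:
  custom:
    front_door: # model name from the example above
      threshold: 0.8
      state_config:
        motion: true # classify when motion overlaps the crop area
        interval: 10 # optional: also classify every N seconds
        cameras:
          front: # placeholder camera name
            crop: [0, 180, 220, 400] # placeholder rectangle
```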
@@ -94,7 +97,7 @@ Once some images are assigned, training will begin automatically.

 :::tip Diversity matters far more than volume

-Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.**
+Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what _that exact moment_ looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.**

 For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
@@ -25,7 +25,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
 | preset-nvidia | Nvidia GPU | |
 | preset-jetson-h264 | Nvidia Jetson with h264 stream | |
 | preset-jetson-h265 | Nvidia Jetson with h265 stream | |
-| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
+| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |

 Select the appropriate hwaccel preset for your hardware.
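Applying a preset from the table above is a one-line documented option; a minimal sketch:

```yaml
ffmpeg:
  hwaccel_args: preset-rkmpp # requires the *-rk image and privileged mode
```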
@@ -43,9 +43,9 @@ You can define custom prompts at the global level and per-object type. To config
 <TabItem value="ui">

 1. Navigate to <NavPath path="Settings > Global configuration > Objects" />.
-   - Expand the **GenAI** section
-   - Set **Prompt** to your custom prompt text
-   - Under **Object Prompts**, add entries keyed by object type (e.g., `person`, `car`) with custom prompts for each
+   - Expand the **GenAI object config** section
+   - Set **Caption prompt** to your custom prompt text
+   - Under **Object prompts**, add entries keyed by object type (e.g., `person`, `car`) with custom prompts for each

 </TabItem>
 <TabItem value="yaml">
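A hedged `yaml` sketch of the global prompt settings above, based on Frigate's documented `genai` options (prompt texts are placeholders; exact nesting can vary across Frigate versions):

```yaml
genai:
  enabled: true
  prompt: "Describe the main subject in these images." # placeholder prompt
  object_prompts:
    person: "Describe the person's clothing and actions." # placeholder
    car: "Describe the vehicle's color and direction of travel." # placeholder
```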
@@ -73,13 +73,13 @@ Prompts can also be overridden at the camera level to provide a more detailed pr
 <TabItem value="ui">

 1. Navigate to <NavPath path="Settings > Camera configuration > Objects" /> for the desired camera.
-   - Expand the **GenAI** section
-   - Set **Enabled** to on
-   - Set **Use Snapshot** to on if desired
-   - Set **Prompt** to a camera-specific prompt
-   - Under **Object Prompts**, add entries keyed by object type with camera-specific prompts
-   - Set **Objects** to the list of object types that should receive descriptions (e.g., `person`, `cat`)
-   - Set **Required Zones** to limit descriptions to objects in specific zones (e.g., `steps`)
+   - Expand the **GenAI object config** section
+   - Set **Enable GenAI** to on
+   - Set **Use snapshots** to on if desired
+   - Set **Caption prompt** to a camera-specific prompt
+   - Under **Object prompts**, add entries keyed by object type with camera-specific prompts
+   - Set **GenAI objects** to the list of object types that should receive descriptions (e.g., `person`, `cat`)
+   - Set **Required zones** to limit descriptions to objects in specific zones (e.g., `steps`)

 </TabItem>
 <TabItem value="yaml">
@@ -110,7 +110,7 @@ Here are some common starter configuration examples. These can be configured thr

 1. Navigate to <NavPath path="Settings > System > MQTT" /> and configure the MQTT connection to your Home Assistant Mosquitto broker
 2. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `Raspberry Pi (H.264)`
-3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`
+3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `EdgeTPU` and **Device** `usb`
 4. Navigate to <NavPath path="Settings > Global configuration > Recording" /> and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion`
 5. Navigate to <NavPath path="Settings > Global configuration > Snapshots" /> and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30`
 6. Navigate to <NavPath path="Settings > Camera configuration > Management" /> and add your camera with the appropriate RTSP stream URL
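Step 3's detector definition maps to Frigate's documented `detectors` YAML; a minimal sketch (the detector name `coral` is a conventional placeholder):

```yaml
detectors:
  coral: # placeholder detector name
    type: edgetpu
    device: usb
```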
@@ -187,9 +187,9 @@ cameras:
 <ConfigTabs>
 <TabItem value="ui">

-1. Navigate to <NavPath path="Settings > System > MQTT" /> and set **Enabled** to off
+1. Navigate to <NavPath path="Settings > System > MQTT" /> and set **Enable MQTT** to off
 2. Navigate to <NavPath path="Settings > Global configuration > FFmpeg" /> and set **Hardware acceleration arguments** to `VAAPI (Intel/AMD GPU)`
-3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`
+3. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `EdgeTPU` and **Device** `usb`
 4. Navigate to <NavPath path="Settings > Global configuration > Recording" /> and set **Enable recording** to on, **Motion retention > Retention days** to `7`, **Alert retention > Event retention > Retention days** to `30`, **Alert retention > Event retention > Retention mode** to `motion`, **Detection retention > Event retention > Retention days** to `30`, **Detection retention > Event retention > Retention mode** to `motion`
 5. Navigate to <NavPath path="Settings > Global configuration > Snapshots" /> and set **Enable snapshots** to on, **Snapshot retention > Default retention** to `30`
 6. Navigate to <NavPath path="Settings > Camera configuration > Management" /> and add your camera with the appropriate RTSP stream URL
@@ -63,7 +63,7 @@ Like other enrichments in Frigate, LPR **must be enabled globally** to use the f
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enabled** toggle.
+Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enable LPR** toggle.

 </TabItem>
 <TabItem value="yaml">
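The `yaml` equivalent of enabling LPR globally and opting a single camera out, sketched from Frigate's documented `lpr` options (camera name is a placeholder):

```yaml
lpr:
  enabled: true # must be enabled globally first

cameras:
  garage: # placeholder camera name
    lpr:
      enabled: false # opt this camera out
```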
@@ -201,8 +201,8 @@ If Frigate is already recognizing plates correctly, leave enhancement at the def

 Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.

-| Field | Description |
-|-------|-------------|
+| Field                 | Description                                                                       |
+| --------------------- | --------------------------------------------------------------------------------- |
+| **Replacement rules** | Regex replacement rules used to normalize detected plate strings before matching. |

 </TabItem>
@@ -268,15 +268,15 @@ These configuration parameters are available at the global level. The only optio

 Navigate to <NavPath path="Settings > Enrichments > License plate recognition" />.

-| Field | Description |
-|-------|-------------|
-| **Enable LPR** | Enable or disable license plate recognition for all cameras; can be overridden per-camera. |
-| **Minimum plate area** | Minimum plate area (pixels) required to attempt recognition. |
-| **Min plate length** | Minimum number of characters a recognized plate must contain to be considered valid. |
-| **Known plates > Wife'S Car** | |
-| **Known plates > Johnny** | |
-| **Known plates > Sally** | |
-| **Known plates > Work Trucks** | |
+| Field                          | Description                                                                                |
+| ------------------------------ | ------------------------------------------------------------------------------------------ |
+| **Enable LPR**                 | Enable or disable license plate recognition for all cameras; can be overridden per-camera. |
+| **Minimum plate area**         | Minimum plate area (pixels) required to attempt recognition.                               |
+| **Min plate length**           | Minimum number of characters a recognized plate must contain to be considered valid.       |
+| **Known plates > Wife'S Car**  |                                                                                            |
+| **Known plates > Johnny**      |                                                                                            |
+| **Known plates > Sally**       |                                                                                            |
+| **Known plates > Work Trucks** |                                                                                            |

 </TabItem>
 <TabItem value="yaml">
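A hedged `yaml` sketch of the known plates list above, based on Frigate's documented `lpr.known_plates` option (the label keys and plate strings are placeholders):

```yaml
lpr:
  enabled: true
  known_plates:
    wifes_car: # placeholder label
      - "ABC-1234" # placeholder plate
    work_trucks: # placeholder label
      - "XYZ-0001" # placeholder plate
```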
@@ -308,7 +308,7 @@ If a camera is configured to detect `car` or `motorcycle` but you don't want Fri
 <ConfigTabs>
 <TabItem value="ui">

-Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enabled** toggle.
+Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" /> for the desired camera and disable the **Enable LPR** toggle.

 </TabItem>
 <TabItem value="yaml">
@@ -351,40 +351,45 @@ An example configuration for a dedicated LPR camera using a `license_plate`-dete

 Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.

-| Field | Description |
-|-------|-------------|
-| **Ffmpeg** | |
+| Field      | Description |
+| ---------- | ----------- |
+| **Ffmpeg** |             |

 Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.

-| Field | Description |
-|-------|-------------|
-| **Enable object detection** | Enable or disable object detection for this camera. |
-| **Detect FPS** | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects). |
-| **Minimum initialization frames** | Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2. |
-| **Detect width** | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
-| **Detect height** | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution. |
+| Field                             | Description                                                                                                                                                                        |
+| --------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Enable object detection**       | Enable or disable object detection for this camera.                                                                                                                                |
+| **Detect FPS**                    | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5, only set higher - at most 10 - if tracking extremely fast moving objects).    |
+| **Minimum initialization frames** | Number of consecutive detection hits required before creating a tracked object. Increase to reduce false initializations. Default value is fps divided by 2.                       |
+| **Detect width**                  | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.                                                                              |
+| **Detect height**                 | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.                                                                             |

 Navigate to <NavPath path="Settings > Camera configuration > Objects" />.

-| Field | Description |
-|-------|-------------|
-| **Objects to track** | List of object labels to track for this camera. |
-| **Object filters > License Plate > Threshold** | |
+| Field                                          | Description                                     |
+| ---------------------------------------------- | ----------------------------------------------- |
+| **Objects to track**                           | List of object labels to track for this camera. |
+| **Object filters > License Plate > Threshold** |                                                 |

 Navigate to <NavPath path="Settings > Camera configuration > Motion detection" />.

-| Field | Description |
-|-------|-------------|
+| Field                | Description                                                                                             |
+| -------------------- | ------------------------------------------------------------------------------------------------------- |
+| **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). |
-| **Contour area** | Minimum contour area in pixels required for a motion contour to be counted. |
-| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection. |
+| **Contour area**     | Minimum contour area in pixels required for a motion contour to be counted.                             |
+| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection.                          |

 Navigate to <NavPath path="Settings > Camera configuration > Recording" />.

-| Field | Description |
-|-------|-------------|
+| Field                | Description                                  |
+| -------------------- | -------------------------------------------- |
+| **Enable recording** | Enable or disable recording for this camera. |

 Navigate to <NavPath path="Settings > Camera configuration > Snapshots" />.

-| Field | Description |
-|-------|-------------|
+| Field                | Description                                         |
+| -------------------- | --------------------------------------------------- |
+| **Enable snapshots** | Enable or disable saving snapshots for this camera. |

 </TabItem>
@ -451,46 +456,52 @@ An example configuration for a dedicated LPR camera using the secondary pipeline

Navigate to <NavPath path="Settings > Camera configuration > License plate recognition" />.

| Field                 | Description                                                                                                                                                                                    |
| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Enable LPR**        | Enable or disable LPR on this camera.                                                                                                                                                           |
| **Enhancement level** | Enhancement level (0-10) applied to plate crops prior to OCR. Higher values may not always improve results; levels above 5 may only work with nighttime plates and should be used with caution. |

Navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />.

| Field      | Description |
| ---------- | ----------- |
| **Ffmpeg** |             |

Navigate to <NavPath path="Settings > Camera configuration > Object detection" />.

| Field                       | Description                                                                                                                                                                      |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Enable object detection** | Enable or disable object detection for this camera.                                                                                                                               |
| **Detect FPS**              | Desired frames per second to run detection on; lower values reduce CPU usage (recommended value is 5; only set higher - at most 10 - if tracking extremely fast-moving objects).   |
| **Detect width**            | Width (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.                                                                             |
| **Detect height**           | Height (pixels) of frames used for the detect stream; leave empty to use the native stream resolution.                                                                            |

Navigate to <NavPath path="Settings > Camera configuration > Objects" />.

| Field                | Description                                     |
| -------------------- | ----------------------------------------------- |
| **Objects to track** | List of object labels to track for this camera. |

Navigate to <NavPath path="Settings > Camera configuration > Motion detection" />.

| Field                | Description                                                                                             |
| -------------------- | ------------------------------------------------------------------------------------------------------- |
| **Motion threshold** | Pixel difference threshold used by the motion detector; higher values reduce sensitivity (range 1-255). |
| **Contour area**     | Minimum contour area in pixels required for a motion contour to be counted.                             |
| **Improve contrast** | Apply contrast improvement to frames before motion analysis to help detection.                          |

Navigate to <NavPath path="Settings > Camera configuration > Recording" />.

| Field                | Description                                  |
| -------------------- | -------------------------------------------- |
| **Enable recording** | Enable or disable recording for this camera. |

Navigate to <NavPath path="Settings > Camera configuration > Review" />.

| Field                                     | Description                                         |
| ----------------------------------------- | --------------------------------------------------- |
| **Detections config > Enable detections** | Enable or disable detection events for this camera. |
| **Detections config > Retain > Default**  |                                                     |

</TabItem>
<TabItem value="yaml">

@ -641,7 +652,7 @@ lpr:

logger:
  default: info
  logs:
    # highlight-next-line
    frigate.data_processing.common.license_plate: debug
```

@ -226,10 +226,10 @@ The jsmpeg live view resolution and encoding quality can be adjusted globally or

Navigate to <NavPath path="Settings > Global configuration > Live playback" /> for global defaults, or <NavPath path="Settings > Camera configuration > Live playback" /> and select a camera for per-camera overrides.

| Field            | Description                                                                                         |
| ---------------- | --------------------------------------------------------------------------------------------------- |
| **Live height**  | Height in pixels for the jsmpeg live stream; must be less than or equal to the detect stream height |
| **Live quality** | Encoding quality for the jsmpeg stream (1 = highest, 31 = lowest)                                   |

</TabItem>
<TabItem value="yaml">

@ -26,13 +26,7 @@ Object filter masks can be used to filter out stubborn false positives in fixed

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select a camera. Use the mask editor to draw motion masks and object filter masks directly on the camera feed. Each mask can be given a friendly name and toggled on or off.

</TabItem>
<TabItem value="yaml">

@ -31,12 +31,14 @@ Metrics are available at `/api/metrics` by default. No additional Frigate config

## Available Metrics

### System Metrics

- `frigate_cpu_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process CPU usage percentage
- `frigate_mem_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process memory usage percentage
- `frigate_gpu_usage_percent{gpu_name=""}` - GPU utilization percentage
- `frigate_gpu_mem_usage_percent{gpu_name=""}` - GPU memory usage percentage

### Camera Metrics

- `frigate_camera_fps{camera_name=""}` - Frames per second being consumed from your camera
- `frigate_detection_fps{camera_name=""}` - Number of times detection is run per second
- `frigate_process_fps{camera_name=""}` - Frames per second being processed

@ -46,21 +48,25 @@ Metrics are available at `/api/metrics` by default. No additional Frigate config

- `frigate_audio_rms{camera_name=""}` - Audio RMS for camera

### Detector Metrics

- `frigate_detector_inference_speed_seconds{name=""}` - Time spent running object detection in seconds
- `frigate_detection_start{name=""}` - Detector start time (unix timestamp)

### Storage Metrics

- `frigate_storage_free_bytes{storage=""}` - Storage free bytes
- `frigate_storage_total_bytes{storage=""}` - Storage total bytes
- `frigate_storage_used_bytes{storage=""}` - Storage used bytes
- `frigate_storage_mount_type{mount_type="", storage=""}` - Storage mount type info

### Service Metrics

- `frigate_service_uptime_seconds` - Uptime in seconds
- `frigate_service_last_updated_timestamp` - Stats recorded time (unix timestamp)
- `frigate_device_temperature{device=""}` - Device temperature

### Event Metrics

- `frigate_camera_events{camera="", label=""}` - Count of camera events since exporter started

## Configuring Prometheus

@ -69,10 +75,10 @@ To scrape metrics from Frigate, add the following to your Prometheus configurati

```yaml
scrape_configs:
  - job_name: "frigate"
    metrics_path: "/api/metrics"
    static_configs:
      - targets: ["frigate:5000"]
    scrape_interval: 15s
```

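The scraped metrics can also drive alerting. A minimal Prometheus alerting-rules sketch is shown below; the group name, alert name, and 5-minute window are illustrative choices, not part of the Frigate docs:

```yaml
groups:
  - name: frigate
    rules:
      - alert: FrigateCameraStalled
        # fires when a camera has delivered no frames for 5 minutes
        expr: frigate_camera_fps == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Camera {{ $labels.camera_name }} is not delivering frames"
```

Load the file via `rule_files` in your Prometheus configuration and adjust the expression and window to your environment.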
@ -48,8 +48,8 @@ Navigate to <NavPath path="Settings > Global configuration > Motion detection" /

To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera, or use the <NavPath path="Settings > Camera configuration > Motion tuner" /> to adjust it live.

| Field                | Description                                                                                                                                                                                                                                                                                  |
| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Motion threshold** | The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive. The value should be between 1 and 255. (default: 30)   |

</TabItem>

@ -79,8 +79,8 @@ Navigate to <NavPath path="Settings > Global configuration > Motion detection" /

To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera, or use the <NavPath path="Settings > Camera configuration > Motion tuner" /> to adjust it live.

| Field            | Description                                                                                                                                                                                                                                                                                                                                        |
| ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Contour area** | Minimum size in pixels in the resized motion image that counts as motion. Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller moving objects. As a rule of thumb: 10 = high sensitivity, 30 = medium sensitivity, 50 = low sensitivity. (default: 10)     |

</TabItem>

@ -126,8 +126,8 @@ Navigate to <NavPath path="Settings > Global configuration > Motion detection" /

To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera.

| Field                   | Description                                                                                                                                                                                                                                                                                                                                                                                                           |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Lightning threshold** | The percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera. (default: 0.8)     |

</TabItem>

@ -168,8 +168,8 @@ Navigate to <NavPath path="Settings > Global configuration > Motion detection" /

To override for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Motion detection" /> and select the camera.

| Field                     | Description                                                                                                                                                                                                                                                                                                                                                                                                           |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Skip motion threshold** | Fraction of the frame that must change in a single update before Frigate will completely ignore any motion in that frame. Values range between 0.0 and 1.0; leave unset (null) to disable. For example, setting this to 0.7 causes Frigate to skip reporting motion boxes when more than 70% of the image appears to change (e.g. during lightning storms, IR/color mode switches, or other sudden lighting events).     |

</TabItem>

@ -37,9 +37,8 @@ Notifications will be prevented if either:

<TabItem value="ui">

1. Navigate to <NavPath path="Settings > Notifications > Notifications" />.
   - Set **Enable notifications** to on
   - Set **Cooldown period** to the desired number of seconds to wait before sending another notification from any camera (e.g. `10`)
   - Set **Email** to your email address
   - Enable notifications for the desired cameras

</TabItem>
<TabItem value="yaml">

@ -60,8 +59,8 @@ notifications:

<TabItem value="ui">

1. Navigate to <NavPath path="Settings > Camera configuration > Notifications" /> and select the desired camera.
   - Set **Enable notifications** to on
   - Set **Cooldown period** to the desired number of seconds to wait before sending another notification from this camera (e.g. `30`)

</TabItem>
<TabItem value="yaml">

@ -56,7 +56,6 @@ Frigate supports multiple different detectors that work on different types of ha

- [AXEngine](#axera): axmodels can run on AXERA AI acceleration.

**For Testing**

- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Uses the CPU to run a TFLite model; this is not recommended, as in most cases OpenVINO can be used in CPU mode with better results.

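The OpenVINO-in-CPU-mode alternative mentioned above takes only a short detector block; a minimal sketch (the detector name `ov` is an arbitrary choice):

```yaml
detectors:
  ov:
    type: openvino
    # run the OpenVINO runtime on the CPU instead of the tflite CPU detector
    device: CPU
```
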
@ -92,7 +91,7 @@ See [common Edge TPU troubleshooting steps](/troubleshooting/edgetpu) if the Edg

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **EdgeTPU** detector type with device set to `usb`.

</TabItem>
<TabItem value="yaml">

@ -137,7 +136,7 @@ _warning: may have [compatibility issues](https://github.com/blakeblackshear/fri

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **EdgeTPU** detector type with the device field left empty.

</TabItem>
<TabItem value="yaml">

@ -157,7 +156,7 @@ detectors:

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **EdgeTPU** detector type with device set to `pci`.

</TabItem>
<TabItem value="yaml">

@ -247,15 +246,15 @@ After placing the downloaded files for the tflite model and labels in your confi

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **EdgeTPU** detector type with device set to `usb`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings:

| Field                                    | Value                                                             |
| ---------------------------------------- | ----------------------------------------------------------------- |
| **Object Detection Model Type**          | `yolo-generic`                                                    |
| **Object detection model input width**   | `320` (should match the imgsize of the model)                     |
| **Object detection model input height**  | `320` (should match the imgsize of the model)                     |
| **Custom object detector model path**    | `/config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite` |
| **Label map for custom object detector** | `/config/labels-coco17.txt`                                       |

</TabItem>
<TabItem value="yaml">

@ -304,17 +303,17 @@ Use this configuration for YOLO-based models. When no custom model path or URL i

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings:

| Field                                    | Value                   |
| ---------------------------------------- | ----------------------- |
| **Object detection model input width**   | `320`                   |
| **Object detection model input height**  | `320`                   |
| **Model Input Tensor Shape**             | `nhwc`                  |
| **Model Input Pixel Color Format**       | `rgb`                   |
| **Model Input D Type**                   | `int`                   |
| **Object Detection Model Type**          | `yolo-generic`          |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

The detector automatically selects the default model based on your hardware. Optionally, specify a local model path or URL to override.

@ -360,15 +359,15 @@ For SSD-based models, provide either a model path or URL to your compiled SSD mo

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings:

| Field                                   | Value  |
| --------------------------------------- | ------ |
| **Object detection model input width**  | `300`  |
| **Object detection model input height** | `300`  |
| **Model Input Tensor Shape**            | `nhwc` |
| **Model Input Pixel Color Format**      | `rgb`  |
| **Object Detection Model Type**         | `ssd`  |

Specify the local model path or URL for SSD MobileNet v1.

@ -405,7 +404,7 @@ The Hailo detector supports all YOLO models compiled for Hailo hardware that inc

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **Hailo-8/Hailo-8L** detector type with device set to `PCIe`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings to match your custom model dimensions and format.

</TabItem>
<TabItem value="yaml">

@ -505,14 +504,14 @@ Use the model configuration shown below when using the OpenVINO detector with th

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **OpenVINO** detector type with device set to `GPU` (or `NPU`). Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field                                    | Value                                      |
| ---------------------------------------- | ------------------------------------------ |
| **Object detection model input width**   | `300`                                      |
| **Object detection model input height**  | `300`                                      |
| **Model Input Tensor Shape**             | `nhwc`                                     |
| **Model Input Pixel Color Format**       | `bgr`                                      |
| **Custom object detector model path**    | `/openvino-model/ssdlite_mobilenet_v2.xml` |
| **Label map for custom object detector** | `/openvino-model/coco_91cl_bkgr.txt`       |

</TabItem>
<TabItem value="yaml">

@ -555,15 +554,15 @@ After placing the downloaded onnx model in your config folder, use the following

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **OpenVINO** detector type with device set to `GPU`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field                                    | Value                                                 |
| ---------------------------------------- | ----------------------------------------------------- |
| **Object Detection Model Type**          | `yolonas`                                             |
| **Object detection model input width**   | `320` (should match whatever was set in the notebook) |
| **Object detection model input height**  | `320` (should match whatever was set in the notebook) |
| **Model Input Tensor Shape**             | `nchw`                                                |
| **Model Input Pixel Color Format**       | `bgr`                                                 |
| **Custom object detector model path**    | `/config/yolo_nas_s.onnx`                             |
| **Label map for custom object detector** | `/labelmap/coco-80.txt`                               |

</TabItem>
<TabItem value="yaml">

@ -617,15 +616,15 @@ After placing the downloaded onnx model in your config folder, use the following

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **OpenVINO** detector type with device set to `GPU` (or `NPU`). Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field                                    | Value                                                    |
| ---------------------------------------- | -------------------------------------------------------- |
| **Object Detection Model Type**          | `yolo-generic`                                           |
| **Object detection model input width**   | `320` (should match the imgsize set during model export) |
| **Object detection model input height**  | `320` (should match the imgsize set during model export) |
| **Model Input Tensor Shape**             | `nchw`                                                   |
| **Model Input D Type**                   | `float`                                                  |
| **Custom object detector model path**    | `/config/model_cache/yolo.onnx`                          |
| **Label map for custom object detector** | `/labelmap/coco-80.txt`                                  |

</TabItem>
<TabItem value="yaml">

@ -673,14 +672,14 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **OpenVINO** detector type with device set to `GPU`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field                                   | Value                             |
| --------------------------------------- | --------------------------------- |
| **Object Detection Model Type**         | `rfdetr`                          |
| **Object detection model input width**  | `320`                             |
| **Object detection model input height** | `320`                             |
| **Model Input Tensor Shape**            | `nchw`                            |
| **Model Input D Type**                  | `float`                           |
| **Custom object detector model path**   | `/config/model_cache/rfdetr.onnx` |

</TabItem>
<TabItem value="yaml">

@ -725,15 +724,15 @@ After placing the downloaded onnx model in your config/model_cache folder, use t

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **OpenVINO** detector type with device set to `CPU`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field                                    | Value                              |
| ---------------------------------------- | ---------------------------------- |
| **Object Detection Model Type**          | `dfine`                            |
| **Object detection model input width**   | `640`                              |
| **Object detection model input height**  | `640`                              |
| **Model Input Tensor Shape**             | `nchw`                             |
| **Model Input D Type**                   | `float`                            |
| **Custom object detector model path**    | `/config/model_cache/dfine-s.onnx` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt`            |

</TabItem>
<TabItem value="yaml">

@ -777,7 +776,7 @@ Using the detector config below will connect to the client:

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ZMQ IPC** detector type with the endpoint set to `tcp://host.docker.internal:5555`.

</TabItem>
<TabItem value="yaml">

@ -811,17 +810,17 @@ When Frigate is started with the following config it will connect to the detecto
|
||||
<ConfigTabs>
|
||||
<TabItem value="ui">
|
||||
|
||||
Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ZMQ** detector type with the endpoint set to `tcp://host.docker.internal:5555`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:
|
||||
Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ZMQ IPC** detector type with the endpoint set to `tcp://host.docker.internal:5555`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:
|
||||
|
||||
| Field | Value |
|
||||
|-------|-------|
|
||||
| **Model type** | `yolo-generic` |
|
||||
| **Width** | `320` (should match the imgsize set during model export) |
|
||||
| **Height** | `320` (should match the imgsize set during model export) |
|
||||
| **Input tensor** | `nchw` |
|
||||
| **Input dtype** | `float` |
|
||||
| **Path** | `/config/model_cache/yolo.onnx` |
|
||||
| **Labelmap path** | `/labelmap/coco-80.txt` |
|
||||
| Field | Value |
|
||||
| ---------------------------------------- | -------------------------------------------------------- |
|
||||
| **Object Detection Model Type** | `yolo-generic` |
|
||||
| **Object detection model input width** | `320` (should match the imgsize set during model export) |
|
||||
| **Object detection model input height** | `320` (should match the imgsize set during model export) |
|
||||
| **Model Input Tensor Shape** | `nchw` |
|
||||
| **Model Input D Type** | `float` |
|
||||
| **Custom object detector model path** | `/config/model_cache/yolo.onnx` |
|
||||
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="yaml">
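
The model fields above map to YAML keys along these lines (a sketch of the `model` section only; the ZMQ detector block is omitted since its exact keys depend on your setup):

```yaml
model:
  model_type: yolo-generic
  width: 320  # should match the imgsize set during model export
  height: 320 # should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolo.onnx
  labelmap_path: /labelmap/coco-80.txt
```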
@ -1022,15 +1021,15 @@ After placing the downloaded onnx model in your config folder, use the following

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ONNX** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolonas` |
| **Width** | `320` (should match whatever was set in notebook) |
| **Height** | `320` (should match whatever was set in notebook) |
| **Input pixel format** | `bgr` |
| **Input tensor** | `nchw` |
| **Path** | `/config/yolo_nas_s.onnx` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ------------------------------------------------- |
| **Object Detection Model Type** | `yolonas` |
| **Object detection model input width** | `320` (should match whatever was set in notebook) |
| **Object detection model input height** | `320` (should match whatever was set in notebook) |
| **Model Input Pixel Color Format** | `bgr` |
| **Model Input Tensor Shape** | `nchw` |
| **Custom object detector model path** | `/config/yolo_nas_s.onnx` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
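
For reference, a YAML sketch matching the YOLO-NAS table above (the detector key name `onnx` is arbitrary):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```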
@ -1081,15 +1080,15 @@ After placing the downloaded onnx model in your config folder, use the following

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ONNX** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolo-generic` |
| **Width** | `320` (should match the imgsize set during model export) |
| **Height** | `320` (should match the imgsize set during model export) |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Path** | `/config/model_cache/yolo.onnx` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | -------------------------------------------------------- |
| **Object Detection Model Type** | `yolo-generic` |
| **Object detection model input width** | `320` (should match the imgsize set during model export) |
| **Object detection model input height** | `320` (should match the imgsize set during model export) |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Custom object detector model path** | `/config/model_cache/yolo.onnx` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -1130,15 +1129,15 @@ After placing the downloaded onnx model in your config folder, use the following

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ONNX** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolox` |
| **Width** | `416` (should match the imgsize set during model export) |
| **Height** | `416` (should match the imgsize set during model export) |
| **Input tensor** | `nchw` |
| **Input dtype** | `float_denorm` |
| **Path** | `/config/model_cache/yolox_tiny.onnx` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | -------------------------------------------------------- |
| **Object Detection Model Type** | `yolox` |
| **Object detection model input width** | `416` (should match the imgsize set during model export) |
| **Object detection model input height** | `416` (should match the imgsize set during model export) |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float_denorm` |
| **Custom object detector model path** | `/config/model_cache/yolox_tiny.onnx` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
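
A YAML sketch matching the YOLOX table above (note the `float_denorm` input dtype):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolox
  width: 416  # should match the imgsize set during model export
  height: 416
  input_tensor: nchw
  input_dtype: float_denorm
  path: /config/model_cache/yolox_tiny.onnx
  labelmap_path: /labelmap/coco-80.txt
```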
@ -1179,14 +1178,14 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ONNX** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `rfdetr` |
| **Width** | `320` |
| **Height** | `320` |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Path** | `/config/model_cache/rfdetr.onnx` |
| Field | Value |
| --------------------------------------- | --------------------------------- |
| **Object Detection Model Type** | `rfdetr` |
| **Object detection model input width** | `320` |
| **Object detection model input height** | `320` |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Custom object detector model path** | `/config/model_cache/rfdetr.onnx` |

</TabItem>
<TabItem value="yaml">
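
A YAML sketch matching the RF-DETR table above (no labelmap row appears in the table, so none is set here):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: rfdetr
  width: 320
  height: 320
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/rfdetr.onnx
```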
@ -1224,15 +1223,15 @@ After placing the downloaded onnx model in your `config/model_cache` folder, use

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **ONNX** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `dfine` |
| **Width** | `640` |
| **Height** | `640` |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Path** | `/config/model_cache/dfine_m_obj2coco.onnx` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ------------------------------------------- |
| **Object Detection Model Type** | `dfine` |
| **Object detection model input width** | `640` |
| **Object detection model input height** | `640` |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Custom object detector model path** | `/config/model_cache/dfine_m_obj2coco.onnx` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
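
A YAML sketch matching the D-FINE table above:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/dfine_m_obj2coco.onnx
  labelmap_path: /labelmap/coco-80.txt
```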
@ -1312,7 +1311,7 @@ To integrate CodeProject.AI into Frigate, configure the detector as follows:
<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **Deepstack** detector type. Set the API URL to point to your CodeProject.AI server (e.g., `http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection`).
Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **DeepStack** detector type. Set the API URL to point to your CodeProject.AI server (e.g., `http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection`).

</TabItem>
<TabItem value="yaml">
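
A YAML sketch for the same setup (replace the placeholder host and port with your CodeProject.AI server's address):

```yaml
detectors:
  deepstack:
    type: deepstack
    api_url: http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection
```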
@ -1417,14 +1416,14 @@ Below is the recommended configuration for using the **YOLO-NAS** (small) model

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolonas` |
| **Width** | `320` (can be set to `640` for higher resolution) |
| **Height** | `320` (can be set to `640` for higher resolution) |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ------------------------------------------------- |
| **Object Detection Model Type** | `yolonas` |
| **Object detection model input width** | `320` (can be set to `640` for higher resolution) |
| **Object detection model input height** | `320` (can be set to `640` for higher resolution) |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
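
A YAML sketch for the MemryX YOLO-NAS setup above (the detector key name and the `device` key spelling are assumptions; the `model` keys mirror the table):

```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0 # assumed key, per the device shown in the UI step

model:
  model_type: yolonas
  width: 320  # can be set to 640 for higher resolution
  height: 320
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
```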
@ -1465,14 +1464,14 @@ Below is the recommended configuration for using the **YOLOv9** (small) model wi

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolo-generic` |
| **Width** | `320` (can be set to `640` for higher resolution) |
| **Height** | `320` (can be set to `640` for higher resolution) |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ------------------------------------------------- |
| **Object Detection Model Type** | `yolo-generic` |
| **Object detection model input width** | `320` (can be set to `640` for higher resolution) |
| **Object detection model input height** | `320` (can be set to `640` for higher resolution) |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -1512,14 +1511,14 @@ Below is the recommended configuration for using the **YOLOX** (small) model wit

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `yolox` |
| **Width** | `640` |
| **Height** | `640` |
| **Input tensor** | `nchw` |
| **Input dtype** | `float_denorm` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ----------------------- |
| **Object Detection Model Type** | `yolox` |
| **Object detection model input width** | `640` |
| **Object detection model input height** | `640` |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float_denorm` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -1559,14 +1558,14 @@ Below is the recommended configuration for using the **SSDLite MobileNet v2** mo

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **MemryX** detector type with device set to `PCIe:0`. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Model type** | `ssd` |
| **Width** | `320` |
| **Height** | `320` |
| **Input tensor** | `nchw` |
| **Input dtype** | `float` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ----------------------- |
| **Object Detection Model Type** | `ssd` |
| **Object detection model input width** | `320` |
| **Object detection model input height** | `320` |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input D Type** | `float` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -1698,14 +1697,14 @@ Use the config below to work with generated TRT models:

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **TensorRT** detector type with the device set to `0` (the default GPU index). Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `/config/model_cache/tensorrt/yolov7-320.trt` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| **Input tensor** | `nchw` |
| **Input pixel format** | `rgb` |
| **Width** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) |
| **Height** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) |
| Field | Value |
| ---------------------------------------- | ------------------------------------------------------------ |
| **Custom object detector model path** | `/config/model_cache/tensorrt/yolov7-320.trt` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |
| **Model Input Tensor Shape** | `nchw` |
| **Model Input Pixel Color Format** | `rgb` |
| **Object detection model input width** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) |
| **Object detection model input height** | `320` (MUST match the chosen model, e.g., yolov7-320 -> 320) |

</TabItem>
<TabItem value="yaml">
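
A YAML sketch matching the TensorRT table above:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # the default GPU index

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  labelmap_path: /labelmap/coco-80.txt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320  # MUST match the chosen model, e.g., yolov7-320 -> 320
  height: 320
```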
@ -1755,13 +1754,13 @@ Use the model configuration shown below when using the synaptics detector with t

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **Synaptics** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `/synaptics/mobilenet.synap` |
| **Width** | `224` |
| **Height** | `224` |
| **Tensor format** | `nhwc` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ---------------------------- |
| **Custom object detector model path** | `/synaptics/mobilenet.synap` |
| **Object detection model input width** | `224` |
| **Object detection model input height** | `224` |
| **Tensor format** | `nhwc` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
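
A YAML sketch for the Synaptics setup above (the detector block keys are assumptions; the `model` keys mirror the table):

```yaml
detectors:
  synap:
    type: synaptics # assumed type name for the Synaptics detector

model:
  path: /synaptics/mobilenet.synap
  width: 224
  height: 224
  input_tensor: nhwc
  labelmap_path: /labelmap/coco-80.txt
```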
@ -1882,15 +1881,15 @@ The inference time was determined on a rk3588 with 3 NPU cores.

Navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `deci-fp16-yolonas_s` (or `deci-fp16-yolonas_m`, `deci-fp16-yolonas_l`) |
| **Model type** | `yolonas` |
| **Width** | `320` |
| **Height** | `320` |
| **Input pixel format** | `bgr` |
| **Input tensor** | `nhwc` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ----------------------------------------------------------------------- |
| **Custom object detector model path** | `deci-fp16-yolonas_s` (or `deci-fp16-yolonas_m`, `deci-fp16-yolonas_l`) |
| **Object Detection Model Type** | `yolonas` |
| **Object detection model input width** | `320` |
| **Object detection model input height** | `320` |
| **Model Input Pixel Color Format** | `bgr` |
| **Model Input Tensor Shape** | `nhwc` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
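
A YAML sketch for the RKNN YOLO-NAS setup above (the `num_cores` value of 3 follows the rk3588 note earlier in this section and is an assumption for other SoCs):

```yaml
detectors:
  rknn:
    type: rknn
    num_cores: 3 # rk3588 has 3 NPU cores; adjust for your SoC

model:
  model_type: yolonas
  path: deci-fp16-yolonas_s # or deci-fp16-yolonas_m, deci-fp16-yolonas_l
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nhwc
  labelmap_path: /labelmap/coco-80.txt
```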
@ -1928,14 +1927,14 @@ The pre-trained YOLO-NAS weights from DeciAI are subject to their license and ca

Navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `frigate-fp16-yolov9-t` (or other yolov9 variants) |
| **Model type** | `yolo-generic` |
| **Width** | `320` |
| **Height** | `320` |
| **Input tensor** | `nhwc` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | -------------------------------------------------- |
| **Custom object detector model path** | `frigate-fp16-yolov9-t` (or other yolov9 variants) |
| **Object Detection Model Type** | `yolo-generic` |
| **Object detection model input width** | `320` |
| **Object detection model input height** | `320` |
| **Model Input Tensor Shape** | `nhwc` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -1968,14 +1967,14 @@ model: # required

Navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `rock-i8-yolox_nano` (or other yolox variants) |
| **Model type** | `yolox` |
| **Width** | `416` |
| **Height** | `416` |
| **Input tensor** | `nhwc` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ---------------------------------------------- |
| **Custom object detector model path** | `rock-i8-yolox_nano` (or other yolox variants) |
| **Object Detection Model Type** | `yolox` |
| **Object detection model input width** | `416` |
| **Object detection model input height** | `416` |
| **Model Input Tensor Shape** | `nhwc` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">
@ -2190,17 +2189,17 @@ Use the model configuration shown below when using the axengine detector with th
<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **AXEngine** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:
Navigate to <NavPath path="Settings > System > Detector hardware" /> and select the **AXEngine NPU** detector type. Then navigate to <NavPath path="Settings > System > Detection model" /> and configure:

| Field | Value |
|-------|-------|
| **Path** | `frigate-yolov9-tiny` |
| **Model type** | `yolo-generic` |
| **Width** | `320` |
| **Height** | `320` |
| **Input dtype** | `int` |
| **Input pixel format** | `bgr` |
| **Labelmap path** | `/labelmap/coco-80.txt` |
| Field | Value |
| ---------------------------------------- | ----------------------- |
| **Custom object detector model path** | `frigate-yolov9-tiny` |
| **Object Detection Model Type** | `yolo-generic` |
| **Object detection model input width** | `320` |
| **Object detection model input height** | `320` |
| **Model Input D Type** | `int` |
| **Model Input Pixel Color Format** | `bgr` |
| **Label map for custom object detector** | `/labelmap/coco-80.txt` |

</TabItem>
<TabItem value="yaml">

@ -39,9 +39,9 @@ Any detection below `min_score` will be immediately thrown out and never tracked

Navigate to <NavPath path="Settings > Global configuration > Objects" /> to set score filters globally.

| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Score** | Minimum score for a single detection to initiate tracking |
| Field | Description |
| --------------------------------------- | ---------------------------------------------------------------- |
| **Object filters > Person > Min Score** | Minimum score for a single detection to initiate tracking |
| **Object filters > Person > Threshold** | Minimum computed (median) score to be considered a true positive |

To override score filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" /> and select the camera.
@ -97,12 +97,12 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "

Navigate to <NavPath path="Settings > Global configuration > Objects" /> to set shape filters globally.

| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |
| Field | Description |
| --------------------------------------- | ------------------------------------------------------------------------ |
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |

To override shape filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" /> and select the camera.


@ -70,14 +70,14 @@ Object filters help reduce false positives by constraining the size, shape, and

Navigate to <NavPath path="Settings > Global configuration > Objects" />.

| Field | Description |
|-------|-------------|
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |
| **Object filters > Person > Min Score** | Minimum score for the object to initiate tracking |
| **Object filters > Person > Threshold** | Minimum computed score to be considered a true positive |
| Field | Description |
| --------------------------------------- | ------------------------------------------------------------------------ |
| **Object filters > Person > Min Area** | Minimum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Max Area** | Maximum bounding box area in pixels (or decimal for percentage of frame) |
| **Object filters > Person > Min Ratio** | Minimum width/height ratio of the bounding box |
| **Object filters > Person > Max Ratio** | Maximum width/height ratio of the bounding box |
| **Object filters > Person > Min Score** | Minimum score for the object to initiate tracking |
| **Object filters > Person > Threshold** | Minimum computed score to be considered a true positive |

To override filters for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" />.

@ -118,14 +118,7 @@ Object filter masks prevent specific object types from being detected in certain
<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > Global configuration > Objects" />.

| Field | Description |
|-------|-------------|
| **Object mask > Mask1 > Friendly Name / Enabled / Coordinates** | Global object filter mask that applies to all object types |
| **Object filters > Person > Mask > Mask1 > Friendly Name / Enabled / Coordinates** | Per-object mask that applies only to the specified object type |

To configure masks for a specific camera, navigate to <NavPath path="Settings > Camera configuration > Objects" />.
Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and select a camera. Use the mask editor to draw object filter masks directly on the camera feed. Global object masks and per-object masks can both be configured from this view.

</TabItem>
<TabItem value="yaml">

@ -146,11 +146,11 @@ The number of days to retain continuous and motion recordings can be configured.

Navigate to <NavPath path="Settings > Global configuration > Recording" />.

| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for all cameras |
| Field | Description |
| ----------------------------------------- | -------------------------------------------- |
| **Enable recording** | Enable or disable recording for all cameras |
| **Continuous retention > Retention days** | Number of days to keep continuous recordings |
| **Motion retention > Retention days** | Number of days to keep motion recordings |

</TabItem>
<TabItem value="yaml">
@ -178,10 +178,10 @@ The number of days to retain recordings for review items can be specified for it

Navigate to <NavPath path="Settings > Global configuration > Recording" />.

| Field | Description |
|-------|-------------|
| **Enable recording** | Enable or disable recording for all cameras |
| **Alert retention > Event retention > Retention days** | Number of days to keep alert recordings |
| Field | Description |
| ---------------------------------------------------------- | ------------------------------------------- |
| **Enable recording** | Enable or disable recording for all cameras |
| **Alert retention > Event retention > Retention days** | Number of days to keep alert recordings |
| **Detection retention > Event retention > Retention days** | Number of days to keep detection recordings |

</TabItem>
@ -221,17 +221,6 @@ When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means

To configure the speed-up factor, the frame rate and further custom settings, use the `timelapse_args` parameter. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS:

<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > Global configuration > Recording" />.

- Set **Enable recording** to on
- Set **Export config > Timelapse Args** to `-vf setpts=PTS/60 -r 25`

</TabItem>
<TabItem value="yaml">

```yaml {3-4}
record:
  enabled: True
@ -239,9 +228,6 @@ record:
  timelapse_args: "-vf setpts=PTS/60 -r 25"
```

</TabItem>
</ConfigTabs>

:::tip

When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set the camera-level export `hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).

@ -27,7 +27,7 @@ Not every segment of video captured by Frigate may be of the same level of inter

:::note

Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add the following to your config:
Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add more labels via <NavPath path="Settings > Global configuration > Objects" /> or <NavPath path="Settings > Camera configuration > Objects" /> and select your camera. Alternatively, add the following to your config:

```yaml
objects:
@ -47,11 +47,9 @@ By default a review item will only be marked as an alert if a person or car is d
<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > Global configuration > Review" />.
Navigate to <NavPath path="Settings > Global configuration > Review" /> or <NavPath path="Settings > Camera configuration > Review" /> and select your camera.

| Field | Description |
|-------|-------------|
| **Alerts > Labels** | List of object or audio labels that qualify a review item as an alert |
Expand **Alerts config** and configure which labels and zones should generate alerts.

</TabItem>
<TabItem value="yaml">
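
A YAML sketch restricting alerts to specific labels (the label list here is illustrative, matching the person/car default mentioned above):

```yaml
review:
  alerts:
    labels:
      - person
      - car
```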
|
||||
@ -78,11 +76,9 @@ By default all detections that do not qualify as an alert qualify as a detection
|
||||
<ConfigTabs>
|
||||
<TabItem value="ui">
|
||||
|
||||
Navigate to <NavPath path="Settings > Global configuration > Review" />.
|
||||
Navigate to <NavPath path="Settings > Global configuration > Review" /> or <NavPath path="Settings > Camera configuration > Review" /> and select your camera.
|
||||
|
||||
| Field | Description |
|
||||
|-------|-------------|
|
||||
| **Detections > Labels** | List of labels to restrict which tracked objects qualify as detections |
|
||||
Expand **Detections config** and configure which labels should qualify as detections.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="yaml">
|
||||
@ -109,7 +105,7 @@ For example, to exclude objects on the camera _gatecamera_ from any detections:
|
||||
<TabItem value="ui">
|
||||
|
||||
1. Navigate to <NavPath path="Settings > Camera configuration > Review" /> and select the **gatecamera** camera.
|
||||
- Set **Detections > Labels** to an empty list
|
||||
- Expand **Detections config** and turn off all of the object label switches.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="yaml">

@ -31,7 +31,6 @@ Semantic Search is disabled by default and must be enabled before it can be used
Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.

- Set **Enable semantic search** to on
- Set **Reindex on startup** to on if you want to reindex the embeddings database from existing tracked objects

</TabItem>
<TabItem value="yaml">
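A minimal YAML sketch of the same settings (set `reindex` only if you want the embeddings database rebuilt on startup):

```yaml
semantic_search:
  enabled: true
  reindex: true
```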

@ -66,10 +65,10 @@ Differently weighted versions of the Jina models are available and can be select

Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.

| Field | Description |
|-------|-------------|
| **Model** | Select `jinav1` to use the Jina AI CLIP V1 model |
| **Model Size** | `small` (quantized, CPU-friendly) or `large` (full model, GPU-accelerated) |
| Field | Description |
| ------------------------------------------------ | -------------------------------------------------------------------------- |
| **Semantic search model or GenAI provider name** | Select `jinav1` to use the Jina AI CLIP V1 model |
| **Model size** | `small` (quantized, CPU-friendly) or `large` (full model, GPU-accelerated) |

</TabItem>
<TabItem value="yaml">
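The equivalent YAML, as a sketch:

```yaml
semantic_search:
  enabled: true
  model: jinav1
  model_size: small
```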

@ -100,10 +99,10 @@ To use the V2 model, set the model to `jinav2`.

Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.

| Field | Description |
|-------|-------------|
| **Model** | Select `jinav2` to use the Jina AI CLIP V2 model |
| **Model Size** | `large` is recommended for V2 (requires discrete GPU) |
| Field | Description |
| ------------------------------------------------ | ----------------------------------------------------- |
| **Semantic search model or GenAI provider name** | Select `jinav2` to use the Jina AI CLIP V2 model |
| **Model size** | `large` is recommended for V2 (requires discrete GPU) |

</TabItem>
<TabItem value="yaml">
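The equivalent YAML, as a sketch:

```yaml
semantic_search:
  enabled: true
  model: jinav2
  model_size: large
```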

@ -141,9 +140,9 @@ To use llama.cpp for semantic search:

Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.

| Field | Description |
|-------|-------------|
| **Model** | Set to the GenAI config key (e.g. `default`) to use a configured GenAI provider for embeddings |
| Field | Description |
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------- |
| **Semantic search model or GenAI provider name** | Set to the GenAI config key (e.g. `default`) to use a configured GenAI provider for embeddings |

The GenAI provider must also be configured with the `embeddings` role under <NavPath path="Settings > Enrichments > Generative AI" />.

@ -186,10 +185,10 @@ The CLIP models are downloaded in ONNX format, and the `large` model can be acce

Navigate to <NavPath path="Settings > Enrichments > Semantic search" />.

| Field | Description |
|-------|-------------|
| **Model Size** | Set to `large` to enable GPU acceleration |
| **Device** | (Optional) Specify a GPU device index in a multi-GPU system (e.g. `0`) |
| Field | Description |
| -------------- | ---------------------------------------------------------------------- |
| **Model size** | Set to `large` to enable GPU acceleration |
| **Device** | (Optional) Specify a GPU device index in a multi-GPU system (e.g. `0`) |

</TabItem>
<TabItem value="yaml">
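A YAML sketch of the GPU-accelerated setup (the `device` key is an assumption based on the UI field above; verify it against your Frigate version before relying on it):

```yaml
semantic_search:
  model_size: large
  # device: 0  # assumed key; selects a GPU index on multi-GPU systems
```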

@ -242,7 +241,7 @@ Triggers are best configured through the Frigate UI.

#### Managing Triggers in the UI

1. Navigate to <NavPath path="Settings > Triggers" /> and select a camera from the dropdown menu.
1. Navigate to <NavPath path="Settings > Enrichments > Triggers" /> and select a camera from the dropdown menu.
2. Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one.
3. In the **Create Trigger** wizard:
   - Enter a **Name** for the trigger (e.g., "Red Car Alert").

@ -68,15 +68,15 @@ Configure how snapshots are rendered and stored. These settings control the defa

Navigate to <NavPath path="Settings > Global configuration > Snapshots" />.

| Field | Description |
|-------|-------------|
| **Enable snapshots** | Enable or disable saving snapshots for tracked objects |
| **Timestamp overlay** | Overlay a timestamp on snapshots from API |
| **Bounding box overlay** | Draw bounding boxes for tracked objects on snapshots from API |
| **Crop snapshot** | Crop snapshots from API to the detected object's bounding box |
| **Snapshot height** | Height in pixels to resize snapshots to; leave empty to preserve original size |
| **Snapshot quality** | Encode quality for saved snapshots (0-100) |
| **Required zones** | Zones an object must enter for a snapshot to be saved |
| Field | Description |
| ------------------------ | ------------------------------------------------------------------------------ |
| **Enable snapshots**     | Enable or disable saving snapshots for tracked objects |
| **Timestamp overlay**    | Overlay a timestamp on snapshots from API |
| **Bounding box overlay** | Draw bounding boxes for tracked objects on snapshots from API |
| **Crop snapshot**        | Crop snapshots from API to the detected object's bounding box |
| **Snapshot height**      | Height in pixels to resize snapshots to; leave empty to preserve original size |
| **Snapshot quality**     | Encode quality for saved snapshots (0-100) |
| **Required zones**       | Zones an object must enter for a snapshot to be saved |

</TabItem>
<TabItem value="yaml">
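The UI fields above map to YAML roughly as follows (values shown are illustrative):

```yaml
snapshots:
  enabled: true
  timestamp: false
  bounding_box: true
  crop: false
  height: 270
  quality: 70
  required_zones: []
```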

@ -104,10 +104,10 @@ Configure how long snapshots are retained on disk. Per-object retention override

Navigate to <NavPath path="Settings > Global configuration > Snapshots" />.

| Field | Description |
|-------|-------------|
| **Snapshot retention > Default retention** | Number of days to retain snapshots (default: 10) |
| **Snapshot retention > Retention mode** | Retention mode: `all`, `motion`, or `active_objects` |
| Field | Description |
| -------------------------------------------------- | ----------------------------------------------------------------------------------- |
| **Snapshot retention > Default retention**         | Number of days to retain snapshots (default: 10) |
| **Snapshot retention > Retention mode**            | Retention mode: `all`, `motion`, or `active_objects` |
| **Snapshot retention > Object retention > Person** | Per-object overrides for retention days (e.g., keep `person` snapshots for 15 days) |

</TabItem>
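The retention settings above can be sketched in YAML like this (per-object values override the default):

```yaml
snapshots:
  retain:
    default: 10
    objects:
      person: 15
```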

@ -59,8 +59,8 @@ To create an alert only when an object enters the `entire_yard` zone:

Navigate to <NavPath path="Settings > Camera configuration > Review" />.

| Field | Description |
|-------|-------------|
| Field | Description |
| ---------------------------------- | ----------------------------------------------------------------------------------------- |
| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. |

</TabItem>
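The equivalent YAML, using the `entire_yard` zone from the example above:

```yaml
review:
  alerts:
    required_zones:
      - entire_yard
```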

@ -89,9 +89,9 @@ You may also want to filter detections to only be created when an object enters

Navigate to <NavPath path="Settings > Camera configuration > Review" />.

| Field | Description |
|-------|-------------|
| **Alerts config > Required zones** | Zones that an object must enter to be considered an alert; leave empty to allow any zone. |
| Field | Description |
| -------------------------------------- | -------------------------------------------------------------------------------------------- |
| **Alerts config > Required zones**     | Zones that an object must enter to be considered an alert; leave empty to allow any zone.    |
| **Detections config > Required zones** | Zones that an object must enter to be considered a detection; leave empty to allow any zone. |

</TabItem>
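A YAML sketch covering both fields (the zone name is a placeholder):

```yaml
review:
  alerts:
    required_zones:
      - entire_yard
  detections:
    required_zones:
      - entire_yard
```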

@ -319,8 +319,8 @@ The `distance` values are measured in meters (metric) or feet (imperial), depend

Navigate to <NavPath path="Settings > System > UI" />.

| Field | Description |
|-------|-------------|
| Field | Description |
| --------------- | -------------------------------------------------------------------- |
| **Unit system** | Set to `metric` (kilometers per hour) or `imperial` (miles per hour) |

</TabItem>
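In YAML this would look roughly like (the `unit_system` key is inferred from the UI field above; confirm against your Frigate version):

```yaml
ui:
  unit_system: metric
```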

@ -204,8 +204,17 @@ You need to refer to **Configure hardware acceleration** above to enable the con
<ConfigTabs>
<TabItem value="ui">

1. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `openvino` and **Device** `GPU`
2. Navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings for OpenVINO
1. Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `OpenVINO` and **Device** `GPU`
2. Navigate to <NavPath path="Settings > System > Detection model" /> and configure the model settings for OpenVINO:

| Field | Value |
| ---------------------------------------- | ------------------------------------------ |
| **Object detection model input width**   | `300` |
| **Object detection model input height**  | `300` |
| **Model Input Tensor Shape**             | `nhwc` |
| **Model Input Pixel Color Format**       | `bgr` |
| **Custom object detector model path**    | `/openvino-model/ssdlite_mobilenet_v2.xml` |
| **Label map for custom object detector** | `/openvino-model/coco_91cl_bkgr.txt` |

</TabItem>
<TabItem value="yaml">
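The table above corresponds to YAML along these lines (detector name `ov` is an arbitrary placeholder):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```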

@ -264,7 +273,7 @@ services:
<ConfigTabs>
<TabItem value="ui">

Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `edgetpu` and **Device** `usb`.
Navigate to <NavPath path="Settings > System > Detector hardware" /> and add a detector with **Type** `EdgeTPU` and **Device** `usb`.

</TabItem>
<TabItem value="yaml">
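The equivalent YAML (detector name `coral` is an arbitrary placeholder):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```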

@ -296,9 +305,9 @@ Restart Frigate and you should start seeing detections for `person`. If you want

### Step 5: Setup motion masks

Now that you have optimized your configuration for decoding the video stream, you will want to check to see where to implement motion masks. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and enable the Debug view to see motion boxes. Watch for areas that continuously trigger unwanted motion to be detected. Common areas to mask include camera timestamps and trees that frequently blow in the wind. The goal is to avoid wasting object detection cycles looking at these areas.
Now that you have optimized your configuration for decoding the video stream, you will want to check to see where to implement motion masks. Click on the camera from the main dashboard, then select the gear icon in the top right, enable Debug View, and finally enable the switch for Motion Boxes. Watch for areas that continuously trigger unwanted motion to be detected. Common areas to mask include camera timestamps and trees that frequently blow in the wind. The goal is to avoid wasting object detection cycles looking at these areas.

Use the mask editor to draw polygon masks directly on the camera feed. More information about masks can be found [here](../configuration/masks.md).
Use the mask editor to draw polygon masks directly on the camera feed. Navigate to <NavPath path="Settings > Camera configuration > Masks / Zones" /> and set up a motion mask over the area. More information about masks can be found [here](../configuration/masks.md).
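A motion mask can also be defined per camera in YAML. This sketch masks a hypothetical timestamp strip in the top-left corner; the camera name and polygon coordinates are placeholders:

```yaml
cameras:
  name_of_your_camera:
    motion:
      mask:
        - 0,0,0.4,0,0.4,0.06,0,0.06
```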

:::warning

@ -313,7 +322,7 @@ In order to review activity in the Frigate UI, recordings need to be enabled.
<ConfigTabs>
<TabItem value="ui">

1. If you have separate streams for detect and record, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" /> and add a second input with the `record` role pointing to your high-resolution stream
1. If you have separate streams for detect and record, navigate to <NavPath path="Settings > Camera configuration > FFmpeg" />, select your camera, and add a second input with the `record` role pointing to your high-resolution stream
2. Navigate to <NavPath path="Settings > Global configuration > Recording" /> (or <NavPath path="Settings > Camera configuration > Recording" /> for a specific camera) and set **Enable recording** to on

</TabItem>
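The steps above can be sketched in YAML (camera name and stream URLs are placeholders):

```yaml
cameras:
  name_of_your_camera:
    ffmpeg:
      inputs:
        - path: rtsp://camera-address/main # placeholder high-resolution stream
          roles:
            - record
        - path: rtsp://camera-address/sub # placeholder low-resolution stream
          roles:
            - detect
    record:
      enabled: true
```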