diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md
index e352a6a9a..1166c9859 100644
--- a/docs/docs/configuration/object_detectors.md
+++ b/docs/docs/configuration/object_detectors.md
@@ -253,11 +253,11 @@ Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-proc
 
 ## OpenVINO Detector
 
-The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
+The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs, and Intel NPUs. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
 
-The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU` and `GPU`. Currently, there is a known issue with using `AUTO`. For backwards compatibility, Frigate will attempt to use `GPU` if `AUTO` is set in your configuration.
+The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU`, `GPU`, and `NPU`.
 
-OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
+OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It also runs on AMD CPUs, although this is not officially supported. A supported Intel platform is required to use the `GPU` or `NPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html).
 
 :::tip
 
@@ -267,16 +267,25 @@ When using many cameras one detector may not be enough to keep up. Multiple dete
 detectors:
   ov_0:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
   ov_1:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
 ```
 
 :::
 
 ### OpenVINO Supported Models
 
+| Model                                 | GPU | NPU | Notes                                                        |
+| ------------------------------------- | --- | --- | ------------------------------------------------------------ |
+| [YOLOv9](#yolo-v3-v4-v7-v9)           | ✅  | ✅  | Recommended for GPU & NPU                                    |
+| [RF-DETR](#rf-detr)                   | ✅  |     |                                                              |
+| [YOLO-NAS](#yolo-nas)                 | ✅  | ❌  | Works on NPU only in non-flat format                         |
+| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅  | ✅  | Fast and lightweight model, less accurate than larger models |
+| [YOLOX](#yolox)                       | ✅  |     |                                                              |
+| [D-FINE](#d-fine)                     | ❌  | ❌  |                                                              |
+
 #### SSDLite MobileNet v2
 
 An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
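The support table and the detector snippet above can be combined into a complete NPU configuration. A minimal sketch, assuming a YOLOv9 model exported to ONNX per the instructions later in this page; the `ov_npu` name, model path, and input size are illustrative and must match your actual export:

```yaml
detectors:
  ov_npu:
    type: openvino
    device: NPU

model:
  model_type: yolo-generic                 # YOLOv9 uses the generic YOLO post-processing
  width: 320                               # illustrative; must match the exported model
  height: 320
  path: /config/model_cache/yolov9-t.onnx  # hypothetical path to your downloaded model
```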
@@ -287,7 +296,7 @@ Use the model configuration shown below when using the OpenVINO detector with th
 detectors:
   ov:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
 
 model:
   width: 300
@@ -348,7 +357,7 @@ After placing the downloaded onnx model in your config folder, you can use the f
 detectors:
   ov:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
 
 model:
   model_type: yolo-generic
@@ -609,6 +618,14 @@ detectors:
 
 ### ONNX Supported Models
 
+| Model                                 | Nvidia GPU | AMD GPU | Notes                                               |
+| ------------------------------------- | ---------- | ------- | --------------------------------------------------- |
+| [YOLOv9](#yolo-v3-v4-v7-v9-2)         | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
+| [RF-DETR](#rf-detr)                   | ✅         | ❌      | Supports CUDA Graphs for optimal Nvidia performance |
+| [YOLO-NAS](#yolo-nas-1)               | ⚠️         | ⚠️      | Not supported by CUDA Graphs                        |
+| [YOLOX](#yolox-1)                     | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
+| [D-FINE](#d-fine)                     | ⚠️         | ❌      | Not supported by CUDA Graphs                        |
+
 There is no default model provided; the following formats are supported:
 
 #### YOLO-NAS
diff --git a/docs/docs/frigate/hardware.md b/docs/docs/frigate/hardware.md
index f06f8ac7d..a1cb2b8ee 100644
--- a/docs/docs/frigate/hardware.md
+++ b/docs/docs/frigate/hardware.md
@@ -78,7 +78,7 @@ Frigate supports multiple different detectors that work on different types of ha
 
 **Intel**
 
-- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
+- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel NPUs to provide efficient object detection.
   - [Supports majority of model architectures](../../configuration/object_detectors#openvino-supported-models)
   - Runs best with tiny, small, or medium models
 
@@ -142,6 +142,7 @@ The OpenVINO detector type is able to run on:
 
 - 6th Gen Intel Platforms and newer that have an iGPU
 - x86 hosts with an Intel Arc GPU
+- Intel NPUs
 - Most modern AMD CPUs (though this is officially not supported by Intel)
 - x86 & Arm64 hosts via CPU (generally not recommended)
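The ONNX support table above pairs with an `onnx` detector entry; unlike OpenVINO, no `device` attribute is needed because the detector picks from the available execution providers. A minimal sketch, again assuming a YOLOv9 ONNX export with an illustrative path and input size:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic                 # e.g. a YOLOv9 export, per the table above
  width: 320                               # illustrative; must match the exported model
  height: 320
  path: /config/model_cache/yolov9-t.onnx  # hypothetical path to your downloaded model
```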