Add support for Intel NPU (#20536)

commit d7275a3c1a
parent 60789f7096
@@ -95,6 +95,9 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then

     apt-get -qq install -y ocl-icd-libopencl1

+    # install libtbb12 for NPU support
+    apt-get -qq install -y libtbb12
+
     rm -f /usr/share/keyrings/intel-graphics.gpg
     rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list

@@ -115,6 +118,11 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
     wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
     wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
     wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
+    # npu packages
+    wget https://github.com/oneapi-src/level-zero/releases/download/v1.21.9/level-zero_1.21.9+u22.04_amd64.deb
+    wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-driver-compiler-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
+    wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-fw-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
+    wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-level-zero-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb

     dpkg -i *.deb
     rm *.deb
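Beyond the driver packages above, the NPU device node also has to be reachable from inside the container. A minimal docker-compose sketch (service name and image tag are illustrative), using the same `/dev/accel` mapping this commit adds to the installation docs further below:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
      - /dev/accel:/dev/accel # Intel NPU
```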
@@ -253,11 +253,11 @@ Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-proc

 ## OpenVINO Detector

-The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
+The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel NPUs. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.

-The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU` and `GPU`. Currently, there is a known issue with using `AUTO`. For backwards compatibility, Frigate will attempt to use `GPU` if `AUTO` is set in your configuration.
+The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU`, `GPU`, or `NPU`.

-OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
+OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` or `NPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html)

 :::tip

@@ -267,27 +267,39 @@ When using many cameras one detector may not be enough to keep up. Multiple dete
 detectors:
   ov_0:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
   ov_1:
     type: openvino
-    device: GPU
+    device: GPU # or NPU
 ```

 :::

 ### OpenVINO Supported Models

+| Model                                 | GPU | NPU | Notes                                                        |
+| ------------------------------------- | --- | --- | ------------------------------------------------------------ |
+| [YOLOv9](#yolo-v3-v4-v7-v9)           | ✅  | ✅  | Recommended for GPU & NPU                                    |
+| [RF-DETR](#rf-detr)                   | ✅  | ✅  | Requires XE iGPU or Arc                                      |
+| [YOLO-NAS](#yolo-nas)                 | ✅  | ⚠️  | YOLO-NAS only works on NPU in non-flat format                |
+| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅  | ✅  | Fast and lightweight model, less accurate than larger models |
+| [YOLOX](#yolox)                       | ✅  | ?   |                                                              |
+| [D-FINE](#d-fine)                     | ❌  | ❌  |                                                              |

 #### SSDLite MobileNet v2

 An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.

+<details>
+<summary>MobileNet v2 Config</summary>

 Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:

 ```yaml
 detectors:
   ov:
     type: openvino
-    device: GPU
+    device: GPU # Or NPU

 model:
   width: 300
@@ -298,6 +310,8 @@ model:
   labelmap_path: /openvino-model/coco_91cl_bkgr.txt
 ```

+</details>

 #### YOLOX

 This detector also supports YOLOX. Frigate does not come with any YOLOX models preloaded, so you will need to supply your own models.
@@ -306,6 +320,9 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p

 [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.

+<details>
+<summary>YOLO-NAS Setup & Config</summary>

 After placing the downloaded onnx model in your config folder, you can use the following configuration:

 ```yaml
@@ -326,6 +343,8 @@ model:

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

+</details>

 #### YOLO (v3, v4, v7, v9)

 YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
@@ -336,6 +355,9 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

 :::

+<details>
+<summary>YOLOv Setup & Config</summary>

 :::warning

 If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@@ -348,7 +370,7 @@ After placing the downloaded onnx model in your config folder, you can use the f
 detectors:
   ov:
     type: openvino
-    device: GPU
+    device: GPU # or NPU

 model:
   model_type: yolo-generic
@@ -362,6 +384,8 @@ model:

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

+</details>

 #### RF-DETR

 [RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more information on downloading the RF-DETR model for use in Frigate.
@@ -372,6 +396,9 @@ Due to the size and complexity of the RF-DETR model, it is only recommended to b

 :::

+<details>
+<summary>RF-DETR Setup & Config</summary>

 After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

 ```yaml
@@ -389,6 +416,8 @@ model:
   path: /config/model_cache/rfdetr.onnx
 ```

+</details>

 #### D-FINE

 [D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
@@ -399,6 +428,9 @@ Currently D-FINE models only run on OpenVINO in CPU mode, GPUs currently fail to

 :::

+<details>
+<summary>D-FINE Setup & Config</summary>

 After placing the downloaded onnx model in your config/model_cache folder, you can use the following configuration:

 ```yaml
@@ -419,6 +451,8 @@ model:

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

+</details>

 ## Apple Silicon detector

 The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
@@ -609,12 +643,23 @@ detectors:

 ### ONNX Supported Models

+| Model                         | Nvidia GPU | AMD GPU | Notes                                               |
+| ----------------------------- | ---------- | ------- | --------------------------------------------------- |
+| [YOLOv9](#yolo-v3-v4-v7-v9-2) | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
+| [RF-DETR](#rf-detr)           | ✅         | ❌      | Supports CUDA Graphs for optimal Nvidia performance |
+| [YOLO-NAS](#yolo-nas-1)       | ⚠️         | ⚠️      | Not supported by CUDA Graphs                        |
+| [YOLOX](#yolox-1)             | ✅         | ✅      | Supports CUDA Graphs for optimal Nvidia performance |
+| [D-FINE](#d-fine)             | ⚠️         | ❌      | Not supported by CUDA Graphs                        |

 There is no default model provided; the following formats are supported:

 #### YOLO-NAS

 [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.

+<details>
+<summary>YOLO-NAS Setup & Config</summary>

 :::warning

 If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@@ -638,6 +683,8 @@ model:
   labelmap_path: /labelmap/coco-80.txt
 ```

+</details>

 #### YOLO (v3, v4, v7, v9)

 YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
@@ -648,6 +695,9 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

 :::

+<details>
+<summary>YOLOv Setup & Config</summary>

 :::warning

 If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@@ -671,12 +721,17 @@ model:
   labelmap_path: /labelmap/coco-80.txt
 ```

+</details>

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

 #### YOLOx

 [YOLOx](https://github.com/Megvii-BaseDetection/YOLOX) models are supported, but not included by default. See [the models section](#downloading-yolo-models) for more information on downloading the YOLOx model for use in Frigate.

+<details>
+<summary>YOLOx Setup & Config</summary>

 After placing the downloaded onnx model in your config folder, you can use the following configuration:

 ```yaml
@@ -696,10 +751,15 @@ model:

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

+</details>

 #### RF-DETR

 [RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more information on downloading the RF-DETR model for use in Frigate.

+<details>
+<summary>RF-DETR Setup & Config</summary>

 After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

 ```yaml
@@ -716,10 +776,15 @@ model:
   path: /config/model_cache/rfdetr.onnx
 ```

+</details>

 #### D-FINE

 [D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.

+<details>
+<summary>D-FINE Setup & Config</summary>

 After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:

 ```yaml
@@ -737,6 +802,8 @@ model:
   labelmap_path: /labelmap/coco-80.txt
 ```

+</details>

 Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

 ## CPU Detector (not recommended)
@@ -980,6 +1047,7 @@ For detailed instructions on compiling models, refer to the [MemryX Compiler](ht
 # ├── yolonas.dfp (a file ending with .dfp)
 # └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
 ```

 ---

 ## NVidia TensorRT Detector
@@ -1270,13 +1338,14 @@ Explanation of the paramters:

 ## DeGirum

-DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector *cannot* be used for commercial purposes.
+DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector _cannot_ be used for commercial purposes.

 ### Configuration

 #### AI Server Inference

 Before starting with the config file for this section, you must first launch an AI server. DeGirum has an AI server ready to use as a docker container. Add this to your `docker-compose.yml` to get started:

 ```yaml
 degirum_detector:
   container_name: degirum
@@ -1285,9 +1354,11 @@ degirum_detector:
   ports:
     - "8778:8778"
 ```

 All supported hardware will automatically be found on your AI server host as long as relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.

 Once completed, changing the `config.yml` file is simple.

 ```yaml
 degirum_detector:
   type: degirum
@@ -1295,12 +1366,15 @@ degirum_detector:
   zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
   token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
 ```

 Setting up a model in the `config.yml` is similar to setting up an AI server.
 You can set it to:

 - A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is listed in your detector
   - If this is what you choose to do, the correct model will be downloaded onto your machine before running.
 - A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
 - A path to some model.json.

 ```yaml
 model:
   path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory to model .json and file
@@ -1309,10 +1383,10 @@ model:
   input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
 ```

 #### Local Inference

 It is also possible to eliminate the need for an AI server and run the hardware directly. The benefit of this approach is that you eliminate any bottlenecks that occur when transferring prediction results from the AI server docker container to the frigate one. However, the method of implementing local inference is different for every device and hardware combination, so it's usually more trouble than it's worth. A general guideline to achieve this would be:

 1. Ensuring that the frigate docker container has the runtime you want to use. So for instance, running `@local` for Hailo means making sure the container you're using has the Hailo runtime installed.
 2. To double check the runtime is detected by the DeGirum detector, make sure the `degirum sys-info` command properly shows whatever runtimes you mean to install.
 3. Create a DeGirum detector in your `config.yml` file.
@@ -1323,7 +1397,6 @@ degirum_detector:
   location: "@local" # For accessing AI Hub devices and models
   zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
   token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
-
 ```

 Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
@@ -1336,10 +1409,10 @@ model:
   input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
 ```

 #### AI Hub Cloud Inference

 If you do not possess whatever hardware you want to run, there's also the option to run cloud inferences. Do note that your detection fps might need to be lowered as network latency does significantly slow down this method of detection. For use with Frigate, we highly recommend using a local AI server as described above. To set up cloud inferences,

 1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
 2. Get an access token.
 3. Create a DeGirum detector in your `config.yml` file.
@@ -1350,7 +1423,6 @@ degirum_detector:
   location: "@cloud" # For accessing AI Hub devices and models
   zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
   token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com).
-
 ```

 Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
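The `model` block that follows this paragraph in the full document falls outside the hunks shown here; as a rough sketch based on the AI Server example earlier in this diff (the width and height values are illustrative assumptions, check the model.json):

```yaml
model:
  path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory to model .json and file
  width: 300 # assumed from the model name; verify against the model.json
  height: 300 # assumed from the model name; verify against the model.json
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```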
@@ -78,7 +78,7 @@ Frigate supports multiple different detectors that work on different types of ha

 **Intel**

-- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
+- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel NPUs to provide efficient object detection.
   - [Supports majority of model architectures](../../configuration/object_detectors#openvino-supported-models)
   - Runs best with tiny, small, or medium models
@@ -142,6 +142,7 @@ The OpenVINO detector type is able to run on:

 - 6th Gen Intel Platforms and newer that have an iGPU
 - x86 hosts with an Intel Arc GPU
+- Intel NPUs
 - Most modern AMD CPUs (though this is officially not supported by Intel)
 - x86 & Arm64 hosts via CPU (generally not recommended)
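Since the list above now includes Intel NPUs, here is a minimal sketch of what an NPU-backed OpenVINO detector could look like (the detector name, model path, and input size are illustrative placeholders; the authoritative per-model parameters are in the object detector docs updated above):

```yaml
detectors:
  ov_npu:
    type: openvino
    device: NPU

model:
  model_type: yolo-generic # YOLOv9 is the recommended model for GPU & NPU
  width: 320 # illustrative; match your exported model
  height: 320 # illustrative; match your exported model
  path: /config/model_cache/yolov9-t-320.onnx # placeholder path
  labelmap_path: /labelmap/coco-80.txt
```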
@@ -166,7 +167,8 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
 | Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
 | Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
 | Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
-| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
+| Intel Iris XE | ~ 10 ms | s-320: 8 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | 320-n: 33 ms | |
+| Intel NPU | ~ 6 ms | s-320: 11 ms | | 320-n: 40 ms | |
 | Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
 | Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
 | Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
@@ -304,7 +304,8 @@ services:
       - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
       - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
       - /dev/video11:/dev/video11 # For Raspberry Pi 4B
-      - /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
+      - /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
+      - /dev/accel:/dev/accel # Intel NPU
     volumes:
       - /etc/localtime:/etc/localtime:ro
       - /path/to/your/config:/config