feat: Add ArmNN detector. Add docs about orange pi 5
This commit is contained in:
parent ce2d589a28
commit 8bd5f0268a
@@ -138,6 +138,29 @@ model:
  labelmap_path: /path/to/coco_80cl.txt
```

### ArmNN detector (Orange Pi 5)

You need to place the ArmNN binaries so that `/usr/lib/ArmNN-linux-aarch64/libarmnnDelegate.so` points to the ArmNN delegate library.
Download the binaries for your platform from https://github.com/ARM-software/armnn/releases.
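
As a minimal sketch (the archive name below is an assumption and varies per release and platform; substitute the asset you actually downloaded), unpack the binaries so the delegate ends up at the expected path:

```sh
# Example only: replace the archive name with the release asset for your platform.
sudo mkdir -p /usr/lib/ArmNN-linux-aarch64
sudo tar -xzf ArmNN-linux-aarch64.tar.gz -C /usr/lib/ArmNN-linux-aarch64
# Confirm the delegate library is where Frigate will look for it:
ls /usr/lib/ArmNN-linux-aarch64/libarmnnDelegate.so
```
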
```yaml
detectors:
  armnn:
    type: armnn
    num_threads: 8
    # model:
    #   path: /cpu_model.tflite # used by default

model:
  width: 320
  height: 320
```

In order for the GPU to work you need to have `armnn-latest-all` installed, and `clinfo` should show output for GPU support. See [hardware acceleration](hardware_acceleration.md).
### Intel NCS2 VPU and Myriad X Setup

Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2) which has been discontinued. If intending to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device. The host needs a udev rule installed to handle the NCS2 device.

@@ -15,6 +15,78 @@ ffmpeg:
  hwaccel_args: preset-rpi-64-h264
```

### Orange Pi 5 (ArmNN)

Ensure you have the following packages installed:

```sh
ffmpeg/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,upgradable to: 7:5.1.2-3]
libavcodec58/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libavdevice58/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libavfilter7/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libavformat58/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libavutil56/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libpostproc55/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
librockchip-mpp1/jammy,now 1.5.0-1+git230210.c145c84~jammy1 arm64 [installed,automatic]
libswresample3/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
libswscale5/jammy,now 7:4.4.2-0ubuntu0.22.04.1+rkmpp20230207 arm64 [installed,automatic]
```

These packages come from https://github.com/orangepi-xunlong/rk-rootfs-build/tree/rk3588_packages_jammy.
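
A hedged way to verify that the rkmpp-enabled builds are the ones actually installed (assuming apt):

```sh
# List installed packages built against Rockchip MPP.
apt list --installed 2>/dev/null | grep rkmpp
```
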
```yaml
ffmpeg:
  hwaccel_args: -hwaccel drm -hwaccel_device /dev/dri/renderD128 -c:v h264_rkmpp
```

Also, for CPU and GPU acceleration you should use the `armnn` detector on this board (see [detectors](detectors.md)).

Install the ArmNN packages ([see the installation tutorial](https://github.com/ARM-software/armnn/blob/branches/armnn_23_02/InstallationViaAptRepository.md)).
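
As a rough sketch (the repository setup itself is covered in the linked tutorial; these commands assume the ArmNN apt repository is already configured):

```sh
sudo apt update
# The meta package pulls in the CPU and GPU backends.
sudo apt install armnn-latest-all
# Review what ended up installed:
apt list --installed 2>/dev/null | grep -i armnn
```

The resulting list should look roughly like the following:
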
```sh
armnn-latest-all/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
armnn-latest-cpu-gpu-ref/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
armnn-latest-cpu-gpu/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
armnn-latest-cpu/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
armnn-latest-gpu/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
armnn-latest-ref/jammy,now 23.02-1~ubuntu22.04 arm64 [installed]
libarmnn-cpuacc-backend32/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
libarmnn-cpuref-backend32/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
libarmnn-gpuacc-backend32/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
libarmnn22/unstable,now 20.08-12 arm64 [installed,automatic]
libarmnn32/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
libarmnnaclcommon22/unstable,now 20.08-12 arm64 [installed]
libarmnnaclcommon32/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
libarmnntfliteparser24/jammy,now 23.02-1~ubuntu22.04 arm64 [installed,automatic]
```

In order for the GPU to work, install these packages:

```sh
libmali-g610-x11/jammy,now 1.0.2.4 arm64 [installed]
libmali-valhall-g610-g6p0-x11-gbm/now 1.9-1 arm64 [installed,local]
```

For Ubuntu:

```sh
# Install the OpenCL ICD loader and create the vendors directory used to register the driver.
apt install ocl-icd-opencl-dev
mkdir -p /etc/OpenCL/vendors/
# Install the Mali blob driver package for the RK3588's Mali-G610 (downloaded separately).
dpkg -i libmali-valhall-g610-g6p0-x11_1.9-1_arm64.deb
```

`clinfo | grep 'Device Name'` should show the Mali GPU among the available OpenCL devices:

```sh
root@23cfa5ff7203:/opt/frigate# clinfo | grep 'Device Name'
  Device Name                                     Mali-LODX r0p0
  Device Name                                     Mali-LODX r0p0
  Device Name                                     Mali-LODX r0p0
  Device Name                                     Mali-LODX r0p0
```

### Intel-based CPUs (<10th Generation) via VAAPI

VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams. VAAPI is recommended for all generations of Intel-based CPUs if QSV does not work.

78 frigate/detectors/plugins/armnn_tfl.py Normal file
@@ -0,0 +1,78 @@
import logging
import numpy as np

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from typing import Literal
from pydantic import Extra, Field

try:
    from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
    from tensorflow.lite.python.interpreter import Interpreter

logger = logging.getLogger(__name__)

DETECTOR_KEY = "armnn"


class ArmNNDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]
    num_threads: int = Field(default=8, title="Number of detection threads")


def load_armnn_delegate(library_path, options=None):
    # The delegate loader lives next to the Interpreter in both runtimes.
    try:
        from tflite_runtime.interpreter import load_delegate
    except ModuleNotFoundError:
        from tensorflow.lite.python.interpreter import load_delegate

    if options is None:
        options = {"backends": "CpuAcc,GpuAcc,CpuRef", "logging-severity": "info"}

    return load_delegate(library_path, options=options)


class ArmNNTfl(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: ArmNNDetectorConfig):
        # Load the ArmNN TFLite delegate and hand it to the interpreter.
        armnn_delegate = load_armnn_delegate(
            "/usr/lib/ArmNN-linux-aarch64/libarmnnDelegate.so"
        )

        self.interpreter = Interpreter(
            model_path=detector_config.model.path or "/cpu_model.tflite",
            num_threads=detector_config.num_threads or 8,
            experimental_delegates=[armnn_delegate],
        )

        self.interpreter.allocate_tensors()

        self.tensor_input_details = self.interpreter.get_input_details()
        self.tensor_output_details = self.interpreter.get_output_details()

    def detect_raw(self, tensor_input):
        self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
        self.interpreter.invoke()

        # SSD-style postprocessing outputs: boxes, class ids, scores, detection count.
        boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
        class_ids = self.interpreter.tensor(self.tensor_output_details[1]["index"])()[0]
        scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[0]
        count = int(
            self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
        )

        detections = np.zeros((20, 6), np.float32)

        for i in range(count):
            # Stop at low-confidence results or once the fixed-size output is full.
            if scores[i] < 0.4 or i == 20:
                break
            detections[i] = [
                class_ids[i],
                float(scores[i]),
                boxes[i][0],
                boxes[i][1],
                boxes[i][2],
                boxes[i][3],
            ]

        return detections