support for other yolov models and config checks

MarcA711 2023-11-15 13:41:09 +00:00
parent 8c7f6d4a76
commit 61ccf7fdf7
4 changed files with 117 additions and 16 deletions


@@ -9,10 +9,11 @@ COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
 RUN sed -i "/https:\/\//d" /requirements-wheels.txt
 RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt

-FROM wget as rk-libs
+FROM wget as rk-downloads
 RUN wget -qO librknnrt.so https://github.com/MarcA711/rknpu2/raw/master/runtime/RK3588/Linux/librknn_api/aarch64/librknnrt.so
 RUN wget -qO ffmpeg https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/latest/ffmpeg
 RUN wget -qO ffprobe https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/latest/ffprobe
+RUN wget -qO yolov8n-320x320.rknn https://github.com/MarcA711/rknn-models/releases/download/latest/yolov8n-320x320.rknn

 FROM deps AS rk-deps
 ARG TARGETARCH
@@ -21,12 +22,12 @@ RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
 WORKDIR /opt/frigate/
 COPY --from=rootfs / /
-COPY --from=rk-libs /rootfs/librknnrt.so /usr/lib/
-COPY docker/rockchip/yolov8n-320x320.rknn /models/
+COPY --from=rk-downloads /rootfs/librknnrt.so /usr/lib/
+COPY --from=rk-downloads /rootfs/yolov8n-320x320.rknn /models/
 RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
 RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
-COPY --from=rk-libs /rootfs/ffmpeg /usr/lib/btbn-ffmpeg/bin/
-COPY --from=rk-libs /rootfs/ffprobe /usr/lib/btbn-ffmpeg/bin/
+COPY --from=rk-downloads /rootfs/ffmpeg /usr/lib/btbn-ffmpeg/bin/
+COPY --from=rk-downloads /rootfs/ffprobe /usr/lib/btbn-ffmpeg/bin/
 RUN chmod +x /usr/lib/btbn-ffmpeg/bin/ffmpeg
 RUN chmod +x /usr/lib/btbn-ffmpeg/bin/ffprobe

Binary file not shown.


@@ -309,14 +309,23 @@ RKNN support is provided using the `-rk` suffix for the docker image. Moreover,
### Configuration

This `config.yml` shows all relevant options for configuring the detector and explains them. All values shown are the default values (except for one). Lines that are required to use the detector at all are labeled as required; all other lines are optional.

```yaml
detectors: # required
  rknn: # required
    type: rknn # required
    # core mask for the NPU
    core_mask: 0
    # yolov8 model in rknn format to use; allowed values: n, s, m, l, x
    yolov8_rknn_model: n
    # minimal confidence for detection
    min_score: 0.5
    # determines whether two overlapping boxes should be combined
    nms_thresh: 0.45

model: # required
  # path to .rknn model file
  path:
  # width and height of detection frames
  width: 320
  height: 320
@@ -326,3 +335,57 @@ model: # required
  # shape of detection frame
  input_tensor: nhwc
```
Explanation of the rknn-specific options:

- **core_mask** controls which cores of your NPU are used. This option only applies to SoCs with a multicore NPU (at the time of writing this is only the RK3588/S). The easiest way is to pass the value as a binary number: use the prefix `0b` and write a `0` to disable a core and a `1` to enable a core, where the last digit corresponds to core0, the second to last to core1, and so on. Examples:
  - `core_mask: 0b000` or just `core_mask: 0` lets the NPU decide which cores should be used. This is the default and recommended value.
  - `core_mask: 0b001` uses only core0.
  - `core_mask: 0b110` uses core1 and core2.
- **yolov8_rknn_model** selects which yolov8 model to use; see the section below.
- **min_score** sets the minimum detection confidence. It should have the same value as `min_score` in the `objects` block. See also [Reducing false positives](/guides/false_positives.md).
- **nms_thresh** is the IoU threshold for Non-Maximum Suppression (NMS). To tune it, enable "bounding boxes" in the debug viewer and check the detections:
  - *Decrease* it if two overlapping objects (for example one person in front of another) are detected as one object.
  - *Increase* it if there are multiple boxes around one object.
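As a concrete example, a configuration for an RK3588 that pins all three NPU cores and uses the larger yolov8s model could look like the following sketch; the values are illustrative, not recommendations:

```yaml
detectors:
  rknn:
    type: rknn
    # 0b111 enables core0, core1 and core2 of the RK3588 NPU
    core_mask: 0b111
    # use the larger yolov8s model (downloaded on first start, see below)
    yolov8_rknn_model: s
    min_score: 0.5
    nms_thresh: 0.45

model:
  # no path set, so yolov8_rknn_model above takes effect
  width: 320
  height: 320
  # must be bgr for the rknn detector (checked at startup)
  input_pixel_format: bgr
  input_tensor: nhwc
```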
### Choosing a model
There are five yolov8 models that differ in size and therefore in how much load they put on the NPU. In ascending order, with the smallest and least computationally intensive model first:

| Model | Size in MB |
| ------- | ---------- |
| yolov8n | 9 |
| yolov8s | 25 |
| yolov8m | 54 |
| yolov8l | 90 |
| yolov8x | 136 |
:::tip
You can get the load of your NPU with the following command:
```bash
$ cat /sys/kernel/debug/rknpu/load
>> NPU load: Core0: 0%, Core1: 0%, Core2: 0%,
```
:::
- By default the rknn detector uses the yolov8**n** model (`yolov8_rknn_model: n`). This model is bundled with the image, so no steps beyond those mentioned above are necessary.
- If you want to use a more precise model, you can set `yolov8_rknn_model` to `s`, `m`, `l` or `x`. Additional steps are required:
  1. Mount the container directory `/models/download/` to your system using one of the methods below (a compose sketch follows at the end of this section). You can of course choose a different host folder.
     - If you start frigate with docker run, append this flag to your command: `-v $(pwd)/data/rknn-models:/models/download`
     - If you use docker compose, append this to your `volumes` block: `./data/rknn-models:/models/download`
  2. Download the rknn model.
     - If your server has an internet connection, frigate will download the model automatically on startup.
     - Otherwise, you can download the model from [this GitHub repository](https://github.com/MarcA711/rknn-models/releases/tag/latest) on another device and place it in the `rknn-models` folder that you mounted to your system.
- Finally, you can also provide your own model. Note that you will need to convert your model to the rknn format using `rknn-toolkit2` on an x86 machine. Afterwards, you can mount a directory into the image (docker run flag: `-v $(pwd)/data/my-rknn-models:/model/custom`, or docker compose: add `./data/my-rknn-models:/model/custom` to the `volumes` block) and place your model file in that directory. Then pass the path to your model using the `path` option of your `model` block like this:
```yaml
model:
  path: /model/custom/my-rknn-model.rknn
```
:::caution
The `path` option of the `model` block overrides the `yolov8_rknn_model` option of the `detectors` block. So if you want to use one of the provided yolov8 models, make sure not to specify the `path` option.
:::
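For reference, a minimal docker compose sketch combining the volume mounts described above could look like this; the host paths (`./data/rknn-models`, `./data/my-rknn-models`) follow the examples above and can be changed freely:

```yaml
# fragment of a docker-compose.yml
services:
  frigate:
    volumes:
      # models downloaded via yolov8_rknn_model (s/m/l/x) are stored here
      - ./data/rknn-models:/models/download
      # optional: your own converted .rknn models, referenced via the model path option
      - ./data/my-rknn-models:/model/custom
```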


@@ -1,4 +1,6 @@
 import logging
+import os.path
+import urllib.request
 from typing import Literal

 import cv2
@@ -25,7 +27,11 @@ DETECTOR_KEY = "rknn"

 class RknnDetectorConfig(BaseDetectorConfig):
     type: Literal[DETECTOR_KEY]
-    score_thresh: float = Field(
+    yolov8_rknn_model: Literal['n', 's', 'm', 'l', 'x'] = 'n'
+    core_mask: int = Field(
+        default=0, ge=0, le=7, title="Core mask for NPU."
+    )
+    min_score: float = Field(
         default=0.5, ge=0, le=1, title="Minimal confidence for detection."
     )
     nms_thresh: float = Field(
@@ -37,20 +43,51 @@ class Rknn(DetectionApi):
     type_key = DETECTOR_KEY

     def __init__(self, config: RknnDetectorConfig):
-        self.height = config.model.height
-        self.width = config.model.width
-        self.score_thresh = config.score_thresh
-        self.nms_thresh = config.nms_thresh
         self.model_path = config.model.path or "/models/yolov8n-320x320.rknn"
+
+        if config.model.path is not None:
+            # an explicit path always takes precedence
+            self.model_path = config.model.path
+        else:
+            if config.yolov8_rknn_model == "n":
+                # the yolov8n model ships with the image
+                self.model_path = "/models/yolov8n-320x320.rknn"
+            else:
+                # larger models are downloaded; check if the user mounted /models/download/
+                if not os.path.isdir("/models/download/"):
+                    logger.error('Make sure to mount the directory "/models/download/" to your system. Otherwise the file will be downloaded at every restart.')
+                    raise Exception('Make sure to mount the directory "/models/download/" to your system. Otherwise the file will be downloaded at every restart.')
+                self.model_path = "/models/download/yolov8{}-320x320.rknn".format(config.yolov8_rknn_model)
+                if not os.path.isfile(self.model_path):
+                    logger.info("Downloading yolov8{} model.".format(config.yolov8_rknn_model))
+                    urllib.request.urlretrieve("https://github.com/MarcA711/rknn-models/releases/download/latest/yolov8{}-320x320.rknn".format(config.yolov8_rknn_model), self.model_path)
+
+        if (config.model.width != 320) or (config.model.height != 320):
+            logger.error("Make sure to set the model width and height to 320 in your config.yml.")
+            raise Exception("Make sure to set the model width and height to 320 in your config.yml.")
+
+        if config.model.input_pixel_format != "bgr":
+            logger.error('Make sure to set the model input_pixel_format to "bgr" in your config.yml.')
+            raise Exception('Make sure to set the model input_pixel_format to "bgr" in your config.yml.')
+
+        if config.model.input_tensor != "nhwc":
+            logger.error('Make sure to set the model input_tensor to "nhwc" in your config.yml.')
+            raise Exception('Make sure to set the model input_tensor to "nhwc" in your config.yml.')
+
+        self.height = config.model.height
+        self.width = config.model.width
+        self.core_mask = config.core_mask
+        self.min_score = config.min_score
+        self.nms_thresh = config.nms_thresh

         from rknnlite.api import RKNNLite

         self.rknn = RKNNLite(verbose=False)

         if self.rknn.load_rknn(self.model_path) != 0:
             logger.error("Error initializing rknn model.")
-        if self.rknn.init_runtime() != 0:
-            logger.error("Error initializing rknn runtime.")
+        if self.rknn.init_runtime(core_mask=self.core_mask) != 0:
+            logger.error("Error initializing rknn runtime. Do you run docker in privileged mode?")

     def __del__(self):
         self.rknn.release()
@@ -86,9 +123,9 @@ class Rknn(DetectionApi):
                 )
             )

-        # indices of rows with confidence > SCORE_THRESH with Non-maximum Suppression (NMS)
+        # indices of rows with confidence > min_score with Non-maximum Suppression (NMS)
         result_boxes = cv2.dnn.NMSBoxes(
-            boxes, scores, self.score_thresh, self.nms_thresh, 0.5
+            boxes, scores, self.min_score, self.nms_thresh, 0.5
         )

         detections = np.zeros((20, 6), np.float32)