From 7b12b20fbb613f6e9638bc82d645596c290d53ae Mon Sep 17 00:00:00 2001 From: laviddichterman Date: Sun, 7 Sep 2025 16:36:25 -0700 Subject: [PATCH] Update object_detectors.md for v16 * add configurability to IMG_SIZE for YOLOv9 export * remove TensorRT detector as it's no longer supported in v16 --- docs/docs/configuration/object_detectors.md | 90 ++------------------- 1 file changed, 5 insertions(+), 85 deletions(-) diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 18dc683a9..13dba993b 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -698,88 +698,6 @@ To verify that the integration is working correctly, start Frigate and observe t # Community Supported Detectors -## NVidia TensorRT Detector - -Nvidia Jetson devices may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt-jp6` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6`. This detector is designed to work with Yolo models for object detection. - -### Generate Models - -The model used for TensorRT must be preprocessed on the same hardware platform that they will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is included that will build several common models. - -The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host. - -By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. 
-One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
-
-If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
-
-If your GPU does not support FP16 operations, you can pass the environment variable `USE_FP16=False` to disable it.
-
-Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
-
-<details>
-<summary>Available Models</summary>
-```
-yolov3-288
-yolov3-416
-yolov3-608
-yolov3-spp-288
-yolov3-spp-416
-yolov3-spp-608
-yolov3-tiny-288
-yolov3-tiny-416
-yolov4-288
-yolov4-416
-yolov4-608
-yolov4-csp-256
-yolov4-csp-512
-yolov4-p5-448
-yolov4-p5-896
-yolov4-tiny-288
-yolov4-tiny-416
-yolov4x-mish-320
-yolov4x-mish-640
-yolov7-tiny-288
-yolov7-tiny-416
-yolov7-640
-yolov7-416
-yolov7-320
-yolov7x-640
-yolov7x-320
-```
-</details>
-
-An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yolov7x-640` models would look something like this:
-
-```yml
-frigate:
-  environment:
-    - YOLO_MODELS=yolov7-320,yolov7x-640
-    - USE_FP16=false
-```
-
-### Configuration Parameters
-
-The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration_video.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
-
-The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.
-
-Use the config below to work with generated TRT models:
-
-```yaml
-detectors:
-  tensorrt:
-    type: tensorrt
-    device: 0 #This is the default, select the first GPU
-
-model:
-  path: /config/model_cache/tensorrt/yolov7-320.trt
-  labelmap_path: /labelmap/coco-80.txt
-  input_tensor: nchw
-  input_pixel_format: rgb
-  width: 320 # MUST match the chosen model i.e yolov7-320 -> 320, yolov4-416 -> 416
-  height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
-```
-
 ## Rockchip platform
 
 Hardware accelerated object detection is supported on the following SoCs:
@@ -1033,7 +951,7 @@ python3 yolo_to_onnx.py -m yolov7-320
 
 YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available sizes are `t`, `s`, `m`, `c`, and `e`).
 
 ```sh
-docker build . --build-arg MODEL_SIZE=t --output . -f- <<'EOF'
+docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=640 --output . -f- <<'EOF'
 FROM python:3.11 AS build
 RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
 COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
@@ -1042,11 +960,13 @@ ADD https://github.com/WongKinYiu/yolov9.git .
 RUN uv pip install --system -r requirements.txt
 RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
 ARG MODEL_SIZE
+ARG IMG_SIZE
 ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
 RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
-RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz 320 --simplify --include onnx
+RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
 FROM scratch
 ARG MODEL_SIZE
-COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /
+ARG IMG_SIZE
+COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
 EOF
 ```
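
---

Reviewer note: with the default build args above, the export now lands in the working directory as `yolov9-t-640.onnx`. A hedged sketch of how the exported file could then be wired into Frigate's ONNX detector (the `onnx` detector and `yolo-generic` model type are documented elsewhere in this file; the `/config/model_cache/` path is an example location, not a requirement):

```yaml
# Assumes the exported ONNX file was copied into Frigate's model cache.
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic
  path: /config/model_cache/yolov9-t-640.onnx
  labelmap_path: /labelmap/coco-80.txt
  input_tensor: nchw
  width: 640  # must match the IMG_SIZE used at export time
  height: 640 # must match the IMG_SIZE used at export time
```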
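
The final stage now names the output after both build args (`/yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx`), so exports at different resolutions no longer overwrite each other. A purely illustrative helper (not part of the patch) that mirrors this naming, e.g. for scripts that need to predict the exported filename:

```python
def exported_model_name(model_size: str, img_size: int) -> str:
    # Mirrors the COPY destination in the Dockerfile above:
    # /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx (without the leading slash).
    return f"yolov9-{model_size}-{img_size}.onnx"

print(exported_model_name("t", 640))  # yolov9-t-640.onnx
```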