diff --git a/docker-compose.yml b/docker-compose.yml
index 0f4ddf55a..be04ad0a3 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -21,6 +21,8 @@ services:
             - driver: nvidia
               count: 1
               capabilities: [gpu]
+    environment:
+      YOLO_MODELS: yolov7-320
     devices:
       - /dev/bus/usb:/dev/bus/usb
       # - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware
diff --git a/docker/tensorrt/Dockerfile.base b/docker/tensorrt/Dockerfile.base
index 9b489c7cc..331a328b7 100644
--- a/docker/tensorrt/Dockerfile.base
+++ b/docker/tensorrt/Dockerfile.base
@@ -23,4 +23,4 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
 COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
 COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
 COPY docker/tensorrt/detector/rootfs/ /
-ENV YOLO_MODELS="yolov7-tiny-416"
+ENV YOLO_MODELS="yolov7-320"
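
The compose change above only sets the single new default. For reviewers, here is a sketch of a fuller override using the semantics described in the docs hunk below; the `frigate` service name and the second model in the list are illustrative assumptions, not part of this diff:

```yaml
services:
  frigate:
    environment:
      # Comma-separated list; each entry is built at startup unless a matching
      # /config/model_cache/tensorrt/{model}.trt file already exists.
      YOLO_MODELS: yolov7-320,yolov7x-640
      # On a Jetson Xavier/Orin, append -dla to run a model on DLA0 instead:
      #   YOLO_MODELS: yolov7-320-dla
      # Set to an empty string ("") to skip model generation entirely.
```

Deleting `/config/model_cache/tensorrt/yolov7-320.trt` from the host-mapped config directory forces the model to be rebuilt on the next start.
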
diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md
index c0ba58a29..51a25702b 100644
--- a/docs/docs/configuration/object_detectors.md
+++ b/docs/docs/configuration/object_detectors.md
@@ -196,9 +196,9 @@ The model used for TensorRT must be preprocessed on the same hardware platform t
 
 The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
 
-By default, the `yolov7-tiny-416` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
+By default, the `yolov7-320` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
 
-If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-tiny-416-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
+If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
 
 If your GPU does not support FP16 operations, you can pass the environment variable `USE_FP16=False` to disable it.
 
@@ -254,11 +254,11 @@ detectors:
     device: 0 #This is the default, select the first GPU
 
 model:
-  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
+  path: /config/model_cache/tensorrt/yolov7-320.trt
   input_tensor: nchw
   input_pixel_format: rgb
-  width: 416
-  height: 416
+  width: 320
+  height: 320
 ```
 
 ## Deepstack / CodeProject.AI Server Detector
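
For anyone validating this change end to end, a minimal sketch of a TensorRT deployment that exercises the new default; the image tag and host path are assumptions based on the standard TensorRT setup, not something this diff changes:

```yaml
services:
  frigate:
    # Assumed TensorRT image tag; use whichever tag matches your release.
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    volumes:
      # Host path is illustrative; generated models land in /config/model_cache.
      - /opt/frigate/config:/config
    environment:
      YOLO_MODELS: yolov7-320
      USE_FP16: "False" # only needed if your GPU lacks FP16 support
```

On first start the container should produce `/config/model_cache/tensorrt/yolov7-320.trt`, which the `model.path` in the docs example above then points at.
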