From 62b0f26595398e2c7b33b6b30a686460abb466fe Mon Sep 17 00:00:00 2001
From: SlothCroissant
Date: Tue, 19 Dec 2023 16:00:41 -0600
Subject: [PATCH] Update detectors.md to detail MODEL_CACHE_DIR env var

Resolves confusion in https://github.com/blakeblackshear/frigate/discussions/9025
---
 docs/docs/configuration/detectors.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/docs/configuration/detectors.md b/docs/docs/configuration/detectors.md
index 83291605f..4f7cae164 100644
--- a/docs/docs/configuration/detectors.md
+++ b/docs/docs/configuration/detectors.md
@@ -239,7 +239,7 @@ yolov7x-320
 
 The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
 
-The TensorRT detector uses `.trt` model files that are located in `/trt-models/` by default. These model file path and dimensions used will depend on which model you have generated.
+The TensorRT detector uses `.trt` model files that are located in a versioned subdirectory of `${MODEL_CACHE_DIR}`. Set the `MODEL_CACHE_DIR` environment variable to change this location from its default of `/config/model_cache/tensorrt`. The model file path and dimensions used will depend on which model you have generated.
 
 ```yaml
 detectors:
@@ -248,6 +248,8 @@ detectors:
     device: 0 #This is the default, select the first GPU
 
 model:
+  # Be sure to set the MODEL_CACHE_DIR environment variable to customize the model path from the default
+  # In this example, MODEL_CACHE_DIR = '/trt-models'
   path: /trt-models/yolov7-tiny-416.trt
   input_tensor: nchw
   input_pixel_format: rgb
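
A minimal sketch of how `MODEL_CACHE_DIR` could be wired up to match the `/trt-models` path used in the example above, assuming a docker-compose deployment. The service name, image tag, and host path are illustrative assumptions, not part of this patch.

```yaml
# Hypothetical docker-compose snippet; adjust the image tag and host path to your setup.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt # assumed TensorRT-enabled image tag
    environment:
      # Point the model cache at the same directory referenced by model.path in the config above
      - MODEL_CACHE_DIR=/trt-models
    volumes:
      # Persist generated .trt files on the host so they survive container recreation
      - ./trt-models:/trt-models
```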