Fix model_cache path to live in config directory

Nate Meyer 2023-06-24 15:16:26 -04:00
parent 3518243f06
commit 25111727f2
2 changed files with 4 additions and 4 deletions


@@ -4,7 +4,7 @@
set -o errexit -o nounset -o pipefail
-OUTPUT_FOLDER?=/media/frigate/model_cache/tensorrt
+OUTPUT_FOLDER?=/config/model_cache/tensorrt
# Create output folder
mkdir -p ${OUTPUT_FOLDER}


@@ -192,7 +192,7 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
The model used for TensorRT must be preprocessed on the same hardware platform that it will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is included that will build several common models.
-The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/media/frigate/model_cache` folder. Typically the `/media/frigate` path is already mapped to a directory on the host, so `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
+The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is already mapped to a directory on the host, so `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
By default, the `yolov7-tiny-416` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
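As an illustrative sketch only (the image tag and host path below are placeholders, and the model name is just the default from the paragraph above), a Compose service that sets `YOLO_MODELS` and maps `/config` could look like this:

```yaml
services:
  frigate:
    # Placeholder tag; use the TensorRT-enabled Frigate image you already run
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    environment:
      # Comma-separated list of models to generate; set to "" to skip generation
      - YOLO_MODELS=yolov7-tiny-416
    volumes:
      # With this change, generated .trt files persist under /config/model_cache
      - /path/to/your/config:/config
```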
@@ -241,7 +241,7 @@ frigate:
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
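For orientation, a minimal sketch of that passthrough using Docker Compose and the NVIDIA Container Toolkit is shown below; the Hardware Acceleration page linked above remains the authoritative reference:

```yaml
services:
  frigate:
    deploy:
      resources:
        reservations:
          devices:
            # Expose one NVIDIA GPU to the container; with several GPUs passed
            # through, the detector's `device` index selects among them
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```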
-The TensorRT detector uses `.trt` model files that are located in `/media/frigate/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated.
+The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated.
```yaml
detectors:
@@ -250,7 +250,7 @@ detectors:
    device: 0 #This is the default, select the first GPU

model:
-  path: /media/frigate/model_cache/tensorrt/yolov7-tiny-416.trt
+  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416