Revert removing NVIDIA TensorRT detector docs
Added documentation for NVidia TensorRT Detector, including model generation, configuration parameters, and example usage.
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
# Community Supported Detectors
## NVidia TensorRT Detector
Nvidia Jetson devices may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt-jp6` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6`. This detector is designed to work with YOLO models for object detection.
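
For example, a minimal `docker-compose.yml` sketch pinning this image tag might look like the following (the service name and restart policy here are illustrative, not required by Frigate):

```yml
frigate:
  image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6 # image tag that bundles the TensorRT libraries
  restart: unless-stopped
```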
### Generate Models
Models used for TensorRT must be preprocessed on the same hardware platform that they will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is included that will build several common models.
The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is already mapped to a directory on the host, so `model_cache` does not need to be mapped separately unless you want to store it in a different location on the host.
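
As a sketch, assuming your Frigate config lives at `/path/to/frigate/config` on the host (an example path, substitute your own), the usual `/config` mapping also covers the model cache:

```yml
frigate:
  volumes:
    - /path/to/frigate/config:/config # model_cache is created inside this mount
```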
By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
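
As a sketch of the naming convention above, requesting a DLA build of `yolov7-320` via the environment variable:

```yml
frigate:
  environment:
    - YOLO_MODELS=yolov7-320-dla # generates a yolov7-320-dla.trt model that runs on DLA0
```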
If your GPU does not support FP16 operations, you can pass the environment variable `USE_FP16=False` to disable it.
Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
<details>
<summary>Available Models</summary>

```
yolov3-288
yolov3-416
yolov3-608
yolov3-spp-288
yolov3-spp-416
yolov3-spp-608
yolov3-tiny-288
yolov3-tiny-416
yolov4-288
yolov4-416
yolov4-608
yolov4-csp-256
yolov4-csp-512
yolov4-p5-448
yolov4-p5-896
yolov4-tiny-288
yolov4-tiny-416
yolov4x-mish-320
yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
```

</details>

An example `docker-compose.yml` fragment that converts the `yolov7-320` and `yolov7x-640` models would look something like this:

```yml
frigate:
  environment:
    - YOLO_MODELS=yolov7-320,yolov7x-640
    - USE_FP16=false
```
### Configuration Parameters
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration_video.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
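
For example, if two GPUs are passed through and the detector should use the second one (index `1` as reported by `nvidia-smi` inside the container), a minimal sketch would be:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 1 # use the second GPU for this detector
```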
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions will depend on which model you have generated.
Use the config below to work with generated TRT models:
```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # This is the default, select the first GPU

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  labelmap_path: /labelmap/coco-80.txt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320 # MUST match the chosen model, i.e. yolov7-320 -> 320, yolov4-416 -> 416
  height: 320 # MUST match the chosen model, i.e. yolov7-320 -> 320, yolov4-416 -> 416
```
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs: