From 251f86209a23883a71447e28e966b32cfada1392 Mon Sep 17 00:00:00 2001
From: Nick Mowen
Date: Fri, 30 Dec 2022 08:23:43 -0700
Subject: [PATCH] Add info about tensorrt detectors and link to docs

---
 docs/docs/frigate/hardware.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/frigate/hardware.md b/docs/docs/frigate/hardware.md
index 057c3a905..a91695454 100644
--- a/docs/docs/frigate/hardware.md
+++ b/docs/docs/frigate/hardware.md
@@ -66,7 +66,7 @@ Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known
 
 ### TensorRT
 
-The TensortRT detector is able to run on x86 hosts that have an Nvidia GPU.
+The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which supports the 11.x series of CUDA libraries. The minimum driver version on the host system must be `>=450.80.02`, and the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer; see the [TensorRT docs for more info](/configuration/detectors#nvidia-tensorrt-detector).
 
 Inference speeds will vary greatly depending on the GPU and the model used. `tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
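
For reference alongside this patch, a minimal sketch of what enabling the detector might look like in a Frigate config, based on the detector docs linked in the added line; the `device` index, model path, and input dimensions are illustrative and depend on the TensorRT model you build:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index; 0 selects the first Nvidia GPU

model:
  # Illustrative values: point at the .trt model you generated and
  # match width/height to its input size (tiny variants run faster).
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
```

A `tiny` model at a smaller input size is the usual starting point when benchmarking the inference speeds mentioned above.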