Yolov7-640, Yolov7-320, Yolov7x-640, and Yolov7x-320 were added to the download_yolo.sh script used when generating TensorRT models, so these models can now be generated.
* Initial commit that adds YOLOv5 and YOLOv8 support for OpenVINO detector
* Fixed double inference bug with YOLOv5 and YOLOv8
* Modified documentation to mention YOLOv5 and YOLOv8
* Changes to pass lint checks
* Change minimum threshold to improve model performance
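  A minimal sketch of the kind of score cutoff this refers to; the array layout and the 0.4 value below are illustrative assumptions, not the actual change:

  ```python
  import numpy as np

  # Hypothetical detections: one row per box, columns [x, y, w, h, score, class_id].
  detections = np.array([
      [0.10, 0.20, 0.30, 0.40, 0.92, 0],
      [0.50, 0.50, 0.10, 0.10, 0.15, 2],
  ])

  MIN_SCORE = 0.4  # assumed cutoff; the real value lives in the detector config

  # Keep only detections whose confidence clears the minimum threshold.
  kept = detections[detections[:, 4] >= MIN_SCORE]
  ```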
* Fix link
* Clean up YOLO post-processing
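  The cleanup itself isn't shown in the log, but YOLO post-processing generally means filtering low-score boxes and running non-maximum suppression. A generic NumPy sketch of greedy NMS (the function name and IoU threshold are assumptions, not Frigate's code):

  ```python
  import numpy as np

  def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list[int]:
      """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
      order = scores.argsort()[::-1]
      keep = []
      while order.size > 0:
          i = order[0]
          keep.append(int(i))
          rest = order[1:]
          # Intersection rectangle between the top-scoring box and the rest.
          xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
          yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
          xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
          yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
          inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
          area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
          area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
          iou = inter / (area_i + area_r - inter)
          # Drop boxes that overlap the kept box too much; keep the rest in order.
          order = rest[iou <= iou_thresh]
      return keep
  ```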
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Initial commit to enable Yolox models with OpenVINO in Frigate
* Fix ModelEnumType import error in openvino.py
* Initial edit of the docs to include verbiage about yolox
* Elaborate configuration and limitations in docs.
* Add capability to dynamically determine number of classes in yolox model
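  YOLOX outputs pack [cx, cy, w, h, objectness] plus one score per class along the last axis, so the class count can be read directly off the output shape. A sketch under that assumption (the function name is mine):

  ```python
  import numpy as np

  def infer_num_classes(output: np.ndarray) -> int:
      """Infer class count from a YOLOX-style output of shape
      (batch, num_anchors, 5 + num_classes): each row holds
      [cx, cy, w, h, objectness, per-class scores...]."""
      return output.shape[-1] - 5

  # Example: an 80-class COCO model produces 85 channels per anchor.
  dummy = np.zeros((1, 8400, 85), dtype=np.float32)
  assert infer_num_classes(dummy) == 80
  ```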
* Further refinements
* Removed unnecessary comments, improved documentation, addressed PR items
* Fixed lint formatting issues
* Add missing labels to default labelmap. Fill any holes with "unknown". Remove unique labelmap for tensorrt.
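  A minimal sketch of the hole-filling step, assuming a dict-style labelmap keyed by class index (the example entries are hypothetical):

  ```python
  # Hypothetical labelmap with gaps in the class indices (e.g. COCO's sparse 90-index map).
  labelmap = {0: "person", 2: "car", 7: "truck"}

  # Fill any missing indices with "unknown" so every model output index resolves to a label.
  max_idx = max(labelmap)
  full_labelmap = {i: labelmap.get(i, "unknown") for i in range(max_idx + 1)}
  ```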
* Replace "truck" with "car" on Openvino labelmap
* Remove branch from URL to tensorrt_models.sh
* Reword to make TensorRT model singular
* Add note about installing nvidia docker runtime and compatible drivers
* Initial WIP dockerfile and scripts to add tensorrt support
* Add TensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
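  The usual pattern for this is to subclass TensorRT's ILogger and forward its messages to Python's logging module; a sketch along those lines (the class and logger names are mine, not Frigate's):

  ```python
  import logging
  import tensorrt as trt

  logger = logging.getLogger("detector.tensorrt")

  class TrtLogger(trt.ILogger):
      """Route TensorRT's internal log messages into Python's logging module."""

      def __init__(self):
          trt.ILogger.__init__(self)

      def log(self, severity, msg):
          # Map TensorRT severities onto roughly equivalent Python log levels.
          levels = {
              trt.ILogger.Severity.INTERNAL_ERROR: logging.CRITICAL,
              trt.ILogger.Severity.ERROR: logging.ERROR,
              trt.ILogger.Severity.WARNING: logging.WARNING,
              trt.ILogger.Severity.INFO: logging.INFO,
              trt.ILogger.Severity.VERBOSE: logging.DEBUG,
          }
          logger.log(levels.get(severity, logging.INFO), msg)

  runtime = trt.Runtime(TrtLogger())
  ```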
* Use non-async interface and convert input data to float32. Detection runs without error.
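  Roughly, that means coercing the frame to contiguous float32 before the copy to device memory and using TensorRT's synchronous execute path; a sketch with a placeholder frame (buffer allocation omitted):

  ```python
  import numpy as np

  # Placeholder frame; the real detector receives frames from the pipeline.
  frame = np.random.randint(0, 255, (1, 3, 320, 320), dtype=np.uint8)

  # Convert to contiguous float32 before copying into the device input buffer.
  input_data = np.ascontiguousarray(frame, dtype=np.float32)

  # With device buffers already bound, the synchronous (non-async) path is:
  #   context.execute_v2(bindings=[int(d_input), int(d_output)])
  ```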
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
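  A sketch of what dtype-driven normalization might look like: scale to [0, 1] for float inputs, pass raw pixels through for quantized ones (the function name and the /255 rule are assumptions):

  ```python
  import numpy as np

  def prepare_input(tensor: np.ndarray, model_input_dtype: np.dtype) -> np.ndarray:
      """Normalize the input tensor according to the model's declared input type.

      Float models typically expect values scaled to [0, 1], while quantized
      (uint8) models take the raw pixel values unchanged."""
      if np.issubdtype(model_input_dtype, np.floating):
          return tensor.astype(np.float32) / 255.0
      return tensor.astype(model_input_dtype)

  frame = np.random.randint(0, 256, (1, 3, 320, 320), dtype=np.uint8)
  print(prepare_input(frame, np.dtype(np.float32)).max() <= 1.0)
  ```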
* Add config for selecting GPU. Fix Async inference. Update documentation.
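  With the cuda-python bindings mentioned above, GPU selection comes down to creating the CUDA context on the configured device index; a sketch (the function name and error handling style are mine):

  ```python
  from cuda import cuda  # NVIDIA's cuda-python bindings

  def create_context(gpu_index: int = 0):
      """Create a CUDA context on the GPU chosen via config (index is illustrative)."""
      (err,) = cuda.cuInit(0)
      assert err == cuda.CUresult.CUDA_SUCCESS
      err, device = cuda.cuDeviceGet(gpu_index)
      assert err == cuda.CUresult.CUDA_SUCCESS
      err, context = cuda.cuCtxCreate(0, device)
      assert err == cuda.CUresult.CUDA_SUCCESS
      return context
  ```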
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support
* Initial work for adding OpenVINO detector. Not functional
* Load model and submit for inference.
Successfully load model and initialize OpenVINO engine with either CPU or GPU as device.
Does not parse results for objects.
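  A minimal sketch of this stage with OpenVINO's Python API: load and compile the model, then run one inference without parsing the outputs (the model path and input shape are placeholders):

  ```python
  import numpy as np
  from openvino.runtime import Core

  core = Core()
  model = core.read_model(model="model.xml")  # placeholder path
  compiled = core.compile_model(model, device_name="CPU")  # or "GPU"

  infer_request = compiled.create_infer_request()
  frame = np.zeros((1, 300, 300, 3), dtype=np.float32)  # assumed input layout
  infer_request.infer({0: frame})  # results not parsed at this stage
  ```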
* Detection working with ssdlite_mobilenetv2 FP16 model
* Add OpenVINO support and model to docker image
* Add documentation for OpenVINO detector configuration
* Adds support for ARM32/ARM64 and the Myriad X hardware
- Use custom-built OpenVINO wheel for all platforms
- Add libusb build without udev for NCS2 support
* Add documentation around Intel CPU requirements and NCS2 setup
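  For the NCS2, targeting the device reduces to the device name passed at compile time; a sketch using OpenVINO's Myriad device string (the model path is a placeholder):

  ```python
  from openvino.runtime import Core

  core = Core()
  model = core.read_model(model="model.xml")  # placeholder path
  # "MYRIAD" targets the Neural Compute Stick 2; "CPU" and "GPU" work the same way.
  compiled = core.compile_model(model, device_name="MYRIAD")
  ```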
* Print all available output tensors
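  A sketch of how the outputs can be enumerated with OpenVINO's API, which helps when post-processing needs to pick the right tensor (the model path is a placeholder):

  ```python
  from openvino.runtime import Core

  core = Core()
  compiled = core.compile_model(core.read_model(model="model.xml"), device_name="CPU")

  # List every output tensor the compiled model exposes, by name and shape.
  for output in compiled.outputs:
      print(output.get_any_name(), output.shape)
  ```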
* Update documentation for config parameters
Hoping to investigate this with my dev board at some point. In the meantime, added a warning for others who may experience it when upgrading to the new stable release.