administrator / frigate
mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-06 21:44:13 +03:00)
frigate / docker / tensorrt / requirements-models-arm64.txt (at commit fbbce1ee5f)
3 lines · 96 B · Plaintext
Bump onnx from 1.14.0 to 1.19.1 in /docker/tensorrt

Bumps [onnx](https://github.com/onnx/onnx) from 1.14.0 to 1.19.1.
- [Release notes](https://github.com/onnx/onnx/releases)
- [Changelog](https://github.com/onnx/onnx/blob/main/docs/Changelog-ml.md)
- [Commits](https://github.com/onnx/onnx/compare/v1.14.0...v1.19.1)

---
updated-dependencies:
- dependency-name: onnx
  dependency-version: 1.19.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

2025-10-31 21:41:44 +03:00
onnx == 1.19.1; platform_machine == 'aarch64'
Nvidia Jetson ffmpeg + TensorRT support (#6458)

* Non-Jetson changes

  Required for later commits:
  - Allow base image to be overridden (and don't assume its WORKDIR)
  - Ensure python3.9
  - Map hwaccel decode presets as strings instead of lists

  Not required:
  - Fix existing documentation
  - Simplify hwaccel scale logic

* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320

  In my experience the tiny models perform markedly worse without being much faster

* fixup! Update docs

2023-07-26 13:50:41 +03:00
protobuf == 3.20.3; platform_machine == 'aarch64'
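
The `; platform_machine == 'aarch64'` suffix on both pins is a PEP 508 environment marker, so pip only applies these versions on 64-bit ARM hosts (such as Jetson boards) and skips them everywhere else. As a minimal sketch, and assuming the standalone `packaging` library (the same parser pip vendors internally) is installed, the markers can be evaluated like this:

```python
# Minimal sketch (not part of the repo): evaluating the PEP 508 environment
# markers from requirements-models-arm64.txt with the `packaging` library.
from packaging.requirements import Requirement

lines = [
    "onnx == 1.19.1; platform_machine == 'aarch64'",
    "protobuf == 3.20.3; platform_machine == 'aarch64'",
]

for line in lines:
    req = Requirement(line)
    # The environment dict overrides values taken from the running
    # interpreter, so both architectures can be tested from any machine.
    on_arm = req.marker.evaluate({"platform_machine": "aarch64"})
    on_x86 = req.marker.evaluate({"platform_machine": "x86_64"})
    print(f"{req.name}{req.specifier}: aarch64={on_arm}, x86_64={on_x86}")
```

On an x86_64 machine both markers evaluate to False, so installing this file there (e.g. `pip install -r requirements-models-arm64.txt`) pulls in nothing.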