* [Init] Initial commit for Synaptics SL1680 NPU
* Add a rough detector, tested with a yolov8 tflite model.
* [Feat] Add dependencies installation in docker build
- Add runtime library and wheels installation in main/Dockerfile
- Add model.synap (default model, converted from mobilenet_224full80) in docker/synap1680
* [Update] Remove dependencies installation from main Dockerfile
- Remove deps installation from the Dockerfile
- Add dependencies installation and split the wheels and deps stages in the synap1680 Dockerfile
* Refactor synap detector to more closely match other implementations
* [Update] Add model path configuration check
* [Update] Update ModelType to ssd
* [Update] Remove unused script
- install_deps.sh is already executed in the deps download stage
- Dockerfile.toolchain is only for testing extraction of runtime libraries from the Synaptics toolchain
* [Update] Update the Synaptics SL1680 setup description
* [Update] Remove install_synap1680
- The deps download and installation already exist in synap1680
* [Fix] Update document content
* [Update] Update detector from synap1680 to synaptics
This update makes the Synaptics SL-series NPU detector more general (a detector skeleton sketch follows this list).
- Fix missing `os` module import in the detector
- Rename detector type `synap1680` to `synaptics`
- Update document description from `SL1680` to just `Synaptics`
- Update docker build content from `synap1680` to `synaptics`
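For context, a minimal sketch of the detector shape this rename implies, following the common Frigate detector pattern (a `DetectionApi` subclass registered by `type_key`); the body here is illustrative, not the actual implementation:

```python
# Illustrative sketch only: a generalized Synaptics detector registered
# under the new "synaptics" key instead of "synap1680".
import os
from typing import Literal

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig

DETECTOR_KEY = "synaptics"  # was "synap1680"


class SynapticsDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]


class SynapticsDetector(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: SynapticsDetectorConfig):
        # The missing `os` import mentioned above is what this fix restores.
        if not os.path.isfile(detector_config.model.path):
            raise FileNotFoundError(detector_config.model.path)
        # ... load the .synap model with the SyNAP runtime here ...

    def detect_raw(self, tensor_input):
        # ... run inference and return [class, score, ymin, xmin, ymax, xmax] rows ...
        raise NotImplementedError
```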
* [Fix] Update configuration document
* Update docs/docs/configuration/object_detectors.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* [Update] Update document content and detector default layout
- Update object_detectors document
- Update detector's default layout
- Update default model name
* [Update] Update object detector document content
* [Fix] Fix InputTensorEnum not defined error
- import InputTensorEnum from detector_config
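The fix amounts to importing the enum from the config module before it is referenced; a small sketch, with the fallback value assumed for illustration:

```python
# Sketch of the fix: InputTensorEnum lives in frigate.detectors.detector_config,
# so referencing it without this import raised the "not defined" error.
from frigate.detectors.detector_config import InputTensorEnum, ModelConfig


def default_layout(model_config: ModelConfig) -> InputTensorEnum:
    # Fall back to NHWC when no input tensor layout is configured
    # (the default chosen here is an assumption for illustration).
    return model_config.input_tensor or InputTensorEnum.nhwc
```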
* [Update] Update detector script coding format
* [Update] Update synaptics detector coding format
* [Update] Add synaptics ci workflow
* [Update] Update the synaptics runtime libs download path
- Fork the Synaptics astra sdk repo and host the runtime lib package there
- The Frigate team can update this download path later
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Simplify rocm install and update to 6.3.1
* Build out more necessary packages
* Update to 6.3.3
* Set bake version
* Fix typo
* Ensure NHWC is used
* Reset dev changes
* Write to cache
* Fix access
* Reorganize tracked object for imports
* Separate out rockchip build
* Formatting
* Use original ffmpeg build
* Fix build
* Update default search type value
* Implement ROCm detectors
* Cleanup tensor input
* Fixup image creation
* Add support for yolonas in onnx
* Get build working with onnx
* Update docs and simplify config
* Remove unused imports
* Initial support for Hailo-8L
Added files for the Hailo-8L detector, including the dockerfile, h8l.mk, h8l.hcl, hailo8l.py, ci.yml, and ssd_mobilenet_v1.hef as the inference network.
Added files to help with the installation of Hailo-8L dependencies, such as generate_wheel_conf.py and requirements-wheel-h8l.txt, and modified setup.py to try to work with any hardware (see the sketch after this paragraph).
Updated docs to reflect initial Hailo-8L support, including object_detectors.md, hardware.md, and installation.md.
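The generate_wheel_conf.py piece is essentially about recording the current interpreter and platform so the wheel build can match them; a rough, purely illustrative sketch of that idea (keys and file name are assumptions, not the actual script):

```python
# Illustrative only: capture the running Python version and CPU architecture
# so a hardware-specific wheel build can target the same environment.
import json
import platform
import sys


def generate_wheel_conf(path: str = "wheel_conf.json") -> None:
    conf = {
        "python_version": f"{sys.version_info.major}.{sys.version_info.minor}",
        "arch": platform.machine(),
    }
    with open(path, "w") as f:
        json.dump(conf, f, indent=2)


if __name__ == "__main__":
    generate_wheel_conf()
```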
* Update .github/workflows/ci.yml
typo h8l not arm64
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/object_detectors.md
Clarity for the end user and correct word usage
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/frigate/installation.md
typo
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update Installation.md to clarify the Hailo-8L installation process.
* Update docs/docs/frigate/hardware.md
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update hardware.md: add Inference time.
* Oops, no newline at the end of the file.
* Update docs/docs/frigate/hardware.md typo
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update dockerfile to download the ssd_mobilenet_v1 model instead of having it in the repo.
* Updated dockerfile so it does not download the model file.
Add a function to download it at runtime instead (sketched below).
Update the model path.
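A minimal sketch of the runtime download described above; the URL, cache path, and function name are placeholders rather than the values actually used:

```python
# Illustrative sketch: fetch the ssd_mobilenet_v1 .hef model on first start
# instead of shipping it inside the image. URL and path are placeholders.
import os
import urllib.request

MODEL_URL = "https://example.com/models/ssd_mobilenet_v1.hef"  # placeholder
MODEL_CACHE_PATH = "/config/model_cache/h8l/ssd_mobilenet_v1.hef"  # placeholder


def ensure_model(url: str = MODEL_URL, path: str = MODEL_CACHE_PATH) -> str:
    """Download the model once and reuse the cached copy afterwards."""
    if not os.path.isfile(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        urllib.request.urlretrieve(url, path)
    return path
```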
* Fix formatting according to ruff and remove unnecessary functions.
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Setting cache-to=compression=zstd causes the resulting user-pulled image
to have zstd-compressed layers, which are not compatible with docker
prior to 23.0. Ubuntu 20.04 still ships with docker 20.10, which yields
`Error processing tar file` when pulling these images.
Renaming the jetpack cache images is my way of clearing the cache of the
prior zstd layers, and it clarifies the convention I used for the other
cache images in which there is one cache per base image/job, not per
target/step. We don't need to delete the non-jetson cache images because
they haven't been rebuilt since zstd was enabled.
* fixup! Split independent builds into parallel jobs
* Combine caches within steps of same job
* Remove Maintain Cache workflow
Now that we're caching to ghcr instead of gha, we don't have to worry
about gha's cache eviction after 7 days/10 GB.
* Factor out common setup steps
* Re-order
* Split independent builds into parallel jobs
* Cache jetson builds
* Use zstd compression
* Switch from gha cache to registry cache
A CI run (four images cached with mode=max) populates the cache with 295
cache entries totalling 23.44 GB. This exceeds gha's 10 GB limit, causing
thrashing. Try with a registry instead.
* Enable manual CI runs
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists (see the sketch after this list)
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
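A hedged sketch of the preset mapping change referenced in the list above: each decode preset maps to one argument string that is split when the ffmpeg command is assembled, instead of a pre-split list. Preset names, arguments, and the fallback behaviour are illustrative only.

```python
# Illustrative only: presets stored as strings, split into argv at use time.
PRESETS_HW_ACCEL_DECODE = {
    "preset-example-vaapi": "-hwaccel vaapi -hwaccel_output_format vaapi",
    "preset-example-cuda": "-hwaccel cuda -hwaccel_output_format cuda",
}


def parse_preset_hwaccel_args(preset_or_args: str) -> list[str]:
    # Unknown keys fall through unchanged so raw ffmpeg args still work.
    return PRESETS_HW_ACCEL_DECODE.get(preset_or_args, preset_or_args).split(" ")
```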
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience the tiny models perform markedly worse without being
much faster
* fixup! Update docs
* Make the main frigate build non-rpi-specific and build rpi using the base image
* Add boards to sidebar
* Fix docker build
* Fix docs build
* Update pr branch for testing
* remove target from rpi build
* Remove manual build
* Add push build for rpi
* fix typos, improve wording
* Add arm build for rpi
* Cleanup and add default github ref name
* Cleanup docker build file system
* Setup to use docker bake
* Add ci/cd for bake
* Fix path
* Fix devcontainer
* Set targets
* Fix build
* Fix syntax
* Add wheels target
* Move dev container to trt
* Update key and fix rpi local
* Move requirements files and set intermediate targets
* Add back --load
* Update docs for community board development
* Update installation docs to reflect different builds available
* Update docs with official and community supported headers
* Update codeowners docs
* Update docs
* Assemble main and standard builds
* Change order of pushes
* Remove community board after successful build
* Fix rpi bake file names