[Update] update Synaptics SL1680 setup description

This commit is contained in:
GaryHuang-ASUS 2025-09-18 12:19:12 +08:00
parent f302969e2e
commit 34cba14729
4 changed files with 162 additions and 0 deletions

View File

@@ -428,3 +428,31 @@ cameras:
```
:::
## Synaptics SL1680
Hardware accelerated video de-/encoding is supported on the Synaptics SL1680 SoC.
### Prerequisites
Make sure to follow the [Synaptics SL1680 specific installation instructions](/frigate/installation#synaptics-sl1680).
### Configuration
Add the following FFmpeg configuration to your `config.yml` to enable hardware video processing:
```yaml
ffmpeg:
hwaccel_args: -c:v h264_v4l2m2m
input_args: preset-rtsp-restream
output_args:
record: preset-record-generic-audio-aac
```
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your stream is h264 encoded. For example, if your camera streams h264, your SoC must be able to decode and encode h264. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::
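For reference, here is a minimal sketch of how this fits into a full camera entry; the camera name and RTSP URL below are placeholders:
```yaml
ffmpeg:
  hwaccel_args: -c:v h264_v4l2m2m

cameras:
  front_door: # placeholder camera name
    ffmpeg:
      inputs:
        # placeholder h264 RTSP stream
        - path: rtsp://192.168.1.10:554/stream
          roles:
            - detect
            - record
```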

View File

@@ -40,6 +40,10 @@ Frigate supports multiple different detectors that work on different types of ha
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**Synaptics SL1680**
- [Synap](#synaptics-sl1680): SyNAP models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a TFLite model. This is not recommended; in most cases OpenVINO can be used in CPU mode with better results.
@@ -838,6 +842,35 @@ model:
height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
```
## Synaptics SL1680
Hardware accelerated object detection is supported on the following SoCs:
- SL1680
This implementation uses the [SyNAP offline model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion) toolkit, v3.1.0.
See the [installation docs](../frigate/installation.md#synaptics-sl1680) for information on configuring the SL1680 NPU hardware.
### Configuration
When configuring the Synap detector, you must specify the model as a local **path**.
#### SSD
Use this configuration for SSD models. A default pre-converted SSD model is provided under the root folder.
```yaml
detectors:
synap_npu:
type: synap1680
model:
path: /model.synap
width: 224
height: 224
```
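If you convert your own model with the SyNAP toolkit, you can point the same detector at it. A minimal sketch, assuming a hypothetical model converted for 320x320 input and stored under `/config`:
```yaml
detectors:
  synap_npu:
    type: synap1680

model:
  # hypothetical path to a custom converted model
  path: /config/model_cache/custom_320.synap
  width: 320 # must match the converted model's input size
  height: 320
```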
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:

View File

@@ -90,6 +90,78 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
**Synaptics SL1680**
- [Synaptics SL1680](#synaptics-sl1680): SyNAP models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.
:::
### Synaptics SL1680
- **Synaptics SL1680:** Default model is **mobilenet**.
| Name | Synaptics SL1680 Inference Time |
| ---------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5n | ~ 118 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
**Default Model Configuration:**
- **Hailo-8L:** Default model is **YOLOv6n**.
- **Hailo-8:** Default model is **YOLOv6n**.
In real-world deployments, even with multiple cameras running concurrently, Frigate has demonstrated consistent performance. Testing on x86 platforms—with dual PCIe lanes—yields further improvements in FPS, throughput, and latency compared to the Raspberry Pi setup.
| Name | Hailo8 Inference Time | Hailo8L Inference Time |
| ---------------- | ---------------------- | ----------------------- |
| ssd mobilenet v1 | ~ 6 ms | ~ 10 ms |
| yolov6n | ~ 7 ms | ~ 11 ms |
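Since Frigate selects the default model automatically, a minimal detector entry is usually all that is needed. A sketch is below; double-check the exact `type` string against the detector docs for your Frigate version:
```yaml
detectors:
  hailo:
    type: hailo8l # used for both Hailo-8 and Hailo-8L
```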
### Google Coral TPU
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
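A minimal detector entry for a single Coral is sketched below; adjust `device` for the PCIe/M.2 variants:
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb # use pci for the M.2/PCIe versions
```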
### OpenVINO - Intel
The OpenVINO detector type is able to run on:
- 6th Gen Intel Platforms and newer that have an iGPU
- x86 hosts with an Intel Arc GPU
- Most modern AMD CPUs (though this is officially not supported by Intel)
- x86 & Arm64 hosts via CPU (generally not recommended)
:::note
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported.
In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.
:::
More information is available [in the detector docs](/configuration/object_detectors#openvino-detector).
Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
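As a starting point, a minimal detector entry targeting the iGPU might look like the sketch below; model selection and further options are covered in the linked detector docs:
```yaml
detectors:
  ov:
    type: openvino
    device: GPU # or CPU
```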
:::
### Hailo-8

View File

@@ -185,6 +185,35 @@ or add these options to your `docker run` command:
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
### Synaptics SL1680
#### Setup
Follow Frigate's default installation instructions, but use a docker image with the `-synap` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-synap`.
Next, you need to grant Docker permission to access your hardware:
- During the configuration process, you should run Docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your `docker run` command.

Then pass the following devices through to the container in your `docker-compose.yml`:
```yaml
devices:
- /dev/synap
- /dev/video0
- /dev/video1
```
or add these options to your `docker run` command:
```
--device /dev/synap \
--device /dev/video0 \
--device /dev/video1
```
#### Configuration
Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics-sl1680) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics-sl1680).
## Docker
Running through Docker with Docker Compose is the recommended install method.