[Update] Update document content and detector default layout

- Update object_detectors document
- Update detector's default layout
- Update default model name
GaryHuang-ASUS 2025-09-23 11:25:00 +08:00
parent 7cd98735cc
commit 0d562ffc5e
3 changed files with 23 additions and 13 deletions

View File

@@ -25,4 +25,4 @@ COPY --from=rootfs / /
 COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
-ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /model.synap
+ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -850,27 +850,32 @@ Hardware accelerated object detection is supported on the following SoCs:
 This implementation uses the [Synaptics model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion), version v3.1.0.
+This implementation is based on SDK `v1.5.0`.
 See the [installation docs](../frigate/installation.md#synaptics) for information on configuring the SL-series NPU hardware.

 ### Configuration

 When configuring the Synap detector, you have to specify the model: a local **path**.

-#### SSD
+#### SSD Mobilenet

-Use this configuration for ssd models. Here's a default pre-converted ssd model under the root folder.
+A synap model is provided in the container at /mobilenet.synap and is used by this detector type by default. The model comes from [Synap-release Github](https://github.com/synaptics-astra/synap-release/tree/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80).
+
+Use the model configuration shown below when using the synaptics detector with the default synap model:

 ```yaml
-detectors:
-  synap_npu:
-    type: synaptics
-model:
-  path: /model.synap
-  width: 224
-  height: 224
-  tensor_format: nhwc # Currently, the tensor format is statically specify in the detector.
-  labelmap_path: /labelmap/coco-80.txt
+detectors: # required
+  synap_npu: # required
+    type: synaptics # required
+model: # required
+  path: /mobilenet.synap # required
+  width: 224 # required
+  height: 224 # required
+  tensor_format: nhwc # optional
+  labelmap_path: /labelmap/coco-80.txt # required
 ```

 ## Rockchip platform
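For readers skimming the interleaved hunk above, the resulting default configuration documented for the synaptics detector (assembled from the added lines of the diff) reads as:

```yaml
detectors: # required
  synap_npu: # required
    type: synaptics # required

model: # required
  path: /mobilenet.synap # required
  width: 224 # required
  height: 224 # required
  tensor_format: nhwc # optional
  labelmap_path: /labelmap/coco-80.txt # required
```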

View File

@@ -43,6 +43,7 @@ class SynapDetector(DetectionApi):
         self.model_type = detector_config.model.model_type
         self.network = synap_network
         self.network_input_details = self.network.inputs[0]
+        self.input_tensor_layout = detector_config.model.input_tensor

         # Create Inference Engine
         self.preprocessor = Preprocessor()
@@ -50,7 +51,11 @@ class SynapDetector(DetectionApi):

     def detect_raw(self, tensor_input: np.ndarray):
         # It has only been testing for pre-converted mobilenet80 .tflite -> .synap model currently
-        postprocess_data = self.preprocessor.assign(self.network.inputs, tensor_input, Shape(tensor_input.shape), Layout.nhwc)
+        layout = Layout.nhwc  # default layout
+        if self.input_tensor_layout == InputTensorEnum.nhwc:
+            layout = Layout.nhwc
+        postprocess_data = self.preprocessor.assign(self.network.inputs, tensor_input, Shape(tensor_input.shape), layout)
         output_tensor_obj = self.network.predict()
         output = self.detector.process(output_tensor_obj, postprocess_data)
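The layout-selection step this commit introduces can be sketched in isolation. The `Layout` and `InputTensorEnum` classes below are simplified stand-ins for the real SyNAP and Frigate config enums (assumptions for illustration, not the actual imports); only the behavior visible in the diff is modeled:

```python
from enum import Enum


class Layout(Enum):
    # Stand-in for the SyNAP preprocessing layout enum
    nhwc = "nhwc"
    nchw = "nchw"


class InputTensorEnum(Enum):
    # Stand-in for Frigate's model input_tensor config enum
    nhwc = "nhwc"
    nchw = "nchw"


def select_layout(input_tensor_layout):
    """Mirror the diff's logic: start from an NHWC default and keep NHWC
    when the model config explicitly asks for it."""
    layout = Layout.nhwc  # default layout
    if input_tensor_layout == InputTensorEnum.nhwc:
        layout = Layout.nhwc
    return layout
```

As written, every branch resolves to `Layout.nhwc`; the point of restructuring the code this way is that additional layouts can later be wired into the `if` chain without touching the `preprocessor.assign(...)` call itself.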