memryx docs: minor reorg

Tim 2025-08-12 16:37:34 -04:00
parent ca13783130
commit b2d83d0a9c
3 changed files with 12 additions and 15 deletions

View File

@@ -9,13 +9,13 @@ arch=$(uname -m)
 # Purge existing packages and repo
 echo "Removing old MemryX installations..."
-sudo apt purge -y memx-* || true
+sudo apt purge -y memx-* mxa-manager || true
 sudo rm -f /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc

 # Install kernel headers
 echo "Installing kernel headers for: $(uname -r)"
 sudo apt update
-sudo apt install -y linux-headers-$(uname -r)
+sudo apt install -y dkms linux-headers-$(uname -r)

 # Add MemryX key and repo
 echo "Adding MemryX GPG key and repository..."
@@ -42,4 +42,4 @@ for pkg in "${packages[@]}"; do
   sudo apt install -y "$pkg"
 done

-echo "MemryX installation complete!"
+echo "MemryX installation complete!"

View File

@@ -13,7 +13,7 @@ Frigate supports multiple different detectors that work on different types of ha
 - [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
 - [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
-- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in M.2 format, offering broad compatibility across various platforms.
+- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.

 **AMD**
@@ -384,7 +384,8 @@ model:
 ```

 #### Using a Custom Model

-To use your own model, bind-mount the compiled `.dfp` file into the container and specify its path using `model.path`. You will also have to update the `labelmap` accordingly.
+To use your own model, bind-mount the path to your compiled `.dfp` file into the container and specify its path using `model.path`. You will also have to update the `labelmap` accordingly.

 For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
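In practice, the custom-model setup described in that changed paragraph could look roughly like the following. This is an illustrative sketch only: the host path, file names, and labelmap location are assumptions, not values from this commit; only the `model.path` and `labelmap_path` config keys come from Frigate's config schema.

```yaml
# docker-compose.yml (excerpt) - bind-mount the compiled model into the container
# (host path ./models/my_model.dfp is a hypothetical example)
services:
  frigate:
    volumes:
      - ./models/my_model.dfp:/models/my_model.dfp:ro
      - ./models/my_labels.txt:/models/my_labels.txt:ro

# config.yml (excerpt) - point Frigate at the mounted .dfp and its labelmap
model:
  path: /models/my_model.dfp
  labelmap_path: /models/my_labels.txt
```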

View File

@@ -190,21 +190,17 @@ With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detec
 ### MemryX MX3

-Frigate supports the MemryX MX3 M.2 AI Acceleration Module on compatible hardware platforms, including both x86 (Intel/AMD) and ARM-based SBCs such as RPi 5.
+Frigate supports the MemryX MX3 M.2 AI Acceleration Module on compatible hardware platforms, including both x86 (Intel/AMD) and ARM-based SBCs such as Raspberry Pi 5.

-A single MemryX MX3 module is capable of handling multiple camera streams using the default models, making it sufficient for most users. For larger deployments with more cameras or bigger models, multiple MX3 modules can be used. Frigate supports multi-detector configurations, allowing you to connect multiple MX3 modules to scale inference capacity seamlessly.
+A single MemryX MX3 module is capable of handling multiple camera streams using the default models, making it sufficient for most users. For larger deployments with more cameras or bigger models, multiple MX3 modules can be used. Frigate supports multi-detector configurations, allowing you to connect multiple MX3 modules to scale inference capacity.

 Detailed information is available [in the detector docs](/configuration/object_detectors#memryx-mx3).

-Frigate supports the following models resolutions with the MemryX MX3 module:
+**Default Model Configuration:**

 - **Yolo-NAS**: 320 (default), 640
 - **YOLOv9**: 320 (default), 640
 - **YOLOX**: 640
 - **SSDlite MobileNet v2**: 320
+- Default model is **YOLO-NAS-Small**.

-Due to the MX3's architecture, the maximum frames per second supported cannot be calculated as `1/inference time` and is measured separately. When deciding how many camera streams you may support with your configuration, use the **MX3 Total FPS** column to approximate of the detector's limit, not the Inference Time.
+The MX3 is a pipelined architecture, where the maximum frames per second supported (and thus the supported number of cameras) cannot be calculated as `1/latency` (1/"Inference Time") and is measured separately. When estimating how many camera streams you may support with your configuration, use the **MX3 Total FPS** column to approximate the detector's limit, not the Inference Time.

 | Model                | Input Size | MX3 Inference Time | MX3 Total FPS |
 |----------------------|------------|--------------------|---------------|
@@ -215,7 +211,7 @@ Due to the MX3's architecture, the maximum frames per second cannot be
 | YOLOX-Small          | 640        | ~ 16 ms            | ~ 263         |
 | SSDlite MobileNet v2 | 320        | ~ 5 ms             | ~ 1056        |

-Inference speeds may vary depending on the host platforms CPU performance. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, x86 hosts, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which will increase post-processing time and may result in lower FPS.
+Inference speeds may vary depending on the host platform. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which may limit total FPS.
### Nvidia Jetson
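The multi-detector scaling mentioned in the changed paragraph above could be sketched like this in Frigate's config. A hedged sketch, not taken from this commit: the detector names are hypothetical, and the `type: memryx` and `device` values are assumptions following Frigate's multi-detector convention of one named entry per accelerator.

```yaml
# config.yml (excerpt) - one detector entry per installed MX3 module
# (names memx0/memx1 and device strings are illustrative assumptions)
detectors:
  memx0:
    type: memryx
    device: PCIe:0   # first MX3 module
  memx1:
    type: memryx
    device: PCIe:1   # second MX3 module, scales detection capacity
```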