diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md
index 5c068e11c..4a0f014d4 100644
--- a/docs/docs/configuration/object_detectors.md
+++ b/docs/docs/configuration/object_detectors.md
@@ -1110,17 +1110,7 @@ model:
 
 #### Using a Custom Model
 
-To use your own model:
-
-1. Package your compiled model into a `.zip` file.
-
-2. The `.zip` must contain the compiled `.dfp` file.
-
-3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
-
-4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
-
-5. Update the `labelmap_path` to match your custom model's labels.
+To use your own custom model, first compile it into a [.dfp](https://developer.memryx.com/2p1/specs/files.html#dataflow-program) file, which is the format used by MemryX.
 
 #### Compile the Model
 
@@ -1129,18 +1119,31 @@
 Custom models must be compiled using **MemryX SDK 2.1**.
 
 Before compiling your model, install the MemryX Neural Compiler tools from the [Install Tools](https://developer.memryx.com/2p1/get_started/install_tools.html) page on the **host**.
 
+> **Note:** Compile the model on the host machine (or another separate machine) rather than inside the Frigate Docker container, where the compiler's dependencies may conflict with the container's packages. Installing the compiler in a Python virtual environment is recommended.
+
 Once the SDK 2.1 environment is set up, follow the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) documentation to compile your model.
 
 Example:
 
 ```bash
-mx_nc -m ./yolov9.onnx --dfp_fname ./yolov9.dfp -is "1,3,640,640" -c 4 --autocrop -v
+mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
 ```
 
-> **Note:** `-is` specifies the input shape. Use your model's input dimensions.
 
 For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/2p1/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/2p1/tutorials/tutorials.html).
 
+#### Package the Compiled Model
+
+1. Package your compiled model into a `.zip` file.
+
+2. The `.zip` file must contain the compiled `.dfp` file.
+
+3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
+
+4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
+
+5. Update `labelmap_path` to match your custom model's labels.
+
 ```yaml
 # The detector automatically selects the default model if nothing is provided in the config.
 #
diff --git a/docs/docs/frigate/installation.md b/docs/docs/frigate/installation.md
index 53e978c45..3722a23ba 100644
--- a/docs/docs/frigate/installation.md
+++ b/docs/docs/frigate/installation.md
@@ -297,7 +297,7 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
 
 #### Installation
 
-To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/hardware_setup.html).
+To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/2p1/get_started/install_hardware.html).
 
 Then follow these steps for installing the correct driver/runtime configuration:
 