Review Fix

OmriAx 2025-03-04 13:08:12 +02:00
parent f99bb8ec14
commit aaad1938e4
3 changed files with 77 additions and 40 deletions

View File

@ -132,7 +132,7 @@ detectors:
 ---
-## Hailo-8 Detector
+## Hailo-8
 This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
@ -143,10 +143,10 @@ See the [installation docs](../frigate/installation.md#hailo-8l) for information
 When configuring the Hailo detector, you have two options to specify the model: a local **path** or a **URL**.
 If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under `/config/model_cache/hailo`.
-#### YOLO (Recommended)
+#### YOLO
 Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:
-- **Hailo-8 hardware:** Uses **YOLOv8s** (default: `yolov8s.hef`)
+- **Hailo-8 hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
 - **Hailo-8L hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
 ```yaml
@ -163,16 +163,17 @@ model:
   input_dtype: int
   model_type: hailo-yolo
   # The detector automatically selects the default model based on your hardware:
-  # - For Hailo-8 hardware: YOLOv8s (default: yolov8s.hef)
+  # - For Hailo-8 hardware: YOLOv6n (default: yolov6n.hef)
   # - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
   #
   # Optionally, you can specify a local model path to override the default.
   # If a local path is provided and the file exists, it will be used instead of downloading.
   # Example:
-  # path: /config/model_cache/hailo/yolov8s.hef
+  # path: /config/model_cache/hailo/yolov6n.hef
   #
   # You can also override using a custom URL:
-  # url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov8s.hef
+  # url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/ssd_mobilenet_v2.hef
+  # just make sure to use the right configuration based on the model
 ```
 #### SSD
@ -189,7 +190,7 @@ model:
   width: 300
   height: 300
   input_tensor: nhwc
-  input_pixel_format: bgr
+  input_pixel_format: rgb
   model_type: ssd
   # Specify the local model path (if available) or URL for SSD MobileNet v1.
   # Example with a local path:
@ -222,15 +223,15 @@ model:
   input_dtype: int
   model_type: hailo-yolo
 ```
+For additional ready-to-use models, please visit: https://github.com/hailo-ai/hailo_model_zoo
+Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing. You're welcome to choose any of these pre-configured models for your implementation.
 > **Note:**
 > If both a model **path** and **URL** are provided, the detector will first check the local model path. If the file is not found, it will download the model from the URL.
+>
+> *Tested custom models include: yolov5, yolov8, yolov9, yolov11.*
 ---
-This guide now clearly explains how the model is chosen based on the presence of a local file path versus a URL, ensuring users know which model will be used by the integration.
 ## OpenVINO Detector
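The path-first, URL-fallback lookup described in the docs above can be sketched in a few lines. This is an editorial illustration under stated assumptions, not the detector's actual code; `resolve_model` and the demo file names are hypothetical:

```python
import os
import tempfile
import urllib.request

CACHE_DIR = "/config/model_cache/hailo"  # cache location named in the docs

def resolve_model(path=None, url=None, cache_dir=CACHE_DIR):
    """Hypothetical sketch of the documented lookup order:
    an existing local path wins; otherwise the URL is downloaded into the cache."""
    if path and os.path.isfile(path):
        return path  # local model found, no download needed
    if url:
        os.makedirs(cache_dir, exist_ok=True)
        cached = os.path.join(cache_dir, os.path.basename(url))
        if not os.path.isfile(cached):
            urllib.request.urlretrieve(url, cached)  # fetched once, reused afterwards
        return cached
    raise FileNotFoundError("no model path or URL configured")

# Demo: a model file that already exists locally is used as-is (no download occurs)
demo_dir = tempfile.mkdtemp()
local_model = os.path.join(demo_dir, "yolov6n.hef")
open(local_model, "wb").close()
chosen = resolve_model(path=local_model, url="https://example.com/yolov6n.hef")
```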

View File

@ -92,15 +92,13 @@ Inference speeds will vary greatly depending on the GPU and the model used.
 With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
-### Hailo-8 Detector
+### Hailo-8
 Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
 **Default Model Configuration:**
 - **Hailo-8L:** Default model is **YOLOv6n**.
-- **Hailo-8:** Default model is **YOLOv8s**.
+- **Hailo-8:** Default model is **YOLOv6n**.
-Additionally, the heavier **YOLOv8m** model has been tested on Hailo-8 hardware for users who require higher accuracy despite increased inference time.
 In real-world deployments, even with multiple cameras running concurrently, Frigate has demonstrated consistent performance. Testing on x86 platforms—with dual PCIe lanes—yields further improvements in FPS, throughput, and latency compared to the Raspberry Pi setup.

View File

@ -7,6 +7,7 @@ import queue
 import threading
 from functools import partial
 from typing import Dict, Optional, List, Tuple
+import cv2
 try:
     from hailo_platform import (
@ -31,19 +32,47 @@ from frigate.detectors.detection_api import DetectionApi
 from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum, InputTensorEnum, PixelFormatEnum, InputDTypeEnum
 from PIL import Image, ImageDraw, ImageFont
+logger = logging.getLogger(__name__)
 # ----------------- Inline Utility Functions ----------------- #
-def preprocess_image(image: Image.Image, model_w: int, model_h: int) -> Image.Image:
+def preprocess_tensor(image: np.ndarray, model_w: int, model_h: int) -> np.ndarray:
     """
-    Resize image with unchanged aspect ratio using padding.
+    Resize a NumPy array image with unchanged aspect ratio using padding.
+    Optimized for the case where the image is 320x320 and the target is 640x640.
+    Assumes the input image is of shape (H, W, 3).
     """
-    img_w, img_h = image.size
-    scale = min(model_w / img_w, model_h / img_h)
-    new_img_w, new_img_h = int(img_w * scale), int(img_h * scale)
-    image = image.resize((new_img_w, new_img_h), Image.Resampling.BICUBIC)
-    padded_image = Image.new('RGB', (model_w, model_h), (114, 114, 114))
-    padded_image.paste(image, ((model_w - new_img_w) // 2, (model_h - new_img_h) // 2))
+    # Remove batch dimension if present (assumes batch size of 1)
+    if image.ndim == 4 and image.shape[0] == 1:
+        image = image[0]
+    h, w = image.shape[:2]
+    # Fast path: if image is 320x320 and target is 640x640, simply double the size quickly.
+    if (w, h) == (320, 320) and (model_w, model_h) == (640, 640):
+        return cv2.resize(image, (model_w, model_h), interpolation=cv2.INTER_LINEAR)
+    # Standard processing: calculate scaling factor to maintain aspect ratio.
+    scale = min(model_w / w, model_h / h)
+    new_w, new_h = int(w * scale), int(h * scale)
+    # Resize with high-quality bicubic interpolation
+    resized_image = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_CUBIC)
+    # Create a new image with the target size filled with the padding color 114
+    padded_image = np.full((model_h, model_w, 3), 114, dtype=image.dtype)
+    # Calculate the center position for the resized image
+    x_offset = (model_w - new_w) // 2
+    y_offset = (model_h - new_h) // 2
+    padded_image[y_offset:y_offset+new_h, x_offset:x_offset+new_w] = resized_image
     return padded_image
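The aspect-preserving letterbox that `preprocess_tensor` performs can be exercised in isolation. The sketch below substitutes a dependency-free nearest-neighbour resize for `cv2.resize`, so it illustrates the padding geometry rather than the exact pixel values:

```python
import numpy as np

def letterbox(image, model_w, model_h, pad_value=114):
    """Aspect-preserving resize onto a constant-value canvas, mirroring
    preprocess_tensor above (nearest-neighbour stands in for cv2.resize)."""
    h, w = image.shape[:2]
    scale = min(model_w / w, model_h / h)
    new_w, new_h = int(w * scale), int(h * scale)
    # Nearest-neighbour resize: map each target pixel back to a source pixel
    ys = (np.arange(new_h) * h / new_h).astype(int)
    xs = (np.arange(new_w) * w / new_w).astype(int)
    resized = image[ys][:, xs]
    # Center the resized image on the padded canvas
    canvas = np.full((model_h, model_w, 3), pad_value, dtype=image.dtype)
    x0, y0 = (model_w - new_w) // 2, (model_h - new_h) // 2
    canvas[y0:y0 + new_h, x0:x0 + new_w] = resized
    return canvas

# A 320x160 frame letterboxed into 640x640 keeps its 2:1 aspect ratio,
# leaving 114-valued bands above and below the content
frame = np.zeros((160, 320, 3), dtype=np.uint8)
out = letterbox(frame, 640, 640)
```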
 def extract_detections(input_data: list, threshold: float = 0.5) -> dict:
     """
     (Legacy extraction function; not used by detect_raw below.)
@ -72,16 +101,16 @@ def extract_detections(input_data: list, threshold: float = 0.5) -> dict:
 # Global constants and default URLs
 DETECTOR_KEY = "hailo8l"
 ARCH = None
-H8_DEFAULT_MODEL = "yolov8s.hef"
+H8_DEFAULT_MODEL = "yolov6n.hef"
 H8L_DEFAULT_MODEL = "yolov6n.hef"
-H8_DEFAULT_URL = "https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov8s.hef"
+H8_DEFAULT_URL = "https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov6n.hef"
 H8L_DEFAULT_URL = "https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8l/yolov6n.hef"
 def detect_hailo_arch():
     try:
         result = subprocess.run(['hailortcli', 'fw-control', 'identify'], capture_output=True, text=True)
         if result.returncode != 0:
-            print(f"Error running hailortcli: {result.stderr}")
+            logger.error(f"Error running hailortcli: {result.stderr}")
             return None
         for line in result.stdout.split('\n'):
             if "Device Architecture" in line:
@ -89,10 +118,10 @@ def detect_hailo_arch():
                     return "hailo8l"
                 elif "HAILO8" in line:
                     return "hailo8"
-        print("Could not determine Hailo architecture from device information.")
+        logger.error("Could not determine Hailo architecture from device information.")
         return None
     except Exception as e:
-        print(f"An error occurred while detecting Hailo architecture: {e}")
+        logger.error(f"An error occurred while detecting Hailo architecture: {e}")
         return None
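The architecture detection above boils down to string matching on the CLI's identify report. A standalone sketch of just that parsing step (the sample line is assumed for illustration, not captured from a real device):

```python
def parse_hailo_arch(identify_output: str):
    """Hypothetical parser mirroring the string matching in detect_hailo_arch:
    find the 'Device Architecture' line and classify the device."""
    for line in identify_output.split("\n"):
        if "Device Architecture" in line:
            # Check the longer token first so HAILO8L is not misread as HAILO8
            if "HAILO8L" in line:
                return "hailo8l"
            elif "HAILO8" in line:
                return "hailo8"
    return None

# Assumed sample of an identify report line
arch = parse_hailo_arch("Device Architecture: HAILO8L")
```

Checking `HAILO8L` before `HAILO8` matters because the shorter token is a substring of the longer one.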
 # ----------------- Inline Asynchronous Inference Class ----------------- #
@ -297,6 +326,11 @@ class HailoDetector(DetectionApi):
     def detect_raw(self, tensor_input):
         logging.debug("[DETECT_RAW] Starting detection")
+        # Pre-process the input tensor
+        logger.debug("[DETECT_RAW] Starting pre-processing")
+        tensor_input = self.preprocess(tensor_input)
         # Ensure tensor_input has a batch dimension
         if isinstance(tensor_input, np.ndarray) and len(tensor_input.shape) == 3:
             tensor_input = np.expand_dims(tensor_input, axis=0)
@ -337,16 +371,13 @@ class HailoDetector(DetectionApi):
                 score = float(det[4])
                 if score < threshold:
                     continue
-                # Instead of checking for a sixth element, use the outer index as the class
-                cls = class_id
                 if hasattr(self, "labels") and self.labels:
-                    logging.debug(f"[DETECT_RAW] Detected class id: {cls} -> {self.labels[cls]}")
-                    print(f"[DETECT_RAW] Detected class id: {cls} -> {self.labels[cls]}")
+                    logging.debug(f"[DETECT_RAW] Detected class id: {class_id} -> {self.labels[class_id]}")
                 else:
-                    logging.debug(f"[DETECT_RAW] Detected class id: {cls}")
-                    print(f"[DETECT_RAW] Detected class id: {cls}")
+                    logging.debug(f"[DETECT_RAW] Detected class id: {class_id}")
-                # Append in the order: [class_id, confidence, ymin, xmin, ymax, xmax]
-                all_detections.append([cls, score, det[0], det[1], det[2], det[3]])
+                all_detections.append([class_id, score, det[0], det[1], det[2], det[3]])
         if len(all_detections) == 0:
             return np.zeros((20, 6), dtype=np.float32)
@ -354,18 +385,25 @@ class HailoDetector(DetectionApi):
         detections_array = np.array(all_detections, dtype=np.float32)
         # Pad or truncate to exactly 20 rows
-        if detections_array.shape[0] < 20:
+        if detections_array.shape[0] > 20:
+            detections_array = detections_array[:20, :]
+        elif detections_array.shape[0] < 20:
             pad = np.zeros((20 - detections_array.shape[0], 6), dtype=np.float32)
             detections_array = np.vstack((detections_array, pad))
-        elif detections_array.shape[0] > 20:
-            detections_array = detections_array[:20, :]
         logging.debug(f"[DETECT_RAW] Processed detections: {detections_array}")
         return detections_array
     # Preprocess method using inline utility
     def preprocess(self, image):
-        return preprocess_image(image, self.input_shape[1], self.input_shape[0])
+        if isinstance(image, np.ndarray):
+            # Process the tensor input and reintroduce the batch dimension.
+            processed = preprocess_tensor(image, self.input_shape[1], self.input_shape[0])
+            return np.expand_dims(processed, axis=0)
+        else:
+            raise ValueError("Unsupported image format for preprocessing")
     # Close the Hailo device
     def close(self):