Compare commits

3 Commits

Author SHA1 Message Date
mathieu-d
569d60441a
Merge 0f3dd097ec into 704ee9667c 2026-05-06 05:27:13 +02:00
Vinnie Esposito
704ee9667c
fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch (#23123)
* fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch

Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`.
YuNet is trained on BGR — feeding it RGB silently degrades detection
confidence by ~10× on typical person crops, causing face_recognition to
emit no `sub_label` and produce no `train/` entries. There is no log signal
because the detector simply returns 0 faces; from outside the box it looks
like nobody is walking past any camera.

The same file already does the YUV→BGR conversion correctly in the
else-branch (was line 271, now line 285) — only the manual-detection
branch was missed.

## Reproduction

Verified in-pod against the models of the running Frigate instance, on
identical person crops (snapshot pulled from a real person event):

    BGR (correct):  cv2.FaceDetectorYN ←  confidence 0.744 ✓
    RGB (current):  cv2.FaceDetectorYN ←  confidence 0.047 ✗

The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything
under 0.5 at the detector layer, so the RGB-degraded crops never reach
the user-configurable `detection_threshold`. Result: silent outage.

## Fix

Three changes in `frigate/data_processing/real_time/face.py`:

1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)`
   block — `face_frame` is already BGR after the upstream conversion change

Net diff: +6 / -7. Pure Python, no new dependencies.

## How a deployment confirms the fix

After this change, walking past a camera produces:
- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces
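
A deployment can script that check. The helpers below encode the expected payload shapes; the exact `/api/faces` and event schemas here are assumptions inferred from the bullets above, not a documented contract:

```python
def face_training_count(faces_payload: dict) -> int:
    # /api/faces is expected to expose a train/ array of collected face
    # crops; a count that grows after the fix means recognition is running.
    return len(faces_payload.get("train", []))

def person_event_has_face(event: dict) -> bool:
    # A recognized person event should carry a face entry under
    # data.attributes (empty before the fix).
    return "face" in event.get("data", {}).get("attributes", {})

# Example payloads (hypothetical shapes):
assert face_training_count({"train": ["person-1.webp"]}) == 1
assert face_training_count({}) == 0
assert person_event_has_face({"data": {"attributes": {"face": {"score": 0.92}}}})
assert not person_event_has_face({"data": {"attributes": {}}})
```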

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>

* Cleanup comment

---------

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-05-05 19:10:24 -05:00
Nicolas Mowen
76a1230885
ROCm Optimizations (#23118)
* Update to ROCm 7.2.3

* Add inference time for 9060XT

* Update times

* Update hardware info for latest ROCm

* Add env vars to save kernels and miopen database

* re-enable face recognition for ROCm

* Update

* Save LLVM cache
2026-05-05 16:33:43 -05:00
7 changed files with 21 additions and 21 deletions

@@ -13,7 +13,7 @@ ARG ROCM
 RUN apt update -qq && \
     apt install -y wget gpg && \
-    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.2/ubuntu/jammy/amdgpu-install_7.2.70200-1_all.deb && \
+    wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.2.3/ubuntu/jammy/amdgpu-install_7.2.3.70203-1_all.deb && \
     apt install -y ./rocm.deb && \
     apt update && \
     apt install -qq -y rocm
@@ -78,6 +78,10 @@ ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
 ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
 ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
 ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
+ENV MIOPEN_CUSTOM_CACHE_DIR=/config/model_cache/migraphx
+ENV MIOPEN_USER_DB_PATH=/config/model_cache/migraphx
+ENV AMD_COMGR_CACHE=1
+ENV AMD_COMGR_CACHE_DIR=/config/model_cache/migraphx
 COPY --from=rocm-dist / /

@@ -1 +1 @@
-onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.2.0/onnxruntime_migraphx-1.23.1-cp311-cp311-linux_x86_64.whl
+onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.2.3-1/onnxruntime_migraphx-1.24.4-cp311-cp311-linux_x86_64.whl

@@ -1,5 +1,5 @@
 variable "ROCM" {
-  default = "7.2.0"
+  default = "7.2.3"
 }
 variable "HSA_OVERRIDE_GFX_VERSION" {
   default = ""

@@ -1022,12 +1022,12 @@ detectors:
 ### ONNX Supported Models
-| Model                         | Nvidia GPU | AMD GPU | Notes                                               |
-| ----------------------------- | ---------- | ------- | --------------------------------------------------- |
-| [YOLOv9](#yolo-v3-v4-v7-v9-2) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
-| [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
-| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
-| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
+| Model                                | Nvidia GPU | AMD GPU | Notes                                               |
+| ------------------------------------ | ---------- | ------- | --------------------------------------------------- |
+| [YOLOv9](#yolo-v3-v4-v7-v9-2) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
+| [RF-DETR](#rf-detr) | ✅ | ⚠️ | Supports CUDA Graphs for optimal Nvidia performance |
+| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
+| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
+| [D-FINE / DEIMv2](#d-fine--deimv2-1) | ⚠️ | ❌ | Not supported by CUDA Graphs |
 There is no default model provided, the following formats are supported:

@@ -223,10 +223,11 @@ Apple Silicon can not run within a container, so a ZMQ proxy is utilized to comm
 With the [ROCm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
-| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
-| --------- | --------------------------- | ------------------------- |
-| AMD 780M | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
-| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
+| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
+| -------------- | --------------------------- | ------------------------- | ---------------------- |
+| AMD 780M | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms | |
+| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms | |
+| AMD 9060XT 16G | t-320: ~ 4 ms s-320: 5 ms | 320: ~ 6 ms | Nano-320: ~ 90 ms |
 ## Community Supported Detectors

@@ -229,9 +229,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 logger.debug(f"No person box available for {id}")
                 return
-            rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
+            # YuNet (cv2.FaceDetectorYN) is trained on BGR
+            bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
             left, top, right, bottom = person_box
-            person = rgb[top:bottom, left:right]
+            person = bgr[top:bottom, left:right]
             face_box = self.__detect_face(person, self.face_config.detection_threshold)
             if not face_box:
@@ -250,11 +251,6 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 )
                 return
-            try:
-                face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
-            except Exception as e:
-                logger.debug(f"Failed to convert face frame color for {id}: {e}")
-                return
         else:
             # don't run for object without attributes
             if not obj_data.get("current_attributes"):

@@ -132,7 +132,6 @@ class ONNXModelRunner(BaseModelRunner):
         return model_type in [
             EnrichmentModelTypeEnum.paddleocr.value,
             EnrichmentModelTypeEnum.jina_v2.value,
-            EnrichmentModelTypeEnum.arcface.value,
             ModelTypeEnum.rfdetr.value,
             ModelTypeEnum.dfine.value,
         ]