From 704ee9667cdfb0bf5f7f09daae2710ee1aa8fb7b Mon Sep 17 00:00:00 2001
From: Vinnie Esposito <47068580+vespo92@users.noreply.github.com>
Date: Tue, 5 May 2026 19:10:24 -0500
Subject: [PATCH] fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN
 in manual detection branch (#23123)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch

Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`. YuNet is
trained on BGR; feeding it RGB silently degrades detection confidence by ~10× on
typical person crops, causing face_recognition to emit no `sub_label` and produce no
`train/` entries. There is no log signal because the detector simply returns 0 faces;
from outside the box it looks like nobody is walking past any camera.

The same file already does the YUV→BGR conversion correctly in the else-branch (was
line 271, now line 285); only the manual-detection branch was missed.

## Reproduction

Verified in-pod against the running Frigate's models on identical person crops
(snapshot pulled from a real person event):

    BGR (correct): cv2.FaceDetectorYN ← confidence 0.744 ✓
    RGB (current): cv2.FaceDetectorYN ← confidence 0.047 ✗

The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything under 0.5
at the detector layer, so the RGB-degraded crops never reach the user-configurable
`detection_threshold`. Result: silent outage.

## Fix

Three changes in `frigate/data_processing/real_time/face.py`:

1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)` block;
   `face_frame` is already BGR after the upstream conversion change

Net diff: +6 / -7. Pure Python, no new dependencies.
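Changes 1 and 3 are two views of the same fact: BGR is just RGB with the channel
axis reversed, so decoding YUV straight to BGR makes the later explicit RGB→BGR
step a no-op. A minimal numpy sketch of that equivalence (illustrative pixel
values, not Frigate code):

```python
import numpy as np

# Stand-in for a decoded 4x4 frame in RGB channel order (illustrative values).
rgb = np.dstack([
    np.full((4, 4), 200, dtype=np.uint8),  # R
    np.full((4, 4), 120, dtype=np.uint8),  # G
    np.full((4, 4), 40, dtype=np.uint8),   # B
])

# Old pipeline: decode YUV -> RGB, then an explicit RGB -> BGR channel reversal.
bgr_two_step = rgb[..., ::-1]

# New pipeline decodes YUV -> BGR directly; modelled here as the same reversal,
# which is why the explicit cv2.COLOR_RGB2BGR step becomes redundant.
bgr_direct = rgb[..., ::-1]

assert np.array_equal(bgr_two_step, bgr_direct)

# Feeding RGB where BGR is expected swaps red and blue (skin tones read as
# blue to the detector), which is why YuNet confidence collapses.
print(rgb[0, 0], bgr_direct[0, 0])
```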
## How a deployment confirms the fix

After this change, walking past a camera produces:

- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces

Signed-off-by: Vinnie Esposito

* Cleanup comment

---------

Signed-off-by: Vinnie Esposito
Co-authored-by: Nicolas Mowen
---
 frigate/data_processing/real_time/face.py | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/frigate/data_processing/real_time/face.py b/frigate/data_processing/real_time/face.py
index c6b6346b5..c5c4ec56f 100644
--- a/frigate/data_processing/real_time/face.py
+++ b/frigate/data_processing/real_time/face.py
@@ -229,9 +229,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 logger.debug(f"No person box available for {id}")
                 return

-            rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
+            # YuNet (cv2.FaceDetectorYN) is trained on BGR
+            bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
             left, top, right, bottom = person_box
-            person = rgb[top:bottom, left:right]
+            person = bgr[top:bottom, left:right]
             face_box = self.__detect_face(person, self.face_config.detection_threshold)

             if not face_box:
@@ -250,11 +251,6 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 )
                 return

-            try:
-                face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
-            except Exception as e:
-                logger.debug(f"Failed to convert face frame color for {id}: {e}")
-                return
         else:
             # don't run for object without attributes
             if not obj_data.get("current_attributes"):