fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch (#23123)

* fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch

Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`.
YuNet is trained on BGR — feeding it RGB silently degrades detection
confidence by ~10× on typical person crops, causing face_recognition to
emit no `sub_label` and produce no `train/` entries. There is no log signal
because the detector simply returns 0 faces; from outside the box it looks
like nobody is walking past any camera.
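
The underlying confusion is pure channel order: OpenCV's BGR layout is RGB reversed along the last axis, so handing an RGB array to a BGR-trained model swaps red and blue. A minimal NumPy sketch of the swap (illustrative only, not Frigate code):

```python
import numpy as np

def swap_rb(image: np.ndarray) -> np.ndarray:
    """Reverse the channel axis; equivalent to cv2.COLOR_RGB2BGR / COLOR_BGR2RGB."""
    return image[..., ::-1]

# A pure-red RGB pixel is seen as pure-blue by a BGR consumer such as YuNet.
red_rgb = np.array([[[255, 0, 0]]], dtype=np.uint8)
misread = swap_rb(red_rgb)  # [0, 0, 255] from the detector's point of view
```

The swap is its own inverse, which is why a single wrong conversion flag anywhere in the pipeline is enough to flip every red/blue pair the model sees.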

The same file already does the YUV→BGR conversion correctly in the
else-branch (was line 271, now line 285) — only the manual-detection
branch was missed.

## Reproduction

Verified in-pod against the running Frigate's models on identical
person crops (snapshot pulled from a real person event):

    BGR (correct):  confidence 0.744 ✓
    RGB (current):  confidence 0.047 ✗

The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything
under 0.5 at the detector layer, so the RGB-degraded crops never reach
the user-configurable `detection_threshold`. Result: silent outage.
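
The two-layer filtering can be sketched as follows (hypothetical helper, not the actual Frigate code path):

```python
DETECTOR_SCORE_THRESHOLD = 0.5  # fixed at FaceDetectorYN.create() time

def surviving_scores(raw_scores, detection_threshold):
    """Model the two filter layers: the detector drops scores below its own
    built-in threshold before the user-configurable detection_threshold
    is ever consulted."""
    after_detector = [s for s in raw_scores if s >= DETECTOR_SCORE_THRESHOLD]
    return [s for s in after_detector if s >= detection_threshold]

# RGB-degraded crops (0.047) vanish even under a very permissive user threshold.
surviving_scores([0.047], detection_threshold=0.01)  # → []
surviving_scores([0.744], detection_threshold=0.5)   # → [0.744]
```

This is why lowering `detection_threshold` in the config could never work around the bug: the filtering happens one layer below anything the user can tune.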

## Fix

Three changes in `frigate/data_processing/real_time/face.py`:

1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)`
   block — `face_frame` is already BGR after the upstream conversion change

Net diff: +6 / -7. Pure Python, no new dependencies.

## How a deployment confirms the fix

After this change, walking past a camera produces:
- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces
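
The first and third checks can be scripted against an event payload; a sketch with the payload shape assumed from the fields above (only `data.attributes` and `sub_label` are taken from this commit message, the rest is hypothetical):

```python
def face_pipeline_alive(event: dict) -> bool:
    """Heuristic check on a Frigate person-event payload (exact schema assumed)."""
    attributes = event.get("data", {}).get("attributes", [])
    has_face = any(a.get("label") == "face" for a in attributes)
    return has_face or event.get("sub_label") is not None

# Pre-fix symptom: empty attributes, no sub_label.
broken = {"data": {"attributes": []}, "sub_label": None}
# Post-fix: the person event carries a face attribute.
fixed = {"data": {"attributes": [{"label": "face", "score": 0.744}]}}
```
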

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>

* Cleanup comment

---------

Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Vinnie Esposito 2026-05-05 19:10:24 -05:00 committed by GitHub
parent 76a1230885
commit 704ee9667c


    @@ -229,9 +229,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 logger.debug(f"No person box available for {id}")
                 return
    -        rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
    +        # YuNet (cv2.FaceDetectorYN) is trained on BGR
    +        bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
             left, top, right, bottom = person_box
    -        person = rgb[top:bottom, left:right]
    +        person = bgr[top:bottom, left:right]
             face_box = self.__detect_face(person, self.face_config.detection_threshold)

             if not face_box:
    @@ -250,11 +251,6 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 )
                 return
    -        try:
    -            face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
    -        except Exception as e:
    -            logger.debug(f"Failed to convert face frame color for {id}: {e}")
    -            return
         else:
             # don't run for object without attributes
             if not obj_data.get("current_attributes"):