mirror of https://github.com/blakeblackshear/frigate.git — synced 2026-05-07 05:55:27 +03:00
fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch
Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`.
YuNet is trained on BGR — feeding it RGB silently degrades detection
confidence by ~10× on typical person crops, causing face_recognition to
emit no `sub_label` and produce no `train/` entries. There is no log signal
because the detector simply returns 0 faces; from outside the box it looks
like nobody is walking past any camera.
The same file already does the YUV→BGR conversion correctly in the
else-branch (was line 271, now line 285) — only the manual-detection
branch was missed.
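The failure mode is pure channel order: OpenCV arrays (and therefore YuNet's input) are B, G, R, so handing the detector an RGB array amounts to swapping the red and blue planes. A numpy-only sketch of the mix-up (pixel values are illustrative, not taken from Frigate):

```python
import numpy as np

# A 1x1 "image" that is strongly red in RGB channel order.
rgb = np.array([[[200, 30, 10]]], dtype=np.uint8)

# YuNet, like the rest of OpenCV, interprets arrays as B, G, R.
# Feeding it the RGB array means the 200 in channel 0 is read as
# *blue*, so the detector effectively sees a color-swapped crop.
as_seen_by_detector = rgb      # same bytes, wrong interpretation
correct_bgr = rgb[..., ::-1]   # channel reversal: what the fix hands over

assert correct_bgr[0, 0, 2] == 200          # red ends up in the R plane
assert as_seen_by_detector[0, 0, 0] == 200  # bug: red is read as blue
```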
## Reproduction
Verified in-pod against the running Frigate instance's models, on identical
person crops (snapshot pulled from a real person event):
- BGR (correct): `cv2.FaceDetectorYN` confidence 0.744 ✓
- RGB (current): `cv2.FaceDetectorYN` confidence 0.047 ✗
The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything
under 0.5 at the detector layer, so the RGB-degraded crops never reach
the user-configurable `detection_threshold`. Result: silent outage.
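To make the two filter layers concrete, here is a hypothetical sketch: the 0.5 value mirrors the `score_threshold` set on `FaceDetectorYN.create()` as described above, while the 0.7 user value is invented purely for illustration.

```python
# Layer 1 threshold lives inside the detector; layer 2 is Frigate's
# user-configurable detection_threshold (0.7 here is made up).
DETECTOR_SCORE_THRESHOLD = 0.5
USER_DETECTION_THRESHOLD = 0.7

def surviving_faces(confidences):
    # Layer 1: the detector drops anything below its own threshold,
    # so RGB-degraded detections never even reach layer 2.
    detected = [c for c in confidences if c >= DETECTOR_SCORE_THRESHOLD]
    # Layer 2: the user-configurable threshold.
    return [c for c in detected if c >= USER_DETECTION_THRESHOLD]

print(surviving_faces([0.744]))  # BGR-quality confidence -> [0.744]
print(surviving_faces([0.047]))  # RGB-degraded confidence -> []
```

Because the 0.047 crop dies at layer 1, raising or lowering the user threshold cannot surface the problem.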
## Fix
Three changes in `frigate/data_processing/real_time/face.py`:
1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)`
block — `face_frame` is already BGR after the upstream conversion change
Net diff: +6 / -7. Pure Python, no new dependencies.
## How a deployment confirms the fix
After this change, walking past a camera produces:
- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces
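The checklist above can be sketched as a small predicate over an event payload. The field names (`data` → `attributes`, top-level `sub_label`) are assumptions based on this commit message, not a verified Frigate API schema; the payloads are hypothetical.

```python
# Hedged sketch: do the post-fix signals listed above appear on a
# person event? Field names are assumed from the commit message.
def face_recognition_alive(event):
    attrs = event.get("data", {}).get("attributes", [])
    has_face = any(a.get("label") == "face" for a in attrs)
    return has_face and event.get("sub_label") is not None

# Hypothetical payloads for the fixed and broken states:
healthy = {"data": {"attributes": [{"label": "face"}]}, "sub_label": "resident"}
broken = {"data": {"attributes": []}, "sub_label": None}

assert face_recognition_alive(healthy)
assert not face_recognition_alive(broken)
```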
Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
commit 6a277620ca (parent 76a1230885)
```diff
@@ -229,9 +229,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 logger.debug(f"No person box available for {id}")
                 return

-            rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
+            # YuNet (cv2.FaceDetectorYN) is trained on BGR; feeding RGB
+            # silently degrades detection confidence by ~10x on typical
+            # person crops, causing face_recognition to fail with no log
+            # signal. The else-branch below already does YUV2BGR correctly.
+            bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
             left, top, right, bottom = person_box
-            person = rgb[top:bottom, left:right]
+            person = bgr[top:bottom, left:right]
             face_box = self.__detect_face(person, self.face_config.detection_threshold)

             if not face_box:
@@ -250,11 +254,6 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
                 )
                 return

-            try:
-                face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
-            except Exception as e:
-                logger.debug(f"Failed to convert face frame color for {id}: {e}")
-                return
         else:
             # don't run for object without attributes
             if not obj_data.get("current_attributes"):
```