Mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-05-06 21:45:32 +03:00
NVR with realtime local object detection for IP cameras
Topics: ai, camera, google-coral, home-assistant, home-automation, homeautomation, mqtt, nvr, object-detection, realtime, rtsp, tensorflow
* fix(face_recognition): feed BGR (not RGB) to FaceDetectorYN in manual detection branch
Frigate's `requires_face_detection` branch in `FaceRealTimeProcessor.process_frame`
converts the YUV camera frame to RGB and passes it to `cv2.FaceDetectorYN`.
YuNet is trained on BGR — feeding it RGB silently degrades detection
confidence by ~10× on typical person crops, causing face_recognition to
emit no `sub_label` and produce no `train/` entries. There is no log signal
because the detector simply returns 0 faces; from outside the box it looks
like nobody is walking past any camera.
The same file already does the YUV→BGR conversion correctly in the
else-branch (was line 271, now line 285) — only the manual-detection
branch was missed.
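Why channel order matters here: RGB and BGR hold the same pixel data with the last axis reversed, so a BGR-trained model reading an RGB frame sees red and blue exchanged, and skin tones come out blue-tinted. A minimal NumPy illustration (toy pixel values, not Frigate frames):

```python
import numpy as np

# Toy 1x2 "image": one reddish pixel, one bluish pixel, in RGB order.
rgb = np.array([[[200, 60, 40],     # reddish: R=200, G=60, B=40
                 [40, 60, 200]]],   # bluish:  R=40,  G=60, B=200
               dtype=np.uint8)

# Reversing the channel axis converts RGB <-> BGR.
bgr = rgb[..., ::-1]

# A BGR-trained detector handed this RGB frame effectively sees
# red and blue swapped on every pixel.
print(bgr[0, 0].tolist())  # -> [40, 60, 200]
```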
## Reproduction
Verified in-pod against the running Frigate instance's models, on identical
person crops (snapshot pulled from a real person event):
- BGR (correct): `cv2.FaceDetectorYN` confidence 0.744 ✓
- RGB (current): `cv2.FaceDetectorYN` confidence 0.047 ✗
The `score_threshold=0.5` set on `FaceDetectorYN.create()` filters anything
under 0.5 at the detector layer, so the RGB-degraded crops never reach
the user-configurable `detection_threshold`. Result: silent outage.
## Fix
Three changes in `frigate/data_processing/real_time/face.py`:
1. `cv2.COLOR_YUV2RGB_I420` → `cv2.COLOR_YUV2BGR_I420`
2. Variable rename `rgb` → `bgr` to match
3. Remove the now-redundant `cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)`
block — `face_frame` is already BGR after the upstream conversion change
Net diff: +6 / -7. Pure Python, no new dependencies.
## How a deployment confirms the fix
After this change, walking past a camera produces:
- `data.attributes` with a `face` entry on the person event (currently empty)
- New entries in `/api/faces` `train/` array (currently frozen)
- `sub_label` populated on subsequent person events for trained faces
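The `/api/faces` check above can be scripted. A stdlib-only sketch, assuming a local Frigate at a hypothetical URL and assuming the endpoint returns a JSON object mapping face names to lists of training images (verify the shape against your deployment before relying on it):

```python
import json
from urllib.request import urlopen

# Hypothetical base URL; substitute your deployment's address.
FRIGATE_URL = "http://frigate.local:5000"

def train_entry_count(faces_payload: dict) -> int:
    """Count training images across all faces in an /api/faces-style payload."""
    return sum(len(images) for images in faces_payload.values())

# Example payload shaped per the assumption above:
sample = {"vinnie": ["1.webp", "2.webp"], "unknown": []}
print(train_entry_count(sample))  # -> 2

# Against a live instance (uncomment on a real deployment):
# faces = json.load(urlopen(f"{FRIGATE_URL}/api/faces"))
# print(train_entry_count(faces))  # should grow after someone walks past
```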
Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
* Cleanup comment
---------
Signed-off-by: Vinnie Esposito <vespo21@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
| Name |
|---|
| .cspell |
| .cursor/rules |
| .devcontainer |
| .github |
| .vscode |
| config |
| docker |
| docs |
| frigate |
| migrations |
| notebooks |
| testing-scripts |
| web |
| .dockerignore |
| .gitignore |
| .pylintrc |
| audio-labelmap.txt |
| CODEOWNERS |
| CONTRIBUTING.md |
| cspell.json |
| docker-compose.yml |
| generate_config_translations.py |
| labelmap.txt |
| LICENSE |
| Makefile |
| netlify.toml |
| package-lock.json |
| pyproject.toml |
| README_CN.md |
| README.md |
| TRADEMARK.md |
Frigate NVR™ - Realtime Object Detection for IP Cameras
A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported object detectors.
- Tight integration with Home Assistant via a custom component
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
- Uses a very low overhead motion detection to determine where to run object detection
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Records video with retention settings based on detected objects
- 24/7 recording
- Re-streaming via RTSP to reduce the number of connections to your camera
- WebRTC & MSE support for low-latency live view
Documentation
View the documentation at https://docs.frigate.video
Donations
If you would like to make a donation to support development, please use GitHub Sponsors.
License
This project is licensed under the MIT License.
- Code: The source code, configuration files, and documentation in this repository are available under the MIT License. You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- Trademarks: The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are trademarks of Frigate, Inc. and are not covered by the MIT License.
Please see our Trademark Policy for details on acceptable use of our brand assets.
Screenshots
Live dashboard
Streamlined review workflow
Multi-camera scrubbing
Built-in mask and zone editor
Translations
We use Weblate to support language translations. Contributions are always welcome.
Copyright © 2026 Frigate, Inc.
