Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-03-18 06:08:22 +03:00)
NVR with realtime local object detection for IP cameras
Topics: ai, camera, google-coral, home-assistant, home-automation, homeautomation, mqtt, nvr, object-detection, realtime, rtsp, tensorflow
In BirdsEyeFrameManager.__init__(), the numpy slice that copies the custom logo (transparent_layer, built from the custom.png alpha channel) onto blank_frame has shape[0] and shape[1] swapped:

blank_frame[y:y+shape[1], x:x+shape[0]] = transparent_layer

shape[0] is rows (height) and shape[1] is columns (width), so the row range needs shape[0] and the column range needs shape[1]:

blank_frame[y:y+shape[0], x:x+shape[1]] = transparent_layer

The bug is masked for square images, where shape[0] == shape[1]. For non-square images (e.g. 1920x1080) it raises:

ValueError: could not broadcast input array from shape (1080,1920) into shape (1620,1080)

This silently kills the birdseye output process: no frames are written to the FIFO pipe, the go2rtc exec ffmpeg handler times out, and the birdseye restream shows a black screen with no errors in the UI.
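The swap can be reproduced in isolation. The sketch below uses stand-in array sizes (a 1920x1080 layer on a larger canvas) rather than Frigate's actual birdseye dimensions, and collapses the alpha-blended logo to a plain 3-channel array for brevity:

```python
import numpy as np

# Stand-in canvas and non-square logo layer (rows x cols x channels).
blank_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
transparent_layer = np.ones((1080, 1920, 3), dtype=np.uint8)
y, x = 100, 200

# Buggy slice: the row range uses shape[1] (width) and the column range
# uses shape[0] (height), so a non-square layer fails to broadcast.
try:
    blank_frame[
        y : y + transparent_layer.shape[1], x : x + transparent_layer.shape[0]
    ] = transparent_layer
except ValueError as err:
    print(err)  # could not broadcast input array ...

# Correct slice: shape[0] (height) bounds the rows, shape[1] (width)
# bounds the columns, matching the layer's own (rows, cols) layout.
blank_frame[
    y : y + transparent_layer.shape[0], x : x + transparent_layer.shape[1]
] = transparent_layer
```

For a square logo both slices produce the same destination shape, which is why the bug only surfaces with non-square images.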
Frigate NVR™ - Realtime Object Detection for IP Cameras
A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported object detectors.
- Tight integration with Home Assistant via a custom component
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
- Uses very low-overhead motion detection to determine where to run object detection
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Records video with retention settings based on detected objects
- 24/7 recording
- Re-streaming via RTSP to reduce the number of connections to your camera
- WebRTC & MSE support for low-latency live view
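Frigate publishes object lifecycle events as JSON on the frigate/events MQTT topic. The sketch below parses such a payload with only the standard library; the keys used (type, before, after with label and camera) follow Frigate's documented event schema, but real payloads carry many more fields, and the MQTT wiring itself (e.g. with paho-mqtt) is omitted:

```python
import json


def summarize_event(payload: bytes) -> str:
    """Return a one-line summary of a frigate/events message.

    `type` is one of new/update/end; `after` holds the current
    state of the tracked object (None-safe for partial payloads).
    """
    event = json.loads(payload)
    after = event.get("after") or {}
    return f'{event["type"]}: {after.get("label")} on {after.get("camera")}'


# Trimmed example payload in the shape Frigate publishes.
sample = json.dumps(
    {
        "type": "new",
        "before": None,
        "after": {"label": "person", "camera": "front_door"},
    }
).encode()

print(summarize_event(sample))  # new: person on front_door
```

In a real integration this function would run inside an MQTT client's on-message callback subscribed to frigate/events.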
Documentation
View the documentation at https://docs.frigate.video
Donations
If you would like to make a donation to support development, please use GitHub Sponsors.
License
This project is licensed under the MIT License.
- Code: The source code, configuration files, and documentation in this repository are available under the MIT License. You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- Trademarks: The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are trademarks of Frigate, Inc. and are not covered by the MIT License.
Please see our Trademark Policy for details on acceptable use of our brand assets.
Screenshots
Live dashboard
Streamlined review workflow
Multi-camera scrubbing
Built-in mask and zone editor
Translations
We use Weblate to support language translations. Contributions are always welcome.
Copyright © 2026 Frigate, Inc.
