Mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-03-10 10:33:11 +03:00
NVR with realtime local object detection for IP cameras
Topics: ai, camera, google-coral, home-assistant, home-automation, homeautomation, mqtt, nvr, object-detection, realtime, rtsp, tensorflow
Extends the ZMQ split-detector pattern (apple-silicon-detector) to cover ONNX embedding models: ArcFace face recognition and Jina semantic search. On macOS, Docker has no access to CoreML or the Apple Neural Engine, so embedding inference is forced to CPU (~200 ms/face for ArcFace). This change adds a ZmqEmbeddingRunner that sends preprocessed tensors to a native host process over ZMQ TCP and receives embeddings back, enabling CoreML/ANE acceleration outside the container.

Files changed:
- frigate/detectors/detection_runners.py: add ZmqEmbeddingRunner class and hook into get_optimized_runner() via the "zmq://" device prefix
- tools/zmq_embedding_server.py: new host-side server script

Tested on a Mac Mini M4: 24h soak test, ~5000 object reindex.
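The split-runner idea is simple: the container serializes a preprocessed tensor, ships it over a ZMQ request/reply socket to a native host process, and receives the embedding vector back. Below is a minimal sketch of the kind of wire format such a runner might use; the header fields and helper names are illustrative, not Frigate's actual protocol.

```python
import json
import struct

def pack_tensor(shape, dtype, payload: bytes) -> bytes:
    """Frame a tensor as [4-byte header length][JSON header][raw bytes]."""
    header = json.dumps({"shape": list(shape), "dtype": dtype}).encode()
    return struct.pack("!I", len(header)) + header + payload

def unpack_tensor(frame: bytes):
    """Inverse of pack_tensor: recover (shape, dtype, payload)."""
    (hlen,) = struct.unpack("!I", frame[:4])
    header = json.loads(frame[4 : 4 + hlen])
    return tuple(header["shape"]), header["dtype"], frame[4 + hlen :]

# Round-trip a fake 1x4 float32 tensor (16 bytes of raw payload).
frame = pack_tensor((1, 4), "float32", b"\x00" * 16)
shape, dtype, payload = unpack_tensor(frame)
```

On the host side, the native server would unpack such a frame, run the CoreML-accelerated model, and reply with the embedding packed the same way. ZMQ's REQ/REP pattern keeps each call synchronous, which matches a blocking inference API inside the container.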
Frigate NVR™ - Realtime Object Detection for IP Cameras
A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported object detectors.
- Tight integration with Home Assistant via a custom component
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary
- Leverages multiprocessing heavily with an emphasis on realtime over processing every frame
- Uses very low-overhead motion detection to determine where to run object detection
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Records video with retention settings based on detected objects
- 24/7 recording
- Re-streaming via RTSP to reduce the number of connections to your camera
- WebRTC & MSE support for low-latency live view
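The motion-gating idea above can be sketched in a few lines: compare consecutive frames and only hand regions that actually changed to the expensive object detector. This is a toy illustration with plain Python lists, where `detect_objects` is a hypothetical stand-in for the real TensorFlow detector, not Frigate's implementation.

```python
def motion_cells(prev, curr, threshold=25):
    """Return (row, col) indices of cells whose pixel value changed
    by more than `threshold` between two grayscale frames."""
    return [
        (r, c)
        for r, row in enumerate(curr)
        for c, px in enumerate(row)
        if abs(px - prev[r][c]) > threshold
    ]

def detect_objects(frame, cells):
    """Placeholder for the real detector; only called when motion exists."""
    return [f"object@{r},{c}" for r, c in cells]

prev = [[0, 0], [0, 0]]
curr = [[0, 200], [0, 0]]  # one "pixel" changed significantly

cells = motion_cells(prev, curr)
detections = detect_objects(curr, cells) if cells else []
```

Skipping the detector entirely on static frames is what keeps CPU usage low: the cheap frame-diff runs on every frame, while inference runs only on the regions (and moments) that warrant it.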
Documentation
View the documentation at https://docs.frigate.video
Donations
If you would like to make a donation to support development, please use GitHub Sponsors.
License
This project is licensed under the MIT License.
- Code: The source code, configuration files, and documentation in this repository are available under the MIT License. You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- Trademarks: The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are trademarks of Frigate, Inc. and are not covered by the MIT License.
Please see our Trademark Policy for details on acceptable use of our brand assets.
Screenshots
Live dashboard
Streamlined review workflow
Multi-camera scrubbing
Built-in mask and zone editor
Translations
We use Weblate to support language translations. Contributions are always welcome.
Copyright © 2026 Frigate, Inc.
