* use default stable api version for gemini genai client
* update gemini docs
* remove outdated genai.md and update correct file
* Classification fixes
* Mutate when a date is selected and marked as reviewed
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* tracking details tweaks
- fix 4:3 layout
- get and use aspect of record stream if different from detect stream
* aspect ratio docs tip
* spacing
* fix
* i18n fix
* additional logs on ffmpeg exit
* improve no camera view
instead of showing an "add camera" message, show a specific message for empty camera groups when Frigate already has cameras added
* add note about separate onvif accounts in some camera firmware
* clarify review summary report docs
* review settings tweaks
- remove horizontal divider
- update description language for switches
- keep save button disabled until review classification settings change
* use correct Toaster component from shadcn
* clarify support for intel b-series (battlemage) gpus
* add clarifying comment to dummy camera docs
* misc triggers tweaks
i18n fixes
fix toaster color
fix clicking on labels selecting incorrect checkbox
* update copilot instructions
* lpr docs tweaks
* add retry params to gemini
* i18n fix
* ensure users only see recognized plates from accessible cameras in explore
* ensure all zone filters are converted to pixels
zone-level filters were never converted from percentage area to pixels. RuntimeFilterConfig was only applied to filters at the camera level, not zone.filters.
Fixes https://github.com/blakeblackshear/frigate/discussions/21694
* add test for percentage based zone filters
* use export id for key instead of name
* update gemini docs
* fix(recording): handle unexpected filenames in cache maintainer to prevent crash
* test(recording): add test for maintainer cache file parsing
* Prevent log spam from unexpected cache files
Addresses PR review feedback: Add deduplication to prevent warning
messages from being logged repeatedly for the same unexpected file
in the cache directory. Each unexpected filename is only logged once
per RecordingMaintainer instance lifecycle.
Also adds test to verify warning is only emitted once per filename.
* Fix code formatting for test_maintainer.py
* fixes + ruff
* Fix jetson stats reading
* Return result
* Avoid unknown class for cover image
* fix double encoding of passwords in camera wizard
* formatting
* empty homekit config fixes
* add locks to jina v1 embeddings
protect tokenizer and feature extractor in jina_v1_embedding with per-instance thread lock to avoid the "Already borrowed" RuntimeError during concurrent tokenization
* Capitalize correctly
* replace deprecated google-generativeai with google-genai
update gemini genai provider with new calls from SDK
provider_options specifies any http options
suppress unneeded info logging
* fix attribute area on detail stream hover
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Currently translated at 100.0% (53 of 53 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (49 of 49 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (122 of 122 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (131 of 131 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (6 of 6 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (654 of 654 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (92 of 92 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (136 of 136 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (55 of 55 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (43 of 43 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (13 of 13 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (501 of 501 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (118 of 118 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (131 of 131 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (55 of 55 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (41 of 41 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (53 of 53 strings)
Translated using Weblate (Persian)
Currently translated at 99.6% (652 of 654 strings)
Translated using Weblate (Persian)
Currently translated at 98.9% (91 of 92 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (122 of 122 strings)
Translated using Weblate (Persian)
Currently translated at 84.6% (11 of 13 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (74 of 74 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (118 of 118 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (135 of 135 strings)
Translated using Weblate (Persian)
Currently translated at 95.6% (44 of 46 strings)
Translated using Weblate (Persian)
Currently translated at 66.6% (4 of 6 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Persian)
Currently translated at 92.0% (23 of 25 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (501 of 501 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (2 of 2 strings)
Translated using Weblate (Persian)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Persian)
Currently translated at 97.9% (48 of 49 strings)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: حمید ملک محمدی <hmmftg@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fa/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
Currently translated at 100.0% (43 of 43 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (74 of 74 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (55 of 55 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (118 of 118 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (49 of 49 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (501 of 501 strings)
Translated using Weblate (Croatian)
Currently translated at 46.7% (43 of 92 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (118 of 118 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (53 of 53 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (43 of 43 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (215 of 215 strings)
Translated using Weblate (Croatian)
Currently translated at 10.9% (55 of 501 strings)
Translated using Weblate (Croatian)
Currently translated at 27.8% (34 of 122 strings)
Translated using Weblate (Croatian)
Currently translated at 15.8% (34 of 215 strings)
Translated using Weblate (Croatian)
Currently translated at 24.5% (29 of 118 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (13 of 13 strings)
Translated using Weblate (Croatian)
Currently translated at 67.4% (29 of 43 strings)
Translated using Weblate (Croatian)
Currently translated at 39.1% (29 of 74 strings)
Translated using Weblate (Croatian)
Currently translated at 58.4% (31 of 53 strings)
Translated using Weblate (Croatian)
Currently translated at 22.7% (31 of 136 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Croatian)
Currently translated at 63.0% (29 of 46 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Croatian)
Currently translated at 31.5% (29 of 92 strings)
Translated using Weblate (Croatian)
Currently translated at 100.0% (25 of 25 strings)
Translated using Weblate (Croatian)
Currently translated at 59.1% (29 of 49 strings)
Translated using Weblate (Croatian)
Currently translated at 7.9% (40 of 501 strings)
Translated using Weblate (Croatian)
Currently translated at 52.7% (29 of 55 strings)
Translated using Weblate (Croatian)
Currently translated at 5.0% (33 of 654 strings)
Translated using Weblate (Croatian)
Currently translated at 26.4% (36 of 136 strings)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: stipe-jurkovic <sjurko00@fesb.hr>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Portuguese)
Currently translated at 27.8% (34 of 122 strings)
Translated using Weblate (Portuguese)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Portuguese)
Currently translated at 4.9% (6 of 122 strings)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Nuno Ponte <nuno.ponte@gmail.com>
Co-authored-by: fabiovalverde <fabio@rvalverde.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pt/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
* Strip model name before training
* Handle options file for go2rtc option
* Make reviewed optional and add null to API call
* Send reviewed for dashboard
* Allow setting context size for openai compatible endpoints
* push empty go2rtc config to avoid homekit error in log
* Add option to set runtime options for LLM providers
* Docs
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* icon improvements
add type to getIconForLabel
provide default icon for audio events
* Add preferred language to review docs
* prevent react Suspense crash during auth redirect
add redirect-check guards to stop rendering lazy routes while navigation is pending (fixes some users seeing React error #426 when auth expires)
* Uppercase model name
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* mse player improvements
- fix WebSocket race condition by registering message handlers before sending and avoid closing CONNECTING sockets to eliminate "Socket is not connected" errors.
- attempt to resolve Safari MSE timeout and handler issues by wrapping temporary handlers in try/catch and stabilizing the permanent mse handler so SourceBuffer setup completes reliably.
- add intentional disconnect tracking to prevent unwanted reconnects during navigation/StrictMode cycles
* Update Ollama
* additional MSE tweaks
* Turn activity context prompt into a yaml example
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Correctly set query padding
* Adjust AMD headers and add community badge
* Simplify getting started guide for camera wizard
* add optimizing performance guide
* tweaks
* fix character issue
* fix more characters
* fix links
* fix more links
* Refactor new docs
* Add import
* Fix link
* Don't list hardware
* Reduce redundancy in titles
* Add note about Intel NPU and addon
* Fix ability to specify if card is using heading
* improve display of area percentage
* fix text color on genai summary chip
* fix indentation in genai docs
* Adjust default config model to align with recommended
* add correct genai key
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* disable modal on dropdown menu in explore
* add another example case for when classification overrides a sub label
* update ollama docs link
* Improve handling of automatic playback for recordings
* Improve ollama documentation
* Don't fall out when all recording segments exist
* clarify coral docs
* improve initial scroll to active item in detail stream
* i18n fixes
* remove console warning
* detail stream scrolling fixes for HA/iOS
* Improve usability of GenAI summary dialog and make clicking on the description directly open it
* Review card too
* Use empty card with dynamic text for review based on the user's config
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Use thread lock for JinaV2 call as it sets multiple internal fields while being called
* fix audio label translation in explore filter
* Show event in all cases, even without non-none match
* improve i18n key fallback when translation files aren't loaded
just display a valid time now instead of "invalid time"
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add shortSummary field to review summary to be used for notifications
* pull in current config version into default config
* fix crash when dynamically adding cameras
depending on where we are in the update loop, camera configs might not be updated yet while detections are already being received
* add no tracked objects and icon to explore summary view
* reset add camera wizard when closing and saving
* don't flash no exports icon while loading
* Improve handling of homekit config
* Increase prompt tokens reservation
* Adjust
* Catch event not found object detection
* Use thread lock for JinaV2 in onnxruntime
* remove incorrect embeddings process from memray docs
* only show transcribe button if audio event has video
* apply aspect ratio and margin constraints to path overlay in detail stream on mobile
improves a specific case where the overlay was not aligned with 4:3 cameras on mobile phones
* show metadata title as tooltip on icon hover in detail stream
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Currently translated at 100.0% (501 of 501 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (74 of 74 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (92 of 92 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (49 of 49 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (122 of 122 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (53 of 53 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (135 of 135 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (654 of 654 strings)
Translated using Weblate (Hebrew)
Currently translated at 94.3% (617 of 654 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (41 of 41 strings)
Translated using Weblate (Hebrew)
Currently translated at 94.3% (617 of 654 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (118 of 118 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (55 of 55 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (131 of 131 strings)
Translated using Weblate (Hebrew)
Currently translated at 96.2% (51 of 53 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (122 of 122 strings)
Translated using Weblate (Hebrew)
Currently translated at 97.8% (90 of 92 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (501 of 501 strings)
Translated using Weblate (Hebrew)
Currently translated at 99.2% (134 of 135 strings)
Translated using Weblate (Hebrew)
Currently translated at 90.2% (83 of 92 strings)
Translated using Weblate (Hebrew)
Currently translated at 91.1% (195 of 214 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Hebrew)
Currently translated at 95.1% (39 of 41 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (13 of 13 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Hebrew)
Currently translated at 45.0% (55 of 122 strings)
Translated using Weblate (Hebrew)
Currently translated at 48.6% (318 of 654 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (74 of 74 strings)
Translated using Weblate (Hebrew)
Currently translated at 98.1% (54 of 55 strings)
Translated using Weblate (Hebrew)
Currently translated at 82.9% (112 of 135 strings)
Translated using Weblate (Hebrew)
Currently translated at 90.0% (118 of 131 strings)
Translated using Weblate (Hebrew)
Currently translated at 100.0% (49 of 49 strings)
Translated using Weblate (Hebrew)
Currently translated at 88.6% (47 of 53 strings)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Ronen Atsil <atsil55@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/he/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
Currently translated at 35.1% (26 of 74 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Latvian)
Currently translated at 12.9% (17 of 131 strings)
Translated using Weblate (Latvian)
Currently translated at 9.4% (7 of 74 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (55 of 55 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (25 of 25 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (13 of 13 strings)
Translated using Weblate (Latvian)
Currently translated at 14.7% (18 of 122 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (6 of 6 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 2.7% (18 of 654 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (53 of 53 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (49 of 49 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (41 of 41 strings)
Translated using Weblate (Latvian)
Currently translated at 7.6% (7 of 92 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Latvian)
Currently translated at 6.5% (33 of 501 strings)
Translated using Weblate (Latvian)
Currently translated at 14.0% (19 of 135 strings)
Translated using Weblate (Latvian)
Currently translated at 14.4% (17 of 118 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (2 of 2 strings)
Translated using Weblate (Latvian)
Currently translated at 5.7% (7 of 122 strings)
Translated using Weblate (Latvian)
Currently translated at 5.1% (7 of 135 strings)
Translated using Weblate (Latvian)
Currently translated at 28.0% (7 of 25 strings)
Translated using Weblate (Latvian)
Currently translated at 10.9% (6 of 55 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (46 of 46 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (6 of 6 strings)
Translated using Weblate (Latvian)
Currently translated at 6.5% (6 of 92 strings)
Translated using Weblate (Latvian)
Currently translated at 0.9% (6 of 654 strings)
Translated using Weblate (Latvian)
Currently translated at 8.1% (6 of 74 strings)
Translated using Weblate (Latvian)
Currently translated at 2.1% (11 of 501 strings)
Translated using Weblate (Latvian)
Currently translated at 12.2% (6 of 49 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (13 of 13 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (2 of 2 strings)
Translated using Weblate (Latvian)
Currently translated at 17.0% (7 of 41 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (10 of 10 strings)
Translated using Weblate (Latvian)
Currently translated at 11.3% (6 of 53 strings)
Translated using Weblate (Latvian)
Currently translated at 4.5% (6 of 131 strings)
Translated using Weblate (Latvian)
Currently translated at 5.9% (7 of 118 strings)
Translated using Weblate (Latvian)
Currently translated at 100.0% (214 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 98.1% (210 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 96.7% (207 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 93.4% (200 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 91.1% (195 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 90.6% (194 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 89.7% (192 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 87.3% (187 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 85.5% (183 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 84.1% (180 of 214 strings)
Translated using Weblate (Latvian)
Currently translated at 73.8% (158 of 214 strings)
Update translation files
Updated by "Squash Git commits" add-on in Weblate.
Co-authored-by: Gatis <gatisagnese@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/lv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
* use fallback timeout for opening media source
covers the case where there is no active connection to the go2rtc stream and the camera takes a long time to start
* Add review thumbnail URL to integration docs
* fix weekday starting point on explore when set to monday in UI settings
* only show allowed cameras and groups in camera filter button
* Reset the wizard state after closing with model
* remove footnote about 0.17
* 0.17
* add triggers to note
* add slovak
* Ensure genai client exists
* Correctly catch JSONDecodeError
* clarify docs for none class
* version bump on updating page
* fix ExportRecordingsBody to allow optional name field
fixes https://github.com/blakeblackshear/frigate/discussions/21413 because of https://github.com/blakeblackshear/frigate-hass-integration/pull/1021
* Catch remote protocol error from ollama
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Send preferred language for report service
* make object lifecycle scrollable in tracking details
* fix info popovers in live camera drawer
* ensure metrics are initialized if genai is enabled
* docs
* ollama cloud model docs
* Ensure object descriptions get cleaned up
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* remove footer messages and add update topic to motion tuner view
restart after changing values is no longer required
* add cache key and activity indicator for loading classification wizard images
* Always mark model as untrained when a classname is changed
* clarify object classification docs
* add debug logs for individual lpr replace_rules
* update memray docs
* memray tweaks
* Don't fail for audio transcription when semantic search is not enabled
* Fix incorrect mismatch for object vs sub label
* Check if the video is currently playing when deciding to seek due to misalignment
* Refactor timeline event handling to allow multiple timeline entries per update
* Check if zones have actually changed (not just count) for event state update
* show event icon on mobile
* move div inside conditional
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Fix genai callbacks in MQTT
* Cleanup cursor pointer for classification cards
* Cleanup
* Handle unknown SOCs for RKNN converter by only using known SOCs
* don't allow "none" as a classification class name
* change internal port user to admin and default unspecified username to viewer
* keep 5000 as anonymous user
* suppress tensorflow logging during classification training
* Always apply base log level suppressions for noisy third-party libraries even if no specific logConfig is provided
* remove decorator and specifically suppress TFLite delegate creation messages
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* attributes endpoint
* event endpoints
* add attributes to more filters
* add to suggestions and query in explore
* support attributes in search input
* i18n
* add object type filter to endpoint
* add attributes to tracked object details pane
* add generic multi select dialog
* save object attributes endpoint
* add group by param to fetch attributes endpoint
* add attribute editing to tracked object details
* docs
* fix docs
* update openapi spec to match python
* fix coral docs
* add note about sub label object classification with person
* Catch OSError for deleting classification image
* add docs for dummy camera debugging
* add to sidebar
* fix formatting
* fix
* avx instructions are required for classification
* break text on classification card to prevent button overflow
* Ensure there is no NameError when processing
* Don't use region for state classification models
* fix spelling
* Handle attribute based models
* Catch case of non-trained model so it doesn't add an infinite number of classification images
* Actually train object classification models automatically
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Exclude D-FINE from using CUDA Graphs
* fix objects count in detail stream
* Add debugging for classification models
* validate idb stored stream name and reset if invalid
fixes https://github.com/blakeblackshear/frigate/discussions/21311
* ensure jina loading takes place in the main thread to prevent lazily importing tensorflow in another thread later
reverts atexit changes in https://github.com/blakeblackshear/frigate/pull/21301 and fixes https://github.com/blakeblackshear/frigate/discussions/21306
* revert old atexit change in bird too
* revert types
* ensure we bail in the live mode hook for empty camera groups
prevent infinite rendering on camera groups with no cameras
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add node/npm version config to package.json
* Bump npm version/fix node version format
* Version range
* Use package.json for github actions node version
* Unification
* Move it all to the bottom
* Remove this
* Bump versions in docs
* Add volta config here too
* Revert changes
* Revert this
* Wait for config to load before evaluating route access
Fix race condition where custom role users are temporarily denied access after login while config is still loading. Defer route rendering in DefaultAppView until config is available so the complete role list is known before ProtectedRoute evaluates permissions
* Use batching for state classification generation
* Ignore incorrect scoring images if they make it through the deletion
* Delete unclassified images
* mitigate tensorflow atexit crash by pre-importing tflite/tensorflow on main thread
Pre-import Interpreter in embeddings maintainer and add defensive lazy imports in classification processors to avoid worker-thread tensorflow imports causing "can't register atexit after shutdown"
* don't require old password for users with admin role when changing passwords
* don't render actions menu if no options are available
* Remove hwaccel arg as it is not used for encoding
* change password button text
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Fix Safari popover issue in classification wizard
* use name for key instead of title
prevents duplicate key warnings when users mix vaapi and qsv
* update auth api endpoint descriptions and docs
* tweak headings
* fix note
* clarify classification docs
* Fix cuda birdseye
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* pin onnx in rfdetr model generation command
* Apply suggestion from @NickM-27
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add Axis Q-6155E camera configuration details
Added Axis Q-6155E camera details with ONVIF service port information.
* Update Axis Q-6155E ONVIF autotracking support details
Added the reason for autotracking not working
* require admin role to delete users
* explicitly prevent deletion of admin user
* Recordings playback fixes
* Remove nvidia pyindex
* Update version
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Revise GPU and AI accelerator recommendations
Updated hardware recommendations for AI acceleration.
* Revise PCIe Coral driver installation instructions
Updated instructions for PCIe Coral driver installation.
* Revise Coral driver installation instructions
Updated driver installation instructions for PCIe and M.2 versions of Google Coral.
* Change PCIe Coral driver link in getting_started.md
Updated the link for PCIe Coral driver instructions.
* Change PCIe Coral driver link in installation guide
Updated the link for PCIe Coral driver instructions.
* Update Coral TPU recommendation in hardware documentation
Added a warning about the Coral TPU's recommendation status for new Frigate installations and suggested alternatives.
- For Frigate NVR, never write strings in the frontend directly. Since the project uses `react-i18next`, use `t()` and write the English string in the relevant translations file in `web/public/locales/en`.
- Always conform new and refactored code to the existing coding style in the project.
- Always have a way to test your work and confirm your changes. When running backend tests, use `python3 -u -m unittest`.
@@ -40,7 +40,7 @@ If you would like to make a donation to support development, please use [Github
This project is licensed under the **MIT License**.
- **Code:** The source code, configuration files, and documentation in this repository are available under the [MIT License](LICENSE). You are free to use, modify, and distribute the code as long as you include the original copyright notice.
- **Trademarks:** The "Frigate" name, the "Frigate NVR" brand, and the Frigate logo are **trademarks of Frigate, Inc.** and are **not** covered by the MIT License.
Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of our brand assets.
@@ -67,7 +67,7 @@ Please see our [Trademark Policy](TRADEMARK.md) for details on acceptable use of
<img width="800" alt="Built-in mask and zone editor" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>
## Translations
@@ -80,4 +80,4 @@ We use [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) to support la
@@ -6,7 +6,7 @@ This document outlines the policy regarding the use of the trademarks associated
## 1. Our Trademarks
The following terms and visual assets are trademarks (the "Marks") of **Frigate, Inc.**:
- **Frigate™**
- **Frigate NVR™**
@@ -14,7 +14,7 @@ The following terms and visual assets are trademarks (the "Marks") of **Frigate
- **The Frigate Logo**
**Note on Common Law Rights:**
Frigate, Inc. asserts all common law rights in these Marks. The absence of a federal registration symbol (®) does not constitute a waiver of our intellectual property rights.
## 2. Interaction with the MIT License
@@ -25,7 +25,7 @@ The software in this repository is licensed under the [MIT License](LICENSE).
- The **Code** is free to use, modify, and distribute under the MIT terms.
- The **Brand (Trademarks)** is **NOT** licensed under MIT.
You may not use the Marks in any way that is not explicitly permitted by this policy or by written agreement with Frigate, Inc.
## 3. Acceptable Use
@@ -40,7 +40,7 @@ You may use the Marks without prior written permission in the following specific
You may **NOT** use the Marks in the following ways:
- **Commercial Products:** You may not use "Frigate" in the name of a commercial product, service, or app (e.g., selling an app named _"Frigate Viewer"_ is prohibited).
- **Implying Affiliation:** You may not use the Marks in a way that suggests your project is official, sponsored by, or endorsed by Frigate, Inc.
- **Confusing Forks:** If you fork this repository to create a derivative work, you **must** remove the Frigate logo and rename your project to avoid user confusion. You cannot distribute a modified version of the software under the name "Frigate".
- **Domain Names:** You may not register domain names containing "Frigate" that are likely to confuse users (e.g., `frigate-official-support.com`).
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
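For example, a minimal config sketch (the value shown is only an illustrative starting point, not a recommendation):

```yaml
audio:
  enabled: true
  # Minimum RMS volume required before audio detection runs.
  # Tune this using the levels shown in the Debug view's Audio tab.
  min_volume: 500
```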
You may need to include `-k` in the argument list in these steps (eg: `curl -k -i -X POST ...`) if your Frigate instance is using a self-signed certificate.
:::
The response will contain a cookie with the JWT token.
#### 2. Using the Bearer Token
Once you have the token, include it in the Authorization header for subsequent requests:
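For example, a minimal sketch (the host, port, and `/api/events` path are placeholders; adjust them to your deployment):

```bash
# Replace <token> with the JWT returned by the login request above.
curl -H "Authorization: Bearer <token>" https://frigate_ip:8971/api/events
```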
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
@@ -227,6 +227,12 @@ cameras:
### Unifi Protect Cameras
:::note
Unifi G5 and newer cameras need a Unifi Protect server to enable the RTSPS stream; it is not possible to enable it in standalone mode.
:::
Unifi Protect cameras require the rtspx stream to be used with go2rtc.
To use a Unifi Protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.
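For example, a minimal go2rtc sketch (the IP, port, and stream token below are placeholders):

```yaml
go2rtc:
  streams:
    front_doorbell:
      # Original Unifi Protect link: rtsps://192.168.1.1:7441/abc123?enableSrtp
      # Change rtsps to rtspx and drop the ?enableSrtp suffix:
      - "rtspx://192.168.1.1:7441/abc123"
```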
@@ -252,6 +258,10 @@ ffmpeg:
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings, you may have problems when trying to watch recorded footage. For example, Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`
### Wyze Wireless Cameras
Some community members have found better performance on Wyze cameras by using an alternative firmware known as [Thingino](https://thingino.com/).
## USB Cameras (aka Webcams)
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
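A minimal sketch of that approach (the device index and resolution are assumptions for a typical webcam):

```yaml
go2rtc:
  streams:
    usb_camera:
      # Capture the first video device and transcode it to H264 for restreaming.
      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
```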
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.
:::
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
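For example, a hedged sketch of a camera's ONVIF section (host and port are placeholders; other camera settings are omitted):

```yaml
cameras:
  ptz_camera:
    onvif:
      host: 192.168.1.100
      port: 8000
      # Camera exposes ONVIF without authentication, so pass empty strings.
      user: ""
      password: ""
```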
@@ -94,18 +100,19 @@ This list of working and non-working PTZ cameras is based on user feedback. If y
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provides specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix an issue with ONVIF "TranslationSpaceFov" |
| Axis Q-6155E | ✅ | ❌ | ONVIF service port: 80; Camera does not support MoveStatus. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | ❌ | |
| Foscam | ✅ | ❌ | Generally supports PTZ, but not relative move. There are no official ONVIF certifications or tests available in the ONVIF Conformant Products Database. |
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
## Minimum System Requirements
@@ -11,6 +11,8 @@ Object classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -31,9 +33,15 @@ For object classification:
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.
:::note
A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. This could also occur with `car` objects that are assigned a sub label for a delivery carrier. Consider using the `attribute` type instead.
:::
## Assignment Requirements
@@ -73,13 +81,17 @@ classification:
classification_type: sub_label # or: attribute
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
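A minimal sketch of where that key sits (the `custom` section and model name are assumptions for illustration; keep the rest of your model config unchanged):

```yaml
classification:
  custom:
    our_cats: # model name (assumed for illustration)
      # Number of recent classification attempts to keep in the Recent Classifications tab.
      save_attempts: 200
```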
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:
### Step 1: Name and Define
Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
For example, to classify your two cats, create a model named "Our Cats" with two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
### Step 2: Assign Training Examples
@@ -87,6 +99,8 @@ The system will automatically generate example images from detected objects matc
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -94,3 +108,23 @@ When choosing which objects to classify, start with a small number of visually d
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate’s boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
## Debugging Classification Models
To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
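For example, a minimal sketch of the `logger` section:

```yaml
logger:
  default: info
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```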
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
## Minimum System Requirements
@@ -11,6 +11,8 @@ State classification models are lightweight and run very fast on CPU. Inference
Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
A CPU with AVX instructions is required for training and inference.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -46,6 +48,8 @@ classification:
crop: [0, 180, 220, 400]
```
An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -60,11 +64,9 @@ Choose one or more cameras and draw a rectangle over the area of interest for ea
### Step 3: Assign Training Examples
The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state. It's not strictly required to select all images you see. If a state is missing from the samples, you can train it from the Recent tab later.
Once some images are assigned, training will begin automatically.
### Improving the Model
@@ -72,3 +74,34 @@ Once all images are assigned, training will begin automatically.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
## Debugging Classification Models
To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.
Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.
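For example:

```yaml
logger:
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```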
These logs include details such as:

- State verification progress (consecutive detections needed)
- When state changes are published
### Recent Classifications
For state classification, images are only added to recent classifications under specific circumstances:
- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)
Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before the end of its lifecycle, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
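For example, an API key can be supplied to the container as an environment variable and then referenced in the config shown later in this page (a sketch; the compose layout and value are placeholders):

```yaml
# docker-compose.yml (snippet)
services:
  frigate:
    environment:
      - FRIGATE_GEMINI_API_KEY=your-api-key-here
```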
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
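As a rough example, assuming a standard MQTT broker and Frigate's usual `ON`/`OFF` payloads for set topics, descriptions could be disabled for a camera named `front_door` like this:

```bash
mosquitto_pub -h mqtt.example.local -t "frigate/front_door/object_descriptions/set" -m "OFF"
```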
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first i.e. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
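As a rough sketch of what this can look like (the resource URL, deployment name, and `api-version` below are placeholders; adjust them to match your Azure resource):

```yaml
genai:
  provider: azure_openai
  base_url: https://example-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview
  model: gpt-4o
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```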
Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate’s default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate’s default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what’s happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they’re moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation’s context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
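For reference, an illustrative payload (the values are made up, and additional fields may be present depending on your version):

```json
{
  "event_id": "1718200000.123456-abc123",
  "description": "A person walks up the driveway toward the front door carrying a package."
}
```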
If you are looking to get notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:8b-instruct

objects:
  genai:
    prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
    object_prompts:
      person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
      car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider’s documentation or model library for guidance on the correct model variant to use.
### Supported Models
| Model | Notes |
| ----- | ----- |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
:::note

You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
### Configuration

```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `base_url` key in `provider_options` to your provider's API URL. For example:

```yaml
genai:
  provider: gemini
  ...
  provider_options:
    base_url: https://...
```

Other HTTP options are available; see the [python-genai documentation](https://github.com/googleapis/python-genai).
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
:::note

To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.

:::
:::tip
For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`:
```yaml
genai:
  provider: openai
  base_url: http://your-llama-server
  model: your-model-name
  provider_options:
    context_size: 8192 # Specify the configured context size
```
This ensures Frigate uses the correct context window size when generating prompts.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
Review summaries provide structured JSON responses that are saved for each review item:
```
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `shortSummary` (string): A brief 2-sentence summary of the scene, suitable for notifications. This is a condensed version of the scene description.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
```
This will show in multiple places in the UI to give additional context about each activity, and allow viewing more details when extra attention is required. Frigate's built in notifications will automatically show the title and `shortSummary` when the data is available, while the full `scene` description is available in the UI for detailed review.
### Defining Typical Activity
Each installation, and even each camera, can have different parameters for what is considered typical activity. This is controlled by the `activity_context_prompt`:
```yaml
review:
  genai:
    activity_context_prompt: |
      ### Normal Activity Indicators (Level 0)

      - Known/verified people in any zone at any time
      - People with pets in residential areas
      - Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
      - Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
      - Activity confined to public areas only (sidewalks, streets) without entering property at any time

      ### Suspicious Activity Indicators (Level 1)

      - **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
      - **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
      - Taking items that don't belong to them (packages, objects from porches/driveways)
      - Climbing or jumping fences/barriers to access property
      - Attempting to conceal actions or items from view
      - Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence

      - Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0

      3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)

      The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
```
</details>
### Preferred Language
By default, review summaries are generated in English. You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option:
```yaml
review:
  genai:
    enabled: true
    preferred_language: Spanish
```
## Review Reports
Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
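For example, a report covering a single day could be requested like this (the host and Unix timestamps are placeholders):

```bash
curl -X POST "http://frigate_ip:5000/api/review/summarize/start/1735689600/end/1735776000"
```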
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) can be hardware accelerated on supported platforms:
- **AMD**
- ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active.
import CommunityBadge from '@site/src/components/CommunityBadge';
# Video Decoding
It is highly recommended to use an integrated or discrete GPU for hardware accelerated video decoding in Frigate.

Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg. To verify that hardware acceleration is working:

- Check the logs: A message will either say that hardware acceleration was automatically detected, or there will be a warning that no hardware acceleration was automatically detected
- If hardware acceleration is specified in the config, verification can be done by ensuring the logs are free from errors. There is no CPU fallback for hardware acceleration.

:::info

**AMD**

- [AMD](#amd-based-cpus): Frigate can utilize modern AMD integrated GPUs and AMD discrete GPUs to accelerate video decoding.

**Intel**

- [Intel](#intel-based-cpus): Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.

**Nvidia GPU**

- [Nvidia GPU](#nvidia-gpus): Frigate can utilize most modern Nvidia GPUs to accelerate video decoding.

**Raspberry Pi 3/4**

- [Raspberry Pi](#raspberry-pi-34): Frigate can utilize the media engine in the Raspberry Pi 3 and 4 to slightly accelerate video decoding.

**Nvidia Jetson** <CommunityBadge/>

- [Jetson](#nvidia-jetson): Frigate can utilize the media engine in Jetson hardware to accelerate video decoding.

**Rockchip** <CommunityBadge/>

- [RKNN](#rockchip-platform): Frigate can utilize the media engine in RockChip SOCs to accelerate video decoding.

**Other Hardware**

Depending on your system, these presets may not be compatible, and you may need to use manual hwaccel args to take advantage of your hardware. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro

:::
## Intel-based CPUs
Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video decoding.
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| ----------- | ------------ | ------------------ | ----- |
| gen1 - gen5 | i965 | preset-vaapi | qsv is not supported, may not support H.265 |
| gen6 - gen7 | iHD | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
If you are passing in a device path, make sure you've passed the device through to the container.
## AMD-based CPUs

Frigate can utilize modern AMD integrated GPUs and AMD GPUs to accelerate video decoding using VAAPI.

### Configuring Radeon Driver

:::note

You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).

:::

### Via VAAPI

VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.

```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```
:::note
`nvidia-smi` will not show `ffmpeg` processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458).
:::
If you do not see these processes, check the `docker logs` for the container and look for any related errors.
These instructions were originally based on the [Jellyfin documentation](https://jellyfin.org/docs/general/administration/hardware-acceleration.html#nvidia-hardware-acceleration-on-docker-linux).
## Raspberry Pi 3/4
Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory).
If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration.
```yaml
# if you want to decode a h264 stream
ffmpeg:
  hwaccel_args: preset-rpi-64-h264

# if you want to decode a h265 (hevc) stream
ffmpeg:
  hwaccel_args: preset-rpi-64-h265
```
:::note
If running Frigate through Docker, you either need to run in privileged mode or
map the `/dev/video*` devices to Frigate. With Docker Compose add:
```yaml
services:
  frigate:
    ...
    devices:
      - /dev/video11:/dev/video11
```
Or with `docker run`:
```bash
docker run -d \
--name frigate \
...
--device /dev/video11 \
ghcr.io/blakeblackshear/frigate:stable
```
`/dev/video11` is the correct device (on Raspberry Pi 4B). You can check
by running the following and looking for `H264`:
```bash
for d in /dev/video*; do
echo -e "---\n$d"
v4l2-ctl --list-formats-ext -d $d
done
```
Or map in all the `/dev/video*` devices.
:::
# Community Supported
## NVIDIA Jetson
A separate set of docker images is available for Jetson devices. They come with an `ffmpeg` build with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so Frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
You will need to use the image with the nvidia container runtime:
Fine-tune the LPR feature using these optional parameters at the global level of your config:
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `None`
- This is auto-selected by Frigate and can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`.
If you are using a model that natively detects `license_plate`, add an _object mask_ over your text.
If you are not using a model that natively detects `license_plate` or you are using dedicated LPR camera mode, only a _motion mask_ over your text is required.
### I see "Error running ... model" in my logs. How can I fix this?
### I see "Error running ... model" in my logs, or my inference time is very high. How can I fix this?
This usually happens when your GPU is unable to compile or use one of the LPR models. Set your `device` to `CPU` and try again. GPU acceleration only provides a slight performance increase, and the models are lightweight enough to run without issue on most CPUs.
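For example, a minimal global LPR config forcing the models onto the CPU might look like this (a sketch; other options left at their defaults):

```yaml
lpr:
  enabled: True
  device: CPU
```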
| Source | Frame Rate | Resolution | Audio | Requires go2rtc | Notes |
| ------ | ---------- | ---------- | ----- | --------------- | ----- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
| [YOLOv9](#yolov9) | More accurate but slower than default model |
#### Mobiledet
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
#### YOLOv9
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes.
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium` size.
This adds features including the ability to deep link directly into the app.
In order to install Frigate as a PWA, the following requirements must be met:
- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.
Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found in the right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu
## Usage
Once set up, the Frigate app can be used wherever it has access to Frigate. This means it can be set up as local-only, VPN-only, or fully accessible depending on your needs.
- `front_door` stream is used by Frigate for viewing, recording, and detection. The `#backchannel=0` parameter prevents go2rtc from establishing the audio output backchannel, so it won't block two-way talk access.
- `front_door_twoway` stream is used for two-way talk functionality. This stream can be used by Frigate's WebRTC viewer when two-way talk is enabled, or by other applications (like Home Assistant Advanced Camera Card) that need access to the camera's audio output channel.
## Security: Restricted Stream Sources
For security reasons, the `echo:`, `expr:`, and `exec:` stream sources are disabled by default in go2rtc. These sources allow arbitrary command execution and can pose security risks if misconfigured.
If you attempt to use these sources in your configuration, the streams will be removed and an error message will be printed in the logs.
To enable these sources, you must set the environment variable `GO2RTC_ALLOW_ARBITRARY_EXEC=true`. This can be done in your Docker Compose file or container environment:
```yaml
environment:
- GO2RTC_ALLOW_ARBITRARY_EXEC=true
```
:::warning
Enabling arbitrary exec sources allows execution of arbitrary commands through go2rtc stream configurations. Only enable this if you understand the security implications and trust all sources of your configuration.
:::
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
:::warning
The `exec:`, `echo:`, and `expr:` sources are disabled by default for security. You must set `GO2RTC_ALLOW_ARBITRARY_EXEC=true` to use them. See [Security: Restricted Stream Sources](#security-restricted-stream-sources) for more information.
:::
NOTE: The output will need to be passed with two curly braces `{{output}}`
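As a rough illustration only (the ffmpeg arguments here simply generate a test pattern and are not taken from the Frigate docs), an exec source might look like:

```yaml
go2rtc:
  streams:
    test_pattern:
      # exec runs the command and pipes its RTSP output to go2rtc via {{output}}
      - "exec:ffmpeg -hide_banner -re -f lavfi -i testsrc=size=1280x720:rate=30 -c:v libx264 -rtsp_transport tcp -f rtsp {{output}}"
```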
Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant.
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.
:::
### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.
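For example, a camera whose substream is 1280x720 might be configured like this (a sketch; camera name, stream URLs, and other options omitted):

```yaml
cameras:
  front_door:
    detect:
      width: 1280
      height: 720
      fps: 5
```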
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
Frigate supports multiple different detectors that work on different types of hardware:
**Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices offering a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#configuration)
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- <CommunityBadge/> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
**Nvidia**
- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with any size models including large
### Google Coral TPU
:::warning
The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given that it is still one of the most power-efficient devices for executing object detection models.
:::
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
The OpenVINO detector type is able to run on:
:::note
Intel NPUs have seen [limited success in community deployments](https://github.com/blakeblackshear/frigate/discussions/13248#discussioncomment-12347357), although they remain officially unsupported.
In testing, the NPU delivered performance that was only comparable to — or in some cases worse than — the integrated GPU.
Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
## Extra Steps for Specific Hardware
The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.
### Raspberry Pi 3/4
By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form factors.
#### Installation
For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
:::warning

The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.

:::

For other installations, follow these steps:
1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
:::note
If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
:::
If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo rmmod hailo_pci
```
Now blacklist the driver to prevent it from loading on boot:
```bash
echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
```
Update initramfs to ensure the blacklist takes effect:
```bash
sudo update-initramfs -u
```
Reboot your Raspberry Pi:
```bash
sudo reboot
```
After rebooting, verify the built-in driver is not loaded:
```bash
lsmod | grep hailo
```
This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.

2. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
3. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
4. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
5. Run the script with `./user_installation.sh`
The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
If you’re running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
Now you should be able to start Frigate by running `docker compose up -d` from within the directory that contains your `docker-compose.yml` file.
This section assumes that you already have an environment setup as described in [Installation](../frigate/installation.md). You should also configure your cameras according to the [camera setup guide](/frigate/camera_setup). Pay particular attention to the section on choosing a detect resolution.
### Step 1: Start Frigate
At this point you should be able to start Frigate and a basic config will be created automatically.
If you get an error image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with H264 RTSP cameras that support TCP connections.
FFmpeg arguments for other types of cameras can be found [here](../configuration/camera_specific.md).
You can click the `Add Camera` button to use the camera setup wizard to get your first camera added into Frigate.
[Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.
[Scrypted - Frigate bridge](https://github.com/apocaliss92/scrypted-frigate-bridge) is a plugin that allows you to ingest Frigate detections, motion, and video clips into Scrypted, as well as providing templates to export rebroadcast configurations to Frigate.
There are three model types offered in Frigate+: `mobiledet`, `yolonas`, and `yolov9`.
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | ----------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
### YOLOv9 Details
If you have a Hailo device, you will need to specify the hardware you have when requesting the model.
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
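As a sketch of that first step (assuming a Frigate+ model; the id below is a placeholder), referencing the model id in your config causes it to be downloaded into `model_cache`:

```yaml
model:
  # placeholder Frigate+ model id for your YOLOv9 onnx model
  path: plus://<your_model_id>
```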
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVINO (`openvino`), and other detectors.
High CPU usage can impact Frigate's performance and responsiveness. This guide outlines the most effective configuration changes to help reduce CPU consumption and optimize resource usage.
## 1. Hardware Acceleration for Video Decoding
**Priority: Critical**
Video decoding is one of the most CPU-intensive tasks in Frigate. While an AI accelerator handles object detection, it does not assist with decoding video streams. Hardware acceleration (hwaccel) offloads this work to your GPU or specialized video decode hardware, significantly reducing CPU usage and enabling you to support more cameras on the same hardware.
### Key Concepts
**Resolution & FPS Impact:** The decoding burden scales with resolution and frame rate. A 4K stream at 30 FPS requires roughly 4 times the processing power of a 1080p stream at the same frame rate, and doubling the frame rate doubles the decode workload. This is why hardware acceleration becomes critical when working with multiple high-resolution cameras.
**Hardware Acceleration Benefits:** By using dedicated video decode hardware, you can:
- Significantly reduce CPU usage per camera stream
- Support 2-3x more cameras on the same hardware
- Free up CPU resources for motion detection and other Frigate processes
- Reduce system heat and power consumption
### Configuration
Frigate provides preset configurations for common hardware acceleration scenarios. Set up `hwaccel_args` based on your hardware in your [configuration](../configuration/reference) as described in the [getting started guide](../guides/getting_started).
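For instance, a minimal sketch that enables VAAPI decoding globally (assuming an Intel or AMD GPU that is exposed to the container; pick the preset that matches your hardware):

```yaml
ffmpeg:
  # hardware accelerated decode preset; other presets exist for NVIDIA, Raspberry Pi, etc.
  hwaccel_args: preset-vaapi
```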
### Troubleshooting Hardware Acceleration
If hardware acceleration isn't working:
1. Check Frigate logs for FFmpeg errors related to hwaccel
2. Verify the hardware device is accessible inside the container
3. Ensure your camera streams use H.264 or H.265 codecs (most common)
4. Try different presets if the automatic detection fails
5. Check that your GPU drivers are properly installed on the host system
## 2. Detector Selection and Configuration
**Priority: Critical**
Choosing the right detector for your hardware is the single most important factor for detection performance. The detector is responsible for running the AI model that identifies objects in video frames. Different detector types have vastly different performance characteristics and hardware requirements, as detailed in the [hardware documentation](../frigate/hardware).
### Understanding Detector Performance
Frigate uses motion detection as a first-line check before running expensive object detection, as explained in the [motion detection documentation](../configuration/motion_detection). When motion is detected, Frigate creates a "region" (the green boxes in the debug viewer) and sends it to the detector. The detector's inference speed determines how many detections per second your system can handle.
**Calculating Detector Capacity:** Your detector has a finite capacity measured in detections per second. With an inference speed of 10ms, your detector can handle approximately 100 detections per second (1000ms / 10ms = 100). If your cameras collectively require more than this capacity, you'll experience delays, missed detections, or a system that falls behind.
### Choosing the Right Detector
Different detectors have vastly different performance characteristics; see the expected performance for object detectors in [the hardware docs](../frigate/hardware).
### Multiple Detector Instances
When a single detector cannot keep up with your camera count, some detector types (`openvino`, `onnx`) allow you to define multiple detector instances to share the workload. This is particularly useful with GPU-based detectors that have sufficient VRAM to run multiple inference processes.
For detailed instructions on configuring multiple detectors, see the [Object Detectors documentation](../configuration/object_detectors).
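As a rough sketch (assuming an OpenVINO-capable GPU with enough VRAM; the detector names are arbitrary), two instances sharing the detection load could be declared like this:

```yaml
detectors:
  ov_0:
    type: openvino
    device: GPU
  ov_1:
    type: openvino
    device: GPU
```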
**When to add a second detector:**
- Skipped FPS is consistently > 0 even during normal activity
### Model Selection and Optimization
The model you use significantly impacts detector performance. Frigate provides default models optimized for each detector type, but you can customize them as described in the [detector documentation](../configuration/object_detectors).
**Model Size Trade-offs:**
- Smaller models (320x320): Faster inference; Frigate is specifically optimized for 320x320 models.
- Larger models (640x640): Slower inference; can sometimes offer higher accuracy on very large objects that take up a majority of the frame.
When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
## When to use
- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports
## Example Config
Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:
```yaml
cameras:
  test:
    ffmpeg:
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
    detect:
      enabled: true
    record:
      enabled: false
    snapshots:
      enabled: false
```
- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
- If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.
## Variables to consider in object tracking
- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.
When debugging, treat the replay as a close approximation rather than a byte-for-byte reproduction of the original run. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.
## Troubleshooting
- No video: verify the path is correct and accessible from the Frigate process/container.
- FFmpeg errors: check the log output for ffmpeg-specific error messages and adjust `input_args` accordingly for your file/container format. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.
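For example, a sketch of the dummy camera with hardware acceleration disabled and the input flags kept in one place (the path and flags mirror the example config above):

```yaml
cameras:
  test:
    ffmpeg:
      # disable hardware acceleration for the file-based input if it causes ffmpeg errors
      hwaccel_args: ""
      inputs:
        - path: /media/frigate/car-stopping.mp4
          # adjust these flags if your file or container format needs different options
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
```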
The USB Coral can become stuck and need to be restarted; this can happen for a number of reasons.
The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on the OS and kernel being run.
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
## Attempting to load TPU as pci & Fatal Python error: Illegal instruction
Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
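For example (a sketch; the module names shown are illustrative), profiling the capture and output processes from a Docker Compose file could look like:

```yaml
services:
  frigate:
    environment:
      # comma-separated list of Frigate module prefixes to profile
      FRIGATE_MEMRAY_MODULES: "frigate.capture,frigate.output"
```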
After a process exits normally, you'll find HTML reports in the memray reports directory under `/config`.
If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
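As a sketch (assuming the standard memray CLI is available inside the container; the binary file path is a placeholder), running `python -m memray flamegraph <path-to-captured-.bin-file>` will write an HTML flamegraph report that can be opened in a browser.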
description="Returns the current authenticated user's profile including username, role, and allowed cameras. This endpoint requires authentication and returns information about the user's permissions.",
description='Authenticates a user with username and password. Returns a JWT token as a secure HTTP-only cookie that can be used for subsequent API requests. The JWT token can also be retrieved from the response and used as a Bearer token in the Authorization header.\n\nExample using Bearer token:\n```\ncurl -H "Authorization: Bearer <token_value>" https://frigate_ip:8971/api/profile\n```',
description="Returns a list of all users with their usernames and roles. Requires admin role. Each user object contains the username and assigned role.",
description='Creates a new user with the specified username, password, and role. Requires admin role. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?":{} |<>).',
description="Deletes a user by username. The built-in admin user cannot be deleted. Requires admin role. Returns success message or error if user not found.",
description="Updates a user's password. Users can only change their own password unless they have admin role. Requires the current password to verify identity for non-admin users. Password must meet strength requirements: minimum 8 characters, at least one uppercase letter, at least one digit, and at least one special character (!@#$%^&*(),.?\":{} |<>). If user changes their own password, a new JWT cookie is automatically issued.",
)
async def update_password(
    request: Request,
    except DoesNotExist:
        return JSONResponse(content={"message": "User not found"}, status_code=404)
    # Require old_password when non-admin user is changing any password
    # Admin users changing passwords do NOT need to provide the current password
    if current_role != "admin":
        if not body.old_password:
            return JSONResponse(
                content={"message": "Current password is required"},
@router.put(
"/users/{username}/role",
dependencies=[Depends(require_role(["admin"]))],
summary="Update user role",
description="Updates a user's role. The built-in admin user's role cannot be modified. Requires admin role. Valid roles are defined in the configuration.",
description="A comprehensive description of the setting and entities, including relevant context and plausible inferences if supported by visual evidence."
)
shortSummary: str = Field(
description="A brief 2-sentence summary of the scene, suitable for notifications. Should capture the key activity and context without full detail."
)
confidence: float = Field(
description="A float between 0 and 1 representing your overall confidence in this analysis."
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
Each line represents a detection state, not necessarily unique individuals.
    start_ts: float,
    end_ts: float,
    events: list[dict[str, Any]],
    preferred_language: str | None,
    debug_save: bool,
) -> str | None:
    """Generate a summary of review item descriptions over a period of time."""