Compare commits

...

206 Commits

Author SHA1 Message Date
ivanshi1108
ebf3f0acac
Merge e27a94ae0b into e4eac4ac81 2025-11-11 05:54:30 +00:00
Josh Hawkins
e4eac4ac81
Add Camera Wizard improvements (#20876)
* backend api endpoint

* don't add no-credentials version of streams to rtsp_candidates

* frontend types

* improve types

* add optional probe dialog to wizard step 1

* i18n

* form description and field change

* add onvif form description

* match onvif probe pane with other steps in the wizard

* refactor to add probe and snapshot as step 2

* consolidate probe dialog

* don't change dialog size

* radio button style

* refactor to select onvif urls via combobox in step 3

* i18n

* add scrollbar container

* i18n cleanup

* fix button activity indicator

* match test parsing in step 3 with step 2

* hide resolution if both width and height are zero

* use drawer for stream selection on mobile in step 3

* suppress double toasts

* api endpoint description
2025-11-10 15:49:52 -06:00
Josh Hawkins
c371fc0c87
Miscellaneous Fixes (#20866)
* Don't warn when event ids have expired for trigger sync

* Import faster_whisper conditionally to avoid illegal instruction

* Catch OpenVINO runtime error

* fix race condition in detail stream context

navigating between tracked objects in Explore would sometimes prevent the object track from appearing (see the sketch after this commit)

* Handle case where classification images are deleted

* Adjust default rounded corners on larger screens

* Improve flow handling for classification state

* Remove images when wizard is cancelled

* Improve deletion handling for classes

* Set constraints on review buffers

* Update to support correct data format

* Set minimum duration for recording based review items

* Use friendly name in review genai prompt

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-10 10:03:56 -07:00
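
The race-condition fix above is described only at a high level. A minimal sketch of one standard guard against stale async responses in a React effect; the hook shape and endpoint are illustrative assumptions, not Frigate's actual code:

```ts
import { useEffect, useState } from "react";

// Hypothetical sketch: abort the previous fetch whenever the selected object
// changes, so a slow response for the prior object cannot clobber the state
// for the current one. The endpoint path is an assumption for illustration.
function useObjectTrack(objectId: string) {
  const [track, setTrack] = useState<unknown | null>(null);

  useEffect(() => {
    const controller = new AbortController();
    fetch(`/api/events/${objectId}/track`, { signal: controller.signal })
      .then((res) => res.json())
      .then(setTrack)
      .catch(() => {}); // aborted or failed requests are simply dropped
    // Cleanup runs before the next effect, guaranteeing only the latest
    // request can still resolve into state.
    return () => controller.abort();
  }, [objectId]);

  return track;
}
```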
Nicolas Mowen
99a363c047
Improve classification (#20863)
2025-11-09 16:21:13 -06:00
Nicolas Mowen
a374a60756
Miscellaneous Fixes (#20850)
* Fix wrongly added detection objects to alert

* Fix CudaGraph inverse condition

* Add debug logs

* Formatting
2025-11-09 08:38:38 -06:00
Nicolas Mowen
d41ee4ff88
Miscellaneous Fixes (#20848)
* Fix filtering for classification

* Adjust prompt to account for response tokens

* Correctly return response for reprocess

* Use API response to update data instead of trying to re-parse all of the values

* Implement rename class api

* Fix model deletion / rename dialog

* Remove camera spatial context

* Catch error
2025-11-08 13:13:40 -07:00
Josh Hawkins
c99ada8f6a
Tracked Object Details pane tweaks (#20849)
* use grid view on desktop

* refactor description box to remove buttons and add row of action icon buttons

* add tooltips

* fix trigger creation

when using the search effect to create a trigger, the prefilled object will not exist in the config yet (see the sketch after this commit)

* i18n

* set max width on thumbnail
2025-11-08 12:26:30 -07:00
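
As the trigger-creation note above says, the prefilled object may not exist in the config yet. A hedged sketch of one way a form can tolerate that; the hook name and config shape are assumptions:

```ts
import { useMemo } from "react";

// Hypothetical sketch: merge a prefilled object label into the selectable
// options so the form does not reject a value that is not yet in the config.
function useTriggerObjectOptions(
  configObjects: string[] | undefined,
  prefilled?: string,
): string[] {
  return useMemo(() => {
    const known = configObjects ?? [];
    return prefilled && !known.includes(prefilled)
      ? [...known, prefilled]
      : known;
  }, [configObjects, prefilled]);
}
```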
Josh Hawkins
01452e4c51
Miscellaneous Fixes (#20841)
* show id field when editing zone

* improve zone capitalization

* Update NPU models and docs

* fix mobilepage in tracked object details

* Use thread lock for openvino to avoid concurrent requests with JinaV2

* fix hashing function to avoid collisions

* remove extra flex div causing overflow

* ensure header stays on top of video controls

* don't smart capitalize friendly names

* Fix incorrect object classification crop

* don't display submit to plus if object doesn't have a snapshot

* check for snapshot and clip in actions menu

* frigate plus submission fix

still show the Frigate+ section if the snapshot has already been submitted, and run an optimistic update; local state was being overridden (see the first sketch after this commit)

* Don't fail to show 0% when showing classification

* Don't fail on file system error

* Improve title and description for review genai

* fix overflowing truncated review item description in detail stream

* catch events with review items that start after the first timeline entry

review items may start later than the events within them, so subtract a padding from the start time in the filter so the start of events is not incorrectly filtered out of the list in the detail stream (see the second sketch after this commit)

* also pad on review end_time

* fix

* change order of timeline zoom buttons on mobile

* use grid to ensure genai title does not cause overflow

* small tweaks

* Cleanup

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-08 05:44:30 -07:00
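
The Frigate+ submission fix above leans on an optimistic update. A minimal sketch of that pattern with SWR's mutate, assuming an SWR cache; the cache key, endpoint, and payload shape are illustrative:

```ts
import { mutate } from "swr";

type PlusEvent = { id: string; plus_id?: string };

// Hypothetical sketch: show the submitted state immediately and roll back if
// the request fails, so a stale local value cannot override the server's.
async function submitToPlus(eventId: string) {
  await mutate(
    `event/${eventId}`,
    async (current?: PlusEvent) => {
      const res = await fetch(`/api/events/${eventId}/plus`, { method: "POST" });
      const { plus_id } = await res.json();
      return { ...current, id: eventId, plus_id };
    },
    {
      optimisticData: (current?: PlusEvent) => ({
        ...current,
        id: eventId,
        plus_id: "pending", // placeholder until the server confirms
      }),
      rollbackOnError: true,
      revalidate: false,
    },
  );
}
```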
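And the review-item padding described above, as a hedged sketch; the padding value and type shapes are assumptions, not Frigate's actual constants:

```ts
// Hypothetical sketch: events inside a review item can begin slightly before
// the review item's own start_time, so widen the window on both ends before
// filtering the detail stream.
const PADDING_S = 10; // illustrative value

interface TimelineEvent {
  start_time: number; // unix seconds
}

function filterEventsForReview(
  events: TimelineEvent[],
  review: { start_time: number; end_time: number },
): TimelineEvent[] {
  return events.filter(
    (e) =>
      e.start_time >= review.start_time - PADDING_S &&
      e.start_time <= review.end_time + PADDING_S,
  );
}
```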
GuoQing Liu
ef19332fe5
Add zones friendly name (#20761)
* feat: add zones friendly name

* fix: fix the issue where the input field was empty when there was no friendly_name

* chore: fix the issue where the friendly name would replace spaces with underscores

* docs: update zones docs

* Update web/src/components/settings/ZoneEditPane.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Add friendly_name option for zone configuration

Added optional friendly name for zones in configuration.

* fix: fix the logical error in the null/empty check for the polygons parameter

* fix: remove the toast name for zones; it will use the friendly_name instead

* docs: remove emoji tips

* revert: revert zones doc ui tips

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/zones.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* feat: add friendly zone names to tracking details and lifecycle item descriptions

* chore: lint fix

* refactor: add friendly zone names to timeline entries and clean up unused code

* refactor: add formatList

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-11-07 08:02:06 -06:00
Josh Hawkins
530b69b877
Miscellaneous fixes (#20833)
* remove frigate+ icon from explore grid footer

* add margin

* pointer cursor on event menu items in detail stream

* don't show submit to plus for non-objects and if plus is disabled

* tweak spacing in annotation settings popover

* Fix deletion of classification images and library

* Ensure after creating a class that things are correct

* Fix dialog getting stuck

* Only show the genai summary popup on mobile when timeline is open

* fix audio transcription embedding

* spacing

* hide x icon on restart sheet to prevent closure issues

* prevent x overflow in detail stream on mobile safari

* ensure name is valid for search effect trigger

* add trigger to detail actions menu

* move find similar to actions menu

* Use a column layout for MobilePageContent in PlatformAwareSheet

This is so the header is outside the scrolling area and the content can grow/scroll independently. This now matches the way it's done in classification (see the first sketch after this commit).

* Skip azure execution provider

* add optional ref to always scroll to top

the more filters panel in Explore was not scrolled to the top on open due to the use of framer-motion (see the second sketch after this commit)

* fix title classes on desktop

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-07 06:53:27 -07:00
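
A minimal sketch of the MobilePageContent column layout described above, assuming Tailwind-style utility classes; the component name and classes are illustrative:

```tsx
import type { ReactNode } from "react";

// Hypothetical sketch: a flex column where the header is a non-shrinking row
// outside the scrollable region, and min-h-0 lets the content area actually
// shrink so overflow-y-auto can take over the scrolling.
function MobilePageContentSketch(props: {
  header: ReactNode;
  children: ReactNode;
}) {
  return (
    <div className="flex h-full flex-col">
      <div className="shrink-0">{props.header}</div>
      <div className="min-h-0 flex-1 overflow-y-auto">{props.children}</div>
    </div>
  );
}
```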
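And a sketch of the optional scroll-to-top ref; the hook shape is an assumption for illustration:

```ts
import { useEffect, useRef } from "react";

// Hypothetical sketch: framer-motion's enter animation can leave a panel at a
// stale scroll offset, so reset the scrollable container explicitly on open.
function useScrollToTopOnOpen(open: boolean) {
  const ref = useRef<HTMLDivElement | null>(null);

  useEffect(() => {
    if (open) ref.current?.scrollTo({ top: 0 });
  }, [open]);

  return ref; // attach to the scrollable container element
}
```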
Artem Vladimirov
a15399fed5
fix: add pluralization (classification model) (#20838)
Co-authored-by: Artem Vladimirov <a.vladimirov@small.kz>
2025-11-07 05:40:48 -07:00
Nicolas Mowen
88a2f6c991 Fix incorrect Weblate state
2025-11-06 09:33:14 -07:00
Hosted Weblate
dca04cbe9c Translated using Weblate (Cantonese (Traditional Han script))
Currently translated at 94.4% (565 of 598 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: beginner2047 <leoywng44@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/yue_Hant/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
f0de8e7643 Translated using Weblate (Norwegian Bokmål)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 98.9% (98 of 99 strings), 100.0% (99 of 99), 100.0% (125 of 125), 100.0% (598 of 598), 100.0% (125 of 125), 100.0% (596 of 596), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (125 of 125), 100.0% (72 of 72), 100.0% (596 of 596), 100.0% (48 of 48), 100.0% (124 of 124), 100.0% (72 of 72), 100.0% (54 of 54), 100.0% (596 of 596), 100.0% (88 of 88), 100.0% (124 of 124), 100.0% (39 of 39), 100.0% (596 of 596), 100.0% (90 of 90)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
ef4e13089c Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (106 of 106 strings)

Additional translation updates in this commit: 99.8% (597 of 598 strings), 100.0% (99 of 99), 100.0% (598 of 598), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (125 of 125), 100.0% (88 of 88), 100.0% (596 of 596), 100.0% (124 of 124), 100.0% (39 of 39)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
ee9a734ebd Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 1.8% (2 of 106 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 71.7% (28 of 39 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: windasd <me@windasd.tw>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hant/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
2025-11-06 09:33:14 -07:00
Hosted Weblate
0c12677f7b Translated using Weblate (Slovak)
Currently translated at 99.4% (595 of 598 strings)

Additional translation updates in this commit: 97.9% (97 of 99 strings), 100.0% (98 of 98), 100.0% (596 of 596), 99.2% (124 of 125), 100.0% (51 of 51), 100.0% (207 of 207), 85.9% (512 of 596), 99.1% (123 of 124), 100.0% (54 of 54), 98.6% (494 of 501), 100.0% (88 of 88), 74.1% (442 of 596), 98.3% (122 of 124), 100.0% (13 of 13), 100.0% (39 of 39), 98.1% (53 of 54), 100.0% (206 of 206), 88.0% (441 of 501)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
72ede19bee Translated using Weblate (Swedish)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 100.0% (99 of 99), 100.0% (99 of 99), 100.0% (598 of 598), 100.0% (98 of 98), 100.0% (596 of 596), 100.0% (596 of 596), 100.0% (125 of 125), 100.0% (39 of 39), 100.0% (207 of 207)

Co-authored-by: Daniel Nylander <daniel@danielnylander.se>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
3d3e43da96 Translated using Weblate (French)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 100.0% (99 of 99), 100.0% (99 of 99), 100.0% (598 of 598), 100.0% (125 of 125), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (125 of 125), 100.0% (89 of 89), 100.0% (88 of 88), 100.0% (596 of 596), 100.0% (124 of 124), 100.0% (39 of 39)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
b3140f666c Translated using Weblate (Spanish)
Currently translated at 68.3% (409 of 598 strings)

Additional translation updates in this commit: 22.6% (24 of 106 strings), 100.0% (52 of 52), 100.0% (13 of 13), 75.2% (94 of 125), 87.1% (34 of 39), 98.1% (53 of 54), 68.5% (410 of 598)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: José María Díaz <jdiaz.bb@gmail.com>
Co-authored-by: Reydel Leon Machado <contact@reydelleon.me>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
eb6187b5fc Translated using Weblate (Dutch)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 98.9% (98 of 99), 100.0% (99 of 99), 100.0% (125 of 125), 100.0% (598 of 598), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (90 of 90), 100.0% (125 of 125), 100.0% (89 of 89), 100.0% (88 of 88), 100.0% (596 of 596), 100.0% (124 of 124), 100.0% (39 of 39)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
6fef71ef46 Translated using Weblate (Indonesian)
Currently translated at 100.0% (13 of 13 strings)

Additional translation updates in this commit: 100.0% (39 of 39 strings), 100.0% (10 of 10), 17.3% (87 of 501), 100.0% (52 of 52), 100.0% (10 of 10)

Co-authored-by: Albert <albertong27@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/id/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
2025-11-06 09:33:14 -07:00
Hosted Weblate
18377ed716 Translated using Weblate (Italian)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 100.0% (598 of 598), 97.9% (97 of 99), 100.0% (98 of 98), 100.0% (72 of 72), 100.0% (125 of 125), 100.0% (207 of 207), 100.0% (89 of 89), 100.0% (88 of 88), 100.0% (596 of 596), 100.0% (124 of 124), 100.0% (54 of 54), 100.0% (206 of 206), 28.4% (25 of 88), 100.0% (13 of 13), 100.0% (39 of 39), 100.0% (51 of 51), 75.0% (93 of 124), 98.1% (53 of 54)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
165e1e4e64 Translated using Weblate (Polish)
Currently translated at 67.8% (406 of 598 strings)

Co-authored-by: Bartlomiej Puls <bartlomiej.puls@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
a4790586ad Translated using Weblate (Hungarian)
Currently translated at 69.2% (414 of 598 strings)

Additional translation updates in this commit: 7.0% (7 of 99 strings), 8.1% (8 of 98), 96.0% (49 of 51), 100.0% (13 of 13), 69.2% (27 of 39), 100.0% (10 of 10)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Zrinyi Patrik <patrikzrinyi404@gmail.com>
Co-authored-by: Zsolt Fojtyik <zsozso830316@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
21623113b5 Translated using Weblate (Vietnamese)
Currently translated at 11.1% (11 of 99 strings)

Additional translation updates in this commit: 12.2% (12 of 98 strings), 62.0% (370 of 596), 92.3% (12 of 13), 69.2% (27 of 39), 100.0% (10 of 10)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: John Nguyen <thongnguyen.uit@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/vi/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
1d8915b0cd Translated using Weblate (Portuguese)
Currently translated at 76.0% (455 of 598 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
7e5c117cd6 Translated using Weblate (Czech)
Currently translated at 67.5% (404 of 598 strings)

Additional translation updates in this commit: 3.0% (3 of 98 strings), 67.9% (405 of 596), 92.1% (47 of 51), 100.0% (10 of 10)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Vitek <vit@vakula.cz>
Co-authored-by: lukascissa <lukas@cissa.cz>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
229b7ead78 Translated using Weblate (Catalan)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 97.9% (97 of 99), 100.0% (598 of 598), 100.0% (48 of 48), 100.0% (48 of 48), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (596 of 596), 100.0% (13 of 13), 100.0% (125 of 125), 100.0% (39 of 39), 100.0% (54 of 54), 100.0% (207 of 207)

Co-authored-by: Eduardo Pastor Fernández <123eduardoneko123@gmail.com>
Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
bd9ad3c50a Translated using Weblate (Japanese)
Currently translated at 94.4% (565 of 598 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: yhi264 <yhiraki@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ja/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
6f66f681d1 Translated using Weblate (Ukrainian)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 98.9% (98 of 99), 100.0% (99 of 99), 100.0% (598 of 598), 100.0% (98 of 98), 100.0% (207 of 207), 100.0% (98 of 98), 100.0% (125 of 125), 100.0% (89 of 89), 100.0% (88 of 88), 100.0% (596 of 596), 100.0% (51 of 51), 100.0% (13 of 13), 100.0% (124 of 124), 100.0% (39 of 39), 100.0% (54 of 54), 100.0% (206 of 206)

Co-authored-by: Anatoli Skovpen <a@ask.kiev.ua>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
77773d133d Translated using Weblate (Romanian)
Currently translated at 99.8% (597 of 598 strings)

Additional translation updates in this commit: 100.0% (106 of 106 strings), 98.9% (98 of 99), 100.0% (99 of 99), 100.0% (598 of 598), 100.0% (98 of 98), 100.0% (125 of 125), 100.0% (207 of 207), 100.0% (88 of 88), 100.0% (51 of 51), 100.0% (124 of 124), 100.0% (39 of 39), 100.0% (593 of 593), 100.0% (48 of 48), 100.0% (90 of 90), 100.0% (51 of 51), 100.0% (13 of 13), 100.0% (124 of 124), 100.0% (34 of 34), 100.0% (54 of 54), 100.0% (206 of 206)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
1b3edf8798 Translated using Weblate (Russian)
Currently translated at 22.6% (24 of 106 strings)

Additional translation updates in this commit: 96.1% (50 of 52 strings), 94.8% (37 of 39), 22.2% (22 of 99), 72.0% (90 of 125), 89.7% (35 of 39), 11.1% (11 of 99), 100.0% (13 of 13), 84.6% (33 of 39)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Артём Владимиров <artyomka71@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ru/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
2025-11-06 09:33:14 -07:00
Hosted Weblate
99e81eba95 Translated using Weblate (German)
Currently translated at 94.1% (563 of 598 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Phil Jope <Phil.Jope@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
44c91adcee Translated using Weblate (Portuguese (Brazil))
Currently translated at 74.9% (448 of 598 strings)

Additional translation updates in this commit: 22.2% (22 of 99 strings), 23.4% (23 of 98), 72.0% (90 of 125), 97.4% (38 of 39), 100.0% (34 of 34)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Co-authored-by: Nico <n2778370@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
81932cd399 Translated using Weblate (Lithuanian)
Currently translated at 73.7% (441 of 598 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Hosted Weblate
4b1054ee05 Translated using Weblate (Turkish)
Currently translated at 41.4% (41 of 99 strings)

Additional translation updates in this commit: 42.4% (42 of 99 strings), 42.4% (42 of 99), 62.7% (375 of 598), 62.7% (375 of 598), 95.5% (86 of 90), 98.0% (51 of 52), 98.0% (51 of 52), 92.3% (12 of 13), 92.3% (12 of 13), 86.4% (108 of 125), 97.4% (38 of 39), 97.4% (38 of 39), 96.2% (52 of 54), 100.0% (10 of 10), 100.0% (10 of 10)

Co-authored-by: Emircanos <emircan368@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Serhat Karaman <serhatkaramanworkmail@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/tr/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-11-06 09:33:14 -07:00
Josh Hawkins
945317b44e
Tracked Object Details pane tweaks (#20830)
* add prev/next buttons on desktop

* buttons should work with summary and grid view

* i18n

* small tweaks

* don't change dialog size

* remove heading and count

* remove icons

* spacing

* two column detail view

* add actions to dots menu

* move actions menu to its own component

* set modal to false on face library dropdown to guard against improper closures (see the sketch after this commit)

https://github.com/shadcn-ui/ui/discussions/6908

* frigate plus layout

* remove face training

* clean up unused

* refactor to remove duplication between mobile and desktop

* turn annotation settings into a popover

* fix popover

* improve annotation offset popover

* change icon and popover text in detail stream for annotation settings

* clean up

* use drawer on mobile

* fix setter function

* use dialog ref for popover portal

* don't portal popover

* tweaks

* add button type

* lower xl max width

* fixes

* justify
2025-11-06 09:22:52 -07:00
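
A minimal sketch of the modal={false} workaround referenced above, assuming a shadcn/ui (Radix-based) dropdown; the import path and menu items are illustrative:

```tsx
import {
  DropdownMenu,
  DropdownMenuContent,
  DropdownMenuItem,
  DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";

// Hypothetical sketch: Radix dropdowns are modal by default, which can trap
// focus and pointer events and leave the menu stuck when nested in a dialog;
// modal={false} (a Radix Root prop) opts out of that behavior.
export function FaceLibraryMenuSketch() {
  return (
    <DropdownMenu modal={false}>
      <DropdownMenuTrigger>Actions</DropdownMenuTrigger>
      <DropdownMenuContent>
        <DropdownMenuItem>Rename</DropdownMenuItem>
        <DropdownMenuItem>Delete</DropdownMenuItem>
      </DropdownMenuContent>
    </DropdownMenu>
  );
}
```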
Artem Vladimirov
32f1d85a6f
fix: add pluralization for userRolesUpdated toast message (#20827)
Co-authored-by: Artem Vladimirov <a.vladimirov@small.kz>
2025-11-06 07:39:57 -07:00
Nicolas Mowen
35ce275071
Add ability to define Review Summary camera context (#20828)
* Add ability to define GenAI camera context

* Cleanup

* Only show example with list
2025-11-06 07:39:44 -07:00
Nicolas Mowen
8048168814
Bug Fixes (#20825)
* Correctly sort summary responses

* Consider JinaV2 as a complex model

* Subscribe to record updates in camera watchdog

* Cleanup score showing

* No need to sort review summary

* Add tests for recording summary

* Don't break existing format

* Sort event summary by day
2025-11-06 08:21:07 -06:00
Nicolas Mowen
a510ea9036
Review card refactor (#20813)
* Use the review card in event timeline popover

* Show review title in review card
2025-11-05 09:48:47 -06:00
Josh Hawkins
e1bc7360ad
Form validation tweaks (#20812)
* Always show ID field when editing a trigger

* use onBlur method for form validation

this will prevent the trigger ID field from expanding too soon while a user is typing the friendly name (see the sketch after this commit)
2025-11-05 09:18:10 -06:00
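
A minimal sketch of onBlur validation, assuming a react-hook-form setup; whether Frigate's form uses exactly this API is an assumption, and the field names are illustrative:

```tsx
import { useForm } from "react-hook-form";

type TriggerForm = { friendlyName: string; id: string };

// Hypothetical sketch: mode "onBlur" defers validation until a field loses
// focus, so the derived trigger ID does not react to every keystroke while
// the user is still typing the friendly name.
function TriggerFormSketch() {
  const { register, handleSubmit } = useForm<TriggerForm>({ mode: "onBlur" });

  return (
    <form onSubmit={handleSubmit((data) => console.log(data))}>
      <input {...register("friendlyName", { required: true })} />
      <input {...register("id", { pattern: /^[a-z0-9_-]+$/ })} />
    </form>
  );
}
```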
Josh Hawkins
4638c22c16
UI tweaks (#20811)
* camera wizard input mobile font zooming

* ensure the selected page is visible when navigating via url on mobile

* Filter detail stream to only show items from within the review item

* remove incorrect classes causing extra scroll in detail stream

* change button label

* fix mobile menu button highlight issue

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-05 07:49:31 -07:00
Nicolas Mowen
81faa8899d
Classification Improvements (#20807)
* Don't show model selection or back button when in multi select mode

* Add dialog to edit classification models

* Fix header spacing

* Cleanup desktop

* Increase max number of object classifications

* fix iOS mobile card

* Cleanup
2025-11-05 07:11:12 -07:00
Nicolas Mowen
043bd9e6ee
Fix jetson build (#20808)
* Fix jetson build

* Set numpy version in model wheels

* Use constraint instead

* Simplify
2025-11-05 07:10:56 -07:00
Artem Vladimirov
9f0b6004f2
fix: add pluralization for deletedModel toast message (#20803)
* fix: add pluralization for deletedModel toast message

* revert ru translation

---------

Co-authored-by: Artem Vladimirov <a.vladimirov@small.kz>
2025-11-05 05:02:54 -07:00
Nicolas Mowen
b751228476
Various Tweaks (#20800)
* Fix incorrectly picking start time when date was selected

* Implement shared file locking utility

* Cleanup
2025-11-04 17:06:14 -06:00
Nicolas Mowen
3b2d136665
UI Tweaks (#20791)
* Add tooltip for classification group

* Don't portal upload dialog when not in fullscreen
2025-11-04 10:54:05 -06:00
Josh Hawkins
e7394d0dc1
Form validation tweaks (#20790)
* ensure id field is expanded on form errors

* only validate id field when name field has no errors

* use ref instead

* an all-numeric name is invalid
2025-11-04 08:57:47 -06:00
Josh Hawkins
2e288109f4
Review tweaks (#20789)
* use alerts/detections colors for dots and add back blue border

* add alerts/detections colored dot next to event icons

* add margin for border
2025-11-04 08:45:45 -06:00
Josh Hawkins
256817d5c2
Make events summary endpoint DST-aware (#20786)
2025-11-03 17:54:33 -07:00
Nicolas Mowen
84409eab7e
Various fixes (#20785)
* Catch case where detector overflows

* Add more debug logs

* Cleanup

* Adjust no class wording

* Adjustments
2025-11-03 18:42:59 -06:00
Josh Hawkins
9e83888133
Fix recordings summary for DST (#20784)
* make recordings summary endpoints DST aware

* remove unused

* clean up
2025-11-03 17:30:56 -07:00
Abinila Siva
85f7138361
update installation code to pin the SDK at version 2.1 (#20781) 2025-11-03 13:23:51 -07:00
Nicolas Mowen
fc1cad2872
Adjust LPR packages for licensing (#20780) 2025-11-03 14:11:02 -06:00
Nicolas Mowen
5529432856
Various fixes (#20774)
* Change order of deletion

* Add debug log for camera enabled

* Add more face debug logs

* Set jetson numpy version
2025-11-03 10:05:03 -06:00
Josh Hawkins
59963fc47e
Camera Wizard tweaks (#20773)
* add switch to use go2rtc ffmpeg mode

* i18n

* move testing state outside of button
2025-11-03 08:42:38 -07:00
Nicolas Mowen
31fa87ce73
Correctly remove classification model from config (#20772)
* Correctly remove classification model from config

* Undo

* fix

* Use existing config update API and dynamically remove models that were running

* Set update message for face
2025-11-03 08:01:30 -07:00
Nicolas Mowen
740c618240
Fix review summary for DST (#20770)
* Fix review summary for DST

* Fix
2025-11-03 07:34:47 -06:00
Nicolas Mowen
4f76b34f44
Classification fixes (#20771)
* Fully delete a model

* Fix deletion dialog

* Fix classification back step

* Adjust selection gradient

* Fix

* Fix
2025-11-03 07:34:06 -06:00
Josh Hawkins
d44340eca6
Tracked Object Details pane tweaks (#20762)
* normalize path and points sizes

* fix bounding box display to only show on actual points that have a box

* add support for using snapshots
2025-11-02 06:48:43 -07:00
GuoQing Liu
aff82f809c
feat: add search filter group audio i18n (#20760) 2025-11-02 07:45:24 -06:00
Josh Hawkins
1e50d83d06
create i18n key for list separator and use in zones (#20749)
2025-11-01 12:20:32 -06:00
Josh Hawkins
36fb27ef56
Refactor Tracked Object Details dialog (#20748)
* detail stream settings

* remove old review detail dialog

* change layout

* use detail stream in tracking details

* reusable tabs component

* pass in tabs for desktop

* fix object selection and time updating

* i18n

* aspect fixes

* include tolerance for displaying path and zone overlays (a sketch follows this commit)

some browsers (Firefox, and probably Brave) intentionally reduce the precision of currentTime seeking for privacy reasons

* detail stream seeking fixes

* tracking details seeking fixes

* layout tweaks

* add download button back for now

* remove

* remove

* snapshot is now default tab
2025-11-01 09:19:30 -05:00
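A minimal sketch of the seek tolerance described in the commit above (the helper name and tolerance value are illustrative assumptions, not Frigate's actual code): treat any currentTime within a small window of the target as a match, since browsers that coarsen currentTime will never land exactly on the requested time.

```ts
// Illustrative sketch: SEEK_TOLERANCE_S and isNearTimestamp are hypothetical
// names, and the tolerance value is an assumption.
const SEEK_TOLERANCE_S = 0.2;

function isNearTimestamp(video: HTMLVideoElement, targetTime: number): boolean {
  // Compare within a tolerance rather than for exact equality, so path and
  // zone overlays still render when the browser reports imprecise times.
  return Math.abs(video.currentTime - targetTime) <= SEEK_TOLERANCE_S;
}
```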
Nicolas Mowen
9937a7cc3d
Add ability to delete classification models (#20747)
* fix typo

* Add ability to delete classification models
2025-11-01 09:11:24 -05:00
Nicolas Mowen
7aac6b4f21
Don't remove tensorflow on trt (#20743)
2025-10-31 16:17:41 -05:00
Nicolas Mowen
338b681ed0
Various Tweaks (#20742)
* Pull context size from openai models

* Adjust wording based on type of model

* Instruct not to use parentheses

* Simplify genai config

* Don't use GPU for training
2025-10-31 12:40:31 -06:00
Nicolas Mowen
685f2c5030
Mark delivery objects with a marker so the LLM knows what is happening (#20736)
2025-10-31 07:14:34 -05:00
Nicolas Mowen
971521cd8e
Review description updates (#20723)
* Update docs for review descriptions

* Add logging for context tokens used

* Increase number of images due to lower-than-expected context usage

* Re-balance the suspicious activity checks

* Adjustments to context sizing

* optimize context usage

* Adjust context usage

* Make title more direct

* Update docs
2025-10-30 08:52:55 -06:00
Nicolas Mowen
fd1eb64921
Add qwen3 vl (#20720) 2025-10-29 21:31:25 -05:00
Josh Hawkins
dbbe40bd27
Improve Details Settings (#20718)
* detail stream settings

* fix mobile landscape

* mobile landscape

* tweak

* tweaks
2025-10-29 16:03:38 -06:00
Josh Hawkins
901002a0a5
Add zoom icons to timeline (#20717)
* add icons to zoom timeline

* fix current zoom level handling

* ensure mobile buttons don't stay selected

* remove icons on event review timeline

* add tooltips
2025-10-29 12:04:29 -06:00
Hosted Weblate
62bc2aeaab Added translation using Weblate (Cantonese (Traditional Han script))
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
bcff1ee9aa Added translation using Weblate (Persian (Old))
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
ccf17439c7 Translated using Weblate (Norwegian Bokmål)
Currently translated at 100.0% (593 of 593 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 84.3% (500 of 593 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (206 of 206 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 76.5% (454 of 593 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (204 of 204 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 75.8% (444 of 585 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (10 of 10 strings)

Added translation using Weblate (Norwegian Bokmål)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nb_NO/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-29 08:59:49 -06:00
Hosted Weblate
e20c0b99e5 Added translation using Weblate (Abkhazian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
da3cdbbdef Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (593 of 593 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (206 of 206 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 76.6% (95 of 124 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 98.1% (53 of 54 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 97.0% (33 of 34 strings)

Added translation using Weblate (Chinese (Simplified Han script))

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (584 of 584 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
766c619417 Added translation using Weblate (Chinese (Traditional Han script))
Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 12.1% (71 of 585 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Ladybug.H <klat3521@gmail.com>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hant/
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
0f9e910f22 Added translation using Weblate (Urdu)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
24ea361556 Translated using Weblate (Slovenian)
Currently translated at 27.5% (24 of 87 strings)

Translated using Weblate (Slovenian)

Currently translated at 30.7% (180 of 585 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Slovenian)

Currently translated at 26.8% (157 of 585 strings)

Translated using Weblate (Slovenian)

Currently translated at 19.1% (112 of 585 strings)

Translated using Weblate (Slovenian)

Currently translated at 1.1% (1 of 87 strings)

Translated using Weblate (Slovenian)

Currently translated at 16.7% (98 of 585 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (51 of 51 strings)

Added translation using Weblate (Slovenian)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: tadythefish <tady.the.fish@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sl/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
debdd827b4 Translated using Weblate (Slovak)
Currently translated at 74.1% (434 of 585 strings)

Translated using Weblate (Slovak)

Currently translated at 86.6% (434 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 65.2% (382 of 585 strings)

Translated using Weblate (Slovak)

Currently translated at 76.8% (385 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Slovak)

Currently translated at 62.7% (367 of 585 strings)

Translated using Weblate (Slovak)

Currently translated at 73.8% (370 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 83.9% (73 of 87 strings)

Translated using Weblate (Slovak)

Currently translated at 96.0% (49 of 51 strings)

Added translation using Weblate (Slovak)

Translated using Weblate (Slovak)

Currently translated at 62.0% (363 of 585 strings)

Translated using Weblate (Slovak)

Currently translated at 99.1% (123 of 124 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Slovak)

Currently translated at 98.0% (50 of 51 strings)

Translated using Weblate (Slovak)

Currently translated at 72.8% (365 of 501 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
5c262610a2 Added translation using Weblate (Korean)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
684f074e48 Added translation using Weblate (Serbian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
d26984db0e Added translation using Weblate (Finnish)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
c3738b5dbb Translated using Weblate (Persian)
Currently translated at 13.7% (12 of 87 strings)

Translated using Weblate (Persian)

Currently translated at 1.6% (2 of 124 strings)

Added translation using Weblate (Persian)

Co-authored-by: Amir Hossein Omidi <amirhosein011omidi@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fa/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-explore
2025-10-29 08:59:49 -06:00
Hosted Weblate
7ad62ea451 Translated using Weblate (Swedish)
Currently translated at 59.7% (52 of 87 strings)

Translated using Weblate (Swedish)

Currently translated at 96.7% (120 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Swedish)

Currently translated at 48.2% (42 of 87 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Swedish)

Currently translated at 87.9% (109 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 40.2% (35 of 87 strings)

Translated using Weblate (Swedish)

Currently translated at 99.1% (580 of 585 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Swedish)

Currently translated at 82.2% (102 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (53 of 53 strings)

Added translation using Weblate (Swedish)

Translated using Weblate (Swedish)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Swedish)

Currently translated at 99.1% (123 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Daniel Nylander <daniel@danielnylander.se>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
78c04da114 Translated using Weblate (French)
Currently translated at 100.0% (593 of 593 strings)

Translated using Weblate (French)

Currently translated at 100.0% (206 of 206 strings)

Translated using Weblate (French)

Currently translated at 98.1% (582 of 593 strings)

Translated using Weblate (French)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (French)

Currently translated at 100.0% (204 of 204 strings)

Translated using Weblate (French)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (French)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (French)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (French)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (French)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (French)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (French)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (French)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (French)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (French)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (French)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (French)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (French)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (French)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (French)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (87 of 87 strings)

Added translation using Weblate (French)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (French)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (French)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (French)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (French)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (French)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (French)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (French)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (French)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (French)

Currently translated at 100.0% (584 of 584 strings)

Translated using Weblate (French)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (French)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (French)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (French)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (French)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (French)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (French)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (French)

Currently translated at 98.8% (89 of 90 strings)

Translated using Weblate (French)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (French)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (French)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (French)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Pascal Courtonne <pascal@bobbz.org>
Co-authored-by: Yanom1212 <ylamarche@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-29 08:59:49 -06:00
Hosted Weblate
e7c17b9a5d Added translation using Weblate (Spanish)
Translated using Weblate (Spanish)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Spanish)

Currently translated at 72.1% (420 of 582 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Israel Cabrera <issurfer@gmail.com>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
04cefd0381 Translated using Weblate (Dutch)
Currently translated at 100.0% (593 of 593 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (54 of 54 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (206 of 206 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (87 of 87 strings)

Added translation using Weblate (Dutch)

Translated using Weblate (Dutch)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (584 of 584 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Co-authored-by: Patrick <github@derr.eu>
Co-authored-by: kheno <kheno@go9.be>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
f6d8014678 Added translation using Weblate (Indonesian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
8eeae9ac4d Added translation using Weblate (Arabic)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
fec9c415ce Added translation using Weblate (Italian)
Translated using Weblate (Italian)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
a6249d61b7 Translated using Weblate (Polish)
Currently translated at 3.4% (3 of 87 strings)

Translated using Weblate (Polish)

Currently translated at 68.9% (409 of 593 strings)

Translated using Weblate (Polish)

Currently translated at 94.1% (48 of 51 strings)

Added translation using Weblate (Polish)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: Łukasz Czajor <czajor.lukasz@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
b961e61b67 Added translation using Weblate (Hebrew)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
f815a462e9 Added translation using Weblate (Hindi)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
796f5965a1 Added translation using Weblate (Hungarian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
8cacb998c9 Added translation using Weblate (Croatian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
e4f10b2955 Added translation using Weblate (Vietnamese)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
c413ce5973 Added translation using Weblate (Portuguese)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
70b0048fe0 Added translation using Weblate (Czech)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
c53f5ee9a6 Translated using Weblate (Catalan)
Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (34 of 34 strings)

Added translation using Weblate (Catalan)

Translated using Weblate (Catalan)

Currently translated at 100.0% (585 of 585 strings)

Translated using Weblate (Catalan)

Currently translated at 99.2% (497 of 501 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Catalan)

Currently translated at 93.0% (466 of 501 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Catalan)

Currently translated at 89.8% (450 of 501 strings)

Translated using Weblate (Catalan)

Currently translated at 97.0% (563 of 580 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Catalan)

Currently translated at 98.3% (59 of 60 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Catalan)

Currently translated at 85.2% (427 of 501 strings)

Co-authored-by: Eduardo Pastor Fernández <123eduardoneko123@gmail.com>
Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
e7f9617d93 Translated using Weblate (Japanese)
Currently translated at 6.8% (6 of 87 strings)

Translated using Weblate (Japanese)

Currently translated at 94.1% (48 of 51 strings)

Added translation using Weblate (Japanese)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: gon 360 <gon360@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ja/
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-facelibrary
2025-10-29 08:59:49 -06:00
Hosted Weblate
6a3de88fa0 Added translation using Weblate (Ukrainian)
Translated using Weblate (Ukrainian)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (584 of 584 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
8b43935510 Added translation using Weblate (Bulgarian)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
bf3221deb4 Added translation using Weblate (Romanian)
Translated using Weblate (Romanian)

Currently translated at 100.0% (582 of 582 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
a4a0bef2d7 Added translation using Weblate (Russian)
Translated using Weblate (Russian)

Currently translated at 75.9% (442 of 582 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (199 of 199 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: internetson <sockmancore@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
5fb5a185e0 Added translation using Weblate (Greek)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
093891b439 Added translation using Weblate (Danish)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
e3b46372ab Added translation using Weblate (German)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
e9bb280419 Translated using Weblate (Portuguese (Brazil))
Currently translated at 16.0% (14 of 87 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (13 of 13 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 94.1% (32 of 34 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 10.3% (9 of 87 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (51 of 51 strings)

Added translation using Weblate (Portuguese (Brazil))

Translated using Weblate (Portuguese (Brazil))

Currently translated at 78.6% (460 of 585 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.8% (89 of 90 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
f8983cee1e Added translation using Weblate (Tamil)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
8b191c69b1 Added translation using Weblate (Thai)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
2e622cb54c Translated using Weblate (Lithuanian)
Currently translated at 39.0% (34 of 87 strings)

Translated using Weblate (Lithuanian)

Currently translated at 77.4% (453 of 585 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (51 of 51 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (34 of 34 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Lithuanian)

Currently translated at 85.2% (427 of 501 strings)

Added translation using Weblate (Lithuanian)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-classificationmodel/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-classificationmodel
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-29 08:59:49 -06:00
Hosted Weblate
ef171ffd31 Added translation using Weblate (Turkish)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Hosted Weblate
98731e86f5 Added translation using Weblate (Galician)
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
2025-10-29 08:59:49 -06:00
Nicolas Mowen
29bc213c04
Various Tweaks (#20713)
* Adjust for commutes

* Tweaks

* Don't show no models view in grid

* Add text-md to inputs

* Adjust train title for mobile

* Cleanup prompt more

* Use i18n functions for tooltip

* Fix model complexity causing crash

* Cleanup
2025-10-29 09:40:50 -05:00
Josh Hawkins
61549a0151
No recordings indicator on History timeline (#20715)
* black background

* fix backend logic

* fixes

* ensure data being sent to api is segment aligned

* tweak

* tweaks to keep motion review as-is

* fix for half-segment fractional seconds when zooming
2025-10-29 08:39:07 -06:00
Thibault Junin
9917fc3169
feat(player): always show camera names + add UI config toggle (#20705)
* feat(player): always show camera names + add UI config toggle

* feat(settings): add toggle for displaying camera names in multi-camera views

* update label and description for camera name setting
2025-10-29 09:20:11 -05:00
Josh Hawkins
576f692dae
Trigger actions (#20709)
* add backend trigger actions

* config

* frontend types

* add actions to form and wizard

* i18n

* docs

* use camera level notification enabled check
2025-10-28 15:13:04 -06:00
Josh Hawkins
6ccf8cd2b8
install python3-h2 to fix webpush issues for some users (#20706)
2025-10-28 09:24:44 -05:00
Nicolas Mowen
16e17e027d
Review prompt adjustments (#20704)
* Make prompt more fair and reduce time extension

* Adjust naming of unrecognized objects

* Improve object naming behavior

* Add more context image levels
2025-10-28 08:28:36 -05:00
Josh Hawkins
c2cbb0fa87
improve i18n for lists of text/labels (#20696)
2025-10-27 18:53:18 -05:00
Josh Hawkins
640007e5d3
Trigger Wizard (#20691)
* add reusable component for combined name / internal name form field

* fix labels

* refactor utilities

* refactor image picker

* lazy loading

* don't clear text box

* trigger wizard

* image picker fixes

* use name and ID field in trigger edit dialog

* ensure wizard resets when reopening

* icon size tweak

* multiple triggers can trigger at once

* remove scrolling

* mobile tweaks

* remove duplicated component

* fix types

* use table on desktop and keep cards on mobile

* provide default
2025-10-27 14:58:31 -05:00
Nicolas Mowen
710a77679b
Improve default genai review prompt structure (#20690) 2025-10-27 11:34:39 -05:00
Josh Hawkins
893fe79d22
UI tweaks (#20687)
* add blurred icon button component

* apply component to explore, face, and classification views

* apply to exports and fix bug where play button was unclickable
2025-10-27 07:44:34 -05:00
Nicolas Mowen
5ff7a47ba9
Unify list of objects under dedicated section (#20684)
* Unify list of objects under dedicated section

* Use helper function
2025-10-26 16:37:57 -05:00
Josh Hawkins
5715ed62ad
More detail pane tweaks (#20681)
* More detail pane tweaks

* remove unneeded check

* add ability to submit frames to frigate+

* rename object lifecycle to tracking details

* add object mask creation to lifecycle item menu

* change tracking details icon
2025-10-26 13:12:20 -05:00
Nicolas Mowen
43706eb48d
Add button to view exports when exported (#20682) 2025-10-26 13:11:48 -05:00
Nicolas Mowen
190925375b
Classification fixes (#20677)
* Don't run classification on stationary objects and set a maximum number of classifications

* Fix layout of classification selection
2025-10-26 08:41:18 -05:00
Nicolas Mowen
094a0a6e05
Add ability to change source of images for review descriptions (#20676)
* Add ability to change source of images for review descriptions

* Undo
2025-10-26 08:40:38 -05:00
Josh Hawkins
840d567d22
UI tweaks (#20675)
* spacing tweaks and add link to explore for plate

* clear selected objects when changing cameras

* plate link and spacing in object lifecycle

* set tabindex to prevent tooltip from showing on reopen

* show month and day in object lifecycle timestamp
2025-10-26 07:27:07 -05:00
Josh Hawkins
2c480b9a89
Fix History layout for mobile portrait cameras (#20669)
2025-10-25 19:44:06 -05:00
Nicolas Mowen
1fb21a4dac
Classification improvements (#20665)
* Don't classify objects that are ended

* Use weighted scoring for object classification

* Implement state verification
2025-10-25 16:15:49 -06:00
Josh Hawkins
63042b9c08
Review stream tweaks (#20662)
* tweak api to fetch multiple timelines

* support multiple selected objects in context

* rework context provider

* use toggle in detail stream

* use toggle in menu

* plot multiple object tracks

* verified icon, recognized plate, and clicking tweaks

* add plate to object lifecycle

* close menu before opening frigate+ dialog

* clean up

* normal text case for tooltip

* capitalization

* use flexbox for recording view
2025-10-25 16:15:36 -06:00
Nicolas Mowen
0a6b9f98ed
Various fixes (#20666)
* Remove nvidia pyindex

* Improve prompt
2025-10-25 16:40:04 -05:00
Blake Blackshear
32875fb4cc Merge remote-tracking branch 'origin/master' into dev
2025-10-25 11:16:09 +00:00
Josh Hawkins
9ec65d7aa9
Review stream tweaks (#20656)
* add blue dot instead of blue outline

* fix layout for portrait cameras

* fix light mode
2025-10-24 16:30:12 -06:00
Nicolas Mowen
83fa651ada
Various fixes (#20655)
2025-10-24 16:38:35 -05:00
Josh Hawkins
eb51eb3c9d
UI tweaks (#20649)
* match face wizard with camera and classification wizards

* remove review detail dialog and link chip to detail stream in history

* remove footer on explore images and move to overlay

* use consistent overlay button styles

* spacing tweak

* ensure selected ring stays on top of gradients

* fix z-index

* match object lifecycle with details
2025-10-24 11:08:59 -06:00
Josh Hawkins
49f5d595ea
Review stream tweaks (#20648)
* add detail stream selector to mobile drawer

* tweak getDurationFromTimestamps for i18n and abbreviations

* improve lifecycle description labeling

* i18n

* match figma

* fix progress line and add area and ratio tooltip

* allow clicking on chevron without triggering playback

* tweaks

* add key

* change wording

* clean up

* clean up

* remove check

* clean up
2025-10-24 07:50:06 -05:00
Josh Hawkins
e2da8aa04c
Camera wizard tweaks (#20643)
* dialog size tweaks

* move icon and make links in popovers clickable

* colors and clickable links
2025-10-23 13:58:22 -06:00
Nicolas Mowen
f5a57edcc9
Implement Wizard for Creating Classification Models (#20622)
* Implement extraction of images for classification state models

* Add object classification dataset preparation

* Add first step wizard

* Update i18n

* Add state classification image selection step

* Improve box handling

* Add object selector

* Improve object cropping implementation

* Fix state classification selection

* Finalize training and image selection step

* Cleanup

* Design optimizations

* Cleanup mobile styling

* Update no models screen

* Cleanups and fixes

* Fix bugs

* Improve model training and creation process

* Cleanup

* Dynamically add metrics for new model

* Add loading when hitting continue

* Improve image selection mechanism

* Remove unused translation keys

* Adjust wording

* Add retry button for image generation

* Make no models view more specific

* Adjust plus icon

* Adjust form label

* Start with correct type selected

* Cleanup sizing and more font colors

* Small tweaks

* Add tips and more info

* Cleanup dialog sizing

* Add cursor rule for frontend

* Cleanup

* remove underline

* Lazy loading
2025-10-23 13:27:28 -06:00
Nicolas Mowen
4df7793587
Set correct nginx container value (#20641) 2025-10-23 12:44:12 -06:00
Josh Hawkins
ac5de290ab
fix missing i18n key (#20639) 2025-10-23 12:40:06 -05:00
Nicolas Mowen
8c3c596dee
Fix ffmpeg command (#20637) 2025-10-23 11:51:16 -05:00
Nicolas Mowen
c5def83e08
Always use fmp4 for HLS (#20638) 2025-10-23 11:50:37 -05:00
Josh Hawkins
81df534784
Camera wizard improvements (#20636)
* use avg_frame_rate

* probe metadata and snapshot separately

* improve ffprobe error reporting

* show error messages in toaster
2025-10-23 08:34:52 -05:00
Nicolas Mowen
0d5cfa2e38
Actually set filter back to undefined (#20624)
2025-10-22 18:40:39 -05:00
Josh Hawkins
b38f830b3b
Improve camera wizard stream validation (#20620)
2025-10-22 13:01:16 -06:00
Josh Hawkins
2387dccc19
Add login page docs hint (#20619)
* add config field

* add endpoint

* set config var when onboarding

* add no auth exception to nginx config

* form changes and i18n

* clean up
2025-10-22 12:24:53 -05:00
Nicolas Mowen
d6f5d2b0fa
Classification Model UI Refactor (#20602)
* Add cutoff for object classification

* Add selector for classification model type

* Improve model selection view

* Clean up design of classification card

* Tweaks

* Adjust button colors

* Improvements to gradients and making face library consistent

* Add basic classification model wizard

* Use relative coordinates

* Properly get resolution

* Clean up exports

* Cleanup

* Cleanup

* Update to use pre-defined component for image shadow

* Refactor image grouping

* Clean up mobile

* Clean up decision logic

* Remove max check on classification objects

* Increase default number of faces shown

* Cleanup

* Improve mobile layout

* Cleanup

* Update vocabulary

* Fix layout

* Fix page

* Cleanup

* Choose last item for unknown objects

* Move explore button

* Cleanup grid

* Cleanup classification

* Cleanup grid

* Cleanup

* Set transparency

* Set unknown

* Don't filter all configs

* Check length
2025-10-22 07:36:09 -06:00
CurseGroup France
9638e85a1f
Fix config for GPU optimization (#20613)
Co-authored-by: Clement Rousseau <clement.rousseau@pmb-software.fr>
2025-10-22 05:02:40 -06:00
Nicolas Mowen
c5fe354552
Improve Reolink Camera Documentation (#20605)
* Improve Reolink Camera Documentation

* Update Reolink configuration link in live.md
2025-10-21 16:20:41 -06:00
Nicolas Mowen
007371019a
Set default values for LLM performance (#20606)
2025-10-21 17:10:00 -05:00
Hosted Weblate
21ff257705 Translated using Weblate (Cantonese (Traditional Han script))
Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Cantonese (Traditional Han script))

Currently translated at 100.0% (193 of 193 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: beginner2047 <leoywng44@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/yue_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/yue_Hant/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
adb45e318f Added translation using Weblate (Persian (Old))
Added translation using Weblate (Persian (Old))

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/
Translation: Frigate NVR/common
2025-10-21 10:49:33 -06:00
Hosted Weblate
5225d599b9 Translated using Weblate (Norwegian Bokmål)
Currently translated at 92.2% (83 of 90 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 74.4% (432 of 580 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Swedish)

Currently translated at 96.8% (485 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 94.8% (475 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 94.4% (473 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 93.6% (469 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 89.6% (449 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (577 of 577 strings)

Translated using Weblate (Swedish)

Currently translated at 80.1% (461 of 575 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
da2f414f83 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 96.9% (32 of 33 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 90.3% (523 of 579 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 97.7% (85 of 87 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hans/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
795bd9908c Translated using Weblate (Slovak)
Currently translated at 61.7% (358 of 580 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Slovak)

Currently translated at 70.2% (352 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 61.8% (358 of 579 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Slovak)

Currently translated at 64.8% (325 of 501 strings)

Translated using Weblate (Slovak)

Currently translated at 54.7% (316 of 577 strings)

Translated using Weblate (Slovak)

Currently translated at 41.7% (241 of 577 strings)

Translated using Weblate (Slovak)

Currently translated at 98.4% (196 of 199 strings)

Translated using Weblate (Slovak)

Currently translated at 34.0% (196 of 575 strings)

Translated using Weblate (Slovak)

Currently translated at 97.4% (194 of 199 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 51.9% (243 of 468 strings)

Translated using Weblate (Slovak)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Slovak)

Currently translated at 98.4% (190 of 193 strings)

Translated using Weblate (Slovak)

Currently translated at 61.3% (262 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 91.5% (108 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 23.0% (108 of 468 strings)

Translated using Weblate (Slovak)

Currently translated at 88.5% (108 of 122 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Slovak)

Currently translated at 91.5% (108 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 56.9% (110 of 193 strings)

Translated using Weblate (Slovak)

Currently translated at 29.2% (125 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 87.2% (103 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 22.0% (103 of 467 strings)

Translated using Weblate (Slovak)

Currently translated at 84.4% (103 of 122 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Slovak)

Currently translated at 88.1% (104 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 55.2% (106 of 192 strings)

Translated using Weblate (Slovak)

Currently translated at 28.3% (121 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Co-authored-by: Jakub T <jakub.tilesch@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
bbc130de18 Translated using Weblate (Korean)
Currently translated at 96.6% (114 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 10.3% (60 of 580 strings)

Translated using Weblate (Korean)

Currently translated at 13.9% (17 of 122 strings)

Translated using Weblate (Korean)

Currently translated at 83.3% (50 of 60 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Korean)

Currently translated at 93.9% (31 of 33 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Korean)

Currently translated at 13.5% (68 of 501 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Korean)

Currently translated at 99.1% (117 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Korean)

Currently translated at 69.8% (37 of 53 strings)

Translated using Weblate (Korean)

Currently translated at 98.9% (197 of 199 strings)

Translated using Weblate (Korean)

Currently translated at 7.0% (33 of 468 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Korean)

Currently translated at 71.1% (37 of 52 strings)

Translated using Weblate (Korean)

Currently translated at 13.1% (16 of 122 strings)

Translated using Weblate (Korean)

Currently translated at 6.6% (4 of 60 strings)

Translated using Weblate (Korean)

Currently translated at 60.8% (28 of 46 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Korean)

Currently translated at 15.6% (67 of 427 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 70.2% (59 of 84 strings)

Translated using Weblate (Korean)

Currently translated at 8.1% (35 of 427 strings)

Translated using Weblate (Korean)

Currently translated at 55.0% (65 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 23.6% (17 of 72 strings)

Co-authored-by: GGAMBI <mmxdog@empal.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ko/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
ce07dae4ec Translated using Weblate (Swedish)
Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Swedish)

Currently translated at 98.8% (495 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Swedish)

Currently translated at 96.8% (485 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 94.8% (475 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 94.4% (473 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 93.6% (469 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 89.6% (449 of 501 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (577 of 577 strings)

Translated using Weblate (Swedish)

Currently translated at 80.1% (461 of 575 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
cb7009102e Translated using Weblate (French)
Currently translated at 97.6% (489 of 501 strings)

Translated using Weblate (French)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (French)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (French)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (French)

Currently translated at 96.4% (483 of 501 strings)

Translated using Weblate (French)

Currently translated at 77.8% (451 of 579 strings)

Translated using Weblate (French)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (French)

Currently translated at 73.7% (427 of 579 strings)

Translated using Weblate (French)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (French)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (French)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (French)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (French)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Pascal Courtonne <pascal@bobbz.org>
Co-authored-by: Sylvain LEROY <syl_tigra@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
2a7a9323c3 Translated using Weblate (Spanish)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Spanish)

Currently translated at 95.9% (448 of 467 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Reydel Leon Machado <contact@reydelleon.me>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/es/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
b55615196e Translated using Weblate (Dutch)
Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Dutch)

Currently translated at 72.3% (419 of 579 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
d384f2fc32 Translated using Weblate (Indonesian)
Currently translated at 1.7% (10 of 579 strings)

Translated using Weblate (Indonesian)

Currently translated at 20.1% (86 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Toni Tan <toni@tan.lc>
Co-authored-by: Tukimin Satrio <k797du3eh@mozmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/id/
Translation: Frigate NVR/audio
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
3eac652ba2 Translated using Weblate (Italian)
Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Italian)

Currently translated at 86.8% (435 of 501 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (575 of 575 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Italian)

Currently translated at 72.8% (419 of 575 strings)

Translated using Weblate (Italian)

Currently translated at 97.7% (85 of 87 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (84 of 84 strings)

Co-authored-by: GMagician <gmagician@users.noreply.hosted.weblate.org>
Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
45fabab417 Translated using Weblate (Polish)
Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (193 of 193 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Patryk Smoliński <smolinski.patryk@mensa.org.pl>
Co-authored-by: Wojciech Niziński <niziak-weblate@spox.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
8928b03497 Translated using Weblate (Hebrew)
Currently translated at 97.4% (115 of 118 strings)

Translated using Weblate (Hebrew)

Currently translated at 55.6% (323 of 580 strings)

Translated using Weblate (Hebrew)

Currently translated at 85.5% (77 of 90 strings)

Translated using Weblate (Hebrew)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Hebrew)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Hebrew)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Hebrew)

Currently translated at 93.9% (31 of 33 strings)

Translated using Weblate (Hebrew)

Currently translated at 95.0% (476 of 501 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kipper Deceit <kipper-deceit.50@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/he/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
7fc4822594 Translated using Weblate (Hungarian)
Currently translated at 73.7% (427 of 579 strings)

Translated using Weblate (Hungarian)

Currently translated at 97.7% (88 of 90 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (193 of 193 strings)

Co-authored-by: Balázs Bencs <beniboy87@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Zsolt Fojtyik <zsozso830316@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
33725ddae9 Translated using Weblate (Portuguese)
Currently translated at 80.8% (468 of 579 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt/
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
b336bdec03 Translated using Weblate (Czech)
Currently translated at 71.0% (412 of 580 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Libor Šafář <liborsaf9@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
06cbdf6cce Translated using Weblate (Catalan)
Currently translated at 72.8% (422 of 579 strings)

Translated using Weblate (Catalan)

Currently translated at 95.5% (86 of 90 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (199 of 199 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: anton garcias <isaga.percompartir@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
2ae4203dac Translated using Weblate (Japanese)
Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (577 of 577 strings)

Translated using Weblate (Japanese)

Currently translated at 73.3% (423 of 577 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Japanese)

Currently translated at 98.9% (197 of 199 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 72.5% (417 of 575 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Japanese)

Currently translated at 96.9% (193 of 199 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: yhi264 <yhiraki@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ja/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
d420816376 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (577 of 577 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (575 of 575 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/uk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
daa78361c5 Translated using Weblate (Romanian)
Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (580 of 580 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (501 of 501 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (579 of 579 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (90 of 90 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Romanian)

Currently translated at 92.6% (464 of 501 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: THT <andreyavram@yahoo.com>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ro/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-21 10:49:33 -06:00
Hosted Weblate
6bc75e72b2 Translated using Weblate (Russian)
Currently translated at 91.6% (428 of 467 strings)

Translated using Weblate (Russian)

Currently translated at 98.8% (83 of 84 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Igor Kalinin <stigory@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
534db717c4 Translated using Weblate (Greek)
Currently translated at 3.9% (23 of 579 strings)

Translated using Weblate (Greek)

Currently translated at 32.2% (29 of 90 strings)

Translated using Weblate (Greek)

Currently translated at 40.2% (80 of 199 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: George Betsis <gbetsis@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/el/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
6220f337d9 Translated using Weblate (German)
Currently translated at 99.6% (577 of 579 strings)

Translated using Weblate (German)

Currently translated at 99.6% (577 of 579 strings)

Translated using Weblate (German)

Currently translated at 100.0% (91 of 91 strings)

Translated using Weblate (German)

Currently translated at 96.3% (558 of 579 strings)

Translated using Weblate (German)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (German)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (German)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (German)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (German)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (German)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (German)

Currently translated at 100.0% (60 of 60 strings)

Translated using Weblate (German)

Currently translated at 100.0% (193 of 193 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Patrick Polsterer <patrick.polsterer@gmail.com>
Co-authored-by: mvdberge <micha.vordemberge@christmann.info>
Co-authored-by: sandronidi <sandro.niederhauser@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
32781af2a5 Translated using Weblate (Portuguese (Brazil))
Currently translated at 79.1% (458 of 579 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 98.8% (89 of 90 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 96.6% (87 of 90 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (33 of 33 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 86.0% (431 of 501 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 76.4% (441 of 577 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Helder Santana <helder.santana@systemsbr.com.br>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Hosted Weblate
bd10b13bc3 Translated using Weblate (Lithuanian)
Currently translated at 96.9% (32 of 33 strings)

Translated using Weblate (Lithuanian)

Currently translated at 78.7% (453 of 575 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (87 of 87 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (199 of 199 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (468 of 468 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (193 of 193 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (467 of 467 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Lithuanian)

Currently translated at 98.8% (83 of 84 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (61 of 61 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kostas Čaplinskas <pokemonm360@gmail.com>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Co-authored-by: pcxtx <pcxtx@yahoo.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-21 10:49:33 -06:00
Josh Hawkins
c5fec3271f
Improve matching go2rtc stream names with cameras (#20586)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* improve matching go2rtc stream names with cameras

* fix unrelated lint issue
2025-10-20 10:33:02 -05:00
Josh Hawkins
0743cb57c2
fix birdseye and empty card (#20582) 2025-10-20 07:03:22 -06:00
Josh Hawkins
4319118e94
enforce at least one letter in zone names (#20561)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
2025-10-19 05:21:15 -06:00
Francesco Durighetto
4c689dde8e
Add optional idle heartbeat for Birdseye (#20453)
* Add optional idle heartbeat for Birdseye (periodic frame emission when idle)

birdseye: add optional idle heartbeat and FFmpeg tuning envs (default off)

This adds an optional configuration field `birdseye.idle_heartbeat_fps` to
enable a lightweight idle heartbeat mechanism in Birdseye. When set to a value
greater than 0, Birdseye periodically re-sends the last composed frame during
idle periods (no motion or active updates).

This helps downstream consumers such as go2rtc, Alexa, or Scrypted to attach
faster and maintain a low-latency RTSP stream when the system is idle.

Key details:
- Config-based (`birdseye.idle_heartbeat_fps`), default `0` (disabled).
- Uses existing Birdseye rendering pipeline; minimal performance impact.
- Does not alter behavior when unset.

Documentation: added tip section in docs/configuration/restream.md.
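For illustration, a minimal sketch of turning the heartbeat on (the `idle_heartbeat_fps` field comes from this commit; `enabled` and `restream` are existing Birdseye options, and `2` is an arbitrary value):

```yaml
birdseye:
  enabled: true
  restream: true
  # Re-send the last composed frame at 2 fps while idle.
  # Default is 0 (disabled); values above 10 are capped at 10.
  idle_heartbeat_fps: 2
```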

* Update docs/docs/configuration/restream.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/configuration/reference.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Refactors Birdseye idle frame broadcasting

Simplifies the idle frame broadcasting logic by removing the dedicated thread.

The idle frame is now resent directly within the main loop,
improving efficiency and reducing complexity.  Also, limits the idle
heartbeat FPS to a maximum of 10 since the framebuffer is limited to 10 anyway

* ruff fix

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Francesco Durighetto <francesco.durighetto@subbyx.com>
Co-authored-by: duri <duri@homelabubuntu.durihome.unifi>
2025-10-19 05:20:36 -06:00
Nicolas Mowen
f6f555387e
Fix timeline attribute assumption (#20555) 2025-10-18 22:31:55 -05:00
Josh Hawkins
a2396db2aa
Detail Stream tweaks (#20553)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* show audio events in detail stream

* refactor object lifecycle to look similar to detail stream

* pass detail stream as prop to avoid context error

* fix highlighting timing

* add view in explore to menu
2025-10-18 12:19:21 -06:00
Josh Hawkins
5dc8a85f2f
Update Azure OpenAI genai docs (#20549)
* Update azure openai genai docs

* tweak url
2025-10-18 06:44:26 -06:00
Nicolas Mowen
a8bcc109a9
Add support for Intel NPU stats (#20542)
Some checks are pending
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
2025-10-17 08:06:41 -05:00
Nicolas Mowen
4228861810
Improve Intel Model (#20541)
* Update supported models and inference times

* Fix d-fine inputs

* Improve d-fine
2025-10-17 06:31:28 -06:00
Nicolas Mowen
0302db1c43
Fix model exports (#20540) 2025-10-17 07:16:30 -05:00
Nicolas Mowen
d7275a3c1a
Add support for Intel NPU (#20536) 2025-10-17 05:58:59 -05:00
Josh Hawkins
60789f7096
Detail Stream tweaks (#20533)
Some checks are pending
CI / ARM Extra Build (push) Blocked by required conditions
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
2025-10-16 14:15:23 -06:00
Nicolas Mowen
9599450cff
Add GenAI info to detail stream (#20527)
Some checks are pending
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* Add camera previews back

* Add genai info
2025-10-16 07:55:10 -06:00
Josh Hawkins
b52044aecc
Add Detail stream in History view (#20525)
* new type

* activity stream panel

* use context provider for activity stream

* new activity stream panel in history view

* overlay for object tracking details in history view

* use overlay in video player

* don't refetch timeline

* fix activity stream group from being highlighted prematurely

* use annotation offset

* fix scrolling and use custom hook for interaction

* memoize to prevent unnecessary renders

* i18n and timestamp formatting

* add annotation offset slider

* bg color

* add collapsible component

* refactor

* rename activity to detail

* fix merge conflicts

* i18n

* more i18n
2025-10-16 07:24:14 -06:00
Nicolas Mowen
2e7a2fd780
Store and show boxes for attributes in timeline (#20513)
* Store and show boxes for attributes in timeline

* Simplify
2025-10-16 07:00:38 -06:00
Nicolas Mowen
a4764563a5
Fix YOLOv9 export script (#20514) 2025-10-16 07:56:37 -05:00
Josh Hawkins
942a61ddfb
version bump in docs (#20501) 2025-10-15 05:53:31 -06:00
Nicolas Mowen
4d582062fb
Ensure that a user must provide an image in an expected location (#20491)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 5 (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* Ensure that a user must provide an image in an expected location

* Use const
2025-10-14 16:29:20 -05:00
Nicolas Mowen
e0a8445bac
Improve rf-detr export (#20485) 2025-10-14 08:32:44 -05:00
Josh Hawkins
2a271c0f5e
Update GenAI docs for Gemini model deprecation (#20462) 2025-10-13 10:00:21 -06:00
Nicolas Mowen
925bf78811
Update review topic description (#20445) 2025-10-12 07:28:08 -05:00
Sean Kelly
59102794e8
Add keyboard shortcut for switching to previous label (#20426)
* Add keyboard shortcut for switching to previous label

* Update docs/docs/plus/annotating.md

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>

---------

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
2025-10-11 10:43:41 -06:00
mpking828
20e5e3bdc0
Update camera_specific.md to fix 2 way audio example for Reolink (#20343)
Update camera_specific.md to fix 2 way audio example for Reolink
2025-10-03 08:49:51 -06:00
AmirHossein_Omidi
b94ebda9e5
Update license_plate_recognition.md (#20306)
* Update license_plate_recognition.md

Add PaddleOCR description for license plate recognition in Frigate docs

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-10-01 08:18:47 -05:00
Nicolas Mowen
8cdaef307a
Update face rec docs (#20256)
* Update face rec docs

* clarify

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-28 11:31:59 -05:00
547 changed files with 30025 additions and 7302 deletions

View File

@ -0,0 +1,6 @@
---
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: false
---
Never write strings in the frontend directly; always write to and reference the relevant translations file.

View File

@ -5,6 +5,12 @@ set -euxo pipefail
SQLITE3_VERSION="3.46.1"
PYSQLITE3_VERSION="0.5.3"
# Install libsqlite3-dev if not present (needed for some base images like NVIDIA TensorRT)
if ! dpkg -l | grep -q libsqlite3-dev; then
echo "Installing libsqlite3-dev for compilation..."
apt-get update && apt-get install -y libsqlite3-dev && rm -rf /var/lib/apt/lists/*
fi
# Fetch the pre-built sqlite amalgamation instead of building from source
if [[ ! -d "sqlite" ]]; then
mkdir sqlite

View File

@ -20,6 +20,7 @@ apt-get -qq install --no-install-recommends -y \
libgl1 \
libglib2.0-0 \
libusb-1.0.0 \
python3-h2 \
libgomp1 # memryx detector
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
@ -95,6 +96,9 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
apt-get -qq install -y ocl-icd-libopencl1
# install libtbb12 for NPU support
apt-get -qq install -y libtbb12
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
@ -115,6 +119,11 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
# npu packages
wget https://github.com/oneapi-src/level-zero/releases/download/v1.21.9/level-zero_1.21.9+u22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-driver-compiler-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-fw-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-level-zero-npu_1.17.0.20250508-14912879441_ubuntu22.04_amd64.deb
dpkg -i *.deb
rm *.deb

View File

@ -2,9 +2,9 @@
set -e
# Download the MxAccl for Frigate github release
wget https://github.com/memryx/mx_accl_frigate/archive/refs/heads/main.zip -O /tmp/mxaccl.zip
wget https://github.com/memryx/mx_accl_frigate/archive/refs/tags/v2.1.0.zip -O /tmp/mxaccl.zip
unzip /tmp/mxaccl.zip -d /tmp
mv /tmp/mx_accl_frigate-main /opt/mx_accl_frigate
mv /tmp/mx_accl_frigate-2.1.0 /opt/mx_accl_frigate
rm /tmp/mxaccl.zip
# Install Python dependencies

View File

@ -56,7 +56,7 @@ pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
Levenshtein==0.26.*
rapidfuzz==3.12.*
# HailoRT Wheels
appdirs==1.4.*
argcomplete==2.0.*

View File

@ -1,2 +1 @@
scikit-build == 0.18.*
nvidia-pyindex

View File

@ -73,6 +73,8 @@ http {
vod_manifest_segment_durations_mode accurate;
vod_ignore_edit_list on;
vod_segment_duration 10000;
# MPEG-TS settings (not used when fMP4 is enabled, kept for reference)
vod_hls_mpegts_align_frames off;
vod_hls_mpegts_interleave_frames on;
@ -105,6 +107,10 @@ http {
aio threads;
vod hls;
# Use fMP4 (fragmented MP4) instead of MPEG-TS for better performance
# Smaller segments, faster generation, better browser compatibility
vod_hls_container_format fmp4;
secure_token $args;
secure_token_types application/vnd.apple.mpegurl;
@ -274,6 +280,18 @@ http {
include proxy.conf;
}
# Allow unauthenticated access to the first_time_login endpoint
# so the login page can load help text before authentication.
location /api/auth/first_time_login {
auth_request off;
limit_except GET {
deny all;
}
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
location /api/stats {
include auth_request.conf;
access_log off;

View File

@ -24,10 +24,13 @@ echo "Adding MemryX GPG key and repository..."
wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
# Update and install memx-drivers
echo "Installing memx-drivers..."
# Update and install specific SDK 2.1 packages
echo "Installing MemryX SDK 2.1 packages..."
sudo apt update
sudo apt install -y memx-drivers
sudo apt install -y memx-drivers=2.1.* memx-accl=2.1.* mxa-manager=2.1.*
# Hold packages to prevent automatic upgrades
sudo apt-mark hold memx-drivers memx-accl mxa-manager
# ARM-specific board setup
if [[ "$arch" == "aarch64" || "$arch" == "arm64" ]]; then
@ -37,11 +40,5 @@ fi
echo -e "\n\n\033[1;31mYOU MUST RESTART YOUR COMPUTER NOW\033[0m\n\n"
# Install other runtime packages
packages=("memx-accl" "mxa-manager")
for pkg in "${packages[@]}"; do
echo "Installing $pkg..."
sudo apt install -y "$pkg"
done
echo "MemryX SDK 2.1 installation complete!"
echo "MemryX installation complete!"

View File

@ -21,7 +21,7 @@ FROM deps AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 uninstall -y onnxruntime tensorflow-cpu \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl
COPY --from=rootfs / /

View File

@ -112,7 +112,7 @@ RUN apt-get update \
&& apt-get install -y protobuf-compiler libprotobuf-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/requirements-models-arm64.txt,target=/requirements-tensorrt-models.txt \
pip3 wheel --wheel-dir=/trt-model-wheels -r /requirements-tensorrt-models.txt
pip3 wheel --wheel-dir=/trt-model-wheels --no-deps -r /requirements-tensorrt-models.txt
FROM wget AS jetson-ffmpeg
ARG DEBIAN_FRONTEND
@ -145,7 +145,8 @@ COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
--mount=type=bind,from=trt-model-wheels,source=/trt-model-wheels,target=/deps/trt-model-wheels \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl /deps/trt-model-wheels/*.whl \
&& pip3 install -U /deps/trt-wheels/*.whl \
&& pip3 install -U /deps/trt-model-wheels/*.whl \
&& ldconfig
WORKDIR /opt/frigate/

View File

@ -13,7 +13,6 @@ nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
tensorflow==2.19.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@ -1 +1,2 @@
cuda-python == 12.6.*; platform_machine == 'aarch64'
numpy == 1.26.*; platform_machine == 'aarch64'

View File

@ -164,13 +164,35 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream; use `channel[0..15]` in the stream URL for the additional channels.
The main stream can also be set up via RTSP, but this isn't always reliable on all hardware versions. The example configuration works with the oldest HW version, an RLN16-410 device, with multiple types of cameras.
<details>
<summary>Example Config</summary>
:::tip
Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary RTSP stream can be added that will be used for the two way audio only.
NOTE: The RTSP stream cannot be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
:::
```yaml
go2rtc:
streams:
# example for connecting to a standard Reolink camera
your_reolink_camera:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
your_reolink_camera_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
- "ffmpeg:your_reolink_camera_via_nvr#audio=aac"
@ -201,22 +223,7 @@ cameras:
roles:
- detect
```
#### Reolink Doorbell
The Reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary RTSP stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
streams:
your_reolink_doorbell:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- rtsp://reolink_ip/Preview_01_sub
your_reolink_doorbell_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
</details>
### Unifi Protect Cameras

View File

@ -10,9 +10,19 @@ Object classification allows you to train a custom MobileNetV2 classification mo
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
### Sub label vs Attribute
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
For object classification:
- Define classes that represent different types or attributes of the detected object
- Examples: For `person` objects, classes might be `delivery_person`, `resident`, `stranger`
- Include a `none` class for objects that don't fit any specific category
- Keep classes visually distinct to improve accuracy
### Classification Type
- **Sub label**:
@ -67,7 +77,7 @@ When choosing which objects to classify, start with a small number of visually d
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.

View File

@ -10,7 +10,17 @@ State classification allows you to train a custom MobileNetV2 classification mod
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
## Classes
Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
For state classification:
- Define classes that represent mutually exclusive states
- Examples: `open` and `closed` for a garage door, `on` and `off` for lights
- Use at least 2 classes (typically binary states work best)
- Keep class names clear and descriptive
## Example use cases
@ -49,4 +59,4 @@ When choosing a portion of the camera frame for state classification, it is impo
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.

View File

@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
- `min_faces`: Min face recognitions for the sub label to be applied to the person object.
- Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training.
- Default: `100`.
- Default: `200`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
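As a sketch, these options sit together under the global face recognition section of the config (the `face_recognition` key name is assumed from Frigate's config layout; values shown are illustrative):

```yaml
face_recognition:
  enabled: true
  min_faces: 2               # require 2 recognitions before the sub label is applied
  save_attempts: 200         # matches the default noted above
  blur_confidence_filter: true
```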
@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
:::
### Understanding the Train Tab
### Understanding the Recent Recognitions Tab
The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
@ -161,6 +161,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
Accuracy is definitely going to be improved with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera at which a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest at which face recognition will realistically work.
Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
### Why can't I bulk upload photos?
It is important to add photos to the library methodically; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.
@ -186,7 +188,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy.
### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person object if the person is confidently recognized consistently. This avoids cases where a single high confidence recognition would throw off the results.

View File

@ -17,18 +17,17 @@ To use Generative AI, you must define a single provider at the global level of y
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
cameras:
front_camera:
objects:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
indoor_camera:
objects:
genai:
@ -80,7 +79,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
@ -97,7 +96,7 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
```
:::note
@ -112,7 +111,7 @@ OpenAI does not have a free tier for their API. With the release of gpt-4o, pric
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
@ -139,18 +138,19 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@ -196,10 +196,10 @@ genai:
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
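For instance, reusing the camera structure from the configuration example earlier on this page, a camera-level override might look like this (the camera name and prompt text are illustrative):

```yaml
cameras:
  front_camera:
    objects:
      genai:
        prompt: "Analyze the {label} near the front door on the {camera} camera, focusing on direction of travel."
```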

View File

@ -43,9 +43,10 @@ The following models are recommended:
| Model | Notes |
| ----------------- | ----------------------------------------------------------- |
| `Intern3.5VL` | Relatively fast with good vision comprehension
| `qwen3-vl` | Strong visual and situational understanding |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5vl` | Fast but capable model with good vision comprehension |
| `qwen2.5-vl` | Fast but capable model with good vision comprehension |
| `llava-phi3` | Lightweight and fast model with vision comprehension |
:::note

View File

@ -7,38 +7,95 @@ Generative AI can be used to automatically generate structured summaries of revi
Summaries are requested automatically from your AI provider for alert review items when the activity has ended; they can optionally be enabled for detections as well.
Generative AI review summaries can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/review_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
## Review Summary Usage and Best Practices
Review summaries provide structured JSON responses that are saved for each review item:
```
- `scene` (string): A full description including setting, entities, actions, and any plausible supported inferences.
- `confidence` (float): 0-1 confidence in the analysis.
- `title` (string): A concise, direct title that describes the purpose or overall action (e.g., "Person taking out trash", "Joe walking dog").
- `scene` (string): A narrative description of what happens across the sequence from start to finish, including setting, detected objects, and their observable actions.
- `confidence` (float): 0-1 confidence in the analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
Threat-level definitions:
- 0 — Typical or expected activity for this location/time (includes residents, guests, or known animals engaged in normal activities, even if they glance around or scan surroundings).
- 1 — Unusual or suspicious activity: At least one security-relevant behavior is present **and not explainable by a normal residential activity**.
- 2 — Active or immediate threat: Breaking in, vandalism, aggression, weapon display.
```
This will show in the UI as a list of concerns that each review item has along with the general description.
This will show in multiple places in the UI to give additional context about each activity and allow viewing more details when extra attention is required. Frigate's built-in notifications will also automatically show the title and description when the data is available.
### Defining Typical Activity
Each installation and even camera can have different parameters for what is considered suspicious activity. Frigate allows the `activity_context_prompt` to be defined globally and at the camera level, which allows you to define more specifically what should be considered normal activity. It is important that this is not overly specific as it can sway the output of the response. The default `activity_context_prompt` is below:
Each installation, and even each camera, can have different parameters for what is considered suspicious activity. Frigate allows the `activity_context_prompt` to be defined globally and at the camera level, letting you define more specifically what should be considered normal activity. It is important that this is not overly specific, as it can sway the output of the response.
<details>
<summary>Default Activity Context Prompt</summary>
```
- **Zone context is critical**: Private enclosed spaces (back yards, back decks, fenced areas, inside garages) are resident territory where brief transient activity, routine tasks, and pet care are expected and normal. Front yards, driveways, and porches are semi-public but still resident spaces where deliveries, parking, and coming/going are routine. Consider whether the zone and activity align with normal residential use.
- **Person + Pet = Normal Activity**: When both "Person" and "Dog" (or "Cat") are detected together in residential zones, this is routine pet care activity (walking, letting out, playing, supervising). Assign Level 0 unless there are OTHER strong suspicious behaviors present (like testing doors, taking items, etc.). A person with their pet in a residential zone is baseline normal activity.
- Brief appearances in private zones (back yards, garages) are normal residential patterns.
- Normal residential activity includes: residents, family members, guests, deliveries, services, maintenance workers, routine property use (parking, unloading, mail pickup, trash removal).
- Brief movement with legitimate items (bags, packages, tools, equipment) in appropriate zones is routine.
### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** — ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** — ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
- Attempting to conceal actions or items from view
- Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence
### Critical Threat Indicators (Level 2)
- Holding break-in tools (crowbars, pry bars, bolt cutters)
- Weapons visible (guns, knives, bats used aggressively)
- Forced entry in progress
- Physical aggression or violence
- Active property damage or theft in progress
### Assessment Guidance
Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.
```
</details>
### Image Source
By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:
```yaml
review:
genai:
enabled: true
image_source: recordings # Options: "preview" (default) or "recordings"
```
When using `recordings`, frames are extracted at 480px height while maintaining the camera's original aspect ratio, providing better detail for the LLM while being mindful of context window size. This is particularly useful for scenarios where fine details matter, such as identifying license plates, reading text, or analyzing distant objects.
The number of frames sent to the LLM is dynamically calculated based on:
- Your LLM provider's context window size
- The camera's resolution and aspect ratio (ultrawide cameras like 32:9 use more tokens per image)
- The image source (recordings use more tokens than preview images)
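For a rough sense of scale: at 480 px height, a 16:9 frame is about 853×480 pixels while a 32:9 frame is about 1707×480, roughly twice the pixels and therefore, assuming tokens scale with pixel count, roughly twice the tokens per image.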
Frame counts are automatically optimized to use ~98% of the available context window while capping at 20 frames maximum to ensure reasonable inference times. Note that using recordings will:
- Provide higher quality images to the LLM (480p vs 180p preview images)
- Use more tokens per image due to higher resolution
- Result in fewer frames being sent for ultrawide cameras due to larger image size
- Require that recordings are enabled for the camera
If recordings are not available for a given time period, the system will automatically fall back to using preview frames.
### Additional Concerns
Along with suspicious activity or immediate threats, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
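As a hedged sketch (the list-valued `additional_concerns` field name is an assumption; the surrounding structure matches the `image_source` example above):

```yaml
review:
  genai:
    enabled: true
    # field name assumed for illustration
    additional_concerns:
      - animals in the garden
      - gate left open
```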

View File

@ -5,7 +5,7 @@ title: Enrichments
# Enrichments
Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accelerated processing.
## Requirements
@ -18,8 +18,10 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- **Intel**
- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.

View File

@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file; a minimal example, matching the `lpr` options shown in the config reference later in this document:
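```yaml
lpr:
  enabled: True
```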

View File

@ -174,7 +174,7 @@ For devices that support two way talk, Frigate can be configured to use the feat
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but support is inconsistent, and a camera that explicitly lists the feature may still not work. If your camera has no entry in the database, it is best to confirm feature availability with the manufacturer's support before buying.

View File

@ -258,41 +258,55 @@ Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-proc
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel NPUs. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU` and `GPU`. Currently, there is a known issue with using `AUTO`. For backwards compatibility, Frigate will attempt to use `GPU` if `AUTO` is set in your configuration.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU`, `GPU`, or `NPU`.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` or `NPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html)
:::tip
**NPU + GPU Systems:** If you have both NPU and GPU available (Intel Core Ultra processors), use NPU for object detection and GPU for enrichments (semantic search, face recognition, etc.) for best performance and compatibility.
When using many cameras, one detector may not be enough to keep up. Multiple detectors can be defined, assuming GPU resources are available. An example configuration would be:
```yaml
detectors:
  ov_0:
    type: openvino
    device: GPU # or NPU
  ov_1:
    type: openvino
    device: GPU # or NPU
```
:::
### OpenVINO Supported Models
| Model | GPU | NPU | Notes |
| ------------------------------------- | --- | --- | ------------------------------------------------------------ |
| [YOLOv9](#yolo-v3-v4-v7-v9) | ✅ | ✅ | Recommended for GPU & NPU |
| [RF-DETR](#rf-detr) | ✅ | ✅ | Requires XE iGPU or Arc |
| [YOLO-NAS](#yolo-nas) | ✅ | ✅ | |
| [MobileNet v2](#ssdlite-mobilenet-v2) | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| [YOLOX](#yolox) | ✅ | ? | |
| [D-FINE](#d-fine) | ❌ | ❌ | |
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
<details>
<summary>MobileNet v2 Config</summary>
Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
```yaml
detectors:
  ov:
    type: openvino
    device: GPU # Or NPU

model:
  width: 300
@ -303,6 +317,8 @@ model:
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
</details>
#### YOLOX
This detector also supports YOLOX. Frigate does not come with any YOLOX models preloaded, so you will need to supply your own models.
@ -311,6 +327,9 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
<details>
<summary>YOLO-NAS Setup & Config</summary>
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@ -331,6 +350,8 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
</details>
#### YOLO (v3, v4, v7, v9)
YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
@ -341,6 +362,9 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
<details>
<summary>YOLOv Setup & Config</summary>
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@ -353,7 +377,7 @@ After placing the downloaded onnx model in your config folder, you can use the f
detectors:
  ov:
    type: openvino
    device: GPU # or NPU

model:
  model_type: yolo-generic
@ -367,6 +391,8 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
</details>
#### RF-DETR
[RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more information on downloading the RF-DETR model for use in Frigate.
@ -377,6 +403,9 @@ Due to the size and complexity of the RF-DETR model, it is only recommended to b
:::
<details>
<summary>RF-DETR Setup & Config</summary>
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
```yaml
@ -394,6 +423,8 @@ model:
path: /config/model_cache/rfdetr.onnx
```
</details>
#### D-FINE
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
@ -404,6 +435,9 @@ Currently D-FINE models only run on OpenVINO in CPU mode, GPUs currently fail to
:::
<details>
<summary>D-FINE Setup & Config</summary>
After placing the downloaded onnx model in your config/model_cache folder, you can use the following configuration:
```yaml
@ -418,15 +452,17 @@ model:
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/dfine-s.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
</details>
## Apple Silicon detector
The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with the `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
### Setup
@ -614,12 +650,23 @@ detectors:
### ONNX Supported Models
| Model | Nvidia GPU | AMD GPU | Notes |
| ----------------------------- | ---------- | ------- | --------------------------------------------------- |
| [YOLOv9](#yolo-v3-v4-v7-v9-2) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
| [RF-DETR](#rf-detr) | ✅ | ❌ | Supports CUDA Graphs for optimal Nvidia performance |
| [YOLO-NAS](#yolo-nas-1) | ⚠️ | ⚠️ | Not supported by CUDA Graphs |
| [YOLOX](#yolox-1) | ✅ | ✅ | Supports CUDA Graphs for optimal Nvidia performance |
| [D-FINE](#d-fine) | ⚠️ | ❌ | Not supported by CUDA Graphs |
There is no default model provided; the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
<details>
<summary>YOLO-NAS Setup & Config</summary>
:::warning
If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@ -643,6 +690,8 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
</details>
#### YOLO (v3, v4, v7, v9)
YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
@ -653,6 +702,9 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
<details>
<summary>YOLOv Setup & Config</summary>
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
@ -676,12 +728,17 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
</details>
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
#### YOLOx
[YOLOx](https://github.com/Megvii-BaseDetection/YOLOX) models are supported, but not included by default. See [the models section](#downloading-yolo-models) for more information on downloading the YOLOx model for use in Frigate.
<details>
<summary>YOLOx Setup & Config</summary>
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@ -701,10 +758,15 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
</details>
#### RF-DETR
[RF-DETR](https://github.com/roboflow/rf-detr) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-rf-detr-model) for more information on downloading the RF-DETR model for use in Frigate.
<details>
<summary>RF-DETR Setup & Config</summary>
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
```yaml
@ -721,10 +783,15 @@ model:
path: /config/model_cache/rfdetr.onnx
```
</details>
#### D-FINE
[D-FINE](https://github.com/Peterande/D-FINE) is a DETR based model. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
<details>
<summary>D-FINE Setup & Config</summary>
After placing the downloaded onnx model in your `config/model_cache` folder, you can use the following configuration:
```yaml
@ -742,6 +809,8 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
</details>
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@ -861,16 +930,16 @@ detectors:
model:
  model_type: yolonas
  width: 320 # (Can be set to 640 for higher resolution)
  height: 320 # (Can be set to 640 for higher resolution)
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolonas.zip
  # The .zip file must contain:
  # ├── yolonas.dfp (a file ending with .dfp)
  # └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOv9
@ -889,16 +958,16 @@ detectors:
model:
  model_type: yolo-generic
  width: 320 # (Can be set to 640 for higher resolution)
  height: 320 # (Can be set to 640 for higher resolution)
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolov9.zip
  # The .zip file must contain:
  # ├── yolov9.dfp (a file ending with .dfp)
  # └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOX
@ -924,8 +993,8 @@ model:
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolox.zip
  # The .zip file must contain:
  # ├── yolox.dfp (a file ending with .dfp)
```
#### SSDLite MobileNet v2
@ -951,9 +1020,9 @@ model:
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/ssdlite_mobilenet.zip
  # The .zip file must contain:
  # ├── ssdlite_mobilenet.dfp (a file ending with .dfp)
  # └── ssdlite_mobilenet_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### Using a Custom Model
@ -973,18 +1042,19 @@ To use your own model:
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
```yaml
# The detector automatically selects the default model if nothing is provided in the config.
#
# Optionally, you can specify a local model path as a .zip file to override the default.
# If a local path is provided and the file exists, it will be used instead of downloading.
#
# Example:
# path: /config/yolonas.zip
#
# The .zip file must contain:
# ├── yolonas.dfp (a file ending with .dfp)
# └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
---
## NVidia TensorRT Detector
@ -1092,16 +1162,16 @@ A synap model is provided in the container at /mobilenet.synap and is used by th
Use the model configuration shown below when using the synaptics detector with the default synap model:
```yaml
detectors: # required
  synap_npu: # required
    type: synaptics # required

model: # required
  path: /synaptics/mobilenet.synap # required
  width: 224 # required
  height: 224 # required
  tensor_format: nhwc # default value (optional. If you change the model, it is required)
  labelmap_path: /labelmap/coco-80.txt # required
```
## AXERA
@ -1310,97 +1380,101 @@ Explanation of the parameters:
## DeGirum
DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector *cannot* be used for commercial purposes.
DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector _cannot_ be used for commercial purposes.
### Configuration
#### AI Server Inference
Before starting with the config file for this section, you must first launch an AI server. DeGirum has an AI server ready to use as a docker container. Add this to your `docker-compose.yml` to get started:
```yaml
degirum_detector:
  container_name: degirum
  image: degirum/aiserver:latest
  privileged: true
  ports:
    - "8778:8778"
```
All supported hardware will automatically be found on your AI server host as long as relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.
Once completed, changing the `config.yml` file is simple.
```yaml
degirum_detector:
  type: degirum
  location: degirum # Set to service name (degirum_detector), container_name (degirum), or a host:port (192.168.29.4:8778)
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Setting up a model in the `config.yml` is similar to setting up an AI server.
You can set it to:
- A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is listed in your detector
  - If this is what you choose to do, the correct model will be downloaded onto your machine before running.
- A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
- A path to some model.json.
```yaml
model:
  path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory to model .json and file
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### Local Inference
It is also possible to eliminate the need for an AI server and run the hardware directly. The benefit of this approach is that you avoid the bottlenecks that occur when transferring prediction results from the AI server docker container to the Frigate one. However, the method of implementing local inference is different for every device and hardware combination, so it's usually more trouble than it's worth. A general guideline to achieve this would be:
1. Ensure that the Frigate docker container has the runtime you want to use. For instance, running `@local` for Hailo means making sure the container you're using has the Hailo runtime installed.
2. To double check that the runtime is detected by the DeGirum detector, make sure the `degirum sys-info` command properly shows the runtimes you intend to use.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
  type: degirum
  location: "@local" # For accessing AI Hub devices and models
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### AI Hub Cloud Inference
If you do not have the hardware you want to run on hand, there is also the option to run cloud inferences. Note that your detection fps may need to be lowered, as network latency significantly slows down this method of detection. For use with Frigate, we highly recommend using a local AI server as described above. To set up cloud inference:
1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
2. Get an access token.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
  type: degirum
  location: "@cloud" # For accessing AI Hub devices and models
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com).
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
# Models
@ -1423,7 +1497,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim
RUN uv pip install --system onnx onnxruntime onnxsim onnxscript
# Create output directory and download checkpoint
RUN mkdir -p output
ARG MODEL_SIZE
@ -1447,9 +1521,9 @@ FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@ -1497,7 +1571,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt

View File

@ -240,6 +240,8 @@ birdseye:
scaling_factor: 2.0
# Optional: Maximum number of cameras to show at one time, showing the most recent (default: show all cameras)
max_cameras: 1
# Optional: Frames-per-second to re-send the last composed Birdseye frame when idle (no motion or active updates). (default: shown below)
idle_heartbeat_fps: 0.0
# Optional: ffmpeg configuration
# More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
@ -427,6 +429,15 @@ review:
alerts: True
# Optional: Enable GenAI review summaries for detections (default: shown below)
detections: False
# Optional: Activity Context Prompt that gives the GenAI context about what activity is and is not suspicious.
# It is important to be direct and detailed. See the documentation for the default prompt structure.
activity_context_prompt: """Define what is and is not suspicious
"""
# Optional: Image source for GenAI (default: preview)
# Options: "preview" (uses cached preview frames at ~180p) or "recordings" (extracts frames from recordings at 480p)
# Using "recordings" provides better image quality but uses more tokens per image.
# Frame count is automatically calculated based on context window size, aspect ratio, and image source (capped at 20 frames).
image_source: preview
# Optional: Additional concerns that the GenAI should make note of (default: None)
additional_concerns:
- Animals in the garden
@ -628,7 +639,7 @@ face_recognition:
# Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
min_faces: 1
# Optional: Number of images of recognized faces to save for training (default: shown below)
save_attempts: 100
save_attempts: 200
# Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
blur_confidence_filter: True
# Optional: Set the model size used face recognition. (default: shown below)
@ -669,20 +680,18 @@ lpr:
# Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {}
# Optional: Configuration for AI generated tracked object descriptions
# Optional: Configuration for AI / LLM provider
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
# the camera level (enabled: False) to enhance privacy for indoor cameras.
# to Google or OpenAI's LLMs to generate descriptions. GenAI features can be configured at
# the camera level to enhance privacy for indoor cameras.
genai:
# Optional: Enable AI description generation (default: shown below)
enabled: False
# Required if enabled: Provider must be one of ollama, gemini, or openai
# Required: Provider must be one of ollama, gemini, or openai
provider: ollama
# Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
# Required if enabled: The model to use with the provider.
# Required: The model to use with the provider.
model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
@ -801,6 +810,8 @@ cameras:
# NOTE: This must be different than any camera names, but can match with another zone on another
# camera.
front_steps:
# Optional: A friendly name or descriptive text for the zone
friendly_name: ""
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
@ -920,10 +931,13 @@ cameras:
type: thumbnail
# Reference data for matching, either an event ID for `thumbnail` or a text string for `description`. (default: none)
data: 1751565549.853251-b69j73
# Similarity threshold for triggering. (default: none)
threshold: 0.7
# Similarity threshold for triggering. (default: shown below)
threshold: 0.8
# List of actions to perform when the trigger fires. (default: none)
# Available options: `notification` (send a webpush notification)
# Available options:
# - `notification` (send a webpush notification)
# - `sub_label` (add trigger friendly name as a sub label to the triggering tracked object)
# - `attribute` (add trigger's name and similarity score as a data attribute to the triggering tracked object)
actions:
- notification

View File

@ -24,6 +24,11 @@ birdseye:
restream: True
```
:::tip
To improve connection speed when using Birdseye via restream, you can enable a small idle heartbeat by setting `birdseye.idle_heartbeat_fps` to a low value (e.g. `12`). This makes Frigate periodically push the last frame even when no motion is detected, reducing initial connection latency.
:::
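For example, a configuration along these lines enables the heartbeat for the restream (the rate shown here is illustrative; any non-zero value enables it, and the default `0.0` disables it):
```yaml
birdseye:
  restream: True
  idle_heartbeat_fps: 1
```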
### Securing Restream With Authentication
The go2rtc restream can be secured with RTSP based username / password authentication. Ex:

View File

@ -78,7 +78,7 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
```yaml
semantic_search:
@ -90,7 +90,7 @@ semantic_search:
:::info
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU will be detected and used automatically.
Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
If you do not specify a device, the first available GPU will be used.
@ -119,7 +119,7 @@ Semantic Search must be enabled to use Triggers.
### Configuration
Triggers are defined within the `semantic_search` configuration for each camera in your Frigate configuration file or through the UI. Each trigger consists of a `friendly_name`, a `type` (either `thumbnail` or `description`), a `data` field (the reference image event ID or text), a `threshold` for similarity matching, and a list of `actions` to perform when the trigger fires.
Triggers are defined within the `semantic_search` configuration for each camera in your Frigate configuration file or through the UI. Each trigger consists of a `friendly_name`, a `type` (either `thumbnail` or `description`), a `data` field (the reference image event ID or text), a `threshold` for similarity matching, and a list of `actions` to perform when the trigger fires: `notification`, `sub_label`, and `attribute`.
Triggers are best configured through the Frigate UI.
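For reference, a file-based definition would look roughly like this sketch (assuming the per-camera `semantic_search.triggers` layout implied by the config reference above; names and values are illustrative):
```yaml
cameras:
  driveway:
    semantic_search:
      triggers:
        red_car_alert:
          friendly_name: Red car on the driveway camera
          type: description
          data: a red car in the driveway
          threshold: 0.8
          actions:
            - notification
            - sub_label
```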
@ -128,17 +128,20 @@ Triggers are best configured through the Frigate UI.
1. Navigate to the **Settings** page and select the **Triggers** tab.
2. Choose a camera from the dropdown menu to view or manage its triggers.
3. Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one.
4. In the **Create Trigger** dialog:
- Enter a **Name** for the trigger (e.g., "red_car_alert").
4. In the **Create Trigger** wizard:
- Enter a **Name** for the trigger (e.g., "Red Car Alert").
- Enter a descriptive **Friendly Name** for the trigger (e.g., "Red car on the driveway camera").
- Select the **Type** (`Thumbnail` or `Description`).
- For `Thumbnail`, select an image to trigger this action when a similar thumbnail image is detected, based on the threshold.
- For `Description`, enter text to trigger this action when a similar tracked object description is detected.
- Set the **Threshold** for similarity matching.
- Select **Actions** to perform when the trigger fires.
  - If native webpush notifications are enabled, check the `Send Notification` box to send a notification.
  - Check the `Add Sub Label` box to add the trigger's friendly name as a sub label to any triggering tracked objects.
  - Check the `Add Attribute` box to add the trigger's internal ID (e.g., "red_car_alert") to a data attribute on the tracked object that can be processed via the API or MQTT.
5. Save the trigger to update the configuration and store the embedding in the database.
When a trigger fires, the UI highlights the trigger with a blue outline for 3 seconds for easy identification.
When a trigger fires, the UI highlights the trigger with a blue dot for 3 seconds for easy identification.
### Usage and Best Practices

View File

@ -27,6 +27,7 @@ cameras:
- entire_yard
zones:
entire_yard:
friendly_name: Entire yard # Friendly names can use characters from any language
coordinates: ...
```
@ -44,8 +45,10 @@ cameras:
- edge_yard
zones:
edge_yard:
friendly_name: Edge yard # Friendly names can use characters from any language
coordinates: ...
inner_yard:
friendly_name: Inner yard # Friendly names can use characters from any language
coordinates: ...
```
@ -59,6 +62,7 @@ cameras:
- entire_yard
zones:
entire_yard:
friendly_name: Entire yard
coordinates: ...
```
@ -82,6 +86,7 @@ cameras:
Only `car` objects can trigger the `front_yard_street` zone and only `person` objects can trigger the `entire_yard` zone. Objects will be tracked for any `person` that enters anywhere in the yard, and for cars only if they enter the street.
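A sketch of this kind of per-zone object filtering (assuming the standard zone-level `objects` filter; coordinates elided):
```yaml
cameras:
  front_yard:
    zones:
      front_yard_street:
        coordinates: ...
        objects:
          - car
      entire_yard:
        coordinates: ...
        objects:
          - person
```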
### Zone Loitering
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time after which the object will be considered in the zone.
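A sketch, assuming the zone-level `loitering_time` option (in seconds; coordinates elided):
```yaml
cameras:
  front_yard:
    zones:
      inner_yard:
        coordinates: ...
        loitering_time: 5 # object must loiter 5 seconds before being considered in the zone
```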

View File

@ -78,7 +78,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Intel**
- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel NPUs to provide efficient object detection.
- [Supports majority of model architectures](../../configuration/object_detectors#openvino-supported-models)
- Runs best with tiny, small, or medium models
@ -150,6 +150,7 @@ The OpenVINO detector type is able to run on:
- 6th Gen Intel Platforms and newer that have an iGPU
- x86 hosts with an Intel Arc GPU
- Intel NPUs
- Most modern AMD CPUs (though this is officially not supported by Intel)
- x86 & Arm64 hosts via CPU (generally not recommended)
@ -174,7 +175,8 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Iris XE | ~ 10 ms | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |

View File

@ -338,7 +338,8 @@ services:
- /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/video11:/dev/video11 # For Raspberry Pi 4B
- /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
- /dev/dri/renderD128:/dev/dri/renderD128 # AMD / Intel GPU, needs to be updated for your hardware
- /dev/accel:/dev/accel # Intel NPU
volumes:
- /etc/localtime:/etc/localtime:ro
- /path/to/your/config:/config

View File

@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).
The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.16.2
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.2
```
3. **Start the Container**:

View File

@ -161,7 +161,14 @@ Message published for updates to tracked object metadata, for example:
### `frigate/reviews`
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish an `update` message with the same id. When the review activity has ended, a final `end` message is published.
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
An `update` with the same ID will be published when:
- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
When the review activity has ended, a final `end` message is published.
```json
{

View File

@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w` | Add box |
| `d` | Toggle difficult |
| `s` | Switch to the next label |
| `Shift + s` | Switch to the previous label |
| `tab` | Select next largest box |
| `del` | Delete current box |
| `esc` | Deselect/Cancel |

View File

@ -37,7 +37,6 @@ from frigate.stats.prometheus import get_metrics, update_metrics
from frigate.util.builtin import (
    clean_camera_user_pass,
    flatten_config_data,
    process_config_query_string,
    update_yaml_file_bulk,
)
@ -48,6 +47,7 @@ from frigate.util.services import (
    restart_frigate,
    vainfo_hwaccel,
)
from frigate.util.time import get_tz_modifiers
from frigate.version import VERSION
logger = logging.getLogger(__name__)
@ -387,20 +387,29 @@ def config_set(request: Request, body: AppConfigSetBody):
    old_config: FrigateConfig = request.app.frigate_config
    request.app.frigate_config = config

    if body.update_topic:
        if body.update_topic.startswith("config/cameras/"):
            _, _, camera, field = body.update_topic.split("/")

            if field == "add":
                settings = config.cameras[camera]
            elif field == "remove":
                settings = old_config.cameras[camera]
            else:
                settings = config.get_nested_object(body.update_topic)

            request.app.config_publisher.publish_update(
                CameraConfigUpdateTopic(CameraConfigUpdateEnum[field], camera),
                settings,
            )
        else:
            # Generic handling for global config updates
            settings = config.get_nested_object(body.update_topic)

            # Publish None for removal, actual config for add/update
            request.app.config_publisher.publisher.publish(
                body.update_topic, settings
            )

    return JSONResponse(
        content=(
@ -688,7 +697,11 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
        clauses.append((Timeline.camera == camera))

    if source_id:
        source_ids = [sid.strip() for sid in source_id.split(",")]
        if len(source_ids) == 1:
            clauses.append((Timeline.source_id == source_ids[0]))
        else:
            clauses.append((Timeline.source_id.in_(source_ids)))

    if len(clauses) == 0:
        clauses.append((True))

View File

@ -35,6 +35,23 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.auth])
@router.get("/auth/first_time_login")
def first_time_login(request: Request):
"""Return whether the admin first-time login help flag is set in config.
This endpoint is intentionally unauthenticated so the login page can
query it before a user is authenticated.
"""
auth_config = request.app.frigate_config.auth
return JSONResponse(
content={
"admin_first_time_login": auth_config.admin_first_time_login
or auth_config.reset_admin_password
}
)
class RateLimiter:
_limit = ""
@ -515,6 +532,11 @@ def login(request: Request, body: AppPostLoginBody):
        set_jwt_cookie(
            response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
        )

        # Clear admin_first_time_login flag after successful admin login so the
        # UI stops showing the first-time login documentation link.
        if role == "admin":
            request.app.frigate_config.auth.admin_first_time_login = False

        return response

    return JSONResponse(content={"message": "Login failed"}, status_code=401)

View File

@ -3,11 +3,15 @@
import json
import logging
import re
from importlib.util import find_spec
from pathlib import Path
from urllib.parse import quote_plus
import requests
from fastapi import APIRouter, Depends, Query, Request, Response
from fastapi.responses import JSONResponse
from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from frigate.api.auth import require_role
from frigate.api.defs.tags import Tags
@ -199,19 +203,30 @@ def ffprobe(request: Request, paths: str = "", detailed: bool = False):
            request.app.frigate_config.ffmpeg, path.strip(), detailed=detailed
        )

        if ffprobe.returncode != 0:
            try:
                stderr_decoded = ffprobe.stderr.decode("utf-8")
            except UnicodeDecodeError:
                try:
                    stderr_decoded = ffprobe.stderr.decode("unicode_escape")
                except Exception:
                    stderr_decoded = str(ffprobe.stderr)

            stderr_lines = [
                line.strip() for line in stderr_decoded.split("\n") if line.strip()
            ]
            result = {
                "return_code": ffprobe.returncode,
                "stderr": stderr_lines,
                "stdout": "",
            }
        else:
            result = {
                "return_code": ffprobe.returncode,
                "stderr": [],
                "stdout": json.loads(ffprobe.stdout.decode("unicode_escape").strip()),
            }

        # Add detailed metadata if requested and probe was successful
        if detailed and ffprobe.returncode == 0 and result["stdout"]:
@ -441,3 +456,471 @@ def _extract_fps(r_frame_rate: str) -> float | None:
        return round(float(num) / float(den), 2)
    except (ValueError, ZeroDivisionError):
        return None


@router.get(
    "/onvif/probe",
    dependencies=[Depends(require_role(["admin"]))],
    summary="Probe ONVIF device",
    description=(
        "Probe an ONVIF device to determine capabilities and optionally test available stream URIs. "
        "Query params: host (required), port (default 80), username, password, test (boolean)."
    ),
)
async def onvif_probe(
    request: Request,
    host: str = Query(None),
    port: int = Query(80),
    username: str = Query(""),
    password: str = Query(""),
    test: bool = Query(False),
):
    """
    Probe a single ONVIF device to determine capabilities.

    Connects to an ONVIF device and queries for:
    - Device information (manufacturer, model)
    - Media profiles count
    - PTZ support
    - Available presets
    - Autotracking support

    Query Parameters:
        host: Device host/IP address (required)
        port: Device port (default 80)
        username: ONVIF username (optional)
        password: ONVIF password (optional)
        test: run ffprobe on the stream (optional)

    Returns:
        JSON with device capabilities information
    """
    if not host:
        return JSONResponse(
            content={"success": False, "message": "host parameter is required"},
            status_code=400,
        )

    # Validate host format
    if not _is_valid_host(host):
        return JSONResponse(
            content={"success": False, "message": "Invalid host format"},
            status_code=400,
        )

    onvif_camera = None
    try:
        logger.debug(f"Probing ONVIF device at {host}:{port}")

        try:
            wsdl_base = None
            spec = find_spec("onvif")
            if spec and getattr(spec, "origin", None):
                wsdl_base = str(Path(spec.origin).parent / "wsdl")
        except Exception:
            wsdl_base = None

        onvif_camera = ONVIFCamera(
            host, port, username or "", password or "", wsdl_dir=wsdl_base
        )
        await onvif_camera.update_xaddrs()

        # Get device information
        device_info = {
            "manufacturer": "Unknown",
            "model": "Unknown",
            "firmware_version": "Unknown",
        }
        try:
            device_service = await onvif_camera.create_devicemgmt_service()
            device_info_resp = await device_service.GetDeviceInformation()
            manufacturer = getattr(device_info_resp, "Manufacturer", None) or (
                device_info_resp.get("Manufacturer")
                if isinstance(device_info_resp, dict)
                else None
            )
            model = getattr(device_info_resp, "Model", None) or (
                device_info_resp.get("Model")
                if isinstance(device_info_resp, dict)
                else None
            )
            firmware = getattr(device_info_resp, "FirmwareVersion", None) or (
                device_info_resp.get("FirmwareVersion")
                if isinstance(device_info_resp, dict)
                else None
            )
            device_info.update(
                {
                    "manufacturer": manufacturer or "Unknown",
                    "model": model or "Unknown",
                    "firmware_version": firmware or "Unknown",
                }
            )
        except Exception:
            logger.debug("Failed to get device info")

        # Get media profiles
        profiles = []
        profiles_count = 0
        first_profile_token = None
        ptz_config_token = None
        try:
            media_service = await onvif_camera.create_media_service()
            profiles = await media_service.GetProfiles()
            profiles_count = len(profiles) if profiles else 0
            if profiles and len(profiles) > 0:
                p = profiles[0]
                first_profile_token = getattr(p, "token", None) or (
                    p.get("token") if isinstance(p, dict) else None
                )
                # Get PTZ configuration token from the profile
                ptz_configuration = getattr(p, "PTZConfiguration", None) or (
                    p.get("PTZConfiguration") if isinstance(p, dict) else None
                )
                if ptz_configuration:
                    ptz_config_token = getattr(ptz_configuration, "token", None) or (
                        ptz_configuration.get("token")
                        if isinstance(ptz_configuration, dict)
                        else None
                    )
        except Exception:
            logger.debug("Failed to get media profiles")

        # Check PTZ support and capabilities
        ptz_supported = False
        presets_count = 0
        autotrack_supported = False
        try:
            ptz_service = await onvif_camera.create_ptz_service()

            # Check if PTZ service is available
            try:
                await ptz_service.GetServiceCapabilities()
                ptz_supported = True
                logger.debug("PTZ service is available")
            except Exception as e:
                logger.debug(f"PTZ service not available: {e}")
                ptz_supported = False

            # Try to get presets if PTZ is supported and we have a profile
            if ptz_supported and first_profile_token:
                try:
                    presets_resp = await ptz_service.GetPresets(
                        {"ProfileToken": first_profile_token}
                    )
                    presets_count = len(presets_resp) if presets_resp else 0
                    logger.debug(f"Found {presets_count} presets")
                except Exception as e:
                    logger.debug(f"Failed to get presets: {e}")
                    presets_count = 0

            # Check for autotracking support - requires both FOV relative movement and MoveStatus
            if ptz_supported and first_profile_token and ptz_config_token:
                # First check for FOV relative movement support
                pt_r_fov_supported = False
                try:
                    config_request = ptz_service.create_type("GetConfigurationOptions")
                    config_request.ConfigurationToken = ptz_config_token
                    ptz_config = await ptz_service.GetConfigurationOptions(
                        config_request
                    )
                    if ptz_config:
                        # Check for pt-r-fov support
                        spaces = getattr(ptz_config, "Spaces", None) or (
                            ptz_config.get("Spaces")
                            if isinstance(ptz_config, dict)
                            else None
                        )
                        if spaces:
                            rel_pan_tilt_space = getattr(
                                spaces, "RelativePanTiltTranslationSpace", None
                            ) or (
                                spaces.get("RelativePanTiltTranslationSpace")
                                if isinstance(spaces, dict)
                                else None
                            )
                            if rel_pan_tilt_space:
                                # Look for FOV space
                                for i, space in enumerate(rel_pan_tilt_space):
                                    uri = None
                                    if isinstance(space, dict):
                                        uri = space.get("URI")
                                    else:
                                        uri = getattr(space, "URI", None)
                                    if uri and "TranslationSpaceFov" in uri:
                                        pt_r_fov_supported = True
                                        logger.debug(
                                            "FOV relative movement (pt-r-fov) supported"
                                        )
                                        break
                        logger.debug(f"PTZ config spaces: {ptz_config}")
                except Exception as e:
                    logger.debug(f"Failed to check FOV relative movement: {e}")
                    pt_r_fov_supported = False

                # Now check for MoveStatus support via GetServiceCapabilities
                if pt_r_fov_supported:
                    try:
                        service_capabilities_request = ptz_service.create_type(
                            "GetServiceCapabilities"
                        )
                        service_capabilities = await ptz_service.GetServiceCapabilities(
                            service_capabilities_request
                        )

                        # Look for MoveStatus in the capabilities
                        move_status_capable = False
                        if service_capabilities:
                            # Try to find MoveStatus key recursively
                            def find_move_status(obj, key="MoveStatus"):
                                if isinstance(obj, dict):
                                    if key in obj:
                                        return obj[key]
                                    for v in obj.values():
                                        result = find_move_status(v, key)
                                        if result is not None:
                                            return result
                                elif hasattr(obj, key):
                                    return getattr(obj, key)
                                elif hasattr(obj, "__dict__"):
                                    for v in vars(obj).values():
                                        result = find_move_status(v, key)
                                        if result is not None:
                                            return result
                                return None

                            move_status_value = find_move_status(service_capabilities)

                            # MoveStatus should return "true" if supported
                            if isinstance(move_status_value, bool):
                                move_status_capable = move_status_value
                            elif isinstance(move_status_value, str):
                                move_status_capable = (
                                    move_status_value.lower() == "true"
                                )
                            logger.debug(f"MoveStatus capability: {move_status_value}")

                        # Autotracking is supported if both conditions are met
                        autotrack_supported = pt_r_fov_supported and move_status_capable

                        if autotrack_supported:
                            logger.debug(
                                "Autotracking fully supported (pt-r-fov + MoveStatus)"
                            )
                        else:
                            logger.debug(
                                f"Autotracking not fully supported - pt-r-fov: {pt_r_fov_supported}, MoveStatus: {move_status_capable}"
                            )
                    except Exception as e:
                        logger.debug(f"Failed to check MoveStatus support: {e}")
                        autotrack_supported = False
        except Exception as e:
            logger.debug(f"Failed to probe PTZ service: {e}")

        result = {
            "success": True,
            "host": host,
            "port": port,
            "manufacturer": device_info["manufacturer"],
            "model": device_info["model"],
            "firmware_version": device_info["firmware_version"],
            "profiles_count": profiles_count,
            "ptz_supported": ptz_supported,
            "presets_count": presets_count,
            "autotrack_supported": autotrack_supported,
        }

        # Gather RTSP candidates
        rtsp_candidates: list[dict] = []
        try:
            media_service = await onvif_camera.create_media_service()
            if profiles_count and media_service:
                for p in profiles or []:
                    token = getattr(p, "token", None) or (
                        p.get("token") if isinstance(p, dict) else None
                    )
                    if not token:
                        continue
                    try:
                        stream_setup = {
                            "Stream": "RTP-Unicast",
                            "Transport": {"Protocol": "RTSP"},
                        }
                        stream_req = {
                            "ProfileToken": token,
                            "StreamSetup": stream_setup,
                        }
                        stream_uri_resp = await media_service.GetStreamUri(stream_req)
                        uri = (
                            stream_uri_resp.get("Uri")
                            if isinstance(stream_uri_resp, dict)
                            else getattr(stream_uri_resp, "Uri", None)
                        )
                        if uri:
                            logger.debug(
                                f"GetStreamUri returned for token {token}: {uri}"
                            )
                            # If credentials were provided, do NOT add the unauthenticated URI.
                            try:
                                if isinstance(uri, str) and uri.startswith("rtsp://"):
                                    if username and password and "@" not in uri:
                                        # Inject URL-encoded credentials and add only the
                                        # authenticated version.
                                        cred = f"{quote_plus(username)}:{quote_plus(password)}@"
                                        injected = uri.replace(
                                            "rtsp://", f"rtsp://{cred}", 1
                                        )
                                        rtsp_candidates.append(
                                            {
                                                "source": "GetStreamUri",
                                                "profile_token": token,
                                                "uri": injected,
                                            }
                                        )
                                    else:
                                        # No credentials provided or URI already contains
                                        # credentials — add the URI as returned.
                                        rtsp_candidates.append(
                                            {
                                                "source": "GetStreamUri",
                                                "profile_token": token,
                                                "uri": uri,
                                            }
                                        )
                                else:
                                    # Non-RTSP URIs (e.g., http-flv) — add as returned.
                                    rtsp_candidates.append(
                                        {
                                            "source": "GetStreamUri",
                                            "profile_token": token,
                                            "uri": uri,
                                        }
                                    )
                            except Exception as e:
                                logger.debug(
                                    f"Skipping stream URI for token {token} due to processing error: {e}"
                                )
                                continue
                    except Exception:
                        logger.debug(
                            f"GetStreamUri failed for token {token}", exc_info=True
                        )
                        continue

            # Add common RTSP patterns as fallback
            if not rtsp_candidates:
                common_paths = [
                    "/h264",
                    "/live.sdp",
                    "/media.amp",
                    "/Streaming/Channels/101",
                    "/Streaming/Channels/1",
                    "/stream1",
                    "/cam/realmonitor?channel=1&subtype=0",
                    "/11",
                ]
                # Use URL-encoded credentials for pattern fallback URIs when provided
                auth_str = (
                    f"{quote_plus(username)}:{quote_plus(password)}@"
                    if username and password
                    else ""
                )
                rtsp_port = 554
                for path in common_paths:
                    uri = f"rtsp://{auth_str}{host}:{rtsp_port}{path}"
                    rtsp_candidates.append({"source": "pattern", "uri": uri})
        except Exception:
            logger.debug("Failed to collect RTSP candidates")

        # Optionally test RTSP candidates using ffprobe_stream
        tested_candidates = []
        if test and rtsp_candidates:
            for c in rtsp_candidates:
                uri = c["uri"]
                to_test = [uri]
                try:
                    if (
                        username
                        and password
                        and isinstance(uri, str)
                        and uri.startswith("rtsp://")
                        and "@" not in uri
                    ):
                        cred = f"{quote_plus(username)}:{quote_plus(password)}@"
                        cred_uri = uri.replace("rtsp://", f"rtsp://{cred}", 1)
                        if cred_uri not in to_test:
                            to_test.append(cred_uri)
                except Exception:
                    pass
                for test_uri in to_test:
                    try:
probe = ffprobe_stream(
request.app.frigate_config.ffmpeg, test_uri, detailed=False
)
logger.debug(f"ffprobe result for {test_uri}: {probe}")
ok = probe is not None and getattr(probe, "returncode", 1) == 0
tested_candidates.append(
{
"uri": test_uri,
"source": c.get("source"),
"ok": ok,
"profile_token": c.get("profile_token"),
}
)
except Exception as e:
logger.debug(f"Unable to probe stream: {e}")
tested_candidates.append(
{
"uri": test_uri,
"source": c.get("source"),
"ok": False,
"profile_token": c.get("profile_token"),
}
)
result["rtsp_candidates"] = rtsp_candidates
if test:
result["rtsp_tested"] = tested_candidates
logger.debug(f"ONVIF probe successful: {result}")
return JSONResponse(content=result)
except ONVIFError as e:
logger.warning(f"ONVIF error probing {host}:{port}: {e}")
return JSONResponse(
content={"success": False, "message": "ONVIF error"},
status_code=400,
)
except (Fault, TransportError) as e:
logger.warning(f"Connection error probing {host}:{port}: {e}")
return JSONResponse(
content={"success": False, "message": "Connection error"},
status_code=503,
)
except Exception as e:
logger.warning(f"Error probing ONVIF device at {host}:{port}, {e}")
return JSONResponse(
content={"success": False, "message": "Probe failed"},
status_code=500,
)
finally:
# Best-effort cleanup of ONVIF camera client session
if onvif_camera is not None:
try:
# Check if the camera has a close method and call it
if hasattr(onvif_camera, "close"):
await onvif_camera.close()
except Exception as e:
logger.debug(f"Error closing ONVIF camera session: {e}")

View File

@ -3,7 +3,9 @@
import datetime
import logging
import os
import random
import shutil
import string
from typing import Any
import cv2
@ -17,6 +19,8 @@ from frigate.api.auth import require_role
from frigate.api.defs.request.classification_body import (
AudioTranscriptionBody,
DeleteFaceImagesBody,
GenerateObjectExamplesBody,
GenerateStateExamplesBody,
RenameFaceBody,
)
from frigate.api.defs.response.classification_response import (
@ -27,10 +31,16 @@ from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.const import CLIPS_DIR, FACE_DIR
from frigate.const import CLIPS_DIR, FACE_DIR, MODEL_CACHE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
from frigate.util.path import get_event_snapshot
from frigate.util.classification import (
collect_object_classification_examples,
collect_state_classification_examples,
get_dataset_image_count,
read_training_metadata,
)
from frigate.util.file import get_event_snapshot
logger = logging.getLogger(__name__)
@ -104,9 +114,18 @@ def reclassify_face(request: Request, body: dict = None):
context: EmbeddingsContext = request.app.embeddings
response = context.reprocess_face(training_file)
if not isinstance(response, dict):
return JSONResponse(
status_code=500,
content={
"success": False,
"message": "Could not process request.",
},
)
return JSONResponse(
status_code=200 if response.get("success", True) else 400,
content=response,
status_code=200,
)
@ -159,8 +178,7 @@ def train_face(request: Request, name: str, body: dict = None):
new_name = f"{sanitized_name}-{datetime.datetime.now().timestamp()}.webp"
new_file_folder = os.path.join(FACE_DIR, f"{sanitized_name}")
if not os.path.exists(new_file_folder):
os.mkdir(new_file_folder)
os.makedirs(new_file_folder, exist_ok=True)
if training_file_name:
shutil.move(training_file, os.path.join(new_file_folder, new_name))
@ -548,23 +566,59 @@ def get_classification_dataset(name: str):
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "dataset")
if not os.path.exists(dataset_dir):
return JSONResponse(status_code=200, content={})
return JSONResponse(
status_code=200, content={"categories": {}, "training_metadata": None}
)
for name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, name)
for category_name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category_name)
if not os.path.isdir(category_dir):
continue
dataset_dict[name] = []
dataset_dict[category_name] = []
for file in filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(category_dir),
):
dataset_dict[name].append(file)
dataset_dict[category_name].append(file)
return JSONResponse(status_code=200, content=dataset_dict)
# Get training metadata
metadata = read_training_metadata(sanitize_filename(name))
current_image_count = get_dataset_image_count(sanitize_filename(name))
if metadata is None:
training_metadata = {
"has_trained": False,
"last_training_date": None,
"last_training_image_count": 0,
"current_image_count": current_image_count,
"new_images_count": current_image_count,
"dataset_changed": current_image_count > 0,
}
else:
last_training_count = metadata.get("last_training_image_count", 0)
# Dataset has changed if count is different (either added or deleted images)
dataset_changed = current_image_count != last_training_count
# Only show positive count for new images (ignore deletions in the count display)
new_images_count = max(0, current_image_count - last_training_count)
training_metadata = {
"has_trained": True,
"last_training_date": metadata.get("last_training_date"),
"last_training_image_count": last_training_count,
"current_image_count": current_image_count,
"new_images_count": new_images_count,
"dataset_changed": dataset_changed,
}
return JSONResponse(
status_code=200,
content={
"categories": dataset_dict,
"training_metadata": training_metadata,
},
)
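
The metadata branch above draws a deliberate distinction: dataset_changed flags any difference in image count (including deletions), while new_images_count is clamped at zero so deletions never display as a negative number of new images. A small sketch of that logic under the same field names:

def build_training_metadata(current: int, last_trained: int | None) -> dict:
    if last_trained is None:
        # never trained: every image counts as new
        return {
            "has_trained": False,
            "new_images_count": current,
            "dataset_changed": current > 0,
        }
    return {
        "has_trained": True,
        "dataset_changed": current != last_trained,  # deletions still mark a change
        "new_images_count": max(0, current - last_trained),  # but never go negative
    }

assert build_training_metadata(8, 10)["dataset_changed"] is True
assert build_training_metadata(8, 10)["new_images_count"] == 0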
@router.get(
@ -655,12 +709,106 @@ def delete_classification_dataset_images(
if os.path.isfile(file_path):
os.unlink(file_path)
if os.path.exists(folder) and not os.listdir(folder):
os.rmdir(folder)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
content=({"success": True, "message": "Successfully deleted images."}),
status_code=200,
)
@router.put(
"/classification/{name}/dataset/{old_category}/rename",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Rename a classification category",
description="""Renames a classification category for a given classification model.
The old category must exist and the new name must be valid. Returns a success message or an error if the name is invalid.""",
)
def rename_classification_category(
request: Request, name: str, old_category: str, body: dict = None
):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
new_category = sanitize_filename(json.get("new_category", ""))
if not new_category:
return JSONResponse(
content=(
{
"success": False,
"message": "New category name is required.",
}
),
status_code=400,
)
old_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(old_category)
)
new_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", new_category
)
if not os.path.exists(old_folder):
return JSONResponse(
content=(
{
"success": False,
"message": f"Category {old_category} does not exist.",
}
),
status_code=404,
)
if os.path.exists(new_folder):
return JSONResponse(
content=(
{
"success": False,
"message": f"Category {new_category} already exists.",
}
),
status_code=400,
)
try:
os.rename(old_folder, new_folder)
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully renamed category to {new_category}.",
}
),
status_code=200,
)
except Exception as e:
logger.error(f"Error renaming category: {e}")
return JSONResponse(
content=(
{
"success": False,
"message": "Failed to rename category",
}
),
status_code=500,
)
@router.post(
"/classification/{name}/dataset/categorize",
response_model=GenericResponse,
@ -701,13 +849,14 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
status_code=404,
)
new_name = f"{category}-{datetime.datetime.now().timestamp()}.png"
random_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
timestamp = datetime.datetime.now().timestamp()
new_name = f"{category}-{timestamp}-{random_id}.png"
new_file_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", category
)
if not os.path.exists(new_file_folder):
os.mkdir(new_file_folder)
os.makedirs(new_file_folder, exist_ok=True)
# use opencv because webp images cannot be used for training
img = cv2.imread(training_file)
@ -715,7 +864,7 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
os.unlink(training_file)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
content=({"success": True, "message": "Successfully categorized image."}),
status_code=200,
)
@ -753,6 +902,87 @@ def delete_classification_train_images(request: Request, name: str, body: dict =
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
content=({"success": True, "message": "Successfully deleted images."}),
status_code=200,
)
@router.post(
"/classification/generate_examples/state",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Generate state classification examples",
)
async def generate_state_examples(request: Request, body: GenerateStateExamplesBody):
"""Generate examples for state classification."""
model_name = sanitize_filename(body.model_name)
cameras_normalized = {
camera_name: tuple(crop)
for camera_name, crop in body.cameras.items()
if camera_name in request.app.frigate_config.cameras
}
collect_state_classification_examples(model_name, cameras_normalized)
return JSONResponse(
content={"success": True, "message": "Example generation completed"},
status_code=200,
)
@router.post(
"/classification/generate_examples/object",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Generate object classification examples",
)
async def generate_object_examples(request: Request, body: GenerateObjectExamplesBody):
"""Generate examples for object classification."""
model_name = sanitize_filename(body.model_name)
collect_object_classification_examples(model_name, body.label)
return JSONResponse(
content={"success": True, "message": "Example generation completed"},
status_code=200,
)
@router.delete(
"/classification/{name}",
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
summary="Delete a classification model",
description="""Deletes a specific classification model and all its associated data.
Works even if the model is not in the config (e.g., partially created during wizard).
Returns a success message.""",
)
def delete_classification_model(request: Request, name: str):
sanitized_name = sanitize_filename(name)
# Delete the classification model's data directory in clips
data_dir = os.path.join(CLIPS_DIR, sanitized_name)
if os.path.exists(data_dir):
try:
shutil.rmtree(data_dir)
logger.info(f"Deleted classification data directory for {name}")
except Exception as e:
logger.debug(f"Failed to delete data directory for {name}: {e}")
# Delete the classification model's files in model_cache
model_dir = os.path.join(MODEL_CACHE_DIR, sanitized_name)
if os.path.exists(model_dir):
try:
shutil.rmtree(model_dir)
logger.info(f"Deleted classification model directory for {name}")
except Exception as e:
logger.debug(f"Failed to delete model directory for {name}: {e}")
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully deleted classification model {name}.",
}
),
status_code=200,
)

View File

@ -1,17 +1,31 @@
from typing import List
from typing import Dict, List, Tuple
from pydantic import BaseModel, Field
class RenameFaceBody(BaseModel):
new_name: str
new_name: str = Field(description="New name for the face")
class AudioTranscriptionBody(BaseModel):
event_id: str
event_id: str = Field(description="ID of the event to transcribe audio for")
class DeleteFaceImagesBody(BaseModel):
ids: List[str] = Field(
description="List of image filenames to delete from the face folder"
)
class GenerateStateExamplesBody(BaseModel):
model_name: str = Field(description="Name of the classification model")
cameras: Dict[str, Tuple[float, float, float, float]] = Field(
description="Dictionary mapping camera names to normalized crop coordinates in [x1, y1, x2, y2] format (values 0-1)"
)
class GenerateObjectExamplesBody(BaseModel):
model_name: str = Field(description="Name of the classification model")
label: str = Field(
description="Object label to collect examples for (e.g., 'person', 'car')"
)
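
For reference, the two new body models accept payloads like the following sketch (the model names, camera name, and crop values here are made up for illustration):

state_body = GenerateStateExamplesBody(
    model_name="door_state",
    cameras={"front_door": (0.25, 0.10, 0.75, 0.60)},  # normalized [x1, y1, x2, y2]
)
assert state_body.cameras["front_door"] == (0.25, 0.10, 0.75, 0.60)

object_body = GenerateObjectExamplesBody(model_name="vehicle_type", label="car")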

View File

@ -2,6 +2,7 @@
import base64
import datetime
import json
import logging
import os
import random
@ -57,8 +58,8 @@ from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
from frigate.track.object_processing import TrackedObject
from frigate.util.builtin import get_tz_modifiers
from frigate.util.path import get_event_thumbnail_bytes
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.time import get_dst_transitions, get_tz_modifiers
logger = logging.getLogger(__name__)
@ -813,7 +814,6 @@ def events_summary(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
tz_name = params.timezone
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(tz_name)
has_clip = params.has_clip
has_snapshot = params.has_snapshot
@ -828,33 +828,91 @@ def events_summary(
if len(clauses) == 0:
clauses.append((True))
groups = (
time_range_query = (
Event.select(
Event.camera,
Event.label,
Event.sub_label,
Event.data,
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Event.start_time, "unixepoch", hour_modifier, minute_modifier
),
).alias("day"),
Event.zones,
fn.COUNT(Event.id).alias("count"),
fn.MIN(Event.start_time).alias("min_time"),
fn.MAX(Event.start_time).alias("max_time"),
)
.where(reduce(operator.and_, clauses) & (Event.camera << allowed_cameras))
.group_by(
Event.camera,
Event.label,
Event.sub_label,
Event.data,
(Event.start_time + seconds_offset).cast("int") / (3600 * 24),
Event.zones,
)
.dicts()
.get()
)
return JSONResponse(content=[e for e in groups.dicts()])
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
if min_time is None or max_time is None:
return JSONResponse(content=[])
dst_periods = get_dst_transitions(tz_name, min_time, max_time)
grouped: dict[tuple, dict] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_groups = (
Event.select(
Event.camera,
Event.label,
Event.sub_label,
Event.data,
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day"),
Event.zones,
fn.COUNT(Event.id).alias("count"),
)
.where(
reduce(operator.and_, clauses)
& (Event.camera << allowed_cameras)
& (Event.start_time >= period_start)
& (Event.start_time <= period_end)
)
.group_by(
Event.camera,
Event.label,
Event.sub_label,
Event.data,
(Event.start_time + period_offset).cast("int") / (3600 * 24),
Event.zones,
)
.namedtuples()
)
for g in period_groups:
key = (
g.camera,
g.label,
g.sub_label,
json.dumps(g.data, sort_keys=True) if g.data is not None else None,
g.day,
json.dumps(g.zones, sort_keys=True) if g.zones is not None else None,
)
if key in grouped:
grouped[key]["count"] += int(g.count or 0)
else:
grouped[key] = {
"camera": g.camera,
"label": g.label,
"sub_label": g.sub_label,
"data": g.data,
"day": g.day,
"zones": g.zones,
"count": int(g.count or 0),
}
return JSONResponse(content=sorted(grouped.values(), key=lambda x: x["day"]))
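
This rewrite, like the recordings and review summaries below, leans on get_dst_transitions(tz, min_time, max_time) yielding (period_start, period_end, utc_offset_seconds) tuples, one per stretch of constant UTC offset, so each period can be grouped with a fixed SQLite datetime modifier. A rough standard-library sketch of such a helper, as an illustration rather than Frigate's actual implementation in frigate.util.time:

from datetime import datetime
from zoneinfo import ZoneInfo

def dst_transitions_sketch(tz_name: str, start: float, end: float, step: int = 3600):
    # Split [start, end] into spans of constant UTC offset by scanning hourly;
    # a real implementation would locate the exact transition instant.
    tz = ZoneInfo(tz_name)
    periods, span_start = [], start
    offset = datetime.fromtimestamp(start, tz).utcoffset().total_seconds()
    t = start
    while t < end:
        t = min(t + step, end)
        current = datetime.fromtimestamp(t, tz).utcoffset().total_seconds()
        if current != offset:
            periods.append((span_start, t, offset))
            span_start, offset = t, current
    periods.append((span_start, end, offset))
    return periods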
@router.get(

View File

@ -9,6 +9,7 @@ from typing import List
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
@ -26,14 +27,14 @@ from frigate.api.defs.response.export_response import (
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import EXPORT_DIR
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
PlaybackSourceEnum,
RecordingExporter,
)
from frigate.util.builtin import is_current_hour
from frigate.util.time import is_current_hour
logger = logging.getLogger(__name__)
@ -88,7 +89,14 @@ def export_recording(
playback_factor = body.playback
playback_source = body.source
friendly_name = body.name
existing_image = body.image_path
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
content=({"success": False, "message": "Invalid image path"}),
status_code=400,
)
if playback_source == "recordings":
recordings_count = (
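
The sanitization added above runs the client-supplied path through pathvalidate and then requires it to live under CLIPS_DIR before it is ever opened. A tiny sketch of that guard in isolation (the sample paths are hypothetical):

from pathvalidate import sanitize_filepath

CLIPS_DIR = "/media/frigate/clips"

def is_valid_export_image(path: str) -> bool:
    # mirror the endpoint: sanitize first, then enforce the directory prefix
    return sanitize_filepath(path).startswith(CLIPS_DIR)

assert is_valid_export_image("/media/frigate/clips/export/preview.jpg")
assert not is_valid_export_image("/etc/passwd")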

View File

@ -44,9 +44,9 @@ from frigate.const import (
)
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.builtin import get_tz_modifiers
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.path import get_event_thumbnail_bytes
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
@ -424,7 +424,6 @@ def all_recordings_summary(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
cameras = params.cameras
if cameras != "all":
@ -432,43 +431,72 @@ def all_recordings_summary(
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
cameras = ",".join(filtered)
camera_list = list(filtered)
else:
cameras = allowed_cameras
camera_list = allowed_cameras
query = (
time_range_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time + seconds_offset,
"unixepoch",
hour_modifier,
minute_modifier,
),
).alias("day")
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time + seconds_offset,
"unixepoch",
hour_modifier,
minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.where(Recordings.camera << camera_list)
.dicts()
.get()
)
if params.cameras != "all":
query = query.where(Recordings.camera << cameras.split(","))
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
recording_days = query.namedtuples()
days = {day.day: True for day in recording_days}
if min_time is None or max_time is None:
return JSONResponse(content={})
return JSONResponse(content=days)
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
days: dict[str, bool] = {}
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
period_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day")
)
.where(
(Recordings.camera << camera_list)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
for g in period_query:
days[g.day] = True
return JSONResponse(content=dict(sorted(days.items())))
@router.get(
@ -476,61 +504,103 @@ def all_recordings_summary(
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(timezone)
recording_groups = (
time_range_query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time, "unixepoch", hour_modifier, minute_modifier
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
fn.MIN(Recordings.start_time).alias("min_time"),
fn.MAX(Recordings.start_time).alias("max_time"),
)
.where(Recordings.camera == camera_name)
.group_by((Recordings.start_time + seconds_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
.dicts()
.get()
)
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time, "unixepoch", hour_modifier, minute_modifier
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
days: dict[str, dict] = {}
if min_time is None or max_time is None:
return JSONResponse(content=list(days.values()))
dst_periods = get_dst_transitions(timezone, min_time, max_time)
for period_start, period_end, period_offset in dst_periods:
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
recording_groups = (
Recordings.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Recordings.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.SUM(Recordings.duration).alias("duration"),
fn.SUM(Recordings.motion).alias("motion"),
fn.SUM(Recordings.objects).alias("objects"),
)
.where(
(Recordings.camera == camera_name)
& (Recordings.end_time >= period_start)
& (Recordings.start_time <= period_end)
)
.group_by((Recordings.start_time + period_offset).cast("int") / 3600)
.order_by(Recordings.start_time.desc())
.namedtuples()
)
.where(Event.camera == camera_name, Event.has_clip)
.group_by((Event.start_time + seconds_offset).cast("int") / 3600)
.namedtuples()
)
event_map = {g.hour: g.count for g in event_groups}
event_groups = (
Event.select(
fn.strftime(
"%Y-%m-%d %H",
fn.datetime(
Event.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("hour"),
fn.COUNT(Event.id).alias("count"),
)
.where(Event.camera == camera_name, Event.has_clip)
.where(
(Event.start_time >= period_start) & (Event.start_time <= period_end)
)
.group_by((Event.start_time + period_offset).cast("int") / 3600)
.namedtuples()
)
days = {}
event_map = {g.hour: g.count for g in event_groups}
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day not in days:
days[day] = {"events": events_count, "hours": [hour_data], "day": day}
else:
days[day]["events"] += events_count
days[day]["hours"].append(hour_data)
for recording_group in recording_groups:
parts = recording_group.hour.split()
hour = parts[1]
day = parts[0]
events_count = event_map.get(recording_group.hour, 0)
hour_data = {
"hour": hour,
"events": events_count,
"motion": recording_group.motion,
"objects": recording_group.objects,
"duration": round(recording_group.duration),
}
if day in days:
# merge counts if already present (edge-case at DST boundary)
days[day]["events"] += events_count or 0
days[day]["hours"].append(hour_data)
else:
days[day] = {
"events": events_count or 0,
"hours": [hour_data],
"day": day,
}
return JSONResponse(content=list(days.values()))
@ -589,7 +659,7 @@ async def no_recordings(
)
scale = params.scale
clauses = [(Recordings.start_time > after) & (Recordings.end_time < before)]
clauses = [(Recordings.end_time >= after) & (Recordings.start_time <= before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
@ -608,33 +678,39 @@ async def no_recordings(
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Generate all time segments
current = after
# Iterate through time segments and check if each has any recording
no_recording_segments = []
current_start = None
current = after
current_gap_start = None
while current < before:
segment_end = current + scale
# Check if segment overlaps with any recording
segment_end = min(current + scale, before)
# Check if this segment overlaps with any recording
has_recording = any(
start <= segment_end and end >= current for start, end in recordings
rec_start < segment_end and rec_end > current
for rec_start, rec_end in recordings
)
if not has_recording:
if current_start is None:
current_start = current # Start a new gap
# This segment has no recordings
if current_gap_start is None:
current_gap_start = current # Start a new gap
else:
if current_start is not None:
# This segment has recordings
if current_gap_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(current)}
{"start_time": int(current_gap_start), "end_time": int(current)}
)
current_start = None
current_gap_start = None
current = segment_end
# Append the last gap if it exists
if current_start is not None:
if current_gap_start is not None:
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(before)}
{"start_time": int(current_gap_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
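
Tracing the rewritten scan on toy data makes the strict-overlap change concrete: a recording that ends exactly where a segment starts no longer counts as covering it. A worked example, assuming the same (start, end) recording tuples:

after, before, scale = 0, 100, 10
recordings = [(0, 35), (60, 80)]

gaps, gap_start, current = [], None, after
while current < before:
    segment_end = min(current + scale, before)
    covered = any(s < segment_end and e > current for s, e in recordings)
    if not covered and gap_start is None:
        gap_start = current  # open a new gap
    elif covered and gap_start is not None:
        gaps.append({"start_time": gap_start, "end_time": current})
        gap_start = None
    current = segment_end
if gap_start is not None:
    gaps.append({"start_time": gap_start, "end_time": before})

assert gaps == [
    {"start_time": 40, "end_time": 60},
    {"start_time": 80, "end_time": 100},
]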

View File

@ -36,7 +36,7 @@ from frigate.config import FrigateConfig
from frigate.embeddings import EmbeddingsContext
from frigate.models import Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.util.builtin import get_tz_modifiers
from frigate.util.time import get_dst_transitions
logger = logging.getLogger(__name__)
@ -197,7 +197,6 @@ async def review_summary(
user_id = current_user["username"]
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
day_ago = (datetime.datetime.now() - datetime.timedelta(hours=24)).timestamp()
cameras = params.cameras
@ -329,89 +328,135 @@ async def review_summary(
)
clauses.append(reduce(operator.or_, label_clauses))
day_in_seconds = 60 * 60 * 24
last_month_query = (
# Find the time range of available data
time_range_query = (
ReviewSegment.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
ReviewSegment.start_time,
"unixepoch",
hour_modifier,
minute_modifier,
),
).alias("day"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.alert)
& (UserReviewStatus.has_been_reviewed == True),
1,
)
],
0,
)
).alias("reviewed_alert"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.detection)
& (UserReviewStatus.has_been_reviewed == True),
1,
)
],
0,
)
).alias("reviewed_detection"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.alert),
1,
)
],
0,
)
).alias("total_alert"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.detection),
1,
)
],
0,
)
).alias("total_detection"),
)
.left_outer_join(
UserReviewStatus,
on=(
(ReviewSegment.id == UserReviewStatus.review_segment)
& (UserReviewStatus.user_id == user_id)
),
fn.MIN(ReviewSegment.start_time).alias("min_time"),
fn.MAX(ReviewSegment.start_time).alias("max_time"),
)
.where(reduce(operator.and_, clauses) if clauses else True)
.group_by(
(ReviewSegment.start_time + seconds_offset).cast("int") / day_in_seconds
)
.order_by(ReviewSegment.start_time.desc())
.dicts()
.get()
)
min_time = time_range_query.get("min_time")
max_time = time_range_query.get("max_time")
data = {
"last24Hours": last_24_query,
}
for e in last_month_query.dicts().iterator():
data[e["day"]] = e
# If no data, return early
if min_time is None or max_time is None:
return JSONResponse(content=data)
# Get DST transition periods
dst_periods = get_dst_transitions(params.timezone, min_time, max_time)
day_in_seconds = 60 * 60 * 24
# Query each DST period separately with the correct offset
for period_start, period_end, period_offset in dst_periods:
# Calculate hour/minute modifiers for this period
hours_offset = int(period_offset / 60 / 60)
minutes_offset = int(period_offset / 60 - hours_offset * 60)
period_hour_modifier = f"{hours_offset} hour"
period_minute_modifier = f"{minutes_offset} minute"
# Build clauses including time range for this period
period_clauses = clauses.copy()
period_clauses.append(
(ReviewSegment.start_time >= period_start)
& (ReviewSegment.start_time <= period_end)
)
period_query = (
ReviewSegment.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
ReviewSegment.start_time,
"unixepoch",
period_hour_modifier,
period_minute_modifier,
),
).alias("day"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.alert)
& (UserReviewStatus.has_been_reviewed == True),
1,
)
],
0,
)
).alias("reviewed_alert"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.detection)
& (UserReviewStatus.has_been_reviewed == True),
1,
)
],
0,
)
).alias("reviewed_detection"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.alert),
1,
)
],
0,
)
).alias("total_alert"),
fn.SUM(
Case(
None,
[
(
(ReviewSegment.severity == SeverityEnum.detection),
1,
)
],
0,
)
).alias("total_detection"),
)
.left_outer_join(
UserReviewStatus,
on=(
(ReviewSegment.id == UserReviewStatus.review_segment)
& (UserReviewStatus.user_id == user_id)
),
)
.where(reduce(operator.and_, period_clauses))
.group_by(
(ReviewSegment.start_time + period_offset).cast("int") / day_in_seconds
)
.order_by(ReviewSegment.start_time.desc())
)
# Merge results from this period
for e in period_query.dicts().iterator():
day_key = e["day"]
if day_key in data:
# Merge counts if day already exists (edge case at DST boundary)
data[day_key]["reviewed_alert"] += e["reviewed_alert"] or 0
data[day_key]["reviewed_detection"] += e["reviewed_detection"] or 0
data[day_key]["total_alert"] += e["total_alert"] or 0
data[day_key]["total_detection"] += e["total_detection"] or 0
else:
data[day_key] = e
return JSONResponse(content=data)

View File

@ -488,6 +488,8 @@ class FrigateApp:
}
).execute()
self.config.auth.admin_first_time_login = True
logger.info("********************************************************")
logger.info("********************************************************")
logger.info("*** Auth is enabled, but no users exist. ***")

View File

@ -38,6 +38,13 @@ class AuthConfig(FrigateBaseModel):
default_factory=dict,
title="Role to camera mappings. Empty list grants access to all cameras.",
)
admin_first_time_login: Optional[bool] = Field(
default=False,
title="Internal field to expose first-time admin login flag to the UI",
description=(
"When true the UI may show a help link on the login page informing users how to sign in after an admin password reset. "
),
)
@field_validator("roles")
@classmethod

View File

@ -55,6 +55,12 @@ class BirdseyeConfig(FrigateBaseModel):
layout: BirdseyeLayoutConfig = Field(
default_factory=BirdseyeLayoutConfig, title="Birdseye Layout Config"
)
idle_heartbeat_fps: float = Field(
default=0.0,
ge=0.0,
le=10.0,
title="Idle heartbeat FPS (0 disables, max 10)",
)
# uses BaseModel because some global attributes are not available at the camera level

View File

@ -177,6 +177,12 @@ class CameraConfig(FrigateBaseModel):
def ffmpeg_cmds(self) -> list[dict[str, list[str]]]:
return self._ffmpeg_cmds
def get_formatted_name(self) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the camera name."""
if self.friendly_name:
return self.friendly_name
return self.name.replace("_", " ").title() if self.name else ""
def create_ffmpeg_cmds(self):
if "_ffmpeg_cmds" in self:
return
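
get_formatted_name simply falls back to title-casing the underscore-separated config name when no friendly_name is set. The behavior in miniature:

def formatted_name(name: str, friendly_name: str | None = None) -> str:
    return friendly_name or name.replace("_", " ").title()

assert formatted_name("front_door") == "Front Door"
assert formatted_name("front_door", "Porch Cam") == "Porch Cam"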

View File

@ -1,10 +1,18 @@
from enum import Enum
from typing import Optional, Union
from pydantic import Field, field_validator
from ..base import FrigateBaseModel
__all__ = ["ReviewConfig", "DetectionsConfig", "AlertsConfig"]
__all__ = ["ReviewConfig", "DetectionsConfig", "AlertsConfig", "ImageSourceEnum"]
class ImageSourceEnum(str, Enum):
"""Image source options for GenAI Review."""
preview = "preview"
recordings = "recordings"
DEFAULT_ALERT_OBJECTS = ["person", "car"]
@ -77,6 +85,10 @@ class GenAIReviewConfig(FrigateBaseModel):
)
alerts: bool = Field(default=True, title="Enable GenAI for alerts.")
detections: bool = Field(default=False, title="Enable GenAI for detections.")
image_source: ImageSourceEnum = Field(
default=ImageSourceEnum.preview,
title="Image source for review descriptions.",
)
additional_concerns: list[str] = Field(
default=[],
title="Additional concerns that GenAI should make note of on this camera.",
@ -93,13 +105,40 @@ class GenAIReviewConfig(FrigateBaseModel):
default=None,
)
activity_context_prompt: str = Field(
default="""- **Zone context is critical**: Private enclosed spaces (back yards, back decks, fenced areas, inside garages) are resident territory where brief transient activity, routine tasks, and pet care are expected and normal. Front yards, driveways, and porches are semi-public but still resident spaces where deliveries, parking, and coming/going are routine. Consider whether the zone and activity align with normal residential use.
- **Person + Pet = Normal Activity**: When both "Person" and "Dog" (or "Cat") are detected together in residential zones, this is routine pet care activity (walking, letting out, playing, supervising). Assign Level 0 unless there are OTHER strong suspicious behaviors present (like testing doors, taking items, etc.). A person with their pet in a residential zone is baseline normal activity.
- Brief appearances in private zones (back yards, garages) are normal residential patterns.
- Normal residential activity includes: residents, family members, guests, deliveries, services, maintenance workers, routine property use (parking, unloading, mail pickup, trash removal).
- Brief movement with legitimate items (bags, packages, tools, equipment) in appropriate zones is routine.
""",
title="Custom activity context prompt defining normal activity patterns for this property.",
default="""### Normal Activity Indicators (Level 0)
- Known/verified people in any zone at any time
- People with pets in residential areas
- Deliveries or services during daytime/evening (6 AM - 10 PM): carrying packages to doors/porches, placing items, leaving
- Services/maintenance workers with visible tools, uniforms, or service vehicles during daytime
- Activity confined to public areas only (sidewalks, streets) without entering property at any time
### Suspicious Activity Indicators (Level 1)
- **Testing or attempting to open doors/windows/handles on vehicles or buildings** → ALWAYS Level 1 regardless of time or duration
- **Unidentified person in private areas (driveways, near vehicles/buildings) during late night/early morning (11 PM - 5 AM)** → ALWAYS Level 1 regardless of activity or duration
- Taking items that don't belong to them (packages, objects from porches/driveways)
- Climbing or jumping fences/barriers to access property
- Attempting to conceal actions or items from view
- Prolonged loitering: remaining in same area without visible purpose throughout most of the sequence
### Critical Threat Indicators (Level 2)
- Holding break-in tools (crowbars, pry bars, bolt cutters)
- Weapons visible (guns, knives, bats used aggressively)
- Forced entry in progress
- Physical aggression or violence
- Active property damage or theft in progress
### Assessment Guidance
Evaluate in this order:
1. **If person is verified/known** → Level 0 regardless of time or activity
2. **If person is unidentified:**
- Check time: If late night/early morning (11 PM - 5 AM) AND in private areas (driveways, near vehicles/buildings) → Level 1
- Check actions: If testing doors/handles, taking items, climbing → Level 1
- Otherwise, if daytime/evening (6 AM - 10 PM) with clear legitimate purpose (delivery, service worker) → Level 0
3. **Escalate to Level 2 if:** Weapons, break-in tools, forced entry in progress, violence, or active property damage visible (escalates from Level 0 or 1)
The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
title="Custom activity context prompt defining normal and suspicious activity patterns for this property.",
)

View File

@ -13,6 +13,9 @@ logger = logging.getLogger(__name__)
class ZoneConfig(BaseModel):
friendly_name: Optional[str] = Field(
None, title="Zone friendly name used in the Frigate UI."
)
filters: dict[str, FilterConfig] = Field(
default_factory=dict, title="Zone filters."
)
@ -53,6 +56,12 @@ class ZoneConfig(BaseModel):
def contour(self) -> np.ndarray:
return self._contour
def get_formatted_name(self, zone_name: str) -> str:
"""Return the friendly name if set, otherwise return a formatted version of the zone name."""
if self.friendly_name:
return self.friendly_name
return zone_name.replace("_", " ").title()
@field_validator("objects", mode="before")
@classmethod
def validate_objects(cls, v):

View File

@ -33,6 +33,8 @@ class TriggerType(str, Enum):
class TriggerAction(str, Enum):
NOTIFICATION = "notification"
SUB_LABEL = "sub_label"
ATTRIBUTE = "attribute"
class ObjectClassificationType(str, Enum):
@ -69,7 +71,7 @@ class BirdClassificationConfig(FrigateBaseModel):
class CustomClassificationStateCameraConfig(FrigateBaseModel):
crop: list[int, int, int, int] = Field(
crop: list[float, float, float, float] = Field(
title="Crop of image frame on this camera to run classification on."
)
@ -197,7 +199,9 @@ class FaceRecognitionConfig(FrigateBaseModel):
title="Min face recognitions for the sub label to be applied to the person object.",
)
save_attempts: int = Field(
default=100, ge=0, title="Number of face attempts to save in the train tab."
default=200,
ge=0,
title="Number of face attempts to save in the recent recognitions tab.",
)
blur_confidence_filter: bool = Field(
default=True, title="Apply blur quality filter to face confidence."

View File

@ -4,7 +4,6 @@ import logging
import os
import sherpa_onnx
from faster_whisper.utils import download_model
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import MODEL_CACHE_DIR
@ -25,6 +24,9 @@ class AudioTranscriptionModelRunner:
if model_size == "large":
# use the Whisper download function instead of our own
# Import dynamically to avoid crashes on systems without AVX support
from faster_whisper.utils import download_model
logger.debug("Downloading Whisper audio transcription model")
download_model(
size_or_id="small" if device == "cuda" else "tiny",

View File

@ -14,8 +14,8 @@ from typing import Any, List, Optional, Tuple
import cv2
import numpy as np
from Levenshtein import distance, jaro_winkler
from pyclipper import ET_CLOSEDPOLYGON, JT_ROUND, PyclipperOffset
from rapidfuzz.distance import JaroWinkler, Levenshtein
from shapely.geometry import Polygon
from frigate.comms.event_metadata_updater import (
@ -1123,7 +1123,9 @@ class LicensePlateProcessingMixin:
for i, plate in enumerate(plates):
merged = False
for j, cluster in enumerate(clusters):
sims = [jaro_winkler(plate["plate"], v["plate"]) for v in cluster]
sims = [
JaroWinkler.similarity(plate["plate"], v["plate"]) for v in cluster
]
if len(sims) > 0:
avg_sim = sum(sims) / len(sims)
if avg_sim >= self.cluster_threshold:
@ -1500,7 +1502,7 @@ class LicensePlateProcessingMixin:
and current_time - data["last_seen"]
<= self.config.cameras[camera].lpr.expire_time
):
similarity = jaro_winkler(data["plate"], top_plate)
similarity = JaroWinkler.similarity(data["plate"], top_plate)
if similarity >= self.similarity_threshold:
plate_id = existing_id
logger.debug(
@ -1580,7 +1582,8 @@ class LicensePlateProcessingMixin:
for label, plates_list in self.lpr_config.known_plates.items()
if any(
re.match(f"^{plate}$", rep_plate)
or distance(plate, rep_plate) <= self.lpr_config.match_distance
or Levenshtein.distance(plate, rep_plate)
<= self.lpr_config.match_distance
for plate in plates_list
)
),
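
The migration from the Levenshtein package to rapidfuzz is drop-in for these call sites: JaroWinkler.similarity returns a 0-1 similarity and Levenshtein.distance an integer edit count. A quick sanity check of the replacement API on made-up plates:

from rapidfuzz.distance import JaroWinkler, Levenshtein

assert JaroWinkler.similarity("ABC123", "ABC123") == 1.0
assert JaroWinkler.similarity("ABC123", "ABC128") > 0.9  # one trailing character differs
assert Levenshtein.distance("ABC123", "ABC128") == 1     # a single substitution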

View File

@ -6,10 +6,8 @@ import threading
import time
from typing import Optional
from faster_whisper import WhisperModel
from peewee import DoesNotExist
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import (
@ -32,11 +30,13 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
self,
config: FrigateConfig,
requestor: InterProcessRequestor,
embeddings,
metrics: DataProcessorMetrics,
):
super().__init__(config, metrics, None)
self.config = config
self.requestor = requestor
self.embeddings = embeddings
self.recognizer = None
self.transcription_lock = threading.Lock()
self.transcription_thread = None
@ -50,6 +50,9 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
def __build_recognizer(self) -> None:
try:
# Import dynamically to avoid crashes on systems without AVX support
from faster_whisper import WhisperModel
self.recognizer = WhisperModel(
model_size_or_path="small",
device="cuda"
@ -128,10 +131,7 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
)
# Embed the description
self.requestor.send_data(
EmbeddingsRequestEnum.embed_description.value,
{"id": event_id, "description": transcription},
)
self.embeddings.embed_description(event_id, transcription)
except DoesNotExist:
logger.debug("No recording found for audio transcription post-processing")

View File

@ -20,8 +20,8 @@ from frigate.genai import GenAIClient
from frigate.models import Event
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import create_thumbnail, ensure_jpeg_bytes
from frigate.util.path import get_event_thumbnail_bytes
if TYPE_CHECKING:
from frigate.embeddings import Embeddings

View File

@ -3,6 +3,7 @@
import copy
import datetime
import logging
import math
import os
import shutil
import threading
@ -10,22 +11,28 @@ from pathlib import Path
from typing import Any
import cv2
from peewee import DoesNotExist
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.config.camera.review import GenAIReviewConfig
from frigate.config.camera import CameraConfig
from frigate.config.camera.review import GenAIReviewConfig, ImageSourceEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, UPDATE_REVIEW_DESCRIPTION
from frigate.data_processing.types import PostProcessDataEnum
from frigate.genai import GenAIClient
from frigate.models import ReviewSegment
from frigate.models import Recordings, ReviewSegment
from frigate.util.builtin import EventsPerSecond, InferenceSpeed
from frigate.util.image import get_image_from_recording
from ..post.api import PostProcessorApi
from ..types import DataProcessorMetrics
logger = logging.getLogger(__name__)
RECORDING_BUFFER_EXTENSION_PERCENT = 0.10
MIN_RECORDING_DURATION = 10
class ReviewDescriptionProcessor(PostProcessorApi):
def __init__(
@ -43,20 +50,53 @@ class ReviewDescriptionProcessor(PostProcessorApi):
self.review_descs_dps = EventsPerSecond()
self.review_descs_dps.start()
def calculate_frame_count(self) -> int:
"""Calculate optimal number of frames based on context size."""
# With our preview images (height of 180px) each image should be ~100 tokens per image
# We want to be conservative to not have too long of query times with too many images
context_size = self.genai_client.get_context_size()
def calculate_frame_count(
self,
camera: str,
image_source: ImageSourceEnum = ImageSourceEnum.preview,
height: int = 480,
) -> int:
"""Calculate optimal number of frames based on context size, image source, and resolution.
if context_size > 10000:
return 20
elif context_size > 6000:
return 16
elif context_size > 4000:
return 12
Token usage varies by resolution: larger images (ultrawide aspect ratios) use more tokens.
Estimates ~1 token per 1250 pixels. Targets 98% context utilization with safety margin.
Capped at 20 frames.
"""
context_size = self.genai_client.get_context_size()
camera_config = self.config.cameras[camera]
detect_width = camera_config.detect.width
detect_height = camera_config.detect.height
aspect_ratio = detect_width / detect_height
if image_source == ImageSourceEnum.recordings:
if aspect_ratio >= 1:
# Landscape or square: constrain height
width = int(height * aspect_ratio)
else:
# Portrait: constrain width
width = height
height = int(width / aspect_ratio)
else:
return 8
if aspect_ratio >= 1:
# Landscape or square: constrain height
target_height = 180
width = int(target_height * aspect_ratio)
height = target_height
else:
# Portrait: constrain width
target_width = 180
width = target_width
height = int(target_width / aspect_ratio)
pixels_per_image = width * height
tokens_per_image = pixels_per_image / 1250
prompt_tokens = 3500
response_tokens = 300
available_tokens = context_size - prompt_tokens - response_tokens
max_frames = int(available_tokens / tokens_per_image)
return min(max(max_frames, 3), 20)
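
Plugging representative numbers into this budget: a 16:9 camera on the recordings source at 480p yields about 853x480 = 409,440 pixels, roughly 328 tokens per image, so an 8,192-token context allows (8192 - 3500 - 300) / 328 = 13 frames. A sketch of the arithmetic (the context sizes here are examples, not defaults):

def frame_budget(context_size: int, width: int, height: int) -> int:
    tokens_per_image = (width * height) / 1250
    available = context_size - 3500 - 300  # reserve prompt and response tokens
    return min(max(int(available / tokens_per_image), 3), 20)

assert frame_budget(8192, 853, 480) == 13  # 480p recordings frames, 16:9
assert frame_budget(4096, 320, 180) == 6   # 180px-tall preview frames, 16:9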
def process_data(self, data, data_type):
self.metrics.review_desc_dps.value = self.review_descs_dps.eps()
@ -88,36 +128,63 @@ class ReviewDescriptionProcessor(PostProcessorApi):
):
return
frames = self.get_cache_frames(
camera, final_data["start_time"], final_data["end_time"]
)
image_source = camera_config.review.genai.image_source
if not frames:
frames = [final_data["thumb_path"]]
thumbs = []
for idx, thumb_path in enumerate(frames):
thumb_data = cv2.imread(thumb_path)
ret, jpg = cv2.imencode(
".jpg", thumb_data, [int(cv2.IMWRITE_JPEG_QUALITY), 100]
if image_source == ImageSourceEnum.recordings:
duration = final_data["end_time"] - final_data["start_time"]
buffer_extension = min(
10, max(2, duration * RECORDING_BUFFER_EXTENSION_PERCENT)
)
if ret:
thumbs.append(jpg.tobytes())
# Ensure minimum total duration for short review items
# This provides better context for brief events
total_duration = duration + (2 * buffer_extension)
if total_duration < MIN_RECORDING_DURATION:
# Expand buffer to reach minimum duration, still respecting max of 10s per side
additional_buffer_per_side = (MIN_RECORDING_DURATION - duration) / 2
buffer_extension = min(10, additional_buffer_per_side)
if camera_config.review.genai.debug_save_thumbnails:
id = data["after"]["id"]
Path(os.path.join(CLIPS_DIR, "genai-requests", f"{id}")).mkdir(
thumbs = self.get_recording_frames(
camera,
final_data["start_time"] - buffer_extension,
final_data["end_time"] + buffer_extension,
height=480, # Use 480p for good balance between quality and token usage
)
if not thumbs:
# Fallback to preview frames if no recordings available
logger.warning(
f"No recording frames found for {camera}, falling back to preview frames"
)
thumbs = self.get_preview_frames_as_bytes(
camera,
final_data["start_time"],
final_data["end_time"],
final_data["thumb_path"],
id,
camera_config.review.genai.debug_save_thumbnails,
)
elif camera_config.review.genai.debug_save_thumbnails:
# Save debug thumbnails for recordings
Path(os.path.join(CLIPS_DIR, "genai-requests", id)).mkdir(
parents=True, exist_ok=True
)
shutil.copy(
thumb_path,
os.path.join(
CLIPS_DIR,
f"genai-requests/{id}/{idx}.webp",
),
)
for idx, frame_bytes in enumerate(thumbs):
with open(
os.path.join(CLIPS_DIR, f"genai-requests/{id}/{idx}.jpg"),
"wb",
) as f:
f.write(frame_bytes)
else:
# Use preview frames
thumbs = self.get_preview_frames_as_bytes(
camera,
final_data["start_time"],
final_data["end_time"],
final_data["thumb_path"],
id,
camera_config.review.genai.debug_save_thumbnails,
)
# kickoff analysis
self.review_descs_dps.update()
@ -127,11 +194,12 @@ class ReviewDescriptionProcessor(PostProcessorApi):
self.requestor,
self.genai_client,
self.review_desc_speed,
camera,
camera_config,
final_data,
thumbs,
camera_config.review.genai,
list(self.config.model.merged_labelmap.values()),
self.config.model.all_attributes,
),
).start()
@ -217,7 +285,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
all_frames.append(os.path.join(preview_dir, file))
frame_count = len(all_frames)
desired_frame_count = self.calculate_frame_count()
desired_frame_count = self.calculate_frame_count(camera)
if frame_count <= desired_frame_count:
return all_frames
@ -231,48 +299,179 @@ class ReviewDescriptionProcessor(PostProcessorApi):
return selected_frames
def get_recording_frames(
self,
camera: str,
start_time: float,
end_time: float,
height: int = 480,
) -> list[bytes]:
"""Get frames from recordings at specified timestamps."""
duration = end_time - start_time
desired_frame_count = self.calculate_frame_count(
camera, ImageSourceEnum.recordings, height
)
# Calculate evenly spaced timestamps throughout the duration
if desired_frame_count == 1:
timestamps = [start_time + duration / 2]
else:
step = duration / (desired_frame_count - 1)
timestamps = [start_time + (i * step) for i in range(desired_frame_count)]
def extract_frame_from_recording(ts: float) -> bytes | None:
"""Extract a single frame from recording at given timestamp."""
try:
recording = (
Recordings.select(
Recordings.path,
Recordings.start_time,
)
.where((ts >= Recordings.start_time) & (ts <= Recordings.end_time))
.where(Recordings.camera == camera)
.order_by(Recordings.start_time.desc())
.limit(1)
.get()
)
time_in_segment = ts - recording.start_time
return get_image_from_recording(
self.config.ffmpeg,
recording.path,
time_in_segment,
"mjpeg",
height=height,
)
except DoesNotExist:
return None
frames = []
for timestamp in timestamps:
try:
# Try to extract frame at exact timestamp
image_data = extract_frame_from_recording(timestamp)
if not image_data:
# Try with rounded timestamp as fallback
rounded_timestamp = math.ceil(timestamp)
image_data = extract_frame_from_recording(rounded_timestamp)
if image_data:
frames.append(image_data)
else:
logger.warning(
f"No recording found for {camera} at timestamp {timestamp}"
)
except Exception as e:
logger.error(
f"Error extracting frame from recording for {camera} at {timestamp}: {e}"
)
continue
return frames
def get_preview_frames_as_bytes(
self,
camera: str,
start_time: float,
end_time: float,
thumb_path_fallback: str,
review_id: str,
save_debug: bool,
) -> list[bytes]:
"""Get preview frames and convert them to JPEG bytes.
Args:
camera: Camera name
start_time: Start timestamp
end_time: End timestamp
thumb_path_fallback: Fallback thumbnail path if no preview frames found
review_id: Review item ID for debug saving
save_debug: Whether to save debug thumbnails
Returns:
List of JPEG image bytes
"""
frame_paths = self.get_cache_frames(camera, start_time, end_time)
if not frame_paths:
frame_paths = [thumb_path_fallback]
thumbs = []
for idx, thumb_path in enumerate(frame_paths):
thumb_data = cv2.imread(thumb_path)
ret, jpg = cv2.imencode(
".jpg", thumb_data, [int(cv2.IMWRITE_JPEG_QUALITY), 100]
)
if ret:
thumbs.append(jpg.tobytes())
if save_debug:
Path(os.path.join(CLIPS_DIR, "genai-requests", review_id)).mkdir(
parents=True, exist_ok=True
)
shutil.copy(
thumb_path,
os.path.join(CLIPS_DIR, f"genai-requests/{review_id}/{idx}.webp"),
)
return thumbs
@staticmethod
def run_analysis(
requestor: InterProcessRequestor,
genai_client: GenAIClient,
review_inference_speed: InferenceSpeed,
camera: str,
camera_config: CameraConfig,
final_data: dict[str, str],
thumbs: list[bytes],
genai_config: GenAIReviewConfig,
labelmap_objects: list[str],
attribute_labels: list[str],
) -> None:
start = datetime.datetime.now().timestamp()
# Format zone names using zone config friendly names if available
formatted_zones = []
for zone_name in final_data["data"]["zones"]:
if zone_name in camera_config.zones:
formatted_zones.append(
camera_config.zones[zone_name].get_formatted_name(zone_name)
)
analytics_data = {
"id": final_data["id"],
"camera": camera,
"zones": final_data["data"]["zones"],
"camera": camera_config.get_formatted_name(),
"zones": formatted_zones,
"start": datetime.datetime.fromtimestamp(final_data["start_time"]).strftime(
"%A, %I:%M %p"
),
"duration": round(final_data["end_time"] - final_data["start_time"]),
}
objects = []
named_objects = []
unified_objects = []
objects_list = final_data["data"]["objects"]
sub_labels_list = final_data["data"]["sub_labels"]
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
object_type = verified_label.replace("-verified", "").replace("_", " ")
name = sub_labels_list[i].replace("_", " ").title()
unified_objects.append(f"{name} ({object_type})")
for label in objects_list:
if "-verified" in label:
continue
elif label in labelmap_objects:
objects.append(label.replace("_", " ").title())
object_type = label.replace("_", " ").title()
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
named_objects.append(
f"{sub_labels_list[i].replace('_', ' ').title()} ({verified_label.replace('-verified', '')})"
)
if label in attribute_labels:
unified_objects.append(f"{object_type} (delivery/service)")
else:
unified_objects.append(object_type)
analytics_data["objects"] = objects
analytics_data["recognized_objects"] = named_objects
analytics_data["unified_objects"] = unified_objects
metadata = genai_client.generate_review_description(
analytics_data,

View File

@ -10,6 +10,10 @@ import cv2
import numpy as np
from peewee import DoesNotExist
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
EventMetadataTypeEnum,
)
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
@ -18,7 +22,7 @@ from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.embeddings.util import ZScoreNormalization
from frigate.models import Event, Trigger
from frigate.util.builtin import cosine_distance
from frigate.util.path import get_event_thumbnail_bytes
from frigate.util.file import get_event_thumbnail_bytes
from ..post.api import PostProcessorApi
from ..types import DataProcessorMetrics
@ -34,6 +38,7 @@ class SemanticTriggerProcessor(PostProcessorApi):
db: SqliteVecQueueDatabase,
config: FrigateConfig,
requestor: InterProcessRequestor,
sub_label_publisher: EventMetadataPublisher,
metrics: DataProcessorMetrics,
embeddings,
):
@ -41,6 +46,7 @@ class SemanticTriggerProcessor(PostProcessorApi):
self.db = db
self.embeddings = embeddings
self.requestor = requestor
self.sub_label_publisher = sub_label_publisher
self.trigger_embeddings: list[np.ndarray] = []
self.thumb_stats = ZScoreNormalization()
@ -184,14 +190,44 @@ class SemanticTriggerProcessor(PostProcessorApi):
),
)
friendly_name = (
self.config.cameras[camera]
.semantic_search.triggers[trigger["name"]]
.friendly_name
)
if (
self.config.cameras[camera]
.semantic_search.triggers[trigger["name"]]
.actions
):
# TODO: handle actions for the trigger
# handle actions for the trigger
# notifications already handled by webpush
pass
if (
"sub_label"
in self.config.cameras[camera]
.semantic_search.triggers[trigger["name"]]
.actions
):
self.sub_label_publisher.publish(
(event_id, friendly_name, similarity),
EventMetadataTypeEnum.sub_label,
)
if (
"attribute"
in self.config.cameras[camera]
.semantic_search.triggers[trigger["name"]]
.actions
):
self.sub_label_publisher.publish(
(
event_id,
trigger["name"],
trigger["type"],
similarity,
),
EventMetadataTypeEnum.attribute.value,
)
if WRITE_DEBUG_IMAGES:
try:

View File

@ -34,6 +34,8 @@ except ModuleNotFoundError:
logger = logging.getLogger(__name__)
MAX_OBJECT_CLASSIFICATIONS = 16
class CustomStateClassificationProcessor(RealTimeProcessorApi):
def __init__(
@ -53,9 +55,18 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
self.inference_speed = InferenceSpeed(
self.metrics.classification_speeds[self.model_config.name]
)
self.state_history: dict[str, dict[str, Any]] = {}
if (
self.metrics
and self.model_config.name in self.metrics.classification_speeds
):
self.inference_speed = InferenceSpeed(
self.metrics.classification_speeds[self.model_config.name]
)
else:
self.inference_speed = None
self.last_run = datetime.datetime.now().timestamp()
self.__build_detector()
@ -83,12 +94,50 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
def __update_metrics(self, duration: float) -> None:
self.classifications_per_second.update()
self.inference_speed.update(duration)
if self.inference_speed:
self.inference_speed.update(duration)
def verify_state_change(self, camera: str, detected_state: str) -> str | None:
"""
Verify a state change, requiring 3 consecutive identical detections before publishing.
Returns the state to publish, or None if verification is not yet complete.
"""
if camera not in self.state_history:
self.state_history[camera] = {
"current_state": None,
"pending_state": None,
"consecutive_count": 0,
}
verification = self.state_history[camera]
if detected_state == verification["current_state"]:
verification["pending_state"] = None
verification["consecutive_count"] = 0
return None
if detected_state == verification["pending_state"]:
verification["consecutive_count"] += 1
if verification["consecutive_count"] >= 3:
verification["current_state"] = detected_state
verification["pending_state"] = None
verification["consecutive_count"] = 0
return detected_state
else:
verification["pending_state"] = detected_state
verification["consecutive_count"] = 1
logger.debug(
f"New state '{detected_state}' detected for {camera}, need {3 - verification['consecutive_count']} more consecutive detections"
)
return None
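# Editorial sketch (not from the PR): a standalone trace of the 3-consecutive
# debounce above, using a hypothetical minimal class with the same counting:

class Debounce:
    def __init__(self):
        self.current = None
        self.pending = None
        self.count = 0

    def feed(self, state: str):
        if state == self.current:
            self.pending, self.count = None, 0
            return None
        if state == self.pending:
            self.count += 1
            if self.count >= 3:
                self.current, self.pending, self.count = state, None, 0
                return state
            return None
        self.pending, self.count = state, 1
        return None

d = Debounce()
# one flicker to "open" resets; only three consecutive detections publish
assert [d.feed(s) for s in ["open", "closed", "open", "open", "open"]] == [
    None, None, None, None, "open",
]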
def process_frame(self, frame_data: dict[str, Any], frame: np.ndarray):
self.metrics.classification_cps[
self.model_config.name
].value = self.classifications_per_second.eps()
if self.metrics and self.model_config.name in self.metrics.classification_cps:
self.metrics.classification_cps[
self.model_config.name
].value = self.classifications_per_second.eps()
camera = frame_data.get("camera")
if camera not in self.model_config.state_config.cameras:
@ -96,10 +145,10 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
camera_config = self.model_config.state_config.cameras[camera]
crop = [
camera_config.crop[0],
camera_config.crop[1],
camera_config.crop[2],
camera_config.crop[3],
camera_config.crop[0] * self.config.cameras[camera].detect.width,
camera_config.crop[1] * self.config.cameras[camera].detect.height,
camera_config.crop[2] * self.config.cameras[camera].detect.width,
camera_config.crop[3] * self.config.cameras[camera].detect.height,
]
should_run = False
@ -121,6 +170,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.last_run = now
should_run = True
# Shortcut: always run if we have a pending state verification to complete
if (
not should_run
and camera in self.state_history
and self.state_history[camera]["pending_state"] is not None
and now > self.last_run + 0.5
):
self.last_run = now
should_run = True
logger.debug(
f"Running verification check for pending state: {self.state_history[camera]['pending_state']} ({self.state_history[camera]['consecutive_count']}/3)"
)
if not should_run:
return
@ -165,6 +227,9 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details[0]["index"]
)[0]
probs = res / res.sum(axis=0)
logger.debug(
f"{self.model_config.name} Ran state classification with probabilities: {probs}"
)
best_id = np.argmax(probs)
score = round(probs[best_id], 2)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
@ -178,10 +243,19 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
score,
)
if score >= self.model_config.threshold:
if score < self.model_config.threshold:
logger.debug(
f"Score {score} below threshold {self.model_config.threshold}, skipping verification"
)
return
detected_state = self.labelmap[best_id]
verified_state = self.verify_state_change(camera, detected_state)
if verified_state is not None:
self.requestor.send_data(
f"{camera}/classification/{self.model_config.name}",
self.labelmap[best_id],
verified_state,
)
def handle_request(self, topic, request_data):
@ -220,12 +294,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.sub_label_publisher = sub_label_publisher
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.detected_objects: dict[str, float] = {}
self.classification_history: dict[str, list[tuple[str, float, float]]] = {}
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
self.inference_speed = InferenceSpeed(
self.metrics.classification_speeds[self.model_config.name]
)
if (
self.metrics
and self.model_config.name in self.metrics.classification_speeds
):
self.inference_speed = InferenceSpeed(
self.metrics.classification_speeds[self.model_config.name]
)
else:
self.inference_speed = None
self.__build_detector()
@redirect_output_to_logger(logger, logging.DEBUG)
@ -251,12 +333,64 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
def __update_metrics(self, duration: float) -> None:
self.classifications_per_second.update()
self.inference_speed.update(duration)
if self.inference_speed:
self.inference_speed.update(duration)
def get_weighted_score(
self,
object_id: str,
current_label: str,
current_score: float,
current_time: float,
) -> tuple[str | None, float]:
"""
Determine a weighted score based on history to prevent false positives/negatives.
Requires 60% of attempts to agree on a label before publishing.
Returns (weighted_label, weighted_score), or (None, 0.0) if there is no consensus yet.
"""
if object_id not in self.classification_history:
self.classification_history[object_id] = []
self.classification_history[object_id].append(
(current_label, current_score, current_time)
)
history = self.classification_history[object_id]
if len(history) < 3:
return None, 0.0
label_counts = {}
label_scores = {}
total_attempts = len(history)
for label, score, timestamp in history:
if label not in label_counts:
label_counts[label] = 0
label_scores[label] = []
label_counts[label] += 1
label_scores[label].append(score)
best_label = max(label_counts, key=label_counts.get)
best_count = label_counts[best_label]
consensus_threshold = total_attempts * 0.6
if best_count < consensus_threshold:
return None, 0.0
avg_score = sum(label_scores[best_label]) / len(label_scores[best_label])
if best_label == "none":
return None, 0.0
return best_label, avg_score
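# Editorial sketch (not from the PR): the consensus math above, standalone.
# With 4 attempts, "cat" holds 3/4 = 75% >= 60%, so it is published with the
# average of its scores:

history = [("cat", 0.9, 1.0), ("cat", 0.8, 2.0), ("dog", 0.7, 3.0), ("cat", 0.85, 4.0)]
counts: dict[str, int] = {}
scores: dict[str, list[float]] = {}
for label, score, _ts in history:
    counts[label] = counts.get(label, 0) + 1
    scores.setdefault(label, []).append(score)

best = max(counts, key=counts.get)
if len(history) >= 3 and counts[best] >= len(history) * 0.6 and best != "none":
    avg = sum(scores[best]) / len(scores[best])
    assert (best, round(avg, 2)) == ("cat", 0.85)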
def process_frame(self, obj_data, frame):
self.metrics.classification_cps[
self.model_config.name
].value = self.classifications_per_second.eps()
if self.metrics and self.model_config.name in self.metrics.classification_cps:
self.metrics.classification_cps[
self.model_config.name
].value = self.classifications_per_second.eps()
if obj_data["false_positive"]:
return
@ -264,6 +398,21 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if obj_data["label"] not in self.model_config.object_config.objects:
return
if obj_data.get("end_time") is not None:
return
if obj_data.get("stationary"):
return
object_id = obj_data["id"]
if (
object_id in self.classification_history
and len(self.classification_history[object_id])
>= MAX_OBJECT_CLASSIFICATIONS
):
return
now = datetime.datetime.now().timestamp()
x, y, x2, y2 = calculate_region(
frame.shape,
@ -272,8 +421,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
obj_data["box"][2],
obj_data["box"][3],
max(
obj_data["box"][1] - obj_data["box"][0],
obj_data["box"][3] - obj_data["box"][2],
obj_data["box"][2] - obj_data["box"][0],
obj_data["box"][3] - obj_data["box"][1],
),
1.0,
)
@ -295,7 +444,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
obj_data["id"],
object_id,
now,
"unknown",
0.0,
@ -309,48 +458,55 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.tensor_output_details[0]["index"]
)[0]
probs = res / res.sum(axis=0)
logger.debug(
f"{self.model_config.name} Ran object classification with probabilities: {probs}"
)
best_id = np.argmax(probs)
score = round(probs[best_id], 2)
previous_score = self.detected_objects.get(obj_data["id"], 0.0)
self.__update_metrics(datetime.datetime.now().timestamp() - now)
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
obj_data["id"],
object_id,
now,
self.labelmap[best_id],
score,
max_files=200,
)
if score < self.model_config.threshold:
logger.debug(f"Score {score} is less than threshold.")
return
if score <= previous_score:
logger.debug(f"Score {score} is worse than previous score {previous_score}")
return
sub_label = self.labelmap[best_id]
self.detected_objects[obj_data["id"]] = score
if (
self.model_config.object_config.classification_type
== ObjectClassificationType.sub_label
):
if sub_label != "none":
consensus_label, consensus_score = self.get_weighted_score(
object_id, sub_label, score, now
)
if consensus_label is not None:
if (
self.model_config.object_config.classification_type
== ObjectClassificationType.sub_label
):
self.sub_label_publisher.publish(
(obj_data["id"], sub_label, score),
(object_id, consensus_label, consensus_score),
EventMetadataTypeEnum.sub_label,
)
elif (
self.model_config.object_config.classification_type
== ObjectClassificationType.attribute
):
self.sub_label_publisher.publish(
(obj_data["id"], self.model_config.name, sub_label, score),
EventMetadataTypeEnum.attribute.value,
)
elif (
self.model_config.object_config.classification_type
== ObjectClassificationType.attribute
):
self.sub_label_publisher.publish(
(
object_id,
self.model_config.name,
consensus_label,
consensus_score,
),
EventMetadataTypeEnum.attribute.value,
)
def handle_request(self, topic, request_data):
if topic == EmbeddingsRequestEnum.reload_classification_model.value:
@ -368,8 +524,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
return None
def expire_object(self, object_id, camera):
if object_id in self.detected_objects:
self.detected_objects.pop(object_id)
if object_id in self.classification_history:
self.classification_history.pop(object_id)
@staticmethod
@ -380,6 +536,7 @@ def write_classification_attempt(
timestamp: float,
label: str,
score: float,
max_files: int = 100,
) -> None:
if "-" in label:
label = label.replace("-", "_")
@ -395,5 +552,8 @@ def write_classification_attempt(
)
# delete oldest face image if maximum is reached
if len(files) > 100:
os.unlink(os.path.join(folder, files[-1]))
try:
if len(files) > max_files:
os.unlink(os.path.join(folder, files[-1]))
except FileNotFoundError:
pass

View File

@ -166,6 +166,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
camera = obj_data["camera"]
if not self.config.cameras[camera].face_recognition.enabled:
logger.debug(f"Face recognition disabled for camera {camera}, skipping")
return
start = datetime.datetime.now().timestamp()
@ -208,6 +209,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
person_box = obj_data.get("box")
if not person_box:
logger.debug(f"No person box available for {id}")
return
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
@ -233,7 +235,8 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
try:
face_frame = cv2.cvtColor(face_frame, cv2.COLOR_RGB2BGR)
except Exception:
except Exception as e:
logger.debug(f"Failed to convert face frame color for {id}: {e}")
return
else:
# don't run for object without attributes
@ -251,6 +254,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
# no faces detected in this frame
if not face:
logger.debug(f"No face attributes found for {id}")
return
face_box = face.get("box")
@ -274,6 +278,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
res = self.recognizer.classify(face_frame)
if not res:
logger.debug(f"Face recognizer returned no result for {id}")
self.__update_metrics(datetime.datetime.now().timestamp() - start)
return
@ -330,6 +335,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
def handle_request(self, topic, request_data) -> dict[str, Any] | None:
if topic == EmbeddingsRequestEnum.clear_face_classifier.value:
self.recognizer.clear()
return {"success": True, "message": "Face classifier cleared."}
elif topic == EmbeddingsRequestEnum.recognize_face.value:
img = cv2.imdecode(
np.frombuffer(base64.b64decode(request_data["image"]), dtype=np.uint8),
@ -417,7 +423,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
res = self.recognizer.classify(img)
if not res:
return
return {
"message": "No face was recognized.",
"success": False,
}
sub_label, score = res
@ -436,6 +445,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
)
shutil.move(current_file, new_file)
return {
"message": f"Successfully reprocessed face. Result: {sub_label} (score: {score:.2f})",
"success": True,
"face_name": sub_label,
"score": score,
}
def expire_object(self, object_id: str, camera: str):
if object_id in self.person_face_history:
self.person_face_history.pop(object_id)

View File

@ -3,6 +3,7 @@
import logging
import os
import platform
import threading
from abc import ABC, abstractmethod
from typing import Any
@ -21,21 +22,25 @@ def is_arm64_platform() -> bool:
return machine in ("aarch64", "arm64", "armv8", "armv7l")
def get_ort_session_options() -> ort.SessionOptions | None:
def get_ort_session_options(
is_complex_model: bool = False,
) -> ort.SessionOptions | None:
"""Get ONNX Runtime session options with appropriate settings.
On ARM/RKNN platforms, use basic optimizations to avoid graph fusion issues
that can break certain models. On amd64, use default optimizations for better performance.
"""
sess_options = None
Args:
is_complex_model: Whether the model needs basic optimization to avoid graph fusion issues.
if is_arm64_platform():
Returns:
SessionOptions with appropriate optimization level, or None for default settings.
"""
if is_complex_model:
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = (
ort.GraphOptimizationLevel.ORT_ENABLE_BASIC
)
return sess_options
return sess_options
return None
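# Editorial sketch (not from the PR): with the rewrite above, basic graph
# optimization is opt-in per model instead of tied to the host architecture.
# Hedged usage example, assuming onnxruntime is importable as in this module:

import onnxruntime as ort

opts = get_ort_session_options(is_complex_model=True)
assert opts.graph_optimization_level == ort.GraphOptimizationLevel.ORT_ENABLE_BASIC
assert get_ort_session_options(is_complex_model=False) is None
# opts would then be passed to ort.InferenceSession(model_path, sess_options=opts, ...)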
# Import OpenVINO only when needed to avoid circular dependencies
@ -103,6 +108,21 @@ class BaseModelRunner(ABC):
class ONNXModelRunner(BaseModelRunner):
"""Run ONNX models using ONNX Runtime."""
@staticmethod
def is_cpu_complex_model(model_type: str) -> bool:
"""Check if model needs basic optimization level to avoid graph fusion issues.
Some models (like Jina-CLIP) have issues with aggressive optimizations like
SimplifiedLayerNormFusion that create or expect nodes that don't exist.
"""
# Import here to avoid circular imports
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type in [
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
]
@staticmethod
def is_migraphx_complex_model(model_type: str) -> bool:
# Import here to avoid circular imports
@ -142,12 +162,12 @@ class CudaGraphRunner(BaseModelRunner):
"""
@staticmethod
def is_complex_model(model_type: str) -> bool:
def is_model_supported(model_type: str) -> bool:
# Import here to avoid circular imports
from frigate.detectors.detector_config import ModelTypeEnum
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type in [
return model_type not in [
ModelTypeEnum.yolonas.value,
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
@ -215,11 +235,35 @@ class OpenVINOModelRunner(BaseModelRunner):
# Import here to avoid circular imports
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type in [EnrichmentModelTypeEnum.paddleocr.value]
return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v2.value,
]
@staticmethod
def is_model_npu_supported(model_type: str) -> bool:
# Import here to avoid circular imports
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type not in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.arcface.value,
]
def __init__(self, model_path: str, device: str, model_type: str, **kwargs):
self.model_path = model_path
self.device = device
if device == "NPU" and not OpenVINOModelRunner.is_model_npu_supported(
model_type
):
logger.warning(
f"OpenVINO model {model_type} is not supported on NPU, using GPU instead"
)
device = "GPU"
self.complex_model = OpenVINOModelRunner.is_complex_model(model_type)
if not os.path.isfile(model_path):
@ -247,6 +291,10 @@ class OpenVINOModelRunner(BaseModelRunner):
self.infer_request = self.compiled_model.create_infer_request()
self.input_tensor: ov.Tensor | None = None
# Thread lock to prevent concurrent inference (needed for JinaV2 which shares
# one runner between text and vision embeddings called from different threads)
self._inference_lock = threading.Lock()
if not self.complex_model:
try:
input_shape = self.compiled_model.inputs[0].get_shape()
@ -290,55 +338,74 @@ class OpenVINOModelRunner(BaseModelRunner):
Returns:
List of output tensors
"""
# Handle single input case for backward compatibility
if (
len(inputs) == 1
and len(self.compiled_model.inputs) == 1
and self.input_tensor is not None
):
# Single input case - use the pre-allocated tensor for efficiency
input_data = list(inputs.values())[0]
np.copyto(self.input_tensor.data, input_data)
self.infer_request.infer(self.input_tensor)
else:
if self.complex_model:
# Lock prevents concurrent access to infer_request
# Needed for JinaV2: genai thread (text) + embeddings thread (vision)
with self._inference_lock:
# Handle single input case for backward compatibility
if (
len(inputs) == 1
and len(self.compiled_model.inputs) == 1
and self.input_tensor is not None
):
# Single input case - use the pre-allocated tensor for efficiency
input_data = list(inputs.values())[0]
np.copyto(self.input_tensor.data, input_data)
self.infer_request.infer(self.input_tensor)
else:
if self.complex_model:
try:
# This ensures the model starts with a clean state for each sequence
# Important for RNN models like PaddleOCR recognition
self.infer_request.reset_state()
except Exception:
# this will raise an exception for models with AUTO set as the device
pass
# Multiple inputs case - set each input by name
for input_name, input_data in inputs.items():
# Find the input by name and its index
input_port = None
input_index = None
for idx, port in enumerate(self.compiled_model.inputs):
if port.get_any_name() == input_name:
input_port = port
input_index = idx
break
if input_port is None:
raise ValueError(f"Input '{input_name}' not found in model")
# Create tensor with the correct element type
input_element_type = input_port.get_element_type()
# Ensure input data matches the expected dtype to prevent type mismatches
# that can occur with models like Jina-CLIP v2 running on OpenVINO
expected_dtype = input_element_type.to_dtype()
if input_data.dtype != expected_dtype:
logger.debug(
f"Converting input '{input_name}' from {input_data.dtype} to {expected_dtype}"
)
input_data = input_data.astype(expected_dtype)
input_tensor = ov.Tensor(input_element_type, input_data.shape)
np.copyto(input_tensor.data, input_data)
# Set the input tensor for the specific port index
self.infer_request.set_input_tensor(input_index, input_tensor)
# Run inference
try:
# This ensures the model starts with a clean state for each sequence
# Important for RNN models like PaddleOCR recognition
self.infer_request.reset_state()
except Exception:
# this will raise an exception for models with AUTO set as the device
pass
self.infer_request.infer()
except Exception as e:
logger.error(f"Error during OpenVINO inference: {e}")
return []
# Multiple inputs case - set each input by name
for input_name, input_data in inputs.items():
# Find the input by name
input_port = None
for port in self.compiled_model.inputs:
if port.get_any_name() == input_name:
input_port = port
break
# Get all output tensors
outputs = []
for i in range(len(self.compiled_model.outputs)):
outputs.append(self.infer_request.get_output_tensor(i).data)
if input_port is None:
raise ValueError(f"Input '{input_name}' not found in model")
# Create tensor with the correct element type
input_element_type = input_port.get_element_type()
input_tensor = ov.Tensor(input_element_type, input_data.shape)
np.copyto(input_tensor.data, input_data)
# Set the input tensor
self.infer_request.set_input_tensor(input_tensor)
# Run inference
self.infer_request.infer()
# Get all output tensors
outputs = []
for i in range(len(self.compiled_model.outputs)):
outputs.append(self.infer_request.get_output_tensor(i).data)
return outputs
return outputs
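# Editorial sketch (not from the PR): the _inference_lock above serializes the
# two threads that share one JinaV2 runner (text embeddings from the genai
# thread, vision embeddings from the embeddings thread). The same pattern,
# standalone, with a hypothetical shared resource:

import threading

class SharedRunner:
    def __init__(self):
        self._lock = threading.Lock()
        self.results = []

    def run(self, name: str) -> None:
        with self._lock:  # only one thread uses the infer request at a time
            self.results.append(f"{name}-done")

runner = SharedRunner()
threads = [
    threading.Thread(target=runner.run, args=(n,)) for n in ("text", "vision")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(runner.results) == ["text-done", "vision-done"]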
class RKNNModelRunner(BaseModelRunner):
@ -466,7 +533,7 @@ def get_optimized_runner(
return OpenVINOModelRunner(model_path, device, model_type, **kwargs)
if (
not CudaGraphRunner.is_complex_model(model_type)
CudaGraphRunner.is_model_supported(model_type)
and providers[0] == "CUDAExecutionProvider"
):
options[0] = {
@ -494,7 +561,9 @@ def get_optimized_runner(
return ONNXModelRunner(
ort.InferenceSession(
model_path,
sess_options=get_ort_session_options(),
sess_options=get_ort_session_options(
ONNXModelRunner.is_cpu_complex_model(model_type)
),
providers=providers,
provider_options=options,
)

View File

@ -17,6 +17,7 @@ from frigate.detectors.detector_config import (
BaseDetectorConfig,
ModelTypeEnum,
)
from frigate.util.file import FileLock
from frigate.util.model import post_process_yolo
logger = logging.getLogger(__name__)
@ -177,29 +178,6 @@ class MemryXDetector(DetectionApi):
logger.error(f"Failed to initialize MemryX model: {e}")
raise
def _acquire_file_lock(self, lock_path: str, timeout: int = 60, poll: float = 0.2):
"""
Create an exclusive lock file. Blocks (with polling) until it can acquire,
or raises TimeoutError. Uses only stdlib (os.O_EXCL).
"""
start = time.time()
while True:
try:
fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_RDWR)
os.close(fd)
return
except FileExistsError:
if time.time() - start > timeout:
raise TimeoutError(f"Timeout waiting for lock: {lock_path}")
time.sleep(poll)
def _release_file_lock(self, lock_path: str):
"""Best-effort removal of the lock file."""
try:
os.remove(lock_path)
except FileNotFoundError:
pass
def load_yolo_constants(self):
base = f"{self.cache_dir}/{self.model_folder}"
# constants for yolov9 post-processing
@ -212,9 +190,9 @@ class MemryXDetector(DetectionApi):
os.makedirs(self.cache_dir, exist_ok=True)
lock_path = os.path.join(self.cache_dir, f".{self.model_folder}.lock")
self._acquire_file_lock(lock_path)
lock = FileLock(lock_path, timeout=60)
try:
with lock:
# ---------- CASE 1: user provided a custom model path ----------
if self.memx_model_path:
if not self.memx_model_path.endswith(".zip"):
@ -338,9 +316,6 @@ class MemryXDetector(DetectionApi):
f"Failed to remove downloaded zip {zip_path}: {e}"
)
finally:
self._release_file_lock(lock_path)
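# Editorial sketch (not from the PR): the removed helpers above are replaced by
# frigate.util.file.FileLock used as a context manager. A minimal stand-in with
# the same O_EXCL-plus-polling behavior (the real FileLock may differ internally):

import os
import time

class SimpleFileLock:
    def __init__(self, path: str, timeout: float = 60, poll: float = 0.2):
        self.path, self.timeout, self.poll = path, timeout, poll

    def __enter__(self):
        deadline = time.time() + self.timeout
        while True:
            try:
                # O_EXCL makes creation atomic across processes
                os.close(os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_RDWR))
                return self
            except FileExistsError:
                if time.time() > deadline:
                    raise TimeoutError(f"Timeout waiting for lock: {self.path}")
                time.sleep(self.poll)

    def __exit__(self, *exc):
        try:
            os.remove(self.path)  # best-effort release
        except FileNotFoundError:
            pass

# usage: with SimpleFileLock("/tmp/.model.lock"): download/extract exactly once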
def send_input(self, connection_id, tensor_input: np.ndarray):
"""Pre-process (if needed) and send frame to MemryX input queue"""
if tensor_input is None:

View File

@ -29,7 +29,7 @@ from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.models import Event, Trigger
from frigate.types import ModelStatusTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, serialize
from frigate.util.path import get_event_thumbnail_bytes
from frigate.util.file import get_event_thumbnail_bytes
from .onnx.jina_v1_embedding import JinaV1ImageEmbedding, JinaV1TextEmbedding
from .onnx.jina_v2_embedding import JinaV2Embedding
@ -472,7 +472,7 @@ class Embeddings:
)
thumbnail_missing = True
except DoesNotExist:
logger.warning(
logger.debug(
f"Event ID {trigger.data} for trigger {trigger_name} does not exist."
)
continue

View File

@ -9,6 +9,7 @@ from typing import Any
from peewee import DoesNotExist
from frigate.comms.config_updater import ConfigSubscriber
from frigate.comms.detections_updater import DetectionSubscriber, DetectionTypeEnum
from frigate.comms.embeddings_updater import (
EmbeddingsRequestEnum,
@ -61,8 +62,8 @@ from frigate.events.types import EventTypeEnum, RegenerateDescriptionEnum
from frigate.genai import get_genai_client
from frigate.models import Event, Recordings, ReviewSegment, Trigger
from frigate.util.builtin import serialize
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import SharedMemoryFrameManager
from frigate.util.path import get_event_thumbnail_bytes
from .embeddings import Embeddings
@ -95,6 +96,9 @@ class EmbeddingMaintainer(threading.Thread):
CameraConfigUpdateEnum.semantic_search,
],
)
self.classification_config_subscriber = ConfigSubscriber(
"config/classification/custom/"
)
# Configure Frigate DB
db = SqliteVecQueueDatabase(
@ -154,11 +158,13 @@ class EmbeddingMaintainer(threading.Thread):
self.realtime_processors: list[RealTimeProcessorApi] = []
if self.config.face_recognition.enabled:
logger.debug("Face recognition enabled, initializing FaceRealTimeProcessor")
self.realtime_processors.append(
FaceRealTimeProcessor(
self.config, self.requestor, self.event_metadata_publisher, metrics
)
)
logger.debug("FaceRealTimeProcessor initialized successfully")
if self.config.classification.bird.enabled:
self.realtime_processors.append(
@ -220,7 +226,9 @@ class EmbeddingMaintainer(threading.Thread):
for c in self.config.cameras.values()
):
self.post_processors.append(
AudioTranscriptionPostProcessor(self.config, self.requestor, metrics)
AudioTranscriptionPostProcessor(
self.config, self.requestor, self.embeddings, metrics
)
)
semantic_trigger_processor: SemanticTriggerProcessor | None = None
@ -229,6 +237,7 @@ class EmbeddingMaintainer(threading.Thread):
db,
self.config,
self.requestor,
self.event_metadata_publisher,
metrics,
self.embeddings,
)
@ -255,6 +264,7 @@ class EmbeddingMaintainer(threading.Thread):
"""Maintain a SQLite-vec database for semantic search."""
while not self.stop_event.is_set():
self.config_updater.check_for_updates()
self._check_classification_config_updates()
self._process_requests()
self._process_updates()
self._process_recordings_updates()
@ -265,6 +275,7 @@ class EmbeddingMaintainer(threading.Thread):
self._process_event_metadata()
self.config_updater.stop()
self.classification_config_subscriber.stop()
self.event_subscriber.stop()
self.event_end_subscriber.stop()
self.recordings_subscriber.stop()
@ -275,6 +286,67 @@ class EmbeddingMaintainer(threading.Thread):
self.requestor.stop()
logger.info("Exiting embeddings maintenance...")
def _check_classification_config_updates(self) -> None:
"""Check for classification config updates and add/remove processors."""
topic, model_config = self.classification_config_subscriber.check_for_update()
if topic:
model_name = topic.split("/")[-1]
if model_config is None:
self.realtime_processors = [
processor
for processor in self.realtime_processors
if not (
isinstance(
processor,
(
CustomStateClassificationProcessor,
CustomObjectClassificationProcessor,
),
)
and processor.model_config.name == model_name
)
]
logger.info(
f"Successfully removed classification processor for model: {model_name}"
)
else:
self.config.classification.custom[model_name] = model_config
# Check if processor already exists
for processor in self.realtime_processors:
if isinstance(
processor,
(
CustomStateClassificationProcessor,
CustomObjectClassificationProcessor,
),
):
if processor.model_config.name == model_name:
logger.debug(
f"Classification processor for model {model_name} already exists, skipping"
)
return
if model_config.state_config is not None:
processor = CustomStateClassificationProcessor(
self.config, model_config, self.requestor, self.metrics
)
else:
processor = CustomObjectClassificationProcessor(
self.config,
model_config,
self.event_metadata_publisher,
self.metrics,
)
self.realtime_processors.append(processor)
logger.info(
f"Added classification processor for model: {model_name} (type: {type(processor).__name__})"
)
def _process_requests(self) -> None:
"""Process embeddings requests"""
@ -327,7 +399,14 @@ class EmbeddingMaintainer(threading.Thread):
source_type, _, camera, frame_name, data = update
logger.debug(
f"Received update - source_type: {source_type}, camera: {camera}, data label: {data.get('label') if data else 'None'}"
)
if not camera or source_type != EventTypeEnum.tracked_object:
logger.debug(
f"Skipping update - camera: {camera}, source_type: {source_type}"
)
return
if self.config.semantic_search.enabled:
@ -337,6 +416,9 @@ class EmbeddingMaintainer(threading.Thread):
# no need to process updated objects if no processors are active
if len(self.realtime_processors) == 0 and len(self.post_processors) == 0:
logger.debug(
f"No processors active - realtime: {len(self.realtime_processors)}, post: {len(self.post_processors)}"
)
return
# Create our own thumbnail based on the bounding box and the frame time
@ -345,6 +427,7 @@ class EmbeddingMaintainer(threading.Thread):
frame_name, camera_config.frame_shape_yuv
)
except FileNotFoundError:
logger.debug(f"Frame {frame_name} not found for camera {camera}")
pass
if yuv_frame is None:
@ -353,7 +436,11 @@ class EmbeddingMaintainer(threading.Thread):
)
return
logger.debug(
f"Processing {len(self.realtime_processors)} realtime processors for object {data.get('id')} (label: {data.get('label')})"
)
for processor in self.realtime_processors:
logger.debug(f"Calling process_frame on {processor.__class__.__name__}")
processor.process_frame(data, yuv_frame)
for processor in self.post_processors:

View File

@ -12,7 +12,7 @@ from frigate.config import FrigateConfig
from frigate.const import CLIPS_DIR
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.models import Event, Timeline
from frigate.util.path import delete_event_snapshot, delete_event_thumbnail
from frigate.util.file import delete_event_snapshot, delete_event_thumbnail
logger = logging.getLogger(__name__)

View File

@ -150,10 +150,10 @@ PRESETS_HW_ACCEL_SCALE["preset-rk-h265"] = PRESETS_HW_ACCEL_SCALE[FFMPEG_HWACCEL
PRESETS_HW_ACCEL_ENCODE_BIRDSEYE = {
"preset-rpi-64-h264": "{0} -hide_banner {1} -c:v h264_v4l2m2m {2}",
"preset-rpi-64-h265": "{0} -hide_banner {1} -c:v hevc_v4l2m2m {2}",
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
FFMPEG_HWACCEL_VAAPI: "{0} -hide_banner -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device {3} {1} -c:v h264_vaapi -g 50 -bf 0 -profile:v high -level:v 4.1 -sei:v 0 -an -vf format=vaapi|nv12,hwupload {2}",
"preset-intel-qsv-h264": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {2}",
"preset-intel-qsv-h265": "{0} -hide_banner {1} -c:v h264_qsv -g 50 -bf 0 -profile:v main -level:v 4.1 -async_depth:v 1 {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
FFMPEG_HWACCEL_NVIDIA: "{0} -hide_banner {1} -hwaccel_device {3} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {2}",
"preset-jetson-h264": "{0} -hide_banner {1} -c:v h264_nvmpi -profile high {2}",
"preset-jetson-h265": "{0} -hide_banner {1} -c:v h264_nvmpi -profile main {2}",
FFMPEG_HWACCEL_RKMPP: "{0} -hide_banner {1} -c:v h264_rkmpp -profile:v high {2}",
@ -246,7 +246,7 @@ def parse_preset_hardware_acceleration_scale(
",hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5" in scale
and os.environ.get("FFMPEG_DISABLE_GAMMA_EQUALIZER") is not None
):
scale.replace(
scale = scale.replace(
",hwdownload,format=nv12,eq=gamma=1.4:gamma_weight=0.5",
":format=nv12,hwdownload,format=nv12,format=yuv420p",
)
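# Editorial sketch (not from the PR): the fix above works because Python
# strings are immutable; str.replace returns a new string rather than
# modifying the receiver, so the result must be reassigned.

scale = "scale_vaapi,hwdownload,format=nv12"
scale.replace(",hwdownload", "")            # return value discarded; no effect
assert scale == "scale_vaapi,hwdownload,format=nv12"
scale = scale.replace(",hwdownload", "")    # reassignment applies the change
assert scale == "scale_vaapi,format=nv12"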

View File

@ -51,8 +51,7 @@ class GenAIClient:
def get_concern_prompt() -> str:
if concerns:
concern_list = "\n - ".join(concerns)
return f"""
- `other_concerns` (list of strings): Include a list of any of the following concerns that are occurring:
return f"""- `other_concerns` (list of strings): Include a list of any of the following concerns that are occurring:
- {concern_list}"""
else:
return ""
@ -63,58 +62,68 @@ class GenAIClient:
else:
return ""
def get_verified_objects() -> str:
if review_data["recognized_objects"]:
return " - " + "\n - ".join(review_data["recognized_objects"])
def get_objects_list() -> str:
if review_data["unified_objects"]:
return "\n- " + "\n- ".join(review_data["unified_objects"])
else:
return " None"
return "\n- (No objects detected)"
context_prompt = f"""
Please analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera.
Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"]} security camera.
## Normal Activity Patterns for This Property
**Normal activity patterns for this property:**
{activity_context_prompt}
## Task Instructions
Your task is to provide a clear, accurate description of the scene that:
1. States exactly what is happening based on observable actions and movements.
2. Evaluates whether the observable evidence suggests normal activity for this property or genuine security concerns.
3. Assigns a potential_threat_level based on the definitions below, applying them consistently.
2. Evaluates the activity against the Normal and Suspicious Activity Indicators above.
3. Assigns a potential_threat_level (0, 1, or 2) based on the threat level indicators defined above, applying them consistently.
**IMPORTANT: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. Only consider higher threat levels if the activity clearly deviates from normal patterns or shows genuine security concerns.**
**Use the activity patterns above as guidance to calibrate your assessment. Match the activity against both normal and suspicious indicators, then use your judgment based on the complete context.**
## Analysis Guidelines
When forming your description:
- **CRITICAL: Only describe objects explicitly listed in "Detected objects" below.** Do not infer or mention additional people, vehicles, or objects not present in the detected objects list, even if visual patterns suggest them. If only a car is detected, do not describe a person interacting with it unless "person" is also in the detected objects list.
- **CRITICAL: Only describe objects explicitly listed in "Objects in Scene" below.** Do not infer or mention additional people, vehicles, or objects not present in this list, even if visual patterns suggest them. If only a car is listed, do not describe a person interacting with it unless "person" is also in the objects list.
- **Only describe actions actually visible in the frames.** Do not assume or infer actions that you don't observe happening. If someone walks toward furniture but you never see them sit, do not say they sat. Stick to what you can see across the sequence.
- Describe what you observe: actions, movements, interactions with objects and the environment. Include any observable environmental changes (e.g., lighting changes triggered by activity).
- Note visible details such as clothing, items being carried or placed, tools or equipment present, and how they interact with the property or objects.
- Consider the full sequence chronologically: what happens from start to finish, how duration and actions relate to the location and objects involved.
- **Use the actual timestamp provided in "Activity started at"** below for time of day context; do not infer time from image brightness or darkness. Unusual hours (late night/early morning) should increase suspicion when the observable behavior itself appears questionable. However, recognize that some legitimate activities can occur at any hour.
- Identify patterns that suggest genuine security concerns: testing doors/windows on vehicles or buildings, accessing unauthorized areas, attempting to conceal actions, extended loitering without apparent purpose, taking items, behavior that clearly doesn't align with the zone context and detected objects.
- **Weigh all evidence holistically**: Start by checking if the activity matches the normal patterns above. If it does, assign Level 0. Only consider Level 1 if the activity clearly deviates from normal patterns or shows genuine security concerns that warrant attention.
- **Consider duration as a primary factor**: Apply the duration thresholds defined in the activity patterns above. Brief sequences during normal hours with apparent purpose typically indicate normal activity unless explicit suspicious actions are visible.
- **Weigh all evidence holistically**: Match the activity against the normal and suspicious patterns defined above, then evaluate based on the complete context (zone, objects, time, actions, duration). Apply the threat level indicators consistently. Use your judgment for edge cases.
## Response Format
Your response MUST be a flat JSON object with:
- `title` (string): A concise, one-sentence title that captures the main activity. Include any verified recognized objects (from the "Verified recognized objects" list below) and key detected objects. Examples: "Joe walking dog in backyard", "Unknown person testing car doors at night".
- `scene` (string): A narrative description of what happens across the sequence from start to finish. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
- `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
- `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below. Your threat level must be consistent with your scene description and the guidance above.
- `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
{get_concern_prompt()}
Threat-level definitions:
- 0 **Normal activity (DEFAULT)**: What you observe matches the normal activity patterns above or is consistent with expected activity for this property type. The observable evidence, considering zone context, detected objects, and timing together, supports a benign explanation. **Use this level for routine activities even if minor ambiguous elements exist.**
- 1 **Potentially suspicious**: Observable behavior raises genuine security concerns that warrant human review. The evidence doesn't support a routine explanation and clearly deviates from the normal patterns above. Examples: testing doors/windows on vehicles or structures, accessing areas that don't align with the activity, taking items that likely don't belong to them, behavior clearly inconsistent with the zone and context, or activity that lacks any visible legitimate indicators. **Only use this level when the activity clearly doesn't match normal patterns.**
- 2 **Immediate threat**: Clear evidence of forced entry, break-in, vandalism, aggression, weapons, theft in progress, or active property damage.
## Sequence Details
Sequence details:
- Frame 1 = earliest, Frame {len(thumbnails)} = latest
- Activity started at {review_data["start"]} and lasted {review_data["duration"]} seconds
- Detected objects: {", ".join(review_data["objects"])}
- Verified recognized objects (use these names when describing these objects):
{get_verified_objects()}
- Zones involved: {", ".join(z.replace("_", " ").title() for z in review_data["zones"]) or "None"}
- Zones involved: {", ".join(review_data["zones"]) if review_data["zones"] else "None"}
**IMPORTANT:**
## Objects in Scene
Each line represents a detection state, not necessarily unique individuals. Parentheses indicate object type or category; use only the name/label in your response, not the parentheses.
**CRITICAL: When you see both recognized and unrecognized entries of the same type (e.g., "Joe (person)" and "Person"), visually count how many distinct people/objects you actually see based on appearance and clothing. If you observe only ONE person throughout the sequence, use ONLY the recognized name (e.g., "Joe"). The same person may be recognized in some frames but not others. Only describe both if you visually see MULTIPLE distinct people with clearly different appearances.**
**Note: Unidentified objects (without names) are NOT indicators of suspicious activity; they simply mean the system hasn't identified that object.**
{get_objects_list()}
## Important Notes
- Values must be plain strings, floats, or integers no nested objects, no extra commentary.
- Only describe objects from the "Detected objects" list above. Do not hallucinate additional objects.
- Only describe objects from the "Objects in Scene" list above. Do not hallucinate additional objects.
- When describing people or vehicles, use the exact names provided.
{get_language_prompt()}
"""
logger.debug(
@ -149,7 +158,8 @@ Sequence details:
try:
metadata = ReviewMetadata.model_validate_json(clean_json)
if review_data["recognized_objects"]:
# If any verified objects (contain parentheses with name), set to 0
if any("(" in obj for obj in review_data["unified_objects"]):
metadata.potential_threat_level = 0
metadata.time = review_data["start"]

View File

@ -1,7 +1,7 @@
"""Ollama Provider for Frigate AI."""
import logging
from typing import Optional
from typing import Any, Optional
from httpx import TimeoutException
from ollama import Client as ApiClient
@ -17,10 +17,24 @@ logger = logging.getLogger(__name__)
class OllamaClient(GenAIClient):
"""Generative AI client for Frigate using Ollama."""
LOCAL_OPTIMIZED_OPTIONS = {
"options": {
"temperature": 0.5,
"repeat_penalty": 1.05,
"presence_penalty": 0.3,
},
}
provider: ApiClient
provider_options: dict[str, Any]
def _init_provider(self):
"""Initialize the client."""
self.provider_options = {
**self.LOCAL_OPTIMIZED_OPTIONS,
**self.genai_config.provider_options,
}
try:
client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
# ensure the model is available locally
@ -48,10 +62,13 @@ class OllamaClient(GenAIClient):
self.genai_config.model,
prompt,
images=images if images else None,
**self.genai_config.provider_options,
**self.provider_options,
)
logger.debug(
f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
)
return result["response"].strip()
except (TimeoutException, ResponseError) as e:
except (TimeoutException, ResponseError, ConnectionError) as e:
logger.warning("Ollama returned an error: %s", str(e))
return None
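# Editorial sketch (not from the PR): the {**defaults, **user} merge above lets
# user provider_options win. Note it is a shallow merge, so a user-supplied
# "options" key replaces the entire nested dict rather than merging into it:

defaults = {"options": {"temperature": 0.5, "repeat_penalty": 1.05}}
user = {"options": {"temperature": 0.9}}
merged = {**defaults, **user}
assert merged == {"options": {"temperature": 0.9}}  # repeat_penalty dropped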

View File

@ -18,6 +18,7 @@ class OpenAIClient(GenAIClient):
"""Generative AI client for Frigate using OpenAI."""
provider: OpenAI
context_size: Optional[int] = None
def _init_provider(self):
"""Initialize the client."""
@ -69,5 +70,33 @@ class OpenAIClient(GenAIClient):
def get_context_size(self) -> int:
"""Get the context window size for OpenAI."""
# OpenAI GPT-4 Vision models have 128K token context window
return 128000
if self.context_size is not None:
return self.context_size
try:
models = self.provider.models.list()
for model in models.data:
if model.id == self.genai_config.model:
if hasattr(model, "max_model_len") and model.max_model_len:
self.context_size = model.max_model_len
logger.debug(
f"Retrieved context size {self.context_size} for model {self.genai_config.model}"
)
return self.context_size
except Exception as e:
logger.debug(
f"Failed to fetch model context size from API: {e}, using default"
)
# Default to 128K for ChatGPT models, 8K for others
model_name = self.genai_config.model.lower()
if "gpt" in model_name:
self.context_size = 128000
else:
self.context_size = 8192
logger.debug(
f"Using default context size {self.context_size} for model {self.genai_config.model}"
)
return self.context_size

View File

@ -9,6 +9,7 @@ from multiprocessing import Queue, Value
from multiprocessing.synchronize import Event as MpEvent
import numpy as np
import zmq
from frigate.comms.object_detector_signaler import (
ObjectDetectorPublisher,
@ -377,6 +378,15 @@ class RemoteObjectDetector:
if self.stop_event.is_set():
return detections
# Drain any stale detection results from the ZMQ buffer before making a new request
# This prevents reading detection results from a previous request
# NOTE: This should never happen, but it can in some rare cases
while True:
try:
self.detector_subscriber.socket.recv_string(flags=zmq.NOBLOCK)
except zmq.Again:
break
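# Editorial sketch (not from the PR): the same drain idiom with a plain queue,
# standing in for the non-blocking ZMQ reads above. Emptying leftover replies
# first guarantees the next receive pairs with the request about to be made.

import queue

q = queue.Queue()
q.put("stale-result-from-previous-request")
while True:
    try:
        q.get_nowait()      # analogous to recv_string(flags=zmq.NOBLOCK)
    except queue.Empty:     # analogous to zmq.Again
        break
assert q.empty()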
# copy input to shared memory
self.np_shm[:] = tensor_input[:]
self.detection_queue.put(self.name)

View File

@ -9,6 +9,7 @@ import os
import queue
import subprocess as sp
import threading
import time
import traceback
from typing import Any, Optional
@ -791,6 +792,10 @@ class Birdseye:
self.frame_manager = SharedMemoryFrameManager()
self.stop_event = stop_event
self.requestor = InterProcessRequestor()
self.idle_fps: float = self.config.birdseye.idle_heartbeat_fps
self._idle_interval: Optional[float] = (
(1.0 / self.idle_fps) if self.idle_fps > 0 else None
)
if config.birdseye.restream:
self.birdseye_buffer = self.frame_manager.create(
@ -848,6 +853,15 @@ class Birdseye:
if frame_layout_changed:
coordinates = self.birdseye_manager.get_camera_coordinates()
self.requestor.send_data(UPDATE_BIRDSEYE_LAYOUT, coordinates)
if self._idle_interval:
now = time.monotonic()
is_idle = len(self.birdseye_manager.camera_layout) == 0
if (
is_idle
and (now - self.birdseye_manager.last_output_time)
>= self._idle_interval
):
self.__send_new_frame()
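# Editorial sketch (not from the PR): worked example of the interval math
# above. With a hypothetical idle_heartbeat_fps of 0.25, an idle Birdseye
# re-emits its last frame every 4 seconds; 0 disables the heartbeat.

idle_fps = 0.25                                       # hypothetical config value
interval = (1.0 / idle_fps) if idle_fps > 0 else None
assert interval == 4.0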
def stop(self) -> None:
self.converter.join()

View File

@ -14,7 +14,8 @@ from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
from frigate.record.util import remove_empty_directories, sync_recordings
from frigate.util.builtin import clear_and_unlink, get_tomorrow_at_time
from frigate.util.builtin import clear_and_unlink
from frigate.util.time import get_tomorrow_at_time
logger = logging.getLogger(__name__)

View File

@ -28,7 +28,7 @@ from frigate.ffmpeg_presets import (
parse_preset_hardware_acceleration_encode,
)
from frigate.models import Export, Previews, Recordings
from frigate.util.builtin import is_current_hour
from frigate.util.time import is_current_hour
logger = logging.getLogger(__name__)

View File

@ -407,6 +407,19 @@ class ReviewSegmentMaintainer(threading.Thread):
segment.last_detection_time = frame_time
for object in activity.get_all_objects():
# Alert-level objects should always be added (they extend/upgrade the segment)
# Detection-level objects should only be added if:
# - The segment is a detection segment (matching severity), OR
# - The segment is an alert AND the object started before the alert ended
# (objects starting after will be in the new detection segment)
is_alert_object = object in activity.categorized_objects["alerts"]
if not is_alert_object and segment.severity == SeverityEnum.alert:
# This is a detection-level object
# Only add if it started during the alert's active period
if object["start_time"] > segment.last_alert_time:
continue
if not object["sub_label"]:
segment.detections[object["id"]] = object["label"]
elif object["sub_label"][0] in self.config.model.all_attributes:

View File

@ -25,6 +25,7 @@ from frigate.util.services import (
get_intel_gpu_stats,
get_jetson_stats,
get_nvidia_gpu_stats,
get_openvino_npu_stats,
get_rockchip_gpu_stats,
get_rockchip_npu_stats,
is_vaapi_amd_driver,
@ -247,6 +248,10 @@ async def set_npu_usages(config: FrigateConfig, all_stats: dict[str, Any]) -> No
# Rockchip NPU usage
rk_usage = get_rockchip_npu_stats()
stats["rockchip"] = rk_usage
elif detector.type == "openvino" and detector.device == "NPU":
# OpenVINO NPU usage
ov_usage = get_openvino_npu_stats()
stats["openvino"] = ov_usage
if stats:
all_stats["npu_usages"] = stats

View File

@ -0,0 +1,379 @@
"""Unit tests for recordings/media API endpoints."""
from datetime import datetime, timezone
from typing import Any
import pytz
from fastapi.testclient import TestClient
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Recordings
from frigate.test.http_api.base_http_test import BaseTestHttp
class TestHttpMedia(BaseTestHttp):
"""Test media API endpoints, particularly recordings with DST handling."""
def setUp(self):
"""Set up test fixtures."""
super().setUp([Recordings])
self.app = super().create_app()
# Mock auth to bypass camera access for tests
async def mock_get_current_user(request: Any):
return {"username": "test_user", "role": "admin"}
self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
def tearDown(self):
"""Clean up after tests."""
self.app.dependency_overrides.clear()
super().tearDown()
def test_recordings_summary_across_dst_spring_forward(self):
"""
Test recordings summary across spring DST transition (spring forward).
In 2024, DST in America/New_York transitions on March 10, 2024 at 2:00 AM
Clocks spring forward from 2:00 AM to 3:00 AM (EST to EDT)
"""
tz = pytz.timezone("America/New_York")
# March 9, 2024 at 12:00 PM EST (before DST)
march_9_noon = tz.localize(datetime(2024, 3, 9, 12, 0, 0)).timestamp()
# March 10, 2024 at 12:00 PM EDT (after DST transition)
march_10_noon = tz.localize(datetime(2024, 3, 10, 12, 0, 0)).timestamp()
# March 11, 2024 at 12:00 PM EDT (after DST)
march_11_noon = tz.localize(datetime(2024, 3, 11, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_march_9",
path="/media/recordings/march_9.mp4",
camera="front_door",
start_time=march_9_noon,
end_time=march_9_noon + 3600, # 1 hour recording
duration=3600,
motion=100,
objects=5,
).execute()
Recordings.insert(
id="recording_march_10",
path="/media/recordings/march_10.mp4",
camera="front_door",
start_time=march_10_noon,
end_time=march_10_noon + 3600,
duration=3600,
motion=150,
objects=8,
).execute()
Recordings.insert(
id="recording_march_11",
path="/media/recordings/march_11.mp4",
camera="front_door",
start_time=march_11_noon,
end_time=march_11_noon + 3600,
duration=3600,
motion=200,
objects=10,
).execute()
# Test recordings summary with America/New_York timezone
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
)
assert response.status_code == 200
summary = response.json()
# Verify we get exactly 3 days
assert len(summary) == 3, f"Expected 3 days, got {len(summary)}"
# Verify the correct dates are returned (API returns dict with True values)
assert "2024-03-09" in summary, f"Expected 2024-03-09 in {summary}"
assert "2024-03-10" in summary, f"Expected 2024-03-10 in {summary}"
assert "2024-03-11" in summary, f"Expected 2024-03-11 in {summary}"
assert summary["2024-03-09"] is True
assert summary["2024-03-10"] is True
assert summary["2024-03-11"] is True
def test_recordings_summary_across_dst_fall_back(self):
"""
Test recordings summary across fall DST transition (fall back).
In 2024, DST in America/New_York transitions on November 3, 2024 at 2:00 AM
Clocks fall back from 2:00 AM to 1:00 AM (EDT to EST)
"""
tz = pytz.timezone("America/New_York")
# November 2, 2024 at 12:00 PM EDT (before DST transition)
nov_2_noon = tz.localize(datetime(2024, 11, 2, 12, 0, 0)).timestamp()
# November 3, 2024 at 12:00 PM EST (after DST transition)
# Need to specify is_dst=False to get the time after fall back
nov_3_noon = tz.localize(
datetime(2024, 11, 3, 12, 0, 0), is_dst=False
).timestamp()
# November 4, 2024 at 12:00 PM EST (after DST)
nov_4_noon = tz.localize(datetime(2024, 11, 4, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_nov_2",
path="/media/recordings/nov_2.mp4",
camera="front_door",
start_time=nov_2_noon,
end_time=nov_2_noon + 3600,
duration=3600,
motion=100,
objects=5,
).execute()
Recordings.insert(
id="recording_nov_3",
path="/media/recordings/nov_3.mp4",
camera="front_door",
start_time=nov_3_noon,
end_time=nov_3_noon + 3600,
duration=3600,
motion=150,
objects=8,
).execute()
Recordings.insert(
id="recording_nov_4",
path="/media/recordings/nov_4.mp4",
camera="front_door",
start_time=nov_4_noon,
end_time=nov_4_noon + 3600,
duration=3600,
motion=200,
objects=10,
).execute()
# Test recordings summary with America/New_York timezone
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
)
assert response.status_code == 200
summary = response.json()
# Verify we get exactly 3 days
assert len(summary) == 3, f"Expected 3 days, got {len(summary)}"
# Verify the correct dates are returned (API returns dict with True values)
assert "2024-11-02" in summary, f"Expected 2024-11-02 in {summary}"
assert "2024-11-03" in summary, f"Expected 2024-11-03 in {summary}"
assert "2024-11-04" in summary, f"Expected 2024-11-04 in {summary}"
assert summary["2024-11-02"] is True
assert summary["2024-11-03"] is True
assert summary["2024-11-04"] is True
def test_recordings_summary_multiple_cameras_across_dst(self):
"""
Test recordings summary with multiple cameras across DST boundary.
"""
tz = pytz.timezone("America/New_York")
# March 9, 2024 at 10:00 AM EST (before DST)
march_9_morning = tz.localize(datetime(2024, 3, 9, 10, 0, 0)).timestamp()
# March 10, 2024 at 3:00 PM EDT (after DST transition)
march_10_afternoon = tz.localize(datetime(2024, 3, 10, 15, 0, 0)).timestamp()
with TestClient(self.app) as client:
# Insert recordings for front_door on March 9
Recordings.insert(
id="front_march_9",
path="/media/recordings/front_march_9.mp4",
camera="front_door",
start_time=march_9_morning,
end_time=march_9_morning + 3600,
duration=3600,
motion=100,
objects=5,
).execute()
# Insert recordings for back_door on March 10
Recordings.insert(
id="back_march_10",
path="/media/recordings/back_march_10.mp4",
camera="back_door",
start_time=march_10_afternoon,
end_time=march_10_afternoon + 3600,
duration=3600,
motion=150,
objects=8,
).execute()
# Test with all cameras
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
)
assert response.status_code == 200
summary = response.json()
# Verify we get both days
assert len(summary) == 2, f"Expected 2 days, got {len(summary)}"
assert "2024-03-09" in summary
assert "2024-03-10" in summary
assert summary["2024-03-09"] is True
assert summary["2024-03-10"] is True
def test_recordings_summary_at_dst_transition_time(self):
"""
Test recordings that span the exact DST transition time.
"""
tz = pytz.timezone("America/New_York")
# March 10, 2024 at 1:00 AM EST (1 hour before DST transition)
# At 2:00 AM, clocks jump to 3:00 AM
before_transition = tz.localize(datetime(2024, 3, 10, 1, 0, 0)).timestamp()
# Recording that spans the transition (1:00 AM to 3:30 AM EDT)
# This is 1.5 hours of actual time but spans the "missing" hour
after_transition = tz.localize(datetime(2024, 3, 10, 3, 30, 0)).timestamp()
with TestClient(self.app) as client:
Recordings.insert(
id="recording_during_transition",
path="/media/recordings/transition.mp4",
camera="front_door",
start_time=before_transition,
end_time=after_transition,
duration=after_transition - before_transition,
motion=100,
objects=5,
).execute()
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
)
assert response.status_code == 200
summary = response.json()
# The recording should appear on March 10
assert len(summary) == 1
assert "2024-03-10" in summary
assert summary["2024-03-10"] is True
def test_recordings_summary_utc_timezone(self):
"""
Test recordings summary with UTC timezone (no DST).
"""
# Use UTC timestamps directly
march_9_utc = datetime(2024, 3, 9, 17, 0, 0, tzinfo=timezone.utc).timestamp()
march_10_utc = datetime(2024, 3, 10, 17, 0, 0, tzinfo=timezone.utc).timestamp()
with TestClient(self.app) as client:
Recordings.insert(
id="recording_march_9_utc",
path="/media/recordings/march_9_utc.mp4",
camera="front_door",
start_time=march_9_utc,
end_time=march_9_utc + 3600,
duration=3600,
motion=100,
objects=5,
).execute()
Recordings.insert(
id="recording_march_10_utc",
path="/media/recordings/march_10_utc.mp4",
camera="front_door",
start_time=march_10_utc,
end_time=march_10_utc + 3600,
duration=3600,
motion=150,
objects=8,
).execute()
# Test with UTC timezone
response = client.get(
"/recordings/summary", params={"timezone": "utc", "cameras": "all"}
)
assert response.status_code == 200
summary = response.json()
# Verify we get both days
assert len(summary) == 2
assert "2024-03-09" in summary
assert "2024-03-10" in summary
assert summary["2024-03-09"] is True
assert summary["2024-03-10"] is True
def test_recordings_summary_no_recordings(self):
"""
Test recordings summary when no recordings exist.
"""
with TestClient(self.app) as client:
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
)
assert response.status_code == 200
summary = response.json()
assert len(summary) == 0
def test_recordings_summary_single_camera_filter(self):
"""
Test recordings summary filtered to a single camera.
"""
tz = pytz.timezone("America/New_York")
march_10_noon = tz.localize(datetime(2024, 3, 10, 12, 0, 0)).timestamp()
with TestClient(self.app) as client:
# Insert recordings for both cameras
Recordings.insert(
id="front_recording",
path="/media/recordings/front.mp4",
camera="front_door",
start_time=march_10_noon,
end_time=march_10_noon + 3600,
duration=3600,
motion=100,
objects=5,
).execute()
Recordings.insert(
id="back_recording",
path="/media/recordings/back.mp4",
camera="back_door",
start_time=march_10_noon,
end_time=march_10_noon + 3600,
duration=3600,
motion=150,
objects=8,
).execute()
# Test with only front_door camera
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "front_door"},
)
assert response.status_code == 200
summary = response.json()
assert len(summary) == 1
assert "2024-03-10" in summary
assert summary["2024-03-10"] is True

View File

@ -142,6 +142,14 @@ class TimelineProcessor(threading.Thread):
timeline_entry[Timeline.data]["attribute"] = list(
event_data["attributes"].keys()
)[0]
if len(event_data["current_attributes"]) > 0:
timeline_entry[Timeline.data]["attribute_box"] = to_relative_box(
camera_config.detect.width,
camera_config.detect.height,
event_data["current_attributes"][0]["box"],
)
save = True
elif event_type == EventStateEnum.end:
timeline_entry[Timeline.class_type] = "gone"

View File

@ -23,6 +23,7 @@ class ModelStatusTypesEnum(str, Enum):
error = "error"
training = "training"
complete = "complete"
failed = "failed"
class TrackedObjectUpdateTypesEnum(str, Enum):

View File

@ -15,12 +15,9 @@ from collections.abc import Mapping
from multiprocessing.sharedctypes import Synchronized
from pathlib import Path
from typing import Any, Dict, Optional, Tuple, Union
from zoneinfo import ZoneInfoNotFoundError
import numpy as np
import pytz
from ruamel.yaml import YAML
from tzlocal import get_localzone
from frigate.const import REGEX_HTTP_CAMERA_USER_PASS, REGEX_RTSP_CAMERA_USER_PASS
@ -157,17 +154,6 @@ def load_labels(path: Optional[str], encoding="utf-8", prefill=91):
return labels
def get_tz_modifiers(tz_name: str) -> Tuple[str, str, float]:
seconds_offset = (
datetime.datetime.now(pytz.timezone(tz_name)).utcoffset().total_seconds()
)
hours_offset = int(seconds_offset / 60 / 60)
minutes_offset = int(seconds_offset / 60 - hours_offset * 60)
hour_modifier = f"{hours_offset} hour"
minute_modifier = f"{minutes_offset} minute"
return hour_modifier, minute_modifier, seconds_offset
def to_relative_box(
width: int, height: int, box: Tuple[int, int, int, int]
) -> Tuple[int | float, int | float, int | float, int | float]:
@ -298,34 +284,6 @@ def find_by_key(dictionary, target_key):
return None
def get_tomorrow_at_time(hour: int) -> datetime.datetime:
"""Returns the datetime of the following day at 2am."""
try:
tomorrow = datetime.datetime.now(get_localzone()) + datetime.timedelta(days=1)
except ZoneInfoNotFoundError:
tomorrow = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
days=1
)
logger.warning(
"Using utc for maintenance due to missing or incorrect timezone set"
)
return tomorrow.replace(hour=hour, minute=0, second=0).astimezone(
datetime.timezone.utc
)
def is_current_hour(timestamp: int) -> bool:
"""Returns if timestamp is in the current UTC hour."""
start_of_next_hour = (
datetime.datetime.now(datetime.timezone.utc).replace(
minute=0, second=0, microsecond=0
)
+ datetime.timedelta(hours=1)
).timestamp()
return timestamp < start_of_next_hour
def clear_and_unlink(file: Path, missing_ok: bool = True) -> None:
"""clear file then unlink to avoid space retained by file descriptors."""
if not missing_ok and not file.exists():

View File

@ -1,13 +1,18 @@
"""Util for classification models."""
import datetime
import json
import logging
import os
import random
from collections import defaultdict
from typing import Any
import cv2
import numpy as np
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum, EmbeddingsRequestor
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FfmpegConfig
from frigate.const import (
CLIPS_DIR,
MODEL_CACHE_DIR,
@ -15,16 +20,105 @@ from frigate.const import (
UPDATE_MODEL_STATE,
)
from frigate.log import redirect_output_to_logger
from frigate.models import Event, Recordings, ReviewSegment
from frigate.types import ModelStatusTypesEnum
from frigate.util.file import get_event_thumbnail_bytes
from frigate.util.image import get_image_from_recording
from frigate.util.process import FrigateProcess
BATCH_SIZE = 16
EPOCHS = 50
LEARNING_RATE = 0.001
TRAINING_METADATA_FILE = ".training_metadata.json"
logger = logging.getLogger(__name__)
def write_training_metadata(model_name: str, image_count: int) -> None:
"""
Write training metadata to a hidden file in the model's clips directory.
Args:
model_name: Name of the classification model
image_count: Number of images used in training
"""
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
os.makedirs(clips_model_dir, exist_ok=True)
metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)
metadata = {
"last_training_date": datetime.datetime.now().isoformat(),
"last_training_image_count": image_count,
}
try:
with open(metadata_path, "w") as f:
json.dump(metadata, f, indent=2)
logger.info(f"Wrote training metadata for {model_name}: {image_count} images")
except Exception as e:
logger.error(f"Failed to write training metadata for {model_name}: {e}")
def read_training_metadata(model_name: str) -> dict[str, Any] | None:
"""
Read training metadata from the hidden file in the model's clips directory.
Args:
model_name: Name of the classification model
Returns:
Dictionary with last_training_date and last_training_image_count, or None if not found
"""
clips_model_dir = os.path.join(CLIPS_DIR, model_name)
metadata_path = os.path.join(clips_model_dir, TRAINING_METADATA_FILE)
if not os.path.exists(metadata_path):
return None
try:
with open(metadata_path, "r") as f:
metadata = json.load(f)
return metadata
except Exception as e:
logger.error(f"Failed to read training metadata for {model_name}: {e}")
return None
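A quick round-trip of the two metadata helpers above (the model name is hypothetical; assumes CLIPS_DIR is writable):

```python
write_training_metadata("bird_classifier", image_count=240)

metadata = read_training_metadata("bird_classifier")
if metadata is not None:
    print(metadata["last_training_date"])         # ISO-8601 timestamp
    print(metadata["last_training_image_count"])  # 240
```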
def get_dataset_image_count(model_name: str) -> int:
"""
Count the total number of images in the model's dataset directory.
Args:
model_name: Name of the classification model
Returns:
Total count of images across all categories
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
if not os.path.exists(dataset_dir):
return 0
total_count = 0
try:
for category in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, category)
if not os.path.isdir(category_dir):
continue
image_files = [
f
for f in os.listdir(category_dir)
if f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))
]
total_count += len(image_files)
except Exception as e:
logger.error(f"Failed to count dataset images for {model_name}: {e}")
return 0
return total_count
class ClassificationTrainingProcess(FrigateProcess):
def __init__(self, model_name: str) -> None:
super().__init__(
@ -36,7 +130,8 @@ class ClassificationTrainingProcess(FrigateProcess):
def run(self) -> None:
self.pre_run_setup()
self.__train_classification_model()
success = self.__train_classification_model()
exit(0 if success else 1)
def __generate_representative_dataset_factory(self, dataset_dir: str):
def generate_representative_dataset():
@ -59,87 +154,119 @@ class ClassificationTrainingProcess(FrigateProcess):
@redirect_output_to_logger(logger, logging.DEBUG)
def __train_classification_model(self) -> bool:
"""Train a classification model."""
try:
# import in the function so that tensorflow is not initialized multiple times
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# import in the function so that tensorflow is not initialized multiple times
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
dataset_dir = os.path.join(CLIPS_DIR, self.model_name, "dataset")
model_dir = os.path.join(MODEL_CACHE_DIR, self.model_name)
os.makedirs(model_dir, exist_ok=True)
logger.info(f"Kicking off classification training for {self.model_name}.")
dataset_dir = os.path.join(CLIPS_DIR, self.model_name, "dataset")
model_dir = os.path.join(MODEL_CACHE_DIR, self.model_name)
num_classes = len(
[
d
for d in os.listdir(dataset_dir)
if os.path.isdir(os.path.join(dataset_dir, d))
]
)
num_classes = len(
[
d
for d in os.listdir(dataset_dir)
if os.path.isdir(os.path.join(dataset_dir, d))
]
)
# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
if num_classes < 2:
logger.error(
f"Training failed for {self.model_name}: Need at least 2 classes, found {num_classes}"
)
return False
model = models.Sequential(
[
base_model,
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dropout(0.3),
layers.Dense(num_classes, activation="softmax"),
]
)
# Start with imagenet base model with 35% of channels in each layer
base_model = MobileNetV2(
input_shape=(224, 224, 3),
include_top=False,
weights="imagenet",
alpha=0.35,
)
base_model.trainable = False # Freeze pre-trained layers
model.compile(
optimizer=optimizers.Adam(learning_rate=LEARNING_RATE),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model = models.Sequential(
[
base_model,
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dropout(0.3),
layers.Dense(num_classes, activation="softmax"),
]
)
# create training set
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
dataset_dir,
target_size=(224, 224),
batch_size=BATCH_SIZE,
class_mode="categorical",
subset="training",
)
model.compile(
optimizer=optimizers.Adam(learning_rate=LEARNING_RATE),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
# write labelmap
class_indices = train_gen.class_indices
index_to_class = {v: k for k, v in class_indices.items()}
sorted_classes = [index_to_class[i] for i in range(len(index_to_class))]
with open(os.path.join(model_dir, "labelmap.txt"), "w") as f:
for class_name in sorted_classes:
f.write(f"{class_name}\n")
# create training set
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(
dataset_dir,
target_size=(224, 224),
batch_size=BATCH_SIZE,
class_mode="categorical",
subset="training",
)
# train the model
model.fit(train_gen, epochs=EPOCHS, verbose=0)
total_images = train_gen.samples
logger.debug(
f"Training {self.model_name}: {total_images} images across {num_classes} classes"
)
# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# write labelmap
class_indices = train_gen.class_indices
index_to_class = {v: k for k, v in class_indices.items()}
sorted_classes = [index_to_class[i] for i in range(len(index_to_class))]
with open(os.path.join(model_dir, "labelmap.txt"), "w") as f:
for class_name in sorted_classes:
f.write(f"{class_name}\n")
# write model
with open(os.path.join(model_dir, "model.tflite"), "wb") as f:
f.write(tflite_model)
# train the model
logger.debug(f"Training {self.model_name} for {EPOCHS} epochs...")
model.fit(train_gen, epochs=EPOCHS, verbose=0)
logger.debug(f"Converting {self.model_name} to TFLite...")
# convert model to tflite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = (
self.__generate_representative_dataset_factory(dataset_dir)
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
# write model
model_path = os.path.join(model_dir, "model.tflite")
with open(model_path, "wb") as f:
f.write(tflite_model)
# verify model file was written successfully
if not os.path.exists(model_path) or os.path.getsize(model_path) == 0:
logger.error(
f"Training failed for {self.model_name}: Model file was not created or is empty"
)
return False
# write training metadata with image count
dataset_image_count = get_dataset_image_count(self.model_name)
write_training_metadata(self.model_name, dataset_image_count)
logger.info(f"Finished training {self.model_name}")
return True
except Exception as e:
logger.error(f"Training failed for {self.model_name}: {e}", exc_info=True)
return False
@staticmethod
def kickoff_model_training(
embeddingRequestor: EmbeddingsRequestor, model_name: str
) -> None:
@ -159,16 +286,551 @@ def kickoff_model_training(
training_process.start()
training_process.join()
# reload model and mark training as complete
embeddingRequestor.send_data(
EmbeddingsRequestEnum.reload_classification_model.value,
{"model_name": model_name},
)
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.complete,
},
)
# check if training succeeded by examining the exit code
training_success = training_process.exitcode == 0
if training_success:
# reload model and mark training as complete
embeddingRequestor.send_data(
EmbeddingsRequestEnum.reload_classification_model.value,
{"model_name": model_name},
)
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.complete,
},
)
else:
logger.error(
f"Training subprocess failed for {model_name} (exit code: {training_process.exitcode})"
)
# mark training as failed so UI shows error state
# don't reload the model since it failed
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": model_name,
"state": ModelStatusTypesEnum.failed,
},
)
requestor.stop()
@staticmethod
def collect_state_classification_examples(
model_name: str, cameras: dict[str, tuple[float, float, float, float]]
) -> None:
"""
Collect representative state classification examples from review items.
This function:
1. Queries review items from specified cameras
2. Selects 100 balanced timestamps across the data
3. Extracts keyframes from recordings (cropped to specified regions)
4. Selects the 24 most visually distinct images
5. Saves them to the train directory
Args:
model_name: Name of the classification model
cameras: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Step 1: Get review items for the cameras
camera_names = list(cameras.keys())
review_items = list(
ReviewSegment.select()
.where(ReviewSegment.camera.in_(camera_names))
.where(ReviewSegment.end_time.is_null(False))
.order_by(ReviewSegment.start_time.asc())
)
if not review_items:
logger.warning(f"No review items found for cameras: {camera_names}")
return
# Step 2: Create balanced timestamp selection (100 samples)
timestamps = _select_balanced_timestamps(review_items, target_count=100)
# Step 3: Extract keyframes from recordings with crops applied
keyframes = _extract_keyframes(
"/usr/lib/ffmpeg/7.0/bin/ffmpeg", timestamps, temp_dir, cameras
)
# Step 4: Select 24 most visually distinct images (they're already cropped)
distinct_images = _select_distinct_images(keyframes, target_count=24)
# Step 5: Save to train directory for later classification
train_dir = os.path.join(CLIPS_DIR, model_name, "train")
os.makedirs(train_dir, exist_ok=True)
saved_count = 0
for idx, image_path in enumerate(distinct_images):
dest_path = os.path.join(train_dir, f"example_{idx:03d}.jpg")
try:
img = cv2.imread(image_path)
if img is not None:
cv2.imwrite(dest_path, img)
saved_count += 1
except Exception as e:
logger.error(f"Failed to save image {image_path}: {e}")
import shutil
try:
shutil.rmtree(temp_dir)
except Exception as e:
logger.warning(f"Failed to clean up temp directory: {e}")
def _select_balanced_timestamps(
review_items: list[ReviewSegment], target_count: int = 100
) -> list[dict]:
"""
Select balanced timestamps from review items.
Strategy:
- Group review items by camera and time of day
- Sample evenly across groups to ensure diversity
- For each selected review item, pick a random timestamp within its duration
Returns:
List of dicts with keys: camera, timestamp, review_item
"""
# Group by camera and hour of day for temporal diversity
grouped = defaultdict(list)
for item in review_items:
camera = item.camera
# Group by 6-hour blocks for temporal diversity
hour_block = int(item.start_time // (6 * 3600))
key = f"{camera}_{hour_block}"
grouped[key].append(item)
# Calculate how many samples per group
num_groups = len(grouped)
if num_groups == 0:
return []
samples_per_group = max(1, target_count // num_groups)
timestamps = []
# Sample from each group
for group_items in grouped.values():
# Take samples_per_group items from this group
sample_size = min(samples_per_group, len(group_items))
sampled_items = random.sample(group_items, sample_size)
for item in sampled_items:
# Pick a random timestamp within the review item's duration
duration = item.end_time - item.start_time
if duration <= 0:
continue
# Sample from middle 80% to avoid edge artifacts
offset = random.uniform(duration * 0.1, duration * 0.9)
timestamp = item.start_time + offset
timestamps.append(
{
"camera": item.camera,
"timestamp": timestamp,
"review_item": item,
}
)
# If we don't have enough, sample more from larger groups
while len(timestamps) < target_count and len(timestamps) < len(review_items):
for group_items in grouped.values():
if len(timestamps) >= target_count:
break
# Pick a random item not already sampled
item = random.choice(group_items)
duration = item.end_time - item.start_time
if duration <= 0:
continue
offset = random.uniform(duration * 0.1, duration * 0.9)
timestamp = item.start_time + offset
# Check if we already have a timestamp near this one
if not any(abs(t["timestamp"] - timestamp) < 1.0 for t in timestamps):
timestamps.append(
{
"camera": item.camera,
"timestamp": timestamp,
"review_item": item,
}
)
return timestamps[:target_count]
def _extract_keyframes(
ffmpeg_path: str,
timestamps: list[dict],
output_dir: str,
camera_crops: dict[str, tuple[float, float, float, float]],
) -> list[str]:
"""
Extract keyframes from recordings at specified timestamps and crop to specified regions.
Args:
ffmpeg_path: Path to ffmpeg binary
timestamps: List of timestamp dicts from _select_balanced_timestamps
output_dir: Directory to save extracted frames
camera_crops: Dict mapping camera names to normalized crop coordinates [x1, y1, x2, y2] (0-1)
Returns:
List of paths to successfully extracted and cropped keyframe images
"""
keyframe_paths = []
for idx, ts_info in enumerate(timestamps):
camera = ts_info["camera"]
timestamp = ts_info["timestamp"]
if camera not in camera_crops:
logger.warning(f"No crop coordinates for camera {camera}")
continue
norm_x1, norm_y1, norm_x2, norm_y2 = camera_crops[camera]
try:
recording = (
Recordings.select()
.where(
(timestamp >= Recordings.start_time)
& (timestamp <= Recordings.end_time)
& (Recordings.camera == camera)
)
.order_by(Recordings.start_time.desc())
.limit(1)
.get()
)
except Exception:
continue
relative_time = timestamp - recording.start_time
try:
config = FfmpegConfig(path="/usr/lib/ffmpeg/7.0")
image_data = get_image_from_recording(
config,
recording.path,
relative_time,
codec="mjpeg",
height=None,
)
if image_data:
nparr = np.frombuffer(image_data, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if img is not None:
height, width = img.shape[:2]
x1 = int(norm_x1 * width)
y1 = int(norm_y1 * height)
x2 = int(norm_x2 * width)
y2 = int(norm_y2 * height)
x1_clipped = max(0, min(x1, width))
y1_clipped = max(0, min(y1, height))
x2_clipped = max(0, min(x2, width))
y2_clipped = max(0, min(y2, height))
if x2_clipped > x1_clipped and y2_clipped > y1_clipped:
cropped = img[y1_clipped:y2_clipped, x1_clipped:x2_clipped]
resized = cv2.resize(cropped, (224, 224))
output_path = os.path.join(output_dir, f"frame_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
keyframe_paths.append(output_path)
except Exception as e:
logger.debug(
f"Failed to extract frame from {recording.path} at {relative_time}s: {e}"
)
continue
return keyframe_paths
def _select_distinct_images(
image_paths: list[str], target_count: int = 20
) -> list[str]:
"""
Select the most visually distinct images from a set of keyframes.
Uses a greedy algorithm based on image histograms:
1. Start with a random image
2. Iteratively add the image that is most different from already selected images
3. Difference is measured using histogram comparison
Args:
image_paths: List of paths to candidate images
target_count: Number of distinct images to select
Returns:
List of paths to selected images
"""
if len(image_paths) <= target_count:
return image_paths
histograms = {}
valid_paths = []
for path in image_paths:
try:
img = cv2.imread(path)
if img is None:
continue
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist(
[hsv], [0, 1, 2], None, [8, 8, 8], [0, 180, 0, 256, 0, 256]
)
hist = cv2.normalize(hist, hist).flatten()
histograms[path] = hist
valid_paths.append(path)
except Exception as e:
logger.debug(f"Failed to process image {path}: {e}")
continue
if len(valid_paths) <= target_count:
return valid_paths
selected = []
first_image = random.choice(valid_paths)
selected.append(first_image)
remaining = [p for p in valid_paths if p != first_image]
while len(selected) < target_count and remaining:
max_min_distance = -1
best_candidate = None
for candidate in remaining:
min_distance = float("inf")
for selected_img in selected:
distance = cv2.compareHist(
histograms[candidate],
histograms[selected_img],
cv2.HISTCMP_BHATTACHARYYA,
)
min_distance = min(min_distance, distance)
if min_distance > max_min_distance:
max_min_distance = min_distance
best_candidate = candidate
if best_candidate:
selected.append(best_candidate)
remaining.remove(best_candidate)
else:
break
return selected
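The selection loop above is a greedy farthest-point search: each round keeps the candidate whose nearest already-selected neighbor is farthest away. The same idea on plain vectors, as a self-contained sketch (not Frigate code, and deterministic where the real code starts from a random image):

```python
import numpy as np

def farthest_point_selection(vectors: np.ndarray, k: int) -> list[int]:
    """Greedily pick k indices maximizing the minimum distance to the picks so far."""
    selected = [0]
    while len(selected) < k:
        # For every candidate, distance to its nearest already-selected vector.
        dists = np.linalg.norm(
            vectors[:, None, :] - vectors[selected][None, :, :], axis=-1
        ).min(axis=1)
        dists[selected] = -1.0  # never re-pick
        selected.append(int(dists.argmax()))
    return selected

points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [10.0, 0.0]])
print(farthest_point_selection(points, 3))  # [0, 3, 2]
```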
@staticmethod
def collect_object_classification_examples(
model_name: str,
label: str,
) -> None:
"""
Collect representative object classification examples from event thumbnails.
This function:
1. Queries events for the specified label
2. Selects 100 balanced events across different cameras and times
3. Retrieves thumbnails for selected events (with an adaptive center crop applied)
4. Selects the 24 most visually distinct thumbnails
5. Saves them to the train directory
Args:
model_name: Name of the classification model
label: Object label to collect (e.g., "person", "car")
"""
dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
temp_dir = os.path.join(dataset_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Step 1: Query events for the specified label and cameras
events = list(
Event.select().where((Event.label == label)).order_by(Event.start_time.asc())
)
if not events:
logger.warning(f"No events found for label '{label}'")
return
logger.debug(f"Found {len(events)} events")
# Step 2: Select balanced events (100 samples)
selected_events = _select_balanced_events(events, target_count=100)
logger.debug(f"Selected {len(selected_events)} events")
# Step 3: Extract thumbnails from events
thumbnails = _extract_event_thumbnails(selected_events, temp_dir)
logger.debug(f"Successfully extracted {len(thumbnails)} thumbnails")
# Step 4: Select 24 most visually distinct thumbnails
distinct_images = _select_distinct_images(thumbnails, target_count=24)
logger.debug(f"Selected {len(distinct_images)} distinct images")
# Step 5: Save to train directory for later classification
train_dir = os.path.join(CLIPS_DIR, model_name, "train")
os.makedirs(train_dir, exist_ok=True)
saved_count = 0
for idx, image_path in enumerate(distinct_images):
dest_path = os.path.join(train_dir, f"example_{idx:03d}.jpg")
try:
img = cv2.imread(image_path)
if img is not None:
cv2.imwrite(dest_path, img)
saved_count += 1
except Exception as e:
logger.error(f"Failed to save image {image_path}: {e}")
import shutil
try:
shutil.rmtree(temp_dir)
except Exception as e:
logger.warning(f"Failed to clean up temp directory: {e}")
logger.debug(
f"Successfully collected {saved_count} classification examples in {train_dir}"
)
def _select_balanced_events(
events: list[Event], target_count: int = 100
) -> list[Event]:
"""
Select balanced events from the event list.
Strategy:
- Group events by camera and time of day
- Sample evenly across groups to ensure diversity
- Prioritize events with higher scores
Returns:
List of selected events
"""
grouped = defaultdict(list)
for event in events:
camera = event.camera
hour_block = int(event.start_time // (6 * 3600))
key = f"{camera}_{hour_block}"
grouped[key].append(event)
num_groups = len(grouped)
if num_groups == 0:
return []
samples_per_group = max(1, target_count // num_groups)
selected = []
for group_events in grouped.values():
sorted_events = sorted(
group_events,
key=lambda e: e.data.get("score", 0) if e.data else 0,
reverse=True,
)
sample_size = min(samples_per_group, len(sorted_events))
selected.extend(sorted_events[:sample_size])
if len(selected) < target_count:
remaining = [e for e in events if e not in selected]
remaining_sorted = sorted(
remaining,
key=lambda e: e.data.get("score", 0) if e.data else 0,
reverse=True,
)
needed = target_count - len(selected)
selected.extend(remaining_sorted[:needed])
return selected[:target_count]
def _extract_event_thumbnails(events: list[Event], output_dir: str) -> list[str]:
"""
Extract thumbnails from events and save to disk.
Args:
events: List of Event objects
output_dir: Directory to save thumbnails
Returns:
List of paths to successfully extracted thumbnail images
"""
thumbnail_paths = []
for idx, event in enumerate(events):
try:
thumbnail_bytes = get_event_thumbnail_bytes(event)
if thumbnail_bytes:
nparr = np.frombuffer(thumbnail_bytes, np.uint8)
img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if img is not None:
height, width = img.shape[:2]
crop_size = 1.0
if event.data and "box" in event.data and "region" in event.data:
box = event.data["box"]
region = event.data["region"]
if len(box) == 4 and len(region) == 4:
box_w, box_h = box[2], box[3]
region_w, region_h = region[2], region[3]
box_area = (box_w * box_h) / (region_w * region_h)
if box_area < 0.05:
crop_size = 0.4
elif box_area < 0.10:
crop_size = 0.5
elif box_area < 0.20:
crop_size = 0.65
elif box_area < 0.35:
crop_size = 0.80
else:
crop_size = 0.95
crop_width = int(width * crop_size)
crop_height = int(height * crop_size)
x1 = (width - crop_width) // 2
y1 = (height - crop_height) // 2
x2 = x1 + crop_width
y2 = y1 + crop_height
cropped = img[y1:y2, x1:x2]
resized = cv2.resize(cropped, (224, 224))
output_path = os.path.join(output_dir, f"thumbnail_{idx:04d}.jpg")
cv2.imwrite(output_path, resized)
thumbnail_paths.append(output_path)
except Exception as e:
logger.debug(f"Failed to extract thumbnail for event {event.id}: {e}")
continue
return thumbnail_paths
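The crop size above scales with how much of the detector region the bounding box fills, so small objects get a tighter center crop before the resize to 224x224. A worked example using the thresholds from the code (box and region values are made up):

```python
box = (100, 100, 80, 120)   # (x, y, w, h) in pixels
region = (0, 0, 400, 400)   # detector region the thumbnail came from

box_area = (box[2] * box[3]) / (region[2] * region[3])  # 9600 / 160000 = 0.06
# 0.05 <= 0.06 < 0.10, so crop_size = 0.5: keep the middle half of the
# thumbnail so the small object still fills a useful share of the model input.
```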

View File

@ -384,10 +384,10 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
new_object_config["genai"] = {}
for key in global_genai.keys():
if key not in ["enabled", "model", "provider", "base_url", "api_key"]:
new_object_config["genai"][key] = global_genai[key]
else:
if key in ["model", "provider", "base_url", "api_key"]:
new_genai_config[key] = global_genai[key]
else:
new_object_config["genai"][key] = global_genai[key]
config["genai"] = new_genai_config

View File

@ -1,7 +1,6 @@
import logging
import os
import threading
import time
from pathlib import Path
from typing import Callable, List
@ -10,40 +9,11 @@ import requests
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_MODEL_STATE
from frigate.types import ModelStatusTypesEnum
from frigate.util.file import FileLock
logger = logging.getLogger(__name__)
class FileLock:
def __init__(self, path):
self.path = path
self.lock_file = f"{path}.lock"
# we have not acquired the lock yet so it should not exist
if os.path.exists(self.lock_file):
try:
os.remove(self.lock_file)
except Exception:
pass
def acquire(self):
parent_dir = os.path.dirname(self.lock_file)
os.makedirs(parent_dir, exist_ok=True)
while True:
try:
with open(self.lock_file, "x"):
return
except FileExistsError:
time.sleep(0.1)
def release(self):
try:
os.remove(self.lock_file)
except FileNotFoundError:
pass
class ModelDownloader:
def __init__(
self,
@ -81,15 +51,13 @@ class ModelDownloader:
def _download_models(self):
for file_name in self.file_names:
path = os.path.join(self.download_path, file_name)
lock = FileLock(path)
lock_path = f"{path}.lock"
lock = FileLock(lock_path, cleanup_stale_on_init=True)
if not os.path.exists(path):
lock.acquire()
try:
with lock:
if not os.path.exists(path):
self.download_func(path)
finally:
lock.release()
self.requestor.send_data(
UPDATE_MODEL_STATE,

frigate/util/file.py (new file, 276 lines)
View File

@ -0,0 +1,276 @@
"""Path and file utilities."""
import base64
import fcntl
import logging
import os
import time
from pathlib import Path
from typing import Optional
import cv2
from numpy import ndarray
from frigate.const import CLIPS_DIR, THUMB_DIR
from frigate.models import Event
logger = logging.getLogger(__name__)
def get_event_thumbnail_bytes(event: Event) -> bytes | None:
if event.thumbnail:
return base64.b64decode(event.thumbnail)
else:
try:
with open(
os.path.join(THUMB_DIR, event.camera, f"{event.id}.webp"), "rb"
) as f:
return f.read()
except Exception:
return None
def get_event_snapshot(event: Event) -> ndarray:
media_name = f"{event.camera}-{event.id}"
return cv2.imread(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")
### Deletion
def delete_event_images(event: Event) -> bool:
return delete_event_snapshot(event) and delete_event_thumbnail(event)
def delete_event_snapshot(event: Event) -> bool:
media_name = f"{event.camera}-{event.id}"
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")
try:
media_path.unlink(missing_ok=True)
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.webp")
media_path.unlink(missing_ok=True)
# also delete clean.png (legacy) for backward compatibility
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.png")
media_path.unlink(missing_ok=True)
return True
except OSError:
return False
def delete_event_thumbnail(event: Event) -> bool:
if event.thumbnail:
return True
else:
Path(os.path.join(THUMB_DIR, event.camera, f"{event.id}.webp")).unlink(
missing_ok=True
)
return True
### File Locking
class FileLock:
"""
A file-based lock for coordinating access to resources across processes.
Uses fcntl.flock() for proper POSIX file locking on Linux. Supports timeouts,
stale lock detection, and can be used as a context manager.
Example:
```python
# Using as a context manager (recommended)
with FileLock("/path/to/resource.lock", timeout=60):
# Critical section
do_something()
# Manual acquisition and release
lock = FileLock("/path/to/resource.lock")
if lock.acquire(timeout=60):
try:
do_something()
finally:
lock.release()
```
Attributes:
lock_path: Path to the lock file
timeout: Maximum time to wait for lock acquisition (seconds)
poll_interval: Time to wait between lock acquisition attempts (seconds)
stale_timeout: Time after which a lock is considered stale (seconds)
"""
def __init__(
self,
lock_path: str | Path,
timeout: int = 300,
poll_interval: float = 1.0,
stale_timeout: int = 600,
cleanup_stale_on_init: bool = False,
):
"""
Initialize a FileLock.
Args:
lock_path: Path to the lock file
timeout: Maximum time to wait for lock acquisition in seconds (default: 300)
poll_interval: Time to wait between lock attempts in seconds (default: 1.0)
stale_timeout: Time after which a lock is considered stale in seconds (default: 600)
cleanup_stale_on_init: Whether to clean up stale locks on initialization (default: False)
"""
self.lock_path = Path(lock_path)
self.timeout = timeout
self.poll_interval = poll_interval
self.stale_timeout = stale_timeout
self._fd: Optional[int] = None
self._acquired = False
if cleanup_stale_on_init:
self._cleanup_stale_lock()
def _cleanup_stale_lock(self) -> bool:
"""
Clean up a stale lock file if it exists and is old.
Returns:
True if lock was cleaned up, False otherwise
"""
try:
if self.lock_path.exists():
# Check if lock file is older than stale_timeout
lock_age = time.time() - self.lock_path.stat().st_mtime
if lock_age > self.stale_timeout:
logger.warning(
f"Removing stale lock file: {self.lock_path} (age: {lock_age:.1f}s)"
)
self.lock_path.unlink()
return True
except Exception as e:
logger.error(f"Error cleaning up stale lock: {e}")
return False
def is_stale(self) -> bool:
"""
Check if the lock file is stale (older than stale_timeout).
Returns:
True if lock is stale, False otherwise
"""
try:
if self.lock_path.exists():
lock_age = time.time() - self.lock_path.stat().st_mtime
return lock_age > self.stale_timeout
except Exception:
pass
return False
def acquire(self, timeout: Optional[int] = None) -> bool:
"""
Acquire the file lock using fcntl.flock().
Args:
timeout: Maximum time to wait for lock in seconds (uses instance timeout if None)
Returns:
True if lock acquired, False if timeout or error
"""
if self._acquired:
logger.warning(f"Lock already acquired: {self.lock_path}")
return True
if timeout is None:
timeout = self.timeout
# Ensure parent directory exists
self.lock_path.parent.mkdir(parents=True, exist_ok=True)
# Clean up stale lock before attempting to acquire
self._cleanup_stale_lock()
try:
self._fd = os.open(self.lock_path, os.O_CREAT | os.O_RDWR)
start_time = time.time()
while time.time() - start_time < timeout:
try:
fcntl.flock(self._fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
self._acquired = True
logger.debug(f"Acquired lock: {self.lock_path}")
return True
except (OSError, IOError):
# Lock is held by another process
if time.time() - start_time >= timeout:
logger.warning(f"Timeout waiting for lock: {self.lock_path}")
os.close(self._fd)
self._fd = None
return False
time.sleep(self.poll_interval)
# Timeout reached
if self._fd is not None:
os.close(self._fd)
self._fd = None
return False
except Exception as e:
logger.error(f"Error acquiring lock: {e}")
if self._fd is not None:
try:
os.close(self._fd)
except Exception:
pass
self._fd = None
return False
def release(self) -> None:
"""
Release the file lock.
This closes the file descriptor and removes the lock file.
"""
if not self._acquired:
return
try:
# Close file descriptor and release fcntl lock
if self._fd is not None:
try:
fcntl.flock(self._fd, fcntl.LOCK_UN)
os.close(self._fd)
except Exception as e:
logger.warning(f"Error closing lock file descriptor: {e}")
finally:
self._fd = None
# Remove lock file
if self.lock_path.exists():
self.lock_path.unlink()
logger.debug(f"Released lock: {self.lock_path}")
except FileNotFoundError:
# Lock file already removed, that's fine
pass
except Exception as e:
logger.error(f"Error releasing lock: {e}")
finally:
self._acquired = False
def __enter__(self):
"""Context manager entry - acquire the lock."""
if not self.acquire():
raise TimeoutError(f"Failed to acquire lock: {self.lock_path}")
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Context manager exit - release the lock."""
self.release()
return False
def __del__(self):
"""Destructor - ensure lock is released."""
if self._acquired:
self.release()

View File

@ -369,6 +369,10 @@ def get_ort_providers(
"enable_cpu_mem_arena": False,
}
)
elif provider == "AzureExecutionProvider":
# Skip Azure provider - not typically available on local hardware
# and prevents fallback to OpenVINO when it's the first provider
continue
else:
providers.append(provider)
options.append({})

View File

@ -1,62 +0,0 @@
"""Path utilities."""
import base64
import os
from pathlib import Path
import cv2
from numpy import ndarray
from frigate.const import CLIPS_DIR, THUMB_DIR
from frigate.models import Event
def get_event_thumbnail_bytes(event: Event) -> bytes | None:
if event.thumbnail:
return base64.b64decode(event.thumbnail)
else:
try:
with open(
os.path.join(THUMB_DIR, event.camera, f"{event.id}.webp"), "rb"
) as f:
return f.read()
except Exception:
return None
def get_event_snapshot(event: Event) -> ndarray:
media_name = f"{event.camera}-{event.id}"
return cv2.imread(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")
### Deletion
def delete_event_images(event: Event) -> bool:
return delete_event_snapshot(event) and delete_event_thumbnail(event)
def delete_event_snapshot(event: Event) -> bool:
media_name = f"{event.camera}-{event.id}"
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")
try:
media_path.unlink(missing_ok=True)
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.webp")
media_path.unlink(missing_ok=True)
# also delete clean.png (legacy) for backward compatibility
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.png")
media_path.unlink(missing_ok=True)
return True
except OSError:
return False
def delete_event_thumbnail(event: Event) -> bool:
if event.thumbnail:
return True
else:
Path(os.path.join(THUMB_DIR, event.camera, f"{event.id}.webp")).unlink(
missing_ok=True
)
return True

View File

@ -1,6 +1,5 @@
"""RKNN model conversion utility for Frigate."""
import fcntl
import logging
import os
import subprocess
@ -9,6 +8,8 @@ import time
from pathlib import Path
from typing import Optional
from frigate.util.file import FileLock
logger = logging.getLogger(__name__)
MODEL_TYPE_CONFIGS = {
@ -245,112 +246,6 @@ def convert_onnx_to_rknn(
logger.warning(f"Failed to remove temporary ONNX file: {e}")
def cleanup_stale_lock(lock_file_path: Path) -> bool:
"""
Clean up a stale lock file if it exists and is old.
Args:
lock_file_path: Path to the lock file
Returns:
True if lock was cleaned up, False otherwise
"""
try:
if lock_file_path.exists():
# Check if lock file is older than 10 minutes (stale)
lock_age = time.time() - lock_file_path.stat().st_mtime
if lock_age > 600: # 10 minutes
logger.warning(
f"Removing stale lock file: {lock_file_path} (age: {lock_age:.1f}s)"
)
lock_file_path.unlink()
return True
except Exception as e:
logger.error(f"Error cleaning up stale lock: {e}")
return False
def acquire_conversion_lock(lock_file_path: Path, timeout: int = 300) -> bool:
"""
Acquire a file-based lock for model conversion.
Args:
lock_file_path: Path to the lock file
timeout: Maximum time to wait for lock in seconds
Returns:
True if lock acquired, False if timeout or error
"""
try:
lock_file_path.parent.mkdir(parents=True, exist_ok=True)
cleanup_stale_lock(lock_file_path)
lock_fd = os.open(lock_file_path, os.O_CREAT | os.O_RDWR)
# Try to acquire exclusive lock
start_time = time.time()
while time.time() - start_time < timeout:
try:
fcntl.flock(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
# Lock acquired successfully
logger.debug(f"Acquired conversion lock: {lock_file_path}")
return True
except (OSError, IOError):
# Lock is held by another process, wait and retry
if time.time() - start_time >= timeout:
logger.warning(
f"Timeout waiting for conversion lock: {lock_file_path}"
)
os.close(lock_fd)
return False
logger.debug("Waiting for conversion lock to be released...")
time.sleep(1)
os.close(lock_fd)
return False
except Exception as e:
logger.error(f"Error acquiring conversion lock: {e}")
return False
def release_conversion_lock(lock_file_path: Path) -> None:
"""
Release the conversion lock.
Args:
lock_file_path: Path to the lock file
"""
try:
if lock_file_path.exists():
lock_file_path.unlink()
logger.debug(f"Released conversion lock: {lock_file_path}")
except Exception as e:
logger.error(f"Error releasing conversion lock: {e}")
def is_lock_stale(lock_file_path: Path, max_age: int = 600) -> bool:
"""
Check if a lock file is stale (older than max_age seconds).
Args:
lock_file_path: Path to the lock file
max_age: Maximum age in seconds before considering lock stale
Returns:
True if lock is stale, False otherwise
"""
try:
if lock_file_path.exists():
lock_age = time.time() - lock_file_path.stat().st_mtime
return lock_age > max_age
except Exception:
pass
return False
def wait_for_conversion_completion(
model_type: str, rknn_path: Path, lock_file_path: Path, timeout: int = 300
) -> bool:
@ -358,6 +253,7 @@ def wait_for_conversion_completion(
Wait for another process to complete the conversion.
Args:
model_type: Type of model being converted
rknn_path: Path to the expected RKNN model
lock_file_path: Path to the lock file to monitor
timeout: Maximum time to wait in seconds
@ -366,6 +262,8 @@ def wait_for_conversion_completion(
True if RKNN model appears, False if timeout
"""
start_time = time.time()
lock = FileLock(lock_file_path, stale_timeout=600)
while time.time() - start_time < timeout:
# Check if RKNN model appeared
if rknn_path.exists():
@ -385,11 +283,14 @@ def wait_for_conversion_completion(
return False
# Check if lock is stale
if is_lock_stale(lock_file_path):
if lock.is_stale():
logger.warning("Lock file is stale, attempting to clean up and retry...")
cleanup_stale_lock(lock_file_path)
lock._cleanup_stale_lock()
# Try to acquire lock again
if acquire_conversion_lock(lock_file_path, timeout=60):
retry_lock = FileLock(
lock_file_path, timeout=60, cleanup_stale_on_init=True
)
if retry_lock.acquire():
try:
# Check if RKNN file appeared while waiting
if rknn_path.exists():
@ -415,7 +316,7 @@ def wait_for_conversion_completion(
return False
finally:
release_conversion_lock(lock_file_path)
retry_lock.release()
logger.debug("Waiting for RKNN model to appear...")
time.sleep(1)
@ -452,8 +353,9 @@ def auto_convert_model(
return str(rknn_path)
lock_file_path = base_path.parent / f"{base_name}.conversion.lock"
lock = FileLock(lock_file_path, timeout=300, cleanup_stale_on_init=True)
if acquire_conversion_lock(lock_file_path):
if lock.acquire():
try:
if rknn_path.exists():
logger.info(
@ -476,7 +378,7 @@ def auto_convert_model(
return None
finally:
release_conversion_lock(lock_file_path)
lock.release()
else:
logger.info(
f"Another process is converting {model_path}, waiting for completion..."

View File

@ -9,6 +9,7 @@ import resource
import shutil
import signal
import subprocess as sp
import time
import traceback
from datetime import datetime
from typing import Any, List, Optional, Tuple
@ -388,6 +389,39 @@ def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, s
return results
def get_openvino_npu_stats() -> Optional[dict[str, str]]:
"""Get NPU stats using openvino."""
NPU_RUNTIME_PATH = "/sys/devices/pci0000:00/0000:00:0b.0/power/runtime_active_time"
try:
with open(NPU_RUNTIME_PATH, "r") as f:
initial_runtime = float(f.read().strip())
initial_time = time.time()
# Sleep for 1 second to get an accurate reading
time.sleep(1.0)
# Read runtime value again
with open(NPU_RUNTIME_PATH, "r") as f:
current_runtime = float(f.read().strip())
current_time = time.time()
# Calculate usage percentage
runtime_diff = current_runtime - initial_runtime
time_diff = (current_time - initial_time) * 1000.0 # Convert to milliseconds
if time_diff > 0:
usage = min(100.0, max(0.0, (runtime_diff / time_diff * 100.0)))
else:
usage = 0.0
return {"npu": f"{round(usage, 2)}", "mem": "-"}
except (FileNotFoundError, PermissionError, ValueError):
return None
def get_rockchip_gpu_stats() -> Optional[dict[str, str]]:
"""Get GPU stats using rk."""
try:
@ -543,7 +577,7 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro
if detailed and format_entries:
ffprobe_cmd.extend(["-show_entries", f"format={format_entries}"])
ffprobe_cmd.extend(["-loglevel", "quiet", clean_path])
ffprobe_cmd.extend(["-loglevel", "error", clean_path])
return sp.run(ffprobe_cmd, capture_output=True)

frigate/util/time.py (new file, 100 lines)
View File

@ -0,0 +1,100 @@
"""Time utilities."""
import datetime
import logging
from typing import Tuple
from zoneinfo import ZoneInfoNotFoundError
import pytz
from tzlocal import get_localzone
logger = logging.getLogger(__name__)
def get_tz_modifiers(tz_name: str) -> Tuple[str, str, float]:
seconds_offset = (
datetime.datetime.now(pytz.timezone(tz_name)).utcoffset().total_seconds()
)
hours_offset = int(seconds_offset / 60 / 60)
minutes_offset = int(seconds_offset / 60 - hours_offset * 60)
hour_modifier = f"{hours_offset} hour"
minute_modifier = f"{minutes_offset} minute"
return hour_modifier, minute_modifier, seconds_offset
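The returned strings look like SQLite datetime modifiers; a concrete reading of the values (they depend on when the call runs, since the offset is computed from the current date):

```python
from frigate.util.time import get_tz_modifiers

hour_mod, minute_mod, offset = get_tz_modifiers("America/New_York")
# During standard time (e.g. January) this yields:
#   ("-5 hour", "0 minute", -18000.0)
# During daylight time it yields ("-4 hour", "0 minute", -14400.0).
```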
def get_tomorrow_at_time(hour: int) -> datetime.datetime:
"""Returns the datetime of the following day at 2am."""
try:
tomorrow = datetime.datetime.now(get_localzone()) + datetime.timedelta(days=1)
except ZoneInfoNotFoundError:
tomorrow = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
days=1
)
logger.warning(
"Using utc for maintenance due to missing or incorrect timezone set"
)
return tomorrow.replace(hour=hour, minute=0, second=0).astimezone(
datetime.timezone.utc
)
def is_current_hour(timestamp: int) -> bool:
"""Returns if timestamp is in the current UTC hour."""
start_of_next_hour = (
datetime.datetime.now(datetime.timezone.utc).replace(
minute=0, second=0, microsecond=0
)
+ datetime.timedelta(hours=1)
).timestamp()
return timestamp < start_of_next_hour
def get_dst_transitions(
tz_name: str, start_time: float, end_time: float
) -> list[tuple[float, float, float]]:
"""
Find DST transition points and return time periods with consistent offsets.
Args:
tz_name: Timezone name (e.g., 'America/New_York')
start_time: Start timestamp (UTC)
end_time: End timestamp (UTC)
Returns:
List of (period_start, period_end, seconds_offset) tuples representing
continuous periods with the same UTC offset
"""
try:
tz = pytz.timezone(tz_name)
except pytz.UnknownTimeZoneError:
# If timezone is invalid, return single period with no offset
return [(start_time, end_time, 0)]
periods = []
current = start_time
# Get initial offset
dt = datetime.datetime.utcfromtimestamp(current).replace(tzinfo=pytz.UTC)
local_dt = dt.astimezone(tz)
prev_offset = local_dt.utcoffset().total_seconds()
period_start = start_time
# Check each day for offset changes
while current <= end_time:
dt = datetime.datetime.utcfromtimestamp(current).replace(tzinfo=pytz.UTC)
local_dt = dt.astimezone(tz)
current_offset = local_dt.utcoffset().total_seconds()
if current_offset != prev_offset:
# Found a transition - close previous period
periods.append((period_start, current, prev_offset))
period_start = current
prev_offset = current_offset
current += 86400 # Check daily
# Add final period
periods.append((period_start, end_time, prev_offset))
return periods
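A sketch of the output around the 2024 spring-forward transition. Note the split point is only resolved to daily granularity, since the scan steps by 86400 seconds:

```python
import datetime

import pytz

from frigate.util.time import get_dst_transitions

tz = pytz.timezone("America/New_York")
start = tz.localize(datetime.datetime(2024, 3, 9, 0, 0)).timestamp()
end = tz.localize(datetime.datetime(2024, 3, 12, 0, 0)).timestamp()

periods = get_dst_transitions("America/New_York", start, end)
for period_start, period_end, offset in periods:
    print(period_start, period_end, offset)
# Two periods: one at the EST offset (-18000.0), one at EDT (-14400.0).
# The boundary lands within a day after the real 2:00 AM transition
# because the scan only samples the offset once per day.
```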

View File

@ -34,7 +34,7 @@ from frigate.ptz.autotrack import ptz_moving_at_frame_time
from frigate.track import ObjectTracker
from frigate.track.norfair_tracker import NorfairTracker
from frigate.track.tracked_object import TrackedObjectAttribute
from frigate.util.builtin import EventsPerSecond, get_tomorrow_at_time
from frigate.util.builtin import EventsPerSecond
from frigate.util.image import (
FrameManager,
SharedMemoryFrameManager,
@ -53,6 +53,7 @@ from frigate.util.object import (
reduce_detections,
)
from frigate.util.process import FrigateProcess
from frigate.util.time import get_tomorrow_at_time
logger = logging.getLogger(__name__)
@ -195,7 +196,9 @@ class CameraWatchdog(threading.Thread):
self.sleeptime = self.config.ffmpeg.retry_interval
self.config_subscriber = CameraConfigUpdateSubscriber(
None, {config.name: config}, [CameraConfigUpdateEnum.enabled]
None,
{config.name: config},
[CameraConfigUpdateEnum.enabled, CameraConfigUpdateEnum.record],
)
self.requestor = InterProcessRequestor()
self.was_enabled = self.config.enabled

166
web/package-lock.json generated
View File

@ -14,6 +14,7 @@
"@radix-ui/react-alert-dialog": "^1.1.6",
"@radix-ui/react-aspect-ratio": "^1.1.2",
"@radix-ui/react-checkbox": "^1.1.4",
"@radix-ui/react-collapsible": "^1.1.12",
"@radix-ui/react-context-menu": "^2.2.6",
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-dropdown-menu": "^2.1.6",
@ -1380,6 +1381,171 @@
}
}
},
"node_modules/@radix-ui/react-collapsible": {
"version": "1.1.12",
"resolved": "https://registry.npmjs.org/@radix-ui/react-collapsible/-/react-collapsible-1.1.12.tgz",
"integrity": "sha512-Uu+mSh4agx2ib1uIGPP4/CKNULyajb3p92LsVXmH2EHVMTfZWpll88XJ0j4W0z3f8NK1eYl1+Mf/szHPmcHzyA==",
"license": "MIT",
"dependencies": {
"@radix-ui/primitive": "1.1.3",
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-context": "1.1.2",
"@radix-ui/react-id": "1.1.1",
"@radix-ui/react-presence": "1.1.5",
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-controllable-state": "1.2.2",
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/primitive": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/primitive/-/primitive-1.1.3.tgz",
"integrity": "sha512-JTF99U/6XIjCBo0wqkU5sK10glYe27MRRsfwoiq5zzOEZLHU3A3KCMa5X/azekYRCJ0HlwI0crAXS/5dEHTzDg==",
"license": "MIT"
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-compose-refs": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@radix-ui/react-compose-refs/-/react-compose-refs-1.1.2.tgz",
"integrity": "sha512-z4eqJvfiNnFMHIIvXP3CY57y2WJs5g2v3X0zm9mEJkrkNv4rDxu+sg9Jh8EkXyeqBkB7SOcboo9dMVqhyrACIg==",
"license": "MIT",
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-context": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@radix-ui/react-context/-/react-context-1.1.2.tgz",
"integrity": "sha512-jCi/QKUM2r1Ju5a3J64TH2A5SpKAgh0LpknyqdQ4m6DCV0xJ2HG1xARRwNGPQfi1SLdLWZ1OJz6F4OMBBNiGJA==",
"license": "MIT",
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-id": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/react-id/-/react-id-1.1.1.tgz",
"integrity": "sha512-kGkGegYIdQsOb4XjsfM97rXsiHaBwco+hFI66oO4s9LU+PLAC5oJ7khdOVFxkhsmlbpUqDAvXw11CluXP+jkHg==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-presence": {
"version": "1.1.5",
"resolved": "https://registry.npmjs.org/@radix-ui/react-presence/-/react-presence-1.1.5.tgz",
"integrity": "sha512-/jfEwNDdQVBCNvjkGit4h6pMOzq8bHkopq458dPt2lMjx+eBQUohZNG9A7DtO/O5ukSbxuaNGXMjHicgwy6rQQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-primitive": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz",
"integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-slot": "1.2.3"
},
"peerDependencies": {
"@types/react": "*",
"@types/react-dom": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"@types/react-dom": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-use-controllable-state": {
"version": "1.2.2",
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-controllable-state/-/react-use-controllable-state-1.2.2.tgz",
"integrity": "sha512-BjasUjixPFdS+NKkypcyyN5Pmg83Olst0+c6vGov0diwTEo6mgdqVR6hxcEgFuh4QrAs7Rc+9KuGJ9TVCj0Zzg==",
"license": "MIT",
"dependencies": {
"@radix-ui/react-use-effect-event": "0.0.2",
"@radix-ui/react-use-layout-effect": "1.1.1"
},
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collapsible/node_modules/@radix-ui/react-use-layout-effect": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-layout-effect/-/react-use-layout-effect-1.1.1.tgz",
"integrity": "sha512-RbJRS4UWQFkzHTTwVymMTUv8EqYhOp8dOOviLj2ugtTiXRaRQS7GLGxZTLL1jWhMeoSCf5zmcZkqTl9IiYfXcQ==",
"license": "MIT",
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
}
}
},
"node_modules/@radix-ui/react-collection": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@radix-ui/react-collection/-/react-collection-1.1.2.tgz",


@@ -20,6 +20,7 @@
"@radix-ui/react-alert-dialog": "^1.1.6",
"@radix-ui/react-aspect-ratio": "^1.1.2",
"@radix-ui/react-checkbox": "^1.1.4",
"@radix-ui/react-collapsible": "^1.1.12",
"@radix-ui/react-context-menu": "^2.2.6",
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-dropdown-menu": "^2.1.6",

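The same Collapsible pin appears in both web/package.json above and the lockfile before it, so the dependency graph stays consistent. As context only (this is not code from the PR), a minimal sketch of how the Radix Collapsible primitive is typically wired up in a React/TypeScript component; the component name and copy are illustrative:

```tsx
import { useState } from "react";
import * as Collapsible from "@radix-ui/react-collapsible";

export function DetailsToggle() {
  const [open, setOpen] = useState(false);
  return (
    <Collapsible.Root open={open} onOpenChange={setOpen}>
      {/* asChild lets the trigger render as our own button element */}
      <Collapsible.Trigger asChild>
        <button>{open ? "Hide details" : "Show details"}</button>
      </Collapsible.Trigger>
      <Collapsible.Content>
        <p>Content that expands and collapses.</p>
      </Collapsible.Content>
    </Collapsible.Root>
  );
}
```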

@@ -0,0 +1 @@
{}


@@ -0,0 +1 @@
{}


@@ -0,0 +1 @@
{}


@@ -425,5 +425,79 @@
"radio": "Ràdio",
"pink_noise": "Soroll rosa",
"power_windows": "Finestres elèctriques",
"artillery_fire": "Foc d'artilleria"
"artillery_fire": "Foc d'artilleria",
"sodeling": "Cant a la tirolesa",
"vibration": "Vibració",
"throbbing": "Palpitant",
"cacophony": "Cacofonia",
"sidetone": "To local",
"distortion": "Distorsió",
"mains_hum": "brunzit",
"noise": "Soroll",
"echo": "Echo",
"reverberation": "Reverberació",
"inside": "Interior",
"pulse": "Pols",
"outside": "Fora",
"chirp_tone": "To de grinyol",
"harmonic": "Harmònic",
"sine_wave": "Ona sinus",
"crunch": "Cruixit",
"hum": "Taral·lejar",
"plop": "Chof",
"clickety_clack": "Clic-Clac",
"clicking": "Clicant",
"clatter": "Soroll",
"chird": "Piular",
"liquid": "Líquid",
"splash": "Xof",
"slosh": "Xip-xap",
"boing": "Boing",
"zing": "Fiu",
"rumble": "Bum-bum",
"sizzle": "Xiu-xiu",
"whir": "Brrrm",
"rustle": "Fru-Fru",
"creak": "Clic-clac",
"clang": "Clang",
"squish": "Xaf",
"drip": "Plic-plic",
"pour": "Glug-glug",
"trickle": "Xiulet",
"gush": "Xuuuix",
"fill": "Glug-glug",
"ding": "Ding",
"ping": "Ping",
"beep": "Bip",
"squeal": "Xiscle",
"crumpling": "Arrugant-se",
"rub": "Fregar",
"scrape": "Raspar",
"scratch": "Rasca",
"whip": "Fuet",
"bouncing": "Rebotant",
"breaking": "Trencant",
"smash": "Aixafar",
"whack": "Cop",
"slap": "Bufetada",
"bang": "Bang",
"basketball_bounce": "Rebot de bàsquet",
"chorus_effect": "Efecte de cor",
"effects_unit": "Unitat d'Efectes",
"electronic_tuner": "Afinador electrònic",
"thunk": "Bruix",
"thump": "Cop fort",
"whoosh": "Xiuxiueig",
"arrow": "Fletxa",
"sonar": "Sonar",
"boiling": "Bullint",
"stir": "Remenar",
"pump": "Bomba",
"spray": "Esprai",
"shofar": "Xofar",
"crushing": "Aixafament",
"change_ringing": "Toc de campanes",
"flap": "Cop de peu",
"roll": "Rodament",
"tearing": "Esquinçat"
}


@@ -207,10 +207,21 @@
"length": {
"feet": "peus",
"meters": "metres"
},
"data": {
"kbps": "Kb/s",
"mbps": "Mb/s",
"gbps": "Gb/s",
"kbph": "kB/hora",
"mbph": "MB/hora",
"gbph": "GB/hora"
}
},
"label": {
"back": "Torna enrere"
"back": "Torna enrere",
"hide": "Oculta {{item}}",
"show": "Mostra {{item}}",
"ID": "ID"
},
"button": {
"apply": "Aplicar",
@@ -270,5 +281,17 @@
"desc": "Pàgina no trobada"
},
"selectItem": "Selecciona {{item}}",
"readTheDocumentation": "Llegir la documentació"
"readTheDocumentation": "Llegir la documentació",
"information": {
"pixels": "{{area}}px"
},
"list": {
"two": "{{0}} i {{1}}",
"many": "{{items}}, i {{last}}",
"separatorWithSpace": ",· "
},
"field": {
"optional": "Opcional",
"internalID": "L'ID intern que Frigate s'utilitza a la configuració i a la base de dades"
}
}

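The new label.show/label.hide, selectItem and list keys above are standard i18next interpolation strings. A minimal sketch of how such keys resolve, assuming the app's usual i18next setup and with illustrative values (outputs only apply after init with these resources):

```ts
import i18next from "i18next";

// Placeholders in {{...}} are filled from the options object.
i18next.t("label.show", { item: "zones" });        // → "Mostra zones"
i18next.t("selectItem", { item: "càmera" });       // → "Selecciona càmera"
// Numeric placeholders ({{0}}, {{1}}) resolve the same way.
i18next.t("list.two", { 0: "porta", 1: "cotxe" }); // → "porta i cotxe"
```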

@@ -10,6 +10,7 @@
"loginFailed": "Error en l'inici de sessió",
"unknownError": "Error desconegut. Comproveu els registres.",
"webUnknownError": "Error desconegut. Comproveu els registres de la consola."
}
},
"firstTimeLogin": "Intentar iniciar sessió per primera vegada? Les credencials s'imprimeixen als registres de Frigate."
}
}


@@ -53,7 +53,7 @@
"export": "Exportar",
"selectOrExport": "Seleccionar o exportar",
"toast": {
"success": "Exportació inciada amb èxit. Pots veure l'arxiu a la carpeta /exports.",
"success": "Exportació inciada amb èxit. Pots veure l'arxiu a la pàgina d'exportacions.",
"error": {
"endTimeMustAfterStartTime": "L'hora de finalització ha de ser posterior a l'hora d'inici",
"noVaildTimeSelected": "No s'ha seleccionat un rang de temps vàlid",
@@ -98,7 +98,8 @@
"button": {
"deleteNow": "Suprimir ara",
"export": "Exportar",
"markAsReviewed": "Marcar com a revisat"
"markAsReviewed": "Marcar com a revisat",
"markAsUnreviewed": "Marcar com no revisat"
},
"confirmDelete": {
"title": "Confirmar la supressió",
@@ -116,6 +117,7 @@
"search": {
"placeholder": "Cerca per etiqueta o subetiqueta..."
},
"noImages": "No s'han trobat miniatures per a aquesta càmera"
"noImages": "No s'han trobat miniatures per a aquesta càmera",
"unknownLabel": "Imatge activadora desada"
}
}


@@ -0,0 +1,164 @@
{
"documentTitle": "Models de classificació",
"button": {
"deleteClassificationAttempts": "Suprimeix les imatges de classificació",
"renameCategory": "Reanomena la classe",
"deleteCategory": "Suprimeix la classe",
"deleteImages": "Suprimeix les imatges",
"trainModel": "Model de tren",
"addClassification": "Afegeix una classificació",
"deleteModels": "Suprimeix els models",
"editModel": "Edita el model"
},
"toast": {
"success": {
"deletedCategory": "Classe suprimida",
"deletedImage": "Imatges suprimides",
"categorizedImage": "Imatge classificada amb èxit",
"trainedModel": "Model entrenat amb èxit.",
"trainingModel": "S'ha iniciat amb èxit la formació de models.",
"deletedModel_one": "S'ha suprimit correctament el model {{count}}",
"deletedModel_many": "S'han suprimit correctament {{count}} models",
"deletedModel_other": "",
"updatedModel": "S'ha actualitzat correctament la configuració del model"
},
"error": {
"deleteImageFailed": "No s'ha pogut suprimir: {{errorMessage}}",
"deleteCategoryFailed": "No s'ha pogut suprimir la classe: {{errorMessage}}",
"categorizeFailed": "No s'ha pogut categoritzar la imatge: {{errorMessage}}",
"trainingFailed": "No s'ha pogut iniciar l'entrenament del model: {{errorMessage}}",
"deleteModelFailed": "No s'ha pogut suprimir el model: {{errorMessage}}",
"updateModelFailed": "No s'ha pogut actualitzar el model: {{errorMessage}}"
}
},
"deleteCategory": {
"title": "Suprimeix la classe",
"desc": "Esteu segur que voleu suprimir la classe {{name}}? Això suprimirà permanentment totes les imatges associades i requerirà tornar a entrenar el model."
},
"deleteDatasetImages": {
"title": "Suprimeix les imatges del conjunt de dades",
"desc": "Esteu segur que voleu suprimir {{count}} imatges de {{dataset}}? Aquesta acció no es pot desfer i requerirà tornar a entrenar el model."
},
"deleteTrainImages": {
"title": "Suprimeix les imatges del tren",
"desc": "Esteu segur que voleu suprimir {{count}} imatges? Aquesta acció no es pot desfer."
},
"renameCategory": {
"title": "Reanomena la classe",
"desc": "Introduïu un nom nou per {{name}}. Se us requerirà que torneu a entrenar el model per al canvi de nom a afectar."
},
"description": {
"invalidName": "Nom no vàlid. Els noms només poden incloure lletres, números, espais, apòstrofs, guions baixos i guions."
},
"train": {
"title": "Classificacions recents",
"aria": "Selecciona les classificacions recents",
"titleShort": "Recent"
},
"categories": "Classes",
"createCategory": {
"new": "Crea una classe nova"
},
"categorizeImageAs": "Classifica la imatge com a:",
"categorizeImage": "Classifica la imatge",
"noModels": {
"object": {
"title": "No hi ha models de classificació d'objectes",
"description": "Crea un model personalitzat per classificar els objectes detectats.",
"buttonText": "Crea un model d'objecte"
},
"state": {
"title": "Cap model de classificació d'estat",
"description": "Crea un model personalitzat per a monitoritzar i classificar els canvis d'estat en àrees de càmera específiques.",
"buttonText": "Crea un model d'estat"
}
},
"wizard": {
"title": "Crea una classificació nova",
"steps": {
"nameAndDefine": "Nom i definició",
"stateArea": "Àrea estatal",
"chooseExamples": "Trieu exemples"
},
"step1": {
"description": "Els models estatals monitoritzen àrees de càmera fixes per als canvis (p. ex., porta oberta/tancada). Els models d'objectes afegeixen classificacions als objectes detectats (per exemple, animals coneguts, persones de lliurament, etc.).",
"name": "Nom",
"namePlaceholder": "Introduïu el nom del model...",
"type": "Tipus",
"typeState": "Estat",
"typeObject": "Objecte",
"objectLabel": "Etiqueta de l'objecte",
"objectLabelPlaceholder": "Selecciona el tipus d'objecte...",
"classificationType": "Tipus de classificació",
"classificationTypeTip": "Apreneu sobre els tipus de classificació",
"classificationTypeDesc": "Les subetiquetes afegeixen text addicional a l'etiqueta de l'objecte (p. ex., 'Person: UPS'). Els atributs són metadades cercables emmagatzemades per separat a les metadades de l'objecte.",
"classificationSubLabel": "Subetiqueta",
"classificationAttribute": "Atribut",
"classes": "Classes",
"classesTip": "Aprèn sobre les classes",
"classesStateDesc": "Defineix els diferents estats en què pot estar la teva àrea de càmera. Per exemple: \"obert\" i \"tancat\" per a una porta de garatge.",
"classesObjectDesc": "Defineix les diferents categories en què classificar els objectes detectats. Per exemple: 'lliuramentpersonpersona', 'resident', 'amenaça' per a la classificació de persones.",
"classPlaceholder": "Introduïu el nom de la classe...",
"errors": {
"nameRequired": "Es requereix el nom del model",
"nameLength": "El nom del model ha de tenir 64 caràcters o menys",
"nameOnlyNumbers": "El nom del model no pot contenir només números",
"classRequired": "Es requereix com a mínim 1 classe",
"classesUnique": "Els noms de classe han de ser únics",
"stateRequiresTwoClasses": "Els models d'estat requereixen almenys 2 classes",
"objectLabelRequired": "Seleccioneu una etiqueta d'objecte",
"objectTypeRequired": "Seleccioneu un tipus de classificació"
},
"states": "Estats"
},
"step2": {
"description": "Seleccioneu les càmeres i definiu l'àrea a monitoritzar per a cada càmera. El model classificarà l'estat d'aquestes àrees.",
"cameras": "Càmeres",
"selectCamera": "Selecciona la càmera",
"noCameras": "Feu clic a + per a afegir càmeres",
"selectCameraPrompt": "Seleccioneu una càmera de la llista per definir la seva àrea de monitoratge"
},
"step3": {
"selectImagesPrompt": "Selecciona totes les imatges amb: {{className}}",
"selectImagesDescription": "Feu clic a les imatges per a seleccionar-les. Feu clic a Continua quan hàgiu acabat amb aquesta classe.",
"generating": {
"title": "S'estan generant imatges de mostra",
"description": "Frigate està traient imatges representatives dels vostres enregistraments. Això pot trigar un moment..."
},
"training": {
"title": "Model d'entrenament",
"description": "El teu model s'està entrenant en segon pla. Tanqueu aquest diàleg i el vostre model començarà a funcionar tan aviat com s'hagi completat l'entrenament."
},
"retryGenerate": "Torna a provar la generació",
"noImages": "No s'ha generat cap imatge de mostra",
"classifying": "Classificació i formació...",
"trainingStarted": "L'entrenament s'ha iniciat amb èxit",
"errors": {
"noCameras": "No s'ha configurat cap càmera",
"noObjectLabel": "No s'ha seleccionat cap etiqueta d'objecte",
"generateFailed": "No s'han pogut generar exemples: {{error}}",
"generationFailed": "Ha fallat la generació. Torneu-ho a provar.",
"classifyFailed": "No s'han pogut classificar les imatges: {{error}}"
},
"generateSuccess": "Imatges de mostra generades amb èxit"
}
},
"deleteModel": {
"title": "Suprimeix el model de classificació",
"single": "Esteu segur que voleu suprimir {{name}}? Això suprimirà permanentment totes les dades associades, incloses les imatges i les dades d'entrenament. Aquesta acció no es pot desfer.",
"desc": "Esteu segur que voleu suprimir {{count}} model(s)? Això suprimirà permanentment totes les dades associades, incloses les imatges i les dades d'entrenament. Aquesta acció no es pot desfer."
},
"menu": {
"objects": "Objectes",
"states": "Estats"
},
"details": {
"scoreInfo": "La puntuació representa la confiança mitjana de la classificació en totes les deteccions d'aquest objecte."
},
"edit": {
"title": "Edita el model de classificació",
"descriptionState": "Edita les classes per a aquest model de classificació d'estats. Els canvis requeriran tornar a entrenar el model.",
"descriptionObject": "Edita el tipus d'objecte i el tipus de classificació per a aquest model de classificació d'objectes.",
"stateClassesInfo": "Nota: Canviar les classes d'estat requereix tornar a entrenar el model amb les classes actualitzades."
}
}

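The deletedModel_one/_many/_other variants in this file follow i18next's plural-suffix convention: the suffix is chosen from the locale's Intl.PluralRules categories (Catalan defines one, many and other). A sketch of the lookup, assuming the usual setup and this file's namespace:

```ts
import i18next from "i18next";

// count selects the plural suffix for the active locale.
i18next.t("toast.success.deletedModel", { count: 1 });
// → resolves "deletedModel_one"
i18next.t("toast.success.deletedModel", { count: 3 });
// → resolves "deletedModel_other", which is still empty ("") in this
//   diff, so non-singular counts render nothing until it is translated.
```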

@@ -36,5 +36,24 @@
"selected_one": "{{count}} seleccionats",
"selected_other": "{{count}} seleccionats",
"suspiciousActivity": "Activitat sospitosa",
"threateningActivity": "Activitat amenaçadora"
"threateningActivity": "Activitat amenaçadora",
"detail": {
"noDataFound": "No hi ha dades detallades a revisar",
"trackedObject_one": "objecte",
"aria": "Canvia la vista de detall",
"trackedObject_other": "objectes",
"noObjectDetailData": "No hi ha dades de detall d'objecte disponibles.",
"label": "Detall",
"settings": "Configuració de la vista detallada",
"alwaysExpandActive": {
"title": "Expandeix sempre actiu",
"desc": "Expandeix sempre els detalls de l'objecte de la revisió activa quan estigui disponible."
}
},
"objectTrack": {
"clickToSeek": "Feu clic per cercar aquesta hora",
"trackedPoint": "Punt de seguiment"
},
"zoomIn": "Amplia",
"zoomOut": "Redueix"
}


@@ -84,7 +84,8 @@
"details": "detalls",
"snapshot": "instantània",
"video": "vídeo",
"object_lifecycle": "cicle de vida de l'objecte"
"object_lifecycle": "cicle de vida de l'objecte",
"thumbnail": "miniatura"
},
"details": {
"timestamp": "Marca temporal",
@@ -206,13 +207,23 @@
"audioTranscription": {
"label": "Transcriu",
"aria": "Demanar una transcripció d'audio"
},
"showObjectDetails": {
"label": "Mostra la ruta de l'objecte"
},
"hideObjectDetails": {
"label": "Amaga la ruta de l'objecte"
},
"viewTrackingDetails": {
"label": "Veure detalls de seguiment",
"aria": "Mostra els detalls de seguiment"
}
},
"noTrackedObjects": "No s'han trobat objectes rastrejats",
"dialog": {
"confirmDelete": {
"title": "Confirmar la supressió",
"desc": "Eliminant aquest objecte seguit borrarà l'snapshot, qualsevol embedding gravat, i qualsevol objecte associat. Les imatges gravades d'aquest objecte seguit en l'historial <em>NO</em> seràn eliminades.<br /><br />Estas segur que vols continuar?"
"desc": "Eliminant aquest objecte seguit borrarà l'snapshot, qualsevol embedding gravat, i qualsevol detall de seguiment. Les imatges gravades d'aquest objecte seguit en l'historial <em>NO</em> seràn eliminades.<br /><br />Estas segur que vols continuar?"
}
},
"fetchingTrackedObjectsFailed": "Error al obtenir objectes rastrejats: {{errorMessage}}",
@@ -224,5 +235,53 @@
},
"concerns": {
"label": "Preocupacions"
},
"trackingDetails": {
"title": "Detalls de seguiment",
"noImageFound": "No s'ha trobat cap imatge amb aquesta hora.",
"createObjectMask": "Crear màscara d'objecte",
"adjustAnnotationSettings": "Ajustar configuració d'anotacions",
"scrollViewTips": "Feu clic per veure els moments significatius del cicle de vida d'aquest objecte.",
"autoTrackingTips": "Limitar les posicións de la caixa serà inacurat per càmeras de seguiment automàtic.",
"count": "{{first}} de {{second}}",
"trackedPoint": "Punt Seguit",
"lifecycleItemDesc": {
"visible": "{{label}} detectat",
"entered_zone": "{{label}} ha entrat a {{zones}}",
"active": "{{label}} ha esdevingut actiu",
"stationary": "{{label}} ha esdevingut estacionari",
"attribute": {
"faceOrLicense_plate": "{{attribute}} detectat per {{label}}",
"other": "{{label}} reconegut com a {{attribute}}"
},
"gone": "{{label}} esquerra",
"heard": "{{label}} sentit",
"external": "{{label}} detectat",
"header": {
"zones": "Zones",
"ratio": "Ràtio",
"area": "Àrea"
}
},
"annotationSettings": {
"title": "Configuració d'anotacions",
"showAllZones": {
"title": "Mostra totes les Zones",
"desc": "Mostra sempre les zones amb marcs on els objectes hagin entrat a la zona."
},
"offset": {
"label": "Òfset d'Anotació",
"desc": "Aquestes dades provenen del flux de detecció de la càmera, però se superposen a les imatges del flux de gravació. És poc probable que els dos fluxos estiguin perfectament sincronitzats. Com a resultat, el quadre delimitador i les imatges no s'alinearan perfectament. Tanmateix, es pot utilitzar el camp <code>annotation_offset</code> per ajustar-ho.",
"millisecondsToOffset": "Millisegons per l'òfset de detecció d'anotacions per. <em>Per defecte: 0</em>",
"tips": "CONSELL: Imagineu-vos que hi ha un clip d'esdeveniment amb una persona caminant d'esquerra a dreta. Si el quadre delimitador de la cronologia de l'esdeveniment està constantment a l'esquerra de la persona, aleshores s'hauria de disminuir el valor. De la mateixa manera, si una persona camina d'esquerra a dreta i el quadre delimitador està constantment per davant de la persona, aleshores s'hauria d'augmentar el valor.",
"toast": {
"success": "L'Òfset d'anotació per a {{camera}} s'ha desat al fitxer de configuració. Reinicieu Frigate per aplicar els canvis."
}
}
},
"carousel": {
"previous": "Diapositiva anterior",
"next": "Dispositiva posterior"
}
}
}

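The annotation offset copy above describes a real timing mismatch: bounding boxes are timestamped on the detect stream but drawn over the record stream. A sketch of the idea (names and sign convention are illustrative, not Frigate's actual code):

```ts
// Hypothetical helper: shift the timestamp used to look up a bounding
// box so detect-stream annotations line up with record-stream frames.
const annotationOffsetMs = -200; // per-camera annotation_offset (example value)

function annotationTimeForFrame(recordTimeMs: number): number {
  // Per the tips string above: decrease the offset if boxes trail the
  // subject, increase it if they lead the subject.
  return recordTimeMs + annotationOffsetMs;
}
```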

@@ -13,5 +13,11 @@
"error": {
"renameExportFailed": "Error al canviar el nom de lexportació: {{errorMessage}}"
}
},
"tooltip": {
"shareExport": "Comparteix l'exportació",
"downloadVideo": "Baixa el vídeo",
"editName": "Edita el nom",
"deleteExport": "Suprimeix l'exportació"
}
}


@@ -12,13 +12,13 @@
"collections": "Col·leccions",
"train": {
"empty": "No hi ha intents recents de reconeixement de rostres",
"title": "Entrenar",
"aria": "Seleccionar entrenament"
"title": "Reconeixements recents",
"aria": "Selecciona els reconeixements recents"
},
"description": {
"addFace": "Guia per a agregar una nova colecció a la biblioteca de rostres.",
"addFace": "Afegiu una col·lecció nova a la biblioteca de cares pujant la vostra primera imatge.",
"placeholder": "Introduïu un nom per a aquesta col·lecció",
"invalidName": "Nom no vàlid. Els noms només poden incloure lletres, números, espais, apòstrofs, guions baixos i guionets."
"invalidName": "Nom no vàlid. Els noms només poden incloure lletres, números, espais, apòstrofs, guions baixos i guions."
},
"documentTitle": "Biblioteca de rostres - Frigate",
"uploadFaceImage": {
@@ -54,7 +54,7 @@
"selectImage": "Siusplau, selecciona un fixer d'imatge."
},
"maxSize": "Mida màxima: {{size}}MB",
"dropInstructions": "Arrastra una imatge aquí, o fes clic per a selccionar-ne una"
"dropInstructions": "Arrossegueu i deixeu anar o enganxeu una imatge aquí, o feu clic per seleccionar"
},
"button": {
"uploadImage": "Pujar imatge",


@@ -86,8 +86,8 @@
"disable": "Amaga estadístiques de la transmissió"
},
"manualRecording": {
"title": "Gravació sota demanda",
"tips": "Iniciar un event manual basat en els paràmetres de retenció de gravació per aquesta càmera.",
"title": "Sota demanda",
"tips": "Baixeu una instantània o inicieu un esdeveniment manual basat en la configuració de retenció d'enregistrament d'aquesta càmera.",
"playInBackground": {
"label": "Reproduir en segon pla",
"desc": "Habilita aquesta opció per a continuar la transmissió quan el reproductor està amagat."
@@ -130,6 +130,9 @@
"playInBackground": {
"label": "Reproduir en segon pla",
"tips": "Habilita aquesta opció per a contiuar la transmissió tot i que el reproductor estigui ocult."
},
"debug": {
"picker": "Selecció de stream no disponible en mode debug. La vista debug sempre fa servir el stream assignat pel rol de detecció."
}
},
"streamingSettings": "Paràmetres de transmissió",
@@ -167,5 +170,16 @@
"transcription": {
"enable": "Habilita la transcripció d'àudio en temps real",
"disable": "Deshabilita la transcripció d'àudio en temps real"
},
"snapshot": {
"takeSnapshot": "Descarregar una instantània",
"noVideoSource": "No hi ha cap font de video per fer una instantània.",
"captureFailed": "Error capturant una instantània.",
"downloadStarted": "Inici de baixada d'instantània."
},
"noCameras": {
"title": "No s'ha configurat cap càmera",
"description": "Comenceu connectant una càmera a Frigate.",
"buttonText": "Afegeix una càmera"
}
}
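
The snapshot strings above map to a capture-and-download flow in the live view. A browser-side sketch of that flow (an assumed approach, not necessarily Frigate's implementation), with each error/success state from the locale file marked:

```ts
// Grab a still frame from a playing <video> element and trigger a download.
function downloadSnapshot(video: HTMLVideoElement, filename = "snapshot.jpg") {
  if (!video.videoWidth) throw new Error("No video source"); // → noVideoSource
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  canvas.toBlob((blob) => {
    if (!blob) return;                     // → captureFailed
    const a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    a.click();                             // → downloadStarted
    URL.revokeObjectURL(a.href);
  }, "image/jpeg");
}
```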

Some files were not shown because too many files have changed in this diff.