import datetime
import logging
import multiprocessing as mp
import os
import secrets
import shutil
from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Callable, Optional

import psutil
import uvicorn
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase

from frigate.api.auth import hash_password
from frigate.api.fastapi_app import create_fastapi_app
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.camera.maintainer import CameraMaintainer
from frigate.comms.base_communicator import Communicator
from frigate.comms.dispatcher import Dispatcher
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessCommunicator
from frigate.comms.mqtt import MqttClient
from frigate.comms.object_detector_signaler import DetectorProxy
from frigate.comms.webpush import WebPushClient
from frigate.comms.ws import WebSocketClient
from frigate.comms.zmq_proxy import ZmqProxy
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.config.config import FrigateConfig
from frigate.config.profile_manager import ProfileManager
from frigate.const import (
    CACHE_DIR,
    CLIPS_DIR,
    CONFIG_DIR,
    EXPORT_DIR,
    FACE_DIR,
    MODEL_CACHE_DIR,
    RECORD_DIR,
    THUMB_DIR,
    TRIGGER_DIR,
)
from frigate.data_processing.types import DataProcessorMetrics
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.debug_replay import (
    DebugReplayManager,
    cleanup_replay_cameras,
)
from frigate.embeddings import EmbeddingProcess, EmbeddingsContext
from frigate.events.audio import AudioProcessor
from frigate.events.cleanup import EventCleanup
from frigate.events.maintainer import EventProcessor
from frigate.jobs.motion_search import stop_all_motion_search_jobs
from frigate.log import _stop_logging
from frigate.models import (
    Event,
    Export,
    Previews,
    Recordings,
    RecordingsToDelete,
    Regions,
    ReviewSegment,
    Timeline,
    Trigger,
    User,
)
from frigate.object_detection.base import ObjectDetectProcess
from frigate.output.output import OutputProcess
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.ptz.onvif import OnvifController
from frigate.record.cleanup import RecordingCleanup
from frigate.record.export import migrate_exports
from frigate.record.record import RecordProcess
from frigate.review.review import ReviewProcess
from frigate.stats.emitter import StatsEmitter
from frigate.stats.util import stats_init
from frigate.storage import StorageMaintainer
from frigate.timeline import TimelineProcessor
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import UntrackedSharedMemory
from frigate.util.process import FrigateProcess
from frigate.util.services import set_file_limit
from frigate.version import VERSION
from frigate.watchdog import FrigateWatchdog

logger = logging.getLogger(__name__)


class FrigateApp:
    def __init__(
        self, config: FrigateConfig, manager: SyncManager, stop_event: MpEvent
    ) -> None:
        self.metrics_manager = manager
        self.audio_process: Optional[mp.Process] = None
        self.stop_event = stop_event
        self.detection_queue: Queue = mp.Queue()
        self.detectors: dict[str, ObjectDetectProcess] = {}
        self.detection_shms: list[mp.shared_memory.SharedMemory] = []
        self.log_queue: Queue = mp.Queue()
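        # Per-camera metrics live in a Manager-backed dict (DictProxy) so
        # updates made in child processes are visible to the main process.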
        self.camera_metrics: DictProxy = self.metrics_manager.dict()
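        # Enrichment metrics are only allocated when a feature that reports
        # them (semantic search, GenAI, LPR, face recognition, or a custom
        # classification model) is enabled; otherwise this stays None.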
        self.embeddings_metrics: DataProcessorMetrics | None = (
            DataProcessorMetrics(
                self.metrics_manager, list(config.classification.custom.keys())
            )
            if (
                config.semantic_search.enabled
                or any(
                    c.objects.genai.enabled or c.review.genai.enabled
                    for c in config.cameras.values()
                )
                or config.lpr.enabled
                or config.face_recognition.enabled
                or len(config.classification.custom) > 0
            )
            else None
        )
        self.ptz_metrics: dict[str, PTZMetrics] = {}
        self.processes: dict[str, int] = {}
        self.embeddings: Optional[EmbeddingsContext] = None
        self.profile_manager: Optional[ProfileManager] = None
        self.config = config

    def ensure_dirs(self) -> None:
        dirs = [
            CONFIG_DIR,
            RECORD_DIR,
            THUMB_DIR,
            f"{CLIPS_DIR}/cache",
            CACHE_DIR,
            MODEL_CACHE_DIR,
            EXPORT_DIR,
        ]

        if self.config.face_recognition.enabled:
            dirs.append(FACE_DIR)

        if self.config.semantic_search.enabled:
            dirs.append(TRIGGER_DIR)

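        # A path is created only when it neither exists nor is a symlink;
        # checking islink() separately covers dangling symlinks, which fail
        # os.path.exists() but should not be replaced with a real directory.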
        for d in dirs:
            if not os.path.exists(d) and not os.path.islink(d):
                logger.info(f"Creating directory: {d}")
                os.makedirs(d)
            else:
                logger.debug(f"Skipping directory: {d}")

    def init_debug_replay_manager(self) -> None:
        self.replay_manager = DebugReplayManager()

    def init_camera_metrics(self) -> None:
        # create camera_metrics
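        # One CameraMetrics and one PTZMetrics entry per configured camera;
        # autotracking state is seeded from the camera's ONVIF config.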
        for camera_name in self.config.cameras.keys():
            self.camera_metrics[camera_name] = CameraMetrics(self.metrics_manager)
            self.ptz_metrics[camera_name] = PTZMetrics(
                autotracker_enabled=self.config.cameras[
                    camera_name
                ].onvif.autotracking.enabled
            )

    def init_queues(self) -> None:
        # Queue for cameras to push tracked objects to
        # leaving room for 2 extra cameras to be added
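        # e.g. 4 enabled cameras -> maxsize = (4 + 2) * 2 = 12 frames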
        self.detected_frames_queue: Queue = mp.Queue(
            maxsize=(
                sum(
                    camera.enabled_in_config == True
                    for camera in self.config.cameras.values()
                )
                + 2
            )
            * 2
        )

        # Queue for timeline events
        self.timeline_queue: Queue = mp.Queue()

    def init_database(self) -> None:
        def vacuum_db(db: SqliteExtDatabase) -> None:
            logger.info("Running database vacuum")
            db.execute_sql("VACUUM;")

            try:
                with open(f"{CONFIG_DIR}/.vacuum", "w") as f:
                    f.write(str(datetime.datetime.now().timestamp()))
            except PermissionError:
                logger.error("Unable to write to /config to save DB state")

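        # Both helpers stamp a marker file under CONFIG_DIR: the vacuum stamp's
        # age schedules the next vacuum, while the timeline cleanup runs only
        # when its marker is missing entirely.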
        def cleanup_timeline_db(db: SqliteExtDatabase) -> None:
            db.execute_sql(
                "DELETE FROM timeline WHERE source_id NOT IN (SELECT id FROM event);"
            )

            try:
                with open(f"{CONFIG_DIR}/.timeline", "w") as f:
                    f.write(str(datetime.datetime.now().timestamp()))
            except PermissionError:
                logger.error("Unable to write to /config to save DB state")

        # Migrate DB schema
        migrate_db = SqliteExtDatabase(self.config.database.path)

        # Run migrations
        del logging.getLogger("peewee_migrate").handlers[:]
        router = Router(migrate_db)

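        # router.diff holds the migrations that have not been applied yet; the
        # database is backed up only when at least one is about to run.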
        if len(router.diff) > 0:
            logger.info("Making backup of DB before migrations...")
            shutil.copyfile(
                self.config.database.path,
                self.config.database.path.replace("frigate.db", "backup.db"),
            )

        router.run()

        # this is a temporary check to clean up user DB from beta
        # will be removed before final release
        if not os.path.exists(f"{CONFIG_DIR}/.timeline"):
            cleanup_timeline_db(migrate_db)
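
        # Vacuum runs roughly every two weeks; an unreadable stamp is treated
        # as 0 and a missing stamp file triggers a vacuum immediately.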
        # check if vacuum needs to be run
        if os.path.exists(f"{CONFIG_DIR}/.vacuum"):
            with open(f"{CONFIG_DIR}/.vacuum") as f:
                try:
                    timestamp = round(float(f.readline()))
                except Exception:
                    timestamp = 0

            if (
                timestamp
                < (
                    datetime.datetime.now() - datetime.timedelta(weeks=2)
                ).timestamp()
            ):
                vacuum_db(migrate_db)
        else:
            vacuum_db(migrate_db)

        migrate_db.close()

    def init_go2rtc(self) -> None:
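        # go2rtc is assumed to already be running (it is launched outside this
        # class); the scan just records its PID so it can be tracked with the
        # processes Frigate spawns itself.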
        for proc in psutil.process_iter(["pid", "name"]):
            if proc.info["name"] == "go2rtc":
                logger.info(f"go2rtc process pid: {proc.info['pid']}")
                self.processes["go2rtc"] = proc.info["pid"]

    def init_recording_manager(self) -> None:
        recording_process = RecordProcess(self.config, self.stop_event)
        self.recording_process = recording_process
        recording_process.start()
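        # Process.pid is typed Optional, so "or 0" keeps self.processes
        # uniformly dict[str, int].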
self.processes["recording"] = recording_process.pid or 0
|
2023-04-26 16:25:26 +03:00
|
|
|
logger.info(f"Recording process started: {recording_process.pid}")
|
|
|
|
|
|
2024-02-21 02:26:09 +03:00
|
|
|
    def init_review_segment_manager(self) -> None:
        review_segment_process = ReviewProcess(self.config, self.stop_event)
        self.review_segment_process = review_segment_process
        review_segment_process.start()
        self.processes["review_segment"] = review_segment_process.pid or 0
        logger.info(f"Review process started: {review_segment_process.pid}")

def init_embeddings_manager(self) -> None:
|
2025-08-09 01:33:11 +03:00
|
|
|
# always start the embeddings process
|
        embedding_process = EmbeddingProcess(
            self.config, self.embeddings_metrics, self.stop_event
        )
        self.embedding_process = embedding_process
        embedding_process.start()
        self.processes["embeddings"] = embedding_process.pid or 0
        logger.info(f"Embedding process started: {embedding_process.pid}")

    def bind_database(self) -> None:
        """Bind db to the main process."""
        # NOTE: all db accessing processes need to be created before the db can be bound to the main process
        self.db = SqliteVecQueueDatabase(
            self.config.database.path,
            pragmas={
                "auto_vacuum": "FULL",  # Does not defragment database
                "cache_size": -512 * 1000,  # 512MB of cache
                "synchronous": "NORMAL",  # Safe when using WAL https://www.sqlite.org/pragma.html#pragma_synchronous
            },
            timeout=max(
                60,
                10
                * len([c for c in self.config.cameras.values() if c.enabled_in_config]),
            ),
            load_vec_extension=self.config.semantic_search.enabled,
        )
        models = [
            Event,
            Export,
            Previews,
            Recordings,
            RecordingsToDelete,
            Regions,
            ReviewSegment,
            Timeline,
            User,
            Trigger,
        ]
        self.db.bind(models)
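
    # NOTE on the pragmas/timeout above: a negative SQLite cache_size is measured
    # in KiB, so -512 * 1000 requests roughly 512MB of page cache. The busy
    # timeout scales with the number of enabled cameras, e.g. with 20 cameras:
    #
    #     max(60, 10 * 20)  # -> 200 seconds before a locked DB raises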

    def check_db_data_migrations(self) -> None:
        # check if exports need to be migrated to the new format
        if not os.path.exists(f"{CONFIG_DIR}/.exports"):
            try:
                with open(f"{CONFIG_DIR}/.exports", "w") as f:
                    f.write(str(datetime.datetime.now().timestamp()))
            except PermissionError:
                logger.error("Unable to write to /config to save export state")

            migrate_exports(self.config.ffmpeg, list(self.config.cameras.keys()))

    def init_embeddings_client(self) -> None:
        # Create a client for other processes to use
        self.embeddings = EmbeddingsContext(self.db)

    def init_inter_process_communicator(self) -> None:
        self.inter_process_communicator = InterProcessCommunicator()
        self.inter_config_updater = CameraConfigUpdatePublisher()
        self.event_metadata_updater = EventMetadataPublisher()
        self.inter_zmq_proxy = ZmqProxy()
        self.detection_proxy = DetectorProxy()

    def init_onvif(self) -> None:
        self.onvif_controller = OnvifController(self.config, self.ptz_metrics)

    def init_dispatcher(self) -> None:
        comms: list[Communicator] = []

        if self.config.mqtt.enabled:
            comms.append(MqttClient(self.config))

        notification_cameras = [
            c
            for c in self.config.cameras.values()
            if c.enabled and c.notifications.enabled_in_config
        ]

        if notification_cameras:
            comms.append(WebPushClient(self.config, self.stop_event))

        comms.append(WebSocketClient(self.config))
        comms.append(self.inter_process_communicator)

        self.dispatcher = Dispatcher(
            self.config,
            self.inter_config_updater,
            self.onvif_controller,
            self.ptz_metrics,
            comms,
        )
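
    # The Dispatcher acts as the hub between the Communicator transports in
    # `comms` (MQTT, web push, websocket, plus the in-process communicator) and
    # the rest of the app; optional transports are only appended when enabled,
    # so disabled integrations cost nothing at runtime.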

    def init_profile_manager(self) -> None:
        self.profile_manager = ProfileManager(
            self.config, self.inter_config_updater, self.dispatcher
        )
        self.dispatcher.profile_manager = self.profile_manager

        persisted = ProfileManager.load_persisted_profile()
        if persisted and any(
            persisted in cam.profiles for cam in self.config.cameras.values()
        ):
            logger.info("Restoring persisted profile '%s'", persisted)
            self.profile_manager.activate_profile(persisted)
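
    # A persisted profile is only restored if at least one camera still defines
    # it; otherwise a profile that has since been removed from the config would
    # be re-activated with nothing to apply it to.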

    def start_detectors(self) -> None:
        for name in self.config.cameras.keys():
            try:
                largest_frame = max(
                    [
                        det.model.height * det.model.width * 3
                        if det.model is not None
                        else 320
                        for det in self.config.detectors.values()
                    ]
                )
                shm_in = UntrackedSharedMemory(
                    name=name,
                    create=True,
                    size=largest_frame,
                )
            except FileExistsError:
                shm_in = UntrackedSharedMemory(name=name)

            try:
                shm_out = UntrackedSharedMemory(
                    name=f"out-{name}", create=True, size=20 * 6 * 4
                )
            except FileExistsError:
                shm_out = UntrackedSharedMemory(name=f"out-{name}")

            self.detection_shms.append(shm_in)
            self.detection_shms.append(shm_out)

        for name, detector_config in self.config.detectors.items():
            self.detectors[name] = ObjectDetectProcess(
                name,
                self.detection_queue,
                list(self.config.cameras.keys()),
                self.config,
                detector_config,
                self.stop_event,
            )
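
    # Sizing notes for the shared memory above: each camera's input block is
    # sized for the largest detector model input (height * width * 3 bytes of
    # packed pixel data), e.g. a 320x320 model needs 320 * 320 * 3 = 307,200
    # bytes; 320 is only a tiny fallback when a detector has no model config.
    # The output block is a fixed 20 * 6 * 4 = 480 bytes, consistent with 20
    # detections of 6 float32 values each (an assumption about the detector
    # result layout, which is not spelled out in this file).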

    def start_ptz_autotracker(self) -> None:
        self.ptz_autotracker_thread = PtzAutoTrackerThread(
            self.config,
            self.onvif_controller,
            self.ptz_metrics,
            self.dispatcher,
            self.stop_event,
        )
        self.ptz_autotracker_thread.start()

    def start_detected_frames_processor(self) -> None:
        self.detected_frames_processor = TrackedObjectProcessor(
            self.config,
            self.dispatcher,
            self.detected_frames_queue,
            self.ptz_autotracker_thread,
            self.stop_event,
        )
        self.detected_frames_processor.start()

    def start_video_output_processor(self) -> None:
        output_processor = OutputProcess(self.config, self.stop_event)
        self.output_processor = output_processor
        output_processor.start()
        logger.info(f"Output process started: {output_processor.pid}")

    def start_camera_processor(self) -> None:
        self.camera_maintainer = CameraMaintainer(
            self.config,
            self.detection_queue,
            self.detected_frames_queue,
            self.camera_metrics,
            self.ptz_metrics,
            self.stop_event,
            self.metrics_manager,
        )
        self.camera_maintainer.start()

    def start_audio_processor(self) -> None:
        audio_cameras = [
            c
            for c in self.config.cameras.values()
            if c.enabled and c.audio.enabled_in_config
        ]

        if audio_cameras:
            self.audio_process = AudioProcessor(
                self.config, audio_cameras, self.camera_metrics, self.stop_event
            )
            self.audio_process.start()
            self.processes["audio_detector"] = self.audio_process.pid or 0

    def start_timeline_processor(self) -> None:
        self.timeline_processor = TimelineProcessor(
            self.config, self.timeline_queue, self.stop_event
        )
        self.timeline_processor.start()

    def start_event_processor(self) -> None:
        self.event_processor = EventProcessor(
            self.config,
            self.timeline_queue,
            self.stop_event,
        )
        self.event_processor.start()

    def start_event_cleanup(self) -> None:
        self.event_cleanup = EventCleanup(self.config, self.stop_event, self.db)
        self.event_cleanup.start()

    def start_record_cleanup(self) -> None:
        self.record_cleanup = RecordingCleanup(self.config, self.stop_event)
        self.record_cleanup.start()

    def start_storage_maintainer(self) -> None:
        self.storage_maintainer = StorageMaintainer(self.config, self.stop_event)
        self.storage_maintainer.start()

    def start_stats_emitter(self) -> None:
        self.stats_emitter = StatsEmitter(
            self.config,
            stats_init(
                self.config,
                self.camera_metrics,
                self.embeddings_metrics,
                self.detectors,
                self.processes,
            ),
            self.stop_event,
        )
        self.stats_emitter.start()

    def start_watchdog(self) -> None:
        self.frigate_watchdog = FrigateWatchdog(self.detectors, self.stop_event)

        # (attribute on self, key in self.processes, factory)
        specs: list[tuple[str, str, Callable[[], FrigateProcess]]] = [
            (
                "embedding_process",
                "embeddings",
                lambda: EmbeddingProcess(
                    self.config, self.embeddings_metrics, self.stop_event
                ),
            ),
            (
                "recording_process",
                "recording",
                lambda: RecordProcess(self.config, self.stop_event),
            ),
            (
                "review_segment_process",
                "review_segment",
                lambda: ReviewProcess(self.config, self.stop_event),
            ),
            (
                "output_processor",
                "output",
                lambda: OutputProcess(self.config, self.stop_event),
            ),
        ]

        for attr, key, factory in specs:
            if not hasattr(self, attr):
                continue

            def on_restart(
                proc: FrigateProcess, _attr: str = attr, _key: str = key
            ) -> None:
                setattr(self, _attr, proc)
                self.processes[_key] = proc.pid or 0

            self.frigate_watchdog.register(
                key, getattr(self, attr), factory, on_restart
            )

        self.frigate_watchdog.start()
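
    # NOTE: `_attr` and `_key` above are bound as default arguments on purpose.
    # Python closures capture loop variables by reference, so without the
    # defaults every `on_restart` created in the loop would see the *last*
    # values of `attr`/`key`. A minimal illustration of the pitfall:
    #
    #     fns = [lambda: i for i in range(3)]        # all return 2
    #     fns = [lambda _i=i: _i for i in range(3)]  # return 0, 1, 2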

    def init_auth(self) -> None:
        if self.config.auth.enabled:
            if User.select().count() == 0:
                password = secrets.token_hex(16)
                password_hash = hash_password(
                    password, iterations=self.config.auth.hash_iterations
                )
                User.insert(
                    {
                        User.username: "admin",
                        User.role: "admin",
                        User.password_hash: password_hash,
                        User.notification_tokens: [],
                    }
                ).execute()

                self.config.auth.admin_first_time_login = True

                logger.info("********************************************************")
                logger.info("********************************************************")
                logger.info("*** Auth is enabled, but no users exist. ***")
                logger.info("*** Created a default user: ***")
                logger.info("*** User: admin ***")
                logger.info(f"*** Password: {password} ***")
                logger.info("********************************************************")
                logger.info("********************************************************")
            elif self.config.auth.reset_admin_password:
                password = secrets.token_hex(16)
                password_hash = hash_password(
                    password, iterations=self.config.auth.hash_iterations
                )
                User.replace(
                    username="admin",
                    role="admin",
                    password_hash=password_hash,
                    notification_tokens=[],
                ).execute()

                logger.info("********************************************************")
                logger.info("********************************************************")
                logger.info("*** Reset admin password set in the config. ***")
                logger.info(f"*** Password: {password} ***")
                logger.info("********************************************************")
                logger.info("********************************************************")
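
    # secrets.token_hex(16) yields 16 random bytes as a 32-character hex string
    # (128 bits of entropy); only the hash is stored, and the plaintext is
    # surfaced exactly once in the startup log banner above.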

    def start(self) -> None:
        logger.info(f"Starting Frigate ({VERSION})")

        # Ensure global state.
        self.ensure_dirs()

        # Set soft file limits.
        set_file_limit()

        # Start frigate services.
        self.init_debug_replay_manager()
        self.init_camera_metrics()
        self.init_queues()
        self.init_database()
        self.init_onvif()
        self.init_recording_manager()
        self.init_review_segment_manager()
        self.init_go2rtc()
        self.init_embeddings_manager()
        self.bind_database()
        self.check_db_data_migrations()

        # Clean up any stale replay camera artifacts (filesystem + DB)
        cleanup_replay_cameras()

        self.init_inter_process_communicator()
        self.start_detectors()
        self.init_dispatcher()
        self.init_profile_manager()
        self.init_embeddings_client()
        self.start_video_output_processor()
        self.start_ptz_autotracker()
        self.start_detected_frames_processor()
        self.start_camera_processor()
        self.start_audio_processor()
        self.start_storage_maintainer()
        self.start_stats_emitter()
        self.start_timeline_processor()
        self.start_event_processor()
        self.start_event_cleanup()
        self.start_record_cleanup()
        self.start_watchdog()

        self.init_auth()

        try:
            uvicorn.run(
                create_fastapi_app(
                    self.config,
                    self.db,
                    self.embeddings,
                    self.detected_frames_processor,
                    self.storage_maintainer,
                    self.onvif_controller,
                    self.stats_emitter,
                    self.event_metadata_updater,
                    self.inter_config_updater,
                    self.replay_manager,
                    self.dispatcher,
                    self.profile_manager,
                ),
                host="127.0.0.1",
                port=5001,
                log_level="error",
            )
        finally:
            self.stop()
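
    # uvicorn.run() blocks the main thread for the lifetime of the API server,
    # so nothing after it in start() executes until shutdown; the finally block
    # ensures stop() runs whether uvicorn returns cleanly or raises.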

    def stop(self) -> None:
        logger.info("Stopping...")

        # used by the docker healthcheck
        Path("/dev/shm/.frigate-is-stopping").touch()

        # Cancel any running motion search jobs before setting stop_event
        stop_all_motion_search_jobs()

        self.stop_event.set()

        # set an end_time on entries without an end_time before exiting
        Event.update(
            end_time=datetime.datetime.now().timestamp(), has_snapshot=False
        ).where(Event.end_time == None).execute()
        ReviewSegment.update(end_time=datetime.datetime.now().timestamp()).where(
            ReviewSegment.end_time == None
        ).execute()
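
        # NOTE: `== None` (rather than `is None`) is intentional above: peewee
        # overloads `==` to build a SQL `IS NULL` predicate, so these updates
        # only touch rows whose end_time is still NULL.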

        # stop the audio process
        if self.audio_process:
            self.audio_process.terminate()
            self.audio_process.join()

        # stop the onvif controller
        if self.onvif_controller:
            self.onvif_controller.close()

        # ensure the detectors are done
        for detector in self.detectors.values():
            detector.stop()

        empty_and_close_queue(self.detection_queue)
        logger.info("Detection queue closed")

        self.detected_frames_processor.join()
        empty_and_close_queue(self.detected_frames_queue)
        logger.info("Detected frames queue closed")

        self.timeline_processor.join()
        self.event_processor.join()
        empty_and_close_queue(self.timeline_queue)
        logger.info("Timeline queue closed")

        self.output_processor.terminate()
        self.output_processor.join()

        self.recording_process.terminate()
        self.recording_process.join()

        self.review_segment_process.terminate()
        self.review_segment_process.join()
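
        # The queues above are drained alongside joining their producers: a
        # multiprocessing queue's feeder thread keeps a producer process alive
        # until everything buffered has been flushed, so emptying the queues
        # here avoids a join() that never returns.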

        self.dispatcher.stop()
        self.ptz_autotracker_thread.join()

        self.event_cleanup.join()
        self.record_cleanup.join()
        self.stats_emitter.join()
        self.frigate_watchdog.join()
        self.camera_maintainer.join()
        self.db.stop()

        # Save embeddings stats to disk
        if self.embeddings:
            self.embeddings.stop()

        # Stop Communicators
        self.inter_process_communicator.stop()
        self.inter_config_updater.stop()
        self.event_metadata_updater.stop()
        self.inter_zmq_proxy.stop()
        self.detection_proxy.stop()

        while len(self.detection_shms) > 0:
            shm = self.detection_shms.pop()
            shm.close()
            shm.unlink()
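
        # In the loop above, shm.close() unmaps the block from this process,
        # while shm.unlink() removes the underlying segment system-wide; both
        # are needed since the main process created these segments and is the
        # last user at shutdown.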

        _stop_logging()
        self.metrics_manager.shutdown()