Add media sync API endpoint (#21526)

* add media cleanup functions

* add endpoint

* remove scheduled sync recordings from cleanup

* move to utils dir

* tweak import

* remove sync_recordings and add config migrator

* remove sync_recordings

* docs

* remove key

* clean up docs

* docs fix

* docs tweak
Josh Hawkins 2026-01-04 12:21:55 -06:00 committed by GitHub
parent 1c95eb2c39
commit a77b0a7c4b
12 changed files with 922 additions and 188 deletions


@@ -141,6 +141,8 @@ record:
When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras.<camera>.record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264).
:::
:::tip
The encoder determines its own behavior, so the resulting file size may be undesirably large.
@@ -152,19 +154,36 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe
Apple devices running the Safari browser may fail to play back h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices.
## Syncing Recordings With Disk
## Syncing Media Files With Disk
In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist.
Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk.
```yaml
record:
  sync_recordings: True
```
This feature checks the file system for media files and removes any that are not referenced in the database.
The API endpoint `POST /api/media/sync` can be used to trigger a media sync. The endpoint accepts a JSON request body to control the operation.
Request body schema (JSON):
```json
{
  "dry_run": true,
  "media_types": ["all"],
  "force": false
}
```
This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`, just stop Frigate, delete the `frigate.db` database, and restart.
- `dry_run` (boolean): If `true` (the default), the service only reports orphaned files without deleting them. Set to `false` to allow deletions.
- `media_types` (array of strings): Which media types to sync. Use `"all"` to sync everything, or a list of one or more of:
  - `event_snapshots`
  - `event_thumbnails`
  - `review_thumbnails`
  - `previews`
  - `exports`
  - `recordings`
- `force` (boolean): If `true`, the safety threshold is bypassed and deletions proceed even if the operation would remove a large proportion of files. Use with extreme caution.
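As a rough sketch, a dry run can be triggered and reviewed from a script before allowing any deletions. The base URL and the way you authenticate are placeholders here (the endpoint requires the admin role); adjust them for your deployment.

```python
# Hypothetical example: trigger a media sync with the requests library.
# BASE_URL is an assumption -- point it at your Frigate instance and
# authenticate as an admin user.
import requests

BASE_URL = "http://frigate.local:5000"

# 1. Dry run (the default): report orphans without deleting anything.
report = requests.post(
    f"{BASE_URL}/api/media/sync",
    json={"dry_run": True, "media_types": ["previews", "recordings"]},
).json()

# Each requested media type is reported under "results" with
# files_checked, orphans_found, orphans_deleted, aborted, and error,
# plus a "totals" summary.
print(report["results"]["totals"])

# 2. Re-run with dry_run set to false only after reviewing the report.
if report["success"] and report["results"]["totals"]["orphans_found"] > 0:
    requests.post(
        f"{BASE_URL}/api/media/sync",
        json={"dry_run": False, "media_types": ["previews", "recordings"]},
    )
```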
:::warning
The sync operation uses considerable CPU resources and in most cases is not needed; only enable it when necessary.
This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended.
:::


@@ -510,8 +510,6 @@ record:
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Continuous retention settings
continuous:
# Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)


@@ -25,7 +25,7 @@ from pydantic import ValidationError
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.request.app_body import AppConfigSetBody, MediaSyncBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
@@ -42,6 +42,7 @@ from frigate.util.builtin import (
update_yaml_file_bulk,
)
from frigate.util.config import find_config_file
from frigate.util.media import sync_all_media
from frigate.util.services import (
get_nvidia_driver_info,
process_logs,
@@ -602,6 +603,68 @@ def restart():
)
@router.post("/media/sync", dependencies=[Depends(require_role(["admin"]))])
def sync_media(body: MediaSyncBody = Body(...)):
"""Sync media files with database - remove orphaned files.
Syncs specified media types: event snapshots, event thumbnails, review thumbnails,
previews, exports, and/or recordings.
Args:
body: MediaSyncBody with dry_run flag and media_types list.
media_types can include: 'all', 'event_snapshots', 'event_thumbnails',
'review_thumbnails', 'previews', 'exports', 'recordings'
Returns:
JSON response with sync results for each requested media type.
"""
try:
results = sync_all_media(
dry_run=body.dry_run, media_types=body.media_types, force=body.force
)
# Check if any operations were aborted or had errors
has_errors = False
for result_name in [
"event_snapshots",
"event_thumbnails",
"review_thumbnails",
"previews",
"exports",
"recordings",
]:
result = getattr(results, result_name, None)
if result and (result.aborted or result.error):
has_errors = True
break
content = {
"success": not has_errors,
"dry_run": body.dry_run,
"media_types": body.media_types,
"results": results.to_dict(),
}
if has_errors:
content["message"] = (
"Some sync operations were aborted or had errors; check logs for details."
)
return JSONResponse(
content=content,
status_code=200,
)
except Exception as e:
logger.error(f"Error syncing media files: {e}")
return JSONResponse(
content={
"success": False,
"message": f"Error syncing media files: {str(e)}",
},
status_code=500,
)
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
def get_labels(camera: str = ""):
try:


@@ -1,6 +1,6 @@
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
class AppConfigSetBody(BaseModel):
@@ -27,3 +27,16 @@ class AppPostLoginBody(BaseModel):
class AppPutRoleBody(BaseModel):
role: str
class MediaSyncBody(BaseModel):
dry_run: bool = Field(
default=True, description="If True, only report orphans without deleting them"
)
media_types: List[str] = Field(
default=["all"],
description="Types of media to sync: 'all', 'event_snapshots', 'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'",
)
force: bool = Field(
default=False, description="If True, bypass safety threshold checks"
)


@@ -77,9 +77,6 @@ class RecordExportConfig(FrigateBaseModel):
class RecordConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable record on all cameras.")
sync_recordings: bool = Field(
default=False, title="Sync recordings with disk on startup and once a day."
)
expire_interval: int = Field(
default=60,
title="Number of minutes to wait between cleanup runs.",


@@ -13,9 +13,8 @@ from playhouse.sqlite_ext import SqliteExtDatabase
from frigate.config import CameraConfig, FrigateConfig, RetainModeEnum
from frigate.const import CACHE_DIR, CLIPS_DIR, MAX_WAL_SIZE, RECORD_DIR
from frigate.models import Previews, Recordings, ReviewSegment, UserReviewStatus
from frigate.record.util import remove_empty_directories, sync_recordings
from frigate.util.builtin import clear_and_unlink
from frigate.util.time import get_tomorrow_at_time
from frigate.util.media import remove_empty_directories
logger = logging.getLogger(__name__)
@@ -347,11 +346,6 @@ class RecordingCleanup(threading.Thread):
logger.debug("End expire recordings.")
def run(self) -> None:
# on startup sync recordings with disk if enabled
if self.config.record.sync_recordings:
sync_recordings(limited=False)
next_sync = get_tomorrow_at_time(3)
# Expire tmp clips every minute, recordings and clean directories every hour.
for counter in itertools.cycle(range(self.config.record.expire_interval)):
if self.stop_event.wait(60):
@@ -360,14 +354,6 @@
self.clean_tmp_previews()
if (
self.config.record.sync_recordings
and datetime.datetime.now().astimezone(datetime.timezone.utc)
> next_sync
):
sync_recordings(limited=True)
next_sync = get_tomorrow_at_time(3)
if counter == 0:
self.clean_tmp_clips()
self.expire_recordings()


@@ -1,147 +0,0 @@
"""Recordings Utilities."""
import datetime
import logging
import os
from peewee import DatabaseError, chunked
from frigate.const import RECORD_DIR
from frigate.models import Recordings, RecordingsToDelete
logger = logging.getLogger(__name__)
def remove_empty_directories(directory: str) -> None:
# list all directories recursively and sort them by path,
# longest first
paths = sorted(
[x[0] for x in os.walk(directory)],
key=lambda p: len(str(p)),
reverse=True,
)
for path in paths:
# don't delete the parent
if path == directory:
continue
if len(os.listdir(path)) == 0:
os.rmdir(path)
def sync_recordings(limited: bool) -> None:
"""Check the db for stale recordings entries that don't exist in the filesystem."""
def delete_db_entries_without_file(check_timestamp: float) -> bool:
"""Delete db entries where file was deleted outside of frigate."""
if limited:
recordings = Recordings.select(Recordings.id, Recordings.path).where(
Recordings.start_time >= check_timestamp
)
else:
# get all recordings in the db
recordings = Recordings.select(Recordings.id, Recordings.path)
# Use pagination to process records in chunks
page_size = 1000
num_pages = (recordings.count() + page_size - 1) // page_size
recordings_to_delete = set()
for page in range(num_pages):
for recording in recordings.paginate(page, page_size):
if not os.path.exists(recording.path):
recordings_to_delete.add(recording.id)
if len(recordings_to_delete) == 0:
return True
logger.info(
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
)
# convert back to list of dictionaries for insertion
recordings_to_delete = [
{"id": recording_id} for recording_id in recordings_to_delete
]
if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
# create a temporary table for deletion
RecordingsToDelete.create_table(temporary=True)
# insert ids to the temporary table
max_inserts = 1000
for batch in chunked(recordings_to_delete, max_inserts):
RecordingsToDelete.insert_many(batch).execute()
try:
# delete records in the main table that exist in the temporary table
query = Recordings.delete().where(
Recordings.id.in_(RecordingsToDelete.select(RecordingsToDelete.id))
)
query.execute()
except DatabaseError as e:
logger.error(f"Database error during recordings db cleanup: {e}")
return True
def delete_files_without_db_entry(files_on_disk: list[str]):
"""Delete files where file is not inside frigate db."""
files_to_delete = []
for file in files_on_disk:
if not Recordings.select().where(Recordings.path == file).exists():
files_to_delete.append(file)
if len(files_to_delete) == 0:
return True
logger.info(
f"Deleting {len(files_to_delete)} recordings files with missing DB entries"
)
if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
logger.debug(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
for file in files_to_delete:
os.unlink(file)
return True
logger.debug("Start sync recordings.")
# start checking on the hour 36 hours ago
check_point = datetime.datetime.now().replace(
minute=0, second=0, microsecond=0
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
db_success = delete_db_entries_without_file(check_point.timestamp())
# only try to cleanup files if db cleanup was successful
if db_success:
if limited:
# get recording files from last 36 hours
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
if root > hour_check
}
else:
# get all recordings files on disk and put them in a set
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
}
delete_files_without_db_entry(files_on_disk)
logger.debug("End sync recordings.")


@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
CURRENT_CONFIG_VERSION = "0.17-0"
CURRENT_CONFIG_VERSION = "0.18-0"
DEFAULT_CONFIG_FILE = os.path.join(CONFIG_DIR, "config.yml")
@@ -98,6 +98,13 @@ def migrate_frigate_config(config_file: str):
yaml.dump(new_config, f)
previous_version = "0.17-0"
if previous_version < "0.18-0":
logger.info(f"Migrating frigate config from {previous_version} to 0.18-0...")
new_config = migrate_018_0(config)
with open(config_file, "w") as f:
yaml.dump(new_config, f)
previous_version = "0.18-0"
logger.info("Finished frigate config migration...")
@@ -427,6 +434,27 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
return new_config
def migrate_018_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Handle migrating frigate config to 0.18-0"""
new_config = config.copy()
# Remove deprecated sync_recordings from global record config
if new_config.get("record", {}).get("sync_recordings") is not None:
del new_config["record"]["sync_recordings"]
# Remove deprecated sync_recordings from camera-specific record configs
for name, camera in config.get("cameras", {}).items():
camera_config: dict[str, dict[str, Any]] = camera.copy()
if camera_config.get("record", {}).get("sync_recordings") is not None:
del camera_config["record"]["sync_recordings"]
new_config["cameras"][name] = camera_config
new_config["version"] = "0.18-0"
return new_config
def get_relative_coordinates(
mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:

frigate/util/media.py (new file, 785 lines added)

@@ -0,0 +1,785 @@
"""Recordings Utilities."""
import datetime
import logging
import os
from dataclasses import dataclass, field
from peewee import DatabaseError, chunked
from frigate.const import CLIPS_DIR, EXPORT_DIR, RECORD_DIR, THUMB_DIR
from frigate.models import (
Event,
Export,
Previews,
Recordings,
RecordingsToDelete,
ReviewSegment,
)
logger = logging.getLogger(__name__)
# Safety threshold - abort if more than 50% of files would be deleted
SAFETY_THRESHOLD = 0.5
@dataclass
class SyncResult:
"""Result of a sync operation."""
media_type: str
files_checked: int = 0
orphans_found: int = 0
orphans_deleted: int = 0
orphan_paths: list[str] = field(default_factory=list)
aborted: bool = False
error: str | None = None
def to_dict(self) -> dict:
return {
"media_type": self.media_type,
"files_checked": self.files_checked,
"orphans_found": self.orphans_found,
"orphans_deleted": self.orphans_deleted,
"aborted": self.aborted,
"error": self.error,
}
def remove_empty_directories(directory: str) -> None:
# list all directories recursively and sort them by path,
# longest first
paths = sorted(
[x[0] for x in os.walk(directory)],
key=lambda p: len(str(p)),
reverse=True,
)
for path in paths:
# don't delete the parent
if path == directory:
continue
if len(os.listdir(path)) == 0:
os.rmdir(path)
def sync_recordings(
limited: bool = False, dry_run: bool = False, force: bool = False
) -> SyncResult:
"""Sync recordings between the database and disk using the SyncResult format."""
result = SyncResult(media_type="recordings")
try:
logger.debug("Start sync recordings.")
# start checking on the hour 36 hours ago
check_point = datetime.datetime.now().replace(
minute=0, second=0, microsecond=0
).astimezone(datetime.timezone.utc) - datetime.timedelta(hours=36)
# Gather DB recordings to inspect
if limited:
recordings_query = Recordings.select(Recordings.id, Recordings.path).where(
Recordings.start_time >= check_point.timestamp()
)
else:
recordings_query = Recordings.select(Recordings.id, Recordings.path)
recordings_count = recordings_query.count()
page_size = 1000
num_pages = (recordings_count + page_size - 1) // page_size
recordings_to_delete: list[dict] = []
for page in range(num_pages):
for recording in recordings_query.paginate(page, page_size):
if not os.path.exists(recording.path):
recordings_to_delete.append(
{"id": recording.id, "path": recording.path}
)
result.files_checked += recordings_count
result.orphans_found += len(recordings_to_delete)
result.orphan_paths.extend(
[
recording["path"]
for recording in recordings_to_delete
if recording.get("path")
]
)
if (
recordings_count
and len(recordings_to_delete) / recordings_count > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries (force=True, bypassing safety threshold)"
)
else:
logger.warning(
f"Deleting {(len(recordings_to_delete) / max(1, recordings_count) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
result.aborted = True
return result
if recordings_to_delete and not dry_run:
logger.info(
f"Deleting {len(recordings_to_delete)} recording DB entries with missing files"
)
RecordingsToDelete.create_table(temporary=True)
max_inserts = 1000
for batch in chunked(recordings_to_delete, max_inserts):
RecordingsToDelete.insert_many(batch).execute()
try:
deleted = (
Recordings.delete()
.where(
Recordings.id.in_(
RecordingsToDelete.select(RecordingsToDelete.id)
)
)
.execute()
)
result.orphans_deleted += int(deleted)
except DatabaseError as e:
logger.error(f"Database error during recordings db cleanup: {e}")
result.error = str(e)
result.aborted = True
return result
if result.aborted:
logger.warning("Recording DB sync aborted; skipping file cleanup.")
return result
# Only try to cleanup files if db cleanup was successful or dry_run
if limited:
# get recording files from last 36 hours
hour_check = f"{RECORD_DIR}/{check_point.strftime('%Y-%m-%d/%H')}"
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
if root > hour_check
}
else:
# get all recordings files on disk and put them in a set
files_on_disk = {
os.path.join(root, file)
for root, _, files in os.walk(RECORD_DIR)
for file in files
}
result.files_checked += len(files_on_disk)
files_to_delete: list[str] = []
for file in files_on_disk:
if not Recordings.select().where(Recordings.path == file).exists():
files_to_delete.append(file)
result.orphans_found += len(files_to_delete)
result.orphan_paths.extend(files_to_delete)
if (
files_on_disk
and len(files_to_delete) / len(files_on_disk) > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files (force=True, bypassing safety threshold)"
)
else:
logger.warning(
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings files, could be due to configuration error. Aborting..."
)
result.aborted = True
return result
if files_to_delete and not dry_run:
logger.info(
f"Deleting {len(files_to_delete)} recordings files with missing DB entries"
)
for file in files_to_delete:
try:
os.unlink(file)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file}: {e}")
logger.debug("End sync recordings.")
except Exception as e:
logger.error(f"Error syncing recordings: {e}")
result.error = str(e)
return result
def sync_event_snapshots(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync event snapshots - delete files not referenced by any event.
Event snapshots are stored at: CLIPS_DIR/{camera}-{event_id}.jpg
Also checks for clean variants: {camera}-{event_id}-clean.webp and -clean.png
"""
result = SyncResult(media_type="event_snapshots")
try:
# Get all event IDs with snapshots from DB
events_with_snapshots = set(
f"{e.camera}-{e.id}"
for e in Event.select(Event.id, Event.camera).where(
Event.has_snapshot == True
)
)
# Find snapshot files on disk (directly in CLIPS_DIR, not subdirectories)
snapshot_files: list[tuple[str, str]] = [] # (full_path, base_name)
if os.path.isdir(CLIPS_DIR):
for file in os.listdir(CLIPS_DIR):
file_path = os.path.join(CLIPS_DIR, file)
if os.path.isfile(file_path) and file.endswith(
(".jpg", "-clean.webp", "-clean.png")
):
# Extract base name (camera-event_id) from filename
base_name = file
for suffix in ["-clean.webp", "-clean.png", ".jpg"]:
if file.endswith(suffix):
base_name = file[: -len(suffix)]
break
snapshot_files.append((file_path, base_name))
result.files_checked = len(snapshot_files)
# Find orphans
orphans: list[str] = []
for file_path, base_name in snapshot_files:
if base_name not in events_with_snapshots:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Event snapshots sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Event snapshots sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned event snapshot files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing event snapshots: {e}")
result.error = str(e)
return result
def sync_event_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync event thumbnails - delete files not referenced by any event.
Event thumbnails are stored at: THUMB_DIR/{camera}/{event_id}.webp
Only events without inline thumbnail (thumbnail field is None/empty) use files.
"""
result = SyncResult(media_type="event_thumbnails")
try:
# Get all events that use file-based thumbnails
# Events with thumbnail field populated don't need files
events_with_file_thumbs = set(
(e.camera, e.id)
for e in Event.select(Event.id, Event.camera, Event.thumbnail).where(
(Event.thumbnail.is_null(True)) | (Event.thumbnail == "")
)
)
# Find thumbnail files on disk
thumbnail_files: list[
tuple[str, str, str]
] = [] # (full_path, camera, event_id)
if os.path.isdir(THUMB_DIR):
for camera_dir in os.listdir(THUMB_DIR):
camera_path = os.path.join(THUMB_DIR, camera_dir)
if not os.path.isdir(camera_path):
continue
for file in os.listdir(camera_path):
if file.endswith(".webp"):
event_id = file[:-5] # Remove .webp
file_path = os.path.join(camera_path, file)
thumbnail_files.append((file_path, camera_dir, event_id))
result.files_checked = len(thumbnail_files)
# Find orphans - files where event doesn't exist or event has inline thumbnail
orphans: list[str] = []
for file_path, camera, event_id in thumbnail_files:
if (camera, event_id) not in events_with_file_thumbs:
# Check if event exists with inline thumbnail
event_exists = Event.select().where(Event.id == event_id).exists()
if not event_exists:
orphans.append(file_path)
# If event exists with inline thumbnail, the file is also orphaned
elif event_exists:
event = Event.get_or_none(Event.id == event_id)
if event and event.thumbnail:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Event thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Event thumbnails sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned event thumbnail files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing event thumbnails: {e}")
result.error = str(e)
return result
def sync_review_thumbnails(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync review segment thumbnails - delete files not referenced by any review segment.
Review thumbnails are stored at: CLIPS_DIR/review/thumb-{camera}-{review_id}.webp
The full path is stored in ReviewSegment.thumb_path
"""
result = SyncResult(media_type="review_thumbnails")
try:
# Get all thumb paths from DB
review_thumb_paths = set(
r.thumb_path
for r in ReviewSegment.select(ReviewSegment.thumb_path)
if r.thumb_path
)
# Find review thumbnail files on disk
review_dir = os.path.join(CLIPS_DIR, "review")
thumbnail_files: list[str] = []
if os.path.isdir(review_dir):
for file in os.listdir(review_dir):
if file.startswith("thumb-") and file.endswith(".webp"):
file_path = os.path.join(review_dir, file)
thumbnail_files.append(file_path)
result.files_checked = len(thumbnail_files)
# Find orphans
orphans: list[str] = []
for file_path in thumbnail_files:
if file_path not in review_thumb_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Review thumbnails sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(
f"Review thumbnails sync (dry run): Found {len(orphans)} orphaned files"
)
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned review thumbnail files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing review thumbnails: {e}")
result.error = str(e)
return result
def sync_previews(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync preview files - delete files not referenced by any preview record.
Previews are stored at: CLIPS_DIR/previews/{camera}/*.mp4
The full path is stored in Previews.path
"""
result = SyncResult(media_type="previews")
try:
# Get all preview paths from DB
preview_paths = set(p.path for p in Previews.select(Previews.path) if p.path)
# Find preview files on disk
previews_dir = os.path.join(CLIPS_DIR, "previews")
preview_files: list[str] = []
if os.path.isdir(previews_dir):
for camera_dir in os.listdir(previews_dir):
camera_path = os.path.join(previews_dir, camera_dir)
if not os.path.isdir(camera_path):
continue
for file in os.listdir(camera_path):
if file.endswith(".mp4"):
file_path = os.path.join(camera_path, file)
preview_files.append(file_path)
result.files_checked = len(preview_files)
# Find orphans
orphans: list[str] = []
for file_path in preview_files:
if file_path not in preview_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Previews sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(f"Previews sync (dry run): Found {len(orphans)} orphaned files")
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned preview files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing previews: {e}")
result.error = str(e)
return result
def sync_exports(dry_run: bool = False, force: bool = False) -> SyncResult:
"""Sync export files - delete files not referenced by any export record.
Export videos are stored at: EXPORT_DIR/*.mp4
Export thumbnails are stored at: CLIPS_DIR/export/*.jpg
The paths are stored in Export.video_path and Export.thumb_path
"""
result = SyncResult(media_type="exports")
try:
# Get all export paths from DB
export_video_paths = set()
export_thumb_paths = set()
for e in Export.select(Export.video_path, Export.thumb_path):
if e.video_path:
export_video_paths.add(e.video_path)
if e.thumb_path:
export_thumb_paths.add(e.thumb_path)
# Find export video files on disk
export_files: list[str] = []
if os.path.isdir(EXPORT_DIR):
for file in os.listdir(EXPORT_DIR):
if file.endswith(".mp4"):
file_path = os.path.join(EXPORT_DIR, file)
export_files.append(file_path)
# Find export thumbnail files on disk
export_thumb_dir = os.path.join(CLIPS_DIR, "export")
thumb_files: list[str] = []
if os.path.isdir(export_thumb_dir):
for file in os.listdir(export_thumb_dir):
if file.endswith(".jpg"):
file_path = os.path.join(export_thumb_dir, file)
thumb_files.append(file_path)
result.files_checked = len(export_files) + len(thumb_files)
# Find orphans
orphans: list[str] = []
for file_path in export_files:
if file_path not in export_video_paths:
orphans.append(file_path)
for file_path in thumb_files:
if file_path not in export_thumb_paths:
orphans.append(file_path)
result.orphans_found = len(orphans)
result.orphan_paths = orphans
if len(orphans) == 0:
return result
# Safety check
if (
result.files_checked > 0
and len(orphans) / result.files_checked > SAFETY_THRESHOLD
):
if force:
logger.warning(
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files (force=True, bypassing safety threshold)."
)
else:
logger.warning(
f"Exports sync: Would delete {len(orphans)}/{result.files_checked} "
f"({len(orphans) / result.files_checked * 100:.2f}%) files. "
"Aborting due to safety threshold."
)
result.aborted = True
return result
if dry_run:
logger.info(f"Exports sync (dry run): Found {len(orphans)} orphaned files")
return result
# Delete orphans
logger.info(f"Deleting {len(orphans)} orphaned export files")
for file_path in orphans:
try:
os.unlink(file_path)
result.orphans_deleted += 1
except OSError as e:
logger.error(f"Failed to delete {file_path}: {e}")
except Exception as e:
logger.error(f"Error syncing exports: {e}")
result.error = str(e)
return result
@dataclass
class MediaSyncResults:
"""Combined results from all media sync operations."""
event_snapshots: SyncResult | None = None
event_thumbnails: SyncResult | None = None
review_thumbnails: SyncResult | None = None
previews: SyncResult | None = None
exports: SyncResult | None = None
recordings: SyncResult | None = None
@property
def total_files_checked(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.files_checked
return total
@property
def total_orphans_found(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.orphans_found
return total
@property
def total_orphans_deleted(self) -> int:
total = 0
for result in [
self.event_snapshots,
self.event_thumbnails,
self.review_thumbnails,
self.previews,
self.exports,
self.recordings,
]:
if result:
total += result.orphans_deleted
return total
def to_dict(self) -> dict:
"""Convert results to dictionary for API response."""
results = {}
for name, result in [
("event_snapshots", self.event_snapshots),
("event_thumbnails", self.event_thumbnails),
("review_thumbnails", self.review_thumbnails),
("previews", self.previews),
("exports", self.exports),
("recordings", self.recordings),
]:
if result:
results[name] = {
"files_checked": result.files_checked,
"orphans_found": result.orphans_found,
"orphans_deleted": result.orphans_deleted,
"aborted": result.aborted,
"error": result.error,
}
results["totals"] = {
"files_checked": self.total_files_checked,
"orphans_found": self.total_orphans_found,
"orphans_deleted": self.total_orphans_deleted,
}
return results
def sync_all_media(
dry_run: bool = False, media_types: list[str] = ["all"], force: bool = False
) -> MediaSyncResults:
"""Sync specified media types with the database.
Args:
dry_run: If True, only report orphans without deleting them.
media_types: List of media types to sync. Can include: 'all', 'event_snapshots',
'event_thumbnails', 'review_thumbnails', 'previews', 'exports', 'recordings'
force: If True, bypass safety threshold checks.
Returns:
MediaSyncResults with details of each sync operation.
"""
logger.debug(
f"Starting media sync (dry_run={dry_run}, media_types={media_types}, force={force})"
)
results = MediaSyncResults()
# Determine which media types to sync
sync_all = "all" in media_types
if sync_all or "event_snapshots" in media_types:
results.event_snapshots = sync_event_snapshots(dry_run=dry_run, force=force)
if sync_all or "event_thumbnails" in media_types:
results.event_thumbnails = sync_event_thumbnails(dry_run=dry_run, force=force)
if sync_all or "review_thumbnails" in media_types:
results.review_thumbnails = sync_review_thumbnails(dry_run=dry_run, force=force)
if sync_all or "previews" in media_types:
results.previews = sync_previews(dry_run=dry_run, force=force)
if sync_all or "exports" in media_types:
results.exports = sync_exports(dry_run=dry_run, force=force)
if sync_all or "recordings" in media_types:
results.recordings = sync_recordings(dry_run=dry_run, force=force)
logger.info(
f"Media sync complete: checked {results.total_files_checked} files, "
f"found {results.total_orphans_found} orphans, "
f"deleted {results.total_orphans_deleted}"
)
return results


@@ -324,9 +324,6 @@
"enabled": {
"label": "Enable record on all cameras."
},
"sync_recordings": {
"label": "Sync recordings with disk on startup and once a day."
},
"expire_interval": {
"label": "Number of minutes to wait between cleanup runs."
},
@@ -758,4 +755,4 @@
"label": "Keep track of original state of camera."
}
}
}
}


@@ -4,9 +4,6 @@
"enabled": {
"label": "Enable record on all cameras."
},
"sync_recordings": {
"label": "Sync recordings with disk on startup and once a day."
},
"expire_interval": {
"label": "Number of minutes to wait between cleanup runs."
},
@@ -90,4 +87,4 @@
"label": "Keep track of original state of recording."
}
}
}
}


@@ -197,7 +197,6 @@ export interface CameraConfig {
days: number;
mode: string;
};
sync_recordings: boolean;
};
review: {
alerts: {
@@ -542,7 +541,6 @@ export interface FrigateConfig {
days: number;
mode: string;
};
sync_recordings: boolean;
};
rtmp: {