Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-03-10 02:29:19 +03:00)
759 lines · 23 KiB · JSON
{
  "label": "Camera configuration.",
  "properties": {
    "name": {
      "label": "Camera name."
    },
    "friendly_name": {
      "label": "Camera friendly name used in the Frigate UI."
    },
    "enabled": {
      "label": "Enable camera."
    },
    "audio": {
      "label": "Audio events configuration.",
      "properties": {
        "enabled": {
          "label": "Enable audio events."
        },
        "max_not_heard": {
          "label": "Seconds of not hearing the type of audio to end the event."
        },
        "min_volume": {
          "label": "Min volume required to run audio detection."
        },
        "listen": {
          "label": "Audio to listen for."
        },
        "filters": {
          "label": "Audio filters."
        },
        "enabled_in_config": {
          "label": "Keep track of original state of audio detection."
        },
        "num_threads": {
          "label": "Number of detection threads."
        }
      }
    },
    "audio_transcription": {
      "label": "Audio transcription config.",
      "properties": {
        "enabled": {
          "label": "Enable audio transcription."
        },
        "language": {
          "label": "Language abbreviation to use for audio event transcription/translation."
        },
        "device": {
          "label": "The device used for audio transcription."
        },
        "model_size": {
          "label": "The size of the embeddings model used."
        },
        "enabled_in_config": {
          "label": "Keep track of original state of audio transcription."
        },
        "live_enabled": {
          "label": "Enable live transcriptions."
        }
      }
    },
    "birdseye": {
      "label": "Birdseye camera configuration.",
      "properties": {
        "enabled": {
          "label": "Enable birdseye view for camera."
        },
        "mode": {
          "label": "Tracking mode for camera."
        },
        "order": {
          "label": "Position of the camera in the birdseye view."
        }
      }
    },
    "detect": {
      "label": "Object detection configuration.",
      "properties": {
        "enabled": {
          "label": "Detection enabled."
        },
        "height": {
          "label": "Height of the stream for the detect role."
        },
        "width": {
          "label": "Width of the stream for the detect role."
        },
        "fps": {
          "label": "Number of frames per second to process through detection."
        },
        "min_initialized": {
          "label": "Minimum number of consecutive hits for an object to be initialized by the tracker."
        },
        "max_disappeared": {
          "label": "Maximum number of frames the object can disappear before detection ends."
        },
        "stationary": {
          "label": "Stationary objects config.",
          "properties": {
            "interval": {
              "label": "Frame interval for checking stationary objects."
            },
            "threshold": {
              "label": "Number of frames without a position change for an object to be considered stationary."
            },
            "max_frames": {
              "label": "Max frames for stationary objects.",
              "properties": {
                "default": {
                  "label": "Default max frames."
                },
                "objects": {
                  "label": "Object specific max frames."
                }
              }
            },
            "classifier": {
              "label": "Enable visual classifier for determining if objects with jittery bounding boxes are stationary."
            }
          }
        },
        "annotation_offset": {
          "label": "Milliseconds to offset detect annotations by."
        }
      }
    },
    "face_recognition": {
      "label": "Face recognition config.",
      "properties": {
        "enabled": {
          "label": "Enable face recognition."
        },
        "min_area": {
          "label": "Min area of face box to consider running face recognition."
        }
      }
    },
    "ffmpeg": {
      "label": "FFmpeg configuration for the camera.",
      "properties": {
        "path": {
          "label": "FFmpeg path."
        },
        "global_args": {
          "label": "Global FFmpeg arguments."
        },
        "hwaccel_args": {
          "label": "FFmpeg hardware acceleration arguments."
        },
        "input_args": {
          "label": "FFmpeg input arguments."
        },
        "output_args": {
          "label": "FFmpeg output arguments per role.",
          "properties": {
            "detect": {
              "label": "Detect role FFmpeg output arguments."
            },
            "record": {
              "label": "Record role FFmpeg output arguments."
            }
          }
        },
        "retry_interval": {
          "label": "Time in seconds to wait before FFmpeg retries connecting to the camera."
        },
        "apple_compatibility": {
          "label": "Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players."
        },
        "inputs": {
          "label": "Camera inputs."
        }
      }
    },
    "live": {
      "label": "Live playback settings.",
      "properties": {
        "streams": {
          "label": "Friendly names and restream names to use for live view."
        },
        "height": {
          "label": "Live camera view height."
        },
        "quality": {
          "label": "Live camera view quality."
        }
      }
    },
    "lpr": {
      "label": "LPR config.",
      "properties": {
        "enabled": {
          "label": "Enable license plate recognition."
        },
        "expire_time": {
          "label": "Expire plates not seen after number of seconds (for dedicated LPR cameras only)."
        },
        "min_area": {
          "label": "Minimum area of license plate to begin running recognition."
        },
        "enhancement": {
          "label": "Amount of contrast adjustment and denoising to apply to license plate images before recognition."
        }
      }
    },
    "motion": {
      "label": "Motion detection configuration.",
      "properties": {
        "enabled": {
          "label": "Enable motion on all cameras."
        },
        "threshold": {
          "label": "Motion detection threshold (1-255)."
        },
        "lightning_threshold": {
          "label": "Lightning detection threshold (0.3-1.0)."
        },
        "improve_contrast": {
          "label": "Improve Contrast"
        },
        "contour_area": {
          "label": "Contour Area"
        },
        "delta_alpha": {
          "label": "Delta Alpha"
        },
        "frame_alpha": {
          "label": "Frame Alpha"
        },
        "frame_height": {
          "label": "Frame Height"
        },
        "mask": {
          "label": "Coordinates polygon for the motion mask."
        },
        "mqtt_off_delay": {
          "label": "Delay for updating MQTT with no motion detected."
        },
        "enabled_in_config": {
          "label": "Keep track of original state of motion detection."
        }
      }
    },
    "objects": {
      "label": "Object configuration.",
      "properties": {
        "track": {
          "label": "Objects to track."
        },
        "filters": {
          "label": "Object filters.",
          "properties": {
            "min_area": {
              "label": "Minimum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99)."
            },
            "max_area": {
              "label": "Maximum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99)."
            },
            "min_ratio": {
              "label": "Minimum ratio of bounding box's width/height for object to be counted."
            },
            "max_ratio": {
              "label": "Maximum ratio of bounding box's width/height for object to be counted."
            },
            "threshold": {
              "label": "Average detection confidence threshold for object to be counted."
            },
            "min_score": {
              "label": "Minimum detection confidence for object to be counted."
            },
            "mask": {
              "label": "Detection area polygon mask for this filter configuration."
            }
          }
        },
        "mask": {
          "label": "Object mask."
        },
        "genai": {
          "label": "Config for using GenAI to analyze objects.",
          "properties": {
            "enabled": {
              "label": "Enable GenAI for camera."
            },
            "use_snapshot": {
              "label": "Use snapshots for generating descriptions."
            },
            "prompt": {
              "label": "Default caption prompt."
            },
            "object_prompts": {
              "label": "Object specific prompts."
            },
            "objects": {
              "label": "List of objects to run generative AI for."
            },
            "required_zones": {
              "label": "List of required zones to be entered in order to run generative AI."
            },
            "debug_save_thumbnails": {
              "label": "Save thumbnails sent to generative AI for debugging purposes."
            },
            "send_triggers": {
              "label": "What triggers to use to send frames to generative AI for a tracked object.",
              "properties": {
                "tracked_object_end": {
                  "label": "Send once the object is no longer tracked."
                },
                "after_significant_updates": {
                  "label": "Send an early request to generative AI when X frames have accumulated."
                }
              }
            },
            "enabled_in_config": {
              "label": "Keep track of original state of generative AI."
            }
          }
        }
      }
    },
    "record": {
      "label": "Record configuration.",
      "properties": {
        "enabled": {
          "label": "Enable record on all cameras."
        },
        "expire_interval": {
          "label": "Number of minutes to wait between cleanup runs."
        },
        "continuous": {
          "label": "Continuous recording retention settings.",
          "properties": {
            "days": {
              "label": "Default retention period."
            }
          }
        },
        "motion": {
          "label": "Motion recording retention settings.",
          "properties": {
            "days": {
              "label": "Default retention period."
            }
          }
        },
        "detections": {
          "label": "Detection specific retention settings.",
          "properties": {
            "pre_capture": {
              "label": "Seconds to retain before event starts."
            },
            "post_capture": {
              "label": "Seconds to retain after event ends."
            },
            "retain": {
              "label": "Event retention settings.",
              "properties": {
                "days": {
                  "label": "Default retention period."
                },
                "mode": {
                  "label": "Retain mode."
                }
              }
            }
          }
        },
        "alerts": {
          "label": "Alert specific retention settings.",
          "properties": {
            "pre_capture": {
              "label": "Seconds to retain before event starts."
            },
            "post_capture": {
              "label": "Seconds to retain after event ends."
            },
            "retain": {
              "label": "Event retention settings.",
              "properties": {
                "days": {
                  "label": "Default retention period."
                },
                "mode": {
                  "label": "Retain mode."
                }
              }
            }
          }
        },
        "export": {
          "label": "Recording Export Config",
          "properties": {
            "timelapse_args": {
              "label": "Timelapse Args"
            }
          }
        },
        "preview": {
          "label": "Recording Preview Config",
          "properties": {
            "quality": {
              "label": "Quality of recording preview."
            }
          }
        },
        "enabled_in_config": {
          "label": "Keep track of original state of recording."
        }
      }
    },
    "review": {
      "label": "Review configuration.",
      "properties": {
        "alerts": {
          "label": "Review alerts config.",
          "properties": {
            "enabled": {
              "label": "Enable alerts."
            },
            "labels": {
              "label": "Labels to create alerts for."
            },
            "required_zones": {
              "label": "List of required zones to be entered in order to save the event as an alert."
            },
            "enabled_in_config": {
              "label": "Keep track of original state of alerts."
            },
            "cutoff_time": {
              "label": "Time to cut off alerts after no alert-causing activity has occurred."
            }
          }
        },
        "detections": {
          "label": "Review detections config.",
          "properties": {
            "enabled": {
              "label": "Enable detections."
            },
            "labels": {
              "label": "Labels to create detections for."
            },
            "required_zones": {
              "label": "List of required zones to be entered in order to save the event as a detection."
            },
            "cutoff_time": {
              "label": "Time to cut off detections after no detection-causing activity has occurred."
            },
            "enabled_in_config": {
              "label": "Keep track of original state of detections."
            }
          }
        },
        "genai": {
          "label": "Review description GenAI config.",
          "properties": {
            "enabled": {
              "label": "Enable GenAI descriptions for review items."
            },
            "alerts": {
              "label": "Enable GenAI for alerts."
            },
            "detections": {
              "label": "Enable GenAI for detections."
            },
            "additional_concerns": {
              "label": "Additional concerns that GenAI should make note of on this camera."
            },
            "debug_save_thumbnails": {
              "label": "Save thumbnails sent to generative AI for debugging purposes."
            },
            "enabled_in_config": {
              "label": "Keep track of original state of generative AI."
            },
            "preferred_language": {
              "label": "Preferred language for GenAI response."
            },
            "activity_context_prompt": {
              "label": "Custom activity context prompt defining normal activity patterns for this property."
            }
          }
        }
      }
    },
    "semantic_search": {
      "label": "Semantic search configuration.",
      "properties": {
        "triggers": {
          "label": "Trigger actions on tracked objects that match existing thumbnails or descriptions.",
          "properties": {
            "enabled": {
              "label": "Enable this trigger."
            },
            "type": {
              "label": "Type of trigger."
            },
            "data": {
              "label": "Trigger content (text phrase or image ID)."
            },
            "threshold": {
              "label": "Confidence score required to run the trigger."
            },
            "actions": {
              "label": "Actions to perform when trigger is matched."
            }
          }
        }
      }
    },
    "snapshots": {
      "label": "Snapshot configuration.",
      "properties": {
        "enabled": {
          "label": "Snapshots enabled."
        },
        "clean_copy": {
          "label": "Create a clean copy of the snapshot image."
        },
        "timestamp": {
          "label": "Add a timestamp overlay on the snapshot."
        },
        "bounding_box": {
          "label": "Add a bounding box overlay on the snapshot."
        },
        "crop": {
          "label": "Crop the snapshot to the detected object."
        },
        "required_zones": {
          "label": "List of required zones to be entered in order to save a snapshot."
        },
        "height": {
          "label": "Snapshot image height."
        },
        "retain": {
          "label": "Snapshot retention.",
          "properties": {
            "default": {
              "label": "Default retention period."
            },
            "mode": {
              "label": "Retain mode."
            },
            "objects": {
              "label": "Object retention period."
            }
          }
        },
        "quality": {
          "label": "Quality of the encoded JPEG (0-100)."
        }
      }
    },
    "timestamp_style": {
      "label": "Timestamp style configuration.",
      "properties": {
        "position": {
          "label": "Timestamp position."
        },
        "format": {
          "label": "Timestamp format."
        },
        "color": {
          "label": "Timestamp color.",
          "properties": {
            "red": {
              "label": "Red"
            },
            "green": {
              "label": "Green"
            },
            "blue": {
              "label": "Blue"
            }
          }
        },
        "thickness": {
          "label": "Timestamp thickness."
        },
        "effect": {
          "label": "Timestamp effect."
        }
      }
    },
    "best_image_timeout": {
      "label": "How long to wait for the image with the highest confidence score."
    },
    "mqtt": {
      "label": "MQTT configuration.",
      "properties": {
        "enabled": {
          "label": "Send image over MQTT."
        },
        "timestamp": {
          "label": "Add timestamp to MQTT image."
        },
        "bounding_box": {
          "label": "Add bounding box to MQTT image."
        },
        "crop": {
          "label": "Crop MQTT image to detected object."
        },
        "height": {
          "label": "MQTT image height."
        },
        "required_zones": {
          "label": "List of required zones to be entered in order to send the image."
        },
        "quality": {
          "label": "Quality of the encoded JPEG (0-100)."
        }
      }
    },
    "notifications": {
      "label": "Notifications configuration.",
      "properties": {
        "enabled": {
          "label": "Enable notifications."
        },
        "email": {
          "label": "Email required for push."
        },
        "cooldown": {
          "label": "Cooldown period for notifications (time in seconds)."
        },
        "enabled_in_config": {
          "label": "Keep track of original state of notifications."
        }
      }
    },
    "onvif": {
      "label": "Camera Onvif Configuration.",
      "properties": {
        "host": {
          "label": "Onvif Host"
        },
        "port": {
          "label": "Onvif Port"
        },
        "user": {
          "label": "Onvif Username"
        },
        "password": {
          "label": "Onvif Password"
        },
        "tls_insecure": {
          "label": "Onvif Disable TLS verification"
        },
        "autotracking": {
          "label": "PTZ auto tracking config.",
          "properties": {
            "enabled": {
              "label": "Enable PTZ object autotracking."
            },
            "calibrate_on_startup": {
              "label": "Perform a camera calibration when Frigate starts."
            },
            "zooming": {
              "label": "Autotracker zooming mode."
            },
            "zoom_factor": {
              "label": "Zooming factor (0.1-0.75)."
            },
            "track": {
              "label": "Objects to track."
            },
            "required_zones": {
              "label": "List of required zones to be entered in order to begin autotracking."
            },
            "return_preset": {
              "label": "Name of camera preset to return to when object tracking is over."
            },
            "timeout": {
              "label": "Seconds to delay before returning to preset."
            },
            "movement_weights": {
              "label": "Internal value used for PTZ movements based on the speed of your camera's motor."
            },
            "enabled_in_config": {
              "label": "Keep track of original state of autotracking."
            }
          }
        },
        "ignore_time_mismatch": {
          "label": "Onvif Ignore Time Synchronization Mismatch Between Camera and Server"
        }
      }
    },
    "type": {
      "label": "Camera Type"
    },
    "ui": {
      "label": "Camera UI Modifications.",
      "properties": {
        "order": {
          "label": "Order of camera in UI."
        },
        "dashboard": {
          "label": "Show this camera in Frigate dashboard UI."
        }
      }
    },
    "webui_url": {
      "label": "URL to visit the camera directly from the system page."
    },
    "zones": {
      "label": "Zone configuration.",
      "properties": {
        "filters": {
          "label": "Zone filters.",
          "properties": {
            "min_area": {
              "label": "Minimum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99)."
            },
            "max_area": {
              "label": "Maximum area of bounding box for object to be counted. Can be pixels (int) or percentage (float between 0.000001 and 0.99)."
            },
            "min_ratio": {
              "label": "Minimum ratio of bounding box's width/height for object to be counted."
            },
            "max_ratio": {
              "label": "Maximum ratio of bounding box's width/height for object to be counted."
            },
            "threshold": {
              "label": "Average detection confidence threshold for object to be counted."
            },
            "min_score": {
              "label": "Minimum detection confidence for object to be counted."
            },
            "mask": {
              "label": "Detection area polygon mask for this filter configuration."
            }
          }
        },
        "coordinates": {
          "label": "Coordinates polygon for the defined zone."
        },
        "distances": {
          "label": "Real-world distances for the sides of the quadrilateral for the defined zone."
        },
        "inertia": {
          "label": "Number of consecutive frames required for object to be considered present in the zone."
        },
        "loitering_time": {
          "label": "Number of seconds that an object must loiter to be considered in the zone."
        },
        "speed_threshold": {
          "label": "Minimum speed value for an object to be considered in the zone."
        },
        "objects": {
          "label": "List of objects that can trigger the zone."
        }
      }
    },
    "enabled_in_config": {
      "label": "Keep track of original state of camera."
    }
  }
}
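
For orientation, here is a minimal sketch of how a few of the fields labeled above appear in an actual Frigate camera config (YAML). The camera name, stream URL, and all values are illustrative placeholders, not recommendations:

```yaml
cameras:
  front_door:                  # "name": camera name
    enabled: true              # "enabled": enable camera
    friendly_name: Front Door  # "friendly_name": name shown in the Frigate UI
    ffmpeg:
      inputs:                  # "ffmpeg.inputs": camera inputs
        - path: rtsp://user:pass@192.0.2.10:554/stream  # placeholder URL
          roles:
            - detect
            - record
    detect:
      width: 1280              # width of the stream for the detect role
      height: 720              # height of the stream for the detect role
      fps: 5                   # frames per second to process through detection
    motion:
      threshold: 30            # motion detection threshold (1-255)
    record:
      enabled: true            # enable recording
    snapshots:
      enabled: true            # enable snapshots
```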