Merge branch 'dev' into 230708-yet-another-fix-queues

Sergey Krashevich 2023-07-11 01:19:14 +03:00 committed by GitHub
commit 1e766749e4
21 changed files with 856 additions and 48 deletions

View File

@ -0,0 +1,71 @@
---
id: autotracking
title: Autotracking
---
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame.
## Autotracking behavior
Once Frigate determines that an object is not a false positive and has entered one of the required zones, the autotracker will move the PTZ camera to keep the object centered in the frame until the object moves out of the frame, the PTZ reaches the limit of its movement range, or Frigate loses track of it.
Upon loss of tracking, Frigate will scan the region of the lost object for `timeout` seconds. If an object of the same type is found in that region, Frigate will track that new object.
When tracking has ended, Frigate will return to the camera preset specified by the `return_preset` configuration entry.
## Checking ONVIF camera support
Frigate autotracking functions with PTZ cameras capable of relative movement within the field of view (as specified in the [ONVIF spec](https://www.onvif.org/specs/srv/ptz/ONVIF-PTZ-Service-Spec-v1712.pdf) as `RelativePanTiltTranslationSpace` having a `TranslationSpaceFov` entry).
Many cheaper PTZs likely don't support this standard. Frigate will report an error message in the log and disable autotracking if your PTZ is unsupported.
Alternatively, you can download and run [this simple Python script](https://gist.github.com/hawkeye217/152a1d4ba80760dac95d46e143d37112), replacing the details on line 4 with your camera's IP address, ONVIF port, username, and password to check your camera.
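For reference, here is a minimal sketch of the check that script performs, using the same `onvif-zeep` library Frigate relies on (the host, port, and credentials below are placeholders for your camera's details):
```python
# Sketch: check whether a camera advertises FOV-relative pan/tilt movement.
from onvif import ONVIFCamera

camera = ONVIFCamera("192.168.1.100", 8000, "admin", "password")
media = camera.create_media_service()
ptz = camera.create_ptz_service()

profile = media.GetProfiles()[0]
request = ptz.create_type("GetConfigurationOptions")
request.ConfigurationToken = profile.PTZConfiguration.token
ptz_config = ptz.GetConfigurationOptions(request)

# Autotracking needs a RelativePanTiltTranslationSpace entry whose URI
# contains TranslationSpaceFov -- the same condition Frigate checks.
spaces = ptz_config.Spaces.RelativePanTiltTranslationSpace or []
supported = any("TranslationSpaceFov" in space["URI"] for space in spaces)
print("FOV relative movement supported" if supported else "not supported")
```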
## Configuration
First, configure the ONVIF parameters for your camera, then specify the object types to track, a required zone the object must enter, and a camera preset name to return to when tracking has ended. Optionally, specify a delay in seconds before Frigate returns the camera to the preset.
An [ONVIF connection](cameras.md) is required for autotracking to function.
Note that `autotracking` is disabled by default but can be enabled in the configuration or by MQTT.
```yaml
cameras:
ptzcamera:
...
onvif:
# Required: host of the camera being connected to.
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
# Optional: username for login.
# NOTE: Some devices require admin to access ONVIF.
user: admin
# Optional: password for login.
password: admin
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:
# Optional: enable/disable object autotracking. (default: shown below)
enabled: False
# Optional: list of objects to track from labelmap.txt (default: shown below)
track:
- person
# Required: Begin automatically tracking an object when it enters any of the listed zones.
required_zones:
- zone_name
# Optional: Name of ONVIF camera preset to return to when tracking is over. (default: shown below)
return_preset: home
# Optional: Seconds to delay before returning to preset. (default: shown below)
timeout: 10
```
## Best practices and considerations
Every PTZ camera is different, so autotracking may not perform ideally in every situation. This experimental feature was initially developed using an EmpireTech/Dahua SD1A404XB-GNR.
The object tracker in Frigate estimates the motion of the PTZ so that tracked objects are preserved when the camera moves. In most cases, and especially for faster-moving objects, the default 5 fps is insufficient for the motion estimator to perform accurately; 10 fps is the current recommendation. Higher frame rates are unlikely to improve tracking and will only add load on Frigate and the motion estimator. Adjust your camera to output at least 10 frames per second and set the `fps` parameter in the [detect configuration](index.md) of your configuration file accordingly.
A fast [detector](object_detectors.md) is recommended. CPU detectors will not perform well, or may not work at all. If Frigate already has trouble keeping track of your object, the autotracker will struggle as well.
The autotracker adds PTZ motion requests to a queue while the motor is moving. Once the motor stops, the queued requests are executed together as one large move rather than as incremental moves, as sketched below. If your PTZ's motor is slow, you may not be able to reliably autotrack fast-moving objects.
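As a rough illustration of that accumulation behavior (a simplified sketch, not Frigate's actual implementation): queued pan/tilt deltas are combined into a single relative move, and any delta that would push the combined move outside the camera's ±1.0 relative range stays in the queue for the next cycle.
```python
import queue

def drain_moves(move_queue: queue.Queue) -> tuple[float, float]:
    """Combine queued (pan, tilt) deltas into one relative move."""
    pan, tilt = 0.0, 0.0
    while not move_queue.empty():
        queued_pan, queued_tilt = move_queue.queue[0]  # peek without removing
        # a combined move beyond the +/-1.0 range waits for the next cycle
        if abs(pan + queued_pan) > 1.0 or abs(tilt + queued_tilt) > 1.0:
            break
        pan += queued_pan
        tilt += queued_tilt
        move_queue.get()  # now actually dequeue the delta
    return pan, tilt
```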

View File

@ -66,3 +66,5 @@ cameras:
```
then PTZ controls will be available in the camera's WebUI.
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame. For autotracking setup, see the [autotracking](autotracking.md) docs.

View File

@ -145,6 +145,12 @@ audio:
enabled: False
# Optional: Configure the amount of seconds without detected audio to end the event (default: shown below)
max_not_heard: 30
# Optional: Configure the min rms volume required to run audio detection (default: shown below)
# As a rule of thumb:
# - 200 - high sensitivity
# - 500 - medium sensitivity
# - 1000 - low sensitivity
min_volume: 500
# Optional: Types of audio to listen for (default: shown below)
listen:
- bark
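For context, `min_volume` gates detection on the RMS volume of each incoming audio chunk. A simplified sketch of the gating logic this commit adds (the function name is illustrative):
```python
import numpy as np

def should_run_audio_detection(audio: np.ndarray, min_volume: int = 500) -> bool:
    # audio is a chunk of signed 16-bit samples; compute its RMS volume
    rms = np.sqrt(np.mean(np.absolute(np.square(audio.astype(np.float32)))))
    # only run the (expensive) audio model when the chunk is loud enough
    return rms >= min_volume
```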
@ -555,6 +561,21 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:
# Optional: enable/disable object autotracking. (default: shown below)
enabled: False
# Optional: list of objects to track from labelmap.txt (default: shown below)
track:
- person
# Required: Begin automatically tracking an object when it enters any of the listed zones.
required_zones:
- zone_name
# Optional: Name of ONVIF camera preset to return to when tracking is over. (default: home)
return_preset: preset_name
# Optional: Seconds to delay before returning to preset. (default: shown below)
timeout: 10
# Optional: Configuration for how to sort the cameras in the Birdseye view.
birdseye:

View File

@ -63,7 +63,9 @@ Message published for each changed event. The first message is published when th
"stationary": false, // whether or not the object is considered stationary
"motionless_count": 0, // number of frames the object has been motionless
"position_changes": 2, // number of times the object has moved from a stationary position
"attributes": [], // set of unique attributes that have been identified on the object
"attributes": {
"face": 0.64
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [] // detailed data about the current attributes in this frame
},
"after": {
@ -90,13 +92,15 @@ Message published for each changed event. The first message is published when th
"stationary": false, // whether or not the object is considered stationary
"motionless_count": 0, // number of frames the object has been motionless
"position_changes": 2, // number of times the object has changed position
"attributes": ["face"], // set of unique attributes that have been identified on the object
"attributes": {
"face": 0.86
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [
// detailed data about the current attributes in this frame
{
"label": "face",
"box": [442, 506, 534, 524],
"score": 0.64
"score": 0.86
}
]
}
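Since `attributes` is now a label-to-top-score map instead of a list, consumers can pick the best attribute with a `max` over the dict. A sketch (the payload fragment below is illustrative):
```python
import json

# Illustrative event payload fragment, shaped like the example above
payload = '{"after": {"attributes": {"face": 0.86, "amazon": 0.72}}}'
attributes = json.loads(payload)["after"]["attributes"]

if attributes:
    best_label = max(attributes, key=attributes.get)
    print(f"{best_label}: {attributes[best_label]:.2f}")  # face: 0.86
```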
@ -188,3 +192,11 @@ Topic to send PTZ commands to camera.
| `MOVE_<dir>` | send command to continuously move in `<dir>`, possible values are [UP, DOWN, LEFT, RIGHT] |
| `ZOOM_<dir>` | send command to continuously zoom `<dir>`, possible values are [IN, OUT] |
| `STOP` | send command to stop moving |
### `frigate/<camera_name>/ptz_autotracker/set`
Topic to turn the PTZ autotracker for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/ptz_autotracker/state`
Topic with current state of the PTZ autotracker for a camera. Published values are `ON` and `OFF`.
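For example, toggling the autotracker from a script (a sketch using the `paho-mqtt` 1.x API; the broker address and camera name are placeholders):
```python
import paho.mqtt.publish as publish

# Turn autotracking off for the camera named "ptzcamera"
publish.single(
    "frigate/ptzcamera/ptz_autotracker/set",
    payload="OFF",
    hostname="mqtt.local",  # placeholder broker address
)
```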

View File

@ -42,7 +42,8 @@ from frigate.object_detection import ObjectDetectProcess
from frigate.object_processing import TrackedObjectProcessor
from frigate.output import output_frames
from frigate.plus import PlusApi
from frigate.ptz import OnvifController
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.ptz.onvif import OnvifController
from frigate.record.record import manage_recordings
from frigate.stats import StatsEmitter, stats_init
from frigate.storage import StorageMaintainer
@ -134,6 +135,13 @@ class FrigateApp:
"i",
self.config.cameras[camera_name].motion.improve_contrast,
),
"ptz_autotracker_enabled": mp.Value( # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"i",
self.config.cameras[camera_name].onvif.autotracking.enabled,
),
"ptz_stopped": mp.Event(),
"motion_threshold": mp.Value( # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
@ -164,6 +172,7 @@ class FrigateApp:
"capture_process": None,
"process": None,
}
self.camera_metrics[camera_name]["ptz_stopped"].set()
self.feature_metrics[camera_name] = {
"audio_enabled": mp.Value( # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
@ -206,10 +215,10 @@ class FrigateApp:
)
# Queue for recordings info
self.recordings_info_queue: Queue = ff.Queue()
self.recordings_info_queue: Queue = ff.Queue(DEFAULT_QUEUE_BUFFER_SIZE)
# Queue for timeline events
self.timeline_queue: Queue = ff.Queue()
self.timeline_queue: Queue = ff.Queue(DEFAULT_QUEUE_BUFFER_SIZE)
def init_database(self) -> None:
def vacuum_db(db: SqliteExtDatabase) -> None:
@ -312,7 +321,7 @@ class FrigateApp:
)
def init_onvif(self) -> None:
self.onvif_controller = OnvifController(self.config)
self.onvif_controller = OnvifController(self.config, self.camera_metrics)
def init_dispatcher(self) -> None:
comms: list[Communicator] = []
@ -366,6 +375,15 @@ class FrigateApp:
detector_config,
)
def start_ptz_autotracker(self) -> None:
self.ptz_autotracker_thread = PtzAutoTrackerThread(
self.config,
self.onvif_controller,
self.camera_metrics,
self.stop_event,
)
self.ptz_autotracker_thread.start()
def start_detected_frames_processor(self) -> None:
self.detected_frames_processor = TrackedObjectProcessor(
self.config,
@ -375,6 +393,7 @@ class FrigateApp:
self.event_processed_queue,
self.video_output_queue,
self.recordings_info_queue,
self.ptz_autotracker_thread,
self.stop_event,
)
self.detected_frames_processor.start()
@ -539,6 +558,7 @@ class FrigateApp:
sys.exit(1)
self.start_detectors()
self.start_video_output_processor()
self.start_ptz_autotracker()
self.start_detected_frames_processor()
self.start_camera_processors()
self.start_camera_capture_processes()
@ -583,6 +603,7 @@ class FrigateApp:
self.dispatcher.stop()
self.detected_frames_processor.join()
self.ptz_autotracker_thread.join()
self.event_processor.join()
self.event_cleanup.join()
self.stats_emitter.join()

View File

@ -5,7 +5,7 @@ from abc import ABC, abstractmethod
from typing import Any, Callable
from frigate.config import FrigateConfig
from frigate.ptz import OnvifCommandEnum, OnvifController
from frigate.ptz.onvif import OnvifCommandEnum, OnvifController
from frigate.types import CameraMetricsTypes, FeatureMetricsTypes
from frigate.util.services import restart_frigate
@ -55,6 +55,7 @@ class Dispatcher:
"audio": self._on_audio_command,
"detect": self._on_detect_command,
"improve_contrast": self._on_motion_improve_contrast_command,
"ptz_autotracker": self._on_ptz_autotracker_command,
"motion": self._on_motion_command,
"motion_contour_area": self._on_motion_contour_area_command,
"motion_threshold": self._on_motion_threshold_command,
@ -159,6 +160,25 @@ class Dispatcher:
self.publish(f"{camera_name}/improve_contrast/state", payload, retain=True)
def _on_ptz_autotracker_command(self, camera_name: str, payload: str) -> None:
"""Callback for ptz_autotracker topic."""
ptz_autotracker_settings = self.config.cameras[camera_name].onvif.autotracking
if payload == "ON":
if not self.camera_metrics[camera_name]["ptz_autotracker_enabled"].value:
logger.info(f"Turning on ptz autotracker for {camera_name}")
self.camera_metrics[camera_name]["ptz_autotracker_enabled"].value = True
ptz_autotracker_settings.enabled = True
elif payload == "OFF":
if self.camera_metrics[camera_name]["ptz_autotracker_enabled"].value:
logger.info(f"Turning off ptz autotracker for {camera_name}")
self.camera_metrics[camera_name][
"ptz_autotracker_enabled"
].value = False
ptz_autotracker_settings.enabled = False
self.publish(f"{camera_name}/ptz_autotracker/state", payload, retain=True)
def _on_motion_contour_area_command(self, camera_name: str, payload: int) -> None:
"""Callback for motion contour topic."""
try:

View File

@ -69,6 +69,11 @@ class MqttClient(Communicator): # type: ignore[misc]
"ON" if camera.motion.improve_contrast else "OFF", # type: ignore[union-attr]
retain=True,
)
self.publish(
f"{camera_name}/ptz_autotracker/state",
"ON" if camera.onvif.autotracking.enabled else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/motion_threshold/state",
camera.motion.threshold, # type: ignore[union-attr]
@ -152,6 +157,7 @@ class MqttClient(Communicator): # type: ignore[misc]
"audio",
"motion",
"improve_contrast",
"ptz_autotracker",
"motion_threshold",
"motion_contour_area",
]

View File

@ -128,11 +128,31 @@ class MqttConfig(FrigateBaseModel):
return v
class PtzAutotrackConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable PTZ object autotracking.")
track: List[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
required_zones: List[str] = Field(
default_factory=list,
title="List of required zones to be entered in order to begin autotracking.",
)
return_preset: str = Field(
default="home",
title="Name of camera preset to return to when object tracking is over.",
)
timeout: int = Field(
default=10, title="Seconds to delay before returning to preset."
)
class OnvifConfig(FrigateBaseModel):
host: str = Field(default="", title="Onvif Host")
port: int = Field(default=8000, title="Onvif Port")
user: Optional[str] = Field(title="Onvif Username")
password: Optional[str] = Field(title="Onvif Password")
autotracking: PtzAutotrackConfig = Field(
default_factory=PtzAutotrackConfig,
title="PTZ auto tracking config.",
)
class RetainModeEnum(str, Enum):
@ -393,6 +413,9 @@ class AudioConfig(FrigateBaseModel):
max_not_heard: int = Field(
default=30, title="Seconds of not hearing the type of audio to end the event."
)
min_volume: int = Field(
default=500, title="Min volume required to run audio detection."
)
listen: List[str] = Field(
default=DEFAULT_LISTEN_AUDIO, title="Audio to listen for."
)
@ -892,6 +915,17 @@ def verify_zone_objects_are_tracked(camera_config: CameraConfig) -> None:
)
def verify_autotrack_zones(camera_config: CameraConfig) -> ValueError | None:
"""Verify that required_zones are specified when autotracking is enabled."""
if (
camera_config.onvif.autotracking.enabled
and not camera_config.onvif.autotracking.required_zones
):
raise ValueError(
f"Camera {camera_config.name} has autotracking enabled, required_zones must be set to at least one of the camera's zones."
)
class FrigateConfig(FrigateBaseModel):
mqtt: MqttConfig = Field(title="MQTT Configuration.")
database: DatabaseConfig = Field(
@ -1067,6 +1101,7 @@ class FrigateConfig(FrigateBaseModel):
verify_recording_retention(camera_config)
verify_recording_segments_setup_with_reasonable_time(camera_config)
verify_zone_objects_are_tracked(camera_config)
verify_autotrack_zones(camera_config)
if camera_config.rtmp.enabled:
logger.warning(

View File

@ -27,14 +27,17 @@ class EdgeTpuTfl(DetectionApi):
type_key = DETECTOR_KEY
def __init__(self, detector_config: EdgeTpuDetectorConfig):
device_config = {"device": "usb"}
device_config = {}
if detector_config.device is not None:
device_config = {"device": detector_config.device}
edge_tpu_delegate = None
try:
logger.info(f"Attempting to load TPU as {device_config['device']}")
device_type = (
device_config["device"] if "device" in device_config else "auto"
)
logger.info(f"Attempting to load TPU as {device_type}")
edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
logger.info("TPU found")
self.interpreter = Interpreter(

View File

@ -169,6 +169,10 @@ class AudioEventMaintainer(threading.Thread):
if not self.feature_metrics[self.config.name]["audio_enabled"].value:
return
rms = np.sqrt(np.mean(np.absolute(np.square(audio.astype(np.float32)))))
# only run audio detection when volume is above min_volume
if rms >= self.config.audio.min_volume:
waveform = (audio / AUDIO_MAX_BIT_RANGE).astype(np.float32)
model_detections = self.detector.detect(waveform)
@ -192,7 +196,7 @@ class AudioEventMaintainer(threading.Thread):
)
if resp.status_code == 200:
event_id = resp.json()[0]["event_id"]
event_id = resp.json()["event_id"]
self.detections[label] = {
"id": event_id,
"label": label,

View File

@ -199,7 +199,8 @@ class EventProcessor(threading.Thread):
# only overwrite the sub_label in the database if it's set
if event_data.get("sub_label") is not None:
event[Event.sub_label] = event_data["sub_label"]
event[Event.sub_label] = event_data["sub_label"][0]
event[Event.data]["sub_label_score"] = event_data["sub_label"][1]
(
Event.insert(event)

View File

@ -35,7 +35,7 @@ from frigate.events.external import ExternalEventProcessor
from frigate.models import Event, Recordings, Timeline
from frigate.object_processing import TrackedObject
from frigate.plus import PlusApi
from frigate.ptz import OnvifController
from frigate.ptz.onvif import OnvifController
from frigate.record.export import PlaybackFactorEnum, RecordingExporter
from frigate.stats import stats_snapshot
from frigate.storage import StorageMaintainer

View File

@ -38,7 +38,9 @@ class Event(Model): # type: ignore[misc]
IntegerField()
) # TODO remove when columns can be dropped without rebuilding table
retain_indefinitely = BooleanField(default=False)
ratio = FloatField(default=1.0)
ratio = FloatField(
default=1.0
) # TODO remove when columns can be dropped without rebuilding table
plus_id = CharField(max_length=30)
model_hash = CharField(max_length=32)
detector_type = CharField(max_length=32)

View File

@ -22,6 +22,7 @@ from frigate.config import (
)
from frigate.const import CLIPS_DIR
from frigate.events.maintainer import EventTypeEnum
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.util.image import (
SharedMemoryFrameManager,
area,
@ -111,7 +112,7 @@ class TrackedObject:
self.zone_presence = {}
self.current_zones = []
self.entered_zones = []
self.attributes = set()
self.attributes = defaultdict(float)
self.false_positive = True
self.has_clip = False
self.has_snapshot = False
@ -143,6 +144,7 @@ class TrackedObject:
def update(self, current_frame_time, obj_data):
thumb_update = False
significant_change = False
autotracker_update = False
# if the object is not in the current frame, add a 0.0 to the score history
if obj_data["frame_time"] != current_frame_time:
self.score_history.append(0.0)
@ -205,15 +207,19 @@ class TrackedObject:
# maintain attributes
for attr in obj_data["attributes"]:
self.attributes.add(attr["label"])
if self.attributes[attr["label"]] < attr["score"]:
self.attributes[attr["label"]] = attr["score"]
# populate the sub_label for car with first logo if it exists
if self.obj_data["label"] == "car" and "sub_label" not in self.obj_data:
recognized_logos = self.attributes.intersection(
set(["ups", "fedex", "amazon"])
)
# populate the sub_label for car with highest scoring logo
if self.obj_data["label"] == "car":
recognized_logos = {
k: self.attributes[k]
for k in ["ups", "fedex", "amazon"]
if k in self.attributes
}
if len(recognized_logos) > 0:
self.obj_data["sub_label"] = recognized_logos.pop()
max_logo = max(recognized_logos, key=recognized_logos.get)
self.obj_data["sub_label"] = (max_logo, recognized_logos[max_logo])
# check for significant change
if not self.false_positive:
@ -236,9 +242,15 @@ class TrackedObject:
if self.obj_data["frame_time"] - self.previous["frame_time"] > 60:
significant_change = True
# update autotrack at half fps
if self.obj_data["frame_time"] - self.previous["frame_time"] > (
1 / (self.camera_config.detect.fps / 2)
):
autotracker_update = True
self.obj_data.update(obj_data)
self.current_zones = current_zones
return (thumb_update, significant_change)
return (thumb_update, significant_change, autotracker_update)
def to_dict(self, include_thumbnail: bool = False):
snapshot_time = (self.thumbnail_data["frame_time"] if self.thumbnail_data is not None else 0.0)
@ -266,7 +278,7 @@ class TrackedObject:
"entered_zones": self.entered_zones.copy(),
"has_clip": self.has_clip,
"has_snapshot": self.has_snapshot,
"attributes": list(self.attributes),
"attributes": self.attributes,
"current_attributes": self.obj_data["attributes"],
}
@ -437,7 +449,11 @@ def zone_filtered(obj: TrackedObject, object_config):
# Maintains the state of a camera
class CameraState:
def __init__(
self, name, config: FrigateConfig, frame_manager: SharedMemoryFrameManager
self,
name,
config: FrigateConfig,
frame_manager: SharedMemoryFrameManager,
ptz_autotracker_thread: PtzAutoTrackerThread,
):
self.name = name
self.config = config
@ -455,6 +471,7 @@ class CameraState:
self.regions = []
self.previous_frame_id = None
self.callbacks = defaultdict(list)
self.ptz_autotracker_thread = ptz_autotracker_thread
def get_current_frame(self, draw_options={}):
with self.current_frame_lock:
@ -476,6 +493,21 @@ class CameraState:
thickness = 1
color = (255, 0, 0)
# draw thicker box around ptz autotracked object
if (
self.camera_config.onvif.autotracking.enabled
and self.ptz_autotracker_thread.ptz_autotracker.tracked_object[
self.name
]
is not None
and obj["id"]
== self.ptz_autotracker_thread.ptz_autotracker.tracked_object[
self.name
].obj_data["id"]
):
thickness = 5
color = self.config.model.colormap[obj["label"]]
# draw the bounding boxes on the frame
box = obj["box"]
draw_box_with_label(
@ -589,10 +621,14 @@ class CameraState:
for id in updated_ids:
updated_obj = tracked_objects[id]
thumb_update, significant_update = updated_obj.update(
thumb_update, significant_update, autotracker_update = updated_obj.update(
frame_time, current_detections[id]
)
if autotracker_update or significant_update:
for c in self.callbacks["autotrack"]:
c(self.name, updated_obj, frame_time)
if thumb_update:
# ensure this frame is stored in the cache
if (
@ -733,6 +769,7 @@ class TrackedObjectProcessor(threading.Thread):
event_processed_queue,
video_output_queue,
recordings_info_queue,
ptz_autotracker_thread,
stop_event,
):
threading.Thread.__init__(self)
@ -748,6 +785,7 @@ class TrackedObjectProcessor(threading.Thread):
self.camera_states: dict[str, CameraState] = {}
self.frame_manager = SharedMemoryFrameManager()
self.last_motion_detected: dict[str, float] = {}
self.ptz_autotracker_thread = ptz_autotracker_thread
def start(camera, obj: TrackedObject, current_frame_time):
self.event_queue.put(
@ -774,6 +812,9 @@ class TrackedObjectProcessor(threading.Thread):
)
)
def autotrack(camera, obj: TrackedObject, current_frame_time):
self.ptz_autotracker_thread.ptz_autotracker.autotrack_object(camera, obj)
def end(camera, obj: TrackedObject, current_frame_time):
# populate has_snapshot
obj.has_snapshot = self.should_save_snapshot(camera, obj)
@ -822,6 +863,7 @@ class TrackedObjectProcessor(threading.Thread):
"type": "end",
}
self.dispatcher.publish("events", json.dumps(message), retain=False)
self.ptz_autotracker_thread.ptz_autotracker.end_object(camera, obj)
self.event_queue.put(
(
@ -858,8 +900,11 @@ class TrackedObjectProcessor(threading.Thread):
self.dispatcher.publish(f"{camera}/{object_name}", status, retain=False)
for camera in self.config.cameras.keys():
camera_state = CameraState(camera, self.config, self.frame_manager)
camera_state = CameraState(
camera, self.config, self.frame_manager, self.ptz_autotracker_thread
)
camera_state.on("start", start)
camera_state.on("autotrack", autotrack)
camera_state.on("update", update)
camera_state.on("end", end)
camera_state.on("snapshot", snapshot)

frigate/ptz/autotrack.py (new file, +370 lines)
View File

@ -0,0 +1,370 @@
"""Automatically pan, tilt, and zoom on detected objects via onvif."""
import copy
import logging
import queue
import threading
import time
from functools import partial
from multiprocessing.synchronize import Event as MpEvent
import cv2
import numpy as np
from norfair.camera_motion import MotionEstimator, TranslationTransformationGetter
from frigate.config import CameraConfig, FrigateConfig
from frigate.ptz.onvif import OnvifController
from frigate.types import CameraMetricsTypes
from frigate.util.image import SharedMemoryFrameManager, intersection_over_union
logger = logging.getLogger(__name__)
class PtzMotionEstimator:
def __init__(self, config: CameraConfig, ptz_stopped) -> None:
self.frame_manager = SharedMemoryFrameManager()
# homography is nice (zooming) but slow, translation is pan/tilt only but fast.
self.norfair_motion_estimator = MotionEstimator(
transformations_getter=TranslationTransformationGetter(),
min_distance=30,
max_points=500,
)
self.camera_config = config
self.coord_transformations = None
self.ptz_stopped = ptz_stopped
logger.debug(f"Motion estimator init for cam: {config.name}")
def motion_estimator(self, detections, frame_time, camera_name):
if (
self.camera_config.onvif.autotracking.enabled
and not self.ptz_stopped.is_set()
):
logger.debug(
f"Motion estimator running for {camera_name} - frame time: {frame_time}"
)
frame_id = f"{camera_name}{frame_time}"
yuv_frame = self.frame_manager.get(
frame_id, self.camera_config.frame_shape_yuv
)
frame = cv2.cvtColor(yuv_frame, cv2.COLOR_YUV2GRAY_I420)
# mask out detections for better motion estimation
mask = np.ones(frame.shape[:2], frame.dtype)
detection_boxes = [x[2] for x in detections]
for detection in detection_boxes:
x1, y1, x2, y2 = detection
mask[y1:y2, x1:x2] = 0
# merge camera config motion mask with detections. Norfair function needs 0,1 mask
mask = np.bitwise_and(mask, self.camera_config.motion.mask).clip(max=1)
# Norfair estimator function needs color so it can convert it right back to gray
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGRA)
self.coord_transformations = self.norfair_motion_estimator.update(
frame, mask
)
self.frame_manager.close(frame_id)
logger.debug(
f"Motion estimator transformation: {self.coord_transformations.rel_to_abs((0,0))}"
)
return self.coord_transformations
return None
class PtzAutoTrackerThread(threading.Thread):
def __init__(
self,
config: FrigateConfig,
onvif: OnvifController,
camera_metrics: dict[str, CameraMetricsTypes],
stop_event: MpEvent,
) -> None:
threading.Thread.__init__(self)
self.name = "ptz_autotracker"
self.ptz_autotracker = PtzAutoTracker(config, onvif, camera_metrics)
self.stop_event = stop_event
self.config = config
def run(self):
while not self.stop_event.is_set():
for camera_name, cam in self.config.cameras.items():
if cam.onvif.autotracking.enabled:
self.ptz_autotracker.camera_maintenance(camera_name)
else:
# disabled dynamically by mqtt
if self.ptz_autotracker.tracked_object.get(camera_name):
self.ptz_autotracker.tracked_object[camera_name] = None
self.ptz_autotracker.tracked_object_previous[camera_name] = None
time.sleep(1)
logger.info("Exiting autotracker...")
class PtzAutoTracker:
def __init__(
self,
config: FrigateConfig,
onvif: OnvifController,
camera_metrics: CameraMetricsTypes,
) -> None:
self.config = config
self.onvif = onvif
self.camera_metrics = camera_metrics
self.tracked_object: dict[str, object] = {}
self.tracked_object_previous: dict[str, object] = {}
self.object_types = {}
self.required_zones = {}
self.move_queues = {}
self.move_threads = {}
self.autotracker_init = {}
# if cam is set to autotrack, onvif should be set up
for camera_name, cam in self.config.cameras.items():
self.autotracker_init[camera_name] = False
if cam.onvif.autotracking.enabled:
self._autotracker_setup(cam, camera_name)
def _autotracker_setup(self, cam, camera_name):
logger.debug(f"Autotracker init for cam: {camera_name}")
self.object_types[camera_name] = cam.onvif.autotracking.track
self.required_zones[camera_name] = cam.onvif.autotracking.required_zones
self.tracked_object[camera_name] = None
self.tracked_object_previous[camera_name] = None
self.move_queues[camera_name] = queue.Queue()
if not self.onvif.cams[camera_name]["init"]:
if not self.onvif._init_onvif(camera_name):
logger.warning(f"Unable to initialize onvif for {camera_name}")
cam.onvif.autotracking.enabled = False
self.camera_metrics[camera_name][
"ptz_autotracker_enabled"
].value = False
return
if not self.onvif.cams[camera_name]["relative_fov_supported"]:
cam.onvif.autotracking.enabled = False
self.camera_metrics[camera_name][
"ptz_autotracker_enabled"
].value = False
logger.warning(
f"Disabling autotracking for {camera_name}: FOV relative movement not supported"
)
return
# movement thread per camera
if camera_name not in self.move_threads or not self.move_threads[camera_name]:
self.move_threads[camera_name] = threading.Thread(
name=f"move_thread_{camera_name}",
target=partial(self._process_move_queue, camera_name),
)
self.move_threads[camera_name].daemon = True
self.move_threads[camera_name].start()
self.autotracker_init[camera_name] = True
def _process_move_queue(self, camera):
while True:
try:
if self.move_queues[camera].qsize() > 1:
# Accumulate values since last moved
pan = 0
tilt = 0
while not self.move_queues[camera].empty():
queued_pan, queued_tilt = self.move_queues[camera].queue[0]
# If exceeding the movement range, keep it in the queue and move now
if abs(pan + queued_pan) > 1.0 or abs(tilt + queued_tilt) > 1.0:
logger.debug("Pan or tilt value exceeds 1.0")
break
queued_pan, queued_tilt = self.move_queues[camera].get()
pan += queued_pan
tilt += queued_tilt
else:
move_data = self.move_queues[camera].get()
pan, tilt = move_data
# check if ptz is moving
self.onvif.get_camera_status(camera)
# Wait until the camera finishes moving
self.camera_metrics[camera]["ptz_stopped"].wait()
self.onvif._move_relative(camera, pan, tilt, 1)
# Wait until the camera finishes moving
while not self.camera_metrics[camera]["ptz_stopped"].is_set():
# check if ptz is moving
self.onvif.get_camera_status(camera)
time.sleep(1 / (self.config.cameras[camera].detect.fps / 2))
except queue.Empty:
time.sleep(0.1)
def _enqueue_move(self, camera, pan, tilt):
move_data = (pan, tilt)
logger.debug(f"enqueue pan: {pan}, enqueue tilt: {tilt}")
self.move_queues[camera].put(move_data)
def _autotrack_move_ptz(self, camera, obj):
camera_config = self.config.cameras[camera]
# frame width and height
camera_width = camera_config.frame_shape[1]
camera_height = camera_config.frame_shape[0]
# Normalize coordinates. Top right of the FOV is (1,1).
pan = 0.5 - (obj.obj_data["centroid"][0] / camera_width)
tilt = 0.5 - (obj.obj_data["centroid"][1] / camera_height)
# ideas: check object velocity for camera speed?
self._enqueue_move(camera, -pan, tilt)
def autotrack_object(self, camera, obj):
camera_config = self.config.cameras[camera]
if (
camera_config.onvif.autotracking.enabled
and self.camera_metrics[camera]["ptz_stopped"].is_set()
):
# either this is a brand new object that's on our camera, has our label, entered the zone, is not a false positive,
# and is not initially motionless - or one we're already tracking, which assumes all those things are already true
if (
# new object
self.tracked_object[camera] is None
and obj.camera == camera
and obj.obj_data["label"] in self.object_types[camera]
and set(obj.entered_zones) & set(self.required_zones[camera])
and not obj.previous["false_positive"]
and not obj.false_positive
and self.tracked_object_previous[camera] is None
and obj.obj_data["motionless_count"] == 0
):
logger.debug(
f"Autotrack: New object: {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
)
self.tracked_object[camera] = obj
self.tracked_object_previous[camera] = copy.deepcopy(obj)
self._autotrack_move_ptz(camera, obj)
return
if (
# already tracking an object
self.tracked_object[camera] is not None
and self.tracked_object_previous[camera] is not None
and obj.obj_data["id"] == self.tracked_object[camera].obj_data["id"]
and obj.obj_data["frame_time"]
!= self.tracked_object_previous[camera].obj_data["frame_time"]
):
# don't move the ptz if we're relatively close to the existing box
# should we use iou or euclidean distance or both?
# distance = math.sqrt((obj.obj_data["centroid"][0] - camera_width/2)**2 + (obj.obj_data["centroid"][1] - obj.camera_height/2)**2)
# if distance <= (self.camera_width * .15) or distance <= (self.camera_height * .15)
if (
intersection_over_union(
self.tracked_object_previous[camera].obj_data["box"],
obj.obj_data["box"],
)
> 0.5
):
logger.debug(
f"Autotrack: Existing object (do NOT move ptz): {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
)
self.tracked_object_previous[camera] = copy.deepcopy(obj)
return
logger.debug(
f"Autotrack: Existing object (move ptz): {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
)
self.tracked_object_previous[camera] = copy.deepcopy(obj)
self._autotrack_move_ptz(camera, obj)
return
if (
# The tracker lost an object, so let's check the previous object's region and compare it with the incoming object
# If it's within bounds, start tracking that object.
# Should we check region (maybe too broad) or expand the previous object's box a bit and check that?
self.tracked_object[camera] is None
and obj.camera == camera
and obj.obj_data["label"] in self.object_types[camera]
and not obj.previous["false_positive"]
and not obj.false_positive
and obj.obj_data["motionless_count"] == 0
and self.tracked_object_previous[camera] is not None
):
if (
intersection_over_union(
self.tracked_object_previous[camera].obj_data["region"],
obj.obj_data["box"],
)
< 0.2
):
logger.debug(
f"Autotrack: Reacquired object: {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
)
self.tracked_object[camera] = obj
self.tracked_object_previous[camera] = copy.deepcopy(obj)
self._autotrack_move_ptz(camera, obj)
return
def end_object(self, camera, obj):
if self.config.cameras[camera].onvif.autotracking.enabled:
if (
self.tracked_object[camera] is not None
and obj.obj_data["id"] == self.tracked_object[camera].obj_data["id"]
):
logger.debug(
f"Autotrack: End object: {obj.obj_data['id']} {obj.obj_data['box']}"
)
self.tracked_object[camera] = None
self.onvif.get_camera_status(camera)
def camera_maintenance(self, camera):
# calls get_camera_status to check/update ptz movement
# returns camera to preset after timeout when tracking is over
autotracker_config = self.config.cameras[camera].onvif.autotracking
if not self.autotracker_init[camera]:
self._autotracker_setup(self.config.cameras[camera], camera)
# regularly update camera status
if not self.camera_metrics[camera]["ptz_stopped"].is_set():
self.onvif.get_camera_status(camera)
# return to preset if tracking is over
if (
self.tracked_object[camera] is None
and self.tracked_object_previous[camera] is not None
and (
# might want to use a different timestamp here?
time.time()
- self.tracked_object_previous[camera].obj_data["frame_time"]
> autotracker_config.timeout
)
and autotracker_config.return_preset
):
self.camera_metrics[camera]["ptz_stopped"].wait()
logger.debug(
f"Autotrack: Time is {time.time()}, returning to preset: {autotracker_config.return_preset}"
)
self.onvif._move_to_preset(
camera,
autotracker_config.return_preset.lower(),
)
self.tracked_object_previous[camera] = None

View File

@ -4,9 +4,11 @@ import logging
import site
from enum import Enum
from typing import Any
import numpy
from onvif import ONVIFCamera, ONVIFError
from frigate.config import FrigateConfig
from frigate.types import CameraMetricsTypes
logger = logging.getLogger(__name__)
@ -26,8 +28,11 @@ class OnvifCommandEnum(str, Enum):
class OnvifController:
def __init__(self, config: FrigateConfig) -> None:
def __init__(
self, config: FrigateConfig, camera_metrics: dict[str, CameraMetricsTypes]
) -> None:
self.cams: dict[str, ONVIFCamera] = {}
self.camera_metrics = camera_metrics
for cam_name, cam in config.cameras.items():
if not cam.enabled:
@ -68,12 +73,51 @@ class OnvifController:
ptz = onvif.create_ptz_service()
request = ptz.create_type("GetConfigurationOptions")
request.ConfigurationToken = profile.PTZConfiguration.token
ptz_config = ptz.GetConfigurationOptions(request)
# setup moving request
fov_space_id = next(
(
i
for i, space in enumerate(
ptz_config.Spaces.RelativePanTiltTranslationSpace
)
if "TranslationSpaceFov" in space["URI"]
),
None,
)
# setup continuous moving request
move_request = ptz.create_type("ContinuousMove")
move_request.ProfileToken = profile.token
self.cams[camera_name]["move_request"] = move_request
# setup relative moving request for autotracking
move_request = ptz.create_type("RelativeMove")
move_request.ProfileToken = profile.token
if move_request.Translation is None and fov_space_id is not None:
move_request.Translation = ptz.GetStatus(
{"ProfileToken": profile.token}
).Position
move_request.Translation.PanTilt.space = ptz_config["Spaces"][
"RelativePanTiltTranslationSpace"
][fov_space_id]["URI"]
move_request.Translation.Zoom.space = ptz_config["Spaces"][
"RelativeZoomTranslationSpace"
][0]["URI"]
if move_request.Speed is None:
move_request.Speed = ptz.GetStatus({"ProfileToken": profile.token}).Position
self.cams[camera_name]["relative_move_request"] = move_request
# setup relative moving request for autotracking
move_request = ptz.create_type("AbsoluteMove")
move_request.ProfileToken = profile.token
self.cams[camera_name]["absolute_move_request"] = move_request
# status request for autotracking
status_request = ptz.create_type("GetStatus")
status_request.ProfileToken = profile.token
self.cams[camera_name]["status_request"] = status_request
# setup existing presets
try:
presets: list[dict] = ptz.GetPresets({"ProfileToken": profile.token})
@ -94,6 +138,20 @@ class OnvifController:
if ptz_config.Spaces and ptz_config.Spaces.ContinuousZoomVelocitySpace:
supported_features.append("zoom")
if ptz_config.Spaces and ptz_config.Spaces.RelativePanTiltTranslationSpace:
supported_features.append("pt-r")
if ptz_config.Spaces and ptz_config.Spaces.RelativeZoomTranslationSpace:
supported_features.append("zoom-r")
if fov_space_id is not None:
supported_features.append("pt-r-fov")
self.cams[camera_name][
"relative_fov_range"
] = ptz_config.Spaces.RelativePanTiltTranslationSpace[fov_space_id]
self.cams[camera_name]["relative_fov_supported"] = fov_space_id is not None
self.cams[camera_name]["features"] = supported_features
self.cams[camera_name]["init"] = True
@ -143,12 +201,67 @@ class OnvifController:
onvif.get_service("ptz").ContinuousMove(move_request)
def _move_relative(self, camera_name: str, pan, tilt, speed) -> None:
if not self.cams[camera_name]["relative_fov_supported"]:
logger.error(f"{camera_name} does not support ONVIF RelativeMove (FOV).")
return
logger.debug(f"{camera_name} called RelativeMove: pan: {pan} tilt: {tilt}")
self.get_camera_status(camera_name)
if self.cams[camera_name]["active"]:
logger.warning(
f"{camera_name} is already performing an action, not moving..."
)
return
self.cams[camera_name]["active"] = True
self.camera_metrics[camera_name]["ptz_stopped"].clear()
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
move_request = self.cams[camera_name]["relative_move_request"]
# function takes in -1 to 1 for pan and tilt, interpolate to the values of the camera.
# The onvif spec says this can report as +INF and -INF, so this may need to be modified
pan = numpy.interp(
pan,
[-1, 1],
[
self.cams[camera_name]["relative_fov_range"]["XRange"]["Min"],
self.cams[camera_name]["relative_fov_range"]["XRange"]["Max"],
],
)
tilt = numpy.interp(
tilt,
[-1, 1],
[
self.cams[camera_name]["relative_fov_range"]["YRange"]["Min"],
self.cams[camera_name]["relative_fov_range"]["YRange"]["Max"],
],
)
move_request.Speed = {
"PanTilt": {
"x": speed,
"y": speed,
},
"Zoom": 0,
}
move_request.Translation.PanTilt.x = pan
move_request.Translation.PanTilt.y = tilt
move_request.Translation.Zoom.x = 0
onvif.get_service("ptz").RelativeMove(move_request)
self.cams[camera_name]["active"] = False
def _move_to_preset(self, camera_name: str, preset: str) -> None:
if preset not in self.cams[camera_name]["presets"]:
logger.error(f"{preset} is not a valid preset for {camera_name}")
return
self.cams[camera_name]["active"] = True
self.camera_metrics[camera_name]["ptz_stopped"].clear()
move_request = self.cams[camera_name]["move_request"]
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
preset_token = self.cams[camera_name]["presets"][preset]
@ -158,6 +271,7 @@ class OnvifController:
"PresetToken": preset_token,
}
)
self.camera_metrics[camera_name]["ptz_stopped"].set()
self.cams[camera_name]["active"] = False
def _zoom(self, camera_name: str, command: OnvifCommandEnum) -> None:
@ -216,3 +330,30 @@ class OnvifController:
"features": self.cams[camera_name]["features"],
"presets": list(self.cams[camera_name]["presets"].keys()),
}
def get_camera_status(self, camera_name: str) -> dict[str, Any]:
if camera_name not in self.cams.keys():
logger.error(f"Onvif is not setup for {camera_name}")
return {}
if not self.cams[camera_name]["init"]:
self._init_onvif(camera_name)
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
status_request = self.cams[camera_name]["status_request"]
status = onvif.get_service("ptz").GetStatus(status_request)
if status.MoveStatus.PanTilt == "IDLE" and status.MoveStatus.Zoom == "IDLE":
self.cams[camera_name]["active"] = False
self.camera_metrics[camera_name]["ptz_stopped"].set()
else:
self.cams[camera_name]["active"] = True
self.camera_metrics[camera_name]["ptz_stopped"].clear()
return {
"pan": status.Position.PanTilt.x,
"tilt": status.Position.PanTilt.y,
"zoom": status.Position.Zoom.x,
"pantilt_moving": status.MoveStatus.PanTilt,
"zoom_moving": status.MoveStatus.Zoom,
}

View File

@ -5,7 +5,8 @@ import numpy as np
from norfair import Detection, Drawable, Tracker, draw_boxes
from norfair.drawing.drawer import Drawer
from frigate.config import DetectConfig
from frigate.config import CameraConfig
from frigate.ptz.autotrack import PtzMotionEstimator
from frigate.track import ObjectTracker
from frigate.util.image import intersection_over_union
@ -54,12 +55,16 @@ def frigate_distance(detection: Detection, tracked_object) -> float:
class NorfairTracker(ObjectTracker):
def __init__(self, config: DetectConfig):
def __init__(self, config: CameraConfig, ptz_autotracker_enabled, ptz_stopped):
self.tracked_objects = {}
self.disappeared = {}
self.positions = {}
self.max_disappeared = config.max_disappeared
self.detect_config = config
self.max_disappeared = config.detect.max_disappeared
self.camera_config = config
self.detect_config = config.detect
self.ptz_autotracker_enabled = ptz_autotracker_enabled.value
self.ptz_stopped = ptz_stopped
self.camera_name = config.name
self.track_id_map = {}
# TODO: could also initialize a tracker per object class if there
# was a good reason to have different distance calculations
@ -69,6 +74,8 @@ class NorfairTracker(ObjectTracker):
initialization_delay=0,
hit_counter_max=self.max_disappeared,
)
if self.ptz_autotracker_enabled:
self.ptz_motion_estimator = PtzMotionEstimator(config, self.ptz_stopped)
def register(self, track_id, obj):
rand_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
@ -230,7 +237,16 @@ class NorfairTracker(ObjectTracker):
)
)
tracked_objects = self.tracker.update(detections=norfair_detections)
coord_transformations = None
if self.ptz_autotracker_enabled:
coord_transformations = self.ptz_motion_estimator.motion_estimator(
detections, frame_time, self.camera_name
)
tracked_objects = self.tracker.update(
detections=norfair_detections, coord_transformations=coord_transformations
)
# update or create new tracks
active_ids = []

View File

@ -1,5 +1,6 @@
from multiprocessing.context import Process
from multiprocessing.sharedctypes import Synchronized
from multiprocessing.synchronize import Event
from typing import Optional, TypedDict
from faster_fifo import Queue
@ -17,6 +18,8 @@ class CameraMetricsTypes(TypedDict):
frame_queue: Queue
motion_enabled: Synchronized
improve_contrast_enabled: Synchronized
ptz_autotracker_enabled: Synchronized
ptz_stopped: Event
motion_threshold: Synchronized
motion_contour_area: Synchronized
process: Optional[Process]

View File

@ -478,6 +478,8 @@ def track_camera(
detection_enabled = process_info["detection_enabled"]
motion_enabled = process_info["motion_enabled"]
improve_contrast_enabled = process_info["improve_contrast_enabled"]
ptz_autotracker_enabled = process_info["ptz_autotracker_enabled"]
ptz_stopped = process_info["ptz_stopped"]
motion_threshold = process_info["motion_threshold"]
motion_contour_area = process_info["motion_contour_area"]
@ -497,7 +499,7 @@ def track_camera(
name, labelmap, detection_queue, result_connection, model_config, stop_event
)
object_tracker = NorfairTracker(config.detect)
object_tracker = NorfairTracker(config, ptz_autotracker_enabled, ptz_stopped)
frame_manager = SharedMemoryFrameManager()
@ -518,6 +520,7 @@ def track_camera(
detection_enabled,
motion_enabled,
stop_event,
ptz_stopped,
)
logger.info(f"{name}: exiting subprocess")
@ -742,6 +745,7 @@ def process_frames(
detection_enabled: mp.Value,
motion_enabled: mp.Value,
stop_event,
ptz_stopped: mp.Event,
exit_on_empty: bool = False,
):
fps = process_info["process_fps"]
@ -778,7 +782,11 @@ def process_frames(
continue
# look for motion if enabled
motion_boxes = motion_detector.detect(frame) if motion_enabled.value else []
motion_boxes = (
motion_detector.detect(frame)
if motion_enabled.value and ptz_stopped.is_set()
else []
)
regions = []
consolidated_detections = []

web/src/icons/Score.jsx (new file, +20 lines)
View File

@ -0,0 +1,20 @@
import { h } from 'preact';
import { memo } from 'preact/compat';
export function Score({ className = 'h-6 w-6', stroke = 'currentColor', fill = 'currentColor', onClick = () => {} }) {
return (
<svg
xmlns="http://www.w3.org/2000/svg"
className={className}
fill={fill}
viewBox="0 0 24 24"
stroke={stroke}
onClick={onClick}
>
<title>percent</title>
<path d="M12,9A3,3 0 0,0 9,12A3,3 0 0,0 12,15A3,3 0 0,0 15,12A3,3 0 0,0 12,9M19,19H15V21H19A2,2 0 0,0 21,19V15H19M19,3H15V5H19V9H21V5A2,2 0 0,0 19,3M5,5H9V3H5A2,2 0 0,0 3,5V9H5M5,15H3V19A2,2 0 0,0 5,21H9V19H5V15Z" />
</svg>
);
}
export default memo(Score);

View File

@ -30,6 +30,7 @@ import TimeAgo from '../components/TimeAgo';
import Timepicker from '../components/TimePicker';
import TimelineSummary from '../components/TimelineSummary';
import TimelineEventOverlay from '../components/TimelineEventOverlay';
import { Score } from '../icons/Score';
const API_LIMIT = 25;
@ -602,13 +603,10 @@ export default function Events({ path, ...props }) {
<div className="m-2 flex grow">
<div className="flex flex-col grow">
<div className="capitalize text-lg font-bold">
{event.sub_label
? `${event.label.replaceAll('_', ' ')}: ${event.sub_label.replaceAll('_', ' ')}`
: event.label.replaceAll('_', ' ')}
{(event?.data?.top_score || event.top_score || 0) == 0
? null
: ` (${((event?.data?.top_score || event.top_score) * 100).toFixed(0)}%)`}
{event.label.replaceAll('_', ' ')}
{event.sub_label ? `: ${event.sub_label.replaceAll('_', ' ')}` : null}
</div>
<div className="text-sm flex">
<Clock className="h-5 w-5 mr-2 inline" />
{formatUnixTimestampToDateTime(event.start_time, { ...config.ui })}
@ -628,6 +626,15 @@ export default function Events({ path, ...props }) {
<Zone className="w-5 h-5 mr-2 inline" />
{event.zones.join(', ').replaceAll('_', ' ')}
</div>
<div className="capitalize text-sm flex align-center">
<Score className="w-5 h-5 mr-2 inline" />
{(event?.data?.top_score || event.top_score || 0) == 0
? null
: `Label: ${((event?.data?.top_score || event.top_score) * 100).toFixed(0)}%`}
{(event?.data?.sub_label_score || 0) == 0
? null
: `, Sub Label: ${(event?.data?.sub_label_score * 100).toFixed(0)}%`}
</div>
</div>
<div class="hidden sm:flex flex-col justify-end mr-2">
{event.end_time && event.has_snapshot && (