mirror of
https://github.com/blakeblackshear/frigate.git
synced 2026-02-05 10:45:21 +03:00
Merge branch 'dev' into prometheus-metrics
This commit is contained in:
commit
172dc1e5cf
@ -204,6 +204,15 @@ http {
|
||||
proxy_set_header Host $host;
|
||||
}
|
||||
|
||||
location ~* /api/go2rtc([/]?.*)$ {
|
||||
proxy_pass http://go2rtc;
|
||||
rewrite ^/api/go2rtc(.*)$ /api$1 break;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header Host $host;
|
||||
}
|
||||
|
||||
location ~* /api/.*\.(jpg|jpeg|png)$ {
|
||||
add_header 'Access-Control-Allow-Origin' '*';
|
||||
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
|
||||
|
||||
@ -33,3 +33,25 @@ cameras:
|
||||
birdseye:
|
||||
enabled: False
|
||||
```
|
||||
|
||||
### Sorting cameras in the Birdseye view
|
||||
|
||||
It is possible to override the order of cameras that are being shown in the Birdseye view.
|
||||
The order needs to be set at the camera level.
|
||||
|
||||
```yaml
|
||||
# Include all cameras by default in Birdseye view
|
||||
birdseye:
|
||||
enabled: True
|
||||
mode: continuous
|
||||
|
||||
cameras:
|
||||
front:
|
||||
birdseye:
|
||||
order: 1
|
||||
back:
|
||||
birdseye:
|
||||
order: 2
|
||||
```
|
||||
|
||||
*Note*: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.
|
||||
|
||||
@ -256,3 +256,25 @@ model:
|
||||
width: 416
|
||||
height: 416
|
||||
```
|
||||
|
||||
## Deepstack / CodeProject.AI Server Detector
|
||||
|
||||
The Deepstack / CodeProject.AI Server detector for Frigate allows you to integrate Deepstack and CodeProject.AI object detection capabilities into Frigate. CodeProject.AI and DeepStack are open-source AI platforms that can be run on various devices such as the Raspberry Pi, Nvidia Jetson, and other compatible hardware. It is important to note that the integration is performed over the network, so the inference times may not be as fast as native Frigate detectors, but it still provides an efficient and reliable solution for object detection and tracking.
|
||||
|
||||
### Setup
|
||||
|
||||
To get started with CodeProject.AI, visit their [official website](https://www.codeproject.com/Articles/5322557/CodeProject-AI-Server-AI-the-easy-way) to follow the instructions to download and install the AI server on your preferred device. Detailed setup instructions for CodeProject.AI are outside the scope of the Frigate documentation.
|
||||
|
||||
To integrate CodeProject.AI into Frigate, you'll need to make the following changes to your Frigate configuration file:
|
||||
|
||||
```yaml
|
||||
detectors:
|
||||
deepstack:
|
||||
api_url: http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection
|
||||
type: deepstack
|
||||
api_timeout: 0.1 # seconds
|
||||
```
|
||||
|
||||
Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and port of your CodeProject.AI server.
|
||||
|
||||
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
|
||||
@ -518,6 +518,12 @@ cameras:
|
||||
# Optional: password for login.
|
||||
password: admin
|
||||
|
||||
# Optional: Configuration for how to sort the cameras in the Birdseye view.
|
||||
birdseye:
|
||||
# Optional: Adjust sort order of cameras in the Birdseye view. Larger numbers come later (default: shown below)
|
||||
# By default the cameras are sorted alphabetically.
|
||||
order: 0
|
||||
|
||||
# Optional
|
||||
ui:
|
||||
# Optional: Set the default live mode for cameras in the UI (default: shown below)
|
||||
|
||||
@ -213,7 +213,7 @@ Sets retain to false for the event id (event may be deleted quickly after removi
|
||||
### `POST /api/events/<id>/sub_label`
|
||||
|
||||
Set a sub label for an event. For example to update `person` -> `person's name` if they were recognized with facial recognition.
|
||||
Sub labels must be 20 characters or shorter.
|
||||
Sub labels must be 100 characters or shorter.
|
||||
|
||||
```json
|
||||
{
|
||||
|
||||
@ -5,13 +5,11 @@ title: Frigate+
|
||||
|
||||
:::info
|
||||
|
||||
Frigate+ is under active development and currently only offers the ability to submit your examples with annotations. Models will be available after enough examples are submitted to train a robust model. It is free to create an account and upload your examples.
|
||||
Frigate+ is under active development. Models are available as a part of an invitation only beta. It is free to create an account and upload/annotate your examples.
|
||||
|
||||
:::
|
||||
|
||||
Frigate+ offers models trained from scratch and specifically designed for the way Frigate NVR analyzes video footage. They offer higher accuracy with less resources. By uploading your own labeled examples, your model can be uniquely tuned for accuracy in your specific conditions. After tuning, performance is evaluated against a broad dataset and real world examples submitted by other Frigate+ users to prevent overfitting.
|
||||
|
||||
Custom models also include a more relevant set of objects for security cameras such as person, face, car, license plate, delivery truck, package, dog, cat, deer, and more. Interested in detecting an object unique to you? Upload examples to incorporate your own objects without worrying that you are reducing the accuracy of other object types in the model.
|
||||
Frigate+ offers models trained from scratch and specifically designed for the way Frigate NVR analyzes video footage. They offer higher accuracy with less resources and include a more relevant set of objects for security cameras. By uploading your own labeled examples, your model can be uniquely tuned for accuracy in your specific conditions. After tuning, performance is evaluated against a broad dataset and real world examples submitted by other Frigate+ users to prevent overfitting.
|
||||
|
||||
## Setup
|
||||
|
||||
@ -35,7 +33,7 @@ You cannot use the `environment_vars` section of your configuration file to set
|
||||
|
||||
:::
|
||||
|
||||
### Submit examples
|
||||
## Submit examples
|
||||
|
||||
Once your API key is configured, you can submit examples directly from the events page in Frigate using the `SEND TO FRIGATE+` button.
|
||||
|
||||
@ -52,3 +50,25 @@ Snapshots must be enabled to be able to submit examples to Frigate+
|
||||
You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image.
|
||||
|
||||

|
||||
|
||||
## Use Models
|
||||
|
||||
Models available in Frigate+ can be used with a special model path. No other information needs to be configured for Frigate+ models because it fetches the remaining config from Frigate+ automatically.
|
||||
|
||||
```yaml
|
||||
model:
|
||||
path: plus://e63b7345cc83a84ed79dedfc99c16616
|
||||
```
|
||||
|
||||
Models are downloaded into the `/config/model_cache` folder and only downloaded if needed.
|
||||
|
||||
You can override the labelmap for Frigate+ models like this:
|
||||
|
||||
```yaml
|
||||
model:
|
||||
path: plus://e63b7345cc83a84ed79dedfc99c16616
|
||||
labelmap:
|
||||
3: animal
|
||||
4: animal
|
||||
5: animal
|
||||
```
|
||||
|
||||
@ -8,6 +8,7 @@ import signal
|
||||
import sys
|
||||
from typing import Optional
|
||||
from types import FrameType
|
||||
import psutil
|
||||
|
||||
import traceback
|
||||
from peewee_migrate import Router
|
||||
@ -18,7 +19,14 @@ from frigate.comms.dispatcher import Communicator, Dispatcher
|
||||
from frigate.comms.mqtt import MqttClient
|
||||
from frigate.comms.ws import WebSocketClient
|
||||
from frigate.config import FrigateConfig
|
||||
from frigate.const import CACHE_DIR, CLIPS_DIR, CONFIG_DIR, DEFAULT_DB_PATH, RECORD_DIR
|
||||
from frigate.const import (
|
||||
CACHE_DIR,
|
||||
CLIPS_DIR,
|
||||
CONFIG_DIR,
|
||||
DEFAULT_DB_PATH,
|
||||
MODEL_CACHE_DIR,
|
||||
RECORD_DIR,
|
||||
)
|
||||
from frigate.object_detection import ObjectDetectProcess
|
||||
from frigate.events import EventCleanup, EventProcessor
|
||||
from frigate.http import create_app
|
||||
@ -51,13 +59,14 @@ class FrigateApp:
|
||||
self.plus_api = PlusApi()
|
||||
self.camera_metrics: dict[str, CameraMetricsTypes] = {}
|
||||
self.record_metrics: dict[str, RecordMetricsTypes] = {}
|
||||
self.processes: dict[str, int] = {}
|
||||
|
||||
def set_environment_vars(self) -> None:
|
||||
for key, value in self.config.environment_vars.items():
|
||||
os.environ[key] = value
|
||||
|
||||
def ensure_dirs(self) -> None:
|
||||
for d in [CONFIG_DIR, RECORD_DIR, CLIPS_DIR, CACHE_DIR]:
|
||||
for d in [CONFIG_DIR, RECORD_DIR, CLIPS_DIR, CACHE_DIR, MODEL_CACHE_DIR]:
|
||||
if not os.path.exists(d) and not os.path.islink(d):
|
||||
logger.info(f"Creating directory: {d}")
|
||||
os.makedirs(d)
|
||||
@ -70,6 +79,7 @@ class FrigateApp:
|
||||
)
|
||||
self.log_process.daemon = True
|
||||
self.log_process.start()
|
||||
self.processes["logger"] = self.log_process.pid or 0
|
||||
root_configurer(self.log_queue)
|
||||
|
||||
def init_config(self) -> None:
|
||||
@ -81,7 +91,7 @@ class FrigateApp:
|
||||
config_file = config_file_yaml
|
||||
|
||||
user_config = FrigateConfig.parse_file(config_file)
|
||||
self.config = user_config.runtime_config
|
||||
self.config = user_config.runtime_config(self.plus_api)
|
||||
|
||||
for camera_name in self.config.cameras.keys():
|
||||
# create camera_metrics
|
||||
@ -164,6 +174,12 @@ class FrigateApp:
|
||||
|
||||
migrate_db.close()
|
||||
|
||||
def init_go2rtc(self) -> None:
|
||||
for proc in psutil.process_iter(["pid", "name"]):
|
||||
if proc.info["name"] == "go2rtc":
|
||||
logger.info(f"go2rtc process pid: {proc.info['pid']}")
|
||||
self.processes["go2rtc"] = proc.info["pid"]
|
||||
|
||||
def init_recording_manager(self) -> None:
|
||||
recording_process = mp.Process(
|
||||
target=manage_recordings,
|
||||
@ -173,6 +189,7 @@ class FrigateApp:
|
||||
recording_process.daemon = True
|
||||
self.recording_process = recording_process
|
||||
recording_process.start()
|
||||
self.processes["recording"] = recording_process.pid or 0
|
||||
logger.info(f"Recording process started: {recording_process.pid}")
|
||||
|
||||
def bind_database(self) -> None:
|
||||
@ -184,7 +201,7 @@ class FrigateApp:
|
||||
|
||||
def init_stats(self) -> None:
|
||||
self.stats_tracking = stats_init(
|
||||
self.config, self.camera_metrics, self.detectors
|
||||
self.config, self.camera_metrics, self.detectors, self.processes
|
||||
)
|
||||
|
||||
def init_web_server(self) -> None:
|
||||
@ -379,6 +396,7 @@ class FrigateApp:
|
||||
self.init_logger()
|
||||
logger.info(f"Starting Frigate ({VERSION})")
|
||||
try:
|
||||
self.ensure_dirs()
|
||||
try:
|
||||
self.init_config()
|
||||
except Exception as e:
|
||||
@ -399,12 +417,12 @@ class FrigateApp:
|
||||
self.log_process.terminate()
|
||||
sys.exit(1)
|
||||
self.set_environment_vars()
|
||||
self.ensure_dirs()
|
||||
self.set_log_levels()
|
||||
self.init_queues()
|
||||
self.init_database()
|
||||
self.init_onvif()
|
||||
self.init_recording_manager()
|
||||
self.init_go2rtc()
|
||||
self.bind_database()
|
||||
self.init_dispatcher()
|
||||
except Exception as e:
|
||||
|
||||
@ -19,6 +19,7 @@ from frigate.const import (
|
||||
YAML_EXT,
|
||||
)
|
||||
from frigate.detectors.detector_config import BaseDetectorConfig
|
||||
from frigate.plus import PlusApi
|
||||
from frigate.util import (
|
||||
create_mask,
|
||||
deep_merge,
|
||||
@ -269,6 +270,9 @@ class DetectConfig(FrigateBaseModel):
|
||||
default_factory=StationaryConfig,
|
||||
title="Stationary objects config.",
|
||||
)
|
||||
annotation_offset: int = Field(
|
||||
default=0, title="Milliseconds to offset detect annotations by."
|
||||
)
|
||||
|
||||
|
||||
class FilterConfig(FrigateBaseModel):
|
||||
@ -396,6 +400,7 @@ class BirdseyeConfig(FrigateBaseModel):
|
||||
# uses BaseModel because some global attributes are not available at the camera level
|
||||
class BirdseyeCameraConfig(BaseModel):
|
||||
enabled: bool = Field(default=True, title="Enable birdseye view for camera.")
|
||||
order: int = Field(default=0, title="Position of the camera in the birdseye view.")
|
||||
mode: BirdseyeModeEnum = Field(
|
||||
default=BirdseyeModeEnum.objects, title="Tracking mode for camera."
|
||||
)
|
||||
@ -902,8 +907,7 @@ class FrigateConfig(FrigateBaseModel):
|
||||
title="Global timestamp style configuration.",
|
||||
)
|
||||
|
||||
@property
|
||||
def runtime_config(self) -> FrigateConfig:
|
||||
def runtime_config(self, plus_api: PlusApi = None) -> FrigateConfig:
|
||||
"""Merge camera config with globals."""
|
||||
config = self.copy(deep=True)
|
||||
|
||||
@ -1027,6 +1031,7 @@ class FrigateConfig(FrigateBaseModel):
|
||||
enabled_labels.update(camera.objects.track)
|
||||
|
||||
config.model.create_colormap(sorted(enabled_labels))
|
||||
config.model.check_and_load_plus_model(plus_api)
|
||||
|
||||
for key, detector in config.detectors.items():
|
||||
detector_config: DetectorConfig = parse_obj_as(DetectorConfig, detector)
|
||||
@ -1059,6 +1064,9 @@ class FrigateConfig(FrigateBaseModel):
|
||||
merged_model["path"] = "/edgetpu_model.tflite"
|
||||
|
||||
detector_config.model = ModelConfig.parse_obj(merged_model)
|
||||
detector_config.model.check_and_load_plus_model(
|
||||
plus_api, detector_config.type
|
||||
)
|
||||
detector_config.model.compute_model_hash()
|
||||
config.detectors[key] = detector_config
|
||||
|
||||
|
||||
@ -1,5 +1,6 @@
|
||||
CONFIG_DIR = "/config"
|
||||
DEFAULT_DB_PATH = f"{CONFIG_DIR}/frigate.db"
|
||||
MODEL_CACHE_DIR = f"{CONFIG_DIR}/model_cache"
|
||||
BASE_DIR = "/media/frigate"
|
||||
CLIPS_DIR = f"{BASE_DIR}/clips"
|
||||
RECORD_DIR = f"{BASE_DIR}/recordings"
|
||||
|
||||
@ -1,11 +1,16 @@
|
||||
import hashlib
|
||||
import json
|
||||
import logging
|
||||
from enum import Enum
|
||||
import os
|
||||
from typing import Dict, List, Optional, Tuple, Union, Literal
|
||||
|
||||
|
||||
import requests
|
||||
import matplotlib.pyplot as plt
|
||||
from pydantic import BaseModel, Extra, Field, validator
|
||||
from pydantic.fields import PrivateAttr
|
||||
from frigate.plus import PlusApi
|
||||
|
||||
from frigate.util import load_labels
|
||||
|
||||
@ -73,6 +78,45 @@ class ModelConfig(BaseModel):
|
||||
}
|
||||
self._colormap = {}
|
||||
|
||||
def check_and_load_plus_model(
|
||||
self, plus_api: PlusApi, detector: str = None
|
||||
) -> None:
|
||||
if not self.path or not self.path.startswith("plus://"):
|
||||
return
|
||||
|
||||
model_id = self.path[7:]
|
||||
self.path = f"/config/model_cache/{model_id}"
|
||||
model_info_path = f"{self.path}.json"
|
||||
|
||||
# download the model if it doesn't exist
|
||||
if not os.path.isfile(self.path):
|
||||
download_url = plus_api.get_model_download_url(model_id)
|
||||
r = requests.get(download_url)
|
||||
with open(self.path, "wb") as f:
|
||||
f.write(r.content)
|
||||
|
||||
# download the model info if it doesn't exist
|
||||
if not os.path.isfile(model_info_path):
|
||||
model_info = plus_api.get_model_info(model_id)
|
||||
with open(model_info_path, "w") as f:
|
||||
json.dump(model_info, f)
|
||||
else:
|
||||
with open(model_info_path, "r") as f:
|
||||
model_info = json.load(f)
|
||||
|
||||
if detector and detector not in model_info["supportedDetectors"]:
|
||||
raise ValueError(f"Model does not support detector type of {detector}")
|
||||
|
||||
self.width = model_info["width"]
|
||||
self.height = model_info["height"]
|
||||
self.input_tensor = model_info["inputShape"]
|
||||
self.input_pixel_format = model_info["pixelFormat"]
|
||||
self.model_type = model_info["type"]
|
||||
self._merged_labelmap = {
|
||||
**{int(key): val for key, val in model_info["labelMap"].items()},
|
||||
**self.labelmap,
|
||||
}
|
||||
|
||||
def compute_model_hash(self) -> None:
|
||||
with open(self.path, "rb") as f:
|
||||
file_hash = hashlib.md5()
|
||||
|
||||
78
frigate/detectors/plugins/deepstack.py
Normal file
78
frigate/detectors/plugins/deepstack.py
Normal file
@ -0,0 +1,78 @@
|
||||
import logging
|
||||
import numpy as np
|
||||
import requests
|
||||
import io
|
||||
|
||||
from frigate.detectors.detection_api import DetectionApi
|
||||
from frigate.detectors.detector_config import BaseDetectorConfig
|
||||
from typing import Literal
|
||||
from pydantic import Extra, Field
|
||||
from PIL import Image
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
DETECTOR_KEY = "deepstack"
|
||||
|
||||
|
||||
class DeepstackDetectorConfig(BaseDetectorConfig):
|
||||
type: Literal[DETECTOR_KEY]
|
||||
api_url: str = Field(
|
||||
default="http://localhost:80/v1/vision/detection", title="DeepStack API URL"
|
||||
)
|
||||
api_timeout: float = Field(default=0.1, title="DeepStack API timeout (in seconds)")
|
||||
api_key: str = Field(default="", title="DeepStack API key (if required)")
|
||||
|
||||
|
||||
class DeepStack(DetectionApi):
|
||||
type_key = DETECTOR_KEY
|
||||
|
||||
def __init__(self, detector_config: DeepstackDetectorConfig):
|
||||
self.api_url = detector_config.api_url
|
||||
self.api_timeout = detector_config.api_timeout
|
||||
self.api_key = detector_config.api_key
|
||||
self.labels = detector_config.model.merged_labelmap
|
||||
|
||||
self.h = detector_config.model.height
|
||||
self.w = detector_config.model.width
|
||||
|
||||
def get_label_index(self, label_value):
|
||||
if label_value.lower() == "truck":
|
||||
label_value = "car"
|
||||
for index, value in self.labels.items():
|
||||
if value == label_value.lower():
|
||||
return index
|
||||
return -1
|
||||
|
||||
def detect_raw(self, tensor_input):
|
||||
image_data = np.squeeze(tensor_input).astype(np.uint8)
|
||||
image = Image.fromarray(image_data)
|
||||
with io.BytesIO() as output:
|
||||
image.save(output, format="JPEG")
|
||||
image_bytes = output.getvalue()
|
||||
data = {"api_key": self.api_key}
|
||||
response = requests.post(
|
||||
self.api_url, files={"image": image_bytes}, timeout=self.api_timeout
|
||||
)
|
||||
response_json = response.json()
|
||||
detections = np.zeros((20, 6), np.float32)
|
||||
|
||||
for i, detection in enumerate(response_json["predictions"]):
|
||||
logger.debug(f"Response: {detection}")
|
||||
if detection["confidence"] < 0.4:
|
||||
logger.debug(f"Break due to confidence < 0.4")
|
||||
break
|
||||
label = self.get_label_index(detection["label"])
|
||||
if label < 0:
|
||||
logger.debug(f"Break due to unknown label")
|
||||
break
|
||||
detections[i] = [
|
||||
label,
|
||||
float(detection["confidence"]),
|
||||
detection["y_min"] / self.h,
|
||||
detection["x_min"] / self.w,
|
||||
detection["y_max"] / self.h,
|
||||
detection["x_max"] / self.w,
|
||||
]
|
||||
|
||||
return detections
|
||||
@ -3,6 +3,8 @@ import logging
|
||||
import os
|
||||
import queue
|
||||
import threading
|
||||
|
||||
from enum import Enum
|
||||
from pathlib import Path
|
||||
|
||||
from peewee import fn
|
||||
@ -10,7 +12,6 @@ from peewee import fn
|
||||
from frigate.config import EventsConfig, FrigateConfig
|
||||
from frigate.const import CLIPS_DIR
|
||||
from frigate.models import Event
|
||||
from frigate.timeline import TimelineSourceEnum
|
||||
from frigate.types import CameraMetricsTypes
|
||||
from frigate.util import to_relative_box
|
||||
|
||||
@ -21,6 +22,12 @@ from typing import Dict
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class EventTypeEnum(str, Enum):
|
||||
# api = "api"
|
||||
# audio = "audio"
|
||||
tracked_object = "tracked_object"
|
||||
|
||||
|
||||
def should_update_db(prev_event: Event, current_event: Event) -> bool:
|
||||
"""If current_event has updated fields and (clip or snapshot)."""
|
||||
if current_event["has_clip"] or current_event["has_snapshot"]:
|
||||
@ -66,7 +73,9 @@ class EventProcessor(threading.Thread):
|
||||
|
||||
while not self.stop_event.is_set():
|
||||
try:
|
||||
event_type, camera, event_data = self.event_queue.get(timeout=1)
|
||||
source_type, event_type, camera, event_data = self.event_queue.get(
|
||||
timeout=1
|
||||
)
|
||||
except queue.Empty:
|
||||
continue
|
||||
|
||||
@ -75,100 +84,19 @@ class EventProcessor(threading.Thread):
|
||||
self.timeline_queue.put(
|
||||
(
|
||||
camera,
|
||||
TimelineSourceEnum.tracked_object,
|
||||
source_type,
|
||||
event_type,
|
||||
self.events_in_process.get(event_data["id"]),
|
||||
event_data,
|
||||
)
|
||||
)
|
||||
|
||||
# if this is the first message, just store it and continue, its not time to insert it in the db
|
||||
if event_type == "start":
|
||||
self.events_in_process[event_data["id"]] = event_data
|
||||
continue
|
||||
if source_type == EventTypeEnum.tracked_object:
|
||||
if event_type == "start":
|
||||
self.events_in_process[event_data["id"]] = event_data
|
||||
continue
|
||||
|
||||
if should_update_db(self.events_in_process[event_data["id"]], event_data):
|
||||
camera_config = self.config.cameras[camera]
|
||||
event_config: EventsConfig = camera_config.record.events
|
||||
width = camera_config.detect.width
|
||||
height = camera_config.detect.height
|
||||
first_detector = list(self.config.detectors.values())[0]
|
||||
|
||||
start_time = event_data["start_time"] - event_config.pre_capture
|
||||
end_time = (
|
||||
None
|
||||
if event_data["end_time"] is None
|
||||
else event_data["end_time"] + event_config.post_capture
|
||||
)
|
||||
# score of the snapshot
|
||||
score = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else event_data["snapshot"]["score"]
|
||||
)
|
||||
# detection region in the snapshot
|
||||
region = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else to_relative_box(
|
||||
width,
|
||||
height,
|
||||
event_data["snapshot"]["region"],
|
||||
)
|
||||
)
|
||||
# bounding box for the snapshot
|
||||
box = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else to_relative_box(
|
||||
width,
|
||||
height,
|
||||
event_data["snapshot"]["box"],
|
||||
)
|
||||
)
|
||||
|
||||
# keep these from being set back to false because the event
|
||||
# may have started while recordings and snapshots were enabled
|
||||
# this would be an issue for long running events
|
||||
if self.events_in_process[event_data["id"]]["has_clip"]:
|
||||
event_data["has_clip"] = True
|
||||
if self.events_in_process[event_data["id"]]["has_snapshot"]:
|
||||
event_data["has_snapshot"] = True
|
||||
|
||||
event = {
|
||||
Event.id: event_data["id"],
|
||||
Event.label: event_data["label"],
|
||||
Event.camera: camera,
|
||||
Event.start_time: start_time,
|
||||
Event.end_time: end_time,
|
||||
Event.top_score: event_data["top_score"],
|
||||
Event.score: score,
|
||||
Event.zones: list(event_data["entered_zones"]),
|
||||
Event.thumbnail: event_data["thumbnail"],
|
||||
Event.region: region,
|
||||
Event.box: box,
|
||||
Event.has_clip: event_data["has_clip"],
|
||||
Event.has_snapshot: event_data["has_snapshot"],
|
||||
Event.model_hash: first_detector.model.model_hash,
|
||||
Event.model_type: first_detector.model.model_type,
|
||||
Event.detector_type: first_detector.type,
|
||||
}
|
||||
|
||||
(
|
||||
Event.insert(event)
|
||||
.on_conflict(
|
||||
conflict_target=[Event.id],
|
||||
update=event,
|
||||
)
|
||||
.execute()
|
||||
)
|
||||
|
||||
# update the stored copy for comparison on future update messages
|
||||
self.events_in_process[event_data["id"]] = event_data
|
||||
|
||||
if event_type == "end":
|
||||
del self.events_in_process[event_data["id"]]
|
||||
self.event_processed_queue.put((event_data["id"], camera))
|
||||
self.handle_object_detection(event_type, camera, event_data)
|
||||
|
||||
# set an end_time on events without an end_time before exiting
|
||||
Event.update(end_time=datetime.datetime.now().timestamp()).where(
|
||||
@ -176,6 +104,99 @@ class EventProcessor(threading.Thread):
|
||||
).execute()
|
||||
logger.info(f"Exiting event processor...")
|
||||
|
||||
def handle_object_detection(
|
||||
self,
|
||||
event_type: str,
|
||||
camera: str,
|
||||
event_data: Event,
|
||||
) -> None:
|
||||
"""handle tracked object event updates."""
|
||||
# if this is the first message, just store it and continue, its not time to insert it in the db
|
||||
if should_update_db(self.events_in_process[event_data["id"]], event_data):
|
||||
camera_config = self.config.cameras[camera]
|
||||
event_config: EventsConfig = camera_config.record.events
|
||||
width = camera_config.detect.width
|
||||
height = camera_config.detect.height
|
||||
first_detector = list(self.config.detectors.values())[0]
|
||||
|
||||
start_time = event_data["start_time"] - event_config.pre_capture
|
||||
end_time = (
|
||||
None
|
||||
if event_data["end_time"] is None
|
||||
else event_data["end_time"] + event_config.post_capture
|
||||
)
|
||||
# score of the snapshot
|
||||
score = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else event_data["snapshot"]["score"]
|
||||
)
|
||||
# detection region in the snapshot
|
||||
region = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else to_relative_box(
|
||||
width,
|
||||
height,
|
||||
event_data["snapshot"]["region"],
|
||||
)
|
||||
)
|
||||
# bounding box for the snapshot
|
||||
box = (
|
||||
None
|
||||
if event_data["snapshot"] is None
|
||||
else to_relative_box(
|
||||
width,
|
||||
height,
|
||||
event_data["snapshot"]["box"],
|
||||
)
|
||||
)
|
||||
|
||||
# keep these from being set back to false because the event
|
||||
# may have started while recordings and snapshots were enabled
|
||||
# this would be an issue for long running events
|
||||
if self.events_in_process[event_data["id"]]["has_clip"]:
|
||||
event_data["has_clip"] = True
|
||||
if self.events_in_process[event_data["id"]]["has_snapshot"]:
|
||||
event_data["has_snapshot"] = True
|
||||
|
||||
event = {
|
||||
Event.id: event_data["id"],
|
||||
Event.label: event_data["label"],
|
||||
Event.camera: camera,
|
||||
Event.start_time: start_time,
|
||||
Event.end_time: end_time,
|
||||
Event.zones: list(event_data["entered_zones"]),
|
||||
Event.thumbnail: event_data["thumbnail"],
|
||||
Event.has_clip: event_data["has_clip"],
|
||||
Event.has_snapshot: event_data["has_snapshot"],
|
||||
Event.model_hash: first_detector.model.model_hash,
|
||||
Event.model_type: first_detector.model.model_type,
|
||||
Event.detector_type: first_detector.type,
|
||||
Event.data: {
|
||||
"box": box,
|
||||
"region": region,
|
||||
"score": score,
|
||||
"top_score": event_data["top_score"],
|
||||
},
|
||||
}
|
||||
|
||||
(
|
||||
Event.insert(event)
|
||||
.on_conflict(
|
||||
conflict_target=[Event.id],
|
||||
update=event,
|
||||
)
|
||||
.execute()
|
||||
)
|
||||
|
||||
# update the stored copy for comparison on future update messages
|
||||
self.events_in_process[event_data["id"]] = event_data
|
||||
|
||||
if event_type == "end":
|
||||
del self.events_in_process[event_data["id"]]
|
||||
self.event_processed_queue.put((event_data["id"], camera))
|
||||
|
||||
|
||||
class EventCleanup(threading.Thread):
|
||||
def __init__(self, config: FrigateConfig, stop_event: MpEvent):
|
||||
|
||||
@ -48,7 +48,6 @@ from frigate.util import (
|
||||
restart_frigate,
|
||||
vainfo_hwaccel,
|
||||
get_tz_modifiers,
|
||||
to_relative_box,
|
||||
)
|
||||
from frigate.storage import StorageMaintainer
|
||||
from frigate.version import VERSION
|
||||
@ -204,7 +203,7 @@ def send_to_plus(id):
|
||||
return make_response(jsonify({"success": False, "message": message}), 404)
|
||||
|
||||
# events from before the conversion to relative dimensions cant include annotations
|
||||
if any(d > 1 for d in event.box):
|
||||
if any(d > 1 for d in event.data["box"]):
|
||||
include_annotation = None
|
||||
|
||||
if event.end_time is None:
|
||||
@ -260,8 +259,8 @@ def send_to_plus(id):
|
||||
event.save()
|
||||
|
||||
if not include_annotation is None:
|
||||
region = event.region
|
||||
box = event.box
|
||||
region = event.data["region"]
|
||||
box = event.data["box"]
|
||||
|
||||
try:
|
||||
current_app.plus_api.add_annotation(
|
||||
@ -302,7 +301,7 @@ def false_positive(id):
|
||||
return make_response(jsonify({"success": False, "message": message}), 404)
|
||||
|
||||
# events from before the conversion to relative dimensions cant include annotations
|
||||
if any(d > 1 for d in event.box):
|
||||
if any(d > 1 for d in event.data["box"]):
|
||||
message = f"Events prior to 0.13 cannot be submitted as false positives"
|
||||
logger.error(message)
|
||||
return make_response(jsonify({"success": False, "message": message}), 400)
|
||||
@ -319,11 +318,15 @@ def false_positive(id):
|
||||
# need to refetch the event now that it has a plus_id
|
||||
event = Event.get(Event.id == id)
|
||||
|
||||
region = event.region
|
||||
box = event.box
|
||||
region = event.data["region"]
|
||||
box = event.data["box"]
|
||||
|
||||
# provide top score if score is unavailable
|
||||
score = event.top_score if event.score is None else event.score
|
||||
score = (
|
||||
(event.data["top_score"] if event.data["top_score"] else event.top_score)
|
||||
if event.data["score"] is None
|
||||
else event.data["score"]
|
||||
)
|
||||
|
||||
try:
|
||||
current_app.plus_api.add_false_positive(
|
||||
@ -380,13 +383,13 @@ def set_sub_label(id):
|
||||
else:
|
||||
new_sub_label = None
|
||||
|
||||
if new_sub_label and len(new_sub_label) > 20:
|
||||
if new_sub_label and len(new_sub_label) > 100:
|
||||
return make_response(
|
||||
jsonify(
|
||||
{
|
||||
"success": False,
|
||||
"message": new_sub_label
|
||||
+ " exceeds the 20 character limit for sub_label",
|
||||
+ " exceeds the 100 character limit for sub_label",
|
||||
}
|
||||
),
|
||||
400,
|
||||
@ -764,6 +767,7 @@ def events():
|
||||
Event.top_score,
|
||||
Event.false_positive,
|
||||
Event.box,
|
||||
Event.data,
|
||||
]
|
||||
|
||||
if camera != "all":
|
||||
@ -870,6 +874,11 @@ def config():
|
||||
|
||||
config["plus"] = {"enabled": current_app.plus_api.is_active()}
|
||||
|
||||
for detector, detector_config in config["detectors"].items():
|
||||
detector_config["model"][
|
||||
"labelmap"
|
||||
] = current_app.frigate_config.model.merged_labelmap
|
||||
|
||||
return jsonify(config)
|
||||
|
||||
|
||||
|
||||
@ -14,26 +14,37 @@ from playhouse.sqlite_ext import JSONField
|
||||
class Event(Model): # type: ignore[misc]
|
||||
id = CharField(null=False, primary_key=True, max_length=30)
|
||||
label = CharField(index=True, max_length=20)
|
||||
sub_label = CharField(max_length=20, null=True)
|
||||
sub_label = CharField(max_length=100, null=True)
|
||||
camera = CharField(index=True, max_length=20)
|
||||
start_time = DateTimeField()
|
||||
end_time = DateTimeField()
|
||||
top_score = FloatField()
|
||||
score = FloatField()
|
||||
top_score = (
|
||||
FloatField()
|
||||
) # TODO remove when columns can be dropped without rebuilding table
|
||||
score = (
|
||||
FloatField()
|
||||
) # TODO remove when columns can be dropped without rebuilding table
|
||||
false_positive = BooleanField()
|
||||
zones = JSONField()
|
||||
thumbnail = TextField()
|
||||
has_clip = BooleanField(default=True)
|
||||
has_snapshot = BooleanField(default=True)
|
||||
region = JSONField()
|
||||
box = JSONField()
|
||||
area = IntegerField()
|
||||
region = (
|
||||
JSONField()
|
||||
) # TODO remove when columns can be dropped without rebuilding table
|
||||
box = (
|
||||
JSONField()
|
||||
) # TODO remove when columns can be dropped without rebuilding table
|
||||
area = (
|
||||
IntegerField()
|
||||
) # TODO remove when columns can be dropped without rebuilding table
|
||||
retain_indefinitely = BooleanField(default=False)
|
||||
ratio = FloatField(default=1.0)
|
||||
plus_id = CharField(max_length=30)
|
||||
model_hash = CharField(max_length=32)
|
||||
detector_type = CharField(max_length=32)
|
||||
model_type = CharField(max_length=32)
|
||||
data = JSONField() # ex: tracked object box, region, etc.
|
||||
|
||||
|
||||
class Timeline(Model): # type: ignore[misc]
|
||||
|
||||
@ -46,6 +46,7 @@ def stats_init(
|
||||
config: FrigateConfig,
|
||||
camera_metrics: dict[str, CameraMetricsTypes],
|
||||
detectors: dict[str, ObjectDetectProcess],
|
||||
processes: dict[str, int],
|
||||
) -> StatsTrackingTypes:
|
||||
stats_tracking: StatsTrackingTypes = {
|
||||
"camera_metrics": camera_metrics,
|
||||
@ -53,6 +54,7 @@ def stats_init(
|
||||
"started": int(time.time()),
|
||||
"latest_frigate_version": get_latest_version(config),
|
||||
"last_updated": int(time.time()),
|
||||
"processes": processes,
|
||||
}
|
||||
return stats_tracking
|
||||
|
||||
@ -151,9 +153,12 @@ async def set_gpu_stats(
|
||||
nvidia_usage = get_nvidia_gpu_stats()
|
||||
|
||||
if nvidia_usage:
|
||||
name = nvidia_usage["name"]
|
||||
del nvidia_usage["name"]
|
||||
stats[name] = nvidia_usage
|
||||
for i in range(len(nvidia_usage)):
|
||||
stats[nvidia_usage[i]["name"]] = {
|
||||
"gpu": str(round(float(nvidia_usage[i]["gpu"]), 2)) + "%",
|
||||
"mem": str(round(float(nvidia_usage[i]["mem"]), 2)) + "%",
|
||||
}
|
||||
|
||||
else:
|
||||
stats["nvidia-gpu"] = {"gpu": -1, "mem": -1}
|
||||
hwaccel_errors.append(args)
|
||||
@ -260,6 +265,12 @@ def stats_snapshot(
|
||||
"mount_type": get_fs_type(path),
|
||||
}
|
||||
|
||||
stats["processes"] = {}
|
||||
for name, pid in stats_tracking["processes"].items():
|
||||
stats["processes"][name] = {
|
||||
"pid": pid,
|
||||
}
|
||||
|
||||
return stats
|
||||
|
||||
|
||||
|
||||
@ -21,6 +21,7 @@ from frigate.config import (
|
||||
FrigateConfig,
|
||||
)
|
||||
from frigate.const import CLIPS_DIR
|
||||
from frigate.events import EventTypeEnum
|
||||
from frigate.util import (
|
||||
SharedMemoryFrameManager,
|
||||
calculate_region,
|
||||
@ -656,7 +657,9 @@ class TrackedObjectProcessor(threading.Thread):
|
||||
self.last_motion_detected: dict[str, float] = {}
|
||||
|
||||
def start(camera, obj: TrackedObject, current_frame_time):
|
||||
self.event_queue.put(("start", camera, obj.to_dict()))
|
||||
self.event_queue.put(
|
||||
(EventTypeEnum.tracked_object, "start", camera, obj.to_dict())
|
||||
)
|
||||
|
||||
def update(camera, obj: TrackedObject, current_frame_time):
|
||||
obj.has_snapshot = self.should_save_snapshot(camera, obj)
|
||||
@ -670,7 +673,12 @@ class TrackedObjectProcessor(threading.Thread):
|
||||
self.dispatcher.publish("events", json.dumps(message), retain=False)
|
||||
obj.previous = after
|
||||
self.event_queue.put(
|
||||
("update", camera, obj.to_dict(include_thumbnail=True))
|
||||
(
|
||||
EventTypeEnum.tracked_object,
|
||||
"update",
|
||||
camera,
|
||||
obj.to_dict(include_thumbnail=True),
|
||||
)
|
||||
)
|
||||
|
||||
def end(camera, obj: TrackedObject, current_frame_time):
|
||||
@ -722,7 +730,14 @@ class TrackedObjectProcessor(threading.Thread):
|
||||
}
|
||||
self.dispatcher.publish("events", json.dumps(message), retain=False)
|
||||
|
||||
self.event_queue.put(("end", camera, obj.to_dict(include_thumbnail=True)))
|
||||
self.event_queue.put(
|
||||
(
|
||||
EventTypeEnum.tracked_object,
|
||||
"end",
|
||||
camera,
|
||||
obj.to_dict(include_thumbnail=True),
|
||||
)
|
||||
)
|
||||
|
||||
def snapshot(camera, obj: TrackedObject, current_frame_time):
|
||||
mqtt_config: MqttConfig = self.config.cameras[camera].mqtt
|
||||
|
||||
@ -4,6 +4,7 @@ import logging
|
||||
import math
|
||||
import multiprocessing as mp
|
||||
import os
|
||||
import operator
|
||||
import queue
|
||||
import signal
|
||||
import subprocess as sp
|
||||
@ -292,8 +293,16 @@ class BirdsEyeFrameManager:
|
||||
# calculate layout dimensions
|
||||
layout_dim = math.ceil(math.sqrt(len(active_cameras)))
|
||||
|
||||
# check if we need to reset the layout because there are new cameras to add
|
||||
reset_layout = (
|
||||
True if len(active_cameras.difference(self.active_cameras)) > 0 else False
|
||||
)
|
||||
|
||||
# reset the layout if it needs to be different
|
||||
if layout_dim != self.layout_dim:
|
||||
if layout_dim != self.layout_dim or reset_layout:
|
||||
if reset_layout:
|
||||
logger.debug(f"Added new cameras, resetting layout...")
|
||||
|
||||
logger.debug(f"Changing layout size from {self.layout_dim} to {layout_dim}")
|
||||
self.layout_dim = layout_dim
|
||||
|
||||
@ -327,6 +336,20 @@ class BirdsEyeFrameManager:
|
||||
|
||||
self.active_cameras = active_cameras
|
||||
|
||||
# this also converts added_cameras from a set to a list since we need
|
||||
# to pop elements in order
|
||||
added_cameras = sorted(
|
||||
added_cameras,
|
||||
# sort cameras by order and by name if the order is the same
|
||||
key=lambda added_camera: (
|
||||
self.config.cameras[added_camera].birdseye.order,
|
||||
added_camera,
|
||||
),
|
||||
# we're popping out elements from the end, so this needs to be reverse
|
||||
# as we want the last element to be the first
|
||||
reverse=True,
|
||||
)
|
||||
|
||||
# update each position in the layout
|
||||
for position, camera in enumerate(self.camera_layout, start=0):
|
||||
# if this camera was removed, replace it or clear it
|
||||
|
||||
@ -3,7 +3,7 @@ import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
from typing import List
|
||||
from typing import Any, Dict, List
|
||||
import requests
|
||||
from frigate.const import PLUS_ENV_VAR, PLUS_API_HOST
|
||||
from requests.models import Response
|
||||
@ -187,3 +187,24 @@ class PlusApi:
|
||||
|
||||
if not r.ok:
|
||||
raise Exception(r.text)
|
||||
|
||||
def get_model_download_url(
|
||||
self,
|
||||
model_id: str,
|
||||
) -> str:
|
||||
r = self._get(f"model/{model_id}/signed_url")
|
||||
|
||||
if not r.ok:
|
||||
raise Exception(r.text)
|
||||
|
||||
presigned_url = r.json()
|
||||
|
||||
return str(presigned_url.get("url"))
|
||||
|
||||
def get_model_info(self, model_id: str) -> Any:
|
||||
r = self._get(f"model/{model_id}")
|
||||
|
||||
if not r.ok:
|
||||
raise Exception(r.text)
|
||||
|
||||
return r.json()
|
||||
|
||||
@ -3,7 +3,7 @@
|
||||
import datetime
|
||||
import itertools
|
||||
import logging
|
||||
import subprocess as sp
|
||||
import os
|
||||
import threading
|
||||
from pathlib import Path
|
||||
|
||||
@ -12,7 +12,7 @@ from multiprocessing.synchronize import Event as MpEvent
|
||||
|
||||
from frigate.config import RetainModeEnum, FrigateConfig
|
||||
from frigate.const import RECORD_DIR, SECONDS_IN_DAY
|
||||
from frigate.models import Event, Recordings
|
||||
from frigate.models import Event, Recordings, Timeline
|
||||
from frigate.record.util import remove_empty_directories
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@ -140,6 +140,15 @@ class RecordingCleanup(threading.Thread):
|
||||
Path(recording.path).unlink(missing_ok=True)
|
||||
deleted_recordings.add(recording.id)
|
||||
|
||||
# delete timeline entries relevant to this recording segment
|
||||
Timeline.delete().where(
|
||||
Timeline.timestamp.between(
|
||||
recording.start_time, recording.end_time
|
||||
),
|
||||
Timeline.timestamp < expire_date,
|
||||
Timeline.camera == camera,
|
||||
).execute()
|
||||
|
||||
logger.debug(f"Expiring {len(deleted_recordings)} recordings")
|
||||
# delete up to 100,000 at a time
|
||||
max_deletes = 100000
|
||||
@ -183,12 +192,14 @@ class RecordingCleanup(threading.Thread):
|
||||
return
|
||||
|
||||
logger.debug(f"Oldest recording in the db: {oldest_timestamp}")
|
||||
process = sp.run(
|
||||
["find", RECORD_DIR, "-type", "f", "!", "-newermt", f"@{oldest_timestamp}"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
files_to_check = process.stdout.splitlines()
|
||||
|
||||
files_to_check = []
|
||||
|
||||
for root, _, files in os.walk(RECORD_DIR):
|
||||
for file in files:
|
||||
file_path = os.path.join(root, file)
|
||||
if os.path.getmtime(file_path) < oldest_timestamp:
|
||||
files_to_check.append(file_path)
|
||||
|
||||
for f in files_to_check:
|
||||
p = Path(f)
|
||||
@ -207,12 +218,10 @@ class RecordingCleanup(threading.Thread):
|
||||
recordings: Recordings = Recordings.select()
|
||||
|
||||
# get all recordings files on disk
|
||||
process = sp.run(
|
||||
["find", RECORD_DIR, "-type", "f"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
files_on_disk = process.stdout.splitlines()
|
||||
files_on_disk = []
|
||||
for root, _, files in os.walk(RECORD_DIR):
|
||||
for file in files:
|
||||
files_on_disk.append(os.path.join(root, file))
|
||||
|
||||
recordings_to_delete = []
|
||||
for recording in recordings.objects().iterator():
|
||||
|
||||
@ -1,3 +1,5 @@
|
||||
import json
|
||||
import os
|
||||
import unittest
|
||||
import numpy as np
|
||||
from pydantic import ValidationError
|
||||
@ -6,7 +8,9 @@ from frigate.config import (
|
||||
BirdseyeModeEnum,
|
||||
FrigateConfig,
|
||||
)
|
||||
from frigate.const import MODEL_CACHE_DIR
|
||||
from frigate.detectors import DetectorTypeEnum
|
||||
from frigate.plus import PlusApi
|
||||
from frigate.util import deep_merge, load_config_with_no_duplicates
|
||||
|
||||
|
||||
@ -30,11 +34,40 @@ class TestConfig(unittest.TestCase):
|
||||
},
|
||||
}
|
||||
|
||||
self.plus_model_info = {
|
||||
"id": "e63b7345cc83a84ed79dedfc99c16616",
|
||||
"name": "SSDLite Mobiledet",
|
||||
"description": "Fine tuned model",
|
||||
"trainDate": "2023-04-28T23:22:01.262Z",
|
||||
"type": "ssd",
|
||||
"supportedDetectors": ["edgetpu"],
|
||||
"width": 320,
|
||||
"height": 320,
|
||||
"inputShape": "nhwc",
|
||||
"pixelFormat": "rgb",
|
||||
"labelMap": {
|
||||
"0": "amazon",
|
||||
"1": "car",
|
||||
"2": "cat",
|
||||
"3": "deer",
|
||||
"4": "dog",
|
||||
"5": "face",
|
||||
"6": "fedex",
|
||||
"7": "license_plate",
|
||||
"8": "package",
|
||||
"9": "person",
|
||||
"10": "ups",
|
||||
},
|
||||
}
|
||||
|
||||
if not os.path.exists(MODEL_CACHE_DIR) and not os.path.islink(MODEL_CACHE_DIR):
|
||||
os.makedirs(MODEL_CACHE_DIR)
|
||||
|
||||
def test_config_class(self):
|
||||
frigate_config = FrigateConfig(**self.minimal)
|
||||
assert self.minimal == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "cpu" in runtime_config.detectors.keys()
|
||||
assert runtime_config.detectors["cpu"].type == DetectorTypeEnum.cpu
|
||||
assert runtime_config.detectors["cpu"].model.width == 320
|
||||
@ -59,7 +92,7 @@ class TestConfig(unittest.TestCase):
|
||||
}
|
||||
|
||||
frigate_config = FrigateConfig(**(deep_merge(config, self.minimal)))
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
|
||||
assert "cpu" in runtime_config.detectors.keys()
|
||||
assert "edgetpu" in runtime_config.detectors.keys()
|
||||
@ -125,7 +158,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "dog" in runtime_config.cameras["back"].objects.track
|
||||
|
||||
def test_override_birdseye(self):
|
||||
@ -151,7 +184,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert not runtime_config.cameras["back"].birdseye.enabled
|
||||
assert runtime_config.cameras["back"].birdseye.mode is BirdseyeModeEnum.motion
|
||||
|
||||
@ -177,7 +210,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].birdseye.enabled
|
||||
|
||||
def test_inherit_birdseye(self):
|
||||
@ -202,7 +235,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].birdseye.enabled
|
||||
assert (
|
||||
runtime_config.cameras["back"].birdseye.mode is BirdseyeModeEnum.continuous
|
||||
@ -231,7 +264,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "cat" in runtime_config.cameras["back"].objects.track
|
||||
|
||||
def test_default_object_filters(self):
|
||||
@ -256,7 +289,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "dog" in runtime_config.cameras["back"].objects.filters
|
||||
|
||||
def test_inherit_object_filters(self):
|
||||
@ -284,7 +317,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "dog" in runtime_config.cameras["back"].objects.filters
|
||||
assert runtime_config.cameras["back"].objects.filters["dog"].threshold == 0.7
|
||||
|
||||
@ -313,7 +346,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "dog" in runtime_config.cameras["back"].objects.filters
|
||||
assert runtime_config.cameras["back"].objects.filters["dog"].threshold == 0.7
|
||||
|
||||
@ -343,7 +376,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
back_camera = runtime_config.cameras["back"]
|
||||
assert "dog" in back_camera.objects.filters
|
||||
assert len(back_camera.objects.filters["dog"].raw_mask) == 2
|
||||
@ -374,7 +407,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "-rtsp_transport" in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
|
||||
def test_ffmpeg_params_global(self):
|
||||
@ -403,7 +436,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "-re" in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
|
||||
def test_ffmpeg_params_camera(self):
|
||||
@ -433,7 +466,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "-re" in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
assert "test" not in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
|
||||
@ -468,7 +501,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "-re" in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
assert "test" in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
assert "test2" not in runtime_config.cameras["back"].ffmpeg_cmds[0]["cmd"]
|
||||
@ -498,7 +531,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert (
|
||||
runtime_config.cameras["back"].record.events.retain.objects["person"] == 30
|
||||
)
|
||||
@ -576,7 +609,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert isinstance(
|
||||
runtime_config.cameras["back"].zones["test"].contour, np.ndarray
|
||||
)
|
||||
@ -608,7 +641,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
back_camera = runtime_config.cameras["back"]
|
||||
assert back_camera.record.events.objects is None
|
||||
assert back_camera.record.events.retain.objects["person"] == 30
|
||||
@ -639,7 +672,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
ffmpeg_cmds = runtime_config.cameras["back"].ffmpeg_cmds
|
||||
assert len(ffmpeg_cmds) == 1
|
||||
assert not "clips" in ffmpeg_cmds[0]["roles"]
|
||||
@ -670,7 +703,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].detect.max_disappeared == 5 * 5
|
||||
|
||||
def test_motion_frame_height_wont_go_below_120(self):
|
||||
@ -698,7 +731,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].motion.frame_height == 50
|
||||
|
||||
def test_motion_contour_area_dynamic(self):
|
||||
@ -726,7 +759,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert round(runtime_config.cameras["back"].motion.contour_area) == 30
|
||||
|
||||
def test_merge_labelmap(self):
|
||||
@ -755,7 +788,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.model.merged_labelmap[7] == "truck"
|
||||
|
||||
def test_default_labelmap_empty(self):
|
||||
@ -783,7 +816,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.model.merged_labelmap[0] == "person"
|
||||
|
||||
def test_default_labelmap(self):
|
||||
@ -812,9 +845,43 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.model.merged_labelmap[0] == "person"
|
||||
|
||||
def test_plus_labelmap(self):
|
||||
with open("/config/model_cache/test", "w") as f:
|
||||
json.dump(self.plus_model_info, f)
|
||||
with open("/config/model_cache/test.json", "w") as f:
|
||||
json.dump(self.plus_model_info, f)
|
||||
|
||||
config = {
|
||||
"mqtt": {"host": "mqtt"},
|
||||
"model": {"path": "plus://test"},
|
||||
"cameras": {
|
||||
"back": {
|
||||
"ffmpeg": {
|
||||
"inputs": [
|
||||
{
|
||||
"path": "rtsp://10.0.0.1:554/video",
|
||||
"roles": ["detect"],
|
||||
},
|
||||
]
|
||||
},
|
||||
"detect": {
|
||||
"height": 1080,
|
||||
"width": 1920,
|
||||
"fps": 5,
|
||||
},
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config(PlusApi())
|
||||
assert runtime_config.model.merged_labelmap[0] == "amazon"
|
||||
|
||||
def test_fails_on_invalid_role(self):
|
||||
config = {
|
||||
"mqtt": {"host": "mqtt"},
|
||||
@ -871,7 +938,7 @@ class TestConfig(unittest.TestCase):
|
||||
}
|
||||
|
||||
frigate_config = FrigateConfig(**config)
|
||||
self.assertRaises(ValueError, lambda: frigate_config.runtime_config)
|
||||
self.assertRaises(ValueError, lambda: frigate_config.runtime_config())
|
||||
|
||||
def test_works_on_missing_role_multiple_cams(self):
|
||||
config = {
|
||||
@ -919,7 +986,7 @@ class TestConfig(unittest.TestCase):
|
||||
}
|
||||
|
||||
frigate_config = FrigateConfig(**config)
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
|
||||
def test_global_detect(self):
|
||||
config = {
|
||||
@ -946,7 +1013,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].detect.max_disappeared == 1
|
||||
assert runtime_config.cameras["back"].detect.height == 1080
|
||||
|
||||
@ -969,7 +1036,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].detect.max_disappeared == 25
|
||||
assert runtime_config.cameras["back"].detect.height == 720
|
||||
|
||||
@ -998,7 +1065,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].detect.max_disappeared == 1
|
||||
assert runtime_config.cameras["back"].detect.height == 1080
|
||||
assert runtime_config.cameras["back"].detect.width == 1920
|
||||
@ -1026,7 +1093,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].snapshots.enabled
|
||||
assert runtime_config.cameras["back"].snapshots.height == 100
|
||||
|
||||
@ -1049,7 +1116,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].snapshots.bounding_box
|
||||
assert runtime_config.cameras["back"].snapshots.quality == 70
|
||||
|
||||
@ -1077,7 +1144,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].snapshots.bounding_box == False
|
||||
assert runtime_config.cameras["back"].snapshots.height == 150
|
||||
assert runtime_config.cameras["back"].snapshots.enabled
|
||||
@ -1101,7 +1168,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert not runtime_config.cameras["back"].rtmp.enabled
|
||||
|
||||
def test_default_not_rtmp(self):
|
||||
@ -1123,7 +1190,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert not runtime_config.cameras["back"].rtmp.enabled
|
||||
|
||||
def test_global_rtmp_merge(self):
|
||||
@ -1149,7 +1216,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].rtmp.enabled
|
||||
|
||||
def test_global_rtmp_default(self):
|
||||
@ -1175,7 +1242,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert not runtime_config.cameras["back"].rtmp.enabled
|
||||
|
||||
def test_global_jsmpeg(self):
|
||||
@ -1198,7 +1265,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].live.quality == 4
|
||||
|
||||
def test_default_live(self):
|
||||
@ -1220,7 +1287,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].live.quality == 8
|
||||
|
||||
def test_global_live_merge(self):
|
||||
@ -1246,7 +1313,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].live.quality == 7
|
||||
assert runtime_config.cameras["back"].live.height == 480
|
||||
|
||||
@ -1270,7 +1337,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].timestamp_style.position == "bl"
|
||||
|
||||
def test_default_timestamp_style(self):
|
||||
@ -1292,7 +1359,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].timestamp_style.position == "tl"
|
||||
|
||||
def test_global_timestamp_style_merge(self):
|
||||
@ -1317,7 +1384,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].timestamp_style.position == "bl"
|
||||
assert runtime_config.cameras["back"].timestamp_style.thickness == 4
|
||||
|
||||
@ -1341,7 +1408,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert runtime_config.cameras["back"].snapshots.retain.default == 1.5
|
||||
|
||||
def test_fails_on_bad_camera_name(self):
|
||||
@ -1365,7 +1432,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
|
||||
self.assertRaises(
|
||||
ValidationError, lambda: frigate_config.runtime_config.cameras
|
||||
ValidationError, lambda: frigate_config.runtime_config().cameras
|
||||
)
|
||||
|
||||
def test_fails_on_bad_segment_time(self):
|
||||
@ -1392,7 +1459,8 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
|
||||
self.assertRaises(
|
||||
ValueError, lambda: frigate_config.runtime_config.ffmpeg.output_args.record
|
||||
ValueError,
|
||||
lambda: frigate_config.runtime_config().ffmpeg.output_args.record,
|
||||
)
|
||||
|
||||
def test_fails_zone_defines_untracked_object(self):
|
||||
@ -1421,7 +1489,7 @@ class TestConfig(unittest.TestCase):
|
||||
|
||||
frigate_config = FrigateConfig(**config)
|
||||
|
||||
self.assertRaises(ValueError, lambda: frigate_config.runtime_config.cameras)
|
||||
self.assertRaises(ValueError, lambda: frigate_config.runtime_config().cameras)
|
||||
|
||||
def test_fails_duplicate_keys(self):
|
||||
raw_config = """
|
||||
@ -1465,7 +1533,7 @@ class TestConfig(unittest.TestCase):
|
||||
frigate_config = FrigateConfig(**config)
|
||||
assert config == frigate_config.dict(exclude_unset=True)
|
||||
|
||||
runtime_config = frigate_config.runtime_config
|
||||
runtime_config = frigate_config.runtime_config()
|
||||
assert "dog" in runtime_config.cameras["back"].objects.filters
|
||||
assert runtime_config.cameras["back"].objects.filters["dog"].min_ratio == 0.2
|
||||
assert runtime_config.cameras["back"].objects.filters["dog"].max_ratio == 10.1
|
||||
|
||||
@ -17,20 +17,20 @@ class TestGpuStats(unittest.TestCase):
process.stdout = self.amd_results
sp.return_value = process
amd_stats = get_amd_gpu_stats()
assert amd_stats == {"gpu": "4.17 %", "mem": "60.37 %"}
assert amd_stats == {"gpu": "4.17%", "mem": "60.37%"}

@patch("subprocess.run")
def test_nvidia_gpu_stats(self, sp):
process = MagicMock()
process.returncode = 0
process.stdout = self.nvidia_results
sp.return_value = process
nvidia_stats = get_nvidia_gpu_stats()
assert nvidia_stats == {
"name": "NVIDIA GeForce RTX 3050",
"gpu": "42 %",
"mem": "61.5 %",
}
# @patch("subprocess.run")
# def test_nvidia_gpu_stats(self, sp):
# process = MagicMock()
# process.returncode = 0
# process.stdout = self.nvidia_results
# sp.return_value = process
# nvidia_stats = get_nvidia_gpu_stats()
# assert nvidia_stats == {
# "name": "NVIDIA GeForce RTX 3050",
# "gpu": "42 %",
# "mem": "61.5 %",
# }

@patch("subprocess.run")
def test_intel_gpu_stats(self, sp):
@ -40,6 +40,6 @@ class TestGpuStats(unittest.TestCase):
sp.return_value = process
intel_stats = get_intel_gpu_stats()
assert intel_stats == {
"gpu": "1.34 %",
"mem": "- %",
"gpu": "1.34%",
"mem": "-%",
}

@ -292,7 +292,7 @@ class TestHttp(unittest.TestCase):

def test_config(self):
app = create_app(
FrigateConfig(**self.minimal_config).runtime_config,
FrigateConfig(**self.minimal_config).runtime_config(),
self.db,
None,
None,
@ -308,7 +308,7 @@ class TestHttp(unittest.TestCase):

def test_recordings(self):
app = create_app(
FrigateConfig(**self.minimal_config).runtime_config,
FrigateConfig(**self.minimal_config).runtime_config(),
self.db,
None,
None,
@ -327,7 +327,7 @@ class TestHttp(unittest.TestCase):
@patch("frigate.http.stats_snapshot")
def test_stats(self, mock_stats):
app = create_app(
FrigateConfig(**self.minimal_config).runtime_config,
FrigateConfig(**self.minimal_config).runtime_config(),
self.db,
None,
None,

@ -4,9 +4,8 @@ import logging
import threading
import queue

from enum import Enum

from frigate.config import FrigateConfig
from frigate.events import EventTypeEnum
from frigate.models import Timeline

from multiprocessing.queues import Queue
@ -17,12 +16,6 @@ from frigate.util import to_relative_box
logger = logging.getLogger(__name__)


class TimelineSourceEnum(str, Enum):
# api = "api"
# audio = "audio"
tracked_object = "tracked_object"


class TimelineProcessor(threading.Thread):
"""Handle timeline queue and update DB."""

@ -51,7 +44,7 @@ class TimelineProcessor(threading.Thread):
except queue.Empty:
continue

if input_type == TimelineSourceEnum.tracked_object:
if input_type == EventTypeEnum.tracked_object:
self.handle_object_detection(
camera, event_type, prev_event_data, event_data
)

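The only functional change here is that the queue consumer now keys off the shared EventTypeEnum instead of the removed module-local TimelineSourceEnum. A minimal sketch of that dispatch loop, assuming the queue items unpack to (camera, input_type, event_type, prev_event_data, event_data) as the handler call above suggests:

```python
import queue

from frigate.events import EventTypeEnum  # import added by this change


def drain_timeline_queue(timeline_queue, handle_object_detection) -> None:
    """Sketch of the consumer loop: pull items and dispatch tracked_object updates."""
    while True:
        try:
            # assumed tuple layout, mirroring the handler call in the diff above
            (
                camera,
                input_type,
                event_type,
                prev_event_data,
                event_data,
            ) = timeline_queue.get(timeout=1)
        except queue.Empty:
            continue

        # only tracked object updates are handled by this processor
        if input_type == EventTypeEnum.tracked_object:
            handle_object_detection(camera, event_type, prev_event_data, event_data)
```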
@ -34,3 +34,4 @@ class StatsTrackingTypes(TypedDict):
started: int
latest_frigate_version: str
last_updated: int
processes: dict[str, int]

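The stats tracker now also records a map of helper process names to their pids. A hedged sketch of what a populated structure could look like (the field values are made up for illustration):

```python
from typing import TypedDict


class StatsTrackingTypes(TypedDict):
    # fields shown in the hunk above; any earlier fields in the class are omitted here
    started: int
    latest_frigate_version: str
    last_updated: int
    processes: dict[str, int]  # newly added: process name -> pid


# hypothetical example: the pids let the stats endpoint and UI join against
# the per-pid output of get_cpu_stats()
stats_tracking: StatsTrackingTypes = {
    "started": 1683000000,
    "latest_frigate_version": "0.12.0",
    "last_updated": 1683000060,
    "processes": {"go2rtc": 87, "recording": 123},
}
```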
frigate/util.py (204 changed lines)
@ -9,12 +9,14 @@ import signal
import traceback
import urllib.parse
import yaml
import os

from abc import ABC, abstractmethod
from collections import Counter
from collections.abc import Mapping
from multiprocessing import shared_memory
from typing import Any, AnyStr, Optional, Tuple
import py3nvml.py3nvml as nvml

import cv2
import numpy as np
@ -740,55 +742,54 @@ def escape_special_characters(path: str) -> str:


def get_cgroups_version() -> str:
"""Determine what version of cgroups is enabled"""
"""Determine what version of cgroups is enabled."""

stat_command = ["stat", "-fc", "%T", "/sys/fs/cgroup"]
cgroup_path = "/sys/fs/cgroup"

p = sp.run(
stat_command,
encoding="ascii",
capture_output=True,
)
if not os.path.ismount(cgroup_path):
logger.debug(f"{cgroup_path} is not a mount point.")
return "unknown"

if p.returncode == 0:
value: str = p.stdout.strip().lower()
try:
with open("/proc/mounts", "r") as f:
mounts = f.readlines()

if value == "cgroup2fs":
return "cgroup2"
elif value == "tmpfs":
return "cgroup"
else:
logger.debug(
f"Could not determine cgroups version: unhandled filesystem {value}"
)
else:
logger.debug(f"Could not determine cgroups version: {p.stderr}")
for mount in mounts:
mount_info = mount.split()
if mount_info[1] == cgroup_path:
fs_type = mount_info[2]
if fs_type == "cgroup2fs" or fs_type == "cgroup2":
return "cgroup2"
elif fs_type == "tmpfs":
return "cgroup"
else:
logger.debug(
f"Could not determine cgroups version: unhandled filesystem {fs_type}"
)
break
except Exception as e:
logger.debug(f"Could not determine cgroups version: {e}")

return "unknown"


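Because the scrape interleaves the removed stat-based implementation with its replacement, the new code is easier to read reassembled. A sketch of the added /proc/mounts-based detection, rebuilt from the added lines above with indentation inferred (not a verbatim copy of the file):

```python
import logging
import os

logger = logging.getLogger(__name__)


def get_cgroups_version() -> str:
    """Determine what version of cgroups is enabled by inspecting the mount table."""
    cgroup_path = "/sys/fs/cgroup"

    if not os.path.ismount(cgroup_path):
        logger.debug(f"{cgroup_path} is not a mount point.")
        return "unknown"

    try:
        with open("/proc/mounts", "r") as f:
            mounts = f.readlines()

        for mount in mounts:
            mount_info = mount.split()
            if mount_info[1] == cgroup_path:
                fs_type = mount_info[2]
                if fs_type == "cgroup2fs" or fs_type == "cgroup2":
                    return "cgroup2"
                elif fs_type == "tmpfs":
                    return "cgroup"
                else:
                    logger.debug(
                        f"Could not determine cgroups version: unhandled filesystem {fs_type}"
                    )
                break
    except Exception as e:
        logger.debug(f"Could not determine cgroups version: {e}")

    return "unknown"
```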
def get_docker_memlimit_bytes() -> int:
"""Get mem limit in bytes set in docker if present. Returns -1 if no limit detected"""
"""Get mem limit in bytes set in docker if present. Returns -1 if no limit detected."""

# check running a supported cgroups version
if get_cgroups_version() == "cgroup2":
memlimit_command = ["cat", "/sys/fs/cgroup/memory.max"]
memlimit_path = "/sys/fs/cgroup/memory.max"

p = sp.run(
memlimit_command,
encoding="ascii",
capture_output=True,
)

if p.returncode == 0:
value: str = p.stdout.strip()
try:
with open(memlimit_path, "r") as f:
value = f.read().strip()

if value.isnumeric():
return int(value)
elif value.lower() == "max":
return -1
else:
logger.debug(f"Unable to get docker memlimit: {p.stderr}")
except Exception as e:
logger.debug(f"Unable to get docker memlimit: {e}")

return -1

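Reassembled the same way, the new memory-limit lookup simply reads the cgroup v2 file directly instead of shelling out to cat. A sketch built from the added lines above (indentation inferred; it relies on get_cgroups_version() and the module logger shown earlier):

```python
def get_docker_memlimit_bytes() -> int:
    """Get mem limit in bytes set in docker if present. Returns -1 if no limit detected."""

    # check running a supported cgroups version
    if get_cgroups_version() == "cgroup2":
        memlimit_path = "/sys/fs/cgroup/memory.max"

        try:
            with open(memlimit_path, "r") as f:
                value = f.read().strip()

            if value.isnumeric():
                return int(value)
            elif value.lower() == "max":
                # "max" means no limit was configured for the container
                return -1
        except Exception as e:
            logger.debug(f"Unable to get docker memlimit: {e}")

    return -1
```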
@ -796,42 +797,51 @@ def get_docker_memlimit_bytes() -> int:
def get_cpu_stats() -> dict[str, dict]:
"""Get cpu usages for each process id"""
usages = {}
# -n=2 runs to ensure extraneous values are not included
top_command = ["top", "-b", "-n", "2"]

docker_memlimit = get_docker_memlimit_bytes() / 1024
total_mem = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024

p = sp.run(
top_command,
encoding="ascii",
capture_output=True,
)
for process in psutil.process_iter(["pid", "name", "cpu_percent"]):
pid = process.info["pid"]
try:
cpu_percent = process.info["cpu_percent"]

if p.returncode != 0:
logger.error(p.stderr)
return usages
else:
lines = p.stdout.split("\n")
with open(f"/proc/{pid}/stat", "r") as f:
stats = f.readline().split()
utime = int(stats[13])
stime = int(stats[14])
starttime = int(stats[21])

for line in lines:
stats = list(filter(lambda a: a != "", line.strip().split(" ")))
try:
if docker_memlimit > 0:
mem_res = int(stats[5])
mem_pct = str(
round((float(mem_res) / float(docker_memlimit)) * 100, 1)
)
else:
mem_pct = stats[9]
with open("/proc/uptime") as f:
system_uptime_sec = int(float(f.read().split()[0]))

usages[stats[0]] = {
"cpu": stats[8],
"mem": mem_pct,
}
except:
continue
clk_tck = os.sysconf(os.sysconf_names["SC_CLK_TCK"])

return usages
process_utime_sec = utime // clk_tck
process_stime_sec = stime // clk_tck
process_starttime_sec = starttime // clk_tck

process_elapsed_sec = system_uptime_sec - process_starttime_sec
process_usage_sec = process_utime_sec + process_stime_sec
cpu_average_usage = process_usage_sec * 100 // process_elapsed_sec

with open(f"/proc/{pid}/statm", "r") as f:
mem_stats = f.readline().split()
mem_res = int(mem_stats[1]) * os.sysconf("SC_PAGE_SIZE") / 1024

if docker_memlimit > 0:
mem_pct = round((mem_res / docker_memlimit) * 100, 1)
else:
mem_pct = round((mem_res / total_mem) * 100, 1)

usages[pid] = {
"cpu": str(cpu_percent),
"cpu_average": str(round(cpu_average_usage, 2)),
"mem": f"{mem_pct}",
}
except:
continue

return usages


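The replacement drops the two-pass `top` parsing in favor of psutil plus procfs, and it also gains a lifetime-average CPU figure. A lightly condensed sketch of the added implementation (psutil is already pinned in requirements.txt; it reuses get_docker_memlimit_bytes() from above, and indentation is inferred):

```python
import os

import psutil


def get_cpu_stats() -> dict[str, dict]:
    """Get instantaneous and average cpu/memory usage for each process id."""
    usages = {}
    docker_memlimit = get_docker_memlimit_bytes() / 1024
    total_mem = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024

    for process in psutil.process_iter(["pid", "name", "cpu_percent"]):
        pid = process.info["pid"]
        try:
            cpu_percent = process.info["cpu_percent"]

            # utime/stime/starttime are clock-tick counters in /proc/<pid>/stat
            with open(f"/proc/{pid}/stat", "r") as f:
                stats = f.readline().split()
            utime, stime, starttime = int(stats[13]), int(stats[14]), int(stats[21])

            with open("/proc/uptime") as f:
                system_uptime_sec = int(float(f.read().split()[0]))

            clk_tck = os.sysconf(os.sysconf_names["SC_CLK_TCK"])
            process_elapsed_sec = system_uptime_sec - starttime // clk_tck
            process_usage_sec = utime // clk_tck + stime // clk_tck
            cpu_average_usage = process_usage_sec * 100 // process_elapsed_sec

            # resident memory (pages) from /proc/<pid>/statm, converted to KiB
            with open(f"/proc/{pid}/statm", "r") as f:
                mem_stats = f.readline().split()
            mem_res = int(mem_stats[1]) * os.sysconf("SC_PAGE_SIZE") / 1024

            if docker_memlimit > 0:
                mem_pct = round((mem_res / docker_memlimit) * 100, 1)
            else:
                mem_pct = round((mem_res / total_mem) * 100, 1)

            usages[pid] = {
                "cpu": str(cpu_percent),
                "cpu_average": str(round(cpu_average_usage, 2)),
                "mem": f"{mem_pct}",
            }
        except Exception:
            # processes can exit between iteration and the /proc reads
            continue

    return usages
```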
def get_amd_gpu_stats() -> dict[str, str]:
@ -853,9 +863,9 @@ def get_amd_gpu_stats() -> dict[str, str]:

for hw in usages:
if "gpu" in hw:
results["gpu"] = f"{hw.strip().split(' ')[1].replace('%', '')} %"
results["gpu"] = f"{hw.strip().split(' ')[1].replace('%', '')}%"
elif "vram" in hw:
results["mem"] = f"{hw.strip().split(' ')[1].replace('%', '')} %"
results["mem"] = f"{hw.strip().split(' ')[1].replace('%', '')}%"

return results

@ -911,50 +921,48 @@ def get_intel_gpu_stats() -> dict[str, str]:
else:
video_avg = 1

results["gpu"] = f"{round((video_avg + render_avg) / 2, 2)} %"
results["mem"] = "- %"
results["gpu"] = f"{round((video_avg + render_avg) / 2, 2)}%"
results["mem"] = "-%"
return results


def get_nvidia_gpu_stats() -> dict[str, str]:
"""Get stats using nvidia-smi."""
nvidia_smi_command = [
"nvidia-smi",
"--query-gpu=gpu_name,utilization.gpu,memory.used,memory.total",
"--format=csv",
]
def try_get_info(f, h, default="N/A"):
try:
v = f(h)
except nvml.NVMLError_NotSupported:
v = default
return v

if (
"CUDA_VISIBLE_DEVICES" in os.environ
and os.environ["CUDA_VISIBLE_DEVICES"].isdigit()
):
nvidia_smi_command.extend(["--id", os.environ["CUDA_VISIBLE_DEVICES"]])
elif (
"NVIDIA_VISIBLE_DEVICES" in os.environ
and os.environ["NVIDIA_VISIBLE_DEVICES"].isdigit()
):
nvidia_smi_command.extend(["--id", os.environ["NVIDIA_VISIBLE_DEVICES"]])

p = sp.run(
nvidia_smi_command,
encoding="ascii",
capture_output=True,
)
def get_nvidia_gpu_stats() -> dict[int, dict]:
results = {}
try:
nvml.nvmlInit()
deviceCount = nvml.nvmlDeviceGetCount()
for i in range(deviceCount):
handle = nvml.nvmlDeviceGetHandleByIndex(i)
meminfo = try_get_info(nvml.nvmlDeviceGetMemoryInfo, handle)
util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
if util != "N/A":
gpu_util = util.gpu
else:
gpu_util = 0

if p.returncode != 0:
logger.error(f"Unable to poll nvidia GPU stats: {p.stderr}")
return None
else:
usages = p.stdout.split("\n")[1].strip().split(",")
memory_percent = f"{round(float(usages[2].replace(' MiB', '').strip()) / float(usages[3].replace(' MiB', '').strip()) * 100, 1)} %"
results: dict[str, str] = {
"name": usages[0],
"gpu": usages[1].strip(),
"mem": memory_percent,
}
if meminfo != "N/A":
gpu_mem_util = meminfo.used / meminfo.total * 100
else:
gpu_mem_util = -1

results[i] = {
"name": nvml.nvmlDeviceGetName(handle),
"gpu": gpu_util,
"mem": gpu_mem_util,
}
except:
return results

return results

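Here the nvidia-smi subprocess and CSV parsing are replaced by direct NVML queries through py3nvml, and the return type changes from a single-GPU dict to a per-index map. A lightly condensed sketch of the added code (reassembled from the added lines above; indentation inferred):

```python
import py3nvml.py3nvml as nvml


def try_get_info(f, h, default="N/A"):
    """Some boards do not support every NVML query; fall back to a default value."""
    try:
        v = f(h)
    except nvml.NVMLError_NotSupported:
        v = default
    return v


def get_nvidia_gpu_stats() -> dict[int, dict]:
    """Get utilization and memory stats for every NVIDIA GPU via NVML."""
    results = {}
    try:
        nvml.nvmlInit()
        device_count = nvml.nvmlDeviceGetCount()
        for i in range(device_count):
            handle = nvml.nvmlDeviceGetHandleByIndex(i)
            meminfo = try_get_info(nvml.nvmlDeviceGetMemoryInfo, handle)
            util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)

            gpu_util = util.gpu if util != "N/A" else 0
            gpu_mem_util = (
                meminfo.used / meminfo.total * 100 if meminfo != "N/A" else -1
            )

            results[i] = {
                "name": nvml.nvmlDeviceGetName(handle),
                "gpu": gpu_util,
                "mem": gpu_mem_util,
            }
    except Exception:
        # NVML not available (no driver, no GPU); return whatever was collected
        return results

    return results
```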

def ffprobe_stream(path: str) -> sp.CompletedProcess:
"""Run ffprobe on stream."""

migrations/015_event_refactor.py (new file, 49 lines)
@ -0,0 +1,49 @@
"""Peewee migrations

Some examples (model - class or model name)::

    > Model = migrator.orm['model_name']  # Return model in current state by name

    > migrator.sql(sql)  # Run custom SQL
    > migrator.python(func, *args, **kwargs)  # Run python code
    > migrator.create_model(Model)  # Create a model (could be used as decorator)
    > migrator.remove_model(model, cascade=True)  # Remove a model
    > migrator.add_fields(model, **fields)  # Add fields to a model
    > migrator.change_fields(model, **fields)  # Change fields
    > migrator.remove_fields(model, *field_names, cascade=True)
    > migrator.rename_field(model, old_field_name, new_field_name)
    > migrator.rename_table(model, new_table_name)
    > migrator.add_index(model, *col_names, unique=False)
    > migrator.drop_index(model, *col_names)
    > migrator.add_not_null(model, *field_names)
    > migrator.drop_not_null(model, *field_names)
    > migrator.add_default(model, field_name, default)

"""

import datetime as dt
import peewee as pw
from playhouse.sqlite_ext import *
from decimal import ROUND_HALF_EVEN
from frigate.models import Event

try:
    import playhouse.postgres_ext as pw_pext
except ImportError:
    pass

SQL = pw.SQL


def migrate(migrator, database, fake=False, **kwargs):
    migrator.drop_not_null(
        Event, "top_score", "score", "region", "box", "area", "ratio"
    )
    migrator.add_fields(
        Event,
        data=JSONField(default={}),
    )


def rollback(migrator, database, fake=False, **kwargs):
    pass
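The migration relaxes the NOT NULL constraints on the legacy per-event columns and adds a data JSON column that newer events use instead. A hypothetical peewee snippet showing the intent (the id and field values below are made up, and the read fallback mirrors the `event?.data?.top_score || event.top_score` pattern used in the web UI later in this commit):

```python
from frigate.models import Event

# hypothetical write: newer events carry their attributes inside the JSON column
Event.update(
    data={"top_score": 0.87, "box": [0.1, 0.2, 0.3, 0.4], "region": [0, 0, 1, 1]}
).where(Event.id == "1683000000.123456-abcd12").execute()

# hypothetical read: prefer data["top_score"], fall back to the legacy column
event = Event.get_by_id("1683000000.123456-abcd12")
top_score = (event.data or {}).get("top_score", event.top_score)
```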
migrations/016_sublabel_increase.py (new file, 12 lines)
@ -0,0 +1,12 @@
import peewee as pw
from playhouse.migrate import *
from playhouse.sqlite_ext import *
from frigate.models import Event


def migrate(migrator, database, fake=False, **kwargs):
    migrator.change_columns(Event, sub_label=pw.CharField(max_length=100, null=True))


def rollback(migrator, database, fake=False, **kwargs):
    migrator.change_columns(Event, sub_label=pw.CharField(max_length=20, null=True))
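Frigate applies these migrations itself at startup, but for reference a hypothetical standalone invocation with peewee_migrate could look like this (the database path is an assumption based on Frigate's default config location):

```python
from peewee import SqliteDatabase
from peewee_migrate import Router

db = SqliteDatabase("/config/frigate.db")  # assumed default path
router = Router(db, migrate_dir="migrations")

router.run()  # apply 015, 016, and any earlier pending migrations
# router.rollback("016_sublabel_increase")  # would shrink sub_label back to 20 characters
```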
@ -1,5 +1,5 @@
click == 8.1.*
Flask == 2.2.*
Flask == 2.3.*
imutils == 0.5.*
matplotlib == 3.7.*
mypy == 0.942
@ -7,15 +7,16 @@ numpy == 1.23.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.5.5.*
paho-mqtt == 1.6.*
peewee == 3.15.*
peewee == 3.16.*
peewee_migrate == 1.7.*
psutil == 5.9.*
pydantic == 1.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
PyYAML == 6.0
pytz == 2023.3
tzlocal == 4.3
types-PyYAML == 6.0.*
requests == 2.28.*
requests == 2.30.*
types-requests == 2.28.*
scipy == 1.10.*
setproctitle == 1.3.*

@ -8,8 +8,9 @@
<link rel="apple-touch-icon" sizes="180x180" href="/images/apple-touch-icon.png" />
<link rel="icon" type="image/png" sizes="32x32" href="/images/favicon-32x32.png" />
<link rel="icon" type="image/png" sizes="16x16" href="/images/favicon-16x16.png" />
<link rel="icon" type="image/svg+xml" href="/images/favicon.svg">
<link rel="manifest" href="/site.webmanifest" />
<link rel="mask-icon" href="/images/safari-pinned-tab.svg" color="#3b82f7" />
<link rel="mask-icon" href="/images/favicon.svg" color="#3b82f7" />
<meta name="msapplication-TileColor" content="#3b82f7" />
<meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)" />
<meta name="theme-color" content="#111827" media="(prefers-color-scheme: dark)" />

web/package-lock.json (generated, 888 changes; diff suppressed because it is too large)
@ -13,11 +13,11 @@
},
"dependencies": {
"@cycjimmy/jsmpeg-player": "^6.0.5",
"axios": "^1.3.6",
"axios": "^1.4.0",
"copy-to-clipboard": "3.3.3",
"date-fns": "^2.29.3",
"date-fns": "^2.30.0",
"idb-keyval": "^6.2.0",
"immer": "^9.0.21",
"immer": "^10.0.1",
"monaco-yaml": "^4.0.4",
"preact": "^10.13.2",
"preact-async-route": "^2.2.1",
@ -38,21 +38,21 @@
"@testing-library/user-event": "^14.4.3",
"@typescript-eslint/eslint-plugin": "^5.59.1",
"@typescript-eslint/parser": "^5.59.1",
"@vitest/coverage-c8": "^0.30.1",
"@vitest/ui": "^0.30.1",
"@vitest/coverage-c8": "^0.31.0",
"@vitest/ui": "^0.31.0",
"autoprefixer": "^10.4.14",
"eslint": "^8.39.0",
"eslint-config-preact": "^1.3.0",
"eslint-config-prettier": "^8.8.0",
"eslint-plugin-vitest-globals": "^1.3.1",
"fake-indexeddb": "^4.0.1",
"jsdom": "^21.1.1",
"jsdom": "^22.0.0",
"msw": "^1.2.1",
"postcss": "^8.4.23",
"prettier": "^2.8.8",
"tailwindcss": "^3.3.2",
"typescript": "^5.0.4",
"vite": "^4.3.2",
"vitest": "^0.30.1"
"vite": "^4.3.5",
"vitest": "^0.31.0"
}
}

@ -1,14 +1,14 @@
{
"name": "",
"short_name": "",
"name": "Frigate",
"short_name": "Frigate",
"icons": [
{
"src": "/images/android-chrome-192x192.png",
"src": "/icons/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "/images/android-chrome-512x512.png",
"src": "/icons/android-chrome-512x512.png",
"sizes": "512x512",
"type": "image/png"
}

@ -1,6 +1,6 @@
import { h, createContext } from 'preact';
import { baseUrl } from './baseUrl';
import produce from 'immer';
import { produce } from 'immer';
import { useCallback, useContext, useEffect, useRef, useReducer } from 'preact/hooks';

const initialState = Object.freeze({ __connected: false });

@ -7,48 +7,49 @@ const ButtonColors = {
|
||||
contained: 'bg-blue-500 focus:bg-blue-400 active:bg-blue-600 ring-blue-300',
|
||||
outlined:
|
||||
'text-blue-500 border-2 border-blue-500 hover:bg-blue-500 hover:bg-opacity-20 focus:bg-blue-500 focus:bg-opacity-40 active:bg-blue-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-blue-500 hover:bg-blue-500 hover:bg-opacity-20 focus:bg-blue-500 focus:bg-opacity-40 active:bg-blue-500 active:bg-opacity-40',
|
||||
text: 'text-blue-500 hover:bg-blue-500 hover:bg-opacity-20 focus:bg-blue-500 focus:bg-opacity-40 active:bg-blue-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-blue-500 hover:text-blue-200',
|
||||
},
|
||||
red: {
|
||||
contained: 'bg-red-500 focus:bg-red-400 active:bg-red-600 ring-red-300',
|
||||
outlined:
|
||||
'text-red-500 border-2 border-red-500 hover:bg-red-500 hover:bg-opacity-20 focus:bg-red-500 focus:bg-opacity-40 active:bg-red-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-red-500 hover:bg-red-500 hover:bg-opacity-20 focus:bg-red-500 focus:bg-opacity-40 active:bg-red-500 active:bg-opacity-40',
|
||||
text: 'text-red-500 hover:bg-red-500 hover:bg-opacity-20 focus:bg-red-500 focus:bg-opacity-40 active:bg-red-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-red-500 hover:text-red-200',
|
||||
},
|
||||
yellow: {
|
||||
contained: 'bg-yellow-500 focus:bg-yellow-400 active:bg-yellow-600 ring-yellow-300',
|
||||
outlined:
|
||||
'text-yellow-500 border-2 border-yellow-500 hover:bg-yellow-500 hover:bg-opacity-20 focus:bg-yellow-500 focus:bg-opacity-40 active:bg-yellow-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-yellow-500 hover:bg-yellow-500 hover:bg-opacity-20 focus:bg-yellow-500 focus:bg-opacity-40 active:bg-yellow-500 active:bg-opacity-40',
|
||||
text: 'text-yellow-500 hover:bg-yellow-500 hover:bg-opacity-20 focus:bg-yellow-500 focus:bg-opacity-40 active:bg-yellow-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-yellow-500 hover:text-yellow-200',
|
||||
},
|
||||
green: {
|
||||
contained: 'bg-green-500 focus:bg-green-400 active:bg-green-600 ring-green-300',
|
||||
outlined:
|
||||
'text-green-500 border-2 border-green-500 hover:bg-green-500 hover:bg-opacity-20 focus:bg-green-500 focus:bg-opacity-40 active:bg-green-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-green-500 hover:bg-green-500 hover:bg-opacity-20 focus:bg-green-500 focus:bg-opacity-40 active:bg-green-500 active:bg-opacity-40',
|
||||
text: 'text-green-500 hover:bg-green-500 hover:bg-opacity-20 focus:bg-green-500 focus:bg-opacity-40 active:bg-green-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-green-500 hover:text-green-200',
|
||||
},
|
||||
gray: {
|
||||
contained: 'bg-gray-500 focus:bg-gray-400 active:bg-gray-600 ring-gray-300',
|
||||
outlined:
|
||||
'text-gray-500 border-2 border-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
text: 'text-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-gray-500 hover:text-gray-200',
|
||||
},
|
||||
disabled: {
|
||||
contained: 'bg-gray-400',
|
||||
outlined:
|
||||
'text-gray-500 border-2 border-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
text:
|
||||
'text-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
text: 'text-gray-500 hover:bg-gray-500 hover:bg-opacity-20 focus:bg-gray-500 focus:bg-opacity-40 active:bg-gray-500 active:bg-opacity-40',
|
||||
iconOnly: 'text-gray-500 hover:text-gray-200',
|
||||
},
|
||||
black: {
|
||||
contained: '',
|
||||
outlined: '',
|
||||
text: 'text-black dark:text-white',
|
||||
iconOnly: '',
|
||||
},
|
||||
};
|
||||
|
||||
@ -56,6 +57,7 @@ const ButtonTypes = {
|
||||
contained: 'text-white shadow focus:shadow-xl hover:shadow-md',
|
||||
outlined: '',
|
||||
text: 'transition-opacity',
|
||||
iconOnly: 'transition-opacity',
|
||||
};
|
||||
|
||||
export default function Button({
|
||||
@ -73,7 +75,7 @@ export default function Button({
|
||||
let classes = `whitespace-nowrap flex items-center space-x-1 ${className} ${ButtonTypes[type]} ${
|
||||
ButtonColors[disabled ? 'disabled' : color][type]
|
||||
} font-sans inline-flex font-bold uppercase text-xs px-1.5 md:px-2 py-2 rounded outline-none focus:outline-none ring-opacity-50 transition-shadow transition-colors ${
|
||||
disabled ? 'cursor-not-allowed' : 'focus:ring-2 cursor-pointer'
|
||||
disabled ? 'cursor-not-allowed' : `${type == 'iconOnly' ? '' : 'focus:ring-2'} cursor-pointer`
|
||||
}`;
|
||||
|
||||
if (disabled) {
|
||||
|
||||
@ -163,7 +163,9 @@ export function EventCard({ camera, event }) {
|
||||
<div className="text-xs md:text-normal text-gray-300">Start: {format(start, 'HH:mm:ss')}</div>
|
||||
<div className="text-xs md:text-normal text-gray-300">Duration: {duration}</div>
|
||||
</div>
|
||||
<div className="text-lg text-white text-right leading-tight">{(event.top_score * 100).toFixed(1)}%</div>
|
||||
<div className="text-lg text-white text-right leading-tight">
|
||||
{((event?.data?.top_score || event.top_score) * 100).toFixed(1)}%
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@ -2,10 +2,11 @@ import { h } from 'preact';
|
||||
import useSWR from 'swr';
|
||||
import ActivityIndicator from './ActivityIndicator';
|
||||
import { formatUnixTimestampToDateTime } from '../utils/dateUtil';
|
||||
import About from '../icons/About';
|
||||
import PlayIcon from '../icons/Play';
|
||||
import ExitIcon from '../icons/Exit';
|
||||
import { Zone } from '../icons/Zone';
|
||||
import { useState } from 'preact/hooks';
|
||||
import { useMemo, useState } from 'preact/hooks';
|
||||
import Button from './Button';
|
||||
|
||||
export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
@ -18,6 +19,14 @@ export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
|
||||
const { data: config } = useSWR('config');
|
||||
|
||||
const annotationOffset = useMemo(() => {
|
||||
if (!config) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
return (config.cameras[event.camera]?.detect?.annotation_offset || 0) / 1000;
|
||||
}, [config, event]);
|
||||
|
||||
const [timeIndex, setTimeIndex] = useState(-1);
|
||||
|
||||
const recordingParams = {
|
||||
@ -53,7 +62,7 @@ export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
|
||||
const onSelectMoment = async (index) => {
|
||||
setTimeIndex(index);
|
||||
onFrameSelected(eventTimeline[index], getSeekSeconds(eventTimeline[index].timestamp));
|
||||
onFrameSelected(eventTimeline[index], getSeekSeconds(eventTimeline[index].timestamp + annotationOffset));
|
||||
};
|
||||
|
||||
if (!eventTimeline || !config) {
|
||||
@ -73,7 +82,7 @@ export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
<Button
|
||||
key={index}
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
type="iconOnly"
|
||||
color={index == timeIndex ? 'blue' : 'gray'}
|
||||
aria-label={window.innerWidth > 640 ? getTimelineItemDescription(config, item, event) : ''}
|
||||
onClick={() => onSelectMoment(index)}
|
||||
@ -84,7 +93,7 @@ export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
<Button
|
||||
key={index}
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
type="iconOnly"
|
||||
color={index == timeIndex ? 'blue' : 'gray'}
|
||||
aria-label={window.innerWidth > 640 ? getTimelineItemDescription(config, item, event) : ''}
|
||||
onClick={() => onSelectMoment(index)}
|
||||
@ -96,9 +105,19 @@ export default function TimelineSummary({ event, onFrameSelected }) {
|
||||
</div>
|
||||
</div>
|
||||
{timeIndex >= 0 ? (
|
||||
<div className="bg-gray-500 p-4 m-2 max-w-md self-center">
|
||||
Disclaimer: This data comes from the detect feed but is shown on the recordings, it is unlikely that the
|
||||
streams are perfectly in sync so the bounding box and the footage will not line up perfectly.
|
||||
<div className="m-2 max-w-md self-center">
|
||||
<div className="flex justify-start">
|
||||
<div className="text-lg flex justify-between py-4">Bounding boxes may not align</div>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label=" Disclaimer: This data comes from the detect feed but is shown on the recordings, it is unlikely that the
|
||||
streams are perfectly in sync so the bounding box and the footage will not line up perfectly. The annotation_offset field can be used to adjust this."
|
||||
>
|
||||
<About className="w-4" />
|
||||
</Button>
|
||||
</div>
|
||||
</div>
|
||||
) : null}
|
||||
</div>
|
||||
|
||||
web/src/icons/About.jsx (new file, 19 lines)
@ -0,0 +1,19 @@
|
||||
import { h } from 'preact';
|
||||
import { memo } from 'preact/compat';
|
||||
|
||||
export function About({ className = '' }) {
|
||||
return (
|
||||
<svg
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
fill="currentColor"
|
||||
viewBox="0 0 24 24"
|
||||
strokeWidth={1.5}
|
||||
stroke="currentColor"
|
||||
className={`${className}`}
|
||||
>
|
||||
<path d="M11 7h2v2h-2zm0 4h2v6h-2zm1-9C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm0 18c-4.41 0-8-3.59-8-8s3.59-8 8-8 8 3.59 8 8-3.59 8-8 8z" />
|
||||
</svg>
|
||||
);
|
||||
}
|
||||
|
||||
export default memo(About);
|
||||
@ -24,7 +24,7 @@ export default function Birdseye() {
|
||||
}
|
||||
|
||||
return Object.entries(config.cameras)
|
||||
.filter(([_, conf]) => conf.onvif?.host)
|
||||
.filter(([_, conf]) => conf.onvif?.host && conf.onvif.host != '')
|
||||
.map(([_, camera]) => camera.name);
|
||||
}, [config]);
|
||||
|
||||
@ -37,7 +37,7 @@ export default function Birdseye() {
|
||||
if ('MediaSource' in window) {
|
||||
player = (
|
||||
<Fragment>
|
||||
<div className="max-w-5xl xl:w-1/2">
|
||||
<div className={ptzCameras.length ? 'max-w-5xl xl:w-1/2' : 'max-w-5xl'}>
|
||||
<MsePlayer camera="birdseye" />
|
||||
</div>
|
||||
</Fragment>
|
||||
@ -54,7 +54,7 @@ export default function Birdseye() {
|
||||
} else if (viewSource == 'webrtc' && config.birdseye.restream) {
|
||||
player = (
|
||||
<Fragment>
|
||||
<div className="max-w-5xl xl:w-1/2">
|
||||
<div className={ptzCameras.length ? 'max-w-5xl xl:w-1/2' : 'max-w-5xl'}>
|
||||
<WebRtcPlayer camera="birdseye" />
|
||||
</div>
|
||||
</Fragment>
|
||||
@ -62,7 +62,7 @@ export default function Birdseye() {
|
||||
} else {
|
||||
player = (
|
||||
<Fragment>
|
||||
<div className="max-w-7xl xl:w-1/2">
|
||||
<div className={ptzCameras.length ? 'max-w-5xl xl:w-1/2' : 'max-w-5xl'}>
|
||||
<JSMpegPlayer camera="birdseye" />
|
||||
</div>
|
||||
</Fragment>
|
||||
@ -94,7 +94,7 @@ export default function Birdseye() {
|
||||
<div className="xl:flex justify-between">
|
||||
{player}
|
||||
|
||||
{ptzCameras && (
|
||||
{ptzCameras.length ? (
|
||||
<div className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow p-4 w-full sm:w-min xl:h-min xl:w-1/2">
|
||||
<Heading size="sm">Control Panel</Heading>
|
||||
{ptzCameras.map((camera) => (
|
||||
@ -104,7 +104,7 @@ export default function Birdseye() {
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
)}
|
||||
) : null}
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
|
||||
@ -206,7 +206,7 @@ export default function Events({ path, ...props }) {
|
||||
e.stopPropagation();
|
||||
setDownloadEvent((_prev) => ({
|
||||
id: event.id,
|
||||
box: event.box,
|
||||
box: event?.data?.box || event.box,
|
||||
label: event.label,
|
||||
has_clip: event.has_clip,
|
||||
has_snapshot: event.has_snapshot,
|
||||
@ -599,7 +599,7 @@ export default function Events({ path, ...props }) {
|
||||
{event.sub_label
|
||||
? `${event.label.replaceAll('_', ' ')}: ${event.sub_label.replaceAll('_', ' ')}`
|
||||
: event.label.replaceAll('_', ' ')}
|
||||
({(event.top_score * 100).toFixed(0)}%)
|
||||
({((event?.data?.top_score || event.top_score) * 100).toFixed(0)}%)
|
||||
</div>
|
||||
<div className="text-sm flex">
|
||||
<Clock className="h-5 w-5 mr-2 inline" />
|
||||
@ -638,7 +638,9 @@ export default function Events({ path, ...props }) {
|
||||
<Button
|
||||
color="gray"
|
||||
disabled={uploading.includes(event.id)}
|
||||
onClick={(e) => showSubmitToPlus(event.id, event.label, event.box, e)}
|
||||
onClick={(e) =>
|
||||
showSubmitToPlus(event.id, event.label, event?.data?.box || event.box, e)
|
||||
}
|
||||
>
|
||||
{uploading.includes(event.id) ? 'Uploading...' : 'Send to Frigate+'}
|
||||
</Button>
|
||||
@ -680,7 +682,9 @@ export default function Events({ path, ...props }) {
|
||||
<div>
|
||||
<TimelineSummary
|
||||
event={event}
|
||||
onFrameSelected={(frame, seekSeconds) => onEventFrameSelected(event, frame, seekSeconds)}
|
||||
onFrameSelected={(frame, seekSeconds) =>
|
||||
onEventFrameSelected(event, frame, seekSeconds)
|
||||
}
|
||||
/>
|
||||
<div>
|
||||
<VideoPlayer
|
||||
@ -720,7 +724,7 @@ export default function Events({ path, ...props }) {
|
||||
}}
|
||||
>
|
||||
{eventOverlay.class_type == 'entered_zone' ? (
|
||||
<div className="absolute w-2 h-2 bg-yellow-500 left-[50%] bottom-0" />
|
||||
<div className="absolute w-2 h-2 bg-yellow-500 left-[50%] -translate-x-1/2 translate-y-3/4 bottom-0" />
|
||||
) : null}
|
||||
</div>
|
||||
) : null}
|
||||
@ -738,7 +742,9 @@ export default function Events({ path, ...props }) {
|
||||
? `${apiHost}/api/events/${event.id}/snapshot.jpg`
|
||||
: `${apiHost}/api/events/${event.id}/thumbnail.jpg`
|
||||
}
|
||||
alt={`${event.label} at ${(event.top_score * 100).toFixed(0)}% confidence`}
|
||||
alt={`${event.label} at ${((event?.data?.top_score || event.top_score) * 100).toFixed(
|
||||
0
|
||||
)}% confidence`}
|
||||
/>
|
||||
</div>
|
||||
) : null}
|
||||
|
||||
@ -30,7 +30,7 @@ export default function Recording({ camera, date, hour = '00', minute = '00', se
|
||||
// calculates the seek seconds by adding up all the seconds in the segments prior to the playback time
|
||||
const seekSeconds = useMemo(() => {
|
||||
if (!recordings) {
|
||||
return 0;
|
||||
return undefined;
|
||||
}
|
||||
const currentUnix = getUnixTime(currentDate);
|
||||
|
||||
@ -103,6 +103,9 @@ export default function Recording({ camera, date, hour = '00', minute = '00', se
|
||||
}, [playlistIndex]);
|
||||
|
||||
useEffect(() => {
|
||||
if (seekSeconds === undefined) {
|
||||
return;
|
||||
}
|
||||
if (this.player) {
|
||||
// if the playlist has moved on to the next item, then reset
|
||||
if (this.player.playlist.currentItem() !== playlistIndex) {
|
||||
@ -114,7 +117,7 @@ export default function Recording({ camera, date, hour = '00', minute = '00', se
|
||||
}
|
||||
}, [seekSeconds, playlistIndex]);
|
||||
|
||||
if (!recordingsSummary || !recordings || !config) {
|
||||
if (!recordingsSummary || !config) {
|
||||
return <ActivityIndicator />;
|
||||
}
|
||||
|
||||
@ -145,7 +148,9 @@ export default function Recording({ camera, date, hour = '00', minute = '00', se
|
||||
player.playlist(playlist);
|
||||
player.playlist.autoadvance(0);
|
||||
player.playlist.currentItem(playlistIndex);
|
||||
player.currentTime(seekSeconds);
|
||||
if (seekSeconds !== undefined) {
|
||||
player.currentTime(seekSeconds);
|
||||
}
|
||||
this.player = player;
|
||||
}
|
||||
}}
|
||||
|
||||
@ -5,6 +5,8 @@ import { useWs } from '../api/ws';
|
||||
import useSWR from 'swr';
|
||||
import { Table, Tbody, Thead, Tr, Th, Td } from '../components/Table';
|
||||
import Link from '../components/Link';
|
||||
import Button from '../components/Button';
|
||||
import { About } from '../icons/About';
|
||||
|
||||
const emptyObject = Object.freeze({});
|
||||
|
||||
@ -66,9 +68,19 @@ export default function Storage() {
|
||||
|
||||
<Fragment>
|
||||
<Heading size="lg">Overview</Heading>
|
||||
<div data-testid="detectors" className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div data-testid="overview-types" className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||
<div className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
|
||||
<div className="text-lg flex justify-between p-4">Data</div>
|
||||
<div className="flex justify-start">
|
||||
<div className="text-lg flex justify-between p-4">Data</div>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Overview of total used storage and total capacity of the drives that hold the recordings and snapshots directories."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<div className="p-2">
|
||||
<Table className="w-full">
|
||||
<Thead>
|
||||
@ -83,7 +95,17 @@ export default function Storage() {
|
||||
</div>
|
||||
</div>
|
||||
<div className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
|
||||
<div className="text-lg flex justify-between p-4">Memory</div>
|
||||
<div className="flex justify-start">
|
||||
<div className="text-lg flex justify-between p-4">Memory</div>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Overview of used and total memory in frigate process."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<div className="p-2">
|
||||
<Table className="w-full">
|
||||
<Thead>
|
||||
@ -110,7 +132,17 @@ export default function Storage() {
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<Heading size="lg">Cameras</Heading>
|
||||
<div className="flex justify-start">
|
||||
<Heading size="lg">Cameras</Heading>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Overview of per-camera storage usage and bandwidth."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<div data-testid="detectors" className="grid grid-cols-1 3xl:grid-cols-3 md:grid-cols-2 gap-4">
|
||||
{Object.entries(storage).map(([name, camera]) => (
|
||||
<div key={name} className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
|
||||
|
||||
@ -11,6 +11,7 @@ import { useState } from 'preact/hooks';
|
||||
import Dialog from '../components/Dialog';
|
||||
import TimeAgo from '../components/TimeAgo';
|
||||
import copy from 'copy-to-clipboard';
|
||||
import { About } from '../icons/About';
|
||||
|
||||
const emptyObject = Object.freeze({});
|
||||
|
||||
@ -29,12 +30,16 @@ export default function System() {
|
||||
detectors,
|
||||
service = {},
|
||||
detection_fps: _,
|
||||
processes,
|
||||
...cameras
|
||||
} = stats || initialStats || emptyObject;
|
||||
|
||||
const detectorNames = Object.keys(detectors || emptyObject);
|
||||
const gpuNames = Object.keys(gpu_usages || emptyObject);
|
||||
const cameraNames = Object.keys(cameras || emptyObject);
|
||||
const processesNames = Object.keys(processes || emptyObject);
|
||||
|
||||
const { data: go2rtc } = useSWR('go2rtc');
|
||||
|
||||
const onHandleFfprobe = async (camera, e) => {
|
||||
if (e) {
|
||||
@ -90,14 +95,16 @@ export default function System() {
|
||||
System <span className="text-sm">{service.version}</span>
|
||||
</Heading>
|
||||
{config && (
|
||||
<Link
|
||||
className="p-1 text-blue-500 hover:underline"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
href="/live/webrtc/"
|
||||
>
|
||||
go2rtc dashboard
|
||||
</Link>
|
||||
<span class="p-1">go2rtc {go2rtc && ( `${go2rtc.version} ` ) }
|
||||
<Link
|
||||
className="text-blue-500 hover:underline"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
href="/live/webrtc/"
|
||||
>
|
||||
dashboard
|
||||
</Link>
|
||||
</span>
|
||||
)}
|
||||
</div>
|
||||
|
||||
@ -206,7 +213,19 @@ export default function System() {
|
||||
</div>
|
||||
) : (
|
||||
<Fragment>
|
||||
<Heading size="lg">Detectors</Heading>
|
||||
<div className="flex justify-start">
|
||||
<Heading className="self-center" size="lg">
|
||||
Detectors
|
||||
</Heading>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Momentary resource usage of each process that is controlling the object detector. CPU % is for a single core."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<div data-testid="detectors" className="grid grid-cols-1 3xl:grid-cols-3 md:grid-cols-2 gap-4">
|
||||
{detectorNames.map((detector) => (
|
||||
<div key={detector} className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
|
||||
@ -235,8 +254,20 @@ export default function System() {
|
||||
))}
|
||||
</div>
|
||||
|
||||
<div className="text-lg flex justify-between p-4">
|
||||
<Heading size="lg">GPUs</Heading>
|
||||
<div className="text-lg flex justify-between">
|
||||
<div className="flex justify-start">
|
||||
<Heading className="self-center" size="lg">
|
||||
GPUs
|
||||
</Heading>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Momentary resource usage of each GPU. Intel GPUs do not support memory stats."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<Button onClick={(e) => onHandleVainfo(e)}>vainfo</Button>
|
||||
</div>
|
||||
|
||||
@ -280,7 +311,19 @@ export default function System() {
|
||||
</div>
|
||||
)}
|
||||
|
||||
<Heading size="lg">Cameras</Heading>
|
||||
<div className="flex justify-start">
|
||||
<Heading className="self-center" size="lg">
|
||||
Cameras
|
||||
</Heading>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Momentary resource usage of each process interacting with the camera stream. CPU % is for a single core."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
{!cameras ? (
|
||||
<ActivityIndicator />
|
||||
) : (
|
||||
@ -345,6 +388,49 @@ export default function System() {
|
||||
</div>
|
||||
)}
|
||||
|
||||
<div className="flex justify-start">
|
||||
<Heading className="self-center" size="lg">
|
||||
Other Processes
|
||||
</Heading>
|
||||
<Button
|
||||
className="rounded-full"
|
||||
type="text"
|
||||
color="gray"
|
||||
aria-label="Momentary resource usage for other important processes. CPU % is for a single core."
|
||||
>
|
||||
<About className="w-5" />
|
||||
</Button>
|
||||
</div>
|
||||
<div data-testid="cameras" className="grid grid-cols-1 3xl:grid-cols-3 md:grid-cols-2 gap-4">
|
||||
{processesNames.map((process) => (
|
||||
<div key={process} className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
|
||||
<div className="capitalize text-lg flex justify-between p-4">
|
||||
<div className="text-lg flex justify-between">{process}</div>
|
||||
</div>
|
||||
<div className="p-2">
|
||||
<Table className="w-full">
|
||||
<Thead>
|
||||
<Tr>
|
||||
<Th>P-ID</Th>
|
||||
<Th>CPU %</Th>
|
||||
<Th>Avg CPU %</Th>
|
||||
<Th>Memory %</Th>
|
||||
</Tr>
|
||||
</Thead>
|
||||
<Tbody>
|
||||
<Tr key="other" index="0">
|
||||
<Td>{processes[process]['pid'] || '- '}</Td>
|
||||
<Td>{cpu_usages[processes[process]['pid']]?.['cpu'] || '- '}%</Td>
|
||||
<Td>{cpu_usages[processes[process]['pid']]?.['cpu_average'] || '- '}%</Td>
|
||||
<Td>{cpu_usages[processes[process]['pid']]?.['mem'] || '- '}%</Td>
|
||||
</Tr>
|
||||
</Tbody>
|
||||
</Table>
|
||||
</div>
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
|
||||
<p>System stats update automatically every {config.mqtt.stats_interval} seconds.</p>
|
||||
</Fragment>
|
||||
)}
|
||||
|
||||