Compare commits

...

8 Commits

Author SHA1 Message Date
ivanshi1108
ddaab05bb1
Merge 438df7d484 into 213a1fbd00 2025-11-19 07:40:13 +08:00
Josh Hawkins
213a1fbd00
Miscellaneous Fixes (#20951)
* ensure viewer roles are available in create user dialog

* admin-only endpoint to return unmasked camera paths and go2rtc streams

* remove camera edit dropdown

pushing camera editing from the UI to 0.18

* clean up camera edit form

* rename component for clarity

CameraSettingsView is now CameraReviewSettingsView

* Catch case where a user requests a clip for a time that has no recordings

* ensure emergency cleanup also sets has_clip on overlapping events

improves https://github.com/blakeblackshear/frigate/discussions/20945

* use debug log instead of info

* update docs to recommend tmpfs

* improve display of in-progress events in explore tracking details

* improve seeking logic in tracking details

mimic the logic of DynamicVideoController

* only use ffprobe for duration to avoid blocking

fixes https://github.com/blakeblackshear/frigate/discussions/20737#discussioncomment-14999869

* Revert "only use ffprobe for duration to avoid blocking"

This reverts commit 8b15078005.

* update readme to link to object detector docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-18 15:33:42 -07:00
shizhicheng
438df7d484 The model inference time has been changed to match the time displayed in the Frigate UI 2025-11-16 22:22:38 +08:00
shizhicheng
e27a94ae0b Fix logical errors caused by code formatting 2025-11-11 05:54:19 +00:00
shizhicheng
1dee548dbc Modifications to the YOLOv9 object detection model:
- The model is now dynamically downloaded to the cache directory.
- Post-processing is now done using Frigate's built-in `post_process_yolo`.
- The configuration in the relevant documentation has been updated.
2025-11-11 05:42:28 +00:00
shizhicheng
91e17e12b7 Change the default detection model to YOLOv9 2025-11-09 13:21:17 +00:00
ivanshi1108
bb45483e9e
Modify AXERA section in hardware.md
Modify the AXERA section and related content in the hardware documentation.
2025-10-28 09:54:00 +08:00
shizhicheng
7b4eaf2d10 Initial commit for AXERA AI accelerators 2025-10-24 09:03:13 +00:00
20 changed files with 1480 additions and 905 deletions

View File

@ -225,3 +225,29 @@ jobs:
sources: |
ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ env.SHORT_SHA }}-amd64
ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ env.SHORT_SHA }}-rpi
  axera_build:
    runs-on: ubuntu-22.04
    name: AXERA Build
    needs:
      - amd64_build
      - arm64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Axera build
        uses: docker/bake-action@v6
        with:
          source: .
          push: true
          targets: axcl
          files: docker/axcl/axcl.hcl
          set: |
            axcl.tags=${{ steps.setup.outputs.image-name }}-axcl
            *.cache-from=type=gha

View File

@ -12,7 +12,7 @@
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
Use of a GPU or AI accelerator is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead. See Frigate's supported [object detectors](https://docs.frigate.video/configuration/object_detectors/).
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary

55
docker/axcl/Dockerfile Normal file
View File

@ -0,0 +1,55 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM frigate AS frigate-axcl
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
# Install axpyengine
RUN wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl -O /axengine-0.1.3-py3-none-any.whl
RUN pip3 install -i https://mirrors.aliyun.com/pypi/simple/ /axengine-0.1.3-py3-none-any.whl \
&& rm /axengine-0.1.3-py3-none-any.whl
# Install axcl
RUN if [ "$TARGETARCH" = "amd64" ]; then \
echo "Installing x86_64 version of axcl"; \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/axcl_host_x86_64_V3.6.5_20250908154509_NO4973.deb -O /axcl.deb; \
else \
echo "Installing aarch64 version of axcl"; \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/axcl_host_aarch64_V3.6.5_20250908154509_NO4973.deb -O /axcl.deb; \
fi
RUN mkdir /unpack_axcl && \
dpkg-deb -x /axcl.deb /unpack_axcl && \
cp -R /unpack_axcl/usr/bin/axcl /usr/bin/ && \
cp -R /unpack_axcl/usr/lib/axcl /usr/lib/ && \
rm -rf /unpack_axcl /axcl.deb
# Install axcl ffmpeg
RUN mkdir -p /usr/lib/ffmpeg/axcl
RUN if [ "$TARGETARCH" = "amd64" ]; then \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/ffmpeg-x64 -O /usr/lib/ffmpeg/axcl/ffmpeg && \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/ffprobe-x64 -O /usr/lib/ffmpeg/axcl/ffprobe; \
else \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/ffmpeg-aarch64 -O /usr/lib/ffmpeg/axcl/ffmpeg && \
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/ffprobe-aarch64 -O /usr/lib/ffmpeg/axcl/ffprobe; \
fi
RUN chmod +x /usr/lib/ffmpeg/axcl/ffmpeg /usr/lib/ffmpeg/axcl/ffprobe
# Set ldconfig path
RUN echo "/usr/lib/axcl" > /etc/ld.so.conf.d/ax.conf
# Set env
ENV PATH="$PATH:/usr/bin/axcl"
ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib/axcl"
ENTRYPOINT ["sh", "-c", "ldconfig && exec /init"]

13
docker/axcl/axcl.hcl Normal file
View File

@ -0,0 +1,13 @@
target frigate {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/amd64", "linux/arm64"]
  target = "frigate"
}

target axcl {
  dockerfile = "docker/axcl/Dockerfile"
  contexts = {
    frigate = "target:frigate",
  }
  platforms = ["linux/amd64", "linux/arm64"]
}

15
docker/axcl/axcl.mk Normal file
View File

@ -0,0 +1,15 @@
BOARDS += axcl

local-axcl: version
	docker buildx bake --file=docker/axcl/axcl.hcl axcl \
		--set axcl.tags=frigate:latest-axcl \
		--load

build-axcl: version
	docker buildx bake --file=docker/axcl/axcl.hcl axcl \
		--set axcl.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-axcl

push-axcl: build-axcl
	docker buildx bake --file=docker/axcl/axcl.hcl axcl \
		--set axcl.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-axcl \
		--push
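Assuming `axcl.mk` is included by the main Makefile in the same way as the other board files (which the `BOARDS += axcl` line suggests), the AXCL image could then be built locally with the new target; a minimal sketch:

```bash
# Build the AXCL variant and load it into the local Docker image store as frigate:latest-axcl
make local-axcl
```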

View File

@ -0,0 +1,83 @@
#!/bin/bash
# Update package list and install dependencies
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget pciutils kmod udev
# Check if gcc-12 is needed
current_gcc_version=$(gcc --version | head -n1 | awk '{print $NF}')
gcc_major_version=$(echo $current_gcc_version | cut -d'.' -f1)
if [[ $gcc_major_version -lt 12 ]]; then
echo "Current GCC version ($current_gcc_version) is lower than 12, installing gcc-12..."
sudo apt-get install -y gcc-12
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
echo "GCC-12 installed and set as default"
else
echo "Current GCC version ($current_gcc_version) is sufficient, skipping GCC installation"
fi
# Determine architecture
arch=$(uname -m)
download_url=""
if [[ $arch == "x86_64" ]]; then
download_url="https://github.com/ivanshi1108/assets/releases/download/v0.16.2/axcl_host_x86_64_V3.6.5_20250908154509_NO4973.deb"
deb_file="axcl_host_x86_64_V3.6.5_20250908154509_NO4973.deb"
elif [[ $arch == "aarch64" ]]; then
download_url="https://github.com/ivanshi1108/assets/releases/download/v0.16.2/axcl_host_aarch64_V3.6.5_20250908154509_NO4973.deb"
deb_file="axcl_host_aarch64_V3.6.5_20250908154509_NO4973.deb"
else
echo "Unsupported architecture: $arch"
exit 1
fi
# Download AXCL driver
echo "Downloading AXCL driver for $arch..."
wget "$download_url" -O "$deb_file"
if [ $? -ne 0 ]; then
echo "Failed to download AXCL driver"
exit 1
fi
# Install AXCL driver
echo "Installing AXCL driver..."
sudo dpkg -i "$deb_file"
if [ $? -ne 0 ]; then
echo "Failed to install AXCL driver, attempting to fix dependencies..."
sudo apt-get install -f -y
sudo dpkg -i "$deb_file"
if [ $? -ne 0 ]; then
echo "AXCL driver installation failed"
exit 1
fi
fi
# Update environment
echo "Updating environment..."
source /etc/profile
# Verify installation
echo "Verifying AXCL installation..."
if command -v axcl-smi &> /dev/null; then
echo "AXCL driver detected, checking AI accelerator status..."
axcl_output=$(axcl-smi 2>&1)
axcl_exit_code=$?
echo "$axcl_output"
if [ $axcl_exit_code -eq 0 ]; then
echo "AXCL driver installation completed successfully!"
else
echo "AXCL driver installed but no AI accelerator detected or communication failed."
echo "Please check if the AI accelerator is properly connected and powered on."
exit 1
fi
else
echo "axcl-smi command not found. AXCL driver installation may have failed."
exit 1
fi

View File

@ -47,6 +47,11 @@ Frigate supports multiple different detectors that work on different types of ha
- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
**AXERA**
- [AXEngine](#axera): axmodels can run on AXERA AI accelerators.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
@ -1169,6 +1174,41 @@ model: # required
labelmap_path: /labelmap/coco-80.txt # required
```
## AXERA
Hardware accelerated object detection is supported on the following SoCs:
- AX650N
- AX8850N
This implementation uses the [AXera Pulsar2 Toolchain](https://huggingface.co/AXERA-TECH/Pulsar2).
See the [installation docs](../frigate/installation.md#axera) for information on configuring the AXEngine hardware.
### Configuration
When configuring the AXEngine detector, you have to specify the model name.
#### yolov9
A yolov9 model is provided in the container at `/axmodels` and is used by this detector type by default.
Use the model configuration shown below when using the axengine detector with the default axmodel:
```yaml
detectors: # required
  axengine: # required
    type: axengine # required

model: # required
  path: frigate-yolov9-tiny # required
  model_type: yolo-generic # required
  width: 320 # required
  height: 320 # required
  tensor_format: bgr # required
  labelmap_path: /labelmap/coco-80.txt # required
```
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:

View File

@ -110,6 +110,14 @@ Frigate supports multiple different detectors that work on different types of ha
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
### AXERA
- **AXEngine**: default model is **yolov9**
| Name | AXERA AX650N/AX8850N Inference Time |
| ---------------- | ----------------------------------- |
| yolov9-tiny | ~ 4 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms—including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.

View File

@ -56,7 +56,7 @@ services:
volumes:
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000
@ -287,6 +287,40 @@ or add these options to your `docker run` command:
Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).
### AXERA
AXERA accelerators are available in an M.2 form factor, compatible with both Raspberry Pi and Orange Pi. This form factor has also been successfully tested on x86 platforms, making it a versatile choice for various computing environments.
#### Installation
Using AXERA accelerators requires the installation of the AXCL driver. We provide a convenient Linux script to complete this installation.
Follow these steps for installation:
1. Copy or download [this script](https://github.com/ivanshi1108/assets/releases/download/v0.16.2/user_installation.sh).
2. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
3. Run the script with `./user_installation.sh`
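Taken together, the steps above come down to the following shell session (a minimal sketch; the script URL is the one linked in step 1):

```bash
# Download the AXCL driver installation script
wget https://github.com/ivanshi1108/assets/releases/download/v0.16.2/user_installation.sh

# Make it executable and run it
sudo chmod +x user_installation.sh
./user_installation.sh
```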
#### Setup
To set up Frigate, follow the default installation instructions using the standard image, for example `ghcr.io/blakeblackshear/frigate:stable`.
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
```yaml
devices:
  - /dev/axcl_host
  - /dev/ax_mmb_dev
  - /dev/msg_userdev
```
If you are using `docker run`, add these options to your command: `--device /dev/axcl_host --device /dev/ax_mmb_dev --device /dev/msg_userdev`
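For reference, a minimal `docker run` sketch with these device flags (the container name is illustrative, and other required options such as volume mounts and ports are omitted):

```bash
docker run -d \
  --name frigate \
  --device /dev/axcl_host \
  --device /dev/ax_mmb_dev \
  --device /dev/msg_userdev \
  ghcr.io/blakeblackshear/frigate:stable
```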
#### Configuration
Finally, configure [hardware object detection](/configuration/object_detectors#axera) to complete the setup.
## Docker
Running through Docker with Docker Compose is the recommended install method.
@ -310,7 +344,7 @@ services:
- /etc/localtime:/etc/localtime:ro
- /path/to/your/config:/config
- /path/to/your/storage:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
- type: tmpfs # Recommended: 1GB of memory
target: /tmp/cache
tmpfs:
size: 1000000000

View File

@ -179,6 +179,36 @@ def config(request: Request):
return JSONResponse(content=config)
@router.get("/config/raw_paths", dependencies=[Depends(require_role(["admin"]))])
def config_raw_paths(request: Request):
"""Admin-only endpoint that returns camera paths and go2rtc streams without credential masking."""
config_obj: FrigateConfig = request.app.frigate_config
raw_paths = {"cameras": {}, "go2rtc": {"streams": {}}}
# Extract raw camera ffmpeg input paths
for camera_name, camera in config_obj.cameras.items():
raw_paths["cameras"][camera_name] = {
"ffmpeg": {
"inputs": [
{"path": input.path, "roles": input.roles}
for input in camera.ffmpeg.inputs
]
}
}
# Extract raw go2rtc stream URLs
go2rtc_config = config_obj.go2rtc.model_dump(
mode="json", warnings="none", exclude_none=True
)
for stream_name, stream in go2rtc_config.get("streams", {}).items():
if stream is None:
continue
raw_paths["go2rtc"]["streams"][stream_name] = stream
return JSONResponse(content=raw_paths)
@router.get("/config/raw")
def config_raw():
config_file = find_config_file()
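A hypothetical usage sketch for the new admin-only endpoint, assuming Frigate's HTTP API is reachable at `http://localhost:5000/api` and the request is authenticated as an admin (authentication details depend on the deployment):

```bash
# Returns camera ffmpeg input paths and go2rtc stream URLs without credential masking
curl http://localhost:5000/api/config/raw_paths
```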

View File

@ -762,6 +762,15 @@ async def recording_clip(
.order_by(Recordings.start_time.asc())
)
if recordings.count() == 0:
return JSONResponse(
content={
"success": False,
"message": "No recordings found for the specified time range",
},
status_code=400,
)
file_name = sanitize_filename(f"playlist_{camera_name}_{start_ts}-{end_ts}.txt")
file_path = os.path.join(CACHE_DIR, file_name)
with open(file_path, "w") as file:

View File

@ -0,0 +1,92 @@
import logging
import os.path
import re
import urllib.request
from typing import Literal
import cv2
import numpy as np
from pydantic import Field
from frigate.const import MODEL_CACHE_DIR
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
from frigate.util.model import post_process_yolo
import axengine as axe
from axengine import axclrt_provider_name, axengine_provider_name
logger = logging.getLogger(__name__)
DETECTOR_KEY = "axengine"
supported_models = {
ModelTypeEnum.yologeneric: "frigate-yolov9-.*$",
}
model_cache_dir = os.path.join(MODEL_CACHE_DIR, "axengine_cache/")
class AxengineDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
class Axengine(DetectionApi):
type_key = DETECTOR_KEY
def __init__(self, config: AxengineDetectorConfig):
logger.info("__init__ axengine")
super().__init__(config)
self.height = config.model.height
self.width = config.model.width
model_path = config.model.path or "frigate-yolov9-tiny"
model_props = self.parse_model_input(model_path)
self.session = axe.InferenceSession(model_props["path"])
def __del__(self):
pass
def parse_model_input(self, model_path):
model_props = {}
model_props["preset"] = True
model_matched = False
for model_type, pattern in supported_models.items():
if re.match(pattern, model_path):
model_matched = True
model_props["model_type"] = model_type
if model_matched:
model_props["filename"] = model_path + f".axmodel"
model_props["path"] = model_cache_dir + model_props["filename"]
if not os.path.isfile(model_props["path"]):
self.download_model(model_props["filename"])
else:
supported_models_str = ", ".join(
model[1:-1] for model in supported_models
)
raise Exception(
f"Model {model_path} is unsupported. Provide your own model or choose one of the following: {supported_models_str}"
)
return model_props
def download_model(self, filename):
if not os.path.isdir(model_cache_dir):
os.mkdir(model_cache_dir)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
urllib.request.urlretrieve(
f"{GITHUB_ENDPOINT}/ivanshi1108/assets/releases/download/v0.16.2/{filename}",
model_cache_dir + filename,
)
def detect_raw(self, tensor_input):
results = None
results = self.session.run(None, {"images": tensor_input})
if self.detector_config.model.model_type == ModelTypeEnum.yologeneric:
return post_process_yolo(results, self.width, self.height)
else:
raise ValueError(
f'Model type "{self.detector_config.model.model_type}" is currently not supported.'
)

View File

@ -113,6 +113,7 @@ class StorageMaintainer(threading.Thread):
recordings: Recordings = (
Recordings.select(
Recordings.id,
Recordings.camera,
Recordings.start_time,
Recordings.end_time,
Recordings.segment_size,
@ -137,7 +138,7 @@ class StorageMaintainer(threading.Thread):
)
event_start = 0
deleted_recordings = set()
deleted_recordings = []
for recording in recordings:
# check if 1 hour of storage has been reclaimed
if deleted_segments_size > hourly_bandwidth:
@ -172,7 +173,7 @@ class StorageMaintainer(threading.Thread):
if not keep:
try:
clear_and_unlink(Path(recording.path), missing_ok=False)
deleted_recordings.add(recording.id)
deleted_recordings.append(recording)
deleted_segments_size += recording.segment_size
except FileNotFoundError:
# this file was not found so we must assume no space was cleaned up
@ -186,6 +187,9 @@ class StorageMaintainer(threading.Thread):
recordings = (
Recordings.select(
Recordings.id,
Recordings.camera,
Recordings.start_time,
Recordings.end_time,
Recordings.path,
Recordings.segment_size,
)
@ -201,7 +205,7 @@ class StorageMaintainer(threading.Thread):
try:
clear_and_unlink(Path(recording.path), missing_ok=False)
deleted_segments_size += recording.segment_size
deleted_recordings.add(recording.id)
deleted_recordings.append(recording)
except FileNotFoundError:
# this file was not found so we must assume no space was cleaned up
pass
@ -211,7 +215,50 @@ class StorageMaintainer(threading.Thread):
logger.debug(f"Expiring {len(deleted_recordings)} recordings")
# delete up to 100,000 at a time
max_deletes = 100000
deleted_recordings_list = list(deleted_recordings)
# Update has_clip for events that overlap with deleted recordings
if deleted_recordings:
# Group deleted recordings by camera
camera_recordings = {}
for recording in deleted_recordings:
if recording.camera not in camera_recordings:
camera_recordings[recording.camera] = {
"min_start": recording.start_time,
"max_end": recording.end_time,
}
else:
camera_recordings[recording.camera]["min_start"] = min(
camera_recordings[recording.camera]["min_start"],
recording.start_time,
)
camera_recordings[recording.camera]["max_end"] = max(
camera_recordings[recording.camera]["max_end"],
recording.end_time,
)
# Find all events that overlap with deleted recordings time range per camera
events_to_update = []
for camera, time_range in camera_recordings.items():
overlapping_events = Event.select(Event.id).where(
Event.camera == camera,
Event.has_clip == True,
Event.start_time < time_range["max_end"],
Event.end_time > time_range["min_start"],
)
for event in overlapping_events:
events_to_update.append(event.id)
# Update has_clip to False for overlapping events
if events_to_update:
for i in range(0, len(events_to_update), max_deletes):
batch = events_to_update[i : i + max_deletes]
Event.update(has_clip=False).where(Event.id << batch).execute()
logger.debug(
f"Updated has_clip to False for {len(events_to_update)} events"
)
deleted_recordings_list = [r.id for r in deleted_recordings]
for i in range(0, len(deleted_recordings_list), max_deletes):
Recordings.delete().where(
Recordings.id << deleted_recordings_list[i : i + max_deletes]

View File

@ -13,7 +13,8 @@ import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import ActivityIndicator from "../indicators/activity-indicator";
import { useEffect, useState } from "react";
import { useEffect, useState, useMemo } from "react";
import useSWR from "swr";
import {
Dialog,
DialogContent,
@ -35,6 +36,7 @@ import { LuCheck, LuX } from "react-icons/lu";
import { useTranslation } from "react-i18next";
import { isDesktop, isMobile } from "react-device-detect";
import { cn } from "@/lib/utils";
import { FrigateConfig } from "@/types/frigateConfig";
import {
MobilePage,
MobilePageContent,
@ -54,9 +56,15 @@ export default function CreateUserDialog({
onCreate,
onCancel,
}: CreateUserOverlayProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const { t } = useTranslation(["views/settings"]);
const [isLoading, setIsLoading] = useState<boolean>(false);
const roles = useMemo(() => {
const existingRoles = config ? Object.keys(config.auth?.roles || {}) : [];
return Array.from(new Set(["admin", "viewer", ...(existingRoles || [])]));
}, [config]);
const formSchema = z
.object({
user: z
@ -69,7 +77,7 @@ export default function CreateUserDialog({
confirmPassword: z
.string()
.min(1, t("users.dialog.createUser.confirmPassword")),
role: z.enum(["admin", "viewer"]),
role: z.string().min(1),
})
.refine((data) => data.password === data.confirmPassword, {
message: t("users.dialog.form.password.notMatch"),
@ -246,24 +254,22 @@ export default function CreateUserDialog({
</SelectTrigger>
</FormControl>
<SelectContent>
{roles.map((r) => (
<SelectItem
value="admin"
value={r}
key={r}
className="flex items-center gap-2"
>
<div className="flex items-center gap-2">
{r === "admin" ? (
<Shield className="h-4 w-4 text-primary" />
<span>{t("role.admin", { ns: "common" })}</span>
</div>
</SelectItem>
<SelectItem
value="viewer"
className="flex items-center gap-2"
>
<div className="flex items-center gap-2">
) : (
<User className="h-4 w-4 text-muted-foreground" />
<span>{t("role.viewer", { ns: "common" })}</span>
)}
<span>{t(`role.${r}`, { ns: "common" }) || r}</span>
</div>
</SelectItem>
))}
</SelectContent>
</Select>
<FormDescription className="text-xs text-muted-foreground">

View File

@ -12,7 +12,11 @@ import { cn } from "@/lib/utils";
import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl";
import { REVIEW_PADDING } from "@/types/review";
import { ASPECT_VERTICAL_LAYOUT, ASPECT_WIDE_LAYOUT } from "@/types/record";
import {
ASPECT_VERTICAL_LAYOUT,
ASPECT_WIDE_LAYOUT,
Recording,
} from "@/types/record";
import {
DropdownMenu,
DropdownMenuTrigger,
@ -75,6 +79,139 @@ export function TrackingDetails({
const { data: config } = useSWR<FrigateConfig>("config");
// Fetch recording segments for the event's time range to handle motion-only gaps
const eventStartRecord = useMemo(
() => (event.start_time ?? 0) + annotationOffset / 1000,
[event.start_time, annotationOffset],
);
const eventEndRecord = useMemo(
() => (event.end_time ?? Date.now() / 1000) + annotationOffset / 1000,
[event.end_time, annotationOffset],
);
const { data: recordings } = useSWR<Recording[]>(
event.camera
? [
`${event.camera}/recordings`,
{
after: eventStartRecord - REVIEW_PADDING,
before: eventEndRecord + REVIEW_PADDING,
},
]
: null,
);
// Convert a timeline timestamp to actual video player time, accounting for
// motion-only recording gaps. Uses the same algorithm as DynamicVideoController.
const timestampToVideoTime = useCallback(
(timestamp: number): number => {
if (!recordings || recordings.length === 0) {
// Fallback to simple calculation if no recordings data
return timestamp - (eventStartRecord - REVIEW_PADDING);
}
const videoStartTime = eventStartRecord - REVIEW_PADDING;
// If timestamp is before video start, return 0
if (timestamp < videoStartTime) return 0;
// Check if timestamp is before the first recording or after the last
if (
timestamp < recordings[0].start_time ||
timestamp > recordings[recordings.length - 1].end_time
) {
// No recording available at this timestamp
return 0;
}
// Calculate the inpoint offset - the HLS video may start partway through the first segment
let inpointOffset = 0;
if (
videoStartTime > recordings[0].start_time &&
videoStartTime < recordings[0].end_time
) {
inpointOffset = videoStartTime - recordings[0].start_time;
}
let seekSeconds = 0;
for (const segment of recordings) {
// Skip segments that end before our timestamp
if (segment.end_time <= timestamp) {
// Add this segment's duration, but subtract inpoint offset from first segment
if (segment === recordings[0]) {
seekSeconds += segment.duration - inpointOffset;
} else {
seekSeconds += segment.duration;
}
} else if (segment.start_time <= timestamp) {
// The timestamp is within this segment
if (segment === recordings[0]) {
// For the first segment, account for the inpoint offset
seekSeconds +=
timestamp - Math.max(segment.start_time, videoStartTime);
} else {
seekSeconds += timestamp - segment.start_time;
}
break;
}
}
return seekSeconds;
},
[recordings, eventStartRecord],
);
// Convert video player time back to timeline timestamp, accounting for
// motion-only recording gaps. Reverse of timestampToVideoTime.
const videoTimeToTimestamp = useCallback(
(playerTime: number): number => {
if (!recordings || recordings.length === 0) {
// Fallback to simple calculation if no recordings data
const videoStartTime = eventStartRecord - REVIEW_PADDING;
return playerTime + videoStartTime;
}
const videoStartTime = eventStartRecord - REVIEW_PADDING;
// Calculate the inpoint offset - the video may start partway through the first segment
let inpointOffset = 0;
if (
videoStartTime > recordings[0].start_time &&
videoStartTime < recordings[0].end_time
) {
inpointOffset = videoStartTime - recordings[0].start_time;
}
let timestamp = 0;
let totalTime = 0;
for (const segment of recordings) {
const segmentDuration =
segment === recordings[0]
? segment.duration - inpointOffset
: segment.duration;
if (totalTime + segmentDuration > playerTime) {
// The player time is within this segment
if (segment === recordings[0]) {
// For the first segment, add the inpoint offset
timestamp =
Math.max(segment.start_time, videoStartTime) +
(playerTime - totalTime);
} else {
timestamp = segment.start_time + (playerTime - totalTime);
}
break;
} else {
totalTime += segmentDuration;
}
}
return timestamp;
},
[recordings, eventStartRecord],
);
eventSequence?.map((event) => {
event.data.zones_friendly_names = event.data?.zones?.map((zone) => {
return resolveZoneName(config, zone);
@ -148,17 +285,14 @@ export function TrackingDetails({
return;
}
// For video mode: convert to video-relative time and seek player
const eventStartRecord =
(event.start_time ?? 0) + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const relativeTime = targetTimeRecord - videoStartTime;
// For video mode: convert to video-relative time (accounting for motion-only gaps)
const relativeTime = timestampToVideoTime(targetTimeRecord);
if (videoRef.current) {
videoRef.current.currentTime = relativeTime;
}
},
[event.start_time, annotationOffset, displaySource],
[annotationOffset, displaySource, timestampToVideoTime],
);
const formattedStart = config
@ -177,8 +311,9 @@ export function TrackingDetails({
})
: "";
const formattedEnd = config
? formatUnixTimestampToDateTime(event.end_time ?? 0, {
const formattedEnd =
config && event.end_time != null
? formatUnixTimestampToDateTime(event.end_time, {
timezone: config.ui.timezone,
date_format:
config.ui.time_format == "24hour"
@ -210,24 +345,14 @@ export function TrackingDetails({
}
// seekToTimestamp is a record stream timestamp
// event.start_time is detect stream time, convert to record
// The video clip starts at (eventStartRecord - REVIEW_PADDING)
// Convert to video position (accounting for motion-only recording gaps)
if (!videoRef.current) return;
const eventStartRecord = event.start_time + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const relativeTime = seekToTimestamp - videoStartTime;
const relativeTime = timestampToVideoTime(seekToTimestamp);
if (relativeTime >= 0) {
videoRef.current.currentTime = relativeTime;
}
setSeekToTimestamp(null);
}, [
seekToTimestamp,
event.start_time,
annotationOffset,
apiHost,
event.camera,
displaySource,
]);
}, [seekToTimestamp, displaySource, timestampToVideoTime]);
const isWithinEventRange = useMemo(() => {
if (effectiveTime === undefined || event.start_time === undefined) {
@ -334,14 +459,13 @@ export function TrackingDetails({
const handleTimeUpdate = useCallback(
(time: number) => {
// event.start_time is detect stream time, convert to record
const eventStartRecord = event.start_time + annotationOffset / 1000;
const videoStartTime = eventStartRecord - REVIEW_PADDING;
const absoluteTime = time + videoStartTime;
// Convert video player time back to timeline timestamp
// accounting for motion-only recording gaps
const absoluteTime = videoTimeToTimestamp(time);
setCurrentTime(absoluteTime);
},
[event.start_time, annotationOffset],
[videoTimeToTimestamp],
);
const [src, setSrc] = useState(
@ -525,9 +649,16 @@ export function TrackingDetails({
</div>
<div className="flex items-center gap-2">
<span className="capitalize">{label}</span>
<span className="md:text-md text-xs text-secondary-foreground">
{formattedStart ?? ""} - {formattedEnd ?? ""}
</span>
<div className="md:text-md flex items-center text-xs text-secondary-foreground">
{formattedStart ?? ""}
{event.end_time != null ? (
<> - {formattedEnd}</>
) : (
<div className="inline-block">
<ActivityIndicator className="ml-3 size-4" />
</div>
)}
</div>
{event.data?.recognized_license_plate && (
<>
<span className="text-secondary-foreground">·</span>

View File

@ -18,7 +18,7 @@ import { z } from "zod";
import axios from "axios";
import { toast, Toaster } from "sonner";
import { useTranslation } from "react-i18next";
import { useState, useMemo } from "react";
import { useState, useMemo, useEffect } from "react";
import { LuTrash2, LuPlus } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { FrigateConfig } from "@/types/frigateConfig";
@ -42,7 +42,15 @@ export default function CameraEditForm({
onCancel,
}: CameraEditFormProps) {
const { t } = useTranslation(["views/settings"]);
const { data: config } = useSWR<FrigateConfig>("config");
const { data: config, mutate: mutateConfig } =
useSWR<FrigateConfig>("config");
const { data: rawPaths, mutate: mutateRawPaths } = useSWR<{
cameras: Record<
string,
{ ffmpeg: { inputs: { path: string; roles: string[] }[] } }
>;
go2rtc: { streams: Record<string, string | string[]> };
}>(cameraName ? "config/raw_paths" : null);
const [isLoading, setIsLoading] = useState(false);
const formSchema = useMemo(
@ -145,14 +153,23 @@ export default function CameraEditForm({
if (cameraName && config?.cameras[cameraName]) {
const camera = config.cameras[cameraName];
defaultValues.enabled = camera.enabled ?? true;
defaultValues.ffmpeg.inputs = camera.ffmpeg?.inputs?.length
// Use raw paths from the admin endpoint if available, otherwise fall back to masked paths
const rawCameraData = rawPaths?.cameras?.[cameraName];
defaultValues.ffmpeg.inputs = rawCameraData?.ffmpeg?.inputs?.length
? rawCameraData.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
}))
: camera.ffmpeg?.inputs?.length
? camera.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
}))
: defaultValues.ffmpeg.inputs;
const go2rtcStreams = config.go2rtc?.streams || {};
const go2rtcStreams =
rawPaths?.go2rtc?.streams || config.go2rtc?.streams || {};
const cameraStreams: Record<string, string[]> = {};
// get candidate stream names for this camera. could be the camera's own name,
@ -196,6 +213,60 @@ export default function CameraEditForm({
mode: "onChange",
});
// Update form values when rawPaths loads
useEffect(() => {
if (
cameraName &&
config?.cameras[cameraName] &&
rawPaths?.cameras?.[cameraName]
) {
const camera = config.cameras[cameraName];
const rawCameraData = rawPaths.cameras[cameraName];
// Update ffmpeg inputs with raw paths
if (rawCameraData.ffmpeg?.inputs?.length) {
form.setValue(
"ffmpeg.inputs",
rawCameraData.ffmpeg.inputs.map((input) => ({
path: input.path,
roles: input.roles as Role[],
})),
);
}
// Update go2rtc streams with raw URLs
if (rawPaths.go2rtc?.streams) {
const validNames = new Set<string>();
validNames.add(cameraName);
camera.ffmpeg?.inputs?.forEach((input) => {
const restreamMatch = input.path.match(
/^rtsp:\/\/127\.0\.0\.1:8554\/([^?#/]+)(?:[?#].*)?$/,
);
if (restreamMatch) {
validNames.add(restreamMatch[1]);
}
});
const liveStreams = camera?.live?.streams;
if (liveStreams) {
Object.keys(liveStreams).forEach((key) => validNames.add(key));
}
const cameraStreams: Record<string, string[]> = {};
Object.entries(rawPaths.go2rtc.streams).forEach(([name, urls]) => {
if (validNames.has(name)) {
cameraStreams[name] = Array.isArray(urls) ? urls : [urls];
}
});
if (Object.keys(cameraStreams).length > 0) {
form.setValue("go2rtcStreams", cameraStreams);
}
}
}
}, [cameraName, config, rawPaths, form]);
const { fields, append, remove } = useFieldArray({
control: form.control,
name: "ffmpeg.inputs",
@ -268,6 +339,8 @@ export default function CameraEditForm({
}),
{ position: "top-center" },
);
mutateConfig();
mutateRawPaths();
if (onSave) onSave();
});
} else {
@ -277,6 +350,8 @@ export default function CameraEditForm({
}),
{ position: "top-center" },
);
mutateConfig();
mutateRawPaths();
if (onSave) onSave();
}
} else {

View File

@ -26,7 +26,7 @@ import useSWR from "swr";
import FilterSwitch from "@/components/filter/FilterSwitch";
import { ZoneMaskFilterButton } from "@/components/filter/ZoneMaskFilter";
import { PolygonType } from "@/types/canvas";
import CameraSettingsView from "@/views/settings/CameraSettingsView";
import CameraReviewSettingsView from "@/views/settings/CameraReviewSettingsView";
import CameraManagementView from "@/views/settings/CameraManagementView";
import MotionTunerView from "@/views/settings/MotionTunerView";
import MasksAndZonesView from "@/views/settings/MasksAndZonesView";
@ -93,7 +93,7 @@ const settingsGroups = [
label: "cameras",
items: [
{ key: "cameraManagement", component: CameraManagementView },
{ key: "cameraReview", component: CameraSettingsView },
{ key: "cameraReview", component: CameraReviewSettingsView },
{ key: "masksAndZones", component: MasksAndZonesView },
{ key: "motionTuner", component: MotionTunerView },
],

View File

@ -5,17 +5,9 @@ import { Button } from "@/components/ui/button";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { useTranslation } from "react-i18next";
import { Label } from "@/components/ui/label";
import CameraEditForm from "@/components/settings/CameraEditForm";
import CameraWizardDialog from "@/components/settings/CameraWizardDialog";
import { LuPlus } from "react-icons/lu";
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { IoMdArrowRoundBack } from "react-icons/io";
import { isDesktop } from "react-device-detect";
import { CameraNameLabel } from "@/components/camera/FriendlyNameLabel";
@ -90,31 +82,6 @@ export default function CameraManagementView({
</Button>
{cameras.length > 0 && (
<>
<div className="my-4 flex flex-col gap-2">
<Label>{t("cameraManagement.editCamera")}</Label>
<Select
onValueChange={(value) => {
setEditCameraName(value);
setViewMode("edit");
}}
>
<SelectTrigger className="w-[180px]">
<SelectValue
placeholder={t("cameraManagement.selectCamera")}
/>
</SelectTrigger>
<SelectContent>
{cameras.map((camera) => {
return (
<SelectItem key={camera} value={camera}>
<CameraNameLabel camera={camera} />
</SelectItem>
);
})}
</SelectContent>
</Select>
</div>
<Separator className="my-2 flex bg-secondary" />
<div className="max-w-7xl space-y-4">
<Heading as="h4" className="my-2">

View File

@ -0,0 +1,738 @@
import Heading from "@/components/ui/heading";
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
import { Toaster, toast } from "sonner";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import { Separator } from "@/components/ui/separator";
import { Button } from "@/components/ui/button";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { Checkbox } from "@/components/ui/checkbox";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import axios from "axios";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { MdCircle } from "react-icons/md";
import { cn } from "@/lib/utils";
import { Trans, useTranslation } from "react-i18next";
import { Switch } from "@/components/ui/switch";
import { Label } from "@/components/ui/label";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { getTranslatedLabel } from "@/utils/i18n";
import {
useAlertsState,
useDetectionsState,
useObjectDescriptionState,
useReviewDescriptionState,
} from "@/api/ws";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { formatList } from "@/utils/stringUtil";
type CameraReviewSettingsViewProps = {
selectedCamera: string;
setUnsavedChanges: React.Dispatch<React.SetStateAction<boolean>>;
};
type CameraReviewSettingsValueType = {
alerts_zones: string[];
detections_zones: string[];
};
export default function CameraReviewSettingsView({
selectedCamera,
setUnsavedChanges,
}: CameraReviewSettingsViewProps) {
const { t } = useTranslation(["views/settings"]);
const { getLocaleDocUrl } = useDocDomain();
const { data: config, mutate: updateConfig } =
useSWR<FrigateConfig>("config");
const cameraConfig = useMemo(() => {
if (config && selectedCamera) {
return config.cameras[selectedCamera];
}
}, [config, selectedCamera]);
const [changedValue, setChangedValue] = useState(false);
const [isLoading, setIsLoading] = useState(false);
const [selectDetections, setSelectDetections] = useState(false);
const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;
const selectCameraName = useCameraFriendlyName(selectedCamera);
// zones and labels
const getZoneName = useCallback(
(zoneId: string, cameraId?: string) =>
resolveZoneName(config, zoneId, cameraId),
[config],
);
const zones = useMemo(() => {
if (cameraConfig) {
return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
camera: cameraConfig.name,
name,
friendly_name: cameraConfig.zones[name].friendly_name,
objects: zoneData.objects,
color: zoneData.color,
}));
}
}, [cameraConfig]);
const alertsLabels = useMemo(() => {
return cameraConfig?.review.alerts.labels
? formatList(
cameraConfig.review.alerts.labels.map((label) =>
getTranslatedLabel(
label,
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
),
),
)
: "";
}, [cameraConfig]);
const detectionsLabels = useMemo(() => {
return cameraConfig?.review.detections.labels
? formatList(
cameraConfig.review.detections.labels.map((label) =>
getTranslatedLabel(
label,
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
),
),
)
: "";
}, [cameraConfig]);
// form
const formSchema = z.object({
alerts_zones: z.array(z.string()),
detections_zones: z.array(z.string()),
});
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
mode: "onChange",
defaultValues: {
alerts_zones: cameraConfig?.review.alerts.required_zones || [],
detections_zones: cameraConfig?.review.detections.required_zones || [],
},
});
const watchedAlertsZones = form.watch("alerts_zones");
const watchedDetectionsZones = form.watch("detections_zones");
const { payload: alertsState, send: sendAlerts } =
useAlertsState(selectedCamera);
const { payload: detectionsState, send: sendDetections } =
useDetectionsState(selectedCamera);
const { payload: objDescState, send: sendObjDesc } =
useObjectDescriptionState(selectedCamera);
const { payload: revDescState, send: sendRevDesc } =
useReviewDescriptionState(selectedCamera);
const handleCheckedChange = useCallback(
(isChecked: boolean) => {
if (!isChecked) {
form.reset({
alerts_zones: watchedAlertsZones,
detections_zones: [],
});
}
setChangedValue(true);
setSelectDetections(isChecked as boolean);
},
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
[watchedAlertsZones],
);
const saveToConfig = useCallback(
async (
{ alerts_zones, detections_zones }: CameraReviewSettingsValueType, // values submitted via the form
) => {
const createQuery = (zones: string[], type: "alerts" | "detections") =>
zones.length
? zones
.map(
(zone) =>
`&cameras.${selectedCamera}.review.${type}.required_zones=${zone}`,
)
.join("")
: cameraConfig?.review[type]?.required_zones &&
cameraConfig?.review[type]?.required_zones.length > 0
? `&cameras.${selectedCamera}.review.${type}.required_zones`
: "";
const alertQueries = createQuery(alerts_zones, "alerts");
const detectionQueries = createQuery(detections_zones, "detections");
axios
.put(`config/set?${alertQueries}${detectionQueries}`, {
requires_restart: 0,
})
.then((res) => {
if (res.status === 200) {
toast.success(
t("cameraReview.reviewClassification.toast.success"),
{
position: "top-center",
},
);
updateConfig();
} else {
toast.error(
t("toast.save.error.title", {
errorMessage: res.statusText,
ns: "common",
}),
{
position: "top-center",
},
);
}
})
.catch((error) => {
const errorMessage =
error.response?.data?.message ||
error.response?.data?.detail ||
"Unknown error";
toast.error(
t("toast.save.error.title", {
errorMessage,
ns: "common",
}),
{
position: "top-center",
},
);
})
.finally(() => {
setIsLoading(false);
});
},
[updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
);
const onCancel = useCallback(() => {
if (!cameraConfig) {
return;
}
setChangedValue(false);
setUnsavedChanges(false);
removeMessage(
"camera_settings",
`review_classification_settings_${selectedCamera}`,
);
form.reset({
alerts_zones: cameraConfig?.review.alerts.required_zones ?? [],
detections_zones: cameraConfig?.review.detections.required_zones || [],
});
setSelectDetections(
!!cameraConfig?.review.detections.required_zones?.length,
);
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [removeMessage, selectedCamera, setUnsavedChanges, cameraConfig]);
useEffect(() => {
onCancel();
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedCamera]);
useEffect(() => {
if (changedValue) {
addMessage(
"camera_settings",
t("cameraReview.reviewClassification.unsavedChanges", {
camera: selectedCamera,
}),
undefined,
`review_classification_settings_${selectedCamera}`,
);
} else {
removeMessage(
"camera_settings",
`review_classification_settings_${selectedCamera}`,
);
}
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [changedValue, selectedCamera]);
function onSubmit(values: z.infer<typeof formSchema>) {
setIsLoading(true);
saveToConfig(values as CameraReviewSettingsValueType);
}
useEffect(() => {
document.title = t("documentTitle.cameraReview");
}, [t]);
if (!cameraConfig && !selectedCamera) {
return <ActivityIndicator />;
}
return (
<>
<div className="flex size-full flex-col md:flex-row">
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
<Heading as="h4" className="mb-2">
{t("cameraReview.title")}
</Heading>
<Heading as="h4" className="my-2">
<Trans ns="views/settings">cameraReview.review.title</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={alertsState == "ON"}
onCheckedChange={(isChecked) => {
sendAlerts(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="alerts-enabled">
<Trans ns="views/settings">cameraReview.review.alerts</Trans>
</Label>
</div>
</div>
<div className="flex flex-col">
<div className="flex flex-row items-center">
<Switch
id="detections-enabled"
className="mr-3"
checked={detectionsState == "ON"}
onCheckedChange={(isChecked) => {
sendDetections(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="detections-enabled">
<Trans ns="views/settings">camera.review.detections</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">cameraReview.review.desc</Trans>
</div>
</div>
</div>
{cameraConfig?.objects?.genai?.enabled_in_config && (
<>
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.object_descriptions.title
</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={objDescState == "ON"}
onCheckedChange={(isChecked) => {
sendObjDesc(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="genai-enabled">
<Trans>button.enabled</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">
cameraReview.object_descriptions.desc
</Trans>
</div>
</div>
</>
)}
{cameraConfig?.review?.genai?.enabled_in_config && (
<>
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.review_descriptions.title
</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={revDescState == "ON"}
onCheckedChange={(isChecked) => {
sendRevDesc(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="genai-enabled">
<Trans>button.enabled</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">
cameraReview.review_descriptions.desc
</Trans>
</div>
</div>
</>
)}
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.reviewClassification.title
</Trans>
</Heading>
<div className="max-w-6xl">
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
<p>
<Trans ns="views/settings">
cameraReview.reviewClassification.desc
</Trans>
</p>
<div className="flex items-center text-primary">
<Link
to={getLocaleDocUrl("configuration/review")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</div>
<Form {...form}>
<form
onSubmit={form.handleSubmit(onSubmit)}
className="mt-2 space-y-6"
>
<div
className={cn(
"w-full max-w-5xl space-y-0",
zones &&
zones?.length > 0 &&
"grid items-start gap-5 md:grid-cols-2",
)}
>
<FormField
control={form.control}
name="alerts_zones"
render={() => (
<FormItem>
{zones && zones?.length > 0 ? (
<>
<div className="mb-2">
<FormLabel className="flex flex-row items-center text-base">
<Trans ns="views/settings">
camera.review.alerts
</Trans>
<MdCircle className="ml-3 size-2 text-severity_alert" />
</FormLabel>
<FormDescription>
<Trans ns="views/settings">
cameraReview.reviewClassification.selectAlertsZones
</Trans>
</FormDescription>
</div>
<div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
{zones?.map((zone) => (
<FormField
key={zone.name}
control={form.control}
name="alerts_zones"
render={({ field }) => (
<FormItem
key={zone.name}
className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
>
<FormControl>
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={field.value?.includes(
zone.name,
)}
onCheckedChange={(checked) => {
setChangedValue(true);
return checked
? field.onChange([
...field.value,
zone.name,
])
: field.onChange(
field.value?.filter(
(value) =>
value !== zone.name,
),
);
}}
/>
</FormControl>
<FormLabel
className={cn(
"font-normal",
!zone.friendly_name &&
"smart-capitalize",
)}
>
{zone.friendly_name || zone.name}
</FormLabel>
</FormItem>
)}
/>
))}
</div>
</>
) : (
<div className="font-normal text-destructive">
<Trans ns="views/settings">
cameraReview.reviewClassification.noDefinedZones
</Trans>
</div>
)}
<FormMessage />
<div className="text-sm">
{watchedAlertsZones && watchedAlertsZones.length > 0
? t(
"cameraReview.reviewClassification.zoneObjectAlertsTips",
{
alertsLabels,
zone: formatList(
watchedAlertsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
},
)
: t(
"cameraReview.reviewClassification.objectAlertsTips",
{
alertsLabels,
cameraName: selectCameraName,
},
)}
</div>
</FormItem>
)}
/>
<FormField
control={form.control}
name="detections_zones"
render={() => (
<FormItem>
{zones && zones?.length > 0 && (
<>
<div className="mb-2">
<FormLabel className="flex flex-row items-center text-base">
<Trans ns="views/settings">
camera.review.detections
</Trans>
<MdCircle className="ml-3 size-2 text-severity_detection" />
</FormLabel>
{selectDetections && (
<FormDescription>
<Trans ns="views/settings">
cameraReview.reviewClassification.selectDetectionsZones
</Trans>
</FormDescription>
)}
</div>
{selectDetections && (
<div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
{zones?.map((zone) => (
<FormField
key={zone.name}
control={form.control}
name="detections_zones"
render={({ field }) => (
<FormItem
key={zone.name}
className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
>
<FormControl>
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={field.value?.includes(
zone.name,
)}
onCheckedChange={(checked) => {
return checked
? field.onChange([
...field.value,
zone.name,
])
: field.onChange(
field.value?.filter(
(value) =>
value !== zone.name,
),
);
}}
/>
</FormControl>
<FormLabel
className={cn(
"font-normal",
!zone.friendly_name &&
"smart-capitalize",
)}
>
{zone.friendly_name || zone.name}
</FormLabel>
</FormItem>
)}
/>
))}
</div>
)}
<FormMessage />
<div className="mb-0 flex flex-row items-center gap-2">
<Checkbox
id="select-detections"
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={selectDetections}
onCheckedChange={handleCheckedChange}
/>
<div className="grid gap-1.5 leading-none">
<label
htmlFor="select-detections"
className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70"
>
<Trans ns="views/settings">
cameraReview.reviewClassification.limitDetections
</Trans>
</label>
</div>
</div>
</>
)}
<div className="text-sm">
{watchedDetectionsZones &&
watchedDetectionsZones.length > 0 ? (
!selectDetections ? (
<Trans
i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.text"
values={{
detectionsLabels,
zone: formatList(
watchedDetectionsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
}}
ns="views/settings"
/>
) : (
<Trans
i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.notSelectDetections"
values={{
detectionsLabels,
zone: formatList(
watchedDetectionsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
}}
ns="views/settings"
/>
)
) : (
<Trans
i18nKey="cameraReview.reviewClassification.objectDetectionsTips"
values={{
detectionsLabels,
cameraName: selectCameraName,
}}
ns="views/settings"
/>
)}
</div>
</FormItem>
)}
/>
</div>
<Separator className="my-2 flex bg-secondary" />
<div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
<Button
className="flex flex-1"
aria-label={t("button.reset", { ns: "common" })}
onClick={onCancel}
type="button"
>
<Trans>button.reset</Trans>
</Button>
<Button
variant="select"
disabled={isLoading}
className="flex flex-1"
aria-label={t("button.save", { ns: "common" })}
type="submit"
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>
<Trans>button.saving</Trans>
</span>
</div>
) : (
<Trans>button.save</Trans>
)}
</Button>
</div>
</form>
</Form>
</div>
</div>
</>
);
}

View File

@ -1,794 +0,0 @@
import Heading from "@/components/ui/heading";
import { useCallback, useContext, useEffect, useMemo, useState } from "react";
import { Toaster, toast } from "sonner";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/components/ui/form";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import { Separator } from "@/components/ui/separator";
import { Button } from "@/components/ui/button";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { Checkbox } from "@/components/ui/checkbox";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import axios from "axios";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { MdCircle } from "react-icons/md";
import { cn } from "@/lib/utils";
import { Trans, useTranslation } from "react-i18next";
import { Switch } from "@/components/ui/switch";
import { Label } from "@/components/ui/label";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { getTranslatedLabel } from "@/utils/i18n";
import {
useAlertsState,
useDetectionsState,
useObjectDescriptionState,
useReviewDescriptionState,
} from "@/api/ws";
import CameraEditForm from "@/components/settings/CameraEditForm";
import CameraWizardDialog from "@/components/settings/CameraWizardDialog";
import { IoMdArrowRoundBack } from "react-icons/io";
import { isDesktop } from "react-device-detect";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { resolveZoneName } from "@/hooks/use-zone-friendly-name";
import { formatList } from "@/utils/stringUtil";
type CameraSettingsViewProps = {
selectedCamera: string;
setUnsavedChanges: React.Dispatch<React.SetStateAction<boolean>>;
};
type CameraReviewSettingsValueType = {
alerts_zones: string[];
detections_zones: string[];
};
export default function CameraSettingsView({
selectedCamera,
setUnsavedChanges,
}: CameraSettingsViewProps) {
const { t } = useTranslation(["views/settings"]);
const { getLocaleDocUrl } = useDocDomain();
const { data: config, mutate: updateConfig } =
useSWR<FrigateConfig>("config");
const cameraConfig = useMemo(() => {
if (config && selectedCamera) {
return config.cameras[selectedCamera];
}
}, [config, selectedCamera]);
const [changedValue, setChangedValue] = useState(false);
const [isLoading, setIsLoading] = useState(false);
const [selectDetections, setSelectDetections] = useState(false);
const [viewMode, setViewMode] = useState<"settings" | "add" | "edit">(
"settings",
); // Control view state
const [editCameraName, setEditCameraName] = useState<string | undefined>(
undefined,
); // Track camera being edited
const [showWizard, setShowWizard] = useState(false);
const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;
const selectCameraName = useCameraFriendlyName(selectedCamera);
// zones and labels
const getZoneName = useCallback(
(zoneId: string, cameraId?: string) =>
resolveZoneName(config, zoneId, cameraId),
[config],
);
const zones = useMemo(() => {
if (cameraConfig) {
return Object.entries(cameraConfig.zones).map(([name, zoneData]) => ({
camera: cameraConfig.name,
name,
friendly_name: cameraConfig.zones[name].friendly_name,
objects: zoneData.objects,
color: zoneData.color,
}));
}
}, [cameraConfig]);
const alertsLabels = useMemo(() => {
return cameraConfig?.review.alerts.labels
? formatList(
cameraConfig.review.alerts.labels.map((label) =>
getTranslatedLabel(
label,
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
),
),
)
: "";
}, [cameraConfig]);
const detectionsLabels = useMemo(() => {
return cameraConfig?.review.detections.labels
? formatList(
cameraConfig.review.detections.labels.map((label) =>
getTranslatedLabel(
label,
cameraConfig?.audio?.listen?.includes(label) ? "audio" : "object",
),
),
)
: "";
}, [cameraConfig]);
// form
const formSchema = z.object({
alerts_zones: z.array(z.string()),
detections_zones: z.array(z.string()),
});
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
mode: "onChange",
defaultValues: {
alerts_zones: cameraConfig?.review.alerts.required_zones || [],
detections_zones: cameraConfig?.review.detections.required_zones || [],
},
});
const watchedAlertsZones = form.watch("alerts_zones");
const watchedDetectionsZones = form.watch("detections_zones");
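// live on/off states for alerts, detections, and the GenAI object/review descriptions, backed by the ws API hooks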
const { payload: alertsState, send: sendAlerts } =
useAlertsState(selectedCamera);
const { payload: detectionsState, send: sendDetections } =
useDetectionsState(selectedCamera);
const { payload: objDescState, send: sendObjDesc } =
useObjectDescriptionState(selectedCamera);
const { payload: revDescState, send: sendRevDesc } =
useReviewDescriptionState(selectedCamera);
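// toggling the "limit detections to specific zones" checkbox; unchecking clears any selected detection zones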
const handleCheckedChange = useCallback(
(isChecked: boolean) => {
if (!isChecked) {
form.reset({
alerts_zones: watchedAlertsZones,
detections_zones: [],
});
}
setChangedValue(true);
setSelectDetections(isChecked as boolean);
},
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
[watchedAlertsZones],
);
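// persist the selected required zones by sending query-string updates to the config/set endpoint (no restart required)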
const saveToConfig = useCallback(
async (
{ alerts_zones, detections_zones }: CameraReviewSettingsValueType, // values submitted via the form
) => {
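// build &cameras.<camera>.review.<type>.required_zones=<zone> params for each selected zone; an empty selection with previously configured zones emits the bare key so the saved zones are removed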
const createQuery = (zones: string[], type: "alerts" | "detections") =>
zones.length
? zones
.map(
(zone) =>
`&cameras.${selectedCamera}.review.${type}.required_zones=${zone}`,
)
.join("")
: cameraConfig?.review[type]?.required_zones &&
cameraConfig?.review[type]?.required_zones.length > 0
? `&cameras.${selectedCamera}.review.${type}.required_zones`
: "";
const alertQueries = createQuery(alerts_zones, "alerts");
const detectionQueries = createQuery(detections_zones, "detections");
axios
.put(`config/set?${alertQueries}${detectionQueries}`, {
requires_restart: 0,
})
.then((res) => {
if (res.status === 200) {
toast.success(
t("cameraReview.reviewClassification.toast.success"),
{
position: "top-center",
},
);
updateConfig();
} else {
toast.error(
t("toast.save.error.title", {
errorMessage: res.statusText,
ns: "common",
}),
{
position: "top-center",
},
);
}
})
.catch((error) => {
const errorMessage =
error.response?.data?.message ||
error.response?.data?.detail ||
"Unknown error";
toast.error(
t("toast.save.error.title", {
errorMessage,
ns: "common",
}),
{
position: "top-center",
},
);
})
.finally(() => {
setIsLoading(false);
});
},
[updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
);
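// discard unsaved edits: restore the form to the values currently saved in the config and clear the status bar message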
const onCancel = useCallback(() => {
if (!cameraConfig) {
return;
}
setChangedValue(false);
setUnsavedChanges(false);
removeMessage(
"camera_settings",
`review_classification_settings_${selectedCamera}`,
);
form.reset({
alerts_zones: cameraConfig?.review.alerts.required_zones ?? [],
detections_zones: cameraConfig?.review.detections.required_zones || [],
});
setSelectDetections(
!!cameraConfig?.review.detections.required_zones?.length,
);
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [removeMessage, selectedCamera, setUnsavedChanges, cameraConfig]);
useEffect(() => {
onCancel();
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedCamera]);
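// show or clear the "unsaved changes" status bar message as the form is modified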
useEffect(() => {
if (changedValue) {
addMessage(
"camera_settings",
t("cameraReview.reviewClassification.unsavedChanges", {
camera: selectedCamera,
}),
undefined,
`review_classification_settings_${selectedCamera}`,
);
} else {
removeMessage(
"camera_settings",
`review_classification_settings_${selectedCamera}`,
);
}
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [changedValue, selectedCamera]);
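// form submit: hand the validated zone selections off to saveToConfig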
function onSubmit(values: z.infer<typeof formSchema>) {
setIsLoading(true);
saveToConfig(values as CameraReviewSettingsValueType);
}
useEffect(() => {
document.title = t("documentTitle.cameraReview");
}, [t]);
// Handle back navigation from add/edit form
const handleBack = useCallback(() => {
setViewMode("settings");
setEditCameraName(undefined);
updateConfig();
}, [updateConfig]);
if (!cameraConfig && !selectedCamera && viewMode === "settings") {
return <ActivityIndicator />;
}
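// render either the review settings view or the add/edit camera form, depending on viewMode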
return (
<>
<div className="flex size-full flex-col md:flex-row">
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none">
{viewMode === "settings" ? (
<>
<Heading as="h4" className="mb-2">
{t("cameraReview.title")}
</Heading>
<Heading as="h4" className="my-2">
<Trans ns="views/settings">cameraReview.review.title</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={alertsState == "ON"}
onCheckedChange={(isChecked) => {
sendAlerts(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="alerts-enabled">
<Trans ns="views/settings">
cameraReview.review.alerts
</Trans>
</Label>
</div>
</div>
<div className="flex flex-col">
<div className="flex flex-row items-center">
<Switch
id="detections-enabled"
className="mr-3"
checked={detectionsState == "ON"}
onCheckedChange={(isChecked) => {
sendDetections(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="detections-enabled">
<Trans ns="views/settings">
cameraReview.review.detections
</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">cameraReview.review.desc</Trans>
</div>
</div>
</div>
{cameraConfig?.objects?.genai?.enabled_in_config && (
<>
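{/* GenAI object descriptions toggle, rendered only when enabled in the config */}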
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.object_descriptions.title
</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={objDescState == "ON"}
onCheckedChange={(isChecked) => {
sendObjDesc(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="genai-enabled">
<Trans>button.enabled</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">
cameraReview.object_descriptions.desc
</Trans>
</div>
</div>
</>
)}
{cameraConfig?.review?.genai?.enabled_in_config && (
<>
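{/* GenAI review descriptions toggle, rendered only when enabled in the config */}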
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.review_descriptions.title
</Trans>
</Heading>
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 space-y-3 text-sm text-primary-variant">
<div className="flex flex-row items-center">
<Switch
id="alerts-enabled"
className="mr-3"
checked={revDescState == "ON"}
onCheckedChange={(isChecked) => {
sendRevDesc(isChecked ? "ON" : "OFF");
}}
/>
<div className="space-y-0.5">
<Label htmlFor="genai-enabled">
<Trans>button.enabled</Trans>
</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
<Trans ns="views/settings">
cameraReview.review_descriptions.desc
</Trans>
</div>
</div>
</>
)}
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
<Trans ns="views/settings">
cameraReview.reviewClassification.title
</Trans>
</Heading>
<div className="max-w-6xl">
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
<p>
<Trans ns="views/settings">
cameraReview.reviewClassification.desc
</Trans>
</p>
<div className="flex items-center text-primary">
<Link
to={getLocaleDocUrl("configuration/review")}
target="_blank"
rel="noopener noreferrer"
className="inline"
>
{t("readTheDocumentation", { ns: "common" })}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</div>
<Form {...form}>
<form
onSubmit={form.handleSubmit(onSubmit)}
className="mt-2 space-y-6"
>
<div
className={cn(
"w-full max-w-5xl space-y-0",
zones &&
zones?.length > 0 &&
"grid items-start gap-5 md:grid-cols-2",
)}
>
<FormField
control={form.control}
name="alerts_zones"
render={() => (
<FormItem>
{zones && zones?.length > 0 ? (
<>
<div className="mb-2">
<FormLabel className="flex flex-row items-center text-base">
<Trans ns="views/settings">
camera.review.alerts
</Trans>
<MdCircle className="ml-3 size-2 text-severity_alert" />
</FormLabel>
<FormDescription>
<Trans ns="views/settings">
cameraReview.reviewClassification.selectAlertsZones
</Trans>
</FormDescription>
</div>
<div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
{zones?.map((zone) => (
<FormField
key={zone.name}
control={form.control}
name="alerts_zones"
render={({ field }) => (
<FormItem
key={zone.name}
className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
>
<FormControl>
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={field.value?.includes(
zone.name,
)}
onCheckedChange={(checked) => {
setChangedValue(true);
return checked
? field.onChange([
...field.value,
zone.name,
])
: field.onChange(
field.value?.filter(
(value) =>
value !== zone.name,
),
);
}}
/>
</FormControl>
<FormLabel
className={cn(
"font-normal",
!zone.friendly_name &&
"smart-capitalize",
)}
>
{zone.friendly_name || zone.name}
</FormLabel>
</FormItem>
)}
/>
))}
</div>
</>
) : (
<div className="font-normal text-destructive">
<Trans ns="views/settings">
cameraReview.reviewClassification.noDefinedZones
</Trans>
</div>
)}
<FormMessage />
<div className="text-sm">
{watchedAlertsZones && watchedAlertsZones.length > 0
? t(
"cameraReview.reviewClassification.zoneObjectAlertsTips",
{
alertsLabels,
zone: formatList(
watchedAlertsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
},
)
: t(
"cameraReview.reviewClassification.objectAlertsTips",
{
alertsLabels,
cameraName: selectCameraName,
},
)}
</div>
</FormItem>
)}
/>
<FormField
control={form.control}
name="detections_zones"
render={() => (
<FormItem>
{zones && zones?.length > 0 && (
<>
<div className="mb-2">
<FormLabel className="flex flex-row items-center text-base">
<Trans ns="views/settings">
camera.review.detections
</Trans>
<MdCircle className="ml-3 size-2 text-severity_detection" />
</FormLabel>
{selectDetections && (
<FormDescription>
<Trans ns="views/settings">
cameraReview.reviewClassification.selectDetectionsZones
</Trans>
</FormDescription>
)}
</div>
{selectDetections && (
<div className="max-w-md rounded-lg bg-secondary p-4 md:max-w-full">
{zones?.map((zone) => (
<FormField
key={zone.name}
control={form.control}
name="detections_zones"
render={({ field }) => (
<FormItem
key={zone.name}
className="mb-3 flex flex-row items-center space-x-3 space-y-0 last:mb-0"
>
<FormControl>
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={field.value?.includes(
zone.name,
)}
onCheckedChange={(checked) => {
return checked
? field.onChange([
...field.value,
zone.name,
])
: field.onChange(
field.value?.filter(
(value) =>
value !== zone.name,
),
);
}}
/>
</FormControl>
<FormLabel
className={cn(
"font-normal",
!zone.friendly_name &&
"smart-capitalize",
)}
>
{zone.friendly_name || zone.name}
</FormLabel>
</FormItem>
)}
/>
))}
</div>
)}
<FormMessage />
<div className="mb-0 flex flex-row items-center gap-2">
<Checkbox
id="select-detections"
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={selectDetections}
onCheckedChange={handleCheckedChange}
/>
<div className="grid gap-1.5 leading-none">
<label
htmlFor="select-detections"
className="text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70"
>
<Trans ns="views/settings">
cameraReview.reviewClassification.limitDetections
</Trans>
</label>
</div>
</div>
</>
)}
<div className="text-sm">
{watchedDetectionsZones &&
watchedDetectionsZones.length > 0 ? (
!selectDetections ? (
<Trans
i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.text"
values={{
detectionsLabels,
zone: formatList(
watchedDetectionsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
}}
ns="views/settings"
/>
) : (
<Trans
i18nKey="cameraReview.reviewClassification.zoneObjectDetectionsTips.notSelectDetections"
values={{
detectionsLabels,
zone: formatList(
watchedDetectionsZones.map((zone) =>
getZoneName(zone),
),
),
cameraName: selectCameraName,
}}
ns="views/settings"
/>
)
) : (
<Trans
i18nKey="cameraReview.reviewClassification.objectDetectionsTips"
values={{
detectionsLabels,
cameraName: selectCameraName,
}}
ns="views/settings"
/>
)}
</div>
</FormItem>
)}
/>
</div>
<Separator className="my-2 flex bg-secondary" />
<div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
<Button
className="flex flex-1"
aria-label={t("button.reset", { ns: "common" })}
onClick={onCancel}
type="button"
>
<Trans>button.reset</Trans>
</Button>
<Button
variant="select"
disabled={isLoading}
className="flex flex-1"
aria-label={t("button.save", { ns: "common" })}
type="submit"
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>
<Trans>button.saving</Trans>
</span>
</div>
) : (
<Trans>button.save</Trans>
)}
</Button>
</div>
</form>
</Form>
</>
) : (
<>
<div className="mb-4 flex items-center gap-2">
<Button
className="flex items-center gap-2.5 rounded-lg"
aria-label={t("label.back", { ns: "common" })}
size="sm"
onClick={handleBack}
>
<IoMdArrowRoundBack className="size-5 text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{t("button.back", { ns: "common" })}
</div>
)}
</Button>
</div>
<div className="md:max-w-5xl">
<CameraEditForm
cameraName={viewMode === "edit" ? editCameraName : undefined}
onSave={handleBack}
onCancel={handleBack}
/>
</div>
</>
)}
</div>
</div>
<CameraWizardDialog
open={showWizard}
onClose={() => setShowWizard(false)}
/>
</>
);
}