Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-07 14:04:10 +03:00)

Compare commits: 0a7b2c6a2b...4824ec5ac6 (20 commits)
| SHA1 |
|---|
| 4824ec5ac6 |
| 46f97f9aae |
| c9c6dbb581 |
| 12dd8570df |
| 7e07111483 |
| 4ccaecf294 |
| bcaad3a017 |
| 8728ce228d |
| 952cf19593 |
| 21bbf25ba1 |
| 88476b3117 |
| 8127551d83 |
| b2ad8a783b |
| 01395cde9f |
| 5e51792fad |
| 3b4e5aae03 |
| 0e7fd7410f |
| 198f19654c |
| 1f9669bbe5 |
| 9d4aac2b8e |
**Spelling dictionary** (one word added; `overfitting` is marked as the addition here since it is the term introduced by the docs changes later in this comparison):

```diff
@@ -191,6 +191,7 @@ ONVIF
 openai
 opencv
 openvino
+overfitting
 OWASP
 paddleocr
 paho
```
**README_CN.md** (51 lines changed):

```diff
@@ -1,28 +1,31 @@
 <p align="center">
-  <img align="center" alt="logo" src="docs/static/img/frigate.png">
+  <img align="center" alt="logo" src="docs/static/img/branding/frigate.png">
 </p>
 
-# Frigate - 一个具有实时目标检测的本地NVR
+# Frigate NVR™ - 一个具有实时目标检测的本地 NVR
 
 [English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
 
 [](https://opensource.org/licenses/MIT)
+<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
+  <img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
+</a>
 
-一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备AI物体检测功能。使用OpenCV和TensorFlow在本地为IP摄像头执行实时物体检测。
+一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。
 
-强烈推荐使用GPU或者AI加速器(例如[Google Coral加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/))。它们的性能甚至超过目前的顶级CPU,并且可以以极低的耗电实现更优的性能。
+强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU,并且功耗也极低。
 
-- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与Home Assistant紧密集成
-- 设计上通过仅在必要时和必要地点寻找物体,最大限度地减少资源使用并最大化性能
+- 通过[自定义组件](https://github.com/blakeblackshear/frigate-hass-integration)与 Home Assistant 紧密集成
+- 设计上通过仅在必要时和必要地点寻找目标,最大限度地减少资源使用并最大化性能
 - 大量利用多进程处理,强调实时性而非处理每一帧
-- 使用非常低开销的运动检测来确定运行物体检测的位置
-- 使用TensorFlow进行物体检测,运行在单独的进程中以达到最大FPS
-- 通过MQTT进行通信,便于集成到其他系统中
+- 使用非常低开销的画面变动检测(也叫运动检测)来确定运行目标检测的位置
+- 使用 TensorFlow 进行目标检测,并运行在单独的进程中以达到最大 FPS
+- 通过 MQTT 进行通信,便于集成到其他系统中
 - 根据检测到的物体设置保留时间进行视频录制
-- 24/7全天候录制
-- 通过RTSP重新流传输以减少摄像头的连接数
-- 支持WebRTC和MSE,实现低延迟的实时观看
+- 24/7 全天候录制
+- 通过 RTSP 重新流传输以减少摄像头的连接数
+- 支持 WebRTC 和 MSE,实现低延迟的实时观看
 
 ## 社区中文翻译文档
@@ -32,39 +35,55 @@
 如果您想通过捐赠支持开发,请使用 [Github Sponsors](https://github.com/sponsors/blakeblackshear)。
 
+## 协议
+
+本项目采用 **MIT 许可证**授权。
+
+**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。
+
+**商标部分**:“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标**,**不在** MIT 许可证覆盖范围内。
+有关品牌资产的规范使用详情,请参阅我们的[《商标政策》](TRADEMARK.md)。
+
 ## 截图
 
 ### 实时监控面板
 
 <div>
 <img width="800" alt="实时监控面板" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
 </div>
 
 ### 简单的核查工作流程
 
 <div>
 <img width="800" alt="简单的审查工作流程" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
 </div>
 
 ### 多摄像头可按时间轴查看
 
 <div>
 <img width="800" alt="多摄像头可按时间轴查看" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
 </div>
 
 ### 内置遮罩和区域编辑器
 
 <div>
 <img width="800" alt="内置遮罩和区域编辑器" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
 </div>
 
 ## 翻译
 
 我们使用 [Weblate](https://hosted.weblate.org/projects/frigate-nvr/) 平台提供翻译支持,欢迎参与进来一起完善。
 
 ## 非官方中文讨论社区
-欢迎加入中文讨论QQ群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)
+
+欢迎加入中文讨论 QQ 群:[1043861059](https://qm.qq.com/q/7vQKsTmSz)
+
+Bilibili:https://space.bilibili.com/3546894915602564
+
+## 中文社区赞助商
+
+[](https://edgeone.ai/zh?from=github)
+本项目 CDN 加速及安全防护由 Tencent EdgeOne 赞助
+
+---
+
+**Copyright © 2025 Frigate LLC.**
```
**Audio transcription docs (FAQ)**:

```diff
@@ -168,6 +168,8 @@ Recorded `speech` events will always use a `whisper` model, regardless of the `m
 
 If you hear speech that’s actually important and worth saving/indexing for the future, **just press the transcribe button in Explore** on that specific `speech` event - that keeps things explicit, reliable, and under your control.
 
+Other options are being considered for future versions of Frigate to add transcription options that support external `whisper` Docker containers. A single transcription service could then be shared by Frigate and other applications (for example, Home Assistant Voice), and run on more powerful machines when available.
+
 2. Why don't you save live transcription text and use that for `speech` events?
 
 There’s no guarantee that a `speech` event is even created from the exact audio that went through the transcription model. Live transcription and `speech` event creation are **separate, asynchronous processes**. Even when both are correctly configured, trying to align the **precise start and end time of a speech event** with whatever audio the model happened to be processing at that moment is unreliable.
```
**Custom classification docs** ("Improving the Model" expanded):

```diff
@@ -69,4 +69,6 @@ Once all images are assigned, training will begin automatically.
 ### Improving the Model
 
+- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
-- **Data collection**: Use the model’s Recent Classifications tab to gather balanced examples across times of day and weather.
+- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
 - **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
+- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
```
**Reference configuration docs** (classification options added after `audio_transcription`):

```diff
@@ -710,6 +710,44 @@ audio_transcription:
   # List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
   language: en
 
+# Optional: Configuration for classification models
+classification:
+  # Optional: Configuration for bird classification
+  bird:
+    # Optional: Enable bird classification (default: shown below)
+    enabled: False
+    # Optional: Minimum classification score required to be considered a match (default: shown below)
+    threshold: 0.9
+  custom:
+    # Required: name of the classification model
+    model_name:
+      # Optional: Enable running the model (default: shown below)
+      enabled: True
+      # Optional: Name of classification model (default: shown below)
+      name: None
+      # Optional: Classification score threshold to change the state (default: shown below)
+      threshold: 0.8
+      # Optional: Number of classification attempts to save in the recent classifications tab (default: shown below)
+      # NOTE: Defaults to 200 for object classification and 100 for state classification if not specified
+      save_attempts: None
+      # Optional: Object classification configuration
+      object_config:
+        # Required: Object types to classify
+        objects: [dog]
+        # Optional: Type of classification that is applied (default: shown below)
+        classification_type: sub_label
+      # Optional: State classification configuration
+      state_config:
+        # Required: Cameras to run classification on
+        cameras:
+          camera_name:
+            # Required: Crop of image frame on this camera to run classification on
+            crop: [0, 180, 220, 400]
+            # Optional: If classification should be run when motion is detected in the crop (default: shown below)
+            motion: False
+            # Optional: Interval to run classification on in seconds (default: shown below)
+            interval: None
+
 # Optional: Restream configuration
 # Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
 # NOTE: The default go2rtc API port (1984) must be used,
```
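To make the new options concrete, here is a minimal sketch of a custom state-classification entry built only from the keys above; the model name, camera name, and values are illustrative, not defaults:

```yaml
classification:
  custom:
    # Hypothetical model watching a garage door (name is illustrative)
    garage_door:
      enabled: True
      threshold: 0.8
      # Keep the 150 most recent attempts in the recent classifications tab
      save_attempts: 150
      state_config:
        cameras:
          garage_cam:
            # Pixel crop of the frame to classify on this camera
            crop: [0, 180, 220, 400]
            # Re-run classification whenever motion is detected in the crop
            motion: True
```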
**Docs site stylesheet** (dark-theme variant for the `.alert` banner):

```diff
@@ -1,13 +1,18 @@
 .alert {
   padding: 12px;
   background: #fff8e6;
   border-bottom: 1px solid #ffd166;
   text-align: center;
   font-size: 15px;
 }
 
+[data-theme="dark"] .alert {
+  background: #3b2f0b;
+  border-bottom: 1px solid #665c22;
+}
+
 .alert a {
   color: #1890ff;
   font-weight: 500;
   margin-left: 6px;
 }
```
**Trigger embedding API** (`create_trigger_embedding` / `update_trigger_embedding`): the thumbnail is now fetched and validated up front, and the emptiness check handles NumPy arrays:

```diff
@@ -1731,37 +1731,40 @@ def create_trigger_embedding(
     if event.data.get("type") != "object":
         return
 
-    if thumbnail := get_event_thumbnail_bytes(event):
-        cursor = context.db.execute_sql(
-            """
-            SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
-            """,
-            [body.data],
-        )
+    # Get the thumbnail
+    thumbnail = get_event_thumbnail_bytes(event)
 
-        row = cursor.fetchone() if cursor else None
+    if thumbnail is None:
+        return JSONResponse(
+            content={
+                "success": False,
+                "message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
+            },
+            status_code=400,
+        )
 
-        if row:
-            query_embedding = row[0]
-            embedding = np.frombuffer(query_embedding, dtype=np.float32)
+    # Try to reuse existing embedding from database
+    cursor = context.db.execute_sql(
+        """
+        SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
+        """,
+        [body.data],
+    )
+
+    row = cursor.fetchone() if cursor else None
+
+    if row:
+        query_embedding = row[0]
+        embedding = np.frombuffer(query_embedding, dtype=np.float32)
     else:
-        # Extract valid thumbnail
-        thumbnail = get_event_thumbnail_bytes(event)
-
-        if thumbnail is None:
-            return JSONResponse(
-                content={
-                    "success": False,
-                    "message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
-                },
-                status_code=400,
-            )
-
+        # Generate new embedding
         embedding = context.generate_image_embedding(
             body.data, (base64.b64encode(thumbnail).decode("ASCII"))
         )
 
-    if not embedding:
+    if embedding is None or (
+        isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
+    ):
         return JSONResponse(
             content={
                 "success": False,
@@ -1896,7 +1899,9 @@ def update_trigger_embedding(
         body.data, (base64.b64encode(thumbnail).decode("ASCII"))
     )
 
-    if not embedding:
+    if embedding is None or (
+        isinstance(embedding, (list, np.ndarray)) and len(embedding) == 0
+    ):
         return JSONResponse(
             content={
                 "success": False,
```
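The widened emptiness check is the substantive fix here: `if not embedding:` works for `None` and empty lists, but truth-testing a multi-element NumPy array raises. A standalone sketch of the failure mode and the safe pattern (variable and helper names are mine):

```python
import numpy as np

# Two float32 values (1.0, 2.0), as np.frombuffer would return for a stored embedding
embedding = np.frombuffer(b"\x00\x00\x80?\x00\x00\x00@", dtype=np.float32)

try:
    if not embedding:  # the old check
        pass
except ValueError as err:
    # ValueError: The truth value of an array with more than one element
    # is ambiguous. Use a.any() or a.all()
    print(err)

def is_missing(emb) -> bool:
    # The new check: explicit None/empty test, no ambiguous truth-testing
    return emb is None or (isinstance(emb, (list, np.ndarray)) and len(emb) == 0)

print(is_missing(embedding))     # False
print(is_missing(np.array([])))  # True
print(is_missing(None))          # True
```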
**`CustomClassificationConfig`** (new `save_attempts` field):

```diff
@@ -105,6 +105,11 @@ class CustomClassificationConfig(FrigateBaseModel):
     threshold: float = Field(
         default=0.8, title="Classification score threshold to change the state."
     )
+    save_attempts: int | None = Field(
+        default=None,
+        title="Number of classification attempts to save in the recent classifications tab. If not specified, defaults to 200 for object classification and 100 for state classification.",
+        ge=0,
+    )
     object_config: CustomClassificationObjectConfig | None = Field(default=None)
     state_config: CustomClassificationStateConfig | None = Field(default=None)
```
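The `ge=0` bound means a negative `save_attempts` fails validation at config-load time rather than misbehaving later; a self-contained sketch of the same field shape (the demo model is mine, not Frigate's class):

```python
from pydantic import BaseModel, Field, ValidationError

class SaveAttemptsDemo(BaseModel):
    save_attempts: int | None = Field(default=None, ge=0)

print(SaveAttemptsDemo().save_attempts)                   # None -> per-type default applies later
print(SaveAttemptsDemo(save_attempts=150).save_attempts)  # 150

try:
    SaveAttemptsDemo(save_attempts=-5)
except ValidationError:
    print("negative save_attempts rejected")
```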
**Classification processors**: both processors now honor `save_attempts`, with per-type defaults:

```diff
@@ -250,6 +250,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
         if self.interpreter is None:
             # When interpreter is None, always save (score is 0.0, which is < 1.0)
             if self._should_save_image(camera, "unknown", 0.0):
+                save_attempts = (
+                    self.model_config.save_attempts
+                    if self.model_config.save_attempts is not None
+                    else 100
+                )
                 write_classification_attempt(
                     self.train_dir,
                     cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
@@ -257,6 +262,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
                     now,
                     "unknown",
                     0.0,
+                    max_files=save_attempts,
                 )
             return
 
@@ -277,6 +283,11 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
         detected_state = self.labelmap[best_id]
 
         if self._should_save_image(camera, detected_state, score):
+            save_attempts = (
+                self.model_config.save_attempts
+                if self.model_config.save_attempts is not None
+                else 100
+            )
             write_classification_attempt(
                 self.train_dir,
                 cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
@@ -284,6 +295,7 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
                 now,
                 detected_state,
                 score,
+                max_files=save_attempts,
             )
 
         if score < self.model_config.threshold:
@@ -482,6 +494,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
             return
 
         if self.interpreter is None:
+            save_attempts = (
+                self.model_config.save_attempts
+                if self.model_config.save_attempts is not None
+                else 200
+            )
             write_classification_attempt(
                 self.train_dir,
                 cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -489,6 +506,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                 now,
                 "unknown",
                 0.0,
+                max_files=save_attempts,
             )
             return
 
@@ -506,6 +524,11 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         score = round(probs[best_id], 2)
         self.__update_metrics(datetime.datetime.now().timestamp() - now)
 
+        save_attempts = (
+            self.model_config.save_attempts
+            if self.model_config.save_attempts is not None
+            else 200
+        )
         write_classification_attempt(
             self.train_dir,
             cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
@@ -513,7 +536,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
             now,
             self.labelmap[best_id],
             score,
-            max_files=200,
+            max_files=save_attempts,
         )
 
         if score < self.model_config.threshold:
```
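The same "configured value, else a per-type default" fallback is inlined four times above; its behavior boils down to this small sketch (the helper name is mine, the 200/100 defaults are from the diff):

```python
def resolve_save_attempts(configured: int | None, is_object_model: bool) -> int:
    # User-configured value wins; otherwise 200 for object classification,
    # 100 for state classification, matching the processor fallbacks above.
    if configured is not None:
        return configured
    return 200 if is_object_model else 100

assert resolve_save_attempts(None, is_object_model=True) == 200
assert resolve_save_attempts(None, is_object_model=False) == 100
assert resolve_save_attempts(150, is_object_model=True) == 150
```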
**`StorageMaintainer`**: per-camera bandwidth is now averaged over the most recent 100 segments rather than the whole table:

```diff
@@ -5,7 +5,7 @@ import shutil
 import threading
 from pathlib import Path
 
-from peewee import fn
+from peewee import SQL, fn
 
 from frigate.config import FrigateConfig
 from frigate.const import RECORD_DIR
@@ -44,13 +44,19 @@ class StorageMaintainer(threading.Thread):
             )
         }
 
-        # calculate MB/hr
+        # calculate MB/hr from last 100 segments
         try:
-            bandwidth = round(
-                Recordings.select(fn.AVG(bandwidth_equation))
+            # Subquery to get last 100 segments, then average their bandwidth
+            last_100 = (
+                Recordings.select(bandwidth_equation.alias("bw"))
                 .where(Recordings.camera == camera, Recordings.segment_size > 0)
+                .order_by(Recordings.start_time.desc())
                 .limit(100)
-                .scalar()
+                .alias("recent")
+            )
+
+            bandwidth = round(
+                Recordings.select(fn.AVG(SQL("bw"))).from_(last_100).scalar()
                 * 3600,
                 2,
             )
```
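For readers who think in SQL: the old query applied `AVG` over every matching row, so `LIMIT 100` only limited the already-aggregated single-row result and the average covered the camera's entire history. The reshaped query corresponds roughly to the following; the exact expression behind `bandwidth_equation` is not shown in this diff, so it stands in as a placeholder:

```sql
-- Sketch of the new query: average bandwidth over only the 100 most
-- recent recording segments, instead of over the camera's full history.
SELECT AVG(bw)
FROM (
    SELECT <bandwidth_equation> AS bw   -- placeholder for the real expression
    FROM recordings
    WHERE camera = ? AND segment_size > 0
    ORDER BY start_time DESC
    LIMIT 100
) AS recent;
```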
**`collect_state_classification_examples`** (docstring):

```diff
@@ -330,7 +330,7 @@ def collect_state_classification_examples(
     1. Queries review items from specified cameras
     2. Selects 100 balanced timestamps across the data
     3. Extracts keyframes from recordings (cropped to specified regions)
-    4. Selects 20 most visually distinct images
+    4. Selects 24 most visually distinct images
     5. Saves them to the dataset directory
 
     Args:
```
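Step 4 ("most visually distinct images") is a diversity-selection problem. As a hedged illustration only (not necessarily Frigate's actual method), greedy farthest-point selection over per-image feature vectors is the standard approach:

```python
import numpy as np

def select_distinct(features: np.ndarray, k: int = 24) -> list[int]:
    """Greedy farthest-point selection: repeatedly pick the image whose
    features are farthest from everything already chosen."""
    chosen = [0]  # seed with the first candidate
    dists = np.linalg.norm(features - features[0], axis=1)
    while len(chosen) < min(k, len(features)):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        # Each image's distance to the chosen set is its min distance so far
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# Example: 100 candidate keyframes described by 128-dim feature vectors
feats = np.random.rand(100, 128)
print(len(select_distinct(feats, k=24)))  # 24
```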
**`collect_object_classification_examples`** (stale docstring argument removed):

```diff
@@ -660,7 +660,6 @@ def collect_object_classification_examples(
     Args:
         model_name: Name of the classification model
-        label: Object label to collect (e.g., "person", "car")
        cameras: List of camera names to collect examples from
     """
     dataset_dir = os.path.join(CLIPS_DIR, model_name, "dataset")
     temp_dir = os.path.join(dataset_dir, "temp")
```
**English UI strings** (new `downloadCleanSnapshot` entry):

```diff
@@ -170,6 +170,10 @@
       "label": "Download snapshot",
       "aria": "Download snapshot"
     },
+    "downloadCleanSnapshot": {
+      "label": "Download clean snapshot",
+      "aria": "Download clean snapshot"
+    },
     "viewTrackingDetails": {
       "label": "View tracking details",
       "aria": "Show the tracking details"
```
**Norwegian UI strings** (Intel GPU notice retitled):

```diff
@@ -76,7 +76,7 @@
     "npuMemory": "NPU minne",
     "npuUsage": "NPU belastning",
     "intelGpuWarning": {
-      "title": "Advarsel om Intel GPU-statistikk",
+      "title": "Til info om Intel GPU-statistikk",
       "message": "GPU statistikk ikke tilgjengelig",
       "description": "Dette er en kjent feil i Intels verktøy for rapportering av GPU-statistikk (intel_gpu_top), der verktøyet slutter å fungere og gjentatte ganger viser 0 % GPU-bruk, selv om maskinvareakselerasjon og objektdeteksjon kjører korrekt på (i)GPU-en. Dette er ikke en feil i Frigate. Du kan starte verten på nytt for å løse problemet midlertidig, og for å bekrefte at GPU-en fungerer som den skal. Dette påvirker ikke ytelsen."
     }
```
**Polish UI strings** (camera wizard additions):

```diff
@@ -918,14 +918,20 @@
     "cameraWizard": {
       "title": "Dodaj kamerę",
       "steps": {
-        "streamConfiguration": "Konfiguracja strumienia"
+        "streamConfiguration": "Konfiguracja strumienia",
+        "nameAndConnection": "Nazwa i połączenie",
+        "probeOrSnapshot": "Sonda lub migawka",
+        "validationAndTesting": "Walidacja i testowanie"
       },
       "save": {
-        "success": "Zapisano ustawienia nowej kamery {{cameraName}}."
+        "success": "Zapisano ustawienia nowej kamery {{cameraName}}.",
+        "failure": "Błąd zapisu {{cameraName}}."
       },
       "testResultLabels": {
         "resolution": "Rozdzielczość",
-        "fps": "kl./s"
+        "fps": "kl./s",
+        "video": "Wideo",
+        "audio": "Audio"
       },
       "commonErrors": {
         "noUrl": "Podaj poprawny adres URL",
@@ -959,7 +965,12 @@
         "invalidCharacters": "Nazwa kamery zawiera niepoprawne znaki",
         "nameExists": "Nazwa kamery jest już zajęta",
         "customUrlRtspRequired": "Niestandardowe adresy URL muszą zaczynać się od \"rtsp://\". Ręczna konfiguracja wymagana jest dla strumieniów innych niż RTSP."
-      }
+      },
+      "description": "Wprowadź szczegóły kamery i wybierz autodetekcję lub ręcznie wybierz firmę.",
+      "probeMode": "Wykryj kamerę",
+      "detectionMethodDescription": "Wykryj kamerę za pomocą ONVIF (jeśli wspierane) by znaleźć adresy strumieni lub wybierz ręcznie markę kamery by wybrać predefiniowane adresy. By wpisać własny adres strumienia RTSP użyj ręcznej metody i wybierz \"Inne\".",
+      "useDigestAuth": "Użyj przesyłania skrótu autentykacji",
+      "useDigestAuthDescription": "Użyj przesyłania skrótu logowania HTTP dla ONVIF. Niektóre kamery mogą wymagać dedykowanego użytkownika i hasła ONVIF zamiast standardowego konta admin."
     },
     "step2": {
       "testSuccess": "Test połączenia udany!",
@@ -967,7 +978,8 @@
       "testFailedTitle": "Test Nieudany",
       "streamDetails": "Szczegóły Strumienia",
       "testing": {
-        "fetchingSnapshot": "Przygotowywanie migawki kamery..."
+        "fetchingSnapshot": "Przygotowywanie migawki kamery...",
+        "probingMetadata": "Wykrywanie metadanych kamery..."
       },
       "deviceInfo": "Informacje o urządzeniu",
       "manufacturer": "Producent",
@@ -981,7 +993,16 @@
       "testConnection": "Przetestuj połączenie",
       "errors": {
         "hostRequired": "Wymagany jest Host/Adres IP"
-      }
+      },
+      "description": "Wykryj dostępne strumienie kamery lub skonfiguruj ręcznie ustawienia na podstawie wybranej metody detekcji.",
+      "probing": "Wykrywanie kamery...",
+      "retry": "Ponów",
+      "probeFailed": "Błąd wykrywania kamery: {{error}}",
+      "probingDevice": "Wykrywanie urządzenia...",
+      "probeSuccessful": "Wykrywanie udane",
+      "probeError": "Błąd wykrywania",
+      "probeNoSuccess": "Niepowodzenie wykrywania",
+      "presets": "Ustawienia wstępne"
     },
     "step3": {
       "streamTitle": "Strumień numer: {{number}}",
@@ -1049,7 +1070,8 @@
         "reolink-rtsp": "Strumień RTSP dla kamer firmy Reolink nie jest rekomendowany. Uruchom strumień HTTP w oprogramowaniu kamery i uruchom kreator jeszcze raz."
       }
     }
+    }
   },
   "description": "Wykonaj poniższe kroki aby dodać nową kamerę do Frigate."
 },
 "cameraManagement": {
   "title": "Zarządzaj kamerami",
```
**`SearchResultActions`** (clean snapshot download menu item):

```diff
@@ -108,6 +108,18 @@ export default function SearchResultActions({
           </a>
         </MenuItem>
       )}
+      {searchResult.has_snapshot &&
+        config?.cameras[searchResult.camera].snapshots.clean_copy && (
+          <MenuItem aria-label={t("itemMenu.downloadCleanSnapshot.aria")}>
+            <a
+              className="flex items-center"
+              href={`${baseUrl}api/events/${searchResult.id}/snapshot-clean.webp`}
+              download={`${searchResult.camera}_${searchResult.label}-clean.webp`}
+            >
+              <span>{t("itemMenu.downloadCleanSnapshot.label")}</span>
+            </a>
+          </MenuItem>
+        )}
       {searchResult.data.type == "object" && (
         <MenuItem
           aria-label={t("itemMenu.viewTrackingDetails.aria")}
```
**`DetailActionsMenu`** (same clean snapshot item in the detail menu):

```diff
@@ -69,6 +69,20 @@ export default function DetailActionsMenu({
           </a>
         </DropdownMenuItem>
       )}
+      {search.has_snapshot &&
+        config?.cameras[search.camera].snapshots.clean_copy && (
+          <DropdownMenuItem>
+            <a
+              className="w-full"
+              href={`${baseUrl}api/events/${search.id}/snapshot-clean.webp`}
+              download={`${search.camera}_${search.label}-clean.webp`}
+            >
+              <div className="flex cursor-pointer items-center gap-2">
+                <span>{t("itemMenu.downloadCleanSnapshot.label")}</span>
+              </div>
+            </a>
+          </DropdownMenuItem>
+        )}
       {search.has_clip && (
         <DropdownMenuItem>
           <a
```
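Both menus render only when `snapshots.clean_copy` is enabled for the camera, per the `config?.cameras[...].snapshots.clean_copy` guard above. A sketch of the camera config that satisfies it (the camera name is illustrative):

```yaml
cameras:
  front_door:
    snapshots:
      enabled: True
      # Keep an unannotated copy; required for "Download clean snapshot"
      clean_copy: True
```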
**`SearchDetailDialog`**: the tracking details tab is now hidden only when there is no clip, and the dialog sizing classes are unified:

```diff
@@ -498,7 +498,7 @@ export default function SearchDetailDialog({
 
   const views = [...SEARCH_TABS];
 
-  if (search.data.type != "object" || !search.has_clip) {
+  if (!search.has_clip) {
     const index = views.indexOf("tracking_details");
     views.splice(index, 1);
   }
@@ -548,7 +548,7 @@ export default function SearchDetailDialog({
           "relative flex items-center justify-between",
           "w-full",
           // match dialog's max-width classes
-          "sm:max-w-xl md:max-w-4xl lg:max-w-[70%]",
+          "max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
         )}
       >
         <Tooltip>
@@ -594,8 +594,7 @@ export default function SearchDetailDialog({
         ref={isDesktop ? dialogContentRef : undefined}
         className={cn(
           "scrollbar-container overflow-y-auto",
-          isDesktop &&
-            "max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-[70%]",
+          isDesktop && "max-h-[95dvh] max-w-[85%] xl:max-w-[70%]",
           isMobile && "flex h-full flex-col px-4",
         )}
         onEscapeKeyDown={(event) => {
```
**`TrackingDetails` / `LifecycleIconRow`**: the Score/Ratio/Area block is now rendered only for object events; the block is re-indented inside the new conditional, so its unchanged lines are shown once as context:

```diff
@@ -622,7 +622,7 @@ export function TrackingDetails({
 
       <div
         className={cn(
-          isDesktop && "justify-between overflow-hidden md:basis-2/5",
+          isDesktop && "justify-between overflow-hidden lg:basis-2/5",
         )}
       >
         {isDesktop && tabs && (
@@ -900,96 +900,99 @@ function LifecycleIconRow({
         <div className="text-md flex items-start break-words text-left">
           {getLifecycleItemDescription(item)}
         </div>
-        <div className="my-2 ml-2 flex flex-col flex-wrap items-start gap-1.5 text-xs text-secondary-foreground">
+        {/* Only show Score/Ratio/Area for object events, not for audio (heard) or manual API (external) events */}
+        {item.class_type !== "heard" && item.class_type !== "external" && (
+          <div className="my-2 ml-2 flex flex-col flex-wrap items-start gap-1.5 text-xs text-secondary-foreground">
             <div className="flex items-center gap-1.5">
               <span className="text-primary-variant">
                 {t("trackingDetails.lifecycleItemDesc.header.score")}
               </span>
               <span className="font-medium text-primary">{score}</span>
             </div>
             <div className="flex items-center gap-1.5">
               <span className="text-primary-variant">
                 {t("trackingDetails.lifecycleItemDesc.header.ratio")}
               </span>
               <span className="font-medium text-primary">{ratio}</span>
             </div>
             <div className="flex items-center gap-1.5">
               <span className="text-primary-variant">
                 {t("trackingDetails.lifecycleItemDesc.header.area")}{" "}
                 {attributeAreaPx !== undefined &&
                   attributeAreaPct !== undefined && (
                     <span className="text-primary-variant">
                       ({getTranslatedLabel(item.data.label)})
                     </span>
                   )}
               </span>
               {areaPx !== undefined && areaPct !== undefined ? (
                 <span className="font-medium text-primary">
                   {t("information.pixels", { ns: "common", area: areaPx })}{" "}
                   · {areaPct}%
                 </span>
               ) : (
                 <span>N/A</span>
               )}
             </div>
             {attributeAreaPx !== undefined &&
               attributeAreaPct !== undefined && (
                 <div className="flex items-center gap-1.5">
                   <span className="text-primary-variant">
                     {t("trackingDetails.lifecycleItemDesc.header.area")} (
                     {getTranslatedLabel(item.data.attribute)})
                   </span>
                   <span className="font-medium text-primary">
                     {t("information.pixels", {
                       ns: "common",
                       area: attributeAreaPx,
                     })}{" "}
                     · {attributeAreaPct}%
                   </span>
                 </div>
               )}
           </div>
+        )}
 
         {item.data?.zones && item.data.zones.length > 0 && (
           <div className="mt-1 flex flex-wrap items-center gap-2">
             {item.data.zones.map((zone, zidx) => {
               const color = getZoneColor(zone)?.join(",") ?? "0,0,0";
               return (
                 <Badge
                   key={`${zone}-${zidx}`}
                   variant="outline"
                   className="inline-flex cursor-pointer items-center gap-2"
                   onClick={(e: React.MouseEvent) => {
                     e.stopPropagation();
                     setSelectedZone(zone);
                   }}
                   style={{
                     borderColor: `rgba(${color}, 0.6)`,
                     background: `rgba(${color}, 0.08)`,
                   }}
                 >
                   <span
                     className="size-1 rounded-full"
                     style={{
                       display: "inline-block",
                       width: 10,
                       height: 10,
                       backgroundColor: `rgb(${color})`,
                     }}
                   />
                   <span
                     className={cn(
                       item.data?.zones_friendly_names?.[zidx] === zone &&
                         "smart-capitalize",
                     )}
                   >
                     {item.data?.zones_friendly_names?.[zidx]}
                   </span>
                 </Badge>
               );
             })}
           </div>
         )}
       </div>
       <div className="ml-3 flex-shrink-0 px-1 text-right text-xs text-primary-variant">
```
**UI types** (`CustomClassificationModelConfig`):

```diff
@@ -305,6 +305,7 @@ export type CustomClassificationModelConfig = {
   enabled: boolean;
   name: string;
   threshold: number;
+  save_attempts?: number;
   object_config?: {
     objects: string[];
     classification_type: string;
```