Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-05-13 00:45:28 +03:00)

Compare commits: 15 commits, c18ac05e40 ... bf07143553
Commits:

- bf07143553
- 08f0306fd0
- 2b2ba6aee3
- b6ea89b820
- 13957fec00
- 2a8f5afc2c
- 34008bfd57
- 271f5cf9d5
- 21dc6a6248
- 15892352c4
- e1d5353e04
- 1a6d87b283
- b53aeefbf3
- c0b462d44e
- 3edfd905de
```diff
@@ -4,14 +4,14 @@
 # Frigate NVR™ - 一个具有实时目标检测的本地 NVR

-[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
-
-[](https://opensource.org/licenses/MIT)
-
 <a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
   <img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
 </a>

+[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
+
+[](https://opensource.org/licenses/MIT)
+
 一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。

 强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU,并且功耗也极低。
```
```diff
@@ -38,6 +38,7 @@
 ## 协议

 本项目采用 **MIT 许可证**授权。

 **代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。

 **商标部分**:“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标**,**不在** MIT 许可证覆盖范围内。
```
```diff
@@ -11,6 +11,8 @@ Object classification models are lightweight and run very fast on CPU. Inference
 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

+A CPU with AVX instructions is required for training and inference.
+
 ## Classes

 Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
```
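The AVX requirement added above can be checked before attempting training; a minimal sketch, assuming a Linux host where `/proc/cpuinfo` lists CPU flags (this helper is illustrative and not part of Frigate):

```python
def has_avx(cpuinfo_text: str) -> bool:
    """Return True if a 'flags' line in /proc/cpuinfo output lists avx."""
    for line in cpuinfo_text.splitlines():
        # Flag names are whitespace-separated tokens; "avx2" is a distinct token
        if line.lower().startswith("flags") and "avx" in line.lower().split():
            return True
    return False

# Usage on a Linux host:
#   has_avx(open("/proc/cpuinfo").read())
```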
```diff
@@ -11,6 +11,8 @@ State classification models are lightweight and run very fast on CPU. Inference
 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

+A CPU with AVX instructions is required for training and inference.
+
 ## Classes

 Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
```
docs/docs/troubleshooting/dummy-camera.md (new file, 60 lines)

@@ -0,0 +1,60 @@
---
id: dummy-camera
title: Troubleshooting Detection
---

When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.

## When to use

- Replaying an exported clip to reproduce incorrect detections
- Testing configuration changes (model settings, trackers, filters) against a known clip
- Gathering deterministic logs and recordings for debugging or issue reports

## Example Config

Place the clip you want to replay in a location accessible to Frigate (for example `/media/frigate/` or the repository `debug/` folder when developing). Then add a temporary camera to your `config/config.yml` like this:

```yaml
cameras:
  test:
    ffmpeg:
      inputs:
        - path: /media/frigate/car-stopping.mp4
          input_args: -re -stream_loop -1 -fflags +genpts
          roles:
            - detect
    detect:
      enabled: true
    record:
      enabled: false
    snapshots:
      enabled: false
```

- `-re -stream_loop -1` tells `ffmpeg` to play the file in realtime and loop indefinitely, which is useful for long debugging sessions.
- `-fflags +genpts` helps generate presentation timestamps when they are missing in the file.

## Steps

1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`).
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
   - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.
4. Observe the Debug view in the UI and logs as the clip is replayed. Watch detections, zones, or any feature you're looking to debug, and note any errors in the logs to reproduce the issue.
5. Iterate on camera or enrichment settings (model, fps, zones, filters) and re-check the replay until the behavior is resolved.
6. Remove the temporary camera from your config after debugging to avoid spurious telemetry or recordings.

## Variables to consider in object tracking

- The exported video will not always line up exactly with how it originally ran through Frigate (or even with the last loop). Different frames may be used on replay, which can change detections and tracking.
- Motion detection depends on the frames used; small frame shifts can change motion regions and therefore what gets passed to the detector.
- Object detection is not deterministic: models and post-processing can yield different results across runs, so you may not get identical detections or track IDs every time.

When debugging, treat the replay as a close approximation rather than a byte-for-byte replay. Capture multiple runs, enable recording if helpful, and examine logs and saved event clips to understand variability.

## Troubleshooting

- No video: verify the path is correct and accessible from the Frigate process/container.
- FFmpeg errors: check the log output for ffmpeg-specific flags and adjust `input_args` accordingly for your file/container. You may also need to disable hardware acceleration (`hwaccel_args: ""`) for the dummy camera.
- No detections: confirm the camera `roles` include `detect`, and model/detector configuration is enabled.
```diff
@@ -132,6 +132,7 @@ const sidebars: SidebarsConfig = {
       "troubleshooting/gpu",
       "troubleshooting/edgetpu",
       "troubleshooting/memory",
+      "troubleshooting/dummy-camera",
     ],
     Development: [
       "development/contributing",
```
```diff
@@ -143,17 +143,6 @@ def require_admin_by_default():
     return admin_checker


-def _is_authenticated(request: Request) -> bool:
-    """
-    Helper to determine if a request is from an authenticated user.
-
-    Returns True if the request has a valid authenticated user (not anonymous).
-    Port 5000 internal requests are considered anonymous despite having admin role.
-    """
-    username = request.headers.get("remote-user")
-    return username is not None and username != "anonymous"
-
-
 def allow_public():
     """
     Override dependency to allow unauthenticated access to an endpoint.
```
```diff
@@ -173,27 +162,24 @@ def allow_public():
 def allow_any_authenticated():
     """
-    Override dependency to allow any authenticated user (bypass admin requirement).
+    Override dependency to allow any request that passed through the /auth endpoint.

     Allows:
-    - Port 5000 internal requests (have admin role despite anonymous user)
-    - Any authenticated user with a real username (not "anonymous")
+    - Port 5000 internal requests (remote-user: "anonymous", remote-role: "admin")
+    - Authenticated users with JWT tokens (remote-user: username)
+    - Unauthenticated requests when auth is disabled (remote-user: "anonymous")

     Rejects:
-    - Port 8971 requests with anonymous user (auth disabled, no proxy auth)
+    - Requests with no remote-user header (did not pass through /auth endpoint)

     Example:
         @router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
     """

     async def auth_checker(request: Request):
-        # Port 5000 requests have admin role and should be allowed
-        role = request.headers.get("remote-role")
-        if role == "admin":
-            return
-
-        # Otherwise require a real authenticated user (not anonymous)
-        if not _is_authenticated(request):
+        # Ensure a remote-user has been set by the /auth endpoint
+        username = request.headers.get("remote-user")
+        if username is None:
             raise HTTPException(status_code=401, detail="Authentication required")
         return
```
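The new dependency's decision rule reduces to a single header check; a standalone sketch of that rule (the helper name and plain-dict headers are illustrative, not Frigate's actual API):

```python
def passes_any_authenticated(headers: dict) -> bool:
    """Mirror of the revised auth_checker rule: any request carrying a
    remote-user header (set by the /auth endpoint) is allowed, including
    the "anonymous" user; requests without the header are rejected."""
    return headers.get("remote-user") is not None
```

Note the behavioral shift: the old check trusted `remote-role: admin` or a non-anonymous username, while the new one only requires that `/auth` stamped a `remote-user` header at all.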
```diff
@@ -229,28 +229,34 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
         if not should_run:
             return

-        x, y, x2, y2 = calculate_region(
-            frame.shape,
-            crop[0],
-            crop[1],
-            crop[2],
-            crop[3],
-            224,
-            1.0,
-        )
-
         rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2RGB_I420)
-        frame = rgb[
-            y:y2,
-            x:x2,
-        ]
-
-        if frame.shape != (224, 224):
-            try:
-                resized_frame = cv2.resize(frame, (224, 224))
-            except Exception:
-                logger.warning("Failed to resize image for state classification")
-                return
+        height, width = rgb.shape[:2]
+
+        # Convert normalized crop coordinates to pixel values
+        x1 = int(camera_config.crop[0] * width)
+        y1 = int(camera_config.crop[1] * height)
+        x2 = int(camera_config.crop[2] * width)
+        y2 = int(camera_config.crop[3] * height)
+
+        # Clip coordinates to frame boundaries
+        x1 = max(0, min(x1, width))
+        y1 = max(0, min(y1, height))
+        x2 = max(0, min(x2, width))
+        y2 = max(0, min(y2, height))
+
+        if x2 <= x1 or y2 <= y1:
+            logger.warning(
+                f"Invalid crop coordinates for {camera}: [{x1}, {y1}, {x2}, {y2}]"
+            )
+            return
+
+        frame = rgb[y1:y2, x1:x2]
+
+        try:
+            resized_frame = cv2.resize(frame, (224, 224))
+        except Exception:
+            logger.warning("Failed to resize image for state classification")
+            return

         if self.interpreter is None:
             # When interpreter is None, always save (score is 0.0, which is < 1.0)
```
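The normalized-crop conversion above can be exercised in isolation; a minimal sketch of the same arithmetic (function name is illustrative):

```python
def crop_to_pixels(crop, width, height):
    """Convert normalized [x1, y1, x2, y2] crop coords to clipped pixel values.

    Returns None when the clipped region is empty, mirroring the
    warning-and-return path in the processor above.
    """
    # Scale normalized coordinates and clip to the frame boundaries
    x1 = max(0, min(int(crop[0] * width), width))
    y1 = max(0, min(int(crop[1] * height), height))
    x2 = max(0, min(int(crop[2] * width), width))
    y2 = max(0, min(int(crop[3] * height), height))
    if x2 <= x1 or y2 <= y1:
        return None
    return x1, y1, x2, y2
```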
```diff
@@ -513,6 +519,13 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                 0.0,
                 max_files=save_attempts,
             )
+
+            # Still track history even when model doesn't exist to respect MAX_OBJECT_CLASSIFICATIONS
+            # Add an entry with "unknown" label so the history limit is enforced
+            if object_id not in self.classification_history:
+                self.classification_history[object_id] = []
+
+            self.classification_history[object_id].append(("unknown", 0.0, now))
             return

         input = np.expand_dims(resized_crop, axis=0)
```
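The appended "unknown" entry only has an effect together with the cap it is meant to respect; a sketch of a bounded per-object history (the constant's name mirrors MAX_OBJECT_CLASSIFICATIONS from the comment above, but its value here is assumed for illustration):

```python
MAX_OBJECT_CLASSIFICATIONS = 16  # assumed value, for illustration only

def record_classification(history, object_id, label, score, ts):
    """Append a (label, score, timestamp) entry, trimming to the cap."""
    entries = history.setdefault(object_id, [])
    entries.append((label, score, ts))
    # Keep only the most recent MAX_OBJECT_CLASSIFICATIONS entries
    del entries[:-MAX_OBJECT_CLASSIFICATIONS]
```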
```diff
@@ -1,7 +1,9 @@
 {
   "documentTitle": "Classification Models - Frigate",
   "details": {
-    "scoreInfo": "Score represents the average classification confidence across all detections of this object."
+    "scoreInfo": "Score represents the average classification confidence across all detections of this object.",
+    "none": "None",
+    "unknown": "Unknown"
   },
   "button": {
     "deleteClassificationAttempts": "Delete Classification Images",
```
```diff
@@ -72,7 +74,7 @@
   },
   "renameCategory": {
     "title": "Rename Class",
-    "desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take affect."
+    "desc": "Enter a new name for {{name}}. You will be required to retrain the model for the name change to take effect."
   },
   "description": {
     "invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
```
```diff
@@ -83,7 +85,6 @@
     "aria": "Select Recent Classifications"
   },
   "categories": "Classes",
-  "none": "None",
   "createCategory": {
     "new": "Create New Class"
   },
```
```diff
@@ -4,8 +4,8 @@ import { cn } from "@/lib/utils";
 import {
   ClassificationItemData,
   ClassificationThreshold,
+  ClassifiedEvent,
 } from "@/types/classification";
-import { Event } from "@/types/event";
 import { forwardRef, useMemo, useRef, useState } from "react";
 import { isDesktop, isIOS, isMobile, isMobileOnly } from "react-device-detect";
 import { useTranslation } from "react-i18next";
```
```diff
@@ -160,8 +160,12 @@ export const ClassificationCard = forwardRef<
           data.score != undefined ? "text-xs" : "text-sm",
         )}
       >
-        <div className="smart-capitalize">
-          {data.name == "unknown" ? t("details.unknown") : data.name}
+        <div className="break-all smart-capitalize">
+          {data.name == "unknown"
+            ? t("details.unknown")
+            : data.name == "none"
+              ? t("details.none")
+              : data.name}
         </div>
         {data.score != undefined && (
           <div
```
```diff
@@ -186,7 +190,7 @@ export const ClassificationCard = forwardRef<

 type GroupedClassificationCardProps = {
   group: ClassificationItemData[];
-  event?: Event;
+  classifiedEvent?: ClassifiedEvent;
   threshold?: ClassificationThreshold;
   selectedItems: string[];
   i18nLibrary: string;
```
```diff
@@ -197,7 +201,7 @@ type GroupedClassificationCardProps = {
 };
 export function GroupedClassificationCard({
   group,
-  event,
+  classifiedEvent,
   threshold,
   selectedItems,
   i18nLibrary,
```
```diff
@@ -232,14 +236,15 @@ export function GroupedClassificationCard({
     const bestTyped: ClassificationItemData = best;
     return {
       ...bestTyped,
-      name: event
-        ? event.sub_label && event.sub_label !== "none"
-          ? event.sub_label
-          : t(noClassificationLabel)
-        : bestTyped.name,
-      score: event?.data?.sub_label_score,
+      name:
+        classifiedEvent?.label && classifiedEvent.label !== "none"
+          ? classifiedEvent.label
+          : classifiedEvent
+            ? t(noClassificationLabel)
+            : bestTyped.name,
+      score: classifiedEvent?.score,
     };
-  }, [group, event, noClassificationLabel, t]);
+  }, [group, classifiedEvent, noClassificationLabel, t]);

   const bestScoreStatus = useMemo(() => {
     if (!bestItem?.score || !threshold) {
```
```diff
@@ -325,36 +330,38 @@ export function GroupedClassificationCard({
         )}
       >
         <ContentTitle className="flex items-center gap-2 font-normal capitalize">
-          {event?.sub_label && event.sub_label !== "none"
-            ? event.sub_label
+          {classifiedEvent?.label && classifiedEvent.label !== "none"
+            ? classifiedEvent.label
             : t(noClassificationLabel)}
-          {event?.sub_label && event.sub_label !== "none" && (
-            <div className="flex items-center gap-1">
-              <div
-                className={cn(
-                  "",
-                  bestScoreStatus == "match" && "text-success",
-                  bestScoreStatus == "potential" && "text-orange-400",
-                  bestScoreStatus == "unknown" && "text-danger",
-                )}
-              >{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
-              <Popover>
-                <PopoverTrigger asChild>
-                  <button
-                    className="focus:outline-none"
-                    aria-label={t("details.scoreInfo", {
-                      ns: i18nLibrary,
-                    })}
-                  >
-                    <LuInfo className="size-3" />
-                  </button>
-                </PopoverTrigger>
-                <PopoverContent className="w-80 text-sm">
-                  {t("details.scoreInfo", { ns: i18nLibrary })}
-                </PopoverContent>
-              </Popover>
-            </div>
-          )}
+          {classifiedEvent?.label &&
+            classifiedEvent.label !== "none" &&
+            classifiedEvent.score !== undefined && (
+              <div className="flex items-center gap-1">
+                <div
+                  className={cn(
+                    "",
+                    bestScoreStatus == "match" && "text-success",
+                    bestScoreStatus == "potential" && "text-orange-400",
+                    bestScoreStatus == "unknown" && "text-danger",
+                  )}
+                >{`${Math.round((classifiedEvent.score || 0) * 100)}%`}</div>
+                <Popover>
+                  <PopoverTrigger asChild>
+                    <button
+                      className="focus:outline-none"
+                      aria-label={t("details.scoreInfo", {
+                        ns: i18nLibrary,
+                      })}
+                    >
+                      <LuInfo className="size-3" />
+                    </button>
+                  </PopoverTrigger>
+                  <PopoverContent className="w-80 text-sm">
+                    {t("details.scoreInfo", { ns: i18nLibrary })}
+                  </PopoverContent>
+                </Popover>
+              </div>
+            )}
         </ContentTitle>
         <ContentDescription className={cn("", isMobile && "px-2")}>
           {time && (
```
```diff
@@ -368,14 +375,14 @@ export function GroupedClassificationCard({
           </div>
           {isDesktop && (
             <div className="flex flex-row justify-between">
-              {event && (
+              {classifiedEvent && (
                 <Tooltip>
                   <TooltipTrigger asChild>
                     <div
                       className="cursor-pointer"
                       tabIndex={-1}
                       onClick={() => {
-                        navigate(`/explore?event_id=${event.id}`);
+                        navigate(`/explore?event_id=${classifiedEvent.id}`);
                       }}
                     >
                       <LuSearch className="size-4 text-secondary-foreground" />
```
```diff
@@ -186,15 +186,17 @@ export default function Step3ChooseExamples({
       await Promise.all(emptyFolderPromises);

       // Step 3: Determine if we should train
-      // For state models, we need ALL states to have examples
-      // For object models, we need at least 2 classes with images
+      // For state models, we need ALL states to have examples (at least 2 states)
+      // For object models, we need at least 1 class with images (the rest go to "none")
       const allStatesHaveExamplesForTraining =
         step1Data.modelType !== "state" ||
         step1Data.classes.every((className) =>
           classesWithImages.has(className),
         );
       const shouldTrain =
-        allStatesHaveExamplesForTraining && classesWithImages.size >= 2;
+        step1Data.modelType === "object"
+          ? classesWithImages.size >= 1
+          : allStatesHaveExamplesForTraining && classesWithImages.size >= 2;

       // Step 4: Kick off training only if we have enough classes with images
       if (shouldTrain) {
```
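The revised gating rule above can be stated compactly; a sketch in Python of the same decision (names and the non-object fallback are illustrative):

```python
def should_train(model_type, classes, classes_with_images):
    """Object models train with >= 1 populated class (remaining images fall
    into the implicit 'none' class); state models need every declared state
    populated and at least 2 populated classes."""
    if model_type == "object":
        return len(classes_with_images) >= 1
    all_states_have_examples = all(c in classes_with_images for c in classes)
    return all_states_have_examples and len(classes_with_images) >= 2
```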
```diff
@@ -132,7 +132,7 @@ export default function ClassificationSelectionDialog({
                     onClick={() => onCategorizeImage(category)}
                   >
                     {category === "none"
-                      ? t("none")
+                      ? t("details.none")
                       : category.replaceAll("_", " ")}
                   </SelectorItem>
                 ))}
```
```diff
@@ -170,7 +170,9 @@ export function ClassFilterContent({
             <FilterSwitch
               key={item}
               label={
-                item === "none" ? t("none") : item.replaceAll("_", " ")
+                item === "none"
+                  ? t("details.none", { ns: "views/classificationModel" })
+                  : item.replaceAll("_", " ")
               }
               isChecked={classes?.includes(item) ?? false}
               onCheckedChange={(isChecked) => {
```
```diff
@@ -68,7 +68,10 @@ import {
   ClassificationCard,
   GroupedClassificationCard,
 } from "@/components/card/ClassificationCard";
-import { ClassificationItemData } from "@/types/classification";
+import {
+  ClassificationItemData,
+  ClassifiedEvent,
+} from "@/types/classification";

 export default function FaceLibrary() {
   const { t } = useTranslation(["views/faceLibrary"]);
```
```diff
@@ -922,10 +925,22 @@ function FaceAttemptGroup({
     [onRefresh, t],
   );

+  // Create ClassifiedEvent from Event (face recognition uses sub_label)
+  const classifiedEvent: ClassifiedEvent | undefined = useMemo(() => {
+    if (!event || !event.sub_label || event.sub_label === "none") {
+      return undefined;
+    }
+    return {
+      id: event.id,
+      label: event.sub_label,
+      score: event.data?.sub_label_score,
+    };
+  }, [event]);
+
   return (
     <GroupedClassificationCard
       group={group}
-      event={event}
+      classifiedEvent={classifiedEvent}
       threshold={threshold}
       selectedItems={selectedFaces}
       i18nLibrary="views/faceLibrary"
```
```diff
@@ -21,6 +21,12 @@ export type ClassificationThreshold = {
   unknown: number;
 };

+export type ClassifiedEvent = {
+  id: string;
+  label?: string;
+  score?: number;
+};
+
 export type ClassificationDatasetResponse = {
   categories: {
     [id: string]: string[];
```
```diff
@@ -24,5 +24,12 @@ export interface Event {
     type: "object" | "audio" | "manual";
     recognized_license_plate?: string;
     path_data: [number[], number][];
+    // Allow arbitrary keys for attributes (e.g., model_name, model_name_score)
+    [key: string]:
+      | number
+      | number[]
+      | string
+      | [number[], number][]
+      | undefined;
   };
 }
```
```diff
@@ -62,6 +62,7 @@ import useApiFilter from "@/hooks/use-api-filter";
 import {
   ClassificationDatasetResponse,
   ClassificationItemData,
+  ClassifiedEvent,
   TrainFilter,
 } from "@/types/classification";
 import {
```
```diff
@@ -707,7 +708,7 @@ function LibrarySelector({
                   className="flex-grow cursor-pointer capitalize"
                   onClick={() => setPageToggle(id)}
                 >
-                  {id === "none" ? t("none") : id.replaceAll("_", " ")}
+                  {id === "none" ? t("details.none") : id.replaceAll("_", " ")}
                   <span className="ml-2 text-muted-foreground">
                     ({dataset?.[id].length})
                   </span>
```
```diff
@@ -1033,6 +1034,45 @@ function ObjectTrainGrid({
     };
   }, [model]);

+  // Helper function to create ClassifiedEvent from Event
+  const createClassifiedEvent = useCallback(
+    (event: Event | undefined): ClassifiedEvent | undefined => {
+      if (!event || !model.object_config) {
+        return undefined;
+      }
+
+      const classificationType = model.object_config.classification_type;
+
+      if (classificationType === "attribute") {
+        // For attribute type, look at event.data[model.name]
+        const attributeValue = event.data[model.name] as string | undefined;
+        const attributeScore = event.data[`${model.name}_score`] as
+          | number
+          | undefined;
+
+        if (attributeValue && attributeValue !== "none") {
+          return {
+            id: event.id,
+            label: attributeValue,
+            score: attributeScore,
+          };
+        }
+      } else {
+        // For sub_label type, use event.sub_label
+        if (event.sub_label && event.sub_label !== "none") {
+          return {
+            id: event.id,
+            label: event.sub_label,
+            score: event.data?.sub_label_score,
+          };
+        }
+      }
+
+      return undefined;
+    },
+    [model],
+  );
+
   // selection

   const [selectedEvent, setSelectedEvent] = useState<Event>();
```
```diff
@@ -1095,11 +1135,13 @@ function ObjectTrainGrid({
         >
           {Object.entries(groups).map(([key, group]) => {
             const event = events?.find((ev) => ev.id == key);
+            const classifiedEvent = createClassifiedEvent(event);
+
             return (
               <div key={key} className="aspect-square w-full">
                 <GroupedClassificationCard
                   group={group}
-                  event={event}
+                  classifiedEvent={classifiedEvent}
                   threshold={threshold}
                   selectedItems={selectedImages}
                   i18nLibrary="views/classificationModel"
```