Mirror of https://github.com/blakeblackshear/frigate.git, synced 2025-12-16 01:56:43 +03:00

Compare commits: 12 commits, 8c318699c4 ... 0bcabf0731

Commits in this range:
- 0bcabf0731
- 029fd544b8
- c66daf2946
- f966946713
- 6c33dda19f
- d1957535d0
- 5143162618
- d9d98f9d3a
- 7c0a97520a
- 6d662151de
- d9216d39e6
- 1c6506aa9e
@@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d
 ### Improving the Model
 
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
-- **Data collection**: Use the model’s Train tab to gather balanced examples across times of day, weather, and distances.
+- **Data collection**: Use the model’s Recent Classification tab to gather balanced examples across times of day, weather, and distances.
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigate’s boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
@@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo
 ### Improving the Model
 
 - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
-- **Data collection**: Use the model’s Train tab to gather balanced examples across times of day and weather.
+- **Data collection**: Use the model’s Recent Classifications tab to gather balanced examples across times of day and weather.
@@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
 - `min_faces`: Min face recognitions for the sub label to be applied to the person object.
   - Default: `1`
 - `save_attempts`: Number of images of recognized faces to save for training.
-  - Default: `100`.
+  - Default: `200`.
 - `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
   - Default: `True`.
 - `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
@@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
 
 :::
 
-### Understanding the Train Tab
+### Understanding the Recent Recognitions Tab
 
-The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
 
 Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle 
 
 Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
 
-1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.
+1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
 
 If you are using a Frigate+ or `face` detecting model:
@@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
 No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
 For more guidance, refer to the section above on improving recognition accuracy.
 
-### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
+### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
 
 Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person object if the person is confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results.
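The area-weighted scoring described in that FAQ answer can be illustrated with a simple weighted average. This is a sketch of the idea only, not Frigate's actual code; the function name and numbers are hypothetical.

```python
# Illustrative sketch (not Frigate's implementation): weight each recognition
# attempt's score by the pixel area of the face crop, so one lucky high score
# on a tiny, low-quality face cannot dominate consistent results on larger faces.

def weighted_face_score(attempts: list[tuple[float, int]]) -> float:
    """attempts is a list of (score, face_area_px) recognition attempts."""
    total_area = sum(area for _, area in attempts)
    if total_area == 0:
        return 0.0
    return sum(score * area for score, area in attempts) / total_area

# A single 0.95 on a 20x20 face is pulled down by low scores on larger faces.
attempts = [(0.95, 400), (0.40, 4000), (0.45, 3600)]
print(round(weighted_face_score(attempts), 3))  # 0.45
```

This is why individual images in the Face Library can show scores above the threshold while the person object as a whole never gets a sub label.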
@@ -630,7 +630,7 @@ face_recognition:
   # Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
   min_faces: 1
   # Optional: Number of images of recognized faces to save for training (default: shown below)
-  save_attempts: 100
+  save_attempts: 200
   # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
   blur_confidence_filter: True
   # Optional: Set the model size used for face recognition. (default: shown below)
@@ -197,7 +197,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
         title="Min face recognitions for the sub label to be applied to the person object.",
     )
     save_attempts: int = Field(
-        default=100, ge=0, title="Number of face attempts to save in the train tab."
+        default=200, ge=0, title="Number of face attempts to save in the recent recognitions tab."
     )
     blur_confidence_filter: bool = Field(
         default=True, title="Apply blur quality filter to face confidence."
@@ -35,10 +35,6 @@ except ModuleNotFoundError:
 logger = logging.getLogger(__name__)
 
 
-MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS = 6
-MAX_CLASSIFICATION_ATTEMPTS = 12
-
-
 class CustomStateClassificationProcessor(RealTimeProcessorApi):
     def __init__(
         self,
@@ -268,26 +264,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         if obj_data["label"] not in self.model_config.object_config.objects:
             return
 
-        if (
-            obj_data["id"] in self.detected_objects
-            and len(self.detected_objects[obj_data["id"]])
-            >= MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS
-        ):
-            # if we are at max attempts after rec and we have a rec
-            if obj_data.get("sub_label"):
-                logger.debug(
-                    "Not processing due to hitting max attempts after true recognition."
-                )
-                return
-
-            # if we don't have a rec and are at max attempts
-            if (
-                len(self.detected_objects[obj_data["id"]])
-                >= MAX_CLASSIFICATION_ATTEMPTS
-            ):
-                logger.debug("Not processing due to hitting max rec attempts.")
-                return
-
         now = datetime.datetime.now().timestamp()
         x, y, x2, y2 = calculate_region(
             frame.shape,
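The attempt-capping rule removed in the hunk above can be summarized as a standalone predicate: stop after `MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS` once the object has a sub label, and after `MAX_CLASSIFICATION_ATTEMPTS` if it never got one. This is a sketch of that logic for illustration; `should_classify` is a hypothetical helper, not a function in the codebase.

```python
# Sketch of the (removed) capping rule: up to 6 attempts to verify a
# successful recognition, up to 12 attempts total for objects that never
# produce a sub label.

MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS = 6
MAX_CLASSIFICATION_ATTEMPTS = 12

def should_classify(attempt_count: int, has_sub_label: bool) -> bool:
    if attempt_count >= MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS and has_sub_label:
        return False  # enough confirmations after a true recognition
    if attempt_count >= MAX_CLASSIFICATION_ATTEMPTS:
        return False  # give up on objects that never classify
    return True

print(should_classify(6, True), should_classify(6, False), should_classify(12, False))
# False True False
```

With the constants and guard deleted, classification attempts are no longer capped per object here.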
@@ -23,7 +23,7 @@
       "label": "Min face recognitions for the sub label to be applied to the person object."
     },
     "save_attempts": {
-      "label": "Number of face attempts to save in the train tab."
+      "label": "Number of face attempts to save in the recent recognitions tab."
     },
     "blur_confidence_filter": {
       "label": "Apply blur quality filter to face confidence."
@@ -22,7 +22,7 @@
     "title": "Create Collection",
     "desc": "Create a new collection",
     "new": "Create New Face",
-    "nextSteps": "To build a strong foundation:<li>Use the Train tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
+    "nextSteps": "To build a strong foundation:<li>Use the Recent Recognitions tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
   },
   "steps": {
     "faceName": "Enter Face Name",
@@ -6,7 +6,7 @@ import {
   ClassificationThreshold,
 } from "@/types/classification";
 import { Event } from "@/types/event";
-import { useMemo, useRef, useState } from "react";
+import { forwardRef, useMemo, useRef, useState } from "react";
 import { isDesktop, isMobile } from "react-device-detect";
 import { useTranslation } from "react-i18next";
 import TimeAgo from "../dynamic/TimeAgo";
@@ -14,7 +14,24 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
 import { LuSearch } from "react-icons/lu";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { useNavigate } from "react-router-dom";
 import { getTranslatedLabel } from "@/utils/i18n";
+import { HiSquare2Stack } from "react-icons/hi2";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
+import {
+  Dialog,
+  DialogContent,
+  DialogDescription,
+  DialogHeader,
+  DialogTitle,
+  DialogTrigger,
+} from "../ui/dialog";
+import {
+  MobilePage,
+  MobilePageContent,
+  MobilePageDescription,
+  MobilePageHeader,
+  MobilePageTitle,
+  MobilePageTrigger,
+} from "../mobile/MobilePage";
 
 type ClassificationCardProps = {
   imgClassName?: string;
@@ -23,19 +40,27 @@ type ClassificationCardProps = {
   selected: boolean;
   i18nLibrary: string;
   showArea?: boolean;
+  count?: number;
   onClick: (data: ClassificationItemData, meta: boolean) => void;
   children?: React.ReactNode;
 };
-export function ClassificationCard({
-  imgClassName,
-  data,
-  threshold,
-  selected,
-  i18nLibrary,
-  showArea = true,
-  onClick,
-  children,
-}: ClassificationCardProps) {
+export const ClassificationCard = forwardRef<
+  HTMLDivElement,
+  ClassificationCardProps
+>(function ClassificationCard(
+  {
+    imgClassName,
+    data,
+    threshold,
+    selected,
+    i18nLibrary,
+    showArea = true,
+    count,
+    onClick,
+    children,
+  },
+  ref,
+) {
   const { t } = useTranslation([i18nLibrary]);
   const [imageLoaded, setImageLoaded] = useState(false);
@@ -71,12 +96,26 @@ export function ClassificationCard({
 
   return (
     <div
+      ref={ref}
       className={cn(
-        "relative flex size-48 cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
+        "relative flex cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
+        isMobile ? "!size-full" : "size-48",
         selected
           ? "shadow-selected outline-selected"
           : "outline-transparent duration-500",
       )}
+      onClick={(e) => {
+        const isMeta = e.metaKey || e.ctrlKey;
+        if (isMeta) {
+          e.stopPropagation();
+        }
+        onClick(data, isMeta);
+      }}
+      onContextMenu={(e) => {
+        e.preventDefault();
+        e.stopPropagation();
+        onClick(data, true);
+      }}
     >
       <img
         ref={imgRef}
@@ -87,13 +126,16 @@
         )}
         onLoad={() => setImageLoaded(true)}
         src={`${baseUrl}${data.filepath}`}
-        onClick={(e) => {
-          e.stopPropagation();
-          onClick(data, e.metaKey || e.ctrlKey);
-        }}
       />
-      {false && imageArea != undefined && (
-        <div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
+      <ImageShadowOverlay upperClassName="z-0" lowerClassName="h-[30%] z-0" />
+      {count && (
+        <div className="absolute right-2 top-2 flex flex-row items-center gap-1">
+          <div className="text-gray-200">{count}</div>{" "}
+          <HiSquare2Stack className="text-gray-200" />
+        </div>
+      )}
+      {!count && imageArea != undefined && (
+        <div className="absolute right-1 top-1 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
           {t("information.pixels", { ns: "common", area: imageArea })}
         </div>
       )}
@@ -127,7 +169,7 @@ export function ClassificationCard({
       </div>
     </div>
   );
-}
+});
 
 type GroupedClassificationCardProps = {
   group: ClassificationItemData[];
@@ -137,7 +179,6 @@ type GroupedClassificationCardProps = {
   i18nLibrary: string;
   objectType: string;
   onClick: (data: ClassificationItemData | undefined) => void;
-  onSelectEvent: (event: Event) => void;
   children?: (data: ClassificationItemData) => React.ReactNode;
 };
 export function GroupedClassificationCard({
@@ -146,20 +187,54 @@ export function GroupedClassificationCard({
   threshold,
   selectedItems,
   i18nLibrary,
   objectType,
   onClick,
-  onSelectEvent,
   children,
 }: GroupedClassificationCardProps) {
   const navigate = useNavigate();
   const { t } = useTranslation(["views/explore", i18nLibrary]);
+  const [detailOpen, setDetailOpen] = useState(false);
 
   // data
 
   const allItemsSelected = useMemo(
     () => group.every((data) => selectedItems.includes(data.filename)),
     [group, selectedItems],
   );
+  const bestItem = useMemo<ClassificationItemData | undefined>(() => {
+    let best: undefined | ClassificationItemData = undefined;
+
+    group.forEach((item) => {
+      if (item?.name != undefined && item.name != "none") {
+        if (
+          best?.score == undefined ||
+          (item.score && best.score < item.score)
+        ) {
+          best = item;
+        }
+      }
+    });
+
+    if (!best) {
+      return group[0];
+    }
+
+    const bestTyped: ClassificationItemData = best;
+    return {
+      ...bestTyped,
+      name: event?.sub_label || bestTyped.name,
+      score: event?.data?.sub_label_score || bestTyped.score,
+    };
+  }, [group, event]);
+
+  const bestScoreStatus = useMemo(() => {
+    if (!bestItem?.score || !threshold) {
+      return "unknown";
+    }
+
+    if (bestItem.score >= threshold.recognition) {
+      return "match";
+    } else if (bestItem.score >= threshold.unknown) {
+      return "potential";
+    } else {
+      return "unknown";
+    }
+  }, [bestItem, threshold]);
 
   const time = useMemo(() => {
     const item = group[0];
@@ -171,94 +246,150 @@ export function GroupedClassificationCard({
     return item.timestamp * 1000;
   }, [group]);
 
-  return (
-    <div
-      className={cn(
-        "flex cursor-pointer flex-col gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
-        isMobile && "w-full",
-        allItemsSelected
-          ? "shadow-selected outline-selected"
-          : "outline-transparent duration-500",
-      )}
-      onClick={() => {
-        if (selectedItems.length) {
-          onClick(undefined);
-        }
-      }}
-      onContextMenu={(e) => {
-        e.stopPropagation();
-        e.preventDefault();
-        onClick(undefined);
-      }}
-    >
-      <div className="flex flex-row justify-between">
-        <div className="flex flex-col gap-1">
-          <div className="select-none smart-capitalize">
-            {getTranslatedLabel(objectType)}
-            {event?.sub_label
-              ? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
-              : ": " + t("details.unknown")}
-          </div>
-          {time && (
-            <TimeAgo
-              className="text-sm text-secondary-foreground"
-              time={time}
-              dense
-            />
-          )}
-        </div>
-        {event && (
-          <Tooltip>
-            <TooltipTrigger>
-              <div
-                className="cursor-pointer"
-                onClick={() => {
-                  navigate(`/explore?event_id=${event.id}`);
-                }}
-              >
-                <LuSearch className="size-4 text-muted-foreground" />
-              </div>
-            </TooltipTrigger>
-            <TooltipPortal>
-              <TooltipContent>
-                {t("details.item.button.viewInExplore", {
-                  ns: "views/explore",
-                })}
-              </TooltipContent>
-            </TooltipPortal>
-          </Tooltip>
-        )}
-      </div>
-
-      <div
-        className={cn(
-          "gap-2",
-          isDesktop
-            ? "flex flex-row flex-wrap"
-            : "grid grid-cols-2 sm:grid-cols-5 lg:grid-cols-6",
-        )}
-      >
-        {group.map((data: ClassificationItemData) => (
-          <ClassificationCard
-            key={data.filename}
-            data={data}
-            threshold={threshold}
-            selected={
-              allItemsSelected ? false : selectedItems.includes(data.filename)
-            }
-            i18nLibrary={i18nLibrary}
-            onClick={(data, meta) => {
-              if (meta || selectedItems.length > 0) {
-                onClick(data);
-              } else if (event) {
-                onSelectEvent(event);
-              }
-            }}
-          >
-            {children?.(data)}
-          </ClassificationCard>
-        ))}
-      </div>
-    </div>
-  );
+  if (!bestItem) {
+    return null;
+  }
+
+  const Overlay = isDesktop ? Dialog : MobilePage;
+  const Trigger = isDesktop ? DialogTrigger : MobilePageTrigger;
+  const Header = isDesktop ? DialogHeader : MobilePageHeader;
+  const Content = isDesktop ? DialogContent : MobilePageContent;
+  const ContentTitle = isDesktop ? DialogTitle : MobilePageTitle;
+  const ContentDescription = isDesktop
+    ? DialogDescription
+    : MobilePageDescription;
+
+  return (
+    <>
+      <ClassificationCard
+        data={bestItem}
+        threshold={threshold}
+        selected={selectedItems.includes(bestItem.filename)}
+        i18nLibrary={i18nLibrary}
+        count={group.length}
+        onClick={(_, meta) => {
+          if (meta || selectedItems.length > 0) {
+            onClick(undefined);
+          } else {
+            setDetailOpen(true);
+          }
+        }}
+      />
+      <Overlay
+        open={detailOpen}
+        onOpenChange={(open) => {
+          if (!open) {
+            setDetailOpen(false);
+          }
+        }}
+      >
+        <Trigger asChild></Trigger>
+        <Content
+          className={cn(
+            "",
+            isDesktop && "w-auto max-w-[85%]",
+            isMobile && "flex flex-col",
+          )}
+          onOpenAutoFocus={(e) => e.preventDefault()}
+        >
+          <>
+            {isDesktop && (
+              <div className="absolute right-10 top-4 flex flex-row justify-between">
+                {event && (
+                  <Tooltip>
+                    <TooltipTrigger asChild>
+                      <div
+                        className="cursor-pointer"
+                        tabIndex={-1}
+                        onClick={() => {
+                          navigate(`/explore?event_id=${event.id}`);
+                        }}
+                      >
+                        <LuSearch className="size-4 text-secondary-foreground" />
+                      </div>
+                    </TooltipTrigger>
+                    <TooltipPortal>
+                      <TooltipContent>
+                        {t("details.item.button.viewInExplore", {
+                          ns: "views/explore",
+                        })}
+                      </TooltipContent>
+                    </TooltipPortal>
+                  </Tooltip>
+                )}
+              </div>
+            )}
+            <Header className={cn("mx-2", isMobile && "flex-shrink-0")}>
+              <div>
+                <ContentTitle
+                  className={cn(
+                    "flex items-center gap-1 font-normal capitalize",
+                    isMobile && "px-2",
+                  )}
+                >
+                  {event?.sub_label ? event.sub_label : t("details.unknown")}
+                  {event?.sub_label && (
+                    <div
+                      className={cn(
+                        "",
+                        bestScoreStatus == "match" && "text-success",
+                        bestScoreStatus == "potential" && "text-orange-400",
+                        bestScoreStatus == "unknown" && "text-danger",
+                      )}
+                    >{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
+                  )}
+                </ContentTitle>
+                <ContentDescription className={cn("", isMobile && "px-2")}>
+                  {time && (
+                    <TimeAgo
+                      className="text-sm text-secondary-foreground"
+                      time={time}
+                      dense
+                    />
+                  )}
+                </ContentDescription>
+              </div>
+            </Header>
+            <div
+              className={cn(
+                "flex cursor-pointer flex-col gap-2 rounded-lg",
+                isDesktop && "p-2",
+                isMobile && "scrollbar-container w-full flex-1 overflow-y-auto",
+              )}
+            >
+              <div
+                className={cn(
+                  "gap-2",
+                  isDesktop
+                    ? "flex flex-row flex-wrap"
+                    : "grid grid-cols-2 justify-items-center gap-2 px-2 sm:grid-cols-5 lg:grid-cols-6",
+                )}
+              >
+                {group.map((data: ClassificationItemData) => (
+                  <div
+                    key={data.filename}
+                    className={cn(isMobile && "aspect-square size-full")}
+                  >
+                    <ClassificationCard
+                      data={data}
+                      threshold={threshold}
+                      selected={false}
+                      i18nLibrary={i18nLibrary}
+                      onClick={(data, meta) => {
+                        if (meta || selectedItems.length > 0) {
+                          onClick(data);
+                        }
+                      }}
+                    >
+                      {children?.(data)}
+                    </ClassificationCard>
+                  </div>
+                ))}
+              </div>
+            </div>
+          </>
+        </Content>
+      </Overlay>
+    </>
+  );
 }
@@ -21,6 +21,7 @@ import { baseUrl } from "@/api/baseUrl";
 import { cn } from "@/lib/utils";
 import { shareOrCopy } from "@/utils/browserUtil";
 import { useTranslation } from "react-i18next";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
 
 type ExportProps = {
   className: string;
@@ -224,7 +225,7 @@ export default function ExportCard({
           {loading && (
             <Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />
           )}
-          <div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 h-[50%] rounded-lg bg-gradient-to-t from-black/60 to-transparent md:rounded-2xl" />
+          <ImageShadowOverlay />
           <div className="absolute bottom-2 left-3 flex h-full items-end justify-between text-white smart-capitalize">
             {exportedRecording.name.replaceAll("_", " ")}
           </div>
web/src/components/overlay/ImageShadowOverlay.tsx (new file, 27 lines)
@@ -0,0 +1,27 @@
+import { cn } from "@/lib/utils";
+
+type ImageShadowOverlayProps = {
+  upperClassName?: string;
+  lowerClassName?: string;
+};
+export function ImageShadowOverlay({
+  upperClassName,
+  lowerClassName,
+}: ImageShadowOverlayProps) {
+  return (
+    <>
+      <div
+        className={cn(
+          "pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
+          upperClassName,
+        )}
+      />
+      <div
+        className={cn(
+          "pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
+          lowerClassName,
+        )}
+      />
+    </>
+  );
+}
@@ -6,6 +6,7 @@ import MSEPlayer from "./MsePlayer";
 import { LivePlayerMode } from "@/types/live";
 import { cn } from "@/lib/utils";
 import React from "react";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
 
 type LivePlayerProps = {
   className?: string;
@@ -76,8 +77,7 @@ export default function BirdseyeLivePlayer({
       )}
       onClick={onClick}
     >
-      <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
-      <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
+      <ImageShadowOverlay />
       <div className="size-full" ref={playerRef}>
         {player}
       </div>
@@ -25,6 +25,7 @@ import { PlayerStats } from "./PlayerStats";
 import { LuVideoOff } from "react-icons/lu";
 import { Trans, useTranslation } from "react-i18next";
 import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
 
 type LivePlayerProps = {
   cameraRef?: (ref: HTMLDivElement | null) => void;
@@ -328,10 +329,7 @@ export default function LivePlayer({
     >
       {cameraEnabled &&
         ((showStillWithoutActivity && !liveReady) || liveReady) && (
-          <>
-            <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
-            <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
-          </>
+          <ImageShadowOverlay />
         )}
       {player}
       {cameraEnabled &&
@@ -107,7 +107,7 @@ const DialogContent = React.forwardRef<
   >
     {children}
     <DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity data-[state=open]:bg-accent data-[state=open]:text-muted-foreground hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none">
-      <X className="h-4 w-4" />
+      <X className="h-4 w-4 text-secondary-foreground" />
       <span className="sr-only">Close</span>
     </DialogPrimitive.Close>
   </DialogPrimitive.Content>
@@ -51,7 +51,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop } from "react-device-detect";
+import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import {
   LuFolderCheck,
@@ -63,10 +63,6 @@
 } from "react-icons/lu";
 import { toast } from "sonner";
 import useSWR from "swr";
-import SearchDetailDialog, {
-  SearchTab,
-} from "@/components/overlay/detail/SearchDetailDialog";
-import { SearchResult } from "@/types/search";
 import {
   ClassificationCard,
   GroupedClassificationCard,
@@ -686,11 +682,6 @@ function TrainingGrid({
     { ids: eventIdsQuery },
   ]);
 
-  // selection
-
-  const [selectedEvent, setSelectedEvent] = useState<Event>();
-  const [dialogTab, setDialogTab] = useState<SearchTab>("details");
-
   if (attemptImages.length == 0) {
     return (
       <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
@@ -701,40 +692,32 @@ function TrainingGrid({
   }
 
   return (
-    <>
-      <SearchDetailDialog
-        search={
-          selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
-        }
-        page={dialogTab}
-        setSimilarity={undefined}
-        setSearchPage={setDialogTab}
-        setSearch={(search) => setSelectedEvent(search as unknown as Event)}
-        setInputFocused={() => {}}
-      />
-
-      <div
-        ref={contentRef}
-        className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
-      >
-        {Object.entries(faceGroups).map(([key, group]) => {
-          const event = events?.find((ev) => ev.id == key);
-          return (
+    <div
+      ref={contentRef}
+      className={cn(
+        "scrollbar-container gap-3 overflow-y-scroll p-1",
+        isMobile
+          ? "grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
+          : "flex flex-wrap",
+      )}
+    >
+      {Object.entries(faceGroups).map(([key, group]) => {
+        const event = events?.find((ev) => ev.id == key);
+        return (
+          <div key={key} className={cn(isMobile && "aspect-square size-full")}>
             <FaceAttemptGroup
               key={key}
               config={config}
               group={group}
               event={event}
               faceNames={faceNames}
               selectedFaces={selectedFaces}
               onClickFaces={onClickFaces}
-              onSelectEvent={setSelectedEvent}
               onRefresh={onRefresh}
             />
-          );
-        })}
-      </div>
-    </>
+          </div>
+        );
+      })}
+    </div>
   );
 }
@@ -745,7 +728,6 @@ type FaceAttemptGroupProps = {
   faceNames: string[];
   selectedFaces: string[];
   onClickFaces: (image: string[], ctrl: boolean) => void;
-  onSelectEvent: (event: Event) => void;
   onRefresh: () => void;
 };
 function FaceAttemptGroup({
@@ -755,7 +737,6 @@ function FaceAttemptGroup({
   faceNames,
   selectedFaces,
   onClickFaces,
-  onSelectEvent,
   onRefresh,
 }: FaceAttemptGroupProps) {
   const { t } = useTranslation(["views/faceLibrary", "views/explore"]);
@@ -773,8 +754,8 @@ function FaceAttemptGroup({
 
   const handleClickEvent = useCallback(
     (meta: boolean) => {
-      if (event && selectedFaces.length == 0 && !meta) {
-        onSelectEvent(event);
+      if (!meta) {
         return;
       } else {
         const anySelected =
           group.find((face) => selectedFaces.includes(face.filename)) !=
@@ -798,7 +779,7 @@ function FaceAttemptGroup({
         }
       }
     },
-    [event, group, selectedFaces, onClickFaces, onSelectEvent],
+    [group, selectedFaces, onClickFaces],
   );
 
   // api calls
@@ -873,7 +854,6 @@ function FaceAttemptGroup({
           handleClickEvent(true);
         }
       }}
-      onSelectEvent={onSelectEvent}
     >
       {(data) => (
         <>
@@ -1,6 +1,7 @@
 import { baseUrl } from "@/api/baseUrl";
 import ClassificationModelWizardDialog from "@/components/classification/ClassificationModelWizardDialog";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
+import { ImageShadowOverlay } from "@/components/overlay/ImageShadowOverlay";
 import { Button } from "@/components/ui/button";
 import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
 import useOptimisticState from "@/hooks/use-optimistic-state";
@@ -163,7 +164,7 @@ function ModelCard({ config, onClick }: ModelCardProps) {
         className={cn("size-full", isMobile && "w-full")}
         src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
       />
-      <div className="absolute bottom-0 h-[50%] w-full bg-gradient-to-t from-black/60 to-transparent" />
+      <ImageShadowOverlay />
       <div className="absolute bottom-2 left-3 text-lg smart-capitalize">
         {config.name}
       </div>
@@ -44,7 +44,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop, isMobile } from "react-device-detect";
+import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { toast } from "sonner";
@@ -791,7 +791,7 @@ function StateTrainGrid({
     <div
       ref={contentRef}
       className={cn(
-        "scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2",
+        "scrollbar-container flex flex-wrap gap-3 overflow-y-auto p-2",
         isMobile && "justify-center",
       )}
     >
@@ -927,41 +927,50 @@ function ObjectTrainGrid({
 
       <div
         ref={contentRef}
-        className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
+        className={cn(
+          "scrollbar-container gap-3 overflow-y-scroll p-1",
+          isMobile
+            ? "grid grid-cols-2 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
+            : "flex flex-wrap",
+        )}
       >
         {Object.entries(groups).map(([key, group]) => {
           const event = events?.find((ev) => ev.id == key);
           return (
-            <GroupedClassificationCard
+            <div
               key={key}
-              group={group}
-              event={event}
-              threshold={threshold}
-              selectedItems={selectedImages}
-              i18nLibrary="views/classificationModel"
-              objectType={model.object_config?.objects?.at(0) ?? "Object"}
-              onClick={(data) => {
-                if (data) {
-                  onClickImages([data.filename], true);
-                } else {
-                  handleClickEvent(group, event, true);
-                }
-              }}
-              onSelectEvent={() => {}}
+              className={cn(isMobile && "aspect-square size-full")}
             >
-              {(data) => (
-                <>
-                  <ClassificationSelectionDialog
-                    classes={classes}
-                    modelName={model.name}
-                    image={data.filename}
-                    onRefresh={onRefresh}
-                  >
-                    <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
-                  </ClassificationSelectionDialog>
-                </>
-              )}
-            </GroupedClassificationCard>
+              <GroupedClassificationCard
+                key={key}
+                group={group}
+                event={event}
+                threshold={threshold}
+                selectedItems={selectedImages}
+                i18nLibrary="views/classificationModel"
+                objectType={model.object_config?.objects?.at(0) ?? "Object"}
+                onClick={(data) => {
+                  if (data) {
+                    onClickImages([data.filename], true);
+                  } else {
+                    handleClickEvent(group, event, true);
+                  }
+                }}
+              >
+                {(data) => (
+                  <>
+                    <ClassificationSelectionDialog
+                      classes={classes}
+                      modelName={model.name}
+                      image={data.filename}
+                      onRefresh={onRefresh}
+                    >
+                      <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
+                    </ClassificationSelectionDialog>
+                  </>
+                )}
+              </GroupedClassificationCard>
+            </div>
           );
         })}
       </div>