Compare commits


No commits in common. "0bcabf0731b48f1f993d1013835b54fa30509c17" and "8c318699c4ff0b6ddfd097ca228c1210556e16a5" have entirely different histories.

17 changed files with 227 additions and 350 deletions

View File

@@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation; see the sketch below.
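For illustration, here is a minimal Python sketch of how a tuned `threshold` and a `none` class can combine to gate sub label assignment. The helper and its inputs are hypothetical stand-ins, not Frigate's API:

```python
# Hypothetical helper (not Frigate's API): assign a sub label only when the
# top-scoring class clears the tuned threshold and is not the `none` class.
def pick_sub_label(scores: dict[str, float], threshold: float = 0.8) -> str | None:
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if label == "none" or score < threshold:
        return None  # uncertain prediction or explicit `none`: skip the sub label
    return label

print(pick_sub_label({"usps": 0.91, "fedex": 0.06, "none": 0.03}))  # -> "usps"
print(pick_sub_label({"usps": 0.55, "fedex": 0.35, "none": 0.10}))  # -> None
```

Raising `threshold` trades missed assignments for fewer false ones, which is why validating against real clips matters.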

View File

@@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.

View File

@@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
- `min_faces`: Min face recognitions for the sub label to be applied to the person object.
- Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training.
- Default: `200`.
- Default: `100`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this (see the sketch after this list).
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
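As a rough sketch of what such a blur filter can look like, the snippet below scales confidence by a sharpness estimate (variance of the Laplacian). This is an illustrative assumption, not Frigate's actual implementation, and the `100.0` cutoff is arbitrary:

```python
import cv2
import numpy as np

# Illustrative only: lower the recognition confidence as the face crop gets
# blurrier, using variance of the Laplacian as a cheap sharpness measure.
def adjust_confidence(face_bgr: np.ndarray, confidence: float) -> float:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    penalty = min(1.0, sharpness / 100.0)  # hypothetical sharpness cutoff
    return confidence * penalty
```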
@@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
:::
### Understanding the Recent Recognitions Tab
### Understanding the Train Tab
The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
@@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy.
### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
### I see scores above the threshold in the Train tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person object if that person is confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results.
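A minimal sketch of such area-weighted aggregation (the helper is hypothetical; Frigate's actual weighting may differ in detail):

```python
# Hypothetical: average each person's scores weighted by face area, and only
# return a name whose weighted average clears the recognition threshold.
def weighted_consensus(
    attempts: list[tuple[str, float, int]],  # (name, score, face_area)
    threshold: float,
) -> str | None:
    totals: dict[str, float] = {}
    areas: dict[str, int] = {}
    for name, score, area in attempts:
        totals[name] = totals.get(name, 0.0) + score * area
        areas[name] = areas.get(name, 0) + area
    if not totals:
        return None
    averages = {name: totals[name] / areas[name] for name in totals}
    best = max(averages, key=averages.get)
    return best if averages[best] >= threshold else None

# A single high score on a small face does not outweigh weak larger faces:
attempts = [("jane", 0.95, 400), ("jane", 0.40, 4000), ("jane", 0.45, 3800)]
print(weighted_consensus(attempts, 0.7))  # -> None
```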

View File

@@ -630,7 +630,7 @@ face_recognition:
  # Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
  min_faces: 1
  # Optional: Number of images of recognized faces to save for training (default: shown below)
  save_attempts: 200
  save_attempts: 100
  # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
  blur_confidence_filter: True
  # Optional: Set the model size used for face recognition. (default: shown below)

View File

@@ -197,7 +197,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
        title="Min face recognitions for the sub label to be applied to the person object.",
    )
    save_attempts: int = Field(
        default=200, ge=0, title="Number of face attempts to save in the recent recognitions tab."
        default=100, ge=0, title="Number of face attempts to save in the train tab."
    )
    blur_confidence_filter: bool = Field(
        default=True, title="Apply blur quality filter to face confidence."

View File

@@ -35,6 +35,10 @@ except ModuleNotFoundError:
logger = logging.getLogger(__name__)

MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS = 6
MAX_CLASSIFICATION_ATTEMPTS = 12

class CustomStateClassificationProcessor(RealTimeProcessorApi):
    def __init__(
        self,
@@ -264,6 +268,26 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
if obj_data["label"] not in self.model_config.object_config.objects:
return
if (
obj_data["id"] in self.detected_objects
and len(self.detected_objects[obj_data["id"]])
>= MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS
):
# if we are at max attempts after rec and we have a rec
if obj_data.get("sub_label"):
logger.debug(
"Not processing due to hitting max attempts after true recognition."
)
return
# if we don't have a rec and are at max attempts
if (
len(self.detected_objects[obj_data["id"]])
>= MAX_CLASSIFICATION_ATTEMPTS
):
logger.debug("Not processing due to hitting max rec attempts.")
return
now = datetime.datetime.now().timestamp()
x, y, x2, y2 = calculate_region(
frame.shape,

View File

@@ -23,7 +23,7 @@
      "label": "Min face recognitions for the sub label to be applied to the person object."
    },
    "save_attempts": {
      "label": "Number of face attempts to save in the recent recognitions tab."
      "label": "Number of face attempts to save in the train tab."
    },
    "blur_confidence_filter": {
      "label": "Apply blur quality filter to face confidence."

View File

@@ -22,7 +22,7 @@
    "title": "Create Collection",
    "desc": "Create a new collection",
    "new": "Create New Face",
    "nextSteps": "To build a strong foundation:<ul><li>Use the Recent Recognitions tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
    "nextSteps": "To build a strong foundation:<ul><li>Use the Train tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
  },
  "steps": {
    "faceName": "Enter Face Name",

View File

@@ -6,7 +6,7 @@ import {
ClassificationThreshold,
} from "@/types/classification";
import { Event } from "@/types/event";
import { forwardRef, useMemo, useRef, useState } from "react";
import { useMemo, useRef, useState } from "react";
import { isDesktop, isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next";
import TimeAgo from "../dynamic/TimeAgo";
@@ -14,24 +14,7 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { LuSearch } from "react-icons/lu";
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { useNavigate } from "react-router-dom";
import { HiSquare2Stack } from "react-icons/hi2";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
import {
  Dialog,
  DialogContent,
  DialogDescription,
  DialogHeader,
  DialogTitle,
  DialogTrigger,
} from "../ui/dialog";
import {
  MobilePage,
  MobilePageContent,
  MobilePageDescription,
  MobilePageHeader,
  MobilePageTitle,
  MobilePageTrigger,
} from "../mobile/MobilePage";
import { getTranslatedLabel } from "@/utils/i18n";
type ClassificationCardProps = {
  imgClassName?: string;
@@ -40,27 +23,19 @@
  selected: boolean;
  i18nLibrary: string;
  showArea?: boolean;
  count?: number;
  onClick: (data: ClassificationItemData, meta: boolean) => void;
  children?: React.ReactNode;
};
export const ClassificationCard = forwardRef<
  HTMLDivElement,
  ClassificationCardProps
>(function ClassificationCard(
  {
    imgClassName,
    data,
    threshold,
    selected,
    i18nLibrary,
    showArea = true,
    count,
    onClick,
    children,
  },
  ref,
) {
export function ClassificationCard({
  imgClassName,
  data,
  threshold,
  selected,
  i18nLibrary,
  showArea = true,
  onClick,
  children,
}: ClassificationCardProps) {
  const { t } = useTranslation([i18nLibrary]);
  const [imageLoaded, setImageLoaded] = useState(false);
@@ -96,26 +71,12 @@ export const ClassificationCard = forwardRef<
return (
<div
ref={ref}
className={cn(
"relative flex cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
isMobile ? "!size-full" : "size-48",
"relative flex size-48 cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
selected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
)}
onClick={(e) => {
const isMeta = e.metaKey || e.ctrlKey;
if (isMeta) {
e.stopPropagation();
}
onClick(data, isMeta);
}}
onContextMenu={(e) => {
e.preventDefault();
e.stopPropagation();
onClick(data, true);
}}
>
<img
ref={imgRef}
@@ -126,16 +87,13 @@ export const ClassificationCard = forwardRef<
)}
onLoad={() => setImageLoaded(true)}
src={`${baseUrl}${data.filepath}`}
onClick={(e) => {
e.stopPropagation();
onClick(data, e.metaKey || e.ctrlKey);
}}
/>
<ImageShadowOverlay upperClassName="z-0" lowerClassName="h-[30%] z-0" />
{count && (
<div className="absolute right-2 top-2 flex flex-row items-center gap-1">
<div className="text-gray-200">{count}</div>{" "}
<HiSquare2Stack className="text-gray-200" />
</div>
)}
{!count && imageArea != undefined && (
<div className="absolute right-1 top-1 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
{false && imageArea != undefined && (
<div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
{t("information.pixels", { ns: "common", area: imageArea })}
</div>
)}
@@ -169,7 +127,7 @@ export const ClassificationCard = forwardRef<
</div>
</div>
);
});
}
type GroupedClassificationCardProps = {
  group: ClassificationItemData[];
@@ -179,6 +137,7 @@ type GroupedClassificationCardProps = {
  i18nLibrary: string;
  objectType: string;
  onClick: (data: ClassificationItemData | undefined) => void;
  onSelectEvent: (event: Event) => void;
  children?: (data: ClassificationItemData) => React.ReactNode;
};
export function GroupedClassificationCard({
@@ -187,54 +146,20 @@
  threshold,
  selectedItems,
  i18nLibrary,
  objectType,
  onClick,
  onSelectEvent,
  children,
}: GroupedClassificationCardProps) {
  const navigate = useNavigate();
  const { t } = useTranslation(["views/explore", i18nLibrary]);
  const [detailOpen, setDetailOpen] = useState(false);
// data
const bestItem = useMemo<ClassificationItemData | undefined>(() => {
let best: undefined | ClassificationItemData = undefined;
group.forEach((item) => {
if (item?.name != undefined && item.name != "none") {
if (
best?.score == undefined ||
(item.score && best.score < item.score)
) {
best = item;
}
}
});
if (!best) {
return group[0];
}
const bestTyped: ClassificationItemData = best;
return {
...bestTyped,
name: event?.sub_label || bestTyped.name,
score: event?.data?.sub_label_score || bestTyped.score,
};
}, [group, event]);
const bestScoreStatus = useMemo(() => {
if (!bestItem?.score || !threshold) {
return "unknown";
}
if (bestItem.score >= threshold.recognition) {
return "match";
} else if (bestItem.score >= threshold.unknown) {
return "potential";
} else {
return "unknown";
}
}, [bestItem, threshold]);
const allItemsSelected = useMemo(
() => group.every((data) => selectedItems.includes(data.filename)),
[group, selectedItems],
);
const time = useMemo(() => {
const item = group[0];
@@ -246,150 +171,94 @@ export function GroupedClassificationCard({
return item.timestamp * 1000;
}, [group]);
if (!bestItem) {
return null;
}
const Overlay = isDesktop ? Dialog : MobilePage;
const Trigger = isDesktop ? DialogTrigger : MobilePageTrigger;
const Header = isDesktop ? DialogHeader : MobilePageHeader;
const Content = isDesktop ? DialogContent : MobilePageContent;
const ContentTitle = isDesktop ? DialogTitle : MobilePageTitle;
const ContentDescription = isDesktop
? DialogDescription
: MobilePageDescription;
return (
<>
<ClassificationCard
data={bestItem}
threshold={threshold}
selected={selectedItems.includes(bestItem.filename)}
i18nLibrary={i18nLibrary}
count={group.length}
onClick={(_, meta) => {
if (meta || selectedItems.length > 0) {
onClick(undefined);
} else {
setDetailOpen(true);
}
}}
/>
<Overlay
open={detailOpen}
onOpenChange={(open) => {
if (!open) {
setDetailOpen(false);
}
}}
>
<Trigger asChild></Trigger>
<Content
className={cn(
"",
isDesktop && "w-auto max-w-[85%]",
isMobile && "flex flex-col",
<div
className={cn(
"flex cursor-pointer flex-col gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
isMobile && "w-full",
allItemsSelected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
)}
onClick={() => {
if (selectedItems.length) {
onClick(undefined);
}
}}
onContextMenu={(e) => {
e.stopPropagation();
e.preventDefault();
onClick(undefined);
}}
>
<div className="flex flex-row justify-between">
<div className="flex flex-col gap-1">
<div className="select-none smart-capitalize">
{getTranslatedLabel(objectType)}
{event?.sub_label
? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
: ": " + t("details.unknown")}
</div>
{time && (
<TimeAgo
className="text-sm text-secondary-foreground"
time={time}
dense
/>
)}
onOpenAutoFocus={(e) => e.preventDefault()}
>
<>
{isDesktop && (
<div className="absolute right-10 top-4 flex flex-row justify-between">
{event && (
<Tooltip>
<TooltipTrigger asChild>
<div
className="cursor-pointer"
tabIndex={-1}
onClick={() => {
navigate(`/explore?event_id=${event.id}`);
}}
>
<LuSearch className="size-4 text-secondary-foreground" />
</div>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("details.item.button.viewInExplore", {
ns: "views/explore",
})}
</TooltipContent>
</TooltipPortal>
</Tooltip>
)}
</div>
)}
<Header className={cn("mx-2", isMobile && "flex-shrink-0")}>
<div>
<ContentTitle
className={cn(
"flex items-center gap-1 font-normal capitalize",
isMobile && "px-2",
)}
>
{event?.sub_label ? event.sub_label : t("details.unknown")}
{event?.sub_label && (
<div
className={cn(
"",
bestScoreStatus == "match" && "text-success",
bestScoreStatus == "potential" && "text-orange-400",
bestScoreStatus == "unknown" && "text-danger",
)}
>{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
)}
</ContentTitle>
<ContentDescription className={cn("", isMobile && "px-2")}>
{time && (
<TimeAgo
className="text-sm text-secondary-foreground"
time={time}
dense
/>
)}
</ContentDescription>
</div>
</Header>
<div
className={cn(
"flex cursor-pointer flex-col gap-2 rounded-lg",
isDesktop && "p-2",
isMobile && "scrollbar-container w-full flex-1 overflow-y-auto",
)}
>
</div>
{event && (
<Tooltip>
<TooltipTrigger>
<div
className={cn(
"gap-2",
isDesktop
? "flex flex-row flex-wrap"
: "grid grid-cols-2 justify-items-center gap-2 px-2 sm:grid-cols-5 lg:grid-cols-6",
)}
className="cursor-pointer"
onClick={() => {
navigate(`/explore?event_id=${event.id}`);
}}
>
{group.map((data: ClassificationItemData) => (
<div
key={data.filename}
className={cn(isMobile && "aspect-square size-full")}
>
<ClassificationCard
data={data}
threshold={threshold}
selected={false}
i18nLibrary={i18nLibrary}
onClick={(data, meta) => {
if (meta || selectedItems.length > 0) {
onClick(data);
}
}}
>
{children?.(data)}
</ClassificationCard>
</div>
))}
<LuSearch className="size-4 text-muted-foreground" />
</div>
</div>
</>
</Content>
</Overlay>
</>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>
{t("details.item.button.viewInExplore", {
ns: "views/explore",
})}
</TooltipContent>
</TooltipPortal>
</Tooltip>
)}
</div>
<div
className={cn(
"gap-2",
isDesktop
? "flex flex-row flex-wrap"
: "grid grid-cols-2 sm:grid-cols-5 lg:grid-cols-6",
)}
>
{group.map((data: ClassificationItemData) => (
<ClassificationCard
key={data.filename}
data={data}
threshold={threshold}
selected={
allItemsSelected ? false : selectedItems.includes(data.filename)
}
i18nLibrary={i18nLibrary}
onClick={(data, meta) => {
if (meta || selectedItems.length > 0) {
onClick(data);
} else if (event) {
onSelectEvent(event);
}
}}
>
{children?.(data)}
</ClassificationCard>
))}
</div>
</div>
);
}

View File

@@ -21,7 +21,6 @@ import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { shareOrCopy } from "@/utils/browserUtil";
import { useTranslation } from "react-i18next";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type ExportProps = {
  className: string;
@@ -225,7 +224,7 @@ export default function ExportCard({
{loading && (
<Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />
)}
<ImageShadowOverlay />
<div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 h-[50%] rounded-lg bg-gradient-to-t from-black/60 to-transparent md:rounded-2xl" />
<div className="absolute bottom-2 left-3 flex h-full items-end justify-between text-white smart-capitalize">
{exportedRecording.name.replaceAll("_", " ")}
</div>

View File

@@ -1,27 +0,0 @@
import { cn } from "@/lib/utils";

type ImageShadowOverlayProps = {
  upperClassName?: string;
  lowerClassName?: string;
};

export function ImageShadowOverlay({
  upperClassName,
  lowerClassName,
}: ImageShadowOverlayProps) {
  return (
    <>
      <div
        className={cn(
          "pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
          upperClassName,
        )}
      />
      <div
        className={cn(
          "pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
          lowerClassName,
        )}
      />
    </>
  );
}

View File

@@ -6,7 +6,6 @@ import MSEPlayer from "./MsePlayer";
import { LivePlayerMode } from "@/types/live";
import { cn } from "@/lib/utils";
import React from "react";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type LivePlayerProps = {
  className?: string;
@@ -77,7 +76,8 @@ export default function BirdseyeLivePlayer({
)}
onClick={onClick}
>
<ImageShadowOverlay />
<div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
<div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
<div className="size-full" ref={playerRef}>
{player}
</div>

View File

@@ -25,7 +25,6 @@ import { PlayerStats } from "./PlayerStats";
import { LuVideoOff } from "react-icons/lu";
import { Trans, useTranslation } from "react-i18next";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type LivePlayerProps = {
  cameraRef?: (ref: HTMLDivElement | null) => void;
@@ -329,7 +328,10 @@ export default function LivePlayer({
>
{cameraEnabled &&
((showStillWithoutActivity && !liveReady) || liveReady) && (
<ImageShadowOverlay />
<>
<div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
<div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
</>
)}
{player}
{cameraEnabled &&

View File

@@ -107,7 +107,7 @@ const DialogContent = React.forwardRef<
>
{children}
<DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity data-[state=open]:bg-accent data-[state=open]:text-muted-foreground hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none">
<X className="h-4 w-4 text-secondary-foreground" />
<X className="h-4 w-4" />
<span className="sr-only">Close</span>
</DialogPrimitive.Close>
</DialogPrimitive.Content>

View File

@@ -51,7 +51,7 @@ import {
  useRef,
  useState,
} from "react";
import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
import { isDesktop } from "react-device-detect";
import { Trans, useTranslation } from "react-i18next";
import {
  LuFolderCheck,
@@ -63,6 +63,10 @@ } from "react-icons/lu";
} from "react-icons/lu";
import { toast } from "sonner";
import useSWR from "swr";
import SearchDetailDialog, {
  SearchTab,
} from "@/components/overlay/detail/SearchDetailDialog";
import { SearchResult } from "@/types/search";
import {
  ClassificationCard,
  GroupedClassificationCard,
@@ -682,6 +686,11 @@ function TrainingGrid({
{ ids: eventIdsQuery },
]);
// selection
const [selectedEvent, setSelectedEvent] = useState<Event>();
const [dialogTab, setDialogTab] = useState<SearchTab>("details");
if (attemptImages.length == 0) {
return (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
@@ -692,32 +701,40 @@ }
}
return (
<div
ref={contentRef}
className={cn(
"scrollbar-container gap-3 overflow-y-scroll p-1",
isMobile
? "grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
: "flex flex-wrap",
)}
>
{Object.entries(faceGroups).map(([key, group]) => {
const event = events?.find((ev) => ev.id == key);
return (
<div key={key} className={cn(isMobile && "aspect-square size-full")}>
<>
<SearchDetailDialog
search={
selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
}
page={dialogTab}
setSimilarity={undefined}
setSearchPage={setDialogTab}
setSearch={(search) => setSelectedEvent(search as unknown as Event)}
setInputFocused={() => {}}
/>
<div
ref={contentRef}
className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
>
{Object.entries(faceGroups).map(([key, group]) => {
const event = events?.find((ev) => ev.id == key);
return (
<FaceAttemptGroup
key={key}
config={config}
group={group}
event={event}
faceNames={faceNames}
selectedFaces={selectedFaces}
onClickFaces={onClickFaces}
onSelectEvent={setSelectedEvent}
onRefresh={onRefresh}
/>
</div>
);
})}
</div>
);
})}
</div>
</>
);
}
@@ -728,6 +745,7 @@ type FaceAttemptGroupProps = {
  faceNames: string[];
  selectedFaces: string[];
  onClickFaces: (image: string[], ctrl: boolean) => void;
  onSelectEvent: (event: Event) => void;
  onRefresh: () => void;
};
function FaceAttemptGroup({
@@ -737,6 +755,7 @@ function FaceAttemptGroup({
  faceNames,
  selectedFaces,
  onClickFaces,
  onSelectEvent,
  onRefresh,
}: FaceAttemptGroupProps) {
  const { t } = useTranslation(["views/faceLibrary", "views/explore"]);
@@ -754,8 +773,8 @@ const handleClickEvent = useCallback(
const handleClickEvent = useCallback(
(meta: boolean) => {
if (!meta) {
return;
if (event && selectedFaces.length == 0 && !meta) {
onSelectEvent(event);
} else {
const anySelected =
group.find((face) => selectedFaces.includes(face.filename)) !=
@@ -779,7 +798,7 @@ }
}
}
},
[group, selectedFaces, onClickFaces],
[event, group, selectedFaces, onClickFaces, onSelectEvent],
);
// api calls
@@ -854,6 +873,7 @@ function FaceAttemptGroup({
handleClickEvent(true);
}
}}
onSelectEvent={onSelectEvent}
>
{(data) => (
<>

View File

@@ -1,7 +1,6 @@ import { baseUrl } from "@/api/baseUrl";
import { baseUrl } from "@/api/baseUrl";
import ClassificationModelWizardDialog from "@/components/classification/ClassificationModelWizardDialog";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { ImageShadowOverlay } from "@/components/overlay/ImageShadowOverlay";
import { Button } from "@/components/ui/button";
import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
import useOptimisticState from "@/hooks/use-optimistic-state";
@@ -164,7 +163,7 @@ function ModelCard({ config, onClick }: ModelCardProps) {
className={cn("size-full", isMobile && "w-full")}
src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
/>
<ImageShadowOverlay />
<div className="absolute bottom-0 h-[50%] w-full bg-gradient-to-t from-black/60 to-transparent" />
<div className="absolute bottom-2 left-3 text-lg smart-capitalize">
{config.name}
</div>

View File

@@ -44,7 +44,7 @@ import {
  useRef,
  useState,
} from "react";
import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
import { isDesktop, isMobile } from "react-device-detect";
import { Trans, useTranslation } from "react-i18next";
import { LuPencil, LuTrash2 } from "react-icons/lu";
import { toast } from "sonner";
@@ -791,7 +791,7 @@ function StateTrainGrid({
<div
ref={contentRef}
className={cn(
"scrollbar-container flex flex-wrap gap-3 overflow-y-auto p-2",
"scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2",
isMobile && "justify-center",
)}
>
@@ -927,50 +927,41 @@ function ObjectTrainGrid({
<div
ref={contentRef}
className={cn(
"scrollbar-container gap-3 overflow-y-scroll p-1",
isMobile
? "grid grid-cols-2 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
: "flex flex-wrap",
)}
className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
>
{Object.entries(groups).map(([key, group]) => {
const event = events?.find((ev) => ev.id == key);
return (
<div
<GroupedClassificationCard
key={key}
className={cn(isMobile && "aspect-square size-full")}
group={group}
event={event}
threshold={threshold}
selectedItems={selectedImages}
i18nLibrary="views/classificationModel"
objectType={model.object_config?.objects?.at(0) ?? "Object"}
onClick={(data) => {
if (data) {
onClickImages([data.filename], true);
} else {
handleClickEvent(group, event, true);
}
}}
onSelectEvent={() => {}}
>
<GroupedClassificationCard
key={key}
group={group}
event={event}
threshold={threshold}
selectedItems={selectedImages}
i18nLibrary="views/classificationModel"
objectType={model.object_config?.objects?.at(0) ?? "Object"}
onClick={(data) => {
if (data) {
onClickImages([data.filename], true);
} else {
handleClickEvent(group, event, true);
}
}}
>
{(data) => (
<>
<ClassificationSelectionDialog
classes={classes}
modelName={model.name}
image={data.filename}
onRefresh={onRefresh}
>
<TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
</ClassificationSelectionDialog>
</>
)}
</GroupedClassificationCard>
</div>
{(data) => (
<>
<ClassificationSelectionDialog
classes={classes}
modelName={model.name}
image={data.filename}
onRefresh={onRefresh}
>
<TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
</ClassificationSelectionDialog>
</>
)}
</GroupedClassificationCard>
);
})}
</div>