Classification Model UI Refactor (#20602)

* Add cutoff for object classification

* Add selector for classification model type

* Improve model selection view

* Clean up design of classification card

* Tweaks

* Adjust button colors

* Improvements to gradients and making face library consistent

* Add basic classification model wizard

* Use relative coordinates

* Properly get resolution

* Clean up exports

* Cleanup

* Cleanup

* Update to use pre-defined component for image shadow

* Refactor image grouping

* Clean up mobile

* Clean up decision logic

* Remove max check on classification objects

* Increase default number of faces shown

* Cleanup

* Improve mobile layout

* Cleanup

* Update vocabulary

* Fix layout

* Fix page

* Cleanup

* Choose last item for unknown objects

* Move explore button

* Cleanup grid

* Cleanup classification

* Cleanup grid

* Cleanup

* Set transparency

* Set unknown

* Don't filter all configs

* Check length
This commit is contained in:
Nicolas Mowen 2025-10-22 07:36:09 -06:00 committed by GitHub
parent 9638e85a1f
commit d6f5d2b0fa
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
23 changed files with 666 additions and 434 deletions

View File

@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d
### Improving the Model ### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types. - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances. - **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered. - **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels. - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation. - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
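The threshold and `none`-class advice above can be sketched as a small post-processing step (a minimal illustration, not Frigate's internal code; the class names are hypothetical and `0.8` is the starting threshold the docs suggest):

```python
def assign_sub_label(predictions, threshold=0.8):
    """Pick the top class, ignoring 'none' and low-confidence results.

    predictions: dict mapping class name -> confidence score.
    Returns the winning label, or None when the model is uncertain.
    """
    label, score = max(predictions.items(), key=lambda kv: kv[1])
    if label == "none" or score < threshold:
        return None  # uncertain: do not assign a sub label
    return label

# Example: a confident prediction is kept, a weak one is dropped.
assign_sub_label({"delivery_truck": 0.91, "none": 0.05})  # "delivery_truck"
assign_sub_label({"delivery_truck": 0.55, "none": 0.40})  # None
```

Training a dedicated `none` class, as the docs recommend, lets uncertainty win the argmax outright instead of relying on the threshold alone.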

View File

@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo
### Improving the Model ### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary. - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather. - **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.

View File

@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
- `min_faces`: Min face recognitions for the sub label to be applied to the person object. - `min_faces`: Min face recognitions for the sub label to be applied to the person object.
- Default: `1` - Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training. - `save_attempts`: Number of images of recognized faces to save for training.
- Default: `100`. - Default: `200`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this. - `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`. - Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation). - `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
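Frigate's actual `blur_confidence_filter` implementation is internal, but a common approach it alludes to — estimating sharpness from the variance of a Laplacian response and scaling confidence down for blurry crops — can be sketched as follows (the `sharp_var` normalizer and the half-floor are assumptions for illustration):

```python
import numpy as np

def blur_adjusted_confidence(gray, confidence, sharp_var=500.0):
    """Scale a face confidence down when the crop looks blurry.

    gray: 2D float array of pixel intensities.
    Uses the variance of a discrete Laplacian as a sharpness proxy:
    low variance -> few edges -> likely a blurred image.
    """
    lap = (
        -4.0 * gray
        + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
        + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
    )
    sharpness = min(lap.var() / sharp_var, 1.0)
    # Never cut the score below half; sharp images keep it unchanged.
    return confidence * (0.5 + 0.5 * sharpness)
```

A perfectly flat (fully blurred) crop halves the confidence, while a detailed crop leaves it close to the raw score.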
@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
::: :::
### Understanding the Train Tab ### Understanding the Recent Recognitions Tab
The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching. The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training. Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above. Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library. 1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model: If you are using a Frigate+ or `face` detecting model:
@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person. No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy. For more guidance, refer to the section above on improving recognition accuracy.
### I see scores above the threshold in the train tab, but a sub label wasn't assigned? ### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if that person is confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results. Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if that person is confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results.
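The area-weighted, multi-attempt behavior described above can be sketched as an aggregation over recognition attempts (an illustrative approximation only — Frigate's exact weighting is internal; the `min_faces` gate mirrors the config option of the same name):

```python
def weighted_recognition(attempts, threshold=0.8, min_faces=1):
    """Combine per-attempt face scores for one person object,
    weighting each attempt by the face's pixel area.

    attempts: list of (name, score, area) tuples.
    Returns (name, weighted_score) if a single name clears the
    threshold across at least min_faces attempts, else None.
    """
    totals, weights, counts = {}, {}, {}
    for name, score, area in attempts:
        totals[name] = totals.get(name, 0.0) + score * area
        weights[name] = weights.get(name, 0.0) + area
        counts[name] = counts.get(name, 0) + 1
    best = None
    for name in totals:
        avg = totals[name] / weights[name]
        if counts[name] >= min_faces and avg >= threshold:
            if best is None or avg > best[1]:
                best = (name, avg)
    return best
```

With `min_faces` above 1, a single high-confidence hit from a tiny face cannot assign the sub label on its own, which is the failure mode the paragraph describes.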

View File

@ -630,7 +630,7 @@ face_recognition:
# Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below) # Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
min_faces: 1 min_faces: 1
# Optional: Number of images of recognized faces to save for training (default: shown below) # Optional: Number of images of recognized faces to save for training (default: shown below)
save_attempts: 100 save_attempts: 200
# Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below) # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
blur_confidence_filter: True blur_confidence_filter: True
# Optional: Set the model size used for face recognition. (default: shown below) # Optional: Set the model size used for face recognition. (default: shown below)
@ -671,20 +671,18 @@ lpr:
# Optional: List of regex replacement rules to normalize detected plates (default: shown below) # Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {} replace_rules: {}
# Optional: Configuration for AI generated tracked object descriptions # Optional: Configuration for AI / LLM provider
# WARNING: Depending on the provider, this will send thumbnails over the internet # WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at # to Google or OpenAI's LLMs to generate descriptions. GenAI features can be configured at
# the camera level (enabled: False) to enhance privacy for indoor cameras. # the camera level to enhance privacy for indoor cameras.
genai: genai:
# Optional: Enable AI description generation (default: shown below) # Required: Provider must be one of ollama, gemini, or openai
enabled: False
# Required if enabled: Provider must be one of ollama, gemini, or openai
provider: ollama provider: ollama
# Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider. # Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
base_url: http://localhost:11434 base_url: http://localhost:11434
# Required if gemini or openai # Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}" api_key: "{FRIGATE_GENAI_API_KEY}"
# Required if enabled: The model to use with the provider. # Required: The model to use with the provider.
model: gemini-1.5-flash model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None) # Optional additional args to pass to the GenAI Provider (default: None)
provider_options: provider_options:

View File

@ -69,7 +69,7 @@ class BirdClassificationConfig(FrigateBaseModel):
class CustomClassificationStateCameraConfig(FrigateBaseModel): class CustomClassificationStateCameraConfig(FrigateBaseModel):
crop: list[int, int, int, int] = Field( crop: list[float, float, float, float] = Field(
title="Crop of image frame on this camera to run classification on." title="Crop of image frame on this camera to run classification on."
) )
@ -197,7 +197,9 @@ class FaceRecognitionConfig(FrigateBaseModel):
title="Min face recognitions for the sub label to be applied to the person object.", title="Min face recognitions for the sub label to be applied to the person object.",
) )
save_attempts: int = Field( save_attempts: int = Field(
default=100, ge=0, title="Number of face attempts to save in the train tab." default=200,
ge=0,
title="Number of face attempts to save in the recent recognitions tab.",
) )
blur_confidence_filter: bool = Field( blur_confidence_filter: bool = Field(
default=True, title="Apply blur quality filter to face confidence." default=True, title="Apply blur quality filter to face confidence."

View File

@ -96,10 +96,10 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
camera_config = self.model_config.state_config.cameras[camera] camera_config = self.model_config.state_config.cameras[camera]
crop = [ crop = [
camera_config.crop[0], camera_config.crop[0] * self.config.cameras[camera].detect.width,
camera_config.crop[1], camera_config.crop[1] * self.config.cameras[camera].detect.height,
camera_config.crop[2], camera_config.crop[2] * self.config.cameras[camera].detect.width,
camera_config.crop[3], camera_config.crop[3] * self.config.cameras[camera].detect.height,
] ]
should_run = False should_run = False
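The hunk above switches the state-classification crop from pixel coordinates to relative 0-1 floats, multiplied back out by each camera's detect resolution at runtime. The conversion reduces to a simple helper (a sketch of the arithmetic, not Frigate's code):

```python
def crop_to_pixels(crop, width, height):
    """Convert a relative [x1, y1, x2, y2] crop (0-1 floats) into
    pixel coordinates for a camera's detect resolution."""
    x1, y1, x2, y2 = crop
    return [
        int(x1 * width), int(y1 * height),
        int(x2 * width), int(y2 * height),
    ]

# e.g. the right half of a 1280x720 detect stream:
crop_to_pixels([0.5, 0.0, 1.0, 1.0], 1280, 720)  # [640, 0, 1280, 720]
```

Storing the crop as fractions keeps the config valid if the camera's detect resolution changes, which is presumably why the commit makes the switch ("Use relative coordinates").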

View File

@ -23,7 +23,7 @@
"label": "Min face recognitions for the sub label to be applied to the person object." "label": "Min face recognitions for the sub label to be applied to the person object."
}, },
"save_attempts": { "save_attempts": {
"label": "Number of face attempts to save in the train tab." "label": "Number of face attempts to save in the recent recognitions tab."
}, },
"blur_confidence_filter": { "blur_confidence_filter": {
"label": "Apply blur quality filter to face confidence." "label": "Apply blur quality filter to face confidence."

View File

@ -41,13 +41,17 @@
"invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens." "invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
}, },
"train": { "train": {
"title": "Train", "title": "Recent Classifications",
"aria": "Select Train" "aria": "Select Recent Classifications"
}, },
"categories": "Classes", "categories": "Classes",
"createCategory": { "createCategory": {
"new": "Create New Class" "new": "Create New Class"
}, },
"categorizeImageAs": "Classify Image As:", "categorizeImageAs": "Classify Image As:",
"categorizeImage": "Classify Image" "categorizeImage": "Classify Image",
"wizard": {
"title": "Create New Classification",
"description": "Create a new state or object classification model."
}
} }

View File

@ -22,7 +22,7 @@
"title": "Create Collection", "title": "Create Collection",
"desc": "Create a new collection", "desc": "Create a new collection",
"new": "Create New Face", "new": "Create New Face",
"nextSteps": "To build a strong foundation:<li>Use the Train tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>" "nextSteps": "To build a strong foundation:<li>Use the Recent Recognitions tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
}, },
"steps": { "steps": {
"faceName": "Enter Face Name", "faceName": "Enter Face Name",
@ -33,8 +33,8 @@
} }
}, },
"train": { "train": {
"title": "Train", "title": "Recent Recognitions",
"aria": "Select train", "aria": "Select recent recognitions",
"empty": "There are no recent face recognition attempts" "empty": "There are no recent face recognition attempts"
}, },
"selectItem": "Select {{item}}", "selectItem": "Select {{item}}",

View File

@ -6,7 +6,7 @@ import {
ClassificationThreshold, ClassificationThreshold,
} from "@/types/classification"; } from "@/types/classification";
import { Event } from "@/types/event"; import { Event } from "@/types/event";
import { useMemo, useRef, useState } from "react"; import { forwardRef, useMemo, useRef, useState } from "react";
import { isDesktop, isMobile } from "react-device-detect"; import { isDesktop, isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next"; import { useTranslation } from "react-i18next";
import TimeAgo from "../dynamic/TimeAgo"; import TimeAgo from "../dynamic/TimeAgo";
@ -14,7 +14,24 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { LuSearch } from "react-icons/lu"; import { LuSearch } from "react-icons/lu";
import { TooltipPortal } from "@radix-ui/react-tooltip"; import { TooltipPortal } from "@radix-ui/react-tooltip";
import { useNavigate } from "react-router-dom"; import { useNavigate } from "react-router-dom";
import { getTranslatedLabel } from "@/utils/i18n"; import { HiSquare2Stack } from "react-icons/hi2";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
DialogTrigger,
} from "../ui/dialog";
import {
MobilePage,
MobilePageContent,
MobilePageDescription,
MobilePageHeader,
MobilePageTitle,
MobilePageTrigger,
} from "../mobile/MobilePage";
type ClassificationCardProps = { type ClassificationCardProps = {
className?: string; className?: string;
@ -24,10 +41,15 @@ type ClassificationCardProps = {
selected: boolean; selected: boolean;
i18nLibrary: string; i18nLibrary: string;
showArea?: boolean; showArea?: boolean;
count?: number;
onClick: (data: ClassificationItemData, meta: boolean) => void; onClick: (data: ClassificationItemData, meta: boolean) => void;
children?: React.ReactNode; children?: React.ReactNode;
}; };
export function ClassificationCard({ export const ClassificationCard = forwardRef<
HTMLDivElement,
ClassificationCardProps
>(function ClassificationCard(
{
className, className,
imgClassName, imgClassName,
data, data,
@ -35,9 +57,12 @@ export function ClassificationCard({
selected, selected,
i18nLibrary, i18nLibrary,
showArea = true, showArea = true,
count,
onClick, onClick,
children, children,
}: ClassificationCardProps) { },
ref,
) {
const { t } = useTranslation([i18nLibrary]); const { t } = useTranslation([i18nLibrary]);
const [imageLoaded, setImageLoaded] = useState(false); const [imageLoaded, setImageLoaded] = useState(false);
@ -72,36 +97,58 @@ export function ClassificationCard({
}, [showArea, imageLoaded]); }, [showArea, imageLoaded]);
return ( return (
<>
<div <div
ref={ref}
className={cn( className={cn(
"relative flex cursor-pointer flex-col rounded-lg outline outline-[3px]", "relative flex size-full cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
className, className,
selected selected
? "shadow-selected outline-selected" ? "shadow-selected outline-selected"
: "outline-transparent duration-500", : "outline-transparent duration-500",
)} )}
onClick={(e) => {
const isMeta = e.metaKey || e.ctrlKey;
if (isMeta) {
e.stopPropagation();
}
onClick(data, isMeta);
}}
onContextMenu={(e) => {
e.preventDefault();
e.stopPropagation();
onClick(data, true);
}}
> >
<div className="relative w-full select-none overflow-hidden rounded-lg">
<img <img
ref={imgRef} ref={imgRef}
className={cn(
"absolute bottom-0 left-0 right-0 top-0 size-full",
imgClassName,
isMobile && "w-full",
)}
onLoad={() => setImageLoaded(true)} onLoad={() => setImageLoaded(true)}
className={cn("size-44", imgClassName, isMobile && "w-full")}
src={`${baseUrl}${data.filepath}`} src={`${baseUrl}${data.filepath}`}
onClick={(e) => {
e.stopPropagation();
onClick(data, e.metaKey || e.ctrlKey);
}}
/> />
{imageArea != undefined && ( <ImageShadowOverlay upperClassName="z-0" lowerClassName="h-[30%] z-0" />
<div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white"> {count && (
<div className="absolute right-2 top-2 flex flex-row items-center gap-1">
<div className="text-gray-200">{count}</div>{" "}
<HiSquare2Stack className="text-gray-200" />
</div>
)}
{!count && imageArea != undefined && (
<div className="absolute right-1 top-1 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
{t("information.pixels", { ns: "common", area: imageArea })} {t("information.pixels", { ns: "common", area: imageArea })}
</div> </div>
)} )}
</div> <div className="absolute bottom-0 left-0 right-0 h-[50%] bg-gradient-to-t from-black/60 to-transparent" />
<div className="select-none p-2"> <div className="absolute bottom-0 flex w-full select-none flex-row items-center justify-between gap-2 p-2">
<div className="flex w-full flex-row items-center justify-between gap-2"> <div
<div className="flex flex-col items-start text-xs text-primary-variant"> className={cn(
"flex flex-col items-start text-white",
data.score ? "text-xs" : "text-sm",
)}
>
<div className="smart-capitalize"> <div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name} {data.name == "unknown" ? t("details.unknown") : data.name}
</div> </div>
@ -118,15 +165,13 @@ export function ClassificationCard({
</div> </div>
)} )}
</div> </div>
<div className="flex flex-row items-start justify-end gap-5 md:gap-4"> <div className="flex flex-row items-start justify-end gap-5 md:gap-2">
{children} {children}
</div> </div>
</div> </div>
</div> </div>
</div>
</>
); );
} });
type GroupedClassificationCardProps = { type GroupedClassificationCardProps = {
group: ClassificationItemData[]; group: ClassificationItemData[];
@ -136,7 +181,6 @@ type GroupedClassificationCardProps = {
i18nLibrary: string; i18nLibrary: string;
objectType: string; objectType: string;
onClick: (data: ClassificationItemData | undefined) => void; onClick: (data: ClassificationItemData | undefined) => void;
onSelectEvent: (event: Event) => void;
children?: (data: ClassificationItemData) => React.ReactNode; children?: (data: ClassificationItemData) => React.ReactNode;
}; };
export function GroupedClassificationCard({ export function GroupedClassificationCard({
@ -145,20 +189,54 @@ export function GroupedClassificationCard({
threshold, threshold,
selectedItems, selectedItems,
i18nLibrary, i18nLibrary,
objectType,
onClick, onClick,
onSelectEvent,
children, children,
}: GroupedClassificationCardProps) { }: GroupedClassificationCardProps) {
const navigate = useNavigate(); const navigate = useNavigate();
const { t } = useTranslation(["views/explore", i18nLibrary]); const { t } = useTranslation(["views/explore", i18nLibrary]);
const [detailOpen, setDetailOpen] = useState(false);
// data // data
const allItemsSelected = useMemo( const bestItem = useMemo<ClassificationItemData | undefined>(() => {
() => group.every((data) => selectedItems.includes(data.filename)), let best: undefined | ClassificationItemData = undefined;
[group, selectedItems],
); group.forEach((item) => {
if (item?.name != undefined && item.name != "none") {
if (
best?.score == undefined ||
(item.score && best.score < item.score)
) {
best = item;
}
}
});
if (!best) {
return group.at(-1);
}
const bestTyped: ClassificationItemData = best;
return {
...bestTyped,
name: event ? (event.sub_label ?? t("details.unknown")) : bestTyped.name,
score: event?.data?.sub_label_score || bestTyped.score,
};
}, [group, event, t]);
const bestScoreStatus = useMemo(() => {
if (!bestItem?.score || !threshold) {
return "unknown";
}
if (bestItem.score >= threshold.recognition) {
return "match";
} else if (bestItem.score >= threshold.unknown) {
return "potential";
} else {
return "unknown";
}
}, [bestItem, threshold]);
const time = useMemo(() => { const time = useMemo(() => {
const item = group[0]; const item = group[0];
@ -170,34 +248,79 @@ export function GroupedClassificationCard({
return item.timestamp * 1000; return item.timestamp * 1000;
}, [group]); }, [group]);
if (!bestItem) {
return null;
}
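The `bestScoreStatus` memo above maps a score into three bands used for coloring (match/potential/unknown). The same decision in plain form, with hypothetical default thresholds standing in for the `ClassificationThreshold` values passed as props:

```python
def score_status(score, recognition=0.8, unknown=0.5):
    """Band a confidence score against recognition thresholds:
    'match' at/above the recognition threshold, 'potential' in the
    gray zone, 'unknown' otherwise or when no score is available."""
    if score is None:
        return "unknown"
    if score >= recognition:
        return "match"
    if score >= unknown:
        return "potential"
    return "unknown"
```

This matches the component's branch order: the recognition threshold is checked first, so the gray zone only applies below it.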
const Overlay = isDesktop ? Dialog : MobilePage;
const Trigger = isDesktop ? DialogTrigger : MobilePageTrigger;
const Header = isDesktop ? DialogHeader : MobilePageHeader;
const Content = isDesktop ? DialogContent : MobilePageContent;
const ContentTitle = isDesktop ? DialogTitle : MobilePageTitle;
const ContentDescription = isDesktop
? DialogDescription
: MobilePageDescription;
return ( return (
<div <>
className={cn( <ClassificationCard
"flex cursor-pointer flex-col gap-2 rounded-lg bg-card p-2 outline outline-[3px]", data={bestItem}
isMobile && "w-full", threshold={threshold}
allItemsSelected selected={selectedItems.includes(bestItem.filename)}
? "shadow-selected outline-selected" i18nLibrary={i18nLibrary}
: "outline-transparent duration-500", count={group.length}
)} onClick={(_, meta) => {
onClick={() => { if (meta || selectedItems.length > 0) {
if (selectedItems.length) {
onClick(undefined); onClick(undefined);
} else {
setDetailOpen(true);
} }
}} }}
onContextMenu={(e) => { />
e.stopPropagation(); <Overlay
e.preventDefault(); open={detailOpen}
onClick(undefined); onOpenChange={(open) => {
if (!open) {
setDetailOpen(false);
}
}} }}
> >
<div className="flex flex-row justify-between"> <Trigger asChild></Trigger>
<div className="flex flex-col gap-1"> <Content
<div className="select-none smart-capitalize"> className={cn(
{getTranslatedLabel(objectType)} "",
{event?.sub_label isDesktop && "min-w-[50%] max-w-[65%]",
? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)` isMobile && "flex flex-col",
: ": " + t("details.unknown")} )}
</div> onOpenAutoFocus={(e) => e.preventDefault()}
>
<>
<Header
className={cn(
"mx-2 flex flex-row items-center gap-4",
isMobile && "flex-shrink-0",
)}
>
<div>
<ContentTitle
className={cn(
"flex items-center gap-1 font-normal capitalize",
isMobile && "px-2",
)}
>
{event?.sub_label ? event.sub_label : t("details.unknown")}
{event?.sub_label && (
<div
className={cn(
"",
bestScoreStatus == "match" && "text-success",
bestScoreStatus == "potential" && "text-orange-400",
bestScoreStatus == "unknown" && "text-danger",
)}
>{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
)}
</ContentTitle>
<ContentDescription className={cn("", isMobile && "px-2")}>
{time && ( {time && (
<TimeAgo <TimeAgo
className="text-sm text-secondary-foreground" className="text-sm text-secondary-foreground"
@ -205,17 +328,21 @@ export function GroupedClassificationCard({
dense dense
/> />
)} )}
</ContentDescription>
</div> </div>
{isDesktop && (
<div className="flex flex-row justify-between">
{event && ( {event && (
<Tooltip> <Tooltip>
<TooltipTrigger> <TooltipTrigger asChild>
<div <div
className="cursor-pointer" className="cursor-pointer"
tabIndex={-1}
onClick={() => { onClick={() => {
navigate(`/explore?event_id=${event.id}`); navigate(`/explore?event_id=${event.id}`);
}} }}
> >
<LuSearch className="size-4 text-muted-foreground" /> <LuSearch className="size-4 text-secondary-foreground" />
</div> </div>
</TooltipTrigger> </TooltipTrigger>
<TooltipPortal> <TooltipPortal>
@ -228,36 +355,36 @@ export function GroupedClassificationCard({
</Tooltip> </Tooltip>
)} )}
</div> </div>
)}
</Header>
<div <div
className={cn( className={cn(
"gap-2", "grid w-full auto-rows-min grid-cols-2 gap-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-6 2xl:grid-cols-8",
isDesktop isDesktop && "p-2",
? "flex flex-row flex-wrap" isMobile && "scrollbar-container flex-1 overflow-y-auto",
: "grid grid-cols-2 sm:grid-cols-5 lg:grid-cols-6",
)} )}
> >
{group.map((data: ClassificationItemData) => ( {group.map((data: ClassificationItemData) => (
<div key={data.filename} className="aspect-square w-full">
<ClassificationCard <ClassificationCard
key={data.filename}
data={data} data={data}
threshold={threshold} threshold={threshold}
selected={ selected={false}
allItemsSelected ? false : selectedItems.includes(data.filename)
}
i18nLibrary={i18nLibrary} i18nLibrary={i18nLibrary}
onClick={(data, meta) => { onClick={(data, meta) => {
if (meta || selectedItems.length > 0) { if (meta || selectedItems.length > 0) {
onClick(data); onClick(data);
} else if (event) {
onSelectEvent(event);
} }
}} }}
> >
{children?.(data)} {children?.(data)}
</ClassificationCard> </ClassificationCard>
</div>
))} ))}
</div> </div>
</div> </>
</Content>
</Overlay>
</>
); );
} }

View File

@ -21,6 +21,7 @@ import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import { shareOrCopy } from "@/utils/browserUtil"; import { shareOrCopy } from "@/utils/browserUtil";
import { useTranslation } from "react-i18next"; import { useTranslation } from "react-i18next";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type ExportProps = { type ExportProps = {
className: string; className: string;
@ -145,7 +146,7 @@ export default function ExportCard({
<> <>
{exportedRecording.thumb_path.length > 0 ? ( {exportedRecording.thumb_path.length > 0 ? (
<img <img
className="absolute inset-0 aspect-video size-full rounded-lg object-contain md:rounded-2xl" className="absolute inset-0 aspect-video size-full rounded-lg object-cover md:rounded-2xl"
src={`${baseUrl}${exportedRecording.thumb_path.replace("/media/frigate/", "")}`} src={`${baseUrl}${exportedRecording.thumb_path.replace("/media/frigate/", "")}`}
onLoad={() => setLoading(false)} onLoad={() => setLoading(false)}
/> />
@ -224,12 +225,11 @@ export default function ExportCard({
{loading && ( {loading && (
<Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" /> <Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />
)} )}
<div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 h-[20%] rounded-lg bg-gradient-to-t from-black/60 to-transparent md:rounded-2xl"> <ImageShadowOverlay />
<div className="mx-3 flex h-full items-end justify-between pb-1 text-sm text-white smart-capitalize"> <div className="absolute bottom-2 left-3 flex h-full items-end justify-between text-white smart-capitalize">
{exportedRecording.name.replaceAll("_", " ")} {exportedRecording.name.replaceAll("_", " ")}
</div> </div>
</div> </div>
</div>
</> </>
); );
} }

View File

@ -0,0 +1,66 @@
import { useTranslation } from "react-i18next";
import StepIndicator from "../indicators/StepIndicator";
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
} from "../ui/dialog";
import { useState } from "react";
const STEPS = [
"classificationWizard.steps.nameAndDefine",
"classificationWizard.steps.stateArea",
"classificationWizard.steps.chooseExamples",
"classificationWizard.steps.train",
];
type ClassificationModelWizardDialogProps = {
open: boolean;
onClose: () => void;
};
export default function ClassificationModelWizardDialog({
open,
onClose,
}: ClassificationModelWizardDialogProps) {
const { t } = useTranslation(["views/classificationModel"]);
// step management
const [currentStep, _] = useState(0);
return (
<Dialog
open={open}
onOpenChange={(open) => {
if (!open) {
onClose();
}
}}
>
<DialogContent
className="max-h-[90dvh] max-w-4xl overflow-y-auto"
onInteractOutside={(e) => {
e.preventDefault();
}}
>
<StepIndicator
steps={STEPS}
currentStep={currentStep}
variant="dots"
className="mb-4 justify-start"
/>
<DialogHeader>
<DialogTitle>{t("wizard.title")}</DialogTitle>
{currentStep === 0 && (
<DialogDescription>{t("wizard.description")}</DialogDescription>
)}
</DialogHeader>
<div className="pb-4">
<div className="size-full"></div>
</div>
</DialogContent>
</Dialog>
);
}

View File

@@ -20,15 +20,14 @@ import {
   TooltipTrigger,
 } from "@/components/ui/tooltip";
 import { isDesktop, isMobile } from "react-device-detect";
-import { LuPlus } from "react-icons/lu";
 import { useTranslation } from "react-i18next";
 import { cn } from "@/lib/utils";
 import React, { ReactNode, useCallback, useMemo, useState } from "react";
 import TextEntryDialog from "./dialog/TextEntryDialog";
 import { Button } from "../ui/button";
-import { MdCategory } from "react-icons/md";
 import axios from "axios";
 import { toast } from "sonner";
+import { Separator } from "../ui/separator";

 type ClassificationSelectionDialogProps = {
   className?: string;
@@ -97,7 +96,7 @@ export default function ClassificationSelectionDialog({
   );

   return (
-    <div className={className ?? ""}>
+    <div className={className ?? "flex"}>
       {newClass && (
         <TextEntryDialog
           open={true}
@@ -128,23 +127,22 @@ export default function ClassificationSelectionDialog({
             isMobile && "gap-2 pb-4",
           )}
         >
-          <SelectorItem
-            className="flex cursor-pointer gap-2 smart-capitalize"
-            onClick={() => setNewClass(true)}
-          >
-            <LuPlus />
-            {t("createCategory.new")}
-          </SelectorItem>
           {classes.sort().map((category) => (
             <SelectorItem
               key={category}
               className="flex cursor-pointer gap-2 smart-capitalize"
               onClick={() => onCategorizeImage(category)}
             >
-              <MdCategory />
               {category.replaceAll("_", " ")}
             </SelectorItem>
           ))}
+          <Separator />
+          <SelectorItem
+            className="flex cursor-pointer gap-2 smart-capitalize"
+            onClick={() => setNewClass(true)}
+          >
+            {t("createCategory.new")}
+          </SelectorItem>
         </div>
       </SelectorContent>
     </Selector>


@@ -62,7 +62,7 @@ export default function FaceSelectionDialog({
   );

   return (
-    <div className={className ?? ""}>
+    <div className={className ?? "flex"}>
       {newFace && (
         <TextEntryDialog
           open={true}

@@ -0,0 +1,27 @@
import { cn } from "@/lib/utils";

type ImageShadowOverlayProps = {
  upperClassName?: string;
  lowerClassName?: string;
};

export function ImageShadowOverlay({
  upperClassName,
  lowerClassName,
}: ImageShadowOverlayProps) {
  return (
    <>
      <div
        className={cn(
          "pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
          upperClassName,
        )}
      />
      <div
        className={cn(
          "pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
          lowerClassName,
        )}
      />
    </>
  );
}


@@ -60,7 +60,7 @@ export default function TrainFilterDialog({
         moreFiltersSelected ? "text-white" : "text-secondary-foreground",
       )}
     />
-    {isDesktop && t("more")}
+    {isDesktop && t("filter")}
   </Button>
 );

 const content = (
@@ -122,7 +122,7 @@ export default function TrainFilterDialog({
   return (
     <PlatformAwareSheet
       trigger={trigger}
-      title={t("more")}
+      title={t("filter")}
       content={content}
       contentClassName={cn(
         "w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",


@@ -6,6 +6,7 @@ import MSEPlayer from "./MsePlayer";
 import { LivePlayerMode } from "@/types/live";
 import { cn } from "@/lib/utils";
 import React from "react";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";

 type LivePlayerProps = {
   className?: string;
@@ -76,8 +77,7 @@ export default function BirdseyeLivePlayer({
       )}
       onClick={onClick}
     >
-      <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
-      <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
+      <ImageShadowOverlay />
       <div className="size-full" ref={playerRef}>
         {player}
       </div>


@@ -25,6 +25,7 @@ import { PlayerStats } from "./PlayerStats";
 import { LuVideoOff } from "react-icons/lu";
 import { Trans, useTranslation } from "react-i18next";
 import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
+import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";

 type LivePlayerProps = {
   cameraRef?: (ref: HTMLDivElement | null) => void;
@@ -328,10 +329,7 @@ export default function LivePlayer({
     >
       {cameraEnabled &&
         ((showStillWithoutActivity && !liveReady) || liveReady) && (
-          <>
-            <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
-            <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
-          </>
+          <ImageShadowOverlay />
         )}
       {player}
       {cameraEnabled &&


@@ -107,7 +107,7 @@ const DialogContent = React.forwardRef<
     >
       {children}
       <DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity data-[state=open]:bg-accent data-[state=open]:text-muted-foreground hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none">
-        <X className="h-4 w-4" />
+        <X className="h-4 w-4 text-secondary-foreground" />
         <span className="sr-only">Close</span>
       </DialogPrimitive.Close>
     </DialogPrimitive.Content>


@@ -63,10 +63,6 @@ import {
 } from "react-icons/lu";
 import { toast } from "sonner";
 import useSWR from "swr";
-import SearchDetailDialog, {
-  SearchTab,
-} from "@/components/overlay/detail/SearchDetailDialog";
-import { SearchResult } from "@/types/search";
 import {
   ClassificationCard,
   GroupedClassificationCard,
@@ -686,11 +682,6 @@ function TrainingGrid({
     { ids: eventIdsQuery },
   ]);

-  // selection
-  const [selectedEvent, setSelectedEvent] = useState<Event>();
-  const [dialogTab, setDialogTab] = useState<SearchTab>("details");
-
   if (attemptImages.length == 0) {
     return (
       <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
@@ -701,40 +692,29 @@ function TrainingGrid({
   }

   return (
-    <>
-      <SearchDetailDialog
-        search={
-          selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
-        }
-        page={dialogTab}
-        setSimilarity={undefined}
-        setSearchPage={setDialogTab}
-        setSearch={(search) => setSelectedEvent(search as unknown as Event)}
-        setInputFocused={() => {}}
-      />
     <div
       ref={contentRef}
-      className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
+      className={cn(
+        "scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
+      )}
     >
       {Object.entries(faceGroups).map(([key, group]) => {
         const event = events?.find((ev) => ev.id == key);
         return (
+          <div key={key} className="aspect-square w-full">
             <FaceAttemptGroup
-              key={key}
               config={config}
               group={group}
               event={event}
               faceNames={faceNames}
               selectedFaces={selectedFaces}
               onClickFaces={onClickFaces}
-              onSelectEvent={setSelectedEvent}
               onRefresh={onRefresh}
             />
+          </div>
         );
       })}
     </div>
-    </>
   );
 }
@@ -745,7 +725,6 @@ type FaceAttemptGroupProps = {
   faceNames: string[];
   selectedFaces: string[];
   onClickFaces: (image: string[], ctrl: boolean) => void;
-  onSelectEvent: (event: Event) => void;
   onRefresh: () => void;
 };

 function FaceAttemptGroup({
@@ -755,7 +734,6 @@ function FaceAttemptGroup({
   faceNames,
   selectedFaces,
   onClickFaces,
-  onSelectEvent,
   onRefresh,
 }: FaceAttemptGroupProps) {
   const { t } = useTranslation(["views/faceLibrary", "views/explore"]);
@@ -773,8 +751,8 @@ function FaceAttemptGroup({
   const handleClickEvent = useCallback(
     (meta: boolean) => {
-      if (event && selectedFaces.length == 0 && !meta) {
-        onSelectEvent(event);
+      if (!meta) {
+        return;
       } else {
         const anySelected =
           group.find((face) => selectedFaces.includes(face.filename)) !=
@@ -798,7 +776,7 @@ function FaceAttemptGroup({
         }
       }
     },
-    [event, group, selectedFaces, onClickFaces, onSelectEvent],
+    [group, selectedFaces, onClickFaces],
   );

   // api calls
@@ -873,7 +851,6 @@ function FaceAttemptGroup({
           handleClickEvent(true);
         }
       }}
-      onSelectEvent={onSelectEvent}
     >
       {(data) => (
         <>
@@ -881,12 +858,12 @@ function FaceAttemptGroup({
             faceNames={faceNames}
             onTrainAttempt={(name) => onTrainAttempt(data, name)}
           >
-            <AddFaceIcon className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
+            <AddFaceIcon className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" />
           </FaceSelectionDialog>
           <Tooltip>
             <TooltipTrigger>
               <LuRefreshCw
-                className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40"
                 onClick={() => onReprocess(data)}
               />
             </TooltipTrigger>
@@ -934,14 +911,12 @@ function FaceGrid({
     <div
       ref={contentRef}
       className={cn(
-        "scrollbar-container gap-2 overflow-y-scroll p-1",
-        isDesktop ? "flex flex-wrap" : "grid grid-cols-2 md:grid-cols-4",
+        "scrollbar-container grid grid-cols-2 gap-2 overflow-y-scroll p-1 md:grid-cols-4 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
       )}
     >
       {sortedFaces.map((image: string) => (
+        <div key={image} className="aspect-square w-full">
           <ClassificationCard
-            className="gap-2 rounded-lg bg-card p-2"
-            key={image}
             data={{
               name: pageToggle,
               filename: image,
@@ -954,7 +929,7 @@ function FaceGrid({
           <Tooltip>
             <TooltipTrigger>
               <LuTrash2
-                className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                className="size-5 cursor-pointer text-gray-200 hover:text-danger"
                 onClick={(e) => {
                   e.stopPropagation();
                   onDelete(pageToggle, [image]);
@@ -964,6 +939,7 @@ function FaceGrid({
             <TooltipContent>{t("button.deleteFaceAttempts")}</TooltipContent>
           </Tooltip>
         </ClassificationCard>
+        </div>
       ))}
     </div>
   );


@@ -304,10 +304,10 @@ export type CustomClassificationModelConfig = {
   enabled: boolean;
   name: string;
   threshold: number;
-  object_config: null | {
+  object_config?: {
     objects: string[];
   };
-  state_config: null | {
+  state_config?: {
     cameras: {
       [cameraName: string]: {
         crop: [number, number, number, number];


@@ -1,24 +1,39 @@
 import { baseUrl } from "@/api/baseUrl";
+import ClassificationModelWizardDialog from "@/components/classification/ClassificationModelWizardDialog";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
+import { ImageShadowOverlay } from "@/components/overlay/ImageShadowOverlay";
+import { Button } from "@/components/ui/button";
+import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
+import useOptimisticState from "@/hooks/use-optimistic-state";
 import { cn } from "@/lib/utils";
 import {
   CustomClassificationModelConfig,
   FrigateConfig,
 } from "@/types/frigateConfig";
-import { useMemo } from "react";
+import { useMemo, useState } from "react";
 import { isMobile } from "react-device-detect";
+import { useTranslation } from "react-i18next";
+import { FaFolderPlus } from "react-icons/fa";
 import useSWR from "swr";

+const allModelTypes = ["objects", "states"] as const;
+type ModelType = (typeof allModelTypes)[number];
+
 type ModelSelectionViewProps = {
   onClick: (model: CustomClassificationModelConfig) => void;
 };

 export default function ModelSelectionView({
   onClick,
 }: ModelSelectionViewProps) {
+  const { t } = useTranslation(["views/classificationModel"]);
+
+  const [page, setPage] = useState<ModelType>("objects");
+  const [pageToggle, setPageToggle] = useOptimisticState(page, setPage, 100);
+
   const { data: config } = useSWR<FrigateConfig>("config", {
     revalidateOnFocus: false,
   });

+  // data
+
   const classificationConfigs = useMemo(() => {
     if (!config) {
       return [];
@@ -27,6 +42,24 @@ export default function ModelSelectionView({
     return Object.values(config.classification.custom);
   }, [config]);

+  const selectedClassificationConfigs = useMemo(() => {
+    return classificationConfigs.filter((model) => {
+      if (pageToggle == "objects" && model.object_config != undefined) {
+        return true;
+      }
+
+      if (pageToggle == "states" && model.state_config != undefined) {
+        return true;
+      }
+
+      return false;
+    });
+  }, [classificationConfigs, pageToggle]);
+
+  // new model wizard
+
+  const [newModel, setNewModel] = useState(false);
+
   if (!config) {
     return <ActivityIndicator />;
   }
@@ -36,8 +69,55 @@ export default function ModelSelectionView({
   }

   return (
+    <div className="flex size-full flex-col p-2">
+      <ClassificationModelWizardDialog
+        open={newModel}
+        onClose={() => setNewModel(false)}
+      />
+      <div className="flex h-12 w-full items-center justify-between">
+        <div className="flex flex-row items-center">
+          <ToggleGroup
+            className="*:rounded-md *:px-3 *:py-4"
+            type="single"
+            size="sm"
+            value={pageToggle}
+            onValueChange={(value: ModelType) => {
+              if (value) {
+                // Restrict viewer navigation
+                setPageToggle(value);
+              }
+            }}
+          >
+            {allModelTypes.map((item) => (
+              <ToggleGroupItem
+                key={item}
+                className={`flex scroll-mx-10 items-center justify-between gap-2 ${pageToggle == item ? "" : "*:text-muted-foreground"}`}
+                value={item}
+                data-nav-item={item}
+                aria-label={t("selectItem", {
+                  ns: "common",
+                  item: t("menu." + item),
+                })}
+              >
+                <div className="smart-capitalize">{t("menu." + item)}</div>
+              </ToggleGroupItem>
+            ))}
+          </ToggleGroup>
+        </div>
+        <div className="flex flex-row items-center">
+          <Button
+            className="flex flex-row items-center gap-2"
+            variant="select"
+            onClick={() => setNewModel(true)}
+          >
+            <FaFolderPlus />
+            Add Classification
+          </Button>
+        </div>
+      </div>
     <div className="flex size-full gap-2 p-2">
-      {classificationConfigs.map((config) => (
+      {selectedClassificationConfigs.map((config) => (
         <ModelCard
           key={config.name}
           config={config}
@@ -45,6 +125,7 @@ export default function ModelSelectionView({
         />
       ))}
     </div>
+    </div>
   );
 }
@@ -57,46 +138,37 @@ function ModelCard({ config, onClick }: ModelCardProps) {
     [id: string]: string[];
   }>(`classification/${config.name}/dataset`, { revalidateOnFocus: false });

-  const coverImages = useMemo(() => {
-    if (!dataset) {
-      return {};
+  const coverImage = useMemo(() => {
+    if (!dataset?.length) {
+      return undefined;
     }

-    const imageMap: { [key: string]: string } = {};
-
-    for (const [key, imageList] of Object.entries(dataset)) {
-      if (imageList.length > 0) {
-        imageMap[key] = imageList[0];
-      }
-    }
-
-    return imageMap;
+    const keys = Object.keys(dataset).filter((key) => key != "none");
+    const selectedKey = keys[0];
+
+    return {
+      name: selectedKey,
+      img: dataset[selectedKey][0],
+    };
   }, [dataset]);

   return (
     <div
       key={config.name}
       className={cn(
-        "flex h-60 cursor-pointer flex-col items-center gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
+        "relative size-60 cursor-pointer overflow-hidden rounded-lg",
         "outline-transparent duration-500",
         isMobile && "w-full",
       )}
       onClick={() => onClick()}
     >
-      <div
-        className={cn("grid size-48 grid-cols-2 gap-2", isMobile && "w-full")}
-      >
-        {Object.entries(coverImages).map(([key, image]) => (
-          <img
-            key={key}
-            className=""
-            src={`${baseUrl}clips/${config.name}/dataset/${key}/${image}`}
-          />
-        ))}
-      </div>
-      <div className="smart-capitalize">
-        {config.name} ({config.state_config != null ? "State" : "Object"}{" "}
-        Classification)
+      <img
+        className={cn("size-full", isMobile && "w-full")}
+        src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
+      />
+      <ImageShadowOverlay />
+      <div className="absolute bottom-2 left-3 text-lg smart-capitalize">
+        {config.name}
       </div>
     </div>
   );
); );


@@ -44,7 +44,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop, isMobile } from "react-device-detect";
+import { isDesktop } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { toast } from "sonner";
@@ -56,7 +56,6 @@ import { ModelState } from "@/types/ws";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { useNavigate } from "react-router-dom";
 import { IoMdArrowRoundBack } from "react-icons/io";
-import { MdAutoFixHigh } from "react-icons/md";
 import TrainFilterDialog from "@/components/overlay/dialog/TrainFilterDialog";
 import useApiFilter from "@/hooks/use-api-filter";
 import { ClassificationItemData, TrainFilter } from "@/types/classification";
@@ -69,6 +68,7 @@ import SearchDetailDialog, {
   SearchTab,
 } from "@/components/overlay/detail/SearchDetailDialog";
 import { SearchResult } from "@/types/search";
+import { HiSparkles } from "react-icons/hi";

 type ModelTrainingViewProps = {
   model: CustomClassificationModelConfig;
@@ -378,12 +378,13 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
           <Button
             className="flex justify-center gap-2"
             onClick={trainModel}
+            variant="select"
             disabled={modelState != "complete"}
           >
             {modelState == "training" ? (
               <ActivityIndicator size={20} />
             ) : (
-              <MdAutoFixHigh className="text-secondary-foreground" />
+              <HiSparkles className="text-white" />
             )}
             {isDesktop && t("button.trainModel")}
           </Button>
@@ -631,13 +632,11 @@ function DatasetGrid({
   return (
     <div
       ref={contentRef}
-      className="scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2"
+      className="scrollbar-container grid grid-cols-2 gap-2 overflow-y-scroll p-1 md:grid-cols-4 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12"
     >
       {classData.map((image) => (
+        <div key={image} className="aspect-square w-full">
           <ClassificationCard
-            key={image}
-            className="w-60 gap-4 rounded-lg bg-card p-2"
-            imgClassName="size-auto"
             data={{
               filename: image,
               filepath: `clips/${modelName}/dataset/${categoryName}/${image}`,
@@ -650,7 +649,7 @@ function DatasetGrid({
           <Tooltip>
             <TooltipTrigger>
               <LuTrash2
-                className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                className="size-5 cursor-pointer text-primary-variant hover:text-danger"
                 onClick={(e) => {
                   e.stopPropagation();
                   onDelete([image]);
@@ -662,6 +661,7 @@ function DatasetGrid({
             </TooltipContent>
           </Tooltip>
         </ClassificationCard>
+        </div>
       ))}
     </div>
   );
@@ -757,7 +757,6 @@ function TrainGrid({
       selectedImages={selectedImages}
       onClickImages={onClickImages}
       onRefresh={onRefresh}
-      onDelete={onDelete}
     />
   );
 }
@@ -780,10 +779,7 @@ function StateTrainGrid({
   selectedImages,
   onClickImages,
   onRefresh,
-  onDelete,
 }: StateTrainGridProps) {
-  const { t } = useTranslation(["views/classificationModel"]);
-
   const threshold = useMemo(() => {
     return {
       recognition: model.threshold,
@@ -795,15 +791,12 @@ function StateTrainGrid({
     <div
       ref={contentRef}
       className={cn(
-        "scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2",
-        isMobile && "justify-center",
+        "scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
       )}
     >
       {trainData?.map((data) => (
+        <div key={data.filename} className="aspect-square w-full">
           <ClassificationCard
-            key={data.filename}
-            className="w-60 gap-2 rounded-lg bg-card p-2"
-            imgClassName="size-auto"
             data={data}
             threshold={threshold}
             selected={selectedImages.includes(data.filename)}
@@ -817,23 +810,10 @@ function StateTrainGrid({
               image={data.filename}
               onRefresh={onRefresh}
             >
-              <TbCategoryPlus className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
+              <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" />
             </ClassificationSelectionDialog>
-            <Tooltip>
-              <TooltipTrigger>
-                <LuTrash2
-                  className="size-5 cursor-pointer text-primary-variant hover:text-primary"
-                  onClick={(e) => {
-                    e.stopPropagation();
-                    onDelete([data.filename]);
-                  }}
-                />
-              </TooltipTrigger>
-              <TooltipContent>
-                {t("button.deleteClassificationAttempts")}
-              </TooltipContent>
-            </Tooltip>
           </ClassificationCard>
+        </div>
       ))}
     </div>
   );
@@ -847,7 +827,6 @@ type ObjectTrainGridProps = {
   selectedImages: string[];
   onClickImages: (images: string[], ctrl: boolean) => void;
   onRefresh: () => void;
-  onDelete: (ids: string[]) => void;
 };

 function ObjectTrainGrid({
   model,
@@ -857,10 +836,7 @@ function ObjectTrainGrid({
   selectedImages,
   onClickImages,
   onRefresh,
-  onDelete,
 }: ObjectTrainGridProps) {
-  const { t } = useTranslation(["views/classificationModel"]);
-
   // item data

   const groups = useMemo(() => {
@@ -950,13 +926,15 @@ function ObjectTrainGrid({
     <div
       ref={contentRef}
-      className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
+      className={cn(
+        "scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
+      )}
     >
       {Object.entries(groups).map(([key, group]) => {
         const event = events?.find((ev) => ev.id == key);
         return (
+          <div key={key} className="aspect-square w-full">
             <GroupedClassificationCard
-              key={key}
               group={group}
               event={event}
               threshold={threshold}
@@ -970,7 +948,6 @@ function ObjectTrainGrid({
                   handleClickEvent(group, event, true);
                 }
               }}
-              onSelectEvent={() => {}}
             >
               {(data) => (
                 <>
@@ -980,25 +957,12 @@ function ObjectTrainGrid({
                     image={data.filename}
                     onRefresh={onRefresh}
                   >
-                    <TbCategoryPlus className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
+                    <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" />
                   </ClassificationSelectionDialog>
-                  <Tooltip>
-                    <TooltipTrigger>
-                      <LuTrash2
-                        className="size-5 cursor-pointer text-primary-variant hover:text-primary"
-                        onClick={(e) => {
-                          e.stopPropagation();
-                          onDelete([data.filename]);
-                        }}
-                      />
-                    </TooltipTrigger>
-                    <TooltipContent>
-                      {t("button.deleteClassificationAttempts")}
-                    </TooltipContent>
-                  </Tooltip>
                 </>
               )}
             </GroupedClassificationCard>
+          </div>
         );
       })}
     </div>