Compare commits

No commits in common. "0bcabf0731b48f1f993d1013835b54fa30509c17" and "8c318699c4ff0b6ddfd097ca228c1210556e16a5" have entirely different histories.

17 changed files with 227 additions and 350 deletions

View File

@@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d

 ### Improving the Model
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
-- **Data collection**: Use the models Recent Classification tab to gather balanced examples across times of day, weather, and distances.
+- **Data collection**: Use the models Train tab to gather balanced examples across times of day, weather, and distances.
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigates boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
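For context, the `threshold` bullet above corresponds to a per-model setting in the Frigate config. A minimal sketch of what that tuning might look like — the `classification`/`custom` key layout and the `package_detector` model name are illustrative assumptions, not taken from this diff (only `threshold` and `object_config.objects` appear in the changes here):

```yaml
# Hypothetical config sketch: key layout and model name are assumed
classification:
  custom:
    package_detector:      # assumed custom model name
      threshold: 0.8       # start at 0.8, adjust based on validation
      object_config:
        objects:
          - package        # keep classes visually distinct
```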

View File

@@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo

 ### Improving the Model
 - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
-- **Data collection**: Use the models Recent Classifications tab to gather balanced examples across times of day and weather.
+- **Data collection**: Use the models Train tab to gather balanced examples across times of day and weather.
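A state model's config would follow the same shape as the object sketch above. Another hedged sketch — `state_config`, its `cameras` sub-keys, and the crop coordinates are all assumptions mirroring the `object_config` seen in this diff's Python changes, not keys confirmed by this compare:

```yaml
# Hypothetical config sketch: keys under state_config are assumed
classification:
  custom:
    garage_door:                     # assumed model name
      threshold: 0.8
      state_config:                  # assumed counterpart to object_config
        cameras:
          garage:                    # classify a fixed portion of this camera's frame
            crop: [0, 180, 220, 400] # illustrative coordinates only
```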

View File

@@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
 - `min_faces`: Min face recognitions for the sub label to be applied to the person object.
   - Default: `1`
 - `save_attempts`: Number of images of recognized faces to save for training.
-  - Default: `200`.
+  - Default: `100`.
 - `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
   - Default: `True`.
 - `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
@@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
 :::

-### Understanding the Recent Recognitions Tab
+### Understanding the Train Tab

-The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.

 Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
 Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.

-1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
+1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.

 If you are using a Frigate+ or `face` detecting model:
@@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
 No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.

 For more guidance, refer to the section above on improving recognition accuracy.

-### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
+### I see scores above the threshold in the train tab, but a sub label wasn't assigned?

 The Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to person if a person is confidently recognized consistently. This avoids cases where a single high confidence recognition would throw off the results.
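Taken together, the parameters documented above sit under `face_recognition` at the global level of the config; the reference-config hunk later in this compare shows the same keys. A minimal example using this compare's new defaults — `enabled` is an assumed flag that does not appear in this diff:

```yaml
face_recognition:
  enabled: true             # assumed enable flag; not part of this diff
  min_faces: 1              # recognitions required before the sub label is applied
  save_attempts: 100        # new default in this compare (was 200)
  blur_confidence_filter: True
```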

View File

@@ -630,7 +630,7 @@ face_recognition:
   # Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
   min_faces: 1
   # Optional: Number of images of recognized faces to save for training (default: shown below)
-  save_attempts: 200
+  save_attempts: 100
   # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
   blur_confidence_filter: True
   # Optional: Set the model size used face recognition. (default: shown below)

View File

@@ -197,7 +197,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
         title="Min face recognitions for the sub label to be applied to the person object.",
     )
     save_attempts: int = Field(
-        default=200, ge=0, title="Number of face attempts to save in the recent recognitions tab."
+        default=100, ge=0, title="Number of face attempts to save in the train tab."
     )
     blur_confidence_filter: bool = Field(
         default=True, title="Apply blur quality filter to face confidence."

View File

@@ -35,6 +35,10 @@ except ModuleNotFoundError:

 logger = logging.getLogger(__name__)

+MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS = 6
+MAX_CLASSIFICATION_ATTEMPTS = 12
+
+
 class CustomStateClassificationProcessor(RealTimeProcessorApi):
     def __init__(
         self,
@@ -264,6 +268,26 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
         if obj_data["label"] not in self.model_config.object_config.objects:
             return

+        if (
+            obj_data["id"] in self.detected_objects
+            and len(self.detected_objects[obj_data["id"]])
+            >= MAX_CLASSIFICATION_VERIFICATION_ATTEMPTS
+        ):
+            # if we are at max attempts after rec and we have a rec
+            if obj_data.get("sub_label"):
+                logger.debug(
+                    "Not processing due to hitting max attempts after true recognition."
+                )
+                return
+
+            # if we don't have a rec and are at max attempts
+            if (
+                len(self.detected_objects[obj_data["id"]])
+                >= MAX_CLASSIFICATION_ATTEMPTS
+            ):
+                logger.debug("Not processing due to hitting max rec attempts.")
+                return
+
         now = datetime.datetime.now().timestamp()
         x, y, x2, y2 = calculate_region(
             frame.shape,

View File

@@ -23,7 +23,7 @@
       "label": "Min face recognitions for the sub label to be applied to the person object."
     },
     "save_attempts": {
-      "label": "Number of face attempts to save in the recent recognitions tab."
+      "label": "Number of face attempts to save in the train tab."
     },
     "blur_confidence_filter": {
       "label": "Apply blur quality filter to face confidence."

View File

@@ -22,7 +22,7 @@
     "title": "Create Collection",
     "desc": "Create a new collection",
     "new": "Create New Face",
-    "nextSteps": "To build a strong foundation:<li>Use the Recent Recognitions tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
+    "nextSteps": "To build a strong foundation:<li>Use the Train tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
   },
   "steps": {
     "faceName": "Enter Face Name",

View File

@@ -6,7 +6,7 @@ import {
   ClassificationThreshold,
 } from "@/types/classification";
 import { Event } from "@/types/event";
-import { forwardRef, useMemo, useRef, useState } from "react";
+import { useMemo, useRef, useState } from "react";
 import { isDesktop, isMobile } from "react-device-detect";
 import { useTranslation } from "react-i18next";
 import TimeAgo from "../dynamic/TimeAgo";
@@ -14,24 +14,7 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
 import { LuSearch } from "react-icons/lu";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { useNavigate } from "react-router-dom";
-import { HiSquare2Stack } from "react-icons/hi2";
-import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
-import {
-  Dialog,
-  DialogContent,
-  DialogDescription,
-  DialogHeader,
-  DialogTitle,
-  DialogTrigger,
-} from "../ui/dialog";
-import {
-  MobilePage,
-  MobilePageContent,
-  MobilePageDescription,
-  MobilePageHeader,
-  MobilePageTitle,
-  MobilePageTrigger,
-} from "../mobile/MobilePage";
+import { getTranslatedLabel } from "@/utils/i18n";

 type ClassificationCardProps = {
   imgClassName?: string;
@@ -40,27 +23,19 @@ type ClassificationCardProps = {
   selected: boolean;
   i18nLibrary: string;
   showArea?: boolean;
-  count?: number;
   onClick: (data: ClassificationItemData, meta: boolean) => void;
   children?: React.ReactNode;
 };

-export const ClassificationCard = forwardRef<
-  HTMLDivElement,
-  ClassificationCardProps
->(function ClassificationCard(
-  {
-    imgClassName,
-    data,
-    threshold,
-    selected,
-    i18nLibrary,
-    showArea = true,
-    count,
-    onClick,
-    children,
-  },
-  ref,
-) {
+export function ClassificationCard({
+  imgClassName,
+  data,
+  threshold,
+  selected,
+  i18nLibrary,
+  showArea = true,
+  onClick,
+  children,
+}: ClassificationCardProps) {
   const { t } = useTranslation([i18nLibrary]);
   const [imageLoaded, setImageLoaded] = useState(false);
@@ -96,26 +71,12 @@ export const ClassificationCard = forwardRef<

   return (
     <div
-      ref={ref}
       className={cn(
-        "relative flex cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
-        isMobile ? "!size-full" : "size-48",
+        "relative flex size-48 cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
         selected
           ? "shadow-selected outline-selected"
           : "outline-transparent duration-500",
       )}
-      onClick={(e) => {
-        const isMeta = e.metaKey || e.ctrlKey;
-        if (isMeta) {
-          e.stopPropagation();
-        }
-        onClick(data, isMeta);
-      }}
-      onContextMenu={(e) => {
-        e.preventDefault();
-        e.stopPropagation();
-        onClick(data, true);
-      }}
     >
       <img
         ref={imgRef}
@@ -126,16 +87,13 @@ export const ClassificationCard = forwardRef<
         )}
         onLoad={() => setImageLoaded(true)}
         src={`${baseUrl}${data.filepath}`}
-        onClick={(e) => {
-          e.stopPropagation();
-          onClick(data, e.metaKey || e.ctrlKey);
-        }}
       />
-      <ImageShadowOverlay upperClassName="z-0" lowerClassName="h-[30%] z-0" />
-      {count && (
-        <div className="absolute right-2 top-2 flex flex-row items-center gap-1">
-          <div className="text-gray-200">{count}</div>{" "}
-          <HiSquare2Stack className="text-gray-200" />
-        </div>
-      )}
-      {!count && imageArea != undefined && (
-        <div className="absolute right-1 top-1 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
+      {false && imageArea != undefined && (
+        <div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
           {t("information.pixels", { ns: "common", area: imageArea })}
         </div>
       )}
@@ -169,7 +127,7 @@ export const ClassificationCard = forwardRef<
       </div>
     </div>
   );
-});
+}

 type GroupedClassificationCardProps = {
   group: ClassificationItemData[];
@@ -179,6 +137,7 @@ type GroupedClassificationCardProps = {
   i18nLibrary: string;
   objectType: string;
   onClick: (data: ClassificationItemData | undefined) => void;
+  onSelectEvent: (event: Event) => void;
   children?: (data: ClassificationItemData) => React.ReactNode;
 };

 export function GroupedClassificationCard({
@@ -187,54 +146,20 @@ export function GroupedClassificationCard({
   threshold,
   selectedItems,
   i18nLibrary,
+  objectType,
   onClick,
+  onSelectEvent,
   children,
 }: GroupedClassificationCardProps) {
   const navigate = useNavigate();
   const { t } = useTranslation(["views/explore", i18nLibrary]);
-  const [detailOpen, setDetailOpen] = useState(false);

   // data
-  const bestItem = useMemo<ClassificationItemData | undefined>(() => {
-    let best: undefined | ClassificationItemData = undefined;
-
-    group.forEach((item) => {
-      if (item?.name != undefined && item.name != "none") {
-        if (
-          best?.score == undefined ||
-          (item.score && best.score < item.score)
-        ) {
-          best = item;
-        }
-      }
-    });
-
-    if (!best) {
-      return group[0];
-    }
-
-    const bestTyped: ClassificationItemData = best;
-    return {
-      ...bestTyped,
-      name: event?.sub_label || bestTyped.name,
-      score: event?.data?.sub_label_score || bestTyped.score,
-    };
-  }, [group, event]);
-
-  const bestScoreStatus = useMemo(() => {
-    if (!bestItem?.score || !threshold) {
-      return "unknown";
-    }
-
-    if (bestItem.score >= threshold.recognition) {
-      return "match";
-    } else if (bestItem.score >= threshold.unknown) {
-      return "potential";
-    } else {
-      return "unknown";
-    }
-  }, [bestItem, threshold]);
+  const allItemsSelected = useMemo(
+    () => group.every((data) => selectedItems.includes(data.filename)),
+    [group, selectedItems],
+  );

   const time = useMemo(() => {
     const item = group[0];
@@ -246,150 +171,94 @@ export function GroupedClassificationCard({
     return item.timestamp * 1000;
   }, [group]);

-  if (!bestItem) {
-    return null;
-  }
-
-  const Overlay = isDesktop ? Dialog : MobilePage;
-  const Trigger = isDesktop ? DialogTrigger : MobilePageTrigger;
-  const Header = isDesktop ? DialogHeader : MobilePageHeader;
-  const Content = isDesktop ? DialogContent : MobilePageContent;
-  const ContentTitle = isDesktop ? DialogTitle : MobilePageTitle;
-  const ContentDescription = isDesktop
-    ? DialogDescription
-    : MobilePageDescription;
-
   return (
-    <>
-      <ClassificationCard
-        data={bestItem}
-        threshold={threshold}
-        selected={selectedItems.includes(bestItem.filename)}
-        i18nLibrary={i18nLibrary}
-        count={group.length}
-        onClick={(_, meta) => {
-          if (meta || selectedItems.length > 0) {
-            onClick(undefined);
-          } else {
-            setDetailOpen(true);
-          }
-        }}
-      />
-      <Overlay
-        open={detailOpen}
-        onOpenChange={(open) => {
-          if (!open) {
-            setDetailOpen(false);
-          }
-        }}
-      >
-        <Trigger asChild></Trigger>
-        <Content
-          className={cn(
-            "",
-            isDesktop && "w-auto max-w-[85%]",
-            isMobile && "flex flex-col",
-          )}
-          onOpenAutoFocus={(e) => e.preventDefault()}
-        >
-          <>
-            {isDesktop && (
-              <div className="absolute right-10 top-4 flex flex-row justify-between">
-                {event && (
-                  <Tooltip>
-                    <TooltipTrigger asChild>
-                      <div
-                        className="cursor-pointer"
-                        tabIndex={-1}
-                        onClick={() => {
-                          navigate(`/explore?event_id=${event.id}`);
-                        }}
-                      >
-                        <LuSearch className="size-4 text-secondary-foreground" />
-                      </div>
-                    </TooltipTrigger>
-                    <TooltipPortal>
-                      <TooltipContent>
-                        {t("details.item.button.viewInExplore", {
-                          ns: "views/explore",
-                        })}
-                      </TooltipContent>
-                    </TooltipPortal>
-                  </Tooltip>
-                )}
-              </div>
-            )}
-            <Header className={cn("mx-2", isMobile && "flex-shrink-0")}>
-              <div>
-                <ContentTitle
-                  className={cn(
-                    "flex items-center gap-1 font-normal capitalize",
-                    isMobile && "px-2",
-                  )}
-                >
-                  {event?.sub_label ? event.sub_label : t("details.unknown")}
-                  {event?.sub_label && (
-                    <div
-                      className={cn(
-                        "",
-                        bestScoreStatus == "match" && "text-success",
-                        bestScoreStatus == "potential" && "text-orange-400",
-                        bestScoreStatus == "unknown" && "text-danger",
-                      )}
-                    >{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
-                  )}
-                </ContentTitle>
-                <ContentDescription className={cn("", isMobile && "px-2")}>
-                  {time && (
-                    <TimeAgo
-                      className="text-sm text-secondary-foreground"
-                      time={time}
-                      dense
-                    />
-                  )}
-                </ContentDescription>
-              </div>
-            </Header>
-            <div
-              className={cn(
-                "flex cursor-pointer flex-col gap-2 rounded-lg",
-                isDesktop && "p-2",
-                isMobile && "scrollbar-container w-full flex-1 overflow-y-auto",
-              )}
-            >
-              <div
-                className={cn(
-                  "gap-2",
-                  isDesktop
-                    ? "flex flex-row flex-wrap"
-                    : "grid grid-cols-2 justify-items-center gap-2 px-2 sm:grid-cols-5 lg:grid-cols-6",
-                )}
-              >
-                {group.map((data: ClassificationItemData) => (
-                  <div
-                    key={data.filename}
-                    className={cn(isMobile && "aspect-square size-full")}
-                  >
-                    <ClassificationCard
-                      data={data}
-                      threshold={threshold}
-                      selected={false}
-                      i18nLibrary={i18nLibrary}
-                      onClick={(data, meta) => {
-                        if (meta || selectedItems.length > 0) {
-                          onClick(data);
-                        }
-                      }}
-                    >
-                      {children?.(data)}
-                    </ClassificationCard>
-                  </div>
-                ))}
-              </div>
-            </div>
-          </>
-        </Content>
-      </Overlay>
-    </>
+    <div
+      className={cn(
+        "flex cursor-pointer flex-col gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
+        isMobile && "w-full",
+        allItemsSelected
+          ? "shadow-selected outline-selected"
+          : "outline-transparent duration-500",
+      )}
+      onClick={() => {
+        if (selectedItems.length) {
+          onClick(undefined);
+        }
+      }}
+      onContextMenu={(e) => {
+        e.stopPropagation();
+        e.preventDefault();
+        onClick(undefined);
+      }}
+    >
+      <div className="flex flex-row justify-between">
+        <div className="flex flex-col gap-1">
+          <div className="select-none smart-capitalize">
+            {getTranslatedLabel(objectType)}
+            {event?.sub_label
+              ? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
+              : ": " + t("details.unknown")}
+          </div>
+          {time && (
+            <TimeAgo
+              className="text-sm text-secondary-foreground"
+              time={time}
+              dense
+            />
+          )}
+        </div>
+        {event && (
+          <Tooltip>
+            <TooltipTrigger>
+              <div
+                className="cursor-pointer"
+                onClick={() => {
+                  navigate(`/explore?event_id=${event.id}`);
+                }}
+              >
+                <LuSearch className="size-4 text-muted-foreground" />
+              </div>
+            </TooltipTrigger>
+            <TooltipPortal>
+              <TooltipContent>
+                {t("details.item.button.viewInExplore", {
+                  ns: "views/explore",
+                })}
+              </TooltipContent>
+            </TooltipPortal>
+          </Tooltip>
+        )}
+      </div>
+      <div
+        className={cn(
+          "gap-2",
+          isDesktop
+            ? "flex flex-row flex-wrap"
+            : "grid grid-cols-2 sm:grid-cols-5 lg:grid-cols-6",
+        )}
+      >
+        {group.map((data: ClassificationItemData) => (
+          <ClassificationCard
+            key={data.filename}
+            data={data}
+            threshold={threshold}
+            selected={
+              allItemsSelected ? false : selectedItems.includes(data.filename)
+            }
+            i18nLibrary={i18nLibrary}
+            onClick={(data, meta) => {
+              if (meta || selectedItems.length > 0) {
+                onClick(data);
+              } else if (event) {
+                onSelectEvent(event);
+              }
+            }}
+          >
+            {children?.(data)}
+          </ClassificationCard>
+        ))}
+      </div>
+    </div>
   );
 }

View File

@@ -21,7 +21,6 @@ import { baseUrl } from "@/api/baseUrl";
 import { cn } from "@/lib/utils";
 import { shareOrCopy } from "@/utils/browserUtil";
 import { useTranslation } from "react-i18next";
-import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";

 type ExportProps = {
   className: string;
@@ -225,7 +224,7 @@ export default function ExportCard({
       {loading && (
         <Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />
       )}
-      <ImageShadowOverlay />
+      <div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 h-[50%] rounded-lg bg-gradient-to-t from-black/60 to-transparent md:rounded-2xl" />
       <div className="absolute bottom-2 left-3 flex h-full items-end justify-between text-white smart-capitalize">
         {exportedRecording.name.replaceAll("_", " ")}
       </div>

View File

@ -1,27 +0,0 @@
import { cn } from "@/lib/utils";
type ImageShadowOverlayProps = {
upperClassName?: string;
lowerClassName?: string;
};
export function ImageShadowOverlay({
upperClassName,
lowerClassName,
}: ImageShadowOverlayProps) {
return (
<>
<div
className={cn(
"pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
upperClassName,
)}
/>
<div
className={cn(
"pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
lowerClassName,
)}
/>
</>
);
}

View File

@@ -6,7 +6,6 @@ import MSEPlayer from "./MsePlayer";
 import { LivePlayerMode } from "@/types/live";
 import { cn } from "@/lib/utils";
 import React from "react";
-import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";

 type LivePlayerProps = {
   className?: string;
@@ -77,7 +76,8 @@ export default function BirdseyeLivePlayer({
       )}
       onClick={onClick}
     >
-      <ImageShadowOverlay />
+      <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
+      <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
       <div className="size-full" ref={playerRef}>
         {player}
       </div>

View File

@@ -25,7 +25,6 @@ import { PlayerStats } from "./PlayerStats";
 import { LuVideoOff } from "react-icons/lu";
 import { Trans, useTranslation } from "react-i18next";
 import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
-import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";

 type LivePlayerProps = {
   cameraRef?: (ref: HTMLDivElement | null) => void;
@@ -329,7 +328,10 @@ export default function LivePlayer({
     >
       {cameraEnabled &&
         ((showStillWithoutActivity && !liveReady) || liveReady) && (
-          <ImageShadowOverlay />
+          <>
+            <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
+            <div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
+          </>
         )}
       {player}
       {cameraEnabled &&

View File

@@ -107,7 +107,7 @@ const DialogContent = React.forwardRef<
     >
       {children}
       <DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity data-[state=open]:bg-accent data-[state=open]:text-muted-foreground hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none">
-        <X className="h-4 w-4 text-secondary-foreground" />
+        <X className="h-4 w-4" />
         <span className="sr-only">Close</span>
       </DialogPrimitive.Close>
     </DialogPrimitive.Content>

View File

@@ -51,7 +51,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
+import { isDesktop } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import {
   LuFolderCheck,
@@ -63,6 +63,10 @@ import {
 } from "react-icons/lu";
 import { toast } from "sonner";
 import useSWR from "swr";
+import SearchDetailDialog, {
+  SearchTab,
+} from "@/components/overlay/detail/SearchDetailDialog";
+import { SearchResult } from "@/types/search";
 import {
   ClassificationCard,
   GroupedClassificationCard,
@@ -682,6 +686,11 @@ function TrainingGrid({
     { ids: eventIdsQuery },
   ]);

+  // selection
+  const [selectedEvent, setSelectedEvent] = useState<Event>();
+  const [dialogTab, setDialogTab] = useState<SearchTab>("details");
+
   if (attemptImages.length == 0) {
     return (
       <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
@@ -692,32 +701,40 @@ function TrainingGrid({
   }

   return (
-    <div
-      ref={contentRef}
-      className={cn(
-        "scrollbar-container gap-3 overflow-y-scroll p-1",
-        isMobile
-          ? "grid grid-cols-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
-          : "flex flex-wrap",
-      )}
-    >
-      {Object.entries(faceGroups).map(([key, group]) => {
-        const event = events?.find((ev) => ev.id == key);
-        return (
-          <div key={key} className={cn(isMobile && "aspect-square size-full")}>
+    <>
+      <SearchDetailDialog
+        search={
+          selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
+        }
+        page={dialogTab}
+        setSimilarity={undefined}
+        setSearchPage={setDialogTab}
+        setSearch={(search) => setSelectedEvent(search as unknown as Event)}
+        setInputFocused={() => {}}
+      />
+
+      <div
+        ref={contentRef}
+        className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
+      >
+        {Object.entries(faceGroups).map(([key, group]) => {
+          const event = events?.find((ev) => ev.id == key);
+          return (
             <FaceAttemptGroup
+              key={key}
               config={config}
               group={group}
               event={event}
               faceNames={faceNames}
               selectedFaces={selectedFaces}
               onClickFaces={onClickFaces}
+              onSelectEvent={setSelectedEvent}
               onRefresh={onRefresh}
             />
-          </div>
-        );
-      })}
-    </div>
+          );
+        })}
+      </div>
+    </>
   );
 }
@@ -728,6 +745,7 @@ type FaceAttemptGroupProps = {
   faceNames: string[];
   selectedFaces: string[];
   onClickFaces: (image: string[], ctrl: boolean) => void;
+  onSelectEvent: (event: Event) => void;
   onRefresh: () => void;
 };

 function FaceAttemptGroup({
@@ -737,6 +755,7 @@ function FaceAttemptGroup({
   faceNames,
   selectedFaces,
   onClickFaces,
+  onSelectEvent,
   onRefresh,
 }: FaceAttemptGroupProps) {
   const { t } = useTranslation(["views/faceLibrary", "views/explore"]);
@@ -754,8 +773,8 @@ function FaceAttemptGroup({
   const handleClickEvent = useCallback(
     (meta: boolean) => {
-      if (!meta) {
-        return;
+      if (event && selectedFaces.length == 0 && !meta) {
+        onSelectEvent(event);
       } else {
         const anySelected =
           group.find((face) => selectedFaces.includes(face.filename)) !=
@@ -779,7 +798,7 @@ function FaceAttemptGroup({
         }
       }
     },
-    [group, selectedFaces, onClickFaces],
+    [event, group, selectedFaces, onClickFaces, onSelectEvent],
   );

   // api calls
@@ -854,6 +873,7 @@ function FaceAttemptGroup({
           handleClickEvent(true);
         }
       }}
+      onSelectEvent={onSelectEvent}
     >
       {(data) => (
         <>
View File

@@ -1,7 +1,6 @@
 import { baseUrl } from "@/api/baseUrl";
 import ClassificationModelWizardDialog from "@/components/classification/ClassificationModelWizardDialog";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
-import { ImageShadowOverlay } from "@/components/overlay/ImageShadowOverlay";
 import { Button } from "@/components/ui/button";
 import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
 import useOptimisticState from "@/hooks/use-optimistic-state";
@@ -164,7 +163,7 @@ function ModelCard({ config, onClick }: ModelCardProps) {
         className={cn("size-full", isMobile && "w-full")}
         src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
       />
-      <ImageShadowOverlay />
+      <div className="absolute bottom-0 h-[50%] w-full bg-gradient-to-t from-black/60 to-transparent" />
       <div className="absolute bottom-2 left-3 text-lg smart-capitalize">
         {config.name}
       </div>

View File

@@ -44,7 +44,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
+import { isDesktop, isMobile } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { toast } from "sonner";
@@ -791,7 +791,7 @@ function StateTrainGrid({
     <div
       ref={contentRef}
       className={cn(
-        "scrollbar-container flex flex-wrap gap-3 overflow-y-auto p-2",
+        "scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2",
         isMobile && "justify-center",
       )}
     >
@@ -927,50 +927,41 @@ function ObjectTrainGrid({
     <div
       ref={contentRef}
-      className={cn(
-        "scrollbar-container gap-3 overflow-y-scroll p-1",
-        isMobile
-          ? "grid grid-cols-2 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8"
-          : "flex flex-wrap",
-      )}
+      className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
     >
       {Object.entries(groups).map(([key, group]) => {
         const event = events?.find((ev) => ev.id == key);
         return (
-          <div
-            key={key}
-            className={cn(isMobile && "aspect-square size-full")}
-          >
-            <GroupedClassificationCard
-              key={key}
-              group={group}
-              event={event}
-              threshold={threshold}
-              selectedItems={selectedImages}
-              i18nLibrary="views/classificationModel"
-              objectType={model.object_config?.objects?.at(0) ?? "Object"}
-              onClick={(data) => {
-                if (data) {
-                  onClickImages([data.filename], true);
-                } else {
-                  handleClickEvent(group, event, true);
-                }
-              }}
-            >
-              {(data) => (
-                <>
-                  <ClassificationSelectionDialog
-                    classes={classes}
-                    modelName={model.name}
-                    image={data.filename}
-                    onRefresh={onRefresh}
-                  >
-                    <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
-                  </ClassificationSelectionDialog>
-                </>
-              )}
-            </GroupedClassificationCard>
-          </div>
+          <GroupedClassificationCard
+            key={key}
+            group={group}
+            event={event}
+            threshold={threshold}
+            selectedItems={selectedImages}
+            i18nLibrary="views/classificationModel"
+            objectType={model.object_config?.objects?.at(0) ?? "Object"}
+            onClick={(data) => {
+              if (data) {
+                onClickImages([data.filename], true);
+              } else {
+                handleClickEvent(group, event, true);
+              }
+            }}
+            onSelectEvent={() => {}}
+          >
+            {(data) => (
+              <>
+                <ClassificationSelectionDialog
+                  classes={classes}
+                  modelName={model.name}
+                  image={data.filename}
+                  onRefresh={onRefresh}
+                >
+                  <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground" />
+                </ClassificationSelectionDialog>
+              </>
+            )}
+          </GroupedClassificationCard>
         );
       })}