Compare commits


No commits in common. "b38f830b3b99fb61eb35b2f2d1d3fb9a458b7123" and "9638e85a1fdc5eff18e363613367fba4523ff491" have entirely different histories.

31 changed files with 434 additions and 763 deletions

View File

@@ -274,18 +274,6 @@ http {
             include proxy.conf;
         }
 
-        # Allow unauthenticated access to the first_time_login endpoint
-        # so the login page can load help text before authentication.
-        location /api/auth/first_time_login {
-            auth_request off;
-            limit_except GET {
-                deny all;
-            }
-            rewrite ^/api(/.*)$ $1 break;
-            proxy_pass http://frigate_api;
-            include proxy.conf;
-        }
-
         location /api/stats {
             include auth_request.conf;
             access_log off;

View File

@@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d
 ### Improving the Model
 
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
-- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
+- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
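For orientation only (an illustrative sketch, not taken from the diff): based on the config shape visible elsewhere in this comparison, the `threshold` mentioned above sits on a custom object classification model under `classification.custom`. The model and object names below are hypothetical:

```yaml
classification:
  custom:
    bird_species:      # hypothetical model name
      enabled: true
      threshold: 0.8   # starting point suggested above; tune per model
      object_config:
        objects:       # object types whose crops this model classifies
          - bird
```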

View File

@@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo
 ### Improving the Model
 
 - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
-- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
+- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.
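Likewise, a minimal sketch (illustrative, not taken from the diff) of a state classification model under `classification.custom`; the model name, camera name, and crop values are hypothetical, and note that this comparison changes `crop` from normalized floats to pixel coordinates:

```yaml
classification:
  custom:
    garage_door:       # hypothetical model name
      enabled: true
      threshold: 0.8
      state_config:
        cameras:
          driveway:    # hypothetical camera name
            crop: [0, 180, 220, 400]  # region of the detect frame, in pixels per the updated schema
```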

View File

@@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
 - `min_faces`: Min face recognitions for the sub label to be applied to the person object.
   - Default: `1`
 - `save_attempts`: Number of images of recognized faces to save for training.
-  - Default: `200`.
+  - Default: `100`.
 - `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
   - Default: `True`.
 - `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
@@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
 :::
 
-### Understanding the Recent Recognitions Tab
+### Understanding the Train Tab
 
-The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
 
 Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.

@@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
 Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
 
-1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
+1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.
 
 If you are using a Frigate+ or `face` detecting model:

@@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
 No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
 
 For more guidance, refer to the section above on improving recognition accuracy.
 
-### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
+### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
 
 Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if that person is confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results.

View File

@@ -630,7 +630,7 @@ face_recognition:
   # Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
   min_faces: 1
   # Optional: Number of images of recognized faces to save for training (default: shown below)
-  save_attempts: 200
+  save_attempts: 100
   # Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
   blur_confidence_filter: True
   # Optional: Set the model size used face recognition. (default: shown below)

@@ -671,18 +671,20 @@ lpr:
   # Optional: List of regex replacement rules to normalize detected plates (default: shown below)
   replace_rules: {}
 
-# Optional: Configuration for AI / LLM provider
+# Optional: Configuration for AI generated tracked object descriptions
 # WARNING: Depending on the provider, this will send thumbnails over the internet
-# to Google or OpenAI's LLMs to generate descriptions. GenAI features can be configured at
-# the camera level to enhance privacy for indoor cameras.
+# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
+# the camera level (enabled: False) to enhance privacy for indoor cameras.
 genai:
-  # Required: Provider must be one of ollama, gemini, or openai
+  # Optional: Enable AI description generation (default: shown below)
+  enabled: False
+  # Required if enabled: Provider must be one of ollama, gemini, or openai
   provider: ollama
   # Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
   base_url: http://localhost:11434
   # Required if gemini or openai
   api_key: "{FRIGATE_GENAI_API_KEY}"
-  # Required: The model to use with the provider.
+  # Required if enabled: The model to use with the provider.
   model: gemini-1.5-flash
   # Optional additional args to pass to the GenAI Provider (default: None)
   provider_options:
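As the updated comment notes, the global `genai` block can be overridden per camera. A minimal sketch of that override (illustrative only; the camera name is hypothetical):

```yaml
cameras:
  indoor_cam:        # hypothetical camera name
    genai:
      enabled: False # keep AI descriptions off for this camera
```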

View File

@@ -35,23 +35,6 @@ logger = logging.getLogger(__name__)
 router = APIRouter(tags=[Tags.auth])
 
 
-@router.get("/auth/first_time_login")
-def first_time_login(request: Request):
-    """Return whether the admin first-time login help flag is set in config.
-
-    This endpoint is intentionally unauthenticated so the login page can
-    query it before a user is authenticated.
-    """
-    auth_config = request.app.frigate_config.auth
-    return JSONResponse(
-        content={
-            "admin_first_time_login": auth_config.admin_first_time_login
-            or auth_config.reset_admin_password
-        }
-    )
-
-
 class RateLimiter:
     _limit = ""

@@ -532,11 +515,6 @@ def login(request: Request, body: AppPostLoginBody):
         set_jwt_cookie(
             response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
         )
-
-        # Clear admin_first_time_login flag after successful admin login so the
-        # UI stops showing the first-time login documentation link.
-        if role == "admin":
-            request.app.frigate_config.auth.admin_first_time_login = False
         return response
     return JSONResponse(content={"message": "Login failed"}, status_code=401)

View File

@@ -488,8 +488,6 @@ class FrigateApp:
                 }
             ).execute()
 
-            self.config.auth.admin_first_time_login = True
-
             logger.info("********************************************************")
             logger.info("********************************************************")
             logger.info("*** Auth is enabled, but no users exist. ***")

View File

@@ -38,13 +38,6 @@ class AuthConfig(FrigateBaseModel):
         default_factory=dict,
         title="Role to camera mappings. Empty list grants access to all cameras.",
     )
-    admin_first_time_login: Optional[bool] = Field(
-        default=False,
-        title="Internal field to expose first-time admin login flag to the UI",
-        description=(
-            "When true the UI may show a help link on the login page informing users how to sign in after an admin password reset. "
-        ),
-    )
 
     @field_validator("roles")
     @classmethod

View File

@@ -69,7 +69,7 @@ class BirdClassificationConfig(FrigateBaseModel):
 
 
 class CustomClassificationStateCameraConfig(FrigateBaseModel):
-    crop: list[float, float, float, float] = Field(
+    crop: list[int, int, int, int] = Field(
         title="Crop of image frame on this camera to run classification on."
     )

@@ -197,9 +197,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
         title="Min face recognitions for the sub label to be applied to the person object.",
     )
     save_attempts: int = Field(
-        default=200,
-        ge=0,
-        title="Number of face attempts to save in the recent recognitions tab.",
+        default=100, ge=0, title="Number of face attempts to save in the train tab."
     )
     blur_confidence_filter: bool = Field(
         default=True, title="Apply blur quality filter to face confidence."

View File

@@ -96,10 +96,10 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
         camera_config = self.model_config.state_config.cameras[camera]
 
         crop = [
-            camera_config.crop[0] * self.config.cameras[camera].detect.width,
-            camera_config.crop[1] * self.config.cameras[camera].detect.height,
-            camera_config.crop[2] * self.config.cameras[camera].detect.width,
-            camera_config.crop[3] * self.config.cameras[camera].detect.height,
+            camera_config.crop[0],
+            camera_config.crop[1],
+            camera_config.crop[2],
+            camera_config.crop[3],
         ]
 
         should_run = False

View File

@@ -3,7 +3,6 @@
     "user": "Username",
     "password": "Password",
     "login": "Login",
-    "firstTimeLogin": "Trying to log in for the first time? Credentials are printed in the Frigate logs.",
     "errors": {
       "usernameRequired": "Username is required",
       "passwordRequired": "Password is required",

View File

@@ -23,7 +23,7 @@
       "label": "Min face recognitions for the sub label to be applied to the person object."
     },
     "save_attempts": {
-      "label": "Number of face attempts to save in the recent recognitions tab."
+      "label": "Number of face attempts to save in the train tab."
     },
     "blur_confidence_filter": {
       "label": "Apply blur quality filter to face confidence."

View File

@@ -41,17 +41,13 @@
     "invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
   },
   "train": {
-    "title": "Recent Classifications",
-    "aria": "Select Recent Classifications"
+    "title": "Train",
+    "aria": "Select Train"
   },
   "categories": "Classes",
   "createCategory": {
     "new": "Create New Class"
   },
   "categorizeImageAs": "Classify Image As:",
-  "categorizeImage": "Classify Image",
-  "wizard": {
-    "title": "Create New Classification",
-    "description": "Create a new state or object classification model."
-  }
+  "categorizeImage": "Classify Image"
 }

View File

@@ -22,7 +22,7 @@
       "title": "Create Collection",
       "desc": "Create a new collection",
       "new": "Create New Face",
-      "nextSteps": "To build a strong foundation:<li>Use the Recent Recognitions tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
+      "nextSteps": "To build a strong foundation:<li>Use the Train tab to select and train on images for each detected person.</li><li>Focus on straight-on images for best results; avoid training images that capture faces at an angle.</li></ul>"
     },
     "steps": {
       "faceName": "Enter Face Name",

@@ -33,8 +33,8 @@
       }
     },
     "train": {
-      "title": "Recent Recognitions",
-      "aria": "Select recent recognitions",
+      "title": "Train",
+      "aria": "Select train",
       "empty": "There are no recent face recognition attempts"
     },
     "selectItem": "Select {{item}}",

View File

@@ -272,8 +272,6 @@
     "title": "Stream Validation",
     "videoCodecGood": "Video codec is {{codec}}.",
     "audioCodecGood": "Audio codec is {{codec}}.",
-    "resolutionHigh": "A resolution of {{resolution}} may cause increased resource usage.",
-    "resolutionLow": "A resolution of {{resolution}} may be too low for reliable detection of small objects.",
     "noAudioWarning": "No audio detected for this stream, recordings will not have audio.",
     "audioCodecRecordError": "The AAC audio codec is required to support audio in recordings.",
     "audioCodecRequired": "An audio stream is required to support audio detection.",

View File

@@ -22,24 +22,14 @@ import { zodResolver } from "@hookform/resolvers/zod";
 import { z } from "zod";
 import { AuthContext } from "@/context/auth-context";
 import { useTranslation } from "react-i18next";
-import useSWR from "swr";
-import { LuExternalLink } from "react-icons/lu";
-import { useDocDomain } from "@/hooks/use-doc-domain";
-import { Card, CardContent } from "@/components/ui/card";
 
 interface UserAuthFormProps extends React.HTMLAttributes<HTMLDivElement> {}
 
 export function UserAuthForm({ className, ...props }: UserAuthFormProps) {
-  const { t } = useTranslation(["components/auth", "common"]);
-  const { getLocaleDocUrl } = useDocDomain();
+  const { t } = useTranslation(["components/auth"]);
   const [isLoading, setIsLoading] = React.useState<boolean>(false);
   const { login } = React.useContext(AuthContext);
 
-  // need to use local fetcher because useSWR default fetcher is not set up in this context
-  const fetcher = (path: string) => axios.get(path).then((res) => res.data);
-  const { data } = useSWR("/auth/first_time_login", fetcher);
-  const showFirstTimeLink = data?.admin_first_time_login === true;
-
   const formSchema = z.object({
     user: z.string().min(1, t("form.errors.usernameRequired")),
     password: z.string().min(1, t("form.errors.passwordRequired")),

@@ -146,24 +136,6 @@ export function UserAuthForm({ className, ...props }: UserAuthFormProps) {
           </div>
         </form>
       </Form>
-      {showFirstTimeLink && (
-        <Card className="mt-4 p-4 text-center text-sm">
-          <CardContent className="p-2">
-            <p className="mb-2 text-primary-variant">
-              {t("form.firstTimeLogin")}
-            </p>
-            <a
-              href={getLocaleDocUrl("configuration/authentication#onboarding")}
-              target="_blank"
-              rel="noopener noreferrer"
-              className="inline-flex items-center text-primary"
-            >
-              {t("readTheDocumentation", { ns: "common" })}
-              <LuExternalLink className="ml-2 size-3" />
-            </a>
-          </CardContent>
-        </Card>
-      )}
       <Toaster />
     </div>
   );

View File

@@ -6,7 +6,7 @@ import {
   ClassificationThreshold,
 } from "@/types/classification";
 import { Event } from "@/types/event";
-import { forwardRef, useMemo, useRef, useState } from "react";
+import { useMemo, useRef, useState } from "react";
 import { isDesktop, isMobile } from "react-device-detect";
 import { useTranslation } from "react-i18next";
 import TimeAgo from "../dynamic/TimeAgo";

@@ -14,24 +14,7 @@ import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
 import { LuSearch } from "react-icons/lu";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { useNavigate } from "react-router-dom";
-import { HiSquare2Stack } from "react-icons/hi2";
-import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
-import {
-  Dialog,
-  DialogContent,
-  DialogDescription,
-  DialogHeader,
-  DialogTitle,
-  DialogTrigger,
-} from "../ui/dialog";
-import {
-  MobilePage,
-  MobilePageContent,
-  MobilePageDescription,
-  MobilePageHeader,
-  MobilePageTitle,
-  MobilePageTrigger,
-} from "../mobile/MobilePage";
+import { getTranslatedLabel } from "@/utils/i18n";
 
 type ClassificationCardProps = {
   className?: string;
@ -41,28 +24,20 @@ type ClassificationCardProps = {
selected: boolean; selected: boolean;
i18nLibrary: string; i18nLibrary: string;
showArea?: boolean; showArea?: boolean;
count?: number;
onClick: (data: ClassificationItemData, meta: boolean) => void; onClick: (data: ClassificationItemData, meta: boolean) => void;
children?: React.ReactNode; children?: React.ReactNode;
}; };
export const ClassificationCard = forwardRef< export function ClassificationCard({
HTMLDivElement, className,
ClassificationCardProps imgClassName,
>(function ClassificationCard( data,
{ threshold,
className, selected,
imgClassName, i18nLibrary,
data, showArea = true,
threshold, onClick,
selected, children,
i18nLibrary, }: ClassificationCardProps) {
showArea = true,
count,
onClick,
children,
},
ref,
) {
const { t } = useTranslation([i18nLibrary]); const { t } = useTranslation([i18nLibrary]);
const [imageLoaded, setImageLoaded] = useState(false); const [imageLoaded, setImageLoaded] = useState(false);
@ -97,81 +72,61 @@ export const ClassificationCard = forwardRef<
}, [showArea, imageLoaded]); }, [showArea, imageLoaded]);
return ( return (
<div <>
ref={ref} <div
className={cn(
"relative flex size-full cursor-pointer flex-col overflow-hidden rounded-lg outline outline-[3px]",
className,
selected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
)}
onClick={(e) => {
const isMeta = e.metaKey || e.ctrlKey;
if (isMeta) {
e.stopPropagation();
}
onClick(data, isMeta);
}}
onContextMenu={(e) => {
e.preventDefault();
e.stopPropagation();
onClick(data, true);
}}
>
<img
ref={imgRef}
className={cn( className={cn(
"absolute bottom-0 left-0 right-0 top-0 size-full", "relative flex cursor-pointer flex-col rounded-lg outline outline-[3px]",
imgClassName, className,
isMobile && "w-full", selected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
)} )}
onLoad={() => setImageLoaded(true)} >
src={`${baseUrl}${data.filepath}`} <div className="relative w-full select-none overflow-hidden rounded-lg">
/> <img
<ImageShadowOverlay upperClassName="z-0" lowerClassName="h-[30%] z-0" /> ref={imgRef}
{count && ( onLoad={() => setImageLoaded(true)}
<div className="absolute right-2 top-2 flex flex-row items-center gap-1"> className={cn("size-44", imgClassName, isMobile && "w-full")}
<div className="text-gray-200">{count}</div>{" "} src={`${baseUrl}${data.filepath}`}
<HiSquare2Stack className="text-gray-200" /> onClick={(e) => {
</div> e.stopPropagation();
)} onClick(data, e.metaKey || e.ctrlKey);
{!count && imageArea != undefined && ( }}
<div className="absolute right-1 top-1 rounded-lg bg-black/50 px-2 py-1 text-xs text-white"> />
{t("information.pixels", { ns: "common", area: imageArea })} {imageArea != undefined && (
</div> <div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
)} {t("information.pixels", { ns: "common", area: imageArea })}
<div className="absolute bottom-0 left-0 right-0 h-[50%] bg-gradient-to-t from-black/60 to-transparent" />
<div className="absolute bottom-0 flex w-full select-none flex-row items-center justify-between gap-2 p-2">
<div
className={cn(
"flex flex-col items-start text-white",
data.score ? "text-xs" : "text-sm",
)}
>
<div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name}
</div>
{data.score && (
<div
className={cn(
"",
scoreStatus == "match" && "text-success",
scoreStatus == "potential" && "text-orange-400",
scoreStatus == "unknown" && "text-danger",
)}
>
{Math.round(data.score * 100)}%
</div> </div>
)} )}
</div> </div>
<div className="flex flex-row items-start justify-end gap-5 md:gap-2"> <div className="select-none p-2">
{children} <div className="flex w-full flex-row items-center justify-between gap-2">
<div className="flex flex-col items-start text-xs text-primary-variant">
<div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name}
</div>
{data.score && (
<div
className={cn(
"",
scoreStatus == "match" && "text-success",
scoreStatus == "potential" && "text-orange-400",
scoreStatus == "unknown" && "text-danger",
)}
>
{Math.round(data.score * 100)}%
</div>
)}
</div>
<div className="flex flex-row items-start justify-end gap-5 md:gap-4">
{children}
</div>
</div>
</div> </div>
</div> </div>
</div> </>
); );
}); }
type GroupedClassificationCardProps = { type GroupedClassificationCardProps = {
group: ClassificationItemData[]; group: ClassificationItemData[];
@ -181,6 +136,7 @@ type GroupedClassificationCardProps = {
i18nLibrary: string; i18nLibrary: string;
objectType: string; objectType: string;
onClick: (data: ClassificationItemData | undefined) => void; onClick: (data: ClassificationItemData | undefined) => void;
onSelectEvent: (event: Event) => void;
children?: (data: ClassificationItemData) => React.ReactNode; children?: (data: ClassificationItemData) => React.ReactNode;
}; };
export function GroupedClassificationCard({ export function GroupedClassificationCard({
@ -189,54 +145,20 @@ export function GroupedClassificationCard({
threshold, threshold,
selectedItems, selectedItems,
i18nLibrary, i18nLibrary,
objectType,
onClick, onClick,
onSelectEvent,
children, children,
}: GroupedClassificationCardProps) { }: GroupedClassificationCardProps) {
const navigate = useNavigate(); const navigate = useNavigate();
const { t } = useTranslation(["views/explore", i18nLibrary]); const { t } = useTranslation(["views/explore", i18nLibrary]);
const [detailOpen, setDetailOpen] = useState(false);
// data // data
const bestItem = useMemo<ClassificationItemData | undefined>(() => { const allItemsSelected = useMemo(
let best: undefined | ClassificationItemData = undefined; () => group.every((data) => selectedItems.includes(data.filename)),
[group, selectedItems],
group.forEach((item) => { );
if (item?.name != undefined && item.name != "none") {
if (
best?.score == undefined ||
(item.score && best.score < item.score)
) {
best = item;
}
}
});
if (!best) {
return group.at(-1);
}
const bestTyped: ClassificationItemData = best;
return {
...bestTyped,
name: event ? (event.sub_label ?? t("details.unknown")) : bestTyped.name,
score: event?.data?.sub_label_score || bestTyped.score,
};
}, [group, event, t]);
const bestScoreStatus = useMemo(() => {
if (!bestItem?.score || !threshold) {
return "unknown";
}
if (bestItem.score >= threshold.recognition) {
return "match";
} else if (bestItem.score >= threshold.unknown) {
return "potential";
} else {
return "unknown";
}
}, [bestItem, threshold]);
const time = useMemo(() => { const time = useMemo(() => {
const item = group[0]; const item = group[0];
@ -248,143 +170,94 @@ export function GroupedClassificationCard({
return item.timestamp * 1000; return item.timestamp * 1000;
}, [group]); }, [group]);
if (!bestItem) {
return null;
}
const Overlay = isDesktop ? Dialog : MobilePage;
const Trigger = isDesktop ? DialogTrigger : MobilePageTrigger;
const Header = isDesktop ? DialogHeader : MobilePageHeader;
const Content = isDesktop ? DialogContent : MobilePageContent;
const ContentTitle = isDesktop ? DialogTitle : MobilePageTitle;
const ContentDescription = isDesktop
? DialogDescription
: MobilePageDescription;
return ( return (
<> <div
<ClassificationCard className={cn(
data={bestItem} "flex cursor-pointer flex-col gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
threshold={threshold} isMobile && "w-full",
selected={selectedItems.includes(bestItem.filename)} allItemsSelected
i18nLibrary={i18nLibrary} ? "shadow-selected outline-selected"
count={group.length} : "outline-transparent duration-500",
onClick={(_, meta) => { )}
if (meta || selectedItems.length > 0) { onClick={() => {
onClick(undefined); if (selectedItems.length) {
} else { onClick(undefined);
setDetailOpen(true); }
} }}
}} onContextMenu={(e) => {
/> e.stopPropagation();
<Overlay e.preventDefault();
open={detailOpen} onClick(undefined);
onOpenChange={(open) => { }}
if (!open) { >
setDetailOpen(false); <div className="flex flex-row justify-between">
} <div className="flex flex-col gap-1">
}} <div className="select-none smart-capitalize">
> {getTranslatedLabel(objectType)}
<Trigger asChild></Trigger> {event?.sub_label
<Content ? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
className={cn( : ": " + t("details.unknown")}
"", </div>
isDesktop && "min-w-[50%] max-w-[65%]", {time && (
isMobile && "flex flex-col", <TimeAgo
className="text-sm text-secondary-foreground"
time={time}
dense
/>
)} )}
onOpenAutoFocus={(e) => e.preventDefault()} </div>
> {event && (
<> <Tooltip>
<Header <TooltipTrigger>
className={cn( <div
"mx-2 flex flex-row items-center gap-4", className="cursor-pointer"
isMobile && "flex-shrink-0", onClick={() => {
)} navigate(`/explore?event_id=${event.id}`);
> }}
<div> >
<ContentTitle <LuSearch className="size-4 text-muted-foreground" />
className={cn(
"flex items-center gap-1 font-normal capitalize",
isMobile && "px-2",
)}
>
{event?.sub_label ? event.sub_label : t("details.unknown")}
{event?.sub_label && (
<div
className={cn(
"",
bestScoreStatus == "match" && "text-success",
bestScoreStatus == "potential" && "text-orange-400",
bestScoreStatus == "unknown" && "text-danger",
)}
>{`${Math.round((event.data.sub_label_score || 0) * 100)}%`}</div>
)}
</ContentTitle>
<ContentDescription className={cn("", isMobile && "px-2")}>
{time && (
<TimeAgo
className="text-sm text-secondary-foreground"
time={time}
dense
/>
)}
</ContentDescription>
</div> </div>
{isDesktop && ( </TooltipTrigger>
<div className="flex flex-row justify-between"> <TooltipPortal>
{event && ( <TooltipContent>
<Tooltip> {t("details.item.button.viewInExplore", {
<TooltipTrigger asChild> ns: "views/explore",
<div })}
className="cursor-pointer" </TooltipContent>
tabIndex={-1} </TooltipPortal>
onClick={() => { </Tooltip>
navigate(`/explore?event_id=${event.id}`); )}
}} </div>
>
<LuSearch className="size-4 text-secondary-foreground" /> <div
</div> className={cn(
</TooltipTrigger> "gap-2",
<TooltipPortal> isDesktop
<TooltipContent> ? "flex flex-row flex-wrap"
{t("details.item.button.viewInExplore", { : "grid grid-cols-2 sm:grid-cols-5 lg:grid-cols-6",
ns: "views/explore", )}
})} >
</TooltipContent> {group.map((data: ClassificationItemData) => (
</TooltipPortal> <ClassificationCard
</Tooltip> key={data.filename}
)} data={data}
</div> threshold={threshold}
)} selected={
</Header> allItemsSelected ? false : selectedItems.includes(data.filename)
<div }
className={cn( i18nLibrary={i18nLibrary}
"grid w-full auto-rows-min grid-cols-2 gap-2 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-6 2xl:grid-cols-8", onClick={(data, meta) => {
isDesktop && "p-2", if (meta || selectedItems.length > 0) {
isMobile && "scrollbar-container flex-1 overflow-y-auto", onClick(data);
)} } else if (event) {
> onSelectEvent(event);
{group.map((data: ClassificationItemData) => ( }
<div key={data.filename} className="aspect-square w-full"> }}
<ClassificationCard >
data={data} {children?.(data)}
threshold={threshold} </ClassificationCard>
selected={false} ))}
i18nLibrary={i18nLibrary} </div>
onClick={(data, meta) => { </div>
if (meta || selectedItems.length > 0) {
onClick(data);
}
}}
>
{children?.(data)}
</ClassificationCard>
</div>
))}
</div>
</>
</Content>
</Overlay>
</>
); );
} }

View File

@ -21,7 +21,6 @@ import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import { shareOrCopy } from "@/utils/browserUtil"; import { shareOrCopy } from "@/utils/browserUtil";
import { useTranslation } from "react-i18next"; import { useTranslation } from "react-i18next";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type ExportProps = { type ExportProps = {
className: string; className: string;
@ -146,7 +145,7 @@ export default function ExportCard({
<> <>
{exportedRecording.thumb_path.length > 0 ? ( {exportedRecording.thumb_path.length > 0 ? (
<img <img
className="absolute inset-0 aspect-video size-full rounded-lg object-cover md:rounded-2xl" className="absolute inset-0 aspect-video size-full rounded-lg object-contain md:rounded-2xl"
src={`${baseUrl}${exportedRecording.thumb_path.replace("/media/frigate/", "")}`} src={`${baseUrl}${exportedRecording.thumb_path.replace("/media/frigate/", "")}`}
onLoad={() => setLoading(false)} onLoad={() => setLoading(false)}
/> />
@ -225,9 +224,10 @@ export default function ExportCard({
{loading && ( {loading && (
<Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" /> <Skeleton className="absolute inset-0 aspect-video rounded-lg md:rounded-2xl" />
)} )}
<ImageShadowOverlay /> <div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 h-[20%] rounded-lg bg-gradient-to-t from-black/60 to-transparent md:rounded-2xl">
<div className="absolute bottom-2 left-3 flex h-full items-end justify-between text-white smart-capitalize"> <div className="mx-3 flex h-full items-end justify-between pb-1 text-sm text-white smart-capitalize">
{exportedRecording.name.replaceAll("_", " ")} {exportedRecording.name.replaceAll("_", " ")}
</div>
</div> </div>
</div> </div>
</> </>

View File

@ -1,66 +0,0 @@
import { useTranslation } from "react-i18next";
import StepIndicator from "../indicators/StepIndicator";
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
} from "../ui/dialog";
import { useState } from "react";
const STEPS = [
"classificationWizard.steps.nameAndDefine",
"classificationWizard.steps.stateArea",
"classificationWizard.steps.chooseExamples",
"classificationWizard.steps.train",
];
type ClassificationModelWizardDialogProps = {
open: boolean;
onClose: () => void;
};
export default function ClassificationModelWizardDialog({
open,
onClose,
}: ClassificationModelWizardDialogProps) {
const { t } = useTranslation(["views/classificationModel"]);
// step management
const [currentStep, _] = useState(0);
return (
<Dialog
open={open}
onOpenChange={(open) => {
if (!open) {
onClose;
}
}}
>
<DialogContent
className="max-h-[90dvh] max-w-4xl overflow-y-auto"
onInteractOutside={(e) => {
e.preventDefault();
}}
>
<StepIndicator
steps={STEPS}
currentStep={currentStep}
variant="dots"
className="mb-4 justify-start"
/>
<DialogHeader>
<DialogTitle>{t("wizard.title")}</DialogTitle>
{currentStep === 0 && (
<DialogDescription>{t("wizard.description")}</DialogDescription>
)}
</DialogHeader>
<div className="pb-4">
<div className="size-full"></div>
</div>
</DialogContent>
</Dialog>
);
}

View File

@ -20,14 +20,15 @@ import {
TooltipTrigger, TooltipTrigger,
} from "@/components/ui/tooltip"; } from "@/components/ui/tooltip";
import { isDesktop, isMobile } from "react-device-detect"; import { isDesktop, isMobile } from "react-device-detect";
import { LuPlus } from "react-icons/lu";
import { useTranslation } from "react-i18next"; import { useTranslation } from "react-i18next";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import React, { ReactNode, useCallback, useMemo, useState } from "react"; import React, { ReactNode, useCallback, useMemo, useState } from "react";
import TextEntryDialog from "./dialog/TextEntryDialog"; import TextEntryDialog from "./dialog/TextEntryDialog";
import { Button } from "../ui/button"; import { Button } from "../ui/button";
import { MdCategory } from "react-icons/md";
import axios from "axios"; import axios from "axios";
import { toast } from "sonner"; import { toast } from "sonner";
import { Separator } from "../ui/separator";
type ClassificationSelectionDialogProps = { type ClassificationSelectionDialogProps = {
className?: string; className?: string;
@ -96,7 +97,7 @@ export default function ClassificationSelectionDialog({
); );
return ( return (
<div className={className ?? "flex"}> <div className={className ?? ""}>
{newClass && ( {newClass && (
<TextEntryDialog <TextEntryDialog
open={true} open={true}
@ -127,22 +128,23 @@ export default function ClassificationSelectionDialog({
isMobile && "gap-2 pb-4", isMobile && "gap-2 pb-4",
)} )}
> >
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewClass(true)}
>
<LuPlus />
{t("createCategory.new")}
</SelectorItem>
{classes.sort().map((category) => ( {classes.sort().map((category) => (
<SelectorItem <SelectorItem
key={category} key={category}
className="flex cursor-pointer gap-2 smart-capitalize" className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => onCategorizeImage(category)} onClick={() => onCategorizeImage(category)}
> >
<MdCategory />
{category.replaceAll("_", " ")} {category.replaceAll("_", " ")}
</SelectorItem> </SelectorItem>
))} ))}
<Separator />
<SelectorItem
className="flex cursor-pointer gap-2 smart-capitalize"
onClick={() => setNewClass(true)}
>
{t("createCategory.new")}
</SelectorItem>
</div> </div>
</SelectorContent> </SelectorContent>
</Selector> </Selector>

View File

@@ -62,7 +62,7 @@ export default function FaceSelectionDialog({
   );
 
   return (
-    <div className={className ?? "flex"}>
+    <div className={className ?? ""}>
       {newFace && (
         <TextEntryDialog
           open={true}

View File

@ -1,27 +0,0 @@
import { cn } from "@/lib/utils";
type ImageShadowOverlayProps = {
upperClassName?: string;
lowerClassName?: string;
};
export function ImageShadowOverlay({
upperClassName,
lowerClassName,
}: ImageShadowOverlayProps) {
return (
<>
<div
className={cn(
"pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl",
upperClassName,
)}
/>
<div
className={cn(
"pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl",
lowerClassName,
)}
/>
</>
);
}

View File

@@ -60,7 +60,7 @@ export default function TrainFilterDialog({
           moreFiltersSelected ? "text-white" : "text-secondary-foreground",
         )}
       />
-      {isDesktop && t("filter")}
+      {isDesktop && t("more")}
     </Button>
   );
   const content = (

@@ -122,7 +122,7 @@
   return (
     <PlatformAwareSheet
       trigger={trigger}
-      title={t("filter")}
+      title={t("more")}
       content={content}
       contentClassName={cn(
         "w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",

View File

@ -6,7 +6,6 @@ import MSEPlayer from "./MsePlayer";
import { LivePlayerMode } from "@/types/live"; import { LivePlayerMode } from "@/types/live";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import React from "react"; import React from "react";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type LivePlayerProps = { type LivePlayerProps = {
className?: string; className?: string;
@ -77,7 +76,8 @@ export default function BirdseyeLivePlayer({
)} )}
onClick={onClick} onClick={onClick}
> >
<ImageShadowOverlay /> <div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
<div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
<div className="size-full" ref={playerRef}> <div className="size-full" ref={playerRef}>
{player} {player}
</div> </div>

View File

@ -25,7 +25,6 @@ import { PlayerStats } from "./PlayerStats";
import { LuVideoOff } from "react-icons/lu"; import { LuVideoOff } from "react-icons/lu";
import { Trans, useTranslation } from "react-i18next"; import { Trans, useTranslation } from "react-i18next";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name"; import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { ImageShadowOverlay } from "../overlay/ImageShadowOverlay";
type LivePlayerProps = { type LivePlayerProps = {
cameraRef?: (ref: HTMLDivElement | null) => void; cameraRef?: (ref: HTMLDivElement | null) => void;
@ -329,7 +328,10 @@ export default function LivePlayer({
> >
{cameraEnabled && {cameraEnabled &&
((showStillWithoutActivity && !liveReady) || liveReady) && ( ((showStillWithoutActivity && !liveReady) || liveReady) && (
<ImageShadowOverlay /> <>
<div className="pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full rounded-lg bg-gradient-to-b from-black/20 to-transparent md:rounded-2xl"></div>
<div className="pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[10%] w-full rounded-lg bg-gradient-to-t from-black/20 to-transparent md:rounded-2xl"></div>
</>
)} )}
{player} {player}
{cameraEnabled && {cameraEnabled &&

View File

@@ -500,29 +500,6 @@ function StreamIssues({
         }
       }
 
-      if (stream.roles.includes("detect") && stream.resolution) {
-        const [width, height] = stream.resolution.split("x").map(Number);
-        if (!isNaN(width) && !isNaN(height) && width > 0 && height > 0) {
-          const minDimension = Math.min(width, height);
-          const maxDimension = Math.max(width, height);
-
-          if (minDimension > 1080) {
-            result.push({
-              type: "warning",
-              message: t("cameraWizard.step3.issues.resolutionHigh", {
-                resolution: stream.resolution,
-              }),
-            });
-          } else if (maxDimension < 640) {
-            result.push({
-              type: "error",
-              message: t("cameraWizard.step3.issues.resolutionLow", {
-                resolution: stream.resolution,
-              }),
-            });
-          }
-        }
-      }
-
       // Substream Check
       if (
         wizardData.brandTemplate == "dahua" &&

View File

@@ -107,7 +107,7 @@ const DialogContent = React.forwardRef<
     >
       {children}
       <DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity data-[state=open]:bg-accent data-[state=open]:text-muted-foreground hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none">
-        <X className="h-4 w-4 text-secondary-foreground" />
+        <X className="h-4 w-4" />
         <span className="sr-only">Close</span>
       </DialogPrimitive.Close>
     </DialogPrimitive.Content>

View File

@ -63,6 +63,10 @@ import {
} from "react-icons/lu"; } from "react-icons/lu";
import { toast } from "sonner"; import { toast } from "sonner";
import useSWR from "swr"; import useSWR from "swr";
import SearchDetailDialog, {
SearchTab,
} from "@/components/overlay/detail/SearchDetailDialog";
import { SearchResult } from "@/types/search";
import { import {
ClassificationCard, ClassificationCard,
GroupedClassificationCard, GroupedClassificationCard,
@ -682,6 +686,11 @@ function TrainingGrid({
{ ids: eventIdsQuery }, { ids: eventIdsQuery },
]); ]);
// selection
const [selectedEvent, setSelectedEvent] = useState<Event>();
const [dialogTab, setDialogTab] = useState<SearchTab>("details");
if (attemptImages.length == 0) { if (attemptImages.length == 0) {
return ( return (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center"> <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
@ -692,29 +701,40 @@ function TrainingGrid({
} }
return ( return (
<div <>
ref={contentRef} <SearchDetailDialog
className={cn( search={
"scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12", selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
)} }
> page={dialogTab}
{Object.entries(faceGroups).map(([key, group]) => { setSimilarity={undefined}
const event = events?.find((ev) => ev.id == key); setSearchPage={setDialogTab}
return ( setSearch={(search) => setSelectedEvent(search as unknown as Event)}
<div key={key} className="aspect-square w-full"> setInputFocused={() => {}}
/>
<div
ref={contentRef}
className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
>
{Object.entries(faceGroups).map(([key, group]) => {
const event = events?.find((ev) => ev.id == key);
return (
<FaceAttemptGroup <FaceAttemptGroup
key={key}
config={config} config={config}
group={group} group={group}
event={event} event={event}
faceNames={faceNames} faceNames={faceNames}
selectedFaces={selectedFaces} selectedFaces={selectedFaces}
onClickFaces={onClickFaces} onClickFaces={onClickFaces}
onSelectEvent={setSelectedEvent}
onRefresh={onRefresh} onRefresh={onRefresh}
/> />
</div> );
); })}
})} </div>
</div> </>
); );
} }
@ -725,6 +745,7 @@ type FaceAttemptGroupProps = {
faceNames: string[]; faceNames: string[];
selectedFaces: string[]; selectedFaces: string[];
onClickFaces: (image: string[], ctrl: boolean) => void; onClickFaces: (image: string[], ctrl: boolean) => void;
onSelectEvent: (event: Event) => void;
onRefresh: () => void; onRefresh: () => void;
}; };
function FaceAttemptGroup({ function FaceAttemptGroup({
@ -734,6 +755,7 @@ function FaceAttemptGroup({
faceNames, faceNames,
selectedFaces, selectedFaces,
onClickFaces, onClickFaces,
onSelectEvent,
onRefresh, onRefresh,
}: FaceAttemptGroupProps) { }: FaceAttemptGroupProps) {
const { t } = useTranslation(["views/faceLibrary", "views/explore"]); const { t } = useTranslation(["views/faceLibrary", "views/explore"]);
@ -751,8 +773,8 @@ function FaceAttemptGroup({
const handleClickEvent = useCallback( const handleClickEvent = useCallback(
(meta: boolean) => { (meta: boolean) => {
if (!meta) { if (event && selectedFaces.length == 0 && !meta) {
return; onSelectEvent(event);
} else { } else {
const anySelected = const anySelected =
group.find((face) => selectedFaces.includes(face.filename)) != group.find((face) => selectedFaces.includes(face.filename)) !=
@ -776,7 +798,7 @@ function FaceAttemptGroup({
} }
} }
}, },
[group, selectedFaces, onClickFaces], [event, group, selectedFaces, onClickFaces, onSelectEvent],
); );
// api calls // api calls
@ -851,6 +873,7 @@ function FaceAttemptGroup({
handleClickEvent(true); handleClickEvent(true);
} }
}} }}
onSelectEvent={onSelectEvent}
> >
{(data) => ( {(data) => (
<> <>
@ -858,12 +881,12 @@ function FaceAttemptGroup({
faceNames={faceNames} faceNames={faceNames}
onTrainAttempt={(name) => onTrainAttempt(data, name)} onTrainAttempt={(name) => onTrainAttempt(data, name)}
> >
<AddFaceIcon className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" /> <AddFaceIcon className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
</FaceSelectionDialog> </FaceSelectionDialog>
<Tooltip> <Tooltip>
<TooltipTrigger> <TooltipTrigger>
<LuRefreshCw <LuRefreshCw
className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={() => onReprocess(data)} onClick={() => onReprocess(data)}
/> />
</TooltipTrigger> </TooltipTrigger>
@ -911,35 +934,36 @@ function FaceGrid({
<div <div
ref={contentRef} ref={contentRef}
className={cn( className={cn(
"scrollbar-container grid grid-cols-2 gap-2 overflow-y-scroll p-1 md:grid-cols-4 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12", "scrollbar-container gap-2 overflow-y-scroll p-1",
isDesktop ? "flex flex-wrap" : "grid grid-cols-2 md:grid-cols-4",
)} )}
> >
{sortedFaces.map((image: string) => ( {sortedFaces.map((image: string) => (
<div key={image} className="aspect-square w-full"> <ClassificationCard
<ClassificationCard className="gap-2 rounded-lg bg-card p-2"
data={{ key={image}
name: pageToggle, data={{
filename: image, name: pageToggle,
filepath: `clips/faces/${pageToggle}/${image}`, filename: image,
}} filepath: `clips/faces/${pageToggle}/${image}`,
selected={selectedFaces.includes(image)} }}
i18nLibrary="views/faceLibrary" selected={selectedFaces.includes(image)}
onClick={(data, meta) => onClickFaces([data.filename], meta)} i18nLibrary="views/faceLibrary"
> onClick={(data, meta) => onClickFaces([data.filename], meta)}
<Tooltip> >
<TooltipTrigger> <Tooltip>
<LuTrash2 <TooltipTrigger>
className="size-5 cursor-pointer text-gray-200 hover:text-danger" <LuTrash2
onClick={(e) => { className="size-5 cursor-pointer text-primary-variant hover:text-primary"
e.stopPropagation(); onClick={(e) => {
onDelete(pageToggle, [image]); e.stopPropagation();
}} onDelete(pageToggle, [image]);
/> }}
</TooltipTrigger> />
<TooltipContent>{t("button.deleteFaceAttempts")}</TooltipContent> </TooltipTrigger>
</Tooltip> <TooltipContent>{t("button.deleteFaceAttempts")}</TooltipContent>
</ClassificationCard> </Tooltip>
</div> </ClassificationCard>
))} ))}
</div> </div>
); );

View File

@@ -304,10 +304,10 @@ export type CustomClassificationModelConfig = {
   enabled: boolean;
   name: string;
   threshold: number;
-  object_config?: {
+  object_config: null | {
     objects: string[];
   };
-  state_config?: {
+  state_config: null | {
     cameras: {
       [cameraName: string]: {
         crop: [number, number, number, number];

View File

@ -1,39 +1,24 @@
import { baseUrl } from "@/api/baseUrl"; import { baseUrl } from "@/api/baseUrl";
import ClassificationModelWizardDialog from "@/components/classification/ClassificationModelWizardDialog";
import ActivityIndicator from "@/components/indicators/activity-indicator"; import ActivityIndicator from "@/components/indicators/activity-indicator";
import { ImageShadowOverlay } from "@/components/overlay/ImageShadowOverlay";
import { Button } from "@/components/ui/button";
import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
import useOptimisticState from "@/hooks/use-optimistic-state";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import { import {
CustomClassificationModelConfig, CustomClassificationModelConfig,
FrigateConfig, FrigateConfig,
} from "@/types/frigateConfig"; } from "@/types/frigateConfig";
import { useMemo, useState } from "react"; import { useMemo } from "react";
import { isMobile } from "react-device-detect"; import { isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next";
import { FaFolderPlus } from "react-icons/fa";
import useSWR from "swr"; import useSWR from "swr";
const allModelTypes = ["objects", "states"] as const;
type ModelType = (typeof allModelTypes)[number];
type ModelSelectionViewProps = { type ModelSelectionViewProps = {
onClick: (model: CustomClassificationModelConfig) => void; onClick: (model: CustomClassificationModelConfig) => void;
}; };
export default function ModelSelectionView({ export default function ModelSelectionView({
onClick, onClick,
}: ModelSelectionViewProps) { }: ModelSelectionViewProps) {
const { t } = useTranslation(["views/classificationModel"]);
const [page, setPage] = useState<ModelType>("objects");
const [pageToggle, setPageToggle] = useOptimisticState(page, setPage, 100);
const { data: config } = useSWR<FrigateConfig>("config", { const { data: config } = useSWR<FrigateConfig>("config", {
revalidateOnFocus: false, revalidateOnFocus: false,
}); });
// data
const classificationConfigs = useMemo(() => { const classificationConfigs = useMemo(() => {
if (!config) { if (!config) {
return []; return [];
@ -42,24 +27,6 @@ export default function ModelSelectionView({
return Object.values(config.classification.custom); return Object.values(config.classification.custom);
}, [config]); }, [config]);
const selectedClassificationConfigs = useMemo(() => {
return classificationConfigs.filter((model) => {
if (pageToggle == "objects" && model.object_config != undefined) {
return true;
}
if (pageToggle == "states" && model.state_config != undefined) {
return true;
}
return false;
});
}, [classificationConfigs, pageToggle]);
// new model wizard
const [newModel, setNewModel] = useState(false);
if (!config) { if (!config) {
return <ActivityIndicator />; return <ActivityIndicator />;
} }
@ -69,62 +36,14 @@ export default function ModelSelectionView({
} }
return ( return (
<div className="flex size-full flex-col p-2"> <div className="flex size-full gap-2 p-2">
<ClassificationModelWizardDialog {classificationConfigs.map((config) => (
open={newModel} <ModelCard
onClose={() => setNewModel(false)} key={config.name}
/> config={config}
onClick={() => onClick(config)}
<div className="flex h-12 w-full items-center justify-between"> />
<div className="flex flex-row items-center"> ))}
<ToggleGroup
className="*:rounded-md *:px-3 *:py-4"
type="single"
size="sm"
value={pageToggle}
onValueChange={(value: ModelType) => {
if (value) {
// Restrict viewer navigation
setPageToggle(value);
}
}}
>
{allModelTypes.map((item) => (
<ToggleGroupItem
key={item}
className={`flex scroll-mx-10 items-center justify-between gap-2 ${pageToggle == item ? "" : "*:text-muted-foreground"}`}
value={item}
data-nav-item={item}
aria-label={t("selectItem", {
ns: "common",
item: t("menu." + item),
})}
>
<div className="smart-capitalize">{t("menu." + item)}</div>
</ToggleGroupItem>
))}
</ToggleGroup>
</div>
<div className="flex flex-row items-center">
<Button
className="flex flex-row items-center gap-2"
variant="select"
onClick={() => setNewModel(true)}
>
<FaFolderPlus />
Add Classification
</Button>
</div>
</div>
<div className="flex size-full gap-2 p-2">
{selectedClassificationConfigs.map((config) => (
<ModelCard
key={config.name}
config={config}
onClick={() => onClick(config)}
/>
))}
</div>
</div> </div>
); );
} }
@@ -138,37 +57,46 @@ function ModelCard({ config, onClick }: ModelCardProps) {
     [id: string]: string[];
   }>(`classification/${config.name}/dataset`, { revalidateOnFocus: false });
-  const coverImage = useMemo(() => {
-    if (!dataset?.length) {
-      return undefined;
+  const coverImages = useMemo(() => {
+    if (!dataset) {
+      return {};
     }
-    const keys = Object.keys(dataset).filter((key) => key != "none");
-    const selectedKey = keys[0];
-    return {
-      name: selectedKey,
-      img: dataset[selectedKey][0],
-    };
+    const imageMap: { [key: string]: string } = {};
+    for (const [key, imageList] of Object.entries(dataset)) {
+      if (imageList.length > 0) {
+        imageMap[key] = imageList[0];
+      }
+    }
+    return imageMap;
   }, [dataset]);
   return (
     <div
       key={config.name}
       className={cn(
-        "relative size-60 cursor-pointer overflow-hidden rounded-lg",
+        "flex h-60 cursor-pointer flex-col items-center gap-2 rounded-lg bg-card p-2 outline outline-[3px]",
         "outline-transparent duration-500",
         isMobile && "w-full",
       )}
       onClick={() => onClick()}
     >
-      <img
-        className={cn("size-full", isMobile && "w-full")}
-        src={`${baseUrl}clips/${config.name}/dataset/${coverImage?.name}/${coverImage?.img}`}
-      />
-      <ImageShadowOverlay />
-      <div className="absolute bottom-2 left-3 text-lg smart-capitalize">
-        {config.name}
+      <div
+        className={cn("grid size-48 grid-cols-2 gap-2", isMobile && "w-full")}
+      >
+        {Object.entries(coverImages).map(([key, image]) => (
+          <img
+            key={key}
+            className=""
+            src={`${baseUrl}clips/${config.name}/dataset/${key}/${image}`}
+          />
+        ))}
+      </div>
+      <div className="smart-capitalize">
+        {config.name} ({config.state_config != null ? "State" : "Object"}{" "}
+        Classification)
       </div>
     </div>
   );

View File

@@ -44,7 +44,7 @@ import {
   useRef,
   useState,
 } from "react";
-import { isDesktop } from "react-device-detect";
+import { isDesktop, isMobile } from "react-device-detect";
 import { Trans, useTranslation } from "react-i18next";
 import { LuPencil, LuTrash2 } from "react-icons/lu";
 import { toast } from "sonner";
@@ -56,6 +56,7 @@ import { ModelState } from "@/types/ws";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { useNavigate } from "react-router-dom";
 import { IoMdArrowRoundBack } from "react-icons/io";
+import { MdAutoFixHigh } from "react-icons/md";
 import TrainFilterDialog from "@/components/overlay/dialog/TrainFilterDialog";
 import useApiFilter from "@/hooks/use-api-filter";
 import { ClassificationItemData, TrainFilter } from "@/types/classification";
@@ -68,7 +69,6 @@ import SearchDetailDialog, {
   SearchTab,
 } from "@/components/overlay/detail/SearchDetailDialog";
 import { SearchResult } from "@/types/search";
-import { HiSparkles } from "react-icons/hi";
 
 type ModelTrainingViewProps = {
   model: CustomClassificationModelConfig;
@@ -378,13 +378,12 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
             <Button
               className="flex justify-center gap-2"
               onClick={trainModel}
-              variant="select"
               disabled={modelState != "complete"}
             >
               {modelState == "training" ? (
                 <ActivityIndicator size={20} />
               ) : (
-                <HiSparkles className="text-white" />
+                <MdAutoFixHigh className="text-secondary-foreground" />
               )}
               {isDesktop && t("button.trainModel")}
             </Button>
@@ -632,36 +631,37 @@ function DatasetGrid({
   return (
     <div
       ref={contentRef}
-      className="scrollbar-container grid grid-cols-2 gap-2 overflow-y-scroll p-1 md:grid-cols-4 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12"
+      className="scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2"
     >
       {classData.map((image) => (
-        <div key={image} className="aspect-square w-full">
-          <ClassificationCard
-            data={{
-              filename: image,
-              filepath: `clips/${modelName}/dataset/${categoryName}/${image}`,
-              name: "",
-            }}
-            selected={selectedImages.includes(image)}
-            i18nLibrary="views/classificationModel"
-            onClick={(data, _) => onClickImages([data.filename], true)}
-          >
-            <Tooltip>
-              <TooltipTrigger>
-                <LuTrash2
-                  className="size-5 cursor-pointer text-primary-variant hover:text-danger"
-                  onClick={(e) => {
-                    e.stopPropagation();
-                    onDelete([image]);
-                  }}
-                />
-              </TooltipTrigger>
-              <TooltipContent>
-                {t("button.deleteClassificationAttempts")}
-              </TooltipContent>
-            </Tooltip>
-          </ClassificationCard>
-        </div>
+        <ClassificationCard
+          key={image}
+          className="w-60 gap-4 rounded-lg bg-card p-2"
+          imgClassName="size-auto"
+          data={{
+            filename: image,
+            filepath: `clips/${modelName}/dataset/${categoryName}/${image}`,
+            name: "",
+          }}
+          selected={selectedImages.includes(image)}
+          i18nLibrary="views/classificationModel"
+          onClick={(data, _) => onClickImages([data.filename], true)}
+        >
+          <Tooltip>
+            <TooltipTrigger>
+              <LuTrash2
+                className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                onClick={(e) => {
+                  e.stopPropagation();
+                  onDelete([image]);
+                }}
+              />
+            </TooltipTrigger>
+            <TooltipContent>
+              {t("button.deleteClassificationAttempts")}
+            </TooltipContent>
+          </Tooltip>
+        </ClassificationCard>
       ))}
     </div>
   );
@@ -757,6 +757,7 @@ function TrainGrid({
       selectedImages={selectedImages}
       onClickImages={onClickImages}
       onRefresh={onRefresh}
+      onDelete={onDelete}
     />
   );
 }
@@ -779,7 +780,10 @@ function StateTrainGrid({
   selectedImages,
   onClickImages,
   onRefresh,
+  onDelete,
 }: StateTrainGridProps) {
+  const { t } = useTranslation(["views/classificationModel"]);
   const threshold = useMemo(() => {
     return {
       recognition: model.threshold,
@@ -791,29 +795,45 @@
     <div
       ref={contentRef}
       className={cn(
-        "scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
+        "scrollbar-container flex flex-wrap gap-2 overflow-y-auto p-2",
+        isMobile && "justify-center",
       )}
     >
       {trainData?.map((data) => (
-        <div key={data.filename} className="aspect-square w-full">
-          <ClassificationCard
-            data={data}
-            threshold={threshold}
-            selected={selectedImages.includes(data.filename)}
-            i18nLibrary="views/classificationModel"
-            showArea={false}
-            onClick={(data, meta) => onClickImages([data.filename], meta)}
-          >
-            <ClassificationSelectionDialog
-              classes={classes}
-              modelName={model.name}
-              image={data.filename}
-              onRefresh={onRefresh}
-            >
-              <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" />
-            </ClassificationSelectionDialog>
-          </ClassificationCard>
-        </div>
+        <ClassificationCard
+          key={data.filename}
+          className="w-60 gap-2 rounded-lg bg-card p-2"
+          imgClassName="size-auto"
+          data={data}
+          threshold={threshold}
+          selected={selectedImages.includes(data.filename)}
+          i18nLibrary="views/classificationModel"
+          showArea={false}
+          onClick={(data, meta) => onClickImages([data.filename], meta)}
+        >
+          <ClassificationSelectionDialog
+            classes={classes}
+            modelName={model.name}
+            image={data.filename}
+            onRefresh={onRefresh}
+          >
+            <TbCategoryPlus className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
+          </ClassificationSelectionDialog>
+          <Tooltip>
+            <TooltipTrigger>
+              <LuTrash2
+                className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                onClick={(e) => {
+                  e.stopPropagation();
+                  onDelete([data.filename]);
+                }}
+              />
+            </TooltipTrigger>
+            <TooltipContent>
+              {t("button.deleteClassificationAttempts")}
+            </TooltipContent>
+          </Tooltip>
+        </ClassificationCard>
       ))}
     </div>
   );
@@ -827,6 +847,7 @@ type ObjectTrainGridProps = {
   selectedImages: string[];
   onClickImages: (images: string[], ctrl: boolean) => void;
   onRefresh: () => void;
+  onDelete: (ids: string[]) => void;
 };
 
 function ObjectTrainGrid({
   model,
@@ -836,7 +857,10 @@ function ObjectTrainGrid({
   selectedImages,
   onClickImages,
   onRefresh,
+  onDelete,
 }: ObjectTrainGridProps) {
+  const { t } = useTranslation(["views/classificationModel"]);
   // item data
   const groups = useMemo(() => {
@@ -926,43 +950,55 @@
     <div
       ref={contentRef}
-      className={cn(
-        "scrollbar-container grid grid-cols-2 gap-3 overflow-y-scroll p-1 sm:grid-cols-3 md:grid-cols-4 lg:grid-cols-6 xl:grid-cols-8 2xl:grid-cols-10 3xl:grid-cols-12",
-      )}
+      className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1"
     >
       {Object.entries(groups).map(([key, group]) => {
         const event = events?.find((ev) => ev.id == key);

         return (
-          <div key={key} className="aspect-square w-full">
-            <GroupedClassificationCard
-              group={group}
-              event={event}
-              threshold={threshold}
-              selectedItems={selectedImages}
-              i18nLibrary="views/classificationModel"
-              objectType={model.object_config?.objects?.at(0) ?? "Object"}
-              onClick={(data) => {
-                if (data) {
-                  onClickImages([data.filename], true);
-                } else {
-                  handleClickEvent(group, event, true);
-                }
-              }}
-            >
-              {(data) => (
-                <>
-                  <ClassificationSelectionDialog
-                    classes={classes}
-                    modelName={model.name}
-                    image={data.filename}
-                    onRefresh={onRefresh}
-                  >
-                    <TbCategoryPlus className="size-7 cursor-pointer p-1 text-gray-200 hover:rounded-full hover:bg-primary-foreground/40" />
-                  </ClassificationSelectionDialog>
-                </>
-              )}
-            </GroupedClassificationCard>
-          </div>
+          <GroupedClassificationCard
+            key={key}
+            group={group}
+            event={event}
+            threshold={threshold}
+            selectedItems={selectedImages}
+            i18nLibrary="views/classificationModel"
+            objectType={model.object_config?.objects?.at(0) ?? "Object"}
+            onClick={(data) => {
+              if (data) {
+                onClickImages([data.filename], true);
+              } else {
+                handleClickEvent(group, event, true);
+              }
+            }}
+            onSelectEvent={() => {}}
+          >
+            {(data) => (
+              <>
+                <ClassificationSelectionDialog
+                  classes={classes}
+                  modelName={model.name}
+                  image={data.filename}
+                  onRefresh={onRefresh}
+                >
+                  <TbCategoryPlus className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
+                </ClassificationSelectionDialog>
+                <Tooltip>
+                  <TooltipTrigger>
+                    <LuTrash2
+                      className="size-5 cursor-pointer text-primary-variant hover:text-primary"
+                      onClick={(e) => {
+                        e.stopPropagation();
+                        onDelete([data.filename]);
+                      }}
+                    />
+                  </TooltipTrigger>
+                  <TooltipContent>
+                    {t("button.deleteClassificationAttempts")}
+                  </TooltipContent>
+                </Tooltip>
+              </>
+            )}
+          </GroupedClassificationCard>
         );
       })}
     </div>