mirror of
https://github.com/blakeblackshear/frigate.git
synced 2025-12-06 13:34:13 +03:00
Miscellaneous Fixes (#20848)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* Fix filtering for classification
* Adjust prompt to account for response tokens
* Correctly return response for reprocess
* Use API response to update data instead of trying to re-parse all of the values
* Implement rename class api
* Fix model deletion / rename dialog
* Remove camera spatial context
* Catch error
This commit is contained in:
parent
c99ada8f6a
commit
d41ee4ff88
```diff
@@ -68,36 +68,6 @@ The mere presence of an unidentified person in private areas during late night h
-### Camera Spatial Context
-
-In addition to defining activity patterns, you can provide spatial context for specific cameras to help the LLM generate more accurate and descriptive titles and scene descriptions. The `camera_context` field allows you to describe physical features and locations that are outside the camera's field of view but are relevant for understanding the scene.
-
-**Important Guidelines:**
-
-- This context is used **only for descriptive purposes** to help the LLM write better titles and scene descriptions
-- It should describe **physical features and spatial relationships** (e.g., "front door is to the right", "driveway on the left")
-- It should **NOT** include subjective assessments or threat evaluations (e.g., "high-crime area")
-- Threat level determination remains based solely on observable actions defined in the activity patterns
-
-Example configuration:
-
-```yaml
-cameras:
-  front_door:
-    review:
-      genai:
-        enabled: true
-        camera_context: |
-          - Front door entrance is to the right of the frame
-          - Driveway and street are to the left
-          - Steps in the center lead from the sidewalk to the front door
-          - Garage is located beyond the left edge of the frame
-```
-
-This helps the LLM generate more natural descriptions like "Person approaching front door" instead of "Person walking toward right side of frame".
-
-The `camera_context` can be defined globally under `genai.review` and overridden per camera for specific spatial details.
-
 ### Image Source
 
 By default, review summaries use preview images (cached preview frames) which have a lower resolution but use fewer tokens per image. For better image quality and more detailed analysis, you can configure Frigate to extract frames directly from recordings at a higher resolution:
```
```diff
@@ -112,9 +112,18 @@ def reclassify_face(request: Request, body: dict = None):
     context: EmbeddingsContext = request.app.embeddings
     response = context.reprocess_face(training_file)
 
+    if not isinstance(response, dict):
+        return JSONResponse(
+            status_code=500,
+            content={
+                "success": False,
+                "message": "Could not process request.",
+            },
+        )
+
     return JSONResponse(
-        content=response,
-        status_code=200,
+        status_code=200 if response.get("success", True) else 400,
+        content=response,
     )
```
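The new guard above maps the reprocess result to an HTTP status. Outside of FastAPI, the decision can be sketched as a small standalone helper (`status_for` is a hypothetical name; the real handler wraps the result in a `JSONResponse`):

```python
def status_for(response):
    """Map a reprocess result to an HTTP status code.

    Non-dict results indicate an internal failure (500); dict results
    use their own "success" flag, defaulting to success when absent.
    """
    if not isinstance(response, dict):
        return 500
    return 200 if response.get("success", True) else 400


# Mirrors the handler's branching:
print(status_for(None))                                  # 500
print(status_for({"success": False, "message": "err"}))  # 400
print(status_for({"success": True, "face_name": "joe"})) # 200
```

Note that a dict without a `success` key still yields 200, matching the `response.get("success", True)` default in the diff.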
```diff
@@ -671,6 +680,97 @@ def delete_classification_dataset_images(
     )
 
 
+@router.put(
+    "/classification/{name}/dataset/{old_category}/rename",
+    response_model=GenericResponse,
+    dependencies=[Depends(require_role(["admin"]))],
+    summary="Rename a classification category",
+    description="""Renames a classification category for a given classification model.
+    The old category must exist and the new name must be valid. Returns a success message or an error if the name is invalid.""",
+)
+def rename_classification_category(
+    request: Request, name: str, old_category: str, body: dict = None
+):
+    config: FrigateConfig = request.app.frigate_config
+
+    if name not in config.classification.custom:
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": f"{name} is not a known classification model.",
+                }
+            ),
+            status_code=404,
+        )
+
+    json: dict[str, Any] = body or {}
+    new_category = sanitize_filename(json.get("new_category", ""))
+
+    if not new_category:
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": "New category name is required.",
+                }
+            ),
+            status_code=400,
+        )
+
+    old_folder = os.path.join(
+        CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(old_category)
+    )
+    new_folder = os.path.join(
+        CLIPS_DIR, sanitize_filename(name), "dataset", new_category
+    )
+
+    if not os.path.exists(old_folder):
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": f"Category {old_category} does not exist.",
+                }
+            ),
+            status_code=404,
+        )
+
+    if os.path.exists(new_folder):
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": f"Category {new_category} already exists.",
+                }
+            ),
+            status_code=400,
+        )
+
+    try:
+        os.rename(old_folder, new_folder)
+        return JSONResponse(
+            content=(
+                {
+                    "success": True,
+                    "message": f"Successfully renamed category to {new_category}.",
+                }
+            ),
+            status_code=200,
+        )
+    except Exception as e:
+        logger.error(f"Error renaming category: {e}")
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": "Failed to rename category",
+                }
+            ),
+            status_code=500,
+        )
+
+
 @router.post(
     "/classification/{name}/dataset/categorize",
     response_model=GenericResponse,
```
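The rename endpoint's filesystem logic reduces to a validation ladder over two directory paths. A condensed sketch (hypothetical `rename_category` helper; the model-existence check, `sanitize_filename`, and `JSONResponse` wrapping are omitted) illustrates the ordering of the checks:

```python
import os
import tempfile


def rename_category(base_dir: str, name: str, old_category: str, new_category: str):
    """Sketch of the endpoint's checks, in the same order as the diff.

    Returns (success, message, status_code) instead of a JSONResponse.
    Layout mirrors CLIPS_DIR/<model>/dataset/<category>.
    """
    if not new_category:
        return False, "New category name is required.", 400
    old_folder = os.path.join(base_dir, name, "dataset", old_category)
    new_folder = os.path.join(base_dir, name, "dataset", new_category)
    if not os.path.exists(old_folder):
        return False, f"Category {old_category} does not exist.", 404
    if os.path.exists(new_folder):
        return False, f"Category {new_category} already exists.", 400
    os.rename(old_folder, new_folder)
    return True, f"Successfully renamed category to {new_category}.", 200


# Exercise the ladder against a throwaway directory tree.
clips = tempfile.mkdtemp()
os.makedirs(os.path.join(clips, "birds", "dataset", "crow"))
first = rename_category(clips, "birds", "crow", "raven")   # succeeds
second = rename_category(clips, "birds", "crow", "raven")  # old folder is gone now
```

Checking the empty-name case before touching the filesystem means a missing `new_category` never produces a confusing "does not exist" error.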
```diff
@@ -140,10 +140,6 @@ Evaluate in this order:
 The mere presence of an unidentified person in private areas during late night hours is inherently suspicious and warrants human review, regardless of what activity they appear to be doing or how brief the sequence is.""",
         title="Custom activity context prompt defining normal and suspicious activity patterns for this property.",
     )
-    camera_context: str = Field(
-        default="",
-        title="Spatial context about the camera's field of view to help with descriptive accuracy. Should describe physical features and locations outside the frame.",
-    )
 
 
 class ReviewConfig(FrigateBaseModel):
```
```diff
@@ -90,7 +90,8 @@ class ReviewDescriptionProcessor(PostProcessorApi):
         pixels_per_image = width * height
         tokens_per_image = pixels_per_image / 1250
         prompt_tokens = 3500
-        available_tokens = context_size * 0.98 - prompt_tokens
+        response_tokens = 300
+        available_tokens = context_size - prompt_tokens - response_tokens
         max_frames = int(available_tokens / tokens_per_image)
 
         return min(max(max_frames, 3), 20)
```
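The updated budget reserves a fixed response allowance instead of the old 2% headroom. As a standalone sketch (hypothetical function name, same arithmetic as the diff):

```python
def max_review_frames(width: int, height: int, context_size: int) -> int:
    """Frame budget: reserve fixed prompt and response token counts,
    then divide what remains by the per-image token cost."""
    tokens_per_image = (width * height) / 1250
    prompt_tokens = 3500
    response_tokens = 300
    available_tokens = context_size - prompt_tokens - response_tokens
    max_frames = int(available_tokens / tokens_per_image)
    return min(max(max_frames, 3), 20)  # clamp to [3, 20]


# A 320x180 preview frame costs ~46 tokens, so an 8192-token context
# leaves room for far more than the 20-frame cap:
print(max_review_frames(320, 180, 8192))  # 20
```

The explicit `response_tokens = 300` keeps the model from running out of room mid-answer, which the commit message ("Adjust prompt to account for response tokens") calls out.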
```diff
@@ -458,7 +459,6 @@ def run_analysis(
         genai_config.preferred_language,
         genai_config.debug_save_thumbnails,
         genai_config.activity_context_prompt,
-        genai_config.camera_context,
     )
     review_inference_speed.update(datetime.datetime.now().timestamp() - start)
```
```diff
@@ -423,7 +423,10 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
         res = self.recognizer.classify(img)
 
         if not res:
-            return
+            return {
+                "message": "No face was recognized.",
+                "success": False,
+            }
 
         sub_label, score = res
```
```diff
@@ -442,6 +445,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
         )
         shutil.move(current_file, new_file)
 
+        return {
+            "message": f"Successfully reprocessed face. Result: {sub_label} (score: {score:.2f})",
+            "success": True,
+            "face_name": sub_label,
+            "score": score,
+        }
+
     def expire_object(self, object_id: str, camera: str):
         if object_id in self.person_face_history:
             self.person_face_history.pop(object_id)
```
```diff
@@ -45,7 +45,6 @@ class GenAIClient:
         preferred_language: str | None,
         debug_save: bool,
         activity_context_prompt: str,
-        camera_context: str = "",
     ) -> ReviewMetadata | None:
         """Generate a description for the review item activity."""
```
```diff
@@ -70,16 +69,6 @@ class GenAIClient:
         else:
             return "\n- (No objects detected)"
 
-        def get_camera_context_section() -> str:
-            if camera_context:
-                return f"""## Camera Spatial Context
-
-Use this spatial information when writing the title and scene description to provide more accurate context about where activity is occurring or where people/objects are moving to/from.
-
-{camera_context}"""
-            return ""
-
-        camera_context_section = get_camera_context_section()
         context_prompt = f"""
 Your task is to analyze the sequence of images ({len(thumbnails)} total) taken in chronological order from the perspective of the {review_data["camera"].replace("_", " ")} security camera.
```
```diff
@@ -87,8 +76,6 @@ Your task is to analyze the sequence of images ({len(thumbnails)} total) taken i
 
 {activity_context_prompt}
 
-{camera_context_section}
-
 ## Task Instructions
 
 Your task is to provide a clear, accurate description of the scene that:
```
```diff
@@ -113,7 +100,7 @@ When forming your description:
 ## Response Format
 
 Your response MUST be a flat JSON object with:
-- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. {"Use spatial context when available to make titles more meaningful." if camera_context_section else ""} When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
+- `title` (string): A concise, direct title that describes the primary action or event in the sequence, not just what you literally see. Use spatial context when available to make titles more meaningful. When multiple objects/actions are present, prioritize whichever is most prominent or occurs first. Use names from "Objects in Scene" based on what you visually observe. If you see both a name and an unidentified object of the same type but visually observe only one person/object, use ONLY the name. Examples: "Joe walking dog", "Person taking out trash", "Vehicle arriving in driveway", "Joe accessing vehicle", "Person leaving porch for driveway".
 - `scene` (string): A narrative description of what happens across the sequence from start to finish, in chronological order. Start by describing how the sequence begins, then describe the progression of events. **Describe all significant movements and actions in the order they occur.** For example, if a vehicle arrives and then a person exits, describe both actions sequentially. **Only describe actions you can actually observe happening in the frames provided.** Do not infer or assume actions that aren't visible (e.g., if you see someone walking but never see them sit, don't say they sat down). Include setting, detected objects, and their observable actions. Avoid speculation or filling in assumed behaviors. Your description should align with and support the threat level you assign.
 - `confidence` (float): 0-1 confidence in your analysis. Higher confidence when objects/actions are clearly visible and context is unambiguous. Lower confidence when the sequence is unclear, objects are partially obscured, or context is ambiguous.
 - `potential_threat_level` (integer): 0, 1, or 2 as defined in "Normal Activity Patterns for This Property" above. Your threat level must be consistent with your scene description and the guidance above.
```
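The prompt demands a flat JSON object with four fields. A small validator (a hypothetical helper for illustration, not Frigate's actual response parsing) makes the contract concrete:

```python
import json


def is_valid_review_metadata(payload: str) -> bool:
    """Check a model reply against the prompt's contract: a flat JSON
    object with title, scene, confidence in [0, 1], and a threat level
    of 0, 1, or 2."""
    try:
        obj = json.loads(payload)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(obj.get("title"), str)
        and isinstance(obj.get("scene"), str)
        and isinstance(obj.get("confidence"), (int, float))
        and 0 <= obj["confidence"] <= 1
        and obj.get("potential_threat_level") in (0, 1, 2)
    )


# A reply matching the contract (field values invented):
reply = (
    '{"title": "Person taking out trash", '
    '"scene": "A person carries a bag from the porch to the curb.", '
    '"confidence": 0.85, "potential_threat_level": 0}'
)
print(is_valid_review_metadata(reply))  # True
```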
```diff
@@ -67,9 +67,6 @@
     },
     "activity_context_prompt": {
       "label": "Custom activity context prompt defining normal activity patterns for this property."
-    },
-    "camera_context": {
-      "label": "Spatial context about the camera's field of view to help with descriptive accuracy. Should describe physical features and locations outside the frame. This is for spatial reference only and should NOT include subjective assessments."
     }
   }
 }
```
```diff
@@ -22,7 +22,8 @@
     "categorizedImage": "Successfully Classified Image",
     "trainedModel": "Successfully trained model.",
     "trainingModel": "Successfully started model training.",
-    "updatedModel": "Successfully updated model configuration"
+    "updatedModel": "Successfully updated model configuration",
+    "renamedCategory": "Successfully renamed class to {{name}}"
   },
   "error": {
     "deleteImageFailed": "Failed to delete: {{errorMessage}}",
@@ -30,7 +31,8 @@
     "deleteModelFailed": "Failed to delete model: {{errorMessage}}",
     "categorizeFailed": "Failed to categorize image: {{errorMessage}}",
     "trainingFailed": "Failed to start model training: {{errorMessage}}",
-    "updateModelFailed": "Failed to update model: {{errorMessage}}"
+    "updateModelFailed": "Failed to update model: {{errorMessage}}",
+    "renameCategoryFailed": "Failed to rename class: {{errorMessage}}"
   }
 },
 "deleteCategory": {
```
```diff
@@ -75,7 +75,7 @@
     "deletedName_other": "{{count}} faces have been successfully deleted.",
     "renamedFace": "Successfully renamed face to {{name}}",
     "trainedFace": "Successfully trained face.",
-    "updatedFaceScore": "Successfully updated face score."
+    "updatedFaceScore": "Successfully updated face score to {{name}} ({{score}})."
   },
   "error": {
     "uploadingImageFailed": "Failed to upload image: {{errorMessage}}",
```
```diff
@@ -622,7 +622,15 @@ type TrainingGridProps = {
   faceNames: string[];
   selectedFaces: string[];
   onClickFaces: (images: string[], ctrl: boolean) => void;
-  onRefresh: () => void;
+  onRefresh: (
+    data?:
+      | FaceLibraryData
+      | Promise<FaceLibraryData>
+      | ((
+          currentData: FaceLibraryData | undefined,
+        ) => FaceLibraryData | undefined),
+    opts?: boolean | { revalidate?: boolean },
+  ) => Promise<FaceLibraryData | undefined>;
 };
 
 function TrainingGrid({
   config,
```
```diff
@@ -726,7 +734,15 @@ type FaceAttemptGroupProps = {
   faceNames: string[];
   selectedFaces: string[];
   onClickFaces: (image: string[], ctrl: boolean) => void;
-  onRefresh: () => void;
+  onRefresh: (
+    data?:
+      | FaceLibraryData
+      | Promise<FaceLibraryData>
+      | ((
+          currentData: FaceLibraryData | undefined,
+        ) => FaceLibraryData | undefined),
+    opts?: boolean | { revalidate?: boolean },
+  ) => Promise<FaceLibraryData | undefined>;
 };
 
 function FaceAttemptGroup({
   config,
```
```diff
@@ -814,11 +830,44 @@ function FaceAttemptGroup({
       axios
         .post(`/faces/reprocess`, { training_file: data.filename })
         .then((resp) => {
-          if (resp.status == 200) {
-            toast.success(t("toast.success.updatedFaceScore"), {
-              position: "top-center",
-            });
-            onRefresh();
+          if (resp.status == 200 && resp.data?.success) {
+            const { face_name, score } = resp.data;
+            const oldFilename = data.filename;
+            const parts = oldFilename.split("-");
+            const newFilename = `${parts[0]}-${parts[1]}-${parts[2]}-${face_name}-${score}.webp`;
+
+            onRefresh(
+              (currentData: FaceLibraryData | undefined) => {
+                if (!currentData?.train) return currentData;
+
+                return {
+                  ...currentData,
+                  train: currentData.train.map((filename: string) =>
+                    filename === oldFilename ? newFilename : filename,
+                  ),
+                };
+              },
+              { revalidate: true },
+            );
+
+            toast.success(
+              t("toast.success.updatedFaceScore", {
+                name: face_name,
+                score: score.toFixed(2),
+              }),
+              {
+                position: "top-center",
+              },
+            );
+          } else if (resp.data?.success === false) {
+            // Handle case where API returns success: false
+            const errorMessage = resp.data?.message || "Unknown error";
+            toast.error(
+              t("toast.error.updateFaceScoreFailed", { errorMessage }),
+              {
+                position: "top-center",
+              },
+            );
           }
         })
         .catch((error) => {
```
```diff
@@ -187,6 +187,37 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
       null,
     );
 
+  const onRename = useCallback(
+    (old_name: string, new_name: string) => {
+      axios
+        .put(`/classification/${model.name}/dataset/${old_name}/rename`, {
+          new_category: new_name,
+        })
+        .then((resp) => {
+          if (resp.status == 200) {
+            toast.success(
+              t("toast.success.renamedCategory", { name: new_name }),
+              {
+                position: "top-center",
+              },
+            );
+            setPageToggle(new_name);
+            refreshDataset();
+          }
+        })
+        .catch((error) => {
+          const errorMessage =
+            error.response?.data?.message ||
+            error.response?.data?.detail ||
+            "Unknown error";
+          toast.error(t("toast.error.renameCategoryFailed", { errorMessage }), {
+            position: "top-center",
+          });
+        });
+    },
+    [model, setPageToggle, refreshDataset, t],
+  );
+
   const onDelete = useCallback(
     (ids: string[], isName: boolean = false, category?: string) => {
       const targetCategory = category || pageToggle;
```
```diff
@@ -354,7 +385,7 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
           trainImages={trainImages || []}
           setPageToggle={setPageToggle}
           onDelete={onDelete}
-          onRename={() => {}}
+          onRename={onRename}
         />
       </div>
     )}
```
```diff
@@ -534,7 +565,7 @@ function LibrarySelector({
         regexErrorMessage={t("description.invalidName")}
       />
 
-      <DropdownMenu>
+      <DropdownMenu modal={false}>
         <DropdownMenuTrigger asChild>
           <Button className="flex justify-between smart-capitalize">
             {pageTitle}
```
```diff
@@ -585,48 +616,50 @@
                     ({dataset?.[id].length})
                   </span>
                 </div>
-                <div className="flex gap-0.5">
-                  <Tooltip>
-                    <TooltipTrigger asChild>
-                      <Button
-                        variant="ghost"
-                        size="icon"
-                        className="size-7 lg:opacity-0 lg:transition-opacity lg:group-hover:opacity-100"
-                        onClick={(e) => {
-                          e.stopPropagation();
-                          setRenameClass(id);
-                        }}
-                      >
-                        <LuPencil className="size-4 text-primary" />
-                      </Button>
-                    </TooltipTrigger>
-                    <TooltipPortal>
-                      <TooltipContent>
-                        {t("button.renameCategory")}
-                      </TooltipContent>
-                    </TooltipPortal>
-                  </Tooltip>
-                  <Tooltip>
-                    <TooltipTrigger asChild>
-                      <Button
-                        variant="ghost"
-                        size="icon"
-                        className="size-7 lg:opacity-0 lg:transition-opacity lg:group-hover:opacity-100"
-                        onClick={(e) => {
-                          e.stopPropagation();
-                          setConfirmDelete(id);
-                        }}
-                      >
-                        <LuTrash2 className="size-4 text-destructive" />
-                      </Button>
-                    </TooltipTrigger>
-                    <TooltipPortal>
-                      <TooltipContent>
-                        {t("button.deleteCategory")}
-                      </TooltipContent>
-                    </TooltipPortal>
-                  </Tooltip>
-                </div>
+                {id != "none" && (
+                  <div className="flex gap-0.5">
+                    <Tooltip>
+                      <TooltipTrigger asChild>
+                        <Button
+                          variant="ghost"
+                          size="icon"
+                          className="size-7 lg:opacity-0 lg:transition-opacity lg:group-hover:opacity-100"
+                          onClick={(e) => {
+                            e.stopPropagation();
+                            setRenameClass(id);
+                          }}
+                        >
+                          <LuPencil className="size-4 text-primary" />
+                        </Button>
+                      </TooltipTrigger>
+                      <TooltipPortal>
+                        <TooltipContent>
+                          {t("button.renameCategory")}
+                        </TooltipContent>
+                      </TooltipPortal>
+                    </Tooltip>
+                    <Tooltip>
+                      <TooltipTrigger asChild>
+                        <Button
+                          variant="ghost"
+                          size="icon"
+                          className="size-7 lg:opacity-0 lg:transition-opacity lg:group-hover:opacity-100"
+                          onClick={(e) => {
+                            e.stopPropagation();
+                            setConfirmDelete(id);
+                          }}
+                        >
+                          <LuTrash2 className="size-4 text-destructive" />
+                        </Button>
+                      </TooltipTrigger>
+                      <TooltipPortal>
+                        <TooltipContent>
+                          {t("button.deleteCategory")}
+                        </TooltipContent>
+                      </TooltipPortal>
+                    </Tooltip>
+                  </div>
+                )}
               </DropdownMenuItem>
             ))}
           </DropdownMenuContent>
```
```diff
@@ -745,17 +778,11 @@ function TrainGrid({
         return false;
       }
 
-      if (
-        trainFilter.min_score &&
-        trainFilter.min_score > data.score / 100.0
-      ) {
+      if (trainFilter.min_score && trainFilter.min_score > data.score) {
         return false;
       }
 
-      if (
-        trainFilter.max_score &&
-        trainFilter.max_score < data.score / 100.0
-      ) {
+      if (trainFilter.max_score && trainFilter.max_score < data.score) {
         return false;
       }
 
```