diff --git a/docs/docs/configuration/custom_classification/object_classification.md b/docs/docs/configuration/custom_classification/object_classification.md
index a75aae31a..9465716b7 100644
--- a/docs/docs/configuration/custom_classification/object_classification.md
+++ b/docs/docs/configuration/custom_classification/object_classification.md
@@ -67,7 +67,7 @@ When choosing which objects to classify, start with a small number of visually d
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
-- **Data collection**: Use the model’s Train tab to gather balanced examples across times of day, weather, and distances.
+- **Data collection**: Use the model’s Recent Classifications tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate’s boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
diff --git a/docs/docs/configuration/custom_classification/state_classification.md b/docs/docs/configuration/custom_classification/state_classification.md
index ec38ea696..afc79eff8 100644
--- a/docs/docs/configuration/custom_classification/state_classification.md
+++ b/docs/docs/configuration/custom_classification/state_classification.md
@@ -49,4 +49,4 @@ When choosing a portion of the camera frame for state classification, it is impo
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
-- **Data collection**: Use the model’s Train tab to gather balanced examples across times of day and weather.
+- **Data collection**: Use the model’s Recent Classifications tab to gather balanced examples across times of day and weather.
diff --git a/docs/docs/configuration/face_recognition.md b/docs/docs/configuration/face_recognition.md
index d14946eaf..129669e7f 100644
--- a/docs/docs/configuration/face_recognition.md
+++ b/docs/docs/configuration/face_recognition.md
@@ -70,7 +70,7 @@ Fine-tune face recognition with these optional parameters at the global level of
- `min_faces`: Min face recognitions for the sub label to be applied to the person object.
- Default: `1`
- `save_attempts`: Number of images of recognized faces to save for training.
- - Default: `100`.
+ - Default: `200`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
@@ -114,9 +114,9 @@ When choosing images to include in the face training set it is recommended to al
:::
-### Understanding the Train Tab
+### Understanding the Recent Recognitions Tab
-The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+The Recent Recognitions tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
@@ -140,7 +140,7 @@ Once front-facing images are performing well, start choosing slightly off-angle
Start with the [Usage](#usage) section and re-read the [Model Requirements](#model-requirements) above.
-1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Train tab in the Frigate UI's Face Library.
+1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
@@ -186,7 +186,7 @@ Avoid training on images that already score highly, as this can lead to over-fit
No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy.
-### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
+### I see scores above the threshold in the Recent Recognitions tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label is only assigned when a person is consistently recognized with confidence. This avoids cases where a single high-confidence recognition would throw off the results.
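The area-weighted scoring described above can be illustrated with a simplified sketch. This is not Frigate's actual weighting logic (which runs continually across attempts); it only demonstrates why one high-confidence hit on a small face does not dominate the result:

```python
# Simplified illustration of area-weighted face recognition scoring
# (Frigate's real implementation may differ).

def weighted_recognition(attempts: list[tuple[float, float]], threshold: float = 0.8) -> bool:
    """attempts: (confidence, face_area_px) pairs for one person object.

    The sub label is only applied when the area-weighted average
    confidence clears the threshold, so a single lucky score on a
    tiny, blurry face cannot dominate larger, clearer faces.
    """
    total_area = sum(area for _, area in attempts)
    if total_area == 0:
        return False
    weighted = sum(conf * area for conf, area in attempts) / total_area
    return weighted >= threshold

# One high score on a small face is outweighed by low scores on large faces.
print(weighted_recognition([(0.95, 500), (0.40, 5000), (0.45, 4800)]))  # False
```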
diff --git a/docs/docs/configuration/reference.md b/docs/docs/configuration/reference.md
index 3d963a5bd..663192c06 100644
--- a/docs/docs/configuration/reference.md
+++ b/docs/docs/configuration/reference.md
@@ -630,7 +630,7 @@ face_recognition:
# Optional: Min face recognitions for the sub label to be applied to the person object (default: shown below)
min_faces: 1
# Optional: Number of images of recognized faces to save for training (default: shown below)
- save_attempts: 100
+ save_attempts: 200
# Optional: Apply a blur quality filter to adjust confidence based on the blur level of the image (default: shown below)
blur_confidence_filter: True
# Optional: Set the model size used for face recognition. (default: shown below)
@@ -671,20 +671,18 @@ lpr:
# Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {}
-# Optional: Configuration for AI generated tracked object descriptions
+# Optional: Configuration for AI / LLM provider
# WARNING: Depending on the provider, this will send thumbnails over the internet
-# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
-# the camera level (enabled: False) to enhance privacy for indoor cameras.
+# to Google or OpenAI's LLMs to generate descriptions. GenAI features can be configured at
+# the camera level to enhance privacy for indoor cameras.
genai:
- # Optional: Enable AI description generation (default: shown below)
- enabled: False
- # Required if enabled: Provider must be one of ollama, gemini, or openai
+ # Required: Provider must be one of ollama, gemini, or openai
provider: ollama
# Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
- # Required if enabled: The model to use with the provider.
+ # Required: The model to use with the provider.
model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
diff --git a/frigate/config/classification.py b/frigate/config/classification.py
index 56126e4d4..5b6cb8cec 100644
--- a/frigate/config/classification.py
+++ b/frigate/config/classification.py
@@ -69,7 +69,7 @@ class BirdClassificationConfig(FrigateBaseModel):
class CustomClassificationStateCameraConfig(FrigateBaseModel):
- crop: list[int, int, int, int] = Field(
+ crop: list[float, float, float, float] = Field(
title="Crop of image frame on this camera to run classification on."
)
@@ -197,7 +197,9 @@ class FaceRecognitionConfig(FrigateBaseModel):
title="Min face recognitions for the sub label to be applied to the person object.",
)
save_attempts: int = Field(
- default=100, ge=0, title="Number of face attempts to save in the train tab."
+ default=200,
+ ge=0,
+ title="Number of face attempts to save in the recent recognitions tab.",
)
blur_confidence_filter: bool = Field(
default=True, title="Apply blur quality filter to face confidence."
diff --git a/frigate/data_processing/real_time/custom_classification.py b/frigate/data_processing/real_time/custom_classification.py
index e5e4fc90e..1fb9dfc97 100644
--- a/frigate/data_processing/real_time/custom_classification.py
+++ b/frigate/data_processing/real_time/custom_classification.py
@@ -96,10 +96,10 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
camera_config = self.model_config.state_config.cameras[camera]
crop = [
- camera_config.crop[0],
- camera_config.crop[1],
- camera_config.crop[2],
- camera_config.crop[3],
+ camera_config.crop[0] * self.config.cameras[camera].detect.width,
+ camera_config.crop[1] * self.config.cameras[camera].detect.height,
+ camera_config.crop[2] * self.config.cameras[camera].detect.width,
+ camera_config.crop[3] * self.config.cameras[camera].detect.height,
]
should_run = False
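The change above switches `crop` from absolute pixel coordinates to fractions of the frame, scaled by the camera's detect resolution. A minimal sketch of that conversion (function and variable names are illustrative, not Frigate's API):

```python
# Sketch of the fractional-crop conversion introduced in the diff:
# crop values are stored as fractions (0.0-1.0) of the frame and
# scaled to pixels using the camera's detect resolution.

def crop_to_pixels(crop: list[float], width: int, height: int) -> list[int]:
    """Convert [x1, y1, x2, y2] fractions into integer pixel coordinates."""
    x1, y1, x2, y2 = crop
    return [int(x1 * width), int(y1 * height), int(x2 * width), int(y2 * height)]

# A crop covering the center of a 1280x720 detect stream.
print(crop_to_pixels([0.25, 0.25, 0.75, 0.75], 1280, 720))  # [320, 180, 960, 540]
```

Fractional crops keep the configuration independent of the detect resolution, so changing a camera's detect size does not invalidate existing crop regions.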
diff --git a/web/public/locales/en/config/face_recognition.json b/web/public/locales/en/config/face_recognition.json
index ec6f8929b..705d75468 100644
--- a/web/public/locales/en/config/face_recognition.json
+++ b/web/public/locales/en/config/face_recognition.json
@@ -23,7 +23,7 @@
"label": "Min face recognitions for the sub label to be applied to the person object."
},
"save_attempts": {
- "label": "Number of face attempts to save in the train tab."
+ "label": "Number of face attempts to save in the recent recognitions tab."
},
"blur_confidence_filter": {
"label": "Apply blur quality filter to face confidence."
diff --git a/web/public/locales/en/views/classificationModel.json b/web/public/locales/en/views/classificationModel.json
index 47b2b13bf..dcfc5a1b2 100644
--- a/web/public/locales/en/views/classificationModel.json
+++ b/web/public/locales/en/views/classificationModel.json
@@ -41,13 +41,17 @@
"invalidName": "Invalid name. Names can only include letters, numbers, spaces, apostrophes, underscores, and hyphens."
},
"train": {
- "title": "Train",
- "aria": "Select Train"
+ "title": "Recent Classifications",
+ "aria": "Select Recent Classifications"
},
"categories": "Classes",
"createCategory": {
"new": "Create New Class"
},
"categorizeImageAs": "Classify Image As:",
- "categorizeImage": "Classify Image"
+ "categorizeImage": "Classify Image",
+ "wizard": {
+ "title": "Create New Classification",
+ "description": "Create a new state or object classification model."
+ }
}
diff --git a/web/public/locales/en/views/faceLibrary.json b/web/public/locales/en/views/faceLibrary.json
index 3a0804511..6febf85f0 100644
--- a/web/public/locales/en/views/faceLibrary.json
+++ b/web/public/locales/en/views/faceLibrary.json
@@ -22,7 +22,7 @@
"title": "Create Collection",
"desc": "Create a new collection",
"new": "Create New Face",
-    "nextSteps": "To build a strong foundation:\nUse the Train tab to select and train on images for each detected person.\nFocus on straight-on images for best results; avoid training images that capture faces at an angle.\n"
+    "nextSteps": "To build a strong foundation:\nUse the Recent Recognitions tab to select and train on images for each detected person.\nFocus on straight-on images for best results; avoid training images that capture faces at an angle.\n"