---
id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model briefly uses a high amount of system resources, typically for about 1–3 minutes per training run. On lower-power devices, training may take longer.
## Classes
Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
For object classification:
- Define classes that represent different types or attributes of the detected object
- Examples: For `person` objects, classes might be `delivery_person`, `resident`, `stranger`
- Include a `none` class for objects that don't fit any specific category
- Keep classes visually distinct to improve accuracy
## Classification Type

- **Sub label**:
  - Applied to the object's `sub_label` field.
  - Ideal for a single, more specific identity or type.
  - Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
  - Added as metadata to the object (visible in `/events`): `<model_name>: <predicted_value>`.
  - Ideal when multiple attributes can coexist independently.
  - Example: Detecting whether a `person` in a construction yard is wearing a helmet (see the configuration sketch below).
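As a rough illustration of how the two types differ in configuration (using the schema described in the Configuration section below), the sketch here defines one sub label model and one attribute model. The model names `cat_identity` and `helmet` are hypothetical.

```yaml
classification:
  custom:
    cat_identity: # hypothetical sub label model: names the cat (e.g., Leo, Charlie)
      threshold: 0.8
      object_config:
        objects: [cat]
        classification_type: sub_label # winning class is written to the object's sub_label field
    helmet: # hypothetical attribute model: records whether a person is wearing a helmet
      threshold: 0.8
      object_config:
        objects: [person]
        classification_type: attribute # prediction is attached as <model_name>: <predicted_value> metadata
```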
## Assignment Requirements
Sub labels and attributes are only assigned when both conditions are met:
- **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured `threshold` (default: `0.8`).
- **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.
This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
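If you see occasional incorrect assignments, one option is to raise the per-model `threshold` above the `0.8` default so that only higher-confidence attempts satisfy the threshold condition. A minimal sketch, reusing the `dog` model name from the Configuration section below:

```yaml
classification:
  custom:
    dog:
      threshold: 0.9 # stricter than the 0.8 default; other model settings omitted for brevity
```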
## Example use cases

### Sub label

- **Known pet vs unknown**: For `dog` objects, set the sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals (see the sketch after these lists).
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
### Attributes

- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.
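As a minimal sketch of how one of these use cases could be declared, the mail truck example above might look like the following, reusing the schema shown in the Configuration section below. The model name `mail_truck` is a placeholder, and the classes themselves (`mail_truck`, `car`, `none`) are defined in the UI during training rather than in the config.

```yaml
classification:
  custom:
    mail_truck: # hypothetical model name; classes are defined in the UI during training
      threshold: 0.8
      object_config:
        objects: [car] # run this classifier on tracked car objects
        classification_type: sub_label # the winning class becomes the car's sub label
```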
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```
## Training the model
Creating and training the model is done within the Frigate UI using the Classification page. The process consists of two steps:
### Step 1: Name and Define

Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
### Step 2: Assign Training Examples

The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
## Improving the Model

- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.