
---
id: object_classification
title: Object Classification
---

Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.

## Minimum System Requirements

Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.

Training the model briefly uses a high amount of system resources, for roughly 1-3 minutes per training run. On lower-power devices, training may take longer. When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.

## Classes

Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.

For object classification:

- Define classes that represent different types or attributes of the detected object
- Examples: for `person` objects, classes might be `delivery_person`, `resident`, `stranger`
- Include a `none` class for objects that don't fit any specific category
- Keep classes visually distinct to improve accuracy

## Classification Type

- Sub label:
  - Applied to the object's `sub_label` field.
  - Ideal for a single, more specific identity or type.
  - Example: for `cat`, classes might be `Leo`, `Charlie`, `None`.
- Attribute:
  - Added as metadata to the object (visible in `/events`): `<model_name>: <predicted_value>`.
  - Ideal when multiple attributes can coexist independently.
  - Example: detecting whether a person in a construction yard is wearing a helmet.
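For a preview of how each type is wired up (the full schema is covered under Configuration below), here is a minimal sketch of a sub label model for the cat example; the model name and threshold are illustrative, not prescriptive:

```yaml
classification:
  custom:
    # Sub label model: the predicted class (e.g. Leo, Charlie, or None)
    # is written to the tracked object's sub_label field.
    cat_identity:
      threshold: 0.8
      object_config:
        objects: [cat]
        classification_type: sub_label
```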

## Example use cases

### Sub label

- Known pet vs unknown: for `dog` objects, set the sub label to your pet's name (e.g., `buddy`) or `none` for others.
- Mail truck vs normal car: for `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- Delivery vs non-delivery person: for `person`, classify `delivery` vs `visitor` based on uniform/props.

### Attributes

- Backpack: for `person`, add attribute `backpack: yes/no`.
- Helmet: for `person` (worksite), add `helmet: yes/no`.
- Leash: for `dog`, add `leash: yes/no` (useful for park or yard rules).
- Ladder rack: for `truck`, add `ladder_rack: yes/no` to flag service vehicles.

## Configuration

Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.

```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```
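An attribute-type model uses the same shape. Here is a sketch for the helmet use case above; the model name and threshold are illustrative, and note that the classes themselves come from your training data rather than the config:

```yaml
classification:
  custom:
    helmet:
      threshold: 0.8
      object_config:
        objects: [person] # run on every tracked person
        classification_type: attribute # surfaces as "helmet: <predicted_value>" metadata
```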

## Training the model

Creating and training the model is done within the Frigate UI using the Classification page.

### Getting Started

When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

<!-- TODO: add this section once the UI is implemented. Explain the process of selecting objects and curating training examples. -->

## Improving the Model

- Problem framing: keep classes visually distinct and relevant to the chosen object types.
- Data collection: use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
- Preprocessing: ensure examples reflect object crops similar to Frigate's bounding boxes; keep the subject centered.
- Labels: keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- Threshold: tune `threshold` per model to reduce false assignments. Start at 0.8 and adjust based on validation.