---
id: object_classification
title: Object Classification
---

Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.

## Minimum System Requirements

Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.

Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.

## Classes

Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.

For object classification:

- Define classes that represent different types or attributes of the detected object
- Examples: For `person` objects, classes might be `delivery_person`, `resident`, `stranger`
- Include a `none` class for objects that don't fit any specific category
- Keep classes visually distinct to improve accuracy

## Classification Type

- **Sub label**:

  - Applied to the object's `sub_label` field.
  - Ideal for a single, more specific identity or type.
  - Example: `cat` → `Leo`, `Charlie`, `None`.

- **Attribute**:

  - Added as metadata to the object (visible in `/events`): `<model_name>: <predicted_value>`.
  - Ideal when multiple attributes can coexist independently.
  - Example: detecting whether a person in a construction yard is wearing a helmet.

## Assignment Requirements

Sub labels and attributes are only assigned when both conditions are met:

1. **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured threshold (default: `0.8`).
2. **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.

This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
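
As a rough illustration of that rule, here is a minimal Python sketch, assuming classification attempts are available as `(predicted_class, confidence)` pairs. The `assign_class` helper is hypothetical and is not Frigate's actual implementation (which also tracks classification history and weighted scores, as shown in the debug logs described below).

```python
from collections import Counter

def assign_class(attempts: list[tuple[str, float]], threshold: float = 0.8) -> str | None:
    """Return the class to assign, or None when no assignment should be made.

    `attempts` holds (predicted_class, confidence) pairs gathered across frames.
    Illustrative sketch only -- not Frigate's actual implementation.
    """
    # Step 1 (threshold): only attempts at or above the threshold count.
    confident = [cls for cls, score in attempts if score >= threshold]

    # Step 2 (consensus): require at least 3 qualifying attempts...
    if len(confident) < 3:
        return None

    # ...and 60% agreement on a single class.
    top_class, votes = Counter(confident).most_common(1)[0]
    if votes / len(confident) < 0.6:
        return None

    # A consensus of "none" means the object matched no class, so assign nothing.
    return None if top_class == "none" else top_class


# Four confident attempts, three agreeing on "delivery_person" (75% >= 60%).
print(assign_class([
    ("delivery_person", 0.91),
    ("delivery_person", 0.85),
    ("visitor", 0.83),
    ("delivery_person", 0.88),
]))  # -> delivery_person
```

In this example, three of four confident attempts agree on `delivery_person` (75%), so the sub label or attribute would be assigned.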

## Example use cases

### Sub label

- **Known pet vs unknown**: For `dog` objects, set the sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.

### Attributes

- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.

## Configuration

Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.

```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```

## Training the model

Creating and training the model is done within the Frigate UI using the Classification page. The process consists of two steps:

### Step 1: Name and Define

Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.

### Step 2: Assign Training Examples

The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.

When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

## Improving the Model

- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation (see the sketch below).
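
As a hedged sketch of that kind of validation check (the data and class names below are invented for illustration, not produced by Frigate), you can sweep candidate thresholds over a few held-out predictions and see how many correct versus incorrect assignments each one would accept:

```python
# Hypothetical held-out results: (true_class, predicted_class, confidence).
validation = [
    ("delivery_person", "delivery_person", 0.93),
    ("delivery_person", "delivery_person", 0.81),
    ("resident", "delivery_person", 0.72),
    ("resident", "resident", 0.88),
    ("none", "delivery_person", 0.64),
    ("none", "none", 0.95),
]

for threshold in (0.6, 0.7, 0.8, 0.9):
    accepted = [(true, pred) for true, pred, score in validation if score >= threshold]
    correct = sum(1 for true, pred in accepted if true == pred)
    print(f"threshold={threshold}: {correct} correct / {len(accepted) - correct} incorrect accepted")
```

Here the move from `0.7` to `0.8` drops the last incorrect assignment while keeping the correct ones, which is the kind of trade-off to look for when tuning.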

## Debugging Classification Models

To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.

Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your logger configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.

```yaml
logger:
  default: info
  logs:
    frigate.data_processing.real_time.custom_classification: debug
```

The debug logs will show:

- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores