
---
id: state_classification
title: State Classification
---

State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.

## Minimum System Requirements

State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.

Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.

## Classes

Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.

For state classification:

- Define classes that represent mutually exclusive states
- Examples: `open` and `closed` for a garage door, `on` and `off` for lights
- Use at least 2 classes (typically binary states work best)
- Keep class names clear and descriptive

## Example use cases

- **Door state**: Detect if a garage or front door is open vs. closed.
- **Gate state**: Track if a driveway gate is open or closed.
- **Trash day**: Bins at the curb vs. no bins present.
- **Pool cover**: Cover on vs. off.

## Configuration

State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.

```yaml
classification:
  custom:
    front_door:
      threshold: 0.8
      state_config:
        motion: true # run when motion overlaps the crop
        interval: 10 # also run every N seconds (optional)
        cameras:
          front:
            crop: [0, 180, 220, 400]
```
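Assuming the `crop` values are `[x1, y1, x2, y2]` in pixel coordinates on the camera frame (verify against your camera's resolution), the configured region is selected like this. The helper below is a hypothetical sketch using a plain list of pixel rows, not Frigate's implementation:

```python
def crop_region(frame, crop):
    """Extract the configured crop from a frame given as a list of pixel rows.

    Assumes crop is [x1, y1, x2, y2] in pixel coordinates.
    """
    x1, y1, x2, y2 = crop
    return [row[x1:x2] for row in frame[y1:y2]]

# Stand-in for a 640x480 frame; the example config's crop yields a 220x220 region.
frame = [[0] * 640 for _ in range(480)]
region = crop_region(frame, [0, 180, 220, 400])
```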

## Training the model

Creating and training the model is done within the Frigate UI using the Classification page. The process consists of three steps:

### Step 1: Name and Define

Enter a name for your model and define at least two mutually exclusive classes (states). For example, `open` and `closed` for a door, or `on` and `off` for lights.

### Step 2: Select the Crop Area

Choose one or more cameras and draw a rectangle over the area of interest for each camera. Keep the crop tight around the region you want to classify so the model doesn't learn from unrelated parts of the frame. You can drag and resize the rectangle to adjust the crop area.

### Step 3: Assign Training Examples

The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.

**Important:** All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. Training on such images helps the model correctly identify the state even when these conditions occur during inference.

Once all images are assigned, training will begin automatically.

## Improving the Model

- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather conditions.