Compare commits

..

2 Commits

Author SHA1 Message Date
Josh Hawkins
f316244495
Improve playback of videos in Tracking Details (#22301)
* prevent short hls segments by extending clip backwards

* clean up

* snap to keyframe instead of arbitrarily subtracting time

* formatting
2026-03-07 06:36:08 -07:00
Josh Hawkins
d1f3a807d3
call out avx2 requirement (#22305) 2026-03-07 06:18:17 -07:00
9 changed files with 97 additions and 16 deletions


@@ -11,7 +11,7 @@ Object classification models are lightweight and run very fast on CPU.
 Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
-A CPU with AVX instructions is required for training and inference.
+A CPU with AVX + AVX2 instructions is required for training and inference.
 ## Classes
@@ -27,7 +27,6 @@ For object classification:
### Classification Type
- **Sub label**:
- Applied to the objects `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.


@@ -11,7 +11,7 @@ State classification models are lightweight and run very fast on CPU.
 Training the model does briefly use a high amount of system resources for about 13 minutes per training run. On lower-power devices, training may take longer.
-A CPU with AVX instructions is required for training and inference.
+A CPU with AVX + AVX2 instructions is required for training and inference.
 ## Classes


@@ -32,7 +32,7 @@ All of these features run locally on your system.
 ## Minimum System Requirements
-A CPU with AVX instructions is required to run Face Recognition.
+A CPU with AVX + AVX2 instructions is required to run Face Recognition.
 The `small` model is optimized for efficiency and runs on the CPU, most CPUs should run the model efficiently.
@@ -145,17 +145,14 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
1. Ensure `person` is being _detected_. A `person` will automatically be scanned by Frigate for a face. Any detected faces will appear in the Recent Recognitions tab in the Frigate UI's Face Library.
If you are using a Frigate+ or `face` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `face` is being detected along with `person`.
- You may need to adjust the `min_score` for the `face` object if faces are not being detected.
If you are **not** using a Frigate+ or `face` detecting model:
- Check your `detect` stream resolution and ensure it is sufficiently high enough to capture face details on `person` objects.
- You may need to lower your `detection_threshold` if faces are not being detected.
2. Any detected faces will then be _recognized_.
- Make sure you have trained at least one face per the recommendations above.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).


@@ -30,7 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
 ## Minimum System Requirements
-License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM and a CPU with AVX instructions is required.
+License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM and a CPU with AVX + AVX2 instructions is required.
 ## Configuration
@@ -375,7 +375,6 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Start with a simplified LPR config.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
@@ -386,7 +385,6 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
@@ -399,18 +397,15 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
3. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
- Watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected.
- View MQTT messages for `frigate/events` to verify detected plates.
- You may need to adjust your `min_score` and/or `threshold` for the `license_plate` object if your plates are not being detected.
If you are **not** using a Frigate+ or `license_plate` detecting model:
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
4. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration).


@@ -13,7 +13,7 @@ Semantic Search is accessed via the _Explore_ view in the Frigate UI.
 Semantic Search works by running a large AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably or at all.
-A minimum of 8GB of RAM is required to use Semantic Search. A CPU with AVX instructions is required to run Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems.
+A minimum of 8GB of RAM is required to use Semantic Search. A CPU with AVX + AVX2 instructions is required to run Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems.
 For best performance, 16GB or more of RAM and a dedicated GPU are recommended.


@@ -26,7 +26,7 @@ I may earn a small commission for my endorsement, recommendation, testimonial, o
 ## Server
-My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU (with AVX instructions) and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral, Hailo, or other AI accelerators.
+My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU (with AVX + AVX2 instructions) and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral, Hailo, or other AI accelerators.
 Note that many of these mini PCs come with Windows pre-installed, and you will need to install Linux according to the [getting started guide](../guides/getting_started.md).


@@ -36,7 +36,7 @@ There are many different hardware options for object detection depending on prio
 ### CPU
-Frigate requires a CPU with AVX instructions. Most modern CPUs (post-2011) support AVX, but it is generally absent in low-power or budget-oriented processors, particularly older Intel Pentium, Celeron, and Atom-based chips. Specifically, Intel Celeron and Pentium models prior to the 2020 Tiger Lake generation typically lack AVX.
+Frigate requires a CPU with AVX + AVX2 instructions. Most modern CPUs (post-2011) support AVX and AVX2, but it is generally absent in low-power or budget-oriented processors, particularly older Intel Pentium, Celeron, and Atom-based chips. Specifically, Intel Celeron and Pentium models prior to the 2020 Tiger Lake generation typically lack AVX. Older Intel Xeon models may have AVX, but may lack AVX2.
 ### Storage
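The AVX vs. AVX2 distinction in the hunk above can be verified before installing. A minimal sketch for a Linux host, assuming `/proc/cpuinfo` is available; the helper below is illustrative and not part of Frigate:

```python
def has_cpu_flags(cpuinfo_text: str, *flags: str) -> bool:
    """Check whether every requested flag appears in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return all(f in present for f in flags)
    return False

# On a real Linux system:
# with open("/proc/cpuinfo") as f:
#     print(has_cpu_flags(f.read(), "avx", "avx2"))
```

A CPU that reports `avx` but not `avx2` (as some older Xeons do) would fail this check and cannot run the features gated on AVX2.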


@@ -50,10 +50,12 @@ from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
 from frigate.track.object_processing import TrackedObjectProcessor
 from frigate.util.file import get_event_thumbnail_bytes
 from frigate.util.image import get_image_from_recording
+from frigate.util.media import get_keyframe_before
 from frigate.util.time import get_dst_transitions
 
 logger = logging.getLogger(__name__)
 
 router = APIRouter(tags=[Tags.media])
@@ -900,6 +902,33 @@ async def vod_ts(
             if recording.end_time > end_ts:
                 duration -= int((recording.end_time - end_ts) * 1000)
 
+            # nginx-vod-module pushes clipFrom forward to the next keyframe,
+            # which can leave too few frames and produce an empty/unplayable
+            # segment. Snap clipFrom back to the preceding keyframe so the
+            # segment always starts with a decodable frame.
+            if "clipFrom" in clip:
+                keyframe_ms = get_keyframe_before(recording.path, clip["clipFrom"])
+
+                if keyframe_ms is not None:
+                    gained = clip["clipFrom"] - keyframe_ms
+                    clip["clipFrom"] = keyframe_ms
+                    duration += gained
+                    logger.debug(
+                        "VOD: snapped clipFrom to keyframe at %sms for %s, duration now %sms",
+                        keyframe_ms,
+                        recording.path,
+                        duration,
+                    )
+                else:
+                    # could not read keyframes, remove clipFrom to use full recording
+                    logger.debug(
+                        "VOD: no keyframe info for %s, removing clipFrom to use full recording",
+                        recording.path,
+                    )
+                    del clip["clipFrom"]
+                    duration = int(recording.duration * 1000)
+
+                    if recording.end_time > end_ts:
+                        duration -= int((recording.end_time - end_ts) * 1000)
+
             if duration < min_duration_ms:
                 # skip if the clip has no valid duration (too short to contain frames)
                 logger.debug(
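The arithmetic in the hunk above can be isolated as a pure function to see what the snap-back does. This is a sketch, not Frigate's actual code: `keyframes_ms` is a hypothetical pre-extracted list of keyframe timestamps, where the real code instead queries ffprobe through `get_keyframe_before`:

```python
def snap_clip_from(
    clip_from_ms: int, duration_ms: int, keyframes_ms: list[int]
) -> tuple[int, int]:
    """Move clipFrom back to the nearest keyframe at or before it,
    extending the duration by the amount gained."""
    candidates = [k for k in keyframes_ms if k <= clip_from_ms]
    if not candidates:
        # nothing to snap to; leave the clip untouched
        return clip_from_ms, duration_ms
    keyframe = max(candidates)
    gained = clip_from_ms - keyframe
    return keyframe, duration_ms + gained
```

For example, a clip starting at 4500 ms with keyframes at 0/2000/4000/6000 ms snaps back to 4000 ms and grows by the 500 ms gained, so the segment always opens on a decodable frame.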

frigate/util/media.py (new file, 61 lines)

@@ -0,0 +1,61 @@
"""Utilities for media file inspection."""

import subprocess as sp

from frigate.const import DEFAULT_FFMPEG_VERSION

FFPROBE_PATH = (
    f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffprobe"
    if DEFAULT_FFMPEG_VERSION
    else "ffprobe"
)


def get_keyframe_before(path: str, offset_ms: int) -> int | None:
    """Get the timestamp (ms) of the last keyframe at or before offset_ms.

    Uses ffprobe packet index to read keyframe positions from the mp4 file.
    Returns None if ffprobe fails or no keyframe is found before the offset.
    """
    try:
        result = sp.run(
            [
                FFPROBE_PATH,
                "-select_streams",
                "v:0",
                "-show_entries",
                "packet=pts_time,flags",
                "-of",
                "csv=p=0",
                "-loglevel",
                "error",
                path,
            ],
            capture_output=True,
            timeout=5,
        )
    except (sp.TimeoutExpired, FileNotFoundError):
        return None

    if result.returncode != 0:
        return None

    offset_s = offset_ms / 1000.0
    best_ms = None

    for line in result.stdout.decode().strip().splitlines():
        parts = line.strip().split(",")

        if len(parts) != 2:
            continue

        ts_str, flags = parts

        if "K" not in flags:
            continue

        try:
            ts = float(ts_str)
        except ValueError:
            continue

        if ts <= offset_s:
            best_ms = int(ts * 1000)
        else:
            break

    return best_ms