Miscellaneous Fixes (0.17 beta) (#21355)

* remove footer messages and add update topic to motion tuner view

restart after changing values is no longer required

* add cache key and activity indicator for loading classification wizard images

* Always mark model as untrained when a class name is changed

* clarify object classification docs

* add debug logs for individual lpr replace_rules

* update memray docs

* memray tweaks

* Don't fail for audio transcription when semantic search is not enabled

* Fix incorrect mismatch between object and sub label

* Check if the video is currently playing when deciding to seek due to misalignment

* Refactor timeline event handling to allow multiple timeline entries per update

* Check if zones have actually changed (not just count) for event state update

* show event icon on mobile

* move div inside conditional

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Josh Hawkins, 2025-12-19 18:59:26 -06:00 (committed by GitHub)
parent e636449d56
commit 60052e5f9f
11 changed files with 139 additions and 91 deletions

View File

@@ -95,6 +95,8 @@ The system will automatically generate example images from detected objects matc
 When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
 
+If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.
+
 ### Improving the Model
 
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.

View File

@@ -9,8 +9,20 @@ Frigate includes built-in memory profiling using [memray](https://bloomberg.gith
 Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:
 
+```yaml
+# docker-compose example
+services:
+  frigate:
+    ...
+    environment:
+      - FRIGATE_MEMRAY_MODULES=frigate.embeddings,frigate.capture
+```
+
 ```bash
-export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
+# docker run example
+docker run -e FRIGATE_MEMRAY_MODULES="frigate.embeddings" \
+  ...
+  --name frigate <frigate_image>
 ```
 
@@ -24,11 +36,12 @@ Frigate processes are named using a module-based naming scheme. Common module na
 - `frigate.output` - Output processing
 - `frigate.audio_manager` - Audio processing
 - `frigate.embeddings` - Embeddings processing
+- `frigate.embeddings_manager` - Embeddings manager
 
 You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:
 
 ```bash
-export FRIGATE_MEMRAY_MODULES="frigate.capture:front_door"
+FRIGATE_MEMRAY_MODULES=frigate.capture:front_door
 ```
 
 When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.
@@ -55,11 +68,20 @@ After a process exits normally, you'll find HTML reports in `/config/memray_repo
 If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:
 
+- Run `memray` inside the Frigate container:
+
 ```bash
-memray flamegraph /config/memray_reports/<module_name>.bin
+docker-compose exec frigate memray flamegraph /config/memray_reports/<module_name>.bin
+# or
+docker exec -it <container_name_or_id> memray flamegraph /config/memray_reports/<module_name>.bin
 ```
 
-This will generate an HTML file that you can open in your browser.
+- You can also copy the `.bin` file to the host and run `memray` locally if you have it installed:
+
+```bash
+docker cp <container_name_or_id>:/config/memray_reports/<module_name>.bin /tmp/
+memray flamegraph /tmp/<module_name>.bin
+```
 
 ## Understanding the Reports
@@ -110,20 +132,4 @@ The interactive HTML reports allow you to:
 - Check that memray is properly installed (included by default in Frigate)
 - Verify the process actually started and ran (check process logs)
 
-## Example Usage
-
-```bash
-# Enable profiling for review and capture modules
-export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
-
-# Start Frigate
-# ... let it run for a while ...
-
-# Check for reports
-ls -lh /config/memray_reports/
-
-# If a process crashed, manually generate report
-memray flamegraph /config/memray_reports/frigate_capture_front_door.bin
-```
-
 For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).

View File

@@ -40,6 +40,7 @@ from frigate.util.classification import (
     collect_state_classification_examples,
     get_dataset_image_count,
     read_training_metadata,
+    write_training_metadata,
 )
 from frigate.util.file import get_event_snapshot
@@ -842,6 +843,12 @@ def rename_classification_category(
     try:
         os.rename(old_folder, new_folder)
 
+        # Mark dataset as ready to train by resetting training metadata
+        # This ensures the dataset is marked as changed after renaming
+        sanitized_name = sanitize_filename(name)
+        write_training_metadata(sanitized_name, 0)
+
         return JSONResponse(
             content=(
                 {
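The reset goes through `write_training_metadata` from `frigate.util.classification`, whose on-disk format is not shown in this diff. The sketch below only illustrates the idea; the file path and JSON layout are assumptions, not Frigate's actual format:

```python
import json
import os

# Illustrative stand-in for write_training_metadata: persist the image
# count the model was last trained on. Path and layout are assumed.
def write_training_metadata(model_name: str, image_count: int) -> None:
    path = os.path.join("/config/classification", model_name, "metadata.json")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump({"trained_image_count": image_count}, f)

# Writing 0 guarantees the stored count differs from the real dataset
# size, so a renamed model reads as untrained until it is retrained.
write_training_metadata("my_renamed_model", 0)
```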

View File

@@ -374,6 +374,9 @@ class LicensePlateProcessingMixin:
                             combined_plate = re.sub(
                                 pattern, replacement, combined_plate
                             )
+                            logger.debug(
+                                f"{camera}: Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'"
+                            )
                         except re.error as e:
                             logger.warning(
                                 f"{camera}: Invalid regex in replace_rules '{pattern}': {e}"
@@ -381,7 +384,7 @@ class LicensePlateProcessingMixin:
 
                 if combined_plate != original_combined:
                     logger.debug(
-                        f"{camera}: Rules applied: '{original_combined}' -> '{combined_plate}'"
+                        f"{camera}: All rules applied: '{original_combined}' -> '{combined_plate}'"
                     )
 
                 # Compute the combined area for qualifying boxes
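For context, these log lines live in a loop that applies each configured `replace_rules` regex in order. A standalone sketch of that flow; the rules and plate value here are hypothetical, and the config shape is simplified to (pattern, replacement) pairs:

```python
import re

# Hypothetical rules: strip separators, then fix a common OCR confusion.
replace_rules = [
    (r"[^A-Z0-9]", ""),  # drop dashes, spaces, etc.
    (r"^0", "O"),        # a leading zero is usually a misread "O"
]

combined_plate = "0AB-123"
original_combined = combined_plate

for pattern, replacement in replace_rules:
    try:
        combined_plate = re.sub(pattern, replacement, combined_plate)
        print(f"Processing replace rule: '{pattern}' -> '{replacement}', result: '{combined_plate}'")
    except re.error as e:
        print(f"Invalid regex in replace_rules '{pattern}': {e}")

if combined_plate != original_combined:
    print(f"All rules applied: '{original_combined}' -> '{combined_plate}'")
```

Running this prints each intermediate result ('0AB123', then 'OAB123'), mirroring the per-rule debug output the diff adds.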

View File

@@ -131,8 +131,9 @@ class AudioTranscriptionPostProcessor(PostProcessorApi):
                 },
             )
 
-            # Embed the description
-            self.embeddings.embed_description(event_id, transcription)
+            # Embed the description if semantic search is enabled
+            if self.config.semantic_search.enabled:
+                self.embeddings.embed_description(event_id, transcription)
 
         except DoesNotExist:
             logger.debug("No recording found for audio transcription post-processing")

View File

@@ -46,7 +46,7 @@ def should_update_state(prev_event: Event, current_event: Event) -> bool:
     if prev_event["sub_label"] != current_event["sub_label"]:
         return True
 
-    if len(prev_event["current_zones"]) < len(current_event["current_zones"]):
+    if set(prev_event["current_zones"]) != set(current_event["current_zones"]):
         return True
 
     return False
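Worth spelling out why the set comparison matters: an object can leave one zone and enter another in the same update, leaving the zone count unchanged, which the old length check never caught. A minimal standalone illustration (zone names are made up):

```python
# An object moves between zones in a single update: the count is
# unchanged, so the old length-based check misses the transition.
prev_zones = ["driveway"]
current_zones = ["porch"]

old_check = len(prev_zones) < len(current_zones)   # False: no update fires
new_check = set(prev_zones) != set(current_zones)  # True: update fires

assert not old_check and new_check
```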

View File

@@ -86,11 +86,11 @@ class TimelineProcessor(threading.Thread):
         event_data: dict[Any, Any],
     ) -> bool:
         """Handle object detection."""
-        save = False
         camera_config = self.config.cameras[camera]
         event_id = event_data["id"]
 
-        timeline_entry = {
+        # Base timeline entry data that all entries will share
+        base_entry = {
             Timeline.timestamp: event_data["frame_time"],
             Timeline.camera: camera,
             Timeline.source: "tracked_object",
@@ -123,40 +123,64 @@
                 e[Timeline.data]["sub_label"] = event_data["sub_label"]
 
         if event_type == EventStateEnum.start:
+            timeline_entry = base_entry.copy()
             timeline_entry[Timeline.class_type] = "visible"
-            save = True
+            self.insert_or_save(timeline_entry, prev_event_data, event_data)
         elif event_type == EventStateEnum.update:
+            # Check all conditions and create timeline entries for each change
+            entries_to_save = []
+
+            # Check for zone changes
+            prev_zones = set(prev_event_data["current_zones"])
+            current_zones = set(event_data["current_zones"])
+            zones_changed = prev_zones != current_zones
+
+            # Only save "entered_zone" events when the object is actually IN zones
             if (
-                len(prev_event_data["current_zones"]) < len(event_data["current_zones"])
+                zones_changed
                 and not event_data["stationary"]
+                and len(current_zones) > 0
             ):
-                timeline_entry[Timeline.class_type] = "entered_zone"
-                timeline_entry[Timeline.data]["zones"] = event_data["current_zones"]
-                save = True
-            elif prev_event_data["stationary"] != event_data["stationary"]:
-                timeline_entry[Timeline.class_type] = (
+                zone_entry = base_entry.copy()
+                zone_entry[Timeline.class_type] = "entered_zone"
+                zone_entry[Timeline.data] = base_entry[Timeline.data].copy()
+                zone_entry[Timeline.data]["zones"] = event_data["current_zones"]
+                entries_to_save.append(zone_entry)
+
+            # Check for stationary status change
+            if prev_event_data["stationary"] != event_data["stationary"]:
+                stationary_entry = base_entry.copy()
+                stationary_entry[Timeline.class_type] = (
                     "stationary" if event_data["stationary"] else "active"
                 )
-                save = True
-            elif prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
-                timeline_entry[Timeline.class_type] = "attribute"
-                timeline_entry[Timeline.data]["attribute"] = list(
+                stationary_entry[Timeline.data] = base_entry[Timeline.data].copy()
+                entries_to_save.append(stationary_entry)
+
+            # Check for new attributes
+            if prev_event_data["attributes"] == {} and event_data["attributes"] != {}:
+                attribute_entry = base_entry.copy()
+                attribute_entry[Timeline.class_type] = "attribute"
+                attribute_entry[Timeline.data] = base_entry[Timeline.data].copy()
+                attribute_entry[Timeline.data]["attribute"] = list(
                     event_data["attributes"].keys()
                 )[0]
                 if len(event_data["current_attributes"]) > 0:
-                    timeline_entry[Timeline.data]["attribute_box"] = to_relative_box(
+                    attribute_entry[Timeline.data]["attribute_box"] = to_relative_box(
                         camera_config.detect.width,
                         camera_config.detect.height,
                         event_data["current_attributes"][0]["box"],
                     )
-                    save = True
-        elif event_type == EventStateEnum.end:
-            timeline_entry[Timeline.class_type] = "gone"
-            save = True
+                entries_to_save.append(attribute_entry)
 
-        if save:
+            # Save all entries
+            for entry in entries_to_save:
+                self.insert_or_save(entry, prev_event_data, event_data)
+        elif event_type == EventStateEnum.end:
+            timeline_entry = base_entry.copy()
+            timeline_entry[Timeline.class_type] = "gone"
             self.insert_or_save(timeline_entry, prev_event_data, event_data)
 
     def handle_api_entry(
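One detail in the refactor above deserves a note: `base_entry.copy()` is a shallow copy, which is why each branch also copies `Timeline.data` before mutating it. A minimal sketch of the pitfall, using plain dicts in place of the Timeline fields:

```python
# dict.copy() is shallow: the nested "data" dict is shared across copies.
base_entry = {"class_type": None, "data": {"label": "car"}}

safe = base_entry.copy()
safe["data"] = base_entry["data"].copy()  # copy the nested dict too
safe["data"]["zones"] = ["driveway"]
assert "zones" not in base_entry["data"]  # base stays clean

unsafe = base_entry.copy()                # nested dict still shared
unsafe["data"]["zones"] = ["porch"]
assert "zones" in base_entry["data"]      # base polluted via shared dict
```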

View File

@@ -233,7 +233,7 @@ export function GroupedClassificationCard({
   });
 
   if (!best) {
-    return group.at(-1);
+    best = group.at(-1)!;
   }
 
   const bestTyped: ClassificationItemData = best;
@@ -377,30 +377,34 @@
             )}
           </ContentDescription>
         </div>
-        {isDesktop && (
-          <div className="flex flex-row justify-between">
-            {classifiedEvent && (
-              <Tooltip>
-                <TooltipTrigger asChild>
-                  <div
-                    className="cursor-pointer"
-                    tabIndex={-1}
-                    onClick={() => {
-                      navigate(`/explore?event_id=${classifiedEvent.id}`);
-                    }}
-                  >
-                    <LuSearch className="size-4 text-secondary-foreground" />
-                  </div>
-                </TooltipTrigger>
-                <TooltipPortal>
-                  <TooltipContent>
-                    {t("details.item.button.viewInExplore", {
-                      ns: "views/explore",
-                    })}
-                  </TooltipContent>
-                </TooltipPortal>
-              </Tooltip>
-            )}
+        {classifiedEvent && (
+          <div
+            className={cn(
+              "flex",
+              isDesktop && "flex-row justify-between",
+              isMobile && "absolute right-4 top-8",
+            )}
+          >
+            <Tooltip>
+              <TooltipTrigger asChild>
+                <div
+                  className="cursor-pointer"
+                  tabIndex={-1}
+                  onClick={() => {
+                    navigate(`/explore?event_id=${classifiedEvent.id}`);
+                  }}
+                >
+                  <LuSearch className="size-4 text-secondary-foreground" />
+                </div>
+              </TooltipTrigger>
+              <TooltipPortal>
+                <TooltipContent>
+                  {t("details.item.button.viewInExplore", {
+                    ns: "views/explore",
+                  })}
+                </TooltipContent>
+              </TooltipPortal>
+            </Tooltip>
           </div>
         )}
       </Header>

View File

@@ -45,6 +45,12 @@ export default function Step3ChooseExamples({
   const [isProcessing, setIsProcessing] = useState(false);
   const [currentClassIndex, setCurrentClassIndex] = useState(0);
   const [selectedImages, setSelectedImages] = useState<Set<string>>(new Set());
+  const [cacheKey, setCacheKey] = useState<number>(Date.now());
+  const [loadedImages, setLoadedImages] = useState<Set<string>>(new Set());
+
+  const handleImageLoad = useCallback((imageName: string) => {
+    setLoadedImages((prev) => new Set(prev).add(imageName));
+  }, []);
 
   const { data: trainImages, mutate: refreshTrainImages } = useSWR<string[]>(
     hasGenerated ? `classification/${step1Data.modelName}/train` : null,
@@ -332,6 +338,8 @@
       setHasGenerated(true);
       toast.success(t("wizard.step3.generateSuccess"));
 
+      // Update cache key to force image reload
+      setCacheKey(Date.now());
       await refreshTrainImages();
     } catch (error) {
       const axiosError = error as {
@@ -565,10 +573,16 @@
                         )}
                         onClick={() => toggleImageSelection(imageName)}
                       >
+                        {!loadedImages.has(imageName) && (
+                          <div className="flex h-full items-center justify-center">
+                            <ActivityIndicator className="size-6" />
+                          </div>
+                        )}
                         <img
-                          src={`${baseUrl}clips/${step1Data.modelName}/train/${imageName}`}
+                          src={`${baseUrl}clips/${step1Data.modelName}/train/${imageName}?t=${cacheKey}`}
                           alt={`Example ${index + 1}`}
                           className="h-full w-full object-cover"
+                          onLoad={() => handleImageLoad(imageName)}
                         />
                       </div>
                     );

View File

@@ -309,7 +309,10 @@ export function RecordingView({
       currentTimeRange.after <= currentTime &&
       currentTimeRange.before >= currentTime
     ) {
-      mainControllerRef.current?.seekToTimestamp(currentTime, true);
+      mainControllerRef.current?.seekToTimestamp(
+        currentTime,
+        mainControllerRef.current.isPlaying(),
+      );
     } else {
       updateSelectedSegment(currentTime, true);
     }

View File

@@ -4,7 +4,7 @@ import useSWR from "swr";
 import axios from "axios";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import AutoUpdatingCameraImage from "@/components/camera/AutoUpdatingCameraImage";
-import { useCallback, useContext, useEffect, useMemo, useState } from "react";
+import { useCallback, useEffect, useMemo, useState } from "react";
 import { Slider } from "@/components/ui/slider";
 import { Label } from "@/components/ui/label";
 import {
@@ -20,7 +20,6 @@ import { toast } from "sonner";
 import { Separator } from "@/components/ui/separator";
 import { Link } from "react-router-dom";
 import { LuExternalLink } from "react-icons/lu";
-import { StatusBarMessagesContext } from "@/context/statusbar-provider";
 import { Trans, useTranslation } from "react-i18next";
 import { useDocDomain } from "@/hooks/use-doc-domain";
 import { cn } from "@/lib/utils";
@@ -48,8 +47,6 @@ export default function MotionTunerView({
   const [changedValue, setChangedValue] = useState(false);
   const [isLoading, setIsLoading] = useState(false);
 
-  const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;
-
   const { send: sendMotionThreshold } = useMotionThreshold(selectedCamera);
   const { send: sendMotionContourArea } = useMotionContourArea(selectedCamera);
   const { send: sendImproveContrast } = useImproveContrast(selectedCamera);
@@ -119,7 +116,10 @@
     axios
       .put(
         `config/set?cameras.${selectedCamera}.motion.threshold=${motionSettings.threshold}&cameras.${selectedCamera}.motion.contour_area=${motionSettings.contour_area}&cameras.${selectedCamera}.motion.improve_contrast=${motionSettings.improve_contrast ? "True" : "False"}`,
-        { requires_restart: 0 },
+        {
+          requires_restart: 0,
+          update_topic: `config/cameras/${selectedCamera}/motion`,
+        },
       )
       .then((res) => {
         if (res.status === 200) {
@@ -164,23 +164,7 @@
   const onCancel = useCallback(() => {
     setMotionSettings(origMotionSettings);
     setChangedValue(false);
-    removeMessage("motion_tuner", `motion_tuner_${selectedCamera}`);
-  }, [origMotionSettings, removeMessage, selectedCamera]);
-
-  useEffect(() => {
-    if (changedValue) {
-      addMessage(
-        "motion_tuner",
-        t("motionDetectionTuner.unsavedChanges", { camera: selectedCamera }),
-        undefined,
-        `motion_tuner_${selectedCamera}`,
-      );
-    } else {
-      removeMessage("motion_tuner", `motion_tuner_${selectedCamera}`);
-    }
-    // we know that these deps are correct
-    // eslint-disable-next-line react-hooks/exhaustive-deps
-  }, [changedValue, selectedCamera]);
+  }, [origMotionSettings]);
 
   useEffect(() => {
     document.title = t("documentTitle.motionTuner");
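The same `config/set` call can be exercised outside the UI to verify that `update_topic` pushes fresh motion settings without a restart. A sketch using `requests`; the host, port, camera name, and setting values are assumptions, and Frigate's HTTP API is assumed to be served under `/api`:

```python
import requests

camera = "front_door"  # hypothetical camera name
params = "&".join(
    [
        f"cameras.{camera}.motion.threshold=30",
        f"cameras.{camera}.motion.contour_area=10",
        f"cameras.{camera}.motion.improve_contrast=True",
    ]
)

# requires_restart=0 applies the change live; update_topic asks Frigate
# to publish the updated motion config so open clients refresh at once.
resp = requests.put(
    f"http://frigate.local:5000/api/config/set?{params}",
    json={
        "requires_restart": 0,
        "update_topic": f"config/cameras/{camera}/motion",
    },
)
print(resp.status_code)
```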