Compare commits


2 Commits

Author SHA1 Message Date
Nicolas Mowen
649ca49e55
Beta discussion template (#21239)
* Add beta support template for discussions

* Add note to bug
2025-12-11 09:37:46 -06:00
Josh Hawkins
fa6dda6735
Miscellaneous Fixes (#21208)
* conditionally display actions for admin role only

* only allow admins to save annotation offset

* Fix classification reset filter

* fix explore context menu from blocking pointer events on the body element after dialog close

applying modal=false to the menu (not to the dialog) to fix this in the same way as elsewhere in the codebase

* add select all link to face library, classification, and explore

* Disable iOS image dragging for classification card

* add proxmox ballooning comment

* lpr docs tweaks

* yaml list

* clarify tls_insecure

* Improve security summary format and usefulness

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-11 07:23:34 -07:00
20 changed files with 403 additions and 182 deletions

View File

@ -0,0 +1,129 @@
title: "[Beta Support]: "
labels: ["support", "triage", "beta"]
body:
- type: markdown
attributes:
value: |
Thank you for testing Frigate beta versions! Use this form for support with beta releases.
**Note:** Beta versions may have incomplete features, known issues, or unexpected behavior. Please check the [release notes](https://github.com/blakeblackshear/frigate/releases) and [recent discussions][discussions] for known beta issues before submitting.
Before submitting, read the [beta documentation][docs].
[docs]: https://deploy-preview-19787--frigate-docs.netlify.app/
- type: textarea
id: description
attributes:
label: Describe the problem you are having
description: Please be as detailed as possible. Include what you expected to happen vs what actually happened.
validations:
required: true
- type: input
id: version
attributes:
label: Beta Version
description: Visible on the System page in the Web UI. Please include the full version including the build identifier (e.g., 0.17.0-beta1)
placeholder: "0.17.0-beta1"
validations:
required: true
- type: dropdown
id: issue-category
attributes:
label: Issue Category
description: What area is your issue related to? This helps us understand the context.
options:
- Object Detection / Detectors
- Hardware Acceleration
- Configuration / Setup
- WebUI / Frontend
- Recordings / Storage
- Notifications / Events
- Integration (Home Assistant, etc)
- Performance / Stability
- Installation / Updates
- Other
validations:
required: true
- type: textarea
id: config
attributes:
label: Frigate config file
description: This will be automatically formatted into code, so no need for backticks. Remove any sensitive information like passwords or URLs.
render: yaml
validations:
required: true
- type: textarea
id: frigatelogs
attributes:
label: Relevant Frigate log output
description: Please copy and paste any relevant Frigate log output. Include logs before and after your exact error when possible. This will be automatically formatted into code, so no need for backticks.
render: shell
validations:
required: true
- type: textarea
id: go2rtclogs
attributes:
label: Relevant go2rtc log output (if applicable)
description: If your issue involves cameras, streams, or playback, please include go2rtc logs. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
id: install-method
attributes:
label: Install method
options:
- Home Assistant Add-on
- Docker Compose
- Docker CLI
- Proxmox via Docker
- Proxmox via TTeck Script
- Windows WSL2
validations:
required: true
- type: textarea
id: docker
attributes:
label: docker-compose file or Docker CLI command
description: This will be automatically formatted into code, so no need for backticks. Include relevant environment variables and device mappings.
render: yaml
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating system
options:
- Home Assistant OS
- Debian
- Ubuntu
- Other Linux
- Proxmox
- UNRAID
- Windows
- Other
validations:
required: true
- type: input
id: hardware
attributes:
label: CPU / GPU / Hardware
description: Provide details about your hardware (e.g., Intel i5-9400, NVIDIA RTX 3060, Raspberry Pi 4)
placeholder: "Intel i7-10700, NVIDIA GTX 1660"
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: Screenshots of the issue, System metrics pages, or any relevant UI. Drag and drop or paste images directly.
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: If applicable, provide detailed steps to reproduce the issue
placeholder: |
1. Go to '...'
2. Click on '...'
3. See error
- type: textarea
id: other
attributes:
label: Any other information that may be helpful
description: Additional context, related issues, when the problem started appearing, etc.

View File

@ -6,6 +6,8 @@ body:
value: |
Use this form to submit a reproducible bug in Frigate or Frigate's UI.
**⚠️ If you are running a beta version (0.17.0-beta or similar), please use the [Beta Support template](https://github.com/blakeblackshear/frigate/discussions/new?category=beta-support) instead.**
Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
**If you are unsure if your issue is actually a bug or not, please submit a support request first.**

View File

@ -374,9 +374,19 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps.
1. Enable debug logs to see exactly what Frigate is doing.
1. Start with a simplified LPR config.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary.
- Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values.
```yaml
lpr:
enabled: true
debug_save_plates: true
```
2. Enable debug logs to see exactly what Frigate is doing.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change.
```yaml
logger:
@ -385,7 +395,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
frigate.data_processing.common.license_plate: debug
```
2. Ensure your plates are being _detected_.
3. Ensure your plates are being _detected_.
If you are using a Frigate+ or `license_plate` detecting model:
@ -398,7 +408,7 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is
- Watch the debug logs for messages from the YOLOv9 plate detector.
- You may need to adjust your `detection_threshold` if your plates are not being detected.
3. Ensure the characters on detected plates are being _recognized_.
4. Ensure the characters on detected plates are being _recognized_.
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear.
- Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working.
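Putting the two steps above together, a minimal debug configuration might look like the following. This is a sketch combining the two snippets shown earlier; the `logs:` key nesting follows the standard Frigate logger config and is an assumption here, since the diff excerpt elides the middle of the `logger` block.

```yaml
# Minimal LPR debug config: defaults everywhere except enabling LPR,
# saving plate crops, and turning on verbose LPR logging.
lpr:
  enabled: true
  debug_save_plates: true

logger:
  logs:
    frigate.data_processing.common.license_plate: debug
```

Remember to restart Frigate after changing the logger configuration, and to remove the debug log level once you are done troubleshooting.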

View File

@ -911,7 +911,7 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
# Optional: Skip TLS verification and disable digest authentication for the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.

View File

@ -135,6 +135,7 @@ Finally, configure [hardware object detection](/configuration/object_detectors#h
### MemryX MX3
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
@ -142,7 +143,6 @@ The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVM
#### Configuration
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
@ -156,7 +156,7 @@ Then follow these steps for installing the correct driver/runtime configuration:
#### Setup
To set up Frigate, follow the default installation instructions, for example: `ghcr.io/blakeblackshear/frigate:stable`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
@ -173,7 +173,7 @@ In your `docker-compose.yml`, also add:
privileged: true
volumes:
/run/mxa_manager:/run/mxa_manager
- /run/mxa_manager:/run/mxa_manager
```
If you can't use Docker Compose, you can run the container with something similar to this:
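A hypothetical `docker run` equivalent, based only on the compose settings shown above (`privileged` and the `/run/mxa_manager` mount); the config volume path and port mapping are placeholders from the standard Frigate install docs and may need adjusting for your setup:

```
docker run -d \
  --name frigate \
  --privileged \
  -v /run/mxa_manager:/run/mxa_manager \
  -v /path/to/your/config:/config \
  -p 8971:8971 \
  ghcr.io/blakeblackshear/frigate:stable
```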
@ -411,7 +411,7 @@ To install make sure you have the [community app plugin here](https://forums.unr
## Proxmox
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers.
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers. Ensure that ballooning is **disabled**, especially if you are passing through a GPU to the VM.
:::warning

View File

@ -251,20 +251,22 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if not primary_segments:
return "No concerns were found during this time period."
# For each primary segment, find overlapping contextual items from other cameras
all_items_for_summary = []
# Build hierarchical structure: each primary event with its contextual items
events_with_context = []
for primary_seg in primary_segments:
# Add the primary item with marker
# Start building the primary event structure
primary_item = copy.deepcopy(primary_seg["metadata"])
primary_item["_is_primary"] = True
primary_item["_camera"] = primary_seg["camera"]
all_items_for_summary.append(primary_item)
primary_item["camera"] = primary_seg["camera"]
primary_item["start_time"] = primary_seg["start_time"]
primary_item["end_time"] = primary_seg["end_time"]
# Find overlapping contextual items from other cameras
primary_start = primary_seg["start_time"]
primary_end = primary_seg["end_time"]
primary_camera = primary_seg["camera"]
contextual_items = []
seen_contextual_cameras = set()
for seg in segments:
seg_camera = seg["camera"]
@ -279,21 +281,25 @@ class ReviewDescriptionProcessor(PostProcessorApi):
seg_end = seg["end_time"]
if seg_start < primary_end and primary_start < seg_end:
contextual_item = copy.deepcopy(seg["metadata"])
contextual_item["_is_primary"] = False
contextual_item["_camera"] = seg_camera
contextual_item["_related_to_camera"] = primary_camera
# Avoid duplicates if same camera has multiple overlapping segments
if seg_camera not in seen_contextual_cameras:
contextual_item = copy.deepcopy(seg["metadata"])
contextual_item["camera"] = seg_camera
contextual_item["start_time"] = seg_start
contextual_item["end_time"] = seg_end
contextual_items.append(contextual_item)
seen_contextual_cameras.add(seg_camera)
if not any(
item.get("_camera") == seg_camera
and item.get("time") == contextual_item.get("time")
for item in all_items_for_summary
):
all_items_for_summary.append(contextual_item)
# Add context array to primary item
primary_item["context"] = contextual_items
events_with_context.append(primary_item)
total_context_items = sum(
len(event.get("context", [])) for event in events_with_context
)
logger.debug(
f"Summary includes {len(primary_segments)} primary items and "
f"{len(all_items_for_summary) - len(primary_segments)} contextual items"
f"Summary includes {len(events_with_context)} primary events with "
f"{total_context_items} total contextual items"
)
if self.config.review.genai.debug_save_thumbnails:
@ -304,7 +310,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
return self.genai_client.generate_review_summary(
start_ts,
end_ts,
all_items_for_summary,
events_with_context,
self.config.review.genai.debug_save_thumbnails,
)
else:
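The hierarchical grouping this diff introduces can be sketched as a standalone function. The function name, segment dict shape, and field names below are assumptions inferred from the diff, not the actual Frigate API; the core idea is the standard interval-overlap test (`a_start < b_end and b_start < a_end`) plus per-camera deduplication of contextual items:

```python
import copy


def build_events_with_context(primary_segments, segments):
    """Attach overlapping segments from other cameras to each primary event.

    Each primary segment becomes one event dict carrying a "context" list with
    at most one contextual item per other camera (mirroring the diff's dedup).
    """
    events_with_context = []
    for primary in primary_segments:
        item = copy.deepcopy(primary["metadata"])
        item["camera"] = primary["camera"]
        item["start_time"] = primary["start_time"]
        item["end_time"] = primary["end_time"]

        contextual_items = []
        seen_cameras = set()
        for seg in segments:
            # Context only comes from *other* cameras.
            if seg["camera"] == primary["camera"]:
                continue
            # Half-open interval overlap: segments share some time window.
            if (
                seg["start_time"] < item["end_time"]
                and item["start_time"] < seg["end_time"]
            ):
                # Keep only the first overlapping segment per camera.
                if seg["camera"] not in seen_cameras:
                    ctx = copy.deepcopy(seg["metadata"])
                    ctx["camera"] = seg["camera"]
                    ctx["start_time"] = seg["start_time"]
                    ctx["end_time"] = seg["end_time"]
                    contextual_items.append(ctx)
                    seen_cameras.add(seg["camera"])

        item["context"] = contextual_items
        events_with_context.append(item)
    return events_with_context
```

Compared with the old flat `all_items_for_summary` list, this shape lets the prompt treat contextual items as attributes of a primary event rather than as sibling events, which is what the new report format relies on.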

View File

@ -177,78 +177,60 @@ Each line represents a detection state, not necessarily unique individuals. Pare
self,
start_ts: float,
end_ts: float,
segments: list[dict[str, Any]],
events: list[dict[str, Any]],
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
timeline_summary_prompt = f"""
You are a security officer.
Time range: {time_range}.
Input: JSON list with "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "_is_primary", "_camera".
You are a security officer writing a concise security report.
Task: Write a concise, human-presentable security report in markdown format.
Time range: {time_range}
CRITICAL - Understanding Primary vs Contextual Items:
- Items with "_is_primary": true are events that REQUIRE REVIEW and MUST be included in the report
- Items with "_is_primary": false are additional context from other camera perspectives that overlap in time
- **DO NOT create separate bullet points or sections for contextual items**
- **ONLY use contextual items to enrich and inform the description of primary items**
- The "_camera" field indicates which camera captured each event
- **When a contextual item provides relevant background, you MUST incorporate it directly into the primary event's bullet point**
- Contextual information often explains or de-escalates seemingly suspicious primary events
Input format: Each event is a JSON object with:
- "title", "scene", "confidence", "potential_threat_level" (0-2), "other_concerns", "camera", "time", "start_time", "end_time"
- "context": array of related events from other cameras that occurred during overlapping time periods
Rules for the report:
Report Structure - Use this EXACT format:
- Title & overview
- Start with:
# Security Summary - {time_range}
- Write a 1-2 sentence situational overview capturing the general pattern of the period.
- Keep the overview high-level; specific details will be in the event bullets below.
# Security Summary - {time_range}
- Event details
- **ONLY create bullet points for PRIMARY items (_is_primary: true)**
- **Do NOT create sections or bullets for events that don't exist**
- Do NOT create separate bullets for contextual items
- Present primary events in chronological order as a bullet list.
- **CRITICAL: When contextual items overlap with a primary event, you MUST weave that information directly into the same bullet point**
- Format: **[Timestamp]** - [Description incorporating any contextual information]. [Camera info]. (threat level: X)
- If contextual information provides an explanation (e.g., delivery truck person is likely delivery driver), reflect this understanding in your description and potentially adjust the perceived threat level
- If multiple PRIMARY events occur within the same minute, combine them into a single bullet with sub-points.
- Use bold timestamps for clarity.
- Camera format: "Camera: [camera name]" or mention contextual cameras inline when relevant
- Group bullets under subheadings ONLY when you have actual PRIMARY events to list (e.g., Porch Activity, Unusual Behavior).
## Overview
[Write 1-2 sentences summarizing the overall activity pattern during this period.]
- Threat levels
- Show the threat level for PRIMARY events using these labels:
- Threat level 0: "Normal"
- Threat level 1: "Needs review"
- Threat level 2: "Security concern"
- Format as (threat level: Normal), (threat level: Needs review), or (threat level: Security concern).
- **When contextual items clearly explain a primary event (e.g., delivery truck explains person at door), you should describe it as normal activity and note the explanation**
- **Your description and tone should reflect the fuller understanding provided by contextual information**
- Example: A primary event says "unidentified person with face covering" but context shows a delivery truck; describe them as "delivery person (truck visible on Front Driveway Cam)" rather than emphasizing suspicious elements
- The stored threat level remains as originally classified, but your narrative should reflect the contextual understanding
- If multiple PRIMARY events at the same time share the same threat level, only state it once.
---
- Final assessment
- End with a Final Assessment section.
- If all primary events are threat level 0 or explained by contextual items:
Final assessment: Only normal residential activity observed during this period.
- If threat level 1 events are present:
Final assessment: Some activity requires review but no security concerns identified.
- If threat level 2 events are present, clearly summarize them as Security concerns requiring immediate attention.
- Keep this section brief - do not repeat details from the event descriptions above.
## Timeline
- Conciseness
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
- Summarize similar routine events instead of restating full scene descriptions.
- When incorporating contextual information, do so briefly and naturally within the primary event description.
- Avoid lengthy explanatory notes - integrate context seamlessly into the narrative.
[Group events by time periods (e.g., "Morning (6:00 AM - 12:00 PM)", "Afternoon (12:00 PM - 5:00 PM)", "Evening (5:00 PM - 9:00 PM)", "Night (9:00 PM - 6:00 AM)"). Use appropriate time blocks based on when events occurred.]
### [Time Block Name]
**HH:MM AM/PM** | [Camera Name] | [Threat Level Indicator]
- [Event title]: [Clear description incorporating contextual information from the "context" array]
- Context: [If context array has items, mention them here, e.g., "Delivery truck present on Front Driveway Cam (HH:MM AM/PM)"]
- Assessment: [Brief assessment incorporating context - if context explains the event, note it here]
[Repeat for each event in chronological order within the time block]
---
## Summary
[One sentence summarizing the period. If all events are normal/explained: "Routine activity observed." If review needed: "Some activity requires review but no security concerns." If security concerns: "Security concerns requiring immediate attention."]
Guidelines:
- List ALL events in chronological order, grouped by time blocks
- Threat level indicators: Normal, Needs review, 🔴 Security concern
- Integrate contextual information naturally - use the "context" array to enrich each event's description
- If context explains the event (e.g., delivery truck explains person at door), describe it accordingly (e.g., "delivery person" not "unidentified person")
- Be concise but informative - focus on what happened and what it means
- If contextual information makes an event clearly normal, reflect that in your assessment
- Only create time blocks that have events - don't create empty sections
"""
for item in segments:
timeline_summary_prompt += f"\n{item}"
timeline_summary_prompt += "\n\nEvents:\n"
for event in events:
timeline_summary_prompt += f"\n{event}\n"
if debug_save:
with open(

View File

@ -52,6 +52,7 @@
},
"selected_one": "{{count}} selected",
"selected_other": "{{count}} selected",
"select_all": "All",
"camera": "Camera",
"detected": "detected",
"normalActivity": "Normal",

View File

@ -29,6 +29,7 @@
},
"train": {
"title": "Recent Recognitions",
"titleShort": "Recent",
"aria": "Select recent recognitions",
"empty": "There are no recent face recognition attempts"
},

View File

@ -7,7 +7,7 @@ import {
} from "@/types/classification";
import { Event } from "@/types/event";
import { forwardRef, useMemo, useRef, useState } from "react";
import { isDesktop, isMobile, isMobileOnly } from "react-device-detect";
import { isDesktop, isIOS, isMobile, isMobileOnly } from "react-device-detect";
import { useTranslation } from "react-i18next";
import TimeAgo from "../dynamic/TimeAgo";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
@ -127,6 +127,15 @@ export const ClassificationCard = forwardRef<
imgClassName,
isMobile && "w-full",
)}
style={
isIOS
? {
WebkitUserSelect: "none",
WebkitTouchCallout: "none",
}
: undefined
}
draggable={false}
loading="lazy"
onLoad={() => setImageLoaded(true)}
src={`${baseUrl}${data.filepath}`}

View File

@ -19,6 +19,7 @@ import {
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { Trans, useTranslation } from "react-i18next";
import { toast } from "sonner";
import { useIsAdmin } from "@/hooks/use-is-admin";
type ReviewActionGroupProps = {
selectedReviews: ReviewSegment[];
@ -33,6 +34,7 @@ export default function ReviewActionGroup({
pullLatestData,
}: ReviewActionGroupProps) {
const { t } = useTranslation(["components/dialog"]);
const isAdmin = useIsAdmin();
const onClearSelected = useCallback(() => {
setSelectedReviews([]);
}, [setSelectedReviews]);
@ -185,21 +187,23 @@ export default function ReviewActionGroup({
</div>
)}
</Button>
<Button
className="flex items-center gap-2 p-2"
aria-label={t("button.delete", { ns: "common" })}
size="sm"
onClick={handleDelete}
>
<HiTrash className="text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{bypassDialog
? t("recording.button.deleteNow")
: t("button.delete", { ns: "common" })}
</div>
)}
</Button>
{isAdmin && (
<Button
className="flex items-center gap-2 p-2"
aria-label={t("button.delete", { ns: "common" })}
size="sm"
onClick={handleDelete}
>
<HiTrash className="text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{bypassDialog
? t("recording.button.deleteNow")
: t("button.delete", { ns: "common" })}
</div>
)}
</Button>
)}
</div>
</div>
</>

View File

@ -16,18 +16,24 @@ import {
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { toast } from "sonner";
import { Trans, useTranslation } from "react-i18next";
import { useIsAdmin } from "@/hooks/use-is-admin";
type SearchActionGroupProps = {
selectedObjects: string[];
setSelectedObjects: (ids: string[]) => void;
pullLatestData: () => void;
onSelectAllObjects: () => void;
totalItems: number;
};
export default function SearchActionGroup({
selectedObjects,
setSelectedObjects,
pullLatestData,
onSelectAllObjects,
totalItems,
}: SearchActionGroupProps) {
const { t } = useTranslation(["components/filter"]);
const isAdmin = useIsAdmin();
const onClearSelected = useCallback(() => {
setSelectedObjects([]);
}, [setSelectedObjects]);
@ -122,24 +128,37 @@ export default function SearchActionGroup({
>
{t("button.unselect", { ns: "common" })}
</div>
</div>
<div className="flex items-center gap-1 md:gap-2">
<Button
className="flex items-center gap-2 p-2"
aria-label={t("button.delete", { ns: "common" })}
size="sm"
onClick={handleDelete}
>
<HiTrash className="text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{bypassDialog
? t("button.deleteNow", { ns: "common" })
: t("button.delete", { ns: "common" })}
{selectedObjects.length < totalItems && (
<>
<div className="p-1">{"|"}</div>
<div
className="cursor-pointer p-2 text-primary hover:rounded-lg hover:bg-secondary"
onClick={onSelectAllObjects}
>
{t("select_all", { ns: "views/events" })}
</div>
)}
</Button>
</>
)}
</div>
{isAdmin && (
<div className="flex items-center gap-1 md:gap-2">
<Button
className="flex items-center gap-2 p-2"
aria-label={t("button.delete", { ns: "common" })}
size="sm"
onClick={handleDelete}
>
<HiTrash className="text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{bypassDialog
? t("button.deleteNow", { ns: "common" })
: t("button.delete", { ns: "common" })}
</div>
)}
</Button>
</div>
)}
</div>
</>
);

View File

@ -31,6 +31,7 @@ import {
import useSWR from "swr";
import { Trans, useTranslation } from "react-i18next";
import BlurredIconButton from "../button/BlurredIconButton";
import { useIsAdmin } from "@/hooks/use-is-admin";
type SearchResultActionsProps = {
searchResult: SearchResult;
@ -52,6 +53,7 @@ export default function SearchResultActions({
children,
}: SearchResultActionsProps) {
const { t } = useTranslation(["views/explore"]);
const isAdmin = useIsAdmin();
const { data: config } = useSWR<FrigateConfig>("config");
@ -137,7 +139,8 @@ export default function SearchResultActions({
<span>{t("itemMenu.findSimilar.label")}</span>
</MenuItem>
)}
{config?.semantic_search?.enabled &&
{isAdmin &&
config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<MenuItem
aria-label={t("itemMenu.addTrigger.aria")}
@ -146,12 +149,14 @@ export default function SearchResultActions({
<span>{t("itemMenu.addTrigger.label")}</span>
</MenuItem>
)}
<MenuItem
aria-label={t("itemMenu.deleteTrackedObject.label")}
onClick={() => setDeleteDialogOpen(true)}
>
<span>{t("button.delete", { ns: "common" })}</span>
</MenuItem>
{isAdmin && (
<MenuItem
aria-label={t("itemMenu.deleteTrackedObject.label")}
onClick={() => setDeleteDialogOpen(true)}
>
<span>{t("button.delete", { ns: "common" })}</span>
</MenuItem>
)}
</>
);
@ -184,7 +189,7 @@ export default function SearchResultActions({
</AlertDialogContent>
</AlertDialog>
{isContextMenu ? (
<ContextMenu>
<ContextMenu modal={false}>
<ContextMenuTrigger>{children}</ContextMenuTrigger>
<ContextMenuContent>{menuItems}</ContextMenuContent>
</ContextMenu>

View File

@ -10,6 +10,7 @@ import { Trans, useTranslation } from "react-i18next";
import { LuInfo } from "react-icons/lu";
import { cn } from "@/lib/utils";
import { isMobile } from "react-device-detect";
import { useIsAdmin } from "@/hooks/use-is-admin";
type Props = {
className?: string;
@ -17,6 +18,7 @@ type Props = {
export default function AnnotationOffsetSlider({ className }: Props) {
const { annotationOffset, setAnnotationOffset, camera } = useDetailStream();
const isAdmin = useIsAdmin();
const { mutate } = useSWRConfig();
const { t } = useTranslation(["views/explore"]);
const [isSaving, setIsSaving] = useState(false);
@ -101,11 +103,13 @@ export default function AnnotationOffsetSlider({ className }: Props) {
<Button size="sm" variant="ghost" onClick={reset}>
{t("button.reset", { ns: "common" })}
</Button>
<Button size="sm" onClick={save} disabled={isSaving}>
{isSaving
? t("button.saving", { ns: "common" })
: t("button.save", { ns: "common" })}
</Button>
{isAdmin && (
<Button size="sm" onClick={save} disabled={isSaving}>
{isSaving
? t("button.saving", { ns: "common" })
: t("button.save", { ns: "common" })}
</Button>
)}
</div>
</div>
<div

View File

@ -24,6 +24,7 @@ import { Input } from "@/components/ui/input";
import { Separator } from "@/components/ui/separator";
import { Trans, useTranslation } from "react-i18next";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { useIsAdmin } from "@/hooks/use-is-admin";
type AnnotationSettingsPaneProps = {
event: Event;
@ -36,6 +37,7 @@ export function AnnotationSettingsPane({
setAnnotationOffset,
}: AnnotationSettingsPaneProps) {
const { t } = useTranslation(["views/explore"]);
const isAdmin = useIsAdmin();
const { getLocaleDocUrl } = useDocDomain();
const { data: config, mutate: updateConfig } =
@ -201,22 +203,24 @@ export function AnnotationSettingsPane({
>
{t("button.apply", { ns: "common" })}
</Button>
<Button
variant="select"
aria-label={t("button.save", { ns: "common" })}
disabled={isLoading}
className="flex flex-1"
type="submit"
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>{t("button.saving", { ns: "common" })}</span>
</div>
) : (
t("button.save", { ns: "common" })
)}
</Button>
{isAdmin && (
<Button
variant="select"
aria-label={t("button.save", { ns: "common" })}
disabled={isLoading}
className="flex flex-1"
type="submit"
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>{t("button.saving", { ns: "common" })}</span>
</div>
) : (
t("button.save", { ns: "common" })
)}
</Button>
)}
</div>
</div>
</form>

View File

@ -15,6 +15,7 @@ import {
import { HiDotsHorizontal } from "react-icons/hi";
import { SearchResult } from "@/types/search";
import { FrigateConfig } from "@/types/frigateConfig";
import { useIsAdmin } from "@/hooks/use-is-admin";
type Props = {
search: SearchResult | Event;
@ -35,6 +36,7 @@ export default function DetailActionsMenu({
const { t } = useTranslation(["views/explore", "views/faceLibrary"]);
const navigate = useNavigate();
const [isOpen, setIsOpen] = useState(false);
const isAdmin = useIsAdmin();
const clipTimeRange = useMemo(() => {
const startTime = (search.start_time ?? 0) - REVIEW_PADDING;
@ -130,22 +132,24 @@ export default function DetailActionsMenu({
</DropdownMenuItem>
)}
{config?.semantic_search.enabled && search.data.type == "object" && (
<DropdownMenuItem
onClick={() => {
setIsOpen(false);
setTimeout(() => {
navigate(
`/settings?page=triggers&camera=${search.camera}&event_id=${search.id}`,
);
}, 0);
}}
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.addTrigger.label")}</span>
</div>
</DropdownMenuItem>
)}
{isAdmin &&
config?.semantic_search.enabled &&
search.data.type == "object" && (
<DropdownMenuItem
onClick={() => {
setIsOpen(false);
setTimeout(() => {
navigate(
`/settings?page=triggers&camera=${search.camera}&event_id=${search.id}`,
);
}, 0);
}}
>
<div className="flex cursor-pointer items-center gap-2">
<span>{t("itemMenu.addTrigger.label")}</span>
</div>
</DropdownMenuItem>
)}
</DropdownMenuContent>
</DropdownMenuPortal>
</DropdownMenu>

View File

@ -97,20 +97,9 @@ export default function TrainFilterDialog({
<Button
aria-label={t("reset.label")}
onClick={() => {
setCurrentFilter((prevFilter) => ({
...prevFilter,
time_range: undefined,
zones: undefined,
sub_labels: undefined,
search_type: undefined,
min_score: undefined,
max_score: undefined,
min_speed: undefined,
max_speed: undefined,
has_snapshot: undefined,
has_clip: undefined,
recognized_license_plate: undefined,
}));
const resetFilter: TrainFilter = {};
setCurrentFilter(resetFilter);
onUpdateFilter(resetFilter);
}}
>
{t("button.reset", { ns: "common" })}

View File

@ -52,7 +52,7 @@ import {
useRef,
useState,
} from "react";
import { isDesktop } from "react-device-detect";
import { isDesktop, isMobileOnly } from "react-device-detect";
import { Trans, useTranslation } from "react-i18next";
import {
LuFolderCheck,
@ -370,10 +370,10 @@ export default function FaceLibrary() {
/>
{selectedFaces?.length > 0 ? (
<div className="flex items-center justify-center gap-2">
<div className="mx-1 flex w-48 items-center justify-center text-sm text-muted-foreground">
<div className="mx-1 flex w-auto items-center justify-center text-sm text-muted-foreground">
<div className="p-1">
{t("selected", {
ns: "views/event",
ns: "views/events",
count: selectedFaces.length,
})}
</div>
@ -384,6 +384,24 @@ export default function FaceLibrary() {
>
{t("button.unselect", { ns: "common" })}
</div>
{selectedFaces.length <
(pageToggle === "train"
? trainImages.length
: faceImages.length) && (
<>
<div className="p-1">{"|"}</div>
<div
className="cursor-pointer p-2 text-primary hover:rounded-lg hover:bg-secondary"
onClick={() =>
setSelectedFaces([
...(pageToggle === "train" ? trainImages : faceImages),
])
}
>
{t("select_all", { ns: "views/events" })}
</div>
</>
)}
</div>
<Button
className="flex gap-2"
@ -482,6 +500,18 @@ function LibrarySelector({
[renameFace],
);
const pageTitle = useMemo(() => {
if (pageToggle != "train") {
return pageToggle;
}
if (isMobileOnly) {
return t("train.titleShort");
}
return t("train.title");
}, [pageToggle, t]);
return (
<>
<Dialog
@ -532,7 +562,7 @@ function LibrarySelector({
<DropdownMenu modal={false}>
<DropdownMenuTrigger asChild>
<Button className="flex justify-between smart-capitalize">
{pageToggle == "train" ? t("train.title") : pageToggle}
{pageTitle}
<span className="ml-2 text-primary-variant">
({(pageToggle && faceData?.[pageToggle]?.length) || 0})
</span>

View File

@ -421,10 +421,10 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
isMobileOnly && "justify-between",
)}
>
<div className="flex w-48 items-center justify-center text-sm text-muted-foreground">
<div className="flex w-auto items-center justify-center text-sm text-muted-foreground md:w-auto">
<div className="p-1">
{t("selected", {
ns: "views/event",
ns: "views/events",
count: selectedImages.length,
})}
</div>
@ -435,6 +435,26 @@ export default function ModelTrainingView({ model }: ModelTrainingViewProps) {
>
{t("button.unselect", { ns: "common" })}
</div>
{selectedImages.length <
(pageToggle === "train"
? trainImages?.length || 0
: dataset?.[pageToggle]?.length || 0) && (
<>
<div className="p-1">{"|"}</div>
<div
className="cursor-pointer p-2 text-primary hover:rounded-lg hover:bg-secondary"
onClick={() =>
setSelectedImages([
...(pageToggle === "train"
? trainImages || []
: dataset?.[pageToggle] || []),
])
}
>
{t("select_all", { ns: "views/events" })}
</div>
</>
)}
</div>
<Button
className="flex gap-2"

View File

@ -572,6 +572,8 @@ export default function SearchView({
selectedObjects={selectedObjects}
setSelectedObjects={setSelectedObjects}
pullLatestData={refresh}
onSelectAllObjects={onSelectAllObjects}
totalItems={uniqueResults.length}
/>
</div>
)}