Miscellaneous Fixes (#21005)

* Update live view docs

* Use SWR as the single source of truth for searchDetail

Rather than maintaining separate state, derive the selected item from the SWR cache. This fixes websocket sync when regenerating descriptions or fetching transcriptions.

* Fix key warning in console

* Don't try to fetch the event from the review item for audio events

* Update audio transcription toast wording

* Add a community supported badge to specific detectors in the info summaries to better separate community supported detectors from officially supported ones

* Make object classification publish to tracked object update and add examples for state classification

* Add an item to the advanced docs about TensorFlow thread limiting

* Don't show Frigate+ submission for in-progress objects

* Fix iOS not reporting video dimensions on initial metadata load

In testing, polling with requestAnimationFrame finds the dimensions within 2 frames.

* Catch the Jetson NVIDIA device tree in SoC detection

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

View File

@@ -53,6 +53,17 @@ environment_vars:
  VARIABLE_NAME: variable_value
```

#### TensorFlow Thread Configuration

If you encounter thread creation errors during classification model training, you can limit TensorFlow's thread usage:

```yaml
environment_vars:
  TF_INTRA_OP_PARALLELISM_THREADS: "2" # Threads within operations (0 = use default)
  TF_INTER_OP_PARALLELISM_THREADS: "2" # Threads between operations (0 = use default)
  TF_DATASET_THREAD_POOL_SIZE: "2" # Data pipeline threads (0 = use default)
```
### `database`

Tracked object and recording information is managed in a SQLite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.

View File

@@ -214,6 +214,42 @@ For restreamed cameras, go2rtc remains active but does not use system resources
Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.
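A minimal sketch of the permanent, config-file route (the camera name is hypothetical):

```yaml
cameras:
  front_door:
    enabled: False # removes all related UI elements, including historical footage access
```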
### Live player error messages
When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. These indicate playback, codec, or network issues on the client/browser side, not a problem with the Frigate server itself. Below are the common messages you may see and simple actions you can take to try to resolve them.
- **startup**
  - What it means: The player failed to initialize or connect to the live stream (network or startup error).
  - What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser.
  - Possible console messages from the player code:
    - `Error opening MediaSource.`
    - `Browser reported a network error.`
    - `Max error count ${errorCount} exceeded.` (the numeric value will vary)
- **mse-decode**
  - What it means: The browser reported a decoding error while trying to play the stream, usually the result of a codec incompatibility or corrupted frames.
  - What to try: Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC (see the sketch after this list). Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer.
  - Possible console messages from the player code:
    - `Safari cannot open MediaSource.`
    - `Safari reported InvalidStateError.`
    - `Safari reported decoding errors.`
- **stalled**
  - What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving).
  - What to try: This usually indicates that the browser is struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval; shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings.
  - Possible console messages from the player code:
    - `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.`
    - `Media playback has stalled after <n> seconds due to insufficient buffering or a network interruption.` (the seconds value will vary)
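The AAC transcoding mentioned under **mse-decode** can be handled by go2rtc itself. A minimal sketch, assuming a hypothetical camera URL and stream name:

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://192.168.1.10:554/stream # original camera feed with non-AAC audio
      - "ffmpeg:front_door#audio=aac" # add an AAC audio track via ffmpeg
```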
## Live view FAQ

1. **Why don't I have audio in my Live view?**
@@ -277,3 +313,38 @@ Note that disabling a camera through the config file (`enabled: False`) removes
7. **My camera streams have lots of visual artifacts / distortion.**

   Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.

8. **Why does my camera stream switch aspect ratios on the Live dashboard?**

   Your camera may change aspect ratios on the dashboard because Frigate uses different streams for different purposes. With go2rtc and Smart Streaming, Frigate shows a static image from the `detect` stream when no activity is present, and switches to the live stream when motion is detected. The camera image will change size if your streams use different aspect ratios.

   To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches.

   Example: Resolutions from two streams

   - Mismatched (may cause aspect ratio switching on the dashboard):
     - Live/go2rtc stream: 1920x1080 (16:9)
     - Detect stream: 640x352 (~1.82:1, not 16:9)
   - Matched (prevents switching):
     - Live/go2rtc stream: 1920x1080 (16:9)
     - Detect stream: 640x360 (16:9)

   You can update the detect settings in your camera config to match the aspect ratio of your go2rtc live stream. For example:

   ```yaml
   cameras:
     front_door:
       detect:
         width: 640
         height: 360 # set this to 360 instead of 352
       ffmpeg:
         inputs:
           - path: rtsp://127.0.0.1:8554/front_door # main stream 1920x1080
             roles:
               - record
           - path: rtsp://127.0.0.1:8554/front_door_sub # sub stream 640x352
             roles:
               - detect
   ```

View File

@@ -3,6 +3,8 @@ id: object_detectors
title: Object Detectors
---

import CommunityBadge from '@site/src/components/CommunityBadge';

# Supported Hardware

:::info
@@ -13,8 +15,8 @@ Frigate supports multiple different detectors that work on different types of ha
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models provided on the cloud on [their website](https://hub.degirum.com).
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- <CommunityBadge /> [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models are provided in the cloud on [their website](https://hub.degirum.com).
**AMD**
@@ -34,16 +36,16 @@ Frigate supports multiple different detectors that work on different types of ha
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.

**Nvidia Jetson**
**Nvidia Jetson** <CommunityBadge />

- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.

**Rockchip**
**Rockchip** <CommunityBadge />

- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

**Synaptics**
**Synaptics** <CommunityBadge />

- [Synaptics](#synaptics): SyNAP models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.

View File

@@ -3,6 +3,8 @@ id: hardware
title: Recommended hardware
---

import CommunityBadge from '@site/src/components/CommunityBadge';

## Cameras
Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.
@@ -59,7 +61,7 @@ Frigate supports multiple different detectors that work on different types of ha
  - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
  - [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
  - Runs best with tiny, small, or medium-size models
@@ -84,32 +86,26 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**

- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
  - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
  - Runs well with any size models including large

**Rockchip**
- <CommunityBadge /> [Jetson](#nvidia-jetson): Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6.

**Rockchip** <CommunityBadge />

- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
  - [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
  - Runs best with tiny or small size models
  - Runs efficiently on low power hardware

**Synaptics**
**Synaptics** <CommunityBadge />

- [Synaptics](#synaptics): SyNAP models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.

:::
### Synaptics

- **Synaptics**: Default model is **mobilenet**

| Name          | Synaptics SL1680 Inference Time |
| ------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms                         |
| yolov5m       | ~ 118 ms                        |

### Hailo-8

Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@@ -261,7 +257,7 @@ Inference speeds may vary depending on the host platform. The above data was mea
### Nvidia Jetson

Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. Frigate will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).

Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
@@ -282,6 +278,15 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.

### Synaptics

- **Synaptics**: Default model is **mobilenet**

| Name          | Synaptics SL1680 Inference Time |
| ------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms                         |
| yolov5m       | ~ 118 ms                        |
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

View File

@@ -159,11 +159,44 @@ Message published for updates to tracked object metadata, for example:
}
```
#### Object Classification Update

Message published when [object classification](/configuration/custom_classification/object_classification) reaches consensus on a classification result.

**Sub label type:**

```json
{
  "type": "classification",
  "id": "1607123955.475377-mxklsc",
  "camera": "front_door_cam",
  "timestamp": 1607123958.748393,
  "model": "person_classifier",
  "sub_label": "delivery_person",
  "score": 0.87
}
```
**Attribute type:**

```json
{
  "type": "classification",
  "id": "1607123955.475377-mxklsc",
  "camera": "front_door_cam",
  "timestamp": 1607123958.748393,
  "model": "helmet_detector",
  "attribute": "yes",
  "score": 0.92
}
```
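As a sketch of consuming these messages, the snippet below subscribes to `frigate/tracked_object_update` with paho-mqtt and filters for the classification payloads shown above (the broker address and the paho 2.x callback-API pin are assumptions):

```python
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    """Print classification consensus results as they arrive."""
    payload = json.loads(msg.payload)

    # Ignore other tracked object updates (description, face, lpr, ...)
    if payload.get("type") != "classification":
        return

    # Sub label payloads carry "sub_label"; attribute payloads carry "attribute"
    label = payload.get("sub_label") or payload.get("attribute")
    print(f"{payload['camera']}: {payload['model']} -> {label} ({payload['score']:.2f})")


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x API
client.on_message = on_message
client.connect("mqtt.local", 1883)  # hypothetical broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```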
### `frigate/reviews`

Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.

An `update` with the same ID will be published when:

- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
@@ -308,6 +341,11 @@ Publishes transcribed text for audio detected on this camera.
**NOTE:** Requires audio detection and transcription to be enabled
### `frigate/<camera_name>/classification/<model_name>`

Publishes the current state detected by a state classification model for the camera. The topic name includes the model name as configured in your classification settings.

The published value is the detected state class name (e.g., `open`, `closed`, `on`, `off`). The state is only published when it changes, helping to reduce unnecessary MQTT traffic.
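As an illustration, a Home Assistant MQTT sensor could track this state topic directly (camera and model names here are hypothetical):

```yaml
mqtt:
  sensor:
    - name: "Garage Door State"
      state_topic: "frigate/garage_cam/classification/garage_door"
```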
### `frigate/<camera_name>/enabled/set`
Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.

View File

@@ -0,0 +1,23 @@
import React from "react";

export default function CommunityBadge() {
  return (
    <span
      title="This detector is maintained by community members who provide code, maintenance, and support. See the contributing boards documentation for more information."
      style={{
        display: "inline-block",
        backgroundColor: "#f1f3f5",
        color: "#24292f",
        fontSize: "11px",
        fontWeight: 600,
        padding: "2px 6px",
        borderRadius: "3px",
        border: "1px solid #d1d9e0",
        marginLeft: "4px",
        cursor: "help",
      }}
    >
      Community Supported
    </span>
  );
}

View File

@@ -1,6 +1,7 @@ """Real time processor that works with classification tflite models."""
"""Real time processor that works with classification tflite models."""
import datetime
import json
import logging
import os
from typing import Any
@@ -21,6 +22,7 @@ from frigate.config.classification import (
)
from frigate.const import CLIPS_DIR, MODEL_CACHE_DIR
from frigate.log import redirect_output_to_logger
from frigate.types import TrackedObjectUpdateTypesEnum
from frigate.util.builtin import EventsPerSecond, InferenceSpeed, load_labels
from frigate.util.object import box_overlaps, calculate_region
@@ -284,6 +286,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
        config: FrigateConfig,
        model_config: CustomClassificationConfig,
        sub_label_publisher: EventMetadataPublisher,
        requestor: InterProcessRequestor,
        metrics: DataProcessorMetrics,
    ):
        super().__init__(config, metrics)
@@ -292,6 +295,7 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
        self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
        self.interpreter: Interpreter | None = None
        self.sub_label_publisher = sub_label_publisher
        self.requestor = requestor
        self.tensor_input_details: dict[str, Any] | None = None
        self.tensor_output_details: dict[str, Any] | None = None
        self.classification_history: dict[str, list[tuple[str, float, float]]] = {}
@@ -486,6 +490,8 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
            )

        if consensus_label is not None:
            camera = obj_data["camera"]

            if (
                self.model_config.object_config.classification_type
                == ObjectClassificationType.sub_label
@@ -494,6 +500,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                    (object_id, consensus_label, consensus_score),
                    EventMetadataTypeEnum.sub_label,
                )
                self.requestor.send_data(
                    "tracked_object_update",
                    json.dumps(
                        {
                            "type": TrackedObjectUpdateTypesEnum.classification,
                            "id": object_id,
                            "camera": camera,
                            "timestamp": now,
                            "model": self.model_config.name,
                            "sub_label": consensus_label,
                            "score": consensus_score,
                        }
                    ),
                )
            elif (
                self.model_config.object_config.classification_type
                == ObjectClassificationType.attribute
@@ -507,6 +527,20 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
                    ),
                    EventMetadataTypeEnum.attribute.value,
                )
                self.requestor.send_data(
                    "tracked_object_update",
                    json.dumps(
                        {
                            "type": TrackedObjectUpdateTypesEnum.classification,
                            "id": object_id,
                            "camera": camera,
                            "timestamp": now,
                            "model": self.model_config.name,
                            "attribute": consensus_label,
                            "score": consensus_score,
                        }
                    ),
                )

    def handle_request(self, topic, request_data):
        if topic == EmbeddingsRequestEnum.reload_classification_model.value:

View File

@@ -195,6 +195,7 @@ class EmbeddingMaintainer(threading.Thread):
                    self.config,
                    model_config,
                    self.event_metadata_publisher,
                    self.requestor,
                    self.metrics,
                )
            )
@@ -339,6 +340,7 @@ class EmbeddingMaintainer(threading.Thread):
                self.config,
                model_config,
                self.event_metadata_publisher,
                self.requestor,
                self.metrics,
            )

View File

@@ -30,3 +30,4 @@ class TrackedObjectUpdateTypesEnum(str, Enum):
    description = "description"
    face = "face"
    lpr = "lpr"
    classification = "classification"

View File

@@ -130,8 +130,13 @@ def get_soc_type() -> Optional[str]:
    """Get the SoC type from device tree."""
    try:
        with open("/proc/device-tree/compatible") as file:
            soc = file.read().split(",")[-1].strip("\x00")
            return soc
            content = file.read()

            # Check for Jetson devices
            if "nvidia" in content:
                return None

            return content.split(",")[-1].strip("\x00")
    except FileNotFoundError:
        logger.debug("Could not determine SoC type from device tree")
        return None

View File

@@ -103,7 +103,7 @@
    "regenerate": "A new description has been requested from {{provider}}. Depending on the speed of your provider, the new description may take some time to regenerate.",
    "updatedSublabel": "Successfully updated sub label.",
    "updatedLPR": "Successfully updated license plate.",
    "audioTranscription": "Successfully requested audio transcription."
    "audioTranscription": "Successfully requested audio transcription. Depending on the speed of your Frigate server, the transcription may take some time to complete."
  },
  "error": {
    "regenerate": "Failed to call {{provider}} for a new description: {{errorMessage}}",

View File

@@ -572,9 +572,8 @@ export function SortTypeContent({
        className="w-full space-y-1"
      >
        {availableSortTypes.map((value) => (
          <div className="flex flex-row gap-2">
          <div key={value} className="flex flex-row gap-2">
            <RadioGroupItem
              key={value}
              value={value}
              id={`sort-${value}`}
              className={

View File

@@ -42,9 +42,10 @@ export default function DetailActionsMenu({
    return `start/${startTime}/end/${endTime}`;
  }, [search]);

  const { data: reviewItem } = useSWR<ReviewSegment>([
    `review/event/${search.id}`,
  ]);
  // currently, audio event ids are not saved in review items
  const { data: reviewItem } = useSWR<ReviewSegment>(
    search.data?.type === "audio" ? null : [`review/event/${search.id}`],
  );

  return (
    <DropdownMenu open={isOpen} onOpenChange={setIsOpen}>

View File

@@ -1295,6 +1295,7 @@ function ObjectDetailsTab({
        {search.data.type === "object" &&
          config?.plus?.enabled &&
          search.end_time != undefined &&
          search.has_snapshot && (
            <div
              className={cn(

View File

@@ -94,24 +94,52 @@ export default function HlsVideoPlayer({
  const [loadedMetadata, setLoadedMetadata] = useState(false);
  const [bufferTimeout, setBufferTimeout] = useState<NodeJS.Timeout>();

  const applyVideoDimensions = useCallback(
    (width: number, height: number) => {
      if (setFullResolution) {
        setFullResolution({ width, height });
      }

      setVideoDimensions({ width, height });

      if (height > 0) {
        setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
      }
    },
    [setFullResolution],
  );

  const handleLoadedMetadata = useCallback(() => {
    setLoadedMetadata(true);
    if (videoRef.current) {
      const width = videoRef.current.videoWidth;
      const height = videoRef.current.videoHeight;
      if (setFullResolution) {
        setFullResolution({
          width,
          height,
        });
      }
      setVideoDimensions({ width, height });
      setTallCamera(width / height < ASPECT_VERTICAL_LAYOUT);
    }
    if (!videoRef.current) {
      return;
    }
  }, [videoRef, setFullResolution]);

    const width = videoRef.current.videoWidth;
    const height = videoRef.current.videoHeight;

    // iOS Safari occasionally reports 0x0 for videoWidth/videoHeight.
    // Poll with requestAnimationFrame until dimensions become available (or timeout).
    if (width > 0 && height > 0) {
      applyVideoDimensions(width, height);
      return;
    }

    let attempts = 0;
    const maxAttempts = 120; // ~2 seconds at 60fps

    const tryGetDims = () => {
      if (!videoRef.current) return;
      const w = videoRef.current.videoWidth;
      const h = videoRef.current.videoHeight;
      if (w > 0 && h > 0) {
        applyVideoDimensions(w, h);
        return;
      }
      if (attempts < maxAttempts) {
        attempts += 1;
        requestAnimationFrame(tryGetDims);
      }
    };

    requestAnimationFrame(tryGetDims);
  }, [videoRef, applyVideoDimensions]);

  useEffect(() => {
    if (!videoRef.current) {

View File

@@ -91,7 +91,7 @@ function MSEPlayer({
    (error: LivePlayerError, description: string = "Unknown error") => {
      // eslint-disable-next-line no-console
      console.error(
        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
        `${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
      );
      onError?.(error);
    },

View File

@@ -42,7 +42,7 @@ export default function WebRtcPlayer({
    (error: LivePlayerError, description: string = "Unknown error") => {
      // eslint-disable-next-line no-console
      console.error(
        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-view-faq`,
        `${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live/#live-player-error-messages`,
      );
      onError?.(error);
    },

View File

@@ -16,7 +16,6 @@ import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator
import useImageLoaded from "@/hooks/use-image-loaded";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useTrackedObjectUpdate } from "@/api/ws";
import { isEqual } from "lodash";
import TimeAgo from "@/components/dynamic/TimeAgo";
import SearchResultActions from "@/components/menu/SearchResultActions";
import { SearchTab } from "@/components/overlay/detail/SearchDetailDialog";
@@ -25,14 +24,12 @@ import { useTranslation } from "react-i18next";
import { getTranslatedLabel } from "@/utils/i18n";

type ExploreViewProps = {
  searchDetail: SearchResult | undefined;
  setSearchDetail: (search: SearchResult | undefined) => void;
  setSimilaritySearch: (search: SearchResult) => void;
  onSelectSearch: (item: SearchResult, ctrl: boolean, page?: SearchTab) => void;
};

export default function ExploreView({
  searchDetail,
  setSearchDetail,
  setSimilaritySearch,
  onSelectSearch,
@@ -83,20 +80,6 @@ export default function ExploreView({
    }
  }, [wsUpdate, mutate]);

  // update search detail when results change
  useEffect(() => {
    if (searchDetail && events) {
      const updatedSearchDetail = events.find(
        (result) => result.id === searchDetail.id,
      );
      if (updatedSearchDetail && !isEqual(updatedSearchDetail, searchDetail)) {
        setSearchDetail(updatedSearchDetail);
      }
    }
  }, [events, searchDetail, setSearchDetail]);

  if (isLoading) {
    return (
      <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />

View File

@@ -19,7 +19,6 @@ import useKeyboardListener, {
import scrollIntoView from "scroll-into-view-if-needed";
import InputWithTags from "@/components/input/InputWithTags";
import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
import { isEqual } from "lodash";
import { formatDateToLocaleString } from "@/utils/dateUtil";
import SearchThumbnailFooter from "@/components/card/SearchThumbnailFooter";
import ExploreSettings from "@/components/settings/SearchSettings";
@@ -213,7 +212,7 @@ export default function SearchView({
  // detail

  const [searchDetail, setSearchDetail] = useState<SearchResult>();
  const [selectedId, setSelectedId] = useState<string>();
  const [page, setPage] = useState<SearchTab>("snapshot");

  // remove duplicate event ids
@@ -229,6 +228,16 @@ export default function SearchView({
    return results;
  }, [searchResults]);
  const searchDetail = useMemo(() => {
    if (!selectedId) return undefined;

    // summary view
    if (defaultView === "summary" && exploreEvents) {
      return exploreEvents.find((r) => r.id === selectedId);
    }

    // grid view
    return uniqueResults.find((r) => r.id === selectedId);
  }, [selectedId, uniqueResults, exploreEvents, defaultView]);

  // search interaction

  const [selectedObjects, setSelectedObjects] = useState<string[]>([]);
@@ -256,7 +265,7 @@ export default function SearchView({
        }
      } else {
        setPage(page);
        setSearchDetail(item);
        setSelectedId(item.id);
      }
    },
    [selectedObjects],
@@ -295,26 +304,12 @@ export default function SearchView({
      }
    };

  // update search detail when results change
  // clear selected item when search results clear
  useEffect(() => {
    if (searchDetail) {
      const results =
        defaultView === "summary" ? exploreEvents : searchResults?.flat();
      if (results) {
        const updatedSearchDetail = results.find(
          (result) => result.id === searchDetail.id,
        );
        if (
          updatedSearchDetail &&
          !isEqual(updatedSearchDetail, searchDetail)
        ) {
          setSearchDetail(updatedSearchDetail);
        }
      }
    if (!searchResults && !exploreEvents) {
      setSelectedId(undefined);
    }
  }, [searchResults, exploreEvents, searchDetail, defaultView]);
  }, [searchResults, exploreEvents]);
  const hasExistingSearch = useMemo(
    () => searchResults != undefined || searchFilter != undefined,
@@ -340,7 +335,7 @@ export default function SearchView({
          ? results.length - 1
          : (currentIndex - 1 + results.length) % results.length;

      setSearchDetail(results[newIndex]);
      setSelectedId(results[newIndex].id);
    }
  }, [uniqueResults, exploreEvents, searchDetail, defaultView]);
@@ -357,7 +352,7 @@ export default function SearchView({
      const newIndex =
        currentIndex === -1 ? 0 : (currentIndex + 1) % results.length;

      setSearchDetail(results[newIndex]);
      setSelectedId(results[newIndex].id);
    }
  }, [uniqueResults, exploreEvents, searchDetail, defaultView]);
@@ -509,7 +504,7 @@ export default function SearchView({
      <SearchDetailDialog
        search={searchDetail}
        page={page}
        setSearch={setSearchDetail}
        setSearch={(item) => setSelectedId(item?.id)}
        setSearchPage={setPage}
        setSimilarity={
          searchDetail && (() => setSimilaritySearch(searchDetail))
@@ -629,7 +624,7 @@ export default function SearchView({
          detail: boolean,
        ) => {
          if (detail && selectedObjects.length == 0) {
            setSearchDetail(value);
            setSelectedId(value.id);
          } else {
            onSelectSearch(
              value,
@@ -724,8 +719,7 @@ export default function SearchView({
            defaultView == "summary" && (
              <div className="scrollbar-container flex size-full flex-col overflow-y-auto">
                <ExploreView
                  searchDetail={searchDetail}
                  setSearchDetail={setSearchDetail}
                  setSearchDetail={(item) => setSelectedId(item?.id)}
                  setSimilaritySearch={setSimilaritySearch}
                  onSelectSearch={onSelectSearch}
                />