Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-06 05:24:11 +03:00)
Miscellaneous Fixes (#20989)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* Include DB in safe mode config: copy the database config when going into safe mode to avoid creating a new one if a user has configured a separate location
* Fix documentation for example log module
* Set minimum duration for recording segments: due to the inpoint logic, some recordings would get clipped at the end of the segment with a non-zero duration but not enough duration to include a frame. 100 ms is a safe value for any video that is 10 fps or higher to contain a frame.
* Add docs to explain object assignment for classification
* Add warning for Intel GPU stats bug: show a warning with an explanation on the GPU stats page when all Intel GPU values are 0
* Update docs with creation instructions
* Reset loading state when moving through events in tracking details
* Disable PiP on preview players
* Improve HLS handling for startPosition: the startPosition was incorrectly calculated assuming continuous recordings, when it needs to consider that only some segments exist. This extracts that logic to a utility so all players can use it.

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
This commit is contained in:
parent
3f9b153758
commit
224cbdc2d6
@@ -25,7 +25,7 @@ Examples of available modules are:

 - `frigate.app`
 - `frigate.mqtt`
-- `frigate.object_detection`
+- `frigate.object_detection.base`
 - `detector.<detector_name>`
 - `watchdog.<camera_name>`
 - `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
@@ -35,6 +35,15 @@ For object classification:

 - Ideal when multiple attributes can coexist independently.
 - Example: Detecting if a `person` in a construction yard is wearing a helmet or not.

+## Assignment Requirements
+
+Sub labels and attributes are only assigned when both conditions are met:
+
+1. **Threshold**: Each classification attempt must have a confidence score that meets or exceeds the configured `threshold` (default: `0.8`).
+2. **Class Consensus**: After at least 3 classification attempts, 60% of attempts must agree on the same class label. If the consensus class is `none`, no assignment is made.
+
+This two-step verification prevents false positives by requiring consistent predictions across multiple frames before assigning a sub label or attribute.
+
 ## Example use cases

 ### Sub label
@@ -66,14 +75,18 @@ classification:

 ## Training the model

-Creating and training the model is done within the Frigate UI using the `Classification` page.
+Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:

-### Getting Started
+### Step 1: Name and Define
+
+Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+
+### Step 2: Assign Training Examples
+
+The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.

 When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

-// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
-
 ### Improving the Model

 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
@@ -48,13 +48,23 @@ classification:

 ## Training the model

-Creating and training the model is done within the Frigate UI using the `Classification` page.
+Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:

-### Getting Started
+### Step 1: Name and Define

-When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
+Enter a name for your model and define at least 2 classes (states) that represent mutually exclusive states. For example, `open` and `closed` for a door, or `on` and `off` for lights.

-// TODO add this section once UI is implemented. Explain process of selecting a crop.
+### Step 2: Select the Crop Area
+
+Choose one or more cameras and draw a rectangle over the area of interest for each camera. The crop should be tight around the region you want to classify to avoid extra signals unrelated to what is being classified. You can drag and resize the rectangle to adjust the crop area.
+
+### Step 3: Assign Training Examples
+
+The system will automatically generate example images from your camera feeds. You'll be guided through each class one at a time to select which images represent that state.
+
+**Important**: All images must be assigned to a state before training can begin. This includes images that may not be optimal, such as when people temporarily block the view, sun glare is present, or other distractions occur. Assign these images to the state that is actually present (based on what you know the state to be), not based on the distraction. This training helps the model correctly identify the state even when such conditions occur during inference.
+
+Once all images are assigned, training will begin automatically.

 ### Improving the Model
@@ -849,6 +849,7 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):

     clips = []
     durations = []
+    min_duration_ms = 100  # Minimum 100ms to ensure at least one video frame
     max_duration_ms = MAX_SEGMENT_DURATION * 1000

     recording: Recordings
@@ -866,11 +867,11 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
         if recording.end_time > end_ts:
             duration -= int((recording.end_time - end_ts) * 1000)

-        if duration <= 0:
-            # skip if the clip has no valid duration
+        if duration < min_duration_ms:
+            # skip if the clip has no valid duration (too short to contain frames)
             continue

-        if 0 < duration < max_duration_ms:
+        if min_duration_ms <= duration < max_duration_ms:
            clip["keyFrameDurations"] = [duration]
            clips.append(clip)
            durations.append(duration)
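The boundary logic of the min-duration fix above can be illustrated with a self-contained sketch. Names and the `MAX_SEGMENT_DURATION` value are illustrative, not Frigate's actual code; the point is that a clip shorter than 100 ms (one frame interval at 10 fps) is now skipped instead of being emitted as a frameless clip.

```python
# Standalone sketch of the segment-duration filter from the diff above.
MAX_SEGMENT_DURATION = 10  # seconds, assumed value for illustration
MIN_DURATION_MS = 100      # one frame interval at 10 fps
MAX_DURATION_MS = MAX_SEGMENT_DURATION * 1000

def usable_duration_ms(seg_start: float, seg_end: float, end_ts: float) -> int:
    """Duration of a segment in ms after clipping to the requested end_ts."""
    duration = int((seg_end - seg_start) * 1000)
    if seg_end > end_ts:
        duration -= int((seg_end - end_ts) * 1000)
    return duration

def keep_segment(duration_ms: int) -> bool:
    """A clip is kept only if it is long enough to contain at least one frame."""
    return MIN_DURATION_MS <= duration_ms < MAX_DURATION_MS

# A 50 ms leftover at the end of the window is dropped; a 2 s clip is kept.
assert not keep_segment(usable_duration_ms(0.0, 10.0, 0.05))
assert keep_segment(usable_duration_ms(0.0, 2.0, 100.0))
```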
@@ -792,6 +792,10 @@ class FrigateConfig(FrigateBaseModel):
            # copy over auth and proxy config in case auth needs to be enforced
            safe_config["auth"] = config.get("auth", {})
            safe_config["proxy"] = config.get("proxy", {})

+           # copy over database config for auth and so a new db is not created
+           safe_config["database"] = config.get("database", {})
+
            return cls.parse_object(safe_config, **context)

        # Validate and return the config dict.
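The safe-mode change above only carries forward the settings needed to boot. A minimal sketch of the idea, with assumed dict shapes and a hypothetical helper name (Frigate's real code builds `safe_config` inside `FrigateConfig`):

```python
# Sketch: safe mode keeps auth/proxy (so authentication is still enforced)
# and now also database (so a custom DB location is reused rather than a
# new DB being created at the default path). Everything else is dropped.
def build_safe_config(config: dict) -> dict:
    safe_config: dict = {}
    safe_config["auth"] = config.get("auth", {})
    safe_config["proxy"] = config.get("proxy", {})
    safe_config["database"] = config.get("database", {})
    return safe_config

user_config = {"database": {"path": "/media/frigate/frigate.db"}, "cameras": {}}
safe = build_safe_config(user_config)
assert safe["database"]["path"] == "/media/frigate/frigate.db"
assert "cameras" not in safe  # camera config is not needed to boot safe mode
```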
@@ -76,7 +76,12 @@
      }
    },
    "npuUsage": "NPU Usage",
-    "npuMemory": "NPU Memory"
+    "npuMemory": "NPU Memory",
+    "intelGpuWarning": {
+      "title": "Intel GPU Stats Warning",
+      "message": "GPU stats unavailable",
+      "description": "This is a known bug in Intel's GPU stats reporting tools (intel_gpu_top) where it will break and repeatedly return a GPU usage of 0% even in cases where hardware acceleration and object detection are correctly running on the (i)GPU. This is not a Frigate bug. You can restart the host to temporarily fix the issue and confirm that the GPU is working correctly. This does not affect performance."
+    }
  },
  "otherProcesses": {
    "title": "Other Processes",
@@ -56,6 +56,7 @@ export function TrackingDetails({
   const apiHost = useApiHost();
   const imgRef = useRef<HTMLImageElement | null>(null);
   const [imgLoaded, setImgLoaded] = useState(false);
+  const [isVideoLoading, setIsVideoLoading] = useState(true);
   const [displaySource, _setDisplaySource] = useState<"video" | "image">(
     "video",
   );
@@ -70,6 +71,10 @@ export function TrackingDetails({
     (event.start_time ?? 0) + annotationOffset / 1000 - REVIEW_PADDING,
   );

+  useEffect(() => {
+    setIsVideoLoading(true);
+  }, [event.id]);
+
   const { data: eventSequence } = useSWR<TrackingDetailsSequence[]>([
     "timeline",
     {
@@ -527,22 +532,28 @@ export function TrackingDetails({
        )}
      >
        {displaySource == "video" && (
-          <HlsVideoPlayer
-            videoRef={videoRef}
-            containerRef={containerRef}
-            visible={true}
-            currentSource={videoSource}
-            hotKeys={false}
-            supportsFullscreen={false}
-            fullscreen={false}
-            frigateControls={true}
-            onTimeUpdate={handleTimeUpdate}
-            onSeekToTime={handleSeekToTime}
-            onUploadFrame={onUploadFrameToPlus}
-            isDetailMode={true}
-            camera={event.camera}
-            currentTimeOverride={currentTime}
-          />
+          <>
+            <HlsVideoPlayer
+              videoRef={videoRef}
+              containerRef={containerRef}
+              visible={true}
+              currentSource={videoSource}
+              hotKeys={false}
+              supportsFullscreen={false}
+              fullscreen={false}
+              frigateControls={true}
+              onTimeUpdate={handleTimeUpdate}
+              onSeekToTime={handleSeekToTime}
+              onUploadFrame={onUploadFrameToPlus}
+              onPlaying={() => setIsVideoLoading(false)}
+              isDetailMode={true}
+              camera={event.camera}
+              currentTimeOverride={currentTime}
+            />
+            {isVideoLoading && (
+              <ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
+            )}
+          </>
        )}
        {displaySource == "image" && (
          <>
@@ -130,6 +130,8 @@ export default function HlsVideoPlayer({
      return;
    }

+    setLoadedMetadata(false);
+
    const currentPlaybackRate = videoRef.current.playbackRate;

    if (!useHlsCompat) {
@@ -309,6 +309,7 @@ function PreviewVideoPlayer({
      playsInline
      muted
      disableRemotePlayback
+      disablePictureInPicture
      onSeeked={onPreviewSeeked}
      onLoadedData={() => {
        if (firstLoad) {
@@ -2,7 +2,10 @@ import { Recording } from "@/types/record";
 import { DynamicPlayback } from "@/types/playback";
 import { PreviewController } from "../PreviewPlayer";
 import { TimeRange, TrackingDetailsSequence } from "@/types/timeline";
-import { calculateInpointOffset } from "@/utils/videoUtil";
+import {
+  calculateInpointOffset,
+  calculateSeekPosition,
+} from "@/utils/videoUtil";

 type PlayerMode = "playback" | "scrubbing";

@@ -72,38 +75,20 @@ export class DynamicVideoController {
      return;
    }

-    if (
-      this.recordings.length == 0 ||
-      time < this.recordings[0].start_time ||
-      time > this.recordings[this.recordings.length - 1].end_time
-    ) {
-      this.setNoRecording(true);
-      return;
-    }
-
    if (this.playerMode != "playback") {
      this.playerMode = "playback";
    }

-    let seekSeconds = 0;
-    (this.recordings || []).every((segment) => {
-      // if the next segment is past the desired time, stop calculating
-      if (segment.start_time > time) {
-        return false;
-      }
-
-      if (segment.end_time < time) {
-        seekSeconds += segment.end_time - segment.start_time;
-        return true;
-      }
-
-      seekSeconds +=
-        segment.end_time - segment.start_time - (segment.end_time - time);
-      return true;
-    });
-
-    // adjust for HLS inpoint offset
-    seekSeconds -= this.inpointOffset;
+    const seekSeconds = calculateSeekPosition(
+      time,
+      this.recordings,
+      this.inpointOffset,
+    );
+
+    if (seekSeconds === undefined) {
+      this.setNoRecording(true);
+      return;
+    }

    if (seekSeconds != 0) {
      this.playerController.currentTime = seekSeconds;
@@ -14,7 +14,10 @@ import { VideoResolutionType } from "@/types/live";
 import axios from "axios";
 import { cn } from "@/lib/utils";
 import { useTranslation } from "react-i18next";
-import { calculateInpointOffset } from "@/utils/videoUtil";
+import {
+  calculateInpointOffset,
+  calculateSeekPosition,
+} from "@/utils/videoUtil";
 import { isFirefox } from "react-device-detect";

 /**
@@ -109,10 +112,10 @@ export default function DynamicVideoPlayer({
  const [isLoading, setIsLoading] = useState(false);
  const [isBuffering, setIsBuffering] = useState(false);
  const [loadingTimeout, setLoadingTimeout] = useState<NodeJS.Timeout>();
-  const [source, setSource] = useState<HlsSource>({
-    playlist: `${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
-    startPosition: startTimestamp ? startTimestamp - timeRange.after : 0,
-  });
+  // Don't set source until recordings load - we need accurate startPosition
+  // to avoid hls.js clamping to video end when startPosition exceeds duration
+  const [source, setSource] = useState<HlsSource | undefined>(undefined);

  // start at correct time
@@ -184,7 +187,7 @@ export default function DynamicVideoPlayer({
  );

  useEffect(() => {
-    if (!controller || !recordings?.length) {
+    if (!recordings?.length) {
      if (recordings?.length == 0) {
        setNoRecording(true);
      }
@@ -192,10 +195,6 @@ export default function DynamicVideoPlayer({
      return;
    }

-    if (playerRef.current) {
-      playerRef.current.autoplay = !isScrubbing;
-    }
-
    let startPosition = undefined;

    if (startTimestamp) {
@@ -203,14 +202,12 @@ export default function DynamicVideoPlayer({
        recordingParams.after,
        (recordings || [])[0],
      );
-      const idealStartPosition = Math.max(
-        0,
-        startTimestamp - timeRange.after - inpointOffset,
-      );
-
-      if (idealStartPosition >= recordings[0].start_time - timeRange.after) {
-        startPosition = idealStartPosition;
-      }
+
+      startPosition = calculateSeekPosition(
+        startTimestamp,
+        recordings,
+        inpointOffset,
+      );
    }

    setSource({
@@ -218,6 +215,18 @@ export default function DynamicVideoPlayer({
      startPosition,
    });

+    // eslint-disable-next-line react-hooks/exhaustive-deps
+  }, [recordings]);
+
+  useEffect(() => {
+    if (!controller || !recordings?.length) {
+      return;
+    }
+
+    if (playerRef.current) {
+      playerRef.current.autoplay = !isScrubbing;
+    }
+
    setLoadingTimeout(setTimeout(() => setIsLoading(true), 1000));

    controller.newPlayback({
@@ -225,7 +234,7 @@ export default function DynamicVideoPlayer({
      timeRange,
    });

-    // we only want this to change when recordings update
+    // we only want this to change when controller or recordings update
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [controller, recordings]);
@@ -263,46 +272,48 @@ export default function DynamicVideoPlayer({

  return (
    <>
-      <HlsVideoPlayer
-        videoRef={playerRef}
-        containerRef={containerRef}
-        visible={!(isScrubbing || isLoading)}
-        currentSource={source}
-        hotKeys={hotKeys}
-        supportsFullscreen={supportsFullscreen}
-        fullscreen={fullscreen}
-        inpointOffset={inpointOffset}
-        onTimeUpdate={onTimeUpdate}
-        onPlayerLoaded={onPlayerLoaded}
-        onClipEnded={onValidateClipEnd}
-        onSeekToTime={(timestamp, play) => {
-          if (onSeekToTime) {
-            onSeekToTime(timestamp, play);
-          }
-        }}
-        onPlaying={() => {
-          if (isScrubbing) {
-            playerRef.current?.pause();
-          }
-
-          if (loadingTimeout) {
-            clearTimeout(loadingTimeout);
-          }
-
-          setNoRecording(false);
-        }}
-        setFullResolution={setFullResolution}
-        onUploadFrame={onUploadFrameToPlus}
-        toggleFullscreen={toggleFullscreen}
-        onError={(error) => {
-          if (error == "stalled" && !isScrubbing) {
-            setIsBuffering(true);
-          }
-        }}
-        isDetailMode={isDetailMode}
-        camera={contextCamera || camera}
-        currentTimeOverride={currentTime}
-      />
+      {source && (
+        <HlsVideoPlayer
+          videoRef={playerRef}
+          containerRef={containerRef}
+          visible={!(isScrubbing || isLoading)}
+          currentSource={source}
+          hotKeys={hotKeys}
+          supportsFullscreen={supportsFullscreen}
+          fullscreen={fullscreen}
+          inpointOffset={inpointOffset}
+          onTimeUpdate={onTimeUpdate}
+          onPlayerLoaded={onPlayerLoaded}
+          onClipEnded={onValidateClipEnd}
+          onSeekToTime={(timestamp, play) => {
+            if (onSeekToTime) {
+              onSeekToTime(timestamp, play);
+            }
+          }}
+          onPlaying={() => {
+            if (isScrubbing) {
+              playerRef.current?.pause();
+            }
+
+            if (loadingTimeout) {
+              clearTimeout(loadingTimeout);
+            }
+
+            setNoRecording(false);
+          }}
+          setFullResolution={setFullResolution}
+          onUploadFrame={onUploadFrameToPlus}
+          toggleFullscreen={toggleFullscreen}
+          onError={(error) => {
+            if (error == "stalled" && !isScrubbing) {
+              setIsBuffering(true);
+            }
+          }}
+          isDetailMode={isDetailMode}
+          camera={contextCamera || camera}
+          currentTimeOverride={currentTime}
+        />
+      )}
      <PreviewPlayer
        className={cn(
          className,
@@ -24,3 +24,57 @@ export function calculateInpointOffset(

  return 0;
}
+
+/**
+ * Calculates the video player time (in seconds) for a given timestamp
+ * by iterating through recording segments and summing their durations.
+ * This accounts for the fact that the video is a concatenation of segments,
+ * not a single continuous stream.
+ *
+ * @param timestamp - The target timestamp to seek to
+ * @param recordings - Array of recording segments
+ * @param inpointOffset - HLS inpoint offset to subtract from the result
+ * @returns The calculated seek position in seconds, or undefined if timestamp is out of range
+ */
+export function calculateSeekPosition(
+  timestamp: number,
+  recordings: Recording[],
+  inpointOffset: number = 0,
+): number | undefined {
+  if (!recordings || recordings.length === 0) {
+    return undefined;
+  }
+
+  // Check if timestamp is within the recordings range
+  if (
+    timestamp < recordings[0].start_time ||
+    timestamp > recordings[recordings.length - 1].end_time
+  ) {
+    return undefined;
+  }
+
+  let seekSeconds = 0;
+
+  (recordings || []).every((segment) => {
+    // if the next segment is past the desired time, stop calculating
+    if (segment.start_time > timestamp) {
+      return false;
+    }
+
+    if (segment.end_time < timestamp) {
+      // Add the full duration of this segment
+      seekSeconds += segment.end_time - segment.start_time;
+      return true;
+    }
+
+    // We're in this segment - calculate position within it
+    seekSeconds +=
+      segment.end_time - segment.start_time - (segment.end_time - timestamp);
+    return true;
+  });
+
+  // Adjust for HLS inpoint offset
+  seekSeconds -= inpointOffset;
+
+  return seekSeconds >= 0 ? seekSeconds : undefined;
+}
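The seek-position algorithm above maps a wall-clock timestamp to a position in a video that is a concatenation of non-contiguous segments. A Python port of the same summation logic makes the gap handling easy to verify; segment tuples `(start_time, end_time)` stand in for the `Recording` objects, and the function name is just an illustrative transliteration.

```python
# Python sketch of the calculateSeekPosition logic: player time is the sum
# of preceding segment durations, not the wall-clock offset from the start.
from typing import Optional

def calculate_seek_position(
    timestamp: float,
    recordings: list[tuple[float, float]],
    inpoint_offset: float = 0.0,
) -> Optional[float]:
    if not recordings:
        return None
    if timestamp < recordings[0][0] or timestamp > recordings[-1][1]:
        return None  # outside the recorded range -> "no recording"

    seek_seconds = 0.0
    for start, end in recordings:
        if start > timestamp:
            break  # later segments don't contribute
        if end < timestamp:
            seek_seconds += end - start  # whole segment precedes the target
        else:
            seek_seconds += timestamp - start  # partial position inside it
            break

    seek_seconds -= inpoint_offset
    return seek_seconds if seek_seconds >= 0 else None

# Two 10 s segments with a 40 s gap: seeking to t=105 lands 15 s into the
# concatenated video, because the gap contributes nothing.
segments = [(50.0, 60.0), (100.0, 110.0)]
assert calculate_seek_position(105.0, segments) == 15.0
assert calculate_seek_position(40.0, segments) is None
```

This is exactly the bug the commit message describes: the old `startPosition` math assumed continuous recordings, so any gap between segments shifted the computed position past the real one.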
@@ -375,6 +375,50 @@ export default function GeneralMetrics({
    return Object.keys(series).length > 0 ? Object.values(series) : undefined;
  }, [statsHistory]);

+  // Check if Intel GPU has all 0% usage values (known bug)
+  const showIntelGpuWarning = useMemo(() => {
+    if (!statsHistory || statsHistory.length < 3) {
+      return false;
+    }
+
+    const gpuKeys = Object.keys(statsHistory[0]?.gpu_usages ?? {});
+    const hasIntelGpu = gpuKeys.some(
+      (key) => key === "intel-vaapi" || key === "intel-qsv",
+    );
+
+    if (!hasIntelGpu) {
+      return false;
+    }
+
+    // Check if all GPU usage values are 0% across all stats
+    let allZero = true;
+    let hasDataPoints = false;
+
+    for (const stats of statsHistory) {
+      if (!stats) {
+        continue;
+      }
+
+      Object.entries(stats.gpu_usages || {}).forEach(([key, gpuStats]) => {
+        if (key === "intel-vaapi" || key === "intel-qsv") {
+          if (gpuStats.gpu) {
+            hasDataPoints = true;
+            const gpuValue = parseFloat(gpuStats.gpu.slice(0, -1));
+            if (!isNaN(gpuValue) && gpuValue > 0) {
+              allZero = false;
+            }
+          }
+        }
+      });
+
+      if (!allZero) {
+        break;
+      }
+    }
+
+    return hasDataPoints && allZero;
+  }, [statsHistory]);
+
  // npu stats
  const npuSeries = useMemo(() => {
@@ -639,8 +683,46 @@ export default function GeneralMetrics({
    <>
      {statsHistory.length != 0 ? (
        <div className="rounded-lg bg-background_alt p-2.5 md:rounded-2xl">
-          <div className="mb-5">
+          <div className="mb-5 flex flex-row items-center justify-between">
            {t("general.hardwareInfo.gpuUsage")}
+            {showIntelGpuWarning && (
+              <Popover>
+                <PopoverTrigger asChild>
+                  <button
+                    className="flex flex-row items-center gap-1.5 text-yellow-600 focus:outline-none dark:text-yellow-500"
+                    aria-label={t(
+                      "general.hardwareInfo.intelGpuWarning.title",
+                    )}
+                  >
+                    <CiCircleAlert
+                      className="size-5"
+                      aria-label={t(
+                        "general.hardwareInfo.intelGpuWarning.title",
+                      )}
+                    />
+                    <span className="text-sm">
+                      {t(
+                        "general.hardwareInfo.intelGpuWarning.message",
+                      )}
+                    </span>
+                  </button>
+                </PopoverTrigger>
+                <PopoverContent className="w-80">
+                  <div className="space-y-2">
+                    <div className="font-semibold">
+                      {t(
+                        "general.hardwareInfo.intelGpuWarning.title",
+                      )}
+                    </div>
+                    <div>
+                      {t(
+                        "general.hardwareInfo.intelGpuWarning.description",
+                      )}
+                    </div>
+                  </div>
+                </PopoverContent>
+              </Popover>
+            )}
          </div>
          {gpuSeries.map((series) => (
            <ThresholdBarGraph
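The detection behind the Intel GPU warning can be sketched independently of React: usage arrives as percentage strings like `"0%"`, and the warning should fire only when Intel GPU entries report data that is consistently zero across several samples. This is a simplified, hedged sketch with assumed dict shapes, not the component's actual code (which also checks that an Intel GPU key exists in the first sample before scanning).

```python
# Sketch of the "all Intel GPU readings stuck at 0%" heuristic.
INTEL_KEYS = {"intel-vaapi", "intel-qsv"}

def show_intel_gpu_warning(stats_history: list[dict]) -> bool:
    if len(stats_history) < 3:
        return False  # too few samples to call the reading "stuck"
    has_data, all_zero = False, True
    for stats in stats_history:
        for key, gpu in (stats.get("gpu_usages") or {}).items():
            if key in INTEL_KEYS and gpu.get("gpu"):
                has_data = True
                # strip the trailing "%" and parse the number
                if float(gpu["gpu"].rstrip("%")) > 0:
                    all_zero = False
    return has_data and all_zero

stuck = [{"gpu_usages": {"intel-vaapi": {"gpu": "0%"}}}] * 3
healthy = stuck[:2] + [{"gpu_usages": {"intel-vaapi": {"gpu": "12%"}}}]
assert show_intel_gpu_warning(stuck)
assert not show_intel_gpu_warning(healthy)
```

Requiring at least three samples keeps a momentary idle GPU from triggering the warning; a single non-zero reading anywhere in the window clears it.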