Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-01-22 20:18:30 +03:00)
Miscellaneous fixes (0.17 Beta) (#21489)
Some checks failed:

- CI / AMD64 Build (push): has been cancelled
- CI / ARM Build (push): has been cancelled
- CI / Jetson Jetpack 6 (push): has been cancelled
- CI / AMD64 Extra Build (push): has been cancelled
- CI / ARM Extra Build (push): has been cancelled
- CI / Synaptics Build (push): has been cancelled
- CI / Assemble and push default build (push): has been cancelled
- Correctly set query padding
- Adjust AMD headers and add community badge
- Simplify getting started guide for camera wizard
- Add optimizing performance guide
- Tweaks
- Fix character issue
- Fix more characters
- Fix links
- Fix more links
- Refactor new docs
- Add import
- Fix link
- Don't list hardware
- Reduce redundancy in titles
- Add note about Intel NPU and addon
- Fix ability to specify if card is using heading
- Improve display of area percentage
- Fix text color on genai summary chip
- Fix indentation in genai docs
- Adjust default config model to align with recommended
- Add correct genai key

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
This commit is contained in:

- parent: d1f28eb8e1
- commit: 047ae19191
```diff
@@ -211,7 +211,7 @@ You are also able to define custom prompts in your configuration.
 genai:
   provider: ollama
   base_url: http://localhost:11434
-  model: llava
+  model: qwen3-vl:8b-instruct

 objects:
   prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
```
```diff
@@ -39,9 +39,10 @@ You are also able to define custom prompts in your configuration.
 genai:
   provider: ollama
   base_url: http://localhost:11434
-  model: llava
+  model: qwen3-vl:8b-instruct

 objects:
+  genai:
   prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
   object_prompts:
     person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
```
```diff
@@ -3,6 +3,8 @@ id: hardware_acceleration_video
 title: Video Decoding
 ---

+import CommunityBadge from '@site/src/components/CommunityBadge';
+
 # Video Decoding

 It is highly recommended to use an integrated or discrete GPU for hardware acceleration video decoding in Frigate.
```
```diff
@@ -31,11 +33,11 @@ Frigate supports presets for optimal hardware accelerated video decoding:

 - [Raspberry Pi](#raspberry-pi-34): Frigate can utilize the media engine in the Raspberry Pi 3 and 4 to slightly accelerate video decoding.

-**Nvidia Jetson**
+**Nvidia Jetson** <CommunityBadge />

 - [Jetson](#nvidia-jetson): Frigate can utilize the media engine in Jetson hardware to accelerate video decoding.

-**Rockchip**
+**Rockchip** <CommunityBadge />

 - [RKNN](#rockchip-platform): Frigate can utilize the media engine in RockChip SOCs to accelerate video decoding.
```
```diff
@@ -184,11 +186,11 @@ If you are passing in a device path, make sure you've passed the device through

 Frigate can utilize modern AMD integrated GPUs and AMD GPUs to accelerate video decoding using VAAPI.

-:::note
+### Configuring Radeon Driver

 You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars).

-:::
+### Via VAAPI

 VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
```
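The hunk above promotes the `radeonsi` note into its own section. As a minimal, hedged compose sketch of where that environment variable lands (the service layout and device path are illustrative, not from this commit):

```yaml
services:
  frigate:
    # LIBVA_DRIVER_NAME forces libva to load the radeonsi driver on AMD GPUs
    environment:
      - LIBVA_DRIVER_NAME=radeonsi
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
```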
```diff
@@ -465,6 +465,7 @@ There are important limitations in HA OS to be aware of:

 - Separate local storage for media is not yet supported by Home Assistant
 - AMD GPUs are not supported because HA OS does not include the mesa driver.
+- Intel NPUs are not supported because HA OS does not include the NPU firmware.
 - Nvidia GPUs are not supported because addons do not support the nvidia runtime.

 :::
```
````diff
@@ -134,31 +134,13 @@ Now you should be able to start Frigate by running `docker compose up -d` from w

 This section assumes that you already have an environment setup as described in [Installation](../frigate/installation.md). You should also configure your cameras according to the [camera setup guide](/frigate/camera_setup). Pay particular attention to the section on choosing a detect resolution.

-### Step 1: Add a detect stream
+### Step 1: Start Frigate

-First we will add the detect stream for the camera:
+At this point you should be able to start Frigate and a basic config will be created automatically.

-```yaml
-mqtt:
-  enabled: False
-
-cameras:
-  name_of_your_camera: # <------ Name the camera
-    enabled: True
-    ffmpeg:
-      inputs:
-        - path: rtsp://10.0.10.10:554/rtsp # <----- The stream you want to use for detection
-          roles:
-            - detect
-```
-
-### Step 2: Start Frigate
-
-At this point you should be able to start Frigate and see the video feed in the UI.
-
-If you get an error image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with H264 RTSP cameras that support TCP connections.
-
-FFmpeg arguments for other types of cameras can be found [here](../configuration/camera_specific.md).
+### Step 2: Add a camera
+
+You can click the `Add Camera` button to use the camera setup wizard to get your first camera added into Frigate.

 ### Step 3: Configure hardware acceleration (recommended)
@@ -173,7 +155,7 @@ services:
   frigate:
     ...
     devices:
-      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
+      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel & amd hwaccel, needs to be updated for your hardware
     ...
 ```
````
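Step 3 of the rewritten guide points at hardware acceleration presets. A minimal sketch of what the resulting Frigate config fragment can look like (the preset name must match your GPU; `preset-vaapi` here is only an example):

```yaml
ffmpeg:
  # Pick the preset matching your hardware, e.g. preset-vaapi for
  # Intel/AMD VAAPI; see the hardware acceleration docs for others.
  hwaccel_args: preset-vaapi
```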
**New file:** `docs/docs/troubleshooting/cpu.md` (+73 lines):
---
id: cpu
title: High CPU Usage
---

High CPU usage can impact Frigate's performance and responsiveness. This guide outlines the most effective configuration changes to help reduce CPU consumption and optimize resource usage.

## 1. Hardware Acceleration for Video Decoding

**Priority: Critical**

Video decoding is one of the most CPU-intensive tasks in Frigate. While an AI accelerator handles object detection, it does not assist with decoding video streams. Hardware acceleration (hwaccel) offloads this work to your GPU or specialized video decode hardware, significantly reducing CPU usage and enabling you to support more cameras on the same hardware.

### Key Concepts

**Resolution & FPS Impact:** The decoding burden grows rapidly with resolution and frame rate. A 4K stream at 30 FPS requires roughly 4 times the processing power of a 1080p stream at the same frame rate, and doubling the frame rate doubles the decode workload. This is why hardware acceleration becomes critical when working with multiple high-resolution cameras.

**Hardware Acceleration Benefits:** By using dedicated video decode hardware, you can:

- Significantly reduce CPU usage per camera stream
- Support 2-3x more cameras on the same hardware
- Free up CPU resources for motion detection and other Frigate processes
- Reduce system heat and power consumption
### Configuration

Frigate provides preset configurations for common hardware acceleration scenarios. Set up `hwaccel_args` based on your hardware in your [configuration](../configuration/reference) as described in the [getting started guide](../guides/getting_started).

### Troubleshooting Hardware Acceleration

If hardware acceleration isn't working:

1. Check Frigate logs for FFmpeg errors related to hwaccel
2. Verify the hardware device is accessible inside the container
3. Ensure your camera streams use H.264 or H.265 codecs (most common)
4. Try different presets if the automatic detection fails
5. Check that your GPU drivers are properly installed on the host system

## 2. Detector Selection and Configuration

**Priority: Critical**

Choosing the right detector for your hardware is the single most important factor for detection performance. The detector is responsible for running the AI model that identifies objects in video frames. Different detector types have vastly different performance characteristics and hardware requirements, as detailed in the [hardware documentation](../frigate/hardware).

### Understanding Detector Performance

Frigate uses motion detection as a first-line check before running expensive object detection, as explained in the [motion detection documentation](../configuration/motion_detection). When motion is detected, Frigate creates a "region" (the green boxes in the debug viewer) and sends it to the detector. The detector's inference speed determines how many detections per second your system can handle.

**Calculating Detector Capacity:** Your detector has a finite capacity measured in detections per second. With an inference speed of 10ms, your detector can handle approximately 100 detections per second (1000ms / 10ms = 100). If your cameras collectively require more than this capacity, you'll experience delays, missed detections, or the system will fall behind.
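The capacity arithmetic above can be sketched as follows (the function names are illustrative, not Frigate APIs):

```python
def detector_capacity(inference_speed_ms: float) -> float:
    """Approximate detections per second a detector can sustain."""
    return 1000.0 / inference_speed_ms


def is_overloaded(inference_speed_ms: float, required_dps: float) -> bool:
    """True when cameras collectively demand more than the detector can give."""
    return required_dps > detector_capacity(inference_speed_ms)


# A 10 ms detector sustains roughly 100 detections per second;
# demanding 150/s from it means skipped or delayed detections.
print(detector_capacity(10))   # 100.0
print(is_overloaded(10, 150))  # True
```

If the second call returns `True` for your setup, the fixes below (a faster detector, a second detector instance, or fewer/smaller regions) are the levers to pull.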
### Choosing the Right Detector

Different detectors have vastly different performance characteristics; see the expected performance for object detectors in [the hardware docs](../frigate/hardware).

### Multiple Detector Instances

When a single detector cannot keep up with your camera count, some detector types (`openvino`, `onnx`) allow you to define multiple detector instances to share the workload. This is particularly useful with GPU-based detectors that have sufficient VRAM to run multiple inference processes.

For detailed instructions on configuring multiple detectors, see the [Object Detectors documentation](../configuration/object_detectors).

**When to add a second detector:**

- Skipped FPS is consistently > 0 even during normal activity

### Model Selection and Optimization

The model you use significantly impacts detector performance. Frigate provides default models optimized for each detector type, but you can customize them as described in the [detector documentation](../configuration/object_detectors).

**Model Size Trade-offs:**

- Smaller models (320x320): Faster inference; Frigate is specifically optimized for a 320x320 size model.
- Larger models (640x640): Slower inference; can sometimes have higher accuracy on very large objects that take up a majority of the frame.
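The multiple-instances advice above can be sketched as a hedged config fragment (detector names and the `device` value are illustrative; consult the Object Detectors docs for your hardware):

```yaml
detectors:
  # Two instances sharing the load; names are arbitrary keys
  ov_0:
    type: openvino
    device: GPU
  ov_1:
    type: openvino
    device: GPU
```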
```diff
@@ -1,6 +1,6 @@
 ---
 id: dummy-camera
-title: Troubleshooting Detection
+title: Analyzing Object Detection
 ---

 When investigating object detection or tracking problems, it can be helpful to replay an exported video as a temporary "dummy" camera. This lets you reproduce issues locally, iterate on configuration (detections, zones, enrichment settings), and capture logs and clips for analysis.
```

```diff
@@ -1,6 +1,6 @@
 ---
 id: edgetpu
-title: Troubleshooting EdgeTPU
+title: EdgeTPU Errors
 ---

 ## USB Coral Not Detected
```

```diff
@@ -1,6 +1,6 @@
 ---
 id: gpu
-title: Troubleshooting GPU
+title: GPU Errors
 ---

 ## OpenVINO
```

```diff
@@ -1,6 +1,6 @@
 ---
 id: memory
-title: Memory Troubleshooting
+title: Memory Usage
 ---

 Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.
```

```diff
@@ -1,6 +1,6 @@
 ---
 id: recordings
-title: Troubleshooting Recordings
+title: Recordings Errors
 ---

 ## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?
```
```diff
@@ -129,10 +129,27 @@ const sidebars: SidebarsConfig = {
     Troubleshooting: [
       "troubleshooting/faqs",
       "troubleshooting/recordings",
-      "troubleshooting/gpu",
-      "troubleshooting/edgetpu",
-      "troubleshooting/memory",
       "troubleshooting/dummy-camera",
+      {
+        type: "category",
+        label: "Troubleshooting Hardware",
+        link: {
+          type: "generated-index",
+          title: "Troubleshooting Hardware",
+          description: "Troubleshooting Problems with Hardware",
+        },
+        items: ["troubleshooting/gpu", "troubleshooting/edgetpu"],
+      },
+      {
+        type: "category",
+        label: "Troubleshooting Resource Usage",
+        link: {
+          type: "generated-index",
+          title: "Troubleshooting Resource Usage",
+          description: "Troubleshooting issues with resource usage",
+        },
+        items: ["troubleshooting/cpu", "troubleshooting/memory"],
+      },
     ],
     Development: [
       "development/contributing",
```
```diff
@@ -1935,7 +1935,7 @@ async def label_clip(request: Request, camera_name: str, label: str):
     try:
         event = event_query.get()

-        return await event_clip(request, event.id)
+        return await event_clip(request, event.id, 0)
     except DoesNotExist:
         return JSONResponse(
             content={"success": False, "message": "Event not found"}, status_code=404
```
```diff
@@ -8,6 +8,7 @@ type EmptyCardProps = {
   className?: string;
   icon: React.ReactNode;
   title: string;
+  titleHeading?: boolean;
   description?: string;
   buttonText?: string;
   link?: string;
@@ -16,14 +17,23 @@ export function EmptyCard({
   className,
   icon,
   title,
+  titleHeading = true,
   description,
   buttonText,
   link,
 }: EmptyCardProps) {
+  let TitleComponent;
+
+  if (titleHeading) {
+    TitleComponent = <Heading as="h4">{title}</Heading>;
+  } else {
+    TitleComponent = <div>{title}</div>;
+  }
+
   return (
     <div className={cn("flex flex-col items-center gap-2", className)}>
       {icon}
-      <Heading as="h4">{title}</Heading>
+      {TitleComponent}
       {description && (
         <div className="mb-3 text-secondary-foreground">{description}</div>
       )}
```
```diff
@@ -26,7 +26,9 @@ export function GenAISummaryChip({ review }: GenAISummaryChipProps) {
       className={cn(
         "absolute left-1/2 top-8 z-30 flex max-w-[90vw] -translate-x-[50%] cursor-pointer select-none items-center gap-2 rounded-full p-2 text-sm transition-all duration-500",
         isVisible ? "translate-y-0 opacity-100" : "-translate-y-4 opacity-0",
-        isDesktop ? "bg-card" : "bg-secondary-foreground",
+        isDesktop
+          ? "bg-card text-primary"
+          : "bg-secondary-foreground text-white",
       )}
     >
       <MdAutoAwesome className="shrink-0" />
```
```diff
@@ -849,7 +849,11 @@ function LifecycleIconRow({
     () =>
       Array.isArray(item.data.attribute_box) &&
       item.data.attribute_box.length >= 4
-        ? (item.data.attribute_box[2] * item.data.attribute_box[3]).toFixed(4)
+        ? (
+            item.data.attribute_box[2] *
+            item.data.attribute_box[3] *
+            100
+          ).toFixed(2)
         : undefined,
     [item.data.attribute_box],
   );
```
```diff
@@ -857,7 +861,7 @@ function LifecycleIconRow({
   const areaPct = useMemo(
     () =>
       Array.isArray(item.data.box) && item.data.box.length >= 4
-        ? (item.data.box[2] * item.data.box[3]).toFixed(4)
+        ? (item.data.box[2] * item.data.box[3] * 100).toFixed(2)
         : undefined,
     [item.data.box],
   );
```
```diff
@@ -744,7 +744,7 @@ function LifecycleItem({
   const areaPct = useMemo(
     () =>
       Array.isArray(item?.data.box) && item?.data.box.length >= 4
-        ? (item?.data.box[2] * item?.data.box[3]).toFixed(4)
+        ? (item?.data.box[2] * item?.data.box[3] * 100).toFixed(2)
         : undefined,
     [item],
   );
```
```diff
@@ -766,7 +766,11 @@ function LifecycleItem({
     () =>
       Array.isArray(item?.data.attribute_box) &&
       item?.data.attribute_box.length >= 4
-        ? (item?.data.attribute_box[2] * item?.data.attribute_box[3]).toFixed(4)
+        ? (
+            item?.data.attribute_box[2] *
+            item?.data.attribute_box[3] *
+            100
+          ).toFixed(2)
         : undefined,
     [item],
   );
```
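The four area hunks all make the same fix: box dimensions are normalized (0 to 1), so the displayed frame-area percentage must be scaled by 100 and shown with two decimals instead of four raw fraction digits. In Python terms (a sketch of the arithmetic, not the app code):

```python
def area_pct(box):
    """Frame-area percentage for a normalized [x, y, w, h] box,
    formatted to two decimals; None when the box is malformed."""
    if not isinstance(box, (list, tuple)) or len(box) < 4:
        return None
    # width * height is a fraction of the frame; scale to a percentage
    return f"{box[2] * box[3] * 100:.2f}"


# A box covering half the width and 40% of the height covers 20% of the frame.
print(area_pct([0.1, 0.2, 0.5, 0.4]))  # "20.00" (old code displayed "0.2000")
```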
```diff
@@ -845,7 +849,7 @@ function LifecycleItem({
         </span>
         {areaPx !== undefined && areaPct !== undefined ? (
           <span className="font-medium text-foreground">
-            {areaPx} {t("information.pixels", { ns: "common" })}{" "}
+            {t("information.pixels", { ns: "common", area: areaPx })}{" "}
             <span className="text-secondary-foreground">·</span>{" "}
             {areaPct}%
           </span>
```
```diff
@@ -762,8 +762,9 @@ function DetectionReview({

       {!loading && currentItems?.length === 0 && (
         <EmptyCard
-          className="y-translate-1/2 absolute left-[50%] top-[50%] -translate-x-1/2"
+          className="absolute left-[50%] top-[50%] -translate-x-1/2 -translate-y-1/2 items-center text-center"
           title={emptyCardData.title}
+          titleHeading={false}
           description={emptyCardData.description}
           icon={<LuFolderCheck className="size-16" />}
         />
```