Mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-03-26 10:08:22 +03:00
Tweaks (#22630)
Some checks are pending
CI / AMD64 Build (push) Waiting to run
CI / ARM Build (push) Waiting to run
CI / Jetson Jetpack 6 (push) Waiting to run
CI / AMD64 Extra Build (push) Blocked by required conditions
CI / ARM Extra Build (push) Blocked by required conditions
CI / Synaptics Build (push) Blocked by required conditions
CI / Assemble and push default build (push) Blocked by required conditions
* fix stage overlay size
* add audio filter config and load audio labels
* remove add button from object and audio labels in settings
* tests
* update classification docs
* tweak wording
* don't require restart for timestamp_style changes
* add optional i18n prefix for select widgets
* use i18n enum prefix for timestamp position
* add i18n for all presets
This commit is contained in:
parent b1c410bc3e
commit c0124938b3
@@ -102,8 +102,19 @@ If examples for some of your classes do not appear in the grid, you can continue

 ### Improving the Model

+:::tip Diversity matters far more than volume
+
+Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the class. **This is why Frigate does not implement bulk training in the UI.**
+
+For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
+
+:::
+- **Start small and iterate**: Begin with a small, representative set of images per class. Models often begin working well with surprisingly few examples and improve naturally over time.
+- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90–100% or those captured under new lighting, weather, or distance conditions.
+- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
+- **The wizard is just the starting point**: You don't need to find and label every class upfront. Missing classes will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
 - **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
@@ -70,10 +70,21 @@ Once some images are assigned, training will begin automatically.

 ### Improving the Model

+:::tip Diversity matters far more than volume
+
+Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.**
+
+For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
+
+:::
+
+- **Start small and iterate**: Begin with a small, representative set of images per class. Models often begin working well with surprisingly few examples and improve naturally over time.
 - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
 - **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
 - **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
-- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.
+- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90–100% or those captured under new conditions (e.g., first snow of the year, seasonal changes, objects temporarily in view, insects at night). These represent scenarios different from the default state and help prevent overfitting.
+- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
+- **The wizard is just the starting point**: You don't need to find and label every state upfront. Missing states will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.

 ## Debugging Classification Models
@@ -32,6 +32,7 @@ class CameraConfigUpdateEnum(str, Enum):
     face_recognition = "face_recognition"
     lpr = "lpr"
     snapshots = "snapshots"
+    timestamp_style = "timestamp_style"
     zones = "zones"
@@ -133,6 +134,8 @@ class CameraConfigUpdateSubscriber:
                 config.snapshots = updated_config
             elif update_type == CameraConfigUpdateEnum.onvif:
                 config.onvif = updated_config
+            elif update_type == CameraConfigUpdateEnum.timestamp_style:
+                config.timestamp_style = updated_config
             elif update_type == CameraConfigUpdateEnum.zones:
                 config.zones = updated_config
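The two hunks above register a new `timestamp_style` member on the camera config update enum and route it through the subscriber's `elif` chain. As a minimal standalone sketch of that dispatch pattern (the class and function names below are hypothetical simplifications, not Frigate's actual code), a `str`-valued Enum whose values mirror attribute names also allows the routing to be expressed with `setattr`:

```python
from enum import Enum


class ConfigUpdateEnum(str, Enum):
    """Hypothetical stand-in for Frigate's CameraConfigUpdateEnum."""
    snapshots = "snapshots"
    timestamp_style = "timestamp_style"
    zones = "zones"


class CameraConfig:
    """Minimal config holder; attribute names mirror the enum values."""
    def __init__(self):
        self.snapshots = None
        self.timestamp_style = None
        self.zones = None


def apply_update(config, update_type: ConfigUpdateEnum, updated_config):
    # Because each enum value matches an attribute name, the elif chain
    # in the real subscriber can be collapsed into a single setattr here.
    setattr(config, update_type.value, updated_config)


config = CameraConfig()
apply_update(config, ConfigUpdateEnum.timestamp_style, {"position": "tl"})
print(config.timestamp_style)  # {'position': 'tl'}
```

The explicit `elif` chain in the real code trades this brevity for type-checked attribute access, which is a reasonable choice for a Pydantic-backed config.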
@@ -25,6 +25,7 @@ from frigate.plus import PlusApi
 from frigate.util.builtin import (
     deep_merge,
     get_ffmpeg_arg_list,
+    load_labels,
 )
 from frigate.util.config import (
     CURRENT_CONFIG_VERSION,
@@ -40,7 +41,7 @@ from frigate.util.services import auto_detect_hwaccel
 from .auth import AuthConfig
 from .base import FrigateBaseModel
 from .camera import CameraConfig, CameraLiveConfig
-from .camera.audio import AudioConfig
+from .camera.audio import AudioConfig, AudioFilterConfig
 from .camera.birdseye import BirdseyeConfig
 from .camera.detect import DetectConfig
 from .camera.ffmpeg import FfmpegConfig
@@ -473,7 +474,7 @@ class FrigateConfig(FrigateBaseModel):
     live: CameraLiveConfig = Field(
         default_factory=CameraLiveConfig,
         title="Live playback",
-        description="Settings used by the Web UI to control live stream resolution and quality.",
+        description="Settings to control the jsmpeg live stream resolution and quality. This does not affect restreamed cameras that use go2rtc for live view.",
     )
     motion: Optional[MotionConfig] = Field(
         default=None,
@@ -671,6 +672,12 @@ class FrigateConfig(FrigateBaseModel):
             detector_config.model = model
             self.detectors[key] = detector_config

+        all_audio_labels = {
+            label
+            for label in load_labels("/audio-labelmap.txt", prefill=521).values()
+            if label
+        }
+
         for name, camera in self.cameras.items():
             modified_global_config = global_config.copy()

@@ -791,6 +798,14 @@ class FrigateConfig(FrigateBaseModel):
                 camera_config.review.genai.enabled
             )

+            if camera_config.audio.filters is None:
+                camera_config.audio.filters = {}
+
+            audio_keys = all_audio_labels
+            audio_keys = audio_keys - camera_config.audio.filters.keys()
+            for key in audio_keys:
+                camera_config.audio.filters[key] = AudioFilterConfig()
+
             # Add default filters
             object_keys = camera_config.objects.track
             if camera_config.objects.filters is None:
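The config change above backfills an `AudioFilterConfig` for every audio label a camera does not already have an explicit filter for, using set subtraction against the existing filter keys. A self-contained sketch of that backfill logic follows, with simplified, hypothetical types standing in for Frigate's actual config classes:

```python
from dataclasses import dataclass


@dataclass
class AudioFilterConfig:
    """Simplified stand-in for Frigate's per-label audio filter settings."""
    threshold: float = 0.8


def backfill_audio_filters(all_labels, filters):
    """Give every label a default filter while preserving user-provided ones."""
    if filters is None:
        filters = {}
    # dict.keys() is a set-like view, so plain set subtraction finds
    # exactly the labels that still need a default filter.
    for label in set(all_labels) - filters.keys():
        filters[label] = AudioFilterConfig()
    return filters


filters = backfill_audio_filters(
    {"speech", "yell", "babbling"},
    {"speech": AudioFilterConfig(threshold=0.9)},
)
print(filters["speech"].threshold)    # 0.9 (user override kept)
print(filters["babbling"].threshold)  # 0.8 (default backfilled)
```

This is the same behavior the new tests below exercise: user overrides survive, and every label from the label map ends up with some filter entry.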
@@ -10,7 +10,7 @@ from ruamel.yaml.constructor import DuplicateKeyError
 from frigate.config import BirdseyeModeEnum, FrigateConfig
 from frigate.const import MODEL_CACHE_DIR
 from frigate.detectors import DetectorTypeEnum
-from frigate.util.builtin import deep_merge
+from frigate.util.builtin import deep_merge, load_labels


 class TestConfig(unittest.TestCase):
@@ -288,6 +288,65 @@ class TestConfig(unittest.TestCase):
         frigate_config = FrigateConfig(**config)
         assert "dog" in frigate_config.cameras["back"].objects.filters

+    def test_default_audio_filters(self):
+        config = {
+            "mqtt": {"host": "mqtt"},
+            "audio": {"listen": ["speech", "yell"]},
+            "cameras": {
+                "back": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {
+                        "height": 1080,
+                        "width": 1920,
+                        "fps": 5,
+                    },
+                }
+            },
+        }
+
+        frigate_config = FrigateConfig(**config)
+        all_audio_labels = {
+            label
+            for label in load_labels("/audio-labelmap.txt", prefill=521).values()
+            if label
+        }
+
+        assert all_audio_labels.issubset(
+            set(frigate_config.cameras["back"].audio.filters.keys())
+        )
+
+    def test_override_audio_filters(self):
+        config = {
+            "mqtt": {"host": "mqtt"},
+            "cameras": {
+                "back": {
+                    "ffmpeg": {
+                        "inputs": [
+                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
+                        ]
+                    },
+                    "detect": {
+                        "height": 1080,
+                        "width": 1920,
+                        "fps": 5,
+                    },
+                    "audio": {
+                        "listen": ["speech", "yell"],
+                        "filters": {"speech": {"threshold": 0.9}},
+                    },
+                }
+            },
+        }
+
+        frigate_config = FrigateConfig(**config)
+        assert "speech" in frigate_config.cameras["back"].audio.filters
+        assert frigate_config.cameras["back"].audio.filters["speech"].threshold == 0.9
+        assert "babbling" in frigate_config.cameras["back"].audio.filters
+
     def test_inherit_object_filters(self):
         config = {
             "mqtt": {"host": "mqtt"},
@@ -81,6 +81,7 @@ class TrackedObjectProcessor(threading.Thread):
                 CameraConfigUpdateEnum.motion,
                 CameraConfigUpdateEnum.objects,
                 CameraConfigUpdateEnum.remove,
+                CameraConfigUpdateEnum.timestamp_style,
                 CameraConfigUpdateEnum.zones,
             ],
         )
@@ -752,7 +752,7 @@
     },
     "live": {
       "label": "Live playback",
-      "description": "Settings used by the Web UI to control live stream resolution and quality.",
+      "description": "Settings to control the jsmpeg live stream resolution and quality. This does not affect restreamed cameras that use go2rtc for live view.",
       "streams": {
         "label": "Live stream names",
         "description": "Mapping of configured stream names to restream/go2rtc names used for live playback."
@@ -825,6 +825,12 @@
       "area": "Area"
     }
   },
+  "timestampPosition": {
+    "tl": "Top left",
+    "tr": "Top right",
+    "bl": "Bottom left",
+    "br": "Bottom right"
+  },
   "users": {
     "title": "Users",
     "management": {
@@ -1342,7 +1348,22 @@
     "preset-nvidia": "NVIDIA GPU",
     "preset-jetson-h264": "NVIDIA Jetson (H.264)",
     "preset-jetson-h265": "NVIDIA Jetson (H.265)",
-    "preset-rkmpp": "Rockchip RKMPP"
+    "preset-rkmpp": "Rockchip RKMPP",
+    "preset-http-jpeg-generic": "HTTP JPEG (Generic)",
+    "preset-http-mjpeg-generic": "HTTP MJPEG (Generic)",
+    "preset-http-reolink": "HTTP - Reolink Cameras",
+    "preset-rtmp-generic": "RTMP (Generic)",
+    "preset-rtsp-generic": "RTSP (Generic)",
+    "preset-rtsp-restream": "RTSP - Restream from go2rtc",
+    "preset-rtsp-restream-low-latency": "RTSP - Restream from go2rtc (Low Latency)",
+    "preset-rtsp-udp": "RTSP - UDP",
+    "preset-rtsp-blue-iris": "RTSP - Blue Iris",
+    "preset-record-generic": "Record (Generic, no audio)",
+    "preset-record-generic-audio-copy": "Record (Generic + Copy Audio)",
+    "preset-record-generic-audio-aac": "Record (Generic + Audio to AAC)",
+    "preset-record-mjpeg": "Record - MJPEG Cameras",
+    "preset-record-jpeg": "Record - JPEG Cameras",
+    "preset-record-ubiquiti": "Record - Ubiquiti Cameras"
   }
 },
 "cameraInputs": {
@@ -19,6 +19,16 @@ const audio: SectionConfigOverrides = {
   hiddenFields: ["enabled_in_config"],
   advancedFields: ["min_volume", "max_not_heard", "num_threads"],
   uiSchema: {
+    filters: {
+      "ui:options": {
+        expandable: false,
+      },
+    },
+    "filters.*": {
+      "ui:options": {
+        additionalPropertyKeyReadonly: true,
+      },
+    },
     listen: {
       "ui:widget": "audioLabels",
     },
@@ -29,6 +29,11 @@ const objects: SectionConfigOverrides = {
   ],
   advancedFields: ["genai"],
   uiSchema: {
+    filters: {
+      "ui:options": {
+        expandable: false,
+      },
+    },
     "filters.*.min_area": {
       "ui:options": {
         suppressMultiSchema: true,
@@ -4,12 +4,13 @@ const timestampStyle: SectionConfigOverrides = {
   base: {
     sectionDocs: "/configuration/reference",
     restartRequired: [],
-    fieldOrder: ["position", "format", "color", "thickness"],
+    fieldOrder: ["position", "format", "thickness", "color"],
     hiddenFields: ["effect", "enabled_in_config"],
     advancedFields: [],
     uiSchema: {
       position: {
         "ui:size": "xs",
+        "ui:options": { enumI18nPrefix: "timestampPosition" },
       },
       format: {
         "ui:size": "xs",
@@ -17,7 +18,7 @@ const timestampStyle: SectionConfigOverrides = {
     },
   },
   global: {
-    restartRequired: ["position", "format", "color", "thickness", "effect"],
+    restartRequired: [],
   },
   camera: {
     restartRequired: [],
@@ -1,5 +1,6 @@
 // Select Widget - maps to shadcn/ui Select
 import type { WidgetProps } from "@rjsf/utils";
+import { useTranslation } from "react-i18next";
 import {
   Select,
   SelectContent,
@@ -21,9 +22,18 @@ export function SelectWidget(props: WidgetProps) {
     schema,
   } = props;

+  const { t } = useTranslation(["views/settings"]);
   const { enumOptions = [] } = options;
+  const enumI18nPrefix = options["enumI18nPrefix"] as string | undefined;
   const fieldClassName = getSizedFieldClassName(options, "sm");

+  const getLabel = (option: { value: unknown; label: string }) => {
+    if (enumI18nPrefix) {
+      return t(`${enumI18nPrefix}.${option.value}`);
+    }
+    return option.label;
+  };
+
   return (
     <Select
       value={value?.toString() ?? ""}
@@ -42,7 +52,7 @@ export function SelectWidget(props: WidgetProps) {
       <SelectContent>
         {enumOptions.map((option: { value: unknown; label: string }) => (
           <SelectItem key={String(option.value)} value={String(option.value)}>
-            {option.label}
+            {getLabel(option)}
           </SelectItem>
         ))}
       </SelectContent>
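The widget change above resolves an option's display label through a translation key built as `<enumI18nPrefix>.<value>`, falling back to the raw label when no prefix is configured. The lookup rule itself is language-agnostic; as a rough Python sketch (with a hypothetical flat catalog standing in for the `timestampPosition` i18n entries this commit adds):

```python
# Hypothetical flat translation catalog mirroring the i18n JSON keys.
CATALOG = {
    "timestampPosition.tl": "Top left",
    "timestampPosition.tr": "Top right",
    "timestampPosition.bl": "Bottom left",
    "timestampPosition.br": "Bottom right",
}


def get_label(value, raw_label, enum_i18n_prefix=None):
    """Translate an enum value via '<prefix>.<value>'; else keep the raw label."""
    if enum_i18n_prefix:
        # Fall back to the raw label if the key is missing from the catalog.
        return CATALOG.get(f"{enum_i18n_prefix}.{value}", raw_label)
    return raw_label


print(get_label("tl", "tl", "timestampPosition"))  # Top left
print(get_label("tl", "tl"))                       # tl
```

Keying translations by enum value rather than display string keeps the schema's raw values stable while letting every locale supply its own labels.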
@@ -707,14 +707,23 @@ export default function LiveCameraView({
         }}
       >
         <div
-          className={`relative flex flex-col items-center justify-center ${growClassName}`}
+          className={cn(
+            "flex flex-col items-center justify-center",
+            growClassName,
+          )}
           ref={clickOverlayRef}
           style={{
             aspectRatio: constrainedAspectRatio,
           }}
         >
           {clickOverlay && overlaySize.width > 0 && (
-            <div className="absolute inset-0 z-40 cursor-crosshair">
+            <div
+              className="absolute z-40 cursor-crosshair"
+              style={{
+                width: overlaySize.width,
+                height: overlaySize.height,
+              }}
+            >
               <Stage
                 width={overlaySize.width}
                 height={overlaySize.height}