Compare commits

...

14 Commits

Author SHA1 Message Date
Josh Hawkins
d6d636ea27
Merge b5a360be39 into 8fc1e97df5 2026-04-22 22:31:37 +01:00
Josh Hawkins
8fc1e97df5
Stream probe fallback (#22971)
* fall back to tcp transport when rtsp probes fail over udp

* tweak wizard message
2026-04-22 14:38:54 -06:00
eXtremeSHOK
0a332cada9
Update third_party_extensions.md (#22973) 2026-04-22 14:38:36 -06:00
dependabot[bot]
ba499201e6
Bump lodash-es from 4.17.23 to 4.18.1 in /web (#22733)
Bumps [lodash-es](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash-es
  dependency-version: 4.18.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 15:03:43 -05:00
dependabot[bot]
c244e6582a
Bump path-to-regexp from 0.1.12 to 0.1.13 in /docs (#22683)
Bumps [path-to-regexp](https://github.com/pillarjs/path-to-regexp) from 0.1.12 to 0.1.13.
- [Release notes](https://github.com/pillarjs/path-to-regexp/releases)
- [Changelog](https://github.com/pillarjs/path-to-regexp/blob/v.0.1.13/History.md)
- [Commits](https://github.com/pillarjs/path-to-regexp/compare/v0.1.12...v.0.1.13)

---
updated-dependencies:
- dependency-name: path-to-regexp
  dependency-version: 0.1.13
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:39:46 -05:00
dependabot[bot]
fff3594553
Bump lodash from 4.17.23 to 4.18.1 in /web (#22787)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.23 to 4.18.1.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.23...4.18.1)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.18.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:39:08 -05:00
dependabot[bot]
25bfb2c481
Bump python-multipart from 0.0.20 to 0.0.26 in /docker/main (#22894)
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.20 to 0.0.26.
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.26)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-version: 0.0.26
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 14:38:56 -05:00
Nicolas Mowen
b7261c8e70
GenAI Tweaks (#22968)
* Add debug logs

* refresh embeddings maintainer genai clients on config update

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-04-22 09:55:54 -06:00
Josh Hawkins
ad9092d0da
Tweaks (#22965)
* use ffmpeg to probe rtsp urls instead of cv2

cv2 is faster (no subprocess launch) and will continue to be used for recording segments

* tweak faq

* change unsaved color to orange

avoids confusion with validation errors (red)

* don't use any variant of orange as a profile color

avoids confusion with unsaved changes

* more unsaved color tweaks
2026-04-22 09:19:30 -06:00
Nicolas Mowen
20705a3e97
Update oneVPL (#22966) 2026-04-22 08:50:37 -06:00
Josh Hawkins
f4ac063b37
Add camera wizard improvements (#22963)
* warn in camera wizard when detect stream resolution cannot be determined

* add timeout and tcp fallback for rtsp urls only
2026-04-22 08:15:17 -05:00
Abhilash Kishore
2dcaeb6809
fix: bump OpenVINO to 2025.4.x to resolve LXC container detector crash (#22859)
* fix: bump OpenVINO to 2025.4.x to resolve LXC container crash

* fix: replace openvino + onnxruntime with onnxruntime-openvino 1.24.*

onnxruntime-openvino 1.24.* bundles OpenVINO 2025.4.1, which fixes a
crash in constrained CPU environments (e.g. Proxmox LXC) where
lin_system_conf.cpp calls stoi("") on empty strings read from offline
CPU sysfs entries.

Consolidating to onnxruntime-openvino also ensures the OpenVINO runtime
and ONNX Runtime OpenVINO EP are always compatible versions.

* revert: restore onnxruntime, keep openvino bump

Reverting onnxruntime-openvino consolidation - onnxruntime is used with
multiple execution providers (CUDA, TensorRT, MIGraphX, CPU) and cannot
be replaced wholesale with the openvino-specific wheel.
2026-04-22 07:12:14 -06:00
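The crash described in this commit message is `std::stoi("")` throwing when an offline-CPU sysfs entry reads back empty inside a container. A minimal Python sketch of the defensive-parsing idea (the function name `parse_cpu_entry` is hypothetical, not from OpenVINO):

```python
def parse_cpu_entry(raw: str, default: int = 0) -> int:
    """Defensively parse a sysfs CPU value that may be empty in containers.

    In constrained environments (e.g. Proxmox LXC), offline-CPU sysfs files
    can read back as empty strings; parsing them without a guard raises,
    which mirrors the stoi("") crash fixed by the OpenVINO 2025.4.x bump.
    """
    text = raw.strip()
    if not text:
        return default
    try:
        return int(text)
    except ValueError:
        return default


# An empty or malformed sysfs read no longer crashes the caller:
print(parse_cpu_entry(""))      # -> 0
print(parse_cpu_entry("4\n"))   # -> 4
```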
Josh Hawkins
b5a360be39 add test 2026-04-17 17:18:11 -05:00
Josh Hawkins
54a7c5015e fix birdseye layout calculation
replace the two pass layout with a single pass pixel space algorithm
2026-04-17 17:18:04 -05:00
26 changed files with 430 additions and 208 deletions


@@ -87,43 +87,43 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
     # intel packages use zst compression so we need to update dpkg
     apt-get install -y dpkg
-    # use intel apt intel packages
+    # use intel apt repo for libmfx1 (legacy QSV, pre-Gen12)
     wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
     echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
     apt-get -qq update
     # intel-media-va-driver-non-free is built from source in the
     # intel-media-driver Dockerfile stage for Battlemage (Xe2) support
     apt-get -qq install --no-install-recommends --no-install-suggests -y \
-        libmfx1 libmfxgen1 libvpl2
+        libmfx1
+    rm -f /usr/share/keyrings/intel-graphics.gpg
+    rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
+    # upgrade libva2, oneVPL runtime, and libvpl2 from trixie for Battlemage support
+    echo "deb http://deb.debian.org/debian trixie main" > /etc/apt/sources.list.d/trixie.list
+    apt-get -qq update
+    apt-get -qq install -y -t trixie libva2 libva-drm2 libzstd1
+    apt-get -qq install -y -t trixie libmfx-gen1.2 libvpl2
+    rm -f /etc/apt/sources.list.d/trixie.list
+    apt-get -qq update
     apt-get -qq install -y ocl-icd-libopencl1
     # install libtbb12 for NPU support
     apt-get -qq install -y libtbb12
-    rm -f /usr/share/keyrings/intel-graphics.gpg
-    rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
-    # install legacy and standard intel icd and level-zero-gpu
+    # install legacy and standard intel compute packages
     # see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
-    # newer intel packages (gmmlib 22.9+, igc 2.32+) require libstdc++ >= 13.1 and libzstd >= 1.5.5
-    echo "deb http://deb.debian.org/debian trixie main" > /etc/apt/sources.list.d/trixie.list
-    apt-get -qq update
-    apt-get -qq install -y -t trixie libstdc++6 libzstd1
-    rm -f /etc/apt/sources.list.d/trixie.list
-    apt-get -qq update
     # needed core package
     wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/libigdgmm12_22.9.0_amd64.deb
     dpkg -i libigdgmm12_22.9.0_amd64.deb
     rm libigdgmm12_22.9.0_amd64.deb
-    # legacy packages
+    # legacy compute-runtime packages
     wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
     wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
     wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
     wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
-    # standard packages
+    # standard compute-runtime packages
     wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/intel-opencl-icd_26.14.37833.4-0_amd64.deb
     wget https://github.com/intel/compute-runtime/releases/download/26.14.37833.4/libze-intel-gpu1_26.14.37833.4-0_amd64.deb
     wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.32.7/intel-igc-opencl-2_2.32.7+21184_amd64.deb
@@ -137,6 +137,10 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
     dpkg -i *.deb
     rm *.deb
     apt-get -qq install -f -y
+    # Battlemage uses the xe kernel driver, but the VA-API driver is still iHD.
+    # The oneVPL runtime may look for a driver named after the kernel module.
+    ln -sf /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so /usr/lib/x86_64-linux-gnu/dri/xe_drv_video.so
 fi

 if [[ "${TARGETARCH}" == "arm64" ]]; then


@@ -11,7 +11,7 @@ joserfc == 1.2.*
 cryptography == 44.0.*
 pathvalidate == 3.3.*
 markupsafe == 3.0.*
-python-multipart == 0.0.20
+python-multipart == 0.0.26
 # Classification Model Training
 tensorflow == 2.19.* ; platform_machine == 'aarch64'
 tensorflow-cpu == 2.19.* ; platform_machine == 'x86_64'
@@ -42,7 +42,7 @@ opencv-python-headless == 4.11.0.*
 opencv-contrib-python == 4.11.0.*
 scipy == 1.16.*
 # OpenVino & ONNX
-openvino == 2025.3.*
+openvino == 2025.4.*
 onnxruntime == 1.22.*
 # Embeddings
 transformers == 4.45.*


@@ -39,6 +39,10 @@ This is a fork (with fixed errors and new features) of [original Double Take](ht
 [Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.

+## [kiosk-monitor](https://github.com/extremeshok/kiosk-monitor)
+
+[kiosk-monitor](https://github.com/extremeshok/kiosk-monitor) is a Raspberry Pi watchdog that runs Chromium fullscreen on a Frigate dashboard (optionally with VLC on a second monitor for an RTSP camera stream), auto-restarts on frozen screens or unreachable URLs, and ships a Birdseye-aware Chromium helper that auto-sizes the grid to the display.
+
 ## [Periscope](https://github.com/maksz42/periscope)
 [Periscope](https://github.com/maksz42/periscope) is a lightweight Android app that turns old devices into live viewers for Frigate. It works on Android 2.2 and above, including Android TV. It supports authentication and HTTPS.


@@ -111,26 +111,16 @@ TCP ensures that all data packets arrive in the correct order. This is crucial f
 You can still configure Frigate to use UDP by using ffmpeg input args or the preset `preset-rtsp-udp`. See the [ffmpeg presets](/configuration/ffmpeg_presets) documentation.

-### Frigate hangs on startup with a "probing detect stream" message in the logs
+### Frigate is slow to start up with a "probing detect stream" message in the logs

-On startup, Frigate probes each camera's detect stream with OpenCV to auto-detect its resolution. OpenCV's FFmpeg backend may attempt RTSP over UDP during this probe regardless of the `-rtsp_transport tcp` in your `input_args` or preset. For cameras that do not respond to UDP (common on some Reolink models and others behind firewalls that block UDP), the probe can hang indefinitely and block Frigate from finishing startup, or it can return zeroed-out dimensions that show up as width `0` and height `0` in Camera Probe Info under System Metrics.
+When `detect.width` and `detect.height` are not set, Frigate probes each camera's detect stream on startup (and when saving the config) to auto-detect its resolution. For RTSP streams Frigate probes with ffprobe and automatically retries over TCP if UDP doesn't respond, with a 5 second timeout per attempt. A camera that cannot be reached over either transport will add up to ~10 seconds to startup before Frigate falls through with default dimensions, which may show up as width `0` and height `0` in Camera Probe Info under System Metrics.

-There are two ways to avoid this:
+To skip the probe entirely and make startup instant, set `detect.width` and `detect.height` explicitly in your camera config:

-1. Set `detect.width` and `detect.height` explicitly in your camera config. When both are set, Frigate skips the auto-detect probe entirely:
-
-```yaml
-cameras:
+```yaml
+cameras:
   my_camera:
     detect:
       width: 1280
       height: 720
 ```
-
-2. Force OpenCV's FFmpeg backend to use TCP for RTSP by setting the environment variable on your Frigate container:
-
-```
-OPENCV_FFMPEG_CAPTURE_OPTIONS=rtsp_transport;tcp
-```
-
-This is a process-wide setting and applies to all cameras. If you have any cameras that require `preset-rtsp-udp`, use option 1 instead.


@@ -10897,9 +10897,9 @@
       "license": "MIT"
     },
     "node_modules/express/node_modules/path-to-regexp": {
-      "version": "0.1.12",
-      "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
-      "integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==",
+      "version": "0.1.13",
+      "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.13.tgz",
+      "integrity": "sha512-A/AGNMFN3c8bOlvV9RreMdrv7jsmF9XIfDeCd87+I8RNg6s78BhJxMu69NEMHBSJFxKidViTEdruRwEk/WIKqA==",
       "license": "MIT"
     },
     "node_modules/express/node_modules/range-parser": {


@@ -310,6 +310,10 @@ class EmbeddingMaintainer(threading.Thread):
             self._handle_custom_classification_update(topic, payload)
             return

+        if topic == "config/genai":
+            self.config.genai = payload
+            self.genai_manager.update_config(self.config)
+
         # Broadcast to all processors — each decides if the topic is relevant
         for processor in self.realtime_processors:
             processor.update_config(topic, payload)
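The hunk above follows a common dispatch pattern: the maintainer handles topics it owns directly, then broadcasts every update to its processors, each of which filters for relevance itself. A minimal, self-contained sketch of that pattern (class names here are illustrative, not Frigate's actual classes):

```python
class Processor:
    """Toy processor that records only the topics it cares about."""

    def __init__(self) -> None:
        self.seen: list[tuple[str, object]] = []

    def update_config(self, topic: str, payload: object) -> None:
        # Each processor decides whether the topic is relevant to it.
        if topic.startswith("config/"):
            self.seen.append((topic, payload))


class Maintainer:
    """Toy maintainer: handle owned topics first, then broadcast."""

    def __init__(self, processors: list[Processor]) -> None:
        self.genai_config: object = None
        self.realtime_processors = processors

    def on_update(self, topic: str, payload: object) -> None:
        # Topics the maintainer itself owns are applied in place...
        if topic == "config/genai":
            self.genai_config = payload
        # ...then every processor gets a chance to react.
        for processor in self.realtime_processors:
            processor.update_config(topic, payload)
```

The benefit of broadcasting rather than routing is that new processors can subscribe to new topics without the maintainer's dispatch logic changing.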


@@ -113,6 +113,15 @@ class OllamaClient(GenAIClient):
                 schema = response_format.get("json_schema", {}).get("schema")
                 if schema:
                     ollama_options["format"] = self._clean_schema_for_ollama(schema)
+            logger.debug(
+                "Ollama generate request: model=%s, prompt_len=%s, image_count=%s, "
+                "has_format=%s, options=%s",
+                self.genai_config.model,
+                len(prompt),
+                len(images) if images else 0,
+                "format" in ollama_options,
+                {k: v for k, v in ollama_options.items() if k != "format"},
+            )
             result = self.provider.generate(
                 self.genai_config.model,
                 prompt,
@@ -120,9 +129,24 @@ class OllamaClient(GenAIClient):
                 **ollama_options,
             )
             logger.debug(
-                f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
+                "Ollama generate response: done=%s, done_reason=%s, eval_count=%s, "
+                "prompt_eval_count=%s, response_len=%s",
+                result.get("done"),
+                result.get("done_reason"),
+                result.get("eval_count"),
+                result.get("prompt_eval_count"),
+                len(result.get("response", "") or ""),
             )
-            return str(result["response"]).strip()
+            response_text = str(result["response"]).strip()
+            if not response_text:
+                logger.warning(
+                    "Ollama returned a blank response for model %s (done_reason=%s, "
+                    "eval_count=%s). Check model output, ensure thinking is disabled.",
+                    self.genai_config.model,
+                    result.get("done_reason"),
+                    result.get("eval_count"),
+                )
+            return response_text
         except (
             TimeoutException,
             ResponseError,


@@ -80,7 +80,23 @@ class OpenAIClient(GenAIClient):
                 and hasattr(result, "choices")
                 and len(result.choices) > 0
             ):
-                return str(result.choices[0].message.content.strip())
+                message = result.choices[0].message
+                content = message.content
+
+                if not content:
+                    # When reasoning is enabled for some OpenAI backends the actual response
+                    # is incorrectly placed in reasoning_content instead of content.
+                    # This is buggy/incorrect behavior — reasoning should not be
+                    # enabled for these models.
+                    reasoning_content = getattr(message, "reasoning_content", None)
+                    if reasoning_content:
+                        logger.warning(
+                            "Response content was empty but reasoning_content was provided; "
+                            "reasoning appears to be enabled and should be disabled for this model."
+                        )
+                        content = reasoning_content
+
+                return str(content.strip()) if content else None
             return None
         except (TimeoutException, Exception) as e:
             logger.warning("OpenAI returned an error: %s", str(e))


@@ -590,112 +590,92 @@ class BirdsEyeFrameManager:
     ) -> Optional[list[list[Any]]]:
         """Calculate the optimal layout for 2+ cameras."""

-        def map_layout(
-            camera_layout: list[list[Any]], row_height: int
-        ) -> tuple[int, int, Optional[list[list[Any]]]]:
-            """Map the calculated layout."""
-            candidate_layout = []
-            starting_x = 0
-            x = 0
-            max_width = 0
-            y = 0
-
-            for row in camera_layout:
-                final_row = []
-                max_width = max(max_width, x)
-                x = starting_x
-                for cameras in row:
-                    camera_dims = self.cameras[cameras[0]]["dimensions"].copy()
-                    camera_aspect = cameras[1]
-
-                    if camera_dims[1] > camera_dims[0]:
-                        scaled_height = int(row_height * 2)
-                        scaled_width = int(scaled_height * camera_aspect)
-                        starting_x = scaled_width
-                    else:
-                        scaled_height = row_height
-                        scaled_width = int(scaled_height * camera_aspect)
-
-                    # layout is too large
-                    if (
-                        x + scaled_width > self.canvas.width
-                        or y + scaled_height > self.canvas.height
-                    ):
-                        return x + scaled_width, y + scaled_height, None
-
-                    final_row.append((cameras[0], (x, y, scaled_width, scaled_height)))
-                    x += scaled_width
-                y += row_height
-                candidate_layout.append(final_row)
-
-            if max_width == 0:
-                max_width = x
-
-            return max_width, y, candidate_layout
-
-        canvas_aspect_x, canvas_aspect_y = self.canvas.get_aspect(coefficient)
-        camera_layout: list[list[Any]] = []
-        camera_layout.append([])
-        starting_x = 0
-        x = starting_x
-        y = 0
-        y_i = 0
-        max_y = 0
-        for camera in cameras_to_add:
-            camera_dims = self.cameras[camera]["dimensions"].copy()
-            camera_aspect_x, camera_aspect_y = self.canvas.get_camera_aspect(
-                camera, camera_dims[0], camera_dims[1]
-            )
-
-            if camera_dims[1] > camera_dims[0]:
-                portrait = True
-            else:
-                portrait = False
-
-            if (x + camera_aspect_x) <= canvas_aspect_x:
-                # insert if camera can fit on current row
-                camera_layout[y_i].append(
-                    (
-                        camera,
-                        camera_aspect_x / camera_aspect_y,
-                    )
-                )
-
-                if portrait:
-                    starting_x = camera_aspect_x
-                else:
-                    max_y = max(
-                        max_y,
-                        camera_aspect_y,
-                    )
-
-                x += camera_aspect_x
-            else:
-                # move on to the next row and insert
-                y += max_y
-                y_i += 1
-                camera_layout.append([])
-                x = starting_x
-
-                if x + camera_aspect_x > canvas_aspect_x:
-                    return None
-
-                camera_layout[y_i].append(
-                    (
-                        camera,
-                        camera_aspect_x / camera_aspect_y,
-                    )
-                )
-                x += camera_aspect_x
-
-        if y + max_y > canvas_aspect_y:
-            return None
-
-        row_height = int(self.canvas.height / coefficient)
-        total_width, total_height, standard_candidate_layout = map_layout(
-            camera_layout, row_height
-        )
+        def find_available_x(
+            current_x: int,
+            width: int,
+            reserved_ranges: list[tuple[int, int]],
+            max_width: int,
+        ) -> Optional[int]:
+            """Find the first horizontal slot that does not collide with reservations."""
+            x = current_x
+            for reserved_start, reserved_end in sorted(reserved_ranges):
+                if x >= reserved_end:
+                    continue
+                if x + width <= reserved_start:
+                    return x
+                x = max(x, reserved_end)
+            if x + width <= max_width:
+                return x
+            return None
+
+        def map_layout(row_height: int) -> tuple[int, int, Optional[list[list[Any]]]]:
+            """Lay out cameras row by row while reserving portrait spans for the next row."""
+            candidate_layout: list[list[Any]] = []
+            reserved_ranges: dict[int, list[tuple[int, int]]] = {}
+            current_row: list[Any] = []
+            row_index = 0
+            row_y = 0
+            row_x = 0
+            max_width = 0
+            max_height = 0
+
+            for camera in cameras_to_add:
+                camera_dims = self.cameras[camera]["dimensions"].copy()
+                camera_aspect_x, camera_aspect_y = self.canvas.get_camera_aspect(
+                    camera, camera_dims[0], camera_dims[1]
+                )
+                portrait = camera_dims[1] > camera_dims[0]
+                scaled_height = row_height * 2 if portrait else row_height
+                scaled_width = int(scaled_height * (camera_aspect_x / camera_aspect_y))
+
+                while True:
+                    x = find_available_x(
+                        row_x,
+                        scaled_width,
+                        reserved_ranges.get(row_index, []),
+                        self.canvas.width,
+                    )
+
+                    if x is not None and row_y + scaled_height <= self.canvas.height:
+                        current_row.append(
+                            (camera, (x, row_y, scaled_width, scaled_height))
+                        )
+                        row_x = x + scaled_width
+                        max_width = max(max_width, row_x)
+                        max_height = max(max_height, row_y + scaled_height)
+
+                        if portrait:
+                            reserved_ranges.setdefault(row_index + 1, []).append(
+                                (x, row_x)
+                            )
+                        break
+
+                    if current_row:
+                        candidate_layout.append(current_row)
+                        current_row = []
+
+                    row_index += 1
+                    row_y = row_index * row_height
+                    row_x = 0
+
+                    if row_y + scaled_height > self.canvas.height:
+                        overflow_width = max(max_width, scaled_width)
+                        overflow_height = row_y + scaled_height
+                        return overflow_width, overflow_height, None
+
+            if current_row:
+                candidate_layout.append(current_row)
+
+            return max_width, max_height, candidate_layout
+
+        row_height = max(1, int(self.canvas.height / coefficient))
+        total_width, total_height, standard_candidate_layout = map_layout(row_height)

         if not standard_candidate_layout:
             # if standard layout didn't work
@@ -704,9 +684,9 @@ class BirdsEyeFrameManager:
                 total_width / self.canvas.width,
                 total_height / self.canvas.height,
             )
-            row_height = int(row_height / scale_down_percent)
+            row_height = max(1, int(row_height / scale_down_percent))
             total_width, total_height, standard_candidate_layout = map_layout(
-                camera_layout, row_height
+                row_height
             )

         if not standard_candidate_layout:
@@ -720,8 +700,8 @@ class BirdsEyeFrameManager:
                 1 / (total_width / self.canvas.width),
                 1 / (total_height / self.canvas.height),
             )
-            row_height = int(row_height * scale_up_percent)
-            _, _, scaled_layout = map_layout(camera_layout, row_height)
+            row_height = max(1, int(row_height * scale_up_percent))
+            _, _, scaled_layout = map_layout(row_height)

         if scaled_layout:
             return scaled_layout
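The single-pass pixel-space algorithm hinges on `find_available_x`, which scans past reserved spans left behind by double-height portrait cameras. Lifted from the hunk above and trimmed to stand alone:

```python
from typing import Optional


def find_available_x(
    current_x: int,
    width: int,
    reserved_ranges: list[tuple[int, int]],
    max_width: int,
) -> Optional[int]:
    """Return the first x >= current_x where [x, x + width) avoids every
    reserved (start, end) span, or None if nothing fits within max_width."""
    x = current_x
    for reserved_start, reserved_end in sorted(reserved_ranges):
        if x >= reserved_end:
            continue  # reservation lies entirely to the left of x
        if x + width <= reserved_start:
            return x  # the slot fits before this reservation begins
        x = max(x, reserved_end)  # otherwise skip past the reservation
    if x + width <= max_width:
        return x
    return None
```

For example, with a 360px-wide portrait span reserved at `(0, 360)` on a 1280px canvas, a 200px camera is pushed to `x = 360`, while a camera that would still fit before a later reservation keeps its original `x`.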


@@ -1,11 +1,64 @@
-"""Test camera user and password cleanup."""
+"""Tests for Birdseye canvas sizing and layout behavior."""

 import unittest
+from multiprocessing import Event

-from frigate.output.birdseye import get_canvas_shape
+from frigate.config import FrigateConfig
+from frigate.output.birdseye import BirdsEyeFrameManager, get_canvas_shape


 class TestBirdseye(unittest.TestCase):
+    def _build_manager(
+        self, camera_dimensions: dict[str, tuple[int, int]]
+    ) -> BirdsEyeFrameManager:
+        config = {
+            "mqtt": {"host": "mqtt"},
+            "birdseye": {"width": 1280, "height": 720},
+            "cameras": {},
+        }
+        for order, (camera, dimensions) in enumerate(
+            camera_dimensions.items(), start=1
+        ):
+            config["cameras"][camera] = {
+                "ffmpeg": {
+                    "inputs": [
+                        {
+                            "path": f"rtsp://10.0.0.1:554/{camera}",
+                            "roles": ["detect"],
+                        }
+                    ]
+                },
+                "detect": {
+                    "width": dimensions[0],
+                    "height": dimensions[1],
+                    "fps": 5,
+                },
+                "birdseye": {"order": order},
+            }
+        return BirdsEyeFrameManager(FrigateConfig(**config), Event())
+
+    def _assert_no_overlaps(
+        self, layout: list[list[tuple[str, tuple[int, int, int, int]]]]
+    ):
+        rectangles = [position for row in layout for _, position in row]
+        for index, rect in enumerate(rectangles):
+            x1, y1, width1, height1 = rect
+            for other in rectangles[index + 1 :]:
+                x2, y2, width2, height2 = other
+                overlap = (
+                    x1 < x2 + width2
+                    and x2 < x1 + width1
+                    and y1 < y2 + height2
+                    and y2 < y1 + height1
+                )
+                self.assertFalse(
+                    overlap,
+                    msg=f"Overlapping rectangles found: {rect} and {other}",
+                )
+
     def test_16x9(self):
         """Test 16x9 aspect ratio works as expected for birdseye."""
         width = 1280
@@ -45,3 +98,104 @@ class TestBirdseye(unittest.TestCase):
         canvas_width, canvas_height = get_canvas_shape(width, height)
         assert canvas_width == width  # width will be the same
         assert canvas_height != height
+
+    def test_portrait_camera_does_not_overlap_next_row(self):
+        """Portrait cameras should reserve their real horizontal position on the next row."""
+        manager = self._build_manager(
+            {
+                "cam_a": (1280, 720),
+                "cam_p": (360, 640),
+                "cam_b": (1280, 720),
+                "cam_c": (640, 480),
+            }
+        )
+        layout = manager.calculate_layout(["cam_a", "cam_p", "cam_b", "cam_c"], 3)
+        self.assertIsNotNone(layout)
+        assert layout is not None
+        self._assert_no_overlaps(layout)
+        cam_c = [
+            position for row in layout for camera, position in row if camera == "cam_c"
+        ][0]
+        self.assertEqual(cam_c[0], 0)
+
+    def test_portrait_reservation_only_applies_to_next_row(self):
+        """Portrait reservations should not push later rows after the span ends."""
+        manager = self._build_manager(
+            {
+                "cam_a": (1280, 720),
+                "cam_p": (360, 640),
+                "cam_b": (1280, 720),
+                "cam_c": (1280, 720),
+                "cam_d": (1280, 720),
+                "cam_e": (1280, 720),
+            }
+        )
+        layout = manager.calculate_layout(
+            ["cam_a", "cam_p", "cam_b", "cam_c", "cam_d", "cam_e"],
+            3,
+        )
+        self.assertIsNotNone(layout)
+        assert layout is not None
+        self._assert_no_overlaps(layout)
+        cam_e = [
+            position for row in layout for camera, position in row if camera == "cam_e"
+        ][0]
+        self.assertEqual(cam_e[0], 0)
+
+    def test_multiple_portraits_reserve_distinct_ranges(self):
+        """Multiple portrait cameras in one row should reserve separate spans below them."""
+        manager = self._build_manager(
+            {
+                "cam_a": (640, 480),
+                "cam_p1": (360, 640),
+                "cam_p2": (360, 640),
+                "cam_b": (640, 480),
+                "cam_c": (1280, 720),
+                "cam_d": (640, 480),
+            }
+        )
+        layout = manager.calculate_layout(
+            ["cam_a", "cam_p1", "cam_p2", "cam_b", "cam_c", "cam_d"],
+            4,
+        )
+        self.assertIsNotNone(layout)
+        assert layout is not None
+        self._assert_no_overlaps(layout)
+
+    def test_two_landscapes_then_portrait_then_two_landscapes(self):
+        """A portrait after two landscapes should reserve only its own tail span."""
+        manager = self._build_manager(
+            {
+                "cam_a": (1280, 720),
+                "cam_b": (1280, 720),
+                "cam_p": (360, 640),
+                "cam_c": (1280, 720),
+                "cam_d": (1280, 720),
+            }
+        )
+        layout = manager.calculate_layout(
+            ["cam_a", "cam_b", "cam_p", "cam_c", "cam_d"],
+            3,
+        )
+        self.assertIsNotNone(layout)
+        assert layout is not None
+        self._assert_no_overlaps(layout)
+        cam_c = [
+            position for row in layout for camera, position in row if camera == "cam_c"
+        ][0]
+        cam_d = [
+            position for row in layout for camera, position in row if camera == "cam_d"
+        ][0]
+        self.assertEqual(cam_c[0], 0)
+        self.assertEqual(cam_d[0], cam_c[0] + cam_c[2])
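The `_assert_no_overlaps` helper in the tests above reduces to a standard pairwise axis-aligned rectangle intersection check. Pulled out as a plain function (the name `rects_overlap` is mine, not from the test file):

```python
def rects_overlap(
    a: tuple[int, int, int, int], b: tuple[int, int, int, int]
) -> bool:
    """True when two (x, y, width, height) rectangles intersect with positive area.

    Rectangles that merely share an edge do not count as overlapping, which is
    exactly the property a tiled camera layout needs.
    """
    x1, y1, w1, h1 = a
    x2, y2, w2, h2 = b
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1


# Adjacent tiles share an edge but do not overlap:
print(rects_overlap((0, 0, 100, 100), (100, 0, 100, 100)))  # -> False
print(rects_overlap((0, 0, 100, 100), (50, 50, 100, 100)))  # -> True
```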


@ -711,8 +711,11 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro
else: else:
format_entries = None format_entries = None
ffprobe_cmd = [ def run(rtsp_transport: Optional[str] = None) -> sp.CompletedProcess:
ffmpeg.ffprobe_path, cmd = [ffmpeg.ffprobe_path]
if rtsp_transport:
cmd += ["-rtsp_transport", rtsp_transport]
cmd += [
"-timeout", "-timeout",
"1000000", "1000000",
"-print_format", "-print_format",
@ -720,14 +723,19 @@ def ffprobe_stream(ffmpeg, path: str, detailed: bool = False) -> sp.CompletedPro
"-show_entries", "-show_entries",
f"stream={stream_entries}", f"stream={stream_entries}",
] ]
# Add format entries for detailed mode
if detailed and format_entries: if detailed and format_entries:
ffprobe_cmd.extend(["-show_entries", f"format={format_entries}"]) cmd.extend(["-show_entries", f"format={format_entries}"])
cmd.extend(["-loglevel", "error", clean_path])
return sp.run(cmd, capture_output=True)
ffprobe_cmd.extend(["-loglevel", "error", clean_path]) result = run()
return sp.run(ffprobe_cmd, capture_output=True) # For RTSP: retry with explicit TCP transport if the first attempt failed
# (default UDP may be blocked)
if result.returncode != 0 and clean_path.startswith("rtsp://"):
result = run(rtsp_transport="tcp")
return result
def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess: def vainfo_hwaccel(device_name: Optional[str] = None) -> sp.CompletedProcess:
@@ -807,10 +815,15 @@ async def get_video_properties(
 ) -> dict[str, Any]:
     async def probe_with_ffprobe(
         url: str,
+        rtsp_transport: Optional[str] = None,
     ) -> tuple[bool, int, int, Optional[str], float]:
         """Fallback using ffprobe: returns (valid, width, height, codec, duration)."""
-        cmd = [
-            ffmpeg.ffprobe_path,
+        cmd = [ffmpeg.ffprobe_path]
+        if rtsp_transport:
+            cmd += ["-rtsp_transport", rtsp_transport]
+        cmd += [
+            "-rw_timeout",
+            "5000000",
             "-v",
             "quiet",
             "-print_format",
@@ -872,13 +885,27 @@ async def get_video_properties(
         cap.release()
         return valid, width, height, fourcc, duration
 
-    # try cv2 first
-    has_video, width, height, fourcc, duration = probe_with_cv2(url)
+    is_rtsp = url.startswith("rtsp://")
+    if is_rtsp:
+        # skip cv2 for RTSP: its FFmpeg backend has a hardcoded ~30s internal
+        # timeout that cannot be shortened per-call, and ffprobe bounded by
+        # -rw_timeout handles RTSP probing reliably
+        has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+    else:
+        # try cv2 first for local files, HTTP, RTMP
+        has_video, width, height, fourcc, duration = probe_with_cv2(url)
 
-    # fallback to ffprobe if needed
-    if not has_video or (get_duration and duration < 0):
-        has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+        # fallback to ffprobe if needed
+        if not has_video or (get_duration and duration < 0):
+            has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
+
+    # last resort for RTSP: try TCP transport, since default UDP may be blocked
+    if (not has_video or (get_duration and duration < 0)) and is_rtsp:
+        has_video, width, height, fourcc, duration = await probe_with_ffprobe(
+            url, rtsp_transport="tcp"
+        )
 
     result: dict[str, Any] = {"has_valid_video": has_video}
     if has_video:
         result.update({"width": width, "height": height})
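The probe ordering this change establishes can be summarized in a small sketch. The backend labels below are illustrative shorthand for the attempts in `get_video_properties`, not identifiers from the codebase:

```python
def probe_plan(url: str) -> list[str]:
    """Return the order of probe attempts for a given URL."""
    if url.startswith("rtsp://"):
        # cv2 is skipped for RTSP: its FFmpeg backend enforces a long
        # internal timeout, while ffprobe can be bounded by -rw_timeout.
        # A TCP attempt is the last resort if the UDP attempt fails.
        return ["ffprobe(udp)", "ffprobe(tcp)"]
    # Local files, HTTP, RTMP: cheap cv2 probe first, ffprobe as fallback.
    return ["cv2", "ffprobe"]
```

So RTSP sources never touch cv2 at all, while everything else keeps the original cv2-then-ffprobe order.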

web/package-lock.json generated

@@ -54,7 +54,7 @@
         "immer": "^10.1.1",
         "js-yaml": "^4.1.1",
         "konva": "^10.2.3",
-        "lodash": "^4.17.23",
+        "lodash": "^4.18.1",
         "lucide-react": "^0.577.0",
         "monaco-yaml": "^5.4.1",
         "next-themes": "^0.4.6",
@@ -9636,15 +9636,15 @@
       }
     },
     "node_modules/lodash": {
-      "version": "4.17.23",
-      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
+      "version": "4.18.1",
+      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.18.1.tgz",
+      "integrity": "sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==",
       "license": "MIT"
     },
     "node_modules/lodash-es": {
-      "version": "4.17.23",
-      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
-      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
+      "version": "4.18.1",
+      "resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.18.1.tgz",
+      "integrity": "sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A==",
       "license": "MIT"
     },
     "node_modules/lodash.merge": {


@@ -68,7 +68,7 @@
     "immer": "^10.1.1",
     "js-yaml": "^4.1.1",
     "konva": "^10.2.3",
-    "lodash": "^4.17.23",
+    "lodash": "^4.18.1",
     "lucide-react": "^0.577.0",
     "monaco-yaml": "^5.4.1",
     "next-themes": "^0.4.6",


@@ -415,6 +415,7 @@
     "audioCodecGood": "Audio codec is {{codec}}.",
     "resolutionHigh": "A resolution of {{resolution}} may cause increased resource usage.",
     "resolutionLow": "A resolution of {{resolution}} may be too low for reliable detection of small objects.",
+    "resolutionUnknown": "The resolution of this stream could not be probed. You should manually set the detect resolution in Settings or your config.",
     "noAudioWarning": "No audio detected for this stream, recordings will not have audio.",
     "audioCodecRecordError": "The AAC audio codec is required to support audio in recordings.",
     "audioCodecRequired": "An audio stream is required to support audio detection.",


@@ -218,7 +218,7 @@ export default function CameraReviewClassification({
               <Label
                 className={cn(
                   "flex flex-row items-center text-base",
-                  alertsZonesModified && "text-danger",
+                  alertsZonesModified && "text-unsaved",
                 )}
               >
                 <Trans ns="views/settings">cameraReview.review.alerts</Trans>
@@ -286,7 +286,7 @@ export default function CameraReviewClassification({
               <Label
                 className={cn(
                   "flex flex-row items-center text-base",
-                  detectionsZonesModified && "text-danger",
+                  detectionsZonesModified && "text-unsaved",
                 )}
               >
                 <Trans ns="views/settings">


@@ -1012,7 +1012,7 @@ export function ConfigSection({
         >
           {hasChanges && (
             <div className="flex items-center gap-2">
-              <span className="text-sm text-danger">
+              <span className="text-sm text-unsaved">
                 {t("unsavedChanges", {
                   ns: "views/settings",
                   defaultValue: "You have unsaved changes",
@@ -1299,7 +1299,7 @@ export function ConfigSection({
           {hasChanges && (
             <Badge
               variant="secondary"
-              className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+              className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
             >
               {t("button.modified", {
                 ns: "common",


@@ -154,7 +154,7 @@ export function KnownPlatesField(props: FieldProps) {
       <div className="flex items-center justify-between">
         <div>
           <CardTitle
-            className={cn("text-sm", isModified && "text-danger")}
+            className={cn("text-sm", isModified && "text-unsaved")}
           >
             {title}
           </CardTitle>


@@ -142,7 +142,7 @@ export function ReplaceRulesField(props: FieldProps) {
       <div className="flex items-center justify-between">
         <div>
           <CardTitle
-            className={cn("text-sm", isModified && "text-danger")}
+            className={cn("text-sm", isModified && "text-unsaved")}
           >
             {title}
           </CardTitle>


@@ -497,7 +497,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
         htmlFor={id}
         className={cn(
           "text-sm font-medium",
-          isModified && "text-danger",
+          isModified && "text-unsaved",
           hasFieldErrors && "text-destructive",
         )}
       >
@@ -516,7 +516,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
     return (
       <Label
         htmlFor={id}
-        className={cn("text-sm font-medium", isModified && "text-danger")}
+        className={cn("text-sm font-medium", isModified && "text-unsaved")}
       >
         {finalLabel}
         {required && <span className="ml-1 text-destructive">*</span>}
@@ -535,7 +535,7 @@ export function FieldTemplate(props: FieldTemplateProps) {
         htmlFor={id}
         className={cn(
           "text-sm font-medium",
-          isModified && "text-danger",
+          isModified && "text-unsaved",
           hasFieldErrors && "text-destructive",
         )}
       >


@@ -467,7 +467,7 @@ export function ObjectFieldTemplate(props: ObjectFieldTemplateProps) {
           <CardTitle
             className={cn(
               "flex items-center text-sm",
-              hasModifiedDescendants && "text-danger",
+              hasModifiedDescendants && "text-unsaved",
             )}
           >
             {inferredLabel}


@@ -607,23 +607,38 @@ function StreamIssues({
       }
     }
 
-    if (stream.roles.includes("detect") && stream.resolution) {
-      const [width, height] = stream.resolution.split("x").map(Number);
-      if (!isNaN(width) && !isNaN(height) && width > 0 && height > 0) {
-        const minDimension = Math.min(width, height);
-        const maxDimension = Math.max(width, height);
+    if (stream.roles.includes("detect") && stream.testResult) {
+      const probedResolution = stream.testResult.resolution;
+      let probedWidth = 0;
+      let probedHeight = 0;
+      if (probedResolution) {
+        const [w, h] = probedResolution.split("x").map(Number);
+        if (!isNaN(w) && !isNaN(h)) {
+          probedWidth = w;
+          probedHeight = h;
+        }
+      }
+
+      if (probedWidth <= 0 || probedHeight <= 0) {
+        result.push({
+          type: "error",
+          message: t("cameraWizard.step4.issues.resolutionUnknown"),
+        });
+      } else {
+        const minDimension = Math.min(probedWidth, probedHeight);
+        const maxDimension = Math.max(probedWidth, probedHeight);
         if (minDimension > 1080) {
           result.push({
             type: "warning",
             message: t("cameraWizard.step4.issues.resolutionHigh", {
-              resolution: stream.resolution,
+              resolution: probedResolution,
             }),
           });
         } else if (maxDimension < 640) {
           result.push({
             type: "error",
             message: t("cameraWizard.step4.issues.resolutionLow", {
-              resolution: stream.resolution,
+              resolution: probedResolution,
             }),
           });
         }
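The wizard's resolution check boils down to: parse a "WIDTHxHEIGHT" string, flag the resolution as unknown when parsing fails, and otherwise compare against the 1080/640 thresholds. A Python sketch of that same logic (function names are illustrative; the real implementation is the TSX in this diff):

```python
from typing import Optional


def parse_resolution(probed: Optional[str]) -> tuple[int, int]:
    """Parse 'WIDTHxHEIGHT'; (0, 0) means the resolution is unknown."""
    if not probed:
        return (0, 0)
    parts = probed.split("x")
    if len(parts) != 2:
        return (0, 0)
    try:
        w, h = int(parts[0]), int(parts[1])
    except ValueError:
        return (0, 0)
    return (w, h) if w > 0 and h > 0 else (0, 0)


def classify_resolution(probed: Optional[str]) -> str:
    """Mirror the wizard thresholds: unknown is an error, >1080p a warning."""
    w, h = parse_resolution(probed)
    if w <= 0 or h <= 0:
        return "error: resolutionUnknown"
    if min(w, h) > 1080:
        return "warning: resolutionHigh"
    if max(w, h) < 640:
        return "error: resolutionLow"
    return "ok"
```

Treating an unparseable resolution as an explicit error (rather than silently skipping the check, as before) is what surfaces the new `resolutionUnknown` message when a stream probe fails.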


@@ -1435,7 +1435,7 @@ export default function Settings() {
                 />
               )}
               {showUnsavedDot && (
-                <span className="inline-block size-2 rounded-full bg-danger" />
+                <span className="inline-block size-2 rounded-full bg-unsaved" />
               )}
             </div>
           )}
@@ -1516,7 +1516,7 @@ export default function Settings() {
         <div className="sticky bottom-0 z-50 mt-2 bg-background p-4">
           <div className="flex flex-col items-center gap-2">
             <div className="flex items-center gap-2">
-              <span className="text-sm text-danger">
+              <span className="text-sm text-unsaved">
                 {t("unsavedChanges", {
                   ns: "views/settings",
                   defaultValue: "You have unsaved changes",


@@ -79,11 +79,11 @@ const PROFILE_COLORS: ProfileColor[] = [
     bgMuted: "bg-green-400/20",
   },
   {
-    bg: "bg-amber-400",
-    text: "text-amber-400",
-    dot: "bg-amber-400",
-    border: "border-amber-400",
-    bgMuted: "bg-amber-400/20",
+    bg: "bg-fuchsia-500",
+    text: "text-fuchsia-500",
+    dot: "bg-fuchsia-500",
+    border: "border-fuchsia-500",
+    bgMuted: "bg-fuchsia-500/20",
   },
   {
     bg: "bg-slate-400",
@@ -93,11 +93,11 @@ const PROFILE_COLORS: ProfileColor[] = [
     bgMuted: "bg-slate-400/20",
   },
   {
-    bg: "bg-orange-300",
-    text: "text-orange-300",
-    dot: "bg-orange-300",
-    border: "border-orange-300",
-    bgMuted: "bg-orange-300/20",
+    bg: "bg-stone-500",
+    text: "text-stone-500",
+    dot: "bg-stone-500",
+    border: "border-stone-500",
+    bgMuted: "bg-stone-500/20",
   },
   {
     bg: "bg-blue-300",


@@ -380,7 +380,9 @@ export default function Go2RtcStreamsSettingsView({
         >
           {hasChanges && (
             <div className="flex items-center gap-2">
-              <span className="text-sm text-danger">{t("unsavedChanges")}</span>
+              <span className="text-sm text-unsaved">
+                {t("unsavedChanges")}
+              </span>
             </div>
           )}
           <div className="flex w-full items-center gap-2 md:w-auto">


@@ -212,7 +212,7 @@ export function SingleSectionPage({
           {sectionStatus.hasChanges && (
             <Badge
               variant="secondary"
-              className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+              className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
             >
               {t("button.modified", {
                 ns: "common",
@@ -250,7 +250,7 @@ export function SingleSectionPage({
           {sectionStatus.hasChanges && (
             <Badge
               variant="secondary"
-              className="cursor-default bg-danger text-xs text-white hover:bg-danger"
+              className="cursor-default bg-unsaved text-xs text-black hover:bg-unsaved"
             >
               {t("button.modified", { ns: "common", defaultValue: "Modified" })}
             </Badge>


@@ -65,6 +65,7 @@ module.exports = {
         ring: "hsl(var(--ring))",
         danger: "#ef4444",
         success: "#22c55e",
+        unsaved: "#f59e0b",
         background: "hsl(var(--background))",
         background_alt: "hsl(var(--background-alt))",
         foreground: "hsl(var(--foreground))",