Compare commits


8 Commits

Author SHA1 Message Date
Jonathan Gilbert
41e73b92ab
Merge 4359aca5bf into 814c497bef 2026-05-03 16:51:49 -05:00
Josh Hawkins
814c497bef
Use Job infrastructure for Debug Replay (#23099)
* use ReplayState enum

* extract shared ffmpeg progress helper

* make start call non-blocking with worker thread

* expose replay state on status endpoint and return 202 from start

* cancel in-flight ffmpeg when stop is called during preparation

* add replay i18n strings for preparing and error states

* show status in replay UI

* navigate immediately on 202 from debug replay menus and dialog

* remove unused

* simplify to use Job infrastructure

* tests

* cleanup and tweaks

* fetch schema

* update api spec

* formatting

* fix e2e test

* mypy

* clean up

* formatting

* fix

* fix test

* don't try to show camera image until status reports ready

* simplify loading logic

* fix race in latest_frame on debug replay shutdown

* remove toast when successfully stopping

it gets hidden almost immediately
2026-05-03 14:54:20 -06:00
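The non-blocking start flow described in the commit above (worker thread, 202 response, cancellable preparation) can be sketched roughly as follows. This is an illustrative sketch only; the class and method names are hypothetical and not Frigate's actual code.

```python
import threading
import time
from enum import Enum


class ReplayState(Enum):
    IDLE = "idle"
    PREPARING = "preparing"
    RUNNING = "running"
    ERROR = "error"


class DebugReplay:
    """start() returns immediately; preparation runs on a worker thread."""

    def __init__(self):
        self.state = ReplayState.IDLE
        self._cancel = threading.Event()
        self._worker = None

    def start(self):
        # Non-blocking: kick off preparation and report 202 Accepted
        # so the UI can navigate immediately and poll status.
        self.state = ReplayState.PREPARING
        self._cancel.clear()
        self._worker = threading.Thread(target=self._prepare, daemon=True)
        self._worker.start()
        return 202

    def _prepare(self):
        # Stand-in for the ffmpeg preparation step; checks for
        # cancellation between units of work.
        for _ in range(5):
            if self._cancel.is_set():
                self.state = ReplayState.IDLE
                return
            time.sleep(0.01)
        self.state = ReplayState.RUNNING

    def stop(self):
        # Cancels in-flight preparation if stop arrives early.
        self._cancel.set()
        if self._worker:
            self._worker.join()
        self.state = ReplayState.IDLE

    def status(self):
        # Replay state exposed on the status endpoint.
        return {"state": self.state.value}
```

The key design point is that `start()` never blocks on ffmpeg: callers get a 202 and watch the state field, and `stop()` can interrupt preparation at any point.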
Josh Hawkins
5bc15d4aa9
chapter and thumbnail fixes (#23100)
- Skip null end_time when building export chapter metadata
- Use plain seconds for export thumbnail ffmpeg seek
2026-05-03 13:25:53 -06:00
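The two fixes above can be illustrated with a minimal sketch. The function names and the simplified chapter dicts are hypothetical (real ffmpeg chapter metadata uses an FFMETADATA file with TIMEBASE/START/END entries); the point is skipping in-progress events and emitting the seek offset as plain seconds.

```python
def build_chapters(events, clip_start):
    """Build chapter entries for an export, skipping events with no end_time."""
    chapters = []
    for event in events:
        if event.get("end_time") is None:
            # An in-progress event has no end yet; including it would
            # produce a chapter with an invalid end timestamp.
            continue
        chapters.append({
            "start_ms": int((event["start_time"] - clip_start) * 1000),
            "end_ms": int((event["end_time"] - clip_start) * 1000),
            "title": event.get("label", "event"),
        })
    return chapters


def thumbnail_seek(offset_seconds):
    """Format an ffmpeg -ss value as plain seconds rather than HH:MM:SS."""
    return f"{offset_seconds:.1f}"
```

Plain-seconds seek values avoid formatting edge cases for very short exports where an HH:MM:SS string would round to zero.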
Josh Hawkins
7ad233ef15
fix malformed svg from breaking docs build (#23102) 2026-05-03 13:21:22 -06:00
GuoQing Liu
882b3a8ffd
docs: add docker compose generator (#22956)
* docs: add docker compose generator

* docs: add more icon support

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Rename heading from 'Generic Hardware Acceleration' to 'Generic Hardware Devices'

* Remove port 5000 configuration for security reasons

Removed unauthenticated Web UI port 5000 from configuration due to security risks.

* docs: remove 5000 port tips

* docs: improve NVIDIA GPU count input

* docs: add docker compose tabs

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/OtherOptions.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/StoragePaths.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/components/StoragePaths.tsx

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/src/components/DockerComposeGenerator/config/config.yaml

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* docs: Adjust the position of the RTSP password variable option

* docs: timezone change to select

* docs: add hailo and memryX mx3 driver tips

* docs: RTSP password is optional

* docs: fix select style

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2026-05-03 13:56:51 -05:00
Pedro Diogo
b6fd86a066
feat(genai): add api_key auth support for ollama cloud (#23096)
- Add _auth_headers() helper to pass Bearer token when api_key is set
- Wire headers into all Ollama client instantiations (sync + async)
- Update docs with Ollama Cloud direct connection example and yaml config
2026-05-02 17:55:25 -06:00
Josh Hawkins
147cd5cc2b
Miscellaneous fixes (#23092)
* lpr fixes

- remove duplicate code
- fix min_area check for non frigate+ code path
- move log outside of non frigate+ code path

* only show chat link when a genai provider is configured with the chat role

* respect ui.timezone when generating fallback export names

* reapply radix pointer events fix to call sites that use navigate()

* formatting

* fall back to prior preview frame for short export thumbnails

* fix typing

* fix e2e test for chat navigation

* batch annotation offset to seek atomically and throttle slider drag

* add debug replay loading toast for explore actions

* Improve handling of webpush missing shortSummary

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2026-05-02 16:35:42 -06:00
Jonathan Gilbert
4359aca5bf
feat: native support for rootless container execution 2026-05-02 09:14:18 +10:00
61 changed files with 4343 additions and 525 deletions

.gitignore vendored
View File

@@ -22,3 +22,8 @@ core
 !/web/**/*.ts
 .idea/*
 .ipynb_checkpoints
+
+# Auto-generated Docker Compose Generator config files
+docs/src/components/DockerComposeGenerator/config/devices.ts
+docs/src/components/DockerComposeGenerator/config/hardware.ts
+docs/src/components/DockerComposeGenerator/config/ports.ts

View File

@@ -287,6 +287,9 @@ RUN --mount=type=bind,source=docker/main/install_memryx.sh,target=/deps/install_
 COPY --from=deps-rootfs / /

+RUN mkdir -p /etc/letsencrypt/www /etc/letsencrypt/live/frigate /usr/local/nginx/logs /config /run /tmp/cache /media/frigate \
+    && chmod -R a+rwX /etc/letsencrypt /usr/local/nginx /config /run /tmp/cache /media/frigate
+
 RUN ldconfig

 EXPOSE 5000
@@ -297,6 +300,8 @@ EXPOSE 8555/tcp 8555/udp
 ENV S6_LOGGING_SCRIPT="T 1 n0 s10000000 T"

 # Do not fail on long-running download scripts
 ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
+# Set HOME to cache directory so rootless users can cache downloaded models
+ENV HOME=/tmp/cache

 ENTRYPOINT ["/init"]
 CMD []

View File

@@ -1,4 +1,4 @@
 #!/command/with-contenv bash
 # shellcheck shell=bash

-exec logutil-service /dev/shm/logs/certsync
+exec /usr/local/bin/logutil /dev/shm/logs/certsync/

View File

@@ -1,4 +1,4 @@
 #!/command/with-contenv bash
 # shellcheck shell=bash

-exec logutil-service /dev/shm/logs/frigate
+exec /usr/local/bin/logutil /dev/shm/logs/frigate/

View File

@@ -1,4 +1,4 @@
 #!/command/with-contenv bash
 # shellcheck shell=bash

-exec logutil-service /dev/shm/logs/go2rtc
+exec /usr/local/bin/logutil /dev/shm/logs/go2rtc/

View File

@@ -7,5 +7,7 @@ set -o errexit -o nounset -o pipefail
 dirs=(/dev/shm/logs/frigate /dev/shm/logs/go2rtc /dev/shm/logs/nginx /dev/shm/logs/certsync)
 mkdir -p "${dirs[@]}"
-chown nobody:nogroup "${dirs[@]}"
+if [ "$(id -u)" = "0" ]; then
+    chown nobody:nogroup "${dirs[@]}"
+fi
 chmod 02755 "${dirs[@]}"

View File

@@ -1,4 +1,4 @@
 #!/command/with-contenv bash
 # shellcheck shell=bash

-exec logutil-service /dev/shm/logs/nginx
+exec /usr/local/bin/logutil /dev/shm/logs/nginx/

View File

@@ -65,6 +65,11 @@ function set_worker_processes() {
 set_worker_processes

+# NGINX cannot switch users if running rootless; strip the directive
+if [ "$(id -u)" != "0" ]; then
+    sed -i '/^user root;/d' /usr/local/nginx/conf/nginx.conf || true
+fi
+
 # ensure the directory for ACME challenges exists
 mkdir -p /etc/letsencrypt/www

View File

@@ -0,0 +1,17 @@
+#!/command/with-contenv bash
+# shellcheck shell=bash
+
+LOG_DIR=$1
+
+if [ -z "$LOG_DIR" ]; then
+    echo "Usage: $0 <log-dir>"
+    exit 1
+fi
+
+CMD=(s6-log -b -- ${S6_LOGGING_SCRIPT:-n20 s1000000 T} "$LOG_DIR")
+
+if [ "$(id -u)" = "0" ]; then
+    exec /command/s6-envuidgid nobody /command/s6-applyuidgid -U -- "${CMD[@]}"
+else
+    exec "${CMD[@]}"
+fi

View File

@@ -201,7 +201,7 @@ Cloud Generative AI providers require an active internet connection to send imag
 ### Ollama Cloud

-Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
+Ollama also supports [cloud models](https://ollama.com/cloud), where model inference is performed in the cloud. You can connect directly to Ollama Cloud by setting `base_url` to `https://ollama.com` and providing an API key. Alternatively, you can run Ollama locally and use a cloud model name so your local instance forwards requests to the cloud. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).

 #### Configuration

@@ -210,7 +210,8 @@ Ollama also supports [cloud models](https://ollama.com/cloud), where your local
 1. Navigate to <NavPath path="Settings > Enrichments > Generative AI" />.
    - Set **Provider** to `ollama`
-   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`)
+   - Set **Base URL** to your local Ollama address (e.g., `http://localhost:11434`) or `https://ollama.com` for direct cloud inference
+   - Set **API key** if required by your endpoint (e.g., when using `https://ollama.com`)
    - Set **Model** to the cloud model name

 </TabItem>
@@ -223,6 +224,16 @@
   model: cloud-model-name
 ```

+or when using Ollama Cloud directly
+
+```yaml
+genai:
+  provider: ollama
+  base_url: https://ollama.com
+  model: cloud-model-name
+  api_key: your-api-key
+```
+
 </TabItem>
 </ConfigTabs>

View File

@@ -4,6 +4,9 @@ title: Installation
 ---

 import ShmCalculator from '@site/src/components/ShmCalculator'
+import DockerComposeGenerator from '@site/src/components/DockerComposeGenerator'
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';

 Frigate is a Docker container that can be run on any Docker host including as a [Home Assistant App](https://www.home-assistant.io/apps/). Note that the Home Assistant App is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant, whether you are running Frigate as a standalone Docker container or as a Home Assistant App.
@@ -474,6 +477,16 @@ Finally, configure [hardware object detection](/configuration/object_detectors#a
 Running through Docker with Docker Compose is the recommended install method.

+<Tabs>
+<TabItem value="domestic" label="Docker Compose Generator" default>
+
+Generate a Frigate Docker Compose configuration based on your hardware and requirements.
+
+<DockerComposeGenerator/>
+
+</TabItem>
+<TabItem value="original" label="Example Docker Compose File">
+
 ```yaml
 services:
   frigate:
@@ -507,6 +520,10 @@ services:
     environment:
       FRIGATE_RTSP_PASSWORD: "password"
 ```
+
+</TabItem>
+</Tabs>

 **Docker CLI**

 If you can't use Docker Compose, you can run the container with something similar to this:

View File

@@ -14,9 +14,11 @@
         "@docusaurus/theme-mermaid": "^3.7.0",
         "@inkeep/docusaurus": "^2.0.16",
         "@mdx-js/react": "^3.1.0",
+        "@types/js-yaml": "^4.0.9",
         "clsx": "^2.1.1",
         "docusaurus-plugin-openapi-docs": "^4.5.1",
         "docusaurus-theme-openapi-docs": "^4.5.1",
+        "js-yaml": "^4.1.1",
         "prism-react-renderer": "^2.4.1",
         "raw-loader": "^4.0.2",
         "react": "^18.3.1",
@@ -5747,6 +5749,11 @@
         "@types/istanbul-lib-report": "*"
       }
     },
+    "node_modules/@types/js-yaml": {
+      "version": "4.0.9",
+      "resolved": "https://mirrors.tencent.com/npm/@types/js-yaml/-/js-yaml-4.0.9.tgz",
+      "integrity": "sha512-k4MGaQl5TGo/iipqb2UDG2UwjXziSWkh0uysQelTlJpX1qGlpUZYm8PnO4DxG1qBomtJUdYJ6qR6xdIah10JLg=="
+    },
     "node_modules/@types/json-schema": {
       "version": "7.0.15",
       "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz",
@@ -12883,7 +12890,7 @@
     "node_modules/js-yaml": {
       "version": "4.1.1",
-      "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
+      "resolved": "https://mirrors.tencent.com/npm/js-yaml/-/js-yaml-4.1.1.tgz",
       "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
       "license": "MIT",
       "dependencies": {

View File

@@ -3,9 +3,10 @@
   "version": "0.0.0",
   "private": true,
   "scripts": {
+    "build:config": "node scripts/build-config.mjs",
     "docusaurus": "docusaurus",
-    "start": "npm run regen-docs && docusaurus start --host 0.0.0.0",
+    "start": "npm run build:config && npm run regen-docs && docusaurus start --host 0.0.0.0",
-    "build": "npm run regen-docs && docusaurus build",
+    "build": "npm run build:config && npm run regen-docs && docusaurus build",
     "swizzle": "docusaurus swizzle",
     "deploy": "docusaurus deploy",
     "clear": "docusaurus clear",
@@ -23,9 +24,11 @@
     "@docusaurus/theme-mermaid": "^3.7.0",
     "@inkeep/docusaurus": "^2.0.16",
     "@mdx-js/react": "^3.1.0",
+    "@types/js-yaml": "^4.0.9",
     "clsx": "^2.1.1",
     "docusaurus-plugin-openapi-docs": "^4.5.1",
     "docusaurus-theme-openapi-docs": "^4.5.1",
+    "js-yaml": "^4.1.1",
     "prism-react-renderer": "^2.4.1",
     "raw-loader": "^4.0.2",
     "react": "^18.3.1",

View File

@@ -0,0 +1,64 @@
#!/usr/bin/env node
/**
* Build script: reads config.yaml and generates TypeScript files
* for the Docker Compose Generator.
*
* Usage: node scripts/build-config.mjs
*/
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
import yaml from "js-yaml";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const CONFIG_DIR = path.resolve(__dirname, "../src/components/DockerComposeGenerator/config");
const YAML_PATH = path.join(CONFIG_DIR, "config.yaml");
// Read & parse YAML
const raw = fs.readFileSync(YAML_PATH, "utf8");
const config = yaml.load(raw);
if (!config.devices || !config.hardware || !config.ports) {
console.error("config.yaml must contain 'devices', 'hardware', and 'ports' sections.");
process.exit(1);
}
/**
* Generate a .ts file from a section of the YAML config.
*/
function generateTsFile(sectionName, items, typeName, varName, mapVarName, yamlFilename) {
const jsonItems = JSON.stringify(items, null, 2);
// Indent JSON to fit inside the array literal
const indented = jsonItems
.split("\n")
.map((line, i) => (i === 0 ? line : " " + line))
.join("\n");
const content = `/**
* AUTO-GENERATED FILE - do not edit directly.
* Source: ${yamlFilename}
* To update, edit the YAML file and run: npm run build:config
*/
import type { ${typeName} } from "./types";
export const ${varName}: ${typeName}[] = ${indented};
/** Lookup map for quick access by ID */
export const ${mapVarName}: Map<string, ${typeName}> = new Map(${varName}.map((item) => [item.id, item]));
`;
const outPath = path.join(CONFIG_DIR, `${sectionName}.ts`);
fs.writeFileSync(outPath, content, "utf8");
console.log(` ✓ Generated ${sectionName}.ts (${items.length} items)`);
}
console.log("Building config from config.yaml...");
generateTsFile("devices", config.devices, "DeviceConfig", "devices", "deviceMap", "config.yaml");
generateTsFile("hardware", config.hardware, "HardwareOption", "hardwareOptions", "hardwareMap", "config.yaml");
generateTsFile("ports", config.ports, "PortConfig", "ports", "portMap", "config.yaml");
console.log("Done!");

View File

@@ -0,0 +1,108 @@
import React from "react";
import Admonition from "@theme/Admonition";
import DeviceSelector from "./components/DeviceSelector";
import HardwareOptions from "./components/HardwareOptions";
import PortConfigSection from "./components/PortConfig";
import StoragePaths from "./components/StoragePaths";
import NvidiaGpuConfig from "./components/NvidiaGpuConfig";
import OtherOptions from "./components/OtherOptions";
import GeneratedOutput from "./components/GeneratedOutput";
import { useConfigGenerator } from "./hooks/useConfigGenerator";
import styles from "./styles.module.css";
/**
* Simple markdown-link-to-React renderer for help text.
 * Only supports [text](url) syntax; no nested brackets.
*/
function renderHelpText(text: string): React.ReactNode {
const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
return parts.map((part, i) => {
const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
if (match) {
return (
<a key={i} href={match[2]}>
{match[1]}
</a>
);
}
return <React.Fragment key={i}>{part}</React.Fragment>;
});
}
export default function DockerComposeGenerator() {
const {
deviceId, device, hardwareEnabled,
portEnabled,
nvidiaGpuCount, nvidiaGpuDeviceId,
configPath, mediaPath, rtspPassword, timezone, shmSize,
shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
hasAnyHardware, generatedYaml,
selectDevice, toggleHardware, togglePort,
handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
setRtspPassword, setTimezone, isHardwareDisabled,
} = useConfigGenerator();
return (
<div className={styles.generator}>
<div className={styles.card}>
<DeviceSelector selectedId={deviceId} onSelect={selectDevice} />
{device.helpText && (
<Admonition type={device.helpType || "info"}>
{renderHelpText(device.helpText)}
</Admonition>
)}
{device.needsNvidiaConfig && (
<NvidiaGpuConfig
gpuCount={nvidiaGpuCount}
gpuDeviceId={nvidiaGpuDeviceId}
gpuDeviceIdError={gpuDeviceIdError}
onGpuCountChange={handleNvidiaGpuCountChange}
onGpuDeviceIdChange={handleNvidiaGpuDeviceIdChange}
/>
)}
<HardwareOptions
deviceId={deviceId}
hardwareEnabled={hardwareEnabled}
onToggle={toggleHardware}
isDisabled={isHardwareDisabled}
/>
<StoragePaths
configPath={configPath}
mediaPath={mediaPath}
configPathError={configPathError}
mediaPathError={mediaPathError}
onConfigPathChange={handleConfigPathChange}
onMediaPathChange={handleMediaPathChange}
/>
<PortConfigSection
portEnabled={portEnabled}
onTogglePort={togglePort}
/>
<OtherOptions
rtspPassword={rtspPassword}
timezone={timezone}
shmSize={shmSize}
shmSizeError={shmSizeError}
onRtspPasswordChange={setRtspPassword}
onTimezoneChange={setTimezone}
onShmSizeChange={handleShmSizeChange}
/>
<GeneratedOutput
yaml={generatedYaml}
configPath={configPath}
mediaPath={mediaPath}
hasAnyHardware={hasAnyHardware}
deviceId={deviceId}
/>
</div>
</div>
);
}

View File

@@ -0,0 +1,147 @@
import React from "react";
import { useColorMode } from "@docusaurus/theme-common";
import { devices } from "../config";
import type { DeviceConfig } from "../config";
import styles from "../styles.module.css";
interface Props {
selectedId: string;
onSelect: (id: string) => void;
}
/**
* Determine the icon type from the icon string:
* - Starts with "<svg" inline SVG
 * - Starts with "<svg" -> inline SVG
 * - Starts with "/" or "http" -> image URL/path
 * - Otherwise -> emoji text
function getIconType(icon: string): "svg" | "image" | "emoji" {
const trimmed = icon.trim();
if (trimmed.startsWith("<svg")) return "svg";
if (trimmed.startsWith("/") || trimmed.startsWith("http://") || trimmed.startsWith("https://")) return "image";
return "emoji";
}
/**
* Check if the style object contains background-* properties,
* indicating the image should be rendered as a CSS background-image
* rather than an <img> tag.
*/
function hasBackgroundProps(style: React.CSSProperties | undefined): boolean {
if (!style) return false;
return Object.keys(style).some((key) => {
const k = key.toLowerCase().replace(/-/g, "");
return k === "backgroundsize" || k === "backgroundposition" || k === "backgroundrepeat" || k === "backgroundimage";
});
}
/**
 * Convert a style object to CSS custom properties (e.g. { width: "24px" } -> { "--svg-width": "24px" })
* so they can be consumed by CSS rules targeting child elements like <svg>.
*/
function toCssVars(style: React.CSSProperties | undefined, prefix: string): React.CSSProperties {
if (!style) return {};
const vars: Record<string, string> = {};
for (const [key, value] of Object.entries(style)) {
const cssKey = key.replace(/([A-Z])/g, "-$1").toLowerCase();
vars[`--${prefix}-${cssKey}`] = value;
}
return vars as React.CSSProperties;
}
function DeviceIcon({ device }: { device: DeviceConfig }) {
const { isDarkTheme } = useColorMode();
const iconStr = isDarkTheme && device.iconDark ? device.iconDark : device.icon;
const iconStyle = (isDarkTheme && device.iconDarkStyle
? device.iconDarkStyle
: device.iconStyle) as React.CSSProperties | undefined;
const svgStyle = (isDarkTheme && device.svgDarkStyle
? device.svgDarkStyle
: device.svgStyle) as React.CSSProperties | undefined;
const iconType = getIconType(iconStr);
if (iconType === "svg") {
return (
<div
className={styles.deviceIconSvg}
style={{ ...iconStyle, ...toCssVars(svgStyle, "svg") }}
dangerouslySetInnerHTML={{ __html: iconStr }}
/>
);
}
if (iconType === "image") {
// When iconStyle contains background-* properties, render as background-image
// on the container div instead of an <img> tag, enabling background-size/position control.
if (hasBackgroundProps(iconStyle)) {
return (
<div
className={styles.deviceIconImage}
style={{
backgroundImage: `url(${iconStr})`,
backgroundRepeat: "no-repeat",
backgroundPosition: "center",
backgroundSize: "contain",
...iconStyle,
}}
/>
);
}
return (
<div className={styles.deviceIconImage}>
<img src={iconStr} alt={device.name} style={iconStyle} />
</div>
);
}
return (
<div className={styles.deviceIcon} style={iconStyle}>
{iconStr}
</div>
);
}
function DeviceCard({
device,
active,
onClick,
}: {
device: DeviceConfig;
active: boolean;
onClick: () => void;
}) {
return (
<div
className={`${styles.deviceCard} ${active ? styles.deviceCardActive : ""}`}
onClick={onClick}
role="button"
tabIndex={0}
onKeyDown={(e) => {
if (e.key === "Enter" || e.key === " ") onClick();
}}
>
<DeviceIcon device={device} />
<div className={styles.deviceName}>{device.name}</div>
<div className={styles.deviceDesc}>{device.description}</div>
</div>
);
}
export default function DeviceSelector({ selectedId, onSelect }: Props) {
return (
<div className={styles.formSection}>
<h4>Device Type</h4>
<div className={styles.deviceGrid}>
{devices.map((d) => (
<DeviceCard
key={d.id}
device={d}
active={selectedId === d.id}
onClick={() => onSelect(d.id)}
/>
))}
</div>
</div>
);
}

View File

@@ -0,0 +1,60 @@
import React, { useState, useCallback } from "react";
import CodeBlock from "@theme/CodeBlock";
import Admonition from "@theme/Admonition";
import styles from "../styles.module.css";
interface Props {
yaml: string;
configPath: string;
mediaPath: string;
hasAnyHardware: boolean;
deviceId: string;
}
export default function GeneratedOutput({
yaml,
configPath,
mediaPath,
hasAnyHardware,
deviceId,
}: Props) {
const [copied, setCopied] = useState(false);
const handleCopy = useCallback(() => {
navigator.clipboard.writeText(yaml).then(() => {
setCopied(true);
setTimeout(() => setCopied(false), 2000);
});
}, [yaml]);
return (
<div className={styles.resultSection}>
<div className={styles.resultHeader}>
<h4>Generated Configuration</h4>
<button className="button button--primary button--sm" onClick={handleCopy}>
{copied ? "Copied!" : "Copy"}
</button>
</div>
{!configPath && (
<Admonition type="tip">
<p>You haven&apos;t specified a config file directory. You may want to modify the default path.</p>
</Admonition>
)}
{!mediaPath && (
<Admonition type="tip">
<p>You haven&apos;t specified a recording storage directory. You may want to modify the default path.</p>
</Admonition>
)}
{deviceId === "stable" && !hasAnyHardware && (
<Admonition type="warning">
<p>You haven&apos;t selected any hardware acceleration. Please check if you have supported hardware available.</p>
</Admonition>
)}
<CodeBlock language="yaml" title="docker-compose.yml">
{yaml}
</CodeBlock>
</div>
);
}

View File

@@ -0,0 +1,62 @@
import React from "react";
import { hardwareOptions } from "../config";
import type { HardwareOption } from "../config";
import styles from "../styles.module.css";
interface Props {
deviceId: string;
hardwareEnabled: Record<string, boolean>;
onToggle: (hwId: string) => void;
isDisabled: (hwId: string) => boolean;
}
function renderDescription(text: string): React.ReactNode {
const parts = text.split(/(\[[^\]]+\]\([^)]+\))/g);
return parts.map((part, i) => {
const match = part.match(/^\[([^\]]+)\]\(([^)]+)\)$/);
if (match) {
return <a key={i} href={match[2]}>{match[1]}</a>;
}
return <React.Fragment key={i}>{part}</React.Fragment>;
});
}
function HardwareCheckbox({
hw, disabled, checked, onToggle,
}: {
hw: HardwareOption; disabled: boolean; checked: boolean; onToggle: () => void;
}) {
return (
<div className={styles.hardwareItem}>
<label className={`${styles.checkboxLabel} ${disabled ? styles.checkboxDisabled : ""}`}>
<input type="checkbox" checked={checked} onChange={onToggle} disabled={disabled} />
<span>{hw.label}</span>
</label>
{checked && hw.description && (
<div className={styles.hardwareDescription}>{renderDescription(hw.description)}</div>
)}
</div>
);
}
export default function HardwareOptions({ deviceId, hardwareEnabled, onToggle, isDisabled }: Props) {
return (
<div className={styles.formSection}>
<h4>Generic Hardware Devices</h4>
{deviceId !== "stable" && (
<p className={styles.helpText}>
Some options have been auto-configured based on your device type.
</p>
)}
<div className={styles.checkboxGrid}>
{hardwareOptions.map((hw) => {
const disabled = isDisabled(hw.id);
const checked = disabled ? false : !!hardwareEnabled[hw.id];
return (
<HardwareCheckbox key={hw.id} hw={hw} disabled={disabled} checked={checked} onToggle={() => onToggle(hw.id)} />
);
})}
</div>
</div>
);
}

View File

@@ -0,0 +1,64 @@
import React from "react";
import styles from "../styles.module.css";
interface Props {
gpuCount: string;
gpuDeviceId: string;
gpuDeviceIdError: boolean;
onGpuCountChange: (value: string) => void;
onGpuDeviceIdChange: (value: string) => void;
}
export default function NvidiaGpuConfig({
gpuCount,
gpuDeviceId,
gpuDeviceIdError,
onGpuCountChange,
onGpuDeviceIdChange,
}: Props) {
const showDeviceId = gpuCount !== "";
return (
<div className={styles.nvidiaConfig}>
<div className={styles.formGroup}>
<label htmlFor="dcg-gpu-count" className={styles.label}>
GPU count:
</label>
<input
id="dcg-gpu-count"
type="text"
inputMode="numeric"
pattern="[0-9]*"
className={styles.input}
value={gpuCount}
placeholder="all"
onChange={(e) => onGpuCountChange(e.target.value.replace(/\D/g, ""))}
/>
</div>
{showDeviceId && (
<div className={styles.formGroup}>
<label htmlFor="dcg-gpu-device-id" className={styles.label}>
GPU device IDs (required, comma-separated):
</label>
<input
id="dcg-gpu-device-id"
type="text"
className={`${styles.input} ${gpuDeviceIdError ? styles.inputError : ""}`}
value={gpuDeviceId}
placeholder="0"
onChange={(e) => onGpuDeviceIdChange(e.target.value)}
/>
{gpuDeviceIdError ? (
<p className={styles.helpText}>
GPU device IDs are required when GPU count is a number
</p>
) : (
<p className={styles.helpText}>
Single GPU: 0 &nbsp;|&nbsp; Multiple GPUs: 0,1,2
</p>
)}
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,122 @@
import React, { useMemo } from "react";
import CodeInline from "@theme/CodeInline";
import styles from "../styles.module.css";
const AUTO_TIMEZONE_VALUE = "__auto__";
function getTimezoneList(): string[] {
if (typeof Intl !== "undefined") {
const intl = Intl as typeof Intl & {
supportedValuesOf?: (key: string) => string[];
};
const supported = intl.supportedValuesOf?.("timeZone");
if (supported && supported.length > 0) {
return [...supported].sort();
}
}
const fallback = Intl.DateTimeFormat().resolvedOptions().timeZone;
return fallback ? [fallback] : ["UTC"];
}
interface Props {
rtspPassword: string;
timezone: string;
shmSize: string;
shmSizeError: boolean;
onRtspPasswordChange: (value: string) => void;
onTimezoneChange: (value: string) => void;
onShmSizeChange: (value: string) => void;
}
export default function OtherOptions({
rtspPassword,
timezone,
shmSize,
shmSizeError,
onRtspPasswordChange,
onTimezoneChange,
onShmSizeChange,
}: Props) {
const timezones = useMemo(() => getTimezoneList(), []);
const systemTimezone =
Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC";
const selectedValue = timezone || AUTO_TIMEZONE_VALUE;
return (
<div className={styles.formSection}>
<h4>Other Options</h4>
<div className={styles.formGrid}>
<div className={styles.formGroup}>
<label htmlFor="dcg-timezone" className={styles.label}>
Timezone:
</label>
<select
id="dcg-timezone"
className={`${styles.input} ${styles.select}`}
value={selectedValue}
onChange={(e) =>
onTimezoneChange(
e.target.value === AUTO_TIMEZONE_VALUE ? "" : e.target.value
)
}
>
<option value={AUTO_TIMEZONE_VALUE}>
Use browser timezone ({systemTimezone})
</option>
{timezones.map((tz) => (
<option key={tz} value={tz}>
{tz}
</option>
))}
</select>
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-shm-size" className={styles.label}>
Shared memory (SHM):
</label>
<input
id="dcg-shm-size"
type="text"
className={`${styles.input} ${shmSizeError ? styles.inputError : ""}`}
value={shmSize}
placeholder="512mb"
onChange={(e) => onShmSizeChange(e.target.value)}
/>
{shmSizeError ? (
<p className={styles.helpText}>
Invalid format. Use a number followed by a unit (e.g. 512mb, 1gb)
</p>
) : (
<p className={styles.helpText}>
See{" "}
<a href="/frigate/installation#calculating-required-shm-size">
calculating required SHM size
</a>{" "}
for the correct value.
</p>
)}
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-rtsp-password" className={styles.label}>
RTSP password:
</label>
<input
id="dcg-rtsp-password"
type="text"
className={styles.input}
value={rtspPassword}
placeholder="password"
onChange={(e) => onRtspPasswordChange(e.target.value)}
/>
<p className={styles.helpText}>
Optional. You can specify{" "}
<CodeInline>{"{FRIGATE_RTSP_PASSWORD}"}</CodeInline>{" "}
in the config file to reference camera stream passwords. This is NOT
the Frigate login password.
</p>
</div>
</div>
</div>
);
}
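The timezone dropdown above builds its options from `Intl.supportedValuesOf` when the runtime supports it. The fallback chain can be exercised outside React; this standalone sketch mirrors the `getTimezoneList` logic (trimmed for illustration):

```typescript
// Mirrors the getTimezoneList fallback chain for illustration:
// supportedValuesOf -> resolved browser timezone -> "UTC".
function timezoneList(): string[] {
  const intl = Intl as typeof Intl & {
    supportedValuesOf?: (key: string) => string[];
  };
  const supported = intl.supportedValuesOf?.("timeZone");
  if (supported && supported.length > 0) {
    return [...supported].sort();
  }
  const fallback = Intl.DateTimeFormat().resolvedOptions().timeZone;
  return fallback ? [fallback] : ["UTC"];
}

const zones = timezoneList();
console.log(zones.length > 0); // always at least one entry
```

Every branch yields a non-empty, sorted list, so the `<select>` always has something to render.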

View File

@@ -0,0 +1,71 @@
import React from "react";
import Admonition from "@theme/Admonition";
import { ports } from "../config";
import styles from "../styles.module.css";
interface Props {
portEnabled: Record<string, boolean>;
onTogglePort: (portId: string) => void;
}
function PortItem({
port,
enabled,
onToggle,
}: {
port: typeof ports[number];
enabled: boolean;
onToggle: () => void;
}) {
  // Default to showing the warning while the port is checked.
  const warnWhen = port.warningWhen ?? "checked";
  const showWarning =
    !!port.warningContent && (warnWhen === "checked" ? enabled : !enabled);
return (
<div className={styles.hardwareItem}>
<label className={`${styles.checkboxLabel} ${port.locked ? styles.checkboxDisabled : ""}`}>
<input
type="checkbox"
checked={enabled}
onChange={onToggle}
disabled={port.locked}
/>
<span>
{port.locked && "🔒 "}
Port {port.host}
{port.protocol !== "tcp" && `/${port.protocol}`}
</span>
</label>
{port.description && (
<div className={styles.hardwareDescription}>{port.description}</div>
)}
{showWarning && (
<Admonition type={port.warningType || "warning"}>
{port.warningContent}
</Admonition>
)}
</div>
);
}
export default function PortConfigSection({
portEnabled,
onTogglePort,
}: Props) {
return (
<div className={styles.formSection}>
<h4>Port Configuration</h4>
<div className={styles.checkboxGrid}>
{ports.map((port) => (
<PortItem
key={port.id}
port={port}
enabled={!!portEnabled[port.id]}
onToggle={() => onTogglePort(port.id)}
/>
))}
</div>
</div>
);
}

View File

@@ -0,0 +1,66 @@
import React from "react";
import styles from "../styles.module.css";
interface Props {
configPath: string;
mediaPath: string;
configPathError: boolean;
mediaPathError: boolean;
onConfigPathChange: (value: string) => void;
onMediaPathChange: (value: string) => void;
}
export default function StoragePaths({
configPath,
mediaPath,
configPathError,
mediaPathError,
onConfigPathChange,
onMediaPathChange,
}: Props) {
return (
<div className={styles.formSection}>
<h4>Storage Paths</h4>
<div className={styles.formGrid}>
<div className={styles.formGroup}>
<label htmlFor="dcg-config-path" className={styles.label}>
Config / DB / model cache directory (on your host):
</label>
<input
id="dcg-config-path"
type="text"
className={`${styles.input} ${configPathError ? styles.inputError : ""}`}
value={configPath}
placeholder="/path/to/your/config"
onChange={(e) => onConfigPathChange(e.target.value)}
/>
{configPathError && (
<p className={styles.helpText}>
Path contains invalid characters. Only letters, numbers,
underscores, hyphens, slashes, and dots are allowed.
</p>
)}
</div>
<div className={styles.formGroup}>
<label htmlFor="dcg-media-path" className={styles.label}>
Recording storage directory (on your host):
</label>
<input
id="dcg-media-path"
type="text"
className={`${styles.input} ${mediaPathError ? styles.inputError : ""}`}
value={mediaPath}
placeholder="/path/to/your/storage"
onChange={(e) => onMediaPathChange(e.target.value)}
/>
{mediaPathError && (
<p className={styles.helpText}>
Path contains invalid characters. Only letters, numbers,
underscores, hyphens, slashes, and dots are allowed.
</p>
)}
</div>
</div>
</div>
);
}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,12 @@
export { devices, deviceMap } from "./devices";
export { hardwareOptions, hardwareMap } from "./hardware";
export { ports, portMap } from "./ports";
export type {
DeviceConfig,
DeviceMapping,
VolumeMapping,
HardwareOption,
PortConfig,
NvidiaDeployConfig,
} from "./types";

View File

@@ -0,0 +1,154 @@
/**
* Type definitions for the Docker Compose Generator configuration.
* All device, hardware, and port options are declaratively defined
* so that adding a new device only requires editing config files.
*/
/** A single device mapping entry (e.g. /dev/dri:/dev/dri) */
export interface DeviceMapping {
/** Host device path */
host: string;
/** Container device path (defaults to host if omitted) */
container?: string;
/** Inline comment for this device line */
comment?: string;
}
/** A single volume mapping entry */
export interface VolumeMapping {
/** Host path */
host: string;
/** Container path */
container: string;
/** Whether the mount is read-only */
readOnly?: boolean;
/** Inline comment */
comment?: string;
}
/** NVIDIA deploy configuration for docker-compose */
export interface NvidiaDeployConfig {
/** "all" or a specific number */
count: string;
/** Specific GPU device IDs (when count is a number) */
deviceIds?: string[];
}
/** Full device type definition */
export interface DeviceConfig {
/** Unique identifier, e.g. "intel" */
id: string;
/** Display name, e.g. "Intel GPU" */
name: string;
/** Short description */
description: string;
/**
* Icon for the device card. Supports:
* - Emoji string (e.g. "🖥️")
* - Image URL or static path (e.g. "/img/intel.svg", "https://example.com/icon.png")
* - Inline SVG markup (e.g. "<svg>...</svg>")
*/
icon: string;
/**
* Additional CSS properties applied to the icon element.
* - For image-type icons: if any `background-*` property (e.g. `background-size`,
* `background-position`) is present, the image is rendered as a CSS `background-image`
* on the container div, enabling full background positioning control.
* Otherwise the image is rendered as an `<img>` tag and styles apply to it.
* - For emoji/SVG icons: styles apply to the container div.
*/
iconStyle?: Record<string, string>;
/**
* Additional CSS properties applied directly to the inner `<svg>` element
* when the icon is an inline SVG. Use this to override the default
* `width: 100%; height: 100%` or set `fill`, `transform`, etc.
* Ignored for emoji and image-type icons.
*/
svgStyle?: Record<string, string>;
/**
* Icon for dark mode. Same format as `icon`. When provided, this icon
* replaces `icon` when the user is in dark mode.
*/
iconDark?: string;
/** Additional CSS properties for the dark mode icon container */
iconDarkStyle?: Record<string, string>;
/**
* SVG-specific styles for dark mode. Same as `svgStyle` but applied
* when dark mode is active. Merged over `svgStyle` in dark mode.
*/
svgDarkStyle?: Record<string, string>;
/** Docker image tag, e.g. "stable" */
imageTag: string;
/**
* Image tag suffix appended to the base tag.
* e.g. "-standard-arm64" produces "stable-standard-arm64"
*/
imageTagSuffix?: string;
/** Hardware option IDs to auto-enable when this device is selected */
autoHardware: string[];
/** Help text shown as an admonition when this device is selected */
helpText?: string;
/** Admonition type for help text */
helpType?: "info" | "warning" | "danger";
/** Device mappings always added for this device type */
devices?: DeviceMapping[];
/** Volume mappings always added for this device type */
volumes?: VolumeMapping[];
/** Extra environment variables for this device type */
env?: Record<string, string>;
/** NVIDIA deploy config (only for tensorrt) */
nvidiaDeploy?: NvidiaDeployConfig;
/** Runtime setting, e.g. "nvidia" for Jetson */
runtime?: string;
/** Extra hosts entries, e.g. "host.docker.internal:host-gateway" */
extraHosts?: string[];
/** Security options, e.g. ["apparmor=unconfined"] */
securityOpt?: string[];
/** Whether this device type needs the NVIDIA GPU config UI */
needsNvidiaConfig?: boolean;
}
/** Generic hardware acceleration option definition */
export interface HardwareOption {
/** Unique identifier, e.g. "usbCoral" */
id: string;
/** Display label */
label: string;
/**
* Description shown below the checkbox when this option is enabled.
* Supports markdown link syntax: [text](url)
*/
description?: string;
/** Device IDs that disable this option */
disabledWhen?: string[];
/** Device mappings added when this option is enabled */
devices?: DeviceMapping[];
/** Volume mappings added when this option is enabled */
volumes?: VolumeMapping[];
/** Extra environment variables */
env?: Record<string, string>;
}
/** Port definition */
export interface PortConfig {
/** Unique identifier (also the default host port as string) */
id: string;
/** Host port number */
host: number;
/** Container port number */
container: number;
/** Protocol */
protocol?: "tcp" | "udp";
/** Description of the port's purpose */
description: string;
/** Whether enabled by default */
defaultEnabled: boolean;
/** Whether this port is locked (always enabled, cannot be toggled off) */
locked?: boolean;
/** Admonition type for the warning */
warningType?: "warning" | "danger";
/** Warning content (markdown) */
warningContent?: string;
/** When to show the warning: when the port is checked or unchecked */
warningWhen?: "checked" | "unchecked";
}
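As a concrete illustration of how these fields fit together, a device entry might look like the following. Every value here is invented for illustration (the `id`, image tag, and device paths are not taken from Frigate's actual config files), and a minimal `DeviceMappingSketch` stands in for the `DeviceMapping` interface so the snippet is self-contained:

```typescript
// Hypothetical entry shaped like DeviceConfig; all values are
// illustrative only.
interface DeviceMappingSketch {
  host: string;
  container?: string;
  comment?: string;
}

const exampleDevice = {
  id: "example-igpu",
  name: "Example iGPU",
  description: "Integrated GPU with VAAPI acceleration",
  icon: "🖥️",
  imageTag: "stable",
  autoHardware: ["gpu"],
  helpType: "info" as const,
  devices: [
    { host: "/dev/dri", comment: "VAAPI render nodes" },
  ] as DeviceMappingSketch[],
  env: { LIBVA_DRIVER_NAME: "iHD" },
};

// Per the DeviceMapping doc comment, `container` defaults to `host`
// when omitted, which the generator resolves with `container ?? host`:
const mapping = exampleDevice.devices[0];
console.log(mapping.container ?? mapping.host); // "/dev/dri"
```

This is the shape the generator consumes; the declarative intent is that adding a new device means adding one such object, not editing generator code.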

View File

@@ -0,0 +1,250 @@
import type {
DeviceConfig,
DeviceMapping,
VolumeMapping,
} from "../config/types";
import { hardwareMap } from "../config";
// ---------------------------------------------------------------------------
// Input type
// ---------------------------------------------------------------------------
export interface GeneratorInput {
device: DeviceConfig;
selectedHardware: string[];
enabledPorts: string[];
configPath: string;
mediaPath: string;
rtspPassword?: string;
timezone: string;
shmSize: string;
nvidiaGpuCount?: string;
nvidiaGpuDeviceId?: string;
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function deviceLine(dm: DeviceMapping): string {
const host = dm.host;
const container = dm.container ?? dm.host;
const mapping = host === container ? host : `${host}:${container}`;
const comment = dm.comment ? ` # ${dm.comment}` : "";
return ` - ${mapping}${comment}`;
}
function volumeLine(vm: VolumeMapping): string {
const ro = vm.readOnly ? ":ro" : "";
const comment = vm.comment ? ` # ${vm.comment}` : "";
return ` - ${vm.host}:${vm.container}${ro}${comment}`;
}
// ---------------------------------------------------------------------------
// YAML builder — each section returns an array of lines
// ---------------------------------------------------------------------------
function buildImage(device: DeviceConfig): string[] {
const tag = device.imageTagSuffix
? `${device.imageTag}${device.imageTagSuffix}`
: device.imageTag;
return [` image: ghcr.io/blakeblackshear/frigate:${tag}`];
}
function buildDevices(
device: DeviceConfig,
hwDevices: DeviceMapping[]
): string[] {
const all: DeviceMapping[] = [
...(device.devices ?? []),
...hwDevices,
];
if (all.length === 0) return [];
return [
" devices:",
...all.map(deviceLine),
];
}
function buildVolumes(
device: DeviceConfig,
hwVolumes: VolumeMapping[],
configPath: string,
mediaPath: string
): string[] {
const all: VolumeMapping[] = [
...(device.volumes ?? []),
...hwVolumes,
];
return [
" volumes:",
" - /etc/localtime:/etc/localtime:ro # Sync host time",
` - ${configPath}:/config # Config file directory`,
` - ${mediaPath}:/media/frigate # Recording storage directory`,
" - type: tmpfs # 1GB in-memory filesystem for recording segment storage",
" target: /tmp/cache",
" tmpfs:",
" size: 1000000000",
...all.map(volumeLine),
];
}
function buildPorts(enabledPorts: string[]): string[] {
return [
" ports:",
...enabledPorts,
];
}
function buildEnvironment(
device: DeviceConfig,
hwEnv: Record<string, string>,
rtspPassword: string | undefined,
timezone: string
): string[] {
const allEnv: Record<string, string> = {
...hwEnv,
...(device.env ?? {}),
};
const lines: string[] = [" environment:"];
if (rtspPassword) {
lines.push(
` FRIGATE_RTSP_PASSWORD: "${rtspPassword}" # RTSP password — change to your own`
);
}
lines.push(` TZ: "${timezone}" # Timezone`);
for (const [key, value] of Object.entries(allEnv)) {
lines.push(` ${key}: "${value}"`);
}
return lines;
}
function buildDeploy(device: DeviceConfig, input: GeneratorInput): string[] {
if (device.id === "stable-tensorrt") {
const count = input.nvidiaGpuCount || "all";
const isAll = count === "all";
const deviceId = input.nvidiaGpuDeviceId?.trim();
if (isAll) {
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
" count: all # Use all GPUs",
" capabilities: [gpu]",
];
}
if (deviceId) {
const ids = deviceId
.split(",")
.map((s) => s.trim())
.filter(Boolean)
.map((s) => `'${s}'`)
.join(", ");
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
` device_ids: [${ids}] # GPU device IDs`,
` count: ${count} # GPU count`,
" capabilities: [gpu]",
];
}
return [
" deploy:",
" resources:",
" reservations:",
" devices:",
" - driver: nvidia",
` count: ${count} # GPU count`,
" capabilities: [gpu]",
];
}
return [];
}
function buildRuntime(device: DeviceConfig): string[] {
if (device.runtime) {
return [` runtime: ${device.runtime}`];
}
return [];
}
function buildExtraHosts(device: DeviceConfig): string[] {
if (!device.extraHosts?.length) return [];
return [
" extra_hosts:",
...device.extraHosts.map(
(h, i) =>
` - "${h}"${i === 0 ? " # Required to talk to the NPU detector" : ""}`
),
];
}
function buildSecurityOpt(device: DeviceConfig): string[] {
if (!device.securityOpt?.length) return [];
return [
" security_opt:",
...device.securityOpt.map((s) => ` - ${s}`),
];
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
/**
* Generate a docker-compose YAML string from the given input.
* The output is pure YAML with inline comments (no Shiki annotations).
*/
export function generateDockerCompose(input: GeneratorInput): string {
const { device } = input;
// Collect hardware-level devices, volumes, and env
const hwDevices: DeviceMapping[] = [];
const hwVolumes: VolumeMapping[] = [];
const hwEnv: Record<string, string> = {};
for (const hwId of input.selectedHardware) {
const hw = hardwareMap.get(hwId);
if (!hw) continue;
// Skip GPU device mapping for tensorrt images (it uses deploy instead)
if (hw.id === "gpu" && device.imageTag === "stable-tensorrt") continue;
hwDevices.push(...(hw.devices ?? []));
hwVolumes.push(...(hw.volumes ?? []));
Object.assign(hwEnv, hw.env ?? {});
}
const lines: string[] = [
"services:",
" frigate:",
" container_name: frigate",
" privileged: true # This may not be necessary for all setups",
" restart: unless-stopped",
" stop_grace_period: 30s # Allow enough time to shut down the various services",
...buildImage(device),
` shm_size: "${input.shmSize || "512mb"}" # Update for your cameras based on SHM calculation`,
...buildRuntime(device),
...buildDeploy(device, input),
...buildExtraHosts(device),
...buildSecurityOpt(device),
...buildDevices(device, hwDevices),
...buildVolumes(device, hwVolumes, input.configPath, input.mediaPath),
...buildPorts(input.enabledPorts),
...buildEnvironment(device, hwEnv, input.rtspPassword, input.timezone),
];
return lines.join("\n");
}
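The compose output is assembled line by line from the helpers above. A standalone copy of the `deviceLine` logic (types inlined; a six-space indent is assumed so the entries sit under a compose `devices:` key) shows how identical host/container paths collapse to a single path:

```typescript
interface Mapping {
  host: string;
  container?: string;
  comment?: string;
}

// Copy of the deviceLine logic, with the list-entry indent made explicit.
function deviceLineSketch(dm: Mapping): string {
  const container = dm.container ?? dm.host;
  const mapping = dm.host === container ? dm.host : `${dm.host}:${container}`;
  const comment = dm.comment ? ` # ${dm.comment}` : "";
  return `      - ${mapping}${comment}`;
}

// Identical host and container paths collapse:
console.log(deviceLineSketch({ host: "/dev/dri" }));
// "      - /dev/dri"

// Distinct paths keep the host:container form (paths are hypothetical):
console.log(
  deviceLineSketch({
    host: "/dev/ttyUSB0",
    container: "/dev/ttyACM0",
    comment: "hypothetical serial device",
  })
);
// "      - /dev/ttyUSB0:/dev/ttyACM0 # hypothetical serial device"
```

`volumeLine` follows the same pattern, with an optional `:ro` suffix instead of the path-collapsing rule.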

View File

@@ -0,0 +1,195 @@
import { useState, useCallback, useMemo } from "react";
import { deviceMap, hardwareMap, portMap } from "../config";
import { generateDockerCompose } from "../generator";
import type { GeneratorInput } from "../generator";
/**
* Main hook that holds all form state and generates the Docker Compose output.
* Configuration is loaded synchronously from build-time generated .ts files.
*/
export function useConfigGenerator() {
const [deviceId, setDeviceId] = useState("stable");
const [hardwareEnabled, setHardwareEnabled] = useState<Record<string, boolean>>(() => {
const defaultDevice = deviceMap.get("stable");
const initial: Record<string, boolean> = {};
if (defaultDevice) {
for (const hwId of defaultDevice.autoHardware) {
initial[hwId] = true;
}
}
return initial;
});
const [portEnabled, setPortEnabled] = useState<Record<string, boolean>>(() => {
const initial: Record<string, boolean> = {};
for (const p of portMap.values()) {
initial[p.id] = p.defaultEnabled;
}
return initial;
});
const [nvidiaGpuCount, setNvidiaGpuCount] = useState("");
const [nvidiaGpuDeviceId, setNvidiaGpuDeviceId] = useState("");
const [configPath, setConfigPath] = useState("");
const [mediaPath, setMediaPath] = useState("");
const [rtspPassword, setRtspPassword] = useState("");
const [timezone, setTimezone] = useState("");
const [shmSize, setShmSize] = useState("512mb");
const [shmSizeError, setShmSizeError] = useState(false);
const [gpuDeviceIdError, setGpuDeviceIdError] = useState(false);
const [configPathError, setConfigPathError] = useState(false);
const [mediaPathError, setMediaPathError] = useState(false);
const device = useMemo(() => deviceMap.get(deviceId)!, [deviceId]);
const selectDevice = useCallback((id: string) => {
const newDevice = deviceMap.get(id);
if (!newDevice) return;
setDeviceId(id);
setHardwareEnabled(() => {
const next: Record<string, boolean> = {};
for (const hwId of newDevice.autoHardware) {
next[hwId] = true;
}
return next;
});
setNvidiaGpuCount("");
setNvidiaGpuDeviceId("");
setGpuDeviceIdError(false);
}, []);
const toggleHardware = useCallback((hwId: string) => {
setHardwareEnabled((prev) => ({ ...prev, [hwId]: !prev[hwId] }));
}, []);
const togglePort = useCallback((portId: string) => {
const port = portMap.get(portId);
if (port?.locked) return;
setPortEnabled((prev) => ({ ...prev, [portId]: !prev[portId] }));
}, []);
const isHardwareDisabled = useCallback(
(hwId: string): boolean => {
const hw = hardwareMap.get(hwId);
if (!hw) return false;
return hw.disabledWhen?.includes(deviceId) ?? false;
},
[deviceId]
);
const validateShmSize = useCallback((value: string): boolean => {
if (!value) return true;
return /^\d+(\.\d+)?[bkmgBKMG]{1,2}$/.test(value);
}, []);
  const validatePath = useCallback((value: string): boolean => {
    if (!value) return true;
    return /^[a-zA-Z0-9_\-/.]+$/.test(value);
  }, []);
const handleShmSizeChange = useCallback(
(value: string) => {
const filtered = value.replace(/[^0-9.bkmgBKMG]/g, "");
const valid = validateShmSize(filtered);
setShmSize(filtered);
setShmSizeError(!valid && filtered !== "");
},
[validateShmSize]
);
const handleConfigPathChange = useCallback(
(value: string) => {
      const filtered = value.replace(/[^a-zA-Z0-9_\-/.]/g, "");
const valid = validatePath(filtered);
setConfigPath(filtered);
setConfigPathError(!valid && filtered !== "");
},
[validatePath]
);
const handleMediaPathChange = useCallback(
(value: string) => {
      const filtered = value.replace(/[^a-zA-Z0-9_\-/.]/g, "");
const valid = validatePath(filtered);
setMediaPath(filtered);
setMediaPathError(!valid && filtered !== "");
},
[validatePath]
);
  const handleNvidiaGpuCountChange = useCallback((value: string) => {
    // Only allow digits; an empty value means "all" GPUs.
    const filtered = value.replace(/[^0-9]/g, "");
    setNvidiaGpuCount(filtered);
    if (filtered === "") {
      setNvidiaGpuDeviceId("");
    }
    setGpuDeviceIdError(false);
  }, []);
const handleNvidiaGpuDeviceIdChange = useCallback((value: string) => {
setNvidiaGpuDeviceId(value.trim());
setGpuDeviceIdError(false);
}, []);
const enabledPortLines = useMemo(() => {
const lines: string[] = [];
for (const [id, enabled] of Object.entries(portEnabled)) {
if (!enabled) continue;
const p = portMap.get(id);
if (!p) continue;
const proto = p.protocol && p.protocol !== "tcp" ? `/${p.protocol}` : "";
const comment = p.description ? ` # ${p.description}` : "";
lines.push(` - "${p.host}:${p.container}${proto}"${comment}`);
}
return lines;
}, [portEnabled]);
const selectedHardwareIds = useMemo(() => {
return Object.entries(hardwareEnabled)
.filter(([id, enabled]) => {
if (!enabled) return false;
const hw = hardwareMap.get(id);
if (!hw) return false;
if (hw.disabledWhen?.includes(deviceId)) return false;
return true;
})
.map(([id]) => id);
}, [hardwareEnabled, deviceId]);
const generatedYaml = useMemo(() => {
const input: GeneratorInput = {
device,
selectedHardware: selectedHardwareIds,
enabledPorts: enabledPortLines,
configPath: configPath || "/path/to/your/config",
mediaPath: mediaPath || "/path/to/your/storage",
rtspPassword,
timezone: timezone || Intl.DateTimeFormat().resolvedOptions().timeZone || "Etc/UTC",
shmSize: shmSize || "512mb",
nvidiaGpuCount,
nvidiaGpuDeviceId,
};
return generateDockerCompose(input);
}, [
device, selectedHardwareIds, enabledPortLines,
configPath, mediaPath, rtspPassword, timezone, shmSize,
nvidiaGpuCount, nvidiaGpuDeviceId,
]);
const hasAnyHardware = selectedHardwareIds.length > 0 || !!device?.devices?.length;
return {
deviceId, device, hardwareEnabled, portEnabled,
nvidiaGpuCount, nvidiaGpuDeviceId,
configPath, mediaPath, rtspPassword, timezone, shmSize,
shmSizeError, gpuDeviceIdError, configPathError, mediaPathError,
hasAnyHardware, generatedYaml,
selectDevice, toggleHardware, togglePort,
handleShmSizeChange, handleConfigPathChange, handleMediaPathChange,
handleNvidiaGpuCountChange, handleNvidiaGpuDeviceIdChange,
setRtspPassword, setTimezone, isHardwareDisabled,
};
}
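The SHM field combines input filtering with the `validateShmSize` pattern; the pattern alone can be checked in isolation to see which values it accepts:

```typescript
// The same pattern validateShmSize uses above: digits (optionally
// fractional) followed by one or two unit letters.
const SHM_PATTERN = /^\d+(\.\d+)?[bkmgBKMG]{1,2}$/;

for (const sample of ["512mb", "1gb", "1.5g", "512", "mb"]) {
  console.log(`${sample}: ${SHM_PATTERN.test(sample)}`);
}
// 512mb: true, 1gb: true, 1.5g: true, 512: false, mb: false
```

A bare number fails, so the error state only clears once a unit suffix is typed; `handleShmSizeChange` pre-filters the input to those characters before validating.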

View File

@@ -0,0 +1 @@
export { default } from "./DockerComposeGenerator";

View File

@@ -0,0 +1,381 @@
/* ===================================================================
Docker Compose Generator styles
Uses Docusaurus / Infima CSS variables for theme compatibility.
=================================================================== */
.generator {
margin: 2rem 0;
}
.card {
background: var(--ifm-background-surface-color);
border: 1px solid var(--ifm-color-emphasis-400);
border-radius: 12px;
padding: 2rem;
box-shadow: var(--ifm-global-shadow-lw);
}
[data-theme="light"] .card {
background: var(--ifm-color-emphasis-100);
border: 1px solid var(--ifm-color-emphasis-300);
}
/* --- Form sections --- */
.formSection {
margin-bottom: 1.5rem;
padding-bottom: 1.5rem;
border-bottom: 1px solid var(--ifm-color-emphasis-400);
}
.formSection:last-child {
border-bottom: none;
margin-bottom: 0;
padding-bottom: 0;
}
.formSection h4 {
margin: 0 0 1rem 0;
color: var(--ifm-font-color-base);
font-size: 1.1rem;
font-weight: var(--ifm-font-weight-semibold);
}
/* --- Form controls --- */
.formGroup {
margin-bottom: 1rem;
}
.formGroup:last-child {
margin-bottom: 0;
}
.label {
display: block;
margin-bottom: 0.25rem;
color: var(--ifm-font-color-base);
font-weight: var(--ifm-font-weight-semibold);
font-size: 0.9rem;
}
.input {
width: 100%;
padding: 0.5rem 0.75rem;
border: 1px solid var(--ifm-color-emphasis-400);
border-radius: 6px;
background: var(--ifm-background-color);
color: var(--ifm-font-color-base);
font-size: 0.95rem;
transition: border-color 0.2s, box-shadow 0.2s;
}
[data-theme="light"] .input {
background: #fff;
border: 1px solid #d0d7de;
}
.input:focus {
outline: none;
border-color: var(--ifm-color-primary);
box-shadow: 0 0 0 3px var(--ifm-color-primary-lightest);
}
[data-theme="dark"] .input {
border-color: var(--ifm-color-emphasis-300);
}
.inputError {
border-color: #e74c3c;
animation: shake 0.3s ease-in-out;
}
@keyframes shake {
0%,
100% {
transform: translateX(0);
}
25% {
transform: translateX(-5px);
}
75% {
transform: translateX(5px);
}
}
/* --- Select dropdown --- */
.select {
cursor: pointer;
appearance: none;
-moz-appearance: none;
-webkit-appearance: none;
background: var(--ifm-background-color)
url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23666' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
no-repeat right 0.75rem center / 12px 12px;
padding-right: 2rem;
}
[data-theme="light"] .select {
background: #fff
url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23555' d='M6 8L1 3h10z'/%3E%3C/svg%3E")
no-repeat right 0.75rem center / 12px 12px;
}
.helpText {
margin: 0.5rem 0 0 0;
font-size: 0.85rem;
color: var(--ifm-font-color-secondary);
line-height: 1.5;
}
.helpText a {
color: var(--ifm-color-primary);
}
/* --- Device grid --- */
.deviceGrid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
gap: 0.75rem;
margin-top: 0.5rem;
}
.deviceCard {
padding: 0.75rem;
border: 2px solid var(--ifm-color-emphasis-400);
border-radius: 12px;
cursor: pointer;
transition: all 0.2s;
text-align: center;
background: var(--ifm-background-color);
display: flex;
flex-direction: column;
align-items: center;
}
[data-theme="light"] .deviceCard {
border: 2px solid #d0d7de;
background: #fff;
}
.deviceCard:hover {
border-color: var(--ifm-color-primary);
background: var(--ifm-color-emphasis-100);
transform: translateY(-2px);
}
.deviceCardActive {
border-color: var(--ifm-color-primary);
background: var(--ifm-color-primary-lightest);
box-shadow: 0 0 0 1px var(--ifm-color-primary);
}
[data-theme="light"] .deviceCardActive {
background: color-mix(in srgb, var(--ifm-color-primary) 12%, #fff);
}
[data-theme="dark"] .deviceCardActive {
background: color-mix(in srgb, var(--ifm-color-primary) 25%, #1b1b1b);
}
[data-theme="dark"] .deviceCardActive .deviceName {
color: var(--ifm-color-primary-light);
}
[data-theme="dark"] .deviceCardActive .deviceDesc {
color: var(--ifm-color-primary-light);
opacity: 0.85;
}
.deviceIcon {
font-size: 2rem;
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
}
.deviceIconSvg {
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
overflow: visible;
/* Allow iconStyle width/height to override */
flex-shrink: 0;
}
.deviceIconSvg svg {
width: var(--svg-width, 100%);
height: var(--svg-height, 100%);
fill: var(--svg-fill, currentColor);
transform: var(--svg-transform, none);
}
.deviceIconImage {
margin-bottom: 0.25rem;
height: 40px;
width: 50px;
display: flex;
align-items: center;
justify-content: center;
}
.deviceIconImage img {
max-width: 100%;
max-height: 100%;
object-fit: contain;
}
.deviceName {
font-weight: var(--ifm-font-weight-semibold);
color: var(--ifm-font-color-base);
margin-bottom: 0.15rem;
font-size: 0.9rem;
}
.deviceDesc {
font-size: 0.75rem;
color: var(--ifm-font-color-secondary);
line-height: 1.3;
}
/* --- Checkbox grid --- */
.checkboxGrid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 0.5rem;
}
@media (max-width: 576px) {
.checkboxGrid {
grid-template-columns: 1fr;
}
}
.hardwareItem {
margin-bottom: 0;
}
.hardwareDescription {
margin: 0.15rem 0 0.4rem 1.6rem;
font-size: 0.8rem;
color: var(--ifm-font-color-secondary);
line-height: 1.5;
}
.hardwareDescription a {
color: var(--ifm-color-primary);
text-decoration: underline;
text-underline-offset: 2px;
}
.checkboxLabel {
display: flex;
align-items: center;
gap: 0.5rem;
cursor: pointer;
padding: 0.4rem 0.5rem;
border-radius: 6px;
transition: background-color 0.2s;
font-size: 0.9rem;
}
.checkboxLabel:hover {
background: var(--ifm-color-emphasis-100);
}
.checkboxLabel input[type="checkbox"] {
width: 1.1rem;
height: 1.1rem;
cursor: pointer;
flex-shrink: 0;
}
.checkboxLabel span {
color: var(--ifm-font-color-base);
}
.checkboxDisabled {
cursor: not-allowed;
}
.checkboxDisabled:hover {
background: transparent;
}
.checkboxDisabled input[type="checkbox"] {
cursor: not-allowed;
opacity: 0.5;
}
/* --- Form grid (side-by-side) --- */
.formGrid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 1rem;
}
@media (max-width: 576px) {
.formGrid {
grid-template-columns: 1fr;
}
}
.formGrid .formGroup {
margin-bottom: 0;
}
/* --- Port section --- */
.portSection {
margin-bottom: 0.75rem;
}
.warningBadge {
margin-left: auto;
color: #e67e22;
font-size: 0.85rem;
}
/* --- NVIDIA config --- */
.nvidiaConfig {
margin-top: 1rem;
margin-bottom: 1.5rem;
padding: 1rem;
background: var(--ifm-background-color);
border-radius: 8px;
border-left: 3px solid var(--ifm-color-primary);
}
[data-theme="light"] .nvidiaConfig {
background: #f6f8fa;
border-left: 3px solid var(--ifm-color-primary);
}
/* --- Result section --- */
.resultSection {
margin-top: 2rem;
}
.resultHeader {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.resultHeader h4 {
margin: 0;
color: var(--ifm-font-color-base);
}

View File

@@ -5997,7 +5997,10 @@ paths:
       tags:
         - App
       summary: Start debug replay
-      description: Start a debug replay session from camera recordings.
+      description:
+        Start a debug replay session from camera recordings. Returns
+        immediately while clip generation runs as a background job; subscribe
+        to the 'debug_replay' job_state WS topic to track progress.
       operationId: start_debug_replay_debug_replay_start_post
       requestBody:
         required: true
@@ -6006,12 +6009,16 @@ paths:
           schema:
             $ref: "#/components/schemas/DebugReplayStartBody"
       responses:
-        "200":
+        "202":
           description: Successful Response
           content:
             application/json:
               schema:
                 $ref: "#/components/schemas/DebugReplayStartResponse"
+        "400":
+          description: Invalid camera, time range, or no recordings
+        "409":
+          description: A replay session is already active
         "422":
           description: Validation Error
           content:
@@ -6272,10 +6279,14 @@ components:
         replay_camera:
           type: string
           title: Replay Camera
+        job_id:
+          type: string
+          title: Job Id
       type: object
       required:
         - success
         - replay_camera
+        - job_id
       title: DebugReplayStartResponse
       description: Response for starting a debug replay session.
     DebugReplayStatusResponse:
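Given the response codes this spec now documents for `POST /debug_replay/start`, a client can branch on status alone before reading the body. A minimal sketch (the helper name and outcome labels below are invented, not part of the API):

```typescript
// Maps the documented /debug_replay/start status codes to outcomes.
type ReplayStartOutcome =
  | "accepted" // 202: job started; track via the 'debug_replay' job_state topic
  | "bad_request" // 400: invalid camera, time range, or no recordings
  | "already_active" // 409: a replay session is already active
  | "validation_error" // 422: request body failed validation
  | "unexpected";

function classifyReplayStart(status: number): ReplayStartOutcome {
  switch (status) {
    case 202:
      return "accepted";
    case 400:
      return "bad_request";
    case 409:
      return "already_active";
    case 422:
      return "validation_error";
    default:
      return "unexpected";
  }
}

console.log(classifyReplayStart(202)); // "accepted"
```

Because 202 means "accepted, still running", the UI navigates immediately and then watches job progress rather than waiting on the HTTP response.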

View File

@@ -10,6 +10,7 @@ from pydantic import BaseModel, Field
 from frigate.api.auth import require_role
 from frigate.api.defs.tags import Tags
+from frigate.jobs.debug_replay import start_debug_replay_job

 logger = logging.getLogger(__name__)
@@ -29,10 +30,17 @@ class DebugReplayStartResponse(BaseModel):
     success: bool
     replay_camera: str
+    job_id: str


 class DebugReplayStatusResponse(BaseModel):
-    """Response for debug replay status."""
+    """Response for debug replay status.
+
+    Returns only session-presence fields. Startup progress and error
+    details flow through the job_state WebSocket topic via the
+    debug_replay job (see frigate.jobs.debug_replay); the
+    Replay page subscribes there with useJobStatus("debug_replay").
+    """

     active: bool
     replay_camera: str | None = None
@@ -51,15 +59,32 @@ class DebugReplayStopResponse(BaseModel):
 @router.post(
     "/debug_replay/start",
     response_model=DebugReplayStartResponse,
+    status_code=202,
+    responses={
+        400: {"description": "Invalid camera, time range, or no recordings"},
+        409: {"description": "A replay session is already active"},
+    },
     dependencies=[Depends(require_role(["admin"]))],
     summary="Start debug replay",
-    description="Start a debug replay session from camera recordings.",
+    description="Start a debug replay session from camera recordings. Returns "
+    "immediately while clip generation runs as a background job; subscribe "
+    "to the 'debug_replay' job_state WS topic to track progress.",
 )
 async def start_debug_replay(request: Request, body: DebugReplayStartBody):
-    """Start a debug replay session."""
+    """Start a debug replay session asynchronously."""
     replay_manager = request.app.replay_manager

-    if replay_manager.active:
+    try:
+        job_id = await asyncio.to_thread(
+            start_debug_replay_job,
+            source_camera=body.camera,
+            start_ts=body.start_time,
+            end_ts=body.end_time,
+            frigate_config=request.app.frigate_config,
+            config_publisher=request.app.config_publisher,
+            replay_manager=replay_manager,
+        )
+    except RuntimeError:
         return JSONResponse(
             content={
                 "success": False,
@@ -67,38 +92,23 @@ async def start_debug_replay(request: Request, body: DebugReplayStartBody):
             },
             status_code=409,
         )
-
-    try:
-        replay_camera = await asyncio.to_thread(
-            replay_manager.start,
-            source_camera=body.camera,
-            start_ts=body.start_time,
-            end_ts=body.end_time,
-            frigate_config=request.app.frigate_config,
-            config_publisher=request.app.config_publisher,
-        )
     except ValueError:
-        logger.exception("Invalid parameters for debug replay start request")
+        logger.exception("Rejected debug replay start request")
         return JSONResponse(
             content={
                 "success": False,
-                "message": "Invalid debug replay request parameters",
+                "message": "Invalid debug replay parameters",
             },
             status_code=400,
         )
-    except RuntimeError:
-        logger.exception("Error while starting debug replay session")
-        return JSONResponse(
-            content={
-                "success": False,
-                "message": "An internal error occurred while starting debug replay",
-            },
-            status_code=500,
-        )
-
-    return DebugReplayStartResponse(
-        success=True,
-        replay_camera=replay_camera,
-    )
+
+    return JSONResponse(
+        content={
+            "success": True,
+            "replay_camera": replay_manager.replay_camera_name,
+            "job_id": job_id,
+        },
+        status_code=202,
+    )
@@ -118,12 +128,16 @@ def get_debug_replay_status(request: Request):
     if replay_manager.active and replay_camera:
         frame_processor = request.app.detected_frames_processor
-        frame = frame_processor.get_current_frame(replay_camera)
+        frame = (
+            frame_processor.get_current_frame(replay_camera)
+            if frame_processor is not None
+            else None
+        )

         if frame is not None:
             frame_time = frame_processor.get_current_frame_time(replay_camera)
             camera_config = request.app.frigate_config.cameras.get(replay_camera)
-            retry_interval = 10
+            retry_interval = 10.0
             if camera_config is not None:
                 retry_interval = float(camera_config.ffmpeg.retry_interval or 10)
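The handler above offloads the blocking `start_debug_replay_job` call with `asyncio.to_thread` so the event loop keeps serving other requests while validation and worker-thread spawning run. A minimal sketch of that pattern (the `start_job_blocking` stand-in and the returned dict are illustrative, not Frigate's actual API):

```python
import asyncio
import time


def start_job_blocking() -> str:
    """Illustrative stand-in for start_debug_replay_job's blocking work
    (DB validation, spawning the worker thread)."""
    time.sleep(0.01)  # simulate blocking I/O
    return "job-123"


async def start_handler() -> dict:
    # Offload the blocking call to a thread so the event loop is not stalled.
    job_id = await asyncio.to_thread(start_job_blocking)
    return {"success": True, "job_id": job_id, "status_code": 202}


result = asyncio.run(start_handler())
```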


@@ -174,12 +174,10 @@ async def latest_frame(
     }
     quality_params = get_image_quality_params(extension.value, params.quality)

-    if camera_name in request.app.frigate_config.cameras:
+    camera_config = request.app.frigate_config.cameras.get(camera_name)
+    if camera_config is not None:
         frame = frame_processor.get_current_frame(camera_name, draw_options)
-        retry_interval = float(
-            request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
-            or 10
-        )
+        retry_interval = float(camera_config.ffmpeg.retry_interval or 10)

         is_offline = False
         if frame is None or datetime.now().timestamp() > (


@@ -429,7 +429,10 @@ class WebPushClient(Communicator):
         else:
             title = base_title

-        message = payload["after"]["data"]["metadata"]["shortSummary"]
+        if payload["after"]["data"]["metadata"].get("shortSummary"):
+            message = payload["after"]["data"]["metadata"]["shortSummary"]
+        else:
+            message = f"Detected on {camera_name}"
     else:
         zone_names = payload["after"]["data"]["zones"]
         formatted_zone_names = []
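The fallback above can be isolated as a tiny selector; `notification_message` is a hypothetical helper mirroring the `.get()` guard, not a function in the Frigate codebase:

```python
def notification_message(payload: dict, camera_name: str) -> str:
    """Prefer the GenAI short summary when present and non-empty,
    otherwise fall back to a generic per-camera message."""
    metadata = payload["after"]["data"]["metadata"]
    if metadata.get("shortSummary"):
        return metadata["shortSummary"]
    return f"Detected on {camera_name}"


with_summary = {"after": {"data": {"metadata": {"shortSummary": "Person at door"}}}}
without_summary = {"after": {"data": {"metadata": {}}}}
```

Using `.get()` instead of direct indexing also covers payloads where the key is absent entirely, not just empty.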


@@ -1073,10 +1073,6 @@ class LicensePlateProcessingMixin:
                 top_score = score
                 top_box = bbox

-            if score > top_score:
-                top_score = score
-                top_box = bbox
-
         # Return the top scoring bounding box if found
         if top_box is not None:
             # expand box by 5% to help with OCR
@@ -1092,9 +1088,6 @@ class LicensePlateProcessingMixin:
                 ]
             ).clip(0, [input.shape[1], input.shape[0]] * 2)

-            logger.debug(
-                f"{camera}: Found license plate. Bounding box: {expanded_box.astype(int)}"
-            )
             return tuple(int(x) for x in expanded_box)  # type: ignore[return-value]
         else:
             return None  # No detection above the threshold
@@ -1360,8 +1353,8 @@ class LicensePlateProcessingMixin:
         )

         # check that license plate is valid
-        # double the value because we've doubled the size of the car
-        if license_plate_area < self.config.cameras[camera].lpr.min_area * 2:
+        # quadruple the value because we've doubled both dimensions of the car
+        if license_plate_area < self.config.cameras[camera].lpr.min_area * 4:
             logger.debug(f"{camera}: License plate is less than min_area")
             return
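The x4 factor follows from plain geometry: scaling both dimensions of a region by k multiplies every area measured inside it by k squared, so a threshold defined at the original scale must be scaled the same way (here k = 2, hence x4). A one-line sketch (helper name is illustrative):

```python
def scaled_min_area(min_area: float, scale: float) -> float:
    """Scaling both dimensions of a crop by `scale` multiplies every
    area measured inside it by scale**2, so the threshold must follow."""
    return min_area * scale**2


# Doubling width and height (scale=2) quadruples areas: 2**2 == 4.
```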
@@ -1465,6 +1458,7 @@ class LicensePlateProcessingMixin:
             license_plate_frame,
         )

+        logger.debug(f"{camera}: Found license plate. Bounding box: {list(plate_box)}")
         logger.debug(f"{camera}: Running plate recognition for id: {id}.")

         # run detection, returns results sorted by confidence, best first


@ -1,9 +1,13 @@
"""Debug replay camera management for replaying recordings with detection overlays.""" """Debug replay camera management for replaying recordings with detection overlays.
The startup work (ffmpeg concat + camera config publish) lives in
frigate.jobs.debug_replay. This module owns only session presence
(active), session metadata, and post-session cleanup.
"""
import logging import logging
import os import os
import shutil import shutil
import subprocess as sp
import threading import threading
from ruamel.yaml import YAML from ruamel.yaml import YAML
@ -21,7 +25,7 @@ from frigate.const import (
REPLAY_DIR, REPLAY_DIR,
THUMB_DIR, THUMB_DIR,
) )
from frigate.models import Recordings from frigate.jobs.debug_replay import cancel_debug_replay_job, wait_for_runner
from frigate.util.camera_cleanup import cleanup_camera_db, cleanup_camera_files from frigate.util.camera_cleanup import cleanup_camera_db, cleanup_camera_files
from frigate.util.config import find_config_file from frigate.util.config import find_config_file
@ -29,7 +33,14 @@ logger = logging.getLogger(__name__)
class DebugReplayManager: class DebugReplayManager:
"""Manages a single debug replay session.""" """Owns the lifecycle pointers for a single debug replay session.
A session exists from the moment mark_starting is called (synchronously,
inside the API handler) until clear_session runs (on success cleanup,
failure, or stop). The active property is the source of truth that the
status bar consumes broader than the startup job, which only covers the
preparing_clip / starting_camera window.
"""
def __init__(self) -> None: def __init__(self) -> None:
self._lock = threading.Lock() self._lock = threading.Lock()
@@ -41,144 +52,66 @@ class DebugReplayManager:
     @property
     def active(self) -> bool:
-        """Whether a replay session is currently active."""
+        """True from mark_starting until clear_session."""
         return self.replay_camera_name is not None

-    def start(
-        self,
-        source_camera: str,
-        start_ts: float,
-        end_ts: float,
-        frigate_config: FrigateConfig,
-        config_publisher: CameraConfigUpdatePublisher,
-    ) -> str:
-        """Start a debug replay session.
-
-        Args:
-            source_camera: Name of the source camera to replay
-            start_ts: Start timestamp
-            end_ts: End timestamp
-            frigate_config: Current Frigate configuration
-            config_publisher: Publisher for camera config updates
-
-        Returns:
-            The replay camera name
-
-        Raises:
-            ValueError: If a session is already active or parameters are invalid
-            RuntimeError: If clip generation fails
-        """
-        with self._lock:
-            return self._start_locked(
-                source_camera, start_ts, end_ts, frigate_config, config_publisher
-            )
-
-    def _start_locked(
-        self,
-        source_camera: str,
-        start_ts: float,
-        end_ts: float,
-        frigate_config: FrigateConfig,
-        config_publisher: CameraConfigUpdatePublisher,
-    ) -> str:
-        if self.active:
-            raise ValueError("A replay session is already active")
-        if source_camera not in frigate_config.cameras:
-            raise ValueError(f"Camera '{source_camera}' not found")
-        if end_ts <= start_ts:
-            raise ValueError("End time must be after start time")
-
-        # Query recordings for the source camera in the time range
-        recordings = (
-            Recordings.select(
-                Recordings.path,
-                Recordings.start_time,
-                Recordings.end_time,
-            )
-            .where(
-                Recordings.start_time.between(start_ts, end_ts)
-                | Recordings.end_time.between(start_ts, end_ts)
-                | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
-            )
-            .where(Recordings.camera == source_camera)
-            .order_by(Recordings.start_time.asc())
-        )
-
-        if not recordings.count():
-            raise ValueError(
-                f"No recordings found for camera '{source_camera}' in the specified time range"
-            )
-
-        # Create replay directory
-        os.makedirs(REPLAY_DIR, exist_ok=True)
-
-        # Generate replay camera name
-        replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
-
-        # Build concat file for ffmpeg
-        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
-        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")
-
-        with open(concat_file, "w") as f:
-            for recording in recordings:
-                f.write(f"file '{recording.path}'\n")
-
-        # Concatenate recordings into a single clip with -c copy (fast)
-        ffmpeg_cmd = [
-            frigate_config.ffmpeg.ffmpeg_path,
-            "-hide_banner",
-            "-y",
-            "-f",
-            "concat",
-            "-safe",
-            "0",
-            "-i",
-            concat_file,
-            "-c",
-            "copy",
-            "-movflags",
-            "+faststart",
-            clip_path,
-        ]
-
-        logger.info(
-            "Generating replay clip for %s (%.1f - %.1f)",
-            source_camera,
-            start_ts,
-            end_ts,
-        )
-
-        try:
-            result = sp.run(
-                ffmpeg_cmd,
-                capture_output=True,
-                text=True,
-                timeout=120,
-            )
-            if result.returncode != 0:
-                logger.error("FFmpeg error: %s", result.stderr)
-                raise RuntimeError(
-                    f"Failed to generate replay clip: {result.stderr[-500:]}"
-                )
-        except sp.TimeoutExpired:
-            raise RuntimeError("Clip generation timed out")
-        finally:
-            # Clean up concat file
-            if os.path.exists(concat_file):
-                os.remove(concat_file)
-
-        if not os.path.exists(clip_path):
-            raise RuntimeError("Clip file was not created")
-
-        # Build camera config dict for the replay camera
+    def mark_starting(
+        self,
+        source_camera: str,
+        replay_camera_name: str,
+        start_ts: float,
+        end_ts: float,
+    ) -> None:
+        """Synchronously claim the session before the job runner starts.
+
+        Called inside the API handler so the status bar sees active=True
+        immediately, before the worker thread does any ffmpeg work.
+        """
+        with self._lock:
+            self.replay_camera_name = replay_camera_name
+            self.source_camera = source_camera
+            self.start_ts = start_ts
+            self.end_ts = end_ts
+            self.clip_path = None
+
+    def mark_session_ready(self, clip_path: str) -> None:
+        """Record the on-disk clip path after the camera has been published."""
+        with self._lock:
+            self.clip_path = clip_path
+
+    def clear_session(self) -> None:
+        """Reset session pointers without publishing camera removal.
+
+        Used by the job runner on failure paths. stop() does the camera
+        teardown plus this clear in one step.
+        """
+        with self._lock:
+            self._clear_locked()
+
+    def _clear_locked(self) -> None:
+        self.replay_camera_name = None
+        self.source_camera = None
+        self.clip_path = None
+        self.start_ts = None
+        self.end_ts = None
+
+    def publish_camera(
+        self,
+        source_camera: str,
+        replay_name: str,
+        clip_path: str,
+        frigate_config: FrigateConfig,
+        config_publisher: CameraConfigUpdatePublisher,
+    ) -> None:
+        """Build the in-memory replay camera config and publish the add event.
+
+        Called by the job runner during the starting_camera phase.
+        """
         source_config = frigate_config.cameras[source_camera]
         camera_dict = self._build_camera_config_dict(
             source_config, replay_name, clip_path
         )

-        # Build an in-memory config with the replay camera added
         config_file = find_config_file()
         yaml_parser = YAML()
         with open(config_file, "r") as f:
@@ -191,73 +124,46 @@ class DebugReplayManager:
         try:
             new_config = FrigateConfig.parse_object(config_data)
         except Exception as e:
-            raise RuntimeError(f"Failed to validate replay camera config: {e}")
+            raise RuntimeError(f"Failed to validate replay camera config: {e}") from e

-        # Update the running config
         frigate_config.cameras[replay_name] = new_config.cameras[replay_name]

-        # Publish the add event
         config_publisher.publish_update(
             CameraConfigUpdateTopic(CameraConfigUpdateEnum.add, replay_name),
             new_config.cameras[replay_name],
         )

-        # Store session state
-        self.replay_camera_name = replay_name
-        self.source_camera = source_camera
-        self.clip_path = clip_path
-        self.start_ts = start_ts
-        self.end_ts = end_ts
-
-        logger.info("Debug replay started: %s -> %s", source_camera, replay_name)
-        return replay_name
-
     def stop(
         self,
         frigate_config: FrigateConfig,
         config_publisher: CameraConfigUpdatePublisher,
     ) -> None:
-        """Stop the active replay session and clean up all artifacts.
-
-        Args:
-            frigate_config: Current Frigate configuration
-            config_publisher: Publisher for camera config updates
-        """
-        with self._lock:
-            self._stop_locked(frigate_config, config_publisher)
-
-    def _stop_locked(
-        self,
-        frigate_config: FrigateConfig,
-        config_publisher: CameraConfigUpdatePublisher,
-    ) -> None:
-        if not self.active:
-            logger.warning("No active replay session to stop")
-            return
-
-        replay_name = self.replay_camera_name
-
-        # Publish remove event so subscribers stop and remove from their config
-        if replay_name in frigate_config.cameras:
-            config_publisher.publish_update(
-                CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
-                frigate_config.cameras[replay_name],
-            )
-        # Do NOT pop here — let subscribers handle removal from the shared
-        # config dict when they process the ZMQ message to avoid race conditions
-
-        # Defensive DB cleanup
-        self._cleanup_db(replay_name)
-
-        # Remove filesystem artifacts
-        self._cleanup_files(replay_name)
-
-        # Reset state
-        self.replay_camera_name = None
-        self.source_camera = None
-        self.clip_path = None
-        self.start_ts = None
-        self.end_ts = None
-
-        logger.info("Debug replay stopped and cleaned up: %s", replay_name)
+        """Cancel any in-flight startup job and tear down the active session.
+
+        Safe to call when no session is active (no-op with a warning).
+        """
+        cancel_debug_replay_job()
+        wait_for_runner(timeout=2.0)
+
+        with self._lock:
+            if not self.active:
+                logger.warning("No active replay session to stop")
+                return
+
+            replay_name = self.replay_camera_name
+
+            # Only publish remove if the camera was actually added to the live
+            # config (i.e. the runner reached the starting_camera phase).
+            if replay_name is not None and replay_name in frigate_config.cameras:
+                config_publisher.publish_update(
+                    CameraConfigUpdateTopic(CameraConfigUpdateEnum.remove, replay_name),
+                    frigate_config.cameras[replay_name],
+                )
+
+            if replay_name is not None:
+                self._cleanup_db(replay_name)
+                self._cleanup_files(replay_name)
+
+            self._clear_locked()
+
+            logger.info("Debug replay stopped and cleaned up: %s", replay_name)
@@ -267,16 +173,7 @@ class DebugReplayManager:
         replay_name: str,
         clip_path: str,
     ) -> dict:
-        """Build a camera config dictionary for the replay camera.
-
-        Args:
-            source_config: Source camera's CameraConfig
-            replay_name: Name for the replay camera
-            clip_path: Path to the replay clip file
-
-        Returns:
-            Camera config as a dictionary
-        """
+        """Build a camera config dictionary for the replay camera."""
         # Extract detect config (exclude computed fields)
         detect_dict = source_config.detect.model_dump(
             exclude={"min_initialized", "max_disappeared", "enabled_in_config"}
@@ -311,7 +208,6 @@ class DebugReplayManager:
             zone_dump = zone_config.model_dump(
                 exclude={"contour", "color"}, exclude_defaults=True
            )
-            # Always include required fields
             zone_dump.setdefault("coordinates", zone_config.coordinates)
             zones_dict[zone_name] = zone_dump


@@ -31,6 +31,12 @@ class OllamaClient(GenAIClient):
     provider: ApiClient | None
     provider_options: dict[str, Any]

+    def _auth_headers(self) -> dict | None:
+        if self.genai_config.api_key:
+            return {"Authorization": "Bearer " + self.genai_config.api_key}
+        return None
+
     def _init_provider(self) -> ApiClient | None:
         """Initialize the client."""
         self.provider_options = {
@@ -39,7 +45,11 @@ class OllamaClient(GenAIClient):
         }

         try:
-            client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
+            client = ApiClient(
+                host=self.genai_config.base_url,
+                timeout=self.timeout,
+                headers=self._auth_headers(),
+            )

             # ensure the model is available locally
             response = client.show(self.genai_config.model)
             if response.get("error"):
@@ -166,7 +176,9 @@ class OllamaClient(GenAIClient):
             return []
         try:
             client = ApiClient(
-                host=self.genai_config.base_url, timeout=self.timeout
+                host=self.genai_config.base_url,
+                timeout=self.timeout,
+                headers=self._auth_headers(),
             )
         except Exception:
             return []
@@ -344,6 +356,7 @@ class OllamaClient(GenAIClient):
             async_client = OllamaAsyncClient(
                 host=self.genai_config.base_url,
                 timeout=self.timeout,
+                headers=self._auth_headers(),
             )
             response = await async_client.chat(**request_params)
             result = self._message_from_response(response)
@@ -359,6 +372,7 @@ class OllamaClient(GenAIClient):
             async_client = OllamaAsyncClient(
                 host=self.genai_config.base_url,
                 timeout=self.timeout,
+                headers=self._auth_headers(),
             )

             content_parts: list[str] = []
             final_message: dict[str, Any] | None = None
final_message: dict[str, Any] | None = None final_message: dict[str, Any] | None = None


@@ -0,0 +1,386 @@
"""Debug replay startup job: ffmpeg concat + camera config publish.

The runner orchestrates the async portion of starting a debug replay
session. The DebugReplayManager (in frigate.debug_replay) owns session
presence so the status bar can keep reading a single `active` flag from
/debug_replay/status for the entire session window, which is broader
than this job's lifetime.
"""
import logging
import os
import subprocess as sp
import threading
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Optional, cast

from peewee import ModelSelect

from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.const import REPLAY_CAMERA_PREFIX, REPLAY_DIR
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.job import Job
from frigate.jobs.manager import job_is_running, set_current_job
from frigate.models import Recordings
from frigate.types import JobStatusTypesEnum
from frigate.util.ffmpeg import run_ffmpeg_with_progress

if TYPE_CHECKING:
    from frigate.debug_replay import DebugReplayManager

logger = logging.getLogger(__name__)

# Coalesce frequent ffmpeg progress callbacks so the WS isn't flooded.
PROGRESS_BROADCAST_MIN_INTERVAL = 1.0

JOB_TYPE = "debug_replay"
STEP_PREPARING_CLIP = "preparing_clip"
STEP_STARTING_CAMERA = "starting_camera"

_active_runner: Optional["DebugReplayJobRunner"] = None
_runner_lock = threading.Lock()


def _set_active_runner(runner: Optional["DebugReplayJobRunner"]) -> None:
    global _active_runner
    with _runner_lock:
        _active_runner = runner


def get_active_runner() -> Optional["DebugReplayJobRunner"]:
    with _runner_lock:
        return _active_runner
@dataclass
class DebugReplayJob(Job):
    """Job state for a debug replay startup."""

    job_type: str = JOB_TYPE
    source_camera: str = ""
    replay_camera_name: str = ""
    start_ts: float = 0.0
    end_ts: float = 0.0
    current_step: Optional[str] = None
    progress_percent: float = 0.0

    def to_dict(self) -> dict[str, Any]:
        """Whitelisted payload for the job_state WS topic.

        Replay-specific fields land in results so the frontend's
        generic Job<TResults> type can be parameterised cleanly.
        """
        return {
            "id": self.id,
            "job_type": self.job_type,
            "status": self.status,
            "start_time": self.start_time,
            "end_time": self.end_time,
            "error_message": self.error_message,
            "results": {
                "current_step": self.current_step,
                "progress_percent": self.progress_percent,
                "source_camera": self.source_camera,
                "replay_camera_name": self.replay_camera_name,
                "start_ts": self.start_ts,
                "end_ts": self.end_ts,
            },
        }
def query_recordings(source_camera: str, start_ts: float, end_ts: float) -> ModelSelect:
    """Return the Recordings query for the time range.

    Module-level so tests can patch it without instantiating a runner.
    """
    query = (
        Recordings.select(
            Recordings.path,
            Recordings.start_time,
            Recordings.end_time,
        )
        .where(
            Recordings.start_time.between(start_ts, end_ts)
            | Recordings.end_time.between(start_ts, end_ts)
            | ((start_ts > Recordings.start_time) & (end_ts < Recordings.end_time))
        )
        .where(Recordings.camera == source_camera)
        .order_by(Recordings.start_time.asc())
    )
    return cast(ModelSelect, query)
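The three-clause filter selects any recording segment that overlaps the requested window: its start falls inside the window, its end falls inside the window, or it fully contains the window. A pure-Python equivalent of the predicate (hypothetical helper, for illustration only):

```python
def overlaps(rec_start: float, rec_end: float, start_ts: float, end_ts: float) -> bool:
    """Pure-Python equivalent of the three-clause Recordings filter."""
    return (
        start_ts <= rec_start <= end_ts      # segment starts inside the window
        or start_ts <= rec_end <= end_ts     # segment ends inside the window
        or (start_ts > rec_start and end_ts < rec_end)  # segment contains the window
    )
```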
class DebugReplayJobRunner(threading.Thread):
    """Worker thread that drives the startup job to completion.

    Owns the live ffmpeg Popen reference for cancellation. Cancellation
    is two-step (threading.Event + proc.terminate()) so the runner both
    knows it should stop and is unblocked from its blocking subprocess
    wait.
    """

    def __init__(
        self,
        job: DebugReplayJob,
        frigate_config: FrigateConfig,
        config_publisher: CameraConfigUpdatePublisher,
        replay_manager: "DebugReplayManager",
        publisher: Optional[JobStatePublisher] = None,
    ) -> None:
        super().__init__(daemon=True, name=f"debug_replay_{job.id}")
        self.job = job
        self.frigate_config = frigate_config
        self.config_publisher = config_publisher
        self.replay_manager = replay_manager
        self.publisher = publisher if publisher is not None else JobStatePublisher()
        self._cancel_event = threading.Event()
        self._active_process: sp.Popen | None = None
        self._proc_lock = threading.Lock()
        self._last_broadcast_monotonic: float = 0.0

    def cancel(self) -> None:
        """Request cancellation. Idempotent."""
        self._cancel_event.set()
        with self._proc_lock:
            proc = self._active_process
            if proc is not None:
                try:
                    proc.terminate()
                except Exception as exc:
                    logger.warning("Failed to terminate ffmpeg subprocess: %s", exc)

    def is_cancelled(self) -> bool:
        return self._cancel_event.is_set()

    def _record_proc(self, proc: sp.Popen) -> None:
        with self._proc_lock:
            self._active_process = proc
            # Race: cancel arrived between Popen and _record_proc.
            if self._cancel_event.is_set():
                try:
                    proc.terminate()
                except Exception:
                    pass
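The Event-plus-terminate pairing can be exercised against a throwaway subprocess. This sketch (class and names are illustrative, not the runner itself) shows why both halves are needed: the event marks intent to stop, while `terminate()` actually unblocks the thread stuck in the subprocess wait:

```python
import subprocess as sp
import sys
import threading
import time


class CancellableProc:
    """Minimal two-step cancellation: set a flag, then kill the child."""

    def __init__(self) -> None:
        self.cancel_event = threading.Event()
        self.proc: sp.Popen | None = None

    def run(self) -> None:
        # A child that would otherwise block us for 30 seconds.
        self.proc = sp.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
        self.proc.wait()

    def cancel(self) -> None:
        self.cancel_event.set()          # tell the worker it should stop
        if self.proc is not None:
            self.proc.terminate()        # unblock the worker from wait()


worker = CancellableProc()
t = threading.Thread(target=worker.run)
t.start()
time.sleep(0.5)  # give the subprocess a moment to start
worker.cancel()
t.join(timeout=5)
```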
    def _broadcast(self, force: bool = False) -> None:
        now = time.monotonic()
        if (
            not force
            and now - self._last_broadcast_monotonic < PROGRESS_BROADCAST_MIN_INTERVAL
        ):
            return
        self._last_broadcast_monotonic = now
        try:
            self.publisher.publish(self.job.to_dict())
        except Exception as err:
            logger.warning("Publisher raised during job state broadcast: %s", err)
    def run(self) -> None:
        replay_name = self.job.replay_camera_name
        os.makedirs(REPLAY_DIR, exist_ok=True)
        concat_file = os.path.join(REPLAY_DIR, f"{replay_name}_concat.txt")
        clip_path = os.path.join(REPLAY_DIR, f"{replay_name}.mp4")

        self.job.status = JobStatusTypesEnum.running
        self.job.start_time = time.time()
        self.job.current_step = STEP_PREPARING_CLIP
        self._broadcast(force=True)

        try:
            recordings = query_recordings(
                self.job.source_camera, self.job.start_ts, self.job.end_ts
            )

            with open(concat_file, "w") as f:
                for recording in recordings:
                    f.write(f"file '{recording.path}'\n")

            ffmpeg_cmd = [
                self.frigate_config.ffmpeg.ffmpeg_path,
                "-hide_banner",
                "-y",
                "-f",
                "concat",
                "-safe",
                "0",
                "-i",
                concat_file,
                "-c",
                "copy",
                "-movflags",
                "+faststart",
                clip_path,
            ]

            logger.info(
                "Generating replay clip for %s (%.1f - %.1f)",
                self.job.source_camera,
                self.job.start_ts,
                self.job.end_ts,
            )

            def _on_progress(percent: float) -> None:
                self.job.progress_percent = percent
                self._broadcast()

            try:
                returncode, stderr = run_ffmpeg_with_progress(
                    ffmpeg_cmd,
                    expected_duration_seconds=max(
                        0.0, self.job.end_ts - self.job.start_ts
                    ),
                    on_progress=_on_progress,
                    process_started=self._record_proc,
                    use_low_priority=True,
                )
            finally:
                with self._proc_lock:
                    self._active_process = None

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            if returncode != 0:
                raise RuntimeError(f"FFmpeg failed: {stderr[-500:]}")

            if not os.path.exists(clip_path):
                raise RuntimeError("Clip file was not created")

            self.job.current_step = STEP_STARTING_CAMERA
            self.job.progress_percent = 100.0
            self._broadcast(force=True)

            if self._cancel_event.is_set():
                self._finalize_cancelled(clip_path)
                return

            self.replay_manager.publish_camera(
                source_camera=self.job.source_camera,
                replay_name=replay_name,
                clip_path=clip_path,
                frigate_config=self.frigate_config,
                config_publisher=self.config_publisher,
            )
            self.replay_manager.mark_session_ready(clip_path)

            self.job.status = JobStatusTypesEnum.success
            self.job.end_time = time.time()
            self._broadcast(force=True)
            logger.info(
                "Debug replay started: %s -> %s",
                self.job.source_camera,
                replay_name,
            )
        except Exception as exc:
            logger.exception("Debug replay startup failed")
            self.job.status = JobStatusTypesEnum.failed
            self.job.error_message = str(exc)
            self.job.end_time = time.time()
            self._broadcast(force=True)
            self.replay_manager.clear_session()
            _remove_silent(clip_path)
        finally:
            _remove_silent(concat_file)
            _set_active_runner(None)
    def _finalize_cancelled(self, clip_path: str) -> None:
        logger.info("Debug replay startup cancelled")
        self.job.status = JobStatusTypesEnum.cancelled
        self.job.end_time = time.time()
        self._broadcast(force=True)
        # The caller of cancel_debug_replay_job (DebugReplayManager.stop) owns
        # session cleanup — db rows, filesystem artifacts, clear_session. We
        # only clean up the partial concat output we created.
        _remove_silent(clip_path)


def _remove_silent(path: str) -> None:
    try:
        if os.path.exists(path):
            os.remove(path)
    except OSError:
        pass
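The concat input the runner writes is ffmpeg's concat-demuxer list format: one `file '<path>'` line per segment in playback order, consumed with `-f concat -safe 0 -i list.txt -c copy`. A sketch of the rendering step (hypothetical helper; assumes paths contain no single quotes, which would otherwise need escaping):

```python
def build_concat_list(paths: list[str]) -> str:
    """Render an ffmpeg concat-demuxer input file: one `file '...'` line
    per recording segment, in playback order."""
    return "".join(f"file '{p}'\n" for p in paths)


listing = build_concat_list(["/recordings/a.mp4", "/recordings/b.mp4"])
```

Stream-copying (`-c copy`) makes the join fast because nothing is re-encoded; it only works when all segments share the same codec parameters, which holds for segments recorded from one camera.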
def start_debug_replay_job(
    *,
    source_camera: str,
    start_ts: float,
    end_ts: float,
    frigate_config: FrigateConfig,
    config_publisher: CameraConfigUpdatePublisher,
    replay_manager: "DebugReplayManager",
) -> str:
    """Validate, create job, start runner. Returns the job id.

    Raises ValueError for bad params (camera missing, time range
    invalid, no recordings) and RuntimeError if a session is already
    active.
    """
    if job_is_running(JOB_TYPE) or replay_manager.active:
        raise RuntimeError("A replay session is already active")
    if source_camera not in frigate_config.cameras:
        raise ValueError(f"Camera '{source_camera}' not found")
    if end_ts <= start_ts:
        raise ValueError("End time must be after start time")

    recordings = query_recordings(source_camera, start_ts, end_ts)
    if not recordings.count():
        raise ValueError(
            f"No recordings found for camera '{source_camera}' in the specified time range"
        )

    replay_name = f"{REPLAY_CAMERA_PREFIX}{source_camera}"
    replay_manager.mark_starting(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )

    job = DebugReplayJob(
        source_camera=source_camera,
        replay_camera_name=replay_name,
        start_ts=start_ts,
        end_ts=end_ts,
    )
    set_current_job(job)

    runner = DebugReplayJobRunner(
        job=job,
        frigate_config=frigate_config,
        config_publisher=config_publisher,
        replay_manager=replay_manager,
    )
    _set_active_runner(runner)
    runner.start()
    return job.id


def cancel_debug_replay_job() -> bool:
    """Signal the active runner to cancel.

    Returns True if a runner was signalled, False if no job was active.
    """
    runner = get_active_runner()
    if runner is None:
        return False
    runner.cancel()
    return True


def wait_for_runner(timeout: float = 2.0) -> bool:
    """Join the active runner. Returns True if the runner ended in time."""
    runner = get_active_runner()
    if runner is None:
        return True
    runner.join(timeout=timeout)
    return not runner.is_alive()
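`wait_for_runner` relies on `Thread.join(timeout=...)` returning silently on timeout; checking liveness afterwards is the only way to tell whether the thread actually finished. A standalone sketch of that contract (illustrative helper):

```python
import threading
import time


def wait_for(thread: threading.Thread, timeout: float) -> bool:
    """join() with a timeout never raises; liveness after the join is the
    only signal of whether the thread finished in time."""
    thread.join(timeout=timeout)
    return not thread.is_alive()


quick = threading.Thread(target=lambda: None)
quick.start()
slow = threading.Thread(target=lambda: time.sleep(5), daemon=True)
slow.start()
```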


@@ -13,6 +13,7 @@ from enum import Enum
 from pathlib import Path
 from typing import Callable, Optional

+import pytz  # type: ignore[import-untyped]
 from peewee import DoesNotExist

 from frigate.config import FfmpegConfig, FrigateConfig
@@ -22,13 +23,13 @@ from frigate.const import (
     EXPORT_DIR,
     MAX_PLAYLIST_SECONDS,
     PREVIEW_FRAME_TYPE,
-    PROCESS_PRIORITY_LOW,
 )
 from frigate.ffmpeg_presets import (
     EncodeTypeEnum,
     parse_preset_hardware_acceleration_encode,
 )
 from frigate.models import Export, Previews, Recordings, ReviewSegment
+from frigate.util.ffmpeg import run_ffmpeg_with_progress
 from frigate.util.time import is_current_hour

 logger = logging.getLogger(__name__)
@@ -242,109 +243,43 @@ class RecordingExporter(threading.Thread):
         return total

-    def _inject_progress_flags(self, ffmpeg_cmd: list[str]) -> list[str]:
-        """Insert FFmpeg progress reporting flags before the output path.
-
-        ``-progress pipe:2`` writes structured key=value lines to stderr,
-        ``-nostats`` suppresses the noisy default stats output.
-        """
-        if not ffmpeg_cmd:
-            return ffmpeg_cmd
-
-        return ffmpeg_cmd[:-1] + ["-progress", "pipe:2", "-nostats", ffmpeg_cmd[-1]]
-
     def _run_ffmpeg_with_progress(
         self,
         ffmpeg_cmd: list[str],
         playlist_lines: str | list[str],
         step: str = "encoding",
     ) -> tuple[int, str]:
-        """Run an FFmpeg export command, parsing progress events from stderr.
-
-        Returns ``(returncode, captured_stderr)``. Stdout is left attached to
-        the parent process so we don't have to drain it (and risk a deadlock
-        if the buffer fills). Progress percent is computed against the
-        expected output duration; values are clamped to [0, 100] inside
-        :py:meth:`_emit_progress`.
-        """
-        cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + self._inject_progress_flags(
-            ffmpeg_cmd
-        )
+        """Delegate to the shared helper, mapping percent → (step, percent).
+
+        Returns ``(returncode, captured_stderr)``.
+        """
         if isinstance(playlist_lines, list):
             stdin_payload = "\n".join(playlist_lines)
         else:
             stdin_payload = playlist_lines

-        expected_duration = self._expected_output_duration_seconds()
-
-        self._emit_progress(step, 0.0)
-
-        proc = sp.Popen(
-            cmd,
-            stdin=sp.PIPE,
-            stderr=sp.PIPE,
-            text=True,
-            encoding="ascii",
-            errors="replace",
-        )
-
-        assert proc.stdin is not None
-        assert proc.stderr is not None
-
-        try:
-            proc.stdin.write(stdin_payload)
-        except (BrokenPipeError, OSError):
-            # FFmpeg may have rejected the input early; still wait for it
-            # to terminate so the returncode is meaningful.
-            pass
-        finally:
-            try:
-                proc.stdin.close()
-            except (BrokenPipeError, OSError):
-                pass
-
-        captured: list[str] = []
-
-        try:
-            for raw_line in proc.stderr:
-                captured.append(raw_line)
-                line = raw_line.strip()
-
-                if not line:
-                    continue
-
-                if line.startswith("out_time_us="):
-                    if expected_duration <= 0:
-                        continue
-
-                    try:
-                        out_time_us = int(line.split("=", 1)[1])
-                    except (ValueError, IndexError):
-                        continue
-
-                    if out_time_us < 0:
-                        continue
-
-                    out_seconds = out_time_us / 1_000_000.0
-                    percent = (out_seconds / expected_duration) * 100.0
-                    self._emit_progress(step, percent)
-                elif line == "progress=end":
-                    self._emit_progress(step, 100.0)
-                    break
-        except Exception:
-            logger.exception("Failed reading FFmpeg progress for %s", self.export_id)
-
-        proc.wait()
-
-        # Drain any remaining stderr so callers can log it on failure.
-        try:
-            remaining = proc.stderr.read()
-            if remaining:
-                captured.append(remaining)
-        except Exception:
-            pass
-
-        return proc.returncode, "".join(captured)
+        return run_ffmpeg_with_progress(
+            ffmpeg_cmd,
+            expected_duration_seconds=self._expected_output_duration_seconds(),
+            on_progress=lambda percent: self._emit_progress(step, percent),
+            stdin_payload=stdin_payload,
+            use_low_priority=True,
+        )
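The removed loop above is what moved into the shared `frigate.util.ffmpeg` helper. Its core parsing technique can be sketched in isolation as follows; `parse_progress_percent` is a hypothetical name for illustration, and the real helper also handles spawning the process, writing the stdin payload, and `nice` priority.

```python
# Minimal sketch of parsing FFmpeg's "-progress pipe:2" key=value stream,
# assuming the structured output format FFmpeg documents for -progress.
# The function name and callback shape are illustrative, not Frigate's API.
from typing import Callable, Iterable


def parse_progress_percent(
    lines: Iterable[str],
    expected_duration_seconds: float,
    on_progress: Callable[[float], None],
) -> None:
    """Translate out_time_us=... lines into percent callbacks."""
    for raw_line in lines:
        line = raw_line.strip()
        if line.startswith("out_time_us=") and expected_duration_seconds > 0:
            try:
                out_time_us = int(line.split("=", 1)[1])
            except ValueError:
                continue
            if out_time_us < 0:
                continue
            out_seconds = out_time_us / 1_000_000.0
            percent = (out_seconds / expected_duration_seconds) * 100.0
            on_progress(min(100.0, percent))
        elif line == "progress=end":
            # FFmpeg emits a final progress=end block when it finishes.
            on_progress(100.0)
            break
```

Reading the stream line by line (rather than waiting for the process to exit) is what lets the exporter emit incremental percentages while FFmpeg is still encoding.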
     def get_datetime_from_timestamp(self, timestamp: int) -> str:
-        # return in iso format
+        # return in iso format using the configured ui.timezone when set,
+        # so the auto-generated export name reflects local time rather
+        # than the container's UTC clock
+        tz_name = self.config.ui.timezone
+
+        if tz_name:
+            try:
+                tz = pytz.timezone(tz_name)
+            except pytz.UnknownTimeZoneError:
+                tz = None
+
+            if tz is not None:
+                return datetime.datetime.fromtimestamp(timestamp, tz=tz).strftime(
+                    "%Y-%m-%d %H:%M:%S"
+                )
+
         return datetime.datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d %H:%M:%S")

     def _chapter_metadata_path(self) -> str:
@@ -407,6 +342,7 @@ class RecordingExporter(threading.Thread):
             return None

         total_output = windows[-1][2] + (windows[-1][1] - windows[-1][0])
+        last_recorded_end = windows[-1][1]

         def wall_to_output(t: float) -> float:
             t = max(float(self.start_time), min(float(self.end_time), t))
@@ -419,8 +355,18 @@ class RecordingExporter(threading.Thread):
         chapter_blocks: list[str] = []

         for review in review_rows:
+            if review.start_time is None:
+                continue
+
+            # In-progress segments have a NULL end_time until the activity
+            # closes; clamp to the last recorded second so the chapter never
+            # extends past the actual video.
+            review_end = (
+                float(review.end_time)
+                if review.end_time is not None
+                else last_recorded_end
+            )
             start_out = wall_to_output(float(review.start_time))
-            end_out = wall_to_output(float(review.end_time))
+            end_out = wall_to_output(review_end)

             # Drop chapters that fall entirely in a recording gap, or are
             # too short to be navigable in a player.
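The clamped `(start_out, end_out, title)` tuples ultimately become chapter entries in an FFmpeg metadata file. Frigate's exact layout isn't visible in this hunk, so the sketch below simply follows FFmpeg's documented FFMETADATA chapter format with an assumed `1/1000` timebase; the helper name is hypothetical.

```python
# Sketch of FFMETADATA chapter blocks, assuming millisecond timestamps.
# START/END are offsets into the output file, which is why a NULL
# end_time must be clamped before reaching this point.
def build_chapter_metadata(chapters: list[tuple[float, float, str]]) -> str:
    lines = [";FFMETADATA1"]
    for start_s, end_s, title in chapters:
        lines += [
            "[CHAPTER]",
            "TIMEBASE=1/1000",
            f"START={int(start_s * 1000)}",
            f"END={int(end_s * 1000)}",
            f"title={title}",
        ]
    return "\n".join(lines) + "\n"
```

A file in this shape is then passed to ffmpeg via `-i metadata.txt -map_metadata 1` style options so players can expose the chapters for navigation.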
@@ -503,16 +449,14 @@ class RecordingExporter(threading.Thread):
         except DoesNotExist:
             return ""

-        diff = self.start_time - preview.start_time
-        minutes = int(diff / 60)
-        seconds = int(diff % 60)
+        diff = max(0.0, float(self.start_time) - float(preview.start_time))
+
         ffmpeg_cmd = [
             "/usr/lib/ffmpeg/7.0/bin/ffmpeg",  # hardcode path for exports thumbnail due to missing libwebp support
             "-hide_banner",
             "-loglevel",
             "warning",
             "-ss",
-            f"00:{minutes}:{seconds}",
+            f"{diff:.3f}",
             "-i",
             preview.path,
             "-frames",
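The old seek string breaks as soon as the offset passes 59 minutes: FFmpeg's time syntax is `[-][HH:]MM:SS[.m...]` or plain `[-]S+[.m...]`, and a minutes field of 75 violates the first form (it also dropped zero-padding, e.g. `00:1:5`). Plain seconds are always valid. The helper names below are illustrative only:

```python
# Sketch contrasting the buggy HH:MM:SS-style seek string with the plain
# seconds form used after this fix.
def old_seek(diff: float) -> str:
    # Overflows the minutes field past 59 and skips zero-padding.
    minutes = int(diff / 60)
    seconds = int(diff % 60)
    return f"00:{minutes}:{seconds}"


def new_seek(diff: float) -> str:
    # Plain seconds with millisecond precision; clamped at zero so a
    # preview starting after the export window can't produce a negative seek.
    return f"{max(0.0, diff):.3f}"
```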
@@ -538,12 +482,18 @@ class RecordingExporter(threading.Thread):
         start_file = f"{file_start}{self.start_time}.{PREVIEW_FRAME_TYPE}"
         end_file = f"{file_start}{self.end_time}.{PREVIEW_FRAME_TYPE}"
         selected_preview = None
+        # Preview frames are written at most 1-2 fps during activity
+        # and as little as one every 30s during quiet periods, so a
+        # short export window can contain zero frames. Track the most
+        # recent frame before the window as a fallback.
+        fallback_preview = None

         for file in sorted(os.listdir(preview_dir)):
             if not file.startswith(file_start):
                 continue

             if file < start_file:
+                fallback_preview = os.path.join(preview_dir, file)
                 continue

             if file > end_file:
@@ -552,6 +502,9 @@ class RecordingExporter(threading.Thread):
             selected_preview = os.path.join(preview_dir, file)
             break

+        if not selected_preview:
+            selected_preview = fallback_preview
+
         if not selected_preview:
             return ""

View File

@@ -0,0 +1,123 @@
"""Tests for /debug_replay API endpoints."""
from unittest.mock import patch
from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
class TestDebugReplayAPI(BaseTestHttp):
def setUp(self):
super().setUp([Event, Recordings, ReviewSegment])
self.app = self.create_app()
def test_start_returns_202_with_job_id(self):
# Stub the factory to skip validation/threading and just record the
# name on the manager the way the real factory's mark_starting would.
def fake_start(**kwargs):
kwargs["replay_manager"].mark_starting(
source_camera=kwargs["source_camera"],
replay_camera_name="_replay_front",
start_ts=kwargs["start_ts"],
end_ts=kwargs["end_ts"],
)
return "job-1234"
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=fake_start,
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 202)
body = resp.json()
self.assertTrue(body["success"])
self.assertEqual(body["job_id"], "job-1234")
self.assertEqual(body["replay_camera"], "_replay_front")
def test_start_returns_400_on_validation_error(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=ValueError("Camera 'missing' not found"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "missing",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 400)
body = resp.json()
self.assertFalse(body["success"])
# Message is hard-coded so we don't echo exception text back to clients
# (CodeQL: information exposure through an exception).
self.assertEqual(body["message"], "Invalid debug replay parameters")
def test_start_returns_409_when_session_already_active(self):
with patch(
"frigate.api.debug_replay.start_debug_replay_job",
side_effect=RuntimeError("A replay session is already active"),
):
with AuthTestClient(self.app) as client:
resp = client.post(
"/debug_replay/start",
json={
"camera": "front",
"start_time": 100,
"end_time": 200,
},
)
self.assertEqual(resp.status_code, 409)
body = resp.json()
self.assertFalse(body["success"])
def test_status_inactive_when_no_session(self):
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertFalse(body["active"])
self.assertIsNone(body["replay_camera"])
self.assertIsNone(body["source_camera"])
self.assertIsNone(body["start_time"])
self.assertIsNone(body["end_time"])
self.assertFalse(body["live_ready"])
# Make sure deprecated fields are gone
self.assertNotIn("state", body)
self.assertNotIn("progress_percent", body)
self.assertNotIn("error_message", body)
def test_status_active_after_mark_starting(self):
manager = self.app.replay_manager
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
with AuthTestClient(self.app) as client:
resp = client.get("/debug_replay/status")
self.assertEqual(resp.status_code, 200)
body = resp.json()
self.assertTrue(body["active"])
self.assertEqual(body["replay_camera"], "_replay_front")
self.assertEqual(body["source_camera"], "front")
self.assertEqual(body["start_time"], 100.0)
self.assertEqual(body["end_time"], 200.0)
self.assertFalse(body["live_ready"])

View File

@@ -0,0 +1,242 @@
"""Tests for the simplified DebugReplayManager.
Startup orchestration lives in ``frigate.jobs.debug_replay`` (covered by
``test_debug_replay_job``). The manager owns only session presence and
cleanup.
"""
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
class TestDebugReplayManagerSession(unittest.TestCase):
def test_inactive_by_default(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
def test_mark_starting_sets_session_pointers_and_active(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
self.assertTrue(manager.active)
self.assertEqual(manager.replay_camera_name, "_replay_front")
self.assertEqual(manager.source_camera, "front")
self.assertEqual(manager.start_ts, 100.0)
self.assertEqual(manager.end_ts, 200.0)
self.assertIsNone(manager.clip_path)
def test_mark_session_ready_sets_clip_path(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready(clip_path="/tmp/replay/_replay_front.mp4")
self.assertEqual(manager.clip_path, "/tmp/replay/_replay_front.mp4")
self.assertTrue(manager.active)
def test_clear_session_resets_all_pointers(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/clip.mp4")
manager.clear_session()
self.assertFalse(manager.active)
self.assertIsNone(manager.replay_camera_name)
self.assertIsNone(manager.source_camera)
self.assertIsNone(manager.clip_path)
self.assertIsNone(manager.start_ts)
self.assertIsNone(manager.end_ts)
class TestDebugReplayManagerStop(unittest.TestCase):
def test_stop_when_inactive_is_a_noop(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
# Should not raise; should not publish any events.
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
def test_stop_publishes_remove_when_camera_was_published(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
manager.mark_session_ready("/tmp/replay/_replay_front.mp4")
camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"_replay_front": camera_config}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=False),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
# One publish_update call with a remove topic.
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.remove)
self.assertFalse(manager.active)
def test_stop_skips_remove_publish_when_camera_not_in_config(self) -> None:
"""Cancellation during preparing_clip: no camera was published yet."""
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
# clip_path stays None because we cancelled before camera publish.
frigate_config = MagicMock()
frigate_config.cameras = {} # _replay_front not present
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch("frigate.debug_replay.cancel_debug_replay_job", return_value=True),
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
publisher.publish_update.assert_not_called()
self.assertFalse(manager.active)
def test_stop_calls_cancel_debug_replay_job(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
manager.mark_starting("front", "_replay_front", 100.0, 200.0)
frigate_config = MagicMock()
frigate_config.cameras = {}
publisher = MagicMock()
with (
patch.object(manager, "_cleanup_db"),
patch.object(manager, "_cleanup_files"),
patch(
"frigate.debug_replay.cancel_debug_replay_job",
return_value=True,
) as mock_cancel,
):
manager.stop(frigate_config=frigate_config, config_publisher=publisher)
mock_cancel.assert_called_once()
class TestDebugReplayManagerPublishCamera(unittest.TestCase):
def test_publish_camera_invokes_publisher_with_add_topic(self) -> None:
from frigate.config.camera.updater import CameraConfigUpdateEnum
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
source_config = MagicMock()
new_camera_config = MagicMock()
frigate_config = MagicMock()
frigate_config.cameras = {"front": source_config}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch("frigate.debug_replay.FrigateConfig.parse_object") as parse_object,
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_instance = yaml_cls.return_value
yaml_instance.load.return_value = {"cameras": {}}
parsed = MagicMock()
parsed.cameras = {"_replay_front": new_camera_config}
parse_object.return_value = parsed
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
# Camera registered into the live config dict
self.assertIn("_replay_front", frigate_config.cameras)
# Publisher invoked with an add topic
self.assertEqual(publisher.publish_update.call_count, 1)
topic_arg = publisher.publish_update.call_args.args[0]
self.assertEqual(topic_arg.update_type, CameraConfigUpdateEnum.add)
def test_publish_camera_wraps_parse_failure_in_runtime_error(self) -> None:
from frigate.debug_replay import DebugReplayManager
manager = DebugReplayManager()
frigate_config = MagicMock()
frigate_config.cameras = {"front": MagicMock()}
publisher = MagicMock()
with (
patch.object(
manager,
"_build_camera_config_dict",
return_value={"enabled": True},
),
patch("frigate.debug_replay.find_config_file", return_value="/cfg.yml"),
patch("frigate.debug_replay.YAML") as yaml_cls,
patch(
"frigate.debug_replay.FrigateConfig.parse_object",
side_effect=ValueError("zone foo has invalid coordinates"),
),
patch("builtins.open", unittest.mock.mock_open(read_data="cameras:\n")),
):
yaml_cls.return_value.load.return_value = {"cameras": {}}
with self.assertRaises(RuntimeError) as ctx:
manager.publish_camera(
source_camera="front",
replay_name="_replay_front",
clip_path="/tmp/clip.mp4",
frigate_config=frigate_config,
config_publisher=publisher,
)
self.assertIn("replay camera config", str(ctx.exception))
self.assertIn("invalid coordinates", str(ctx.exception))
publisher.publish_update.assert_not_called()
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,460 @@
"""Tests for the debug replay job runner and factory."""
import threading
import time
import unittest
import unittest.mock
from unittest.mock import MagicMock, patch
from frigate.debug_replay import DebugReplayManager
from frigate.jobs.debug_replay import (
DebugReplayJob,
cancel_debug_replay_job,
get_active_runner,
start_debug_replay_job,
)
from frigate.jobs.export import JobStatePublisher
from frigate.jobs.manager import _completed_jobs, _current_jobs
from frigate.types import JobStatusTypesEnum
def _reset_job_manager() -> None:
"""Clear the global job manager state between tests."""
_current_jobs.clear()
_completed_jobs.clear()
def _patch_publisher(test_case: unittest.TestCase) -> None:
"""Replace JobStatePublisher.publish with a no-op to avoid hanging on IPC."""
publisher_patch = patch.object(
JobStatePublisher, "publish", lambda self, payload: None
)
publisher_patch.start()
test_case.addCleanup(publisher_patch.stop)
class TestDebugReplayJob(unittest.TestCase):
def test_default_fields(self) -> None:
job = DebugReplayJob()
self.assertEqual(job.job_type, "debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.queued)
self.assertIsNone(job.current_step)
self.assertEqual(job.progress_percent, 0.0)
def test_to_dict_whitelist(self) -> None:
job = DebugReplayJob(
source_camera="front",
replay_camera_name="_replay_front",
start_ts=100.0,
end_ts=200.0,
)
job.current_step = "preparing_clip"
job.progress_percent = 42.5
payload = job.to_dict()
# Top-level matches the standard Job<TResults> shape.
for key in (
"id",
"job_type",
"status",
"start_time",
"end_time",
"error_message",
"results",
):
self.assertIn(key, payload, f"missing top-level field: {key}")
results = payload["results"]
self.assertEqual(results["source_camera"], "front")
self.assertEqual(results["replay_camera_name"], "_replay_front")
self.assertEqual(results["current_step"], "preparing_clip")
self.assertEqual(results["progress_percent"], 42.5)
self.assertEqual(results["start_ts"], 100.0)
self.assertEqual(results["end_ts"], 200.0)
class TestStartDebugReplayJob(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def test_rejects_unknown_camera(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="missing",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_invalid_time_range(self) -> None:
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=200.0,
end_ts=100.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_rejects_when_no_recordings(self) -> None:
empty_qs = MagicMock()
empty_qs.count.return_value = 0
with patch("frigate.jobs.debug_replay.query_recordings", return_value=empty_qs):
with self.assertRaises(ValueError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
def test_returns_job_id_and_marks_session_starting(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
job_id = start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertIsInstance(job_id, str)
self.assertTrue(self.manager.active)
self.assertEqual(self.manager.replay_camera_name, "_replay_front")
self.assertEqual(self.manager.source_camera, "front")
block.set()
def test_rejects_concurrent_calls(self) -> None:
block = threading.Event()
def slow_helper(cmd, **kwargs):
block.wait(timeout=5)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=slow_helper,
),
patch.object(self.manager, "publish_camera"),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
with self.assertRaises(RuntimeError):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
block.set()
class TestRunnerHappyPath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_progress_callback_updates_job_percent(self) -> None:
captured: list[str] = []
def fake_helper(cmd, *, on_progress=None, **kwargs):
on_progress(0.0)
on_progress(50.0)
on_progress(100.0)
return 0, ""
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch.object(
self.manager,
"publish_camera",
side_effect=lambda *a, **kw: captured.append("published"),
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.success)
self.assertEqual(job.progress_percent, 100.0)
self.assertEqual(captured, ["published"])
# Manager should have been told the session is ready with the clip path.
self.assertIsNotNone(self.manager.clip_path)
class TestRunnerFailurePath(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_ffmpeg_failure_marks_job_failed_and_clears_session(self) -> None:
def failing_helper(cmd, **kwargs):
return 1, "ffmpeg exploded"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=failing_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertIsNotNone(job)
self.assertEqual(job.status, JobStatusTypesEnum.failed)
self.assertIsNotNone(job.error_message)
self.assertIn("ffmpeg", job.error_message.lower())
# Session cleared so a new /start is allowed
self.assertFalse(self.manager.active)
class TestRunnerCancellation(unittest.TestCase):
def setUp(self) -> None:
_reset_job_manager()
_patch_publisher(self)
self.manager = DebugReplayManager()
self.frigate_config = MagicMock()
self.frigate_config.cameras = {"front": MagicMock()}
self.frigate_config.ffmpeg.ffmpeg_path = "/bin/true"
self.publisher = MagicMock()
self.recordings_qs = MagicMock()
self.recordings_qs.count.return_value = 1
self.recordings_qs.__iter__.return_value = iter([MagicMock(path="/tmp/r1.mp4")])
def tearDown(self) -> None:
runner = get_active_runner()
if runner is not None:
runner.cancel()
runner.join(timeout=2.0)
_reset_job_manager()
def _wait_for(self, predicate, timeout: float = 5.0) -> bool:
deadline = time.time() + timeout
while time.time() < deadline:
if predicate():
return True
time.sleep(0.02)
return False
def test_cancel_terminates_ffmpeg_and_marks_cancelled(self) -> None:
terminated = threading.Event()
fake_proc = MagicMock()
fake_proc.terminate = MagicMock(side_effect=lambda: terminated.set())
def fake_helper(cmd, *, process_started=None, **kwargs):
if process_started is not None:
process_started(fake_proc)
terminated.wait(timeout=5)
return -15, "killed"
with (
patch(
"frigate.jobs.debug_replay.query_recordings",
return_value=self.recordings_qs,
),
patch(
"frigate.jobs.debug_replay.run_ffmpeg_with_progress",
side_effect=fake_helper,
),
patch("os.path.exists", return_value=True),
patch("os.makedirs"),
patch("os.remove"),
patch("builtins.open", unittest.mock.mock_open()),
):
start_debug_replay_job(
source_camera="front",
start_ts=100.0,
end_ts=200.0,
frigate_config=self.frigate_config,
config_publisher=self.publisher,
replay_manager=self.manager,
)
# Wait for the runner to register the active process.
self.assertTrue(
self._wait_for(
lambda: (
get_active_runner() is not None
and get_active_runner()._active_process is fake_proc
)
)
)
cancelled = cancel_debug_replay_job()
self.assertTrue(cancelled)
self.assertTrue(fake_proc.terminate.called)
self.assertTrue(
self._wait_for(lambda: get_active_runner() is None),
"runner did not finish",
)
from frigate.jobs.manager import get_current_job
job = get_current_job("debug_replay")
self.assertEqual(job.status, JobStatusTypesEnum.cancelled)
# Runner must not clear the manager session on cancellation —
# that belongs to the caller of cancel_debug_replay_job (stop()).
# If the runner cleared it, stop() would log "no active session"
# and skip its cleanup_db / cleanup_files calls.
self.assertTrue(self.manager.active)
if __name__ == "__main__":
unittest.main()
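The cancellation test above exercises a hand-off pattern worth calling out: the runner exposes its live subprocess through a `process_started` callback so a separate cancel path can terminate it mid-run. A self-contained sketch of that pattern (names are illustrative, not Frigate's API):

```python
# Sketch of the process_started hand-off: the run function publishes its
# live subprocess via a callback, and a canceller thread terminates it.
import subprocess
import sys
import threading
import time


def run_with_cancel_hook(cmd, process_started):
    proc = subprocess.Popen(cmd)
    process_started(proc)  # register the live process with the canceller
    proc.wait()
    return proc.returncode


active: dict = {}


def canceller():
    # Wait until the runner has registered its process, then kill it.
    while "proc" not in active:
        time.sleep(0.01)
    active["proc"].terminate()


t = threading.Thread(target=canceller)
t.start()
start = time.monotonic()
rc = run_with_cancel_hook(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    lambda p: active.__setitem__("proc", p),
)
t.join()
elapsed = time.monotonic() - start
```

Handing out the process object (rather than, say, polling a PID file) keeps cancellation race-free: there is no window where the canceller knows about a process that no longer exists under that identifier.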

View File

@@ -1,6 +1,9 @@
 """Tests for export progress tracking, broadcast, and FFmpeg parsing."""

 import io
+import os
+import shutil
+import tempfile
 import unittest
 from unittest.mock import MagicMock, patch

@@ -11,6 +14,7 @@ from frigate.jobs.export import (
 )
 from frigate.record.export import PlaybackSourceEnum, RecordingExporter
 from frigate.types import JobStatusTypesEnum
+from frigate.util.ffmpeg import inject_progress_flags


 def _make_exporter(
@@ -115,10 +119,9 @@ class TestExpectedOutputDuration(unittest.TestCase):

 class TestProgressFlagInjection(unittest.TestCase):
     def test_inserts_before_output_path(self) -> None:
-        exporter = _make_exporter()
         cmd = ["ffmpeg", "-i", "input.m3u8", "-c", "copy", "/tmp/output.mp4"]

-        result = exporter._inject_progress_flags(cmd)
+        result = inject_progress_flags(cmd)

         assert result == [
             "ffmpeg",
@@ -133,8 +136,7 @@ class TestProgressFlagInjection(unittest.TestCase):
         ]

     def test_handles_empty_cmd(self) -> None:
-        exporter = _make_exporter()
-        assert exporter._inject_progress_flags([]) == []
+        assert inject_progress_flags([]) == []


 class TestFfmpegProgressParsing(unittest.TestCase):
@@ -164,7 +166,7 @@ class TestFfmpegProgressParsing(unittest.TestCase):
         fake_proc.returncode = 0
         fake_proc.wait = MagicMock(return_value=0)

-        with patch("frigate.record.export.sp.Popen", return_value=fake_proc):
+        with patch("frigate.util.ffmpeg.sp.Popen", return_value=fake_proc):
             returncode, _stderr = exporter._run_ffmpeg_with_progress(
                 ["ffmpeg", "-i", "x.m3u8", "/tmp/out.mp4"], "playlist", step="encoding"
             )
@@ -363,6 +365,121 @@ class TestBroadcastAggregation(unittest.TestCase):

         assert job.progress_percent == 33.0
class TestGetDatetimeFromTimestamp(unittest.TestCase):
"""Auto-generated export name should honor config.ui.timezone, not
fall back to the container's UTC clock when a timezone is configured.
"""
def test_uses_configured_ui_timezone(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = "America/New_York"
# 2025-01-15 12:00:00 UTC is 07:00:00 EST
assert exporter.get_datetime_from_timestamp(1736942400) == "2025-01-15 07:00:00"
def test_falls_back_to_local_when_timezone_unset(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = None
# No assertion on the exact wall-clock value — just confirm no
# exception and that pytz isn't required when the field is unset.
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
def test_invalid_timezone_falls_back_to_local(self) -> None:
exporter = _make_exporter()
exporter.config.ui.timezone = "Not/A_Real_Zone"
assert isinstance(exporter.get_datetime_from_timestamp(1736942400), str)
class TestSaveThumbnailFromPreviewFrames(unittest.TestCase):
"""Short exports in the current hour can fall between preview frame
writes (1-2 fps during activity, every 30s otherwise). When no frame
falls inside the export window, save_thumbnail should fall back to
the most recent prior frame instead of returning no thumbnail."""
def setUp(self) -> None:
self.tmp_root = tempfile.mkdtemp(prefix="frigate_thumb_test_")
self.preview_dir = os.path.join(self.tmp_root, "cache", "preview_frames")
self.export_clips = os.path.join(self.tmp_root, "clips", "export")
os.makedirs(self.preview_dir, exist_ok=True)
os.makedirs(self.export_clips, exist_ok=True)
def tearDown(self) -> None:
shutil.rmtree(self.tmp_root, ignore_errors=True)
def _write_frame(self, camera: str, frame_time: float) -> str:
path = os.path.join(self.preview_dir, f"preview_{camera}-{frame_time}.webp")
with open(path, "wb") as f:
f.write(b"fake-webp-bytes")
return path
def _make_short_current_hour_exporter(self) -> RecordingExporter:
# Use a "now-ish" timestamp so save_thumbnail's start-of-hour
# comparison takes the current-hour branch (preview frames).
import datetime
now = datetime.datetime.now(datetime.timezone.utc).timestamp()
exporter = _make_exporter()
exporter.export_id = "thumb_short"
exporter.start_time = now
exporter.end_time = now + 3
return exporter
def test_short_export_falls_back_to_prior_preview_frame(self) -> None:
exporter = self._make_short_current_hour_exporter()
# Most recent preview frame is 10s before the export window
prior = self._write_frame(exporter.camera, exporter.start_time - 10.0)
thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == thumb_target
assert os.path.isfile(thumb_target)
with open(thumb_target, "rb") as f, open(prior, "rb") as src:
assert f.read() == src.read()
def test_returns_empty_when_no_preview_frames_exist(self) -> None:
exporter = self._make_short_current_hour_exporter()
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == ""
def test_prefers_in_window_frame_over_prior_frame(self) -> None:
exporter = self._make_short_current_hour_exporter()
self._write_frame(exporter.camera, exporter.start_time - 10.0)
in_window = self._write_frame(exporter.camera, exporter.start_time + 1.0)
thumb_target = os.path.join(self.export_clips, f"{exporter.export_id}.webp")
with (
patch(
"frigate.record.export.CACHE_DIR", os.path.join(self.tmp_root, "cache")
),
patch(
"frigate.record.export.CLIPS_DIR", os.path.join(self.tmp_root, "clips")
),
):
result = exporter.save_thumbnail(exporter.export_id)
assert result == thumb_target
with open(thumb_target, "rb") as f, open(in_window, "rb") as src:
assert f.read() == src.read()
class TestSchedulesCleanup(unittest.TestCase):
def test_schedule_job_cleanup_removes_after_delay(self) -> None:
config = MagicMock()
@@ -381,5 +498,56 @@ class TestSchedulesCleanup(unittest.TestCase):
assert job.id not in manager.jobs
class TestChapterMetadataInProgressReview(unittest.TestCase):
"""Regression: in-progress review segments have end_time=NULL until the
activity closes. The chapter builder must clamp the chapter end to the
last recorded second instead of crashing on float(None)."""
def _fake_select_returning(self, rows: list) -> MagicMock:
mock_query = MagicMock()
mock_query.where.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.iterator.return_value = iter(rows)
return mock_query
def test_in_progress_review_does_not_crash_and_clamps_to_last_recording(
self,
) -> None:
exporter = _make_exporter(end_minus_start=200)
# Recordings cover [1000, 1150]; export window is [1000, 1200] so
# the last recorded second is 1150 (a 50s gap at the tail).
recordings = [
MagicMock(start_time=1000.0, end_time=1150.0),
]
in_progress = MagicMock(
start_time=1100.0,
end_time=None,
severity="alert",
data={"objects": ["person"]},
)
with tempfile.TemporaryDirectory() as tmpdir:
chapter_path = os.path.join(tmpdir, "chapters.txt")
exporter._chapter_metadata_path = lambda: chapter_path # type: ignore[method-assign]
with patch(
"frigate.record.export.ReviewSegment.select",
return_value=self._fake_select_returning([in_progress]),
):
result = exporter._build_chapter_metadata_file(recordings)
assert result == chapter_path
with open(chapter_path) as f:
content = f.read()
# Output time is windows[-1][1] - windows[-1][0] = 150s.
# Review starts at wall=1100, output offset = 100s -> 100000ms.
# Clamped end = last_recorded_end (1150) -> output offset = 150s -> 150000ms.
assert "[CHAPTER]" in content
assert "START=100000" in content
assert "END=150000" in content
assert "title=Alert: person" in content
if __name__ == "__main__":
unittest.main()

@@ -0,0 +1,111 @@
"""Tests for the shared ffmpeg progress helper."""
import unittest
from unittest.mock import MagicMock, patch
from frigate.util.ffmpeg import inject_progress_flags, run_ffmpeg_with_progress
class TestInjectProgressFlags(unittest.TestCase):
def test_inserts_flags_before_output_path(self):
cmd = ["ffmpeg", "-i", "in.mp4", "-c", "copy", "out.mp4"]
result = inject_progress_flags(cmd)
self.assertEqual(
result,
[
"ffmpeg",
"-i",
"in.mp4",
"-c",
"copy",
"-progress",
"pipe:2",
"-nostats",
"out.mp4",
],
)
def test_empty_cmd_returns_empty(self):
self.assertEqual(inject_progress_flags([]), [])
class TestRunFfmpegWithProgress(unittest.TestCase):
def _make_fake_proc(self, stderr_lines, returncode=0):
proc = MagicMock()
proc.stderr = iter(stderr_lines)
proc.stdin = MagicMock()
proc.returncode = returncode
proc.wait = MagicMock()
return proc
def test_emits_percent_from_out_time_us_lines(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = [
"out_time_us=1000000\n",
"out_time_us=5000000\n",
"progress=end\n",
]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
returncode, _stderr = run_ffmpeg_with_progress(
["ffmpeg", "-i", "in", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
self.assertEqual(returncode, 0)
self.assertEqual(len(captured), 4) # initial 0.0 + two parsed + final 100.0
self.assertAlmostEqual(captured[0], 0.0)
self.assertAlmostEqual(captured[1], 10.0)
self.assertAlmostEqual(captured[2], 50.0)
self.assertAlmostEqual(captured[3], 100.0)
def test_passes_started_process_to_callback(self):
proc = self._make_fake_proc([])
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter([])
proc.stderr.read = MagicMock(return_value="")
seen: list = []
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=1.0,
process_started=lambda p: seen.append(p),
use_low_priority=False,
)
self.assertEqual(seen, [proc])
def test_clamps_percent_to_0_100(self):
captured: list[float] = []
def on_progress(percent: float) -> None:
captured.append(percent)
stderr_lines = ["out_time_us=999999999999\n"]
proc = self._make_fake_proc(stderr_lines)
proc.stderr = MagicMock()
proc.stderr.__iter__ = lambda self: iter(stderr_lines)
proc.stderr.read = MagicMock(return_value="")
with patch("subprocess.Popen", return_value=proc):
run_ffmpeg_with_progress(
["ffmpeg", "out"],
expected_duration_seconds=10.0,
on_progress=on_progress,
use_low_priority=False,
)
# initial 0.0 then a clamped reading
self.assertEqual(captured[-1], 100.0)

@@ -2,8 +2,9 @@
import logging
import subprocess as sp
-from typing import Any
+from typing import Any, Callable, Optional
+from frigate.const import PROCESS_PRIORITY_LOW
from frigate.log import LogPipe
@@ -46,3 +47,124 @@ def start_or_restart_ffmpeg(
start_new_session=True,
)
return process
logger = logging.getLogger(__name__)
def inject_progress_flags(cmd: list[str]) -> list[str]:
"""Insert `-progress pipe:2 -nostats` immediately before the output path.
`-progress pipe:2` writes structured key=value lines to stderr;
`-nostats` suppresses the noisy default stats output. The output path
is conventionally the last token in an FFmpeg argv.
"""
if not cmd:
return cmd
return cmd[:-1] + ["-progress", "pipe:2", "-nostats", cmd[-1]]
def run_ffmpeg_with_progress(
cmd: list[str],
*,
expected_duration_seconds: float,
on_progress: Optional[Callable[[float], None]] = None,
stdin_payload: Optional[str] = None,
process_started: Optional[Callable[[sp.Popen], None]] = None,
use_low_priority: bool = True,
) -> tuple[int, str]:
"""Run an ffmpeg command, streaming progress via `-progress pipe:2`.
Args:
cmd: ffmpeg argv. Output path must be the last token.
expected_duration_seconds: Duration of the expected output clip in
seconds. Used to convert ffmpeg's `out_time_us` into a percent.
on_progress: Optional callback invoked with a percent in [0, 100].
Called once with 0.0 at start, again on each `out_time_us=`
stderr line, and once with 100.0 on `progress=end`.
stdin_payload: Optional string written to ffmpeg stdin (used by
export for concat playlists).
process_started: Optional callback invoked with the live `Popen`
once spawned, so callers can store the ref for cancellation.
use_low_priority: When True, prepend `nice -n PROCESS_PRIORITY_LOW`
so concat doesn't starve detection.
Returns:
Tuple of `(returncode, captured_stderr)`. Stdout is left attached
to the parent process to avoid buffer-full deadlocks.
"""
full_cmd = inject_progress_flags(cmd)
if use_low_priority:
full_cmd = ["nice", "-n", str(PROCESS_PRIORITY_LOW)] + full_cmd
def emit(percent: float) -> None:
if on_progress is None:
return
try:
on_progress(max(0.0, min(100.0, percent)))
except Exception:
logger.exception("FFmpeg progress callback failed")
emit(0.0)
proc = sp.Popen(
full_cmd,
stdin=sp.PIPE if stdin_payload is not None else None,
stderr=sp.PIPE,
text=True,
encoding="ascii",
errors="replace",
)
if process_started is not None:
try:
process_started(proc)
except Exception:
logger.exception("FFmpeg process_started callback failed")
if stdin_payload is not None and proc.stdin is not None:
try:
proc.stdin.write(stdin_payload)
except (BrokenPipeError, OSError):
pass
finally:
try:
proc.stdin.close()
except (BrokenPipeError, OSError):
pass
captured: list[str] = []
if proc.stderr is not None:
try:
for raw_line in proc.stderr:
captured.append(raw_line)
line = raw_line.strip()
if not line:
continue
if line.startswith("out_time_us="):
if expected_duration_seconds <= 0:
continue
try:
out_time_us = int(line.split("=", 1)[1])
except (ValueError, IndexError):
continue
if out_time_us < 0:
continue
out_seconds = out_time_us / 1_000_000.0
emit((out_seconds / expected_duration_seconds) * 100.0)
elif line == "progress=end":
emit(100.0)
break
except Exception:
logger.exception("Failed reading FFmpeg progress stream")
proc.wait()
if proc.stderr is not None:
try:
remaining = proc.stderr.read()
if remaining:
captured.append(remaining)
except Exception:
pass
return proc.returncode or 0, "".join(captured)

@@ -69,17 +69,18 @@ test.describe("Navigation — conditional items @critical", () => {
).toBeVisible();
});
-test("/chat is hidden when genai.model is none (desktop)", async ({
+test("/chat is hidden when no agent has the chat role (desktop)", async ({
frigateApp,
}) => {
test.skip(frigateApp.isMobile, "Desktop sidebar");
await frigateApp.installDefaults({
config: {
genai: {
-enabled: false,
-provider: "ollama",
-model: "none",
-base_url: "",
+descriptions_only: {
+provider: "ollama",
+model: "llava",
+roles: ["descriptions"],
+},
},
},
});
@@ -89,12 +90,20 @@
).toHaveCount(0);
});
-test("/chat is visible when genai.model is set (desktop)", async ({
+test("/chat is visible when an agent has the chat role (desktop)", async ({
frigateApp,
}) => {
test.skip(frigateApp.isMobile, "Desktop sidebar");
await frigateApp.installDefaults({
-config: { genai: { enabled: true, model: "llava" } },
+config: {
+genai: {
+chat_agent: {
+provider: "ollama",
+model: "llava",
+roles: ["chat"],
+},
+},
+},
});
await frigateApp.goto("/");
await expect(

@@ -31,7 +31,7 @@ test.describe("Replay — no active session @medium", () => {
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
-name: /No Active Replay Session/i,
+name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
const goButton = frigateApp.page.getByRole("button", {
@@ -48,7 +48,7 @@
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
-name: /No Active Replay Session/i,
+name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
await frigateApp.page
@@ -297,7 +297,7 @@ test.describe("Replay — mobile @medium @mobile", () => {
await expect(
frigateApp.page.getByRole("heading", {
level: 2,
-name: /No Active Replay Session/i,
+name: /No Active Debug Replay Session/i,
}),
).toBeVisible({ timeout: 10_000 });
});

@@ -19,26 +19,31 @@
"startLabel": "Start",
"endLabel": "End",
"toast": {
-"success": "Debug replay started successfully",
"error": "Failed to start debug replay: {{error}}",
"alreadyActive": "A replay session is already active",
-"stopped": "Debug replay stopped",
"stopError": "Failed to stop debug replay: {{error}}",
"goToReplay": "Go to Replay"
}
},
"page": {
-"noSession": "No Active Replay Session",
+"noSession": "No Active Debug Replay Session",
-"noSessionDesc": "Start a debug replay from the History view by clicking the Debug Replay button in the toolbar.",
+"noSessionDesc": "Start a Debug Replay from History view by clicking the Actions button in the toolbar and choosing Debug Replay.",
"goToRecordings": "Go to History",
+"preparingClip": "Preparing clip…",
+"preparingClipDesc": "Frigate is stitching together recordings for the selected time range. This can take a minute for longer ranges.",
+"startingCamera": "Starting Debug Replay…",
+"startError": {
+"title": "Failed to start Debug Replay",
+"back": "Back to History"
+},
"sourceCamera": "Source Camera",
"replayCamera": "Replay Camera",
-"initializingReplay": "Initializing replay...",
+"initializingReplay": "Initializing Debug Replay...",
-"stoppingReplay": "Stopping replay...",
+"stoppingReplay": "Stopping Debug Replay...",
"stopReplay": "Stop Replay",
"confirmStop": {
"title": "Stop Debug Replay?",
-"description": "This will stop the replay session and clean up all temporary data. Are you sure?",
+"description": "This will stop the session and clean up all temporary data. Are you sure?",
"confirm": "Stop Replay",
"cancel": "Cancel"
},
@@ -49,6 +54,6 @@
"activeTracking": "Active tracking",
"noActiveTracking": "No active tracking",
"configuration": "Configuration",
-"configurationDesc": "Fine tune motion detection and object tracking settings for the debug replay camera. No changes are saved to your Frigate configuration file."
+"configurationDesc": "Fine tune motion detection and object tracking settings for the Debug Replay camera. No changes are saved to your Frigate configuration file."
}
}

@@ -93,6 +93,14 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
useSWR<ProfilesApiResponse>("profiles");
const logoutUrl = config?.proxy?.logout_url || "/api/logout";
+const hasChatAgent = useMemo(
+() =>
+Object.values(config?.genai ?? {}).some((agent) =>
+agent?.roles?.includes("chat"),
+),
+[config?.genai],
+);
// languages
const languages = useMemo(() => {
@@ -511,7 +519,7 @@
<span>{t("menu.classification")}</span>
</MenuItem>
</Link>
-{config?.genai?.model !== "none" && (
+{hasChatAgent && (
<Link to="/chat">
<MenuItem
className="flex w-full items-center p-2 text-sm"

@@ -98,10 +98,7 @@ export default function SearchResultActions({
end_time: event.end_time,
})
.then((response) => {
-if (response.status === 200) {
-toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-position: "top-center",
-});
+if (response.status === 202 || response.status === 200) {
navigate("/replay");
}
})

@@ -209,10 +209,7 @@ export default function DebugReplayDialog({
end_time: range.before,
})
.then((response) => {
-if (response.status === 200) {
-toast.success(t("dialog.toast.success"), {
-position: "top-center",
-});
+if (response.status === 202 || response.status === 200) {
setMode("none");
setRange(undefined);
navigate("/replay");

@@ -262,10 +262,7 @@ export default function MobileReviewSettingsDrawer({
end_time: debugReplayRange.before,
});
-if (response.status === 200) {
-toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-position: "top-center",
-});
+if (response.status === 202 || response.status === 200) {
setDebugReplayMode("none");
setDebugReplayRange(undefined);
setDrawerMode("none");

@@ -1,4 +1,6 @@
-import { useCallback, useState } from "react";
+import { useCallback, useEffect, useMemo, useState } from "react";
+import { flushSync } from "react-dom";
+import { throttle } from "lodash";
import { Slider } from "@/components/ui/slider";
import { Button } from "@/components/ui/button";
import { Popover, PopoverContent, PopoverTrigger } from "../../ui/popover";
@@ -19,11 +21,21 @@ import { useIsAdmin } from "@/hooks/use-is-admin";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { Link } from "react-router-dom";
+const SLIDER_DRAG_THROTTLE_MS = 80;
type Props = {
className?: string;
+// Optional side-effect invoked atomically with setAnnotationOffset (inside
+// flushSync) so callers like the timeline panel can re-seek the video in the
+// same React commit as the offset state update — preventing a one-frame
+// overlay mismatch where annotationOffset has changed but currentTime has not.
+onApplyOffset?: (newOffset: number) => void;
};
-export default function AnnotationOffsetSlider({ className }: Props) {
+export default function AnnotationOffsetSlider({
+className,
+onApplyOffset,
+}: Props) {
const { annotationOffset, setAnnotationOffset, camera } = useDetailStream();
const isAdmin = useIsAdmin();
const { getLocaleDocUrl } = useDocDomain();
@@ -31,31 +43,62 @@ export default function AnnotationOffsetSlider({ className }: Props) {
const { t } = useTranslation(["views/explore"]);
const [isSaving, setIsSaving] = useState(false);
+const applyOffset = useCallback(
+(newOffset: number) => {
+flushSync(() => {
+setAnnotationOffset(newOffset);
+onApplyOffset?.(newOffset);
+});
+},
+[setAnnotationOffset, onApplyOffset],
+);
+const throttledApplyOffset = useMemo(
+() =>
+throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
+leading: true,
+trailing: true,
+}),
+[applyOffset],
+);
+useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);
const handleChange = useCallback(
(values: number[]) => {
if (!values || values.length === 0) return;
-const valueMs = values[0];
-setAnnotationOffset(valueMs);
+throttledApplyOffset(values[0]);
},
-[setAnnotationOffset],
+[throttledApplyOffset],
);
+const handleCommit = useCallback(
+(values: number[]) => {
+if (!values || values.length === 0) return;
+// Ensure the final value lands even if it would otherwise be discarded
+// by the trailing edge of the throttle window.
+throttledApplyOffset.cancel();
+applyOffset(values[0]);
+},
+[throttledApplyOffset, applyOffset],
+);
const stepOffset = useCallback(
(delta: number) => {
-setAnnotationOffset((prev) => {
-const next = prev + delta;
-return Math.max(
+const next = Math.max(
ANNOTATION_OFFSET_MIN,
-Math.min(ANNOTATION_OFFSET_MAX, next),
+Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
);
-});
+throttledApplyOffset.cancel();
+applyOffset(next);
},
-[setAnnotationOffset],
+[annotationOffset, applyOffset, throttledApplyOffset],
);
const reset = useCallback(() => {
-setAnnotationOffset(0);
-}, [setAnnotationOffset]);
+throttledApplyOffset.cancel();
+applyOffset(0);
+}, [applyOffset, throttledApplyOffset]);
const save = useCallback(async () => {
setIsSaving(true);
@@ -130,6 +173,7 @@ export default function AnnotationOffsetSlider({ className }: Props) {
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleChange}
+onValueCommit={handleCommit}
/>
</div>
<Button

@@ -1,7 +1,9 @@
import { Event } from "@/types/event";
import { FrigateConfig } from "@/types/frigateConfig";
import axios from "axios";
-import { useCallback, useState } from "react";
+import { useCallback, useEffect, useMemo, useState } from "react";
+import { flushSync } from "react-dom";
+import { throttle } from "lodash";
import { LuExternalLink, LuMinus, LuPlus } from "react-icons/lu";
import { Link } from "react-router-dom";
import { toast } from "sonner";
@@ -19,6 +21,8 @@ import {
ANNOTATION_OFFSET_STEP,
} from "@/lib/const";
+const SLIDER_DRAG_THROTTLE_MS = 80;
type AnnotationSettingsPaneProps = {
event: Event;
annotationOffset: number;
@@ -38,30 +42,64 @@ export function AnnotationSettingsPane({
const [isLoading, setIsLoading] = useState(false);
-const handleSliderChange = useCallback(
-(values: number[]) => {
-if (!values || values.length === 0) return;
-setAnnotationOffset(values[0]);
-},
-[setAnnotationOffset],
-);
-const stepOffset = useCallback(
-(delta: number) => {
-setAnnotationOffset((prev) => {
-const next = prev + delta;
-return Math.max(
-ANNOTATION_OFFSET_MIN,
-Math.min(ANNOTATION_OFFSET_MAX, next),
-);
-});
-},
-[setAnnotationOffset],
-);
+// flushSync ensures setAnnotationOffset commits synchronously so the
+// useLayoutEffect in TrackingDetails (which seeks the video and sets
+// currentTime in response) runs before the browser paints — preventing a
+// one-frame overlay mismatch where annotationOffset has changed but
+// currentTime has not.
+const applyOffset = useCallback(
+(newOffset: number) => {
+flushSync(() => {
+setAnnotationOffset(newOffset);
+});
+},
+[setAnnotationOffset],
+);
+const throttledApplyOffset = useMemo(
+() =>
+throttle(applyOffset, SLIDER_DRAG_THROTTLE_MS, {
+leading: true,
+trailing: true,
+}),
+[applyOffset],
+);
+useEffect(() => () => throttledApplyOffset.cancel(), [throttledApplyOffset]);
+const handleSliderChange = useCallback(
+(values: number[]) => {
+if (!values || values.length === 0) return;
+throttledApplyOffset(values[0]);
+},
+[throttledApplyOffset],
+);
+const handleSliderCommit = useCallback(
+(values: number[]) => {
+if (!values || values.length === 0) return;
+throttledApplyOffset.cancel();
+applyOffset(values[0]);
+},
+[throttledApplyOffset, applyOffset],
+);
+const stepOffset = useCallback(
+(delta: number) => {
+const next = Math.max(
+ANNOTATION_OFFSET_MIN,
+Math.min(ANNOTATION_OFFSET_MAX, annotationOffset + delta),
+);
+throttledApplyOffset.cancel();
+applyOffset(next);
+},
+[annotationOffset, applyOffset, throttledApplyOffset],
+);
const reset = useCallback(() => {
-setAnnotationOffset(0);
-}, [setAnnotationOffset]);
+throttledApplyOffset.cancel();
+applyOffset(0);
+}, [applyOffset, throttledApplyOffset]);
const saveToConfig = useCallback(async () => {
if (!config || !event) return;
@@ -143,6 +181,7 @@ export function AnnotationSettingsPane({
max={ANNOTATION_OFFSET_MAX}
step={ANNOTATION_OFFSET_STEP}
onValueChange={handleSliderChange}
+onValueCommit={handleSliderCommit}
className="flex-1"
/>
<Button

@@ -73,7 +73,7 @@ export default function DetailActionsMenu({
}
return (
-<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
+<DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
<DropdownMenuTrigger>
<div className="rounded" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

@@ -957,8 +957,9 @@ function ObjectDetailsTab({
toast.success(
t("details.item.toast.success.regenerate", {
provider: capitalizeAll(
-config?.genai.provider.replaceAll("_", " ") ??
-t("generativeAI"),
+Object.values(config?.genai ?? {})
+.find((agent) => agent?.roles?.includes("descriptions"))
+?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
}),
{
@@ -976,8 +977,9 @@
toast.error(
t("details.item.toast.error.regenerate", {
provider: capitalizeAll(
-config?.genai.provider.replaceAll("_", " ") ??
-t("generativeAI"),
+Object.values(config?.genai ?? {})
+.find((agent) => agent?.roles?.includes("descriptions"))
+?.provider?.replaceAll("_", " ") ?? t("generativeAI"),
),
errorMessage,
}),

@@ -1,5 +1,13 @@
import useSWR from "swr";
-import { useCallback, useEffect, useMemo, useRef, useState } from "react";
+import {
+useCallback,
+useEffect,
+useLayoutEffect,
+useMemo,
+useRef,
+useState,
+} from "react";
+import { flushSync } from "react-dom";
import { useResizeObserver } from "@/hooks/resize-observer";
import { useFullscreen } from "@/hooks/use-fullscreen";
import { Event } from "@/types/event";
@@ -389,7 +397,12 @@ export function TrackingDetails({
// When the pinned timestamp or offset changes, re-seek the video and
// explicitly update currentTime so the overlay shows the pinned event's box.
-useEffect(() => {
+// useLayoutEffect + flushSync force the setCurrentTime commit to land before
+// the browser paints, so the overlay never shows a frame where
+// annotationOffset has changed but currentTime has not — that mismatch would
+// resolve effectiveCurrentTime away from the pinned detect timestamp and
+// make the bounding box disappear or jump for one frame.
+useLayoutEffect(() => {
const pinned = pinnedDetectTimestampRef.current;
if (!isAnnotationSettingsOpen || pinned == null) return;
if (!videoRef.current || displaySource !== "video") return;
@@ -398,10 +411,9 @@
const relativeTime = timestampToVideoTime(targetTimeRecord);
videoRef.current.currentTime = relativeTime;
-// Explicitly update currentTime state so the overlay's effectiveCurrentTime
-// resolves back to the pinned detect timestamp:
-// effectiveCurrentTime = targetTimeRecord - annotationOffset/1000 = pinned
+flushSync(() => {
setCurrentTime(targetTimeRecord);
+});
}, [
isAnnotationSettingsOpen,
annotationOffset,
@@ -1204,7 +1216,11 @@ function LifecycleIconRow({
<div className="flex flex-row items-center gap-3">
<div className="whitespace-nowrap">{formattedEventTimestamp}</div>
{isAdmin && (config?.plus?.enabled || item.data.box) && (
-<DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
+<DropdownMenu
+modal={false}
+open={isOpen}
+onOpenChange={setIsOpen}
+>
<DropdownMenuTrigger>
<div className="rounded p-1 pr-2" role="button">
<HiDotsHorizontal className="size-4 text-muted-foreground" />

View File

@@ -126,13 +126,20 @@ export default function DetailStream({
     // eslint-disable-next-line react-hooks/exhaustive-deps
   }, [controlsExpanded]);

-  // Re-seek on annotation offset change while settings panel is open
-  useEffect(() => {
+  // The slider invokes this atomically with setAnnotationOffset (inside the
+  // same flushSync) so currentTime advances in the same React commit as the
+  // offset. Without this, the overlay would render one frame with the new
+  // offset but the old currentTime, briefly resolving effectiveCurrentTime to
+  // the wrong detect-stream timestamp and making the bounding box vanish or
+  // jump.
+  const handleApplyOffset = useCallback(
+    (newOffset: number) => {
       const pinned = pinnedDetectTimestampRef.current;
       if (!controlsExpanded || pinned == null) return;
-      const recordTime = pinned + annotationOffset / 1000;
-      onSeek(recordTime, false);
-    }, [controlsExpanded, annotationOffset, onSeek]);
+      onSeek(pinned + newOffset / 1000, false);
+    },
+    [controlsExpanded, onSeek],
+  );
   // Ensure we initialize the active review when reviewItems first arrive.
   // This helps when the component mounts while the video is already

@@ -337,7 +344,7 @@ export default function DetailStream({
             </button>
             {controlsExpanded && (
               <div className="space-y-4 px-3 pb-5 pt-2">
-                <AnnotationOffsetSlider />
+                <AnnotationOffsetSlider onApplyOffset={handleApplyOffset} />
                 <Separator />
                 <div className="flex flex-col gap-1">
                   <div className="flex items-center justify-between">

View File

@@ -61,10 +61,7 @@ export default function EventMenu({
         end_time: event.end_time,
       })
       .then((response) => {
-        if (response.status === 200) {
-          toast.success(t("dialog.toast.success", { ns: "views/replay" }), {
-            position: "top-center",
-          });
+        if (response.status === 202 || response.status === 200) {
           navigate("/replay");
         }
       })
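Per the commit message, the start call is now non-blocking and returns 202 Accepted while the Job prepares the clip in the background. A standalone sketch of the check above (`acceptReplayStart` is a hypothetical helper, not EventMenu's code; 200 is kept for the old blocking behavior):

```typescript
// Navigate to /replay as soon as the server has accepted the request,
// whether it finished synchronously (200) or queued a job (202).
function acceptReplayStart(status: number): boolean {
  return status === 202 || status === 200;
}

console.log(acceptReplayStart(202)); // true
console.log(acceptReplayStart(409)); // false
```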
@@ -106,7 +103,7 @@ export default function EventMenu({
   return (
     <>
       <span tabIndex={0} className="sr-only" />
-      <DropdownMenu open={isOpen} onOpenChange={setIsOpen}>
+      <DropdownMenu modal={false} open={isOpen} onOpenChange={setIsOpen}>
         <DropdownMenuTrigger>
           <div className="rounded p-1 pr-2" role="button">
             <HiDotsHorizontal className="size-4 text-muted-foreground" />

View File

@@ -28,6 +28,14 @@ export default function useNavigation(
   });
   const isAdmin = useIsAdmin();

+  const hasChatAgent = useMemo(
+    () =>
+      Object.values(config?.genai ?? {}).some((agent) =>
+        agent?.roles?.includes("chat"),
+      ),
+    [config?.genai],
+  );
+
   return useMemo(
     () =>
       [

@@ -89,9 +97,9 @@ export default function useNavigation(
           icon: MdChat,
           title: "menu.chat",
           url: "/chat",
-          enabled: isDesktop && isAdmin && config?.genai?.model !== "none",
+          enabled: isDesktop && isAdmin && hasChatAgent,
         },
       ] as NavData[],
-    [config?.face_recognition?.enabled, config?.genai?.model, variant, isAdmin],
+    [config?.face_recognition?.enabled, hasChatAgent, variant, isAdmin],
   );
 }
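The `hasChatAgent` derivation above can be exercised outside React. A minimal sketch assuming the new config shape, where `genai` is a record of named agents and each agent lists its roles (the types here are trimmed to what the check needs):

```typescript
type GenAIRole = "chat" | "descriptions" | "embeddings";
type GenAIAgentConfig = { model: string; roles: GenAIRole[] };

// True if any configured agent is assigned the "chat" role; tolerant of a
// missing genai section (config may not be loaded yet).
function hasChatAgent(genai?: Record<string, GenAIAgentConfig>): boolean {
  return Object.values(genai ?? {}).some((agent) =>
    agent?.roles?.includes("chat"),
  );
}

console.log(hasChatAgent({ main: { model: "gpt-4o", roles: ["chat"] } })); // true
console.log(hasChatAgent({ e: { model: "m", roles: ["embeddings"] } })); // false
console.log(hasChatAgent(undefined)); // false
```

This replaces the old single-model check (`config?.genai?.model !== "none"`), which no longer makes sense now that multiple agents can coexist.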

View File

@@ -42,7 +42,9 @@ import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
 import { getIconForLabel } from "@/utils/iconUtil";
 import { getTranslatedLabel } from "@/utils/i18n";
 import { Card } from "@/components/ui/card";
+import { Progress } from "@/components/ui/progress";
 import { ObjectType } from "@/types/ws";
+import { useJobStatus } from "@/api/ws";
 import WsMessageFeed from "@/components/ws/WsMessageFeed";
 import { ConfigSectionTemplate } from "@/components/config-form/sections/ConfigSectionTemplate";

@@ -53,6 +55,7 @@ import { isDesktop, isMobile } from "react-device-detect";
 import Logo from "@/components/Logo";
 import { Separator } from "@/components/ui/separator";
 import { useDocDomain } from "@/hooks/use-doc-domain";
+import { useConfigSchema } from "@/hooks/use-config-schema";
 import DebugDrawingLayer from "@/components/overlay/DebugDrawingLayer";
 import { IoMdArrowRoundBack } from "react-icons/io";

@@ -65,6 +68,15 @@ type DebugReplayStatus = {
   live_ready: boolean;
 };

+type DebugReplayJobResults = {
+  current_step: "preparing_clip" | "starting_camera" | null;
+  progress_percent: number | null;
+  source_camera: string | null;
+  replay_camera_name: string | null;
+  start_ts: number | null;
+  end_ts: number | null;
+};
+
 type DebugOptions = {
   bbox: boolean;
   timestamp: boolean;
@@ -105,8 +117,6 @@ const DEBUG_OPTION_I18N_KEY: Record<keyof DebugOptions, string> = {
   paths: "paths",
 };

-const REPLAY_INIT_SKELETON_TIMEOUT_MS = 8000;
-
 export default function Replay() {
   const { t } = useTranslation(["views/replay", "views/settings", "common"]);
   const navigate = useNavigate();

@@ -119,6 +129,9 @@ export default function Replay() {
   } = useSWR<DebugReplayStatus>("debug_replay/status", {
     refreshInterval: 1000,
   });
+  const { payload: replayJob } =
+    useJobStatus<DebugReplayJobResults>("debug_replay");
+  const configSchema = useConfigSchema();
   const [isInitializing, setIsInitializing] = useState(true);

   // Refresh status immediately on mount to avoid showing "no session" briefly

@@ -130,12 +143,6 @@ export default function Replay() {
     initializeStatus();
   }, [refreshStatus]);

-  useEffect(() => {
-    if (status?.live_ready) {
-      setShowReplayInitSkeleton(false);
-    }
-  }, [status?.live_ready]);
-
   const [options, setOptions] = useState<DebugOptions>(DEFAULT_OPTIONS);
   const [isStopping, setIsStopping] = useState(false);
   const [configDialogOpen, setConfigDialogOpen] = useState(false);

@@ -160,11 +167,7 @@ export default function Replay() {
     axios
       .post("debug_replay/stop")
       .then(() => {
-        toast.success(t("dialog.toast.stopped"), {
-          position: "top-center",
-        });
         refreshStatus();
-        navigate("/review");
       })
       .catch((error) => {
         const errorMessage =

@@ -178,7 +181,7 @@ export default function Replay() {
       .finally(() => {
         setIsStopping(false);
       });
-  }, [navigate, refreshStatus, t]);
+  }, [refreshStatus, t]);
   // Camera activity for the replay camera
   const { data: config } = useSWR<FrigateConfig>("config", {

@@ -191,35 +194,10 @@ export default function Replay() {
   const { objects } = useCameraActivity(replayCameraConfig);

-  const [showReplayInitSkeleton, setShowReplayInitSkeleton] = useState(false);
-
   // debug draw
   const containerRef = useRef<HTMLDivElement>(null);
   const [debugDraw, setDebugDraw] = useState(false);

-  useEffect(() => {
-    if (!status?.active || !status.replay_camera) {
-      setShowReplayInitSkeleton(false);
-      return;
-    }
-
-    setShowReplayInitSkeleton(true);
-    const timeout = window.setTimeout(() => {
-      setShowReplayInitSkeleton(false);
-    }, REPLAY_INIT_SKELETON_TIMEOUT_MS);
-
-    return () => {
-      window.clearTimeout(timeout);
-    };
-  }, [status?.active, status?.replay_camera]);
-
-  useEffect(() => {
-    if (status?.live_ready) {
-      setShowReplayInitSkeleton(false);
-    }
-  }, [status?.live_ready]);
-
   // Format time range for display
   const timeRangeDisplay = useMemo(() => {
     if (!status?.start_time || !status?.end_time) return "";
@@ -237,8 +215,39 @@ export default function Replay() {
     );
   }

-  // No active session
-  if (!status?.active) {
+  // Startup error (job failed). Only show when status.active is also true so
+  // we don't surface stale failed jobs after a session ended cleanly.
+  if (replayJob?.status === "failed" && status?.active) {
+    return (
+      <div className="flex size-full flex-col items-center justify-center gap-4 p-8">
+        <Heading as="h2" className="text-center">
+          {t("page.startError.title")}
+        </Heading>
+        {replayJob.error_message && (
+          <p className="max-w-xl text-center text-sm text-muted-foreground">
+            {replayJob.error_message}
+          </p>
+        )}
+        <Button
+          variant="default"
+          onClick={() => {
+            axios
+              .post("debug_replay/stop")
+              .catch(() => {})
+              .finally(() => navigate("/review"));
+          }}
+        >
+          {t("page.startError.back")}
+        </Button>
+      </div>
+    );
+  }
+
+  // No active session. Also covers the brief window between the runner
+  // pushing job.status = "cancelled" via WS and the next SWR refresh
+  // flipping status.active to false — without this, render falls through
+  // to the full replay UI and you see a flash of it before stop completes.
+  if (!status?.active || replayJob?.status === "cancelled") {
     return (
       <div className="flex size-full flex-col items-center justify-center gap-4 p-8">
         <MdReplay className="size-12" />
@@ -255,6 +264,52 @@ export default function Replay() {
     );
   }

+  // Startup in progress (job is running). The session is active but the
+  // replay camera isn't ready yet; show progress / phase from the job.
+  const startupStep =
+    replayJob?.status === "running"
+      ? (replayJob.results?.current_step ?? null)
+      : null;
+  if (startupStep === "preparing_clip" || startupStep === "starting_camera") {
+    const phaseTitle =
+      startupStep === "preparing_clip"
+        ? t("page.preparingClip")
+        : t("page.startingCamera");
+    const progressPercent = replayJob?.results?.progress_percent ?? null;
+    const showProgressBar =
+      startupStep === "preparing_clip" && progressPercent != null;
+    return (
+      <div className="flex size-full flex-col items-center justify-center gap-4 p-8">
+        {showProgressBar ? (
+          <div className="flex w-64 flex-col items-center gap-2">
+            <Progress value={progressPercent ?? 0} />
+            <div className="text-xs text-muted-foreground">
+              {Math.round(progressPercent ?? 0)}%
+            </div>
+          </div>
+        ) : (
+          <ActivityIndicator className="size-8" />
+        )}
+        <Heading as="h3" className="text-center">
+          {phaseTitle}
+        </Heading>
+        {startupStep === "preparing_clip" && (
+          <p className="max-w-md text-center text-sm text-muted-foreground">
+            {t("page.preparingClipDesc")}
+          </p>
+        )}
+        <Button
+          variant="outline"
+          size="sm"
+          disabled={isStopping}
+          onClick={handleStop}
+        >
+          {t("button.cancel", { ns: "common" })}
+        </Button>
+      </div>
+    );
+  }
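The render branching above can be isolated into plain functions. A hedged sketch of the decision logic, with illustrative helper names and a trimmed job type (the real payload comes from `useJobStatus<DebugReplayJobResults>`):

```typescript
type StartupStep = "preparing_clip" | "starting_camera" | null;
type ReplayJob = {
  status: "running" | "failed" | "cancelled" | "complete";
  results?: { current_step: StartupStep; progress_percent: number | null };
};

// Only a running job contributes a startup phase; failed and cancelled jobs
// are handled by the earlier branches.
function startupStep(job?: ReplayJob): StartupStep {
  return job?.status === "running" ? (job.results?.current_step ?? null) : null;
}

// Percent is only meaningful while ffmpeg prepares the clip; during
// starting_camera the spinner is shown instead.
function showProgressBar(job?: ReplayJob): boolean {
  return (
    startupStep(job) === "preparing_clip" &&
    (job?.results?.progress_percent ?? null) != null
  );
}

const preparing: ReplayJob = {
  status: "running",
  results: { current_step: "preparing_clip", progress_percent: 40 },
};
console.log(showProgressBar(preparing)); // true
console.log(startupStep({ status: "failed" })); // null
```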
   return (
     <div className="flex size-full flex-col overflow-hidden">
       <Toaster position="top-center" closeButton={true} />

@@ -345,6 +400,8 @@ export default function Replay() {
           ) : (
             status.replay_camera && (
               <div className="relative size-full min-h-10" ref={containerRef}>
+                {status.live_ready ? (
+                  <>
                     <AutoUpdatingCameraImage
                       className="size-full"
                       cameraClasses="relative w-full h-full flex flex-col justify-start"

@@ -365,7 +422,8 @@ export default function Replay() {
                       }
                     />
                   )}
-                {showReplayInitSkeleton && (
+                  </>
+                ) : (
                   <div className="pointer-events-none absolute inset-0 z-10 size-full rounded-lg bg-background">
                     <Skeleton className="size-full rounded-lg" />
                     <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center gap-2">
@@ -595,6 +653,11 @@ export default function Replay() {
                 {t("page.configurationDesc")}
               </DialogDescription>
             </DialogHeader>
+            {configSchema == null ? (
+              <div className="flex h-40 items-center justify-center">
+                <ActivityIndicator />
+              </div>
+            ) : (
               <div className="space-y-6">
                 <ConfigSectionTemplate
                   sectionKey="motion"

@@ -621,6 +684,7 @@ export default function Replay() {
                   showOverrideIndicator={false}
                 />
               </div>
+            )}
           </DialogContent>
         </Dialog>
       </div>

View File

@@ -382,6 +382,18 @@ export type AllGroupsStreamingSettings = {
   [groupName: string]: GroupStreamingSettings;
 };

+export type GenAIRole = "chat" | "descriptions" | "embeddings";
+
+export type GenAIAgentConfig = {
+  api_key?: string;
+  base_url?: string;
+  model: string;
+  provider?: string;
+  roles: GenAIRole[];
+  provider_options?: Record<string, unknown>;
+  runtime_options?: Record<string, unknown>;
+};
+
 export interface FrigateConfig {
   version: string;
   safe_mode: boolean;

@@ -478,12 +490,7 @@ export interface FrigateConfig {
     retry_interval: number;
   };

-  genai: {
-    provider: string;
-    base_url?: string;
-    api_key?: string;
-    model: string;
-  };
+  genai: Record<string, GenAIAgentConfig>;

   go2rtc: {
     streams: Record<string, string | string[]>;
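With `genai` now a `Record<string, GenAIAgentConfig>`, consumers look agents up by role rather than reading a single model field. A hypothetical helper sketching that pattern (`agentForRole` is not part of the codebase; types are trimmed to what the lookup needs):

```typescript
type GenAIRole = "chat" | "descriptions" | "embeddings";
type GenAIAgentConfig = {
  model: string;
  roles: GenAIRole[];
  provider?: string;
};

// Returns the name of the first configured agent whose roles include the
// requested role, or undefined if no agent serves it.
function agentForRole(
  genai: Record<string, GenAIAgentConfig>,
  role: GenAIRole,
): string | undefined {
  return Object.entries(genai).find(([, agent]) =>
    agent.roles.includes(role),
  )?.[0];
}

const genai: Record<string, GenAIAgentConfig> = {
  cloud: { model: "gpt-4o", roles: ["chat", "descriptions"] },
  local: { model: "all-minilm", roles: ["embeddings"] },
};
console.log(agentForRole(genai, "embeddings")); // "local"
console.log(agentForRole(genai, "chat")); // "cloud"
```

`Object.entries` preserves string-key insertion order, so "first agent" means first in config order.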