Merge branch 'blakeblackshear:dev' into quantumrand/yolo26-support

This commit is contained in:
Paul 2026-01-26 03:33:47 -08:00 committed by GitHub
commit c65ef8ad0d
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
32 changed files with 692 additions and 377 deletions

View File

@ -1,3 +1,385 @@
# GitHub Copilot Instructions for Frigate NVR
This document provides coding guidelines and best practices for contributing to Frigate NVR, a complete and local NVR designed for Home Assistant with AI object detection.
## Project Overview
Frigate NVR is a realtime object detection system for IP cameras that uses:
- **Backend**: Python 3.13+ with FastAPI, OpenCV, TensorFlow/ONNX
- **Frontend**: React with TypeScript, Vite, TailwindCSS
- **Architecture**: Multiprocessing design with ZMQ and MQTT communication
- **Focus**: Minimal resource usage with maximum performance
## Code Review Guidelines
When reviewing code, do NOT comment on:
- Missing imports - Static analysis tooling catches these
- Code formatting - Ruff (Python) and Prettier (TypeScript/React) handle formatting
- Minor style inconsistencies already enforced by linters
## Python Backend Standards
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Pattern matching
- Type hints (comprehensive typing preferred)
- f-strings (preferred over `%` or `.format()`)
- Dataclasses
- Async/await patterns
### Code Quality Standards
- **Formatting**: Ruff (configured in `pyproject.toml`)
- **Linting**: Ruff with rules defined in project config
- **Type Checking**: Use type hints consistently
- **Testing**: unittest framework - use `python3 -u -m unittest` to run tests
- **Language**: American English for all code, comments, and documentation
### Logging Standards
- **Logger Pattern**: Use module-level logger
```python
import logging
logger = logging.getLogger(__name__)
```
- **Format Guidelines**:
- No periods at end of log messages
- No sensitive data (keys, tokens, passwords)
- Use lazy logging: `logger.debug("Message with %s", variable)`
- **Log Levels**:
- `debug`: Development and troubleshooting information
- `info`: Important runtime events (startup, shutdown, state changes)
- `warning`: Recoverable issues that should be addressed
- `error`: Errors that affect functionality but don't crash the app
- `exception`: Use in except blocks to include traceback
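A minimal sketch tying these conventions together; the sensor helper is illustrative, not a Frigate API:
```python
import logging
import random

logger = logging.getLogger(__name__)


def fetch_value(sensor_id: str) -> int:
    """Illustrative stand-in for an I/O call that may raise OSError."""
    if random.random() < 0.1:
        raise OSError("sensor unreachable")
    return 42


def read_sensor(sensor_id: str) -> None:
    # Lazy logging: the message is only formatted if DEBUG is enabled
    logger.debug("Polling sensor %s", sensor_id)
    try:
        value = fetch_value(sensor_id)
    except OSError:
        # exception() is used in except blocks to include the traceback
        logger.exception("Failed to read sensor %s", sensor_id)
        return
    # No trailing period, no sensitive data in the message
    logger.info("Sensor %s reported %d", sensor_id, value)
```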
### Error Handling
- **Exception Types**: Choose most specific exception available
- **Try/Catch Best Practices**:
- Only wrap code that can throw exceptions
- Keep try blocks minimal - process data after the try/except
- Avoid bare exceptions except in background tasks
Bad pattern:
```python
try:
    data = await device.get_data()  # Can throw
    # ❌ Don't process data inside try block
    processed = data.get("value", 0) * 100
    result = processed
except DeviceError:
    logger.error("Failed to get data")
```
Good pattern:
```python
try:
    data = await device.get_data()  # Can throw
except DeviceError:
    logger.error("Failed to get data")
    return

# ✅ Process data outside try block
processed = data.get("value", 0) * 100
result = processed
```
### Async Programming
- **External I/O**: All external I/O operations must be async
- **Best Practices**:
- Avoid sleeping in loops - use `asyncio.sleep()` not `time.sleep()`
- Avoid awaiting in loops - use `asyncio.gather()` instead
- No blocking calls in async functions
- Use `asyncio.create_task()` for background operations
- **Thread Safety**: Use proper synchronization for shared state
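A short sketch of these patterns using only the standard library; the coroutine and camera names are illustrative:
```python
import asyncio


async def check_camera(name: str) -> str:
    # Simulated non-blocking I/O; use asyncio.sleep(), never time.sleep()
    await asyncio.sleep(0.1)
    return f"{name}: ok"


async def main() -> None:
    cameras = ["front_door", "back_yard", "garage"]
    # Run the checks concurrently instead of awaiting one per loop iteration
    results = await asyncio.gather(*(check_camera(c) for c in cameras))
    print(results)
    # Background operation that runs while other work continues
    task = asyncio.create_task(check_camera("driveway"))
    print(await task)


asyncio.run(main())
```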
### Documentation Standards
- **Module Docstrings**: Concise descriptions at top of files
```python
"""Utilities for motion detection and analysis."""
```
- **Function Docstrings**: Required for public functions and methods
```python
async def process_frame(frame: ndarray, config: Config) -> Detection:
    """Process a video frame for object detection.

    Args:
        frame: The video frame as numpy array
        config: Detection configuration

    Returns:
        Detection results with bounding boxes
    """
```
- **Comment Style**:
- Explain the "why" not just the "what"
- Keep lines under 88 characters when possible
- Use clear, descriptive comments
### File Organization
- **API Endpoints**: `frigate/api/` - FastAPI route handlers
- **Configuration**: `frigate/config/` - Configuration parsing and validation
- **Detectors**: `frigate/detectors/` - Object detection backends
- **Events**: `frigate/events/` - Event management and storage
- **Utilities**: `frigate/util/` - Shared utility functions
## Frontend (React/TypeScript) Standards
### Internationalization (i18n)
- **CRITICAL**: Never write user-facing strings directly in components
- **Always use react-i18next**: Import and use the `t()` function
```tsx
import { useTranslation } from "react-i18next";

function MyComponent() {
  const { t } = useTranslation(["views/live"]);
  return <div>{t("camera_not_found")}</div>;
}
```
- **Translation Files**: Add English strings to the appropriate json files in `web/public/locales/en`
- **Namespaces**: Organize translations by feature/view (e.g., `views/live`, `common`, `views/system`)
### Code Quality
- **Linting**: ESLint (see `web/.eslintrc.cjs`)
- **Formatting**: Prettier with Tailwind CSS plugin
- **Type Safety**: TypeScript strict mode enabled
- **Testing**: Vitest for unit tests
### Component Patterns
- **UI Components**: Use Radix UI primitives (in `web/src/components/ui/`)
- **Styling**: TailwindCSS with `cn()` utility for class merging
- **State Management**: React hooks (useState, useEffect, useCallback, useMemo)
- **Data Fetching**: Custom hooks with proper loading and error states
### ESLint Rules
Key rules enforced:
- `react-hooks/rules-of-hooks`: error
- `react-hooks/exhaustive-deps`: error
- `no-console`: error (use proper logging or remove)
- `@typescript-eslint/no-explicit-any`: warn (always use proper types instead of `any`)
- Unused variables must be prefixed with `_`
- Comma dangles required for multiline objects/arrays
### File Organization
- **Pages**: `web/src/pages/` - Route components
- **Views**: `web/src/views/` - Complex view components
- **Components**: `web/src/components/` - Reusable components
- **Hooks**: `web/src/hooks/` - Custom React hooks
- **API**: `web/src/api/` - API client functions
- **Types**: `web/src/types/` - TypeScript type definitions
## Testing Requirements
### Backend Testing
- **Framework**: Python unittest
- **Run Command**: `python3 -u -m unittest`
- **Location**: `frigate/test/`
- **Coverage**: Aim for comprehensive test coverage of core functionality
- **Pattern**: Use `TestCase` classes with descriptive test method names
```python
class TestMotionDetection(unittest.TestCase):
    def test_detects_motion_above_threshold(self):
        # Test implementation
        ...
```
### Test Best Practices
- Always have a way to test your work and confirm your changes
- Write tests for bug fixes to prevent regressions
- Test edge cases and error conditions
- Mock external dependencies (cameras, APIs, hardware)
- Use fixtures for test data
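As a sketch, a test that mocks an external dependency might look like this; the `MotionDetector` class here is illustrative, not Frigate's actual implementation:
```python
import unittest
from unittest.mock import MagicMock


class MotionDetector:
    """Illustrative wrapper around a camera dependency."""

    def __init__(self, camera) -> None:
        self.camera = camera

    def has_motion(self, threshold: float) -> bool:
        return self.camera.read_motion_score() > threshold


class TestMotionDetection(unittest.TestCase):
    def test_detects_motion_above_threshold(self):
        # Mock the camera so the test requires no real hardware
        camera = MagicMock()
        camera.read_motion_score.return_value = 0.9
        detector = MotionDetector(camera)
        self.assertTrue(detector.has_motion(threshold=0.5))
        camera.read_motion_score.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```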
## Development Commands
### Python Backend
```bash
# Run all tests
python3 -u -m unittest
# Run specific test file
python3 -u -m unittest frigate.test.test_ffmpeg_presets
# Check formatting (Ruff)
ruff format --check frigate/
# Apply formatting
ruff format frigate/
# Run linter
ruff check frigate/
```
### Frontend (from web/ directory)
```bash
# Start dev server (AI agents should never run this directly unless asked)
npm run dev
# Build for production
npm run build
# Run linter
npm run lint
# Fix linting issues
npm run lint:fix
# Format code
npm run prettier:write
```
### Docker Development
AI agents should never run these commands directly unless instructed.
```bash
# Build local image
make local
# Build debug image
make debug
```
## Common Patterns
### API Endpoint Pattern
```python
from fastapi import APIRouter, Request

from frigate.api.defs.tags import Tags

router = APIRouter(tags=[Tags.Events])


@router.get("/events")
async def get_events(request: Request, limit: int = 100):
    """Retrieve events from the database."""
    # Implementation
```
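For manually exercising such an endpoint, a request against a running instance might look like the following sketch; the host is an assumption based on Frigate's default API port, and the printed fields are illustrative:
```python
import requests

# Assumes a Frigate instance reachable at frigate_ip on the default port 5000
response = requests.get("http://frigate_ip:5000/api/events", params={"limit": 10})
response.raise_for_status()

for event in response.json():
    print(event["id"], event.get("label"))
```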
### Configuration Access
```python
# Access Frigate configuration
config: FrigateConfig = request.app.frigate_config
camera_config = config.cameras["front_door"]
```
### Database Queries
```python
from frigate.models import Event

# Use Peewee ORM for database access
events = (
    Event.select()
    .where(Event.camera == camera_name)
    .order_by(Event.start_time.desc())
    .limit(limit)
)
```
## Common Anti-Patterns to Avoid
### ❌ Avoid These
```python
# Blocking operations in async functions
data = requests.get(url)  # ❌ Use async HTTP client
time.sleep(5)  # ❌ Use asyncio.sleep()

# Hardcoded strings in React components
<div>Camera not found</div>  # ❌ Use t("camera_not_found")

# Missing error handling
data = await api.get_data()  # ❌ No exception handling

# Bare exceptions in regular code
try:
    value = await sensor.read()
except Exception:  # ❌ Too broad
    logger.error("Failed")
```
### ✅ Use These Instead
```python
# Async operations
import aiohttp

async with aiohttp.ClientSession() as session:
    async with session.get(url) as response:
        data = await response.json()

await asyncio.sleep(5)  # ✅ Non-blocking

# Translatable strings in React
const { t } = useTranslation();
<div>{t("camera_not_found")}</div>  # ✅ Translatable

# Proper error handling
try:
    data = await api.get_data()
except ApiException as err:
    logger.error("API error: %s", err)
    raise

# Specific exceptions
try:
    value = await sensor.read()
except SensorException as err:  # ✅ Specific
    logger.exception("Failed to read sensor")
```
## Project-Specific Conventions
### Configuration Files
- Main config: `config/config.yml`
### Directory Structure
- Backend code: `frigate/`
- Frontend code: `web/`
- Docker files: `docker/`
- Documentation: `docs/`
- Database migrations: `migrations/`
### Code Style Conformance
Always conform new and refactored code to the existing coding style in the project:
- Follow established patterns in similar files
- Match indentation and formatting of surrounding code
- Use consistent naming conventions (snake_case for Python, camelCase for TypeScript)
- Maintain the same level of verbosity in comments and docstrings
## Additional Resources
- Documentation: https://docs.frigate.video
- Main Repository: https://github.com/blakeblackshear/frigate
- Home Assistant Integration: https://github.com/blakeblackshear/frigate-hass-integration

View File

@ -79,6 +79,12 @@ cameras:
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::note
Some cameras use a separate ONVIF/service account that is distinct from the device administrator credentials. If ONVIF authentication fails with the admin account, try creating or using an ONVIF/service user in the camera's firmware. Refer to your camera manufacturer's documentation for more.
:::
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, e.g. `user: ""` and `password: ""`.

View File

@ -1,247 +0,0 @@
---
id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent to your AI provider automatically at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before its lifecycle ends, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
      objects:
        - person
      required_zones:
        - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
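As a minimal sketch, toggling this from a script could look like the following, assuming the `paho-mqtt` package, a broker on `localhost`, and the `ON`/`OFF` payloads used by Frigate's other set topics:
```python
import paho.mqtt.publish as publish

# Disable GenAI object descriptions for front_camera until re-enabled
publish.single(
    "frigate/front_camera/object_descriptions/set",
    payload="OFF",
    hostname="localhost",  # assumption: your MQTT broker address
)
```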
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct-tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/search?c=vision). Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull qwen3-vl:2b-instruct` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects; for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, e.g. `http://frigate_ip:5000/api/events/<event_id>`.
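A sketch of a consumer for this topic, assuming `paho-mqtt` 2.x and a broker on `localhost`:
```python
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    # Per the MQTT docs, the payload includes event_id and description
    print(f"{payload['event_id']}: {payload['description']}")
    # Details if needed: GET http://frigate_ip:5000/api/events/<event_id>


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost")  # assumption: your MQTT broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```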
If you want notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
  send_triggers:
    tracked_object_end: true # default
    after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:8b-instruct
  objects:
    prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
    object_prompts:
      person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
      car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)

View File

@ -17,11 +17,23 @@ Using Ollama on CPU is not recommended, high inference times make using Generati
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac, for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests).
### Model Types: Instruct vs Thinking
Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions.
- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case.
- **Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models.
Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses.
**Recommendation:**
Always select the `-instruct` or documented instruct-tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use.
### Supported Models
@ -54,26 +66,26 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
:::
#### Ollama Cloud models
Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud).
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
@ -90,16 +102,32 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-2.5-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example:
```
genai:
  provider: gemini
  ...
  provider_options:
    base_url: https://...
```
Other HTTP options are available, see the [python-genai documentation](https://github.com/googleapis/python-genai).
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
@ -143,17 +171,18 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
  provider: azure_openai
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```

View File

@ -125,10 +125,10 @@ review:
## Review Reports
Along with individual review item summaries, Generative AI can also produce a single report of review items from all cameras marked "suspicious" over a specified time period (for example, a daily summary of suspicious activity while you're on vacation).
### Requesting Reports Programmatically
Review reports can be requested via the [API](/integrations/api/generate-review-summary-review-summarize-start-start-ts-end-end-ts-post) by sending a POST request to `/api/review/summarize/start/{start_ts}/end/{end_ts}` with Unix timestamps.
For Home Assistant users, there is a built-in service (`frigate.review_summarize`) that makes it easy to request review reports as part of automations or scripts. This allows you to automatically generate daily summaries, vacation reports, or custom time period reports based on your specific needs.
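As a sketch, requesting a report for the past 24 hours from a script might look like this; the host is an assumption based on Frigate's default API port:
```python
import time

import requests

end_ts = int(time.time())
start_ts = end_ts - 24 * 60 * 60  # previous 24 hours

response = requests.post(
    f"http://frigate_ip:5000/api/review/summarize/start/{start_ts}/end/{end_ts}"
)
response.raise_for_status()
print(response.json())
```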

View File

@ -11,6 +11,12 @@ Cameras configured to output H.264 video and AAC audio will offer the most compa
- **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
:::tip
For the best experience in Frigate's UI, configure your camera so that the detection and recording streams use the same aspect ratio. For example, if your main stream is 3840x2160 (16:9), set your substream to 640x360 (also 16:9) instead of 640x480 (4:3). While not strictly required, matching aspect ratios helps ensure seamless live stream display and preview/recordings playback.
:::
### Choosing a detect resolution
The ideal resolution for detection is one where the objects you want to detect fit inside the dimensions of the model used by Frigate (320x320). Frigate does not pass the entire camera frame to object detection. It will crop an area of motion from the full frame and look in that portion of the frame. If the area being inspected is larger than 320x320, Frigate must resize it before running object detection. Higher resolutions do not improve the detection accuracy because the additional detail is lost in the resize. Below you can see a reference for how large a 320x320 area is against common resolutions.

View File

@ -55,12 +55,10 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices offering a wide range of compatibility with devices.
  - [Supports many model architectures](../../configuration/object_detectors#configuration)
  - Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
  - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- <CommunityBadge /> [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
@ -89,7 +87,6 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs to provide efficient object detection.
  - [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
  - Runs well with any size models including large
@ -152,9 +149,7 @@ The OpenVINO detector type is able to run on:
:::note
Intel B-series (Battlemage) GPUs are not officially supported with Frigate 0.17, though a user has [provided steps to rebuild the Frigate container](https://github.com/blakeblackshear/frigate/discussions/21257) with support for them.
:::
@ -172,7 +167,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms s-640: 30 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |

View File

@ -37,7 +37,7 @@ cameras:
## Steps
1. Export or copy the clip you want to replay to the Frigate host (e.g., `/media/frigate/` or `debug/clips/`). Depending on what you are looking to debug, it is often helpful to add some "pre-capture" time (where the tracked object is not yet visible) to the clip when exporting.
2. Add the temporary camera to `config/config.yml` (example above). Use a unique name such as `test` or `replay_camera` so it's easy to remove later.
   - If you're debugging a specific camera, copy the settings from that camera (frame rate, model/enrichment settings, zones, etc.) into the temporary camera so the replay closely matches the original environment. Leave `record` and `snapshots` disabled unless you are specifically debugging recording or snapshot behavior.
3. Restart Frigate.

View File

@ -759,15 +759,28 @@ def delete_classification_dataset_images(
        CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
    )

    deleted_count = 0
    for id in list_of_ids:
        file_path = os.path.join(folder, sanitize_filename(id))

        if os.path.isfile(file_path):
            os.unlink(file_path)
            deleted_count += 1

    if os.path.exists(folder) and not os.listdir(folder) and category.lower() != "none":
        os.rmdir(folder)

    # Update training metadata to reflect deleted images
    # This ensures the dataset is marked as changed after deletion
    # (even if the total count happens to be the same after adding and deleting)
    if deleted_count > 0:
        sanitized_name = sanitize_filename(name)
        metadata = read_training_metadata(sanitized_name)

        if metadata:
            last_count = metadata.get("last_training_image_count", 0)
            updated_count = max(0, last_count - deleted_count)
            write_training_metadata(sanitized_name, updated_count)

    return JSONResponse(
        content=({"success": True, "message": "Successfully deleted images."}),
        status_code=200,

View File

@ -140,7 +140,12 @@ Each line represents a detection state, not necessarily unique individuals. Pare
        ) as f:
            f.write(context_prompt)

        json_schema = {
            "name": "review_metadata",
            "schema": ReviewMetadata.model_json_schema(),
            "strict": True,
        }
        response = self._send(context_prompt, thumbnails, json_schema=json_schema)

        if debug_save and response:
            with open(
@ -152,6 +157,8 @@ Each line represents a detection state, not necessarily unique individuals. Pare
                f.write(response)

        if response:
            # With JSON schema, response should already be valid JSON
            # But keep regex cleanup as fallback for providers without schema support
            clean_json = re.sub(
                r"\n?```$", "", re.sub(r"^```[a-zA-Z0-9]*\n?", "", response)
            )
@ -284,8 +291,16 @@ Guidelines:
"""Initialize the client.""" """Initialize the client."""
return None return None
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]: def _send(
"""Submit a request to the provider.""" self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
) -> Optional[str]:
"""Submit a request to the provider.
Args:
prompt: The text prompt to send
images: List of image bytes to include
json_schema: Optional JSON schema for structured output (provider-specific support)
"""
return None return None
def get_context_size(self) -> int: def get_context_size(self) -> int:

View File

@ -41,13 +41,15 @@ class OpenAIClient(GenAIClient):
            azure_endpoint=azure_endpoint,
        )

    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Azure OpenAI."""
        encoded_images = [base64.b64encode(image).decode("utf-8") for image in images]

        request_params = {
            "model": self.genai_config.model,
            "messages": [
                {
                    "role": "user",
                    "content": [{"type": "text", "text": prompt}]
@ -63,7 +65,22 @@ class OpenAIClient(GenAIClient):
                    ],
                },
            ],
            "timeout": self.timeout,
        }

        if json_schema:
            request_params["response_format"] = {
                "type": "json_schema",
                "json_schema": {
                    "name": json_schema.get("name", "response"),
                    "schema": json_schema.get("schema", {}),
                    "strict": json_schema.get("strict", True),
                },
            }

        try:
            result = self.provider.chat.completions.create(
                **request_params,
                **self.genai_config.runtime_options,
            )
        except Exception as e:

View File

@ -22,7 +22,6 @@ class GeminiClient(GenAIClient):
"""Initialize the client.""" """Initialize the client."""
# Merge provider_options into HttpOptions # Merge provider_options into HttpOptions
http_options_dict = { http_options_dict = {
"api_version": "v1",
"timeout": int(self.timeout * 1000), # requires milliseconds "timeout": int(self.timeout * 1000), # requires milliseconds
"retry_options": types.HttpRetryOptions( "retry_options": types.HttpRetryOptions(
attempts=3, attempts=3,
@ -42,7 +41,9 @@ class GeminiClient(GenAIClient):
            http_options=types.HttpOptions(**http_options_dict),
        )

    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Gemini."""
        contents = [
            types.Part.from_bytes(data=img, mime_type="image/jpeg") for img in images
@ -52,6 +53,12 @@ class GeminiClient(GenAIClient):
        generation_config_dict = {"candidate_count": 1}
        generation_config_dict.update(self.genai_config.runtime_options)

        if json_schema and "schema" in json_schema:
            generation_config_dict["response_mime_type"] = "application/json"
            generation_config_dict["response_schema"] = types.Schema(
                json_schema=json_schema["schema"]
            )

        response = self.provider.models.generate_content(
            model=self.genai_config.model,
            contents=contents,

View File

@ -50,7 +50,9 @@ class OllamaClient(GenAIClient):
            logger.warning("Error initializing Ollama: %s", str(e))
            return None

    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Ollama"""
        if self.provider is None:
            logger.warning(
@ -62,6 +64,10 @@ class OllamaClient(GenAIClient):
            **self.provider_options,
            **self.genai_config.runtime_options,
        }

        if json_schema and "schema" in json_schema:
            ollama_options["format"] = json_schema["schema"]

        result = self.provider.generate(
            self.genai_config.model,
            prompt,

View File

@ -31,7 +31,9 @@ class OpenAIClient(GenAIClient):
        }
        return OpenAI(api_key=self.genai_config.api_key, **provider_opts)

    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to OpenAI."""
        encoded_images = [base64.b64encode(image).decode("utf-8") for image in images]
        messages_content = []
@ -51,16 +53,31 @@ class OpenAIClient(GenAIClient):
"text": prompt, "text": prompt,
} }
) )
try:
result = self.provider.chat.completions.create( request_params = {
model=self.genai_config.model, "model": self.genai_config.model,
messages=[ "messages": [
{ {
"role": "user", "role": "user",
"content": messages_content, "content": messages_content,
}, },
], ],
timeout=self.timeout, "timeout": self.timeout,
}
if json_schema:
request_params["response_format"] = {
"type": "json_schema",
"json_schema": {
"name": json_schema.get("name", "response"),
"schema": json_schema.get("schema", {}),
"strict": json_schema.get("strict", True),
},
}
try:
result = self.provider.chat.completions.create(
**request_params,
**self.genai_config.runtime_options, **self.genai_config.runtime_options,
) )
if ( if (

View File

@ -64,10 +64,12 @@ def stop_ffmpeg(ffmpeg_process: sp.Popen[Any], logger: logging.Logger):
    try:
        logger.info("Waiting for ffmpeg to exit gracefully...")
        ffmpeg_process.communicate(timeout=30)
        logger.info("FFmpeg has exited")
    except sp.TimeoutExpired:
        logger.info("FFmpeg didn't exit. Force killing...")
        ffmpeg_process.kill()
        ffmpeg_process.communicate()
        logger.info("FFmpeg has been killed")
    ffmpeg_process = None

web/package-lock.json (generated)
View File

@ -48,7 +48,7 @@
"idb-keyval": "^6.2.1", "idb-keyval": "^6.2.1",
"immer": "^10.1.1", "immer": "^10.1.1",
"konva": "^9.3.18", "konva": "^9.3.18",
"lodash": "^4.17.21", "lodash": "^4.17.23",
"lucide-react": "^0.477.0", "lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1", "monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0", "next-themes": "^0.3.0",
@ -7210,9 +7210,10 @@
      }
    },
    "node_modules/lodash": {
      "version": "4.17.23",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.23.tgz",
      "integrity": "sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==",
      "license": "MIT"
    },
    "node_modules/lodash.merge": {
      "version": "4.6.2",

View File

@ -54,7 +54,7 @@
"idb-keyval": "^6.2.1", "idb-keyval": "^6.2.1",
"immer": "^10.1.1", "immer": "^10.1.1",
"konva": "^9.3.18", "konva": "^9.3.18",
"lodash": "^4.17.21", "lodash": "^4.17.23",
"lucide-react": "^0.477.0", "lucide-react": "^0.477.0",
"monaco-yaml": "^5.3.1", "monaco-yaml": "^5.3.1",
"next-themes": "^0.3.0", "next-themes": "^0.3.0",

View File

@ -181,6 +181,16 @@
"restricted": { "restricted": {
"title": "No Cameras Available", "title": "No Cameras Available",
"description": "You don't have permission to view any cameras in this group." "description": "You don't have permission to view any cameras in this group."
},
"default": {
"title": "No Cameras Configured",
"description": "Get started by connecting a camera to Frigate.",
"buttonText": "Add Camera"
},
"group": {
"title": "No Cameras in Group",
"description": "This camera group has no assigned or enabled cameras.",
"buttonText": "Manage Groups"
} }
} }
} }

View File

@ -386,11 +386,11 @@
"title": "Camera Review Settings", "title": "Camera Review Settings",
"object_descriptions": { "object_descriptions": {
"title": "Generative AI Object Descriptions", "title": "Generative AI Object Descriptions",
"desc": "Temporarily enable/disable Generative AI object descriptions for this camera. When disabled, AI generated descriptions will not be requested for tracked objects on this camera." "desc": "Temporarily enable/disable Generative AI object descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for tracked objects on this camera."
}, },
"review_descriptions": { "review_descriptions": {
"title": "Generative AI Review Descriptions", "title": "Generative AI Review Descriptions",
"desc": "Temporarily enable/disable Generative AI review descriptions for this camera. When disabled, AI generated descriptions will not be requested for review items on this camera." "desc": "Temporarily enable/disable Generative AI review descriptions for this camera until Frigate restarts. When disabled, AI generated descriptions will not be requested for review items on this camera."
}, },
"review": { "review": {
"title": "Review", "title": "Review",

View File

@ -12,7 +12,7 @@ import { useCameraPreviews } from "@/hooks/use-camera-previews";
import { baseUrl } from "@/api/baseUrl"; import { baseUrl } from "@/api/baseUrl";
import { VideoPreview } from "../preview/ScrubbablePreview"; import { VideoPreview } from "../preview/ScrubbablePreview";
import { useApiHost } from "@/api"; import { useApiHost } from "@/api";
import { isDesktop, isSafari } from "react-device-detect"; import { isSafari } from "react-device-detect";
import { useUserPersistence } from "@/hooks/use-user-persistence"; import { useUserPersistence } from "@/hooks/use-user-persistence";
import { Skeleton } from "../ui/skeleton"; import { Skeleton } from "../ui/skeleton";
import { Button } from "../ui/button"; import { Button } from "../ui/button";
@ -87,7 +87,6 @@ export function AnimatedEventCard({
  }, [visibilityListener]);

  const [isLoaded, setIsLoaded] = useState(false);

  // interaction
@@ -134,18 +133,15 @@ export function AnimatedEventCard({
     <Tooltip>
       <TooltipTrigger asChild>
         <div
-          className="relative h-24 flex-shrink-0 overflow-hidden rounded md:rounded-lg 4k:h-32"
+          className="group relative h-24 flex-shrink-0 overflow-hidden rounded md:rounded-lg 4k:h-32"
           style={{
             aspectRatio: alertVideos ? aspectRatio : undefined,
           }}
-          onMouseEnter={isDesktop ? () => setIsHovered(true) : undefined}
-          onMouseLeave={isDesktop ? () => setIsHovered(false) : undefined}
         >
-          {isHovered && (
           <Tooltip>
             <TooltipTrigger asChild>
               <Button
-                className="absolute left-2 top-1 z-40 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500"
+                className="pointer-events-none absolute left-2 top-1 z-40 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 opacity-0 transition-opacity group-hover:pointer-events-auto group-hover:opacity-100"
                 size="xs"
                 aria-label={t("markAsReviewed")}
                 onClick={async () => {
@@ -158,7 +154,6 @@ export function AnimatedEventCard({
             </TooltipTrigger>
             <TooltipContent>{t("markAsReviewed")}</TooltipContent>
           </Tooltip>
-          )}
           {previews != undefined && alertVideosLoaded && (
             <div
               className="size-full cursor-pointer"


@@ -35,7 +35,9 @@ export function EmptyCard({
       {icon}
       {TitleComponent}
       {description && (
-        <div className="mb-3 text-secondary-foreground">{description}</div>
+        <div className="mb-3 text-center text-secondary-foreground">
+          {description}
+        </div>
       )}
       {buttonText?.length && (
         <Button size="sm" variant="select">


@@ -141,16 +141,36 @@ export default function ClassificationModelEditDialog({
   });

   // Fetch dataset to get current classes for state models
-  const { data: dataset } = useSWR<ClassificationDatasetResponse>(
-    isStateModel ? `classification/${model.name}/dataset` : null,
-    {
-      revalidateOnFocus: false,
-    },
+  const { data: dataset, mutate: mutateDataset } =
+    useSWR<ClassificationDatasetResponse>(
+      isStateModel && open ? `classification/${model.name}/dataset` : null,
+      { revalidateOnFocus: false },
   );
+
+  useEffect(() => {
+    if (open) {
+      if (isObjectModel) {
+        form.reset({
+          objectLabel: model.object_config?.objects?.[0] || "",
+          objectType:
+            (model.object_config
+              ?.classification_type as ObjectClassificationType) || "sub_label",
+        } as ObjectFormData);
+      } else {
+        form.reset({
+          classes: [""],
+        } as StateFormData);
+      }
+      if (isStateModel) {
+        mutateDataset();
+      }
+    }
+  }, [open, isObjectModel, isStateModel, model, form, mutateDataset]);

   // Update form with classes from dataset when loaded
   useEffect(() => {
-    if (isStateModel && dataset?.categories) {
+    if (isStateModel && open && dataset?.categories) {
       const classes = Object.keys(dataset.categories).filter(
         (key) => key !== "none",
       );
@@ -161,7 +181,7 @@ export default function ClassificationModelEditDialog({
       );
     }
   }
-  }, [dataset, isStateModel, form]);
+  }, [dataset, isStateModel, open, form]);

   const watchedClasses = isStateModel
     ? (form as ReturnType<typeof useForm<StateFormData>>).watch("classes")


@@ -13,7 +13,7 @@ import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
 import { baseUrl } from "@/api/baseUrl";
 import { REVIEW_PADDING } from "@/types/review";
 import {
-  ASPECT_VERTICAL_LAYOUT,
+  ASPECT_PORTRAIT_LAYOUT,
   ASPECT_WIDE_LAYOUT,
   Recording,
 } from "@/types/record";
@@ -39,6 +39,7 @@ import { useApiHost } from "@/api";
 import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
 import ObjectTrackOverlay from "../ObjectTrackOverlay";
 import { useIsAdmin } from "@/hooks/use-is-admin";
+import { VideoResolutionType } from "@/types/live";

 type TrackingDetailsProps = {
   className?: string;
@@ -253,16 +254,25 @@ export function TrackingDetails({
   const [timelineSize] = useResizeObserver(timelineContainerRef);

+  const [fullResolution, setFullResolution] = useState<VideoResolutionType>({
+    width: 0,
+    height: 0,
+  });
+
   const aspectRatio = useMemo(() => {
     if (!config) {
       return 16 / 9;
     }

+    if (fullResolution.width && fullResolution.height) {
+      return fullResolution.width / fullResolution.height;
+    }
+
     return (
       config.cameras[event.camera].detect.width /
       config.cameras[event.camera].detect.height
     );
-  }, [config, event]);
+  }, [config, event, fullResolution]);

   const label = event.sub_label
     ? event.sub_label
@@ -460,7 +470,7 @@ export function TrackingDetails({
       return "normal";
     } else if (aspectRatio > ASPECT_WIDE_LAYOUT) {
       return "wide";
-    } else if (aspectRatio < ASPECT_VERTICAL_LAYOUT) {
+    } else if (aspectRatio < ASPECT_PORTRAIT_LAYOUT) {
       return "tall";
     } else {
       return "normal";
@@ -556,6 +566,7 @@ export function TrackingDetails({
             onSeekToTime={handleSeekToTime}
             onUploadFrame={onUploadFrameToPlus}
             onPlaying={() => setIsVideoLoading(false)}
+            setFullResolution={setFullResolution}
             isDetailMode={true}
             camera={event.camera}
             currentTimeOverride={currentTime}
@@ -623,7 +634,7 @@ export function TrackingDetails({
           <div
             className={cn(
               isDesktop && "justify-start overflow-hidden",
-              aspectRatio > 1 && aspectRatio < 1.5
+              aspectRatio > 1 && aspectRatio < ASPECT_PORTRAIT_LAYOUT
                 ? "lg:basis-3/5"
                 : "lg:basis-2/5",
             )}


@@ -16,7 +16,7 @@ import { zodResolver } from "@hookform/resolvers/zod";
 import { useForm, useFieldArray } from "react-hook-form";
 import { z } from "zod";
 import axios from "axios";
-import { toast, Toaster } from "sonner";
+import { toast } from "sonner";
 import { useTranslation } from "react-i18next";
 import { useState, useMemo, useEffect } from "react";
 import { LuTrash2, LuPlus } from "react-icons/lu";
@@ -26,6 +26,7 @@ import useSWR from "swr";
 import { processCameraName } from "@/utils/cameraUtil";
 import { Label } from "@/components/ui/label";
 import { ConfigSetBody } from "@/types/cameraWizard";
+import { Toaster } from "../ui/sonner";

 const RoleEnum = z.enum(["audio", "detect", "record"]);
 type Role = z.infer<typeof RoleEnum>;


@@ -374,9 +374,13 @@ export default function Events() {
           reviewed: true,
         });
         reloadData();
+
+        if (reviewSearchParams["after"] != undefined) {
+          updateSegments();
+        }
       }
     },
-    [reloadData, updateSegments],
+    [reloadData, updateSegments, reviewSearchParams],
   );

   const markItemAsReviewed = useCallback(


@@ -44,4 +44,5 @@ export type RecordingStartingPoint = {
 export type RecordingPlayerError = "stalled" | "startup";

 export const ASPECT_VERTICAL_LAYOUT = 1.5;
+export const ASPECT_PORTRAIT_LAYOUT = 1.333;
 export const ASPECT_WIDE_LAYOUT = 2;


@@ -447,7 +447,7 @@ export default function LiveDashboardView({
       )}

       {cameras.length == 0 && !includeBirdseye ? (
-        <NoCameraView />
+        <NoCameraView cameraGroup={cameraGroup} />
       ) : (
         <>
           {!fullscreen && events && events.length > 0 && (
@@ -666,28 +666,39 @@ export default function LiveDashboardView({
   );
 }

-function NoCameraView() {
+function NoCameraView({ cameraGroup }: { cameraGroup?: string }) {
   const { t } = useTranslation(["views/live"]);
   const { auth } = useContext(AuthContext);
   const isAdmin = useIsAdmin();

-  // Check if this is a restricted user with no cameras in this group
+  const isDefault = cameraGroup === "default";
   const isRestricted = !isAdmin && auth.isAuthenticated;

+  let type: "default" | "group" | "restricted";
+  if (isRestricted) {
+    type = "restricted";
+  } else if (isDefault) {
+    type = "default";
+  } else {
+    type = "group";
+  }
+
   return (
     <div className="flex size-full items-center justify-center">
       <EmptyCard
         icon={<BsFillCameraVideoOffFill className="size-8" />}
-        title={
-          isRestricted ? t("noCameras.restricted.title") : t("noCameras.title")
+        title={t(`noCameras.${type}.title`)}
+        description={t(`noCameras.${type}.description`)}
+        buttonText={
+          type !== "restricted" && isDefault
+            ? t(`noCameras.${type}.buttonText`)
+            : undefined
         }
-        description={
-          isRestricted
-            ? t("noCameras.restricted.description")
-            : t("noCameras.description")
+        link={
+          type !== "restricted" && isDefault
+            ? "/settings?page=cameraManagement"
+            : undefined
         }
-        buttonText={!isRestricted ? t("noCameras.buttonText") : undefined}
-        link={!isRestricted ? "/settings?page=cameraManagement" : undefined}
       />
     </div>
   );


@@ -1,6 +1,6 @@
 import Heading from "@/components/ui/heading";
 import { useCallback, useEffect, useMemo, useState } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { Button } from "@/components/ui/button";
 import useSWR from "swr";
 import { FrigateConfig } from "@/types/frigateConfig";


@@ -1,6 +1,7 @@
 import Heading from "@/components/ui/heading";
 import { useCallback, useContext, useEffect, useMemo, useState } from "react";
-import { Toaster, toast } from "sonner";
+import { toast } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import {
   Form,
   FormControl,
@@ -158,11 +159,12 @@ export default function CameraReviewSettingsView({
         });
       }
       setChangedValue(true);
+      setUnsavedChanges(true);
       setSelectDetections(isChecked as boolean);
     },
     // we know that these deps are correct
     // eslint-disable-next-line react-hooks/exhaustive-deps
-    [watchedAlertsZones],
+    [watchedAlertsZones, setUnsavedChanges],
   );

   const saveToConfig = useCallback(
@@ -197,6 +199,8 @@
             position: "top-center",
           },
         );
+        setChangedValue(false);
+        setUnsavedChanges(false);
         updateConfig();
       } else {
         toast.error(
@@ -229,7 +233,14 @@
           setIsLoading(false);
         });
     },
-    [updateConfig, setIsLoading, selectedCamera, cameraConfig, t],
+    [
+      updateConfig,
+      setIsLoading,
+      selectedCamera,
+      cameraConfig,
+      t,
+      setUnsavedChanges,
+    ],
   );

   const onCancel = useCallback(() => {
@@ -495,6 +506,7 @@
                       )}
                       onCheckedChange={(checked) => {
                         setChangedValue(true);
+                        setUnsavedChanges(true);
                         return checked
                           ? field.onChange([
                               ...field.value,
@@ -600,6 +612,8 @@
                         zone.name,
                       )}
                       onCheckedChange={(checked) => {
+                        setChangedValue(true);
+                        setUnsavedChanges(true);
                         return checked
                           ? field.onChange([
                               ...field.value,
@@ -699,7 +713,6 @@
             )}
           />
         </div>
-        <Separator className="my-2 flex bg-secondary" />

         <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
           <Button
@@ -712,7 +725,7 @@
           </Button>
           <Button
             variant="select"
-            disabled={isLoading}
+            disabled={!changedValue || isLoading}
             className="flex flex-1"
             aria-label={t("button.save", { ns: "common" })}
             type="submit"


@@ -1,7 +1,7 @@
 import Heading from "@/components/ui/heading";
 import { Label } from "@/components/ui/label";
 import { useCallback, useContext, useEffect, useState } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { Separator } from "../../components/ui/separator";
 import ActivityIndicator from "@/components/indicators/activity-indicator";
 import { toast } from "sonner";


@@ -664,9 +664,7 @@ export default function TriggerView({
                 <TableHeader className="sticky top-0 bg-muted/50">
                   <TableRow>
                     <TableHead className="w-4"></TableHead>
-                    <TableHead>
-                      {t("name", { ns: "triggers.table.name" })}
-                    </TableHead>
+                    <TableHead>{t("triggers.table.name")}</TableHead>
                     <TableHead>{t("triggers.table.type")}</TableHead>
                     <TableHead>
                       {t("triggers.table.lastTriggered")}


@@ -2,7 +2,7 @@ import Heading from "@/components/ui/heading";
 import { Label } from "@/components/ui/label";
 import { Switch } from "@/components/ui/switch";
 import { useCallback, useContext, useEffect } from "react";
-import { Toaster } from "sonner";
+import { Toaster } from "@/components/ui/sonner";
 import { toast } from "sonner";
 import { Separator } from "../../components/ui/separator";
 import { Button } from "../../components/ui/button";