Miscellaneous fixes (0.17 beta) (#21764)

* Add 640x640 Intel NPU stats

* Use CSS instead of JS for reviewed button hover state in filmstrip

* Update Copilot instructions to copy HA's format

* Set JSON schema for genai

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Josh Hawkins 2026-01-25 19:59:25 -06:00 committed by GitHub
parent a75f6945ae
commit 50ac5a1483
8 changed files with 500 additions and 60 deletions


@@ -1,3 +1,385 @@
- For Frigate NVR, never write strings in the frontend directly. Since the project uses `react-i18next`, use `t()` and write the English string in the relevant translations file in `web/public/locales/en`.
- Always conform new and refactored code to the existing coding style in the project.
- Always have a way to test your work and confirm your changes. When running backend tests, use `python3 -u -m unittest`.
# GitHub Copilot Instructions for Frigate NVR
This document provides coding guidelines and best practices for contributing to Frigate NVR, a complete and local NVR designed for Home Assistant with AI object detection.
## Project Overview
Frigate NVR is a realtime object detection system for IP cameras that uses:
- **Backend**: Python 3.13+ with FastAPI, OpenCV, TensorFlow/ONNX
- **Frontend**: React with TypeScript, Vite, TailwindCSS
- **Architecture**: Multiprocessing design with ZMQ and MQTT communication
- **Focus**: Minimal resource usage with maximum performance
## Code Review Guidelines
When reviewing code, do NOT comment on:
- Missing imports - Static analysis tooling catches these
- Code formatting - Ruff (Python) and Prettier (TypeScript/React) handle formatting
- Minor style inconsistencies already enforced by linters
## Python Backend Standards
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Pattern matching
- Type hints (comprehensive typing preferred)
- f-strings (preferred over `%` or `.format()`)
- Dataclasses
- Async/await patterns
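A compact sketch combining several of these features (the `DetectorStatus` record and its fields are invented for illustration; async/await is covered in its own section below):
```python
from dataclasses import dataclass


@dataclass
class DetectorStatus:
    """Hypothetical status record for a single detector."""

    name: str
    inference_speed_ms: float | None = None


def describe(status: DetectorStatus) -> str:
    # Pattern matching on dataclass attributes, f-strings for output
    match status:
        case DetectorStatus(name=name, inference_speed_ms=None):
            return f"{name}: no inference data yet"
        case DetectorStatus(name=name, inference_speed_ms=speed):
            return f"{name}: {speed:.1f} ms"
        case _:
            raise TypeError("expected a DetectorStatus")
```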
### Code Quality Standards
- **Formatting**: Ruff (configured in `pyproject.toml`)
- **Linting**: Ruff with rules defined in project config
- **Type Checking**: Use type hints consistently
- **Testing**: unittest framework - use `python3 -u -m unittest` to run tests
- **Language**: American English for all code, comments, and documentation
### Logging Standards
- **Logger Pattern**: Use module-level logger
```python
import logging
logger = logging.getLogger(__name__)
```
- **Format Guidelines**:
- No periods at end of log messages
- No sensitive data (keys, tokens, passwords)
- Use lazy logging: `logger.debug("Message with %s", variable)`
- **Log Levels**:
- `debug`: Development and troubleshooting information
- `info`: Important runtime events (startup, shutdown, state changes)
- `warning`: Recoverable issues that should be addressed
- `error`: Errors that affect functionality but don't crash the app
- `exception`: Use in except blocks to include traceback
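A short sketch tying these rules together (the `fetch` callable and sensor naming are hypothetical):
```python
import logging

logger = logging.getLogger(__name__)


def read_sensor(sensor_id: str, fetch) -> float | None:
    """Read a sensor value with level-appropriate, lazily formatted logging."""
    logger.debug("Reading sensor %s", sensor_id)  # lazy formatting, no trailing period
    try:
        value = fetch(sensor_id)  # may raise TimeoutError
    except TimeoutError:
        # exception() logs at error level and includes the traceback
        logger.exception("Timed out reading sensor %s", sensor_id)
        return None
    logger.info("Sensor %s reported %.2f", sensor_id, value)
    return value
```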
### Error Handling
- **Exception Types**: Choose most specific exception available
- **Try/Catch Best Practices**:
- Only wrap code that can throw exceptions
- Keep try blocks minimal - process data after the try/except
- Avoid bare exceptions except in background tasks
Bad pattern:
```python
try:
    data = await device.get_data()  # Can throw
    # ❌ Don't process data inside try block
    processed = data.get("value", 0) * 100
    result = processed
except DeviceError:
    logger.error("Failed to get data")
```
Good pattern:
```python
try:
    data = await device.get_data()  # Can throw
except DeviceError:
    logger.error("Failed to get data")
    return

# ✅ Process data outside try block
processed = data.get("value", 0) * 100
result = processed
```
### Async Programming
- **External I/O**: All external I/O operations must be async
- **Best Practices**:
- Avoid sleeping in loops - use `asyncio.sleep()` not `time.sleep()`
- Avoid awaiting in loops - use `asyncio.gather()` instead
- No blocking calls in async functions
- Use `asyncio.create_task()` for background operations
- **Thread Safety**: Use proper synchronization for shared state
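A minimal sketch of the gather-over-looping guidance (camera names and the status coroutine are invented for illustration):
```python
import asyncio


async def fetch_status(camera: str) -> dict:
    # Stand-in for a real async I/O call (e.g., an HTTP request)
    await asyncio.sleep(0.1)
    return {"camera": camera, "online": True}


async def fetch_all(cameras: list[str]) -> list[dict]:
    # ❌ Awaiting in a loop serializes the requests:
    #     results = [await fetch_status(cam) for cam in cameras]
    # ✅ gather() runs them concurrently
    return await asyncio.gather(*(fetch_status(cam) for cam in cameras))


if __name__ == "__main__":
    print(asyncio.run(fetch_all(["front_door", "driveway"])))
```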
### Documentation Standards
- **Module Docstrings**: Concise descriptions at top of files
```python
"""Utilities for motion detection and analysis."""
```
- **Function Docstrings**: Required for public functions and methods
```python
async def process_frame(frame: ndarray, config: Config) -> Detection:
    """Process a video frame for object detection.

    Args:
        frame: The video frame as numpy array
        config: Detection configuration

    Returns:
        Detection results with bounding boxes
    """
```
- **Comment Style**:
- Explain the "why" not just the "what"
- Keep lines under 88 characters when possible
- Use clear, descriptive comments
### File Organization
- **API Endpoints**: `frigate/api/` - FastAPI route handlers
- **Configuration**: `frigate/config/` - Configuration parsing and validation
- **Detectors**: `frigate/detectors/` - Object detection backends
- **Events**: `frigate/events/` - Event management and storage
- **Utilities**: `frigate/util/` - Shared utility functions
## Frontend (React/TypeScript) Standards
### Internationalization (i18n)
- **CRITICAL**: Never write user-facing strings directly in components
- **Always use react-i18next**: Import and use the `t()` function
```tsx
import { useTranslation } from "react-i18next";
function MyComponent() {
  const { t } = useTranslation(["views/live"]);
  return <div>{t("camera_not_found")}</div>;
}
```
- **Translation Files**: Add English strings to the appropriate json files in `web/public/locales/en`
- **Namespaces**: Organize translations by feature/view (e.g., `views/live`, `common`, `views/system`)
### Code Quality
- **Linting**: ESLint (see `web/.eslintrc.cjs`)
- **Formatting**: Prettier with Tailwind CSS plugin
- **Type Safety**: TypeScript strict mode enabled
- **Testing**: Vitest for unit tests
### Component Patterns
- **UI Components**: Use Radix UI primitives (in `web/src/components/ui/`)
- **Styling**: TailwindCSS with `cn()` utility for class merging
- **State Management**: React hooks (useState, useEffect, useCallback, useMemo)
- **Data Fetching**: Custom hooks with proper loading and error states
### ESLint Rules
Key rules enforced:
- `react-hooks/rules-of-hooks`: error
- `react-hooks/exhaustive-deps`: error
- `no-console`: error (use proper logging or remove)
- `@typescript-eslint/no-explicit-any`: warn (always use proper types instead of `any`)
- Unused variables must be prefixed with `_`
- Comma dangles required for multiline objects/arrays
### File Organization
- **Pages**: `web/src/pages/` - Route components
- **Views**: `web/src/views/` - Complex view components
- **Components**: `web/src/components/` - Reusable components
- **Hooks**: `web/src/hooks/` - Custom React hooks
- **API**: `web/src/api/` - API client functions
- **Types**: `web/src/types/` - TypeScript type definitions
## Testing Requirements
### Backend Testing
- **Framework**: Python unittest
- **Run Command**: `python3 -u -m unittest`
- **Location**: `frigate/test/`
- **Coverage**: Aim for comprehensive test coverage of core functionality
- **Pattern**: Use `TestCase` classes with descriptive test method names
```python
class TestMotionDetection(unittest.TestCase):
    def test_detects_motion_above_threshold(self):
        # Test implementation
        ...
```
### Test Best Practices
- Always have a way to test your work and confirm your changes
- Write tests for bug fixes to prevent regressions
- Test edge cases and error conditions
- Mock external dependencies (cameras, APIs, hardware)
- Use fixtures for test data
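For example, a hardware dependency can be mocked so the test never touches a real camera (the `get_frame` API here is hypothetical):
```python
import unittest
from unittest.mock import MagicMock


class TestCameraRead(unittest.TestCase):
    def test_retries_after_dropped_frame(self):
        camera = MagicMock()
        camera.get_frame.side_effect = [None, b"frame-bytes"]

        # First read drops, second read succeeds
        first = camera.get_frame()
        second = camera.get_frame()

        self.assertIsNone(first)
        self.assertEqual(second, b"frame-bytes")
        self.assertEqual(camera.get_frame.call_count, 2)
```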
## Development Commands
### Python Backend
```bash
# Run all tests
python3 -u -m unittest
# Run specific test file
python3 -u -m unittest frigate.test.test_ffmpeg_presets
# Check formatting (Ruff)
ruff format --check frigate/
# Apply formatting
ruff format frigate/
# Run linter
ruff check frigate/
```
### Frontend (from web/ directory)
```bash
# Start dev server (AI agents should never run this directly unless asked)
npm run dev
# Build for production
npm run build
# Run linter
npm run lint
# Fix linting issues
npm run lint:fix
# Format code
npm run prettier:write
```
### Docker Development
AI agents should never run these commands directly unless instructed.
```bash
# Build local image
make local
# Build debug image
make debug
```
## Common Patterns
### API Endpoint Pattern
```python
from fastapi import APIRouter, Request
from frigate.api.defs.tags import Tags
router = APIRouter(tags=[Tags.Events])
@router.get("/events")
async def get_events(request: Request, limit: int = 100):
"""Retrieve events from the database."""
# Implementation
```
### Configuration Access
```python
# Access Frigate configuration
config: FrigateConfig = request.app.frigate_config
camera_config = config.cameras["front_door"]
```
### Database Queries
```python
from frigate.models import Event
# Use Peewee ORM for database access
events = (
    Event.select()
    .where(Event.camera == camera_name)
    .order_by(Event.start_time.desc())
    .limit(limit)
)
```
## Common Anti-Patterns to Avoid
### ❌ Avoid These
```python
# Blocking operations in async functions
data = requests.get(url) # ❌ Use async HTTP client
time.sleep(5) # ❌ Use asyncio.sleep()
# Hardcoded strings in React components
<div>Camera not found</div> # ❌ Use t("camera_not_found")
# Missing error handling
data = await api.get_data() # ❌ No exception handling
# Bare exceptions in regular code
try:
    value = await sensor.read()
except Exception:  # ❌ Too broad
    logger.error("Failed")
```
### ✅ Use These Instead
```python
# Async operations
import aiohttp
async with aiohttp.ClientSession() as session:
    async with session.get(url) as response:
        data = await response.json()
await asyncio.sleep(5) # ✅ Non-blocking
# Translatable strings in React
const { t } = useTranslation();
<div>{t("camera_not_found")}</div> # ✅ Translatable
# Proper error handling
try:
    data = await api.get_data()
except ApiException as err:
    logger.error("API error: %s", err)
    raise
# Specific exceptions
try:
    value = await sensor.read()
except SensorException as err:  # ✅ Specific
    logger.exception("Failed to read sensor")
```
## Project-Specific Conventions
### Configuration Files
- Main config: `config/config.yml`
### Directory Structure
- Backend code: `frigate/`
- Frontend code: `web/`
- Docker files: `docker/`
- Documentation: `docs/`
- Database migrations: `migrations/`
### Code Style Conformance
Always conform new and refactored code to the existing coding style in the project:
- Follow established patterns in similar files
- Match indentation and formatting of surrounding code
- Use consistent naming conventions (snake_case for Python, camelCase for TypeScript)
- Maintain the same level of verbosity in comments and docstrings
## Additional Resources
- Documentation: https://docs.frigate.video
- Main Repository: https://github.com/blakeblackshear/frigate
- Home Assistant Integration: https://github.com/blakeblackshear/frigate-hass-integration


@@ -167,7 +167,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | t-320: 6 ms t-640: 14 ms s-320: 8 ms s-640: 16 ms | 320: ~ 10 ms 640: ~ 20 ms | 320-n: 33 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel NPU | ~ 6 ms | s-320: 11 ms s-640: 30 ms | 320: ~ 14 ms 640: ~ 34 ms | 320-n: 40 ms | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |


@@ -140,7 +140,12 @@ Each line represents a detection state, not necessarily unique individuals. Pare
            ) as f:
                f.write(context_prompt)

        response = self._send(context_prompt, thumbnails)
        json_schema = {
            "name": "review_metadata",
            "schema": ReviewMetadata.model_json_schema(),
            "strict": True,
        }

        response = self._send(context_prompt, thumbnails, json_schema=json_schema)

        if debug_save and response:
            with open(
@@ -152,6 +157,8 @@ Each line represents a detection state, not necessarily unique individuals. Pare
                f.write(response)

        if response:
            # With JSON schema, response should already be valid JSON
            # But keep regex cleanup as fallback for providers without schema support
            clean_json = re.sub(
                r"\n?```$", "", re.sub(r"^```[a-zA-Z0-9]*\n?", "", response)
            )
@@ -284,8 +291,16 @@ Guidelines:
        """Initialize the client."""
        return None

    def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
        """Submit a request to the provider."""
    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to the provider.

        Args:
            prompt: The text prompt to send
            images: List of image bytes to include
            json_schema: Optional JSON schema for structured output (provider-specific support)
        """
        return None

    def get_context_size(self) -> int:


@@ -41,29 +41,46 @@ class OpenAIClient(GenAIClient):
            azure_endpoint=azure_endpoint,
        )

    def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Azure OpenAI."""
        encoded_images = [base64.b64encode(image).decode("utf-8") for image in images]
        request_params = {
            "model": self.genai_config.model,
            "messages": [
                {
                    "role": "user",
                    "content": [{"type": "text", "text": prompt}]
                    + [
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": f"data:image/jpeg;base64,{image}",
                                "detail": "low",
                            },
                        }
                        for image in encoded_images
                    ],
                },
            ],
            "timeout": self.timeout,
        }

        if json_schema:
            request_params["response_format"] = {
                "type": "json_schema",
                "json_schema": {
                    "name": json_schema.get("name", "response"),
                    "schema": json_schema.get("schema", {}),
                    "strict": json_schema.get("strict", True),
                },
            }

        try:
            result = self.provider.chat.completions.create(
                model=self.genai_config.model,
                messages=[
                    {
                        "role": "user",
                        "content": [{"type": "text", "text": prompt}]
                        + [
                            {
                                "type": "image_url",
                                "image_url": {
                                    "url": f"data:image/jpeg;base64,{image}",
                                    "detail": "low",
                                },
                            }
                            for image in encoded_images
                        ],
                    },
                ],
                timeout=self.timeout,
                **request_params,
                **self.genai_config.runtime_options,
            )
        except Exception as e:


@@ -41,7 +41,9 @@ class GeminiClient(GenAIClient):
            http_options=types.HttpOptions(**http_options_dict),
        )

    def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Gemini."""
        contents = [
            types.Part.from_bytes(data=img, mime_type="image/jpeg") for img in images
@@ -51,6 +53,12 @@ class GeminiClient(GenAIClient):
        generation_config_dict = {"candidate_count": 1}
        generation_config_dict.update(self.genai_config.runtime_options)

        if json_schema and "schema" in json_schema:
            generation_config_dict["response_mime_type"] = "application/json"
            generation_config_dict["response_schema"] = types.Schema(
                json_schema=json_schema["schema"]
            )

        response = self.provider.models.generate_content(
            model=self.genai_config.model,
            contents=contents,


@@ -50,7 +50,9 @@ class OllamaClient(GenAIClient):
            logger.warning("Error initializing Ollama: %s", str(e))
            return None

    def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to Ollama"""
        if self.provider is None:
            logger.warning(
@@ -62,6 +64,10 @@ class OllamaClient(GenAIClient):
            **self.provider_options,
            **self.genai_config.runtime_options,
        }

        if json_schema and "schema" in json_schema:
            ollama_options["format"] = json_schema["schema"]

        result = self.provider.generate(
            self.genai_config.model,
            prompt,


@@ -31,7 +31,9 @@ class OpenAIClient(GenAIClient):
        }
        return OpenAI(api_key=self.genai_config.api_key, **provider_opts)

    def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
    def _send(
        self, prompt: str, images: list[bytes], json_schema: Optional[dict] = None
    ) -> Optional[str]:
        """Submit a request to OpenAI."""
        encoded_images = [base64.b64encode(image).decode("utf-8") for image in images]
        messages_content = []
@@ -51,16 +53,31 @@ class OpenAIClient(GenAIClient):
                "text": prompt,
            }
        )

        request_params = {
            "model": self.genai_config.model,
            "messages": [
                {
                    "role": "user",
                    "content": messages_content,
                },
            ],
            "timeout": self.timeout,
        }

        if json_schema:
            request_params["response_format"] = {
                "type": "json_schema",
                "json_schema": {
                    "name": json_schema.get("name", "response"),
                    "schema": json_schema.get("schema", {}),
                    "strict": json_schema.get("strict", True),
                },
            }

        try:
            result = self.provider.chat.completions.create(
                model=self.genai_config.model,
                messages=[
                    {
                        "role": "user",
                        "content": messages_content,
                    },
                ],
                timeout=self.timeout,
                **request_params,
                **self.genai_config.runtime_options,
            )
            if (


@@ -12,7 +12,7 @@ import { useCameraPreviews } from "@/hooks/use-camera-previews";
import { baseUrl } from "@/api/baseUrl";
import { VideoPreview } from "../preview/ScrubbablePreview";
import { useApiHost } from "@/api";
import { isDesktop, isSafari } from "react-device-detect";
import { isSafari } from "react-device-detect";
import { useUserPersistence } from "@/hooks/use-user-persistence";
import { Skeleton } from "../ui/skeleton";
import { Button } from "../ui/button";
@@ -87,7 +87,6 @@ export function AnimatedEventCard({
  }, [visibilityListener]);

  const [isLoaded, setIsLoaded] = useState(false);
  const [isHovered, setIsHovered] = useState(false);

  // interaction
@@ -134,31 +133,27 @@ export function AnimatedEventCard({
    <Tooltip>
      <TooltipTrigger asChild>
        <div
          className="relative h-24 flex-shrink-0 overflow-hidden rounded md:rounded-lg 4k:h-32"
          className="group relative h-24 flex-shrink-0 overflow-hidden rounded md:rounded-lg 4k:h-32"
          style={{
            aspectRatio: alertVideos ? aspectRatio : undefined,
          }}
          onMouseEnter={isDesktop ? () => setIsHovered(true) : undefined}
          onMouseLeave={isDesktop ? () => setIsHovered(false) : undefined}
        >
          {isHovered && (
            <Tooltip>
              <TooltipTrigger asChild>
                <Button
                  className="absolute left-2 top-1 z-40 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500"
                  size="xs"
                  aria-label={t("markAsReviewed")}
                  onClick={async () => {
                    await axios.post(`reviews/viewed`, { ids: [event.id] });
                    updateEvents();
                  }}
                >
                  <FaCircleCheck className="size-3 text-white" />
                </Button>
              </TooltipTrigger>
              <TooltipContent>{t("markAsReviewed")}</TooltipContent>
            </Tooltip>
          )}
          <Tooltip>
            <TooltipTrigger asChild>
              <Button
                className="pointer-events-none absolute left-2 top-1 z-40 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 opacity-0 transition-opacity group-hover:pointer-events-auto group-hover:opacity-100"
                size="xs"
                aria-label={t("markAsReviewed")}
                onClick={async () => {
                  await axios.post(`reviews/viewed`, { ids: [event.id] });
                  updateEvents();
                }}
              >
                <FaCircleCheck className="size-3 text-white" />
              </Button>
            </TooltipTrigger>
            <TooltipContent>{t("markAsReviewed")}</TooltipContent>
          </Tooltip>
          {previews != undefined && alertVideosLoaded && (
            <div
              className="size-full cursor-pointer"