Compare commits


No commits in common. "c0124938b3cad696fc198b1b96a39a37df372fda" and "04a2f42d110782b870e6f855c71a42e6af41b3b2" have entirely different histories.

33 changed files with 275 additions and 533 deletions

View File

@@ -102,19 +102,8 @@ If examples for some of your classes do not appear in the grid, you can continue
 ### Improving the Model

-:::tip Diversity matters far more than volume
-Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the class. **This is why Frigate does not implement bulk training in the UI.**
-
-For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
-:::
-
-- **Start small and iterate**: Begin with a small, representative set of images per class. Models often begin working well with surprisingly few examples and improve naturally over time.
-- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90-100% or those captured under new lighting, weather, or distance conditions.
-- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
-- **The wizard is just the starting point**: You don't need to find and label every class upfront. Missing classes will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
 - **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
+- **Data collection**: Use the model's Recent Classification tab to gather balanced examples across times of day, weather, and distances.
 - **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
 - **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
 - **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
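To make the `none` class and `threshold` bullets concrete, here is a minimal sketch of the gating logic they describe, assuming a per-model threshold of `0.8`. It is illustrative only, not Frigate's actual implementation, and the function name is hypothetical.

```python
# Minimal sketch of sub-label gating; illustrative only, not Frigate's code.
from typing import Optional


def resolve_sub_label(label: str, score: float, threshold: float = 0.8) -> Optional[str]:
    """Return a sub label for a classification result, or None to ignore it."""
    if label == "none":
        # reserved class: the model is saying "no confident prediction"
        return None
    if score < threshold:
        # below the tuned per-model threshold: too uncertain to assign
        return None
    return label


assert resolve_sub_label("delivery_truck", 0.92) == "delivery_truck"
assert resolve_sub_label("delivery_truck", 0.55) is None  # under threshold
assert resolve_sub_label("none", 0.99) is None  # reserved class ignored
```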

View File

@@ -70,21 +70,10 @@ Once some images are assigned, training will begin automatically.
 ### Improving the Model

-:::tip Diversity matters far more than volume
-Selecting dozens of nearly identical images is one of the fastest ways to degrade model performance. MobileNetV2 can overfit quickly when trained on homogeneous data — the model learns what *that exact moment* looked like rather than what actually defines the state. This often leads to models that work perfectly under the original conditions but become unstable when day turns to night, weather changes, or seasonal lighting shifts. **This is why Frigate does not implement bulk training in the UI.**
-
-For more detail, see [Frigate Tip: Best Practices for Training Face and Custom Classification Models](https://github.com/blakeblackshear/frigate/discussions/21374).
-:::
-
-- **Start small and iterate**: Begin with a small, representative set of images per class. Models often begin working well with surprisingly few examples and improve naturally over time.
 - **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
 - **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
 - **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
-- **Favor hard examples**: When images appear in the Recent Classifications tab, prioritize images scoring below 90-100% or those captured under new conditions (e.g., first snow of the year, seasonal changes, objects temporarily in view, insects at night). These represent scenarios different from the default state and help prevent overfitting.
+- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100%, as this can lead to overfitting.
-- **Avoid bulk training similar images**: Training large batches of images that already score 100% (or close) adds little new information and increases the risk of overfitting.
-- **The wizard is just the starting point**: You don't need to find and label every state upfront. Missing states will naturally appear in Recent Classifications, and those images tend to be more valuable because they represent new conditions and edge cases.
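As a rough illustration of the image-selection guidance in the list above, the sketch below keeps only low-scoring results as training candidates. The result fields are assumptions made for the example, not Frigate's API.

```python
# Illustrative sketch of "favor hard examples"; field names are assumed.
def pick_training_candidates(results: list[dict], max_score: float = 0.90) -> list[dict]:
    """Keep low-scoring results: they carry the most new information."""
    return [r for r in results if r["score"] < max_score]


recent = [
    {"image": "a.jpg", "state": "open", "score": 1.00},    # already mastered, skip
    {"image": "b.jpg", "state": "open", "score": 0.62},    # hard example, keep
    {"image": "c.jpg", "state": "closed", "score": 0.88},  # borderline, keep
]
print(pick_training_candidates(recent))  # b.jpg and c.jpg
```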

 ## Debugging Classification Models

View File

@@ -32,7 +32,6 @@ class CameraConfigUpdateEnum(str, Enum):
     face_recognition = "face_recognition"
     lpr = "lpr"
     snapshots = "snapshots"
-    timestamp_style = "timestamp_style"
     zones = "zones"

@@ -134,8 +133,6 @@ class CameraConfigUpdateSubscriber:
             config.snapshots = updated_config
         elif update_type == CameraConfigUpdateEnum.onvif:
             config.onvif = updated_config
-        elif update_type == CameraConfigUpdateEnum.timestamp_style:
-            config.timestamp_style = updated_config
         elif update_type == CameraConfigUpdateEnum.zones:
             config.zones = updated_config

View File

@@ -25,7 +25,6 @@ from frigate.plus import PlusApi
 from frigate.util.builtin import (
     deep_merge,
     get_ffmpeg_arg_list,
-    load_labels,
 )
 from frigate.util.config import (
     CURRENT_CONFIG_VERSION,

@@ -41,7 +40,7 @@ from frigate.util.services import auto_detect_hwaccel
 from .auth import AuthConfig
 from .base import FrigateBaseModel
 from .camera import CameraConfig, CameraLiveConfig
-from .camera.audio import AudioConfig, AudioFilterConfig
+from .camera.audio import AudioConfig
 from .camera.birdseye import BirdseyeConfig
 from .camera.detect import DetectConfig
 from .camera.ffmpeg import FfmpegConfig

@@ -474,7 +473,7 @@ class FrigateConfig(FrigateBaseModel):
     live: CameraLiveConfig = Field(
         default_factory=CameraLiveConfig,
         title="Live playback",
-        description="Settings to control the jsmpeg live stream resolution and quality. This does not affect restreamed cameras that use go2rtc for live view.",
+        description="Settings used by the Web UI to control live stream resolution and quality.",
     )
     motion: Optional[MotionConfig] = Field(
         default=None,

@@ -672,12 +671,6 @@
             detector_config.model = model
             self.detectors[key] = detector_config

-        all_audio_labels = {
-            label
-            for label in load_labels("/audio-labelmap.txt", prefill=521).values()
-            if label
-        }
-
         for name, camera in self.cameras.items():
             modified_global_config = global_config.copy()

@@ -798,14 +791,6 @@
                 camera_config.review.genai.enabled
             )

-            if camera_config.audio.filters is None:
-                camera_config.audio.filters = {}
-
-            audio_keys = all_audio_labels
-            audio_keys = audio_keys - camera_config.audio.filters.keys()
-
-            for key in audio_keys:
-                camera_config.audio.filters[key] = AudioFilterConfig()
-
             # Add default filters
             object_keys = camera_config.objects.track
             if camera_config.objects.filters is None:

View File

@@ -317,7 +317,7 @@ class MemryXDetector(DetectionApi):
                 f"Failed to remove downloaded zip {zip_path}: {e}"
             )

-    def send_input(self, connection_id, tensor_input: np.ndarray) -> None:
+    def send_input(self, connection_id, tensor_input: np.ndarray):
        """Pre-process (if needed) and send frame to MemryX input queue"""
        if tensor_input is None:
            raise ValueError("[send_input] No image data provided for inference")

View File

@@ -5,7 +5,7 @@ import importlib
 import logging
 import os
 import re
-from typing import Any, Callable, Optional
+from typing import Any, Optional

 import numpy as np
 from playhouse.shortcuts import model_to_dict

@@ -31,10 +31,10 @@ __all__ = [
 PROVIDERS = {}


-def register_genai_provider(key: GenAIProviderEnum) -> Callable:
+def register_genai_provider(key: GenAIProviderEnum):
     """Register a GenAI provider."""

-    def decorator(cls: type) -> type:
+    def decorator(cls):
         PROVIDERS[key] = cls
         return cls

@@ -297,7 +297,7 @@ Guidelines:
         """Generate a description for the frame."""
         try:
             prompt = camera_config.objects.genai.object_prompts.get(
-                str(event.label),
+                event.label,
                 camera_config.objects.genai.prompt,
             ).format(**model_to_dict(event))
         except KeyError as e:

@@ -307,7 +307,7 @@ Guidelines:
         logger.debug(f"Sending images to genai provider with prompt: {prompt}")
         return self._send(prompt, thumbnails)

-    def _init_provider(self) -> Any:
+    def _init_provider(self):
         """Initialize the client."""
         return None

@@ -402,7 +402,7 @@ Guidelines:
     }


-def load_providers() -> None:
+def load_providers():
     package_dir = os.path.dirname(__file__)
     for filename in os.listdir(package_dir):
         if filename.endswith(".py") and filename != "__init__.py":
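The hunks above touch a decorator-based provider registry. As a general, self-contained sketch of that decorator-registry pattern (generic names, not Frigate's actual module):

```python
# Generic decorator-registry sketch; names are illustrative only.
from enum import Enum


class ProviderKind(str, Enum):
    openai = "openai"
    gemini = "gemini"


PROVIDERS: dict[ProviderKind, type] = {}


def register_provider(key: ProviderKind):
    """Class decorator that records an implementation class under `key`."""

    def decorator(cls: type) -> type:
        PROVIDERS[key] = cls  # registered at import time, looked up later
        return cls

    return decorator


@register_provider(ProviderKind.openai)
class OpenAIProvider:
    """Placeholder provider implementation."""


assert PROVIDERS[ProviderKind.openai] is OpenAIProvider
```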

View File

@@ -3,7 +3,7 @@
 import base64
 import json
 import logging
-from typing import Any, AsyncGenerator, Optional
+from typing import Any, Optional
 from urllib.parse import parse_qs, urlparse

 from openai import AzureOpenAI

@@ -20,10 +20,10 @@ class OpenAIClient(GenAIClient):
     provider: AzureOpenAI

-    def _init_provider(self) -> AzureOpenAI | None:
+    def _init_provider(self):
         """Initialize the client."""
         try:
-            parsed_url = urlparse(self.genai_config.base_url or "")
+            parsed_url = urlparse(self.genai_config.base_url)
             query_params = parse_qs(parsed_url.query)
             api_version = query_params.get("api-version", [None])[0]
             azure_endpoint = f"{parsed_url.scheme}://{parsed_url.netloc}/"

@@ -79,7 +79,7 @@
             logger.warning("Azure OpenAI returned an error: %s", str(e))
             return None
         if len(result.choices) > 0:
-            return str(result.choices[0].message.content.strip())
+            return result.choices[0].message.content.strip()
         return None

     def get_context_size(self) -> int:

@@ -113,7 +113,7 @@
         if openai_tool_choice is not None:
             request_params["tool_choice"] = openai_tool_choice

-        result = self.provider.chat.completions.create(**request_params)  # type: ignore[call-overload]
+        result = self.provider.chat.completions.create(**request_params)

         if (
             result is None

@@ -181,7 +181,7 @@
         messages: list[dict[str, Any]],
         tools: Optional[list[dict[str, Any]]] = None,
         tool_choice: Optional[str] = "auto",
-    ) -> AsyncGenerator[tuple[str, Any], None]:
+    ):
         """
         Stream chat with tools; yields content deltas then final message.

@@ -214,7 +214,7 @@
         tool_calls_by_index: dict[int, dict[str, Any]] = {}
         finish_reason = "stop"

-        stream = self.provider.chat.completions.create(**request_params)  # type: ignore[call-overload]
+        stream = self.provider.chat.completions.create(**request_params)

         for chunk in stream:
             if not chunk or not chunk.choices:

View File

@@ -2,11 +2,10 @@
 import json
 import logging
-from typing import Any, AsyncGenerator, Optional
+from typing import Any, Optional

 from google import genai
 from google.genai import errors, types
-from google.genai.types import FunctionCallingConfigMode

 from frigate.config import GenAIProviderEnum
 from frigate.genai import GenAIClient, register_genai_provider

@@ -20,10 +19,10 @@ class GeminiClient(GenAIClient):
     provider: genai.Client

-    def _init_provider(self) -> genai.Client:
+    def _init_provider(self):
         """Initialize the client."""
         # Merge provider_options into HttpOptions
-        http_options_dict: dict[str, Any] = {
+        http_options_dict = {
             "timeout": int(self.timeout * 1000),  # requires milliseconds
             "retry_options": types.HttpRetryOptions(
                 attempts=3,

@@ -55,7 +54,7 @@
         ] + [prompt]
         try:
             # Merge runtime_options into generation_config if provided
-            generation_config_dict: dict[str, Any] = {"candidate_count": 1}
+            generation_config_dict = {"candidate_count": 1}
             generation_config_dict.update(self.genai_config.runtime_options)

             if response_format and response_format.get("type") == "json_schema":

@@ -66,7 +65,7 @@
             response = self.provider.models.generate_content(
                 model=self.genai_config.model,
-                contents=contents,  # type: ignore[arg-type]
+                contents=contents,
                 config=types.GenerateContentConfig(
                     **generation_config_dict,
                 ),

@@ -79,8 +78,6 @@
             return None

         try:
-            if response.text is None:
-                return None
             description = response.text.strip()
         except (ValueError, AttributeError):
             # No description was generated

@@ -105,7 +102,7 @@
         """
         try:
             # Convert messages to Gemini format
-            gemini_messages: list[types.Content] = []
+            gemini_messages = []
             for msg in messages:
                 role = msg.get("role", "user")
                 content = msg.get("content", "")

@@ -113,11 +110,7 @@
                 # Map roles to Gemini format
                 if role == "system":
                     # Gemini doesn't have system role, prepend to first user message
-                    if (
-                        gemini_messages
-                        and gemini_messages[0].role == "user"
-                        and gemini_messages[0].parts
-                    ):
+                    if gemini_messages and gemini_messages[0].role == "user":
                         gemini_messages[0].parts[
                             0
                         ].text = f"{content}\n\n{gemini_messages[0].parts[0].text}"

@@ -143,7 +136,7 @@
                         types.Content(
                             role="function",
                             parts=[
-                                types.Part.from_function_response(function_response)  # type: ignore[misc,call-arg,arg-type]
+                                types.Part.from_function_response(function_response)
                             ],
                         )
                     )

@@ -178,25 +171,19 @@
             if tool_choice:
                 if tool_choice == "none":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.NONE
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="NONE")
                     )
                 elif tool_choice == "auto":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.AUTO
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="AUTO")
                     )
                 elif tool_choice == "required":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.ANY
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="ANY")
                     )

             # Build request config
-            config_params: dict[str, Any] = {"candidate_count": 1}
+            config_params = {"candidate_count": 1}

             if gemini_tools:
                 config_params["tools"] = gemini_tools

@@ -210,7 +197,7 @@
             response = self.provider.models.generate_content(
                 model=self.genai_config.model,
-                contents=gemini_messages,  # type: ignore[arg-type]
+                contents=gemini_messages,
                 config=types.GenerateContentConfig(**config_params),
             )

@@ -304,7 +291,7 @@
         messages: list[dict[str, Any]],
         tools: Optional[list[dict[str, Any]]] = None,
         tool_choice: Optional[str] = "auto",
-    ) -> AsyncGenerator[tuple[str, Any], None]:
+    ):
         """
         Stream chat with tools; yields content deltas then final message.

@@ -312,7 +299,7 @@
         """
         try:
             # Convert messages to Gemini format
-            gemini_messages: list[types.Content] = []
+            gemini_messages = []
             for msg in messages:
                 role = msg.get("role", "user")
                 content = msg.get("content", "")

@@ -320,11 +307,7 @@
                 # Map roles to Gemini format
                 if role == "system":
                     # Gemini doesn't have system role, prepend to first user message
-                    if (
-                        gemini_messages
-                        and gemini_messages[0].role == "user"
-                        and gemini_messages[0].parts
-                    ):
+                    if gemini_messages and gemini_messages[0].role == "user":
                         gemini_messages[0].parts[
                             0
                         ].text = f"{content}\n\n{gemini_messages[0].parts[0].text}"

@@ -350,7 +333,7 @@
                         types.Content(
                             role="function",
                             parts=[
-                                types.Part.from_function_response(function_response)  # type: ignore[misc,call-arg,arg-type]
+                                types.Part.from_function_response(function_response)
                             ],
                         )
                     )

@@ -385,25 +368,19 @@
             if tool_choice:
                 if tool_choice == "none":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.NONE
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="NONE")
                     )
                 elif tool_choice == "auto":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.AUTO
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="AUTO")
                     )
                 elif tool_choice == "required":
                     tool_config = types.ToolConfig(
-                        function_calling_config=types.FunctionCallingConfig(
-                            mode=FunctionCallingConfigMode.ANY
-                        )
+                        function_calling_config=types.FunctionCallingConfig(mode="ANY")
                     )

             # Build request config
-            config_params: dict[str, Any] = {"candidate_count": 1}
+            config_params = {"candidate_count": 1}

             if gemini_tools:
                 config_params["tools"] = gemini_tools

@@ -422,7 +399,7 @@
             stream = await self.provider.aio.models.generate_content_stream(
                 model=self.genai_config.model,
-                contents=gemini_messages,  # type: ignore[arg-type]
+                contents=gemini_messages,
                 config=types.GenerateContentConfig(**config_params),
             )
) )

View File

@@ -4,7 +4,7 @@ import base64
 import io
 import json
 import logging
-from typing import Any, AsyncGenerator, Optional
+from typing import Any, Optional

 import httpx
 import numpy as np

@@ -23,7 +23,7 @@ def _to_jpeg(img_bytes: bytes) -> bytes | None:
     try:
         img = Image.open(io.BytesIO(img_bytes))
         if img.mode != "RGB":
-            img = img.convert("RGB")  # type: ignore[assignment]
+            img = img.convert("RGB")
         buf = io.BytesIO()
         img.save(buf, format="JPEG", quality=85)
         return buf.getvalue()

@@ -36,10 +36,10 @@ class LlamaCppClient(GenAIClient):
     """Generative AI client for Frigate using llama.cpp server."""

-    provider: str | None  # base_url
+    provider: str  # base_url
     provider_options: dict[str, Any]

-    def _init_provider(self) -> str | None:
+    def _init_provider(self):
         """Initialize the client."""
         self.provider_options = {
             **self.genai_config.provider_options,

@@ -75,7 +75,7 @@
             content.append(
                 {
                     "type": "image_url",
-                    "image_url": {  # type: ignore[dict-item]
+                    "image_url": {
                         "url": f"data:image/jpeg;base64,{encoded_image}",
                     },
                 }

@@ -111,7 +111,7 @@
             ):
                 choice = result["choices"][0]
                 if "message" in choice and "content" in choice["message"]:
-                    return str(choice["message"]["content"].strip())
+                    return choice["message"]["content"].strip()
             return None
         except Exception as e:
             logger.warning("llama.cpp returned an error: %s", str(e))

@@ -229,7 +229,7 @@
             content.append(
                 {
                     "prompt_string": "<__media__>\n",
-                    "multimodal_data": [encoded],  # type: ignore[dict-item]
+                    "multimodal_data": [encoded],
                 }
             )

@@ -367,7 +367,7 @@
         messages: list[dict[str, Any]],
         tools: Optional[list[dict[str, Any]]] = None,
         tool_choice: Optional[str] = "auto",
-    ) -> AsyncGenerator[tuple[str, Any], None]:
+    ):
         """Stream chat with tools via OpenAI-compatible streaming API."""
         if self.provider is None:
             logger.warning(

View File

@@ -2,7 +2,7 @@
 import json
 import logging
-from typing import Any, AsyncGenerator, Optional
+from typing import Any, Optional

 from httpx import RemoteProtocolError, TimeoutException
 from ollama import AsyncClient as OllamaAsyncClient

@@ -28,10 +28,10 @@ class OllamaClient(GenAIClient):
         },
     }

-    provider: ApiClient | None
+    provider: ApiClient
     provider_options: dict[str, Any]

-    def _init_provider(self) -> ApiClient | None:
+    def _init_provider(self):
         """Initialize the client."""
         self.provider_options = {
             **self.LOCAL_OPTIMIZED_OPTIONS,

@@ -73,7 +73,7 @@
             "exclusiveMinimum",
             "exclusiveMaximum",
         }
-        result: dict[str, Any] = {}
+        result = {}
         for key, value in schema.items():
             if not _is_properties and key in STRIP_KEYS:
                 continue

@@ -122,7 +122,7 @@
             logger.debug(
                 f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
             )
-            return str(result["response"]).strip()
+            return result["response"].strip()
         except (
             TimeoutException,
             ResponseError,

@@ -263,7 +263,7 @@
         messages: list[dict[str, Any]],
         tools: Optional[list[dict[str, Any]]] = None,
         tool_choice: Optional[str] = "auto",
-    ) -> AsyncGenerator[tuple[str, Any], None]:
+    ):
         """Stream chat with tools; yields content deltas then final message.

         When tools are provided, Ollama streaming does not include tool_calls

View File

@@ -3,7 +3,7 @@
 import base64
 import json
 import logging
-from typing import Any, AsyncGenerator, Optional
+from typing import Any, Optional

 from httpx import TimeoutException
 from openai import OpenAI

@@ -21,7 +21,7 @@ class OpenAIClient(GenAIClient):
     provider: OpenAI
     context_size: Optional[int] = None

-    def _init_provider(self) -> OpenAI:
+    def _init_provider(self):
         """Initialize the client."""
         # Extract context_size from provider_options as it's not a valid OpenAI client parameter
         # It will be used in get_context_size() instead

@@ -81,7 +81,7 @@
                 and hasattr(result, "choices")
                 and len(result.choices) > 0
             ):
-                return str(result.choices[0].message.content.strip())
+                return result.choices[0].message.content.strip()
             return None
         except (TimeoutException, Exception) as e:
             logger.warning("OpenAI returned an error: %s", str(e))

@@ -171,7 +171,7 @@
         }
         request_params.update(provider_opts)

-        result = self.provider.chat.completions.create(**request_params)  # type: ignore[call-overload]
+        result = self.provider.chat.completions.create(**request_params)

         if (
             result is None

@@ -245,7 +245,7 @@
         messages: list[dict[str, Any]],
         tools: Optional[list[dict[str, Any]]] = None,
         tool_choice: Optional[str] = "auto",
-    ) -> AsyncGenerator[tuple[str, Any], None]:
+    ):
         """
         Stream chat with tools; yields content deltas then final message.

@@ -287,7 +287,7 @@
         tool_calls_by_index: dict[int, dict[str, Any]] = {}
         finish_reason = "stop"

-        stream = self.provider.chat.completions.create(**request_params)  # type: ignore[call-overload]
+        stream = self.provider.chat.completions.create(**request_params)

         for chunk in stream:
             if not chunk or not chunk.choices:

View File

@@ -5,7 +5,7 @@ import os
 import threading
 from dataclasses import dataclass, field
 from datetime import datetime
-from typing import Optional, cast
+from typing import Optional

 from frigate.comms.inter_process import InterProcessRequestor
 from frigate.const import CONFIG_DIR, UPDATE_JOB_STATE

@@ -122,7 +122,7 @@ def start_media_sync_job(
     if job_is_running("media_sync"):
         current = get_current_job("media_sync")
         logger.warning(
-            f"Media sync job {current.id if current else 'unknown'} is already running. Rejecting new request."
+            f"Media sync job {current.id} is already running. Rejecting new request."
         )
         return None

@@ -146,9 +146,9 @@
 def get_current_media_sync_job() -> Optional[MediaSyncJob]:
     """Get the current running/queued media sync job, if any."""
-    return cast(Optional[MediaSyncJob], get_current_job("media_sync"))
+    return get_current_job("media_sync")


 def get_media_sync_job_by_id(job_id: str) -> Optional[MediaSyncJob]:
     """Get media sync job by ID. Currently only tracks the current job."""
-    return cast(Optional[MediaSyncJob], get_job_by_id("media_sync", job_id))
+    return get_job_by_id("media_sync", job_id)

View File

@@ -6,7 +6,7 @@ import threading
 from concurrent.futures import Future, ThreadPoolExecutor, as_completed
 from dataclasses import asdict, dataclass, field
 from datetime import datetime
-from typing import Any, Optional, cast
+from typing import Any, Optional

 import cv2
 import numpy as np

@@ -96,7 +96,7 @@ def create_polygon_mask(
         dtype=np.int32,
     )
     mask = np.zeros((frame_height, frame_width), dtype=np.uint8)
-    cv2.fillPoly(mask, [motion_points], (255,))
+    cv2.fillPoly(mask, [motion_points], 255)
     return mask

@@ -116,7 +116,7 @@
 def heatmap_overlaps_roi(
-    heatmap: object, roi_bbox: tuple[float, float, float, float]
+    heatmap: dict[str, int], roi_bbox: tuple[float, float, float, float]
 ) -> bool:
     """Check if a sparse motion heatmap has any overlap with the ROI bounding box.

@@ -155,9 +155,9 @@ def segment_passes_activity_gate(recording: Recordings) -> bool:
     Returns True if any of motion, objects, or regions is non-zero/non-null.
     Returns True if all are null (old segments without data).
     """
-    motion: Any = recording.motion
-    objects: Any = recording.objects
-    regions: Any = recording.regions
+    motion = recording.motion
+    objects = recording.objects
+    regions = recording.regions

     # Old segments without metadata - pass through (conservative)
     if motion is None and objects is None and regions is None:

@@ -278,9 +278,6 @@ class MotionSearchRunner(threading.Thread):
         frame_width = camera_config.detect.width
         frame_height = camera_config.detect.height

-        if frame_width is None or frame_height is None:
-            raise ValueError(f"Camera {camera_name} detect dimensions not configured")
-
         # Create polygon mask
         polygon_mask = create_polygon_mask(
             self.job.polygon_points, frame_width, frame_height

@@ -418,13 +415,11 @@
                 if self._should_stop():
                     break

-                rec_start: float = recording.start_time  # type: ignore[assignment]
-                rec_end: float = recording.end_time  # type: ignore[assignment]
                 future = executor.submit(
                     self._process_recording_for_motion,
-                    str(recording.path),
-                    rec_start,
-                    rec_end,
+                    recording.path,
+                    recording.start_time,
+                    recording.end_time,
                     self.job.start_time_range,
                     self.job.end_time_range,
                     polygon_mask,

@@ -529,12 +524,10 @@
                     break

                 try:
-                    rec_start: float = recording.start_time  # type: ignore[assignment]
-                    rec_end: float = recording.end_time  # type: ignore[assignment]
                     results, frames = self._process_recording_for_motion(
-                        str(recording.path),
-                        rec_start,
-                        rec_end,
+                        recording.path,
+                        recording.start_time,
+                        recording.end_time,
                         self.job.start_time_range,
                         self.job.end_time_range,
                         polygon_mask,

@@ -679,9 +672,7 @@
                 # Handle frame dimension changes
                 if gray.shape != polygon_mask.shape:
                     resized_mask = cv2.resize(
-                        polygon_mask,
-                        (gray.shape[1], gray.shape[0]),
-                        interpolation=cv2.INTER_NEAREST,
+                        polygon_mask, (gray.shape[1], gray.shape[0]), cv2.INTER_NEAREST
                     )
                     current_bbox = cv2.boundingRect(resized_mask)
                 else:

@@ -707,7 +698,7 @@
                 )

                 if prev_frame_gray is not None:
-                    diff = cv2.absdiff(prev_frame_gray, masked_gray)  # type: ignore[unreachable]
+                    diff = cv2.absdiff(prev_frame_gray, masked_gray)
                     diff_blurred = cv2.GaussianBlur(diff, (3, 3), 0)
                     _, thresh = cv2.threshold(
                         diff_blurred, threshold, 255, cv2.THRESH_BINARY

@@ -834,7 +825,7 @@ def get_motion_search_job(job_id: str) -> Optional[MotionSearchJob]:
     if job_entry:
         return job_entry[0]
     # Check completed jobs via manager
-    return cast(Optional[MotionSearchJob], get_job_by_id("motion_search", job_id))
+    return get_job_by_id("motion_search", job_id)


 def cancel_motion_search_job(job_id: str) -> bool:

View File

@@ -54,9 +54,9 @@ class VLMWatchRunner(threading.Thread):
         job: VLMWatchJob,
         config: FrigateConfig,
         cancel_event: threading.Event,
-        frame_processor: Any,
-        genai_manager: Any,
-        dispatcher: Any,
+        frame_processor,
+        genai_manager,
+        dispatcher,
     ) -> None:
         super().__init__(daemon=True, name=f"vlm_watch_{job.id}")
         self.job = job

@@ -226,12 +226,9 @@
             remaining = deadline - time.time()
             if remaining <= 0:
                 break
-            result = self.detection_subscriber.check_for_update(
+            topic, payload = self.detection_subscriber.check_for_update(
                 timeout=min(1.0, remaining)
             )
-            if result is None:
-                continue
-            topic, payload = result
             if topic is None or payload is None:
                 continue
             # payload = (camera, frame_name, frame_time, tracked_objects, motion_boxes, regions)

@@ -331,9 +328,9 @@ def start_vlm_watch_job(
     condition: str,
     max_duration_minutes: int,
     config: FrigateConfig,
-    frame_processor: Any,
-    genai_manager: Any,
-    dispatcher: Any,
+    frame_processor,
+    genai_manager,
+    dispatcher,
     labels: list[str] | None = None,
     zones: list[str] | None = None,
 ) -> str:

View File

@@ -13,10 +13,10 @@ class MotionDetector(ABC):
         frame_shape: Tuple[int, int, int],
         config: MotionConfig,
         fps: int,
-        improve_contrast: bool,
-        threshold: int,
-        contour_area: int | None,
-    ) -> None:
+        improve_contrast,
+        threshold,
+        contour_area,
+    ):
         pass

     @abstractmethod

@@ -25,7 +25,7 @@
         pass

     @abstractmethod
-    def is_calibrating(self) -> bool:
+    def is_calibrating(self):
         """Return if motion is recalibrating."""
         pass

@@ -35,6 +35,6 @@
         pass

     @abstractmethod
-    def stop(self) -> None:
+    def stop(self):
         """Stop any ongoing work and processes."""
         pass

View File

@@ -1,9 +1,7 @@
-from typing import Any
-
 import cv2
 import numpy as np

-from frigate.config.config import RuntimeMotionConfig
+from frigate.config import MotionConfig
 from frigate.motion import MotionDetector
 from frigate.util.image import grab_cv2_contours

@@ -11,20 +9,19 @@ from frigate.util.image import grab_cv2_contours
 class FrigateMotionDetector(MotionDetector):
     def __init__(
         self,
-        frame_shape: tuple[int, ...],
-        config: RuntimeMotionConfig,
+        frame_shape,
+        config: MotionConfig,
         fps: int,
-        improve_contrast: Any,
-        threshold: Any,
-        contour_area: Any,
-    ) -> None:
+        improve_contrast,
+        threshold,
+        contour_area,
+    ):
         self.config = config
         self.frame_shape = frame_shape
-        frame_height = config.frame_height or frame_shape[0]
-        self.resize_factor = frame_shape[0] / frame_height
+        self.resize_factor = frame_shape[0] / config.frame_height
         self.motion_frame_size = (
-            frame_height,
-            frame_height * frame_shape[1] // frame_shape[0],
+            config.frame_height,
+            config.frame_height * frame_shape[1] // frame_shape[0],
         )
         self.avg_frame = np.zeros(self.motion_frame_size, np.float32)
         self.avg_delta = np.zeros(self.motion_frame_size, np.float32)

@@ -41,10 +38,10 @@ class FrigateMotionDetector(MotionDetector):
         self.threshold = threshold
         self.contour_area = contour_area

-    def is_calibrating(self) -> bool:
+    def is_calibrating(self):
         return False

-    def detect(self, frame: np.ndarray) -> list:
+    def detect(self, frame):
         motion_boxes = []

         gray = frame[0 : self.frame_shape[0], 0 : self.frame_shape[1]]

@@ -102,7 +99,7 @@
         # dilate the thresholded image to fill in holes, then find contours
         # on thresholded image
-        thresh_dilated = cv2.dilate(thresh, None, iterations=2)  # type: ignore[call-overload]
+        thresh_dilated = cv2.dilate(thresh, None, iterations=2)
         contours = cv2.findContours(
             thresh_dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
         )

View File

@@ -1,12 +1,11 @@
 import logging
-from typing import Optional

 import cv2
 import numpy as np
 from scipy.ndimage import gaussian_filter

 from frigate.camera import PTZMetrics
-from frigate.config.config import RuntimeMotionConfig
+from frigate.config import MotionConfig
 from frigate.motion import MotionDetector
 from frigate.util.image import grab_cv2_contours

@@ -16,23 +15,22 @@ class ImprovedMotionDetector(MotionDetector):
     def __init__(
         self,
-        frame_shape: tuple[int, ...],
-        config: RuntimeMotionConfig,
+        frame_shape,
+        config: MotionConfig,
         fps: int,
-        ptz_metrics: Optional[PTZMetrics] = None,
-        name: str = "improved",
-        blur_radius: int = 1,
-        interpolation: int = cv2.INTER_NEAREST,
-        contrast_frame_history: int = 50,
-    ) -> None:
+        ptz_metrics: PTZMetrics = None,
+        name="improved",
+        blur_radius=1,
+        interpolation=cv2.INTER_NEAREST,
+        contrast_frame_history=50,
+    ):
         self.name = name
         self.config = config
         self.frame_shape = frame_shape
-        frame_height = config.frame_height or frame_shape[0]
-        self.resize_factor = frame_shape[0] / frame_height
+        self.resize_factor = frame_shape[0] / config.frame_height
         self.motion_frame_size = (
-            frame_height,
-            frame_height * frame_shape[1] // frame_shape[0],
+            config.frame_height,
+            config.frame_height * frame_shape[1] // frame_shape[0],
         )
         self.avg_frame = np.zeros(self.motion_frame_size, np.float32)
         self.motion_frame_count = 0

@@ -46,20 +44,20 @@
         self.contrast_values[:, 1:2] = 255
         self.contrast_values_index = 0
         self.ptz_metrics = ptz_metrics
-        self.last_stop_time: float | None = None
+        self.last_stop_time = None

-    def is_calibrating(self) -> bool:
+    def is_calibrating(self):
         return self.calibrating

-    def detect(self, frame: np.ndarray) -> list[tuple[int, int, int, int]]:
-        motion_boxes: list[tuple[int, int, int, int]] = []
+    def detect(self, frame):
+        motion_boxes = []

         if not self.config.enabled:
             return motion_boxes

         # if ptz motor is moving from autotracking, quickly return
         # a single box that is 80% of the frame
-        if self.ptz_metrics is not None and (
+        if (
             self.ptz_metrics.autotracker_enabled.value
             and not self.ptz_metrics.motor_stopped.is_set()
         ):

@@ -132,19 +130,19 @@
         # dilate the thresholded image to fill in holes, then find contours
         # on thresholded image
-        thresh_dilated = cv2.dilate(thresh, None, iterations=1)  # type: ignore[call-overload]
+        thresh_dilated = cv2.dilate(thresh, None, iterations=1)
         contours = cv2.findContours(
             thresh_dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
         )
         contours = grab_cv2_contours(contours)

         # loop over the contours
-        total_contour_area: float = 0
+        total_contour_area = 0
         for c in contours:
             # if the contour is big enough, count it as motion
             contour_area = cv2.contourArea(c)
             total_contour_area += contour_area
-            if contour_area > (self.config.contour_area or 0):
+            if contour_area > self.config.contour_area:
                 x, y, w, h = cv2.boundingRect(c)
                 motion_boxes.append(
                     (

@@ -161,7 +159,7 @@
         # check if the motor has just stopped from autotracking
         # if so, reassign the average to the current frame so we begin with a new baseline
-        if self.ptz_metrics is not None and (
+        if (
             # ensure we only do this for cameras with autotracking enabled
             self.ptz_metrics.autotracker_enabled.value
             and self.ptz_metrics.motor_stopped.is_set()

View File

@@ -41,24 +41,6 @@ ignore_errors = false
 [mypy-frigate.events]
 ignore_errors = false

-[mypy-frigate.genai.*]
-ignore_errors = false
-
-[mypy-frigate.jobs.*]
-ignore_errors = false
-
-[mypy-frigate.motion.*]
-ignore_errors = false
-
-[mypy-frigate.object_detection.*]
-ignore_errors = false
-
-[mypy-frigate.output.*]
-ignore_errors = false
-
-[mypy-frigate.ptz]
-ignore_errors = false
-
 [mypy-frigate.log]
 ignore_errors = false

View File

@@ -7,7 +7,6 @@ from abc import ABC, abstractmethod
 from collections import deque
 from multiprocessing import Queue, Value
 from multiprocessing.synchronize import Event as MpEvent
-from typing import Any, Optional

 import numpy as np
 import zmq

@@ -35,25 +34,26 @@
 class ObjectDetector(ABC):
     @abstractmethod
-    def detect(self, tensor_input: np.ndarray, threshold: float = 0.4) -> list:
+    def detect(self, tensor_input, threshold: float = 0.4):
         pass


 class BaseLocalDetector(ObjectDetector):
     def __init__(
         self,
-        detector_config: Optional[BaseDetectorConfig] = None,
-        labels: Optional[str] = None,
-        stop_event: Optional[MpEvent] = None,
-    ) -> None:
+        detector_config: BaseDetectorConfig = None,
+        labels: str = None,
+        stop_event: MpEvent = None,
+    ):
         self.fps = EventsPerSecond()
         if labels is None:
-            self.labels: dict[int, str] = {}
+            self.labels = {}
         else:
             self.labels = load_labels(labels)

-        if detector_config and detector_config.model:
+        if detector_config:
             self.input_transform = tensor_transform(detector_config.model.input_tensor)
             self.dtype = detector_config.model.input_dtype
         else:
             self.input_transform = None

@@ -77,10 +77,10 @@ class BaseLocalDetector(ObjectDetector):
         return tensor_input

-    def detect(self, tensor_input: np.ndarray, threshold: float = 0.4) -> list:
+    def detect(self, tensor_input: np.ndarray, threshold=0.4):
         detections = []

-        raw_detections = self.detect_raw(tensor_input)  # type: ignore[attr-defined]
+        raw_detections = self.detect_raw(tensor_input)

         for d in raw_detections:
             if int(d[0]) < 0 or int(d[0]) >= len(self.labels):

@@ -96,28 +96,28 @@
 class LocalObjectDetector(BaseLocalDetector):
-    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
+    def detect_raw(self, tensor_input: np.ndarray):
         tensor_input = self._transform_input(tensor_input)
-        return self.detect_api.detect_raw(tensor_input=tensor_input)  # type: ignore[no-any-return]
+        return self.detect_api.detect_raw(tensor_input=tensor_input)


 class AsyncLocalObjectDetector(BaseLocalDetector):
-    def async_send_input(self, tensor_input: np.ndarray, connection_id: str) -> None:
+    def async_send_input(self, tensor_input: np.ndarray, connection_id: str):
         tensor_input = self._transform_input(tensor_input)
-        self.detect_api.send_input(connection_id, tensor_input)
+        return self.detect_api.send_input(connection_id, tensor_input)

-    def async_receive_output(self) -> Any:
+    def async_receive_output(self):
         return self.detect_api.receive_output()


 class DetectorRunner(FrigateProcess):
     def __init__(
         self,
-        name: str,
+        name,
         detection_queue: Queue,
         cameras: list[str],
-        avg_speed: Any,
-        start_time: Any,
+        avg_speed: Value,
+        start_time: Value,
         config: FrigateConfig,
         detector_config: BaseDetectorConfig,
         stop_event: MpEvent,

@@ -129,11 +129,11 @@ class DetectorRunner(FrigateProcess):
         self.start_time = start_time
         self.config = config
         self.detector_config = detector_config
-        self.outputs: dict[str, Any] = {}
+        self.outputs: dict = {}

-    def create_output_shm(self, name: str) -> None:
+    def create_output_shm(self, name: str):
         out_shm = UntrackedSharedMemory(name=f"out-{name}", create=False)
-        out_np: np.ndarray = np.ndarray((20, 6), dtype=np.float32, buffer=out_shm.buf)
+        out_np = np.ndarray((20, 6), dtype=np.float32, buffer=out_shm.buf)
         self.outputs[name] = {"shm": out_shm, "np": out_np}

     def run(self) -> None:

@@ -155,8 +155,8 @@
                 connection_id,
                 (
                     1,
-                    self.detector_config.model.height,  # type: ignore[union-attr]
-                    self.detector_config.model.width,  # type: ignore[union-attr]
+                    self.detector_config.model.height,
+                    self.detector_config.model.width,
                     3,
                 ),
             )

@@ -187,11 +187,11 @@
 class AsyncDetectorRunner(FrigateProcess):
     def __init__(
         self,
-        name: str,
+        name,
         detection_queue: Queue,
         cameras: list[str],
-        avg_speed: Any,
-        start_time: Any,
+        avg_speed: Value,
+        start_time: Value,
         config: FrigateConfig,
         detector_config: BaseDetectorConfig,
         stop_event: MpEvent,

@@ -203,15 +203,15 @@
         self.start_time = start_time
         self.config = config
         self.detector_config = detector_config
-        self.outputs: dict[str, Any] = {}
+        self.outputs: dict = {}
         self._frame_manager: SharedMemoryFrameManager | None = None
         self._publisher: ObjectDetectorPublisher | None = None
         self._detector: AsyncLocalObjectDetector | None = None
-        self.send_times: deque[float] = deque()
+        self.send_times = deque()

-    def create_output_shm(self, name: str) -> None:
+    def create_output_shm(self, name: str):
         out_shm = UntrackedSharedMemory(name=f"out-{name}", create=False)
-        out_np: np.ndarray = np.ndarray((20, 6), dtype=np.float32, buffer=out_shm.buf)
+        out_np = np.ndarray((20, 6), dtype=np.float32, buffer=out_shm.buf)
         self.outputs[name] = {"shm": out_shm, "np": out_np}

     def _detect_worker(self) -> None:

@@ -222,13 +222,12 @@
             except queue.Empty:
                 continue

-            assert self._frame_manager is not None
             input_frame = self._frame_manager.get(
                 connection_id,
                 (
                     1,
-                    self.detector_config.model.height,  # type: ignore[union-attr]
-                    self.detector_config.model.width,  # type: ignore[union-attr]
+                    self.detector_config.model.height,
+                    self.detector_config.model.width,
                     3,
                 ),
             )

@@ -239,13 +238,11 @@
             # mark start time and send to accelerator
             self.send_times.append(time.perf_counter())
-            assert self._detector is not None
             self._detector.async_send_input(input_frame, connection_id)

     def _result_worker(self) -> None:
         logger.info("Starting Result Worker Thread")
         while not self.stop_event.is_set():
-            assert self._detector is not None
             connection_id, detections = self._detector.async_receive_output()

             # Handle timeout case (queue.Empty) - just continue

@@ -259,7 +256,6 @@
             duration = time.perf_counter() - ts

             # release input buffer
-            assert self._frame_manager is not None
             self._frame_manager.close(connection_id)

             if connection_id not in self.outputs:

@@ -268,7 +264,6 @@
             # write results and publish
             if detections is not None:
                 self.outputs[connection_id]["np"][:] = detections[:]
-                assert self._publisher is not None
                 self._publisher.publish(connection_id)

             # update timers

@@ -335,14 +330,11 @@ class ObjectDetectProcess:
         self.stop_event = stop_event
         self.start_or_restart()

-    def stop(self) -> None:
+    def stop(self):
         # if the process has already exited on its own, just return
         if self.detect_process and self.detect_process.exitcode:
             return

-        if self.detect_process is None:
-            return
-
         logging.info("Waiting for detection process to exit gracefully...")
         self.detect_process.join(timeout=30)
         if self.detect_process.exitcode is None:

@@ -351,8 +343,8 @@
             self.detect_process.join()
         logging.info("Detection process has exited...")

-    def start_or_restart(self) -> None:
-        self.detection_start.value = 0.0  # type: ignore[attr-defined]
+    def start_or_restart(self):
+        self.detection_start.value = 0.0
         if (self.detect_process is not None) and self.detect_process.is_alive():
             self.stop()

@@ -397,19 +389,17 @@ class RemoteObjectDetector:
         self.detection_queue = detection_queue
         self.stop_event = stop_event
         self.shm = UntrackedSharedMemory(name=self.name, create=False)
-        self.np_shm: np.ndarray = np.ndarray(
+        self.np_shm = np.ndarray(
             (1, model_config.height, model_config.width, 3),
             dtype=np.uint8,
             buffer=self.shm.buf,
         )
         self.out_shm = UntrackedSharedMemory(name=f"out-{self.name}", create=False)
-        self.out_np_shm: np.ndarray = np.ndarray(
-            (20, 6), dtype=np.float32, buffer=self.out_shm.buf
-        )
+        self.out_np_shm = np.ndarray((20, 6), dtype=np.float32, buffer=self.out_shm.buf)
         self.detector_subscriber = ObjectDetectorSubscriber(name)

-    def detect(self, tensor_input: np.ndarray, threshold: float = 0.4) -> list:
-        detections: list = []
+    def detect(self, tensor_input, threshold=0.4):
+        detections = []

         if self.stop_event.is_set():
             return detections

@@ -441,7 +431,7 @@
         self.fps.update()
         return detections

-    def cleanup(self) -> None:
+    def cleanup(self):
         self.detector_subscriber.stop()
self.shm.unlink() self.shm.unlink()
self.out_shm.unlink() self.out_shm.unlink()
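Both runner classes above pass frames in and results out through named shared memory, with each connection owning a fixed `(20, 6)` float32 result block (20 detection slots of 6 values each). Below is a minimal standalone sketch of that layout using the stdlib `multiprocessing.shared_memory.SharedMemory` in place of Frigate's `UntrackedSharedMemory`; the buffer name and row contents are illustrative only.

```python
from multiprocessing import shared_memory

import numpy as np

# Writer side: create a buffer sized for 20 detections of 6 float32 values,
# the same (20, 6) shape the runners above overlay on their "out-" buffers.
shm = shared_memory.SharedMemory(name="out-demo", create=True, size=20 * 6 * 4)
out_np = np.ndarray((20, 6), dtype=np.float32, buffer=shm.buf)
out_np[0] = [1.0, 0.87, 0.1, 0.1, 0.4, 0.5]  # e.g. label, score, box coords

# Reader side: attach to the existing buffer by name, without creating it.
reader = shared_memory.SharedMemory(name="out-demo", create=False)
view = np.ndarray((20, 6), dtype=np.float32, buffer=reader.buf)
print(view[0])  # sees the writer's row with no copying

reader.close()
shm.close()
shm.unlink()
```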

View File

@@ -13,10 +13,10 @@ class RequestStore:
     A thread-safe hash-based response store that handles creating requests.
     """

-    def __init__(self) -> None:
+    def __init__(self):
         self.request_counter = 0
         self.request_counter_lock = threading.Lock()
-        self.input_queue: queue.Queue[tuple[int, ndarray]] = queue.Queue()
+        self.input_queue = queue.Queue()

     def __get_request_id(self) -> int:
         with self.request_counter_lock:
@@ -45,19 +45,17 @@ class ResponseStore:
     their request's result appears.
     """

-    def __init__(self) -> None:
-        self.responses: dict[
-            int, ndarray
-        ] = {}  # Maps request_id -> (original_input, infer_results)
+    def __init__(self):
+        self.responses = {}  # Maps request_id -> (original_input, infer_results)
         self.lock = threading.Lock()
         self.cond = threading.Condition(self.lock)

-    def put(self, request_id: int, response: ndarray) -> None:
+    def put(self, request_id: int, response: ndarray):
         with self.cond:
             self.responses[request_id] = response
             self.cond.notify_all()

-    def get(self, request_id: int, timeout: float | None = None) -> ndarray:
+    def get(self, request_id: int, timeout=None) -> ndarray:
         with self.cond:
             if not self.cond.wait_for(
                 lambda: request_id in self.responses, timeout=timeout
@@ -67,9 +65,7 @@ class ResponseStore:
         return self.responses.pop(request_id)

-def tensor_transform(
-    desired_shape: InputTensorEnum,
-) -> tuple[int, int, int, int] | None:
+def tensor_transform(desired_shape: InputTensorEnum):
     # Currently this function only supports BHWC permutations
     if desired_shape == InputTensorEnum.nhwc:
         return None
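As the docstring above says, `ResponseStore.get` blocks on a condition variable until a result for the caller's request id arrives, and `put` wakes all waiters so each can re-check for its own id. A minimal sketch of that rendezvous, assuming the `ResponseStore` defined above is importable (the timeout branch elided by the hunk presumably raises or returns on expiry):

```python
import threading

import numpy as np

store = ResponseStore()  # the class shown above

def inference_thread() -> None:
    # Simulates the accelerator thread publishing a result for request 7.
    store.put(7, np.zeros((20, 6), dtype=np.float32))

threading.Thread(target=inference_thread).start()

# Blocks until put() notifies and request id 7 is present, then pops it.
result = store.get(7, timeout=1.0)
print(result.shape)  # (20, 6)
```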

View File

@@ -4,13 +4,13 @@ import datetime
 import glob
 import logging
 import math
+import multiprocessing as mp
 import os
 import queue
 import subprocess as sp
 import threading
 import time
 import traceback
-from multiprocessing.synchronize import Event as MpEvent
 from typing import Any, Optional

 import cv2
@@ -74,25 +74,25 @@ class Canvas:
         self,
         canvas_width: int,
         canvas_height: int,
-        scaling_factor: float,
+        scaling_factor: int,
     ) -> None:
         self.scaling_factor = scaling_factor

         gcd = math.gcd(canvas_width, canvas_height)
         self.aspect = get_standard_aspect_ratio(
-            int(canvas_width / gcd), int(canvas_height / gcd)
+            (canvas_width / gcd), (canvas_height / gcd)
         )
         self.width = canvas_width
-        self.height: float = (self.width * self.aspect[1]) / self.aspect[0]
-        self.coefficient_cache: dict[int, float] = {}
+        self.height = (self.width * self.aspect[1]) / self.aspect[0]
+        self.coefficient_cache: dict[int, int] = {}
         self.aspect_cache: dict[str, tuple[int, int]] = {}

-    def get_aspect(self, coefficient: float) -> tuple[float, float]:
+    def get_aspect(self, coefficient: int) -> tuple[int, int]:
         return (self.aspect[0] * coefficient, self.aspect[1] * coefficient)

-    def get_coefficient(self, camera_count: int) -> float:
+    def get_coefficient(self, camera_count: int) -> int:
         return self.coefficient_cache.get(camera_count, self.scaling_factor)

-    def set_coefficient(self, camera_count: int, coefficient: float) -> None:
+    def set_coefficient(self, camera_count: int, coefficient: int) -> None:
         self.coefficient_cache[camera_count] = coefficient

     def get_camera_aspect(
@@ -105,7 +105,7 @@ class Canvas:
         gcd = math.gcd(camera_width, camera_height)
         camera_aspect = get_standard_aspect_ratio(
-            int(camera_width / gcd), int(camera_height / gcd)
+            camera_width / gcd, camera_height / gcd
         )
         self.aspect_cache[cam_name] = camera_aspect
         return camera_aspect
@@ -116,7 +116,7 @@ class FFMpegConverter(threading.Thread):
         self,
         ffmpeg: FfmpegConfig,
         input_queue: queue.Queue,
-        stop_event: MpEvent,
+        stop_event: mp.Event,
         in_width: int,
         in_height: int,
         out_width: int,
@@ -128,7 +128,7 @@ class FFMpegConverter(threading.Thread):
         self.camera = "birdseye"
         self.input_queue = input_queue
         self.stop_event = stop_event
-        self.bd_pipe: int | None = None
+        self.bd_pipe = None

         if birdseye_rtsp:
             self.recreate_birdseye_pipe()
@@ -181,8 +181,7 @@ class FFMpegConverter(threading.Thread):
             os.close(stdin)
             self.reading_birdseye = False

-    def __write(self, b: bytes) -> None:
-        assert self.process.stdin is not None
+    def __write(self, b) -> None:
         self.process.stdin.write(b)

         if self.bd_pipe:
@@ -201,13 +200,13 @@ class FFMpegConverter(threading.Thread):
             return

-    def read(self, length: int) -> Any:
+    def read(self, length):
         try:
-            return self.process.stdout.read1(length)  # type: ignore[union-attr]
+            return self.process.stdout.read1(length)
         except ValueError:
             return False

-    def exit(self) -> None:
+    def exit(self):
         if self.bd_pipe:
             os.close(self.bd_pipe)
@@ -234,8 +233,8 @@ class BroadcastThread(threading.Thread):
         self,
         camera: str,
         converter: FFMpegConverter,
-        websocket_server: Any,
-        stop_event: MpEvent,
+        websocket_server,
+        stop_event: mp.Event,
     ):
         super().__init__()
         self.camera = camera
@@ -243,7 +242,7 @@ class BroadcastThread(threading.Thread):
         self.websocket_server = websocket_server
         self.stop_event = stop_event

-    def run(self) -> None:
+    def run(self):
         while not self.stop_event.is_set():
             buf = self.converter.read(65536)
             if buf:
@@ -271,16 +270,16 @@ class BirdsEyeFrameManager:
     def __init__(
         self,
         config: FrigateConfig,
-        stop_event: MpEvent,
+        stop_event: mp.Event,
     ):
         self.config = config
         width, height = get_canvas_shape(config.birdseye.width, config.birdseye.height)
         self.frame_shape = (height, width)
         self.yuv_shape = (height * 3 // 2, width)
-        self.frame: np.ndarray = np.ndarray(self.yuv_shape, dtype=np.uint8)
+        self.frame = np.ndarray(self.yuv_shape, dtype=np.uint8)
         self.canvas = Canvas(width, height, config.birdseye.layout.scaling_factor)
         self.stop_event = stop_event
-        self.last_refresh_time: float = 0
+        self.last_refresh_time = 0

         # initialize the frame as black and with the Frigate logo
         self.blank_frame = np.zeros(self.yuv_shape, np.uint8)
@@ -324,15 +323,15 @@ class BirdsEyeFrameManager:
         self.frame[:] = self.blank_frame

-        self.cameras: dict[str, Any] = {}
+        self.cameras = {}
         for camera in self.config.cameras.keys():
             self.add_camera(camera)

-        self.camera_layout: list[Any] = []
-        self.active_cameras: set[str] = set()
+        self.camera_layout = []
+        self.active_cameras = set()
         self.last_output_time = 0.0

-    def add_camera(self, cam: str) -> None:
+    def add_camera(self, cam: str):
         """Add a camera to self.cameras with the correct structure."""
         settings = self.config.cameras[cam]
         # precalculate the coordinates for all the channels
@@ -362,21 +361,16 @@ class BirdsEyeFrameManager:
             },
         }

-    def remove_camera(self, cam: str) -> None:
+    def remove_camera(self, cam: str):
         """Remove a camera from self.cameras."""
         if cam in self.cameras:
             del self.cameras[cam]

-    def clear_frame(self) -> None:
+    def clear_frame(self):
         logger.debug("Clearing the birdseye frame")
         self.frame[:] = self.blank_frame

-    def copy_to_position(
-        self,
-        position: Any,
-        camera: Optional[str] = None,
-        frame: Optional[np.ndarray] = None,
-    ) -> None:
+    def copy_to_position(self, position, camera=None, frame: np.ndarray = None):
         if camera is None:
             frame = None
             channel_dims = None
@@ -395,9 +389,7 @@ class BirdsEyeFrameManager:
             channel_dims,
         )

-    def camera_active(
-        self, mode: Any, object_box_count: int, motion_box_count: int
-    ) -> bool:
+    def camera_active(self, mode, object_box_count, motion_box_count):
         if mode == BirdseyeModeEnum.continuous:
             return True
@@ -407,8 +399,6 @@ class BirdsEyeFrameManager:
         if mode == BirdseyeModeEnum.objects and object_box_count > 0:
             return True

-        return False
-
     def get_camera_coordinates(self) -> dict[str, dict[str, int]]:
         """Return the coordinates of each camera in the current layout."""
         coordinates = {}
@@ -461,7 +451,7 @@ class BirdsEyeFrameManager:
                         - self.cameras[active_camera]["last_active_frame"]
                     ),
                 )
-                active_cameras = set(limited_active_cameras[:max_cameras])
+                active_cameras = limited_active_cameras[:max_cameras]
                 max_camera_refresh = True
                 self.last_refresh_time = now
@@ -520,7 +510,7 @@ class BirdsEyeFrameManager:
             # center camera view in canvas and ensure that it fits
             if scaled_width < self.canvas.width:
-                coefficient: float = 1
+                coefficient = 1
                 x_offset = int((self.canvas.width - scaled_width) / 2)
             else:
                 coefficient = self.canvas.width / scaled_width
@@ -567,7 +557,7 @@ class BirdsEyeFrameManager:
                 calculating = False

             self.canvas.set_coefficient(len(active_cameras), coefficient)
-            self.camera_layout = layout_candidate or []
+            self.camera_layout = layout_candidate
             frame_changed = True

         # Draw the layout
@@ -587,12 +577,10 @@ class BirdsEyeFrameManager:
         self,
         cameras_to_add: list[str],
         coefficient: float,
-    ) -> Optional[list[list[Any]]]:
+    ) -> tuple[Any]:
         """Calculate the optimal layout for 2+ cameras."""

-        def map_layout(
-            camera_layout: list[list[Any]], row_height: int
-        ) -> tuple[int, int, Optional[list[list[Any]]]]:
+        def map_layout(camera_layout: list[list[Any]], row_height: int):
             """Map the calculated layout."""
             candidate_layout = []
             starting_x = 0
@@ -789,11 +777,11 @@ class Birdseye:
     def __init__(
         self,
         config: FrigateConfig,
-        stop_event: MpEvent,
-        websocket_server: Any,
+        stop_event: mp.Event,
+        websocket_server,
     ) -> None:
         self.config = config
-        self.input: queue.Queue[bytes] = queue.Queue(maxsize=10)
+        self.input = queue.Queue(maxsize=10)
         self.converter = FFMpegConverter(
             config.ffmpeg,
             self.input,
@@ -818,7 +806,7 @@ class Birdseye:
         )

         if config.birdseye.restream:
-            self.birdseye_buffer: Any = self.frame_manager.create(
+            self.birdseye_buffer = self.frame_manager.create(
                 "birdseye",
                 self.birdseye_manager.yuv_shape[0] * self.birdseye_manager.yuv_shape[1],
             )
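The `Canvas` arithmetic above reduces the canvas dimensions by their greatest common divisor to get a base aspect ratio, then re-derives the canvas height from that ratio; one side of the diff wraps the divisions in `int()` because true division produces floats. A standalone sketch of the math, with a stand-in `get_standard_aspect_ratio` that simply returns its inputs (the real helper snaps near-misses to standard ratios such as 16:9):

```python
import math

def get_standard_aspect_ratio(width: int, height: int) -> tuple[int, int]:
    # Stand-in only: Frigate's helper maps close ratios to standard ones.
    return (width, height)

canvas_width, canvas_height = 1280, 720
gcd = math.gcd(canvas_width, canvas_height)  # 80
aspect = get_standard_aspect_ratio(canvas_width // gcd, canvas_height // gcd)

# Height is re-derived from the (possibly snapped) aspect, as in Canvas.__init__.
height = (canvas_width * aspect[1]) / aspect[0]
print(aspect, height)  # (16, 9) 720.0
```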

View File

@@ -1,11 +1,10 @@
 """Handle outputting individual cameras via jsmpeg."""

 import logging
+import multiprocessing as mp
 import queue
 import subprocess as sp
 import threading
-from multiprocessing.synchronize import Event as MpEvent
-from typing import Any

 from frigate.config import CameraConfig, FfmpegConfig
@@ -18,7 +17,7 @@ class FFMpegConverter(threading.Thread):
         camera: str,
         ffmpeg: FfmpegConfig,
         input_queue: queue.Queue,
-        stop_event: MpEvent,
+        stop_event: mp.Event,
         in_width: int,
         in_height: int,
         out_width: int,
@@ -65,17 +64,16 @@ class FFMpegConverter(threading.Thread):
             start_new_session=True,
         )

-    def __write(self, b: bytes) -> None:
-        assert self.process.stdin is not None
+    def __write(self, b) -> None:
         self.process.stdin.write(b)

-    def read(self, length: int) -> Any:
+    def read(self, length):
         try:
-            return self.process.stdout.read1(length)  # type: ignore[union-attr]
+            return self.process.stdout.read1(length)
         except ValueError:
             return False

-    def exit(self) -> None:
+    def exit(self):
         self.process.terminate()
         try:
@@ -100,8 +98,8 @@ class BroadcastThread(threading.Thread):
         self,
         camera: str,
         converter: FFMpegConverter,
-        websocket_server: Any,
-        stop_event: MpEvent,
+        websocket_server,
+        stop_event: mp.Event,
     ):
         super().__init__()
         self.camera = camera
@@ -109,7 +107,7 @@ class BroadcastThread(threading.Thread):
         self.websocket_server = websocket_server
         self.stop_event = stop_event

-    def run(self) -> None:
+    def run(self):
         while not self.stop_event.is_set():
             buf = self.converter.read(65536)
             if buf:
@@ -135,15 +133,15 @@ class BroadcastThread(threading.Thread):
 class JsmpegCamera:
     def __init__(
-        self, config: CameraConfig, stop_event: MpEvent, websocket_server: Any
+        self, config: CameraConfig, stop_event: mp.Event, websocket_server
     ) -> None:
         self.config = config
-        self.input: queue.Queue[bytes] = queue.Queue(maxsize=config.detect.fps)
+        self.input = queue.Queue(maxsize=config.detect.fps)
         width = int(
             config.live.height * (config.frame_shape[1] / config.frame_shape[0])
         )
         self.converter = FFMpegConverter(
-            config.name or "",
+            config.name,
             config.ffmpeg,
             self.input,
             stop_event,
@@ -154,13 +152,13 @@ class JsmpegCamera:
             config.live.quality,
         )
         self.broadcaster = BroadcastThread(
-            config.name or "", self.converter, websocket_server, stop_event
+            config.name, self.converter, websocket_server, stop_event
         )

         self.converter.start()
         self.broadcaster.start()

-    def write_frame(self, frame_bytes: bytes) -> None:
+    def write_frame(self, frame_bytes) -> None:
         try:
             self.input.put_nowait(frame_bytes)
         except queue.Full:

View File

@@ -61,12 +61,6 @@ def check_disabled_camera_update(
         # last camera update was more than 1 second ago
         # need to send empty data to birdseye because current
         # frame is now out of date
-        cam_width = config.cameras[camera].detect.width
-        cam_height = config.cameras[camera].detect.height
-
-        if cam_width is None or cam_height is None:
-            raise ValueError(f"Camera {camera} detect dimensions not configured")
-
         if birdseye and offline_time < 10:
             # we only need to send blank frames to birdseye at the beginning of a camera being offline
             birdseye.write_data(
@@ -74,7 +68,10 @@ def check_disabled_camera_update(
                 [],
                 [],
                 now,
-                get_blank_yuv_frame(cam_width, cam_height),
+                get_blank_yuv_frame(
+                    config.cameras[camera].detect.width,
+                    config.cameras[camera].detect.height,
+                ),
             )

     if not has_enabled_camera and birdseye:
@@ -176,7 +173,7 @@ class OutputProcess(FrigateProcess):
                 birdseye_config_subscriber.check_for_update()
             )

-            if update_topic is not None and birdseye_config is not None:
+            if update_topic is not None:
                 previous_global_mode = self.config.birdseye.mode
                 self.config.birdseye = birdseye_config
@@ -201,10 +198,7 @@ class OutputProcess(FrigateProcess):
                         birdseye,
                     )

-            _result = detection_subscriber.check_for_update(timeout=1)
-
-            if _result is None:
-                continue
-
-            (topic, data) = _result
+            (topic, data) = detection_subscriber.check_for_update(timeout=1)

             now = datetime.datetime.now().timestamp()

             if now - last_disabled_cam_check > 5:
@@ -214,7 +208,7 @@ class OutputProcess(FrigateProcess):
                     self.config, birdseye, preview_recorders, preview_write_times
                 )

-            if not topic or data is None:
+            if not topic:
                 continue

             (
@@ -268,16 +262,12 @@ class OutputProcess(FrigateProcess):
                 jsmpeg_cameras[camera].write_frame(frame.tobytes())

             # send output data to birdseye if websocket is connected or restreaming
-            if (
-                self.config.birdseye.enabled
-                and birdseye is not None
-                and (
+            if self.config.birdseye.enabled and (
                 self.config.birdseye.restream
                 or any(
                     ws.environ["PATH_INFO"].endswith("birdseye")
                     for ws in websocket_server.manager
                 )
-            )
             ):
                 birdseye.write_data(
                     camera,
@@ -292,12 +282,9 @@ class OutputProcess(FrigateProcess):
         move_preview_frames("clips")

         while True:
-            _cleanup_result = detection_subscriber.check_for_update(timeout=0)
-
-            if _cleanup_result is None:
-                break
-
-            (topic, data) = _cleanup_result
+            (topic, data) = detection_subscriber.check_for_update(timeout=0)

-            if not topic or data is None:
+            if not topic:
                 break

             (
@@ -335,7 +322,7 @@ class OutputProcess(FrigateProcess):
     logger.info("exiting output process...")

-def move_preview_frames(loc: str) -> None:
+def move_preview_frames(loc: str):
     preview_holdover = os.path.join(CLIPS_DIR, "preview_restart_cache")
     preview_cache = os.path.join(CACHE_DIR, "preview_frames")

View File

@@ -22,6 +22,7 @@ from frigate.ffmpeg_presets import (
     parse_preset_hardware_acceleration_encode,
 )
 from frigate.models import Previews
+from frigate.track.object_processing import TrackedObject
 from frigate.util.image import copy_yuv_to_position, get_blank_yuv_frame, get_yuv_crop

 logger = logging.getLogger(__name__)
@@ -65,9 +66,7 @@ def get_cache_image_name(camera: str, frame_time: float) -> str:
     )

-def get_most_recent_preview_frame(
-    camera: str, before: float | None = None
-) -> str | None:
+def get_most_recent_preview_frame(camera: str, before: float = None) -> str | None:
     """Get the most recent preview frame for a camera."""
     if not os.path.exists(PREVIEW_CACHE_DIR):
         return None
@@ -148,12 +147,12 @@ class FFMpegConverter(threading.Thread):
             if t_idx == item_count - 1:
                 # last frame does not get a duration
                 playlist.append(
-                    f"file '{get_cache_image_name(self.config.name, self.frame_times[t_idx])}'"  # type: ignore[arg-type]
+                    f"file '{get_cache_image_name(self.config.name, self.frame_times[t_idx])}'"
                 )
                 continue

             playlist.append(
-                f"file '{get_cache_image_name(self.config.name, self.frame_times[t_idx])}'"  # type: ignore[arg-type]
+                f"file '{get_cache_image_name(self.config.name, self.frame_times[t_idx])}'"
             )
             playlist.append(
                 f"duration {self.frame_times[t_idx + 1] - self.frame_times[t_idx]}"
@@ -200,33 +199,30 @@ class FFMpegConverter(threading.Thread):
         # unlink files from cache
         # don't delete last frame as it will be used as first frame in next segment
         for t in self.frame_times[0:-1]:
-            Path(get_cache_image_name(self.config.name, t)).unlink(missing_ok=True)  # type: ignore[arg-type]
+            Path(get_cache_image_name(self.config.name, t)).unlink(missing_ok=True)


 class PreviewRecorder:
     def __init__(self, config: CameraConfig) -> None:
         self.config = config
-        self.camera_name: str = config.name or ""
-        self.start_time: float = 0
-        self.last_output_time: float = 0
+        self.start_time = 0
+        self.last_output_time = 0
         self.offline = False
-        self.output_frames: list[float] = []
+        self.output_frames = []

-        if config.detect.width is None or config.detect.height is None:
-            raise ValueError("Detect width and height must be set for previews.")
-
-        self.detect_width: int = config.detect.width
-        self.detect_height: int = config.detect.height
-
-        if self.detect_width > self.detect_height:
+        if config.detect.width > config.detect.height:
             self.out_height = PREVIEW_HEIGHT
             self.out_width = (
-                int((self.detect_width / self.detect_height) * self.out_height) // 4 * 4
+                int((config.detect.width / config.detect.height) * self.out_height)
+                // 4
+                * 4
             )
         else:
             self.out_width = PREVIEW_HEIGHT
             self.out_height = (
-                int((self.detect_height / self.detect_width) * self.out_width) // 4 * 4
+                int((config.detect.height / config.detect.width) * self.out_width)
+                // 4
+                * 4
             )

         # create communication for finished previews
@@ -306,7 +302,7 @@ class PreviewRecorder:
         )
         self.start_time = frame_time
         self.last_output_time = frame_time
-        self.output_frames = []
+        self.output_frames: list[float] = []

     def should_write_frame(
         self,
@@ -346,9 +342,7 @@ class PreviewRecorder:

     def write_frame_to_cache(self, frame_time: float, frame: np.ndarray) -> None:
         # resize yuv frame
-        small_frame: np.ndarray = np.zeros(
-            (self.out_height * 3 // 2, self.out_width), np.uint8
-        )
+        small_frame = np.zeros((self.out_height * 3 // 2, self.out_width), np.uint8)
         copy_yuv_to_position(
             small_frame,
             (0, 0),
@@ -362,7 +356,7 @@ class PreviewRecorder:
             cv2.COLOR_YUV2BGR_I420,
         )
         cv2.imwrite(
-            get_cache_image_name(self.camera_name, frame_time),
+            get_cache_image_name(self.config.name, frame_time),
             small_frame,
             [
                 int(cv2.IMWRITE_WEBP_QUALITY),
@@ -402,7 +396,7 @@ class PreviewRecorder:
             ).start()
         else:
             logger.debug(
-                f"Not saving preview for {self.camera_name} because there are no saved frames."
+                f"Not saving preview for {self.config.name} because there are no saved frames."
             )
             self.reset_frame_cache(frame_time)
@@ -422,7 +416,9 @@ class PreviewRecorder:
         if not self.offline:
             self.write_frame_to_cache(
                 frame_time,
-                get_blank_yuv_frame(self.detect_width, self.detect_height),
+                get_blank_yuv_frame(
+                    self.config.detect.width, self.config.detect.height
+                ),
             )
             self.offline = True
@@ -435,9 +431,9 @@ class PreviewRecorder:
             return

         old_frame_path = get_cache_image_name(
-            self.camera_name, self.output_frames[-1]
+            self.config.name, self.output_frames[-1]
         )
-        new_frame_path = get_cache_image_name(self.camera_name, frame_time)
+        new_frame_path = get_cache_image_name(self.config.name, frame_time)
         shutil.copy(old_frame_path, new_frame_path)

         # save last frame to ensure consistent duration
@@ -451,12 +447,13 @@ class PreviewRecorder:
         self.reset_frame_cache(frame_time)

     def stop(self) -> None:
-        self.config_subscriber.stop()
         self.requestor.stop()


 def get_active_objects(
-    frame_time: float, camera_config: CameraConfig, all_objects: list[dict[str, Any]]
-) -> list[dict[str, Any]]:
+    frame_time: float, camera_config: CameraConfig, all_objects: list[TrackedObject]
+) -> list[TrackedObject]:
     """get active objects for detection."""
     return [
         o
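`PreviewRecorder.__init__` above fixes the short edge of the preview to `PREVIEW_HEIGHT`, scales the other edge to preserve the detect aspect ratio, and snaps it down to a multiple of 4, which video encoders commonly require. A small sketch of that rounding with an assumed `PREVIEW_HEIGHT` of 180 (the real constant lives elsewhere in Frigate):

```python
PREVIEW_HEIGHT = 180  # assumed value for illustration

def preview_dims(detect_width: int, detect_height: int) -> tuple[int, int]:
    # Mirrors the branch in PreviewRecorder.__init__ above:
    # fix the short edge, scale the long edge, align down to 4 pixels.
    if detect_width > detect_height:
        out_height = PREVIEW_HEIGHT
        out_width = int((detect_width / detect_height) * out_height) // 4 * 4
    else:
        out_width = PREVIEW_HEIGHT
        out_height = int((detect_height / detect_width) * out_width) // 4 * 4
    return (out_width, out_height)

print(preview_dims(1920, 1080))  # (320, 180)
print(preview_dims(1080, 1920))  # (180, 320)
```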

View File

@@ -10,7 +10,7 @@ from ruamel.yaml.constructor import DuplicateKeyError
 from frigate.config import BirdseyeModeEnum, FrigateConfig
 from frigate.const import MODEL_CACHE_DIR
 from frigate.detectors import DetectorTypeEnum
-from frigate.util.builtin import deep_merge, load_labels
+from frigate.util.builtin import deep_merge


 class TestConfig(unittest.TestCase):
@@ -288,65 +288,6 @@ class TestConfig(unittest.TestCase):
         frigate_config = FrigateConfig(**config)
         assert "dog" in frigate_config.cameras["back"].objects.filters

-    def test_default_audio_filters(self):
-        config = {
-            "mqtt": {"host": "mqtt"},
-            "audio": {"listen": ["speech", "yell"]},
-            "cameras": {
-                "back": {
-                    "ffmpeg": {
-                        "inputs": [
-                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
-                        ]
-                    },
-                    "detect": {
-                        "height": 1080,
-                        "width": 1920,
-                        "fps": 5,
-                    },
-                }
-            },
-        }
-        frigate_config = FrigateConfig(**config)
-
-        all_audio_labels = {
-            label
-            for label in load_labels("/audio-labelmap.txt", prefill=521).values()
-            if label
-        }
-        assert all_audio_labels.issubset(
-            set(frigate_config.cameras["back"].audio.filters.keys())
-        )
-
-    def test_override_audio_filters(self):
-        config = {
-            "mqtt": {"host": "mqtt"},
-            "cameras": {
-                "back": {
-                    "ffmpeg": {
-                        "inputs": [
-                            {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
-                        ]
-                    },
-                    "detect": {
-                        "height": 1080,
-                        "width": 1920,
-                        "fps": 5,
-                    },
-                    "audio": {
-                        "listen": ["speech", "yell"],
-                        "filters": {"speech": {"threshold": 0.9}},
-                    },
-                }
-            },
-        }
-        frigate_config = FrigateConfig(**config)
-        assert "speech" in frigate_config.cameras["back"].audio.filters
-        assert frigate_config.cameras["back"].audio.filters["speech"].threshold == 0.9
-        assert "babbling" in frigate_config.cameras["back"].audio.filters
-
     def test_inherit_object_filters(self):
         config = {
             "mqtt": {"host": "mqtt"},

View File

@@ -81,7 +81,6 @@ class TrackedObjectProcessor(threading.Thread):
                 CameraConfigUpdateEnum.motion,
                 CameraConfigUpdateEnum.objects,
                 CameraConfigUpdateEnum.remove,
-                CameraConfigUpdateEnum.timestamp_style,
                 CameraConfigUpdateEnum.zones,
             ],
         )

View File

@@ -752,7 +752,7 @@
   },
   "live": {
     "label": "Live playback",
-    "description": "Settings to control the jsmpeg live stream resolution and quality. This does not affect restreamed cameras that use go2rtc for live view.",
+    "description": "Settings used by the Web UI to control live stream resolution and quality.",
     "streams": {
       "label": "Live stream names",
       "description": "Mapping of configured stream names to restream/go2rtc names used for live playback."

View File

@@ -825,12 +825,6 @@
       "area": "Area"
     }
   },
-  "timestampPosition": {
-    "tl": "Top left",
-    "tr": "Top right",
-    "bl": "Bottom left",
-    "br": "Bottom right"
-  },
   "users": {
     "title": "Users",
     "management": {
@@ -1348,22 +1342,7 @@
       "preset-nvidia": "NVIDIA GPU",
       "preset-jetson-h264": "NVIDIA Jetson (H.264)",
       "preset-jetson-h265": "NVIDIA Jetson (H.265)",
-      "preset-rkmpp": "Rockchip RKMPP",
-      "preset-http-jpeg-generic": "HTTP JPEG (Generic)",
-      "preset-http-mjpeg-generic": "HTTP MJPEG (Generic)",
-      "preset-http-reolink": "HTTP - Reolink Cameras",
-      "preset-rtmp-generic": "RTMP (Generic)",
-      "preset-rtsp-generic": "RTSP (Generic)",
-      "preset-rtsp-restream": "RTSP - Restream from go2rtc",
-      "preset-rtsp-restream-low-latency": "RTSP - Restream from go2rtc (Low Latency)",
-      "preset-rtsp-udp": "RTSP - UDP",
-      "preset-rtsp-blue-iris": "RTSP - Blue Iris",
-      "preset-record-generic": "Record (Generic, no audio)",
-      "preset-record-generic-audio-copy": "Record (Generic + Copy Audio)",
-      "preset-record-generic-audio-aac": "Record (Generic + Audio to AAC)",
-      "preset-record-mjpeg": "Record - MJPEG Cameras",
-      "preset-record-jpeg": "Record - JPEG Cameras",
-      "preset-record-ubiquiti": "Record - Ubiquiti Cameras"
+      "preset-rkmpp": "Rockchip RKMPP"
     }
   },
   "cameraInputs": {

View File

@@ -19,16 +19,6 @@ const audio: SectionConfigOverrides = {
   hiddenFields: ["enabled_in_config"],
   advancedFields: ["min_volume", "max_not_heard", "num_threads"],
   uiSchema: {
-    filters: {
-      "ui:options": {
-        expandable: false,
-      },
-    },
-    "filters.*": {
-      "ui:options": {
-        additionalPropertyKeyReadonly: true,
-      },
-    },
     listen: {
       "ui:widget": "audioLabels",
     },

View File

@@ -29,11 +29,6 @@ const objects: SectionConfigOverrides = {
   ],
   advancedFields: ["genai"],
   uiSchema: {
-    filters: {
-      "ui:options": {
-        expandable: false,
-      },
-    },
     "filters.*.min_area": {
       "ui:options": {
         suppressMultiSchema: true,

View File

@@ -4,13 +4,12 @@ const timestampStyle: SectionConfigOverrides = {
   base: {
     sectionDocs: "/configuration/reference",
     restartRequired: [],
-    fieldOrder: ["position", "format", "thickness", "color"],
+    fieldOrder: ["position", "format", "color", "thickness"],
     hiddenFields: ["effect", "enabled_in_config"],
     advancedFields: [],
     uiSchema: {
       position: {
         "ui:size": "xs",
-        "ui:options": { enumI18nPrefix: "timestampPosition" },
       },
       format: {
         "ui:size": "xs",
@@ -18,7 +17,7 @@ const timestampStyle: SectionConfigOverrides = {
     },
   },
   global: {
-    restartRequired: [],
+    restartRequired: ["position", "format", "color", "thickness", "effect"],
   },
   camera: {
     restartRequired: [],

View File

@@ -1,6 +1,5 @@
 // Select Widget - maps to shadcn/ui Select
 import type { WidgetProps } from "@rjsf/utils";
-import { useTranslation } from "react-i18next";
 import {
   Select,
   SelectContent,
@@ -22,18 +21,9 @@ export function SelectWidget(props: WidgetProps) {
     schema,
   } = props;

-  const { t } = useTranslation(["views/settings"]);
   const { enumOptions = [] } = options;
-  const enumI18nPrefix = options["enumI18nPrefix"] as string | undefined;
   const fieldClassName = getSizedFieldClassName(options, "sm");

-  const getLabel = (option: { value: unknown; label: string }) => {
-    if (enumI18nPrefix) {
-      return t(`${enumI18nPrefix}.${option.value}`);
-    }
-    return option.label;
-  };
-
   return (
     <Select
       value={value?.toString() ?? ""}
@@ -52,7 +42,7 @@ export function SelectWidget(props: WidgetProps) {
       <SelectContent>
         {enumOptions.map((option: { value: unknown; label: string }) => (
           <SelectItem key={String(option.value)} value={String(option.value)}>
-            {getLabel(option)}
+            {option.label}
           </SelectItem>
         ))}
       </SelectContent>

View File

@@ -707,23 +707,14 @@ export default function LiveCameraView({
           }}
         >
           <div
-            className={cn(
-              "flex flex-col items-center justify-center",
-              growClassName,
-            )}
+            className={`relative flex flex-col items-center justify-center ${growClassName}`}
             ref={clickOverlayRef}
             style={{
              aspectRatio: constrainedAspectRatio,
            }}
          >
            {clickOverlay && overlaySize.width > 0 && (
-              <div
-                className="absolute z-40 cursor-crosshair"
-                style={{
-                  width: overlaySize.width,
-                  height: overlaySize.height,
-                }}
-              >
+              <div className="absolute inset-0 z-40 cursor-crosshair">
                <Stage
                  width={overlaySize.width}
                  height={overlaySize.height}