mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-12-18 02:56:44 +03:00)

Compare commits: 7 commits (dd6444a34d...ae15bb37c1)

- ae15bb37c1
- 1b57fb15a7
- cd606ad240
- de2144f158
- e79ff9a079
- fe47620153
- 793906bb68
@ -81,3 +81,5 @@ librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*
# Memory profiling
memray == 1.15.*
@ -75,7 +75,13 @@ audio:

### Audio Transcription

Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI’s open-source Whisper models via `faster-whisper`. To enable transcription, enable it in your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI’s open-source Whisper models via `faster-whisper`. The goal of this feature is to support Semantic Search for `speech` audio events. Frigate is not intended to act as a continuous, fully automatic speech transcription service: automatically transcribing all speech (or queuing many audio events for transcription) requires substantial CPU (or GPU) resources and is impractical on most systems. For this reason, transcriptions for events are initiated manually from the UI or the API rather than being run continuously in the background.

Transcription accuracy also depends heavily on the quality of your camera's microphone and recording conditions. Many cameras use inexpensive microphones, and distance to the speaker, a low audio bitrate, or background noise can significantly reduce transcription quality. If you need higher accuracy, more robust long-running queues, or large-scale automatic transcription, consider using the HTTP API in combination with an automation platform and a cloud transcription service.

#### Configuration

To enable transcription, add it to your config. Note that audio detection must also be enabled as described above in order to use audio transcription features.

```yaml
audio_transcription:
  enabled: True
```
docs/docs/troubleshooting/memory.md (new file, 129 lines)
@ -0,0 +1,129 @@
---
id: memory
title: Memory Troubleshooting
---

Frigate includes built-in memory profiling using [memray](https://bloomberg.github.io/memray/) to help diagnose memory issues. This feature allows you to profile specific Frigate modules to identify memory leaks, excessive allocations, or other memory-related problems.

## Enabling Memory Profiling

Memory profiling is controlled via the `FRIGATE_MEMRAY_MODULES` environment variable. Set it to a comma-separated list of module names you want to profile:

```bash
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"
```
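
Since Frigate is normally deployed in a container, the variable is usually set on the container rather than exported in a host shell. A minimal sketch, assuming a plain `docker run` deployment (the container name, config path, and image tag are illustrative):

```bash
# Pass the profiling variable into the Frigate container (names and paths are examples)
docker run -d \
  --name frigate \
  -e FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture" \
  -v /path/to/your/config:/config \
  ghcr.io/blakeblackshear/frigate:stable
```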

### Module Names

Frigate processes are named using a module-based naming scheme. Common module names include:

- `frigate.review_segment_manager` - Review segment processing
- `frigate.recording_manager` - Recording management
- `frigate.capture` - Camera capture processes (all cameras with this module name)
- `frigate.process` - Camera processing/tracking (all cameras with this module name)
- `frigate.output` - Output processing
- `frigate.audio_manager` - Audio processing
- `frigate.embeddings` - Embeddings processing

You can also specify the full process name (including camera-specific identifiers) if you want to profile a specific camera:

```bash
export FRIGATE_MEMRAY_MODULES="frigate.capture:front_door"
```

When you specify a module name (e.g., `frigate.capture`), all processes with that module prefix will be profiled. For example, `frigate.capture` will profile all camera capture processes.

## How It Works

1. **Binary File Creation**: When profiling is enabled, memray creates a binary file (`.bin`) in `/config/memray_reports/` that is updated continuously as the process runs.

2. **Automatic HTML Generation**: On normal process exit, Frigate automatically:

   - Stops memray tracking
   - Generates an HTML flamegraph report
   - Saves it to `/config/memray_reports/<module_name>.html`

3. **Crash Recovery**: If a process crashes (SIGKILL, segfault, etc.), the binary file is preserved with all data up to the crash point. You can manually generate the HTML report from the binary file.

## Viewing Reports

### Automatic Reports

After a process exits normally, you'll find HTML reports in `/config/memray_reports/`. Open these files in a web browser to view interactive flamegraphs showing memory usage patterns.

### Manual Report Generation

If a process crashes or you want to generate a report from an existing binary file, you can manually create the HTML report:

```bash
memray flamegraph /config/memray_reports/<module_name>.bin
```
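
`flamegraph` is the reporter Frigate generates automatically, but the same binary file can be fed to memray's other reporters depending on what you are investigating. These are standard memray subcommands, not Frigate-specific:

```bash
# High-level allocation statistics printed to the terminal
memray stats /config/memray_reports/<module_name>.bin

# An HTML table of allocations, sortable by size and count
memray table /config/memray_reports/<module_name>.bin

# A terminal tree view of the call stacks responsible for the most memory
memray tree /config/memray_reports/<module_name>.bin
```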

This will generate an HTML file that you can open in your browser.

## Understanding the Reports

Memray flamegraphs show:

- **Memory allocations over time**: See where memory is being allocated in your code
- **Call stacks**: Understand the full call chain leading to allocations
- **Memory hotspots**: Identify functions or code paths that allocate the most memory
- **Memory leaks**: Spot patterns where memory is allocated but not freed

The interactive HTML reports allow you to:

- Zoom into specific time ranges
- Filter by function names
- View detailed allocation information
- Export data for further analysis

## Best Practices

1. **Profile During Issues**: Enable profiling when you're experiencing memory issues, not all the time, as it adds some overhead.

2. **Profile Specific Modules**: Instead of profiling everything, focus on the modules you suspect are causing issues.

3. **Let Processes Run**: Allow processes to run for a meaningful duration to capture representative memory usage patterns.

4. **Check Binary Files**: If HTML reports aren't generated automatically (e.g., after a crash), check for `.bin` files in `/config/memray_reports/` and generate reports manually.

5. **Compare Reports**: Generate reports at different times to compare memory usage patterns and identify trends.

## Troubleshooting

### No Reports Generated

- Check that the environment variable is set correctly
- Verify the module name matches exactly (case-sensitive)
- Check logs for memray-related errors
- Ensure the `/config/memray_reports/` directory exists and is writable (a quick check is sketched below)
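
A quick way to verify those last points from a shell inside the Frigate container (plain shell commands; adjust the path if your config directory is mounted elsewhere):

```bash
# Confirm the variable is visible to Frigate and the report directory is writable
env | grep FRIGATE_MEMRAY_MODULES
ls -ld /config/memray_reports/
touch /config/memray_reports/.write_test && rm /config/memray_reports/.write_test
```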

### Process Crashed Before Report Generation

- Look for `.bin` files in `/config/memray_reports/`
- Manually generate HTML reports using: `memray flamegraph <file>.bin`
- The binary file contains all data up to the crash point

### Reports Show No Data

- Ensure the process ran long enough to generate meaningful data
- Check that memray is properly installed (included by default in Frigate)
- Verify the process actually started and ran (check process logs)

## Example Usage

```bash
# Enable profiling for review and capture modules
export FRIGATE_MEMRAY_MODULES="frigate.review_segment_manager,frigate.capture"

# Start Frigate
# ... let it run for a while ...

# Check for reports
ls -lh /config/memray_reports/

# If a process crashed, manually generate report
memray flamegraph /config/memray_reports/frigate_capture_front_door.bin
```

For more information about memray and interpreting reports, see the [official memray documentation](https://bloomberg.github.io/memray/).
@ -131,6 +131,7 @@ const sidebars: SidebarsConfig = {
"troubleshooting/recordings",
"troubleshooting/gpu",
"troubleshooting/edgetpu",
"troubleshooting/memory",
],
Development: [
"development/contributing",
@ -23,7 +23,7 @@ from markupsafe import escape
from peewee import SQL, fn, operator
from pydantic import ValidationError

from frigate.api.auth import require_role
from frigate.api.auth import allow_any_authenticated, allow_public, require_role
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
@ -56,29 +56,33 @@ logger = logging.getLogger(__name__)
|
||||
router = APIRouter(tags=[Tags.app])
|
||||
|
||||
|
||||
@router.get("/", response_class=PlainTextResponse)
|
||||
@router.get(
|
||||
"/", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
|
||||
)
|
||||
def is_healthy():
|
||||
return "Frigate is running. Alive and healthy!"
|
||||
|
||||
|
||||
@router.get("/config/schema.json")
|
||||
@router.get("/config/schema.json", dependencies=[Depends(allow_public())])
|
||||
def config_schema(request: Request):
|
||||
return Response(
|
||||
content=request.app.frigate_config.schema_json(), media_type="application/json"
|
||||
)
|
||||
|
||||
|
||||
@router.get("/version", response_class=PlainTextResponse)
|
||||
@router.get(
|
||||
"/version", response_class=PlainTextResponse, dependencies=[Depends(allow_public())]
|
||||
)
|
||||
def version():
|
||||
return VERSION
|
||||
|
||||
|
||||
@router.get("/stats")
|
||||
@router.get("/stats", dependencies=[Depends(allow_any_authenticated())])
|
||||
def stats(request: Request):
|
||||
return JSONResponse(content=request.app.stats_emitter.get_latest_stats())
|
||||
|
||||
|
||||
@router.get("/stats/history")
|
||||
@router.get("/stats/history", dependencies=[Depends(allow_any_authenticated())])
|
||||
def stats_history(request: Request, keys: str = None):
|
||||
if keys:
|
||||
keys = keys.split(",")
|
||||
@ -86,7 +90,7 @@ def stats_history(request: Request, keys: str = None):
|
||||
return JSONResponse(content=request.app.stats_emitter.get_stats_history(keys))
|
||||
|
||||
|
||||
@router.get("/metrics")
|
||||
@router.get("/metrics", dependencies=[Depends(allow_any_authenticated())])
|
||||
def metrics(request: Request):
|
||||
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
|
||||
# Retrieve the latest statistics and update the Prometheus metrics
|
||||
@ -103,7 +107,7 @@ def metrics(request: Request):
|
||||
return Response(content=content, media_type=content_type)
|
||||
|
||||
|
||||
@router.get("/config")
|
||||
@router.get("/config", dependencies=[Depends(allow_any_authenticated())])
|
||||
def config(request: Request):
|
||||
config_obj: FrigateConfig = request.app.frigate_config
|
||||
config: dict[str, dict[str, Any]] = config_obj.model_dump(
|
||||
@ -209,7 +213,7 @@ def config_raw_paths(request: Request):
|
||||
return JSONResponse(content=raw_paths)
|
||||
|
||||
|
||||
@router.get("/config/raw")
|
||||
@router.get("/config/raw", dependencies=[Depends(allow_any_authenticated())])
|
||||
def config_raw():
|
||||
config_file = find_config_file()
|
||||
|
||||
@ -452,7 +456,7 @@ def config_set(request: Request, body: AppConfigSetBody):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/vainfo")
|
||||
@router.get("/vainfo", dependencies=[Depends(allow_any_authenticated())])
|
||||
def vainfo():
|
||||
vainfo = vainfo_hwaccel()
|
||||
return JSONResponse(
|
||||
@ -472,12 +476,16 @@ def vainfo():
|
||||
)
|
||||
|
||||
|
||||
@router.get("/nvinfo")
|
||||
@router.get("/nvinfo", dependencies=[Depends(allow_any_authenticated())])
|
||||
def nvinfo():
|
||||
return JSONResponse(content=get_nvidia_driver_info())
|
||||
|
||||
|
||||
@router.get("/logs/{service}", tags=[Tags.logs])
|
||||
@router.get(
|
||||
"/logs/{service}",
|
||||
tags=[Tags.logs],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def logs(
|
||||
service: str = Path(enum=["frigate", "nginx", "go2rtc"]),
|
||||
download: Optional[str] = None,
|
||||
@ -585,7 +593,7 @@ def restart():
|
||||
)
|
||||
|
||||
|
||||
@router.get("/labels")
|
||||
@router.get("/labels", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_labels(camera: str = ""):
|
||||
try:
|
||||
if camera:
|
||||
@ -603,7 +611,7 @@ def get_labels(camera: str = ""):
|
||||
return JSONResponse(content=labels)
|
||||
|
||||
|
||||
@router.get("/sub_labels")
|
||||
@router.get("/sub_labels", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_sub_labels(split_joined: Optional[int] = None):
|
||||
try:
|
||||
events = Event.select(Event.sub_label).distinct()
|
||||
@ -634,7 +642,7 @@ def get_sub_labels(split_joined: Optional[int] = None):
|
||||
return JSONResponse(content=sub_labels)
|
||||
|
||||
|
||||
@router.get("/plus/models")
|
||||
@router.get("/plus/models", dependencies=[Depends(allow_any_authenticated())])
|
||||
def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
|
||||
if not request.app.frigate_config.plus_api.is_active():
|
||||
return JSONResponse(
|
||||
@ -676,7 +684,9 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
|
||||
return JSONResponse(content=validModels)
|
||||
|
||||
|
||||
@router.get("/recognized_license_plates")
|
||||
@router.get(
|
||||
"/recognized_license_plates", dependencies=[Depends(allow_any_authenticated())]
|
||||
)
|
||||
def get_recognized_license_plates(split_joined: Optional[int] = None):
|
||||
try:
|
||||
query = (
|
||||
@ -710,7 +720,7 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
|
||||
return JSONResponse(content=recognized_license_plates)
|
||||
|
||||
|
||||
@router.get("/timeline")
|
||||
@router.get("/timeline", dependencies=[Depends(allow_any_authenticated())])
|
||||
def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = None):
|
||||
clauses = []
|
||||
|
||||
@ -747,7 +757,7 @@ def timeline(camera: str = "all", limit: int = 100, source_id: Optional[str] = N
|
||||
return JSONResponse(content=[t for t in timeline])
|
||||
|
||||
|
||||
@router.get("/timeline/hourly")
|
||||
@router.get("/timeline/hourly", dependencies=[Depends(allow_any_authenticated())])
|
||||
def hourly_timeline(params: AppTimelineHourlyQueryParameters = Depends()):
|
||||
"""Get hourly summary for timeline."""
|
||||
cameras = params.cameras
|
||||
|
||||
@ -32,10 +32,154 @@ from frigate.models import User
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def require_admin_by_default():
|
||||
"""
|
||||
Global admin requirement dependency for all endpoints by default.
|
||||
|
||||
This is set as the default dependency on the FastAPI app to ensure all
|
||||
endpoints require admin access unless explicitly overridden with
|
||||
allow_public(), allow_any_authenticated(), or require_role().
|
||||
|
||||
Port 5000 (internal) always has admin role set by the /auth endpoint,
|
||||
so this check passes automatically for internal requests.
|
||||
|
||||
Certain paths are exempted from the global admin check because they must
|
||||
be accessible before authentication (login, auth) or they have their own
|
||||
route-level authorization dependencies that handle access control.
|
||||
"""
|
||||
# Paths that have route-level auth dependencies and should bypass global admin check
|
||||
# These paths still have authorization - it's handled by their route-level dependencies
|
||||
EXEMPT_PATHS = {
|
||||
# Public auth endpoints (allow_public)
|
||||
"/auth",
|
||||
"/auth/first_time_login",
|
||||
"/login",
|
||||
# Authenticated user endpoints (allow_any_authenticated)
|
||||
"/logout",
|
||||
"/profile",
|
||||
# Public info endpoints (allow_public)
|
||||
"/",
|
||||
"/version",
|
||||
"/config/schema.json",
|
||||
"/metrics",
|
||||
# Authenticated user endpoints (allow_any_authenticated)
|
||||
"/stats",
|
||||
"/stats/history",
|
||||
"/config",
|
||||
"/config/raw",
|
||||
"/vainfo",
|
||||
"/nvinfo",
|
||||
"/labels",
|
||||
"/sub_labels",
|
||||
"/plus/models",
|
||||
"/recognized_license_plates",
|
||||
"/timeline",
|
||||
"/timeline/hourly",
|
||||
"/events/summary",
|
||||
"/recordings/storage",
|
||||
"/recordings/summary",
|
||||
"/recordings/unavailable",
|
||||
"/go2rtc/streams",
|
||||
}
|
||||
|
||||
# Path prefixes that should be exempt (for paths with parameters)
|
||||
EXEMPT_PREFIXES = (
|
||||
"/logs/", # /logs/{service}
|
||||
"/review", # /review, /review/{id}, /review_ids, /review/summary, etc.
|
||||
"/reviews/", # /reviews/viewed, /reviews/delete
|
||||
"/events/", # /events/{id}/thumbnail, etc. (camera-scoped)
|
||||
"/go2rtc/streams/", # /go2rtc/streams/{camera}
|
||||
"/users/", # /users/{username}/password (has own auth)
|
||||
"/preview/", # /preview/{file}/thumbnail.jpg
|
||||
)
|
||||
|
||||
async def admin_checker(request: Request):
|
||||
path = request.url.path
|
||||
|
||||
# Check exact path matches
|
||||
if path in EXEMPT_PATHS:
|
||||
return
|
||||
|
||||
# Check prefix matches for parameterized paths
|
||||
if path.startswith(EXEMPT_PREFIXES):
|
||||
return
|
||||
|
||||
# For all other paths, require admin role
|
||||
# Port 5000 (internal) requests have admin role set automatically
|
||||
role = request.headers.get("remote-role")
|
||||
if role == "admin":
|
||||
return
|
||||
|
||||
raise HTTPException(
|
||||
status_code=403,
|
||||
detail="Admin role required for this endpoint",
|
||||
)
|
||||
|
||||
return admin_checker
|
||||
|
||||
|
||||
def _is_authenticated(request: Request) -> bool:
|
||||
"""
|
||||
Helper to determine if a request is from an authenticated user.
|
||||
|
||||
Returns True if the request has a valid authenticated user (not anonymous).
|
||||
Port 5000 internal requests are considered anonymous despite having admin role.
|
||||
"""
|
||||
username = request.headers.get("remote-user")
|
||||
return username is not None and username != "anonymous"
|
||||
|
||||
|
||||
def allow_public():
|
||||
"""
|
||||
Override dependency to allow unauthenticated access to an endpoint.
|
||||
|
||||
Use this for endpoints that should be publicly accessible without
|
||||
authentication, such as login page, health checks, or pre-auth info.
|
||||
|
||||
Example:
|
||||
@router.get("/public-endpoint", dependencies=[Depends(allow_public())])
|
||||
"""
|
||||
|
||||
async def public_checker(request: Request):
|
||||
return # Always allow
|
||||
|
||||
return public_checker
|
||||
|
||||
|
||||
def allow_any_authenticated():
|
||||
"""
|
||||
Override dependency to allow any authenticated user (bypass admin requirement).
|
||||
|
||||
Allows:
|
||||
- Port 5000 internal requests (have admin role despite anonymous user)
|
||||
- Any authenticated user with a real username (not "anonymous")
|
||||
|
||||
Rejects:
|
||||
- Port 8971 requests with anonymous user (auth disabled, no proxy auth)
|
||||
|
||||
Example:
|
||||
@router.get("/authenticated-endpoint", dependencies=[Depends(allow_any_authenticated())])
|
||||
"""
|
||||
|
||||
async def auth_checker(request: Request):
|
||||
# Port 5000 requests have admin role and should be allowed
|
||||
role = request.headers.get("remote-role")
|
||||
if role == "admin":
|
||||
return
|
||||
|
||||
# Otherwise require a real authenticated user (not anonymous)
|
||||
if not _is_authenticated(request):
|
||||
raise HTTPException(status_code=401, detail="Authentication required")
|
||||
return
|
||||
|
||||
return auth_checker
|
||||
|
||||
|
||||
router = APIRouter(tags=[Tags.auth])
|
||||
|
||||
|
||||
@router.get("/auth/first_time_login")
|
||||
@router.get("/auth/first_time_login", dependencies=[Depends(allow_public())])
|
||||
def first_time_login(request: Request):
|
||||
"""Return whether the admin first-time login help flag is set in config.
|
||||
|
||||
@ -352,7 +496,7 @@ def resolve_role(
|
||||
|
||||
|
||||
# Endpoints
|
||||
@router.get("/auth")
|
||||
@router.get("/auth", dependencies=[Depends(allow_public())])
|
||||
def auth(request: Request):
|
||||
auth_config: AuthConfig = request.app.frigate_config.auth
|
||||
proxy_config: ProxyConfig = request.app.frigate_config.proxy
|
||||
@ -478,7 +622,7 @@ def auth(request: Request):
|
||||
return fail_response
|
||||
|
||||
|
||||
@router.get("/profile")
|
||||
@router.get("/profile", dependencies=[Depends(allow_any_authenticated())])
|
||||
def profile(request: Request):
|
||||
username = request.headers.get("remote-user", "anonymous")
|
||||
role = request.headers.get("remote-role", "viewer")
|
||||
@ -492,7 +636,7 @@ def profile(request: Request):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/logout")
|
||||
@router.get("/logout", dependencies=[Depends(allow_any_authenticated())])
|
||||
def logout(request: Request):
|
||||
auth_config: AuthConfig = request.app.frigate_config.auth
|
||||
response = RedirectResponse("/login", status_code=303)
|
||||
@ -503,7 +647,7 @@ def logout(request: Request):
|
||||
limiter = Limiter(key_func=get_remote_addr)
|
||||
|
||||
|
||||
@router.post("/login")
|
||||
@router.post("/login", dependencies=[Depends(allow_public())])
|
||||
@limiter.limit(limit_value=rateLimiter.get_limit)
|
||||
def login(request: Request, body: AppPostLoginBody):
|
||||
JWT_COOKIE_NAME = request.app.frigate_config.auth.cookie_name
|
||||
@ -578,13 +722,21 @@ def create_user(
|
||||
return JSONResponse(content={"username": body.username})
|
||||
|
||||
|
||||
@router.delete("/users/{username}")
|
||||
def delete_user(username: str):
|
||||
@router.delete("/users/{username}", dependencies=[Depends(require_role(["admin"]))])
|
||||
def delete_user(request: Request, username: str):
|
||||
# Prevent deletion of the built-in admin user
|
||||
if username == "admin":
|
||||
return JSONResponse(
|
||||
content={"message": "Cannot delete admin user"}, status_code=403
|
||||
)
|
||||
|
||||
User.delete_by_id(username)
|
||||
return JSONResponse(content={"success": True})
|
||||
|
||||
|
||||
@router.put("/users/{username}/password")
|
||||
@router.put(
|
||||
"/users/{username}/password", dependencies=[Depends(allow_any_authenticated())]
|
||||
)
|
||||
async def update_password(
|
||||
request: Request,
|
||||
username: str,
|
||||
|
||||
@ -15,7 +15,11 @@ from onvif import ONVIFCamera, ONVIFError
|
||||
from zeep.exceptions import Fault, TransportError
|
||||
from zeep.transports import AsyncTransport
|
||||
|
||||
from frigate.api.auth import require_role
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
)
|
||||
from frigate.api.defs.tags import Tags
|
||||
from frigate.config.config import FrigateConfig
|
||||
from frigate.util.builtin import clean_camera_user_pass
|
||||
@ -50,7 +54,7 @@ def _is_valid_host(host: str) -> bool:
|
||||
return False
|
||||
|
||||
|
||||
@router.get("/go2rtc/streams")
|
||||
@router.get("/go2rtc/streams", dependencies=[Depends(allow_any_authenticated())])
|
||||
def go2rtc_streams():
|
||||
r = requests.get("http://127.0.0.1:1984/api/streams")
|
||||
if not r.ok:
|
||||
@ -66,7 +70,9 @@ def go2rtc_streams():
|
||||
return JSONResponse(content=stream_data)
|
||||
|
||||
|
||||
@router.get("/go2rtc/streams/{camera_name}")
|
||||
@router.get(
|
||||
"/go2rtc/streams/{camera_name}", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def go2rtc_camera_stream(request: Request, camera_name: str):
|
||||
r = requests.get(
|
||||
f"http://127.0.0.1:1984/api/streams?src={camera_name}&video=all&audio=allµphone"
|
||||
@ -161,7 +167,7 @@ def go2rtc_delete_stream(stream_name: str):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/ffprobe")
|
||||
@router.get("/ffprobe", dependencies=[Depends(require_role(["admin"]))])
|
||||
def ffprobe(request: Request, paths: str = "", detailed: bool = False):
|
||||
path_param = paths
|
||||
|
||||
|
||||
@ -870,6 +870,46 @@ def categorize_classification_image(request: Request, name: str, body: dict = No
|
||||
)
|
||||
|
||||
|
||||
@router.post(
|
||||
"/classification/{name}/dataset/{category}/create",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(require_role(["admin"]))],
|
||||
summary="Create an empty classification category folder",
|
||||
description="""Creates an empty folder for a classification category.
|
||||
This is used to create folders for categories that don't have images yet.
|
||||
Returns a success message or an error if the name is invalid.""",
|
||||
)
|
||||
def create_classification_category(request: Request, name: str, category: str):
|
||||
config: FrigateConfig = request.app.frigate_config
|
||||
|
||||
if name not in config.classification.custom:
|
||||
return JSONResponse(
|
||||
content=(
|
||||
{
|
||||
"success": False,
|
||||
"message": f"{name} is not a known classification model.",
|
||||
}
|
||||
),
|
||||
status_code=404,
|
||||
)
|
||||
|
||||
category_folder = os.path.join(
|
||||
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
|
||||
)
|
||||
|
||||
os.makedirs(category_folder, exist_ok=True)
|
||||
|
||||
return JSONResponse(
|
||||
content=(
|
||||
{
|
||||
"success": True,
|
||||
"message": f"Successfully created category folder: {category}",
|
||||
}
|
||||
),
|
||||
status_code=200,
|
||||
)
|
||||
|
||||
|
||||
@router.post(
|
||||
"/classification/{name}/train/delete",
|
||||
response_model=GenericResponse,
|
||||
|
||||
@ -22,6 +22,7 @@ from peewee import JOIN, DoesNotExist, fn, operator
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
require_role,
|
||||
@ -808,7 +809,7 @@ def events_search(
|
||||
return JSONResponse(content=processed_events)
|
||||
|
||||
|
||||
@router.get("/events/summary")
|
||||
@router.get("/events/summary", dependencies=[Depends(allow_any_authenticated())])
|
||||
def events_summary(
|
||||
params: EventsSummaryQueryParams = Depends(),
|
||||
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
|
||||
|
||||
@ -2,7 +2,7 @@ import logging
|
||||
import re
|
||||
from typing import Optional
|
||||
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi import Depends, FastAPI, Request
|
||||
from fastapi.responses import JSONResponse
|
||||
from joserfc.jwk import OctKey
|
||||
from playhouse.sqliteq import SqliteQueueDatabase
|
||||
@ -24,7 +24,7 @@ from frigate.api import (
|
||||
preview,
|
||||
review,
|
||||
)
|
||||
from frigate.api.auth import get_jwt_secret, limiter
|
||||
from frigate.api.auth import get_jwt_secret, limiter, require_admin_by_default
|
||||
from frigate.comms.event_metadata_updater import (
|
||||
EventMetadataPublisher,
|
||||
)
|
||||
@ -62,11 +62,15 @@ def create_fastapi_app(
|
||||
stats_emitter: StatsEmitter,
|
||||
event_metadata_updater: EventMetadataPublisher,
|
||||
config_publisher: CameraConfigUpdatePublisher,
|
||||
enforce_default_admin: bool = True,
|
||||
):
|
||||
logger.info("Starting FastAPI app")
|
||||
app = FastAPI(
|
||||
debug=False,
|
||||
swagger_ui_parameters={"apisSorter": "alpha", "operationsSorter": "alpha"},
|
||||
dependencies=[Depends(require_admin_by_default())]
|
||||
if enforce_default_admin
|
||||
else [],
|
||||
)
|
||||
|
||||
# update the request_address with the x-forwarded-for header from nginx
|
||||
|
||||
@ -22,7 +22,11 @@ from pathvalidate import sanitize_filename
|
||||
from peewee import DoesNotExist, fn, operator
|
||||
from tzlocal import get_localzone_name
|
||||
|
||||
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
require_camera_access,
|
||||
)
|
||||
from frigate.api.defs.query.media_query_parameters import (
|
||||
Extension,
|
||||
MediaEventsSnapshotQueryParams,
|
||||
@ -393,7 +397,7 @@ async def submit_recording_snapshot_to_plus(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/recordings/storage")
|
||||
@router.get("/recordings/storage", dependencies=[Depends(allow_any_authenticated())])
|
||||
def get_recordings_storage_usage(request: Request):
|
||||
recording_stats = request.app.stats_emitter.get_latest_stats()["service"][
|
||||
"storage"
|
||||
@ -417,7 +421,7 @@ def get_recordings_storage_usage(request: Request):
|
||||
return JSONResponse(content=camera_usages)
|
||||
|
||||
|
||||
@router.get("/recordings/summary")
|
||||
@router.get("/recordings/summary", dependencies=[Depends(allow_any_authenticated())])
|
||||
def all_recordings_summary(
|
||||
request: Request,
|
||||
params: MediaRecordingsSummaryQueryParams = Depends(),
|
||||
@ -635,7 +639,11 @@ async def recordings(
|
||||
return JSONResponse(content=list(recordings))
|
||||
|
||||
|
||||
@router.get("/recordings/unavailable", response_model=list[dict])
|
||||
@router.get(
|
||||
"/recordings/unavailable",
|
||||
response_model=list[dict],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def no_recordings(
|
||||
request: Request,
|
||||
params: MediaRecordingsAvailabilityQueryParams = Depends(),
|
||||
@ -1053,7 +1061,10 @@ async def event_snapshot(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/thumbnail.{extension}")
|
||||
@router.get(
|
||||
"/events/{event_id}/thumbnail.{extension}",
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
)
|
||||
async def event_thumbnail(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@ -1251,7 +1262,10 @@ def grid_snapshot(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/snapshot-clean.webp")
|
||||
@router.get(
|
||||
"/events/{event_id}/snapshot-clean.webp",
|
||||
dependencies=[Depends(require_camera_access)],
|
||||
)
|
||||
def event_snapshot_clean(request: Request, event_id: str, download: bool = False):
|
||||
webp_bytes = None
|
||||
try:
|
||||
@ -1375,7 +1389,9 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/clip.mp4")
|
||||
@router.get(
|
||||
"/events/{event_id}/clip.mp4", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
async def event_clip(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@ -1403,7 +1419,9 @@ async def event_clip(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/preview.gif")
|
||||
@router.get(
|
||||
"/events/{event_id}/preview.gif", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def event_preview(request: Request, event_id: str):
|
||||
try:
|
||||
event: Event = Event.get(Event.id == event_id)
|
||||
@ -1756,7 +1774,7 @@ def preview_mp4(
|
||||
)
|
||||
|
||||
|
||||
@router.get("/review/{event_id}/preview")
|
||||
@router.get("/review/{event_id}/preview", dependencies=[Depends(require_camera_access)])
|
||||
def review_preview(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
@ -1782,8 +1800,12 @@ def review_preview(
|
||||
return preview_mp4(request, review.camera, start_ts, end_ts)
|
||||
|
||||
|
||||
@router.get("/preview/{file_name}/thumbnail.jpg")
|
||||
@router.get("/preview/{file_name}/thumbnail.webp")
|
||||
@router.get(
|
||||
"/preview/{file_name}/thumbnail.jpg", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
@router.get(
|
||||
"/preview/{file_name}/thumbnail.webp", dependencies=[Depends(require_camera_access)]
|
||||
)
|
||||
def preview_thumbnail(file_name: str):
|
||||
"""Get a thumbnail from the cached preview frames."""
|
||||
if len(file_name) > 1000:
|
||||
|
||||
@ -14,6 +14,7 @@ from peewee import Case, DoesNotExist, IntegrityError, fn, operator
|
||||
from playhouse.shortcuts import model_to_dict
|
||||
|
||||
from frigate.api.auth import (
|
||||
allow_any_authenticated,
|
||||
get_allowed_cameras_for_filter,
|
||||
get_current_user,
|
||||
require_camera_access,
|
||||
@ -43,7 +44,11 @@ logger = logging.getLogger(__name__)
|
||||
router = APIRouter(tags=[Tags.review])
|
||||
|
||||
|
||||
@router.get("/review", response_model=list[ReviewSegmentResponse])
|
||||
@router.get(
|
||||
"/review",
|
||||
response_model=list[ReviewSegmentResponse],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def review(
|
||||
params: ReviewQueryParams = Depends(),
|
||||
current_user: dict = Depends(get_current_user),
|
||||
@ -152,7 +157,11 @@ async def review(
|
||||
return JSONResponse(content=[r for r in review_query])
|
||||
|
||||
|
||||
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
|
||||
@router.get(
|
||||
"/review_ids",
|
||||
response_model=list[ReviewSegmentResponse],
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def review_ids(request: Request, ids: str):
|
||||
ids = ids.split(",")
|
||||
|
||||
@ -186,7 +195,11 @@ async def review_ids(request: Request, ids: str):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/review/summary", response_model=ReviewSummaryResponse)
|
||||
@router.get(
|
||||
"/review/summary",
|
||||
response_model=ReviewSummaryResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def review_summary(
|
||||
params: ReviewSummaryQueryParams = Depends(),
|
||||
current_user: dict = Depends(get_current_user),
|
||||
@ -461,7 +474,11 @@ async def review_summary(
|
||||
return JSONResponse(content=data)
|
||||
|
||||
|
||||
@router.post("/reviews/viewed", response_model=GenericResponse)
|
||||
@router.post(
|
||||
"/reviews/viewed",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def set_multiple_reviewed(
|
||||
request: Request,
|
||||
body: ReviewModifyMultipleBody,
|
||||
@ -644,7 +661,11 @@ def motion_activity(
|
||||
return JSONResponse(content=normalized)
|
||||
|
||||
|
||||
@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
|
||||
@router.get(
|
||||
"/review/event/{event_id}",
|
||||
response_model=ReviewSegmentResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def get_review_from_event(request: Request, event_id: str):
|
||||
try:
|
||||
review = ReviewSegment.get(
|
||||
@ -659,7 +680,11 @@ async def get_review_from_event(request: Request, event_id: str):
|
||||
)
|
||||
|
||||
|
||||
@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
|
||||
@router.get(
|
||||
"/review/{review_id}",
|
||||
response_model=ReviewSegmentResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def get_review(request: Request, review_id: str):
|
||||
try:
|
||||
review = ReviewSegment.get(ReviewSegment.id == review_id)
|
||||
@ -672,7 +697,11 @@ async def get_review(request: Request, review_id: str):
|
||||
)
|
||||
|
||||
|
||||
@router.delete("/review/{review_id}/viewed", response_model=GenericResponse)
|
||||
@router.delete(
|
||||
"/review/{review_id}/viewed",
|
||||
response_model=GenericResponse,
|
||||
dependencies=[Depends(allow_any_authenticated())],
|
||||
)
|
||||
async def set_not_reviewed(
|
||||
review_id: str,
|
||||
current_user: dict = Depends(get_current_user),
|
||||
|
||||
@ -375,7 +375,19 @@ class WebPushClient(Communicator):
|
||||
ended = state == "end" or state == "genai"
|
||||
|
||||
if state == "genai" and payload["after"]["data"]["metadata"]:
|
||||
title = payload["after"]["data"]["metadata"]["title"]
|
||||
base_title = payload["after"]["data"]["metadata"]["title"]
|
||||
threat_level = payload["after"]["data"]["metadata"].get(
|
||||
"potential_threat_level", 0
|
||||
)
|
||||
|
||||
# Add prefix for threat levels 1 and 2
|
||||
if threat_level == 1:
|
||||
title = f"Needs Review: {base_title}"
|
||||
elif threat_level == 2:
|
||||
title = f"Security Concern: {base_title}"
|
||||
else:
|
||||
title = base_title
|
||||
|
||||
message = payload["after"]["data"]["metadata"]["scene"]
|
||||
else:
|
||||
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
|
||||
|
||||
@ -12,6 +12,7 @@ from typing import Any
|
||||
|
||||
import cv2
|
||||
from peewee import DoesNotExist
|
||||
from titlecase import titlecase
|
||||
|
||||
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum
|
||||
from frigate.comms.inter_process import InterProcessRequestor
|
||||
@ -455,14 +456,14 @@ def run_analysis(
|
||||
|
||||
for i, verified_label in enumerate(final_data["data"]["verified_objects"]):
|
||||
object_type = verified_label.replace("-verified", "").replace("_", " ")
|
||||
name = sub_labels_list[i].replace("_", " ").title()
|
||||
name = titlecase(sub_labels_list[i].replace("_", " "))
|
||||
unified_objects.append(f"{name} ({object_type})")
|
||||
|
||||
for label in objects_list:
|
||||
if "-verified" in label:
|
||||
continue
|
||||
elif label in labelmap_objects:
|
||||
object_type = label.replace("_", " ").title()
|
||||
object_type = titlecase(label.replace("_", " "))
|
||||
|
||||
if label in attribute_labels:
|
||||
unified_objects.append(f"{object_type} (delivery/service)")
|
||||
|
||||
@ -405,9 +405,6 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
|
||||
if obj_data.get("end_time") is not None:
|
||||
return
|
||||
|
||||
if obj_data.get("stationary"):
|
||||
return
|
||||
|
||||
object_id = obj_data["id"]
|
||||
|
||||
if (
|
||||
|
||||
@ -2,7 +2,6 @@ import glob
|
||||
import logging
|
||||
import os
|
||||
import shutil
|
||||
import time
|
||||
import urllib.request
|
||||
import zipfile
|
||||
from queue import Queue
|
||||
@ -55,6 +54,9 @@ class MemryXDetector(DetectionApi):
|
||||
)
|
||||
return
|
||||
|
||||
# Initialize stop_event as None, will be set later by set_stop_event()
|
||||
self.stop_event = None
|
||||
|
||||
model_cfg = getattr(detector_config, "model", None)
|
||||
|
||||
# Check if model_type was explicitly set by the user
|
||||
@ -363,26 +365,43 @@ class MemryXDetector(DetectionApi):
|
||||
def process_input(self):
|
||||
"""Input callback function: wait for frames in the input queue, preprocess, and send to MX3 (return)"""
|
||||
while True:
|
||||
# Check if shutdown is requested
|
||||
if self.stop_event and self.stop_event.is_set():
|
||||
logger.debug("[process_input] Stop event detected, returning None")
|
||||
return None
|
||||
try:
|
||||
# Wait for a frame from the queue (blocking call)
|
||||
frame = self.capture_queue.get(
|
||||
block=True
|
||||
) # Blocks until data is available
|
||||
# Wait for a frame from the queue with timeout to check stop_event periodically
|
||||
frame = self.capture_queue.get(block=True, timeout=0.5)
|
||||
|
||||
return frame
|
||||
|
||||
except Exception as e:
|
||||
logger.info(f"[process_input] Error processing input: {e}")
|
||||
time.sleep(0.1) # Prevent busy waiting in case of error
|
||||
# Silently handle queue.Empty timeouts (expected during normal operation)
|
||||
# Log any other unexpected exceptions
|
||||
if "Empty" not in str(type(e).__name__):
|
||||
logger.warning(f"[process_input] Unexpected error: {e}")
|
||||
# Loop continues and will check stop_event at the top
|
||||
|
||||
def receive_output(self):
|
||||
"""Retrieve processed results from MemryX output queue + a copy of the original frame"""
|
||||
connection_id = (
|
||||
self.capture_id_queue.get()
|
||||
) # Get the corresponding connection ID
|
||||
detections = self.output_queue.get() # Get detections from MemryX
|
||||
try:
|
||||
# Get connection ID with timeout
|
||||
connection_id = self.capture_id_queue.get(
|
||||
block=True, timeout=1.0
|
||||
) # Get the corresponding connection ID
|
||||
detections = self.output_queue.get() # Get detections from MemryX
|
||||
|
||||
return connection_id, detections
|
||||
return connection_id, detections
|
||||
|
||||
except Exception as e:
|
||||
# On timeout or stop event, return None
|
||||
if self.stop_event and self.stop_event.is_set():
|
||||
logger.debug("[receive_output] Stop event detected, exiting")
|
||||
# Silently handle queue.Empty timeouts, they're expected during normal operation
|
||||
elif "Empty" not in str(type(e).__name__):
|
||||
logger.warning(f"[receive_output] Error receiving output: {e}")
|
||||
|
||||
return None, None
|
||||
|
||||
def post_process_yolonas(self, output):
|
||||
predictions = output[0]
|
||||
@ -831,6 +850,19 @@ class MemryXDetector(DetectionApi):
|
||||
f"{self.memx_model_type} is currently not supported for memryx. See the docs for more info on supported models."
|
||||
)
|
||||
|
||||
def set_stop_event(self, stop_event):
|
||||
"""Set the stop event for graceful shutdown."""
|
||||
self.stop_event = stop_event
|
||||
|
||||
def shutdown(self):
|
||||
"""Gracefully shutdown the MemryX accelerator"""
|
||||
try:
|
||||
if hasattr(self, "accl") and self.accl is not None:
|
||||
self.accl.shutdown()
|
||||
logger.info("MemryX accelerator shutdown complete")
|
||||
except Exception as e:
|
||||
logger.error(f"Error during MemryX shutdown: {e}")
|
||||
|
||||
def detect_raw(self, tensor_input: np.ndarray):
|
||||
"""Removed synchronous detect_raw() function so that we only use async"""
|
||||
return 0
|
||||
|
||||
@ -205,14 +205,20 @@ Rules for the report:
|
||||
- Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
|
||||
|
||||
- Threat levels
|
||||
- Always show (threat level: X) for each event.
|
||||
- Always show the threat level for each event using these labels:
|
||||
- Threat level 0: "Normal"
|
||||
- Threat level 1: "Needs review"
|
||||
- Threat level 2: "Security concern"
|
||||
- Format as (threat level: Normal), (threat level: Needs review), or (threat level: Security concern).
|
||||
- If multiple events at the same time share the same threat level, only state it once.
|
||||
|
||||
- Final assessment
|
||||
- End with a Final Assessment section.
|
||||
- If all events are threat level 1 with no escalation:
|
||||
- If all events are threat level 0:
|
||||
Final assessment: Only normal residential activity observed during this period.
|
||||
- If threat level 2+ events are present, clearly summarize them as Potential concerns requiring review.
|
||||
- If threat level 1 events are present:
|
||||
Final assessment: Some activity requires review but no security concerns identified.
|
||||
- If threat level 2 events are present, clearly summarize them as Security concerns requiring immediate attention.
|
||||
|
||||
- Conciseness
|
||||
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
|
||||
|
||||
@ -43,6 +43,7 @@ class BaseLocalDetector(ObjectDetector):
|
||||
self,
|
||||
detector_config: BaseDetectorConfig = None,
|
||||
labels: str = None,
|
||||
stop_event: MpEvent = None,
|
||||
):
|
||||
self.fps = EventsPerSecond()
|
||||
if labels is None:
|
||||
@ -60,6 +61,10 @@ class BaseLocalDetector(ObjectDetector):
|
||||
|
||||
self.detect_api = create_detector(detector_config)
|
||||
|
||||
# If the detector supports stop_event, pass it
|
||||
if hasattr(self.detect_api, "set_stop_event") and stop_event:
|
||||
self.detect_api.set_stop_event(stop_event)
|
||||
|
||||
def _transform_input(self, tensor_input: np.ndarray) -> np.ndarray:
|
||||
if self.input_transform:
|
||||
tensor_input = np.transpose(tensor_input, self.input_transform)
|
||||
@ -240,6 +245,10 @@ class AsyncDetectorRunner(FrigateProcess):
|
||||
while not self.stop_event.is_set():
|
||||
connection_id, detections = self._detector.async_receive_output()
|
||||
|
||||
# Handle timeout case (queue.Empty) - just continue
|
||||
if connection_id is None:
|
||||
continue
|
||||
|
||||
if not self.send_times:
|
||||
# guard; shouldn't happen if send/recv are balanced
|
||||
continue
|
||||
@ -266,21 +275,38 @@ class AsyncDetectorRunner(FrigateProcess):
|
||||
|
||||
self._frame_manager = SharedMemoryFrameManager()
|
||||
self._publisher = ObjectDetectorPublisher()
|
||||
self._detector = AsyncLocalObjectDetector(detector_config=self.detector_config)
|
||||
self._detector = AsyncLocalObjectDetector(
|
||||
detector_config=self.detector_config, stop_event=self.stop_event
|
||||
)
|
||||
|
||||
for name in self.cameras:
|
||||
self.create_output_shm(name)
|
||||
|
||||
t_detect = threading.Thread(target=self._detect_worker, daemon=True)
|
||||
t_result = threading.Thread(target=self._result_worker, daemon=True)
|
||||
t_detect = threading.Thread(target=self._detect_worker, daemon=False)
|
||||
t_result = threading.Thread(target=self._result_worker, daemon=False)
|
||||
t_detect.start()
|
||||
t_result.start()
|
||||
|
||||
while not self.stop_event.is_set():
|
||||
time.sleep(0.5)
|
||||
try:
|
||||
while not self.stop_event.is_set():
|
||||
time.sleep(0.5)
|
||||
|
||||
self._publisher.stop()
|
||||
logger.info("Exited async detection process...")
|
||||
logger.info(
|
||||
"Stop event detected, waiting for detector threads to finish..."
|
||||
)
|
||||
|
||||
# Wait for threads to finish processing
|
||||
t_detect.join(timeout=5)
|
||||
t_result.join(timeout=5)
|
||||
|
||||
# Shutdown the AsyncDetector
|
||||
self._detector.detect_api.shutdown()
|
||||
|
||||
self._publisher.stop()
|
||||
except Exception as e:
|
||||
logger.error(f"Error during async detector shutdown: {e}")
|
||||
finally:
|
||||
logger.info("Exited Async detection process...")
|
||||
|
||||
|
||||
class ObjectDetectProcess:
|
||||
@ -308,7 +334,7 @@ class ObjectDetectProcess:
|
||||
# if the process has already exited on its own, just return
|
||||
if self.detect_process and self.detect_process.exitcode:
|
||||
return
|
||||
self.detect_process.terminate()
|
||||
|
||||
logging.info("Waiting for detection process to exit gracefully...")
|
||||
self.detect_process.join(timeout=30)
|
||||
if self.detect_process.exitcode is None:
|
||||
|
||||
@ -3,6 +3,8 @@ import logging
|
||||
import os
|
||||
import unittest
|
||||
|
||||
from fastapi import Request
|
||||
from fastapi.testclient import TestClient
|
||||
from peewee_migrate import Router
|
||||
from playhouse.sqlite_ext import SqliteExtDatabase
|
||||
from playhouse.sqliteq import SqliteQueueDatabase
|
||||
@ -16,6 +18,20 @@ from frigate.review.types import SeverityEnum
|
||||
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS
|
||||
|
||||
|
||||
class AuthTestClient(TestClient):
|
||||
"""TestClient that automatically adds auth headers to all requests."""
|
||||
|
||||
def request(self, *args, **kwargs):
|
||||
# Add default auth headers if not already present
|
||||
headers = kwargs.get("headers") or {}
|
||||
if "remote-user" not in headers:
|
||||
headers["remote-user"] = "admin"
|
||||
if "remote-role" not in headers:
|
||||
headers["remote-role"] = "admin"
|
||||
kwargs["headers"] = headers
|
||||
return super().request(*args, **kwargs)
|
||||
|
||||
|
||||
class BaseTestHttp(unittest.TestCase):
|
||||
def setUp(self, models):
|
||||
# setup clean database for each test run
|
||||
@ -113,7 +129,9 @@ class BaseTestHttp(unittest.TestCase):
|
||||
pass
|
||||
|
||||
def create_app(self, stats=None, event_metadata_publisher=None):
|
||||
return create_fastapi_app(
|
||||
from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
|
||||
|
||||
app = create_fastapi_app(
|
||||
FrigateConfig(**self.minimal_config),
|
||||
self.db,
|
||||
None,
|
||||
@ -123,8 +141,33 @@ class BaseTestHttp(unittest.TestCase):
|
||||
stats,
|
||||
event_metadata_publisher,
|
||||
None,
|
||||
enforce_default_admin=False,
|
||||
)
|
||||
|
||||
# Default test mocks for authentication
|
||||
# Tests can override these in their setUp if needed
|
||||
# This mock uses headers set by AuthTestClient
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
async def mock_get_allowed_cameras_for_filter(request: Request):
|
||||
return list(self.minimal_config.get("cameras", {}).keys())
|
||||
|
||||
app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
app.dependency_overrides[get_allowed_cameras_for_filter] = (
|
||||
mock_get_allowed_cameras_for_filter
|
||||
)
|
||||
|
||||
return app
|
||||
|
||||
def insert_mock_event(
|
||||
self,
|
||||
id: str,
|
||||
|
||||
@ -1,10 +1,8 @@
|
||||
from unittest.mock import Mock
|
||||
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
from frigate.models import Event, Recordings, ReviewSegment
|
||||
from frigate.stats.emitter import StatsEmitter
|
||||
from frigate.test.http_api.base_http_test import BaseTestHttp
|
||||
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
|
||||
|
||||
|
||||
class TestHttpApp(BaseTestHttp):
|
||||
@ -20,7 +18,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
stats.get_latest_stats.return_value = self.test_stats
|
||||
app = super().create_app(stats)
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
response = client.get("/stats")
|
||||
response_json = response.json()
|
||||
assert response_json == self.test_stats
|
||||
|
||||
@ -1,14 +1,13 @@
|
||||
from unittest.mock import patch
|
||||
|
||||
from fastapi import HTTPException, Request
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
from frigate.api.auth import (
|
||||
get_allowed_cameras_for_filter,
|
||||
get_current_user,
|
||||
)
|
||||
from frigate.models import Event, Recordings, ReviewSegment
|
||||
from frigate.test.http_api.base_http_test import BaseTestHttp
|
||||
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp
|
||||
|
||||
|
||||
class TestCameraAccessEventReview(BaseTestHttp):
|
||||
@ -16,9 +15,17 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().setUp([Event, ReviewSegment, Recordings])
|
||||
self.app = super().create_app()
|
||||
|
||||
# Mock get_current_user to return valid user for all tests
|
||||
async def mock_get_current_user():
|
||||
return {"username": "test_user", "role": "user"}
|
||||
# Mock get_current_user for all tests
|
||||
async def mock_get_current_user(request: Request):
|
||||
username = request.headers.get("remote-user")
|
||||
role = request.headers.get("remote-role")
|
||||
if not username or not role:
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
return JSONResponse(
|
||||
content={"message": "No authorization headers."}, status_code=401
|
||||
)
|
||||
return {"username": username, "role": role}
|
||||
|
||||
self.app.dependency_overrides[get_current_user] = mock_get_current_user
|
||||
|
||||
@ -30,21 +37,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_event("event1", camera="front_door")
|
||||
super().insert_mock_event("event2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return ["front_door"]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events")
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
assert "event1" in ids
|
||||
assert "event2" not in ids
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
|
||||
return [
|
||||
"front_door",
|
||||
"back_door",
|
||||
]
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
|
||||
with AuthTestClient(self.app) as client:
|
||||
resp = client.get("/events")
|
||||
assert resp.status_code == 200
|
||||
ids = [e["id"] for e in resp.json()]
|
||||
@ -54,21 +65,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
|
||||
super().insert_mock_review_segment("rev1", camera="front_door")
|
||||
super().insert_mock_review_segment("rev2", camera="back_door")
|
||||
|
||||
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
|
||||
"front_door"
|
||||
]
|
||||
with TestClient(self.app) as client:
|
||||
async def mock_cameras(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
assert "rev1" in ids
assert "rev2" not in ids

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/review")
assert resp.status_code == 200
ids = [r["id"] for r in resp.json()]
@@ -84,7 +99,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")

with patch("frigate.api.event.require_camera_access", mock_require_allowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 200
assert resp.json()["id"] == "event1"
@@ -94,7 +109,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")

with patch("frigate.api.event.require_camera_access", mock_require_disallowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/events/event1")
assert resp.status_code == 403

@@ -108,7 +123,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")

with patch("frigate.api.review.require_camera_access", mock_require_allowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 200
assert resp.json()["id"] == "rev1"
@@ -118,7 +133,7 @@ class TestCameraAccessEventReview(BaseTestHttp):
raise HTTPException(status_code=403, detail="Access denied")

with patch("frigate.api.review.require_camera_access", mock_require_disallowed):
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
resp = client.get("/review/rev1")
assert resp.status_code == 403

@@ -126,21 +141,25 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
assert "event1" in ids
assert "event2" not in ids

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events", params={"cameras": "all"})
assert resp.status_code == 200
ids = [e["id"] for e in resp.json()]
@@ -150,20 +169,24 @@ class TestCameraAccessEventReview(BaseTestHttp):
super().insert_mock_event("event1", camera="front_door")
super().insert_mock_event("event2", camera="back_door")

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events/summary")
assert resp.status_code == 200
summary_list = resp.json()
assert len(summary_list) == 1

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]
with TestClient(self.app) as client:
async def mock_cameras(request: Request):
return [
"front_door",
"back_door",
]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = mock_cameras
with AuthTestClient(self.app) as client:
resp = client.get("/events/summary")
summary_list = resp.json()
assert len(summary_list) == 2

@@ -2,14 +2,13 @@ from datetime import datetime
from typing import Any
from unittest.mock import Mock

from fastapi.testclient import TestClient
from playhouse.shortcuts import model_to_dict

from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.models import Event, Recordings, ReviewSegment, Timeline
from frigate.stats.emitter import StatsEmitter
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp, Request
from frigate.test.test_storage import _insert_mock_event


@@ -18,14 +17,26 @@ class TestHttpApp(BaseTestHttp):
super().setUp([Event, Recordings, ReviewSegment, Timeline])
self.app = super().create_app()

# Mock auth to bypass camera access for tests
async def mock_get_current_user(request: Any):
return {"username": "test_user", "role": "admin"}
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse

return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}

self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]

async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)

def tearDown(self):
self.app.dependency_overrides.clear()
@ -35,20 +46,20 @@ class TestHttpApp(BaseTestHttp):
|
||||
################################### GET /events Endpoint #########################################################
|
||||
####################################################################################################################
|
||||
def test_get_event_list_no_events(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
events = client.get("/events").json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_no_match_event_id(self):
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events", params={"event_id": "abc"}).json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_match_event_id(self):
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events", params={"event_id": id}).json()
|
||||
assert len(events) == 1
|
||||
@ -58,7 +69,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
id = "123456.random"
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id, now, now + 1)
|
||||
events = client.get(
|
||||
"/events", params={"max_length": 1, "min_length": 1}
|
||||
@ -69,7 +80,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_max_length(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"max_length": 1}).json()
|
||||
@ -78,7 +89,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_min_length(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"min_length": 3}).json()
|
||||
@ -88,7 +99,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
events = client.get("/events").json()
|
||||
assert len(events) == 1
|
||||
@ -108,14 +119,14 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_no_match_has_clip(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, now, now + 2)
|
||||
events = client.get("/events", params={"has_clip": 0}).json()
|
||||
assert len(events) == 0
|
||||
|
||||
def test_get_event_list_has_clip(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_event(id, has_clip=True)
|
||||
events = client.get("/events", params={"has_clip": 1}).json()
|
||||
@ -123,7 +134,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
assert events[0]["id"] == id
|
||||
|
||||
def test_get_event_list_sort_score(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
super().insert_mock_event(id, top_score=37, score=37, data={"score": 50})
|
||||
@ -141,7 +152,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_event_list_sort_start_time(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "54321.random"
|
||||
super().insert_mock_event(id, start_time=now + 3)
|
||||
@ -159,7 +170,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_get_good_event(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event = client.get(f"/events/{id}").json()
|
||||
|
||||
@ -171,7 +182,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
id = "123456.random"
|
||||
bad_id = "654321.other"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event_response = client.get(f"/events/{bad_id}")
|
||||
assert event_response.status_code == 404
|
||||
@ -180,7 +191,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_delete_event(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
event = client.get(f"/events/{id}").json()
|
||||
assert event
|
||||
@ -193,7 +204,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
def test_event_retention(self):
|
||||
id = "123456.random"
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(id)
|
||||
client.post(f"/events/{id}/retain", headers={"remote-role": "admin"})
|
||||
event = client.get(f"/events/{id}").json()
|
||||
@ -212,12 +223,11 @@ class TestHttpApp(BaseTestHttp):
|
||||
morning = 1656590400 # 06/30/2022 6 am (GMT)
|
||||
evening = 1656633600 # 06/30/2022 6 pm (GMT)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_event(morning_id, morning)
|
||||
super().insert_mock_event(evening_id, evening)
|
||||
# both events come back
|
||||
events = client.get("/events").json()
|
||||
print("events!!!", events)
|
||||
assert events
|
||||
assert len(events) == 2
|
||||
# morning event is excluded
|
||||
@ -248,7 +258,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
|
||||
mock_event_updater.publish.side_effect = update_event
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
super().insert_mock_event(id)
|
||||
new_sub_label_response = client.post(
|
||||
f"/events/{id}/sub_label",
|
||||
@ -285,7 +295,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
|
||||
mock_event_updater.publish.side_effect = update_event
|
||||
|
||||
with TestClient(app) as client:
|
||||
with AuthTestClient(app) as client:
|
||||
super().insert_mock_event(id)
|
||||
client.post(
|
||||
f"/events/{id}/sub_label",
|
||||
@ -301,7 +311,7 @@ class TestHttpApp(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
def test_get_metrics(self):
|
||||
"""ensure correct prometheus metrics api response"""
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
ts_start = datetime.now().timestamp()
|
||||
ts_end = ts_start + 30
|
||||
_insert_mock_event(
|
||||
|
||||
@@ -1,14 +1,13 @@
"""Unit tests for recordings/media API endpoints."""

from datetime import datetime, timezone
from typing import Any

import pytz
from fastapi.testclient import TestClient
from fastapi import Request

from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Recordings
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp


class TestHttpMedia(BaseTestHttp):
@@ -19,15 +18,26 @@ class TestHttpMedia(BaseTestHttp):
super().setUp([Recordings])
self.app = super().create_app()

# Mock auth to bypass camera access for tests
async def mock_get_current_user(request: Any):
return {"username": "test_user", "role": "admin"}
# Mock get_current_user for all tests
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse

return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}

self.app.dependency_overrides[get_current_user] = mock_get_current_user
self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door",
"back_door",
]

async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)

def tearDown(self):
"""Clean up after tests."""
@@ -52,7 +62,7 @@ class TestHttpMedia(BaseTestHttp):
# March 11, 2024 at 12:00 PM EDT (after DST)
march_11_noon = tz.localize(datetime(2024, 3, 11, 12, 0, 0)).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_march_9",
@@ -128,7 +138,7 @@ class TestHttpMedia(BaseTestHttp):
# November 4, 2024 at 12:00 PM EST (after DST)
nov_4_noon = tz.localize(datetime(2024, 11, 4, 12, 0, 0)).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for each day
Recordings.insert(
id="recording_nov_2",
@@ -195,7 +205,15 @@ class TestHttpMedia(BaseTestHttp):
# March 10, 2024 at 3:00 PM EDT (after DST transition)
march_10_afternoon = tz.localize(datetime(2024, 3, 10, 15, 0, 0)).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Override allowed cameras for this test to include both
async def mock_get_allowed_cameras_for_filter(_request: Request):
return ["front_door", "back_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)

# Insert recordings for front_door on March 9
Recordings.insert(
id="front_march_9",
@@ -236,6 +254,14 @@ class TestHttpMedia(BaseTestHttp):
assert summary["2024-03-09"] is True
assert summary["2024-03-10"] is True

# Reset dependency override back to default single camera for other tests
async def reset_allowed_cameras(_request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
reset_allowed_cameras
)

def test_recordings_summary_at_dst_transition_time(self):
"""
Test recordings that span the exact DST transition time.
@@ -250,7 +276,7 @@ class TestHttpMedia(BaseTestHttp):
# This is 1.5 hours of actual time but spans the "missing" hour
after_transition = tz.localize(datetime(2024, 3, 10, 3, 30, 0)).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
Recordings.insert(
id="recording_during_transition",
path="/media/recordings/transition.mp4",
@@ -283,7 +309,7 @@ class TestHttpMedia(BaseTestHttp):
march_9_utc = datetime(2024, 3, 9, 17, 0, 0, tzinfo=timezone.utc).timestamp()
march_10_utc = datetime(2024, 3, 10, 17, 0, 0, tzinfo=timezone.utc).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
Recordings.insert(
id="recording_march_9_utc",
path="/media/recordings/march_9_utc.mp4",
@@ -325,7 +351,7 @@ class TestHttpMedia(BaseTestHttp):
"""
Test recordings summary when no recordings exist.
"""
with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
response = client.get(
"/recordings/summary",
params={"timezone": "America/New_York", "cameras": "all"},
@@ -342,7 +368,7 @@ class TestHttpMedia(BaseTestHttp):
tz = pytz.timezone("America/New_York")
march_10_noon = tz.localize(datetime(2024, 3, 10, 12, 0, 0)).timestamp()

with TestClient(self.app) as client:
with AuthTestClient(self.app) as client:
# Insert recordings for both cameras
Recordings.insert(
id="front_recording",

@@ -1,12 +1,12 @@
from datetime import datetime, timedelta

from fastapi.testclient import TestClient
from fastapi import Request
from peewee import DoesNotExist

from frigate.api.auth import get_allowed_cameras_for_filter, get_current_user
from frigate.models import Event, Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.test.http_api.base_http_test import BaseTestHttp
from frigate.test.http_api.base_http_test import AuthTestClient, BaseTestHttp


class TestHttpReview(BaseTestHttp):
@@ -16,14 +16,26 @@ class TestHttpReview(BaseTestHttp):
self.user_id = "admin"

# Mock get_current_user for all tests
async def mock_get_current_user():
return {"username": self.user_id, "role": "admin"}
# This mock uses headers set by AuthTestClient
async def mock_get_current_user(request: Request):
username = request.headers.get("remote-user")
role = request.headers.get("remote-role")
if not username or not role:
from fastapi.responses import JSONResponse

return JSONResponse(
content={"message": "No authorization headers."}, status_code=401
)
return {"username": username, "role": role}

self.app.dependency_overrides[get_current_user] = mock_get_current_user

self.app.dependency_overrides[get_allowed_cameras_for_filter] = lambda: [
"front_door"
]
async def mock_get_allowed_cameras_for_filter(request: Request):
return ["front_door"]

self.app.dependency_overrides[get_allowed_cameras_for_filter] = (
mock_get_allowed_cameras_for_filter
)

def tearDown(self):
self.app.dependency_overrides.clear()
@ -57,7 +69,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
but ends after is included in the results."""
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random", now, now + 2)
|
||||
response = client.get("/review")
|
||||
assert response.status_code == 200
|
||||
@ -67,7 +79,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_no_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now - 2, now - 1)
|
||||
response = client.get("/review")
|
||||
@ -81,7 +93,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
"""Test that review items outside the range are not returned."""
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now - 2, now - 1)
|
||||
super().insert_mock_review_segment(f"{id}2", now + 4, now + 5)
|
||||
@ -97,7 +109,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_time_filter(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
params = {
|
||||
@ -113,7 +125,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_limit_filter(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
id2 = "654321.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
@ -132,7 +144,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_severity_filters_no_matches(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
|
||||
params = {
|
||||
@ -149,7 +161,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_severity_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
|
||||
params = {
|
||||
@ -165,7 +177,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_with_all_filters(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now, now + 2)
|
||||
params = {
|
||||
@ -188,7 +200,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/summary Endpoint #################################################
|
||||
####################################################################################################################
|
||||
def test_get_review_summary_all_filters(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
params = {
|
||||
"cameras": "front_door",
|
||||
@ -219,7 +231,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
self.assertEqual(response_json, expected_response)
|
||||
|
||||
def test_get_review_summary_no_filters(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.get("/review/summary")
|
||||
assert response.status_code == 200
|
||||
@ -247,7 +259,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
now = datetime.now()
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment(
|
||||
"123456.random", now.timestamp() - 2, now.timestamp() - 1
|
||||
)
|
||||
@ -291,7 +303,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
now = datetime.now()
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random", now.timestamp())
|
||||
five_days_ago_ts = five_days_ago.timestamp()
|
||||
for i in range(20):
|
||||
@ -342,7 +354,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review_summary_multiple_in_same_day_with_reviewed(self):
|
||||
five_days_ago = datetime.today() - timedelta(days=5)
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
five_days_ago_ts = five_days_ago.timestamp()
|
||||
for i in range(10):
|
||||
id = f"123456_{i}.random_alert_not_reviewed"
|
||||
@ -393,14 +405,14 @@ class TestHttpReview(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
|
||||
def test_post_reviews_viewed_no_body(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.post("/reviews/viewed")
|
||||
# Missing ids
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_viewed_no_body_ids(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
body = {"ids": [""]}
|
||||
response = client.post("/reviews/viewed", json=body)
|
||||
@ -408,7 +420,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_viewed_non_existent_id(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": ["1"]}
|
||||
@ -425,7 +437,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
)
|
||||
|
||||
def test_post_reviews_viewed(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": [id]}
|
||||
@ -445,14 +457,14 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### POST reviews/delete Endpoint ################################################
|
||||
####################################################################################################################
|
||||
def test_post_reviews_delete_no_body(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
response = client.post("/reviews/delete", headers={"remote-role": "admin"})
|
||||
# Missing ids
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_delete_no_body_ids(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
super().insert_mock_review_segment("123456.random")
|
||||
body = {"ids": [""]}
|
||||
response = client.post(
|
||||
@ -462,7 +474,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert response.status_code == 422
|
||||
|
||||
def test_post_reviews_delete_non_existent_id(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": ["1"]}
|
||||
@ -479,7 +491,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert review_ids_in_db_after[0].id == id
|
||||
|
||||
def test_post_reviews_delete(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id)
|
||||
body = {"ids": [id]}
|
||||
@ -495,7 +507,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
assert len(review_ids_in_db_after) == 0
|
||||
|
||||
def test_post_reviews_delete_many(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
ids = ["123456.random", "654321.random"]
|
||||
for id in ids:
|
||||
super().insert_mock_review_segment(id)
|
||||
@ -527,7 +539,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_activity_motion_no_data_for_time_range(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
params = {
|
||||
"after": now,
|
||||
"before": now + 3,
|
||||
@ -540,7 +552,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_activity_motion(self):
|
||||
now = int(datetime.now().timestamp())
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
one_m = int((datetime.now() + timedelta(minutes=1)).timestamp())
|
||||
id = "123456.random"
|
||||
id2 = "123451.random"
|
||||
@ -573,7 +585,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/event/{event_id} Endpoint #######################################
|
||||
####################################################################################################################
|
||||
def test_review_event_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
response = client.get("/review/event/123456.random")
|
||||
assert response.status_code == 404
|
||||
response_json = response.json()
|
||||
@ -585,7 +597,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_event_not_found_in_data(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
id = "123456.random"
|
||||
super().insert_mock_review_segment(id, now + 1, now + 2)
|
||||
response = client.get(f"/review/event/{id}")
|
||||
@ -599,7 +611,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_review_get_specific_event(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
event_id = "123456.event.random"
|
||||
super().insert_mock_event(event_id)
|
||||
review_id = "123456.review.random"
|
||||
@ -626,7 +638,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
################################### GET /review/{review_id} Endpoint #######################################
|
||||
####################################################################################################################
|
||||
def test_review_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
response = client.get("/review/123456.random")
|
||||
assert response.status_code == 404
|
||||
response_json = response.json()
|
||||
@ -638,7 +650,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_get_review(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.review.random"
|
||||
super().insert_mock_review_segment(review_id, now + 1, now + 2)
|
||||
response = client.get(f"/review/{review_id}")
|
||||
@ -662,7 +674,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
####################################################################################################################
|
||||
|
||||
def test_delete_review_viewed_review_not_found(self):
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.random"
|
||||
response = client.delete(f"/review/{review_id}/viewed")
|
||||
assert response.status_code == 404
|
||||
@ -675,7 +687,7 @@ class TestHttpReview(BaseTestHttp):
|
||||
def test_delete_review_viewed(self):
|
||||
now = datetime.now().timestamp()
|
||||
|
||||
with TestClient(self.app) as client:
|
||||
with AuthTestClient(self.app) as client:
|
||||
review_id = "123456.review.random"
|
||||
super().insert_mock_review_segment(review_id, now + 1, now + 2)
|
||||
self._insert_user_review_status(review_id, reviewed=True)
|
||||
|
||||
@@ -348,7 +348,7 @@ def migrate_016_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]


def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Handle migrating frigate config to 0.16-0"""
"""Handle migrating frigate config to 0.17-0"""
new_config = config.copy()

# migrate global to new recording configuration
@@ -380,7 +380,7 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]

if global_genai:
new_genai_config = {}
new_object_config = config.get("objects", {})
new_object_config = new_config.get("objects", {})
new_object_config["genai"] = {}

for key in global_genai.keys():
@@ -389,7 +389,8 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
else:
new_object_config["genai"][key] = global_genai[key]

config["genai"] = new_genai_config
new_config["genai"] = new_genai_config
new_config["objects"] = new_object_config

for name, camera in config.get("cameras", {}).items():
camera_config: dict[str, dict[str, Any]] = camera.copy()
@@ -415,8 +416,9 @@ def migrate_017_0(config: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]
camera_genai = camera_config.get("genai", {})

if camera_genai:
new_object_config = config.get("objects", {})
new_object_config["genai"] = camera_genai
camera_object_config = camera_config.get("objects", {})
camera_object_config["genai"] = camera_genai
camera_config["objects"] = camera_object_config
del camera_config["genai"]

new_config["cameras"][name] = camera_config

@@ -1,7 +1,10 @@
import atexit
import faulthandler
import logging
import multiprocessing as mp
import os
import pathlib
import subprocess
import threading
from logging.handlers import QueueHandler
from multiprocessing.synchronize import Event as MpEvent
@@ -48,6 +51,7 @@ class FrigateProcess(BaseProcess):

def before_start(self) -> None:
self.__log_queue = frigate.log.log_listener.queue
self.__memray_tracker = None

def pre_run_setup(self, logConfig: LoggerConfig | None = None) -> None:
os.nice(self.priority)
@@ -64,3 +68,86 @@ class FrigateProcess(BaseProcess):
frigate.log.apply_log_levels(
logConfig.default.value.upper(), logConfig.logs
)

self._setup_memray()

def _setup_memray(self) -> None:
"""Setup memray profiling if enabled via environment variable."""
memray_modules = os.environ.get("FRIGATE_MEMRAY_MODULES", "")

if not memray_modules:
return

# Extract module name from process name (e.g., "frigate.capture:camera" -> "frigate.capture")
process_name = self.name
module_name = (
process_name.split(":")[0] if ":" in process_name else process_name
)

enabled_modules = [m.strip() for m in memray_modules.split(",")]

if module_name not in enabled_modules and process_name not in enabled_modules:
return

try:
import memray

reports_dir = pathlib.Path("/config/memray_reports")
reports_dir.mkdir(parents=True, exist_ok=True)
safe_name = (
process_name.replace(":", "_").replace("/", "_").replace("\\", "_")
)

binary_file = reports_dir / f"{safe_name}.bin"

self.__memray_tracker = memray.Tracker(str(binary_file))
self.__memray_tracker.__enter__()

# Register cleanup handler to stop tracking and generate HTML report
# atexit runs on normal exits and most signal-based terminations (SIGTERM, SIGINT)
# For hard kills (SIGKILL) or segfaults, the binary file is preserved for manual generation
atexit.register(self._cleanup_memray, safe_name, binary_file)

self.logger.info(
f"Memray profiling enabled for module {module_name} (process: {self.name}). "
f"Binary file (updated continuously): {binary_file}. "
f"HTML report will be generated on exit: {reports_dir}/{safe_name}.html. "
f"If process crashes, manually generate with: memray flamegraph {binary_file}"
)
except Exception as e:
self.logger.error(f"Failed to setup memray profiling: {e}", exc_info=True)

def _cleanup_memray(self, safe_name: str, binary_file: pathlib.Path) -> None:
"""Stop memray tracking and generate HTML report."""
if self.__memray_tracker is None:
return

try:
self.__memray_tracker.__exit__(None, None, None)
self.__memray_tracker = None

reports_dir = pathlib.Path("/config/memray_reports")
html_file = reports_dir / f"{safe_name}.html"

result = subprocess.run(
["memray", "flamegraph", "--output", str(html_file), str(binary_file)],
capture_output=True,
text=True,
timeout=10,
)

if result.returncode == 0:
self.logger.info(f"Memray report generated: {html_file}")
else:
self.logger.error(
f"Failed to generate memray report: {result.stderr}. "
f"Binary file preserved at {binary_file} for manual generation."
)

# Keep the binary file for manual report generation if needed
# Users can run: memray flamegraph {binary_file}

except subprocess.TimeoutExpired:
self.logger.error("Memray report generation timed out")
except Exception as e:
self.logger.error(f"Failed to cleanup memray profiling: {e}", exc_info=True)

web/package-lock.json (generated, 6 changes)
@@ -4702,9 +4702,9 @@
}
},
"node_modules/caniuse-lite": {
"version": "1.0.30001651",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001651.tgz",
"integrity": "sha512-9Cf+Xv1jJNe1xPZLGuUXLNkE1BoDkqRqYyFJ9TDYSqhduqA4hu4oR9HluGoWYQC/aj8WHjsGVV+bwkh0+tegRg==",
"version": "1.0.30001757",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001757.tgz",
"integrity": "sha512-r0nnL/I28Zi/yjk1el6ilj27tKcdjLsNqAOZr0yVjWPrSQyHgKI2INaEWw21bAQSv2LXRt1XuCS/GomNpWOxsQ==",
"dev": true,
"funding": [
{

@@ -166,6 +166,7 @@
"noImages": "No sample images generated",
"classifying": "Classifying & Training...",
"trainingStarted": "Training started successfully",
"modelCreated": "Model created successfully. Use the Recent Classifications view to add images for missing states, then train the model.",
"errors": {
"noCameras": "No cameras configured",
"noObjectLabel": "No object label selected",
@@ -173,7 +174,11 @@
"generationFailed": "Generation failed. Please try again.",
"classifyFailed": "Failed to classify images: {{error}}"
},
"generateSuccess": "Successfully generated sample images"
"generateSuccess": "Successfully generated sample images",
"missingStatesWarning": {
"title": "Missing State Examples",
"description": "You haven't selected examples for all states. The model will not be trained until all states have images. After continuing, use the Recent Classifications view to classify images for the missing states, then train the model."
}
}
}
}

@@ -54,6 +54,7 @@
"selected_other": "{{count}} selected",
"camera": "Camera",
"detected": "detected",
"suspiciousActivity": "Suspicious Activity",
"threateningActivity": "Threatening Activity"
"normalActivity": "Normal",
"needsReview": "Needs review",
"securityConcern": "Security concern"
}

@ -10,12 +10,8 @@ import useSWR from "swr";
|
||||
import { baseUrl } from "@/api/baseUrl";
|
||||
import { isMobile } from "react-device-detect";
|
||||
import { cn } from "@/lib/utils";
|
||||
import {
|
||||
Tooltip,
|
||||
TooltipContent,
|
||||
TooltipTrigger,
|
||||
} from "@/components/ui/tooltip";
|
||||
import { TooltipPortal } from "@radix-ui/react-tooltip";
|
||||
import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
|
||||
import { IoIosWarning } from "react-icons/io";
|
||||
|
||||
export type Step3FormData = {
|
||||
examplesGenerated: boolean;
|
||||
@ -145,20 +141,67 @@ export default function Step3ChooseExamples({
|
||||
);
|
||||
await Promise.all(categorizePromises);
|
||||
|
||||
// Step 3: Kick off training
|
||||
await axios.post(`/classification/${step1Data.modelName}/train`);
|
||||
// Step 2.5: Create empty folders for classes that don't have any images
|
||||
// This ensures all classes are available in the dataset view later
|
||||
const classesWithImages = new Set(
|
||||
Object.values(classifications).filter((c) => c && c !== "none"),
|
||||
);
|
||||
const emptyFolderPromises = step1Data.classes
|
||||
.filter((className) => !classesWithImages.has(className))
|
||||
.map((className) =>
|
||||
axios.post(
|
||||
`/classification/${step1Data.modelName}/dataset/${className}/create`,
|
||||
),
|
||||
);
|
||||
await Promise.all(emptyFolderPromises);
|
||||
|
||||
toast.success(t("wizard.step3.trainingStarted"), {
|
||||
closeButton: true,
|
||||
});
|
||||
setIsTraining(true);
|
||||
// Step 3: Determine if we should train
|
||||
// For state models, we need ALL states to have examples
|
||||
// For object models, we need at least 2 classes with images
|
||||
const allStatesHaveExamplesForTraining =
|
||||
step1Data.modelType !== "state" ||
|
||||
step1Data.classes.every((className) =>
|
||||
classesWithImages.has(className),
|
||||
);
|
||||
const shouldTrain =
|
||||
allStatesHaveExamplesForTraining && classesWithImages.size >= 2;
|
||||
|
||||
// Step 4: Kick off training only if we have enough classes with images
|
||||
if (shouldTrain) {
|
||||
await axios.post(`/classification/${step1Data.modelName}/train`);
|
||||
|
||||
toast.success(t("wizard.step3.trainingStarted"), {
|
||||
closeButton: true,
|
||||
});
|
||||
setIsTraining(true);
|
||||
} else {
|
||||
// Don't train - not all states have examples
|
||||
toast.success(t("wizard.step3.modelCreated"), {
|
||||
closeButton: true,
|
||||
});
|
||||
setIsTraining(false);
|
||||
onClose();
|
||||
}
|
||||
},
|
||||
[step1Data, step2Data, t],
|
||||
[step1Data, step2Data, t, onClose],
|
||||
);
|
||||
|
||||
const handleContinueClassification = useCallback(async () => {
|
||||
// Mark selected images with current class
|
||||
const newClassifications = { ...imageClassifications };
|
||||
|
||||
// Handle user going back and de-selecting images
|
||||
const imagesToCheck = unknownImages.slice(0, 24);
|
||||
imagesToCheck.forEach((imageName) => {
|
||||
if (
|
||||
newClassifications[imageName] === currentClass &&
|
||||
!selectedImages.has(imageName)
|
||||
) {
|
||||
delete newClassifications[imageName];
|
||||
}
|
||||
});
|
||||
|
||||
// Then, add all currently selected images to the current class
|
||||
selectedImages.forEach((imageName) => {
|
||||
newClassifications[imageName] = currentClass;
|
||||
});
|
||||
@ -329,8 +372,43 @@ export default function Step3ChooseExamples({
|
||||
return unclassifiedImages.length === 0;
|
||||
}, [unclassifiedImages]);
|
||||
|
||||
// For state models on the last class, require all images to be classified
|
||||
const isLastClass = currentClassIndex === allClasses.length - 1;
|
||||
const statesWithExamples = useMemo(() => {
|
||||
if (step1Data.modelType !== "state") return new Set<string>();
|
||||
|
||||
const states = new Set<string>();
|
||||
const allImages = unknownImages.slice(0, 24);
|
||||
|
||||
// Check which states have at least one image classified
|
||||
allImages.forEach((img) => {
|
||||
let className: string | undefined;
|
||||
if (selectedImages.has(img)) {
|
||||
className = currentClass;
|
||||
} else {
|
||||
className = imageClassifications[img];
|
||||
}
|
||||
if (className && allClasses.includes(className)) {
|
||||
states.add(className);
|
||||
}
|
||||
});
|
||||
|
||||
return states;
|
||||
}, [
|
||||
step1Data.modelType,
|
||||
unknownImages,
|
||||
imageClassifications,
|
||||
selectedImages,
|
||||
currentClass,
|
||||
allClasses,
|
||||
]);
|
||||
|
||||
const allStatesHaveExamples = useMemo(() => {
|
||||
if (step1Data.modelType !== "state") return true;
|
||||
return allClasses.every((className) => statesWithExamples.has(className));
|
||||
}, [step1Data.modelType, allClasses, statesWithExamples]);
|
||||
|
||||
// For state models on the last class, require all images to be classified
|
||||
// But allow proceeding even if not all states have examples (with warning)
|
||||
const canProceed = useMemo(() => {
|
||||
if (step1Data.modelType === "state" && isLastClass) {
|
||||
// Check if all 24 images will be classified after current selections are applied
|
||||
@ -353,6 +431,28 @@ export default function Step3ChooseExamples({
|
||||
selectedImages,
|
||||
]);
|
||||
|
||||
const hasUnclassifiedImages = useMemo(() => {
|
||||
if (!unknownImages) return false;
|
||||
const allImages = unknownImages.slice(0, 24);
|
||||
return allImages.some((img) => !imageClassifications[img]);
|
||||
}, [unknownImages, imageClassifications]);
|
||||
|
||||
const showMissingStatesWarning = useMemo(() => {
|
||||
return (
|
||||
step1Data.modelType === "state" &&
|
||||
isLastClass &&
|
||||
!allStatesHaveExamples &&
|
||||
!hasUnclassifiedImages &&
|
||||
hasGenerated
|
||||
);
|
||||
}, [
|
||||
step1Data.modelType,
|
||||
isLastClass,
|
||||
allStatesHaveExamples,
|
||||
hasUnclassifiedImages,
|
||||
hasGenerated,
|
||||
]);
|
||||
|
||||
const handleBack = useCallback(() => {
|
||||
if (currentClassIndex > 0) {
|
||||
const previousClass = allClasses[currentClassIndex - 1];
|
||||
@ -399,6 +499,17 @@ export default function Step3ChooseExamples({
|
||||
</div>
|
||||
) : hasGenerated ? (
|
||||
<div className="flex flex-col gap-4">
|
||||
{showMissingStatesWarning && (
|
||||
<Alert variant="destructive">
|
||||
<IoIosWarning className="size-5" />
|
||||
<AlertTitle>
|
||||
{t("wizard.step3.missingStatesWarning.title")}
|
||||
</AlertTitle>
|
||||
<AlertDescription>
|
||||
{t("wizard.step3.missingStatesWarning.description")}
|
||||
</AlertDescription>
|
||||
</Alert>
|
||||
)}
|
||||
{!allImagesClassified && (
|
||||
<div className="text-center">
|
||||
<h3 className="text-lg font-medium">
|
||||
@ -474,35 +585,22 @@ export default function Step3ChooseExamples({
|
||||
<Button type="button" onClick={handleBack} className="sm:flex-1">
|
||||
{t("button.back", { ns: "common" })}
|
||||
</Button>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<Button
|
||||
type="button"
|
||||
onClick={
|
||||
allImagesClassified
|
||||
? handleContinue
|
||||
: handleContinueClassification
|
||||
}
|
||||
variant="select"
|
||||
className="flex items-center justify-center gap-2 sm:flex-1"
|
||||
disabled={
|
||||
!hasGenerated || isGenerating || isProcessing || !canProceed
|
||||
}
|
||||
>
|
||||
{isProcessing && <ActivityIndicator className="size-4" />}
|
||||
{t("button.continue", { ns: "common" })}
|
||||
</Button>
|
||||
</TooltipTrigger>
|
||||
{!canProceed && (
|
||||
<TooltipPortal>
|
||||
<TooltipContent>
|
||||
{t("wizard.step3.allImagesRequired", {
|
||||
count: unclassifiedImages.length,
|
||||
})}
|
||||
</TooltipContent>
|
||||
</TooltipPortal>
|
||||
)}
|
||||
</Tooltip>
|
||||
<Button
|
||||
type="button"
|
||||
onClick={
|
||||
allImagesClassified
|
||||
? handleContinue
|
||||
: handleContinueClassification
|
||||
}
|
||||
variant="select"
|
||||
className="flex items-center justify-center gap-2 sm:flex-1"
|
||||
disabled={
|
||||
!hasGenerated || isGenerating || isProcessing || !canProceed
|
||||
}
|
||||
>
|
||||
{isProcessing && <ActivityIndicator className="size-4" />}
|
||||
{t("button.continue", { ns: "common" })}
|
||||
</Button>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
|
||||
@ -78,7 +78,7 @@ import { useStreamingSettings } from "@/context/streaming-settings-provider";
|
||||
import { Trans, useTranslation } from "react-i18next";
|
||||
import { CameraNameLabel } from "../camera/FriendlyNameLabel";
|
||||
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
|
||||
import { useIsCustomRole } from "@/hooks/use-is-custom-role";
|
||||
import { useIsAdmin } from "@/hooks/use-is-admin";
|
||||
|
||||
type CameraGroupSelectorProps = {
|
||||
className?: string;
|
||||
@ -88,7 +88,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
|
||||
const { t } = useTranslation(["components/camera"]);
|
||||
const { data: config } = useSWR<FrigateConfig>("config");
|
||||
const allowedCameras = useAllowedCameras();
|
||||
const isCustomRole = useIsCustomRole();
|
||||
const isAdmin = useIsAdmin();
|
||||
|
||||
// tooltip
|
||||
|
||||
@ -124,7 +124,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
|
||||
const allGroups = Object.entries(config.camera_groups);
|
||||
|
||||
// If custom role, filter out groups where user has no accessible cameras
|
||||
if (isCustomRole) {
|
||||
if (!isAdmin) {
|
||||
return allGroups
|
||||
.filter(([, groupConfig]) => {
|
||||
// Check if user has access to at least one camera in this group
|
||||
@ -136,7 +136,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
|
||||
}
|
||||
|
||||
return allGroups.sort((a, b) => a[1].order - b[1].order);
|
||||
}, [config, allowedCameras, isCustomRole]);
|
||||
}, [config, allowedCameras, isAdmin]);
|
||||
|
||||
// add group
|
||||
|
||||
@ -153,7 +153,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
|
||||
activeGroup={group}
|
||||
setGroup={setGroup}
|
||||
deleteGroup={deleteGroup}
|
||||
isCustomRole={isCustomRole}
|
||||
isAdmin={isAdmin}
|
||||
/>
|
||||
<Scroller className={`${isMobile ? "whitespace-nowrap" : ""}`}>
|
||||
<div
|
||||
@ -221,7 +221,7 @@ export function CameraGroupSelector({ className }: CameraGroupSelectorProps) {
|
||||
);
|
||||
})}
|
||||
|
||||
{!isCustomRole && (
|
||||
{isAdmin && (
|
||||
<Button
|
||||
className="bg-secondary text-muted-foreground"
|
||||
aria-label={t("group.add")}
|
||||
@ -245,7 +245,7 @@ type NewGroupDialogProps = {
|
||||
activeGroup?: string;
|
||||
setGroup: (value: string | undefined, replace?: boolean | undefined) => void;
|
||||
deleteGroup: () => void;
|
||||
isCustomRole?: boolean;
|
||||
isAdmin?: boolean;
|
||||
};
|
||||
function NewGroupDialog({
|
||||
open,
|
||||
@ -254,7 +254,7 @@ function NewGroupDialog({
|
||||
activeGroup,
|
||||
setGroup,
|
||||
deleteGroup,
|
||||
isCustomRole,
|
||||
isAdmin,
|
||||
}: NewGroupDialogProps) {
|
||||
const { t } = useTranslation(["components/camera"]);
|
||||
const { mutate: updateConfig } = useSWR<FrigateConfig>("config");
|
||||
@ -390,7 +390,7 @@ function NewGroupDialog({
|
||||
>
|
||||
<Title>{t("group.label")}</Title>
|
||||
<Description className="sr-only">{t("group.edit")}</Description>
|
||||
{!isCustomRole && (
|
||||
{isAdmin && (
|
||||
<div
|
||||
className={cn(
|
||||
"absolute",
|
||||
@ -422,7 +422,7 @@ function NewGroupDialog({
|
||||
group={group}
|
||||
onDeleteGroup={() => onDeleteGroup(group[0])}
|
||||
onEditGroup={() => onEditGroup(group)}
|
||||
isReadOnly={isCustomRole}
|
||||
isReadOnly={!isAdmin}
|
||||
/>
|
||||
))}
|
||||
</div>
|
||||
@ -677,7 +677,7 @@ export function CameraGroupEdit({
|
||||
);
|
||||
|
||||
const allowedCameras = useAllowedCameras();
|
||||
const isCustomRole = useIsCustomRole();
|
||||
const isAdmin = useIsAdmin();
|
||||
|
||||
const [openCamera, setOpenCamera] = useState<string | null>();
|
||||
|
||||
@ -867,7 +867,7 @@ export function CameraGroupEdit({
|
||||
<FormMessage />
|
||||
{[
|
||||
...(birdseyeConfig?.enabled &&
|
||||
(!isCustomRole || "birdseye" in allowedCameras)
|
||||
(isAdmin || "birdseye" in allowedCameras)
|
||||
? ["birdseye"]
|
||||
: []),
|
||||
...Object.keys(config?.cameras ?? {})
|
||||
|
||||
@ -13,7 +13,7 @@ import { cn } from "@/lib/utils";
|
||||
import { isPWA } from "@/utils/isPWA";
|
||||
import { Button } from "@/components/ui/button";
|
||||
import { useTranslation } from "react-i18next";
|
||||
import { useLocation } from "react-router-dom";
|
||||
import { useHistoryBack } from "@/hooks/use-history-back";
|
||||
|
||||
const MobilePageContext = createContext<{
|
||||
open: boolean;
|
||||
@ -24,15 +24,16 @@ type MobilePageProps = {
|
||||
children: React.ReactNode;
|
||||
open?: boolean;
|
||||
onOpenChange?: (open: boolean) => void;
|
||||
enableHistoryBack?: boolean;
|
||||
};
|
||||
|
||||
export function MobilePage({
|
||||
children,
|
||||
open: controlledOpen,
|
||||
onOpenChange,
|
||||
enableHistoryBack = true,
|
||||
}: MobilePageProps) {
|
||||
const [uncontrolledOpen, setUncontrolledOpen] = useState(false);
|
||||
const location = useLocation();
|
||||
|
||||
const open = controlledOpen ?? uncontrolledOpen;
|
||||
const setOpen = useCallback(
|
||||
@ -46,33 +47,12 @@ export function MobilePage({
|
||||
[onOpenChange, setUncontrolledOpen],
|
||||
);
|
||||
|
||||
useEffect(() => {
|
||||
let isActive = true;
|
||||
|
||||
if (open && isActive) {
|
||||
window.history.pushState({ isMobilePage: true }, "", location.pathname);
|
||||
}
|
||||
|
||||
const handlePopState = (event: PopStateEvent) => {
|
||||
if (open && isActive) {
|
||||
event.preventDefault();
|
||||
setOpen(false);
|
||||
// Delay replaceState to ensure state updates are processed
|
||||
setTimeout(() => {
|
||||
if (isActive) {
|
||||
window.history.replaceState(null, "", location.pathname);
|
||||
}
|
||||
}, 0);
|
||||
}
|
||||
};
|
||||
|
||||
window.addEventListener("popstate", handlePopState);
|
||||
|
||||
return () => {
|
||||
isActive = false;
|
||||
window.removeEventListener("popstate", handlePopState);
|
||||
};
|
||||
}, [open, setOpen, location.pathname]);
|
||||
// Handle browser back button to close mobile page
|
||||
useHistoryBack({
|
||||
enabled: enableHistoryBack,
|
||||
open,
|
||||
onClose: () => setOpen(false),
|
||||
});
|
||||
|
||||
return (
|
||||
<MobilePageContext.Provider value={{ open, onOpenChange: setOpen }}>
|
||||
|
||||
@ -1,7 +1,11 @@
|
||||
import { Dialog, DialogContent, DialogTrigger } from "@/components/ui/dialog";
|
||||
import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
|
||||
import { cn } from "@/lib/utils";
|
||||
import { ReviewSegment, ThreatLevel } from "@/types/review";
|
||||
import {
|
||||
ReviewSegment,
|
||||
ThreatLevel,
|
||||
THREAT_LEVEL_LABELS,
|
||||
} from "@/types/review";
|
||||
import { useEffect, useMemo, useState } from "react";
|
||||
import { isDesktop } from "react-device-detect";
|
||||
import { useTranslation } from "react-i18next";
|
||||
@ -55,13 +59,22 @@ export function GenAISummaryDialog({
|
||||
}
|
||||
|
||||
let concerns = "";
|
||||
switch (aiAnalysis.potential_threat_level) {
|
||||
case ThreatLevel.SUSPICIOUS:
|
||||
concerns = `• ${t("suspiciousActivity", { ns: "views/events" })}\n`;
|
||||
break;
|
||||
case ThreatLevel.DANGER:
|
||||
concerns = `• ${t("threateningActivity", { ns: "views/events" })}\n`;
|
||||
break;
|
||||
const threatLevel = aiAnalysis.potential_threat_level ?? 0;
|
||||
|
||||
if (threatLevel > 0) {
|
||||
let label = "";
|
||||
|
||||
switch (threatLevel) {
|
||||
case ThreatLevel.NEEDS_REVIEW:
|
||||
label = t("needsReview", { ns: "views/events" });
|
||||
break;
|
||||
case ThreatLevel.SECURITY_CONCERN:
|
||||
label = t("securityConcern", { ns: "views/events" });
|
||||
break;
|
||||
default:
|
||||
label = THREAT_LEVEL_LABELS[threatLevel as ThreatLevel] || "Unknown";
|
||||
}
|
||||
concerns = `• ${label}\n`;
|
||||
}
|
||||
|
||||
(aiAnalysis.other_concerns ?? []).forEach((c) => {
|
||||
|
||||
@ -113,7 +113,12 @@ export function PlatformAwareSheet({
|
||||
}
|
||||
|
||||
return (
|
||||
<Sheet open={open} onOpenChange={onOpenChange} modal={false}>
|
||||
<Sheet
|
||||
open={open}
|
||||
onOpenChange={onOpenChange}
|
||||
modal={false}
|
||||
enableHistoryBack
|
||||
>
|
||||
<SheetTrigger asChild className={triggerClassName}>
|
||||
{trigger}
|
||||
</SheetTrigger>
|
||||
|
||||
@ -1,7 +1,11 @@
import React, { useCallback, useEffect, useMemo, useState } from "react";
import { useApiHost } from "@/api";
import { isCurrentHour } from "@/utils/dateUtil";
import { ReviewSegment } from "@/types/review";
import {
  ReviewSegment,
  ThreatLevel,
  THREAT_LEVEL_LABELS,
} from "@/types/review";
import { getIconForLabel } from "@/utils/iconUtil";
import TimeAgo from "../dynamic/TimeAgo";
import useSWR from "swr";

@ -44,7 +48,7 @@ export default function PreviewThumbnailPlayer({
  onClick,
  onTimeUpdate,
}: PreviewPlayerProps) {
  const { t } = useTranslation(["components/player"]);
  const { t } = useTranslation(["components/player", "views/events"]);
  const apiHost = useApiHost();
  const { data: config } = useSWR<FrigateConfig>("config");
  const [imgRef, imgLoaded, onImgLoad] = useImageLoaded();

@ -319,11 +323,21 @@ export default function PreviewThumbnailPlayer({
            </TooltipTrigger>
          </div>
          <TooltipContent className="smart-capitalize">
            {review.data.metadata.potential_threat_level == 1 ? (
              <>{t("suspiciousActivity", { ns: "views/events" })}</>
            ) : (
              <>{t("threateningActivity", { ns: "views/events" })}</>
            )}
            {(() => {
              const threatLevel =
                review.data.metadata.potential_threat_level ?? 0;
              switch (threatLevel) {
                case ThreatLevel.NEEDS_REVIEW:
                  return t("needsReview", { ns: "views/events" });
                case ThreatLevel.SECURITY_CONCERN:
                  return t("securityConcern", { ns: "views/events" });
                default:
                  return (
                    THREAT_LEVEL_LABELS[threatLevel as ThreatLevel] ||
                    "Unknown"
                  );
              }
            })()}
          </TooltipContent>
        </Tooltip>
      )}

@ -2,6 +2,7 @@ import * as React from "react";
import * as DialogPrimitive from "@radix-ui/react-dialog";
import { X } from "lucide-react";
import { cn } from "@/lib/utils";
import { useHistoryBack } from "@/hooks/use-history-back";

// Enhanced Dialog with History Support
interface HistoryDialogProps extends DialogPrimitive.DialogProps {

@ -15,51 +16,28 @@ const Dialog = ({
  ...props
}: HistoryDialogProps) => {
  const [internalOpen, setInternalOpen] = React.useState(open || false);
  const historyStateRef = React.useRef<null | {
    listener: (e: PopStateEvent) => void;
  }>(null);

  // Sync internal state with controlled open prop
  React.useEffect(() => {
    if (open !== undefined) {
      setInternalOpen(open);
    }
  }, [open]);

  React.useEffect(() => {
    if (enableHistoryBack) {
      if (internalOpen) {
        window.history.pushState({ dialogOpen: true }, "");
  const handleOpenChange = React.useCallback(
    (newOpen: boolean) => {
      setInternalOpen(newOpen);
      onOpenChange?.(newOpen);
    },
    [onOpenChange],
  );

        const listener = () => {
          setInternalOpen(false);
          if (onOpenChange) onOpenChange(false);
        };

        historyStateRef.current = { listener };
        window.addEventListener("popstate", listener);

        return () => {
          if (internalOpen) {
            window.removeEventListener("popstate", listener);
            historyStateRef.current = null;
          }
        };
      } else if (historyStateRef.current) {
        window.removeEventListener(
          "popstate",
          historyStateRef.current.listener,
        );
        historyStateRef.current = null;
      }
    }
  }, [enableHistoryBack, internalOpen, onOpenChange]);

  const handleOpenChange = (open: boolean) => {
    setInternalOpen(open);
    if (onOpenChange) {
      onOpenChange(open);
    }
  };
  // Handle browser back button to close dialog
  useHistoryBack({
    enabled: enableHistoryBack,
    open: internalOpen,
    onClose: () => handleOpenChange(false),
  });

  return (
    <DialogPrimitive.Root

@ -4,6 +4,7 @@ import { cva, type VariantProps } from "class-variance-authority";
import { X } from "lucide-react";

import { cn } from "@/lib/utils";
import { useHistoryBack } from "@/hooks/use-history-back";

// Enhanced Sheet with History Support
interface HistorySheetProps extends SheetPrimitive.DialogProps {

@ -17,51 +18,28 @@ const Sheet = ({
  ...props
}: HistorySheetProps) => {
  const [internalOpen, setInternalOpen] = React.useState(open || false);
  const historyStateRef = React.useRef<null | {
    listener: (e: PopStateEvent) => void;
  }>(null);

  // Sync internal state with controlled open prop
  React.useEffect(() => {
    if (open !== undefined) {
      setInternalOpen(open);
    }
  }, [open]);

  React.useEffect(() => {
    if (enableHistoryBack) {
      if (internalOpen) {
        window.history.pushState({ sheetOpen: true }, "");
  const handleOpenChange = React.useCallback(
    (newOpen: boolean) => {
      setInternalOpen(newOpen);
      onOpenChange?.(newOpen);
    },
    [onOpenChange],
  );

        const listener = () => {
          setInternalOpen(false);
          if (onOpenChange) onOpenChange(false);
        };

        historyStateRef.current = { listener };
        window.addEventListener("popstate", listener);

        return () => {
          if (internalOpen) {
            window.removeEventListener("popstate", listener);
            historyStateRef.current = null;
          }
        };
      } else if (historyStateRef.current) {
        window.removeEventListener(
          "popstate",
          historyStateRef.current.listener,
        );
        historyStateRef.current = null;
      }
    }
  }, [enableHistoryBack, internalOpen, onOpenChange]);

  const handleOpenChange = (open: boolean) => {
    setInternalOpen(open);
    if (onOpenChange) {
      onOpenChange(open);
    }
  };
  // Handle browser back button to close sheet
  useHistoryBack({
    enabled: enableHistoryBack,
    open: internalOpen,
    onClose: () => handleOpenChange(false),
  });

  return (
    <SheetPrimitive.Root

@ -1,3 +1,4 @@
import { baseUrl } from "@/api/baseUrl";
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { useCallback, useEffect, useState, useMemo } from "react";
import useSWR from "swr";

@ -41,9 +42,12 @@ export default function useCameraLiveMode(

    const metadataPromises = streamNames.map(async (streamName) => {
      try {
        const response = await fetch(`/api/go2rtc/streams/${streamName}`, {
          priority: "low",
        });
        const response = await fetch(
          `${baseUrl}api/go2rtc/streams/${streamName}`,
          {
            priority: "low",
          },
        );

        if (response.ok) {
          const data = await response.json();

74
web/src/hooks/use-history-back.ts
Normal file
@ -0,0 +1,74 @@
import * as React from "react";

interface UseHistoryBackOptions {
  enabled: boolean;
  open: boolean;
  onClose: () => void;
}

/**
 * Hook that manages browser history for overlay components (dialogs, sheets, etc.)
 * When enabled, pressing the browser back button will close the overlay instead of navigating away.
 */
export function useHistoryBack({
  enabled,
  open,
  onClose,
}: UseHistoryBackOptions): void {
  const historyPushedRef = React.useRef(false);
  const closedByBackRef = React.useRef(false);
  const urlWhenOpenedRef = React.useRef<string | null>(null);

  // Keep onClose in a ref to avoid effect re-runs that cause multiple history pushes
  const onCloseRef = React.useRef(onClose);
  React.useLayoutEffect(() => {
    onCloseRef.current = onClose;
  });

  React.useEffect(() => {
    if (!enabled) return;

    if (open) {
      // Only push history state if we haven't already (prevents duplicates in strict mode)
      if (!historyPushedRef.current) {
        // Store the current URL (pathname + search, without hash) before pushing history state
        urlWhenOpenedRef.current =
          window.location.pathname + window.location.search;
        window.history.pushState({ overlayOpen: true }, "");
        historyPushedRef.current = true;
      }

      const handlePopState = () => {
        closedByBackRef.current = true;
        historyPushedRef.current = false;
        urlWhenOpenedRef.current = null;
        onCloseRef.current();
      };

      window.addEventListener("popstate", handlePopState);

      return () => {
        window.removeEventListener("popstate", handlePopState);
      };
    } else {
      // Overlay is closing - clean up history if we pushed and it wasn't via back button
      if (historyPushedRef.current && !closedByBackRef.current) {
        const currentUrl = window.location.pathname + window.location.search;
        const urlWhenOpened = urlWhenOpenedRef.current;

        // If the URL has changed (e.g., filters were applied via search params),
        // don't go back as it would undo the filter update.
        // The history entry we pushed will remain, but that's acceptable compared
        // to losing the user's filter changes.
        if (!urlWhenOpened || currentUrl === urlWhenOpened) {
          // URL hasn't changed, safe to go back and remove our history entry
          window.history.back();
        }
        // If URL changed, we skip history.back() to preserve the filter updates
      }
      historyPushedRef.current = false;
      closedByBackRef.current = false;
      urlWhenOpenedRef.current = null;
    }
  }, [enabled, open]);
}
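
As a quick orientation for reviewers, here is a minimal sketch of how an overlay component might consume this hook. The `ExampleOverlay` component and its markup are hypothetical and not part of this changeset; only the `useHistoryBack` import path matches the new file above:

```tsx
import * as React from "react";
import { useHistoryBack } from "@/hooks/use-history-back";

// Hypothetical overlay (illustration only). It mirrors how the Dialog and
// Sheet wrappers in this changeset wire the hook up: the overlay's open
// state drives the hook, and the hook closes the overlay when the browser
// back button is pressed while it is open.
export function ExampleOverlay({ children }: { children: React.ReactNode }) {
  const [open, setOpen] = React.useState(false);

  useHistoryBack({
    enabled: true,
    open,
    onClose: () => setOpen(false),
  });

  return (
    <>
      <button onClick={() => setOpen(true)}>Open</button>
      {open && <div role="dialog">{children}</div>}
    </>
  );
}
```
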
@ -49,6 +49,7 @@ function ConfigEditor() {

  const [restartDialogOpen, setRestartDialogOpen] = useState(false);
  const { send: sendRestart } = useRestart();
  const initialValidationRef = useRef(false);

  const onHandleSaveConfig = useCallback(
    async (save_option: SaveOptions): Promise<void> => {

@ -171,6 +172,33 @@ function ConfigEditor() {
    };
  }, [rawConfig, apiHost, systemTheme, theme, onHandleSaveConfig]);

  // when in safe mode, attempt to validate the existing (invalid) config immediately
  // so that the user sees the validation errors without needing to press save
  useEffect(() => {
    if (
      config?.safe_mode &&
      rawConfig &&
      !initialValidationRef.current &&
      !error
    ) {
      initialValidationRef.current = true;
      axios
        .post(`config/save?save_option=saveonly`, rawConfig, {
          headers: { "Content-Type": "text/plain" },
        })
        .then(() => {
          // if this succeeds while in safe mode, we won't force any UI change
        })
        .catch((e: AxiosError<ApiErrorResponse>) => {
          const errorMessage =
            e.response?.data?.message ||
            e.response?.data?.detail ||
            "Unknown error";
          setError(errorMessage);
        });
    }
  }, [config?.safe_mode, rawConfig, error]);

  // monitoring state

  const [hasChanges, setHasChanges] = useState(false);

@ -14,12 +14,12 @@ import { useTranslation } from "react-i18next";
import { useEffect, useMemo, useRef } from "react";
import useSWR from "swr";
import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
import { useIsCustomRole } from "@/hooks/use-is-custom-role";
import { useIsAdmin } from "@/hooks/use-is-admin";

function Live() {
  const { t } = useTranslation(["views/live"]);
  const { data: config } = useSWR<FrigateConfig>("config");
  const isCustomRole = useIsCustomRole();
  const isAdmin = useIsAdmin();

  // selection

@ -94,7 +94,7 @@ function Live() {

  const includesBirdseye = useMemo(() => {
    // Restricted users should never have access to birdseye
    if (isCustomRole) {
    if (!isAdmin) {
      return false;
    }

@ -109,7 +109,7 @@ function Live() {
    } else {
      return false;
    }
  }, [config, cameraGroup, isCustomRole]);
  }, [config, cameraGroup, isAdmin]);

  const cameras = useMemo(() => {
    if (!config) {

@ -37,6 +37,7 @@ import EnrichmentsSettingsView from "@/views/settings/EnrichmentsSettingsView";
import UiSettingsView from "@/views/settings/UiSettingsView";
import FrigatePlusSettingsView from "@/views/settings/FrigatePlusSettingsView";
import { useSearchEffect } from "@/hooks/use-overlay-state";
import { usePersistence } from "@/hooks/use-persistence";
import { useNavigate, useSearchParams } from "react-router-dom";
import { useInitialCameraState } from "@/api/ws";
import { useIsAdmin } from "@/hooks/use-is-admin";

@ -207,7 +208,21 @@ export default function Settings() {
      .sort((aConf, bConf) => aConf.ui.order - bConf.ui.order);
  }, [config]);

  const [selectedCamera, setSelectedCamera] = useState<string>("");
  const [persistedCamera, setPersistedCamera] = usePersistence(
    "selectedCamera",
    "",
  );
  const [selectedCamera, setSelectedCamera] = useState(persistedCamera);
  useEffect(() => {
    if (persistedCamera) {
      setSelectedCamera(persistedCamera);
    }
  }, [persistedCamera]);
  useEffect(() => {
    if (selectedCamera) {
      setPersistedCamera(selectedCamera);
    }
  }, [selectedCamera, setPersistedCamera]);

  const { payload: allCameraStates } = useInitialCameraState(
    cameras.length > 0 ? cameras[0].name : "",

@ -87,6 +87,13 @@ export type ZoomLevel = {
};

export enum ThreatLevel {
  SUSPICIOUS = 1,
  DANGER = 2,
  NORMAL = 0,
  NEEDS_REVIEW = 1,
  SECURITY_CONCERN = 2,
}

export const THREAT_LEVEL_LABELS: Record<ThreatLevel, string> = {
  [ThreatLevel.NORMAL]: "Normal",
  [ThreatLevel.NEEDS_REVIEW]: "Needs review",
  [ThreatLevel.SECURITY_CONCERN]: "Security concern",
};

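For reference, a minimal sketch of the lookup-with-fallback pattern these exports enable in the components above; the `getThreatLabel` helper is hypothetical and only illustrates how a numeric level from review metadata maps to a display label:

```tsx
import { ThreatLevel, THREAT_LEVEL_LABELS } from "@/types/review";

// Hypothetical helper (illustration only): resolve a numeric threat level
// coming from review metadata into a display label, falling back to
// "Unknown" for values that are not in the enum.
function getThreatLabel(level: number | undefined): string {
  const threatLevel = level ?? ThreatLevel.NORMAL;
  return THREAT_LEVEL_LABELS[threatLevel as ThreatLevel] || "Unknown";
}

// getThreatLabel(ThreatLevel.NEEDS_REVIEW) -> "Needs review"
// getThreatLabel(5) -> "Unknown"
```
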
@ -1,3 +1,4 @@
import { baseUrl } from "@/api/baseUrl";
import { generateFixedHash, isValidId } from "./stringUtil";

/**

@ -52,9 +53,12 @@ export async function detectReolinkCamera(
    password,
  });

  const response = await fetch(`/api/reolink/detect?${params.toString()}`, {
    method: "GET",
  });
  const response = await fetch(
    `${baseUrl}api/reolink/detect?${params.toString()}`,
    {
      method: "GET",
    },
  );

  if (!response.ok) {
    return null;

@ -54,7 +54,7 @@ import { useTranslation } from "react-i18next";
import { EmptyCard } from "@/components/card/EmptyCard";
import { BsFillCameraVideoOffFill } from "react-icons/bs";
import { AuthContext } from "@/context/auth-context";
import { useIsCustomRole } from "@/hooks/use-is-custom-role";
import { useIsAdmin } from "@/hooks/use-is-admin";

type LiveDashboardViewProps = {
  cameras: CameraConfig[];

@ -661,10 +661,10 @@ export default function LiveDashboardView({
function NoCameraView() {
  const { t } = useTranslation(["views/live"]);
  const { auth } = useContext(AuthContext);
  const isCustomRole = useIsCustomRole();
  const isAdmin = useIsAdmin();

  // Check if this is a restricted user with no cameras in this group
  const isRestricted = isCustomRole && auth.isAuthenticated;
  const isRestricted = !isAdmin && auth.isAuthenticated;

  return (
    <div className="flex size-full items-center justify-center">

@ -35,6 +35,7 @@ import { useTranslation } from "react-i18next";

import { useDocDomain } from "@/hooks/use-doc-domain";
import { getTranslatedLabel } from "@/utils/i18n";
import { cn } from "@/lib/utils";

type MasksAndZoneViewProps = {
  selectedCamera: string;

@ -697,7 +698,10 @@ export default function MasksAndZonesView({
      </div>
      <div
        ref={containerRef}
        className="flex max-h-[50%] md:mr-3 md:h-dvh md:max-h-full md:w-7/12 md:grow"
        className={cn(
          "flex max-h-[50%] md:h-dvh md:max-h-full md:w-7/12 md:grow",
          isDesktop && "md:mr-3",
        )}
      >
        <div className="mx-auto flex size-full flex-row justify-center">
          {cameraConfig &&

@ -23,6 +23,8 @@ import { LuExternalLink } from "react-icons/lu";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import { Trans, useTranslation } from "react-i18next";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { cn } from "@/lib/utils";
import { isDesktop } from "react-device-detect";

type MotionTunerViewProps = {
  selectedCamera: string;

@ -325,7 +327,12 @@ export default function MotionTunerView({
      </div>

      {cameraConfig ? (
        <div className="flex max-h-[70%] md:mr-3 md:h-dvh md:max-h-full md:w-7/12 md:grow">
        <div
          className={cn(
            "flex max-h-[70%] md:h-dvh md:max-h-full md:w-7/12 md:grow",
            isDesktop && "md:mr-3",
          )}
        >
          <div className="size-full min-h-10">
            <AutoUpdatingCameraImage
              camera={cameraConfig.name}

@ -43,6 +43,7 @@ import { useTriggers } from "@/api/ws";
import { useCameraFriendlyName } from "@/hooks/use-camera-friendly-name";
import { CiCircleAlert } from "react-icons/ci";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { isDesktop } from "react-device-detect";

type ConfigSetBody = {
  requires_restart: number;

@ -440,7 +441,12 @@ export default function TriggerView({
  return (
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2 md:order-none md:mr-3 md:mt-0">
      <div
        className={cn(
          "scrollbar-container order-last mb-2 mt-2 flex h-full w-full flex-col overflow-y-auto pb-2",
          isDesktop && "order-none mr-3 mt-0",
        )}
      >
        {!isSemanticSearchEnabled ? (
          <div className="mb-5 flex flex-row items-center justify-between gap-2">
            <div className="flex flex-col items-start">

@ -651,7 +657,7 @@ export default function TriggerView({
      </div>

      {/* Desktop Table View */}
      <div className="scrollbar-container hidden flex-1 overflow-hidden rounded-lg border border-border bg-background_alt md:mr-3 md:block">
      <div className="scrollbar-container hidden flex-1 overflow-hidden rounded-lg border border-border bg-background_alt md:block">
        <div className="h-full overflow-auto">
          <Table>
            <TableHeader className="sticky top-0 bg-muted/50">