diff --git a/.cspell/frigate-dictionary.txt b/.cspell/frigate-dictionary.txt index f2bcf417a..f5292b167 100644 --- a/.cspell/frigate-dictionary.txt +++ b/.cspell/frigate-dictionary.txt @@ -229,6 +229,7 @@ Reolink restream restreamed restreaming +RJSF rkmpp rknn rkrga diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index f053abe3f..0af9c249f 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -324,6 +324,12 @@ try: value = await sensor.read() except Exception: # ❌ Too broad logger.error("Failed") + +# Returning exceptions in JSON responses +except ValueError as e: + return JSONResponse( + content={"success": False, "message": str(e)}, + ) ``` ### ✅ Use These Instead @@ -353,6 +359,16 @@ try: value = await sensor.read() except SensorException as err: # ✅ Specific logger.exception("Failed to read sensor") + +# Safe error responses +except ValueError: + logger.exception("Invalid parameters for API request") + return JSONResponse( + content={ + "success": False, + "message": "Invalid request parameters", + }, + ) ``` ## Project-Specific Conventions diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 3204244a6..05a75ca5f 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -1,17 +1,17 @@ +_Please read the [contributing guidelines](https://github.com/blakeblackshear/frigate/blob/dev/CONTRIBUTING.md) before submitting a PR._ + ## Proposed change - ## Type of change - [ ] Dependency upgrade @@ -26,6 +26,44 @@ - This PR fixes or closes issue: fixes # - This PR is related to issue: +## For new features + + + +- [ ] There is an existing feature request or discussion with community interest for this change. + - Link: + +## AI disclosure + + + +- [ ] No AI tools were used in this PR. +- [ ] AI tools were used in this PR. 
Details below: + +**AI tool(s) used** (e.g., Claude, Copilot, ChatGPT, Cursor): + +**How AI was used** (e.g., code generation, code review, debugging, documentation): + +**Extent of AI involvement** (e.g., generated entire implementation, assisted with specific functions, suggested fixes): + +**Human oversight**: Describe what manual review, testing, and validation you performed on the AI-generated portions. + ## Checklist Debug) to ensure that `face` is being detected along with `person`. - You may need to adjust the `min_score` for the `face` object if faces are not being detected. If you are **not** using a Frigate+ or `face` detecting model: - - Check your `detect` stream resolution and ensure it is sufficiently high enough to capture face details on `person` objects. - You may need to lower your `detection_threshold` if faces are not being detected. 2. Any detected faces will then be _recognized_. - - Make sure you have trained at least one face per the recommendations above. - Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration). diff --git a/docs/docs/configuration/genai/config.md b/docs/docs/configuration/genai/config.md index e1f79b744..4026158b7 100644 --- a/docs/docs/configuration/genai/config.md +++ b/docs/docs/configuration/genai/config.md @@ -5,39 +5,31 @@ title: Configuring Generative AI ## Configuration -A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below. +A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 4 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. 
See the OpenAI-Compatible section below. To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`. -## Ollama +## Local Providers + +Local providers run on your own hardware and keep all data processing private. These require a GPU or dedicated hardware for best performance. :::warning -Using Ollama on CPU is not recommended, high inference times make using Generative AI impractical. +Running Generative AI models on CPU is not recommended, as high inference times make using Generative AI impractical. ::: -[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on a Apple silicon Mac for best performance. +### Recommended Local Models -Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available. +You must use a vision-capable model with Frigate. The following models are recommended for local deployment: -Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose a `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests). - -### Model Types: Instruct vs Thinking - -Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions. - -- **Instruct models** are always recommended for use with Frigate. 
These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case. -**Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models. - -Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, Frigate will always use instruct-style prompts and specifically disables thinking-mode behaviors to ensure concise, useful responses. - -**Recommendation:** -Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider’s documentation or model library for guidance on the correct model variant to use. - -### Supported Models - -You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config, Ollama will try to download the model but it may take longer than the timeout, it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag. +| Model | Notes | +| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `qwen3-vl` | Strong visual and situational understanding, strong ability to identify smaller objects and interactions with objects. | +| `qwen3.5` | Strong situational understanding, but missing DeepStack from qwen3-vl leading to worse performance for identifying objects in people's hands and other small details.
| +| `Intern3.5VL` | Relatively fast with good vision comprehension | +| `gemma3` | Slower model with good vision and temporal understanding | +| `qwen2.5-vl` | Fast but capable model with good vision comprehension | :::info @@ -45,108 +37,79 @@ Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger s ::: +:::note + +You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 24 GB to run the 33B models. + +::: + +### Model Types: Instruct vs Thinking + +Most vision-language models are available as **instruct** models, which are fine-tuned to follow instructions and respond concisely to prompts. However, some models (such as certain Qwen-VL or minigpt variants) offer both **instruct** and **thinking** versions. + +- **Instruct models** are always recommended for use with Frigate. These models generate direct, relevant, actionable descriptions that best fit Frigate's object and event summary use case. +- **Reasoning / Thinking models** are fine-tuned for more free-form, open-ended, and speculative outputs, which are typically not concise and may not provide the practical summaries Frigate expects. For this reason, Frigate does **not** recommend or support using thinking models. + +Some models are labeled as **hybrid** (capable of both thinking and instruct tasks). In these cases, it is recommended to disable reasoning / thinking, which is generally model-specific (see your model's documentation). + +**Recommendation:** +Always select the `-instruct` or documented instruct/tagged variant of any model you use in your Frigate configuration. If in doubt, refer to your model provider's documentation or model library for guidance on the correct model variant to use. + +### llama.cpp + +[llama.cpp](https://github.com/ggml-org/llama.cpp) is a C/C++ LLM inference engine that provides a high-performance inference server.
+ +It is highly recommended to host the llama.cpp server on a machine with a discrete graphics card, or on an Apple silicon Mac for best performance. + +#### Supported Models + +You must use a vision-capable model with Frigate. The llama.cpp server supports various vision models in GGUF format. + +#### Configuration + +All llama.cpp native options can be passed through `provider_options`, including `temperature`, `top_k`, `top_p`, `min_p`, `repeat_penalty`, `repeat_last_n`, `seed`, `grammar`, and more. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for a complete list of available parameters. + +```yaml +genai: + provider: llamacpp + base_url: http://localhost:8080 + model: your-model-name + provider_options: + context_size: 16000 # Tell Frigate your context size so it can send the appropriate amount of information. +``` + +### Ollama + +[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance. + +Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available. + +Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://docs.ollama.com/faq#how-does-ollama-handle-concurrent-requests). + :::tip If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. qwen3-VL supports vision and tools simultaneously in Ollama. ::: -The following models are recommended: +Note that Frigate will not automatically download the model you specify in your config.
Ollama will try to download the model but it may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag. -| Model | Notes | -| ------------- | -------------------------------------------------------------------- | -| `qwen3-vl` | Strong visual and situational understanding, higher vram requirement | -| `Intern3.5VL` | Relatively fast with good vision comprehension | -| `gemma3` | Strong frame-to-frame understanding, slower inference times | -| `qwen2.5-vl` | Fast but capable model with good vision comprehension | - -:::note - -You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models. - -::: - -#### Ollama Cloud models - -Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud). - -### Configuration +#### Configuration ```yaml genai: provider: ollama base_url: http://localhost:11434 model: qwen3-vl:4b + provider_options: # other Ollama client options can be defined + keep_alive: -1 + options: + num_ctx: 8192 # make sure the context matches other services that are using ollama ``` -## Google Gemini +### OpenAI-Compatible -Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation. - -### Supported Models - -You must use a vision capable model with Frigate. 
Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). - -### Get API Key - -To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com). - -1. Accept the Terms of Service -2. Click "Get API Key" from the right hand navigation -3. Click "Create API key in new project" -4. Copy the API key for use in your config - -### Configuration - -```yaml -genai: - provider: gemini - api_key: "{FRIGATE_GEMINI_API_KEY}" - model: gemini-2.5-flash -``` - -:::note - -To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example: - -``` -genai: - provider: gemini - ... - provider_options: - base_url: https://... -``` - -Other HTTP options are available, see the [python-genai documentation](https://github.com/googleapis/python-genai). - -::: - -## OpenAI - -OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route. - -### Supported Models - -You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). - -### Get API Key - -To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview). - -### Configuration - -```yaml -genai: - provider: openai - api_key: "{FRIGATE_OPENAI_API_KEY}" - model: gpt-4o -``` - -:::note - -To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL. - -::: +Frigate supports any provider that implements the OpenAI API standard. This includes self-hosted solutions like [vLLM](https://docs.vllm.ai/), [LocalAI](https://localai.io/), and other OpenAI-compatible servers. 
:::tip @@ -165,19 +128,134 @@ This ensures Frigate uses the correct context window size when generating prompt ::: -## Azure OpenAI +#### Configuration + +```yaml +genai: + provider: openai + base_url: http://your-server:port + api_key: your-api-key # May not be required for local servers + model: your-model-name +``` + +To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL. + +## Cloud Providers + +Cloud providers run on remote infrastructure and require an API key for authentication. These services handle all model inference on their servers. + +### Ollama Cloud + +Ollama also supports [cloud models](https://ollama.com/cloud), where your local Ollama instance handles requests from Frigate, but model inference is performed in the cloud. Set up Ollama locally, sign in with your Ollama account, and specify the cloud model name in your Frigate config. For more details, see the Ollama cloud model [docs](https://docs.ollama.com/cloud). + +#### Configuration + +```yaml +genai: + provider: ollama + base_url: http://localhost:11434 + model: cloud-model-name +``` + +### Google Gemini + +Google Gemini has a [free tier](https://ai.google.dev/pricing) for the API, however the limits may not be sufficient for standard Frigate usage. Choose a plan appropriate for your installation. + +#### Supported Models + +You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). + +#### Get API Key + +To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com). + +1. Accept the Terms of Service +2. Click "Get API Key" from the right hand navigation +3. Click "Create API key in new project" +4. 
Copy the API key for use in your config + +#### Configuration + +```yaml +genai: + provider: gemini + api_key: "{FRIGATE_GEMINI_API_KEY}" + model: gemini-2.5-flash +``` + +:::note + +To use a different Gemini-compatible API endpoint, set the `provider_options` with the `base_url` key to your provider's API URL. For example: + +```yaml {4,5} +genai: + provider: gemini + ... + provider_options: + base_url: https://... +``` + +Other HTTP options are available, see the [python-genai documentation](https://github.com/googleapis/python-genai). + +::: + +### OpenAI + +OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route. + +#### Supported Models + +You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). + +#### Get API Key + +To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview). + +#### Configuration + +```yaml +genai: + provider: openai + api_key: "{FRIGATE_OPENAI_API_KEY}" + model: gpt-4o +``` + +:::note + +To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL. + +::: + +:::tip + +For OpenAI-compatible servers (such as llama.cpp) that don't expose the configured context size in the API response, you can manually specify the context size in `provider_options`: + +```yaml {5,6} +genai: + provider: openai + base_url: http://your-llama-server + model: your-model-name + provider_options: + context_size: 8192 # Specify the configured context size +``` + +This ensures Frigate uses the correct context window size when generating prompts. + +::: + +### Azure OpenAI Microsoft offers several vision models through Azure OpenAI. A subscription is required. 
-### Supported Models +#### Supported Models You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). -### Create Resource and Get API Key +#### Create Resource and Get API Key To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below). -### Configuration +#### Configuration ```yaml genai: diff --git a/docs/docs/configuration/genai/objects.md b/docs/docs/configuration/genai/objects.md index e3ae31393..3ed826d21 100644 --- a/docs/docs/configuration/genai/objects.md +++ b/docs/docs/configuration/genai/objects.md @@ -11,7 +11,7 @@ By default, descriptions will be generated for all tracked objects and all zones Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction. -Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate//object_descriptions/set`. 
See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset). +Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt#frigatecamera_nameobject_descriptionsset). ## Usage and Best Practices diff --git a/docs/docs/configuration/genai/review_summaries.md index df287446c..8045f5aa3 100644 --- a/docs/docs/configuration/genai/review_summaries.md +++ b/docs/docs/configuration/genai/review_summaries.md @@ -7,7 +7,7 @@ Generative AI can be used to automatically generate structured summaries of revi Requests for a summary are requested automatically to your AI provider for alert review items when the activity has ended, they can also be optionally enabled for detections as well. -Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset). +Generative AI review summaries can also be toggled dynamically for a [camera via MQTT](/integrations/mqtt#frigatecamera_namereview_descriptionsset). ## Review Summary Usage and Best Practices @@ -80,6 +80,7 @@ By default, review summaries use preview images (cached preview frames) which ha review: genai: enabled: true + # highlight-next-line image_source: recordings # Options: "preview" (default) or "recordings" ``` @@ -104,7 +105,7 @@ If recordings are not available for a given time period, the system will automat Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example: -```yaml +```yaml {4,5} review: genai: enabled: true @@ -116,7 +117,7 @@ review: By default, review summaries are generated in English.
You can configure Frigate to generate summaries in your preferred language by setting the `preferred_language` option: -```yaml +```yaml {4} review: genai: enabled: true diff --git a/docs/docs/configuration/hardware_acceleration_enrichments.md b/docs/docs/configuration/hardware_acceleration_enrichments.md index fac2ffa61..fc246df98 100644 --- a/docs/docs/configuration/hardware_acceleration_enrichments.md +++ b/docs/docs/configuration/hardware_acceleration_enrichments.md @@ -12,23 +12,20 @@ Some of Frigate's enrichments can use a discrete GPU or integrated GPU for accel Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation. - **AMD** - - ROCm support in the `-rocm` Frigate image is automatically detected for enrichments, but only some enrichment models are available due to ROCm's focus on LLMs and limited stability with certain neural network models. Frigate disables models that perform poorly or are unstable to ensure reliable operation, so only compatible enrichments may be active. - **Intel** - - OpenVINO will automatically be detected and used for enrichments in the default Frigate image. - **Note:** Intel NPUs have limited model support for enrichments. GPU is recommended for enrichments when available. - **Nvidia** - - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image. - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image. - **RockChip** - RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image. 
-Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments. +Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can use the `tensorrt` Docker image to run enrichments on an Nvidia GPU and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is the `tensorrt` image for object detection on an Nvidia GPU and an Intel iGPU for enrichments. :::note diff --git a/docs/docs/configuration/hardware_acceleration_video.md index bbbf5a640..318e1b23e 100644 --- a/docs/docs/configuration/hardware_acceleration_video.md +++ b/docs/docs/configuration/hardware_acceleration_video.md @@ -10,6 +10,7 @@ import CommunityBadge from '@site/src/components/CommunityBadge'; It is highly recommended to use an integrated or discrete GPU for hardware acceleration video decoding in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg. To verify that hardware acceleration is working: + - Check the logs: A message will either say that hardware acceleration was automatically detected, or there will be a warning that no hardware acceleration was automatically detected - If hardware acceleration is specified in the config, verification can be done by ensuring the logs are free from errors. There is no CPU fallback for hardware acceleration. @@ -67,7 +68,7 @@ Frigate can utilize most Intel integrated GPUs and Arc GPUs to accelerate video :::note -The default driver is `iHD`.
You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars). +The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `config.yml` for HA App users](advanced.md#environment_vars). See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/000005505/processors.html) to figure out what generation your CPU is. @@ -116,12 +117,13 @@ services: frigate: ... image: ghcr.io/blakeblackshear/frigate:stable + # highlight-next-line privileged: true ``` ##### Docker Run CLI - Privileged -```bash +```bash {4} docker run -d \ --name frigate \ ... @@ -135,7 +137,7 @@ Only recent versions of Docker support the `CAP_PERFMON` capability. You can tes ##### Docker Compose - CAP_PERFMON -```yaml +```yaml {5,6} services: frigate: ... @@ -146,7 +148,7 @@ services: ##### Docker Run CLI - CAP_PERFMON -```bash +```bash {4} docker run -d \ --name frigate \ ... @@ -188,7 +190,7 @@ Frigate can utilize modern AMD integrated GPUs and AMD GPUs to accelerate video ### Configuring Radeon Driver -You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA Add-on users](advanced.md#environment_vars). +You need to change the driver to `radeonsi` by adding the following environment variable `LIBVA_DRIVER_NAME=radeonsi` to your docker-compose file or [in the `config.yml` for HA App users](advanced.md#environment_vars). ### Via VAAPI @@ -213,7 +215,7 @@ Additional configuration is needed for the Docker container to be able to access #### Docker Compose - Nvidia GPU -```yaml +```yaml {5-12} services: frigate: ... 
@@ -230,7 +232,7 @@ services: #### Docker Run CLI - Nvidia GPU -```bash +```bash {4} docker run -d \ --name frigate \ ... @@ -292,7 +294,7 @@ These instructions were originally based on the [Jellyfin documentation](https:/ ## Raspberry Pi 3/4 Ensure you increase the allocated RAM for your GPU to at least 128 (`raspi-config` > Performance Options > GPU Memory). -If you are using the HA Add-on, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration. +If you are using the HA App, you may need to use the full access variant and turn off _Protection mode_ for hardware acceleration. ```yaml # if you want to decode a h264 stream @@ -309,7 +311,7 @@ ffmpeg: If running Frigate through Docker, you either need to run in privileged mode or map the `/dev/video*` devices to Frigate. With Docker Compose add: -```yaml +```yaml {4-5} services: frigate: ... @@ -319,7 +321,7 @@ services: Or with `docker run`: -```bash +```bash {4} docker run -d \ --name frigate \ ... @@ -351,7 +353,7 @@ You will need to use the image with the nvidia container runtime: ### Docker Run CLI - Jetson -```bash +```bash {3} docker run -d \ ... --runtime nvidia @@ -360,7 +362,7 @@ docker run -d \ ### Docker Compose - Jetson -```yaml +```yaml {5} services: frigate: ... @@ -451,14 +453,14 @@ Restarting ffmpeg... you should try to uprade to FFmpeg 7. This can be done using this config option: -``` +```yaml ffmpeg: path: "7.0" ``` You can set this option globally to use FFmpeg 7 for all cameras or on camera level to use it only for specific cameras. 
Do not confuse this option with: -``` +```yaml cameras: name: ffmpeg: @@ -480,7 +482,7 @@ Make sure to follow the [Synaptics specific installation instructions](/frigate/ Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing: -```yaml +```yaml {2} ffmpeg: hwaccel_args: -c:v h264_v4l2m2m input_args: preset-rtsp-restream diff --git a/docs/docs/configuration/index.md b/docs/docs/configuration/index.md index b1fa876f9..be546ca30 100644 --- a/docs/docs/configuration/index.md +++ b/docs/docs/configuration/index.md @@ -3,7 +3,7 @@ id: index title: Frigate Configuration --- -For Home Assistant Add-on installations, the config file should be at `/addon_configs//config.yml`, where `` is specific to the variant of the Frigate Add-on you are running. See the list of directories [here](#accessing-add-on-config-dir). +For Home Assistant App installations, the config file should be at `/addon_configs//config.yml`, where `` is specific to the variant of the Frigate App you are running. See the list of directories [here](#accessing-app-config-dir). For all other installation types, the config file should be mapped to `/config/config.yml` inside the container. @@ -25,24 +25,24 @@ cameras: - detect ``` -## Accessing the Home Assistant Add-on configuration directory {#accessing-add-on-config-dir} +## Accessing the Home Assistant App configuration directory {#accessing-app-config-dir} -When running Frigate through the HA Add-on, the Frigate `/config` directory is mapped to `/addon_configs/` in the host, where `` is specific to the variant of the Frigate Add-on you are running. +When running Frigate through the HA App, the Frigate `/config` directory is mapped to `/addon_configs/` in the host, where `` is specific to the variant of the Frigate App you are running. 
-| Add-on Variant | Configuration directory | -| -------------------------- | -------------------------------------------- | -| Frigate | `/addon_configs/ccab4aaf_frigate` | -| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` | -| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` | -| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` | +| App Variant | Configuration directory | +| -------------------------- | ----------------------------------------- | +| Frigate | `/addon_configs/ccab4aaf_frigate` | +| Frigate (Full Access) | `/addon_configs/ccab4aaf_frigate-fa` | +| Frigate Beta | `/addon_configs/ccab4aaf_frigate-beta` | +| Frigate Beta (Full Access) | `/addon_configs/ccab4aaf_frigate-fa-beta` | **Whenever you see `/config` in the documentation, it refers to this directory.** -If for example you are running the standard Add-on variant and use the [VS Code Add-on](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file. +If for example you are running the standard App variant and use the [VS Code App](https://github.com/hassio-addons/addon-vscode) to browse your files, you can click _File_ > _Open folder..._ and navigate to `/addon_configs/ccab4aaf_frigate` to access the Frigate `/config` directory and edit the `config.yaml` file. You can also use the built-in file editor in the Frigate UI to edit the configuration file. ## VS Code Configuration Schema -VS Code supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. 
Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VS Code and Frigate as an Add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VS Code on another machine. +VS Code supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VS Code and Frigate as an App, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VS Code on another machine. ## Environment Variable Substitution @@ -50,6 +50,7 @@ Frigate supports the use of environment variables starting with `FRIGATE_` **onl ```yaml mqtt: + host: "{FRIGATE_MQTT_HOST}" user: "{FRIGATE_MQTT_USER}" password: "{FRIGATE_MQTT_PASSWORD}" ``` @@ -60,7 +61,7 @@ mqtt: ```yaml onvif: - host: 10.0.10.10 + host: "192.168.1.12" port: 8000 user: "{FRIGATE_RTSP_USER}" password: "{FRIGATE_RTSP_PASSWORD}" @@ -82,10 +83,10 @@ genai: Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values. 
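The `FRIGATE_`-prefixed variables referenced above are read from the container environment, not from the config file itself. One way to supply them is through Docker Compose (a sketch; all values shown are placeholders):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # Referenced in config.yml as {FRIGATE_MQTT_HOST}, {FRIGATE_MQTT_USER}, etc.
      FRIGATE_MQTT_HOST: "mqtt.local"
      FRIGATE_MQTT_USER: "frigate"
      FRIGATE_MQTT_PASSWORD: "change-me"
      FRIGATE_RTSP_USER: "admin"
      FRIGATE_RTSP_PASSWORD: "change-me"
```

Keeping credentials in the environment (or an `.env` file) avoids committing secrets to the config file itself.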
-### Raspberry Pi Home Assistant Add-on with USB Coral +### Raspberry Pi Home Assistant App with USB Coral - Single camera with 720p, 5fps stream for detect -- MQTT connected to the Home Assistant Mosquitto Add-on +- MQTT connected to the Home Assistant Mosquitto App - Hardware acceleration for decoding video - USB Coral detector - Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not @@ -109,15 +110,16 @@ detectors: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True @@ -137,7 +139,10 @@ cameras: - detect motion: mask: - - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400 + timestamp: + friendly_name: "Camera timestamp" + enabled: true + coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400" ``` ### Standalone Intel Mini PC with USB Coral @@ -165,15 +170,16 @@ detectors: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True @@ -193,7 +199,10 @@ cameras: - detect motion: mask: - - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400 + timestamp: + friendly_name: "Camera timestamp" + enabled: true + coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400" ``` ### Home Assistant integrated Intel Mini PC with OpenVino @@ -231,15 +240,16 @@ model: record: enabled: True - retain: + motion: days: 7 - mode: motion alerts: retain: days: 30 + mode: motion detections: retain: days: 30 + mode: motion snapshots: enabled: True @@ -259,5 +269,8 @@ cameras: - detect motion: mask: - - 
0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400 + timestamp: + friendly_name: "Camera timestamp" + enabled: true + coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400" ``` diff --git a/docs/docs/configuration/license_plate_recognition.md b/docs/docs/configuration/license_plate_recognition.md index ac7942675..a44006b63 100644 --- a/docs/docs/configuration/license_plate_recognition.md +++ b/docs/docs/configuration/license_plate_recognition.md @@ -30,7 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle` ## Minimum System Requirements -License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required. +License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM and a CPU with AVX + AVX2 instructions are required. ## Configuration @@ -43,7 +43,7 @@ lpr: Like other enrichments in Frigate, LPR **must be enabled globally** to use the feature. You should disable it for specific cameras at the camera level if you don't want to run LPR on cars on those cameras: -```yaml +```yaml {4,5} cameras: garage: ... @@ -375,7 +375,6 @@ Use `match_distance` to allow small character mismatches. Alternatively, define Start with ["Why isn't my license plate being detected and recognized?"](#why-isnt-my-license-plate-being-detected-and-recognized). If you are still having issues, work through these steps. 1.
Start with a simplified LPR config. - - Remove or comment out everything in your LPR config, including `min_area`, `min_plate_length`, `format`, `known_plates`, or `enhancement` values so that the only values left are `enabled` and `debug_save_plates`. This will run LPR with Frigate's default values. ```yaml @@ -386,31 +385,28 @@ Start with ["Why isn't my license plate being detected and recognized?"](#why-is ``` 2. Enable debug logs to see exactly what Frigate is doing. - - Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only keep this enabled when necessary. Restart Frigate after this change. ```yaml logger: default: info logs: + # highlight-next-line frigate.data_processing.common.license_plate: debug ``` 3. Ensure your plates are being _detected_. If you are using a Frigate+ or `license_plate` detecting model: - - Watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected. - View MQTT messages for `frigate/events` to verify detected plates. - You may need to adjust your `min_score` and/or `threshold` for the `license_plate` object if your plates are not being detected. If you are **not** using a Frigate+ or `license_plate` detecting model: - - Watch the debug logs for messages from the YOLOv9 plate detector. - You may need to adjust your `detection_threshold` if your plates are not being detected. 4. Ensure the characters on detected plates are being _recognized_. - - Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`). Ensure these images are readable and the text is clear. - Watch the debug view to see plates recognized in real-time. For non-dedicated LPR cameras, the `car` or `motorcycle` label will change to the recognized plate when LPR is enabled and working. 
- Adjust `recognition_threshold` settings per the suggestions [above](#advanced-configuration). diff --git a/docs/docs/configuration/live.md b/docs/docs/configuration/live.md index 910cb69f1..8e7eff163 100644 --- a/docs/docs/configuration/live.md +++ b/docs/docs/configuration/live.md @@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt | ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. | | mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. | -| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. | +| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. | ### Camera Settings Recommendations @@ -77,7 +77,7 @@ Configure the `streams` option with a "friendly name" for your stream followed b Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the `streams` configuration, only go2rtc stream names. -```yaml +```yaml {3,6,8,25-29} go2rtc: streams: test_cam: @@ -114,9 +114,9 @@ cameras: WebRTC works by creating a TCP or UDP connection on port `8555`. 
However, it requires additional configuration: - For external access, over the internet, setup your router to forward port `8555` to port `8555` on the Frigate device, for both TCP and UDP. -- For internal/local access, unless you are running through the HA Add-on, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate: +- For internal/local access, unless you are running through the HA App, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate: - ```yaml title="config.yml" + ```yaml title="config.yml" {4-7} go2rtc: streams: test_cam: ... @@ -128,13 +128,13 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req - For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block. -- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness). +- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness). :::tip -This extra configuration may not be required if Frigate has been installed as a Home Assistant Add-on, as Frigate uses the Supervisor's API to generate a WebRTC candidate. +This extra configuration may not be required if Frigate has been installed as a Home Assistant App, as Frigate uses the Supervisor's API to generate a WebRTC candidate. -However, it is recommended if issues occur to define the candidates manually. You should do this if the Frigate Add-on fails to generate a valid candidate. 
If an error occurs you will see some warnings like the below in the Add-on logs page during the initialization: +However, it is recommended if issues occur to define the candidates manually. You should do this if the Frigate App fails to generate a valid candidate. If an error occurs you will see some warnings like the below in the App logs page during the initialization: ```log [WARN] Failed to get IP address from supervisor @@ -154,7 +154,7 @@ If not running in host mode, port 8555 will need to be mapped for the container: docker-compose.yml -```yaml +```yaml {4-6} services: frigate: ... @@ -222,34 +222,28 @@ Note that disabling a camera through the config file (`enabled: False`) removes When your browser runs into problems playing back your camera streams, it will log short error messages to the browser console. They indicate playback, codec, or network issues on the client/browser side, not something server side with Frigate itself. Below are the common messages you may see and simple actions you can take to try to resolve them. - **startup** - - What it means: The player failed to initialize or connect to the live stream (network or startup error). - What to try: Reload the Live view or click _Reset_. Verify `go2rtc` is running and the camera stream is reachable. Try switching to a different stream from the Live UI dropdown (if available) or use a different browser. - Possible console messages from the player code: - - `Error opening MediaSource.` - `Browser reported a network error.` - `Max error count ${errorCount} exceeded.` (the numeric value will vary) - **mse-decode** - - What it means: The browser reported a decoding error while trying to play the stream, which usually is a result of a codec incompatibility or corrupted frames. - What to try: Check the browser console for the supported and negotiated codecs. Ensure your camera/restream is using H.264 video and AAC audio (these are the most compatible). 
If your camera uses a non-standard audio codec, configure `go2rtc` to transcode the stream to AAC. Try another browser (some browsers have stricter MSE/codec support) and, for iPhone, ensure you're on iOS 17.1 or newer. - Possible console messages from the player code: - - `Safari cannot open MediaSource.` - `Safari reported InvalidStateError.` - `Safari reported decoding errors.` - **stalled** - - What it means: Playback has stalled because the player has fallen too far behind live (extended buffering or no data arriving). - What to try: This is usually indicative of the browser struggling to decode too many high-resolution streams at once. Try selecting a lower-bandwidth stream (substream), reduce the number of live streams open, improve the network connection, or lower the camera resolution. Also check your camera's keyframe (I-frame) interval — shorter intervals make playback start and recover faster. You can also try increasing the timeout value in the UI pane of Frigate's settings. - Possible console messages from the player code: - - `Buffer time (10 seconds) exceeded, browser may not be playing media correctly.` - `Media playback has stalled after seconds due to insufficient buffering or a network interruption.` (the seconds value will vary) @@ -270,21 +264,18 @@ When your browser runs into problems playing back your camera streams, it will l If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again. Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. 
These errors may include: - - Network issues (e.g., MSE or WebRTC network connection problems). - Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers). - Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg. - Browser compatibility problems (e.g., iOS Safari limitations with MSE). To view browser console logs: - 1. Open the Frigate Live View in your browser. 2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab). 3. Reproduce the error (e.g., load a problematic stream or simulate network issues). 4. Look for messages prefixed with the camera name. These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors: - - Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)). - Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS). - Test with a different stream via the UI dropdown (if `live -> streams` is configured). @@ -324,9 +315,7 @@ When your browser runs into problems playing back your camera streams, it will l To prevent this, make the `detect` stream match the go2rtc live stream's aspect ratio (resolution does not need to match, just the aspect ratio). You can either adjust the camera's output resolution or set the `width` and `height` values in your config's `detect` section to a resolution with an aspect ratio that matches. 
Example: Resolutions from two streams - - Mismatched (may cause aspect ratio switching on the dashboard): - - Live/go2rtc stream: 1920x1080 (16:9) - Detect stream: 640x352 (~1.82:1, not 16:9) diff --git a/docs/docs/configuration/masks.md b/docs/docs/configuration/masks.md index 4a4722586..32280531d 100644 --- a/docs/docs/configuration/masks.md +++ b/docs/docs/configuration/masks.md @@ -33,18 +33,55 @@ Your config file will be updated with the relative coordinates of the mask/zone: ```yaml motion: - mask: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400" + mask: + # Motion mask name (required) + mask1: + # Optional: A friendly name for the mask + friendly_name: "Timestamp area" + # Optional: Whether this mask is active (default: true) + enabled: true + # Required: Coordinates polygon for the mask + coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400" ``` -Multiple masks can be listed in your config. +Multiple motion masks can be listed in your config: ```yaml motion: mask: - - 0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802 - - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456 + mask1: + friendly_name: "Timestamp area" + enabled: true + coordinates: "0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802" + mask2: + friendly_name: "Tree area" + enabled: true + coordinates: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456" ``` +Object filter masks can also be created through the UI or manually in the config. 
They are configured under the object filters section for each object type: + +```yaml +objects: + filters: + person: + mask: + person_filter1: + friendly_name: "Roof area" + enabled: true + coordinates: "0.000,0.000,1.000,0.000,1.000,0.400,0.000,0.400" + car: + mask: + car_filter1: + friendly_name: "Sidewalk area" + enabled: true + coordinates: "0.000,0.700,1.000,0.700,1.000,1.000,0.000,1.000" +``` + +## Enabling/Disabling Masks + +Both motion masks and object filter masks can be toggled on or off without removing them from the configuration. Disabled masks are completely ignored at runtime; they will not affect motion detection or object filtering. This is useful for temporarily disabling a mask during certain seasons or times of day without modifying the configuration. + ### Further Clarification This is a response to a [question posed on reddit](https://www.reddit.com/r/homeautomation/comments/ppxdve/replacing_my_doorbell_with_a_security_camera_a_6/hd876w4?utm_source=share&utm_medium=web2x&context=3): diff --git a/docs/docs/configuration/motion_detection.md b/docs/docs/configuration/motion_detection.md index c22491fd0..53e63272a 100644 --- a/docs/docs/configuration/motion_detection.md +++ b/docs/docs/configuration/motion_detection.md @@ -38,7 +38,6 @@ Remember that motion detection is just used to determine when object detection s The threshold value dictates how much of a change in a pixel's luminance is required to be considered motion. ```yaml -# default threshold value motion: # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below) # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
@@ -53,7 +52,6 @@ Watching the motion boxes in the debug view, increase the threshold until you on ### Contour Area ```yaml -# default contour_area value motion: # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below) # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will @@ -81,27 +79,49 @@ However, if the preferred day settings do not work well at night it is recommend ## Tuning For Large Changes In Motion +### Lightning Threshold + ```yaml -# default lightning_threshold: motion: - # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection - # needs to recalibrate. (default: shown below) - # Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion. - # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching - # a doorbell camera. + # Optional: The percentage of the image used to detect lightning or + # other substantial changes where motion detection needs to + # recalibrate. (default: shown below) + # Increasing this value will make motion detection more likely + # to consider lightning or IR mode changes as valid motion. + # Decreasing this value will make motion detection more likely + # to ignore large amounts of motion such as a person + # approaching a doorbell camera. lightning_threshold: 0.8 ``` +Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. `lightning_threshold` defines the percentage of the image used to detect these substantial changes. Increasing this value makes motion detection more likely to treat large changes (like IR mode switches) as valid motion. Decreasing it makes motion detection more likely to ignore large amounts of motion, such as a person approaching a doorbell camera. 
+ +Note that `lightning_threshold` does **not** stop motion-based recordings from being saved — it only prevents additional motion analysis after the threshold is exceeded, reducing false positive object detections during high-motion periods (e.g. storms or PTZ sweeps) without interfering with recordings. + :::warning -Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed. +Some cameras, like doorbell cameras, may have missed detections when someone walks directly in front of the camera and the `lightning_threshold` causes motion detection to recalibrate. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed. ::: -:::note +### Skip Motion On Large Scene Changes -Lightning threshold does not stop motion based recordings from being saved. +```yaml +motion: + # Optional: Fraction of the frame that must change in a single update + # before Frigate will completely ignore any motion in that frame. + # Values range between 0.0 and 1.0, leave unset (null) to disable. + # Setting this to 0.7 would cause Frigate to **skip** reporting + # motion boxes when more than 70% of the image appears to change + # (e.g. during lightning storms, IR/color mode switches, or other + # sudden lighting events). + skip_motion_threshold: 0.7 +``` + +This option is handy when you want to prevent large transient changes from triggering recordings or object detection. It differs from `lightning_threshold` because it completely suppresses motion instead of just forcing a recalibration. + +:::warning + +When the skip threshold is exceeded, **no motion is reported** for that frame, meaning **nothing is recorded** for that frame. 
That means you can miss something important, like a PTZ camera auto-tracking an object or activity while the camera is moving. If you prefer to guarantee that every frame is saved, leave this unset and accept occasional recordings containing scene noise — they typically only take up a few megabytes and are quick to scan in the timeline UI. ::: - -Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera. diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md index 3a8d599c9..c16d3f5dc 100644 --- a/docs/docs/configuration/object_detectors.md +++ b/docs/docs/configuration/object_detectors.md @@ -34,7 +34,7 @@ Frigate supports multiple different detectors that work on different types of ha **Nvidia GPU** -- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured. +- [ONNX](#onnx): Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured. **Nvidia Jetson** @@ -49,6 +49,11 @@ Frigate supports multiple different detectors that work on different types of ha - [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs. +**AXERA** + +- [AXEngine](#axera): axmodels can run on AXERA AI acceleration.
+ + **For Testing** - [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended and in most cases OpenVINO can be used in CPU mode with better results. @@ -65,7 +70,7 @@ This does not affect using hardware for accelerating other tasks such as [semant # Officially Supported Detectors -Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras. +Frigate provides a number of builtin detector types. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras. ## Edge TPU Detector @@ -157,7 +162,13 @@ A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` #### YOLOv9 -YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. [Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes. +YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default.
[Instructions](#yolov9-for-google-coral-support) for downloading a model with support for the Google Coral. + +:::tip + +**Frigate+ Users:** Follow the [instructions](/integrations/plus#use-models) to set a model ID in your config file. + +:::
YOLOv9 Setup & Config @@ -566,7 +577,7 @@ $ docker run --device=/dev/kfd --device=/dev/dri \ When using Docker Compose: -```yaml +```yaml {4-6} services: frigate: ... @@ -597,7 +608,7 @@ $ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \ When using Docker Compose: -```yaml +```yaml {4-5} services: frigate: ... @@ -654,11 +665,9 @@ ONNX is an open format for building machine learning models, Frigate supports ru If the correct build is used for your GPU then the GPU will be detected and used automatically. - **AMD** - - ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image. - **Intel** - - OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image. - **Nvidia** @@ -1474,6 +1483,42 @@ model: input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here ``` +## AXERA + +Hardware accelerated object detection is supported on the following SoCs: + +- AX650N +- AX8850N + +This implementation uses the [AXera Pulsar2 Toolchain](https://huggingface.co/AXERA-TECH/Pulsar2). + +See the [installation docs](../frigate/installation.md#axera) for information on configuring the AXEngine hardware. + +### Configuration + +When configuring the AXEngine detector, you have to specify the model name. + +#### yolov9 + +A yolov9 model is provided in the container at `/axmodels` and is used by this detector type by default. + +Use the model configuration shown below when using the axengine detector with the default axmodel: + +```yaml +detectors: + axengine: + type: axengine + +model: + path: frigate-yolov9-tiny + model_type: yolo-generic + width: 320 + height: 320 + input_dtype: int + input_pixel_format: bgr + labelmap_path: /labelmap/coco-80.txt +``` + # Models Some model types are not included in Frigate by default. 
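For a model obtained through the steps that follow, the config generally takes the same shape as the detector examples above. A sketch for an ONNX export (the file path is a placeholder, and the exact keys depend on your hardware; check the relevant detector section above):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic
  width: 320
  height: 320
  # Path to the exported model, bind mounted into the container
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```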
@@ -1556,19 +1601,23 @@ cd tensorrt_demos/yolo python3 yolo_to_onnx.py -m yolov7-320 ``` -#### YOLOv9 +#### YOLOv9 for Google Coral Support + +[Download the model](https://github.com/dbro/frigate-detector-edgetpu-yolo9/releases/download/v1.0/yolov9-s-relu6-best_320_int8_edgetpu.tflite), bind mount the file into the container, and provide the path with `model.path`. Note that the linked model requires a 17-label [labelmap file](https://raw.githubusercontent.com/dbro/frigate-detector-edgetpu-yolo9/refs/heads/main/labels-coco17.txt) that includes only 17 COCO classes. + +#### YOLOv9 for other detectors YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`). ```sh docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF' FROM python:3.11 AS build -RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/* -COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/ +RUN apt-get update && apt-get install --no-install-recommends -y cmake libgl1 && rm -rf /var/lib/apt/lists/* +COPY --from=ghcr.io/astral-sh/uv:0.10.4 /uv /bin/ WORKDIR /yolov9 ADD https://github.com/WongKinYiu/yolov9.git . 
RUN uv pip install --system -r requirements.txt -RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript +RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier==0.4.* onnxscript ARG MODEL_SIZE ARG IMG_SIZE ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt diff --git a/docs/docs/configuration/profiles.md b/docs/docs/configuration/profiles.md new file mode 100644 index 000000000..ef0778e18 --- /dev/null +++ b/docs/docs/configuration/profiles.md @@ -0,0 +1,188 @@ +--- +id: profiles +title: Profiles +--- + +Profiles allow you to define named sets of camera configuration overrides that can be activated and deactivated at runtime without restarting Frigate. This is useful for scenarios like switching between "Home" and "Away" modes, daytime and nighttime configurations, or any situation where you want to quickly change how multiple cameras behave. + +## How Profiles Work + +Profiles operate as a two-level system: + +1. **Profile definitions** are declared at the top level of your config under `profiles`. Each definition has a machine name (the key) and a `friendly_name` for display in the UI. +2. **Camera profile overrides** are declared under each camera's `profiles` section, keyed by the profile name. Only the settings you want to change need to be specified — everything else is inherited from the camera's base configuration. + +When a profile is activated, Frigate merges each camera's profile overrides on top of its base config. When the profile is deactivated, all cameras revert to their original settings. Only one profile can be active at a time. + +:::info + +Profile changes are applied in-memory and take effect immediately — no restart is required. The active profile is persisted across Frigate restarts (stored in the `/config/.active_profile` file). + +::: + +## Configuration + +The easiest way to define profiles is to use the Frigate UI. 
Profiles can also be configured manually in your configuration file. + +### Using the UI + +To create and manage profiles from the UI, open **Settings**. From there you can: + +1. **Create a profile** — Navigate to **Profiles**. Click the **Add Profile** button, then enter a name (and optionally a profile ID). +2. **Configure overrides** — Navigate to a camera configuration section (e.g. Motion detection, Record, Notifications). In the top right, two buttons will appear: choose a camera and a profile from the profile selector to edit overrides for that camera and section. Only the fields you change will be stored as overrides — fields that require a restart are hidden since profiles are applied at runtime. You can click the **Remove Profile Override** button to clear the stored overrides for that camera and section. +3. **Activate a profile** — Use the **Profiles** option in Frigate's main menu to choose a profile. Alternatively, in Settings, navigate to **Profiles**, then choose a profile in the Active Profile dropdown to activate it. The active profile is also shown in the status bar at the bottom of the screen on desktop browsers. +4. **Delete a profile** — Navigate to **Profiles**, then click the trash icon for a profile. This removes the profile definition and all camera overrides associated with it. + +### Defining Profiles in YAML + +First, define your profiles at the top level of your Frigate config. Every profile name referenced by a camera must be defined here. + +```yaml +profiles: + home: + friendly_name: Home + away: + friendly_name: Away + night: + friendly_name: Night Mode +``` + +### Camera Profile Overrides + +Under each camera, add a `profiles` section with overrides for each profile. You only need to include the settings you want to change.
+ +```yaml +cameras: + front_door: + ffmpeg: + inputs: + - path: rtsp://camera:554/stream + roles: + - detect + - record + detect: + enabled: true + record: + enabled: true + profiles: + away: + detect: + enabled: true + notifications: + enabled: true + objects: + track: + - person + - car + - package + review: + alerts: + labels: + - person + - car + - package + home: + detect: + enabled: true + notifications: + enabled: false + objects: + track: + - person +``` + +### Supported Override Sections + +The following camera configuration sections can be overridden in a profile: + +| Section | Description | +| ------------------ | ----------------------------------------- | +| `enabled` | Enable or disable the camera entirely | +| `audio` | Audio detection settings | +| `birdseye` | Birdseye view settings | +| `detect` | Object detection settings | +| `face_recognition` | Face recognition settings | +| `lpr` | License plate recognition settings | +| `motion` | Motion detection settings | +| `notifications` | Notification settings | +| `objects` | Object tracking and filter settings | +| `record` | Recording settings | +| `review` | Review alert and detection settings | +| `snapshots` | Snapshot settings | +| `zones` | Zone definitions (merged with base zones) | + +:::note + +Only the fields you explicitly set in a profile override are applied. All other fields retain their base configuration values. For zones, profile zones are merged with the camera's base zones — any zone defined in the profile will override or add to the base zones. + +::: + +## Activating Profiles + +Profiles can be activated and deactivated from the Frigate UI. Open the Settings cog and select **Profiles** from the submenu to see all defined profiles. From there you can activate any profile or deactivate the current one. The active profile is indicated in the UI so you always know which profile is in effect. 
+ +## Example: Home / Away Setup + +A common use case is having different detection and notification settings based on whether you are home or away. + +```yaml +profiles: + home: + friendly_name: Home + away: + friendly_name: Away + +cameras: + front_door: + ffmpeg: + inputs: + - path: rtsp://camera:554/stream + roles: + - detect + - record + detect: + enabled: true + record: + enabled: true + notifications: + enabled: false + profiles: + away: + notifications: + enabled: true + review: + alerts: + labels: + - person + - car + home: + notifications: + enabled: false + + indoor_cam: + ffmpeg: + inputs: + - path: rtsp://camera:554/indoor + roles: + - detect + - record + detect: + enabled: false + record: + enabled: false + profiles: + away: + enabled: true + detect: + enabled: true + record: + enabled: true + home: + enabled: false +``` + +In this example: + +- **Away profile**: The front door camera enables notifications and tracks specific alert labels. The indoor camera is fully enabled with detection and recording. +- **Home profile**: The front door camera disables notifications. The indoor camera is completely disabled for privacy. +- **No profile active**: All cameras use their base configuration values. diff --git a/docs/docs/configuration/record.md b/docs/docs/configuration/record.md index 4dfd8b77c..afd26c641 100644 --- a/docs/docs/configuration/record.md +++ b/docs/docs/configuration/record.md @@ -130,7 +130,7 @@ When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means To configure the speed-up factor, the frame rate and further custom settings, the configuration parameter `timelapse_args` can be used. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS: -```yaml +```yaml {3-4} record: enabled: True export: @@ -139,7 +139,13 @@ record: :::tip -When using `hwaccel_args` globally hardware encoding is used for time lapse generation. 
The encoder determines its own behavior so the resulting file size may be undesirably large. +When using `hwaccel_args`, hardware encoding is used for timelapse generation. This setting can be overridden for a specific camera (e.g., when camera resolution exceeds hardware encoder limits); set `cameras..record.export.hwaccel_args` with the appropriate settings. Using an unrecognized value or empty string will fall back to software encoding (libx264). + +::: + +:::tip + +The encoder determines its own behavior so the resulting file size may be undesirably large. To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (where `n` stands for the value of the quantisation parameter). The value can be adjusted to get an acceptable tradeoff between quality and file size for the given scenario. ::: @@ -148,19 +154,18 @@ To reduce the output file size the ffmpeg parameter `-qp n` can be utilized (whe Apple devices running the Safari browser may fail to playback h.265 recordings. The [apple compatibility option](../configuration/camera_specific.md#h265-cameras-via-safari) should be used to ensure seamless playback on Apple devices. -## Syncing Recordings With Disk +## Syncing Media Files With Disk -In some cases the recordings files may be deleted but Frigate will not know this has happened. Recordings sync can be enabled which will tell Frigate to check the file system and delete any db entries for files which don't exist. +Media files (event snapshots, event thumbnails, review thumbnails, previews, exports, and recordings) can become orphaned when database entries are deleted but the corresponding files remain on disk. -```yaml -record: - sync_recordings: True -``` +Normal operation may leave small numbers of orphaned files until Frigate's scheduled cleanup, but crashes, configuration changes, or upgrades may cause more orphaned files that Frigate does not clean up. 
This feature checks the file system for media files and removes any that are not referenced in the database. -This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`, just stop Frigate, delete the `frigate.db` database, and restart. +The Maintenance pane in the Frigate UI or an API endpoint `POST /api/media/sync` can be used to trigger a media sync. When using the API, a job ID is returned and the operation continues on the server. Status can be checked with the `/api/media/sync/status/{job_id}` endpoint. + +Setting `verbose: true` writes a detailed report of every orphaned file and database entry to `/config/media_sync/.txt`. For recordings, the report separates orphaned database entries (DB records whose files are missing from disk) from orphaned files (files on disk with no corresponding database record). :::warning -The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary. +This operation uses considerable CPU resources and includes a safety threshold that aborts if more than 50% of files would be deleted. Only run when necessary. If you set `force: true` the safety threshold will be bypassed; do not use `force` unless you are certain the deletions are intended. ::: diff --git a/docs/docs/configuration/reference.md b/docs/docs/configuration/reference.md index 206d7012e..c6ac207aa 100644 --- a/docs/docs/configuration/reference.md +++ b/docs/docs/configuration/reference.md @@ -16,6 +16,8 @@ mqtt: # Optional: Enable mqtt server (default: shown below) enabled: True # Required: host name + # NOTE: MQTT host can be specified with an environment variable or docker secrets that must begin with 'FRIGATE_'. + # e.g. 
host: '{FRIGATE_MQTT_HOST}' host: mqtt.server.com # Optional: port (default: shown below) port: 1883 @@ -73,11 +75,19 @@ tls: # Optional: Enable TLS for port 8971 (default: shown below) enabled: True -# Optional: IPv6 configuration +# Optional: Networking configuration networking: # Optional: Enable IPv6 on 5000, and 8971 if tls is configured (default: shown below) ipv6: enabled: False + # Optional: Override ports Frigate uses for listening (defaults: shown below) + # An IP address may also be provided to bind to a specific interface, e.g. ip:port + # NOTE: This setting is for advanced users and may break some integrations. The majority + # of users should change ports in the docker compose file + # or use the docker run `--publish` option to select a different port. + listen: + internal: 5000 + external: 8971 # Optional: Proxy configuration proxy: @@ -337,7 +347,15 @@ objects: # Optional: mask to prevent all object types from being detected in certain areas (default: no mask) # Checks based on the bottom center of the bounding box of the object. 
# NOTE: This mask is COMBINED with the object type specific mask below - mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278 + mask: + # Object filter mask name (required) + mask1: + # Optional: A friendly name for the mask + friendly_name: "Object filter mask area" + # Optional: Whether this mask is active (default: true) + enabled: true + # Required: Coordinates polygon for the mask + coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278" # Optional: filters to reduce false positives for specific object types filters: person: @@ -357,7 +375,15 @@ objects: threshold: 0.7 # Optional: mask to prevent this object type from being detected in certain areas (default: no mask) # Checks based on the bottom center of the bounding box of the object - mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278 + mask: + # Object filter mask name (required) + mask1: + # Optional: A friendly name for the mask + friendly_name: "Object filter mask area" + # Optional: Whether this mask is active (default: true) + enabled: true + # Required: Coordinates polygon for the mask + coordinates: "0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278" # Optional: Configuration for AI generated tracked object descriptions genai: # Optional: Enable AI object description generation (default: shown below) @@ -456,12 +482,16 @@ motion: # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive. # The value should be between 1 and 255. threshold: 30 - # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection - # needs to recalibrate. (default: shown below) + # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection needs + # to recalibrate and motion checks stop for that frame. Recordings are unaffected. 
(default: shown below) # Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion. - # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching - # a doorbell camera. + # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera. lightning_threshold: 0.8 + # Optional: Fraction of the frame that must change in a single update before motion boxes are completely + # ignored. Values range between 0.0 and 1.0. When exceeded, no motion boxes are reported and **no motion + # recording** is created for that frame. Leave unset (null) to disable this feature. Use with care on PTZ + # cameras or other situations where you require guaranteed frame capture. + skip_motion_threshold: None # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below) # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will # make motion detection more sensitive to smaller moving objects. @@ -481,7 +511,15 @@ motion: frame_height: 100 # Optional: motion mask # NOTE: see docs for more detailed info on creating masks - mask: 0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000 + mask: + # Motion mask name (required) + mask1: + # Optional: A friendly name for the mask + friendly_name: "Motion mask area" + # Optional: Whether this mask is active (default: true) + enabled: true + # Required: Coordinates polygon for the mask + coordinates: "0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000" # Optional: improve contrast (default: shown below) # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive # for daytime. 
@@ -510,8 +548,6 @@ record: # Optional: Number of minutes to wait between cleanup runs (default: shown below) # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o expire_interval: 60 - # Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below). - sync_recordings: False # Optional: Continuous retention settings continuous: # Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below) @@ -534,6 +570,8 @@ record: # The -r (framerate) dictates how smooth the output video is. # So the args would be -vf setpts=0.02*PTS -r 30 in that case. timelapse_args: "-vf setpts=0.04*PTS -r 30" + # Optional: Global hardware acceleration settings for timelapse exports. (default: inherit) + hwaccel_args: auto # Optional: Recording Preview Settings preview: # Optional: Quality of recording preview (default: shown below). @@ -580,13 +618,12 @@ record: # never stored, so setting the mode to "all" here won't bring them back. mode: motion -# Optional: Configuration for the jpg snapshots written to the clips directory for each tracked object +# Optional: Configuration for the snapshots written to the clips directory for each tracked object +# Timestamp, bounding_box, crop and height settings are applied by default to API requests for snapshots. 
# NOTE: Can be overridden at the camera level snapshots: - # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below) + # Optional: Enable writing snapshot images to /media/frigate/clips (default: shown below) enabled: False - # Optional: save a clean copy of the snapshot image (default: shown below) - clean_copy: True # Optional: print a timestamp on the snapshots (default: shown below) timestamp: False # Optional: draw bounding box on the snapshots (default: shown below) @@ -604,8 +641,8 @@ snapshots: # Optional: Per object retention days objects: person: 15 - # Optional: quality of the encoded jpeg, 0-100 (default: shown below) - quality: 70 + # Optional: quality of the encoded snapshot image, 0-100 (default: shown below) + quality: 60 # Optional: Configuration for semantic search capability semantic_search: @@ -752,7 +789,7 @@ classification: interval: None # Optional: Restream configuration -# Uses https://github.com/AlexxIT/go2rtc (v1.9.10) +# Uses https://github.com/AlexxIT/go2rtc (v1.9.13) # NOTE: The default go2rtc API port (1984) must be used, # changing this port for the integrated go2rtc instance is not supported. go2rtc: @@ -838,6 +875,11 @@ cameras: # Optional: camera specific output args (default: inherit) # output_args: + # Optional: camera specific hwaccel args for timelapse export (default: inherit) + # record: + # export: + # hwaccel_args: + # Optional: timeout for highest scoring image before allowing it # to be replaced by a newer image. (default: shown below) best_image_timeout: 60 @@ -853,6 +895,9 @@ cameras: front_steps: # Optional: A friendly name or descriptive text for the zones friendly_name: "" + # Optional: Whether this zone is active (default: shown below) + # Disabled zones are completely ignored at runtime - no object tracking or debug drawing + enabled: True # Required: List of x,y coordinates to define the polygon of the zone. 
# NOTE: Presence in a zone is evaluated only based on the bottom center of the object's bounding box. coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428 @@ -906,6 +951,8 @@ cameras: onvif: # Required: host of the camera being connected to. # NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0". + # NOTE: ONVIF user and password can be specified with environment variables or docker secrets + # that must begin with 'FRIGATE_'. e.g. user: '{FRIGATE_ONVIF_USERNAME}' host: 0.0.0.0 # Optional: ONVIF port for device (default: shown below). port: 8000 @@ -982,6 +1029,49 @@ cameras: actions: - notification + + # Optional: Named config profiles with partial overrides that can be activated at runtime. + # NOTE: Profile names must be defined in the top-level 'profiles' section. + profiles: + # Required: name of the profile (must match a top-level profile definition) + away: + # Optional: Enable or disable the camera when this profile is active (default: not set, inherits base) + enabled: true + # Optional: Override audio settings + audio: + enabled: true + # Optional: Override birdseye settings + # birdseye: + # Optional: Override detect settings + detect: + enabled: true + # Optional: Override face_recognition settings + # face_recognition: + # Optional: Override lpr settings + # lpr: + # Optional: Override motion settings + # motion: + # Optional: Override notification settings + notifications: + enabled: true + # Optional: Override objects settings + objects: + track: + - person + - car + # Optional: Override record settings + record: + enabled: true + # Optional: Override review settings + review: + alerts: + labels: + - person + - car + # Optional: Override snapshot settings + # snapshots: + # Optional: Override or add zones (merged with base zones) + # zones: + # Optional ui: # Optional: Set a timezone to use in the UI (default: use browser local time) @@ -1048,4 +1138,14 @@ camera_groups: icon: LuCar # Required:
index of this group order: 0 + +# Optional: Profile definitions for named config overrides +# NOTE: Profile names defined here can be referenced in camera profiles sections +profiles: + # Required: name of the profile (machine name used internally) + home: + # Required: display name shown in the UI + friendly_name: Home + away: + friendly_name: Away ``` diff --git a/docs/docs/configuration/restream.md b/docs/docs/configuration/restream.md index ebd506294..ac3bcc503 100644 --- a/docs/docs/configuration/restream.md +++ b/docs/docs/configuration/restream.md @@ -7,7 +7,7 @@ title: Restream Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://:8554/`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate. -Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features. +Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.13) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` section of the config; see the [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#configuration) for more advanced configurations and features. :::note @@ -34,7 +34,7 @@ To improve connection speed when using Birdseye via restream you can enable a sm The go2rtc restream can be secured with RTSP-based username/password authentication.
Ex: -```yaml +```yaml {2-4} go2rtc: rtsp: username: "admin" @@ -147,6 +147,7 @@ For example: ```yaml go2rtc: streams: + # highlight-error-line my_camera: rtsp://username:$@foo%@192.168.1.100 ``` @@ -155,6 +156,7 @@ becomes ```yaml go2rtc: streams: + # highlight-next-line my_camera: rtsp://username:$%40foo%25@192.168.1.100 ``` @@ -206,7 +208,7 @@ Enabling arbitrary exec sources allows execution of arbitrary commands through g ## Advanced Restream Configurations -The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below: +The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.13#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below: :::warning diff --git a/docs/docs/configuration/review.md b/docs/docs/configuration/review.md index 752c496a3..d8769749b 100644 --- a/docs/docs/configuration/review.md +++ b/docs/docs/configuration/review.md @@ -71,7 +71,7 @@ To exclude a specific camera from alerts or detections, simply provide an empty For example, to exclude objects on the camera _gatecamera_ from any detections, include this in your config: -```yaml +```yaml {3-5} cameras: gatecamera: review: diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md index 91f435ff0..4c646f79a 100644 --- a/docs/docs/configuration/semantic_search.md +++ b/docs/docs/configuration/semantic_search.md @@ -13,7 +13,7 @@ Semantic Search is accessed via the _Explore_ view in the Frigate UI. Semantic Search works by running a large AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably or at all. -A minimum of 8GB of RAM is required to use Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems. +A minimum of 8GB of RAM is required to use Semantic Search. 
A CPU with AVX and AVX2 instructions is also required to run Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems. For best performance, 16GB or more of RAM and a dedicated GPU are recommended. @@ -76,6 +76,40 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings ::: +### GenAI Provider + +Frigate can use a GenAI provider for semantic search embeddings when that provider has the `embeddings` role. Currently, only **llama.cpp** supports multimodal embeddings (both text and images). + +To use llama.cpp for semantic search: + +1. Configure a GenAI provider in your config with `embeddings` in its `roles`. +2. Set `semantic_search.model` to the GenAI config key (e.g. `default`). +3. Start the llama.cpp server with `--embeddings` and `--mmproj` for image support: + +```yaml +genai: + default: + provider: llamacpp + base_url: http://localhost:8080 + model: your-model-name + roles: + - embeddings + - vision + - tools + +semantic_search: + enabled: True + model: default +``` + +The llama.cpp server must be started with the `--embeddings` flag to enable the embeddings API, and it must serve a multimodal embeddings model. See the [llama.cpp server documentation](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for details. + +:::note + +Switching between Jina models and a GenAI provider requires reindexing. Embeddings from different backends are incompatible. + +::: + ### GPU Acceleration The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
diff --git a/docs/docs/configuration/snapshots.md b/docs/docs/configuration/snapshots.md index 01c034a04..2f339b210 100644 --- a/docs/docs/configuration/snapshots.md +++ b/docs/docs/configuration/snapshots.md @@ -3,7 +3,7 @@ id: snapshots title: Snapshots --- -Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `-.jpg`. They are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx) +Frigate can save a snapshot image to `/media/frigate/clips` for each detected object, named `--clean.webp`. They are also accessible [via the API](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx). Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service. @@ -13,21 +13,19 @@ Snapshots sent via MQTT are configured in the [config file](/configuration) unde ## Frame Selection -Frigate does not save every frame — it picks a single "best" frame for each tracked object and uses it for both the snapshot and clean copy. As the object is tracked across frames, Frigate continuously evaluates whether the current frame is better than the previous best based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. The snapshot is written to disk once tracking ends using whichever frame was determined to be the best. +Frigate does not save every frame. It picks a single "best" frame for each tracked object based on detection confidence, object size, and the presence of key attributes like faces or license plates. Frames where the object touches the edge of the frame are deprioritized. That best frame is written to disk once tracking ends. MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`.
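For reference, the per-camera MQTT annotation settings described above might look like the following sketch (option names follow Frigate's reference config; the values shown are illustrative):

```yaml
cameras:
  your_camera:
    mqtt:
      # Publish a snapshot over MQTT whenever a better best-frame is found
      enabled: True
      # Annotations applied only to the MQTT image
      timestamp: True
      bounding_box: True
      # Crop the image to the tracked object and scale it to this height
      crop: True
      height: 270
      # Quality of the published image, 0-100
      quality: 70
```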
MQTT snapshots are published more frequently — each time a better thumbnail frame is found during tracking, or when the current best image is older than `best_image_timeout` (default: 60s). These use their own annotation settings configured under `cameras -> your_camera -> mqtt`. -## Clean Copy +## Rendering -Frigate can produce up to two snapshot files per event, each used in different places: +Frigate stores a single clean snapshot on disk: -| Version | File | Annotations | Used by | -| --- | --- | --- | --- | -| **Regular snapshot** | `-.jpg` | Respects your `timestamp`, `bounding_box`, `crop`, and `height` settings | API (`/api/events//snapshot.jpg`), MQTT (`/