Adjust docs

Nicolas Mowen 2025-08-08 13:29:16 -06:00
parent 96a4707164
commit 223023ed42


@@ -15,13 +15,13 @@ To use Generative AI, you must define a single provider at the global level of y
 ```yaml
 genai:
-  enabled: True
   provider: gemini
   api_key: "{FRIGATE_GEMINI_API_KEY}"
   model: gemini-1.5-flash
 
 cameras:
   front_camera:
-    genai:
-      enabled: True # <- enable GenAI for your front camera
-      use_snapshot: True
+    objects:
+      genai:
+        enabled: True # <- enable GenAI for your front camera
+        use_snapshot: True
@@ -30,6 +30,7 @@ cameras:
-      required_zones:
-        - steps
+        required_zones:
+          - steps
   indoor_camera:
-    genai:
-      enabled: False # <- disable GenAI for your indoor camera
+    objects:
+      genai:
+        enabled: False # <- disable GenAI for your indoor camera
 ```
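The `{FRIGATE_GEMINI_API_KEY}` placeholder above is filled in at runtime from an environment variable with the same name. A minimal sketch of that substitution pattern (illustrative only; `substitute_env` is a hypothetical helper, not Frigate's actual config loader):

```python
import os
import re

def substitute_env(raw_config: str) -> str:
    """Replace {FRIGATE_*} placeholders with matching environment variables.

    Unset variables are left as-is so the error surfaces in the config.
    """
    return re.sub(
        r"\{(FRIGATE_[A-Z0-9_]+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        raw_config,
    )

os.environ["FRIGATE_GEMINI_API_KEY"] = "example-key"
print(substitute_env('api_key: "{FRIGATE_GEMINI_API_KEY}"'))
# -> api_key: "example-key"
```

Keeping the key in the environment rather than the YAML means the config file can be committed or shared without leaking credentials.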
@@ -68,7 +69,6 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
 ```yaml
 genai:
-  enabled: True
   provider: ollama
   base_url: http://localhost:11434
   model: llava:7b
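The `base_url` points at Ollama's HTTP API, whose `/api/generate` endpoint accepts a prompt plus base64-encoded images for multimodal models such as llava. A rough sketch of building such a request body (hypothetical helper; Frigate's real client code differs):

```python
import base64
import json

def build_generate_payload(model: str, prompt: str, jpeg_frames: list[bytes]) -> str:
    """Build a JSON body for Ollama's /api/generate endpoint.

    Ollama expects images as base64-encoded strings in the `images` field;
    `stream: False` requests a single complete response.
    """
    body = {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(f).decode("ascii") for f in jpeg_frames],
        "stream": False,
    }
    return json.dumps(body)

# This payload would be POSTed to http://localhost:11434/api/generate.
payload = build_generate_payload("llava:7b", "Describe the person.", [b"\xff\xd8fake-jpeg"])
```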
@@ -95,7 +95,6 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
 ```yaml
 genai:
-  enabled: True
   provider: gemini
   api_key: "{FRIGATE_GEMINI_API_KEY}"
   model: gemini-1.5-flash
@@ -117,7 +116,6 @@ To start using OpenAI, you must first [create an API key](https://platform.opena
 ```yaml
 genai:
-  enabled: True
   provider: openai
   api_key: "{FRIGATE_OPENAI_API_KEY}"
   model: gpt-4o
@@ -145,7 +143,6 @@ To start using Azure OpenAI, you must first [create a resource](https://learn.mi
 ```yaml
 genai:
-  enabled: True
   provider: azure_openai
   base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
   api_key: "{FRIGATE_OPENAI_API_KEY}"
@@ -188,22 +185,25 @@ You are also able to define custom prompts in your configuration.
 ```yaml
 genai:
-  enabled: True
   provider: ollama
   base_url: http://localhost:11434
   model: llava
-  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
-  object_prompts:
-    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
-    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
+
+objects:
+  genai:
+    prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
+    object_prompts:
+      person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
+      car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
 ```
 
-Prompts can also be overriden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
+Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
 
 ```yaml
 cameras:
   front_door:
-    genai:
-      use_snapshot: True
-      prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
-      object_prompts:
+    objects:
+      genai:
+        enabled: True
+        use_snapshot: True
+        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
+        object_prompts:
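The `{label}` and `{camera}` placeholders in these prompts are filled in per tracked object before the prompt is sent to the model. A minimal sketch of that templating, assuming simple `str.format`-style substitution (hypothetical illustration, not Frigate's internal code):

```python
# A camera-level prompt as it appears in the config above.
prompt_template = (
    "Analyze the {label} in these images from the {camera} security camera "
    "at the front door. Focus on the actions and potential intent of the {label}."
)

# Fill the placeholders for one tracked object on one camera.
rendered = prompt_template.format(label="person", camera="front_door")
print(rendered)
```

Because the same template serves every object type, the `object_prompts` map lets you swap in a more specific template when the `{label}` alone isn't enough.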