Mirror of https://github.com/blakeblackshear/frigate.git, synced 2025-12-06 21:44:13 +03:00
Update genai docs

This commit is contained in:
parent 213a1fbd00
commit f436e70c2e
@@ -70,7 +70,7 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
 genai:
   provider: ollama
   base_url: http://localhost:11434
-  model: llava:7b
+  model: qwen3-vl:4b
 ```

 ## Google Gemini
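For context, the full `genai` block after this change would look like the following sketch. The indentation follows standard Frigate YAML config conventions; the host and port are the defaults from the docs example, not requirements:

```yaml
# Sketch of the updated Frigate genai config (host/port are the docs defaults)
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: qwen3-vl:4b
```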
@@ -35,19 +35,18 @@ Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger s

 :::tip

-If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
+If you are trying to use a single model for Frigate and HomeAssistant, it will need to support vision and tools calling. qwen3-VL supports vision and tools simultaneously in Ollama.

 :::

 The following models are recommended:

 | Model             | Notes                                                                |
-| ----------------- | ----------------------------------------------------------- |
-| `qwen3-vl`        | Strong visual and situational understanding                 |
+| ----------------- | -------------------------------------------------------------------- |
+| `qwen3-vl`        | Strong visual and situational understanding, higher vram requirement |
 | `Intern3.5VL`     | Relatively fast with good vision comprehension              |
 | `gemma3`          | Strong frame-to-frame understanding, slower inference times |
 | `qwen2.5-vl`      | Fast but capable model with good vision comprehension       |
-| `llava-phi3`      | Lightweight and fast model with vision comprehension        |

 :::note
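Frigate's `base_url` in the config above points at Ollama's REST API. As a rough illustration of what a request to that endpoint looks like, here is a sketch that builds the JSON body for Ollama's `/api/generate` endpoint with the model tag from the updated docs. The helper name `build_generate_payload` is hypothetical (not part of Frigate or Ollama); the `model`, `prompt`, `stream`, and `images` fields are from Ollama's API.

```python
import json

# Hypothetical helper: builds the JSON body Ollama's /api/generate expects.
# For vision models like qwen3-vl, "images" carries base64-encoded frames.
def build_generate_payload(model, prompt, images=None):
    payload = {"model": model, "prompt": prompt, "stream": False}
    if images:
        payload["images"] = images
    return json.dumps(payload)

# Model tag matching the updated docs example (pull it first: ollama pull qwen3-vl:4b)
body = build_generate_payload("qwen3-vl:4b", "Describe the scene.")
```

The request itself would be a POST of `body` to `http://localhost:11434/api/generate` (the `base_url` plus the endpoint path), assuming Ollama is running locally on its default port.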