From 9ec66fcc9bab75c1dc5354598a5b9afef0dd6a63 Mon Sep 17 00:00:00 2001
From: Jason Hunter
Date: Wed, 12 Jun 2024 22:59:43 -0400
Subject: [PATCH] update ollama docs

---
 docs/docs/configuration/genai.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/docs/configuration/genai.md b/docs/docs/configuration/genai.md
index 0bd896940..651630023 100644
--- a/docs/docs/configuration/genai.md
+++ b/docs/docs/configuration/genai.md
@@ -33,13 +33,19 @@ cameras:
 
 You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`.
 
+:::note
+
+You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
+
+:::
+
 ### Configuration
 
 ```yaml
 genai:
   enabled: True
   provider: ollama
-  base_url: http://localhost::11434
+  base_url: http://localhost:11434
   model: llava
 ```
 
@@ -112,7 +118,7 @@ You are also able to define custom prompts in your configuration.
 genai:
   enabled: True
   provider: ollama
-  base_url: http://localhost::11434
+  base_url: http://localhost:11434
   model: llava
   prompt: "Describe the {label} in these images from the {camera} security camera."
   object_prompts:
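
A minimal sketch, not part of the patch, for checking the corrected `base_url`: it assumes Ollama is running locally on its default port `11434` and uses the standard `GET /api/tags` endpoint to confirm the server responds and that a vision-capable model such as `llava` has been pulled.

```python
import requests

# Ollama base URL as used in the corrected Frigate config above
# (assumes the default local Ollama port, 11434).
base_url = "http://localhost:11434"

# GET /api/tags lists the models available locally; a successful response
# confirms the URL (single colon before the port) is reachable.
resp = requests.get(f"{base_url}/api/tags", timeout=5)
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print("Local models:", models)

if not any(name.startswith("llava") for name in models):
    print("No llava variant found; run `ollama pull llava` before enabling genai.")
```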