frigate/frigate/genai
Nicolas Mowen fa1f9a1fa4
Add GenAI Backend Streaming and Chat (#22152)
* Add basic chat page with entry

* Add chat history

* processing

* Add markdown

* Improvements

* Adjust timing format

* Reduce fields in response

* More time parsing improvements

* Show tool calls separately from message

* Add title

* Improve UI handling

* Support streaming

* Full streaming support

* Fix tool calling

* Add copy button

* Improvements to UI

* Improve default behavior

* Implement message editing

* Add sub label to event tool filtering

* Cleanup

* Cleanup UI and prompt

* Cleanup UI bubbles

* Fix loading

* Add support for markdown tables

* Add thumbnail images to object results

* Add a starting state for chat

* Cleanup
2026-02-27 09:07:30 -07:00
__init__.py Add support for multiple GenAI Providers (#22144) 2026-02-27 08:35:33 -07:00
azure-openai.py Implement LLM Chat API with tool calling support (#21731) 2026-02-26 21:27:56 -07:00
gemini.py Adapt to new Gemini format 2026-02-26 21:27:56 -07:00
llama_cpp.py Add GenAI Backend Streaming and Chat (#22152) 2026-02-27 09:07:30 -07:00
manager.py Add support for multiple GenAI Providers (#22144) 2026-02-27 08:35:33 -07:00
ollama.py Add GenAI Backend Streaming and Chat (#22152) 2026-02-27 09:07:30 -07:00
openai.py Implement LLM Chat API with tool calling support (#21731) 2026-02-26 21:27:56 -07:00
utils.py Add GenAI Backend Streaming and Chat (#22152) 2026-02-27 09:07:30 -07:00
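The file list above implies a provider/manager pattern: `manager.py` selects among per-backend modules (`openai.py`, `ollama.py`, `gemini.py`, `llama_cpp.py`), and the latest commit adds streaming chat. A minimal sketch of what such a registry with a streaming interface might look like — all names here are illustrative assumptions, not Frigate's actual API:

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterator, List, Type

class GenAIProvider(ABC):
    """Hypothetical base class for a GenAI backend (names assumed)."""

    @abstractmethod
    def stream_chat(self, messages: List[dict]) -> Iterator[str]:
        """Yield the response incrementally, chunk by chunk."""


class EchoProvider(GenAIProvider):
    """Toy backend standing in for openai.py / ollama.py / etc."""

    def stream_chat(self, messages: List[dict]) -> Iterator[str]:
        # Stream the last user message back one word at a time,
        # mimicking token-by-token streaming from a real LLM backend.
        for word in messages[-1]["content"].split():
            yield word + " "


class GenAIManager:
    """Registry that maps a config name to a provider class,
    roughly what a manager.py might do for multiple providers."""

    def __init__(self) -> None:
        self._providers: Dict[str, Type[GenAIProvider]] = {}

    def register(self, name: str, cls: Type[GenAIProvider]) -> None:
        self._providers[name] = cls

    def create(self, name: str) -> GenAIProvider:
        return self._providers[name]()


manager = GenAIManager()
manager.register("echo", EchoProvider)
provider = manager.create("echo")

# A chat UI would render chunks as they arrive; here we just join them.
reply = "".join(
    provider.stream_chat([{"role": "user", "content": "hello world"}])
)
```

Returning an iterator rather than a full string is what lets a chat UI render partial output as it arrives, which matches the "Support streaming" / "Full streaming support" items in the commit message above.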