Miscellaneous Fixes (0.17 beta) (#21396)

* use fallback timeout for opening media source

Covers the case where there is no active connection to the go2rtc stream and the camera takes a long time to start.

* Add review thumbnail URL to integration docs

* fix weekday starting point on explore when set to monday in UI settings

* only show allowed cameras and groups in camera filter button

* Reset the wizard state after closing the modal

* remove footnote about 0.17

* 0.17

* add triggers to note

* add slovak

* Ensure genai client exists

* Correctly catch JSONDecodeError

* clarify docs for none class

* version bump on updating page

* fix ExportRecordingsBody to allow optional name field

Fixes https://github.com/blakeblackshear/frigate/discussions/21413, caused by https://github.com/blakeblackshear/frigate-hass-integration/pull/1021.

* Catch remote protocol error from ollama

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Josh Hawkins 2025-12-24 08:03:09 -06:00 committed by GitHub
parent f862ef5d0c
commit a4ece9dae3
14 changed files with 87 additions and 36 deletions

View File

@@ -39,7 +39,7 @@ For object classification:
 :::note
-A tracked object can only have a single sub label. If you are using Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
+A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
 :::

@@ -89,9 +89,9 @@ Creating and training the model is done within the Frigate UI using the `Classif
 ### Step 1: Name and Define
 
-Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
 
-For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". Create a third class, "none", for other neighborhood cats that are not your own.
+For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
 
 ### Step 2: Assign Training Examples

View File

@@ -5,7 +5,7 @@ title: Updating
 # Updating Frigate
 
-The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
+The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
 
 Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.

@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
 2. **Update and Pull the Latest Image**:
    - If using Docker Compose:
-     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
+     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
       ```yaml
       services:
         frigate:
-          image: ghcr.io/blakeblackshear/frigate:0.16.2
+          image: ghcr.io/blakeblackshear/frigate:0.17.0
       ```
     - Then pull the image:
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.2
+      docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```
   - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
   - If using `docker run`:
-    - Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
+    - Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.2
+      docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```
 
 3. **Start the Container**:

@@ -105,8 +105,8 @@ If an update causes issues:
 1. Stop Frigate.
 2. Restore your backed-up config file and database.
 3. Revert to the previous image version:
-   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
-   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
+   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
+   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
    - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
 4. Verify the old version is running again.

View File

@@ -245,6 +245,12 @@ To load a preview gif of a review item:
 ```
 https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
 ```
+
+To load the thumbnail of a review item:
+
+```
+https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
+```
 
 <a name="streams"></a>
 ## RTSP stream
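
A rough usage sketch for the new endpoint: the review id and camera name below are placeholders, and whether the proxy route needs authentication depends on your Home Assistant setup.

```python
import requests

# Placeholders: substitute your Home Assistant URL, a real review id,
# and the camera associated with the review item.
url = (
    "https://HA_URL/api/frigate/notifications/"
    "1718271011.123456-abcdef/front_door/review_thumbnail.webp"
)

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Save the webp thumbnail returned by the integration proxy
with open("review_thumbnail.webp", "wb") as f:
    f.write(resp.content)
```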

View File

@@ -16,12 +16,10 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
 
 | Model Type  | Description |
 | ----------- | ----------- |
 | `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
 | `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
-| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
-
-_\* Support coming in 0.17_
 
 ### YOLOv9 Details

@@ -39,7 +37,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
 #### Rockchip (RKNN) Support
 
-For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
 
 ## Supported detector types

@@ -55,7 +53,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
 | [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
 
-_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
+_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
 
 ## Improving your model

View File

@@ -1,4 +1,4 @@
-from typing import Union
+from typing import Optional, Union
 
 from pydantic import BaseModel, Field
 from pydantic.json_schema import SkipJsonSchema

@@ -16,5 +16,5 @@ class ExportRecordingsBody(BaseModel):
     source: PlaybackSourceEnum = Field(
         default=PlaybackSourceEnum.recordings, title="Playback source"
     )
-    name: str = Field(title="Friendly name", default=None, max_length=256)
+    name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
     image_path: Union[str, SkipJsonSchema[None]] = None
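
A minimal sketch of why the annotation change matters, using standalone models rather than Frigate's actual `ExportRecordingsBody` (pydantic v2 semantics assumed): defaults are not validated, so `name: str` with `default=None` only breaks when a client sends an explicit `"name": null`.

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class Before(BaseModel):
    # default=None slips through because defaults are not validated...
    name: str = Field(title="Friendly name", default=None, max_length=256)


class After(BaseModel):
    name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)


try:
    # ...but a request body containing {"name": null} fails validation
    Before(name=None)
except ValidationError as e:
    print(e)  # Input should be a valid string

print(After(name=None))  # name=None -- accepted
```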

View File

@@ -203,7 +203,9 @@ class EmbeddingMaintainer(threading.Thread):
         # post processors
         self.post_processors: list[PostProcessorApi] = []
 
-        if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
+        if self.genai_client is not None and any(
+            c.review.genai.enabled_in_config for c in self.config.cameras.values()
+        ):
             self.post_processors.append(
                 ReviewDescriptionProcessor(
                     self.config, self.requestor, self.metrics, self.genai_client

@@ -244,7 +246,9 @@ class EmbeddingMaintainer(threading.Thread):
             )
             self.post_processors.append(semantic_trigger_processor)
 
-        if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
+        if self.genai_client is not None and any(
+            c.objects.genai.enabled_in_config for c in self.config.cameras.values()
+        ):
             self.post_processors.append(
                 ObjectDescriptionProcessor(
                     self.config,
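
The shape of the guard, reduced to a runnable sketch; the config and processor types here are simplified stand-ins for Frigate's internals, not its real classes.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class GenAIConfig:
    enabled_in_config: bool


@dataclass
class Camera:
    review_genai: GenAIConfig


def build_post_processors(
    genai_client: Optional[Any], cameras: list[Camera]
) -> list[str]:
    processors: list[str] = []
    # Registering a GenAI processor with a None client would only fail later
    # at request time, so require both a client and at least one enabled camera.
    if genai_client is not None and any(
        c.review_genai.enabled_in_config for c in cameras
    ):
        processors.append("ReviewDescriptionProcessor")
    return processors


# GenAI enabled in config, but client creation failed: nothing is registered.
assert build_post_processors(None, [Camera(GenAIConfig(True))]) == []
assert build_post_processors(object(), [Camera(GenAIConfig(True))]) == [
    "ReviewDescriptionProcessor"
]
```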

View File

@@ -3,7 +3,7 @@
 import logging
 from typing import Any, Optional
 
-from httpx import TimeoutException
+from httpx import RemoteProtocolError, TimeoutException
 from ollama import Client as ApiClient
 from ollama import ResponseError

@@ -68,7 +68,12 @@ class OllamaClient(GenAIClient):
                 f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
             )
             return result["response"].strip()
-        except (TimeoutException, ResponseError, ConnectionError) as e:
+        except (
+            TimeoutException,
+            ResponseError,
+            RemoteProtocolError,
+            ConnectionError,
+        ) as e:
             logger.warning("Ollama returned an error: %s", str(e))
             return None
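
In isolation, the widened handler looks roughly like the sketch below (the `generate` wrapper is a stand-in for Frigate's method, assuming the `ollama` and `httpx` packages). `httpx` raises `RemoteProtocolError` when the server closes the connection mid-response, e.g. when ollama is restarted while generating.

```python
import logging
from typing import Optional

from httpx import RemoteProtocolError, TimeoutException
from ollama import Client as ApiClient
from ollama import ResponseError

logger = logging.getLogger(__name__)


def generate(client: ApiClient, model: str, prompt: str) -> Optional[str]:
    try:
        result = client.generate(model=model, prompt=prompt)
        return result["response"].strip()
    except (
        TimeoutException,
        ResponseError,
        RemoteProtocolError,  # server dropped the connection mid-response
        ConnectionError,
    ) as e:
        logger.warning("Ollama returned an error: %s", str(e))
        return None
```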

View File

@@ -42,11 +42,10 @@ def get_latest_version(config: FrigateConfig) -> str:
             "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
             timeout=10,
         )
+        response = request.json()
     except (RequestException, JSONDecodeError):
         return "unknown"
 
-    response = request.json()
-
     if request.ok and response and "tag_name" in response:
         return str(response.get("tag_name").replace("v", ""))
     else:
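
Pulled out of the diff, the corrected flow is the standard pattern below (a simplified sketch without Frigate's config parameter; `JSONDecodeError` here is `requests.exceptions.JSONDecodeError`): the `.json()` call is what raises, so it has to sit inside the `try` for the handler to ever fire.

```python
import requests
from requests.exceptions import JSONDecodeError, RequestException


def get_latest_version() -> str:
    try:
        request = requests.get(
            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
            timeout=10,
        )
        # .json() is the call that raises JSONDecodeError, so it must be
        # inside the try block for the except clause to catch it
        response = request.json()
    except (RequestException, JSONDecodeError):
        return "unknown"

    if request.ok and response and "tag_name" in response:
        return str(response.get("tag_name").replace("v", ""))
    return "unknown"
```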

View File

@@ -137,6 +137,11 @@ export default function ClassificationModelWizardDialog({
     onClose();
   };
 
+  const handleSuccessClose = () => {
+    dispatch({ type: "RESET" });
+    onClose();
+  };
+
   return (
     <Dialog
       open={open}

@@ -207,7 +212,7 @@
           step1Data={wizardState.step1Data}
           step2Data={wizardState.step2Data}
           initialData={wizardState.step3Data}
-          onClose={onClose}
+          onClose={handleSuccessClose}
           onBack={handleBack}
         />
       )}

View File

@@ -18,6 +18,7 @@ import PlatformAwareDialog from "../overlay/dialog/PlatformAwareDialog";
 import { useTranslation } from "react-i18next";
 import useSWR from "swr";
 import { FrigateConfig } from "@/types/frigateConfig";
+import { useUserPersistence } from "@/hooks/use-user-persistence";
 
 type CalendarFilterButtonProps = {
   reviewSummary?: ReviewSummary;

@@ -105,6 +106,7 @@ export function CalendarRangeFilterButton({
   const { t } = useTranslation(["components/filter"]);
   const { data: config } = useSWR<FrigateConfig>("config");
   const timezone = useTimezone(config);
+  const [weekStartsOn] = useUserPersistence("weekStartsOn", 0);
 
   const [open, setOpen] = useState(false);
   const selectedDate = useFormattedRange(

@@ -138,6 +140,7 @@
           initialDateTo={range?.to}
           timezone={timezone}
           showCompare={false}
+          weekStartsOn={weekStartsOn}
           onUpdate={(range) => {
             updateSelectedRange(range.range);
             setOpen(false);

View File

@@ -13,6 +13,7 @@ import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
 import FilterSwitch from "./FilterSwitch";
 import { FaVideo } from "react-icons/fa";
 import { useTranslation } from "react-i18next";
+import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
 
 type CameraFilterButtonProps = {
   allCameras: string[];

@@ -35,6 +36,30 @@ export function CamerasFilterButton({
   const [currentCameras, setCurrentCameras] = useState<string[] | undefined>(
     selectedCameras,
   );
+  const allowedCameras = useAllowedCameras();
+
+  // Filter cameras to only include those the user has access to
+  const filteredCameras = useMemo(
+    () => allCameras.filter((camera) => allowedCameras.includes(camera)),
+    [allCameras, allowedCameras],
+  );
+
+  // Filter groups to only include those with at least one allowed camera
+  const filteredGroups = useMemo(
+    () =>
+      groups
+        .map(([name, config]) => {
+          const allowedGroupCameras = config.cameras.filter((camera) =>
+            allowedCameras.includes(camera),
+          );
+          return [name, { ...config, cameras: allowedGroupCameras }] as [
+            string,
+            CameraGroupConfig,
+          ];
+        })
+        .filter(([, config]) => config.cameras.length > 0),
+    [groups, allowedCameras],
+  );
 
   const buttonText = useMemo(() => {
     if (isMobile) {

@@ -79,8 +104,8 @@
   );
 
   const content = (
     <CamerasFilterContent
-      allCameras={allCameras}
-      groups={groups}
+      allCameras={filteredCameras}
+      groups={filteredGroups}
       currentCameras={currentCameras}
       mainCamera={mainCamera}
       setCurrentCameras={setCurrentCameras}

View File

@@ -260,7 +260,7 @@ function MSEPlayer({
         // @ts-expect-error for typing
         value: codecs(MediaSource.isTypeSupported),
       },
-      3000,
+      (fallbackTimeout ?? 3) * 1000,
     ).catch(() => {
       if (wsRef.current) {
         onDisconnect();

@@ -290,7 +290,7 @@
         type: "mse",
         value: codecs(MediaSource.isTypeSupported),
       },
-      3000,
+      (fallbackTimeout ?? 3) * 1000,
     ).catch(() => {
       if (wsRef.current) {
         onDisconnect();

View File

@@ -35,6 +35,8 @@ export interface DateRangePickerProps {
   showCompare?: boolean;
   /** timezone */
   timezone?: string;
+  /** First day of the week: 0 = Sunday, 1 = Monday */
+  weekStartsOn?: number;
 }
 
 const getDateAdjustedForTimezone = (

@@ -91,6 +93,7 @@ export function DateRangePicker({
   onUpdate,
   onReset,
   showCompare = true,
+  weekStartsOn = 0,
 }: DateRangePickerProps) {
   const [isOpen, setIsOpen] = useState(false);

@@ -150,7 +153,9 @@
     if (!preset) throw new Error(`Unknown date range preset: ${presetName}`);
     const from = new TZDate(new Date(), timezone);
     const to = new TZDate(new Date(), timezone);
-    const first = from.getDate() - from.getDay();
+    const dayOfWeek = from.getDay();
+    const daysFromWeekStart = (dayOfWeek - weekStartsOn + 7) % 7;
+    const first = from.getDate() - daysFromWeekStart;
 
     switch (preset.name) {
       case "today":

@@ -184,8 +189,8 @@
         to.setHours(23, 59, 59, 999);
         break;
       case "lastWeek":
-        from.setDate(from.getDate() - 7 - from.getDay());
-        to.setDate(to.getDate() - to.getDay() - 1);
+        from.setDate(first - 7);
+        to.setDate(first - 1);
         from.setHours(0, 0, 0, 0);
         to.setHours(23, 59, 59, 999);
         break;
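
The week-start arithmetic is easiest to sanity-check outside the component. Here is the same formula ported to Python for illustration, with `getDay()`-style numbering assumed (0 = Sunday through 6 = Saturday):

```python
def days_from_week_start(day_of_week: int, week_starts_on: int) -> int:
    # How many days the current day is past the configured start of the week
    return (day_of_week - week_starts_on + 7) % 7


# With weeks starting on Monday (1), a Sunday (0) is 6 days into the week,
# so "this week" reaches back to the previous Monday instead of today.
assert days_from_week_start(0, 1) == 6
assert days_from_week_start(1, 1) == 0

# The old Sunday-based math was just the week_starts_on == 0 special case.
assert days_from_week_start(0, 0) == 0
assert days_from_week_start(6, 0) == 6
```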

View File

@@ -23,5 +23,6 @@ export const supportedLanguageKeys = [
   "lt",
   "uk",
   "cs",
+  "sk",
   "hu",
 ];