Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-01-22 12:08:29 +03:00)
Miscellaneous Fixes (0.17 beta) (#21396)
Some checks failed
CI / AMD64 Build (push) Has been cancelled
CI / ARM Build (push) Has been cancelled
CI / Jetson Jetpack 6 (push) Has been cancelled
CI / AMD64 Extra Build (push) Has been cancelled
CI / ARM Extra Build (push) Has been cancelled
CI / Synaptics Build (push) Has been cancelled
CI / Assemble and push default build (push) Has been cancelled
* use fallback timeout for opening media source: covers the case where there is no active connection to the go2rtc stream and the camera takes a long time to start
* Add review thumbnail URL to integration docs
* fix weekday starting point on explore when set to monday in UI settings
* only show allowed cameras and groups in camera filter button
* Reset the wizard state after closing with model
* remove footnote about 0.17
* 0.17
* add triggers to note
* add slovak
* Ensure genai client exists
* Correctly catch JSONDecodeError
* clarify docs for none class
* version bump on updating page
* fix ExportRecordingsBody to allow optional name field: fixes https://github.com/blakeblackshear/frigate/discussions/21413 because of https://github.com/blakeblackshear/frigate-hass-integration/pull/1021
* Catch remote protocol error from ollama

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
parent f862ef5d0c
commit a4ece9dae3
@@ -39,7 +39,7 @@ For object classification:
 
 :::note
 
-A tracked object can only have a single sub label. If you are using Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
+A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.
 
 :::
@@ -89,9 +89,9 @@ Creating and training the model is done within the Frigate UI using the `Classif
 
 ### Step 1: Name and Define
 
-Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.
 
-For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". Create a third class, "none", for other neighborhood cats that are not your own.
+For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.
 
 ### Step 2: Assign Training Examples
@@ -5,7 +5,7 @@ title: Updating
 
 # Updating Frigate
 
-The current stable version of Frigate is **0.16.2**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.2).
+The current stable version of Frigate is **0.17.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.17.0).
 
 Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
 2. **Update and Pull the Latest Image**:
 
    - If using Docker Compose:
-     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.2` instead of `0.15.2`). For example:
+     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.17.0` instead of `0.16.3`). For example:
       ```yaml
       services:
         frigate:
-          image: ghcr.io/blakeblackshear/frigate:0.16.2
+          image: ghcr.io/blakeblackshear/frigate:0.17.0
       ```
     - Then pull the image:
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.2
+      docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```
   - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
   - If using `docker run`:
-    - Pull the image with the appropriate tag (e.g., `0.16.2`, `0.16.2-tensorrt`, or `stable`):
+    - Pull the image with the appropriate tag (e.g., `0.17.0`, `0.17.0-tensorrt`, or `stable`):
       ```bash
-      docker pull ghcr.io/blakeblackshear/frigate:0.16.2
+      docker pull ghcr.io/blakeblackshear/frigate:0.17.0
       ```
 
 3. **Start the Container**:
@@ -105,8 +105,8 @@ If an update causes issues:
 1. Stop Frigate.
 2. Restore your backed-up config file and database.
 3. Revert to the previous image version:
-   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
-   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
+   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`) in your `docker run` command.
+   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.16.3`), and re-run `docker compose up -d`.
    - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
 4. Verify the old version is running again.
@@ -245,6 +245,12 @@ To load a preview gif of a review item:
 https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
 ```
 
+To load the thumbnail of a review item:
+
+```
+https://HA_URL/api/frigate/notifications/<review-id>/<camera>/review_thumbnail.webp
+```
+
 <a name="streams"></a>
 
 ## RTSP stream
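To use the thumbnail URL added above from your own tooling, a minimal Python sketch. `HA_URL`, the review id, the camera name, and the auth header are placeholders/assumptions; depending on your Home Assistant setup the notification proxy may serve these paths without authentication.

```python
import requests

HA_URL = "https://homeassistant.local:8123"  # placeholder
REVIEW_ID = "1718300000.123456-abcdef"       # placeholder review id
CAMERA = "front_door"                        # placeholder camera name
TOKEN = "<long-lived-access-token>"          # assumption: proxy requires HA auth

# Fetch the review thumbnail through the Home Assistant integration proxy
# and save it to disk.
resp = requests.get(
    f"{HA_URL}/api/frigate/notifications/{REVIEW_ID}/{CAMERA}/review_thumbnail.webp",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
with open("review_thumbnail.webp", "wb") as f:
    f.write(resp.content)
```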
@@ -15,13 +15,11 @@ There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yo
 
 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
 
 | Model Type  | Description |
 | ----------- | ----------- |
 | `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
 | `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
-| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
-
-_\* Support coming in 0.17_
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX, Apple Silicon, and Rockchip NPUs. |
 
 ### YOLOv9 Details
@@ -39,7 +37,7 @@ If you have a Hailo device, you will need to specify the hardware you have when
 
 #### Rockchip (RKNN) Support
 
-For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is available in 0.17 and later.
 
 ## Supported detector types
@@ -55,7 +53,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8)   | `hailo8l` | `yolov9` |
 | [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn`    | `yolov9` |
 
-_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
+_\* Requires manual conversion in 0.16. Automatic conversion available in 0.17 and later._
 
 ## Improving your model
@@ -1,4 +1,4 @@
-from typing import Union
+from typing import Optional, Union
 
 from pydantic import BaseModel, Field
 from pydantic.json_schema import SkipJsonSchema
@@ -16,5 +16,5 @@ class ExportRecordingsBody(BaseModel):
     source: PlaybackSourceEnum = Field(
         default=PlaybackSourceEnum.recordings, title="Playback source"
     )
-    name: str = Field(title="Friendly name", default=None, max_length=256)
+    name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)
     image_path: Union[str, SkipJsonSchema[None]] = None
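The `Optional[str]` change above matters because pydantic v2 rejects an explicit `null` for a plain `str` field even when a `None` default is set. A trimmed, self-contained sketch; the failing request shape (an explicit null `name`) is an assumption based on the linked discussion and integration PR.

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class ExportRecordingsBody(BaseModel):
    # Trimmed to the field changed in the hunk above.
    name: Optional[str] = Field(title="Friendly name", default=None, max_length=256)


ExportRecordingsBody()             # name omitted -> defaults to None
ExportRecordingsBody(name=None)    # explicit null, as the integration request may send
ExportRecordingsBody(name="clip")  # explicit friendly name still accepted


# With the previous annotation, an explicit null fails validation under pydantic v2,
# even though the default is None (defaults are not validated by default).
class OldBody(BaseModel):
    name: str = Field(title="Friendly name", default=None, max_length=256)


try:
    OldBody(name=None)
except ValidationError as e:
    print(e)  # "Input should be a valid string"
```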
@@ -203,7 +203,9 @@ class EmbeddingMaintainer(threading.Thread):
         # post processors
         self.post_processors: list[PostProcessorApi] = []
 
-        if any(c.review.genai.enabled_in_config for c in self.config.cameras.values()):
+        if self.genai_client is not None and any(
+            c.review.genai.enabled_in_config for c in self.config.cameras.values()
+        ):
             self.post_processors.append(
                 ReviewDescriptionProcessor(
                     self.config, self.requestor, self.metrics, self.genai_client
@@ -244,7 +246,9 @@ class EmbeddingMaintainer(threading.Thread):
             )
             self.post_processors.append(semantic_trigger_processor)
 
-        if any(c.objects.genai.enabled_in_config for c in self.config.cameras.values()):
+        if self.genai_client is not None and any(
+            c.objects.genai.enabled_in_config for c in self.config.cameras.values()
+        ):
             self.post_processors.append(
                 ObjectDescriptionProcessor(
                     self.config,
@@ -3,7 +3,7 @@
 import logging
 from typing import Any, Optional
 
-from httpx import TimeoutException
+from httpx import RemoteProtocolError, TimeoutException
 from ollama import Client as ApiClient
 from ollama import ResponseError
@@ -68,7 +68,12 @@ class OllamaClient(GenAIClient):
                 f"Ollama tokens used: eval_count={result.get('eval_count')}, prompt_eval_count={result.get('prompt_eval_count')}"
             )
             return result["response"].strip()
-        except (TimeoutException, ResponseError, ConnectionError) as e:
+        except (
+            TimeoutException,
+            ResponseError,
+            RemoteProtocolError,
+            ConnectionError,
+        ) as e:
             logger.warning("Ollama returned an error: %s", str(e))
             return None
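`RemoteProtocolError` is the exception httpx raises when the peer closes the connection without completing the HTTP response, which can happen if the ollama process is restarted or killed mid-generation. A minimal standalone sketch of the same catch pattern; the URL and timeout are placeholders, and in Frigate the call actually goes through the ollama Python client.

```python
import httpx

try:
    # Placeholder probe of a local ollama instance.
    httpx.get("http://localhost:11434/api/version", timeout=5.0)
except (httpx.TimeoutException, httpx.RemoteProtocolError, ConnectionError) as e:
    # Treat a dropped or half-closed connection the same as a timeout: log and move on.
    print(f"Ollama request failed: {e}")
```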
@@ -42,11 +42,10 @@ def get_latest_version(config: FrigateConfig) -> str:
             "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
             timeout=10,
         )
+        response = request.json()
     except (RequestException, JSONDecodeError):
         return "unknown"
 
-    response = request.json()
-
     if request.ok and response and "tag_name" in response:
         return str(response.get("tag_name").replace("v", ""))
     else:
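The reason for the move: `Response.json()` itself raises `JSONDecodeError` when the body is not valid JSON (for example, an HTML error or rate-limit page), so it has to run inside the `try` for the existing handler to catch it. A standalone sketch of the fixed flow; the function name is hypothetical and the final fallback is simplified, while the endpoint and the `"unknown"` sentinel come from the hunk above.

```python
import requests
from requests.exceptions import JSONDecodeError, RequestException


def latest_frigate_tag() -> str:
    try:
        resp = requests.get(
            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
            timeout=10,
        )
        # Must be inside the try: a non-JSON body raises JSONDecodeError here.
        data = resp.json()
    except (RequestException, JSONDecodeError):
        return "unknown"
    if resp.ok and data and "tag_name" in data:
        return str(data["tag_name"]).replace("v", "")
    return "unknown"
```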
@@ -137,6 +137,11 @@ export default function ClassificationModelWizardDialog({
     onClose();
   };
 
+  const handleSuccessClose = () => {
+    dispatch({ type: "RESET" });
+    onClose();
+  };
+
   return (
     <Dialog
       open={open}
@@ -207,7 +212,7 @@
             step1Data={wizardState.step1Data}
             step2Data={wizardState.step2Data}
             initialData={wizardState.step3Data}
-            onClose={onClose}
+            onClose={handleSuccessClose}
             onBack={handleBack}
           />
         )}
@@ -18,6 +18,7 @@ import PlatformAwareDialog from "../overlay/dialog/PlatformAwareDialog";
 import { useTranslation } from "react-i18next";
 import useSWR from "swr";
 import { FrigateConfig } from "@/types/frigateConfig";
+import { useUserPersistence } from "@/hooks/use-user-persistence";
 
 type CalendarFilterButtonProps = {
   reviewSummary?: ReviewSummary;
@@ -105,6 +106,7 @@ export function CalendarRangeFilterButton({
   const { t } = useTranslation(["components/filter"]);
   const { data: config } = useSWR<FrigateConfig>("config");
   const timezone = useTimezone(config);
+  const [weekStartsOn] = useUserPersistence("weekStartsOn", 0);
   const [open, setOpen] = useState(false);
 
   const selectedDate = useFormattedRange(
@@ -138,6 +140,7 @@ export function CalendarRangeFilterButton({
           initialDateTo={range?.to}
           timezone={timezone}
           showCompare={false}
+          weekStartsOn={weekStartsOn}
           onUpdate={(range) => {
             updateSelectedRange(range.range);
             setOpen(false);
@@ -13,6 +13,7 @@ import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
 import FilterSwitch from "./FilterSwitch";
 import { FaVideo } from "react-icons/fa";
 import { useTranslation } from "react-i18next";
+import { useAllowedCameras } from "@/hooks/use-allowed-cameras";
 
 type CameraFilterButtonProps = {
   allCameras: string[];
@@ -35,6 +36,30 @@ export function CamerasFilterButton({
   const [currentCameras, setCurrentCameras] = useState<string[] | undefined>(
     selectedCameras,
   );
+  const allowedCameras = useAllowedCameras();
+
+  // Filter cameras to only include those the user has access to
+  const filteredCameras = useMemo(
+    () => allCameras.filter((camera) => allowedCameras.includes(camera)),
+    [allCameras, allowedCameras],
+  );
+
+  // Filter groups to only include those with at least one allowed camera
+  const filteredGroups = useMemo(
+    () =>
+      groups
+        .map(([name, config]) => {
+          const allowedGroupCameras = config.cameras.filter((camera) =>
+            allowedCameras.includes(camera),
+          );
+          return [name, { ...config, cameras: allowedGroupCameras }] as [
+            string,
+            CameraGroupConfig,
+          ];
+        })
+        .filter(([, config]) => config.cameras.length > 0),
+    [groups, allowedCameras],
+  );
 
   const buttonText = useMemo(() => {
     if (isMobile) {
@@ -79,8 +104,8 @@ export function CamerasFilterButton({
   );
   const content = (
     <CamerasFilterContent
-      allCameras={allCameras}
-      groups={groups}
+      allCameras={filteredCameras}
+      groups={filteredGroups}
      currentCameras={currentCameras}
      mainCamera={mainCamera}
      setCurrentCameras={setCurrentCameras}
@@ -260,7 +260,7 @@ function MSEPlayer({
        // @ts-expect-error for typing
        value: codecs(MediaSource.isTypeSupported),
      },
-      3000,
+      (fallbackTimeout ?? 3) * 1000,
    ).catch(() => {
      if (wsRef.current) {
        onDisconnect();
@@ -290,7 +290,7 @@ function MSEPlayer({
        type: "mse",
        value: codecs(MediaSource.isTypeSupported),
      },
-      3000,
+      (fallbackTimeout ?? 3) * 1000,
    ).catch(() => {
      if (wsRef.current) {
        onDisconnect();
@@ -35,6 +35,8 @@ export interface DateRangePickerProps {
   showCompare?: boolean;
   /** timezone */
   timezone?: string;
+  /** First day of the week: 0 = Sunday, 1 = Monday */
+  weekStartsOn?: number;
 }
 
 const getDateAdjustedForTimezone = (
@@ -91,6 +93,7 @@ export function DateRangePicker({
   onUpdate,
   onReset,
   showCompare = true,
+  weekStartsOn = 0,
 }: DateRangePickerProps) {
   const [isOpen, setIsOpen] = useState(false);
@@ -150,7 +153,9 @@
     if (!preset) throw new Error(`Unknown date range preset: ${presetName}`);
     const from = new TZDate(new Date(), timezone);
     const to = new TZDate(new Date(), timezone);
-    const first = from.getDate() - from.getDay();
+    const dayOfWeek = from.getDay();
+    const daysFromWeekStart = (dayOfWeek - weekStartsOn + 7) % 7;
+    const first = from.getDate() - daysFromWeekStart;
 
     switch (preset.name) {
       case "today":
@@ -184,8 +189,8 @@
         to.setHours(23, 59, 59, 999);
         break;
       case "lastWeek":
-        from.setDate(from.getDate() - 7 - from.getDay());
-        to.setDate(to.getDate() - to.getDay() - 1);
+        from.setDate(first - 7);
+        to.setDate(first - 1);
         from.setHours(0, 0, 0, 0);
         to.setHours(23, 59, 59, 999);
         break;
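For the weekday fix above, the offset `(dayOfWeek - weekStartsOn + 7) % 7` counts the days elapsed since the configured first day of the week, so "this week" and "last week" line up with the UI setting. A quick re-derivation of the arithmetic in Python (not the component's code, just the same formula):

```python
# getDay()-style weekday: 0 = Sunday ... 6 = Saturday.
# week_starts_on: 0 = Sunday, 1 = Monday (the UI setting the fix honors).
def days_from_week_start(day_of_week: int, week_starts_on: int) -> int:
    return (day_of_week - week_starts_on + 7) % 7

assert days_from_week_start(3, 0) == 3  # Wednesday, Sunday-start week (old behavior)
assert days_from_week_start(3, 1) == 2  # Wednesday, Monday-start week
assert days_from_week_start(0, 1) == 6  # Sunday falls at the end of a Monday-start week
```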
@@ -23,5 +23,6 @@ export const supportedLanguageKeys = [
   "lt",
   "uk",
   "cs",
+  "sk",
   "hu",
 ];