diff --git a/docs/docs/configuration/live.md b/docs/docs/configuration/live.md
index 4e4965204..18e2054c4 100644
--- a/docs/docs/configuration/live.md
+++ b/docs/docs/configuration/live.md
@@ -324,7 +324,7 @@ When your browser runs into problems playing back your camera streams, it will l
 4. Look for messages prefixed with the camera name. These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
 
-   - Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
+   - Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera-settings-recommendations)).
    - Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
    - Test with a different stream via the UI dropdown (if `live -> streams` is configured).
-   - For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see (WebRTC Extra Configuration)(#webrtc-extra-configuration)).
+   - For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
diff --git a/docs/docs/configuration/object_detectors.md b/docs/docs/configuration/object_detectors.md
index f53a5a0a5..5d74ec392 100644
--- a/docs/docs/configuration/object_detectors.md
+++ b/docs/docs/configuration/object_detectors.md
@@ -286,7 +286,7 @@ Note that due to hardware limitations of the Coral, the labelmap is a subset of
 This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
 
-See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the Hailo hardware.
+See the [installation docs](../frigate/installation.md#hailo-8) for information on configuring the Hailo hardware.
 ### Configuration
@@ -850,7 +850,7 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
 
 ### Setup
 
-Support for AMD GPUs is provided using the [ONNX detector](#ONNX). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
+Support for AMD GPUs is provided using the [ONNX detector](#onnx). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
 
 ### Docker settings for GPU access
diff --git a/docs/docs/configuration/record.md b/docs/docs/configuration/record.md
index 194647584..998c6053e 100644
--- a/docs/docs/configuration/record.md
+++ b/docs/docs/configuration/record.md
@@ -167,7 +167,7 @@ record:
 
-Continuous recording supports different retention modes [which are described below](#what-do-the-different-retain-modes-mean)
+Continuous recording supports different retention modes [which are described below](#configuring-recording-retention).
 
 ### Object Recording
diff --git a/docs/docs/configuration/stationary_objects.md b/docs/docs/configuration/stationary_objects.md
index 495c03397..63d03374c 100644
--- a/docs/docs/configuration/stationary_objects.md
+++ b/docs/docs/configuration/stationary_objects.md
@@ -8,7 +8,7 @@ An object is considered stationary when it is being tracked and has been in a ve
 
 ## Why does it matter if an object is stationary?
 
-Once an object becomes stationary, object detection will not be continually run on that object. This serves to reduce resource usage and redundant detections when there has been no motion near the tracked object. This also means that Frigate is contextually aware, and can for example [filter out recording segments](record.md#what-do-the-different-retain-modes-mean) to only when the object is considered active. Motion alone does not determine if an object is "active" for active_objects segment retention. Lighting changes for a parked car won't make an object active.
+Once an object becomes stationary, object detection will not be continually run on that object. This serves to reduce resource usage and redundant detections when there has been no motion near the tracked object. This also means that Frigate is contextually aware, and can for example [filter out recording segments](record.md#configuring-recording-retention) to only when the object is considered active. Motion alone does not determine if an object is "active" for active_objects segment retention. Lighting changes for a parked car won't make an object active.
 
 ## Tuning stationary behavior
diff --git a/docs/docs/frigate/camera_setup.md b/docs/docs/frigate/camera_setup.md
index 64c650c13..4cb56dc50 100644
--- a/docs/docs/frigate/camera_setup.md
+++ b/docs/docs/frigate/camera_setup.md
@@ -34,7 +34,7 @@ For the Dahua/Loryta 5442 camera, I use the following settings:
 - Encode Mode: H.264
 - Resolution: 2688\*1520
 - Frame Rate(FPS): 15
-- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](/configuration/live#camera_settings_recommendations) for more info)
+- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](/configuration/live#camera-settings-recommendations) for more info)
 
 **Sub Stream (Detection)**
diff --git a/docs/docs/frigate/hardware.md b/docs/docs/frigate/hardware.md
index 86afbfa53..afbd95aaf 100644
--- a/docs/docs/frigate/hardware.md
+++ b/docs/docs/frigate/hardware.md
@@ -95,7 +95,7 @@ Frigate supports multiple different detectors that work on different types of ha
 **Rockchip**
 
 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
-  - [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
+  - [Supports limited model architectures](../../configuration/object_detectors#rockchip-supported-models)
   - Runs best with tiny or small size models
   - Runs efficiently on low power hardware
@@ -263,7 +263,7 @@ Inference speeds may vary depending on the host platform. The above data was mea
 
 ### Nvidia Jetson
 
-Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
+Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
 
 Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
diff --git a/docs/docs/frigate/installation.md b/docs/docs/frigate/installation.md
index a115ecf97..2f2e55fa0 100644
--- a/docs/docs/frigate/installation.md
+++ b/docs/docs/frigate/installation.md
@@ -271,7 +271,7 @@ If you are using `docker run`, add this option to your command `--device /dev/ha
 
 #### Configuration
 
-Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
+Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8) to complete the setup.
 
 ### MemryX MX3
diff --git a/docs/docs/guides/configuring_go2rtc.md b/docs/docs/guides/configuring_go2rtc.md
index 4d632fdd6..26fb26644 100644
--- a/docs/docs/guides/configuring_go2rtc.md
+++ b/docs/docs/guides/configuring_go2rtc.md
@@ -17,7 +17,7 @@ First, you will want to configure go2rtc to connect to your camera stream by add
 
 For the best experience, you should set the stream name under `go2rtc` to match the name of your camera so that Frigate will automatically map it and be able to use better live view options for the camera.
 
-See [the live view docs](../configuration/live.md#setting-stream-for-live-ui) for more information.
+See [the live view docs](../configuration/live.md#setting-streams-for-live-ui) for more information.
 
 :::
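
The go2rtc guidance touched by the last hunk — naming the `go2rtc` stream the same as the camera so Frigate maps it automatically — can be sketched with a minimal config fragment. The camera name, credentials, and RTSP URL below are hypothetical placeholders, not values from the docs:

```yaml
# Sketch only: "back_yard" under go2rtc matches the camera name under cameras,
# so Frigate can map the restream automatically for better live view options.
go2rtc:
  streams:
    back_yard:
      - rtsp://user:pass@192.168.1.10:554/stream1 # hypothetical camera URL

cameras:
  back_yard: # same name as the go2rtc stream above
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/back_yard # pull from the go2rtc restream
          roles:
            - detect
```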