fix broken anchors

Josh Hawkins 2026-03-29 22:11:22 -05:00
parent 05109f535c
commit 88f14bc589
8 changed files with 10 additions and 10 deletions

View File

@@ -324,7 +324,7 @@ When your browser runs into problems playing back your camera streams, it will l
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera-settings-recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured).
- For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
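As an illustrative sketch of the go2rtc transcoding mentioned above (the stream name `back_yard` and the camera URL are placeholders, not part of this commit):

```yaml
# Hypothetical go2rtc sketch: restream the camera and add an ffmpeg source
# that transcodes audio to AAC and OPUS for MSE/WebRTC playback.
go2rtc:
  streams:
    back_yard:
      - rtsp://user:pass@192.168.1.10:554/stream1
      - "ffmpeg:back_yard#audio=aac#audio=opus"
```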

View File

@@ -286,7 +286,7 @@ Note that due to hardware limitations of the Coral, the labelmap is a subset of
This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the Hailo hardware.
See the [installation docs](../frigate/installation.md#hailo-8) for information on configuring the Hailo hardware.
### Configuration
@@ -850,7 +850,7 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
### Setup
Support for AMD GPUs is provided using the [ONNX detector](#ONNX). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
Support for AMD GPUs is provided using the [ONNX detector](#onnx). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
### Docker settings for GPU access
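A minimal sketch of such Docker settings, assuming the standard ROCm device nodes (verify the paths on your host; this is not taken from the commit):

```yaml
# Illustrative docker-compose fragment for the -rocm image.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-rocm
    devices:
      - /dev/kfd   # ROCm compute interface (assumed standard path)
      - /dev/dri   # GPU render nodes (assumed standard path)
```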

View File

@@ -167,7 +167,7 @@ record:
</TabItem>
</ConfigTabs>
Continuous recording supports different retention modes [which are described below](#what-do-the-different-retain-modes-mean)
Continuous recording supports different retention modes [which are described below](#configuring-recording-retention).
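A hedged sketch of a retention configuration using one of those modes (the values are placeholders, not recommendations):

```yaml
# Illustrative record config: keep segments for 3 days,
# retaining only segments that contain motion.
record:
  enabled: true
  retain:
    days: 3
    mode: motion   # other modes: all, active_objects
```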
### Object Recording

View File

@@ -8,7 +8,7 @@ An object is considered stationary when it is being tracked and has been in a ve
## Why does it matter if an object is stationary?
Once an object becomes stationary, object detection will not be continually run on that object. This serves to reduce resource usage and redundant detections when there has been no motion near the tracked object. This also means that Frigate is contextually aware, and can for example [filter out recording segments](record.md#what-do-the-different-retain-modes-mean) to only when the object is considered active. Motion alone does not determine if an object is "active" for active_objects segment retention. Lighting changes for a parked car won't make an object active.
Once an object becomes stationary, object detection will not be continually run on that object. This serves to reduce resource usage and redundant detections when there has been no motion near the tracked object. This also means that Frigate is contextually aware, and can for example [filter out recording segments](record.md#configuring-recording-retention) to only when the object is considered active. Motion alone does not determine if an object is "active" for active_objects segment retention. Lighting changes for a parked car won't make an object active.
## Tuning stationary behavior
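A minimal sketch of that tuning, assuming Frigate's `detect.stationary` options (the values are illustrative only):

```yaml
# Hypothetical stationary tuning sketch.
detect:
  stationary:
    interval: 50    # frames between detection re-checks on a stationary object
    threshold: 50   # frames an object must be unmoving before it is stationary
```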

View File

@@ -34,7 +34,7 @@ For the Dahua/Loryta 5442 camera, I use the following settings:
- Encode Mode: H.264
- Resolution: 2688\*1520
- Frame Rate(FPS): 15
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](/configuration/live#camera_settings_recommendations) for more info)
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](/configuration/live#camera-settings-recommendations) for more info)
**Sub Stream (Detection)**

View File

@@ -95,7 +95,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Rockchip** <CommunityBadge />
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs to provide efficient object detection.
- [Supports limited model architectures](../../configuration/object_detectors#choosing-a-model)
- [Supports limited model architectures](../../configuration/object_detectors#rockchip-supported-models)
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
@@ -263,7 +263,7 @@ Inference speeds may vary depending on the host platform. The above data was mea
### Nvidia Jetson
Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Jetson devices are supported via the TensorRT or ONNX detectors when running Jetpack 6. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
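As a rough sketch, a TensorRT detector entry might look like the following (assuming the config shape documented on the linked detector page; `device: 0` selects the first GPU):

```yaml
# Hypothetical detector config sketch for a Jetson.
detectors:
  tensorrt:
    type: tensorrt
    device: 0
```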

View File

@@ -271,7 +271,7 @@ If you are using `docker run`, add this option to your command `--device /dev/ha
#### Configuration
Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8) to complete the setup.
### MemryX MX3

View File

@@ -17,7 +17,7 @@ First, you will want to configure go2rtc to connect to your camera stream by add
For the best experience, you should set the stream name under `go2rtc` to match the name of your camera so that Frigate will automatically map it and be able to use better live view options for the camera.
See [the live view docs](../configuration/live.md#setting-stream-for-live-ui) for more information.
See [the live view docs](../configuration/live.md#setting-streams-for-live-ui) for more information.
:::
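The stream-name matching described above can be sketched as follows (the camera name `front_door`, credentials, and IP are placeholders):

```yaml
# Hypothetical sketch: the go2rtc stream name matches the Frigate camera
# name, so Frigate can map it automatically for the live view.
go2rtc:
  streams:
    front_door:
      - rtsp://user:pass@192.168.1.11:554/stream1
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/front_door
          roles:
            - record
```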