Mirror of https://github.com/blakeblackshear/frigate.git (synced 2026-04-11 17:47:37 +03:00)
Don't use GPU for training

commit b4ba2ce7c4 (parent 057d44d6a9)
@@ -13,7 +13,6 @@ nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
 nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
 nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
 nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
-tensorflow==2.19.*; platform_machine == 'x86_64'
 onnx==1.16.*; platform_machine == 'x86_64'
 onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
 protobuf==3.20.3; platform_machine == 'x86_64'
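Each pin in the hunk above is gated by a PEP 508 environment marker (`platform_machine == 'x86_64'`), so these wheels are only installed on x86_64 hosts. A minimal sketch of the equivalent check using only the standard library — the helper name is illustrative, not anything from Frigate:

```python
import platform

def marker_matches(required_machine: str = "x86_64") -> bool:
    """Mimic the `platform_machine == 'x86_64'` marker that pip
    evaluates at install time.

    platform.machine() is the value PEP 508 exposes as the
    `platform_machine` marker variable.
    """
    return platform.machine() == required_machine
```

On an x86_64 host `marker_matches()` returns True and the pin is installed; on aarch64 it returns False and pip skips the requirement.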
@@ -10,7 +10,6 @@ Object classification allows you to train a custom MobileNetV2 classification mo
 Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
 
 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
-When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
 
 ## Classes
 
@@ -10,7 +10,6 @@ State classification allows you to train a custom MobileNetV2 classification mod
 State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
 
 Training the model does briefly use a high amount of system resources for about 1–3 minutes per training run. On lower-power devices, training may take longer.
-When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
 
 ## Classes
 
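Per the commit title, classification training now stays on the CPU even when a GPU is present. One common way to force that in a Python training pipeline is to hide CUDA devices before any ML framework initializes; this is a hedged sketch of the technique, not necessarily how this commit implements it:

```python
import os

def force_cpu_training() -> None:
    """Hide all CUDA devices so training frameworks fall back to CPU.

    Hypothetical helper for illustration: setting CUDA_VISIBLE_DEVICES
    to "-1" before TensorFlow or ONNX Runtime initializes CUDA makes
    them see no GPUs, so training runs on CPU only. It must be set
    before the framework first touches the GPU.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```

This only affects the calling process (and children it spawns), so inference elsewhere in the application can still use the GPU.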