Encoding too slow - option for hardware acceleration? #1422

Open
hhackbarth opened this issue Oct 27, 2023 · 3 comments

Comments

@hhackbarth

I have an application reading from an RTSP source, and the displayed result lags further and further behind the source. I can see that the callback is called in near real time, so up to that point there is no problem. The problem seems to occur after the frames have been returned from the callback.
My assumption is therefore that the H.264 encoding for WebRTC happens on the CPU only, without any hardware acceleration.

Is there an option to use hardware acceleration for that part? The system is an Ubuntu Linux machine with an NVIDIA GPU.
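To illustrate what I mean by "the callback is called in near real time": roughly, I time the gap between successive callbacks (a simplified sketch, not my actual application code; the RTSP ingestion part is omitted and the key is just a placeholder):

```python
import time

import av
from streamlit_webrtc import webrtc_streamer

_last_call = {"t": time.monotonic()}

def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
    # Measure the interval between callbacks; in my case this stays close
    # to the source frame rate, so the callback itself keeps up.
    now = time.monotonic()
    print(f"interval since previous callback: {(now - _last_call['t']) * 1000:.1f} ms")
    _last_call["t"] = now
    return frame  # the lag builds up after this frame is handed back

webrtc_streamer(key="rtsp-debug", video_frame_callback=video_frame_callback)
```

The intervals stay close to the source frame rate, yet the rendered output drifts further behind, which is why I suspect the encoding step.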

@ElinLiu0

Same here. After adding YOLOv8 detection, the frame rate in my app was only 4-10 FPS...

@hhackbarth
Author

@ElinLiu0: Are you sure that this low frame rate is caused by the streamlit-webrtc part?
If you run YOLOv8 on the CPU only, or with the plain PyTorch framework, that alone may already be quite slow.
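A quick way to check is to time the model in isolation (rough sketch; assumes the ultralytics package and a CUDA GPU, and yolov8n.pt is just an example weight file):

```python
import time

import numpy as np
from ultralytics import YOLO  # assumes the ultralytics package is installed

model = YOLO("yolov8n.pt")  # example weights; use whatever you actually deploy
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy 720p frame

model(frame, device=0, verbose=False)  # warm-up run on the GPU
start = time.monotonic()
for _ in range(50):
    model(frame, device=0, verbose=False)
elapsed = time.monotonic() - start
print(f"average inference time: {elapsed / 50 * 1000:.1f} ms per frame")
```

If that number is well below your frame interval, the bottleneck is elsewhere in the pipeline.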

In my case, I can see that the model is not the problem (the frame callbacks do not lag behind). The buffer builds up after the frame has been returned from the callback.

It seems to be caused by the underlying aiortc library. Currently, it appears to support hardware-accelerated encoding only on the Raspberry Pi with the h264_omx encoder (which is deprecated by now). I have seen some approaches for NVIDIA GPUs, but nothing that has been published.
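For what it's worth, you can at least check whether the FFmpeg build behind PyAV (which aiortc uses for encoding) was compiled with a hardware H.264 encoder such as h264_nvenc. A quick sketch; even if the codec shows up, aiortc would still need to be modified to actually select it:

```python
import av

# Codecs compiled into the FFmpeg build that PyAV links against.
candidates = sorted(n for n in av.codecs_available if "h264" in n or "nvenc" in n)
print("codecs mentioning h264/nvenc:", candidates)

try:
    codec = av.Codec("h264_nvenc", "w")  # "w" = open as an encoder
    print("h264_nvenc encoder found:", codec.long_name)
except Exception:  # PyAV raises an UnknownCodecError if the codec is missing
    print("h264_nvenc is not available in this PyAV/FFmpeg build")
```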

@hhackbarth hhackbarth changed the title from "Encoding too slow - option for hardware acceeleration?" to "Encoding too slow - option for hardware acceleration?" on Oct 31, 2023
@ElinLiu0

ElinLiu0 commented Oct 31, 2023

Exactly, it is not a model problem. I deployed it on NVIDIA Triton Inference Server, and its throughput is very high using the TensorRT backend.

Even without any frame operations, the frame rate from my Logic 720i web camera is still low, about 10-15 FPS (lower than the 30 FPS the spec claims).
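One thing that might be worth checking is whether the browser is even capturing at 30 FPS. The requested frame rate can be passed through streamlit-webrtc's media_stream_constraints (a sketch; whether the camera honours the constraint depends on the browser and driver):

```python
from streamlit_webrtc import webrtc_streamer

# Sketch: explicitly ask getUserMedia for 30 FPS video, no audio.
webrtc_streamer(
    key="camera",
    media_stream_constraints={
        "video": {"frameRate": {"ideal": 30}},
        "audio": False,
    },
)
```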
