As recently seen in llama.cpp (ggerganov/llama.cpp#5226), the cost of starting the threads of the CPU backend is not insignificant. To address this, I propose adding a new CPU context object that holds the threads and can reuse them between invocations. Additionally, this CPU context would behave as an asynchronous queue, so that multiple graph evaluations could be queued into the object. This would enable the implementation of pipeline parallelism with the CPU and GPU backends (ref: ggerganov/llama.cpp#4918 (comment)).
Would the threads wait on a condition variable while not running? I've done some testing in the past with maintaining a global pool of threads and waking them when there is work (ggerganov/whisper.cpp#343). It didn't seem to help performance much, but it's possible the implementation was not ideal.
Regardless of whether there is a performance gain, the rest of the functionality this would enable is worth it on its own.
Yes, the threads would wait on a condition variable or something to the same effect. On Linux, and possibly macOS, the overhead of creating a thread and that of waking a blocked thread are probably close enough that for large graphs it wouldn't make much difference, but for the very small graphs often used by ggml_backend_sched it may be significant.
Possible API:
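(The API snippet from the original issue is not preserved in this export. As a rough idea of the shape such an API could take, here is a hypothetical sketch; every name below is an assumption modeled on existing ggml conventions, not the actual proposal:)

```c
// Hypothetical sketch only — names are assumptions, not the proposed API.

// Opaque context that owns the worker threads and a queue of pending graphs.
typedef struct ggml_cpu_context * ggml_cpu_context_t;

// Create a context with n_threads persistent worker threads.
ggml_cpu_context_t ggml_cpu_context_init(int n_threads);

// Enqueue a graph evaluation; returns immediately (asynchronous).
void ggml_cpu_context_graph_compute_async(ggml_cpu_context_t ctx, struct ggml_cgraph * graph);

// Block until all queued graphs have finished evaluating.
void ggml_cpu_context_synchronize(ggml_cpu_context_t ctx);

// Join the worker threads and free the context.
void ggml_cpu_context_free(ggml_cpu_context_t ctx);
```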