How should I convert my model (e.g., one in .onnx format) to the .gguf format and run inference under the ggml inference framework? How should I implement this step by step?
So how do I convert my PyTorch model to the .gguf format and run inference under the ggml inference framework? Is there a tutorial that can guide me step by step? I don't know where to start.
It would be easier to start from a TensorFlow or PyTorch model than from ONNX: ONNX operations are lower level than most ggml operations.
There isn't a step-by-step guide. You would have to write a program that converts the weights to a format ggml can understand (ideally GGUF), and then look at the Python inference code and port it to ggml operations. The examples show how to do this, but they are not explained step by step; you would have to fill in the blanks.
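To make the weight-conversion step concrete, here is a minimal sketch of what a GGUF file looks like on disk, written by hand with only the standard library. In practice you would use the `gguf` Python package that ships with llama.cpp (its `GGUFWriter` class handles all of this), so treat this as an illustration of the layout only; it covers a single F32 tensor and one string metadata key, and the architecture and tensor names (`mymodel`, `fc.weight`) are made up for the example.

```python
# Hand-rolled sketch of the GGUF v3 layout: header, metadata key/values,
# tensor infos, alignment padding, then raw tensor data.
# Assumption: names like "mymodel" and "fc.weight" are hypothetical.
import struct

GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3
GGUF_TYPE_STRING = 8   # metadata value-type id for strings
GGML_TYPE_F32 = 0      # tensor data-type id for float32
ALIGNMENT = 32         # default data-section alignment

def gguf_string(s: str) -> bytes:
    """GGUF strings are a little-endian u64 length followed by UTF-8 bytes."""
    data = s.encode("utf-8")
    return struct.pack("<Q", len(data)) + data

def write_gguf(path: str, tensor_name: str, values: list) -> None:
    header = GGUF_MAGIC
    header += struct.pack("<I", GGUF_VERSION)
    header += struct.pack("<Q", 1)  # tensor count
    header += struct.pack("<Q", 1)  # metadata key/value count

    # one metadata entry: key, value type, value
    kv = gguf_string("general.architecture")
    kv += struct.pack("<I", GGUF_TYPE_STRING)
    kv += gguf_string("mymodel")  # hypothetical architecture name

    # one tensor info: name, n_dims, dims, type, offset into the data section
    info = gguf_string(tensor_name)
    info += struct.pack("<I", 1)            # number of dimensions
    info += struct.pack("<Q", len(values))  # dim 0
    info += struct.pack("<I", GGML_TYPE_F32)
    info += struct.pack("<Q", 0)            # this tensor starts at offset 0

    blob = header + kv + info
    blob += b"\x00" * ((-len(blob)) % ALIGNMENT)  # pad to data alignment
    blob += struct.pack(f"<{len(values)}f", *values)

    with open(path, "wb") as f:
        f.write(blob)

# write a toy one-tensor model, then sanity-check the magic and version
write_gguf("toy.gguf", "fc.weight", [0.1, 0.2, 0.3, 0.4])
with open("toy.gguf", "rb") as f:
    assert f.read(4) == b"GGUF"
    print(struct.unpack("<I", f.read(4))[0])  # -> 3
```

A real converter would iterate over a PyTorch `state_dict`, emitting one tensor info and one data blob per weight; the `gguf` package does exactly that while also handling quantized types and the full metadata vocabulary.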