Error when running inference for nanoGPT LLM example #3465
Comments
That seems unexpected, let us take a look.
Thanks for reporting this @bryangarza. Could you also let us know which …
Thanks @bryangarza for sharing this! This should be an operator bug we've fixed before. Can you share your git hash, or pull the latest release/0.2 branch and try again?
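When a maintainer asks for your git hash, the usual way to get it is `git rev-parse` inside your checkout. The sketch below demonstrates this in a throwaway repo so it runs anywhere; in practice you would run the two `git rev-parse` lines inside your `executorch/` clone (the temp repo and its single commit are purely illustrative).

```shell
set -e
# Throwaway repo so the sketch is self-contained; skip this part and
# just cd into your real executorch checkout instead.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "example commit"

hash=$(git rev-parse HEAD)                  # full 40-character commit hash
branch=$(git rev-parse --abbrev-ref HEAD)   # current branch name
echo "commit: $hash"
echo "branch: $branch"
```

Pasting both the hash and the branch name into a bug report lets maintainers check whether a known fix is in your build.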
I was on https://github.com/pytorch/executorch/commits/release/0.2/. @Gasoonjia, which 0.2 branch do you mean?
Yes, the error you encountered should be something we've fixed. If you have ever installed another ExecuTorch (ET) version before, your environment may still be using that older install, which still has the bug, instead of the latest one you downloaded.
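One quick way to rule out a stale install shadowing a fresh checkout is to ask Python where it would actually import the package from. A minimal sketch, assuming the pip package is importable as `executorch` (per the tutorial); the `json` lookup is just a stdlib example showing what a resolved module looks like:

```python
import importlib.util

def resolve_location(module_name):
    """Return the file path a module would be imported from, or None
    if the module is not installed in the active environment."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# A stdlib module always resolves to a path inside the interpreter's
# installation; "executorch" resolves only if it is installed, and the
# printed path tells you *which* environment it comes from.
print(resolve_location("json"))
print(resolve_location("executorch"))
```

If the printed path points outside your new conda environment, the old installation is the one being used.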
Just tried the tutorial from scratch, new conda env and everything, but still getting the same error on …
This is my original PR #3175. I just went through my PR, and it looks like there may have been some issues when merging it to the release/0.2 branch.
Right now, I can confirm that my PR has been merged into release/0.2 successfully. Please repull the branch and try again. It should work now.
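Before rebuilding, it can save a round trip to confirm the fix commit is actually an ancestor of your checkout; `git merge-base --is-ancestor` answers exactly that. The sketch below demonstrates the check in a throwaway two-commit repo so it is runnable anywhere; in the real case you would run the `merge-base` line inside `executorch/` with the fix's commit hash substituted in:

```shell
set -e
# Build a throwaway repo with a "fix" commit followed by later work.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "the fix"
fix=$(git rev-parse HEAD)
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "later work"

# Exit status 0 means the fix commit is reachable from HEAD.
if git merge-base --is-ancestor "$fix" HEAD; then
  echo "fix is present in this checkout"
else
  echo "fix is missing: pull the branch again"
fi
```

A stale checkout (one pulled before the merge) would take the `else` branch, telling you to repull before rebuilding.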
Thanks @Gasoonjia! I tried it again by pulling …
Hi,
I am following the instructions from https://github.com/pytorch/executorch/blob/main/docs/source/llm/getting-started.md and got to the "Building and Running" section, where you compile the C++ code and run inference. However, when I enter a prompt, I get this error. I also tried other prompts but get the same result.
I am on an Apple M1 Pro running macOS 13.6.6.
If you need any more detail from me that would help to make this reproducible, just let me know. Thanks!