Fine Tuning on Custom Data ipynb #87
Comments
The entire process is just: create your data in the JSONL format, matching what you get when you download the standard data. If you don't use wandb or Hugging Face, you may need to comment out some lines in train.py.
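The JSONL step above can be sketched as follows. This is a minimal, hypothetical example: the exact field names (here `"prompt"` and `"response"`) and the filename are assumptions; match them to the schema of the standard data you downloaded for train.py.

```python
import json

# Hypothetical training examples; the field names "prompt" and "response"
# are placeholders and must match the schema train.py expects.
examples = [
    {"prompt": "What is the capital of France?", "response": "Paris."},
    {"prompt": "Name a primary color.", "response": "Red."},
]

# JSONL means one JSON object per line, newline-separated.
with open("custom_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity check: every line should parse back to a dict with both fields.
with open("custom_data.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
```

Each record stays on its own line, which lets training code stream the file without loading it all into memory.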
I got your point. Still, for better documentation, a Jupyter notebook would help a lot, since most of the audience here is looking for exactly that kind of fine-tuning walkthrough. We would all be grateful if you could provide one for fine-tuning gpt4all. Thank you.
Hi @zanussbaum, any advice on how to move forward with this?
Closing this issue as stale. A lot has changed since Nomic last trained a text completion model. |
Can you please provide an ipynb notebook that shows the steps for fine-tuning this model on custom data?