
✨ 😅 Is it possible to use the ChatGPT of OpenAI to train this ChatGPT? #23

Open
Yonv1943 opened this issue Feb 2, 2023 · 8 comments

Comments

Yonv1943 commented Feb 2, 2023

OpenAI used 40 people when training its own ChatGPT, and the annotation process lasted 3 months.

It is difficult for the open-source community (GitHub) to reproduce the Reinforcement Learning from Human Feedback (RLHF) part of this work, since OpenAI employed those 40 people to provide the human feedback.

However, we can treat OpenAI's web version of ChatGPT as the human, letting it annotate data ✨ for us when training our own ChatGPT.

Step 2: a labeler (a human, or OpenAI's ChatGPT) ranks the outputs from best to worst.

[image: chatgpt.png]

This sounds a bit funny 😅, but I currently think it's doable.
@lucidrains
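
As a rough illustration of what this could look like, here is a minimal sketch, assuming the (pre-1.0) OpenAI Python SDK: ask ChatGPT to act as the Step-2 labeler and rank candidate responses. The prompt wording and output parsing are illustrative assumptions, not a tested pipeline.

```python
import openai

openai.api_key = "sk-..."  # your API key

def rank_responses(prompt: str, candidates: list[str]) -> list[int]:
    """Return candidate indices ordered best-to-worst, as judged by ChatGPT."""
    numbered = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    instruction = (
        "You are a data annotator. Rank the candidate responses to the "
        "prompt from best to worst. Answer with the indices only, "
        "comma-separated.\n\n"
        f"Prompt: {prompt}\n\nCandidates:\n{numbered}"
    )
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
        temperature=0,  # we want a stable ranking, not creativity
    )
    text = reply["choices"][0]["message"]["content"]
    return [int(tok) for tok in text.replace(" ", "").split(",")]  # naive parse

# The resulting order can stand in for the human comparisons used to build
# (prompt, preferred, rejected) pairs when training a reward model.
```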

draganjovanovich commented Feb 2, 2023 via email

Yonv1943 (Author) commented Feb 2, 2023

You mean that it is forbidden by the Terms of Service (ToS) of OpenAI's ChatGPT.
Thank you for your response to this issue.

Maybe the open-source community can find other ways to train a ChatGPT, especially the RLHF part in Step 2.

Yonv1943 closed this as completed Feb 2, 2023
Yonv1943 reopened this Feb 2, 2023
conceptofmind (Contributor) commented Feb 3, 2023

> Maybe the open-source community can find other ways to train a ChatGPT, especially the RLHF part in Step 2.

Using GPT-NeoX for RLAIF as a substitute for RLHF may be a plausible solution. Anthropic showed promising results with synthetic data generation, and the nonprofit Ought successfully trained a reward model with RLAIF for summarization using GPT-Neo (1.3B).

I am working with CarperAI and a small group to open-source a few datasets as part of a bigger project relating to this. Harrison Chase and John Nay of LangChain also offered to help. We plan to generate synthetic data for different tasks relating to SFT, RLAIF, CoT, and training the reward models.
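
For reference, whether the comparisons come from humans (RLHF) or from another model (RLAIF), the reward model is typically trained with the same pairwise ranking objective. A minimal PyTorch sketch (not CarperAI's or Ought's actual code):

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style objective: push r(preferred) above r(rejected)."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# r_preferred / r_rejected are the scalar rewards the model assigns to the
# chosen and rejected responses for the same prompt (dummy values here).
r_preferred = torch.randn(8, requires_grad=True)
r_rejected = torch.randn(8, requires_grad=True)
loss = reward_ranking_loss(r_preferred, r_rejected)
loss.backward()
```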

zzzacwork commented Feb 4, 2023 via email

mqurishi commented Feb 5, 2023

Please check this
https://github.com/LAION-AI/Open-Assistant

zzzacwork commented Feb 7, 2023 via email

Yonv1943 (Author) commented
It is possible to use the ChatGPT of OpenAI to train our own ChatGPT.

Quoting the Stanford Alpaca blog post:

> The figure below illustrates how we obtained the Alpaca model. For the data, we generated instruction-following demonstrations by building upon the self-instruct method. We started with the 175 human-written instruction-output pairs from the self-instruct seed set. We then prompted text-davinci-003 to generate more instructions using the seed set as in-context examples. We improved over the self-instruct method by simplifying the generation pipeline (see details in GitHub) and significantly reduced the cost. Our data generation process results in 52K unique instructions and the corresponding outputs, which cost less than $500 using the OpenAI API.

https://crfm.stanford.edu/2023/03/13/alpaca.html

https://github.com/tatsu-lab/stanford_alpaca
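
The core of that self-instruct loop is short. Here is a minimal sketch, assuming the (pre-1.0) OpenAI SDK and a hypothetical seed_tasks.json file of {"instruction": ..., "output": ...} pairs; the prompt format and parsing are simplified compared to the real stanford_alpaca pipeline.

```python
import json
import random
import openai

def generate_new_task(seed_tasks: list[dict], n_demos: int = 3) -> str:
    # Use a few seed pairs as in-context examples, then ask the model to
    # continue the pattern with a brand-new instruction and output.
    demos = random.sample(seed_tasks, n_demos)
    prompt = "Come up with a new instruction and its output.\n\n"
    for d in demos:
        prompt += f"Instruction: {d['instruction']}\nOutput: {d['output']}\n\n"
    prompt += "Instruction:"
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=1.0,  # diversity matters for synthetic data
    )
    return "Instruction:" + resp["choices"][0]["text"]

with open("seed_tasks.json") as f:
    seed_tasks = json.load(f)  # e.g. the 175 human-written seed pairs
print(generate_new_task(seed_tasks))
```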

rami2102 commented Nov 5, 2023

> It is possible to use the ChatGPT of OpenAI to train our own ChatGPT. […]

Thanks a lot for your insightful sharing :)

Can you please explain how the training method you used is compatible with the ChatGPT and Llama 2 ToS?
