Reward Modeling Finetuning Example #10827

Open · wants to merge 6 commits into base: main

Conversation

Uxito-Ada (Contributor)

Description

This is an example of TRL reward modeling, a kind of RLHF, which uses the Anthropic/hh-rlhf dataset that is on our whitelist.
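For context, each hh-rlhf record pairs a preferred ("chosen") response with a dispreferred ("rejected") one; below is a minimal sketch of inspecting it (illustrative only, not code from this PR):

# Minimal sketch: peek at the preference pairs used for reward modeling.
# Column names ("chosen"/"rejected") follow the public Anthropic/hh-rlhf dataset card.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf", split="train")
print(hh.column_names)          # ['chosen', 'rejected']
print(hh[0]["chosen"][:200])    # preferred conversation, truncated
print(hh[0]["rejected"][:200])  # dispreferred conversation, truncated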

1. Why the change?

Enable reward modeling (RM) finetuning on Intel GPUs with IPEX-LLM.

2. User API changes

No.

3. Summary of the change

Reward Modeling Finetuning Example

4. How to test?

  • N/A
  • Unit test
  • Application test
  • Document test
  • ...

# Load the base model with a single-label head so it outputs one scalar reward score,
# then apply IPEX-LLM low-bit (fp4) optimization before finetuning.
model = AutoModelForSequenceClassification.from_pretrained(
    model_config.model_name_or_path, num_labels=1, **model_kwargs
)
model = optimize_model(model, low_bit="fp4")
Contributor (review comment on the diff excerpt above):

Add comments about what we are doing
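For context on what the quoted lines do end to end, here is a minimal sketch of how such a snippet typically plugs into a TRL reward-modeling run. The RewardTrainer/RewardConfig argument names, the hh-rlhf preprocessing, the placeholder model "facebook/opt-350m", and the ipex_llm import path are assumptions for illustration, not the exact code in this PR:

# Minimal sketch, assuming TRL's RewardTrainer API (argument names vary across TRL
# versions) and the public "chosen"/"rejected" columns of Anthropic/hh-rlhf.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer
from ipex_llm import optimize_model  # assumed import path for IPEX-LLM

model_name = "facebook/opt-350m"  # placeholder base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Single-logit head: the reward model scores one scalar per sequence.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model = optimize_model(model, low_bit="fp4")  # low-bit optimization, as in the diff above

# Tokenize preference pairs into the chosen/rejected columns RewardTrainer expects.
def preprocess(batch):
    chosen = tokenizer(batch["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(batch["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

train_dataset = (
    load_dataset("Anthropic/hh-rlhf", split="train")
    .map(preprocess, batched=True, remove_columns=["chosen", "rejected"])
)

trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="./rm-example", per_device_train_batch_size=2, max_length=512),
    tokenizer=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()

The pairwise loss (maximizing the margin between the chosen and rejected scores) is handled inside RewardTrainer, so the example itself mainly needs to supply the tokenized pairs and the low-bit model.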

@qiyuangong (Contributor) left a comment:

LGTM
