
Fixing MuP #1061

Open · wants to merge 10 commits into main

Conversation

@marcobellagente93 commented Oct 19, 2023

The current MuP implementation in NeoX is buggy. This PR restores the main functionality without major changes to the code. Current limitations:

  • only supports non-tied models
  • does not immediately support embedding multipliers
  • does not immediately support attention-factor multipliers and 1/d**2 attention scaling

The main issue in the current code is that the model is always initialized with use_mup = False, which is only set to its correct value later. This doesn't work: the wrong attribute value is baked in at the init of all classes, so MuP was effectively never used. The best solution would be to loop through all modules and set the correct attribute there. The current workaround is a minimal modification whereby the attribute is reset at the re-init of the linear layers, which does the correct thing for everything except self-attention and the embedding.
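A minimal sketch of the "loop through all modules" approach described above; the flag name `use_mup` matches the text, while `propagate_use_mup` and `neox_args` are assumed names for illustration only:

```python
import torch.nn as nn


def propagate_use_mup(model: nn.Module, use_mup: bool) -> None:
    """Set the (assumed) `use_mup` flag on every submodule that defines it.

    This avoids the pitfall described above, where the flag is read once at
    construction time and a later config change never reaches the modules.
    """
    for module in model.modules():
        if hasattr(module, "use_mup"):
            module.use_mup = use_mup


# Hypothetical usage, once the config has its final value:
# propagate_use_mup(model, neox_args.use_mup)
```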

A second issue is that the code as written expects the mu-optimizer to provide the correct target_width/base_width multiplier, but the mup library does not provide it. We should probably just open a PR on mup and get rid of this. As the fastest solution, mup is added to the repo, with the multiplier added to the optimizer dict. The torch dependency was also removed from mup's requirements, since it is unnecessary there and can only lead to issues.
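To make the second point concrete, here is a hedged sketch of what "the multiplier added to the optimizer dict" could look like; the names (`build_mup_param_groups`, `width_mult`) are illustrative, not the vendored library's exact API:

```python
def build_mup_param_groups(matrix_like_params, lr, base_width, target_width):
    """Sketch only: attach a width multiplier to the matrix-like parameter group
    so that an LR scheduler can divide the Adam learning rate by it, which is
    the muP rule for matrix-like weights. All names here are illustrative."""
    width_mult = target_width / base_width
    return [{"params": list(matrix_like_params), "lr": lr, "width_mult": width_mult}]


# Later, inside a scheduler step (also schematic):
# for group in optimizer.param_groups:
#     group["lr"] = scheduled_lr / group.get("width_mult", 1.0)
```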

Plots are also added for testing the implementation:

  • coordinate checker
  • wider always better

@CLAassistant commented Oct 19, 2023

CLA assistant check
All committers have signed the CLA.

@accesslint bot left a comment

There are accessibility issues in these changes.

mup/README.md Outdated
This can be used to tune extremely large neural networks such as large pretrained transformers, as we have done in our work.
More generally, μP reduces the fragility and uncertainty when transitioning from exploration to scaling up, which are not often talked about explicitly in the deep learning literature.

![](figures/sp_vs_mup_dashed.png)
This image is missing a text alternative. This is a problem for people using screen readers.

mup/README.md Outdated

μP turns out to be the *unique* "natural" parametrization that has this hyperparameter stability property across width, as empirically verified in the gif below on MLPs trained with SGD. Here, across time, we interpolate between PyTorch default and μP's learning rate and initialization scalings (right), and we scale up the width-256 model (log2(width)=8) to width 2^13 = 8192 using this interpolated scaling rule (left).

![](figures/parametrizations.gif)
This image is missing a text alternative. This is a problem for people using screen readers.
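For context on the quoted README, the upstream library's basic workflow looks roughly like this (a minimal sketch with a toy MLP and made-up widths; for fully correct initialization scaling one would also re-initialize weights with mup's init helpers):

```python
import torch
import torch.nn as nn
from mup import MuAdam, MuReadout, set_base_shapes


class MLP(nn.Module):
    def __init__(self, width=256, d_in=32, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, width)
        self.fc2 = nn.Linear(width, width)
        self.readout = MuReadout(width, d_out)  # muP-aware output layer

    def forward(self, x):
        return self.readout(torch.relu(self.fc2(torch.relu(self.fc1(x)))))


# Record base shapes so mup knows which dimensions count as "width":
model = MLP(width=2048)
set_base_shapes(model, MLP(width=256), delta=MLP(width=512))

# MuAdam rescales the learning rate per parameter group according to muP:
optimizer = MuAdam(model.parameters(), lr=1e-3)
```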

You should find the generated plots under `./coord_checks`, which show stable coordinate sizes under μP, e.g.,

![](coord_checks/μp_mlp_sgd_coord.png)
This image is missing a text alternative. This is a problem for people using screen readers.


and growing sizes under SP, e.g.,

![](coord_checks/sp_mlp_sgd_coord.png)
This image is missing a text alternative. This is a problem for people using screen readers.

mup/README.md Outdated
The first set of 3 plots shows an MLP in standard parametrization (SP), trained by adam.
We see after 1 step of update, activation/output `l1` are exploding with width.
This means SP is "incorrect."
![](coord_checks/sp_mlp_adam_lr0.001_nseeds5_bn0_coord.png)
This image is missing a text alternative. This is a problem for people using screen readers.

mup/README.md Outdated
![](coord_checks/sp_mlp_adam_lr0.001_nseeds5_bn0_coord.png)
We now do the same for an MLP in maximal update parametrization (μP) (including using `mup.optim.MuAdam` instead of `torch.optim.Adam`).
In contrast to the above, all curves stay horizontal, indicating that μP is implemented correctly.
![](coord_checks/μp_mlp_adam_lr0.001_nseeds5_bn0_coord.png)
This image is missing a text alternative. This is a problem for people using screen readers.

mup/README.md Outdated

### Wider is Always Better

![](figures/widerbetter.png)
This image is missing a text alternative. This is a problem for people using screen readers.

@accesslint bot left a comment

There are accessibility issues in these changes.

4. Set coord-check to false
What you will get are statistics of pre-activations for models differing only by their width. If done correctly, these should be approximately horizontal.
![](mup/figures/coord_check_up.0.jpg)
This image is missing a text alternative. This is a problem for people using screen readers.

<font size="1"> *Healthy coordinate check*</font>
![](mup/figures/coord_check_sp.0.jpg)
This image is missing a text alternative. This is a problem for people using screen readers.

<font size="1"> *Something's wrong*</font>

A second kind of test is to pick any configuration and learning rate (one that doesn't lead to diverging training) and simply run a few experiments, fixing everything except the width. Since wider is always better with muP, the results should look like the figure below.
![](mup/figures/width_check.png)
This image is missing a text alternative. This is a problem for people using screen readers.
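The coordinate check described in the quoted text can be approximated in a few lines of plain PyTorch; the sketch below uses a made-up toy MLP and widths and records the mean absolute hidden pre-activation after one Adam step, the statistic that should stay roughly flat across widths under a correct muP setup:

```python
import torch
import torch.nn as nn


def coord_stat(width, lr=1e-3, d_in=32, batch=64):
    """Mean |hidden pre-activation| of a toy MLP after one Adam step."""
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x, y = torch.randn(batch, d_in), torch.randn(batch, 1)
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        return model[0](x).abs().mean().item()


# Under standard parametrization this statistic tends to grow with width;
# under a correct muP setup the curve should stay approximately horizontal.
for w in (256, 512, 1024, 2048):
    print(w, coord_stat(w))
```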


@Quentin-Anthony (Member)

@nsarka -- FYI

@StellaAthena (Member)

Instead of incorporating muP into GPT-NeoX we are going to move these changes to our fork of their repo and install that version until the changes are upstreamed.

@Quentin-Anthony (Member)

> Instead of incorporating muP into GPT-NeoX we are going to move these changes to our fork of their repo and install that version until the changes are upstreamed.

Not all of his changes are muP-related. I've separated out the muP 1-line change into our fork until microsoft/mup#65 is merged. We can discuss the GPT-NeoX specific changes here and remove the mup subdir.



@StellaAthena (Member)

> Instead of incorporating muP into GPT-NeoX we are going to move these changes to our fork of their repo and install that version until the changes are upstreamed.
>
> Not all of his changes are muP-related. I've separated out the muP 1-line change into our fork until microsoft/mup#65 is merged. We can discuss the GPT-NeoX specific changes here and remove the mup subdir.

Oh I see, I read the previous discussion backwards (thinking it was a 1-line NeoX edit and a substantial muP edit). I went ahead and removed the muP changes (moving them to the fork) and imported the new muP library. I haven't had a chance to check the correctness of this implementation yet, however.

@marcobellagente93 (Author)

> Instead of incorporating muP into GPT-NeoX we are going to move these changes to our fork of their repo and install that version until the changes are upstreamed.
>
> Not all of his changes are muP-related. I've separated out the muP 1-line change into our fork until microsoft/mup#65 is merged. We can discuss the GPT-NeoX specific changes here and remove the mup subdir.
>
> Oh I see, I read the previous discussion backwards (thinking it was a 1-line NeoX edit and a substantial muP edit). I went ahead and removed the muP changes (moving them to the fork) and imported the new muP library. I haven't had a chance to check the correctness of this implementation yet, however.

That's fairly quick to verify: currently NeoX adjusts the learning rate with the width here, but group['width'] doesn't exist, and since the access sits inside the if block it never raised an error.

https://github.com/EleutherAI/gpt-neox/blob/b02d98932f95fe0500c28698b38acb175e92e980/megatron/learning_rates.py#L97C1-L101C37
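Schematically, the linked scheduler code guards the width rescale behind the MuP flag, which is why the missing key never surfaced; a hedged reconstruction (assumed names, not the verbatim NeoX code):

```python
# Hedged reconstruction of the guarded rescale in megatron/learning_rates.py
# (names and structure are assumptions, not a verbatim copy of the linked code):
def apply_lr(optimizer, scheduled_lr, use_mup):
    for group in optimizer.param_groups:
        new_lr = scheduled_lr
        if use_mup:
            # Upstream mup never puts "width" into the param group, so this line
            # would raise KeyError; it went unnoticed because the guard was
            # effectively never true (see the init bug discussed above).
            new_lr = new_lr / group["width"]
        group["lr"] = new_lr
```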

@StellaAthena (Member)

> That's fairly quick to verify: currently NeoX adjusts the learning rate with the width here, but group['width'] doesn't exist, and since the access sits inside the if block it never raised an error.
>
> https://github.com/EleutherAI/gpt-neox/blob/b02d98932f95fe0500c28698b38acb175e92e980/megatron/learning_rates.py#L97C1-L101C37

It looks like this is unchanged from your branch? I thought your branch was working. Or am I missing something?

@marcobellagente93 (Author)

> That's fairly quick to verify: currently NeoX adjusts the learning rate with the width here, but group['width'] doesn't exist, and since the access sits inside the if block it never raised an error.
> https://github.com/EleutherAI/gpt-neox/blob/b02d98932f95fe0500c28698b38acb175e92e980/megatron/learning_rates.py#L97C1-L101C37
>
> It looks like this is unchanged from your branch? I thought your branch was working. Or am I missing something?

It's fixed if we use the EleutherAI fork of mup, where I added the width to the optimizer dict: https://github.com/EleutherAI/mup/blob/14e436bc013418725976e7cfb1b4e74e8901ab80/mup/optim.py#L75C9-L80C52.
Apologies for the confusion.
