Not receiving grads with cpu_besides? #4
@conceptofmind thanks for pointing out the need for a contiguous. for the latter issue, i don't really know off the top of my head, not without spending some time debugging. is this on one machine? maybe the logic for determining which expert is activated can be moved to init, and the device can be set before forward? would welcome a PR if you figure it out, now that mixture of experts is becoming a hot thing
@conceptofmind i thought you were working on startups with Aran? lol that's what Aran told me last i chatted with him. regardless, it is cool you are all working on MoE! it needs more love in the open source space
@lucidrains Regarding 8c3fedb:

```python
def all_gather_same_dim(t):
    world_size = dist.get_world_size()
    gathered_tensors = [torch.empty_like(t.contiguous(), device = t.device, dtype = t.dtype) for i in range(world_size)]
    dist.all_gather(gathered_tensors, t.contiguous())
    return gathered_tensors
```
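As a small aside on why those `.contiguous()` calls matter: `dist.all_gather` expects densely laid-out tensors, but views such as transposes share storage with the original tensor and are not contiguous. A minimal, non-distributed sketch of the distinction:

```python
import torch

# A transposed view shares storage with the original tensor and is not
# laid out contiguously in memory, which collective ops like
# dist.all_gather typically require.
t = torch.arange(6).reshape(2, 3).t()
print(t.is_contiguous())               # False
print(t.contiguous().is_contiguous())  # True
```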
@tomaarsen oops, fixed
Hi Phil,

Thank you for the response. We are happy to continue to diagnose the issue and open a PR. Can also provide some DDP training code later too. This is currently one machine with 8 GPUs. I should also clarify a little bit the inclusion of:

```python
self.model = DDP(
    self.model,
    device_ids=[self.local_rank],
    output_device=self.local_rank,
    find_unused_parameters=True,
    gradient_as_bucket_view=True
)
```

If `find_unused_parameters` is not set to …

We checked to see if the DDP model was on …

Aran and I did cofound a startup together! We will hopefully have some interesting things related to OSS MoE out in the near future! We have been trying to organize research efforts across OSS organizations such as EAI, HF, LAION, SAI, etc., as we feel this type of collaboration will lead to much more fruitful results.
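The `find_unused_parameters` flag discussed in this thread can be exercised without a multi-GPU setup. This is a hedged single-process CPU sketch (assuming a gloo backend is available in the torch build); the flag lets DDP tolerate parameters that receive no grad, such as experts skipped in a given step:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "cluster" so DDP can be constructed on CPU for illustration.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 4)
# find_unused_parameters=True makes DDP rescan for params that got no grad
# each iteration instead of erroring out when some are unused.
ddp_model = DDP(model, find_unused_parameters=True)

loss = ddp_model(torch.randn(2, 4)).sum()
loss.backward()
print(model.weight.grad is not None)  # True

dist.destroy_process_group()
```

`gradient_as_bucket_view=True` is omitted here since, as noted later in the thread, some torch versions reject the combination of the two flags.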
yup, makes sense re: ddp settings. a PR would be a great contribution! there's really only a handful of people working on MoE. ok cool, looking forward to seeing what you and Aran end up building!
are you aware that soft moe will not work in LLMs? also have this built: https://github.com/lucidrains/st-moe-pytorch
Yes, we are not looking to apply this for LLMs :)
Hopefully we can provide more information to fulfill that curiosity soon 😄 Can send an email or update here with some preliminary results. We are aware that this type of MoE is incompatible with LLMs but think it can be applied to some other interesting use cases. Going to test out st-moe as well!
@conceptofmind yes indeed, i've seen some papers using mixture of experts in text-to-image models with great success. nice, as long as you are aware!
Hi Phil, We definitely appreciate you bringing attention to this. I imagine it will save someone the confusion regarding soft-moe and LLMs if they read this issue in the future!
Absolutely! If we do get stuck I can definitely get you access to one machine or multiple machines through LAION/SAI or with one of our grants from Modal/Latitude. Currently talking to Huggingface about compute allocations as well.

Thank you,
Enrico
Quick update. Using:

```python
self.model = DDP(
    self.model,
    device_ids=[self.local_rank],
    output_device=self.local_rank,
    find_unused_parameters=True,
    gradient_as_bucket_view=True
)
```

causes the error: …

Until further evaluation, one or the other needs to be chosen. Setting … Is …
@conceptofmind could you show the full stack trace? what would happen if you just cloned the adam / adamw code and replaced that line with …
Hi Phil,

Here is the stack trace: …

Replacing the line in the adam / adamw code with …
I imported: …

And made your recommended adjustment above. This does resolve the … I added: …

Maybe the placement of the grad to a device could be done in the for loop instead?

```python
for i, param in enumerate(params):
    grad = grads[i] if not maximize else -grads[i]
    grad = grad.to(param.device)
    exp_avg = exp_avgs[i]
    exp_avg_sq = exp_avg_sqs[i]
    step_t = state_steps[i]
```

Both of these seem to work ok with some other minor adjustments to the Adam code (to prevent type checking errors) and allow the use of …
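The device guard in that loop can be sketched in isolation. This is a simplified stand-in (a plain SGD-style update, not the actual Adam math) just to show that `grad.to(param.device)` is a harmless no-op when grads and params already share a device:

```python
import torch

# Hedged sketch: move each grad onto its parameter's device before the
# update. .to() returns the same tensor when devices already match, so the
# guard costs nothing on single-device runs while fixing cpu-offloaded cases.
maximize = False
lr = 0.1
params = [torch.nn.Parameter(torch.zeros(3)) for _ in range(2)]
grads = [torch.ones(3) for _ in params]

with torch.no_grad():
    for i, param in enumerate(params):
        grad = grads[i] if not maximize else -grads[i]
        grad = grad.to(param.device)  # guard against grads living elsewhere
        param.add_(grad, alpha=-lr)   # plain SGD step standing in for Adam
```

After one step each parameter entry is `0 - 0.1 * 1 = -0.1`.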
@conceptofmind yup! you got it 💯
Thank you for the pointer! This may be a good place to utilize something like Mitchell Wortsman's StableAdamW, as it would require fewer alterations than importing the adam files: …

I messaged Horace He to see if there is a reasonable solution to getting around needing to use …

Going to see if hooks can potentially be used to deal with the backward pass. There is one other slight issue we are investigating.

Accidentally closed. Oops!
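For the hook idea mentioned here, one possibility is a tensor hook that rewrites a parameter's incoming gradient before it is accumulated. A minimal sketch (the helper name is hypothetical, not part of the library):

```python
import torch

# Hypothetical sketch: a Tensor.register_hook callback that moves a
# parameter's incoming gradient onto the parameter's own device, on the
# assumption that grads can land elsewhere when experts are offloaded.
param = torch.nn.Parameter(torch.zeros(3))

def move_grad_to_param_device(grad):
    return grad.to(param.device)

param.register_hook(move_grad_to_param_device)

loss = (param * 2.0).sum()
loss.backward()
print(param.grad)  # tensor([2., 2., 2.])
```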
@conceptofmind @tomaarsen @haileyschoelkopf do let me know if you end up seeing anything in your experiments, positive or negative
Definitely will do! We are going to meet with Carlos Riquelme very soon to discuss Soft-MoE and collaboration together. I will keep you updated on that as well.
Certainly!
Hi Phil,

@haileyschoelkopf, @AranKomat, and I plan to call with Carlos Riquelme about Soft-MoE/MoE on Friday, 11:30 am EST. I wanted to extend the invitation to you as well if that is something of interest. If not, I can just post a discussion update here as well.

Best,
Enrico
@conceptofmind oh i won't be able to make it at that time. yea, keep me posted 👍
@conceptofmind @tomaarsen how did it go? just came across this paper, so it seems like there is something to it: https://arxiv.org/abs/2402.08609
It is still going 😆. We are continuing ablations.
I will likely get you some hardware soon to test a few things, since we were hitting a few snags here and there.
ah sounds good, no worries, was just curious
We ran into a snag where StabilityAI rescinded compute they initially offered. We fortunately were able to get it through LAION instead and will be continuing experiments now. |
Hi Phil,

I have been working with @tomaarsen of HF and @haileyschoelkopf of EAI testing soft moe.

One issue that was occurring was that the tensors were not contiguous: …

Adding `.contiguous()` to `t` in `all_gather_same_dim()` seemed to resolve this issue: …

But after, another issue presented itself where the parameter indices were not receiving grads during the backward pass: …

We diagnosed this back to `self.all_experts_to_cpu_besides(expert_slice)`: …

By commenting out `self.all_experts_to_cpu_besides(expert_slice)` the script would then run and loss would decrease seemingly normally with `amp`.

Do you have any idea why the above issue would occur or how it should properly be resolved?

Always greatly appreciate your help.

Thank you,
Enrico
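The "parameter indices not receiving grads" symptom can be reproduced in miniature without DDP or expert offloading. This is a stand-in illustration (the modules below are plain `Linear` layers, not the library's expert classes): any parameter that never participates in the forward pass simply ends up with no gradient, which is what DDP then complains about.

```python
import torch

# Minimal illustration of the reported symptom: a parameter that takes no
# part in the forward pass receives no gradient after backward. The idle
# module stands in for an expert that was moved off-device and skipped.
used_expert = torch.nn.Linear(4, 4)
idle_expert = torch.nn.Linear(4, 4)  # never called this step

loss = used_expert(torch.randn(2, 4)).sum()
loss.backward()
print(used_expert.weight.grad is not None)  # True
print(idle_expert.weight.grad is None)      # True
```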