
Add Ulysses DistributedAttention compatibility #5525

Conversation

Kwen-Chen
Contributor

The DistributedAttention in DeepSpeed-Ulysses is currently only compatible with the training code in Megatron-DeepSpeed, because its forward pass accepts only positional arguments. This breaks the common pattern of passing keyword arguments to the local attention, for example when using Flash Attention:

from deepspeed.sequence.layer import DistributedAttention
from flash_attn import flash_attn_func

ulysses_attn = DistributedAttention(local_attention=flash_attn_func, sequence_process_group=None, scatter_idx=2, gather_idx=1)

# flash_attn_func takes dropout_p and softmax_scale positionally and
# causal as a keyword argument, which the previous forward signature
# could not pass through.
attn_output = ulysses_attn(
    query_states,
    key_states,
    value_states,
    dropout,
    softmax_scale,
    causal=causal,
)

Therefore, a **kwargs parameter has been added to the DistributedAttention forward pass, increasing compatibility with more local attention implementations while keeping the code changes minimal.
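
The shape of the change can be illustrated with a minimal sketch (the class name, member names, and the omitted all-to-all communication are simplifications for illustration, not the actual DeepSpeed implementation): the forward pass accepts **kwargs and forwards them, together with any positional *args, to the wrapped local attention.

import torch
from torch import Tensor

class DistributedAttentionSketch(torch.nn.Module):
    """Simplified stand-in for deepspeed.sequence.layer.DistributedAttention.

    The sequence-parallel all-to-all steps are omitted; the point is only to
    show how *args/**kwargs are forwarded to the wrapped local attention so
    that keyword arguments such as causal=True reach flash_attn_func.
    """

    def __init__(self, local_attention, sequence_process_group=None,
                 scatter_idx: int = 2, gather_idx: int = 1) -> None:
        super().__init__()
        self.local_attn = local_attention
        self.spg = sequence_process_group
        self.scatter_idx = scatter_idx
        self.gather_idx = gather_idx

    def forward(self, query: Tensor, key: Tensor, value: Tensor,
                *args, **kwargs) -> Tensor:
        # In the real layer, query/key/value are exchanged across the sequence
        # process group (all-to-all) before and after this call; here we only
        # demonstrate the argument pass-through.
        return self.local_attn(query, key, value, *args, **kwargs)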

@Kwen-Chen Kwen-Chen requested a review from mrwyattii as a code owner May 13, 2024 03:55
@Kwen-Chen Kwen-Chen closed this May 13, 2024
@Kwen-Chen Kwen-Chen reopened this May 13, 2024
@loadams loadams added this pull request to the merge queue May 22, 2024
Merged via the queue into microsoft:master with commit f86824b May 22, 2024
12 checks passed
@Kwen-Chen Kwen-Chen deleted the add-ulysses-DistributedAttention-compatibility branch May 23, 2024 04:30