
[FEATURE]: Support qwen2 model #5573

Open
wangbluo opened this issue Apr 9, 2024 · 0 comments
Labels
enhancement New feature or request


wangbluo commented Apr 9, 2024

Describe the feature

We are excited to announce the addition of support for the qwen2 model in the ColossalAI framework. The qwen2 model is compatible with version 4.39.3 of the transformers library; version 4.36 of transformers does not support the qwen2 model.

The qwen2 model inherits most of the optimization capabilities of ColossalAI and integrates seamlessly with the transformers library. Users can now use the qwen2 model for tasks such as pretraining and fine-tuning, just as they would use other models like llama2.

With the integration of the qwen2 model, ColossalAI continues to empower users with state-of-the-art models and extensive optimization functionalities.

Environment:
transformers = 4.39.3
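Since the issue notes that transformers 4.36 lacks qwen2 support while 4.39.3 works, a simple version guard can fail fast before attempting to load the model. A minimal sketch (the `supports_qwen2` helper and the 4.37.0 cutoff are our assumptions, not part of either library):

```python
# Assumption: qwen2 support first appeared in transformers 4.37.0;
# the issue only confirms that 4.36 lacks it and 4.39.3 has it.
MIN_QWEN2_VERSION = "4.37.0"

def parse_version(version: str) -> tuple:
    """Parse a dotted version string like '4.39.3' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def supports_qwen2(installed_version: str) -> bool:
    """Return True if the installed transformers version can load qwen2."""
    return parse_version(installed_version) >= parse_version(MIN_QWEN2_VERSION)

print(supports_qwen2("4.36.0"))  # False: too old for qwen2
print(supports_qwen2("4.39.3"))  # True: the version named in this issue
```

In practice, the installed version would come from `transformers.__version__` (or `importlib.metadata.version("transformers")`), and the guard could raise a clear error telling the user to upgrade.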

@wangbluo wangbluo added the enhancement New feature or request label Apr 9, 2024