⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
Updated Feb 26, 2024 · Python
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages through continual pre-training.
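For readers unfamiliar with the technique, the sketch below shows what a basic continual pre-training run on a domain corpus can look like using the Hugging Face Transformers `Trainer`. It is not this repository's own API; the base checkpoint name, the corpus file `domain_corpus.txt`, and all hyperparameters are illustrative assumptions only.

```python
# Minimal sketch of continual pre-training (causal LM objective) with
# Hugging Face Transformers. Assumed names/files are placeholders, not
# part of the framework described above.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed: a plain-text domain corpus, one document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-domain-cpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,   # small LR helps limit catastrophic forgetting
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False keeps the standard next-token-prediction objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```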