add bash script to install packages #245

Open · wants to merge 8 commits into base: main
30 changes: 20 additions & 10 deletions README.md
@@ -84,7 +84,16 @@ All notebooks are **beginner friendly**! Add your dataset, click "Run All", and

![](https://i.ibb.co/sJ7RhGG/image-41.png)

-## 💾 Installation Instructions
+## 💾 Simple Installation Instructions

```bash
./install.sh
```

## 💾 Manual Installation Instructions
<details>
<summary>Details</summary>

### Conda Installation
Select either `pytorch-cuda=11.8` for CUDA 11.8 or `pytorch-cuda=12.1` for CUDA 12.1. If you have `mamba`, use `mamba` instead of `conda` for faster solving. See this [Github issue](https://github.com/unslothai/unsloth/issues/73) for help on debugging Conda installs.
```bash
@@ -93,7 +102,7 @@ conda activate unsloth_env

conda install pytorch-cuda=<12.1/11.8> pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers

-pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+pip install "unsloth[colab-221] @ git+https://github.com/unslothai/unsloth.git"

pip install --no-deps trl peft accelerate bitsandbytes
```
@@ -138,27 +147,28 @@ pip install "unsloth[cu121-torch220] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu118-ampere-torch220] @ git+https://github.com/unslothai/unsloth.git"
pip install "unsloth[cu121-ampere-torch220] @ git+https://github.com/unslothai/unsloth.git"
```
-5. If you get errors, try the below first, then go back to step 1:
-```bash
-pip install --upgrade pip
-```
-6. For Pytorch 2.2.1:
+5. For Pytorch 2.2.1:
```bash
# RTX 3090, 4090 Ampere GPUs:
-pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+pip install "unsloth[colab-221] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes

# Pre Ampere RTX 2080, T4, GTX 1080 GPUs:
-pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+pip install "unsloth[colab-221] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps xformers trl peft accelerate bitsandbytes
```
-7. To troubleshoot installs try the below (all must succeed). Xformers should mostly all be available.
+6. To troubleshoot installs try the below (all must succeed). Xformers should mostly all be available.
```bash
nvcc
python -m xformers.info
python -m bitsandbytes
```
+7. If you get errors, try the below first, then go back to step 1:
+```bash
+pip install --upgrade pip
+```

</details>

## 📜 Documentation
- Go to our [Wiki page](https://github.com/unslothai/unsloth/wiki) for saving to GGUF, checkpointing, evaluation and more!
- We support Huggingface's TRL, Trainer, Seq2SeqTrainer or even Pytorch code!
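Reviewer aside: the Ampere/pre-Ampere split in the README instructions above is what the new script below automates via the GPU's CUDA compute capability. The mapping can be sketched in isolation (the `gpu_tag` function and the `non-ampere` label are illustrative, not part of the PR; the script itself emits an empty string for older GPUs):

```shell
# Map a CUDA compute-capability major version to an installer GPU tag.
gpu_tag () {
    if [[ "$1" -ge 8 ]]; then
        echo "ampere"
    else
        echo "non-ampere"
    fi
}

gpu_tag 8   # RTX 3090/4090 report compute capability major version 8
gpu_tag 7   # T4 reports 7.5; GTX 1080 reports 6.1
```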
114 changes: 114 additions & 0 deletions install.sh
@@ -0,0 +1,114 @@
#!/bin/bash

# Function to get PyTorch version if installed
get_pytorch_version () {
    if python -c "import torch" &> /dev/null; then
        PYTORCH_VERSION=$(python -c "import torch; print(torch.__version__)")
        echo $PYTORCH_VERSION
    else
        echo "not installed"
    fi
}

# Function to get GPU architecture
get_gpu_type () {
    GPU_MAJOR_VERSION=$(python -c "import torch; print(torch.cuda.get_device_capability()[0])")
    if [[ "$GPU_MAJOR_VERSION" -ge 8 ]]; then
        echo "ampere"
    else
        echo ""
    fi
}

# Function to install packages via Conda
conda_install_packages () {
    conda create --name unsloth_env python=3.10 -y
    CONDA_BASE=$(conda info --base)
    CONDA_SH="$CONDA_BASE/etc/profile.d/conda.sh"
    if [[ -f "$CONDA_SH" ]]; then
        echo "Sourcing Conda from $CONDA_SH"
        source "$CONDA_SH"
    else
        echo "Unable to locate conda.sh at $CONDA_SH. Please ensure Conda is properly installed."
        exit 1
    fi
    conda activate unsloth_env
    conda install pytorch cudatoolkit=${CUDA_TAG} torchvision torchaudio pytorch-cuda=${CUDA_TAG} -c pytorch -c nvidia -y
    conda install xformers -c xformers -y
    pip install bitsandbytes
    pip install "unsloth[conda] @ git+https://github.com/unslothai/unsloth.git"
}

# Function to install packages via Pip
pip_install_packages () {
    pip install --upgrade --force-reinstall --no-cache-dir torch==${PYTORCH_CORE_VERSION}+${CUDA_TAG} triton --index-url https://download.pytorch.org/whl/${CUDA_TAG}
    if [[ "$PYTORCH_VERSION_TAG" == "torch210" ]]; then
        pip install "unsloth[${CUDA_TAG}${GPU_TYPE:+-$GPU_TYPE}] @ git+https://github.com/unslothai/unsloth.git"
    elif [[ "$PYTORCH_VERSION_TAG" == "torch221" ]]; then
        pip install "unsloth[colab-221] @ git+https://github.com/unslothai/unsloth.git"
        if [[ "$CUDA_TAG" == "cu118" ]]; then
            pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
        else
            pip install --no-deps xformers trl peft accelerate bitsandbytes
        fi
    else
        pip install "unsloth[${CUDA_TAG}${GPU_TYPE:+-$GPU_TYPE}-$PYTORCH_VERSION_TAG] @ git+https://github.com/unslothai/unsloth.git"
    fi
}

# Check if conda is installed
if type conda &> /dev/null; then
    echo "Anaconda/Miniconda is installed, proceeding with Conda installation."

    # Determine CUDA version
    CUDA_VERSION=$(nvcc --version | grep "release" | sed 's/.*release //' | sed 's/,.*//')
    echo "CUDA version detected: $CUDA_VERSION"

    # Choose the right tag for pytorch-cuda
    # TODO: This is janky, we should find a better way to do this
    cuda_version_value=$(echo "$CUDA_VERSION" | bc)
    if [ "$(echo "$cuda_version_value < 12" | bc -l)" -eq 1 ]; then
        CUDA_TAG="11.8"
    else
        CUDA_TAG="12.1"
    fi

    conda_install_packages

# If conda is not installed, use pip
else
    echo "Anaconda/Miniconda is not installed, checking for CUDA and proceeding with Pip installation."

    # Check if CUDA is available
    if type nvcc &> /dev/null; then
        CUDA_VERSION=$(nvcc --version | grep "release" | sed 's/.*release //' | sed 's/,.*//')
        echo "CUDA version detected: $CUDA_VERSION"
        PYTORCH_VERSION=$(get_pytorch_version)
        echo "PyTorch version detected: $PYTORCH_VERSION"
        GPU_TYPE=$(get_gpu_type)
        if [[ $GPU_TYPE == "ampere" ]]; then
            echo "Ampere or newer architecture detected. Proceeding with ampere specific installation."
        else
            echo "Older GPU architecture detected. Proceeding with non-ampere specific installation."
        fi
        # Define CUDA tag based on CUDA version
        if [[ "$CUDA_VERSION" == "11.8" ]]; then
            CUDA_TAG="cu118"
        elif [[ "$CUDA_VERSION" == "12.1" ]]; then
            CUDA_TAG="cu121"
        else
            echo "Unsupported CUDA version for Pip installation. Exiting."
            exit 1
        fi

        # Extract PyTorch version (ignoring any suffix)
        PYTORCH_CORE_VERSION=$(echo $PYTORCH_VERSION | cut -d'+' -f1)
        PYTORCH_VERSION_TAG="torch${PYTORCH_CORE_VERSION//./}"

        pip_install_packages

    else
        echo "CUDA not detected. Pip installation requires CUDA. Exiting."
        exit 1
    fi
fi
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -161,7 +161,7 @@ colab-ampere-torch220 = [
"ninja",
"flash-attn",
]
-colab-new = [
+colab-221 = [
"tyro",
"transformers>=4.38.2",
"datasets>=2.16.0",