
Added support for Docker with GPU inference. #154

Open
wants to merge 19 commits into main
Conversation

@dengsgo dengsgo commented Jul 2, 2023

Added support for Docker with GPU inference.

Build Image

# cd your/path/to/ChatGLM2-6B
$ docker build -t chatglm2:v1 . 

You should see output similar to the following:

[+] Building 475.0s (9/9) FINISHED
 => [internal] load build definition from Dockerfile                                                               0.1s
 => => transferring dockerfile: 755B                                                                               0.0s
 => [internal] load .dockerignore                                                                                  0.1s
 => => transferring context: 2B                                                                                    0.0s
 => [internal] load metadata for docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime                           0.0s
 => [internal] load build context                                                                                  0.1s
 => => transferring context: 11.00MB                                                                               0.1s
 => CACHED [1/4] FROM docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime                                      0.0s
 => [2/4] COPY . .                                                                                                 0.1s
 => [3/4] RUN apt update && apt install -y git gcc                                                                93.8s
 => [4/4] RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/ && pip install icetk  372.3s
 => exporting to image                                                                                             8.6s
 => => exporting layers                                                                                            8.5s
 => => writing image sha256:828bdcc8d9b5c8537dc2f243633497f573b41518eae9e2dc11539d7f8d864eb5                       0.0s
 => => naming to docker.io/library/chatglm2:v1                                                                     0.0s
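
The Dockerfile itself is not quoted in this thread, but the four steps in the build log above suggest it looks roughly like the following sketch. The real file may differ; in particular, the final `CMD`/`ENTRYPOINT` is not visible in the log, so it is omitted here:

```dockerfile
# Reconstructed sketch based on the build log above -- not the actual
# Dockerfile from this PR.
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime

# [2/4] copy the ChatGLM2-6B repository into the image
COPY . .

# [3/4] build tools needed by some pip packages
RUN apt update && apt install -y git gcc

# [4/4] Python dependencies (Tsinghua PyPI mirror) plus icetk
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/ \
    && pip install icetk
```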

You now have the chatglm2:v1 image:

$ docker images
REPOSITORY   TAG   IMAGE ID       CREATED          SIZE
chatglm2     v1    828bdcc8d9b5   18 minutes ago   9.8GB

Usage

The first time, you need to download a model. For example, to use chatglm2-6b-int4, run the following in a path you consider suitable (such as /data/models):

$ cd /data/models
# Make sure you have git-lfs installed (https://git-lfs.com)
$ git lfs install
$ git clone git@hf.co:THUDM/chatglm2-6b-int4

Now you can start the container:

$ docker run --rm -it \
    -v /data/models/chatglm2-6b-int4:/workspace/THUDM/chatglm2-6b \
    --gpus=all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -p 7860:7860 \
    chatglm2:v1

You should see output similar to the following:

/opt/conda/lib/python3.10/site-packages/gradio/components/textbox.py:259: UserWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  warnings.warn(
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
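
If the demo starts but inference stays on CPU, it can help to confirm that the container actually sees the GPU. A minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host (this command is not part of the PR):

```shell
# Sanity check: run a throwaway container from the image built above and
# ask PyTorch whether CUDA is visible. On a working GPU host this should
# print "True".
$ docker run --rm --gpus=all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
    -e NVIDIA_VISIBLE_DEVICES=all \
    chatglm2:v1 \
    python -c "import torch; print(torch.cuda.is_available())"
```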

Open a local browser and go to http://localhost:7860/ to use the webUI.
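
If you are scripting the startup, a small hypothetical helper (not part of this PR; assumes `curl` is available on the host) can wait for the UI instead of refreshing the browser by hand:

```shell
# Poll a URL until the server behind it answers, so scripts don't race
# the container start-up. Returns 0 once the URL responds, 1 on timeout.
wait_for_url() {
  local url="$1" tries="${2:-60}"
  for _ in $(seq "$tries"); do
    # -f makes curl fail on HTTP errors, -s silences progress output
    if curl -sf "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

For example: `wait_for_url http://localhost:7860/ && echo "webUI is up"`.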

Enjoy!

nlpchen commented Sep 15, 2023

Could you package a GPU-enabled image and share it? Thanks. nlp_chen@163.com
