Installation Method | 安装方法与平台: Pip Install (I used latest requirements.txt)
Version | 版本: Latest
OS | 操作系统: Linux
Describe the bug | 简述

Because downloads from Docker Hub are rather slow, I use a Dockerfile and docker-compose.yml that I wrote myself:
```dockerfile
FROM archlinux:latest
RUN echo 'Server = https://mirrors.tuna.tsinghua.edu.cn/archlinux/$repo/os/$arch' > /etc/pacman.d/mirrorlist
RUN pacman -Syu --needed --noconfirm
RUN pacman -Syyu git python3 python-pip --needed --noconfirm
RUN pacman -Syyu cuda cudnn --needed --noconfirm
RUN pacman -Syyu numactl --needed --noconfirm
RUN pacman -Syyu texlive --needed --noconfirm
RUN pacman -Syyu texlive-langchinese --needed --noconfirm
RUN pacman -Syyu biber --needed --noconfirm
RUN useradd --create-home academic
USER academic
WORKDIR /home/academic/
RUN pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
WORKDIR /home/academic/gpt_academic
RUN pip install --user -r requirements.txt --break-system-packages
RUN pip install --user -r request_llm/requirements_chatglm.txt --break-system-packages
RUN pip uninstall -y pydantic --break-system-packages
RUN pip install pydantic==2.0.2 --break-system-packages
RUN pip install fastapi==0.93.0 --break-system-packages
RUN sed -i '223,238d' main.py
RUN echo " demo.queue(concurrency_count=CONCURRENT_COUNT)" >> main.py
RUN echo " CUSTOM_PATH, = get_conf('CUSTOM_PATH')" >> main.py
RUN echo " if CUSTOM_PATH != \"/\":" >> main.py
RUN echo " from toolbox import run_gradio_in_subpath" >> main.py
RUN echo " run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)" >> main.py
RUN echo " else:" >> main.py
RUN echo " demo.launch(server_name=\"0.0.0.0\", server_port=PORT, auth=AUTHENTICATION, favicon_path=\"docs/logo.png\"," >> main.py
RUN echo " blocked_paths=[\"config.py\",\"config_private.py\",\"docker-compose.yml\",\"Dockerfile\"])" >> main.py
RUN echo 'if __name__ == "__main__":' >> main.py
RUN echo " main()" >> main.py
```
Of these, the lines
```dockerfile
RUN pip install pydantic==2.0.2 --break-system-packages
RUN pip install fastapi==0.93.0 --break-system-packages
```
were added after reading #936, but they did not help; the app still does not respond. The later part of the Dockerfile (the `sed` plus `echo` lines) was added in order to deploy under a subpath. The docker-compose.yml follows.
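For clarity, here is what the `sed -i '223,238d' main.py` plus the chain of `echo ... >> main.py` lines in the Dockerfile actually do, expressed as a small Python sketch applied to a throwaway file. The helper name is hypothetical and the indentation of the appended block is reconstructed (it is collapsed in the Dockerfile above):

```python
# Sketch of the Dockerfile's sed+echo patch: delete 1-indexed lines
# 223-238 of main.py, then append a subpath-aware launch block.
# Hypothetical helper; indentation of APPENDED is reconstructed.

APPENDED = """\
    demo.queue(concurrency_count=CONCURRENT_COUNT)
    CUSTOM_PATH, = get_conf('CUSTOM_PATH')
    if CUSTOM_PATH != "/":
        from toolbox import run_gradio_in_subpath
        run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
    else:
        demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
                    blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
if __name__ == "__main__":
    main()
"""

def patch_main(path):
    """Replicate `sed -i '223,238d' path`, then append APPENDED."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    del lines[222:238]  # sed ranges are 1-indexed and inclusive
    lines.append(APPENDED)
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(lines)
```

This is equivalent to the image-build-time patch only under the assumption that the upstream main.py still has the stock launch block at lines 223–238; if upstream moves, the hard-coded `sed` range silently deletes the wrong lines.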
docker-compose.yml
```yaml
version: '3'
services:
  gpt_academic:
    build: .
    environment:
      # See `config.py` for all configuration options
      API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
      USE_PROXY: ' True '
      proxies: ' { "http": "socks5h://localhost:7891", "https": "socks5h://localhost:7891", } '
      LLM_MODEL: ' chatglm '
      #AVAIL_LLM_MODELS: ' ["chatglm", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"] '
      AVAIL_LLM_MODELS: ' ["chatglm"] '
      LOCAL_MODEL_DEVICE: ' cuda '
      DEFAULT_WORKER_NUM: ' 10 '
      WEB_PORT: ' 12345 '
      ADD_WAIFU: ' True '
      CUSTOM_PATH: '/mysubpath'
      # AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
    privileged: true
    volumes:
      - ./cache:/home/academic/.cache
    # Share the host's network stack
    network_mode: "host"
    command: >
      bash -c "python3 -u main.py"
    # command: >
    #   nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
The purpose of the subpath is request forwarding; the corresponding reverse-proxy block in nginx.conf is:
nginx.conf
```nginx
location /mysubpath/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://xxx.xxx.xxx.xxx:12345;
    proxy_http_version 1.1;
    client_max_body_size 10000M;
}
```
A while ago, the very same Dockerfile, docker-compose.yml, and reverse-proxy setup worked fine. Other large language models (e.g. fastllm, https://github.com/ztxz16/fastllm) currently run without problems, which largely rules out a hardware issue.
My guess is that this is related to Hugging Face being hard to reach recently, because repeatedly running `docker compose down` followed by `docker compose up -d` occasionally (with very low probability) gets a response. (P.S. the multi-gigabyte cache directory has already been persisted.) Judging from the tests so far, connecting through a proxy does not help.
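One way to check the guess above is to look inside the persisted cache for the model snapshots that huggingface_hub would otherwise re-download; the hub cache stores them under `huggingface/hub/` in directories named `models--<org>--<name>`. A minimal sketch, assuming the `./cache` path from the volume mount in the docker-compose.yml above (the helper name is mine):

```python
import os

def cached_hf_models(cache_root):
    """List model directories in a huggingface_hub cache.

    huggingface_hub stores downloads under <cache_root>/huggingface/hub/
    in directories named models--<org>--<name>, e.g. models--THUDM--chatglm-6b.
    Returns [] if the hub cache directory does not exist.
    """
    hub = os.path.join(cache_root, "huggingface", "hub")
    if not os.path.isdir(hub):
        return []
    return sorted(d for d in os.listdir(hub) if d.startswith("models--"))

# Example, using the host-side path mounted into the container:
# print(cached_hf_models("./cache"))
```

If the ChatGLM snapshot shows up here but the app still hangs, the problem is more likely in the startup path than in the download.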
Screen Shot | 有帮助的截图: consistent with the description in #936.
Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有): No response
Update: nvidia-smi shows the GPU is not being used, so it could also be a code problem. The issue described in the title still persists.
Update 2: the officially built Docker image works without problems.
After removing the subpath configuration from the Dockerfile, the problem went away. It seems the run_gradio_in_subpath function in toolbox.py may be broken?
run_gradio_in_subpath has not been maintained for a while and is used by very few people, so it may well be the culprit.