http error about example on InternLM-XComposer2-4KHD-7B? #260

Open
ztfmars opened this issue Apr 11, 2024 · 2 comments

ztfmars commented Apr 11, 2024

I downloaded the model weights via ModelScope. The demo for 'Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b' works fine, but 'Shanghai_AI_Laboratory/internlm-xcomposer2-4khd-7b' fails with an HTTP error.

The code is as follows:

import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-xcomposer2-4khd-7b')
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

###############
# First Round
###############
query = '<ImageHere>Illustrate the fine details present in the image'
image = 'examples/4khd_example.webp'
with torch.cuda.amp.autocast():
  response, his = model.chat(tokenizer, query=query, image=image, hd_num=55, history=[], do_sample=False, num_beams=3)
print(response)
# The image is a vibrant and colorful infographic that showcases 7 graphic design trends that will dominate in 2021. The infographic is divided into 7 sections, each representing a different trend. 
# Starting from the top, the first section focuses on "Muted Color Palettes", highlighting the use of muted colors in design.
# The second section delves into "Simple Data Visualizations", emphasizing the importance of easy-to-understand data visualizations. 
# The third section introduces "Geometric Shapes Everywhere", showcasing the use of geometric shapes in design. 
# The fourth section discusses "Flat Icons and Illustrations", explaining how flat icons and illustrations are being used in design. 
# The fifth section is dedicated to "Classic Serif Fonts", illustrating the resurgence of classic serif fonts in design.
# The sixth section explores "Social Media Slide Decks", illustrating how slide decks are being used on social media. 
# Finally, the seventh section focuses on "Text Heavy Videos", illustrating the trend of using text-heavy videos in design. 
# Each section is filled with relevant images and text, providing a comprehensive overview of the 7 graphic design trends that will dominate in 2021.

###############
# Second Round
###############
query1 = 'what is the detailed explanation of the third part.'
with torch.cuda.amp.autocast():
  response, _ = model.chat(tokenizer, query=query1, image=image, hd_num=55, history=his, do_sample=False, num_beams=3)
print(response)
# The third part of the infographic is about "Geometric Shapes Everywhere". It explains that last year, designers used a lot of
# flowing and abstract shapes in their designs. However, this year, they have been replaced with rigid, hard-edged geometric
# shapes and patterns. The hard edges of a geometric shape create a great contrast against muted colors.

The error is as follows:

2024-04-11 16:11:34,422 - modelscope - INFO - PyTorch version 2.0.1+cu117 Found.
2024-04-11 16:11:34,423 - modelscope - INFO - Loading ast index from /home/fusionai/.cache/modelscope/ast_indexer
2024-04-11 16:11:34,513 - modelscope - INFO - Loading done! Current index file version is 1.13.3, with md5 3382d34e525970486512b31f23987eb2 and a total number of 972 components indexed
You are using a model of type internlmxcomposer2 to instantiate a model of type internlm2. This is not supported for all configurations of models and can yield errors.
Set max length to 16384
Traceback (most recent call last):
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/hub/errors.py", line 91, in handle_http_response
    response.raise_for_status()
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.modelscope.cn/api/v1/models/openai/clip-vit-large-patch14-336/revisions

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/fusionai/ztf/InternLM-XComposer2-4KHD/code_download/InternLM-XComposer/test.py", line 8, in <module>
    model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval()
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/utils/hf_util.py", line 113, in from_pretrained
    module_obj = module_class.from_pretrained(model_dir, *model_args,
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/utils/hf_util.py", line 76, in from_pretrained
    return ori_from_pretrained(cls, model_dir, *model_args, **kwargs)
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2966, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/fusionai/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-4khd-7b/modeling_internlm_xcomposer2.py", line 68, in __init__
    self.vit = build_vision_tower()
  File "/home/fusionai/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-4khd-7b/build_mlp.py", line 10, in build_vision_tower
    return CLIPVisionTower(vision_tower)
  File "/home/fusionai/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-4khd-7b/build_mlp.py", line 55, in __init__
    self.load_model()
  File "/home/fusionai/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-4khd-7b/build_mlp.py", line 58, in load_model
    self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/utils/hf_util.py", line 70, in from_pretrained
    model_dir = snapshot_download(
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/hub/snapshot_download.py", line 98, in snapshot_download
    revision_detail = _api.get_valid_revision_detail(
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/hub/api.py", line 497, in get_valid_revision_detail
    all_branches_detail, all_tags_detail = self.get_model_branches_and_tags_details(
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/hub/api.py", line 577, in get_model_branches_and_tags_details
    handle_http_response(r, logger, cookies, model_id)
  File "/home/fusionai/anaconda3/envs/intern_clean/lib/python3.9/site-packages/modelscope/hub/errors.py", line 98, in handle_http_response
    raise HTTPError('Response details: %s, Request id: %s' %
requests.exceptions.HTTPError: Response details: {'Code': 10010205001, 'Message': '获取模型版本失败,信息:record not found', 'RequestId': '7328ea16-d1da-4963-a706-a324cc879166', 'Success': False}, Request id: 8b92826481e24ddc818006f28f90e7c4

(The Chinese message translates to: "Failed to get the model revision; info: record not found.")

Waiting for a reply. Any help appreciated, thanks!

deathxlent commented:

In build_mlp.py, change:

openai/clip-vit-large-patch14-336 -> AI-ModelScope/clip-vit-large-patch14-336

ztfmars (Author) commented Apr 16, 2024

In build_mlp.py, change:

openai/clip-vit-large-patch14-336 -> AI-ModelScope/clip-vit-large-patch14-336

Thanks, it works after fixing the download path in ~/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-4khd-7b/build_mlp.py.
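
For future readers, the change amounts to swapping the CLIP repo id that build_mlp.py passes to the vision tower. Below is a minimal sketch based only on the function and class names visible in the traceback above; the actual file in the model repo contains more code, so treat this as an illustration of where the id lives, not as the full file:

# Sketch of the relevant part of build_mlp.py after the fix.
# Names (build_vision_tower, CLIPVisionTower, load_model) follow the traceback
# above; the shipped file has additional arguments and code around these lines.
from transformers import CLIPVisionModel


def build_vision_tower():
    # Originally: vision_tower = 'openai/clip-vit-large-patch14-336'
    # That repo id exists on the Hugging Face Hub, but when modelscope patches
    # from_pretrained to resolve ids through the ModelScope hub, the openai/
    # namespace is not found (404). The same weights are mirrored under the
    # AI-ModelScope namespace, so point the name there instead.
    vision_tower = 'AI-ModelScope/clip-vit-large-patch14-336'
    return CLIPVisionTower(vision_tower)


class CLIPVisionTower:
    def __init__(self, vision_tower):
        self.vision_tower_name = vision_tower
        self.load_model()

    def load_model(self):
        # Downloads the CLIP vision encoder using the corrected repo id.
        self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)

Note that the traceback shows the code actually running from the transformers_modules cache (~/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-4khd-7b/build_mlp.py), so if the error persists after editing the ModelScope copy, that cached copy may need the same edit or a refresh.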
