
Ability to pre-download models #55

Open
cragwolfe opened this issue Feb 14, 2023 · 2 comments

Comments

@cragwolfe (Contributor)

Unstructured-inference lazily downloads models, which is likely the better choice for most use cases; however, there are scenarios where the consumer would like to prefetch models.

Currently, this can be achieved for the default layout parser models (e.g., those typically used for PDFs) with:

from unstructured_inference.models.detectron2 import MODEL_TYPES

# Accessing these entries resolves the paths, which downloads the artifacts
MODEL_TYPES[None]['model_path']
MODEL_TYPES[None]['config_path']

but it would be nice if there were a simple function call (with a parameter or parameters to allow warming different models) the user could make to ensure any needed artifacts are downloaded.
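A minimal sketch of such a helper, purely illustrative: the name `prefetch_models` and the injected `download` callable are assumptions, not part of the unstructured-inference API.

```python
from typing import Callable, Iterable, Optional

def prefetch_models(
    model_names: Iterable[Optional[str]],
    download: Callable[[Optional[str]], None],
) -> list:
    """Eagerly fetch artifacts for each named model; return the names fetched.

    `None` stands for the default model, matching the MODEL_TYPES[None] usage above.
    """
    fetched = []
    for name in model_names:
        download(name)  # e.g. resolve model_path/config_path so files land in the cache
        fetched.append(name)
    return fetched
```

Injecting the downloader keeps the warm-up logic independent of any particular model backend.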

@qued (Contributor) commented Jul 14, 2023

get_model(model_name) will also do this... is that sufficient? We could also wrap it with a name tailored to what you're after...
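A sketch of prefetching via `get_model` as suggested above. It assumes unstructured-inference is installed, and the import path `unstructured_inference.models.base` is an assumption about the package layout.

```python
def warm_model(model_name=None):
    """Force-download artifacts for one model by simply instantiating it."""
    # Deferred import so the dependency is only required when warming.
    from unstructured_inference.models.base import get_model
    # Instantiating the model downloads any artifacts it still needs.
    return get_model(model_name)
```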

@AntoninLeroy

Not sure if this relates to this issue, but my corporate proxy blocks the HTTP call to Hugging Face, so I would love to use the same model I have downloaded locally from https://huggingface.co/unstructuredio/yolo_x_layout/tree/main.

Would that be possible?
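One possible workaround, sketched under stated assumptions: download the model repo out-of-band, then force huggingface_hub into offline mode so no proxied call is attempted. `HF_HUB_OFFLINE` is a real huggingface_hub environment variable; the helper name and the local path in the usage note are illustrative only, not part of any API.

```python
import os

def use_local_model(local_dir: str) -> str:
    """Point downstream code at a locally downloaded model, network-free."""
    os.environ["HF_HUB_OFFLINE"] = "1"  # huggingface_hub will use only cached/local files
    return local_dir  # pass this path wherever a model location is expected
```

For example, `use_local_model("/models/yolo_x_layout")` after copying the cloned repo to that path.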
