
Cache.add() encountered a network error #389

Open

thekevinscott opened this issue May 6, 2024 · 5 comments

@thekevinscott
I'm seeing a Cache.add() error when trying to load Llama-3-8B-Instruct-q4f16_1-MLC:

Error: Cannot fetch https://huggingface.co/mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC/resolve/main/params_shard_0.bin err= NetworkError: Failed to execute 'add' on 'Cache': Cache.add() encountered a network error
[screenshot of the error]

This is Chrome 124.0.6367.119, running the demo at https://webllm.mlc.ai/
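For what it's worth, Cache.add(url) behaves roughly like fetch(url) followed by cache.put(url, response), except that it collapses every failure (network error or non-2xx response) into the same bare NetworkError. A diagnostic sketch you can paste into the console to surface the underlying status; the cache name here is just a scratch name, not the one web-llm uses:

```js
// Diagnostic sketch, not web-llm's own code path: splitting Cache.add()
// into fetch + cache.put exposes the underlying HTTP status.
const url =
  "https://huggingface.co/mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC/resolve/main/params_shard_0.bin";
const cache = await caches.open("debug-cache"); // scratch name, not web-llm's
const response = await fetch(url);
console.log(response.status, response.statusText);
if (!response.ok) {
  throw new Error(`fetch failed: ${response.status} ${response.statusText}`);
}
await cache.put(url, response);
console.log("stored OK");
```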

@thekevinscott
Author

thekevinscott commented May 6, 2024

It looks like every model I haven't previously cached reports the same error:

[screenshot of the error across multiple models]

@thekevinscott
Author

I don't think this is related to the regular HTTP browser cache, but just for good measure I cleared my cache anyway; same issue.
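Note that the Cache API storage web-llm writes to is separate from the HTTP cache, so "Clear browsing data" may leave it intact depending on which boxes are checked. A minimal console sketch to wipe every Cache API cache for the current origin:

```js
// Deletes all Cache API caches for the current origin; this is distinct
// from clearing the regular HTTP browser cache.
const names = await caches.keys();
await Promise.all(names.map((name) => caches.delete(name)));
console.log("deleted:", names);
```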

@ucalyptus2

@thekevinscott I'm facing the same issue on Chrome v124.

I cleared my cache, but there was no improvement.

[screenshot of the same error]

@ucalyptus2

cc: @CharlieFRuan

@CharlieFRuan
Contributor

Is this issue encountered for all models? To be honest, the Cache.add() error is a bit vague; I've seen it when the URL is wrong, but that does not seem to be the case here.

To triage a bit, could you open DevTools and check, under Application -> Cache storage, whether webllm/config or webllm/wasm are populated? I want to see whether this is specific to downloading weights, or whether it also affects the config JSON file and the wasm.
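As an alternative to clicking through the panel, this console sketch lists every Cache API cache on the origin and the URLs it holds:

```js
// Prints each Cache API cache name, its entry count, and the cached URLs,
// mirroring what Application -> Cache storage shows in DevTools.
for (const name of await caches.keys()) {
  const cache = await caches.open(name);
  const requests = await cache.keys();
  console.log(name, `${requests.length} entries`, requests.map((r) => r.url));
}
```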

Besides, could you try a smaller model, say TinyLlama? I'd like to know whether this is due to a single weight shard being too large.
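Something along these lines should work for a quick check; a minimal sketch, assuming the CreateMLCEngine entry point from a recent @mlc-ai/web-llm release, and noting that the exact TinyLlama model_id may differ between versions:

```js
// Minimal sketch: list available prebuilt model ids, then load a small model.
// The TinyLlama id below is an assumption; check prebuiltAppConfig.model_list
// for the ids your installed web-llm version actually ships.
import { CreateMLCEngine, prebuiltAppConfig } from "@mlc-ai/web-llm";

console.log(prebuiltAppConfig.model_list.map((m) => m.model_id));

const engine = await CreateMLCEngine("TinyLlama-1.1B-Chat-v0.4-q4f16_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});
```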
