
[Question]: How do machines without GPU use embedding models to parse documents? #727

Closed
Nuclear6 opened this issue May 11, 2024 · 2 comments
Labels
question Further information is requested

Comments

@Nuclear6

Nuclear6 commented May 11, 2024

Describe your problem

I built the ragflow project on a machine with only a CPU. After running it, I found that an uploaded document was parsed successfully using Tongyi Qianwen's embedding model. Is the embedding computed locally or remotely?

[screenshot of the parsing log]

Task has been received.
Start to parse.
Extract Q&A: 642. 38 failure, line: 22,34,53...
Finished slicing files(642). Start to embedding the content.
Finished embedding(366.5834937430918)! Start to build index!
Done!

PS: I haven’t applied for Tongyi Qianwen’s API key either.

@Nuclear6 Nuclear6 added the question Further information is requested label May 11, 2024
@KevinHuSh
Collaborator

It runs locally. We include the model in the Docker image.
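(For context: deep-learning frameworks such as PyTorch simply fall back to CPU when no GPU is present, which is why a bundled local model works on a CPU-only machine. A minimal sketch of the usual device-selection pattern, assuming PyTorch is installed; this is not ragflow's actual code:)

```python
import torch

# Pick the GPU if one is available; otherwise fall back to CPU.
# Models and tensors moved to this device run the same code either way.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(2, 3, device=device)  # allocated on whichever device was chosen
print(device, x.shape)
```

On a CPU-only machine this prints `cpu torch.Size([2, 3])`; no model changes are needed, only the device placement.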

@Nuclear6
Author

You mean the embedding model can run on a CPU? In my previous experience, a model needed some changes and adaptations to run on CPU.
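(Generally no changes are needed: embedding inference is ordinary dense matrix math, which CPUs execute directly, just more slowly than a GPU. A toy illustration of the idea, an embedding lookup plus mean pooling in NumPy; this is not ragflow's actual model, just a sketch of why CPU execution works out of the box:)

```python
import numpy as np

# Toy illustration: a sentence embedding reduced to its essentials --
# an embedding-table lookup followed by mean pooling. This is plain
# dense arithmetic, so it runs on any CPU without model modifications.
rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64
embedding_table = rng.normal(size=(vocab_size, dim)).astype(np.float32)

def embed(token_ids):
    """Mean-pool per-token vectors into one fixed-size sentence vector."""
    vectors = embedding_table[token_ids]  # (n_tokens, dim) gather
    return vectors.mean(axis=0)           # (dim,) pooled embedding

sentence = embed([12, 7, 404])
print(sentence.shape)  # (64,)
```

Real transformer embedding models add attention layers on top, but those are also just matrix multiplications; CPU inference is slower, not impossible.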
