Memory leak #197
Hi there, apologies for the late response. We certainly still have work to do on improving memory. We haven't seen a leak yet, but it's possible that batching work like this makes it come on faster. Happy to collaborate on this. First off:
Can you share more details about your workload that we can try to replicate?
I'm going to close this as we've made a lot of memory improvements over the last few months. Please feel free to create a new issue if needed!
We still have memory issues floating around, so going to reopen this. cc @lambda-science |
Same here for me. Using
We're exploring using the unstructured API at work.

We're running `quay.io/unstructured-io/unstructured-api:c9b74d4` on a "Pro" (private service) Render instance (i.e. 4 GB RAM).

We're using the service to process PDFs with the following parameters: `strategy=hi_res`, `pdf_infer_table_structure=true`, and `skip_infer_table_types=[]`. We're also using parallel mode via `UNSTRUCTURED_PARALLEL_MODE_ENABLED=true` (using the defaults for the other environment variables).

We've seen the service fall over several times due to OOM, and looking at metrics it looks as if there are resources not being freed after processing runs. Each spike represents a processing run, with about 10 minutes between each.
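For anyone trying to reproduce this, a minimal client sketch of the request described above (stdlib only, so it can be looped from a load script). The endpoint path `/general/v0/general` follows the unstructured-api README; `API_URL`, the local port, and the helper names are assumptions, and the form fields mirror the parameters in this report:

```python
import io
import mimetypes
import urllib.request
import uuid

# Assumption: the container is exposed locally on port 8000.
API_URL = "http://localhost:8000/general/v0/general"

# Form fields matching the parameters described in the report above.
FORM_FIELDS = {
    "strategy": "hi_res",
    "pdf_infer_table_structure": "true",
    "skip_infer_table_types": "[]",
}

def build_multipart(filename: str, file_bytes: bytes, fields: dict) -> tuple:
    """Assemble a multipart/form-data body by hand (stdlib only)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    # One part per plain form field.
    for name, value in fields.items():
        part = (
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f'{value}\r\n'
        )
        buf.write(part.encode())
    # File part, with a content type guessed from the filename.
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    head = (
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="files"; filename="{filename}"\r\n'
        f'Content-Type: {ctype}\r\n\r\n'
    )
    buf.write(head.encode())
    buf.write(file_bytes)
    buf.write(f'\r\n--{boundary}--\r\n'.encode())
    return boundary, buf.getvalue()

def partition_pdf(path: str) -> bytes:
    """POST one PDF to the service and return the raw JSON response body."""
    with open(path, "rb") as f:
        boundary, body = build_multipart(path, f.read(), FORM_FIELDS)
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Calling `partition_pdf("some.pdf")` repeatedly while watching container RSS should show whether memory returns to baseline between runs.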