Bulk 2.0 is not working Help me #698
Hi - you should be using `df.to_dict(orient='records')`. There is a function that handles converting a list of dicts to CSV format, and the `to_json` call is likely the problem: the converter expects a list of dicts, not a JSON string.
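To illustrate why the input type matters: the bulk2 layer serializes a list of dicts to CSV before upload. The sketch below (stdlib only; the library's actual implementation may differ) shows that a list of dicts, which is what `df.to_dict(orient='records')` produces, converts cleanly, while `df.to_json(orient='records')` returns a single JSON string that such a writer cannot consume.

```python
import csv
import io

def records_to_csv(records):
    """Serialize a list of dicts to CSV text.

    A sketch of the kind of conversion the bulk2 layer performs
    internally; the real library code may differ.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# A list of dicts (the shape df.to_dict(orient='records') produces) works:
records = [{"Name": "Acme", "Amount": 100}, {"Name": "Globex", "Amount": 200}]
print(records_to_csv(records))

# By contrast, df.to_json(orient='records') returns one str of JSON,
# not a list of dicts, so a DictWriter-style converter cannot handle it.
```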
Hi, thank you for your message - it's working!! But I have a question: some records weren't inserted. Is there any way to find out why?
If you navigate to the bulk data load page, do you see the records in any of the requests, or any failed records (e.g. a record was sent successfully in a batch but Salesforce rejected it for some reason)? Or are you saying they were not part of any of the batches sent up to Salesforce?
Possibly both. My DataFrame contains 196,918 records, but after the run the count only reached 196,479, with no indication of why the remaining 439 records were not inserted. I re-ran the flow for just those 439 records and they were inserted normally. I need to understand why they weren't processed the first time so I can avoid re-running them in the future.
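One way to narrow down where records are being lost is to tally the per-job stats returned by the bulk2 insert call. The sketch below assumes each result entry carries `numberRecordsProcessed` and `numberRecordsFailed` counts, as in recent simple-salesforce releases; check the keys your version actually returns.

```python
def summarize_results(results, expected_total):
    """Tally per-batch stats returned by sf.bulk2.<Object>.insert().

    Assumes result dicts with numberRecordsProcessed / numberRecordsFailed
    keys (recent simple-salesforce versions); adjust if yours differ.
    'missing' records never made it into any batch, while 'failed' ones
    were sent but rejected by Salesforce.
    """
    processed = sum(int(r.get("numberRecordsProcessed", 0)) for r in results)
    failed = sum(int(r.get("numberRecordsFailed", 0)) for r in results)
    return {
        "processed": processed,
        "failed": failed,
        "missing": expected_total - processed,
    }

# Made-up batch results for a 500-record upload where 7 rows went missing:
results = [
    {"numberRecordsProcessed": 400, "numberRecordsFailed": 0},
    {"numberRecordsProcessed": 93, "numberRecordsFailed": 0},
]
print(summarize_results(results, expected_total=500))
```

If your simple-salesforce version exposes it, `sf.bulk2.<Object>.get_failed_records(job_id)` returns the rejected rows together with Salesforce's error messages, which covers the "failed" case; a nonzero "missing" count points at the client-side batching instead.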
I'm getting the same issue: when uploading 500 records with CSV, Salesforce only receives 493, and when I try 400 records it only inserts 394. I checked the results, and it's always the last few records that get omitted.
Hi everyone,
I'm using simple-salesforce from Python and have a problem with Bulk 2.0. I'm using bulk because of the high data volume: I need to insert 196k records, but it's not working.
The code snippet and the error follow.
```python
# df is a DataFrame with 196k records.

# Convert the DataFrame to records (a list of dicts based on the DataFrame).
records = json.loads(df.to_json(orient='records'))

# Insert using Bulk 2.0.
records_inserted = sf.bulk2.My_Object.insert(
    records=records, batch_size=10000, concurrency=10
)
```
Error
My code runs, but it does not insert all records and returns an error.
Conclusion
My code runs, but not all records are inserted. :(