
Ignored error from execute request leads to inserting duplicated objects #80

Open
jcavar opened this issue Mar 31, 2017 · 11 comments

jcavar commented Mar 31, 2017

Hi,
In FEMManagedObjectCache there is
- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping
which executes a fetch request.

We ran into an issue when using this with NSSQLiteStoreType. The specific problem occurs when lookupValues is larger than 999, which is the maximum variable count for an SQLite database.
See: https://github.com/mackyle/sqlite/blob/cec875a327f8d1924c60ccb20cc96d17c1868d33/src/sqliteLimit.h#L136

In this case an error occurs and an empty array is returned:
Error Domain=NSSQLiteErrorDomain Code=1 "(null)" UserInfo={EncryptedStoreErrorMessage=too many SQL variables}
However, the logic then continues as if there were nothing in the database and inserts objects that may already be there.
I think there are two ways to solve this problem: either executing multiple requests with a limit of 999 values each, or fetching all objects and filtering them in memory. A sketch of the first option follows.
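For illustration, the chunked approach could look roughly like this (a sketch only, not FEM's actual code; the kSQLiteVariableLimit constant and the error handling are my assumptions):

static const NSUInteger kSQLiteVariableLimit = 999;

- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping {
    NSArray *lookupValues = [_lookupKeysMap[mapping.entityName] allObjects];
    NSMutableDictionary *output = [NSMutableDictionary new];

    // Split the lookup values into chunks that stay below the SQLite variable limit,
    // so each IN (...) clause binds at most kSQLiteVariableLimit variables.
    for (NSUInteger offset = 0; offset < lookupValues.count; offset += kSQLiteVariableLimit) {
        NSUInteger length = MIN(kSQLiteVariableLimit, lookupValues.count - offset);
        NSArray *chunk = [lookupValues subarrayWithRange:NSMakeRange(offset, length)];

        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"%K IN %@", mapping.primaryKey, chunk];

        NSError *fetchError = nil;
        NSArray *existingObjects = [_context executeFetchRequest:fetchRequest error:&fetchError];
        if (fetchError != nil) {
            // At minimum log the failure instead of silently treating it as "no objects".
            NSLog(@"FEM cache fetch failed: %@", fetchError);
            break;
        }
        for (NSManagedObject *object in existingObjects) {
            output[[object valueForKey:mapping.primaryKey]] = object;
        }
    }
    return output;
}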

Additionally, the error should at least be logged, and probably exposed to the caller as well.

jcavar commented Mar 31, 2017

The in-memory solution could look something like this:

- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping {
    NSSet *lookupValues = _lookupKeysMap[mapping.entityName];
    if (lookupValues.count == 0) return [NSMutableDictionary dictionary];

    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"%K IN %@", mapping.primaryKey, lookupValues];
    // The predicate is deliberately not set on the request, so SQLite never sees
    // the IN (...) variables; all objects of the entity are fetched instead.
    // A fetch limit must not be applied either: without the predicate, the first
    // N fetched objects are not guaranteed to be the matching ones.

    NSMutableDictionary *output = [NSMutableDictionary new];
    NSArray *existingObjects = [_context executeFetchRequest:fetchRequest error:NULL];
    // Filter in memory instead of in SQL.
    existingObjects = [existingObjects filteredArrayUsingPredicate:predicate];
    for (NSManagedObject *object in existingObjects) {
        output[[object valueForKey:mapping.primaryKey]] = object;
    }

    return output;
}

jcavar commented Mar 31, 2017

Or maybe don't even filter the objects and just add everything to the cache? This may cause memory issues, but as long as the objects are faults it should be fine? A minimal sketch is below.
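For reference, that variant could look like this (my sketch; whether memory actually stays low depends on the objects remaining faults, as questioned above, and reading the primary key for the cache dictionary fires each fault):

- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping {
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];
    // No predicate at all: every object of the entity goes into the cache,
    // so SQLite never sees an IN (...) clause and the variable limit is irrelevant.

    NSMutableDictionary *output = [NSMutableDictionary new];
    for (NSManagedObject *object in [_context executeFetchRequest:fetchRequest error:NULL]) {
        output[[object valueForKey:mapping.primaryKey]] = object;
    }
    return output;
}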

dimazen commented Mar 31, 2017

Wow, that's a weird bug. I'd stick to the solution where we split the request into several requests whenever there are more than 999 values left. Then we can continue processing them as is done now.
I'll handle it later today.

Thanks for reporting!

jcavar commented Mar 31, 2017

Nice, thank you :)

dimazen commented Mar 31, 2017

@jcavar Strange. I have implemented tests that should trigger the error you've described, but they pass.
Can you check out https://github.com/Yalantis/FastEasyMapping/tree/fix/GH-80 and run the FEMCacheSpec spec? There is code at line 160:

    describe(@"limits", ^{
        __block FEMManagedObjectCache *cache;
        __block NSMutableArray *representation;

        FEMMapping *mapping = [MappingProvider carMappingWithPrimaryKey];
        NSInteger limit = 10000;

By changing the limit variable you can tweak the number of lookup values. As you will see, no error is produced and all tests pass.

Going back to your issue: what environment are you using? watchOS? tvOS?

P.S. In order to run the tests, please run pod install to grab Kiwi and the other dependencies.

jcavar commented Mar 31, 2017

Yes, you are actually right. This must be due to the custom store we are using, then. Sorry for bothering you; I should have checked with a clean project first.

I will investigate more and see what is causing this issue for us.

Thanks for your help!

jcavar closed this as completed Mar 31, 2017
jcavar commented Mar 31, 2017

OK, it seems that the issue reproduces with a much bigger number: 500001. This was found here:
http://sqlite.1065341.n5.nabble.com/SQLITE-MAX-VARIABLE-NUMBER-td43457.html

It is a compile-time constant, and in our use case it is set to a much smaller number for some reason. I will investigate why that is, but it would still make sense to handle this situation in the library.
What do you think?

jcavar reopened this Mar 31, 2017
dimazen commented Apr 1, 2017

I agree that we need to handle that. The only thing I'm currently figuring out is how and where to grab this value. Maybe the persistent store discloses it via its options; I'm going to check later. A sketch of querying SQLite directly is below.
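For reference, SQLite itself reports the effective limit at runtime via sqlite3_limit, though only if you hold a sqlite3 handle; Core Data does not expose its own, so this sketch assumes you open the store file yourself (storeURL is assumed):

#import <sqlite3.h>

sqlite3 *db = NULL;
if (sqlite3_open([storeURL.path UTF8String], &db) == SQLITE_OK) {
    // Passing -1 reads the current limit without changing it.
    int limit = sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, -1);
    NSLog(@"SQLITE_LIMIT_VARIABLE_NUMBER = %d", limit);
    sqlite3_close(db);
}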

dimazen commented Apr 1, 2017

On the other hand, this limit appears to be quite a big value. I would suggest subdividing your data into chunks instead of processing it in a single pass. Why? When I analyzed this in the past, it turned out that saving in batches runs faster than one single big batch.

A sketch in Objective-C (allValues and the batch size are illustrative):

NSUInteger batchSize = 10000;
for (NSUInteger offset = 0; offset < allValues.count; offset += batchSize) {
    NSUInteger length = MIN(batchSize, allValues.count - offset);
    NSArray *batch = [allValues subarrayWithRange:NSMakeRange(offset, length)];
    // do the actual import of `batch` here
}

See the related question on Stack Exchange.

Apart from that, FEM should throw a detailed error explaining why the fetch failed, along with recommendations. What do you think?

jcavar commented Apr 1, 2017

Yeah, I think FEM should definitely expose the error if it happens and stop processing. One possible shape is sketched below.
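Purely as an illustration (a hypothetical signature, not FEM's actual API), the cache method could gain an NSError out-parameter and return nil on failure so the deserializer can abort:

- (nullable NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping
                                                           error:(NSError **)error;

// Caller side (hypothetical):
NSError *error = nil;
NSMutableDictionary *existing = [cache fetchExistingObjectsForMapping:mapping error:&error];
if (existing == nil) {
    // Stop the import here and propagate `error` to the user
    // instead of inserting duplicates.
    return;
}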

That is actually a very good point: we should divide our data into reasonable chunks and increase the variable count.
I don't know if that is something FEM could support, or how hard it would be to add a batch size, especially considering that the data can contain deep relationships, and when those are added the request could still fail.

I guess it is fine to leave that to the caller to figure out, as it has more info about the data and should know the actual SQL limit.

dimazen commented May 9, 2017

Added a temporary fix in 28b3ec3.

To be honest, this is a weird solution; however, it turns out that proper error handling requires reworking a lot of the existing API (Swift also brings additional challenges). Therefore I'm keeping this issue open.
