Conditional RAM writes scale nonlinearly in loops #5027
Comments
SSA for the expensive program with
One thing that jumps out is the check that we're not pushing past the capacity of the BoundedVec. |
We should be able to just remove this assertion (or throw the error in an unconstrained block if we want to maintain the error message). The attempt to write past the end of the array would trigger the ACVM to fail. |
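A minimal sketch of that suggestion (illustrative only; the field and method names are assumed, not the actual stdlib internals):

```rust
// Hedged sketch of BoundedVec::push with the constrained capacity check
// removed. The out-of-bounds array write itself makes the ACVM fail, so
// the explicit assertion is redundant for soundness.
pub fn push(&mut self, elem: T) {
    // Previously something like: assert(self.len < MaxLen, "push out of bounds");
    self.storage[self.len] = elem; // ACVM fails here if self.len == MaxLen
    self.len += 1;
}
```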
This ends up being just a small contribution (~4000 constraints). |
Did you really compare the number of constraints or just the number of acir opcodes? |
I was referring to 25 constraints/item, will update first message! This function:

```rust
global SIZE = 500;

fn main(fields: [Field; SIZE], to_keep: [bool; SIZE]) -> pub [Field; SIZE] {
    let mut bounded_vec: BoundedVec<Field, SIZE> = BoundedVec::new();
    for i in 0..SIZE {
        if to_keep[i] {
            bounded_vec.push(fields[i]);
        }
    }
    assert_eq(bounded_vec.len(), 0);
    bounded_vec.storage
}
```

is 1.6M gates. This one:

```rust
global SIZE = 500;

fn main(mut fields: [Field; SIZE], indices: [u64; SIZE]) -> pub [Field; SIZE] {
    for i in 0..SIZE {
        fields[indices[i]] = i as Field;
    }
    fields
}
```

is 12k gates. |
This looks like it'd be difficult to rule out at compile-time. We know that we only push at most once per loop and only loop SIZE times but the compiler just sees that each push is conditional on to_keep and can't keep an exact count. If we wanted to remove this condition we'd need much more meticulous tracking of maximum bounds, probably tracking each individual case of the value (e.g. len can be 0 or 1 here) then verifying all of those cases are < 2. |
The length check is not the main contributor though; there seems to be some weird scaling with loop sizes going on. Constraint counts per iteration become much more similar if we decrease the number of iterations to 20. In that case, the conditional version costs 320 constraints/write (a 10x reduction per iteration from scaling down the number of iterations). |
To avoid any BoundedVec-related overhead, I tried this code:

```rust
global SIZE = 500;

fn main(mut fields: [Field; SIZE], enables: [bool; SIZE], indices: [u64; SIZE]) -> pub [Field; SIZE] {
    for i in 0..SIZE {
        if enables[i] {
            fields[indices[i]] = i as Field;
        }
    }
    fields
}
```

And it also scales nonlinearly. With SIZE=500 it's 1.65M constraints (3300 constraints/iteration) and with SIZE=30 it's 10k constraints (333 constraints/iteration). |
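For context on why each conditional write is expensive at all (separate from the nonlinear part): a common way to lower a predicated store is an unconditional read-modify-write with an arithmetic merge. This is a hedged sketch of the general technique, not necessarily the compiler's actual ACIR generation:

```rust
// Hedged sketch: inside the loop body,
//   if enables[i] { fields[indices[i]] = i as Field; }
// can be lowered roughly as an unconditional read-modify-write:
let old = fields[indices[i]];   // unconditional read
let cond = enables[i] as Field; // predicate as a 0/1 field element
fields[indices[i]] = cond * (i as Field) + (1 - cond) * old; // unconditional write
```

This merge alone would explain a constant-factor overhead per write, but not the per-iteration cost growing with SIZE.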
Updating the original issue |
Sounds related to #4629 then. Testing your most recent example in the comment I get:
So it is increasing by 2 per each increase in SIZE. |
Is there a way to use the as_witness intrinsic to fix this nonlinear scaling, or is it not fixable that way? I haven't looked much into the actual constraints being generated for predicated RAM writes yet. |
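For reference, a hedged sketch of what such an as_witness experiment could look like (the call placement is purely illustrative; whether this helps for predicated RAM writes is exactly the open question above):

```rust
global SIZE = 500;

fn main(mut fields: [Field; SIZE], enables: [bool; SIZE], indices: [u64; SIZE]) -> pub [Field; SIZE] {
    for i in 0..SIZE {
        if enables[i] {
            fields[indices[i]] = i as Field;
        }
        // Force the freshly written value into its own witness, in case
        // intermediate expressions are growing across iterations.
        std::as_witness(fields[indices[i]]);
    }
    fields
}
```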
Aim
Using conditional ram writes:
Expected Behavior
I expected conditional RAM writes to scale in the same way with the number of iterations as the unconditional ones
Bug
The first example scales nonlinearly: With size=30 it's 10k constraints (333 constraints/iteration) and with size=500 it's 1.65M constraints (3300 constraints/iteration)
The second example (unconditional) scales sublinearly (possibly because of fixed costs): with size=100 it's 4.6k constraints and with size=500 it's 12k constraints.
Also, it seems like even without the nonlinear scaling, conditional RAM writes are quite expensive (10x the cost of an unconditional one in the best case).
To Reproduce
Project Impact
None
Impact Context
No response
Workaround
None
Workaround Description
No response
Additional Context
No response
Installation Method
None
Nargo Version
No response
NoirJS Version
No response
Would you like to submit a PR for this Issue?
None
Support Needs
No response