Faster fuzzing #11
Conversation
Let's consider this a WIP: there are more changes needed to enable fuzzing once more; right now the binary always terminates with an error. After modifying the setup a bit more to make it pass the sanity checks, I now get for a single core:
Force-pushed from 9d2b69a to 6d07a6b
It's now ready to be merged: I've rebased and updated my branch with all the changes needed to bypass the sanity checks. Note that I've used …

There are further optimizations possible: the documentation describes using shared memory instead of files for a 10x speedup (AFL++ persistent mode). Using separate entry points that call specific functions directly might also mean no more workarounds are needed to bypass the sanity checks, and would make it easier to fuzz different parts of the code / different modes (such as the …).
Great stuff, thanks! I do have some comments, though.

Execution speed was made very slow by having a …

Also, it was an issue that that storage dir was not in …

And a question: I see the detection of the compiler doesn't consider …

And about the entry points: yes, there are many improvements to be made. There could be separate targets in the CMake file for that, that only throw data at …

I also broke the helper script with an extra …
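The per-entry-point targets suggested above could look something like this in CMake. A sketch only: the target names, source paths, and `projectlib` library are assumptions, not taken from this repository.

```cmake
# One small fuzz driver per entry point, each feeding data directly
# to a specific function instead of going through the full binary.
add_executable(fuzz_base64 fuzz/fuzz_base64.c)
target_link_libraries(fuzz_base64 PRIVATE projectlib)

add_executable(fuzz_decoder fuzz/fuzz_decoder.c)
target_link_libraries(fuzz_decoder PRIVATE projectlib)
```

Building with `CC=afl-clang-lto cmake …` would then instrument every such target, and each one can be fuzzed independently without sanity-check workarounds.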
Thanks for the feedback.
Good to know, I indeed thought it was mandatory. I'll update the PR and leave it out.
None in particular. I really wanted to use the speed of LTO, and while working on that I found the …
This morning I managed to get the persistent-mode-with-shared-memory-input working: it runs at 60,000 execs per second on a single core when using the base64encoder as target, and it blasted through its iterations like there's no tomorrow. I programmed in a segfault (crash when length is precisely 13 bytes) and it managed to find it / make a test case for it, so it's legit (see hacky patch). Before making a PR I wanted to fuzz something slightly more realistic, so I tried making it fuzz the same logic as what now happens with the …
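The planted bug described above can be sketched like this. The function name is hypothetical, not from the actual patch; the point is that exactly-13-byte inputs dereference NULL, so a correctly wired-up fuzzer should rediscover the crash and save a 13-byte test case.

```c
#include <stddef.h>

/* Deliberately planted bug for validating the fuzzing setup:       */
/* any input of exactly 13 bytes dereferences a NULL pointer and    */
/* segfaults; every other length is processed normally.             */
int process_input(const unsigned char *data, size_t len) {
    if (len == 13) {
        volatile int *p = NULL;
        *p = 1; /* deliberate SIGSEGV */
    }
    (void)data; /* real parsing logic would go here */
    return (int)len;
}
```

Finding this crash within minutes is a cheap end-to-end check that instrumentation, input delivery, and crash detection all work before fuzzing anything realistic.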
On my system using AFLplusplus, for a single core:

* `afl-gcc` / `afl-g++` reaches 100 exec/s
* `afl-gcc-fast` / `afl-g++-fast` reaches 330 exec/s
* `afl-clang-lto` / `afl-clang-lto++` reaches 400 exec/s
Force-pushed from 6d07a6b to feafd0e
I'll close this PR in favor of #18: that one is superior in speed and flexibility.
On my system using AFLplusplus, for a single core:

* `afl-gcc` / `afl-g++` reaches 180 exec/s
* `afl-gcc-fast` / `afl-g++-fast` reaches 1700 exec/s
* `afl-clang-lto` / `afl-clang-lto++` reaches 2100 exec/s