On 2018-10-29 Bhargava Shastry wrote:
> Thanks for providing two versions for me to test. Here are the
> results:
>
> - version 1 decompresses the whole of fuzzed (compressed) data
> - version 2 decompresses in chunks of size (input=13 bytes)
>
> ### Executions per second
>
> I ran both versions a total of 96 times (I have 16 cores :-))
>
> - version 1 averaged 1757.20 executions per second
> - version 2 averaged 429.10 executions per second
>
> So, clearly version 1 is faster
Yes, and the difference is bigger than I hoped.

> Regarding coverage
>
> - version 1 covered 950.26 CFG edges on average
> - version 2 covered 941.11 CFG edges on average

I assume you had the latest xz.git that supports
FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION.

Did you run the same number of fuzzing rounds on both (so the second
version took over four times longer), or did you run them for the same
amount of time (so the second version ran only 1/4 of the rounds)?

If both versions saw the same number of rounds, I would expect the
second version to have the same or better coverage. But if the
comparison was based on time, then it's no surprise that the first
version shows better apparent coverage, even though it cannot hit
certain code paths that are reachable with the second version. It
might also depend on which input file is used as a starting point for
the fuzzer.

> Overall, version 1 is superior imho.

I don't know yet. Increasing the input and output chunk sizes is
probably needed to make the second version faster. You could try some
odd values between 100 and 250, or maybe even up to 500. On the other
hand, it's possible that I'm putting too much weight on the importance
of fuzzing the stop & continue code paths.

Thanks again!

-- 
Lasse Collin  |  IRC: Larhzu @ IRCnet & Freenode
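(For anyone following along: the stop & continue pattern under discussion can be sketched with Python's stdlib lzma module rather than liblzma's C API. This is only an illustration; the 13-byte input chunk matches the test above, while the output cap and test data are made up here.)

```python
import lzma

# Round-trip test data: compress into the .xz container format.
original = b"The quick brown fox jumps over the lazy dog. " * 200
compressed = lzma.compress(original)

# Decompress in tiny input chunks and also cap the output produced per
# call, so the decoder must stop and continue in both directions.
IN_CHUNK = 13   # matches the fuzzer's input chunk size above
OUT_CHUNK = 50  # illustrative output cap, not from the thread

decomp = lzma.LZMADecompressor()
output = b""
for i in range(0, len(compressed), IN_CHUNK):
    # Feed one small input chunk; max_length limits output per call.
    output += decomp.decompress(compressed[i:i + IN_CHUNK],
                                max_length=OUT_CHUNK)
    # Drain output the decoder is still holding back before feeding
    # the next input chunk.
    while not decomp.needs_input and not decomp.eof:
        output += decomp.decompress(b"", max_length=OUT_CHUNK)

assert output == original
assert decomp.eof
```

Larger chunk sizes simply mean fewer iterations of this loop, which is why they trade away some stop & continue coverage for speed.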