On Sat, Sep 26, 2020 at 2:32 AM 'Nick Desaulniers' via syzkaller-bugs
<syzkaller-b...@googlegroups.com> wrote:
> > > > On Wed, Sep 23, 2020 at 11:24:48AM +0200, Dmitry Vyukov wrote:
> > > > > 3. Run syzkaller locally with custom patches.
> > > >
> > > > Let's say I wanna build the kernel with clang-10 using your .config and
> > > > run it in a vm locally. What are the steps in order to reproduce the
> > > > same workload syzkaller runs in the guest on the GCE so that I can at
> > > > least try to get as close as possible to reproducing locally?
> > >
> > > It's a random fuzzing workload. You can get this workload by running
> > > syzkaller locally:
> > > https://github.com/google/syzkaller/blob/master/docs/linux/setup_ubuntu-host_qemu-vm_x86-64-kernel.md
>
> These are virtualized guests, right? Has anyone played with getting
> `rr` working to record traces of guests in QEMU?
>
> I had seen the bug that generated this on GitHub:
> https://julialang.org/blog/2020/09/rr-memory-magic/
>
> That way, even if syzkaller didn't have a reproducer binary, it would
> at least have a replayable trace.
These are virtualized guests, but they run on GCE, not in QEMU.

> Boris, one question I have. Doesn't the kernel mark pages backing
> executable code as read-only at some point? If that were the case,
> then I don't see how the instruction stream could be modified. I
> guess static key patching would have to undo that permission mapping
> before patching.
>
> You're right about the length being shorter than what I would have
> expected from static key patching. That could very well be a write
> through a dangling int pointer...
>
> > > The exact clang compiler syzbot used is available here:
> > > https://github.com/google/syzkaller/blob/master/docs/syzbot.md#crash-does-not-reproduce
> >
> > I've marked all other similar ones a dup of this one. Now you can see
> > all manifestations on the dashboard:
> > https://syzkaller.appspot.com/bug?extid=ce179bc99e64377c24bc
> >
> > Another possible debugging vector on this:
> > The location of crashes does not seem to be completely random and
> > evenly spread across kernel code. I think there are many more static
> > branches (mm, net), but we have 3 crashes in vdso and 9 in paravirt
> > code + these 6 crashes in perf_misc_flags, which looks a bit like an
> > outlier (?). What's special about paravirt/vdso?..
>
> --
> Thanks,
> ~Nick Desaulniers
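
Re the static key question: as far as I understand the x86 code, jump
label patching does not make the normal kernel text mapping writable.
The new instruction is written through a temporary writable alias
(text_poke() and friends), so the read-only permission on kernel text
stays in place the whole time. A minimal sketch of the usage pattern
whose branch sites get patched (my_feature_key and do_slow_feature are
made-up names, and the exact patching path differs between kernel
versions):

#include <linux/jump_label.h>

/* Hypothetical key; starts disabled, so the branch site is a NOP. */
static DEFINE_STATIC_KEY_FALSE(my_feature_key);

/* Hypothetical slow path. */
static void do_slow_feature(void)
{
}

void hot_path(void)
{
	/*
	 * No runtime conditional here: the compiler emits a NOP that
	 * jump label patching later rewrites into a jmp to the slow
	 * path once the key is enabled.
	 */
	if (static_branch_unlikely(&my_feature_key))
		do_slow_feature();
}

void enable_feature(void)
{
	/*
	 * This triggers the runtime patching of the NOP at the branch
	 * site; on x86 the write goes through a temporary alias
	 * mapping rather than the read-only text mapping.
	 */
	static_branch_enable(&my_feature_key);
}

If a stray write landed on one of these patched sites, it would corrupt
exactly this kind of NOP/jmp sequence, which might also fit the
clustering around static branch and paravirt patching sites.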