Re: [Cocci] Coccinelle rule for CVE-2019-18683
> Analysing each matching case would take a lot of time.

How much effort would you like to invest in this situation? Will further development possibilities be picked up to considerably reduce the number of false positives reported by the mentioned source code analysis approach?

Regards,
Markus

___
Cocci mailing list
Cocci@systeme.lip6.fr
https://systeme.lip6.fr/mailman/listinfo/cocci
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On 09.04.2020 22:41, Alexander Popov wrote:
> On 09.04.2020 01:26, Jann Horn wrote:
>> On Thu, Apr 9, 2020 at 12:01 AM Alexander Popov wrote:
>>> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
>>> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
>>> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>>>
>>> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
>>> locking that causes race conditions on streaming stop).
>>>
>>> These three functions are called during streaming stopping with vivid_dev.mutex
>>> locked. And they all make the same mistake while stopping their kthreads, which
>>> need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():
>>>
>>>     /* shutdown control thread */
>>>     vivid_grab_controls(dev, false);
>>>     mutex_unlock(&dev->mutex);
>>>     kthread_stop(dev->kthread_vid_cap);
>>>     dev->kthread_vid_cap = NULL;
>>>     mutex_lock(&dev->mutex);
>>>
>>> But when this mutex is unlocked, another vb2_fop_read() can lock it instead of
>>> the kthread and manipulate the buffer queue. That causes a use-after-free.
>>>
>>> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
>>> within one function.
>> [...]
>>> mutex_unlock@unlock_p(E)
>>> ...
>>> kthread_stop@stop_p(...)
>>> ...
>>> mutex_lock@lock_p(E)
>>
>> Is the kthread_stop() really special here? It seems to me like it's
>> pretty much just a normal instance of the "temporarily dropping a
>> lock" pattern - which does tend to go wrong quite often, but can also
>> be correct.
>
> Right, searching without kthread_stop() gives more cases.
>
>> I think it would be interesting though to have a list of places that
>> drop and then re-acquire a mutex/spinlock/... that was not originally
>> acquired in the same block of code (but was instead originally
>> acquired in an outer block, or by a parent function, or something like
>> that). So things like this:

The following rule reported 146 matching cases, which might be interesting.

```
virtual report
virtual context

@race exists@
expression E;
position unlock_p;
position lock_p;
@@

... when != mutex_lock(E)
* mutex_unlock@unlock_p(E)
... when != schedule()
    when != schedule_timeout(...)
    when != cond_resched()
    when != wait_event(...)
    when != wait_event_timeout(...)
    when != wait_event_interruptible_timeout(...)
    when != wait_event_interruptible(...)
    when != msleep()
    when != msleep_interruptible(...)
* mutex_lock@lock_p(E)

@script:python@
unlock_p << race.unlock_p;
lock_p << race.lock_p;
E << race.E;
@@

coccilib.report.print_report(unlock_p[0], 'see mutex_unlock(' + E + ') here')
coccilib.report.print_report(lock_p[0], 'see mutex_lock(' + E + ') here\n')
```

Analysing each matching case would take a lot of time. However, I'm focused on searching for kernel security issues, so I will filter out code that:
 - is not enabled in popular kernel configurations,
 - doesn't create additional attack surface.

Then I'll take the time to analyse the rest of the reported cases. I'll inform you if I find any bug.

Best regards,
Alexander
Re: [Cocci] Coccinelle rule for CVE-2019-18683
>> * The source code search pattern can be too generic.
>>   What do you think about considering additional constraints
>>   for safer data flow analysis?
>
> Could you please elaborate on that?

Julia Lawall mentioned the design possibility “put when != mutex_lock(E) after the ...”.
https://systeme.lip6.fr/pipermail/cocci/2020-April/007107.html
https://lore.kernel.org/cocci/alpine.DEB.2.21.2004091248190.2403@hadrien/

> I used the 'exists' keyword to find at least one branch that has the
> mutex_unlock+kthread_stop+mutex_lock chain.

Are you aware of the development challenges for data flow analysis (or even escape analysis, in computer-science terms)? How much experience can be reused from other known approaches?

Regards,
Markus
Re: [Cocci] Coccinelle rule for CVE-2019-18683
> >> kthread_stop@stop_p(...)
> >> ...
> >> mutex_lock@lock_p(E)
> >>
> >> @script:python@
> >> stop_p << race.stop_p;
> >> unlock_p << race.unlock_p;
> >> lock_p << race.lock_p;
> >> E << race.E;
> >> @@
> >>
> >> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
> >> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
> >> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')
> ...
> > Based on Jann's suggestion, it seems like it could be interesting to find
> > these locking pauses, and then collect the functions that are used under locks
> > and in lock pauses. If a function is mostly used with locks held, then
> > using it in a lock pause could be a sign of a bug. I will see if it turns
> > up anything interesting.
>
> Do you mean collecting the behaviour that happens between unlocking and locking
> and then analysing it somehow?

Yes. I have tried doing what I described, but I'm not sure that the results are
very reliable at the moment.

julia
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On 09.04.2020 13:53, Julia Lawall wrote:
> On Thu, 9 Apr 2020, Alexander Popov wrote:
>> virtual report
>>
>> @race exists@
>> expression E;
>> position stop_p;
>> position unlock_p;
>> position lock_p;
>> @@
>>
>> mutex_unlock@unlock_p(E)
>> ...
>
> It would be good to put when != mutex_lock(E) after the ... above. Your
> rule doesn't actually prevent the lock from being retaken.

Thanks Julia! I used this trick in the second version of the rule that I've just sent.

>> kthread_stop@stop_p(...)
>> ...
>> mutex_lock@lock_p(E)
>>
>> @script:python@
>> stop_p << race.stop_p;
>> unlock_p << race.unlock_p;
>> lock_p << race.lock_p;
>> E << race.E;
>> @@
>>
>> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
>> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
>> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')
...
> Based on Jann's suggestion, it seems like it could be interesting to find
> these locking pauses, and then collect the functions that are used under locks
> and in lock pauses. If a function is mostly used with locks held, then
> using it in a lock pause could be a sign of a bug. I will see if it turns
> up anything interesting.

Do you mean collecting the behaviour that happens between unlocking and locking
and then analysing it somehow?

Best regards,
Alexander
Re: [Cocci] Coccinelle rule for CVE-2019-18683
Jann, thanks for your reply!

On 09.04.2020 01:26, Jann Horn wrote:
> On Thu, Apr 9, 2020 at 12:01 AM Alexander Popov wrote:
>> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
>> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
>> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>>
>> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
>> locking that causes race conditions on streaming stop).
>>
>> These three functions are called during streaming stopping with vivid_dev.mutex
>> locked. And they all make the same mistake while stopping their kthreads, which
>> need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():
>>
>>     /* shutdown control thread */
>>     vivid_grab_controls(dev, false);
>>     mutex_unlock(&dev->mutex);
>>     kthread_stop(dev->kthread_vid_cap);
>>     dev->kthread_vid_cap = NULL;
>>     mutex_lock(&dev->mutex);
>>
>> But when this mutex is unlocked, another vb2_fop_read() can lock it instead of
>> the kthread and manipulate the buffer queue. That causes a use-after-free.
>>
>> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
>> within one function.
> [...]
>> mutex_unlock@unlock_p(E)
>> ...
>> kthread_stop@stop_p(...)
>> ...
>> mutex_lock@lock_p(E)
>
> Is the kthread_stop() really special here? It seems to me like it's
> pretty much just a normal instance of the "temporarily dropping a
> lock" pattern - which does tend to go wrong quite often, but can also
> be correct.

Right, searching without kthread_stop() gives more cases.

> I think it would be interesting though to have a list of places that
> drop and then re-acquire a mutex/spinlock/... that was not originally
> acquired in the same block of code (but was instead originally
> acquired in an outer block, or by a parent function, or something like
> that). So things like this:

It's a very good idea. I tried it and got first results (described below).

> void X(...) {
>     mutex_lock(A);
>     for (...) {
>         ...
>         mutex_unlock(A);
>         ...
>         mutex_lock(A);
>         ...
>     }
>     mutex_unlock(A);
> }

I'm not an expert in SmPL yet, so I don't know how to describe this case.

> or like this:
>
> void X(...) {
>     ... [no mutex operations on A]
>     mutex_unlock(A);
>     ...
>     mutex_lock(A);
>     ...
> }

Yes, I adapted the rule for that easier case:

```
virtual report
virtual context

@race exists@
expression E;
position unlock_p;
position lock_p;
@@

... when != mutex_lock(E)
* mutex_unlock@unlock_p(E)
...
* mutex_lock@lock_p(E)

@script:python@
unlock_p << race.unlock_p;
lock_p << race.lock_p;
E << race.E;
@@

coccilib.report.print_report(unlock_p[0], 'see mutex_unlock(' + E + ') here')
coccilib.report.print_report(lock_p[0], 'see mutex_lock(' + E + ') here\n')
```

The command to run it:

    COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=context

It shows the code context around each match in the form of a diff.

This rule found 195 matches. Not that much!

> But of course, there are places where this kind of behavior is
> correct; so such a script wouldn't just return report code, just code
> that could use a bit more scrutiny than normal.

I've spent some time looking through the results. Currently I see 3 types of cases.

1. Cases that look legit: a mutex is unlocked for some waiting or sleeping. Example:

./fs/io_uring.c:7908:2-14: see mutex_unlock(& ctx -> uring_lock) here
./fs/io_uring.c:7910:2-12: see mutex_lock(& ctx -> uring_lock) here

diff -u -p ./fs/io_uring.c /tmp/nothing/fs/io_uring.c
--- ./fs/io_uring.c
+++ /tmp/nothing/fs/io_uring.c
@@ -7905,9 +7905,7 @@ static int __io_uring_register(struct io
 	 * to drop the mutex here, since no new references will come in
 	 * after we've killed the percpu ref.
 	 */
-	mutex_unlock(&ctx->uring_lock);
 	ret = wait_for_completion_interruptible(&ctx->completions[0]);
-	mutex_lock(&ctx->uring_lock);
 	if (ret) {
 		percpu_ref_resurrect(&ctx->refs);
 		ret = -EINTR;

Another example that looks legit:

./mm/ksm.c:2709:2-14: see mutex_unlock(& ksm_thread_mutex) here
./mm/ksm.c:2712:2-12: see mutex_lock(& ksm_thread_mutex) here

diff -u -p ./mm/ksm.c /tmp/nothing/mm/ksm.c
--- ./mm/ksm.c
+++ /tmp/nothing/mm/ksm.c
@@ -2706,10 +2706,8 @@ void ksm_migrate_page(struct page *newpa
 static void wait_while_offlining(void)
 {
 	while (ksm_run & KSM_RUN_OFFLINE) {
-		mutex_unlock(&ksm_thread_mutex);
 		wait_on_bit(&ksm_run, ilog2(KSM_RUN_OFFLINE),
 			    TASK_UNINTERRUPTIBLE);
-		mutex_lock(&ksm_thread_mutex);
 	}
 }

2. Weird cases that look like just avoiding a deadlock. Example. This mutex is unlocked for a while by an interrupt handler:

./sound/pci/pcxhr/pcxhr_core.c:1210:3-15: see mutex_unlock(& mgr -> lock) here
Re: [Cocci] Coccinelle rule for CVE-2019-18683
Markus, thanks for your remarks!

On 09.04.2020 11:41, Markus Elfring wrote:
> * The source code search pattern can be too generic.
>   What do you think about considering additional constraints
>   for safer data flow analysis?

Could you please elaborate on that?

I used the 'exists' keyword to find at least one branch that has the
mutex_unlock+kthread_stop+mutex_lock chain.

> * Other operation modes might become helpful.

Thanks! I added the 'context' mode, it's very good for this purpose.

Best regards,
Alexander
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On Thu, 9 Apr 2020, Alexander Popov wrote:

> Hello!
>
> Some time ago I fixed CVE-2019-18683 in the V4L2 subsystem of the Linux kernel.
>
> I created a Coccinelle rule that detects that bug pattern. Let me show it.

Thanks for the discussion :)

> Bug pattern
> ===
>
> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>
> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
> locking that causes race conditions on streaming stop).
>
> These three functions are called during streaming stopping with vivid_dev.mutex
> locked. And they all make the same mistake while stopping their kthreads, which
> need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():
>
>     /* shutdown control thread */
>     vivid_grab_controls(dev, false);
>     mutex_unlock(&dev->mutex);
>     kthread_stop(dev->kthread_vid_cap);
>     dev->kthread_vid_cap = NULL;
>     mutex_lock(&dev->mutex);
>
> But when this mutex is unlocked, another vb2_fop_read() can lock it instead of
> the kthread and manipulate the buffer queue. That causes a use-after-free.
>
> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
> within one function.
>
> Coccinelle rule
> ===
>
> virtual report
>
> @race exists@
> expression E;
> position stop_p;
> position unlock_p;
> position lock_p;
> @@
>
> mutex_unlock@unlock_p(E)
> ...

It would be good to put when != mutex_lock(E) after the ... above. Your
rule doesn't actually prevent the lock from being retaken.

> kthread_stop@stop_p(...)
> ...
> mutex_lock@lock_p(E)
>
> @script:python@
> stop_p << race.stop_p;
> unlock_p << race.unlock_p;
> lock_p << race.lock_p;
> E << race.E;
> @@
>
> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')
>
> Testing the rule
> ===
>
> I reverted the commit 6dcd5d7a7a29c1e4 and called:
>
>     COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=report
>
> The result:
>
> ./drivers/media/platform/vivid/vivid-kthread-out.c:347:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-kthread-out.c:348:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-kthread-out.c:350:1-11: mutex_lock(& dev -> mutex) here
>
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:306:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:307:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:309:1-11: mutex_lock(& dev -> mutex) here
>
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1001:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1002:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1004:1-11: mutex_lock(& dev -> mutex) here
>
> There are no other bugs detected.
>
> Do you have any idea how to improve it?
> Do we need that rule for regression testing in the upstream?

Based on Jann's suggestion, it seems like it could be interesting to find
these locking pauses, and then collect the functions that are used under locks
and in lock pauses. If a function is mostly used with locks held, then
using it in a lock pause could be a sign of a bug. I will see if it turns
up anything interesting.

julia
Re: [Cocci] Coccinelle rule for CVE-2019-18683
> Do you have any idea how to improve it?

I see further software development possibilities of varying relevance for this script of the semantic patch language.

* The SmPL variables “lock_p”, “unlock_p” and “stop_p” could be declared
  more succinctly by listing them in a single statement.

* The source code search pattern can be too generic.
  What do you think about considering additional constraints
  for safer data flow analysis?

* Other operation modes might become helpful.

Regards,
Markus
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On Thu, Apr 9, 2020 at 12:01 AM Alexander Popov wrote:
> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>
> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
> locking that causes race conditions on streaming stop).
>
> These three functions are called during streaming stopping with vivid_dev.mutex
> locked. And they all make the same mistake while stopping their kthreads, which
> need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():
>
>     /* shutdown control thread */
>     vivid_grab_controls(dev, false);
>     mutex_unlock(&dev->mutex);
>     kthread_stop(dev->kthread_vid_cap);
>     dev->kthread_vid_cap = NULL;
>     mutex_lock(&dev->mutex);
>
> But when this mutex is unlocked, another vb2_fop_read() can lock it instead of
> the kthread and manipulate the buffer queue. That causes a use-after-free.
>
> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
> within one function.
[...]
> mutex_unlock@unlock_p(E)
> ...
> kthread_stop@stop_p(...)
> ...
> mutex_lock@lock_p(E)

Is the kthread_stop() really special here? It seems to me like it's
pretty much just a normal instance of the "temporarily dropping a
lock" pattern - which does tend to go wrong quite often, but can also
be correct.

I think it would be interesting though to have a list of places that
drop and then re-acquire a mutex/spinlock/... that was not originally
acquired in the same block of code (but was instead originally
acquired in an outer block, or by a parent function, or something like
that). So things like this:

void X(...) {
    mutex_lock(A);
    for (...) {
        ...
        mutex_unlock(A);
        ...
        mutex_lock(A);
        ...
    }
    mutex_unlock(A);
}

or like this:

void X(...) {
    ... [no mutex operations on A]
    mutex_unlock(A);
    ...
    mutex_lock(A);
    ...
}

But of course, there are places where this kind of behavior is
correct; so such a script wouldn't just return report code, just code
that could use a bit more scrutiny than normal.

For example, in madvise_remove(), the mmap_sem is dropped and then
re-acquired, which is fine because the caller deals with that
possibility properly:

static long madvise_remove(struct vm_area_struct *vma,
				struct vm_area_struct **prev,
				unsigned long start, unsigned long end)
{
	loff_t offset;
	int error;
	struct file *f;

	*prev = NULL;	/* tell sys_madvise we drop mmap_sem */

	if (vma->vm_flags & VM_LOCKED)
		return -EINVAL;

	f = vma->vm_file;

	if (!f || !f->f_mapping || !f->f_mapping->host) {
		return -EINVAL;
	}

	if ((vma->vm_flags & (VM_SHARED|VM_WRITE)) != (VM_SHARED|VM_WRITE))
		return -EACCES;

	offset = (loff_t)(start - vma->vm_start)
			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);

	/*
	 * Filesystem's fallocate may need to take i_mutex. We need to
	 * explicitly grab a reference because the vma (and hence the
	 * vma's reference to the file) can go away as soon as we drop
	 * mmap_sem.
	 */
	get_file(f);
	if (userfaultfd_remove(vma, start, end)) {
		/* mmap_sem was not released by userfaultfd_remove() */
		up_read(&current->mm->mmap_sem);
	}
	error = vfs_fallocate(f,
				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
				offset, end - start);
	fput(f);
	down_read(&current->mm->mmap_sem);
	return error;
}
[Cocci] Coccinelle rule for CVE-2019-18683
Hello!

Some time ago I fixed CVE-2019-18683 in the V4L2 subsystem of the Linux kernel.

I created a Coccinelle rule that detects that bug pattern. Let me show it.


Bug pattern
===

CVE-2019-18683 refers to three similar vulnerabilities caused by the same
incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().

For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
locking that causes race conditions on streaming stop).

These three functions are called during streaming stopping with vivid_dev.mutex
locked. And they all make the same mistake while stopping their kthreads, which
need to lock this mutex as well. See the example from vivid_stop_generating_vid_cap():

    /* shutdown control thread */
    vivid_grab_controls(dev, false);
    mutex_unlock(&dev->mutex);
    kthread_stop(dev->kthread_vid_cap);
    dev->kthread_vid_cap = NULL;
    mutex_lock(&dev->mutex);

But when this mutex is unlocked, another vb2_fop_read() can lock it instead of
the kthread and manipulate the buffer queue. That causes a use-after-free.

I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
within one function.


Coccinelle rule
===

virtual report

@race exists@
expression E;
position stop_p;
position unlock_p;
position lock_p;
@@

mutex_unlock@unlock_p(E)
...
kthread_stop@stop_p(...)
...
mutex_lock@lock_p(E)

@script:python@
stop_p << race.stop_p;
unlock_p << race.unlock_p;
lock_p << race.lock_p;
E << race.E;
@@

coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
coccilib.report.print_report(stop_p[0], 'kthread_stop here')
coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')


Testing the rule
===

I reverted the commit 6dcd5d7a7a29c1e4 and called:

    COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=report

The result:

./drivers/media/platform/vivid/vivid-kthread-out.c:347:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-kthread-out.c:348:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-kthread-out.c:350:1-11: mutex_lock(& dev -> mutex) here

./drivers/media/platform/vivid/vivid-sdr-cap.c:306:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-sdr-cap.c:307:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-sdr-cap.c:309:1-11: mutex_lock(& dev -> mutex) here

./drivers/media/platform/vivid/vivid-kthread-cap.c:1001:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-kthread-cap.c:1002:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-kthread-cap.c:1004:1-11: mutex_lock(& dev -> mutex) here

There are no other bugs detected.

Do you have any idea how to improve it?
Do we need that rule for regression testing in the upstream?

Thanks in advance!
Alexander