Re: [Cocci] Coccinelle rule for CVE-2019-18683
> >> kthread_stop@stop_p(...)
> >> ...
> >> mutex_lock@lock_p(E)
> >>
> >> @script:python@
> >> stop_p << race.stop_p;
> >> unlock_p << race.unlock_p;
> >> lock_p << race.lock_p;
> >> E << race.E;
> >> @@
> >>
> >> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
> >> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
> >> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')
> ...
>
> > Based on Jann's suggestion, it seems like it could be interesting to find
> > these locking pauses, and then collect functions that are used in locks
> > and in lock pauses. If a function is mostly used with locks held, then
> > using it in a lock pause could be a sign of a bug. I will see if it turns
> > up anything interesting.
>
> Do you mean collecting the behaviour that happens between unlocking and
> locking and then analysing it somehow?

Yes.  I have tried doing what I described, but I'm not sure that the
results are very reliable at the moment.

julia

___
Cocci mailing list
Cocci@systeme.lip6.fr
https://systeme.lip6.fr/mailman/listinfo/cocci
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On 09.04.2020 13:53, Julia Lawall wrote:
> On Thu, 9 Apr 2020, Alexander Popov wrote:
>> virtual report
>>
>> @race exists@
>> expression E;
>> position stop_p;
>> position unlock_p;
>> position lock_p;
>> @@
>>
>> mutex_unlock@unlock_p(E)
>> ...
>
> It would be good to put when != mutex_lock(E) after the ... above.  Your
> rule doesn't actually prevent the lock from being retaken.

Thanks Julia! I used this trick in the second version of the rule that
I've just sent.

>> kthread_stop@stop_p(...)
>> ...
>> mutex_lock@lock_p(E)
>>
>> @script:python@
>> stop_p << race.stop_p;
>> unlock_p << race.unlock_p;
>> lock_p << race.lock_p;
>> E << race.E;
>> @@
>>
>> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
>> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
>> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')

...

> Based on Jann's suggestion, it seems like it could be interesting to find
> these locking pauses, and then collect functions that are used in locks
> and in lock pauses. If a function is mostly used with locks held, then
> using it in a lock pause could be a sign of a bug. I will see if it turns
> up anything interesting.

Do you mean collecting the behaviour that happens between unlocking and
locking and then analysing it somehow?

Best regards,
Alexander
Re: [Cocci] Coccinelle rule for CVE-2019-18683
Jann, thanks for your reply!

On 09.04.2020 01:26, Jann Horn wrote:
> On Thu, Apr 9, 2020 at 12:01 AM Alexander Popov wrote:
>> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
>> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
>> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>>
>> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
>> locking that causes race conditions on streaming stop).
>>
>> These three functions are called during streaming stop with vivid_dev.mutex
>> locked. And they all make the same mistake while stopping their kthreads,
>> which need to lock this mutex as well. See the example from
>> vivid_stop_generating_vid_cap():
>>
>>	/* shutdown control thread */
>>	vivid_grab_controls(dev, false);
>>	mutex_unlock(&dev->mutex);
>>	kthread_stop(dev->kthread_vid_cap);
>>	dev->kthread_vid_cap = NULL;
>>	mutex_lock(&dev->mutex);
>>
>> But when this mutex is unlocked, another vb2_fop_read() can lock it instead
>> of the kthread and manipulate the buffer queue. That causes a use-after-free.
>>
>> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
>> within one function.
> [...]
>> mutex_unlock@unlock_p(E)
>> ...
>> kthread_stop@stop_p(...)
>> ...
>> mutex_lock@lock_p(E)
>
> Is the kthread_stop() really special here? It seems to me like it's
> pretty much just a normal instance of the "temporarily dropping a
> lock" pattern - which does tend to go wrong quite often, but can also
> be correct.

Right, searching without kthread_stop() gives more cases.

> I think it would be interesting though to have a list of places that
> drop and then re-acquire a mutex/spinlock/... that was not originally
> acquired in the same block of code (but was instead originally
> acquired in an outer block, or by a parent function, or something like
> that). So things like this:

It's a very good idea. I tried it and got first results (described below).

> void X(...) {
>         mutex_lock(A);
>         for (...) {
>                 ...
>                 mutex_unlock(A);
>                 ...
>                 mutex_lock(A);
>                 ...
>         }
>         mutex_unlock(A);
> }

I'm not an expert in SmPL yet. I don't know how to describe this case.

> or like this:
>
> void X(...) {
>         ... [no mutex operations on A]
>         mutex_unlock(A);
>         ...
>         mutex_lock(A);
>         ...
> }

Yes, I adapted the rule for that easier case:

```
virtual report
virtual context

@race exists@
expression E;
position unlock_p;
position lock_p;
@@

... when != mutex_lock(E)

* mutex_unlock@unlock_p(E)
...
* mutex_lock@lock_p(E)

@script:python@
unlock_p << race.unlock_p;
lock_p << race.lock_p;
E << race.E;
@@

coccilib.report.print_report(unlock_p[0], 'see mutex_unlock(' + E + ') here')
coccilib.report.print_report(lock_p[0], 'see mutex_lock(' + E + ') here\n')
```

The command to run it:

  COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=context

It shows the surrounding code context in the form of a diff.

This rule found 195 matches. Not that much!

> But of course, there are places where this kind of behavior is
> correct; so such a script wouldn't just report buggy code, just code
> that could use a bit more scrutiny than normal.

I've spent some time looking through the results. Currently I see 3 types
of cases.

1. Cases that look legit: a mutex is unlocked for some waiting or sleeping.

Example:

./fs/io_uring.c:7908:2-14: see mutex_unlock(& ctx -> uring_lock) here
./fs/io_uring.c:7910:2-12: see mutex_lock(& ctx -> uring_lock) here

diff -u -p ./fs/io_uring.c /tmp/nothing/fs/io_uring.c
--- ./fs/io_uring.c
+++ /tmp/nothing/fs/io_uring.c
@@ -7905,9 +7905,7 @@ static int __io_uring_register(struct io
 	 * to drop the mutex here, since no new references will come in
 	 * after we've killed the percpu ref.
 	 */
-	mutex_unlock(&ctx->uring_lock);
 	ret = wait_for_completion_interruptible(&ctx->completions[0]);
-	mutex_lock(&ctx->uring_lock);
 	if (ret) {
 		percpu_ref_resurrect(&ctx->refs);
 		ret = -EINTR;

Another example that looks legit:

./mm/ksm.c:2709:2-14: see mutex_unlock(& ksm_thread_mutex) here
./mm/ksm.c:2712:2-12: see mutex_lock(& ksm_thread_mutex) here

diff -u -p ./mm/ksm.c /tmp/nothing/mm/ksm.c
--- ./mm/ksm.c
+++ /tmp/nothing/mm/ksm.c
@@ -2706,10 +2706,8 @@ void ksm_migrate_page(struct page *newpa
 static void wait_while_offlining(void)
 {
 	while (ksm_run & KSM_RUN_OFFLINE) {
-		mutex_unlock(&ksm_thread_mutex);
 		wait_on_bit(&ksm_run, ilog2(KSM_RUN_OFFLINE),
 			    TASK_UNINTERRUPTIBLE);
-		mutex_lock(&ksm_thread_mutex);
 	}
 }

2. Weird cases that look like just avoiding a deadlock.

Example. This mutex is unlocked for a while by an interrupt handler:

./sound/pci/pcxhr/pcxhr_core.c:1210:3-15: see mutex_unlock(& mgr -> lock) here
Re: [Cocci] Coccinelle rule for CVE-2019-18683
Markus, thanks for your remarks!

On 09.04.2020 11:41, Markus Elfring wrote:
> * The source code search pattern can be too generic.
>   How do you think about considering additional constraints
>   for safer data control flow analysis?

Could you please elaborate on that?

I used the 'exists' keyword so that the rule finds at least one branch
that contains the mutex_unlock+kthread_stop+mutex_lock chain.

> * Other operation modes might become helpful.

Thanks! I added the 'context' mode, it's very good for this purpose.

Best regards,
Alexander
Re: [Cocci] Coccinelle: zalloc-simple: Fix patch mode for dma_alloc_coherent()
> What should I be looking for in the archives?

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/scripts/coccinelle/api/alloc/zalloc-simple.cocci?id=dfd32cad146e3624970eee9329e99d2c6ef751b3

Several software development discussions from before the change
“dma-mapping: remove dma_zalloc_coherent()” might also be interesting.

> I've referenced what I think is the relevant commit here.

How do you think about extending the software development background
another bit?

> I have replied to these messages now.

Do you refer to your response from 2020-04-07?
https://systeme.lip6.fr/pipermail/cocci/2020-April/007102.html
https://lore.kernel.org/cocci/20200407155005.rdyiu7xqss57rzhq@lenovo-laptop/

* You proposed a change for this SmPL script which contains programming
  mistakes.
* Would you like to restore an SmPL code variant from a published patch
  hunk instead?
* Will the software development attention also grow around the recurring
  topic “Code duplications in SmPL disjunctions”?

Regards,
Markus
Re: [Cocci] Coccinelle: zalloc-simple: Fix patch mode for dma_alloc_coherent()
On Tue, Apr 07, 2020 at 07:10:46PM +0200, Markus Elfring wrote:
> >> Will the software development attention grow in a way so that further
> >> implementation details can be adjusted also for the mentioned SmPL script?
> >
> > I'm not sure I understand what you mean. Would you mind clarifying?
>
> You would like to change a file for which some software development
> discussions happened a while ago.
> Do you get any further ideas from available information in message archives?

Hi Markus,

What should I be looking for in the archives? I've referenced what I
think is the relevant commit here.

> Julia Lawall (and I) pointed out special development concerns.
>
> Example:
> Code duplications in SmPL disjunctions
> https://lore.kernel.org/cocci/alpine.DEB.2.21.2004062115000.10239@hadrien/
> https://systeme.lip6.fr/pipermail/cocci/2020-April/007099.html

I have replied to these messages now.

Best,
Alex

> Regards,
> Markus
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On Thu, 9 Apr 2020, Alexander Popov wrote:

> Hello!
>
> Some time ago I fixed CVE-2019-18683 in the V4L2 subsystem of the Linux
> kernel.
>
> I created a Coccinelle rule that detects that bug pattern. Let me show it.

Thanks for the discussion :)

> Bug pattern
> ===========
>
> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>
> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
> locking that causes race conditions on streaming stop).
>
> These three functions are called during streaming stop with vivid_dev.mutex
> locked. And they all make the same mistake while stopping their kthreads,
> which need to lock this mutex as well. See the example from
> vivid_stop_generating_vid_cap():
>
>	/* shutdown control thread */
>	vivid_grab_controls(dev, false);
>	mutex_unlock(&dev->mutex);
>	kthread_stop(dev->kthread_vid_cap);
>	dev->kthread_vid_cap = NULL;
>	mutex_lock(&dev->mutex);
>
> But when this mutex is unlocked, another vb2_fop_read() can lock it instead
> of the kthread and manipulate the buffer queue. That causes a use-after-free.
>
> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
> within one function.
>
> Coccinelle rule
> ===============
>
> virtual report
>
> @race exists@
> expression E;
> position stop_p;
> position unlock_p;
> position lock_p;
> @@
>
> mutex_unlock@unlock_p(E)
> ...

It would be good to put when != mutex_lock(E) after the ... above.  Your
rule doesn't actually prevent the lock from being retaken.

> kthread_stop@stop_p(...)
> ...
> mutex_lock@lock_p(E)
>
> @script:python@
> stop_p << race.stop_p;
> unlock_p << race.unlock_p;
> lock_p << race.lock_p;
> E << race.E;
> @@
>
> coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
> coccilib.report.print_report(stop_p[0], 'kthread_stop here')
> coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')
>
> Testing the rule
> ================
>
> I reverted the commit 6dcd5d7a7a29c1e4 and called:
>
>   COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=report
>
> The result:
>
> ./drivers/media/platform/vivid/vivid-kthread-out.c:347:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-kthread-out.c:348:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-kthread-out.c:350:1-11: mutex_lock(& dev -> mutex) here
>
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:306:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:307:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-sdr-cap.c:309:1-11: mutex_lock(& dev -> mutex) here
>
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1001:1-13: mutex_unlock(& dev -> mutex) here
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1002:1-13: kthread_stop here
> ./drivers/media/platform/vivid/vivid-kthread-cap.c:1004:1-11: mutex_lock(& dev -> mutex) here
>
> There are no other bugs detected.
>
> Do you have any idea how to improve it?
> Do we need that rule for regression testing in the upstream?

Based on Jann's suggestion, it seems like it could be interesting to find
these locking pauses, and then collect functions that are used in locks
and in lock pauses. If a function is mostly used with locks held, then
using it in a lock pause could be a sign of a bug. I will see if it turns
up anything interesting.

julia
Re: [Cocci] Coccinelle rule for CVE-2019-18683
> Do you have any idea how to improve it?

I see further software development possibilities of varying relevance
for this script of the semantic patch language.

* The SmPL variables “lock_p”, “unlock_p” and “stop_p” could be declared
  more succinctly just by listing them in the same statement.
* The source code search pattern can be too generic.
  How do you think about considering additional constraints
  for safer data control flow analysis?
* Other operation modes might become helpful.

Regards,
Markus
Re: [Cocci] Coccinelle rule for CVE-2019-18683
On Thu, Apr 9, 2020 at 12:01 AM Alexander Popov wrote:
> CVE-2019-18683 refers to three similar vulnerabilities caused by the same
> incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
> vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().
>
> For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
> locking that causes race conditions on streaming stop).
>
> These three functions are called during streaming stop with vivid_dev.mutex
> locked. And they all make the same mistake while stopping their kthreads,
> which need to lock this mutex as well. See the example from
> vivid_stop_generating_vid_cap():
>
>	/* shutdown control thread */
>	vivid_grab_controls(dev, false);
>	mutex_unlock(&dev->mutex);
>	kthread_stop(dev->kthread_vid_cap);
>	dev->kthread_vid_cap = NULL;
>	mutex_lock(&dev->mutex);
>
> But when this mutex is unlocked, another vb2_fop_read() can lock it instead
> of the kthread and manipulate the buffer queue. That causes a use-after-free.
>
> I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
> within one function.
[...]
> mutex_unlock@unlock_p(E)
> ...
> kthread_stop@stop_p(...)
> ...
> mutex_lock@lock_p(E)

Is the kthread_stop() really special here? It seems to me like it's
pretty much just a normal instance of the "temporarily dropping a
lock" pattern - which does tend to go wrong quite often, but can also
be correct.

I think it would be interesting though to have a list of places that
drop and then re-acquire a mutex/spinlock/... that was not originally
acquired in the same block of code (but was instead originally
acquired in an outer block, or by a parent function, or something like
that). So things like this:

void X(...) {
        mutex_lock(A);
        for (...) {
                ...
                mutex_unlock(A);
                ...
                mutex_lock(A);
                ...
        }
        mutex_unlock(A);
}

or like this:

void X(...) {
        ... [no mutex operations on A]
        mutex_unlock(A);
        ...
        mutex_lock(A);
        ...
}

But of course, there are places where this kind of behavior is
correct; so such a script wouldn't just report buggy code, just code
that could use a bit more scrutiny than normal.

For example, in madvise_remove(), the mmap_sem is dropped and then
re-acquired, which is fine because the caller deals with that
possibility properly:

static long madvise_remove(struct vm_area_struct *vma,
				struct vm_area_struct **prev,
				unsigned long start, unsigned long end)
{
	loff_t offset;
	int error;
	struct file *f;

	*prev = NULL;	/* tell sys_madvise we drop mmap_sem */

	if (vma->vm_flags & VM_LOCKED)
		return -EINVAL;

	f = vma->vm_file;

	if (!f || !f->f_mapping || !f->f_mapping->host) {
		return -EINVAL;
	}

	if ((vma->vm_flags & (VM_SHARED|VM_WRITE)) != (VM_SHARED|VM_WRITE))
		return -EACCES;

	offset = (loff_t)(start - vma->vm_start)
			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);

	/*
	 * Filesystem's fallocate may need to take i_mutex.  We need to
	 * explicitly grab a reference because the vma (and hence the
	 * vma's reference to the file) can go away as soon as we drop
	 * mmap_sem.
	 */
	get_file(f);
	if (userfaultfd_remove(vma, start, end)) {
		/* mmap_sem was not released by userfaultfd_remove() */
		up_read(&current->mm->mmap_sem);
	}
	error = vfs_fallocate(f,
				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
				offset, end - start);
	fput(f);
	down_read(&current->mm->mmap_sem);
	return error;
}
[Cocci] Coccinelle rule for CVE-2019-18683
Hello!

Some time ago I fixed CVE-2019-18683 in the V4L2 subsystem of the Linux kernel.

I created a Coccinelle rule that detects that bug pattern. Let me show it.


Bug pattern
===========

CVE-2019-18683 refers to three similar vulnerabilities caused by the same
incorrect approach to locking that is used in vivid_stop_generating_vid_cap(),
vivid_stop_generating_vid_out(), and sdr_cap_stop_streaming().

For fixes please see the commit 6dcd5d7a7a29c1e4 (media: vivid: Fix wrong
locking that causes race conditions on streaming stop).

These three functions are called during streaming stop with vivid_dev.mutex
locked. And they all make the same mistake while stopping their kthreads,
which need to lock this mutex as well. See the example from
vivid_stop_generating_vid_cap():

	/* shutdown control thread */
	vivid_grab_controls(dev, false);
	mutex_unlock(&dev->mutex);
	kthread_stop(dev->kthread_vid_cap);
	dev->kthread_vid_cap = NULL;
	mutex_lock(&dev->mutex);

But when this mutex is unlocked, another vb2_fop_read() can lock it instead
of the kthread and manipulate the buffer queue. That causes a use-after-free.

I created a Coccinelle rule that detects mutex_unlock+kthread_stop+mutex_lock
within one function.


Coccinelle rule
===============

virtual report

@race exists@
expression E;
position stop_p;
position unlock_p;
position lock_p;
@@

mutex_unlock@unlock_p(E)
...
kthread_stop@stop_p(...)
...
mutex_lock@lock_p(E)

@script:python@
stop_p << race.stop_p;
unlock_p << race.unlock_p;
lock_p << race.lock_p;
E << race.E;
@@

coccilib.report.print_report(unlock_p[0], 'mutex_unlock(' + E + ') here')
coccilib.report.print_report(stop_p[0], 'kthread_stop here')
coccilib.report.print_report(lock_p[0], 'mutex_lock(' + E + ') here\n')


Testing the rule
================

I reverted the commit 6dcd5d7a7a29c1e4 and called:

  COCCI=./scripts/coccinelle/kthread_race.cocci make coccicheck MODE=report

The result:

./drivers/media/platform/vivid/vivid-kthread-out.c:347:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-kthread-out.c:348:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-kthread-out.c:350:1-11: mutex_lock(& dev -> mutex) here

./drivers/media/platform/vivid/vivid-sdr-cap.c:306:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-sdr-cap.c:307:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-sdr-cap.c:309:1-11: mutex_lock(& dev -> mutex) here

./drivers/media/platform/vivid/vivid-kthread-cap.c:1001:1-13: mutex_unlock(& dev -> mutex) here
./drivers/media/platform/vivid/vivid-kthread-cap.c:1002:1-13: kthread_stop here
./drivers/media/platform/vivid/vivid-kthread-cap.c:1004:1-11: mutex_lock(& dev -> mutex) here

There are no other bugs detected.

Do you have any idea how to improve it?
Do we need that rule for regression testing in the upstream?

Thanks in advance!
Alexander