On 8/8/20 10:35 PM, Mike Snitzer wrote:
On Sat, Aug 08 2020 at  8:10am -0400,
kernel test robot <l...@intel.com> wrote:

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   449dc8c97089a6e09fb2dac4d92b1b7ac0eb7c1e
commit: 374117ad4736c5a4f8012cfe59fc07d9d58191d5 dm mpath: use double checked locking in fast path
date:   4 weeks ago
config: arm-randconfig-m031-20200808 (attached as .config)
compiler: arm-linux-gnueabi-gcc (GCC) 9.3.0

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <l...@intel.com>

All 3 recent reports about smatch showing "double unlocked 'm->lock'"
appear to be bogus.  I think smatch is generating false positives (and
given the "Old smatch warnings" it seems it has been doing so for some
time).
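
For reference, the double-check helper those warnings point at,
mpath_double_check_test_bit(), is essentially a lockless test_bit()
followed by a re-test under m->lock.  The sketch below is paraphrased
from the idea in commit 374117ad4736, not the exact upstream body:

static bool mpath_double_check_test_bit(int MPATHF_bit, struct multipath *m)
{
	/* fast path: lockless check */
	bool r = test_bit(MPATHF_bit, &m->flags);

	if (r) {
		unsigned long flags;

		/* slow path: re-check under m->lock */
		spin_lock_irqsave(&m->lock, flags);
		r = test_bit(MPATHF_bit, &m->flags);
		spin_unlock_irqrestore(&m->lock, flags);
	}

	return r;
}

The lock and unlock are balanced inside the helper (both only happen on
the slow path), so the "double unlocked" reports at the call sites look
like smatch mis-tracking the conditional locking rather than a real
imbalance.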

In addition, does l...@intel.com no longer test linux-next?  The dm-mpath
locking changes that were just merged into mainline have been in
linux-next since July 13.  Why were these tests only done against
mainline?

Hi Mike,

The kernel test robot still tests linux-next, but the bisection ran against
linus/master and didn't check the fix in linux-next.  We'll improve the
bisection to avoid reporting this kind of problem.

Best Regards,
Rong Chen


Mike



New smatch warnings:
drivers/md/dm-mpath.c:524 multipath_clone_and_map() error: double unlocked 'm->lock' (orig line 516)

Old smatch warnings:
drivers/md/dm-mpath.c:446 choose_pgpath() error: double unlocked 'm->lock' (orig line 416)
drivers/md/dm-mpath.c:457 choose_pgpath() error: double unlocked 'm->lock' (orig line 403)
drivers/md/dm-mpath.c:525 multipath_clone_and_map() error: double unlocked 'm->lock' (orig line 516)
drivers/md/dm-mpath.c:526 multipath_clone_and_map() error: double unlocked 'm->lock' (orig line 524)
drivers/md/dm-mpath.c:626 __map_bio() error: double unlocked 'm->lock' (orig line 615)
drivers/md/dm-mpath.c:627 __map_bio() error: double unlocked 'm->lock' (orig line 615)
drivers/md/dm-mpath.c:628 __map_bio() error: double unlocked 'm->lock' (orig line 626)
drivers/md/dm-mpath.c:629 __map_bio() error: double unlocked 'm->lock' (orig line 628)
drivers/md/dm-mpath.c:1607 pg_init_done() error: double unlocked 'm->lock' (orig line 1560)
drivers/md/dm-mpath.c:1707 multipath_end_io_bio() error: double unlocked 'm->lock' (orig line 1704)
drivers/md/dm-mpath.c:1988 multipath_prepare_ioctl() error: double unlocked 'm->lock' (orig line 1984)
drivers/md/dm-mpath.c:2012 multipath_prepare_ioctl() error: double unlocked 'm->lock' (orig line 2001)

vim +524 drivers/md/dm-mpath.c

    498 
    499 /*
    500  * Map cloned requests (request-based multipath)
    501  */
    502 static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
    503                                    union map_info *map_context,
    504                                    struct request **__clone)
    505 {
    506         struct multipath *m = ti->private;
    507         size_t nr_bytes = blk_rq_bytes(rq);
    508         struct pgpath *pgpath;
    509         struct block_device *bdev;
    510         struct dm_mpath_io *mpio = get_mpio(map_context);
    511         struct request_queue *q;
    512         struct request *clone;
    513 
    514         /* Do we need to select a new pgpath? */
    515         pgpath = READ_ONCE(m->current_pgpath);
  > 516         if (!pgpath || !mpath_double_check_test_bit(MPATHF_QUEUE_IO, m))
    517                 pgpath = choose_pgpath(m, nr_bytes);
    518 
    519         if (!pgpath) {
    520                 if (must_push_back_rq(m))
    521                         return DM_MAPIO_DELAY_REQUEUE;
    522                 dm_report_EIO(m);       /* Failed */
    523                 return DM_MAPIO_KILL;
  > 524         } else if (mpath_double_check_test_bit(MPATHF_QUEUE_IO, m) ||
    525                    mpath_double_check_test_bit(MPATHF_PG_INIT_REQUIRED, m)) {
    526                 pg_init_all_paths(m);
    527                 return DM_MAPIO_DELAY_REQUEUE;
    528         }
    529 
    530         mpio->pgpath = pgpath;
    531         mpio->nr_bytes = nr_bytes;
    532 
    533         bdev = pgpath->path.dev->bdev;
    534         q = bdev_get_queue(bdev);
    535         clone = blk_get_request(q, rq->cmd_flags | REQ_NOMERGE,
    536                         BLK_MQ_REQ_NOWAIT);
    537         if (IS_ERR(clone)) {
    538                 /* EBUSY, ENODEV or EWOULDBLOCK: requeue */
    539                 if (blk_queue_dying(q)) {
    540                         atomic_inc(&m->pg_init_in_progress);
    541                         activate_or_offline_path(pgpath);
    542                         return DM_MAPIO_DELAY_REQUEUE;
    543                 }
    544 
    545                 /*
    546                  * blk-mq's SCHED_RESTART can cover this requeue, so we
    547                  * needn't deal with it by DELAY_REQUEUE. More importantly,
    548                  * we have to return DM_MAPIO_REQUEUE so that blk-mq can
    549                  * get the queue busy feedback (via BLK_STS_RESOURCE),
    550                  * otherwise I/O merging can suffer.
    551                  */
    552                 return DM_MAPIO_REQUEUE;
    553         }
    554         clone->bio = clone->biotail = NULL;
    555         clone->rq_disk = bdev->bd_disk;
    556         clone->cmd_flags |= REQ_FAILFAST_TRANSPORT;
    557         *__clone = clone;
    558 
    559         if (pgpath->pg->ps.type->start_io)
    560                 pgpath->pg->ps.type->start_io(&pgpath->pg->ps,
    561                                               &pgpath->path,
    562                                               nr_bytes);
    563         return DM_MAPIO_REMAPPED;
    564 }
    565 

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org