* Paul E. McKenney <paul...@linux.vnet.ibm.com> wrote:

> > It's this new usage in fs/fs-writeback.c:
> > 
> > static void bdi_split_work_to_wbs(struct backing_dev_info *bdi,
> >                                   struct wb_writeback_work *base_work,
> >                                   bool skip_if_busy)
> > {
> >         struct bdi_writeback *last_wb = NULL;
> >         struct bdi_writeback *wb = list_entry_rcu(&bdi->wb_list,
> 
> I believe that the above should instead be:
> 
>       struct bdi_writeback *wb = list_entry_rcu(bdi->wb_list.next,
> 
> After all, RCU read-side list primitives need to fetch pointers in order to
> traverse those pointers in an RCU-safe manner.  The patch below clears this
> up for me, does it also work for you?

Are you sure about that?

I considered this solution too, but the code goes like this:

static void bdi_split_work_to_wbs(struct backing_dev_info *bdi,
                                  struct wb_writeback_work *base_work,
                                  bool skip_if_busy)
{
        struct bdi_writeback *last_wb = NULL;
        struct bdi_writeback *wb = list_entry_rcu(&bdi->wb_list,
                                                struct bdi_writeback, bdi_node);

        might_sleep();
restart:
        rcu_read_lock();
        list_for_each_entry_continue_rcu(wb, &bdi->wb_list, bdi_node) {

and list_for_each_entry_continue_rcu() will start the iteration with the next
entry. So if you initialize wb from bdi->wb_list.next, then we'll start with
.next->next, i.e. we skip the first entry.

That seems to change behavior and break the logic.
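
To spell it out, list_for_each_entry_continue_rcu() in include/linux/rculist.h
is defined roughly like this (quoting from memory, so take it as a sketch):

#define list_for_each_entry_continue_rcu(pos, head, member)                \
        for (pos = list_entry_rcu(pos->member.next, typeof(*pos), member); \
             &pos->member != (head);                                       \
             pos = list_entry_rcu(pos->member.next, typeof(*pos), member))

The very first thing it fetches is pos->member.next - so with wb initialized
from the (type-punned) list head the loop visits the first real entry, while
initializing wb from bdi->wb_list.next would make it visit the second entry
first.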

Another solution I considered is to use bdi->wb_list.next->prev, but that,
beyond being ugly, causes actual extra runtime overhead - for something that
seems academic.
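
For completeness, that variant would look something like this (just a sketch,
not something I'm proposing):

        struct bdi_writeback *wb = list_entry_rcu(bdi->wb_list.next->prev,
                                                struct bdi_writeback, bdi_node);

i.e. an extra pointer chase at runtime only to end up back at &bdi->wb_list.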

Thanks,

        Ingo