[PATCH] fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t

2019-08-20 Thread Sebastian Siewior
From: Thomas Gleixner 

Bit spinlocks are problematic if PREEMPT_RT is enabled, because they
disable preemption, which is undesired for latency reasons and breaks when
regular spinlocks are taken within the bit_spinlock locked region because
regular spinlocks are converted to 'sleeping spinlocks' on RT. So RT
replaces the bit spinlocks with regular spinlocks to avoid this problem.
Bit spinlocks are also not covered by lock debugging, e.g. lockdep.

Substitute the BH_Uptodate_Lock bit spinlock with a regular spinlock.

Signed-off-by: Thomas Gleixner 
[bigeasy: remove the wrapper and use always spinlock_t]
Signed-off-by: Sebastian Andrzej Siewior 
---
 fs/buffer.c | 19 +++
 fs/ext4/page-io.c   |  8 +++-
 fs/ntfs/aops.c  |  9 +++--
 include/linux/buffer_head.h |  6 +++---
 4 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 131d39ec7d316..eab37fbaa439f 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -275,8 +275,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
 * decide that the page is now completely done.
 */
first = page_buffers(page);
-   local_irq_save(flags);
-   bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+   spin_lock_irqsave(&first->uptodate_lock, flags);
clear_buffer_async_read(bh);
unlock_buffer(bh);
tmp = bh;
@@ -289,8 +288,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
}
tmp = tmp->b_this_page;
} while (tmp != bh);
-   bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-   local_irq_restore(flags);
+   spin_unlock_irqrestore(&first->uptodate_lock, flags);
 
/*
 * If none of the buffers had errors and they are all
@@ -302,8 +300,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
return;
 
 still_busy:
-   bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-   local_irq_restore(flags);
+   spin_unlock_irqrestore(&first->uptodate_lock, flags);
return;
 }
 
@@ -331,8 +328,7 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
}
 
first = page_buffers(page);
-   local_irq_save(flags);
-   bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+   spin_lock_irqsave(&first->uptodate_lock, flags);
 
clear_buffer_async_write(bh);
unlock_buffer(bh);
@@ -344,14 +340,12 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
}
tmp = tmp->b_this_page;
}
-   bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-   local_irq_restore(flags);
+   spin_unlock_irqrestore(&first->uptodate_lock, flags);
end_page_writeback(page);
return;
 
 still_busy:
-   bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-   local_irq_restore(flags);
+   spin_unlock_irqrestore(&first->uptodate_lock, flags);
return;
 }
 EXPORT_SYMBOL(end_buffer_async_write);
@@ -3420,6 +3414,7 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
if (ret) {
INIT_LIST_HEAD(&ret->b_assoc_buffers);
+   spin_lock_init(&ret->uptodate_lock);
preempt_disable();
__this_cpu_inc(bh_accounting.nr);
recalc_bh_state();
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 12ceadef32c5a..7745ed23c6ad9 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -87,11 +87,10 @@ static void ext4_finish_bio(struct bio *bio)
}
bh = head = page_buffers(page);
/*
-* We check all buffers in the page under BH_Uptodate_Lock
+* We check all buffers in the page under uptodate_lock
 * to avoid races with other end io clearing async_write flags
 */
-   local_irq_save(flags);
-   bit_spin_lock(BH_Uptodate_Lock, &head->b_state);
+   spin_lock_irqsave(&head->uptodate_lock, flags);
do {
if (bh_offset(bh) < bio_start ||
bh_offset(bh) + bh->b_size > bio_end) {
@@ -103,8 +102,7 @@ static void ext4_finish_bio(struct bio *bio)
if (bio->bi_status)
buffer_io_error(bh);
} while ((bh = bh->b_this_page) != head);
-   bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
-   local_irq_restore(flags);
+   spin_unlock_irqrestore(&head->uptodate_lock, flags);
if (!under_io) {
fscrypt_free_bounce_page(bounce_page);
end_page_writeback(page);
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index 7202a1e39d70c..14ca433b3a9e4 100644
--- a/fs/ntfs/aops.c
+++ b/fs/

Re: [PATCH] fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t

2019-10-11 Thread Sebastian Siewior
On 2019-08-20 20:01:14 [+0200], Thomas Gleixner wrote:
> On Tue, 20 Aug 2019, Matthew Wilcox wrote:
> > On Tue, Aug 20, 2019 at 07:08:18PM +0200, Sebastian Siewior wrote:
> > > Bit spinlocks are problematic if PREEMPT_RT is enabled, because they
> > > disable preemption, which is undesired for latency reasons and breaks when
> > > regular spinlocks are taken within the bit_spinlock locked region because
> > > regular spinlocks are converted to 'sleeping spinlocks' on RT. So RT
> > > replaces the bit spinlocks with regular spinlocks to avoid this problem.
> > > Bit spinlocks are also not covered by lock debugging, e.g. lockdep.
> > > 
> > > Substitute the BH_Uptodate_Lock bit spinlock with a regular spinlock.
> > > 
> > > Signed-off-by: Thomas Gleixner 
> > > [bigeasy: remove the wrapper and use always spinlock_t]
> > 
> > Uhh ... always grow the buffer_head, even for non-PREEMPT_RT?  Why?
> 
> Christoph requested that:
> 
>   https://lkml.kernel.org/r/20190802075612.ga20...@infradead.org

What do we do about this one?

> Thanks,
> 
>   tglx

Sebastian


Re: [PATCH] fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t

2019-08-20 Thread Matthew Wilcox
On Tue, Aug 20, 2019 at 07:08:18PM +0200, Sebastian Siewior wrote:
> Bit spinlocks are problematic if PREEMPT_RT is enabled, because they
> disable preemption, which is undesired for latency reasons and breaks when
> regular spinlocks are taken within the bit_spinlock locked region because
> regular spinlocks are converted to 'sleeping spinlocks' on RT. So RT
> replaces the bit spinlocks with regular spinlocks to avoid this problem.
> Bit spinlocks are also not covered by lock debugging, e.g. lockdep.
> 
> Substitute the BH_Uptodate_Lock bit spinlock with a regular spinlock.
> 
> Signed-off-by: Thomas Gleixner 
> [bigeasy: remove the wrapper and use always spinlock_t]

Uhh ... always grow the buffer_head, even for non-PREEMPT_RT?  Why?



Re: [PATCH] fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t

2019-08-20 Thread Thomas Gleixner
On Tue, 20 Aug 2019, Matthew Wilcox wrote:
> On Tue, Aug 20, 2019 at 07:08:18PM +0200, Sebastian Siewior wrote:
> > Bit spinlocks are problematic if PREEMPT_RT is enabled, because they
> > disable preemption, which is undesired for latency reasons and breaks when
> > regular spinlocks are taken within the bit_spinlock locked region because
> > regular spinlocks are converted to 'sleeping spinlocks' on RT. So RT
> > replaces the bit spinlocks with regular spinlocks to avoid this problem.
> > Bit spinlocks are also not covered by lock debugging, e.g. lockdep.
> > 
> > Substitute the BH_Uptodate_Lock bit spinlock with a regular spinlock.
> > 
> > Signed-off-by: Thomas Gleixner 
> > [bigeasy: remove the wrapper and use always spinlock_t]
> 
> Uhh ... always grow the buffer_head, even for non-PREEMPT_RT?  Why?

Christoph requested that:

  https://lkml.kernel.org/r/20190802075612.ga20...@infradead.org

Thanks,

tglx