Sorry, I seem to have missed this email.
On Mon, May 06, 2019 at 06:50:09PM +0200, Oleg Nesterov wrote:
> On 05/03, Peter Zijlstra wrote:
> >
> > -static void lockdep_sb_freeze_release(struct super_block *sb)
> > -{
> > -	int level;
> > -
> > -	for (level = SB_FREEZE_LEVELS - 1; level >= 0;
On 05/03, Peter Zijlstra wrote:
>
> -static void lockdep_sb_freeze_release(struct super_block *sb)
> -{
> -	int level;
> -
> -	for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
> -		percpu_rwsem_release(sb->s_writers.rw_sem + level, 0, _THIS_IP_);
> -}
> -
> -/*
> - *
On Fri, May 03, 2019 at 05:37:48PM +0200, Oleg Nesterov wrote:
> (And if we change this code to use wait_event(xchg(readers_block) == 0) we
> can remove rw_sem altogether).
That patch you just saw and didn't look at did just that.
> The main problem is that this is sub-optimal. We can have a
On 05/03, Peter Zijlstra wrote:
>
> On Thu, May 02, 2019 at 12:09:32PM +0200, Oleg Nesterov wrote:
>
> > > +static void readers_block(struct percpu_rw_semaphore *sem)
> > > +{
> > > +	wait_event_cmd(sem->writer, !sem->readers_block,
> > > +		       __up_read(&sem->rw_sem),
On Fri, May 03, 2019 at 04:50:59PM +0200, Peter Zijlstra wrote:
> So how about something like so then?
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -63,7 +66,7 @@ int __percpu_down_read(struct percpu_rw_
> 	 * If !readers_block the critical section starts
On Thu, May 02, 2019 at 01:42:59PM +0200, Oleg Nesterov wrote:
> On 05/02, Oleg Nesterov wrote:
> >
> > But this all is cosmetic, it seems that we can remove ->rw_sem altogether
> > but I am not sure...
>
> I mean, afaics percpu_down_read() can just do
>
> wait_event(readers_block == 0);
>
On Thu, May 02, 2019 at 12:09:32PM +0200, Oleg Nesterov wrote:
> On 05/01, Peter Zijlstra wrote:
> >
> > Anyway; I cobbled together the below. Oleg, could you have a look, I'm
> > sure I messed it up.
>
> Oh, I will need to read this carefully, but at first glance I do not see
> any hole...
>
>
On 05/02, Oleg Nesterov wrote:
>
> But this all is cosmetic, it seems that we can remove ->rw_sem altogether
> but I am not sure...
I mean, afaics percpu_down_read() can just do
wait_event(readers_block == 0);
in the slow path, while percpu_down_write()
On 05/01, Peter Zijlstra wrote:
>
> Anyway; I cobbled together the below. Oleg, could you have a look, I'm
> sure I messed it up.
Oh, I will need to read this carefully, but at first glance I do not see
any hole...
> +static void readers_block(struct percpu_rw_semaphore *sem)
> +{
> +
On Wed, May 01, 2019 at 12:22:34PM -0700, Davidlohr Bueso wrote:
> On Wed, 01 May 2019, Peter Zijlstra wrote:
>
> > Nah, the percpu_rwsem abuse by the freezer is atrocious, we really
> > should not encourage that. Also, it completely wrecks -RT.
> >
> > Hence the proposed patch.
>
> Is this
On Wed, 01 May 2019, Peter Zijlstra wrote:
Nah, the percpu_rwsem abuse by the freezer is atrocious, we really
should not encourage that. Also, it completely wrecks -RT.
Hence the proposed patch.
Is this patch (and removing rcuwait) only intended for rt?
Thanks,
Davidlohr
On Wed, May 01, 2019 at 01:26:08PM -0400, Waiman Long wrote:
> On 5/1/19 1:09 PM, Peter Zijlstra wrote:
> > On Tue, Apr 30, 2019 at 03:28:11PM +0200, Peter Zijlstra wrote:
> >
> >> Yeah, but AFAIK fs freezing code has a history of doing exactly that..
> >> This is just the latest incarnation here.
On 5/1/19 1:09 PM, Peter Zijlstra wrote:
> On Tue, Apr 30, 2019 at 03:28:11PM +0200, Peter Zijlstra wrote:
>
>> Yeah, but AFAIK fs freezing code has a history of doing exactly that..
>> This is just the latest incarnation here.
>>
>> So the immediate problem here is that the task doing thaw isn't
On Tue, Apr 30, 2019 at 03:28:11PM +0200, Peter Zijlstra wrote:
> Yeah, but AFAIK fs freezing code has a history of doing exactly that..
> This is just the latest incarnation here.
>
> So the immediate problem here is that the task doing thaw isn't the same
> that did freeze, right? The thing
On 04/30, Peter Zijlstra wrote:
>
> On Tue, Apr 30, 2019 at 04:42:53PM +0200, Oleg Nesterov wrote:
> > I have cloned linux-rt-devel.git
> >
> > If I understand correctly, in rt rw_semaphore is actually defined in rwsem_rt.h
> > so percpu_rwsem_acquire() should probably do
> >
> >
On Tue, Apr 30, 2019 at 04:42:53PM +0200, Oleg Nesterov wrote:
> I have cloned linux-rt-devel.git
>
> If I understand correctly, in rt rw_semaphore is actually defined in rwsem_rt.h
> so percpu_rwsem_acquire() should probably do
>
> sem->rw_sem.rtmutex.owner = current;
That'll screw
I have cloned linux-rt-devel.git
If I understand correctly, in rt rw_semaphore is actually defined in rwsem_rt.h
so percpu_rwsem_acquire() should probably do
sem->rw_sem.rtmutex.owner = current;
?
On 04/30, Oleg Nesterov wrote:
>
> Sorry, I don't understand...
>
> On 04/30, Peter
On Tue, Apr 30, 2019 at 04:15:01PM +0200, Oleg Nesterov wrote:
> Sorry, I don't understand...
So the problem is that on -RT rwsem uses PI mutexes, and the below just
cannot work.
Also see:
Sorry, I don't understand...
On 04/30, Peter Zijlstra wrote:
>
> Thaw then does the reverse, frobs lockdep
Yes, in particular it does
lockdep_sb_freeze_acquire()
percpu_rwsem_acquire()
sem->rw_sem.owner = current;
> and then does:
On Tue, Apr 30, 2019 at 03:45:48PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-04-30 15:28:11 [+0200], Peter Zijlstra wrote:
> > On Tue, Apr 30, 2019 at 02:51:31PM +0200, Sebastian Andrzej Siewior wrote:
> > > On 2019-04-19 10:56:27 [+0200], Juri Lelli wrote:
> > > > On 26/03/19 10:34, Juri
On 2019-04-30 15:28:11 [+0200], Peter Zijlstra wrote:
> On Tue, Apr 30, 2019 at 02:51:31PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2019-04-19 10:56:27 [+0200], Juri Lelli wrote:
> > > On 26/03/19 10:34, Juri Lelli wrote:
> > > > Hi,
> > > >
> > > > Running this reproducer on a 4.19.25-rt16
On Tue, Apr 30, 2019 at 02:51:31PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-04-19 10:56:27 [+0200], Juri Lelli wrote:
> > On 26/03/19 10:34, Juri Lelli wrote:
> > > Hi,
> > >
> > > Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
> > > turned on) produces warning
On 2019-04-19 10:56:27 [+0200], Juri Lelli wrote:
> On 26/03/19 10:34, Juri Lelli wrote:
> > Hi,
> >
> > Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
> > turned on) produces warning below.
>
> And I now think this might lead to an actual crash.
Peter, could you please
On 26/03/19 10:34, Juri Lelli wrote:
> Hi,
>
> Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
> turned on) produces warning below.
And I now think this might lead to an actual crash.
I've got what below while running xfstest suite [1] on 4.19.31-rt18.
generic/390 test
On 2019-03-26 10:34:21 [+0100], Juri Lelli wrote:
> Hi,
Hi,
…
> # for I in `seq 10`; do fsfreeze -f ./testmount; sleep 1; fsfreeze -u ./testmount; done
>
> ------------[ cut here ]------------
> DEBUG_LOCKS_WARN_ON(rt_mutex_owner(lock) != current)
> WARNING: CPU: 10 PID: 1226 at
Hi,
Running this reproducer on a 4.19.25-rt16 kernel (with lock debugging
turned on) produces warning below.
--->8---
# dd if=/dev/zero of=fsfreezetest count=99
# mkfs -t xfs -q ./fsfreezetest
# mkdir testmount
# mount -t xfs -o loop ./fsfreezetest ./testmount
# for I in `seq 10`; do fsfreeze -f ./testmount; sleep 1; fsfreeze -u ./testmount; done