Re: [NAK] Re: [PATCH] fs: Optimized fget to improve performance

2020-08-28 Thread Will Deacon
On Thu, Aug 27, 2020 at 03:28:48PM +0100, Al Viro wrote:
> On Thu, Aug 27, 2020 at 06:19:44PM +0800, Shaokun Zhang wrote:
> > From: Yuqi Jin 
> > 
> > It is well known that atomic_add performs better than atomic_cmpxchg.
> > The initial value of @f_count is 1. While @f_count is being incremented
> > by 1 in __fget_files(), it can fall into three ranges: > 0, < 0, and = 0.
> > When the fixed value 0 is used as the condition for terminating the
> > increment, only atomic_cmpxchg can be used. When < 0 is used as the stop
> > condition instead, atomic_add can be used to obtain better performance.
> 
> Suppose another thread has just removed it from the descriptor table.
> 
> > +static inline bool get_file_unless_negative(atomic_long_t *v, long a)
> > +{
> > +	long c = atomic_long_read(v);
> > +
> > +	if (c <= 0)
> > +		return 0;
> 
> Still 1.  Now the other thread has gotten to dropping the last reference,
> decremented counter to zero and committed to freeing the struct file.
> 
> > +
> > +	return atomic_long_add_return(a, v) - 1;
> 
> ... and you increment that sucker back to 1.  Sure, you return 0, so the
> caller does nothing to that struct file.  Which includes undoing the
> changes to its refcount.
> 
> In the meanwhile, the third thread does fget on the same descriptor,
> and there we end up bumping the refcount to 2 and succeeding.  Which
> leaves the caller with reference to already doomed struct file...
> 
>   IOW, NAK - this is completely broken.  The whole point of
> atomic_long_add_unless() is that the check and conditional increment
> are atomic.  Together.  That's what your optimization takes out.

Cheers Al, yes, this is fscked.
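
For reference, the generic kernel fallback keeps the check and the increment
inside one try_cmpxchg() loop, roughly like this (a simplified sketch from
memory, not the exact source):

static inline long atomic_long_fetch_add_unless(atomic_long_t *v,
						long a, long u)
{
	long c = atomic_long_read(v);

	do {
		/* Bail out if the counter already holds @u ... */
		if (unlikely(c == u))
			break;
		/*
		 * ... otherwise try to install c + a.  try_cmpxchg()
		 * succeeds only if *v still equals c, and reloads c on
		 * failure, so the check and the increment form one
		 * atomic step with no window in between.
		 */
	} while (!atomic_long_try_cmpxchg(v, &c, c + a));

	return c;
}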

As an aside, I've previously toyed with the idea of implementing a form
of cmpxchg() using a pair of xchg() operations and an smp_cond_load_relaxed(),
where the variable would transition through a "reserved value". That might
perform better with the current trend of building hardware that doesn't
handle CAS failure so well.

But I've never had the time/motivation to hack it up, and it relies on that
reserved value which obviously doesn't always work (so it would have to be a
separate API).
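
If I were to sketch it, it would look something like this (hypothetical and
glossing over memory ordering; RESERVED must be a value the variable can
never legitimately hold):

#define RESERVED	LONG_MIN

static long cmpxchg_via_xchg(atomic_long_t *v, long old, long new)
{
	/* First xchg(): park RESERVED, taking exclusive ownership. */
	long cur = atomic_long_xchg(v, RESERVED);

	while (cur == RESERVED) {
		/*
		 * Another CPU holds the reservation: wait until a real
		 * value reappears, then try to grab it again.
		 */
		smp_cond_load_relaxed(&v->counter, VAL != RESERVED);
		cur = atomic_long_xchg(v, RESERVED);
	}

	/* Second xchg(): publish the result, releasing ownership. */
	atomic_long_xchg(v, cur == old ? new : cur);

	return cur;
}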

Will


Re: [NAK] Re: [PATCH] fs: Optimized fget to improve performance

2020-08-30 Thread Shaokun Zhang
Hi Al,

On 2020/8/27 22:28, Al Viro wrote:
> On Thu, Aug 27, 2020 at 06:19:44PM +0800, Shaokun Zhang wrote:
>> From: Yuqi Jin 
>>
>> It is well known that atomic_add performs better than atomic_cmpxchg.
>> The initial value of @f_count is 1. While @f_count is being incremented
>> by 1 in __fget_files(), it can fall into three ranges: > 0, < 0, and = 0.
>> When the fixed value 0 is used as the condition for terminating the
>> increment, only atomic_cmpxchg can be used. When < 0 is used as the stop
>> condition instead, atomic_add can be used to obtain better performance.
> 
> Suppose another thread has just removed it from the descriptor table.
> 
>> +static inline bool get_file_unless_negative(atomic_long_t *v, long a)
>> +{
>> +	long c = atomic_long_read(v);
>> +
>> +	if (c <= 0)
>> +		return 0;
> 
> Still 1.  Now the other thread has gotten to dropping the last reference,
> decremented counter to zero and committed to freeing the struct file.
> 

Apologies that I missed it.

>> +
>> +	return atomic_long_add_return(a, v) - 1;
> 
> ... and you increment that sucker back to 1.  Sure, you return 0, so the
> caller does nothing to that struct file.  Which includes undoing the
> changes to its refcount.
> 
> In the meanwhile, the third thread does fget on the same descriptor,
> and there we end up bumping the refcount to 2 and succeeding.  Which
> leaves the caller with reference to already doomed struct file...
> 
>   IOW, NAK - this is completely broken.  The whole point of
> atomic_long_add_unless() is that the check and conditional increment
> are atomic.  Together.  That's what your optimization takes out.
> 

How about this? We try to replace atomic_cmpxchg with atomic_add to improve
performance. The atomic_add path does not check the current f_count value,
so a margin of num_online_cpus() is reserved: the blind add is only used
when the count is far enough from zero that concurrent increments from the
other CPUs cannot cross it.

+
+static inline bool get_file_unless(atomic_long_t *v, long a)
+{
+	long cpus = num_online_cpus();
+	long c = atomic_long_read(v);
+	long ret;
+
+	if (c > cpus || c < -cpus)
+		ret = atomic_long_add_return(a, v) - a;
+	else
+		ret = atomic_long_add_unless(v, a, 0);
+
+	return ret;
+}
+
 #define get_file_rcu_many(x, cnt)  \
-   atomic_long_add_unless(&(x)->f_count, (cnt), 0)
+   get_file_unless(&(x)->f_count, (cnt))

Thanks,
Shaokun




Re: [NAK] Re: [PATCH] fs: Optimized fget to improve performance

2020-08-30 Thread Al Viro
On Mon, Aug 31, 2020 at 09:43:31AM +0800, Shaokun Zhang wrote:

> How about this? We try to replace atomic_cmpxchg with atomic_add to improve
> performance. The atomic_add path does not check the current f_count value,
> so a margin of num_online_cpus() is reserved: the blind add is only used
> when the count is far enough from zero that concurrent increments from the
> other CPUs cannot cross it.

No.  Really, really - no.  Not unless you can guarantee that a process on
another CPU won't lose its timeslice, ending up with more than one increment
happening on the same CPU - done by different processes scheduled there, one
after another.

If you have some change to atomic_long_add_unless(), do it there.  And get it
past the arm64 folks.  get_file_rcu() is nothing special in that respect *AND*
it has to cope with any architecture out there.

BTW, keep in mind that there's such a thing as KVM - race windows are much
wider there, since a thread representing a guest CPU might lose its timeslice
whenever the host feels like it.  At which point you get a single instruction
on a guest CPU taking longer than many thousands of instructions on another
CPU of the same guest.

AFAIK, arm64 does support KVM with SMP guests.


RE: [NAK] Re: [PATCH] fs: Optimized fget to improve performance

2020-09-01 Thread David Laight
From: Al Viro
> Sent: 31 August 2020 04:21
> 
> On Mon, Aug 31, 2020 at 09:43:31AM +0800, Shaokun Zhang wrote:
> 
> > How about this? We try to replace atomic_cmpxchg with atomic_add to improve
> > performance. The atomic_add path does not check the current f_count value,
> > so a margin of num_online_cpus() is reserved: the blind add is only used
> > when the count is far enough from zero that concurrent increments from the
> > other CPUs cannot cross it.
> 
> No.  Really, really - no.  Not unless you can guarantee that a process on
> another CPU won't lose its timeslice, ending up with more than one increment
> happening on the same CPU - done by different processes scheduled there, one
> after another.
> 
> If you have some change to atomic_long_add_unless(), do it there.  And get it
> past the arm64 folks.  get_file_rcu() is nothing special in that respect *AND*
> it has to cope with any architecture out there.
> 
> BTW, keep in mind that there's such a thing as KVM - race windows are much
> wider there, since a thread representing a guest CPU might lose its timeslice
> whenever the host feels like it.  At which point you get a single instruction
> on a guest CPU taking longer than many thousands of instructions on another
> CPU of the same guest.

The same thing can happen if a hardware interrupt occurs.
Not only the delay for the interrupt itself, but all the softirq
processing that happens afterwards.
That can take a long time - even milliseconds.

David
