Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-08-06 Thread Peter Zijlstra
On Tue, Aug 06, 2019 at 02:26:38AM -0400, Gabriel Krisman Bertazi wrote:
> Peter Zijlstra  writes:
> 
> >
> >> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
> >> + u32 count, ktime_t *abs_time)
> >> +{
> >> +  struct futex_wait_block *wb;
> >> +  struct restart_block *restart;
> >> +  int ret;
> >> +
> >> +  if (!count)
> >> +  return -EINVAL;
> >> +
> >> +  wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
> >> +  if (!wb)
> >> +  return -ENOMEM;
> >> +
> >> +  if (copy_from_user(wb, uaddr,
> >> + count * sizeof(struct futex_wait_block))) {
> >> +  ret = -EFAULT;
> >> +  goto out;
> >> +  }
> >
> > I'm thinking we can do away with this giant copy and do it one at a time
> > from the other function, just extend the storage allocated there to
> > store whatever values are still required later.

> I'm not sure I get the suggestion here.  If I understand the code
> correctly, once we do it one at a time, we need to queue_me() each futex
> and then drop the hb lock, before going to the next one. 

So the idea is to reduce this to a single allocation, like Thomas also said.
And afaict, we've not yet taken any locks the first time we iterate --
that only does get_futex_key(); the whole __futex_wait_setup() and
queue_me() business comes later, in the second iteration.
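Roughly something like this (just a sketch -- the combined struct and the
helper name are made up -- but it shows the per-element copy folded into
the existing get_futex_key() loop, so both the big copy_from_user() and
the second allocation go away):

/* sketch: one allocation, one small copy per element */
struct futex_wait_multiple_q {
	struct futex_wait_block wb;	/* copied from userspace */
	struct futex_q q;		/* kernel-side queue entry */
};

static int futex_wait_multiple_setup(struct futex_wait_block __user *uwb,
				     struct futex_wait_multiple_q *wq,
				     u32 count, unsigned int flags)
{
	int ret, i;

	for (i = 0; i < count; i++) {
		if (copy_from_user(&wq[i].wb, &uwb[i], sizeof(wq[i].wb))) {
			ret = -EFAULT;
			goto out_put;
		}

		wq[i].q.key = FUTEX_KEY_INIT;
		wq[i].q.bitset = wq[i].wb.bitset;

		ret = get_futex_key(wq[i].wb.uaddr, flags & FLAGS_SHARED,
				    &wq[i].q.key, FUTEX_READ);
		if (unlikely(ret))
			goto out_put;
	}
	return 0;

out_put:
	while (--i >= 0)
		put_futex_key(&wq[i].q.key);
	return ret;
}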





Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-08-06 Thread Gabriel Krisman Bertazi
Peter Zijlstra  writes:

>
>> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
>> +   u32 count, ktime_t *abs_time)
>> +{
>> +struct futex_wait_block *wb;
>> +struct restart_block *restart;
>> +int ret;
>> +
>> +if (!count)
>> +return -EINVAL;
>> +
>> +wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);
>> +if (!wb)
>> +return -ENOMEM;
>> +
>> +if (copy_from_user(wb, uaddr,
>> +   count * sizeof(struct futex_wait_block))) {
>> +ret = -EFAULT;
>> +goto out;
>> +}
>
> I'm thinking we can do away with this giant copy and do it one at a time
> from the other function, just extend the storage allocated there to
> store whatever values are still required later.

Hey Peter,

Thanks for your very detailed review.  It is deeply appreciated.  My
apologies for the style issues; I blindly trusted checkpatch.pl when it
said it was ready for submission.

I'm not sure I get the suggestion here.  If I understand the code
correctly, once we do it one at a time, we need to queue_me() each futex
and then drop the hb lock, before going to the next one.  Once we go to
the next one, we need to call get_user_pages (and now copy_from_user),
both of which can sleep, and on return set the task state to
TASK_RUNNING.  This opens a window where we can wake up the task but it
is not in the right sleeping state, which from the comment in
futex_wait_queue_me(), seems problematic.  This is also the reason why I
wanted to split the key memory pin from the actual read in patch 1/2.
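To illustrate, here is rough pseudo-code of the single-pass variant as I
understand it, with the window marked (helper names are the ones from this
patch; error handling dropped for brevity):

for (i = 0; i < count; i++) {
	/* get_user_pages()/copy_from_user() can sleep and leave us
	 * back in TASK_RUNNING when they return */
	copy_from_user(&wb, &uwb[i], sizeof(wb));

	set_current_state(TASK_INTERRUPTIBLE);

	/* takes hb->lock and checks *uaddr == val */
	__futex_wait_setup(wb.uaddr, wb.val, flags, &qs[i], &hb);

	/* queues qs[i] and drops hb->lock */
	queue_me(&qs[i], hb);

	/*
	 * Window: while faulting in the data for element i+1 we are
	 * TASK_RUNNING again, yet already queued on futex i, so a
	 * wake up arriving here finds us in the wrong sleeping state
	 * (see the comment in futex_wait_queue_me()).
	 */
}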

Did you consider this problem or is it not a problem for some reason?
What am I missing?

-- 
Gabriel Krisman Bertazi


Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Pierre-Loup A. Griffais

On 7/31/19 6:32 PM, Zebediah Figura wrote:

On 7/31/19 8:22 PM, Zebediah Figura wrote:

On 7/31/19 7:45 PM, Thomas Gleixner wrote:

If I assume a maximum of 65 futexes which got mentioned in one of the
replies then this will allocate 7280 bytes alone for the futex_q array with
a stock debian config which has no debug options enabled which would bloat
the struct. Adding the futex_wait_block array into the same allocation
becomes larger than 8K which already exceeds the limit of SLUB kmem
caches and forces the whole thing into the page allocator directly.

This sucks.

Also I'm confused about the 64 maximum resulting in 65 futexes comment in
one of the mails.

Can you please explain what you are trying to do exactly on the user space
side?


The extra futex comes from the fact that there are a couple of, as it
were, out-of-band ways to wake up a thread on Windows. [Specifically, a
thread can enter an "alertable" wait in which case it will be woken up
by a request from another thread to execute an "asynchronous procedure
call".] It's easiest for us to just add another futex to the list in
that case.


To be clear, the 64/65 distinction is an implementation detail that's 
pretty much outside the scope of this discussion. I should have just 
said 65 directly. Sorry about that.




I'd also point out, for whatever it's worth, that while 64 is a hard
limit, real applications almost never go nearly that high. By far the
most common number of primitives to select on is one.
Performance-critical code never tends to wait on more than three. The
most I've ever seen is twelve.

If you'd like to see the user-side source, most of the relevant code is
at [1], in particular the functions __fsync_wait_objects() [line 712]
and do_single_wait [line 655]. Please feel free to ask for further
clarification.

[1]
https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c


In addition, here's an example of how I think it might be useful to 
expose it to apps at large in a way that's compatible with existing 
pthread mutexes:


https://github.com/Plagman/glibc/commit/3b01145fa25987f2f93e7eda7f3e7d0f2f77b290

This patch hasn't received nearly as much testing as the Wine fsync code 
path, but that functionality would provide more CPU-efficient ways for 
thread pool code to sleep in our game engine. We also use eventfd today.


For this, I think the expected upper bound for the per-op futex count 
would be in the same order of magnitude as the logical CPU count on the 
target machine, similar to the Wine use case.


Thanks,
 - Pierre-Loup







Thanks,

tglx









Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Zebediah Figura

On 7/31/19 8:22 PM, Zebediah Figura wrote:

On 7/31/19 7:45 PM, Thomas Gleixner wrote:

If I assume a maximum of 65 futexes which got mentioned in one of the
replies then this will allocate 7280 bytes alone for the futex_q array with
a stock debian config which has no debug options enabled which would bloat
the struct. Adding the futex_wait_block array into the same allocation
becomes larger than 8K which already exceeds the limit of SLUB kmem
caches and forces the whole thing into the page allocator directly.

This sucks.

Also I'm confused about the 64 maximum resulting in 65 futexes comment in
one of the mails.

Can you please explain what you are trying to do exactly on the user space
side?


The extra futex comes from the fact that there are a couple of, as it
were, out-of-band ways to wake up a thread on Windows. [Specifically, a
thread can enter an "alertable" wait in which case it will be woken up
by a request from another thread to execute an "asynchronous procedure
call".] It's easiest for us to just add another futex to the list in
that case.


To be clear, the 64/65 distinction is an implementation detail that's 
pretty much outside the scope of this discussion. I should have just 
said 65 directly. Sorry about that.




I'd also point out, for whatever it's worth, that while 64 is a hard
limit, real applications almost never go nearly that high. By far the
most common number of primitives to select on is one.
Performance-critical code never tends to wait on more than three. The
most I've ever seen is twelve.

If you'd like to see the user-side source, most of the relevant code is
at [1], in particular the functions __fsync_wait_objects() [line 712]
and do_single_wait [line 655]. Please feel free to ask for further
clarification.

[1]
https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c





Thanks,

tglx







Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Zebediah Figura

On 7/31/19 7:45 PM, Thomas Gleixner wrote:

If I assume a maximum of 65 futexes which got mentioned in one of the
replies then this will allocate 7280 bytes alone for the futex_q array with
a stock debian config which has no debug options enabled which would bloat
the struct. Adding the futex_wait_block array into the same allocation
becomes larger than 8K which already exceeds the limit of SLUB kmem
caches and forces the whole thing into the page allocator directly.

This sucks.

Also I'm confused about the 64 maximum resulting in 65 futexes comment in
one of the mails.

Can you please explain what you are trying to do exactly on the user space
side?


The extra futex comes from the fact that there are a couple of, as it 
were, out-of-band ways to wake up a thread on Windows. [Specifically, a 
thread can enter an "alertable" wait in which case it will be woken up 
by a request from another thread to execute an "asynchronous procedure 
call".] It's easiest for us to just add another futex to the list in 
that case.


I'd also point out, for whatever it's worth, that while 64 is a hard 
limit, real applications almost never go nearly that high. By far the 
most common number of primitives to select on is one. 
Performance-critical code never tends to wait on more than three. The 
most I've ever seen is twelve.


If you'd like to see the user-side source, most of the relevant code is 
at [1], in particular the functions __fsync_wait_objects() [line 712] 
and do_single_wait [line 655]. Please feel free to ask for further 
clarification.


[1] 
https://github.com/ValveSoftware/wine/blob/proton_4.11/dlls/ntdll/fsync.c






Thanks,

tglx





Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Thomas Gleixner
On Tue, 30 Jul 2019, Gabriel Krisman Bertazi wrote:
> + retry:
> + for (i = 0; i < count; i++) {
> + qs[i].key = FUTEX_KEY_INIT;
> + qs[i].bitset = wb[i].bitset;
> +
> + ret = get_futex_key(wb[i].uaddr, flags & FLAGS_SHARED,
> + &qs[i].key, FUTEX_READ);
> + if (unlikely(ret != 0)) {
> + for (--i; i >= 0; i--)
> + put_futex_key(&qs[i].key);
> + goto out;
> + }
> + }
> +
> + set_current_state(TASK_INTERRUPTIBLE);
> +
> + for (i = 0; i < count; i++) {
> + ret = __futex_wait_setup(wb[i].uaddr, wb[i].val,
> +  flags, &qs[i], &hb);
> + if (ret) {
> + /* Drop the failed key directly.  keys 0..(i-1)
> +  * will be put by unqueue_me.
> +  */
> + put_futex_key(&qs[i].key);
> +
> + /* Undo the partial work we did. */
> + for (--i; i >= 0; i--)
> + unqueue_me(&qs[i]);

That lacks a comment why this does not evaluate the return value of
unqueue_me(). If one of the already queued futexes got woken, then it's
debatable whether that counts as success or not. Whatever the decision is
it needs to be documented instead of documenting the obvious.

> + __set_current_state(TASK_RUNNING);
> + if (ret > 0)
> + goto retry;
> + goto out;
> + }
> +
> + /* We can't hold to the bucket lock when dealing with
> +  * the next futex. Queue ourselves now so we can unlock
> +  * it before moving on.
> +  */
> + queue_me(&qs[i], hb);
> + }

Why  does this use two loops and two cleanup loops instead of doing
it in a single one?

> +
> + /* There is no easy to way to check if we are wake already on
> +  * multiple futexes without waking through each one of them.  So

waking?

> + ret = -ERESTARTSYS;
> + if (!abs_time)
> + goto out;
> +
> + ret = -ERESTART_RESTARTBLOCK;
> + out:

There is surely no more convoluted way to write this? That goto above is
just making my eyes bleed. Yes I know where you copied that from and then
removed everthing between the goto and the assignement.

ret = abs_time ? -ERESTART_RESTARTBLOCK : -ERESTARTSYS;

would be too simple to read.

Also this is a new interface and there is absolutely no reason to make it
misfeature compatible to FUTEX_WAIT even for the case that this might share
code. Supporting relative timeouts is wrong to begin with and for absolute
timeouts there is no reason to use restartblock.

> +static int futex_wait_multiple(u32 __user *uaddr, unsigned int flags,
> +u32 count, ktime_t *abs_time)
> +{
> + struct futex_wait_block *wb;
> + struct restart_block *restart;
> + int ret;
> +
> + if (!count)
> + return -EINVAL;
> +
> +
> + wb = kcalloc(count, sizeof(struct futex_wait_block), GFP_KERNEL);

There are a couple of wrongs here:

  - Lacks a sane upper limit for count. Relying on kcalloc() to fail is
just wrong.
  
  - kcalloc() does a pointless zeroing for a buffer which is going to be
overwritten anyway

  - sizeof(struct foo) instead of sizeof(*wb)

count * sizeof(*wb) must be calculated anyway because copy_from_user()
needs it. So using kcalloc() is pointless.

> + if (!wb)
> + return -ENOMEM;
> +
> + if (copy_from_user(wb, uaddr,
> +count * sizeof(struct futex_wait_block))) {

And doing the size calculation only once spares that fugly line break.
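Roughly, with FUTEX_WAIT_MULTIPLE_MAX as a placeholder for whatever sane
limit gets picked:

	size_t size;

	/* an explicit limit also keeps the multiplication below sane */
	if (!count || count > FUTEX_WAIT_MULTIPLE_MAX)
		return -EINVAL;

	size = count * sizeof(*wb);

	/* no zeroing needed, the buffer is overwritten right away */
	wb = kmalloc(size, GFP_KERNEL);
	if (!wb)
		return -ENOMEM;

	if (copy_from_user(wb, uaddr, size)) {
		ret = -EFAULT;
		goto out;
	}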

But that's cosmetic. The way more serious issue is:

How is that supposed to work with compat tasks? 32bit and 64 bit apps do
not share the same pointer size and your UAPI struct is:

> +struct futex_wait_block {
> + __u32 __user *uaddr;
> + __u32 val;
> + __u32 bitset;
> +};

A 64 bit kernel will happily read beyond the user buffer when called from a
32bit task and all struct members turn into garbage. That can be solved
with padding so pointer conversion can be avoided, but you have to be
careful about endianness.
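For illustration only, one way to get an invariant layout is to avoid a
pointer-sized member altogether and have both 32 and 64 bit userspace pass
the address as a __u64 (a sketch, not necessarily the padding variant
described above):

struct futex_wait_block {
	__u64 uaddr;	/* user address; a 32-bit task zero-extends it */
	__u32 val;
	__u32 bitset;
};

The kernel side would then convert with u64_to_user_ptr() and the struct
itself needs no compat handling.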

Coming back to the allocation itself:

do_futex_wait_multiple() does another allocation depending on count. So you
call into the memory allocator twice in a row and on the way out you do the
same thing again with free.

Looking at the size constraints.

If I assume a maximum of 65 futexes which got mentioned in one of the
replies then this will allocate 7280 bytes alone for the futex_q array with
a stock debian config which has no debug options enabled which would bloat
the struct. Adding the futex_wait_block array into the same allocation
becomes larger than 8K which already exceeds the limit of SLUB kmem
caches and forces the whole thing into the page allocator directly.

Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Zebediah Figura

On 7/31/19 5:39 PM, Thomas Gleixner wrote:

On Wed, 31 Jul 2019, Zebediah Figura wrote:

On 7/31/19 7:06 AM, Peter Zijlstra wrote:

On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:

This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
a thread to wait on several futexes at the same time, and be awoken by
any of them.  In a sense, it implements one of the features that was
supported by polling on the old FUTEX_FD interface.

My use case for this operation lies in Wine, where we want to implement
a similar interface available in Windows, used mainly for event
handling.  The wine folks have an implementation that uses eventfd, but
it suffers from FD exhaustion (I was told they have applications that go
to the order of multi-million FDs), and higher CPU utilization.


So is multi-million the range we expect for @count ?



Not in Wine's case; in fact Wine has a hard limit of 64 synchronization
primitives that can be waited on at once (which, with the current user-side
code, translates into 65 futexes). The exhaustion just had to do with the
number of primitives created; some programs seem to leak them badly.


And how is the futex approach better suited to 'fix' resource leaks?



The crucial constraints for implementing Windows synchronization 
primitives in Wine are that (a) it must be possible to access them from 
multiple processes and (b) it must be possible to wait on more than one 
at a time.


The current best solution for this, performance-wise, backs each Windows 
synchronization primitive with an eventfd(2) descriptor and uses poll(2) 
to select on them. Windows programs can create an apparently unbounded 
number of synchronization objects, though they can only wait on up to 64 
at a time. However, on Linux the NOFILE limit causes problems; some 
distributions have it as low as 4096 by default, which is too low even 
for some modern programs that don't leak objects.


The approach we are developing, that relies on this patch, backs each 
object with a single futex whose value represents its signaled state. 
Therefore the only resource we are at risk of running out of is 
available memory, which exists in far greater quantities than available 
descriptors. [Presumably Windows synchronization primitives require at 
least some kernel memory to be allocated per object as well, so this 
puts us essentially at parity, for whatever that's worth.]


To be clear, I think the primary impetus for developing the futex-based 
approach was performance; it lets us avoid some system calls in hot 
paths (e.g. waiting on an already signaled object, resetting the state 
of an object to unsignaled. In that respect we're trying to get ahead of 
Windows, I guess.) But we have still been encountering occasional grief 
due to NOFILE limits that are too low, so this is another helpful benefit.
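As a purely illustrative sketch (not Wine's actual fsync code), a
futex-backed event looks something like the following; the point is that
signaling, resetting and the already-signaled wait path only touch the
futex word, and a system call is needed only to really sleep or to wake
sleepers:

#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

struct event { _Atomic uint32_t signaled; };	/* lives in shared memory */

static void event_set(struct event *e)
{
	/* only wake if we actually flipped 0 -> 1 */
	if (!atomic_exchange(&e->signaled, 1))
		syscall(SYS_futex, &e->signaled, FUTEX_WAKE, INT_MAX,
			NULL, NULL, 0);
}

static void event_reset(struct event *e)
{
	atomic_store(&e->signaled, 0);		/* never needs a syscall */
}

static void event_wait(struct event *e)
{
	/* hot path: already signaled, no syscall; otherwise sleep until
	 * the word changes and we get woken */
	while (!atomic_load(&e->signaled))
		syscall(SYS_futex, &e->signaled, FUTEX_WAIT, 0,
			NULL, NULL, 0);
}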


Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Thomas Gleixner
On Wed, 31 Jul 2019, Zebediah Figura wrote:
> On 7/31/19 7:06 AM, Peter Zijlstra wrote:
> > On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
> > > This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
> > > a thread to wait on several futexes at the same time, and be awoken by
> > > any of them.  In a sense, it implements one of the features that was
> > > supported by polling on the old FUTEX_FD interface.
> > > 
> > > My use case for this operation lies in Wine, where we want to implement
> > > a similar interface available in Windows, used mainly for event
> > > handling.  The wine folks have an implementation that uses eventfd, but
> > > it suffers from FD exhaustion (I was told they have applications that go
> > > to the order of multi-million FDs), and higher CPU utilization.
> > 
> > So is multi-million the range we expect for @count ?
> > 
> 
> Not in Wine's case; in fact Wine has a hard limit of 64 synchronization
> primitives that can be waited on at once (which, with the current user-side
> code, translates into 65 futexes). The exhaustion just had to do with the
> number of primitives created; some programs seem to leak them badly.

And how is the futex approach better suited to 'fix' resource leaks?

Thanks,

tglx




Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Zebediah Figura

On 7/31/19 7:06 AM, Peter Zijlstra wrote:

On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:

This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
a thread to wait on several futexes at the same time, and be awoken by
any of them.  In a sense, it implements one of the features that was
supported by polling on the old FUTEX_FD interface.

My use case for this operation lies in Wine, where we want to implement
a similar interface available in Windows, used mainly for event
handling.  The wine folks have an implementation that uses eventfd, but
it suffers from FD exhaustion (I was told they have applications that go
to the order of multi-million FDs), and higher CPU utilization.


So is multi-million the range we expect for @count ?



Not in Wine's case; in fact Wine has a hard limit of 64 synchronization 
primitives that can be waited on at once (which, with the current 
user-side code, translates into 65 futexes). The exhaustion just had to 
do with the number of primitives created; some programs seem to leak 
them badly.


Re: [PATCH RFC 2/2] futex: Implement mechanism to wait on any of several futexes

2019-07-31 Thread Peter Zijlstra
On Tue, Jul 30, 2019 at 06:06:02PM -0400, Gabriel Krisman Bertazi wrote:
> This is a new futex operation, called FUTEX_WAIT_MULTIPLE, which allows
> a thread to wait on several futexes at the same time, and be awoken by
> any of them.  In a sense, it implements one of the features that was
> supported by polling on the old FUTEX_FD interface.
> 
> My use case for this operation lies in Wine, where we want to implement
> a similar interface available in Windows, used mainly for event
> handling.  The wine folks have an implementation that uses eventfd, but
> it suffers from FD exhaustion (I was told they have applications that go
> to the order of multi-million FDs), and higher CPU utilization.

So is multi-million the range we expect for @count ?

If so, we're having problems, see below.

> In time, we are also proposing modifications to glibc and libpthread to
> make this feature available for Linux native multithreaded applications
> using libpthread, which can benefit from the behavior of waiting on any
> of a group of futexes.
> 
> In particular, using futexes in our Wine use case reduced the CPU
> utilization by 4% for the game Beat Saber and by 1.5% for the game
> Shadow of Tomb Raider, both running over Proton (a Wine-based solution
> for Windows emulation), when compared to the eventfd interface. This
> implementation also doesn't rely on file descriptors, so it doesn't risk
> overflowing the resource.
> 
> Technically, the existing FUTEX_WAIT implementation can be easily
> reworked by using do_futex_wait_multiple with a count of one, and I
> have a patch showing how it works.  I'm not proposing it, since
> futex is such tricky code that I'd be more comfortable to have
> FUTEX_WAIT_MULTIPLE running upstream for a couple development cycles,
> before considering modifying FUTEX_WAIT.
> 
> From an implementation perspective, the futex list is passed as an array
> of (pointer,value,bitset) to the kernel, which will enqueue all of them
> and sleep if none was already triggered. It returns a hint of which
> futex caused the wake up event to userspace, but the hint doesn't
> guarantee that it is the only futex triggered.  Before calling the syscall
> again, userspace should traverse the list, trying to re-acquire any of
> the other futexes, to prevent an immediate -EWOULDBLOCK return code from
> the kernel.
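For reference, the userspace side of that would look roughly like the
sketch below; it assumes UAPI headers patched with this series (which
provide FUTEX_WAIT_MULTIPLE and struct futex_wait_block), and the exact
argument mapping is whatever the final patch defines:

#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

static long futex_wait_multiple(struct futex_wait_block *blocks,
				uint32_t count,
				const struct timespec *timeout)
{
	/* the array pointer goes in the uaddr slot, count in the val slot */
	return syscall(SYS_futex, blocks, FUTEX_WAIT_MULTIPLE, count,
		       timeout, NULL, 0);
}

int main(void)
{
	uint32_t futex_a = 0, futex_b = 0;
	struct futex_wait_block wb[] = {
		{ &futex_a, 0, ~0u },	/* expected value 0, match-any bitset */
		{ &futex_b, 0, ~0u },
	};

	/* sleeps until another thread or process changes and wakes either
	 * word; the return value only hints at which futex woke us, so the
	 * caller re-scans the whole list before waiting again */
	return futex_wait_multiple(wb, 2, NULL) < 0 ? 1 : 0;
}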

> Signed-off-by: Zebediah Figura 
> Signed-off-by: Steven Noonan 
> Signed-off-by: Pierre-Loup A. Griffais 
> Signed-off-by: Gabriel Krisman Bertazi 

That is not a valid SoB chain.

> ---
>  include/uapi/linux/futex.h |   7 ++
>  kernel/futex.c | 161 -
>  2 files changed, 164 insertions(+), 4 deletions(-)
> 
> diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
> index a89eb0accd5e..2401c4cf5095 100644
> --- a/include/uapi/linux/futex.h
> +++ b/include/uapi/linux/futex.h

> @@ -150,4 +151,10 @@ struct robust_list_head {
>(((op & 0xf) << 28) | ((cmp & 0xf) << 24)  \
> | ((oparg & 0xfff) << 12) | (cmparg & 0xfff))
>  
> +struct futex_wait_block {
> + __u32 __user *uaddr;
> + __u32 val;
> + __u32 bitset;
> +};

That is not compat invariant and I see a distinct lack of compat code in
this patch.

> diff --git a/kernel/futex.c b/kernel/futex.c
> index 91f3db335c57..2623e8f152cd 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c

no function comment in sight

> +static int do_futex_wait_multiple(struct futex_wait_block *wb,
> +   u32 count, unsigned int flags,
> +   ktime_t *abs_time)
> +{
> +

(spurious empty line)

> + struct hrtimer_sleeper timeout, *to;
> + struct futex_hash_bucket *hb;
> + struct futex_q *qs = NULL;
> + int ret;
> + int i;
> +
> + qs = kcalloc(count, sizeof(struct futex_q), GFP_KERNEL);
> + if (!qs)
> + return -ENOMEM;

This will not work for @count ~ 1e6, or rather, MAX_ORDER is 11, so we
can, at most, allocate 4096 << 11 bytes, and since sizeof(futex_q) ==
112, that gives: ~75k objects.

Also; this is the only actual limit placed on @count.

Jann, Al, this also allows a single task to increment i_count or
mm_count by ~75k, which might be really awesome for refcount smashing
attacks.

> +
> + to = futex_setup_timer(abs_time, &timeout, flags,
> +current->timer_slack_ns);
> + retry:

(wrongly indented label)

> + for (i = 0; i < count; i++) {
> + qs[i].key = FUTEX_KEY_INIT;
> + qs[i].bitset = wb[i].bitset;
> +
> + ret = get_futex_key(wb[i].uaddr, flags & FLAGS_SHARED,
> + &qs[i].key, FUTEX_READ);
> + if (unlikely(ret != 0)) {
> + for (--i; i >= 0; i--)
> + put_futex_key(&qs[i].key);
> + goto out;
> + }
> + }
> +
> + set_current_state(TASK_INTERRUPTIBLE);
> +
> + for (i = 0; i < count; i++) {
> +