Re: [PATCH 03/31] fs: introduce new ->get_poll_head and ->poll_mask methods

2018-01-10 Thread Christoph Hellwig
On Wed, Jan 10, 2018 at 04:31:35PM +0000, Al Viro wrote:
> *snort*
> 
> Seeing that random drivers are, by far, the majority of instances...
> What I wonder is how many of them conform to that pattern and how
> many can be massaged to that form.
> 
> How painful would it be, to pick an instance with more than one wait
> queue involved, to convert drivers/char/random.c to that form?

Attached.  Not very painful at all.  Would be even nicer if we had a
wait_event version that can deal with keys, but I can look into that.
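
A keyed variant could be sketched roughly like the code below -- the
names pollkey_wait, pollkey_wake() and poll_key_wait_event() are
hypothetical, not existing API, but the wake function mirrors what
pollwake() in fs/select.c already does with the key, and the macro is
just wait_event_interruptible() with that wake function plugged in:

struct pollkey_wait {
	struct wait_queue_entry	entry;
	__poll_t		events;	/* events this sleeper cares about */
};

/*
 * Wake function that ignores keyed wakeups whose key (a __poll_t mask,
 * as passed by wake_up_interruptible_poll()) does not overlap the
 * events this waiter asked for.  Un-keyed wakeups always go through.
 */
static int pollkey_wake(struct wait_queue_entry *entry, unsigned mode,
			int sync, void *key)
{
	struct pollkey_wait *pw =
		container_of(entry, struct pollkey_wait, entry);

	if (key && !((__poll_t)(unsigned long)key & pw->events))
		return 0;	/* not for us, keep sleeping */
	return autoremove_wake_function(entry, mode, sync, key);
}

#define poll_key_wait_event(wq, events, condition)			\
({									\
	struct pollkey_wait __pw = {					\
		.entry.private	= current,				\
		.entry.func	= pollkey_wake,				\
		.entry.entry	= LIST_HEAD_INIT(__pw.entry.entry),	\
		.events		= (events),				\
	};								\
	int __ret = 0;							\
									\
	for (;;) {							\
		prepare_to_wait(wq, &__pw.entry, TASK_INTERRUPTIBLE);	\
		if (condition)						\
			break;						\
		if (signal_pending(current)) {				\
			__ret = -ERESTARTSYS;				\
			break;						\
		}							\
		schedule();						\
	}								\
	finish_wait(wq, &__pw.entry);					\
	__ret;								\
})

With that, _random_read() in the patch below would become

	error = poll_key_wait_event(&random_wait, POLLIN,
			ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits);

and the POLLOUT-keyed wakeups from account() would no longer wake it
spuriously.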

> static unsigned int vtpm_proxy_fops_poll(struct file *filp, poll_table *wait)
> {
> 	struct proxy_dev *proxy_dev = filp->private_data;
> 	unsigned ret;
>
> 	poll_wait(filp, &proxy_dev->wq, wait);
>
> 	ret = POLLOUT;
>
> 	mutex_lock(&proxy_dev->buf_lock);
>
> 	if (proxy_dev->req_len)
> 		ret |= POLLIN | POLLRDNORM;
>
> 	if (!(proxy_dev->state & STATE_OPENED_FLAG))
> 		ret |= POLLHUP;
>
> 	mutex_unlock(&proxy_dev->buf_lock);
>
> 	return ret;
> }
> (mainline drivers/char/tpm/tpm_vtpm_proxy.c)

Yeah.  And what exactly is the lock protecting, given that each
of them checks a single smaller-than-register-sized variable?

> Is that mutex_lock() in there a bug?  Another fun case is dma_buf_poll()...

Right now it is not a bug, but it is not very helpful behavior.  We
actually have quite a few of those, and not allowing ->poll_mask to
sleep is a good thing to catch these.
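
For comparison, the vtpm example above converted to the new methods
might look like the sketch below -- not a tested patch; the READ_ONCE()
reads stand in for the mutex on the theory that each check is a single
word-sized load and ->poll_mask must not sleep:

static struct wait_queue_head *
vtpm_proxy_fops_get_poll_head(struct file *filp, __poll_t events)
{
	struct proxy_dev *proxy_dev = filp->private_data;

	return &proxy_dev->wq;
}

static __poll_t vtpm_proxy_fops_poll_mask(struct file *filp, __poll_t events)
{
	struct proxy_dev *proxy_dev = filp->private_data;
	__poll_t ret = POLLOUT;

	/* single word-sized loads; no mutex, ->poll_mask may not sleep */
	if (READ_ONCE(proxy_dev->req_len))
		ret |= POLLIN | POLLRDNORM;
	if (!(READ_ONCE(proxy_dev->state) & STATE_OPENED_FLAG))
		ret |= POLLHUP;

	return ret;
}
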
From 5f5549dc78b39799db01d48dc346d14561ec2003 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig 
Date: Wed, 10 Jan 2018 18:37:51 +0100
Subject: random: convert to ->poll_mask

The big change is that random_read_wait and random_write_wait are merged
into a single waitqueue that uses keyed wakeups.  Because wait_event_*
doesn't know about the keys, this will lead to occasional spurious wakeups
in _random_read and add_hwgenerator_randomness, but wait_event_* is
designed to handle these and we are not in a hot path there.

Signed-off-by: Christoph Hellwig 
---
 drivers/char/random.c | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 64b59562c872..9a1b85928de3 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -401,8 +401,7 @@ static struct poolinfo {
 /*
  * Static global variables
  */
-static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
-static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
+static DECLARE_WAIT_QUEUE_HEAD(random_wait);
 static struct fasync_struct *fasync;
 
 static DEFINE_SPINLOCK(random_ready_list_lock);
@@ -710,7 +709,7 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
 
 		/* should we wake readers? */
 		if (entropy_bits >= random_read_wakeup_bits) {
-			wake_up_interruptible(&random_read_wait);
+			wake_up_interruptible_poll(&random_wait, POLLIN);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
 		/* If the input pool is getting full, send some
@@ -1293,7 +1292,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
 	trace_debit_entropy(r->name, 8 * ibytes);
 	if (ibytes &&
 	(r->entropy_count >> ENTROPY_SHIFT) < random_write_wakeup_bits) {
-		wake_up_interruptible(&random_write_wait);
+		wake_up_interruptible_poll(&random_wait, POLLOUT);
 		kill_fasync(&fasync, SIGIO, POLL_OUT);
 	}
 
@@ -1748,7 +1747,7 @@ _random_read(int nonblock, char __user *buf, size_t nbytes)
 		if (nonblock)
 			return -EAGAIN;
 
-		wait_event_interruptible(random_read_wait,
+		wait_event_interruptible(random_wait,
 			ENTROPY_BITS(&input_pool) >=
 			random_read_wakeup_bits);
 		if (signal_pending(current))
@@ -1784,14 +1783,17 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
 	return ret;
 }
 
+static struct wait_queue_head *
+random_get_poll_head(struct file *file, __poll_t events)
+{
+	return &random_wait;
+}
+
 static __poll_t
-random_poll(struct file *file, poll_table * wait)
+random_poll_mask(struct file *file, __poll_t events)
 {
-	__poll_t mask;
+	__poll_t mask = 0;
 
-	poll_wait(file, &random_read_wait, wait);
-	poll_wait(file, &random_write_wait, wait);
-	mask = 0;
 	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
 		mask |= POLLIN | POLLRDNORM;
 	if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
@@ -1890,7 +1892,8 @@ static int random_fasync(int fd, struct file *filp, int on)
 const struct file_operations random_fops = {
 	.read  = random_read,
 	.write = random_write,
-	.poll  = random_poll,
+	.get_poll_head  = random_get_poll_head,
+	.poll_mask  = random_poll_mask,
 	.unlocked_ioctl = random_ioctl,
 	.fasync = random_fasync,
 	.llseek = noop_llseek,
@@ -2223,7 +2226,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count,
 	 * We'll be woken up again once below random_write_wakeup_thresh,
 	 * or when the calling thread is about to terminate.
 	 */
-	wait_event_interruptible(random_write_wait, kthread_should_stop() 

Re: [PATCH 03/31] fs: introduce new ->get_poll_head and ->poll_mask methods

2018-01-10 Thread Al Viro
On Mon, Jan 08, 2018 at 11:45:13AM +0100, Christoph Hellwig wrote:
> On Sat, Jan 06, 2018 at 07:12:42PM +0000, Al Viro wrote:
> > On Thu, Jan 04, 2018 at 09:00:15AM +0100, Christoph Hellwig wrote:
> > > ->get_poll_head returns the waitqueue that the poll operation is going
> > > to sleep on.  Note that this means we can only use a single waitqueue
> > > for the poll, unlike some current drivers that use two waitqueues for
> > > different events.  But now that we have keyed wakeups and heavily use
> > those for poll there aren't that many good reasons left to keep the
> > multiple waitqueues, and if there are any, ->poll is still around; the
> > driver just won't support aio poll.
> > 
> > *UGH*
> > 
> > Gotta love the optimism, but have you actually done the conversion?
> > I'm particularly suspicious about the locking rules here...
> 
> I've done just about everything but random drivers, which are the ones
> where people care about performance and thus aio poll.  I suspect that
> we will have various odd cruft drivers that will be left alone.

*snort*

Seeing that random drivers are, by far, the majority of instances...
What I wonder is how many of them conform to that pattern and how
many can be massaged to that form.

How painful would it be, to pick an instance with more than one wait
queue involved, to convert drivers/char/random.c to that form?

FWIW, I agree that it's very common.  Which makes the departures from
that pattern worth looking into - they might be buggy.  And "more than
one queue to wait on" is not all - there's also e.g.
static unsigned int vtpm_proxy_fops_poll(struct file *filp, poll_table *wait)
{
	struct proxy_dev *proxy_dev = filp->private_data;
	unsigned ret;

	poll_wait(filp, &proxy_dev->wq, wait);

	ret = POLLOUT;

	mutex_lock(&proxy_dev->buf_lock);

	if (proxy_dev->req_len)
		ret |= POLLIN | POLLRDNORM;

	if (!(proxy_dev->state & STATE_OPENED_FLAG))
		ret |= POLLHUP;

	mutex_unlock(&proxy_dev->buf_lock);

	return ret;
}
(mainline drivers/char/tpm/tpm_vtpm_proxy.c)

Is that mutex_lock() in there a bug?  Another fun case is dma_buf_poll()...

The reason I went looking at ->poll() in the first place was the
recurring bugs in the instances; e.g. "oh, I've got something odd,
let's return -Esomething".  Or "was it POLLIN or POLL_IN?"
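
For reference, the trap is that POLLIN/POLLOUT (event bits from
<linux/poll.h>, returned by ->poll()) and POLL_IN/POLL_OUT (si_code
values from <asm-generic/siginfo.h>, passed to kill_fasync()) are
unrelated constant sets; a couple of the values even happen to
coincide, so the mixup can survive light testing.  The random.c patch
upthread uses both correctly, side by side:

	/* poll/select side: __poll_t event bits */
	wake_up_interruptible_poll(&random_wait, POLLIN);

	/* fasync/SIGIO side: si_code value -- POLL_IN, not POLLIN */
	kill_fasync(&fasync, SIGIO, POLL_IN);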

I'd really like to get the interfaces right and get rid of the bitrot
source; turning it into a moldering corpse in the corner is fine, as
long as we have a realistic chance of getting rid of that body in the
not too distant future...


Re: [PATCH 03/31] fs: introduce new ->get_poll_head and ->poll_mask methods

2018-01-08 Thread Christoph Hellwig
On Sat, Jan 06, 2018 at 07:12:42PM +0000, Al Viro wrote:
> On Thu, Jan 04, 2018 at 09:00:15AM +0100, Christoph Hellwig wrote:
> > ->get_poll_head returns the waitqueue that the poll operation is going
> > to sleep on.  Note that this means we can only use a single waitqueue
> > for the poll, unlike some current drivers that use two waitqueues for
> > different events.  But now that we have keyed wakeups and heavily use
> > those for poll there aren't that many good reasons left to keep the
> > multiple waitqueues, and if there are any, ->poll is still around; the
> > driver just won't support aio poll.
> 
> *UGH*
> 
> Gotta love the optimism, but have you actually done the conversion?
> I'm particularly suspicious about the locking rules here...

I've done just about everything but random drivers, which are the ones
where people care about performance and thus aio poll.  I suspect that
we will have various odd cruft drivers that will be left alone.


Re: [PATCH 03/31] fs: introduce new ->get_poll_head and ->poll_mask methods

2018-01-06 Thread Al Viro
On Thu, Jan 04, 2018 at 09:00:15AM +0100, Christoph Hellwig wrote:
> ->get_poll_head returns the waitqueue that the poll operation is going
> to sleep on.  Note that this means we can only use a single waitqueue
> for the poll, unlike some current drivers that use two waitqueues for
> different events.  But now that we have keyed wakeups and heavily use
> those for poll there aren't that many good reasons left to keep the
> multiple waitqueues, and if there are any, ->poll is still around; the
> driver just won't support aio poll.

*UGH*

Gotta love the optimism, but have you actually done the conversion?
I'm particularly suspicious about the locking rules here...
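
For readers following the thread: based on the driver-side signatures
visible in the random.c conversion above, the VFS-side wiring would look
roughly like the sketch below.  This is only an illustration of the
proposed split; the function name, the NULL handling and the legacy
fallback here are assumptions, not the actual glue from the series:

static __poll_t vfs_poll_mask(struct file *file, __poll_t events,
			      struct poll_table_struct *pt)
{
	struct wait_queue_head *head;

	if (!file->f_op->get_poll_head)
		return file->f_op->poll(file, pt);	/* legacy path */

	/*
	 * Step 1: the driver names the single queue to sleep on,
	 * and the caller registers on it.
	 */
	head = file->f_op->get_poll_head(file, events);
	if (!head)
		return POLLERR;
	poll_wait(file, head, pt);

	/*
	 * Step 2: a non-blocking check of the currently-ready events.
	 * Keyed wakeups on @head cause this check to be redone later.
	 */
	return file->f_op->poll_mask(file, events);
}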