Re: [PATCH 06/10] virtio: console: fix race in port_fops_poll() and port unplug
Jason Wang writes:
> On 07/22/2013 01:45 PM, Rusty Russell wrote:
>> Amit Shah writes:
>>> On (Fri) 19 Jul 2013 [18:17:32], Jason Wang wrote:
>>>> On 07/19/2013 03:48 PM, Amit Shah wrote:
>>>>> On (Fri) 19 Jul 2013 [15:03:50], Jason Wang wrote:
>>>>>> On 07/19/2013 04:16 AM, Amit Shah wrote:
>>>>>>> Between poll() being called and processed, the port can be unplugged.
>>>>>>> Check if this happened, and bail out.
>>>>>>>
>>>>>>> Signed-off-by: Amit Shah
>>>>>>> ---
>>>>>>>  drivers/char/virtio_console.c | 4
>>>>>>>  1 file changed, 4 insertions(+)
>>>>>>>
>>>>>>> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
>>>>>>> index 7728af9..1d4b748 100644
>>>>>>> --- a/drivers/char/virtio_console.c
>>>>>>> +++ b/drivers/char/virtio_console.c
>>>>>>> @@ -967,6 +967,10 @@ static unsigned int port_fops_poll(struct file *filp, poll_table *wait)
>>>>>>>  	unsigned int ret;
>>>>>>>
>>>>>>>  	port = filp->private_data;
>>>>>>> +	if (!port->guest_connected) {
>>>>>>> +		/* Port was unplugged before we could proceed */
>>>>>>> +		return POLLHUP;
>>>>>>> +	}
>>>>>>>  	poll_wait(filp, &port->waitqueue, wait);
>>>>>>>
>>>>>>>  	if (!port->guest_connected) {
>>>>>> Looks still racy here. Unlike port_fops_read(), which checks
>>>>>> will_read_block(). If unplug happens after the check but before the
>>>>>> poll_wait(), the caller will be blocked forever.
>>>>> unplug_port() calls wake_up_interruptible on the waitqueue.
>>>> I mean the following cases:
>>> (formatting to fit properly:)
>>>
>>>> CPU0:                                       CPU1: unplug_port()
>>>>
>>>> if (!port->guest_connected) {
>>>> 	return POLLHUP;
>>>> }
>>>>                                             wake_up_interruptible()
>>>>
>>>> poll_wait(filp, &port->waitqueue, wait);
>>> Agreed, this can happen.  I can't think of a way to resolve this.  One
>>> way would be to remove the waitqueue (port->waitqueue = NULL in
>>> unplug_port()), but I'm not sure of the effect on the other parts
>>> yet.  I'll leave this one for later analysis.
>> No, you are confused by the name, I think.
>>
>> poll_wait() doesn't actually wait.  It's more like a poll_enqueue().
>
> Yes, but the caller will wait then, and since the wakeup was called
> before adding into the waitqueue, it may block forever?

No, we enqueue then check:

	port = filp->private_data;
	poll_wait(filp, &port->waitqueue, wait);

	if (!port->guest_connected) {
		/* Port got unplugged */
		return POLLHUP;
	}
	ret = 0;
	if (!will_read_block(port))
		ret |= POLLIN | POLLRDNORM;
	if (!will_write_block(port))
		ret |= POLLOUT;
	if (!port->host_connected)
		ret |= POLLHUP;

	return ret;

Which is the correct way to do this.

Cheers,
Rusty.
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Re: [patch v2] virtio: console: cleanup an error message
On Mon, Jul 22, 2013 at 11:41:00PM +0300, Dan Carpenter wrote:
> The PTR_ERR(NULL) here is not useful.
>
> Signed-off-by: Dan Carpenter
> ---
> v2: completely different
>
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index 1b456fe..4cf46d8 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c

Rusty handles virtio stuff, please cc: him on these.

thanks,

greg k-h
Re: [PATCH V3 0/2] [BUGFIX] virtio/console: Fix two bugs of splice_write
Amit Shah writes:
> On (Mon) 22 Jul 2013 [13:00:43], Yoshihiro YUNOMAE wrote:
>> Hi,
>>
>> This patch set fixes two bugs of splice_write in the virtio-console driver.
>>
>> [BUG1] Although pipe->nrbufs is empty, the driver tries to do splice_write.
>>     => This induces oops in sg_init_table().
>>
>> [BUG2] No lock for competition of splice_write.
>>     => This induces oops in splice_from_pipe_feed() by a bug in any user
>>        application.
>>
>> These reports are written in each patch.
>>
>> Changes in V2:
>>  - Fix a locking problem for error
>>
>> Changes in V3:
>>  - Add Reviewed-by lines and stable@ line in sign-off area
>
> Thank you!
>
> Rusty, please pick this up.

Both applied, thank you.

Cheers,
Rusty.
Re: [PATCH 06/10] virtio: console: fix race in port_fops_poll() and port unplug
On 07/22/2013 01:45 PM, Rusty Russell wrote:
> Amit Shah writes:
>> On (Fri) 19 Jul 2013 [18:17:32], Jason Wang wrote:
>>> On 07/19/2013 03:48 PM, Amit Shah wrote:
>>>> On (Fri) 19 Jul 2013 [15:03:50], Jason Wang wrote:
>>>>> On 07/19/2013 04:16 AM, Amit Shah wrote:
>>>>>> Between poll() being called and processed, the port can be unplugged.
>>>>>> Check if this happened, and bail out.
>>>>>>
>>>>>> Signed-off-by: Amit Shah
>>>>>> ---
>>>>>>  drivers/char/virtio_console.c | 4
>>>>>>  1 file changed, 4 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
>>>>>> index 7728af9..1d4b748 100644
>>>>>> --- a/drivers/char/virtio_console.c
>>>>>> +++ b/drivers/char/virtio_console.c
>>>>>> @@ -967,6 +967,10 @@ static unsigned int port_fops_poll(struct file *filp, poll_table *wait)
>>>>>>  	unsigned int ret;
>>>>>>
>>>>>>  	port = filp->private_data;
>>>>>> +	if (!port->guest_connected) {
>>>>>> +		/* Port was unplugged before we could proceed */
>>>>>> +		return POLLHUP;
>>>>>> +	}
>>>>>>  	poll_wait(filp, &port->waitqueue, wait);
>>>>>>
>>>>>>  	if (!port->guest_connected) {
>>>>> Looks still racy here. Unlike port_fops_read(), which checks
>>>>> will_read_block(). If unplug happens after the check but before the
>>>>> poll_wait(), the caller will be blocked forever.
>>>> unplug_port() calls wake_up_interruptible on the waitqueue.
>>> I mean the following cases:
>> (formatting to fit properly:)
>>
>>> CPU0:                                       CPU1: unplug_port()
>>>
>>> if (!port->guest_connected) {
>>> 	return POLLHUP;
>>> }
>>>                                             wake_up_interruptible()
>>>
>>> poll_wait(filp, &port->waitqueue, wait);
>> Agreed, this can happen.  I can't think of a way to resolve this.  One
>> way would be to remove the waitqueue (port->waitqueue = NULL in
>> unplug_port()), but I'm not sure of the effect on the other parts
>> yet.  I'll leave this one for later analysis.
> No, you are confused by the name, I think.
>
> poll_wait() doesn't actually wait.  It's more like a poll_enqueue().

Yes, but the caller will wait then, and since the wakeup was called
before adding into the waitqueue, it may block forever?

> Cheers,
> Rusty.
Re: [PATCH RFC V11 0/18] Paravirtualized ticket spinlocks
On 07/23/2013 01:06 AM, Konrad Rzeszutek Wilk wrote:
>>> github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v11
>
> Any chance you have a backup git tree? I get:
>
> 	This repository is temporarily unavailable.

I only have it on local apart from there :(. Hope it was a temporary
github problem.  Working for me now.

>>> Please note that we set SPIN_THRESHOLD = 32k with this series,
>>> that would eat up a little bit of overcommit performance on PLE machines
>>> and overall performance of non-PLE machines.
>>>
>>> The older series[3] was tested by Attilio for Xen implementation.
>>>
>>> Note that Konrad needs to revert below two patches to enable xen on hvm
>>> 70dd4998, f10cd522c
>
> We could add that to the series. But let me first test it out - and that
> gets back to the repo :-)

Okay.
Re: [PATCH 08/10] virtio: console: add locks around buffer removal in port unplug path
Amit Shah writes:
> The removal functions act on the vqs, and the vq operations need to be
> locked.
>
> Signed-off-by: Amit Shah

How can userspace access the port now?  By the time we are cleaning up
buffers, there should be no possibility of such accesses.

The number of bugfixes here is deeply disturbing.  I wonder if it'd be
easier to rewrite it all with a lock per port, and one global to protect
ports_driver_data.

Cheers,
Rusty.

> ---
>  drivers/char/virtio_console.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index ce71df1..dd0020d 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -1516,18 +1516,22 @@ static void remove_port_data(struct port *port)
>  {
>  	struct port_buffer *buf;
>
> +	spin_lock_irq(&port->inbuf_lock);
>  	/* Remove unused data this port might have received. */
>  	discard_port_data(port);
>
> -	reclaim_consumed_buffers(port);
> -
>  	/* Remove buffers we queued up for the Host to send us data in. */
>  	while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
>  		free_buf(buf, true);
> +	spin_unlock_irq(&port->inbuf_lock);
> +
> +	spin_lock_irq(&port->outvq_lock);
> +	reclaim_consumed_buffers(port);
>
>  	/* Free pending buffers from the out-queue. */
>  	while ((buf = virtqueue_detach_unused_buf(port->out_vq)))
>  		free_buf(buf, true);
> +	spin_unlock_irq(&port->outvq_lock);
>  }
>
>  /*
> --
> 1.8.1.4
Re: [PATCH 06/10] virtio: console: fix race in port_fops_poll() and port unplug
Amit Shah writes:
> On (Fri) 19 Jul 2013 [18:17:32], Jason Wang wrote:
>> On 07/19/2013 03:48 PM, Amit Shah wrote:
>>> On (Fri) 19 Jul 2013 [15:03:50], Jason Wang wrote:
>>>> On 07/19/2013 04:16 AM, Amit Shah wrote:
>>>>> Between poll() being called and processed, the port can be unplugged.
>>>>> Check if this happened, and bail out.
>>>>>
>>>>> Signed-off-by: Amit Shah
>>>>> ---
>>>>>  drivers/char/virtio_console.c | 4
>>>>>  1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
>>>>> index 7728af9..1d4b748 100644
>>>>> --- a/drivers/char/virtio_console.c
>>>>> +++ b/drivers/char/virtio_console.c
>>>>> @@ -967,6 +967,10 @@ static unsigned int port_fops_poll(struct file *filp, poll_table *wait)
>>>>>  	unsigned int ret;
>>>>>
>>>>>  	port = filp->private_data;
>>>>> +	if (!port->guest_connected) {
>>>>> +		/* Port was unplugged before we could proceed */
>>>>> +		return POLLHUP;
>>>>> +	}
>>>>>  	poll_wait(filp, &port->waitqueue, wait);
>>>>>
>>>>>  	if (!port->guest_connected) {
>>>> Looks still racy here. Unlike port_fops_read(), which checks
>>>> will_read_block(). If unplug happens after the check but before the
>>>> poll_wait(), the caller will be blocked forever.
>>> unplug_port() calls wake_up_interruptible on the waitqueue.
>> I mean the following cases:
> (formatting to fit properly:)
>
>> CPU0:                                       CPU1: unplug_port()
>>
>> if (!port->guest_connected) {
>> 	return POLLHUP;
>> }
>>                                             wake_up_interruptible()
>>
>> poll_wait(filp, &port->waitqueue, wait);
> Agreed, this can happen.  I can't think of a way to resolve this.  One
> way would be to remove the waitqueue (port->waitqueue = NULL in
> unplug_port()), but I'm not sure of the effect on the other parts
> yet.  I'll leave this one for later analysis.

No, you are confused by the name, I think.

poll_wait() doesn't actually wait.  It's more like a poll_enqueue().

Cheers,
Rusty.
Re: [PATCH 05/10] virtio: console: update private_data in struct file only on successful open
Amit Shah writes:
> Mateusz Guzik points out that we update the 'file' struct's private_data
> field before we've successfully done all our checks.  This means we can
> return an error with the private_data field updated.  This could lead to
> problems.
>
> Fix by moving the assignment after all checks are done.

No, this is a bit weird, but it's fine.  If we fail open, filp will be
destroyed; we won't be told about it, and private_data will never be
accessed.

Cheers,
Rusty.

> CC:
> Reported-by: Mateusz Guzik
> Signed-off-by: Amit Shah
> ---
>  drivers/char/virtio_console.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index a39702a..7728af9 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -1032,7 +1032,6 @@ static int port_fops_open(struct inode *inode, struct file *filp)
>  		/* Port was unplugged before we could proceed */
>  		return -ENXIO;
>  	}
> -	filp->private_data = port;
>
>  	/*
>  	 * Don't allow opening of console port devices -- that's done
> @@ -1051,6 +1050,7 @@ static int port_fops_open(struct inode *inode, struct file *filp)
>  		goto out;
>  	}
>
> +	filp->private_data = port;
>  	port->guest_connected = true;
>  	spin_unlock_irq(&port->inbuf_lock);
>
> --
> 1.8.1.4
[patch v2] virtio: console: cleanup an error message
The PTR_ERR(NULL) here is not useful.

Signed-off-by: Dan Carpenter
---
v2: completely different

diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 1b456fe..4cf46d8 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -2215,10 +2215,8 @@ static int __init init(void)
 	}
 
 	pdrvdata.debugfs_dir = debugfs_create_dir("virtio-ports", NULL);
-	if (!pdrvdata.debugfs_dir) {
-		pr_warning("Error %ld creating debugfs dir for virtio-ports\n",
-			   PTR_ERR(pdrvdata.debugfs_dir));
-	}
+	if (!pdrvdata.debugfs_dir)
+		pr_warning("Error creating debugfs dir for virtio-ports\n");
 
 	INIT_LIST_HEAD(&pdrvdata.consoles);
 	INIT_LIST_HEAD(&pdrvdata.portdevs);
Re: is kexec on Xen domU possible?
On 07/22/2013 11:33 AM, Greg KH wrote:
> I don't care about kdump, I care about kexec on domU for people who are
> running on cloud providers with old versions of Xen so that they can
> control what kernel they can boot, when they want to boot it.  If kdump
> works as well, that's just a bonus, but it's down on the list of things
> for me to be concerned about.

Another valid use case, to be sure.

	-hpa
Re: [PATCH RFC V11 0/18] Paravirtualized ticket spinlocks
>> github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v11

Any chance you have a backup git tree? I get:

	This repository is temporarily unavailable.

>> Please note that we set SPIN_THRESHOLD = 32k with this series,
>> that would eat up a little bit of overcommit performance on PLE machines
>> and overall performance of non-PLE machines.
>>
>> The older series[3] was tested by Attilio for Xen implementation.
>>
>> Note that Konrad needs to revert below two patches to enable xen on hvm
>> 70dd4998, f10cd522c

We could add that to the series. But let me first test it out - and that
gets back to the repo :-)
Re: is kexec on Xen domU possible?
On Mon, Jul 22, 2013 at 11:24:46AM -0700, H. Peter Anvin wrote:
> On 07/22/2013 10:20 AM, Eric W. Biederman wrote:
>>>> Also, in any virtualized environment the hypervisor can do a better job
>>>> for things like kdump, simply because it can provide two things that are
>>>> otherwise hard to do:
>>>>
>>>> 1. a known-good system state;
>>>> 2. a known-clean kdump image.
>>>>
>>>> As such, I do encourage the virtualization people to (also) develop
>>>> hypervisor-*aware* solutions for these kinds of things.
>>>
>>> In general I agree, but if you cannot change the hypervisor and/or dom0
>>> (e.g. you are using cloud providers which are stuck on old versions of
>>> Xen) then you have no choice.
>>
>> Which tends to be where kexec on panic comes in most cases.  Getting
>> platform vendors to do something sane tends to be a multi-year political
>> effort of dubious worth while just solving the problem locally actually
>> gets the problem solved for those who care.
>
> It should not be a "one or the other" issue.

I don't care about kdump, I care about kexec on domU for people who are
running on cloud providers with old versions of Xen so that they can
control what kernel they can boot, when they want to boot it.  If kdump
works as well, that's just a bonus, but it's down on the list of things
for me to be concerned about.

thanks,

greg k-h
Re: is kexec on Xen domU possible?
On 07/22/2013 10:20 AM, Eric W. Biederman wrote:
>>> Also, in any virtualized environment the hypervisor can do a better job
>>> for things like kdump, simply because it can provide two things that are
>>> otherwise hard to do:
>>>
>>> 1. a known-good system state;
>>> 2. a known-clean kdump image.
>>>
>>> As such, I do encourage the virtualization people to (also) develop
>>> hypervisor-*aware* solutions for these kinds of things.
>>
>> In general I agree, but if you cannot change the hypervisor and/or dom0
>> (e.g. you are using cloud providers which are stuck on old versions of
>> Xen) then you have no choice.
>
> Which tends to be where kexec on panic comes in most cases.  Getting
> platform vendors to do something sane tends to be a multi-year political
> effort of dubious worth while just solving the problem locally actually
> gets the problem solved for those who care.

It should not be a "one or the other" issue.

	-hpa
Re: is kexec on Xen domU possible?
Daniel Kiper writes:
> On Fri, Jul 19, 2013 at 01:58:05PM -0700, H. Peter Anvin wrote:
>> On 07/19/2013 12:14 PM, Greg KH wrote:
>>>>> The errors that the kexec tools seem to run into is finding the memory
>>>>> to place the new kernel into, is that just an issue that PV guests
>>>>> aren't given enough kernel memory in which to replicate themselves from
>>>>> dom0?
>>>>
>>>> There are a lot of differences between baremetal machines and PV guests.
>>>> For example you are not able to do identity mapping per se in PV guests.
>>>> Arguments to the new kernel are passed in a completely different way, etc.
>>>
>>> Ok, thanks for confirming that it is possible, but doesn't currently
>>> work for pv guests.
>>
>> Also, in any virtualized environment the hypervisor can do a better job
>> for things like kdump, simply because it can provide two things that are
>> otherwise hard to do:
>>
>> 1. a known-good system state;
>> 2. a known-clean kdump image.
>>
>> As such, I do encourage the virtualization people to (also) develop
>> hypervisor-*aware* solutions for these kinds of things.
>
> In general I agree, but if you cannot change the hypervisor and/or dom0
> (e.g. you are using cloud providers which are stuck on old versions of
> Xen) then you have no choice.

Which tends to be where kexec on panic comes in most cases.  Getting
platform vendors to do something sane tends to be a multi-year political
effort of dubious worth while just solving the problem locally actually
gets the problem solved for those who care.

Eric
Re: is kexec on Xen domU possible?
On Fri, Jul 19, 2013 at 01:58:05PM -0700, H. Peter Anvin wrote:
> On 07/19/2013 12:14 PM, Greg KH wrote:
>>>> The errors that the kexec tools seem to run into is finding the memory
>>>> to place the new kernel into, is that just an issue that PV guests
>>>> aren't given enough kernel memory in which to replicate themselves from
>>>> dom0?
>>>
>>> There are a lot of differences between baremetal machines and PV guests.
>>> For example you are not able to do identity mapping per se in PV guests.
>>> Arguments to the new kernel are passed in a completely different way, etc.
>>
>> Ok, thanks for confirming that it is possible, but doesn't currently
>> work for pv guests.
>
> Also, in any virtualized environment the hypervisor can do a better job
> for things like kdump, simply because it can provide two things that are
> otherwise hard to do:
>
> 1. a known-good system state;
> 2. a known-clean kdump image.
>
> As such, I do encourage the virtualization people to (also) develop
> hypervisor-*aware* solutions for these kinds of things.

In general I agree, but if you cannot change the hypervisor and/or dom0
(e.g. you are using cloud providers which are stuck on old versions of
Xen) then you have no choice.

Daniel
Re: [PULL] vhost: cleanups and fixes
On Mon, Jul 15, 2013 at 09:31:49PM +0300, Michael S. Tsirkin wrote:
> The following changes since commit 09a34c8404c1d4c5782de319c02e1d742c57875c:
>
>   vhost/test: update test after vhost cleanups (2013-07-07 18:02:25 +0300)
>
> are available in the git repository at:
>
>   git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
>
> for you to fetch changes up to 22fa90c7fb479694d6affebc049d21f06b714be6:
>
>   vhost: Remove custom vhost rcu usage (2013-07-11 15:38:40 +0300)
>
> ----------------------------------------------------------------
> vhost: more fixes for 3.11
>
> This includes some fixes for vhost net and scsi drivers.
> The test module has already been reworked to avoid rcu
> usage, but the necessary core changes are missing,
> we fixed this.
> Unlikely to affect any real-world users, but it's
> early in the cycle, so let's merge them.
>
> Signed-off-by: Michael S. Tsirkin
>
> ----------------------------------------------------------------
> Asias He (3):
>       vhost-net: Always access vq->private_data under vq mutex
>       vhost-scsi: Always access vq->private_data under vq mutex
>       vhost: Remove custom vhost rcu usage

Hi Linus,

Just checking that all's well with this PULL request.
If it's not merged, we'll have to revert the relevant
change in the test module.  Either way, we need to keep
them in sync.

Pls let me know,

Thanks

>  drivers/vhost/net.c   | 37 -
>  drivers/vhost/scsi.c  | 17 ++---
>  drivers/vhost/test.c  |  6 ++
>  drivers/vhost/vhost.h | 10 ++
>  4 files changed, 26 insertions(+), 44 deletions(-)