On Thu, Nov 11, 2021 at 11:40:02AM -0800, Srivatsa S. Bhat wrote:
> On Thu, Nov 11, 2021 at 07:45:02PM +0100, Greg KH wrote:
> > On Thu, Nov 11, 2021 at 07:39:16AM -0800, Srivatsa S. Bhat wrote:
> > > On Thu, Nov 11, 2021 at 07:50:39AM +0100, Greg KH wrote:
On Wed, Nov 10, 2021 at 12:08:16PM -0800, Srivatsa S. Bhat wrote:
On 11-11-21, 17:04, Vincent Whitchurch wrote:
> If a timeout is hit, it can result in incorrect data on the I2C bus
> and/or memory corruption in the guest since the device can still be
> operating on the buffers it was given while the guest has freed them.
>
> Here is, for example, the start of
On Thu, 11 Nov 2021 10:02:01 -0500, Michael S. Tsirkin wrote:
> On Thu, Nov 11, 2021 at 02:52:07PM +0800, Xuan Zhuo wrote:
> > On Wed, 10 Nov 2021 07:53:44 -0500, Michael S. Tsirkin
> > wrote:
> > > On Mon, Nov 08, 2021 at 10:47:40PM +0800, Xuan Zhuo wrote:
> > > > On Mon, 8 Nov 2021 08:49:27 -0500, Michael S. Tsirkin wrote:
I get the following splats with a KVM guest in idle; after a few seconds
it starts:
[ 242.412806] INFO: task kworker/6:2:271 blocked for more than 120 seconds.
[ 242.415790] Tainted: G E 5.15.0-next-2021 #68
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Hi Linus,
On Thu, Nov 11, 2021 at 2:03 PM Sudip Mukherjee
wrote:
>
> Hi Linus,
>
> My testing has been failing for the last few days. Last good test was
> with 6f2b76a4a384 and I started seeing the failure with ce840177930f5
> where the boot timed out.
>
> Last good test - https://openqa.qa.codethink.c
On Thu, Nov 11, 2021 at 07:45:02PM +0100, Greg KH wrote:
> On Thu, Nov 11, 2021 at 07:39:16AM -0800, Srivatsa S. Bhat wrote:
> > On Thu, Nov 11, 2021 at 07:50:39AM +0100, Greg KH wrote:
> > > On Wed, Nov 10, 2021 at 12:08:16PM -0800, Srivatsa S. Bhat wrote:
> > > > From: Srivatsa S. Bhat (VMware)
On Thu, Nov 11, 2021 at 07:39:16AM -0800, Srivatsa S. Bhat wrote:
> On Thu, Nov 11, 2021 at 07:50:39AM +0100, Greg KH wrote:
> > On Wed, Nov 10, 2021 at 12:08:16PM -0800, Srivatsa S. Bhat wrote:
> > > From: Srivatsa S. Bhat (VMware)
> > >
> > > Deep has decided to transfer maintainership of the V
On Thu, Nov 11, 2021 at 05:04:12PM +0100, Vincent Whitchurch wrote:
> The driver currently assumes that the notify callback is only received
> when the device is done with all the queued buffers.
>
> However, this is not true, since the notify callback could be called
> without any of the queued b
On Thu, Nov 11, 2021 at 05:04:11PM +0100, Vincent Whitchurch wrote:
> If a timeout is hit, it can result in incorrect data on the I2C bus
> and/or memory corruption in the guest since the device can still be
> operating on the buffers it was given while the guest has freed them.
>
> Here is, for
If a timeout is hit, it can result in incorrect data on the I2C bus
and/or memory corruption in the guest since the device can still be
operating on the buffers it was given while the guest has freed them.
Here is, for example, the start of a slub_debug splat which was
triggered on the next trans
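To make that failure mode concrete, here is a minimal sketch of the unsafe
pattern being described, assuming a simplified request structure (the names
below are hypothetical, not the actual virtio-i2c code): on timeout the
guest stops waiting and frees a buffer the device may still be writing to.

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_req {
	struct completion done;	/* signalled from the virtqueue callback */
	u8 *buf;		/* buffer previously handed to the device */
};

static int demo_xfer(struct demo_req *req)
{
	if (!wait_for_completion_timeout(&req->done,
					 msecs_to_jiffies(100))) {
		/*
		 * BUG: on timeout the device still owns req->buf, so
		 * freeing it here lets the device write into memory the
		 * guest has reused -- which is what the slub_debug splat
		 * above catches.
		 */
		kfree(req->buf);
		return -ETIMEDOUT;
	}

	kfree(req->buf);	/* safe: the device returned the buffer */
	return 0;
}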
The driver currently assumes that the notify callback is only received
when the device is done with all the queued buffers.
However, this is not true, since the notify callback could be called
without any of the queued buffers being completed (for example, with
virtio-pci and shared interrupts) or
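As a sketch of the safer handling this implies (again with hypothetical
names, not the driver's actual structures): the callback should reap only
what the device has really marked used via virtqueue_get_buf() and wake the
waiter once every queued buffer is back, instead of treating the first
notification as "all done".

#include <linux/completion.h>
#include <linux/virtio.h>

struct demo_xfer {
	struct completion done;		/* waiter blocks on this */
	unsigned int num_queued;	/* buffers added to the vq */
	unsigned int num_completed;	/* buffers returned so far */
};

static void demo_vq_callback(struct virtqueue *vq)
{
	struct demo_xfer *x = vq->vdev->priv;	/* assumed driver state */
	unsigned int len;

	/* Reap only the buffers the device has actually completed... */
	while (virtqueue_get_buf(vq, &len))
		x->num_completed++;

	/* ...and signal the waiter only when all of them are back. */
	if (x->num_completed == x->num_queued)
		complete(&x->done);
}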
This fixes a couple of bugs in the buffer handling in virtio-i2c which can
result in incorrect data on the I2C bus or memory corruption in the guest.
I tested this on UML (virtio-uml needs a bug fix too, I will send that out
later) with the device implementation in rust-vmm/vhost-device.
Changes
On Thu, Nov 11, 2021 at 07:50:39AM +0100, Greg KH wrote:
> On Wed, Nov 10, 2021 at 12:08:16PM -0800, Srivatsa S. Bhat wrote:
> > From: Srivatsa S. Bhat (VMware)
> >
> > Deep has decided to transfer maintainership of the VMware hypervisor
> > interface to Srivatsa, and the joint-maintainership of
On Thu, Nov 11, 2021 at 07:58:29AM +, Wang, Wei W wrote:
> On Wednesday, November 10, 2021 6:50 PM, Michael S. Tsirkin wrote:
> > On Wed, Nov 10, 2021 at 07:12:36AM +, Wang, Wei W wrote:
> >
> > hypercalls are fundamentally hypervisor dependent though.
>
> Yes, each hypervisor needs to sup
On Thu, Nov 11, 2021 at 02:52:07PM +0800, Xuan Zhuo wrote:
> On Wed, 10 Nov 2021 07:53:44 -0500, Michael S. Tsirkin
> wrote:
> > On Mon, Nov 08, 2021 at 10:47:40PM +0800, Xuan Zhuo wrote:
> > > On Mon, 8 Nov 2021 08:49:27 -0500, Michael S. Tsirkin
> > > wrote:
> > > >
> > > > Hmm a bunch of com
On Thu, 11 Nov 2021 07:50:39 +0100
Greg KH wrote:
> > Signed-off-by: Srivatsa S. Bhat (VMware)
> > Acked-by: Alexey Makhalov
> > Acked-by: Deep Shah
> > Acked-by: Juergen Gross
> > Cc: stable@vger.kernel.org
>
> Why are MAINTAINERS updates needed for stable? That's not normal :(
Probably
On 10.11.21 21:09, Srivatsa S. Bhat wrote:
From: Srivatsa S. Bhat (VMware)
VMware mailing lists in the MAINTAINERS file are private lists meant
for VMware-internal review/notification for patches to the respective
subsystems. Anyone can post to these addresses, but there is no public
read access
On Wed, 2021-11-10 at 17:39 -0800, Jakub Kicinski wrote:
> On Wed, 10 Nov 2021 12:09:06 -0800 Srivatsa S. Bhat wrote:
> > DRM DRIVER FOR VMWARE VIRTUAL GPU
> > -M: "VMware Graphics"
> > M: Zack Rusin
> > +R: VMware Graphics Reviewers
> > L: dri-de...@lists.freedesktop.org
> > S: Supported
>
On 11/11/21 09:14, Wang, Wei W wrote:
Adding Andra and Sergio, because IIRC Firecracker and libkrun
emulates virtio-vsock with virtio-mmio so the implementation
should be simple and also not directly tied to a specific VMM.
OK. This would be OK for KVM based guests. For Hyperv and VMWare
based
On Wednesday, November 10, 2021 7:17 PM, Stefano Garzarella wrote:
> Adding Andra and Sergio, because IIRC Firecracker and libkrun emulates
> virtio-vsock with virtio-mmio so the implementation should be simple and also
> not directly tied to a specific VMM.
>
OK. This would be OK for KVM based
> From: Stefan Hajnoczi
On Wednesday, November 10, 2021 5:35 PM, Stefan Hajnoczi wrote:
> AF_VSOCK is designed to allow multiple transports, so why not. There is a cost
> to developing and maintaining a vsock transport though.
Yes. The effort could be reduced via simplifying the design as much as possible.