Hi,
On 5/30/19 7:09 PM, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.12 [1].
> Since last version [2] we've worked on improving the specification,
> which resulted in the following changes to the interface:
> * Remove the EXEC flag.
> * Add feature
On Thu, Jun 13, 2019 at 4:15 PM Hillf Danton wrote:
>
>
> Hello Dmitry
>
> On Thu, 13 Jun 2019 20:12:06 +0800 Dmitry Vyukov wrote:
> > On Thu, Jun 13, 2019 at 2:07 PM Hillf Danton wrote:
> > >
> > > Hello Jason
> > >
> > > On Thu, 13 Jun 2019 17:10:39 +0800 Jason Wang wrote:
> > > >
> > > > This
On Thu, Jun 13, 2019 at 03:52:27AM +, Sunil Muthuswamy wrote:
> The current vsock code for removing a socket from the list is both
> racy and inefficient. It takes the lock, checks whether the socket is
> in the list, drops the lock, and then, if the socket was on the
> list, deletes it
On Thu, Jun 13, 2019 at 2:07 PM Hillf Danton wrote:
>
>
> Hello Jason
>
> On Thu, 13 Jun 2019 17:10:39 +0800 Jason Wang wrote:
> >
> > This is basically a kfree(ubuf) after the second vhost_net_flush() in
> > vhost_net_release().
> >
> Fairly good catch.
>
> > Could you please post a formal
Sure.
On 13.06.19 13:14, Halil Pasic wrote:
> On Thu, 13 Jun 2019 11:11:13 +0200
> Michael Mueller wrote:
> > Halil,
> > I just ran my toleration tests successfully on current HW for
> > this series.
> > Michael
> Thanks Michael! May I add a
> Tested-by: Michael Mueller
> for each patch?
> On 12.06.19 13:12,
On Thu, 13 Jun 2019 11:11:13 +0200
Michael Mueller wrote:
> Halil,
>
> I just ran my toleration tests successfully on current HW for
> this series.
>
> Michael
Thanks Michael! May I add a
Tested-by: Michael Mueller
for each patch?
>
> On 12.06.19 13:12, Halil Pasic wrote:
> > Enhanced
Halil,
I just ran my toleration tests successfully on current HW for
this series.
Michael
On 12.06.19 13:12, Halil Pasic wrote:
> Enhanced virtualization protection technology may require the use of
> bounce buffers for I/O. While support for this was built into the virtio
> core, virtio-ccw wasn't
On 2019/6/6 10:40 PM, Hillf Danton wrote:
> On Wed, 05 Jun 2019 16:42:05 -0700 (PDT) syzbot wrote:
> > Hello,
> > syzbot found the following crash on:
> > HEAD commit: 788a0249 Merge tag 'arc-5.2-rc4' of
> > git://git.kernel.org/p..
> > git tree: upstream
> > console output:
On 2019/6/6 4:11 PM, Stefano Garzarella wrote:
> On Fri, May 31, 2019 at 05:56:39PM +0800, Jason Wang wrote:
> > On 2019/5/31 4:18 PM, Stefano Garzarella wrote:
> > > On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> > > > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > > > > On Thu, May 30, 2019 at
On 12.06.19 13:12, Halil Pasic wrote:
> Before, virtio-ccw could get away with not using the DMA API for the pieces of
> memory it does ccw I/O with. With protected virtualization this has to
> change, since the hypervisor needs to read and sometimes also write these
> pieces of memory.
The hypervisor is
On 12.06.19 13:12, Halil Pasic wrote:
> Hypervisor needs to interact with the summary indicators, so these
The hypervisor...
> need to be DMA memory as well (at least for protected virtualization
> guests).
> Signed-off-by: Halil Pasic
> Reviewed-by: Cornelia Huck
> ---
On 12.06.19 13:12, Halil Pasic wrote:
> This will come in handy soon when we pull out the indicators from
> virtio_ccw_device to a memory area that is shared with the hypervisor
> (in particular for protected virtualization guests).
> Signed-off-by: Halil Pasic
> Reviewed-by: Pierre Morel
On 12.06.19 13:12, Halil Pasic wrote:
> The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let
> us use it! We actually wanted the vector to span a cacheline all along.
> Signed-off-by: Halil Pasic
> Reviewed-by: Christian Borntraeger
> Reviewed-by: Cornelia Huck
> ---
On 12.06.19 13:12, Halil Pasic wrote:
> Protected virtualization guests have to use shared pages for airq
> notifier bit vectors, because hypervisor needs to write these bits.
because the hypervisor
> Let us make sure we allocate DMA memory for the notifier bit vectors by
> replacing the
On 12.06.19 13:12, Halil Pasic wrote:
> As virtio-ccw devices are channel devices, we need to use the
> DMA area within the common I/O layer for any communication with
> the hypervisor.
> Note that we do not need to use that area for control blocks
> directly referenced by instructions, e.g. the orb.
On 12.06.19 13:12, Halil Pasic wrote:
> To support protected virtualization, cio will need to make sure the
> memory used for communication with the hypervisor is DMA memory.
> Let us introduce one global pool for cio.
> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> idea is
On 12.06.19 13:12, Halil Pasic wrote:
> On s390, protected virtualization guests have to use bounced I/O
> buffers. That requires some plumbing.
sed 's/ / /'
> Let us make sure, any device that uses DMA API with direct ops correctly
> is spared from the problems, that a hypervisor attempting
On Thu, Jun 06, 2019 at 11:59:15AM +0100, Emil Velikov wrote:
> On Mon, 27 May 2019 at 09:19, Emil Velikov wrote:
> >
> > From: Emil Velikov
> >
> > The authentication can be circumvented, by design, by using the render
> > node.
> >
> > From the driver POV there is no distinction between
On Mon, Jun 10, 2019 at 02:18:10PM -0700, davidri...@chromium.org wrote:
> From: David Riley
>
> After data is copied to the cache entry, atomic_set is used to indicate
> that the data in the entry is valid without appropriate memory barriers.
> Similarly the read side was missing the corresponding
To improve TLB shootdown performance, flush the remote and local TLBs
concurrently. Introduce flush_tlb_multi() that does so. The current
flush_tlb_others() interface is kept, since paravirtual interfaces need
to be adapted before it can be removed. This is left for future
work. In such PV