On 2019/01/29 11:23, Jason Wang wrote:
> On 2019/1/29 8:45 AM, Toshiaki Makita wrote:
...
>> @@ -2666,10 +2696,10 @@ static void free_unused_bufs(struct virtnet_info *vi)
>>      for (i = 0; i < vi->max_queue_pairs; i++) {
>>          struct virtqueue *vq = vi->sq[i].vq;
>>          while ((buf ...
On 2019/1/29 10:35 AM, Toshiaki Makita wrote:
On 2019/01/29 11:23, Jason Wang wrote:
On 2019/1/29 8:45 AM, Toshiaki Makita wrote:
...
@@ -2666,10 +2696,10 @@ static void free_unused_bufs(struct virtnet_info *vi)
     for (i = 0; i < vi->max_queue_pairs; i++) {
         struct virtqueue *vq ...
On 2019/1/27 8:31 AM, Michael S. Tsirkin wrote:
On Sat, Jan 26, 2019 at 02:37:08PM -0800, David Miller wrote:
From: Jason Wang
Date: Wed, 23 Jan 2019 17:55:52 +0800
This series tries to access virtqueue metadata through kernel virtual
address instead of copy_user() friends, since they had too much overhead ...
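The gist of that approach, as a rough sketch: pin the userspace pages that
back the ring metadata once, map them into the kernel, and read through the
mapping instead of doing copy_from_user() per access. The helper below is
invented for illustration; get_user_pages_fast()/vmap() are one plausible
mechanism, not necessarily what the series actually does:

    /* Illustrative sketch only: pin the user pages backing the
     * virtqueue metadata and map them contiguously in kernel space,
     * so the ring can be read through a plain pointer instead of
     * copy_from_user() on every access.
     */
    static void *map_ring_metadata(unsigned long uaddr, int npages,
                                   struct page **pages)
    {
            int pinned = get_user_pages_fast(uaddr, npages,
                                             1 /* write */, pages);

            if (pinned != npages) {
                    /* partial pin: release everything and fall back
                     * to the copy_user() friends */
                    while (pinned > 0)
                            put_page(pages[--pinned]);
                    return NULL;
            }
            return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
    }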
On 2019/1/29 8:45 AM, Toshiaki Makita wrote:
Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards dev not
ready for XDP") tried to avoid access to unexpected sq while XDP is
disabled, but was not complete.
There was a small window which caused out-of-bounds sq access in
virtnet_xdp_xmit() while disabling XDP ...
We do not reset or free up unused buffers when enabling/disabling XDP,
so it can happen that xdp_frames are freed after disabling XDP or
sk_buffs are freed after enabling XDP on xdp tx queues.
Thus we need to handle both forms (xdp_frames and sk_buffs) regardless
of XDP setting.
One way to trigger ...
On 2019/1/29 8:45 AM, Toshiaki Makita wrote:
We do not reset or free up unused buffers when enabling/disabling XDP,
so it can happen that xdp_frames are freed after disabling XDP or
sk_buffs are freed after enabling XDP on xdp tx queues.
Thus we need to handle both forms (xdp_frames and sk_buffs) regardless
of XDP setting ...
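A minimal sketch of one way to tell the two forms apart when draining the
queues, assuming the low bit of the buffer token is free to use as a tag
(VIRTNET_XDP_FLAG, is_xdp_frame() and friends are illustrative names):

    #define VIRTNET_XDP_FLAG    BIT(0)  /* tags xdp_frames in the vq token */

    static bool is_xdp_frame(void *ptr)
    {
            return (unsigned long)ptr & VIRTNET_XDP_FLAG;
    }

    static void *xdp_to_ptr(struct xdp_frame *ptr)
    {
            return (void *)((unsigned long)ptr | VIRTNET_XDP_FLAG);
    }

    static struct xdp_frame *ptr_to_xdp(void *ptr)
    {
            return (struct xdp_frame *)((unsigned long)ptr &
                                        ~VIRTNET_XDP_FLAG);
    }

    /* In free_unused_bufs(): free by form, not by current XDP state */
    while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
            if (is_xdp_frame(buf))
                    xdp_return_frame(ptr_to_xdp(buf));
            else
                    dev_kfree_skb(buf);
    }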
Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards dev not
ready for XDP") tried to avoid access to unexpected sq while XDP is
disabled, but was not complete.
There was a small window which caused out-of-bounds sq access in
virtnet_xdp_xmit() while disabling XDP.
An example case of ...
When XDP is disabled, curr_queue_pairs + smp_processor_id() can be
larger than max_queue_pairs.
There is no guarantee that we have enough XDP send queues dedicated for
each cpu when XDP is disabled, so do not count drops on sq in that case.
Fixes: 5b8f3c8d30a6 ("virtio_net: Add XDP related stats")
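A sketch of the guard this implies in virtnet_xdp_xmit(), assuming the
XDP-enabled state is inferred from the rx queue's attached program (field
and helper names follow virtio_net conventions but treat them as
illustrative):

    struct receive_queue *rq = vi->rq;
    struct bpf_prog *xdp_prog;
    struct send_queue *sq;
    unsigned int qp;

    /* Bail out before touching any sq: if no program is loaded, the
     * dedicated XDP tx queues do not exist, and
     * curr_queue_pairs + smp_processor_id() may index past
     * max_queue_pairs.  Returning here also means no drops are
     * counted on a bogus sq.
     */
    xdp_prog = rcu_dereference(rq->xdp_prog);
    if (!xdp_prog)
            return -ENXIO;

    qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
    sq = &vi->sq[qp];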
When napi_tx is enabled, virtnet_poll_cleantx() called
free_old_xmit_skbs() even for the xdp send queue.
This is bogus since the queue has xdp_frames, not sk_buffs: it mangled
the device tx bytes counters because skb->len is a meaningless value
there, and even triggered an oops due to a general protection fault on
freeing ...
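A sketch of the corresponding check, assuming a helper that identifies
queues carrying raw XDP buffers (the name is_xdp_raw_buffer_queue() is
illustrative, and the free_old_xmit_skbs() signature may differ):

    /* In virtnet_poll_cleantx() and the tx NAPI poll: an XDP sq holds
     * xdp_frames, so skb-based completion must not run on it.
     */
    if (!is_xdp_raw_buffer_queue(vi, index))
            free_old_xmit_skbs(sq);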
When _virtnet_set_queues() failed we did not restore real_num_rx_queues.
Fix this by placing the change of real_num_rx_queues after
_virtnet_set_queues().
This order is also in line with virtnet_set_channels().
Fixes: 4941d472bf95 ("virtio-net: do not reset during XDP set")
Signed-off-by: Toshiaki Makita
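The resulting order, sketched (variable names as in virtnet_xdp_set(),
treat the details as illustrative):

    /* Commit the queue count first; only expose the new
     * real_num_rx_queues once _virtnet_set_queues() has succeeded,
     * as virtnet_set_channels() already does.
     */
    err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
    if (err)
            goto err;
    netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);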
put_page() can work as a fallback for freeing xdp_frames, but the
appropriate way is to use xdp_return_frame().
Fixes: cac320c850ef ("virtio_net: convert to use generic xdp_frame and
xdp_return_frame API")
Signed-off-by: Toshiaki Makita
Acked-by: Jason Wang
Acked-by: Jesper Dangaard Brouer
Ack ...
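In code terms the change is roughly this, against the draining loop
sketched earlier (ptr_to_xdp() is the illustrative untagging helper):

    -       put_page(virt_to_head_page(buf));
    +       xdp_return_frame(ptr_to_xdp(buf));

xdp_return_frame() matters because it honours the frame's memory model
(e.g. a page_pool) instead of assuming a plain page reference.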
While I'm looking into how to account standard tx counters on XDP tx
processing, I found several bugs around XDP tx and napi_tx.
Patch1: Fix oops on error path. Patch2 depends on this.
Patch2: Fix memory corruption on freeing xdp_frames with napi_tx enabled.
Patch3: Minor fix that patch5 depends on.
Patch4: ...
Commit 4e09ff536284 ("virtio-net: disable NAPI only when enabled during
XDP set") tried to fix inappropriate NAPI enabling/disabling when
!netif_running(), but was not complete.
On error path virtio_net could enable NAPI even when !netif_running().
This can cause enabling NAPI twice on virtnet_open() ...
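A sketch of the error path this calls for (helper names as in virtio_net,
but treat them as illustrative):

    /* Error path of the XDP setup: NAPI was only disabled above when
     * the interface was running, so only re-enable it in that case;
     * napi_enable() hits a BUG_ON if called on an already-enabled
     * instance.
     */
    err:
            if (netif_running(dev)) {
                    for (i = 0; i < vi->max_queue_pairs; i++) {
                            virtnet_napi_enable(vi->rq[i].vq,
                                                &vi->rq[i].napi);
                            virtnet_napi_tx_enable(vi, vi->sq[i].vq,
                                                   &vi->sq[i].napi);
                    }
            }
            return err;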
On Thu, Jan 24, 2019 at 04:00:00PM +0100, Joerg Roedel wrote:
> On Thu, Jan 24, 2019 at 09:41:07AM +0100, Christoph Hellwig wrote:
> > On Thu, Jan 24, 2019 at 09:29:23AM +0100, Joerg Roedel wrote:
> > > > As I've just introduced and fixed a bug in this area in the current
> > > > cycle - I don't th ...
On Mon, Jan 28, 2019 at 09:05:26AM +0100, Christoph Hellwig wrote:
> On Thu, Jan 24, 2019 at 10:51:51AM +0100, Joerg Roedel wrote:
> > On Thu, Jan 24, 2019 at 09:42:21AM +0100, Christoph Hellwig wrote:
> > > Yes. But more importantly it would fix the limit for all other block
> > > drivers that set large segment sizes when running over swiotlb ...
On Mon, Jan 28, 2019 at 10:20:05AM -0500, Michael S. Tsirkin wrote:
> On Wed, Jan 23, 2019 at 04:14:53PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jan 23, 2019 at 01:51:29PM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Jan 23, 2019 at 05:30:44PM +0100, Joerg Roedel wrote:
> > > > Hi,
> > >
On Wed, Jan 23, 2019 at 04:14:53PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 23, 2019 at 01:51:29PM -0500, Michael S. Tsirkin wrote:
> > On Wed, Jan 23, 2019 at 05:30:44PM +0100, Joerg Roedel wrote:
> > > Hi,
> > >
> > > here is the third version of this patch-set. Previous versions ...
On Fri, Jan 25, 2019 at 06:25:27PM +0100, Noralf Trønnes wrote:
>
>
> On 18.01.2019 13:20, Gerd Hoffmann wrote:
> > Switch qxl over to the new generic fbdev emulation.
> >
> > Signed-off-by: Gerd Hoffmann
> > ---
> > drivers/gpu/drm/qxl/qxl_display.c | 7 ---
> > drivers/gpu/drm/qxl/qxl_d ...
> > The cursor must be set again after creating the primary surface.
> > Also drop the error message.

> >      if (!bo->is_primary) {
> > -        if (!same_shadow)
> > +        if (!same_shadow) {
> >              qxl_io_create_primary(qdev, 0, bo);
> > +            qxl_primary ...
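The hunk is cut off, but the idea it quotes looks roughly like the
following; qxl_primary_apply_cursor() exists in the qxl driver, though the
exact call here is an assumption from the truncated context:

    if (!bo->is_primary) {
            if (!same_shadow) {
                    qxl_io_create_primary(qdev, 0, bo);
                    /* creating the primary surface loses the cursor
                     * state on the device; set the cursor again */
                    qxl_primary_apply_cursor(plane);
            }
            bo->is_primary = true;
    }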
On Thu, Jan 24, 2019 at 10:51:51AM +0100, Joerg Roedel wrote:
> On Thu, Jan 24, 2019 at 09:42:21AM +0100, Christoph Hellwig wrote:
> > Yes. But more importantly it would fix the limit for all other block
> > drivers that set large segment sizes when running over swiotlb.
>
> True, so it would be ...
After batched used ring updating was introduced in commit e2b3b35eb989
("vhost_net: batch used ring update in rx"), we tend to batch heads in
vq->heads for more than one packet. But the quota passed to
get_rx_bufs() was not correctly limited, which can result in an OOB
write in vq->heads.
head ...
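The truncated line is presumably the headcount call site; a sketch of the
problem and the clamp, with the argument list as in vhost_net's
handle_rx() (treat the details as illustrative):

    /* vq->heads holds UIO_MAXIOV entries.  With batching, new heads
     * are appended starting at nvq->done_idx, so a full UIO_MAXIOV
     * quota lets get_rx_bufs() write past the end of the array.
     * Shrink the quota by what is already batched:
     */
    headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
                            vhost_len, &in, vq_log, &log,
                            likely(mergeable) ?
                            UIO_MAXIOV - nvq->done_idx : 1);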
On Fri, Jan 25, 2019 at 09:45:26AM +, Peng Fan wrote:
> Just have a question:
>
> Since vmalloc_to_page() is OK for the CMA area, there is no need to
> take CMA and per-device CMA into consideration, right?
The CMA area itself is a physical memory region. If it is a non-highmem
region you can call virt_to_page() ...
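That is, sketched (purely illustrative):

    /* Lowmem (linearly mapped) kernel addresses translate with
     * virt_to_page(); vmalloc/highmem mappings need
     * vmalloc_to_page() instead.
     */
    struct page *page = is_vmalloc_addr(addr) ?
                        vmalloc_to_page(addr) : virt_to_page(addr);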
Hi,
> > If the above explains things better to you I should probably replace the
> > commit message with that.
>
> This is actually my first review of a driver that I'm not familiar with.
> I'm not quite sure how much in-depth understanding is required to
> put my ack on it.
Usually I try ...
On 28.01.2019 09:59, Gerd Hoffmann wrote:
> On Fri, Jan 25, 2019 at 06:25:27PM +0100, Noralf Trønnes wrote:
>>
>>
>> On 18.01.2019 13:20, Gerd Hoffmann wrote:
>>> Switch qxl over to the new generic fbdev emulation.
>>>
>>> Signed-off-by: Gerd Hoffmann
>>> ---
>>> drivers/gpu/drm/qxl/qxl_displ ...
On 28.01.2019 09:10, Gerd Hoffmann wrote:
>>> The cursor must be set again after creating the primary surface.
>>> Also drop the error message.
>
>>>      if (!bo->is_primary) {
>>> -        if (!same_shadow)
>>> +        if (!same_shadow) {
>>>              qxl_io_create_primary(qd ...