Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-17 Thread Jakub Staroń
On 5/16/19 10:35 PM, Pankaj Gupta wrote:
> Can I take it as your reviewed/acked-by or tested-by tag for the virtio
> patch? :)

I don't feel that I have enough expertise to give the reviewed-by tag, but
you can take my acked-by + tested-by.

Acked-by: Jakub Staron 
Tested-by: Jakub Staron 

No kernel panics or stalls were encountered while testing these patches (v9)
with QEMU + xfstests.
Some CPU stalls were encountered while testing with crosvm instead of QEMU
with xfstests (test generic/464), but there is no reproducer for QEMU, so the
fault may be on the crosvm side.


The dump for the crosvm/xfstests stall:
[ 2504.175276] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 2504.176681] rcu: 0-...!: (1 GPs behind) idle=9b2/1/0x4000 
softirq=1089198/1089202 fqs=0 
[ 2504.178270] rcu: 2-...!: (1 ticks this GP) idle=cfe/1/0x4002 
softirq=1055108/1055110 fqs=0 
[ 2504.179802] rcu: 3-...!: (1 GPs behind) idle=1d6/1/0x4002 
softirq=1046798/1046802 fqs=0 
[ 2504.181215] rcu: 4-...!: (2 ticks this GP) idle=522/1/0x4002 
softirq=1249063/1249064 fqs=0 
[ 2504.182625] rcu: 5-...!: (1 GPs behind) idle=6da/1/0x4000 
softirq=1131036/1131047 fqs=0 
[ 2504.183955]  (detected by 3, t=0 jiffies, g=1232529, q=1370)
[ 2504.184762] Sending NMI from CPU 3 to CPUs 0:
[ 2504.186400] NMI backtrace for cpu 0
[ 2504.186401] CPU: 0 PID: 6670 Comm: 464 Not tainted 5.1.0+ #1
[ 2504.186401] Hardware name: ChromiumOS crosvm, BIOS 0 
[ 2504.186402] RIP: 0010:queued_spin_lock_slowpath+0x1c/0x1e0
[ 2504.186402] Code: e7 89 c8 f0 44 0f b1 07 39 c1 75 dc f3 c3 0f 1f 44 00 00 
ba 01 00 00 00 8b 07 85 c0 75 0a f0 0f b1 17 85 c0 75 f2 f3 c3 f3 90  ec 81 
fe 00 01 00 00 0f 84 ab 00 00 00 81 e6 00 ff ff ff 75 44
[ 2504.186403] RSP: 0018:c9003ee8 EFLAGS: 0002
[ 2504.186404] RAX: 0001 RBX: 0246 RCX: 00404044
[ 2504.186404] RDX: 0001 RSI: 0001 RDI: 8244a280
[ 2504.186405] RBP: 8244a280 R08: 000f4200 R09: 024709ed6c32
[ 2504.186405] R10:  R11: 0001 R12: 8244a280
[ 2504.186405] R13: 0009 R14: 0009 R15: 
[ 2504.186406] FS:  () GS:8880cc60() 
knlGS:
[ 2504.186406] CS:  0010 DS:  ES:  CR0: 80050033
[ 2504.186406] CR2: 7efd6b0f15d8 CR3: 0260a006 CR4: 00360ef0
[ 2504.186407] DR0:  DR1:  DR2: 
[ 2504.186407] DR3:  DR6: fffe0ff0 DR7: 0400
[ 2504.186407] Call Trace:
[ 2504.186408]  <IRQ>
[ 2504.186408]  _raw_spin_lock_irqsave+0x1d/0x30
[ 2504.186408]  rcu_core+0x3b6/0x740
[ 2504.186408]  ? __hrtimer_run_queues+0x133/0x280
[ 2504.186409]  ? recalibrate_cpu_khz+0x10/0x10
[ 2504.186409]  __do_softirq+0xd8/0x2e4
[ 2504.186409]  irq_exit+0xa3/0xb0
[ 2504.186410]  smp_apic_timer_interrupt+0x67/0x120
[ 2504.186410]  apic_timer_interrupt+0xf/0x20
[ 2504.186410]  </IRQ>
[ 2504.186410] RIP: 0010:unmap_page_range+0x47a/0x9b0
[ 2504.186411] Code: 0f 46 46 10 49 39 6e 18 49 89 46 10 48 89 e8 49 0f 43 46 
18 41 80 4e 20 08 4d 85 c9 49 89 46 18 0f 84 68 ff ff ff 49 8b 51 08 <48> 8d 42 
ff 83 e2 01 49 0f 44 c1 f6 40 18 01 75 38 48 ba ff 0f 00
[ 2504.186411] RSP: 0018:c900036cbcc8 EFLAGS: 0282 ORIG_RAX: 
ff13
[ 2504.186412] RAX:  RBX: 80003751d045 RCX: 0001
[ 2504.186413] RDX: ea0002e09288 RSI: 0269b000 RDI: 8880b6525e40
[ 2504.186413] RBP: 0269c000 R08:  R09: eadd4740
[ 2504.186413] R10: ea0001755700 R11: 8880cc62d120 R12: 02794000
[ 2504.186414] R13: 0269b000 R14: c900036cbdf0 R15: 8880572434d8
[ 2504.186414]  ? unmap_page_range+0x420/0x9b0
[ 2504.186414]  ? release_pages+0x175/0x390
[ 2504.186414]  unmap_vmas+0x7c/0xe0
[ 2504.186415]  exit_mmap+0xa4/0x190
[ 2504.186415]  mmput+0x3b/0x100
[ 2504.186415]  do_exit+0x276/0xc10
[ 2504.186415]  do_group_exit+0x35/0xa0
[ 2504.186415]  __x64_sys_exit_group+0xf/0x10
[ 2504.186416]  do_syscall_64+0x43/0x120
[ 2504.186416]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 2504.186416] RIP: 0033:0x7efd6ae10618
[ 2504.186416] Code: Bad RIP value.
[ 2504.186417] RSP: 002b:7ffcac9bde38 EFLAGS: 0246 ORIG_RAX: 
00e7
[ 2504.186417] RAX: ffda RBX:  RCX: 7efd6ae10618
[ 2504.186418] RDX:  RSI: 003c RDI: 
[ 2504.186418] RBP: 7efd6b0ed8e0 R08: 00e7 R09: ff98
[ 2504.186418] R10: 7ffcac9bddb8 R11: 0246 R12: 7efd6b0ed8e0
[ 2504.186419] R13: 7efd6b0f2c20 R14: 0060 R15: 0070e705
[ 2504.186421] NMI backtrace for cpu 3
[ 2504.226980] CPU: 3 PID: 6596 Comm: xfs_io Not tainted 5.1.0+ #1
[ 2504.227661] Hardware name: ChromiumOS crosvm, BIOS 0 
[ 2504.228261] Call Trace:

Re: [PATCH v3] vsock/virtio: free packets during the socket release

2019-05-17 Thread David Miller
From: Stefano Garzarella 
Date: Fri, 17 May 2019 16:45:43 +0200

> When the socket is released, we should free all packets
> queued in the per-socket list in order to avoid a memory
> leak.
> 
> Signed-off-by: Stefano Garzarella 

Applied and queued up for -stable.


Re: [PATCH v4 0/4] vmw_balloon: Compaction and shrinker support

2019-05-17 Thread Greg Kroah-Hartman
On Fri, May 17, 2019 at 05:57:22PM +, Nadav Amit wrote:
> > On May 17, 2019, at 10:24 AM, Greg Kroah-Hartman 
> >  wrote:
> > 
> > On Fri, May 17, 2019 at 05:10:23PM +, Nadav Amit wrote:
> >>> On May 3, 2019, at 6:25 PM, Nadav Amit  wrote:
> >>> 
>  On Apr 25, 2019, at 4:54 AM, Nadav Amit  wrote:
>  
>  VMware balloon enhancements: adding support for memory compaction,
>  memory shrinker (to prevent OOM) and splitting of refused pages to
>  prevent recurring inflations.
>  
>  Patches 1-2: Support for compaction
>  Patch 3: Support for memory shrinker - disabled by default
>  Patch 4: Split refused pages to improve performance
>  
>  v3->v4:
>  * "get around to" comment [Michael]
>  * Put list_add under page lock [Michael]
>  
>  v2->v3:
>  * Fixing wrong argument type (int->size_t) [Michael]
>  * Fixing a comment (it) [Michael]
>  * Reinstating the BUG_ON() when page is locked [Michael] 
>  
>  v1->v2:
>  * Return number of pages in list enqueue/dequeue interfaces [Michael]
>  * Removed first two patches which were already merged
>  
>  Nadav Amit (4):
>  mm/balloon_compaction: List interfaces
>  vmw_balloon: Compaction support
>  vmw_balloon: Add memory shrinker
>  vmw_balloon: Split refused pages
>  
>  drivers/misc/Kconfig   |   1 +
>  drivers/misc/vmw_balloon.c | 489 ++---
>  include/linux/balloon_compaction.h |   4 +
>  mm/balloon_compaction.c| 144 ++---
>  4 files changed, 553 insertions(+), 85 deletions(-)
>  
>  -- 
>  2.19.1
> >>> 
> >>> Ping.
> >> 
> >> Ping.
> >> 
> >> Greg, did it get lost again?
> > 
> > 
> > I thought you needed the mm developers to ack the first patch, did that
> > ever happen?
> 
> Yes. You will see Michael Tsirkin’s “Acked-by” on it.

Ah, missed that, thanks.  Will queue this up after the -rc1 release is
out, can't do anything until then.

greg k-h


Re: [PATCH v4 0/4] vmw_balloon: Compaction and shrinker support

2019-05-17 Thread Nadav Amit
> On May 17, 2019, at 10:24 AM, Greg Kroah-Hartman  
> wrote:
> 
> On Fri, May 17, 2019 at 05:10:23PM +, Nadav Amit wrote:
>>> On May 3, 2019, at 6:25 PM, Nadav Amit  wrote:
>>> 
 On Apr 25, 2019, at 4:54 AM, Nadav Amit  wrote:
 
 VMware balloon enhancements: adding support for memory compaction,
 memory shrinker (to prevent OOM) and splitting of refused pages to
 prevent recurring inflations.
 
 Patches 1-2: Support for compaction
 Patch 3: Support for memory shrinker - disabled by default
 Patch 4: Split refused pages to improve performance
 
 v3->v4:
 * "get around to" comment [Michael]
 * Put list_add under page lock [Michael]
 
 v2->v3:
 * Fixing wrong argument type (int->size_t) [Michael]
 * Fixing a comment (it) [Michael]
 * Reinstating the BUG_ON() when page is locked [Michael] 
 
 v1->v2:
 * Return number of pages in list enqueue/dequeue interfaces [Michael]
 * Removed first two patches which were already merged
 
 Nadav Amit (4):
 mm/balloon_compaction: List interfaces
 vmw_balloon: Compaction support
 vmw_balloon: Add memory shrinker
 vmw_balloon: Split refused pages
 
 drivers/misc/Kconfig   |   1 +
 drivers/misc/vmw_balloon.c | 489 ++---
 include/linux/balloon_compaction.h |   4 +
 mm/balloon_compaction.c| 144 ++---
 4 files changed, 553 insertions(+), 85 deletions(-)
 
 -- 
 2.19.1
>>> 
>>> Ping.
>> 
>> Ping.
>> 
>> Greg, did it get lost again?
> 
> 
> I thought you needed the mm developers to ack the first patch, did that
> ever happen?

Yes. You will see Michael Tsirkin’s “Acked-by” on it.



Re: [PATCH v4 0/4] vmw_balloon: Compaction and shrinker support

2019-05-17 Thread Greg Kroah-Hartman
On Fri, May 17, 2019 at 05:10:23PM +, Nadav Amit wrote:
> > On May 3, 2019, at 6:25 PM, Nadav Amit  wrote:
> > 
> >> On Apr 25, 2019, at 4:54 AM, Nadav Amit  wrote:
> >> 
> >> VMware balloon enhancements: adding support for memory compaction,
> >> memory shrinker (to prevent OOM) and splitting of refused pages to
> >> prevent recurring inflations.
> >> 
> >> Patches 1-2: Support for compaction
> >> Patch 3: Support for memory shrinker - disabled by default
> >> Patch 4: Split refused pages to improve performance
> >> 
> >> v3->v4:
> >> * "get around to" comment [Michael]
> >> * Put list_add under page lock [Michael]
> >> 
> >> v2->v3:
> >> * Fixing wrong argument type (int->size_t) [Michael]
> >> * Fixing a comment (it) [Michael]
> >> * Reinstating the BUG_ON() when page is locked [Michael] 
> >> 
> >> v1->v2:
> >> * Return number of pages in list enqueue/dequeue interfaces [Michael]
> >> * Removed first two patches which were already merged
> >> 
> >> Nadav Amit (4):
> >> mm/balloon_compaction: List interfaces
> >> vmw_balloon: Compaction support
> >> vmw_balloon: Add memory shrinker
> >> vmw_balloon: Split refused pages
> >> 
> >> drivers/misc/Kconfig   |   1 +
> >> drivers/misc/vmw_balloon.c | 489 ++---
> >> include/linux/balloon_compaction.h |   4 +
> >> mm/balloon_compaction.c| 144 ++---
> >> 4 files changed, 553 insertions(+), 85 deletions(-)
> >> 
> >> -- 
> >> 2.19.1
> > 
> > Ping.
> 
> Ping.
> 
> Greg, did it get lost again?


I thought you needed the mm developers to ack the first patch, did that
ever happen?



Re: [PATCH v4 0/4] vmw_balloon: Compaction and shrinker support

2019-05-17 Thread Nadav Amit
> On May 3, 2019, at 6:25 PM, Nadav Amit  wrote:
> 
>> On Apr 25, 2019, at 4:54 AM, Nadav Amit  wrote:
>> 
>> VMware balloon enhancements: adding support for memory compaction,
>> memory shrinker (to prevent OOM) and splitting of refused pages to
>> prevent recurring inflations.
>> 
>> Patches 1-2: Support for compaction
>> Patch 3: Support for memory shrinker - disabled by default
>> Patch 4: Split refused pages to improve performance
>> 
>> v3->v4:
>> * "get around to" comment [Michael]
>> * Put list_add under page lock [Michael]
>> 
>> v2->v3:
>> * Fixing wrong argument type (int->size_t) [Michael]
>> * Fixing a comment (it) [Michael]
>> * Reinstating the BUG_ON() when page is locked [Michael] 
>> 
>> v1->v2:
>> * Return number of pages in list enqueue/dequeue interfaces [Michael]
>> * Removed first two patches which were already merged
>> 
>> Nadav Amit (4):
>> mm/balloon_compaction: List interfaces
>> vmw_balloon: Compaction support
>> vmw_balloon: Add memory shrinker
>> vmw_balloon: Split refused pages
>> 
>> drivers/misc/Kconfig   |   1 +
>> drivers/misc/vmw_balloon.c | 489 ++---
>> include/linux/balloon_compaction.h |   4 +
>> mm/balloon_compaction.c| 144 ++---
>> 4 files changed, 553 insertions(+), 85 deletions(-)
>> 
>> -- 
>> 2.19.1
> 
> Ping.

Ping.

Greg, did it get lost again?
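For background on patch 3 above ("vmw_balloon: Add memory shrinker"), here is
a minimal sketch of how a balloon driver can hook the kernel shrinker API so
it deflates under memory pressure instead of pushing the guest into OOM. The
deflate/accounting helpers are assumptions for illustration only; this is not
the vmw_balloon implementation.

#include <linux/shrinker.h>

/* Assumed driver internals, shown as prototypes only. */
static unsigned long balloon_inflated_pages(void);
static unsigned long balloon_deflate_pages(unsigned long nr_pages);

static unsigned long balloon_shrink_count(struct shrinker *shrinker,
					  struct shrink_control *sc)
{
	/* Report how many pages the balloon could hand back. */
	return balloon_inflated_pages();
}

static unsigned long balloon_shrink_scan(struct shrinker *shrinker,
					 struct shrink_control *sc)
{
	/* Deflate up to sc->nr_to_scan pages and report how many were freed,
	 * so a host-inflated balloon cannot drive the guest into OOM.
	 */
	return balloon_deflate_pages(sc->nr_to_scan);
}

static struct shrinker balloon_shrinker = {
	.count_objects	= balloon_shrink_count,
	.scan_objects	= balloon_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* Registered only when the (default-off) shrinker option is enabled:
 *	register_shrinker(&balloon_shrinker);
 */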



[PATCH v3] vsock/virtio: free packets during the socket release

2019-05-17 Thread Stefano Garzarella
When the socket is released, we should free all packets
queued in the per-socket list in order to avoid a memory
leak.

Signed-off-by: Stefano Garzarella 
---
This patch was in the series "[PATCH v2 0/8] vsock/virtio: optimizations
to increase the throughput" [1]. As Stefan suggested, I'm sending it as
a separate patch.

v3:
  - use list_for_each_entry_safe() [David]

[1] https://patchwork.kernel.org/cover/10938743/
---
 net/vmw_vsock/virtio_transport_common.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/net/vmw_vsock/virtio_transport_common.c 
b/net/vmw_vsock/virtio_transport_common.c
index 602715fc9a75..f3f3d06cb6d8 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -786,12 +786,19 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
 
 void virtio_transport_release(struct vsock_sock *vsk)
 {
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	struct virtio_vsock_pkt *pkt, *tmp;
 	struct sock *sk = &vsk->sk;
 	bool remove_sock = true;
 
 	lock_sock(sk);
 	if (sk->sk_type == SOCK_STREAM)
 		remove_sock = virtio_transport_close(vsk);
+
+	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
+		list_del(&pkt->list);
+		virtio_transport_free_pkt(pkt);
+	}
 	release_sock(sk);
 
 	if (remove_sock)
-- 
2.20.1



Re: [PATCH 0/2] Add BO reservation to GEM VRAM pin/unpin/push_to_system

2019-05-17 Thread Gerd Hoffmann
  Hi,

> It turns out that the bochs and vbox drivers automatically reserved and
> unreserved the BO from within their pin and unpin functions. The other
> drivers (ast, hibmc and mgag200) performed reservation explicitly. With the
> GEM VRAM conversion, automatic BO reservation within pin and unpin functions
> accidentally got lost. So for bochs and vbox, ttm_bo_validate() worked on
> unlocked BOs.
> 
> This patch set fixes the problem by adding automatic reservation to the
> implementation of drm_gem_vram_{pin,unpin,push_to_system}() to fix bochs
> and vbox. It removes explicit BO reservation around the pin, unpin and
> push-to-system calls in the ast, hibmc and mgag200 drivers.
> 
> The only exception is the cursor handling of mgag200. In this case, the
> mgag200 driver now calls drm_gem_vram_{pin,unpin}_reserved(), which works
> with reserved BOs. The respective code should be refactored in a future
> patch to work with the regular pin and unpin functions.

Looks good, pushed to drm-misc-next.

thanks,
  Gerd
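For context, here is a minimal sketch of the "reserve inside pin" pattern the
cover letter describes, assuming a pin-locked helper does the actual placement
and validation. This is not the real drm_gem_vram_pin() implementation; the
helper name is an assumption.

#include <drm/drm_gem_vram_helper.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Assumed helper that validates placement with the BO already reserved. */
static int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
				   unsigned long pl_flag);

int drm_gem_vram_pin_sketch(struct drm_gem_vram_object *gbo,
			    unsigned long pl_flag)
{
	int ret;

	/* Reserve (lock) the BO first, so ttm_bo_validate() never runs on
	 * an unlocked BO -- the bug described above for bochs and vbox.
	 */
	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
	if (ret)
		return ret;

	ret = drm_gem_vram_pin_locked(gbo, pl_flag);

	ttm_bo_unreserve(&gbo->bo);
	return ret;
}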



Re: [PATCH V2 0/4] Prevent vhost kthread from hogging CPU

2019-05-17 Thread Stefan Hajnoczi
On Fri, May 17, 2019 at 12:29:48AM -0400, Jason Wang wrote:
> Hi:
> 
> This series tries to prevent guest-triggerable CPU hogging through the
> vhost kthread. This is done by introducing and checking a weight after
> each request. The patches have been tested with reproducers for vsock
> and virtio-net. Only a compile test was done for vhost-scsi.
> 
> Please review.
> 
> This addresses CVE-2019-3900.
> 
> Changes from V1:
> - fix use-after-free in vsock patch
> 
> Jason Wang (4):
>   vhost: introduce vhost_exceeds_weight()
>   vhost_net: fix possible infinite loop
>   vhost: vsock: add weight support
>   vhost: scsi: add weight support
> 
>  drivers/vhost/net.c   | 41 ++---
>  drivers/vhost/scsi.c  | 21 ++---
>  drivers/vhost/vhost.c | 20 +++-
>  drivers/vhost/vhost.h |  5 -
>  drivers/vhost/vsock.c | 28 +---
>  5 files changed, 72 insertions(+), 43 deletions(-)
> 
> -- 
> 1.8.3.1
> 

Reviewed-by: Stefan Hajnoczi 
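For context, a minimal sketch of the per-iteration weight check the cover
letter describes. The field names (weight, byte_weight) follow the series
description but are assumptions here, not the exact code from the patches.

#include "vhost.h"	/* drivers/vhost/vhost.h */

/* Called after each handled request: once the worker has done "enough"
 * work in this iteration, requeue it instead of looping, so a guest
 * cannot keep the vhost kthread hogging the CPU.
 */
static bool vhost_exceeds_weight_sketch(struct vhost_virtqueue *vq,
					int pkts, int total_len)
{
	struct vhost_dev *dev = vq->dev;

	if (pkts < dev->weight && total_len < dev->byte_weight)
		return false;	/* keep going in this iteration */

	vhost_poll_queue(&vq->poll);	/* resume later, yield the CPU now */
	return true;
}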



Re: [PATCH RESEND] vsock/virtio: Initialize core virtio vsock before registering the driver

2019-05-17 Thread Stefan Hajnoczi
On Thu, May 16, 2019 at 01:51:07PM -0700, Jorge E. Moreira wrote:
> Avoid a race in which static variables in net/vmw_vsock/af_vsock.c are
> accessed (while handling interrupts) before they are initialized.
> 
> [4.201410] BUG: unable to handle kernel paging request at ffe8
> [4.207829] IP: vsock_addr_equals_addr+0x3/0x20
> [4.211379] PGD 28210067 P4D 28210067 PUD 28212067 PMD 0
> [4.211379] Oops:  [#1] PREEMPT SMP PTI
> [4.211379] Modules linked in:
> [4.211379] CPU: 1 PID: 30 Comm: kworker/1:1 Not tainted 
> 4.14.106-419297-gd7e28cc1f241 #1
> [4.211379] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
> 1.10.2-1 04/01/2014
> [4.211379] Workqueue: virtio_vsock virtio_transport_rx_work
> [4.211379] task: a3273d175280 task.stack: aea1800e8000
> [4.211379] RIP: 0010:vsock_addr_equals_addr+0x3/0x20
> [4.211379] RSP: :aea1800ebd28 EFLAGS: 00010286
> [4.211379] RAX: 0002 RBX:  RCX: 
> b94e42f0
> [4.211379] RDX: 0400 RSI: ffe0 RDI: 
> aea1800ebdd0
> [4.211379] RBP: aea1800ebd58 R08: 0001 R09: 
> 0001
> [4.211379] R10:  R11: b89d5d60 R12: 
> aea1800ebdd0
> [4.211379] R13: 828cbfbf R14:  R15: 
> aea1800ebdc0
> [4.211379] FS:  () GS:a3273fd0() 
> knlGS:
> [4.211379] CS:  0010 DS:  ES:  CR0: 80050033
> [4.211379] CR2: ffe8 CR3: 2820e001 CR4: 
> 001606e0
> [4.211379] DR0:  DR1:  DR2: 
> 
> [4.211379] DR3:  DR6: fffe0ff0 DR7: 
> 0400
> [4.211379] Call Trace:
> [4.211379]  ? vsock_find_connected_socket+0x6c/0xe0
> [4.211379]  virtio_transport_recv_pkt+0x15f/0x740
> [4.211379]  ? detach_buf+0x1b5/0x210
> [4.211379]  virtio_transport_rx_work+0xb7/0x140
> [4.211379]  process_one_work+0x1ef/0x480
> [4.211379]  worker_thread+0x312/0x460
> [4.211379]  kthread+0x132/0x140
> [4.211379]  ? process_one_work+0x480/0x480
> [4.211379]  ? kthread_destroy_worker+0xd0/0xd0
> [4.211379]  ret_from_fork+0x35/0x40
> [4.211379] Code: c7 47 08 00 00 00 00 66 c7 07 28 00 c7 47 08 ff ff ff ff 
> c7 47 04 ff ff ff ff c3 0f 1f 00 66 2e 0f 1f 84 00 00 00 00 00 8b 47 08 <3b> 
> 46 08 75 0a 8b 47 04 3b 46 04 0f 94 c0 c3 31 c0 c3 90 66 2e
> [4.211379] RIP: vsock_addr_equals_addr+0x3/0x20 RSP: aea1800ebd28
> [4.211379] CR2: ffe8
> [4.211379] ---[ end trace f31cc4a2e6df3689 ]---
> [4.211379] Kernel panic - not syncing: Fatal exception in interrupt
> [4.211379] Kernel Offset: 0x3700 from 0x8100 (relocation 
> range: 0x8000-0xbfff)
> [4.211379] Rebooting in 5 seconds..
> 
> Fixes: 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug")
> Cc: Stefan Hajnoczi 
> Cc: Stefano Garzarella 
> Cc: "David S. Miller" 
> Cc: k...@vger.kernel.org
> Cc: virtualization@lists.linux-foundation.org
> Cc: net...@vger.kernel.org
> Cc: kernel-t...@android.com
> Cc: sta...@vger.kernel.org [4.9+]
> Signed-off-by: Jorge E. Moreira 
> Reviewed-by: Stefano Garzarella 
> Reviewed-by: Stefan Hajnoczi 
> ---
>  net/vmw_vsock/virtio_transport.c | 13 ++---
>  1 file changed, 6 insertions(+), 7 deletions(-)

Acked-by: Stefan Hajnoczi 
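The fix is essentially an init-ordering change. A sketch, assuming the usual
structure of net/vmw_vsock/virtio_transport.c (existing statics such as
virtio_vsock_workqueue, virtio_transport and virtio_vsock_driver; the error
labels are illustrative, not copied from the patch): bring up the vsock core
before the virtio driver can start delivering interrupts.

static int __init virtio_vsock_init(void)
{
	int ret;

	virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", 0, 0);
	if (!virtio_vsock_workqueue)
		return -ENOMEM;

	/* Initialize the core (and the static tables in af_vsock.c) first,
	 * so an early interrupt cannot touch uninitialized data.
	 */
	ret = vsock_core_init(&virtio_transport.transport);
	if (ret)
		goto out_wq;

	ret = register_virtio_driver(&virtio_vsock_driver);
	if (ret)
		goto out_vci;

	return 0;

out_vci:
	vsock_core_exit();
out_wq:
	destroy_workqueue(virtio_vsock_workqueue);
	return ret;
}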



Re: [PATCH v2 2/8] vsock/virtio: free packets during the socket release

2019-05-17 Thread Stefano Garzarella
On Thu, May 16, 2019 at 04:32:18PM +0100, Stefan Hajnoczi wrote:
> On Fri, May 10, 2019 at 02:58:37PM +0200, Stefano Garzarella wrote:
> > When the socket is released, we should free all packets
> > queued in the per-socket list in order to avoid a memory
> > leak.
> > 
> > Signed-off-by: Stefano Garzarella 
> > ---
> >  net/vmw_vsock/virtio_transport_common.c | 8 
> >  1 file changed, 8 insertions(+)
> 
> Ouch, this would be nice as a separate patch that can be merged right
> away (with s/virtio_vsock_buf/virtio_vsock_pkt/).

Okay, I'll fix this patch following David's comment and I'll send it as a
separate patch using virtio_vsock_pkt.

Thanks,
Stefano


Re: [PATCH v2 1/8] vsock/virtio: limit the memory used per-socket

2019-05-17 Thread Stefano Garzarella
On Thu, May 16, 2019 at 04:25:33PM +0100, Stefan Hajnoczi wrote:
> On Fri, May 10, 2019 at 02:58:36PM +0200, Stefano Garzarella wrote:
> > +struct virtio_vsock_buf {
> 
> Please add a comment describing the purpose of this struct and to
> differentiate its use from struct virtio_vsock_pkt.
> 

Sure, I'll fix it.

> > +static struct virtio_vsock_buf *
> > +virtio_transport_alloc_buf(struct virtio_vsock_pkt *pkt, bool zero_copy)
> > +{
> > +	struct virtio_vsock_buf *buf;
> > +
> > +	if (pkt->len == 0)
> > +		return NULL;
> > +
> > +	buf = kzalloc(sizeof(*buf), GFP_KERNEL);
> > +	if (!buf)
> > +		return NULL;
> > +
> > +	/* If the buffer in the virtio_vsock_pkt is full, we can move it to
> > +	 * the new virtio_vsock_buf avoiding the copy, because we are sure
> > +	 * that we are not using more memory than is counted by the credit
> > +	 * mechanism.
> > +	 */
> > +	if (zero_copy && pkt->len == pkt->buf_len) {
> > +		buf->addr = pkt->buf;
> > +		pkt->buf = NULL;
> > +	} else {
> > +		buf->addr = kmalloc(pkt->len, GFP_KERNEL);
> 
> buf and buf->addr could be allocated in a single call, though I'm not
> sure how big an optimization this is.
> 

IIUC, in the zero-copy case I should allocate only the buf; otherwise, when
I'm doing a full copy, I should allocate both buf and buf->addr in a single
call.

Is that correct?
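For illustration, a sketch of what the single-allocation variant could look
like for the full-copy path (the zero-copy path would still take over
pkt->buf as in the quoted code). The struct layout and function name are
assumptions, not code from the series.

/* Assumed layout, mirroring the quoted patch; illustration only. */
struct virtio_vsock_buf {
	struct list_head list;
	void *addr;
	u32 len;
};

static struct virtio_vsock_buf *
virtio_transport_alloc_buf_onecall(struct virtio_vsock_pkt *pkt)
{
	struct virtio_vsock_buf *buf;

	if (pkt->len == 0)
		return NULL;

	/* One kmalloc() for both the control struct and the data area. */
	buf = kmalloc(sizeof(*buf) + pkt->len, GFP_KERNEL);
	if (!buf)
		return NULL;

	buf->addr = (void *)(buf + 1);	/* data follows the struct */
	buf->len = pkt->len;
	memcpy(buf->addr, pkt->buf, pkt->len);

	return buf;
}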

> > @@ -841,20 +882,24 @@ virtio_transport_recv_connected(struct sock *sk,
> >  {
> >  	struct vsock_sock *vsk = vsock_sk(sk);
> >  	struct virtio_vsock_sock *vvs = vsk->trans;
> > +	struct virtio_vsock_buf *buf;
> >  	int err = 0;
> >  
> >  	switch (le16_to_cpu(pkt->hdr.op)) {
> >  	case VIRTIO_VSOCK_OP_RW:
> >  		pkt->len = le32_to_cpu(pkt->hdr.len);
> > -		pkt->off = 0;
> > +		buf = virtio_transport_alloc_buf(pkt, true);
> >  
> > -		spin_lock_bh(&vvs->rx_lock);
> > -		virtio_transport_inc_rx_pkt(vvs, pkt);
> > -		list_add_tail(&pkt->list, &vvs->rx_queue);
> > -		spin_unlock_bh(&vvs->rx_lock);
> > +		if (buf) {
> > +			spin_lock_bh(&vvs->rx_lock);
> > +			virtio_transport_inc_rx_pkt(vvs, pkt->len);
> > +			list_add_tail(&buf->list, &vvs->rx_queue);
> > +			spin_unlock_bh(&vvs->rx_lock);
> >  
> > -		sk->sk_data_ready(sk);
> > -		return err;
> > +			sk->sk_data_ready(sk);
> > +		}
> 
> The return value of this function isn't used but the code still makes an
> effort to return errors.  Please return -ENOMEM when buf == NULL.
> 
> If you'd like to remove the return value that's fine too, but please do
> it for the whole function to be consistent.

I'll return -ENOMEM when the allocation fails.

Thanks,
Stefano