Re: [PATCH] Fix: buffer overflow during hvc_alloc().

2020-04-06 Thread Andrew Melnichenko
>
> Description of problem:
> Guest gets a 'Call Trace' when the "virtio_console" module is loaded and
> unloaded frequently.
>
>
> Version-Release number of selected component (if applicable):
>   Guest
>  kernel-4.18.0-167.el8.x86_64
>  seabios-bin-1.11.1-4.module+el8.1.0+4066+0f1aadab.noarch
>  # modinfo virtio_console
>  filename:       /lib/modules/4.18.0-167.el8.x86_64/kernel/drivers/char/virtio_console.ko.xz
>  license:        GPL
>  description:    Virtio console driver
>  rhelversion:    8.2
>  srcversion:     55224090DD07750FAD75C9C
>  alias:          virtio:d0003v*
>  depends:
>  intree:         Y
>  name:           virtio_console
>  vermagic:       4.18.0-167.el8.x86_64 SMP mod_unload modversions
>   Host:
>  qemu-kvm-4.2.0-2.scrmod+el8.2.0+5159+d8aa4d83.x86_64
>  kernel-4.18.0-165.el8.x86_64
>  seabios-bin-1.12.0-5.scrmod+el8.2.0+5159+d8aa4d83.noarch
>
>
>
> How reproducible: 100%
>
>
> Steps to Reproduce:
>
> 1. boot guest with command [1]
> 2. load and unload virtio_console inside guest with loop.sh
># cat loop.sh
> while [ 1 ]
> do
> modprobe virtio_console
> lsmod | grep virt
> modprobe -r virtio_console
> lsmod | grep virt
> done
>
>
>
> Actual results:
> Guest reboots and a vmcore-dmesg.txt file can be collected
>
>
> Expected results:
> Guest works well without errors
>
>
> Additional info:
> The full log is attached.
>
> Call Trace:
> [   22.974500] fuse: init (API version 7.31)
> [   81.498208] [ cut here ]
> [   81.499263] pvqspinlock: lock 0x92080020 has corrupted value 0xc0774ca0!
> [   81.501000] WARNING: CPU: 0 PID: 785 at kernel/locking/qspinlock_paravirt.h:500 __pv_queued_spin_unlock_slowpath+0xc0/0xd0
> [   81.503173] Modules linked in: virtio_console fuse xt_CHECKSUM
> ipt_MASQUERADE xt_conntrack ipt_REJECT nft_counter nf_nat_tftp nft_objref
> nf_conntrack_tftp tun bridge stp llc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6
> nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> nf_tables_set nft_chain_nat_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6
> nf_nat_ipv6 nft_chain_route_ipv6 nft_chain_nat_ipv4 nf_conntrack_ipv4
> nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack nft_chain_route_ipv4
> ip6_tables nft_compat ip_set nf_tables nfnetlink sunrpc bochs_drm
> drm_vram_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt
> fb_sys_fops drm i2c_piix4 pcspkr crct10dif_pclmul crc32_pclmul joydev
> ghash_clmulni_intel ip_tables xfs libcrc32c sd_mod sg ata_generic ata_piix
> virtio_net libata crc32c_intel net_failover failover serio_raw virtio_scsi
> dm_mirror dm_region_hash dm_log dm_mod [last unloaded: virtio_console]
> [   81.517019] CPU: 0 PID: 785 Comm: kworker/0:2 Kdump: loaded Not tainted 4.18.0-167.el8.x86_64 #1
> [   81.518639] Hardware name: Red Hat KVM, BIOS 1.12.0-5.scrmod+el8.2.0+5159+d8aa4d83 04/01/2014
> [   81.520205] Workqueue: events control_work_handler [virtio_console]
> [   81.521354] RIP: 0010:__pv_queued_spin_unlock_slowpath+0xc0/0xd0
> [   81.522450] Code: 07 00 48 63 7a 10 e8 bf 64 f5 ff 66 90 c3 8b 05 e6 cf d6 01 85 c0 74 01 c3 8b 17 48 89 fe 48 c7 c7 38 4b 29 91 e8 3a 6c fa ff <0f> 0b c3 0f 0b 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48
> [   81.525830] RSP: 0018:b51a01ffbd70 EFLAGS: 00010282
> [   81.526798] RAX:  RBX: 0010 RCX:
> [   81.528110] RDX: 9e66f1826480 RSI: 9e66f1816a08 RDI: 9e66f1816a08
> [   81.529437] RBP: 9153ff10 R08: 026c R09: 0053
> [   81.530732] R10:  R11: b51a01ffbc18 R12: 9e66cd682200
> [   81.532133] R13: 9153ff10 R14: 9e6685569500 R15: 9e66cd682000
> [   81.533442] FS:  () GS:9e66f180() knlGS:
> [   81.534914] CS:  0010 DS:  ES:  CR0: 80050033
> [   81.535971] CR2: 5624c55b14d0 CR3: 0003a023c000 CR4: 003406f0
> [   81.537283] Call Trace:
> [   81.537763]
>  __raw_callee_save___pv_queued_spin_unlock_slowpath+0x11/0x20
> [   81.539011]  .slowpath+0x9/0xe
> [   81.539585]  hvc_alloc+0x25e/0x300
> [   81.540237]  init_port_console+0x28/0x100 [virtio_console]
> [   81.541251]  handle_control_message.constprop.27+0x1c4/0x310
> [virtio_console]
> [   81.542546]  control_work_handler+0x70/0x10c [virtio_console]
> [   81.543601]  process_one_work+0x1a7/0x3b0
> [   81.544356]  worker_thread+0x30/0x390
> [   81.545025]  ? create_worker+0x1a0/0x1a0
> [   81.545749]  kthread+0x112/0x130
> [   81.546358]  ? kthread_flush_work_fn+0x10/0x10
> [   81.547183]  ret_from_fork+0x22/0x40
> [   81.547842] ---[ end trace aa97649bd16c8655 ]---
> [   83.546539] general protection fault:  [#1] SMP NOPTI
> [   83.547422] CPU: 5 PID: 3225 Comm: modprobe Kdump: loaded Tainted: G  W- -  - 4.18.0-167.el8.x86_64 #1
> [   83.549191] 

[PATCH] Fix: buffer overflow during hvc_alloc().

2020-04-05 Thread andrew
From: Andrew Melnychenko 

If there are a lot (more than 16) of virtio-console devices, or the
virtio_console module is reloaded, the buffers 'vtermnos' and 'cons_ops'
overflow. In older kernels this overruns a spinlock, which leads to the
kernel freezing:
https://bugzilla.redhat.com/show_bug.cgi?id=1786239

Signed-off-by: Andrew Melnychenko 
---
 drivers/tty/hvc/hvc_console.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

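For reference, below is a hypothetical user-space sketch (not the driver code
itself) of the pre-patch index selection in hvc_alloc(); MAX_NR_HVC_CONSOLES,
vtermnos and last_hvc mirror the names in drivers/tty/hvc/hvc_console.c, the
rest is simplified. It illustrates how repeated module reloads keep
incrementing last_hvc until the chosen index runs past the 16-entry arrays:

/*
 * Simplified, stand-alone illustration of the pre-patch behaviour.
 * Build with e.g.: gcc -Wall sketch.c (file name is arbitrary).
 */
#include <stdio.h>

#define MAX_NR_HVC_CONSOLES 16

/* same GCC range initializer as the kernel source */
static int vtermnos[MAX_NR_HVC_CONSOLES] = {
	[0 ... MAX_NR_HVC_CONSOLES - 1] = -1
};
static int last_hvc = -1;

/* pre-patch logic: no matching slot -> just use a counter */
static int pick_index(int vtermno)
{
	int i;

	for (i = 0; i < MAX_NR_HVC_CONSOLES; i++)
		if (vtermnos[i] == vtermno)
			break;

	if (i >= MAX_NR_HVC_CONSOLES)
		i = ++last_hvc;

	/*
	 * The real code stores vtermnos[i]/cons_ops[i] here with no bounds
	 * check, so once last_hvc passes 15 the store lands behind the
	 * arrays and corrupts whatever follows (the reported spinlock).
	 * The sketch only writes while the index is still in range.
	 */
	if (i < MAX_NR_HVC_CONSOLES)
		vtermnos[i] = vtermno;
	return i;
}

int main(void)
{
	/* each iteration models one modprobe / modprobe -r cycle with a
	 * fresh vterm id; the slot is freed but last_hvc never shrinks */
	for (int reload = 0; reload < 20; reload++) {
		int idx = pick_index(1000 + reload);

		printf("reload %2d -> index %2d%s\n", reload, idx,
		       idx >= MAX_NR_HVC_CONSOLES ? "  (past the arrays!)" : "");
		if (idx < MAX_NR_HVC_CONSOLES)
			vtermnos[idx] = -1;	/* unload frees the slot */
	}
	return 0;
}
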
diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
index 27284a2dcd2b..436cc51c92c3 100644
--- a/drivers/tty/hvc/hvc_console.c
+++ b/drivers/tty/hvc/hvc_console.c
@@ -302,10 +302,6 @@ int hvc_instantiate(uint32_t vtermno, int index, const struct hv_ops *ops)
 	vtermnos[index] = vtermno;
 	cons_ops[index] = ops;
 
-	/* reserve all indices up to and including this index */
-	if (last_hvc < index)
-		last_hvc = index;
-
 	/* check if we need to re-register the kernel console */
 	hvc_check_console(index);
 
@@ -960,13 +956,22 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
 		    cons_ops[i] == hp->ops)
 			break;
 
-	/* no matching slot, just use a counter */
-	if (i >= MAX_NR_HVC_CONSOLES)
-		i = ++last_hvc;
+	if (i >= MAX_NR_HVC_CONSOLES) {
+
+		/* find 'empty' slot for console */
+		for (i = 0; i < MAX_NR_HVC_CONSOLES && vtermnos[i] != -1; i++) {
+		}
+
+		/* no matching slot, just use a counter */
+		if (i == MAX_NR_HVC_CONSOLES)
+			i = ++last_hvc + MAX_NR_HVC_CONSOLES;
+	}
 
 	hp->index = i;
-	cons_ops[i] = ops;
-	vtermnos[i] = vtermno;
+	if (i < MAX_NR_HVC_CONSOLES) {
+		cons_ops[i] = ops;
+		vtermnos[i] = vtermno;
+	}
 
 	list_add_tail(&(hp->next), &hvc_structs);
 	mutex_unlock(&hvc_structs_mutex);
-- 
2.24.1

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization