Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-19 Thread Pankaj Gupta


> 
> 
> > On 5/16/19 10:35 PM, Pankaj Gupta wrote:
> > > Can I take it your reviewed/acked-by? or tested-by tag? for the virtio
> > > patch :)
> > I don't feel that I have enough expertise to give the reviewed-by tag, but
> > you can take my acked-by + tested-by.
> > 
> > Acked-by: Jakub Staron 
> > Tested-by: Jakub Staron 
> > 
> > No kernel panics/stalls were encountered while testing these patches (v9)
> > with QEMU + xfstests.
> 
> Thank you for testing and confirming the results. I will add your Tested-by &
> Acked-by tags in v10.
> 
> > Some CPU stalls were encountered while testing with crosvm instead of QEMU
> > with xfstests (test generic/464), but there is no repro for QEMU, so the
> > fault may be on the side of crosvm.
> 
> Yes, this looks crosvm-related, as we did not see any of this in my or your
> testing with QEMU.

Also, the stalls don't seem to be related to virtio-pmem.

Thanks,
Pankaj


Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-19 Thread Pankaj Gupta


> On 5/16/19 10:35 PM, Pankaj Gupta wrote:
> > Can I take it your reviewed/acked-by? or tested-by tag? for the virtio
> > patch :)
> I don't feel that I have enough expertise to give the reviewed-by tag, but
> you can take my acked-by + tested-by.
> 
> Acked-by: Jakub Staron 
> Tested-by: Jakub Staron 
> 
> No kernel panics/stalls were encountered while testing these patches (v9)
> with QEMU + xfstests.

Thank you for testing and confirming the results. I will add your Tested-by &
Acked-by tags in v10.

> Some CPU stalls were encountered while testing with crosvm instead of QEMU
> with xfstests (test generic/464), but there is no repro for QEMU, so the
> fault may be on the side of crosvm.

Yes, this looks crosvm-related, as we did not see any of this in my or your
testing with QEMU.

> 
> 
> The dump for the crosvm/xfstests stall:
> [ 2504.175276] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> [ 2504.176681] rcu: 0-...!: (1 GPs behind) idle=9b2/1/0x4000
> softirq=1089198/1089202 fqs=0
> [ 2504.178270] rcu: 2-...!: (1 ticks this GP)
> idle=cfe/1/0x4002 softirq=1055108/1055110 fqs=0
> [ 2504.179802] rcu: 3-...!: (1 GPs behind) idle=1d6/1/0x4002
> softirq=1046798/1046802 fqs=0
> [ 2504.181215] rcu: 4-...!: (2 ticks this GP)
> idle=522/1/0x4002 softirq=1249063/1249064 fqs=0
> [ 2504.182625] rcu: 5-...!: (1 GPs behind) idle=6da/1/0x4000
> softirq=1131036/1131047 fqs=0
> [ 2504.183955]  (detected by 3, t=0 jiffies, g=1232529, q=1370)
> [ 2504.184762] Sending NMI from CPU 3 to CPUs 0:
> [ 2504.186400] NMI backtrace for cpu 0
> [ 2504.186401] CPU: 0 PID: 6670 Comm: 464 Not tainted 5.1.0+ #1
> [ 2504.186401] Hardware name: ChromiumOS crosvm, BIOS 0
> [ 2504.186402] RIP: 0010:queued_spin_lock_slowpath+0x1c/0x1e0
> [ 2504.186402] Code: e7 89 c8 f0 44 0f b1 07 39 c1 75 dc f3 c3 0f 1f 44 00 00
> ba 01 00 00 00 8b 07 85 c0 75 0a f0 0f b1 17 85 c0 75 f2 f3 c3 f3 90  ec
> 81 fe 00 01 00 00 0f 84 ab 00 00 00 81 e6 00 ff ff ff 75 44
> [ 2504.186403] RSP: 0018:c9003ee8 EFLAGS: 0002
> [ 2504.186404] RAX: 0001 RBX: 0246 RCX:
> 00404044
> [ 2504.186404] RDX: 0001 RSI: 0001 RDI:
> 8244a280
> [ 2504.186405] RBP: 8244a280 R08: 000f4200 R09:
> 024709ed6c32
> [ 2504.186405] R10:  R11: 0001 R12:
> 8244a280
> [ 2504.186405] R13: 0009 R14: 0009 R15:
> 
> [ 2504.186406] FS:  () GS:8880cc60()
> knlGS:
> [ 2504.186406] CS:  0010 DS:  ES:  CR0: 80050033
> [ 2504.186406] CR2: 7efd6b0f15d8 CR3: 0260a006 CR4:
> 00360ef0
> [ 2504.186407] DR0:  DR1:  DR2:
> 
> [ 2504.186407] DR3:  DR6: fffe0ff0 DR7:
> 0400
> [ 2504.186407] Call Trace:
> [ 2504.186408]  
> [ 2504.186408]  _raw_spin_lock_irqsave+0x1d/0x30
> [ 2504.186408]  rcu_core+0x3b6/0x740
> [ 2504.186408]  ? __hrtimer_run_queues+0x133/0x280
> [ 2504.186409]  ? recalibrate_cpu_khz+0x10/0x10
> [ 2504.186409]  __do_softirq+0xd8/0x2e4
> [ 2504.186409]  irq_exit+0xa3/0xb0
> [ 2504.186410]  smp_apic_timer_interrupt+0x67/0x120
> [ 2504.186410]  apic_timer_interrupt+0xf/0x20
> [ 2504.186410]  
> [ 2504.186410] RIP: 0010:unmap_page_range+0x47a/0x9b0
> [ 2504.186411] Code: 0f 46 46 10 49 39 6e 18 49 89 46 10 48 89 e8 49 0f 43 46
> 18 41 80 4e 20 08 4d 85 c9 49 89 46 18 0f 84 68 ff ff ff 49 8b 51 08 <48> 8d
> 42 ff 83 e2 01 49 0f 44 c1 f6 40 18 01 75 38 48 ba ff 0f 00
> [ 2504.186411] RSP: 0018:c900036cbcc8 EFLAGS: 0282 ORIG_RAX:
> ff13
> [ 2504.186412] RAX:  RBX: 80003751d045 RCX:
> 0001
> [ 2504.186413] RDX: ea0002e09288 RSI: 0269b000 RDI:
> 8880b6525e40
> [ 2504.186413] RBP: 0269c000 R08:  R09:
> eadd4740
> [ 2504.186413] R10: ea0001755700 R11: 8880cc62d120 R12:
> 02794000
> [ 2504.186414] R13: 0269b000 R14: c900036cbdf0 R15:
> 8880572434d8
> [ 2504.186414]  ? unmap_page_range+0x420/0x9b0
> [ 2504.186414]  ? release_pages+0x175/0x390
> [ 2504.186414]  unmap_vmas+0x7c/0xe0
> [ 2504.186415]  exit_mmap+0xa4/0x190
> [ 2504.186415]  mmput+0x3b/0x100
> [ 2504.186415]  do_exit+0x276/0xc10
> [ 2504.186415]  do_group_exit+0x35/0xa0
> [ 2504.186415]  __x64_sys_exit_group+0xf/0x10
> [ 2504.186416]  do_syscall_64+0x43/0x120
> [ 2504.186416]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 2504.186416] RIP: 0033:0x7efd6ae10618
> [ 2504.186416] Code: Bad RIP value.
> [ 2504.186417] RSP: 002b:7ffcac9bde38 EFLAGS: 0246 ORIG_RAX:
> 00e7
> [ 2504.186417] RAX: ffda RBX:  RCX:
> 7efd6ae10618
> [ 2504.186418] RDX:  RSI: 003c RDI:
> 
> [ 2504.186418] RBP: 7efd6b0ed8e0 R08: 00e7 R09:
> 

Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-17 Thread Jakub Staroń
On 5/16/19 10:35 PM, Pankaj Gupta wrote:
> Can I take it your reviewed/acked-by? or tested-by tag? for the virtio patch
> :)
I don't feel that I have enough expertise to give the reviewed-by tag, but
you can take my acked-by + tested-by.

Acked-by: Jakub Staron 
Tested-by: Jakub Staron 

No kernel panics/stalls were encountered while testing these patches (v9) with
QEMU + xfstests.
Some CPU stalls were encountered while testing with crosvm instead of QEMU with
xfstests (test generic/464), but there is no repro for QEMU, so the fault may be
on the side of crosvm.


The dump for the crosvm/xfstests stall:
[ 2504.175276] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 2504.176681] rcu: 0-...!: (1 GPs behind) idle=9b2/1/0x4000 
softirq=1089198/1089202 fqs=0 
[ 2504.178270] rcu: 2-...!: (1 ticks this GP) idle=cfe/1/0x4002 
softirq=1055108/1055110 fqs=0 
[ 2504.179802] rcu: 3-...!: (1 GPs behind) idle=1d6/1/0x4002 
softirq=1046798/1046802 fqs=0 
[ 2504.181215] rcu: 4-...!: (2 ticks this GP) idle=522/1/0x4002 
softirq=1249063/1249064 fqs=0 
[ 2504.182625] rcu: 5-...!: (1 GPs behind) idle=6da/1/0x4000 
softirq=1131036/1131047 fqs=0 
[ 2504.183955]  (detected by 3, t=0 jiffies, g=1232529, q=1370)
[ 2504.184762] Sending NMI from CPU 3 to CPUs 0:
[ 2504.186400] NMI backtrace for cpu 0
[ 2504.186401] CPU: 0 PID: 6670 Comm: 464 Not tainted 5.1.0+ #1
[ 2504.186401] Hardware name: ChromiumOS crosvm, BIOS 0 
[ 2504.186402] RIP: 0010:queued_spin_lock_slowpath+0x1c/0x1e0
[ 2504.186402] Code: e7 89 c8 f0 44 0f b1 07 39 c1 75 dc f3 c3 0f 1f 44 00 00 
ba 01 00 00 00 8b 07 85 c0 75 0a f0 0f b1 17 85 c0 75 f2 f3 c3 f3 90  ec 81 
fe 00 01 00 00 0f 84 ab 00 00 00 81 e6 00 ff ff ff 75 44
[ 2504.186403] RSP: 0018:c9003ee8 EFLAGS: 0002
[ 2504.186404] RAX: 0001 RBX: 0246 RCX: 00404044
[ 2504.186404] RDX: 0001 RSI: 0001 RDI: 8244a280
[ 2504.186405] RBP: 8244a280 R08: 000f4200 R09: 024709ed6c32
[ 2504.186405] R10:  R11: 0001 R12: 8244a280
[ 2504.186405] R13: 0009 R14: 0009 R15: 
[ 2504.186406] FS:  () GS:8880cc60() 
knlGS:
[ 2504.186406] CS:  0010 DS:  ES:  CR0: 80050033
[ 2504.186406] CR2: 7efd6b0f15d8 CR3: 0260a006 CR4: 00360ef0
[ 2504.186407] DR0:  DR1:  DR2: 
[ 2504.186407] DR3:  DR6: fffe0ff0 DR7: 0400
[ 2504.186407] Call Trace:
[ 2504.186408]  
[ 2504.186408]  _raw_spin_lock_irqsave+0x1d/0x30
[ 2504.186408]  rcu_core+0x3b6/0x740
[ 2504.186408]  ? __hrtimer_run_queues+0x133/0x280
[ 2504.186409]  ? recalibrate_cpu_khz+0x10/0x10
[ 2504.186409]  __do_softirq+0xd8/0x2e4
[ 2504.186409]  irq_exit+0xa3/0xb0
[ 2504.186410]  smp_apic_timer_interrupt+0x67/0x120
[ 2504.186410]  apic_timer_interrupt+0xf/0x20
[ 2504.186410]  
[ 2504.186410] RIP: 0010:unmap_page_range+0x47a/0x9b0
[ 2504.186411] Code: 0f 46 46 10 49 39 6e 18 49 89 46 10 48 89 e8 49 0f 43 46 
18 41 80 4e 20 08 4d 85 c9 49 89 46 18 0f 84 68 ff ff ff 49 8b 51 08 <48> 8d 42 
ff 83 e2 01 49 0f 44 c1 f6 40 18 01 75 38 48 ba ff 0f 00
[ 2504.186411] RSP: 0018:c900036cbcc8 EFLAGS: 0282 ORIG_RAX: 
ff13
[ 2504.186412] RAX:  RBX: 80003751d045 RCX: 0001
[ 2504.186413] RDX: ea0002e09288 RSI: 0269b000 RDI: 8880b6525e40
[ 2504.186413] RBP: 0269c000 R08:  R09: eadd4740
[ 2504.186413] R10: ea0001755700 R11: 8880cc62d120 R12: 02794000
[ 2504.186414] R13: 0269b000 R14: c900036cbdf0 R15: 8880572434d8
[ 2504.186414]  ? unmap_page_range+0x420/0x9b0
[ 2504.186414]  ? release_pages+0x175/0x390
[ 2504.186414]  unmap_vmas+0x7c/0xe0
[ 2504.186415]  exit_mmap+0xa4/0x190
[ 2504.186415]  mmput+0x3b/0x100
[ 2504.186415]  do_exit+0x276/0xc10
[ 2504.186415]  do_group_exit+0x35/0xa0
[ 2504.186415]  __x64_sys_exit_group+0xf/0x10
[ 2504.186416]  do_syscall_64+0x43/0x120
[ 2504.186416]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 2504.186416] RIP: 0033:0x7efd6ae10618
[ 2504.186416] Code: Bad RIP value.
[ 2504.186417] RSP: 002b:7ffcac9bde38 EFLAGS: 0246 ORIG_RAX: 
00e7
[ 2504.186417] RAX: ffda RBX:  RCX: 7efd6ae10618
[ 2504.186418] RDX:  RSI: 003c RDI: 
[ 2504.186418] RBP: 7efd6b0ed8e0 R08: 00e7 R09: ff98
[ 2504.186418] R10: 7ffcac9bddb8 R11: 0246 R12: 7efd6b0ed8e0
[ 2504.186419] R13: 7efd6b0f2c20 R14: 0060 R15: 0070e705
[ 2504.186421] NMI backtrace for cpu 3
[ 2504.226980] CPU: 3 PID: 6596 Comm: xfs_io Not tainted 5.1.0+ #1
[ 2504.227661] Hardware name: ChromiumOS crosvm, BIOS 0 
[ 2504.228261] Call Trace:
[ 

Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta



Hi Jakub,

> 
> On 5/14/19 7:54 AM, Pankaj Gupta wrote:
> > +   if (!list_empty(&vpmem->req_list)) {
> > +   req_buf = list_first_entry(&vpmem->req_list,
> > +   struct virtio_pmem_request, list);
> > +   req_buf->wq_buf_avail = true;
> > +   wake_up(&req_buf->wq_buf);
> > +   list_del(&req_buf->list);
> Yes, this change is the right one, thank you!

Thank you for the confirmation.

> 
> > +/*
> > + * If virtqueue_add_sgs returns -ENOSPC then req_vq virtual
> > + * queue does not have free descriptor. We add the request
> > + * to req_list and wait for host_ack to wake us up when free
> > + * slots are available.
> > + */
> > +   while ((err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req,
> > +   GFP_ATOMIC)) == -ENOSPC) {
> > +
> > +   dev_err(&vdev->dev, "failed to send command to virtio pmem" \
> > +   "device, no free slots in the virtqueue\n");
> > +   req->wq_buf_avail = false;
> > +   list_add_tail(&req->list, &vpmem->req_list);
> > +   spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > +
> > +   /* A host response results in "host_ack" getting called */
> > +   wait_event(req->wq_buf, req->wq_buf_avail);
> > +   spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > +   }
> > +   err1 = virtqueue_kick(vpmem->req_vq);
> > +   spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > +
> > +   /*
> > +* virtqueue_add_sgs failed with error different than -ENOSPC, we can't
> > +* do anything about that.
> > +*/
> > +   if (err || !err1) {
> > +   dev_info(&vdev->dev, "failed to send command to virtio pmem 
> > device\n");
> > +   err = -EIO;
> > +   } else {
> > +   /* A host response results in "host_ack" getting called */
> > +   wait_event(req->host_acked, req->done);
> > +   err = req->ret;
> 
> I confirm that the failures I was facing with the `-ENOSPC` error path are
> not present in v9.
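
For reference, a minimal sketch of the completion side this hunk belongs to: the
host_ack virtqueue callback both completes the acknowledged request and wakes one
request parked on req_list by the -ENOSPC loop above. Names follow the patch, but
this is an illustrative reconstruction, not the exact code from the series:

static void host_ack(struct virtqueue *vq)
{
	struct virtio_pmem *vpmem = vq->vdev->priv;
	struct virtio_pmem_request *req, *req_buf;
	unsigned long flags;
	unsigned int len;

	spin_lock_irqsave(&vpmem->pmem_lock, flags);
	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
		/* Complete the request the host just acknowledged. */
		req->done = true;
		wake_up(&req->host_acked);

		/*
		 * A descriptor is free again: wake one request that was
		 * parked on req_list by the -ENOSPC path.
		 */
		if (!list_empty(&vpmem->req_list)) {
			req_buf = list_first_entry(&vpmem->req_list,
					struct virtio_pmem_request, list);
			req_buf->wq_buf_avail = true;
			wake_up(&req_buf->wq_buf);
			list_del(&req_buf->list);
		}
	}
	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
}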

Can I take it your reviewed/acked-by? or tested-by tag? for the virtio patch :)

Thank you,
Pankaj

> 
> Best,
> Jakub Staron
> 
> 


Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-16 Thread Pankaj Gupta


> 
> On Wed, May 15, 2019 at 10:46:00PM +0200, David Hildenbrand wrote:
> > > + vpmem->vdev = vdev;
> > > + vdev->priv = vpmem;
> > > + err = init_vq(vpmem);
> > > + if (err) {
> > > + dev_err(&vdev->dev, "failed to initialize virtio pmem vq's\n");
> > > + goto out_err;
> > > + }
> > > +
> > > + virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > + start, &vpmem->start);
> > > + virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > + size, &vpmem->size);
> > > +
> > > + res.start = vpmem->start;
> > > + res.end   = vpmem->start + vpmem->size-1;
> > 
> > nit: " - 1;"
> > 
> > > + vpmem->nd_desc.provider_name = "virtio-pmem";
> > > + vpmem->nd_desc.module = THIS_MODULE;
> > > +
> > > + vpmem->nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> > > + &vpmem->nd_desc);
> > > + if (!vpmem->nvdimm_bus) {
> > > + dev_err(&vdev->dev, "failed to register device with 
> > > nvdimm_bus\n");
> > > + err = -ENXIO;
> > > + goto out_vq;
> > > + }
> > > +
> > > + dev_set_drvdata(&vdev->dev, vpmem->nvdimm_bus);
> > > +
> > > + ndr_desc.res = &res;
> > > + ndr_desc.numa_node = nid;
> > > + ndr_desc.flush = async_pmem_flush;
> > > + set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> > > + set_bit(ND_REGION_ASYNC, &ndr_desc.flags);
> > > + nd_region = nvdimm_pmem_region_create(vpmem->nvdimm_bus, &ndr_desc);
> > > + if (!nd_region) {
> > > + dev_err(&vdev->dev, "failed to create nvdimm region\n");
> > > + err = -ENXIO;
> > > + goto out_nd;
> > > + }
> > > + nd_region->provider_data =
> > > dev_to_virtio(nd_region->dev.parent->parent);
> > > + return 0;
> > > +out_nd:
> > > + nvdimm_bus_unregister(vpmem->nvdimm_bus);
> > > +out_vq:
> > > + vdev->config->del_vqs(vdev);
> > > +out_err:
> > > + return err;
> > > +}
> > > +
> > > +static void virtio_pmem_remove(struct virtio_device *vdev)
> > > +{
> > > + struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> > > +
> > > + nvdimm_bus_unregister(nvdimm_bus);
> > > + vdev->config->del_vqs(vdev);
> > > + vdev->config->reset(vdev);
> > > +}
> > > +
> > > +static struct virtio_driver virtio_pmem_driver = {
> > > + .driver.name= KBUILD_MODNAME,
> > > + .driver.owner   = THIS_MODULE,
> > > + .id_table   = id_table,
> > > + .probe  = virtio_pmem_probe,
> > > + .remove = virtio_pmem_remove,
> > > +};
> > > +
> > > +module_virtio_driver(virtio_pmem_driver);
> > > +MODULE_DEVICE_TABLE(virtio, id_table);
> > > +MODULE_DESCRIPTION("Virtio pmem driver");
> > > +MODULE_LICENSE("GPL");
> > > diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> > > new file mode 100644
> > > index ..ab1da877575d
> > > --- /dev/null
> > > +++ b/drivers/nvdimm/virtio_pmem.h
> > > @@ -0,0 +1,60 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * virtio_pmem.h: virtio pmem Driver
> > > + *
> > > + * Discovers persistent memory range information
> > > + * from host and provides a virtio based flushing
> > > + * interface.
> > > + **/
> > > +
> > > +#ifndef _LINUX_VIRTIO_PMEM_H
> > > +#define _LINUX_VIRTIO_PMEM_H
> > > +
> > > +#include 
> > > +#include 
> > > +#include 
> > > +#include 
> > > +#include 
> > > +#include 
> > > +
> > > +struct virtio_pmem_request {
> > > + /* Host return status corresponding to flush request */
> > > + int ret;
> > > +
> > > + /* command name*/
> > > + char name[16];
> > 
> > So ... why are we sending string commands and expect native-endianness
> > integers and don't define a proper request/response structure + request
> > types in include/uapi/linux/virtio_pmem.h like
> 
> passing names could be ok.
> I missed the fact we return a native endian int.
> Pls fix that.

Sure. will fix this.

> 
> 
> > 
> > struct virtio_pmem_resp {
> > __virtio32 ret;
> > };
> > 
> > #define VIRTIO_PMEM_REQ_TYPE_FLUSH  1
> > struct virtio_pmem_req {
> > __virtio16 type;
> > };
> > 
> > ... and this way we also define a proper endianness format for exchange
> > and keep it extensible
> > 
> > @MST, what's your take on this?
> 
> Extensions can always use feature bits so I don't think
> it's a problem.

That was exactly my thought when I implemented this. That said, I am
fine with separate structures for request/response, and I have made the
change.
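
To make the change concrete, here is a sketch of a split, fixed-endian
request/response layout along the lines David suggested. The field names, widths
and header location below are illustrative and may differ from what goes into v10:

/* Sketch only, e.g. for include/uapi/linux/virtio_pmem.h */
#define VIRTIO_PMEM_REQ_TYPE_FLUSH	0

struct virtio_pmem_req {
	/* command type, e.g. VIRTIO_PMEM_REQ_TYPE_FLUSH */
	__le32 type;
};

struct virtio_pmem_resp {
	/* host return status corresponding to the flush request */
	__le32 ret;
};

The guest side then converts explicitly when reading the response, for example
req->ret = le32_to_cpu(resp->ret), instead of copying a native-endian int out of
the response buffer.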

Thank you for all the comments.

Best regards,
Pankaj 
> > 
> > --
> > 
> > Thanks,
> > 
> > David / dhildenb
> 
> 


Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-15 Thread Dan Williams
On Tue, May 14, 2019 at 8:25 AM Pankaj Gupta  wrote:
>
>
> > On 5/14/19 7:54 AM, Pankaj Gupta wrote:
> > > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > > index 35897649c24f..94bad084ebab 100644
> > > --- a/drivers/virtio/Kconfig
> > > +++ b/drivers/virtio/Kconfig
> > > @@ -42,6 +42,17 @@ config VIRTIO_PCI_LEGACY
> > >
> > >   If unsure, say Y.
> > >
> > > +config VIRTIO_PMEM
> > > +   tristate "Support for virtio pmem driver"
> > > +   depends on VIRTIO
> > > +   depends on LIBNVDIMM
> > > +   help
> > > +   This driver provides access to virtio-pmem devices, storage devices
> > > +   that are mapped into the physical address space - similar to NVDIMMs
> > > +- with a virtio-based flushing interface.
> > > +
> > > +   If unsure, say M.
> >
> > 
> > from Documentation/process/coding-style.rst:
> > "Lines under a ``config`` definition
> > are indented with one tab, while help text is indented an additional two
> > spaces."
>
> Ah... I changed the help text and 'checkpatch' did not say anything :(.
>
> I will wait for Dan, in case it is possible to add the two spaces to the help
> text while applying the series.

I'm inclined to handle this with a fixup appended to the end of the
series just so the patchwork-bot does not get confused by the content
changing from what was sent to the list.


Re: [Qemu-devel] [PATCH v9 2/7] virtio-pmem: Add virtio pmem driver

2019-05-14 Thread Pankaj Gupta


> On 5/14/19 7:54 AM, Pankaj Gupta wrote:
> > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > index 35897649c24f..94bad084ebab 100644
> > --- a/drivers/virtio/Kconfig
> > +++ b/drivers/virtio/Kconfig
> > @@ -42,6 +42,17 @@ config VIRTIO_PCI_LEGACY
> >  
> >   If unsure, say Y.
> >  
> > +config VIRTIO_PMEM
> > +   tristate "Support for virtio pmem driver"
> > +   depends on VIRTIO
> > +   depends on LIBNVDIMM
> > +   help
> > +   This driver provides access to virtio-pmem devices, storage devices
> > +   that are mapped into the physical address space - similar to NVDIMMs
> > +- with a virtio-based flushing interface.
> > +
> > +   If unsure, say M.
> 
> 
> from Documentation/process/coding-style.rst:
> "Lines under a ``config`` definition
> are indented with one tab, while help text is indented an additional two
> spaces."

Ah... I changed the help text and 'checkpatch' did not say anything :(.

I will wait for Dan, in case it is possible to add the two spaces to the help
text while applying the series.

Thanks,
Pankaj
  

> 
> > +
> >  config VIRTIO_BALLOON
> > tristate "Virtio balloon driver"
> > depends on VIRTIO
> 
> thanks.
> --
> ~Randy
> 
>