Re: [PATCH] block, bfq: keep peak_rate estimation within range 1..2^32-1

2018-03-26 Thread Paolo Valente
> On 21 Mar 2018, at 00:49, Paolo Valente wrote: > > > >> On 20 Mar 2018, at 15:41, Konstantin Khlebnikov wrote: >> >> On 20.03.2018 06:00, Paolo Valente wrote: >>> On 19 Mar 2018, at 14:28, Konstantin Khlebnikov wrote: >>

Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-26 Thread Thorsten Leemhuis
Lo! Your friendly Linux regression tracker here ;-) On 08.03.2018 14:18, Artem Bityutskiy wrote: > On Thu, 2018-03-08 at 18:53 +0800, Ming Lei wrote: >> This patchset tries to spread irq vectors among online CPUs as far as possible, so >> that we can avoid allocating too few irq vectors for the online CPUs >>

Re: [PATCH] block, bfq: keep peak_rate estimation within range 1..2^32-1

2018-03-26 Thread Konstantin Khlebnikov
On 26.03.2018 11:01, Paolo Valente wrote: On 21 Mar 2018, at 00:49, Paolo Valente wrote: On 20 Mar 2018, at 15:41, Konstantin Khlebnikov wrote: On 20.03.2018 06:00, Paolo Valente wrote: On 19 Mar 2018, at 14:28, Konstantin Khlebnikov

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Jonathan Cameron
On Tue, 13 Mar 2018 10:43:55 -0600 Logan Gunthorpe wrote: > On 12/03/18 09:28 PM, Sinan Kaya wrote: > > On 3/12/2018 3:35 PM, Logan Gunthorpe wrote: > > Regarding the switch business, it is amazing how much trouble you went to in order to limit this functionality to very specific hardware. > > > > I

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Bjorn Helgaas
On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: > On Tue, 13 Mar 2018 10:43:55 -0600 > Logan Gunthorpe wrote: > > It turns out that root ports that support P2P are far less common than > > anyone thought. So it will likely have to be a white list. > > This came as a bit of a su

Re: [PATCH] block, bfq: keep peak_rate estimation within range 1..2^32-1

2018-03-26 Thread Paolo Valente
> On 26 Mar 2018, at 12:28, Konstantin Khlebnikov wrote: > > > > On 26.03.2018 11:01, Paolo Valente wrote: >>> On 21 Mar 2018, at 00:49, Paolo Valente wrote: >>> >>> >>> On 20 Mar 2018, at 15:41, Konstantin Khlebnikov

[PATCH BUGFIX] block, bfq: lower-bound the estimated peak rate to 1

2018-03-26 Thread Paolo Valente
If a storage device handled by BFQ happens to be slower than 7.5 KB/s for a certain amount of time (in the order of a second), then the estimated peak rate of the device, maintained in BFQ, becomes equal to 0. The reason is the limited precision with which the rate is represented (details on the ra
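For readers wondering where the 7.5 KB/s threshold comes from, here is a rough sketch of the arithmetic, assuming BFQ stores the peak rate as sectors per microsecond left-shifted by a 16-bit BFQ_RATE_SHIFT (an assumption taken from this discussion, not copied from bfq-iosched.c). With that representation, one internal rate unit corresponds to roughly 7.5 KB/s, so anything slower truncates to 0:

#include <stdio.h>

/* Sketch only: the shift value and the sectors/usec representation are assumptions. */
int main(void)
{
	const unsigned long long rate_shift = 16;        /* assumed BFQ_RATE_SHIFT */
	const unsigned long long bytes_per_sector = 512;
	const unsigned long long usec_per_sec = 1000000;

	/* throughput represented by one internal rate unit, in bytes/s */
	unsigned long long one_unit =
		(bytes_per_sector * usec_per_sec) >> rate_shift;

	printf("one internal rate unit ~= %llu bytes/s (~7.5 KB/s)\n", one_unit);
	return 0;
}

Any measured rate below one unit rounds down to 0, which is exactly the case the lower-bound-to-1 fix addresses.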

Re: [PATCH] block, bfq: keep peak_rate estimation within range 1..2^32-1

2018-03-26 Thread Konstantin Khlebnikov
On 26.03.2018 17:06, Paolo Valente wrote: On 26 Mar 2018, at 12:28, Konstantin Khlebnikov wrote: On 26.03.2018 11:01, Paolo Valente wrote: On 21 Mar 2018, at 00:49, Paolo Valente wrote: On 20 Mar 2018, at 15:41, Konstantin Khlebnikov

Re: [PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-26 Thread Keith Busch
On Mon, Mar 26, 2018 at 09:47:07AM +0800, Ming Lei wrote: > On Fri, Mar 23, 2018 at 04:19:22PM -0600, Keith Busch wrote: > > @@ -1629,9 +1627,7 @@ static int nvme_create_io_queues(struct nvme_dev *dev) > > int ret = 0; > > > > for (i = dev->ctrl.queue_count; i <= dev->max_qid; i++) { > >

Re: Multi-Actuator SAS HDD First Look

2018-03-26 Thread Hannes Reinecke
On Fri, 23 Mar 2018 08:57:12 -0600 Tim Walker wrote: > Seagate announced their split actuator SAS drive, which will probably > require some kernel changes for full support. It's targeted at cloud > provider JBODs and RAID. > > Here are some of the drive's architectural points. Since the two LUNs

DRBD inconsistent - out of sync issue

2018-03-26 Thread Carsten Burkhardt
Hello, DRBD gets out of sync if the buffer is modified during write operations. The testcase, the problem and a proposed solution can be found here: https://bugzilla.kernel.org/show_bug.cgi?id=99171 Kind regards Carsten Burkhardt

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Logan Gunthorpe
On 24/03/18 09:28 AM, Stephen Bates wrote: > 1. There is no requirement for a single function to support internal DMAs, but > in the case of NVMe we do have a protocol-specific way for an NVMe function to > indicate that it supports them, via the CMB BAR. Other protocols may also have such > methods but I

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Logan Gunthorpe
On 26/03/18 08:01 AM, Bjorn Helgaas wrote: > On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: >> On Tue, 13 Mar 2018 10:43:55 -0600 >> Logan Gunthorpe wrote: >>> It turns out that root ports that support P2P are far less common than >>> anyone thought. So it will likely have to

Re: [PATCH BUGFIX] block, bfq: lower-bound the estimated peak rate to 1

2018-03-26 Thread Jens Axboe
On 3/26/18 8:06 AM, Paolo Valente wrote: > If a storage device handled by BFQ happens to be slower than 7.5 KB/s > for a certain amount of time (in the order of a second), then the > estimated peak rate of the device, maintained in BFQ, becomes equal to > 0. The reason is the limited precision with

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Jason Gunthorpe
On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: > On Tue, 13 Mar 2018 10:43:55 -0600 > Logan Gunthorpe wrote: > > > On 12/03/18 09:28 PM, Sinan Kaya wrote: > > > On 3/12/2018 3:35 PM, Logan Gunthorpe wrote: > > > Regarding the switch business, It is amazing how much trouble you

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Logan Gunthorpe
On 26/03/18 10:41 AM, Jason Gunthorpe wrote: > On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: >> On Tue, 13 Mar 2018 10:43:55 -0600 >> Logan Gunthorpe wrote: >> >>> On 12/03/18 09:28 PM, Sinan Kaya wrote: On 3/12/2018 3:35 PM, Logan Gunthorpe wrote: Regarding the swi

Re: [PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-26 Thread Keith Busch
On Sat, Mar 24, 2018 at 09:55:49PM +0800, jianchao.wang wrote: > Maybe we could provide a callback parameter for __blk_mq_pci_map_queues which > give the mapping from hctx queue num to device-relative interrupt vector > index. If a driver's mapping is so complicated as to require a special per-hc

Re: [PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-26 Thread Keith Busch
On Mon, Mar 26, 2018 at 09:50:38AM +0800, Ming Lei wrote: > > Given there are not many callers of blk_mq_pci_map_queues(), I suggest adding the > 'offset' parameter to this API directly; then people may keep the > '.pre_vectors' stuff in mind and avoid misusing it. Yeah, I think I have to agree. I was
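For context, a minimal sketch of what passing the offset straight into the mapping helper could look like from a driver's perspective; the three-argument signature and the example offset value are assumptions drawn from this discussion rather than the merged patch:

#include <linux/blk-mq-pci.h>
#include <linux/pci.h>

/* Sketch: map hctx 0..N onto device-relative PCI vectors offset..offset+N. */
static int example_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
{
	/* e.g. a driver that reserves vector 0 for itself (the '.pre_vectors' case) */
	int offset = 1;

	return blk_mq_pci_map_queues(set, pdev, offset);
}

Keeping the offset explicit at the call site is what makes the '.pre_vectors' reservation visible to whoever writes the mapping code.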

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Jason Gunthorpe
On Mon, Mar 26, 2018 at 11:30:38AM -0600, Logan Gunthorpe wrote: > > > On 26/03/18 10:41 AM, Jason Gunthorpe wrote: > > On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: > >> On Tue, 13 Mar 2018 10:43:55 -0600 > >> Logan Gunthorpe wrote: > >> > >>> On 12/03/18 09:28 PM, Sinan Kay

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-26 Thread Logan Gunthorpe
On 26/03/18 01:35 PM, Jason Gunthorpe wrote: > I think this is another case where the HW can do it but the SW support is > missing. IOMMU configuration and maybe firmware too, for instance. Nope, not sure how you can make this leap. We've been specifically told that peer-to-peer PCIe DMA is not sup

blk-mq and CPU hotplug error

2018-03-26 Thread joserz
Hello everyone! I'm running Ubuntu 18.04 (4.15.0-12-generic) in KVM/QEMU (powerpc64le). Everything looks good until I try to hotplug CPUs in my VM. As soon as I do that I get the following error in my VM dmesg: [ 763.629425] WARNING: CPU: 34 PID: 2276 at /build/linux-zBVy54/linux-4.15.0/block/b

Re: disk-io lockup in 4.14.13 kernel

2018-03-26 Thread Bart Van Assche
On Sat, 2018-03-24 at 23:38 +0200, Jaco Kroon wrote: > Does the following go with your theory: > > [452545.945561] sysrq: SysRq : Show backtrace of all active CPUs > [452545.946182] NMI backtrace for cpu 5 > [452545.946185] CPU: 5 PID: 31921 Comm: bash Tainted: G I > 4.14.13-uls #2 >

[PATCH 0/2] loop: don't hang on lo_ctl_mutex in ioctls

2018-03-26 Thread Omar Sandoval
From: Omar Sandoval Hi, Jens, We hit an issue where a loop device on NFS (yes, I know) got stuck and a bunch of losetup processes got stuck in uninterruptible sleep waiting for lo_ctl_mutex as a result. Calling into the filesystem while holding lo_ctl_mutex isn't necessary, and there's no reason

[PATCH 1/2] loop: don't call into filesystem while holding lo_ctl_mutex

2018-03-26 Thread Omar Sandoval
From: Omar Sandoval We hit an issue where a loop device on NFS was stuck in loop_get_status() doing vfs_getattr() after the NFS server died, which caused a pile-up of uninterruptible processes waiting on lo_ctl_mutex. There's no reason to hold this lock while we wait on the filesystem; let's drop
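A minimal sketch of the approach described above, assuming the mutex-protected state can be snapshotted before the lock is dropped; the field names and vfs_getattr flags are illustrative, not quoted from the patch:

/*
 * Sketch only: assumes the private struct loop_device definition from
 * drivers/block/loop.h.  Copy the lo_ctl_mutex-protected state first,
 * then drop the lock before calling into the (possibly hung) filesystem.
 */
#include <linux/fs.h>
#include <linux/kdev_t.h>
#include <linux/path.h>
#include <linux/loop.h>

static int loop_get_status_sketch(struct loop_device *lo,
				  struct loop_info64 *info)
{
	struct path path;
	struct kstat stat;
	int ret;

	memset(info, 0, sizeof(*info));
	info->lo_number = lo->lo_number;

	/* pin the backing file's path so it stays valid after the unlock */
	path = lo->lo_backing_file->f_path;
	path_get(&path);
	mutex_unlock(&lo->lo_ctl_mutex);

	/* wait on the filesystem without blocking other loop ioctls */
	ret = vfs_getattr(&path, &stat, STATX_INO, AT_STATX_SYNC_AS_STAT);
	if (!ret) {
		info->lo_device = huge_encode_dev(stat.dev);
		info->lo_inode = stat.ino;
	}
	path_put(&path);
	return ret;
}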

[PATCH 2/2] loop: use interruptible lock in ioctls

2018-03-26 Thread Omar Sandoval
From: Omar Sandoval Even after the previous patch to drop lo_ctl_mutex while calling vfs_getattr(), there are other cases where we can end up sleeping for a long time while holding lo_ctl_mutex. Let's avoid the uninterruptible sleep from the ioctls. Signed-off-by: Omar Sandoval --- drivers/blo

Re: [PATCH 2/2] loop: use interruptible lock in ioctls

2018-03-26 Thread Matthew Wilcox
On Mon, Mar 26, 2018 at 04:16:26PM -0700, Omar Sandoval wrote: > Even after the previous patch to drop lo_ctl_mutex while calling > vfs_getattr(), there are other cases where we can end up sleeping for a > long time while holding lo_ctl_mutex. Let's avoid the uninterruptible > sleep from the ioctls

Re: blk-mq and CPU hotplug error

2018-03-26 Thread jianchao.wang
Hi Jose On 03/27/2018 05:25 AM, jos...@linux.vnet.ibm.com wrote: > Hello everyone! > > I'm running Ubuntu 18.04 (4.15.0-12-generic) in KVM/QEMU (powerpc64le). > Everything looks good until I try to hotplug CPUs in my VM. As soon as I > do that I get the following error in my VM dmesg: Please ref

Re: [PATCH 2/2] loop: use interruptible lock in ioctls

2018-03-26 Thread Omar Sandoval
On Mon, Mar 26, 2018 at 05:04:21PM -0700, Matthew Wilcox wrote: > On Mon, Mar 26, 2018 at 04:16:26PM -0700, Omar Sandoval wrote: > > Even after the previous patch to drop lo_ctl_mutex while calling > > vfs_getattr(), there are other cases where we can end up sleeping for a > > long time while holdi

[PATCH v2 1/2] loop: don't call into filesystem while holding lo_ctl_mutex

2018-03-26 Thread Omar Sandoval
From: Omar Sandoval We hit an issue where a loop device on NFS was stuck in loop_get_status() doing vfs_getattr() after the NFS server died, which caused a pile-up of uninterruptible processes waiting on lo_ctl_mutex. There's no reason to hold this lock while we wait on the filesystem; let's drop

[PATCH v2 2/2] loop: use killable lock in ioctls

2018-03-26 Thread Omar Sandoval
From: Omar Sandoval Even after the previous patch to drop lo_ctl_mutex while calling vfs_getattr(), there are other cases where we can end up sleeping for a long time while holding lo_ctl_mutex. Let's avoid the uninterruptible sleep from the ioctls. Signed-off-by: Omar Sandoval --- drivers/blo
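For reference, a small sketch of the killable-lock pattern this patch applies to the ioctl paths; the surrounding function body is a placeholder, not code from drivers/block/loop.c:

/*
 * Sketch: take lo_ctl_mutex killably, so a wedged backing file leaves the
 * caller killable instead of stuck in uninterruptible sleep.
 */
static int lo_simple_ioctl_sketch(struct loop_device *lo)
{
	int err;

	err = mutex_lock_killable(&lo->lo_ctl_mutex);
	if (err)
		return err;	/* -EINTR if a fatal signal arrived */

	/* ... actual ioctl work under the mutex ... */

	mutex_unlock(&lo->lo_ctl_mutex);
	return 0;
}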
