On Thu, Dec 21, 2017 at 17:16:03 -0600,
Bruno Wolff III wrote:
Enforcing mode alone isn't enough, as I tested that on one machine at
home and it didn't trigger the problem. I'll try another machine late
tonight.
I got the problem to occur on my i686 machine when booting in enforcing
mode.
Hi Sagi,
On Thu, Dec 21, 2017 at 10:20:45AM +0200, Sagi Grimberg wrote:
> Ming,
>
> I'd prefer that we make the pci driver match
> the rest of the drivers in nvme.
OK, this way looks better.
>
> IMO it would be better to allocate a queues array at probe time
> and simply reuse it at reset sequence.
On Thu, 2017-12-21 at 10:02 -0700, Jens Axboe wrote:
> On 12/21/17 9:42 AM, Bruno Wolff III wrote:
> >
> > On Thu, Dec 21, 2017 at 23:48:19 +0800,
> > weiping zhang wrote:
> > >
> > > >
> > > > output you want. I never saw it for any kernels I compiled
> > > > myself. Only when I test kernels built by Fedora do I see it.
On Thu, Dec 21, 2017 at 12:15:31 -0600,
Bruno Wolff III wrote:
One important thing I have just found is that it looks like the
problem only happens when booting in enforcing mode. If I boot in
permissive mode it does not happen. My home machines are currently set
to boot in permissive mode.
On Thu, Dec 21, 2017 at 03:17:41PM -0700, Jens Axboe wrote:
> On 12/21/17 2:34 PM, Keith Busch wrote:
> > It would be nice, but the driver doesn't know a request's completion
> > is going to be polled.
>
> That's trivially solvable though, since the information is available
> at submission time.
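A hypothetical sketch of the point Jens is making, with invented names throughout (neither iod->flags nor IOD_POLLED is a real nvme field here): the submitter already knows it will poll for the completion, so it could tag the request on the way in and let the completion path branch on that tag.

	/* submission side: the caller asked for polled completion, so
	 * record that on the request (IOD_POLLED is an invented flag) */
	if (will_poll)				/* hypothetical predicate */
		iod->flags |= IOD_POLLED;

	/* completion side: polled requests need no IRQ-driven wakeup */
	if (iod->flags & IOD_POLLED)
		return;				/* the poller reaps it itself */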
On 12/21/17 2:34 PM, Keith Busch wrote:
> On Thu, Dec 21, 2017 at 02:00:04PM -0700, Jens Axboe wrote:
>> On 12/21/17 1:56 PM, Scott Bauer wrote:
>>> On 12/21/2017 01:46 PM, Keith Busch wrote:
@@ -181,7 +181,10 @@ static void blkdev_bio_end_io_simple(struct bio *bio)
struct task_struct *waiter = bio->bi_private;
On Thu, Dec 21, 2017 at 02:00:04PM -0700, Jens Axboe wrote:
> On 12/21/17 1:56 PM, Scott Bauer wrote:
> > On 12/21/2017 01:46 PM, Keith Busch wrote:
> >> @@ -181,7 +181,10 @@ static void blkdev_bio_end_io_simple(struct bio *bio)
> >>struct task_struct *waiter = bio->bi_private;
> >>
> >>W
On 12/21/2017 01:46 PM, Keith Busch wrote:
> When a request completion is polled, the completion task wakes itself
> up. This is unnecessary, as the task can just set itself back to
> running.
>
> Signed-off-by: Keith Busch
> ---
> fs/block_dev.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
On 12/21/17 2:02 PM, Keith Busch wrote:
> On Thu, Dec 21, 2017 at 01:53:44PM -0700, Jens Axboe wrote:
>> Turns out that wasn't what patch 2 was. And the code is right there
>> above as well, and under the q_lock, so I guess that race doesn't
>> exist.
>>
>> But that does bring up the question of whether we should always be doing the nvme_process_cq(nvme
On 12/21/17 1:56 PM, Scott Bauer wrote:
>
>
> On 12/21/2017 01:46 PM, Keith Busch wrote:
>> When a request completion is polled, the completion task wakes itself
>> up. This is unnecessary, as the task can just set itself back to
>> running.
>>
>> Signed-off-by: Keith Busch
>> ---
>> fs/block_dev.c | 5 ++++-
On Thu, Dec 21, 2017 at 01:53:44PM -0700, Jens Axboe wrote:
> Turns out that wasn't what patch 2 was. And the code is right there
> above as well, and under the q_lock, so I guess that race doesn't
> exist.
>
> But that does bring up the question of whether we should always be doing the
> nvme_process_cq(nvme
On 12/21/17 1:46 PM, Keith Busch wrote:
> When a request completion is polled, the completion task wakes itself
> up. This is unnecessary, as the task can just set itself back to
> running.
Looks good to me, I can take it for 4.16 in the block tree.
--
Jens Axboe
On 12/21/17 1:46 PM, Keith Busch wrote:
> This is a micro-optimization removing unnecessary check for a disabled
> queue. We no longer need this check because blk-mq provides the ability
> to quiesce queues that nvme uses, and the doorbell registers are never
> unmapped as long as requests are active.
On 12/21/17 1:49 PM, Jens Axboe wrote:
> On 12/21/17 1:46 PM, Keith Busch wrote:
>> This is a performance optimization that allows the hardware to work on
>> a command in parallel with the kernel's stats and timeout tracking.
>>
>> Signed-off-by: Keith Busch
>> ---
>> drivers/nvme/host/pci.c | 3 +--
On 12/21/17 1:46 PM, Keith Busch wrote:
> This is a performance optimization that allows the hardware to work on
> a command in parallel with the kernel's stats and timeout tracking.
>
> Signed-off-by: Keith Busch
> ---
> drivers/nvme/host/pci.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
A few IO micro-optimizations for IO polling and NVMe. I'm really working
to close the performance gap with userspace drivers, and this gets me
halfway there on latency. On the fastest hardware I could get, this series
brought measured roundtrip read latency down to 5usec, from 5.7usec
previously.
This is a micro-optimization removing unnecessary check for a disabled
queue. We no longer need this check because blk-mq provides the ability
to quiesce queues that nvme uses, and the doorbell registers are never
unmapped as long as requests are active.
Signed-off-by: Keith Busch
---
drivers/nvme/host/pci.c
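The preview cuts the patch off here. A sketch of the kind of check being removed, assuming it sat in the nvme_queue_rq() submission path under q_lock (a reconstruction, not the literal committed hunk):

 	spin_lock_irq(&nvmeq->q_lock);
-	if (unlikely(nvmeq->cq_vector < 0)) {
-		ret = BLK_STS_IOERR;
-		spin_unlock_irq(&nvmeq->q_lock);
-		goto out_cleanup_iod;
-	}
 	__nvme_submit_cmd(nvmeq, &cmnd);
 	spin_unlock_irq(&nvmeq->q_lock);

Since blk-mq quiescing guarantees no request is issued to a disabled queue, the cq_vector test is pure overhead in the hot path.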
When a request completion is polled, the completion task wakes itself
up. This is unnecessary, as the task can just set itself back to
running.
Signed-off-by: Keith Busch
---
fs/block_dev.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c
in
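The hunk body is cut off above. A plausible reconstruction that matches the stated diffstat (4 insertions, 1 deletion), offered as a sketch rather than the literal patch:

 	struct task_struct *waiter = bio->bi_private;

 	WRITE_ONCE(bio->bi_private, NULL);
-	wake_up_process(waiter);
+	if (waiter == current)
+		__set_current_state(TASK_RUNNING);
+	else
+		wake_up_process(waiter);

When the polling task completes its own request, waiter == current, and flipping its own state back to TASK_RUNNING avoids the locking and scheduler work that wake_up_process() would otherwise do.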
This is a performance optimization that allows the hardware to work on
a command in parallel with the kernel's stats and timeout tracking.
Signed-off-by: Keith Busch
---
drivers/nvme/host/pci.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
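A hedged sketch of the reordering being described, assuming the move happens inside nvme_queue_rq() (placement is illustrative; the literal hunk may differ):

-	blk_mq_start_request(req);
-
 	spin_lock_irq(&nvmeq->q_lock);
 	__nvme_submit_cmd(nvmeq, &cmnd);
+	blk_mq_start_request(req);
 	nvme_process_cq(nvmeq);
 	spin_unlock_irq(&nvmeq->q_lock);

Writing the command and ringing the doorbell before blk_mq_start_request() lets the device begin executing while the kernel is still recording stats and arming timeout tracking.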
On Thu, Dec 21, 2017 at 10:02:15 -0700,
Jens Axboe wrote:
> On 12/21/17 9:42 AM, Bruno Wolff III wrote:
>> On Thu, Dec 21, 2017 at 23:48:19 +0800,
>> weiping zhang wrote:
>>>> output you want. I never saw it for any kernels I compiled myself. Only when
>>>> I test kernels built by Fedora do I see it.
>>> see it every boot?
Hi Linus,
It's been a few weeks, so here's a small collection of fixes that
should go into the current series.
This pull request contains:
- NVMe pull request from Christoph, with a few important fixes.
- kyber hang fix from Omar.
- A blk-throttle fix from Shaohua, fixing a case where we double
2017-12-22 1:02 GMT+08:00 Jens Axboe wrote:
> On 12/21/17 9:42 AM, Bruno Wolff III wrote:
>> On Thu, Dec 21, 2017 at 23:48:19 +0800,
>> weiping zhang wrote:
>>>> output you want. I never saw it for any kernels I compiled myself. Only when
>>>> I test kernels built by Fedora do I see it.
>>> see it every boot?
On 12/21/17 9:42 AM, Bruno Wolff III wrote:
> On Thu, Dec 21, 2017 at 23:48:19 +0800,
> weiping zhang wrote:
>>> output you want. I never saw it for any kernels I compiled myself. Only when
>>> I test kernels built by Fedora do I see it.
>> see it every boot?
>
> I don't look every boot. The warning gets scrolled off the screen.
On Thu, Dec 21, 2017 at 23:48:19 +0800,
weiping zhang wrote:
>> output you want. I never saw it for any kernels I compiled myself. Only when
>> I test kernels built by Fedora do I see it.
> see it every boot?
I don't look every boot. The warning gets scrolled off the screen. Once I see
the CPU hang
2017-12-21 23:36 GMT+08:00 Bruno Wolff III wrote:
> On Thu, Dec 21, 2017 at 23:31:40 +0800,
> weiping zhang wrote:
>>
>> does every boot failure trigger the WARNING in device_add_disk?
>
>
> Not that I see. But the message could scroll off the screen. The boot gets
> far enough that systemd copies over dmesg output to permanent storage that I can see on
On Thu, Dec 21, 2017 at 23:31:40 +0800,
weiping zhang wrote:
> does every boot failure trigger the WARNING in device_add_disk?
Not that I see. But the message could scroll off the screen. The boot gets
far enough that systemd copies over dmesg output to permanent storage that
I can see on
2017-12-21 23:18 GMT+08:00 Bruno Wolff III wrote:
> On Thu, Dec 21, 2017 at 22:01:33 +0800,
> weiping zhang wrote:
>>
>> Hi,
>> How do you do the bisect? Build every kernel commit one by one?
>> as you did before:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1520982
>
>
> I just did the one bisect using Linus' tree.
On Thu, Dec 21, 2017 at 22:01:33 +0800,
weiping zhang wrote:
> Hi,
> How do you do the bisect? Build every kernel commit one by one?
> As you did before:
> https://bugzilla.redhat.com/show_bug.cgi?id=1520982
I just did the one bisect using Linus' tree. After each build, I would do
a test boot and see if
2017-12-21 21:00 GMT+08:00 Bruno Wolff III wrote:
> After today, I won't have physical access to the problem machine until
> January 2nd. So if you guys have any testing suggestions I need them soon if
> they are to get done before my vacation.
> I do plan to try booting to level 1 to see if I can get a login prompt that might facilitate testing.
After today, I won't have physical access to the problem machine until
January 2nd. So if you guys have any testing suggestions I need them soon
if they are to get done before my vacation.
I do plan to try booting to level 1 to see if I can get a login prompt
that might facilitate testing. The l
> On 21 Dec 2017, at 11:57, Paolo Valente wrote:
>
> Hi,
> a few minutes ago I bumped into this apparent severe regression, with
> 4.15-rc4 and an SSD PLEXTOR PX-256M5S. If, with none as I/O scheduler, I do
> fio --name=global --rw=randread --size=512m --name=job1
>
> I get
> read : io=524288KB, bw=34402KB/s, iops=8600, runt= 15240msec
Hello,
I think I owe you a reply here... Sorry that it took so long.
On Fri 01-12-17 22:13:27, Luis R. Rodriguez wrote:
> On Fri, Dec 01, 2017 at 12:47:24PM +0100, Jan Kara wrote:
> > On Thu 30-11-17 20:05:48, Luis R. Rodriguez wrote:
> > > > In fact, what might be a cleaner solution is to introduce
Hi,
a few minutes ago I bumped into this apparent severe regression, with 4.15-rc4
and an SSD PLEXTOR PX-256M5S. If, with none as I/O scheduler, I do
fio --name=global --rw=randread --size=512m --name=job1
I get
read : io=524288KB, bw=34402KB/s, iops=8600, runt= 15240msec
This device had to reac
On 12/21/2017 03:53 PM, Paolo Valente wrote:
On 21 Dec 2017, at 08:08, Guoqing Jiang wrote:
Hi,
On 12/08/2017 08:34 AM, Holger Hoffstätte wrote:
So plugging in a device on USB with BFQ as scheduler now works without
hiccup (probably thanks to Ming Lei's last patch), but
On Thu, Dec 21, 2017 at 10:20:45AM +0200, Sagi Grimberg wrote:
> @@ -2470,8 +2465,9 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
> if (!dev)
> return -ENOMEM;
> - dev->queues
Ming,
I'd prefer that we make the pci driver match
the rest of the drivers in nvme.
IMO it would be better to allocate a queues array at probe time
and simply reuse it at reset sequence.
Can this (untested) patch also fix the issue you're seeing:
--
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
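The preview ends at the diff header. A minimal sketch of the idea, allocating queue storage once in nvme_probe() so a reset can simply reuse it (the kcalloc_node() call and sizing are assumptions, not the literal patch):

 	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
 	if (!dev)
 		return -ENOMEM;
-	dev->queues = kzalloc_node((num_possible_cpus() + 1) * sizeof(void *),
-			GFP_KERNEL, node);
+	dev->queues = kcalloc_node(num_possible_cpus() + 1,
+			sizeof(struct nvme_queue), GFP_KERNEL, node);
 	if (!dev->queues)
 		goto free;

With dev->queues holding struct nvme_queue entries directly rather than pointers, the reset path can reinitialize queues in place instead of freeing and reallocating them.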