Ming Lei - 25.09.17, 10:59:
> On Sun, Sep 24, 2017 at 07:33:00PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 21.09.17, 06:17:
> > > On Wed, Sep 20, 2017 at 07:25:02PM +0200, Martin Steigerwald wrote:
> > > > Ming Lei - 28.08.17, 21:32:
> > > > > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin
On Sun, Sep 24, 2017 at 07:33:00PM +0200, Martin Steigerwald wrote:
> Ming Lei - 21.09.17, 06:17:
> > On Wed, Sep 20, 2017 at 07:25:02PM +0200, Martin Steigerwald wrote:
> > > Ming Lei - 28.08.17, 21:32:
> > > > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > > > > Ming Lei
Ming Lei - 21.09.17, 06:17:
> On Wed, Sep 20, 2017 at 07:25:02PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 28.08.17, 21:32:
> > > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > > > Ming Lei - 28.08.17, 20:58:
> > > > > On Sun, Aug 27, 2017 at 09:43:52AM +0200,
Martin Steigerwald - 21.09.17, 09:30:
> Ming Lei - 21.09.17, 06:20:
> > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > > Ming Lei - 28.08.17, 20:58:
> > > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > > Hi.
> > > > >
> > > > > Here is disk
Ming Lei - 21.09.17, 06:20:
> On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 28.08.17, 20:58:
> > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > Hi.
> > > >
> > > > Here is disk setup for QEMU VM:
> > > >
> > > > ===
> > > >
On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> Ming Lei - 28.08.17, 20:58:
> > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > Hi.
> > >
> > > Here is disk setup for QEMU VM:
> > >
> > > ===
> > > [root@archmq ~]# smartctl -i /dev/sda
> > > …
> >
On Wed, Sep 20, 2017 at 07:25:02PM +0200, Martin Steigerwald wrote:
> Ming Lei - 28.08.17, 21:32:
> > On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > > Ming Lei - 28.08.17, 20:58:
> > > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > > Hi.
> >
Ming Lei - 28.08.17, 21:32:
> On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> > Ming Lei - 28.08.17, 20:58:
> > > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > > Hi.
> > > >
> > > > Here is disk setup for QEMU VM:
[…]
> > > > In words: 2 virtual
Hi,
On Wed, Aug 30, 2017 at 12:58:21PM +0200, oleksa...@natalenko.name wrote:
> Hi.
>
> So, current summary:
>
> 1) first patch + debug patch: I can reproduce the issue in the wild, but not
> with pm_test set;
That is interesting, since I always test via pm_test.
> 2) first patch + debug patch +
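For context: pm_test is the kernel's suspend-test knob under /sys/power, available when CONFIG_PM_DEBUG is enabled. A minimal sketch of such a test run; the test level below is only an example, the exact level used in this thread is not stated in the snippet:
===
# Available test levels; the active one is shown in brackets.
cat /sys/power/pm_test
# [none] core processors platform devices freezer

# Pick a level, then trigger a simulated suspend-to-RAM cycle.
echo devices > /sys/power/pm_test
echo mem > /sys/power/state

# Restore real suspend behaviour afterwards.
echo none > /sys/power/pm_test
===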
Hi.
So, current summary:
1) first patch + debug patch: I can reproduce the issue in the wild, but not
with pm_test set;
2) first patch + debug patch + second patch: I cannot reproduce the issue at
all, neither with "none" nor with "mq-deadline".
Thus, "blk-mq: align to legacy path for implementing
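"none" and "mq-deadline" above are per-device blk-mq schedulers, switchable at runtime through sysfs. A sketch, with /dev/sda as a stand-in device name:
===
# Available blk-mq schedulers; the active one is in brackets.
cat /sys/block/sda/queue/scheduler
# [none] mq-deadline

# Switch the scheduler at runtime.
echo mq-deadline > /sys/block/sda/queue/scheduler
===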
Hi,
On Wed, Aug 30, 2017 at 08:15:02AM +0200, oleksa...@natalenko.name wrote:
> Hello.
>
> Addressing your questions below.
>
> > Can't reproduce even with putting dm-crypt on raid10 after applying my
> > patch.
>
> Just a side note: dm-crypt is not necessary here — I am able to trigger
>
Hello.
Addressing your questions below.
Can't reproduce even with putting dm-crypt on raid10 after applying my
patch.
Just a side note: dm-crypt is not necessary here — I am able to
trigger the hang with RAID10 and LVM only.
BTW, could you share with us which blk-mq scheduler you are using on
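A minimal sketch of the RAID10-plus-LVM stack being discussed; the device and volume names are made up, not the reporter's exact layout:
===
# Two-disk RAID10 array (near layout) from the VM's disks.
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sda /dev/sdb

# LVM layered on top of the array.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -l 100%FREE -n test vg0
===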
On Wed, Aug 30, 2017 at 10:15:37AM +0800, Ming Lei wrote:
> Hi,
>
> On Tue, Aug 29, 2017 at 05:52:42PM +0200, Oleksandr Natalenko wrote:
> > Hello.
> >
> > Re-tested with v4.13-rc7 + proposed patch and got the same result.
>
> Maybe there is another issue; I didn't use dm-crypt on raid10, will
>
Hi,
On Tue, Aug 29, 2017 at 05:52:42PM +0200, Oleksandr Natalenko wrote:
> Hello.
>
> Re-tested with v4.13-rc7 + proposed patch and got the same result.
Maybe there is another issue; I didn't use dm-crypt on raid10, will
test in your way to see if I can reproduce it.
BTW, could you share with us
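For completeness, stacking dm-crypt on such an array looks roughly like this; the names are made up, and per the side note earlier this layer is not needed to reproduce the hang:
===
# Initialise LUKS on the array and open it as a dm-crypt device.
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptomd0
# The plaintext device then appears as /dev/mapper/cryptomd0.
===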
Hello.
Re-tested with v4.13-rc7 + proposed patch and got the same result.
Let me know if any additional testing is needed.
===
[ 82.638148] INFO: task md0_raid10:193 blocked for more than 20 seconds.
[ 82.642804] Not tainted 4.13.0-pf1 #1
[ 82.646998] "echo 0 >
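The truncated line is the kernel's standard hung-task watchdog hint. The watchdog is tunable via sysctl, roughly:
===
# Seconds a task may stay blocked before it is reported.
sysctl kernel.hung_task_timeout_secs

# Writing 0 disables the "blocked for more than N seconds" reports.
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
===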
On Mon, Aug 28, 2017 at 08:22:26PM +0200, Oleksandr Natalenko wrote:
> Hi.
>
> On Monday 28 August 2017 14:58:28 CEST Ming Lei wrote:
> > Could you verify if the following patch fixes your issue?
> > …SNIP…
>
> I've applied it to v4.12.9 and rechecked — the issue is still there,
>
Hi.
On Monday 28 August 2017 14:58:28 CEST Ming Lei wrote:
> Could you verify if the following patch fixes your issue?
> …SNIP…
I've applied it to v4.12.9 and rechecked — the issue is still there,
unfortunately. The stacktrace is the same as before.
Were you able to reproduce it in a VM?
Should
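Applying a proposed fix on top of a stable release, as done in this exchange, typically goes like this; the patch file name is hypothetical:
===
# Check out the stable tag the report is against, apply, rebuild.
git checkout v4.12.9
git am proposed-fix.patch   # or: patch -p1 < proposed-fix.patch
make olddefconfig
make -j"$(nproc)"
===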
On Mon, Aug 28, 2017 at 03:10:35PM +0200, Martin Steigerwald wrote:
> Ming Lei - 28.08.17, 20:58:
> > On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > > Hi.
> > >
> > > Here is disk setup for QEMU VM:
> > >
> > > ===
> > > [root@archmq ~]# smartctl -i /dev/sda
> > > …
> >
Ming Lei - 28.08.17, 20:58:
> On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> > Hi.
> >
> > Here is disk setup for QEMU VM:
> >
> > ===
> > [root@archmq ~]# smartctl -i /dev/sda
> > …
> > Device Model: QEMU HARDDISK
> > Serial Number: QM1
> > Firmware Version:
On Sun, Aug 27, 2017 at 09:43:52AM +0200, Oleksandr Natalenko wrote:
> Hi.
>
> Here is disk setup for QEMU VM:
>
> ===
> [root@archmq ~]# smartctl -i /dev/sda
> …
> Device Model: QEMU HARDDISK
> > Serial Number: QM1
> > Firmware Version: 2.5+
> > User Capacity: 4,294,967,296 bytes [4.29
Hi.
Here is disk setup for QEMU VM:
===
[root@archmq ~]# smartctl -i /dev/sda
…
Device Model: QEMU HARDDISK
Serial Number: QM1
Firmware Version: 2.5+
User Capacity: 4,294,967,296 bytes [4.29 GB]
Sector Size: 512 bytes logical/physical
Device is: Not in smartctl database
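"QEMU HARDDISK" is the model string QEMU's emulated IDE disks report. A VM with two such disks can be started along these lines; the image names are hypothetical:
===
# Two emulated IDE disks; the guest sees them as QEMU HARDDISK.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -hda disk0.qcow2 -hdb disk1.qcow2
===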
On Sat, Aug 26, 2017 at 12:48:01PM +0200, Oleksandr Natalenko wrote:
> Quick update: reproduced on both v4.12.7 and v4.13.0-rc6.
BTW, given that it hangs during resume, it isn't easy to collect debug
info, and there should have been lots of useful info there.
You mentioned that you can reproduce it on
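When the machine freezes during resume, SysRq dumps over a serial console are the usual way to salvage debug info. A sketch:
===
# Dump all blocked (uninterruptible) tasks to the kernel log.
echo w > /proc/sysrq-trigger

# Backtraces of all active CPUs.
echo l > /proc/sysrq-trigger

# Boot with a serial console so the output survives a frozen display:
#   console=ttyS0,115200
===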
Wols Lists - 26.08.17, 18:17:
> On 26/08/17 12:19, Martin Steigerwald wrote:
> > Also… when a hang happened the mouse pointer was frozen, Ctrl-Alt-F1
> > didn't
> > work and so on… so it may easily be a completely different issue.
> >
> > I did not see much point in reporting it so far… as I have
On 26/08/17 12:19, Martin Steigerwald wrote:
> Also… when a hang happened the mouse pointer was frozen, Ctrl-Alt-F1 didn't
> work and so on… so it may easily be a completely different issue.
>
> I did not see much point in reporting it so far… as I have no idea on how to
> reliably pin-point
Recompiled v4.13-rc6 with debug info and lockdep.
Unfortunately, I still cannot use the crash utility, because 7.1.9 does not support
v4.13 (and git HEAD, with the required fix for 5-level page tables, does not compile
for me).
But I've got a much nicer stacktrace + lockdep output (see the paste below).
Looks
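Rebuilding with debug info and lockdep, as described, can be done with the kernel's scripts/config helper; a sketch (PROVE_LOCKING pulls in lockdep):
===
# From the kernel source tree:
scripts/config -e DEBUG_INFO -e PROVE_LOCKING
make olddefconfig
make -j"$(nproc)"
===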
Hello Oleksandr,
Oleksandr Natalenko - 26.08.17, 12:48:
> Quick update: reproduced on both v4.12.7 and v4.13.0-rc6.
>
> On Saturday 26 August 2017 12:37:29 CEST Oleksandr Natalenko wrote:
[…]
> > I've re-checked this issue with 4.12.9, and it is still there.
[…]
> > On Tuesday 22 August 2017
Quick update: reproduced on both v4.12.7 and v4.13.0-rc6.
On Saturday 26 August 2017 12:37:29 CEST Oleksandr Natalenko wrote:
> Hi.
>
> I've re-checked this issue with 4.12.9, and it is still there.
>
> Also, I've managed to reproduce it in a VM with non-virtio disks (just
> -hda/-hdb pair in
Hi.
I've re-checked this issue with 4.12.9, and it is still there.
Also, I've managed to reproduce it in a VM with non-virtio disks (just an
-hda/-hdb pair in QEMU).
I'm not able to reproduce it with blk_mq disabled. Also, if blk_mq is enabled,
the scheduler does not make any difference, even
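On kernels of this era, blk-mq for SCSI devices is toggled with a module parameter; roughly:
===
# Boot parameter; 0 selects the legacy request path instead.
#   scsi_mod.use_blk_mq=1

# Check the current setting at runtime.
cat /sys/module/scsi_mod/parameters/use_blk_mq
===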
Hi.
The v4.12.8 kernel hangs in the I/O path after resuming from suspend-to-RAM. I
have blk-mq enabled and tried both the BFQ and mq-deadline schedulers, with the
same result. A soft lockup happens, showing the stacktraces I'm pasting below.
The stacktrace shows that I/O hangs in md_super_wait(), which means it waits
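A task stuck like md0_raid10 above can also be inspected directly while it is blocked; this needs CONFIG_STACKTRACE, and the thread name is taken from the report:
===
# Kernel stack of the blocked md0_raid10 thread.
cat /proc/"$(pgrep md0_raid10)"/stack

# D state marks uninterruptible sleep.
ps -eo pid,stat,comm | grep md0_raid10
===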