On 02/08/2018 08:17 PM, Dmitry Vyukov wrote:
> On Thu, Feb 8, 2018 at 5:23 PM, Andrey Ryabinin
> wrote:
>>
>>
>> On 02/08/2018 07:18 PM, Jan Kara wrote:
>>
>>>> By "full kernel crashdump" you mean kdump thing, or something else?
>>>
>>> Yes, the kdump thing (for KVM guest you can grab the
On Thu, Feb 8, 2018 at 5:23 PM, Andrey Ryabinin wrote:
>
>
> On 02/08/2018 07:18 PM, Jan Kara wrote:
>
>>> By "full kernel crashdump" you mean kdump thing, or something else?
>>
>> Yes, the kdump thing (for KVM guest you can grab the memory dump also from
>> the host in a simpler way and it
On 02/08/2018 07:18 PM, Jan Kara wrote:
>> By "full kernel crashdump" you mean kdump thing, or something else?
>
> Yes, the kdump thing (for KVM guest you can grab the memory dump also from
> the host in a simpler way and it should be usable with the crash utility
> AFAIK).
>
In QEMU
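Jan's remark about grabbing the memory dump from the host appears to refer to QEMU/libvirt's guest-memory dumping. A minimal sketch, assuming a libvirt-managed guest named "testvm" (the guest name and file paths are placeholders, not from the thread):

```shell
# Dump only the guest's RAM, in an ELF format the crash utility understands.
virsh dump --memory-only --format elf testvm /var/tmp/testvm.core

# Inspect the dump with crash; needs the guest kernel's vmlinux with debuginfo.
crash /path/to/guest/vmlinux /var/tmp/testvm.core
```

`--memory-only` uses QEMU's dump-guest-memory mechanism on the host side, so the guest itself does not need kdump configured.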
On Thu 08-02-18 06:49:18, Andi Kleen wrote:
> > > It seems multiple processes deadlocked on the bd_mutex.
> > > Unfortunately there's no backtrace for the lock acquisitions,
> > > so it's hard to see the exact sequence.
> >
> > Well, everything in the report points to a situation where some IO was
On Thu 08-02-18 15:18:11, Dmitry Vyukov wrote:
> On Thu, Feb 8, 2018 at 3:08 PM, Jan Kara wrote:
> > On Thu 08-02-18 14:28:08, Dmitry Vyukov wrote:
> >> On Thu, Feb 8, 2018 at 10:28 AM, Jan Kara wrote:
> >> > On Wed 07-02-18 07:52:29, Andi Kleen wrote:
> >> >> > #0: (&bdev->bd_mutex){+.+.}, at:
> > It seems multiple processes deadlocked on the bd_mutex.
> > Unfortunately there's no backtrace for the lock acquisitions,
> > so it's hard to see the exact sequence.
>
> Well, everything in the report points to a situation where some IO was submitted
> to the block device and never completed (more
On Thu, Feb 8, 2018 at 3:08 PM, Jan Kara wrote:
> On Thu 08-02-18 14:28:08, Dmitry Vyukov wrote:
>> On Thu, Feb 8, 2018 at 10:28 AM, Jan Kara wrote:
>> > On Wed 07-02-18 07:52:29, Andi Kleen wrote:
>> >> > #0: (&bdev->bd_mutex){+.+.}, at: [<40269370>]
>> >> > __blkdev_put+0xbc/0x7f0
On Thu 08-02-18 14:28:08, Dmitry Vyukov wrote:
> On Thu, Feb 8, 2018 at 10:28 AM, Jan Kara wrote:
> > On Wed 07-02-18 07:52:29, Andi Kleen wrote:
> >> > #0: (&bdev->bd_mutex){+.+.}, at: [<40269370>]
> >> > __blkdev_put+0xbc/0x7f0 fs/block_dev.c:1757
> >> > 1 lock held by blkid/19199:
> >> >
On Thu, Feb 8, 2018 at 10:28 AM, Jan Kara wrote:
> On Wed 07-02-18 07:52:29, Andi Kleen wrote:
>> > #0: (&bdev->bd_mutex){+.+.}, at: [<40269370>]
>> > __blkdev_put+0xbc/0x7f0 fs/block_dev.c:1757
>> > 1 lock held by blkid/19199:
>> > #0: (&bdev->bd_mutex){+.+.}, at: []
>> >
On Wed 07-02-18 07:52:29, Andi Kleen wrote:
> > #0: (&bdev->bd_mutex){+.+.}, at: [<40269370>]
> > __blkdev_put+0xbc/0x7f0 fs/block_dev.c:1757
> > 1 lock held by blkid/19199:
> > #0: (&bdev->bd_mutex){+.+.}, at: []
> > __blkdev_get+0x158/0x10e0 fs/block_dev.c:1439
> > #1:
> #0: (&bdev->bd_mutex){+.+.}, at: [<40269370>]
> __blkdev_put+0xbc/0x7f0 fs/block_dev.c:1757
> 1 lock held by blkid/19199:
> #0: (&bdev->bd_mutex){+.+.}, at: []
> __blkdev_get+0x158/0x10e0 fs/block_dev.c:1439
> #1: (&tty->atomic_read_lock){+.+.}, at: [<33edf9f2>]
>
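The held-locks records above are easier to read when grouped by lock name, since tasks listed under the same lock are the candidates for the bd_mutex contention the report shows. A rough sketch of such grouping (the `bash/1234` record is invented for illustration; the format is paraphrased from lockdep's "locks held by" output):

```python
import re

# Paraphrased lockdep "locks held by" records; the bash/1234 entry is made up.
SAMPLE = """\
1 lock held by blkid/19199:
 #0: (&bdev->bd_mutex){+.+.}, at: [] __blkdev_get+0x158/0x10e0 fs/block_dev.c:1439
2 locks held by bash/1234:
 #0: (&tty->atomic_read_lock){+.+.}, at: [<33edf9f2>] n_tty_read
 #1: (&bdev->bd_mutex){+.+.}, at: [<40269370>] __blkdev_put+0xbc/0x7f0 fs/block_dev.c:1757
"""

def holders_by_lock(text):
    """Map each lock name to the list of tasks holding it."""
    holders, task = {}, None
    for line in text.splitlines():
        m = re.match(r"\d+ locks? held by (\S+):", line.strip())
        if m:
            task = m.group(1)
            continue
        m = re.match(r"#\d+: \((\S+)\)", line.strip())
        if m and task:
            holders.setdefault(m.group(1), []).append(task)
    return holders

print(holders_by_lock(SAMPLE))
```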