[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-13 Thread Josh Durgin
Since this works fine with krbd, it sounds like the bug may be in librbd. Could you install debug symbols (the librbd1-dbg package) and, when this occurs, attach to the qemu process with gdb and get a backtrace of all threads (there will be a lot of them) via 'gdb -p $pid' and, in gdb, 'thread apply all bt'?
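A minimal sketch of that procedure, assuming a Debian/Ubuntu hypervisor host and that the qemu binary name matches the pgrep pattern below (adjust it to yours):

    # install debug symbols for librbd and librados
    sudo apt-get install librbd1-dbg librados2-dbg

    # attach to the running qemu process and dump every thread's stack;
    # -batch makes gdb exit after the commands run, so the guest is only
    # paused briefly
    pid=$(pgrep -o qemu)   # hypothetical pattern; match your qemu binary
    sudo gdb -p "$pid" -batch -ex 'thread apply all bt' > qemu-threads.txt 2>&1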

[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-13 Thread chenwqin
Here is the gdb output with librbd1-dbg and librados2-dbg installed:

    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    0x7ff8cf8dddff in ppoll () from /lib/x86_64-linux-gnu/libc.

[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-13 Thread Jason Dillaman
Can you run 'perf top' against just the QEMU process? There was an email chain from nearly a year ago about tcmalloc causing extremely high '_raw_spin_lock' calls under high IOPS scenarios.
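A sketch of that measurement, assuming perf is installed and the guest runs as a single qemu process (the pgrep pattern is an assumption; match your binary name):

    # sample only the qemu process; -g records call graphs, which shows
    # whether _raw_spin_lock is reached from tcmalloc in user space or
    # from kernel code
    sudo perf top -g -p "$(pgrep -o qemu)"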

[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-13 Thread chenwqin
Here is the 'perf top -p `pgrep qemu` -a' output. I hit the tcmalloc problem on the OSD host and fixed it with a larger TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES; the perf top profile of the tcmalloc problem looks a little different from my problem here.
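For reference, a minimal sketch of the OSD-side workaround mentioned above, assuming a packaged Ceph install that reads /etc/default/ceph at daemon start (the path is an assumption; systemd setups would use a unit override instead):

    # /etc/default/ceph
    # raise tcmalloc's aggregate thread-cache limit from its small default
    # to 256 MB so OSD threads stop contending on the central free list
    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=268435456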

[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-14 Thread chenwqin
Some more tests I have done:
1. running qemu with TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=256MB: still got the problem
2. preventing the CPU from entering the C3 and C6 states: still got the problem
3. running qemu with aio=native (see the sketch below): still got the problem
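A minimal sketch of how test 3 was likely invoked, assuming an rbd-backed virtio disk named rbd/vm-disk (the pool and image names are hypothetical):

    # aio=native requires cache=none (O_DIRECT); note that the rbd protocol
    # driver performs its own asynchronous I/O through librbd, so this flag
    # chiefly affects file-backed drives
    qemu-system-x86_64 \
      -drive file=rbd:rbd/vm-disk,format=raw,if=virtio,cache=none,aio=native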

[Qemu-devel] [Bug 1581334] Re: qemu + librbd takes high %sy cpu under high random io workload

2016-05-14 Thread Jason Dillaman
Any chance you can re-test with a more recent kernel on the hypervisor host? If the spin-lock was coming from user-space, I would expect futex_wait_setup and futex_wake to be much higher.
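A sketch of one way to test that hypothesis before upgrading, assuming perf with the standard syscalls tracepoints available:

    # count futex syscalls made by the qemu process for 10 seconds; a
    # user-space lock storm (e.g. tcmalloc) would show a very high rate,
    # while time spent purely in kernel _raw_spin_lock would not
    sudo perf stat -e 'syscalls:sys_enter_futex' -p "$(pgrep -o qemu)" -- sleep 10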