Re: [PATCH] bcache: Fix writeback_thread never writing back incomplete stripes.
@@ -371,14 +373,20 @@ static bool refill_dirty(struct cached_dev *dc)
 		return false;
 	}
 
-	if (bkey_cmp(&buf->last_scanned, &end) >= 0) {
-		buf->last_scanned = KEY(dc->disk.id, 0, 0);
-		searched_from_start = true;
-	}
-
+	start_pos = buf->last_scanned;
 	bch_refill_keybuf(dc->disk.c, buf, &end, dirty_pred);
 
-	return bkey_cmp(&buf->last_scanned, &end) >= 0 && searched_from_start;
+	if (bkey_cmp(&buf->last_scanned, &end) < 0)
+		return false;
+
+	/*
+	 * If we get to the end start scanning again from the beginning, and
+	 * only scan up to where we initially started scanning from:
+	 */
+	buf->last_scanned = KEY(dc->disk.id, 0, 0);
+	bch_refill_keybuf(dc->disk.c, buf, &start_pos, dirty_pred);
+
+	return bkey_cmp(&buf->last_scanned, &start_pos) >= 0;
 }
 
 static void bch_writeback(struct cached_dev *dc)
-- 
2.5.1

--
To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Peter Kieser
604.338.9294 / pe...@kieser.ca
Re: [PATCH v4 0/3] KVM: Dynamic Halt-Polling
On 2015-08-29 4:55 PM, Wanpeng Li wrote:
> On 8/30/15 6:26 AM, Peter Kieser wrote:
>> Thanks, Wanpeng. Applied this to Linux 3.18 and seeing much higher CPU
>> usage (200%) for the qemu 2.4.0 process on a Windows 10 x64 guest.
>> qemu parameters:
>
> Thanks for the report. Is Paolo's patch "kvm: add halt_poll_ns module
> parameter" applied on your 3.18? Btw, did you test a Linux guest?

No high CPU usage on Linux guests. The following patch series are applied
(in order):

* kvm: add halt_poll_ns module parameter
* KVM: make halt_poll_ns static
* KVM: Dynamic Halt-Polling v4

-Peter
Re: [PATCH v4 0/3] KVM: Dynamic Halt-Polling
Thanks, Wanpeng. Applied this to Linux 3.18 and seeing much higher CPU usage (200%) for qemu 2.4.0 process on a Windows 10 x64 guest. qemu parameters: qemu-system-x86_64 -enable-kvm -name arwan-20150704 -S -machine pc-q35-2.2,accel=kvm,usb=off -cpu Haswell,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 7c2fc02d-2798-4fc9-ad04-db5f1af92723 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/arwan-20150704.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -boot strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -device nec-usb-xhci,id=usb1,bus=pci.2,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x5 -drive file=/dev/mapper/crypt-arwan-20150704,if=none,id=drive-virtio-disk0,format=raw,cache=none,discard=unmap,aio=native -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/usr/share/virtio-win/virtio-win.iso,if=none,media=cdrom,id=drive-sata0-0-2,readonly=on,format=raw -device ide-cd,bus=ide.2,drive=drive-sata0-0-2,id=sata0-0-2,bootindex=1 -netdev tap,fds=31:32:33:34,id=hostnet0,vhost=on,vhostfds=35:36:37:38 -device virtio-net-pci,guest_csum=off,guest_tso4=off,guest_tso6=off,mq=on,vectors=10,netdev=hostnet0,id=net0,mac=52:54:00:f3:6b:c4,bus=pci.2,addr=0x2 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/arwan-20150704.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -vnc 127.0.0.1:4 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pcie.0,addr=0x1 -device 
virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x1 -msg timestamp=on

If I revert the patch, qemu shows 17% CPU usage on the host. Thoughts?

-Peter

On 2015-08-29 3:21 PM, Wanpeng Li wrote:
> Hi Peter,
>
> On 8/30/15 5:18 AM, Peter Kieser wrote:
>> Hi Wanpeng,
>>
>> Do I need to set any module parameters to use your patch, or should
>> halt_poll_ns automatically tune with just your patch series applied?
>
> You don't need any module parameters.
>
> Regards,
> Wanpeng Li

Thanks.

On 2015-08-27 2:47 AM, Wanpeng Li wrote:
> v3 -> v4:
>  * bring back growing vcpu->halt_poll_ns when an interrupt arrives and
>    shrinking it when an idle VCPU is detected
>
> v2 -> v3:
>  * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or
>    /halt_poll_ns_shrink
>  * drop the macros and hard-code the numbers in the param definitions
>  * update the comments "5-7 us"
>  * remove halt_poll_ns_max and use halt_poll_ns as the max halt_poll_ns
>    time, vcpu->halt_poll_ns starts at zero
>  * drop the wrappers
>  * move the grow/shrink logic before "out:" w/ "if (waited)"
>
> v1 -> v2:
>  * change kvm_vcpu_block to read halt_poll_ns from the vcpu instead of
>    the module parameter
>  * use the shrink/grow matrix which is suggested by David
>  * set halt_poll_ns_max to 2ms
>
> There is a downside to halt_poll_ns, since polling still happens for
> idle VCPUs, which can waste CPU usage. This patchset adds the ability
> to adjust halt_poll_ns dynamically: it grows halt_poll_ns if an
> interrupt arrives and shrinks halt_poll_ns when an idle VCPU is
> detected. There are two new kernel parameters for changing the
> halt_poll_ns: halt_poll_ns_grow and halt_poll_ns_shrink.
>
> Test w/ high cpu overcommit ratio and pinned vCPUs; the halt_poll_ns of
> halt-poll is the default 50ns, and the max halt_poll_ns of dynamic
> halt-poll is 2ms. Then watch the %C0 in the dump of the Powertop tool.
> The test method is mostly from David.
>
> +---------------+--------------+-------------------+
> | w/o halt-poll | w/ halt-poll | dynamic halt-poll |
> +---------------+--------------+-------------------+
> |     ~0.9%     |    ~1.8%     |       ~1.2%       |
> +---------------+--------------+-------------------+
>
> Always-on halt-poll increases CPU usage by ~0.9% for idle vCPUs, while
> dynamic halt-poll drops the increase to ~0.3%, reducing the overhead
> introduced by always-on halt-poll by 67%.
>
> Wanpeng Li (3):
>   KVM: make halt_poll_ns per-VCPU
>   KVM: dynamic halt_poll_ns adjustment
>   KVM: trace kvm_halt_poll_ns grow/shrink
>
>  include/linux/kvm_host.h   |  1 +
>  include/trace/events/kvm.h | 30
>  virt/kvm/kvm_main.c        | 50 +++---
>  3 files changed, 78 insertions(+), 3 deletions(-)

--
Peter Kieser
604.338.9294 / pe...@kieser.ca
Re: Question about my patch
On 2014-12-16 9:26 PM, NeilBrown wrote:
> i.e. there is no bug here, and nothing to fix.
>
> Thanks,
> NeilBrown

FYI: https://lkml.org/lkml/2014/8/4/206

-Peter
Re: [GIT PULL] bcache changes for 3.17
On 2014-09-05 2:45 PM, Greg KH wrote:
> Just because a maintainer/developer doesn't want to do anything for the
> stable kernel releases does _NOT_ mean the code is
> "unstable/experimental" at all.

These are more bcache-ate-my-data unstable bugs. It's standard practice
to backport fixes for instability/data corruption to a 'stable' release
(otherwise, why would it be named 'stable')?

-Peter
Re: [GIT PULL] bcache changes for 3.17
On 2014-09-05 8:37 AM, Eddie Chapman wrote:
> On 05/09/14 15:17, Jens Axboe wrote:
>> (from oldest to newest). And that's just from 3.16 to 3.17-rc3; going
>> all the way back to 3.10 would be a lot of work. If there's anyone
>> that cares about bcache on stable kernels (and actually uses it), now
>> would be a good time to pipe up.
>
> Just "piping up", as I care about bcache and actually use it in
> production on 3.10! Shame I don't have the knowledge to try and
> backport these though :-)
>
> Eddie

I'm "piping up" as well; I use bcache on 3.10 in production.

-Peter
Re: [GIT PULL] bcache changes for 3.17
On 2014-08-05 9:58 AM, Jens Axboe wrote:
> On 08/04/2014 10:33 PM, Kent Overstreet wrote:
>> Hey Jens, here's the pull request for 3.17 - typically late, but lots
>> of tasty fixes in this one :)
>
> Normally I'd say no, but since it's basically just fixes, I guess we
> can pull it in. But generally, it has to be in my hands a week before
> this, so it can simmer a bit in for-next before going in...

Are these fixes going to be backported to 3.10 or other stable releases?

-Peter