It seems vdsmd under 4.1.x (or something under its control) changes the disk 
schedulers when it starts or a host node is activated, and I’d like to avoid 
this. Is it preventable? Or configurable anywhere? This was probably happening 
under earlier versions, but I just noticed it while upgrading some converged 
boxes today.
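
For anyone who wants to check what got set, a minimal sketch, assuming the 
standard /sys/block layout (the active scheduler is the bracketed entry):

    #!/usr/bin/env python
    # Print each block device's scheduler line; the active choice is
    # shown in brackets, e.g. "noop [deadline] cfq".
    import glob

    for path in glob.glob('/sys/block/*/queue/scheduler'):
        disk = path.split('/')[3]
        with open(path) as f:
            print('%s: %s' % (disk, f.read().strip()))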

It likes to set deadline, which I understand is the RHEL default on CentOS 7 
for non-SATA disks. But I’d rather have NOOP on my SSDs, since reordering 
requests buys nothing on flash, and NOOP on my SATA spinning platters because 
ZFS does its own scheduling, and running anything other than NOOP just adds 
CPU overhead for no gain. It’s also fighting ZFS, which tries to set NOOP on 
whole disks it controls, and my kernel command line setting.
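
As a stopgap, one could just re-assert NOOP after vdsmd has run; a rough 
sketch, assuming the usual writable sysfs interface (device names here are 
placeholders, and it needs root):

    #!/usr/bin/env python
    # Re-assert the noop scheduler on the listed disks by writing the
    # scheduler name into each disk's sysfs file.
    DISKS = ['sda', 'sdb']  # placeholder device names

    for disk in DISKS:
        path = '/sys/block/%s/queue/scheduler' % disk
        with open(path, 'w') as f:  # requires root
            f.write('noop')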

Thanks,

  -Darrell
