Nux,
there is no way to set it for NVMe, as /sys/block/nvme0n1/queue/scheduler
only offers the [none] option.
Setting any scheduler for the VM volume doesn't improve anything.
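For reference, this is what I see on the host (device name nvme0n1 is from my
setup; on a kernel with the blk-mq schedulers available, mq-deadline could in
principle be selected, but here only "none" is listed):

  cat /sys/block/nvme0n1/queue/scheduler
  # prints: [none]
  # would select mq-deadline if the kernel offered it; on this host it just errors out:
  echo mq-deadline | sudo tee /sys/block/nvme0n1/queue/scheduler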
Fri, 17 May 2019, 20:21 Nux!:
> What happens when you set deadline scheduler in both HV and guest?
>
> --
> Sent from the Delta quadrant using Borg technology!
Back when I worked at a company that used CloudStack, we had it
modified to add a queues option; that was not available in default
CloudStack.
As for cache: you can set it in the disk offering options.
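Roughly like this via cloudmonkey, if I remember the parameter names right
(the cachemode parameter on createDiskOffering may not be present on older
releases, so treat this as a sketch with example values):

  cloudmonkey create diskoffering name=nvme-writethrough \
      displaytext="NVMe writethrough" disksize=100 cachemode=writethrough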
On Fri, May 17, 2019 at 4:21 PM Nux! wrote:
>
> What happens when you set deadline scheduler in both HV and guest?
What happens when you set deadline scheduler in both HV and guest?
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
> From: "Ivan Kudryavtsev"
> To: "users" , "dev"
> Sent: Friday, 17 May, 2019 14:16:31
> Subject: Re: Poor NVMe Performance wit
BTW, you may think that the improvement is achieved by caching, but I clear
the cache with
sync && echo 3 > /proc/sys/vm/drop_caches
So I can't claim it for sure and would like a second opinion, but it looks
like for NVMe, writethrough must be used if you want a high IO rate. At least
with the Intel P4500.
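For completeness, the check I do inside the guest, with direct I/O so the
page cache is out of the picture (device name and job parameters are just my
example):

  sync && echo 3 > /proc/sys/vm/drop_caches
  fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --numjobs=4 --time_based --runtime=60 --group_reporting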
Fri, 17 May 2019,
Well, just FYI: I changed cache_mode from NULL (none) to writethrough
directly in the DB and performance improved greatly. It may be an important
feature for NVMe drives.
Currently, on 4.11, the user can set the cache mode for disk offerings, but
cannot for service offerings, which are translated to c
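For the record, the change was roughly the following (table and column names
from memory, so verify them against your schema and back up the DB first; the
offering id is a placeholder):

  mysql -u cloud -p cloud -e \
      "UPDATE disk_offering SET cache_mode='writethrough' WHERE id=<offering id>;"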
Darius, thanks for your participation.
First, I used the 4.14 kernel, which is the default one for my cluster. Then
I switched to 4.15 with dist-upgrade.
Do you have an idea how to set the number of queues for virtio-scsi with
CloudStack?
Fri, 17 May 2019, 19:26 Darius Kasparavičius:
> Hi,
>
> I can see a few issues with your XML file.
Hi,
I can see a few issues with your XML file. You can try using "queues"
inside your disk definitions. This should help a little; not sure by
how much in your case, but in my specific case it went up by almost the
number of queues. Also try cache directsync or writethrough. You
should switch kernel
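To give a concrete idea (hand-edited on a test VM, since CloudStack
regenerates the domain XML on the next stop/start; a queue count of 4 is just
an example matching the vCPU count):

  virsh edit <vm instance name>
  # inside the disk definition, extend the driver element, e.g.:
  #   <driver name='qemu' type='qcow2' cache='writethrough' queues='4'/>
  # for virtio-scsi the queues go on the controller instead:
  #   <controller type='scsi' model='virtio-scsi'><driver queues='4'/></controller>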
Host is a Dell R620 with dual E5-2690 / 256 GB DDR3-1333.
Fri, 17 May 2019, 19:22 Ivan Kudryavtsev:
> Nux,
>
> I use Ubuntu 16.04 with "none" scheduler and the latest kernel 4.15. Guest
> is Ubuntu 18.04 with Noop scheduler for scsi-virtio and "none" for virtio.
>
> Thanks.
>
> Fri, 17 May 2019,
Nux,
I use Ubuntu 16.04 with "none" scheduler and the latest kernel 4.15. Guest
is Ubuntu 18.04 with Noop scheduler for scsi-virtio and "none" for virtio.
Thanks.
Fri, 17 May 2019, 19:18 Nux!:
> Hi,
>
> What HV is that? CentOS? Are you using the right tuned profile? What about
> in the guest? Which IO scheduler?
Hi,
What HV is that? CentOS? Are you using the right tuned profile? What about in
the guest? Which IO scheduler?
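i.e. something along these lines on both the HV and the guest (assuming tuned
is installed; device names will differ):

  tuned-adm active
  cat /sys/block/*/queue/scheduler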
--
Sent from the Delta quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
> From: "Ivan Kudryavtsev"
> To: "users"
> Sent: Friday, 17 May, 2019 10:13:50
Thanks René for all your hard work and the good times we had when we worked on
the same team.
I'll still be sticking around, but likely won't be as involved and active as
René was.
There are too many other things needing my attention. ;)
From: Rene Moser
Sent:
Thanks, the manual change in ESX is basically what we did when changing the
details in CS didn't have the desired effect.
Can you clarify how to correctly set the root disk controller for a template?
With the rootDiskController detail, as we tried to do for VMs?
Since we'll be relying more
Gregor,
I already shared the solution for existing VMs. For any new VMs to be
deployed from some template, please change the template details and specify
the rootController type as you need it; this will make sure all new VMs
deployed from that template will inherit the specific root controller
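Something along these lines via cloudmonkey should do it (parameter spelling
from memory, so please check it against the updateTemplate API of your
version; the detail key and value here are examples):

  cloudmonkey update template id=<template uuid> details[0].rootDiskController=scsi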
Hi Andrija,
Thanks for the update; I kind of feared that this still wasn't possible in a
clean way.
As evidenced by the results, setting the details via the commands I posted
*did* have a certain effect, it just didn't work correctly for the root disk
controller.
I know that changing the c
Hello, colleagues.
I hope someone can help me. I just deployed a new VM host with a local Intel
P4500 NVMe drive.
From the hypervisor host I can get the expected performance, 200K read IOPS
and 3 GB/s with fio; write performance is also as high as expected.
I've created a new KVM VM Service offering with vi
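For reference, the baseline numbers above come from a run roughly like this,
directly against the NVMe device on the hypervisor (job parameters are just
my example):

  fio --name=baseline --filename=/dev/nvme0n1 --direct=1 --rw=randread --bs=4k \
      --iodepth=64 --numjobs=8 --time_based --runtime=60 --group_reporting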