Rodrigo, what NFS server version and operating system are you running? Is the
network performance between the host/pod and the storage OK?
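Just as an illustration (the hostname is a placeholder, and this assumes iperf3
is installed on both ends), something like this on the host shows the
negotiated NFS version and a rough idea of raw throughput to the storage box:

  nfsstat -m                # negotiated NFS version and mount options
  iperf3 -c storage-host    # raw network throughput towards the NFS server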
On Sat, 5 Aug 2017 at 14:03 Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Qcow2 does lazy allocation. Try writing a big file (say 10GB) inside the VM
> with dd, erase it, and try it again. Maybe lazy allocation works badly with
> your RAID-5e.
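>
> Just a sketch of what I mean (path and sizes are examples only; oflag=direct
> bypasses the guest page cache so you measure the disk rather than RAM):
>
>   dd if=/dev/zero of=/root/bigfile bs=1M count=10240 oflag=direct
>   rm /root/bigfile
>   dd if=/dev/zero of=/root/bigfile bs=1M count=10240 oflag=direct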
>
> On 5 Aug 2017 at 23:29, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
>
> > Yes.. mounting an LVM volume inside the host works great, ~500 MB/s write
> > speed.. inside the guest I'm using ext4, but the speed is around 30 MB/s.
> >
> > - - - - - - - - - - - - - - - - - - -
> >
> > Rodrigo Baldasso - LHOST
> >
> > (51) 9 8419-9861
> > - - - - - - - - - - - - - - - - - - -
> > On 05/08/2017 13:26:00, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com> wrote:
> > Rodrigo, does your fio testing show good results? What filesystem are you
> > using? KVM is known to perform very badly over BTRFS.
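> >
> > For reference, a simple sequential-write run could look like this (the
> > parameters and file path are only an example, not tuned for your array):
> >
> >   fio --name=seqwrite --rw=write --bs=1M --size=10G --direct=1 --filename=/mnt/test/fio.tmp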
> >
> > On 5 Aug 2017 at 23:16, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
> >
> > Hi Ivan,
> >
> > In fact I'm testing using local storage.. but on NFS I was getting similar
> > results as well.
> >
> > Thanks!
> >
> > - - - - - - - - - - - - - - - - - - -
> >
> > Rodrigo Baldasso - LHOST
> >
> > (51) 9 8419-9861
> > - - - - - - - - - - - - - - - - - - -
> > On 05/08/2017 13:03:24, Ivan Kudryavtsev wrote:
> > Hi, Rodrigo. That looks strange. Check your NFS configuration and check for
> > network errors and packet loss. It should work great.
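> >
> > For example (the interface name and server address are placeholders),
> > something along these lines usually shows it quickly:
> >
> >   nfsstat -m                 # NFS version and mount options actually in use
> >   ip -s link show eth0       # RX/TX error and drop counters
> >   ping -c 100 nfs-server     # quick packet-loss check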
> >
> > On 5 Aug 2017 at 22:22, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
> >
> > Hi everyone,
> >
> > I'm having trouble achieving a good I/O rate using CloudStack with qcow2,
> > with any type of caching (or even with caching disabled).
> >
> > We have some RAID-5e SSD arrays which give us very good rates directly on
> > the node/host, but inside the guest the speed is terrible.
> >
> > Does anyone know of a solution/workaround for this? I've never used qcow2
> > (only raw+LVM), so I don't know what to do to solve this.
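> >
> > (To be concrete: by "caching" I mean the cache attribute on the guest
> > disk's <driver> element, e.g. cache='none' / 'writeback' / 'writethrough',
> > roughly like this, with the rest of the XML omitted:
> >
> >   <driver name='qemu' type='qcow2' cache='none'/>
> >
> > none of which made a noticeable difference.)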
> >
> > Thanks!
> >
>
