Hi. No offence, but as the topic author wrote, he has 30 MB/s. I just kept
that in mind when writing about 100.
On 7 Aug 2017 at 3:01, "Eric Green" wrote:
> On Aug 5, 2017, at 21:03, Ivan Kudryavtsev wrote:
>
> Hi, I think Eric's comments are too tough. E.g. I have 11x 1TB SSDs with
> Linux soft RAID 5 and ext4, and it works like a charm without special
> tuning.
>
> Qcow2 is also not so bad. LVM2 does it better, of course (if not being
> snapshotted).
Hi, I think Eric's comments are too tough. E.g. I have 11x 1TB SSDs with
Linux soft RAID 5 and ext4, and it works like a charm without special
tuning.
Qcow2 is also not so bad. LVM2 does it better, of course (if not being
snapshotted). Our users have different workloads and nobody complains about
disk performance.
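(For reference, a plain setup like the one described above can be put together
roughly as below; the device names, array name and mount point are only
placeholders, not the exact configuration used here.)

    # assemble 11 SSDs into a Linux software RAID 5 array (placeholder devices)
    mdadm --create /dev/md0 --level=5 --raid-devices=11 /dev/sd[b-l]
    # plain ext4 on top, no special tuning
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/lib/libvirt/images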
Wow, great explanation! Thank you, Eric!
On Sat, 5 Aug 2017 at 14:59 Eric Green wrote:
qcow2 performance has been historically bad regardless of the underlying
storage (it is an absolutely terrible storage format), which is why most
OpenStack Kilo and later installations instead usually use managed LVM and
present LVM volumes as iSCSI volumes to QEMU, because using raw LVM volumes
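(As a rough sketch of the raw-LVM idea in its simplest local form, rather than
the iSCSI setup mentioned above; the volume group, size and guest name are made
up for the example.)

    # carve a raw logical volume out of an existing volume group
    lvcreate -L 100G -n vm01-data vg_ssd
    # hand it to a running guest as a virtio disk; QEMU then writes to the
    # block device directly, with no qcow2 metadata in the I/O path
    virsh attach-disk vm01 /dev/vg_ssd/vm01-data vdb --cache none --live --persistent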
Rodrigo, what version and OS is your NFS server running? Is network performance
between the host pod and the storage OK?
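(If it helps, the negotiated NFS version and mount options can be read off the
KVM host directly, for example:)

    # show NFS mounts with the negotiated protocol version and options
    nfsstat -m
    # or
    grep nfs /proc/mounts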
On Sat, 5 Aug 2017 at 14:03 Ivan Kudryavtsev
wrote:
Qcow2 does lazy allocation. Try to write a big file inside the VM with dd (say
10GB), erase it and try again. Maybe lazy allocation works badly on your
RAID-5E.
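(Something along these lines shows the effect; the path and size are arbitrary.
The first pass forces qcow2 to allocate new clusters, the second pass mostly
rewrites space the image has already allocated.)

    # inside the guest: first write allocates new qcow2 clusters on the host
    dd if=/dev/zero of=/root/ddtest bs=1M count=10240 oflag=direct conv=fsync
    rm -f /root/ddtest
    # second write largely reuses clusters the image already allocated
    dd if=/dev/zero of=/root/ddtest bs=1M count=10240 oflag=direct conv=fsync

If the second run is much faster, preallocating the image (for instance with
qemu-img's preallocation option) might be worth testing.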
On 5 Aug 2017 at 23:29, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
Yes.. mounting an LVM volume inside the host works great, ~500 MB/s write
speed.. inside the guest I'm using ext4, but the speed is around 30 MB/s.
- - - - - - - - - - - - - - - - - - -
Rodrigo Baldasso - LHOST
(51) 9 8419-9861
- - - - - - - - - - - - - - - - - - -
On 05/08/2017 13:26:00, Ivan Kudryavtsev wrote:
Rodrigo, does your fio testing show good results? What filesystem are you
using? KVM is known to work very badly over BTRFS.
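(For example, a simple sequential-write job run on the host mount point and
then again inside the guest makes the comparison concrete; the path, size and
job parameters here are only an illustration.)

    fio --name=seqwrite --rw=write --bs=1M --size=4G --numjobs=1 \
        --ioengine=libaio --direct=1 --filename=/mnt/test/fio.tmp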
On 5 Aug 2017 at 23:16, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
Hi Ivan,
In fact I'm testing using local storage.. but on NFS I was getting similar
results as well.
Thanks!
- - - - - - - - - - - - - - - - - - -
Rodrigo Baldasso - LHOST
(51) 9 8419-9861
- - - - - - - - - - - - - - - - - - -
On 05/08/2017 13:03:24, Ivan Kudryavtsev wrote:
Hi, Rodrigo. It looks strange. Check your NFS configuration and your network
for errors and packet loss. It should work great.
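(A few quick checks usually reveal that kind of problem; the interface name and
server address below are placeholders.)

    # retransmissions and timeouts on the NFS client side
    nfsstat -rc
    # errors/drops on the interface carrying the NFS traffic
    ip -s link show eth0
    # basic packet-loss check against the NFS server
    ping -c 100 10.0.0.10 | tail -2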
On 5 Aug 2017 at 22:22, "Rodrigo Baldasso" <rodr...@loophost.com.br> wrote:
Hi everyone,
I'm having trouble achieving a good I/O rate using CloudStack qcow2 with any
type of caching (or even with caching disabled).
We have some RAID-5E SSD arrays which give us very good rates directly on the
node/host, but on the guest the speed is terrible.
Does anyone know a solution/workaround?
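(In case it helps anyone reading along: the cache mode actually in effect for
the disk can be checked in the running guest definition. The domain name below
is made up; the commented line only shows what the relevant driver entry looks
like.)

    # dump the running definition and look at the qcow2 driver line
    virsh dumpxml i-2-34-VM | grep -A2 "driver name='qemu'"
    #   <driver name='qemu' type='qcow2' cache='none' io='native'/>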