After changing the settings below, the Linux guest has a good write speed.
But the FreeNAS guest still stays at 10 MB/s.
After doing some tests on FreeBSD with a bigger block size:
"dd if=/dev/zero of=testfile bs=9000" I get about 80 MB/s.
With "dd if=/dev/zero of=testfile" the speed is 10 MB/s.
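For anyone reproducing the comparison, a sketch of the two runs with a forced flush at the end, so the guest page cache does not inflate the numbers (file name and sizes are arbitrary):

```shell
# Default dd block size is 512 B; compare it against a larger one.
# conv=fdatasync flushes the file to disk before dd reports, so the
# figure reflects storage throughput rather than RAM speed.
dd if=/dev/zero of=testfile bs=512 count=20480 conv=fdatasync
dd if=/dev/zero of=testfile bs=1M count=10 conv=fdatasync
rm -f testfile
```
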
Hi,
Thank you for your help.
After changing these settings the Linux guest got an increase in speed.
The FreeNAS guest still has a write speed of 10 MB/s.
The disk driver is virtio and it has a write-back cache.
What am I missing?
Kind regards,
Michiel Piscaer
On 23-11-16 08:05, Оралов, Алекс wrote:
Are you using virtio_scsi? I found it to be much faster on Ceph in fio
benchmarks (and it also supports trim/discard).
https://pve.proxmox.com/wiki/Qemu_discard
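For what it's worth, the switch can be made from the Proxmox CLI; a sketch assuming a VM id of 100 and an RBD-backed volume named vm-100-disk-1 (both hypothetical, adjust to your setup):

```shell
# Hypothetical VM id and disk volume name; change both to match yours.
# Switch the controller to virtio-scsi and attach the disk as scsi0
# with write-back cache and discard (trim) enabled.
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 rbd:vm-100-disk-1,cache=writeback,discard=on
```

These are configuration commands for a Proxmox VE host; the guest needs a stop/start (not just a reboot) to pick up the new controller.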
On 11/23/16 07:53, M. Piscaer wrote:
> Hi,
>
> I have a little performance problem with KVM and Ceph.
>
> I'm using Proxmox 4.3-10/7230e60f
Hi Michiel,
How are you configuring VM disks on Proxmox? What type (virtio, scsi,
ide) and what cache setting?
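Both settings can be read off the disk line in the VM's config file, /etc/pve/qemu-server/<vmid>.conf (also shown by `qm config <vmid>`); the storage pool and disk names below are made up for illustration:

```
scsihw: virtio-scsi-pci
virtio0: ceph-rbd:vm-100-disk-1,cache=writeback
```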
On 23/11/16 at 07:53, M. Piscaer wrote:
Hi,
I have a little performance problem with KVM and Ceph.
I'm using Proxmox 4.3-10/7230e60f, with KVM version
pve-qemu-kvm_2.7.0-8.
I am afraid the most probable cause is context switching time related
to your guest (or guests).
On Wed, Nov 23, 2016 at 9:53 AM, M. Piscaer wrote:
> Hi,
>
> I have a little performance problem with KVM and Ceph.
>
> I'm using Proxmox 4.3-10/7230e60f, with KVM version
> pve-qemu-kvm_2.7.0-8. Ceph is on version jewel 10.2.3 on both the
> cluster and the client (ceph-common).
Hi,
I have a little performance problem with KVM and Ceph.
I'm using Proxmox 4.3-10/7230e60f, with KVM version
pve-qemu-kvm_2.7.0-8. Ceph is on version jewel 10.2.3 on both the
cluster and the client (ceph-common).
The systems are connected to the network via a 4x bond with a total
of 4 Gb/s.