Re: [ovirt-users] [Gluster-users] slow performance with export storage on glusterfs

2017-11-29 Thread Dmitri Chebotarov
Hello

If you use Gluster as a FUSE mount it will always be slower than you expect.
If you want to get better performance out of your oVirt/Gluster storage,
try the following:

- create a Linux VM in your oVirt environment and assign it 4/8/12 virtual
disks (the virtual disks are located on your Gluster storage volume).
- boot and configure the VM, then use LVM to create a VG/LV with 4 stripes
(lvcreate -i 4), using all 4/8/12 virtual disks as PVs.
- then install an NFS server and export the LV you created in the previous
step; use the NFS export as the export domain in oVirt/RHEV.

You should get wire speed when you use multiple stripes on Gluster storage;
the FUSE mount on the oVirt host will fan out requests to all 4 servers.
Gluster is very good at distributed/parallel workloads, but when you use a
direct Gluster FUSE mount for the export domain you only have one data
stream, which is fragmented even further by the multiple writes/reads that
Gluster needs to do to save your data on all member servers.



On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis  wrote:

> What about mounting over NFS instead of the FUSE client? Or maybe
> libgfapi. Is that available for export domains?
>
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  wrote:
>
>> On 11/24/2017 06:41 AM, Sahina Bose wrote:
>> >
>> >
>> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
>> >
>> > Hi,
>> >
>> > On 11/22/2017 07:30 PM, Nir Soffer wrote:
>> > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying to figure out why exporting a VM to an export storage
>> > > domain on glusterfs is so slow.
>> > >
>> > > I am using oVirt and RHV, both installations on version 4.1.7.
>> > >
>> > > Hosts have dedicated nics for the rhevm network - 1gbps; data storage
>> > > itself is on FC.
>> > >
>> > > The GlusterFS cluster lives separately on 4 dedicated hosts. It has
>> > > slow disks, but I can achieve about 200-400mbit throughput in other
>> > > applications (we are using it for "cold" data, mostly backups).
>> > >
>> > > I am using this glusterfs cluster as a backend for export storage.
>> > > When I am exporting a vm I can see only about 60-80mbit throughput.
>> > >
>> > > What could be the bottleneck here?
>> > >
>> > > Could it be qemu-img utility?
>> > >
>> > > vdsm  97739  0.3  0.0 354212 29148 ?S>  0:06
>> > > /usr/bin/qemu-img convert -p -t none -T none -f raw
>> > > /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > -O raw
>> > > /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > >
>> > > Any idea how to make it faster, or what throughput I should expect?
>> > >
>> > >
>> > > gluster storage operations are using fuse mount - so every write:
>> > > - travel to the kernel
>> > > - travel back to the gluster fuse helper process
>> > > - travel to all 3 replicas - replication is done on client side
>> > > - return to kernel when all writes succeeded
>> > > - return to caller
>> > >
>> > > So gluster will never set any speed record.
>> > >
>> > > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
>> > > anything smart to avoid copying unused clusters. Instead it copies
>> > > gigabytes of zeros from FC.
>> >
>> > ok, it does make sense
>> >
>> > > However 7.5-10 MiB/s sounds too slow.
>> > >
>> > > I would try to test with dd - how much time it takes to copy
>> > > the same image from FC to your gluster storage?
>> > >
>> > > dd if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
>> > > bs=8M oflag=direct status=progress
>> >
>> > unfortunately dd performs the same:
>> >
>> > 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
>> >
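One way to check whether the bottleneck is per-stream rather than total
capacity (a hypothetical sketch; the mount point variable and sizes below are
assumptions, not from the thread) is to compare a single dd stream against
several running in parallel on the same Gluster mount:

```shell
# Assumed Gluster FUSE mount point - substitute your own.
MNT=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export

# Single stream - limited by the one-stream FUSE path described above.
dd if=/dev/zero of=$MNT/stream_single bs=8M count=64 oflag=direct

# Four parallel streams - Gluster usually services these concurrently,
# so the aggregate throughput should be noticeably higher.
for i in 1 2 3 4; do
  dd if=/dev/zero of=$MNT/stream_$i bs=8M count=64 oflag=direct &
done
wait
rm -f $MNT/stream_single $MNT/stream_[1-4]
```

If the parallel streams together run much faster than the single stream, a
workaround that fans writes out over multiple files/disks is likely to help.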
>> >
>> > > If dd can do this faster, please ask on qemu-discuss mailing list:
>> > > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
>> > >
>> > > If both give similar results, I think asking 

Re: [ovirt-users] Maximum storage per VM?

2017-10-10 Thread Dmitri Chebotarov
Good Morning

I'm using virtio-scsi (also tried virtio), not IDE (doesn't IDE have a
4-device limit? It's been a long time since I used IDE...).

This is for servers hosting ownCloud/Nextcloud and Samba for small/medium
groups with large datasets (60-80TB) per server.

If virtio-scsi allows attaching more than 20 disks then it meets my needs in
this case. Thank you.

The reason I opted for LVM is performance. It's so much faster with LVM
striped volumes compared to a single large disk. I'm seeing very high
'iowait' numbers (to the point where the VM is unresponsive) when I'm dumping
a large amount of data to a single disk. But with LVM striped volumes
'iowait' stays at around ~20% and I can get to ~750MB/s in the same
environment (the same config with a single disk is ~160MB/s).
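The single-disk vs striped-LV comparison described above could be reproduced
with a sequential-write test (a sketch; the mount points, file size, and fio
job names are assumptions, not from the thread):

```shell
# Sequential write against the single large disk (assumed mounted at
# /mnt/single).
fio --name=single --rw=write --bs=8M --size=4G --direct=1 \
    --filename=/mnt/single/fio.test

# Same test against the striped LVM volume (assumed mounted at /mnt/striped).
fio --name=striped --rw=write --bs=8M --size=4G --direct=1 \
    --filename=/mnt/striped/fio.test

# In a separate terminal, watch iowait and per-device utilization while
# the tests run:
iostat -x 2
```

fio reports bandwidth per job, so the two runs give directly comparable
MB/s figures alongside the iowait numbers from iostat.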

Also, healing a single large disk on Gluster takes ages (I'm using erasure
coded volumes).


Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]



From: Yaniv Kaul <yk...@redhat.com>
Sent: Tuesday, October 10, 2017 4:03:08 AM
To: Dmitri Chebotarov
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Maximum storage per VM?

On Fri, Oct 6, 2017 at 10:24 PM, Dmitri Chebotarov <dcheb...@gmu.edu> wrote:
Hello

I'm trying to find any info on how much storage I can attach to a VM.

Is there a recommended/maximum for number of disks and maximum disk size?

Disk count depends on the interface: IDE - very few; virtio - 20-something
(depends on the number of PCI slots available); virtio-SCSI - more.
What is the use case?


I'm using GlusterFS as backend storage for the cluster.

The VM uses LVM (/w striped volumes) to manage attached disks.

Aren't you stacking layers on top of layers on top of layers? Is that the
optimal arrangement?
(Again, it would be interesting to understand the use case to better provide
information.)
Y.


Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] Maximum storage per VM?

2017-10-09 Thread Dmitri Chebotarov
Hi Kasturi

The data disks are located on a stand-alone Gluster cluster, 
distributed-dispersed volume.
Is there a limit in this case?
I tested with up to 8x3TB disks and everything looks good so far. I'm trying
to find out the maximum supported disk count and size without actually
creating/attaching the disks.

I cannot find the info in oVirt/RHEV docs...

Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]



From: Kasturi Narra <kna...@redhat.com>
Sent: Monday, October 9, 2017 7:59:34 AM
To: Dmitri Chebotarov
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Maximum storage per VM?

Hi Dmitri,

If the VMs are created on a hyperconverged setup then the maximum
recommended disk size is 2TB.

Thanks
kasturi.

On Sat, Oct 7, 2017 at 12:54 AM, Dmitri Chebotarov <dcheb...@gmu.edu> wrote:
Hello

I'm trying to find any info on how much storage I can attach to a VM.

Is there a recommended/maximum for number of disks and maximum disk size?

I'm using GlusterFS as backend storage for the cluster.

The VM uses LVM (/w striped volumes) to manage attached disks.

Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



[ovirt-users] Maximum storage per VM?

2017-10-06 Thread Dmitri Chebotarov
Hello

I'm trying to find any info on how much storage I can attach to a VM.

Is there a recommended/maximum for number of disks and maximum disk size? 

I'm using GlusterFS as backend storage for the cluster.

The VM uses LVM (/w striped volumes) to manage attached disks.

Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]
