Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-29 Thread Jiří Sléžka
Hello,

> 
> If you use Gluster as FUSE mount it's always slower than you expect it
> to be.
> If you want to get better performance out of your oVirt/Gluster storage,
> try the following: 
> 
> - create a Linux VM in your oVirt environment, assign 4/8/12 virtual
> disks (Virtual disks are located on your Gluster storage volume).
> - Boot/configure the VM, then use LVM to create VG/LV with 4 stripes
> (lvcreate -i 4) and use all 4/8/12 virtual disks as PVs.
> - then install an NFS server and export the LV you created in the previous
> step; use the NFS export as the export domain in oVirt/RHEV.
> 
> You should get wire speed when you use multiple stripes on Gluster
> storage, FUSE mount on oVirt host will fan out requests to all 4 servers.
> Gluster is very good at distributed/parallel workloads, but when you use
> direct Gluster FUSE mount for Export domain you only have one data
> stream, which is fragmented even more by the multiple writes/reads that
> Gluster needs to do to save your data on all member servers.

Thanks for the explanation, it is an interesting solution.

Cheers,

Jiri

> 
> 
> 
> On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis wrote:
> 
> What about mounting over NFS instead of the FUSE client? Or maybe
> libgfapi? Is that available for export domains?
> 
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka wrote:
> 
> On 11/24/2017 06:41 AM, Sahina Bose wrote:
> >
> >
> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
> >
> >     Hi,
> >
> >     On 11/22/2017 07:30 PM, Nir Soffer wrote:
> >     > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> >     >     Hi,
> >     >
> >     >     I am trying to figure out why exporting a VM to an export
> >     >     storage on glusterfs is so slow.
> >     >
> >     >     I am using oVirt and RHV, both installations on version 4.1.7.
> >     >
> >     >     Hosts have dedicated nics for rhevm network - 1gbps, data
> >     >     storage itself is on FC.
> >     >
> >     >     The GlusterFS cluster lives separately on 4 dedicated hosts. It
> >     >     has slow disks, but I can achieve about 200-400 Mbit throughput
> >     >     in other applications (we are using it for "cold" data, backups
> >     >     mostly).
> >     >
> >     >     I am using this glusterfs cluster as the backend for export
> >     >     storage. When I am exporting a VM I can see only about
> >     >     60-80 Mbit throughput.
> >     >
> >     >     What could be the bottleneck here?
> >     >
> >     >     Could it be qemu-img utility?
> >     >
> >     >     vdsm      97739  0.3  0.0 354212 29148 ?        S    15:43   0:06
> >     >     /usr/bin/qemu-img convert -p -t none -T none -f raw
> >     >     /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     >     -O raw
> >     >     /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     >
> >     >     Any idea how to make it work faster or what throughput
> >     >     I should expect?
> >     >
> >     >
> >     > gluster storage operations are using fuse mount - so every write:
> >     > - travel to the kernel
> >     > - travel back to the gluster fuse helper process
> >     > - travel to all 3 replicas - replication is done on client side
> >     > - return to kernel when all writes succeeded
> >     > - return to caller
> >     >
> >     > So gluster will never set any speed record.
> >     >
> >     > Additionally, you are copying from a raw LV on FC - qemu-img cannot
> >     > do anything smart to avoid copying unused clusters. Instead it copies
> >     > gigabytes of zeros from FC.
> >
> >     ok, it does make sense
>

Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-28 Thread Dmitri Chebotarov
Hello

If you use Gluster as FUSE mount it's always slower than you expect it to
be.
If you want to get better performance out of your oVirt/Gluster storage,
try the following:

- create a Linux VM in your oVirt environment, assign 4/8/12 virtual disks
(Virtual disks are located on your Gluster storage volume).
- Boot/configure the VM, then use LVM to create VG/LV with 4 stripes
(lvcreate -i 4) and use all 4/8/12 virtual disks as PVs.
- then install an NFS server and export the LV you created in the previous
step; use the NFS export as the export domain in oVirt/RHEV.

You should get wire speed when you use multiple stripes on Gluster storage,
FUSE mount on oVirt host will fan out requests to all 4 servers.
Gluster is very good at distributed/parallel workloads, but when you use
direct Gluster FUSE mount for Export domain you only have one data stream,
which is fragmented even more by the multiple writes/reads that Gluster needs
to do to save your data on all member servers.
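
A rough sketch of those steps, run inside the VM (device names, the VG/LV
names and the export path are only placeholders for illustration - adjust to
your environment):

    # stripe a logical volume across the 4 virtual disks that live on Gluster
    pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
    vgcreate vg_export /dev/vdb /dev/vdc /dev/vdd /dev/vde
    lvcreate -i 4 -I 256 -l 100%FREE -n lv_export vg_export
    mkfs.xfs /dev/vg_export/lv_export
    mkdir -p /export
    mount /dev/vg_export/lv_export /export

    # export it over NFS, then add it as an export domain in oVirt/RHEV
    echo '/export *(rw,sync,no_root_squash)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -ra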



On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis  wrote:

> What about mounting over NFS instead of the FUSE client? Or maybe
> libgfapi? Is that available for export domains?
>
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  wrote:
>
>> On 11/24/2017 06:41 AM, Sahina Bose wrote:
>> >
>> >
>> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
>> >
>> > Hi,
>> >
>> > On 11/22/2017 07:30 PM, Nir Soffer wrote:
>> > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying to figure out why exporting a VM to an export storage
>> > > on glusterfs is so slow.
>> > >
>> > > I am using oVirt and RHV, both installations on version 4.1.7.
>> > >
>> > > Hosts have dedicated nics for rhevm network - 1gbps, data
>> > storage itself
>> > > is on FC.
>> > >
>> > > The GlusterFS cluster lives separately on 4 dedicated hosts. It has
>> > > slow disks, but I can achieve about 200-400 Mbit throughput in other
>> > > applications (we are using it for "cold" data, backups mostly).
>> > >
>> > > I am using this glusterfs cluster as backend for export
>> > storage. When I
>> > > am exporting vm I can see only about 60-80mbit throughput.
>> > >
>> > > What could be the bottleneck here?
>> > >
>> > > Could it be qemu-img utility?
>> > >
>> > > vdsm  97739  0.3  0.0 354212 29148 ?  S    15:43   0:06
>> > > /usr/bin/qemu-img convert -p -t none -T none -f raw
>> > > /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > -O raw
>> > > /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > >
>> > > Any idea how to make it work faster or what throughput I should
>> > > expect?
>> > >
>> > >
>> > > gluster storage operations are using fuse mount - so every write:
>> > > - travel to the kernel
>> > > - travel back to the gluster fuse helper process
>> > > - travel to all 3 replicas - replication is done on client side
>> > > - return to kernel when all writes succeeded
>> > > - return to caller
>> > >
>> > > So gluster will never set any speed record.
>> > >
>> > > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
>> > > anything smart to avoid copying unused clusters. Instead it copies
>> > > gigabytes of zeros from FC.
>> >
>> > ok, it does make sense
>> >
>> > > However 7.5-10 MiB/s sounds too slow.
>> > >
>> > > I would try to test with dd - how much time it takes to copy
>> > > the same image from FC to your gluster storage?
>> > >
>> > > dd
>> > > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
>> > > bs=8M oflag=direct status=progress
>> >
>> > unfortunately dd performs the same
>> >
>> > 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
>> >
>> >
>> > > If dd can do this faster, please ask on qemu-discuss mailing list:
>> > > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
>> > 
>> > >
>> > > If both give similar results, I think asking 

Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-27 Thread Donny Davis
What about mounting over NFS instead of the FUSE client? Or maybe libgfapi?
Is that available for export domains?
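
For comparison, a minimal sketch of the two mount types (volume name taken
from this thread; the NFS variant assumes Gluster's built-in NFS server is
enabled on the volume, i.e. nfs.disable is off):

    # FUSE mount - what a GlusterFS storage domain uses today
    mount -t glusterfs 10.20.30.41:/rhv_export /mnt/fuse-test

    # NFSv3 mount of the same volume - an export domain could point here instead
    mount -t nfs -o vers=3,nolock 10.20.30.41:/rhv_export /mnt/nfs-test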

On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  wrote:

> On 11/24/2017 06:41 AM, Sahina Bose wrote:
> >
> >
> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
> >
> > Hi,
> >
> > On 11/22/2017 07:30 PM, Nir Soffer wrote:
> > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> > >
> > > Hi,
> > >
> > > I am trying to figure out why exporting a VM to an export storage on
> > > glusterfs is so slow.
> > >
> > > I am using oVirt and RHV, both installations on version 4.1.7.
> > >
> > > Hosts have dedicated nics for rhevm network - 1gbps, data
> > storage itself
> > > is on FC.
> > >
> > > The GlusterFS cluster lives separately on 4 dedicated hosts. It has
> > > slow disks, but I can achieve about 200-400 Mbit throughput in other
> > > applications (we are using it for "cold" data, backups mostly).
> > >
> > > I am using this glusterfs cluster as backend for export
> > storage. When I
> > > am exporting vm I can see only about 60-80mbit throughput.
> > >
> > > What could be the bottleneck here?
> > >
> > > Could it be qemu-img utility?
> > >
> > > vdsm  97739  0.3  0.0 354212 29148 ?  S    15:43   0:06
> > > /usr/bin/qemu-img convert -p -t none -T none -f raw
> > > /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > > -O raw
> > > /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > >
> > > Any idea how to make it work faster or what throughput I should
> > > expect?
> > >
> > >
> > > gluster storage operations are using fuse mount - so every write:
> > > - travel to the kernel
> > > - travel back to the gluster fuse helper process
> > > - travel to all 3 replicas - replication is done on client side
> > > - return to kernel when all writes succeeded
> > > - return to caller
> > >
> > > So gluster will never set any speed record.
> > >
> > > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
> > > anything smart to avoid copying unused clusters. Instead it copies
> > > gigabytes of zeros from FC.
> >
> > ok, it does make sense
> >
> > > However 7.5-10 MiB/s sounds too slow.
> > >
> > > I would try to test with dd - how much time it takes to copy
> > > the same image from FC to your gluster storage?
> > >
> > > dd
> > > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
> > > bs=8M oflag=direct status=progress
> >
> > unfortunately dd performs the same
> >
> > 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
> >
> >
> > > If dd can do this faster, please ask on qemu-discuss mailing list:
> > > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
> > 
> > >
> > > If both give similar results, I think asking in gluster mailing
> list
> > > about this can help. Maybe your gluster setup can be optimized.
> >
> > ok, this is definitely on the gluster side. Thanks for your guidance.
> >
> > I will investigate the gluster side and also will try Export on NFS
> > share.
> >
> >
> > [Adding gluster users ml]
> >
> > Please provide "gluster volume info" output for the rhv_export gluster
> > volume and also volume profile details (refer to earlier mail from Shani
> > on how to run this) while performing the dd operation above.
>
> you can find all this output on https://pastebin.com/sBK01VS8
>
> As mentioned in other posts, the Gluster cluster uses really slow (green)
> disks, but without direct I/O it can achieve throughput around 400 Mbit/s.
>
> This storage is used mostly for backup purposes. It is not used as a vm
> storage.
>
> In my case it would be nice not to use direct I/O in the export case, but I
> understand why it might not be wise.
>
> Cheers,
>
> Jiri
>
> >
> >
> >
> >
> > Cheers,
> >
> > Jiri
> >
> >
> > >
> > > Nir
> > >
> > >
> > >
> > > Cheers,
> > >
> > > Jiri
> > >
> > >
> > > 

Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-26 Thread Jiří Sléžka
On 11/24/2017 06:41 AM, Sahina Bose wrote:
> 
> 
> On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
> 
> Hi,
> 
> On 11/22/2017 07:30 PM, Nir Soffer wrote:
> > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> >
> >     Hi,
> >
> >     I am trying to figure out why exporting a VM to an export storage on
> >     glusterfs is so slow.
> >
> >     I am using oVirt and RHV, both installations on version 4.1.7.
> >
> >     Hosts have dedicated nics for rhevm network - 1gbps, data
> storage itself
> >     is on FC.
> >
> >     The GlusterFS cluster lives separately on 4 dedicated hosts. It has
> >     slow disks, but I can achieve about 200-400 Mbit throughput in other
> >     applications (we are using it for "cold" data, backups mostly).
> >
> >     I am using this glusterfs cluster as backend for export
> storage. When I
> >     am exporting vm I can see only about 60-80mbit throughput.
> >
> >     What could be the bottleneck here?
> >
> >     Could it be qemu-img utility?
> >
> >     vdsm      97739  0.3  0.0 354212 29148 ?        S    15:43   0:06
> >     /usr/bin/qemu-img convert -p -t none -T none -f raw
> >     /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     -O raw
> >     /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >
> >     Any idea how to make it work faster or what throughput I should
> >     expect?
> >
> >
> > gluster storage operations are using fuse mount - so every write:
> > - travel to the kernel
> > - travel back to the gluster fuse helper process
> > - travel to all 3 replicas - replication is done on client side
> > - return to kernel when all writes succeeded
> > - return to caller
> >
> > So gluster will never set any speed record.
> >
> > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
> > anything smart to avoid copying unused clusters. Instead it copies
> > gigabytes of zeros from FC.
> 
> ok, it does make sense
> 
> > However 7.5-10 MiB/s sounds too slow.
> >
> > I would try to test with dd - how much time it takes to copy
> > the same image from FC to your gluster storage?
> >
> > dd
> > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
> > bs=8M oflag=direct status=progress
> 
> unfortunately dd performs the same
> 
> 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
> 
> 
> > If dd can do this faster, please ask on qemu-discuss mailing list:
> > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
> 
> >
> > If both give similar results, I think asking in gluster mailing list
> > about this can help. Maybe your gluster setup can be optimized.
> 
> ok, this is definitely on the gluster side. Thanks for your guidance.
> 
> I will investigate the gluster side and also will try Export on NFS
> share.
> 
> 
> [Adding gluster users ml]
> 
> Please provide "gluster volume info" output for the rhv_export gluster
> volume and also volume profile details (refer to earlier mail from Shani
> on how to run this) while performing the dd operation above.

you can find all this output on https://pastebin.com/sBK01VS8

As mentioned in other posts, the Gluster cluster uses really slow (green)
disks, but without direct I/O it can achieve throughput around 400 Mbit/s.

This storage is used mostly for backup purposes. It is not used as VM
storage.

In my case it would be nice not to use direct I/O in the export case, but I
understand why it might not be wise.
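
For reference, the difference shows up in a pair of simple dd runs on the
FUSE mount - one with direct I/O (roughly what the export path does, the
~9 MB/s case) and one buffered (closer to the ~400 Mbit/s backup workload).
The path here is shortened to the mount point and the __test__ file is just
an example:

    # direct I/O, bypasses the page cache
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/__test__ bs=8M count=128 oflag=direct status=progress

    # buffered writes with a final flush
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/__test__ bs=8M count=128 conv=fsync status=progress

    rm -f /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/__test__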

Cheers,

Jiri

> 
>  
> 
> 
> Cheers,
> 
> Jiri
> 
> 
> >
> > Nir
> >  
> >
> >
> >     Cheers,
> >
> >     Jiri
> >
> >
> >     ___
> >     Users mailing list
> >     us...@ovirt.org 
> >
> >     http://lists.ovirt.org/mailman/listinfo/users
> 
> >
> 
> 
> 
> ___
> Users mailing list
> 

Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-23 Thread Sahina Bose
On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka  wrote:

> Hi,
>
> On 11/22/2017 07:30 PM, Nir Soffer wrote:
> > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
> >
> > Hi,
> >
> > I am trying to figure out why exporting a VM to an export storage on
> > glusterfs is so slow.
> >
> > I am using oVirt and RHV, both installations on version 4.1.7.
> >
> > Hosts have dedicated nics for rhevm network - 1gbps, data storage
> itself
> > is on FC.
> >
> > The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow
> > disks, but I can achieve about 200-400 Mbit throughput in other applications
> > (we are using it for "cold" data, backups mostly).
> >
> > I am using this glusterfs cluster as backend for export storage.
> When I
> > am exporting vm I can see only about 60-80mbit throughput.
> >
> > What could be the bottleneck here?
> >
> > Could it be qemu-img utility?
> >
> > vdsm  97739  0.3  0.0 354212 29148 ?  S    15:43   0:06
> > /usr/bin/qemu-img convert -p -t none -T none -f raw
> > /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > -O raw
> > /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >
> > Any idea how to make it work faster or what throughput I should
> > expect?
> >
> >
> > gluster storage operations are using fuse mount - so every write:
> > - travel to the kernel
> > - travel back to the gluster fuse helper process
> > - travel to all 3 replicas - replication is done on client side
> > - return to kernel when all writes succeeded
> > - return to caller
> >
> > So gluster will never set any speed record.
> >
> > Additionally, you are copying from a raw LV on FC - qemu-img cannot do
> > anything smart to avoid copying unused clusters. Instead it copies
> > gigabytes of zeros from FC.
>
> ok, it does make sense
>
> > However 7.5-10 MiB/s sounds too slow.
> >
> > I would try to test with dd - how much time it takes to copy
> > the same image from FC to your gluster storage?
> >
> > dd
> > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
> > bs=8M oflag=direct status=progress
>
> unfortunately dd performs the same
>
> 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
>
>
> > If dd can do this faster, please ask on qemu-discuss mailing list:
> > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
> >
> > If both give similar results, I think asking in gluster mailing list
> > about this can help. Maybe your gluster setup can be optimized.
>
> ok, this is definitely on the gluster side. Thanks for your guidance.
>
> I will investigate the gluster side and also will try Export on NFS share.
>

[Adding gluster users ml]

Please provide "gluster volume info" output for the rhv_export gluster
volume and also volume profile details (refer to earlier mail from Shani on
how to run this) while performing the dd operation above.
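
For reference, the profile data can be gathered roughly like this (volume
name taken from this thread):

    gluster volume info rhv_export
    gluster volume profile rhv_export start
    # ... run the dd test against the mounted volume ...
    gluster volume profile rhv_export info
    gluster volume profile rhv_export stop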



>
> Cheers,
>
> Jiri
>
>
> >
> > Nir
> >
> >
> >
> > Cheers,
> >
> > Jiri
> >
> >
> > ___
> > Users mailing list
> > us...@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
>
> ___
> Users mailing list
> us...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users