Re: [Gluster-users] Gluster on ZFS with Compression

2015-09-30 Thread Lindsay Mathieson
On 30 September 2015 at 18:50, Frank Rothenstein <
f.rothenst...@bodden-kliniken.de> wrote:

> My brick nodes are also oVirt nodes (VM hosts),
>


I should have said that my brick nodes are also VM nodes (Proxmox): 64 GB
RAM, E5-2620 CPUs.


> rep=3, i have 3 bricks per node.
>

Are your bricks separate disks? I assumed it would be better to let ZFS
handle multiple disks with striping/caching and just present one brick (a
zpool dataset) to Gluster.
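
Something like this is what I had in mind - just a sketch, with made-up pool,
disk, dataset and volume names (and xattr=sa is only the setting commonly
suggested for Gluster bricks on ZFS-on-Linux, not something from this thread):

# Sketch: one striped-mirror pool per node, one dataset presented as the brick
zpool create tank mirror sda sdb mirror sdc sdd     # "RAID 10" layout
zpool add tank log sde1                             # SSD SLOG
zpool add tank cache sde2                           # SSD L2ARC
zfs create -o compression=lz4 -o xattr=sa tank/gluster
mkdir -p /tank/gluster/brick1
gluster volume create datastore1 replica 2 \
    node1:/tank/gluster/brick1 node2:/tank/gluster/brick1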



>   Running VMs directly on the bricks may
> improve your VMs by using L2ARC...
>


Not sure what you mean by that - do you run the VM directly from the brick
on ZFS rather than via the gluster mount? Doesn't that mess with the
replication?


thanks,

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster on ZFS with Compression

2015-09-30 Thread Lindsay Mathieson
On 30 September 2015 at 18:36, Tiemen Ruiten  wrote:

> At the very least, you should have an arbiter volume (introduced in
> Gluster 3.7) on a separate physical node.
>

Running Proxmox (Debian Wheezy), so I'm limited to 3.6; however, I do have a
third peer node for voting purposes.
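
For what it's worth, this is roughly how I understand the brick-less third peer
gets counted on 3.6 - a sketch only, with a made-up volume name:

# Sketch: make the third, brick-less peer count towards quorum
gluster peer probe third-node
gluster volume set datastore1 cluster.server-quorum-type server
gluster volume set datastore1 cluster.quorum-type auto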



>
> I don't think 4 GB is enough RAM, especially if you have a large L2ARC
>

Learned my lesson with earlier ZFS setups :)  1 GB ZIL, 10 GB L2ARC.


> You should also count the number of spindles you have and have it not
> exceed the number of VM's  you're running much to get decent disk IO
> performance.
>

That's a new one to me - did you mean the reverse, i.e. the number of VMs
should not much exceed the number of spindles?


thanks,



-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd crashing

2015-09-30 Thread Gaurav Garg
Hi Gene,

Could you paste or attach the core file, the glusterd log file and the command
history so we can find the actual root cause (RCA) of the crash? What steps did
you perform before the crash?

>> How can I troubleshoot this?

If you want to troubleshoot this yourself, you can start by looking into the
glusterd log file and the core file.
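
A rough sketch of where to look - the exact paths vary by distro and Gluster
version, so treat these as typical defaults rather than guaranteed locations:

less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd log
less /var/log/glusterfs/cmd_history.log                  # CLI history (older releases: .cmd_log_history)
# locate the core dump (depends on kernel.core_pattern / abrt)
sysctl kernel.core_pattern
ls /core.* /var/spool/abrt/ 2>/dev/null
# generate a backtrace to attach to the mail or bug report
gdb -batch -ex "bt full" /usr/sbin/glusterd /path/to/core > glusterd-backtrace.txt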

Thank you..

Regards,
Gaurav

- Original Message -
From: "Gene Liverman" 
To: gluster-users@gluster.org
Sent: Thursday, October 1, 2015 7:59:47 AM
Subject: [Gluster-users] glusterd crashing

In the last few days I've started having issues with my glusterd service 
crashing. When it goes down it seems to do so on all nodes in my replicated 
volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6. 
Thanks! 



Gene Liverman 
Systems Integration Architect 
Information Technology Services 
University of West Georgia 
glive...@westga.edu 


Sent from Outlook on my iPhone 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterd crashing

2015-09-30 Thread Atin Mukherjee
Could you attach the core and mention the Gluster version?

-Atin
Sent from one plus one
On Oct 1, 2015 7:59 AM, "Gene Liverman"  wrote:

> In the last few days I've started having issues with my glusterd service
> crashing. When it goes down it seems to do so on all nodes in my replicated
> volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6.
> Thanks!
>
>
>
> Gene Liverman
> Systems Integration Architect
> Information Technology Services
> University of West Georgia
> glive...@westga.edu
>
>
> Sent from Outlook  on my iPhone
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterd crashing

2015-09-30 Thread Gene Liverman
In the last few days I've started having issues with my glusterd service 
crashing. When it goes down it seems to do so on all nodes in my replicated 
volume. How can I troubleshoot this? I'm on a mix of CentOS 6 and RHEL 6. 
Thanks!



Gene Liverman 
Systems Integration Architect 
Information Technology Services 
University of West Georgia 
glive...@westga.edu 


Sent from Outlook on my iPhone
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tuning for small files

2015-09-30 Thread Ben Turner
- Original Message -
> From: "Iain Milne" 
> To: gluster-users@gluster.org
> Sent: Wednesday, September 30, 2015 11:00:07 AM
> Subject: Re: [Gluster-users] Tuning for small files
> 
> > Here are all 3 of the settings I was talking about:
> >
> > gluster v set testvol client.event-threads 4
> > gluster v set testvol server.event-threads 4
> > gluster v set testvol performance.lookup-optimize on
> >
> > Yes, lookup optimize needs to be enabled.
> 
> Thanks for that. The first two worked ok for me, but the third gives:
>   volume set: failed: option : performance.lookup-optimize does not exist
>   Did you mean performance.cache-size or ...lazy-open?
> 
> If I do: "gluster volume get  all | grep optimize" then I see:
>   cluster.lookup-optimize off
  cluster.readdir-optimize off
> 
> Is it one of these two options perhaps?

Yeah, use cluster.lookup-optimize on - it was a typo on my part; I was going
from memory and I always confuse that one.
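
For the archive, the corrected commands would be (same placeholder volume name
as in the earlier examples):

gluster volume set testvol cluster.lookup-optimize on
gluster volume get testvol cluster.lookup-optimize    # verify it took effect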

-b

 
> We're running 3.7.4 on Centos 6.7
> 
> Thanks
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs geo-replication

2015-09-30 Thread Thibault Godouet
Hi,

I am still getting the same issue, and looking further into this, I have
modified syncdutils.py slightly on the slave node to print more information
on the nature of this OSError:

[2015-09-30 12:19:25.8531] I [gsyncd(slave):649:main_i] : syncing:
gluster://localhost:homegs
[2015-09-30 12:19:26.79925] I [resource(slave):844:service_loop] GLUSTER:
slave listening
[2015-09-30 12:19:30.723542] W [syncdutils(slave):481:errno_wrap] :
about to raise exception for (['.gfid/a780729e-3602-4c30-9966-f31fa5964843/CMakeCache.txt.tmp',
'.gfid/a780729e-3602-4c30-9966-f31fa5964843/CMakeCache.txt']) from
/...[Errno 16] Device or resource busy
[2015-09-30 12:19:30.723845] I [syncdutils(slave):483:errno_wrap] : we
could try deleting
'.glusterfs/a7/80/a780729e-3602-4c30-9966-f31fa5964843/CMakeCache.txt.tmp'
[2015-09-30 12:19:30.723974] E [repce(slave):117:worker] : call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in
worker
res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 725, in
entry_ops
[ENOENT, EEXIST], [ESTALE])
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 475,
in errno_wrap
return call(*arg)
OSError: [Errno 16] Device or resource busy
[2015-09-30 12:19:30.770679] I [repce(slave):92:service_loop] RepceServer:
terminating on reaching EOF.
[2015-09-30 12:19:30.770944] I [syncdutils(slave):220:finalize] :
exiting.

Below is a list of errors:

W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/93b6282a-a47f-40fb-b83e-efec5b9c1a4e/.viminfo.tmp',
'.gfid/93b6282a-a47f-40fb-b83e-efec5b9c1a4e/.viminfo']) from /...[Errno 16]
Device or resource busy
W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/9c4f77d0-0855-4607-8d65-1a52b3cdbbce/gr_rsync_homes_new.sh',
'.gfid/9c4f77d0-0855-4607-8d65-1a52b3cdbbce/gr_rsync_homes_new.sh~']) from
/...[Errno 16] Device or resource busy
W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/a780729e-3602-4c30-9966-f31fa5964843/CMakeCache.txt.tmp',
'.gfid/a780729e-3602-4c30-9966-f31fa5964843/CMakeCache.txt']) from
/...[Errno 16] Device or resource busy
W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/a78e4abc-1de4-4f0d-a5e2-1244a0e3b708/depend.internal.tmp',
'.gfid/a78e4abc-1de4-4f0d-a5e2-1244a0e3b708/depend.internal']) from
/...[Errno 16] Device or resource busy
W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/f5491794-32e2-46f4-8c50-7ca31e34111f/
gluster-volume-info.debug.pl', '.gfid/f5491794-32e2-46f4-8c50-7ca31e34111f/
gluster-volume-info.debug.pl~'])...[Errno 16] Device or resource busy
W [syncdutils(slave):481:errno_wrap] : about to raise exception for
(['.gfid/f5491794-32e2-46f4-8c50-7ca31e34111f/
gluster-volume-info.debug.pl', '.gfid/f5491794-32e2-46f4-8c50-7ca31e34111f/
gluster-volume-info.debug.pl~']) from /...[Errno 16] Device or resource busy

I am not entirely clear why rename() would return this ‘Device or resource
busy’.  Also, I'm not sure what these ‘.gfid’ paths are exactly: are they
something transient? Where do they live exactly?

I tried deleting the affected files (e.g.
a78e4abc-1de4-4f0d-a5e2-1244a0e3b708/depend.internal.tmp) in .glusterfs/,
and so far that allowed the geo-replication to resume… until it hits
another one of these.

In some cases I found that this gfid points to a directory that doesn’t
exist, e.g.:

ls -alh
/gluster/homegs-brick-1/brick/.glusterfs/a7/8e/a78e4abc-1de4-4f0d-a5e2-1244a0e3b708

lrwxrwxrwx 1 root root 71 Sep 30 11:26
/gluster/homegs-brick-1/brick/.glusterfs/a7/8e/a78e4abc-1de4-4f0d-a5e2-1244a0e3b708
-> ../../0d/be/0dbe0730-5ba9-45d5-91e5-aeb069ab08bf/watchdog_unit_test.dir

but the link is broken.

In that case deleting the
/gluster/homegs-brick-1/brick/.glusterfs/a7/8e/a78e4abc-1de4-4f0d-a5e2-1244a0e3b708
allowed the geo-replication to carry on.
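
For completeness, this is roughly the sort of per-incident workaround described
above, as a sketch - the gfid is one of the examples from the logs and changes
every time, and the entry should of course be checked (as above) before
removing it:

BRICK=/gluster/homegs-brick-1/brick
GFID=a78e4abc-1de4-4f0d-a5e2-1244a0e3b708
ENTRY="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
ls -alh "$ENTRY"        # hard link for a file, symlink for a directory
readlink "$ENTRY"       # for a directory gfid: where it points (may be dangling)
rm "$ENTRY"             # only once the entry is known to be stale/broken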

Obviously it doesn’t work if I have to do this 50 times a day…

Any idea what could cause this and how to fix it properly?

Thx,
Thib.
On 26 Aug 2015 12:58 pm, "Thibault Godouet"  wrote:

> Hi,
>
> Given that this is an 'OSError', I should probably have said we use Redhat
> 6.6 64bit and XFS as the brick filesystem.
>
> Has anyone any input on how to troubleshoot this?
>
> Thanks,
> Thibault.
> On 24 Aug 2015 4:03 pm, "Thibault Godouet" <
> thibault.godo...@gresearch.co.uk> wrote:
>
>> I have had multiple issues with geo-replication.  It seems to work OK
>> initially, the replica gets up to date, and not long after (e.g. a couple
>> of days), the replication goes into a faulty state and won't get out of it.
>>
>>
>> I have tried a few times now, and last attempt I re-created the slave
>> volume and setup the replication again.  Same symptoms again.
>>
>>
>>
>> I use Gluster 3.7.3, and you will find my setup and log messages at the
>> bottom of the email.
>>
>>
>> Any idea what could cause this and how to fix

Re: [Gluster-users] Scaling a network - Error while adding new brick to a existing cluster.

2015-09-30 Thread Alastair Neil
Yes, it seems likely you have a duplicate, conflicting gluster
configuration. Fixing it should be fairly easy: clear out
/var/lib/glusterd, and then reinstall the glusterfs packages. You will
also have to clear all the extended attributes and files from any brick
directories; it may be easier to reformat the file system if they are on
separate block devices. I assume the system has a different host name?
You should then be able to peer probe the new server from one of the
cluster members.
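
A rough outline of those steps - the package manager, paths and brick directory
below are illustrative, so adapt them to the actual layout on the cloned Linode:

service glusterd stop
rm -rf /var/lib/glusterd
yum reinstall -y glusterfs-server        # restore a pristine glusterd config
# for every brick directory inherited from the clone, wipe Gluster's metadata
setfattr -x trusted.glusterfs.volume-id /data/brick1
setfattr -x trusted.gfid /data/brick1
rm -rf /data/brick1/.glusterfs
service glusterd start
# then, from one of the existing cluster members:
gluster peer probe new-hostname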



On 29 September 2015 at 01:00, Sreejith K B  wrote:

> Hi all,
>
> While I am trying to add a new brick to an existing cluster, it
> fails. Our servers are on Linode, and I think the problem occurs because we
> created the new server by copying/cloning an existing server that is already
> part of that cluster into a new server space. So I think the pre-existing
> gluster configuration causes this issue, and I need to find a solution for
> this situation. Please confirm the reason for the error and a working solution.
>
> regards,
> sreejith.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tuning for small files

2015-09-30 Thread Iain Milne
> Here are all 3 of the settings I was talking about:
>
> gluster v set testvol client.event-threads 4
> gluster v set testvol server.event-threads 4
> gluster v set testvol performance.lookup-optimize on
>
> Yes, lookup optimize needs to be enabled.

Thanks for that. The first two worked ok for me, but the third gives:
  volume set: failed: option : performance.lookup-optimize does not exist
  Did you mean performance.cache-size or ...lazy-open?

If I do: "gluster volume get  all | grep optimize" then I see:
  cluster.lookup-optimize off
  cluster.readdir-optimize off

Is it one of these two options perhaps?

We're running 3.7.4 on Centos 6.7

Thanks


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Tuning for small files

2015-09-30 Thread Ben Turner
- Original Message -
> From: "Iain Milne" 
> To: gluster-users@gluster.org
> Sent: Wednesday, September 30, 2015 2:48:57 AM
> Subject: Re: [Gluster-users] Tuning for small files
> 
> > Where you run into problems with small files on gluster is the latency of
> > sending data over the wire.  For every small-file create there are a bunch
> > of different file operations we have to do on every file.  For example we
> > will have to do at least 1 lookup per brick to make sure that the file
> > doesn't exist anywhere before we create it.  We actually got it down to 1
> > per brick with lookup optimize on, it's 2 IIRC (maybe more?) with it
> > disabled.
> 
> Is this lookup optimize something that needs to be enabled manually with
> 3.7, and if so, how?

Here are all 3 of the settings I was talking about:

gluster v set testvol client.event-threads 4
gluster v set testvol server.event-threads 4
gluster v set testvol performance.lookup-optimize on

Yes, lookup optimize needs to be enabled.

-b

> Thanks
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] REMINDER: Weekly gluster community meeting to start in 30 minutes

2015-09-30 Thread Kaushal M
Thank you Raghu for sending out the reminder today.

And thank you to everyone who attended today's meeting. We didn't have too
many conversations today, with many of the community members away
holding design discussions. Hopefully, we'll have a full house and
some new updates for all next week.

The community meeting will be held next Wednesday at 1200 UTC as
always. I've included a calendar invite for next week's meeting.

Today's meeting minutes are available at
- Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-30/gluster-meeting.2015-09-30-12.02.html
- Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-30/gluster-meeting.2015-09-30-12.02.txt
- Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-30/gluster-meeting.2015-09-30-12.02.log.html

For quick reference the meeting minutes are included below.

Thanks,
Kaushal



#gluster-meeting Meeting



Meeting started by kshlm at 12:02:42 UTC. The full logs are available at
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-30/gluster-meeting.2015-09-30-12.02.log.html
.



Meeting summary
---
* rollcall  (kshlm, 12:03:29)

* Action Items from last week  (kshlm, 12:05:21)

* kshlm to check back with misc on the new jenkins slaves  (kshlm,
  12:05:50)
  * ACTION: kshlm to check back with misc on the new jenkins slaves
(kshlm, 12:06:29)

* krishnan_p will add information about GlusterD-2.0 to the weekly news
  (kshlm, 12:06:43)
  * ACTION: krishnan_p will add information about GlusterD-2.0 to the
weekly news  (kshlm, 12:08:06)

* krishnan_p and atinmu will remind developers to not work in personal
  repositories, but request one for github.com/gluster  (kshlm,
  12:08:18)
  * ACTION: krishnan_p and atinmu will remind developers to not work in
personal repositories, but request one for github.com/gluster
(kshlm, 12:09:29)

* krishnan_p will send an email to the -devel list about merging the
  glusterd-2.0 work into the main glusterfs repo  (kshlm, 12:09:51)
  * ACTION: krishnan_p will send an email to the -devel list about
merging the glusterd-2.0 work into the main glusterfs repo  (kshlm,
12:11:26)

* poornimag to send a mail on gluster-devel asking for volunteers to
  backport glfs_fini patches to release-3.5  (kshlm, 12:11:39)
  * Mail sent. Can be viewed at
http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12648
(kshlm, 12:13:27)
  * LINK:
http://www.gluster.org/pipermail/gluster-devel/2015-September/046841.html
(poornimag, 12:13:34)

* kkeithley will reply to his previous email, confirming that
  End-Of-Life bugs will be closed  (kshlm, 12:14:04)
  * ACTION: kkeithley will close EOL bugs  (kshlm, 12:17:16)

* kkeithley will close all the EOL'd bugs with a note  (kshlm, 12:17:52)

* overclk will get the dht-scalability doc in glusterfs-specs update to
  the latest design  (kshlm, 12:18:26)
  * ACTION: overclk will get the dht-scalability doc in glusterfs-specs
update to the latest design  (kshlm, 12:20:45)

* jdarcy (and/or others) will post version of the NSR spec "pretty soon"
  (kshlm, 12:21:00)
  * ACTION: jdarcy (and/or others) will post version of the NSR spec
"pretty soon"  (kshlm, 12:22:12)

* kshlm to clean up 3.7.4 tracker bug  (kshlm, 12:22:28)
  * ACTION: kshlm to clean up 3.7.4 tracker bug  (kshlm, 12:23:08)

* ndevos send out a reminder to the maintainers about more actively
  enforcing backports of bugfixes  (kshlm, 12:23:23)
  * ndevos presented at osbconf.org  (kshlm, 12:25:13)
  * ACTION: ndevos send out a reminder to the maintainers about more
actively enforcing backports of bugfixes  (kshlm, 12:25:45)
  * ACTION: poornimag and skoduri will write a trip report for SDC
(kshlm, 12:31:24)
  * ACTION: obnox to write up a report on his trip to SDC.  (kshlm,
12:32:14)

* GlusterFS-3.7  (kshlm, 12:32:30)
  * ACTION: pranithk will update the community on 3.7.5 release plans.
(kshlm, 12:35:43)

* GlusterFS-3.6  (kshlm, 12:36:01)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1267567   (raghu,
12:36:48)
  * raghu released 3.6.6  (kshlm, 12:37:36)
  * 3.6.7 tracker bug created.  (kshlm, 12:37:56)
  * LINK:
http://www.gluster.org/pipermail/gluster-devel/2015-September/046821.html
(raghu, 12:37:59)
  * ACTION: raghu to call for volunteers and help from maintainers for
doing backports listed by rwareing to 3.6.7  (kshlm, 12:41:27)

* GlusterFS-3.5  (kshlm, 12:45:55)

* Gluster.Next (3.8 and 4.0)  (kshlm, 12:47:50)
  * ACTION: All the leads to update community on the outcomes of the
design discussions.  (kshlm, 12:53:13)

* Open Floor  (kshlm, 12:53:51)

* Testing for various releases  (kshlm, 12:54:07)
  * ACTION: msvbhat to send out an update on DiSTAF once he pushes all
his new changes and creates repo under github.com/gluster  (kshlm,
12:58:28)

* posting
  http://blog.gluster.org/2015/08/welcome-to-the-new-gluster-community-lead/
  to

Re: [Gluster-users] Gluster-Nagios

2015-09-30 Thread Humble Devassy Chirammal
The EL7 rpms of gluster-nagios are available @
http://download.gluster.org/pub/gluster/glusterfs-nagios/1.1.0/

Hope it helps!

--Humble


On Tue, Sep 29, 2015 at 10:56 AM, Sahina Bose  wrote:

> We will publish the EL7 builds soon.
>
> The source tarballs are now available at -
> http://download.gluster.org/pub/gluster/glusterfs-nagios/
>
> thanks
> sahina
>
>
> On 09/25/2015 12:55 PM, Humble Devassy Chirammal wrote:
>
> Hi Michael,
>
> Yes, only el6 packages are available @
> 
> http://download.gluster.org/pub/gluster/glusterfs-nagios/ . I am looping
> nagios project team leads to this thread. Let's wait for them to revert.
>
> --Humble
>
>
> On Sun, Sep 20, 2015 at 2:32 PM, Prof. Dr. Michael Schefczyk <
> mich...@schefczyk.net> wrote:
>
>> Dear All,
>>
>> In June 2014, the gluster-nagios team (thanks!) published the
>> availability of gluster-nagios-common and gluster-nagios-addons on this
>> list. As far as I can tell, this quite extensive gluster nagios monitoring
>> tool is available for el6 only. Are there known plans to make this
>> available for el7 outside the RHEL-repos (
>> 
>> http://ftp.redhat.de/pub/redhat/linux/enterprise/7Server/en/RHS/SRPMS/),
>> e.g. for use with oVirt / Centos 7 also? It would be good to be able to
>> monitor gluster without playing around with scripts from sources other than
>> an RPM repo.
>>
>> Regards,
>>
>> Michael
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Weekly gluster community meeting to start in 30 minutes

2015-09-30 Thread Raghavendra Bhat
Hi All,

In 30 minutes from now we will have the regular weekly Gluster
Community meeting.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Gluster 3.7
* Gluster 3.8
* Gluster 3.6
* Gluster 3.5
* Gluster 4.0
* Open Floor
- bring your own topic!

The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.


Regards,
Raghavendra Bhat
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Need Info about concurrent access

2015-09-30 Thread wodel youchi
Hi,

I am a newbie. I am implementing an Alfresco solution with 3 servers, all in
active mode.

I want to use GlusterFS as storage for the 3 servers, to store the index
and documents.


From what I have read, GlusterFS does manage concurrent access to files.

I want to be sure of that.


Thanks in advance.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster on ZFS with Compression

2015-09-30 Thread Frank Rothenstein
Hi,

I'm actually doing this on a pretty similar system. I'm using oVirt with
KVM, RAIDZ on 4 disks with LZ4 and also dedup. My brick nodes are also
oVirt nodes (VM hosts) and have 32/48 GB RAM. ZFS may use up to 18 GB of
it, so a little more than your setup. oVirt needs rep=3; I have 3 bricks
per node.
I have no complaints about speed. oVirt uses thin-provisioned VM disks as
one big file per disk; self-heal operations do need their time, but with
little impact as far as I have seen. Performance is mostly about network
speed, I would say. Running the VMs directly on the bricks may improve
your VMs by using the L2ARC...
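
If you want to pin the ARC to a figure like that, it is set through the zfs
kernel module parameter - a sketch, with 18 GiB expressed in bytes:

# persistent: cap the ARC at 18 GiB
echo "options zfs zfs_arc_max=19327352832" >> /etc/modprobe.d/zfs.conf
# or change it at runtime on ZFS-on-Linux
echo 19327352832 > /sys/module/zfs/parameters/zfs_arc_max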

Frank

Am Mittwoch, den 30.09.2015, 15:00 +1000 schrieb Lindsay Mathieson:
> I'm revisiting Gluster for the purpose of hosting Virtual Machine
> images (KVM). I was considering the following configuration
> 
> 2 Nodes
> - 1 Brick per node (replication = 2)
> - 2 * 1GB Eth, LACP Bonded
> - Bricks hosted on ZFS
> - VM Images accessed via Block driver (gfapi)
> 
> ZFS Config:
> - Raid 10
> - SSD SLOG and L2ARC
> - 4 GB RAM
>  - Compression (lz4)
> 
> Does that seem like a sane layout?
> 
> Question: With the gfapi driver, does the vm image appear as a file
> on the host (zfs) file system?
> 
> 
> Background: I currently have our VM's hosted on Ceph using a similar
> config as above, minus zfs. I've found that the performance for such
> a small setup is terrible, the maintenance headache is high and when
> a drive drops out, the performance gets *really* bad. Last time I
> checked, gluster was much slower at healing large files than ceph,
> I'm hoping that has improved :)
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users



 

__
BODDEN-KLINIKEN Ribnitz-Damgarten GmbH
Sandhufe 2
18311 Ribnitz-Damgarten

Telefon: 03821-700-0
Fax:   03821-700-240

E-Mail: i...@bodden-kliniken.de   Internet: http://www.bodden-kliniken.de

Registered office: Ribnitz-Damgarten, Register court: Stralsund, HRB 2919, Tax no.:
081/126/00028
Supervisory board chair: Carmen Schröter, Managing director: Dr. Falko Milski

The content of this e-mail is intended exclusively for the named addressee. If
you are not the intended addressee of this e-mail or their representative,
please note that any form of publication, reproduction or forwarding of the
content of this e-mail is not permitted. Please inform the sender immediately
and delete the e-mail.


 Bodden-Kliniken Ribnitz-Damgarten GmbH 2014
*** Virus-free thanks to Kerio Mail Server and Sophos Antivirus ***

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster on ZFS with Compression

2015-09-30 Thread Tiemen Ruiten
Hello Lindsay,

From personal experience: A two-node volume can get you into trouble when
one of the nodes goes down unexpectedly/crashes. At the very least, you
should have an arbiter volume (introduced in Gluster 3.7) on a separate
physical node.
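
Creating such a volume looks roughly like this - a sketch with made-up host and
path names; the arbiter brick only stores metadata, so it can sit on a small
disk on the third machine:

# requires Gluster >= 3.7
gluster volume create datastore1 replica 3 arbiter 1 \
    node1:/tank/gluster/brick1 \
    node2:/tank/gluster/brick1 \
    node3:/var/lib/gluster/arbiter-brick1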

We are running oVirt VMs on top of a two-node Gluster cluster, and a few
months ago I ended up transferring several terabytes from one node to the
other because it was the fastest way to resolve the split-brain issues
after a crash of Gluster on one of the nodes. In effect, the second node
did not give us any redundancy, because the VM images in split-brain would
not be available for writes.

I don't think 4 GB is enough RAM, especially if you have a large L2ARC:
every L2ARC entry needs an entry in ARC as well, which is always in RAM.
RAM is relatively cheap nowadays, so go for at least 16 or 32.

You should also count the number of spindles you have and have it not
exceed the number of VM's  you're running much to get decent disk IO
performance.

On 30 September 2015 at 07:00, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> I'm revisiting Gluster for the purpose of hosting Virtual Machine images
> (KVM). I was considering the following configuration
>
> 2 Nodes
> - 1 Brick per node (replication = 2)
> - 2 * 1GB Eth, LACP Bonded
> - Bricks hosted on ZFS
> - VM Images accessed via Block driver (gfapi)
>
> ZFS Config:
> - Raid 10
> - SSD SLOG and L2ARC
> - 4 GB RAM
>  - Compression (lz4)
>
> Does that seem like a sane layout?
>
> Question: With the gfapi driver, does the vm image appear as a file on the
> host (zfs) file system?
>
>
> Background: I currently have our VM's hosted on Ceph using a similar
> config as above, minus zfs. I've found that the performance for such a
> small setup is terrible, the maintenance headache is high and when a drive
> drops out, the performance gets *really* bad. Last time I checked, gluster
> was much slower at healing large files than ceph, I'm hoping that has
> improved :)
>
> --
> Lindsay
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Tiemen Ruiten
Systems Engineer
R&D Media
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users