Re: [Gluster-users] Some questions about GlusterFS and Prod Environment

2017-02-10 Thread Riccardo Filippone
Hi Doug, first of all, thank you for your reply.

>> 1) Considering 2 frontends and 2 GlusterFS nodes, is it best practice to
>> write a different fstab entry for each node?
>>
>
> I don't see why you'd need that. The only differences you might have would
> be if you're mounting different volumes on different servers.
> You could homogenise that even further by simply using *localhost*
> instead of the server's FQDN.
>

Ok


>
>
>> Frontend1 (mount the first gluster node, and the second one as backup
>> volume):
>> gluster1.droplet.com:/app_volume /var/www/html glusterfs
>> defaults,_netdev,backupvolfile-server=gluster2.droplet.com 0 0
>>
>> Frontend2 /etc/fstab (mount the second gluster node, and the first one as
>> backup volume):
>> gluster2.droplet.com:/app_volume /var/www/html glusterfs
>> defaults,_netdev,backupvolfile-server=gluster1.droplet.com 0 0
>>
>
> On more recent versions (3.7+ at least), if you're mounting through fuse
> you no longer need to define backup servers; gluster deals with that
> internally. One mount option I would add is "relatime".
>

Thank you for this suggestion; I'm considering using FUSE now.
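
For reference, a single homogenised entry along those lines (just a sketch; the
hostnames, volume name and mount point are the ones from the example above)
could be used verbatim on both frontends:

  gluster1.droplet.com:/app_volume /var/www/html glusterfs defaults,_netdev,relatime 0 0

The server named here is only contacted at mount time to fetch the volume
layout; after that the FUSE client talks to all the bricks directly. If the
frontends themselves run glusterd, localhost:/app_volume would work too, as
suggested above, and keeping backup-volfile-servers=gluster2.droplet.com in the
options would only matter at mount time, in case gluster1 happens to be down then.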


>> 2) I want to back up the volume files every day. Can I run a zip command
>> directly from the gluster node, or do I need to mount it on the backup
>> server and then run the command? Is there any other good solution for
>> storing a backup?
>>
>
> I don't see that it'd make much difference. If it's lots of small files,
> though, there could be a performance hit. If it did become an issue, it might
> be more efficient to export snapshots (assuming your bricks are backed by
> LVM; ZFS would make it even easier).
>

Ok
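
One hedged approach, assuming the bricks sit on thinly provisioned LVM (which
gluster snapshots require): take a snapshot, mount it somewhere, and archive
from that mount, so the backup job never walks the live volume. The names
below (nightly, /mnt/snap, /backup) are placeholders:

  # gluster snapshot create nightly app_volume no-timestamp
  # gluster snapshot activate nightly
  # mount -t glusterfs gluster1.droplet.com:/snaps/nightly/app_volume /mnt/snap
  # tar czf /backup/app_volume-$(date +%F).tar.gz -C /mnt/snap .
  # umount /mnt/snap && gluster snapshot delete nightly

With ZFS-backed bricks the equivalent would be a zfs snapshot plus zfs send,
as hinted at above.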


>> 3) Can I share the webapps folder between Tomcat servers? Are there any known
>> issues? (For example, when putting a .war into the webapps folder, I think it
>> could generate some errors during Tomcat's war deployment. Does anyone have
>> experience with Tomcat and GlusterFS shared folders?)
>>
>> 4) Can I use load balancer software to mount GlusterFS volumes? If YES,
>> are there any benefits?
>>
>
>  Again, no need. The replica translator does that for you.
>

Perfect, but I'm not sure whether it would be a good idea to share the
webapps folder, using a single volume (called webapps) for all the Tomcat
instances (so we only need to deploy once).
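
If you do try it, one hedged way is a dedicated webapps volume mounted at the
same path on every Tomcat node, e.g. an identical fstab line such as
(hostname and paths are placeholders):

  gluster1.droplet.com:/webapps /opt/tomcat/webapps glusterfs defaults,_netdev,relatime 0 0

The caveat is the one raised in question 3: with autoDeploy/unpackWARs enabled,
every Tomcat instance will try to detect and unpack the same .war in the shared
directory at about the same time, so it is probably safer to turn
auto-deployment off for the shared appBase (or deploy an already-exploded
directory) and reload the instances explicitly.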

Thank you in advance; we are going to start the work next week :)

regards,

Riccardo

Re: [Gluster-users] Gluster Meetup or Conference 2017 ???

2017-02-10 Thread Amye Scavarda
On Tue, Feb 7, 2017 at 4:25 PM, David Spisla wrote:

> Hello Gluster-Community,
>
>
>
> Are there any meetups or conferences planned in 2017? Maybe in Europe? I
> met the Gluster team at FOSDEM a few days ago and I am interested in more
> specialized meetings.
>
>
>
> Sincerely
>
> David Spisla
>

David,
Great to meet you at FOSDEM!

We're putting the events that we'll be at up on our community site at
https://www.gluster.org/events/
Coming up are Incontro DevOps in Bologna, Italy; Vault in Boston, MA; and
Red Hat Summit.
We'll add more as we know about them.

Thanks!
-- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead

[Gluster-users] lvm layout for gluster -- using multiple physical volumes

2017-02-10 Thread Joseph Lorenzini
Hi all,

I want to use LVM for two reasons:

- gluster snapshots
- the ability to dynamically add space to a brick


Here's what I'd like to do (see the sketch after the list):

1. create two more physical volumes
2. create a single volume group from those physical volumes
3. create a single logical volume
4. make the single logical volume a brick
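
A rough sketch of those four steps with placeholder device names; note that
gluster volume snapshots require the brick LV to be thinly provisioned, hence
the thin pool (sizes are arbitrary):

  # pvcreate /dev/sdb /dev/sdc
  # vgcreate gluster_vg /dev/sdb /dev/sdc
  # lvcreate -L 900G --thinpool brick_pool gluster_vg
  # lvcreate -V 900G --thin -n brick1 gluster_vg/brick_pool
  # mkfs.xfs -i size=512 /dev/gluster_vg/brick1
  # mkdir -p /data/brick1 && mount /dev/gluster_vg/brick1 /data/brick1

and later, to grow the brick after adding another disk:

  # pvcreate /dev/sdd && vgextend gluster_vg /dev/sdd
  # lvextend -L +500G gluster_vg/brick_pool
  # lvextend -L +500G gluster_vg/brick1 && xfs_growfs /data/brick1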

So my questions are these:

1. Is there any issue with adding and removing LVM physical volumes in the
volume group that backs the gluster brick?
2. Is there any issue with having multiple physical volumes in the LVM
volume group?
3. Will disk usage be any higher with gluster because LVM is being used
underneath the filesystem, instead of a filesystem like XFS directly on the
disks?

Thanks,
Joe

Re: [Gluster-users] [Gluster-devel] Release 3.10 RC0 tagged

2017-02-10 Thread Shyam
Following up on the packages available: [1] has the packages for RC0 for
Debian and Fedora distributions, [2] has packages for OpenSuSE, and [3] for
Ubuntu.


Release notes are available at [4].

We welcome feedback on 3.10; for any issues faced, please raise a bug so
that we can assess it prior to the 3.10 release.


Thanks

[1] https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.0rc0/
[2] https://build.opensuse.org/project/show/home:kkeithleatredhat:Leap42.2-3.10
[3] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.10
[4] https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.0.md



On 02/02/2017 10:12 PM, Shyam wrote:

Thanks to some last hour support from Jeff, Xavi, Nithya, Anoop, Kaleb
and a few others, we now have RC0 tagged, and on its way to package
maintainers.

Expect an official RC0 release announcement with packages in a day or two.

Now we get into the stabilization phase and testing feedback from all is
highly appreciated.

For the antsy, git clone the repo and check out the v3.10.0rc0 tag to roll
your own builds and test (doc/release-notes/3.10.0.md lists the
release-related features and bug fixes).
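
Roughly (a sketch; assuming the project's GitHub repository and the usual
autotools build steps):

  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  git checkout v3.10.0rc0
  ./autogen.sh && ./configure && make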

Thanks,
Shyam (Kaleb, Talur)


Re: [Gluster-users] Slow performance on samba with small files

2017-02-10 Thread Anoop C S
On Thu, 2017-02-09 at 16:18 +, Gary Lloyd wrote:
> Was just reading the small file section of the 3.9 release notes:
> 
> http://blog.gluster.org/2016/11/announcing-gluster-3-9/
> 
> Setting these options does seem to increase transfer speeds on small files by
> quite a lot:
>   # gluster volume set <volname> features.cache-invalidation on
>   # gluster volume set <volname> features.cache-invalidation-timeout 600
>   # gluster volume set <volname> performance.stat-prefetch on   # This one
> seemed to have the biggest impact on small file performance for me
>   # gluster volume set <volname> performance.cache-invalidation on
>   # gluster volume set <volname> performance.md-cache-timeout 600
> 
> Setting # gluster volume set <volname> performance.cache-samba-metadata on
> (only for SMB access) results in my client repeatedly losing track of the
> server: the shares often disappear / become inaccessible and I can only get
> them back by logging off and back on to the machine. This is with distro
> Samba 4.4.4.
> 

This is something which needs to be analyzed further.
We would need more info: your volume config, the glusterfs client logs from
Samba (/var/log/samba/glusterfs-..log), and your smb.conf.
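
For reference, a rough sketch of gathering that information; <volname> and
the share name are placeholders, and the vfs_glusterfs options shown are just
the commonly used ones:

  # gluster volume info <volname>
  # testparm -s        # prints the effective smb.conf

  [gluster-share]
      path = /
      vfs objects = glusterfs
      glusterfs:volume = <volname>
      glusterfs:logfile = /var/log/samba/glusterfs-<volname>.%M.log
      glusterfs:loglevel = 7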

> Has anyone here had the same issue? Does the version of samba need to be
> newer to support the feature?
> feature ?
> 

To my knowledge, the version of Samba should not be a problem here. At least
I am not aware of any such issues with v4.4.6.

> Thanks
> 
> Gary Lloyd
> 
> I.T. Systems:Keele University
> Finance & IT Directorate
> Keele:Staffs:IC1 Building:ST5 5NB:UK
> +44 1782 733063
> 
> 
> On 8 February 2017 at 11:49, Дмитрий Глушенок  wrote:
> > For _every_ file copied, samba performs readdir() to get all entries of the
> > destination folder. Then the list is searched for the filename (to prevent
> > name collisions, as SMB shares are not case sensitive). The more files in
> > the folder, the more time it takes to perform readdir(). It is a lot worse
> > for Gluster, because a single folder's contents are distributed among many
> > servers and Gluster has to join many directory listings (requested over the
> > network) into one and return it to the caller.
> > 
> > Rsync does not perform readdir(); it just checks file existence with
> > stat(), IIRC. And as modern Gluster versions have a default setting to
> > check for the file only at its destination (when the volume is balanced),
> > the check is relatively fast.
> > 
> > You can hack samba to prevent such checks if your goal is simply to get
> > files copied faster (as you are sure the files you are copying do not
> > already exist at the destination). But try to perform 'ls -l' on a _not_
> > cached folder with thousands of files - it will take tens of seconds. This
> > is time your users will waste browsing shares.
> > 
> > > On 8 Feb 2017, at 13:17, Gary Lloyd wrote:
> > > 
> > > Thanks for the reply
> > > 
> > > I've just done a bit more testing. If I use rsync from a gluster client 
> > > to copy the same files
> > > to the mount point it only takes a couple of minutes.
> > > For some reason it's very slow on samba though (version 4.4.4).
> > > 
> > > I have tried various samba tweaks / settings and have yet to get 
> > > acceptable write speed on
> > > small files.
> > > 
> > > 
> > > Gary Lloyd
> > > 
> > > I.T. Systems:Keele University
> > > Finance & IT Directorate
> > > Keele:Staffs:IC1 Building:ST5 5NB:UK
> > > +44 1782 733063
> > > 
> > > 
> > > On 8 February 2017 at 10:05, Дмитрий Глушенок  wrote:
> > > > Hi,
> > > > 
> > > > There are a number of tweaks/hacks to make it better, but IMHO overall
> > > > performance with small files is still unacceptable for such folders
> > > > with thousands of entries.
> > > > 
> > > > If your shares are not too large to be placed on a single filesystem
> > > > and you still want to use Gluster, it is possible to run a VM on top of
> > > > Gluster. Inside that VM you can create a ZFS/NTFS filesystem to be shared.
> > > > 
> > > > > On 8 Feb 2017, at 12:10, Gary Lloyd wrote:
> > > > > 
> > > > > Hi
> > > > > 
> > > > > I am currently testing gluster 3.9 replicated/distributed on CentOS
> > > > > 7.3 with samba/ctdb.
> > > > > I have been able to get it all up and running, but writing small
> > > > > files is really slow.
> > > > > 
> > > > > If I copy large files from gluster-backed samba I get almost wire
> > > > > speed (we only have 1Gb at the moment). I get around half that speed
> > > > > if I copy large files to the gluster-backed samba system, which I am
> > > > > guessing is due to it being replicated (this is acceptable).
> > > > > 
> > > > > Small file write performance seems really poor for us though:
> > > > > As an example I have an Eclipse IDE workspace folder