[Gluster-users] Backup of 48126852 files / 9.1 TB data

2016-02-14 Thread Nico Schottelius
Hello everyone,

we have a 2-brick setup running on a RAID6 with 19 TB of storage.

We are currently facing the problem that backing up the data (9.1 TB in
48126852 files) takes more than a week when done via rsync (actually,
ccollect[0]).

During the backup the rsync process is continuously in D state (expected),
but CPU load is far from 100% and the disks are also only about 15-30% busy.

(this is a snapshot from right now)

I have two questions, the second one more important:

a) Is there a good way to identify the bottleneck?
b) Is it "safe" to backup data directly from the underlying
  filesystem instead of going via the glusterfs mount?
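For (a), a minimal sketch of the kind of checks meant here; the PID is a stand-in so the sketch is self-contained (on the backup host it would be the actual rsync process), and the follow-up tools are standard Linux utilities:

```shell
#!/bin/sh
# Check a process's scheduler state: D means uninterruptible I/O wait.
# PID=$$ is a placeholder; in practice it would be e.g. PID=$(pidof rsync).
PID=$$
STATE=$(awk '{print $3}' "/proc/$PID/stat")
echo "state of $PID: $STATE"

# Interactive follow-ups to locate the bottleneck:
#   iostat -x 1           # per-device %util and await
#   pidstat -d 1          # per-process read/write throughput
#   cat /proc/$PID/stack  # where in the kernel the process is blocked
```

If rsync sits in D state while disk %util stays low, the wait is more likely per-file round-trips (e.g. FUSE lookups) than raw disk throughput.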

The reason I ask about (b) is that, before we switched to glusterfs,
backing up from those servers took about a day, so I suspect that backing
up directly from the xfs filesystem would again do the job.
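On the mechanics of (b), leaving the safety question to the list: when reading a brick directly, gluster's .glusterfs metadata directory should be skipped. Below is a sketch with illustrative stand-in paths, using tar so the exclusion is easy to demonstrate end to end; the same --exclude idea applies to rsync/ccollect:

```shell
#!/bin/sh
# Sketch: copy a brick's payload while skipping the .glusterfs metadata
# directory. All paths here are illustrative stand-ins.
set -e
BRICK=$(mktemp -d)    # stands in for the real brick, e.g. /home/gluster
DEST=$(mktemp -d)     # stands in for the backup target

mkdir -p "$BRICK/.glusterfs/00" "$BRICK/vm-images"
echo payload  > "$BRICK/vm-images/disk.img"
echo metadata > "$BRICK/.glusterfs/00/gfid-link"

# With rsync the equivalent would be roughly:
#   rsync -aHX --exclude='/.glusterfs' "$BRICK/" backuphost:/backup/brick/
( cd "$BRICK" && tar -cf - --exclude='./.glusterfs' . ) \
    | ( cd "$DEST" && tar -xf - )

test -f "$DEST/vm-images/disk.img" && echo "payload copied"
test ! -e "$DEST/.glusterfs"       && echo "metadata skipped"
```

The .glusterfs directory holds gluster's internal gfid hardlinks, which belong to the brick rather than to the payload, hence the exclusion.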

Thanks for any hints,

Nico


[0] http://www.nico.schottelius.org/software/ccollect/

-- 
Become part of modern working in Glarnerland at www.digitalglarus.ch!
Read the news on Twitter: www.twitter.com/DigitalGlarus
Join the discussion on Facebook:  www.facebook.com/digitalglarus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Revamping the GlusterFS Documentation...

2015-03-26 Thread Nico Schottelius
Shravan Chandrashekar [Thu, Mar 26, 2015 at 02:10:59AM -0400]:
> Hi John,
> 
> Thank you, that's really valuable feedback.
> We are working on updating the documentation and will make sure this
> gap is addressed.

That's much appreciated Shravan!

I can add from my side: if the source is really open and easily
changeable (as in a GitHub PR or similar), I think people will want to
contribute to improving it.

However, I think there needs to be a rough table of contents or
structure provided by the project lead.

My 50 KRW,

Nico

-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24


Re: [Gluster-users] Working but some issues

2015-03-17 Thread Nico Schottelius
Joe Julian [Mon, Mar 16, 2015 at 11:43:12PM -0700]:
> Good question. It took me months to figure all that out (with far less
> documentation than there is now) [...]

... just wondering: Why don't we run a kickstarter/indiegogo campaign to
finance people to write (from scratch?) documentation?

There are examples of GREAT documentation in various projects, and
given that glusterfs is an evolving piece of software that is becoming
more popular, this could be a way to improve the documentation and thus
the experience with glusterfs.



Re: [Gluster-users] Peers not connecting after changing IP address

2015-03-12 Thread Nico Schottelius
Alex,

have you checked the firewall settings? Your description reminds me of
a problem in which connections were only possible in one direction.

Cheers,

Nico

Alex Crow [Tue, Mar 10, 2015 at 03:47:26PM +]:
> Hi JF,
> 
> They are all hostnames, no IPs anywhere. The odd thing is that now, the
> remote site says everything is up, whereas the local servers only show each
> other as connected. It's a bit odd.
> 
> Cheers
> 
> Alex
> 
> On 10/03/15 15:40, JF Le Fillâtre wrote:
> >On my setup: on the host from which I peer probed, it's all hostnames.
> >On the other hosts, it's all IPs.
> >
> >Can you check if it's the case on your setup too?
> >
> >Thanks,
> >JF
> >
> >
> >On 10/03/15 16:29, Alex Crow wrote:
> >>Hi,
> >>
> >>They only have the hostname:
> >>
> >>uuid=22b88f85-0554-419f-a279-980fceaeaf49
> >>state=3
> >>hostname1=zalma
> >>
> >>And pinging these hostnames give the correct IP. Still no connection
> >>though.
> >>
> >>Thanks,
> >>
> >>Alex
> >>
> >>On 10/03/15 15:04, JF Le Fillâtre wrote:
> >>>Hello,
> >>>
> >>>Check the files in the peer directory:
> >>>
> >>>/var/lib/glusterd/peers
> >>>
> >>>They contain the IP addresses of the peers.
> >>>
> >>>I haven't done it but I assume that if you update those files on all
> >>>servers you should be back online.
> >>>
> >>>Thanks,
> >>>JF
> >>>
> >>>
> >>>On 10/03/15 16:00, Alex Crow wrote:
> Hi,
> 
> I've had a 4 node Dis/Rep cluster up and running for a while, but
> recently moved two of the nodes (the replicas of the other 2) to a
> nearby datacentre. The IP addresses of the moved two therefore changed,
> but I updated the /etc/hosts file on all four hosts to reflect the
> change (and the peers were all probed by name, not IP).
> 
> However at each site the other two peers show as disconnected, even
> though the servers can all successfully talk to each other. Is there
> some way I can kick this back into life?
> 
> Regards,
> 
> Alex
> 
> 



Re: [Gluster-users] Debian stable gluster packages

2015-03-09 Thread Nico Schottelius
Uli Zumbuhl [Mon, Mar 09, 2015 at 01:37:50PM +0100]:
> If we are already speaking about stable packages I have a question: is the 
> 3.6 release "safe-enough" to run in production?

We are testing this out at the moment and so far it looks good (also
thanks to the great support of the community).




Re: [Gluster-users] fstab problem

2015-03-07 Thread Nico Schottelius
Good morning,

Raghavendra Bhat [Sat, Mar 07, 2015 at 11:34:01AM +0530]:
> >>What are the chances of getting this pushed into a release soon? I
> >>can patch our hosts manually for the moment, but having this in the
> >>package makes life much easier for maintenance.
> Hi,
> 
> Since this is related to mounting, I can accept the patch for 3.6.3.

I have to say: gluster development team, all of you, you are

A-W-E-S-O-M-E!

I have never seen such a well-organised and smoothly working process,
and I had to write about it:

http://www.nico.schottelius.org/blog/glusterfs-foss-development-is-awesome/

Have a great day!

Nico



[Gluster-users] Volume turns read only when one brick is missing

2015-03-06 Thread Nico Schottelius
Hello,

when I reboot one out of the two servers in a replicated setup,
the volume turns into read only mode until the first server is back.

This is not what I expected and I wonder if I misconfigured anything.

The expected behaviour from my point of view is that the volume stays
read write and that the rebooting node will catch up again.

My setup consists of three servers, two of them hosting the bricks,
the third one only contributing to the quorum.

Thanks for any hint!

Nico




I mount the volume from fstab with this line:

vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
glusterfs 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 0 0



[19:10:05] vmhost1-cluster1:~# gluster volume info
 
Volume Name: cluster1
Type: Replicate
Volume ID: b371ec1f-e01e-49f8-9573-e0d1e74bbd90
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster
Options Reconfigured:
nfs.disable: 1
cluster.ensure-durability: off
server.allow-insecure: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server

[19:25:15] vmhost1-cluster1:~# gluster peer status
Number of Peers: 2

Hostname: entrance.place4.ungleich.ch
Uuid: 987e543e-fbc4-497b-9bc9-ae56086d9421
State: Peer in Cluster (Connected)

Hostname: 192.168.0.2
Uuid: 688816e1-aa51-450f-9300-979c4e83e33e
State: Peer in Cluster (Connected)
Other names:
136.243.38.8





Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Hey Niels,

Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:
> That looks good to me. Care to file a bug and send this patch through
> our Gerrit for review?
> 
> 
> http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow


it is ready for merging:

http://review.gluster.org/#/c/9824/
https://bugzilla.redhat.com/show_bug.cgi?id=1199545

I've replaced various other /dev/stderr occurrences and also took
care of the non-Linux mount version.

What are the chances of getting this pushed into a release soon? I
can patch our hosts manually for the moment, but having this in the
package makes life much easier for maintenance.

Cheers,

Nico



Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Hey Niels,

I'll set up a Gerrit account for myself and submit it shortly.

Cheers,

Nico

Niels de Vos [Fri, Mar 06, 2015 at 09:42:57AM -0500]:
> On Fri, Mar 06, 2015 at 03:30:09PM +0100, Nico Schottelius wrote:
> > Just checked - essentially removing the stderr line in
> > /sbin/mount.glusterfs and replacing it by the usual >&2 does the job:
> > 
> > [15:29:30] vmhost1-cluster1:~# diff -u /sbin/mount.glusterfs*
> > --- /sbin/mount.glusterfs   2015-03-06 14:17:13.973729836 +0100
> > +++ /sbin/mount.glusterfs.orig  2015-03-06 14:17:18.798642292 +0100
> > @@ -10,7 +10,7 @@
> >  
> >  warn ()
> >  {
> > -   echo "$@" >&2
> > +   echo "$@" >/dev/stderr
> >  }
> >  
> >  _init ()
> 
> That looks good to me. Care to file a bug and send this patch through
> our Gerrit for review?
> 
> 
> http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
> 
> Let me know if that would be too much work, I can send the change for
> you too.
> 
> Thanks,
> Niels
> 
> > [15:29:31] vmhost1-cluster1:~# 
> > 
> > 
> > Nico Schottelius [Fri, Mar 06, 2015 at 01:29:38PM +0100]:
> > > Funny, I am running into the same problem with CentOS 7 and 
> > > glusterfs-3.6.2
> > > right now:
> > > 
> > > var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
> > >Loaded: loaded (/etc/fstab)
> > >Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 
> > > 21s ago
> > > Where: /var/lib/one/datastores/100
> > >  What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
> > >   Process: 2142 ExecMount=/bin/mount 
> > > vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
> > > -t glusterfs -o 
> > > defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
> > > (code=exited, status=1/FAILURE)
> > > 
> > > Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
> > > /var/lib/one/datastores/100.
> > > Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 
> > > 13: /dev/stderr: No such device or address
> > > Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: 
> > > var-lib-one-datastores-100.mount mount process exited, code=exited 
> > > status=1
> > > Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
> > > var-lib-one-datastores-100.mount entered failed state.
> > > [13:23:33] vmhost1-cluster1:~# /bin/mount 
> > > vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 
> > > -t glusterfs -o 
> > > defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch
> > > 
> > > I've found older problems with glusterd reporting this, but no real
> > > solution for the fstab entry.
> > > 
> > > 
> > > 何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
> > > > Hi Guys,
> > > > 
> > > > I meet fstab problem,
> > > > fstab config:   gwgfs01:/vol01  /mnt/gluster  glusterfs
> > > > defaults,_netdev  0 0
> > > > 
> > > > The mount doesn't take effect; I checked the following:
> > > > 
> > > > [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
> > > > mnt-gluster.mount - /mnt/gluster
> > > >Loaded: loaded (/etc/fstab)
> > > >Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 
> > > > CST; 53s ago
> > > > Where: /mnt/gluster
> > > >  What: gwgfs01:/vol01
> > > >   Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t 
> > > > glusterfs -o defaults,_netdev (code=exited, status=1/FAILURE)
> > > > 
> > > > Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
> > > > Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
> > > > Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 
> > > > 13: /dev/stderr: No such device or address
> > > > Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
> > > > exited, code=exited status=1
> > > > Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered 
> > > > failed state.
> > > > 
> > > > 
> > > > BTW, I can mount that gluster volume manually with the command: mount -t glusterfs 
> > > > gwgfs01:/vol01 /mnt/gluster
> > > > 
> > > > What happened?
> > > 




Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Just checked - essentially removing the /dev/stderr redirection in
/sbin/mount.glusterfs and replacing it with the usual >&2 does the job:

[15:29:30] vmhost1-cluster1:~# diff -u /sbin/mount.glusterfs*
--- /sbin/mount.glusterfs   2015-03-06 14:17:13.973729836 +0100
+++ /sbin/mount.glusterfs.orig  2015-03-06 14:17:18.798642292 +0100
@@ -10,7 +10,7 @@
 
 warn ()
 {
-   echo "$@" >&2
+   echo "$@" >/dev/stderr
 }
 
 _init ()
[15:29:31] vmhost1-cluster1:~# 
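For context on why the two forms behave differently: `>&2` duplicates whatever file descriptor 2 currently is, while `>/dev/stderr` has to re-open that path, which fails with ENXIO ("No such device or address") when fd 2 is not re-openable, e.g. the socket systemd attaches to mount helpers. A small sketch of the portable form:

```shell
#!/bin/sh
# Portable warn(): write to fd 2 directly instead of re-opening
# /dev/stderr (which fails when fd 2 is a socket, as under systemd
# mount units).
warn () {
    echo "$@" >&2
}

# Demonstrate that the message really goes to stderr:
# 2>&1 first, so only stderr ends up in the command substitution.
msg=$(warn "something went wrong" 2>&1 >/dev/null)
echo "captured from stderr: $msg"
```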


Nico Schottelius [Fri, Mar 06, 2015 at 01:29:38PM +0100]:
> Funny, I am running into the same problem with CentOS 7 and glusterfs-3.6.2
> right now:
> 
> var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
>Loaded: loaded (/etc/fstab)
>Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 21s 
> ago
> Where: /var/lib/one/datastores/100
>  What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
>   Process: 2142 ExecMount=/bin/mount 
> vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
> glusterfs -o 
> defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
> (code=exited, status=1/FAILURE)
> 
> Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
> /var/lib/one/datastores/100.
> Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 13: 
> /dev/stderr: No such device or address
> Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: var-lib-one-datastores-100.mount 
> mount process exited, code=exited status=1
> Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
> var-lib-one-datastores-100.mount entered failed state.
> [13:23:33] vmhost1-cluster1:~# /bin/mount 
> vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
> glusterfs -o 
> defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch
> 
> I've found older problems with glusterd reporting this, but no real
> solution for the fstab entry.
> 
> 
> 何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
> > Hi Guys,
> > 
> > I meet fstab problem,
> > fstab config:   gwgfs01:/vol01  /mnt/gluster  glusterfs
> > defaults,_netdev  0 0
> > 
> > The mount doesn't take effect; I checked the following:
> > 
> > [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
> > mnt-gluster.mount - /mnt/gluster
> >Loaded: loaded (/etc/fstab)
> >Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 CST; 
> > 53s ago
> > Where: /mnt/gluster
> >  What: gwgfs01:/vol01
> >   Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t 
> > glusterfs -o defaults,_netdev (code=exited, status=1/FAILURE)
> > 
> > Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
> > Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
> > Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 13: 
> > /dev/stderr: No such device or address
> > Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
> > exited, code=exited status=1
> > Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered 
> > failed state.
> > 
> > 
> > BTW, I can mount that gluster volume manually with the command: mount -t glusterfs 
> > gwgfs01:/vol01 /mnt/gluster
> > 
> > What happened?
> 


Re: [Gluster-users] fstab problem

2015-03-06 Thread Nico Schottelius
Funny, I am running into the same problem with CentOS 7 and glusterfs-3.6.2
right now:

var-lib-one-datastores-100.mount - /var/lib/one/datastores/100
   Loaded: loaded (/etc/fstab)
   Active: failed (Result: exit-code) since Fri 2015-03-06 13:23:12 CET; 21s ago
Where: /var/lib/one/datastores/100
 What: vmhost2-cluster1.place4.ungleich.ch:/cluster1
  Process: 2142 ExecMount=/bin/mount 
vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
glusterfs -o 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch 
(code=exited, status=1/FAILURE)

Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Mounted 
/var/lib/one/datastores/100.
Mar 06 13:23:12 vmhost1-cluster1 mount[2142]: /sbin/mount.glusterfs: line 13: 
/dev/stderr: No such device or address
Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: var-lib-one-datastores-100.mount 
mount process exited, code=exited status=1
Mar 06 13:23:12 vmhost1-cluster1 systemd[1]: Unit 
var-lib-one-datastores-100.mount entered failed state.
[13:23:33] vmhost1-cluster1:~# /bin/mount 
vmhost2-cluster1.place4.ungleich.ch:/cluster1 /var/lib/one/datastores/100 -t 
glusterfs -o 
defaults,_netdev,backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch

I've found older problems with glusterd reporting this, but no real
solution for the fstab entry.


何亦军 [Fri, Mar 06, 2015 at 02:46:12AM +]:
> Hi Guys,
> 
> I meet fstab problem,
> fstab config:   gwgfs01:/vol01  /mnt/gluster  glusterfs
> defaults,_netdev  0 0
> 
> The mount doesn't take effect; I checked the following:
> 
> [root@gfsclient02 ~]# systemctl status mnt-gluster.mount -l
> mnt-gluster.mount - /mnt/gluster
>Loaded: loaded (/etc/fstab)
>Active: failed (Result: exit-code) since Fri 2015-03-06 10:39:05 CST; 53s 
> ago
> Where: /mnt/gluster
>  What: gwgfs01:/vol01
>   Process: 1324 ExecMount=/bin/mount gwgfs01:/vol01 /mnt/gluster -t glusterfs 
> -o defaults,_netdev (code=exited, status=1/FAILURE)
> 
> Mar 06 10:38:47 gfsclient02 systemd[1]: Mounting /mnt/gluster...
> Mar 06 10:38:47 gfsclient02 systemd[1]: Mounted /mnt/gluster.
> Mar 06 10:39:05 gfsclient02 mount[1324]: /sbin/mount.glusterfs: line 13: 
> /dev/stderr: No such device or address
> Mar 06 10:39:05 gfsclient02 systemd[1]: mnt-gluster.mount mount process 
> exited, code=exited status=1
> Mar 06 10:39:05 gfsclient02 systemd[1]: Unit mnt-gluster.mount entered failed 
> state.
> 
> 
> BTW, I can mount that gluster volume manually with the command: mount -t glusterfs 
> gwgfs01:/vol01 /mnt/gluster
> 
> What happened?




Re: [Gluster-users] [Gluster-devel] Looking for volunteer to write up official "How to do GlusterFS in the Cloud: The Right Way" for Rackspace...

2015-02-17 Thread Nico Schottelius
Good evening gentlemen,

I would be interested in creating this documentation.

I have recently set up glusterfs 3.4.2 and 3.6.2 on Ubuntu 14.04 and CentOS 7
and done some benchmarks on our hosting platform; however, I do not have
years of experience running glusterfs (though somewhat more with
Sheepdog/Ceph).

I am nonetheless interested in creating it, also for the purpose of
finding the rough edges of glusterfs.

The big question for me, however, is the time frame in which this
documentation is to be created, as I'd be writing it in my evening spare
time.

Cheers,

Nico

Justin Clift [Tue, Feb 17, 2015 at 10:06:18PM +]:
> On 17 Feb 2015, at 21:49, Josh Boon  wrote:
> > Do we have use cases to focus on? Gluster is part of the answer to many 
> > different questions so if it's things like simple replication and 
> > distribution and basic performance tuning I could help. I also have a heavy 
> > Ubuntu tilt so if it's Red Hat oriented I'm not much help :)
> 
> Jesse, thoughts on this?
> 
> I kinda think it would be useful to have instructions which give
> correct steps for Ubuntu + Red Hat (and anything else suitable).
> 
> Josh, if Jesse agrees, then your Ubuntu knowledge will probably
> be useful for this. ;)
> 
> + Justin
> 
> 
> > 
> > - Original Message -
> > From: "Justin Clift" 
> > To: "Gluster Users" , "Gluster Devel" 
> > 
> > Cc: "Jesse Noller" 
> > Sent: Tuesday, February 17, 2015 9:37:05 PM
> > Subject: [Gluster-devel] Looking for volunteer to write up official "How to 
> > do GlusterFS in the Cloud: The Right Way" for Rackspace...
> > 
> > Yeah, huge subject line.  :)
> > 
> > But it gets the message across... Rackspace provide us a *bunch* of online 
> > VM's
> > which we have our infrastructure in + run the majority of our regression 
> > tests
> > with.
> > 
> > They've asked us if we could write up a "How to do GlusterFS in the Cloud: 
> > The
> > Right Way" (technical) doc, for them to add to their doc collection.
> > They get asked for this a lot by customers. :D
> > 
> > Sooo... looking for volunteers to write this up.  And yep, you're welcome to
> > have your name all over it (eg this is good promo/CV material :>)
> > 
> > VM's (in Rackspace obviously) will be provided of course.
> > 
> > Anyone interested?
> > 
> > (Note - not suitable for a GlusterFS newbie. ;))
> > 
> > Regards and best wishes,
> > 
> > Justin Clift
> > 
> > --
> > GlusterFS - http://www.gluster.org
> > 
> > An open source, distributed file system scaling to several
> > petabytes, and handling thousands of clients.
> > 
> > My personal twitter: twitter.com/realjustinclift
> > 
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 



Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-16 Thread Nico Schottelius
Hello,

re-tested; however, due to a reinstall, more parameters changed than
just the volume option:
Ubuntu 14.04 -> CentOS 7 (*)
Glusterfs 3.6.2 from RPM glusterfs-epel.repo vs. gluster-3.6 ppa

Otherwise the test setup is the same. The results are:

3.6.2-CentOS7-cluster.ensure-durability-on: ~65-72 MiB/s
3.6.2-CentOS7-cluster.ensure-durability-off:~86-89 MiB/s
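For reference, a sketch (not a recommendation) of how the option was toggled per volume, using the volume name cluster1 from our setup:

```shell
# Trade fsync-before-ack durability on the bricks for throughput:
gluster volume set cluster1 cluster.ensure-durability off

# Revert to the default behaviour:
gluster volume reset cluster1 cluster.ensure-durability
```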

So the question from our side is now: what exactly is the risk we take
when running with cluster.ensure-durability off?

Also, I wonder which other tuning parameters you can recommend for a
workload in which we _only_ have VM images (i.e. big files)?

Cheers,

Nico

(*) Using CentOS 7, as we are interested in using libgfapi with qemu.

Nico Schottelius [Thu, Feb 12, 2015 at 12:07:33AM +0100]:
> Hello,
> 
> switching from 3.4.2 to 3.6.2 reduces our average test performance 
> dramatically.
> 
> Our test setup: directly connected 1 GBit/s hosts setup with:
> 
> rm -rf /home/gluster/.glusterfs/
> rm /home/gluster/*
> setfattr -x trusted.glusterfs.volume-id  /home/gluster/
> setfattr -x trusted.gfid /home/gluster/
> gluster volume create xfs-plain replica 2 transport tcp 
> vmhost1-cluster1:/home/gluster vmhost2-cluster1:/home/gluster
> gluster volume start xfs-plain
> 
> Afterwards we run our near real world test
> 
> mount -t glusterfs vmhost1-cluster1:/xfs-plain /mnt/gluster/
> while true; do dd if=redmine-from-ceph-20150204 of=/mnt/gluster/testvm 
> bs=1M; rm /mnt/gluster/testvm; done
> 
> The results are
> 
> 3.4.2: ~71-72 MiB/s [ubuntu 14.04]
> 3.6.2: ~59-64 MiB/s [gluster-3.6 ppa]
> 
> We have removed 3.6.2 and re-installed 3.4.2 and can consistently reproduce
> these numbers over hours of testing.
> 
> Is there any configuration change that we need to incorporate to run
> 3.6.2 faster or is this a known problem?
> 
> Cheers,
> 
> Nico
> 
> -- 
> New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users



[Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-11 Thread Nico Schottelius
Hello,

switching from 3.4.2 to 3.6.2 reduces our average test performance dramatically.

Our test setup: directly connected 1 GBit/s hosts setup with:

rm -rf /home/gluster/.glusterfs/
rm /home/gluster/*
setfattr -x trusted.glusterfs.volume-id  /home/gluster/
setfattr -x trusted.gfid /home/gluster/
gluster volume create xfs-plain replica 2 transport tcp 
vmhost1-cluster1:/home/gluster vmhost2-cluster1:/home/gluster
gluster volume start xfs-plain

Afterwards we run our near-real-world test:

mount -t glusterfs vmhost1-cluster1:/xfs-plain /mnt/gluster/
while true; do dd if=redmine-from-ceph-20150204 of=/mnt/gluster/testvm 
bs=1M; rm /mnt/gluster/testvm; done

The results are

3.4.2: ~71-72 MiB/s [ubuntu 14.04]
3.6.2: ~59-64 MiB/s [gluster-3.6 ppa]

We have removed 3.6.2 and re-installed 3.4.2 and can consistently reproduce
these numbers over hours of testing.

Is there any configuration change that we need to incorporate to run
3.6.2 faster or is this a known problem?

Cheers,

Nico



[Gluster-users] Recommendations for a new gluster cluster

2015-02-05 Thread Nico Schottelius
Good morning guys,

we are setting up our new hosting based on gluster, after spending
some months trying out both ceph and gluster.

We selected Ubuntu 14.04 as the base stack and are now evaluating which
version of gluster and which filesystem to choose.

At the moment we are seeing 80 MiB/s with ext4, 93 MiB/s with xfs and
roughly 89-91 MiB/s with xfs + dm-crypt. [0]
 
All tests so far have been running with gluster 3.4.2-1ubuntu1 with a
replicated volume.

My question: What is the recommended gluster version for production (*)
and what is the recommended filesystem?

Our timeline is roughly a 3-year cycle for offering virtual machines
on top of gluster, after which we will exchange the whole stack again.

Cheers,

Nico


[0] We tweet about our progress as @ungleich on https://twitter.com/ungleich
if you are curious.

(*) I saw some problems with 3.6.2 on the mailing list.



[Gluster-users] Qemu + Gluster: Which distro + version?

2015-01-29 Thread Nico Schottelius
Good morning,

we are looking into different HA platforms for our hosting infrastructure
and, besides Ceph, are considering gluster.

However, so far it looks like the Qemu shipped with most distributions
does not include gluster support, even though the patches date back to
2012.

So I was wondering whether anyone on this list is using gluster with Qemu
in production, and which operating systems / Linux distributions and
versions you are using?
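As a concrete illustration of what "gluster support in Qemu" buys (a sketch: it assumes a qemu built with gluster/libgfapi support, and the hostname, volume and image names are placeholders):

```shell
# Create a qcow2 image directly on a gluster volume via libgfapi
# (URI form: gluster://server[:port]/volname/path):
qemu-img create -f qcow2 gluster://vmhost1/cluster1/testvm.qcow2 10G

# Boot a VM from it, bypassing the FUSE mount entirely:
qemu-system-x86_64 -m 1024 \
    -drive file=gluster://vmhost1/cluster1/testvm.qcow2,format=qcow2,if=virtio
```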

Cheers,

Nico
