Re: [Gluster-users] HA with nfs-ganesha and Virtual IP

2017-01-23 Thread Kaleb Keithley
a couple comments in-line

- Original Message -
> From: "David Spisla" 
> 
> 
> 
> Hello,
> 
> 
> 
> I have two ec2-instances with CentOS and I want to create a cluster
> infrastructure with nfs-ganesha, pacemaker and corosync. I read different
> instructions, but at some points I am not really sure how to proceed.
> 
> At the moment my configuration is not running. The system says:
> 
> 
> 
> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted
> pool. Do you still want to continue?
> 
> (y/n) y
> 
> This will take a few minutes to complete. Please wait ..
> 
> nfs-ganesha : success
> 
> 
> 
> But nothing happens. Ganesha is not starting. I think there is a problem with
> my ganesha-ha.conf file.
> 
> 
> 
> 1. What about the Virtual IPs? How can I create them (maybe in /etc/hosts)?

You can't just make them up. In your case you must get them from somewhere in 
AWS so that you don't have a conflict with other AWS users.

("Virtual" is a bad name. They're real IPs, and they are managed by pacemaker.)

> 
> 2. Should I always use the same ganesha.ha.conf file on all nodes, or
> should I change entries? You can see my two ganesha.ha.conf files below

You only need to create the ganesha-ha.conf (note the '-') once, on one host, 
namely the same host where you issue the gluster commands. Gluster will propagate it 
to the other nodes in the cluster.
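
In other words, on that one node, roughly (treat this as a sketch; the exact
command and the config path vary a bit between gluster releases):

# run on ONE node only; gluster copies the config to the other nodes
gluster volume set all cluster.enable-shared-storage enable
cp ganesha-ha.conf /etc/ganesha/ganesha-ha.conf
gluster nfs-ganesha enable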

> 
> 
> 
> First ec2-instance:
> 
> # Name of the HA cluster created.
> 
> # must be unique within the subnet
> 
> HA_NAME="ganesha-ha-360"
> 
> #
> 
> # The gluster server from which to mount the shared data volume.
> 
> HA_VOL_SERVER="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #
> 
> # N.B. you may use short names or long names; you may not use IP addrs.
> 
> # Once you select one, stay with it as it will be mildly unpleasant to
> 
> # clean up if you switch later on. Ensure that all names - short and/or
> 
> # long - are in DNS or /etc/hosts on all machines in the cluster.
> 
> #
> 
> # The subset of nodes of the Gluster Trusted Pool that form the ganesha
> 
> # HA cluster. Hostname is specified.
> 
> HA_CLUSTER_NODES="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
> 
> #
> 
> # Virtual IPs for each of the nodes specified above.
> 
> VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
> 
> VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
> 
> #VIP_server1_lab_redhat_com="10.0.2.1"
> 
> #VIP_server2_lab_redhat_com="10.0.2.2"
> 
> 
> 
> Second ec2-instance:
> 
> # Name of the HA cluster created.
> 
> # must be unique within the subnet
> 
> HA_NAME="ganesha-ha-360"
> 
> #
> 
> # The gluster server from which to mount the shared data volume.
> 
> HA_VOL_SERVER="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #
> 
> # N.B. you may use short names or long names; you may not use IP addrs.
> 
> # Once you select one, stay with it as it will be mildly unpleasant to
> 
> # clean up if you switch later on. Ensure that all names - short and/or
> 
> # long - are in DNS or /etc/hosts on all machines in the cluster.
> 
> #
> 
> # The subset of nodes of the Gluster Trusted Pool that form the ganesha
> 
> # HA cluster. Hostname is specified.
> 
> HA_CLUSTER_NODES="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
> 
> #
> 
> # Virtual IPs for each of the nodes specified above.
> 
> VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
> 
> VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
> 
> #VIP_server1_lab_redhat_com="10.0.2.1"
> 
> #VIP_server2_lab_redhat_com="10.0.2.2"
> 
> 
> 
> Thank you for your attention
> 
> 
> 
> 
> 
> 
> 
> David Spisla
> 
> Software Developer
> 
> david.spi...@iternity.com
> 
> www.iTernity.com
> 
> Tel: +49 761-590 34 841
> 
> 
> 
> 
> 
> 
> 
> iTernity GmbH
> Heinrich-von-Stephan-Str. 21
> 79100 Freiburg – Germany
> ---
> you can reach our technical support at +49 761-387 36 66
> ---
> 
> Managing Director: Ralf Steinemann
> Registered with the district court of Freiburg: HRB no. 701332
> USt.Id de-24266431
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-23 Thread Darrell Budic
Yeah, I don’t know about the disk related crashes, but the move disk -> migrate 
vm before collapsing snapshots is definitely an ovirt related issue in the way 
it handles them.


> On Jan 23, 2017, at 10:46 AM, Kevin Lemonnier  wrote:
> 
> On Mon, Jan 23, 2017 at 10:01:25AM -0600, Darrell Budic wrote:
>> I did a combination of live storage migrations (be sure to delete any 
>> snapshots afterward if using ovirt) and shutdown, migrate, reboot. Anything 
>> that did heavy disk use, particularly writes, was giving me trouble (pauses, 
>> crashes, reboots) with live storage migrations so I just shut down anything 
>> else with heavy disk writes and did them offline. Mysql servers, for example.
>> 
>> Note that if you’re using ovirt, particularly the 3.6 series, I had to migrate 
>> the VM to a new host after moving the disk and before erasing the snapshots. 
>> Otherwise they’d crash when removing the snapshot, something in qemu not 
>> quite right I imagine.
>> 
> 
> All of this sounds weird, I've never had a problem like this. But I'm not 
> using ovirt, maybe it's a problem with that.
> 
> -- 
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Cluster not healing

2017-01-23 Thread James Wilkins
On 23 January 2017 at 20:04, Gambit15  wrote:

> Have you verified that Gluster has marked the files as split-brain?
>

Gluster does not recognise all the files as split-brain - in fact only a
handful are recognised as such - e.g. for the example I pasted, it's not
listed - however the gfid is different (I believe this should be the same?)




>
> gluster volume heal  info split-brain
>
> If you're fairly confident about which files are correct, you can automate
> the split-brain healing procedure.
>
> From the manual...
>
>> volume heal  split-brain bigger-file 
>>   Performs healing of  which is in split-brain by
>> choosing the bigger file in the replica as source.
>>
>> volume heal  split-brain source-brick
>> 
>>   Selects  as the source for all the
>> files that are in split-brain in that replica and heals them.
>>
>> volume heal  split-brain source-brick
>>  
>>   Selects the split-brained  present in
>>  as source and completes heal.
>>
>
> D
>
> On 23 January 2017 at 16:28, James Wilkins  wrote:
>
>> Hello,
>>
>> I have a couple of gluster clusters - setup with distributed/replicated
>> volumes that have starting incrementing the heal-count from statistics -
>> and for some files returning input/output error when attempting to access
>> said files from a fuse mount.
>>
>> If i take one volume, from one cluster as an example:
>>
>> gluster volume heal storage01 statistics info
>> 
>> Brick storage02.:/storage/sdc/brick_storage01
>> Number of entries: 595
>> 
>>
>> And then proceed to look at one of these files (have found 2 copies - one
>> on each server / brick)
>>
>> First brick:
>>
>> # getfattr -m . -d -e hex  /storage/sdc/brick_storage01/
>> projects/183-57c559ea4d60e-canary-test--node02/wordpress285-
>> data/html/wp-content/themes/twentyfourteen/single.php
>> getfattr: Removing leading '/' from absolute path names
>> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canar
>> y-test--node02/wordpress285-data/html/wp-content/themes/
>> twentyfourteen/single.php
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7
>> 573746572645f627269636b5f743a733000
>> trusted.afr.dirty=0x
>> trusted.afr.storage01-client-0=0x00020001
>> trusted.bit-rot.version=0x02005874e2cd459d
>> trusted.gfid=0xda4253be1c2647b7b6ec5c045d61d216
>> trusted.glusterfs.quota.c9764826-596a-4886-9bc0-60ee9b3fce44
>> .contri.1=0x0601
>> trusted.pgfid.c9764826-596a-4886-9bc0-60ee9b3fce44=0x0001
>>
>> Second Brick:
>>
>> # getfattr -m . -d -e hex /storage/sdc/brick_storage01/p
>> rojects/183-57c559ea4d60e-canary-test--node02/wordpress285-
>> data/html/wp-content/themes/twentyfourteen/single.php
>> getfattr: Removing leading '/' from absolute path names
>> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canar
>> y-test--node02/wordpress285-data/html/wp-content/themes/
>> twentyfourteen/single.php
>> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7
>> 573746572645f627269636b5f743a733000
>> trusted.afr.dirty=0x
>> trusted.bit-rot.version=0x020057868423000d6332
>> trusted.gfid=0x14f74b04679345289dbd3290a3665cbc
>> trusted.glusterfs.quota.47e007ee-6f91-4187-81f8-90a393deba2b
>> .contri.1=0x0601
>> trusted.pgfid.47e007ee-6f91-4187-81f8-90a393deba2b=0x0001
>>
>>
>>
>> I can see that only the first brick has the appropriate
>> trusted.afr.* tag - e.g. in this case
>>
>> trusted.afr.storage01-client-0=0x00020001
>>
>> Files are same size under stat - just the access/modify/change dates are
>> different.
>>
>> My first question is - reading https://gluster.readth
>> edocs.io/en/latest/Troubleshooting/split-brain/ this suggests that i
>> should have this field on both copies of the files - or am I mis-reading?
>>
>> Secondly - am I correct that each one of these entries will require
>> manual fixing?  (I have approx 6K files/directories in this state over two
>> clusters - which appears like an awful lot of manual fixing)
>>
>> I've checked gluster volume info  and all appropriate
>> services/self-heal daemon are running.  We've even tried a full heal/heal
>> and iterating over parts of the filesystem in question with find / stat /
>> md5sum.
>>
>> Any input appreciated.
>>
>> Cheers,
>>
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Cluster not healing

2017-01-23 Thread Gambit15
Have you verified that Gluster has marked the files as split-brain?

gluster volume heal  info split-brain

If you're fairly confident about which files are correct, you can automate
the split-brain healing procedure.

From the manual...

> volume heal  split-brain bigger-file 
>   Performs healing of  which is in split-brain by
> choosing the bigger file in the replica as source.
>
> volume heal  split-brain source-brick 
>   Selects  as the source for all the files
> that are in split-brain in that replica and heals them.
>
> volume heal  split-brain source-brick
>  
>   Selects the split-brained  present in
>  as source and completes heal.
>
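
A concrete invocation looks something like this (the volume name, brick and
file path below are only placeholders):

# heal one known file, taking the copy on server1's brick as the source
gluster volume heal myvol split-brain source-brick server1:/bricks/brick1 /dir/file.txt
# or heal every split-brained file in that replica from the same brick
gluster volume heal myvol split-brain source-brick server1:/bricks/brick1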

D

On 23 January 2017 at 16:28, James Wilkins  wrote:

> Hello,
>
> I have a couple of gluster clusters - setup with distributed/replicated
> volumes that have starting incrementing the heal-count from statistics -
> and for some files returning input/output error when attempting to access
> said files from a fuse mount.
>
> If i take one volume, from one cluster as an example:
>
> gluster volume heal storage01 statistics info
> 
> Brick storage02.:/storage/sdc/brick_storage01
> Number of entries: 595
> 
>
> And then proceed to look at one of these files (have found 2 copies - one
> on each server / brick)
>
> First brick:
>
> # getfattr -m . -d -e hex  /storage/sdc/brick_storage01/
> projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-
> content/themes/twentyfourteen/single.php
> getfattr: Removing leading '/' from absolute path names
> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-
> canary-test--node02/wordpress285-data/html/wp-
> content/themes/twentyfourteen/single.php
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.storage01-client-0=0x00020001
> trusted.bit-rot.version=0x02005874e2cd459d
> trusted.gfid=0xda4253be1c2647b7b6ec5c045d61d216
> trusted.glusterfs.quota.c9764826-596a-4886-9bc0-60ee9b3fce44.contri.1=
> 0x0601
> trusted.pgfid.c9764826-596a-4886-9bc0-60ee9b3fce44=0x0001
>
> Second Brick:
>
> # getfattr -m . -d -e hex /storage/sdc/brick_storage01/
> projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-
> content/themes/twentyfourteen/single.php
> getfattr: Removing leading '/' from absolute path names
> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-
> canary-test--node02/wordpress285-data/html/wp-
> content/themes/twentyfourteen/single.php
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.bit-rot.version=0x020057868423000d6332
> trusted.gfid=0x14f74b04679345289dbd3290a3665cbc
> trusted.glusterfs.quota.47e007ee-6f91-4187-81f8-90a393deba2b.contri.1=
> 0x0601
> trusted.pgfid.47e007ee-6f91-4187-81f8-90a393deba2b=0x0001
>
>
>
> I can see that only the first brick has the appropriate trusted.afr.*
> tag - e.g. in this case
>
> trusted.afr.storage01-client-0=0x00020001
>
> Files are same size under stat - just the access/modify/change dates are
> different.
>
> My first question is - reading https://gluster.readthedocs.io/en/latest/
> Troubleshooting/split-brain/ this suggests that i should have this field
> on both copies of the files - or am I mis-reading?
>
> Secondly - am I correct that each one of these entries will require manual
> fixing?  (I have approx 6K files/directories in this state over two
> clusters - which appears like an awful lot of manual fixing)
>
> I've checked gluster volume info  and all appropriate
> services/self-heal daemon are running.  We've even tried a full heal/heal
> and iterating over parts of the filesystem in question with find / stat /
> md5sum.
>
> Any input appreciated.
>
> Cheers,
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS Locking (Very complicated)

2017-01-23 Thread Service Mail
Hello,

I have a cluster with 2 nodes that I want to access using NFS; there is no
firewall between the servers.
I'm using gluster 3.9.0

The volumes mount correctly on a client machine, even though I get a strange
"portmap query retrying: RPC: Program not registered"

[root@nf1 ~]# mount -vvv -t nfs -o nfsvers=3,rw
gluster-nfs-a:/replicated-data /export/replicated-data
mount: fstab path: "/etc/fstab"
.
mount: external mount: argv[5] = "rw,nfsvers=3"
mount.nfs: timeout set for Thu Jan 19 21:51:05 2017
mount.nfs: trying text-based options 'nfsvers=3,addr=10.1.0.1'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 10.4.0.28 prog 13 vers 3 prot TCP port 2049
...
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
...
mount.nfs: prog 15, trying vers=3, prot=6
mount.nfs: trying 10.4.0.28 prog 15 vers 3 prot TCP port 38465
gluster-nfs-a:/replicated-data on /export/replicated-data type nfs
(rw,nfsvers=3)




When I try to lock a file on the client with something like:

flock -x -w 10 100 file (...)
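
(the full form of that test being roughly the following, in bash; the lock
file name is made up)

# open fd 100 on a file on the NFS mount, then try to take an exclusive
# lock on it, waiting at most 10 seconds
exec 100>/export/replicated-data/test.lock
flock -x -w 10 100 && echo "locked" || echo "lock failed"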

the lock doesn't work and I get these lines in the server's nfs.log:

[2017-01-19 20:03:03.621556] I [rpc-clnt.c:1946:rpc_clnt_reconfig]
0-replicated-data-client-2: changing port to 49159 (from 0)
[2017-01-19 20:03:03.626031] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-replicated-data-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-19 20:03:03.626661] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-replicated-data-client-0:
Connected to replicated-data-client-0, attached to remote volume
'/gluster-nfs-zpool/replicated-data'.
[2017-01-19 20:03:03.626693] I [MSGID: 114047]
[client-handshake.c:1233:client_setvolume_cbk] 0-replicated-data-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2017-01-19 20:03:03.626803] I [MSGID: 108005]
[afr-common.c:4429:afr_notify] 0-replicated-data-replicate-0: Subvolume
'replicated-data-client-0' came back up; going online.
[2017-01-19 20:03:03.626998] I [MSGID: 114035]
[client-handshake.c:201:client_set_lk_version_cbk]
0-replicated-data-client-0: Server lk version = 1
[2017-01-19 20:03:03.630498] I [MSGID: 114057]
[client-handshake.c:1446:select_server_supported_programs]
0-replicated-data-client-2: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2017-01-19 20:03:03.631152] I [MSGID: 114046]
[client-handshake.c:1222:client_setvolume_cbk] 0-replicated-data-client-2:
Connected to replicated-data-client-2, attached to remote volume
'/gluster-nfs-zpool/replicated-data'.
[2017-01-19 20:03:03.631180] I [MSGID: 114047]
[client-handshake.c:1233:client_setvolume_cbk] 0-replicated-data-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2017-01-19 20:03:03.631651] I [MSGID: 114035]
[client-handshake.c:201:client_set_lk_version_cbk]
0-replicated-data-client-2: Server lk version = 1
[2017-01-19 20:03:03.632563] I [MSGID: 108031]
[afr-common.c:2178:afr_local_discovery_cbk] 0-replicated-data-replicate-0:
selecting local read_child replicated-data-client-0
[2017-01-19 20:03:29.463006] W [MSGID: 112165]
[nlm4.c:1484:nlm4svc_lock_common] 0-nfs-NLM: NLM in grace period
[2017-01-19 20:03:32.742975] W [MSGID: 112165] [nlm4.c:1804:nlm4svc_unlock]
0-nfs-NLM: NLM in grace period
The message "W [MSGID: 112165] [nlm4.c:1804:nlm4svc_unlock] 0-nfs-NLM: NLM
in grace period" repeated 4 times between [2017-01-19 20:03:32.742975] and
[2017-01-19 20:03:52.742771]
[2017-01-19 20:03:57.743581] W [MSGID: 112057]
[nlm4.c:1744:nlm4_unlock_resume] 0-nfs-NLM: nlm_get_uniq() returned NULL
[No locks available]
[2017-01-19 20:03:57.743652] W [MSGID: 112122]
[nlm4.c:1759:nlm4_unlock_resume] 0-nfs-NLM: unable to unlock_fd_resume
[Operation not permitted]
[2017-01-19 20:05:03.237220] E [MSGID: 112163] [nlm4.c:570:nsm_monitor]
0-nfs-NLM: clnt_call(): RPC: Success
[2017-01-19 20:06:10.566790] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
0-replicated-data-client-2: server 10.1.0.33:49159 has not responded in the
last 42 seconds, disconnecting.
[2017-01-19 20:06:13.380796] I [socket.c:3396:socket_submit_request]
0-replicated-data-client-2: not connected (priv->connected = -1)
[2017-01-19 20:06:13.380830] W [rpc-clnt.c:1639:rpc_clnt_submit]
0-replicated-data-client-2: failed to submit rpc-request (XID: 0x29
Program: GlusterFS 3.3, ProgVers: 330, Proc: 26) to rpc-transport
(replicated-data-client-2)
[2017-01-19 20:06:13.380846] W [MSGID: 114031]
[client-rpc-fops.c:2529:client3_3_lk_cbk] 0-replicated-data-client-2:
remote operation failed [Transport endpoint is not connected]
[2017-01-19 20:06:45.739806] W [rpc-clnt.c:1639:rpc_clnt_submit]
0-replicated-data-client-2: failed to submit rpc-request (XID: 0x2a
Program: GlusterFS 3.3, ProgVers: 330, Proc: 14) to rpc-transport
(replicated-data-client-2)
[2017-01-19 20:07:45.651815] W [MSGID: 114031]
[client-rpc-fops.c:799:client3_3_statfs_cbk] 

Re: [Gluster-users] rebalance and volume commit hash

2017-01-23 Thread Piotr Misiak
Do you think it is wise to run the rebalance process manually on every
brick with the current commit hash value?

I didn't do anything with bricks after previous rebalance run and I have
cluster.weighted-rebalance=off.

My problem is that I have a very big directory structure (millions of
directories and files) and I have never completed the rebalance process
even once, because I guess it would take weeks or months. I'd like to speed
it up a bit by not generating a new commit hash for the volume during a new
rebalance run. Then directories rebalanced in the previous run would be
untouched during the new run. Is that possible?

Thanks


Piotr Misiak
Senior Cloud Engineer
CloudFerro Sp. z o.o.

On 17.01.2017 15:10, Jeff Darcy wrote:
>> I don't understand why a new commit hash is generated for the volume during
>> the rebalance process. I think it should be generated only during add/remove
>> brick events but not during rebalance.
> The mismatch only becomes important during rebalance.  Prior to that, even
> if we've added or removed a brick, the layouts haven't changed and the
> optimization is still as valid as it was before.  If there are multiple
> add/remove operations, we don't need or want to change the hash between
> them.  Conversely, there are cases besides add/remove brick where we might
> want to do a rebalance - e.g. after replace-brick with a brick of a
> different size, or to change between total-space vs. free-space weighting.
> Changing the hash in add/remove brick doesn't handle these cases, but
> changing it at the start of rebalance does.
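
For what it's worth, the optimization discussed above is controlled by a
per-volume option (name as in current 3.x releases; "myvol" is a placeholder),
so it can be checked or toggled by hand:

# see / change whether DHT may skip the "look on every brick" fallback
gluster volume get myvol cluster.lookup-optimize
gluster volume set myvol cluster.lookup-optimize on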


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] rebalance and volume commit hash

2017-01-23 Thread Piotr Misiak

On 17 Jan 2017 at 14:28, Jeff Darcy  wrote:
>
> > Can you tell me please why every volume rebalance generates a new value 
> > for the volume commit hash? 
> > 
> > If I have fully rebalanced cluster (or almost) with millions of 
> > directories then rebalance has to change DHT xattr for every directory 
> > only because there is a new volume commit hash value. It is pointless in 
> > my opinion. Is there any reason behind this? As I observed, the volume 
> > commit hash is set at the rebalance beginning which totally destroys 
> > benefit of lookup optimization algorithm for directories not 
> > scanned/fixed yet by this rebalance run. 
>
> It disables the optimization because the optimization would no longer 
> lead to correct results.  There are plenty of distributed filesystems 
> that seem to have "fast but wrong" as a primary design goal; we're 
> not one of them. 
>
> The best way to think of the volume-commit-hash update is as a kind of 
> cache invalidation.  Lookup optimization is only valid as long as we 
> know that the actual distribution of files within a directory is 
> consistent with the current volume topology.  That ceases to be the 
> case as soon as we add or remove a brick, leaving us with three choices. 
>
> (1) Don't do lookup optimization at all.  *Every* time we fail to find 
> a file on the brick where hashing says it should be, look *everywhere* 
> else.  That's how things used to work, and still work if lookup 
> optimization is disabled.  The drawback is that every add/remove brick 
> operation causes a permanent and irreversible degradation of lookup 
> performance.  Even on a freshly created volume, lookups for files that 
> don't exist anywhere will cause every brick to be queried. 
>
> (2) Mark every directory as "unoptimized" at the very beginning of 
> rebalance.  Besides being almost as slow as fix-layout itself, this 
> would require blocking all lookups and other directory operations 
> *anywhere in the volume* while it completes. 
>
> (3) Change the volume commit hash, effectively marking every 
> directory as unoptimized without actually having to touch every one. 
> The root-directory operation is cheap and almost instantaneous. 
> Checking each directory commit hash isn't free, but it's still a 
> lot better than (1) above.  With upcalls we can enhance this even 
> further. 
>
> Now that you know a bit more about the tradeoffs, do "pointless" 
> and "destroys the benefit" still seem accurate? 
>

Thank you Jeff for your response. I understand this optimisation clearly, but I 
don't understand why a new commit hash is generated for the volume during the 
rebalance process. I think it should be generated only during add/remove-brick 
events, but not during rebalance.

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] HA with nfs-ganesha and Virtual IP

2017-01-23 Thread David Spisla
Hello,

I have two ec2-instances with CentOS and I want to create a cluster 
infrastructure with nfs-ganesha, pacemaker and corosync. I read different 
instructions, but at some points I am not really sure how to proceed.
At the moment my configuration is not running. The system says:

Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted 
pool. Do you still want to continue?
(y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

But nothing happens. Ganesha is not starting. I think there is a problem with my 
ganesha-ha.conf file.


1.  What about the Virtual IPs? How can I create them (maybe in /etc/hosts)?

2.  Should I always use the same ganesha.ha.conf file on all nodes, or should 
I change entries? You can see my two ganesha.ha.conf files below

First ec2-instance:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-ha-360"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
#VIP_server1_lab_redhat_com="10.0.2.1"
#VIP_server2_lab_redhat_com="10.0.2.2"

Second ec2-instance:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-ha-360"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
#VIP_server1_lab_redhat_com="10.0.2.1"
#VIP_server2_lab_redhat_com="10.0.2.2"

Thank you for your attention



David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
you can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered with the district court of Freiburg: HRB no. 701332
USt.Id de-24266431

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Populated Open Source Project for gluster 4.0 ???

2017-01-23 Thread David Spisla
Hello,

does anybody know if there is a populated Open Source Project for
gluster 4.0? If yes, where can I find it?

Thank you!
dspisla

David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
you can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered with the district court of Freiburg: HRB no. 701332
USt.Id de-24266431

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Cluster not healing

2017-01-23 Thread James Wilkins
Hello,

I have a couple of gluster clusters - setup with distributed/replicated
volumes that have starting incrementing the heal-count from statistics -
and for some files returning input/output error when attempting to access
said files from a fuse mount.

If i take one volume, from one cluster as an example:

gluster volume heal storage01 statistics info

Brick storage02.:/storage/sdc/brick_storage01
Number of entries: 595


And then proceed to look at one of these files (have found 2 copies - one
on each server / brick)

First brick:

# getfattr -m . -d -e hex /storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
getfattr: Removing leading '/' from absolute path names
# file:
storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.storage01-client-0=0x00020001
trusted.bit-rot.version=0x02005874e2cd459d
trusted.gfid=0xda4253be1c2647b7b6ec5c045d61d216
trusted.glusterfs.quota.c9764826-596a-4886-9bc0-60ee9b3fce44.contri.1=0x0601
trusted.pgfid.c9764826-596a-4886-9bc0-60ee9b3fce44=0x0001

Second Brick:

# getfattr -m . -d -e hex
/storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
getfattr: Removing leading '/' from absolute path names
# file:
storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.bit-rot.version=0x020057868423000d6332
trusted.gfid=0x14f74b04679345289dbd3290a3665cbc
trusted.glusterfs.quota.47e007ee-6f91-4187-81f8-90a393deba2b.contri.1=0x0601
trusted.pgfid.47e007ee-6f91-4187-81f8-90a393deba2b=0x0001



I can see that only the first brick has the appropriate trusted.afr.*
tag - e.g. in this case

trusted.afr.storage01-client-0=0x00020001

Files are the same size under stat - just the access/modify/change dates are
different.
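
As far as I understand, that hex value packs three big-endian 32-bit
counters - pending data, metadata and entry operations - so it can be
unpacked by hand, e.g. in bash:

x=000000020000000000000001   # the trusted.afr value without the leading 0x
echo "data=$((16#${x:0:8})) metadata=$((16#${x:8:8})) entry=$((16#${x:16:8}))"
# prints: data=2 metadata=0 entry=1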

My first question is - reading
https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/ this
suggests that I should have this field on both copies of the files - or am
I misreading?

Secondly - am I correct that each one of these entries will require manual
fixing?  (I have approx 6K files/directories in this state over two
clusters - which appears like an awful lot of manual fixing)

I've checked gluster volume info  and all appropriate
services/self-heal daemon are running.  We've even tried a full heal/heal
and iterating over parts of the filesystem in question with find / stat /
md5sum.

Any input appreciated.

Cheers,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub give 403

2017-01-23 Thread Cedric Lemarchand
Hello,

It seems there is a permissions problem with
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub:

wget -O /dev/null 
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub
--2017-01-23 19:28:47--  
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub
Resolving download.gluster.org (download.gluster.org)... 23.253.208.221, 
2001:4801:7824:104:be76:4eff:fe10:23d8
Connecting to download.gluster.org 
(download.gluster.org)|23.253.208.221|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-01-23 19:28:48 ERROR 403: Forbidden.

Cheers,

—
Cédric Lemarchand

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-23 Thread Kevin Lemonnier
On Mon, Jan 23, 2017 at 10:01:25AM -0600, Darrell Budic wrote:
> I did a combination of live storage migrations (be sure to delete any 
> snapshots afterward if using ovirt) and shutdown, migrate, reboot. Anything 
> that did heavy disk use, particularly writes, was giving me trouble (pauses, 
> crashes, reboots) with live storage migrations so I just shut down anything 
> else with heavy disk writes and did them offline. Mysql servers, for example.
> 
> Note that if you’re using ovirt, particularly the 3.6 series, I had to migrate 
> the VM to a new host after moving the disk and before erasing the snapshots. 
> Otherwise they’d crash when removing the snapshot, something in qemu not 
> quite right I imagine.
> 

All of this sounds weird, I've never had a problem like this. But I'm not using 
ovirt, maybe it's a problem with that.

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-23 Thread Darrell Budic
I did a combination of live storage migrations (be sure to delete any snapshots 
afterward if using ovirt) and shutdown, migrate, reboot. Anything that did 
heavy disk use, particularly writes, was giving me trouble (pauses, crashes, 
reboots) with live storage migrations so I just shut down anything else with 
heavy disk writes and did them offline. Mysql servers, for example.

Note that if you’re using ovirt, particularly the 3.6 series, I had to migrate the 
VM to a new host after moving the disk and before erasing the snapshots. 
Otherwise they’d crash when removing the snapshot, something in qemu not quite 
right I imagine.

   -Darrell


> On Jan 20, 2017, at 2:49 PM, Kevin Lemonnier  wrote:
> 
>> 
>> I would be interested to hear how you did this while running.  On my 
>> test setup, I have gone through the copy (rename) and it does work but 
>> like you said it took quite awhile.
> 
> I went into my proxmox web interface, selected the disk and clicked "move" :)
> It just uses some qemu command behing the scene so you can do it even
> if you aren't using proxmox though, just need to figure out the syntax.
> 
> -- 
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
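
For the record, done offline and by hand (i.e. without proxmox or ovirt), that
move presumably boils down to a plain qemu-img copy onto the new volume's fuse
mount, something like the following (paths are made up):

# with the VM shut down, copy the image through the fuse mount so it is
# written out as shards on the new volume
qemu-img convert -p -O qcow2 /var/lib/images/vm01.qcow2 \
    /mnt/gluster/newvol/images/vm01.qcow2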

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?

2017-01-23 Thread Amye Scavarda
On Mon, Jan 23, 2017 at 6:02 AM, Olivier Lambert 
wrote:

> That's great! Is there a Twitter or somewhere where I can ping you when we
> are there? Or maybe just checking on your booth? In general, there is some
> empty rooms/lecture halls, so it will be perfect to discuss :)
>
> See you there!
>
>
>
> Olivier.
>
Super! Our stands are in the same group at FOSDEM, so we'll be easy to
find.
Saturday may be better than Sunday due to having the Storage DevRoom, but
happy to watch for you.
- amye



> On Mon, Jan 23, 2017 at 2:11 PM, Kaleb Keithley 
> wrote:
>
>>
>>
>> - Original Message -
>> > From: "Olivier Lambert" 
>> > To: "gluster-users" 
>> > Sent: Monday, January 23, 2017 7:58:27 AM
>> > Subject: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?
>> >
>> > Hi there,
>> >
>> > My team and I will be at FOSDEM on the Xen booth (for the project "Xen
>> > Orchestra"). Is there a way to meet Gluster devs and users there to talk
>> > about it? I have a nice pile of questions, and experience + devs point
>> of
>> > view would be really useful.
>>
>>
>> Absolutely. Several of us will be there. We have a booth and a dev
>> workshop.
>>
>> See you there.
>>
>> --
>>
>> Kaleb
>>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?

2017-01-23 Thread Niels de Vos
On Mon, Jan 23, 2017 at 03:02:21PM +0100, Olivier Lambert wrote:
> That's great! Is there a Twitter or somewhere where I can ping you when we
> are there? Or maybe just checking on your booth? In general, there is some
> empty rooms/lecture halls, so it will be perfect to discuss :)

There is a Software Defined Storage devroom on Sunday. That means many
of us will be distributed between the booth and the devroom. Pass by our
booth on Saturday morning to setup a time/place for a discussion in the
afternoon (in case discussions at the booth are too
uncoordinated/interruptible).

Niels

> 
> See you there!
> 
> 
> 
> Olivier.
> 
> On Mon, Jan 23, 2017 at 2:11 PM, Kaleb Keithley  wrote:
> 
> >
> >
> > - Original Message -
> > > From: "Olivier Lambert" 
> > > To: "gluster-users" 
> > > Sent: Monday, January 23, 2017 7:58:27 AM
> > > Subject: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?
> > >
> > > Hi there,
> > >
> > > My team and I will be at FOSDEM on the Xen booth (for the project "Xen
> > > Orchestra"). Is there a way to meet Gluster devs and users there to talk
> > > about it? I have a nice pile of questions, and experience + devs point of
> > > view would be really useful.
> >
> >
> > Absolutely. Several of us will be there. We have a booth and a dev
> > workshop.
> >
> > See you there.
> >
> > --
> >
> > Kaleb
> >

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



signature.asc
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?

2017-01-23 Thread Kaleb Keithley


- Original Message -
> From: "Olivier Lambert" 
> To: "gluster-users" 
> Sent: Monday, January 23, 2017 7:58:27 AM
> Subject: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?
> 
> Hi there,
> 
> My team and I will be at FOSDEM on the Xen booth (for the project "Xen
> Orchestra"). Is there a way to meet Gluster devs and users there to talk
> about it? I have a nice pile of questions, and experience + devs point of
> view would be really useful.


Absolutely. Several of us will be there. We have a booth and a dev workshop.

See you there.

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?

2017-01-23 Thread Olivier Lambert
Hi there,

My team and I will be at FOSDEM on the Xen booth (for the project "Xen
Orchestra"). Is there a way to meet Gluster devs and users there to talk
about it? I have a nice pile of questions, and experience + devs point of
view would be really useful.



Regards,


Olivier.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs service not detected

2017-01-23 Thread Kaleb Keithley


- Original Message -
> From: "Matthew Ma 馬耀堂 (奧圖碼)" 
> 
> 
> Hi all,
> 
> 
> 
> I have created two gluster-servers and applied following cmd:
> 
> gluster volume create fs-disk replica 2 transport tcp,rdma
> sgnfs-ser1:/mnt/dev/ sgnfs-ser2:/mnt/dev/
> 

It helps to know what version you are using. Starting with 3.8, gluster NFS is 
disabled by default as we begin transitioning to NFS-Ganesha.

Use `gluster volume set $vol nfs.disable false` to re-enable the legacy gNFS.
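
For example, with the volume from the mail below:

gluster volume set fs-disk nfs.disable false
gluster volume status fs-disk   # the "NFS Server on ..." rows should now show Online: Y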


> 
> 
> I touched a file under /mnt/dev on sgnfs-ser2, but there is nothing on sgnfs-ser1.

And as others have already mentioned, don't do that.

> 
> Then I troubleshot with the following commands:
> 
> 
> 
> -
> 
> 
> 
> root@sgnfs-ser1:~# gluster volume status
> 
> Status of volume: fs-disk
> 
> Gluster process Port Online Pid
> 
> --
> 
> Brick sgnfs-ser1:/mnt/dev 49152 Y 12395
> 
> Brick sgnfs-ser2:/mnt/dev 49152 Y 5791
> 
> NFS Server on localhost N/A N N/A
> 
> Self-heal Daemon on localhost N/A Y 12407
> 
> NFS Server on sgnfs-ser2 N/A N N/A
> 
> Self-heal Daemon on sgnfs-ser2 N/A Y 5804
> 
> 
> 
> There are no active volume tasks
> 
> 
> 
> root@sgnfs-ser1:~# gluster peer status
> 
> Number of Peers: 1
> 
> 
> 
> Hostname: sgnfs-ser2
> 
> Uuid: 539bb70a-7819-457d-9dc9-cc07a85c008e
> 
> State: Peer in Cluster (Connected)
> 
> 
> 
> root@sgnfs-ser1:~# /etc/init.d/glusterfs-server status
> 
> glusterfs-server start/running, process 11672
> 
> 
> 
> -
> 
> 
> 
> I recognized that it may be due to the NFS service.
> 
> However, my NFS service is running.
> 
> Is there anything I should configure, or anything I can check?
> 
> 
> 
> Thanks all
> 
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs service not detected

2017-01-23 Thread Lindsay Mathieson

On 23/01/2017 7:48 PM, Matthew Ma 馬耀堂 (奧圖碼) wrote:
I touched a file under /mnt/dev on sgnfs-ser2, but there is nothing on
sgnfs-ser1.


Don't access files directly on the bricks; access them via the gluster fuse
mount, which unifies the access.
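
e.g. something like this (the mount point is arbitrary):

mount -t glusterfs sgnfs-ser1:/fs-disk /mnt/fs-disk
touch /mnt/fs-disk/testfile   # now visible from either server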


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] ZFS + GlusterFS raid5 low read performance

2017-01-23 Thread Lindsay Mathieson

On 23/01/2017 7:45 PM, Yann Maupu wrote:

I updated both the recordsize to 256K on all nodes


ZFS Record size?
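
If so, note that it is set per dataset and only applies to blocks written
after the change, e.g.:

zfs set recordsize=256K tank/gluster-brick   # pool/dataset name is just an example
zfs get recordsize tank/gluster-brick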

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] nfs service not detected

2017-01-23 Thread 奧圖碼
Hi all,

I have created two gluster servers and applied the following command:
gluster volume create fs-disk replica 2 transport tcp,rdma sgnfs-ser1:/mnt/dev/ 
sgnfs-ser2:/mnt/dev/

I touched a file under /mnt/dev on sgnfs-ser2, but there is nothing on sgnfs-ser1.
Then I troubleshot with the following commands:

-

root@sgnfs-ser1:~# gluster volume status
Status of volume: fs-disk
Gluster process PortOnline  Pid
--
Brick sgnfs-ser1:/mnt/dev   49152   Y   12395
Brick sgnfs-ser2:/mnt/dev   49152   Y   5791
NFS Server on localhost N/A N   N/A
Self-heal Daemon on localhost   N/A Y   12407
NFS Server on sgnfs-ser2N/A N   N/A
Self-heal Daemon on sgnfs-ser2  N/A Y   5804

There are no active volume tasks

root@sgnfs-ser1:~# gluster peer status
Number of Peers: 1

Hostname: sgnfs-ser2
Uuid: 539bb70a-7819-457d-9dc9-cc07a85c008e
State: Peer in Cluster (Connected)

root@sgnfs-ser1:~# /etc/init.d/glusterfs-server status
glusterfs-server start/running, process 11672

-

I recognized that it may be due to the NFS service.
However, my NFS service is running.
Is there anything I should configure, or anything I can check?

Thanks all


This e-mail transmission and its attachment are intended only for the use of 
the individual or entity to which it is addressed, and may contain information 
that is privileged, confidential and exempted from disclosure under applicable 
law. If the reader is not the intended recipient, you are hereby notified that 
any disclosure, dissemination, distribution or copying of this communication, 
in part or entirety, is strictly prohibited. If you are not the intended 
recipient for this confidential e-mail, delete it immediately without keeping 
or distributing any copy and notify the sender immediately. The hard copies 
should also be destroyed. Thank you for your cooperation. It is advisable that 
any unauthorized use of confidential information of this Company is strictly 
prohibited; and any information in this email that does not relate to the 
official business of this Company shall be deemed as neither given nor endorsed 
by this Company.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users