Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-16 Thread ML mail
<avish...@redhat.com> wrote: We can check in the brick backend. ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2 regards Aravinda On Thursday 15 September 2016 09:12 PM, ML mail wrote: > So I ran on my master a "find /mybrick -name 'File 2016.xlsx'" and got the "
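
For context, a GFID like the one above can be resolved back to a path from the brick backend; the sketch below assumes a hypothetical brick root of /data/mybrick. For a directory the .glusterfs entry is a symlink pointing at its parent GFID plus basename, while for a regular file it is a hard link:

    # directory GFID: the .glusterfs entry is a symlink; readlink reveals parent-gfid/name
    readlink /data/mybrick/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2
    # regular-file GFID: the entry is a hard link, so locate its siblings by inode
    find /data/mybrick -samefile /data/mybrick/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2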

Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-15 Thread ML mail
e_rename_cbk] 0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx.ocTransferId1333449197.part -> /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1 (Directory not empty) regards Aravinda On Wednesday 14 September 2016 12:49 PM, ML mail wro

Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-14 Thread ML mail
vice_loop] RepceServer: terminating on reaching EOF. [2016-09-13 19:41:13.894532] I [syncdutils(agent):220:finalize] : exiting. [2016-09-13 19:41:14.718497] I [monitor(monitor):343:monitor] Monitor: worker(/data/cloud-pro/brick) died in startup phase On Wednesday, September 14, 2016 8:46 AM, Ar

Re: [Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-14 Thread ML mail
. Regards, ML On Wednesday, September 14, 2016 6:14 AM, Aravinda <avish...@redhat.com> wrote: Please share the logs from the Master node which is Faulty (/var/log/glusterfs/geo-replication/__/*.log) regards Aravinda On Wednesday 14 September 2016 01:10 AM, ML mail wrote: > Hi, > > I

[Gluster-users] geo-rep: -1 (Directory not empty) warning - STATUS Faulty

2016-09-13 Thread ML mail
Hi, I just discovered that one of my replicated glusterfs volumes is not being geo-replicated to my slave node (STATUS Faulty). The log file on the geo-rep slave node indicates an error with a directory which seems not to be empty. Below you will find the full log entry for this problem which

Re: [Gluster-users] Upgrade guide to 3.8 missing

2016-08-10 Thread ML mail
Good point Gandalf! I really don't feel adventurous on a production cluster... On Wednesday, August 10, 2016 2:14 PM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote: On 10 Aug 2016 11:59, "ML mail" <mlnos...@yahoo.com> wrote: > > Hi, >

[Gluster-users] Upgrade guide to 3.8 missing

2016-08-10 Thread ML mail
Hi, The Upgrading to 3.8 guide is missing from: http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/ Regards, ML ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Convert replica 2 to replica 3 (arbiter) volume

2016-08-09 Thread ML mail
Hi, I just finished reading the documentation about arbiter (https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/) and would like to convert my existing replica 2 volumes to replica 3 volumes. How do I proceed? Unfortunately, I did not find any
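
For reference, the conversion that guide describes comes down to a single add-brick call naming the new arbiter brick; the volume, host, and path below are hypothetical:

    gluster volume add-brick myvolume replica 3 arbiter 1 arbiternode:/data/myvolume/brick
    gluster volume heal myvolume info    # watch pending entries drain as the arbiter populates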

Re: [Gluster-users] What is op-version?

2016-08-08 Thread ML mail
rade process op-version is not bumped up automatically. HTH, Atin On Sunday 7 August 2016, ML mail <mlnos...@yahoo.com> wrote: Hi, Can someone explain the op-version that everybody is speaking about on the mailing list? Cheers ML __ _ Glus
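
A minimal sketch of the manual bump Atin refers to; the target number is an assumption and must match the lowest Gluster version actually installed (30712 would correspond to a fully upgraded 3.7.12 cluster):

    # the current cluster op-version is recorded in glusterd's info file
    grep operating-version /var/lib/glusterd/glusterd.info
    # once every node runs the new version, raise it cluster-wide
    gluster volume set all cluster.op-version 30712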

[Gluster-users] What is op-version?

2016-08-07 Thread ML mail
Hi, Can someone explain the op-version that everybody is speaking about on the mailing list? Cheers ML ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] LVM thin provisioning for my geo-rep slave

2016-08-05 Thread ML mail
Hello, I am planning to use snapshots on my geo-rep slave and as such wanted first to ask if the following procedure regarding the LVM thin provisioning is correct: Create physical volume: pvcreate /dev/xvdb Create volume group: vgcreate gfs_vg /dev/xvdb Create thin pool: lvcreate -L 4T -T
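
The preview truncates mid-command; a complete sketch of a typical thin-provisioned layout follows. The pool and LV names (gfs_pool, gfs_lv) and the 3T volume size are assumptions beyond what the post shows:

    pvcreate /dev/xvdb
    vgcreate gfs_vg /dev/xvdb
    lvcreate -L 4T -T gfs_vg/gfs_pool            # thin pool
    lvcreate -V 3T -T gfs_vg/gfs_pool -n gfs_lv  # thin LV carved from the pool
    mkfs.xfs -i size=512 /dev/gfs_vg/gfs_lv      # 512-byte inodes, as recommended for bricks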

[Gluster-users] LVM thin provisioning for my geo-rep slave

2016-08-03 Thread ML mail
Hello, I am planning to use snapshots on my geo-rep slave and as such wanted first to ask if the following procedure regarding the LVM thin provisioning is correct: Create physical volume: pvcreate /dev/xvdb Create volume group: vgcreate gfs_vg /dev/xvdb Create thin pool: lvcreate -L 4T -T

[Gluster-users] net_ratelimit: 3 callbacks suppressed messages on glusterfs FUSE clients

2016-07-31 Thread ML mail
Hi, On my GlusterFS clients using FUSE mount I get a lot of these messages in the kernel log: net_ratelimit: 3 callbacks suppressed Does anyone have a clue why, and how I can avoid the logs getting clogged with these messages? Regards ML ___
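
Those lines come from the kernel's printk rate limiter rather than from Gluster itself; assuming the goal is to see what is actually being dropped, the limiter can be relaxed temporarily:

    # show the current limits
    sysctl net.core.message_cost net.core.message_burst
    # message_cost=0 disables rate limiting, so the suppressed messages appear instead
    sysctl -w net.core.message_cost=0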

Re: [Gluster-users] to RAID or not?

2016-07-04 Thread ML mail
Hi Gandalf Not suggesting really here but just mentioning what I am using: I am using an HBA adapter with 12 disks, so basically JBOD, but I am using ZFS and have an array of 12 disks in RAIDZ2 (sort of RAID6 but ZFS-style). I am pretty happy with that setup so far. Cheers ML On Monday,

Re: [Gluster-users] distributed geo-rep using hostname instead of FQDN of slave node

2016-06-27 Thread ML mail
> of this issue. > And also upload the geo-repliction logs and glusterd logs. We will look into > it. > > Thanks and Regards, > Kotresh H R > > ----- Original Message - >> From: "ML mail" <mlnos...@yahoo.com> >> To: "Gluster-users" <

[Gluster-users] distributed geo-rep using hostname instead of FQDN of slave node

2016-06-25 Thread ML mail
Hi, I just set up distributed geo-replication on my two-node replica master (glusterfs 3.7.11) towards my single-node replica slave and noticed that for some reason it takes the hostname of my slave node instead of the fully qualified domain name (FQDN), even though I have specified the

[Gluster-users] net_ratelimit: 1190 callbacks suppressed on client with FUSE

2016-06-25 Thread ML mail
Hi, On my GlusterFS clients when I do a lot of copying within the GlusterFS volume (mounted as native glusterfs) I get quite a lot of these warnings in the kernel log (Debian 8): [Sat Jun 25 12:18:58 2016] net_ratelimit: 8000 callbacks suppressed [Sat Jun 25 14:39:39 2016] net_ratelimit:

Re: [Gluster-users] Multiple questions regarding monitoring of Gluster

2016-06-22 Thread ML mail
Where's the package for Debian? On Wednesday, June 22, 2016 3:48 PM, "Glomski, Patrick" wrote: If you're not opposed to another dependency, there is a glusterfs-nagios package (python-based) which presents the volumes in a much more useful format for

Re: [Gluster-users] Small files performance

2016-06-22 Thread ML mail
Luciano, how do you enable direct-io-mode? On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta wrote: Hi, I have a similar scenario, for a car classifieds site with millions of small files, mounted with the gluster native client in a replica config. The gluster server
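
To answer the question as asked: direct-io-mode is a FUSE mount option, so enabling it would look like the following (server and volume names are placeholders):

    mount -t glusterfs -o direct-io-mode=enable server1:/myvolume /mnt/myvolume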

[Gluster-users] Gluster volume listening on multiple IP address/networks

2016-06-15 Thread ML mail
Hello In order to avoid losing performance to cross-network latency I would like to have my Gluster volumes available through one IP address on each of my networks/VLANs, so that the gluster client and server are on the same network. My clients mount the volume using the native gluster protocol. So my

[Gluster-users] Gluster volume to be available on two different networks (VLANs)

2016-06-04 Thread ML mail
Hello, I am running GlusterFS 3.7.11 and was wondering what the procedure is if I want my volume to listen on an additional IP address on another network (VLAN)? Is this possible and what would be the procedure? Regards ML ___ Gluster-users mailing

[Gluster-users] Gluster nodes on the same network as clients?

2016-05-30 Thread ML mail
Hello, Should the gluster nodes all be located on the same network or subnet as their clients in order to get the best performance? I am currently using Gluster 3.7.11 with a 2-node replica for cloud storage and mounting on the clients with the native glusterfs protocol (mount -t glusterfs)

Re: [Gluster-users] Self Heal Sync Speed after 3.7.11 and small file performance

2016-05-06 Thread ML mail
Hi, I am also observing bad performance with small files on a GlusterFS 3.7.11 cluster. For example, if I unpack the latest Linux kernel tar file it takes roughly 9 minutes, whereas on my laptop it takes 30 seconds. Maybe there are some parameters on the GlusterFS side which could help to fine

[Gluster-users] 0-dict: dict|match|action is NULL [Invalid argument] warning

2016-04-14 Thread ML mail
Hello, I just upgraded my 2-node replica from GlusterFS 3.7.8 to 3.7.10 on Debian 8 and noticed in the brick log file (/var/log/glusterfs/bricks/myvolume-brick.log) the following warning message each time I copy a file. For example, I just copied a single 110 kB file and got 19 times

[Gluster-users] Warning message in brick log - what does it mean?

2016-04-03 Thread ML mail
Hello, I just upgraded my 2-node replica from GlusterFS 3.7.8 to 3.7.10 on Debian 8 and noticed in the brick log file (/var/log/glusterfs/bricks/myvolume-brick.log) the following warning message each time I copy a file. For example, I just copied a single 110 kB file and got 19 times

Re: [Gluster-users] GlusterFS 3.7.9 released

2016-03-22 Thread ML mail
And a thank you from me too for this release, I am looking forward to a working geo-replication... btw: where can I find the changelog for this release? I always somehow forget where it is located. Regards ML On Tuesday, March 22, 2016 4:19 AM, Vijay Bellur wrote: Hi

Re: [Gluster-users] Broken after 3.7.8 upgrade from 3.7.6

2016-03-06 Thread ML mail
Sorry to jump into this thread but I also noticed the "unable to get index-dir" warning in my gluster self-healing daemon log file since I upgraded to 3.7.8, and I was wondering what I can do to avoid this warning? I think someone asked if he could manually create the "indices/dirty" directory

Re: [Gluster-users] new warning with geo-rep: _GMaster: ENTRY FAILED:

2016-03-02 Thread ML mail
OENT during create happens only when the parent directory does not exist on the Slave or exists with a different GFID. regards Aravinda On 03/01/2016 11:08 PM, ML mail wrote: > Hi, > > I recently updated GlusterFS from 3.7.6 to 3.7.8 on my two-node master > volume (one brick per node) and slav

Re: [Gluster-users] What I noticed while upgrading 3.7.6 to 3.7.8

2016-02-28 Thread ML mail
and after this timeout respond again. By the way, is there a ChangeLog somewhere for 3.7.8? Regards ML On Sunday, February 28, 2016 5:50 PM, Atin Mukherjee <amukh...@redhat.com> wrote: On 02/28/2016 04:48 PM, ML mail wrote: > Hi, > > I just upgraded from 3.7.6 to 3.

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-26 Thread ML mail
2016 5:54 AM, Aravinda <avish...@redhat.com> wrote: regards Aravinda On 02/26/2016 12:30 AM, ML mail wrote: > Hi Aravinda, > > Many thanks for the steps. I have a few questions about it: > > - in your point number 3, can I simply do an "rm -rf > /my/brick/.gluste

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-25 Thread ML mail
to merge Geo-rep patches related to this issue for glusterfs-3.7.9. Geo-rep should clean up these xattrs when the session is deleted; we will work on that fix in future releases. BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1311926 regards Aravinda On 02/24/2016 09:59 PM, ML mail wrote: >

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-24 Thread ML mail
-- Original Message - From: "ML mail" <mlnos...@yahoo.com> To: "Milind Changire" <mchan...@redhat.com> Cc: "Gluster-users" <gluster-users@gluster.org> Sent: Wednesday, February 24, 2016 12:25:26 AM Subject: Re: [Gluster-users] geo-rep: remote o

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-24 Thread ML mail
<avish...@redhat.com> wrote: We can provide workaround steps to resync from the beginning without deleting Volume(s). I will send the Session reset details by tomorrow. regards Aravinda On 02/24/2016 09:08 PM, ML mail wrote: > That's right, I already saw a few error messages mentionin

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-24 Thread ML mail
t path and not the brick back-end path. You should have geo-replication stopped when you are setting the virtual xattr and start it when you are done setting the xattr for the entire directory tree. -- Milind - Original Message - From: "ML mail" <mlnos...@yahoo.com> To: &q

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-23 Thread ML mail
/c/9337/ -- Milind - Original Message ----- From: "ML mail" <mlnos...@yahoo.com> To: "Milind Changire" <mchan...@redhat.com> Cc: "Gluster-users" <gluster-users@gluster.org> Sent: Monday, February 22, 2016 9:10:56 PM Subject: Re: [Gluster-users] ge

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-22 Thread ML mail
which will avoid geo-replication going into a Faulty state. -- Milind - Original Message - From: "ML mail" <mlnos...@yahoo.com> To: "Milind Changire" <mchan...@redhat.com>, "Gluster-users" <gluster-users@gluster.org> Sent: Monday, February 22

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-21 Thread ML mail
Hi Milind, Any news on this issue? I was wondering how I can fix and restart my geo-replication? Can I simply delete the problematic file(s) on my slave and restart geo-rep? Regards ML On Wednesday, February 17, 2016 4:30 PM, ML mail <mlnos...@yahoo.com> wrote: Hi Milind, Tha

Re: [Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-17 Thread ML mail
luster? CREATE f1.part RENAME f1.part f1 DELETE f1 CREATE f1.part RENAME f1.part f1 ... ... If not, then it would help if you could send the sequence of file management operations. -- Milind - Original Message - From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>

[Gluster-users] geo-rep: remote operation failed - No such file or directory

2016-02-15 Thread ML mail
Hello, I noticed that the geo-replication of a volume has STATUS "Faulty" and while looking in the *.gluster.log file in /var/log/glusterfs/geo-replication-slaves/ on my slave I can see the following relevant problem: [2016-02-15 10:58:40.402516] I [rpc-clnt.c:1847:rpc_clnt_reconfig]

Re: [Gluster-users] Upgrade procedure from Gluster 3.7.6 to 3.7.8

2016-02-15 Thread ML mail
cation. Since 3.7.8 was released early due to some issues with 3.7.7, we couldn't get the following Geo-rep patches into the release, as discussed in previous mails. http://review.gluster.org/#/c/13316/ http://review.gluster.org/#/c/13189/ Thanks regards Aravinda On 02/12/2016 01:38 AM, ML mail

[Gluster-users] Upgrade procedure from Gluster 3.7.6 to 3.7.8

2016-02-11 Thread ML mail
Hello, I would like to upgrade my Gluster 3.7.6 installation to Gluster 3.7.8 and have put together the procedure below. Can anyone check it and let me know if it is correct or if I am missing anything? Note here that I am using Debian 8 and the Debian packages from Gluster's APT repository. I
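
As a rough sketch only, not an official procedure: with the gluster.org APT packages on Debian 8, a rolling upgrade is usually done one node at a time along these lines (volume name hypothetical):

    service glusterfs-server stop
    killall glusterfs glusterfsd        # stop any remaining client/brick processes
    apt-get update && apt-get install glusterfs-server glusterfs-client glusterfs-common
    service glusterfs-server start
    gluster volume heal myvolume info   # wait for pending heals before touching the next node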

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-04 Thread ML mail
02/03/2016 08:09 PM, ML mail wrote: > Dear Aravinda, > > Thank you for the analysis and for submitting a patch for this issue. I hope it > can make it into the next GlusterFS release 3.7.7. > > > As suggested I ran the find_gfid_issues.py script on my brick on the two >

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-03 Thread ML mail
316/ http://review.gluster.org/#/c/13189/ The following script can be used to find problematic files in each Brick backend. https://gist.github.com/aravindavk/29f673f13c2f8963447e regards Aravinda On 02/01/2016 08:45 PM, ML mail wrote: > Sure, I will just send it to you through an encrypted cloud storage app and

[Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Hello, I just set up distributed geo-replication to a slave on my 2-node replicated volume and noticed quite a few error messages (around 70 of them) in the slave's brick log file. The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log [2016-01-31 22:19:29.524370] E

Re: [Gluster-users] posix_acl_default [Invalid argument] issue with distributed geo-rep

2016-02-01 Thread ML mail
Hi Jiffin, Thanks for fixing that; I will be looking forward to this patch so that my log files don't get so cluttered up ;) Regards ML On Monday, February 1, 2016 6:54 AM, Jiffin Tony Thottan <jthot...@redhat.com> wrote: On 31/01/16 23:25, ML mail wrote: > Hello, > >

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
e: Hi, On 02/01/2016 02:14 PM, ML mail wrote: > Hello, > > I just set up distributed geo-replication to a slave on my 2-node > replicated volume and noticed quite a few error messages (around 70 of them) > in the slave's brick log file: > > The exact log file is: /var/l

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Arumugam <sarum...@redhat.com> wrote: Hi, On 02/01/2016 02:14 PM, ML mail wrote: > Hello, > > I just set up distributed geo-replication to a slave on my 2-node > replicated volume and noticed quite a few error messages (around 70 of them) > in the slave's brick log file: >

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Sure, I will just send it to you through an encrypted cloud storage app and send you the password via private mail. Regards ML On Monday, February 1, 2016 3:14 PM, Saravanakumar Arumugam <sarum...@redhat.com> wrote: On 02/01/2016 07:22 PM, ML mail wrote: > I just found out I need

[Gluster-users] posix_acl_default [Invalid argument] issue with distributed geo-rep

2016-01-31 Thread ML mail
Hello, I just set up distributed geo-replication to a slave on my 2-node replicated volume and so far it works, but every 60 seconds I see the following message in the slave's geo-replication-slaves gluster log file: [2016-01-31 17:38:48.027792] I [dict.c:473:dict_get]

Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-22 Thread ML mail
". Is this normal??? To sum up, I've got geo-replication set up but it's quite patchy and messy and does not run under the special replication user I wanted it to run under. On Monday, September 21, 2015 8:07 AM, Saravanakumar Arumugam <sarum...@redhat.com> wrote: Replies inline. O

Re: [Gluster-users] problems with geo-replication on 3.7.4

2015-09-19 Thread ML mail
9/2015 03:03 AM, ML mail wrote: > Hello, > > I am trying in vain to set up geo-replication, now on version 3.7.4 of > GlusterFS, but it still does not seem to work. I have at least managed to run > the georepsetup successfully using the following command: > > > georepset

[Gluster-users] problems with geo-replication on 3.7.4

2015-09-18 Thread ML mail
Hello, I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS, but it still does not seem to work. I have at least managed to run the georepsetup successfully using the following command: georepsetup reptest gfsgeo@gfs1geo reptest But as soon as I run: gluster volume

Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-14 Thread ML mail
On 09/13/2015 09:46 PM, ML mail wrote: > Hello, > > I am using the following documentation in order to set up geo-replication > between two sites: > http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html > > Unfortunately the step: > > glus

Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-14 Thread ML mail
/blob/master/README.md Thanks, Saravana On 09/13/2015 09:46 PM, ML mail wrote: > Hello, > > I am using the following documentation in order to set up geo-replication > between two sites: > http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html > > Unfortuna

[Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-13 Thread ML mail
Hello, I am using the following documentation in order to set up geo-replication between two sites: http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html Unfortunately the step: gluster volume geo-replication myvolume gfs...@gfs1geo.domain.com::myvolume create push-pem
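
For orientation, the failing step normally sits inside the following sequence; the user and host below (geoaccount, slavehost) are placeholders, not the poster's actual names:

    gluster system:: execute gsec_create    # generate the pem keys on the master
    gluster volume geo-replication myvolume geoaccount@slavehost::myvolume create push-pem
    gluster volume geo-replication myvolume geoaccount@slavehost::myvolume start
    gluster volume geo-replication myvolume geoaccount@slavehost::myvolume status

Note that a non-root geo-rep user additionally requires mountbroker configuration on the slave, which is what much of the linked walkthrough deals with.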

Re: [Gluster-users] Configure separate network for inter-node communication

2015-03-09 Thread ML mail
Thanks Jeff for this blog post, looking forward to NSR and its chain replication! On Monday, March 9, 2015 1:00 PM, Jeff Darcy jda...@redhat.com wrote: I would be very interested to read your blog post as soon as it's out and I guess many others too. Please do post the link to this list as

[Gluster-users] geo-replication create push-pem uses wrong gsyncd path on slave cluster's SSH authorized_keys

2015-03-07 Thread ML mail
Hello, I am setting up geo replication on Debian wheezy using the official 3.5.3 GlusterFS packages and noticed that when creating the geo-replication session using the command: gluster volume geo-replication myvol slavecluster::myvol create push-pem force the authorized_keys SSH file

Re: [Gluster-users] Geo replication on slave not showing files in brick

2015-03-06 Thread ML mail
, ML mail mlnos...@yahoo.com wrote: Hello, I just set up geo replication from a 2-node master cluster to a 1-node slave cluster and so far it worked well. I just have one issue: on my slave, if I check the files on my brick, I just see the following: drwxr-xr-x 2 root root 15 Mar 5 23:13 .gfid drw

[Gluster-users] Geo replication on slave not showing files in brick

2015-03-06 Thread ML mail
Hello, I just set up geo replication from a 2-node master cluster to a 1-node slave cluster and so far it worked well. I just have one issue: on my slave, if I check the files on my brick, I just see the following: drwxr-xr-x  2 root root 15 Mar  5 23:13 .gfid drw--- 20 root root 21 Mar  5 23:13

Re: [Gluster-users] Configure separate network for inter-node communication

2015-03-05 Thread ML mail
Thank you for the detailed explanation. Since splitting the traffic does not make much difference right now, I will refrain from doing it and simply wait for the new-style replication. This looks like a very promising feature and I am looking forward to it. My other concern

[Gluster-users] Configure separate network for inter-node communication

2015-03-04 Thread ML mail
Hello, I have two gluster nodes in a replicated setup and have connected the two nodes together directly through a 10 Gbit/s crossover cable. Now I would like to tell gluster to use this separate private network for any communication between the two nodes. Does that make sense? Will this

Re: [Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-03 Thread ML mail
, March 3, 2015 12:57 PM, Claudio Kuenzler c...@claudiokuenzler.com wrote: Can you resolve the other gluster peers with dig? Are you able to ping the other peers, too? On Tue, Mar 3, 2015 at 12:38 PM, ML mail mlnos...@yahoo.com wrote: Well the weird thing is that my DNS resolver servers

Re: [Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-03 Thread ML mail
fine if it was launched manually, did I understand that right? It's only the automatic startup at boot which causes the lookup failure? On Tue, Mar 3, 2015 at 2:54 PM, ML mail mlnos...@yahoo.com wrote: Thanks for the tip but Debian wheezy does not use systemd at all, it's still old sysV style

Re: [Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-03 Thread ML mail
. On Tue, Mar 3, 2015 at 1:56 PM, ML mail mlnos...@yahoo.com wrote: Yes, dig and ping work fine. I used first the short hostname gfs1 and then I also tried gfs1.intra.domain.com. That did not change anything. Currently for testing I only have a single node setup so my gluster peer status output

Re: [Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-03 Thread ML mail
this for the future? On Tuesday, March 3, 2015 12:57 PM, Claudio Kuenzler c...@claudiokuenzler.com wrote: Can you resolve the other gluster peers with dig? Are you able to ping the other peers, too? On Tue, Mar 3, 2015 at 12:38 PM, ML mail mlnos...@yahoo.com wrote: Well the weird thing

Re: [Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-03 Thread ML mail
cluster nodes MUST resolve each other through DNS (preferred) or /etc/hosts. An entry in /etc/hosts is probably even safer because you don't depend on external DNS resolvers. cheers, ck On Tue, Mar 3, 2015 at 8:43 AM, ML mail mlnos...@yahoo.com wrote: Hello, Is it required to have
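
Concretely, the safety net described here is just a static entry for every peer on every node; the addresses below are illustrative (192.0.2.0/24 is a documentation range) and the hostnames follow the ones used in this thread:

    192.0.2.11   gfs1.intra.domain.com   gfs1
    192.0.2.12   gfs2.intra.domain.com   gfs2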

[Gluster-users] /etc/hosts entry required for gluster servers?

2015-03-02 Thread ML mail
Hello, Is it required to have the GlusterFS servers in /etc/hosts for the gluster servers themselves? I read many tutorials where people always add an entry in their /etc/hosts file. I am asking because my issue is that my volumes, or more precisely glusterfsd, are not starting at system

Re: [Gluster-users] Max recommended brick size of 100 TB

2015-02-23 Thread ML mail
, 08:47 +, ML mail wrote: Just saw that my post below never got a reply and would be very glad if someone, maybe Niels, could comment on this. Cheers! On Saturday, February 7, 2015 10:13 PM, ML mail mlnos...@yahoo.com wrote: Thank you Niels for your input, that definitely makes me

Re: [Gluster-users] Max recommended brick size of 100 TB

2015-02-23 Thread ML mail
Just saw that my post below never got a reply and I would be very glad if someone, maybe Niels, could comment on this. Cheers! On Saturday, February 7, 2015 10:13 PM, ML mail mlnos...@yahoo.com wrote: Thank you Niels for your input, that definitely makes me more curious... Now let me tell you

Re: [Gluster-users] [Gluster-devel] High CPU Usage - Glusterfsd

2015-02-22 Thread ML mail
Dear Ben, Very interesting answer of yours on how to find out where the bottleneck is. These commands and parameters (iostat, sar) should maybe be documented on the Gluster wiki. I have a question for you: in order to better use my CPU cores (6 cores per node), I was wondering if I should
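
The tools mentioned are part of the sysstat toolbox; a minimal sampling session while reproducing the load might look like:

    iostat -xm 5    # per-disk utilization (%util) and MB/s every 5 seconds
    sar -n DEV 5    # per-interface network throughput every 5 seconds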

Re: [Gluster-users] Gluster performance on the small files

2015-02-13 Thread ML mail
For those interested, here are the results of my tests using Gluster 3.5.2. Nothing much better here either... shell$ dd bs=64k count=4k if=/dev/zero of=test oflag=dsync 4096+0 records in 4096+0 records out 268435456 bytes (268 MB) copied, 51.9808 s, 5.2 MB/s shell$ dd bs=64k count=4k

[Gluster-users] performance flush-behind dangerous?

2015-02-12 Thread ML mail
Hi, I was wondering if turning on the performance.flush-behind option is dangerous in terms of data integrity? Reading the documentation, it seems to me that I could benefit from it, especially with a lot of small files, but I would like to stay on the safe side. So if anyone could tell me
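
The option itself is an ordinary per-volume toggle; a sketch with a hypothetical volume name:

    gluster volume set myvolume performance.flush-behind on
    gluster volume info myvolume   # the change shows up under "Options Reconfigured"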

Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-12 Thread ML mail
: On 02/12/2015 01:17 PM, ML mail wrote: Dear Pranith, I would be interested to know what the cluster.ensure-durability off option does; could you explain or point to the documentation? By default the replication translator does fsyncs on the files at certain times so that it doesn't lose data
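
For the record, the knob Pranith explains here is set like any other volume option (volume name hypothetical); turning it off trades those safety fsyncs for speed:

    gluster volume set myvolume cluster.ensure-durability off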

Re: [Gluster-users] Performance loss from 3.4.2 to 3.6.2

2015-02-11 Thread ML mail
Dear Pranith, I would be interested to know what the cluster.ensure-durability off option does; could you explain or point me to the documentation? Regards ML On Thursday, February 12, 2015 8:24 AM, Pranith Kumar Karampuri pkara...@redhat.com wrote: On 02/12/2015 04:37 AM, Nico

Re: [Gluster-users] 2 Node glusterfs quorum help

2015-02-09 Thread ML mail
This seems to be a workaround; isn't there a proper way to achieve this through the volume configuration? I would not like to have to set up a third, fake server just to avoid that. On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan kaam...@novocraft.com wrote:

Re: [Gluster-users] Max recommended brick size of 100 TB

2015-02-07 Thread ML mail
performance gain? For example, in terms of MB/s throughput? Also, are there any disadvantages to running two bricks on the same node, especially in my case? On Saturday, February 7, 2015 10:24 AM, Niels de Vos nde...@redhat.com wrote: On Fri, Feb 06, 2015 at 05:06:38PM +, ML mail wrote: Hello

[Gluster-users] Max recommended brick size of 100 TB

2015-02-06 Thread ML mail
Hello, I read in the Gluster Getting Started leaflet (https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf) that the max recommended brick size should be 100 TB. Once my storage server nodes are filled up with disks they will have 192 TB of storage space in total; does this

Re: [Gluster-users] glusterfsd not starting at boot with Debian 7

2015-02-05 Thread ML mail
AM, ML mail wrote: Hi, I have installed Gluster 3.5.3 on Debian 7 and have a single test volume right now. Unfortunately, after a reboot this volume does not get started automatically: the glusterfsd process for that volume is nonexistent although the glusterd process is running

Re: [Gluster-users] GlusterFS with FUSE slow vs ZFS volume

2015-02-05 Thread ML mail
Yes, I have activated the SA xattr for my ZFS volume that I use for GlusterFS. On Thursday, February 5, 2015 12:22 PM, Vijay Bellur vbel...@redhat.com wrote: On 02/02/2015 08:26 PM, ML mail wrote: Is ZFS using SA based extended attributes here? Since GlusterFS makes use of extended

[Gluster-users] glusterfsd not starting at boot with Debian 7

2015-02-04 Thread ML mail
Hi, I have installed Gluster 3.5.3 on Debian 7 and have a single test volume right now. Unfortunately, after a reboot this volume does not get started automatically: the glusterfsd process for that volume is nonexistent although the glusterd process is running. After a boot running

[Gluster-users] How to convert from 2 node replicated to 4 node distributed-replicated

2015-02-04 Thread ML mail
Hi, Is it possible to convert a 2-node replicated volume to a 4-node distributed-replicated volume? If yes, is it as simple as just issuing the add-brick with the two additional nodes and then starting a rebalance? And can this be repeated ad infinitum? Let's say I want to add another 2
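
A sketch of exactly that, with hypothetical hosts node3/node4; bricks have to be added in multiples of the replica count, after which the volume type becomes distributed-replicate:

    gluster volume add-brick myvolume node3:/data/brick node4:/data/brick
    gluster volume rebalance myvolume start
    gluster volume rebalance myvolume status   # wait for completion

The same add-brick/rebalance cycle can indeed be repeated for each further pair of nodes.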

[Gluster-users] Guideline for Gluster hardware

2015-02-04 Thread ML mail
Hello, I am currently testing GlusterFS and could not find any guidelines or even rules of thumb on the minimal hardware requirements for a bare-metal node. My setup would be to start with two Gluster nodes using replication for HA. For that I have two 4U SuperMicro storage servers

[Gluster-users] GlusterFS with FUSE slow vs ZFS volume

2015-02-02 Thread ML mail
Hello, I am testing GlusterFS for the first time and have installed the latest GlusterFS 3.5 stable version on Debian 7 on brand-new SuperMicro hardware with ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 with 6 SATA disks of 2 TB each. After setting up a first, single test brick