;avish...@redhat.com> wrote:
We can check in brick backend.
ls -ld $BRICK_ROOT/.glusterfs/f7/eb/f7eb9d21-d39a-4dd6-941c-46d430e18aa2
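The backend path is just the GFID split into two 2-character prefixes, so in
general something like the following should work (GFID taken from the log entry
above, $BRICK_ROOT being your brick path):
# derive the .glusterfs backend path from a GFID
GFID=f7eb9d21-d39a-4dd6-941c-46d430e18aa2
ls -ld "$BRICK_ROOT/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"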
regards
Aravinda
On Thursday 15 September 2016 09:12 PM, ML mail wrote:
> So I ran on my master a "find /mybrick -name 'File 2016.xlsx'" and got the
e_rename_cbk]
0-glusterfs-fuse: 25: /.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File
2016.xlsx.ocTransferId1333449197.part ->
/.gfid/f7eb9d21-d39a-4dd6-941c-46d430e18aa2/File 2016.xlsx => -1
(Directory not empty)
regards
Aravinda
On Wednesday 14 September 2016 12:49 PM, ML mail wro
vice_loop] RepceServer:
terminating on reaching EOF.
[2016-09-13 19:41:13.894532] I [syncdutils(agent):220:finalize] : exiting.
[2016-09-13 19:41:14.718497] I [monitor(monitor):343:monitor] Monitor:
worker(/data/cloud-pro/brick) died in startup phase
On Wednesday, September 14, 2016 8:46 AM, Ar
.
Regards,
ML
On Wednesday, September 14, 2016 6:14 AM, Aravinda <avish...@redhat.com> wrote:
Please share the logs from the Master node which is
Faulty (/var/log/glusterfs/geo-replication/__/*.log).
regards
Aravinda
On Wednesday 14 September 2016 01:10 AM, ML mail wrote:
> Hi,
>
> I
Hi,
I just discovered that one of my replicated glusterfs volumes is not being
geo-replicated to my slave node (STATUS Faulty). The log file on the geo-rep
slave node indicates an error with a directory which seems not to be empty.
Below you will find the full log entry for this problem which
Good point Gandalf! I really don't feel adventurous on a production cluster...
On Wednesday, August 10, 2016 2:14 PM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
On 10 Aug 2016 at 11:59, "ML mail" <mlnos...@yahoo.com> wrote:
>
> Hi,
Hi,
The Upgrading to 3.8 guide is missing from:
http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
Regards,
ML
Hi,
I just finished reading the documentation about arbiter
(https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/)
and would like to convert my existing replica 2 volumes to replica 3 volumes.
How do I proceed? Unfortunately, I did not find any
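From what I gather the conversion is supposed to be a single add-brick along
these lines, with hostname and brick path below only as placeholders (and I am
not sure which release first supports converting an existing volume this way):
gluster volume add-brick myvolume replica 3 arbiter 1 arbiternode:/data/myvolume/brick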
rade process
op-version is not bumped up automatically.
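For example (the exact value depends on the release you are running; the number
below is only an example):
# current op-version of this node
grep operating-version /var/lib/glusterd/glusterd.info
# bump the cluster op-version once all nodes are upgraded
gluster volume set all cluster.op-version 30712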
HTH, Atin
On Sunday 7 August 2016, ML mail <mlnos...@yahoo.com> wrote:
Hi,
Can someone explain the op-version that everybody is talking about on the
mailing list?
Cheers
ML
Hi,
Can someone explain the op-version that everybody is talking about on the
mailing list?
Cheers
ML
Hello,
I am planning to use snapshots on my geo-rep slave and so wanted to first ask
whether the following procedure for LVM thin provisioning is correct:
Create physical volume:
pvcreate /dev/xvdb
Create volume group:
vgcreate gfs_vg /dev/xvdb
Create thin pool:
lvcreate -L 4T -T
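For reference, the rest of the sequence would presumably look something like
this (pool/LV names and sizes below are only placeholders):
# create the thin pool inside the volume group
lvcreate -L 4T -T gfs_vg/gfs_thinpool
# create a thin logical volume inside that pool
lvcreate -V 4T -T gfs_vg/gfs_thinpool -n gfs_lv
# format it with an XFS inode size suitable for extended attributes
mkfs.xfs -i size=512 /dev/gfs_vg/gfs_lv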
Hi
On my GlusterFS clients using FUSE mount I get a lot of these messages in the
kernel log:
net_ratelimit: 3 callbacks suppressed
Does anyone have a clue why this happens and how I can keep the logs from
getting clogged with these messages?
Regards
ML
Hi Gandalf
Not really suggesting anything here, just mentioning what I am using: an HBA
adapter with 12 disks, so basically JBOD, but with ZFS on top and the 12 disks
in a RAIDZ2 array (sort of RAID6, ZFS-style). I am pretty happy with that setup
so far.
Cheers, ML
On Monday,
> of this issue.
> And also upload the geo-repliction logs and glusterd logs. We will look into
> it.
>
> Thanks and Regards,
> Kotresh H R
>
> ----- Original Message -
>> From: "ML mail" <mlnos...@yahoo.com>
>> To: "Gluster-users" <
Hi,
I just set up distributed geo-replication from my two-node replica master
(GlusterFS 3.7.11) towards my single-node replica slave and noticed that for
some reason it uses the hostname of my slave node instead of the fully qualified
domain name (FQDN), and this although I have specified the
Hi,
On my GlusterFS clients, when I do a lot of copying within the GlusterFS volume
(mounted as native glusterfs), I get quite a lot of these warnings in the kernel
log (Debian 8):
[Sat Jun 25 12:18:58 2016] net_ratelimit: 8000 callbacks suppressed
[Sat Jun 25 14:39:39 2016] net_ratelimit:
Where's the package for Debian?
On Wednesday, June 22, 2016 3:48 PM, "Glomski, Patrick"
wrote:
If you're not opposed to another dependency, there is a glusterfs-nagios
package (python-based) which presents the volumes in a much more useful format
for
Luciano, how do you enable direct-io-mode?
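I assume it is the FUSE mount option, i.e. something along these lines (server,
volume and mount point being placeholders), but it would be good to have that
confirmed:
mount -t glusterfs -o direct-io-mode=enable server:/myvolume /mnt/myvolume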
On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta
wrote:
Hi,
I have a similar scenario, a car classifieds site with millions of small files,
mounted with the gluster native client in a replica config.
The gluster server
Hello
In order to avoid losing performance to latency I would like to have my Gluster
volumes available through one IP address on each of my networks/VLANs, so that
the gluster client and server are on the same network. My clients mount the
volume using the native gluster protocol.
So my
Hello,
I am running GlusterFS 3.7.11 and was wondering whether my volume can listen on
an additional IP address on another network (VLAN). Is this possible, and what
would be the procedure?
Regards, ML
Hello,
Should the gluster nodes all be located on the same network or subnet as their
clients in order to get the best performance?
I am currently using Gluster 3.7.11 with a 2-node replica for cloud storage
and mounting on the clients with the native glusterfs protocol (mount -t
glusterfs)
Hi,
I am also observing bad performance with small files on a GlusterFS 3.7.11
cluster. For example if I unpack the latest Linux kernel tar file it takes
roughly 9 minutes whereas on my laptop it takes 30 seconds.
Maybe there are some parameters on the GlusterFS side which could help to fine
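Options that seem to come up regularly on this list (volume name and values
below are only examples to experiment with, not recommendations from this
thread):
gluster volume set myvolume client.event-threads 4
gluster volume set myvolume server.event-threads 4
gluster volume set myvolume performance.cache-size 256MB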
Hello,
I just upgraded my 2-node replica from GlusterFS 3.7.8 to 3.7.10 on Debian 8
and noticed in the brick log file
(/var/log/glusterfs/bricks/myvolume-brick.log) the following warning message
each time I copy a file. For example I just copied one single 110 kBytes file
and got 19 times
And a thank you from me too for this release, I am looking forward to a working
geo-replication...
btw: where can I find the changelog for this release? I always somehow forget
where it is located.
Regards
ML
On Tuesday, March 22, 2016 4:19 AM, Vijay Bellur wrote:
Hi
Sorry to jump into this thread, but I also noticed the "unable to get index-dir"
warning in my gluster self-healing daemon log file since I upgraded to 3.7.8,
and I was wondering what I can do to avoid this warning. I think someone asked
if he could manually create the "indices/dirty" directory
ENOENT during create happens only when the parent directory does not exist
on the Slave or exists with a different GFID.
regards
Aravinda
On 03/01/2016 11:08 PM, ML mail wrote:
> Hi,
>
> I recently updated GlusterFS from 3.7.6 to 3.7.8 on my two nodes master
> volume (one brick per node) and slav
and after this timeout
respond again.
By the way is there a ChangeLog somewhere for 3.7.8?
Regards
ML
On Sunday, February 28, 2016 5:50 PM, Atin Mukherjee <amukh...@redhat.com>
wrote:
On 02/28/2016 04:48 PM, ML mail wrote:
> Hi,
>
> I just upgraded from 3.7.6 to 3.
2016 5:54 AM, Aravinda <avish...@redhat.com> wrote:
regards
Aravinda
On 02/26/2016 12:30 AM, ML mail wrote:
> Hi Aravinda,
>
> Many thanks for the steps. I have a few questions about it:
>
> - in your point number 3, can I simply do an "rm -rf
> /my/brick/.gluste
to merge Geo-rep patches related to
this issue for glusterfs-3.7.9
Geo-rep should clean up these xattrs when the session is deleted; we will work
on that fix in future releases.
BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1311926
regards
Aravinda
On 02/24/2016 09:59 PM, ML mail wrote:
>
-- Original Message -
From: "ML mail" <mlnos...@yahoo.com>
To: "Milind Changire" <mchan...@redhat.com>
Cc: "Gluster-users" <gluster-users@gluster.org>
Sent: Wednesday, February 24, 2016 12:25:26 AM
Subject: Re: [Gluster-users] geo-rep: remote o
<avish...@redhat.com> wrote:
We can provide workaround steps to resync from the beginning without deleting
the volume(s).
I will send the session reset details by tomorrow.
regards
Aravinda
On 02/24/2016 09:08 PM, ML mail wrote:
> That's right, I already saw a few error messages mentionin
t path and not the brick back-end path.
You should have geo-replication stopped when you are
setting the virtual xattr and start it when you are
done setting the xattr for the entire directory tree.
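For example (assuming the usual trigger-sync virtual xattr; the path below is
only an example and must be on a client mount of the master volume, not on the
brick):
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/myvolume/some/dir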
--
Milind
- Original Message -
From: "ML mail" <mlnos...@yahoo.com>
To: &q
/c/9337/
--
Milind
- Original Message -----
From: "ML mail" <mlnos...@yahoo.com>
To: "Milind Changire" <mchan...@redhat.com>
Cc: "Gluster-users" <gluster-users@gluster.org>
Sent: Monday, February 22, 2016 9:10:56 PM
Subject: Re: [Gluster-users] ge
which will avoid geo-replication
going into a Faulty state.
--
Milind
- Original Message -
From: "ML mail" <mlnos...@yahoo.com>
To: "Milind Changire" <mchan...@redhat.com>, "Gluster-users"
<gluster-users@gluster.org>
Sent: Monday, February 22
Hi Milind,
Any news on this issue? I was wondering how I can fix and restart my
geo-replication. Can I simply delete the problematic file(s) on my slave and
restart geo-rep?
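I assume restarting would be the usual stop/start, i.e. something like the
following (slave host and volume names being placeholders):
gluster volume geo-replication myvolume slavehost::myvolume stop
gluster volume geo-replication myvolume slavehost::myvolume start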
Regards
ML
On Wednesday, February 17, 2016 4:30 PM, ML mail <mlnos...@yahoo.com> wrote:
Hi Milind,
Tha
luster?
CREATE f1.part
RENAME f1.part f1
DELETE f1
CREATE f1.part
RENAME f1.part f1
...
...
If not, then it would help if you could send the sequence
of file management operations.
--
Milind
- Original Message -
From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
Hello,
I noticed that the geo-replication of a volume has STATUS "Faulty" and while
looking in the *.gluster.log file in /var/log/glusterfs/geo-replication-slaves/
on my slave I can see the following relevant problem:
[2016-02-15 10:58:40.402516] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
cation.
Since 3.7.8 was released early due to some issues with 3.7.7, we couldn't
get the following Geo-rep patches into the release as discussed in
previous mails.
http://review.gluster.org/#/c/13316/
http://review.gluster.org/#/c/13189/
Thanks
regards
Aravinda
On 02/12/2016 01:38 AM, ML mail
Hello,
I would like to upgrade my Gluster 3.7.6 installation to Gluster 3.7.8 and put
together the procedure below. Can anyone check it and let me know if it is
correct or if I am missing anything? Note here that I am using Debian 8 and the
Debian packages from Gluster's APT repository. I
02/03/2016 08:09 PM, ML mail wrote:
> Dear Aravinda,
>
> Thank you for the analysis and submitting a patch for this issue. I hope it
> can make it into the next GlusterFS release 3.7.7.
>
>
> As suggested I ran the find_gfid_issues.py script on my brick on the two
>
316/
http://review.gluster.org/#/c/13189/
The following script can be used to find problematic files in each brick backend.
https://gist.github.com/aravindavk/29f673f13c2f8963447e
regards
Aravinda
On 02/01/2016 08:45 PM, ML mail wrote:
> Sure, I will just send it to you through an encrypted cloud storage app and
Hello,
I just set up distributed geo-replication to a slave on my 2-node replicated
volume and noticed quite a few error messages (around 70 of them) in the
slave's brick log file:
The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log
[2016-01-31 22:19:29.524370] E
Hi Jiffin,
Thanks for fixing that, will be looking forward to this patch so that my log
files don't get so cluttered up ;)
Regards
ML
On Monday, February 1, 2016 6:54 AM, Jiffin Tony Thottan <jthot...@redhat.com>
wrote:
On 31/01/16 23:25, ML mail wrote:
> Hello,
>
>
e:
Hi,
On 02/01/2016 02:14 PM, ML mail wrote:
> Hello,
>
> I just set up distributed geo-replication to a slave on my 2 nodes'
> replicated volume and noticed quite a few error messages (around 70 of them)
> in the slave's brick log file:
>
> The exact log file is: /var/l
Arumugam
<sarum...@redhat.com> wrote:
Hi,
On 02/01/2016 02:14 PM, ML mail wrote:
> Hello,
>
> I just set up distributed geo-replication to a slave on my 2 nodes'
> replicated volume and noticed quite a few error messages (around 70 of them)
> in the slave's brick log file:
Sure, I will just send it to you through an encrypted cloud storage app and
send you the password via private mail.
Regards
ML
On Monday, February 1, 2016 3:14 PM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
On 02/01/2016 07:22 PM, ML mail wrote:
> I just found out I need
Hello,
I just set up distributed geo-replication to a slave on my 2-node replicated
volume, and so far it works, but every 60 seconds I see the following message
in the slave's geo-replication-slaves gluster log file:
[2016-01-31 17:38:48.027792] I [dict.c:473:dict_get]
Is this normal???
So to sum up, I've got geo-replication set up, but it's quite patchy and messy
and does not run under the special replication user I wanted it to run under.
On Monday, September 21, 2015 8:07 AM, Saravanakumar Arumugam
<sarum...@redhat.com> wrote:
Replies inline.
O
9/2015 03:03 AM, ML mail wrote:
> Hello,
>
> I am trying in vain to set up geo-replication, now on version 3.7.4 of
> GlusterFS, but it still does not seem to work. I have at least managed to
> successfully run georepsetup using the following command:
>
>
> georepset
Hello,
I am trying in vain to set up geo-replication, now on version 3.7.4 of GlusterFS,
but it still does not seem to work. I have at least managed to successfully run
georepsetup using the following command:
georepsetup reptest gfsgeo@gfs1geo reptest
But as soon as I run:
gluster volume
On 09/13/2015 09:46 PM, ML mail wrote:
> Hello,
>
> I am using the following documentation in order to setup geo replication
> between two sites
> http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
>
> Unfortunately the step:
>
> glus
/blob/master/README.md
Thanks,
Saravana
On 09/13/2015 09:46 PM, ML mail wrote:
> Hello,
>
> I am using the following documentation in order to setup geo replication
> between two sites
> http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
>
> Unfortuna
Hello,
I am using the following documentation in order to set up geo-replication
between two sites:
http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
Unfortunately the step:
gluster volume geo-replication myvolume gfs...@gfs1geo.domain.com::myvolume
create push-pem
Thanks Jeff for this blog post, looking forward to NSR and its chain
replication!
On Monday, March 9, 2015 1:00 PM, Jeff Darcy jda...@redhat.com wrote:
I would be very interested to read your blog post as soon as it's out, and I
guess many others would too. Please do post the link to this list as
Hello,
I am setting up geo-replication on Debian wheezy using the official 3.5.3
GlusterFS packages and noticed that when creating the geo-replication session
using the command:
gluster volume geo-replication myvol slavecluster::myvol create push-pem force
the authorized_keys SSH file
, ML mail mlnos...@yahoo.com wrote:
Hello,
I just setup geo replication from a 2 node master cluster to a 1 node slave
cluster and so far it worked well. I just have one issue on my slave if I
check the files on my brick i just see the following:
drwxr-xr-x 2 root root 15 Mar 5 23:13 .gfid
drw
Hello,
I just set up geo-replication from a 2-node master cluster to a 1-node slave
cluster and so far it has worked well. I just have one issue: on my slave, if I
check the files on my brick I just see the following:
drwxr-xr-x 2 root root 15 Mar 5 23:13 .gfid
drw--- 20 root root 21 Mar 5 23:13
Thank you for the detailed explanation. Since right now it does not make much
difference to split the traffic, I will refrain from doing that and simply wait
for the new-style replication. This looks like a very promising feature and I
am looking forward to it. My other concern
Hello,
I have two gluster nodes in a replicated setup and have connected the two nodes
together directly through a 10 Gbit/s crossover cable. Now I would like to tell
gluster to use this separate private network for any communications between the
two nodes. Does that make sense? Will this
, March 3, 2015 12:57 PM, Claudio Kuenzler
c...@claudiokuenzler.com wrote:
Can you resolve the other gluster peers with dig?
Are you able to ping the other peers, too?
On Tue, Mar 3, 2015 at 12:38 PM, ML mail mlnos...@yahoo.com wrote:
Well the weird thing is that my DNS resolver servers
fine if it was launched manually, did I understand that
right? It's only the automatic startup at boot which causes the lookup failure?
On Tue, Mar 3, 2015 at 2:54 PM, ML mail mlnos...@yahoo.com wrote:
Thanks for the tip but Debian wheezy does not use systemd at all, it's still
old sysV style
.
On Tue, Mar 3, 2015 at 1:56 PM, ML mail mlnos...@yahoo.com wrote:
Yes, dig and ping work fine. I first used the short hostname gfs1 and then I
also tried gfs1.intra.domain.com; that did not change anything.
Currently for testing I only have a single-node setup, so my gluster peer
status output
this for the
future?
On Tuesday, March 3, 2015 12:57 PM, Claudio Kuenzler
c...@claudiokuenzler.com wrote:
Can you resolve the other gluster peers with dig?
Are you able to ping the other peers, too?
On Tue, Mar 3, 2015 at 12:38 PM, ML mail mlnos...@yahoo.com wrote:
Well the weird thing
cluster nodes MUST resolve each other through DNS (preferred) or
/etc/hosts.
An entry in /etc/hosts is probably even safer because you don't depend on
external DNS resolvers.
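For example (addresses and the second hostname below are only placeholders):
192.0.2.11  gfs1.intra.domain.com  gfs1
192.0.2.12  gfs2.intra.domain.com  gfs2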
cheers, ck
On Tue, Mar 3, 2015 at 8:43 AM, ML mail mlnos...@yahoo.com wrote:
Hello,
Is it required to have
Hello,
Is it required to have the GlusterFS servers in /etc/hosts for the gluster
servers themselves? I read many tutorials where people always add an entry in
their /etc/hosts file.
I am asking because my issue is that my volumes, or more precisely glusterfsd,
are not starting at system
, 08:47 +, ML mail wrote:
Just saw that my post below never got replied and would be very glad if
someone, maybe Niels?, could comment on this. Cheers!
On Saturday, February 7, 2015 10:13 PM, ML mail mlnos...@yahoo.com wrote:
Thank you Niels for your input, that definitely makes me
Just saw that my post below never got a reply and would be very glad if
someone, maybe Niels, could comment on this. Cheers!
On Saturday, February 7, 2015 10:13 PM, ML mail mlnos...@yahoo.com wrote:
Thank you Niels for your input, that definitely makes me more curious... Now
let me tell you
Dear Ben,
Very interesting answer of yours on how to find out where the bottleneck is.
These commands and parameters (iostat, sar) should maybe be documented on the
Gluster wiki.
I have a question for you: in order to better use my CPU cores (6 cores per
node) I was wondering if I should
For those interested here are the results of my tests using Gluster 3.5.2.
Nothing much better here either...
shell$ dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 51.9808 s, 5.2 MB/s
shell$ dd bs=64k count=4k
Hi,
I was wondering if turning on the performance.flush-behind option is dangerous
in terms of data integrity. Reading the documentation it seems to me that I
could benefit from it, especially since I have a lot of small files, but I would
like to stay on the safe side. So if anyone could tell me
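For reference, I would be turning it on with something like the following
(volume name being a placeholder):
gluster volume set myvolume performance.flush-behind on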
:
On 02/12/2015 01:17 PM, ML mail wrote:
Dear Pranith
I would be interested to know what the cluster.ensure-durability off option
does, could you explain or point to the documentation?
By default the replication translator does fsyncs on the files at certain times
so that it doesn't lose data
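It can be toggled per volume, for example (volume name being a placeholder):
gluster volume set myvolume cluster.ensure-durability off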
Dear Pranith
I would be interested to know what the cluster.ensure-durability off option
does; could you explain or point me to the documentation?
Regards, ML
On Thursday, February 12, 2015 8:24 AM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 02/12/2015 04:37 AM, Nico
This seems to be a workaround; isn't there a proper way to achieve this through
the volume configuration? I would not like to have to set up a third fake server
just to avoid that.
On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan
kaam...@novocraft.com wrote:
performance gain? For example in terms of MB/s throughput? Also, are there maybe
any disadvantages to running two bricks on the same node, especially in my case?
On Saturday, February 7, 2015 10:24 AM, Niels de Vos nde...@redhat.com wrote:
On Fri, Feb 06, 2015 at 05:06:38PM +, ML mail wrote:
Hello
Hello,
I read in the Gluster Getting Started leaflet
(https://lists.gnu.org/archive/html/gluster-devel/2014-01/pdf3IS0tQgBE0.pdf)
that the maximum recommended brick size is 100 TB.
Once my storage server nodes are filled up with disks they will have 192 TB of
storage space in total; does this
AM, ML mail wrote:
Hi,
I have installed Gluster 3.5.3 on Debian 7 and have one single test volume
right now. Unfortunately after a reboot this volume does not get started
automatically: the glusterfsd process for that volume is nonexistent although
the glusterd process is running
Yes, I have activated the SA xattr for my ZFS volume that I use for GlusterFS.
On Thursday, February 5, 2015 12:22 PM, Vijay Bellur vbel...@redhat.com wrote:
On 02/02/2015 08:26 PM, ML mail wrote:
Is ZFS using SA based extended attributes here? Since GlusterFS makes
use of extended
Hi,
I have installed Gluster 3.5.3 on Debian 7 and have one single test volume
right now. Unfortunately after a reboot this volume does not get started
automatically: the glusterfsd process for that volume is nonexistent although
the glusterd process is running.
After a boot running
Hi,
Is it possible to convert a 2-node replicated volume to a 4-node
distributed-replicated volume? If yes, is it as simple as just issuing the
add-brick with the two additional nodes and then starting a rebalance?
And can this be repeated ad infinitum? Let's say I want to again add another 2
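In other words something along these lines, with hostnames and brick paths
below only as placeholders:
gluster volume add-brick myvolume node3:/data/myvolume/brick node4:/data/myvolume/brick
gluster volume rebalance myvolume start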
Hello,
I am currently testing GlusterFS and could not find any guidelines or even
rules of thumb on the minimal hardware requirements for a bare-metal node.
My setup would be to start with two Gluster nodes using replication for HA. For
that I have two 4U SuperMicro storage servers
Hello,
I am testing GlusterFS for the first time and have installed the latest
GlusterFS 3.5 stable version on Debian 7 on brand new SuperMicro hardware with
ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 with 6 SATA disks of 2
TB each.
After setting up a first and single test brick