On 05/08/2014 03:11 PM, 思傑Clover wrote:
Hi All
I'm a newbie to GlusterFS. I have 2 machines, both Ubuntu 13.10 Server
AMD64, and built gluster 3.5.0 step by step from
http://gluster.org/community/documentation/index.php/Building_GlusterFS
When I probe the peer and create the volume, both succeed.
+1 Eco, nice page I must say.
~Atin
On 05/30/2014 04:40 AM, Eco Willson wrote:
Dear Community members,
We have been working on a new site design and we would love to get your
feedback. You can check things out at staging.gluster.org. Things are still
very much in beta (a few pages not
You should have a replica volume in that case; IIUC, you have created a
distributed volume only. Look at gluster volume create help to see how
you can set up a replica volume.
~Atin
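For readers following along, the suggestion above can be sketched as below. This is illustrative only: the volume name, hostnames, and brick paths are placeholders, not taken from the thread.

```shell
# Create and start a 2-way replicated volume across two peers
# (myvol, server1/server2 and /data/brick1 are assumed names)
gluster peer probe server2
gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
gluster volume start myvol
gluster volume info myvol   # Type should read "Replicate", not "Distribute"
```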
On 06/25/2014 03:08 PM, swaroop kumar wrote:
Hello All,
I have been exploring glusterfs for a while. I'm
On 06/26/2014 01:58 PM, Sachin Pandit wrote:
Hi all,
We had some concerns regarding the snapshot delete force option,
which is why we thought of getting advice from everyone out here.
Currently, when we give gluster snapshot delete snapname, it gives a
notification
saying that
On 07/11/2014 07:00 PM, Nilesh Govindrajan wrote:
sto1 ~ # gluster volume remove-brick www-common replica 1
sto2:/data/gluster/www-common
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Incorrect brick
Hi All,
Patch [1] introduces a new feature of glusterd where one can list down a
specific or all volume options with a glusterd command.
Request you to let me know your thoughts. I will be also creating a
feature page for the same in sometime.
1. http://review.gluster.org/#/c/8305/
Regards,
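As a sketch of how the proposed listing might be invoked once the patch lands (the exact syntax is defined by the patch under review, so treat the volume name and option key here as assumptions):

```shell
# List every option with its effective value for a volume
gluster volume get myvol all
# Or query a single option
gluster volume get myvol cluster.server-quorum-type
```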
On 07/25/2014 03:59 PM, Atin Mukherjee wrote:
Hi All,
Patch [1] introduces a new feature of glusterd where one can list down a
specific or all volume options with a glusterd command.
Request you to let me know your thoughts. I will be also creating a
feature page for the same
On 08/19/2014 12:01 PM, Gabi C wrote:
Had a similar issue with 3.4.3; it was gone after upgrading to 3.5.1
Have you looked into the glusterd logs? If not, can you please attach
them for both nodes?
~Atin
On Tue, Aug 19, 2014 at 9:26 AM, Tejas Gadaria refond.g...@gmail.com
The same goes for worm feature as well.
~Atin
On 09/05/2014 11:35 AM, Prashanth Pai wrote:
Hi Wannes Van Causbroeck,
Thanks to Atin! This seems to be fixed here (under review):
http://review.gluster.org/8571
Regards,
-Prashanth Pai
- Original Message -
From: VAN
On 09/06/2014 05:55 PM, Pranith Kumar Karampuri wrote:
On 09/05/2014 03:51 PM, Kaushal M wrote:
GlusterD performs the following functions as the management daemon for
GlusterFS:
- Peer membership management
- Maintains consistency of configuration data across nodes
(distributed
On 10/09/2014 09:10 PM, Sean O'Gorman wrote:
Hi,
I'm wondering if someone can help me. I am trying to set up a gluster
cluster and am having issues with the peering process: the primary is
stuck in 'State: Probe sent to peer (Connected)' and the secondary is in
the 'Sent and received peer
Can you let us know why you need to explicitly kill the brick
process? replace-brick ideally does the same and spawns a new process.
~Atin
On 11/11/2014 12:37 PM, Raghuram BK wrote:
If we'd like to replace a disk on which a brick resides
+1, excellent idea, this will definitely give an additional comfort zone
for learning glusterfs faster.
On 11/12/2014 05:47 PM, Krishnan Parthasarathi wrote:
All,
We have come across behaviours and features of GlusterFS that are left
unexplained for various reasons. Thanks to Justin Clift
Folks,
While I was looking into the glusterd backlog I saw that there are a few
BZs which were marked as needinfo on the reporter because the information
was not sufficient for further analysis, and the reporter hasn't
gotten back with the required details.
Ideally we should close these bugs saying
Scott,
Can you please find/point out the first instance of the command and its
associated glusterd log which failed to acquire the cluster-wide lock?
There are a few cases related to rebalance commands where we may end up
with stale locks; have you performed a rebalance in between?
~Atin
On
Xavi will be the best person to clear all your doubts on this feature;
however, as per my understanding, please see the responses inline.
~Atin
On 11/25/2014 07:11 PM, Ayelet Shemesh wrote:
Hello Gluster experts,
I have been using gluster for a small cluster for a few years now and I
have a
On 11/26/2014 01:07 AM, Kiebzak, Jason M. wrote:
I have a fresh install of gluster v3.6.1 on debian:
# dpkg -l|grep gluster
ii glusterfs-client 3.6.1-1
ii glusterfs-common 3.6.1-1
ii glusterfs-server 3.6.1-1
When I
Just wanted to clarify one thing here. If you have a 2-node cluster and
one of the nodes is down while the other is rebooted, the daemons won't
get started unless there is no peer in the cluster or at least a
friend update (which happens during the handshake) is received. This is
to ensure the node
I would vote for the 2nd one.
~Atin
On 11/26/2014 06:49 PM, Mohammed Rafi K C wrote:
Hi All,
We are planning to change the volume status command to show the RDMA port
for tcp,rdma volumes. We have four output designs in mind; those are:
1) Modify the Port column to TCP,RDMA Ports
Eg:
Status
IMO, this would definitely have a corresponding change in --xml output.
We need to take that into account as well.
~Atin
On 11/27/2014 10:00 AM, Kanagaraj wrote:
Do we have corresponding changes in --xml output? If we are removing any
existing elements, the management systems will break.
3.2.5 is too old; can you please upgrade your cluster to a recent version
of the glusterfs bits and try it out?
~Atin
On 11/28/2014 05:17 PM, Heiko Schröter wrote:
Unable to set cli op
___
Gluster-users mailing list
Gluster-users@gluster.org
On 12/04/2014 09:08 PM, Peter B. wrote:
This is actually directly related to my problem mentioned here on Monday:
Folder disappeared on volume, but exists on bricks.
I probed node A from server B, which caused all this. My bad.
:(
No data is lost, but is there any way to recover
You wouldn't be able to create a volume if it already exists.
~Atin
On 12/09/2014 01:43 PM, Geoff Galitz wrote:
Hi.
If one were to use the volume creation and start commands on an existing
volume, would that be safe and leave the currently existing volume in a good
state with all data
AM, Scott Merrill wrote:
On 11/25/14, 10:06 AM, Atin Mukherjee wrote:
On 11/25/2014 07:08 PM, Scott Merrill wrote:
On 11/24/14, 11:56 PM, Atin Mukherjee wrote:
Can you please find/point out the first instance of the command and its
associated glusterd log which failed to acquire the cluster
On 12/16/2014 04:56 PM, Jon Colás Gómez wrote:
I have a production environment with a volume replicated across two nodes
in replica 2.
I want to update the replica count from 2 to 3 (add another node).
I have tried:
# gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs
wrong
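For reference, the command form above is the usual way to grow the replica count: one new brick per existing replica set. A hedged sketch of the full sequence, with host03 and the brick path taken from the message and the rest assumed:

```shell
# Make the new node part of the trusted pool first
gluster peer probe host03
# Add one brick per replica set to go from replica 2 to replica 3
gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs
# Populate the new brick from the existing replicas
gluster volume heal gluster_data full
```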
Could you provide the log snippet of the host2 machine?
Did you use '*' in the brick path? If so, that's not correct.
~Atin
On 12/22/2014 06:57 PM, Jon Colás Gómez wrote:
I already have *host1* and *host2* with replica 2 volume called
gluster_puppet
I am trying to create a new volume type
[glusterd-handler.c:2671:glusterd_op_unlock_send_resp] 0-glusterd:
Responded to unlock, ret: 0
Greetings,
2014-12-22 16:35 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
Could you provide the log snippet of the host2 machine?
Did you use '*' in the brick path? If so, that's not correct
Folks,
If you face any problems while using glusterfs, don't hesitate to shoot
us a mail; however, please note that providing the *gluster version*
with every problem report helps us debug the issue much faster, and the
turnaround time will be quicker too.
~Atin
On 02/06/2015 08:21 AM, Jordan Tomkinson wrote:
Hi,
Using Gluster 3.6.1, I'm trying to replace a brick but after issuing a
volume heal nothing gets healed and my clients see an empty volume.
I have reproduced this on a test volume, shown here.
$ gluster volume status test
Status of
Lala,
AFAIR, you are correct. However, I think we still don't have code for
point 2. Although it generates a warning, the rebalance can go through.
--Atin
On 6 Feb 2015 19:01, Lalatendu Mohanty lmoha...@redhat.com wrote:
On 02/06/2015 08:11 AM, Humble Devassy Chirammal wrote:
On 02/05/2015
On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
I nuked the entire partition with mkfs, just to be *sure*, and I still
get the error message:
volume create: gv0: failed: /brick1/gv0 is already part of a volume
Clearly, there's some bit of data being kept somewhere else besides in
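For context on where that "bit of data" lives: the volume membership is recorded in extended attributes on the brick directory itself (plus the hidden .glusterfs directory), so it survives anything that does not recreate that directory. A hedged cleanup sketch, assuming the path from the error message:

```shell
# Run on the brick host; removes the markers that make glusterd
# consider the directory part of an existing volume
setfattr -x trusted.glusterfs.volume-id /brick1/gv0
setfattr -x trusted.gfid /brick1/gv0
rm -rf /brick1/gv0/.glusterfs
```

Note that if mkfs was really run on the filesystem holding /brick1/gv0, these attributes would be gone too, so it is worth checking whether the brick directory actually sits on the partition that was reformatted.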
:)
-Original Message-
From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Friday, 20 February 2015 10:49
To: RASTELLI Alessandro; gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
Could you please share the cmd_history.log glusterd log file to analyze
Could you please share the cmd_history.log glusterd log file so we can
analyze this high memory usage?
~Atin
On 02/20/2015 03:10 PM, RASTELLI Alessandro wrote:
Hi,
I've noticed that one of our 6 gluster 3.6.2 nodes has the glusterd
process using 50% of RAM; on the other nodes usage is about 5%.
This
--
Task : Rebalance
ID : 6d4c6c4e-16da-48c9-9019-dccb7d2cfd66
Status : completed
-- Original Message --
From: Atin Mukherjee amukh...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com; Justin Clift
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson
david.robin...@corvidtec.com wrote:
I have a server with v3.6.2 from which I cannot mount using NFS. The
FUSE mount works, however, I cannot get
On 01/06/2015 06:05 PM, Alessandro Ipe wrote:
Hi,
We have set up an md1 volume using gluster 3.4.2 over 4 servers configured
as distributed and replicated. Then we upgraded smoothly to 3.5.3, since it
was mentioned that the command volume replace-brick is broken on 3.4.x. We
added
on the source and re-test?
~Atin
On Tue, Jan 13, 2015 at 12:37 PM, Atin Mukherjee amukh...@redhat.com
wrote:
Punit,
cli log wouldn't help much here. To debug this issue further can you
please let us know the following:
1. gluster peer status output
2. gluster volume status output
3. gluster
Punit,
cli log wouldn't help much here. To debug this issue further can you
please let us know the following:
1. gluster peer status output
2. gluster volume status output
3. gluster --version output.
4. Which command got failed
5. glusterd log file of all the nodes
~Atin
On 01/13/2015 07:48
On 02/11/2015 08:08 AM, Craig Yoshioka wrote:
Every 3 seconds my Gluster peers print a socket error to their logs:
[2015-02-11 02:34:51.538651] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/8ec925040908cb66670cb4304e0c0538.socket failed (Invalid
argument)
[2015-02-11
On 02/16/2015 04:20 AM, Ed Greenberg wrote:
Something I don't understand.
If I have two servers running gluster code, each with an attached brick,
is there a master slave relationship or are they peers?
There is no master slave relationship here. Each brick can be considered
as individual
On 02/16/2015 12:37 PM, Félix de Lelelis wrote:
Hi,
Last week we upgraded our cluster to version 3.6. I noticed the
following error in the log:
W [socket.c:611:__socket_rwv] 0-management: readv on
/var/run/f3fcde54ca5d30115274155a37baa079.socket failed (Invalid argument)
It is due a
AFR team (Pranith/Ravi cced), FYA..
~Atin
On 02/13/2015 07:48 PM, Subrata Ghosh wrote:
Hi All,
Can anyone clarify the issue we are facing with the incorrect heal
report mentioned below? We are using gluster 3.3.2.
*Issue:*
*Bug 1039544*
On 02/13/2015 02:05 PM, Feng Wang wrote:
Hi all,
If we set the read-only feature using the following command in the cli to a
volume in service, it will not work until the volume is restarted.
That's the correct functionality. http://review.gluster.org/#/c/8571/
should address it; however, it
I managed to find a workaround for it.
A.
On Wednesday 07 January 2015 15:41:07 Atin Mukherjee wrote:
On 01/06/2015 06:05 PM, Alessandro Ipe wrote:
Hi,
We have set up an md1 volume using gluster 3.4.2 over 4 servers
configured as distributed and replicated. Then, we upgraded smoothly
On 03/18/2015 03:24 PM, Félix de Lelelis wrote:
Hi,
I have a problem with glusterfs 3.6. I am monitoring it with scripts that
launch gluster volume status VOLNAME detail and gluster volume profile
VOLNAME info. When these scripts have been running for about 1-2 hours,
with a check every 1 minute,
for this, but there comes a point at which I can't launch any command
over the cluster, such as gluster volume status. Is there any way around
this?
Thanks
2015-03-18 11:36 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
On 03/18/2015 03:24 PM, Félix de Lelelis wrote:
Hi,
I have a problem
On 03/18/2015 08:04 PM, Vitaly Lipatov wrote:
Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote on 2015-03-16
19:22:
Hi,
I am just looking through my logs and am seeing a
lot of entries of the form:
[2015-03-16 16:02:55.553140] I
Could you attach the logs for the analysis?
~Atin
On 03/13/2015 03:29 PM, Kaamesh Kamalaaharan wrote:
Hi guys. I've been using gluster for a while now and, despite a few
hiccups, I find it's a great system to use. One of my more persistent
hiccups is an issue with one brick going offline.
My
...@novocraft.com
wrote:
Hi Atin, thanks for the reply. I'm not sure which logs are relevant, so
I'll just attach them all in a gz file.
I ran a sudo gluster volume start gfsvolume force at 2015-03-19 05:49;
I hope this helps.
Thank You Kindly,
Kaamesh
On Sun, Mar 15, 2015 at 11:41 PM, Atin
On 03/20/2015 04:43 AM, John Gardeniers wrote:
As per the subject line, does deleting a gluster volume leave the data?
As of now, the data still resides on the backend if the volume is deleted.
Definitely an RFE for the future.
~Atin
regards,
John
Selangor Darul Ehsan
Malaysia
Mobile: +60176562635
Ph: +60379600541
Fax: +60379600540
On Fri, Mar 20, 2015 at 12:57 PM, Atin Mukherjee amukh...@redhat.com
wrote:
I see there is a crash in the brick log.
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2015
the nodes.
~Atin
On 03/20/2015 12:43 PM, Félix de Lelelis wrote:
Hi Atin,
I am sending you the log and the scripts that I am using.
Awaiting your response.
Thanks.
2015-03-18 15:21 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
Could you share the glusterd log file and the scripts which were triggered
On 03/05/2015 06:33 PM, Vijay Bellur wrote:
On 03/01/2015 11:44 PM, Atin Mukherjee wrote:
Thanks Fanghuang for your nice words.
Vijay,
Can we try to take this patch in for 3.7 ?
Happy to get this in to 3.7. Could you please rebase this patch to the
latest git HEAD?
I've rebased
Nithya/Susant/Raghavendra G/Shyam can answer this. Ccing them. To
analyze the issue, I would request you to attach glusterd rebalance
logs as well.
~Atin
On 03/11/2015 01:50 PM, Jesper Led Lauridsen TS Infra server wrote:
Hi,
I forced a rebalance on a volume yesterday, but it never seems to
http://review.gluster.org/9269 is going to solve this problem, it will
be available in the coming 3.6 z stream release. You can refer to the
patch to understand the issue and the solution. Let me know in case of
any clarification required.
~Atin
On 03/24/2015 02:44 PM, Atin Mukherjee wrote
git checkout FETCH_HEAD
Is this enough to install the patch, or have I missed something?
Thank you
Alessandro
-Original Message-
From: RASTELLI Alessandro
Sent: Tuesday, 24 February 2015 10:28
To: 'Atin Mukherjee'
Cc: gluster-users@gluster.org
Subject: RE: [Gluster-users] GlusterD
checks by one script
that locks all other monitoring checks, so there is only one process
checking gluster. I am sending you the cmd_log_history of the 2 nodes.
Thanks.
2015-03-24 9:18 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
Could you tell us what activities were run in the cluster
On 03/31/2015 12:27 PM, Pranith Kumar Karampuri wrote:
Atin,
Could it be because bricks are started with PROC_START_NO_WAIT?
That's the correct analysis, Pranith. The mount was attempted before the
bricks were started. If we can have a time lag of some seconds between
mount and volume start
On 03/31/2015 01:03 PM, Pranith Kumar Karampuri wrote:
On 03/31/2015 12:53 PM, Atin Mukherjee wrote:
On 03/31/2015 12:27 PM, Pranith Kumar Karampuri wrote:
Atin,
Could it be because bricks are started with PROC_START_NO_WAIT?
That's the correct analysis Pranith. Mount
What error are you getting when trying to access the files? Could you
also share the client log file?
~Atin
On 03/28/2015 10:47 AM, Shyam Deshmukh wrote:
Hi all,
Greetings ..
I tried to mount the volume. The mount is successful, but files that
exist on the mounted volume are not accessible in
28, 2015 at 8:43 PM, Atin Mukherjee amukh...@redhat.com wrote:
What error are you getting when trying to access the files? Could you
also share the client log file?
~Atin
On 03/28/2015 10:47 AM, Shyam Deshmukh wrote:
Hi all,
Greetings ..
I tried to mounted volume. mount is successful
[1970-01-01 00:01:24.423024] E
[glusterd-utils.c:5760:glusterd_compare_friend_data] 0-management:
Importing global options failed
[1970-01-01 00:01:24.423036] E [glusterd-sm.c:1078:glusterd_friend_sm]
0-glusterd: handler returned: -2
Regards
Andreas
On 03/22/15 07:33, Atin Mukherjee
On Monday, 16 February 2015, 12:28, Atin Mukherjee amukh...@redhat.com
wrote:
On 02/13/2015 02:05 PM, Feng Wang wrote:
Hi all,
If we set the read-only feature using the following command in the cli to a
volume in service, it will not work until the volume is restarted.
That's
On 02/27/2015 07:10 AM, Cary Tsai wrote:
Assume I have 4 bricks in a replica (count=2) volume:
Volume Name: data-vol
Number of Bricks: 2 x 2 = 4
Brick1: 192.168.1.101:/brick
Brick2: 192.168.1.102:/brick
Brick3: 192.168.1.103:/brick
Brick4: 192.168.1.104:/brick
Something happens
While probing new nodes, have you used a mixed flavour of FQDNs, such as
short names and long names?
Can you please paste the peer status output here?
~Atin
On 03/03/2015 05:08 PM, ML mail wrote:
Well the weird thing is that my DNS resolver servers are configured correctly
and working fine. Here below is
branch at 3.6 ?
A.
-Original Message-
From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Friday, 20 February 2015 12:54
To: RASTELLI Alessandro
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
From the cmd log history I could see lots
On 02/24/2015 06:41 AM, Sam Giraffe wrote:
Hi,
On my 20 nodes with 2 replica cluster I was able to run:
# gluster volume remove-brick art server1:/brick1 server2:/brick2 start
I got a message stating:
volume remove-brick start: success
ID: 16b887bb-e848-4054-a5af-9390055d32c9
Could you check the network firewall settings? Flush the iptables rules
using iptables -F and retry.
~Atin
On 02/26/2015 02:55 PM, Kaamesh Kamalaaharan wrote:
Hi guys,
I managed to get gluster running, but I'm having a couple of issues with
my setup: 1) my peer status is rejected but connected; 2) my
Could you tell us what activities were run in the cluster?
cmd_log_history across all the nodes would give a clear picture of it.
~Atin
On 03/24/2015 01:03 PM, Félix de Lelelis wrote:
Hi,
Today, Glusterd daemon has been killed due to excessive memory consumption:
[3505254.762715] Out of
to gluster peer probe.
Yes, I meant peer detach. How about gluster peer detach force?
Regards
Andreas
On 03/23/15 05:34, Atin Mukherjee wrote:
On 03/22/2015 07:11 PM, Andreas Hollaus wrote:
Hi,
I hope that these are the logs that you requested.
Logs from 10.32.0.48
to investigate and how to do it to get the most
reliable result?
Anything else that could cause this?
Regards
Andreas
On 03/23/15 11:10, Atin Mukherjee wrote:
On 03/23/2015 03:28 PM, Andreas Hollaus wrote:
Hi,
This network problem is persistent. However, I can ping the server, so
I originally asked this question in the review. I was thinking that if
replace-brick is always commit-force in nature, then why not just
abandon the parameters? The admin needs to be cautious enough about its
usage by any means.
~Atin
On 04/03/2015 11:17 AM, Gaurav Garg wrote:
Hi all,
How about
Gluster : Redefine storage
~Atin
On 04/01/2015 05:44 PM, Tom Callaway wrote:
Hello Gluster Ant People!
Right now, if you go to gluster.org, you see our current slogan in giant
text:
Write once, read everywhere
However, no one seems to be super-excited about that slogan.
On 04/10/2015 09:15 PM, Ravishankar N wrote:
On 04/10/2015 08:24 PM, Jesper Led Lauridsen TS Infra server wrote:
Thanks
This is undocumented. At least I can't find it in man-pages or in
'gluster help'. Is there a place I can find undocumented parameters -
if there are any others?
On 04/10/2015 09:36 PM, Pierre Léonard wrote:
Hi All,
Last problem, I hope.
In my 14-node cluster, node 8 is present in the gluster volume info tyty
output and
not in the gluster volume status tyty output.
And when I start a volume on node 8, the others don't start, as I have to
start on
one of
CCing Dan as he has more insights on this.
~Atin
On 04/14/2015 02:52 AM, Yue, Cong wrote:
I am using GlusterFS so that my storage can be replicated between several
servers with fault tolerance. And I am doing it in a similar way as
On 04/23/2015 08:05 PM, free.aaa wrote:
Hi everybody!
I have the gluster peer probe gfs1 command hung, with the result of
'Probe Sent to Peer (Connected)'
gfs3#gluster peer status
Number of Peers: 3
Hostname: gfs6
Uuid: 6bd6ee25-e257-4703-b500-330741b90471
State: Peer in Cluster (Connected)
services and try to start
it again, which will fail. So I need to rm -rf /var/lib/glusterd and start
again.
On 23.04.2015 18:09, Atin Mukherjee wrote:
On 04/23/2015 08:05 PM, free.aaa wrote:
Hi everybody!
I have gluster peer probe gfs1 command hung with the result of Probe
Sent to Peer (connected
On 04/21/2015 02:47 PM, Avra Sengupta wrote:
In the logs I see glusterd_lock() being used. This API is called only
in older versions of gluster, or if your cluster version is less
than 30600. So along with the version of glusterfs used, could you also
let us know what the cluster
On 04/21/2015 09:22 AM, Atin Mukherjee wrote:
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels
. Please check log
file for details.”
Best regards,
Kondo
2015-04-21 18:27 GMT+09:00 Atin Mukherjee amukh...@redhat.com:
On 04/21/2015 02:47 PM, Avra Sengupta wrote:
In the logs I see, glusterd_lock() being used. This api is called only
in older versions of gluster or if you have
On 04/20/2015 10:29 PM, Toms Varghese wrote:
Hi all,
I am trying to understand the source code for GlusterFS. But
unfortunately, there are too few comments in the source code, and I
couldn't find any online posts/documentation explaining it. Does anybody
know whether any
On 04/19/2015 03:04 PM, Shyam Deshmukh wrote:
Hi,
I am getting the following status, and my cluster is down due to the
same. Please help.
gluster@gluster1:~$ sudo gluster peer status
Number of Peers: 3
Hostname: gluster3
Uuid: c6f6574b-9779-4635-a7c2-06185a9ae973
State: Peer in Cluster
On 04/28/2015 04:41 PM, Oliver wrote:
Hi Gluster Users,
I am hoping that someone can help put me on the right track.
The objective is to have a highly available, extensible storage cluster
with a fair degree of redundancy.
I was thinking of a cluster of multiple machines, starting
On 04/30/2015 02:32 PM, gjprabu wrote:
Hi bturner,
I am getting the below error while setting server.event-threads:
gluster v set integvol server.event-threads 3
volume set: failed: option : server.event-threads does not exist
Did you mean server.gid-timeout or ...manage-gids?
This
On 04/30/2015 03:09 PM, gjprabu wrote:
Hi Amukher,
How do we resolve this issue? Do we need to wait for the 3.7
release, or is there any workaround?
You will have to wait, as this feature is in for 3.7.
Regards, Prabu
On Thu, 30 Apr 2015 14:49:46 +0530 Atin
On 04/28/2015 06:37 AM, 何亦军 wrote:
Hi Guys,
How do we upgrade GlusterFS from 3.6.2 to 3.6.3? Is there any document
that talks about that?
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
talks about how to upgrade from 3.4/3.5 to 3.6, however you could follow
the same steps
On 04/28/2015 06:53 AM, Atin Mukherjee wrote:
On 04/28/2015 06:37 AM, 何亦军 wrote:
Hi Guys,
How do we upgrade GlusterFS from 3.6.2 to 3.6.3? Is there any document
that talks about that?
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
talks about how to upgrade from 3.4
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
On 05/08/2015 10:37 AM, vyyy杨雨阳 wrote:
Hi,
We've been using glusterfs for 1 year. It's great, except that glusterd
occasionally crashed and just needed a restart.
But nowadays, glusterd crashes more frequently; sometimes several
glusterds crash and cause split-brain and gfid mismatches.
The gluster version is pretty old, i.e. 3.4; we have already moved to
3.6, and 3.7 is around the corner. I would recommend you upgrade your
cluster to 3.6 and see if you hit the same issue again.
~Atin
On 05/08/2015 03:15 PM, vyyy杨雨阳 wrote:
Following is backTrace of the
On 05/13/2015 03:59 PM, RASTELLI Alessandro wrote:
3.6.2
This release logs this message at the ERROR level; it can be ignored, as
the return code is 0. It will be fixed in a subsequent release.
regards
A.
From: Kaushal M [mailto:kshlms...@gmail.com]
Sent: Wednesday, 13 May 2015 12:29
To:
to your question is partially yes: though it doesn't affect the
data residing in the brick, it resets all of its metadata and sets
new extended attributes.
Regards
Andreas
On 04/11/2015 09:18 AM, Atin Mukherjee wrote:
On 04/11/2015 01:21 AM, Andreas Hollaus wrote:
Hi,
I wonder
On 04/14/2015 05:07 PM, Niels de Vos wrote:
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(
is distributed across the bricks. The way it works is with the help of a
hash applied to the file to find out the brick.
HTH,
Atin
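The hashing idea described above can be illustrated with a toy sketch. This is a deliberate simplification: real GlusterFS DHT assigns a hash range to each brick via directory extended attributes, and its hash function is not MD5-mod-N as used here.

```python
import hashlib

def pick_brick(filename, bricks):
    # Illustrative only: hash the file name and map it onto one brick.
    # The same name always hashes to the same brick, which is the
    # property DHT relies on to locate files without a central index.
    h = int(hashlib.md5(filename.encode("utf-8")).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["server1:/brick", "server2:/brick", "server3:/brick", "server4:/brick"]
for name in ("a.txt", "b.txt", "c.txt"):
    print(name, "->", pick_brick(name, bricks))
```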
On 19 May 2015 20:02, Atin Mukherjee atin.mukherje...@gmail.com
wrote:
On 19 May 2015 17:10, Varadharajan S rajanvara...@gmail.com wrote:
Hi,
We are using Ubuntu 14.04
On 19 May 2015 17:10, Varadharajan S rajanvara...@gmail.com wrote:
Hi,
We are using Ubuntu 14.04 Server, and for storage purposes we configured
gluster 3.5 as a distributed volume; please find the details below:
1) 4 servers - 14.04 Ubuntu Server, and each server's free disk space is
configured as
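A distributed volume like the one described could be created roughly as follows. The volume name, server names, and brick paths are placeholders, not details from the message.

```shell
# Four bricks, no replication: each file lands on exactly one brick,
# chosen by hashing the file name
gluster volume create distvol transport tcp \
  server1:/export/brick server2:/export/brick \
  server3:/export/brick server4:/export/brick
gluster volume start distvol
```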
From: gluster-users-boun...@gluster.org gluster-users-boun...@gluster.org
on behalf of Branden Timm bt...@wisc.edu
Sent: Thursday, June 4, 2015 1:31 PM
To: Atin Mukherjee
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] One host won't rebalance
-Original Message-
From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Tuesday, June 02, 2015 2:42 PM
To: vyyy杨雨阳; Gluster-users@gluster.org
Subject: Re: RE: RE: [Gluster-users] Gluster peer rejected and failed to start
On 06/02/2015 12:04 PM, vyyy杨雨阳 wrote:
Glusterfs05~glusterfs10 are clustered
if this information is helpful, but thanks for your reply.
From: Atin Mukherjee amukh...@redhat.com
Sent: Thursday, June 4, 2015 9:24 AM
To: Branden Timm; gluster-users@gluster.org; Nithya Balachandran; Susant
Palai; Shyamsundar Ranganathan
Subject: Re: [Gluster