> We do have a way to tackle this situation from the code. Raghavendra
> Talur will be sending a patch shortly.
We should fix it by undoing the daemon-refactoring change that broke the lazy
creation of the UUID for a node. Fixing it elsewhere is just masking the real
cause. Meanwhile, 'rm' is the stopgap.
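For reference, assuming the file in question is the node identity file
/var/lib/glusterd/glusterd.info (see the 'Detected new install' discussion
further below), the stopgap on the affected node amounts to roughly:
# rm /var/lib/glusterd/glusterd.info
# service glusterd restart   // glusterd lazily generates a fresh UUID on startup
(use systemctl restart glusterd on systemd-based distributions)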
> Yeah this followed by glusterd restart should help
>
> But frankly, i was hoping that 'rm' the file isn't a neat way to fix this
> issue
Why is rm not a neat way? Is it because the container deployment tool needs to
know about gluster internals? But isn't a Dockerfile dealing with details
of t
> Hi all,
> I have two servers with 3.7.1 and have the same problem described in this issue:
> http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693
>
> My servers packages:
> # rpm -qa | grep gluster | sort
> glusterfs-3.7.1-1.el6.x86_64
> glusterfs-api-3.7.1-1.el6.x86_64
> glusterfs-cli-3.7.1-1
All,
GlusterFS 3.7.1 has been released. The packages for CentOS, Debian, Fedora and
RHEL are available at http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.1/
in their respective directories.
A total of 58 patches were merged after v3.7.0. The following is the
distribution of patches a
> > [glusterd-store.c:2063:glusterd_restore_op_version] 0-management: Detected
> > new install. Setting op-version to maximum : 30600
The above message indicates that the /var/lib/glusterd/glusterd.info file,
which carries the identity (UUID) of the node and the operating version of the
glusterd binary, was not found when glusterd started, so glusterd assumed a
fresh install and set the op-version to the maximum it supports.
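On a healthy node this is a small key-value file; roughly like the following
(the UUID below is made up for illustration):
# cat /var/lib/glusterd/glusterd.info
UUID=c3f0a6e2-1c3a-4d2b-8f6a-2b6a0d8f4e21
operating-version=30600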
> > restarted after the upgrade. I've also seen it if nfs is disabled on all
> > volumes.
> >
> > On 03/08/2015 09:10 PM, Krishnan Parthasarathi wrote:
> > >> I just upgraded from 3.5.3 to 3.6.2 and have issues mounting my volume
> > >> on a
>
> I just upgraded from 3.5.3 to 3.6.2 and have issues mounting my volume on a
> client. On the server side I found this error message which might be the
> cause of my issues:
Could you describe the issues you are facing?
>
> [2015-03-08 13:22:36.383715] W [socket.c:611:__socket_rwv] 0-managem
Jifeng,
Some of us are looking into this issue. We should have an update
on this next week. We are busy with GlusterFS 3.7[1] release's feature
freeze.
[1] - http://www.gluster.org/community/documentation/index.php/Planning37
cheers,
kp
- Original Message -
>
>
> Hi,
>
>
>
> [envir
> On to the logistics:
>
> When: I'm looking at sometime during the second week of May (May 11-15).
> Alternately, the third week of April (April 13-19), though, I'm
> concerned about being able to get it all in place before then. I'd like
> to have at least one day worth of scheduled presentati
[Sorry for replying to multiple mails in the same thread]
- Original Message -
> I have two production machines (web1, web2) that are currently using
> Glusterfs. I added two new machines, web3 and web4. Web1 and web2 are peered
> and are running great. Web3 and web4 will peer with each ot
Michael,
I understand that this has been frustrating for you. Let me define a
few terms used in this thread that may not be obvious unless you have
heard them before.
glusterd log file - /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
-- this is the file where gluster
Scott,
Could you check if a process is listening on the unix domain socket
/var/run/3714f2b1aabf9be7087fc323824b74dd.socket?
ss -x | grep /var/run/3714f2b1aabf9be7087fc323824b74dd.socket
~kp
- Original Message -
> What causes these errors?
>
> [2014-12-01 20:22:04.661974] W [sock
All,
We have come across behaviours and features of GlusterFS that are left
unexplained for various reasons. Thanks to Justin Clift for encouraging me to
come up with a document that tries to fill this gap incrementally. We have
decided
to call it "did-you-know.md" and for a reason. We'd love to
- Original Message -
> On Thu, Sep 11, 2014 at 4:55 AM, Krishnan Parthasarathi
> wrote:
> >
> > I think using Salt as the orchestration framework is a good idea.
> > We would still need to have a consistent distributed store. I hope
> > Salt has the provisio
Bala,
I think using Salt as the orchestration framework is a good idea.
We would still need to have a consistent distributed store. I hope
Salt has the provision to use one of our choice. It could be consul
or something that satisfies the criteria for choosing alternate technology.
I would wait f
XiaoZan,
The socket_connect error logs that you are observing are
not really spam logs. With this snippet of log it's impossible
to say why you are seeing these logs. The reason you see the log
message repeat is because gluster's transport layer tries to
connect once every 3s until it successfully connects.
- Original Message -
> > As part of the first phase, we aim to delegate the distributed
> > configuration
> > store. We are exploring consul [1] as a replacement for the existing
> > distributed configuration store (sum total of /var/lib/glusterd/* across
> > all
> > nodes). Consul provid
> Bulk of current GlusterD code deals with keeping the configuration of the
> cluster and the volumes in it consistent and available across the nodes. The
> current algorithm is not scalable (N^2 in no. of nodes) and doesn't prevent
> split-brain of configuration. This is the problem area we are
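To illustrate the kind of store being considered: consul exposes a plain HTTP
key-value API, so storing and reading a piece of cluster configuration could
look roughly like the following (the key path is made up, and a local consul
agent on the default port 8500 is assumed):
# curl -X PUT -d '30600' http://127.0.0.1:8500/v1/kv/gluster/cluster/op-version
# curl http://127.0.0.1:8500/v1/kv/gluster/cluster/op-version?raw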
- Original Message -
> On 5 Sep 2014, at 12:21, Kaushal M < kshlms...@gmail.com > wrote:
>
>
>
> - Peer membership management
> - Maintains consistency of configuration data across nodes (distributed
> configuration store)
> - Distributed command execution (orchestration)
> - Service m
- Original Message -
> On 05/09/2014, at 11:21 AM, Kaushal M wrote:
>
> > As part of the first phase, we aim to delegate the distributed
> > configuration store. We are exploring consul [1]
>
> Does this mean we'll need to learn Go as well as C and Python?
>
> If so, that doesn't sound
> This is very cool. As a thought, since I don't know the code
> at all, could it do stuff for parts of a volume?
>
> For example in the server.py GUI a person could give a directory
> path inside a volume, and it would show the IO operations stats
> for just that path?
Are you looking f
/mntgluster -o acl
>
> And BINGO up and running!!!
>
>
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
Could you attach the entire mount and glustershd log files to this thread?
~KP
- Original Message -
> NO ONE!??
> This is an entry of my glustershd.log:
> [2014-07-30 06:40:59.294334] W
> [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1:
> received RPC status error
>
Franco,
When your clients perceive a hang, could you check the status of the bricks by
running,
# gluster volume status VOLNAME (run this on one of the 'server' machines in
the cluster.)
Could you also provide a statedump of the client(s),
by issuing the following command.
# kill -SIGUSR1 <pid-of-the-client-mount-process>
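As a rough sketch of the above (the exact pid has to be looked up; statedump
files are typically written under /var/run/gluster/ on an RPM install):
# ps ax | grep '[g]lusterfs'      // pick the pid of the client mount process
# kill -SIGUSR1 <pid>
# ls /var/run/gluster/            // look for the freshly written *.dump.* file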
Milin,
It appears that you are using SSL-enabled transport on your volume.
From the log messages, it appears that the transport layer is unable to
load the cert file. Could you check if the SSL cert file is installed
at the appropriate location? I am not an SSL expert, so I am not sure
where the
James,
Could you provide the logs of the mount process, where you see the hang for 42s?
My initial guess, seeing 42s, is that the client translator's ping timeout
is in play.
I would encourage you to report a bug and attach relevant logs.
If the issue (observed) turns out to be an acceptable/exp
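For what it's worth, 42 seconds matches the default of the network.ping-timeout
volume option; a quick, illustrative way to confirm or tune it would be:
# gluster volume info VOLNAME | grep ping-timeout   // shows it only if it was set explicitly
# gluster volume set VOLNAME network.ping-timeout 42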
Nux,
To check the volume configuration as seen by glusterd on different
nodes, you can compare the contents of the following file for a given
volume:
/var/lib/glusterd/vols/<VOLNAME>/info
For example, you should see something similar to:
[root@trantor codebase]# cat /var/lib/glusterd/vols/vol/info
type=0
count=2
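A quick way to spot a divergence between peers is to compare a checksum of
that file on every node, for example:
# md5sum /var/lib/glusterd/vols/<VOLNAME>/info   // run on each peer and compare the sums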
Dragon,
If the volume is mounted at /mnt/master-vol2, then the log file would be
mnt-master-vol2.log.
Is this log file empty too? Could you attach the logs of the mount point
that were log-rotated?
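For completeness: fuse mount logs are written under /var/log/glusterfs/,
named after the mount path with '/' replaced by '-', so for the mount above
that would be:
# ls -l /var/log/glusterfs/mnt-master-vol2.log*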
thanks,
krish
- Original Message -
> Hello Krish,
>
> i mounted the Volume Vol2 like this "mount -
Robin,
When you say your gluster servers have problems that force you
to restart them, could you explain what problem you are
facing? Do you mean the brick process when you say gluster server?
thanks,
krish
- Original Message -
> Hi,
>
> I searched many places and I want to how to re
Dragon,
Could you attach the brick and client log files? This information
is not sufficient. The "actor" error messages in the etc-glusterfs.. log
make me believe that the client volfile is pointing at
glusterd (the management daemon) as the brick process. So, it would
help if you provid
Dragon,
Could you attach brick log files, client log file(s) and output of the following
commands,
gluster volume info VOLNAME
gluster volume status VOLNAME
Could you attach the "etc-glusterfs.." log as well?
thanks,
krish
- Original Message -
> Hello,
> i didnt find any hint of an erro
Pascal, Kramer,
Which version of glusterfs are you using?
Could you give outputs of the following commands, and if possible,
a tarball containing all the log files under /var/log/glusterfs/
directory?
# gluster volume info
# gluster volume quota <VOLNAME> list // This should show the erroneous quota
sta
Rahul,
[Removing gluster-devel from the CC list since it's not a devel list issue.]
Could you provide all the log files under /var/log/glusterfs?
Seeing the brick log files alongside the glusterd log file might
provide more context.
thanks,
krish
- Original Message -
> Hi All,
>
> I am facing i
Hi Toby,
- Original Message -
> Hi,
> I'm getting some confusing "Incorrect brick" errors when attempting to
> remove OR replace a brick.
>
> gluster> volume info condor
>
> Volume Name: condor
> Type: Replicate
> Volume ID: 9fef3f76-525f-4bfe-9755-151e0d8279fd
> Status: Started
> Number
> Krishnan,
>
>
> On Mon, Jul 29, 2013 at 8:24 PM, Krishnan Parthasarathi < kpart...@redhat.com
> > wrote:
> > Could you check using netstat, what other process is listening on
> > the port, around the time of failure?
>
> [root@ir2 ~]# netstat -ntlp | gr
Joel,
From the logs, we see bind(3) is failing with "Address already
in use". Could you check, using netstat, which other process is listening on
the port around the time of failure?
# netstat -ntlp | grep <brick_port>
where brick_port can be found in the logs, see "--xlator-option
home-server.listen-port=
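Putting that together, a rough sequence might look like this (the brick log
path and port below are only placeholders):
# grep -o 'listen-port=[0-9]*' /var/log/glusterfs/bricks/BRICK.log | tail -1
# netstat -ntlp | grep :PORT        // substitute the port found above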
Did you install glusterfs-fuse-3.3.1-1.el6.x86_64.rpm on the machine you
are mounting from?
thanks.
krish
- Original Message -
> I have two storage servers running on Amazon Linux. I created a replicated
> volume with these storage servers(glustervol)
>
> I am trying to mount the volume on
Hi Ryan,
Could you give the list of test cases that fail due to assumptions
on locations of installed binaries?
thanks,
krish
- Original Message -
> Hi Ryan,
>
> I'm going to bet no. I think you're the first Debian/Ubuntu user to really
> give the test framework a workout.
>
> If you c
ng general volume information, can the state of the volume be known
> > from any specific logs?
> > #gluster volume info gvol1
> > Volume Name: gvol1
> > Type: Distribute
> > Volume ID: aa25aa58-d191-432a-a84b-325051347af6
> > Status: Stopped
> > Number of Bric
port/brick1
> Options Reconfigured:
> nfs.addr-namelookup: off
> nfs.port: 2049
> From: Krishnan Parthasarathi
> To: srinivas jonn
> Cc: gluster-users@gluster.org
> Sent: Monday, 3 June 2013 3:14 PM
> Subject: Re: [Gluster-users] recovering gluster volume || startup failure
Srinivas,
Could you paste the output of "gluster volume info gvol1"?
This should give us an idea as to what was the state of the volume
before the power loss.
thanks,
krish
- Original Message -
> Hello Gluster users:
> sorry for long post, I have run out of ideas here, kindly let me know
Harry,
Could you paste/attach the contents of the /var/lib/glusterd/gli/info
files and the glusterd log files from the 4 peers in the cluster?
From the volume-info snippet you had pasted, it appears that
the node which was shut down differs in its view of the volume's
status.
thanks,
krish
- Origina
Hi Tungdam,
Could you attach all the log files present under /var/log/glusterfs
(in the case of an RPM installation)? The log file corresponding to
the crashing process would have a backtrace indicating the potential
reason for the crash.
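If it helps to narrow things down before attaching everything, crash reports
in gluster logs usually begin with a "signal received" line followed by the
backtrace, so something like this (the pattern is a best guess) should locate it:
# grep -l 'signal received' /var/log/glusterfs/*.log
# grep -A 20 'signal received' /var/log/glusterfs/LOGFILE   // the file found above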
~krish
- Original Message -
From: "tungdam"
To: "Vija
Tomasz,
Glusterd version 3.2.6 doesn't handle concurrently issued volume commands
'gracefully'. It is known to end up in situations like the one you have
described below. This was fixed in the early days of what we informally
refer to as the 3.3.0.
[Ref: https://bugzilla.redhat.com/show_bug.cgi?id
Philip,
Which version of Glusterfs are you using?
Could you attach the log files? If they are big,
could you identify sections of the log files that
were close in time to the commit of the replace-brick operation
and attach those?
It would be better if you could provide as much information about
your se
chyd,
Which version of glusterfs are you using? We need to check if this happens
even after the following fix was merged,
http://review.gluster.com/774
thanks,
krish
- Original Message -
From: "chyd"
To: "gluster-users"
Sent: Monday, April 9, 2012 10:06:58 AM
Subject: [Gluster-users