I have a script written to analyze the log messages of a gluster process.
It scans the log file and identifies the log messages with ERROR
and WARNING levels.
It lists the functions (with either ERROR or WARNING logs) and their
percentage of occurrence.
It also lists the MSGIDs for ERROR logs.
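A minimal sketch of such a scanner (the log-line shape and every name below are assumptions modelled on typical gluster log lines, not the actual script):

```python
import re
from collections import Counter

# Assumed log line shape (illustrative), e.g.:
# [2015-01-25 10:00:00.000000] E [MSGID: 108006] [afr-common.c:4168:afr_notify] 0-vol: down
LINE_RE = re.compile(
    r"\[\S+ \S+\] (?P<level>[EW]) "        # timestamp, then level: E=ERROR, W=WARNING
    r"(?:\[MSGID: (?P<msgid>\d+)\] )?"     # optional message id
    r"\[\S+:\d+:(?P<func>\w+)\]"           # source file:line:function
)

def analyze(lines):
    """Count ERROR/WARNING messages per function and collect ERROR MSGIDs."""
    funcs, msgids = Counter(), set()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        funcs[m.group("func")] += 1
        if m.group("level") == "E" and m.group("msgid"):
            msgids.add(m.group("msgid"))
    total = sum(funcs.values()) or 1
    percentages = {f: round(100.0 * n / total, 1) for f, n in funcs.items()}
    return percentages, msgids
```

Feeding it a log file's lines yields the per-function percentage map and the set of MSGIDs seen on ERROR lines.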
There is already a patch submitted for moving the TBF part to libglusterfs. It
is under review.
http://review.gluster.org/#/c/12413/
Regards,
Raghavendra
On Mon, Jan 25, 2016 at 2:26 AM, Venky Shankar wrote:
> On Mon, Jan 25, 2016 at 11:06:26AM +0530, Ravishankar N wrote:
> >
Hi Xavier,
There is a patch sent for review which implements the metadata cache in the
posix layer. What the changes do is this:
Whenever there is a fresh lookup on an object (file/directory/symlink),
the posix xlator saves the stat attributes of that object in its cache.
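As a toy model of that behaviour (the class and method names are hypothetical; the actual patch is C code inside the posix xlator):

```python
import time

class StatCache:
    """Toy stat cache: remember an object's attributes on lookup, serve
    later stat requests from the cache until a timeout expires."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self._cache = {}  # gfid -> (stat attributes, time stored)

    def on_lookup(self, gfid, stat_attrs):
        # Fresh lookup on a file/directory/symlink: save its attributes.
        self._cache[gfid] = (dict(stat_attrs), time.monotonic())

    def get(self, gfid):
        # Return cached attributes if still fresh, else None (cache miss).
        entry = self._cache.get(gfid)
        if entry is None:
            return None
        stat_attrs, stored_at = entry
        if time.monotonic() - stored_at > self.timeout:
            del self._cache[gfid]
            return None
        return stat_attrs
```

A cache miss (unknown gfid or expired entry) returns None, which would make the real code fall back to an actual stat on the backend.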
As of now, whenever there
1277822 - glusterd: probing a new node(>=3.6) from 3.5 cluster is
moving the peer to rejected state
Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mail
Regards,
Raghavendra Bhat
If you have a suitable topic to
discuss, please add it to the agenda.
Regards,
Raghavendra Bhat
Hi Oleksandr,
You are right. The description should have said it is the limit on the
number of inodes in the lru list of the inode cache. I have sent a patch
for that.
http://review.gluster.org/#/c/12242/
Regards,
Raghavendra Bhat
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <
ole
FUSE mount cannot use more than 32 groups
1256245 - AFR: gluster v restart force or brick process restart doesn't
heal the files
1258069 - gNFSd: NFS mount fails with "Remote I/O error"
1173437 - [RFE] changes needed in snapshot info command's xml output.
Regards,
Rag
mple (where the file is opened, unlinked and a graph
switch happens), there was a patch submitted long back.
http://review.gluster.org/#/c/5428/
Regards,
Raghavendra Bhat
3. Open-behind and unlink from a different client:
==
While open-behind
it even
though the file was not changed
Regards,
Raghavendra Bhat
On 08/18/2015 12:39 PM, Avra Sengupta wrote:
+ Adding Raghavendra Bhat.
When is the next GA planned on this branch? And can we take patches into
this branch while this is being investigated?
Regards,
Avra
I am planning to make the release by the end of this week. I can accept
the patches
On 08/10/2015 09:56 PM, Niels de Vos wrote:
On Wed, Jul 29, 2015 at 04:00:48PM +0530, Raghavendra Bhat wrote:
On 07/27/2015 08:30 PM, Glomski, Patrick wrote:
I built a patched version of 3.6.4 and the problem does seem to be fixed
on a test server/client when I mounted with those flags (acl
(In general, the release branch which is being stabilized). And
on the 30th of every month a 3.7-based release would happen (in general, the
latest release branch).
Please provide feedback. Once a schedule is finalized we can put that
information in gluster.org.
Regards,
Raghavendra Bhat
there are
no beta releases happening on the release-3.5 branch or on the latest
release-3.7 branch. I was doing beta releases for the release-3.6 branch,
but I am thinking of moving away from that and making 3.6.5 directly
(and future release-3.6 releases as well).
Regards,
Raghavendra Bhat
On Wed, Aug 5
it is accepted in master, it can be backported to
release-3.6 branch. I will wait till then and make 3.6.5.
Regards,
Raghavendra Bhat
On Thu, Jul 23, 2015 at 6:27 PM, Niels de Vos nde...@redhat.com
mailto:nde...@redhat.com wrote:
On Tue, Jul 21, 2015 at 10:30:04PM +0200, Niels de Vos wrote
list.
https://www.gluster.org/pipermail/gluster-devel/2015-June/045942.html
Regards,
Raghavendra Bhat
Thanks & Regards
Anand.N
On 07/07/2015 12:30 PM, Raghavendra G wrote:
+ vijay mallikarjuna for quotad has similar concerns
+ Raghavendra Bhat for snapd might've similar concerns.
Snapd also uses protocol/server at the top of the graph. So the fix for
protocol/server should be good enough.
Regards,
Raghavendra Bhat
Adding the correct gluster-devel id.
Regards,
Raghavendra Bhat
On 07/08/2015 11:38 AM, Raghavendra Bhat wrote:
Hi,
In the bit-rot feature, the scrubber marks corrupted objects (objects
whose data has gone bad) as bad objects (via an extended attribute). If the
volume is a replicate volume
On 07/06/2015 01:39 PM, Niels de Vos wrote:
On Mon, Jul 06, 2015 at 12:09:28PM +0530, Raghavendra Bhat wrote:
On 07/06/2015 09:52 AM, Kaushal M wrote:
I checked on NetBSD-7.0_BETA and FreeBSD-10.1. I couldn't reproduce
this. I'll try on NetBSD-6 next.
~kaushal
I think it has to be included
ok?
Or should I go ahead with 3.6.4 and make a quick 3.6.5 with this fix?
Regards,
Raghavendra Bhat
On Mon, Jul 6, 2015 at 8:38 AM, Kaushal M kshlms...@gmail.com wrote:
Krutika hit this last week, and let us (GlusterD maintainers) know of
it. I volunteered to look into this, but couldn't find
. snapview-server xlator is yet
to come into the picture). But still I will try to reproduce it on my
local setup and see what might be causing this.
Regards,
Raghavendra Bhat
#0 0x7f11e2ed3ded in gf_client_put (client=0x0, detached=0x0)
at
/home/jenkins/root/workspace/rackspace
crashed after directory was removed from the mount
point, while self-heal and rebalance were running on
the volume
Regards,
Raghavendra Bhat
returned in lookup.
Should we fail lookup if the dict creation fails?
Regards,
Raghavendra Bhat
On 06/27/2015 03:28 PM, Venky Shankar wrote:
On 06/27/2015 02:32 PM, Raghavendra Bhat wrote:
Hi,
There is a patch submitted for review to deny access to
objects which are marked as bad by the scrubber (i.e. the data of the
object might have been corrupted on the backend).
http
for this.
But does this issue block the above patches in any way? (Those 2 patches
are still needed to deny access to objects once they are marked as bad
by scrubber).
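A minimal model of the access denial those patches aim at (illustrative only; the real code checks an on-disk extended attribute that the scrubber sets):

```python
import errno

# Stand-in for the "bad object" extended attribute set by the scrubber.
BAD_OBJECTS = set()

def scrubber_mark_bad(gfid):
    # The scrubber flags an object whose data was found corrupted.
    BAD_OBJECTS.add(gfid)

def open_object(gfid):
    # Deny access to objects that have been marked bad.
    if gfid in BAD_OBJECTS:
        raise OSError(errno.EIO, "object is marked bad by the scrubber")
    return "fd-for-" + gfid
```

The essential point is the same as in the patches: once the bad flag is set, any access path must refuse to serve the object.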
Regards,
Raghavendra Bhat
?
Yes, one of my patches failed today too:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11461/consoleFull
-Ravi
I have also faced failures in tier.t a couple of times.
Regards,
Raghavendra Bhat
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11396/consoleFull
On 06/25/2015 09:57 AM, Pranith Kumar Karampuri wrote:
hi,
Does anyone know why glusterfs hangs with valgrind?
Pranith
Yes, I have faced it too. It used to work before, but recently it does
not: glusterfs hangs when run with valgrind.
I am not sure why it is hanging.
Regards,
Raghavendra
call is sufficient (as in
both inode forgets and brick restarts, a lookup will definitely come if
there is an access to that file).
Please provide feedback on above 3 methods. If there are any other
solutions which might solve this issue, they are welcome.
Regards,
Raghavendra Bhat
this test from is_bad_test() in run-tests.sh.
-Vijay
I tried to reproduce the issue and it did not happen in my setup. So I
am planning to get a slave machine and test it there.
Regards,
Raghavendra Bhat
...@redhat.com, Sachin
Pandit span...@redhat.com,
Raghavendra Bhat rab...@redhat.com, Kotresh Hiremath Ravishankar
khire...@redhat.com
Sent: Wednesday, May 6, 2015 10:53:01 PM
Subject: Re: [Gluster-devel] spurious regression status
On 05/06/2015 06:52 AM, Pranith Kumar Karampuri wrote:
hi,
Please
On Thursday 02 April 2015 01:00 PM, Pranith Kumar Karampuri wrote:
On 04/02/2015 12:27 AM, Raghavendra Talur wrote:
On Wed, Apr 1, 2015 at 10:34 PM, Justin Clift jus...@gluster.org
mailto:jus...@gluster.org wrote:
On 1 Apr 2015, at 10:57, Emmanuel Dreyfus m...@netbsd.org
mem_put is done last. To avoid a double free in FRAME_DESTROY,
frame->local is set to NULL before doing STACK_UNWIND.
I suspect that skipping one of the above three operations (maybe the 1st
or the 3rd) in the crypt xlator might be the reason for the bug.
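The ordering described above can be simulated like this (a Python sketch of the C pattern; Pool, Frame and the function names are illustrative stand-ins, not the real libglusterfs API):

```python
class DoubleFree(Exception):
    pass

class Pool:
    """Tracks released objects so a second mem_put is detectable."""
    def __init__(self):
        self._freed = set()

    def mem_put(self, obj):
        if id(obj) in self._freed:
            raise DoubleFree("mem_put called twice on the same local")
        self._freed.add(id(obj))

class Frame:
    def __init__(self):
        self.local = object()  # per-call scratch data

def frame_destroy(frame, pool):
    # FRAME_DESTROY releases frame.local if it is still attached.
    if frame.local is not None:
        pool.mem_put(frame.local)

def unwind(frame, pool):
    # Safe ordering: detach local first, then unwind (which may reach
    # FRAME_DESTROY), and only at the end release the detached local.
    local = frame.local
    frame.local = None          # so FRAME_DESTROY will not free it again
    frame_destroy(frame, pool)  # stands in for STACK_UNWIND
    pool.mem_put(local)         # mem_put is done last
```

Skipping the `frame.local = None` step makes `frame_destroy` and the final `mem_put` release the same object, which is exactly the double free suspected here.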
Regards,
Raghavendra Bhat
files get flooded when removexattr() can't find a
specified key or value
1165938 - Fix regression test spurious failures
1192522 - index heal doesn't continue crawl on self-heal failure
1193970 - Fix spurious ssl-authz.t regression failure (backport)
Regards,
Raghavendra Bhat
://review.gluster.org/#/c/9712/
Regards,
Raghavendra Bhat
On Thursday 19 February 2015 07:34 PM, Venky Shankar wrote:
Hi folks,
Listed below is the initial patchset for the upcoming bitrot detection
feature targeted for GlusterFS 3.7. As of now, this set of patches
implements object signing. Myself
correctly
1174170 - Glusterfs outputs a lot of warnings and errors when quota is
enabled
1186119 - tar on a gluster directory gives message file changed as we
read it even though no updates to file in progress
Regards,
Raghavendra Bhat
in the description for USS under gluster
volume set help
1171259 - mount.glusterfs does not understand -n option
Regards,
Raghavendra Bhat
On Monday 29 December 2014 01:19 PM, RAGHAVENDRA TALUR wrote:
On Sun, Dec 28, 2014 at 5:03 PM, Vijay Bellur vbel...@redhat.com wrote:
On 12/24/2014 02:30 PM, Raghavendra Bhat wrote:
Hi,
I have a doubt: in user-serviceable snapshots, the statfs call is
not implemented as of now. There are 2 ways
Hi,
glusterfs-3.6.2beta1 has been released and the rpms can be found here.
Regards,
Raghavendra Bhat
On Friday 26 December 2014 12:22 PM, Raghavendra Bhat wrote:
Hi,
glusterfs-3.6.2beta1 has been released and the rpms can be found here.
Regards,
Raghavendra Bhat
___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org
, with the path and the inode being
set to the root of the main volume.
OR
2) It can redirect the call to the snapshot world (the snapshot daemon
which talks to all the snapshots of that particular volume) and send
back the reply that it has obtained.
Please provide feedback.
Regards,
Raghavendra
On Thursday 18 December 2014 12:58 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: Raghavendra Bhat rab...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Cc: Anand Avati aav...@redhat.com
Sent: Thursday, December 18, 2014 12:31:41 PM
Subject: [Gluster-devel] explicit
On Tuesday 23 December 2014 11:09 AM, Atin Mukherjee wrote:
Can you please take in http://review.gluster.org/#/c/9328/ for 3.6.2?
~Atin
On 12/19/2014 02:05 PM, Raghavendra Bhat wrote:
Hi,
glusterfs-3.6.2beta1 has been released. I am planning to make 3.6.2
before the end of this year.
or assigned state.
https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&classification=Community&f1=blocked&list_id=3106878&o1=substring&product=GlusterFS&query_format=advanced&v1=1163723
Regards,
Raghavendra Bhat
Hi David,
Can you please provide the log files? You can find them in
/var/log/glusterfs.
Regards,
Raghavendra Bhat
conflict. Can
you please rebase and resend it?
Regards,
Raghavendra Bhat
This is the remainder of a fix that has been partially done in
http://review.gluster.org/8933, and that one has been operating without
a hitch for a while.
Without the fix, self heal breaks on NetBSD if it needs to iterate
as well.
Please provide feedback.
Regards,
Raghavendra Bhat
On Monday 01 December 2014 04:51 PM, Raghavendra G wrote:
On Fri, Nov 28, 2014 at 6:48 PM, RAGHAVENDRA TALUR
raghavendra.ta...@gmail.com mailto:raghavendra.ta...@gmail.com wrote:
On Thu, Nov 27, 2014 at 2:59 PM, Raghavendra Bhat
rab...@redhat.com mailto:rab...@redhat.com wrote
at the next snapshot (2 snapshots
example). If there is no next snapshot, then look at the previous snapshot.
Please provide feedback about how this issue can be handled.
Regards,
Raghavendra Bhat
Hi,
I have sent a patch to add the info on how glusterfs manages inodes and
dentries.
http://review.gluster.org/#/c/8815/
Please review it and provide feedback to improve it.
Regards,
Raghavendra Bhat
know if I have missed anything. Please provide feedback.
Regards,
Raghavendra Bhat
On Tuesday 08 July 2014 01:21 AM, Anand Avati wrote:
On Mon, Jul 7, 2014 at 12:48 PM, Raghavendra Bhat rab...@redhat.com
mailto:rab...@redhat.com wrote:
Hi,
As per my understanding nfs server is not doing inode linking in
readdirp callback. Because of this there might be some
Hi,
I think the regression test bug-1112559.t is causing some spurious
failures. I see some regression jobs failing because of it.
Regards,
Raghavendra Bhat
for
confirmation for each.
Raghavendra Talur
Agree with Raghavendra Talur. It would be better to ask the user when the
force option is not given. The method suggested by Talur seems neat.
Regards,
Raghavendra Bhat
Do you think notification would be more than enough, or do we need to introduce
(inode->gfid))
return inode;
...
}
I think it is done with the intention that the root inode should *never*
get removed from the active inodes list (not even accidentally). So
an unref on the root inode is a no-op. I don't know whether there are any
other reasons.
Regards,
Raghavendra
-graph: init failed
[2014-06-20 14:19:01.897635] W [glusterfsd.c:1182:cleanup_and_exit] (--
0-: received signum (0), shutting down
[2014-06-20 14:19:01.897677] I [fuse-bridge.c:5561:fini] 0-fuse:
Unmounting '/mnt/glusterfs/0'.
Regards,
Raghavendra Bhat
On Wednesday 04 June 2014 11:23 AM, Rajesh Joseph wrote:
- Original Message -
From: M S Vishwanath Bhat msvb...@gmail.com
To: Rajesh Joseph rjos...@redhat.com
Cc: Vijay Bellur vbel...@redhat.com, Seema Naik sen...@redhat.com, Gluster
Devel
gluster-devel@gluster.org
Sent: Tuesday, June
change the inode table's
lru_limit variable as well as part of reconfigure? If so, then we might
have to remove the extra inodes present in the lru list by calling
inode_table_prune().
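A toy model of that reconfigure path (hypothetical Python; the real table is C code in libglusterfs and the pruning happens in inode_table_prune()):

```python
from collections import OrderedDict

class InodeTable:
    """Toy inode table: keeps unreferenced inodes in an lru list that is
    capped at lru_limit entries."""

    def __init__(self, lru_limit):
        self.lru_limit = lru_limit
        self.lru = OrderedDict()  # gfid -> inode, oldest entries first

    def add(self, gfid, inode):
        self.lru[gfid] = inode
        self.lru.move_to_end(gfid)  # most recently used goes last
        self.prune()

    def reconfigure(self, new_limit):
        # Changing lru_limit at runtime must also evict the extra inodes
        # already sitting in the lru list, as suggested above.
        self.lru_limit = new_limit
        self.prune()

    def prune(self):
        while len(self.lru) > self.lru_limit:
            self.lru.popitem(last=False)  # drop the least recently used
```

Without the prune() call in reconfigure, shrinking the limit would leave the list over-full until the next add.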
Please provide feedback.
Regards,
Raghavendra Bhat