On 09/02/2015 07:33 PM, Vijay Bellur wrote:
On Wednesday 02 September 2015 06:38 PM, Atin Mukherjee wrote:
IIRC, Pranith already volunteered for it in one of the last community
meetings?
Thanks Atin. I do recollect it now.
Pranith - can you confirm being the release manager for 3.7.5?
Best Regards,
Fang Huang
On Monday, 14 September 2015, 16:30, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
hi,
Here is a list of common improvements for both ec and afr planned over
the next few months:
1) Granular entry self-heals.
Both afr and ec at the moment
hi,
Please use
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.5 for tracking
bug fixes that need to get into the 3.7.5 release.
Pranith
hi,
Here is a list of common improvements for both ec and afr planned over
the next few months:
1) Granular entry self-heals.
Both afr and ec at the moment do a lot of readdirs and lookups to
figure out the differences between the directories to perform heals.
Kritika, Ravi, Anuradha and
Hi all,
3.7.5 is scheduled to be tagged in three days. This cannot be extended as it
will break release schedules for others.
Please ensure that any changes you want to get into 3.7.5 get merged
by the deadline. Also make sure to add those bugs to the tracker bug
[1].
There are around 10
On 12/08/2015 09:02 AM, Pranith Kumar Karampuri wrote:
On 12/08/2015 02:53 AM, Shyam wrote:
Hi,
Why not think along the lines of new FOPs like fop_compound(_cbk),
where the inargs to this FOP are a list of FOPs to execute (either in
order or in any order)?
That is the intent. The question
hi,
Draft of the design doc:
The main motivation for the design of this feature is to reduce network
round trips by sending more than one fop in a network operation,
preferably without introducing new rpcs.
There are 2 new xlators: compound-fop-sender and compound-fop-receiver.
Do you mind adding gluster-users to that thread? It would be nice to
know what problems they ran into, so that we can fix them.
Pranith
On 12/07/2015 03:44 PM, Emmanuel Dreyfus wrote:
Hello
In case nobody noticed, there is ongoing discussion on the dovecot
mailing list about using glusterfs as
On 12/09/2015 10:39 AM, Prashanth Pai wrote:
However, I’d be even more comfortable with an even simpler approach that
avoids the need to solve what the database folks (who have dealt with
complex transactions for years) would tell us is a really hard problem.
Instead of designing for every
On 12/09/2015 06:37 AM, Vijay Bellur wrote:
On 12/08/2015 03:45 PM, Jeff Darcy wrote:
On December 8, 2015 at 12:53:04 PM, Ira Cooper (i...@redhat.com) wrote:
Raghavendra Gowdappa writes:
I propose that we define a "compound op" that contains ops.
Within each op, there are fields that can
On 12/09/2015 08:11 PM, Shyam wrote:
On 12/09/2015 02:37 AM, Soumya Koduri wrote:
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote:
On 12/09/2015 06:37 AM, Vijay Bellur wrote:
On 12/08/2015 03:45 PM, Jeff Darcy wrote:
On December 8, 2015 at 12:53:04 PM, Ira Cooper (i
On 12/09/2015 11:48 PM, Pranith Kumar Karampuri wrote:
On 12/09/2015 08:11 PM, Shyam wrote:
On 12/09/2015 02:37 AM, Soumya Koduri wrote:
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote:
On 12/09/2015 06:37 AM, Vijay Bellur wrote:
On 12/08/2015 03:45 PM, Jeff Darcy wrote
On 12/09/2015 08:08 PM, Shyam wrote:
On 12/09/2015 12:52 AM, Pranith Kumar Karampuri wrote:
On 12/09/2015 10:39 AM, Prashanth Pai wrote:
However, I’d be even more comfortable with an even simpler approach
that
avoids the need to solve what the database folks (who have dealt with
complex
On 01/02/2016 10:11 PM, Raghavendra Talur wrote:
On Jan 2, 2016 8:18 PM, "Atin Mukherjee" > wrote:
>
> -Atin
> Sent from one plus one
> On Jan 2, 2016 4:41 PM, "Raghavendra Talur" >
hi,
I have been thinking of coming up with a utility similar to strace
for gluster. I am sending this mail to find out if anyone else is also
thinking about it, and how they are solving this problem, so that we can
come up with a concrete solution that we can implement.
These are my
.
Pranith
Regards,
-Prashanth Pai
- Original Message -
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, January 4, 2016 10:49:50 AM
Subject: [Gluster-devel] strace like utility for gluster
hi,
It seems like having two ways to create a dictionary is causing problems.
There are quite a few dict_new()/dict_destroy() or
get_new_dict()/dict_unref() pairings in the code base. So I stopped exposing the
functions without ref/unref, i.e. get_new_dict()/dict_destroy(), as part
of
On 01/07/2016 02:39 PM, Emmanuel Dreyfus wrote:
On Wed, Jan 06, 2016 at 05:49:04PM +0530, Ravishankar N wrote:
I re-triggered NetBSD regressions for http://review.gluster.org/#/c/13041/3
but they are being run in silent mode and are not completing. Can someone
from the infra-team take a
On 01/08/2016 03:25 PM, Emmanuel Dreyfus wrote:
On Fri, Jan 08, 2016 at 03:18:02PM +0530, Pranith Kumar Karampuri wrote:
Does the cleanup script need to be manually executed on the NetBSD
machine?
You can run the script manually, but if the goal is to restore a
misbehaving machine
On 01/08/2016 02:08 PM, Emmanuel Dreyfus wrote:
On Fri, Jan 08, 2016 at 11:45:20AM +0530, Pranith Kumar Karampuri wrote:
1) How to set up NetBSD VMs on my laptop which are of the exact version as the
ones that are run on the build systems.
Well, the easier way is to pick the VM image we run
After debugging with David, we found that the issue is already fixed for
3.7.7 by the patch http://review.gluster.org/12312
Pranith
On 12/22/2015 10:45 PM, David Robinson wrote:
Niels,
> 1. how is infiniband involved/configured in this environment?
gfsib01bkp and gfs02bkp are connected via
On 12/26/2015 04:45 AM, Oleksandr Natalenko wrote:
Also, here is the valgrind output with our custom tool that does GlusterFS volume
traversal (with simple stats), just like the find tool. In this case NFS-Ganesha
is not used.
https://gist.github.com/e4602a50d3c98f7a2766
hi Oleksandr,
I went
hi Glomski,
This is the second time I am hearing about memory allocation
problems in 3.7.6, but this time on the brick side. Are you able to recreate
this issue? Will it be possible to get statedumps of the brick
processes just before they crash?
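For reference, a minimal sketch of how brick statedumps are usually triggered from the CLI (the volume name is a placeholder; the dump files normally land under /var/run/gluster/ on each brick node):

# dump the state of every brick process of the volume
gluster volume statedump <volname>
# the newest files here are the dumps that were just generated
ls -lt /var/run/gluster/ | head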
Pranith
On 12/22/2015 02:25 AM, Glomski,
--
From: "Pranith Kumar Karampuri" <pkara...@redhat.com
<mailto:pkara...@redhat.com>>
To: "Glomski, Patrick" <patrick.glom...@corvidtec.com
<mailto:patrick.glom...@corvidtec.com>>; gluster-devel@gluster.org
<mailto:gluster-devel@gluster.org&
On 12/22/2015 10:45 PM, David Robinson wrote:
Niels,
> 1. how is infiniband involved/configured in this environment?
gfsib01bkp and gfs02bkp are connected via infiniband. We are using the tcp
transport as I was never able to get RDMA to work.
Volume Name: gfsbackup
Type: Distribute
Volume ID:
hi,
I am going to make the 3.7.7 release early next week. Please make sure
your patches are merged. If you have any patches that must go into 3.7.7,
let me know. I will wait for them to be merged.
Pranith
On 12/17/2015 04:03 PM, Vijay Bellur wrote:
On 12/17/2015 05:09 AM, Sachidananda URS wrote:
Hi,
I tried the same use case with pure DHT (1 & 2 nodes). I don't see any
problems.
However, if I try the same tests with distributed replicate, the indices
go into red.
If any additional details
On 01/08/2016 08:14 PM, Emmanuel Dreyfus wrote:
On Fri, Jan 08, 2016 at 10:56:22AM +, Emmanuel Dreyfus wrote:
On Fri, Jan 08, 2016 at 03:18:02PM +0530, Pranith Kumar Karampuri wrote:
With your support I think we can make things better. To avoid duplication of
work, did you take any tests
Does it seem reasonable? That way nothing can hang for more than 2 hours.
That addresses the technical issue of hanging tests. It doesn't address
the process issue of the entire project and development team being held
hostage to one feature.
Guys,
I think we just need to come up with
On 01/13/2016 09:14 AM, Dan Lambright wrote:
- Original Message -
From: "Niels de Vos"
To: "Dan Lambright" , "Joseph Fernandes"
Cc: gluster-devel@gluster.org
Sent: Tuesday, January 12, 2016 3:52:51 PM
Subject:
On 01/11/2016 01:00 PM, Pranith Kumar Karampuri wrote:
On 01/09/2016 12:34 AM, Vijay Bellur wrote:
On 01/08/2016 08:18 AM, Jeff Darcy wrote:
I think we just need to come up with rules for considering a
platform to have voting ability before merging the patch.
I totally agree
On 01/09/2016 12:34 AM, Vijay Bellur wrote:
On 01/08/2016 08:18 AM, Jeff Darcy wrote:
I think we just need to come up with rules for considering a
platform to have voting ability before merging the patch.
I totally agree, except for the "just" part. ;) IMO a platform is much
hi,
Does anyone know/have any scripts to get this information from
bugzilla/gerrit?
--
Pranith
Oleksandr,
Could you take a statedump of the shd process once every 5-10 minutes and send
maybe 5 samples of them when it starts to increase? This will help us find
which data types are being allocated a lot and can lead to coming up with
possible theories for the increase.
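A minimal sketch of how such samples could be collected (the glustershd process match, the 10-minute interval, and the default dump directory are assumptions):

# glusterfs processes write a statedump when they receive SIGUSR1
shd_pid=$(pgrep -f glustershd)
# take 5 samples, one every 10 minutes
for i in 1 2 3 4 5; do
    kill -USR1 "$shd_pid"
    sleep 600
done
# the generated dumps usually show up under /var/run/gluster/
ls -lt /var/run/gluster/ | head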
On Wed, Jun 8, 2016 at 12:03
ew is allocated but I
would just like to confirm.
> 08.06.2016 09:50, Pranith Kumar Karampuri wrote:
>
> Oleksandr,
>> Could you take a statedump of the shd process once every 5-10 minutes and
>> send maybe 5 samples of them when it starts to increase? This will
>> help us
Hey Amye,
The form doesn't seem to allow editing "Your Role within Gluster" and
"Why should you attend?" Could you let us know how to fill these fields?
Pranith
On Sat, May 28, 2016 at 12:45 AM, Amye Scavarda wrote:
> Important happenings for Gluster this month:
> We're
enough to cover the needed behavior?
>
> Xavi
>
>
>
>> [1] http://review.gluster.org/13885
>>
>> regards,
>> Raghavendra
>>
>> - Original Message -
>>
>>> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>>&g
There are two things you need to change for O_DIRECT to be handled properly
in the gluster stack:
1) gluster volume set <volname> performance.strict-o-direct on
(on NFS this option is: gluster volume set <volname> performance.nfs.strict-o-direct on)
2) gluster volume set <volname> network.remote-dio off
Please note that we
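Putting those settings together as a runnable sketch ("myvol" is a placeholder volume name):

# make writes honour O_DIRECT through the whole stack
gluster volume set myvol performance.strict-o-direct on
gluster volume set myvol network.remote-dio off
# for NFS access, the equivalent strict option
gluster volume set myvol performance.nfs.strict-o-direct on
# verify the reconfigured options
gluster volume info myvol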
hi,
Is there a plan to come up with an interface for snapshot
functionality? For example, in handling different types of sockets in
gluster all we need to do is specify which interface we want to use, and
ib, network-socket, and unix-domain sockets all implement the interface. The code
doesn't
Cool. Nice to know it is on the cards.
On Wed, Jun 22, 2016 at 11:45 AM, Rajesh Joseph <rjos...@redhat.com> wrote:
>
>
> On Tue, Jun 21, 2016 at 4:24 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> hi,
>> Is there a plan to
On Wed, Jun 22, 2016 at 5:50 AM, Sachin Pandit <span...@commvault.com>
wrote:
> Hey Pranith, I am good, I hope you are doing good too.
>
> Please find the comments inline.
>
>
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Tuesday, June 2
Hey!!
Hope you are doing good. I took a look at the bt. So when a flush
comes, write-behind has to flush all the writes down. I see the following
frame hung in iob_unref:
Thread 7 (Thread 0x7fa601a30700 (LWP 16218)):
#0 0x7fa60cc55225 in pthread_spin_lock () from /lib64/libpthread.so.0
hi Xavi,
Meet Manoj from the performance team at Red Hat. He has been testing EC
performance in his stretch clusters. He found some interesting things we
would like to share with you.
1) When we perform multiple streams of big-file writes (12 parallel dds, I
think) he found one thread to be
In case you missed the post on Gluster twitter/facebook,
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
We would love to hear your feedback on this.
--
Pranith
hi,
Based on the debugging done by Niels on the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1181500#c5, we need a
confirmation about what listxattr returns on FreeBSD. Could someone please
help?
--
Pranith
On 01/08/2016 08:50 PM, Emmanuel Dreyfus wrote:
On Fri, Jan 08, 2016 at 08:37:16PM +0530, Pranith Kumar Karampuri wrote:
NetBSD)
vnd=`vnconfig -l | \
awk '!/not in use/{printf("%s%s:%d ", $1, $2, $5);}'`
Can there be Loopba
On 01/10/2016 02:04 PM, Pranith Kumar Karampuri wrote:
On 01/10/2016 11:08 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
tests/basic/afr/arbiter-statfs.t
I posted patches to fix this one (but it seems Jenkins is down? No
regression is running)
On 01/10/2016 11:08 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
tests/basic/afr/arbiter-statfs.t
I posted patches to fix this one (but it seems Jenkins is down? No
regression is running)
tests/basic/afr/self-heal.t
It seems like in this run
Result: PASS
Build timed out (after 300 minutes). Marking the build as failed.
Build was aborted
Finished: FAILURE
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17664/console
Pranith
On Tue, Jun 28, 2016 at 10:21 AM, Poornima Gurusiddaiah <pguru...@redhat.com
> wrote:
> Regards,
> Poornima
>
> --
>
> *From: *"Pranith Kumar Karampuri" <pkara...@redhat.com>
> *To: *"Xavier Hernandez" <xhern
>
>> Yes it will need some changes but I don't think they are big changes. I
>> think the functions to decode/encode already exist. We just need to
>> move encoding/decoding as tasks and run them as synctasks.
>>
>
> I was also thinking about sleeping fops. Currently when they are resumed,
> they are
On 02/08/2016 04:16 PM, Ravishankar N wrote:
[Removing Milind, adding Pranith]
On 02/08/2016 04:09 PM, Emmanuel Dreyfus wrote:
On Mon, Feb 08, 2016 at 04:05:44PM +0530, Ravishankar N wrote:
The patch to add it to bad tests has already been merged, so I guess
this
.t's failure won't pop up
On 02/08/2016 04:22 PM, Pranith Kumar Karampuri wrote:
On 02/08/2016 04:16 PM, Ravishankar N wrote:
[Removing Milind, adding Pranith]
On 02/08/2016 04:09 PM, Emmanuel Dreyfus wrote:
On Mon, Feb 08, 2016 at 04:05:44PM +0530, Ravishankar N wrote:
The patch to add it to bad tests has already
On 02/04/2016 03:39 PM, Kaushal M wrote:
I'm okay with this.
+1
On Thu, Feb 4, 2016 at 3:34 PM, Raghavendra Talur wrote:
Hi,
We recently changed the Jenkins builds to be triggered by the following:
1. Verified+1
2. Code-review+2
3. recheck
On 02/08/2016 08:20 PM, Emmanuel Dreyfus wrote:
On Mon, Feb 08, 2016 at 07:27:46PM +0530, Pranith Kumar Karampuri wrote:
I don't see any logs in the archive. Did we change something?
I think they are in a different tarball, in /archives/logs
I think the regression run is not giving
Emmanuel,
I don't see any logs in the archive. Did we change something?
Pranith
On 02/08/2016 05:04 PM, Michael Scherer wrote:
Le lundi 08 février 2016 à 16:22 +0530, Pranith Kumar Karampuri a
écrit :
On 02/08/2016 04:16 PM, Ravishankar N wrote:
[Removing Milind, adding Pranith]
On 02/08/2016 04:09 PM, Emmanuel Dreyfus wrote:
On Mon, Feb 08, 2016 at 04:05:44PM +0530
On 02/09/2016 04:13 PM, Emmanuel Dreyfus wrote:
On Tue, Feb 09, 2016 at 11:56:37AM +0530, Pranith Kumar Karampuri wrote:
I think the regression run is not giving that link anymore when the crash
happens? Could you please also add that as a link in the regression run?
There was the path
On 02/04/2016 08:48 PM, Kaushal M wrote:
I'm still up to mentor the sub-directory mount support idea.
this one? http://review.gluster.org/10186
Pranith
On Thu, Feb 4, 2016 at 2:38 PM, Amye Scavarda wrote:
Hi all,
Google Summer of Code opens up their organization
hi,
I see the following crash. Is this a known issue?
(gdb) bt
#0 0x7f3f8c339fb4 in dht_selfheal_dir_setattr
(frame=0x7f3f6c002a0c, loc=0x7f3f6c000944, stbuf=0x7f3f6c0009d4,
valid=16777215,
layout=0x7f3f6c004140) at
hi Atin, Kaushal,
Is this a known issue?
(gdb) #1 0xbb789fb7 in __synclock_unlock (lock=0xbb1d4ac0)
(gdb) at
/home/jenkins/root/workspace/rackspace-netbsd7-regression-triggered/libglusterfs/src/syncop.c:1056
#2 0xbb789ffd in synclock_unlock (lock=0xbb1d4ac0)
at
, Pranith Kumar Karampuri wrote:
hi Atin, Kaushal,
Is this a known issue?
(gdb) #1 0xbb789fb7 in __synclock_unlock (lock=0xbb1d4ac0)
(gdb) at
/home/jenkins/root/workspace/rackspace-netbsd7-regression-triggered/libglusterfs/src/syncop.c:1056
#2 0xbb789ffd in synclock_unlock (lock
On 01/28/2016 05:05 PM, Pranith Kumar Karampuri wrote:
With baul jianguo's help I am able to see that FLUSH fops are hanging
for some reason.
pk1@localhost - ~/Downloads
17:02:13 :) ⚡ grep "unique=" client-dump1.txt
unique=3160758373
unique=2073075682
unique=1455047665
uni
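One quick way to confirm a frame is really hung is to check whether the same unique= value keeps appearing in successive dumps; a small sketch, assuming the dumps are saved as client-dump1.txt, client-dump2.txt:

# request ids present in the first dump
grep "unique=" client-dump1.txt | sort -u > ids-1.txt
# ids that are still present in the next dump are likely hung fops
grep -F -f ids-1.txt client-dump2.txt | sort -u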
On 01/27/2016 09:21 AM, Atin Mukherjee wrote:
On 01/27/2016 07:21 AM, Vijay Bellur wrote:
On 01/26/2016 01:19 PM, Marc Eisenbarth wrote:
I'm trying to set a parameter on a volume, but am unable to due to the
following message. I have a large number of connected clients and it's
likely that
On 01/27/2016 12:49 PM, Atin Mukherjee wrote:
On 01/27/2016 12:40 PM, Pranith Kumar Karampuri wrote:
On 01/27/2016 09:21 AM, Atin Mukherjee wrote:
On 01/27/2016 07:21 AM, Vijay Bellur wrote:
On 01/26/2016 01:19 PM, Marc Eisenbarth wrote:
I'm trying to set a parameter on a volume
On 01/28/2016 02:59 PM, baul jianguo wrote:
http://pastebin.centos.org/38941/
client statedump; only the pids 27419, 168030, 208655 hang. You can search
for these pids in the statedump file.
Could you take one more statedump please?
Pranith
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
28, 2016 at 5:29 PM, baul jianguo <roidi...@gmail.com> wrote:
http://pastebin.centos.org/38941/
client statedump; only the pids 27419, 168030, 208655 hang. You can search
for these pids in the statedump file.
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
On 01/26/2016 08:14 AM, Vijay Bellur wrote:
On 01/25/2016 12:36 AM, Ravishankar N wrote:
Hi,
We are planning to introduce a throttling xlator on the server (brick)
process to regulate FOPS. The main motivation is to solve complaints
about AFR self-heal taking too much CPU. (due
On 02/03/2016 09:20 AM, Shyam wrote:
On 02/02/2016 06:22 PM, Jeff Darcy wrote:
Background: Quick-read + open-behind xlators are developed to
help
in small file workload reads like apache webserver, tar etc to get the
data of the file in lookup FOP itself. What happens is, when a
The file data would be located based on its GFID, so before the *first*
lookup/stat for a file, there is no way to know its GFID.
NOTE: Instead of a name hash, the GFID hash is used to get immunity
against renames and the like, as a name hash could change the location
information for the file
On 02/03/2016 07:54 PM, Jeff Darcy wrote:
The problem is with workloads which know the files that need to be read
without readdir, like hyperlinks (webserver), swift objects, etc. These
are the two I know of which will have this problem, which can't be improved
because we don't have metadata, data
On 02/02/2016 06:22 PM, Jeff Darcy wrote:
Background: Quick-read + open-behind xlators are developed to help
in small file workload reads like apache webserver, tar etc to get the
data of the file in lookup FOP itself. What happens is, when a lookup
FOP is executed, GF_CONTENT_KEY is
hi,
Background: The quick-read + open-behind xlators were developed to help
small-file read workloads (like an Apache webserver, tar, etc.) get the
data of the file in the lookup FOP itself. What happens is, when a lookup
FOP is executed, GF_CONTENT_KEY is added in xdata with the max length, and
posix
On 02/01/2016 10:16 PM, Joe Julian wrote:
WTF?
if (!xattrs_list) {
        ret = -EINVAL;
        gf_msg (this->name, GF_LOG_ERROR, -ret,
                AFR_MSG_NO_CHANGELOG,
                "Unable to fetch afr pending changelogs. Is op-version"
                "
On 02/03/2016 11:49 AM, Pranith Kumar Karampuri wrote:
On 02/03/2016 09:20 AM, Shyam wrote:
On 02/02/2016 06:22 PM, Jeff Darcy wrote:
Background: Quick-read + open-behind xlators are developed
to help
in small file workload reads like apache webserver, tar etc to get the
data
On 01/28/2016 07:05 PM, Venky Shankar wrote:
Hey folks,
I just merged patch #13302 (and its 3.7 equivalent) which fixes a scrubber
crash.
This was causing other patches to fail regression.
Requesting a rebase of patches (especially 3.7 pending) that were blocked due to
this.
Thanks a lot
+Anoop, Jiffin
On 01/27/2016 03:25 PM, PankaJ Singh wrote:
Hi,
We are using gluster 3.7.6 on Ubuntu 14.04. We are facing an issue
with the trashcan feature.
Our scenario is as follows:
1. 2 node server (ubuntu 14.04 with glusterfs 3.7.6)
2. 1 client node (ubuntu 14.04)
3. I have created one
Unless the patches fix data-loss/crashes, I will not take any more, other
than the ones which help make regressions consistent:
Final set:
http://review.gluster.org/#/c/12768/
http://review.gluster.org/#/c/13305/ << user asked for this on gluster-users.
http://review.gluster.org/#/c/13119/
On 01/19/2016 08:00 PM, Avra Sengupta wrote:
Hi,
The leader-election-based replication has been called NSR or "New
Style Replication" for a while now. We would like to have a new name
for it that's less generic. It could be something like "Leader
Driven Replication" or something more
hey,
Which process is consuming so much cpu? I went through the logs
you gave me. I see that the following files are in gfid mismatch state:
<066e4525-8f8b-43aa-b7a1-86bbcecc68b9/safebrowsing-backup>,
<1d48754b-b38c-403d-94e2-0f5c41d5f885/recovery.bak>,
,
Could you give me the output
On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
XFS. Server side works OK, I'm able to mount volume again. Brick is 30% full.
Oleksandr,
Will it be possible to get the statedump output of the client and
bricks the next time it happens?
info"? Is there any directory
in there which is LARGE?
Pranith
Please let me know if there is anything else I can provide.
Patrick
On Thu, Jan 21, 2016 at 12:01 AM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
hey,
Which p
/
Number of entries: 0
Brick gfs02a.corvidtec.com:/data/brick02a/homegfs/
Number of entries: 0
Brick gfs02b.corvidtec.com:/data/brick02b/homegfs/
Number of entries: 0
On Thu, Jan 21, 2016 at 10:40 AM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com&g
/data/brick02b/homegfs/
Number of entries: 0
On Thu, Jan 21, 2016 at 11:10 AM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
On 01/21/2016 09:26 PM, Glomski, Patrick wrote:
I should mention that the
On 01/22/2016 07:19 AM, Pranith Kumar Karampuri wrote:
On 01/22/2016 07:13 AM, Glomski, Patrick wrote:
We use the samba glusterfs virtual filesystem (the current version
provided on download.gluster.org <http://download.gluster.org>), but
no Windows clients connect directly.
On Thu, Jan 21, 2016 at 8:37 PM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
Do you have any windows clients? I see a lot of getxattr calls for
"glusterfs.get_real_filename" which lead to full readdirs of the
directories on
ntries: 0
On Thu, Jan 21, 2016 at 11:10 AM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
On 01/21/2016 09:26 PM, Glomski, Patrick wrote:
I should mention that the problem is not currently occurring
and there a
? That helps in finding the
heavily hit function calls.
Something like "for i in {1..20}; do gstack >
sample-$i.txt; done"
Pranith
On Thu, Jan 21, 2016 at 8:49 PM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
On 01/
debug to see '1)' doesn't happen. The gstack
traces I asked for should also help.
Pranith
On Thu, Jan 21, 2016 at 8:49 PM, Pranith Kumar Karampuri
<pkara...@redhat.com <mailto:pkara...@redhat.com>> wrote:
On 01/22/2016 07:13 AM, Glomski, Patrick wrote:
We use th
On 01/22/2016 07:14 AM, li.ping...@zte.com.cn wrote:
Hi Pranith, your reply is much appreciated.
Pranith Kumar Karampuri <pkara...@redhat.com> wrote on 2016/01/20 18:51:19:
> From: Pranith Kumar Karampuri <pkara...@redhat.com>
> To: li.ping...@zte.com.cn, gluster-devel@gluste
/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git -c core.askpass=true fetch
--tags --progress git://review.gluster.org/glusterfs.git
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: fatal: unable to connect to
On 01/25/2016 02:17 AM, Richard Wareing wrote:
Hello all,
Just gave a talk at SCaLE 14x today and I mentioned our new locks
revocation feature which has had a significant impact on our GFS
cluster reliability. As such I wanted to share the patch with the
community, so here's the bugzilla
https://public.pad.fsfe.org/p/glusterfs-3.7.7 is the final list of
patches I am waiting for before making the 3.7.7 release.
Please let me know if I need to wait for any other patches. It would be
great if we can make the tag tomorrow.
Pranith
http://www.gluster.org/pipermail/gluster-devel/2015-September/046773.html
Pranith
On 01/20/2016 04:11 PM, Niels de Vos wrote:
Hi all,
on Saturday the 30th of January I am scheduled to give a presentation
titled "Gluster roadmap, recent improvements and upcoming features":
Sorry for the delay in response.
On 01/15/2016 02:34 PM, li.ping...@zte.com.cn wrote:
The GLUSTERFS_WRITE_IS_APPEND setting in the afr_writev function at the glusterfs
client end makes posix_writev at the server end handle IO write
fops serially instead of in parallel.
i.e. multiple
y to get full statedump.
On Thursday, January 21, 2016 at 14:54:47 EET, Raghavendra G wrote:
On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:
XFS. Server side works OK, I'm able to mount volume
hi,
Traditionally gluster has been using the ctime/mtime of the
files/dirs on the bricks as the stat output. The problem we are seeing with this
approach is that software which depends on it gets confused when there
are differences in these times. Tar especially gives "file changed as we
read it"
On 01/22/2016 03:48 PM, Ravishankar N wrote:
On 01/19/2016 06:44 PM, Ravishankar N wrote:
1) Is there a compelling reason as to why the bricks of the hot tier
are in the reverse order?
2) If there isn't one, should we spend time to fix it so that the
bricks appear in the order in which
On 01/23/2016 10:02 AM, Dan Lambright wrote:
- Original Message -
From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
To: "Ravishankar N" <ravishan...@redhat.com>, "Gluster Devel"
<gluster-devel@gluster.org>, "Dan Lambri
On 02/13/2016 12:13 AM, Richard Wareing wrote:
Hey Ravi,
I'll ping Shreyas about this today. There's also a patch we'll need for
multi-threaded SHD to fix the least-pri queuing. The PID of the process wasn't
tagged correctly via the call frame in my original patch. The patch below
fixes