Davy,
I will check this with Kaleb and get back to you.
-Atin
Sent from one plus one
On Aug 12, 2015 7:22 PM, Davy Croonen davy.croo...@smartbit.be wrote:
Atin,
No problem to raise a bug for this, but isn’t this already addressed here:
Bug 670
I faced the same issue with the sharding translator. I fixed it by making its
readdirp callback initialize individual entries' inode ctx, some of these being
xattr values, which are filled into the entry dict by the posix translator.
Here is the patch that got merged recently:
Kotresh Hiremath Ravishankar khire...@redhat.com wrote:
I have fixed the above in the following four machines, which are up, by adding
export PATH=$PATH:/build/install/sbin:/build/install/bin in ~/.kshrc, and
similarly in the other shells' rc files, as I didn't know the default shell
used by the regression runs.
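The one-line fix above can be applied across the common shells' startup files in one pass; this is a hedged sketch, assuming the /build/install prefix from the message and that the slaves use ksh, bash, or a plain POSIX shell:

```shell
# Append the regression install paths to the rc files of the common
# shells, since the default login shell on the slaves is unknown.
# The /build/install prefix is taken from the message above.
line='export PATH=$PATH:/build/install/sbin:/build/install/bin'
for rc in ~/.kshrc ~/.bashrc ~/.profile; do
    # Append only if not already present, so re-running stays idempotent.
    grep -qxF "$line" "$rc" 2>/dev/null || echo "$line" >> "$rc"
done
```

The grep guard matters because the regression harness may be reprovisioned and the fix re-run; without it the rc files accumulate duplicate export lines.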
IMO you
- Original Message -
From: Raghavendra Gowdappa rgowd...@redhat.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Mohammed Rafi K C rkavu...@redhat.com, Gluster Devel
gluster-devel@gluster.org, Dan Lambright dlamb...@redhat.com, Nithya
Balachandran nbala...@redhat.com, Ben Turner
- Original Message -
From: Krutika Dhananjay kdhan...@redhat.com
To: Mohammed Rafi K C rkavu...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Dan Lambright
dlamb...@redhat.com, Nithya Balachandran
nbala...@redhat.com, Raghavendra Gowdappa rgowd...@redhat.com, Ben
Turner
Hi,
Do we have plans to support semi-synchronous type replication in the future?
By semi-sync I mean writing to one leg of the replica, securing the write on
faster stable storage (capacitor-backed SSD or NVRAM), and then acknowledging
the client. The write on the other replica leg may happen at a later
Hi,
./tests/geo-rep/georep-basic-dr-rsync.t fails on the regression machines as
well as on my local machine. Requesting the geo-rep team to look into it.
link:
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/9158/consoleFull
Regards,
Susant
Hi All,
In about 5 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d '12:00 UTC')
- agenda:
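For the time conversion, note that GNU date wants the date string as a single quoted argument; a quick sanity check against the times listed above (assuming GNU coreutils and standard tzdata zone names):

```shell
# Print the meeting time in your local timezone (GNU date).
date -d '12:00 UTC'
# Cross-check against the listed times by forcing a zone:
TZ=Asia/Kolkata date -d '12:00 UTC' +%H:%M   # prints 17:30 (IST)
TZ=UTC          date -d '12:00 UTC' +%H:%M   # prints 12:00
```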
Hmm, that's kind of risky. What if your good leg fails before the sync happens
to the secondary leg? A replay cache may serve as a lifeline in such a scenario.
Thanks
-Anoop
- Original Message -
From: Ravishankar N ravishan...@redhat.com
To: Anoop Nair ann...@redhat.com,
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-12/gluster-meeting.2015-08-12-11.59.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-12/gluster-meeting.2015-08-12-11.59.txt
Log:
On 08/12/2015 12:50 PM, Anoop Nair wrote:
Hi,
Do we have plans to support semi-synchronous type replication in the future?
By semi-sync I mean writing to one leg of the replica, securing the write on faster stable
storage (capacitor-backed SSD or NVRAM), and then acknowledging the client. The
On 08/12/2015 05:56 PM, Anoop Nair wrote:
Hmm, that's kind of risky. What if your good leg fails before the sync happens
to the secondary leg?
Oh, the writes would still need to happen as a part of the AFR
transaction; so if the writes (which are wound to all bricks
immediately, it's just
Well, this looks like a bug in 3.7 as well. I've posted a fix [1]
to address it.
[1] http://review.gluster.org/11898
Could you please raise a bug for this?
~Atin
On 08/12/2015 01:32 PM, Davy Croonen wrote:
Hi Atin
Thanks for your answer. The op-version was indeed an old one, 30501 to
Could someone with merge rights take
http://review.gluster.org/#/c/11858/ in for the 3.7 branch? This
backport has +2 from the maintainer and has passed regressions.
Thanks in advance :-)
Ravi
___
Gluster-devel mailing list
Gluster-devel@gluster.org
- Original Message -
From: Ravishankar N ravishan...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, August 12, 2015 6:01:16 PM
Subject: [Gluster-devel] Patch merge request-3.7 branch:
http://review.gluster.org/#/c/11858/
Could someone with merge
On Wednesday 29 July 2015 Vijay Bellur wrote:
On Wednesday 29 July 2015 03:40 PM, Pranith Kumar Karampuri wrote:
hi,
I just updated
https://public.pad.fsfe.org/p/gluster-spurious-failures with the latest
spurious failures we saw in linux and NetBSD regressions. Could you guys
update
Hi Emmanuel,
I checked the NetBSD regression machines and found that they were already
configured with passwordless SSH for root.
The issue was that geo-rep runs 'gluster vol info' via ssh, and it can't find
gluster in the PATH over ssh.
I have fixed the above in the following four machines, which are up, by
Hi All,
We are facing some inconsistent behavior for fops like rename, unlink,
etc. due to the lack of a lookup following a readdirp; more specifically, if
inodes/gfids are populated via a readdirp call and this nodeid is shared
with the kernel, md-cache will cache this based on the base name. Then
subsequent
I think NSR is a good candidate here. It has leader election for
writing data; that could be enhanced to give more priority to SSD
bricks during leader election.
regards
Aravinda
On 08/12/2015 06:06 PM, Ravishankar N wrote:
On 08/12/2015 05:56 PM, Anoop Nair wrote:
Hmm, that's kind