-Atin
Sent from one plus one
On 05-Mar-2016 11:46 am, "Ajil Abraham" wrote:
On 03/04/2016 10:05 PM, Diego Remolina wrote:
I run a few two node glusterfs instances, but always have a third
machine acting as an arbiter. I am with Jeff on this one, better safe
than sorry.
Setting up a 3rd system without bricks to achieve quorum is very easy.
This is server side
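A minimal sketch of that server-side setup, assuming `node3` is the brick-less third peer and `myvol` is the volume (both names are placeholders):

```shell
# Add the brick-less third node to the trusted storage pool
gluster peer probe node3

# Server-side quorum: glusterd stops the bricks when the pool loses quorum
gluster volume set myvol cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51
```

With three peers and a ratio just over 50%, the bricks stay up as long as any two glusterd instances can see each other.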
Thanks for all the support. After handling the input validation in my
code, Glusterd no longer crashes. I am still waiting for clearance from my
superior to pass on all the details. Expecting him to revert by this Sunday.
- Ajil
On Fri, Mar 4, 2016 at 10:20 AM, Joseph Fernandes
hi,
Only if a preload (a readdir request not initiated by the application, but
triggered by readdir-ahead in an attempt to pre-emptively fill the
readdir-ahead buffer) is in progress will the client wait for its completion.
If the preload has completed, and the client's readdir requests would
On Fri, Mar 4, 2016 at 10:40 AM, Jeff Darcy
> I like the default to be 'none'. Reason: If we have 'auto' as quorum for
> 2-way replication and first brick dies, there is no HA. If users are
> fine with it, it is better to use plain distribute volume
"Availability" is a tricky word. Does it mean access to data now, or
later despite
Hi,
glusterfs-3.6.9 has been released and the packages for RHEL/Fedora/CentOS
can be found here: http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
Requesting people running 3.6.x to please try it out and let us know if
there are any issues.
This release supposedly fixes the bugs
On Fri, Mar 04, 2016 at 09:11:33AM -0500, Glomski, Patrick wrote:
On 03/04/2016 07:30 AM, Pranith Kumar Karampuri wrote:
On 03/04/2016 05:47 PM, Bipin Kunal wrote:
Just an FYI, I installed gluster-nagios-common on a CentOS7 machine
(started with minimal server, so there wasn't much there) to monitor the
bricks in a gluster array. python-ethtool seems to be missing as a
dependency in the rpm spec:
import glusternagios.glustercli as gcli
> File
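Until the spec file grows the dependency, a hedged workaround is to pull it in by hand (assuming a yum-based system, as in the report):

```shell
# python-ethtool is what glusternagios.glustercli needs at import time
yum install -y python-ethtool
```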
For shared secret, create a simple 10 line python program. Here is an example:
https://github.com/heketi/heketi/wiki/API#example
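The wiki example uses the PyJWT library; as an illustration, here is a stdlib-only sketch of the same idea (the function name and the 10-minute expiry are choices of this sketch, not part of the Heketi API):

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def heketi_token(user: str, secret: str, method: str, path: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": user,       # issuer: the Heketi user, e.g. "admin"
        "iat": now,        # issued-at
        "exp": now + 600,  # token valid for 10 minutes (sketch's choice)
        # qsh: SHA-256 of "METHOD&path", ties the token to one request
        "qsh": hashlib.sha256(f"{method}&{path}".encode()).hexdigest(),
    }
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)
```

The resulting token is sent in the request's Authorization header.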
- Luis
- Original Message -
From: "Aravinda"
To: "Kaushal M"
Cc: "Luis Pabon" , "Kanagaraj
- Original Message -
> There are a handful of centos regressions that have been running for over
> eight hours.
>
> I don't know if that's contributing to the short backlog of centos
> regressions waiting to run.
I'm going to kill these in a moment, but here's more specific info
in
I think Raghavendra Talur is already aware of the problem; he sent a
mail on devel asking to disable the regression trigger on code-review +2
or verified +1.
-Atin
Sent from one plus one
On 04-Mar-2016 6:49 pm, "Kaleb Keithley" wrote:
> There are a handful of centos regressions
Hi all,
This week's status:
-Implemented a buffered way of storing the WORM-Retention profile: this
leaves more room for additional profile entries per file, should the
requirement arise in the future.
-Tested the program with distributed and replicated
There are a handful of centos regressions that have been running for over eight
hours.
I don't know if that's contributing to the short backlog of centos regressions
waiting to run.
--
Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
On 03/04/2016 06:23 PM, ABHISHEK PALIWAL wrote:
Ok, just to confirm, glusterd and other brick processes are
running after this node rebooted?
When you run the above command, you need to check
/var/log/glusterfs/glfsheal-volname.log for errors. Setting
client-log-level to
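The truncated suggestion above presumably refers to the volume option; a hedged example (volume name is a placeholder):

```shell
# Raise the client-side log level for the volume, then re-run heal info
gluster volume set myvol diagnostics.client-log-level DEBUG
gluster volume heal myvol info
# Then inspect /var/log/glusterfs/glfsheal-myvol.log
```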
Hi All,
There is a warning while building the master code:
Making install in fdl
Making install in src
CC fdl.lo
CCLD fdl.la
CC logdump.o
CC libfdl.o
CCLD gf_logdump
CC recon.o
CC librecon.o
librecon.c: In function ‘fdl_replay_rename’:
On Thu, Feb 4, 2016 at 7:13 PM, Niels de Vos wrote:
> On Thu, Feb 04, 2016 at 04:15:16PM +0530, Raghavendra Talur wrote:
> > On Thu, Feb 4, 2016 at 4:13 PM, Niels de Vos wrote:
> >
> > > On Thu, Feb 04, 2016 at 03:34:05PM +0530, Raghavendra Talur wrote:
> >
Hi Raghavendra,
On 03/04/2016 03:09 PM, Raghavendra G wrote:
On Fri, Mar 4, 2016 at 2:02 PM, Raghavendra G wrote:
On Thu, Mar 3, 2016 at 6:26 PM, Kotresh Hiremath Ravishankar
Hi Pranith,
Thanks for starting this mail thread.
Looking from a user perspective most important is to get a "good copy"
of data. I agree that people use replication for HA but having stale
data with HA will not have any value.
So I will suggest to make auto quorum as default configuration even
On 03/04/2016 12:10 PM, ABHISHEK PALIWAL wrote:
Hi Ravi,
3. On the rebooted node, do you have ssl enabled by any chance? There
is a bug for "Not able to fetch volfile" when ssl is enabled:
https://bugzilla.redhat.com/show_bug.cgi?id=1258931
-> I have checked, and ssl is disabled, but
hi,
So far default quorum for 2-way replication is 'none' (i.e.
files/directories may go into split-brain) and for 3-way replication and
arbiter based replication it is 'auto' (files/directories won't go into
split-brain). There are requests to make default as 'auto' for 2-way
1008839 (mainline) POST: Certain blocked entry lock info not retained after the
lock is granted
[master] Ie37837 features/locks : Certain blocked entry lock info not
retained after the lock is granted (ABANDONED)
** ata...@redhat.com: Bug 1008839 is in POST, but all changes have been
On Thu, Mar 3, 2016 at 6:26 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi,
>
> Yes, with this patch we need not set conn->trans to NULL in
> rpc_clnt_disable
>
While [1] fixes the crash, things can be improved in the way changelog
uses rpc.
1. In the current code,