Not exactly.
The idea is to make use of the information in the meta file (the list of
journals) so that the backlog does not have to be actively maintained. The
API would return a set of journals (or pathnames) starting from a given
timestamp. This is similar to the history API, but used for live changes.
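For reference, a minimal consumer-side sketch of what such an API could look
like, assuming rolled-over journals follow the CHANGELOG.<unix-timestamp>
naming under the brick's .glusterfs/changelogs directory (the helper name and
brick path are hypothetical):

    # List journal files created at or after a given timestamp.
    changelogs_since() {
        local dir=$1 since=$2 f ts
        for f in "$dir"/CHANGELOG.*; do
            [ -e "$f" ] || continue      # no rolled-over journals yet
            ts=${f##*.}                  # suffix is the rollover timestamp
            [ "$ts" -ge "$since" ] && echo "$f"
        done
    }

    # e.g. journals from the last hour on one brick
    changelogs_since /data/brick1/.glusterfs/changelogs "$(( $(date +%s) - 3600 ))"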
Venky
On
Since the HTIME file is available, we need not maintain changelogs in two
different places as we do now (backend and working dir). We can provide the
backend changelog names to consumers instead of the working-dir names. One
problem with this is that, to parse these changelogs, they have to use one
more RPC.
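A minimal sketch of the consumer side of that, assuming the HTIME index stores
NUL-separated backend changelog pathnames under the brick's
.glusterfs/changelogs/htime directory (the file name and the separator are
assumptions about the current on-disk format, worth verifying):

    # Print the backend changelog paths recorded in an HTIME index file.
    htime_file=/data/brick1/.glusterfs/changelogs/htime/HTIME.1414736400
    tr '\0' '\n' < "$htime_file" | sed '/^$/d'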
On Fri, Oct 31, 2014 at 10:34:53AM +0530, Kiran Patil wrote:
This patch fixed the issue.
Thanks for testing. Could you file a bug for this issue? And feel free
to send the patch to Gerrit too. I can do that if you like; just let me
know.
Niels
Thanks,
Kiran.
On Fri, Oct 31, 2014 at
I set the zfs pool failmode to continue, which should disable only writes
and not reads, as explained below:
failmode=wait | continue | panic
Controls the system behavior in the event of catastrophic pool
failure. This condition is typically a result of a loss of
connectivity to
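For reference, the property can be set and verified like this (the pool name
"tank" is just a placeholder):

    zpool set failmode=continue tank
    zpool get failmode tank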
Hi Niels,
I have raised an issue: https://bugzilla.redhat.com/show_bug.cgi?id=1159248
I am not familiar with sending patches to Gerrit, so please send the patch
to Gerrit.
Thanks,
Kiran.
On Fri, Oct 31, 2014 at 2:22 PM, Niels de Vos nde...@redhat.com wrote:
On Fri, Oct 31, 2014 at 10:34:53AM
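For reference, the usual way a glusterfs patch gets sent to Gerrit is the
rfc.sh helper at the top of the source tree; a minimal sketch, assuming a
Gerrit account with SSH keys is already set up:

    # from a glusterfs checkout, with the fix committed on a local branch
    git commit -s          # -s adds the Signed-off-by: line Gerrit expects
    ./rfc.sh               # rebases on the target branch and pushes for review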
Hello
A question on bug-853690.t: there is a test to check that bricks are out
of sync:
EXPECT_NOT 0x echo $xa
Perhaps I am missing something, but that test may execute after the volume
has completed healing. This looks like the perfect recipe for a spurious
failure and it
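One way to avoid that race, sketched in the test framework's style, is to take
the out-of-sync reading while one brick is still down, so self-heal cannot run
before the assertion (the file name and xattr index below are placeholders,
not the actual test):

    TEST kill_brick $V0 $H0 $B0/${V0}0                  # keep brick 0 down
    TEST dd if=/dev/zero of=$M0/testfile bs=1k count=1  # dirty only the live brick
    # the pending-changelog xattr on the surviving brick must be non-zero
    # *now*, before the dead brick returns and self-heal gets a chance to run
    xa=$(getfattr -e hex -n trusted.afr.$V0-client-0 --only-values $B0/${V0}1/testfile)
    EXPECT_NOT "0x000000000000000000000000" echo $xa
    TEST $CLI volume start $V0 force                    # only now bring brick 0 back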
Hi,
On 10/31/2014 09:31 AM, Xavier Hernandez wrote:
Hi Atin,
On 10/31/2014 05:47 AM, Atin Mukherjee wrote:
On 08/24/2014 11:41 PM, Justin Clift wrote:
I'd be kind of concerned about dropping the test case instead of it
being fixed. It sort of seems like these last few spurious failures
may
I am not seeing the message below in any log file under the
/var/log/glusterfs directory and its subdirectories:
health-check failed, going down
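For what it's worth, that message comes from the brick-side health checker, so
a recursive grep over the default log location should find it if it ever
fired:

    grep -rn "health-check failed" /var/log/glusterfs/
    ls /var/log/glusterfs/bricks/    # brick logs live in this subdirectory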
Hey folks,
Raghavendra (@rabhat) and I have been discussing BitRot[1]
and came up with a list of high-level tasks (breakup items) captured
here[2]. The pad will be updated on an ongoing basis reflecting the
current status/items that are being worked on. As always, contributions
in
On Fri, Oct 31, 2014 at 11:11:47AM +, 박은준 wrote:
Hi,
Please remove this mail address from gluster-devel-mailing list.
You can unsubscribe yourself by sending an email with only the word
unsubscribe as the subject to gluster-devel-requ...@gluster.org
or visit the web
On 10/31/2014 03:46 AM, Anders Blomdell wrote:
On 2014-10-30 22:44, Anders Blomdell wrote:
On 2014-10-30 22:06, Kaleb KEITHLEY wrote:
On 10/30/2014 04:34 PM, Anders Blomdell wrote:
On 2014-10-30 20:55, Kaleb KEITHLEY wrote:
On 10/30/2014 01:50 PM, Anders Blomdell wrote:
On 2014-10-30
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.6beta2.tar.gz
This release is made off jenkins-release-94
-- Gluster Build System
On 2014-10-31 13:18, Kaleb S. KEITHLEY wrote:
On 10/31/2014 08:15 AM, Humble Chirammal wrote:
I can create a gluster pool, but when trying to create
an image I get the error "Libvirt version does not support storage
cloning".
Will continue tomorrow.
As you noticed, this looks to be a libvirt
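A quick way to test that at the libvirt level, bypassing virt-manager, is to
attempt the clone with virsh directly (the pool and volume names below are
placeholders):

    virsh --version                    # cloning support depends on the libvirt build
    virsh vol-list gvpool              # hypothetical gluster-backed pool
    virsh vol-clone base.img clone.img --pool gvpool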
Hi
I need this one to be merged so that I can set up pre-commit basic
regression tests on NetBSD for master:
http://review.gluster.org/8936
While there, my life would be simpler if that big one could
be merged: http://review.gluster.org/9009
--
Emmanuel Dreyfus
m...@netbsd.org
Hi All,
GlusterFS 3.6.0 is now generally available. GlusterFS 3.6.0 can be
downloaded from [1] and release notes are available at [2]. Upgrade
instructions can be found at [3]. Packages for various distributions
will be available shortly at the download site.
If you would like to propose
On Fri, 31 Oct 2014 12:47:21 +0100
Xavier Hernandez xhernan...@datalab.es wrote:
I've filed a bug and uploaded a patch for master and release-3.6
branches for this problem.
master:
bug: https://bugzilla.redhat.com/show_bug.cgi?id=1159269
patch: http://review.gluster.org/9031/
On Fri, 31 Oct 2014 10:17:28 +0530
Atin Mukherjee amukh...@redhat.com wrote:
snip
Justin,
For the last three runs, I've observed the same failure. I think it's
really time to debug this without any further delay. Can you please
share a Rackspace machine so that I can debug this issue?
Yep,
Hi guys,
These logs appear on both hosts, just like the result of --vm-status. I
tried to tcpdump on the oVirt hosts and gluster nodes, but only packet
exchanges with my monitoring VM (Zabbix) appeared.
agent.log
new_data = self.refresh(self._state.data)
File
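In case it helps narrow the capture down: glusterd management traffic uses TCP
port 24007, so a filter along these lines (the interface and host name are
placeholders) shows whether the hosts reach the gluster nodes at all:

    tcpdump -i any -nn host gluster-node1 and tcp port 24007              # management
    tcpdump -i any -nn host gluster-node1 and tcp portrange 49152-49251   # brick ports on recent releases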
On 10/31/2014 10:26 AM, Jaicel wrote:
I've increased the limit and then restarted the agent and broker. The status
normalized, but right now it went to the False state again, with both hosts
still having a score of 2400. The agent logs remain the same, with an
"ovirt-ha-agent dead but subsys locked" status. The ha-broker logs
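The "dead but subsys locked" status usually just means the process died while
its init-script lock file was left behind; something like the following (lock
path assumed from the standard /var/lock/subsys convention) shows and clears
the stale lock before a restart:

    service ovirt-ha-agent status              # reports "dead but subsys locked"
    ls -l /var/lock/subsys/ovirt-ha-agent      # the stale lock file
    rm -f /var/lock/subsys/ovirt-ha-agent
    service ovirt-ha-agent start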