On 11/09/2014 10:31 PM, Atin Mukherjee wrote:
>
>
> On 11/08/2014 05:21 AM, Justin Clift wrote:
>> On Wed, 05 Nov 2014 14:58:06 +0530
>> Atin Mukherjee wrote:
>>
>>> Can there be any cases where glusterd instance may go down
>>> unexpectedly without a crash?
>>>
>>> [1] http://build.gluster.o
Hi,
The test case bug-1155042-dont-display-deactivated-snapshots.t is
failing because of the below mentioned scenario.
Until now we used to activate the snapshot while creating it; my test
case was based on that assumption. But there is a slight change in behaviour:
now we don't activate the snapsh
On 11/13/2014 03:23 AM, Justin Clift wrote:
Hi all,
At the moment, our smoke tests in Jenkins only run on a
replicated volume. Extending that out to other volume types
should (in theory :>) help catch other simple gotchas.
Xavi has put together a patch for doing just this, which I'd
like to a
It's safe to purge everything under .processed. That's what geo-rep has
already replicated, so it's OK to delete it.
Also, consider purging these entries periodically, as geo-rep doesn't purge
them on its own (at least for now).
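For periodic cleanup, a minimal sketch (the volume/connection path components and the 30-day retention are assumptions; substitute your own):

```shell
# Hypothetical path: replace "myvol" and "myconn" with your volume and
# connection directory names.
PROCESSED_DIR="/var/lib/misc/glusterfsd/myvol/myconn/.processed"

# Remove already-replicated changelog entries older than 30 days
# (assumed retention window).
find "$PROCESSED_DIR" -type f -mtime +30 -delete
```

Something like this could run from cron until geo-rep learns to purge these entries itself.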
Venky
On Thu, Nov 13, 2014 at 3:27 AM, Andrea Tartaglia
wrote:
+1, excellent idea, this will definitely give an additional comfort zone
for learning glusterfs faster.
On 11/12/2014 05:47 PM, Krishnan Parthasarathi wrote:
> All,
>
> We have come across behaviours and features of GlusterFS that are left
> unexplained for various reasons. Thanks to Justin Clift
On 11/12/2014 06:13 PM, Pranith Kumar Karampuri wrote:
>
> On 11/11/2014 05:25 PM, Kiran Patil wrote:
>> I have installed gluster v3.6.1
>> from
>> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/
>>
>>
>> The /tests/bugs/bug-1112559.t testcase passed in all 3 runs and
Hi guys,
I've got a geo-rep setup which copies data across 3 DCs. It works fine,
but I've just spotted that both the master and slave servers (the
main ones) ran out of inodes.
Looking through the directories which have lots of files, the
"/var/lib/misc/glusterfsd/[volname]/[connection]/.proce
> At the moment, our smoke tests in Jenkins only run on a
> replicated volume. Extending that out to other volume types
> should (in theory :>) help catch other simple gotchas.
>
> Xavi has put together a patch for doing just this, which I'd
> like to apply and get us running:
>
>
> https://
Hi all,
At the moment, our smoke tests in Jenkins only run on a
replicated volume. Extending that out to other volume types
should (in theory :>) help catch other simple gotchas.
Xavi has put together a patch for doing just this, which I'd
like to apply and get us running:
https://forge.glus
Community meeting minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2014-11-12/gluster-meeting.2014-11-12-12.03.html
Highlights:
3.6.2 is out
3.5.3 and 3.4.6 release imminent, released to maintainers for packaging
Upcoming discussions for future features, Bitrot detection up first
That's a really huge leak.
From my experience, 'gluster volume status fd/inode' commands can lead
to huge RPC responses when done on a volume
with many files being accessed.
I also vaguely remember having some problems with memory due to rpc
saved_frames. IIRC the saved_frames continu
On Wed, 12 Nov 2014 07:17:25 -0500 (EST)
Krishnan Parthasarathi wrote:
> All,
>
> We have come across behaviours and features of GlusterFS that are left
> unexplained for various reasons. Thanks to Justin Clift for
> encouraging me to come up with a document that tries to fill this gap
> increme
Hi all,
I am running gluster-3.4.5 on 2 servers. Each of them has 7 2TB HDDs to
build a 7 * 2 distributed + replicated volume.
I just noticed that glusterd consumed about 120GB of memory and got a
coredump today. I read the mempool code to try to identify which mempool eats
the memory. Unfortunately, th
Greetings,
We are excited to announce the launch of planning for GlusterFS 3.7 and
GlusterFS 4.0 releases!
GlusterFS 3.7 will continue the work already ongoing in the 3.x
series, extending the current architecture and foundations to their
limits. We expect 3.7 to be released in April 2015. De
On 11/11/2014 05:25 PM, Kiran Patil wrote:
I have installed gluster v3.6.1 from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/
The /tests/bugs/bug-1112559.t testcase passed in all 3 runs, while the
other two tests, quota-anon-fd-nfs.t and
/tests/bugs/886998/strict-re
All,
We have come across behaviours and features of GlusterFS that are left
unexplained for various reasons. Thanks to Justin Clift for encouraging me to
come up with a document that tries to fill this gap incrementally. We have
decided
to call it "did-you-know.md" and for a reason. We'd love to
Out of general interest - How much data and what kind are you looking to store?
How does this relate to Raft, as in consensus for the cluster? Some form of
leader election, where the leader then propagates/makes available all the
SQLite data, or storing counters for consensus itself?
On Tue, Nov 11, 2014 at
Past couple of days have been slow for me and I'm new to the FOP stuff
so I will try and finish something available soon with potentially
some benchmarks. Will try and post the feature page and make some of
the code available.
The API can always be expanded to have atomic stream writes - it's
ends
[Adding gluster-users]
On 10/29/2014 12:11 PM, Raghavendra Bhat wrote:
On Monday 27 October 2014 06:34 PM, Vijay Bellur wrote:
Hi All,
As we move closer to the release of 3.6.0, we are looking to add a
release maintainer for the 3.6.0 development branch (release-3.6).
Primary requirements for
Came across this RDMA focused blog while looking for some
stuff (am setting up home Infiniband network again):
http://rdmamojo.com
It's the personal blog of one of the Mellanox guys,
and has all kinds of useful info on RDMA programming,
OS setup for Infiniband cards, and similar.
+ Justin
--
I have created zpools named d and mnt, and they appear in the filesystem as
follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr)
Debug enabled output of quota.t testcase is at http://ur1.ca/irbt1.
On Wed, Nov 12, 2014 at 3:22 PM, Kiran Patil wrote:
> Hi,
>
> Gluster suite report,
Hi Shyam,
I've been discussing this stuff with Krishnan and we ended up with a
proposal (see attached file).
There are many comments to explain how it works.
What do you think about this proposal?
Xavi
On 11/11/2014 07:42 PM, Shyam wrote:
On 11/10/2014 09:59 AM, Xavier Hernandez wrote:
Hi Sh
Hi,
I am happy to announce that we have started to work on supporting ZFS for
Gluster snapshots.
If you have any blueprints to support Btrfs, please share them, so that we
may take some clues from them and, who knows, we may end up supporting both.
Our developer will handle this thread, once this mail
Hi,
Gluster suite report,
Gluster version: glusterfs 3.6.1
On disk filesystem: Zfs 0.6.3-1.1
Operating system: CentOS release 6.6 (Final)
We are seeing quota and snapshot testcase failures.
We are not sure why quota is failing since quotas worked fine on gluster
3.4.
Test Summary Report