[Gluster-devel] 3.7.0 update

2015-04-07 Thread Vijay Bellur

Hi All,

I am planning to branch release-3.7 by the end of this week. Here is a 
list of tasks that we would need to accomplish by then:


1. Review and merge as many fixes as possible for Coverity-found issues. [1]

2. Review and merge as many logging improvement patches as possible. [2]

3. Spurious regression tests listed in [3] to be fixed.
  To not impede the review & merge workflow on release-3.7/master, I
plan to drop those test units which still cause
  spurious failures by the time we branch release-3.7.

4. Maintainers to ACK sanity of their respective components.

Blocker tasks for 3.7.0:

1. All open bugs in 3.7.0 tracker [4] would need to be fixed. If you 
have more bugs to be fixed in 3.7.0, please add them to the tracker.


2. Admin guide updates for new features.

3. Release Notes

4. No build failures for NetBSD, Mac and FreeBSD. All regression tests 
to pass on NetBSD.


5. Anything more that we find along the way :).

Please feel free to add to or modify this list. Your help in accomplishing 
these tasks would be greatly appreciated.


Thanks,
Vijay

[1] 
http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-789278,n,z


[2] 
http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1194640,n,z


[3] https://public.pad.fsfe.org/p/gluster-spurious-failures

[4] 
https://bugzilla.redhat.com/showdependencytree.cgi?id=1199352&hide_resolved=1







___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Atin Mukherjee


On 04/07/2015 03:51 PM, Vijay Bellur wrote:
 Hi All,
 
 I am planning to branch release-3.7 by the end of this week. Here is a
 list of tasks that we would need to accomplish by then:
 
 1. Review and merge as many fixes as possible for Coverity-found issues. [1]
 
 2. Review and merge as many logging improvement patches as possible. [2]
 
 3. Spurious regression tests listed in [3] to be fixed.
   To not impede the review & merge workflow on release-3.7/master, I
 plan to drop those test units which still cause
   spurious failures by the time we branch release-3.7.
How about taking in http://review.gluster.org/#/c/10128/ ?
 
 4. Maintainers to ACK sanity of their respective components.
 
 Blocker tasks for 3.7.0:
 
 1. All open bugs in 3.7.0 tracker [4] would need to be fixed. If you
 have more bugs to be fixed in 3.7.0, please add them to the tracker.
 
 2. Admin guide updates for new features.
 
 3. Release Notes
 
 4. No build failures for NetBSD, Mac and FreeBSD. All regression tests
 to pass on NetBSD.
 
 5. Anything more that we find along the way :).
 
 Please feel free to add to or modify this list. Your help in accomplishing
 these tasks would be greatly appreciated.
 
 Thanks,
 Vijay
 
 [1]
 http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-789278,n,z
 
 
 [2]
 http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1194640,n,z
 
 
 [3] https://public.pad.fsfe.org/p/gluster-spurious-failures
 
 [4]
 https://bugzilla.redhat.com/showdependencytree.cgi?id=1199352&hide_resolved=1
 
 
 
 
 
 
 

-- 
~Atin


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (in 15 minutes)

2015-04-07 Thread Niels de Vos
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Thanks,
Niels




Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Emmanuel Dreyfus
Vijay Bellur vbel...@redhat.com wrote:

 4. No build failures for NetBSD, Mac and FreeBSD. All regression tests
 to pass on NetBSD.

Note on this: NetBSD regression only attempts the basic, encryption and
features subdirectories (the bugs subdirectory is skipped);

And in those directories, we currently skip:
basic/afr/split-brain-resolution.t  - broken (by design?)
basic/afr/read-subvol-entry.t - fixed, I will enable it again
basic/ec/*  - works but with spurious failures
basic/quota-anon-fd-nfs.t - broken by recent commit
basic/tier/tier.t - broken
encryption/crypt.t - fix awaiting merge
features/trash.t - being investigated right now

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


[Gluster-devel] Minutes of todays Gluster Community Bug Triage meeting

2015-04-07 Thread Niels de Vos
On Tue, Apr 07, 2015 at 05:14:32PM +0530, Niels de Vos wrote:
 Hi all,
 
 This meeting is scheduled for anyone that is interested in learning more
 about, or assisting with the Bug Triage.
 
 Meeting details:
 - location: #gluster-meeting on Freenode IRC
 ( https://webchat.freenode.net/?channels=gluster-meeting )
 - date: every Tuesday
 - time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
 - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
 
 Currently the following items are listed:
 * Roll Call
 * Status of last week's action items
 * Group Triage
 * Open Floor
 
 The last two topics have space for additions. If you have a suitable bug
 or topic to discuss, please add it to the agenda.


Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-07/gluster-meeting.2015-04-07-12.02.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-07/gluster-meeting.2015-04-07-12.02.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-07/gluster-meeting.2015-04-07-12.02.log.html


Meeting summary
---
* Agenda: https://public.pad.fsfe.org/p/gluster-bug-triage  (ndevos,
  12:02:13)
* Roll Call  (ndevos, 12:02:19)

* Last week's action items  (ndevos, 12:03:37)

* someone will send a reminder to the users- and devel- ML about fixing
  Coverity defects, and how to do so  (ndevos, 12:03:44)

* Group Triage  (ndevos, 12:09:18)
  * no reported NEEDINFO from gluster-b...@redhat.com  (ndevos,
12:09:50)
  * 24 untriaged bugs: http://goo.gl/0IqF2q  (ndevos, 12:10:12)
  * LINK:

https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&product=GlusterFS&f1=keywords&o1=nowords&v1=Triaged&component=glusterd
(ndevos, 12:34:21)

* Open Floor  (ndevos, 12:36:10)

Meeting ended at 12:40:49 UTC.


Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Vijay Bellur

On 04/07/2015 04:39 PM, Atin Mukherjee wrote:



On 04/07/2015 03:51 PM, Vijay Bellur wrote:

Hi All,

I am planning to branch release-3.7 by the end of this week. Here is a
list of tasks that we would need to accomplish by then:

1. Review and merge as many fixes as possible for Coverity-found issues. [1]

2. Review and merge as many logging improvement patches as possible. [2]

3. Spurious regression tests listed in [3] to be fixed.
   To not impede the review & merge workflow on release-3.7/master, I
plan to drop those test units which still cause
   spurious failures by the time we branch release-3.7.

How about taking in http://review.gluster.org/#/c/10128/ ?


Yes, let us get this in. Nevertheless we need to tackle spurious 
regression test failures more quickly than we have been. If we 
don't, the end result is more noise and confusion.


-Vijay



Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Emmanuel Dreyfus
On Tue, Apr 07, 2015 at 06:26:02PM +0530, Vijay Bellur wrote:
 Yes, let us get this in. Nevertheless we need to tackle spurious regression
 test failures more quickly than we have been. If we don't, the end
 result is more noise and confusion.

In the meantime, this could be valuable:
http://review.gluster.org/10128

We can enable -r in regression tests until things improve, then drop it.

-- 
Emmanuel Dreyfus
m...@netbsd.org


Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Vijay Bellur

On 04/07/2015 05:31 PM, Emmanuel Dreyfus wrote:

Vijay Bellur vbel...@redhat.com wrote:


4. No build failures for NetBSD, Mac and FreeBSD. All regression tests
to pass on NetBSD.


Note on this: NetBSD regression only attempts the basic, encryption and
features subdirectories (the bugs subdirectory is skipped);



Have we tried running it on bugs ever? Do we know how many tests fail?



And in those directories, we currently skip:
basic/afr/split-brain-resolution.t  - broken (by design?)
basic/afr/read-subvol-entry.t - fixed, I will enable it again
basic/ec/*  - works but with spurious failures
basic/quota-anon-fd-nfs.t - broken by recent commit
basic/tier/tier.t - broken
encryption/crypt.t - fix awaiting merge
features/trash.t - being investigated right now



Let us aim to have this list cleaned up before we release 3.7.0.

Thanks,
Vijay



[Gluster-devel] Shutting down Gerrit for a few minutes

2015-04-07 Thread Justin Clift
Just an FYI.  Shutting down Gerrit for a few minutes, to move around
some files on the Gerrit server (need to free up space urgently).

Shouldn't be too long. (fingers crossed) :)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] Rebalance improvement design

2015-04-07 Thread Vijay Bellur

On 04/07/2015 03:08 PM, Susant Palai wrote:

Here is one test performed on a 300GB data set; around a 100% improvement 
(half the run time) was seen.

[root@gprfs031 ~]# gluster v i

Volume Name: rbperf
Type: Distribute
Volume ID: 35562662-337e-4923-b862-d0bbb0748003
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gprfs029-10ge:/bricks/gprfs029/brick1
Brick2: gprfs030-10ge:/bricks/gprfs030/brick1
Brick3: gprfs031-10ge:/bricks/gprfs031/brick1
Brick4: gprfs032-10ge:/bricks/gprfs032/brick1


Added server 32 and started rebalance force.

Rebalance stat for new changes:
[root@gprfs031 ~]# gluster v rebalance rbperf status
         Node  Rebalanced-files     size    scanned  failures  skipped     status  run time in secs
    ---------  ----------------  -------  ---------  --------  -------  ---------  ----------------
    localhost             74639   36.1GB     297319         0        0  completed           1743.00
 172.17.40.30             67512   33.5GB     269187         0        0  completed           1395.00
gprfs029-10ge             79095   38.8GB     284105         0        0  completed           1559.00
gprfs032-10ge                 0   0Bytes          0         0        0  completed            402.00
volume rebalance: rbperf: success:

Rebalance stat for old model:
[root@gprfs031 ~]# gluster v rebalance rbperf status
         Node  Rebalanced-files     size    scanned  failures  skipped     status  run time in secs
    ---------  ----------------  -------  ---------  --------  -------  ---------  ----------------
    localhost             86493   42.0GB     634302         0        0  completed           3329.00
gprfs029-10ge             94115   46.2GB     687852         0        0  completed           3328.00
gprfs030-10ge             74314   35.9GB     651943         0        0  completed           3072.00
gprfs032-10ge                 0   0Bytes     594166         0        0  completed           1943.00
volume rebalance: rbperf: success:
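As a quick sanity check on the quoted numbers (assuming the rebalance is done
when its slowest node finishes), the wall-time improvement works out to
roughly 1.9x:

```shell
# Slowest per-node "run time in secs" from the two status outputs above.
old_max=3329   # old model, localhost
new_max=1743   # new changes, localhost
awk -v o="$old_max" -v n="$new_max" \
    'BEGIN { printf "wall-time speedup: %.2fx\n", o / n }'
# prints: wall-time speedup: 1.91x
```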



This is interesting. Thanks for sharing & well done! Maybe we should 
attempt a much larger data set and see how we fare there :).


Regards,
Vijay




Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Jeff Darcy
 3. Spurious regression tests listed in [3] to be fixed.
To not impede the review & merge workflow on release-3.7/master, I
 plan to drop those test units which still cause
spurious failures by the time we branch release-3.7.

On a similar note, it seems like bug 1195415 is among the leading
causes of regression failures now.  This manifests not as failures
in run-tests.sh but as core files found afterward.  There's little
to be gained by setting V-1 for an unrelated bug we already know
about.  Perhaps we should modify the Jenkins regression scriptlet
so that it collects those cores *silently* until we find and fix
that bug (which I'll be working on shortly BTW).
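A rough sketch of that idea, not the actual Jenkins scriptlet (the core-file
location and `core.*` naming are assumptions): snapshot the core files around
each test unit so that a new core is attributed to the test that produced it,
and stop right away.

```shell
#!/bin/sh
# Sketch only: assumes cores land in $CORE_DIR with names matching core.*
CORE_DIR=${CORE_DIR:-/}

snapshot_cores() { ls "$CORE_DIR"/core.* 2>/dev/null | sort; }

# run_one <test-script>: fail immediately on bad status or a new core file,
# so the offending test unit is known and the run can be retried quickly.
run_one() {
    before=$(snapshot_cores)
    sh "$1"
    status=$?
    after=$(snapshot_cores)
    if [ "$before" != "$after" ]; then
        echo "new core file after $1" >&2
        return 1
    fi
    return $status
}
```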


[Gluster-devel] Upcoming event: Storage Developers Conference

2015-04-07 Thread Tom Callaway
Are you interested in representing the Gluster Community at the Storage 
Developers Conference in Santa Clara, California (September 21-24)? That 
event is taking submissions for proposals until April 20, 2015:


http://www.snia.org/events/storage-developer/speaker_info

If you are accepted to present at this event, and need funding to 
attend, Red Hat will cover your travel/lodging costs (please email me 
directly if you're pursuing this option).


Thanks,

~tom

==
Red Hat


Re: [Gluster-devel] [Gluster-infra] Shutting down Gerrit for a few minutes

2015-04-07 Thread Justin Clift
On 7 Apr 2015, at 15:45, Justin Clift jus...@gluster.org wrote:
 Just an FYI.  Shutting down Gerrit for a few minutes, to move around
 some files on the Gerrit server (need to free up space urgently).
 
 Shouldn't be too long. (fingers crossed) :)

... and it hasn't returned from rebooting after yum update. :(

We're investigating.

Sorry for the longer-than-expected outage. :/

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] Shutting down Gerrit for a few minutes

2015-04-07 Thread Justin Clift
On 7 Apr 2015, at 16:31, Justin Clift jus...@gluster.org wrote:
 On 7 Apr 2015, at 15:45, Justin Clift jus...@gluster.org wrote:
 Just an FYI.  Shutting down Gerrit for a few minutes, to move around
 some files on the Gerrit server (need to free up space urgently).
 
 Shouldn't be too long. (fingers crossed) :)
 
 ... and it hasn't returned from rebooting after yum update. :(
 
 We're investigating.
 
 Sorry for the longer-than-expected outage. :/

It's back up and running again.  A bunch of space has been freed up
on the filesystems for it, git gc has been run on each of the
git repos, and the packages have all been updated via yum (except
Gerrit, which isn't installed via yum).

It _seems_ to be working ok now, for the initial git checkout I
just tried.  If something acts up though, please let us know. :)

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] metadata-volume for gluster

2015-04-07 Thread Raghavendra Talur
On Tue, Apr 7, 2015 at 3:56 PM, Rajesh Joseph rjos...@redhat.com wrote:

 Hi all,

 In gluster 3.7, multiple features (Snapshot scheduler, NFS Ganesha,
 Geo-rep) are planning to use an additional volume to store metadata
 related to these features. This volume needs to be manually created
 and explicitly managed by an admin.

 I think creating and managing so many metadata volumes would be an
 overhead for an admin. Instead of that I am proposing to have a single
 unified metadata volume which can be used by all these features.

 For simplicity and easier management we are proposing to have a
 pre-defined volume name.
 If needed this name can be configured using a global gluster option.

 Please let me know if you have any suggestions or comments.


 Thanks & Regards,
 Rajesh




+1 for
a. automatic creation of metadata volume over manual
b. configuration through gluster cli

A few suggestions:
a. disable access through any mechanism other than fuse/gfapi (Samba and
NFS should not export it).
b. Restrict access to peer nodes.

Question:
What would be the replica count of the said volume and would that be ok for
every use case
mentioned above?
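For the record, a sketch of what suggestions (a) and (b) could look like with
existing volume options (the volume name here is hypothetical, and whether
user.cifs covers every Samba deployment is an assumption):

```shell
# Hypothetical name for the shared metadata volume.
VOL=glusterfs_metavol

gluster volume set "$VOL" nfs.disable on         # (a) no gluster-NFS export
gluster volume set "$VOL" user.cifs disable      # (a) hook scripts skip Samba export
gluster volume set "$VOL" auth.allow '10.0.0.*'  # (b) restrict mounts to the peer subnet
```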

Thanks,
Raghavendra Talur


Re: [Gluster-devel] metadata-volume for gluster

2015-04-07 Thread Niels de Vos
On Tue, Apr 07, 2015 at 06:26:23AM -0400, Rajesh Joseph wrote:
 Hi all,
 
 In gluster 3.7, multiple features (Snapshot scheduler, NFS Ganesha,
 Geo-rep) are planning to use an additional volume to store metadata
 related to these features. This volume needs to be manually created
 and explicitly managed by an admin.
 
 I think creating and managing so many metadata volumes would be an
 overhead for an admin. Instead of that I am proposing to have a single
 unified metadata volume which can be used by all these features.

That sounds like a great idea!

 For simplicity and easier management we are proposing to have a
 pre-defined volume name.

Who is the "we" you speak about? It would be nice to know who gave their
opinions before you wrote this email. (I assume they would not respond
to this email because they have agreed already?)

 If needed this name can be configured using a global gluster option.

I would say that configuring it is not needed, and surely not advised. A
name that is easy to recognise would be good, like one prefixed with a _.

 Please let me know if you have any suggestions or comments.

What would be the (default) name of this volume that you are thinking
about?

 Thanks & Regards,
 Rajesh

Thanks for sharing,
Niels




Re: [Gluster-devel] 3.7.0 update

2015-04-07 Thread Jeff Darcy
 On a similar note, it seems like bug 1195415 is among the leading
 causes of regression failures now.  This manifests not as failures
 in run-tests.sh but as core files found afterward.  There's little
 to be gained by setting V-1 for an unrelated bug we already know
 about.  Perhaps we should modify the Jenkins regression scriptlet
 so that it collects those cores *silently* until we find and fix
 that bug (which I'll be working on shortly BTW).

I've submitted http://review.gluster.org/#/c/10157/ as another
alternative.  Here's the commit message:

= This started as a way to identify which test created new core files,
= since that's a critical piece of debugging information that's missing
= when we only check for cores at the end of a run.  It also exits
= *immediately* either on bad status or discovery of a new core file.
= This allows the run to be retried or aborted quickly, to reduce the
= latency of all jobs in the regression-test pipeline.

Reviews would be most welcome.  We're currently setting a record for
the lowest regression-test success rate ever.  Let's get that
pipeline moving again.


Re: [Gluster-devel] Got a slogan idea?

2015-04-07 Thread Dustin L. Black
{Flexible|Adaptive|Versatile} Open Data Store



Dustin L. Black, RHCA  
Principal Technical Account Manager
Red Hat, Inc. - Strategic Customer Engagement  
(o) +1.212.510.4138  (m) +1.215.431.0247  
dus...@redhat.com  
  
Red Hat Summit and DevNation | June 23-26, 2015 | Boston  
Learn. Network. Experience open source.  
www.redhat.com/summit  
www.devnation.org  
  

- Original Message -
 From: Tom Callaway tcall...@redhat.com
 To: gluster-us...@gluster.org, gluster-devel@gluster.org
 Sent: Wednesday, April 1, 2015 8:14:40 AM
 Subject: [Gluster-devel] Got a slogan idea?
 
 Hello Gluster Ant People!
 
 Right now, if you go to gluster.org, you see our current slogan in giant
 text:
 
 Write once, read everywhere
 
 However, no one seems to be super-excited about that slogan. It doesn't
 really help differentiate gluster from a portable hard drive or a
 paperback book. I am going to work with Red Hat's branding geniuses to
 come up with some possibilities, but sometimes, the best ideas come from
 the people directly involved with a project.
 
 What I am saying is that if you have a slogan idea for Gluster, I want
 to hear it. You can reply on list or send it to me directly. I will
 collect all the proposals (yours and the ones that Red Hat comes up
 with) and circle back around for community discussion in about a month
 or so.
 
 Thanks!
 
 ~tom
 
 ==
 Red Hat
 


Re: [Gluster-devel] metadata-volume for gluster

2015-04-07 Thread Jeff Darcy
 In gluster 3.7, multiple features (Snapshot scheduler, NFS Ganesha, Geo-rep)
 are planning to use an additional volume to store metadata related to these
 features. This volume needs to be manually created and explicitly managed by
 an admin.
 
 I think creating and managing so many metadata volumes would be an overhead
 for an admin. Instead of that I am proposing to have a single unified
 metadata volume which can be used by all these features.
 
 For simplicity and easier management we are proposing to have a pre-defined
 volume name.
 If needed this name can be configured using a global gluster option.
 
 Please let me know if you have any suggestions or comments.

Do these metadata volumes already exist, or are they being added to designs
as we speak?  There seem to be a lot of unanswered questions that suggest
the latter.  For example...

* What replication level(s) do we need?  What performance translators
  should be left out to ensure consistency?

* How much storage will we need?  How will it be provisioned and tracked?

* What nodes would this volume be hosted on?  Does the user have to
  (or get to) decide, or do we decide automatically?  What happens as
  the cluster grows or shrinks?

* How are the necessary daemons managed?  From glusterd?  What if we
  want glusterd itself to use this facility?

* Will there be an API, so the implementation can be changed to be
  compatible with similar facilities already scoped out for 4.0?

I like the idea of this being shared infrastructure.  It would also be
nice if it can be done with a minimum of administrative overhead.  To
do that, though, I think we need a more detailed exploration of the
problem(s) we're trying to solve and of the possible solutions.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Got a slogan idea?

2015-04-07 Thread Paul Robert Marino
Want a storage cluster? Get Gluster!


On Tue, Apr 7, 2015 at 3:37 PM, Dustin L. Black dbl...@redhat.com wrote:
 {Flexible|Adaptive|Versatile} Open Data Store



 Dustin L. Black, RHCA
 Principal Technical Account Manager
 Red Hat, Inc. - Strategic Customer Engagement
 (o) +1.212.510.4138  (m) +1.215.431.0247
 dus...@redhat.com

 Red Hat Summit and DevNation | June 23-26, 2015 | Boston
 Learn. Network. Experience open source.
 www.redhat.com/summit
 www.devnation.org


 - Original Message -
 From: Tom Callaway tcall...@redhat.com
 To: gluster-us...@gluster.org, gluster-devel@gluster.org
 Sent: Wednesday, April 1, 2015 8:14:40 AM
 Subject: [Gluster-devel] Got a slogan idea?

 Hello Gluster Ant People!

 Right now, if you go to gluster.org, you see our current slogan in giant
 text:

 Write once, read everywhere

 However, no one seems to be super-excited about that slogan. It doesn't
 really help differentiate gluster from a portable hard drive or a
 paperback book. I am going to work with Red Hat's branding geniuses to
 come up with some possibilities, but sometimes, the best ideas come from
 the people directly involved with a project.

 What I am saying is that if you have a slogan idea for Gluster, I want
 to hear it. You can reply on list or send it to me directly. I will
 collect all the proposals (yours and the ones that Red Hat comes up
 with) and circle back around for community discussion in about a month
 or so.

 Thanks!

 ~tom

 ==
 Red Hat


[Gluster-devel] glusterfs-3.7 nightly rpms tagged centos under epel-7

2015-04-07 Thread SATHEESARAN

Hi All,

I was looking for the latest glusterfs nightly rpms for RHEL-7.
I could find them under download.gluster.org [1],
but the rpms available are tagged with 'centos'.

Could I use these rpms on RHEL-7 as well?

[1] - 
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-7-x86_64/glusterfs-3.7dev-0.929.git057d2be.autobuild/


-- Satheesaran


Re: [Gluster-devel] [Gluster-users] Got a slogan idea?

2015-04-07 Thread Josh Boon
Gluster: RAID G

or Gluster: RAISE (redundant array of inexpensive storage equipment). This one 
sounds nicer as a complete rebrand to RAISE, though.

I'm also throwing in support for dropping the "FS", as we do a lot more than files.


- Original Message -
From: Marcos Renato da Silva Junior marco...@dee.feis.unesp.br
To: gluster-us...@gluster.org
Sent: Tuesday, April 7, 2015 10:45:34 PM
Subject: Re: [Gluster-users] Got a slogan idea?

Gluster : Beyond the limits


On 01-04-2015 09:14, Tom Callaway wrote:
 Hello Gluster Ant People!

 Right now, if you go to gluster.org, you see our current slogan in 
 giant text:

Write once, read everywhere

 However, no one seems to be super-excited about that slogan. It 
 doesn't really help differentiate gluster from a portable hard drive 
 or a paperback book. I am going to work with Red Hat's branding 
 geniuses to come up with some possibilities, but sometimes, the best 
 ideas come from the people directly involved with a project.

 What I am saying is that if you have a slogan idea for Gluster, I want 
 to hear it. You can reply on list or send it to me directly. I will 
 collect all the proposals (yours and the ones that Red Hat comes up 
 with) and circle back around for community discussion in about a month 
 or so.

 Thanks!

 ~tom

 ==
 Red Hat


Re: [Gluster-devel] metadata-volume for gluster

2015-04-07 Thread Aravinda

+1 from Geo-rep.

In Geo-rep, the meta volume is an optional setting to increase the stability 
of geo-replication. If a meta volume is configured, geo-rep saves lock files 
for each subvolume to decide the Active/Passive geo-rep worker. If not 
configured, it falls back to the old method of deciding Active/Passive based 
on node-uuid.
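That lock-file election can be sketched roughly as follows, assuming the meta
volume is mounted at a shared path (the mount point, lock-file naming, and use
of flock(1) here are assumptions for illustration, not the actual geo-rep
implementation): whichever worker grabs the per-subvolume lock becomes Active.

```shell
#!/bin/sh
# META_MNT is a hypothetical mount point of the shared meta volume.
META_MNT=${META_MNT:-/var/run/gluster/shared_storage}

# become_active <subvol>: returns 0 if this worker won the per-subvolume
# lock (go Active); non-zero if another worker holds it (stay Passive).
become_active() {
    exec 9>"$META_MNT/geo-rep-$1.lock" || return 1
    flock -n 9   # non-blocking; the lock holder is the Active worker
}
```

Because the lock lives on a replicated volume visible to all nodes, exactly
one worker per subvolume can hold it at a time, and the lock is released
automatically if the holder dies.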


A 3-way replicated meta volume is good for consistency.

I think once glusterd implements a distributed store, we can migrate from 
the meta volume to the distributed store. :)


--
regards
Aravinda

On 04/07/2015 03:56 PM, Rajesh Joseph wrote:

Hi all,

In gluster 3.7, multiple features (Snapshot scheduler, NFS Ganesha, Geo-rep) 
are planning to use an additional volume to store metadata related to these 
features. This volume needs to be manually created and explicitly managed by 
an admin.

I think creating and managing so many metadata volumes would be an overhead 
for an admin. Instead of that I am proposing to have a single unified 
metadata volume which can be used by all these features.

For simplicity and easier management we are proposing to have a pre-defined 
volume name.
If needed this name can be configured using a global gluster option.

Please let me know if you have any suggestions or comments.

Thanks & Regards,
Rajesh


