Re: [Gluster-devel] [Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-26 Thread Josh Boon
Hmm, even five should be OK.  Do you lose all VMs or just some? 

Also, we had issues with

cluster.quorum-type: auto
cluster.server-quorum-type: server

and had to instead go with

cluster.server-quorum-type: none
cluster.quorum-type: none

though we only replicate rather than distribute and replicate, so I'd be wary of 
changing those without advice from folks more familiar with the impact on your 
config. 
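For reference, setting those on your volume would look roughly like this (a sketch against the "vmimages" volume from your info below; again, weigh the quorum implications first):

gluster volume set vmimages cluster.server-quorum-type none
gluster volume set vmimages cluster.quorum-type none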

Upon connect, gfapi fetches the volume file and is aware of the configuration and 
changes to it, so it should be OK when a node is lost since it knows where the 
other nodes are. 

If you have a lab with your gluster config set up and you lose all of your VMs, 
I'd suggest trying my config to see what happens.  The gluster logs and qemu 
clients could also have some tips on what happens when a node disappears. 
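For a lab run, this is roughly how a gluster-backed guest gets launched over gfapi (a sketch only; the image name and qemu options are placeholders, the hostname and volume are taken from your info below):

qemu-system-x86_64 -m 2048 -enable-kvm \
  -drive file=gluster://storage1.domain.local/vmimages/test.img,format=raw,if=virtio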
- Original Message -
From: "André Bauer" <aba...@magix.net>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "gluster-users" 
<gluster-us...@gluster.org>, gluster-devel@gluster.org
Sent: Monday, October 26, 2015 7:08:15 PM
Subject: Re: [Gluster-users] VM fs becomes read only when one gluster node goes 
down

Thanks guys!
My volume info is attached at the bottom of this mail...

@ Josh
As you can see, I already have a 5-second ping timeout set. I will try
it with 3 seconds.

Not sure if I want to have errors=continue on the fs level, but I will
give it a try if it's the only way to get automatic failover to work.
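For the record, here is the pair of changes I plan to test (a sketch; /dev/vda1 stands in for the guest's ext4 root device):

# on a gluster node: shorten the ping timeout for the volume
gluster volume set vmimages network.ping-timeout 3
# inside the guest: switch ext4 error behaviour from remount-ro to continue
tune2fs -e continue /dev/vda1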


@ Roman
I use qemu with libgfapi to access the images, so there are no glusterfs
entries in fstab on my VM hosts. It also seems this option is kind of deprecated:

http://blog.gluster.org/category/mount-glusterfs/

"`backupvolfile-server` - This option did not really do much rather than
provide a 'shell' script based failover which was highly racy and
wouldn't work during many occasions.  It was necessary to remove this to
make room for better options (while it is still provided for backward
compatibility in the code)"
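It looks like the newer replacement is the backup-volfile-servers mount option; a hedged fstab sketch with my storage hosts (the mount point is hypothetical, since I don't mount via FUSE anyway):

storage1.domain.local:/vmimages /mnt/vmimages glusterfs defaults,_netdev,backup-volfile-servers=storage2.domain.local:storage3.domain.local 0 0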


@ all
Can anybody tell me how GlusterFS handles this internally?
Is the libgfapi client already aware of the server which replicates the
image?
Is there a way I can configure it manually for a volume?




Volume Name: vmimages
Type: Distributed-Replicate
Volume ID: 029285b2-dfad-4569-8060-3827c0f1d856
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: storage1.domain.local:/glusterfs/vmimages
Brick2: storage2.domain.local:/glusterfs/vmimages
Brick3: storage3.domain.local:/glusterfs/vmimages
Brick4: storage4.domain.local:/glusterfs/vmimages
Options Reconfigured:
network.ping-timeout: 5
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
auth.allow:
192.168.0.21,192.168.0.22,192.168.0.23,192.168.0.24,192.168.0.25,192.168.0.26
server.allow-insecure: on
storage.owner-uid: 2000
storage.owner-gid: 2000



Regards
André


On 26.10.2015 at 17:41, Josh Boon wrote:
> Andre,
> 
> I've not explored using a DNS solution to publish the gluster cluster
> addressing space, but things you'll want to check out
> are network.ping-timeout and whether or not your VM goes read-only on
> filesystem error. If your network is consistent and robust,
> tuning network.ping-timeout to a very low value such as three seconds
> will instruct the client to drop the failed connection quickly. The default
> value for this is 42 seconds, which will cause your VM to go read-only as
> you've seen. You could also choose to have your VMs mount their
> partitions with errors=continue, depending on the filesystem they run.
> Our setup has the timeout at seven seconds and errors=continue and has
> survived both testing and storage node segfaults. No data integrity
> issues have presented yet, but our data is mostly temporal so integrity
> hasn't been tested thoroughly. Also, we're on QEMU 2.0 running Gluster 3.6
> on Ubuntu 14.04, for those curious. 
> 
> Best,
> Josh 
> 
> 
> *From: *"Roman" <rome...@gmail.com>
> *To: *"Krutika Dhananjay" <kdhan...@redhat.com>
> *Cc: *"gluster-users" <gluster-us...@gluster.org>, gluster-devel@gluster.org
> *Sent: *Monday, October 26, 2015 1:33:57 PM
> *Subject: *Re: [Gluster-users] VM fs becomes read only when one gluster
> node goes down
> 
> Hi,
> Got backupvolfile-server=NODE2NAMEHERE in fstab? :)
> 
> 2015-10-23 5:24 GMT+03:00 Krutika Dhananjay <kdhan...@redhat.com>:
> 
> Could you share the output of 'gluster volume info', and also
> information as to which node went down on reboot?
> 
> -Krutika

Re: [Gluster-devel] [Gluster-users] VM fs becomes read only when one gluster node goes down

2015-10-26 Thread Josh Boon
I'd see what your qemu logs put out, if you have them around from a crash. Also, 
you can check client connections across your cluster by hopping on your 
hypervisor and grepping the output of netstat -np for the PID of one of your 
gluster-backed VMs, like so:

netstat -np | grep 11607
tcp        0      0 10.9.1.1:60414  10.9.1.1:24007  ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:60409  10.9.1.1:24007  ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:45998  10.9.1.1:50152  ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:42606  10.9.1.2:50152  ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:45993  10.9.1.1:50152  ESTABLISHED 11607/qemu-system-x
tcp        0      0 10.9.1.1:42601  10.9.1.2:50152  ESTABLISHED 11607/qemu-system-x
unix  3      [ ]    STREAM     CONNECTED  32860  11607/qemu-system-x /var/lib/libvirt/qemu/HFMWEB19.monitor

I mounted two disks for the machine, so I have two control connections (to 
glusterd on port 24007) and two brick connections per disk for my replicated 
setup. Someone else might be able to provide more info as to what your output 
should look like.  
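If it helps, this is how I grab the PID and the qemu log in the first place (a sketch; HFMWEB19 is just my example domain name, paths per stock libvirt on Ubuntu):

# find the qemu PID for a given libvirt domain
pgrep -f 'qemu.*HFMWEB19'
# per-domain qemu log, where gfapi errors usually land
less /var/log/libvirt/qemu/HFMWEB19.log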

- Original Message -
From: "André Bauer" <aba...@magix.net>
To: "Josh Boon" <glus...@joshboon.com>
Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "gluster-users" 
<gluster-us...@gluster.org>, gluster-devel@gluster.org
Sent: Monday, October 26, 2015 7:47:07 PM
Subject: Re: [Gluster-users] VM fs becomes read only when one gluster node goes 
down

Just some. But I think the reason is that some VM images are replicated on
nodes 1 & 2 and some on nodes 3 & 4, because I use a distributed-replicated
volume.
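One way I could verify the placement (a sketch, assuming a temporary FUSE mount of the volume; trusted.glusterfs.pathinfo is a client-side virtual xattr that reports the backing bricks, and myvm.img stands in for one of my images):

mkdir -p /mnt/check
mount -t glusterfs storage1.domain.local:/vmimages /mnt/check
getfattr -n trusted.glusterfs.pathinfo /mnt/check/myvm.img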

You're right. I think I have to try it on a test setup.

At the moment I'm also not completely sure whether it's a GlusterFS problem
(not connecting to the node with the replicated file immediately when a
read/write fails) or a problem of the filesystem (the ext4 fs goes read-only
on error too early).


Regards
André

On 26.10.2015 at 20:23, Josh Boon wrote:
> Hmm, even five should be OK.  Do you lose all VMs or just some?
> [...]

Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-29 Thread Josh Boon
Hey Prasanna,

Thanks for your help! One of the issues we've had is that dd doesn't seem to 
reproduce it. Anything that logs and handles large volumes, think mail and web 
servers, tends to segfault the most frequently. I could write up a load test 
and we could put apache on it and try that, as that's closest to what we run. 
Also, if you don't object, would I be able to get on the machine to figure out 
apparmor and do a writeup? Most folks probably won't be able to disable it 
completely. 
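As a starting point, the load I have in mind is roughly this (a sketch, assuming apache plus apache2-utils inside the guest; the hostname is a placeholder):

# hammer the guest's web server so its access/error logs churn on the gluster-backed disk
ab -n 100000 -c 50 http://guest.example.com/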

Best,
Josh

- Original Message -
From: Prasanna Kalever <pkale...@redhat.com>
To: Josh Boon <glus...@joshboon.com>
Cc: Pranith Kumar Karampuri <pkara...@redhat.com>, Gluster Devel 
<gluster-devel@gluster.org>
Sent: Wednesday, July 29, 2015 1:54:34 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

Below are my setup details:


# qemu-system-x86_64 --version
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# gluster --version
glusterfs 3.6.3 built on Jul 29 2015 16:01:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

# gluster vol info

Volume Name: vol1
Type: Replicate
Volume ID: ad78ac6c-c55e-4f4a-8b1b-a11865f1d01e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.156:/brick1
Brick2: 10.70.1.156:/brick2
Options Reconfigured:
server.allow-insecure: on
storage.owner-uid: 116
storage.owner-gid: 125

[Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-06 Thread Josh Boon
Hey folks, 

Does anyone have a test environment running Ubuntu 14.04, QEMU 2.0, and Gluster 
3.6.3? I'm looking to have some folks test out QEMU 2.3 for stability and 
performance and see if it removes the segfault errors. Another group of folks 
are experiencing the same segfaults I still experience, but looking over their 
logs, my theory of it being related to a self-heal didn't work out. I've 
included the stack trace below from their environment, which matches mine. I've 
already put together a PPA over at 
https://launchpad.net/~josh-boon/+archive/ubuntu/qemu-edge-glusterfs with QEMU 
2.3 and deps built for trusty. If anyone has the time, or resources that I 
could get onto, I'd appreciate the support. I'd like to get this ironed out so I 
can give my full vote of confidence to Gluster again. 


Thanks, 
Josh 

Stack 
#0 0x7f369c95248c in ?? () 
No symbol table info available. 
#1 0x7f369bd2b3b1 in glfs_io_async_cbk (ret=<optimized out>, 
frame=<optimized out>, data=0x7f369ee536c0) at glfs-fops.c:598 
gio = 0x7f369ee536c0 
#2 0x7f369badb66a in syncopctx_setfspid (pid=0x7f369ee536c0) at 
syncop.c:191 
opctx = 0x0 
ret = -1 
#3 0x00100011 in ?? () 
No symbol table info available. 
#4 0x7f36a5ae26b0 in ?? () 
No symbol table info available. 
#5 0x7f36a81e2800 in ?? () 
No symbol table info available. 
#6 0x7f36a5ae26b0 in ?? () 
No symbol table info available. 
#7 0x7f36a81e2800 in ?? () 
No symbol table info available. 
#8 0x in ?? () 
No symbol table info available. 
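For anyone trying to reproduce this, a trace like the above can be pulled from a core roughly like so (a sketch; the core path is an example, and you'll want the qemu and glusterfs debug symbol packages installed first):

gdb /usr/bin/qemu-system-x86_64 /var/crash/core.qemu -batch -ex 'thread apply all bt full'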

Full log attached. 
ProblemType: Crash
Architecture: amd64
CrashCounter: 1
Date: Mon Jun 29 02:45:09 2015
Dependencies:
 acl 2.2.52-1
 adduser 3.113+nmu3ubuntu3
 apt-utils 1.0.1ubuntu2.6
 attr 1:2.4.47-1ubuntu1
 base-passwd 3.5.33
 busybox-initramfs 1:1.21.0-1ubuntu1
 ca-certificates 20141019ubuntu0.14.04.1
 coreutils 8.21-1ubuntu5.1
 cpio 2.11+dfsg-1ubuntu1.1
 cpu-checker 0.7-0ubuntu4
 dbus 1.6.18-0ubuntu4.3
 debconf 1.5.51ubuntu2
 debconf-i18n 1.5.51ubuntu2
 debianutils 4.4
 dmsetup 2:1.02.77-6ubuntu2
 dpkg 1.17.5ubuntu5.3
 e2fslibs 1.42.9-3ubuntu1.2
 e2fsprogs 1.42.9-3ubuntu1.2
 file 1:5.14-2ubuntu3.3
 findutils 4.4.2-7
 gcc-4.8-base 4.8.4-2ubuntu1~14.04
 gcc-4.9-base 4.9.1-0ubuntu1
 glusterfs-common 3.6.3-ubuntu1~trusty10 [origin: LP-PPA-gluster-glusterfs-3.6]
 ifupdown 0.7.47.2ubuntu4.1
 initramfs-tools 0.103ubuntu4.2
 initramfs-tools-bin 0.103ubuntu4.2
 initscripts 2.88dsf-41ubuntu6
 insserv 1.14.0-5ubuntu2
 iproute2 3.12.0-2
 ipxe-qemu 1.0.0+git-2013.c3d1e78-2ubuntu1.1
 isc-dhcp-client 4.2.4-7ubuntu12
 isc-dhcp-common 4.2.4-7ubuntu12
 klibc-utils 2.0.3-0ubuntu1
 kmod 15-0ubuntu6
 krb5-locales 1.12+dfsg-2ubuntu5.1
 libacl1 2.2.52-1
 libaio1 0.3.109-4
 libapparmor1 2.8.95~2430-0ubuntu5.1
 libapt-inst1.5 1.0.1ubuntu2.6
 libapt-pkg4.12 1.0.1ubuntu2.6
 libasn1-8-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libasound2 1.0.27.2-3ubuntu7
 libasound2-data 1.0.27.2-3ubuntu7
 libasyncns0 0.8-4ubuntu2
 libattr1 1:2.4.47-1ubuntu1
 libaudit-common 1:2.3.2-2ubuntu1
 libaudit1 1:2.3.2-2ubuntu1
 libblkid1 2.20.1-5.1ubuntu20.4
 libbluetooth3 4.101-0ubuntu13.1
 libboost-system1.54.0 1.54.0-4ubuntu3.1
 libboost-thread1.54.0 1.54.0-4ubuntu3.1
 libbrlapi0.6 5.0-2ubuntu2
 libbz2-1.0 1.0.6-5
 libc6 2.19-0ubuntu6.6
 libcaca0 0.99.beta18-1ubuntu5
 libcap-ng0 0.7.3-1ubuntu2
 libcap2 1:2.24-0ubuntu2
 libcgmanager0 0.24-0ubuntu7.3
 libcomerr2 1.42.9-3ubuntu1.2
 libcurl3-gnutls 7.35.0-1ubuntu2.3
 libdb5.3 5.3.28-3ubuntu3
 libdbus-1-3 1.6.18-0ubuntu4.3
 libdebconfclient0 0.187ubuntu1
 libdevmapper-event1.02.1 2:1.02.77-6ubuntu2
 libdevmapper1.02.1 2:1.02.77-6ubuntu2
 libdrm2 2.4.56-1~ubuntu2
 libexpat1 2.1.0-4ubuntu1
 libfdt1 1.4.0+dfsg-1
 libffi6 3.1~rc1+r3.0.13-12
 libflac8 1.3.0-2ubuntu0.14.04.1
 libgcc1 1:4.9.1-0ubuntu1
 libgcrypt11 1.5.3-2ubuntu4.1
 libglib2.0-0 2.40.2-0ubuntu1
 libglib2.0-data 2.40.2-0ubuntu1
 libgnutls26 2.12.23-12ubuntu2.2
 libgpg-error0 1.12-0.2ubuntu1
 libgpm2 1.20.4-6.1
 libgssapi-krb5-2 1.12+dfsg-2ubuntu5.1
 libgssapi3-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libhcrypto4-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libheimbase1-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libheimntlm0-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libhx509-5-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libibverbs1 1.1.7-1ubuntu1
 libidn11 1.28-1ubuntu2
 libjpeg-turbo8 1.3.0-0ubuntu2
 libjpeg8 8c-2ubuntu8
 libjson-c2 0.11-3ubuntu1.2
 libjson0 0.11-3ubuntu1.2
 libk5crypto3 1.12+dfsg-2ubuntu5.1
 libkeyutils1 1.5.6-1
 libklibc 2.0.3-0ubuntu1
 libkmod2 15-0ubuntu6
 libkrb5-26-heimdal 1.6~git20131207+dfsg-1ubuntu1.1
 libkrb5-3 1.12+dfsg-2ubuntu5.1
 libkrb5support0 1.12+dfsg-2ubuntu5.1
 libldap-2.4-2 2.4.31-1+nmu2ubuntu8
 liblocale-gettext-perl 1.05-7build3
 liblvm2app2.2 2.02.98-6ubuntu2
 liblzma5 5.1.1alpha+20120614-2ubuntu2
 libmagic1 1:5.14-2ubuntu3.3
 libmount1 2.20.1-5.1ubuntu20.4
 libncurses5 5.9+20140118-1ubuntu1
 libncursesw5 5.9+20140118-1ubuntu1
 libnih-dbus1 1.0.3-4ubuntu25
 libnih1 1.0.3-4ubuntu25
 libnspr4 2

Re: [Gluster-devel] [Gluster-users] Gluster Slogans Revisited

2015-05-04 Thread Josh Boon
Gluster: Software {re}defined storage 

is one I really like. I wouldn't want to eliminate "Gluster" completely, as 
newcomers would then wonder about the binaries, package names, etc. The tagline 
speaks to the fact that we've taken the time to consider some of the common 
pitfalls of storage and make things nicer for the admin involved. Also, 
throwing out acronyms in a name leaves more questions than answers, I feel, and 
that's something to avoid even though I proposed some of the acronyms myself :) 


- Original Message -

From: Benjamin Turner <bennytu...@gmail.com> 
To: Tom Callaway <tcall...@redhat.com> 
Cc: gluster-us...@gluster.org, Gluster Devel <gluster-devel@gluster.org> 
Sent: Friday, May 1, 2015 8:07:55 PM 
Subject: Re: [Gluster-users] [Gluster-devel] Gluster Slogans Revisited 

I liked: 

Gluster: Redefine storage. 
Gluster: Software-Defined Storage. Redefined 
Gluster: RAID G 
Gluster: RAISE (redundant array of inexpensive storage equipment) 
Gluster: Software {re}defined storage + 

And suggested: 

Gluster {DS|FS|RAISE|RAIDG}: Software Defined Storage Redefined (some 
combination of lines 38-42) 

My thinking is: 

Our new name GlusterDS/whatever: Tagline with excitement 

Gluster * - This is the "change glusterfs to glusterds" idea we were already 
discussing. Maybe we could even come up with a longer acronym, like RAIDG or 
RAISE, or something more definitive of what we are? I think instead of just 
using "Gluster:" we come up with the new way to refer to GlusterFS and use this 
as a way to push that as well. 

Tagline - Whatever cool saying that get people excited to check out 
glusterDS/glusterRAIDG/ whatever 

So ex: 

Gluster RAISE: Software Defined Storage Redefined 
Gluster DS: Software defined storage defined your way 
Gluster RAIDG: Storage from the ground up 

Just my $0.02 

-b 


On Fri, May 1, 2015 at 2:51 PM, Tom Callaway <tcall...@redhat.com> wrote: 


Hello Gluster Ants! 

Thanks for all the slogan suggestions that you've provided. I've made an 
etherpad page which collected them all, along with some additional 
suggestions made by Red Hat's Brand team: 

https://public.pad.fsfe.org/p/gluster-slogans 

Feel free to discuss them (either here or on the etherpad). If you like 
a particular slogan, feel free to put a + next to it on the etherpad. 

Before we can pick a new slogan, it needs to be cleared by Red Hat 
Legal, this is a small formality to make sure that we're not infringing 
someone else's trademark or doing anything that would cause Red Hat 
undue risk. We don't want to waste their time by having them clear every 
possible suggestion, so your feedback is very helpful to allow us to 
narrow down the list. At the end of the day, barring legal clearance, 
the slogan selection is up to the community. 

Thanks! 

~tom 

== 
Red Hat 
___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 





___ 
Gluster-users mailing list 
gluster-us...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users 



[Gluster-devel] Ubuntu PPA 3.6.3

2015-05-04 Thread Josh Boon
Hey folks, 

Do we know when the Ubuntu PPA will be up to date? I'll be doing a major 
upgrade on my infrastructure and don't want to have to do it more than once. 
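In the meantime I'm holding the packages so nothing moves under me before the PPA catches up (a sketch; package names as shipped on trusty):

apt-mark hold glusterfs-server glusterfs-client glusterfs-common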

Thanks, 
Josh 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Got a slogan idea?

2015-04-07 Thread Josh Boon
Gluster: RAID G

or Gluster: RAISE (redundant array of inexpensive storage equipment). This one 
sounds nicer as a complete rebrand to RAISE, though.

I'm also throwing in support to drop the "FS", as we do a lot more than files.  


- Original Message -
From: Marcos Renato da Silva Junior <marco...@dee.feis.unesp.br>
To: gluster-us...@gluster.org
Sent: Tuesday, April 7, 2015 10:45:34 PM
Subject: Re: [Gluster-users] Got a slogan idea?

Gluster: Beyond the limits


On 01-04-2015 09:14, Tom Callaway wrote:
> Hello Gluster Ant People!
>
> Right now, if you go to gluster.org, you see our current slogan in 
> giant text:
>
>    Write once, read everywhere
>
> However, no one seems to be super-excited about that slogan. It 
> doesn't really help differentiate gluster from a portable hard drive 
> or a paperback book. I am going to work with Red Hat's branding 
> geniuses to come up with some possibilities, but sometimes, the best 
> ideas come from the people directly involved with a project.
>
> What I am saying is that if you have a slogan idea for Gluster, I want 
> to hear it. You can reply on list or send it to me directly. I will 
> collect all the proposals (yours and the ones that Red Hat comes up 
> with) and circle back around for community discussion in about a month 
> or so.
>
> Thanks!
>
> ~tom
>
> ==
> Red Hat
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...

2015-02-17 Thread Josh Boon
Do we have use cases to focus on? Gluster is part of the answer to many 
different questions so if it's things like simple replication and distribution 
and basic performance tuning I could help. I also have a heavy Ubuntu tilt so 
if it's Red Hat oriented I'm not much help :) 


- Original Message -
From: Justin Clift <jus...@gluster.org>
To: Gluster Users <gluster-us...@gluster.org>, Gluster Devel 
<gluster-devel@gluster.org>
Cc: Jesse Noller <jesse.nol...@rackspace.com>
Sent: Tuesday, February 17, 2015 9:37:05 PM
Subject: [Gluster-devel] Looking for volunteer to write up official How to 
do GlusterFS in the Cloud: The Right Way for Rackspace...

Yeah, huge subject line.  :)

But it gets the message across... Rackspace provides us a *bunch* of online VMs 
which host our infrastructure and run the majority of our regression tests.

They've asked us if we could write up a "How to do GlusterFS in the Cloud: The 
Right Way" (technical) doc, for them to add to their doc collection.
They get asked for this a lot by customers. :D

Sooo... looking for volunteers to write this up.  And yep, you're welcome to
have your name all over it (eg this is good promo/CV material :)

VMs (in Rackspace obviously) will be provided of course.

Anyone interested?

(Note - not suitable for a GlusterFS newbie. ;))

Regards and best wishes,

Justin Clift

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel