Re: [Gluster-users] Volume Types?

2013-09-19 Thread Rejy M Cyriac
On 09/19/2013 03:14 PM, Jake G. wrote:
> 
> 
> 
> *From:* Rejy M Cyriac 
> *To:* gluster-users@gluster.org
> *Sent:* Thursday, September 19, 2013 6:30 PM
> *Subject:* Re: [Gluster-users] Volume Types?
> 
> On 09/19/2013 02:55 PM, Jake G. wrote:
>>
>> Does this mean I need 4 host servers (peers) to do this?
>> Or could I create 4 bricks within a single distributed volume?
>>
>> I am really confused (X_X)
>>
>>
> 
> You would need 4 bricks of 100GB, two on each server, and create a
> distribute-replicate volume with a replica count of 2.
> 
> The command to run would follow the syntax below.
> 
> gluster volume create <volname> replica 2 server1:/bricks/brick1
> server2:/bricks/brick1 server1:/bricks/brick2 server2:/bricks/brick2
> 
> The order in which the bricks are given in the command is important to
> specify which bricks form replica sets.
> 
> Since the replica count given in the above example is 2,
> server1:/bricks/brick1 - server2:/bricks/brick1 will be a replica set,
> and server1:/bricks/brick2 - server2:/bricks/brick2 will be the other
> replica set.
> 
> - rejy (rmc)
> 
>>
>>
>>
>> 
>> *From:* Athanasios Kostopoulos <athanasios.kostopou...@classmarkets.com>
>> *To:* Jake G. <dj_dark_jungl...@yahoo.com>
>> *Cc:* "gluster-users@gluster.org" <gluster-users@gluster.org>
>> *Sent:* Thursday, September 19, 2013 6:07 PM
>> *Subject:* Re: [Gluster-users] Volume Types?
>>
>> Hi Jake,
>> GlusterFS newbie here, so take my email with a big grain of salt.
>> I *think* that in order to have 200GB available AND replication
>> (assuming a replication factor of 2) you need 4 bricks of 100GB, not
>> just 2.
>>
>>
>> On Thu, Sep 19, 2013 at 11:06 AM, Jake G. <dj_dark_jungl...@yahoo.com> wrote:
>>
>>Hi All,
>>
>>Wondering if it is possible to create a replicated and distributed
>>volume in Gluster?
>>
>>I have two host servers, each with a 100GB partition for gluster.
>>
>>After creating the volume I would like there to be 200GB available,
>>but if serverA dies all the files will still be present on serverB.
>>Is this even possible? If so can it be done with only two host servers?
>>
>>Thank you!
>>
>>
>>
>>
>>
>>
> 
> Thank you for your help!
> I would like to confirm the command you gave. Should:
> gluster volume create <volname> replica 2
> server1:/bricks/brick1 server2:/bricks/brick1 server1:/bricks/brick2
> server1:/bricks/brick2
> Be:
> gluster volume create <volname> replica 2
> server1:/bricks/brick1 server2:/bricks/brick1 server1:/bricks

Re: [Gluster-users] Volume Types?

2013-09-19 Thread Rejy M Cyriac
On 09/19/2013 02:55 PM, Jake G. wrote:
> 
> Does this mean I need 4 host servers (peers) to do this?
> Or could I create 4 bricks within a single distributed volume?
> 
> I am really confused (X_X)
> 
> 

You would need 4 bricks of 100GB, two on each server, and create a
distribute-replicate volume with a replica count of 2.

The command to run would follow the syntax below.

gluster volume create <volname> replica 2 server1:/bricks/brick1
server2:/bricks/brick1 server1:/bricks/brick2 server2:/bricks/brick2

The order in which the bricks are given in the command is important to
specify which bricks form replica sets.

Since the replica count given in the above example is 2,
server1:/bricks/brick1 - server2:/bricks/brick1 will be a replica set,
and server1:/bricks/brick2 - server2:/bricks/brick2 will be the other
replica set.
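
Putting that together, a minimal end-to-end sketch (assuming a
hypothetical volume name 'gvol0', bricks already formatted and mounted
under /bricks on each server, and both peers probed):

# bricks are listed in replica-set order: each consecutive pair of 2 replicates
gluster volume create gvol0 replica 2 \
  server1:/bricks/brick1 server2:/bricks/brick1 \
  server1:/bricks/brick2 server2:/bricks/brick2
gluster volume start gvol0
# verify the layout: should report Type: Distributed-Replicate, 2 x 2 = 4
gluster volume info gvol0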

- rejy (rmc)

> 
> 
> 
> 
> *From:* Athanasios Kostopoulos 
> *To:* Jake G. 
> *Cc:* "gluster-users@gluster.org" 
> *Sent:* Thursday, September 19, 2013 6:07 PM
> *Subject:* Re: [Gluster-users] Volume Types?
> 
> Hi Jake,
> GlusterFS newbie here, so take my email with a big grain of salt.
> I *think* that in order to have 200GB available AND replication
> (assuming a replication factor of 2) you need 4 bricks of 100GB, not
> just 2.
> 
> 
> On Thu, Sep 19, 2013 at 11:06 AM, Jake G. wrote:
> 
> Hi All,
> 
> Wondering if it is possible to create a replicated and distributed
> volume in Gluster?
> 
> I have two host servers, each with a 100GB partition for gluster.
> 
> After creating the volume I would like there to be 200GB available,
> but if serverA dies all the files will still be present on serverB.
> Is this even possible? If so can it be done with only two host servers?
> 
> Thank you!
> 
> 
> 
> 
> 
> 



Re: [Gluster-users] First-time GlusterFS yum install fizzles

2013-09-06 Thread Rejy M Cyriac
On 09/06/2013 06:54 PM, Bret Goodfellow wrote:
> I’m attempting to install GlusterFS on two RHEL 6 servers.  When I
> attempt the yum install on the first server, I get the following messages:
> 
>  
> 
> [root@duchesne1 yum.repos.d]# yum install
> glusterfs-3.3.0-1.el6.x86_64.rpm glusterfs-fuse-3.3.0-1.el6.x86_64.rpm
> glusterfs-geo-replication-3.3.0-1.el6.x86_64.rpm
> glusterfs-server-3.3.0-1.el6.x86_64.rpm

Take the '.rpm' off the package names; the extension is not part of the
package name.

yum install glusterfs-3.3.0-1.el6.x86_64
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-geo-replication-3.3.0-1.el6.x86_64
glusterfs-server-3.3.0-1.el6.x86_64
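
As an aside, 'yum install' with bare package names resolves them from
the configured repositories. If the intent was instead to install .rpm
files already downloaded to the machine, a sketch of that alternative
(assuming the files are in the current directory):

# install from local files rather than from a repository
yum localinstall glusterfs-3.3.0-1.el6.x86_64.rpm \
  glusterfs-fuse-3.3.0-1.el6.x86_64.rpm \
  glusterfs-geo-replication-3.3.0-1.el6.x86_64.rpm \
  glusterfs-server-3.3.0-1.el6.x86_64.rpm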

- rejy (rmc)

> 
> Loaded plugins: refresh-packagekit, rhnplugin, security
> 
> This system is receiving updates from RHN Classic or RHN Satellite.
> 
> Setting up Install Process
> 
> No package glusterfs-3.3.0-1.el6.x86_64.rpm available.
> 
> No package glusterfs-fuse-3.3.0-1.el6.x86_64.rpm available.
> 
> No package glusterfs-geo-replication-3.3.0-1.el6.x86_64.rpm available.
> 
> No package glusterfs-server-3.3.0-1.el6.x86_64.rpm available.
> 
> Error: Nothing to do
> 
> You have new mail in /var/spool/mail/root
> 
> [root@duchesne1 yum.repos.d]#
> 
>  
> 
>  
> 
> I know this really isn’t a GlusterFS issue, but does anyone know how to
> resolve the “No package available” messages?
> 
>  
> 
> Thanks in advance,
> 
>  
> 
> Bret
> 
>  
> 
> 
> 
> 



Re: [Gluster-users] FW: Backup / Restore for Gluster volumes.

2013-09-01 Thread Rejy M Cyriac
On 09/02/2013 12:18 PM, Tamas Papp wrote:
> On 09/01/2013 08:10 AM, Bobby Jacob wrote:
>>  
>>
>> *From:* Bobby Jacob
>> *Sent:* Wednesday, August 28, 2013 8:35 AM
>> *To:* gluster-users@gluster.org
>> *Subject:* Backup / Restore for Gluster volumes.
>>
>>  
>>
>> Hi,
>>
>>  
>>
>> What would be the various options to back up gluster volumes? The
>> bricks are created with xfs filesystems. I’ve gone through the
>> concepts of xfsdump. Can we schedule daily incremental backups
>> of the gluster volume or bricks?
>>
>>  
>>
>> What about restoring the bricks or volumes?
>>
> 
> I wouldn't make backups underneath glusterfs (at the brick level). In
> theory it should work, but there are too many critical points in the
> system.
> 
> For example, you need to stop every transaction (in the case of VMs, all
> of them) to make consistent snapshots.
> If you have multiple bricks, all of them have to be snapshotted at the
> same time, etc.
> In addition, restoring would be a nightmare.
> 
> IMHO, it's a crazy idea.
> 
> 
> tamas
> 
> P.S.: I use dirvish (an rsync-based backup solution) and it works fine,
> though a cluster-side snapshotting feature would help a lot.
> 
> 

http://www.gluster.org/community/documentation/index.php/Features/snapshot
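
Since the original question asked about xfsdump: a minimal per-brick
sketch of level-based incremental dumps (hypothetical paths, and subject
to the consistency caveats raised above, since each brick holds only
part of the volume):

# full (level 0) dump of one brick, e.g. weekly
xfsdump -l 0 -L brick1-full -M media1 -f /backup/brick1.l0 /bricks/brick1
# incremental (level 1) dump of changes since the last level 0, e.g. daily
xfsdump -l 1 -L brick1-incr -M media1 -f /backup/brick1.l1 /bricks/brick1
# restore cumulatively: the level 0 first, then each incremental in order
xfsrestore -r -f /backup/brick1.l0 /restore/brick1
xfsrestore -r -f /backup/brick1.l1 /restore/brick1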


> 


-- 
Regards,

Rejy M Cyriac (rmc)


Re: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

2013-07-10 Thread Rejy M Cyriac
On 07/10/2013 11:38 AM, Frank Sonntag wrote:
> Hi Greg,
> 
> Try using the same server on both machines when mounting, instead of mounting 
> off the local gluster server on both.
> I've used the same approach as you in the past and got into all kinds of 
> split-brain problems.
> The drawback of course is that mounts will fail if the machine you chose is 
> not available at mount time. It's one of my gripes with gluster that you 
> cannot list more than one server in your mount command.
> 
> Frank

Would not the mount option 'backupvolfile-server=<server-name>' help
at mount time, in the case of the primary server not being available?
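
A sketch, using the addresses from the setup quoted below:

# mount off fw1, falling back to fw2 if fw1 is down at mount time
mount -t glusterfs -o backupvolfile-server=192.168.253.2 \
  192.168.253.1:/firewall-scripts /firewall-scripts

The same option can be carried into the options field of the /etc/fstab
entry.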

- rejy (rmc)


> 
> 
> 
> On 10/07/2013, at 5:26 PM, Greg Scott wrote:
> 
>> Bummer.   Looks like I’m on my own with this one.
>>  
>> -  Greg
>>  
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Greg Scott
>> Sent: Tuesday, July 09, 2013 12:37 PM
>> To: 'gluster-users@gluster.org'
>> Subject: Re: [Gluster-users] One node goes offline, the other node can't see 
>> the replicated volume anymore
>>  
>> No takers?   I am running gluster 3.4beta3 that came with Fedora 19.   Is my 
>> issue a consequence of some kind of quorum split-brain thing?
>>  
>> thanks
>>  
>> -  Greg Scott
>>  
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Greg Scott
>> Sent: Monday, July 08, 2013 8:17 PM
>> To: 'gluster-users@gluster.org'
>> Subject: [Gluster-users] One node goes offline, the other node can't see the 
>> replicated volume anymore
>>  
>> I don’t get this.  I have a replicated volume and 2 nodes.  My challenge is, 
>> when I take one node offline, the other node can no longer access the volume 
>> until both nodes are back online again.
>>  
>> Details:
>>  
>> I have 2 nodes, fw1 and fw2.   Each node has an XFS file system, 
>> /gluster-fw1 on node fw1 and /gluster-fw2 on node fw2.   Node fw1 is at IP 
>> address 192.168.253.1.  Node fw2 is at 192.168.253.2. 
>>  
>> I create a gluster volume named firewall-scripts which is a replica of those 
>> two XFS file systems.  The volume holds a bunch of config files common to 
>> both fw1 and fw2.  The application is an active/standby pair of firewalls 
>> and the idea is to keep config files in a gluster volume.
>>  
>> When both nodes are online, everything works as expected.  But when I take 
>> either node offline, node fw2 behaves badly:
>>  
>> [root@chicago-fw2 ~]# ls /firewall-scripts
>> ls: cannot access /firewall-scripts: Transport endpoint is not connected
>>  
>> And when I bring the offline node back online, node fw2 eventually behaves 
>> normally again. 
>>  
>> What’s up with that?  Gluster is supposed to be resilient and self-healing 
>> and able to stand up to this sort of abuse.  So I must be doing something 
>> wrong. 
>>  
>> Here is how I set up everything – it doesn’t get much simpler than this and 
>> my setup is right out of the Getting Started Guide, using my own names. 
>>  
>> Here are the steps I followed, all from fw1:
>>  
>> gluster peer probe 192.168.253.2
>> gluster peer status
>>  
>> Create and start the volume:
>>  
>> gluster volume create firewall-scripts replica 2 transport tcp 
>> 192.168.253.1:/gluster-fw1 192.168.253.2:/gluster-fw2
>> gluster volume start firewall-scripts
>>  
>> On fw1:
>>  
>> mkdir /firewall-scripts
>> mount -t glusterfs 192.168.253.1:/firewall-scripts /firewall-scripts
>>  
>> and add this line to /etc/fstab:
>> 192.168.253.1:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 
>> 0 0
>>  
>> on fw2:
>>  
>> mkdir /firewall-scripts
>> mount -t glusterfs 192.168.253.2:/firewall-scripts /firewall-scripts
>>  
>> and add this line to /etc/fstab:
>> 192.168.253.2:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 
>> 0 0
>>  
>> That’s it.  That’s the whole setup.  When both nodes are online, everything 
>> replicates beautifully.  But take one node offline and it all falls apart. 
>>  
>> Here is the output from gluster volume info, identical on both nodes:
>>  
>> [root@chicago-fw1 etc]# gluster volume info
>>  
>> Volume Name: firewall-scripts
>> Type: Replicate
>> Volume ID: 239b6401-e873-449d-a2d3-1eb2f65a1d4c
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.253.1:/gluster-fw1
>> Brick2: 192.168.253.2:/gluster-fw2
>> [root@chicago-fw1 etc]#
>>  
>> Looking at /var/log/glusterfs/firewall-scripts.log on fw2, I see errors like 
>> this every couple of seconds:
>>  
>> [2013-07-09 00:59:04.706390] I [afr-common.c:3856:afr_local_init] 
>> 0-firewall-scripts-replicate-0: no subvolumes up
>> [2013-07-09 00:59:04.706515] W [fuse-bridge.c:1132:fuse_err_cbk] 
>> 0-glusterfs-fuse: 3160: FLUSH() ERR => -1 (Transport endpoint is not 
>> connected)
>>  
>> And then when I bring fw1 back online, I see these messages on fw2:
>>  
>> [2013-07-09 01:01:35.006782] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 
>> 0

Re: [Gluster-users] problem expanding a volume

2013-07-01 Thread Rejy M Cyriac
On 07/02/2013 07:16 AM, Joshua Hawn wrote:
> I've had this issue recently. The error occurred because I tried adding
> a brick that was previously part of another volume. If this is the case
> for you, then this article may be
> helpful: 
> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
> 
> You'll need to install the 'attr' package to remove the attributes of
> the brick directory that specify that it already belongs to another volume.
> 
> 

If the brick is on a separate file-system, reformatting it, forcefully
if required, is the easiest way to remove all traces of the earlier volume.

If you want to view what attributes have been set on the brick (probably
so as to remove some of them with the 'setfattr -x' command), you may
run the command

getfattr -d -m . <brick-path>
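
Following the article linked above, clearing the leftover markers would
look something like the sketch below (using the brick path from the
error message; run on the server that owns the brick, and only once the
old data is definitely not needed):

# remove the volume-id and gfid attributes left by the previous volume
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
setfattr -x trusted.gfid /export/brick2/sdb1
# clear gluster's internal metadata directory
rm -rf /export/brick2/sdb1/.glusterfs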


- rejy (rmc)

> On Mon, Jul 1, 2013 at 6:04 PM, Matthew Sacks wrote:
> 
> Hello,
> I am having trouble expanding a volume. Every time I try to add
> bricks to the volume, I get this error:
> 
> [root@gluster1 sdb1]# gluster volume add-brick vg0 
> gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
> /export/brick2/sdb1 or a prefix of it is already part of a volume
> 
> 
> Here is the volume info:
> [root@gluster1 sdb1]# gluster volume info vg0
>  
> Volume Name: vg0
> Type: Distributed-Replicate
> Volume ID: 7ebad06f-2b44-4769-a395-475f300608e6
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/export/brick2/sdb1
> Brick2: gluster2:/export/brick2/sdb1
> Brick3: gluster3:/export/brick2/sdb1
> Brick4: gluster4:/export/brick2/sdb1
> 
> Any help is appreciated.
> 


Re: [Gluster-users] SNMP exposure for monitoring?

2013-06-17 Thread Rejy M Cyriac
On 06/17/2013 06:00 PM, Antony Hawkins wrote:
> Hi all,
> 
> I see on this page: https://github.com/pcuzner/gluster-monitor (last
> updated maybe a month ago) that "Gluster itself does not currently
> expose performance metrics".
> 
> Is there any plan to change this, to expose performance metrics via SNMP?
> 
> I have been tasked with adding GlusterFS monitoring to LogicMonitor and
> we would want to do this as "natively" as possible, i.e. without the
> need to rely on scripts and without needing GlusterFS users to
> install/maintain additional intermediary monitoring tools such as
> gluster-monitor.
> 
> Is this possible / on the road map?
> 
> If it is possible, is there a MIB available or at least a documented
> list of OIDs?
> 
> The original request calls for monitoring of (I'm quoting):
> 
> - basic volume usage, free space, used space (etc) statistics;
> - some notification of error conditions, possible split-brain problems;
> - self healing (stats or notification if healing happens that is)".
> 
> 
> Is any of this possible?
> 
> Many thanks
> 
Hi,

This is related to your query, and may interest you.

https://bugzilla.redhat.com/show_bug.cgi?id=971294
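
Until something native exists, the items in the original request can at
least be scraped from the CLI (a stopgap sketch, not SNMP):

# per-brick disk usage, free space, and online status
gluster volume status <volname> detail
# pending self-heal entries on a replicated volume
gluster volume heal <volname> info

Wrapping commands like these in a collector is essentially what tools
such as gluster-monitor do today.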

-- 
Regards,

Rejy M Cyriac (rmc)


Re: [Gluster-users] CentOS 6.4 + selinux enforcing + mount.glusterfs == bad?

2013-03-12 Thread Rejy M Cyriac
On 03/12/2013 02:57 PM, Alan Orth wrote:
> All,
> 
> I just learned how to create a new module to allow this request.  In a
> nutshell, use audit2allow to check the audit log and create a new
> module, see [1] and [2].  My exact steps:
> 
> mkdir ~/selinux_gluster
> cd ~/selinux_gluster
> setenforce 0
> load_policy
> service netfs start
> audit2allow -M glusterd_centos64 -l -i /var/log/audit/audit.log
> setenforce 1
> semodule -i glusterd_centos64.pp
> service netfs start
> 
> More precisely, what you are doing is:
> 
>  1. setting selinux to permissive mode
>  2. re-loading the policy to get a clean "starting point"
>  3. performing the actions which are being denied
>  4. creating a module
>  5. re-enabling selinux enforcing mode
>  6. loading the new selinux module (which, after loading, is copied into
> /etc/selinux/targeted/modules/active/modules/ and will persist after
> reboot)
>  7. gluster should now be able to mount via /etc/fstab on boot, or via
> the netfs service, etc. (i.e., not manually as root).
> 
> Hope this helps some future traveler,
> 
> Alan
> 
> [1] http://fedorasolved.org/security-solutions/selinux-module-building
> [2] man audit2allow
> 
> On 03/12/2013 11:32 AM, Alan Orth wrote:
>> All,
>>
>> I've updated one of my GlusterFS clients from CentOS 6.3 to CentOS 6.4
>> and now my gluster volumes fail to mount at boot.  dmesg shows:
>>
>> type=1400 audit(1363004014.209:4): avc:  denied  { execute } for
>> pid=1150 comm="mount.glusterfs" name="glusterfsd" dev=sda1 ino=1315297
>> scontext=system_u:system_r:mount_t:s0
>> tcontext=system_u:object_r:glusterd_exec_t:s0 tclass=file
>>
>> Mounting manually as root works, but obviously isn't optimal.
>>
>> Does anyone know how to fix this?
>>
>> Thanks!
>>
> 
> -- 
> Alan Orth
> alan.o...@gmail.com
> http://alaninkenya.org
> http://mjanja.co.ke
> "I have always wished for my computer to be as easy to use as my telephone; 
> my wish has come true because I can no longer figure out how to use my 
> telephone." -Bjarne Stroustrup, inventor of C++
> 
> 
> 
> 
This should be fixed with the latest SELinux policy update, which was
out for Red Hat Enterprise Linux today -
selinux-policy-targeted-3.7.19-195.el6_4.3.noarch,
selinux-policy-3.7.19-195.el6_4.3.noarch .


-- 
Regards,

Rejy M Cyriac (rmc)