Re: [Gluster-users] Is it possible to manually specify brick replication location?

2013-07-08 Thread Joe Julian
See if this helps with the concept you're working on: 
http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
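One way to expand by a single server without ending up with a replica pair on one host is to migrate one existing brick onto the new node, then add the freed slot back as part of a new cross-node pair. A heavily hedged sketch of that idea (the volume name gv0, node5, and the specific brick choices are illustrative assumptions, not taken from the post):

```shell
# Illustrative sketch only -- gv0, node5, and the brick choices are assumed.
# Step 1: migrate one brick of an existing replica pair to the new server.
gluster volume replace-brick gv0 \
    node4:/export/bricks/b node5:/export/bricks/a start
# Step 2: once migration completes (check with "replace-brick ... status"),
# commit the move.
gluster volume replace-brick gv0 \
    node4:/export/bricks/b node5:/export/bricks/a commit
# Step 3: add a new replica pair that spans node4 and node5, reusing the
# freed slot on node4 (after clearing out its old contents).
gluster volume add-brick gv0 \
    node4:/export/bricks/b node5:/export/bricks/b
```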

Michael Peek wrote:

>[original message trimmed; full text appears below]
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is it possible to manually specify brick replication location?

2013-07-08 Thread Eco Willson
Michael,

For your situation, you would want to use something like:

gluster volume create <volname> replica 2 node1:/export/bricks/a node2:/export/bricks/a 
node1:/export/bricks/b node2:/export/bricks/b etc...

You are correct that the order in which you specify the bricks is how the 
pairing will occur.  Specifying replica 2 means there will always be two 
copies of the data, and if one node of a pair goes down, the other will pick 
up automatically.
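Spelled out for all four nodes, a create command that keeps every replica pair on two different hosts would look something like the following (the volume name gv0 is an assumption for illustration):

```shell
# Consecutive bricks in the argument list form replica pairs, so alternate
# hosts within each pair. The volume name "gv0" is illustrative.
gluster volume create gv0 replica 2 \
    node1:/export/bricks/a node2:/export/bricks/a \
    node3:/export/bricks/a node4:/export/bricks/a \
    node1:/export/bricks/b node2:/export/bricks/b \
    node3:/export/bricks/b node4:/export/bricks/b
gluster volume start gv0
```

With this ordering, losing any single node leaves one live copy of every file on another host.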

Thanks,

Eco

- Original Message -
From: "Michael Peek" 
To: Gluster-users@gluster.org
Sent: Monday, July 8, 2013 8:46:47 AM
Subject: [Gluster-users] Is it possible to manually specify brick replication location?

[original message trimmed; full text appears below]



[Gluster-users] Is it possible to manually specify brick replication location?

2013-07-08 Thread Michael Peek
Hi Gluster gurus,

I'm new to Gluster, so if there is a solution already talked about
somewhere then gladly point me to it and I'll get out of the way.  That
said, here's my problem:

I have four machines.  Each machine is running Ubuntu 12.04 with Gluster
3.2.5.  Each machine has two drives:

node1:/export/bricks/a
node1:/export/bricks/b
node2:/export/bricks/a
node2:/export/bricks/b
node3:/export/bricks/a
node3:/export/bricks/b
node4:/export/bricks/a
node4:/export/bricks/b

I created a replicated volume (replica 2), added the bricks, mounted
it to /mnt, and then created a file with "touch /mnt/this".  The file
"this" appeared on the two bricks located on node1:

node1:/export/bricks/a/this
and
node1:/export/bricks/b/this

So if node1 goes down, all access to the file "this" is lost.  It seemed
to me that the order in which bricks were added dictated the replication
location -- i.e. the second brick added is used as the replication
destination for the first brick, and so on with the 3rd and 4th pair of
bricks, 5th and 6th, etc.
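The ordering rule can be made concrete with a one-liner (plain shell, nothing Gluster-specific): with replica 2, consecutive bricks in the list become a replica set, so grouping the list two at a time shows the pairing.

```shell
# With "replica 2", GlusterFS takes bricks two at a time, in the order
# listed, to form replica sets. Group the list two at a time to see it:
bricks="node1:/export/bricks/a node1:/export/bricks/b node2:/export/bricks/a node2:/export/bricks/b"
# Print each would-be replica pair on its own line:
echo $bricks | xargs -n2
```

The first line of output shows both copies landing on node1, which is exactly the failure mode described above; listing the bricks host-alternated fixes it.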

I've searched the archives, and this seems to be confirmed in a past
post located here:
http://supercolony.gluster.org/pipermail/gluster-users/2013-June/036272.html

> Replica sets are done in order that the bricks are added to the volume.
...
> So, you have an issue here, that both bricks of a replica set are on the 
> same host.

Unfortunately, this was the end of the thread and no more information
was forthcoming.

Now, I'm just starting out, and my volume is not yet used in production,
so I have the luxury of removing all the bricks and then adding them
back in an order that allows for replication to be done across nodes the
way that I want.  But I see this as a serious problem.  What happens
down the road when I need to expand?

How would I add another machine as a node, and then add its bricks, and
still have replication done outside of that one machine?  Is there a way
to manually specify the replication location?  Is there a way to
reshuffle replica bricks on a running system?

A couple of solutions have presented themselves to me:
1) Only add new nodes in pairs, and make sure to add bricks in the
correct order.
2) Only add new nodes in pairs, but set up two Gluster volumes and use
geo-replication (even though the geographical distance between the two
clusters may be as little as one inch).
3) Only add new nodes in pairs, and use RAID or LVM to glue the drives
together, so that as far as Gluster is concerned, each node only has one
brick.
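Option 1 above, for what it's worth, amounts to a pair of add-brick calls on an existing replica-2 volume, listing each pair so it spans both new hosts. A sketch under assumed names (volume gv0, new nodes node5 and node6):

```shell
# Expanding a replica-2 volume by a pair of nodes: each add-brick call
# supplies one replica pair, spanning both new hosts.
# Volume name "gv0" and nodes node5/node6 are illustrative.
gluster volume add-brick gv0 \
    node5:/export/bricks/a node6:/export/bricks/a
gluster volume add-brick gv0 \
    node5:/export/bricks/b node6:/export/bricks/b
```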

But each of these solutions involves adding new nodes in pairs, which
increases the incremental cost of expansion more than it feels like it
should.  It just seems to me that there should be a smarter way to
handle things than what I'm seeing before me, so I'm hoping that I've
just missed something obvious.

So what is the common wisdom among seasoned Gluster admins?

Thanks for your help,

Michael