Like Eric, I too use LVM to partition off bricks for different volumes.
You can even specify which physical device a brick's logical volume
lands on when you create it, e.g. lvcreate -n myvol_brick_a -l50
vg_gluster /dev/sda1. This is handy if you have to replace a disk while
the old one is still alive, as you can just install the replacement and
do a pvmove.
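The replacement procedure is roughly this (a sketch; the device names
and volume group name are just examples, adjust for your hardware):

```shell
# Add the replacement disk to the volume group.
pvcreate /dev/sdb1
vgextend vg_gluster /dev/sdb1

# Migrate all extents off the failing disk while the brick stays
# online, then drop the old disk from the volume group.
pvmove /dev/sda1 /dev/sdb1
vgreduce vg_gluster /dev/sda1
```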
Each brick process uses memory. I have 15 volumes, 4 disks per server,
and one brick per volume per disk; 60 bricks would use a lot of memory.
Rather than buy a bunch more memory that would usually sit idle, I set
performance.cache-size to a number that uses up just enough memory. Do
experiment with that setting: the size limit appears to be applied to
multiple caches, so the actual memory used seems to be some multiple of
what you set it to.
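Setting it looks like this (the volume name and size here are just
examples; tune the value to your own memory budget):

```shell
# Cap the cache per volume; the effective memory footprint can be a
# multiple of this value, so start low and measure actual usage.
gluster volume set myvol performance.cache-size 64MB
```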
When a server comes back after maintenance and the self-heals start,
having multiple bricks healing simultaneously can put quite a load on
your servers. Test that and see whether the result is acceptable. I
actually kill the brick processes for non-essential volumes while the
essential volumes are healing, then use volume start ... force to start
the bricks for the degraded volumes individually, to manage the load.
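In outline, that looks something like this (volume names are
placeholders; the brick PID comes from the status output):

```shell
# Find the brick process for a non-essential volume on this server,
# then stop it while the essential volumes heal.
gluster volume status nonessential-vol
kill <brick-pid-from-status-output>

# Later, restart just the killed bricks: "force" starts missing brick
# processes on a volume that is already in the Started state.
gluster volume start nonessential-vol force
```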
On 11/12/2013 08:21 AM, Eric Johnson wrote:
I would suggest using different partitions for each brick. We use LVM
and start off with a relatively small amount of allocated space, then
grow the partitions as needed. If you place 2 bricks on the same
partition, the free space is not going to be reported correctly.
Example: a 1TB partition with 2 bricks on it:

brick vol-1-a using 200GB
brick vol-2-a using 300GB

Both volumes would report ~500GB free, but in reality there is only
~500GB that either could use. I don't know whether there would be any
other issues with putting 2 or more bricks on the same partition, but
it doesn't seem like a good idea. I had Gluster set up that way when I
was first testing it, and it seemed to work apart from the free-space
issue, but I quickly realized it would be better to separate the bricks
out onto their own partitions. Using LVM lets you easily grow the
partitions as needed.
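Growing a brick's partition under LVM is a one-liner (the LV path and
size here are examples; -r assumes a filesystem that fsadm can resize,
such as the XFS commonly used under Gluster bricks):

```shell
# Add 100GB to the brick's logical volume; -r grows the filesystem
# along with the LV, online.
lvextend -r -L +100G /dev/vg_gluster/vol-1-a
```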
my 2 cents.
On 11/12/13, 9:31 AM, David Gibbons wrote:
Hi All,
I am interested in some feedback on putting multiple bricks on one
physical disk, with each brick assigned to a different volume. Here is
the scenario:
4 disks per server, 4 servers, 2x2 distribute/replicate
I would prefer to have just one volume but need to do geo-replication
on some of the data (but not all of it). My thought was to use two
volumes, which would allow me to selectively geo-replicate just the
data that I need to, by replicating only one volume.
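That per-volume approach would look something like this (host and
volume names are placeholders, and the exact geo-replication syntax
varies between Gluster versions):

```shell
# Geo-replicate only the volume that needs it; the other volume is
# never replicated.
gluster volume geo-replication geo-vol remotehost::geo-vol start
gluster volume geo-replication geo-vol remotehost::geo-vol status
```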
A couple of questions come to mind:
1) Any implications of doing two bricks for different volumes on one
physical disk?
2) Will the free space on each volume still be calculated correctly?
I.e., if one volume takes up 2/3 of the total physical disk space, will
the second volume still reflect the correct amount of used space?
3) Am I being stupid/missing something obvious?
Cheers,
Dave
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
--
Eric Johnson
713-968-2546
VP of MIS
Internet America
www.internetamerica.com