Re: [Gluster-users] 90 Brick/Server suggestions?
On 02/17/2017 10:13 AM, Gambit15 wrote:
>> RAID is not an option, JBOD with EC will be used.
>
> Any particular reason for this, other than maximising space by avoiding
> two layers of RAID/redundancy?
> Local RAID would be far simpler & quicker for replacing failed drives,
> and it would greatly reduce the number of bricks & load on Gluster.
>
> We use RAID volumes for our bricks, and the benefits of simplified
> management far outweigh the costs of a little lost capacity.
>
> D

This is as much a question as a comment. My impression is that distributed filesystems like Gluster shine where the number of bricks is close to the number of servers, and both of those numbers are as large as possible. The ideal solution would therefore be 90 disks as 90 bricks on 90 servers. That would be hard to do in practice, but the point of Gluster is to spread the load, and the potential failures, over as large a surface as possible.

Putting all the disks into a big RAID array and then just duplicating that for redundancy is not much better than using something like DRBD, which would likely perform faster but be less scalable. In the end, with big RAID arrays and fewer servers, you have a smaller surface to absorb failures.

Over the years I have seen RAID systems fail because users put them in, forgot about them, and then suffered system failures because they did not monitor the arrays. I would be willing to bet that 80%+ of all the RAID arrays out there are not monitored. Gluster is more in-your-face about failures, and arguably should be more reliable in practice because you will know about a failure quickly.

Feel free to correct my misconceptions.

--
Alvin Starr               ||   voice: (905)513-7688
Netvel Inc.               ||   Cell:  (416)806-0133
al...@netvel.net          ||
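For anyone who does go the RAID route, keeping the arrays monitored is cheap to get right. A minimal sketch for Linux md RAID, assuming mdadm and working outbound mail; the alert address is a placeholder, and the config path varies by distro (/etc/mdadm.conf or /etc/mdadm/mdadm.conf):

    # in mdadm.conf: mail an alert when an array degrades
    MAILADDR storage-alerts@example.com

    # run the monitor in the background, scanning all arrays
    mdadm --monitor --scan --daemonise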
Re: [Gluster-users] 90 Brick/Server suggestions?
I wouldn't do that kind of per-server density for anything but cold storage. Putting that many eggs in one basket increases the potential for catastrophic failure.

On February 15, 2017 11:04:16 AM PST, "Serkan Çoban" wrote:
> Hi,
>
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?
>
> Thanks,
> Serkan

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Re: [Gluster-users] 90 Brick/Server suggestions?
> Any particular reason for this, other than maximising space by avoiding
> two layers of RAID/redundancy?

Yes, that's right: we can get 720TB of net usable space per server with 90 x 10TB disks. Any RAID layer on top would cost too much capacity.

On Fri, Feb 17, 2017 at 6:13 PM, Gambit15 wrote:
>> RAID is not an option, JBOD with EC will be used.
>
> Any particular reason for this, other than maximising space by avoiding
> two layers of RAID/redundancy?
> Local RAID would be far simpler & quicker for replacing failed drives,
> and it would greatly reduce the number of bricks & load on Gluster.
>
> We use RAID volumes for our bricks, and the benefits of simplified
> management far outweigh the costs of a little lost capacity.
>
> D
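For reference, a rough version of the arithmetic behind that figure, assuming the 16+4 erasure scheme mentioned elsewhere in the thread (the exact number depends on the EC layout chosen):

    90 disks x 10TB        = 900TB raw per server
    16+4 EC keeps 16/20    = 80% of raw
    900TB x 0.80           = 720TB net usable
    (24:4 would keep 24/28, about 85.7%, i.e. roughly 771TB)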
Re: [Gluster-users] 90 Brick/Server suggestions?
> RAID is not an option, JBOD with EC will be used.

Any particular reason for this, other than maximising space by avoiding two layers of RAID/redundancy?
Local RAID would be far simpler & quicker for replacing failed drives, and it would greatly reduce the number of bricks & load on Gluster.

We use RAID volumes for our bricks, and the benefits of simplified management far outweigh the costs of a little lost capacity.

D
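To make that trade-off concrete, a hedged sketch of what replacing one failed disk looks like in each model; device names, volume name, and brick paths are all hypothetical:

    # RAID brick: swap the disk at the md layer; Gluster never notices
    mdadm /dev/md0 --fail /dev/sdq --remove /dev/sdq
    mdadm /dev/md0 --add /dev/sdq

    # JBOD brick: every dead disk is a Gluster-level operation plus a heal
    gluster volume replace-brick myvol \
        node4:/bricks/b17/data node4:/bricks/b17-new/data commit force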
Re: [Gluster-users] 90 Brick/Server suggestions?
There may be some helpful information in this article:
http://45drives.blogspot.ca/2016/11/an-introduction-to-clustering-how-to.html

Disclaimer: I don't work for 45drives, I'm just a satisfied customer.

Good luck, and please let us know how this works out for you.

regards,
tp

On Fri, 17 Feb 2017, Serkan Çoban wrote:

>> We have 12 on order. Actually the DSS7000 has two nodes in the chassis,
>> and each accesses 45 bricks. We will be using an erasure-coding scheme,
>> probably 24:3 or 24:4; we have not sat down and really thought about the
>> exact scheme we will use.
>
> If we cannot get a 1 node/90 disk configuration, we will also get it as
> 2 nodes/45 disks each.
> Be careful about EC. I am using 16+4 in production; the only drawback is
> slow rebuild times. It takes 10 days to rebuild an 8TB disk. Although
> parallel heal for EC improves this in 3.9, don't forget to test rebuild
> times for different EC configurations.
>
>> 90 disks per server is a lot. In particular, it might be out of balance
>> with other characteristics of the machine - number of cores, amount of
>> memory, network or even bus bandwidth
>
> Nodes will be pretty powerful: 2x18-core CPUs with 256GB RAM and 2x10Gb
> bonded ethernet. It will be used for archive purposes, so I don't need
> more than 1GB/s per node. RAID is not an option; JBOD with EC will be
> used.
>
>> gluster volume set all cluster.brick-multiplex on
>
> I just read the 3.10 release notes and saw this. I think this is a good
> solution; I plan to use 3.10.x and will probably test multiplexing and
> get in touch for help.
>
> Thanks for the suggestions,
> Serkan
Re: [Gluster-users] 90 Brick/Server suggestions?
> We have 12 on order. Actually the DSS7000 has two nodes in the chassis,
> and each accesses 45 bricks. We will be using an erasure-coding scheme,
> probably 24:3 or 24:4; we have not sat down and really thought about the
> exact scheme we will use.

If we cannot get a 1 node/90 disk configuration, we will also get it as 2 nodes/45 disks each.

Be careful about EC. I am using 16+4 in production; the only drawback is slow rebuild times. It takes 10 days to rebuild an 8TB disk. Although parallel heal for EC improves this in 3.9, don't forget to test rebuild times for different EC configurations.

> 90 disks per server is a lot. In particular, it might be out of balance with
> other characteristics of the machine - number of cores, amount of memory,
> network or even bus bandwidth

Nodes will be pretty powerful: 2x18-core CPUs with 256GB RAM and 2x10Gb bonded ethernet. It will be used for archive purposes, so I don't need more than 1GB/s per node. RAID is not an option; JBOD with EC will be used.

> gluster volume set all cluster.brick-multiplex on

I just read the 3.10 release notes and saw this. I think this is a good solution; I plan to use 3.10.x and will probably test multiplexing and get in touch for help.

Thanks for the suggestions,
Serkan

On Fri, Feb 17, 2017 at 1:39 AM, Jeff Darcy wrote:
>> We are evaluating the Dell DSS7000 chassis with 90 disks.
>> Has anyone used that many bricks per server?
>> Any suggestions or advice?
>
> 90 disks per server is a lot. In particular, it might be out of balance
> with other characteristics of the machine - number of cores, amount of
> memory, network or even bus bandwidth. Most people who put that many
> disks in a server use some sort of RAID (HW or SW) to combine them into
> a smaller number of physical volumes on top of which filesystems and
> such can be built. If you can't do that, or don't want to, you're in
> poorly explored territory. My suggestion would be to try running as 90
> bricks. It might work fine, or you might run into various kinds of
> contention:
>
> (1) Excessive context switching would indicate not enough CPU.
>
> (2) Excessive page faults would indicate not enough memory.
>
> (3) Maxed-out network ports . . . well, you can figure that one out. ;)
>
> If (2) applies, you might want to try brick multiplexing. This is a new
> feature in 3.10, which can reduce memory consumption by more than 2x in
> many cases by putting multiple bricks into a single process (instead of
> one per brick). This also drastically reduces the number of ports you'll
> need, since the single process only needs one port total instead of one
> per brick. In terms of CPU usage or performance, gains are far more
> modest. Work in that area is still ongoing, as is work on multiplexing
> in general. If you want to help us get it all right, you can enable
> multiplexing like this:
>
>     gluster volume set all cluster.brick-multiplex on
>
> If multiplexing doesn't help for you, speak up and maybe we can make it
> better, or perhaps come up with other things to try. Good luck!
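For anyone benchmarking those rebuild times: the parallel EC heal mentioned above is tunable in 3.9+. A rough sketch, where the volume name and thread count are placeholders (disperse.shd-max-threads defaults to 1):

    # let the self-heal daemon heal several files in parallel
    gluster volume set myvol disperse.shd-max-threads 8

    # then time a full heal after replacing a disk
    gluster volume heal myvol full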
Re: [Gluster-users] 90 Brick/Server suggestions?
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?

90 disks per server is a lot. In particular, it might be out of balance with other characteristics of the machine - number of cores, amount of memory, network or even bus bandwidth. Most people who put that many disks in a server use some sort of RAID (HW or SW) to combine them into a smaller number of physical volumes on top of which filesystems and such can be built. If you can't do that, or don't want to, you're in poorly explored territory. My suggestion would be to try running as 90 bricks. It might work fine, or you might run into various kinds of contention:

(1) Excessive context switching would indicate not enough CPU.

(2) Excessive page faults would indicate not enough memory.

(3) Maxed-out network ports . . . well, you can figure that one out. ;)

If (2) applies, you might want to try brick multiplexing. This is a new feature in 3.10, which can reduce memory consumption by more than 2x in many cases by putting multiple bricks into a single process (instead of one per brick). This also drastically reduces the number of ports you'll need, since the single process only needs one port total instead of one per brick. In terms of CPU usage or performance, gains are far more modest. Work in that area is still ongoing, as is work on multiplexing in general. If you want to help us get it all right, you can enable multiplexing like this:

    gluster volume set all cluster.brick-multiplex on

If multiplexing doesn't help for you, speak up and maybe we can make it better, or perhaps come up with other things to try. Good luck!
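A quick way to watch for the contention signals listed above on a running node, assuming the standard procps and sysstat tools are installed:

    # (1) CPU: watch the cs (context switches/s) column
    vmstat 5

    # (2) memory: majflt/s is the major page-fault rate
    sar -B 5

    # with multiplexing on, many bricks share one glusterfsd;
    # compare this count before and after enabling it
    pgrep -c glusterfsd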
Re: [Gluster-users] 90 Brick/Server suggestions?
We have 12 on order. Actually, the DSS7000 has two nodes in the chassis, and each accesses 45 bricks. We will be using an erasure-coding scheme, probably 24:3 or 24:4; we have not sat down and really thought about the exact scheme we will use.

On 15 February 2017 at 14:04, Serkan Çoban wrote:
> Hi,
>
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?
>
> Thanks,
> Serkan
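For what it's worth, a dispersed volume with that kind of scheme would be created along these lines. A sketch only, with hypothetical volume, host, and brick names; a 24+4 set needs 28 bricks, here one per node (24:3 would be disperse-data 24 redundancy 3 across 27 bricks):

    gluster volume create archive disperse-data 24 redundancy 4 \
        node{1..28}:/bricks/b1/data
    gluster volume start archive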