Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Kotresh Hiremath Ravishankar
Hi Jackie, Here is the sample output of scrub status where two files are corrupted. root@FEDORA2:$ gluster vol bitrot master scrub status Volume name : master State of scrub: Active (Idle) Scrub impact: lazy Scrub frequency: biweekly Bitrot error log location: /var/log/glusterfs/bitd.log
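
For reference, a sketch of what the rest of that status output looks like; anything beyond the fields quoted above (the per-node error count and corrupted-object GFIDs) is an assumption based on this thread, not verbatim output:

    root@FEDORA2:$ gluster vol bitrot master scrub status
    Volume name : master
    State of scrub: Active (Idle)
    Scrub impact: lazy
    Scrub frequency: biweekly
    Bitrot error log location: /var/log/glusterfs/bitd.log
    =========================================================
    Node: localhost
    Error count: 2
    Corrupted objects (GFIDs): <gfid-1>, <gfid-2>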

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 03:42 PM, Lindsay Mathieson wrote: On 27/10/2016 8:14 AM, Joe Julian wrote: To be fair, though, I can't blame ceph. We had a cascading hardware failure with those storage trays. Even still, if it had been gluster - I would have had files on disks. Ouch :( In that regard how

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 8:14 AM, Joe Julian wrote: To be fair, though, I can't blame ceph. We had a cascading hardware failure with those storage trays. Even still, if it had been gluster - I would have had files on disks. Ouch :( In that regard how do you view sharding? why not as simple as pulling

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:54 PM, Lindsay Mathieson wrote: Maybe a controversial question (and hopefully not trolling), but any particular reason you chose gluster over ceph for these larger setups, Joe? For myself, gluster is much easier to manage and provides better performance on my small

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
Maybe a controversial question (and hopefully not trolling), but any particular reason you chose gluster over ceph for these larger setups, Joe? For myself, gluster is much easier to manage and provides better performance on my small non-enterprise setup, plus it plays nice with zfs. But

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:12 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:07 GMT+02:00 Joe Julian : And yes, they can fail, but 20TB is small enough to heal pretty quickly. 20TB small enough to build quickly? On which network? Gluster doesn't have a dedicated cluster network,

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:09 GMT+02:00 Joe Julian : > Open Compute WiWynn Knox trays. I don't recommend them but they are pretty. > https://goo.gl/photos/tmkRE58xKKaWKdL96 What are you hosting on that huge cluster? 10Gb network, I suppose.

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:07 GMT+02:00 Joe Julian : > And yes, they can fail, but 20TB is small enough to heal pretty quickly. 20TB small enough to build quickly? On which network? Gluster doesn't have a dedicated cluster network, if the cluster is being heavily accessed, the healing

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:06 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:04 GMT+02:00 Joe Julian : I just add enough disks to saturate (and I don't like zfs, personally) per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and create 6 bricks per server. 30
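
For context, a minimal sketch of that brick layout on one server, assuming Linux md RAID, XFS, and placeholder device/host names (the thread does not spell out the exact tooling):

    # one 5-disk RAID-0 set per brick; six such sets per 30-disk server
    mdadm --create /dev/md0 --level=0 --raid-devices=5 /dev/sd[b-f]
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /data/brick1 && mount /dev/md0 /data/brick1
    # repeat for md1..md5 -> /data/brick2../data/brick6, then e.g. replica 3 across three servers:
    gluster volume create bigvol replica 3 \
        server{1,2,3}:/data/brick1/brick \
        server{1,2,3}:/data/brick2/brick
    # ...and so on for bricks 3-6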

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:04 PM, Joe Julian wrote: On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:59 GMT+02:00 Joe Julian : Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:04 GMT+02:00 Joe Julian : > I just add enough disks to saturate (and I don't like zfs, personally) > per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and > create 6 bricks per server. 30 disks per server? Which chassis are you using? Why

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:59 AM, Joe Julian wrote: Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability requirements (typically replica 3). My network is the limiting factor already :( Only 1G * 3 Bond. Cheap and nasty D-Link

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:59 GMT+02:00 Joe Julian : Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability requirements (typically replica 3). Isn't the

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:58 AM, mabi wrote: Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;) Right now it's averaging at 45000. Low load though. -- Lindsay Mathieson
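
For anyone following along, that figure is the cs column in vmstat; illustrative output only (numbers are made up around the 45000 average mentioned above):

    $ vmstat 5
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
     2  0      0 812344  96520 5123456    0    0    40   210 9800 45000  6  4 88  2  0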

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 22:59 GMT+02:00 Joe Julian : > Personally, I prefer raid0 bricks just to get the throughput to saturate my > network, then I use replicate to meet my availability requirements > (typically replica 3). Isn't the ZFS cache on SSD enough to saturate the network?

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:56 AM, Gandalf Corvotempesta wrote: Velociraptors: are they still around? I heard they were EOL'd a couple of years ago. Legacy hardware :) I must admit they last really well. -- Lindsay Mathieson

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 01:56 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:31 GMT+02:00 Lindsay Mathieson : Yah, RAID10. - Two nodes with 4 WD 3TB RED I really hate RAID10. Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread mabi
Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;) Original Message Subject: Re: [Gluster-users] Production cluster planning Local Time: October 26, 2016 10:56 PM UTC Time: October 26, 2016 8:56 PM From: lindsay.mathie...@gmail.com To:

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 22:31 GMT+02:00 Lindsay Mathieson : > Yah, RAID10. > > - Two nodes with 4 WD 3TB RED I really hate RAID10. I'm evaluating 2 RAIDZ2 on each gluster node (12 disks: 6+6 on each RAIDZ2) or one huge RAIDZ3 with 12 disks. The biggest drawback with RAIDZ is that
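
For reference, the two layouts being weighed would be created roughly like this with ZFS (pool and disk names are placeholders):

    # option 1: two 6-disk RAIDZ2 vdevs in one pool
    zpool create tank raidz2 sda sdb sdc sdd sde sdf \
                      raidz2 sdg sdh sdi sdj sdk sdl
    # option 2: a single 12-wide RAIDZ3 vdev
    zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl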

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:35 AM, mabi wrote: I was wondering with your setup you mention, how high are your context switches? I mean what is your typical average context switch and what are your highest context switch peaks (as seen in iostat). Wouldn't that be vmstat? -- Lindsay Mathieson

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread mabi
I was wondering with your setup you mention, how high are your context switches? I mean what is your typical average context switch and what are your highest context switch peaks (as seen in iostat). Best, M. Original Message Subject: Re: [Gluster-users] Production

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-05 23:48 GMT+02:00 Lindsay Mathieson : > It's enough? I also run 10 windows VMs per node. > > > My servers typically run at 4-6% max ioload. They idle under 1% Are you using any ZFS RAID on your servers?

Re: [Gluster-users] Please help

2016-10-26 Thread Nithya Balachandran
On 26 October 2016 at 19:47, Leung, Alex (398C) wrote: > Does anyone have any idea to troubleshoot the following problem? > > > > Alex > > > Can you please provide the gluster client logs (in /var/log/glusterfs) and the gluster volume info? Regards, Nithya
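
A quick way to collect what Nithya is asking for, assuming a FUSE mount and default log locations (the mount-point-derived log name is illustrative):

    gluster volume info
    # the client log is named after the mount point, e.g. for a mount at /mnt/gluster:
    tail -n 200 /var/log/glusterfs/mnt-gluster.log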

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Jackie Tung
That’s great, thank you! We plan to automatically look for these signs. For “scrub status”, do you have a sample output for a positive hit you could share easily? > On Oct 26, 2016, at 12:05 AM, Kotresh Hiremath Ravishankar > wrote: > > Correcting the command..I had

[Gluster-users] Please help

2016-10-26 Thread Leung, Alex (398C)
Does anyone have any idea to troubleshoot the following problem? Alex [root@pdsimg-6 alex]# rsync -av pdsraid1:/export/pdsdata1/mro/safed/rsds/09000_0_ops-120210/crism/ops_crism_vnir_09000_0/ . root@pdsraid1's password: receiving incremental file list ./ 4A_03_B42000_01.DAT

[Gluster-users] Weekly community meeting - 26-Oct-2016

2016-10-26 Thread Kaushal M
Hi all! This week's meeting was GD. We got rid of the regular updates, and just had an open floor. This had the intended effect of more conversations. We discussed 3 main topics today: - How do we recognize contributors and their contributions [manikandan] - What's happening with memory

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Kotresh Hiremath Ravishankar
Correcting the command: I had missed the 'scrub' keyword. "gluster vol bitrot scrub status" Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" > To: "Jackie Tung" > Cc: gluster-users@gluster.org > Sent:

Re: [Gluster-users] Shared storage for dovecot clusters

2016-10-26 Thread Gandalf Corvotempesta
I'm really sorry, I wrote to the wrong mailing list. :( On 26 Oct 2016 8:30 AM, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote: > As I'm planning some server migrations and a new mail architecture, I > would like to create an HA cluster > > Any advice on which kind of

[Gluster-users] Shared storage for dovecot clusters

2016-10-26 Thread Gandalf Corvotempesta
As I'm planning some server migrations and a new mail architecture, I would like to create an HA cluster. Any advice on which kind of shared storage should I use? Is gluster's performance with small files enough for dovecot? Any other solution? It's mandatory to avoid any split-brains or similar