Above ~100 nodes the answer is "it depends", but memory is certainly the main factor.
The important inputs for the estimate are the number of nodes, filesystems, NSDs, and NFS & SMB shares, and the frequency (period) at which measurements are taken. For many sensors today the default is 1 sample/sec, which is quite high; depending on your needs, 1 sample per 10 sec or even 1 per minute may do. Just guessing at some numbers, I end up with ~24-32 GB of RAM needed in total, and about the same amount of disk space. If you want HA, double that figure, then divide by the number of collector nodes used in the federation setup. Place the collectors on nodes which do not play an additional important part in your cluster; CPU should then not be an issue.

Kind regards
Norbert Schuld

From: Matt Weil <mw...@wustl.edu>
To: gpfsug-discuss@spectrumscale.org
Date: 21/08/2017 21:54
Subject: Re: [gpfsug-discuss] pmcollector node
Sent by: gpfsug-discuss-boun...@spectrumscale.org

any input on this? Thanks

On 7/5/17 10:51 AM, Matt Weil wrote:
> Hello all,
>
> Question on the requirements for pmcollector node(s) for a 500+ node
> cluster. Is there a sizing guide? What specifics should we scale:
> CPU, disks, memory?
>
> Thanks
>
> Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
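The back-of-envelope arithmetic described in the reply above (sum the monitored entities, scale by sample rate, double for HA, divide by collector count) can be sketched as follows. All per-entity constants here are hypothetical guesses for illustration, not official sizing figures:

```python
# Hypothetical pmcollector sizing sketch. The per-entity footprint
# (gb_per_entity_at_1hz) is a made-up illustrative constant, not an
# official IBM figure -- replace it with numbers from your own testing.

def estimate_collector_gb(nodes, filesystems, nsds, shares,
                          samples_per_sec=1.0,
                          gb_per_entity_at_1hz=0.05,
                          ha=True, collectors=2):
    """Rough RAM (and, similarly, disk) in GB needed per collector node."""
    entities = nodes + filesystems + nsds + shares
    total = entities * gb_per_entity_at_1hz * samples_per_sec
    if ha:
        total *= 2             # keep a full replica for high availability
    return total / collectors  # federation spreads the load across collectors

# Example: 500 nodes, 10 filesystems, 100 NSDs, 50 shares at 1 sample/sec,
# HA enabled, two collectors in the federation.
print(round(estimate_collector_gb(500, 10, 100, 50), 1))
```

Note how lowering `samples_per_sec` to 0.1 (one sample per 10 seconds) cuts the estimate by an order of magnitude, which is why the measurement period is worth revisiting before adding hardware.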