Hello,

you should put your metadata pool on fast (SSD/NVMe) storage. The required size depends on your data, but as with everything in Ceph, you can scale it at any time. Maybe just start with 3 SSDs across 3 servers and see how it goes. CPU and RAM are similar: a few gigabytes are enough for smaller deployments, and bigger ones need more. Perhaps you can share some details about your typical data (file size, file count, ...)? And don't forget: you can also scale later by adding additional active MDS daemons online.
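A rough sketch of what that looks like on the CLI (the rule, pool, and filesystem names below are examples, adjust them to your cluster; the cache limit is in bytes):

```shell
# Create a CRUSH rule that only selects SSD-class OSDs:
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Pin the CephFS metadata pool to that rule:
ceph osd pool set cephfs_metadata crush_rule replicated-ssd

# Later, scale out by running a second active MDS:
ceph fs set cephfs max_mds 2

# Give each MDS a larger cache if RAM allows (here 4 GiB):
ceph config set mds mds_cache_memory_limit 4294967296
```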
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Wed, 1 May 2019 at 02:08, Manuel Sopena Ballesteros <manuel...@garvan.org.au> wrote:
> Dear ceph users,
>
> I would like to ask: does the metadata server need many block devices for
> storage, or does it only need RAM? How can I calculate the number of
> disks and/or the amount of memory needed?
>
> Thank you very much
>
> Manuel Sopena Ballesteros
> Big Data Engineer | Kinghorn Centre for Clinical Genomics
> a: 384 Victoria Street, Darlinghurst NSW 2010
> p: +61 2 9355 5760 | +61 4 12 123 123
> e: manuel...@garvan.org.au
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com