[ceph-users] Ceph - SSD cluster
Hello,

We plan to build a Ceph cluster with all SSDs. Are there any recommendations for a Ceph cluster with full-SSD disks?

Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] Ceph - SSD cluster
This topic has been discussed in detail multiple times and from various angles. Your key points are going to be CPU limits on IOPS, DWPD, IOPS vs. bandwidth, and SSD clusters/pools in general. You should be able to find everything you need in the archives.

On Mon, Nov 20, 2017, 12:56 AM M Ranga Swami Reddy wrote:
> We plan to build a Ceph cluster with all SSDs. Are there any
> recommendations for a Ceph cluster with full-SSD disks?
Re: [ceph-users] Ceph - SSD cluster
Thank you... let me dig through the archives.

Thanks
Swami

On Mon, Nov 20, 2017 at 7:50 PM, David Turner wrote:
> This topic has been discussed in detail multiple times and from various
> angles. You should be able to find everything you need in the archives.
Re: [ceph-users] Ceph - SSD cluster
Hi *,

Just one note, because we hit it: take a look at your discard options and make sure discard does not run on all OSDs at the same time.

2017-11-20 6:56 GMT+01:00 M Ranga Swami Reddy:
> We plan to build a Ceph cluster with all SSDs. Are there any
> recommendations for a Ceph cluster with full-SSD disks?
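The note above can be made concrete. A minimal sketch, assuming Linux OSD hosts where the OSD data sits on mounted filesystems; the schedule and the offset derivation are illustrative, not from the thread:

```shell
# Check whether any filesystem on this host mounts with the online 'discard'
# option (continuous TRIM, which can stall SSDs under load):
grep discard /proc/mounts || echo "no online-discard mounts"

# Prefer periodic fstrim, staggered per host so it never fires on every OSD
# node at once. Derive a deterministic per-host minute offset from the
# hostname:
offset=$(( $(hostname | cksum | cut -d' ' -f1) % 60 ))
echo "fstrim minute offset for this host: ${offset}"

# Illustrative weekly cron entry using that offset (Sundays, 03:xx):
echo "${offset} 3 * * 0 root /sbin/fstrim --all"
```

With each host hashing its own name, the weekly trims spread out over the hour instead of hammering every OSD node simultaneously.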
Re: [ceph-users] Ceph - SSD cluster
On Mon, 20 Nov 2017 15:53:31 +0100 Ansgar Jazdzewski wrote:
> Just one note, because we hit it: take a look at your discard options
> and make sure discard does not run on all OSDs at the same time.

Any SSD that actually _requires_ the use of TRIM/DISCARD to maintain either speed or endurance I'd consider unfit for Ceph, to boot.

Christian

--
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Rakuten Communications
Re: [ceph-users] Ceph - SSD cluster
On 20. nov. 2017 23:06, Christian Balzer wrote:
> Any SSD that actually _requires_ the use of TRIM/DISCARD to maintain
> either speed or endurance I'd consider unfit for Ceph, to boot.

Hello,

Is there some sort of hardware compatibility list for this part? Perhaps community-maintained, on the wiki or similar.

There are some older blog posts covering some devices, but it is hard to find anything Ceph-related for current devices.

Kind regards
Ronny Aasen
Re: [ceph-users] Ceph - SSD cluster
Hi,

Not a real HCL, but keeping this list [1] in mind is mandatory.

In my opinion, use roughly any kind of Intel DC SSD: the 3750 in SATA or, better, the 3700 in NVMe. Avoid any Samsung Pro or EVO of nearly any kind. (Haven't found a link, sorry.)

My 2 cents

[1]: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

On 21/11/2017 at 11:34, Ronny Aasen wrote:
> Is there some sort of hardware compatibility list for this part?
> Perhaps community-maintained, on the wiki or similar.
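The linked journal-suitability test boils down to timing small synchronous writes. A minimal sketch, assuming a Linux host with GNU dd and stat; `/tmp/ssd-journal-test` is a throwaway file standing in for the SSD under test (the blog's full variant writes to the raw device with `oflag=direct,dsync`, which destroys its data, so only do that on an empty drive):

```shell
# Ceph journal writes are small O_DSYNC writes; many consumer SSDs that look
# fast in ordinary benchmarks collapse to a few hundred of these per second.
dd if=/dev/zero of=/tmp/ssd-journal-test bs=4k count=1000 oflag=dsync

# 1000 x 4 KiB sync writes were issued; dd's summary line reports the
# effective throughput -- compare it across candidate drives.
stat -c %s /tmp/ssd-journal-test
rm -f /tmp/ssd-journal-test
```

A drive suitable as a journal device sustains thousands of these 4k sync writes per second; one that drops to double digits of MB/s here will drag the whole OSD down.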
Re: [ceph-users] Ceph - SSD cluster
Plus one here; the EVOs are terrible.

On Tue, Nov 21, 2017 at 6:10 AM Phil Schwarz wrote:
> Not a real HCL, but keeping this list [1] in mind is mandatory.
>
> [1]: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
Re: [ceph-users] Ceph - SSD cluster
On Tue, 21 Nov 2017 11:34:51 +0100 Ronny Aasen wrote:
> Is there some sort of hardware compatibility list for this part?
> Perhaps community-maintained, on the wiki or similar.
>
> There are some older blog posts covering some devices, but it is hard to
> find anything Ceph-related for current devices.

Current devices tend to follow in the footsteps of older ones.

Thus the Intel DC S ones tend to be suitable, however their endurance and performance vary and need to match the expected load. The 37xx ones can deal with anything you throw at them, the x6xx ones with 3 DWPD endurance will do the job for many people, and the 35xx ones are certainly fast enough for many use cases but should only be deployed by people who know exactly what they're doing, and for read-mostly use cases.

The same is true in principle for the Samsung DC-level SSDs (SM863a these days); they certainly perform well enough, sometimes even better than the Intel DC S36xx, to which they are mostly equivalent. Samsung has a history of "firmware (and bug) of the week" issues, so diligent testing and a good return/fix/replace policy from your vendor is always something to make sure of.

Christian

--
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Rakuten Communications
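The point above about endurance needing to match the expected load can be put into numbers. A back-of-the-envelope sketch (all figures are illustrative, not from the thread): the drive-writes-per-day each OSD SSD must absorb is the daily client write volume times replication and write amplification, spread over the OSDs and divided by the SSD capacity; compare that against the model's rated DWPD.

```shell
# 2 TB client writes/day, 3x replication, ~2x write amplification,
# 24 OSDs on 480 GB SSDs -> required DWPD per drive:
awk -v client_gb=2000 -v repl=3 -v wa=2 -v osds=24 -v cap_gb=480 \
    'BEGIN { printf "%.2f DWPD required\n", client_gb * repl * wa / osds / cap_gb }'
# prints "1.04 DWPD required"
```

In this hypothetical cluster a 3 DWPD class drive (the x6xx tier mentioned above) has comfortable headroom, while a ~0.3 DWPD read-oriented model would be worn out well before its warranty period ends.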