Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Jan Schermer
> On 07 Sep 2015, at 12:19, Christian Balzer wrote: > > On Mon, 7 Sep 2015 12:11:27 +0200 Jan Schermer wrote: > >> Dense SSD nodes are not really an issue for network (unless you really >> use all the throughput), > That's exactly what I wrote... > And dense in the sense of

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Christian Balzer
On Mon, 7 Sep 2015 12:11:27 +0200 Jan Schermer wrote: > Dense SSD nodes are not really an issue for network (unless you really > use all the throughput), That's exactly what I wrote... And dense in the sense of saturating his network would be 4 SSDs, so: > the issue is with CPU and memory
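For a rough sense of the "4 SSDs would saturate the network" point: a back-of-the-envelope sketch below, assuming roughly 500 MB/s of sequential throughput per DC S3510 and a 10GbE (or bonded 2x10GbE) public link; neither figure comes from the thread, so adjust for the real hardware.

    # Back-of-the-envelope: how many SSDs' worth of sequential throughput it
    # takes to fill one network link. Per-SSD speed and protocol efficiency
    # are assumptions for illustration, not measurements from this cluster.
    def ssds_to_saturate(link_gbit=10, ssd_mb_s=500.0, efficiency=0.9):
        link_mb_s = link_gbit * 1000.0 / 8.0 * efficiency  # Gbit/s -> usable MB/s
        return link_mb_s / ssd_mb_s

    print("%.1f SSDs fill 10GbE" % ssds_to_saturate())       # ~2.2
    print("%.1f SSDs fill 2x10GbE" % ssds_to_saturate(20))   # ~4.5

Under those assumptions roughly four SSDs already fill a bonded 2x10GbE pair, which is in line with the point being made above.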

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Jan Schermer
Dense SSD nodes are not really an issue for the network (unless you really use all the throughput); the issue is with CPU and memory throughput (and possibly a crappy kernel scheduler, depending on how up-to-date a distro you use). Also, if you want consistent performance even when a failure occurs, you
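The capacity side of "consistent performance when a failure occurs" can be sanity-checked with a sketch like the one below: when a whole node dies, its data is re-replicated onto the survivors, which need both the free space and the spare disk/CPU bandwidth for the backfill. The 18.2 TB per node follows from the hardware in the original post (4x 800GB SSD + 5x 3TB SAS); the 60% utilisation figure is an assumption.

    # If one of the 7 nodes fails, its share of the data is re-replicated
    # across the remaining 6; check the survivors stay under the nearfull
    # ratio (0.85 is Ceph's default mon_osd_nearfull_ratio).
    NEARFULL = 0.85

    def usage_after_node_failure(nodes=7, node_raw_tb=18.2, used_fraction=0.60):
        total_used = nodes * node_raw_tb * used_fraction
        surviving_raw = (nodes - 1) * node_raw_tb
        return total_used / surviving_raw   # assumes CRUSH rebalances evenly

    after = usage_after_node_failure()
    print("usage after losing one node: %.0f%%" % (after * 100))  # ~70%
    print("still under nearfull:", after < NEARFULL)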

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-06 Thread Christian Balzer
On Sat, 5 Sep 2015 07:13:29 -0300 German Anders wrote: > Hi Christian, > > Ok, so you'd say that it's better to rearrange the nodes so I don't > mix the HDD and SSD disks, right? And create high-perf nodes with SSD and > others with HDD; that's fine since it's a new deploy. > It is what I would

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-05 Thread German Anders
Hi Christian, Ok, so you'd say that it's better to rearrange the nodes so I don't mix the HDD and SSD disks, right? And create high-perf nodes with SSD and others with HDD; that's fine since it's a new deploy. Also, the nodes have different types of RAM and CPU: 4 have more CPU and more memory (384 GB) and
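If the nodes do get split into SSD-only and SAS-only roles, the heterogeneity actually helps: SSD OSDs need far more CPU per OSD than spinning disks, so the four bigger nodes (more cores, 384 GB RAM) are the natural SSD candidates. A rough sizing sketch below; the cores-per-OSD figures are common rules of thumb rather than measurements, and the per-node OSD counts simply follow from dividing the cluster's 28 SSDs over 4 nodes and 35 SAS disks over 3 nodes.

    # Rough CPU sizing for a hypothetical "4 big SSD nodes + 3 SAS nodes"
    # split. The cores-per-OSD numbers are rules of thumb, not measurements.
    CORES_PER_SSD_OSD = 4.0   # SSD OSDs are CPU-hungry
    CORES_PER_HDD_OSD = 1.0   # spinning OSDs need much less

    def cores_needed(ssd_osds, hdd_osds):
        return ssd_osds * CORES_PER_SSD_OSD + hdd_osds * CORES_PER_HDD_OSD

    print("SSD node (7 SSD OSDs): ~%d cores" % cores_needed(7, 0))    # ~28
    print("SAS node (12 SAS OSDs): ~%d cores" % cores_needed(0, 12))  # ~12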

[ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
Hi cephers, I have the following scheme: 7x OSD servers with: 4x 800GB SSD Intel DC S3510 (OSD-SSD), 3x 120GB SSD Intel DC S3500 (Journals), 5x 3TB SAS disks (OSD-SAS). The OSD servers are located on two separate racks with two power circuits each. I would like to know what is the
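Whatever the physical arrangement ends up being, the usual way to keep fast and slow pools apart is to give the SSD and SAS OSDs separate CRUSH roots and rules and point each pool at the matching rule. A minimal sketch below only prints the (hammer-era) commands; the host bucket and rule names are invented for illustration, so treat it as a starting point rather than a recipe.

    # Print the CRUSH commands for splitting SSD and SAS OSDs into separate
    # roots with per-host sub-buckets (hammer-era syntax, names made up).
    HOSTS = ["node%d" % i for i in range(1, 8)]   # the 7 OSD servers

    def crush_split_commands(hosts):
        cmds = []
        for root in ("ssd", "sas"):
            cmds.append("ceph osd crush add-bucket %s root" % root)
            for host in hosts:
                bucket = "%s-%s" % (host, root)
                cmds.append("ceph osd crush add-bucket %s host" % bucket)
                cmds.append("ceph osd crush move %s root=%s" % (bucket, root))
            # replicate across hosts, but only within this root
            cmds.append("ceph osd crush rule create-simple %s_rule %s host"
                        % (root, root))
        return cmds

    for c in crush_split_commands(HOSTS):
        print(c)
    # OSDs are then placed under the matching host bucket (e.g. via
    # 'osd crush location' in ceph.conf), and each pool is pointed at its
    # rule with 'ceph osd pool set <pool> crush_ruleset <id>'.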

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > German Anders > Sent: 04 September 2015 17:18 > To: Nick Fisk <n...@fisk.me.uk> > Cc: ceph-users <ceph-users@lists.ceph.com> > Subject: Re: [ceph-users] Best layout for SSD & SAS OSDs > > Thanks a

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread Christian Balzer
Hello, On Fri, 4 Sep 2015 12:30:12 -0300 German Anders wrote: > Hi cephers, > > I have the following scheme: > > 7x OSD servers with: > Is this a new cluster, total initial deployment? What else are these nodes made of, CPU/RAM/network? While uniform nodes have some appeal

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread Nick Fisk
-users <ceph-users@lists.ceph.com> Subject: [ceph-users] Best layout for SSD & SAS OSDs Hi cephers, I've the following scheme: 7x OSD servers with: 4x 800GB SSD Intel DC S3510 (OSD-SSD) 3x 120GB SSD Intel DC S3500 (Journals) 5x 3TB SAS disks (OSD-SAS) The OSD servers a

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-04 Thread German Anders
ick > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *German Anders > *Sent:* 04 September 2015 16:30 > *To:* ceph-users <ceph-users@lists.ceph.com> > *Subject:* [ceph-users] Best layout for SSD & SAS OSDs > > >