Re: [ceph-users] Best setup for SSD

2015-06-12 Thread Mark Nelson
If you are careful about how you balance things, there's probably no 
reason why SSDs and Spinners in the same server wouldn't work so long as 
they are not in the same pool.  I imagine that recommendation is 
probably to keep things simple and have folks avoid designing unbalanced 
systems.
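
For what it's worth, here is a sketch of what "not in the same pool" looks like in practice with pre-Luminous CRUSH (no device classes yet): each host's spinners and SSDs are listed under separate CRUSH roots, and a pool picks one root via its rule. All bucket names and weights below are hypothetical, not from anyone's actual map:

```
# Hypothetical crushmap excerpt: one physical host split into two CRUSH
# buckets so SSD and HDD pools never land on each other's devices.
root hdd {
        id -10
        alg straw
        hash 0
        item node1-hdd weight 16.000   # e.g. four 4TB spinners
}
root ssd {
        id -20
        alg straw
        hash 0
        item node1-ssd weight 0.200    # e.g. one 200GB DC SSD
}
rule hdd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take hdd
        step chooseleaf firstn 0 type host
        step emit
}
```

A pool is then pointed at the rule with `ceph osd pool set <pool> crush_ruleset 1`.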


Mark

On 06/12/2015 10:06 AM, Quentin Hartman wrote:

I don't know the official reason, but I would imagine the disparity in
performance would lead to weird behaviors and very spiky overall
performance. I would think that running a mix of SSD and HDD OSDs in the
same pool would be frowned upon, not just the same server.

On Fri, Jun 12, 2015 at 9:00 AM, Dominik Zalewski
dzalew...@optlink.co.uk wrote:

Be warned that running SSD and HD based OSDs in the same server
is not
recommended. If you need the storage capacity, I'd stick to the
journals
on SSDs plan.


Can you please elaborate on why running SSD- and HDD-based OSDs in
the same server is not recommended?

Thanks

Dominik

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






[ceph-users] Best setup for SSD

2015-06-12 Thread Dominik Zalewski

 Be warned that running SSD and HD based OSDs in the same server is not
 recommended. If you need the storage capacity, I'd stick to the journals
 on SSDs plan.


Can you please elaborate on why running SSD- and HDD-based OSDs in the same
server is not recommended?

Thanks

Dominik


Re: [ceph-users] Best setup for SSD

2015-06-12 Thread Quentin Hartman
I don't know the official reason, but I would imagine the disparity in
performance would lead to weird behaviors and very spiky overall
performance. I would think that running a mix of SSD and HDD OSDs in the
same pool would be frowned upon, not just the same server.

On Fri, Jun 12, 2015 at 9:00 AM, Dominik Zalewski dzalew...@optlink.co.uk
wrote:

 Be warned that running SSD and HD based OSDs in the same server is not
 recommended. If you need the storage capacity, I'd stick to the journals
 on SSDs plan.


 Can you please elaborate on why running SSD- and HDD-based OSDs in the
 same server is not recommended?

 Thanks

 Dominik



Re: [ceph-users] Best setup for SSD

2015-06-12 Thread Christian Balzer
On Fri, 12 Jun 2015 10:18:18 -0500 Mark Nelson wrote:

 If you are careful about how you balance things, there's probably no 
 reason why SSDs and Spinners in the same server wouldn't work so long as 
 they are not in the same pool.  I imagine that recommendation is 
 probably to keep things simple and have folks avoid designing unbalanced 
 systems.
 
Precisely.

A mixed system needs VERY intimate knowledge of Ceph and of your workload
and use case.
SSD based OSDs will use a lot more CPU and saturate a lot more network
bandwidth than HDD based ones.
And putting 2.5" SSDs into 3.5" bays is a waste of (rack) space.

As an example, a 2U server with 12 3.5" bays (OSDs) and 2 2.5" bays (OS
and journals) in the back (hello Supermicro) makes a good, dense spinning
rust based storage node. This will saturate about 10Gb/s.
In the same 2U you can have a twin node (2x 12 2.5" bays), with 8-10 SSDs
per node and the fastest CPUs you can afford, as well as the fastest
network (dual 10Gb/s or faster) that your budget allows.
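
A quick back-of-envelope check of those numbers; the per-device throughput figures here are assumptions for illustration, not measurements:

```python
# Assumed sustained sequential throughput per device, in MB/s.
hdd_mb_s = 100                  # one 3.5" 7200rpm spinner (assumption)
ssd_mb_s = 450                  # one SATA datacenter SSD (assumption)
link_mb_s = 10 * 1000 / 8.0     # one 10Gb/s link, roughly 1250 MB/s

hdd_node_mb_s = 12 * hdd_mb_s   # 12-bay spinner node
ssd_node_mb_s = 10 * ssd_mb_s   # 10-SSD twin node, per node

# Links needed to keep each node busy.
print(hdd_node_mb_s / link_mb_s, ssd_node_mb_s / link_mb_s)
```

Roughly one saturated 10GbE link for the spinner node versus three or four for the SSD node, which is the point about buying the fastest network the budget allows.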

Christian

 Mark
 
 On 06/12/2015 10:06 AM, Quentin Hartman wrote:
  I don't know the official reason, but I would imagine the disparity in
  performance would lead to weird behaviors and very spiky overall
  performance. I would think that running a mix of SSD and HDD OSDs in
  the same pool would be frowned upon, not just the same server.
 
  On Fri, Jun 12, 2015 at 9:00 AM, Dominik Zalewski
  dzalew...@optlink.co.uk wrote:
 
  Be warned that running SSD and HD based OSDs in the same server
  is not
  recommended. If you need the storage capacity, I'd stick to the
  journals
  on SSDs plan.
 
 
  Can you please elaborate on why running SSD- and HDD-based OSDs in
  the same server is not recommended?
 
  Thanks
 
  Dominik
 
 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/


Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza

Hi,

On 02/06/15 16:18, Mark Nelson wrote:

On 06/02/2015 09:02 AM, Phil Schwarz wrote:

On 02/06/2015 15:33, Eneko Lacunza wrote:

Hi,

On 02/06/15 15:26, Phil Schwarz wrote:

On 02/06/15 14:51, Phil Schwarz wrote:
I'm gonna have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster.


-1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 
3X 4TB

SATA
It'll be used as OSD+Mon server only.

Are these SSDs Intel S3700 too? What amount of RAM?

Yes, All DCS3700, for the four nodes.
16GB of RAM on this node.

This should be enough for 3 OSDs I think, I used to have a Dell
T20/Intel G3230 with 2x1TB OSDs with only 4 GB running OK.

Cheers
Eneko


Yes, indeed.
My main problem is doing something not advised...
Running VMs on Ceph nodes...
No choice, but it seems that I'll have to do that.
Hope I won't peg the CPU too quickly...


I'm doing it in 3 different Proxmox clusters. They're not very busy
clusters, but they work very well.
You might want to consider using cgroups or some other mechanism to 
segment what runs on what cores.  While not ideal, dedicating 2-3 of 
the cores to ceph and leaving the other(s) for VMs might be a 
reasonable way to go.



I think this may be a must if you set up a dedicated SSD pool.
A single DC S3700 should suffice for journals for 4 OSDs.  I wouldn't 
recommend using the other one for a cache tier unless you have a very 
highly skewed hot/cold workload.  Perhaps instead make a dedicated SSD 
pool that could be used for high IOPS workloads. In fact you might 
consider skipping SSD journals and just making a dedicated SSD pool 
with all of the SSDs depending on how much write workload your main 
pool sees and if you could make good use of a dedicated SSD pool.
Be warned that running SSD and HD based OSDs in the same server is not 
recommended. If you need the storage capacity, I'd stick to the journals 
on SSDs plan.


Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Phil Schwarz
On 02/06/2015 15:33, Eneko Lacunza wrote:
 Hi,
 
 On 02/06/15 15:26, Phil Schwarz wrote:
 On 02/06/15 14:51, Phil Schwarz wrote:
 i'm gonna have to setup a 4-nodes Ceph(Proxmox+Ceph in fact) cluster.

 -1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 3X 4TB
 SATA
 It'll be used as OSD+Mon server only.
 Are these SSDs Intel S3700 too? What amount of RAM?
 Yes, All DCS3700, for the four nodes.
 16GB of RAM on this node.
 This should be enough for 3 OSDs I think, I used to have a Dell
 T20/Intel G3230 with 2x1TB OSDs with only 4 GB running OK.
 
 Cheers
 Eneko
 
Yes, indeed.
My main problem is doing something not advised...
Running VMs on Ceph nodes...
No choice, but it seems that I'll have to do that.
Hope I won't peg the CPU too quickly...
Best regards



Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Mark Nelson

On 06/02/2015 09:02 AM, Phil Schwarz wrote:

On 02/06/2015 15:33, Eneko Lacunza wrote:

Hi,

On 02/06/15 15:26, Phil Schwarz wrote:

On 02/06/15 14:51, Phil Schwarz wrote:

i'm gonna have to setup a 4-nodes Ceph(Proxmox+Ceph in fact) cluster.

-1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 3X 4TB
SATA
It'll be used as OSD+Mon server only.

Are these SSDs Intel S3700 too? What amount of RAM?

Yes, All DCS3700, for the four nodes.
16GB of RAM on this node.

This should be enough for 3 OSDs I think, I used to have a Dell
T20/Intel G3230 with 2x1TB OSDs with only 4 GB running OK.

Cheers
Eneko


Yes, indeed.
My main problem is doing something not advised...
Running VMs on Ceph nodes...
No choice, but it seems that I'll have to do that.
Hope I won't peg the CPU too quickly...


You might want to consider using cgroups or some other mechanism to 
segment what runs on what cores.  While not ideal, dedicating 2-3 of the 
cores to ceph and leaving the other(s) for VMs might be a reasonable way 
to go.
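
One way to sketch that, assuming a systemd-managed host where the OSDs run as ceph-osd@.service instances (a sysvinit setup would need cgroup cpusets or taskset instead); the core numbers are purely illustrative:

```
# Hypothetical drop-in, e.g. /etc/systemd/system/ceph-osd@.service.d/affinity.conf
[Service]
# Keep the OSD daemons on cores 0-2; the remaining cores stay free for VMs.
CPUAffinity=0 1 2
```

Followed by `systemctl daemon-reload` and a restart of the OSD units.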


A single DC S3700 should suffice for journals for 4 OSDs.  I wouldn't 
recommend using the other one for a cache tier unless you have a very 
highly skewed hot/cold workload.  Perhaps instead make a dedicated SSD 
pool that could be used for high IOPS workloads.  In fact you might 
consider skipping SSD journals and just making a dedicated SSD pool with 
all of the SSDs depending on how much write workload your main pool sees 
and if you could make good use of a dedicated SSD pool.


Things to think about!


Best regards



[ceph-users] Best setup for SSD

2015-06-02 Thread Phil Schwarz
Hi,
I'm gonna have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster.

- 1 node is a little HP Microserver N54L with 1x Opteron + 2 SSDs + 3x 4TB SATA.
It'll be used as an OSD+Mon server only.

- 3 nodes are set up on Dell R730s: 1x Xeon 2603, 48 GB RAM, 1x 1TB SAS
for the OS, 4x 4TB SATA for OSDs and 2x DC S3700 200GB Intel SSDs.

I can't change the hardware, especially the poor CPU...

Everything will be connected through Intel X520 + Netgear XS708E as a 10GbE
storage network.

This cluster will support VMs (mostly KVM) on the 3 R730 nodes.
I'm already aware the CPU will be pegged all the time... but can't change it
for the moment.
The VMs will be file-sharing servers and low-usage services (DNS, DHCP, AD or
OpenLDAP).
One proxy cache (Squid) will serve 500+ clients over a 100Mb optical fiber
link.


My question is:
Is it recommended to set up the 2 SSDs as:
One SSD as journal for 2 (up to 3 in the future) OSDs,
or
One SSD as journal for the 4 (up to 6 in the future) OSDs and the
remaining SSD as a cache tier for the previous SSD+4-OSD pool?

The SSDs should be rock solid enough to handle both the bandwidth and the
write endurance, given the low amount of data that will be written to them
(a few hundred GB per day as a rule of thumb...)

Thanks
Best regards.


Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza

Hi,

On 02/06/15 14:51, Phil Schwarz wrote:

i'm gonna have to setup a 4-nodes Ceph(Proxmox+Ceph in fact) cluster.

-1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 3X 4TB SATA
It'll be used as OSD+Mon server only.

Are these SSDs Intel S3700 too? What amount of RAM?

- 3 nodes are setup upon Dell 730+ 1xXeon 2603, 48 GB RAM, 1x 1TB SAS
for OS , 4x 4TB SATA for OSD and 2x DCS3700 200GB intel SSD

I can't change the hardware, especially the poor cpu...

Everything will be connected through Intel X520+Netgear XS708E, as 10GBE
storage network.

This cluster will support VM (mostly KVM) upon the 3 R730 nodes.
I'm already aware of the CPU pegging all the time...But can't change it
for the moment.
The VM will be Filesharing servers, poor usage services (DNS,DHCP,AD or
OpenLDAP).
One Proxy cache (Squid) will be used upon a 100Mb Optical fiber with
500+ clients.


My question is :
Is it recommended to set up the 2 SSDs as:
One SSD as journal for 2 (up to 3 in the future) OSDs,
or
One SSD as journal for the 4 (up to 6 in the future) OSDs and the
remaining SSD as a cache tier for the previous SSD+4-OSD pool?
I haven't used cache tiering myself, but others have not reported much 
benefit from it (if any) at all, at least this is my understanding.


So I think it would be better to use both SSDs for journals. It probably
won't help performance to use 2 instead of only 1, but it will lessen the
impact of an SSD failure. Also, the consensus seems to be 3-4 OSDs per
journal SSD, so it will help when you expand to 6 OSDs.
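
The 3-4 OSDs per journal SSD figure can be sanity-checked against the journal sizing formula from the Ceph docs; the per-disk throughput used here is an assumed value:

```python
# Ceph docs: osd journal size >= 2 * expected throughput * filestore max sync interval
osd_throughput_mb_s = 100   # assumed sustained write rate of one 4TB spinner
sync_interval_s = 5         # filestore max sync interval default
journal_mb_per_osd = 2 * osd_throughput_mb_s * sync_interval_s

osds_per_ssd = 4
total_journal_gb = journal_mb_per_osd * osds_per_ssd / 1000.0
print(journal_mb_per_osd, total_journal_gb)
```

So about 1 GB of journal per OSD and roughly 4 GB total on a 200GB S3700: capacity is not the constraint, the SSD's sequential write bandwidth is.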

SSD should be rock solid enough to support both bandwidth and living
time before being destroyed by the low amount of data that will be
written on it (Few hundreds of GB per day as rule of thumb..)
If all are Intel S3700 you're on the safe side unless you have lots of
writes. Anyway, I suggest you monitor the SMART values.
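
To put numbers on "safe side": the 200GB DC S3700 is rated for 10 full drive writes per day over 5 years. Taking the quoted "few hundred GB per day" as, say, 300 GB and doubling it for the journal's write-twice behaviour (both figures are assumptions):

```python
drive_gb = 200
dwpd = 10                 # DC S3700 rated endurance: 10 drive writes/day
rated_years = 5
rated_writes_gb = drive_gb * dwpd * 365 * rated_years   # total rated writes

daily_gb = 300 * 2        # assumed workload, doubled for journal writes
lifetime_years = rated_writes_gb / daily_gb / 365.0
print(rated_writes_gb, round(lifetime_years, 1))
```

That works out to well over a decade of rated endurance at this write rate, so monitoring SMART wear indicators should be plenty of warning.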


Cheers
Eneko





Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Phil Schwarz
Thanks for your answers; mine are inline, too.

On 02/06/2015 15:17, Eneko Lacunza wrote:
 Hi,
 
 On 02/06/15 14:51, Phil Schwarz wrote:
 i'm gonna have to setup a 4-nodes Ceph(Proxmox+Ceph in fact) cluster.

 -1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 3X 4TB
 SATA
 It'll be used as OSD+Mon server only.
 Are these SSDs Intel S3700 too? What amount of RAM?
Yes, All DCS3700, for the four nodes.
16GB of RAM on this node.
 - 3 nodes are setup upon Dell 730+ 1xXeon 2603, 48 GB RAM, 1x 1TB SAS
 for OS , 4x 4TB SATA for OSD and 2x DCS3700 200GB intel SSD

 I can't change the hardware, especially the poor cpu...

 Everything will be connected through Intel X520+Netgear XS708E, as 10GBE
 storage network.

 This cluster will support VM (mostly KVM) upon the 3 R730 nodes.
 I'm already aware of the CPU pegging all the time...But can't change it
 for the moment.
 The VM will be Filesharing servers, poor usage services (DNS,DHCP,AD or
 OpenLDAP).
 One Proxy cache (Squid) will be used upon a 100Mb Optical fiber with
 500+ clients.


 My question is :
 Is it recommended to setup  the 2 SSDS as :
 One SSD as journal for 2 (up to 3in the future) OSDs
 Or
 One SSD as journal for the 4 (up to 6 in the future) OSDs and the
 remaining SSD as cache tiering for the previous SSD+4 OSDs pool ?
 I haven't used cache tiering myself, but others have not reported much
 benefit from it (if any) at all, at least this is my understanding.
 
Yes, confirmed by the "SSD Disk Distribution" thread.
 So I think it would be better to use both SSDs for journals. It probably
 won't help performance using 2 instead of only 1, but it will lessen the
 impact from a SSD failure. Also it seems that the consensus is 3-4 OSD
 for each SSD, so it will help when you expand to 6 OSD.
Agreed; let's set tiering aside and use journals only.

 SSD should be rock solid enough to support both bandwidth and living
 time before being destroyed by the low amount of data that will be
 written on it (Few hundreds of GB per day as rule of thumb..)
 If all are Intel S3700 you're on the safe side unless you have lots on
 writes. Anyway I suggest you monitor the SMART values.
OK, I'll keep that in mind too.

Thanks
 
 Cheers
 Eneko
 
 



Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza

Hi,

On 02/06/15 15:26, Phil Schwarz wrote:

On 02/06/15 14:51, Phil Schwarz wrote:

i'm gonna have to setup a 4-nodes Ceph(Proxmox+Ceph in fact) cluster.

-1 node is a little HP Microserver N54L with 1X opteron + 2SSD+ 3X 4TB
SATA
It'll be used as OSD+Mon server only.

Are these SSDs Intel S3700 too? What amount of RAM?

Yes, All DCS3700, for the four nodes.
16GB of RAM on this node.
This should be enough for 3 OSDs, I think; I used to have a Dell
T20/Intel G3230 with 2x 1TB OSDs and only 4 GB running OK.


Cheers
Eneko

