Re: Ceph Production Environment Setup?

2013-01-29 Thread Joao Eduardo Luis

On 01/29/2013 02:56 AM, femi anjorin wrote:

Please can anyone advise on how exactly a Ceph production
environment should look, and what the configuration files should
be? My hardware includes the following:

Server A, B, C configuration
CPU - Intel(R) Core(TM)2 Quad  CPU   Q9550  @ 2.83GHz
RAM - 16GB
Hard drive -  500GB
SSD - 120GB

Server D,E,F,G,H,J configuration
CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
RAM - 4 GB
Boot drive -  320 GB
SSD - 120 GB
Storage drives - 16 X 2 TB

I am thinking of these configurations but I am not sure.
Server A - MDS and MON
Server B - MON
Server C - MON
Server D, E,F,G,H,J - OSD



The 16 GB of RAM on the monitor nodes versus the 4 GB on the OSD nodes
looks backwards to me.  The OSDs tend to require much more RAM, for
instance during recovery, while the monitor is not as heavy on memory.
If a cluster grows significantly large, the monitor's in-memory maps
may grow a lot too, but that alone is not a reason to give a monitor
16 GB of RAM and an OSD only 4 GB.


Furthermore, I see you have 16 x 2 TB storage drives.  Is that per OSD
node?  I'm assuming that's what you're aiming for, so how many OSDs
were you thinking of running on the same host?  Usually we go for one
OSD per drive, but you might have something else in mind.  I am not an
expert on server configuration, but my point is that if you are going
to have more than one OSD on the same host, 4 GB of RAM looks much
smaller than what I would envision.


BTW, I'm not sure whether you're placing SSDs on the monitor/MDS nodes
with the same intent as on the OSD nodes (keeping the OSD journal,
maybe?), but if you do intend to keep the daemons' journals on them,
you should know that neither the monitor nor the MDS keeps a journal on
local disk.  The monitors do keep a store on disk, but the MDS doesn't
even do that; it keeps its data directly on the OSDs and whatever else
it needs in memory.


  -Joao







Re: Ceph Production Environment Setup?

2013-01-29 Thread Martin B Nielsen
There is also the hardware recommendation page in the ceph docs (
http://ceph.com/docs/master/install/hardware-recommendations/ )

Basically they recommend something like ~1 GHz of CPU (or one core) and
500 MB-1 GB of RAM per OSD daemon.  Also, most people run with one OSD
daemon per disk (so if you put 16 disks per node you'll vastly
overwhelm your Atom CPU).

Overall, while the cluster chugs along happily the hardware
requirements are relatively modest; as soon as it starts to recover
you'll see high CPU/memory usage.
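
As a rough back-of-the-envelope example (my own numbers, just applying
those guidelines to the hardware above), a node with 16 data disks and
one OSD per disk would want roughly:

  16 OSDs x ~1 GHz/core    -> ~16 cores (or ~16 GHz aggregate)
  16 OSDs x 0.5-1 GB RAM   -> ~8-16 GB RAM, plus headroom for recovery

compared to the dual-core 1.8 GHz Atom D525 with 4 GB RAM in servers
D-J.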

Cheers,
Martin


On Tue, Jan 29, 2013 at 3:56 AM, femi anjorin  wrote:
>
> Please can anyone advise on how exactly a Ceph production
> environment should look, and what the configuration files should
> be? My hardware includes the following:
>
> Server A, B, C configuration
> CPU - Intel(R) Core(TM)2 Quad  CPU   Q9550  @ 2.83GHz
> RAM - 16GB
> Hard drive -  500GB
> SSD - 120GB
>
> Server D,E,F,G,H,J configuration
> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
> RAM - 4 GB
> Boot drive -  320 GB
> SSD - 120 GB
> Storage drives - 16 X 2 TB
>
> I am thinking of these configurations but I am not sure.
> Server A - MDS and MON
> Server B - MON
> Server C - MON
> Server D, E,F,G,H,J - OSD
>
> Regards.


Re: Ceph Production Environment Setup?

2013-01-29 Thread femi anjorin
Thanks.  I have upgraded all the systems to quad-core machines with
32 GB RAM, although I still have 16 hard drives on each of the storage
nodes.

16 hard drives means I should have 16 OSD daemons, but I don't know
what the OSD configuration should look like in ceph.conf.

I mounted the disks under the OSD data directories according to
http://ceph.com/docs/master/rados/deployment/mkcephfs/
and the mounts look like this:
/dev/sda1 on /var/lib/ceph/osd/ceph-0
/dev/sdb1 on /var/lib/ceph/osd/ceph-1
/dev/sdc1 on /var/lib/ceph/osd/ceph-2
/dev/sdd1 on /var/lib/ceph/osd/ceph-3
/dev/sde1 on /var/lib/ceph/osd/ceph-4
/dev/sdf1 on /var/lib/ceph/osd/ceph-5
/dev/sdg1 on /var/lib/ceph/osd/ceph-6
/dev/sdh1 on /var/lib/ceph/osd/ceph-7
/dev/sdi1 on /var/lib/ceph/osd/ceph-8
/dev/sdj1 on /var/lib/ceph/osd/ceph-9
/dev/sdk1 on /var/lib/ceph/osd/ceph-10
/dev/sdl1 on /var/lib/ceph/osd/ceph-11
/dev/sdm1 on /var/lib/ceph/osd/ceph-12
/dev/sdn1 on /var/lib/ceph/osd/ceph-13
/dev/sdo1 on /var/lib/ceph/osd/ceph-14
/dev/sdp1 on /var/lib/ceph/osd/ceph-15
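
(Each disk was prepared with something along these lines, shown here
with XFS and osd.0 purely as an example:

  mkfs.xfs /dev/sda1
  mkdir -p /var/lib/ceph/osd/ceph-0
  mount /dev/sda1 /var/lib/ceph/osd/ceph-0

and so on for each drive.)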

But I don't know what the OSD configuration should look like.  I see
the following in this Ceph reference:
http://ceph.com/docs/master/rados/deployment/mkcephfs/

"For each [osd.n] section of your configuration file, specify the
storage device. For example:
[osd.1]
devs = /dev/sda
[osd.2]
devs = /dev/sdb "

I guess this is the configuration for a single hard drive.  What
should the OSD config look like with 16 drives in one host?
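
My own guess, extrapolating from that example, would be one [osd.n]
section per drive, something like the following (the host names are
just placeholders, please correct me if this is wrong):

[osd.0]
        host = serverD          ; placeholder hostname for this node
        devs = /dev/sda1
[osd.1]
        host = serverD
        devs = /dev/sdb1
...
[osd.15]
        host = serverD
        devs = /dev/sdp1

with the same pattern repeated for each storage node, continuing the
numbering (e.g. osd.16 through osd.31 on the next host).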



Regards,
Femi.


On Tue, Jan 29, 2013 at 1:39 PM, Martin B Nielsen  wrote:
> There is also the hardware recommendation page in the ceph docs (
> http://ceph.com/docs/master/install/hardware-recommendations/ )
>
> Basically they recommend something like ~1 GHz of CPU (or one core) and
> 500 MB-1 GB of RAM per OSD daemon.  Also, most people run with one OSD
> daemon per disk (so if you put 16 disks per node you'll vastly
> overwhelm your Atom CPU).
>
> Overall, while the cluster chugs along happily the hardware
> requirements are relatively modest; as soon as it starts to recover
> you'll see high CPU/memory usage.
>
> Cheers,
> Martin
>
>
> On Tue, Jan 29, 2013 at 3:56 AM, femi anjorin  wrote:
>>
>> Please can anyone advise on how exactly a Ceph production
>> environment should look, and what the configuration files should
>> be? My hardware includes the following:
>>
>> Server A, B, C configuration
>> CPU - Intel(R) Core(TM)2 Quad  CPU   Q9550  @ 2.83GHz
>> RAM - 16GB
>> Hard drive -  500GB
>> SSD - 120GB
>>
>> Server D,E,F,G,H,J configuration
>> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
>> RAM - 4 GB
>> Boot drive -  320 GB
>> SSD - 120 GB
>> Storage drives - 16 X 2 TB
>>
>> I am thinking of these configurations but I am not sure.
>> Server A - MDS and MON
>> Server B - MON
>> Server C - MON
>> Server D, E,F,G,H,J - OSD
>>
>> Regards.


Fwd: Ceph Production Environment Setup and Configurations?

2013-01-28 Thread femi anjorin
Hi,

With regard to my questions on the Ceph production environment, I
would like to give you these details.

I would like to test write, read, and delete operations on a Ceph
storage cluster in a production environment.

I would also like to check the self-healing and self-managing
functionality.

I would like to know whether, in a production setup, gateways are
required for any of the three methods of accessing the Ceph cluster, or
whether the setup should simply be that all the servers are storage
nodes with mon, mds, and osd running on each of them, while I access
them through a single computer one could call a client, just like in
the 5-minute setup you described.


-- Forwarded message --
Date: Tue, Jan 29, 2013 at 2:56 AM
Subject: Ceph Production Environment Setup and Configurations?
To: ceph-devel@vger.kernel.org


Please can anyone advise on how exactly a Ceph production
environment should look, and what the configuration files should
be? My hardware includes the following:



Server A, B, C configuration

CPU - Intel(R) Core(TM)2 Quad  CPU   Q9550  @ 2.83GHz

RAM - 16GB

Hard drive -  500GB

SSD - 120GB



Server D,E,F,G,H,J configuration

CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz

RAM - 4 GB

Boot drive -  320 GB

SSD - 120 GB

Storage drives - 16 X 2 TB



I am thinking of these configurations but I am not sure.

Server A - MDS and MON

Server B - MON

Server C - MON

Server D, E,F,G,H,J - OSD



Regards.


RE: Ceph Production Environment Setup and Configurations?

2013-01-29 Thread Chen, Xiaoxi
[The following views are my own only, not related to Intel.]
Looking forward to the performance data on Atom.
Atom performs badly in Swift, but since Ceph is somewhat more efficient
than Swift, it should do better.
I have some concern about whether the Atom can support such high
throughput: you have 16 disks, so assuming 50 MB/s per disk you would
like to push 50 * 16 * 2 (including the journal write) = 1600 MB/s,
together with the corresponding network throughput, say 10GbE.  That
total I/O throughput seems too high for an Atom.  Baidu (www.baidu.com)
uses ARM-based storage nodes (not running Ceph, but their own DFS): a
node with a quad-core ARM and 4 disks, and a 2U box can hold up to 6
such nodes sharing a 10Gb NIC.  Atom is different from ARM, but you can
take Baidu as a reference.


-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Gandalf Corvotempesta
Sent: January 29, 2013 17:25
To: femi anjorin
Cc: ceph-devel@vger.kernel.org; Ross Turk
Subject: Re: Ceph Production Environment Setup and Configurations?

2013/1/29 femi anjorin :
> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz

Atom? For which kind of role ?


Re: Ceph Production Environment Setup and Configurations?

2013-01-29 Thread Joao Eduardo Luis
On 01/29/2013 11:25 AM, Chen, Xiaoxi wrote:
> [The following views are my own only, not related to Intel.]
> Looking forward to the performance data on Atom.
> Atom performs badly in Swift, but since Ceph is somewhat more efficient
> than Swift, it should do better.
> I have some concern about whether the Atom can support such high
> throughput: you have 16 disks, so assuming 50 MB/s per disk you would
> like to push 50 * 16 * 2 (including the journal write) = 1600 MB/s,
> together with the corresponding network throughput, say 10GbE.  That
> total I/O throughput seems too high for an Atom.  Baidu (www.baidu.com)
> uses ARM-based storage nodes (not running Ceph, but their own DFS): a
> node with a quad-core ARM and 4 disks, and a 2U box can hold up to 6
> such nodes sharing a 10Gb NIC.  Atom is different from ARM, but you can
> take Baidu as a reference.
> 
> 
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org 
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Gandalf Corvotempesta
> Sent: January 29, 2013 17:25
> To: femi anjorin
> Cc: ceph-devel@vger.kernel.org; Ross Turk
> Subject: Re: Ceph Production Environment Setup and Configurations?
> 
> 2013/1/29 femi anjorin :
>> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
> 
> Atom? For which kind of role ?


FWIW, Wido had been testing Ceph on Atoms a while back.  I don't know
what conclusions he reached, or even if he has already reached any
conclusions at all.  I, for one, would be really interested in knowing
how well (or how poorly) Atoms deal with Ceph.

Maybe Wido can share some thoughts on this one? :-)

  -Joao


Re: Ceph Production Environment Setup and Configurations?

2013-01-29 Thread Mark Nelson
On 01/29/2013 05:34 AM, Joao Eduardo Luis wrote:
> On 01/29/2013 11:25 AM, Chen, Xiaoxi wrote:
>> [The following views are my own only, not related to Intel.]
>> Looking forward to the performance data on Atom.
>> Atom performs badly in Swift, but since Ceph is somewhat more efficient
>> than Swift, it should do better.
>> I have some concern about whether the Atom can support such high
>> throughput: you have 16 disks, so assuming 50 MB/s per disk you would
>> like to push 50 * 16 * 2 (including the journal write) = 1600 MB/s,
>> together with the corresponding network throughput, say 10GbE.  That
>> total I/O throughput seems too high for an Atom.  Baidu (www.baidu.com)
>> uses ARM-based storage nodes (not running Ceph, but their own DFS): a
>> node with a quad-core ARM and 4 disks, and a 2U box can hold up to 6
>> such nodes sharing a 10Gb NIC.  Atom is different from ARM, but you can
>> take Baidu as a reference.
>>
>>
>> -Original Message-
>> From: ceph-devel-ow...@vger.kernel.org 
>> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Gandalf Corvotempesta
>> Sent: January 29, 2013 17:25
>> To: femi anjorin
>> Cc: ceph-devel@vger.kernel.org; Ross Turk
>> Subject: Re: Ceph Production Environment Setup and Configurations?
>>
>> 2013/1/29 femi anjorin :
>>> CPU - Intel(R) Atom(TM) CPU D525   @ 1.80GHz
>>
>> Atom? For which kind of role ?
> 
> 
> FWIW, Wido had been testing Ceph on Atoms a while back.  I don't know
> what conclusions he reached, or even if he has already reached any
> conclusions at all.  I, for one, would be really interested in knowing
> how well (or how poorly) Atoms deal with Ceph.
> 
> Maybe Wido can share some thoughts on this one? :-)
> 
>-Joao

Just FYI, I don't expect you'll be able to get anywhere close to 10GbE
performance on the Atom.  You'll almost certainly be CPU bound long
before that.  I actually would be curious how much throughput you could
push to those disks just doing straight fio tests and how much CPU
overhead you'd see.  I imagine you could max out the CPU without even
having Ceph involved.
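
For example, something along these lines (an illustrative invocation
only; adjust the device name, and note that it writes straight to the
raw device and will destroy whatever is on it) would give you a
baseline sequential write number per disk while you watch CPU usage
with top:

  fio --name=seqwrite --filename=/dev/sdb --rw=write --bs=4M \
      --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 \
      --time_based --group_reporting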

I'm guessing you're probably going to be limited to about 0.75-1 OSD
per Atom core, and that's assuming your network and SAS controllers
aren't CPU hogs.

Mark