The "public" network is where all storage accesses from other systems or 
clients will occur. When you map RBD's to other hosts, access object storage 
through the RGW, or CephFS access, you will access the data through the 
"public" network. The "cluster" network is where all internal replication 
between OSD processes will occur. As an example in our set up, we have a 10GbE 
public network for hypervisor nodes to access, along with a 10GbE cluster 
network for back-end replication/communication. Our 1GbE network is used for 
monitoring integration and system administration. 
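
For reference, here is a minimal ceph.conf sketch of the two settings, using 
the subnets from your message below purely as placeholders (they go in the 
[global] section, and the daemons must be restarted to pick up a change): 

    [global] 
    # Clients, MONs, RGW, and CephFS reach the OSDs over this network 
    public network = 10.197.5.0/24 
    # OSD-to-OSD replication, recovery, and heartbeat traffic only 
    cluster network = 10.174.1.0/24 

Note that client reads and writes to the OSDs also travel the public 
network; only inter-OSD traffic moves to the cluster network. You can check 
what a running OSD actually picked up via its admin socket, e.g. (assuming 
osd.0 lives on that node): 

    ceph daemon osd.0 config show | grep network 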

----- Original Message -----

From: "Jon Heese" <jhe...@inetu.net> 
To: ceph-users@lists.ceph.com 
Sent: Friday, October 23, 2015 8:58:28 AM 
Subject: [ceph-users] Proper Ceph network configuration 

Hello, 

We have two separate networks in our Ceph cluster design: 

10.197.5.0/24 - The "front end" network, "skinny pipe", all 1GbE, intended to 
be a management or control-plane network 

10.174.1.0/24 - The "back end" network, "fat pipe", all OSD nodes use 2x bonded 
10GbE, intended to be the data network 

So we want all of the OSD traffic to go over the "back end", and the MON 
traffic to go over the "front end". We thought the following would do that: 

public network = 10.197.5.0/24 # skinny pipe, mgmt & MON traffic 

cluster network = 10.174.1.0/24 # fat pipe, OSD traffic 

But that doesn't seem to be the case -- iftop and netstat show that little to 
no OSD communication is happening over the 10.174.1 network; it's all happening 
over the 10.197.5 network. 

What configuration should we be running to enforce the networks per our design? 
Thanks! 

Jon Heese 
Systems Engineer 
INetU Managed Hosting 
P: 610.266.7441 x 261 
F: 610.266.7434 
www.inetu.net 