> > Ceph.conf cluster network (supernet): 10.0.0.0/8
> >
> > Cluster network #1:  10.1.1.0/24
> > Cluster network #2: 10.1.2.0/24
> >
> > With that configuration OSD address autodetection *should* just work.
> 
> It should work, but thinking about it more, the OSDs will likely all be
> assigned IPs on a single network, whichever one is inspected first and
> matches the supernet range (which could be either subnet). In order to have
> OSDs on two distinct networks you will likely have to use a declarative
> configuration in /etc/ceph/ceph.conf that lists the IP addresses for each
> OSD (making sure to balance them between the links).
> 
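
Just to sketch what the supernet approach above would look like (not what I'm
actually running, and using the ranges from the quoted example), I believe it
would just be a line in the [global] section rather than per-OSD entries:

[global]
# supernet covering both 10.1.1.0/24 and 10.1.2.0/24
cluster_network = 10.0.0.0/8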

Instead, I've been using this explicit per-OSD setup for a while:

[osd.60]
host = xxx6
public_addr = 192.168.200.221:6860
cluster_addr = 192.168.228.101:6860

[osd.61]
host = xxx6
public_addr = 192.168.200.221:6861
cluster_addr = 192.168.229.101:6861

[osd.70]
host = xxx7
public_addr = 192.168.200.190:6870
cluster_addr = 192.168.228.107:6870

[osd.71]
host = xxx7
public_addr = 192.168.200.190:6871
cluster_addr = 192.168.229.107:6871

Looking at tcpdump, all the traffic is going exactly where it is supposed to
go; in particular, an OSD on the 192.168.228.x network appears to talk to an
OSD on the 192.168.229.x network without anything strange happening. I was
just wondering if there is anything about Ceph that could make this
non-optimal, assuming traffic is reasonably balanced between all the OSDs
(e.g. all the same weights). I think the only time it would suffer is if
writes to other OSDs result in a replica write to a single OSD, and even then
a single OSD is still limited to 7200 RPM disk speed anyway, so the loss isn't
going to be that great.
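
(Back-of-envelope, assuming GbE links: 1 Gbit/s is roughly 125 MB/s raw, while
a 7200 RPM SATA drive tops out somewhere around 100-150 MB/s sequential and
far less for random IO, so even in the worst case one drive can't soak up much
more than a single link's worth of traffic anyway.)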

I think I'll be moving over to a bonded setup anyway, although I'm not sure
whether balance-rr or LACP is best... rr will give the best potential
throughput for a single stream, but LACP should give similar aggregate
throughput when there are plenty of connections going on, and less CPU load
since the receiver doesn't have to reorder packets arriving out of order
across the links.
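
For what it's worth, the sort of bond I have in mind would be something like
this (just a sketch, Debian-style /etc/network/interfaces with ifenslave;
interface names and addresses are placeholders, not my real config):

auto bond0
iface bond0 inet static
    address 192.168.228.101
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    # 802.3ad for LACP; balance-rr would round-robin packets across links instead
    bond-mode 802.3ad
    bond-miimon 100
    # hash on IP+port so the many OSD connections get spread across the links
    bond-xmit-hash-policy layer3+4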

James
