I found the network to be the most limiting factor in Ceph. If you have any
chance to move to 10G+, it would be beneficial. I did have success with
bonding; even a simple round-robin (balance-rr) setup increased the
throughput.
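
A round-robin bond like the one mentioned above can be sketched with
iproute2. The interface names (eth0/eth1 into bond0) and the address are
assumptions; adjust for your hosts:

```shell
# Create a balance-rr (round-robin) bond and enslave two NICs.
ip link add bond0 type bond mode balance-rr
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
# Hypothetical cluster-network address for this host.
ip addr add 10.1.1.10/24 dev bond0
```

This requires root and the kernel bonding module; in practice you would
persist it in your distribution's own network configuration.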


On Mon, Dec 2, 2013 at 10:17 PM, Kyle Bader <kyle.ba...@gmail.com> wrote:

> > Is having two cluster networks like this a supported configuration?
> Every osd and mon can reach every other so I think it should be.
>
> Maybe, if your back-end network is a supernet and each cluster network is
> a subnet of that supernet. For example:
>
> ceph.conf cluster network (supernet): 10.0.0.0/8
>
> Cluster network #1: 10.1.1.0/24
> Cluster network #2: 10.1.2.0/24
>
> With that configuration, OSD address autodetection *should* just work.
>
> > 1. move osd traffic to eth1. This obviously limits maximum throughput to
> ~100Mbytes/second, but I'm getting nowhere near that right now anyway.
>
> Given three links, I would probably do this if your replication factor is
> >= 3. Keep in mind that ~100 MB/s (gigabit) links could very well end up
> being a limiting factor.
>
> What are you backing each OSD with, storage-wise, and how many OSDs do you
> expect to participate in this cluster?
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
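
The supernet/subnet containment Kyle describes can be sanity-checked with
Python's standard ipaddress module (the addresses are the example values
from his reply):

```python
import ipaddress

# The /8 supernet configured as "cluster network" in ceph.conf, and two
# /24 cluster networks carved out of it.
supernet = ipaddress.ip_network("10.0.0.0/8")
racks = [
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
]

# Each cluster network must be a subnet of the configured supernet for
# OSD address autodetection to bind the right interface.
for net in racks:
    print(net, net.subnet_of(supernet))
```

Both checks print True for the example values; a network outside 10.0.0.0/8
would print False, flagging a misconfiguration.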


-- 
Follow Me: @Scottix <http://www.twitter.com/scottix>
http://about.me/scottix
scot...@gmail.com