On 02/15/2011 04:07 AM, Mike Lovell wrote:
On 02/14/2011 12:59 PM, Dennis Jacobfeuerborn wrote:
I'm trying to wrap my head around storage setups that might work for
virtualization and I wonder if people here have experience with creating
a drbd setup for this purpose.

What I am currently planning to implement is this:
2 storage nodes (8-bay) with 8GB RAM and a dual-core Xeon processor each.
Each system gets equipped with 8 1TB SATA drives in a RAID-5
configuration.
Networking will be either two dual-port cards or two quad-port cards
which I plan to set up as bonded interfaces (balance-xor).

I'm particularly worried about the networking side being a bottleneck for
the setup. I was looking into 10GbE and InfiniBand equipment, but that
drives the cost up quite a bit and I'm not sure it's necessary if I can
bond several 1Gbit interfaces.
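
As a very rough back-of-envelope check (all numbers below are assumptions about drive and link throughput, not measurements), comparing the aggregate bandwidth of a bond against what the array itself might stream:

```python
# Back-of-envelope comparison of bonded GigE bandwidth vs. array throughput.
# All figures are assumptions for illustration: real numbers depend on the
# controller, drives, RAID-5 write penalty, and protocol overhead.

GBIT_TO_MBYTE = 1000 / 8           # 1 Gbit/s ~= 125 MB/s before overhead

links = 4                          # e.g. two dual-port NICs bonded
link_payload_efficiency = 0.9      # TCP/IP + DRBD overhead (assumed)
bond_mb_s = links * GBIT_TO_MBYTE * link_payload_efficiency

drives = 8
per_drive_mb_s = 100               # assumed sequential throughput per SATA drive
raid5_stream_mb_s = (drives - 1) * per_drive_mb_s  # best-case sequential, parity excluded

print(f"bond:  ~{bond_mb_s:.0f} MB/s")          # ~450 MB/s
print(f"array: ~{raid5_stream_mb_s:.0f} MB/s")  # ~700 MB/s sequential best case
```

Under those (assumed) numbers even a 4-link bond falls short of the array's sequential rate, which is presumably why people reach for 10GbE or InfiniBand; random-I/O workloads would usually be limited well below either figure.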

Any thoughts?

performance over bonded gig-e links has been talked about a few times in
the past. seems like the discussion gets brought up regularly. here are
links to the beginnings of 2 threads.

http://lists.linbit.com/pipermail/drbd-user/2010-May/014113.html
http://lists.linbit.com/pipermail/drbd-user/2010-September/014848.html

there are several more to read through. the basics i got from glancing at
threads like these in the past is that bonding does okay for 2 interfaces
but there aren't huge gains when going to 4 interfaces. also, considering
that a 4 port ethernet card is gonna cost about $200 on the cheap end and
can go much higher, using an older infiniband card or a 10gig-e card can
make sense. there has been talk on the list of infiniband cards that can do
10gbps for under $200 but i haven't actually done it myself. maybe someone
else can chime in on that. i've also seen 10gig-e gear for under $500.

hope that provides some help.

As far as I understand the issue, the performance problems come from the fact that with round-robin balancing the packets tend to arrive in the wrong order at the other end, and that interferes with TCP's congestion control, slowing the connection down. That should only apply to the case where you simply bundle 4 interfaces into one virtual link, though.
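
A toy sketch of that effect (the latencies are made-up numbers, just to show where the reordering comes from): if two bonded links have even slightly different latencies, packets striped round-robin across them arrive out of sequence, and the receiver's TCP keeps seeing gaps.

```python
# Toy illustration (not a benchmark): round-robin striping over two links
# with slightly different latencies delivers packets out of order.
# Link latencies below are invented for demonstration only.

def round_robin_delivery(num_packets, link_latencies_ms, send_interval_ms=0.1):
    """Assign packets to links round-robin and return the arrival order."""
    arrivals = []
    for seq in range(num_packets):
        link = seq % len(link_latencies_ms)
        send_time = seq * send_interval_ms
        arrivals.append((send_time + link_latencies_ms[link], seq))
    arrivals.sort()  # order in which the receiver actually sees the packets
    return [seq for _, seq in arrivals]

received = round_robin_delivery(10, link_latencies_ms=[0.20, 0.45])
print("arrival order:", received)
# -> [0, 2, 1, 4, 3, 6, 5, 8, 7, 9]: every other packet overtakes its
# predecessor, so the receiver repeatedly sees sequence-number gaps.
```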

If I understand the "balance-xor" algorithm correctly, it basically creates a hash of the source and destination MACs and uses that to assign the connection to *one* interface, sending all packets *only* over that interface. The disadvantage is that this limits the speed of an individual TCP connection to the speed of one interface (e.g. 1Gbit/s), but on the positive side you shouldn't see the out-of-order-packets performance issue mentioned above. Simply put, "balance-xor" doesn't balance individual packets but whole connections over the interfaces, with each connection fixed to one interface... or so I understand it. At least in theory that sounds like something useful for this use case, where you have lots of clients accessing the central storage.
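
For illustration, here's a rough sketch of that layer2 hashing idea (not the exact kernel formula, and the MAC addresses are made up): each source/destination MAC pair always maps to the same slave, so one connection stays on one link while different peers can land on different links.

```python
# Rough sketch of how a layer2-style xmit hash pins a MAC pair to one slave.
# This mirrors the idea described above, not the kernel's exact implementation.

def slave_for(src_mac: str, dst_mac: str, num_slaves: int) -> int:
    """XOR the last octet of source and destination MAC, modulo slave count."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % num_slaves

# Two hypothetical clients talking to the storage node over a 4-slave bond:
print(slave_for("00:11:22:33:44:01", "00:11:22:33:44:10", 4))  # this pair always uses slave 1
print(slave_for("00:11:22:33:44:02", "00:11:22:33:44:10", 4))  # a different client may land on slave 2
```

So a single DRBD or client connection tops out at roughly one link's worth of bandwidth, while traffic from many distinct peers can spread across the slaves.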

Regards,
  Dennis