On 02/15/2011 07:15 AM, Mike Lovell wrote:
On 02/14/2011 08:29 PM, yvette hirth wrote:
Mike Lovell wrote:
On 02/14/2011 12:59 PM, Dennis Jacobfeuerborn wrote:
I'm particularly worried about the networking side being a bottleneck
for the setup. I was looking into 10gbit and infiniband equipment but
they drive the cost up quite a bit and I'm not sure if they are
necessary if I can bond several 1gbit interfaces.
there are several more to read through. the basics i got from glancing at
threads like these in the past is that bonding does okay for 2
interfaces but there aren't huge gains when going to 4 interfaces. also,
considering that a 4-port ethernet card is gonna cost about $200 on the
cheap end and can go much higher, using an older infiniband card or a
10gig-e card can make sense. there has been talk of infiniband cards
that can do 10gbps for under $200 on the list but i haven't actually
done it myself. maybe someone else can chime in on that. i've also seen
10gig-e gear for under $500.
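for reference, the classic way to bond a couple of gbit ports for a
dedicated drbd link looks roughly like the sketch below. the interface
names, mode, and addresses are just placeholders for illustration, and
balance-rr only really pays off on a direct back-to-back link:

  # load the bonding driver in round-robin mode with link monitoring
  modprobe bonding mode=balance-rr miimon=100
  # give the bond an address on a private replication subnet
  ifconfig bond0 192.168.100.1 netmask 255.255.255.0 up
  # enslave the two gigabit ports to the bond
  ifenslave bond0 eth2 eth3

the other node gets the same thing with .2, and drbd just points at
those addresses.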
the big cost is the switches. 10g switches, even small ones, start at
around $3k and go up from there.
for a while mellanox had a ddr ib "kit", with four or so cards, sfps,
cables, and the switch, for around $6-7k. while that's still a big bite
out of the budget, $7k to network 4 boxes at 20gbps is a fab deal.
for just a 2-server setup, switches aren't needed, at least not for
10gig-e. the 10gig-e nics i tried could just be used with a cable between
them and they auto-detected the crossover. i would guess infiniband is
similar. setups larger than 2 servers will require switching and get more
expensive. i was assuming that the context of the question was just about
the link for drbd between the servers and not the connectivity to the rest
of the network.
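as a rough sketch, with a direct cable you just give each end a private
address and point the drbd resource at it. the hostnames, devices, and
addresses below are all made up for illustration:

  # on node-a; node-b mirrors this with .2 (eth4 being whatever the 10gig-e port shows up as)
  ifconfig eth4 192.168.200.1 netmask 255.255.255.252 up

  # /etc/drbd.d/r0.res -- hypothetical resource pinned to the dedicated link
  resource r0 {
    protocol C;
    on node-a {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.200.1:7788;
      meta-disk internal;
    }
    on node-b {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.200.2:7788;
      meta-disk internal;
    }
  }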
I'm interested in both the inter-node connection for replication and the
connection to the clients, though I think the replication link is easier to
set up. The connection to the clients worries me more because I have no
good feel for what the I/O load from lots of VMs will be like. That's why
we are first testing this with 8-bay systems and regular gbit cards to see
what the actual real-world performance is, and then deciding whether we can
go for 16-bay systems servicing more VMs or additional 8-bay twin setups,
or whether we need to look more closely at 10gbit/infiniband.
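For what it's worth, my rough plan for getting a feel for that load is just
to watch the existing hosts with the usual sysstat tools for a while; the
intervals and options here are picked arbitrarily:

  # per-device throughput and IOPS, extended stats, every 5 seconds
  iostat -dxk 5
  # traffic on the gbit interfaces over the same window
  sar -n DEV 5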
I guess the fact that I don't have any experience with shared storage for
virtualization, and that I don't have a decent testing setup right now,
makes me a bit paranoid. Once I get my hands on the 8-bay twin setup,
though, I'll do some thorough testing with various disk and networking
configurations to see what I can get out of it.
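For the testing itself I'll probably start with something simple like the
sketch below before moving on to real VM workloads (the addresses and
devices are placeholders, and the dd runs obviously overwrite whatever
scratch devices they point at):

  # raw throughput over the replication link: server on node-a, client on node-b
  iperf -s                        # node-a
  iperf -c 192.168.200.1 -P 4     # node-b, 4 parallel streams

  # sequential writes straight to a scratch partition vs. through the DRBD device,
  # to see what replication over the link costs
  dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct
  dd if=/dev/zero of=/dev/drbd0 bs=1M count=4096 oflag=direct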
Regards,
Dennis
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user