Thomas Burgess wrote:


On Fri, Jan 29, 2010 at 5:54 AM, Edward Ned Harvey <sola...@nedharvey.com <mailto:sola...@nedharvey.com>> wrote:

    > Thanks for the responses guys.  It looks like I'll probably use RaidZ2
    > with 8 drives.  The write bandwidth isn't that great as it'll be a
    > hundred gigs every couple weeks, but in a bulk-load type of environment.
    > So, not a major issue.  Testing with 8 drives in a raidz2 easily
    > saturated a GigE connection on both the client and the server side.
    > We'll probably link aggregate two GigE ports onto the switch to boost
    > the incoming bandwidth.
    >
    > In response to some of the other questions - drives are 7200 RPM SATA
    > drives, all connected via a SAS expander backplane.  CPU cycles
    > obviously aren't an issue on a Xeon machine with 24 GB of memory.  We
    > considered an SSD ZIL as well, but from my understanding it won't help
    > much on sequential bulk writes, though it really helps on random writes
    > (to better sequence what goes to disk).  I also doubt L2ARC/ARC will
    > help much for sequential workloads.  I could be wrong on both counts
    > here, so please correct me if I'm wrong.

    I believe you're correct on all points.

    The one comment I want to add, as a tangent, is about link aggregation.
    You may already know this, but a lot of people don't, so please forgive
    me if I'm saying something obvious.

    When you aggregate links together, say, 4x 1Gb ports, you are of course
    increasing the speed and reliability of the network interface, but you
    don't get something like a single 4Gb port.  Instead, you get a link
    where any one client connection (TCP or otherwise) will max out at 1Gb.
    The advantage is that while one client is maxing out at 1Gb, another
    client can come along and also max out another 1Gb, and a 3rd client ...
    and a 4th client ...

    Make sense?  Obvious?



Isn't that basically the same thing, though?

If you have 4x 1Gb links as in your example, can't you have 4 clients connected at the same time, all over Gb ethernet, all getting close to 1Gb/s?

Isn't that LIKE having a 4Gb/s connection, considering everything ELSE on your network is essentially limited by its own small 1Gb/s connection? And doesn't it also provide a level of fault tolerance as well as load balancing?

I'm not 100% sure that all traffic between two hosts is still strictly limited to the bandwidth of a single member link. The standard requires all traffic for a single "conversation" to travel over a single link (to avoid ethernet frame reordering), but I /think/ modern implementations no longer treat all traffic between two hosts on an aggregated link as a single "conversation". I'd have to check, but I believe that nowadays any /single/ connection across an aggregated link maxes out at the speed of one component link, while nothing prevents /multiple/ connections between the same two hosts from using different component links. E.g., you could have an HTTP and an FTP connection each use a different link, even though both involve the same two machines.
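To make that concrete, here's a rough sketch of the kind of flow hashing involved. This is purely illustrative - real switches and bonding drivers each have their own hash policies, and the link counts, IPs, and ports below are made up - but it shows why hashing on ports as well as addresses lets two connections between the same pair of hosts land on different member links:

```python
# Illustrative sketch of flow hashing over an aggregated link.
# NOT any particular switch's algorithm; just demonstrates the idea.
import zlib

NUM_LINKS = 4  # e.g. 4x 1Gb aggregated ports

def pick_link_l3(src_ip, dst_ip):
    """Layer-3-only policy: hashes addresses alone, so ALL traffic
    between two hosts is pinned to one member link."""
    key = f"{src_ip}|{dst_ip}".encode()
    return zlib.crc32(key) % NUM_LINKS

def pick_link_l3l4(src_ip, dst_ip, src_port, dst_port):
    """Layer-3+4 policy: hashes ports too, so different connections
    between the same two hosts can use different member links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % NUM_LINKS

# Same two hosts, two services (made-up ephemeral ports):
# under the L3-only policy both flows share one link; under L3+4
# the per-connection hashes can differ, spreading the load.
http_link = pick_link_l3l4("10.0.0.1", "10.0.0.2", 49152, 80)
ftp_link = pick_link_l3l4("10.0.0.1", "10.0.0.2", 49153, 21)
```

Either way, frames within any one connection still stay on one link, which is what preserves in-order delivery.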

But, someone, please correct me on this if I'm wrong.


And, we're getting pretty far off topic here...

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
