On 06/14/2007 03:51 PM, Tom Buskey wrote:
> On 6/14/07, *Flaherty, Patrick* <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
>     I'm not the best with these bit/byte problems so I might be wrong,
>     but.....
>
>     A PCI bus can pass 1056 Mbit/s (32-bit, 33 MHz)
>     TCP/IP overhead is somewhere around 20% (1056 * 0.8 = 844.8 Mbit/s)
>
BTW, that's a standard PCI bus.  Usual performance after taking out PCI
overhead is a bit over 500 Mbit/s (see the 50MB/s rate below).  Standard
PCI is 32-bit/33 MHz; the wider and faster variants multiply that rate,
so 64-bit PCI can handle twice the bandwidth of 32-bit PCI, and
64-bit/66 MHz PCI can handle four times as much.  You're now easily in
the Gbit range, even if you have other devices on the bus.
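
If you want to sanity-check those numbers, the back-of-the-envelope
arithmetic fits in a few lines of shell (the 20% TCP/IP overhead figure
is Patrick's rough estimate from above, not a measured value):

  #!/bin/sh
  # Theoretical PCI bandwidth: bus width (bits) * clock (MHz) = Mbit/s
  for bus in "PCI-32/33:32:33" "PCI-64/33:64:33" "PCI-64/66:64:66"; do
      name=${bus%%:*}; rest=${bus#*:}
      width=${rest%%:*}; mhz=${rest#*:}
      raw=$((width * mhz))
      usable=$((raw * 8 / 10))   # knock off ~20% for TCP/IP overhead
      echo "$name: $raw Mbit/s raw, ~$usable Mbit/s usable"
  done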

I haven't checked to see what PCI-X/PCI-E does, but I've hit pretty high
speeds with it (see below).
>
>
>     What can you reasonably expect a pci gigabit card to give you for
>     through put?
>
>
> The author of O'Reilly's "Unix Backup & Recovery" says you should
> expect a maximum throughput around 50MB/s for backups over gigabit.
500 Mbit/s is the untuned rate I saw under Linux using netpipe.  With a
bit of kernel tuning I could get into the 825-875 Mbit/s range.
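
For the curious, the tuning was mostly TCP buffer sizing.  Roughly
along these lines (the exact values here are illustrative; the right
numbers depend on your kernel and your bandwidth-delay product):

  # Raise the kernel's maximum socket buffer sizes (bytes)
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
  # Let TCP use buffers up to those limits: min, default, max (bytes)
  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"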
>  
>
>     PCI Buses are generally shared (save high end server boards) right?
>
>
> Yep.  Higher end systems will have multiple PCI buses.  The Sun v890
> has 4 separate buses and you can distribute the cards based on
Most server-class systems have multiple buses, or at least a separate
bus for their Ethernet controllers.  Even on a desktop-class system,
there's not a whole lot else going over that PCI bus unless you have a
second Gig Ethernet card or a SCSI card.
>  
>
>     On top of that, if hdparm says timed disk writes are around
>     40MB/s, what could you see for sustained download speeds? Maybe a
>     static cached webpage could saturate a gig connection, but a
>     sustained 5 GB HTTP download couldn't, right?
>
>     Anyone have real world answers for that stuff?
>
>
> What if you're downloading to RAM disk?
> When I've been doing my network measurements I've been going from
> /dev/zero to /dev/null to eliminate the storage speed effects. 
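
Same here.  If you don't have netpipe handy, a netcat pipe gives a
similar storage-free number.  Something like this (treat it as a
sketch: the listen syntax varies between netcat variants, and
"receiver-host" is a placeholder):

  # On the receiving host: listen on a port and discard everything
  nc -l -p 5001 > /dev/null

  # On the sending host: push 1 GB of zeros across the wire;
  # GNU dd prints the transfer rate when it finishes
  dd if=/dev/zero bs=1M count=1024 | nc receiver-host 5001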

We did the timing for a few reasons:

- MPI/PVM traffic is usually CPU-to-CPU (no disk involved), and we
didn't want to spend the additional money on a high-speed interconnect,
so making the Ethernet connection really fast was a zero-cost
improvement.
- The faster you can shuttle data to a device, the less time that
device (and the others contending for bandwidth) spends waiting.

-Mark