It's an Intel chipset; I don't have the model number handy.
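(For what it's worth, `lspci` on the server will show the exact controller. The snippet below just parses a sample line of that output; the 82573E string is only an illustration taken from Steven's question, not a claim about the actual hardware.)

```shell
# Sketch: pull the Intel controller model out of lspci output.
# On the server itself you would run:  lspci | grep -i ethernet
# The sample line below is illustrative only (82573E is the model
# Steven asked about, not a confirmed value for this box).
sample='02:00.0 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet Controller'
echo "$sample" | grep -io '825[0-9]*[a-z]*'
# -> 82573E
```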

Steven Gong wrote:
> From the link Luke provided below, it seems that the culprit is the
> Option ROM in the e1000, so it should be a hardware issue.
>
> Bill, are you using the Intel 82573E 1G network card in your test?
>
> On 5/8/07, *Luke Hubbard* <[EMAIL PROTECTED]> wrote:
>
>     Hi Bill,
>
>     Thanks for running this test. The CPU numbers are promising if we can
>     fix this other issue. Can you provide details of how much memory the
>     red5 process was using?
>
>     To be clear: every time the server died it didn't hang; its process
>     died. That is very odd, because if there was an exception it should
>     have been logged. I suspect something happened in native networking
>     code that killed the Java process. I googled the errors from your
>     system logs and found these:
>
>     http://osdir.com/ml/linux.drivers.e1000.devel/2007-01/msg00133.html
>     http://www.kaltenbrunner.cc/blog/index.php?/archives/8-fixing-e1000-TX-transmit-timeouts-at-least-some-of-them.html
>
>     It sounds like it might be possible to fix the error by adjusting
>     the NIC settings.
>
>     Is anyone else experiencing the same symptoms?
>     The process dying without hanging or throwing any errors? If so,
>     please speak up.
>
>     Luke
>
>     On 5/8/07, Interalab <[EMAIL PROTECTED]> wrote:
>     > Rob Schoenaker and I ran a little stress test this morning and
>     > wanted to share our results.  Rob, feel free to add to or
>     > correct me if you want.
>     >
>     > This was a test of one publishing live stream client and many
>     > subscribing clients.
>     >
>     > Here's the server config:
>     >
>     > Xubuntu Linux
>     > AMD 64 3500+ processor
>     > 4 GB RAM
>     > Red 5 trunk ver 1961
>     > Gbit Internet connection
>     >
>     > Client side:
>     >
>     > From the other side of the world . . .
>     > Lots of available bandwidth
>     >
>     > The first run choked the server at 256 simultaneous connections.
>     > They were 250k - 450k live streams.
>     >
>     > After a re-boot, we got up to 300+ connections.  This time the
>     > resolution was lower, so the average bandwidth per stream was
>     > about 150k.
>     >
>     > Server looked like this:
>     > Cpu(s): 12.0%us,  2.0%sy,  0.0%ni, 84.0%id,  0.0%wa,  0.3%hi,  1.7%si,  0.0%st
>     > Mem:   3976784k total,  1085004k used,  2891780k free,     7896k buffers
>     > Swap:  2819368k total,        0k used,  2819368k free,   193740k cached
>     >
>     > After about 15 minutes, and over 400 connections, Red5 quit
>     > without any log errors.  The Java PID just went away.  Had a bunch
>     > of these in dmesg:  e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>     >
>     > Started Red5 by running red5.sh without re-booting the server.
>     > It came right back up and started streaming again.
>     >
>     > This time, we set the resolution to 80x60, or about 60-80 kbps
>     > per stream.
>     >
>     > Rob tried to crash it by launching about 200 connections in about 10
>     > seconds, but it kept running.  It didn't die again.
>     >
>     > Final outcome of the last test:
>     >
>     > 627 concurrent connections peak
>     > approx. 1100 connections total (some dropped when browsers
>     > crashed under the load, etc.)
>     >
>     > At the peak, player buffers started to get big.  Some were as
>     > high as 70; most of mine were in the 30s.
>     >
>     > So, my observation is that even though the server and available
>     > bandwidth didn't seem too stressed (plenty of free memory, CPU
>     > usage in the teens), the larger the individual streams, the fewer
>     > total connections we could make.
>     >
>     > Not very scientific, but we thought it was worth sharing with
>     > the list.
>     >
>     > Regards,
>     > Bill
>     >
>     > _______________________________________________
>     > Red5 mailing list
>     > [email protected] <mailto:[email protected]>
>     > http://osflash.org/mailman/listinfo/red5_osflash.org
>     >
>
>
>     --
>     Luke Hubbard
>     codegent | coding for the people
>     http://www.codegent.com
>
>
>
> -- 
> I cannot tell why this heart languishes in silence. It is for small 
> needs it never asks, or knows or remembers.  -- Tagore
>
> Best Regards
> Steven Gong
> ------------------------------------------------------------------------
>
>   
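For anyone hitting the same Tx Unit Hang, the NIC-settings fix Luke mentions would look roughly like this. Disabling TSO and enlarging the transmit ring are the workarounds described in the linked posts; eth1 and the ring size of 1024 are assumptions here, and the snippet only prints the commands so they can be reviewed before being run as root on the server.

```shell
# Sketch of the e1000 workaround from the linked posts: disable TCP
# segmentation offload and grow the Tx descriptor ring.  IFACE and the
# ring size are assumptions -- review, then run the printed commands
# as root on the affected server.
IFACE=eth1
cat <<EOF
ethtool -K $IFACE tso off
ethtool -G $IFACE tx 1024
EOF
```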

