Has anyone built some kind of profiling tool for VOD streams yet? I'm 
assuming all that really needs to be profiled is the connections, rather 
than data throughput? Would any kind of web server profiling tool work 
here if that's the case? I'm familiar with things like ApacheBench, 
httperf, etc.
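One caveat: Red5 speaks RTMP rather than plain HTTP, so ab/httperf won't exercise the actual streaming path. But if it really is just connection handling you care about, a throwaway socket script can give a rough number. Below is a minimal sketch, not a real RTMP client - the host/port are placeholders and the built-in dummy server only exists so the example runs stand-alone; point the client loop at a real server instead:

```python
# Sketch of a connection-level stress test (no RTMP handshake, just TCP).
# It measures how many plain TCP connections a server will accept and hold,
# which matches the "just profile the connections" idea above.
import socket
import threading

HOST, PORT = "127.0.0.1", 19350   # placeholder; use your streaming host
NUM_CONNS = 200                    # how many simultaneous sockets to open

def run_dummy_server(ready):
    """Toy stand-in server so the sketch is self-contained."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(NUM_CONNS)
    ready.set()
    held = []
    for _ in range(NUM_CONNS):
        conn, _addr = srv.accept()
        held.append(conn)          # hold every connection open
    for conn in held:
        conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=run_dummy_server, args=(ready,), daemon=True).start()
ready.wait()

opened = 0
clients = []
for _ in range(NUM_CONNS):
    try:
        s = socket.create_connection((HOST, PORT), timeout=5)
        clients.append(s)
        opened += 1
    except OSError:
        break                      # server stopped accepting: that's the limit

print("opened %d concurrent connections" % opened)
for s in clients:
    s.close()
```

Numbers from a loopback run like this are only an upper bound; real clients across a WAN (and the RTMP handshake itself) will behave differently.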

John Grden wrote:
> Thanks VERY much Bill for sharing and taking the time - it's very much 
> appreciated
>
> John
>
> On 5/7/07, *Interalab* < [EMAIL PROTECTED] 
> <mailto:[EMAIL PROTECTED]>> wrote:
>
>     Rob Schoenaker and I ran a little stress test this morning and
>     wanted to
>     share our results.  Rob, feel free to add to or correct me if you
>     want.
>
>     This was a test of one publishing live stream client and many
>     subscribing clients.
>
>     Here's the server config:
>
>     Xubuntu Linux
>     AMD 64 3500+ processor
>     4 GB RAM
>     Red5 trunk ver 1961
>     Gbit Internet connection
>
>     Client side:
>
>     From the other side of the world . . .
>     Lots of available bandwidth
>
>     The first run choked the server at 256 simultaneous
>     connections.  They
>     were 250k - 450k live streams.
>
>     After a reboot, we got up to 300+ connections.  This time the
>     resolution was lower, so the average bandwidth per stream was
>     about 150 kbps.
>
>     Server looked like this:
>     Cpu(s): 12.0%us,  2.0%sy,  0.0%ni, 84.0%id,  0.0%wa,  0.3%hi,  1.7%si,  0.0%st
>     Mem:   3976784k total,  1085004k used,  2891780k free,     7896k buffers
>     Swap:  2819368k total,        0k used,  2819368k free,   193740k cached
>
>     After about 15 minutes, and over 400 connections, Red5 quit
>     without any
>     log errors.  The Java PID just went away.  Had a bunch of these in
>     dmesg:  e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>
>     Started Red5 by running red5.sh without re-booting the server.  It
>     came
>     right back up and started streaming again.
>
>     This time, we set the resolution to 80x60, or about 60-80 kbps per
>     stream.
>
>     Rob tried to crash it by launching about 200 connections in about 10
>     seconds, but it kept running.  It didn't die again.
>
>     Final outcome of the last test:
>
>     627 concurrent connections peak
>     approx 1100 connections total (some dropped when browsers crashed
>     under
>     the load, etc.)
>
>     At the peak, player buffers started to get big.  Some as high as 70,
>     most of mine were in the 30's.
>
>     So, my observation is that even though the server and available
>     bandwidth didn't seem to be stressed too much (lots of free memory,
>     CPU % in the teens), the larger the individual streams, the fewer
>     total connections we could make.
>
>     Not very scientific, but we thought it was worth sharing with the
>     list.
>
>     Regards,
>     Bill
>
>     _______________________________________________
>     Red5 mailing list
>     [email protected] <mailto:[email protected]>
>     http://osflash.org/mailman/listinfo/red5_osflash.org
>

