Hi,
        I found RED5 recently and love it; thanks for bringing this cool
product to us. I ran a simple stress test on the RED5 0.6 series and found
that 0.6.0 final and 0.6.1 final are not stable in my test environment,
while 0.6rc3 is much more stable.
        Here is the test: 
1. Environment: 
        P4 3.0 GHz, 1 GB RAM, Fedora Core 5.

2. Test data: 
1. Publish 100 video (100*60) streams, 100 subscriptions: 
top - 07:26:00 up  1:17,  3 users,  load average: 2.14, 1.30, 1.25
Tasks:  73 total,   3 running,  70 sleeping,   0 stopped,   0 zombie
Cpu(s): 30.8% us, 12.3% sy,  0.0% ni, 53.6% id,  0.0% wa,  0.8% hi,  2.5% si
Mem:    999228k total,   468788k used,   530440k free,    21592k buffers
Swap:  2096472k total,        0k used,  2096472k free,   295088k cached


2. Publish 100 streams, 200 subscriptions: 
top - 07:31:22 up  1:22,  3 users,  load average: 1.43, 1.42, 1.32
Tasks:  73 total,   3 running,  70 sleeping,   0 stopped,   0 zombie
Cpu(s): 36.5% us,  9.3% sy,  0.0% ni, 49.9% id,  0.0% wa,  1.3% hi,  3.0% si
Mem:    999228k total,   475616k used,   523612k free,    22016k buffers
Swap:  2096472k total,        0k used,  2096472k free,   294924k cached

3. Publish 100 streams, 300 subscriptions: 
top - 07:39:21 up  1:30,  3 users,  load average: 1.20, 1.70, 1.52
Tasks:  73 total,   2 running,  71 sleeping,   0 stopped,   0 zombie
Cpu(s): 49.9% us,  0.2% sy,  0.0% ni, 49.9% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:    999228k total,   507976k used,   491252k free,    22656k buffers
Swap:  2096472k total,        0k used,  2096472k free,   295064k cached

# After less than 15 minutes the server was dead: no streams played back
any more, the java process was still running, and CPU stayed at 50% even
after I shut down all clients (publishers and subscribers): 
top - 07:50:56 up  1:42,  3 users,  load average: 1.00, 1.07, 1.23
Tasks:  73 total,   2 running,  71 sleeping,   0 stopped,   0 zombie
Cpu(s): 50.1% us,  0.0% sy,  0.0% ni, 49.9% id,  0.0% wa,  0.0% hi,  0.0% si
Mem:    999228k total,   508104k used,   491124k free,    23588k buffers
Swap:  2096472k total,        0k used,  2096472k free,   294912k cached
# The server logged the following output: 
     [java] [WARN] 1014880 DefaultQuartzScheduler_Worker-7:( org.red5.server.net.rtmp.RTMPConnection.execute ) Closing RTMPMinaConnection from 192.168.1.57:1186 to 192.168.1.50 (in: 3590, out: 74659) due to too much inactivity (1181651775369). 
     [java] [WARN] 1015310 DefaultQuartzScheduler_Worker-8:( org.red5.server.net.rtmp.RTMPConnection.execute ) Closing RTMPMinaConnection from 192.168.1.57:1189 to 192.168.1.50 (in: 3616, out: 78765) due to too much inactivity (1181651775587). 
     [java] [WARN] 1015311 DefaultQuartzScheduler_Worker-8:( org.red5.server.net.rtmp.RTMPConnection.execute ) Closing RTMPMinaConnection from 192.168.1.57:1190 to 192.168.1.50 (in: 3612, out: 74087) due to too much inactivity (1181651775619). 
     [java] Exception in thread "DefaultQuartzScheduler_QuartzSchedulerThread" java.lang.OutOfMemoryError: Java heap space
     [java]     at java.lang.Class.getDeclaredMethods0(Native Method)
     [java]     at java.lang.Class.privateGetDeclaredMethods(Class.java:2427)
     [java]     at java.lang.Class.getMethod0(Class.java:2670)
     [java]     at java.lang.Class.getMethod(Class.java:1603)
     [java]     at org.apache.commons.logging.LogFactory.directGetContextClassLoader(LogFactory.java:825)
     [java]     at org.apache.commons.logging.LogFactory$1.run(LogFactory.java:791)
     [java]     at java.security.AccessController.doPrivileged(Native Method)
     [java]     at org.apache.commons.logging.LogFactory.getContextClassLoader(LogFactory.java:788)
     [java]     at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:383)
     [java]     at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645)
     [java]     at org.quartz.core.JobRunShell.<init>(JobRunShell.java:80)
     [java]     at org.quartz.impl.StdJobRunShellFactory.borrowJobRunShell(StdJobRunShellFactory.java:86)
     [java]     at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:357)
     [java] Exception in thread "Token Distributor" java.lang.OutOfMemoryError: Java heap space
     [java] java.lang.reflect.InvocationTargetException
     [java]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     [java]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     [java]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     [java]     at java.lang.reflect.Method.invoke(Method.java:597)
     [java]     at org.mortbay.log.Slf4jLog.warn(Slf4jLog.java:113)
     [java]     at org.mortbay.log.Log.warn(Log.java:154)
     [java]     at org.mortbay.jetty.servlet.HashSessionManager$SessionScavenger.run(HashSessionManager.java:312)
     [java]     at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:475)
     [java] Caused by: java.lang.OutOfMemoryError: Java heap space

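In case it helps anyone reproducing this: the "CPU stuck at 50% after all clients disconnect" symptom can be inspected by sending the JVM a SIGQUIT (kill -3), which makes a HotSpot JVM print a full thread dump to its stdout log without exiting. A minimal sketch; the throwaway background process stands in for the hung java process (use the real red5 PID instead):

```shell
# Sketch: ask a hung JVM for a thread dump via SIGQUIT (kill -3).
# A HotSpot JVM prints all thread stacks to its stdout log and keeps
# running, which shows what the busy thread is doing at ~50% CPU
# (i.e. one core fully pegged on this hyper-threaded P4).
sleep 300 &                 # stand-in for the hung java process
PID=$!
kill -3 "$PID"              # a real JVM would dump thread stacks here
wait "$PID" 2>/dev/null     # the stand-in dies on SIGQUIT; a JVM would not
echo "sent SIGQUIT to $PID"
```

On a real red5 process the dump appears in whatever log file captures the server's stdout.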
4. Result: 
        Under the same test, 0.6rc3 lasts at least 1~3 days, but 0.6.0 final
and 0.6.1 final last less than 15 minutes before dying with the Java heap
space error.
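One hedged workaround I can think of for the heap error itself: give the JVM more headroom and have it dump the heap on OutOfMemoryError so the leak can be examined in a profiler. -Xmx and -XX:+HeapDumpOnOutOfMemoryError are standard Sun JVM flags, but the JAVA_OPTS variable name and the red5 startup script picking it up are assumptions about the install:

```shell
# Assumed tweak to the Red5 launch environment: raise the maximum heap
# (768m leaves room for the OS on a 1 GB box) and capture a heap dump
# when an OutOfMemoryError fires, so the 0.6.0/0.6.1 leak can be
# inspected offline. JAVA_OPTS and the dump path are assumptions.
JAVA_OPTS="-Xms256m -Xmx768m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
export JAVA_OPTS
echo "launching with: $JAVA_OPTS"
# ./red5.sh   # hypothetical launch; assumes the script honors JAVA_OPTS
```

This would not fix a genuine leak, only delay it and leave evidence behind.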

        Any suggestions to improve my situation? Thanks!

Best regards!
Stephen


_______________________________________________
osflash mailing list
[email protected]
http://osflash.org/mailman/listinfo/osflash_osflash.org
