Which node.js version was this benchmark run on? 

I'm getting this output:
/opt/node/bin/node client.js

node.js:134
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
TypeError: Object #<ClientRequest> has no method 'finish'
    at addClient (/home/arq_msugano/bench/client.js:23:11)
    at Object.<anonymous> (/home/arq_msugano/bench/client.js:32:3)
    at Module._compile (module.js:402:26)
    at Object..js (module.js:408:10)
    at Module.load (module.js:334:31)
    at Function._load (module.js:293:12)
    at Array.<anonymous> (module.js:421:10)
    at EventEmitter._tickCallback (node.js:126:26)

any thoughts? 

tya,

On Monday, January 18, 2010 9:21:39 AM UTC-2, Felix Geisendoerfer wrote:
>
> A client asked me how many Comet connections node could handle, so I
> decided to give it a try again today.
>
> To make a long story short, I was able to modify my client/server
> scripts so that node would no longer segfault, smoothly reaching 64510
> parallel connections!
>
> http://gist.github.com/243632
>
> There are two things I changed:
>
> a) Disabled the timeout for the underlying tcp connections
> b) Always create 10 new connections in parallel and only attempt new
> connections when previous connections are established. Node still
> segfaults if I increase to a higher number of parallel connection
> starts.
>
> I was running my tests on a ridiculous "High-Memory Quadruple Extra
> Large" Ec2 instance with 68.4 GB memory (why not?) using Ubuntu 9.04.
> Here are some numbers:
>
> * From 0 to 64510 connections in 51 seconds
> * Minimal memory usage at 64510 connections: 364828 kb (5.66 kb /
> connection)
> * Peak memory usage at 64510 connections: 452732 kb (7.02 kb /
> connection)
> * Memory oscillation is due to v8's lazy garbage collection
> * There appear to be no noticeable memory leaks
> * The test happily ran for 2h 52m; I stopped it after that because
> nothing interesting seemed to happen anymore
> * During this time a total of 43281192 "Hello\n" messages were
> received by the 64510 clients
> * An average of 4194 messages / sec was received (ideally this value
> would have been closer to 6451, as each client was supposed to be sent
> a message every 10 sec)
>
> Conclusion:
>
> 64510 is the maximum number of connections due to the available ports
> on the system, so the test had to stop there. However, I do think node
> could have a chance at the 1 million comet user record [1] set by
> Richard Jones with Erlang. But I decided to wait for the net2 branch
> to be ready before going through the trouble of setting this up.
>
> v8's occasional garbage collection does not seem to take longer than 1
> second for the most part (otherwise the message / last-sec counter
> would drop to 0 several times in the logs; it only does so once after
> reaching 64510 connections). So here's hoping it won't become a
> problem for high-performance apps.
>
> Response time variation seems big (as indicated by the oscillation in
> the message / last-sec counter), but this test is not really set up to
> measure it, so I can't say much more about it.
>
> There is probably still a bug in node's connection handling that can
> cause a segfault if one tries to establish new connections too
> aggressively. But I challenge everybody to trigger this in a
> real-world scenario that is not a DoS attack. Either way, I'll retest
> that when the net2 branch is merged.
>
> Overall I am very impressed with node handling such a load, and the
> memory footprint in particular seems remarkable.
>
> Oh, and I'd love to hear more thoughts on the topic! : )
>
> [1] 
> http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/
>
> --fg
>
> PS: My log files can be found here: http://gist.github.com/279936
>
> On Dec 17 2009, 6:27 pm, Louis Santillan <lpsan...@gmail.com> wrote:
> > This was recently covered on the v8-users list.
> >
> > http://groups.google.com/group/v8-users/browse_thread/thread/21283f61...
> >
> > The skinny:
> >
> > void V8::RunGC() {
> >   // Keep sending idle notifications until V8 reports that
> >   // no more garbage-collection work remains.
> >   while (IdleNotification())
> >     ;
> > }
> >
> > -L
> >
> >
> >
> > On Wed, Dec 16, 2009 at 11:47 PM, Stefan Scholl <stefan.sch...@gmail.com> wrote:
> > > On 27 Nov., 22:50, Ryan Dahl <coldredle...@gmail.com> wrote:
> > >> On Fri, Nov 27, 2009 at 10:44 PM, Ricardo Tomasi <ricardob...@gmail.com> wrote:
> > >> > Worked out the memory usage: 400mb for 20k clients (20k per
> > >> > connection); it goes up to roughly 1gb at 36.7k connections (27k per
> > >> > client). Is it possible node is not completely freeing up memory from
> > >> > dropped connections (lots during the process)?
> >
> > >> The V8 GC is pretty lazy - it likes to keep stuff around for a while.
> > >> It's possible there is a memory leak, but I doubt it.
> >
> > > New to node.js. I was testing the simple Hello World script from the
> > > article at http://simonwillison.net/2009/Nov/23/node/ with ApacheBench
> > > and was wondering the same.
> >
> > > The memory usage was rising after each run of ApacheBench.
> >
> > > Searched this group for memory leaks and most of them seem to be
> > > fixed.
> > > It's very unlikely that this simple script leaks.
> >
> > > Is there a way to start V8's GC "by hand" to check for leaks in case
> > > I'm
> > > writing something more complex?
> >

-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nodejs@googlegroups.com
To unsubscribe from this group, send email to
nodejs+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en
