On Thu, Feb 16, 2012 at 4:25 AM, billywhizz wrote:
> Matt - just wanted to address a couple of points you made:
>
> 1. With node.js clustering you get what is effectively a layer 4 load
> balancer across the available cpu's on a single box. this is all being
handled by the OS and is insanely fast
Matt - just wanted to address a couple of points you made:
1. With node.js clustering you get what is effectively a layer 4 load
balancer across the available cpu's on a single box. this is all being
handled by the OS and is insanely fast
2. Restarting a server gracefully is pretty easy with a lit
Maybe I'm dumb, but I ran into a few issues setting up nginx the first
time...
It rejected file uploads greater than 1 meg
It timed out requests after some short period of time
It rejects multi-part form uploads that don't have a proper content
length, and doesn't support chunked encoding. We had
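For anyone hitting the same walls: the first two gotchas map to stock nginx directives, since `client_max_body_size` defaults to 1m and the proxy timeouts default to 60s. A hypothetical front-end config addressing them (the backend address and the values are examples only, not recommendations; chunked request bodies had no directive to turn on in nginx releases of this vintage, support only arrived in later versions):

```nginx
# Illustrative nginx front-end for a node backend on 127.0.0.1:3000.
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # The >1 MB upload rejection: client_max_body_size defaults to 1m.
        client_max_body_size 50m;

        # The short request timeouts: these default to 60s.
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```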
On Wed, Feb 15, 2012 at 4:14 PM, Matt wrote:
> On Wed, Feb 15, 2012 at 5:12 PM, Matt wrote:
>
>> (or your choice of non-alcoholic beverage).
>>
>
Sweet! ;)
> --
> Job Board: http://jobs.nodejs.org/
> Posting guidelines:
> https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
> You received this message because you are subscribed to the Google
> Groups "nodejs" group.
On Wed, Feb 15, 2012 at 5:12 PM, Matt wrote:
> 1) please don't benchmark Node against Nginx for static file serving
> unless you're actively working to make it faster - that is a fools game as
> Nginx will always be faster.
>
I should have added in here, so more people don't get mad at me (sheesh)
On Wed, Feb 15, 2012 at 4:48 PM, Dean Landolt wrote:
> On Wed, Feb 15, 2012 at 3:27 PM, Matt wrote:
>
>> On Wed, Feb 15, 2012 at 3:03 PM, Dean Landolt wrote:
>>
>>> On Wed, Feb 15, 2012 at 12:35 PM, Matt wrote:
>>>
On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner wrote:
> On the topic of "why use node to service static files"...
On Wed, Feb 15, 2012 at 3:27 PM, Matt wrote:
> On Wed, Feb 15, 2012 at 3:03 PM, Dean Landolt wrote:
>
>> On Wed, Feb 15, 2012 at 12:35 PM, Matt wrote:
>>
>>> On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner wrote:
>>>
On the topic of "why use node to service static files"...
Because if you don't need to understand, configure, and maintain an
extra piece of software in your stack, things get simpler.
FWIW, I was quite surprised at how easy it was to set up nginx to front-end
node. It took a couple of hours instead of the days I expected. You can
think the way you do with an Apache config, but with a much simpler
configuration language that just works.
On Wed, Feb 15, 2012 at 3:03 PM, Dean Landolt wrote:
> On Wed, Feb 15, 2012 at 12:35 PM, Matt wrote:
>
>> On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner wrote:
>>
>>> On the topic of "why use node to service static files"...
>>>
>>> Because if you don't need to understand, configure, and maintain an
>>> extra piece of software in your stack, things get simpler.
On Wed, Feb 15, 2012 at 12:35 PM, Matt wrote:
> On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner wrote:
>
>> On the topic of "why use node to service static files"...
>>
>> Because if you don't need to understand, configure, and maintain an
>> extra piece of software in your stack, things get simpler.
On Wed, Feb 15, 2012 at 2:23 PM, Chris Scribner wrote:
> Matt,
>
> The original post started with an observation and a request for more
> details, presumably from those who know what's going on under the
> hood. I didn't detect any hint of "expecting Node to compare well with
> nginx or Apache."
Matt,
The original post started with an observation and a request for more
details, presumably from those who know what's going on under the
hood. I didn't detect any hint of "expecting Node to compare well with
nginx or Apache."
A conversation about the underlying differences in the servers and
On Wed, Feb 15, 2012 at 1:56 PM, billywhizz wrote:
> i don't hear anyone on this thread complaining that node.js is slower.
>
This thread started with: "I analysed node-0.6.10/windows2008r2 running my
clustered http server script in VTune with a view to understanding why node
took 50% more CPU than nginx/apache for a given scenario."
i don't hear anyone on this thread complaining that node.js is slower.
we expect it to be slower and it probably always will be, although
"always" is a dangerous word in computer science. my goal with this
research is to get node.js to a point where it's fast enough that
people have an option not t
On Wed, Feb 15, 2012 at 12:35 PM, Matt wrote:
>
>> If node can get 10-50% faster at serving static files, then that's X
>> number of more deployments that don't need to complicate their
>> infrastructure more than it needs to be.
>>
>
> I can almost guarantee you there are no deployments where th
On Wed, Feb 15, 2012 at 12:01 PM, Chris Scribner wrote:
> On the topic of "why use node to service static files"...
>
> Because if you don't need to understand, configure, and maintain an
> extra piece of software in your stack, things get simpler.
>
On a very basic level yes, but there are more
On the topic of "why use node to service static files"...
Because if you don't need to understand, configure, and maintain an
extra piece of software in your stack, things get simpler.
If node can get 10-50% faster at serving static files, then that's X
number of more deployments that don't need to complicate their
infrastructure more than it needs to be.
Also remember for Node you'll have to add in an fs.watch() event on that
file. Probably won't have much of an impact at all, but nginx does that to
be sure it re-loads the file if it changes.
On Tue, Feb 14, 2012 at 8:30 PM, billywhizz wrote:
> it tells me how long it takes to process the http headers.
it tells me how long it takes to process the http headers. is just a
baseline. i'll add tests for different file sizes.
On Feb 15, 1:17 am, Mark Hahn wrote:
> Why test a 0k file? What could that possibly tell you?
Why test a 0k file? What could that possibly tell you?
btw - with some tweaks i am able to get 47k rps from nginx on 85% of a
single core (rest is IO wait). i am serving the 0k file from /tmp
which is shared memory. have updated the nginx.conf in the gist. matt
- you are way ahead at the moment!! ;)
On Feb 15, 12:56 am, billywhizz wrote:
> heh. proves your point. for now.
On Tue, Feb 14, 2012 at 7:42 PM, Matt wrote:
> On Tue, Feb 14, 2012 at 7:00 PM, billywhizz wrote:
>
>> Matt, there are all sorts of optimisations available. if you really
>> want top performance, then you could write a c++ module that does
>> static file serving and can be easily plugged into a node.js http
>> server.
heh. proves your point. for now. i'm not dissing nginx btw - i love
nginx. i also like lighttpd and gatling a lot too, but there are many
scenarios where you may not want to serve static files from a separate
server listening on a separate port. node.js is also easily
programmable, none of the othe
On Tue, Feb 14, 2012 at 7:00 PM, billywhizz wrote:
> Matt, there are all sorts of optimisations available. if you really
> want top performance, then you could write a c++ module that does
> static file serving and can be easily plugged into a node.js http
> server.
Yes, but why would you do th
and like libuv is...
On Feb 15, 12:03 am, Mark Hahn wrote:
> > it would be able to spend most of its time in c++ land serving static
>
> files so there is no reason it could not be as fast as nginx.
>
> That would only be true if you coded your c++ to be event-loop based like
> nginx is.
> it would be able to spend most of its time in c++ land serving static
files so there is no reason it could not be as fast as nginx.
That would only be true if you coded your c++ to be event-loop based like
nginx is.
Matt, there are all sorts of optimisations available. if you really
want top performance, then you could write a c++ module that does
static file serving and can be easily plugged into a node.js http
server. it would be able to spend most of its time in c++ land
serving static files so there is no reason it could not be as fast as
nginx.
On Tue, Feb 14, 2012 at 3:22 PM, Tim Caswell wrote:
> Matt,
>
> I'm not offended by the tone. I understand the intent and tone are hard
> to convey on the internet. That's why I love going to tech conferences to
> meet people face to face. I hope to be at nodeconf this summer, maybe we
> can discuss this there.
Matt,
I'm not offended by the tone. I understand the intent and tone are hard to
convey on the internet. That's why I love going to tech conferences to
meet people face to face. I hope to be at nodeconf this summer, maybe we
can discuss this there.
But I do disagree with your statement "it's j
On Tue, Feb 14, 2012 at 8:49 AM, billywhizz wrote:
> Matt - there's no need to be so rude, especially to one of the best
> guys in the community.
>
I didn't mean "benchmarks or GTFO" in any rude way - it's just that people
need to understand that Node will never be the high performance static
file server
On Tue, Feb 14, 2012 at 8:49 AM, billywhizz wrote:
> Liam - i asked Ben about exposing the fd on the socket handle so linux
> folks could use sendfile but it seems the core team don't really want
> to do that. there was a mention of providing a sendfile and/or pipe
> between two handles internall
Liam - i asked Ben about exposing the fd on the socket handle so linux
folks could use sendfile but it seems the core team don't really want
to do that. there was a mention of providing a sendfile and/or pipe
between two handles internally in the c++ code at some stage, but my
feeling was that this
There's an issue filed for a sendfile-based http/socket transfers,
https://github.com/joyent/node/issues/1802
It entails a fairly significant revision to the net.Socket queue
mechanism. I don't know of anyone working on that.
On Feb 13, 3:47 pm, billywhizz wrote:
> from tests i have done using the low level node.js bindings and
> sendfile on linux (which requires a small patch to the node.js c++
> source), node.js can get pretty close to nginx (within 10-20%) for
> serving static files.
A meaningful thing to do might be to run Node in the v8 profiler mode,
and then analyze the log file with the v8 tick-analyzer script.
E.g.
cd /yourdir
node --prof --prof_auto yourscript.js
cd /v8-source/out/native
../../tools/linux-tick-processor /yourdir/v8.log
I could not get it to work without
I'd be really curious about that patch and the benchmarks as well, and
I'm sure it will make it to Node's core. The furthest I had time to
get so far is: https://gist.github.com/1350901
I don't have the benchmarks but it was a lot faster than serving the
file in a classic way, though less than 50%
from tests i have done using the low level node.js bindings and
sendfile on linux (which requires a small patch to the node.js c++
source), node.js can get pretty close to nginx (within 10-20%) for
serving static files. the larger the files, the smaller the difference
as most of the work is then be
On Feb 13, 5:55 pm, Cosmere Infahm wrote:
> I analysed node-0.6.10/windows2008r2 running my clustered http server
> script in VTune with a view to understanding why node took 50% more CPU
> than nginx/apache for a given scenario. The results indicate that
> GetQueuedCompletionStatusEx takes 40% of