Yury Bulka writes:

> Wow, from 695 requests per second to 49,516 is a huge improvement!
>
> Since we were comparing to django previously, it's now much closer to
> django (which does 78,132 rps).

I expect the Racket benchmark will do even better on TechEmpower's
hardware than it did on mine: their machines are more powerful and they
run the benchmarking code, the database and the servers on separate
machines, so the gap should end up being even smaller.

Also worth keeping in mind that the server Django uses for these
benchmarks, meinheld, is completely written in C:

* https://github.com/mopemope/meinheld/blob/311acbc4e7bd38fa3f3d0e158b35cde9ef73f8e5/meinheld/gmeinheld.py#L11
* https://github.com/mopemope/meinheld/tree/311acbc4e7bd38fa3f3d0e158b35cde9ef73f8e5/meinheld/server

So I think it's really impressive that we're able to get this close
with pure Racket, despite the overhead of running one nginx process per
core in front of the app and of connecting those nginx processes to the
app over TCP.  Speaking of which, another thing we could do to improve
performance in this benchmark is define a custom `tcp^' unit based on
`unix-socket-lib' and have nginx connect to the backends through unix
sockets rather than TCP.
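
To make that concrete, below is a rough, untested sketch of what such a
unit might look like, using `net/tcp-sig' and `racket/unix-socket'.  The
socket path is a made-up placeholder, and `tcp-accept-ready?' and
`tcp-addresses' are simplified because unix sockets don't map cleanly
onto them:

    #lang racket/base

    ;; Rough sketch of a tcp^ unit backed by unix domain sockets.
    (require net/tcp-sig
             racket/unit
             racket/unix-socket)

    (define socket-path "/tmp/app.sock")  ;; hypothetical path

    (define-unit unix-socket-tcp@
      (import)
      (export tcp^)

      ;; Ignore the port number and hostname; listen on the unix socket.
      (define (tcp-listen port [max-allow-wait 4] [reuse? #f] [hostname #f])
        (unix-socket-listen socket-path max-allow-wait))

      (define (tcp-connect hostname port [local-hostname #f] [local-port #f])
        (unix-socket-connect socket-path))

      (define (tcp-connect/enable-break hostname port
                                        [local-hostname #f] [local-port #f])
        (unix-socket-connect socket-path))

      (define (tcp-accept listener)
        (unix-socket-accept listener))

      (define (tcp-accept/enable-break listener)
        (unix-socket-accept listener))

      (define (tcp-accept-ready? listener)
        ;; racket/unix-socket doesn't expose a non-destructive readiness
        ;; check, so conservatively claim readiness here.
        #t)

      (define (tcp-close listener)
        (unix-socket-close-listener listener))

      (define tcp-listener? unix-socket-listener?)

      (define (tcp-abandon-port port)
        (if (input-port? port)
            (close-input-port port)
            (close-output-port port)))

      (define (tcp-addresses port [port-numbers? #f])
        ;; Unix sockets have no hostnames/ports; return placeholders.
        (if port-numbers?
            (values "127.0.0.1" 0 "127.0.0.1" 0)
            (values "127.0.0.1" "127.0.0.1"))))

If that works out, it should be a matter of passing the unit to `serve'
via its `#:tcp@' argument and pointing nginx at the socket.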

> So do I understand correctly that the factors that contribute to the
> improvement are:
>
> 1. running multiple worker processes behind nginx
> 2. adding content-length to all responses
> 3. using CS variant of Racket
> 4. using Racket 7.7
> 5. tuning nginx config (enabling http 1.1 especially)

I'm afraid I haven't kept notes on how much each of these changes
contributed.  From what I remember, #4 wasn't much of a factor; #1 and
#5 were the biggest factors, followed by #2 and #3, which I changed at
the same time so I can't say which made more of a difference.

> #2 is something that seems to require manual work in the client code,
> but maybe that can be made easier on web-server-lib side somehow.

The work one has to do is pretty minimal[1], and I personally like that
the default is to stream data, so I think the only thing to improve
here is awareness: the docs for `response' should be updated to mention
that unless a `Content-Length' header is provided, responses use
chunked transfer encoding.

[1]: https://github.com/TechEmpower/FrameworkBenchmarks/pull/5727/files#diff-b21f7e3ecfa09726dac9ce079f612719R47-R70
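
For reference, setting the header explicitly looks roughly like the
following sketch (the body and mime type are illustrative, not taken
from the linked diff):

    #lang racket/base

    ;; Construct a response with an explicit Content-Length header so
    ;; the server doesn't fall back to chunked transfer encoding.
    (require web-server/http)

    (define body #"hello, world")

    (define resp
      (response
       200 #"OK"
       (current-seconds)
       #"text/plain; charset=utf-8"
       (list (header #"Content-Length"
                     (string->bytes/utf-8
                      (number->string (bytes-length body)))))
       (lambda (out)
         (write-bytes body out))))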
