Here are some more thoughts on this benchmark competition:
    http://www.techempower.com/benchmarks/
which has been discussed here a bit already.

I spent a while today playing around with putting an Nginx proxy in front of the Ur/Web HTTP server, to see if there was an easy way to improve performance. I ran the servers on a rather slow virtual machine, since I don't have handy any real machines with a very fast LAN connection. The real benchmark setup uses three physical machines (app, database, benchmarker) connected by gigabit Ethernet, so testing a server either from its own machine or from a machine far away on the Internet seems to create unrealistic network conditions. That's why I decided to use a set of VMs in the same data center, even though the ones I'm using happen to be scandalously slow. (I hope _relative_ performance numbers are still right, but I don't know for sure.)

As a sanity check, I also benchmarked the VM server over the Internet from my laptop, far away. In that setting, adding Nginx doubled throughput.

In tests between VMs within the same data center, however, I couldn't get any big throughput change one way or the other with Nginx in front vs. direct connections with HTTP keepalive. Nginx does seem to increase latency substantially, though (by orders of magnitude).
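For reference, here is a minimal sketch of the kind of Nginx reverse-proxy configuration this comparison assumes (the upstream name "urweb" and port 8080 are just placeholders for wherever the Ur/Web HTTP server actually listens; the real setup may have used different tuning):

    # Keep persistent connections from Nginx to the backend,
    # so the proxy doesn't open a new TCP connection per request.
    upstream urweb {
        server 127.0.0.1:8080;   # hypothetical Ur/Web server address
        keepalive 32;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://urweb;
            proxy_http_version 1.1;          # needed for upstream keepalive
            proxy_set_header Connection "";  # clear Connection header so keepalive sticks
        }
    }

Without the keepalive/HTTP 1.1 settings, Nginx falls back to one connection per proxied request, which would make the comparison against direct keepalive connections unfair.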

So, my conclusion for now is that it is not crazy to keep the Ur/Web HTTP server exposed directly. The LAN latency is just too low to call for any special tricks to optimize network usage. Does that sound sensible to folks who know about such things?

I'd still be very grateful for anyone's attempts to make similar measurements in environments as close to the real one as possible. Our latest code is in the UrWeb directory here:
    https://github.com/pseudonom/FrameworkBenchmarks
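If anyone wants to try reproducing the measurements, something along these lines with a load-generation tool like wrk should be in the right ballpark (the host, port, and concurrency numbers below are placeholders, not the official benchmark parameters):

    # Hypothetical invocation: 4 threads, 64 concurrent connections, 30 seconds,
    # with latency statistics. Point it at either Nginx or the Ur/Web server directly.
    wrk -t 4 -c 64 -d 30s --latency http://app-server:8080/

Comparing the throughput and latency numbers for the proxied vs. direct cases over a fast LAN is exactly the measurement I'd love to see from someone with hardware closer to the real benchmark environment.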

