Read:

  http://blog.dscpl.com.au/2007/09/parallel-python-discussion-and-modwsgi.html

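The short version: to use all the cores you run the Django application
across multiple processes. With Apache/mod_wsgi daemon mode that is just
a matter of configuration; a rough sketch along these lines (the names
and paths here are placeholders, see the post above for the details and
trade-offs):

  # Spread requests over 4 daemon processes (roughly one per core),
  # each with its own pool of threads for concurrent requests.
  WSGIDaemonProcess mysite processes=4 threads=15
  WSGIProcessGroup mysite
  WSGIScriptAlias / /path/to/mysite/django.wsgi

Multiple worker processes sidestep the GIL for CPU-bound work; the
operating system then schedules them across the available cores.
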
Graham

On Oct 7, 8:56 am, "ihomest...@gmail.com" <ihomest...@gmail.com>
wrote:
> Thanks to you guys for the insightful discussions and thoughts on this
> topic. It is always good to hear from many sides to get a complete
> picture. I agree with your comments about the benchmark thing, and that
> is also the origin of my confusion when I read the benchmark part
> in the Tornado doc.
>
> Since many have mentioned that django is not an asynchronous
> framework, this leads to another of my questions: would anyone share
> any thoughts on how to configure django to take advantage of
> multi-core machines? Or is there nothing extra we need to do, and
> everything is already taken care of by the underlying django framework
> on multiple cores?
>
> Thanks.
>
> On Oct 6, 3:23 am, Roberto De Ioris <robe...@unbit.it> wrote:
>
>
>
> > On Mon, 2009-10-05 at 22:16 -0700, ihomest...@gmail.com wrote:
> > > I read this doc about the performance comparison between Tornado and
> > > Django: http://www.tornadoweb.org/documentation
>
> > > I am quite new to both django and tornado (just heard about it). To me
> > > there are a few confusing points about the conclusion that "Tornado
> > > consistently had 4X the throughput of the next fastest framework, and
> > > even a single standalone Tornado frontend got 33% more throughput even
> > > though it only used one of the four cores." Maybe the document could
> > > add more comments about how the experiment is set up.
>
> > > The context of this statement is that Tornado runs with 4 frontends on
> > > a 4 core machine. My question is "Could django apps take advantage of
> > > 4 cores?" Is the 4X performance difference due to the setup or is it
> > > due to the reason that Tornado could make a better use of a 4 core
> > > machine?
>
> > > Any thoughts?
> > > Thanks.
>
> > Once in a while, someone (me included, with uWSGI) spits out a new
> > outstanding deployment technology, forgetting that 99.9999999% of
> > bottlenecks are in the apps and not in the web servers.
>
> > Putting all the effort into raw speed is largely pointless: all of
> > these technologies are heavily based on optimizations done by the
> > kernel (epoll, kqueue, sendfile, vectored I/O, various AIO
> > techniques). These packages put them together (mostly in the same
> > manner) and build their own copy of an "ultrafast" asynchronous
> > system.
>
> > Please stop using "asynchronous" or "evented" as a performance
> > measurement. As Graham already said, an evented environment requires
> > that all the players be async/evented, and that rarely happens.
>
> > Django (and WSGI, by its current design) is not asynchronous, so do
> > not spend time chasing a 1% performance improvement; choose your
> > environment by looking first at its robustness and its features.
>
> > And please, please (and please) stop doing benchmarks with a hello
> > world. Even the most under-talented programmer can optimize his work
> > for this kind of app ;)
>
> > --
> > Roberto De Ioris
> > http://unbit.it
> > JID: robe...@jabber.unbit.it