Martin P. Hellwig, 08.03.2010 03:08:
I did read, two years or so ago, that AMD was looking into something
that does just what you say at the CPU level, that is, present itself as
one logical CPU while underneath there are multiple physical ones. I
wouldn't hold my breath waiting for it, though.

Many (desktop/server) CPUs actually do the opposite today - they present themselves as one physical CPU per core with more than one (commonly two) logical CPUs. This was introduced because modern CPUs have so many independent parts (integer arithmetic, floating point, SSE, memory access) that it's hard to keep all of them busy with a single process (which usually does either integer arithmetic *or* floating point, for example, rarely both in parallel). With multiple processes running on the same core, it becomes a lot easier to find independent operations that can be sent to different parts of the core in parallel.
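
Just to illustrate what that looks like from the OS side, here's a little sketch (assuming Linux and the usual /proc/cpuinfo fields, so take the parsing with a grain of salt) that compares the logical CPU count with a rough estimate of the physical core count:

import multiprocessing

# Logical CPUs as the OS sees them -- this count includes the extra
# SMT ("Hyper-Threading") siblings that each core presents.
print("logical CPUs: %d" % multiprocessing.cpu_count())

def physical_cores():
    # Estimate physical cores on Linux by counting unique
    # (physical id, core id) pairs in /proc/cpuinfo.
    cores = set()
    phys_id = core_id = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys_id = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core_id = line.split(":", 1)[1].strip()
                elif not line.strip() and core_id is not None:
                    # blank line ends one "processor" block
                    cores.add((phys_id, core_id))
                    phys_id = core_id = None
    except IOError:
        return None
    if core_id is not None:
        cores.add((phys_id, core_id))
    return len(cores) or None

print("physical cores: %s" % physical_cores())

On a hyper-threaded box the first number will typically be twice the second.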

Automatically splitting single-threaded code over multiple cores is something that compilers (which see the full source code) should be able to do a lot better than hardware (which only sees a small window of machine instructions at a time).

http://en.wikipedia.org/wiki/Vectorization_%28computer_science%29
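
To make the term concrete: in Python you usually get this kind of element-wise parallelism not from a compiler working on your loop, but by handing the loop to something like NumPy (assuming you have it installed), which runs it in one tight compiled loop over the whole array:

import numpy as np

n = 1000000

# Plain Python loop: one interpreted operation per element.
a = list(range(n))
b = list(range(n))
c = [x + 2.0 * y for x, y in zip(a, b)]

# "Vectorised" form: the whole loop runs inside NumPy's compiled code,
# where the C compiler (and the CPU's SIMD units) can process several
# elements at a time.
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)
c = a + 2.0 * b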

Expecting this to work for an interpreted Python program is somewhat unrealistic, IMHO. If you need data-parallel execution, use something like map-reduce or Copperhead instead of relying on the CPU to figure out what's happening inside a virtual machine.

http://fperez.org/py4science/ucb/talks/20091118_copperhead_bcatanzaro.pdf
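
For the map-reduce route, a minimal sketch with the stdlib multiprocessing module could look like this (square() is just a stand-in for a real CPU-bound function):

from multiprocessing import Pool

def square(x):
    # Stand-in for a real, CPU-bound computation on one data item.
    return x * x

if __name__ == "__main__":
    data = range(1000)
    pool = Pool()                      # one worker process per CPU by default
    mapped = pool.map(square, data)    # "map" step, runs in parallel
    result = sum(mapped)               # "reduce" step in the parent
    pool.close()
    pool.join()
    print(result)

The "map" step is spread over worker processes, so it also sidesteps the GIL; the "reduce" step here is just a sum in the parent process.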

Stefan

