Greetings Economists,
This article talks a little about multi-core processors and their
capabilities:
http://www.nytimes.com/2007/02/12/technology/12chip.html?ref=technology&pagewanted=print
An excerpt:
In leaping beyond the two- and four-core microprocessors that are being
manufactured by Intel and its chief PC industry competitor, Advanced
Micro Devices, Intel is following a design trend that is sweeping the
computing world.
...
Another excerpt:
For example, Cisco Systems now uses a chip called Metro with 192 cores
in its high-end network routers. Last November Nvidia introduced its
most powerful graphics processor, the GeForce 8800, which has 128
cores. The Intel demonstration suggests that the technology may come to
dominate mainstream computing in the future.
The shift toward systems with hundreds or even thousands of computing
cores is both an opportunity and a potential crisis, computer
scientists said, because no one has proved how to program such chips
for many applications.
...
During the briefing last week Mr. Rattner essentially endorsed the
Berkeley view, saying that the company believed that its Teraflop chip
was the best way to solve a set of computing problems he described as
“recognition, mining and synthesis,” computing techniques that use
artificial intelligence.
In addition to new kinds of computing applications, Mr. Rattner said
that the so-called network-on-chip Teraflop processor would be ideal
for the kind of heterogeneous computing that is increasingly common in
the corporate world.
Large data centers now routinely use a software technique called
virtualization to run many operating systems on a single processor in
order to gain computing efficiency. Having hundreds or thousands of
cores available would vastly increase the power of this style of
computing.
One of the most impressive technical achievements made by the Intel
researchers was the speed with which they are able to move data among
the separate processors on the chip, Mr. Patterson said.
The Teraflop chip, which consumes just 62 watts at teraflop speeds and
which is air-cooled, contains an internal data packet router in each
processor tile. It is able to move data among tiles in as little as
1.25 nanoseconds, making it possible to transfer 80 billion bytes a
second among the internal cores.
The chip also contains an interface capability that would make it
possible for Intel, in the future, to package a memory chip stacked
directly on top of the microprocessor. Such a design would make it
possible to move data back and forth between memory and processor many
times faster than today’s chips....
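The latency and bandwidth figures quoted above can be sanity-checked with a little arithmetic. This is a back-of-envelope sketch of mine, not from the article, and it assumes the 80 billion bytes a second is an aggregate rate across the on-chip network:

```python
# Back-of-envelope check of the figures quoted in the excerpt.
# Assumption (mine, not the article's): 80 billion bytes/second is
# an aggregate rate across the whole on-chip mesh.
hop_latency_s = 1.25e-9          # quoted tile-to-tile transfer time
aggregate_bytes_per_s = 80e9     # quoted transfer rate

# One transfer every 1.25 ns is 800 million transfer intervals per second.
transfers_per_second = 1 / hop_latency_s

# At 80 GB/s aggregate, each 1.25 ns interval moves 100 bytes in total
# across the mesh.
bytes_per_interval = aggregate_bytes_per_s * hop_latency_s

print(transfers_per_second)   # 800000000.0
print(bytes_per_interval)     # 100.0
```

So the two quoted numbers are mutually consistent: 100 bytes moving somewhere on the chip every 1.25 nanoseconds works out to exactly 80 billion bytes a second.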
Doyle:
The central problem with massively parallel designs since the sixties
has been 'writing' software applications for them. An algorithm is a
sequential process of knowledge production, and such processes can be
used to write for parallel designs; but consider the Guardian article
on reading minds. One can't look at low-resolution images of minds and
simply say that is what is being thought, or assume that our
text-based, sequential way of thinking can address massively parallel
problems of 'knowing'.
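The gap between a sequential algorithm and its parallel restatement can be made concrete with a toy sketch. None of this is from the article; the names are mine, and a thread pool is used here only so the example runs anywhere (real multi-core speedups in CPython would need separate processes):

```python
# Toy contrast between a sequential algorithm and a parallel
# restatement of the same computation. Illustrative only.
from concurrent.futures import ThreadPoolExecutor

def sequential_sum_of_squares(values):
    # The classic sequential algorithm: one step at a time, in order.
    total = 0
    for v in values:
        total += v * v
    return total

def parallel_sum_of_squares(values, workers=4):
    # This decomposes cleanly only because each v*v is independent of
    # the others. Most applications are not this easy to split up,
    # which is exactly the programming problem the article raises.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda v: v * v, values))

data = list(range(1000))
assert sequential_sum_of_squares(data) == parallel_sum_of_squares(data)
```

The easy case here is "embarrassingly parallel"; the hard, unsolved part the computer scientists in the article describe is doing this for applications whose steps genuinely depend on one another.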
The problem is the language-like issue in this claim made in the
article:
In the future, Mr. Rattner said, it will be possible to blend
synthesized and real-time video.
Doyle:
Which assumes we understand, via current language theory, how to
process and express the massive problems of expressing motion while
showing conventionally stable images, or how to use text somehow in
parallel with the image processing that multi-core systems make
available. In other words, once these systems are available to the
consuming masses, one could express oneself as if movies could be
grammatically expressed, thus supplanting historical speech itself for
the first time.
thanks,
Doyle
- [PEN-L] multi core processors Doyle Saylor