[fonc] Massively Parallel Computer Built From Single Layer of Molecules

2011-10-28 Thread Eugen Leitl

(also a crystal, only 2D, not 3D; yet)

https://www.technologyreview.com/blog/arxiv/27291/?p1=blogs

Massively Parallel Computer Built From Single Layer of Molecules

Japanese scientists have built a cellular automaton from individual molecules
that carries out huge numbers of calculations in parallel

kfc 10/27/2011


Modern computer chips handle data at the mind-blowing rate of some 10^13 bits
per second. Neurons, by comparison, fire at a rate of around 100 times per
second or so. And yet the brain outperforms the best computers in numerous
tasks.

One reason for this is the way computations take place. In computers,
calculations occur in strict pipelines, one at a time.

In the brain, however, many calculations take place at once. Each neuron
communicates with up to 1000 other neurons at any one time. And since the
brain consists of billions of neurons, the potential for parallel computation
is clearly huge.
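As a rough back-of-envelope check on that claim, the aggregate event rate of a
massively parallel "slow" system can still dwarf a fast serial one. The neuron
count below is a commonly cited round figure, not a number from the article:

    # Back-of-envelope comparison (illustrative round numbers).
    chip_bits_per_second = 1e13        # "some 10^13 bits per second" (from the article)

    neurons = 1e11                     # ~100 billion neurons (commonly cited estimate)
    connections_per_neuron = 1e3       # "up to 1000 other neurons" (from the article)
    firing_rate_hz = 1e2               # ~100 firings per second (from the article)

    # Crude upper bound on synaptic events per second if everything fired at once.
    synaptic_events_per_second = neurons * connections_per_neuron * firing_rate_hz

    print(f"chip:  {chip_bits_per_second:.0e} bit ops/s")
    print(f"brain: {synaptic_events_per_second:.0e} synaptic events/s (crude upper bound)")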

Computer scientists are well aware of this difference and have tried in many
ways to mimic the brain's massively parallel capabilities. But success has
been hard to come by.

Today, Anirban Bandyopadhyay and colleagues at the National Institute for
Materials Science in Tsukuba, Japan, unveil a promising new approach. At the heart of their
experiment is a ring-like molecule called
2,3-dichloro-5,6-dicyano-p-benzoquinone, or DDQ.

This has an unusual property: it can exist in four different conducting
states, depending on the location of trapped electrons around the ring.
What's more, it's possible to switch the molecule from one state to
another by zapping it with voltages of various strengths using the
tip of a scanning tunnelling microscope. It's even possible to bias the
states that can form by placing the molecule in an electric field.

Place two DDQ molecules next to each other and it's possible to make them
connect. In fact, a single DDQ molecule can connect with between 2 and 6
neighbours, depending on its conducting state and theirs. When one molecule
changes its state, the change in configuration ripples from one molecule to
the next, forming and reforming circuits as it travels.

Given all this, it's not hard to imagine how a layer of DDQ molecules can act
like a cellular automaton, with each molecule as a cell in the automaton.
Roughly speaking, the rules for flipping cells from one state to another are
set by the bias on the molecules and the starting state is programmed by the
scanning tunnelling microscope.
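To make the cellular-automaton picture concrete, here is a minimal Python sketch of
a grid of four-state cells whose next state depends on their neighbours. The
transition rule below is invented purely for illustration; the actual DDQ rules set by
the molecular bias are not reproduced here:

    import numpy as np

    # 4 conducting states per cell (0-3), on a small 2D grid of "molecules".
    rng = np.random.default_rng(0)
    grid = rng.integers(0, 4, size=(16, 16))

    def step(grid):
        """One synchronous update: every cell changes state based on its neighbours.
        The rule (sum of the four neighbours, mod 4) is a placeholder, not the
        DDQ rule from the paper."""
        up    = np.roll(grid, -1, axis=0)
        down  = np.roll(grid,  1, axis=0)
        left  = np.roll(grid, -1, axis=1)
        right = np.roll(grid,  1, axis=1)
        return (up + down + left + right) % 4

    for _ in range(10):
        grid = step(grid)

    print(grid)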

And that's exactly what these guys have done. They've laid down 300 DDQ
molecules on a gold substrate, setting them up as a cellular automaton. More
impressive still, they've then initialised the system so that it calculates
the way heat diffuses in a conducting medium and the way cancer spreads
through tissue.
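Heat diffusion is a natural fit for this kind of hardware because the discretised
heat equation is itself a local neighbour rule. A minimal sketch of that computation
on an ordinary grid (again Python, not the molecular layer itself) might look like:

    import numpy as np

    # Temperature field on a 2D grid, with a hot spot in the centre.
    T = np.zeros((32, 32))
    T[16, 16] = 100.0

    alpha = 0.2  # diffusion coefficient for the discrete update (<= 0.25 for stability)

    for _ in range(100):
        # Every cell moves toward the average of its four neighbours -- a purely
        # local rule, so all cells could in principle update in parallel.
        neighbours = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                      np.roll(T, 1, 1) + np.roll(T, -1, 1))
        T = T + alpha * (neighbours - 4 * T)

    print(T[14:19, 14:19].round(2))  # temperatures near the original hot spot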

And since the entire layer is involved in the calculation, this is a massively
parallel computation using a single layer of organic molecules.

Bandyopadhyay and co say the key feature of this type of calculation is the
fact that one DDQ molecule can link to many others, rather like neurons in
the brain. "Generalization of this principle would...open up a new vista of
emergent computing using an assembly of molecules," they say.

Clearly an intriguing prospect.

Ref: arxiv.org/abs/1110.5844: Massively Parallel Computing on an Organic
Molecular Layer



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] ARM CTO predicts chips the size of blood cells

2011-10-28 Thread Eugen Leitl

(cores, that is)

http://www.techworld.com.au/article/405599/arm_cto_predicts_chips_size_blood_cells

ARM CTO predicts chips the size of blood cells

The chip design company is on its way to making chips no bigger than a red
blood cell, its CTO says

James Niccolai (IDG News Service) 28 October, 2011 07:57

In less than a decade, that smartphone you're holding could have 32 times the
memory, 20 times the bandwidth and a microprocessor core no bigger than a red
blood cell, the CTO of chip design company ARM said on Thursday.
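For a sense of the growth rates those multipliers imply, here is a small
back-of-envelope calculation; the nine-year horizon is an assumption standing in for
"less than a decade":

    import math

    years = 9            # assumed horizon for "less than a decade"
    memory_factor = 32
    bandwidth_factor = 20

    # Implied doubling times if growth is steady over the period.
    memory_doubling = years / math.log2(memory_factor)        # = 9 / 5    = 1.8 years
    bandwidth_doubling = years / math.log2(bandwidth_factor)  # ~ 9 / 4.32 ~ 2.1 years

    print(f"memory doubles roughly every {memory_doubling:.1f} years")
    print(f"bandwidth doubles roughly every {bandwidth_doubling:.1f} years")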

ARM has already helped develop a prototype implantable device for monitoring
eye pressure in glaucoma patients that measures just 1 cubic millimeter, CTO
Mike Muller said at ARM's TechCon conference in Silicon Valley Thursday. The
device includes a microprocessor sandwiched between sensors at the top and a
battery at the bottom.

Strip away those extra components, rearrange the transistors into a cube and
apply the type of advanced manufacturing process expected in 2020, and you'd
end up with a device that occupies about the same volume as a blood cell,
Muller said.
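As a rough check on the sizes involved (the red-blood-cell volume used below is a
typical textbook figure of about 90 femtolitres, not something stated in the article):

    # Rough volume comparison -- illustrative only.
    prototype_mm3 = 1.0                  # "just 1 cubic millimeter" (from the article)
    prototype_um3 = prototype_mm3 * 1e9  # 1 mm^3 = 10^9 um^3

    red_blood_cell_um3 = 90.0            # ~90 fL = 90 um^3 (typical textbook value)

    shrink_volume = prototype_um3 / red_blood_cell_um3
    shrink_linear = shrink_volume ** (1 / 3)

    print(f"volume reduction needed: ~{shrink_volume:.1e}x")
    print(f"linear shrink per side:  ~{shrink_linear:.0f}x")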

ARM designs the processor cores used in most of today's smartphones and
tablets, and smaller cores are generally more energy efficient, he said. That
helps to extend battery life.

That's a good thing, because battery technology is advancing much more
slowly, and Muller expects only twice the improvement in battery performance
by the end of the decade.

That could be a gating factor for all the other improvements, so the
electrical systems inside portable devices will have to be redesigned so that
people don't have to recharge them multiple times a day.

For example, smartphones today contain basically a single compute system,
with one type of CPU and some memory attached. But the tasks performed by
smartphones, such as making a call or playing a 3D game, require very
different levels of performance.

So in the future, Muller said, some systems will have entire subsystems
within them, including their own CPU and their own memory, devoted to a
particular task such as music playback. That way, other subsystems in a
device can be shut down, conserving battery life.

It's a model ARM is already pursuing with its Big.Little architecture
announced last week. That design puts two types of processor core in the
same device, one powerful and one less so, and uses the most
power-appropriate core for the task at hand. The idea of entire subsystems
takes that a step further.
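A toy sketch of the scheduling idea follows; this illustrates the general approach
only, not ARM's actual big.LITTLE software, and the thresholds, power figures, and
task names are made up:

    # Toy model of picking the most power-appropriate core for a task.
    CORES = {
        "little": {"max_load": 0.3, "power_mw": 100},   # low-power core
        "big":    {"max_load": 1.0, "power_mw": 1000},  # high-performance core
    }

    def pick_core(task_load):
        """Run light tasks on the little core, heavy ones on the big core."""
        if task_load <= CORES["little"]["max_load"]:
            return "little"
        return "big"

    for task, load in [("music playback", 0.1), ("3D game", 0.9), ("phone call", 0.2)]:
        core = pick_core(load)
        print(f"{task:15s} -> {core} core (~{CORES[core]['power_mw']} mW)")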

The bandwidth gains in 2020 will come mostly from advances in topology,
according to Muller -- basically increasing the number of cellular base
stations. Spectrum, and the technologies used to send bits across that
spectrum, won't advance much, he predicted.

That's okay for people in cities, where it can make financial sense to
install more base stations. "If you're out in the middle of nowhere, I'm
sorry, there's not going to be much big change for you," Muller said.

He spoke at ARM's TechCon conference in Silicon Valley, where ARM also
announced its next microprocessor architecture, ARMv8, which will be its
first to support 64-bit computing.

James Niccolai covers data centers and general technology news for IDG News
Service. Follow James on Twitter at @jniccolai. James's e-mail address is
james_nicco...@idg.com


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three needs:
power (for portability), space/weight, and speed. The last two are now solvable in
the large, but the first is still stuck in the dark ages. I recollect a
joke by Dr An Wang (founder of Wang Labs) in a keynote during the 80s that goes
something like this:

A man struggled to lug two heavy suitcases into a bogie of a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow passenger:
"It does everything that a mainframe does and more, and it costs only $100."
"Amazing!" exclaimed the passenger as he held the marvel in his hands. "Where
can I get one?" "You can have this piece," said the gracious gent, "as a thank-you
gift for helping me." "Thank you very much." The passenger was thrilled
beyond words as he gingerly explored the new gadget. Soon, the train reached
the next station and the salesman stepped out. As the train departed, the
passenger yelled at him, "Hey! You forgot your suitcases!" "Not really!" the
gent shouted back. "Those are the batteries for your computer."

;-) .. Subbu


yeah...

this is probably a major issue at this point with hugely multi-core 
processors:

if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards, one gets a new/fancy 
nVidia card, which is then noted to have a few issues:

it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into 
their computer.


never mind that it gets high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that support for existing programming languages and
methodologies will continue to be necessary for new computing technologies.



also, people will likely continue pushing to gradually drive down the
memory requirements, but for the most part the power use of devices has
been largely dictated by what one can get from plugging a power cord
into the wall (vs. either running off batteries or, OTOH, requiring one
to plug in a 240V dryer/arc-welder/... style power cord).



elsewhere, I designed a hypothetical ISA, partly combining ideas from 
ARM and x86-64, with a few unique ways of representing instructions 
(the idea being that they are aligned values of 1/2/4/8 bytes, rather 
than either more free-form byte-patterns or fixed-width instruction-words).


or such...
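One possible way to read such an encoding (purely a guess at what an "aligned
1/2/4/8-byte instruction" scheme could look like, since the post doesn't spell it out):
use a couple of bits in the first byte as a length tag, and require each instruction
to start at an offset that is a multiple of its own size.

    # Hypothetical decoder for an ISA with aligned 1/2/4/8-byte instructions.
    # The length-tag convention below is invented for illustration; the post
    # does not specify the actual encoding.

    SIZES = {0b00: 1, 0b01: 2, 0b10: 4, 0b11: 8}

    def decode(code: bytes):
        """Yield (offset, size, raw_bytes) for each instruction in a code blob."""
        offset = 0
        while offset < len(code):
            size = SIZES[code[offset] >> 6]   # top two bits of first byte = length tag
            if offset % size != 0:
                raise ValueError(f"instruction at {offset} not aligned to its size {size}")
            yield offset, size, code[offset:offset + size]
            offset += size

    # Example blob: two 1-byte ops, a 2-byte op at offset 2, a 4-byte op at offset 4.
    blob = bytes([0x00, 0x00, 0x40, 0x01, 0x80, 0x01, 0x02, 0x03])
    for off, size, raw in decode(blob):
        print(off, size, raw.hex())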


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 2:27 PM, karl ramberg wrote:


This is also relevant regarding understanding how to make these computers work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute


seems interesting, but it is very much a pain trying to watch, as my
internet is slow and the player doesn't really seem to buffer the
video very far ahead when paused...



but, yeah, eval and reflection are features I really like, although
sadly there isn't much of anything like this standard in C, meaning one
has to put a lot of effort into building scripting and VM technology
primarily to make up for the lack of things like 'eval' and 'apply'.
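For readers who haven't used them, 'eval' and 'apply' in a dynamic language look
roughly like the following (Python here, standing in for BGBScript or any other
dynamic language; this just shows what C lacks out of the box, not BGB's actual VM):

    # What 'eval' and 'apply' give you in a dynamic language (Python as a stand-in).
    # C has no standard equivalent, which is why one ends up embedding a scripting VM.

    source = "x * x + 1"
    env = {"x": 7}
    print(eval(source, {}, env))           # 50 -- run code built as a string at runtime

    def apply(fn, args):
        """Classic 'apply': call a function chosen at runtime with an argument list."""
        return fn(*args)

    print(apply(max, [3, 1, 4, 1, 5]))     # 5
    print(apply(env.__contains__, ["x"]))  # True -- even methods picked at runtime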



this becomes at times a point of contention with many C++ developers,
who often believe that the greatness of C++ for everything more than
makes up for its lack of reflection or dynamic features. I hold that
plain C has a lot of merit, if anything because it is more readily
amenable to dynamic features (which can plug into the language from
outside), which more or less makes up for the lack of syntactic sugar
in many areas...


although, granted, in my case the language I eval is BGBScript and not
C, but in many cases they are similar enough that the difference can be
glossed over. I had considered, but never got around to, creating a
language I was calling C-Aux, which would have taken this further: it would
have been cosmetically similar to and mostly (85-95%?) source-compatible with C,
but far more dynamic (designed to more readily allow quickly
loading code from source, supporting eval, ...). essentially, in a
practical sense C-Aux would