A special report on managing information
Handling the cornucopia
Feb 25th 2010
From The Economist print edition


The best way to deal with all that information is to use machines. But they 
need watching

IN 2002 America’s Defence Advanced Research Projects Agency, best known for 
developing the internet four decades ago, embarked on a futuristic initiative 
called Augmented Cognition, or “AugCog”. Commander Dylan Schmorrow, a cognitive 
scientist with the navy, devised a crown of sensors to monitor activity in the 
brain such as blood flow and oxygen levels. The idea was that modern warfare 
requires soldiers to think like never before. They have to do things that 
involve large amounts of information, such as managing drones or overseeing a 
patrol from a remote location. The system is meant to help soldiers make sense of the flood of 
information streaming in. So if the sensors detect that the wearer’s spatial 
memory is becoming saturated, new information will be sent in a different form, 
say via an audio alert instead of text. In a trial in 2005 the device achieved 
a 100% improvement in recall and a 500% increase in working memory.

Is this everybody’s future? Probably not. But as the torrent of information 
increases, it is not surprising that people feel overwhelmed. “There is an 
immense risk of cognitive overload,” explains Carl Pabo, a molecular biologist 
who studies cognition. The mind can handle seven pieces of information in its 
short-term memory and can generally deal with only four concepts or 
relationships at once. If there is more information to process, or it is 
especially complex, people become confused.

Moreover, knowledge has become so specialised that it is impossible for any 
individual to grasp the whole picture. A true understanding of climate change, 
for instance, requires a knowledge of meteorology, chemistry, economics and 
law, among many other things. And whereas doctors a century ago were expected 
to keep up with the entire field of medicine, now they would need to be 
familiar with about 10,000 diseases, 3,000 drugs and more than 1,000 lab tests. 
A study in 2004 suggested that in epidemiology alone it would take 21 hours of 
work a day just to stay current. And as more people around the world become 
more educated, the flow of knowledge will increase even further. The number of 
peer-reviewed scientific papers in China alone has increased 14-fold since 1990 
(see chart 3).



[Chart 3: http://www.economist.com/images/20100227/201009SRC835.gif]



“What information consumes is rather obvious: it consumes the attention of its 
recipients,” wrote Herbert Simon, an economist, in 1971. “Hence a wealth of 
information creates a poverty of attention.” But just as it is machines that 
are generating most of the data deluge, so they can also be put to work to deal 
with it. That highlights the role of “information intermediaries”. People 
rarely deal with raw data but consume them in processed form, once they have 
been aggregated or winnowed by computers. Indeed, many of the technologies 
described in this report, from business analytics to recursive machine-learning 
to visualisation software, exist to make data more digestible for humans.

Some applications have already become so widespread that they are taken for 
granted. For example, banks use credit scores, based on data about past 
financial transactions, to judge an applicant’s ability to repay a loan. That 
makes the process less subjective than the say-so of a bank manager. Likewise, 
landing a plane requires a lot of mental effort, so the process has been 
largely automated, and both pilots and passengers feel safer. And in health 
care the trend is towards “evidence-based medicine”, where not only doctors but 
computers too get involved in diagnosis and treatment.
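
To make the idea of an information intermediary concrete, here is a minimal 
sketch in Python of how raw transaction records might be boiled down into a 
single score that a loan officer can read at a glance. It is purely 
illustrative: the record format, the weights and the 300-850 range are 
assumptions made for this example, not any bank's actual model.

# A toy "information intermediary": condense raw transaction records
# into one human-digestible number. All fields, weights and thresholds
# here are invented for illustration; real scoring models are
# proprietary and far more elaborate.

def credit_score(transactions):
    """Score a list of records such as
    {"amount": 120.0, "on_time": True, "kind": "loan_payment"}
    on the familiar 300-850 scale (an assumption for this sketch)."""
    if not transactions:
        return 300  # no history: start at the floor of the range

    payments = [t for t in transactions if t["kind"] == "loan_payment"]
    on_time_rate = (sum(1 for t in payments if t["on_time"]) / len(payments)
                    if payments else 0.0)
    history_depth = min(len(transactions) / 100.0, 1.0)  # cap the reward for sheer volume

    raw = 0.7 * on_time_rate + 0.3 * history_depth  # weighted blend of the two signals
    return round(300 + raw * 550)

sample = [
    {"amount": 250.0, "on_time": True, "kind": "loan_payment"},
    {"amount": 250.0, "on_time": False, "kind": "loan_payment"},
    {"amount": 40.0, "on_time": True, "kind": "purchase"},
]
print(credit_score(sample))  # one number in place of a pile of raw records

The point is the shape of the intermediary, not the arithmetic: many records 
go in, one digestible judgment comes out, and a human decides what to do with 
it.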

The dangers of complacency

In the age of big data, algorithms will be doing more of the thinking for 
people. But that carries risks. The technology is far less reliable than people 
realise. For every success with big data there are many failures. The inability 
of banks to understand their risks in the lead-up to the financial crisis is 
one example. The deficient system used to identify potential terrorists is 
another.

On Christmas Day last year a Nigerian man, Umar Farouk Abdulmutallab, tried to 
ignite a hidden bomb as his plane was landing in Detroit. It turned out his 
father had informed American officials that he posed a threat. His name was 
entered into a big database of around 550,000 people who potentially posed a 
security risk. But the database is notoriously flawed. It contains many 
duplicates, and names are regularly lost during back-ups. The officials had 
followed all the right procedures, but the system still did not prevent the 
suspect from boarding the plane.

One big worry is what happens if the technology stops working altogether. This 
is not a far-fetched idea. In January 2000 the torrent of data pouring into 
America’s National Security Agency (NSA) brought the system to a crashing halt. 
The agency was “brain-dead” for three-and-a-half days, General Michael Hayden, 
then its director, said publicly in 2002. “We were dark. Our ability to process 
information was gone.”

If an intelligence agency can be hit in this way, the chances are that most 
other users are at even greater risk. Part of the solution will be to pour more 
resources into improving the performance of existing technologies, not just 
pursue more innovations. The computer industry went through a similar period of 
reassessment in 2001-02 when Microsoft and others announced that they were 
concentrating on making their products much more secure rather than adding new 
features.

Another concern is energy consumption. Processing huge amounts of data takes a 
lot of power. “In two to three years we will saturate the electric cables 
running into the building,” says Alex Szalay at Johns Hopkins University. “The 
next challenge is how to do the same things as today, but with ten to 100 times 
less power.”

It is a worry that affects many organisations. The NSA in 2006 came close to 
exceeding its power supply, which would have blown out its electrical 
infrastructure. Both Google and Microsoft have had to put some of their huge 
data centres next to hydroelectric plants to ensure access to enough energy at 
a reasonable price.

Some people are even questioning whether the scramble for ever more information 
is a good idea. Nick Bostrom, a philosopher at Oxford University, identifies 
“information hazards” which result from disseminating information that is 
likely to cause harm, such as publishing the blueprint for a nuclear bomb or 
broadcasting news of a race riot that could provoke further violence. “It is 
said that a little knowledge is a dangerous thing,” he writes. “It is an open 
question whether more knowledge is safer.” Yet similar concerns have been 
raised through the ages, and mostly proved overblown.

Knowledge is power

The pursuit of information has been a human preoccupation since knowledge was 
first recorded. In the 3rd century BC Ptolemy stole every available scroll from 
passing travellers and ships to stock his great library in Alexandria. After 
September 11th 2001 the American Defence Department launched a programme called 
“Total Information Awareness” to compile as many data as possible about just 
about everything—e-mails, phone calls, web searches, shopping transactions, 
bank records, medical files, travel history and much more. Since 1996 Brewster 
Kahle, an internet entrepreneur, has been recording all the content on the web 
as a not-for-profit venture called the “Internet Archive”. It has since 
expanded to software, films, audio recordings and scanned books.

There has always been more information than people can mentally process. The 
chasm between the amount of information and man’s ability to deal with it may 
be widening, but that need not be a cause for alarm. “Our sensory and 
attentional systems are tuned via evolution and experience to be selective,” 
says Dennis Proffitt, a cognitive psychologist at the University of Virginia. 
People find patterns to compress information and make it manageable. Even 
Commander Schmorrow does not think that man will be replaced by robots. “The 
flexibility of the human to consider as-yet-unforeseen consequences during 
critical decision-making, go with the gut when problem-solving under 
uncertainty and other such abstract reasoning behaviours built up over years of 
experience will not be readily replaced by a computer algorithm,” he says.

The cornucopia of data now available is a resource, similar to other resources 
in the world and even to technology itself. On their own, resources and 
technologies are neither good nor bad; it depends on how they are used. In the 
age of big data, computers will be monitoring more things, making more 
decisions and even automatically improving their own processes—and man will be 
left with the same challenges he has always faced. As T.S. Eliot asked: “Where 
is the wisdom we have lost in knowledge? Where is the knowledge we have lost in 
information?”




Copyright © 2010 The Economist Newspaper and The Economist Group. All rights 
reserved.