The Economist (dated 28 July 2000) reports on a large
distributed-computing project called SETI@home (most of you of course
already know about it, and many are part of it!). Full text of the
article below.

   
   THE most powerful computer in the world is sitting neither in a
   secret military base, nor in a university laboratory, nor even in a
   garage in Silicon Valley. It is, in fact, nowhere in
   particular. Part of it may even be on your desk. The computer in
   question is a "distributed" device that consists of over 2m
   separate machines sprinkled around the Internet, all running a
   screen-saver called SETI@home. This piece of software downloads
   chunks of data from the Arecibo radio telescope in Puerto Rico and,
   when the machine it is installed on is not doing anything else,
   scrutinises them for evidence of signals from alien civilisations,
   sending the results back to a central clearing-house.
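
   A minimal Python sketch of that fetch-analyse-report cycle; every
   function here is an invented stand-in, not SETI@home's actual code:

      import random
      import time

      def fetch_chunk():
          # Stand-in for downloading a work unit from the clearing-house.
          return [random.random() for _ in range(1024)]

      def analyse(chunk):
          # Stand-in for the real search for candidate signals in the data.
          return max(chunk)

      def report(result):
          # Stand-in for sending the result back to the central server.
          print("reporting", result)

      for _ in range(3):   # the real screen-saver loops whenever the machine is idle
          report(analyse(fetch_chunk()))
          time.sleep(1)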
   
   So far, no aliens have been found. But in the 15 months since the
   project's launch, the machines running the SETI@home software have
   put in a total of 345,000 years' worth of computer time. These
   machines are collectively the equivalent of a computer operating at
   around ten million million calculations a second, about ten times
   faster than any conventional supercomputer.
   
   All of which has got a number of people thinking: why not harness
   the power of distributed computing for commercial gain? The idea
   would be to farm out large computing tasks to thousands of
   individual PCs. Vast computing power could thus be provided on
   demand, and the individual members of the collective paid for the
   use of their machines-which would probably have otherwise been
   sitting doing nothing.
   
   Many a mickle makes a muckle
   
   There are,
   inevitably, several problems to overcome before this
   something-for-nothing idea can actually be made to work. First,
   there is the question of getting the software to run on as wide a
   variety of computers as possible, so as to maximise the number of
   machines available. Second, there is the issue of security. Will
   people be prepared to farm out potentially sensitive work to an
   anonymous collective? And third, not every kind of large
   computational problem can be broken up into the sort of discrete
   chunks that can be processed by individual machines.
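
   The third point is the crucial one: a problem qualifies only if it
   splits into pieces that need no knowledge of one another, as in this
   hypothetical Python illustration:

      # Summing a long range splits cleanly: each slice could go to a
      # different PC, and no slice depends on any other.
      def partial_sum(lo, hi):
          return sum(range(lo, hi))

      chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
      total = sum(partial_sum(lo, hi) for lo, hi in chunks)
      assert total == sum(range(1_000_000))
      # By contrast, a simulation in which each step depends on the
      # previous step's output cannot be carved up this way.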
   
   Nevertheless, over the past few months several new firms, each with
   different solutions to these problems, have popped up to exploit
   what they believe will become a lucrative market. Steven
   Armentrout, chief executive of one such company-Parabon
   Computation, based in Fairfax, Virginia-points out that large
   organisations often need computing power in bursts, but that
   supercomputers are designed to provide sustained power. Using lots
   of small computers together could provide these power-bursts more
   cheaply, since a company would not need to leave an expensive piece
   of "big iron" lying around unused for much of its life.
   
   So far, Parabon has recruited over 3,000 users for its software. To
   deal with the problem of getting that software to work on different
   sorts of machines, it is written in Java, a programming language
   specifically designed to do just that. To respond to security
   concerns, all network traffic is encrypted, and each machine
   handles only a tiny fraction of any client's data. And Parabon aims
   to answer the question of which computing problems are appropriate
   for distributed solutions by focusing initially on financial
   modelling and on the search for genes in raw DNA sequences, both of
   which clearly are.
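
   Roughly the idea, in a hypothetical Python sketch (the cipher below
   is a toy stand-in; a real system would use an established one):

      def encrypt(fragment, key):
          # Toy XOR cipher, purely illustrative; not how Parabon encrypts traffic.
          return bytes(b ^ key for b in fragment)

      client_data = b"confidential financial-model inputs " * 8
      FRAGMENT = 16   # bytes per volunteer machine: each sees only a meaningless sliver

      fragments = [client_data[i:i + FRAGMENT]
                   for i in range(0, len(client_data), FRAGMENT)]
      dispatch = [encrypt(f, 0x5A) for f in fragments]   # encrypted before leaving the server
      print(len(dispatch), "tiny encrypted work units ready to farm out")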
   
   The firm is currently negotiating with potential clients in the
   fields of finance, pharmaceuticals and bio-informatics. Pay scales
   for participating computer owners will be determined once a client
   has been signed up. If they wish, owners will be able to donate
   their earnings to charity.
   
   Popular Power, based in San Francisco, started testing its
   distributed-computing software in April, and now has around 5,000
   users. This test version, again written in Java, is now being used
   on a non-profit basis to conduct research into the relative merits
   of different influenza-vaccination strategies. But the company's
   boss, Marc Hedlund, says it is close to signing up its first paying
   customers. He also hopes to do deals with other firms that already
   have large numbers of computer owners signed up-Internet service
   providers, for example, and online retailers. In return for running
   Popular Power's software, Mr Hedlund suggests, owners might get
   free Internet access, or discounts on online purchases, rather than
   actual cash.
   
   A similar strategy is being pursued by Distributed Science, based
   in Toronto. The firm has already accumulated what one executive
   calls a "mercenary army" of 40,000 users to handle its first paying
   job, and its software is now running an experimental simulation to
   evaluate the design of nuclear-waste containers. Chris Harrison, a
   co-founder, says Distributed Science has chosen not to use Java on
   performance grounds, but will release different versions of its
   software for different kinds of computer.
   
   So far, none of these firms has gone from the testing to the
   money-making stage. But there are several reasons to believe that
   distributed computing will be a viable idea.  Dr Armentrout points
   out that IBM's ASCI White, the fastest computer in the world, has a
   power equivalent to a mere 30,000 desktop machines. There are 100m
   computers connected to the Internet in America alone. As fixed
   connections (such as digital subscriber lines and cable modem
   links) become more popular, many of these machines will be online
   around the clock, even though they are doing nothing most of the
   time. Distributed computing would allow their wasted processor
   cycles to be put to good use.
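
   The arithmetic behind that claim, using the article's own figures:

      asci_white_equiv = 30_000     # desktops equivalent to ASCI White (per Dr Armentrout)
      us_online_pcs = 100_000_000   # computers on the Internet in America (per the article)
      print(us_online_pcs // asci_white_equiv)   # ~3,333 ASCI Whites of latent capacity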
   
   Mr Hedlund notes that, as well as exploiting unused processing
   power, distributed computing could also harness unused network
   capacity. Search engines, for example, find it difficult to
   maintain an up-to-date directory of the World Wide Web, because it
   takes them a month or so to "crawl" round every web page to see
   if it is still there, and whether its contents have changed. It
   would be more efficient to distribute the job of crawling to
   thousands of machines around the Internet, which would then tell
   the central search engine which pages had changed.
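
   A hypothetical sketch of what each participating machine would do:
   fetch its assigned pages, fingerprint them, and report only the ones
   that have changed since the last crawl:

      import hashlib

      last_seen = {}   # url -> content digest from the previous crawl

      def check(url, fetch):
          # 'fetch' stands in for an HTTP GET; a real crawler would also
          # honour robots.txt. On a first run every page counts as changed.
          digest = hashlib.sha256(fetch(url)).hexdigest()
          changed = last_seen.get(url) != digest
          last_seen[url] = digest
          return changed

      pages = {"http://example.org/a": b"same as last month",
               "http://example.org/b": b"freshly updated text"}
      for url in pages:
          if check(url, lambda u: pages[u]):
              print("tell the search engine:", url, "has changed")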
   
   The next logical step in distributed computing will be to enable
   members of a collective to communicate with each other directly,
   thus forming a more efficient "virtual machine" that would be able
   to perform far more complex calculations than is currently
   possible.  Large firms might also wish to make use of the
   technology over their internal networks to exploit the collective
   power of their desktop machines, perhaps to perform complex
   calculations overnight. That would get rid of many of the security
   problems associated with letting private data out into the wider
   world.
   
   There is, in other words, plenty of scope for innovation. The
   underlying principle is that many hands make light work. Proponents
   of distributed computing hope that many hands may make a profit,
   too.
   
