Mike Deering said:

MD> Their dates vary from 2016 to 2030 depending on whether they are
MD> using the 18 month figure or the 12 month figure. Moore's Law is
MD> currently at 9 months and falling.
Even if Moore's Law does hit the 9-month mark, CPU speed is not the only factor limiting how much processing can be done on today's computers. Bottlenecks in memory access speed, bus architecture, and hard drive speed are improving much more slowly than Moore's Law is increasing CPU speed. The average PC today does not boot up much faster than PCs of four or five years ago. This is one of the reasons the PC industry is in trouble: businesses are not rushing to upgrade their existing machines, since those machines run the necessary business applications almost as fast as the new and supposedly much faster generation of machines. While higher CPU speed will continue to benefit some applications, such as realistic games, OCR, and voice recognition, eventually the bottlenecks will have a much greater proportional negative impact, limiting the benefits of Moore's Law.

MD> When the $1000 desktop reaches sufficiency to run human level AGI it
MD> will be available. This is an economic certainty.

The supercomputers being made by combining thousands of individual computers may be able to get around the bottlenecks somewhat, if the problem they are working on can be effectively partitioned to make use of all of the processors with a minimum of memory per processor and a minimum of message passing throughout the network. Unfortunately, systems this large will never see the economies of scale that would bring the price down to the $1000 level, much less cover their air conditioning bill! I would also submit that human intelligence, coupled with the computer's speed, must be taken into account as part of the time equation for developing AGI. Clearly, while the computers will get faster, our human intelligence and software are not keeping pace with the increase in hardware speed.
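The disagreement over the doubling period in the quoted figures is easy to make concrete. A quick sketch of the compounding (my own illustration; the 10-year horizon is an assumed example, not a figure from this thread):

```python
# Illustrative sketch: how much raw capability multiplies over a fixed
# horizon under the 18-, 12-, and 9-month doubling periods mentioned above.
# The 10-year horizon is an arbitrary example chosen for the demo.

def growth_factor(years, doubling_months):
    """Factor by which capability multiplies after `years` of doubling
    every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

for months in (18, 12, 9):
    print(f"{months}-month doubling over 10 years: "
          f"{growth_factor(10, months):,.0f}x")
```

A shift from an 18-month to a 9-month doubling period changes the 10-year growth factor by roughly two orders of magnitude, which is why projected arrival dates swing so widely.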
While vendors rush to add more features to convince us to upgrade our software, the speed of software development is not a whole lot faster than it was 8 or 10 years ago, due to the increasing complexity of the development environment and infrastructure, and the need to maintain compatibility with legacy applications, operating systems, and databases.

The question then becomes: at what point will Intel and AMD stop pouring money into increasing CPU speed for the small number of users who can actually use that speed, and start pouring it into the bottleneck areas, which are proving much more difficult to improve? This year we saw Intel basically abandon the proposed InfiniBand bus architecture for economic reasons, even though the existing bus architecture is already maxed out on high-end Intel server boxes.

In the early days of PCs we used to see benchmarks published for every new machine that came out. Now the computer magazines, and even the companies that write the benchmark programs, are scared to death to publish a benchmark for fear of being sued. Software companies write contracts prohibiting their products from being benchmarked. If these benchmarks were more readily available, it would be even more apparent to businesses and users that a 3 GHz machine will not process a typical application twice as fast as a 1.5 GHz machine.

How many of you out there with AGI projects feel you are currently limited in your research by CPU speeds? I was myself, up until 2 years ago. If you do feel limited, what speeds are you currently running at, and how much more CPU (2x, 4x, 8x, ...) do you feel you could optimally utilize? And how much do you feel this would compress the actual project plan for your project? These numbers should give a better indication of whether, and by how much, current processing speeds are limiting the quest for AGI.
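The 3 GHz versus 1.5 GHz point can be stated in Amdahl's-law terms: doubling the clock only speeds up the CPU-bound fraction of a workload, while the memory-, bus-, and disk-bound remainder runs at the old speed. A minimal sketch (the 60% CPU-bound figure is an assumed example, not a measured number):

```python
# Amdahl's-law-style estimate of overall speedup when only the CPU-bound
# fraction of a workload benefits from a faster clock.

def overall_speedup(cpu_fraction, clock_speedup):
    """New runtime = (1 - cpu_fraction) + cpu_fraction / clock_speedup,
    relative to an old runtime of 1.0; return old_time / new_time."""
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / clock_speedup)

# Assumed example: a workload that is 60% CPU-bound. Doubling the clock
# (1.5 GHz -> 3 GHz) yields well under a 2x overall speedup.
print(f"{overall_speedup(0.6, 2.0):.2f}x")
```

Even at 60% CPU-bound, a doubled clock delivers only about a 1.4x overall speedup; the bottleneck-bound remainder dominates as the clock keeps climbing.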