[algogeeks] Re: Find all the subarrays in a given array with sum=k
http://stackoverflow.com/questions/14948258/given-an-input-array-find-all-subarrays-with-given-sum-k

On Sunday, 21 February 2016 20:48:42 UTC+11, Shubh Aggarwal wrote:
> Given an array of n elements (both +ve and -ve allowed), find all the
> subarrays with a given sum k. I know the solution using dynamic
> programming; it has to be done by recursion (as asked by the interviewer).
>
> For example:
>
> arr = 1 3 1 7 -4 -11 3 4 8
> k = 12
> answer = {1 3 1 7}, {4 8}, {1 3 1 7 -4 -11 3 4 8}
>
> You have to print {from index, to index}, so for the above example
> {0, 3}; {0, 8}; {7, 8} is the answer.

-- 
You received this message because you are subscribed to the Google Groups "Algorithm Geeks" group. To unsubscribe from this group and stop receiving emails from it, send an email to algogeeks+unsubscr...@googlegroups.com.
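One recursive solution matching the requested output format can be sketched as follows (an illustrative O(n^2) enumeration, not from the thread; it handles negative numbers, since prefix sums of a mixed-sign array are not monotone):

```python
def subarrays_with_sum(arr, k):
    """Return all (start, end) index pairs whose subarray sums to k.

    Recursive formulation: `extend` grows the current subarray one
    element at a time, recording a hit whenever the running sum is k;
    `restart` tries every possible start index in turn.
    """
    result = []

    def extend(start, end, running):
        # Invariant: running == sum(arr[start:end]).
        if running == k and end > start:
            result.append((start, end - 1))
        if end == len(arr):
            return
        extend(start, end + 1, running + arr[end])

    def restart(start):
        if start == len(arr):
            return
        extend(start, start, 0)
        restart(start + 1)

    restart(0)
    return result
```

On the example from the post, `subarrays_with_sum([1, 3, 1, 7, -4, -11, 3, 4, 8], 12)` yields `[(0, 3), (0, 8), (7, 8)]`, matching the expected answer.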
[algogeeks] Re: GOOGLE Q1
On Friday, 8 July 2011 04:13:38 UTC+10, Piyush Sinha wrote:
> Given an array of integers A, give an algorithm to find the longest
> arithmetic progression in it, i.e. find a sequence i1 < i2 < … < ik such
> that A[i1], A[i2], …, A[ik] forms an arithmetic progression, and k is the
> largest possible. The sequence S1, S2, …, Sk is called an arithmetic
> progression if Sj+1 – Sj is a constant.

Click on the following links or copy and paste them into your browser. Many interesting possibilities.

https://www.google.com.au/search?client=ubuntuchannel=fsq=Given+an+array+of+integers+A%2C+give+an+algorithm+to+find+the+longest+Arithmetic+progression+in+it%2C+i.e+find+a+sequence+i1+%3C+i2+%3C+%E2%80%A6+%3C+ik%2C+such+that+A[i1ie=utf-8oe=utf-8redir_esc=ei=GpQRUZXnJKaeiAen7oDYAw
http://scholar.google.com/scholar?hl=enq=Given+an+array+of+integers+A%2C+give+an+algorithm+to+find+the+longest+Arithmetic+progression+in+it%2C+i.e+find+a+sequence+i1+%3C+i2+%3C+%E2%80%A6+%3C+ik%2C+such+that++A[i1]%2C+A[i2]%2C+%E2%80%A6%2C+A[ik]+forms+an+arithmetic+progression%2C+and+k+is+the+largest+possible.btnG=as_sdt=1%2C5as_sdtp=

-- 
*Piyush Sinha*
*IIIT, Allahabad*
*+91-8792136657*
*+91-7483122727*
*https://www.facebook.com/profile.php?id=10655377926*
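One standard approach to the question (not taken from the thread) is an O(n^2) dynamic program over pairs of indices; a hedged sketch:

```python
def longest_arithmetic_progression(A):
    """Length of the longest subsequence A[i1], A[i2], ..., A[ik]
    (i1 < i2 < ... < ik) that forms an arithmetic progression.

    DP: best[j][d] = length of the longest AP ending at index j whose
    common difference is d. O(n^2) time and space.
    """
    if len(A) <= 2:
        return len(A)          # any two elements form an AP
    best = [dict() for _ in A]
    longest = 2
    for j in range(1, len(A)):
        for i in range(j):
            d = A[j] - A[i]
            # Extend an AP ending at i with difference d, or start a
            # new length-2 AP (the pair A[i], A[j]).
            best[j][d] = best[i].get(d, 1) + 1
            longest = max(longest, best[j][d])
    return longest
```

For example, `longest_arithmetic_progression([1, 7, 10, 13, 14, 19])` returns 4, for the subsequence 1, 7, 13, 19 with common difference 6.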
[algogeeks] Find solutions for Clay Mathematics Institute Millennium Prize Problems by replacing the food particles as prey with the solutions as prey in HTML5 Javascript Neural Networks example.
Using neural networks, with or without genetic algorithms, to find solutions to the Clay Mathematics Institute Millennium Prize Problems by replacing the food particles as prey with the solutions as prey. Using the fish-finding-food HTML5 and Javascript neural networks example at http://www.nixuz.com:8080/html5/fish.html as an example or template, replace the food particles the fish are hunting for as prey with solutions to the Clay Mathematics Institute Millennium Prize Problems as the prey. There is a million dollar prize for each problem solved: http://www.claymath.org/millennium/.
[algogeeks] Re: Find solutions for Clay Mathematics Institute Millennium Prize Problems by replacing the food particles as prey with the solutions as prey in HTML5 Javascript Neural Networks example.
Use the View Source option in your web browser to view the code at http://www.nixuz.com:8080/html5/fish.html. Modify the code by replacing the food particle code with the Millennium or other problem code, save, and run from your browser as a local file, or upload to a host server / cloud as a web page.

On Jun 27, 4:30 am, Ian Martin Ajzenszmidt iajzens...@gmail.com wrote:
> Find solutions for Clay Mathematics Institute Millennium Prize Problems by replacing the food particles as prey with the solutions as prey in HTML5 Javascript Neural Networks example. [...]
[algogeeks] Use of Computational Intelligence / Artificial Intelligence/ Artificial Life To Solve Millennium Prize Mathematical Problems.
Use of Computational Intelligence / Artificial Intelligence / Artificial Life to solve the Millennium Prize mathematical problems. If Computational Intelligence / Artificial Intelligence / Artificial Life could be used to solve the Millennium Prize mathematical problems, please send me feedback at ian.ajzenszm...@alumni.unimelb.edu.au. Success in this endeavor would be a great public relations and prestige coup. http://www.claymath.org/millennium/ is the source of the following:

In order to celebrate mathematics in the new millennium, The Clay Mathematics Institute of Cambridge, Massachusetts (CMI) has named seven Prize Problems. The Scientific Advisory Board of CMI selected these problems, focusing on important classic questions that have resisted solution over the years. The Board of Directors of CMI designated a $7 million prize fund for the solution to these problems, with $1 million allocated to each. During the Millennium Meeting held on May 24, 2000 at the Collège de France, Timothy Gowers presented a lecture entitled The Importance of Mathematics, aimed at the general public, while John Tate and Michael Atiyah spoke on the problems. The CMI invited specialists to formulate each problem. One hundred years earlier, on August 8, 1900, David Hilbert delivered his famous lecture about open mathematical problems at the second International Congress of Mathematicians in Paris. This influenced our decision to announce the millennium problems as the central theme of a Paris meeting. The rules for the award of the prize have the endorsement of the CMI Scientific Advisory Board and the approval of the Directors. The members of these boards have the responsibility to preserve the nature, the integrity, and the spirit of this prize. September 15, 2009. http://www.claymath.org/millennium/
[algogeeks] Combine deep analytics technology of IBM's Watson jeopardy winning supercomputer with evolutionary computing to add invention and innovation to analysis.
Combine the deep analytics technology of IBM's Watson, the Jeopardy!-winning supercomputer, with evolutionary computing, including genetic programming, to add invention and innovation to analysis. Use this on all STEM disciplines, i.e. Science, Technology, Engineering, and Mathematics, including Electronics and Computer Science, as well as Healthcare and Medicine.
[algogeeks] Bootstrap learning for object discovery (2004) Joseph Modayil and Benjamin Kuipers
ftp://ftp.cs.utexas.edu/pub/qsim/papers/Modayil-iros-04-obj.pdf is the source of the following. We show how a robot can autonomously learn an ontology of objects to explain many aspects of its sensor input from an unknown dynamic world. Unsupervised learning about objects is an important conceptual step in developmental learning, whereby the agent clusters observations across space and time to learn stable perceptual representations of objects. Our proposed unsupervised learning method uses the properties of occupancy grids to classify individual sensor readings as static or dynamic. Dynamic readings are clustered and the clusters are tracked over time to identify objects, separating them both from the background of the environment and from the noise of unexplainable sensor readings. Once trackable clusters of sensor readings (i.e., objects) have been identified, we build shape models where shape is a stable and consistent property of these objects. However, the representation can tolerate, represent, and track amorphous objects as well as those that have a well-defined shape. In the end, the learned ontology makes it possible for the robot to describe a cluttered dynamic world with symbolic object descriptions along with a static environment model, both models grounded in sensory experience and learned without external supervision.
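The static/dynamic classification step can be illustrated with a toy sketch (this is not the paper's implementation; the set-of-cells scan representation and the occupancy threshold are assumptions made for illustration): a grid cell occupied in nearly every scan is treated as static background, while readings in rarely occupied cells are labeled dynamic.

```python
from collections import defaultdict

def classify_readings(scans, threshold=0.9):
    """Toy static/dynamic labeling from per-cell occupancy statistics.

    scans: list of scans, each scan a set of occupied grid cells
           (e.g. (row, col) tuples).
    A cell occupied in at least `threshold` of all scans is 'static';
    readings falling in other cells are 'dynamic'.
    """
    hits = defaultdict(int)
    for scan in scans:
        for cell in scan:
            hits[cell] += 1
    n = len(scans)
    # Label every reading of every scan.
    return [{cell: "static" if hits[cell] / n >= threshold else "dynamic"
             for cell in scan}
            for scan in scans]
```

In the paper, the dynamic readings would then be clustered and the clusters tracked over time to hypothesize objects; this sketch covers only the first filtering step.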
[algogeeks] Bootstrap learning of foundational representations Authors: Benjamin J. Kuipers; Patrick Beeson; Joseph Modayil; Jefferson Provost
http://www.informaworld.com/smpp/content~db=all?content=10.1080/09540090600768484 is the source of the following.

Bootstrap learning of foundational representations
Authors: Benjamin J. Kuipers; Patrick Beeson; Joseph Modayil; Jefferson Provost

Abstract: To be autonomous, intelligent robots must learn the foundations of commonsense knowledge from their own sensorimotor experience in the world. We describe four recent research results that contribute to a theory of how a robot learning agent can bootstrap from the 'blooming buzzing confusion' of the pixel level to a higher level ontology including distinctive states, places, objects, and actions. This is not a single learning problem, but a lattice of related learning tasks, each providing prerequisites for tasks to come later. Starting with completely uninterpreted sense and motor vectors, as well as an unknown environment, we show how a learning agent can separate the sense vector into modalities, learn the structure of individual modalities, learn natural primitives for the motor system, identify reliable relations between primitive actions and created sensory features, and define useful control laws for homing and path-following. Building on this framework, we show how an agent can use self-organizing maps to identify useful sensory features in the environment, and can learn effective hill-climbing control laws to define distinctive states in terms of those features, and trajectory-following control laws to move from one distinctive state to another. Moving on to place recognition, we show how an agent can combine unsupervised learning, map-learning, and supervised learning to achieve high-performance recognition of places from rich sensory input.
Finally, we take the first steps toward learning an ontology of objects, showing that a bootstrap learning robot can learn to individuate objects through motion, separating them from the static environment and from each other, and can learn properties useful for classification. These are four key steps in a larger research enterprise on the foundations of human and robot commonsense knowledge.

Keywords: Bootstrap learning; Ontology learning; Spatial learning; Learning places; Objects; Actions
[algogeeks] Automated Algorithm Configuration and Selection: Enabling Technologies for Building Better Algorithms
http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/
http://www.csail.mit.edu/events/eventcalendar/calendar.php?show=eventid=2784

Automated Algorithm Configuration and Selection: Enabling Technologies for Building Better Algorithms
Speaker: Frank Hutter, University of British Columbia, Vancouver
Date: Monday, December 6 2010
Time: 5:00PM to 6:00PM
Refreshments: 4:45PM
Location: 32-D463 - Star Conference Room
Host: Vijay Ganesh, MIT-CSAIL
Contact: Mary McDavitt, 617-253-9620, mmcda...@csail.mit.edu

Abstract: Algorithms for solving difficult computational problems play a key role in many applications, including scheduling, time-tabling, resource allocation, production planning and optimization, computer-aided design, and software verification. In many cases, provably efficient algorithms are unlikely to exist, and heuristic methods are the key to solving these problems effectively. However, the design of effective heuristic algorithms is a difficult task that requires substantial expertise and time. Traditionally, it involves an iterative, manual process, in which the designer gradually introduces or modifies components or mechanisms whose empirical performance is then tested on one or more sets of benchmark problems. In this talk, we describe our research on fully formalized methods that automate the most tedious and time-consuming part of this algorithm design process. In particular, we discuss two automated algorithm configuration frameworks, which aim at identifying the combination of algorithm components with the best empirical performance for a given application domain. We also describe our work on algorithm selection, which aims at selecting the best algorithm on a per-instance basis, as well as a recent extension for selecting an algorithm's best components on a per-instance basis. We illustrate the power of these fully automated methods on examples from SAT-based verification and mixed integer programming.
Without the need for domain knowledge or human time, in several cases they sped up hand-crafted high-performance algorithms by orders of magnitude, thereby substantially advancing the state of the art in solving a broad range of problems. Based on these results, we believe that automated methods such as the ones we present will play an increasingly crucial role in the design of high-performance algorithms and will be widely used in academia and industry. Based on joint work with Holger Hoos, Kevin Leyton-Brown, and many others.

Bio: Frank Hutter is a Postdoctoral Research Fellow at the Computer Science Department of the University of British Columbia in Vancouver, Canada, where he works with Profs. Holger Hoos and Kevin Leyton-Brown. His research concentrates on the use of machine learning and optimization to improve algorithms for solving NP-hard problems.
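The core idea of automated algorithm configuration can be illustrated with a minimal random-search configurator (a toy stand-in for the frameworks discussed in the talk, not their actual method; the function names and parameters here are invented for illustration): sample parameter configurations, evaluate each on a set of training instances, and keep the best.

```python
import random

def configure(run_algorithm, param_space, instances, budget=100, seed=0):
    """Minimal random-search algorithm configurator.

    run_algorithm(config, instance) -> cost (e.g., runtime in seconds)
    param_space: dict mapping parameter name -> list of candidate values
    Returns the sampled configuration with the lowest mean cost over
    the training instances, and that mean cost.
    """
    rng = random.Random(seed)
    best_config, best_cost = None, float("inf")
    for _ in range(budget):
        # Sample one configuration uniformly from the parameter space.
        config = {name: rng.choice(values) for name, values in param_space.items()}
        cost = sum(run_algorithm(config, inst) for inst in instances) / len(instances)
        if cost < best_cost:
            best_config, best_cost = config, cost
    return best_config, best_cost
```

Real configurators such as the ones in the talk are far more sophisticated (adaptive search, racing, capping of slow runs), but the objective, minimizing mean cost of a parameterized solver over an instance distribution, is the same.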
[algogeeks] Automated Theory Formation in Pure Mathematics by Simon Colton.
http://www.springer.com/computer/ai/book/978-1-85233-609-7 In recent years, Artificial Intelligence researchers have largely focused their efforts on solving specific problems, with less emphasis on 'the big picture' - automating large-scale tasks which require human-level intelligence to undertake. The subject of this book, automated theory formation in mathematics, is such a large-scale task. Automated theory formation requires the invention of new concepts, the calculation of examples, the making of conjectures, and the proving of theorems. This book, representing four years of PhD work by Dr. Simon Colton, demonstrates how theory formation can be automated. Building on over 20 years of research into constructing an automated mathematician carried out in Professor Alan Bundy's mathematical reasoning group in Edinburgh, Dr. Colton has implemented the HR system as a solution to the problem of forming theories by computer. HR uses various pieces of mathematical software, including automated theorem provers, model generators, and databases, to build a theory from the bare minimum of information - the axioms of a domain. The main application of this work has been mathematical discovery, and HR has had many successes. In particular, it has invented 20 new types of number of sufficient interest to be accepted into the Encyclopaedia of Integer Sequences, a repository of over 60,000 sequences contributed by many (human) mathematicians.
Content Level: Research
Keywords: Artificial Intelligence - Automated Theory - Computational Creativity - Machine Learning - Pure Mathematics
Related subjects: Artificial Intelligence - Theoretical Computer Science

Table of Contents: Introduction.- Literature Survey.- Mathematical Theories.- Design Considerations.- Background Knowledge.- Inventing Concepts.- Making Conjectures.- Settling Conjectures.- Assessing Concepts.- Assessing Conjectures.- An Evaluation of HR's Theories.- The Application of HR to Discovery Tasks.- Related Work.- Further Work.- Conclusions.- Appendix A: User Manual for HR 1.11.- Appendix B: Example Sessions.- Appendix C: Number Theory Results.
[algogeeks] Genetic and Evolutionary Computation Series by Springer
http://www.springer.com/series/7373?changeHeader is the source of the following. Researchers and practitioners alike are increasingly turning to search, optimization, and machine-learning procedures based on natural selection and genetics to solve problems across the spectrum of human endeavor. These genetic algorithms and techniques of genetic programming and other forms of evolutionary computation are solving problems and inventing new hardware and software that rival human designs. The Genetic and Evolutionary Computation Book Series publishes research monographs, edited collections, and graduate-level texts on this rapidly growing field. Areas of coverage include applications, theoretical foundations, technique extensions and implementation issues of all areas of genetic and evolutionary computation including genetic algorithms (GAs), genetic programming (GP), evolution strategies (ESs), evolutionary programming (EP), learning classifier systems (LCSs) and other variants of genetic and evolutionary computation (GEC).
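The selection, crossover, and mutation loop common to the genetic algorithms (GAs) the series covers can be sketched as follows (an illustrative toy, not taken from the series):

```python
import random

def genetic_algorithm(fitness, length, pop_size=40, generations=60,
                      mutation_rate=0.02, seed=0):
    """Minimal generational GA over bit strings: 2-way tournament
    selection, one-point crossover, and per-bit mutation.
    `fitness` maps a list of 0/1 bits to a number to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with probability mutation_rate.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt                                   # full generational replacement
    return max(pop, key=fitness)
```

For example, on the OneMax problem (maximize the number of 1 bits, `fitness=sum`) this loop converges toward the all-ones string within a few dozen generations.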
[algogeeks] Automatic programming (also called program synthesis or program induction)—that is, getting computers to solve problems without explicitly programming them.
http://www.genetic-programming.com/johnkoza.htm

Scientific Research Interests—John R. Koza

Our main research interest is automatic programming (also called program synthesis or program induction)—that is, getting computers to solve problems without explicitly programming them. This goal can be accomplished using the technique of genetic programming (of which I am considered the inventor). Genetic programming is an automated method for creating a working computer program from a high-level statement of a problem. Genetic programming performs automatic program synthesis using Darwinian natural selection and biologically inspired operations such as recombination, mutation, inversion, gene duplication, and gene deletion. An old Chinese saying says an animated gif is worth one megaword, so click here for a short tutorial of What is GP? For information about the rapidly growing field of genetic programming, visit www.genetic-programming.org and www.genetic-programming.com. While proof-of-principle (toy) problems are occasionally useful for tutorial or introductory purposes, we believe that it is time for the fields of artificial intelligence and machine learning to start delivering non-trivial results that satisfy the test of being competitive with human performance. Accordingly, our criterion for undertaking new research is that, if the anticipated outcome of the research effort is achieved, it can be argued (on some reasonable basis) that the result created by genetic programming is competitive with human-produced results. Competitiveness with human performance can be established in a variety of ways. For example, genetic programming may produce a result that is slightly better, equal, or slightly worse than that produced by a succession of human researchers working on a well-defined problem over a period of years. Or, genetic programming may produce a result that is equivalent to an invention that was patented in the past or that is patentable today as a new invention.
Or, genetic programming may produce a result that is publishable in its own right (i.e., independent of the fact that the result was mechanically generated). Or, genetic programming may produce a result that wins or ranks highly in a judged competition involving human contestants. There are examples using genetic programming in all four categories, and we have produced at least one example in three of the four categories. Fourteen are described in detail in the Genetic Programming III: Darwinian Invention and Problem Solving book and the Human-Competitive Machine Intelligence videotape. For additional discussion, see human-competitive machine intelligence. Specifically, our recent research work involving genetic programming currently emphasizes automated synthesis of analog electrical circuits, automated synthesis of controllers, automated synthesis (reverse engineering) of metabolic pathways (networks of chemical reactions), automated synthesis of antennas, automated synthesis of genetic networks, problems in computational molecular biology, various other problems involving cellular automata, multi-agent systems, mathematical algorithms, and other areas of design, and using genetic programming as an automated invention machine (for creating new and useful patentable inventions). There are now a number of instances where genetic programming has automatically produced a computer program that is competitive with human performance. (See our criteria for human-competitive results and a list of human-competitive results by clicking on human-competitive machine intelligence.) The fact that genetic programming can evolve entities that are competitive with human-produced results suggests that genetic programming may possibly be used as an invention machine to create new and useful patentable inventions.
In this connection, evolutionary methods, such as genetic programming, have the advantage of not being encumbered by preconceptions that limit human problem-solving to well-traveled paths. In late July 1999, Genetic Programming Inc. started operating a new 1,000-node Beowulf-style parallel cluster computer consisting of 1,000 Pentium II 350 MHz processors and a host computer. Genetic Programming Inc. has also operated (starting in early 1999) a 70-node Beowulf-style parallel cluster computer consisting of 533 MHz DEC Alpha microprocessors and a host computer. The new 1,000-Pentium system is called the Tera-COTS computer (since it has a capacity of about a teraflop and is a Beowulf-style custom computer made of commodity off-the-shelf [COTS] parts). Click here for a technical discussion of parallel genetic programming and building the 1,000-Pentium Beowulf-style parallel cluster computer. All of the above-mentioned 21 human-competitive results were obtained using computers that were substantially smaller than the new 1,000-Pentium computer mentioned above. Fifteen of these 21 human-competitive results were obtained on a 1995-vintage parallel
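The idea of evolving programs can be sketched in miniature (a toy illustration only, not Koza's system): evolve arithmetic expression trees toward a target function using random subtree mutation and truncation selection (crossover, the workhorse of real GP, is omitted here for brevity; the 30%/20% probabilities and the constant range are arbitrary choices).

```python
import operator
import random

# Expression trees: "x", an integer constant, or (op, left, right).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(rng, depth=3):
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.5 else rng.randint(-2, 2)
    op = rng.choice(sorted(OPS))
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree, target, xs):
    # Fitness: sum of squared errors against the target on sample points.
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree, rng):
    # With small probability, replace this subtree with a fresh one;
    # otherwise recurse into the children.
    if rng.random() < 0.2:
        return random_tree(rng, 2)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left, rng), mutate(right, rng))
    return tree

def evolve(target, pop_size=200, generations=40, seed=1):
    rng = random.Random(seed)
    xs = [i / 2 for i in range(-4, 5)]
    pop = [random_tree(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: error(t, target, xs))
        survivors = pop[: pop_size // 4]          # truncation selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: error(t, target, xs))
```

Running `evolve(lambda x: x * x + x + 1)` typically discovers a tree equivalent to x*x + x + 1 on the sample points; real GP systems add crossover, much larger populations, and richer primitive sets.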