Re: [singularity] Vista/AGI

2008-03-16 Thread Panu Horsmalahti
Just because it takes thousands of programmers to create something as complex as Vista does *not* mean that thousands of programmers are required to build an AGI, since one property of an AGI is (or can be) that it learns most of its complexity using the algorithms programmed into it.

Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-24 Thread Panu Horsmalahti
If we assume a 2x2x2 block of space floating somewhere, and assign each element of the grid the value 1 if a single atom happens to be inside the subspace it defines, and 0 if not, how many ways would there be to read this grid to create (2*2*2) = 8 bits? The answer is 8! = 40,320. Let's then assume
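A minimal sketch of the counting step in that preview, with an occupancy pattern chosen only for illustration: it enumerates every order in which the eight cells of a 2x2x2 grid can be read and confirms that there are 8! = 40,320 such orderings.

    import math
    from itertools import permutations, product

    # The 2x2x2 grid: 1 if an atom lies inside that cell's subspace, 0 if not.
    # The occupancy values below are arbitrary, purely for illustration.
    coords = list(product(range(2), repeat=3))           # the 8 cell coordinates
    cells = dict(zip(coords, [1, 0, 0, 1, 0, 1, 1, 0]))

    # Each distinct read order of the 8 cells yields one 8-bit string.
    read_orders = list(permutations(coords))
    print(len(read_orders), math.factorial(8))           # 40320 40320

    # Example: the bit string produced by one particular read order.
    print("".join(str(cells[c]) for c in read_orders[0]))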

Re: [singularity] Has the Turing test been beaten?

2007-12-09 Thread Panu Horsmalahti
2007/12/9, Stefan Pernar <[EMAIL PROTECTED]>:
> Ironic yet thought provoking:
> http://www.roughtype.com/archives/2007/12/slutbot_passes.php

This is not the Turing Test, since the subjects are not aware that they're talking with a bot. This type of test

Re: [singularity] Towards the Singularity

2007-09-10 Thread Panu Horsmalahti
2007/9/10, Matt Mahoney <[EMAIL PROTECTED]>:
> - Human belief in consciousness and subjective experience is universal and
> accepted without question.

It isn't.

> Any belief programmed into the brain through natural selection must be true
> in any logical system that the human mind can comp

Re: [singularity] Interesting Read

2007-08-29 Thread Panu Horsmalahti
It sounds similar to Klik & Play/The Games Factory/Multimedia Fusion.

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-12 Thread Panu Horsmalahti
It is my understanding that the basic problem in Friendly AI is that it is possible for the AI to interpret a command such as "help humanity" wrongly, and then destroy humanity (which is what we don't want it to do). The whole problem is to find some way to make it more probable that it will not destroy us all. It is co

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-21 Thread Panu Horsmalahti
An AGI is not selected at random from all possible "minds"; it is designed by humans, so you can't apply a probability estimate based on the assumption that most possible AIs are unfriendly. There are many elements in the design of an AGI that most researchers are likely to choose. I think it is safe to say t

Re: [singularity] Bootstrapping AI

2007-06-04 Thread Panu Horsmalahti
2007/6/4, Matt Mahoney <[EMAIL PROTECTED]>:
> If you are looking for a computer simulation of a human mind, you will be
> disappointed, because there is no economic incentive to build such a thing.
> -- Matt Mahoney, [EMAIL PROTECTED]

IBM Blue Brain project or CCortex?

Re: [singularity] Bootstrapping AI

2007-06-04 Thread Panu Horsmalahti
In 1959, man had never been in space. In 1969, Apollo 11 landed on the Moon. In my opinion you hold the so-called linear view of technological progress, instead of the actual exponential progression that has been observed not only in computers but in many other fields (as Kurzweil has pointed out). It is t
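A toy illustration of the difference between the two views; the starting value, the yearly increment, and the doubling time below are assumptions chosen only to show the shape of the curves, not measured figures.

    # Linear vs. exponential extrapolation of some capability metric.
    # All numbers here are illustrative assumptions, not data.
    start = 1.0
    linear_step = 1.0      # "linear view": +1 unit per year
    doubling_time = 2.0    # "exponential view": doubles every 2 years

    for year in range(0, 21, 5):
        linear = start + linear_step * year
        exponential = start * 2 ** (year / doubling_time)
        print(f"year {year:2d}: linear {linear:6.1f}   exponential {exponential:9.1f}")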