Re: [singularity] Vista/AGI

2008-04-14 Thread Charles D Hixson
MI wrote: ... Being able to abstract and then implement only those components and mechanisms relevant to intelligence from all the data these better brain scans provide? If intelligence can be abstracted into "layers" (analogous to network layers), establishing a set of performance indicators at

Re: [singularity] Mindless Thought Experiments

2008-02-29 Thread Charles D Hixson
John K Clark wrote: ... I've been asking people with ideas like yours this question for decades but I've never received a straight answer, not one: If intelligence is not always linked to consciousness then why on Earth did evolution produce it? ... John K Clark [EMAIL PROTECTED] Conscio

Re: [singularity] Definitions

2008-02-22 Thread Charles D Hixson
John K Clark wrote: "Charles D Hixson" <[EMAIL PROTECTED]> Consciousness is the entity evaluating And you evaluate something when you have conscious understanding of it. No. The process of evaluation *is* the consciousness. Consciousness is a process, not a thing. a p

Re: [singularity] Definitions

2008-02-19 Thread Charles D Hixson
John K Clark wrote: "Matt Mahoney" <[EMAIL PROTECTED]> It seems to me the problem is defining consciousness, not testing for it. And it seems to me that beliefs of this sort are exactly the reason philosophy is in such a muddle. A definition of consciousness is not needed, in fact unless you

Re: [singularity] World as Simulation

2008-01-12 Thread Charles D Hixson
Matt Mahoney wrote: --- "John G. Rose" <[EMAIL PROTECTED]> wrote: In a sim world there are many variables that can overcome other motivators so a change in the rate of gene proliferation would be difficult to predict. The agents that correctly believe that it is a simulation could say OK this

Re: [singularity] Requested: objections to SIAI, AGI, the Singularity and Friendliness

2007-12-29 Thread Charles D Hixson
Kaj Sotala wrote: For the past week, Tom McCabe and I have been collecting all sorts of objections that have been raised against the concepts of AGI, the Singularity, Friendliness, and anything else relating to SIAI's work. We've managed to get a bunch of them together, so it seemed l

Re: [singularity] Digest 0.101 for singularity

2007-12-11 Thread Charles D Hixson
Michael Gusek wrote: Has anyone devised a replacement/upgrade for/to the Turing Test? What function do you propose that this replacement serve? The original Turing Test was intended to move the idea of computer intelligence out of the metaphysically unanswerable into the potentially testable.

Re: [singularity] Has the Turing test been beaten?

2007-12-10 Thread Charles D Hixson
Panu Horsmalahti wrote: 2007/12/9, Stefan Pernar <[EMAIL PROTECTED]>: Ironic yet thought provoking: http://www.roughtype.com/archives/2007/12/slutbot_passes.php This is not the Turing Test,

Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Charles D Hixson
Benjamin Goertzel wrote: Try & find a single example of any form of intelligence that has ever existed in splendid individual isolation. That is so wrong an idea - like perpetual motion - & so fundamental to the question of superAGI's. (It's also a fascinating philosophic

Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Charles D Hixson
I noticed in a later read that you differentiate between systems designed to operate via goal stacks and those operating via a motivational system. This is not the meaning of goal that I was using. To me, if a motive is a theorem to prove, then a goal is a lemma needed to prove the theorem. I tr

Re: [singularity] John Searle...

2007-10-26 Thread Charles D Hixson
Richard Loosemore wrote: candice schuster wrote: Richard, Your responses to me seem to go in roundabouts. No insult intended, however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible? I think I recently (last week or so) wrote out a repl

Re: [singularity] Re: QUESTION

2007-10-23 Thread Charles D Hixson
Aleksei Riikonen wrote: On 10/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: My own opinion is that the first AGI systems to be built will have extremely passive, quiet, peaceful "egos" that feel great empathy for the needs and aspirations of the human species. Sounds rather optim

Re: [singularity] Re: QUESTION

2007-10-22 Thread Charles D Hixson
Aleksei Riikonen wrote: On 10/22/07, albert medina <[EMAIL PROTECTED]> wrote: My question is: AGI, as I perceive your explanation of it, is when a computer gains/develops an ego and begins to consciously plot its own existence and make its own decisions. That would be one form of AGI,

Re: [singularity] AI is almost here (2/2)

2007-08-01 Thread Charles D Hixson
Alan Grimes wrote: Think of asserting that "All computers will be, at their core, adding machines." to get what appears to me to be the right feeling tone. Well, if you take apart a modern CPU, you will find it has 3 or 4 nearly identical units called "Arithmetic Logic Units" -- which are

Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Charles D Hixson
John G. Rose wrote: From: Alan Grimes [mailto:[EMAIL PROTECTED] Yes, that is all it does. All AIs will be, at their core, pattern matching machines. Every one of them. You can then proceed to tack on any other function which you believe will improve the AI's performance but in every case you wil

Re: [singularity] critiques of Eliezer's views on AI

2007-07-05 Thread Charles D Hixson
Stathis Papaioannou wrote: On 05/07/07, Heartland <[EMAIL PROTECTED]> wrote: ... But different moments of existence in a single person's life can also be regarded as different instances. This is strikingly obvious in block universe theories of time, which are empirically indistinguishable fro

Re: [singularity] AI concerns

2007-07-02 Thread Charles D Hixson
Tom McCabe wrote: The problem isn't that the AGI will violate its original goals; it's that the AGI will eventually do something that will destroy something really important in such a way as to satisfy all of its constraints. By setting constraints on the AGI, you're trying to think of everything

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Charles D Hixson
Tom McCabe wrote: ... To quote: "I am not sure you are capable of following an argument" If I'm not capable of even following an argument, it's a pretty clear implication that I don't understand the argument. You have thus far made no attempt that I have been able to detect to justify the

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: Sergey A. Novitsky wrote: Dear all, ... o Be deprived of free will or be given limited free will (if such a concept is applicable to AI). See above, no effective means of control. - samantha There is *one* effective means of control: An AI wil

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Peter Voss wrote: Just for the record: To put it mildly, not everyone is 'Absolutely' sure that AGI can't be implemented on Bill's computer. In fact, some of us are pretty certain that (a) current hardware is adequate, and (b) AGI software will be with us in (much) less than 10 years. Some

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the

Re: [singularity] AI concerns

2007-06-30 Thread Charles D Hixson
Stathis Papaioannou wrote: On 01/07/07, Alan Grimes <[EMAIL PROTECTED]> wrote: For the last several years, the limiting factor has absolutely not been hardware. How much hardware do you claim you need to develop a hard AI? Available computing power doesn't yet match that of the human brain, but
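On the scale behind "doesn't yet match that of the human brain": a minimal back-of-envelope sketch, using assumed Moravec/Kurzweil-style figures that are not from the thread itself.

    # Assumed figures (illustrative only): ~1e11 neurons, ~1e4 synapses
    # per neuron, ~100 Hz firing -> synaptic events per second.
    neurons = 1e11
    synapses_per_neuron = 1e4
    firing_rate_hz = 100
    brain_ops = neurons * synapses_per_neuron * firing_rate_hz  # ~1e17 ops/s
    top_supercomputer_2007 = 3e14  # FLOPS, roughly a 2007-era leader
    print(f"brain ~{brain_ops:.0e} synaptic ops/s vs ~{top_supercomputer_2007:.0e} FLOPS")

On this rough accounting the 2007 hardware shortfall was two to three orders of magnitude, consistent with the snippet's claim.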

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Charles D Hixson
Stathis Papaioannou wrote: On 28/06/07, Niels-Jeroen Vandamme <[EMAIL PROTECTED]> wrote: An interesting thought experiment: if the universe is infinite, according to a ballpark estimate there would be an exact copy of you at a distance of 10^(10^29) m: because of the Bekenstein bound of the i
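The reasoning step the truncated snippet compresses, sketched under the Tegmark-style assumption (not stated in the snippet) that a person-sized region has at most ~10^(10^29) distinguishable states: by pigeonhole, an identical region is then expected within about that many regions, and taking the cube root for 3-D packing barely dents a double exponential.

    import math
    # Assumed state count (illustrative): log10(N) ~ 1e29 for a
    # person-sized region, the figure the snippet attributes to the
    # Bekenstein bound.
    log10_states = 1e29
    log10_distance_regions = log10_states / 3  # cube root for 3-D packing
    print(f"nearest copy ~ 10^(10^{math.log10(log10_distance_regions):.1f}) region-widths")
    # -> 10^(10^28.5): at this precision, still "about 10^(10^29) m".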

Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Charles D Hixson
Alan Grimes wrote: OTOH, let's consider a few scenarios where no super-human AI develops. Instead there develops: a) A cult of death that decides that humanity is a mistake, and decides to solve the problem via genetically engineered plagues. (Well, diseases. I don't specifically mean plague

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Charles D Hixson
Matt Mahoney wrote: --- Tom McCabe <[EMAIL PROTECTED]> wrote: These questions, although important, have little to do with the feasibility of FAI. These questions are important because AGI is coming, friendly or not. Will our AGIs cooperate or compete? Do we upload ourselves? ... -

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Charles D Hixson
Kaj Sotala wrote: On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote: Dividing things into us vs. them, and calling those that side with us friendly seems to be instinctually human, but I don't think that it's a universal. Even then, we are likely to ignore birds, ants

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-22 Thread Charles D Hixson
And *my* best guess is that most super-humanly intelligent AIs will just choose to go elsewhere, and leave us alone. (Well, most of those that have any interest in personal survival... if you posit genetic AI as the route to success, that will be most to all of them, but I'm much less certain

Re: [singularity] Getting ready for takeoff

2007-06-14 Thread Charles D Hixson
becomes cheap, 3-D visualization becomes cheap. So does primitive virtual reality. But the force multiplier comes in when you hook it up to a CAD system. Especially one that is being used to design electronics or nano-machinery. Or to control them. On 6/13/07, Charles D Hixson <[EMAI

Re: [singularity] Getting ready for takeoff

2007-06-13 Thread Charles D Hixson
Lúcio de Souza Coelho wrote: On 6/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote: (...) But that might be an overly bleak interpretation. Another way to look at the rapid uptake of computers in the BRICs is as an example of the astonishing possibilities for catch-up that technology offers the devel

Re: [singularity] Bootstrapping AI

2007-06-07 Thread Charles D Hixson
BillK wrote: On 6/7/07, Charles D Hixson wrote: I believe that he said exactly as you quoted it. And I also believe that he gets mad if you quote him as saying "640K ought to be enough for anyone." However, I also remember him having been quoted as saying that in the popular techn

Re: [singularity] Re: Personal attacks

2007-06-06 Thread Charles D Hixson
Tom McCabe wrote: --- Eugen Leitl <[EMAIL PROTECTED]> wrote: On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe wrote: Unless, of course, that human turns out to be evil and That's why you need to screen them, and build a group with checks and balances. If our psycholo

Re: [singularity] Bootstrapping AI

2007-06-06 Thread Charles D Hixson
BillK wrote: On 6/5/07, Eugen Leitl wrote: "640K ought to be enough for anybody." - Bill Gates, 1981 Hey, Bill Gates never said that! And he gets angry if you quote it to him. It is an out-of-context misquotation. See: which contains an email fr

Re: [singularity] Why do you think your AGI design will work?

2007-04-28 Thread Charles D Hixson
? how does the solver adapt their rules to solve it? ... - Original Message - From: "Charles D Hixson" <[EMAIL PROTECTED]> To: Sent: Saturday, April 28, 2007 6:23 AM Subject: Re: [singularity] Why do you think your AGI design will work? Mike Tintner wrote: Hi, I stro

Re: [singularity] Why do you think your AGI design will work?

2007-04-27 Thread Charles D Hixson
Mike Tintner wrote: Hi, I strongly disagree - there is a need to provide a definition of AGI - not necessarily the right or optimal definition, but one that poses concrete challenges and focusses the mind - even if it's only a starting-point. The reason the Turing Test has been such a succes

Re: [singularity] Implications of an already existing singularity.

2007-03-29 Thread Charles D Hixson
Matt Mahoney wrote: --- Eugen Leitl <[EMAIL PROTECTED]> wrote: ... A proton is a damn complex system. Don't see how you could equal it with one mere bit. I don't. I am equating one bit with a volume of space about the size of a proton. The actual number of baryons in the universe i
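A quick sense of scale for the "one bit per proton-sized volume" bookkeeping, with assumed astronomical figures (observable-universe radius ~4.4e26 m, proton radius ~0.8e-15 m, ~1e80 baryons) that are not from the snippet:

    import math
    # Assumed figures (illustrative only).
    r_universe_m = 4.4e26
    r_proton_m = 0.8e-15
    proton_cells = (r_universe_m / r_proton_m) ** 3  # proton-sized cells
    print(f"~1e{math.log10(proton_cells):.0f} proton-sized cells")  # ~1e125
    print("vs ~1e80 baryons")  # particles vastly outnumbered by cells

So on these numbers there are roughly 10^45 proton-volumes per baryon, which is why counting bits per volume rather than per particle changes the total so drastically.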

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-09 Thread Charles D Hixson
Shane Legg wrote: :-) No offence taken, I was just curious to know what your position was. I can certainly understand people with a practical interest not having time for things like AIXI. Indeed as I've said before, my PhD is in AIXI and related stuff, and yet my own AGI project is based on o

Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-09 Thread Charles D Hixson
Stathis Papaioannou wrote: On 3/7/07, *Charles D Hixson* <[EMAIL PROTECTED]> wrote: With so many imponderables, the most reasonable thing to do is to just ignore the possibility, and, after all, that may well be what is desired by th

Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-06 Thread Charles D Hixson
John Ku wrote: On 3/3/07, *Charles D Hixson* <[EMAIL PROTECTED]> wrote: Yes, I see no valid argument asserting that this is not a simulation fiction that some other entity is experiencing. And there's no guarantee that sometime soon h

Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-03 Thread Charles D Hixson
John Ku wrote: I think I am conceiving of the dialectic in a different way from the way you are imagining it. What I think Bostrom and others are doing is arguing that if the world is as our empirical science says it is, then the anthropic principle actually yields the prediction that we are a

Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Charles D Hixson
Chris Norwood wrote: ... "a human hooked into the Net with VR technology and able to sense and act remotely via sensors and actuators all over the world, might also develop a different flavor of self" Yes. I think this is an important point that I have not seen discussed very much. It could be,

Re: [singularity] Counter-argument

2006-10-08 Thread Charles D Hixson
Lúcio de Souza Coelho wrote: On 10/6/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: (...) The "program" can't be larger than the DNA which describes it. This is at most 6 x 10^9 bits, but probably much smaller because of all the noise (random mutations that have no effect), redundancy (multiple
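The 6 x 10^9 bits figure in the snippet follows from standard genome arithmetic; a minimal check (assuming the usual ~3e9-base-pair haploid human genome, which the snippet does not spell out):

    base_pairs = 3e9           # approximate haploid human genome length
    bits = base_pairs * 2      # 4 possible bases -> log2(4) = 2 bits each
    megabytes = bits / 8 / 1e6
    print(f"{bits:.0e} bits ~= {megabytes:.0f} MB")  # 6e+09 bits ~= 750 MB

Compressing out the noise and redundancy the snippet mentions would push the effective figure well below 750 MB.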

Re: [singularity] Counter-argument

2006-10-06 Thread Charles D Hixson
Joshua Fox wrote: Could I offer Singularity-list readers this intellectual challenge: Give an argument supporting the thesis "Any sort of Singularity is very unlikely to occur in this century." Even if you don't actually believe the point, consider it a debate-club-style challenge. If there i

Re: [singularity] An interesting but currently failed attempt to prove provable Friendly AI impossible...

2006-09-17 Thread Charles D Hixson
Shane Legg wrote: On 9/17/06, *Brian Atkins* <[EMAIL PROTECTED]> wrote: ... It would be much easier to aim at the right target if the target was properly defined. ... Shane I have a suspicion that if an FAI could be properly and completely defined, the cons

Re: [singularity] Is Friendly AI Bunk?

2006-09-13 Thread Charles D Hixson
Russell Wallace wrote: On 9/13/06, *Joel Pitt* <[EMAIL PROTECTED]> wrote: Not to be pedantic, but someone in a foreign country where they didn't ... Meh, I should have predicted that... here's another case where an intelligent being fails to predict the rat

Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Charles D Hixson
Russell Wallace wrote: ... Mind you, I don't believe it will be feasible to create conscious or self-willed (e.g. RPOP) AI in the foreseeable future. But that's another matter. That depends entirely on how you define "conscious" and "self-willed". For many definitions the robots that sense t