Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Valentina Poletti
Got ya, thanks for the clarification. That brings up another question. Why do we want to make an AGI? On 8/27/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > An AGI will not design its goals. It is up to humans to define the goals of an AGI, so that it will do what we want it to do. > Unfor

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Valentina Poletti
All these points you made are good points, and I agree with you. However, what I was trying to say - and I realized I did not express myself too well - is that, from what I understand, I see a paradox in what Eliezer is trying to do. Assuming that we agree on the definition of AGI - a being far more

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-28 Thread Valentina Poletti
Lol.. it's not that impossible actually. On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > Valentina: In other words I'm looking for a way to mathematically define how the AGI will mathematically define its goals. > Holy Non-Existent Grail? Has any new branch of logic o

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Vladimir Nesov
On Thu, Aug 28, 2008 at 12:34 PM, Valentina Poletti <[EMAIL PROTECTED]> wrote: > All these points you made are good points, and I agree with you. However, what I was trying to say - and I realized I did not express myself too well - is that, from what I understand, I see a paradox in what Eliezer

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/27 Mike Tintner <[EMAIL PROTECTED]>: > You on your side insist that you don't have to have such precisely defined goals - your intuitive (and by definition, ill-defined) sense of intelligence will do. As a child I don't believe that I set out with the goal of "becoming a software deve

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Mike Tintner
Just in case there is any confusion, "ill-defined" is, in this particular context, in no way pejorative. The crux of a General Intelligence for me is that it is necessarily a machine that works with more or less ill-defined goals to solve ill-structured problems. Bob's self-description is to a

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/28 Mike Tintner <[EMAIL PROTECTED]>: > (I still think of course that current AGI should have a not-so-ill structured definition of its problem-solving goals). It's certainly true that an AGI could be endowed with well defined goals. Some people also begin from an early age with well de

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
> No, the state of ultimate bliss that you, I, and all other rational, goal-seeking agents seek Your second statement copied below notwithstanding, I *don't* seek ultimate bliss. You may say that is not what you want, but only because you are unaware of the possibilities of reprogramming your

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Hi Mark, I think the miscommunication is relatively simple... On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser <[EMAIL PROTECTED]> wrote: > Hi, I think that I'm missing some of your points . . . . >> Whatever good is, it cannot be something directly observable, or the AI will just wirehead

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark, I second that! Matt, This is like my imaginary robot that rewires its video feed to be nothing but tan, to stimulate the pleasure drive that humans put there to make it like humans better. If we have any external goals at all, the state of bliss you refer to prevents us from achieving the

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Matt, Ok, you have me, I admit defeat. I could only continue my argument if I could pin down what sorts of facts need to be learned with high probability for RSI, and show somehow that this set does not include unlearnable facts. Learnable facts form a larger set than provable facts, since for ex

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
PS-- I have thought of a weak argument: If a fact is not probabilistically learnable, then it is hard to see how it has much significance for an AI design. A non-learnable fact won't reliably change the performance of the AI, since if it did, it would be learnable. Furthermore, even if there were
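
As a toy numerical illustration of this argument (a sketch, not something from the thread): if a hidden fact reliably shifts an agent's success rate, the shift is visible in outcome statistics, so the fact is learnable in the probabilistic sense. The success_rate function and all numbers below are hypothetical.

import random

def success_rate(knows_fact, trials=10000):
    # Hypothetical agent: knowing the fact raises per-task success from 0.50 to 0.55.
    p = 0.55 if knows_fact else 0.50
    return sum(random.random() < p for _ in range(trials)) / trials

# The performance gap shows up in outcome statistics alone, so whether the
# fact is "in play" can be inferred (learned) from observation.
print(success_rate(True) - success_rate(False))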

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Terren Suydam
> > It doesn't matter what I do with the question. It only matters what an AGI does with it. > AGI doesn't do anything with the question, you do. You answer the question by implementing Friendly AI. FAI is the answer to the question. The question is: how could one specify Friendlines

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
Also, I should mention that the whole construction becomes irrelevant if we can logically describe the goal ahead of time. With the "make humans happy" example, something like my construction would be useful if we need the AI to *learn* what a human is and what happy is. (We then set up the pleasur

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Terren Suydam
--- On Wed, 8/27/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > One of the main motivations for the fast development of Friendly AI is that it can be allowed to develop superintelligence to police the human space from global catastrophes like Unfriendly AI, which includes as a special

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark, Actually I am sympathetic with this idea. I do think good can be defined. And, I think it can be a simple definition. However, it doesn't seem right to me to preprogram an AGI with a set ethical theory; the theory could be wrong, no matter how good it sounds. So, better to put such ideas in

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
However, it doesn't seem right to me to preprogram an AGI with a set ethical theory; the theory could be wrong, no matter how good it sounds. Why not wait until a theory is derived before making this decision? Wouldn't such a theory be a good starting point, at least? better to put such ideas

Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Jiri Jelinek
On Thu, Aug 28, 2008 at 12:29 PM, Terren Suydam <[EMAIL PROTECTED]> wrote: > I challenge anyone who believes that Friendliness is attainable in principle to construct a scenario in which there is a clear right action that does not depend on cultural or situational context. It does depend on cul

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark, I still think your definitions sound difficult to implement, although not nearly as hard as "make humans happy without modifying them". How would you define "consent"? You'd need a definition of decision-making entity, right? Personally, if I were to take the approach of a preprogramm

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
Personally, if I were to take the approach of a preprogrammed ethics, I would define good in pseudo-evolutionary terms: a pattern/entity is good if it has high survival value in the long term. Patterns that are self-sustaining on their own are thus considered good, but patterns that help sustain o
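
As a toy illustration of "a pattern/entity is good if it has high survival value in the long term" (a sketch under assumed definitions, not a proposal from the thread): goodness estimated as the fraction of Monte Carlo trials in which a pattern persists over a long horizon. The pattern_survives callable, the horizon, and the trial count are hypothetical stand-ins.

import random

def estimated_goodness(pattern_survives, horizon=1000, trials=200):
    # Crude Monte Carlo estimate of long-term survival value: the fraction
    # of trials in which the pattern is still present after `horizon` steps.
    survived = 0
    for _ in range(trials):
        if all(pattern_survives(step) for step in range(horizon)):
            survived += 1
    return survived / trials

# Toy usage: a 'pattern' that survives each step with probability 0.999.
print(estimated_goodness(lambda step: random.random() < 0.999))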

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam
--- On Thu, 8/28/08, Mark Waser <[EMAIL PROTECTED]> wrote: > Actually, I *do* define good and ethics not only in evolutionary terms but as being driven by evolution. Unlike most people, I believe that ethics is *entirely* driven by what is best evolutionarily while not believing at al

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Valentina Poletti <[EMAIL PROTECTED]> wrote: > Got ya, thanks for the clarification. That brings up another question. Why do we want to make an AGI? I'm glad somebody is finally asking the right question, instead of skipping over the specification to the design phase. It would avoid a lot of

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Nobody wants to enter a mental state where thinking and awareness are unpleasant, at least when I describe it that way. My point is that having everything you want is not the utopia that many people think it is. But it is where we are headed. -- Matt Mahoney, [EMAIL PROTECTED] - Origina

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
I'm not trying to win any arguments, but I am trying to solve the problem of whether RSI is possible at all. It is an important question because it profoundly affects the path that a singularity would take, and what precautions we need to design into AGI. Without RSI, a singularity has to b

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mike Tintner
Matt: If RSI is possible, then there is the additional threat of a fast takeoff of the kind described by Good and Vinge. Can we have an example of just one or two subject areas or domains where a takeoff has been considered (by anyone) as possibly occurring, and what form such a takeoff might t

[agi] Re: Goedel machines ..PS

2008-08-28 Thread Mike Tintner
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered?

RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Matt Mahoney
Here is Vernor Vinge's original essay on the singularity. http://mindstalk.net/vinge/vinge-sing.html The premise is that if humans can create agents with above-human intelligence, then so can those agents. What I am questioning is whether agents at any intelligence level can do this. I don't believe t

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam
Hi Jiri, Comments below... --- On Thu, 8/28/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote: >> That's difficult to reconcile if you don't believe embodiment is all that important. > Not really. We might be qualia-driven, but for our AGIs it's perfectly ok (and only "natural") to be driven

[agi] MindForth puts AI theory into practice.

2008-08-28 Thread A. T. Murray
Artificial Minds in Win32Forth are online at http://mind.sourceforge.net/mind4th.html and http://AIMind-i.com -- a separate AI branch. http://mentifex.virtualentity.com/js080819.html is the JavaScript AI Mind Programming Journal about the development of a tutorial program at http://mind.source

[agi] Context

2008-08-28 Thread Harry Chesley
I think we would all agree that context is crucial to understanding. "Kill them!" means something quite different if you're at a soccer game, in a military battle, or playing an FPS video game. But in a pragmatic, let's-implement-it sense, I'm not as clear what context means. Let me try to enu
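
As a toy illustration of the "Kill them!" example (a sketch, not from the thread): a minimal Python fragment in which context is an explicit data structure and the same utterance resolves to different meanings per context. The Context class and the setting tags are hypothetical.

from dataclasses import dataclass

@dataclass
class Context:
    setting: str  # e.g. "soccer_game", "battlefield", "fps_game"

def interpret(utterance: str, ctx: Context) -> str:
    # The same words map to very different meanings depending on the setting.
    if utterance == "Kill them!":
        return {
            "soccer_game": "enthusiastic support for one's team",
            "battlefield": "a literal, lethal command",
            "fps_game": "an in-game instruction with no real-world harm",
        }.get(ctx.setting, "meaning unclear without more context")
    return "unknown utterance"

print(interpret("Kill them!", Context("soccer_game")))
print(interpret("Kill them!", Context("battlefield")))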

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Mike Tintner
Thanks. But like I said, airy generalities. That machines can become faster and faster at computations and accumulating knowledge is certain. But that's narrow AI. For general intelligence, you have to be able first to integrate as well as accumulate knowledge. We have learned vast amounts a

Re: [agi] MindForth puts AI theory into practice.

2008-08-28 Thread Chris Petersen
On Fri, Aug 22, 2008 at 9:44 AM, A. T. Murray <[EMAIL PROTECTED]> wrote: > Artificial Minds in Win32Forth are online at http://mind.sourceforge.net/mind4th.html and http://AIMind-i.com -- a separate AI branch. > http://mentifex.virtualentity.com/js080819.html is the JavaScript AI Mind Prog

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread j.k.
On 08/28/2008 04:47 PM, Matt Mahoney wrote: The premise is that if humans can create agents with above-human intelligence, then so can those agents. What I am questioning is whether agents at any intelligence level can do this. I don't believe that agents at any level can recognize higher intelligence

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
Parasites are very successful at surviving but they don't have other goals. Try being parasitic *and* succeeding at goals other than survival. I think you'll find that your parasitic ways will rapidly get in the way of your other goals the second that you need help (or even non-interference)

Re: [agi] Re: Goedel machines ..PS

2008-08-28 Thread David Hart
On 8/29/08, Mike Tintner <[EMAIL PROTECTED]> wrote: > Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered? To quote Charles Babbage, I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a quest

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Brad Paulsen
Eric, It was a real-life near-death experience (auto accident). I'm aware of the tryptamine compound and its presence in hallucinogenic drugs such as LSD. According to Wikipedia, it is not related to the NDE drug of choice which is Ketamine (Ketalar or ketamine HCL -- street name back in the

[agi] To sleep, perchance to dream...

2008-08-28 Thread Brad Paulsen
EXPLORING THE FUNCTION OF SLEEP http://www.physorg.com/news138941239.html From the article: Because it is universal, tightly regulated, and cannot be lost without serious harm, Cirelli argues that sleep must have an important core function. But what?

[agi] Talk on OpenCogPrime in the San Fran area

2008-08-28 Thread Ben Goertzel
All are welcome... -- Forwarded message -- From: Monica <[EMAIL PROTECTED]> Date: Thu, Aug 28, 2008 at 9:51 PM Subject: [ai-94] New Extraordinary Meetup: Ben Goertzel, Novamente To: [EMAIL PROTECTED] Announcing a new Meetup for Bay Area Artificial Intelligence Meetup Group! What

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
Terren, > is not embodied at all, in which case it is a mindless automaton Researchers and philosophers define mind and intelligence in many different ways, so their classifications of particular AI systems differ. What really counts, though, are the problem-solving abilities of the system. Not how it's l

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam
Hi Mark, Obviously you need to complicate your original statement "I believe that ethics is *entirely* driven by what is best evolutionarily..." in such a way that we don't derive ethics from parasites. You did that by invoking social behavior - parasites are not social beings. So from ther

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam
Jiri, I think where you're coming from is a perspective that doesn't consider or doesn't care about the prospect of a conscious intelligence, an awake being capable of self reflection and free will (or at least the illusion of it). I don't think any kind of algorithmic approach, which is to sa

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
Terren, > I don't think any kind of algorithmic approach, which is to say, un-embodied, will ever result in conscious intelligence. But an embodied agent that is able to construct ever-deepening models of its experience such that it eventually includes itself in its models, well, that is

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Eric Burton
Brad, scary stuff. Dissociatives/NMDA inhibitors were secret option number three! ;D On 8/29/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote: > Terren, >> I don't think any kind of algorithmic approach, which is to say, un-embodied, will ever result in conscious intelligence. But an embodied ag