Re: [agi] How should an AGI ponder about mathematics
--- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 24, 2007 at 01:08:02PM -0700, Matt Mahoney wrote:
> > Thus, the fallacy of immortality through uploading is exposed. A copy
> > of you is not you.
>
> No. As long as there's no fork, all systems are identical. There's no
> measurement which allows you to tell one discrete synchronized system
> from another. Space is not labeled. The spray paint of the case is
> not relevant.

So if you are uploaded, or let's say you step into a duplicating machine that makes an exact copy of you, atom for atom, then you will shoot yourself as soon as you step out, and you won't die, as long as you do it right now, before you and the copy diverge?

Or suppose you know that there are an infinite number of all possible universes, so that at each point in time there are universes with indistinguishable pasts but all possible futures. Then you know that if you shoot yourself, there is another universe where you don't. Would you then shoot yourself?

The problem with these arguments is not in the logic. It is that all animals fear death, because those that didn't did not pass on their DNA.

-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
Re: [agi] How should an AGI ponder about mathematics
On Tue, Apr 24, 2007 at 01:08:02PM -0700, Matt Mahoney wrote:

> Thus, the fallacy of immortality through uploading is exposed. A copy of
> you is not you.

No. As long as there's no fork, all systems are identical. There's no measurement which allows you to tell one discrete synchronized system from another. Space is not labeled. The spray paint of the case is not relevant.

> But why? A copy of a computation is as good as the original. Surely you
> don't believe that there is something going on in the brain that cannot
> be explained by physics?

Two synchronized chess computers play the same game of chess. If you allow them to diverge, they play a different game of chess.

> This is what happens when logic conflicts with the hard-coded beliefs that

I take a rather dim view of logic (it's just a flower that smells bad), but in this case logic is not even to blame.

> were programmed by evolution. Such conflicts are inevitable. Human
> introspection on human intelligence is limited by the mathematical fact
> that a program cannot simulate itself.

Your logic chain is rather rusty. Needs more oil.

-- 
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
RE: [agi] How should an AGI ponder about mathematics
--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:
> The more problematic issue is what happens if you non-destructively
> up-load your mind? What do you do with the original which still considers
> itself you? The up-load also considers itself you and may suggest a
> bullet.

Thus, the fallacy of immortality through uploading is exposed. A copy of you is not you.

But why? A copy of a computation is as good as the original. Surely you don't believe that there is something going on in the brain that cannot be explained by physics?

This is what happens when logic conflicts with the hard-coded beliefs that were programmed by evolution. Such conflicts are inevitable. Human introspection on human intelligence is limited by the mathematical fact that a program cannot simulate itself.

-- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] How should an AGI ponder about mathematics
On Tue, Apr 24, 2007 at 07:21:31AM -0700, Eric B. Ramsay wrote:

> Your twin example is not a good choice. The upload will consider

It's the same in principle, though. The only difference is that you'll be getting a really young 'copy' (not as exact a copy as a real upload, I know).

> itself to have a claim on the contents of your life - financial
> resources for example.

Of course. It would result in the creation of two or more people with identical history and memory until the fork. After the fork, they're two people. Very similar, initially, but exponentially diverging. The same thing as with identical twins, which start diverging in the womb.

The only way to keep copies identical is to never allow them to diverge (keeping them in sync). That's a potentially useful setup for an HA failover situation. Heartbeat, drbd, stonith, the whole enchilada.

-- Eugen* Leitl leitl http://leitl.org
Re: [agi] How should an AGI ponder about mathematics
Your twin example is not a good choice. The upload will consider itself to have a claim on the contents of your life - financial resources for example.

Eugen Leitl <[EMAIL PROTECTED]> wrote:

> On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:
> > The more problematic issue is what happens if you non-destructively
> > up-load your mind? What do you do with the original which still
>
> It's a theoretical problem for any of us on this list. Nondestructive
> scans require medical nanotechnology.
>
> > considers itself you? The up-load also considers itself you and may
> > suggest a bullet.
>
> How is that different from identical twins? I hope you're not suggesting
> suicide to your twin brother.
Re: [agi] How should an AGI ponder about mathematics
On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:

> The more problematic issue is what happens if you non-destructively
> up-load your mind? What do you do with the original which still

It's a theoretical problem for any of us on this list. Nondestructive scans require medical nanotechnology.

> considers itself you? The up-load also considers itself you and may
> suggest a bullet.

How is that different from identical twins? I hope you're not suggesting suicide to your twin brother.

-- Eugen* Leitl leitl http://leitl.org
RE: [agi] How should an AGI ponder about mathematics
The more problematic issue is what happens if you non-destructively up-load your mind? What do you do with the original which still considers itself you? The up-load also considers itself you and may suggest a bullet.

Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- "John G. Rose" wrote:
> > A baby AGI has an immense advantage. It's starting (life?) after
> > billions of years of evolution and thousands of years of civilization.
> > A 5 YO child can't float all languages, all science, all mathematics,
> > all recorded history, all encyclopedias, etc. in sub-millisecond RAM
> > and be able to interconnect to almost any type of electronics. There
> > are a lot of comparisons of a 5 YO with an AGI but I wonder about
> > those... are we just anthropomorphizing AGI by coming up with a tabula
> > rasa feel-good AGI that needs to learn like a cute human baby? Our
> > brains are good, I mean they are us, but aren't they just biological
> > blobs of goop that are half-assed excuses for intelligence? I mean, why
> > are AGIs coming about anyway? Is it because our brains are awesome and
> > fulfill all of our needs? No. We need to be uploaded, otherwise we die.
>
> I thought the reason for building an AGI was so we would have a utopian
> society where machines do all the work. Uploading raises troubling
> questions. How far can the copied mind stray from the original before you
> die? How do you distinguish between consciousness (sense of self) and the
> programmed belief in consciousness, free will, and fear of death that all
> animals possess because it confers a survival advantage? What happens if
> you reprogram your uploaded mind not to have these beliefs? Would it then
> be OK to turn it off?
>
> -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] How should an AGI ponder about mathematics
How will it handle the Mid-East crisis? God comes crying to me every night about that one. I tell Him to shut up, be a Man and get on with it. Or the Iraq crisis? As for humanising the US gun laws - even God doesn't go there.

How will it sell more Coke, or get Yahoo back on top of Google? How will it get my daughter to talk to me? Or your partner to talk to you? How will a metallic, superfast thinking machine empathise with flesh-and-blood, slow-but-ever-so-flexibly thinking humans?

In other words, get real. All this speculation is wildly divorced from reality. It totally lacks a TYPOLOGY OF PROBLEMS, intelligent and creative. Some problems aren't soluble by brute power. Most of our day-to-day problems, in fact.

As for that gunk of goop, you aren't LOOKING. The only gunk around here is that hunk of metal you call a would-be AGI computer. That's all it is - it doesn't EXIST until a gunk of goop puts his or her hand up its backside and switches it on, and feeds it and interprets it. And even then computers and robots are still only EXTENSIONS OF HUMAN BEINGS... literally - even if you can't see the puppet strings. Their intelligence is a DIRECT EXTENSION of our "useless" intelligence. Until you've truly absorbed that rather obvious truth, all your thinking about this area will be deeply confused.

It may well be that only biorobots - some kind of synthetic organisms - will be truly alive and independent of humans.

I'd concentrate on more immediate targets - like a robot that
* can *truly* understand language,
* is *truly* multimediate - able to convert from any one sign system into any other,
* can be *truly* metacognitive - able to conceive of activities as wholes,
* can be *truly* adaptive - able to come up with new, non-programmed responses to new, problematic situations,
* and can be *truly* creative - able to create radically new ways of doing things - hard invention, innovation, discovery.

And I think you'll need a body to do all of them. And it may well take a long time, even, say, with fabulous quantum computers.

(And as for an ideal intelligence - what's the ideal form of sex? Answer that and you'll be able to answer the first question. The secret of life is that there isn't meant to be an ideal form. Better yes, ideal no.)

----- Original Message -----
From: "John G. Rose" <[EMAIL PROTECTED]>
Sent: Tuesday, April 24, 2007 2:48 AM
Subject: RE: [agi] How should an AGI ponder about mathematics

> 1. They will probably create more problems than they fix... as usual. But
> they should be able to assist man with his issues. Kind of like machines
> did.
> 2. You would have to imagine an ideal pure intelligence and bridge the
> gap somewhat.
>
> > 1. What are your AGIs going to do with their intelligence? What kinds
> > of problems are they going to solve?
> > 2. What are the flaws in our "excuses for intelligence" - in the ways
> > we use our brains? And how are AGIs going to remedy them?
RE: [agi] How should an AGI ponder about mathematics
From biological conception to old age, the mind changes quite a bit already. Consciousness, sense of self, free will - all illusions. Fear of death - if the mind agent lost it, perhaps it would choose to terminate, unless something else supported its intent to keep running.

> From: Matt Mahoney [mailto:[EMAIL PROTECTED]]
>
> I thought the reason for building an AGI was so we would have a utopian
> society where machines do all the work. Uploading raises troubling
> questions. How far can the copied mind stray from the original before you
> die? How do you distinguish between consciousness (sense of self) and the
> programmed belief in consciousness, free will, and fear of death that all
> animals possess because it confers a survival advantage? What happens if
> you reprogram your uploaded mind not to have these beliefs? Would it then
> be OK to turn it off?
RE: [agi] How should an AGI ponder about mathematics
1. They will probably create more problems than they fix... as usual. But they should be able to assist man with his issues. Kind of like machines did.
2. You would have to imagine an ideal pure intelligence and bridge the gap somewhat.

> 1. What are your AGIs going to do with their intelligence? What kinds of
> problems are they going to solve?
> 2. What are the flaws in our "excuses for intelligence" - in the ways we
> use our brains? And how are AGIs going to remedy them?
Re: [agi] How should an AGI ponder about mathematics
On Monday 23 April 2007 19:45, Matt Mahoney wrote:
> ... How do you distinguish between consciousness (sense of self) and the
> programmed belief in consciousness, free will, and fear of death that all
> animals possess because it confers a survival advantage?

A distinction without a difference, I claim...

Josh
Re: [agi] How should an AGI ponder about mathematics
Hmmm. Design a combinational logic circuit that has inputs a, b, and c, and outputs not(a), not(b), and not(c) -- its function is just 3 paralleled inverters. But, while you may use as many AND and OR gates as you like, you may use at most two NOT gates.

Josh

On Monday 23 April 2007 17:43, Samantha Atkins wrote:
> On Apr 23, 2007, at 2:05 PM, J. Storrs Hall, PhD. wrote:
> > On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
> > > ... An AGI working with bigger numbers had better discover binary
> > > numbers. Could an AGI do it? Could it discover rational numbers? (It
> > > would initially believe that irrational numbers do not exist, as the
> > > early Pythagoreans believed.) After having discovered the basic
> > > grounding, it could be taught the more advanced things.
> >
> > How many people on this list have discovered anything as fundamental
> > as binary numbers, I wonder?
>
> Many, I would suspect. I learned math by ignoring most of what went on
> in junior high and early high school classes. My school ran out of math
> to teach me by my junior year. I would look up now and then from my SF
> book once a week or so to see what was being taught. If it was new I
> would take it, abstract it, play with the abstractions and generally
> figure out what was likely to be taught the next week or month. If I saw
> something new I would figure out at least one way it could have been
> discovered for myself. This kept math interesting. I very much doubt I
> am unique in that respect around these parts.
>
> > We take a lot of stuff for granted but we *learned* almost all of it;
> > we didn't discover it.
>
> I generally got less happy when I couldn't figure out a way to derive
> what was being taught. I wasn't big on memorization or applying things I
> did not understand.
>
> > There's a lot of hubris in the notion that we, working from a
> > technology base that can't build an AI with the common sense of a
> > 5-year-old, will turn around and build a system that will duplicate
> > 3000 years of the accumulated efforts of humanity's greatest geniuses
> > in a year or two.
>
> Yay for hubris! A lot has been done throughout history by people who
> didn't know any better than to assume it was possible to do what they
> desired and not give up. What would it serve us to assume that creating
> at least a seed AI is impossible?
>
> - samantha
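For what it's worth, the puzzle above has a classical answer in which the two NOT gates are shared among all three outputs, and a candidate circuit can be checked exhaustively over its 8 input combinations. The sketch below is my own illustration of one such construction (the gate names are my choices, not from the thread):

```python
from itertools import product

def three_inverters(a, b, c):
    """Invert a, b, c using unlimited AND/OR gates but only two NOT gates.

    NOT gate #1 computes 'at most one input is 1'; NOT gate #2 computes
    'the number of 1 inputs is even' (0 or 2). Every inversion is then a
    pure AND/OR expression over those two shared signals and the inputs.
    """
    majority = (a & b) | (b & c) | (a & c)   # at least two inputs are 1
    n1 = 1 - majority                        # NOT gate #1: count <= 1
    odd = (n1 & (a | b | c)) | (a & b & c)   # count is 1 or 3
    n2 = 1 - odd                             # NOT gate #2: count is 0 or 2

    def inv(x, y, z):
        # x == 0 exactly when: count == 0, or count == 1 with the 1 among
        # {y, z}, or count == 2 with both 1s among {y, z}.
        return (n1 & n2) | (n1 & odd & (y | z)) | (n2 & y & z)

    return inv(a, b, c), inv(b, c, a), inv(c, a, b)

# Exhaustive check over all 8 input combinations:
for a, b, c in product((0, 1), repeat=3):
    assert three_inverters(a, b, c) == (1 - a, 1 - b, 1 - c)
print("all 8 cases pass")
```

The trick is that "count of 1s" carries enough information that two NOT gates suffice for three independent inversions, which is exactly what makes the puzzle feel so counterintuitive to do in one's head.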
Re: [agi] How should an AGI ponder about mathematics
"He who refuses to do arithmetic is doomed to talk nonsense." - John McCarthy

We're talking about relative numbers here. Suppose you had an AI algorithm that was exactly as good as the one the human brain uses. In fact, let's suppose you had one that was two orders of magnitude better, since you will be running it on serial hardware that has signal restoration and error correction built in. This gives you approximately the Moravec HEPP to shoot at: 100 tera-ops to equal a human. Buy a multi-megabuck supercomputer to run it on. Now you have a machine that's just as smart as you are. How fast is it going to improve itself? Just as fast as you could improve it -- no faster.

Reading the internet sounds like a win (and will be very useful), but there's a disconnect between how fast current-day algorithms can process data, for very stupid meanings of "process", and how fast they could understand it, in the sense that you do when you read. I don't see why we should expect a human-level AGI to read the internet any faster than we can, if we want it to understand and integrate the knowledge. That's the part that takes the big horsepower.

Josh

On Monday 23 April 2007 18:29, John G. Rose wrote:
> A baby AGI has an immense advantage. It's starting (life?) after billions
> of years of evolution and thousands of years of civilization. A 5 YO
> child can't float all languages, all science, all mathematics, all
> recorded history, all encyclopedias, etc. in sub-millisecond RAM and be
> able to interconnect to almost any type of electronics. There are a lot
> of comparisons of a 5 YO with an AGI but I wonder about those... are we
> just anthropomorphizing AGI by coming up with a tabula rasa feel-good AGI
> that needs to learn like a cute human baby? Our brains are good, I mean
> they are us, but aren't they just biological blobs of goop that are
> half-assed excuses for intelligence? I mean, why are AGIs coming about
> anyway? Is it because our brains are awesome and fulfill all of our
> needs? No. We need to be uploaded, otherwise we die.
>
> John
>
> > > ... An AGI working with bigger numbers had better discover binary
> > > numbers. Could an AGI do it? Could it discover rational numbers? (It
> > > would initially believe that irrational numbers do not exist, as the
> > > early Pythagoreans believed.) After having discovered the basic
> > > grounding, it could be taught the more advanced things.
> >
> > How many people on this list have discovered anything as fundamental
> > as binary numbers, I wonder? We take a lot of stuff for granted but we
> > *learned* almost all of it; we didn't discover it. There's a lot of
> > hubris in the notion that we, working from a technology base that
> > can't build an AI with the common sense of a 5-year-old, will turn
> > around and build a system that will duplicate 3000 years of the
> > accumulated efforts of humanity's greatest geniuses in a year or two.
> >
> > Josh
RE: [agi] How should an AGI ponder about mathematics
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> A baby AGI has an immense advantage. It's starting (life?) after billions
> of years of evolution and thousands of years of civilization. A 5 YO
> child can't float all languages, all science, all mathematics, all
> recorded history, all encyclopedias, etc. in sub-millisecond RAM and be
> able to interconnect to almost any type of electronics. There are a lot
> of comparisons of a 5 YO with an AGI but I wonder about those... are we
> just anthropomorphizing AGI by coming up with a tabula rasa feel-good AGI
> that needs to learn like a cute human baby? Our brains are good, I mean
> they are us, but aren't they just biological blobs of goop that are
> half-assed excuses for intelligence? I mean, why are AGIs coming about
> anyway? Is it because our brains are awesome and fulfill all of our
> needs? No. We need to be uploaded, otherwise we die.

I thought the reason for building an AGI was so we would have a utopian society where machines do all the work. Uploading raises troubling questions. How far can the copied mind stray from the original before you die? How do you distinguish between consciousness (sense of self) and the programmed belief in consciousness, free will, and fear of death that all animals possess because it confers a survival advantage? What happens if you reprogram your uploaded mind not to have these beliefs? Would it then be OK to turn it off?

-- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] How should an AGI ponder about mathematics
John: "Our brains are good, I mean they are us, but aren't they just biological blobs of goop that are half-assed excuses for intelligence? I mean, why are AGIs coming about anyway? Is it because our brains are awesome and fulfill all of our needs? No. We need to be uploaded, otherwise we die."

1. What are your AGIs going to do with their intelligence? What kinds of problems are they going to solve?
2. What are the flaws in our "excuses for intelligence" - in the ways we use our brains? And how are AGIs going to remedy them?
RE: [agi] How should an AGI ponder about mathematics
A baby AGI has an immense advantage. It's starting (life?) after billions of years of evolution and thousands of years of civilization. A 5 YO child can't float all languages, all science, all mathematics, all recorded history, all encyclopedias, etc. in sub-millisecond RAM and be able to interconnect to almost any type of electronics. There are a lot of comparisons of a 5 YO with an AGI but I wonder about those... are we just anthropomorphizing AGI by coming up with a tabula rasa feel-good AGI that needs to learn like a cute human baby? Our brains are good, I mean they are us, but aren't they just biological blobs of goop that are half-assed excuses for intelligence? I mean, why are AGIs coming about anyway? Is it because our brains are awesome and fulfill all of our needs? No. We need to be uploaded, otherwise we die.

John

> > ... An AGI working with bigger numbers had better discover binary
> > numbers. Could an AGI do it? Could it discover rational numbers? (It
> > would initially believe that irrational numbers do not exist, as the
> > early Pythagoreans believed.) After having discovered the basic
> > grounding, it could be taught the more advanced things.
>
> How many people on this list have discovered anything as fundamental as
> binary numbers, I wonder? We take a lot of stuff for granted but we
> *learned* almost all of it; we didn't discover it. There's a lot of
> hubris in the notion that we, working from a technology base that can't
> build an AI with the common sense of a 5-year-old, will turn around and
> build a system that will duplicate 3000 years of the accumulated efforts
> of humanity's greatest geniuses in a year or two.
>
> Josh
Re: [agi] How should an AGI ponder about mathematics
On Apr 23, 2007, at 2:05 PM, J. Storrs Hall, PhD. wrote:
> On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
> > ... An AGI working with bigger numbers had better discover binary
> > numbers. Could an AGI do it? Could it discover rational numbers? (It
> > would initially believe that irrational numbers do not exist, as the
> > early Pythagoreans believed.) After having discovered the basic
> > grounding, it could be taught the more advanced things.
>
> How many people on this list have discovered anything as fundamental as
> binary numbers, I wonder?

Many, I would suspect. I learned math by ignoring most of what went on in junior high and early high school classes. My school ran out of math to teach me by my junior year. I would look up now and then from my SF book once a week or so to see what was being taught. If it was new I would take it, abstract it, play with the abstractions and generally figure out what was likely to be taught the next week or month. If I saw something new I would figure out at least one way it could have been discovered for myself. This kept math interesting. I very much doubt I am unique in that respect around these parts.

> We take a lot of stuff for granted but we *learned* almost all of it; we
> didn't discover it.

I generally got less happy when I couldn't figure out a way to derive what was being taught. I wasn't big on memorization or applying things I did not understand.

> There's a lot of hubris in the notion that we, working from a technology
> base that can't build an AI with the common sense of a 5-year-old, will
> turn around and build a system that will duplicate 3000 years of the
> accumulated efforts of humanity's greatest geniuses in a year or two.

Yay for hubris! A lot has been done throughout history by people who didn't know any better than to assume it was possible to do what they desired and not give up. What would it serve us to assume that creating at least a seed AI is impossible?

- samantha
Re: [agi] How should an AGI ponder about mathematics
On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
> ... An AGI working with bigger numbers had better discover binary
> numbers. Could an AGI do it? Could it discover rational numbers? (It
> would initially believe that irrational numbers do not exist, as the
> early Pythagoreans believed.) After having discovered the basic
> grounding, it could be taught the more advanced things.

How many people on this list have discovered anything as fundamental as binary numbers, I wonder? We take a lot of stuff for granted but we *learned* almost all of it; we didn't discover it. There's a lot of hubris in the notion that we, working from a technology base that can't build an AI with the common sense of a 5-year-old, will turn around and build a system that will duplicate 3000 years of the accumulated efforts of humanity's greatest geniuses in a year or two.

Josh
Re: [agi] How should an AGI ponder about mathematics
--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> Perhaps CIC is simply too impractical.

Probably. Deriving multiplication from zero and S() is like computing m*n using something like:

  for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
      product++;
Re: [agi] How should an AGI ponder about mathematics
On 4/23/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:
> We really are pigs in space when it comes to discrete symbol manipulation
> such as arithmetic or logic. It's actually harder (mentally) to do a
> multiplication step such as 8*7=56 than to catch a Frisbee -- and I claim

I learnt the multiplication table up to 100 by heart as a kid; I was made to. This is how I would do the multiplication now:

  8*7 = 8*(5+2) = 8*5 + 8*2 = 8*(10/2) + 8*2 = (8/2)*10 + 8*2 = 4*10 + 8*2
      = 40 + 8*2 = 40 + 10 + 6 = (4+1)*10 + 6 = 50 + 6 = 56

There is much understanding put into it (decimal numbers, laws of arithmetic).

Do you know the definition of multiplication?

  m * 0 = 0
  m * S(n) = m + (m * n)

(I put m balls into each of n boxes, and I collect the balls one box at a time.)

  m + 0 = m
  m + S(n) = S(m + n)

(I have two piles of balls and I merge them one ball at a time. I could explicitly put balls from one pile onto the other: m + S(n) = S(m) + n.)

  0 = 0
  n = m ==> S(n) = S(m)

(Do I have the same number of balls in both piles? Let's take one ball at a time from each pile at once and see if we are left with "empty piles" simultaneously.)

An AGI could do mining of, e.g., CIC for correspondence with the world. The bonus with CIC is that since you understand (e.g. the definition of multiplication on unary numbers), you can compute with it.

An AGI working with bigger numbers had better discover binary numbers. Could an AGI do it? Could it discover rational numbers? (It would initially believe that irrational numbers do not exist, as the early Pythagoreans believed.) After having discovered the basic grounding, it could be taught the more advanced things.

Perhaps CIC is simply too impractical.

Łukasz
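Those recursive definitions really do compute. A quick illustration (my own sketch, in Python rather than CIC; the nested-tuple unary encoding is an assumption for demonstration, not anything from the thread):

```python
# Peano-style arithmetic built only from zero and the successor S().
# A unary number is its nesting depth: 0, S(0), S(S(0)), ...

def S(n):
    """Successor: wrap a number one level deeper."""
    return ("S", n)

ZERO = 0

def add(m, n):
    # m + 0 = m;  m + S(n) = S(m + n)
    if n == ZERO:
        return m
    return S(add(m, n[1]))

def mul(m, n):
    # m * 0 = 0;  m * S(n) = m + (m * n)
    if n == ZERO:
        return ZERO
    return add(m, mul(m, n[1]))

def from_int(k):
    """Build the unary representation of a Python int."""
    n = ZERO
    for _ in range(k):
        n = S(n)
    return n

def to_int(n):
    """Count nesting depth to read a unary number back as an int."""
    k = 0
    while n != ZERO:
        n = n[1]
        k += 1
    return k

# 8 * 7 computed purely by the two recursive rules:
print(to_int(mul(from_int(8), from_int(7))))  # 56
```

The cost is visible immediately: every addition walks one of its arguments ball by ball, which is exactly why an AGI working with bigger numbers had better discover positional notation.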
Re: [agi] How should an AGI ponder about mathematics
On 4/23/07, John G. Rose <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Adding some thoughts on AGI math - If the AGI or a sub-processor of the
> AGI is allotted time to "sleep" or idle-process, it could lazily
> postulate and construct theorems with spare CPU cycles (cores are cheap
> nowadays), put things together and use those theorems to further test the
> processing of data structures and representations in new ways. When the
> AGI is first started it could have the proof engine Mizar or Coq built in
> with a base set of proofs. It could use existing theorems to operate over
> its data, and it can monitor the success and efficiency of the algorithms
> that it is using but implicitly understand that more efficient methods
> are possible.

This is part of "my point".

> The close mapping of mathematical structures and language to its existing
> operational framework begets efficiency - if the internal language is
> closely related to a mathematical language, it is better IMHO. This is
> probably not the case with existing AGIs; perhaps there is a close
> mapping to NL for NLP's sake, and for efficiency in "rolling it out"
> existing AGIs are probably more hardcoded/hardwired.

It is not that the internal language of AGIs is not mathematical (in the sense of the C-H isomorphism) because it is modeled on NL. Its use is to build (statistically) models of the world. The knowledge of the world needs to be heavily formalized before it can be fed to C-H. Formalization comes as an advanced use of the language, high in the dual network. My idea was to put CIC in there as a part of the body, so that an AGI could go beyond "counting on ten human fingers". The C-H could kick in when an AGI becomes a conscious programmer.
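For concreteness, the unary definitions discussed earlier in the thread are exactly the kind of thing a CIC-based proof engine takes as input. The following sketch is my own illustration in Lean 4 syntax (Coq's syntax would differ slightly, but the inductive definitions are the same idea):

```lean
-- Unary naturals built from zero and successor, as in CIC.
inductive N where
  | zero : N
  | S : N → N

-- m + 0 = m;  m + S n = S (m + n)
def add : N → N → N
  | m, .zero => m
  | m, .S n  => .S (add m n)

-- m * 0 = 0;  m * S n = m + (m * n)
def mul : N → N → N
  | _, .zero => .zero
  | m, .S n  => add m (mul m n)

-- Because the definitions compute, 2 * 2 = 4 holds by pure evaluation:
example :
    mul (.S (.S .zero)) (.S (.S .zero))
      = .S (.S (.S (.S .zero))) := rfl
```

This is the "bonus" Lukasz mentions: in CIC, understanding the definition and being able to compute with it are the same thing.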
Re: [agi] How should an AGI ponder about mathematics
On Monday 23 April 2007 10:03, Matt Mahoney wrote: > ... The brain is a billion times slower per step, has only about 7 > words of short term memory, ... For some appropriate meaning of "word" -- I'd suggest that "frame" might be more useful in thinking about what's going on. One of Miller's magical 7+/-2 "items" or "chunks" could be any coherent memory or concept (e.g. "That time we were in San Francisco and saw the street clown with the bush near Fisherman's Wharf.") I conjecture that the reason there is such a limited number of them is that each one is actually a copy of the entire semantic net (and not just, say, a pointer into it) which has a full-fledged activation pattern, connection strengths, etc., distinct from that of the other "items" in STM. We really are pigs in space when it comes to discrete symbol manipulation such as arithmetic or logic. It's actually harder (mentally) to do a multiplication step such as 8*7=56 than to catch a Frisbee -- and I claim we're using essentially the same mechanisms: recognize an entire frame, search/interpolate memory for the appropriate response, and actuate it. It's harder because it takes more effort, not less, to block out all the extraneous info from the senses in the "mental exercise". Someone who's just learned the rules of chess isn't a hell of a lot better than a computer when it comes to picking moves. A chess master manages to pack a lot more into his representation of any given position than the bare coordinates of the pieces -- his frame for a position is just as complex as the frame any of us has for any real-world situation. Similarly, understanding a sentence is a sequence of reconfigurations of the entire network, each of which reflects the partial possible world as created by the words heard thus far, and primes the interpretation process for the next one for meaning disambiguation, pronoun reference, and the like. 
For those of you playing with NL, here's an easy problem: show how your system would understand the same meaning from these two sentences: 1. Henry was a 17-year-old boy. 2. Henry was a lad of some 17 summers. Here's a hard problem: represent the *difference* in meaning between the two. Josh
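Josh leaves the representation open, but one illustrative (and entirely hypothetical) way to factor his two problems is to give both sentences the same propositional core frame and push the difference into style/connotation features. A toy sketch, assuming hand-built frames rather than anything learned:

```python
# Hypothetical toy frames: a shared propositional core plus style features.
sentence1 = {
    "core": {"entity": "Henry", "category": "male-person", "age_years": 17},
    "style": {"register": "plain", "archaic": False},
}
sentence2 = {
    "core": {"entity": "Henry", "category": "male-person", "age_years": 17},
    "style": {"register": "literary", "archaic": True,
              "age_unit_figurative": "summers", "hedge": "some"},
}

def same_meaning(a, b):
    # The "easy" problem: the propositional cores coincide.
    return a["core"] == b["core"]

def meaning_difference(a, b):
    # The "hard" problem: everything that is NOT shared between the frames.
    keys = set(a["style"]) | set(b["style"])
    return {k: (a["style"].get(k), b["style"].get(k))
            for k in keys if a["style"].get(k) != b["style"].get(k)}
```

The sketch also shows why the second problem is the hard one: the core features are finite and obvious, while the style features ("lad" vs. "boy", "summers" vs. "years") form an open-ended set that no one knows how to enumerate in advance.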
RE: [agi] How should an AGI ponder about mathematics
Hi, Adding some thoughts on AGI math - If the AGI or a sub-processor of the AGI is allotted time to "sleep" or idle-process, it could lazily postulate and construct theorems with spare CPU cycles (cores are cheap nowadays), put things together, and use those theorems to further test the processing of data structures and representations in new ways. When the AGI is first started it could have the proof engine Mizar or Coq built in with a base set of proofs. It could use existing theorems to operate over its data, and it could monitor the success and efficiency of the algorithms that it is using while implicitly understanding that more efficient methods are possible. The close mapping of mathematical structures and language to its existing operational framework begets efficiency - if the internal language is closely related to a mathematical language, it is better IMHO. This is probably not the case for existing AGIs; perhaps there is a close mapping to NL for NLP's sake, and for efficiency in "rolling it out" existing AGIs are probably more hardcoded/hardwired. Reading about Coq and CIC, the concept of the "Curry-Howard isomorphism" of typed lambda calculi is interesting. I have never heard of CIC. Is Mizar a different approach to proving? Mizar source code seems to be less available than Coq's... John -Original Message- From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED] > Hi, > How should an AGI think about formal mathematical ideas? What > formalization of a logical argument would be the most digestible? Many > mathematicians don't like formalized approaches, because they think on > the level of patterns (which correspond to understanding the reality), > not on the "assembly" level of formalizations. But AGI could take > advantage of perceiving and manipulating the formalization itself. > Although I live not far from Bialystok, I don't think Mizar is the > right formalization to use. Mizar is a sophisticated artificial > language. 
What I would like to see used is a powerful, intelligible > system with frugal formulation, a logic whose assertions combine > theorems and proofs, and whose proofs are algorithms: the Calculus of > Inductive Constructions (CIC). The system that implements CIC is Coq > (http://coq.inria.fr/). The AGI would still need to think on the > patterns level (sometimes called heuristics), and it would still be > very useful to translate Mizar to CIC (perhaps the AGI could do the > translation...) but to have a being embodied at once in the physical > world and in the CIC world, wow! That would certainly prove something > ;-)
Re: [agi] How should an AGI ponder about mathematics
--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote: > On 4/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > Ontic looks like an interesting and elegant formalism, but I don't see how > it > > would help an AGI learn mathematics. We are not yet at the point where we > can > > solve word problems like "if I pay for a $4.95 item with a $10 bill, how > much > > change should I get back?" Never mind the harder problem of proving > theorems. > > Give people calculators and they will just unlearn math, not having to > add their fees by themselves. But show them how calculators work, and > who knows, some of them might become mathematicians. > > But you are right in that an AGI could ultimately reprogram itself to > think in Ontic when it wants to. I think there is nothing wrong with giving a calculator (or a conventional computer) to an AGI to enhance its intelligence, just as computers enhance the intelligence of humans. We need to solve the NLP problem of converting word problems to equations, but then the equations (or programs) can be done more quickly and accurately on a computer. So to predict: I have 3 apples and eat 1. Now I have 2 apples. The first step is to match the first sentence to the learned pattern: I have X (noun)s and (remove) Y. The second step is to plug X - Y into the calculator and get Z. This step could be done by a language model trained by rote memorization, but it would be far less efficient and more error-prone. That is why people use calculators instead. The brain can execute sequential algorithms, just not very well. The brain is a billion times slower per step, has only about 7 words of short term memory, and has a few percent error rate per step. The third step is to match the pattern "Now I have Z (noun)s". Again, this is a language modeling problem. It is akin to grammar constraints such as number agreement or case agreement, which involve variable substitution spanning sentences that are individually correct. "I had an apple. 
Then I ate it." (Correct) "I had an apple. Then I ate them." (Number disagreement) "I had an apple. Then I eat it." (Tense disagreement) These are chained, context-sensitive substitution problems: I had -> I past tense -> I ate, and apple -> singular noun -> it, just like the original problem required a chain of context-sensitive substitutions: 3 apples -> X apples, and Z apples -> 2 apples. Current language models are still a few developmental stages away from solving this problem. -- Matt Mahoney, [EMAIL PROTECTED]
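Matt's three steps (match a learned surface pattern, hand the arithmetic to a calculator, re-instantiate the answer pattern) can be caricatured in a few lines. The regex below is purely an illustrative stand-in for what would really be a learned, statistical pattern matcher, and the verb list filling the "(remove)" slot is invented for the example:

```python
import re

# Step 1: the learned pattern "I have X (noun)s and (remove) Y."
# ("eat", "spend", ... are hypothetical fillers for the (remove) slot.)
PATTERN = re.compile(r"I have (\d+) (\w+?)s? and (?:eat|spend|lose|remove) (\d+)")

def solve(problem: str) -> str:
    m = PATTERN.match(problem)
    if m is None:
        raise ValueError("no learned pattern matches")
    x, noun, y = int(m.group(1)), m.group(2), int(m.group(3))
    # Step 2: the "calculator" computes exactly, instead of rote recall.
    z = x - y
    # Step 3: instantiate the answer pattern "Now I have Z (noun)s."
    return f"Now I have {z} {noun}s."
```

So `solve("I have 3 apples and eat 1.")` yields "Now I have 2 apples." The sketch also makes the hard part visible: everything interesting is hidden in step 1, where a real system must learn such patterns rather than have them written down.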
Re: [agi] How should an AGI ponder about mathematics
On 4/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: Ontic looks like an interesting and elegant formalism, but I don't see how it would help an AGI learn mathematics. We are not yet at the point where we can solve word problems like "if I pay for a $4.95 item with a $10 bill, how much change should I get back?" Never mind the harder problem of proving theorems. Give people calculators and they will just unlearn math, not having to add their fees by themselves. But show them how calculators work, and who knows, some of them might become mathematicians. But you are right in that an AGI could ultimately reprogram itself to think in Ontic when it wants to.
Re: [agi] How should an AGI ponder about mathematics
Ontic looks like an interesting and elegant formalism, but I don't see how it would help an AGI learn mathematics. We are not yet at the point where we can solve word problems like "if I pay for a $4.95 item with a $10 bill, how much change should I get back?" Never mind the harder problem of proving theorems. Learning word problems seems to me related to learning high-level grammar. I have 3 apples and eat one. 3 - 1 = 2. I have 2 apples left. I have 15 dollars and spend 6. 15 - 6 = 9. I have 9 dollars left. After many examples, we learn the pattern: I have X (noun)s and (remove) Y. X - Y = Z. I have Z (noun)s left. And we do this thousands of times with many different patterns. The problem is hard because it requires learning a grammar that spans sentences, and it requires a more complex form of variable substitution than pronoun dereferencing. We are not there yet. The developmental process is: phonemes, word segmentation, semantics, parts of speech, phrases, sentences, paragraphs. Practical language models are still at the level of semantics, and we need to get to the paragraph level. Of course we have to teach an AGI grade-school math before we can solve the much harder problem of proving theorems. Unlike simple math, there is no formula for discovering proofs. It is not computable. Mathematicians learn to do it by studying thousands of examples and using lots of heuristics in ways we don't understand. At best, proving theorems is an exponential search problem with no guarantee of success. Even if the problem is well defined, like chess, our understanding of heuristics is poor. Why did Deep Blue need to explore 200,000,000 chess positions per second, compared to 3 per second for Kasparov? --- "J. Storrs Hall, PhD." 
<[EMAIL PROTECTED]> wrote: > Look also at Ontic: > http://lambda-the-ultimate.org/classic/message6641.html > http://ttic.uchicago.edu/%7Edmcallester/ontic-spec.ps > http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/kr/systems/ontic/0.html > http://citeseer.ist.psu.edu/witty95ontic.html > > Josh > > On Saturday 21 April 2007 17:45, Lukasz Stafiniak wrote: > > Hi, > > > > How should an AGI think about formal mathematical ideas? What > > formalization of a logical argument would be the most digestible? Many > > mathematicians don't like formalized approaches, because they think on > > the level of patterns (which correspond to understanding the reality), > > not on the "assembly" level of formalizations. But AGI could take > > advantage of perceiving and manipulating the formalization itself. > > Although I live not far from Bialystok, I don't think Mizar is the > > right formalization to use. Mizar is a sophisticated artificial > > language. What I would like to see used is a powerful, intelligible > > system with frugal formulation, a logic whose assertions combine > > theorems and proofs, and whose proofs are algorithms: the Calculus of > > Inductive Constructions (CIC). The system that implements CIC is Coq > > (http://coq.inria.fr/). The AGI would still need to think on the > > patterns level (sometimes called heuristics), and it would still be > > very useful to translate Mizar to CIC (perhaps the AGI could do the > > translation...) but to have a being embodied at once in the physical > > world and in the CIC world, wow! That would certainly prove something > > ;-) > > > > Best regards, > > Lukasz Stafiniak -- Matt Mahoney, [EMAIL PROTECTED]
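Matt's point that proof discovery is at best exponential search can be made concrete with a toy forward-chaining prover over Horn clauses. This is a deliberately minimal sketch, not a model of any real prover (Mizar's and Coq's machinery is vastly more sophisticated): without heuristics, the only strategy is to grow the set of derivable facts until the goal appears or the search saturates.

```python
# Brute-force forward chaining over Horn clauses.
# Each rule is (premises, conclusion); facts and goals are opaque strings.
def forward_chain(facts, rules, goal, max_rounds=100):
    known = set(facts)
    for _ in range(max_rounds):
        if goal in known:
            return True
        # Fire every rule whose premises are all known.
        new = {concl for prems, concl in rules
               if all(p in known for p in prems) and concl not in known}
        if not new:          # saturated: nothing more is derivable
            return goal in known
        known |= new
    return goal in known

# Tiny Peano-flavored example: derive that 4 is even from "0 is even".
rules = [
    (("even(0)",), "even(SS0)"),
    (("even(SS0)",), "even(SSSS0)"),
]
```

Here the rule set is a chain, so search is trivial; with realistic axiom sets the candidate conclusions branch at every round, which is exactly the exponential blowup that heuristics (and human mathematicians) exist to tame.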
Re: [agi] How should an AGI ponder about mathematics
Look also at Ontic: http://lambda-the-ultimate.org/classic/message6641.html http://ttic.uchicago.edu/%7Edmcallester/ontic-spec.ps http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/kr/systems/ontic/0.html http://citeseer.ist.psu.edu/witty95ontic.html Josh On Saturday 21 April 2007 17:45, Lukasz Stafiniak wrote: > Hi, > > How should an AGI think about formal mathematical ideas? What > formalization of a logical argument would be the most digestible? Many > mathematicians don't like formalized approaches, because they think on > the level of patterns (which correspond to understanding the reality), > not on the "assembly" level of formalizations. But AGI could take > advantage of perceiving and manipulating the formalization itself. > Although I live not far from Bialystok, I don't think Mizar is the > right formalization to use. Mizar is a sophisticated artificial > language. What I would like to see used is a powerful, intelligible > system with frugal formulation, a logic whose assertions combine > theorems and proofs, and whose proofs are algorithms: the Calculus of > Inductive Constructions (CIC). The system that implements CIC is Coq > (http://coq.inria.fr/). The AGI would still need to think on the > patterns level (sometimes called heuristics), and it would still be > very useful to translate Mizar to CIC (perhaps the AGI could do the > translation...) but to have a being embodied at once in the physical > world and in the CIC world, wow! That would certainly prove something > ;-) > > Best regards, > Lukasz Stafiniak
Re: [agi] How should an AGI ponder about mathematics
On 4/22/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: Well Matt, there's not only one hard problem! NL understanding is hard, but theorem-proving is hard too, and narrow-AI approaches have not succeeded at proving nontrivial theorems except in very constrained domains... I happen to think that both can be solved by the same sort of architecture, though... No doubt about that. Classical mathematicians see understanding in interpreting formulas in models, whereas constructivist mathematicians see understanding in applying proofs to objects, but both reduce to Piaget ;-)
Re: [agi] How should an AGI ponder about mathematics
On 4/22/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: Well Matt, there's not only one hard problem! NL understanding is hard, but theorem-proving is hard too, and narrow-AI approaches have not succeeded at proving nontrivial theorems except in very constrained domains... Verification of sloppy proofs by human mathematicians is also hard, because it needs both of these problems almost solved... just a side note.
Re: [agi] How should an AGI ponder about mathematics
Well Matt, there's not only one hard problem! NL understanding is hard, but theorem-proving is hard too, and narrow-AI approaches have not succeeded at proving nontrivial theorems except in very constrained domains... I happen to think that both can be solved by the same sort of architecture, though... -- Ben G On 4/21/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: --- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote: > How should an AGI think about formal mathematical ideas? I think the hard problem is in learning how to apply it. For example, suppose you say to an AGI, "Bob and Alice shared a $100 prize. How much did Bob receive?" Mathematically, it is simple, but the problem of converting natural language to mathematical formulas (when appropriate) is unsolved. -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] How should an AGI ponder about mathematics
--- Lukasz Stafiniak <[EMAIL PROTECTED]> wrote: > How should an AGI think about formal mathematical ideas? I think the hard problem is in learning how to apply it. For example, suppose you say to an AGI, "Bob and Alice shared a $100 prize. How much did Bob receive?" Mathematically, it is simple, but the problem of converting natural language to mathematical formulas (when appropriate) is unsolved. -- Matt Mahoney, [EMAIL PROTECTED]
[agi] How should an AGI ponder about mathematics
Hi, How should an AGI think about formal mathematical ideas? What formalization of a logical argument would be the most digestible? Many mathematicians don't like formalized approaches, because they think on the level of patterns (which correspond to understanding the reality), not on the "assembly" level of formalizations. But AGI could take advantage of perceiving and manipulating the formalization itself. Although I live not far from Bialystok, I don't think Mizar is the right formalization to use. Mizar is a sophisticated artificial language. What I would like to see used is a powerful, intelligible system with frugal formulation, a logic whose assertions combine theorems and proofs, and whose proofs are algorithms: the Calculus of Inductive Constructions (CIC). The system that implements CIC is Coq (http://coq.inria.fr/). The AGI would still need to think on the patterns level (sometimes called heuristics), and it would still be very useful to translate Mizar to CIC (perhaps the AGI could do the translation...) but to have a being embodied at once in the physical world and in the CIC world, wow! That would certainly prove something ;-) Best regards, Lukasz Stafiniak
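The phrase "assertions combine theorems and proofs, and proofs are algorithms" is the Curry-Howard reading of CIC. A minimal sketch of the idea, written here in Lean 4 (a close cousin of Coq; the syntax differs, but it implements the same family of calculi of inductive constructions, so take this as illustrative rather than Coq-specific):

```lean
-- Unary naturals as an inductive construction.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by structural recursion, exactly as in the thread:
-- m + 0 = m ;  m + S(n) = S(m + n)
def add : N → N → N
  | m, N.zero   => m
  | m, N.succ n => N.succ (add m n)

-- A proof is a program: the term `rfl` certifies that both sides
-- compute to the same value, by running the definition of `add`.
theorem add_succ (m n : N) : add m (N.succ n) = N.succ (add m n) := rfl
```

Since the theorem holds by the computation rule of `add` itself, the checker verifies it by evaluation; this is the sense in which an agent that "understands" a definition in CIC can also compute with it.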