Re: [agi] Narrow AGI

2019-08-14 Thread Robert Levy
From: Stefan Reich via AGI Sent: Tuesday, 13 August 2019 21:32 To: AGI Subject: Re: [agi] Narrow AGI None of these are actual downers... they're all just mathematical exercises. ("Almost all strings" is practically just as uninteresting as the mathematica

Re: [agi] Narrow AGI

2019-08-14 Thread Alan Grimes via AGI
Nanograte Knowledge Technologies wrote: I was wondering, what is currently regarded as being the best language to develop AI in? Python is the most popular these days, and that's why I have a Python book on my desk right now. That said, I'd tend to shut down any discussion of low-level implement

Re: [agi] Narrow AGI

2019-08-14 Thread Basile Starynkevitch
On 8/14/19 10:11 AM, Basile Starynkevitch wrote: On 8/14/19 6:57 AM, Nanograte Knowledge Technologies wrote: I was wondering, what is currently regarded as being the best language to develop AI in? It should become a programming language designed by some AGI, and with the implementation s

Re: [agi] Narrow AGI

2019-08-14 Thread Basile Starynkevitch
On 8/14/19 6:57 AM, Nanograte Knowledge Technologies wrote: I was wondering, what is currently regarded as being the best language to develop AI in? It should become a programming language designed by some AGI, and with the implementation software generated by that AGI. In a more pragmati

Re: [agi] Narrow AGI

2019-08-13 Thread Nanograte Knowledge Technologies
I was wondering, what is currently regarded as being the best language to develop AI in? From: Stefan Reich via AGI Sent: Tuesday, 13 August 2019 21:32 To: AGI Subject: Re: [agi] Narrow AGI None of these are actual downers... they're all just mathema

Re: [agi] Narrow AGI

2019-08-13 Thread Stefan Reich via AGI
None of these are actual downers... they're all just mathematical exercises. ("Almost all strings" is practically just as uninteresting as the mathematical definition of "almost all numbers".) On Sat, 10 Aug 2019 at 23:36, Matt Mahoney wrote: On Sat, Aug 10, 2019 at 9:04 AM korrelan wrote:

Re: [agi] Narrow AGI

2019-08-13 Thread Mike Archbold
Hey "agi" (formerly Jim Bromer, just kidding buddy :) I like this comment of yours: "The misunderstanding that a 'predictor' is the same as absolute > knowledge that is always right has no basis in the world that might be known > from common sense." This captures the tension between the mathemat

Re: [agi] Narrow AGI

2019-08-13 Thread agi
"Suppose you have a simple learner that can predict any computable sequence of symbols with some probability at least as good as random guessing. Then I can create a simple sequence that your predictor will get wrong 100% of the time. My program runs a copy of your program and outputs something

Re: [agi] Narrow AGI

2019-08-11 Thread Basile Starynkevitch
On 8/11/19 10:22 AM, Basile Starynkevitch wrote: On 8/11/19 10:08 AM, Basile Starynkevitch wrote: On 8/10/19 4:50 PM, Stefan Reich via AGI wrote: > Language and mathematics are constructs created by an intelligent system; they are not an insight into how the intelligent system functions.

Re: [agi] Narrow AGI

2019-08-11 Thread Basile Starynkevitch
On 8/11/19 10:08 AM, Basile Starynkevitch wrote: On 8/10/19 4:50 PM, Stefan Reich via AGI wrote: > Language and mathematics are constructs created by an intelligent system; they are not an insight into how the intelligent system functions. The interesting and AGI-related question is /how/

Re: [agi] Narrow AGI

2019-08-11 Thread Basile Starynkevitch
On 8/10/19 4:50 PM, Stefan Reich via AGI wrote: > Language and mathematics are constructs created by an intelligent system; they are not an insight into how the intelligent system functions. The interesting and AGI-related question is /how/ mathematicians think (and the mentalese

RE: [agi] Narrow AGI

2019-08-10 Thread peter
This is clearly wrong. We gain knowledge and improve our learning. So can (a correctly designed) AI. -Original Message- From: Matt Mahoney Sent: Saturday, August 10, 2019 2:35 PM To: AGI Subject: Re: [agi] Narrow AGI There is no such thing as recursively self-improving software

Re: [agi] Narrow AGI

2019-08-10 Thread Matt Mahoney
On Sat, Aug 10, 2019 at 9:04 AM korrelan wrote: >Legg proved there is no such thing as a simple, universal learner. So we can stop looking for one. With all due respect to everyone involved, this kind of comprehensive sweeping statement is both narrow-minded and counterproductive. Wh

Re: [agi] Narrow AGI

2019-08-10 Thread Matt Mahoney
On Sat, Aug 10, 2019 at 11:14 AM Ben Goertzel wrote: > The point is, Matt, you can't copy my quantum predictor without me knowing you were copying it. Basic principle of quantum cryptography. This is irrelevant to AGI though, just a sorta fun thought experiment... Actually it is relev

Re: [agi] Narrow AGI

2019-08-10 Thread Secretary of Trades
http://research.ibm.com/ibm-q/quantum-card-test/ It kind of renders complex information less probabilistic... Takes out prophets! 😛 On 10.08.2019 18:13, Ben Goertzel wrote: The point is, Matt, you can't copy my quantum predictor without me knowing you were copying it. Basic principle of quan

Re: [agi] Narrow AGI

2019-08-10 Thread Ben Goertzel
The point is, Matt, you can't copy my quantum predictor without me knowing you were copying it. Basic principle of quantum cryptography. This is irrelevant to AGI though, just a sorta fun thought experiment... On Sat, Aug 10, 2019 at 10:27 AM Matt Mahoney wrote: > > Evolution is not time rever

Re: [agi] Narrow AGI

2019-08-10 Thread Stefan Reich via AGI
> Language and mathematics are constructs created by an intelligent system; they are not an insight into how the intelligent system functions. But we can use those to simulate thought.

Re: [agi] Narrow AGI

2019-08-10 Thread korrelan
>Legg proved there is no such thing as a simple, universal learner. So we can stop looking for one. With all due respect to everyone involved, this kind of comprehensive sweeping statement is both narrow-minded and counterproductive. >Suppose you have a simple learner that can predict any compu

Re: [agi] Narrow AGI

2019-08-10 Thread Stefan Reich via AGI
Yeah that's nice and all, but I don't see how this would steer our research in any way. On Sat, 10 Aug 2019 at 03:09, Matt Mahoney wrote: > Suppose you have a simple learner that can predict any computable sequence of symbols with some probability at least as good as random guessing. Then I

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
Evolution is not time reversible, so it can't run on a quantum computer. Quantum processes can also produce uncomputable sequences since they can produce infinite random bits. But that aside, let's say you have a simple quantum learner that can predict any quantum computable sequence, which is any

Re: [agi] Narrow AGI

2019-08-09 Thread Mike Archbold
I think it may be that some people think in an either/or: EITHER we fuse a number of narrow AIs OR we build a general AI from which the narrow AIs emerge later on. Matt, I suspect you are thinking this way. The third alternative is doing both concurrently. The narrow AIs should be viewed as intri

Re: [agi] Narrow AGI

2019-08-09 Thread Ben Goertzel
What if my program was created by quantum evolutionary learning, and carries out its predictions while running in an uncollapsed quantum state, coupled with the classical system reading out its predictions in a way that doesn't collapse its internal memory states... Then I can set it up so you can

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
Suppose you have a simple learner that can predict any computable sequence of symbols with some probability at least as good as random guessing. Then I can create a simple sequence that your predictor will get wrong 100% of the time. My program runs a copy of your program and outputs something diff
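
A minimal sketch of the diagonal argument above, in Python. The predictor shown is a hypothetical stand-in for any computable predictor over {0, 1}; the point is only that the adversary runs a copy of whatever predictor it is given and emits the opposite symbol, so the predictor is wrong at every step.

```python
# Sketch only: "predictor" is a placeholder for any computable predictor.
def predictor(history):
    """Predict the most frequent symbol seen so far (ties -> 0)."""
    return 1 if history.count(1) > history.count(0) else 0

def adversarial_sequence(length, predict):
    """Build a sequence the given predictor gets wrong at every position."""
    history = []
    for _ in range(length):
        guess = predict(history)   # run a copy of the predictor
        history.append(1 - guess)  # output the opposite symbol
    return history

seq = adversarial_sequence(10, predictor)
hits = sum(predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(seq, "correct predictions:", hits)  # hits == 0 by construction
```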

Re: [agi] Narrow AGI

2019-08-09 Thread Ben Goertzel
> Legg proved there is no such thing as a simple, universal learner. So we can stop looking for one. To be clear, these algorithmic information theory results don't show there is no such thing as a simple learner that is universal in our physical universe... I'm not saying there necessaril

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
In order for an AI to apply knowledge from one domain to a different domain, there has to be mutual information between the domains. For example, text compression algorithms don't work well on images and vice versa. To compress both well, you need to write algorithms for both and apply the appropr
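
A rough toy illustration of the compression point, assuming a general-purpose byte compressor (zlib) as a stand-in for a text-oriented model, with random bytes standing in for high-entropy image-like data. Redundant English text compresses far better than the noisy data, which is the cross-domain gap in miniature.

```python
# Toy illustration only; zlib stands in for a text-oriented compressor,
# and random bytes stand in for high-entropy image-like data.
import os
import zlib

text = ("the quick brown fox jumps over the lazy dog " * 200).encode()
noise = os.urandom(len(text))  # no structure a text model can exploit

for name, data in [("text", text), ("noisy pixels", noise)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.2%} of original size")
```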

Re: [agi] Narrow AGI

2019-08-08 Thread Jim Bromer
I do not see any reason why genuine learning that could occur in one field (whatever that is) could not be adequate for looking at another field. The problem is that fundamental knowledge is not incorporated into these AI programs because the system becomes too complex (complicated). I do not know

Re: [agi] Narrow AGI

2019-08-05 Thread Manuel Korfmann
https://gist.github.com/LemonAndroid/7a5f2f521d0e0aa2f8ec8dcce28dc904#file-pain-iq-2-rb ```ruby class PAINDOTIQ PAINFUL_THOUGHTS = { "nuking the whole world" => [0, 20, 300],

Re: [agi] Narrow AGI

2019-08-05 Thread Mike Archbold
On 8/5/19, Matt Mahoney wrote: > Narrow AI doesn't grow into AGI. AGI is lots of narrow AI specialists put > together. Nobody in an organization can do what the organization does. > Every member either knows one specific task well, or can refer you to > someone who does. Kind of like the organizat

Re: [agi] Narrow AGI

2019-08-05 Thread Matt Mahoney
Narrow AI doesn't grow into AGI. AGI is lots of narrow AI specialists put together. Nobody in an organization can do what the organization does. Every member either knows one specific task well, or can refer you to someone who does. Kind of like the organization of the structures of your brain. On
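
A toy sketch (illustrative only, not from the post itself) of the "organization of specialists" picture: each agent either handles the one task it knows well or refers the request to another agent it knows about.

```python
# Sketch of narrow specialists wired together by referral; names are illustrative.
class Specialist:
    def __init__(self, skill, handler, referrals=None):
        self.skill = skill                 # the one task this agent knows well
        self.handler = handler
        self.referrals = referrals or []   # agents it can refer you to

    def handle(self, task, payload):
        if task == self.skill:
            return self.handler(payload)
        for other in self.referrals:       # "refer you to someone who does"
            result = other.handle(task, payload)
            if result is not None:
                return result
        return None

ocr = Specialist("ocr", lambda img: f"text extracted from {img}")
translator = Specialist("translate", lambda txt: f"translated: {txt}")
front_desk = Specialist("route", lambda _: None, referrals=[ocr, translator])

print(front_desk.handle("translate", "bonjour"))  # referred on to the translator
```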

Re: [agi] Narrow AGI

2019-08-04 Thread Manuel Korfmann
Shout out to Stefan for being so real here. Signed, The realist. > On 4. Aug 2019, at 23:54, Stefan Reich via AGI wrote: > Maybe the best interface in the early going from one "narrow AGI" to another would be somewhere between an oversimplified English with some structure and mayb

Re: [agi] Narrow AGI

2019-08-04 Thread Stefan Reich via AGI
> Maybe the best interface in the early going from one "narrow AGI" to another would be somewhere between an oversimplified English with some structure and maybe JSON. Yes. It's called agi.blue. Here's your JSON: [ {"a":"AGI","b":"means","c":"artificial general intelligence","slice":""} ] htt

Re: [agi] Narrow AGI

2019-08-04 Thread Mike Archbold
On 8/2/19, Secretary of Trades wrote: > 1), 2) and 5) nouns extracted for phraseology. Unidentified "F" Objects in 3), 4). Judgment is a highly intelligent procedure; it shouldn't be the same as discrimination or recognition. Instead of 4): should be able to use the existing communica

Re: [agi] Narrow AGI

2019-08-02 Thread Stefan Reich via AGI
…8155b9 -Original Message- From: Ben Goertzel Sent: Thursday, August 1, 2019 3:16 AM To: AGI Subject: [agi] Narrow AGI https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce

Re: [agi] Narrow AGI

2019-08-02 Thread Secretary of Trades
1), 2) and 5) nouns extracted for phraseology. Unidentified "F" Objects in 3), 4). Judgment is a highly intelligent procedure; it shouldn't be the same as discrimination or recognition. Instead of 4): should be able to use the existing communication space in the usual manner. And most of all, A

Re: [agi] Narrow AGI

2019-08-02 Thread Stefan Reich via AGI
…Goertzel Sent: Thursday, August 1, 2019 3:16 AM To: AGI Subject: [agi] Narrow AGI https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce -- Ben Goertzel, PhD http://goertzel.org “The only people for me

Re: [agi] Narrow AGI

2019-08-01 Thread rouncer81
Machines today only want what we make them want; only we truly want things. So if you want AGI, you need to create something with a true purpose, not an artificial one. I think anything more than narrow AI is blowing it out of superfluous proportions, and we need something more important to do

Re: [agi] Narrow AGI

2019-08-01 Thread Mike Archbold
and importantly, as Ben predicts, 6) the ability for a narrow AGI to utilize multiple sub-AGIs seamlessly within a function area group increases On 8/1/19, Mike Archbold wrote: > I like this editorial but I'm not sure "Narrow AGI" is the best label. At the moment I don't have a better name for

Re: [agi] Narrow AGI

2019-08-01 Thread Mike Archbold
I like this editorial but I'm not sure "Narrow AGI" is the best label. At the moment I don't have a better name for it though. I mean, I agree in principle but it's like somebody saying "X is a liberal conservative." X might really be so, but it might be that... oh hell, why don't we just call it "

Re: [agi] Narrow AGI

2019-08-01 Thread Costi Dumitrescu
So Mars gets conquered by AI robots. What Tensor Flaw is so intelligent about surgery or proving math theorems? Bias? On 01.08.2019 13:16, Ben Goertzel wrote: https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce

RE: [agi] Narrow AGI

2019-08-01 Thread peter
gence: https://medium.com/intuitionmachine/from-narrow-to-general-ai-e21b568155b9 -Original Message- From: Ben Goertzel Sent: Thursday, August 1, 2019 3:16 AM To: AGI Subject: [agi] Narrow AGI <https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618

Re: [agi] Narrow AGI

2019-08-01 Thread Duncan Murray
That was a good article - I generally agree with it, but am a little skeptical about different industries sharing knowledge openly enough for it to be completely effective. It will most likely turn out to be a lot of paywalls / walled gardens. On Thu, Aug 1, 2019 at 7:47 PM Ben Goertzel wrote:

[agi] Narrow AGI

2019-08-01 Thread Ben Goertzel
https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are mad to live, mad to talk, mad to be saved, desirous of everything at the same time, the ones who never yawn or say