Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Ben Goertzel
On Sun, Feb 17, 2019 at 2:10 PM justcamel wrote: > > So you are a supporter of the idea that AGI will emerge from defining 10k > nonsensical public variables? Why are you giving this full-time troll a stage? Dude... this is the Trump Era ... and you're asking why give trolls a stage? I though

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread justcamel
Ben, the world is experiencing never-seen-before levels of suffering, exploitation, destruction, chaos and insanity. The number of people who understand the nature of the process which is causing a whole civilization (or rather their quality of consciousness) to degenerate is so low ... it's ju

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Joshua Maurice
Probably most people here haven't had time to look at 15k lines of code and form an evaluation of it. Probably most people here share your serious concerns, we're just not sure that banning someone from a mailing list is a very important part of minimizing pain/suffering and maximizing goodness/chi

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread justcamel
Taking 30 seconds to look at the file should be enough ... and I said implement regex/filters that block postings containing certain strings ... I am the last person to ban anybody from anywhere. It's just insane for 75% of the list's postings to be about the "work" of a passionate troll.
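[Editor's note] The regex/filter idea mentioned above can be sketched in a few lines. This is a generic illustration, not any actual mailing-list software's API; the blocked phrases are placeholders, not strings from this thread.

```python
import re

# Hypothetical blocklist; the patterns below are placeholders.
BLOCKED_PATTERNS = [
    re.compile(r"\bsome blocked phrase\b", re.IGNORECASE),
    re.compile(r"\banother banned string\b", re.IGNORECASE),
]

def should_hold(posting: str) -> bool:
    """Return True if the posting matches any blocked pattern
    and should be held for moderation rather than delivered."""
    return any(p.search(posting) for p in BLOCKED_PATTERNS)
```

A list manager would hook a check like this into the delivery pipeline, holding rather than silently dropping matches so false positives can be released.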

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Stefan Reich via AGI
Steve, OK, thanks, I think I kind of see what you mean. My approach to AI is to simulate human dialog. All our lives, we try to figure out complex problems in dialog, and are successful, so when the AI joins this dialog, it can participate in the truth-finding. Sure, we're never 100% in our un

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Stefan Reich via AGI
Well then let's just up the volume of the non-troll posts :) On Sun, 17 Feb 2019 at 12:34, justcamel wrote: > Taking 30 seconds to look at the file should be enough ... and I said > implement regex/filters that block postings containing certain strings > ... I am the last person to ban anybody f

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Stefan Reich via AGI
I'm not sure how one would take the next step from a random-speech-generating network like that. We do want the speech to mean something. My new approach is to incorporate semantics into a rule engine right from the start. On Sun, 17 Feb 2019 at 02:09, Ben Goertzel wrote: > Rob, > > These deep N
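[Editor's note] A minimal sketch of what "semantics in a rule engine from the start" might look like: each surface pattern carries a builder for a semantic frame, so matching a sentence yields meaning, not just text. The rule format and frame fields here are illustrative assumptions, not Stefan's actual engine.

```python
# Each rule pairs a token pattern (with ?variables) with a function
# that builds a semantic frame from the variable bindings.
RULES = [
    (("I", "went", "to", "?place"),
     lambda b: {"event": "go", "agent": "speaker", "destination": b["?place"]}),
]

def match(pattern, tokens):
    """Return variable bindings if the tokens fit the pattern, else None."""
    if len(pattern) != len(tokens):
        return None
    bindings = {}
    for p, t in zip(pattern, tokens):
        if p.startswith("?"):
            bindings[p] = t
        elif p != t:
            return None
    return bindings

def interpret(sentence):
    """Return the semantic frame of the first matching rule, else None."""
    tokens = sentence.split()
    for pattern, build in RULES:
        b = match(pattern, tokens)
        if b is not None:
            return build(b)
    return None
```

The point of the sketch is that the engine's output is a structured frame rather than generated surface text, so the meaning is available to later reasoning steps.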

Re: [agi] Why are people afraid of robots? Past life memories.

2019-02-17 Thread Stefan Reich via AGI
> You can't upload yourself. You exist a layer above this physical realm. THIS is a great statement. On Sun, 17 Feb 2019 at 08:47, justcamel wrote: > You can't upload yourself. You exist a layer above this physical realm. > > Super Mario can't upload the controlling player "into" Luigi. It does

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Ben Goertzel
One can see the next steps from the analogy of deep NNs for computer vision. First they did straightforward visual analytics, then they started worrying more about the internal representations, and now in the last 6 months or so there is finally a little progress in getting sensible internal repre

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Jim Bromer
I think an ad-hoc logic would be something like the logic of all computer programs. No question that a computer program has something to do with logic, but a generalization of the logic of all programs would involve something more of an informal logic. You can begin to develop a systems-analysis ap

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Jim Bromer
These days a symbolic system is usually seen in the form of a network - as almost everyone in this group knows. The idea that a symbolic network will need deep NNs seems a little obscure except as an immediate practical matter. Jim Bromer On Sun, Feb 17, 2019 at 8:27 AM Ben Goertzel

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Stefan Reich via AGI
I mean, the results are impressive for sure. Almost on the level of the usual text-farm pseudo-content made through cheap labor. Incidentally, Ars Technica's commenters feel that the world just ended because of this software. (The comment section there is the weirdest ever IMHO...) https://arstec

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Stefan Reich via AGI
Is that an anti-NN argument? Not exactly sure what you're saying there. On Sun, 17 Feb 2019 at 15:42, Jim Bromer wrote: > These days a symbolic system is usually seen in the form of a network - as > almost everyone in this group knows. The idea that a symbolic network will > need deep NNs is see

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread A.T. Murray
Steve Richfield asks: > 1. THEORY: In broad computer science terms, how does your system work? > From what I can tell, it is an ad hoc text manipulation program capable > of gathering information and answering simple questions within the limited > subject domains that have been programmed. Right?

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Jim Bromer
The most significant advancements seem to be made by using NNs with categorical feature detection or by using discrete systems in a network of some kind. The networks may not be explicit in discrete methods, but even in the earliest developments they were intrinsic to the case detection. Discrete

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Jim Bromer
The argument of discrete vs weighted systems and GOFAI vs neural networks seem pretty dated to me. So I guess I should be careful on the terms that I use. A symbolic network would have to operate on different levels. One thing I have learned is that you do not want to translate information in terms

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
On Mon, Feb 18, 2019 at 2:27 AM Ben Goertzel wrote: > ... > Don't get me wrong tho, I don't think this is the golden path to AGI > or anything By contrast, if anyone does want to look for the golden path to AGI, rather than the next step down a dead end as usual! In case someone here is loo

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Stefan Reich via AGI
Nothing wrong with pushing your own results if you consider them worthwhile... On Sun, 17 Feb 2019 at 21:46, Rob Freeman wrote: > On Mon, Feb 18, 2019 at 2:27 AM Ben Goertzel wrote: > >> ... >> Don't get me wrong tho, I don't think this is the golden path to AGI >> or anything > > > By cont

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
On Mon, Feb 18, 2019 at 10:05 AM Stefan Reich via AGI wrote: > Nothing wrong with pushing your own results if you consider them > worthwhile... > Well, I think on one level it's much the same as Pissanetzky. Pissanetzky's approach is a meaningful way of relating elements which generates new patterns. Yo

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Stefan Reich via AGI
> demo.chaoticlanguage.com It works with "I went to Brazil", but seems to break with "In Brazil, people are friendly" (it creates "Brazil people" as a node). Any way to give it feedback? On Sun, 17 Feb 2019 at 22:48, Rob Freeman wrote: > On Mon, Feb 18, 2019 at 10:05 AM Stefan Reich via AGI < >

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
Feedback? To me? Any number of ways to break it. It's old now. 20 years back. And the data set is a few tens of thousands of words I scraped up from some websites back in the day. Just treat it as a proof of concept: you get (meaningful) hierarchy from novel rearrangements of word vector "embeddings".
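[Editor's note] The proof-of-concept claim, hierarchy from rearrangements of distributional representations, can be illustrated on a toy corpus: words that occur in the same contexts form substitutable classes, and sequences over those classes give a second level of structure. The corpus and the shared-context test below are simplifying assumptions, not Rob's actual system.

```python
from collections import defaultdict

# Toy corpus standing in for the scraped data described above.
corpus = [
    "i went to brazil",
    "i went to france",
    "she went to brazil",
]

# Record each word's observed (left-neighbor, right-neighbor) contexts.
contexts = defaultdict(set)
for sent in corpus:
    toks = sent.split()
    for i, w in enumerate(toks):
        left = toks[i - 1] if i > 0 else "<s>"
        right = toks[i + 1] if i + 1 < len(toks) else "</s>"
        contexts[w].add((left, right))

def substitutable(a, b):
    """Two words are substitutable if they share an observed context."""
    return bool(contexts[a] & contexts[b])
```

Here "brazil"/"france" and "i"/"she" fall into the same classes because they share contexts, which is the kind of emergent grouping the post describes at toy scale.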

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Ben Goertzel
*** I was wrong to start with embedding vectors as the base representation for patterns. I now think the way to implement it is not with vectors, but directly in a network of observed language sequences. *** This is what we're doing in our OpenCog-based language learning project. But we're using
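[Editor's note] A hedged sketch of the "network of observed language sequences" idea: store word-to-word transitions as weighted edges in a graph, rather than compressing observations into embedding vectors. This is a generic illustration, not the OpenCog project's actual representation.

```python
from collections import defaultdict

class SequenceNetwork:
    """Keep observed adjacencies directly, as counted edges in a graph."""

    def __init__(self):
        self.edges = defaultdict(int)   # (word_a, word_b) -> observed count

    def observe(self, sentence):
        """Count each adjacent word pair in the sentence."""
        toks = sentence.split()
        for a, b in zip(toks, toks[1:]):
            self.edges[(a, b)] += 1

    def successors(self, word):
        """Words observed immediately after `word`, with counts."""
        return {b: n for (a, b), n in self.edges.items() if a == word}
```

Unlike a vector embedding, nothing is averaged away: every observed sequence remains recoverable from the network, which is the property the post argues for.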

Re: [agi] Why are people afraid of robots? Past life memories.

2019-02-17 Thread Alan Grimes
justcamel wrote: > If you dissolve the identification with form and enjoy true freedom > then why would you go back to identifying with yet another embodiment? Normal humans are hard-locked to their existing form. Transhumanists wish to use technology to change that form to some degree or other,

Re: [agi] Why are people afraid of robots? Past life memories.

2019-02-17 Thread justcamel
Spiritual enlightenment is the realization that there never was any "binding". You being _something_ is just an idea. The 12 year old child playing Super Mario never really is bound to Super Mario ... it's just an assumed role ... an identification with the form of Mario ... You are consciousn

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
On Mon, Feb 18, 2019 at 4:01 PM Ben Goertzel wrote: > *** > ... > And likely the way to do this is to set the network oscillating, and > vary inhibition to get the resolution of "invariants" you want. > *** > > But we are not doing that. Interesting... Cool. Maybe there could be a match. I wan
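[Editor's note] One way to read "vary inhibition to get the resolution of invariants you want" is as a threshold on connection strength: raise the threshold (stronger inhibition) and groupings fragment into finer invariants; lower it and they merge. The union-find grouping below is a loose analogy of my own, not the oscillatory mechanism Ben describes.

```python
from collections import defaultdict

def components(edges, threshold):
    """Group nodes connected by edges with weight >= threshold.
    Higher threshold acts like stronger inhibition: finer groups."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), w in edges.items():
        find(a); find(b)                    # register every node
        if w >= threshold:
            union(a, b)

    groups = defaultdict(set)
    for n in parent:
        groups[find(n)].add(n)
    return sorted(sorted(g) for g in groups.values())
```

With edges {("a","b"): 0.9, ("b","c"): 0.4}, a threshold of 0.5 separates "c" out, while 0.3 merges all three nodes: the same network yields invariants at different resolutions as the cutoff moves.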