> A summary: we are looking at the idea that there are two fundamental kinds
> of putative AGI, (1) and (3), plus their hybrid (2), which forms a third
> approach, as follows:
>
> (1) C-AGI      computer substrate only, including neuromorphic equivalents.
> (2) H-AGI      hybrid of (1) and (3). The inorganic version is a new kind
> of neuromorphic chip; the organic version has ... erm ... organics in it.
> (3) S-AGI      synthetic AGI, organic or inorganic. Natural brain physics
> only. No computer.
>
> (aside: S-AGI just came out of my fingers. I hope this is OK, Dorian!)
>

This is a cool idea, somewhat mind-boggling in its possibilities.

Personally I would favor something more like "EM-AGI" for electromagnetic
AGI.  I mean, I don't understand the details of the approach, only the
generalities.  But "S" seems a bit vague/ambiguous, while "EM" hits more
or less on target, IMHO.

Mike A


> Think of it this way: what we have now is 100% computer. S-AGI is 100%
> natural physics (organic or inorganic). H-AGI sits somewhere in between.
> The ratio of computer computation to natural computation is what is at
> issue. All are computation.
>
> The human brain is a natural version of (3) with a neuronal/astrocyte
> substrate. (3) has no computer whatever in it. It retains all the natural
> physics (whatever that is). H-AGI targets the inclusion of the essential
> natural brain physics in the substrate of (2), incorporating (1)
> computer substrates and software to an extent to be determined. In my case
> an H-AGI would be inorganic. Others see it differently.
>
> Where might you have a stake in this?
>
> The history of AGI can be summed up as an experiment that seeks to
> determine whether (1) C-AGI as a brain is fundamentally indistinguishable
> from (3) S-AGI under all conditions. That is the hypothesis: the
> 65-year-old bet that has attracted 100% of the investment to date. H-AGI
> does not make that presupposition, and seeks to contrast (1) and (3) in
> revealing ways that then allow us to speak authoritatively about the
> (1)/(3) relationship in AGI potential. Only then will we really understand
> the difference between (1) and (3). So far that difference is entirely an
> intuition. A good one, but only an intuition. It's time for that intuition
> to be turned into science. Experiments in (1) have ruled to date. Now we
> seek to do some (2). In effect, we have 65 years of 'control' subject;
> H-AGI builds the first 'test' subject.
>
> How about this?
>
> What would be super cool is if this mighty AGI beast you intend to make
> could be turned into the brain of a robot. Then we could contrast what it
> does with what an IGI candidate brain does in an identical robot in the
> same test. That kind of testing vision (as far off as it may seem) is a
> potential way your work and the IGI might interface. Which candidate robot
> best handles radical novelty, without any human intervention/involvement
> whatever? That is a really good question. To run this test you'd not need
> to reveal anything about the workings. Observed robot behaviour is
> decisive.
>
> It seems to me that whatever venture you plan, it might be wise to keep an
> eye on any (2)/(3) approaches, IGI or not, because they directly inform
> expectations of outcomes in (1). We are currently asking the question "*If
> H-AGI were to be championed into existence, what would the first vehicle
> for that look like?*" If the enthusiasm holds, it will be sketched into
> a web page and we'll see what it tells us and what to do next. It may halt.
> It may go. I don't know. Worth a shot? You bet.
>
> With your (1) C-AGI glasses firmly strapped to your head, your wisdom at
> all stages of this would be well received, whatever the messages. So if you
> have time to keep an eye on happenings, I for one would appreciate it.
>
> regards
>
> Colin Hales
>
>
>
> On Wed, May 20, 2015 at 6:58 AM, Peter Voss <[email protected]> wrote:
>
>> Thanks for asking. Haven’t followed the IGI discussions.
>>
>>
>>
>> Is this about non-computer based approaches to AGI?  If so, I don’t think
>> I have anything positive to contribute.
>>
>>
>>
>> More generally, non-profit orgs need strong focus and champions.  And
>> specific goals.
>>
>>
>>
>> *From:* Benjamin Kapp [mailto:[email protected]]
>> *Sent:* Tuesday, May 19, 2015 12:23 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Institute of General Intelligence (IGI)
>>
>>
>>
>> Mr. Voss,
>>
>> Given your understanding of the AGI community do you believe an IGI would
>> be redundant?  Would your organization be open to collaborating with the
>> IGI?  Do you have any advice for how we could be successful in starting
>> up this organization?  Perhaps you would be open to being a member of the
>> board?
>>
>>
>>
>> On Tue, May 19, 2015 at 2:03 PM, Peter Voss <[email protected]> wrote:
>>
>> Not something that can be adequately covered in a few words, but…. “We’re
>> building a fully integrated, top-down & bottom-up, real-time, adaptive
>> knowledge (& skill) representation, learning and reasoning engine. We’re
>> using a combination of graph representation and NN techniques overlaid
>> with fuzzy, adaptive rule systems” – ha!
>>
>>
>>
>> Here again are links for some clues:
>>
>>
>>
>>
>> http://www.kurzweilai.net/essentials-of-general-intelligence-the-direct-path-to-agi
>>
>> http://www.realagi.com/index.html
>>
>> https://www.facebook.com/groups/RealAGI/
>>
>>
>>
>>
>>
>> *From:* Benjamin Kapp [mailto:[email protected]]
>>
>>
>>
>> Mr. Voss,
>>
>> Since you are the founder I'm certain you know what agi-3's methodology
>> is.  In a few words (maybe more?) could you share with us what that is?
>>
>>
>>
>> On Tue, May 19, 2015 at 1:24 PM, Peter Voss <[email protected]> wrote:
>>
>> *>*http://www.agi-3.com  They just glue together anything and everything
>> that works.
>>
>> Actually, no.  We have a very specific theory of AGI and architecture.
>>
>> *Peter Voss*
>>
>> *Founder, AGI Innovations Inc.*
>>
>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/26973278-698fd9ee>|
>> Modify
>> <https://www.listbox.com/member/?&;> Your Subscription
>>
>> <http://www.listbox.com>
>>
>>
>>
>>
>
>
>


