How about we have a discussion sometime this weekend? We have some serious
timezone issues to work around.

On Wed, May 20, 2015 at 10:48 PM, Mark Seveland <[email protected]> wrote:

> Just a suggestion. Google+ Meetups are a good way for everyone to meet
> each other, and to discuss topics in live voice and/or video chat.
>
> On Wed, May 20, 2015 at 7:33 PM, Colin Hales <[email protected]> wrote:
>
>> Hi Dorian et al.,
>> I am having trouble getting time to properly participate here because of
>> family stuff and my other commitments. I'm checking in to acknowledge
>> how encouraging it is to see that the activity is ongoing, and the birth
>> of a possible paper that might underpin whatever this IGI initiative
>> turns into.
>>
>> I'd like to focus my efforts on the paper, primarily as a way to
>> discover IGI directions. So if you could bear with a patchy contribution
>> from me for a little while, it would be greatly appreciated. I have a
>> particularly difficult week ahead of me. There's no huge crashing need
>> for speed here, so I'm hoping slow and steady might be OK.
>>
>> Whatever form this website takes: fantastic. It may only ever be a 'line
>> in the sand'. But it's a significant one in the greater scheme of AGI
>> futures and really good to see after being sidelined for so long. Yay!
>>
>> cheers
>> Colin Hales
>>
>>
>>
>> On Thu, May 21, 2015 at 10:07 AM, Mike Archbold <[email protected]>
>> wrote:
>>
>>> Why don't you just call it "AI", and if somebody asks THEN you can
>>> clarify it? I mean, why be arcane about it? One of the reasons I got
>>> into AI is that I don't like the way people create things that are
>>> intentionally difficult and known only to the in-group. Now here you
>>> go with a boatload of new acronyms, known only to the select tiny
>>> group that knows the secret meaning behind them. So, I guess I am
>>> getting into Alan Grimes vent space with this.
>>>
>>> On 5/20/15, Dorian Aur <[email protected]> wrote:
>>> > *Colin et al.,*
>>> >
>>> >
>>> > A possible plan for H-AGI towards S-AGI paper
>>> >
>>> >
>>> >
>>> > *Hybrid artificial general intelligence systems towards S-AGI*
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > *Introduction* – a short presentation of AI systems and the general
>>> > goal of building human general intelligence
>>> >
>>> >
>>> >
>>> > Why H-AGI?
>>> >
>>> >    - Present different forms of computation (particular forms:
>>> >    analog, digital / Turing machines)
>>> >    - Computations in the brain (examples of computations that are
>>> >    hard to replicate on digital computers)
>>> >    - H-AGI can include all forms of computation, algorithmic /
>>> >    non-algorithmic, analog, digital, *quantum and classical*, since
>>> >    biological structure is incorporated in the system
>>> >
>>> >
>>> >
>>> > *Steps to develop  H-AGI*
>>> >
>>> >
>>> >
>>> >    - A. Build the structure using either natural stem cells or
>>> >    induced pluripotent cells: a three-dimensional vascularized
>>> >    structure; test 3D printing possibilities
>>> >    - Shape the structure and control the spatial organization of cells
>>> >    - Detect the need for neurotrophic factors, nutrients and oxygen
>>> >    ...use nanosensor devices, carbon nanotubes...
>>> >    - Regulate and control the entire process using a computer
>>> >    interface, with the ability to combine analog/digital and
>>> >    biophysical computations
>>> >
>>> > B. Train the hybrid system
>>> >
>>> >    - Enhance bidirectional communication between the biological
>>> >    structure and computers
>>> >    - Create and use a virtual world to provide accelerated training;
>>> >    use machine learning, DL, digital/algorithmic AI, or AGI if
>>> >    something is developed on digital systems
>>> >    - The interactive training system should also shape the evolution
>>> >    of the biological structure; natural language and visual
>>> >    information can be progressively included
>>> >
>>> > See details in *Can we build a conscious machine?*,
>>> > http://arxiv.org/abs/1411.5224
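As a purely hypothetical sketch of the closed-loop training cycle in step B (virtual-world stimulus, substrate response, feedback), where every function name is invented and the "biological substrate" is replaced by a noisy stand-in function:

```python
import random

def encode_stimulus(event):
    """Map a virtual-world event to a stimulation pattern (toy encoding)."""
    return [1.0 if ch == 'x' else 0.0 for ch in event]

def culture_response(pattern, weights):
    """Stand-in for the biological substrate: a noisy weighted sum."""
    s = sum(w * p for w, p in zip(weights, pattern))
    return s + random.gauss(0, 0.01)

def train(events, targets, epochs=200, lr=0.1):
    """Closed loop: stimulate, record, compare to target, adapt interface."""
    weights = [0.0] * len(encode_stimulus(events[0]))
    for _ in range(epochs):
        for event, target in zip(events, targets):
            pattern = encode_stimulus(event)
            out = culture_response(pattern, weights)
            err = target - out                # feedback signal
            for i, p in enumerate(pattern):
                weights[i] += lr * err * p    # adapt the interface
    return weights

weights = train(["xoo", "oxo", "oox"], [1.0, 0.0, 1.0])
```

In a real hybrid system the stand-in function would be actual stimulation/recording hardware; the point here is only the shape of the loop: encode, stimulate, record, feed back.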
>>> >
>>> >
>>> > *Goals of H-AGI*
>>> >
>>> > H-AGI can be seen as a transitional step required to understand which
>>> > parts can be fully replicated in a synthetic form to build a more
>>> > powerful system:
>>> >
>>> > ·        Natural language processing, robotics...
>>> >
>>> > ·        Space exploration, colonization... etc.
>>> >
>>> > ·        Techniques for therapy (brain diseases, cancer...), since we
>>> > will learn how to shape biological structure
>>> >
>>> >
>>> >
>>> >
>>> > Dorian
>>> >
>>> >
>>> > PS: This brief presentation may also provide an idea about possible
>>> > collaborations across lists 1-3
>>> >
>>> >
>>> >
>>> > On Tue, May 19, 2015 at 11:20 PM, Mike Archbold <[email protected]>
>>> > wrote:
>>> >
>>> >> > A summary... we are looking at the idea that there are 2
>>> >> > fundamental kinds of putative AGI, (1) & (3), and their hybrid (2)
>>> >> > that forms a third approach, as follows:
>>> >> >
>>> >> > (1) C-AGI      computer substrate only. Neuromorphic equivalents
>>> >> > of it.
>>> >> > (2) H-AGI      hybrid of (1) and (3). The inorganic version is a
>>> >> > new kind of neuromorphic chip. The organic version has ... erm...
>>> >> > organics in it.
>>> >> > (3) S-AGI      synthetic AGI. Organic or inorganic. Natural brain
>>> >> > physics only. No computer.
>>> >> >
>>> >> > (aside: S-AGI just came out of my fingers. I hope this is OK,
>>> >> > Dorian!)
>>> >> >
>>> >>
>>> >> This is a cool idea, somewhat mind-boggling in its possibilities.
>>> >> Cool though!
>>> >>
>>> >> Personally I would favor something more like "EM-AGI" for
>>> >> electromagnetic AGI. I mean, I don't understand the details of the
>>> >> approach, only the generalities. But "S" seems a bit vague/ambiguous,
>>> >> while EM hits it more or less on target, IMHO.
>>> >>
>>> >> Mike A
>>> >>
>>> >>
>>> >> > Think this way: what we have now is 100% computer. S-AGI is 100%
>>> >> > natural physics (organic or inorganic). H-AGI sits somewhere in
>>> >> > between. It's the ratio of computer computation to natural
>>> >> > computation that is at issue. All are computation.
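The spectrum framing above can be restated as a tiny data structure, if only to make the "somewhere in between" point concrete; the numeric fractions below are invented for illustration, not measurements:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AGIKind:
    label: str
    computer_fraction: float   # share of computation done by a computer
    natural_fraction: float    # share done by natural brain physics

# The two endpoints of the spectrum.
C_AGI = AGIKind("C-AGI", computer_fraction=1.0, natural_fraction=0.0)
S_AGI = AGIKind("S-AGI", computer_fraction=0.0, natural_fraction=1.0)

def hybrid(computer_fraction):
    """H-AGI sits anywhere strictly between the two endpoints."""
    assert 0.0 < computer_fraction < 1.0
    return AGIKind("H-AGI", computer_fraction, 1.0 - computer_fraction)

H_AGI = hybrid(0.5)
```

C-AGI and S-AGI are the endpoints; any strictly interior point is some H-AGI, which is exactly the "level of computer computation/natural computation" at issue.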
>>> >> >
>>> >> > The human brain is a natural version of (3) with a
>>> >> > neuronal/astrocyte substrate. (3) has no computer whatever in it.
>>> >> > It retains all the natural physics (whatever that is). H-AGI
>>> >> > targets the inclusion of the essential natural brain physics in
>>> >> > the substrate of (2), and incorporates (1) computer substrates and
>>> >> > software to an extent to be determined. In my case an H-AGI would
>>> >> > be inorganic. Others see it differently.
>>> >> >
>>> >> > Where might you have a stake in this?
>>> >> >
>>> >> > The history of AGI can be summed up as an experiment that seeks
>>> >> > to see if the role of (1) C-AGI as a brain is fundamentally
>>> >> > indistinguishable from (3) S-AGI under all conditions. That is the
>>> >> > hypothesis: the 65-year-old bet that has attracted 100% of the
>>> >> > investment to date. H-AGI does not make that presupposition, and
>>> >> > seeks to contrast (1) and (3) in revealing ways that then allow us
>>> >> > to speak authoritatively about the (1)/(3) relationship in AGI
>>> >> > potential. Only then will we really understand the difference
>>> >> > between (1) and (3). So far that difference is entirely an
>>> >> > intuition. A good one. But only intuition. It's time for that
>>> >> > intuition to be turned into science. Experiments in (1) have ruled
>>> >> > to date. Now we seek to do some (2)... i.e. we have 65 years of
>>> >> > 'control' subject. H-AGI builds the first 'test' subject.
>>> >> >
>>> >> > How about this?
>>> >> >
>>> >> > What would be super cool is if this mighty AGI beast you intend
>>> >> > making could be turned into the brain of a robot. Then we could
>>> >> > contrast what it does with what an IGI candidate brain does in an
>>> >> > identical robot in the same test. That kind of testing vision (as
>>> >> > far off as it may seem) is a potential way your work and the IGI
>>> >> > might interface. Which candidate robot best encounters radical
>>> >> > novelty, without any human intervention/involvement whatever?
>>> >> > ... is a really good question. To do this test you'd not need to
>>> >> > reveal anything about its workings. Observed robot behaviour is
>>> >> > decisive.
>>> >> >
>>> >> > It seems to me that whatever venture you plan, it might be wise
>>> >> > to keep an eye on any (2)/(3) approaches, IGI or not, because they
>>> >> > directly inform expectations of outcomes in (1). We are currently
>>> >> > asking the question "*If H-AGI were to be championed into
>>> >> > existence, what would the first vehicle for that look like?*" If
>>> >> > the enthusiasm holds, it will be sketched into a web page and
>>> >> > we'll see what it tells us and what to do next. It may halt. It
>>> >> > may go. I don't know. Worth a shot? You bet.
>>> >> >
>>> >> > With your (1) C-AGI glasses firmly strapped to your head, your
>>> >> > wisdom at all stages of this would be well received, whatever the
>>> >> > messages. So if you have time to keep an eye on happenings, I for
>>> >> > one would appreciate it.
>>> >> >
>>> >> > regards
>>> >> >
>>> >> > Colin Hales
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Wed, May 20, 2015 at 6:58 AM, Peter Voss <[email protected]>
>>> wrote:
>>> >> >
>>> >> >> Thanks for asking. Haven’t followed the IGI discussions.
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Is this about non-computer-based approaches to AGI? If so, I
>>> >> >> don’t think I have anything positive to contribute.
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> More generally, non-profit orgs need strong focus and champions.
>>> And
>>> >> >> specific goals.
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>>> >> >> *Sent:* Tuesday, May 19, 2015 12:23 PM
>>> >> >> *To:* AGI
>>> >> >> *Subject:* Re: [agi] Institute of General Intelligence (IGI)
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Mr. Voss,
>>> >> >>
>>> >> >> Given your understanding of the AGI community, do you believe an
>>> >> >> IGI would be redundant? Would your organization be open to
>>> >> >> collaborating with the IGI? Do you have any advice for how we
>>> >> >> could be successful in starting up this organization? Perhaps you
>>> >> >> would be open to being a member of the board?
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On Tue, May 19, 2015 at 2:03 PM, Peter Voss <[email protected]>
>>> wrote:
>>> >> >>
>>> >> >> Not something that can be adequately covered in a few words,
>>> >> >> but…. “We’re building a fully integrated, top-down & bottom-up,
>>> >> >> real-time, adaptive knowledge (& skill) representation, learning
>>> >> >> and reasoning engine. We’re using a combination of graph
>>> >> >> representation and NN techniques overlaid with fuzzy, adaptive
>>> >> >> rule systems” – ha!
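Voss's one-liner is only a hint at the architecture, but as a hedged miniature of what "graph representation overlaid with fuzzy, adaptive rules" could mean (the graph, the rule, and every number below are invented for illustration), a toy spreading-activation query might look like:

```python
def fuzzy_membership(x, center, width):
    """Triangular membership: 1 at center, falling to 0 at center +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

# Graph representation: node -> {neighbor: edge strength}
graph = {
    "cat":    {"animal": 0.9, "pet": 0.8},
    "animal": {"living": 1.0},
    "pet":    {"animal": 0.7},
}

def activation(node, target, depth=3):
    """Spread activation through the graph, gated by a fuzzy rule that
    discounts weak edges (high membership only for strength near 1.0)."""
    if node == target:
        return 1.0
    if depth == 0 or node not in graph:
        return 0.0
    best = 0.0
    for nbr, strength in graph[node].items():
        gate = fuzzy_membership(strength, center=1.0, width=1.0)  # fuzzy gate
        best = max(best, gate * activation(nbr, target, depth - 1))
    return best
```

Here the fuzzy gate plays the role of an overlaid rule ("trust strong edges more"); in an adaptive system the edge strengths and the rule's center/width would themselves be learned.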
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Here again are links for some clues:
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> http://www.kurzweilai.net/essentials-of-general-intelligence-the-direct-path-to-agi
>>> >> >>
>>> >> >> http://www.realagi.com/index.html
>>> >> >>
>>> >> >> https://www.facebook.com/groups/RealAGI/
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> *From:* Benjamin Kapp [mailto:[email protected]]
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Mr. Voss,
>>> >> >>
>>> >> >> Since you are the founder, I'm certain you know what agi-3's
>>> >> >> methodology is. In a few words (maybe more?) could you share with
>>> >> >> us what that is?
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On Tue, May 19, 2015 at 1:24 PM, Peter Voss <[email protected]>
>>> wrote:
>>> >> >>
>>> >> >> > http://www.agi-3.com  They just glue together anything and
>>> >> >> > everything that works.
>>> >> >>
>>> >> >> Actually, no. We have a very specific theory of AGI and
>>> >> >> architecture.
>>> >> >>
>>> >> >> *Peter Voss*
>>> >> >>
>>> >> >> *Founder, AGI Innovations Inc.*
>>> >> >>
>>> >> >> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> >> >> <https://www.listbox.com/member/archive/rss/303/26973278-698fd9ee> |
>>> >> >> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>> >> >> <http://www.listbox.com>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>
>>
>
>
>
> --
> Regards,
> Mark Seveland
>



