Colin et al,

Colin: "The personal crocodiles are keeping me too distracted.... Perhaps a
lighter term might suit a broader audience..."


We cannot solve everything in one paper; it will be just an introduction
to our vision that might suit a broader audience - specific technical
terms can make that difficult.



Ben: "I was thinking about birds today. It seems as though they have high
selection pressure to have light weight brains."



It is a good point, since even with a tiny brain many birds can
be highly intelligent compared to other creatures with bigger brains.
Their visual system is usually well developed, and they "compute"
their flight trajectories with high efficiency using far, far less energy
(remember, less than 30 watts is needed for our entire brain).

However, harvesting neurons from birds' brains (or rats') and "wiring" them
into an electronic circuit shows very little understanding of how
a biological structure is able to "compute".



Mark: "I have a question, have any of you worked with or toyed with OpenCog?"


Related to AGI on digital computers, we probably need to find answers to at
least two questions:

a. Why is the transition from "narrow" to "general" so difficult?

b. Is there anything that can make algorithmic computation (OpenCog, NARS, ...)
more "general"?


Dorian





On Sun, May 24, 2015 at 6:42 PM, Mark Seveland <[email protected]> wrote:

> I have a question, have any of you worked with or toyed with OpenCog?
>
> On Sun, May 24, 2015 at 3:20 PM, colin hales <[email protected]> wrote:
>
>> Hi Dorian et al.
>> I am kind of blown away by what is happening here. Maybe this thing's
>> time really has come and this is what it looks like? Dunno.
>>
>> The personal crocodiles are keeping me too distracted to do much other
>> than cursory contact. And that'll keep going till the end of the week.
>>
>> Manuscript matters:
>>
>> All feedback gratefully accepted. It's a fair way from complete. If you
>> don't mind I'd like to keep going. When I have done a story with start
>> middle, end, refs then at that point I can release it into the wild of the
>> IGI and in particular Dorian ...
>>
>> It's already in .docx form. I use EndNote for refs. I am assuming it
>> will be formatted to a journal's preferred layout in the end.
>>
>> The next section will cover practical instances so the reader sees the
>> hybrid and synth. And how it relates to analytic. I'll stick to the
>> analytic term for now. I can see the formal distinction working better when
>> in review because of the technical specificity. Perhaps a lighter term
>> might suit a broader audience. I will go with the various needs.
>>
>> The main thing I need is to learn when raw Colin starts to grate in the
>> eyes of potential investors/ funders, to whom the doc is likely to be
>> central.
>>
>> Off back to crocs and writing when I can.
>>
>> Regards,
>> Colin Hales
>>
>>
>>
>>
>> ------------------------------
>> From: Dorian Aur <[email protected]>
>> Sent: 25/05/2015 4:37 AM
>> To: AGI <[email protected]>
>> Subject: Re: [agi] H-AGI towards S-AGI
>>
>> Colin, Ben et al
>>
>> Colin: Excellent start. I feel that anyone can get an idea about AI/AGI
>> goals (2016 - 1956 = 60 years).
>> Ben: Indeed, a careful selection of words, e.g. synthetic/abstract, may
>> help, especially if the audience is picky.
>> Also very good questions. We should slightly alter Colin's text and
>> provide answers for every question at the end of the manuscript -
>> Discussion or Questions and Answers.
>>
>> Also, we need to be honest. IGI has an agenda to bring together everyone
>> and everything that works in AI, computer science, neuroscience,
>> electronics, and nanotechnology to solve one problem - design a system that
>> generates human-like intelligence or better. This part can probably be
>> written on the IGI webpage.
>>
>> We may like to include Potter and other similar labs
>> http://www.nature.com/srep/2014/140630/srep05489/full/srep05489.html on
>> our list of possible collaborators (list 3), so I can't reveal the issues
>> with such approaches here. The robot rat is a nice attempt which may never
>> work. Remember, everyone has followed the "mob opinion". If you read
>> http://www.researchgate.net/post/Place_cells_What_does_it_prove you may
>> be able to see at least a part of the problem.
>>
>> To fully write the paper, we may need a Word-like environment in which to
>> include and keep corrections and references.
>>
>>
>> Dorian
>>
>>
>>
>>
>>
>> On Sun, May 24, 2015 at 6:10 AM, Benjamin Kapp <[email protected]> wrote:
>>
>>> When I read the ideas you have there Colin I don't feel like the ideas
>>> flow in a reasoned way.  It feels contrived, like you have an agenda.  It
>>> would be better if instead of assuming the conclusion we explored the issue
>>> without bias and let our empirical knowledge and rational faculties reign
>>> supreme.
>>>
>>>
>>> 1         Introduction
>>>
>>> Here we seek to instigate a broadening of approaches to artificial
>>> general intelligence (AGI). Be it an artificial brain the size of a
>>> worm, ant, bee, dog or human, such an artificial intelligence is recognized
>>> here as a kind of AGI.
>>> *The definition of AGI is rather important, and it would be better to
>>> state what our definition of AGI is rather than just give examples of
>>> things that have AGI.*
>>>
>>> The original science program, coined ‘artificial intelligence’ (AI) in
>>> 1956 {refs}, set sail at the birth of computing with a goal to create
>>> machines that potentially have human-level intelligence or better.
>>>
>>> *I'm uncertain why this particular date is of great importance. The
>>> origins of AI predate 1956 (see Ada Lovelace for an example).*
>>>
>>>
>>> What has actually happened since then is the application of computers to
>>> a vast array of technical challenges within which great successes have
>>> occurred and are ongoing. However, in practice AI successes fell, and
>>> continue to fall, within a now well recognized category called ‘narrow’ or
>>> ‘domain-bound’ AI.
>>> *The majority of AGI research, yes, but not all research. (e.g.
>>> https://www.youtube.com/watch?v=1-0eZytv6Qk)*
>>>
>>> Within the atmosphere of its successes, however, the original goal of
>>> human-level intelligence has, at least so far, evaded the energies of a
>>> huge investment. Such has been the prevalence of this pattern that it can
>>> now be called a kind of syndrome, and in recognition of that syndrome, in
>>> recent years the attainment of the original goal of human-level AI has
>>> taken on two main forms.
>>> *Syndrome? Seems rather harsh. Humans have always made analogies between
>>> the mind and the technology of their time. For Aristotle it was the mind
>>> being like a clay tablet, for others it was their mechanical clocks, and
>>> for us it is our computers. This isn't a syndrome, it is human nature. And
>>> this approach is being fruitful, something you even admit later in this
>>> write-up. And it is certainly something our personal experience can provide
>>> many examples of. To speak so harshly of this approach gives a strong
>>> negative impression in the mind of the reader that you aren't reasoning
>>> fairly and that you have an agenda to sell the reader on your approach.*
>>>
>>>
>>>
>>> The first approach to human-level AI is one of simple assumption: that by
>>> attending to the AI ‘parts’, the route to the AGI ‘whole’ will become
>>> apparent or emerge naturally. This activity, now industrialised, forms the
>>> backbone of AI investment at the present time. Its successes emerge almost
>>> weekly now. The second approach is one of a concerted direct attack on
>>> human-level AI. This is a recent phenomenon manifest in a comparatively
>>> small community of investigators, with commensurate levels of investment,
>>> who have explicitly coined the name of the goal: AGI. In doing so the
>>> target is explicitly recognised as being of a nature deserving of an
>>> integrated, holistic approach. This, too, is having its successes, but once
>>> again the syndrome of narrow-AI outcomes tends to be what the practice
>>> achieves.
>>>
>>>
>>>
>>>
>>> *Not sure if AGI is so small anymore. I think Google/DeepMind/Kurzweil
>>> are in the process of creating AGI. And I think China is working on
>>> AGI: the China-Brain Project
>>> http://www.igi-global.com/chapter/china-brain-project/46407*
>>>
>>>
>>>
>>> Throughout all this history one thing has been invariant: the use of the
>>> computer, or more generally the use of models of intelligence, as an
>>> instance of machine intelligence. This document signals the beginning of
>>> another approach: one where the computer (model) approach is joined (to an
>>> extent to be determined) by its natural counterpart. This new approach, for
>>> whatever reason, is essentially untried and invisible to the AI community.
>>> *Is this true? How do you know? Have you surveyed all current AGI
>>> research approaches?*
>>>
>>>  It was always an option. All we do here is get it off the shelf and
>>> dust it off as an AGI option. This paper is a vehicle for the clear
>>> expression of an untried approach. As such it is hoped that AI and AGI
>>> acquire a suite of ideas and new scientific assessment techniques that will
>>> improve AI generally as a science discipline based on a new kind of
>>> empirical testing. Investment in the approach has been zero since day one
>>> of AI. We seek here to make a case that if investment in this new approach
>>> was non-zero, a cost-effective dramatic shift may occur in our
>>> understanding of the potential kinds of machine intelligence. Specifically
>>> we seek to introduce the concept of synthetic and hybrid AGI.
>>> 2         Computation and AGI – a perspective on practice
>>>
>>> To understand what follows we need to carefully compare and contrast two
>>> fundamentally different forms of computation. Formally their difference is
>>> best captured by the words analytic computation and synthetic computation.
>>> The first kind, analytic, is easily recognised as model-based computation.
>>> This is where, by whatever means chosen, an abstract model is explored by
>>> its designers. Its usefulness is inherent in what the computation tells us
>>> upon interpretation. Within the model are representations of the
>>> characteristics that are being studied. A voltage in a model may be used,
>>> for example, to represent the actual voltage of what is being modelled.
>>> That *representation* of something is not an *instance of* the original
>>> thing. Recognizable forms of analytic computation include that of the
>>> analog or digital computer (Turing machines). Its distinguishing feature is
>>> that however the computation is carried out, its meaning is ultimately
>>> inherent in the mental processes of a designer or in some explicit,
>>> separate document such as software or a circuit diagram of a model.
>>> However complex the model is, it is best thought of as a description of
>>> something. The description itself is the analytic form. Clearly the
>>> analytic form is responsible for dramatic change and technological
>>> advances in science over decades: the computer revolution itself.
>>>
>>>
>>>
>>> The second kind of computation, synthetic, is best understood as simply
>>> the regularity of nature itself. Synthetic computation occurs when nature
>>> itself is simply regarded as computation. Synthetic computation, too, may
>>> have a designer. That is, the distinction between analytic and synthetic
>>> computation is not held up as the distinction between ‘human-made’ and
>>> ‘naturally occurring’. Synthetic computation is when the regularity of
>>> nature itself is accepted as, or configured to be, the computation. There
>>> may be documents needed to establish the initial conditions of the
>>> ‘computation’. For example, an engineer builds and configures the initial
>>> conditions of natural material as an automobile. The result is a synthetic
>>> computation called ‘the automobile’ or ‘transport’. No documents are needed
>>> to further interpret the meaning of the result of the computation. Nature
>>> itself is the outcome of synthetic computation. Another simple example of
>>> such computation may be seen in the concept of flight. A bird ‘computes’
>>> those aspects of the physics of flight suited to the needs of a bird.
>>> Humans have used those same synthetic computations (manifest in
>>> air/flight-surface interactions) to create artificial flight. The result is
>>> a regularity in nature accepted as a form of computation. Physically the
>>> result is flight. That being the case, what is ‘analytic flight’? We all
>>> recognise this: the flight simulator.
>>>
>>>
>>>
>>> The program of future directions proposed here is one that recognises
>>> the two different kinds of computation in the very specialized science of
>>> the brain where the analytic/synthetic distinction can be shown to be
>>> under-developed and potentially confused. The brain is unique in that it is
>>> a synthetic object with a specialised role to become the natural regularity
>>> that forms the control system of natural organisms. It embodies the
>>> intellect of whatever creature it inhabits. The kinds of tasks such a
>>> control system does can and have been modelled to great effect in analytic
>>> approaches. The question is: *“What is the difference, in application to
>>> the brain, between the analytic and the synthetic approach?”* Asking
>>> that question, and expecting a scientific answer, is what this paper is
>>> seeking.
>>>
>>>
>>> I think analytic/synthetic as you use them could be replaced by
>>> abstract/material, which are words that are of far more common usage and as
>>> such easier to understand.
>>>
>>> For over half a century, approaches to creating an artificial brain have
>>> been entirely confined to analytic forms. These analytic approaches are
>>> explorations of models of the brain made by humans. That being the case,
>>> the hyper-critical issue is in understanding the conditions under
>>> which the analytic is indistinguishable from the synthetic. If there is a
>>> difference, then how does that difference manifest in the capability of an
>>> AGI? For the brain, for these many decades, the synthetic half of the route
>>> to AGI has simply been neglected for a variety of reasons. The actual
>>> reasons for the absence of synthetic approaches to AGI are something
>>> historians can evaluate. The practical restoration of the synthetic
>>> approach is the goal here. The restoration of the synthetic approach is
>>> necessary to scientifically test the difference between analytic and
>>> synthetic AGI. Whatever that difference is, the whole AGI enterprise has
>>> been living within a realm of that difference for reasons that are
>>> essentially unexplored. *Scientifically* evaluating the
>>> analytic/synthetic difference (or the lack of it) is the goal of the
>>> proposed shift in methodology.
>>> *If human brains are instances of synthetic AGI, then it would seem that
>>> ALL analytic AGI research would be checked against synthetic AGI, since
>>> those doing the research are synthetic AGI and since they are the ones
>>> reasoning as to whether their AGI is functioning as expected or not. As
>>> such the idea that the proposed approach is of great importance, or
>>> something that is under-explored, seems to be lacking.*
>>>
>>> In summary: The prospect of restoration of a synthetic approach to AGI
>>> is our topic. We look at a potential change in the direction of AGI
>>> science, and therefore the investment profile, where the analytic, the
>>> synthetic and their hybrid are formally recognised as separate and where
>>> scientific testing is then applied to compare and contrast their scope and
>>> effectiveness in application to the science of the artificial brain as AGI.
>>> In the creation of such a brain the approach can be
>>>
>>>    1. Nil% synthetic computation (entirely analytic)
>>>
>>>    or
>>>
>>>    2. 100% synthetic computation
>>>
>>>    or
>>>
>>>    3. H% synthetic computation: a hybrid form of both.
>>>
>>>
>>>
>>> That is, the inclusion of synthetic computation to some desired level
>>> becomes an experimental parameter. Natural brain tissue can be regarded as
>>> a naturally occurring object based on (2), synthetic computation. In
>>> application to artificial brain tissue (AGI) so far, option (1) has been
>>> the only approach. This has achieved all of the progress in artificial
>>> intelligence to date. Here we suggest that the success of analytic
>>> approaches be joined by synthetic approaches
>>>
>>
>> [The entire original message is not included.]
>
>
>
> --
> Regards,
> Mark Seveland
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
