Done

On Sat, May 23, 2015 at 7:19 PM, Piaget Modeler <[email protected]>
wrote:

> Shouldn't it go on the FAQ page as well (in the interim)?
>
> ~PM
>
> ------------------------------
> Date: Sat, 23 May 2015 18:49:57 -0700
> Subject: Re: [agi] H-AGI towards S-AGI
> From: [email protected]
> To: [email protected]
>
>
> I'll post that to the IGI site forum for discussion and further refinement.
>
> On Sat, May 23, 2015 at 6:36 PM, Colin Hales <[email protected]> wrote:
>
> Dear IGI enthusiasts,
> Here's a stab at an intro to a paper that I hope begins to capture the
> essence of what is proposed.
> I don't claim it as perfect or the final product.
> What I need to know is if it speaks in a way that might lead to the change
> we are looking for.
>
> *=========================================*
> *AGI Directions: towards Hybrid (H) and Synthetic (S) Forms.*
> By
> Dorian Aur  (see previous posts)
>
> (Blame for this bit is accepted by Colin Hales; others TBA.)
> 1         Introduction
>
> Here we seek to instigate a broadening of approaches to artificial general
> intelligence (AGI). Be it an artificial brain the size of a worm, ant, bee,
> dog or human, such an artificial intelligence is recognized here as a kind
> of AGI. The original science program, coined ‘artificial intelligence’
> (AI) in 1956 {refs}, set sail at the birth of computing with the goal of
> creating machines that potentially have human-level intelligence or
> better. What has actually happened since then is the application of
> computers to a vast array of technical challenges, within which great
> successes have occurred and are ongoing. In practice, however, AI's
> successes fell, and continue to fall, within a now well-recognized
> category called ‘narrow’ or ‘domain-bound’ AI. Amid those successes the
> original goal of human-level intelligence has, at least so far, evaded
> the energies of a huge investment. So prevalent has this pattern been
> that it can now be called a kind of syndrome, and in recognition of that
> syndrome the pursuit of the original goal of human-level AI has in recent
> years taken two main forms.
>
>
>
> The first approach to human-level AI is one of simple assumption: that by
> attending to the AI ‘parts’, the route to the AGI ‘whole’ will become
> apparent or emerge naturally. This activity, now industrialised, forms the
> backbone of AI investment at the present time. Its successes emerge almost
> weekly now. The second approach is a concerted direct attack on
> human-level AI. This is a recent phenomenon, manifest in a comparatively
> small community of investigators, with commensurate levels of investment,
> who have explicitly coined a name for the goal: AGI. In doing so the
> target is explicitly recognised as being of a nature deserving of an
> integrated, holistic approach. This, too, is having its successes, but once
> again the syndrome of narrow-AI outcomes tends to be what the practice
> achieves.
>
>
>
> Throughout all this history one thing has been invariant: the use of the
> computer, or more generally the use of models of intelligence, as an
> instance of machine intelligence. This document signals the beginning of
> another approach: one where the computer (model) approach is joined (to an
> extent to be determined) by its natural counterpart. This new approach,
> for whatever reason, is essentially untried and invisible to the AI
> community. It was always an option. All we do here is take it off the
> shelf and dust it off as an AGI option. This paper is a vehicle for the
> clear expression of an
> untried approach. As such it is hoped that AI and AGI acquire a suite of
> ideas and new scientific assessment techniques that will improve AI
> generally as a science discipline based on a new kind of empirical testing.
> Investment in the approach has been zero since day one of AI. We seek here
> to make the case that if investment in this new approach were non-zero, a
> cost-effective, dramatic shift might occur in our understanding of the
> potential kinds of machine intelligence. Specifically, we seek to introduce
> the concept of synthetic and hybrid AGI.
> 2         Computation and AGI – a perspective on practice
>
> To understand what follows we need to carefully compare and contrast two
> fundamentally different forms of computation. Formally their difference is
> best captured by the words analytic computation and synthetic computation.
> The first kind, analytic, is easily recognised as model-based computation.
> This is where, by whatever means chosen, an abstract model is explored by
> its designers. Its usefulness is inherent in what the computation tells us
> upon interpretation. Within the model are representations of
> characteristics that are being studied. A voltage in a model may be used,
> for example, to represent the actual voltage of what is being modelled. That
> *representation* of something is not an *instance of* the original thing.
> Recognizable forms of analytic computation include that of the analog or
> digital computer (Turing machines). Its distinguishing feature is that
> however the computation is carried out, its meaning is ultimately inherent
> in the mental processes of a designer or in some explicit, separate
> document such as software or a circuit diagram of a model. However complex
> the model is, it is best thought of as a description of something. The
> description itself is the analytic form. Clearly the analytic form is
> responsible for dramatic change and technological advances in science over
> decades: the computer revolution itself.
>
>
>
> The second kind of computation, synthetic, is best understood as simply
> the regularity of nature itself. Synthetic computation occurs when nature
> itself is simply regarded as computation. Synthetic computation, too, may
> have a designer. That is, the distinction between analytic and synthetic
> computation is not held up as the distinction between ‘human-made’ and
> ‘naturally occurring’. Synthetic computation is when the regularity of
> nature itself is accepted as, or configured to be, the computation. There may
> be documents needed to establish the initial conditions of the
> ‘computation’. For example, an engineer builds and configures the initial
> conditions of natural material as an automobile. The result is a synthetic
> computation called ‘the automobile’ or ‘transport’. No documents are needed
> to further interpret the meaning of the result of the computation. Nature
> itself is the outcome of synthetic computation. Another simple example of
> such computation may be seen in the concept of flight. A bird ‘computes’
> those aspects of the physics of flight suited to the needs of a bird.
> Humans have used those same synthetic computations (manifest in
> air/flight-surface interactions) to create artificial flight. The result is
> a regularity in nature accepted as a form of computation. Physically the
> result is flight. That being the case, what is ‘analytic flight’? We all
> recognise this: the flight simulator.
>
>
>
> The program of future directions proposed here is one that recognises the
> two different kinds of computation in the very specialized science of the
> brain where the analytic/synthetic distinction can be shown to be
> under-developed and potentially confused. The brain is unique in that it is
> a synthetic object with a specialised role to become the natural regularity
> that forms the control system of natural organisms. It embodies the
> intellect of whatever creature it inhabits. The kinds of tasks such a
> control system does can and have been modelled to great effect in analytic
> approaches. The question is: *“What is the difference, application to the
> brain, between the analytic and the synthetic approach?”* Asking that
> question, and expecting a scientific answer, is what this paper is seeking.
>
>
>
> For over half a century, approaches to creating an artificial brain have
> been entirely confined to analytic forms. These analytic approaches are
> explorations of models of the brain made by humans. That being the case,
> then the hyper-critical issue is in understanding the conditions under
> which the analytic is indistinguishable from the synthetic. If there is a
> difference, then how does that difference manifest in the capability of an
> AGI? For the brain, for these many decades, the synthetic half of the route
> to AGI has simply been neglected for a variety of reasons. The actual
> reasons for the absence of synthetic approaches to AGI are something
> historians can evaluate. The practical restoration of the synthetic
> approach is the goal here. The restoration of the synthetic approach is
> necessary to scientifically test the difference between the analytic and
> synthetic AGI. Whatever that difference is, the whole AGI enterprise has
> been living within a realm of that difference for reasons that are
> essentially unexplored. *Scientifically* evaluating the
> analytic/synthetic difference (or the lack of it) is the goal of the
> proposed shift in methodology.
>
>
>
> In summary: The prospect of restoration of a synthetic approach to AGI is
> our topic. We look at a potential change in the direction of AGI science,
> and therefore the investment profile, where the analytic, the synthetic and
> their hybrid are formally recognised as separate and where scientific
> testing is then applied to compare and contrast their scope and
> effectiveness in application to the science of the artificial brain as AGI.
> In the creation of such a brain the approach can be
>
>    1. Nil% synthetic computation (entirely analytic), or
>
>    2. 100% synthetic computation, or
>
>    3. H% synthetic: a hybrid form of both.
>
>
>
> That is, the inclusion of synthetic computation to some desired level
> becomes an experimental parameter. Natural brain tissue can be regarded as
> a naturally occurring object based on (2) synthetic computation. In
> application to artificial brain tissue (AGI) so far, option (1) has been
> the only approach. This has achieved all of the progress in artificial
> intelligence to date. Here we suggest that the successes of analytic
> approaches be joined by synthetic approaches to AGI. If indeed the time has
> arrived for the formal introduction of (2) synthetic AGI and (3) hybrid AGI
> as viable prospects, then we need to open a discourse. What would the new
> AGI science look like? What does it tell us about the scope, nature and
> expectations inherent in the purely analytic approach? What does it add to
> the nearly 60-year-old AGI program?
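>
> To make the "experimental parameter" idea concrete, here is a minimal
> illustrative sketch (all names are hypothetical, not from the draft): the
> three options above become points on a single axis, the synthetic fraction
> H, with 0% and 100% as the pure analytic and pure synthetic endpoints.

```python
# Illustrative sketch only: class and field names are hypothetical.
# It shows the three construction options as settings of one
# experimental parameter, the synthetic fraction H in [0, 100].
from dataclasses import dataclass


@dataclass
class AGIBuild:
    """An AGI construction characterised by its synthetic fraction H."""
    synthetic_percent: float  # H: 0 = entirely analytic, 100 = entirely synthetic

    def kind(self) -> str:
        """Classify the build as one of the three options in the list."""
        if self.synthetic_percent == 0:
            return "analytic (option 1)"
        if self.synthetic_percent == 100:
            return "synthetic (option 2)"
        return "hybrid (option 3)"


# The three options, as parameter settings on the same axis:
for build in [AGIBuild(0.0), AGIBuild(100.0), AGIBuild(40.0)]:
    print(f"H = {build.synthetic_percent:5.1f}% -> {build.kind()}")
```

> The point of the sketch is only that, once H is a parameter rather than a
> fixed assumption, comparing analytic, synthetic and hybrid builds becomes
> an empirical exercise.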
> (end of section)
> ============================
>
> This is offered up for discussion as the possible first part of the
> document Dorian started. I have a lot more to add.
>
> regards
>
> Colin Hales
>
>
>
>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/27079473-66e47b26> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>
>
>
>
> --
> Regards,
> Mark Seveland
>



-- 
Regards,
Mark Seveland


