To me, computationalism, defined via

"
Computationalism = _abstract_ symbol manipulation.
"

is an **interpretation** of certain things that occur inside computers
sometimes ...

The fact that this is a bad interpretation doesn't imply that computers
themselves aren't able to carry out advanced intelligence...

-- Ben G

On Sun, Oct 5, 2008 at 4:28 PM, Colin Hales <[EMAIL PROTECTED]> wrote:

>  Hi all,
> This seems to have touched a point of interest. I'll try to address all
> the issues raised in one post. I hope I don't miss any of them. Please
> remind me if I have. Apologies if I don't reference the originator of the
> query explicitly. You know who you are!
>
> Re 'defining terms'.
> 1) Yes: there are pages and pages of background information not in the posts.
> It is the result of thousands of hours of reading and analysis. Without it
> the readership is not 'calibrated' properly and the dialogue is bound to
> have its problems. The reader has only been exposed to about 1/50th of the
> total work, so please don't assume that any of the terms are poorly defined.
> They are only poorly defined here, so far.
>
> 1a) Here are the basics of the term "visual scene". This is long-proven
> empirical physiology. As I already said: it is the occipital lobe deliverable.
> Very specific neurons, well known and highly documented, are responsible.
> Officially, no one knows 'how' or 'why' visual experience happens; the
> official position only declares what does it. "Visual scene" = that
> construct which is replaced by a roughly hemispherical gloom/blackness when
> you close your eyes. It is highly localised to specific neuron populations
> (occipital V4 does colour, for example) and has been studied for decades.
> Everyone who studies cognition should be aware of this empirical knowledge.
> I do not need to specify it further or justify it. The evidence speaks for
> itself. An entire empirical science paradigm called the 'neural correlates of
> consciousness' has been set up specifically to isolate the neural basis. All
> experiential fields are the same: they are all cranial central nervous
> system deliverables. This means audition, haptics, olfaction, gustation,
> vision, situational and primordial emotions, and all internal imagined
> versions of these (including the visual imagery in the post by JLM). My
> argument deals only with the visual scene.
>
> 1b) So, in answer to another comment from one of the posts about "visual scene",
> specifically: "I assume you mean the original image impressed on your
> retina". No, I do not mean this. The molecular machinations of the entire
> peripheral nervous system, including the 'peripherals' of the central
> nervous system, have been empirically established for 100 years to be
> experientially inert. You do not see with your eyes. Vision occurs in the
> occipital lobe. Please read the literature. Peripheral sensory transduction is
> not experienced. Central perceptual fields are projected to the periphery.
> This is physiology. E.g., for the sensory transduction of a peripheral insult,
> the term 'nociception' is used. PAIN, the experience, is added in the CNS and
> projected (often rather badly) to the site of origin. There is an entire
> collection of nomenclature established by physiology to enable descriptive
> specificity. I should not have to provide any more information along these
> lines. Please read the literature. There's lots of it. I can supply refs if
> you need them.
>
> 1c) Computationalism = _abstract_ symbol manipulation. This is meant in
> specific contrast to the manipulation of _natural_ symbols. Analog computing
> is also COMP. This means that all computing based on the various calculi is
> COMP. It means that all machines using any sort of abstract mathematical or
> logical framework, where the semantics of the symbols need extra
> documentation, are COMP. (A small illustrative sketch follows the references
> below.) For the basic definitions, see:
>     (i) Moor, J. 'The Dartmouth College Artificial Intelligence Conference:
> The next fifty years', AI Magazine vol. 27, no. 4, 2006. 87-91.
>     (ii) Beer, R. D. 'A Dynamical-Systems Perspective on Agent-Environment
> Interaction', Artificial Intelligence vol. 72, no. 1-2, 1995. 173-215.
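>
> To make "abstract symbol manipulation" concrete, here is a minimal, purely
> illustrative Python sketch (the symbols and rewrite rules are invented for
> the example; nothing in the argument depends on them). The machine shuffles
> tokens according to formal rules; what the tokens denote lives entirely in
> external documentation, never in the machine:
>
>     # Purely formal rewrite rules over tokens. The mapping from 'A', 'B', 'C'
>     # to anything in the world exists only in accompanying documentation.
>     RULES = {("A", "B"): "C",
>              ("C", "A"): "B"}
>
>     def rewrite(tape):
>         """Apply the first matching rule to the leftmost pair, else leave the tape alone."""
>         for i in range(len(tape) - 1):
>             if (tape[i], tape[i + 1]) in RULES:
>                 return tape[:i] + [RULES[(tape[i], tape[i + 1])]] + tape[i + 2:]
>         return tape
>
>     tape = ["A", "B", "A", "A"]
>     for _ in range(3):
>         tape = rewrite(tape)
>         print(tape)   # the run is identical whatever the tokens are taken to mean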
>
> RE: The nature of the 'COMP = false' argument.
> 2) I don't intend to formalise the argument any further here. I have given
> the precis. The two papers I mentioned are in review. One of them for 18
> months already. Very painful. When they come out they can speak for
> themselves.
>
> 3) Please do not taint my words with any attributions in respect of being
> 'philosophical'. ;-)  I love reading philosophy and have internalised
> truckloads... but that is irrelevant here. When I say this is an empirical
> argument, I mean it. All I do is aggregate well-known (but
> hyper-cross-disciplinary) facts into one place. I follow the path of least
> resistance to what they say of the natural world. If the empirical evidence
> said anything else, I'd say something else.
>
> 4) I'll say it again: this is an empirical argument... therefore: if you
> want to helpfully counter the argument, then please deliver the novel
> evidence to the contrary that counters the evidence I give, and explain why.
> Show references. Point to a history of facts. Then I can respond, because I
> have encountered something I must account for which alters the implications
> of the evidence. If this cannot be done then the statements you make are
> empty beliefs and I can do nothing with them. I defer to real evidence.
> Nothing else. And I require that of all critique. I am absolutely STOKED if
> you can set me straight empirically. I relish it. I need it.
>
> 5) To be more blunt about it: I cannot respond to any empty meta-belief
> (belief about belief) such as "I find it unconvincing", "What if it
> doesn't/won't/isn't?", "This seems wrong", "I find this implausible" and
> all manner of other such statements. If you can't deliver the evidence to
> the contrary then please don't say anything. Educate yourself in the
> empirical areas you find problematic and then come back and tell me exactly
> where I am wrong and why. A viable discussion will ensue.
>
> RE: Motivation
> 6) The only reason I have made such an effort to complete the refutation of
> COMP (you have seen only one way; there are three others) is that, despite being
> publicly established as merely "conjecture" (i) and a "theoretical claim"
> (ii), for 50 years it has continued to be entertained by computer science/AI
> projects as if it were a "law of nature". It has no proof. There is merely a
> "failure to refute". This situation has been part of the framework of
> justification (implicit and explicit in various research efforts) that
> real AGI might result from COMP principles. I am here to put that
> expectation to death once and for all, so that funds may be more cautiously
> directed and expectations more wisely managed.
>
> 7) Another refutation of COMP"
> Bringsjord, S. 'The Zombie Attack on the Computational Conception of Mind',
> Philosophy and Phenomenological Research vol. LIX, no. 1, 1999. 41-69
>
> 8) Another refutation of COMP comes from appropriately contextualising the
> 'No Free Lunch' (NFL) theorems of machine learning within cognition during a
> scientific act. In that circumstance NFL yields a refutation of COMP. (A toy
> sketch of the NFL flavour of this point follows the references below.)
>     Koppen, M., Wolpert, D. H. and Macready, W. G. 'Remarks on a recent
> paper on the "No Free Lunch" theorems', IEEE Transactions on Evolutionary
> Computation vol. 5, no. 3, 2001. 295-296.
>     Wolpert, D. H. 'The lack of a priori distinctions between learning
> algorithms', Neural Computation vol. 8, no. 7, 1996. 1341-1390.
>     Wolpert, D. H. 'The existence of a priori distinctions between learning
> algorithms', Neural Computation vol. 8, no. 7, 1996. 1391-1420.
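>
> A toy Python sketch of the NFL point (an invented illustration, not taken
> from the papers above): average off-training-set accuracy over ALL possible
> target functions on a small boolean domain, and any two learners, however
> clever or dumb, come out identical.
>
>     from itertools import product
>
>     DOMAIN = list(product([0, 1], repeat=3))   # 8 possible inputs
>     TRAIN, TEST = DOMAIN[:4], DOMAIN[4:]       # seen vs. genuinely novel inputs
>
>     def majority_learner(train_labels, x):
>         # predict the majority label seen so far (ties -> 1)
>         return int(2 * sum(train_labels) >= len(train_labels))
>
>     def constant_learner(train_labels, x):
>         # ignore the data entirely and always predict 0
>         return 0
>
>     def mean_novel_accuracy(learner):
>         accs = []
>         for target in product([0, 1], repeat=len(DOMAIN)):   # every possible "world"
>             labels = dict(zip(DOMAIN, target))
>             train_labels = [labels[x] for x in TRAIN]
>             hits = sum(learner(train_labels, x) == labels[x] for x in TEST)
>             accs.append(hits / len(TEST))
>         return sum(accs) / len(accs)
>
>     print(mean_novel_accuracy(majority_learner))   # 0.5
>     print(mean_novel_accuracy(constant_learner))   # 0.5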
>
> RE: Wider implications
> 9) In line with (6), I need draw no further connection to a solution to the
> problem of consciousness generally. The claim of the COMP argument is very
> specific. It merely tells us that a Turing machine cannot 'compute' a
> scientist in an authentic act of original science using visual observation
> of the novel distal natural world. On generalisation, the important
> implication is that the term "simulated scientist" is an oxymoron: a logical
> impossibility.
>
> 10) Other connections to the physics and role of consciousness in cognition
> and intelligent behaviour generally, whilst very interesting and much more
> important, are not part of this COMP discussion.
>
> 11) There is a more fundamental issue which I have not included thus far
> and which I think may be important here: "INVERSE or ILL-DEFINED PROBLEMS".
> The expectation that source reconstruction from a remote data slice can
> occur is fatally and a priori discountable. To be rather more practical
> about it: I am involved in an EEG/epilepsy group. The problem of explaining
> the origins of EEG (a surface field structure) is literally identical to the
> problem of explaining the originator of a retinal photon impact. Science
> knows that the former is an ill-defined problem and does not claim to have
> acquired the solution. It knows that the models are metaphors and cannot
> claim any further veridicality. Indeed, it regards the problem as extreme and
> unresolved. How is it that anyone can assume that vision, an even harder and
> structurally identical inverse problem, is somehow solvable with only the
> retinal impact? (A toy sketch of such an under-determined inverse problem
> follows the reference below.) Please read Nunez for the appropriate background:
>     Nunez, P. L. and Srinivasan, R., Electric Fields of the Brain: The
> Neurophysics of EEG, 2nd ed., Oxford University Press, Oxford, New York,
> 2006.
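>
> A minimal Python/NumPy sketch of why such problems are ill-defined (the
> forward matrix and source values are invented for illustration): with far
> fewer sensors than sources, distinct source configurations produce exactly
> the same surface readings, so the readings alone cannot pick between them.
>
>     import numpy as np
>
>     # Forward model: 2 "surface sensors" observing 4 "sources": y = A @ x
>     A = np.array([[1.0, 1.0, 0.0, 0.0],
>                   [0.0, 0.0, 1.0, 1.0]])
>
>     x1 = np.array([2.0, 0.0, 1.0, 3.0])            # one candidate source pattern
>     x2 = x1 + np.array([-1.0, 1.0, 2.0, -2.0])     # a different pattern, shifted along A's null space
>
>     print(A @ x1)                       # [2. 4.]
>     print(A @ x2)                       # [2. 4.]  -- identical readings
>     print(np.allclose(A @ x1, A @ x2))  # True: the data under-determine the source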
>
> I am realising that I may have a contribution to make to AGI by helping
> strengthen its science base. I've run out of Sunday, so I'd like to leave
> the discussion there... to be continued sometime.
>
> Meanwhile I'd encourage everyone to get used to the idea that to be
> involved in AGI is to _not_ be involved in purely COMP principles. Purely
> COMP = traditional domain-bound AI. It will produce very good results in
> specific problem areas and will be fragile and inflexible when encountering
> novelty. AI will remain a perfectly valid target for very exciting COMP-based
> solutions. However, those solutions will never be AGI. Continuing with a
> purely COMP approach is a strategically fatal flaw which will result in
> a club, not a scientific discipline. This is of great concern to me. Please
> sit back and let this realisation wash over you. It's what I had to do. I
> used to think in COMP terms too. And have fun! This is supposed to be fun!
>
> cheers
> Colin Hales
>
> Ben Goertzel wrote:
>
>
> The argument seems wrong to me intuitively, but I'm hard-put to argue
> against it because the terms are so unclearly defined ... for instance I
> don't really know what you mean by a "visual scene" ...
>
> I can understand that creating a form of this argument worthy of being
> carefully debated would be a lot more work than writing the summary email
> you've given.
>
> So, I agree with your judgment not to try to extensively debate the
> argument in its current sketchily presented form.
>
> If you do choose to present it carefully at some point, I encourage you to
> begin by carefully defining all the terms involved ... otherwise it's really
> not possible to counter-argue in a useful way ...
>
> thx
> ben g
>
> On Sat, Oct 4, 2008 at 12:31 AM, Colin Hales <[EMAIL PROTECTED]> wrote:
>
>> Hi Mike,
>> I can give the highly abridged flow of the argument:
>>
>> 1) It refutes COMP, where COMP = Turing machine-style abstract symbol
>> manipulation; in particular, the 'digital computer' as we know it.
>> 2) The refutation happens in one highly specific circumstance. Being
>> false in that circumstance, it is false as a general claim.
>> 3) The circumstance: if COMP is true, then it should be able to implement
>> an artificial scientist with the following faculties:
>>   (a) scientific behaviour (goal-delivery of a 'law of nature', an
>> abstraction BEHIND the appearances of the distal natural world, not merely
>> the report of what is there),
>>   (b) scientific observation based on the visual scene,
>>   (c) scientific behaviour in an encounter with radical novelty. (This is
>> what humans do)
>>
>> The argument's empirical knowledge is:
>> 1) The visual scene is visual phenomenal consciousness. A highly specified
>> occipital lobe deliverable.
>> 2) In the context of a scientific act, scientific evidence is 'contents of
>> phenomenal consciousness'. You can't do science without it. In the context
>> of this scientific act, visual P-consciousness and scientific evidence are
>> identities. P-consciousness is necessary but on its own is not sufficient.
>> Extra behaviours are needed, but these are a secondary consideration here.
>>
>> NOTE: Do not confuse "scientific observation" with "scientific
>> measurement", which is a collection of causality located in the distal
>> external natural world. (Scientific measurement is not the same thing as
>> scientific evidence, in this context.) The necessary feature of a visual
>> scene is that it operates whilst faithfully inheriting the actual causality
>> of the distal natural world. You cannot acquire a law of nature without this
>> basic need being met.
>>
>> 3) Basic physics says that it is impossible for a brain to create a visual
>> scene using only the inputs acquired by the peripheral stimulus received at
>> the retina. This is due to fundamentals of quantum degeneracy. Basically,
>> there is an infinite number of distal external worlds that can deliver the
>> exact same photon impact. The transduction that occurs in the retinal
>> rods/cones is entirely a result of protein isomerisation. All information
>> about distal origins is irretrievably gone. An impacting photon could have
>> come from across the room or across the galaxy. There is no information about
>> origins in the transduced data in the retina. (A toy projection sketch
>> follows below.)
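>>
>> A small, purely illustrative Python sketch of the information loss (the
>> geometry and numbers are invented; an idealised pinhole eye stands in for
>> the retina): scene points at wildly different distances along the same ray
>> land on the same retinal coordinate, so the retinal data alone cannot say
>> where the light came from.
>>
>>     def project(point, focal_length=1.0):
>>         # idealised pinhole projection of a 3-D scene point onto a 2-D "retina"
>>         x, y, z = point
>>         return (focal_length * x / z, focal_length * y / z)
>>
>>     near_point = (0.2, 0.1, 1.0)            # one metre away
>>     far_point  = (2.0e9, 1.0e9, 1.0e10)     # ten million kilometres away, same ray
>>
>>     print(project(near_point))   # (0.2, 0.1)
>>     print(project(far_point))    # (0.2, 0.1) -- identical retinal coordinates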
>>
>> That established, you are then faced with a paradox:
>>
>> (i) (3) says a visual scene is impossible.
>> (ii) Yet the brain makes one.
>> (iii) To make the scene some kind of access to distal spatial relations
>> must be acquired as input data in addition to that from the retina.
>> (iv) There are only two places that data can come from:
>>       (a) via matter (which we already have - retinal impact at the
>> boundary that is the agent periphery);
>>       (b) via space (at the boundary of the matter of the brain with
>> space, the biggest boundary by far).
>> So the conclusion is that the brain MUST acquire the necessary data via
>> the spatial boundary route. You don't have to know how; you just have no
>> other choice. There is no third party in there to add the necessary data, and
>> the distal world is unknown. There is literally nowhere else for the data to
>> come from. Matter and space exhaust the list of options. (There is always
>> magical intervention... but I leave that to the space cadets.)
>>
>> That's probably the main novelty for the reader to encounter. But we
>> are not done yet.
>>
>> Next empirical fact:
>> (v) When you create a Turing-COMP substrate, the interface with space is
>> completely destroyed and replaced with the randomised machinations of the
>> matter of the computer manipulating a model of the distal world. All actual
>> relationships with the real distal external world are destroyed. In that
>> circumstance the COMP substrate is implementing the science of an encounter
>> with a model, not an encounter with the actual distal natural world.
>>
>> No amount of computation can make up for that loss, because you are in a
>> circumstance of an intrinsically unknown distal natural world (the novelty
>> of an act of scientific observation).
>>
>> => COMP is false.
>> ======
>> OK.  There are subtleties here.
>> The refutation is, in effect, a result of saying you can't do it (replace
>> a scientist with a computer) because you can't simulate the inputs. It is just
>> that the nature of 'inputs' has been traditionally impoverished by assumptions
>> born merely of cross-disciplinary blindness. Not enough quantum mechanics
>> or electrodynamics is done by those exposed to 'COMP' principles.
>>
>> This result, at first appearance, says "you can't simulate a scientist".
>> But you can! If you already know what is out there in the natural world, then
>> you can simulate a scientific act. But you don't - by definition - you are
>> doing science to find out! So it's not that you can't simulate a scientist;
>> it is just that in order to do it you already have to know everything, so
>> you don't want to... it's useless. So the words 'refutation of COMP by an
>> attempted COMP implementation of a scientist' have to be carefully
>> contrasted with the words "you can't simulate a scientist".
>>
>> The self-referential use of scientific behaviour as scientific evidence
>> has cut logical swathes through all sorts of issues. COMP is only one of
>> them. My AGI benchmark and design aim is "the artificial scientist". Note
>> also that this result does not imply that real AGI can only be organic like
>> us. It means that real AGI must have new chips that fully capture all the
>> inputs and make use of them to acquire knowledge the way humans do - a
>> separate matter altogether. COMP, as an AGI designer's option, is out of the
>> picture.
>>
>> I think this just about covers the basics. The papers are dozens of pages;
>> I can't condense it any more than this. I have debated this so much it's way
>> past its use-by date. Most of the arguments go like this: "But you CAN!...".
>> I am unable to defend such 'arguments from under-informed authority'... I
>> defer to the empirical reality of the situation and would prefer that it be
>> left to justify itself. I did not make any of it up. I merely observed.
>> ...and so, if you don't mind, I'd rather leave the issue there.
>>
>> regards,
>>
>> Colin Hales
>>
>>
>>
>> Mike Tintner wrote:
>>
>>> Colin:
>>>
>>> 1) Empirical refutation of computationalism...
>>>
>>> .. interesting because the implication is that if anyone
>>> doing AGI lifts their finger over a keyboard thinking they can be
>>> directly involved in programming anything to do with the eventual
>>> knowledge of the creature...they have already failed. I don't know
>>> whether the community has internalised this yet.
>>>
>>> Colin,
>>>
>>> I'm sure Ben is right, but I'd be interested to hear the essence of your
>>> empirical refutation. Please externalise it so we can internalise it :)
>>>
>>>
>>>
>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome "  - Dr Samuel Johnson
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com
