To add to this discussion, I'd like to point out that many AI systems have
been used and scientifically evaluated as *psychological* models, e.g.
cognitive models.

For instance, SOAR and ACT-R are among the many systems that have been used
and evaluated this way.

The goal of that sort of research is to arrive at simple, principled
explanations of human behavior in psychology experiments, by building
software systems that precisely simulate that behavior.

So, one possible approach to AGI would be via cognitive modeling of this
sort.
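
To make that methodology concrete: a cognitive model of this sort is
typically scored by how closely its simulated behavior tracks measured human
data. A toy sketch in Python (the power-law form and every number below are
illustrative placeholders, not taken from any actual SOAR or ACT-R study):

    # Toy sketch: score a cognitive model by the fit between its
    # predicted reaction times (RTs) and observed human RTs.

    def predict_rt(trial, a=1.2, b=0.4):
        # Hypothetical model: RT falls with practice (power law).
        return a * trial ** -b

    human_rts = [1.21, 0.95, 0.83, 0.76, 0.72]  # made-up data (seconds)
    model_rts = [predict_rt(t) for t in range(1, len(human_rts) + 1)]

    # Root-mean-squared deviation, a common fit statistic in this field.
    rmsd = (sum((h - m) ** 2 for h, m in zip(human_rts, model_rts))
            / len(human_rts)) ** 0.5
    print("model RTs:", [round(m, 2) for m in model_rts])
    print("RMSD vs. human data: %.3f s" % rmsd)

A model that keeps a good fit across many different experiments, with few
free parameters, counts as a good explanation in this tradition.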

This is quite different from brain simulation, and also quite different from
AGI work that seeks generally, but not precisely, humanlike behavior.

I know there is some divergence in the SOAR community between those who want
to use SOAR for scientific cognitive modeling, and those who want to use it
for building AGI that emulates human thought qualitatively but not precisely.

-- Ben G



On Mon, Dec 22, 2008 at 11:36 PM, Colin Hales
<c.ha...@pgrad.unimelb.edu.au> wrote:

>  Ed,
> I wasn't trying to justify or promote a 'divide'. The two worlds must be
> better off in collaboration, surely? I merely point out that there are
> fundamental limits as to how computer science (CS) can inform/validate
> basic/physical science - (in an AGI context, brain science). Take the
> Baars/Franklin "IDA" project. Baars invents 'Global Workspace' = a metaphor
> of apparent brain operation. Franklin writes one. Afterwards, you're
> standing next to it, wondering about its performance. What part of its
> behaviour has any direct bearing on how a brain works? It predicts nothing
> neuroscience can poke a stick at. All you can say is that the computer is
> manipulating abstractions according to a model of brain material. At best
> you get to be quite right and prove nothing. If the beastie also
> underperforms then you have seeds for doubt that also prove nothing.
>
> CS as 'science' has always had this problem. AGI merely inherits its
> implications in a particular context/speciality. There's nothing bad or good
> - merely justified limits as to how CS and AGI may interact via brain
> science.
> ----------------
> I agree with your:
>
> "*At the other end of things, physicists are increasingly viewing physical
> reality as a computation, and thus the science of computation (and
> communication which is a part of it), such as information theory, has begun
> to play an increasingly important role in the most basic of all sciences.*
> "
>
> I would advocate physical reality (all of it) as *literally* computation
> in the sense of information processing. Hold a pencil up in front of your
> face and take a look at it... realise that the universe is 'computing a
> pencil'. Take a look at the computer in front of you: the universe is
> 'computing a computer'. The universe is literally computing YOU, too. The
> computation is not 'about' a pencil, a computer, a human. The computation IS
> those things. In exactly this same sense I want the universe to 'compute' an
> AGI (inorganic general intelligence). To me, then, this is *not* manipulating
> abstractions ('aboutnesses') - which is the sense generally meant by CS, and
> what actually happens in CS in practice.
>
> So despite some agreement as to words - it is in the details we are likely
> to differ. The information processing in the natural world is not that which
> is going on in a model of it. As Edelman said (1), "*A theory to account for
> a hurricane is not a hurricane*". In exactly this way a
> computational-algorithmic process "about" intelligence cannot a priori be
> claimed to be the intelligence of that which was modelled. Or - put yet
> another way: that {THING behaves 'abstract-RULE-ly'} does not entail that
> {anything manipulated according to abstract-RULE will become THING}. The
> only perfect algorithmic (100% complete information content) description of
> a thing is the actual thing, which includes all 'information' at all
> hierarchical descriptive levels, simultaneously.
> --------------------
> I disagree with:
>
> "But the brain is not part of an eternal verity.  It is the result of the
> engineering of evolution. "
>
> Unless I've missed something ... The natural evolutionary 'engineering'
> that has been going on has *not* been the creation of a MODEL (aboutness)
> of things - the 'engineering' has evolved the construction of the *actual*
> things. The two are not the same. The brain is indeed 'part of an eternal
> verity' - it is made of natural components operating in a fashion we attempt
> to model as 'laws of nature'. Those models, abstracted and shoehorned into a
> computer, are not the same as the original. To believe that they are is one
> of those Occam's Razor violations I pointed out before my xmas shopping
> spree (see my post before last).
> -----------------------
>
> Anyway, for these reasons, folks who use computer models to study human
> brains/consciousness will encounter some difficulty justifying, to the basic
> physical sciences, claims made as to the equivalence of the model and
> reality. That difficulty is fundamental and cannot be 'believed away'. At
> the same time it's not a show-stopper; merely something to be aware of as we
> go about our duties. This will remain an issue - the only real, certain,
> known example of a general intelligence is the human. The intelligence
> originates in the brain. AGI and brain science must be literally joined at
> the hip or the AGI enterprise is arguably scientifically impoverished
> wishful thinking. Which is pretty much what Ben said...although as usual I
> have used too many damned words!
>
> I expect we'll just have to agree to disagree... but there you have it :-)
>
> colin hales
> (1) Edelman, G. (2003). Naturalizing consciousness: A theoretical
> framework. Proc Natl Acad Sci U S A, 100(9), 5520-5524.
>
>
>
> Ed Porter wrote:
>
>  Colin,
>
>
>
> From a quick read, the gist of what you are saying seems to be that AGI is
> just "engineering", i.e., the study of what man can make and the properties
> thereof, whereas "science" relates to the eternal verities of reality.
>
>
>
> But the brain is not part of an eternal verity.  It is the result of the
> engineering of evolution.
>
>
>
> At the other end of things, physicists are increasingly viewing physical
> reality as a computation, and thus the science of computation (and
> communication which is a part of it), such as information theory, has begun
> to play an increasingly important role in the most basic of all sciences.
>
>
>
> And to the extent that the study of the human mind is a "science", then the
> study of the types of computation that are done in the mind is part of that
> science, and AGI is the study of many of the same functions.
>
>
>
> So your post might explain the reason for a current cultural divide, but it
> does not really provide a justification for it. In addition, if you attend
> events at either MIT's brain study center or its AI center, you will find
> that many of the people there are from the other of the two centers, and
> that there is a considerable degree of cross-fertilization, whose benefits I
> have heard people at such events describe.
>
>
>
> Ed Porter
>
>
>
>
>
> -----Original Message-----
> *From:* Colin Hales 
> [mailto:c.ha...@pgrad.unimelb.edu.au]
>
> *Sent:* Monday, December 22, 2008 6:19 PM
> *To:* agi@v2.listbox.com
> *Subject:* Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
> machine that can learn from experience
>
>
>
> Ben Goertzel wrote:
>
>
>
> On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter <ewpor...@msn.com> wrote:
>
> Ben,
>
>
>
> Thanks for the reply.
>
>
>
> It is a shame the brain science people aren't more interested in AGI.  It
> seems to me there is a lot of potential for cross-fertilization.
>
>
>
> I don't think many of these folks have a principled or deep-seated
> *aversion* to AGI work or anything like that -- it's just that they're
> busy people and need to prioritize, like all working scientists.
>
> There's a more fundamental reason: software engineering is not 'science' in
> the sense understood in the basic physical sciences. Science works to
> acquire models of empirically provable critical dependencies (apparent
> causal necessities). Software engineering never delivers this. The result of
> the work, however interesting and powerful, is a model that is, at best,
> merely a correlate of some a priori 'designed' behaviour. Testing to your
> own specification is normal behaviour in computer science. This is *not* the
> testing done in the basic physical sciences - they 'test' (empirically
> examine) whatever is naturally there, which is, by definition, a priori
> unknown.
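>
> To caricature the contrast with a toy sketch (Python; purely illustrative,
> not a claim about any real methodology): the CS-style test asserts
> conformance to a specification we ourselves wrote, while the science-style
> test asks how well a prediction matches whatever nature actually does:
>
>     # CS-style test: pass/fail against our own a-priori specification.
>     def controller_output(x):
>         return 2 * x + 1               # the 'designed' behaviour
>
>     assert controller_output(3) == 7   # spec says 7 - and we wrote the spec
>
>     # Science-style test: compare model predictions with measurements
>     # of a phenomenon we did not design (numbers are made up).
>     measured = [2.9, 5.1, 7.2, 8.8]
>     predicted = [3.0, 5.0, 7.0, 9.0]
>     residuals = [m - p for m, p in zip(measured, predicted)]
>     print("mean residual:", sum(residuals) / len(residuals))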
>
> No matter how interesting it may be, software tells us nothing about the
> actual causal dependencies. The computer's physical hardware (semiconductor
> charge manipulation), configured as per the software, is the actual and
> ultimate causal necessitator of all the natural behaviour of the hot rocks
> inside your computer. Software is MANY:1 redundantly/degenerately related to
> the physical (natural-world) outcomes. The brilliantly useful
> 'hardware-independence' achieved by software engineering - essentially
> analogue electrical machines behaving 'as if' they were digital, so
> powerful and elegant - actually places the software's activities outside
> the realm of any claims as causal.
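>
> To make that MANY:1 point concrete with a toy sketch (Python; purely
> illustrative): two textually different programs can be behaviourally
> identical at the software level while the instruction streams - and hence
> the physical charge manipulations - that realise them differ:
>
>     # Two different programs, one software-level behaviour.
>     def sum_iterative(n):
>         total = 0
>         for i in range(1, n + 1):
>             total += i
>         return total
>
>     def sum_recursive(n):
>         return 0 if n == 0 else n + sum_recursive(n - 1)
>
>     # Identical observable 'behaviour'...
>     assert sum_iterative(10) == sum_recursive(10) == 55
>
>     # ...but the compiled bytecode differs (prints False):
>     print(sum_iterative.__code__.co_code == sum_recursive.__code__.co_code)
>
> Neither text is 'the' cause of what the hardware does: the hardware does
> different things in each case while the software-level story stays the same.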
>
> This is the fundamental problem that the basic physical sciences have with
> computer 'science'. It is not, in the formal sense, a 'science'. That doesn't
> mean CS is bad or irrelevant - it just means that its value as a revealer
> of the properties of the natural world must be accepted with appropriate
> caution.
>
> I've spent tens of thousands of hours testing software that drove all
> manner of physical-world equipment - some of it the size of a 10-storey
> building. I was testing to my own/others' specifications. Throughout all of it
> I knew I was not doing science in the sense that scientists know it to be.
> The mantra is "correlation is not causation" and it's beaten into scientist
> pups from an early age. Software is a correlate only - it 'causes' nothing.
> In critical argument over claims of software as
> causality, it would be defeated in review every time. A scientist,
> standing there with an algorithm/model of a natural world behaviour, knows
> that the model does not cause the behaviour. However, the scientist's model
> represents a route to predictive efficacy in respect of a unique natural
> phenomenon. Computer software does not predict the causal origination of the
> natural world behaviours driven by it. 10 compilers could produce 10
> different causalities on the same computer. 10 different computers running
> the same software would produce 10 different lots of causality.
>
> That's my take on why the basic physical sciences may be under-motivated to
> use AGI as a route to the outcomes demanded of their field of interest =
> 'Laws/regularities of *Nature*'. It may be that computer 'science'
> generally needs to train people better in their understanding of science. As
> an engineer with a foot in both camps, it's not so hard for me to see this.
>
> Randall Beer called software "tautologous" as a law of nature... I think it
> was here:
> Beer, R. D. (1995). A dynamical systems perspective on agent-environment
> interaction. Artificial Intelligence, 72(1-2), 173-215.
> I have a .PDF if anyone's interested...it's 3.6MB though.
>
> cheers
> colin hales



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx


