[agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
Nobody has mentioned this yet.

http://www.physorg.com/news146319784.html

Quotes:

 However, Roy's controversial ideas on how the brain works and learns
probably won't immediately win over many of his colleagues, who have
spent decades teaching robots and artificial intelligence (AI) systems
how to think using the classic connectionist theory of the brain.
Connectionists propose that the brain consists of an interacting
network of neurons and cells, and that it solves problems based on how
these components are connected. In this theory, there are no separate
controllers for higher level brain functions, but all control is local
and distributed fairly equally among all the parts.

In his paper, Roy argues for a controller theory of the brain. In this
view, there are some parts of the brain that control other parts,
making it a hierarchical system. In the controller theory, which fits
with the so-called computational theory, the brain learns lots of
rules and uses them in a top-down processing method to operate.

In his paper, Roy shows that the connectionist theory actually is
controller-based, using a logical argument and neurological evidence.
He explains that some of the simplest connectionist systems use
controllers to execute operations, and, since more complex
connectionist systems are based on simpler ones, these too use
controllers. If Roy's logic correctly describes how the brain
functions, it could help AI researchers overcome some inherent
limitations in connectionist algorithms.

"Connectionism can never create autonomous learning machines, and
that's where its flaw is," Roy told PhysOrg.com. "Connectionism
requires human babysitting of their learning algorithms, and that's
not very brain-like. We don't guide and control the learning inside
our head."
etc
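To make the contrast concrete, here is a minimal Python sketch
(illustrative only; the names and rule choices are hypothetical, not
from Roy's paper): a purely local, distributed update versus the same
network with a higher-level module choosing, top-down, which rule
applies.

import numpy as np

# Connectionist picture: every unit applies the same local rule to its
# neighbours' activations; control is distributed, no unit is in charge.
def connectionist_step(acts, weights):
    return np.tanh(weights @ acts)

# Controller picture: a higher-level module selects which rule the
# lower-level network applies at each step (hierarchical, top-down).
def controller_step(acts, weights, rules, pick_rule):
    rule = rules[pick_rule(acts)]          # top-down choice of rule
    return rule(weights @ acts)

rng = np.random.default_rng(0)
acts = rng.standard_normal(4)
weights = rng.standard_normal((4, 4))

rules = {"squash": np.tanh, "rectify": lambda x: np.maximum(x, 0.0)}
pick_rule = lambda a: "squash" if a.mean() > 0 else "rectify"

print(connectionist_step(acts, weights))
print(controller_step(acts, weights, rules, pick_rule))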

BillK




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
Here's a link to the paper:
http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Bob Mottram
2008/11/20 Vladimir Nesov [EMAIL PROTECTED]:
 Here's a link to the paper:
 http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411


This doesn't sound especially controversial to me.  Clearly there are
systems in the brain which control parameters of the body, such as
heart rate and temperature, in the classical control theory sense.
Feedback control is probably not limited to these more obvious
examples, though, and regulates other psychological processes as well.
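A minimal sketch of such a feedback loop, in Python (made-up constants,
not a physiological model):

def simulate(setpoint=37.0, temp=35.0, gain=0.3, drift=-0.05, steps=20):
    # proportional feedback: the controller pushes the regulated variable
    # (e.g. core temperature) back toward the set point each step
    history = []
    for _ in range(steps):
        error = setpoint - temp         # deviation from the set point
        temp += gain * error + drift    # control action plus a disturbance
        history.append(round(temp, 3))
    return history

print(simulate())   # the trajectory settles near the set point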

It's not difficult to criticize trivial connectionist systems, such as
MLP, where obviously there is no explicit control or regulation going
on - merely an elaborate transformation from inputs to outputs.  But
in larger connectionist systems such as Edelman's Darwin automata there
are parts of the system which control or regulate other parts.  It
could be argued that control systems are essential for scalability of
a system without loss of coherence.
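The MLP point can be made in a few lines (illustrative weights; note
that nothing here monitors or regulates anything else at run time):

import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    hidden = np.tanh(w1 @ x + b1)    # elaborate, but purely feed-forward
    return w2 @ hidden + b2          # no feedback loop, no controller

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
w1, b1 = rng.standard_normal((5, 3)), np.zeros(5)
w2, b2 = rng.standard_normal((2, 5)), np.zeros(2)
print(mlp_forward(x, w1, b1, w2, b2))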

There may not be any overall uber-controller in the brain, though.  If
there were, then cutting the corpus callosum would presumably cause major
psychological disintegration, which doesn't appear to happen.




[agi] Hunting for a Brainy Computer

2008-11-20 Thread Rafael C.P.
http://bits.blogs.nytimes.com/2008/11/20/hunting-for-a-brainy-computer/

===[ Rafael C.P. ]===





Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Pei Wang
The basic assumptions behind the project, from the webpage of its team
lead at http://www.modha.org/ :

"The mind arises from the wetware of the brain. Thus, it would seem
that reverse engineering the computational function of the brain is
perhaps the cheapest and quickest way to engineer computers that mimic
the robustness and versatility of the mind."

"Cognitive computing seeks to engineer holistic intelligent machines
that neatly tie together all of the pieces. Cognitive computing seeks
to uncover the core micro and macro circuits of the brain underlying a
wide variety of abilities. So, it aims to proceed in an algorithm-first,
problems-later fashion."

"I believe that spiking computation is a key to achieving this vision."

--- I have a problem with each of these assumptions and beliefs, though
I don't think anyone can convince someone who just got a big grant
that they are moving in the wrong direction. ;-)

Pei

On Thu, Nov 20, 2008 at 8:29 AM, Rafael C.P. [EMAIL PROTECTED] wrote:
 http://bits.blogs.nytimes.com/2008/11/20/hunting-for-a-brainy-computer/

 ===[ Rafael C.P. ]===
 




RE: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Derek Zahn

Pei Wang: --- I have a problem with each of these assumptions and beliefs, 
though I don't think anyone can convince someone who just got a big grant 
that they are moving in the wrong direction. ;-)
With his other posts about the Singularity Summit and his invention of the word 
Synaptronics, Modha certainly seems to be a kindred spirit to many on this 
list.
 
I think what he's trying to do with this project (to the extent I understand 
it) seems like a reasonably promising approach (not really to AGI as such, but 
experimenting with soft computing substrates is kind of a cool enterprise to 
me).  Let a thousand flowers bloom.
 
However, when he says things on his blog like "In my opinion, there are three 
reasons why the time is now ripe to begin to draw inspiration from structure, 
dynamics, function, and behavior of the brain for developing novel computing 
architectures and cognitive systems" -- I despair again.
 
Dr. Wang, if you want to get some funding maybe you should start promoting NARS 
as a theory of the brain :)
 




RE: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser
Yeah.  Great headline -- "Man beats dead horse beyond death!"

I'm sure that there will be more details at 11.

Though I am curious . . . .  BillK, why did you think that this was worth 
posting?
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Thursday, November 20, 2008 9:43 AM
  Subject: **SPAM** RE: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



  From the paper:

   "This paper has proposed a new paradigm for the 
   internal mechanisms of the brain, one that postulates 
   that there are parts of the brain that control other parts."

  Sometimes I despair.







Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
On Thu, Nov 20, 2008 at 3:06 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Yeah.  Great headline -- Man beats dead horse beyond death!

 I'm sure that there will be more details at 11.

 Though I am curious . . . .  BillK, why did you think that this was worth
 posting?



???  Did you read the article?

---
Quote:
In the late '90s, Asim Roy, a professor of information systems at
Arizona State University, began to write a paper on a new brain
theory. Now, 10 years later and after several rejections and
resubmissions, the paper "Connectionism, Controllers, and a Brain
Theory" has finally been published in the November issue of IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans.

Roy's theory undermines the roots of connectionism, and that's why his
ideas have experienced a tremendous amount of resistance from the
cognitive science community. For the past 15 years, Roy has engaged
researchers in public debates, in which it's usually him arguing
against a dozen or so connectionist researchers. Roy says he wasn't
surprised at the resistance, though.

"I was attempting to take down their whole body of science," he
explained. "So I would probably have behaved the same way if I were in
their shoes."

No matter exactly where or what the brain controllers are, Roy hopes
that his theory will enable research on new kinds of learning
algorithms. Currently, restrictions such as local and memoryless
learning have limited AI designers, but these concepts are derived
directly from that idea that control is local, not high-level.
Possibly, a controller-based theory could lead to the development of
truly autonomous learning systems, and a next generation of
intelligent robots.

The sentiment that the science is stuck is becoming common to AI
researchers. In July 2007, the National Science Foundation (NSF)
hosted a workshop on the "Future Challenges for the Science and
Engineering of Learning." The NSF's summary of the "Open Questions in
Both Biological and Machine Learning" [see below] from the workshop
emphasizes the limitations in current approaches to machine learning,
especially when compared with biological learners' ability to learn
autonomously under their own self-supervision:

"Virtually all current approaches to machine learning typically
require a human supervisor to design the learning architecture, select
the training examples, design the form of the representation of the
training examples, choose the learning algorithm, set the learning
parameters, decide when to stop learning, and choose the way in which
the performance of the learning algorithm is evaluated. This strong
dependence on human supervision is greatly retarding the development
and ubiquitous deployment of autonomous artificial learning systems.
Although we are beginning to understand some of the learning systems
used by brains, many aspects of autonomous learning have not yet been
identified."

Roy sees the NSF's call for a new science as an open door for a new
theory, and he plans to work hard to ensure that his colleagues
realize the potential of the controller model. Next April, he will
present a four-hour workshop on autonomous machine learning, having
been invited by the Program Committee of the International Joint
Conference on Neural Networks (IJCNN).
-
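The NSF paragraph quoted above maps almost one-to-one onto the steps of
an ordinary supervised-learning script. A sketch (scikit-learn is used
purely for concreteness; every commented step is a human design
decision):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                # human selects the training examples
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)        # human designs the evaluation
model = LogisticRegression(C=1.0, max_iter=200)  # human picks the algorithm, its
model.fit(X_tr, y_tr)                            # parameters, and the stopping rule
print(model.score(X_te, y_te))                   # human chooses the performance measure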


Now his 'new' theory may be old hat to you personally, but apparently
not to the majority of AI researchers (according to the article).  He
must be saying something a bit unusual to have been fighting for ten
years to get it published and accepted enough for him to now have been
invited to do a workshop on his theory.


BillK




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Ben Goertzel
Hmmm...

I skimmed over the paper at

http://wpcarey.asu.edu/pubs/index.cfm

and I have to say I agree with the skeptics.

I don't doubt that this guy has made significant contributions in
other areas of science and engineering, but this paper displeases me a
great deal, due to making big claims of originality for ideas that are
actually very old hat, and bolstering these claims via attacking a
straw man of simplistic connectionism.

The idea that engineering control theory could be applicable to the
brain is hardly original.

As one among many, many examples, James Albus has published a lot of
stuff along these lines since the 1970s

http://www.isd.mel.nist.gov/personnel/albus/publications.htm

including a great talk at the recent AAAI BICA symposium focusing on
brain theory specifically

http://binf.gmu.edu/~asamsono/bica/albus.htm

Also, Stephen Grossberg's brain theories, going back to the 60s, have
posited a strong role for controllers and analogues of engineering-style
control theory in the brain.

The "simplistic connectionism" this author argues against **is** a
real point of view held by some theorists, but it's hardly a consensus
... it's kind of an unpopular, 20-year-old, worn-out meme by now...

And his proposed alternative is simply far less fleshed out than
Grossberg's, Albus's, or many other theorists' ideas with similar (but
deeper and broader) conceptual foundations...

Double thumbs down: not for wrongheadedness, but for excessive claims
of originality plus egregious straw man arguments...

-- Ben G

On Thu, Nov 20, 2008 at 10:37 AM, BillK [EMAIL PROTECTED] wrote:
 ???  Did you read the article?
snip

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Richard Loosemore

BillK wrote:

Nobody has mentioned this yet.

http://www.physorg.com/news146319784.html


I got a draft version of the paper earlier this year, and after a quick 
scan I filed it under 'junk'.


I just read it through again, and the filing stays the same.

His basic premise is that connectionists argued from the very beginning 
that they wanted to do things in a way that did not involve a central 
executive.  They wanted to see how much could be done by having large 
numbers of autonomous units do things independently.  Turns out, quite a 
lot can be achieved that way.


But it seems that Asim Roy has fundamentally misunderstood the force and 
the intent of that initial declaration by the connectionists.  There was 
a reason they said what they said:  they wanted to get away from the old 
symbol processing paradigm in which one thing happened at a time and 
symbols were separated from the mechanisms that modified or used 
symbols.  The connectionists were not being dogmatic about "No 
Controllers!"; they just wanted to stop all power being vested in the 
hands of a central executive ... and their motivation was from cognitive 
science, not engineering or control theory.


Roy seems to be completely obsessed with the idea that they are wrong, 
while at the same time not really understanding why they said it, and 
not really having a concrete proposal (or account of empirical data) to 
substitute for the connectionist ideas.


To tell the truth, I don't think there are many connectionists who are 
so hell-bent on the idea of not having a central controller, that they 
would not be open to an architecture that did have one (or several). 
They just don't think it would be good to have central controllers in 
charge of ALL the heavy lifting.


Roy's paper has the additional disadvantage of being utterly filled with 
underlines and boldface.  He shouts.  Not good in something that is 
supposed to be a scientific paper.


Sorry, but this is just junk.




Richard Loosemore





Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
On Thu, Nov 20, 2008 at 7:04 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 BillK wrote:

 Nobody has mentioned this yet.

 http://www.physorg.com/news146319784.html

 I got a draft version of the paper earlier this year, and after a quick scan
 I filed it under 'junk'.

 I just read it through again, and the filing stays the same.


I have to agree. The paper attacks a strawman with blanket assertions.
Even worse, the attack itself is flawed: in section 2 he tries to
define the concept of "control", and, having trouble with free-will-like
issues, produces a combination of brittle and nontechnical
assertions. As a result, in his own example (at the very end of
section 2), a doctor is considered in control of treating a patient
only if he can prescribe *arbitrary* treatment that doesn't depend on
the patient (or his illness).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Ben Goertzel
yay ... we all agree on something ;-p

On Thu, Nov 20, 2008 at 11:46 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I have to agree. The paper attacks a strawman with blanket assertions.
snip




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.  -- Robert
Heinlein




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Ben Goertzel
And btw, the notion that control is a key concept in the brain goes
back at least to Norbert Wiener's book "Cybernetics" from 1948!!
... Principia Cybernetica has a simple but clear webpage on the
control concept in cybernetics...

http://pespmc1.vub.ac.be/CONTROL.html

ben g

On Thu, Nov 20, 2008 at 11:53 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 yay ... we all agree on something ;-p
snip




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.  -- Robert
Heinlein




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Pei Wang
Derek,

I have no doubt that their proposal contains interesting ideas and
will produce interesting and valuable results --- most AI projects do,
though the results and the values are often not what they targeted (or
they claimed to be targeting) initially.

Biologically inspired approaches are attractive, partly because they
have an existing proof that the mechanism works. However, we need to
remember that being inspired by a working solution is one thing, and
treating that solution as the best way to achieve a goal is another.
Furthermore, the difficult part in these approaches is to separate the
aspects of the biological mechanism/process that should be duplicated
from the aspects that shouldn't.

Yes, maybe I should market NARS as a theory of the brain, just a very
high-level one. ;-)

Pei

On Thu, Nov 20, 2008 at 10:06 AM, Derek Zahn [EMAIL PROTECTED] wrote:
 With his other posts about the Singularity Summit and his invention of the
 word Synaptronics, Modha certainly seems to be a kindred spirit to many on
 this list.
snip




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Pei Wang wrote:

Derek,

I have no doubt that their proposal contains interesting ideas and
will produce interesting and valuable results --- most AI projects do,
though the results and the values are often not what they targeted (or
they claimed to be targeting) initially.
snip


I share your concerns about this project, although I might have a 
slightly different set of reasons for being doubtful.


I watched part of one of the workshops that Modha chaired, on Cognitive 
Computing, and it gave me the same feeling that neuroscience gatherings 
always give me:  a lot of talk about neural hardware, punctuated by 
sudden, out-of-the-blue statements about cognitive ideas that seem 
completely unrelated to the ocean of neural talk that comes before and 
after.


There is a *depressingly* long history of people doing this - and not 
just in neuroscience, but in many branches of engineering, in physics, 
in computer science, etc.  There are people out there who know that the 
mind is the new frontier, and they want to be in the party.  They also 
know that the cognitive scientists (in the broad sense) are probably the 
folks who are at the center of the party (in the sense of having most 
comprehensive knowledge).  So these people do what they do best, but add 
in a sprinkling of technical terms and (to be fair) some actual 
knowledge of some chunks of cognitive science.


Problem is, that to a cognitive scientist what they are doing is 
amateurish.


Another, closely related thing that they do is talk about low level 
issues without realizing just how disconnected those are from where the 
real story (probably) lies.  Thus, Modha emphasizes the importance of 
spike timing as opposed to average firing rate.  He may well be right 
that the pattern or the timing is more important, but IMO he is doing 
the equivalent of saying "Let's talk about the best way to design an 
algorithm to control an airport.  First problem to solve:  should we use 
Emitter-Coupled Logic in the transistors that are in our computers that 
will be running the algorithms."






Richard Loosemore





Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
On Thu, Nov 20, 2008 at 7:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 And btw, the notion that control is a key concept in the brain goes
 back at least to Norbert Wiener's book "Cybernetics" from 1948!!
 ... Principia Cybernetica has a simple but clear webpage on the
 control concept in cybernetics...

 http://pespmc1.vub.ac.be/CONTROL.html


I don't like that definition for basically the same reason, but it
maybe explains where Asim Roy comes from. At least they are not
literally insisting on control being a property of the system itself,
according to this remark:

"Of course, two systems can be in a state of mutual control, but this
will be a different, more complex, relation, which we will still
describe as a combination of two asymmetric control relations."

The controller-controlled relation is a model assigned to the system, not
an intrinsic property of the system itself. Also, there is no "may" or
"could" apart from the semantics of a search algorithm, which is a thing to
keep in mind when making claims like the following, about the freedom of the
controller, and especially when trying to use this notion of freedom
to establish asymmetry:

"The controller C may change the state of the controlled system S in
any way, including the destruction of S."

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Ben Goertzel

 So, basically, you don't disagree with his paper too much.
 You just don't like his attitude. ;)

 Danged AI researchers that think they know it all!   ;)

 You don't think you could call it excessive PR where he is trying to
 dislodge an entrenched view?


The thing is, the "simplistic connectionism" he's railing against is
**not** an entrenched view in the AI community at large ... it's just
an entrenched view in a particular subcommunity of the AI
community.  And it's not as though he has *disproved* their
entrenched view; he has just argued in favor of an alternative view,
which I am more sympathetic to, but which is also well known...

Perhaps the reason his paper got rejected so many times was not that
it was so radical, but rather that it contained so little novel
content ;-)

Occasionally, the peer review system actually can be right...

ben g




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread BillK
On Thu, Nov 20, 2008 at 3:52 PM, Ben Goertzel wrote:

 I skimmed over the paper at
 http://wpcarey.asu.edu/pubs/index.cfm
 and I have to say I agree with the skeptics.

 I don't doubt that this guy has made significant contributions in
 other areas of science and engineering, but this paper displeases me a
 great deal, due to making big claims of originality for ideas that are
 actually very old hat, and bolstering these claims via attacking a
 straw man of simplistic connectionism.

snip

 Double thumbs down: not for wrongheadedness, but for excessive claims
 of originality plus egregious straw man arguments...




So, basically, you don't disagree with his paper too much.
You just don't like his attitude. ;)

Danged AI researchers that think they know it all!   ;)

You don't think you could call it excessive PR where he is trying to
dislodge an entrenched view?


BillK




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Richard Loosemore

Ed Porter wrote:

Richard,

In response to your below copied email, I have the following response to 
the below quoted portions:

### My prior post ###

That aspects of consciousness seem real does not provide much of an 
“explanation for consciousness.”  It says something, but not much.  It 
adds little to Descartes’ “I think therefore I am.”  I don’t think it 
provides much of an answer to any of the multiple questions Wikipedia 
associates with Chalmers’ hard problem of consciousness.

### Richard said ###

I would respond as follows.  When I make statements about consciousness 
deserving to be called “real”, I am only saying this as a summary of a 
long argument that has gone before.  So it would not really be fair to 
declare that this statement of mine “says something, but not much” 
without taking account of the reasons that have been building up toward 
that statement earlier in the paper.

 


## My response ##

Perhaps --- but this prior work which you claim explains so much is not 
in the paper being discussed.  Without it, it is not clear how much your 
paper itself contributes.  And Ben, who is much more knowledgeable than 
I am on these things, seemed similarly unimpressed.


I would say that it does.  I believe that the situation is that you do 
not yet understand it.  Ben has had similar trouble, but seems to be 
comprehending more of the issue as I respond to his questions.


(I owe him one response right now:  I am working on it)



 


### Richard said ###

I am arguing that when we probe the meaning of “real” we find that the 
best criterion of realness is the way that the system builds a 
population of concept-atoms that are (a) mutually consistent with one 
another,

## My response ##

I don’t know what “mutually consistent” means in this context, and from my 
memory of reading your paper multiple times I don’t think it explains it, 
other than perhaps implying that the framework of atoms represents 
experiential generalizations and associations, which would presumably 
tend to represent the regularities of experienced reality.


I'll grant you that one:  I did not explain in detail this idea of 
mutual consistency.


However, the reason I did not is that I really had to assume some 
background, and I was hoping that the reader would already be aware of 
the general idea that cognitive systems build their knowledge in the 
form of concepts that are (largely) consistent with one another, and 
that it is this global consistency that lends strength to the whole.  In 
other words, all the bits of our knowledge work together.


A piece of knowledge like The Loch Ness monster lives in Loch Ness is 
NOT a piece of knowledge that fits well with all of the rest of our 
knowledge, because we have little or no evidence that such a thing as 
the Loch Ness Monster has been photographed, observed by independent 
people, observed by several people at the same time, caught in a trap 
and taken to a museum, been found as a skeletal remain, bumped into a 
boat, etc etc etc.  There are no links from the rest of our knowledge to 
the LNM fact, so we actually do not credit the LNM as being real.


By contrast, facts about Coelacanths are very well connected to the rest 
of our knowledge, and we believe that they do exist.
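A toy rendering of that consistency criterion (a sketch of the idea
only, not a formalism from the paper; the link lists are illustrative):

# score a concept by how many independent pieces of accepted
# knowledge connect to it
knowledge_links = {
    "coelacanth": ["photographs", "museum specimens", "fishery records",
                   "independent observers", "skeletal remains"],
    "loch_ness_monster": [],   # no corroborating links, per the example
}

def realness_score(concept):
    return len(knowledge_links.get(concept, []))

for concept in knowledge_links:
    print(concept, realness_score(concept))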





 


### Richard said ###

and (b) strongly supported by sensory evidence (there are other 
criteria, but those are the main ones).  If you think hard enough about 
these criteria, you notice that the qualia-atoms (those concept-atoms 
that cause the analysis mechanism to bottom out) score very high indeed.  
This is in dramatic contrast to other concept-atoms like hallucinations, 
which we consider 'artifacts' precisely because they score so low.  The 
difference between these two is so dramatic that I think we need to 
allow the qualia-atoms to be called real by all our usual criteria, BUT 
with the added feature that they cannot be understood in any more basic 
terms.

## My response ##

You seem to be defining “real” here to mean believed to exist in what is 
perceived as objective reality.  I personally believe a sense of 
subjective reality is much more central to the concept of consciousness. 

 

Personal computers of today, which most people don’t think have anything 
approaching a human-like consciousness, could in many tasks make 
estimations of whether some signal was “real” in the sense of 
representing something in objective reality without being conscious.  
But a powerful hallucination, combined with a human level of sense of 
being conscious of it, does not appear to be something any current 
computer can achieve. 

 

So if you are looking for the hard problems in consciousness focus more 
on the human subjective sense of awareness, not whether there is 
evidence something is real in what we perceive as objective reality.



Alas, you have 

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser

???  Did you read the article?


Absolutely.  I don't comment on things without reading them (unlike some 
people on this list).  Not only that, I also read the paper that someone was 
nice enough to send the link for.


Now his 'new' theory may be old hat to you personally,  but apparently 
not to the majority of AI researchers, (according to the article).


The phrase "according to the article" is what is telling.  It is an improper 
(and incorrect) portrayal of the majority of AI researchers.


He must be saying something a bit unusual to have been fighting for ten 
years to get it published and accepted enough for him to now have been 
invited to do a workshop on his theory.


Something a bit unusual like Mike Tintner fighting us on this list for ten 
years and then finding someone to accept his theories and run a workshop? 
Note who is running the workshop . . . . not the normal BICA community for 
sure . . . .




- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 20, 2008 10:37 AM
Subject: **SPAM** Re: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



On Thu, Nov 20, 2008 at 3:06 PM, Mark Waser [EMAIL PROTECTED] wrote:

Yeah.  Great headline -- "Man beats dead horse beyond death!"
snip

???  Did you read the article?
snip

BillK

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-20 Thread Steve Richfield
Ben,

Mapping RRA to Hegel's space isn't trivial, but here goes...

On 11/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 I have nothing against Hegel; I think he was a great philosopher.  His
 Logic is really fantastic reading.  And, having grown up surrounded by
 Marxist wannabe-revolutionaries (most of whom backed away from strict
 Marxism in the mid-70s when the truth about the Soviet Union came out in
 America), I am also aware there is a lot of deep truth in Marx's thought, in
 spite of the evil that others wrought with it after his death...


It's refreshing to be able to discuss the structure of problems rather than
simply planning the future of the world as:
1.  We will build AGIs.
2.  The AGIs will create a Singularity.
3.  Then, something wonderful (or horrible) will happen.



 I just think that Hegel's dialectical philosophy is clearer than your
 reverse reductio ad absurdum,


That is because he saw a process that he didn't fully understand, leaving
the participants to argue their many positions for decades/centuries
until many consensus resolutions were identified. Things always look simpler
when you ignore the necessary details.

BTW, there was once a government run by consensus - where all differences
were argued until everyone agreed. That was early Islam, first under Mohamed
and later under 4 subsequent caliphs who worked with Mohamed until his
death. Of course, this is ONLY possible given some sort of understanding of
RRA, yet historical accounts do NOT include anything like RRA (that I have
found). Then, things came unraveled. In a logical world (if this is even
possible given illogical people), consensus should be possible. Allowing for
a few idiots, it should take 90% majority to pass any law or do anything
that is potentially destructive (as though there were anything that a
government could do that is NOT potentially destructive). In short, the
whole rule by majority thing is severely flawed, though it may be OK to
choose representatives.



 and so I'm curious to know what you think your formulation *adds* to the
 classic Hegelian one...


A clear path to resolving differences rather than leaving it to unstructured
argument, compromise, etc., as Hegel did. It directly challenges BOTH sides
of an intractable dispute to seek and find the shared bad assumptions and
NOT compromise, or to shut up because they are simply not smart enough to
participate.



 From what I understand, your RRA heuristic says that, sometimes, when both
 X and ~X are appealing to rational people, there is some common assumption
 underlying the two, which when properly questioned and modified can yield a
 new Y that transcends and in some measure synthesizes aspects of X and ~X


Usually, neither X nor ~X are even deducible from Y. For example, in the
abortion debate, the pro-life side is happy because abortions are more
effectively stopped than if a law had been passed, and the pro-choice is
happy because there are no laws in place. Neither side can even get to the
contentious point that they were at before.



 I suppose Hegel would have called Y the dialectical synthesis of X and ~X,
 right?


Not being a Hegel scholar, that's the way that I see it. Hegel just failed
to take the next step of mapping out exactly how to reach a dialectical
synthesis, which is what RRA does.



 BTW, we are certainly not seeing the fall of capitalism now.  Marx's
 dialectics-based predictions made a lot of errors; for instance, both he and
 Hegel failed to see the emergence of the middle class as a sort of
 dialectical synthesis of the ruling class and the proletariat ;-)


... and America failed to see the coming disappearance of the middle class,
which throws society back into Marx's realm.



 ... but, I digress!!


I don't think so, as we are now thinking about things at the level that a
future AGI would have to be able to think at to provide societal guidance.
If we can't function at this level ourselves, how are we ever going to
create AGIs that do this?


 So, how would you apply your species of dialectics to solve the problem of
 consciousness?  This is a case where, clearly, rational intelligent and
 educated people hold wildly contradictory opinions,


... which is a pretty clear demonstration that consciousness doesn't work
very well. This was EXACTLY my point when discussing Dr. Eliza (that also
has its obvious limitations), that other methods can potentially avoid the
logical traps of the conscious process.



 e.g.




 X1 = consciousness does not exist

 X2 = consciousness is a special extra-physical entity that correlates with
 certain physical systems at certain times

 X3 = consciousness is a kind of physical entity

 X4 = consciousness is a property immanent in everything, that gets
 focused/structured differently via interaction with different physical
 systems

 All these positions contradict each other.  How do you suggest to
 dialectically synthesize them?  ;-)


No, properly restating the above question: 

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Ben Goertzel
Richard,

 The main problem is that if you interpret spike timing to be playing the
 role that you (and they) imply above, then you are committing yourself to a
 whole raft of assumptions about how knowledge is generally represented and
 processed.  However, there are *huge* problems with that set of implicit
 assumptions  not to put too fine a point on it, those implicit
 assumptions are equivalent to the worst, most backward kind of cognitive
 theory imaginable.  A theory that is 30 or 40 years out of date.

 The gung-ho neuroscientists seem blissfully unaware of this fact because
  they do not know enough cognitive science.

 Richard Loosemore


I don't think this is the reason.  There are plenty of neuroscientists
out there
who know plenty of cognitive science.

I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other than ignorance of cog sci data.

Interdisciplinary cog sci has been around a long time now as you know ... it's
not as though cognitive neuroscientists are unaware of its data and ideas...

-- Ben G




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Steve Richfield wrote:

Richard,
 
Broad agreement, with one comment from the end of your posting...
 
On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:


Another, closely related thing that they do is talk about low level
issues without realizing just how disconnected those are from where
the real story (probably) lies.  Thus, Modha emphasizes the
importance of spike timing as opposed to average firing rate.

 
There are plenty of experiments that show that consecutive 
closely-spaced pulses result when something goes off scale, probably 
the equivalent of computing Bayesian probabilities > 100%, somewhat akin 
to the overflow light on early analog computers. These closely-spaced 
pulses have a MUCH larger post-synaptic effect than the same number of 
regularly spaced pulses. However, as far as I know, this only occurs 
during anomalous situations - maybe when something really new happens, 
that might trigger learning?
 
IMHO, it is simply not possible to play this game without having a close 
friend with years of experience poking mammalian neurons. This stuff is 
simply NOT in the literature.
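A leaky-integrator sketch of the temporal-summation effect described
above (illustrative constants, not fitted to any data):

import numpy as np

def peak_potential(pulse_times, tau=10.0, amp=1.0, dt=0.1, t_end=100.0):
    # exponentially decaying potential driven by input pulses; closely
    # spaced pulses ride on each other's decaying tails, so the same
    # number of pulses yields a much higher peak when tightly bunched
    t = np.arange(0.0, t_end, dt)
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] * np.exp(-dt / tau)                # passive decay
        if any(abs(t[i] - p) < dt / 2 for p in pulse_times):
            v[i] += amp                                     # pulse arrival
    return v.max()

print(peak_potential([10, 12, 14, 16]))   # burst: closely spaced pulses
print(peak_potential([10, 30, 50, 70]))   # same count, widely spaced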


He may well be right that the pattern or the timing is more
important, but IMO he is doing the equivalent of saying "Let's talk
about the best way to design an algorithm to control an airport.
First problem to solve:  should we use Emitter-Coupled Logic in the
transistors that are in our computers that will be running the
algorithms."

 
Still, even with my above comments, your conclusion is still correct.


The main problem is that if you interpret spike timing to be playing 
the role that you (and they) imply above, then you are committing 
yourself to a whole raft of assumptions about how knowledge is generally 
represented and processed.  However, there are *huge* problems with that 
set of implicit assumptions  not to put too fine a point on it, 
those implicit assumptions are equivalent to the worst, most backward 
kind of cognitive theory imaginable.  A theory that is 30 or 40 years 
out of date.


The gung-ho neuroscientists seem blissfully unaware of this fact because 
 they do not know enough cognitive science.




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
Hmmm...

I don't agree w/ you that the hard problem of consciousness is
unimportant or non-critical in a philosophical sense.  Far from it.

However, from the point of view of this list, I really don't think it
needs to be solved (whatever that might mean) in order to build AGI.

Of course, I think that because I think the hard problem of
consciousness is actually easy: I'm a panpsychist ... I think
everything is conscious, and different kinds of structures just focus
and amplify this universal consciousness in different ways...

Interestingly, this panpsychist perspective is seen as obviously right
by most folks deeply involved with meditation or yoga whom I've talked
to, and seen as obviously wrong by most scientists I talk to...

-- Ben G

On Thu, Nov 20, 2008 at 5:26 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Richard,



 Thank you for your reply.



 I started to write a point-by-point response to your reply, copied below,
 but after 45 minutes I said "stop".  As interesting as it is, from a
 philosophical and argumentative writing standpoint, to play whack-a-mole with
 your constantly shifting and often contradictory arguments --- right now, I
 have much more pressing things to do.



 And I think I have already stated many of my positions on the subject of
 this thread sufficiently clearly that intelligent people who have a little
 imagination and really want to can understand them.  Since few others besides
 you have responded to my posts, I don't think there is any community demand
 that I spend further time on such replies.



 What little I can add to what I have already said is that basically I
 think the hard problem/easy problem dichotomy is largely, although not
 totally, pointless.



 I do not think the hard problem is central to understanding consciousness,
 because so much of consciousness is excluded from being part of the hard
 problem.  It is excluded either because it can be described verbally by
 introspection by the mind itself, or because it affects external behavior,
 and, thus, at least according to Wikipedia's definition of p-consciousness,
 is part of the easy problem.



 It should be noted that not affecting external behavior excludes one hell of
 a lot of consciousness, because emotions, which clearly affect external
 behavior, are so closely associated with much of our sensing of experience.



 Thus, it seems a large part of what we humans consider to be our subjective
 sense of experience of consciousness is rejected by hard problem purists
 as being part of the easy problem.



 Richard, you in particular seem to be much more of a hard problem purist
 than those who wrote the Wikipedia definition of p-consciousness.   This is
 because in your responses to me you have even excluded from the hard
 problem any lateral or higher-level associations that one of your
 bottom-level red detector nodes might have.  This, for example, would
 arguably exclude from the p-consciousness of the color red the associations
 between the lowest-level, local red-sensing nodes that are necessary so
 that the activation of such nodes can be recognized as a common color red
 no matter where it occurs in different parts of the visual field.



 Thus, according to such a definition, qualia for red would have to be
 different for each location of V1 in which red is sensed --- even when
 different portions of V1 get mapped into the same portions of the
 semi-stationary representation your brain builds out of stationary
 surroundings as your eyes saccade and pan across them.  Thus, your concept
 of the qualia for the color red does not cover a unified color red, and
 necessarily includes thousands of separate red qualia, each associated
 with a different portion of V1.



 Aspects of consciousness that (a) cannot be verbally described by
 introspection; (b) have no effect on behavior; and (c) cannot involve any
 associations with the activation of other nodes (which is an exclusion you,
 Richard, seem to have added to Wikipedia's description of p-consciousness)
 --- define the hard problem so narrowly as to make it of relatively little
 or no importance.  It certainly is not the central question of
 consciousness, because a sense of experiencing something has no meaning
 unless it has grounding, and grounding requires associations in large
 numbers, which, according to your definition, could not be part of the
 hard problem.



 Plus, Richard, you have not even come close to addressing my statement that
 just because certain aspects of consciousness cannot be verbally described
 through the brain's own introspection, or revealed by effects on the
 external behavior of the body itself, does not mean they cannot be subject
 to further analysis through scientific research --- such as brain science,
 brain scanning, brain simulations, and advances in the understanding of
 AGIs.



 I have already spent way, way too much time on this response, so I will
 leave it at that.  If you want to think you have won the argument 

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 The main problem is that if you interpret spike timing to be playing the
 role that you (and they) imply above, then you are commiting yourself to a
 whole raft of assumptions about how knowledge is generally represented and
 processed.  However, there are *huge* problems with that set of implicit
 assumptions  not to put too fine a point on it, those implicit
 assumptions are equivalent to the worst, most backward kind of cognitive
 theory imaginable.  A theory that is 30 or 40 years out of date.


Could you give some references to make specific what you mean ---
examples of what you consider outdated cognitive theory, and of better
cognitive theory?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Mike Tintner

Ben: I'm a panpsychist ...

You think that all things are sentient/ conscious?

(I argue that consciousness depends on having a nervous system and being 
able to feel - and if we could understand the mechanics of that, we would 
probably have solved the hard problem and be able to give something similar 
to a machine (which might have to be organic)).


So I'm interested in any alternative/panpsychist views. If you do think that 
inorganic things like stones, say, are conscious, then surely it would 
follow that we should ultimately be able to explain their consciousness, 
and make even inanimate metallic computers conscious?


Care to expand a little on your views? 







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
well, what does "feel" mean to you ... what is "feeling", that a slug can
do but a rock or an atom cannot ... are you sure this is an absolute
distinction rather than a matter of degree?





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.  -- Robert
Heinlein




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ed Porter
Ben, 

If you place the limitations on what is part of the hard problem that
Richard has, most of what you consider part of the hard problem would
probably cease to be part of it.  In one argument he eliminated things
relating to lateral or upward associative connections from being considered
part of the hard problem of consciousness.  That would eliminate the
majority of sources of grounding from any notion of consciousness.

I, like you, tend to think that all of reality is conscious, but I think there
are vastly different degrees and types of consciousness, and I think there
are many meaningful types of consciousness that humans have that most of
reality does not have.

When I was in college and LSD was the rage, one of the main goals of the
heavy-duty heads was "ego loss", which was to achieve a sense of cosmic
oneness with all of the universe.  It was commonly stated that 1000
micrograms was the ticket to ego loss.  I never went there.  Nor have I
ever achieved cosmic oneness through meditation, although I have achieved
a temporary (say, fifteen or thirty seconds) feeling of deep, peaceful bliss.

Perhaps you have been braver (acid-wise) or luckier or more disciplined
(meditation-wise), and have achieved a sense of oneness with the cosmic
consciousness.  If so, I tip my hat (and give a Colbert wag of the finger) to you.

Ed Porter



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Mike Tintner

Ben,

I suspect you're being evasive. You and I know what "feel" means. When I feel 
the wind, I feel cold. When I feel tea poured on my hand, I/it feel/s 
scalding hot. And we can trace the line of feeling to a considerable 
extent - no? - through the nervous system and brain. Not only do I feel it 
internally, but there are normally external signs of my feeling. You see me 
shivering/wincing etc. And we - science - can interfere with those feelings 
and anaesthetise or heighten them.


Now when the rock is exposed to the same wind or hot tea, if it does feel 
anything, it stoically and heroically refuses to display any signs 
whatsoever. It appears to be magnificently indifferent. And if it really is 
suffering, we wouldn't know what to do to alleviate its suffering.


So what do you (or others) mean by inanimate things feeling?

I'm mainly seeking enlightenment, not an argument, here - and to see whether 
your or others' panpsychism has been at all thought through, and is more 
than an abstract conjunction of concepts. I assume there is some substance 
to the philosophy - I'd like to know what it is.

Ben:

well, what does feel mean to you ... what is feeling that a slug can
do but a rock or an atom cannot ... are you sure this is an absolute
distinction rather than a matter of degree?








Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 2:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 well, what does feel mean to you ... what is feeling that a slug can
 do but a rock or an atom cannot ... are you sure this is an absolute
 distinction rather than a matter of degree?


Does a rock compute Fibonacci numbers just to a lesser degree than
this program? A concept, like any other. Also, some shades of gray are
so thin you'd run out of matter in the Universe to track all the
things of that shade.
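
(The program Vladimir refers to isn't reproduced in the archive; as a
stand-in, here is a minimal Python version of the kind of thing he
presumably means - an explicit, checkable procedure for computing
Fibonacci numbers, which is exactly what nobody can exhibit for the rock:)

def fib(n):
    # Return the n-th Fibonacci number (fib(0) == 0, fib(1) == 1).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]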

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
 When I was in college and LSD was the rage, one of the main goals of the
 heavy-duty heads was "ego loss", which was to achieve a sense of cosmic
 oneness with all of the universe.  It was commonly stated that 1000
 micrograms was the ticket to ego loss.  I never went there.  Nor have I
 ever achieved cosmic oneness through meditation, although I have achieved
 a temporary (say, fifteen or thirty seconds) feeling of deep, peaceful bliss.

 Perhaps you have been braver (acid-wise) or luckier or more disciplined
 (meditation-wise), and have achieved a sense of oneness with the cosmic
 consciousness.  If so, I tip my hat (and give a Colbert wag of the finger) to you.

Not a great topic for public mailing list discussion but ... uh ... yah ...

But it's not really so much about the dosage ... entheogens are tools
and it's all about what you do with them ;-)

ben




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,


The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions ... not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.

The gung-ho neuroscientists seem blissfully unaware of this fact because
 they do not know enough cognitive science.

Richard Loosemore



I don't think this is the reason.  There are plenty of neuroscientists
out there
who know plenty of cognitive science.

I think many neuroscientists just hold different theoretical
presuppositions than
you, for reasons other than ignorance of cog sci data.

Interdisciplinary cog sci has been around a long time now as you know ... it's
not as though cognitive neuroscientists are unaware of its data and ideas...


I disagree.

Trevor Harley wrote one very influential paper on the subject, and he 
and I wrote a second paper in which we took a random sampling of 
neuroscience papers and analyzed them carefully.  We found it trivially 
easy to gather data to illustrate our point.  And, no, even though I 
used my own framework as a point of reference, this was not crucial to 
the argument, merely a way of bringing the argument into sharp focus.


So I am basing my conclusion on gathering actual evidence and publishing 
a paper about it.


Since such luminaries as Jerry Fodor have said much the same thing, I 
think I stand in fairly solid company.





Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Trent Waddington
On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Since such luminaries as Jerry Fodor have said much the same thing, I think
 I stand in fairly solid company.

Wow, you said Fodor without being critical of his work.  Is that legal?

Trent




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Vladimir Nesov wrote:

On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

The main problem is that if you interpret spike timing to be playing the
role that you (and they) imply above, then you are committing yourself to a
whole raft of assumptions about how knowledge is generally represented and
processed.  However, there are *huge* problems with that set of implicit
assumptions ... not to put too fine a point on it, those implicit
assumptions are equivalent to the worst, most backward kind of cognitive
theory imaginable.  A theory that is 30 or 40 years out of date.



Could you give some references to make specific what you mean ---
examples of what you consider outdated cognitive theory, and of better
cognitive theory?



Well, you could start with the question of what the neurons are supposed 
to represent, if the spikes are coding (e.g.) bayesian contingencies. 
Are the neurons the same as concepts/symbols?  Are groups of neurons 
redundantly coding for concepts/symbols?


One or other of these possibilities is usually assumed by default, but 
this leads to glaring inconsistencies in the interpretation of 
neuroscience data, as well as begging all of the old questions about how 
grandmother cells are supposed to do their job.  As I said above, 
cognitive scientists already came to the conclusion, 30 or 40 years ago, 
that it made no sense to stick to a simple identification of one neuron 
per concept.  And yet many neuroscientists are *implicitly* resurrecting 
this broken idea, without addressing the faults that were previously 
found in it.  (In case you are not familiar with the faults, they 
include the vulnerability of neurons, the lack of connectivity between 
arbitrary neurons, the problem of assigning neurons to concepts, and the 
encoding of variables, relationships and negative facts ...).
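
(To make the "vulnerability of neurons" fault concrete, here is a toy
sketch in Python - our illustration, with made-up sizes, not anything from
the papers under discussion - comparing a strict one-neuron-per-concept
code with a distributed code when a single unit dies:)

import numpy as np

rng = np.random.default_rng(0)
n_units, n_concepts = 100, 10

# Grandmother-cell code: concept i lives entirely in unit i.
local = np.zeros((n_concepts, n_units))
local[np.arange(n_concepts), np.arange(n_concepts)] = 1.0

# Distributed code: each concept is a random pattern over all units.
distributed = rng.normal(size=(n_concepts, n_units))

def decode(code, pattern):
    # Nearest-neighbour decoding: which stored concept best matches?
    return int(np.argmax(code @ pattern))

dead = 3   # destroy one unit in both codes
for name, code in (("local", local), ("distributed", distributed)):
    damaged = code.copy()
    damaged[:, dead] = 0.0
    ok = sum(decode(code, damaged[i]) == i for i in range(n_concepts))
    print(name, ok, "of", n_concepts, "concepts still decodable")

# Killing one unit erases a whole concept under the local code, while the
# distributed code still decodes every concept (graceful degradation).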


For example, in Loosemore & Harley (in press) you can find an analysis 
of a paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which 
the latter try to claim they have evidence in favor of grandmother 
neurons (or sparse collections of grandmother neurons) and against the 
idea of distributed representations.


We showed their conclusion to be incoherent.  It was deeply implausible, 
given the empirical data they reported.


Furthermore, we used my molecular framework (the same one that was 
outlined in the consciousness paper) to see how that would explain the 
same data.  It turns out that this much more sophisticated model was 
very consistent with the data (indeed, it is the only one I know of that 
can explain the results they got).


You can find our paper at www.susaro.com/publications.



Richard Loosemore


Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds: On the 
Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & 
S.J. Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, 
MA: MIT Press.


Quiroga, R.Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005). 
Invariant visual representation by single neurons in the human brain. 
Nature, 435, 1102-1107.






Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Trent Waddington wrote:

On Fri, Nov 21, 2008 at 11:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Since such luminaries as Jerry Fodor have said much the same thing, I think
I stand in fairly solid company.


Wow, you said Fodor without being critical of his work.  Is that legal?

Trent


Arrrggghhh... you noticed!  :-(

I was hoping nobody would catch me out on that one.

Okay, so Fodor and I disagree about everything else.

But that's not the point :-).  He's a Heavy, so if he is on my side on 
this one issue, it's okay to quote him.  (That's my story and I'm 
sticking to it.)






Richard Loosemore





Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
Referencing your own work is obviously not what I was asking for.
Still, something more substantial than "neuron is not a concept", as
an example of cognitive theory?






-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Vladimir Nesov wrote:

Referencing your own work is obviously not what I was asking for.
Still, something more substantial than neuron is not a concept, as
an example of cognitive theory?


I don't understand your objection here:  I referenced my own work 
because I specifically described, in that paper, several answers to your 
question.  And I brought one of them out and summarized it for you.  Why 
would that be "obviously not what I was asking for"?  I am confused.


That paper was partly about my own theory, but partly about the general 
problem of neuroscience models making naive assumptions about cognitive 
theories in general.


And why do you say that you want something more substantial than "neuron 
is not a concept"?  That is an extremely serious issue.  Why do you 
dismiss it as insubstantial?


Lastly, I did not say that the neuroscientists picked old, broken 
theories AND that they could have picked a better, not-broken theory ... 
I only said that they have gone back to old theories that are known 
to be broken.  Whether anyone has a good replacement yet is not 
relevant:  it does not alter the fact that they are using broken 
theories.  The neuron = concept 'theory' is extremely broken:  it is so 
broken that when neuroscientists talk about bayesian contingencies 
being calculated or encoded by spike timing mechanisms, that claim is 
incoherent.


If you really insist on another example, take one of the other ones that 
I mentioned in the paper:  the naive identification of attentional 
limitations with a literal bottleneck in processing.


I may as well just quote you the entire passage that we wrote on the 
matter.  (There are no references to the basic facts about dual-task 
studies, it is true.  Is it really necessary for me to dig those up, or 
do you know them already?):


QUOTE from Loosemore & Harley ---

Dux, Ivanoff, Asplund and Marois (2006) describe a study in which 
participants were asked to carry out two tasks that were too hard to 
perform simultaneously. In these circumstances, we would expect (from a 
wide range of previous cognitive psychological studies) that the tasks 
would be serially queued, and that this would show up in reaction time 
data. Some theories of this effect interpret it as a consequence of a 
modality-independent “central bottleneck” in task performance.
Dux et al. used time-resolved fMRI to show that activity in a particular 
brain area—the posterior lateral prefrontal cortex (pLPFC)—was 
consistent with the queuing behavior that would be expected if this 
place were the locus of the bottleneck responsible for the brain’s 
failure to execute the tasks simultaneously. They also showed that the 
strength of the response in the pLPFC seemed to be a function of the 
difficulty of one of the competing tasks, when, in a separate 
experiment, participants were required to do that task alone. The 
conclusion drawn by Dux et al. is that this brain imaging data tells us 
the location of the bottleneck: it’s in the pLPFC. So this study aspires 
to be Level 2, perhaps even Level 3: telling us the absolute location of 
an important psychological process, perhaps telling us how it relates to 
other psychological processes.
Rather than immediately address the question of whether the pLPFC really 
is the bottleneck, we would first like to ask whether such a thing as 
“the bottleneck” exists at all. Should the psychological theory of a 
bottleneck be taken so literally that we can start looking for it in the 
brain? And if we have doubts, could imaging data help us to decide that 
we are justified in taking the idea of a bottleneck literally?

What is a “Bottleneck”?
Let’s start with a simple interpretation of the bottleneck idea. We 
start with mainstream ideas about cognition, leaving aside our new 
framework for the moment. There are tasks to be done by the cognitive 
system, and each task is some kind of package of information that goes 
to a place in the system and gets itself executed. This leads to a clean 
theoretical picture: the task is a package moving around the system, and 
there is a particular place where it can be executed. As a general rule, 
the “place” has room for more than one package (perhaps), but only if 
the packages are small, or if the packages have been compiled to make 
them automatic. In this study, though, the packages (tasks) are so big 
that there is only room for one at a time.
The difference between this only-room-for-one-package idea and its main 
rival within conventional cognitive psychology is that the rival theory 
would allow multiple packages to be executed simultaneously, but with a 
slowdown in execution speed. Unfortunately for this rival theory, 
psychology experiments have indicated that no effort is initially 
expended on a task that arrives later, until the first task is 
completed. Hence, the bottleneck theory is accepted as the best 
description of what happens in dual-task 
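
(A toy rendering of the only-room-for-one-package idea above - our sketch,
with assumed processing times, not Dux et al.'s model: if task 2 arrives
while task 1 still occupies the single central stage, it simply waits, so
its measured reaction time grows one-for-one as the gap between the two
task onsets shrinks:)

def rt2(soa, t1=0.5, t2=0.4):
    # Reaction time to task 2 under strict serial queuing.
    # soa: onset of task 2 relative to task 1 (s); t1, t2: assumed
    # central processing times for the two tasks.
    start2 = max(soa, t1)          # task 2 waits until the stage is free
    return (start2 + t2) - soa     # RT measured from task 2's own onset

for soa in (0.1, 0.3, 0.5, 0.9):
    print("SOA=%.1fs  RT2=%.2fs" % (soa, rt2(soa)))

# Short SOAs inflate RT2 (the queuing signature); once SOA >= t1,
# the dual-task cost disappears.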

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Ben Goertzel
  The neuron = concept
 'theory' is extremely broken:  it is so broken, that when neuroscientists
 talk about bayesian contingencies being calculated or encoded by spike
 timing mechanisms, that claim is incoherent.

This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is incoherent

ben g
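
(For readers who want a concrete instance of what "bayesian population
coding" means in the sensorimotor literature, here is a textbook-style toy
- our sketch, with made-up tuning parameters, not any particular published
model: a bank of tuned neurons emits Poisson spike counts, and Bayes' rule
turns one population response into a posterior over the stimulus:)

import numpy as np

rng = np.random.default_rng(1)
prefs = np.linspace(-90, 90, 19)      # preferred stimulus of each neuron (deg)

def rates(s, gain=20.0, width=20.0):
    # Gaussian tuning curves: expected spike count of each neuron.
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2)

true_s = 12.0
counts = rng.poisson(rates(true_s))   # one noisy population response

# Posterior over a grid of candidate stimuli: flat prior, Poisson likelihood
grid = np.linspace(-90, 90, 361)
log_post = np.array([np.sum(counts * np.log(rates(s)) - rates(s)) for s in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("true stimulus: %.1f deg" % true_s)
print("posterior mean: %.1f deg" % np.sum(grid * post))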




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Richard Loosemore

Ben Goertzel wrote:

  The neuron = concept

'theory' is extremely broken:  it is so broken, that when neuroscientists
talk about bayesian contingencies being calculated or encoded by spike
timing mechanisms, that claim is incoherent.


This is not always true ... in some cases there are solidly demonstrated
connections between neurally computed bayesian contingencies and
observed perceptual and motor phenomena in organisms...

I agree that no one knows how abstract concepts are represented in the brain,
but for sensorimotor stuff it is not the case that work on bayesian population
coding in the brain is incoherent


No contest:  it is valid there.

But I am only referring to the cases where neuroscientists imply that 
what they are talking about are higher level concepts.


This happens extremely frequently.



Richard Loosemore




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 5:14 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Lastly, I did not say that the neuroscientists picked old, broken theories
 AND that they could have picked a better, not-broken theory  I only said
 that they have gone back to old theories that are known to be broken.
  Whether anyone has a good replacement yet is not relevant:  it does not
 alter the fact that they are using broken theories.  The neuron = concept
 'theory' is extremely broken:  it is so broken, that when neuroscientists
 talk about bayesian contingencies being calculated or encoded by spike
 timing mechanisms, that claim is incoherent.


Well, you know I read that paper ;-)
"A theory that is 30 or 40 years out of date," you said -- which
suggested there is something that is up to date; hence the question.

The neural code can be studied from the areas where we know the
correlates. You could assign concepts to neurons and theorize about
their structure as dictated by the dynamics of the neural substrate. They
will not be word-level concepts, and you'd probably need to build bigger
abstractions on top, but there is no inherent problem with that.
Still, it's so murky even for simple correlates that no good overall
picture exists.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] MSRobot vs E3

2008-11-20 Thread Mike Tintner
http://www.marketwatch.com/news/story/Battle-lines-forming-nascent-robotics/story.aspx?guid={FA2B30F1-B78B-4E33-91A4-F7F3D07DECCB} 







Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Steve Richfield
Richard,

On 11/20/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Steve Richfield wrote:

 Richard,
  Broad agreement, with one comment from the end of your posting...
  On 11/20/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Another, closely related thing that they do is talk about low level
issues witout realizing just how disconnected those are from where
the real story (probably) lies.  Thus, Mohdra emphasizes the
importance of spike timing as opposed to average firing rate.

  There are plenty of experiments that show that consecutive closely-spaced
 pulses result when something goes off scale, probably the equivalent to
 computing Bayesian probabilities > 100%, somewhat akin to the overflow
 light on early analog computers. These closely-spaced pulses have a MUCH
 larger post-synaptic effect than the same number of regularly spaced pulses.
 However, as far as I know, this only occurs during anomalous situations -
 maybe when something really new happens, that might trigger learning?
  IMHO, it is simply not possible to play this game without having a close
 friend with years of experience poking mammalian neurons. This stuff is
 simply NOT in the literature.

He may well be right that the pattern or the timing is more
important, but IMO he is doing the equivalent of saying "Let's talk
about the best way to design an algorithm to control an airport.
First problem to solve: should we use Emitter-Coupled Logic in the
transistors that are in our computers that will be running the
algorithms?"

  Still, even with my above comments, your conclusion is still correct.


 The main problem is that if you interpret spike timing to be playing the
 role that you (and they) imply above, then you are committing yourself to a
 whole raft of assumptions about how knowledge is generally represented and
 processed.  However, there are *huge* problems with that set of implicit
 assumptions ... not to put too fine a point on it, those implicit
 assumptions are equivalent to the worst, most backward kind of cognitive
 theory imaginable.  A theory that is 30 or 40 years out of date.


OK, so how else do you explain that, in fairly well understood situations
like stretch receptors, the rate indicates the stretch UNLESS you
exceed the mechanical limit of the associated joint, whereupon you start
getting pulse doublets, triplets, etc.?  Further, these pulse groups have a
HUGE effect on post-synaptic neurons. What does your cognitive science tell
you about THAT?
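
(Steve's claim is easy to state as a toy encoder - a sketch with made-up
numbers, not drawn from any particular physiology paper: firing rate tracks
stretch over the normal range, the encoder switches to tight doublets past
the mechanical limit, and a crude facilitating synapse weights those
doublets far more heavily:)

def spike_times(stretch, limit=1.0, window=1.0):
    # Toy stretch-receptor encoder: rate code below the limit,
    # doublets (2 ms apart) at the same mean count above it.
    rate = 20 + 80 * min(stretch, limit)       # spikes/s within range
    n = int(rate * window)
    if stretch <= limit:
        return [i / rate for i in range(n)]    # regularly spaced
    return [t for i in range(n // 2)
            for t in (2 * i / rate, 2 * i / rate + 0.002)]

def psp(times, tau=0.005):
    # Crude facilitation: a spike arriving within tau of the previous
    # one counts triple (a stand-in for the "MUCH larger" effect).
    total, last = 0.0, None
    for t in times:
        total += 3.0 if last is not None and t - last < tau else 1.0
        last = t
    return total

for s in (0.5, 1.0, 1.2):
    ts = spike_times(s)
    print("stretch=%.1f  spikes=%d  summed PSP=%.0f" % (s, len(ts), psp(ts)))

# Past the limit the spike count stays flat but the doublets roughly
# double the post-synaptic drive: an out-of-range flag, not a rate.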



 The gung-ho neuroscientists seem blissfully unaware of this fact because
  they do not know enough cognitive science.


I stated a Ben's List challenge a while back that you apparently missed, so
here it is again.

You can ONLY learn how a system works by observation, to the extent that
its operation is imperfect. Where it is perfect, it represents a solution to
the environment in which it operates, and as such, could be built in
countless different ways so long as it operates perfectly. Hence,
computational delays, etc., are fair game, but observed cognition and
behavior are NOT, except to the extent that perfect cognition and behavior
can be described, whereupon the difference between observed and theoretical
contains the information about construction.

A perfect example of this is "superstitious learning", which on its
surface appears to be an imperfection. However, we must use incomplete data
to make imperfect predictions if we are to ever interact with our
environment, so superstitious learning is theoretically unavoidable. Trying
to compute what is "perfect" for superstitious learning is a pretty
challenging task, as it involves factors like the regularity of disastrous
events throughout evolution, etc.
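
(The unavoidability claim above can be shown in a few lines - a toy sketch
with made-up numbers, not a model of anything in particular: an agent that
credits whatever action preceded a reward, and prefers whichever action has
the most credit, confidently learns a contingency that is not there:)

import random

random.seed(7)
credit = {"peck": 0.0, "turn": 0.0}

for trial in range(1000):
    if random.random() < 0.1:                 # occasional exploration
        action = random.choice(["peck", "turn"])
    else:                                     # otherwise repeat the "best" action
        action = max(credit, key=credit.get)
    if random.random() < 0.3:                 # reward is random: neither action causes it
        credit[action] += 1.0                 # ...but gets credited anyway

print(credit)   # whichever action got lucky early hoards nearly all the credit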

If anyone has successfully done this, I would be very interested. This is
because of my interest in central metabolic control issues, wherein
superstitious red tagging appears to be central to SO many age-related
conditions. Now, I am blindly assuming perfection in neural computation
and proceeding on that assumption. However, if I could recognize and
understand any imperfections (none are known), I might be able to save
(another) life or two along the way with that knowledge.

Anyway, this suggests that much of cognitive science, which has NOT
computed this difference but rather is running with the raw data of
observation, is rather questionable at best. For reasons such as this, I
(perhaps prematurely and/or improperly) dismissed cognitive science rather
early on. Was I in error to do so?

Steve Richfield


