Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Russell Wallace
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> I don't want to get into a quibble fest, but understanding is not
> necessarily constrained to prediction.

Indeed, "understanding" is a fuzzy word that means lots of different
things in different contexts. In the context of Newcomb's paradox,
however, the relevant concept is prediction.

The logic here is similar to that of Goedel's theorem, and of Turing's
proof of the unsolvability of the halting problem. It also relates to
an even older question: if there exists an omniscient God, can He know
in advance what we will do? Answer: even God can, in general, only
know what the output of a program will be by actually running the
program. He can only know our actions by watching to see what we
actually do.
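
A minimal sketch of the diagonal argument alluded to above (illustrative
Python only; the names "halts" and "contrary" are made up for this sketch,
not real library calls):

  def halts(program, data):
      """Hypothetical oracle: True iff program(data) eventually halts."""
      raise NotImplementedError  # Turing: no such total predictor can exist

  def contrary(program):
      # Do the opposite of whatever the oracle predicts about self-application.
      if halts(program, program):
          while True:
              pass               # loop forever
      else:
          return                 # halt immediately

  # contrary(contrary) halts exactly when halts(contrary, contrary) says it
  # does not -- a contradiction, so in general the only way to learn a
  # program's behaviour is to run it.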



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Steve Richfield
Matt,

On 5/8/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>
> --- Steve Richfield <[EMAIL PROTECTED]> wrote:
>
> > On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> > >
> > > See http://www.overcomingbias.com/2008/01/newcombs-proble.html
>
> > After many postings on this subject, I still assert that
> > ANY rational AGI would be religious.
>
> Not necessarily.  You execute a program P that inputs the conditions of
> the game and outputs "1 box" or "2 boxes".  Omega executes a program W as
> follows:
>
> if P outputs "1 box"
>then put $1 million in box B
> else
>leave box B empty.
>
> No matter what P is, it cannot call W because it would be infinite
> recursion.


QED this is NOT the program that Omega executes.

A rational agent only has to know that there are some things it cannot
> compute.  In particular, it cannot understand its own algorithm.


There is a LOT wrapped up in your "only". It is one thing to know that you
can't presently compute certain things that you have identified, and quite
another to believe that an unseen power changes things that you have NOT
identified as being beyond your present (flawed) computational abilities. No
matter how extensive your observations, you can NEVER be absolutely sure
that you understand anything, and you will in fact fail to understand key
details of some things without realizing it. With a good workable
explanation of the variances between predicted and actual events (God), of
course you will continue to look for less divine explanations, but at
exactly what point do you broadly dismiss ALL divine explanations, in the
absence of alternative explanations?

Steve Richfield



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Matt Mahoney

--- Jim Bromer <[EMAIL PROTECTED]> wrote:

> I don't want to get into a quibble fest, but understanding is not
> necessarily constrained to prediction.

What would be a good test for understanding an algorithm?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Re: pattern definition

2008-05-08 Thread Boris Kazachenko

"Entities must not be multiplied unnecessarily". William of Okkam.

A pattern is a set of matching inputs.
A match is a partial identity of the comparands.
The comparands for general intelligence must incrementally & indefinitely 
scale in complexity.
The scaling must start from the bottom: uncompressed single-integer 
comparands, & the match here is the sum of bitwise AND.


For more see my blog: http://scalable-intelligence.blogspot.com/
Boris.

- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, May 08, 2008 1:17 PM
Subject: [agi] Re: pattern definition



[EMAIL PROTECTED] wrote:

Hello

I am writing a literature review on AGI and I am mentioning the 
definition of pattern as explained by Ben in his work.


"A pattern is a representation of an object on a simpler scale. For 
example, a pattern in a drawing of a mathematical curve could be a 
program that can compute the curve from a formula (Looks et al. 2004). My 
supervisor told me that "she doesn?t see how this can be simpler than the 
actual drawing".


Any other definition I could use in the same context to explain to a 
non-technical audience?


thanks

xav


Xav,

[I am copying this to the AGI mailing list because it is more appropriate 
there than on Singularity]


A more general definition of pattern would include the idea that there is 
a collection of mechanisms that take in a source of information (e.g. an 
image consisting of a grid of pixels) and respond in such a way that each 
mechanism 'triggers' in some way when a particular arrangement of signal 
values appears in the information source.


Note that the triggering of each mechanism is the 'recognition' of a 
pattern, and the mechanism in question is a 'recognizer' of a pattern. (In 
this way of looking at things, there are many mechanisms, one for each 
pattern).  The 'particular arrangement of signal values' is the pattern 
itself.


Most importantly note that a mechanism does not have to trigger for some 
exact, deterministic set of signal values.  For example, a mechanism could 
respond in a stochastic, noisy way to a whole bunch of different 
arrangements of signal values.  It is allowed to be slightly inconsistent, 
and not always respond in the same way to the same input (although it 
would be a particularly bad pattern recognizer if it did not behave in a 
reasonably consistent way!).  The amount of the 'triggering' reaction does 
not have to be all-or-nothing, either:  the mechanism can give a graded 
response.


What the above paragraph means is that the thing that we call a 'pattern' 
is actually 'whatever makes a mechanism trigger', and we have to be 
extremely tolerant of the fact that a wide range of different signal 
arrangements will give rise to triggering ... so a pattern is something 
much more amorphous and hard to define than simply *one* arrangement of 
signals.


Finally, there is one more twist to this definition, which is very 
important.  Everything said above was about arrangements of signals in the 
primary information source ... but we also allow that some mechanisms are 
designed to trigger on an arrangement of other *mechanisms*, not just 
primary input signals.  In other words, this pattern finding system is 
hierarchical, and there can be abstract patterns.


This definition of pattern is the most general one that I know of.  I use 
it in my own work, but I do not know if it has been explicitly published 
and named by anyone else.


In this conception, patterns are defined by the mechanisms that trigger, 
and further deponent sayeth not what they are, in any more fundamental 
way.


And one last thing:  as far as I can see this does not easily map onto 
the concept of Kolmogorov complexity.  At least, the mapping is very 
awkward and uninformative, if it exists.  If a mechanism triggers on a 
possibly stochastic, nondeterministic set of features, it can hardly be 
realised by a feasible algorithm, so talking about a pattern as an 
algorithm that can generate the source seems, to me at least, to be 
unworkable.


Hope that is useful.




Richard Loosemore


P.S.  Nice to see some Welsh in the boilerplate stuff at the bottom of 
your message. I used to work at Bangor in the early 90s, so it brought 
back fond memories to see "Prifysgol Bangor"!  Are you in the Psychology 
department?






Re: [agi] Accidental Genius

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 10:02 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Anyhow it is very interesting.  Perhaps savantism is an attention mechanism
> disorder?  Like, too much attention.

Yes.

"Autism is a devastating neurodevelopmental disorder with a
polygenetic predisposition that seems to be triggered by multiple
environmental factors during embryonic and/or early postnatal life. While
significant advances have been made in identifying the neuronal
structures and cells affected, a unifying theory that could explain
the manifold autistic symptoms has still not emerged. Based on recent
synaptic, cellular, molecular, microcircuit, and behavioral results
obtained with the valproic acid (VPA) rat model of autism, we propose
here a unifying hypothesis where the core pathology of the autistic
brain is hyper-reactivity and hyper-plasticity of local neuronal
circuits. Such excessive neuronal processing in circumscribed circuits
is suggested to lead to hyper-perception, hyper-attention, and
hyper-memory, which may lie at the heart of most autistic symptoms. In
this view, the autistic spectrum are disorders of hyper-functionality,
which turns debilitating, as opposed to disorders of
hypo-functionality, as is often assumed. We discuss how excessive
neuronal processing may render the world painfully intense when the
neocortex is affected and even aversive when the amygdala is affected,
leading to social and environmental withdrawal. Excessive neuronal
learning is also hypothesized to rapidly lock down the individual into
a small repertoire of secure behavioral routines that are obsessively
repeated. We further discuss the key autistic neuropathologies and
several of the main theories of autism and re-interpret them in the
light of the hypothesized Intense World Syndrome."

http://heybryan.org/intense_world_syndrome.html

See also the last email I sent out on this subject:
http://heybryan.org/pipermail/hplusroadmap/2008-May/000466.html

- Bryan



Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-08 Thread Mike Tintner

  Hi Jim,

  Funny, I was just thinking re the reply to your point, the second before I 
read it. What I was going to say was: I read a lot of Harnad many years ago, 
and I was a bit confused then about exactly what he was positing re the 
intermediate levels of processing - iconic/categorical.

  It's simply, I think - and I stand to be corrected - that he has never pushed 
those levels v. hard at all. They are definitely there in his writing, but not 
elaborated.

  So the only enduring impression he has left, IMO, is the idea of "symbol 
grounding" - which people have interpreted in various ways.

  As you can imagine, I personally would have liked to see a lot more re those 
intermediate levels. And if he had pushed them, someone would presumably have 
brought him up in connection with Jeff Hawkins' work.

  Jim:MT:
  No, a symbol is simply anything abstract that stands for an object -  word 
  sounds, alphabetic words, numbers, logical variables etc. The earliest 
  proto-symbols may well have been emotions.

  My point is that Harnad clearly talks of two intermediate visual/sensory 
  levels of processing - the iconic and still-more-schematic "categorical 
  representations" -  neither of which I can remember seeing in the ideas of 
  anyone here for their AGI's. But I may have forgotten something/someone. 
  Have I?

  -
  If Harnad's ideas had made the critical difference between true artificial 
intelligence and the kind of AI that you have criticized, you would have heard 
a lot more about them before this.
  Jim Bromer






Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Jim Bromer



- Original Message 
From: Matt Mahoney <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:29:02 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI 
Dangers)

--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: 
> Matt,
> 
> (I don't really expect you to give an answer to this question, as you
> didn't on a number of occasions before.) Can you describe
> mathematically what you mean by "understanding its own algorithm", and
> sketch a proof of why it's impossible?


Informally I mean there are circumstances (at least one) where you can't
predict what you are going to think without thinking it.

I don't want to get into a quibble fest, but understanding is not necessarily 
constrained to prediction.
Jim Bromer



  




Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-08 Thread Jim Bromer



- Original Message 
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:16:32 PM
Subject: Re: Symbol Grounding  [WAS Re: [agi] AGI-08 videos]

No, a symbol is simply anything abstract that stands for an object -  word 
sounds, alphabetic words, numbers, logical variables etc. The earliest 
proto-symbols may well have been emotions.

My point is that Harnad clearly talks of two intermediate visual/sensory 
levels of processing - the iconic and still-more-schematic "categorical 
representations" -  neither of which I can remember seeing in the ideas of 
anyone here for their AGI's. But I may have forgotten something/someone. 
Have I?

-
If Harnad's ideas had made the critical difference between true artificial 
intelligence and the kind of AI that you have criticized, you would have heard 
a lot more about them before this.
Jim Bromer



  




Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> >
> > A rational agent only has to know that there are some things it cannot
> > compute.  In particular, it cannot understand its own algorithm.
> >
> 
> Matt,
> 
> (I don't really expect you to give an answer to this question, as you
> didn't on a number of occasions before.) Can you describe
> mathematically what you mean by "understanding its own algorithm", and
> sketch a proof of why it's impossible?


Informally I mean there are circumstances (at least one) where you can't
predict what you are going to think without thinking it.

More formally, "understanding an algorithm P" means that for any input x
you can compute the output P(x).  Perhaps x is a program Q together with
some input y.  It is possible to have P be a simulator such that P(Q,y) =
Q(y).  Then we would say that P understands Q.

I claim there is no P such that P(P,y) = P(y) for all y.  My sketch of the
proof is as follows.  All realizable computers are finite state machines. 
In order for P to simulate Q, P must have as much memory as Q to represent
all of the possible states of Q, plus additional memory to run the
simulation.  (If it uses no additional states, then P is an isomorphism of
Q, not a simulation.  It can't record the output without outputting it). 
But P cannot have more memory than itself.
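
A toy rendering of that memory argument (an illustrative sketch only, not a
formal proof; the bookkeeping bound is an assumption of the sketch):

  def memory_needed_to_simulate(q_memory_bits: int) -> int:
      bookkeeping = 1                       # at least one extra bit to step and
      return q_memory_bits + bookkeeping    # record the simulated run

  def can_simulate(p_memory_bits: int, q_memory_bits: int) -> bool:
      return p_memory_bits >= memory_needed_to_simulate(q_memory_bits)

  for m in (8, 1024, 2**20):
      print(m, can_simulate(m, m))          # always False: no machine can fully
                                            # simulate a machine as big as itself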

This is quite common in human thought.  For example, we learn grammar
before we learn what grammar is.  We sometimes cannot explain the process
by which we solve some problems.  We misjudge what we might do in some
circumstances.  The latter is a case where we form an approximation Q of
our mind, P, which uses less memory but sometimes gives wrong results,
P(Q,x) = Q(x) != P(x).  We can't predict for which x we will make
mistakes.  Often the best we can do is a probabilistic model tuned to
minimize the error over some assumed distribution of x.  For example, we
might use an order-3 statistical model trained on a corpus of text to
approximate a language model.
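
For concreteness, a tiny order-3 (trigram) character model of the kind meant
here (illustrative code only; the corpus string is a stand-in):

  from collections import Counter, defaultdict

  def train_order3(corpus):
      counts = defaultdict(Counter)
      for i in range(len(corpus) - 3):
          counts[corpus[i:i+3]][corpus[i+3]] += 1   # 3-char context -> next char
      return counts

  def predict(counts, context):
      c = counts.get(context[-3:])
      return c.most_common(1)[0][0] if c else None  # most likely next character

  model = train_order3("the cat sat on the mat. the cat ate.")
  print(repr(predict(model, "the")))                # -> ' ' (a crude approximation)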

This implies that AI design is experimental.  We cannot predict what an AI
will do.  Nor can each generation predict the next generation of recursive
self improvement.  This is true at all levels of intelligence (or more
precisely, memory size, which you need for intelligence).


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-08 Thread Mike Tintner
No, a symbol is simply anything abstract that stands for an object -  word 
sounds, alphabetic words, numbers, logical variables etc. The earliest 
proto-symbols may well have been emotions.


My point is that Harnad clearly talks of two intermediate visual/sensory 
levels of processing - the iconic and still-more-schematic "categorical 
representations" -  neither of which I can remember seeing in the ideas of 
anyone here for their AGI's. But I may have forgotten something/someone. 
Have I?


Richard:


You may want to check out the background material on this issue.  Harnad 
invented the idea that there is a 'symbol grounding problem', so that is 
why I quoted him.  His usage of the word 'symbol' is the one that is 
widespread in cognitive science, but it appears that you are missing this, 
and instead interpreting the word 'symbol' to be one of your own 
idiosyncratic meanings.  You can see this most clearly when you write that 
the symbols are things like "H-O-R-S-E" and "C-A-T" etc ... those look 
like strings of letters, so if you think that a symbol, by definition, 
must involve a string of letters (or phonemes), then you are 
misunderstanding Harnad's (and everyone else's) meaning by rather a 
wide margin.  That probably explains your puzzlement in this case.



Richard Loosemore



Mike Tintner wrote:
I'm not quite sure why Richard would want to quote Harnad. Harnad's idea 
of how the brain works depends on it first processing our immediate 
sensory images as "iconic representations"  - not 1m miles from Lakoff's 
image schemas. He sees the brain as first developing some kind of horse 
graphics, for the horses we see,


Then there is an additional and very confusing level of "categorical 
representations" which pick out the "invariant features" of horses - and 
are still  nonsymbolic. But Harnad doesn't give any examples of what 
these features are. They are necessary he claims to be able to 
distinguish between horses and similar animals.


(If anyone has further light to shed here, I'd be v. interested).

And only after those two levels of processing does the brain come to 
symbols - to "H-O-R-S-E" and "C-A-T" etc - although, of course, if you're 
thinking evolutionarily, it's arguable that the brain doesn't actually 
need these symbols at all -our ancestors survived happily without 
language.


So Harnad depicts symbols as not so much simply grounded as deeply rooted 
in a tree of imagistic processing - and I'm not aware of any AGI-er using 
imagistic processing (or have I got someone, like Ben, wrong?)


Richard:

Derek Zahn wrote:

Richard Loosemore:

 > My god, Mark: I had to listen to people having a general discussion 
of
 > "grounding" (the supposed them of that workshop) without a single 
person

 > showing the slightest sign that they had more than an amateur's
 > perspective on what that concept actually means.
 I was not at that workshop and am no expert on that topic, though I 
have seen the word used in several different ways.  Could you point at 
a book or article that does explain the concept or at least use it 
heavily in a correct way?  I would like to improve my understanding of 
the meaning of the "grounding" concept.
 Note:  sometimes written words do not convey intentions very well -- 
I am not being sarcastic, I am asking for information to help improve 
the quality of discussion that you have found lacking in the past.


I still think it is best to go back to Stevan Harnad's two main papers 
on the topic.  He originated the issue, then revisited it with some 
frustration when people started diverging it to mean anything under the 
sun.


So:

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html

and

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad93.cogsci.html

are both useful.

I do not completely concur with Harnad, but I certainly agree with him 
that there is a real issue here.


However..

The core confusion about the SGP is so basic that you will find it 
difficult to locate one source that explains it.  Here it is in Harnad's 
own words (from the second paper above):


"The goal of symbol grounding is not to guarantee uniqueness but to 
ensure that the connection between the symbols and the objects they are 
systematically interpretable as being about does not depend exclusively 
on an interpretation projected onto the symbols by an interpreter 
outside the system."


The crucial part is to guarantee that the meaning of the symbols does 
not depend on interpreter-applied meanings.  This is a subtle issue, 
because the interpreter (i.e. the programmer or system designer) can 
insert their own interpretations on the symbols in all sorts of ways. 
For example, they can grab a symbol and label it "cat" (this being the 
most egregious example of failure to ground), or they can stick 
parameters into all of the symbols and insist that the parameter "means" 
something like the "probability that this is true, or real".  If the 
programmer does anything to interpret the meaning of system components, 
then there is at least a DANGER that the symbol system has been 
compromised, and is therefore not grounded.

Re: [agi] standard way to represent NL ..PS

2008-05-08 Thread Stephen Reed
Hi Mike,

I've spent some time working with the CMU Sphinx automatic speech recognition 
software, as well as the Festival text-to-speech software.  From the Texai 
SourceForge source code repository, anyone interested can inspect and download 
an echo application that recognizes a spoken utterance and speaks back the 
words that it understood.  I enhanced the Java Sphinx software to allow a hook 
into its word-by-word language model scoring so that in the future I will be 
able to improve speech recognition via Texai's disambiguation facility.

I postponed more work on the speech interface until after the text-based 
English dialog system is deployed and working well.  I believe that working 
on speech now is not worth the delay it would impose on the text-based 
dialog system.  Given that deaf humans are perfectly competent with text 
interaction, I expect that I'm making the right decision.

Cheers.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 5:15:09 PM
Subject: Re: [agi] standard way to represent NL ..PS

A nice analogy occurs to me for NLP - processing language without the 
sounds.

It's like processing songs without the music. 





  




Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 3:21 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> It oughtn't to be all neuro- though. There is a need for some kind of
> "corporate" science - that studies "whole body" simulation and not just the
> cerebral end,.After all, a lot of the simulations being talked about are v.
> definitely whole body affairs. You're playing the football match you watch
> and reacting with your whole body. In fact, I wonder whether any simulations
> aren't. It shouldn't be too hard to set up some kind of whole body studies.
> Know of anyone thinking along these lines? Has to come soon.

http://heybryan.org/mediawiki/index.php/Henry_Markram
for the brain. Now for the rest of the body.

http://sbml.org/

- Bryan



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Vladimir Nesov
On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> A rational agent only has to know that there are some things it cannot
> compute.  In particular, it cannot understand its own algorithm.
>

Matt,

(I don't really expect you to give an answer to this question, as you
didn't on a number of occasions before.) Can you describe
mathematically what you mean by "understanding its own algorithm", and
sketch a proof of why it's impossible?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] standard way to represent NL ..PS

2008-05-08 Thread Mike Tintner
A nice analogy occurs to me for NLP - processing language without the 
sounds.


It's like processing songs without the music. 





Re: [agi] standard way to represent NL in logic?

2008-05-08 Thread Mike Tintner
Actually, the sound of language isn't just a subtle thing - it's 
foundational. Language is sounds first, and letters second (or third/fourth 
historically).


And the sounds aren't just sounds - they express emotions about what is 
being said. Not just emphases per one earlier post.


You could in principle have a logical approach to sounds - and deciding 
which sounds should be attached to given words - albeit there can be a vast 
variety of accents, dialects etc.  - and then individual styles of speech.


But your chances of attaching sounds successfully would be the same, 
presumably, as being able to understand that the same melody is being played 
on very different instruments (something someone else here recently referred 
to) -   extremely low.


Doesn't anyone discuss the problem of hearing (written as well as spoken) 
as well as reading language in processing it? It's a huge omission, if not.


YKY/MT:

YKY : Logic can deal with almost everything, depending on how much effort
you put in it =)

"LES sanglots longs. des violons. de l'automne.
Blessent mon cœur d'une langueur monotone."

You don't just read those words, (and most words), you hear them. How's
logic going to hear them?



Google translates that into English as:
"The long sobbing violins of autumn hurt my heart with a monotonous 
languor."


Believe me, an AGI is potentially capable of appreciating the sounds
of the verse and other such nuances.  I won't go into the details, but
the input sentence would be represented as a raw sensory event, and it
is up to abductive interpretation to derive its meanings.  That means,
the AGI would understand it superficially as "The long sobbing
violins... etc", but augmented with other logic formulae that convey
other nuances.

You're talking about some very subtle effects and that's not my focus
right now.  Right now I'm focusing on simple and practical AGI.  Your
stuff is potentially solvable by logic-based AGI, but I won't be
spending time on it now.

YKY







Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Matt Mahoney

--- Steve Richfield <[EMAIL PROTECTED]> wrote:

> On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >
> > See http://www.overcomingbias.com/2008/01/newcombs-proble.html

> After many postings on this subject, I still assert that
> ANY rational AGI would be religious.

Not necessarily.  You execute a program P that inputs the conditions of
the game and outputs "1 box" or "2 boxes".  Omega executes a program W as
follows:

  if P outputs "1 box"
then put $1 million in box B
  else
leave box B empty.

No matter what P is, it cannot call W because it would be infinite
recursion.
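
A short illustration of that recursion (hypothetical Python, not from the
original problem statement; the function names are invented for the sketch):

  def W(P):
      # Omega: put $1 million in box B iff P one-boxes
      return 1_000_000 if P() == "1 box" else 0

  def P_one_boxer():
      return "1 box"

  def P_cheater():
      # tries to peek at Omega's decision before choosing, but Omega's
      # decision is itself a function of this very choice
      return "2 boxes" if W(P_cheater) > 0 else "1 box"

  print(W(P_one_boxer))    # 1000000
  # W(P_cheater)           # RecursionError: the two calls recurse without end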

A rational agent only has to know that there are some things it cannot
compute.  In particular, it cannot understand its own algorithm.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Accidental Genius

2008-05-08 Thread Joel Pitt
On Fri, May 9, 2008 at 3:02 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I have a vague memory of coming across this research to duplicate savant
> behavior, and I seem to remember thinking that the conclusion seems to be
> that there is a part of the brain that is responsible for 'damping down'
> some other mechanism that loves to analyze everything in microscopic detail.
>  It appears that the brain could be set up in such a way that there are two
> opponent processes, with one being capable of phenomenal powers of analysis,
> while the other keeps the first under control and prevents it from
> overwhelming the other things that the system has to do.
...
> Anyhow it is very interesting.  Perhaps savantism is an attention mechanism
> disorder?  Like, too much attention.

Another possibility is that the analytic and microscopic detail method
of thinking doesn't scale well to real life (particularly in modelling
OTHER minds), which might be why autistics are often unable to
function in every day society without assistance, and why non-autistic
people may have the capability to display similar characteristics with
proper stimulation of certain parts of the brain, possibly disabling a
generality or abstraction system.

J



Re: [agi] standard way to represent NL in logic?

2008-05-08 Thread YKY (Yan King Yin)
On 5/7/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
> YKY : Logic can deal with almost everything, depending on how much effort
> you put in it =)
>
> "LES sanglots longs. des violons. de l'automne.
> Blessent mon cœur d'une langueur monotone."
>
> You don't just read those words, (and most words), you hear them. How's
> logic going to hear them?


Google translates that into English as:
"The long sobbing violins of autumn hurt my heart with a monotonous languor."

Believe me, an AGI is potentially capable of appreciating the sounds
of the verse and other such nuances.  I won't go into the details, but
the input sentence would be represented as a raw sensory event, and it
is up to abductive interpretation to derive its meanings.  That means,
the AGI would understand it superficially as "The long sobbing
violins... etc", but augmented with other logic formulae that convey
other nuances.

You're talking about some very subtle effects and that's not my focus
right now.  Right now I'm focusing on simple and practical AGI.  Your
stuff is potentially solvable by logic-based AGI, but I won't be
spending time on it now.

YKY



Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Radhika Tibrewal
Here are a few,

http://serendip.brynmawr.edu/exchange/morton/socialneuroscience
http://www.psypress.com/socialneuroscience/introduction.asp





> Radhika Tibrewal wrote:
>> Something similar with respect to Social Neuroscience would also be
>> interesting, since, being an emerging field, it is bound to be heavily
>> criticized. It is definitely still in a very nascent stage but growing
>> rapidly.
>
> I am actually not familiar with Social Neuroscience:  can you provide
> some links?
>
>
> Richard Loosemore
>




Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Richard Loosemore

Mike Tintner wrote:

http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf

Richard's cowriter above reviews the state of cognitive neuropsychology, 
[and the Handbook of Cognitive Neuropsychology] painting a picture of v. 
considerable disagreement in the discipline. I'd be interested if anyone 
can recommend similar overviews of cognitive science. I'd be 
particularly interested to have some kind of survey of the acceptance of 
embodied cognitive science within the field as a whole. My impression is 
it's still limited, although relentlessly growing.  But anyway a good 
overview would be good to have:


"While a description of any subject will
describe only theories, what is quite remarkable
about those described in the HCN is the extent to
which they conflict. Furthermore, the conflict
between theories is often at a high level: To what
extent does the mind use symbolic rather than
subsymbolic processing? How modular is it? How
closely tied are psychological processes to neural
pathways? How many routes are involved in any
one process? and so on. Here are a couple of
examples from the HCN. Shelton and Caramazza,
in their chapter on the organisation of semantic
memory, argue for a domain-specific knowledge
hypothesis that views knowledge as being organised
into broad domains deriving from specialised
neural mechanisms, against the otherwise prevalent
modality-specific, sensory-functional theory.
Nickels's chapter reflects the dominant view in
studies based on normal and brain-damaged participants, and
computational modelling, that there is a stage of
lemma access in speech production; Caramazza
(1997) argues convincingly against the existence of
such a stage. There is even disagreement about
what commonly used terms mean: As Nickels notes
in her chapter on spoken word production, the
words "semantics" and "concepts" are both used to
refer to general preverbal aspects of knowledge and
to lexically specific aspects of meaning. To these
examples one can add: How many routes are
involved in reading? Is there a general phonological
deficit underlying phonological dyslexia? Is speech
production an interactive process? How many
phonological buffers are there? and so on. While
debate and controversy are signs of a healthy, developing
subject, one can have too much of a good
thing. Although any particular description of a
theory sounds sensible, overall the HCN leaves me
in a turmoil of confusion."


Trevor Harley's review, above, was something of a watershed in the 
field, causing much controversy and discussion.  The paper that he and I 
wrote recently was a follow-up to that paper.


There are no similar overviews or critiques of cognitive science in 
general.  I am less familiar with critiques of embodied approaches.



Richard Loosemore



Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Richard Loosemore

Radhika Tibrewal wrote:

Something similar with respect to Social Neuroscience would also be
interesting, since, being an emerging field, it is bound to be heavily
criticized. It is definitely still in a very nascent stage but growing
rapidly.


I am actually not familiar with Social Neuroscience:  can you provide 
some links?



Richard Loosemore



Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Radhika Tibrewal
Something similar with respect to Social Neuroscience would also be
interesting, since, being an emerging field, it is bound to be heavily
criticized. It is definitely still in a very nascent stage but growing
rapidly.


> http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf
>
> Richard's cowriter above reviews the state of cognitive neuropsychology,
> [and the Handbook of Cognitive Neuropsychology] painting a picture of v.
> considerable disagreement in the discipline. I'd be interested if anyone
> can
> recommend similar overviews of cognitive science. I'd be
> particularly interested to have some kind of survey of the acceptance of
> embodied cognitive science within the field as a whole. My impression is
> it's still limited, although relentlessly growing.  But anyway a good
> overview would be good to have:
>
> "While a description of any subject will
> describe only theories, what is quite remarkable
> about those described in the HCN is the extent to
> which they conflict. Furthermore, the conflict
> between theories is often at a high level: To what
> extent does the mind use symbolic rather than
> subsymbolic processing? How modular is it? How
> closely tied are psychological processes to neural
> pathways? How many routes are involved in any
> one process? and so on. Here are a couple of
> examples from the HCN. Shelton and Caramazza,
> in their chapter on the organisation of semantic
> memory, argue for a domain-specific knowledge
> hypothesis that views knowledge as being organised
> into broad domains deriving from specialised
> neural mechanisms, against the otherwise prevalent
> modality-specific, sensory-functional theory.
> Nickels's chapter reflects the dominant view in
> studies based on normal and brain-damaged participants, and
> computational modelling, that there is a stage of
> lemma access in speech production; Caramazza
> (1997) argues convincingly against the existence of
> such a stage. There is even disagreement about
> what commonly used terms mean: As Nickels notes
> in her chapter on spoken word production, the
> words "semantics" and "concepts" are both used to
> refer to general preverbal aspects of knowledge and
> to lexically specific aspects of meaning. To these
> examples one can add: How many routes are
> involved in reading? Is there a general phonological
> deficit underlying phonological dyslexia? Is speech
> production an interactive process? How many
> phonological buffers are there? and so on. While
> debate and controversy are signs of a healthy, developing
> subject, one can have too much of a good
> thing. Although any particular description of a
> theory sounds sensible, overall the HCN leaves me
> in a turmoil of confusion."
>
>
>




Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-08 Thread Mark Waser
Stefan,

I would prefer that you not remain quiet.  I would prefer that you pick 
*specific* points and argue them -- that's the way that science is done.  The 
problem is that AGI is an extremely complex subject and mailing lists are a 
horrible forum for discussing such unless all participants are both qualified 
and willing to follow certain rules and assumptions.  I'd love to throw a 
number of people of this list for making baseless proclamations and not 
defending them but you generally tend not to make broad, baseless statements 
(the current thread excluded ;-).  I *AM* intending to write more to clarify my 
view of Richard's point but I'm frantically trying to finish a paper due May 
15th that is eating up all my spare time at the moment.  Remind me late next 
week if it doesn't appear.

Mark
  - Original Message - 
  From: Stefan Pernar 
  To: agi@v2.listbox.com 
  Sent: Thursday, May 08, 2008 1:03 PM
  Subject: **SPAM** Re: [agi] Evaluating Conference Quality [WAS Re: Symbol 
Grounding ...]


  On Fri, May 9, 2008 at 12:44 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Richard, there is no substance behind your speculations - zero. Zip. And 
all the fantasy and imagination you so clearly demonstrated here on the board 
won't make up for that. You make stuff up as you go along and as you need it and 
you clearly have enough time on your hands to do so.
 

>> Besides Kaj - can we see a show of hands who disagrees with me? Happy to 
step back and be quiet then. It is too often that people stay quiet and let 
stuff like this slide.

Sorry, Stefan, but I disagree strongly with you.  Richard has an extremely 
valid point that is obscured by his explanations and personality.

Almost every practitioner of AGI is currently looking for intelligence 
under a streetlight when every indication is that it is in the darkness less 
than 10 years off (reasonably close to where Texai is currently headed).

  Mark - thanks for sharing your point of view. I respect that and will - true 
to my word - be quiet now.

  -- 
  Stefan Pernar
  3-E-101 Silver Maple Garden
  #6 Cai Hong Road, Da Shan Zi
  Chao Yang District
  100015 Beijing
  P.R. CHINA
  Mobil: +86 1391 009 1931
  Skype: Stefan.Pernar 



[agi] Re: pattern definition

2008-05-08 Thread Richard Loosemore

[EMAIL PROTECTED] wrote:

Hello

I am writing a literature review on AGI and I am mentioning the 
definition of pattern as explained by Ben in his work.


"A pattern is a representation of an object on a simpler scale. For 
example, a pattern in a drawing of a mathematical curve could be a 
program that can compute the curve from a formula (Looks et al. 2004). 
My supervisor told me that "she doesn?t see how this can be simpler than 
the actual drawing".


Any other definition I could use in the same context to explain to a 
non-technical audience?


thanks

xav


Xav,

[I am copying this to the AGI mailing list because it is more 
appropriate there than on Singularity]


A more general definition of pattern would include the idea that there 
is a collection of mechanisms that take in a source of information (e.g. 
an image consisting of a grid of pixels) and respond in such a way that 
each mechanism 'triggers' in some way when a particular arrangement of 
signal values appears in the information source.


Note that the triggering of each mechanism is the 'recognition' of a 
pattern, and the mechanism in question is a 'recognizer' of a pattern. 
(In this way of looking at things, there are many mechanisms, one for 
each pattern).  The 'particular arrangement of signal values' is the 
pattern itself.


Most importantly note that a mechanism does not have to trigger for some 
exact, deterministic set of signal values.  For example, a mechanism 
could respond in a stochastic, noisy way to a whole bunch of different 
arrangements of signal values.  It is allowed to be slightly 
inconsistent, and not always respond in the same way to the same input 
(although it would be a particularly bad pattern recognizer if it did 
not behave in a reasonably consistent way!).  The amount of the 
'triggering' reaction does not have to be all-or-nothing, either:  the 
mechanism can give a graded response.
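
A rough sketch of one such mechanism (illustrative Python only, not a 
proposal for how any particular system implements it; the template values 
are made up):

  import random

  def make_recognizer(template, noise=0.05):
      # 'template' is the arrangement of signal values the mechanism is tuned to
      def respond(signals):
          overlap = sum(1 for s, t in zip(signals, template) if s == t) / len(template)
          # graded, slightly stochastic response rather than all-or-nothing
          return max(0.0, min(1.0, overlap + random.gauss(0.0, noise)))
      return respond

  horse_like = make_recognizer([1, 1, 0, 1, 0, 0, 1, 1])
  print(horse_like([1, 1, 0, 1, 0, 0, 1, 1]))   # near 1.0: strong trigger
  print(horse_like([0, 0, 1, 0, 1, 1, 0, 0]))   # near 0.0: little or no response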


What the above paragraph means is that the thing that we call a 
'pattern' is actually 'whatever makes a mechanism trigger', and we have 
to be extremely tolerant of the fact that a wide range of different 
signal arrangements will give rise to triggering ... so a pattern is 
something much more amorphous and hard to define than simply *one* 
arrangement of signals.


Finally, there is one more twist to this definition, which is very 
important.  Everything said above was about arrangements of signals in 
the primary information source ... but we also allow that some 
mechanisms are designed to trigger on an arrangement of other 
*mechanisms*, not just primary input signals.  In other words, this 
pattern finding system is hierarchical, and there can be abstract patterns.


This definition of pattern is the most general one that I know of.  I 
use it in my own work, but I do not know if it has been explicitly 
published and named by anyone else.


In this conception, patterns are defined by the mechanisms that trigger, 
and further deponent sayeth not what they are, in any more fundamental way.


And one last thing:  as far as I can see this does not easily map onto 
the concept of Kolmogorov complexity.  At least, the mapping is very 
awkward and uninformative, if it exists.  If a mechanism triggers on a 
possibly stochastic, nondeterministic set of features, it can hardly be 
realised by a feasible algorithm, so talking about a pattern as an 
algorithm that can generate the source seems, to me at least, to be 
unworkable.


Hope that is useful.




Richard Loosemore


P.S.  Nice to see some Welsh in the boilerplate stuff at the bottom of 
your message. I used to work at Bangor in the early 90s, so it brought 
back fond memories to see "Prifysgol Bangor"!  Are you in the Psychology 
department?




Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-08 Thread Stefan Pernar
On Fri, May 9, 2008 at 12:44 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  >> Richard, there is no substance behind your speculations - zero. Zip.
> And all the fantasy and imagination you so clearly demonstrated here on the
> board won't make up for that. You make stuff up as you go along and as you
> need it and you clearly have enough time on your hands to do so.
>
> >> Besides Kaj - can we see a show of hands who disagrees with me? Happy to
> step back and be quiet then. It is too often that people stay quiet and let
> stuff like this slide.
>
> Sorry, Stefan, but I disagree strongly with you.  Richard has an extremely
> valid point that is obscured by his explanations and personality.
>
> Almost every practitioner of AGI is currently looking for intelligence
> under a streetlight when every indication is that it is in the darkness less
> than 10 years off (reasonably close to where Texai is currently headed).
>

Mark - thanks for sharing your point of view. I respect that and will - true
to my word - be quiet now.

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar



[agi] Cognitive Neuropsychology

2008-05-08 Thread Mike Tintner

http://www.dundee.ac.uk/psychology/taharley/pcgn_harley_review.pdf

Richard's cowriter above reviews the state of cognitive neuropsychology, 
[and the Handbook of Cognitive Neuropsychology] painting a picture of v. 
considerable disagreement in the discipline. I'd be interested if anyone can 
recommend similar overviews of cognitive science. I'd be 
particularly interested to have some kind of survey of the acceptance of 
embodied cognitive science within the field as a whole. My impression is 
it's still limited, although relentlessly growing.  But anyway a good 
overview would be good to have:


"While a description of any subject will
describe only theories, what is quite remarkable
about those described in the HCN is the extent to
which they conflict. Furthermore, the conflict
between theories is often at a high level: To what
extent does the mind use symbolic rather than
subsymbolic processing? How modular is it? How
closely tied are psychological processes to neural
pathways? How many routes are involved in any
one process? and so on. Here are a couple of
examples from the HCN. Shelton and Caramazza,
in their chapter on the organisation of semantic
memory, argue for a domain-specific knowledge
hypothesis that views knowledge as being organised
into broad domains deriving from specialised
neural mechanisms, against the otherwise prevalent
modality-specific, sensory-functional theory.
Nickels's chapter reflects the dominant view in
studies based on normal and brain-damaged participants, and
computational modelling, that there is a stage of
lemma access in speech production; Caramazza
(1997) argues convincingly against the existence of
such a stage. There is even disagreement about
what commonly used terms mean: As Nickels notes
in her chapter on spoken word production, the
words "semantics" and "concepts" are both used to
refer to general preverbal aspects of knowledge and
to lexically specific aspects of meaning. To these
examples one can add: How many routes are
involved in reading? Is there a general phonological
deficit underlying phonological dyslexia? Is speech
production an interactive process? How many
phonological buffers are there? and so on. While
debate and controversy are signs of a healthy, developing
subject, one can have too much of a good
thing. Although any particular description of a
theory sounds sensible, overall the HCN leaves me
in a turmoil of confusion."





Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-08 Thread Richard Loosemore


You may want to check out the background material on this issue.  Harnad 
invented the idea that there is a 'symbol grounding problem', so that is 
why I quoted him.  His usage of the word 'symbol' is the one that is 
widespread in cognitive science, but it appears that you are missing 
this, and instead interpreting the word 'symbol' to be one of your own 
idiosyncratic meanings.  You can see this most clearly when you write 
that the symbols are things like "H-O-R-S-E" and "C-A-T" etc ... those 
look like strings of letters, so if you think that a symbol, by 
definition, must involve a string of letters (or phonemes), then you are 
misunderstanding Harnad's (and everyone else's) meaning by rather a 
wide margin.  That probably explains your puzzlement in this case.



Richard Loosemore



Mike Tintner wrote:
I'm not quite sure why Richard would want to quote Harnad. Harnad's idea 
of how the brain works depends on it first processing our immediate 
sensory images as "iconic representations"  - not 1m miles from Lakoff's 
image schemas. He sees the brain as first developing some kind of horse 
graphics, for the horses we see,


Then there is an additional and very confusing level of "categorical 
representations" which pick out the "invariant features" of horses - and 
are still  nonsymbolic. But Harnad doesn't give any examples of what 
these features are. They are necessary he claims to be able to 
distinguish between horses and similar animals.


(If anyone has further light to shed here, I'd be v. interested).

And only after those two levels of processing does the brain come to 
symbols - to "H-O-R-S-E" and "C-A-T" etc - although, of course, if 
you're thinking evolutionarily, it's arguable that the brain doesn't 
actually need these symbols at all -our ancestors survived happily 
without language.


So Harnad depicts symbols as not so much simply grounded as deeply 
rooted in a tree of imagistic processing - and I'm not aware of any 
AGI-er using imagistic processing (or have I got someone, like Ben, wrong?)


Richard:

Derek Zahn wrote:

Richard Loosemore:

 > My god, Mark: I had to listen to people having a general 
discussion of
 > "grounding" (the supposed them of that workshop) without a single 
person

 > showing the slightest sign that they had more than an amateur's
 > perspective on what that concept actually means.
 I was not at that workshop and am no expert on that topic, though I 
have seen the word used in several different ways.  Could you point 
at a book or article that does explain the concept or at least use it 
heavily in a correct way?  I would like to improve my understanding 
of the meaning of the "grounding" concept.
Note:  sometimes written words do not convey intentions very well -- 
I am not being sarcastic, I am asking for information to help improve 
the quality of discussion that you have found lacking in the past.


I still think it is best to go back to Stevan Harnad's two main papers 
on the topic.  He originated the issue, then revisited it with some 
frustration when people started stretching it to mean anything under 
the sun.


So:

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad90.sgproblem.html

and

http://users.ecs.soton.ac.uk/harnad/Papers/Harnad/harnad93.cogsci.html

are both useful.

I do not completely concur with Harnad, but I certainly agree with him 
that there is a real issue here.


However...

The core confusion about the SGP is so basic that you will find it 
difficult to locate one source that explains it.  Here it is in 
Harnad's own words (from the second paper above):


"The goal of symbol grounding is not to guarantee uniqueness but to 
ensure that the connection between the symbols and the objects they 
are systematically interpretable as being about does not depend 
exclusively on an interpretation projected onto the symbols by an 
interpreter outside the system."


The crucial part is to guarantee that the meaning of the symbols does 
not depend on interpreter-applied meanings.  This is a subtle issue, 
because the interpreter (i.e. the programmer or system designer) can 
insert their own interpretations on the symbols in all sorts of ways. 
For example, they can grab a symbol and label it "cat" (this being the 
most egregious example of failure to ground), or they can stick 
parameters into all of the symbols and insist that the parameter 
"means" something like the "probability that this is true, or real".  
If the programmer does anything to interpret the meaning of system 
components, then there is at least a DANGER that the symbol system has 
been compromised, and is therefore not grounded.
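
To make the danger concrete, here is a deliberately crude sketch (Python; 
every name in it is mine, not Harnad's) of the difference between a symbol 
whose meaning is projected onto it by the designer and one whose only 
"meaning" is whatever the system itself has attached to it:

# Ungrounded: the meaning lives entirely in the designer's head.  The label
# "cat" and the "probability it is real" parameter mean nothing to the
# system; they are an interpretation projected from outside.
ungrounded_symbol = {"label": "cat", "probability_real": 0.9}

# A cartoon of the grounded alternative: the token is opaque, and whatever
# aboutness it has comes from the sensory episodes the system itself
# associated with it.  (A stand-in for iconic/categorical machinery, no more.)
class GroundedSymbol:
    def __init__(self, token_id):
        self.token_id = token_id   # opaque token, no designer-supplied label
        self.episodes = []         # raw sensory records gathered by the system

    def observe(self, sensory_vector):
        # Bind this token to another episode of the system's own experience.
        self.episodes.append(sensory_vector)

    def matches(self, sensory_vector, threshold=0.8):
        # Crude similarity test against stored episodes.
        def overlap(a, b):
            return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))
        return any(overlap(e, sensory_vector) >= threshold for e in self.episodes)

The first object is compromised from the start; the second is not 
automatically grounded either, but at least nothing in it depends on an 
interpreter standing outside the system.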


You see, when a programmer makes some kind of design choice, they very 
often insert some *implicit* interpretation of what symbols mean.  But 
then, if that same programmer goes to the trouble of connecting that 
AGI to some mechanisms that build and use symbols, then the 
build-and-use mechanisms will also *implicitly* impose 

Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-08 Thread Mark Waser
>> Richard, there is no substance behind your speculations - zero. Zip. And all 
>> the fantasy and imagination you so clearly demonstrated here on the board 
>> won't make up for that. You make stuff up as you go along and as you need it 
>> and you clearly have enough time at your hand to do so.
 
>> Beside Kaj - can we see a show of hand who disagrees with me? Happy to step 
>> back and be quiet then. It is too often that people stay quiet and let stuff 
>> like this slide.

Sorry, Stefan, but I disagree strongly with you.  Richard has an extremely 
valid point that is obscured by his explanations and personality.

Almost every practitioner of AGI is currently looking for intelligence under a 
streetlight when every indication is that it is in the darkness less than 10 
years off (reasonably close to where Texai is currently headed).



Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-08 Thread Richard Loosemore

Stefan Pernar wrote:

Richard, there is no substance behind your speculations - zero. Zip. And 
all the fantasy and imagination you so clearly demonstrated here on the 
board won't make up for that. You make stuff up as you go along and as 
you need it and you clearly have enough time at your hand to do so.


Wow!  This is astonishing.

I gently invited you to take the discussion onto a higher, more rational 
plane, and you came back with even more personal abuse of the worst 
possible sort.  There is nothing in the above paragraph except 
unsupported insults.


Breathtaking.




All of the points you just made could be met, if you articulated
them. Scruffies?  Some people only use that as a derogatory term:
 what did you mean by it?  I am not necessarily even a 'scruffy' by
any accepted definition of that term, and certainly not by the
definition from Russell and Norvig that I quoted in my paper.  As
far as I am aware, *nobody* has accused me of being a scruffy ... it
was actually me who first mentioned the scruffy-neat divide!


Let's not use shady rhetoric here - shall we? You know exactly that 
scruffy refers to a technical distinction. How do you expect to be taken 
seriously if you try to manipulate like this? Not going to happen with me.


I am honestly completely confused about what you are saying ('shady 
rhetoric', 'manipulate'  ?).


The scruffy-neat distinction was supposed to be a contrast between 
Logical AI people and those who came before, but in some people's mouths 
it is used to denigrate the 'scruffies' as unscientific and adulate the 
'neats' as real scientists.  That is a derogatory usage.  Some scruffies 
don't mind being called that at all, because they consider it to be 
merely a summary of the fact that they disagree with the premises of the 
Logical AI people ... they certainly do not regard themselves as 
unscientific hackers, just interested in getting a system working by a 
build-and-fix approach.


So there is a confusion here.  Do you just mean that I am not in 
agreement with the Logical AI crowd?  That would not be insulting, and 
it would be correct.  Do you mean that I am doing the same kind of thing 
that was done by the people who came before the Logical AI period?  That 
would also not be insulting, but it would be technically wrong.  Do you 
mean that I am doing something basically unscientific?  That would be 
insulting and wrong, both.


I was merely inviting you, in a polite way, to explain which of these 
meanings you were intending.  They are very different.





"Wild speculations"?  Which, exactly?  "Grand pie-in-the-sky plans
without substance"?  Again, what are you referring to?  Don't these
all sound like Stefan's personal opinion?

 
Beside Kaj - can we see a show of hand who disagrees with me? Happy to 
step back and be quiet then. It is too often that people stay quiet and 
let stuff like this slide.


I am happy either way:  but I would prefer that you articulate what 
exactly you mean by making these allegations.


You see, your statements could be interpreted as based on pure ignorance 
on your part ... an inability to actually understand the arguments, 
plus a willingness to condemn things that you do not understand, and an 
eagerness to imply that the people talking about those things are 
ignorant, not you.  There are many people who do engage in that kind of 
behavior:  you don't want to look like one of those people, believe me.


I would really rather that you prove that you understand the arguments, 
because if you continue to just complain without supporting arguments, it 
does not reflect very well on you.





On all of these points, we could have had meaningful discussion (if
you chose), but if you keep them to yourself and simply decide that
I am an idiot, what chance do I have to meet your objections?  I am
always open to criticism, but to be fair it has to be detailed,
specific and not personal.

 
The lack of consistency and quality in your writings makes it not 
worthwhile for me to point out particular points of criticism that would 
even be worth debating with you. It is not that there are two or three 
points that I do not understand. No - your whole concept is an 
uninteresting house of cards to me. Your rhetoric is shady and dogmatic 
- you are unresponsive to substantial criticisms. No matter what people 
say you will continue to make up stuff and throw it right back at them  
- spiked with subtle personal attacks.


Astonishing!

Can you give any examples of these things?  These are amazingly strong 
allegations.  Back them up, please.



In short you are not worth my time and the only reason why I am spending 
time on this is because I hope the list will wake up to it.


Also, I am a little confused by the first sentence of the above.  It
implies that you only just started looking through my 'stuff' ...
have you read the published papers?  The blog 

Re: [agi] Accidental Genius

2008-05-08 Thread Richard Loosemore

Brad Paulsen wrote:
I happened to catch a program on National Geographic Channel today 
entitled "Accidental Genius."  It was quite interesting from an AGI 
standpoint. 

One of the researchers profiled has invented a device that, by sending 
electromagnetic pulses through a person's skull to the appropriate spot 
in the left hemisphere of that person's brain, can achieve behavior 
similar to that of an idiot savant in a non-brain-damaged person (in the 
session shown, this was a volunteer college student). 

Before being "zapped" by the device, the student is taken through a 
series of exercises.  One is to draw a horse from memory.  The other is 
to read aloud a very familiar "saying" with a slight grammatical mistake 
in it (the word "the" is duplicated, i.e., "the the," in the saying -- 
sorry I can't recall the saying used). Then the student is shown a 
computer screen full of "dots" for about 1 second and asked to record 
his best guess at how many dots there were.  This exercise is repeated 
several times (with different numbers of dots each time). 

The student is then zapped by the electromagnetic pulse device for 15 
minutes.  It's kind of scary to watch the guy's face flinch 
uncontrollably as each pulse is delivered. But, while he reported 
feeling something, he claimed there was no pain or disorientation. His 
language facilities were unimpaired (they zap a very particular spot in 
the left hemisphere based on brain scans taken of idiot savants). 

After being zapped, the exercises are repeated.  The results were 
impressive.  The horse drawn after the zapping contained much more 
detail and was much better rendered than the horse drawn before the 
zapping.  Before the zapping, the subject read the familiar saying 
correctly (despite the duplicate "the").  After zapping, the duplicate 
"the" stopped him dead in his tracks.  He definitely noticed it.  The 
dots were really impressive though.  Before being zapped, he got the 
count right in only two cases.  After being zapped, he got it right in 
four cases.


The effects of the electromagnetic zapping on the left hemisphere fade 
within a few hours.  Don't know about you, but I'd want that in writing.


You can watch the episode on-line here: 
http://channel.nationalgeographic.com/tv-schedule.  It's not scheduled 
for repeat showing anytime soon.


That's not a direct link (I couldn't find one).  When you get to that 
Web page, navigate to Wed, May 7 at 3PM and click the "More" button 
under the picture.  Unfortunately, the "full-motion" video is the size 
of a large postage stamp.  The "full screen" view uses "stop motion" (at 
least it did on my laptop using a DSL-based WiFi hotspot). The audio is 
the same in both versions.


Cheers,

Brad


I haven't seen the program, but the method is (unless I am mistaken) 
called "transcranial magnetic stimulation" or TMS.  It zaps the brain 
with a magnetic pulse, which scrambles signals and systems for a while, 
but as far as anyone can tell, has no lasting effects.


I have a vague memory of coming across this research to duplicate savant 
behavior, and I seem to remember thinking that the conclusion seems to 
be that there is a part of the brain that is responsible for 'damping 
down' some other mechanism that loves to analyze everything in 
microscopic detail.  It appears that the brain could be set up in such a 
way that there are two opponent processes, with one being capable of 
phenomenal powers of analysis, while the other keeps the first under 
control and prevents it from overwhelming the other things that the 
system has to do.


This is a very thought-provoking example of a process that is not (as 
far as I know) duplicated in AGI systems.  Note carefully:  there is not 
necessarily any 'intelligence' in the mechanism that enforces the 
balance (the part that was presumably knocked out), it is probably just 
a blind regulator.  This means that the regulator would control the 
other processes in a somewhat nondeterministic manner, imposing its 
effects by a diffuse control parameter.
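
To make that picture concrete, here is a toy sketch (my own invention, not 
anything from the program or from any existing AGI system; Python) of an 
analyzer being blindly damped by a single diffuse parameter:

# Toy opponent-process sketch.  'damping' stands in for the hypothetical
# diffuse control parameter; zapping the regulator amounts to lowering it.
def detail_analyzer(scene):
    # The mechanism that loves microscopic detail: here it just counts dots.
    return sum(row.count("*") for row in scene)

def perceive(scene, damping=0.9):
    # The regulator has no idea what the analyzer is doing; it simply
    # suppresses most of its output before the rest of the system sees it.
    return round(detail_analyzer(scene) * (1.0 - damping))

scene = ["*..*.*", ".*...*", "*.*.*."]   # 8 dots in all
print(perceive(scene, damping=0.9))      # normal state: most detail lost -> 1
print(perceive(scene, damping=0.0))      # regulator knocked out -> 8

The point is only that the regulator can be completely stupid and still 
produce the kind of before/after difference seen in the program.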


Anyhow it is very interesting.  Perhaps savantism is an attention 
mechanism disorder?  Like, too much attention.





Richard Loosemore


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-08 Thread Stan Nilsen

Steve,

I suspect I'll regret asking, but...

Does this rational belief make a difference to intelligence?  (For the 
moment confining the idea of intelligence to making good choices.)


If the AGI rationalized the existence of a higher power, what ultimate 
bad choice do you see as a result? (I've assumed that you have a bias 
against religion and hence see a big zero or negative in it.)


I agree that asking God to hold together what we ought to fix is a bad 
choice.  But then again non-religious folks use baling wire too.


I prefer not to digress into a discussion of religion, but rather stay 
to the question of "potential impact on AGI if such a belief was present 
in the assumptions of the AGI."  If the subject can only lead to 
religious critiques, please ignore my response.


Stan



Steve Richfield wrote:

Vladimir,

On 5/7/08, *Vladimir Nesov* <[EMAIL PROTECTED]> wrote:


See http://www.overcomingbias.com/2008/01/newcombs-proble.html

 
This is a PERFECT talking point for the central point that I have been 
trying to make. Belief in the Omega discussed early in that article is 
essentially a religious belief in a greater power. Most Christians see 
examples of the power of God at around a monthly rate. Whenever chance 
works for apparent good or against perceived evil, there is clear 
evidence of God doing his job.
 
Story: A Baptist minister neighbor had his alternator come loose just as 
he was leaving for an important meeting, so I temporarily secured it 
with an industrial zip tie, and told him to remove the zip tie and 
properly bolt the alternator back into place when he got back home. 
Three weeks later, his alternator came loose again. He explained that he 
had done NOTHING wrong this week, and so he just couldn't see why God 
took this occasion to smite his alternator. I suggested that we examine 
it for clues. Sure enough, there were the remnants of my zip tie which 
he had never replaced. He explained that God seemed to be holding things 
together OK, so why bother fixing it. Explaining the limitations of 
industrial zip ties seemed to be hopeless, so I translated my 
engineering paradigm to his religious paradigm:
 
I explained that he had been testing God by seeing how long God would 
continue to hold his alternator in place, and apparently God had grown 
tired of playing this game. "Oh, I see what you mean" he said quite 
contritely, and he immediately proceeded to properly bolt his alternator 
back down. Clearly, God had yet again shown his presence to him.
 
Christianity (and other theologies) are no less logical than the 
one-boxer in the page you cited. Indeed, the underlying thought process 
is essentially identical.
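
For what it is worth, the one-boxer's arithmetic is easy to make explicit. 
A minimal sketch (Python), assuming the usual $1,000 in the visible box - 
that figure is my assumption - plus the $1 million that may be in box B, 
with a predictor who is right with probability p:

# Expected payoffs in Newcomb's problem.  The $1,000 in the visible box is
# an assumed standard figure; the $1,000,000 is the amount in box B.
def expected_one_box(p):
    return p * 1000000               # box B is full only if one-boxing was foreseen

def expected_two_box(p):
    return 1000 + (1 - p) * 1000000  # $1,000 always, plus $1M only if Omega erred

for p in (0.5, 0.9, 0.99):
    print(p, expected_one_box(p), expected_two_box(p))
# Above roughly p = 0.5005 the one-boxer comes out ahead, which is exactly
# why trusting the predictor looks like the winning move.

The believer's bet pays off for the same structural reason, so long as the 
perceived predictor is reliable enough.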
 


"It is precisely the notion that Nature does not care about our
algorithm, which frees us up to pursue the winning Way - without
attachment to any particular ritual of cognition, apart from our
belief that it wins.  Every rule is up for grabs, except the rule of
winning."

 
Now, consider that ~50% of our population believes that people who do 
not believe in God are fundamentally untrustworthy. This tends to work 
greatly to the disadvantage of atheists, thereby showing that God does 
indeed favor his believers. After many postings on this subject, I 
still assert that ANY rational AGI would be religious. Atheism is a 
radical concept and atheists generally do not do well in our society. 
What sort of "rational" belief (like atheism) would work AGAINST 
winning? In short, your Omega example has apparently made my point - 
that religious belief IS arguably just as logical (if not more so) than 
atheism. Do you agree?
 
Thank you.
 
Steve Richfield
 

 



