Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Ben Goertzel
>  Perhaps it will start to give you a sense that words and indeed all symbols
>  provide an extremely limited *inventory of the world* and all its infinite
>  parts and behaviours.
>
>  I welcome any impressionistic responses here, including confused questions.

I agree with the above, but I think one needs to be careful about levels of
description...

One way to define "symbol" is in accordance with Peircean semiotics
... and in this sense,
not every term, predicate or variable utilized in a logical reasoning engine
is actually a "symbol" from the standpoint of the reasoning/learning
process implemented
by the reasoning engine

Similarly, if one implements a neural net learning algorithm on a
digital computer,
the bits used to realize the software program are symbols from the
standpoint of the
programming language compiler and executor, but not from the standpoint of the
neural net itself...

Like neurons, logical tokens may be used as components of complex patterned
arrangements, without any individual symbolic meaning.

Visual images may be represented with superhuman accuracy using logical tokens,
for instance.  These tokens are symbolic at one level, but not
visually symbolic...
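To make that level distinction concrete, here is a toy Python sketch (the image array and helper function are invented for illustration): every float below is a symbol to the interpreter, yet no individual float is a visual symbol of anything.

```python
import random

# An 8x8 grayscale "image" stored as 64 floating-point tokens.
# Each float is a symbol at the programming-language level: the
# interpreter parses and manipulates it as a discrete token.
image = [[random.random() for _ in range(8)] for _ in range(8)]

# At the visual level, though, no single token depicts anything.
# Only the whole patterned arrangement approximates the image;
# a region's brightness is a property of the arrangement, not
# of any one token.
def region_brightness(img, r0, r1, c0, c1):
    cells = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(cells) / len(cells)

top_half = region_brightness(image, 0, 4, 0, 8)
assert 0.0 <= top_half <= 1.0  # a statistic of the arrangement
```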

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


[agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Mike Tintner
Perhaps this site will help some of you to start seeing that symbols have 
extremely limited powers, and that something more is needed - and also give you 
a sense of how attitudes are changing.


http://www.imageandmeaning.org/

(it's part of the Envisioning Science Program - check out the movie)

also:

The Initiative in Innovative Computing (IIC)

http://iic.harvard.edu/

No v. coherent message behind all this stuff - just a lot of ongoing 
questions, which I hope will get you to start asking questions. And they're 
mainly talking about the need to envision *science*. What they haven't 
realised is what follows - the need to envision and image-ine intelligence, 
period. But this shows things starting to happen. And the momentum will 
build. (Welcome info re anything related.)


Perhaps it will start to give you a sense that words and indeed all symbols 
provide an extremely limited *inventory of the world* and all its infinite 
parts and behaviours.


I welcome any impressionistic responses here, including confused questions. 





RE: [agi] reasoning & knowledge.. p.s.

2008-02-15 Thread David Clark
I agree with Pei Wang 100% on this point.

Even though I find many of the comments from Mike interesting, I think
it would be much more productive to add to the solutions and problems of
creating a computer-based AGI rather than trying to convince the converted
that AGI on today's computers is impossible.  Some problems might be solved
by visual techniques, but if this is important, then that aspect of the
problem will probably have to wait until the more general problem of object
extraction from video images is further along (per Bob Mottram's comments).

Most of the people on this list have quite different ideas about how an AGI
should be made, BUT I think there are a few things that most, if not all,
agree on.

1. Intelligence can be created in software on computers that exist today.
2. Physical embodiment of the software is not essential (though it might be
desirable) for intelligence to be created.
3. Intelligence hasn't yet been reached in anyone's AGI project.

It is not possible to *prove* any AGI project to be correct until it is
actually an AGI and this list won't matter much when that happens.  The only
way to find out if a particular AGI approach is actually a good one is to
try and create it.  It will be difficult to identify even the right projects
when they appear because the AGI will inevitably have some capabilities far
beyond a human's and other abilities that are far less.  Even if a project
gets some level of intelligence with a particular approach, that doesn't
mean that that approach will continue to produce even higher levels of
intelligence or get to the AGI level (whatever that is).

Therefore, all current AGI projects have to be fundamentally based on
intuition or faith or both.  No argument there, but it would seem that there
is no other way to get to creating an AGI when none currently exists.

It is just a waste of time to demand that someone or some group produce
proof that their ideas are correct when that proof is impossible to produce
until an AGI is achieved.  That doesn't mean we can't debate the merits of
different approaches, or demonstrate why previous attempts weren't
successful.  The last point is very difficult because many things can
result in the failure of a project, including scale, resources, etc.  Just
because some previous approach didn't work doesn't necessarily mean that
that approach couldn't work if some other variable were changed.

I believe that a paraplegic person can still be intelligent and useful if
they can just type on a keyboard and use their brain.  This doesn't
*prove* that human intelligence can be created without a body in the first
place, but I think it shows that roaming around in the world and getting
firsthand knowledge from one's senses isn't a 100% prerequisite for
intelligence.

I would appreciate more comments on how to achieve an AGI and fewer on
whether an AGI on computers using software is possible or not.

David Clark

> -Original Message-
> From: Pei Wang [mailto:[EMAIL PROTECTED]
> Sent: February-14-08 5:11 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] reasoning & knowledge.. p.s.
> 
> You are correct that MOST PEOPLE in AI treat observation/perception as
> purely passive. As on many topics, most people in AI are probably wrong.
> However, you keep making claims about "everyone", "nobody", ..., which is
> almost never true. If this is your way to get people to reply to your
> email, it won't work on me anymore.
> 
> There are many open problems in AI, so it is not hard to find one that
> hasn't been solved. If you have an idea about how to solve it, then
> work on it and show us how far you can go. Just saying "Nobody has an
> idea about how to ..." contributes little to the field, since that
> problem typically has been raised decades ago.
> 
> Pei
> 



Re: [agi] reasoning & knowledge.. p.s.

2008-02-15 Thread Stephen Reed
David said:

Most of the people on this list have quite different ideas about how an AGI
should be made BUT I think there are a few things that most, if not all
agree on.

1. Intelligence can be created by using computers that exist today using
software.
2. Physical embodiment of the software is not essential (might be desirable)
for intelligence to be created.
3. Intelligence hasn't yet been reached in anyone's AGI project.

 
I agree entirely.  My comments on this list are generally grounded in my own 
work, or my experience at Cycorp, and my intuition is that indeed today's 
multicore computers are sufficient to achieve intelligence.  Some evidence:

- Driverless cars, e.g. in the DARPA Urban Challenge, are quite competent 
using modest clusters of multicore computers.
- I estimate that a near-future 8-core CPU can achieve real-time automatic 
speech recognition using the Sphinx-4 software.
- My own very preliminary results on an English dialog system give me hope 
that a multicore CPU can be used to robustly convert text into logic faster 
than a human can perform the same task.  For example, my Incremental Fluid 
Construction Grammar parser can convert "the book is on the table" into 
logical statements at a rate of 400 times per second per thread.  That gives 
me a lot of headroom when expanding the grammar rule set, adding commonsense 
entailed facts, and pruning alternative interpretations.
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
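Steve's headroom claim can be roughly sanity-checked with back-of-envelope arithmetic (the thread count and human reading rate below are my assumptions, not figures from his message):

```python
# Hypothetical throughput estimate based on the figure above:
# 400 parses/sec/thread for "the book is on the table".
parses_per_sec_per_thread = 400
threads = 8                      # assumed: one thread per core on an 8-core CPU
total = parses_per_sec_per_thread * threads   # 3200 parses/sec

# A fast human reader manages very roughly 1 sentence/sec, so even a
# 1000x slowdown from richer grammar rules and commonsense pruning
# would leave the parser at human speed or better.
human_sentences_per_sec = 1
headroom = total / human_sentences_per_sec
print(headroom)  # 3200.0
```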





  




RE: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Ed Porter
Mike,

You have been pushing this anti-symbol/pro-image dichotomy for a long time.
I don't understand it.  

Images are sets, or nets, of symbols.  So, if, as you say

" all symbols provide an extremely limited *inventory of the
world* and all its infinite parts and behaviors "

then images are equally limited, since they are nothing but sets or nets of
symbols.  Your position either doesn't make sense or is poorly stated.

What you are saying is somewhat like the statement that "people don't matter
in politics, institutions do."  Such a statement ignores the fact that
institutions are made of people.  But given the human mind's ability to find
the portions of a statement that make sense (the ability that enables
metaphor to work), one hearing this statement might understand it as implying
that people acting together are more important in politics than people
acting alone. 

Perhaps your viewpoint is that merely considering symbols operating alone or
in small numbers fails to explain many important aspects of human-like
intelligence.  If so, that makes sense.  

But that idea is shared by many people on this list.  Hofstadter's Copycat
and his fluid reasoning approach, which have been praised by many on this
list (Pei Wang worked with Hofstadter), are based exactly on the idea of
computations that involve so many individual actors that many of their
processes become "liquid" -- in much the same sense that a financial market
with many purchasers and sellers becomes liquid.  Hofstadter's Copycat
combined local and global influences, as well as randomness, to control
the synthesis of a solution to a problem.  Promising research (the Serre paper
I have cited so many times before) has been done on image recognition, using
digital and hierarchical memory representations (both symbolic), that --
even with the trivial amount of computational resources involved compared to
those of the human visual system -- provides certain types of visual
recognition that outperform humans. 

So feel free to keep pushing the importance of computing on complex sets or
nets of symbols, such as visual images -- or the complex context that builds
up as one reads a good novel.  

But please stop attacking the use of symbols, unless you can come up with
arguments much better than you have in the past.  The activation of a neuron
in a human brain can be viewed as a symbol, because such neurons tend to
have receptive fields, which specify what patterns of synaptic or chemical
inputs will activate them to varying degrees or in varying firing patterns.
So apparently the human mind does pretty well with symbols.  Symbols can be
probabilistic.  Their meanings can be context-sensitive.  They can represent
correlations that humans haven't even explicitly considered.  Even almost all
so-called non-symbolic computing, such as that with neural nets, uses symbols
to represent its weights.  All digital computers are symbolic, and one can
interpret many analogue computers as being symbolic as well.

So focus less on attacking symbols, and more on describing what it is about
computing on large sets or nets of symbols, such as those involved in images,
that you think is needed.

Ed Porter



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 15, 2008 10:34 AM
To: agi@v2.listbox.com
Subject: [agi] A 1st Step To Using Your Image-ination

Perhaps this site will  help some of you to start seeing that symbols have 
extremely limited powers, and something more is needed - and also give you a

sense of how attitudes are changing.

http://www.imageandmeaning.org/

(it's part of the Envisioning Science Program - check out the movie)

also:

The Initiative in Innovative Computing (IIC)

http://iic.harvard.edu/

No v. coherent message behind all this stuff - just a lot of ongoing 
questions, which I hope will get you to start asking questions. And they're 
mainly talking about the need to envision *science*. What they haven't 
realised is what follows - the need to envision and image-ine intelligence, 
period. But this shows things starting to happen. And the momentum will 
build.(Welcome info re anything related).

Perhaps it will start to give you a sense that words and indeed all symbols 
provide an extremely limited *inventory of the world* and all its infinite 
parts and behaviours.

I welcome any impressionistic responses here, including confused questions. 



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-15 Thread Kaj Sotala
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...

On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> They would not operate at the "proposition level", so whatever
> difficulties they have, they would at least be different.
>
> Consider [curiosity].  What this actually means is a tendency for the
> system to seek pleasure in new ideas.  "Seeking pleasure" is only a
> colloquial term for what (in the system) would be a dimension of
> constraint satisfaction (parallel, dynamic, weak-constraint
> satisfaction).  Imagine a system in which there are various
> micro-operators hanging around, which seek to perform certain operations
> on the structures that are currently active (for example, there will be
> several micro-operators whose function is to take a representation such
> as [the cat is sitting on the mat] and try to investigate various WHY
> questions about the representation (Why is this cat sitting on this mat?
>   Why do cats in general like to sit on mats?  Why does this cat Fluffy
> always like to sit on mats?  Does Fluffy like to sit on other things?
> Where does the phrase 'the cat sat on the mat' come from?  And so on).
[cut the rest]

Interesting. This sounds like it might be workable, though of course
the exact associations and such that the AGI develops sound hard to
control. But then, that'd be the case for any real AGI system...

> > Humans have lots of desires - call them goals or motivations - that
> > manifest in differing degrees in different individuals, like wanting
> > to be respected or wanting to have offspring. Still, excluding the
> > most basic ones, they're all ones that a newborn child won't
> > understand or feel before (s)he gets older. You could argue that they
> > can't be inborn goals since the newborn mind doesn't have the concepts
> > to represent them and because they manifest variably with different
> > people (not everyone wants to have children, and there are probably
> > even people who don't care about the respect of others), but still,
> > wouldn't this imply that AGIs *can* be created with in-built goals? Or
> > if such behavior can only be implemented with a motivational-system
> > AI, how does that avoid the problem of some of the wanted final
> > motivations being impossible to define in the initial state?
>
> I must think about this more carefully, because I am not quite sure of
> the question.
>
> However, note that we (humans) probably do not get many drives that are
> introduced long after childhood, and that the exceptions (sex,
> motherhood desires, teenage rebellion) could well be sudden increases in
> the power of drives that were there from the beginning.
>
> This may not have been your question, so I will put this one on hold.

Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a "newborn" AGI hasn't yet built up
the concepts needed to represent the goal. Yet humans do seem to
have built-in goals (using the term a bit loosely, as not all goals
manifest in everyone), despite the fact that newborn humans
haven't yet built up the concepts needed to represent those goals.

It is true that many of those drives seem to begin in early childhood,
but it seems to me that there are still many goals that aren't
activated until after infancy, such as the drive to have children.


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread William Pearson
On 15/02/2008, Ed Porter <[EMAIL PROTECTED]> wrote:
> Mike,
>
>  You have been pushing this anti-symbol/pro-image dichotomy for a long time.
>  I don't understand it.
>
>  Images are sets, or nets, of symbols.  So, if, as you say
>
>
> " all symbols provide an extremely limited *inventory of the
>
> world* and all its infinite parts and behaviors "
>
>  then images are equally limited, since they are nothing but sets or nets of
>  symbols.  Your position either doesn't make sense or is poorly stated.

I think the definition of symbols is the problem here. I
tend to think of a symbol (in an AI sense at least) as being about, or
related to, something in the world: the classic idea of having symbols
for cat or dog, and deducing facts from them.

An image is not intrinsically about anything in the world; optical
illusions (the dalmatian in spots, the two-faces-or-vase figure, the Necker
cube) show that we can view an image in different ways. Mental images aren't
even necessarily made up of data "about" photon activity; they can be
entirely concocted.

Mike needs to clarify what he means by symbol before we start, or
perhaps find or invent a less confusing word.

  Will Pearson



RE: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Ed Porter
In response to William Pearson's message below.

I agree a definition of symbol might be helpful.  

By symbol I just mean something that represents something other than itself.

There need not be any explicit definition of what it represents.  In some
cases what it represents may only be defined by what causes it to have
various activations, and what effects those various activations have.  Often
a symbol represents a set or a range of patterns, including hierarchies or
chains of associations, including probabilistic associations.  Symbols can
also represent various degrees of match.
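Ed's operational definition -- a symbol whose meaning is fixed only by what activates it and to what degree -- can be sketched minimally in Python (all feature names and the matching rule are invented for illustration):

```python
# A symbol defined operationally: its "meaning" is just the mapping
# from input patterns to activation strengths (degrees of match).
def make_symbol(prototype):
    def activation(pattern):
        # Degree of match: fraction of prototype features present.
        matched = len(prototype & pattern)
        return matched / len(prototype)
    return activation

# The symbol never contains an explicit definition of "cat"; what it
# represents is defined by what activates it, and how strongly.
cat = make_symbol({"fur", "whiskers", "meows", "four-legs"})

print(cat({"fur", "whiskers", "meows", "four-legs"}))  # 1.0 full match
print(cat({"fur", "four-legs"}))                       # 0.5 partial match
print(cat({"scales", "fins"}))                         # 0.0 no match
```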

Ed Porter

-Original Message-
From: William Pearson [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 15, 2008 7:10 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A 1st Step To Using Your Image-ination

On 15/02/2008, Ed Porter <[EMAIL PROTECTED]> wrote:
> Mike,
>
>  You have been pushing this anti-symbol/pro-image dichotomy for a long
time.
>  I don't understand it.
>
>  Images are set, or nets, of symbols.  So, if, as you say
>
>
> " all symbols provide an extremely limited *inventory of
the
>
> world* and all its infinite parts and behaviors "
>
>  then images are equally limited, since they are nothing but set or nets
of
>  symbols.  Your position either doesn't make sense or is poorly stated.

I think the definition of symbols, is what is the problem is here. I
tend to think of symbol (in an AI sense at least) to be about or
related to something in the world. The classic idea of having symbols
for cat or dog, and deducing facts from them.

An image is not intrinsically about anything in the world, the optical
illusions (dalmatian in spots, two faces or vase or the necker cube)
show we can view an image in different ways. Mental Images aren't even
necessarily made up of data "about" photon activity, they can be
entirely concocted.

Mike needs to clarify what he means by symbol before we start, or
perhaps find or invent a less confusing word.

  Will Pearson


Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Mike Tintner
Ed: I agree a definition of symbol might be helpful.

In general there is agreement here, but some people do use the term confusingly 
in an all-purpose way.

1. Symbols: abstract signs that have NO RESEMBLANCE to the signified. The word 
"tree" has no resemblance to the object. The numbers "1" and "2" have no 
resemblance to the objects or number of objects described. Logic ("if p then q") 
and algebra ("x + y = z") have no resemblance to anything being designated.

Here's where it gets a little complex - there is no agreement about this next 
category, but it's v. important.

2. Graphics / "Icons" [like computer icons rather than Peirce's] / Image 
Schemas: signs that have a real, if simple and sometimes distorted, 
OUTLINE/PATTERN RESEMBLANCE to their signified objects - computer icons, stick 
drawings, geometrical graphs, geometrical figures, cartoons, maps, traffic signs.

3. Images: much more DETAILED RESEMBLANCE - pseudo- or even actual recordings - 
photographs, movies, realistic drawings, paintings, sculptures.

Computers are basically symbol processors - they can only deal in 
Graphics/Icons-as-Symbolic-Formulae  and Images-as-Symbolic-Formulae. They 
cannot deal in whole forms directly as humans do - cannot literally handle them 
and reshape them and put one on top of another to see if they fit. 

Hence the celebrated imagery debate, with Pylyshyn - in order to safeguard 
current AI - trying to maintain that the human brain, like current computers, 
reduces images to symbolic formulae in order to process them, whereas others 
like Kosslyn deny this. 
