agi@v2.listbox.com

2007-10-15 Thread Edward W. Porter
This is in response to Josh Storrs Hall's Monday, October 15, 2007 3:02 PM post
and Richard Loosemore’s Mon 10/15/2007 1:57 PM post.

I misunderstood you, Josh.  I thought you were saying semantics could be
a type of grounding.  It appears you were saying that grounding requires
direct experience, but that grounding is only one (although perhaps the
best) possible way of providing semantic meaning.  Am I correct?

I would tend to differ with the concept that grounding only relates to
what you directly experience.  (Of course it appears to be a definitional
issue, so there is probably no theoretical right or wrong.)  I consider
what I read, hear in lectures, and see in videos about science or other
abstract fields such as patent law to be experience, even though the
operative content in such experiences is derived second-, third-, fourth-,
or more-hand.

In Richard Loosemore’s above-mentioned informative post he implied that,
according to Harnad, a system that can interpret its own symbols is
grounded.  I think this is more important to my concept of grounding than
where the information that lets the system perform such important
interpretation comes from.  To me the important distinction is whether we
are just dealing with relatively naked symbols, or are dealing with symbols
that have a lot of relations with other symbols and patterns, something
like those Pei Wang was talking about, that let the system use the
symbols in an intelligent way.
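
As a toy illustration of that distinction, here is a minimal Python sketch
of my own (not a description of any system mentioned in this thread, and
the symbols and relations are made up): the same token is useless standing
alone, but becomes usable once it sits in a web of relations the system can
traverse.

from collections import defaultdict

class SymbolNet:
    # A tiny store of (relation, object) links for each symbol.
    def __init__(self):
        self.links = defaultdict(set)

    def add(self, subj, rel, obj):
        self.links[subj].add((rel, obj))

    def ask(self, subj, rel):
        # Return every symbol reachable from `subj` via `rel`.
        return {obj for r, obj in self.links[subj] if r == rel}

net = SymbolNet()

# A relatively "naked" symbol: the system has the token but no relations for it.
naked = "glorp"

# A symbol embedded in relations to other symbols and patterns.
net.add("dog", "is_a", "animal")
net.add("dog", "can", "bark")
net.add("animal", "can", "move")

print(net.ask("dog", "can"))    # {'bark'} -- something the system can use
print(net.ask(naked, "can"))    # set()    -- nothing it can do with the token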

Usually for such relations and patterns to be useful in a world, they have
to have come directly or indirectly from experience of that world.  But
again, it is not clear to me that they have to come first-hand.

Presumably, AGI equivalents of personal computers will be mass produced
by the millions 10 to 20 years from now, and they may come out of the box
with significant world knowledge that has been copied into their
non-volatile memory bit-for-bit from world knowledge that came from the
direct experience of many learning machines and, indirectly, from massive
sophisticated NL reading of large bodies of text and visual recognition of
large image and video databases.  I would consider most of the symbols in
such a brand-new personal AGI to be grounded -- even though they have not
been derived from any experience of that particular personal AGI itself --
provided they had meaning to the personal AGI itself.

It seems ridiculous to say that one could have two identical large
knowledge bases of experiential knowledge each containing millions of
identically interconnected symbols and patterns in two AGIs having
identical hardware, and claim that the symbols in one were grounded but
those in the other were not because of the purely historical distinction
that the sensing to learn such knowledge was performed on only one of
the two identical systems.

Of course, going forward each system would have to be able to do its own
learning from its own experience if it were to be able to respond to the
unique aspects and events in its own environment.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]


agi@v2.listbox.com

2007-10-15 Thread Edward W. Porter
In response to your post below, I have responded in all caps to certain
quoted portions of it.

“I'm arguing that its meaning makes an assumption about the nature of
semantics that obscures rather than informs some important questions”

WHAT EXACTLY DO YOU MEAN?

“I'd just say that for the 2 in my calculator, the answer is
no, in Harnad's fairly precise sense of grounding. Whereas the calculator
clearly does have the appropriate semantics for arithmetic.”

I JUST READ THE ABSTRACT OF Harnad, S. (1990) The Symbol Grounding
Problem. Physica D 42: 335-346. ON THE WEB, AND IT SEEMS HE IS TALKING
ABOUT USING SOMETHING LIKE A GEN/COMP HIERARCHY OF REPRESENTATION HAVING
AS A BOTTOM LAYER SIMPLE SENSORY PATTERNS, AS A BASIS OF GROUNDING.

SO HOW DOES THE CALCULATOR HAVE SIGNIFICANTLY MORE OF THIS TYPE OF
GROUNDING THAN “10” IN BINARY?

ALTHOUGH THE HARNAD TYPE OF GROUNDING IS THE GENERAL TYPE I SPEND MOST OF
MY TIME THINKING ABOUT, I THINK IT IS POSSIBLE FOR A SYSTEM TO BE CREATED,
SUCH AS CYC, THAT WOULD HAVE SOME LEVEL (ALTHOUGH A RELATIVELY LOW ONE) OF
GROUNDING IN THE SENSE OF SEMANTICS, YET NOT HAVE HARNAD GROUNDING (AS I
UNDERSTOOD IT FROM HIS ABSTRACT).

“Typically one assumes that experience means the experience of the person, AI,
or whatever that we're talking about...”

IF THAT IS TRUE, MUCH OF MY UNDERSTANDING OF SCIENCE AND AI IS NOT
GROUNDED, SINCE IT HAS BEEN LEARNED LARGELY BY READING, HEARING LECTURES,
AND WATCHING DOCUMENTARIES.  THESE ARE ALL FORMS OF LEARNING WHERE THE
IMPORTANT CONTENT OF THE INFORMATION HAS NOT BEEN SENSED BY ME DIRECTLY.

“I claim that we can talk about a more proximate
criterion for semantics, which is that the system forms a model of some
phenomenon of interest. It may well be that experience, narrowly or broadly
construed, is often the best way of producing such a system (and in fact I
believe that it is), but the questions are logically separable.”

THIS MAKES SENSE, BUT IT WOULD COVER A LOT OF SYSTEMS THAT ARE NOT
“GROUNDED” IN THE WAY MOST OF US USE THAT WORD.

“It's conceivable to have a system that has the appropriate semantics that
was just
randomly produced...”

I ASSUME THAT BY RANDOMLY PRODUCED, YOU DON’T MEAN THAT THE SYSTEM WOULD
BE TOTALLY RANDOM, IN WHICH CASE IT WOULD SEEM THE CONCEPT OF A MODEL
WOULD BE MEANINGLESS.

AS A GOOD EXAMPLE OF A SEMANTIC SYSTEM THAT IS SOMEWHAT INDEPENDENT OF
PHYSICAL REALITY, YET HAS PROVED USEFUL, AT LEAST FOR ENTERTAINMENT, I
WOULD PICK THE HARRY POTTER SERIES, OR SOME OTHER FICTIONAL WORLD THAT
CREATES A FICTIONAL REALITY IN WHICH THERE IS A CERTAIN REGULARITY TO THE
BEHAVIOR AND CHARACTERISTICS OF THE FICTITIOUS PEOPLE AND PLACES IT
DESCRIBES.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Monday, October 15, 2007 11:29 AM
To: agi@v2.listbox.com
Subject: Re: [agi] "symbol grounding" Q&A


On Monday 15 October 2007 10:21:48 am, Edward W. Porter wrote:
> Josh,
>
> Also a good post.

Thank you!

> You seem to be defining "grounding" as having meaning, in a semantic
> sense.

Certainly it has meaning, as generally used in the philosophical
literature.
I'm arguing that its meaning makes an assumption about the nature of
semantics that obscures rather than informs some important questions.

> If so, why is it a meaningless question to ask if "2" in your
> calculator has grounding, since you say the calculator has limited but
> real semantics?  Would not the relationships "2" has to other numbers in
> the semantics of that system be a limited form of semantics?

Not meaningless -- I'd just say that for the 2 in my calculator, the answer
is no, in Harnad's fairly precise sense of grounding. Whereas the calculator
clearly does have the appropriate semantics for arithmetic.

> And what other source besides experience can grounding come from,
> either directly or indirectly?  The semantic model of arithmetic in
> your calculator was presumably derived from years of human experience
> that found the generalities of arithmetic to be valid and useful in
> the real world of things like sheep, cows, and money.

I'd claim that this is a fairly elastic use of the term "experience".
Typically one assumes that experience means the experience of the person, AI,
or whatever that we're talking about, in this case the calculator. The 2 in
the calculator clearly does not get its semantics from the calculator's
experience.

If we allow an expanded meaning of experience as including the experience of
the designer of the system, we more or less have to allow it to mean any
feedback in the evolutionary process that produced the low-level semantic
mechanisms in our own brains. This strains my concept of the word a bit.

Whether 

agi@v2.listbox.com

2007-10-15 Thread Edward W. Porter
Josh,

Also a good post.

You seem to be defining "grounding" as having meaning, in a semantic
sense.  If so, why is it a meaningless question to ask if "2" in your
calculator has grounding, since you say the calculator has limited but
real semantics?  Would not the relationships "2" has to other numbers in
the semantics of that system be a limited form of semantics?
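
To make concrete what I mean by such limited semantics, here is a toy
sketch of my own in Python (not Josh's calculator or Harnad's formalism;
the details are made up for illustration): the token "2" points to no pair
of anything, yet the system as a whole behaves like a small model of
arithmetic, so the relationships "2" has to the other numerals do real work.

# A toy "calculator": no numeral refers to anything outside the system,
# but together the numerals implement a correct little model of arithmetic.
VALUE = {str(n): n for n in range(10)}          # numeral -> quantity

def calc(expression: str) -> str:
    # Evaluate a single 'a+b' expression over one-digit operands.
    a, b = expression.split("+")
    return str(VALUE[a] + VALUE[b])

print(calc("2+2"))   # '4' -- limited, but genuine, semantics of arithmetic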

And what other source besides experience can grounding come from, either
directly or indirectly?  The semantic model of arithmetic in your
calculator was presumably derived from years of human experience that
found the generalities of arithmetic to be valid and useful in the real
world of things like sheep, cows, and money.  Of course there could be
semantics in an imaginary world, but they would come from experiences of
imagination.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 13, 2007 12:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] "symbol grounding" Q&A


This is a very nice list of questions and makes a good framework for
talking
about the issues. Here are my opinions...

On Saturday 13 October 2007 11:29:16 am, Pei Wang wrote:

> *. When is a symbol "grounded"?

"Grounded" is not a good way of approaching what we're trying to get at,
which
is semantics. The term implies that meanings are inherent in words, and
this
obscures the fact that semantics are a property of systems of which words
are
only a part.
Example: is the symbol 2 grounded in my calculator? There's no pointer
from the bit pattern to an actual pair of anything. However, when I type
in 2+2 it tells me 4. There is a system implemented that is a semantic
model of arithmetic, and 2 is connected into the system in such a way that
I get the right answer when I use it. Is 2 grounded? Meaningless question.
Does the calculator have a limited but real semantics of arithmetic?
Definitely.

> *. What is wrong in traditional "symbolic AI" on this topic?

These systems didn't come close to implementing a competent semantics of
the
parts of the world they were claimed to "understand".

> *. What is the "experience" needed for symbol grounding?

Experience per se isn't strictly necessary, but you have to get the
semantics from somewhere, and experience is a good source. The scientific
method relies heavily on experience in the form of experiment to validate
theories, for example.

> *. For the symbols in an AGI to be grounded, should the experience of
> the system be the same, or very similar, to human sensory experience?

No, as long as it can form coherent predictive models. On the other hand,
some
overlap may be necessary to use human language with much proficiency.

> *. Is vision necessary for symbol grounding in AGI?

No, but much of human modelling is based on spatial metaphors, and thus
the
communication issue is particularly salient.

> *. Is vision important in deciding the meaning of human concepts?

Many human concepts are colored with visual connotations, pun intended.
You're
clearly missing something if you don't have it; but I would guess that
with
only moderate exceptions, you could capture the essence without it.

> *. In that case, if an AGI has no vision, how can it still understand
> a human concept?

The same way it can understand anything: it has a model whose semantics
match
the semantics of the real domain.

> *. Can a blind person be intelligent?

Yes.

> *. How can a sensorless system like NARS have grounded symbol?

Forget "grounded". Can it *understand* things? Yes, if  it has a model
whose
semantics match the semantics of the real domain.

> *. If NARS always uses symbols differently from typical human usage,
> can we still consider it intelligent?

Certainly, if the symbols it uses for communication are close enough to
the
usages of whoever it's communicating with to be comprehensible. Internally
it
can use whatever symbols it wants any way it wants.

> *. Are you saying that vision has nothing to do with AGI?

Personally I think that vision is fairly important in a practical sense,
because I think we'll get a lot of insights into what's going on in there
when we try to unify the higher levels of the visual and natural language
interpretive structures. And of course, vision will be of immense
practical
use in a robot.

But I think that once we do know what's going on, it will be possible to
build
a Turing-test-passing AI without vision.

Josh




RE: [agi] The Grounding of Maths

2007-10-15 Thread Edward W. Porter
Mike,

I think there is a miscommunication, either at my end or yours.

I was arguing that grounding would use senses besides vision.

My posts have indicated that I believe higher level concepts are derived
from lower level concepts (the gen/comp hierarchy of patterns I have
referred to, as reflected, in the case of vision, by the paper by Serre I
have cited so many times).  This gen/comp hierarchy bottoms out with
simple patterns in sensory and emotional input space.
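
As a toy illustration of such a hierarchy, here is a minimal Python sketch
of my own (not Serre's actual model; the node names are made up): each
higher-level pattern node is defined in terms of lower-level ones, and
expanding a node grounds out in simple sensory primitives.

# Hypothetical composition hierarchy: node -> component patterns one level down.
HIERARCHY = {
    "face":  ["eye", "eye", "nose", "mouth"],
    "eye":   ["dark_blob", "edge_ring"],
    "nose":  ["vertical_edge", "shadow"],
    "mouth": ["horizontal_edge"],
}

def ground_out(node):
    # Expand a node into the sensory primitives it ultimately rests on.
    parts = HIERARCHY.get(node)
    if not parts:                    # a primitive: bottom of the hierarchy
        return [node]
    prims = []
    for part in parts:
        prims.extend(ground_out(part))
    return prims

print(ground_out("face"))
# ['dark_blob', 'edge_ring', 'dark_blob', 'edge_ring',
#  'vertical_edge', 'shadow', 'horizontal_edge']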

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 13, 2007 10:29 AM
To: agi@v2.listbox.com
Subject: Re: [agi] The Grounding of Maths


Edward,

As a v. quick reply to start with, grounding means "making sense of" -
using ALL senses not just visual.

"Did you think about it?"
"Yes I did. A lot"

Your ability to reply here is based on your sensory experience of having
thought - that is not a visual sensory experience.  "I felt sad" is a
grounded statement - grounded in your internal kinaesthetic experience of
your emotions.

Would you like to rephrase your question in the light of this - the common
sense nature of grounding, which I think obvious and beyond dispute?

One further and huge point. My impression is that you may be like most AI
guys - somewhat lacking in both an evolutionary and developmental
perspective (perhaps it's to do with being so future-oriented).

Consequently, you are making another mistake as drastic as thinking that
grounding is just one sense. You are leaving out the developmental history
of human understanding altogether.

No child will be able to understand this legal document. Many adults won't
be able to understand it either. Why not?

Because human understanding and the human model of the world have to go
through a great number of stages.  It takes many stages of intellectual
development to go from something like:

"Cat bite Lucy"

to

"Animals eat people"

to

"Human-animal relationships are fraught with conflict"

to

"This Darwinian picture of evolution presupposes arms races as an
important factor."

Ben, a mathematician, looked at the immense complexity of the numbers and
maths he deals with and failed to appreciate that they are composite
affairs - which can only be mastered, psychologically, stage by stage,
building from very directly grounded numbers like tens, to very complexly
and indirectly grounded numbers like trillions, "very large numbers",
irrational numbers etc. For maths this is actually rather obvious WHEN you
look at things developmentally.

You are making the same mistake in jumping to the most complex forms of
language and concepts and asking: how can these immensely complex and
abstract concepts possibly be grounded?

It's a good question. But, as with maths, the broad answer is: only
developmentally, grounded stage by grounded stage. If human reasoning
were as you think it is, based only (or only in certain areas) on
manipulation of symbolic networks, you and other humans would have no
problem jumping to an understanding of that legal document at the age of
5.

In fact to understand it, you have had to build up, stage by stage,  a
vast GROUNDED model of the world - you have had to learn what "courts"
are, what a "justice" is (and you had to SEE courts and watch movies to do
that), you had to look at several machines before you could understand
what a general concept like  "mechanism" meant, you have had to LOOK at
patents and then physically compare them with actual machines, and then
physically compare those machines with other machines to see whether their
parts are indeed new or essentially copies. You had to SEE books of logic
etc etc

And great sections of that immensely complex grounded model will be
invoked - UNCONSCIOUSLY - as you read the document.

And even so as you read sections like:


Claim 4 of the Engelgau patent describes a mechanism for combining an
electronic sensor with an adjustable automobile pedal so the pedal’s
position can be transmitted to a computer that controls the throttle in
the vehicle’s engine. When Teleflex accused KSR of infringing the Engelgau
patent by adding an electronic sensor to one of KSR’s previously designed
pedals, KSR countered that claim 4 was invalid under the Patent Act,
<http://www.law.cornell.edu/supct-cgi/get-usc-cite/35/103> 35 U. S. C.
§103, because its subject matter was obvious.



you will repeatedly  have momentary if not extensive difficulties
understanding which parts and which machine is being referred to at which
point. Why? Because your brain is continually trying to MAKE SENSE of
those damn words and SEE where the pedal is in relation to the throttle,
and which pedal is which etc.



As with all legal docu

agi@v2.listbox.com

2007-10-15 Thread Edward W. Porter
Pei,  Good post.  Ed Porter

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 13, 2007 11:29 AM
To: agi@v2.listbox.com
Subject: [agi] "symbol grounding" Q&A


Hi,

The current discussion on symbol grounding, to me, includes several
different (though related) questions. In the following, I'll try to
separate them, and give my opinion on each of them.

*. When is a symbol "grounded"?

A symbol is grounded when its meaning to the system is determined
according to the system's experience with the symbol.

*. What is wrong in traditional "symbolic AI" on this topic?

In those systems, the meaning of a symbol is determined by an
"interpretation", which takes the meaning of the symbol to be an
object/entity in the world that is "referred to" by the symbol. In this
way, the system's experience plays no role, and a symbol can be
interpreted in many different ways.
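
A toy contrast of the two approaches (a minimal Python sketch of my own,
not NARS or any other system discussed here; the terms and episodes are
made up): in the first, meaning is a fixed mapping to external referents
chosen by the designer and never used by the system; in the second, a
term's meaning is nothing but the record of the system's experience with
it, so it changes as experience grows.

# 1. "Interpretation" style: a fixed external mapping the system never uses.
INTERPRETATION = {"water": "H2O out in the world"}

# 2. Experience-grounded style: meaning = accumulated experience with the term.
class Term:
    def __init__(self, name):
        self.name = name
        self.experience = []          # every episode involving this term

    def observe(self, episode):
        self.experience.append(episode)

    def meaning(self):
        return list(self.experience)  # no fixed referent is needed

water = Term("water")
water.observe("water quenched thirst")
water.observe("water froze when it got cold")
print(water.meaning())   # grows and shifts as the system's experience does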

*. What is the "experience" needed for symbol grounding?

Any input-output activity that happens between a system and its
environment.

*. For the symbols in an AGI to be grounded, should the experience of the
system be the same, or very similar, to human sensory experience?

No. The system can ground its symbols in experience that is very different
from human experience, as far as "intelligence" is concerned.

*. Is vision necessary for symbol grounding in AGI?

No, for the above reason.

*. Is vision important in deciding the meaning of human concepts?

Yes, since vision is a major channel of human experience, the meanings of
many human concepts include visual components.

*. In that case, if an AGI has no vision, how can it still understand a
human concept?

"Understanding" is a matter of degree. Since the meaning of a symbol is
determined by the system's experience about it, it will have different
meanings in different systems, though as far as the systems' experience
have overlap, the symbol will have common meaning in these systems. If an
AGI's does not have visual experience, it won't understand a concept
exactly as a human, though its other experience channels may allow the
understanding to be close to a human understanding.

*. Can a blind person be intelligent?

According to the above opinion, a blind person can be perfectly
intelligent, with symbols grounded in (non-visual) experience. However,
there will always be some difference in what certain concepts mean to such
a person, compared to the "normal" people.

*. How can a sensorless system like NARS have grounded symbol?

In principle, as far as a system has input, it has sensors, though its
sensors can be very different from human sensors. The mistake of
traditional symbolic AI is not that the systems have no sensors (or have no
body), but that their experience plays no role in determining the meaning
of the symbols used in the system. Since in NARS the meaning of symbols
(i.e., how they are treated by the system) is determined by the system's
experience, they are grounded. Of course, since NARS' experience is not
human experience, the same symbol usually has a different meaning to it,
compared to its meaning to a human being.

*. If NARS always uses symbols differently from typical human usage, can
we still consider it intelligent?

Yes we can. Even among human beings, the same word often means different
things --- just see what happens in this mailing list! We should not treat
"different understanding" as "no understanding". Very often, my
understanding of English is still different from a native English speaker,
but I guess I can say that I understand English, in my way. For this
reason, when I meet someone who has a different understanding of a
concept, I usually don't conclude that he/she has no intelligence. ;-)

*. Are you saying that vision has nothing to do with AGI?

Of course not! I'm saying that vision is not a necessary component of an
AGI. Since vision plays an important role in human cognition, there are
practical reasons for certain AGI projects to include it to ground
concepts in a more "human-like" manner, though some other AGI projects may
exclude it, at least at early stage. Again, intelligence can be achieved
without vision, or any other human sensory channel, though it will have an
impact on the meaning of the symbols in the system.

More "academic" treatments of this topic:
http://nars.wang.googlepages.com/wang.semantics.pdf
http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf

Pei



RE: [agi] The Grounding of Maths

2007-10-13 Thread Edward W. Porter
I am trying to send this message by just typing my comments into your post
referred to below.  I have been told that will end up showing your text
with a ">" in front of each line.  Just in case it doesn't, if you view
this in rich text you will see my comments underlined.

-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 13, 2007 9:36 AM
To: agi@v2.listbox.com
Subject: Re: [agi] The Grounding of Maths


Bayesian nets, Copycat, Shruti, Fair Isaac, and CYC are failures,

Bayesian nets and Fair Isaac are not failures; they generate tens or
hundreds of millions of dollars a year in economic value.  But they are
limited.  Approaches to reducing such limitations are beginning to arrive.

probably because of their lack of grounding.

Yes


According to Occam's Razor the simplest method of grounding visual images
is not words, but vision.

Actually evidence indicates the brain uses a gen/comp hierarchical
representation built on visual primitives.  But yes, it makes sense to
ground visual things with visual things.

I think that the reason people do not notice visual pictures, visual
motion and visual text when they read is that they are mostly subconscious.
Mathematicians do not realize they are doing visual calculations because
they do them in their subconscious.

My prior post acknowledged as much, but the difference is that I believe
for certain types of reasoning non-visual memories, generalizations, and
inferences may be the dominant force.

There is also auditory memory. You memorize the words purely as sounds
by subvocalization and then visualize them on the fly. I don't think there
is "auditory grounding". Auditory memory is simply a method of efficient
storage, without translating it into the visual.

What evidence do you have that auditory memories can not be used for
auditory grounding?  In fact, without auditory grounding how do you think
audio perception would work?

You can also memorize the image of text. Then as you "understand" it,
you perform OCR.

Why can't you just understand it at the level of the words that have been
derived from the original reading (or OCRing) of the text?  If it is best
to store visual information with visual representations, why wouldn't it
be best to store verbal information with verbal representations?

I am beginning to wonder if you are serious, or just playing with me.


RE: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Edward W. Porter
Mike,

CopyCat and Shruti are small systems, and thus limited.  (Although as of
several years ago there has been an implementation of Shruti on a
connection machine with, I think, over 100K relation nodes).

But when I read papers I try to focus on their aspects that teach me
something valuable, rather than on their limitations.  CopyCat helped
clarify my thinking on how AI can best find analogies, particularly with
its notion of coordinated context-specific slippage, and I found its
codelet-based control scheme very interesting and very parallelizable.
Shruti helped clarify the concept of passing bindings through implications
for me.  I also liked the way it indicated how binding might operate
through synchrony in the human mind, and I found its concept of reflexive
thinking interesting.  Both have been valuable in showing me the path
forward.

From what I know of Goertzel's work I am very impressed.  I think drawing
analogies should be child's play for Novamente.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 8:32 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


Edward,

Thanks again for a detailed response (I really do appreciate it).

Your interesting examples of systems confirm my casual impressions of what
can actually be done -  and my reluctance to shell out money on "Fluid
Concepts." Inferences like MARY DOES OWN A BOOK and those from "IF I
CHANGED “ABC TO ABD”, HOW WOULD YOU (THE COPYCAT) MAKE AN ANALOGOUS CHANGE
TO “MRRJJJ”?" strike me as fairly trivial, though by no means useless, and
not really AGI.  (Yes those are what I mean by "purely symbolic" systems,
although I take your point that there are no absolute boundaries between
different kinds of signs and particularly sign systems - even networks of
symbols are used in complex ways that are not just symbolic).

Inferences like those you mention in:

IF YOU ASKED SUCH A SYSTEM WHAT LOVE BETWEEN A MAN AND A WOMAN WAS, IT
WOULD BE ABLE TO GIVE YOU ALL SORTS OF MEANINGFUL GENERALIZATIONS ABOUT
WHAT LOVE WAS, BASED ON ALL THE DESCRIPTIONS OF LOVE AND HOW IT MAKES
CHARACTERS ACT IN THE BOOKS IT HAS READ.

might be v. productive and into AGI territory, but I note that you are
talking hypothetically, not about real systems.

Ben, if you followed our exchange, has claimed a v. definite form of true
AGI analogy - his system inferring from being able to "fetch", how to play
hide-and-seek. I would like more explanation and evidence, though. But
that's the sort of inference/analogy I think we should all be talking
about.

Vis-a-vis neuroscience & what it tells us about what information is laid
down in the brain, & in what form, I would be vastly more cautious than
you. For instance, we see images as properly shaped, right? But the images
on the retina have a severely distorted form - see Hawkins's photo in On
Intelligence. So where in the brain or in the world is the properly shaped
image? (The main point of that question is simply: hey, there's still
masses we don't know - although if you have an answer, I'd be v.
interested).

P.S. I hope you receive my privately emailed post with the definitions you
requested.

- Original Message -
From: Edward W. Porter <mailto:[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 5:49 PM
Subject: Re: [agi] Do the inference rules.. P.S.


IN RESPONSE TO MIKE TINTNER’S Thu 10/11/2007 11:47 PM POST.  AGAIN MY
RESPONSE IS IN BLUE ALL CAPS.
=

Edward,

Thanks for interesting info - but if I may press you once more. You talk
of different systems, but you don't give one specific example of the kind
of useful (& significant for AGI) inferences any of them can produce -as I
do with my cat example. I'd especially like to hear of one or more from
Novamente, or Copycat.

TO THE BEST OF MY KNOWLEDGE THERE IS NO AGI THAT CURRENTLY DOES ANYTHING
CLOSE TO HUMAN LEVEL INFERENCING, IF THAT IS WHAT YOU MEAN.  NOVAMENTE IS
THE CLOSEST THING I KNOW OF.  BUT, UNFORTUNATELY, AS OF THIS WRITING, I
DON’T KNOW ENOUGH ABOUT IT TO KNOW EXACTLY HOW POWERFUL ITS CURRENT
CAPABILITIES ARE.

BAYESIAN NETS ARE CURRENTLY USED TO DO A TON OF USEFUL INFERENCING OF A
GENERAL TYPE THAT COULD BE VALUABLE TO AGI.  FOR EXAMPLE, A LOT OF
COMPUTER-BASED DIAGNOSTIC SYSTEMS USE THEM.  BAYESIAN NETS HAVE SOME LIMITS,
BUT THE LIMITS ARE BEING LOOSENED BY BRIGHT PEOPLE LIKE DAPHNE KOLLER
(REALLY BRIGHT!).  I ATTENDED A LECTURE SHE GAVE AT MIT ABOUT A YEAR AND A
HALF AGO IN WHICH SHE TALKED ABOUT HER GROUP'S WORK ON GETTING BAYESIAN
NETS TO HANDLE RELATIONAL REASONING, SOMETHING THAT WOULD SUBSTANTIALLY
INCREASE THEIR POWER.  SHE HAS ALSO DONE WORK ON INTRODUCING BAYESIAN
INFEREN

RE: [agi] The Grounding of Maths

2007-10-12 Thread Edward W. Porter
In response to Charles Hixson’s 10/12/2007 7:56 PM post:

Different people’s minds probably work differently.  For me, the dredging
up of memories, including verbal memories, is an important part of my
mental processes.  Maybe that is because I have been trained as a lawyer.

I am not arguing against the fact that visual memories play an important
role in human thinking.  They do.  I often do a lot of my best thinking in
terms of images.

What I am arguing is that other types of grounding play an important part
as well.  I am arguing that visual grounding is not necessarily the
largest force in each and every mathematical thought.  Yes, the human
brain dedicates a lot of real estate to visual processing, but if you take
all of the language, behavioral, emotional and higher level association
areas, you have a lot of brain real estate dedicated to concepts that are
either non-visual or only partially visual.  We should not assume that all
that brain real estate plays little or no role in most thinking.

Of course, I wouldn’t be surprised if visual memories and patterns are
taking at least some part in the massively parallel spreading activation
and inferencing in the sub-conscious that helps pop most thoughts up to
consciousness -- without me even knowing it.  But by similar reasoning I
would also assume a lot of non-visual memories and patterns would also be
taking part in such massive parallel inferencing.

In many types of thinking I am consciously aware of words in my head much
more than I am of images.  Perhaps this is because I am a patent lawyer,
and I have spent thousands of hours reading text in which many of the
words have only loose association to concrete visual memories.  And as a
lawyer when I read such abstract texts, to the extent that I can sense
what is in my consciousness and near consciousness, many of the words I
read seem to derive their meaning largely from other concepts and memories
that also seem to be largely defined in terms of words, although
occasionally visual memories pop out.

When I read “The plaintiff is an Illinois corporation selling services for
the maintenance of photocopiers” it is probably not until I get to
“photocopiers” that anything approaching a concrete image pops into my
mind.

Thus, at least from my personal experience, it seems that many concepts
learned largely through words can be grounded to a significant degree in
other concepts defined largely through words.  Yes, at some level in the
gen/comp pattern hierarchy and in episodic memory all of these concepts
derive at least some of their meaning from visual memories.  But for
seconds at a time that does not seem to be the level of representation my
consciousness is aware of.

Does anybody else on this list have similar episodes of what appears to
be largely verbal conscious thought, or am I (a) out of touch with my own
conscious processes, and/or (b) weird?




Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Charles D Hixson [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 7:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] The Grounding of Maths


But what you're reporting is the dredging up of a memory. What would be
the symbolism if in response to "4" came the question "How do you know
that?" For me it's visual (and leads directly into the definition of "+"
as an amalgamation of two disjunct groupings).


Edward W. Porter wrote:
>
> (second sending--roughly 45 minutes after first sending with no
> appearance on list)
>
>
> Why can't grounding from language, syntax, musical patterns, and other
> non-visual forms of grounding play a role in mathematical thinking?
>
> Why can't grounding in the form of abstract concepts learned from
> hours of thinking about math and its transformations play an important
> role.
>
> Because we humans are such multimedia machines, probably most of us
> who are sighted have at least some visual associations tainting most
> of our concepts -- including most of our mathematical concepts -- at
> least somewhere in the gen/comp hierarchies representing them and the
> memories and patterns that include them.
>
> I have always considered myself a visual thinker, and much of my AGI
> thinking is visual, but if you ask me what is “2 + 2”, it is a voice I
> hear in my head that says “4”, not a picture. It is not necessary that
> visual reasoning be the main driving force in reasoning involving a
> particular mathematical thought. To a certain extent math is a
> language, and it would be surprising if linguistic patterns and
> behaviors -- or at least patterns and behaviors partially derived from
> them -- didn’t play a large role in mathematical thinking.
>
>
&

RE: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Edward W. Porter
Jean-Paul,

Thank you for your kind comments.

I look forward to hearing about your system.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Jean-paul Van Belle [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 8:22 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


When commenting on a lot of different items in a posting, in-line
responses make more sense, and using ALL-CAPS is one accepted way of doing
it in an email client/platform neutral manner. I for one do it often when
responding to individual emails so I don't mind at all. I do *not*
associate it with shouting in such a context - especially not in the light
of the extremely high-quality contributions made by Edward on this list
(I, for one, think that he has elevated the level of discussion here
greatly and I have archived more of his postings than anyone else's). I do
agree that small-caps is easier on the eye. However, Durk, if one wishes
to comment on posting etiquette, I thought one other rule was to quote as
little of the previous post as necessary to make one's point ... some
members may still have bandwidth issues ;-) (just kidding!)

And, for the record, after reading AI literature for well over 20 years
and having done a lot of thinking, the AGI architecture I'm busy working
on is strongly founded on insights (principles, axioms, hypotheses and
assumptions:) many of which are remarkably similar to Edward's views
(including those of the value of past AI research projects, the role of
semantic networks and the possibility of symbolic grounding) though I
(obviously) differ on some other aspects (e.g. complexity :). I hope to
invite him and some others to comment on my prototype end-2008 (and
possibly contribute thereafter :)

^.^
Jean-Paul


Research Associate: CITANDA
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


>>> "Kingma, D.P." <[EMAIL PROTECTED]> 2007/10/12 10:57 >>>
Dear Edward, may I ask why you regularly choose to type in all-caps? Do
you have a broken keyboard? Otherwise, please refrain from doing so since
(1) many people associate it with shouting and (2) small-caps is easier to
read...

Kind regards,
Durk Kingma


On 10/12/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:

This is in response to Mike Tintner's 10/11/2007 7:53 PM post.  My
response is in all-caps.


RE: [agi] The Grounding of Maths

2007-10-12 Thread Edward W. Porter
Why can't grounding from language, syntax, musical patterns, and other
non-visual forms of grounding play a role in mathematical thinking?

Why can't grounding in the form of abstract concepts learned from hours of
thinking about math and its transformations play an important role.

Because we humans are such multimedia machines, probably most of us who
are sighted have at least some visual associations tainting most of our
concepts -- including most of our mathematical concepts -- at least
somewhere in the gen/comp hierarchies representing them and the memories
and patterns that include them.

I have always considered myself a visual thinker, and much of my AGI
thinking is visual, but if you ask me what is “2 + 2”, it is a voice I
hear in my head that says “4”, not a picture.   It is not necessary that
visual reasoning be the main driving force in reasoning involving a
particular mathematical thought.  To a certain extent math is a language,
and it would be surprising if linguistic patterns and behaviors -- or at
least patterns and behaviors partially derived from them -- didn’t play a
large role in mathematical thinking.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 3:40 PM
To: agi@v2.listbox.com
Subject: Re: [agi] The Grounding of Maths


Mathematician-level mathematics must be visually grounded. Without
groundedness, simplified and expanded forms of expressions are the same,
so there is no motive to simplify. If it is not visually grounded, then
it will only reach the level of the top tier computer algebra systems
(full of bugs, unsimplified expressions, etc.).



Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Edward W. Porter
 A TABLE IS
CONTINUOUS AND SOLID, YET FROM CHEMISTRY AND PHYSICS WE KNOW IT IS NOT.

IN FACT, CURRENT BRAIN SCIENCE INDICATES WE DON’T STORE PICTURES IN
ANYTHING LIKE THE FORM OF A PHOTOGRAPH OR A LINE DRAWING.  INSTEAD WE
NORMALLY STORE A NETWORK OF ONE OR MORE NODES FROM A GEN/COMP HIERARCHY,
EACH OF WHICH MAPS TO MULTIPLE POSSIBLE LOWER LEVEL REPRESENTATIONS UNTIL
YOU GET DOWN TO THE EQUIVALENT OF THE PIXEL LEVEL.  IT IS GENERALLY
BELIEVED THERE IS NO ONE NODE THAT STORES A PARTICULAR IMAGE.

SO EVEN OUR MEMORIES OF THE PICTURES YOU CONSIDER SO IMPORTANT ARE
SYMBOLIC, IN THAT THEY ARE MADE UP OF NODES THAT SYMBOLIZE PATTERNS OF
OTHER NODES.

SO GETTING BACK TO BOOKWORLD, WHAT I AM TRYING TO SAY IS THAT JUST AS OUR
MINDS FABRICATE CONCEPTS OF “PHYSICAL REALITY” BASED ON CORRELATIONS AND
RELATIONS WITHIN A HUGE AMOUNT OF DATA, AN EXTREMELY POWERFUL AGI THAT HAD
A REASONABLE DEEP STRUCTURE REPRESENTATION OF ALL CURRENTLY EXISTING BOOKS
WOULD SIMILARLY HAVE FABRICATED CONCEPTS OF A “BOOK-WORLD REALITY”, AND
THAT SUCH CONCEPTS WOULD BE WELL GROUNDED IN THE SENSE THAT THEY WOULD BE
CONNECTED BY MANY RELATIONSHIPS, ORDERINGS, GENERALIZATIONS, AND
BEHAVIORS.

I DON’T REALLY KNOW EXACTLY HOW MUCH KNOWLEDGE COULD BE EXTRACTED FROM
BOOKWORLD.  I KNOW THAT LINGUISTS PLAYING WITH ROUGHLY 1G WORD TEXT
CORPORA BITCH ABOUT HOW SPARSE THE DATA IS.  BUT MY HUNCH IS THAT IF YOU
READ SAY 30 MILLION BOOKS AND THE WEB WITH A GOOD AGI YOU WOULD BE ABLE TO
LEARN A LOT.

IF YOU ASKED THE BOOKWORLD AGI WHAT HAPPENS WHEN A PERSON DROPS SOMETHING,
IT WOULD PROBABLY BE ABLE TO GUESS IT OFTEN FALLS TO THE GROUND, AND THAT
IF IT IS MADE OF GLASS IT MIGHT BREAK.

IF YOU ASKED SUCH A SYSTEM WHAT LOVE BETWEEN A MAN AND A WOMAN WAS, IT
WOULD BE ABLE TO GIVE YOU ALL SORTS OF MEANINGFUL GENERALIZATIONS ABOUT
WHAT LOVE WAS, BASED ON ALL THE DESCRIPTIONS OF LOVE AND HOW IT MAKES
CHARACTERS ACT IN THE BOOKS IT HAS READ.  I WOULD NOT BE SURPRISED IF SUCH
A SYSTEM UPON READING A ROMANTIC NOVEL WOULD PROBABLY HAVE ABOUT AS GOOD A
CHANCE AS THE AVERAGE HUMAN READER OF PREDICTING WHETHER THE TWO LOVERS
WILL OR WILL NOT BE TOGETHER AT THE END OF THE NOVEL.

IF YOU ASKED IT ABOUT HOW PEOPLE MENTALLY ADJUST TO GROWING OLD, IT WOULD
PROBABLY BE ABLE TO GENERATE A MORE THOUGHTFUL ANSWER THAN MOST YOUNG
HUMAN BEINGS.

IN SHORT, IT IS MY HUNCH THAT A POWERFUL BOOKWORLD AGI COULD BE EXTREMELY
VALUABLE.  AND AS I SAID IN MY Thu 10/11/2007 7:33 PM POST, THERE IS NO
REASON WHY KNOWLEDGE LEARNED FROM BOOKWORLD COULD NOT BE COMBINED WITH
KNOWLEDGE LEARNED BY OTHER MEANS, INCLUDING THE IMAGE SEQUENCES YOU ARE SO
FOND OF.




Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]


RE: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Edward W. Porter
 (ALTHOUGH IT WOULD
ALMOST CERTAINLY DEVELOP OR START WITH A GENERAL MODEL OF N-DIMENSIONAL
SPACES.)

I BELIEVE THE CONCEPT OF TURING EQUIVALENCE SHOULD OPEN OUR MINDS TO THE
FACT THAT MOST THINGS IN COMPUTATION CAN BE DONE MANY DIFFERENT WAYS,
ALTHOUGH SOME WAYS ARE SO MUCH LESS EFFICIENT THAN OTHERS AS TO BE
PRACTICALLY USELESS, AND SOME WAYS MAY LACK ESSENTIAL
CHARACTERISTICS THAT LIMIT EVEN THEIR THEORETICAL CAPABILITIES.

AS MUCH AS YOU MAY KNOCK OLD FASHIONED AI SYSTEMS, THEY ACCOMPLISHED A
HELL OF A LOT WITH FLY-BRAIN LEVEL HARDWARE.  THUS, RATHER THAN DISMISS
THE TYPES OF REPRESENTATIONS AND REASONING THEY USED AS USELESS, I WOULD
SEEK TO UNDERSTAND BOTH THEIR STRENGTHS AND WEAKNESSES.  BEN GOERTZEL’S
NOVAMENTE EMBRACES USING THE EFFICIENCY OF SOME MORE NARROW FORMS OF AI IN
DOMAINS OR TASKS WHERE THEY ARE MORE EFFICIENT (SUCH AS LOW LEVEL VISION,
OR FOR DIFFERENT TYPES OF MENTAL FUNCTIONS), BUT HE SEEKS TO HAVE SUCH
DIFFERENT AI’S RELATIVELY TIGHTLY INTEGRATED, SUCH AS BY HAVING THE SYSTEM
HAVE SELF AWARENESS OF THEIR INDIVIDUAL CHARACTERISTICS.  WITH SUCH SELF
AWARENESS AN INTELLIGENT AGI MIGHT WELL OPTIMIZE REPRESENTATIONS FOR
DIFFERENT DOMAINS OR DIFFERENT LEVELS OF ACCESS.

LIKE NOVAMENTE, I HAVE FAVORED A FORM OF REPRESENTATION WHICH IS MORE LIKE
A SEMANTIC NET.  BUT ONE CAN REPRESENT A SET OF LOGICAL STATEMENTS IN
SEMANTIC NET FORM.  I THINK WITH ENOUGH LOGICAL STATEMENTS IN A GENERAL,
FLEXIBLE, PROBABILISTIC LOGIC ONE SHOULD BE ABLE TO THEORETICALLY
REPRESENT MOST FORMS OF EXPERIENCE THAT ARE RELEVANT TO AN AGI --
INCLUDING THE VERY TYPE OF VISUAL SENSORY MODELING YOU SEEM TO BE
ADVOCATING.
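
FOR EXAMPLE, HERE IS A TOY SKETCH OF MY OWN IN PYTHON (NOT NOVAMENTE'S
ACTUAL REPRESENTATION, AND THE STATEMENTS AND NUMBERS ARE MADE UP) OF HOW
PROBABILISTIC LOGICAL STATEMENTS CAN BE STORED AS A SEMANTIC NET OF
LABELED, WEIGHTED LINKS AND THEN QUERIED:

# Each probabilistic statement becomes one labeled, weighted edge in the net.
edges = []   # (subject, relation, object, probability)

def assert_stmt(subj, rel, obj, prob):
    edges.append((subj, rel, obj, prob))

def query(subj, rel):
    # Return every (object, probability) linked to `subj` by `rel`.
    return [(o, p) for s, r, o, p in edges if s == subj and r == rel]

assert_stmt("ball",  "falls_when",  "dropped", 0.98)
assert_stmt("glass", "breaks_when", "dropped", 0.60)

print(query("glass", "breaks_when"))   # [('dropped', 0.6)]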



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 11, 2007 7:53 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


Vladimir: ..and also why can't 3D world model be just described abstractly, by
> presenting the intelligent agent with bunch of objects with attached
> properties and relations between them that preserve certain
> invariants? Spacial part of world model doesn't seem to be more
> complex than general problem of knowledge arrangement, when you have
> to keep track of all kinds of properties that should (and shouldn't)
> be derived for given scene.
>
Vladimir and Edward,

I didn't really address this idea essentially common to you both,
properly.

The idea is that a network or framework of symbols/symbolic concepts can
somehow be used to reason usefully and derive new knowledge about the
world - a network of classes and subclasses and relations between them, all
expressed symbolically. Cyc and NARS are examples.

OK let's try and set up a rough test of how fruitful such networks/models
can be.

Take your Cyc or similar symbolic model, which presumably will have
something like "animal - mammals - humans -  primates - cats  etc " and
various relations to "move - jump - sit - stand "   and then "jump -
on - objects" etc etc. A vast hierarchy and network of symbolic concepts,
which among other things tell us something about various animals and the
kinds of movements they can make.

Now ask that model in effect: "OK you know that the cat can sit and jump on
a mat. Now tell me what other items in a domestic room a cat can sit and
jump on. And create a scenario of a cat moving around a room."

I suspect that you will find that any purely symbolic system like Cyc will
be extremely limited in its capacity to deduce further knowledge about cats
or other animals and their movements with relation to a domestic room - and
may well have no power at all to create scenarios.

But you or I, with a visual/sensory model of that cat and that room, will
be able to infer with reasonable success whether it can or can't jump, sit
and stand on every single object in that room - sofa, chair, bottle, radio,
cupboard etc etc. And we will also be able to make very complex assessments
about which parts of the objects it can or can't jump or stand on - which
parts of the sofa, for example - and assessments about which states of
objects (well, it couldn't jump or stand on a large Coke bottle if erect,
but maybe if the bottle were on its side, and almost certainly if it were a
jeroboam on its side). And I think you'll find that our capacity to draw
inferences - from our visual and sensory model - about cats and their
movements is virtually infinite.

And we will also be able to create a virtually infinite set of scenarios of
a cat moving in various ways from point to point around the room.

Reality check: what you guys are essentially advocating is logical systems
and logical reasoning for AGI's - now how many kinds of problems in the real
human world is logic actually used to solve? Not that

RE: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Edward W. Porter
Re: postings of John Rose and Vladimir Nesov below:

I generally agree with both of your postings.  Grounding is a relative
concept.  There are many different degrees of grounding, some much more
powerful than others. Many expert systems had a degree (a relatively low
one) of grounding in their narrow domains.

And grounding can come in many different forms, from many different types
of experience.

With powerful learning algorithms I think you could, as John has
suggested, obtain a significant amount of grounding from reading extremely
large amounts of text.  The read text would constitute a form of
experience.  There would be a lot of regularities in the aspects of the
world described by many types of text, and many associations, patterns,
situations, and generalizations could be learned from it.  Of course there
probably would be very important gaps in knowledge obtained only in this
way.

So it would be good to have more than just learning from text.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 11, 2007 7:10 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Do the inference rules.. P.S.


This is how I "envision" it when a "text only" AGI is fed its world view
as text only. The more text it processes the more a spatial physical
system would emerge internally assuming it is fed text that describes
physical things. Certain relational laws are common across text that
describe and reflect the physical world as we know it. There might be
limits to what the AGI could construct from this information alone but
basic Newtonian physics systems could be constructed. If you fed it more
advanced physics textbooks it should be able to construct Newtonian+
systems - branch out from the basics. Its "handles" to the physical world
would be text based or internally constructed representational entities,
which BTW would be text based i.e. numerical representations in base 256
or base n, binary in physical memory. Theoretically it could construct
bitmap visual scenes, or estimate what they would look like if it was told
to "show" what visual imagery would look like to someone with eyes. It
could figure out what color is, shading, textures, and ultimately 3D space
with motion - depending on the AGI algorithms programmed into it that
is... But if it was not fed enough text containing physical
interrelationships its physics and projected bitmaps would be distorted.
There would have to be enough information in the text or it would have to
be smart enough to derive from minimal information for it to be accurate.

Now naturally it might be better to ground it from the get-go with spatial
physics but for development and testing purposes having it figure that out
would be challenging to build.

John



> From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
> Subject: Re: [agi] Do the inference rules.. P.S.
>
> ...and also why can't 3D world model be just described abstractly, by
> presenting the intelligent agent with bunch of objects with attached
> properties and relations between them that preserve certain
> invariants? Spacial part of world model doesn't seem to be more
> complex than general problem of knowledge arrangement, when you have
> to keep track of all kinds of properties that should (and shouldn't)
> be derived for given scene.
>
> On 10/12/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > >> spatial perception cannot exist without vision.
> >
> > How does someone who is blind from birth have spatial perception
> > then?
> >
> > Vision is one particular sense that can lead to a 3-dimensional
> > model
> of the
> > world (spatial perception) but there are others (touch &
> > echo-location hearing to name two).
> >
> > Why can't echo-location lead to spatial perception without vision?
> Why
> > can't touch?
> >



RE: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Edward W. Porter
Dear indefinite article,

Agreed, a "human-like" reasoning system -- that is one that has
associations for concepts similar to a human -- requires human-like
grounding. I have said exactly that for years.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 11, 2007 4:11 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


I think that building a "human-like" reasoning system without /visual/
perception is theoretically possible, but not feasible in practice. But
how is it "human like" without vision? Communication problems will
arise. Concepts cannot be grounded without vision.

It is impossible to completely "understand" natural language without
vision. Our visual perception acts like a disambiguator for natural
language.

To build a human-like computer algebra system that can prove its own
theorems and find interesting conjectures requires vision to perform
complex symbolic manipulation. A big part of mathematics is about
aesthetics. It needs vision to judge which expressions are interesting,
which are the simplified ones. Finding interesting theorems, such as the
"power rule", the "chain rule" in calculus requires vision to judge that
the rules are simple and visually appealing enough to be communicated or
published.

I think that computer programming is similar. It requires vision to
program easily. It requires vision to remember the locations of the
symbols in the language.

Visual perception and visual grounding is nothing except the basic
motion detection, pattern matching parts of similar images etc. Vision
/is/ a reasoning system.

IMO, we already /have/ AGI--that is, NARS. AGI is just not adapted to
visual reasoning. You cannot improve "symbolic" reasoning further
without other sensory perception.

Edward W. Porter wrote:
>
> Vladimir and Mike,
>
> For humans, much of our experience is grounded on sensory information,
> and thus much of our understanding is based on experiences and
> analogies derived largely from the physical world. So Mike you are
> right that for us humans, much of our thinking is based on recasting
> of experiences of the physical world.
>
> But just because experience of the physical world is at the center of
> much of human thinking, does not mean it must be at the center of all
> possible AGI thinking -- any more than the fact that for millions of
> years the earth and the view from it was at the center of our thinking
> and that of our ancestors means the earth and the view from it must
> forever be at the center of the thinking of all intelligences
> throughout the universe.
>
> In fact, one can argue that for us humans, one of our most important
> sources of grounding – emotion -- is not really about the physical
> world (at least directly), but rather about our own internal state.
> Furthermore, multiple AGI projects, including Novamente and Joshua
> Blue are trying to ground their systems from experience in virtual
> words. Yes those virtual worlds try to simulate physical reality, but
> the fact remains that much of the grounding is coming from bits and
> bytes, and not from anything more physical.
>
> Take Doug Lenat’s AM and create a much more powerful AGI equivalent of
> it, one with much more powerful learning algorithms (such as those in
> Novamente), running on the equivalent of a current 128K processor
> BlueGene L with 16TBytes of RAM, but with a cross sectional bandwidth
> roughly 500 times that of the current BlueGene L (the type of hardware
> that could be profitably sold for well under 1 million dollars in 7
> years if there were a thriving market for making hardware to support
> AGI).
>
> Assume the system creates programs, mathematical structures, and
> transformations, etc., in its own memory. It starts out learning
> like a little kid, constantly performing little experiments, except
> the experiments -- instead of being things like banging spoons against
> a glass -- would be running programs that create data structures and
> then observing what is created (it would have built in primitives for
> observing its own workspace), changing the program and observing the
> change, etc. Assume it receives no input from the physical world, but
> that it has goals and a reward system related to learning about
> programming, finding important mathematical and programming
> generalities, finding compact representations and transformation,
> creating and finding patterns in complexity, and things like that.
> Over time such a system would develop its own type of grounding, one
> derived from years of experience -- and from billions of trillions of
> ma

RE: [agi] Do the inference rules.. P.S.

2007-10-11 Thread Edward W. Porter
Vladimir and Mike,

For humans, much of our experience is grounded on sensory information, and
thus much of our understanding is based on experiences and analogies
derived largely from the physical world.  So Mike you are right that for
us humans, much of our thinking is based on recasting of experiences of
the physical world.

But just because experience of the physical world is at the center of much
of human thinking, does not mean it must be at the center of all possible
AGI thinking -- any more than the fact that for millions of years the
earth and the view from it was at the center of our thinking and that of
our ancestors means the earth and the view from it must forever be at the
center of the thinking of all intelligences throughout the universe.

In fact, one can argue that for us humans, one of our most important
sources of grounding – emotion -- is not really about the physical world
(at least directly), but rather about our own internal state.
Furthermore, multiple AGI projects, including Novamente and Joshua Blue
are trying to ground their systems from experience in virtual worlds.  Yes
those virtual worlds try to simulate physical reality, but the fact
remains that much of the grounding is coming from bits and bytes, and not
from anything more physical.

Take Doug Lenat’s AM and create a much more powerful AGI equivalent of it,
one with much more powerful learning algorithms (such as those in
Novamente), running on the equivalent of a current 128K processor BlueGene
L with 16TBytes of RAM, but with a cross sectional bandwidth roughly 500
times that of the current BlueGene L (the type of hardware that could be
profitably sold for well under 1 million dollars in 7 years if there were
a thriving market for making hardware to support AGI).

Assume the system creates programs, mathematical structures, and
transformations, etc., in its own memory.  It starts out learning like
a little kid, constantly performing little experiments, except the
experiments -- instead of being things like banging spoons against a glass
-- would be running programs that create data structures and then
observing what is created (it would have built in primitives for observing
its own workspace), changing the program and observing the change, etc.
Assume it receives no input from the physical world, but that it has goals
and a reward system related to learning about programming, finding
important mathematical and programming generalities, finding compact
representations and transformation, creating and finding patterns in
complexity, and things like that.  Over time such a system would develop
its own type of grounding, one derived from years of experience -- and
from billions of trillions of machine opps -- in programming and math
space.

Thus, I think you are both right.  Mike is right that for humans, sensory
experience is a vital part of much of our ability to understand, even of
our ability to understand things that might seem totally abstract.  But
Vladimir is right in believing that it should be possible to build an AGI
that was well grounded in its own domain, without any knowledge of the
physical world (other than as the manifesting of bits and bytes).


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 10, 2007 11:10 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


Vladimir,

I'm not trying to be difficult or critical, but I literally can't understand
what you're saying, because you haven't given any example of a problem
where "knowledge of concepts' relations and implications" somehow
supersedes or is independent of physical casting/recasting.

Your analogy though of what I might be saying about maths (or other symbols)
is wrong.  Numbers and arithmetic are based on and derive from physical
objects and our ability to add and subtract objects etc.  Geometry is
obviously based on an analysis of physical objects and shapes. They are
totally physically object-based and can only be understood as such. To point
this out is not at all the same as suggesting that their figures are
composed of ink. I am talking about what their figures (and other symbols
like language) refer to, not what they are composed of.  (Even a
mathematical concept BTW like "infinity" only became acceptable in maths
about the time of the printing press - when it became possible
physically/realistically for the first time to imagine objects being
produced ad infinitum).

And I would suggest that our ability to perceive the kinds of concept
relations you may be thinking of is very much physically based and
"digital" - IOW based on pointing with our digits to different objects in a
scene (even if only in our mind's eye) - to explain, for example, by
pointing to how "

RE: [agi] Do the inference rules.. P.S.

2007-10-10 Thread Edward W. Porter
I don’t know if, how, or how well NARS would handle the task of
performing the type of “recasting” you claim is desirable.

But remember NARS was part of Hofstadter’s Fluid Analogy Research Group
(FARG), which was dedicated to the very type of “recasting” you mention --
that is non-literal matching and analogy making.  One of NARS’s key
functions is to place concepts into a generalization and similarity
(gen/sim) network that makes it easy to see the correspondence between
different, yet similar, parts of two semantic structures over which
analogies are to be made.

However, from reading about four or five of Pei Wang’s NARS papers I have
not seen any discussion of any net matching procedure that could be used
for non-literal similarity-based matching of net nodes – such as the net
matching algorithm used in Hofstadter’s own Copycat program -- a program
that was amazingly good at making creative analogies in an interesting toy
domain.

But one doesn’t have to be a rocket scientist to figure out how to use
NARS’s type of gen/sim network in such net matching.

Ed Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 10, 2007 6:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules.. P.S.


These 'recastings' of problems are essentially inference steps, where each
step is evident and is performed by a trained expert's intuition. A sequence
of such simple steps can constitute a complex inference that leads to the
solution of a complex problem. This recasting isn't necessarily related to
physical common sense, even though each intermediate representation can be
represented as a spatio-temporal construction by virtue of being
representable by frame graphs evolving over time, which does not reflect
the rules of this evolution (which are the essence of the inference being
performed).

On 10/11/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Just to underline my point about the common sense foundations of logic
> and general intelligence - I came across this from: Education &
> Learning to Think by Lauren B Resnick (and a section entitled
> "General Reasoning - Improving Intelligence"):
>
> "Recent research in science problem solving shows that experts do not
> respond to problems as they are presented - writing equations for
> every relationship described and then using routine procedures for
> manipulating equations. Instead they reinterpret the problems,
> recasting them in terms of general scientific principles until the
> solutions become almost self-evident."
>
> He points out that the same principles apply to virtually all subjects
> in the curriculum. I would suggest that those experts are recasting
> problems principally in terms of physical common sense models.  NARS,
> it seems to me, "responds to problems as they are presented."
>
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email To
> unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


--
Vladimir Nesov    mailto:[EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=52164935-1e09e0

RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-09 Thread Edward W. Porter
I think IQ tests are an important measure, but they don't measure
everything important.  FDR was not nearly as bright as Richard Nixon, but
he was probably a much better president.

Ed Porter

-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 4:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


With some googling, I found that older people have lower IQs:
http://www.sciencedaily.com/releases/2006/05/060504082306.htm
IMO, the brain is like a muscle, not an organ. IQ is said to be highly
genetic, and the heritability increases with age. Perhaps older
people do not have as much mental stimulation as young people?

IMO, IQ does not measure general intelligence, and certainly does not
measure common sense intelligence. The Bushmen and Pygmy peoples have an
average IQ of 54. (source: http://www.rlynn.co.uk/) These IQs are much
lower than those of some mentally retarded and Down syndrome people, but the
Bushmen and Pygmy peoples act quite normally.

Yes, IQ is a sensitive and controversial topic, particularly the racial
differences in IQ.

"my ability to recall things is much worse than it was twenty years ago"
Commonly used culture-free IQ tests, such as Raven's Progressive Matrices,
generally measure visuospatial intelligence. They do not measure
crystallized intelligence such as memory recall, but visuospatial fluid
intelligence.

I do not take IQ tests too seriously. IQ only measures visuospatial
reasoning, not auditory or linguistic intelligence. Some mentally
retarded autistic people have extremely high IQs.

Edward W. Porter wrote:
>
> Dear indefinite article,
>
> The Wikipedia entry for "Flynn Effect" suggests -- in agreement with
> your comment in the below post -- that older people (at least those in
> the pre-dementia years) don't get dumber with age relative to their
> younger selves, but rather relative to the increasing intelligence of
> people younger than themselves (and, thus, relative to re-normed IQ
> tests).
>
> Perhaps that is correct, but I can tell you that based on my own
> experience, my ability to recall things is much worse than it was
> twenty years ago. Furthermore, my ability to spend most of three or
> four nights in a row lying bed in most of the night with my head
> buzzing with concepts about an intellectual problem of interest
> without feeling like a total zombiod in the following days has
> substantially declined.
>
> Since most organs of the body diminish in function with age, it would
> be surprising if the brain didn't also.
>
> We live in the age of political correctness where it can be dangerous
> to one’s careers to say anything unfavorable about any large group of
> people, particularly one as powerful as the over 45, who, to a large
> extent, rule the world. (Or even to those in the AARP, which is an
> extremely powerful lobby.) So I don't know how seriously I would take
> the statements that age doesn't affect IQ.
>
> My mother, who had the second highest IQ in her college class, was a
> great one for relaying choice tidbits. She once said that Christiaan
> Barnard, the first doctor to successfully perform a heart transplant,
> once said something to the effect of
>
> “If you think old people look bad from the outside, you
> should see how bad they look from the inside.”
>
> That would presumably also apply to our brains.
>



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51654844-578b6d


RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
Mark,



The basic inference rules in NARS that would support an implication of the
form S is a child of P are of the form:



DEDUCTION INFERENCE RULE:
 Given S --> M and M --> P, this implies S --> P

ABDUCTION INFERENCE RULE:
 Given S --> M and P --> M, this implies S --> P to some degree

INDUCTION INFERENCE RULE:
 Given M --> S and M --> P, this implies S --> P to some degree



where "-->" is the inheritance relations.



Your arguments are of the very different form:

Given P and Q, this implies Q --> P and P --> Q



And



Given S and R, this implies S --> R and R --> S



 In the argument regarding drinking and being an adult, you do not
appear to use any of these NARS inference rules to show that P inherits
from Q or vice versa (unless, perhaps, one assumes multiple other NARS
sentences or terms that might help the inference along, such as an
uber-category like the “category of all categories” from which one could use
the abduction rule to imply both of the inheritances mentioned (which one
would assume the system would have learned over time was such a weak
source of implication as to be normally useless)).



But in that example, just from common sense reasoning, including knowledge
of the relevant subject matter (absent any knowledge of NARS), it appears
reasonable to imply P from Q and Q from P.  So if NARS did the same it
would be behaving in a common sense way.  Loops in transitivity might be
really ugly, but it seems any human-level AGI has to have the same ability
to deal with them as human common sense.



To be honest, I do not yet understand how implication is derived from the
inheritance relations in NARS.  Assuming truth values of one for the child
and child/parent inheritance statement, I would guess a child implies its
parent with a truth value of one.  I would assume a parent with a truth
value of one implies a given child with a lesser value that decreases the
more often the parent is mapped against other children.



The argument claiming NARS says that R ("most ravens are black") is both
the parent and child of S ("this raven is white") (and vice versa),
similarly does not appear to be derivable from only the statements given
using the NARS inference rules.



Nor does my common sense reasoning help me understand why “most ravens are
black” is both the parent and child of “this raven is white.”  (Although
my common sense does tell me that “this raven is black” would provide
common sense inductive evidence for “most ravens are black” and that “this
raven” that is black would be a child of the category of “most ravens”
that are black.)



But I do understand that each of these two statements would tend to have
probabilistic effects on the other, as you suggested,  assuming that the
fact a raven is black has implications on whether or not it is white.  But
such two-way probabilistic relationships are at the core of Bayesian
inference, so there is no reason why they should not be part of an AGI.
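
(As a toy illustration of that two-way influence, here is a tiny Bayes-rule
calculation in Python.  All of the numbers are invented purely for the
example; they are not drawn from NARS or from any data.)

    # H = "most ravens are black";  E = "a white raven is observed"
    p_h       = 0.9    # invented prior P(H)
    p_e_h     = 0.02   # invented likelihood P(E | H)
    p_e_not_h = 0.10   # invented likelihood P(E | not H)

    # total probability, then Bayes' rule
    p_e         = p_e_h * p_h + p_e_not_h * (1 - p_h)
    p_h_given_e = p_e_h * p_h / p_e

    print(round(p_e, 3))          # 0.028
    print(round(p_h_given_e, 3))  # 0.643 -- the observation lowers belief in H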


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 2:28 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?



Most of the discussion I read in Pei's article related to inheritance
relations between terms that operated as subjects and predicates in
sentences that are inheritance statements, rather than between entire
statements, unless the statement was a subject or a predicate of a higher
order inheritance statement.  So what you are referring to appears to be
beyond what I have read.

Label the statement "I am allowed to drink alcohol" as P and the statement
"I am an adult" as Q.  P implies Q and Q implies P (assume that age 21
equals adult) --OR-- P is the parent of Q and Q is the parent of P.

Label the statement that "most ravens are black" as R and the statement
that "this raven is white" as S.  R affects the probability of S and, to a
lesser extent, S affects the probability of R (both in a negative
direction) --OR-- R is the parent of S and S is the parent of R (although,
realistically, the probability change is so miniscule that you really
could argue that this isn't true).

NARS's inheritance is the "inheritance" of influence on the probability
values.

- Original Message -

From: Edward W. Porter <mailto:[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 09, 2007 1:12 PM
Subject: RE: [agi] Do the inference rules of categorical logic make sense?

Mark,

Thank you for your reply.  I just ate a lunch with too much fat (luckily
largely olive oil) in it so, my brain is a little sleepy.  If it is not
too much trouble could you please map out the inheritance relat

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
Mark,

Thank you for your reply.  I just ate a lunch with too much fat (luckily
largely olive oil) in it so, my brain is a little sleepy.  If it is not
too much trouble could you please map out the inheritance relationships
from which one derives how "I am allowed to drink alcohol" is both a
parent and the child of "I am an adult."  And could you please do the same
with how "most ravens are balck" is both parent and child of "this raven
is white."

Most of the discussion I read in Pei's article related to inheritance
relations between terms that operated as subjects and predicates in
sentences that are inheritance statements, rather than between entire
statements, unless the statement was a subject or a predicate of a higher
order inheritance statement.  So what you are referring to appears to be
beyond what I have read.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 12:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Thus, as I understand it, one can view all inheritance statements as
indicating the evidence that one instance or category belongs to, and thus
is “a child of” another category, which includes, and thus can be viewed
as “a parent” of the other.

Yes, that is inheritance as Pei uses it.  But are you comfortable with the
fact that "I am allowed to drink alcohol" is normally both the parent and
the child of "I am an adult " (and vice versa)?  How about the fact that
"most ravens are black" is both the parent and child of "this raven is
white" (and vice versa)?

Since inheritance relations are transitive, the resulting hierarchy of
categories involves nodes that can be considered ancestors (i.e., parents,
parents of parents, etc.) of others and nodes that can be viewed as
descendants (children, children of children, etc.) of others.

And how often do you really want to do this with concepts like the above
-- or when the evidence is substantially less than unity?

And loops and transitivity are really ugly . . . .

NARS really isn't your father's inheritance.


- Original Message -
From: Edward W. Porter <mailto:[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 09, 2007 12:24 PM
Subject: RE: [agi] Do the inference rules of categorical logic make sense?


RE: (1) THE VALUE OF “CHILD OF” AND “PARENT OF” RELATIONS  &  (2)
DISCUSSION OF POSSIBLE VALUE IN DISTINGUISHING BETWEEN GENERALIZATIONAL
AND COMPOSITIONAL INHERITANCE HIERARCHIES.

Re Mark Waser’s 10/9/2007 9:46 AM post: Perhaps Mark understands something
I don’t.

I think relations that can be viewed as “child of” and “parent of” in a
hierarchy of categories are extremely important (for reasons set forth in
more detail below) and it is not clear to me that Pei meant something
other than this.

If Mark or anyone else has reason to believe that “what [Pei] means is
quite different” than such “child of” and “parent of” relations, I would
appreciate being illuminated by what that different meaning is.



My understanding of NARS is that it is concerned with inheritance
relations, which as I understand it, indicate the truth value of the
assumption that one category falls within another category, where category
is broadly defined to included not only what we normally think of as
categories, but also relationships, slots in relationships, and categories
defined by a sets of one or more properties, attributes, elements,
relationships, or slot in relationships.  Thus, as I understand it, one
can view all inheritance statements as indicating the evidence that one
instance or category belongs to, and thus is “a child of” another
category, which includes, and thus can be viewed as “a parent” of the
other.  Since inheritance relations are transitive, the resulting
hierarchy of categories involves nodes that can be considered ancestors
(i.e., parents, parents of parents, etc.) of others and nodes that can be
viewed as descendants (children, children of children, etc.) of others.
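
(Purely as an illustration of that transitivity, here is a minimal Python
sketch that computes the ancestor set of a term by following child-to-parent
links.  It ignores truth values entirely and is not meant to describe how
NARS actually stores knowledge.)

    # child -> parent links; ancestors = transitive closure upward
    def ancestors(term, parents):
        seen, stack = set(), list(parents.get(term, ()))
        while stack:
            cat = stack.pop()
            if cat not in seen:
                seen.add(cat)
                stack.extend(parents.get(cat, ()))
        return seen

    parents = {
        "Fred":   {"human"},
        "human":  {"animal"},
        "animal": {"living thing"},
    }
    print(ancestors("Fred", parents))  # {'human', 'animal', 'living thing'}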

I tend to think of similarity as a sibling relationship under a shared
hidden parent category -- based on similar aspects of the sibling’s
extensions and/or intensions.

In much of my own thinking I have thought of such categorization relations
as generalization, in which the parent is the genus and the child is
the species.  Generalization is important for many reasons.  First,
perception is trying to figure out in which category or generalization of
things, actions, or situations various parts of a current set of sensory
information might fit.  Second, generalization is important because it
is necessary for implication.  All those Bayesian probabilities we are
used to thinking about such as P(A|B,C), are tota

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-09 Thread Edward W. Porter
more properties or
elements).

Although I understand there is an important equivalence between going down
in the comp hierarchy and going up in the gen hierarchy, and that the two
could be viewed as one hierarchy, I have preferred to think of them as
different hierarchies, because the type of gens one gets by going up in the
gen hierarchy tends to be different from the type of gens one gets by going
down in the comp hierarchy.

Each possible set in the powerset (the set of all subsets) of elements
(eles), relationships (rels), attributes (atts) and contextual patterns
(contextual pats) could be considered a possible generalization.  I have
assumed, as does Goertzel’s Novamente, that there is a competitive
ecosystem for representational resources, in which only the fittest pats
and gens -- as determined by some measure of usefulness to the system --
survive.  There are several major uses of gens, such as aiding in
perception, providing inheritance of significant implication, providing
appropriate level of representation for learning, and providing invariant
representation in higher level comps.  Although temporary gens will be
generated at a relatively high frequency, somewhat like the inductive
implications in NARS, the number of gens that survive and get incorporated
into a lot of comps and episodic reps, will be an infinitesimal fraction
of the powerset of eles, rels, atts, and contextual features stored in the
system.  Pats in the up direction in the gen hierarchy will tend to be
ones that have been selected for their usefulness as generalizations.  They
will often have a reasonable number of features that correspond to those of
their species node, but with some of them more broadly defined.  The gens
found by going down in the comp hierarchy are ones that have been selected
for their representational value in a comp, and many of them would not
normally be that valuable as what we normally think of as generalizations.

In the type of system I have been thinking of I have assumed there will be
substantially less multiple inheritance in the up direction in the gen
hierarchy than in the down direction in the comp hierarchy (in which there
would be potential inheritance from every ele, rel, att, and contextual
feature in a comp’s descendant nodes at multiple levels in the comp
hierarchy below it).  Thus, for spreading activation control purposes, I
think it is valuable to distinguish between generalization and
compositional hierarchies, although I understand they have an important
equivalence that should not be ignored.

I wonder if NARS makes such a distinction.
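
(To make the distinction concrete, here is a hypothetical Python sketch of a
node that keeps its gen links and comp links in separate sets so spreading
activation can weight them differently.  The field names and weights are my
own invented assumptions, not part of NARS, Novamente, or any existing
system.)

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        gen_parents: set = field(default_factory=set)   # species -> genus
        comp_parents: set = field(default_factory=set)  # element -> composition
        activation: float = 0.0

    def spread(node, nodes, gen_weight=0.8, comp_weight=0.3):
        # push a fraction of this node's activation up each hierarchy,
        # with the gen links weighted more heavily (invented weights)
        for name in node.gen_parents:
            nodes[name].activation += gen_weight * node.activation
        for name in node.comp_parents:
            nodes[name].activation += comp_weight * node.activation

    nodes = {
        "wheel":       Node("wheel", {"round thing"}, {"car"}),
        "round thing": Node("round thing"),
        "car":         Node("car"),
    }
    nodes["wheel"].activation = 1.0
    spread(nodes["wheel"], nodes)
    print(nodes["round thing"].activation, nodes["car"].activation)  # 0.8 0.3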

These are only initial thoughts.  I hope to become part of a team that
gets an early world-knowledge computing AGI up and running.  Perhaps when
I do, feedback from reality will change my mind.

I would welcome comments, not only from Mark, but also from other readers.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 9:46 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


> I don't believe that this is the case at all.  NARS correctly
> handles cases where entities co-occur or where one entity implies
> another only due to other entities/factors.  "Is an ancestor of" and
> "is a descendant of" has nothing to do with this.

Ack!  Let me rephrase.  Despite the fact that Pei always uses the words of
inheritance (and is technically correct), what he means is quite different
from what most people assume that he means.  You are stuck on the "common"
meanings of the terms "is an ancestor of" and "is a descendant of" and
it's impeding your understanding.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51480730-4665d4

RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-08 Thread Edward W. Porter
Dear indefinite article,

The Wikipedia entry for "Flynn Effect" suggests --  in agreement with your
comment in the below post --  that older people (at least those in the
pre-dementia years) don't get dumber with age relative to their younger
selves, but rather relative to the increasing intelligence of people
younger than themselves (and, thus, relative to re-normed IQ tests).

Perhaps that is correct, but I can tell you that based on my own
experience, my ability to recall things is much worse than it was twenty
years ago.  Furthermore, my ability to spend most of three or four nights
in a row lying bed in most of the night with my head buzzing with concepts
about an intellectual problem of interest without feeling like a total
zombiod in the following days has substantially declined.

Since most organs of the body diminish in function with age, it would be
surprising if the brain didn't also.

We live in the age of political correctness where it can be dangerous to
one’s careers to say anything unfavorable about any large group of people,
particularly one as powerful as the over 45, who, to a large extent, rule
the world.  (Or even to those in the AARP, which is an extremely powerful
lobby.)  So I don't know how seriously I would take the statements that
age doesn't affect IQ.

My mother, who had the second highest IQ in her college class, was a great
one for relaying choice tidbits.  She once said that Christiaan Barnard,
the first doctor to successfully perform a heart transplant, once said
something to the effect of

“If you think old people look bad from the outside, you
should see how bad they look from the inside.”

That would presumably also apply to our brains.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 10:00 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


Edward W. Porter wrote:
> It's also because the average person loses 10 points in IQ between
> mid-twenties and mid-forties and another ten points between mid-forties
> and sixty.  (Help! I'm 59.)
>
> But this is just the average.  Some people hang on to their marbles as
> they age better than others.  And knowledge gained with age can, to
> some extent, compensate for less raw computational power.
>
> The book in which I read this said they age norm IQ tests (presumably
> to keep from offending the people older than mid-forties who
> presumably largely control most of society's institutions, including
> the purchase of IQ tests.)
>
>
I disagree with your theory. I primarily see the IQ drop as a result of
the Flynn effect, not age.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51303117-b7930f

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-08 Thread Edward W. Porter
Charles D. Hixson’s post of 10/8/2007 5:50 PM was quite impressive as a
first reaction upon reading about NARS.

After I first read Pei Wang’s “A Logic of Categorization”, it took me
quite a while to know what I thought of it.  It was not until I got
answers to some of my basic questions from Pei through postings under the
current thread title that I was able to start to understand it reasonably
well.  Since then I have been coming to understand that it is quite
similar to some of my own previous thinking, and if it were used in a
certain way, it would seem to have tremendous potential.

But I still have some questions about it, such as the following (PEI, IF
YOU ARE READING THIS, I WOULD BE INTERESTED IN HEARING YOUR ANSWERS):

--(1) How are episodes represented in NARS?
--(2) How are complex patterns and sets of patterns with many interrelated
elements represented in NARS?  (I.e., how would NARS represent an auto
mechanic’s understanding of automobiles?  Would it be in terms of many
thousands of sentences containing relational inheritance statements such
as those shown on page 197 of “A Logic of Categorization”?)
--(3) How are time and temporal patterns represented?
--(4) How are specific mappings between the elements of a pattern and what
they map to represented in NARS?
--(5) How does NARS learn behaviors?
--(6) Finally, this is a much larger question.  Is it really optimal to
limit your representational scheme to a language in which all sentences
are based on the inheritance relation?

With regard to Question (6):

Categorization is essential.  I don’t question that.  I believe the
pattern is the essential source of intelligence.  It is essential to
implication and reasoning from experiences.  NARS’s categorization relates
to patterns and relationships between patterns.  Its patterns are
represented in a generalization hierarchy (where a property or set of
properties can be viewed as a generalization), with a higher level pattern
(i.e., category) being able to represent different species of itself in
the different contexts where those different species are appropriate,
thus, helping to solve two of the major problems in AI, that of
non-literal matching and context appropriateness.

All this is well and good.  But without having had a chance to fully
consider the subject it seems to me that there might be other aspects of
reality and representation that -- even if they might all be reducible to
representation in terms of categorization -- could perhaps be more easily
thought of by us poor humans in terms of concepts other than
categorization.

For example, Novamente bases its inference and much of its learning on
PTL, Probabilistic Term Logic, which is based on inheritance relations,
much as is NARS.  But both of Ben’s articles on Novamente spend a lot of
time describing things in terms like “hypergraph”, “maps”, “attractors”,
“logical unification”, “PredicateNodes”, “genetic programming”, and
“associative links”.  Yes, perhaps all these things could be thought of as
categories, inheritance statements, and things derived from them of the
type described in your paper “A Logic of Categorization”, and such thoughts
might provide valuable insights, but is that the most efficient way for us
mortals to think of them and for a machine to represent them?

I would be interested in hearing your answer to all these questions.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51300772-e34770

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
Great,  I look forward to trying this when I get back from a brief
vacation for the holiday weekend.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 8:51 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


On 10/6/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> On 10/6/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> >
> > So is the following understanding correct?
> >
> > If you have two statements
> >
> > Fred is a human
> > Fred is an animal
> >
> > And assuming you know nothing more about any of the three terms in
> > both these statements, then each of the following would be an
> > appropriate induction
> >
> > A human is an animal
> > An animal is a human
> > A human and an animal are similar
>
> Correct, though for technical reasons I don't call the last one
> "induction" but "comparison".

BTW, in the future you can easily try it yourself, if you want:

(1) start the NARS demo by clicking
http://nars.wang.googlepages.com/NARS.html
(2) open the inference log window by selecting "View/Inference Log" from the
main window
(3) copy/paste the following two lines into the input window:

<Fred --> human>.
<Fred --> animal>.

then click OK.
(4) click "Walk" in the main window for a few times. For this example, in
the 5th step the three conclusions you mentioned will be produced, with a
bunch of others.

There is a User's Guide for the demo at
http://nars.wang.googlepages.com/NARS-Guide.html

Pei


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50771487-e5f225


RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
Thanks.

So as I understand it, whether a premise is major or minor is defined by
the role of its terms relative to a given conclusion.  But the same
premise could play a major role relative to one conclusion and a minor
role relative to another.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 8:20 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


The "order" here isn't the "incoming order" of the premises. From
M-->S(t1) and M-->P(t2), where t1 and t2 are truth values, the rule
produces two symmetric conclusions, and which truth function is called
depends on the subject/predicate order in the conclusion. That is,
S-->P will use a function f(t1,t2), while P-->S will use the symmetric
function f(t2,t1).
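
(A two-line sketch of that call pattern, for concreteness.  The function f
below is a placeholder made up only so the asymmetry is visible in numbers;
it is NOT the actual NAL truth function.)

    def f(t1, t2):
        # placeholder truth function -- invented for illustration only
        return round(t1 * (0.5 + 0.5 * t2), 2)

    t1, t2 = 0.9, 0.6        # truth values of M-->S and M-->P
    print(f(t1, t2))         # 0.72  used for the conclusion S --> P
    print(f(t2, t1))         # 0.57  used for the conclusion P --> S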

Pei

On 10/6/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> If you are a machine reasoning from pieces of information you receive
> in no particular order how do you know which is the major and which is
> the minor premise?
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
> -Original Message-
> From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
> Sent: Saturday, October 06, 2007 4:30 AM
> To: agi@v2.listbox.com
> Subject: Re: [agi] Do the inference rules of categorical logic make
> sense?
>
>
> Major premise and minor premise in a syllogism are not
> interchangeable. Read the derivation of truth tables for abduction and
> induction from the semantics of NAL to learn that different ordering
> of premises results in different truth values. Thus while both
> orderings are applicable, one will usually give more confident result
> which will dominate the other.
>
> On 10/6/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> >
> >
> > But I don't understand the rules for induction and abduction which
> > are as
> > following:
> >
> > ABDUCTION INFERENCE RULE:
> >  Given S --> M and P --> M, this implies S --> P to some degree
> >
> > INDUCTION INFERENCE RULE:
> >  Given M --> S and M --> P, this implies S --> P to some degree
> >
> > The problem I have is that in both the abduction and induction rule
> > -- unlike in the deduction rule -- the roles of S and P appear to be
> > semantically identical, i.e., they could be switched in the two
> > premises with no apparent change in meaning, and yet in the
> > conclusion switching S and P would change in meaning.  Thus, it
> > appears that from premises which appear to make no distinctions
> > between S and P a conclusion is drawn that does make such a
> > distinction.  At least to me, with my current limited knowledge of
> > the subject, this seems illogical.
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email To
> unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email To
> unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50771155-cc051f


RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
So is the following understanding correct?

If you have two statements

Fred is a human
Fred is an animal

And assuming you know nothing more about any of the three
terms in both these statements, then each of the following would be an
appropriate induction

A human is an animal
An animal is a human
A human and an animal are similar

It would only then be from further information that you
would find the first of these two inductions has a larger truth value than
the second and that the third probably has a larger truth value than the
second.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 7:03 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Right. See concrete examples in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

In induction and abduction, S-->P and P-->S are usually (though not
always) produced in pairs, though usually (though not always) with
different truth values, unless the two premises have the same truth-value
--- as Edward said, it would be illogical to produce difference from
sameness. ;-)

Especially, positive evidence equally supports both conclusions, while
negative evidence only denies one of the two --- see the "Induction and
Revision" example in
http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt
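
(A rough Python sketch of this evidence bookkeeping, using frequency
f = w+/w and confidence c = w/(w+k), with k a personality parameter; the
counting loop below is a simplification for illustration, not NARS source
code.)

    K = 1.0  # personality parameter in c = w / (w + K)

    def measure(w_plus, w):
        f = w_plus / w if w else 0.0
        c = w / (w + K)
        return round(f, 2), round(c, 2)

    # each observed instance M is recorded as (M is in S, M is in P)
    observations = [(True, True), (True, True), (True, False)]

    wp_sp = w_sp = 0.0   # evidence for S --> P
    wp_ps = w_ps = 0.0   # evidence for P --> S

    for in_s, in_p in observations:
        if in_s and in_p:        # positive evidence for BOTH conclusions
            wp_sp += 1; w_sp += 1
            wp_ps += 1; w_ps += 1
        elif in_s:               # negative evidence for S --> P only
            w_sp += 1
        elif in_p:               # negative evidence for P --> S only
            w_ps += 1

    print("S --> P:", measure(wp_sp, w_sp))  # (0.67, 0.75)
    print("P --> S:", measure(wp_ps, w_ps))  # (1.0, 0.67)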

For a more focused discussion on induction in NARS, see
http://www.cogsci.indiana.edu/pub/wang.induction.ps

The situation for S<->P is similar --- see "comparison" in
http://nars.wang.googlepages.com/NARS-Examples-SingleStep.txt

Pei

On 10/6/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> Major premise and minor premise in a syllogism are not
> interchangeable. Read the derivation of truth tables for abduction and
> induction from the semantics of NAL to learn that different ordering
> of premises results in different truth values. Thus while both
> orderings are applicable, one will usually give more confident result
> which will dominate the other.
>
> On 10/6/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> >
> >
> > But I don't understand the rules for induction and abduction which
> > are as
> > following:
> >
> > ABDUCTION INFERENCE RULE:
> >  Given S --> M and P --> M, this implies S --> P to some degree
> >
> > INDUCTION INFERENCE RULE:
> >  Given M --> S and M --> P, this implies S --> P to some degree
> >
> > The problem I have is that in both the abduction and induction rule
> > -- unlike in the deduction rule -- the roles of S and P appear to be
> > semantically identical, i.e., they could be switched in the two
> > premises with no apparent change in meaning, and yet in the
> > conclusion switching S and P would change in meaning.  Thus, it
> > appears that from premises which appear to make no distinctions
> > between S and P a conclusion is drawn that does make such a
> > distinction.  At least to me, with my current limited knowledge of
> > the subject, this seems illogical.
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email To
> unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50767228-6b318e

RE: [agi] Do the inference rules of categorical logic make sense?

2007-10-06 Thread Edward W. Porter
If you are a machine reasoning from pieces of information you receive in
no particular order how do you know which is the major and which is the
minor premise?

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 4:30 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do the inference rules of categorical logic make sense?


Major premise and minor premise in a syllogism are not interchangeable.
Read the derivation of truth tables for abduction and induction from the
semantics of NAL to learn that different ordering of premises results in
different truth values. Thus while both orderings are applicable, one will
usually give a more confident result, which will dominate the other.

On 10/6/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>
> But I don't understand the rules for induction and abduction which are
> as
> following:
>
> ABDUCTION INFERENCE RULE:
>  Given S --> M and P --> M, this implies S --> P to some degree
>
> INDUCTION INFERENCE RULE:
>  Given M --> S and M --> P, this implies S --> P to some degree
>
> The problem I have is that in both the abduction and induction rule --
> unlike in the deduction rule -- the roles of S and P appear to be
> semantically identical, i.e., they could be switched in the two
> premises with no apparent change in meaning, and yet in the conclusion
> switching S and P would change in meaning.  Thus, it appears that from
> premises which appear to make no distinctions between S and P a
> conclusion is drawn that does make such a distinction.  At least to
> me, with my current limited knowledge of the subject, this seems
> illogical.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50766573-29b233


[agi] Do the inference rules of categorical logic make sense?

2007-10-05 Thread Edward W. Porter

I am trying to understand categorical logic from reading Pei Wang’s very
interesting paper, “A Logic of Categorization.”  Since I am a total
newbie to the field I have some probably dumb questions.  But at the risk
of making a fool of myself let me ask them to members of the list.

Let's use “-->” as the arrow symbol commonly used to represent an
inheritance relation of the type used in categorical logic, where A --> B,
roughly means category A is a species (or instance) of category B.
Category B, in addition to what we might normally think as a
generalization, can also be a property (meaning B’s category would be that
of concepts having property B).

I understand how the deduction inference rule works.

DEDUCTION INFERENCE RULE:
 Given S --> M and M --> P, this implies S --> P

This makes total sense.  If S is a type of M, and M is a type of P, S is a
type of P.

But I don’t understand the rules for induction and abduction which are as
following:

ABDUCTION INFERENCE RULE:
 Given S --> M and P --> M, this implies S --> P to some degree

INDUCTION INFERENCE RULE:
 Given M --> S and M --> P, this implies S --> P to some degree

The problem I have is that in both the abduction and induction rule --
unlike in the deduction rule -- the roles of S and P appear to be
semantically identical, i.e., they could be switched in the two premises
with no apparent change in meaning, and yet in the conclusion switching S
and P would change in meaning.  Thus, it appears that from premises which
appear to make no distinctions between S and P a conclusion is drawn that
does make such a distinction.  At least to me, with my current limited
knowledge of the subject, this seems illogical.

It would appear to me that both the Abduction and Induction inference
rules should imply each of the following, each with some degree of
evidentiary value
 S --> P
 P --> S,  and
 S <--> P, where “<-->” represents a similarity relation.

Since these rules have been around for years I assume the rules are right
and my understanding is wrong.

I would appreciate it if someone on the list with more knowledge of the
subject than I could point out my presumed error.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50726265-cee19c

RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-05 Thread Edward W. Porter
It's also because the average person loses 10 points in IQ between the
mid-twenties and mid-forties and another ten points between the mid-forties
and sixty.  (Help! I'm 59.)

But this is just the average.  Some people hang on to their marbles as
they age better than others.  And knowledge gained with age can, to some
extent, compensate for less raw computational power.  

The book in which I read this said they age norm IQ tests (presumably to
keep from offending the people older than mid-forties who presumably
largely control most of society's institutions, including the purchase of
IQ tests.)

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 05, 2007 7:31 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
> the
> IQ bell curve is not going down.  The evidence is its going up.

So that's why us old folks 'r gettin' stupider as compared to 
them's young'uns.

--linas


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50724257-8e390c


RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
Mike,

I think the concept of image schema is a very good one.

Among my many computer drawings are ones showing multiple simplified
drawings of different, but at various semantic levels similar, events,
for the purpose of helping me to understand how a system can naturally
extract appropriate generalizations from such images.  For example,
multiple different types of "hitting": balls hitting balls, a ball hitting
walls, bats hitting balls, multiple pictures of Harry hitting Bill and
Bill hitting Harry, etc.

So you are preaching to the choir.

I have no idea how new the idea is.  When Schank was talking about scripts
I have a hunch the types of computers he had couldn't even begin to do the
level of image recognition necessary do the type of generalization I think
we are both interested in.  The Serre article, a link to which I sent you
earlier today, and the hierarchical memory architecture it provides an
example of, make such automatic generalization from images much easier.
So learning directly from video, to the extent it is not already here (and
some surprising forms of it are already here), will be coming soon, and
that learning will definitely include things you could properly call image
schemas.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 4:03 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset



Edward:
> You talk about the Cohen article I quoted as perhaps leading to a major
> paradigm shift, but actually much of its central thrust is similar to
> ideas that have been around for decades.  Cohen’s gists are
> surprisingly similar to the scripts Schank was talking about circa
> 1980.

Josh: And his "static image schemas" are Minsky's frames.

No doubt. But image schemas, as used by the school of
Lakoff/Johnson/Turner/Fauconnier, are definitely a significant step towards
a major paradigm shift in cognitive science - are very influential in
cognitive linguistics, have helped found cognitive semantics - and are
backed by an ever-growing body of experimental science. So that's why I was
just a little (and definitely no more) excited by seeing them being used in
AGI, however inadequately. I had already casually predicted elsewhere that
they would be influential, and I think you'll see more of them. Neither
Minsky nor any other AGI person, to my knowledge, uses image schemas as set
out by Mark Johnson in "The Body in the Mind" - or could do, if my
understanding is correct, on digital computers.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50135051-e3911e

RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
Josh,

Again a good reply.  So it appears the problem is they don't have good
automatic learning of semantics.

But, of course, that's virtually impossible to do in small systems except,
perhaps, for trivial domains.  It becomes much easier in tera-machines.
So if my interpretation of what you are saying is true, it bodes well for
the ease of overcoming this problem in the coming years with the coming
hardware.

I look forward to reading Pei's article on this subject. It may shed some
new light on my understanding of the subject.  But it may take me some
time.  I read and understand symbolic logic slowly.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 4:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


Let me answer with an anecdote. I was just in the shop playing with some
small robot motors and I needed a punch to remove a pin holding a gearbox
onto one of them. I didn't have a purpose-made punch, so I cast around in
the toolbox until Aha! an object close enough to use. (It was a small
rattail file)

Now the file and a true punch have many things in common and many other
things different. Among the common things that were critical are the fact
that the hardened steel of the file wouldn't bend and wedge beside the pin,
and I could hammer on the other end of it. These semantic aspects of the
file had to match the same ones of the punch before I could see it as one.

Where did these semantic aspects come from? Somehow I've learned enough
about punches and files to know what a punch needs (i.e. which of its
properties are necessary for it to work) and what a file gives.
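
(A toy Python version of that "what a punch needs / what a file gives"
matching.  The property names and tools are invented for illustration; no
claim that Copycat or any real system works this way.)

    punch_needs = {"rigid", "hard steel", "thin shaft", "hammerable end"}

    tools = {
        "rattail file": {"rigid", "hard steel", "thin shaft",
                         "hammerable end", "abrasive"},
        "pencil":       {"thin shaft"},
        "screwdriver":  {"rigid", "hard steel", "hammerable end"},
    }

    # a tool can stand in for the punch if it supplies every needed property
    for name, gives in tools.items():
        missing = punch_needs - gives
        print(name, "usable" if not missing else f"missing {sorted(missing)}")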

In Copycat, the idea is to build up an interpretation of an object (analogy
as perception) under pressures from what it has to match. So far, well and
good -- that's what I was doing. But in Copycat (and tabletop and ...) the
semantics is built in and ad hoc. And there isn't really all that much of an
analogy net matching algorithm without the semantics (codelets).

In my case, I have lots of experience misusing tools, so I have built up an
internal theory of which properties are likely to matter and which aren't.


I think this most closely matches your even-numbered points below :-)

Perhaps more succinctly, they have a general purpose representation but
it's snippets of hand-written lisp code, and no way to automatically
generate more like it.

Josh

On Thursday 04 October 2007 02:59:38 pm, Edward W. Porter wrote:
> Josh,
>
> (Talking of “breaking the small hardware mindset,” thank god for the
> company with the largest hardware mindset -- or at least the largest
> physical embodiment of one-- Google.  Without them I wouldn’t have
> known what “FARG” meant, and would have had to either (1) read your
> valuable response with less than the understanding it deserves or (2)
> embarrassed myself by admitting ignorance and asking for a
> clarification.)
>
> With regard to your answer, copied below, I thought the answer would
> be something like that.
>
> So which of the below types of “representational problems” are the
> reasons why their basic approach is not automatically extendable?
>
>   1. They have no general purpose representation that can
represent
> almost anything in a sufficiently uniform representational scheme to
> let their analogy net matching algorithm be universally applied
> without requiring custom patches for each new type of thing to be
> represented.
>
>   2. They have no general purpose mechanism for determining
what are
> relevant similarities and generalities across which to allow slippage
> for purposes of analogy.
>
>   3. They have no general purpose mechanism for
> automatically finding which compositional patterns map to which lower
> level representations, and which of those compositional patterns are
> similar to each other in a way appropriate for slippages.
>
>   4. They have no general purpose mechanism for
> automatically determining what would be appropriately coordinated
> slippages in semantic hyperspace.
>
>   5. Some reason not listed above.
>
> I don’t know the answer.  There is no reason why you should.  But if
> you
> -- or any other interested reader –  do, or if you have any good
thoughts
> on the subject, please tell me.
>
> I may be naïve.  I may be overly big-hardware optimistic.  But based
> on the architecture I have in mind, I think a Novamente-type system,
> if it is not already architected to do so, could be modified to handle
> all of these problems (except perhaps 5, if there is a 5) and, thus,
> provide p

RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
In response to Pei Wang’s post of 10/4/2007 3:13 PM

Thanks for giving us a pointer so such inside info.

Googling for the article you listed I found

1. The Logic of Categorization, by Pei Wang at
http://nars.wang.googlepages.com/wang.categorization.pdf FOR FREE; and

2. A logic of categorization Authors: Wang, Pei; Hofstadter,
Douglas; Source: Journal of Experimental & Theoretical Artificial
Intelligence <http://www.ingentaconnect.com/content/tandf/teta> , Volume
18, Number 2, June 2006 , pp. 193-213(21) FOR $46.92

Is the free one roughly as good as the $46.92 one, and, if not, are you
allowed to send me a copy of the better one for free?

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 3:13 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>
>
> Josh,
>
> (Talking of "breaking the small hardware mindset," thank god for the
> company with the largest hardware mindset -- or at least the largest
> physical embodiment of one-- Google.  Without them I wouldn't have
> known what "FARG" meant, and would have had to either (1) read your
> valuable response with less than the understanding it deserves or (2)
> embarrassed myself by admitting ignorance and asking for a
> clarification.)
>
> With regard to your answer, copied below, I thought the answer would
> be something like that.
>
> So which of the below types of "representational problems" are the
> reasons why their basic approach is not automatically extendable?
>
>
> 1. They have no general purpose representation that can represent
> almost anything in a sufficiently uniform representational scheme to
> let their analogy net matching algorithm be universally applied
> without requiring custom patches for each new type of thing to be
> represented.
>
> 2. They have no general purpose mechanism for determining what are
> relevant similarities and generalities across which to allow slippage
> for purposes of analogy.
>
> 3. They have no general purpose mechanism for automatically finding
> which compositional patterns map to which lower level representations,
> and which of those compositional patterns are similar to each other in
> a way appropriate for slippages.
>
> 4. They have no general purpose mechanism for automatically
> determining what would be appropriately coordinated slippages in
> semantic hyperspace.
>
> 5. Some reason not listed above.
>
> I don't know the answer.  There is no reason why you should.  But if
> you -- or any other interested reader –  do, or if you have any good
> thoughts on the subject, please tell me.

I guess I do know more on this topic, but it is a long story that I don't
have the time to tell. Hopefully the following paper can answer some of
the questions:

A logic of categorization
Pei Wang and Douglas Hofstadter
Journal of Experimental & Theoretical Artificial Intelligence, Vol.18,
No.2, Pages 193-213, 2006

Pei

> I may be naïve.  I may be overly big-hardware optimistic.  But based
> on the architecture I have in mind, I think a Novamente-type system,
> if it is not already architected to do so, could be modified to handle
> all of these problems (except perhaps 5, if there is a 5) and, thus,
> provide powerful analogy drawing across virtually all domains.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
> -Original Message-
> From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 04, 2007 1:44 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] breaking the small hardware mindset
>
>
>
> On Thursday 04 October 2007 10:56:59 am, Edward W. Porter wrote:
> > You appear to know more on the subject of current analogy drawing
> > research than me. So could you please explain to me what are the
> > major current problems people are having in trying figure out how to
> > draw analogies using a structure mapping approach that has a
> > mechanism for coordinating similarity slippage, an approach somewhat
> > similar to Hofstadter approach in Copycat?
>
> > Lets say we want a system that could draw analogies in real time
> > when generating natural language output at the level people can,
> > assuming there is some roughly semantic-net like representation of
> > world knowledge, and lets say we have roughly brain level hardware,
> > what ever that is.  What are the c

RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
Josh,

(Talking of “breaking the small hardware mindset,” thank god for the
company with the largest hardware mindset -- or at least the largest
physical embodiment of one-- Google.  Without them I wouldn’t have known
what “FARG” meant, and would have had to either (1) read your valuable
response with less than the understanding it deserves or (2) embarrassed
myself by admitting ignorance and asking for a clarification.)

With regard to your answer, copied below, I thought the answer would be
something like that.

So which of the below types of “representational problems” are the reasons
why their basic approach is not automatically extendable?

1. They have no general purpose representation that can
represent almost anything in a sufficiently uniform representational
scheme to let their analogy net matching algorithm be universally applied
without requiring custom patches for each new type of thing to be
represented.

2. They have no general purpose mechanism for determining
what are relevant similarities and generalities across which to allow
slippage for purposes of analogy.

3. They have no general purpose mechanism for
automatically finding which compositional patterns map to which lower
level representations, and which of those compositional patterns are
similar to each other in a way appropriate for slippages.

4. They have no general purpose mechanism for
automatically determining what would be appropriately coordinated
slippages in semantic hyperspace.

5. Some reason not listed above.

I don’t know the answer.  There is no reason why you should.  But if you
-- or any other interested reader –  do, or if you have any good thoughts
on the subject, please tell me.

I may be naïve.  I may be overly big-hardware optimistic.  But based on
the architecture I have in mind, I think a Novamente-type system, if it is
not already architected to do so, could be modified to handle all of these
problems (except perhaps 5, if there is a 5) and, thus, provide powerful
analogy drawing across virtually all domains.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 1:44 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


On Thursday 04 October 2007 10:56:59 am, Edward W. Porter wrote:
> You appear to know more on the subject of current analogy drawing
> research than me. So could you please explain to me what are the major
> current problems people are having in trying figure out how to draw
> analogies using a structure mapping approach that has a mechanism for
> coordinating similarity slippage, an approach somewhat similar to
> Hofstadter approach in Copycat?

> Lets say we want a system that could draw analogies in real time when
> generating natural language output at the level people can, assuming
> there is some roughly semantic-net like representation of world
> knowledge, and lets say we have roughly brain level hardware, what
> ever that is.  What are the current major problems?

The big problem is that structure mapping is brittly dependent on
representation, as Hofstadter complains; but the FARG school hasn't
really come up with a generative theory (every Copycat-like analogizer
requires a pile of human-written Codelets which increases linearly with
the knowledge base -- and thus there is a real problem building a Copycat
that can learn its concepts).

In my humble opinion, of course.

Josh
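
For concreteness, a minimal sketch of the structure-mapping idea being
criticized here (illustrative only; neither Gentner's SME nor a FARG
program, and the two tiny domains are invented): score an object-to-object
mapping by how many relation triples it carries over intact.  Because only
identically named relations can match, the outcome depends brittly on how
each domain happens to be described.

from itertools import permutations

# Two domains described as (relation, arg1, arg2) triples.
solar = [("attracts", "sun", "planet"), ("revolves_around", "planet", "sun")]
atom  = [("attracts", "nucleus", "electron"), ("revolves_around", "electron", "nucleus")]

def objects(relations):
    """All objects mentioned in a set of relation triples."""
    return sorted({arg for _, a, b in relations for arg in (a, b)})

def best_mapping(base, target):
    """Exhaustively try object correspondences; keep the one that preserves
    the most relations verbatim."""
    base_objs, targ_objs = objects(base), objects(target)
    best, best_score = None, -1
    for perm in permutations(targ_objs, len(base_objs)):
        mapping = dict(zip(base_objs, perm))
        score = sum((rel, mapping[a], mapping[b]) in target for rel, a, b in base)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

print(best_mapping(solar, atom))
# ({'planet': 'electron', 'sun': 'nucleus'}, 2) -- rename one relation in
# either domain and the score collapses, which is the brittleness at issue.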


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50064710-fa7794

RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
In response to the below post from Mike Tintner of 10/4/2007 12:33 PM:

You talk about the Cohen article I quoted as perhaps leading to a major
paradigm shift, but actually much of its central thrust is similar to
ideas that have been around for decades.  Cohen's gists are surprisingly
similar to the scripts Schank was talking about circa 1980.

Again, I think the major paradigm shift needed for AGI is not so much some
new idea that blows everything away.  Rather, it is a realization of how
most of the basic problems in AI have actually been solved at a conceptual
level; an appreciation of the power of the concepts we already have and an
understanding of what they could do if put together and run on brain-level
hardware with human-level world knowledge; and a focus on learning how to
pick and choose the right components from all these ideas and on getting
them to work together well, automatically, on such really powerful
hardware.

As Goertzel points out in his articles on Novamente -- and as anyone who
has thought about the problem understands -- even with brain-level hardware
you have to come up with good, context-appropriate schemes for distributing
the computational power you have to where it is most effective.  This is
because no matter how great your computational power is, it will always be
infinitesimal compared to the massively combinatorial space of possible
inferences and computations.  There are lots of possible schemes for how
to do this, including sophisticated probabilistic inference and
context-specific importance weighting.  But until I see results from actual
world-knowledge-size systems running with various such algorithms, I can't
begin to understand how big a problem it is to get things to work well.
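
One minimal way to make "context-specific importance weighting" concrete
(an illustrative sketch only; the task names and weights are invented, and
this is not Novamente's actual attention-allocation scheme): spread a
fixed inference budget across candidate inference tasks in proportion to
their current importance, so the combinatorial space is explored where it
currently matters most.

def allocate(budget_steps, importance):
    """Split a fixed number of inference steps in proportion to importance."""
    total = sum(importance.values())
    return {task: round(budget_steps * w / total) for task, w in importance.items()}

importance = {
    "goal-relevant deduction":  8.0,   # hypothetical context weights
    "recent-percept abduction": 5.0,
    "background consolidation": 1.0,
}

print(allocate(10_000, importance))
# {'goal-relevant deduction': 5714, 'recent-percept abduction': 3571,
#  'background consolidation': 714}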

Regarding your disappointment that Cohen's schemas operate at something
close to a predicate-logic level -- far removed from the actual sensations
from which one would think they would be derived -- I expressed a similar
sentiment in my response to the post to which you are now responding.  A
good human-level system should have much more visual grounding, and much
more sophisticated grounding at that.  But that is not meant as a criticism
of Cohen's work, because he is trying to get stuff done on relatively small
hardware.

At the risk of repeating myself, check out the visual grounding in Thomas
Serre's great article about a visual recognition system
(http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf).
I have cited this article multiple times in the last week, but it blows me
away.  It does not pretend to explain the whole story of visual
recognition, but it explains a lot of it.  It gives a very good idea of the
hierarchical memory that Jeff Hawkins and many others are heralding as the
solution to much of the non-literal match problem (previously one of the
major problems in AI); it gives a pretty good feel for the types of
grounding our brains actually use; it demonstrates, through simulations,
the importance of computer power in brain understanding; and it is a damn
powerful little system.  To the extent that there are new paradigms, this
article captures a few of them.

You will note that the type of hierarchical representation used in Serre's
paper would not normally compare views of similar objects at the pixel
level, but at levels higher up in its hierarchical memory scheme that are
derived from pixel-level mappings against the different views separately.
So schemas of the type Cohen talks about, if operating at a semantic level
on top of a hierarchical representation like that used by Serre, would not
operate at anything close to the pixel level, but they could be quickly
mapped to, or from, the pixel level and the intermediate levels in between.
Implications from such intermediate representations could be combined with
those from the semantic level to improve semantic implication from visual
information.  They could also be used to imagine, from such intermediate
representations, semantically relevant information such as generalizations
of how, or whether, a context-appropriate view of an object would fit in a
given context.  (I don't think Serre focuses much on top-down processing,
except mainly for inhibition of less relevant upward flow, but there has
been much work on top-down information flows, so it is not hard to imagine
how they could be mapped into his system.)
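
A toy illustration of why such comparisons happen above the pixel level
(an HMAX-flavored sketch only, not Serre's actual model; the templates and
"images" are made up): two shifted views of the same bar disagree pixel by
pixel, but a layer of local template responses followed by a max over
positions is shift-tolerant, so the views agree at that higher level.

def layer1(image, templates):
    """Response of each template at each position (simple dot products)."""
    k = len(templates[0])
    return [[sum(a * b for a, b in zip(image[i:i + k], t))
             for i in range(len(image) - k + 1)] for t in templates]

def layer2(responses):
    """Max-pool each template's responses over all positions."""
    return [max(r) for r in responses]

templates = [(1, 1, 0), (0, 1, 1)]           # hypothetical local features
view_a = [0, 1, 1, 0, 0, 0, 0, 0]            # the same bar, two positions
view_b = [0, 0, 0, 0, 1, 1, 0, 0]

print(sum(a == b for a, b in zip(view_a, view_b)))   # pixel agreement: 4 of 8
print(layer2(layer1(view_a, templates)))             # [2, 2]
print(layer2(layer1(view_b, templates)))             # [2, 2] -- identical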

As the Serre article shows, grounding of semantic representations at the
pixel level, and more importantly at the many levels between the semantic
and the pixel level, is possible with today's hardware in limited domains.
It should be fully possible across all sensory domains with the much more
powerful hardware that the Serres of the future will be working on.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Thursd

RE: [agi] breaking the small hardware mindset

2007-10-04 Thread Edward W. Porter
Josh, in your 10/4/2007 9:57 AM post you wrote:

“RESEARCH IN ANALOGY-MAKING IS SLOW -- I CAN ONLY THINK OF GENTNER AND
HOFSTADTER AND THEIR GROUPS AS MAJOR MOVERS. WE DON'T HAVE A SOLID THEORY
OF ANALOGY YET (STRUCTURE-MAPPING TO THE CONTRARY NOTWITHSTANDING). IT'S
CLEARLY CENTRAL, AND SO I DON'T UNDERSTAND WHY MORE PEOPLE AREN'T WORKING
ON IT. (BTW: ANYTIME YOU'RE DOING ANYTHING THAT EVEN SMELLS LIKE SUBGRAPH
ISOMORPHISM, BIG IRON IS YOUR FRIEND.)”

You appear to know more on the subject of current analogy drawing research
than me. So could you please explain to me what are the major current
problems people are having in trying figure out how to draw analogies
using a structure mapping approach that has a mechanism for coordinating
similarity slippage, an approach somewhat similar to Hofstadter approach
in Copycat?

Lets say we want a system that could draw analogies in real time when
generating natural language output at the level people can, assuming there
is some roughly semantic-net like representation of world knowledge, and
lets say we have roughly brain level hardware, what ever that is.  What
are the current major problems?

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 9:57 AM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


On Wednesday 03 October 2007 09:37:58 pm, Mike Tintner wrote:

> I disagree also re how much has been done.  I don't think AGI -
> correct me - has solved a single creative problem - e.g. creativity -
> unprogrammed adaptivity - drawing analogies - visual object recognition -
> NLP - concepts - creating an emotional system - general learning -
> embodied/grounded knowledge - visual/sensory thinking - every dimension
> in short of "imagination". (Yes, vast creativity has gone into narrow AI,
> but that's different).

Ah, the Lorelei sings so sweetly. That's what happened to AI in the 80's --
it went off chasing "human-level performance" at specific tasks, which
requires a completely different mindset (and something of a different
toolset) than solving the general AI problem. To repeat a previous letter,
solving particular problems is engineering, but AI needed science.

There are, however, several subproblems that may need to be solved to make
a general AI work. General learning is surely one of them. I happen to
think that analogy-making is another. But there has been a significant
amount of basic research done on these areas. 21st century AI, even narrow
AI, looks very different from, say, 80's expert systems. Lots of new
techniques that work a lot better. Some of them require big iron, some
don't.

Research in analogy-making is slow -- I can only think of Gentner and
Hofstadter and their groups as major movers. We don't have a solid theory
of analogy yet (structure-mapping to the contrary notwithstanding). It's
clearly central, and so I don't understand why more people aren't working
on it. (btw: anytime you're doing anything that even smells like subgraph
isomorphism, big iron is your friend.)

One main reason I support the development of AGI as a serious subfield is
not that I think any specific approach here is likely to work (even mine),
but that there is a willingness to experiment and a tolerance for new and
odd-sounding ideas that spells a renaissance of science in AI.

Josh




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49804068-d1884d

RE: [agi] Another AGI Project

2007-10-04 Thread Edward W. Porter
Response to Mike Tintner’s  Thu 10/4/2007 7:36 AM post:

I skimmed “LGIST: Learning Generalized Image Schemas for Transfer Thrust D
Architecture Report”, by Carole Beal and Paul Cohen at the USC Information
Sciences Institute.  It was one of the PDFs listed on the web link you
sent me (at http://eksl.isi.edu/files/papers/cohen_2006_1160084799.pdf).
It was interesting and valuable.  I found its initial few pages a good
statement of some solid AI ideas.  Its idea of splitting states based on
entropy is a good one, one that I myself have considered as a guide for
when and where in semantic space to split models and how to segment
temporal representations.
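
A minimal sketch of what entropy-guided state splitting can look like (one
reading of the general idea, not LGIST's actual algorithm; the observations
and attribute names are invented): split a state on an attribute when the
weighted entropy of the resulting sub-states is clearly lower than that of
the parent, i.e. when the split makes outcomes more predictable.

from math import log2
from collections import Counter

def entropy(outcomes):
    """Shannon entropy (in bits) of a list of outcome labels."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum(c / n * log2(c / n) for c in counts.values())

def split_gain(observations, attribute):
    """observations: list of (features_dict, outcome). Return entropy drop."""
    parent = entropy([o for _, o in observations])
    groups = {}
    for feats, outcome in observations:
        groups.setdefault(feats[attribute], []).append(outcome)
    weighted = sum(len(g) / len(observations) * entropy(g) for g in groups.values())
    return parent - weighted

obs = [({"moving": True},  "bump"), ({"moving": True},  "bump"),
       ({"moving": False}, "idle"), ({"moving": False}, "idle")]
print(split_gain(obs, "moving"))   # 1.0 bit -- splitting on "moving" pays off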

But the system appears to be a pretty small one.  It appears to start out
with a fair number of relatively abstract programmer-defined concepts
(shown in Table One), which is probably necessary for what it is trying to
accomplish on the hardware it has to work with.  But that is very different
from a more human-brain-like approach, which would be able to learn most of
those concepts itself, and thus would probably have them be more grounded.
The system starts with a lot of the hard work necessary for the type of
problem it is trying to solve already done (which is very helpful for what
it is trying to do), but this at least raises a question about how well it
will be able to learn in areas where that type of hard work has not already
been done for it.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 04, 2007 7:36 AM
To: agi@v2.listbox.com
Subject: [agi] Another AGI Project


Another AGI project -some similarities to Ben's. (I was not however able
to
play with my Wubble - perhaps you'll have better luck). Comments?

http://eksl.isi.edu/cgi-bin/page.cgi?page=project-jean.html



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49688212-d1bb83

RE: [agi] breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
Mike Tintner said in his 10/3/2007 9:38 PM post:



I DON'T THINK AGI - CORRECT ME - HAS SOLVED A SINGLE CREATIVE PROBLEM -
E.G. CREATIVITY - UNPROGRAMMED ADAPTIVITY - DRAWING ANALOGIES - VISUAL
OBJECT RECOGNITION - NLP - CONCEPTS -  CREATING AN EMOTIONAL SYSTEM -
GENERAL LEARNING - EMBODIED/ GROUNDED KNOWLEDGE - VISUAL/SENSORY
THINKING.- EVERY DIMENSION IN SHORT OF "IMAGINATION".



A lot of good thinking has gone into how to attack each of the problems
you listed above.  I am quite sure that if I spent less than a week doing
Google research on each such problem I could find at least twenty very
good articles on how to attack each of them.  Yes, most of the approaches
don't work very well yet, but they don't have the benefit of sufficiently
large integrated systems.



In AI more is more.  More knowledge provides more constraint, which leads
to faster and better solutions.  More knowledge provides more
context-specific probabilities and models.  World knowledge helps solve the
problem of common sense.  Massive sensory and emotional labeling provides
grounding.  Massive associations provide meaning and thus appropriate
implication.  More computational power allows more alternatives to be
explored.  Moore is more.



In my mind the question is not whether each of these problems can be
solved, it is how much time, hardware, and tweaking will be required to
perform them at a human level.  For example, having such a large system
learn how to run itself automatically is non-trivial because the size of
the problem space is very large.  Getting it all to work together well
automatically might require some significant conceptual breakthroughs; it
will almost certainly require some minor ones.  We won't know until we
try.







To give you just one example of some of the tremendously creative work
that has been done on one of the allegedly unsolved problems described
above, read Doug Hofstadter's work on Copycat to get a vision of how one
elegant system solves the problem of analogy in a clever toy domain in a
surprisingly creative way.  That basic approach, described at a very broad
level, could be mapped into a Novamente-like machine to draw analogies
between virtually any types of patterns that share similarities the system
finds worthy of note in the current context.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 9:38 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


Edward: The biggest brick wall is the small-hardware mindset that has been
absolutely necessary for decades to get anything actually accomplished on
the hardware of the day.

Completely disagree. It's that purely numerical mindset about small/big
hardware that I see as so widespread and that shows merely intelligent
rather than creative thinking.  IQ which you mention is about intelligence
not creativity. It's narrow AI as opposed to AGI.

Somebody can no doubt give me the figures here - worms and bees and v.
simple animals are truly adaptive despite having extremely small brains.
(How many cells/ neurons ?)

I disagree also re how much has been done.  I don't think AGI - correct me
- has solved a single creative problem - e.g. creativity - unprogrammed
adaptivity - drawing analogies - visual object recognition - NLP -
concepts -  creating an emotional system - general learning - embodied/
grounded knowledge - visual/sensory thinking.- every dimension in short of
"imagination". (Yes, vast creativity has gone into narrow AI, but that's
different).  If you don't believe it takes major creativity (or "knock-out
ideas" pace Voss), you don't solve creative problems.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49608086-0b0a58

RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
nstratable results on the type of hardware most have had
access to in the past.



Rather than this small-hardware thinking, those in the field of AGI should
open up their minds to the power of big numbers -- complexity, as some call
it -- one of the most seminal concepts in all of science.  They should
look at all of the very powerful tools AI has already cooked up for us and
think about how these tools can be put together into powerful systems once
we are free from the stranglehold of massively sub-human hardware -- as
we are now starting to be.  They should start thinking about how we
actually do appropriate probabilistic and goal-weighted inference over
world knowledge with brain-level hardware in real time.



Some have already spent a lot of time thinking about exactly this.  Those
who are interested in AGI -- and haven't already done so -- should follow
their lead.



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 6:22 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Edward Porter: I don't know about you, but I think there are actually a lot
of very bright people in the interrelated fields of AGI, AI, Cognitive
Science, and Brain science.  There are also a lot of very good ideas
floating around.

Yes there are bright people in AGI. But there's no one remotely close to
the level, say, of von Neumann or Turing, right? And do you really think a
revolution such as AGI is going to come about without that kind of
revolutionary, creative thinker? Just by tweaking existing systems, and
increasing computer power and complexity?  Has any intellectual revolution
ever happened that way? (Josh?)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49575176-b41b51

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Re: The following statement in Linas Vepstas’s  10/3/2007 5:51 PM post:

P.S. THE INDIAN MATHEMATICIAN RAMANUJAN SEEMS TO HAVE MANAGED TO TRAIN A
SET OF NEURONS IN HIS HEAD TO BE A VERY FAST SYMBOLIC MULTIPLIER/DIVIDER.
WITH THIS, HE WAS ABLE TO SEE VAST AMOUNTS (SIX VOLUMES WORTH BEFORE DYING
AT AGE 26) OF STRANGE AND INTERESTING RELATIONSHIPS BETWEEN CERTAIN
EQUATIONS THAT WERE OTHERWISE QUITE OPAQUE TO OTHER HUMAN BEINGS. SO,
"RUNNING AN EMULATOR IN YOUR HEAD" IS NOT IMPOSSIBLE, EVEN FOR HUMANS;
ALTHOUGH, ADMITEDLY, ITS EXTREMELY RARE.

As a young patent attorney I worked in a firm in NYC that did a lot of
work for a major Japanese Electronics company.  Each year they sent a
different Japanese employee to our firm to, among other things, improve
their English and learn more about U.S. patent law.  I made a practice of
having lunch with these people because I was fascinated with Japan.

One of them once told me that in Japan it was common for high school boys
who were interested in math, science, or business to go to abacus classes
after school or on weekends.  He said once they fully mastered using
physical abacuses, they were taught to create a visually imagined abacus
in their mind that they could operate faster than a physical one.

I asked if his still worked.  He said it did, and that he expected it to
continue to do so for the rest of his life.  To prove it he asked me to
pick any two three digit numbers and he would see if he could get the
answer faster than I could on a digital calculator.  He won, he had the
answer before I had finished typing in the numbers on the calculator.

He said his talent was not that unusual among bright Japanese, that many
thousands of Japanese businessmen carry such mental abacuses with them at
all times.

So you see how powerful representational and behavioral learning can be in
the human mind.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:51 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not
> require recursive self improvement,
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use
> it, implying it is necessary for human-level AGI.

Nah. A few people have suggested that an extremely-low-IQ "internet worm"
that is capable of modifying its own code might be able to ratchet itself
up to human intelligence levels.  In-so-far as it "modifies its own code",
it's RSI.

First, I don't think such a thing is likely. Secondly, even if it is
likely, one can implement an entirely equivalent thing that doesn't
actually "self modify" in this way, by using e.g. scheme or lisp, or even,
with the proper structures, in C.

I think that, at this level, talking about "code that can modify itself"
is smoke-n-mirrors. Self-modifying code is just one of many things in a
programmer's kit bag, and there are plenty of equivalent formulations
that don't actually require changing source code and recompiling.

Put it this way: if I were an AGI, and I was prohibited from recompiling
my own program, I could still emulate a computer with pencil and paper,
and write programs for my pencil-n-paper computer. (I wouldn't use
pencil-n-paper, of course, I'd "do it in my head"). I might be able to
do this pencil-paper emulatation pretty danged fast (being AGI and all),
and then re-incorporate those results back into my own thinking.

In fact, I might choose to do all of my thinking on my pen-n-paper
emulator, and, since I was doing it all in my head anyway, I might not
bother to tell my creator that I was doing this. (which is not to say it
would be undetectable .. creator might notice that an inordinate
amount of cpu time is being used in one area, while other previously
active areas have gone dormant).

So a prohibition from modifying one's own code is not really much of a
prohibition at all.

--linas

p.s. The Indian mathematician Ramanujan seems to have managed to train a
set of neurons in his head to be a very fast symbolic multiplier/divider.
With this, he was able to see vast amounts (six volumes worth before
dying at age 26) of strange and interesting relationships between certain
equations that were otherwise quite opaque to other human beings. So,
"running an emulator in your head" is not impossible, even for humans;
although, admitedly, its extremely rare.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49534399-4aa5a4

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
To Mike Dougherty regarding the below comment to my prior post:

I think your notion that post-grads with powerful machines would only
operate in the space of ideas that don’t work is unfair.

A lot of post-grads may be drones, but some of them are cranking out some
really good stuff.  The article, Learning a Dictionary of Shape-Components
in Visual Cortex: Comparisons with Neurons, Humans and Machines, by Thomas
Serre (accessible by Google), which I cited the other day, is a prime
example.

I don’t know about you, but I think there are actually a lot of very
bright people in the interrelated fields of AGI, AI, Cognitive Science,
and Brain science.  There are also a lot of very good ideas floating
around.  And having seen how much increased computing power has already
sped up and dramatically increased what all these fields are doing, I am
confident that multiplying by several thousand fold more the power of the
machine people in such fields can play with would greatly increase their
productivity.

I am not a fan of huge program size per se, but I am a fan of being able
to store and process a lot of representation.  You can’t compute human
level world knowledge without such power.  That’s the major reason why the
human brain is more powerful than the brains of rats, cats, dogs, and
monkeys -- because it has more representational and processing power.

And although clock cycles can be wasted doing pointless things such as
do-nothing loops, generally being able to accomplish a given useful
computational task in less time makes a system smarter at some level.

Your last paragraph actually seems to make an argument for the value of
clock cycles because it implies general intelligences will come through
iterations.  More ops/sec enable iterations to be made faster.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Dougherty [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:20 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to
> play with, things would really start jumping.  Within ten years the
> equivents of such machines could easily be sold for somewhere between
> $10k and $100k, and lots of post-grads will be playing with them.

The only value I see in giving post-grads the kind of computing hardware
you are proposing is that they can more quickly exhaust the space of ideas
that won't work.  Just because a program has more lines of code does not
make it more elegant and just because there are more clock cycles per unit
time does not make a computer any smarter.

Have you ever computed the first dozen iterations of a sierpinski gasket
by hand?  There appears to be no order at all.  Eventually over enough
iterations the pattern becomes clear.  I have little doubt that general
intelligence will develop in a similar way:  there will be many apparently
unrelated efforts that eventually flesh out in function until they
overlap.  It might not be seamless but there is not enough evidence that
human cognitive processing is a seamless process either.
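
For reference, the construction alluded to here is easy to reproduce (a
standard chaos-game sketch, nothing specific to this thread): each point
jumps halfway toward a randomly chosen corner of a triangle, the first
dozen points look like noise, and the gasket only emerges after many
thousands of iterations.

import random

corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]   # triangle vertices
x, y = 0.3, 0.3
points = []
for _ in range(20_000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2               # jump halfway to a corner
    points.append((x, y))

print(points[:12])   # no visible order in the first dozen iterations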


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49523228-fa9460

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter


Again a well reasoned response.

With regard to the limitations of AM, I think if the young Doug Lenat and
those of his generation had had 32K processor Blue Gene Ls, with 4TBytes
of RAM, to play with they would have soon started coming up with things
way way beyond AM.

In fact, if the average AI post-grad of today had such hardware to play
with, things would really start jumping.  Within ten years the equivalents
of such machines could easily be sold for somewhere between $10k and
$100k, and lots of post-grads will be playing with them.

Hardware to the people!

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Thanks!

It's worthwhile being specific about levels of interpretation in the
discussion of self-modification. I can write self-modifying assembly code
that yet does not change the physical processor, or even its microcode if
it's one of those old architectures. I can write a self-modifying Lisp
program that doesn't change the assembly language interpreter that's
running it.

So it's certainly possible to push the self-modification up the
interpretive abstraction ladder, to levels designed to handle it cleanly.
But the basic point, I think, stands: there has to be some level that is
both controlling the way the system does things, and gets modified.
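
A toy illustration of pushing self-modification up the abstraction ladder
(an invented example, not Josh's design): the "code" being modified is
just a rule table walked by a fixed interpreting loop, so the level below
-- here the Python interpreter itself -- never changes.

# The modifiable level: a table of named rules.
rules = {"greet": lambda s: "hello " + s}

def interpret(rule_name, arg):
    """Fixed interpreter: looks up and applies a rule; this level never changes."""
    return rules[rule_name](arg)

def self_modify():
    """The system rewrites its own rule table at runtime."""
    rules["greet"] = lambda s: "HELLO, " + s.upper() + "!"

print(interpret("greet", "world"))   # hello world
self_modify()
print(interpret("greet", "world"))   # HELLO, WORLD!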

I agree with you that there has been little genetic change in human brain
structure since the paleolithic, but I would claim that culture *is* the
software and it has been upgraded drastically. And I would agree that the
vast bulk of human self-improvement has been at this software level, the
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to
understand them well enough to do basic engineering on them -- a
self-model. However, we didn't need that to build all the science and
culture we have so far, a huge software self-improvement. That means to me
that it is possible to abstract out the self-model until the part you need
to understand and modify is some tractable kernel. For human culture that
is the concept of science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it
could be recursively self-improving at a very abstract, highly interpreted
level, and still have a huge amount to learn before it can do anything
about the next level down.

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely
going to be one of the enabling factors, over the next decade or two. But
I don't think AM would get too much farther on a Blue Gene than on a
PDP-10 -- I think it required hyper-exponential time for concepts of a
given size.

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
>
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
>
> I did have a question about the following section
>
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND
> WHATNOT, BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY
> CIVILIZATION HAS (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO
> SCIENCE AS THE METHODOLOGY OF CHOICE FOR ITS SAGES.”
>
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
>
> My question is: if a machine’s world model includes the system’s model
> of itself and its own learned mental representation and behavior
> patterns, is it not possible that modification of these learned
> representations and behaviors could be enough to provide what you are
> talking about -- without requiring modifying its code at some deeper
> level.
>
> For example, it is commonly said that humans and their brains have
> changed very little in the last 30,000 years, that if a new born from
> that age were raised in our society, nobody would notice the
> difference.  Yet in the last 30,000 years the sophistication of
> mankind’s understanding of, and ability to manipulate, the world has
> grown exponentially.  There has been tremendous changes in code, at
> the level of learned representations and learned mental behaviors,
> such as advances in mathematics, science, and technology, but there
> has been very little, if any, significant changes in code at the level
> of inherited brain hardware and software.
>
> Take for example mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of
> complexity they could not otherwise even begin to.  But my belief is
> that

RE: [agi] RSI

2007-10-03 Thread Edward W. Porter
Good distinction!


Edward W. Porter


 -Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:22 PM
To: agi@v2.listbox.com
Subject: RE: [agi] RSI



Edward W. Porter writes:

> As I say, what is, and is not, RSI would appear to be a matter of
> definition.
> But so far the several people who have gotten back to me, including
> yourself, seem to take the position that that is not the type of
recursive
> self improvement they consider to be "RSI." Some people have drawn the
> line at coding. RSI they say includes modifying ones own code, but code
> of course is a relative concept, since code can come in higher and
higher
> level languages and it is not clear where the distinction between code
and
> non-code lies.

As I had included comments along these lines in a previous conversation, I
would like to clarify.  That conversation was not specifically about a
definition of RSI, it had to do with putting restrictions on the type of
RSI we might consider prudent, in terms of cutting the risk of creating
intelligent entities whose abilities grow faster than we can handle.

One way to think about that problem is to consider that building an AGI
involves taking a theory of mind and embodying it in a particular
computational substrate, using one or more layers of abstraction built on
the primitive operations of the substrate.  That implementation is not the
same thing as the mind model, it is one expression of the mind model.

If we do not give arbitrary access to the mind model itself or its
implementation, it seems safer than if we do -- this limits the extent
that RSI is possible: the efficiency of the model implementation and the
capabilities of the model do not change.  Those capabilities might of
course still be larger than was expected, so it is not a safety guarantee;
further analysis using the particulars of the model and implementation,
should be considered also.

RSI in the sense of "learning to learn better" or "learning to think
better" within a particular theory of mind seems necessary for any
practical AGI effort so we don't have to code the details of every
cognitive capability from scratch.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49469788-9ca8f0

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
From what you say below it would appear human-level AGI would not require
recursive self improvement, because as you appear to define it humans
don't either (i.e., we currently don't artificially substantially expand
the size of our brains).

I wonder what percent of the AGI community would accept that definition? A
lot of people on this list seem to hang a lot on RSI, as they use it,
implying it is necessary for human-level AGI.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 12:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its current
> behavior by changing to a new state with a modified behavior, and then
> from that new state (arguably "recursively") improving behavior to yet
> another new state, and so on and so forth?  If so, why wouldn't any
> system doing ongoing automatic learning that changed its behavior be
> an RSI system.

No; learning is just learning.

For example, humans are known to have 5 to 9 short-term memory "slots"
(this has been measured by a wide variety of psychology experiments, e.g.
ability to recall random data, etc.)

When reading a book, watching a movie, replying to an email, or solving
a problem, humans presumably use many or all of these slots (watching
a movie: to remember the characters, plot twists, recent scenes, etc.
Replying to this email: to remember the point that I'm trying to make,
while simultaneously composing a gramatical, pleasant-to-read sentence.)

Now, suppose I could learn enough neuropsychology to grow some extra
neurons in a petri dish, then implant them in my brain, and up my
short-term memory slots to, say, 50-100.  The new me would be like the old
me, except that I'd probably find movies and books to be trite
and boring, as they are threaded together from only a half-dozen
salient characteristics and plot twists (how many characters and
situations are there in Jane Austen's Pride & Prejudice?
Might it not seem like a children's book, since I'll be able
to "hold in mind" its entire plot, and have a whole lotta
short-term memory slots left-over for other tasks?).

Music may suddenly seem lame, being at most a single melody line
that expounds on a chord progression consisting of a half-dozen chords,
each chord consisting of 4-6 notes.  The new me might come to like
multiple melody lines exploring a chord progression of some 50 chords,
each chord being made of 14 or so notes...

The new me would probably be a better scientist: being able to
remember and operate on 50-100 items in short term memory will likely
allow me to decipher a whole lotta biochemistry that leaves current
scientists puzzled.  And after doing that, I might decide that some other
parts of my brain could use expansion too.

*That* is RSI.

--linas


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49387922-edf0e9


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Josh,

Thank you for your reply, copied below.  It was – as have been many of
your posts – thoughtful and helpful.

I did have a question about the following section

“THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
(MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
METHODOLOGY OF CHOICE FOR ITS SAGES.”

“THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”

My question is: if a machine’s world model includes the system’s model of
itself and its own learned mental representation and behavior patterns, is
it not possible that modification of these learned representations and
behaviors could be enough to provide what you are talking about -- without
requiring modifying its code at some deeper level.

For example, it is commonly said that humans and their brains have changed
very little in the last 30,000 years, and that if a newborn from that age
were raised in our society, nobody would notice the difference.  Yet in
the last 30,000 years the sophistication of mankind's understanding of,
and ability to manipulate, the world has grown exponentially.  There have
been tremendous changes in code at the level of learned representations
and learned mental behaviors, such as advances in mathematics, science,
and technology, but there has been very little, if any, significant
change in code at the level of inherited brain hardware and software.

Take for example mathematics and algebra.  These are learned mental
representations and behaviors that let a human manage levels of complexity
they could not otherwise even begin to.  But my belief is that when
executing such behaviors or remembering such representations, the basic
brain mechanisms involved – probability, importance, and temporal based
inference; instantiating general patterns in a context appropriate way;
context sensitive pattern-based memory access; learned patterns of
sequential attention shifts, etc. -- are all virtually identical to ones
used by our ancestors 30,000 years ago.

I think in the coming years there will be lots of changes in AGI code at a
level corresponding to the human inherited brain level.  But once
human-level AGI has been created -- with what will obviously have to be a
learning capability as powerful, adaptive, exploratory, creative, and as
capable of building upon its own advances as that of a human -- it is not
clear to me it would require further changes at a level equivalent to the
human inherited brain level to continue to operate and learn as well as a
human, any more than the tremendous advances of human civilization over
the last 30,000 years have required them.

Your implication that civilization had improved itself by moving “from
religion to philosophy to science” seems to suggest that the level of
improvement you say is needed might actually be at the level of learned
representation, including learned representation of mental behaviors.



As a minor note, I would like to point out the following concerning your
statement that:

“ALL AI LEARNING SYSTEMS TO DATE HAVE BEEN "WIND-UP TOYS" “

I think a lot of early AI learning systems, although clearly toys when
compared with humans in many respects, have been amazingly powerful
considering many of them ran on roughly fly-brain-level hardware.  As I
have been saying for decades, I know which end is up in AI -- its
computational horsepower. And it is coming fast.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 10:14 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!

> I have one major question for Josh.  You said
>
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
>
> Could you please elaborate on exactly what the “complex core of the
> whole problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge
is
seriously trying to design a 'never ending learning' machine." (Private
communication)

By which he meant what we tend to call "RSI" here. I think the "coming up
with new representations and techniques" part is pretty straightforward,
the question is how to do it. Search works, a la a GA, if y

RE: [agi] Religion-free technical content

2007-10-02 Thread Edward W. Porter
The below is a good post:

I have one major question for Josh.  You said

“PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS TO DO,
WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND TECHNIQUES.
THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING, GÖDEL-INVOKING
COMPLEX CORE OF THE WHOLE PROBLEM.”

Could you please elaborate on exactly what the “complex core of the whole
problem” is that you still think is currently missing?

Why for example would a Novamente-type system’s representations and
techniques not be capable of being self-referential in the manner you seem
to be implying is both needed and currently missing?

From my reading of Novamente, it would have a tremendous amount of
activation and representation of its own states, emotions, and actions.
In fact virtually every representation in the system would have weightings
reflecting its value to the system.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 02, 2007 4:39 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > a) the most likely sources of AI are corporate or military labs, and
> > not
just
> > US ones. No friendly AI here, but profit-making and
> > "mission-performing"
AI.
>
> Main assumption built into this statement: that it is possible to
> build
> an AI capable of doing anything except dribble into its wheaties, using
> the techiques currently being used.

Lots of smart people work for corporations and governments; why assume
they won't advance the state of the art?

Furthermore, it's not clear that One Great Blinding Insight is necessary.
Intelligence evolved, after all, making it reasonable to assume that it
can be duplicated by a series of small steps in the right direction.
> I have explained elsewhere why this is not going to work.

I find your argument quotidian and lacking in depth. Virtually any of the
salient properties of complex systems are true of any Turing-equivalent
computational system -- non-linearity, sensitive dependence on initial
conditions, provable unpredictability, etc. It's why complex systems can
be simulated on computers. Computer scientists have been dealing with
these issues for half a century and we have a good handle on what can and
can't be done.

> You can disagree with my conclusions if you like, but you did not
> cover
> this in Beyond AI.

The first half of the book, roughly, is about where and why classic AI
stalled and what it needs to get going. Note that some dynamical systems
theory is included.

> > b) the only people in the field who even claim to be interested in
> > building friendly AI (SIAI) aren't even actually building anything.
>
> That, Josh, is about to change.

Glad to hear it. However, you are now on the horns of a dilemma. If you
tell enough of your discoveries/architecture to convince me (and the other
more skeptical people here) that you are really on the right track, all
those governments and corporations will take them (as Derek noted) and
throw much greater resources at them than we can.

> So what you are saying is that I "[have no] idea how to make it
> friendly
> or even any coherent idea what friendliness might really mean."
>
> Was that your most detailed response to the proposal?

I think it's self-contradictory. You claim to have found a stable,
un-short-circuitable motivational architecture on the one hand, and you
claim that you'll be able to build a working system soon because you have
a way of bootstrapping on all the results of cog psych, on the other. But
the prime motivational (AND learning) system of the human brain is the
dopamine/reward-predictor error signal system, and it IS
short-circuitable.

> You yourself succinctly stated the final piece of the puzzle
> yesterday.
> When the first AGI is built, its first actions will be to make sure that
> nobody is trying to build a dangerous, unfriendly AGI.  After that
> point, the first friendliness of the first one will determine the
> subsequent motivations of the entire population, because they will
> monitor each other.

I find the hard take-off scenario very unlikely, for reasons I went into
at
some length in the book. (I know Eliezer likes to draw an analogy to
cellular
life getting started in a primeval soup, but I think the more apt parallel
to
draw is with the Cambrian Explosion.)

> The question is only whether the first one will be friendly:  any talk
> about "all AGIs" that pretends that there will be some other scenario is

> a meaningless.

A very loose and hyperbolic u

RE: [agi] The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities

2007-10-02 Thread Edward W. Porter
Re Jiri Jelinek’s below 10/2/2007 1:21 AM post:

Interesting links.

I just spent about a half hour skimming them.  I must admit I haven’t
spent enough time to get my head around how one would make a powerful AGI
using Hadoop or MapReduce, although it clearly could be helpful for
certain parts of the job, like getting information for use as a basis for
induction or inference.

If you have any more detailed thoughts on the subject I would be
interested in hearing them.

Also, the pricing listed on the first Amazon link seemed to indicate the
charge per “instance” was 10 cents/hour.  Does that mean you could use 1K
machines, with a total nominal 1.7 Topp/sec, 1.75 TByte RAM, and 160
TBytes of hard drive, for one hour for just $100?  Or that you could use
that much hardware for one year, 24/7, for just $876,000?  Of course the
interconnect, which is very important for AGI, is slow, 250Mbits/sec, or
just 1/40 that of a 10Tbit infiniband networked system, but still the
pricing is impressive.  It provides a valuable potential resource for at
least some types of AGI research.
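
(As a quick sanity check of the arithmetic above, here is a minimal Python
sketch.  The $0.10 per instance-hour price and the per-instance figures are
simply the 2007-era numbers implied by the totals quoted in this thread,
not anything taken from a current price list.)

# Back-of-the-envelope check of the EC2 figures discussed above.
# Assumed per-instance numbers (implied by the quoted totals):
# $0.10 per instance-hour, 1.75 GB RAM, 160 GB disk.

instances = 1000
price_per_instance_hour = 0.10                 # USD, as quoted

cost_per_hour = instances * price_per_instance_hour
cost_per_year = cost_per_hour * 24 * 365

ram_total_tb = instances * 1.75 / 1000         # 1.75 GB each -> 1.75 TB
disk_total_tb = instances * 160 / 1000         # 160 GB each  -> 160 TB

print(f"Cost for one hour:  ${cost_per_hour:,.0f}")    # $100
print(f"Cost for one year:  ${cost_per_year:,.0f}")    # $876,000
print(f"Aggregate RAM: {ram_total_tb:.2f} TB, disk: {disk_total_tb:.0f} TB")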

I only read enough of the Google PDF about MapReduce to understand what
MapReduce was and the major types of things it could be used for.  What
that reading made me think of was that it represented the type of
sub-human computation that human-level AGI's will be able to execute
and/or command and interface with millions of times faster than humans.
If it had access to the Googleplex -- once a hierarchy of MapReduce
software objects had been created -- it would be able to generate and
specify task-appropriate MapReduces more rapidly than we can generate NL
sentences.
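
(To make the MapReduce idea concrete, here is a toy Python sketch of the
pattern as the Google paper describes it: a map step emits key/value pairs,
a shuffle groups them by key, and a reduce step folds each group into a
result.  This is only an illustration of the idea, not the Hadoop or Google
API.)

from collections import defaultdict

def map_phase(documents):
    # "map": emit a (key, value) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # shuffle: group the emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # "reduce": fold each group into a single result (here, a count)
    return {key: sum(values) for key, values in groups.items()}

docs = ["AGI needs massive data", "massive data needs massive hardware"]
print(reduce_phase(map_phase(docs)))
# {'agi': 1, 'needs': 2, 'massive': 3, 'data': 2, 'hardware': 1}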

This again emphasizes one of my key points: an AGI with the hardware to be
human-level at the mental tasks we humans currently do much better than
machines will also be able to do, and interface with, the things computers
already do much faster than humans, thousands or millions of times faster
than we can, meaning its overall capability for many tasks will be
thousands of times ours.  So human-level AGI's will easily be made
superhuman for many tasks, greatly increasing their commercial value.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 02, 2007 1:21 AM
To: agi@v2.listbox.com
Subject: Re: [agi] The Future of Computing, According to Intel --
Massively multicore processors will enable smarter computers that can
infer our activities


Talking about processing power... A friend just sent me an email with
links some of you may find interesting:

--- cut --
Building or gaining access to computing resources with enough power to
complete jobs on large data sets usually costs a lot of money.  Amazon Web
Services (AWS) allows users to create and run (nearly) unlimited numbers
of virtual servers for a per-minute fee.  That means you could potentially
have a server farm of dozens of machines, all of which run for only a few
minutes, and be charged only for the time that you need them to finish
your computation job.

First, look at AWS ec2:
http://www.amazon.com/b/ref=sc_fe_l_2/104-0929857-7317547?ie=UTF8&node=201
590011&no=342430011&me=A36L942TSJ2AJA

Then, look at what MapReduce is:
http://labs.google.com/papers/mapreduce.html

Then, look at the open-source MapReduce framework, Hadoop:
http://lucene.apache.org/hadoop/

Then, look at how to use Hadoop on ec2:
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873&c
ategoryID=112

I believe that using AWS with Hadoop would be a very useful and
cost-efficient way to develop and test powerful AGI algorithms.
--- cut --

Regards,
Jiri Jelinek

On 10/1/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
>
>
> Check out the following article entitled:  The Future of Computing,
> According to Intel -- Massively multicore processors will enable
> smarter computers that can infer our activities.
>
> http://www.technologyreview.com/printer_friendly_article.aspx?id=19432
>
> Not only is the type of hardware needed for AGI coming fast, but one
> of the world's biggest, fastest, smartest computer technology
> companies is focusing on developing software using massively parallel
> hardware that is directly related to AGI.
>
> It's all going to start happening very fast. The race is on.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED] 
>  This list is sponsored by AGIRI: http://www.agiri.org/email To
> unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your opt

[agi] The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities

2007-10-01 Thread Edward W. Porter
Check out the following article entitled:  The Future of Computing,
According to Intel -- Massively multicore processors will enable smarter
computers that can infer our activities.  

http://www.technologyreview.com/printer_friendly_article.aspx?id=19432

Not only is the type of hardware needed for AGI coming fast, but one of
the world's biggest, fastest, smartest computer technology companies is
focusing on developing software using massively parallel hardware that is
directly related to AGI. 

It's all going to start happening very fast. The race is on.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48557502-e337d4

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
Richard and Matt,

The below is an interesting exchange.

For Richard I have the question: how is what you are proposing any
different from what could be done with Novamente, where, if one had
hardcoded a set of top-level goals, all of the perceptual, cognitive,
behavioral, and goal patterns -- and the activation of such patterns --
developed by the system would be molded not only by the probabilities of
the "world" in which the system dealt, but also by how important each of
those patterns has proven relative to the system's high-level goals?

So in a Novamente system you would appear to have the types of biases you
suggest, which would greatly influence each of the millions to trillions
(depending on system size) of patterns in the "cloud of concepts" that
would be formed, their links, and their activation patterns.

So, how is your system different?  What am I missing?


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 1:41 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Matt Mahoney wrote:
> Richard,
> Let me make sure I understand your proposal.  You propose to program
> friendliness into the motivational structure of the AGI as tens of
> thousands of hand-coded soft constraints or rules.  Presumably with so
> many rules, we should be able to cover every conceivable situation now
> or in the future where the AGI would have to make a moral decision.
> Among these rules: the AGI is not allowed to modify the function that
> computes its reward signal, nor is it allowed to create another AGI
> with a different function.
>
> You argue that the reward function becomes more stable after RSI.  I
> presume this is because when there are a large number of AGIs, they
> will be able to observe any deviant behavior, then make a collective
> decision as to whether the deviant should be left alone, reprogrammed,
> or killed.  This policing would be included in the reward function.
>
> Presumably the reward function is designed by a committee of
> upstanding citizens who have reached a consensus on what it means to
> be friendly in every possible scenario.  Once designed, it can never
> be changed.  Because if there were any mechanism by which all of the
> AGIs could be updated at once, then there is a single point of
> failure.  This is not allowed.  On the other hand, if the AGIs were
> updated one at a time (allowed only with human permission), then the
> resulting deviant behavior would be noticed by the other AGIs before
> they could be updated.  So the reward function remains fixed.
>
> Is this correct?

Well, I am going to assume that Mark is wrong and that you are not
trying to be sarcastic, but really do genuinely mean to pose the
questions.

You have misunderstood the design at a very deep level, so none of the
above would happen.

The multiple constraints are not explicitly programmed into the system
in the form of semantically interpretable statements (like Asimov's
laws), nor would there be a simple "reward function", nor would there be
a committee of experts who sat down and tried to write out a complete
list of all the rules.  These are all old-AI concepts (conventional,
non-complex AI), they simply do not map onto the system at all.

The AGI has a motivational system that *biasses* the cloud of concepts
in one direction or another, to make the system have certain goals, and
the nature of this bias is that during development, the concepts
themselves all grew from simple primitive ideas (so primitive that they
are not even ideas, but just sources of influence on the concept
building process), and these simple primitives reach out through the
entire web of adult concepts.

This is a difficult idea to grasp, I admit, but the consequence of that
type of system design is that, for example, the general idea of "feeling
empathy for the needs and aspirations of the entire human race" is not
represented in the system as an explicit memory location that says "Rule
number 71, as decided by the Committee of World AGI Ethics Experts, is
that you must feel empathy for the entire human race"; instead, the
thing that we externally describe as "empathy" is just a collective
result of a massive number of learned concepts and their connections.

This makes "empathy" a _systemic_ characteristic, intrinsic to the
entire system, not a localizable rule.

The empathy feeling, to be sure, is controlled by roots that go back to
the motivational system, but these roots would be built in such a way
that tampering or malfunction would:

(a) not be able to happen without huge intervention, which would be
easily noticed, and

(b) not cause any cata

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
RE: Matt Mahoney's Mon 10/1/2007 12:01 PM post which said in part

"IN MY LAST POST I HAD IN MIND RSI AT THE LEVEL OF SOURCE CODE OR MACHINE
CODE."

Thank you for clarifying this, as least with regard to what you meant.

But that begs the question: is there any uniform agreement about this
definition in the AGI community or is it currently a vaguely defined term?

As stated in my previous posts, a Novamente-level system would have a form
of Recursive Self Improvement that would recursively improve cognitive,
behavioral, and goal patterns.  Is the distinction between that level of
RSI and RSI at the C++ level that, in a Novamente-type RSI, one can hope
that all or vital portions of the C++ code could be kept off-limits to the
higher-level RSI, and, from that, one could hope that certain goals and
behaviors could remain hardcoded into the machine's behavioral control
system?

I assume that maintaining that separation is the type of distinction you
considered important.  Is that correct?
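
(To make the distinction concrete, here is a toy Python sketch of my own,
not Novamente's actual design: the learner is free to rewrite its learned
policy, but the goal/reward code sits behind an interface to which it has
no write access.)

class HardcodedGoals:
    """Stands in for goal/behavior-control code compiled into the system."""
    def reward(self, outcome):
        return outcome              # fixed; the learner cannot rewrite this

class Learner:
    def __init__(self, goals):
        self._goals = goals
        self.policy_weight = 0.0    # learned pattern, freely modifiable

    def self_improve(self, experience):
        # RSI at the level of learned patterns: keep a candidate change
        # only if the fixed, hardcoded reward function favors it.
        candidate = self.policy_weight + 0.1
        if self._goals.reward(experience * candidate) > \
           self._goals.reward(experience * self.policy_weight):
            self.policy_weight = candidate

agent = Learner(HardcodedGoals())
for _ in range(5):
    agent.self_improve(experience=1.0)
print(round(agent.policy_weight, 2))   # 0.5 -- policy changed, goals untouched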

If so, that seems to make sense to me, at least at this point in my
thinking.  But one can easily think of all sorts of ways a human-level AGI
with RSI of the Novamente level could try to get around this limitation,
if it broke
sufficiently free from, or sufficiently re-interpreted, its presumably
human friendly goals in a way that allowed it to want to do so.  For
example, it could try to program other systems, such as by hacking on the
net, that don't have such limitations.

But hopefully they would not do so if the hardcoded goals could maintain
their dominance.  As I said in my 9/30/2007 7:11 PM post, I don't really
have much understanding about how robust an initial set of hardcoded goals
and values is likely to remain against new goals and subgoals that are
defined by automatic learning and that are needed for the interpretation
of the original goals in a changing world.  Like human judges, these
systems might routinely dilute or substantially change the meaning of the
laws they are meant to uphold.  This is particularly true because almost
any set of goals for "human friendliness" are going to be vaguely defined
and the world is likely to generate many situations where various
sub-goals of being human friendly will conflict.

That is why I think keeping humans in the loop, and Intelligent
Augmentation, and Collective Intelligence are so important.

But in any case, it would seem that being able to hardcode certain parts
of the machine's behavioral, value, and goal system, and have it made as
difficult as possible for the machine to change those parts, would at
least make it substantially harder for an AGI to develop a set of goals
contrary to those originally intended for it.  It is my belief that a
Novamente-type system could have considerable room to learn and adapt while
still being restrained to avoid certain goals and behaviors.  After all,
most of us humans do.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 12:01 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content


In my last post I had in mind RSI at the level of source code or machine
code.  Clearly we already have RSI in more restricted computational
models, such as a neural network modifying its objective function by
adjusting its weights.
This type of RSI is not dangerous because it cannot interact with the
operating system or remote computers in ways not intended by the
developer.
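
(A minimal Python sketch of the kind of restricted self-modification
described above, assuming nothing beyond a toy one-neuron model: the
program adjusts its own weights, and so changes its future behavior, but
the modification is confined to a sandboxed numeric model with no access
to the OS or network.)

# A one-"neuron" linear model that modifies its own weights by gradient
# descent.  It changes its future input/output behavior, but the change
# is confined to these two numbers.
def train(samples, lr=0.1, epochs=50):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b            # current behavior
            error = y - target
            w -= lr * error * x      # self-modification: new weights
            b -= lr * error
    return w, b

# Learn y = 2x + 1 from three examples; result is roughly (2.0, 1.0).
print(train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]))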


--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:

> To Matt Mahoney.
>
> Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn
> and implied RSI (which I assume from context is a reference to
> Recursive Self
> Improvement) is necessary for general intelligence.
>
> When I said -- in reply to Derek's suggestion that RSI be banned --
> that I didn't fully understand the implications of banning RSI, I said
> that largely because I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its current
> behavior by changing to a new state with a modified behavior, and then
> from that new state (arguably "recursively") improving behavior to yet
> another new state, and so on and so forth?  If so, why wouldn't any
> system doing ongoing automatic learning that changed its behavior be
> an RSI system.
>
> Is it any system that does the above, but only at a code level?  And,
> if so, what is the definition of code level?  Is it machine code; C++
> level code; prolog level code; code at the level Novamente's MOSES
> learns through evolution, is it code at the level of learned goal and
> behavi

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
To Matt Mahoney.

Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied RSI (which I assume from context is a reference to Recursive Self
Improvement) is necessary for general intelligence.

When I said -- in reply to Derek's suggestion that RSI be banned -- that I
didn't fully understand the implications of banning RSI, I said that
largely because I didn't know exactly what the term covers.

So could you, or someone, please define exactly what its meaning is?

Is it any system capable of learning how to improve its current behavior
by changing to a new state with a modified behavior, and then from that
new state (arguably "recursively") improving behavior to yet another new
state, and so on and so forth?  If so, why wouldn't any system doing
ongoing automatic learning that changed its behavior be an RSI system?

Is it any system that does the above, but only at a code level?  And, if
so, what is the definition of code level?  Is it machine code; C++ level
code; prolog level code; code at the level Novamente's MOSES learns
through evolution; code at the level of learned goals and behaviors; or
code at all those levels?  If the latter were true, then again, it would
seem the term covered virtually any automatic learning system capable of
changing its behavior.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 8:36 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content


--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> To Derek Zahn
>
> You're 9/30/2007 10:58 AM post is very interesting.  It is the type of
> discussion of this subject -- potential dangers of AGI and how and
> when do we deal with them -- that is probably most valuable.
>
> In response I have the following comments regarding selected portions
> of your post's (shown in all-caps).
>
> "ONE THING THAT COULD IMPROVE SAFETY IS TO REJECT THE NOTION THAT AGI
> PROJECTS SHOULD BE FOCUSED ON, OR EVEN CAPABLE OF, RECURSIVE SELF
> IMPROVEMENT IN THE SENSE OF REPROGRAMMING ITS CORE IMPLEMENTATION."
>
> Sounds like a good idea to me, although I don't fully understand the
> implications of such a restriction.

The implication is you would have to ban intelligent software productivity
tools.  You cannot do that.  You can make strong arguments for the need
for tools for proving software security.  But any tool that is capable of
analysis and testing with human level intelligence is also capable of
recursive self improvement.

> "BUT THERE'S AN EASY ANSWER TO THIS:  DON'T BUILD AGI THAT WAY.  IT IS
> CLEARLY NOT NECESSARY FOR GENERAL INTELLIGENCE "

Yes it is.  In my last post I mentioned Legg's proof that a system cannot
predict (understand) a system of greater algorithmic complexity.  RSI is
necessarily an evolutionary algorithm.  The problem is that any goal other
than rapid reproduction and acquisition of computing resources is
unstable.
The first example of this was the 1988 Morris worm.
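
(For a concrete, and deliberately toy, picture of what "improvement as
evolutionary search" can look like, here is a minimal Python
generate-and-select loop.  Nothing in it is specific to any AGI design
discussed in this thread; candidate "programs" are just numbers and fitness
is a fixed external measure.)

import random

def evolve(fitness, generations=100, population=20, sigma=0.5):
    # Start from random candidates, then repeatedly keep the fittest one
    # and refill the pool with mutated copies of it.
    pool = [random.uniform(-10, 10) for _ in range(population)]
    for _ in range(generations):
        best = max(pool, key=fitness)
        pool = [best] + [best + random.gauss(0, sigma)
                         for _ in range(population - 1)]
    return max(pool, key=fitness)

target = 3.14159
print(evolve(lambda x: -abs(x - target)))   # converges close to 3.14159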

It doesn't matter if Novamente is a "safe" design.  Others will not be.
The first intelligent worm would mean the permanent end of being able to
trust your computers.  Suppose we somehow come up with a superhumanly
intelligent intrusion detection system able to match wits with a
superhumanly intelligent worm.  How would you know if it was working?
Your computer says "all is OK".
Is that the IDS talking, or the worm?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48427137-75820d

RE: [agi] Religion-free technical content

2007-09-30 Thread Edward W. Porter
Don,

I think we agree on the basic issues.

The difference is one of emphasis.  Because I believe AGI can be so very
powerful -- starting in perhaps only five years if the right people got
serious funding -- I place much more emphasis on trying to stay way ahead
of the curve with regard to avoiding the very real dangers its very great
power could bring.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Don Detrich [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 1:12 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content



First, let me say I think this is an interesting and healthy discussion
and has enough "technical" ramifications to qualify for inclusion on this
list.



Second, let me clarify that I am not proposing that the dangers of AGI be
"swiped under the rug" or that we should be "misleading" the public.



>>I just think we're a long way from having real
data to base such discussions on, which means if held at the moment
they'll inevitably be based on wild flights of fancy.<<



We have no idea what the "personality" of AGI will be like. I believe it
will be VERY different from humans. This goes back to my post "Will AGI
like Led Zeppelin?" To which my answer is, probably not. Will AGI want to
knock me over the head to take my sandwich or steal my woman? No, because
it won't have the same kind of biological imperative that humans have.
AGI, it's a whole different animal. We have to wait and see what kind of
animal it will be.



>>By that point, there will be years of time to consider its wisdom and
hopefully apply some sort of friendliness theory to an actually dangerous
stage. <<



Now, you can feel morally at ease to promote AGI to the public and go out
and get some money for your research.



As an aside, let me make a few comments about my point of view. I was half
owner of an IT staffing and solutions company for ten years. I was the
sales manager and a big part of my job was to act as the translator
between the technology guys and the client decision makers, who usually
were NOT technology people. They were business people with a problem
looking for ROI. I have been told by technology people before that
concentrating on "what the hell we actually want to accomplish here" is
not an important technical issue. I believe it is. "What the hell we
actually want to accomplish here" is to develop AGI. Offering a REALISTIC
evaluation of the possible advantages and disadvantages of the technology
is very much a technical issue. What we are currently discussing is, what
ARE the realistic dangers of AGI and how does that affect our development
and investment strategy. That is both a technical and a strategic issue.





Don Detrich





  _

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48325024-2cff63

RE: [agi] Religion-free technical content

2007-09-30 Thread Edward W. Porter
Kaj,

Another solid post.

I think you, Don Detrich, and many others on this list believe that, for
at least a couple of years, it's still pretty safe to go full speed ahead
on AGI research and development.  It appears from the below post that both
you and Don agree AGI can potentially present grave problems (which
distinguishes Don from some on this list who make fun of anyone who even
considers such dangers).  It appears the major distinction between the two
of you is whether, and how much, we should talk and think about the
potential dangers of AGI in the next few years.

I believe AGI is so potentially promising it is irresponsible not to fund
it.  I also believe it is so potentially threatening it is irresponsible
not to fund trying to understand such threats and how they can best be
controlled.  This should start now so by the time we start making and
deploying powerful AGI's there will be a good chance they are relatively
safe.

At this point much more effort and funding should go into learning how to
increase the power of AGI, than into how to make it safe.  But even now
there should be some funding for initial thinking and research (by
multiple different people using multiple different approaches) on how to
create machines that provide maximal power with reasonable safety.  AGI
could actually happen very soon.  If the right team, or teams, were funded
by Google, Microsoft, IBM, Intel, Samsung, Honda, Toshiba, Matsushita,
DOD, Japan, China, Russia, the EU, or Israel (to name just a few), at a
cost of, say, 50 million dollars per team over five years, it is not
totally unrealistic to think one of them could have a system of the
general type envisioned by Goertzel providing powerful initial AGI,
although not necessarily human-level in many ways, within five years.  The
only systems that are likely to get there soon are those that rely heavily
on automatic learning and self organization, both techniques that are
widely considered to be more difficult to understand and control than
other, less promising approaches.

It would be inefficient to spend too much money on how to make AGI safe at
this early stage, because as Don points out there is much about it we
still don't understand.  But I think it is foolish to say there is no
valuable research or theoretical thinking that can be done at this time,
without, at least, first having a serious discussion of the subject within
the AGI field.

If AGIRI's purpose is, as stated in its mission statement, truly to
"Foster the creation of powerful and ethically positive Artificial General
Intelligence [underlining added]," it would seem AGIRI's mailing list
would be an appropriate place to have a reasoned discussion about what
sorts of things can and should be done now to better understand how to
make AGI safe.

I for one would welcome such discussion, of subjects such as: "What are
the currently recognized major problems involved in getting automatic
learning and control algorithms of the type most likely to be used in AGI
to operate as desired; what are the major techniques for dealing with
those problems; and how effective have those techniques been?"

I would like to know how many other people on this list would also.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 10:11 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the
> potential of becoming a very powerful technology and misused or out of
> control could possibly be dangerous. However, at this point we have
> little idea of how these kinds of potential dangers may become
> manifest. AGI may or may not want to take over the world or harm
> humanity. We may or may not find some effective way of limiting its
> power to do harm. AGI may or may not even work. At this point there is
> no AGI. Give me one concrete technical example where AGI is currently
> a threat to humanity or anything else.
>
> I do not see how at this time promoting investment in AGI research is
> "dangerously irresponsible" or "fosters an atmosphere that could lead
> to humanity's demise". It us up to the researchers to devise a safe
> way of implementing this technology not the public or the investors.
> The public and the investors DO want to know that researchers are
> aware of these potential dangers and are working on ways to mitigate
> them, but it serves nobody's interest to dwell on dangers we as yet
> know little about and therefore can't control. Besides, it's a stupid
> way to promote the AGI industry or get investment to further
> responsi

RE: [agi] Religion-free technical content

2007-09-29 Thread Edward W. Porter
 uncontrollability
and unpredictability, and it’s important that the superhuman AI’s we’ll
eventually create should be able to rationally and predictably chart their
own growth and evolution.

“I agree that it’s important that powerful
AI’s be more rational than humans, with a greater level of
self-understanding than we humans display.  But, I don’t think the way to
achieve this is to consider logical deduction as the foundational aspect
of intelligence.  Rather, I think one needs complex, self-organizing
system of patterns on the emergent level – and then solve the problem of
how a self-organizing pattern system may learn to rationally control
itself.  I think this is a hard problem but almost surely a solvable one.”

The portion of this quote I have underlined matches the
apparent implication in Don Detrich’s recent (9/29/2007 7:24 PM) post that
we cannot really understand the threats of powerful AGI until we get
closer to it and, thus, we should delay thinking and talking about such
threats until we learn more about them.

I would like to know how many other readers of this list
would be interested in discussions such as:

(for example, with regard to the above
quoted text:)

-Which of Goertzel’s or Yudkowsky’s
approaches is most likely to help achieve the goal of creating powerful
and ethical AGI?

-How long, as we develop increasingly more
powerful self-organizing systems, would it be safe to delay focusing on
the problem Goertzel refers to of how to make self-organizing pattern
systems rational --where “rational” presumably means rational for mankind?
And how will we know, before it’s too late, how long is too long?

-And what basis does Goertzel have for
saying the problem of how a self-organizing pattern system may learn to
[ethically?] control itself is almost surely a solvable one?

It seems to me such discussions would be both technically
interesting and extremely valuable for AGIRI’s mission of fostering
“powerful and ethically positive Artificial General Intelligence.”

I also think having more reasoned answers to such
questions will actually make promoting AGI funding easier.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 29, 2007 9:09 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> I've been through the specific arguments at length on lists where
> they're on topic, let me know if you want me to dig up references.

I'd be curious to see these, and I suspect many others would, too. (Even
though they're probably from lists I am on, I haven't followed them nearly
as actively as I could've.)

> I will be more than happy to refrain on this list from further mention
> of my views on the matter - as I have done heretofore. I ask only that
> the other side extend similar courtesy.

I haven't brought up the topics here, myself, but I feel the need to note
that there has been talk about massive advertisement campaigns for
developing AGI, campaigns which, I quote,

On 9/27/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
>However, this
> organization should take a very conservative approach and avoid over
>speculation. The objective is to portray AGI as a difficult but
>imminently  doable technology. AGI is a real technology and a real
>business opportunity.  All talk of Singularity, life extension, the end
>of humanity as we know it  and run amok sci-fi terminators should be
>portrayed as the pure speculation  and fantasy that it is. Think what
>you want to yourself, what investors and  the public want is a useful
>and marketable technology. AGI should be  portrayed as the new
>internet, circa 1995. Our objective is to create some  interest and
>excitement in the general public, and most importantly,  investors.

From the point of view of those who believe that AGI is a real danger, any
campaigns to promote the development of AGI while specifically ignoring
discussion about the potential implications are dangerously irresponsible
(and, in fact, exactly the thing we're working to stop). Personally, I am
ready to stay entirely quiet about the Singularity on this list, since it
is, indeed, off-topical - but that is only for as long as I don't run
across messages which I feel are helping foster an atmosphere that could
lead to humanity's demise.

(As a sidenote - if you really are convinced that any talk about
Singularity is religious nonsense, I don't know if I'd consider it a
courtesy for you not to bring up

RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

2007-09-28 Thread Edward W. Porter
Derek,

This is how I responded to the below quoted comment from Don Detrich in
your email



Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture.



True, the threat is pure conjecture, if by that you mean reasoning without
proof.  But that is not the proper standard for judging threats.  If you
had lived your life by disregarding all threats except those that had proof,
you almost certainly would have died in early childhood.



 All new technologies have dangers, just like life in general. We can’t
know the kinds of personal problems and danger we will face in our future.




True, many other new technologies involve threats, and certainly among
them are nano-technology and bio-technology, which have potentials for
severe threats.  But there is something particularly threatening about a
technology that can purposely try to outwit us, that, particularly if
networked, could easily be millions of times more intelligent than we are,
and that would be able to understand and hack the lesser computer
intelligences on which our lives depend, millions of times faster than
any current team of humans.  Just as it is hard to imagine a world in which
humans long stayed enslaved to cows, it is hard to imagine one in which
machines much brighter than we are stayed enslaved to us.



It should also be noted that the mere fact there have not been any major
disasters in fields as new as biotechnology and nanotechnology in no way
means that all concern for such threats was or is foolish.  The levees
in New Orleans held for how many years before they proved insufficient.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Friday, September 28, 2007 5:45 PM
To: agi@v2.listbox.com
Subject: RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS


Don Detrich writes:



AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us <<


Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture. All new technologies have dangers, just like life
in general.



It'll be interesting to see if the "horror stories" about AGI follow the
same pattern as they did for Nanotechnology... After many years and
dollars of real nanotechnology research, the simplistic vision of the lone
wolf researcher stumbling on a runaway self-replicator that turns the
planet into gray goo became much more complicated and unlikely.  Plus you
can only write about gray goo for so long before it gets boring.



Not to say that AGI is necessarily the same as Nanotechnology in its
actual risks, or even that gray goo is less of an actual risk than writers
speculated about, but it will be interesting to see if the scenario of a
runaway self-reprogramming AI becomes similarly passe.



  _

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48004725-0222a5

RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

2007-09-28 Thread Edward W. Porter
 HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS



Wow, there it is. That just about says it all. Take the content of that
concise evaluation and go on the road. That is what AGI needs. For general
PR purposes it doesn’t have to be much more detailed than that. Talk shows
and news articles are unlikely to cover even that much. I have done
national media and this is the kind of story they would love. Some
additional hooks, events and visuals would also be effective.



From this and prior emails, do I detect a pattern?

-When I am an AGI booster, pat me on the head and throw me a bone. (I love
it. If I had a tail it would wag.)

-When I am an AGI detractor, politely and intelligently challenge me.  (I
don’t love it as much, but it is interesting and thought provoking.)



So given your apparent bias, I would like to agree in part and disagree in
part.



I would love to be a big time booster for AGI.  I want to be on one of the
teams that make it happen in some capacity.  I want to be one of the first
people to ride an AGI dream machine, something that can talk with you like
the most wise, intelligent, and funny of men, that can be like the most
brilliant of teachers, one with the world’s knowledge likely to be of any
interest already in deep structure, and that can not only brilliantly talk
in real time but also simultaneously show real time images, graphs, and
photorealistic animations as it talks.



I am 59 so I want this to start happening soon.  I am convinced it can
happen.  I am convinced I basically know how to do it (at a high level
with a lot of things far from totally filled in).  But I think others, like
Ben Goertzel, are probably significantly ahead of me.  And I have no
experience at leading a software team, which I would need because I have
never written a program more than 100 pages long and that was twenty years
ago, when I programmed Dragon Systems’ first general purpose dictating
machine.



So I am an AGI booster, but there needs to be serious discussion of AGI’s
threats and how we can deal with them, at least among the AGI community,
to which the readers of this list are probably pretty much limited.



Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture.



True, the threat is pure conjecture, if by that you mean reasoning without
proof.  But that is not the proper standard for judging threats.  If you
had lived your life by disregarding all threats except those that had proof,
you almost certainly would have died in early childhood.



 All new technologies have dangers, just like life in general. We can’t
know the kinds of personal problems and danger we will face in our future.




True, many other new technologies involve threats, and certainly among
them are nano-technology and bio-technology, which have potentials for
severe threats.  But there is something particularly threatening about a
technology that can purposely try to outwit us, that, particularly if
networked, could easily be millions of times more intelligent than we are,
and that would be able to understand and hack the lesser computer
intelligences on which our lives depend, millions of times faster than
any current team of humans.  Just as it is hard to imagine a world in which
humans long stayed enslaved to cows, it is hard to imagine one in which
machines much brighter than we are stayed enslaved to us.



..in the end you have to follow the road ahead.



Totally agree.



There is no turning back at this point.  The wisp of smoke that will
become the genie is already out of the bottle.  Assuming that Moore’s law
keeps on keeping on for another couple generations, within five to seven
years starting to make a powerful AGI will probably be within the capacity
of half the world’s governments and all of the world’s thousand largest
companies.   So to keep the world safe we will need safer AI’s to protect
us from the type that the Leona Helmsleys and Kim Jong-ils of the world are
likely to make.



We will be better informed and better adept at dealing with the inevitable
problems the future holds as they arise.



This is particularly true if there is a special emphasis on the problem.
That is why it should be discussed.



I have said for years that, for humans to defend themselves against
machines, learning how to defend humans against machines should be regarded
as one of mankind’s highest callings.  That is why, despite the fact that
I disagree with Eliezer Yudkowsky on certain points, I have tremendous
respect for the fact that he is probably the first human to dedicate
himself to this highest calling.



Of course the proper use of intelligence augmentation and collective human
intelligence greatly increases our chances, particularly if through the
use of augmented intelligence and collective intelligence we can both
better learn and unde

[agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

2007-09-28 Thread Edward W. Porter
 the
prefrontal cortex/basal ganglia system”, by Thomas E. Hazy, Michael J.
Frank and Randall C. O’Reilly

“Engines of the brain: The computational instruction set of human
cognition”, by Richard Granger



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=47932591-7eb688
