Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore

Russell Wallace wrote:
On 3/11/07, *Ben Goertzel* <[EMAIL PROTECTED] > 
wrote:


YES -- anything can be represented in logic.  The question is whether
this is a useful representational style, in the sense that it matches
up with effective learning algorithms!!!  In some domains it is, in
others not.


"Represented in logic" can mean a number of different things, just 
checking to see if everyone's talking about the same thing.


Consider, say, a 5 megapixel image. A common programming language 
representation would be something like:


struct point {float red, green, blue;};
point *image = new point[n];

For AGI purposes, if that's our only representation, then obviously we 
lose. So we do need to take at least some steps in the direction of a 
unified representation.


What should that representation look like? The answer is that logic, or 
some variant thereof, is as good as we're going to get. So we might have 
something like:


[Meaning #1]
color(point(1, 2), red) = 3
...another 1499 similar assertions
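
For concreteness, here is a minimal sketch (illustrative only - the ColorFact 
struct and to_facts function are invented for this example, not taken from any 
existing system) of how the raw pixel array above could be enumerated as that 
kind of fact:

#include <vector>

struct point { float red, green, blue; };

// One Meaning #1 fact: color(point(x, y), channel) = value
struct ColorFact {
    int x, y;
    int channel;   // 0 = red, 1 = green, 2 = blue
    float value;
};

// Enumerate the facts implied by a width x height pixel array.
std::vector<ColorFact> to_facts(const point* image, int width, int height) {
    std::vector<ColorFact> facts;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            const point& p = image[y * width + x];
            facts.push_back({x, y, 0, p.red});
            facts.push_back({x, y, 1, p.green});
            facts.push_back({x, y, 2, p.blue});
        }
    return facts;
}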

Now we can build up further layers of information, such as:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

What about the algorithms that generate those further layers of 
information? Well, typical image processing code is written in something 
like C, but again for AGI purposes if we do that we lose. So,


[Meaning #3]
we want to create (whether by hand, by automatic learning, or most likely a 
combination of both) algorithms in a more logic-oriented language, something 
that (at least superficially) looks more like Prolog or Haskell than C.


A classic mistake is to slide a step further and assume

[Meaning #4]
the application of those algorithms will be pure deduction and the 
runtime engine can be purely a theorem prover. We now know that doesn't 
work; at best you just run into exponential search, and you need 
procedural/heuristic components. So leave that aside.


What about the physical representation of all this? Well suppose the 
general logic database stores stuff as a set of explicit sentences, then


[Meaning #5]
we can use the same database to store all kinds of data as explicit 
sentences. Do we want to? For prototyping, sure. For production code, 
we'll eventually want to do things like _physically_ storing image data 
as an array of floats because there's a couple orders of magnitude 
performance improvement to be had that way - only a constant factor to 
be sure, but enough to be eventually worth having. But that has no 
bearing on the _logical_ side of things - the semantics should still be 
just as if everything was stored as explicit sentences using the general 
database engine.
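
A minimal sketch of what that separation might look like (illustrative only; 
the class and method names are invented): the logical interface speaks in 
color facts, while the physical storage underneath is a packed float array.

#include <vector>

// Logical view: color(point(x, y), channel) = value.
// Physical view: one flat array of floats, as a production system might store it.
class ImageFacts {
public:
    ImageFacts(int width, int height)
        : width_(width),
          data_(static_cast<size_t>(width) * height * 3, 0.0f) {}

    // Query the value of color(point(x, y), channel).
    float color(int x, int y, int channel) const {
        return data_[index(x, y, channel)];
    }

    // Assert color(point(x, y), channel) = value.
    void assert_color(int x, int y, int channel, float value) {
        data_[index(x, y, channel)] = value;
    }

private:
    size_t index(int x, int y, int channel) const {
        return (static_cast<size_t>(y) * width_ + x) * 3 + channel;
    }

    int width_;
    std::vector<float> data_;   // the "array of floats" physical representation
};

Modules written against the logical interface never see whether the backing 
store is explicit sentences or a float array, which is the point being made 
above.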


So the answer to whether standardizing on logical representation is good is:

Meanings #1, 2, 3 - yes.
Meaning #4 - no.
Meaning #5 - for prototyping sure, not for production code.

(There are probably some more I'm overlooking, but that's a start.)


I'm not sure if you're just summarizing what someone would mean if they 
were talking about 'logical representation,' or advocating it.


Either way, from my point of view what you have listed here is a dead end.

I say that because (a) I can think of other representations that do not 
fit that way of thinking, and (b) I think it is a terrible mistake (an 
AI-field-crippling mistake) to start out by making an early commitment 
to a particular kind of representation.


For example, what about replacing this:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

with this:

[Meaning #2(A)]
regularity_A1, regularity_A27, regularity_A81 
regularity_B79, regularity_B34, regularity_B22 



Where the "regularities" are just elements generated by a "regularity 
finding mechanism"?  The "A" and "B" labels signify that there are 
multiple levels, so the "B" regularities are formed out of clusters of 
"A" level regularities, and so on.  A regularity is just a pattern of 
some kind.
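
Purely as a hypothetical sketch of the data shape being described (the point, 
again, is that the finding mechanism itself is left open), something like:

#include <vector>

// A regularity is just an element produced by some regularity-finding
// mechanism; a level-B regularity is built from a cluster of level-A
// regularities, and so on up the levels.
struct Regularity {
    int id;                    // e.g. regularity_A27
    int level;                 // 0 = "A" level, 1 = "B" level, ...
    std::vector<int> parts;    // ids of the lower-level regularities it clusters
};

// The mechanism itself is deliberately unspecified: any function that
// proposes new regularities from the existing ones fits this slot.
using RegularityFinder =
    std::vector<Regularity> (*)(const std::vector<Regularity>& existing);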


So what is the "regularity finding mechanism"?  Well, it is not well 
defined at the outset:  it is up to us to investigate and discover what 
regularity finding mechanisms actually produce useful elements.  We 
should start out *agnostic* about what those mechanisms are.  All kinds 
of possibilities exist, if we are open-minded enough about what to consider.


Not only that, but we can allow that there is not one, but a cluster of 
such mechanisms, all of which can operate simultaneously to refine the 
structure of the elements.


And even more:  do not have a fixed set of regularity finding 
mechanisms, but allow the system to have an initial set that it can 
learn to augment.  So later on the system has extra ways to build new 
elements, not available at an early stage.


Finally, alongside the learning mechanisms that build these regularities 
there are mechanisms that *use* all of this stuff (no point in 

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/12/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:


I'm not sure if you're just summarizing what someone would mean if they
were talking about 'logical representation,' or advocating it.



I'm saying there are 5 different things someone might mean, and going on to
advocate 3.5 of them while dissing 1.

For example, what about replacing this:


[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

with this:

[Meaning #2(A)]
regularity_A1, regularity_A27, regularity_A81 
regularity_B79, regularity_B34, regularity_B22 




To what purpose? My version is easier to understand and debug, what
advantage does your version have?

So what is the "regularity finding mechanism"?  Well, it is not well

defined at the outset:  it is up to us to investigate and discover what
regularity finding mechanisms actually produce useful elements.  We
should start out *agnostic* about what those mechanisms are.  All kinds
of possibilities exist, if we are open-minded enough about what to
consider.



Of course. That's why we need a flexible, general-purpose representation
that can work with lots of different kinds of mechanisms.

Where do "logical representations" sit in all of this?  In the case of

human systems, they appear to be an acquired representation



What of it? Just because birds are feathered doesn't mean aeroplanes have to
follow suit.

what

good would it do us to start out by throwing away our neutrality on the
"what is a regularity" question and committing straight away to the idea
that a regularity is a logical atom



I didn't advocate that.

and the thinking mechanisms are a

combination of [logical inference] + [inference control mechanism]?



Not only did I not advocate that, I called it a classic mistake.

But we need to commit to some representation. It's like XML. XML gets
criticized for not being the solution to all problems, but the critics miss
the point: it's only intended to solve one problem, that of every program
using its own opaque proprietary format.

We've got the same sort of problem here. You agree any system displaying a
significant degree of intelligence will need lots of different modules,
using different kinds of algorithms, with no way to enumerate them in
advance. Yet the modules need to work together (otherwise you don't have a
system, merely a catalog). To do that they need a shared data
representation. That representation needs to be decided before many modules
are written. Agreed so far?

If so, what else would you use? Strictly speaking, there is (in the Turing
sense) no more powerful data representation than logic, because logic can
represent anything. So we move on to pragmatic issues. Pragmatically, logic
is well understood, concise, flexible, easy to debug, easy for lots of
different kinds of modules to work with. Decades of hard work have failed to
find anything better. If you've got something better, I'm all ears. If not,
what's your objection?



Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore

Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


I'm not sure if you're just summarizing what someone would mean if they
were talking about 'logical representation,' or advocating it.


I'm saying there are 5 different things someone might mean, and going on 
to advocate 3.5 of them while dissing 1.


For example, what about replacing this:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

with this:

[Meaning #2(A)]
regularity_A1, regularity_A27, regularity_A81 
regularity_B79, regularity_B34, regularity_B22 



To what purpose? My version is easier to understand and debug, what 
advantage does your version have?


The advantage is in the later consequences.



So what is the "regularity finding mechanism"?  Well, it is not well
defined at the outset:  it is up to us to investigate and discover what
regularity finding mechanisms actually produce useful elements.  We
should start out *agnostic* about what those mechanisms are.  All kinds
of possibilities exist, if we are open-minded enough about what to
consider.


Of course. That's why we need a flexible, general-purpose representation 
that can work with lots of different kinds of mechanisms.


Where do "logical representations" sit in all of this?  In the case of
human systems, they appear to be an acquired representation


What of it? Just because birds are feathered doesn't mean aeroplanes 
have to follow suit.


That argument comes up a lot, and generally I just ignore it because 
it's too general to be a proper target for demolition ... but having 
said that, what about: (1) so far, the aeroplanes aren't getting off the 
ground, (2) the Wright brothers spent a huge amount of time studying 
natural flight first, and (3) even the simplest of the "feathered" variety 
of AI systems are displaying intriguingly powerful properties.





what
good would it do us to start out by throwing away our neutrality on the
"what is a regularity" question and committing straight away to the idea
that a regularity is a logical atom


I didn't advocate that.

and the thinking mechanisms are a
combination of [logical inference] + [inference control mechanism]? 



Not only did I not advocate that, I called it a classic mistake.

But we need to commit to some representation. It's like XML. XML gets 
criticized for not being the solution to all problems, but the critics 
miss the point: it's only intended to solve one problem, that of every 
program using its own opaque proprietary format.


This is puzzling, in a way, because this is my ammunition that you are 
using here!  That is exactly what I am trying to do:  invent an AIXML. 
I am a little baffled because you agree, but think I am not trying to do 
that




We've got the same sort of problem here. You agree any system displaying 
a significant degree of intelligence will need lots of different 
modules, using different kinds of algorithms, with no way to enumerate 
them in advance. Yet the modules need to work together (otherwise you 
don't have a system, merely a catalog). To do that they need a shared 
data representation. That representation needs to be decided before many 
modules are written. Agreed so far?


If so, what else would you use? Strictly speaking, there is (in the 
Turing sense) no more powerful data representation than logic, because 
logic can represent anything. So we move on to pragmatic issues. 
Pragmatically, logic is well understood, concise, flexible, easy to 
debug, easy for lots of different kinds of modules to work with. Decades 
of hard work have failed to find anything better. If you've got 
something better, I'm all ears. If not, what's your objection?


It is hard to answer this battery of questions without sending you an 
entire book that I don't have the stomach to write (see note below), so 
now I am in a quandary.


Overall: there is such a gulf between your way of thinking and the one I 
just outlined that it is hard to communicate.  For example, I just *did* 
describe something better!  But you did not recognize it as such, so now 
I am at a loss.


But more specifically, your statement that "there is (in the Turing 
sense) no more powerful data representation than logic, because logic 
can represent anything" is just ... loaded with assumptions, and too 
general to be of any use.  This is rather like saying "there is no more 
powerful way of building intelligent systems than out of atoms, because 
atoms can be used to build anything."  Well, .. yes, in a sense, but 
at the level I am operating at, it means nothing.


Illustration.  When the connectionists first came along, one of the 
first things they did was build a simple network system that could model 
the process of recognizing the difference between sonar echoes

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/12/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:


This is puzzling, in a way, because this is my ammunition that you are
using here!  That is exactly what I am trying to do:  invent an AIXML.
I am a little baffled because you agree, but think I am not trying to do
that



I'm equally puzzled, since you came across to me as advocating the opposite!
Okay, for the moment rather than reply point by point to this message I'll
try to summarize in the hope of pruning the search space.

1. We both say we want AIXML. One of the primary goals of anything in that
space is human readability, yet the only example you presented was a list of
opaque identifiers with names like foo_A1 and foo_A27. How do you propose to
meet the readability goal?

2. You appeared to be suggesting that each module use a different
representation, which is contrary to the AIXML goal of a unified
representation.

3. You appear to be advocating the 'copy the brain' approach to AI, which I
don't subscribe to.

Please take the above in order of priority; that is, if you (understandably)
don't have time to explain everything in full detail, I'm most interested in
your reply to #1 followed by #2.



Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore

Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


This is puzzling, in a way, because this is my ammunition that you are
using here!  That is exactly what I am trying to do:  invent an AIXML.
I am a little baffled because you agree, but think I am not trying to do
that


I'm equally puzzled, since you came across to me as advocating the 
opposite! Okay, for the moment rather than reply point by point to this 
message I'll try to summarize in the hope of pruning the search space.


1. We both say we want AIXML. One of the primary goals of anything in 
that space is human readability, yet the only example you presented was 
a list of opaque identifiers with names like foo_A1 and foo_A27. How do 
you propose to meet the readability goal?



1.  Human readability is, in my view, a bad thing to desire at the 
beginning.  Here is (one aspect of) the reasoning behind that statement.


The main motivation that we, as AI researchers, have for the human 
readability requirement is that we want to do some kind of hand-assembly 
and hand-debugging (in a very general sense) of our AI systems.  But 
what happens in practice is that by committing to that requirement, we 
usually postpone the question of how the system could have autonomously 
learned that human-readable knowledge in the first place.  We know that 
we do this postponement (everyone admits that the unsupervised learning 
and/or grounding of logical terms is a late developer in AI research), 
but we excuse it in a variety of ways (which it might be better not to 
delve into, because that is a big subject).


But there is a substantial body of thought that says that those 
postponed things (mostly learning) are being postponed precisely because 
conventional AI has boxed itself into a corner by insisting that the 
representations be readable... they are in a You Can't Get There From 
Here situation.  You might have heard the entrepreneurs' story of the 
Marketing Guy who came up to the Technology Guy at a company and said 
"I've invented this great new type of paint:  you just brush it on and 
it produces a pattern like wallpaper!" -- to which the other replies 
"This is amazing!  How does it work?".  Marketing Guy looks offended: 
"I don't know how it works, I just invented it:  it's up to you to 
figure out how the technology bit works."


The point is that it is all very well to come up with a great idea for 
the way that representations are structured -- that they have clear 
semantics, etc. -- but if you look into the learning issue in great 
depth, you eventually come to realize that there might not actually be 
any viable (unsupervised) learning mechanism that will actually pick up 
from the world that particular, preordained type of representation.


One of the arguments against this position, of course, is that We Don't 
Care, because if we went to enough trouble we could 'hand-build' a 
complete system, or get it up above some threshold of completeness 
beyond which it would have enough intelligence to be able to pick up the 
learning ball and go on to build new knowledge in a viable way (Doug 
Lenat said this explicitly in his Google lecture, IIRC).  We would not 
have to do things the way human cognitive systems do them, according to 
this argument, because we are not constrained by the same problems.


Maybe.  But that is a huge maybe.  It is contradicted by the Complex 
Systems Problem (about which more in my AGIRI Workshop 2006 paper), for 
one thing.  Some would also say that all the arguments against taking this 
problem seriously sound like special pleading:  they all amount to "if we keep 
doing what we are doing, that problem of not having a good way to 
acquire new knowledge autonomously will just slowly evaporate."  There 
are a lot of people who simply don't buy that.


More importantly, there is positive evidence that if you abandon the 
requirement that KR have a clear semantics, you immediately start 
running into new kinds of powerful behavior:  my example of the sonar 
neural network was supposed to illustrate that.  Perhaps this example 
should be taken as a harbinger of a more general truth:  abandon the "I 
must be able to inspect these representations and understand them" 
requirement, and you can start finding powerful learning systems that 
build their own representations, and can do a lot of thinking, but 
without the individual KR atoms having a predefined semantics.  This 
does not mean that the atoms have to be completely opaque, as in the 
sonar example, but it does mean that we could try inventing learning 
mechanisms first, and only later analyze the good ones to see if we can 
do a post-hoc interpretation of their semantics.


At the very least, you can see that this philosophy and the one that I 
take you to be adopting are miles apart.





2. You appeared to be suggesting that each module use a different 
representation, which is contrary to the AIXML goal of a unified representation.

Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

Ah! That makes your position much clearer, thanks. To paraphrase to make
sure I understand you, the reason you don't regard human readability as a
critical feature is that you're of the "seed AI" school of thought that says
we don't need to do large-scale engineering, we just need to solve the
scientific problem of how to create a small core that can then auto-learn
the large body of required knowledge.

I spent a lot of time on every known variant of that idea and some AFAIK
hitherto unknown ones, before coming to the conclusion that I had been
simply fooling myself with wishful thinking; it's the perpetual motion
machine of our field. Admittedly biology did it, but even with a whole
planet for workspace it took four billion years and "I don't know about you
gentlemen, but that's more time than I'm prepared to devote to this
enterprise". When we try to program that way, we find there's an awful lot
of prep work to generate a very small special-purpose program A to do one
task, then to generate small program B for another task is a whole new
project in its own right, and A and B can never be subsequently integrated
or even substantially upgraded, so there's a hard threshold on the amount of
complexity that can be produced this way, and that threshold is tiny
compared to the complexity of Word or Firefox let alone Google let alone
anything with even a glimmer of general intelligence.

One of the arguments against this position, of course, is that We Don't

Care, because if we went to enough trouble we could 'hand-build' a
complete system, or get it up above some threshold of completeness
beyond which it would have enough intelligence to be able pick up the
learning ball and go on to build new knowledge in a viable way (Doug
Lenat said this explicitly in his Google lecture, IIRC).



Oh no, I don't believe that. I don't believe a complete system can be
hand-built; Google wasn't, after all: most of what it knows was auto-learned
(admittedly from other human-generated material, but not as part of the same
project or organization). Conversely (depending on how you look at it)
either there is no completeness threshold, or it's so far beyond anything we
can coherently imagine today that there might as well not be one, so the
seed AI approach can't work either.

In reality, software engineering and (above a minimum adequacy
threshold) auto-learning are both going to stay important all the way up, so
we have to cater for both. And from the software engineering viewpoint
(which is what I'm talking about here)... well, would Google ever have
worked if they used things like foo_A1 and foo_A27 for all their variable
names? No. QED :)



Re: [agi] Logical representation

2007-03-12 Thread Eugen Leitl
On Mon, Mar 12, 2007 at 06:58:37PM +, Russell Wallace wrote:

>I spent a lot of time on every known variant of that idea and some
>AFAIK hitherto unknown ones, before coming to the conclusion that I
>had been simply fooling myself with wishful thinking; it's the
>perpetual motion machine of our field. Admittedly biology did it, but

Ah, it took you a while to see it ;)

>even with a whole planet for workspace it took four billion years and
>"I don't know about you gentlemen, but that's more time than I'm
>prepared to devote to this enterprise". When we try to program that

You don't need the entire four billion years since you don't have
to start from scratch (animals, ahem), and you can put things on 
fast-forward, and select the fitness function for a heavy bias towards
intelligence.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] Logical representation

2007-03-12 Thread Richard Loosemore



I'm still not quite sure if what I said came across clearly, because 
some of what you just said is so far away from what I intended that I 
have to make some kind of response.


For example, it looks like I've got to add "Seed AI" to the list of dumb 
approaches that I do NOT want to be identified with!  At least if you 
define Seed AI the way you do:  trying to bootstrap the whole AI from a 
small core, without any big effort to encode some structure.


I thought I did deny that approach already:  I explained that I was 
doing a huge reengineering of the existing body of knowledge in 
cognitive science.  Can you imagine how much structure there is in such 
a thing?  There are roughly 1,000 human experiments or AI simulations 
accounted for in that structure, and all integrated in such a way that 
it implies one overall system framework (at least, that is the goal of the 
project).  That doesn't sound like Seed AI to me:  it has both structure 
in its architecture, and it also allows for some priming of its 
knowledge base with 'hand-built' knowledge.



As for your comment about "would Google ever have worked if they used 
things like foo_A1 and foo_A27 for all their variable names?"


Huh?

That sounds like, after all, I communicated nothing whatsoever.  I don't 
know if that is supposed to be a serious point or not.  I will assume not.



Richard Loosemore.


Russell Wallace wrote:
Ah! That makes your position much clearer, thanks. To paraphrase to make 
sure I understand you, the reason you don't regard human readability as 
a critical feature is that you're of the "seed AI" school of thought 
that says we don't need to do large-scale engineering, we just need to 
solve the scientific problem of how to create a small core that can then 
auto-learn the large body of required knowledge.


I spent a lot of time on every known variant of that idea and some AFAIK 
hitherto unknown ones, before coming to the conclusion that I had been 
simply fooling myself with wishful thinking; it's the perpetual motion 
machine of our field. Admittedly biology did it, but even with a whole 
planet for workspace it took four billion years and "I don't know about 
you gentlemen, but that's more time than I'm prepared to devote to this 
enterprise". When we try to program that way, we find there's an awful 
lot of prep work to generate a very small special-purpose program A to 
do one task, then to generate small program B for another task is a 
whole new project in its own right, and A and B can never be 
subsequently integrated or even substantially upgraded, so there's a 
hard threshold on the amount of complexity that can be produced this 
way, and that threshold is tiny compared to the complexity of Word or 
Firefox let alone Google let alone anything with even a glimmer of 
general intelligence.


One of the arguments against this position, of course, is that We Don't
Care, because if we went to enough trouble we could 'hand-build' a
complete system, or get it up above some threshold of completeness
beyond which it would have enough intelligence to be able pick up the
learning ball and go on to build new knowledge in a viable way (Doug
Lenat said this explicitly in his Google lecture, IIRC).


Oh no, I don't believe that. I don't believe a complete system can be 
hand-built; Google wasn't, after all, most of what it knows was 
auto-learned (admittedly from other human-generated material, but not as 
part of the same project or organization). Conversely (depending on how 
you look at it) either there is no completeness threshold, or it's so 
far beyond anything we can coherently imagine today that there might as 
well not be one, so the seed AI approach can't work either.


In reality, both software engineering and (above a minimum adequacy 
threshold) auto-learning are both going to stay important all the way up 
so we have to cater for both. And from the software engineering 
viewpoint (which is what I'm talking about here)... well, would Google 
ever have worked if they used things like foo_A1 and foo_A27 for all 
their variable names? No. QED :)




Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/12/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:


You don't need the entire four billion years since you don't have
to start from scratch (animals, ahem), and you can put things on
fast-forward, and select the fitness function for a heavy bias towards
intelligence.



You're also a couple dozen orders of magnitude short on computing power, and
you don't know how to set up the graded sequence of fitness functions. That
said, if you or anyone else wants to actually take a shot at that route, let
me know if you want a summary of conclusions and ideas I got to before I
moved away from it.



Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/12/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:


I'm still not quite sure if what I said came across clearly, because
some of what you just said is so far away from what I intended that I
have to make some kind of response.



Indeed it seems I'm still not understanding you...

I thought I did deny that approach already:  I explained that I was

doing a huge reengineering of the existing body of knowledge in
cognitive science.  Can you imagine how much structure there is in such
a thing?  There are roughly 1,000 human experiments or AI simulations
accounted for in that structure, and all integrated in such a way that
it implies one over system framework (at least, that is the goal of the
project).  That doesn't sound like Seed AI to me:  it has both structure
in its architecture, and it also allows for some priming of its
knowledge base with 'hand-built' knowledge.



Is this huge structure intended to be part of/input to an AI program? If so,
then it needs a machine-readable representation. Since it is to be built by
humans, does that representation not also need to be human-readable?



Re: [agi] Logical representation

2007-03-12 Thread Eugen Leitl
On Mon, Mar 12, 2007 at 07:47:26PM +, Russell Wallace wrote:

>You're also a couple dozen orders of magnitude short on computing

You don't have to recrunch the total ops of the biosphere for
the same reason you don't have to redo the whole four gigayears.
You're already surrounded by the products of the process. Here's
a major shortcut.

>power, and you don't know how to set up the graded sequence of fitness

Computer power will be cheap. You will live to see a single system
with a mole of switches. Even now, there's a lot of crunch hiding
in a sea of gates of a 300 mm wafer. The challenge is to get the
mainstream to unlock it. Simulations for gaming and virtual reality
are a good driver. Cell does a quarter of a teraflop. Intel Tera Scale
does over a teraflop, soon. Blue Gene next-gen will do a petaflop.
When your home box does a petaflop, and there are billions of
such on the network, that's not negligible. Not because there are
a lot of of them, but because a lot of people will own a petaflop
box. 

>functions. That said, if you or anyone else wants to actually take a

The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.

>shot at that route, let me know if you want a summary of conclusions
>and ideas I got to before I moved away from it.

I don't understand why you moved away from it (it's the only game
in town), but if you have a document of your conclusions to share,
fire away.

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/12/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:


The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.



Indeed!

I don't understand why you moved away from it (it's the only game

in town), but if you have a document of your conclusions to share,
fire away.



I concluded it's a dead end for AGI, so I discontinued work on that track
and didn't bother writing up anything like formal documentation. But it
isn't useless - it has been used for some narrow AI problems - so to summarize
my conclusions on how to maximize the chances of getting results out of it:

The big issue is that evolution by default takes time exponential in the
amount of information you want to generate. You wouldn't think so because
it's hill climbing so you get to build step by step, and biological
evolution did that in polynomial time, but in general the local optima get
you. I was surprised looking at the graphs of some of my runs (the ones with
fewer complicating factors) just how regular the exponential curve was, each
big step taking twice as long as the previous one to find, each triggering a
string of little steps rapid at first but each little step also taking twice
as long as the previous.

That means don't sweat constant factors of machine efficiency. A factor of a
hundredfold here or there doesn't matter. Program in Python if that's what
you're most productive in, you can always rewrite in Fortran later if you
like but for the foreseeable future the bottleneck will be your time, not
machine time.

The important thing is to beat, or at least avoid as much as possible, that
exponential curve. Create ways to find building blocks, or ways to find such
ways.

Common wisdom is that the representation needs to be flexible like protoplasm,
not brittle like program code. Common wisdom points vaguely in the right
direction, but not exactly: it doesn't need to be soft and squishy, it does
need to be concise.

You also need, surprisingly, to enforce good software engineering discipline
on the evolutionary process! In one run I used byte code as the representation,
with CALL and RET instructions. The easiest way to use RET was as an indirect
branch, so evolution did exactly that. The resulting code was incredibly,
remarkably obfuscated; I should probably have kept some of it for entering in
one of those obfuscated code contests - a 20-instruction program with a trace
so hairy that I gave up trying to follow it.

But evolution had the same result as an ultra-lazy human programmer: spaghetti
code, no building blocks, progress halted. Evolution has no foresight, so you
need to encourage/enforce modularity. Provide functions as primitives, not
multipurpose, abusable stuff like CALL/RET. No goto!

For the code representation, think functional programming languages. (Koza
dabbled in this but in mostly restricted form: no full recursion, no lambda,
etc. You want the full works if you're trying for more complexity than he
achieved.)

Alternatively, look at Joy (a functional version of Forth); I never got
around to trying it, but its concatenative property looks promising.

For data structures, you want flexibility and to encourage modularity. The
best workhorse structure is the associative array; also consider Lisp-style
'code is data' lists (strict lists - don't expose the cons cells with set-car
the way Lisp did!). Either way, stick as far as possible to functional
programming and minimize mutable data.

We need to find better ways to evolve: better search heuristics and
modification operators. Level 0 = a program to solve the problem, level 1 = a
heuristic that works on level 0 programs (tested by how well it speeds up
evolution of level 0 programs in the long term), level 2 = a heuristic that
operates on level 1, etc.; generalize, and have programs that could be
described as level omega (operating on all programs at all levels), etc.
Higher levels require rapidly more processing time to evaluate, but with all
levels running simultaneously the system can generate more sophisticated
results as time passes. (I never got past the first couple of levels, and only
came up with vague outline ideas for how to do the all-levels thing before
abandoning the area.)

Pick an appropriate problem domain: you need something that requires little
real-world knowledge, has lots of crunchy bits inside, and has no easy
off-the-shelf solutions but makes it easy to check results.

Playing Go is a good one. It does get you into coevolution, which isn't bad in
itself, but you need to be careful you don't get into circles: A is beaten by B
is beaten by C is beaten by A. Some of the runs I did were oddly reminiscent
of biochemistry: no actual intelligence at playing time, but a pattern would
evolve which would be beaten by another pattern which would give way to
another, like evolutionary warfare of poison/antidote chemistry. A quick and
dirty way to avoid this: require each new champion to beat all previous ones.
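
A minimal sketch of that quick-and-dirty rule (illustrative only; beats()
stands in for whatever match-playing code you already have):

#include <vector>

// Assumed to exist: play one or more games, return true if a wins.
template <typename Player>
bool beats(const Player& a, const Player& b);

// Accept a new champion only if it beats every previous champion,
// which blocks the A-beats-B-beats-C-beats-A cycles described above.
template <typename Player>
bool accept_champion(const Player& candidate, std::vector<Player>& hall_of_fame) {
    for (const auto& past : hall_of_fame)
        if (!beats(candidate, past))
            return false;
    hall_of_fame.push_back(candidate);
    return true;
}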

That's pretty much the gist of it, as far as I recall offhand.


RE: [agi] Logical representation

2007-03-12 Thread Peter Voss
Evolutionary approaches are what you use when you run out of engineering
ideas... (and run out of statistical approaches)

The last game in town.

Some of us are making good progress towards AGI via engineering.

Peter

-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 12, 2007 1:43 PM
To: Russell Wallace; agi@v2.listbox.com
Subject: Re: [agi] Logical representation

On Mon, Mar 12, 2007 at 07:47:26PM +, Russell Wallace wrote:

...

The first and biggest step is to get your system to learn how to evolve.
I understand many do not yet see this as a problem at all.

>shot at that route, let me know if you want a summary of conclusions
>and ideas I got to before I moved away from it.

I don't understand why you moved away from it (it's the only game
in town), but if you have a document of your conclusions to share,
fire away.

-- 
Eugen* Leitl http://leitl.org



Re: [agi] Logical representation

2007-03-12 Thread YKY (Yan King Yin)

On 3/12/07, Russell Wallace <[EMAIL PROTECTED]> wrote:

"Represented in logic" can mean a number of different things, just

checking to see if everyone's talking about the same thing.


Consider, say, a 5 megapixel image. A common programming language

representation would be something like:


struct point {float red, green, blue;};
point *image = new point[n];

For AGI purposes if that's our only representation well obviously we lose.

So we do need to take at least some steps in the direction of a unified
representation.


What should that representation look like? The answer is that logic, or

some variant thereof, is as good as we're going to get. So we might have
something like:


[Meaning #1]
color(point(1, 2), red) = 3
...another 1499 similar assertions

Now we can build up further layers of information, such as:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

What about the algorithms that generate those further layers of

information? Well, typical image processing code is written in something
like C, but again for AGI purposes if we do that we lose. So,


[Meaning #3]
we want to create (whether by hand, auto learning or most likely

combination) algorithms in a more logic-oriented language, something that
(at least superficially) looks more like Prolog or Haskell than C.


A classic mistake is to slide a step further and assume

[Meaning #4]
the application of those algorithms will be pure deduction and the runtime

engine can be purely a theorem prover. We now know that doesn't work, at
best you just run into exponential search, need procedural/heuristic
components. So leave that aside.


What about the physical representation of all this? Well suppose the

general logic database stores stuff as a set of explicit sentences, then


[Meaning #5]
we can use the same database to store all kinds of data as explicit

sentences. Do we want to? For prototyping, sure. For production code, we'll
eventually want to do things like _physically_ storing image data as an
array of floats because there's a couple orders of magnitude performance
improvement to be had that way - only a constant factor to be sure, but
enough to be eventually worth having. But that has no bearing on the
_logical_ side of things - the semantics should still be just as if
everything was stored as explicit sentences using the general database
engine.


So the answer to whether standardizing on logical representation is good

is:


Meanings #1, 2, 3 - yes.
Meaning #4 - no.
Meaning #5 - for prototyping sure, not for production code.


Hi Russell,
Thanks for clarifying all this.

I agree with the above analysis, i.e. Meanings 1, 2, 3, and 5.  In addition,

1.  re #4:  As an example, the logical term "chair" is defined, as a logical
rule, by other logical terms like "edges", "planes", "blocks", etc.  Sensory
perception is a process of *applying* such rules;  algorithmically this is
known as *pattern matching*:  we're given a set of low-level features (edges
etc) and we need to search for (match) the description of a chair.  The
computational bottleneck here is that there can be a huge number of
objects-to-be-recognized, such as chair, table, car, human,... a gadzillion
things.  This classic problem is *already* addressed by the Rete algorithm (a
minimal sketch of the naive matching it speeds up follows below).

2.  At the very lowest level (from pixels to edge detection, blob detection,
etc) I think we must use neural-like, specialized algorithms.  Using a
logical representation is not practical here.  But we lose nothing, because
this level is sub-conscious, like the retina is to the brain.
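
A minimal sketch of the naive matching that Rete speeds up (illustrative only;
object descriptions are reduced here to sets of required feature labels, and
all names are invented for the example):

#include <set>
#include <string>
#include <vector>

// One object description: the low-level features its rule requires.
struct ObjectRule {
    std::string name;                 // e.g. "chair"
    std::set<std::string> required;   // e.g. {"plane#seat", "block#leg", ...}
};

// Naive recognition: test every description against the current features.
// Rete's contribution is avoiding this full re-scan each time a new
// feature is asserted.
std::vector<std::string> recognize(const std::set<std::string>& features,
                                   const std::vector<ObjectRule>& rules) {
    std::vector<std::string> matched;
    for (const auto& rule : rules) {
        bool ok = true;
        for (const auto& f : rule.required)
            if (features.count(f) == 0) { ok = false; break; }
        if (ok) matched.push_back(rule.name);
    }
    return matched;
}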

YKY



Re: [agi] Logical representation

2007-03-12 Thread Russell Wallace

On 3/13/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:


1.  re #4:  As an example, the logical term "chair" is defined, as a
logical rule, by other logical terms like "edges", "planes", "blocks", etc.
Sensory perception is a process of *applying* such rules;  algorithmically
this is known as *pattern matching*:  we're given a set of low-level
features (edges etc) and we need to search for (match) the description of a
chair.  The computational bottleneck here is that there can be a huge number
of objects-to-be-recognized, such as chair, table, car, human,... a
gadzillion things.  This classic problem is *already* addressed by the rete
algorithm.



Right. More generally we do need actual algorithms to handle problems like
this, pure logical deduction isn't enough.

2.  At the very lowest level (from pixels to edge detection, blob detection,

etc) I think we must use neural-like, specialized algorithms.



Yep, and not just at the very lowest level either; there'll be lots of
situations where specialized algorithms (some reminiscent of neurons, some
not) will be appropriate.

Using a logical representation is not practical here.




This doesn't follow! An input from a video camera to an edge detection
algorithm, for example, can consist of 1500 logical assertions about the
RGB values of the pixels; the edge detection code will probably be cleaner
and more modular, and certainly easier to integrate with the rest of the
system, when the input is in this form. (Later when optimizing for speed we
might want this to get compiled to an array of floats representation - but
this should be a compiler flag, it should make no difference to the
semantics.)

But we lose nothing, because this level is sub-conscious, like the retina is

to the brain.



But just because low-level visual processing isn't consciously accessible to
humans doesn't mean the same must be true of an AI. Computers have lots of
weaknesses compared to the human brain, but they also have some strengths
and we should take advantage of those.



Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore

Russell Wallace wrote:
On 3/12/07, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


I'm still not quite sure if what I said came across clearly, because
some of what you just said is so far away from what I intended that I
have to make some kind of response.


Indeed it seems I'm still not understanding you...

I thought I did deny that approach already:  I explained that I was
doing a huge reengineering of the existing body of knowledge in
cognitive science.  Can you imagine how much structure there is in such
a thing?  There are roughly 1,000 human experiments or AI simulations
accounted for in that structure, and all integrated in such a way that
it implies one over system framework (at least, that is the goal of the
project).  That doesn't sound like Seed AI to me:  it has both structure
in its architecture, and it also allows for some priming of its
knowledge base with 'hand-built' knowledge.


Is this huge structure intended to be part of/input to an AI program? If 
so, then it needs a machine-readable representation. Since it is to be 
built by humans, does that representation not also need to be 
human-readable?


Ummm... is it the input?  You mean, as in input *data* to an AI program?

Good god no.  It *is* the program.  It is the architecture of an AI.

(To be absolutely strict about it, this is a framework for a class of AI 
systems, rather than a particular system itself, but the distinction is 
not important in this context).


Regarding the use of readable names.  The atomic units of knowledge in 
the resulting system (the symbols, concepts, logical terms, whatever you 
want to call them) are mostly built by the system itself, so they start 
out without names that are chosen by me, obviously.  However, they would 
acquire descriptions after they were created; these would be mostly 
assigned in an automatic way, by a system monitor, but the experimenter 
could make changes if they wished.




Richard Loosemore.



Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Good god no.  It *is* the program.  It is the architecture of an AI.



So it is part of the AI then, like I said.

Regarding the use of readable names.  The atomic units of knowledge in

the resulting system (the symbols, concepts, logical terms, whatever you
want to call them) are mostly built by the system itself, so they start
out without names that are chosen by me, obviously.



So what do you build then? You seemed to be saying that you the human were
planning to build something big and complex based on some scientific
knowledge. And then the program would start running and create more stuff,
but I'm talking about what happens before you press Go. You've got this big
complex thing (BCT) that you the human will write, yes? Doesn't BCT,
whatever my misunderstandings about your plans for its exact nature, need to
be written in a human-maintainable notation with readable variable names and
whatnot, then?



Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore

Russell Wallace wrote:
On 3/13/07, *Richard Loosemore* <[EMAIL PROTECTED] 
> wrote:


Good god no.  It *is* the program.  It is the architecture of an AI.


So it is part of the AI then, like I said.

Regarding the use of readable names.  The atomic units of knowledge in
the resulting system (the symbols, concepts, logical terms, whatever you
want to call them) are mostly built by the system itself, so they start
out without names that are chosen by me, obviously.


So what do you build then? You seemed to be saying that you the human 
were planning to build something big and complex based on some 
scientific knowledge. And then the program would start running and 
create more stuff, but I'm talking about what happens before you press 
Go. You've got this big complex thing (BCT) that you the human will 
write, yes? Doesn't BCT, whatever my misunderstandings about your plans 
for its exact nature, need to be written in a human-maintainable 
notation with readable variable names and whatnot, then?


8-|

All along I have been talking about the architecture, the knowledge 
acquisition mechanisms and (crucially) the semantic opaqueness of the 
symbols, in various types of AI.  The "opaqueness of the symbols" issue 
was about whether the labels and the structure of the atomic units of 
knowledge within the system were going to be (a) chosen by the system 
architect, or (b) constructed by the system.


What have variable names got to do with architecture?

When I was talking about the symbols not having preselected names, I was 
talking about high-level issues


You thought I was talking about scrambling the letters in the variable 
names, in the software?





Richard Loosemore.



Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

Richard: I'm not sure why it's been so extraordinarily difficult to
communicate, but from what you're saying here it seems to be back to square
one again; continuing to try to communicate in abstract English about this
topic doesn't appear to be a productive use of either of our time at this
point. If you get to the stage where you have sample code, screenshots or
something similarly concrete, though, I'd be interested in taking a look.



Re: [agi] Logical representation

2007-03-13 Thread Mark Waser

What have variable names got to do with architecture?


Russell is conflating concept names (a.k.a. symbols) and variables.

Personally, I would just have the system autonumber each concept as the 
system generates it and then have some serious resources devoted to 
determining and maintaining a set of "friendly names" (which, of course, 
depends upon your audience, level of abstraction, etc.) for each concept.


Human readability is a necessity as far as I am concerned; however, as long 
as the system can and will accurately convert its internal representation 
(with concept numbers) into a human-readable form (with "friendly 
names"), I don't know what more you can possibly ask for.


- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, March 13, 2007 12:29 PM
Subject: Re: [agi] Logical representation



Russell Wallace wrote:
On 3/13/07, *Richard Loosemore* <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>> wrote:


Good god no.  It *is* the program.  It is the architecture of an AI.


So it is part of the AI then, like I said.

Regarding the use of readable names.  The atomic units of knowledge 
in
the resulting system (the symbols, concepts, logical terms, whatever 
you
want to call them) are mostly built by the system itself, so they 
start

out without names that are chosen by me, obviously.


So what do you build then? You seemed to be saying that you the human 
were planning to build something big and complex based on some scientific 
knowledge. And then the program would start running and create more 
stuff, but I'm talking about what happens before you press Go. You've got 
this big complex thing (BCT) that you the human will write, yes? Doesn't 
BCT, whatever my misunderstandings about your plans for its exact nature, 
need to be written in a human-maintainable notation with readable 
variable names and whatnot, then?


8-|

All along I have been talking about the architecture, the knowledge 
acquisition mechanisms and (crucially) the semantic opaqueness of the 
symbols, in various types of AI.  The "opaqueness of the symbols" issue 
was about whether the labels and the structure of the atomic units of 
knowledge within the system were going to be (a) chosen by the system 
architect, or (b) constructed by the system.


What have variable names got to do with architecture?

When I was talking about the symbols not having preselected names, I was 
talking about high-level issues


You thought I was talking about scrambling the letters in the variable 
names, in the software?





Richard Loosemore.







Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Mark Waser <[EMAIL PROTECTED]> wrote:


Russell is conflating concept names (a.k.a. symbols) and variables.



And a longer list of other things than I care to enumerate. The distinction
I've been making all along is between human-readable formats like predicate
calculus, SQL and XML, versus non-human-readable ones like vectors of
floats, binary machine code, graphs of unlabeled nodes etc. I'm arguing for
the former (and in particular, for something in the predicate calculus
family, though if anyone thinks they have something better I'm all ears -
the important thing is to choose _some_ good, flexible, expressive
human-readable format and use it as the canonical format for the whole
system).

Personally, I would just have the system autonumber each concept as the

system generates it and then have some serious resources devoted to
determining and maintaining a set of "friendly names" (which, of course,
depends upon your audience, level of abstraction, etc.) for each concept.



That's a good approach to have in one's toolbox for machine-generated
content, yep, though I've primarily been discussing human-generated content.



Re: [agi] Logical representation

2007-03-13 Thread Mark Waser
>> The distinction I've been making all along is between human-readable formats 
>> like predicate calculus, SQL and XML, versus non-human-readable ones like 
>> vectors of floats, binary machine code, graphs of unlabeled nodes etc. I'm 
>> arguing for the former (and in particular, for something in the predicate 
>> calculus family, though if anyone thinks they have something better I'm all 
>> ears - the important thing is to choose _some_ good, flexible, expressive 
>> human-readable format and use it as the canonical format for the whole 
>> system). 

Human-readable is an interesting term . . . .  Is a picture human-readable?  I 
think that you would argue not (in this context, obviously).

All of the things that you name as non-human-readable certainly can be 
converted (albeit, extremely inefficiently) to a human readable format 
(sufficient to reproduce the item in question with no further information -- 
given sufficient time).

Arguably, the *only* human readable format is a human language (in which you 
can then explain predicate calculus, SQL, and XML as well as everything you 
label as non-human-readable).



Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Mark Waser <[EMAIL PROTECTED]> wrote:


Human-readable is an interesting term . . . .  Is a picture
human-readable?  I think that you would argue not (in this context,
obviously).



Well, a picture is (in some domains) human-readable - and I think tools that
display certain kinds of information in picture format will be essential.
But it's not human-maintainable, and there are a lot of things it's simply
not suitable for.

All of the things that you name as non-human-readable certainly can be

converted (albeit, extremely inefficiently) to a human readable format
(sufficient to reproduce the item in question with no further information --
given sufficient time).



The issue isn't machine time, it's that an AI system consisting of many
modules has to have one canonical format for representing content, so that
the modules can work together; so versatility is a key virtue. Vector of
floats for example is a perfectly good format for early stages of vision
processing - it can easily be converted into a human-readable picture. But
it's not a good representation for most other purposes. I'm suggesting
predicate calculus (or some variant thereof) is the best all-round candidate
for a canonical format.

Arguably, the *only* human readable format is a human language (in which you

can then explain predicate calculus, SQL, and XML as well as everything you
label as non-human-readable).



Well, the chosen format also has to be machine-readable, so as always we
have to compromise.
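
[As a toy illustration of that compromise, in Python, with invented names: an assertion can be kept as plain structured data for the machine while printing and parsing as a single readable line for the human.]

def format_assertion(pred, args):
    # machine side: a (predicate, args) pair; human side: one readable line
    return "%s(%s)" % (pred, ", ".join(args))

def parse_assertion(text):
    # handles only flat arguments; nested terms would need a real parser
    pred, rest = text.split("(", 1)
    return pred.strip(), [a.strip() for a in rest.rstrip(") ").split(",")]

line = format_assertion("on_top_of", ["block_a", "block_b"])
print(line)                   # on_top_of(block_a, block_b)
print(parse_assertion(line))  # ('on_top_of', ['block_a', 'block_b'])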



Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore

Russell Wallace wrote:
Richard: I'm not sure why it's been so extraordinarily difficult to 
communicate, but from what you're saying here it seems to be back to 
square one again; continuing to try to communicate in abstract English 
about this topic doesn't appear to be a productive use of either of our 
time at this point. If you get to the stage where you have sample code, 
screenshots or something similarly concrete, though, I'd be interested 
in taking a look.


Well, it was a productive use of my time because I wrote some stuff that 
I have already modified for use elsewhere.


I was not writing in abstract English, though, I was talking about 
architectural issues that have been discussed by many others.


I would cautiously (and with due respect) suggest that IF you have been 
tempted to categorize this discussion as [Loosemore talking vague 
nonsense again], you might want to resist that temptation.  The "more 
concrete stuff", when it arrives, is going to take some of this stuff as 
a course prerequisite.



Richard Loosemore.



Re: [agi] Logical representation

2007-03-13 Thread Richard Loosemore

Mark Waser wrote:

What have variable names got to do with architecture?


Russell is conflating concept names (a.k.a. symbols) and variables.

Personally, I would just have the system autonumber each concept as the 
system generates it and then have some serious resources devoted to 
determining and maintaining a set of "friendly names" (which, of course, 
depends upon your audience, level of abstraction, etc.) for each concept.


Human readability is a necessity as far as I am concerned; however, as 
long as the system can and will accurately convert its internal 
representation (with concept numbers) into an accurate human-readable 
form (with "friendly names"), I don't know what more you can possibly 
ask for.


Exactly right.  I have given some thought to the issue of building 
friendly names.


At the high level you can get the system itself to feed its own concepts 
through a modified version of the machinery, in its own system, that 
would extract a description of the concept.


At lower levels, and earlier stages of development, there could be 
diagrams that simply grab the nearby concepts and show their relationships.


The virtue of the architecture I have in mind is that, unlike neural 
nets, it does not use distributed representations, which are a pain to 
interpret.


All of this is especially important for the Friendliness issue.  During 
development, we want to monitor whether the system is thinking certain 
kinds of pathological thoughts, so we can figure out how the 
motivational system is affecting the behavior.  For that, we need 
automatic triggers that attach to the things we consider bad.




Richard Loosemore.



Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:


I would cautiously (and with due respect) suggest that IF you have been
tempted to categorize this discussion as [Loosemore talking vague
nonsense again], you might want to resist that temptation.  The "more
concrete stuff", when it arrives, is going to take some of this stuff as
a course prerequisite.



Not quite :) more like "me again getting frustrated talking to Loosemore
because he keeps saying his agenda is _not_ this or that, and I can't for
the life of me figure out what his agenda _is_". But I'll be happy to look
back over some of this stuff in the light of the more concrete stuff when it
arrives.



Re: [agi] Logical representation

2007-03-13 Thread Mark Waser
>> The issue isn't machine time, it's that an AI system consisting of many 
>> modules has to have one canonical format for representing content, so that 
>> the modules can work together; so versatility is a key virtue. 

Do the many modules have to have one canonical format for representing content 
-- or do they have to have one canonical format for *communicating* content?

I think that you need to resign yourself to the fact that many of the modules 
are going to have *very* different internal representations.

>>  I'm suggesting predicate calculus (or some variant thereof) is the best 
>> all-round candidate for a canonical format. 

I would argue that predicate calculus is just a 
simplified-to-the-point-of-pretending-to-be-rigorous human language.  Of 
course, the problem with predicate calculus is the vocabulary (or lack of 
specification thereof).

The brain co-evolved with language.  I suspect that the easiest minimal 
canonical communicating format is going to be something pretty close to an even 
more rigorously syntactically defined Simple English 
(http://en.wikipedia.org/wiki/Wikipedia:Simple_English_Wikipedia).
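
[A trivial Python sketch of what "more rigorously syntactically defined" might mean in practice; the vocabulary and names are invented for illustration, and a real definition would also constrain word order.]

# Messages are accepted only if every word comes from a fixed, agreed vocabulary.
VOCAB = {"the", "block", "table", "robot", "is", "on", "under", "red"}

def is_simple_english(sentence):
    words = sentence.lower().rstrip(".").split()
    return len(words) > 0 and all(w in VOCAB for w in words)

print(is_simple_english("The block is on the table."))      # True
print(is_simple_english("The chipmunk resembles a leaf."))  # False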




Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Mark Waser <[EMAIL PROTECTED]> wrote:


Do the many modules have to have one canonical format for representing
content -- or do they have to have one canonical format for *communicating*
content?

I think that you need to resign yourself to the fact that many of the
modules are going to have *very* different internal representations.



I'm inclined to think that on a semantic level they should also use the same
internal representation. Sure, for efficiency e.g. vision processing code
might want to use vector of floats _implementation_, but that should be a
compiler flag to be added after you've written and profiled a working
prototype - the code should be written in terms of the logical
representation. I think if you start actually designing each module around a
hand-tweaked internal representation, you'll end up spending your whole life
on one narrow AI application. This isn't just theory - spending one's whole
life on one narrow AI application is exactly what people currently do. The
trick is to get to the next level of productivity, and I think using a
consistent across-the-board logical representation is a key part of that.

The brain co-evolved with language.  I suspect that the easiest minimal

canonical communicating format is going to be something pretty close to an
even more rigorously syntactically defined Simple English (
http://en.wikipedia.org/wiki/Wikipedia:Simple_English_Wikipedia).



I'm skeptical, but it's hard to be sure of a "can't", so if you want to go
that route - then go ahead and prove me wrong.



Re: [agi] Logical representation

2007-03-13 Thread Mark Waser
>> I'm inclined to think that on a semantic level they should also use the same 
>> internal representation

Hmmm, the dictionary definition of semantic is "of, pertaining to, or arising 
from the different meanings of words or other symbols" -- which I take to be 
the *meaning* or *communication* level which certainly can be different from 
the *working* level.  How does what you're saying differ from what I'm saying?

>> Sure, for efficiency e.g. vision processing code might want to use vector of 
>> floats _implementation_, but that should be a compiler flag to be added 
>> after you've written and profiled a working prototype - the code should be 
>> written in terms of the logical representation.

Uh huh.  So your vision processing "code" is something like a description which 
then compiles down to the most efficient implementation.  Sounds to me like a 
descriptive communication level with a magical compiler that translates it to a 
machine-code internal representation.

>> I think if you start actually designing each module around a hand-tweaked 
>> internal representation, you'll end up spending your whole life on one 
>> narrow AI application

I think that if I convince someone/everybody else to throw a standard 
communication layer on top of *their* hand-tweaked internal representation, 
I'll do just fine, thank you very much.

>> The trick is to get to the next level of productivity, and I think using a 
>> consistent across-the-board logical representation 

Yes.  And I'm pushing for that at the *communication* level (where it clearly 
is possible) instead of at the *internal working* level (where I contend that 
it is clearly *not* possible -- or, at least, not feasible).

>> I'm skeptical, but it's hard to be sure of a "can't", so if you want to go 
>> that route - then go ahead and prove me wrong. 

Which is what I'm working towards.  But how are *you* progressing on converging 
on a canonical format?  Do you believe that logical representation is 
sufficient for describing vision processing well enough that a compiler can 
then create functioning vision code?



Re: [agi] Logical representation

2007-03-13 Thread Russell Wallace

On 3/13/07, Mark Waser <[EMAIL PROTECTED]> wrote:


Hmmm, the dictionary definition of semantic is "of, pertaining to, or
arising from the different meanings of words or other symbols" -- which I
take to be the *meaning* or *communication* level which certainly can be
different from the *working* level.  How does what you're saying differ from
what I'm saying?



Rather than get into dictionaries, you know the way in SQL you can say
SELECT blah FROM whatever, and the database engine will automatically use an
index if there's one available _while leaving your code no different than if
it had been using a linear search_? That's what I mean.
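
[A small Python sketch of that idea, with hypothetical names: the caller asks the same logical question either way; whether the module keeps explicit per-point facts or a packed array of floats is invisible to it, just as an SQL index is.]

class AssertionImage:
    """Prototype backend: brightness stored as explicit (x, y) -> value facts."""
    def __init__(self, width, height):
        self.facts = {(x, y): 0.0 for x in range(width) for y in range(height)}
    def brightness(self, x, y):
        return self.facts[(x, y)]

class ArrayImage:
    """'Production' backend: the same content packed into a flat list of floats."""
    def __init__(self, width, height):
        self.width, self.data = width, [0.0] * (width * height)
    def brightness(self, x, y):
        return self.data[y * self.width + x]

def average_brightness(img, points):
    # caller code is written purely against the logical question
    # "how bright is (x, y)?" and never sees the physical storage
    return sum(img.brightness(x, y) for x, y in points) / len(points)

for backend in (AssertionImage(4, 4), ArrayImage(4, 4)):
    print(average_brightness(backend, [(0, 0), (1, 2), (3, 3)]))   # 0.0 both times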

Uh huh.  So your vision processing "code" is something like a description

which then compiles down to the most efficient implementation.



No, just an adequately efficient implementation :)

I think that if I convince someone/everybody else to throw a standard

communication layer on top of *their* hand-tweaked internal representation,
I'll do just fine, thank you very much.



I hope you're right!

Which is what I'm working towards.  But how are *you* progressing on

converging on a canonical format?  Do you believe that logical
representation is sufficient for describing vision processing well enough
that a compiler can then create functioning vision code?



Yes. As for progress, I think the outline ideas are solid enough that I'm
ready to start taking a shot at detailed design once I can get enough money
together that I don't have to worry about that end of things for a few years
(the perennial story of modern AGI research :P)



Re: [agi] Logical representation

2007-03-14 Thread David Clark
"an AI system consisting of many modules has to have one canonical format for 
representing content" WHY?

In a modern operating system that consists of a huge number of component parts, 
there is no one data representation.  There must be a consistent interface 
between the modules for them to work together but definitely not a single data 
representation.

Each type of representation has pluses and minuses depending on the domain where 
it is used, so why compromise on one representation when you don't need to?



Re: [agi] Logical representation

2007-03-14 Thread Russell Wallace

On 3/14/07, David Clark <[EMAIL PROTECTED]> wrote:


 "an AI system consisting of many modules has to have one canonical format
for representing content" WHY?



Because for A to talk to B, they have to use a
language/format/representation that both of them understand. By far the most
efficient way to achieve this is to decide on a single representation. If
you do it on an ad hoc basis, N modules will require you to either write
O(N^2) translation routines (not feasible) or abandon general
interoperability (thereby also abandoning general intelligence).
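
[A back-of-envelope Python version of that count; nothing here is specific to any particular system.]

def pairwise_translators(n):
    # one translator per ordered pair of distinct modules: O(N^2)
    return n * (n - 1)

def canonical_translators(n):
    # one encoder to and one decoder from the canonical format per module: O(N)
    return 2 * n

for n in (5, 10, 50):
    print(n, pairwise_translators(n), canonical_translators(n))
# 5 20 10
# 10 90 20
# 50 2450 100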

In a modern operating system that consists of a huge number of component

parts, there is no one data representation.



And the parts mostly don't talk to each other, indeed computer scientists
have for decades lamented the extent to which everything has to be
reinvented all the time because we can't effectively reuse existing
components. The exceptions to this e.g. relational databases, the clipboard,
the Web, do indeed involve agreeing on a single data representation.





Re: [agi] Logical representation

2007-03-14 Thread David Clark
I think that our minds have many systems that, at least at the higher levels, 
have different data representations.  These systems in our minds seem to 
communicate with each other in words.  The words aren't totally appropriate in 
all domains (like Math) but they do serve to communicate the big ideas.  Could Math 
be done using English only and no Math symbols?  Possibly, but I don't think 
many Mathematicians would want to try it.

I think that using a common English language interface between the larger 
models is totally feasible, and with object inheritance the interface code 
wouldn't have to be rewritten by many modules at all.

In general, device drivers that work on one version of an OS still work on the 
next one.  An exception might be when the OS went from 32 to 64 bit operation 
or from WIN95 to NT.  Each device driver has a pretty well defined interface 
and much change can occur within that driver without any change to user code at 
all.  I wouldn't call this "reinvented all the time" at all.

Relational databases, the clipboard, and the Web all have totally different 
data representations.  They are all well known and used but totally different 
none the less.

In some cases, each of fuzzy logic, Bayesian logic, statistical methods, vector 
arithmetic, neural networks, predicate logic, heuristics etc seem to be the 
best solution but it is easy to come up with many examples where each either 
can't work or won't be workable on current hardware.  What if you could have a 
system made up of all of these methods where the data representation suited the 
domain and communication was done between the modules by using simple English?  
Like the Math example above, you won't necessarily be able to communicate all 
the detail between each module, but why do that when each module can be its own 
"expert"?  Wouldn't such a system make a lot more sense than always trying to 
fit a square peg into a round hole?

-- David Clark



Re: [agi] Logical representation

2007-03-14 Thread Russell Wallace

On 3/14/07, David Clark <[EMAIL PROTECTED]> wrote:


 I think that our minds have many systems that, at least at the higher
levels, have different data representations.  These systems in our minds
seem to communicate with each other in words.



I don't think it's as simple as that, but in any case the mind evolved under
very different constraints; it's a good existence proof, but not a good
model for human engineers to follow.

I think that using a common English language interface between the larger

models is totally feasible



But if you can pull that off, I promise to be duly impressed! :)



Re: [agi] Logical representation

2007-03-15 Thread YKY (Yan King Yin)

3 issues have been raised in this thread, by different people:

1.  Richard Loosemore:  Symbol names -- should they be system-generated or
human-entered?

This is a good question.  In Cyc there are so-called "pretty names" (English
terms that describe Cyc concepts) but they are not sophisticated enough to
allow automatic translation of Cyc KB into English.

A dilemma is that 1) if symbol names are machine-generated, they would not
be human-readable;  but 2) the machine should be able to *create* new
symbols during machine learning.

My solution:  all symbols in the system have *no names*;  they are only
referenced by number.  The natural-language part of the KB contains
fact/rules for the translation of symbols into words.  For example, if block
A is on top of block B, there will be a link for "on_top_of" but it will
just be a nameless link.  The natural-language KB will contain rules for
translating that "on_top_of" to human language.

In other words, when we specify the knowledge representation scheme, we
specify them as if they were machine-generated, and we simultaneously
specify the rules for natural-language translation.
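
[A toy Python rendering of that scheme; the symbol numbers and templates are invented here.  The KB proper holds only nameless numbered symbols and links; a separate natural-language table holds the rules that turn a link into an English sentence.]

SYM_BLOCK_A, SYM_BLOCK_B, SYM_ON_TOP_OF = 101, 102, 203   # nameless inside the KB

links = [(SYM_ON_TOP_OF, SYM_BLOCK_A, SYM_BLOCK_B)]        # the KB proper

nl_rules = {                                                # the NL part of the KB
    SYM_ON_TOP_OF: "{0} is on top of {1}",
    SYM_BLOCK_A: "block A",
    SYM_BLOCK_B: "block B",
}

def render(link):
    relation, a, b = link
    return nl_rules[relation].format(nl_rules[a], nl_rules[b])

print(render(links[0]))    # block A is on top of block B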

The bottom line:  we can insert rules into the system *as if* they are
machine-generated.  After that they are indistinguishable from machine
representation.  Why so much aversion to human-entered facts/rules?

2.  Josh Storrs, John Scanlon: "Numeric representation better than symbolic
representation".

Ben has given his version of the answer.  My view is very similar to his.
Josh keeps saying that logic cannot represent certain things, eg a chipmunk
resembling a leaf blowing in the wind.  In probabilistic logic this CAN be
represented, because the definition of "leaf" is a *weighted sum*
of features, and the jumping chipmunk shares many of those same features as
the leaf's.
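
[Numerically, the point looks something like this Python sketch; the features and weights are invented purely for illustration.]

# "leaf blowing in the wind" as a weighted sum of features
leaf_in_wind = {"small": 0.2, "brown": 0.2, "moving_erratically": 0.4, "airborne": 0.2}

def match(definition, observed):
    # score = sum of the weights of the features the observation shares
    return sum(w for f, w in definition.items() if f in observed)

chipmunk_jumping = {"small", "brown", "moving_erratically", "furry"}
falling_rock = {"small", "moving_fast"}

print(match(leaf_in_wind, chipmunk_jumping))   # 0.8 -- a close resemblance
print(match(leaf_in_wind, falling_rock))       # 0.2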

If probabilistic logic can handle *all* aspects of AGI, why use a more
complicated method such as vector space?  I'm not a master of math as Ben
is, but I think many logical operations (eg abduction) are so non-linear
that their n-space counterpart is almost impossible to fathom.

Bottom line:  what does the numerical representation offer that the
logico-numerical form does not?

3.  Mark Waser, David Clark:  "Many representations, communicating via
protocols".

My proposal is to have the entire AGI run by a *central* inference engine
(plus truth maintenance system etc etc, let's call the whole shebang a
"cognitive engine").  For this to work, the representation has to be
uniform.

Why use a *centralized* cognitive engine, you ask?  Well, basically because
such an engine is very complicated and not easy to build -- it's got to have
probabilistic logic, an efficient deduction algorithm, and efficient search
mechanisms, etc.  It seems to be a good thing if we could design and write
it just once and solve the whole AGI problem.

The alternative is to let people in different AI subfields write their own
modules, and glue them together via communication protocols.  But what
if the AGI faces a *new*, unseen problem?  The whole point of having an AGI
is that it is *general*, isn't it?  At first it might seem that having
specialized algorithms for (say) vision is cool, but when you give the
vision module a long list of requirements -- reading fonts, playing
boardgames, understanding drawings, etc -- you may find that the vision
module needs to be more and more *general*, to the point that you're almost
making the general cognitive engine again.

YKY



Re: [agi] Logical representation

2007-03-15 Thread Eric Baum

I think this is interesting but I have a number of comments.
(1) Just fyi, last week or the week before on this list there was
discussion of a Sussman essay on robust computation. (Google
"Sussman essay on robust computation" and I bet you'll find it.)
Since then, I
have discussed it with him in person. He is proposing schemes by which
programs can be robust -- extending to handle circumstances not previously
foreseen. He advocates a specific procedure, the details of which I have not
yet looked at, and references some of his former students' papers that I
have yet to look at, by which different modules would adaptively
discover different languages to converse in. At the least this
avoids your comment (maybe in a previous post) that you otherwise
need to provide n**2 languages for n modules to communicate.

(2) In any language, the words are going to have to invoke some stored
and possibly fairly complex code. In C, for example, instructions will
have to invoke some programs in machine language etc. In English, I
think the words must be labels for quite complex modules. The word
"word", for example, must invoke some considerable object useful for
doing computations concerning words. In this view, language can do a 
very powerful thing: by sending the labels of a number of powerful
modules, I send a program, so you can more or less run the same
program, thus perceiving more or less the same thought. This picture
also, to my mind, explains metaphor-- when you "spend" time you invoke
a "spend" object/method  within valuable resource management (or at the 
least an instance of it created in your time understanding program). 
However, I don't understand how smaller modules within the brain or mind
could communicate like this, in English. The module that deals 
with the word "word", for example, in order to deal with a 
sentence including lots of other words, would have to invoke the
other modules themselves. This is discussed at more length in 
my book What is Thought?, if memory serves in Ch. 13. If you can 
propose a solution to this, I would be most interested.

(3) Cassimatis has another interesting proposal. He proposes that
all modules (at some high level of granularity) must support a stipulated
interlingua. They take requests in this interlingua, perhaps translate
them into internal language, do computations, and then return results
in the interlingua. It is the responsibility of the module designer
(or presumably module creation algorithm) to produce a module
supporting the interlingua.
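
[A bare-bones Python sketch of the stipulated-interlingua idea; everything here is invented for illustration and is not Cassimatis's actual design.  Each module accepts requests in the shared format and answers in it, whatever it does internally.]

class VisionModule:
    def handle(self, request):
        # internally this might be arrays of floats and filters; stubbed out here
        if request == "ask colour(object33)":
            return "reply colour(object33, red)"
        return "reply unknown"

class PlannerModule:
    def handle(self, request):
        # internally this might be search over action sequences; stubbed out here
        if request.startswith("ask plan("):
            return "reply plan(step(grasp), step(lift))"
        return "reply unknown"

for module in (VisionModule(), PlannerModule()):
    print(module.handle("ask colour(object33)"))
# reply colour(object33, red)
# reply unknown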


Re: [agi] Logical representation

2007-03-15 Thread David Clark
Is "very complicated" a good reason to have 1 cognitive engine?  Why not have 
many and even use many on the same problem and then accept the best answer?  
Best answer might change for a single problem depending on other issues outside 
the actual problem area.  Why put all the eggs in one basket?  Is deduction the 
appropriate metaphor for all questions and thinking?  Do you use only logical 
analysis or fuzzy logic for everything you think about?

"It seems to be a good thing if we could design and write it just once and 
solve the whole AGI problem".

I would like to be a millionaire right now but that isn't likely to happen.  It 
seems quite obvious to me that ANY one algorithmic engine or data 
representation can only solve a small subset of what an AGI should be able to 
do.  Let's say that you start with an architecture that can include many of 
each and it turns out that just one can do an adequate job for everything, then 
just use the one that works.  If another potential algorithm doesn't work, just 
don't use it.  If an architecture is designed that can EVER only have one 
algorithm or data representation, and it just so happens you made a mistake, 
then the game is over.

Many times people on this list have stated that no one knows for sure which 
exact direction will produce an AGI.  Many different techniques have failed in 
the past.  Why set yourself up at the start of the project to have the least 
chance of success?

"But what if the AGI faces a *new*, unseen problem? "

Why not have a module that handles "new and unseen" problems while having 
others that work well in domains that you know?  It might not be the most 
efficient at first but it could be made to handle un-known problems.  The AGI 
could then make a better or more efficient module if this new problem warranted 
such importance.  In some ways, humans seem to do this now.

"but when you give the vision module a long list of requirements -- reading 
fonts, playing boardgames, understanding drawings, etc -- you may find that the 
vision module needs to be more and more *general*, to the point that you're 
almost making the general cognitive engine again. "

If the vision module got more complicated, why not have the AGI split off the 
parts that make sense, into a higher level module and leave the lowest level in 
the format that best handles that?  If 10, 20 or more modules work in 
different ways on vision, then why not let it be so?

How can one monolithic program accomplish all that humans can do now and all 
that we will do in the future?  Our brains (at the highest level) don't seem to 
be monolithic either so what evidence is there (biological or otherwise) that 
shows cognition can be had with 1 algorithm or data rep?

If there was a single quick method of creating an AGI, wouldn't someone have 
found it by now?

-- David Clark



Re: [agi] Logical representation

2007-03-15 Thread David Clark
Sorry, I haven't read your book!

"Word" doesn't have to be contained in only 1 model.  Many jokes are made 
because the meanings of words are so context sensitive in our brains that we are 
surprised when other (legitimate in other contexts) meanings are later used 
instead.  Context would be contained in a model that would contain the language 
and relations appropriate to that domain.  The model could use stored 
experience for some results and use other techniques if there was significant 
changes or more detail necessary.  I don't propose that a sentence would be 
syntactically parsed and then the models for each word called.  I think the 
whole sentence would go to a context model and the semantic meaning of the 
sentence extracted using local and global tools (more models) as necessary.  
Previous sentences and other sources could be included in determining the 
semantic meaning of the sentence and adding that information to the model to be 
used further.

The "English" communication at some levels could be like  
 and not full sentences.  Some models could be called that 
have access to the context model or other higher levels so that their output 
could change depending on how they were created.


> (3) Cassimatis has another interesting proposal. He proposes that
> all modules (at some high level of granularity) must support a stipulated
> interlingua. 

This is exactly what I propose.  I think this interlingua can be a subset of 
normal English but more likely a group of English subsets depending on the 
level of interaction.  The highest levels could probably communicate in normal 
English while at the lowest of some levels it could be a matrix of numbers or 
  like I described above.

-- David Clark




Re: [agi] Logical representation

2007-03-16 Thread YKY (Yan King Yin)

On 3/16/07, David Clark <[EMAIL PROTECTED]> wrote:

Is "very complicated" a good reason to have 1 cognitive engine?  Why not

have many and even use many on the same problem and then accept the best
answer?  Best answer might change for a single problem depending on other
issues outside the actual problem area.  Why put all the eggs in one
basket?  Is deduction the appropriate metaphor for all questions and
thinking?  Do you use only logical analysis or fuzzy logic for everything
you think about?
Let me concede that having a centralized cognitive engine may make the
system kind of brittle.  It may be the same reason why airplanes still have
accidents but I've never heard of a bird having accidents during flight.


From my perspective I think building a "von Neumann" style AGI (ie with a

small number of "neat" modules) is much easier than the distributive
approach.  I'm not saying that the latter approach won't work, but it's just
that I find the first route *much* easier (perhaps to me, particularly).

Notice that I have outlined an agenda for building the "neat" AGI,
whereas the distributive AGI is still at the stage of some very fundamental
questions.

If you can see the "logic" underlying a diverse spectrum of cognitive tasks,
then you may be convinced that a central cognitive engine can handle it
all.  It was probably just as hard to believe that *rigid* planes could make
a flying machine.  In fact, the logical approach has been applied to diverse
areas including natural language and vision.  What I see is that some
incremental change will lead us to success, whereas you see this as a dead
end.

Perhaps we can settle this issue by saying that *both* approaches are
viable, and that exactly which approach is superior is a very complex
issue.  There are pros and cons on each side.

YKY



Re: [agi] Logical representation

2007-03-16 Thread Eric Baum

David> I think that humans intertwine their thoughts with their
David> language.  

There is some anecdotal evidence that the language facility is not necessary
to thought -- for example, a monk who would periodically undergo localized
epileptic seizures in which he would report (after the fact) that his
comprehension and production of language vanished, while his cognitive
abilities and general ability to function reportedly were otherwise
unimpaired. Of course, the removal of language could be just at a conscious
level; perhaps his modules still gabbed away internally unbeknownst to him.
I would also argue that other animals think pretty effectively -- but maybe
they use pidgin English internally, as you think we do.

David> Good automatic language translation has been stumped
David> by the semantic problems etc.  Context is also a huge problem
David> for people working on computer based language problems.  I
David> think AGI will be found by using models (I use this term is the
David> most general way) that communicate in English at a high level
David> where the language and knowledge is interpreted within the
David> model.  Any model could call any other model as needed and many
David> models could be called even at the top level.  Detail
David> information would be known only by the specialist models but
David> abstraction and patterns would be generated at every level to
David> encourage analogies and finding appropriate models to use.  The
David> top most level would also be a model for determining the
David> results from lower level models.

David> In Schank's CD (conceptual dependency) language model, a
David> relatively small set of primitive actions are necessary to
David> represent the semantics of sentences.  Although I don't
David> necessarily advocate only using this set of techniques, I do
David> believe that most language can be translated quite quickly into
David> some useable semantic form.  It could take a lot more effort to
David> get fool proof language out of the AGI if you were intent on
David> tricking the AGI.  Many humans can be easily ticked by other
David> humans as well.  A naive AGI is better than no AGI at all, I
David> believe.

David> The interesting thing about a model versus entering a huge
David> number of rules can be shown by the following analogy.  If I
David> have a function that takes a parameter and produces a result
David> based on a linear algorithm, I only need to store the Y
David> intercept and the slope to produce an infinite set of answers.
David> If I am given some number of numerical pairs, I could create a
David> best fit linear line but I wouldn't know that it was the best
David> because of the small data set or that it was linear instead of
David> geometric etc.  Knowledge is like the line.  Small differences
David> in the input still produce a pretty good answer and this
David> information can be stored very efficiently.  Entering a bunch
David> of rules into an inference engine is like the numerical pairs.
David> The system still has to guess at the function to generate any
David> useful information unless you can just look up the answer
David> directly from previous experience.
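
[The quoted analogy is easy to make concrete; this Python sketch uses numbers invented for illustration.  The line itself needs only two stored parameters and answers any input, while a handful of observed pairs only lets you fit a guess at the underlying function.]

def line(slope, intercept, x):
    # the "knowledge": two numbers answer infinitely many inputs
    return slope * x + intercept

pairs = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]   # noisy samples of y = 2x + 1

def fit_line(points):
    # ordinary least-squares guess at the function behind the pairs
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

print(fit_line(pairs))       # roughly (1.9, 1.1)
print(line(2.0, 1.0, 100))   # 201.0 -- the stored line answers any x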

My book
What is Thought? studies how this picture extends more generally to
thought. It explains how understanding, semantics, language etc
arises from a generalized version of Occam's razor, in which if you
find a compact enough program behaving well enough on enough data,
it is so constrained as to exploit underlying structure and generalize
to solve new problems. The Occam program is argued to be primarily
located in the genome. The book surveys and extends a computer science
literature exploring these kinds of ideas, as well as data and ideas
from a number of other fields.

David> If you want consistency, then try to enter statements that
David> don't contradict each other.  It would probably be easy for a
David> while but eventually it would be almost impossible to find the
David> knowledge holes that need plugged (CYC).  You would also
David> probably find out just how inconsistent humans really are.  The
David> idea is to teach the AGI knowledge and not just meaningless
David> symbols.  This can be done using models which use data
David> representations and algorithms that are appropriate to the
David> domain the model was created for.  Context can be had by making
David> models that encapsulate language and other models for different
David> contexts.  This means no single dictionary with the meaning of
David> every word.

David> How quickly would humans learn if the teacher could reach right
David> into their heads and place an appropriate analogy, algorithm
David> and exceptions right into their brain structures?  Instead, we
David> use English to encourage a model to be created in the persons
David> head while using repetition to make a deep enough groove for
David> the memory to stick.  Over time this model normally has to be
David> thrown away and replaced to make way fo

Re: [agi] Logical representation

2007-03-16 Thread David Clark
- Original Message - 
From: "Eric Baum" <[EMAIL PROTECTED]>
To: 
Sent: Friday, March 16, 2007 6:02 AM
Subject: Re: [agi] Logical representation


> I would also argue that other
> animals think pretty effectively-- but maybe they use internally
> pidgin English as you think we do.

I don't think animals or all humans use pidgin English internally but I
think that all humans use some kind of language or abbreviations thereof for
communicating between higher level thinking models.  I am not overly
concerned exactly how humans achieve cognition as the tools available to
evolution and biological humans are totally different from those for implementing an
AGI using present day computers.  I will get your book as I can see you have
many insights into the topic of cognition in humans, but ultimately a computer
solution will have to be programmed to create an AGI IMHO.

> My book
> What is Thought? studies how this picture extends more generally to
> thought. It explains how understanding, semantics, language etc
> arises from a generalized version of Occam's razor, in which if you
> find a compact enough program behaving well enough on enough data,
> it is so constrained as to exploit underlying structure and generalize
> to solve new problems. The Occam program is argued to be primarily
> located in the genome. The book surveys and extends a computer science
> literature exploring these kinds of ideas, as well as data and ideas
> from a number of other fields.

I know that Occam's razor is an observation that given 2 theories that
explain a set of facts, the simplest one is probably correct.  This
observation is not a law or even correct in all cases.  I however, much
prefer simple over complex in general so I don't totally disagree with
finding smaller and simpler solutions.  Calling a single program an "Occam
program" seems a little strange.  I have seen no evidence that our
intelligence comes from a single algorithm or source.  I can conceive that
evolution could just as easily have created algorithms that were very
convoluted and very "non Occam" but because they work "good enough", there
was no pressure by evolution/selection to create anything better.  As we
both know, evolution has no goal or direction as such.  I am not convinced
that evolution/selection cares much about the size, complexity or elegance
of any solution it might stumble upon.

> According to my theory, "Spontaneous intelligence" is a miasma because
> finding Occam code is a hard computational problem. Our intelligence
> emerged through an astoundingly huge evolutionary program. When I am
> taught a new concept, say by you, I have to build new code. Your code is
not
> identical to mine, so it would be a hard problem for you to reach
> in and simply supply code-- even if you could you'd have to interface
> it with my code. Instead, you provide through language, program
> sketches and examples of subconcepts so that I can build the code
> very rapidly. And in fact, I build it remarkably rapidly-- learning
> is incredibly fast if you consider the complexity of the problem.
> (If you had to reach in and specify synapse values and connections,
> good luck, it would take you a lot longer to teach me mathematics
> so that I understood it and could go out and prove new theorems! ;^)

I rarely am stumped for the definition of a word but "miasma", I had to look
up.  Its meaning, "A harmful or corrupting atmosphere or influence; also, an
atmosphere that obscures" has very interesting implications!

My remark about putting concepts directly into a persons head wasn't to
suggest this was a good thing for people.  I was using this to show that
direct coding of some concepts into an AGI is a good thing.  This would be
impossible if all internal data was represented in non human readable form
as some have suggested on this list.  The success of HTML, XML, Unix config
files etc can be directly linked to their being stored in standard ASCII
text.  This makes it easy to inspect and change diverse formats using a stan
dard text editor even if a specialized binary format may be more computer
efficient.  From these observations I believe that some hand coding of
models and a plain ASCII set of internal interfaces will be the most useful
way of pursuing AGI.

Thank you for your comments.

-- David Clark




Re: [agi] Logical representation

2007-03-16 Thread David Clark
- Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Friday, March 16, 2007 5:48 AM
  Subject: Re: [agi] Logical representation


  Let me concede that having a centralized cognitive engine may make the system 
kind of brittle.  It may be the same reason why airplanes still have accidents 
but I've never heard of a bird having accidents during flight. 
I don't understand how birds having accidents has anything to do with trying to 
develop an AGI using only one "cognitive engine" or one data representation.  
Birds obviously have "accidents during flight" and planes are made of many 
subsystems, each with their own structures and systems.  If a single "cognitive 
engine" would work better than many, then my suggested methodology would still 
work.  It would also work if a small number or a large number of cognitive 
engines were necessary.  
  From my perspective I think building a "von Neumann" style AGI (ie with a 
small number of "neat" modules) is much easier than the distributive approach.  
I'm not saying that the latter approach won't work, but it's just that I find 
the first route *much* easier (perhaps to me, particularly). 
Everyone is entitled to spend their time as they please, of course, but 
creating a flexible initial structure that can accommodate future unseen 
changes isn't that difficult.  I have never created even a business application 
that wasn't designed in many ways that could be extended with un-expected new 
functionality.

  Notice that I have outlined an agenda for building the "neat" AGI, whereas 
the distributive AGI is still at the stage of some very fundamental questions. 
Please elaborate on these "fundamental questions".  Creating a predicate 
calculus kind of heuristic system has been tried many times without success.  
You say your proposal is like SOAR, so why not take SOAR and add your 
modifications?  I think using "logic" for all potential domains is unworkable, 
even if it was possible, which I doubt.
  If you can see the "logic" underlying a diverse spectrum of cognitive tasks, 
then you may be convinced that a central cognitive engine can handle it all.
It sounds like you are asking me to just "believe" but I don't put much stock 
in beliefs (without supporting arguments or facts) at its most basic level.
  It was probably just as hard to believe that *rigid* planes could make a 
flying machine.  In fact, the logical approach has been applied to diverse 
areas including natural language and vision.  
I don't think that creating airplanes is a suitable analogy for your proposal.  
The Wright brothers made many experiments and had the ideas and calculations of 
many other people to work with before they made their first plane.  "Natural 
language and vision" are just a tiny part of any AGI and semantic understanding 
of natural language is embryonic.  The best approaches for vision systems today 
use NNs and other approaches, not symbolic logic.
  What I see is that some incremental change will lead us to success, whereas 
you see this as a dead end. 

  Perhaps we can settle this issue by saying that *both* approaches are viable, 
and that exactly which approach is superior is a very complex issue.  There are 
pros and cons on each side. 
Your optimism for success through "incremental change" doesn't seem to be 
backed with new evidence or from past history.  I can't say your approach can't 
work but it seems to have many problems before it starts.  I don't see my 
approach as so much a particular approach (as yours is) but as a container 
where any number of techniques can compete to create an AGI.  I can't see how I 
could agree "both approaches are viable" when my suggestions can include all of 
yours but not vice versa.  I could agree that we just disagree, however ;)

-- David Clark
