Re: [agi] advice-level dev collaboration

2007-11-14 Thread Jiri Jelinek
Thanks for the responses. Sorry, I picked just a couple of folks.
Dealing with the wide audience of the whole AGI list would IMO make
things more difficult for me. I may share selected stuff later.

Regards,
Jiri Jelinek



Re: [agi] Relativistic irrationalism

2007-11-14 Thread Pei Wang
Stefan,

Though I agree with most of your analysis of inter-agent relationships,
I don't share your conception of rationality.

To me, rationality itself is relativistic, that is, what
behavior/action is rational is always judged according to the
assumptions and postulations about a system's goals, knowledge, resources,
etc. There is no single rationality that can be used in all
situations.

Similar ideas have been argued by I.J. Good, H.A. Simon, and some others.

In the context of AGI, AIXI is an important model of rationality, but
not the only one. At least there are NARS and OSCAR, which are based
on different assumptions about the system and its environment. Being
impractical is not the only problem of AIXI. As soon as one of its
assumptions (infinite resources is only one of them) is dropped, its
conclusions become inapplicable.

Some people think in theory we should accept unrealistic
assumptions, like infinite resources, since they lead to rigorous
models; then, in implementation, the realistic restrictions (on
resources etc.) can be introduced, which lead to approximations of the
idealized model. What they fail to see is that when a new restriction
is added, it may change the problem to the extent that the ideal
theory becomes mostly irrelevant. To me, it is much better to start
with more realistic assumptions in the first place, even though it
will make the problem harder to solve.

Pei

On Nov 13, 2007 10:40 PM, Stefan Pernar [EMAIL PROTECTED] wrote:
 Would be great if people could poke the following with their metaphorical
 sticks:


 Imagine two agents A(i), each with a utility function F(i), a capability
 level C(i), and no knowledge of the other agent's F and C values. Both
 agents are given equal resources and are tasked with devising the most
 efficient and effective way to maximize their respective utility with said
 resources.

 Scenario 1: Both agents have fairly similar utility functions F(1) = F(2),
 level of knowledge, cognitive complexity, experience - in short capability
 C(1) = C(2) - and a high level of mutual trust T(1-2) = T(2-1) = 1. They
 will quickly agree on the way forward, pool their resources and execute
 their joint plan. Rather boring.

 Scenario 2: Again we assume F(1) = F(2), however C(1) > C(2) - again T(1-2)
 = T(2-1) = 1. The more capable agent will devise a plan, the less capable
 agent will provide its resources and execute the plan, which it trusts. A bit
 more interesting.

 Scenario 3: F(1) = F(2), C(1) > C(2) but this time T(1-2) = 1 and T(2-1) =
 0.5, meaning the less powerful agent assumes with a probability of 50% that
 A(1) is in fact a self-serving optimizer whose differing plan will turn
 out to be detrimental to A(2), while A(1) is certain that this is all just
 one big misunderstanding. The optimal plan devised under scenario 2 will now
 face opposition by A(2), although it would be in A(2)'s best interest to
 actually support it with its resources to maximize F(2), while A(1) will see
 A(2)'s objection as being detrimental to maximizing their shared utility
 function. Fairly interesting: based on lack of trust and differences in
 capability, each agent perceives the other agent's plan as being irrational
 from their respective points of view.

 Under scenario 3, both agents now have a variety of strategies at their
 disposal:
 1. deny pooling of part or all of one's resources = "If we do not do it my
 way you can do it alone."
 2. use resources to sabotage the other agent's plan = "I must stop him with
 these crazy ideas!"
 3. deceive the other agent in order to skew how the other agent is deploying
 strategies 1 and 2
 4. spend resources to explain the plan to the other agent = "Ok - let's help
 him see the light"
 5. spend resources on self-improvement to understand the other agent's plan
 better = "Let's have a closer look, the plan might not be so bad after all"
 6. strike a compromise to ensure a higher level of pooled resources = "If we
 don't compromise we both lose out."

 Number 1 is a given under scenario 3. Number 2 is risky, particularly as it
 would cause a further reduction in trust on both sides if this strategy gets
 deployed and the other party finds out; similarly with number 3.
 Number 4 seems like the way to go but may not always work, particularly with
 large differences in C(i) among the agents. Number 5 is a likely strategy
 with a fairly high level of trust. Most likely, however, is strategy 6.

 Striking a compromise builds trust in repeated encounters and thus
 promises less objection, and thus a higher total payoff, the next time around.

 Assuming the existence of an arguably optimal path leading to a maximally
 possible satisfaction of a given utility function, anything else would be
 irrational. Actually such a maximally intelligent algorithm exists in the
 form of Hutter's universal algorithmic agent AIXI. The only problem, however,
 is that the execution of said algorithm requires infinite resources and
 is thus rather impractical as every 
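
A minimal sketch of the payoff structure in the scenarios above (illustrative only, not code from Stefan's model; the specific numbers, and the assumption that A(2) gets nothing back from a self-serving A(1), are made-up assumptions):

def expected_payoff_of_pooling(c1, trust_2_1, resources=1.0):
    """Expected utility to A(2) from handing its resources to A(1)'s plan."""
    payoff_if_benevolent = c1 * resources  # joint plan runs at A(1)'s capability
    payoff_if_selfish = 0.0                # assumed: A(2) gets nothing back
    return trust_2_1 * payoff_if_benevolent + (1 - trust_2_1) * payoff_if_selfish

def payoff_of_going_alone(c2, resources=1.0):
    """Utility A(2) can generate with its own resources and capability."""
    return c2 * resources

# Scenario 2: full trust, C(1) > C(2) -> pooling clearly wins for A(2).
print(expected_payoff_of_pooling(c1=2.0, trust_2_1=1.0))  # 2.0
print(payoff_of_going_alone(c2=1.0))                       # 1.0

# Scenario 3: trust drops to 0.5 -> pooling no longer beats going it alone,
# even though (by assumption) A(1) really is benevolent.
print(expected_payoff_of_pooling(c1=2.0, trust_2_1=0.5))  # 1.0

Under these assumptions the perceived irrationality in scenario 3 falls out directly: A(2)'s expected value of cooperating no longer exceeds that of withholding its resources, so its opposition is locally rational while looking irrational to A(1).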

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
I think that there are two basic directions to better the Novamente
architecture:
- the one Mark talks about
- more integration of MOSES with PLN and RL theory

On 11/13/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 Response to Mark Waser's Mon 11/12/2007 2:42 PM post.



 MARK: Remember that the brain is *massively* parallel.  Novamente and
 any other linear (or minorly-parallel) system is *not* going to work in
 the same fashion as the brain.  Novamente can be parallelized to some
 degree but *not* to anywhere near the same degree as the brain.  I love
 your speculation and agree with it -- but it doesn't match near-term
 reality.  We aren't going to have brain-equivalent parallelism anytime in
 the near future.



 ED: I think in five to ten years there could be computers capable of
 providing every bit as much parallelism as the brain at prices that will
 allow thousands or hundreds of thousands of them to be sold.



 But it is not going to happen overnight.  Until then the lack of
 brain-level hardware is going to limit AGI. But there are still a lot of
 high-value systems that could be built on, say, $100K to $10M of hardware.



 You claim we really need experience with computing and controlling
 activation over large atom tables.  I would argue that obtaining such
 experience should be a top priority for government funders.



 MARK: The node/link architecture is very generic and can be used for
 virtually anything.  There is no rational way to attack it.  It is, I
 believe, going to be the foundation for any system since any system can
 easily be translated into it.  Attacking the node/link architecture is
 like attacking assembly language or machine code.  Now -- are you going to
 write your AGI in assembly language?  If you're still at the level of
 arguing node/link, we're not communicating well.



 ED: Nodes and links are what patterns are made of, and each static
 pattern can have an identifying node associated with it as well as the
 nodes and links representing its sub-patterns, elements, the compositions
 of which it is part, its associations, etc.  The system automatically
 organizes patterns into a gen/comp hierarchy.  So, I am not just dealing at
 a node and link level, but they are the basic building blocks.
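
A minimal illustrative sketch of the node/link idea just described (hypothetical, not Ed Porter's design or Novamente code): every pattern gets an identifying node, with composition links to its sub-patterns and generalization links upward, so the store forms a gen/comp hierarchy.

from collections import defaultdict

class PatternStore:
    def __init__(self):
        # (link_type, source) -> set of target pattern nodes
        self.links = defaultdict(set)

    def add_link(self, link_type, source, target):
        self.links[(link_type, source)].add(target)

    def parts_of(self, pattern):
        return self.links[("composition", pattern)]

    def generalizations_of(self, pattern):
        return self.links[("generalization", pattern)]

store = PatternStore()
store.add_link("composition", "face", "eye")
store.add_link("composition", "face", "mouth")
store.add_link("generalization", "face", "visual-pattern")
print(store.parts_of("face"))            # {'eye', 'mouth'} (order arbitrary)
print(store.generalizations_of("face"))  # {'visual-pattern'}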





 MARK: ... I *AM* saying that the necessity of using probabilistic
 reasoning for day-to-day decision-making is vastly over-rated and has been
 a horrendous side-road for many/most projects because they are attempting
 to do it in situations where it is NOT appropriate.  The increased,
 almost ubiquitous adoption of probabilistic methods is the herd
 mentality in action (not to mention the fact that it is directly
 orthogonal to work thirty years older).  Most of the time, most projects
 are using probabilistic methods to calculate a tenth decimal place of a
 truth value when their data isn't even sufficient for one.  If you've got
 a heavy-duty discovery system, probabilistic methods are ideal.  If you're
 trying to derive probabilities from a small number of English statements
 (like "this raven is white" and "most ravens are black"), you're seriously
 on the wrong track.  If you go on and on about how humans don't understand
 Bayesian reasoning, you're both correct and clueless in not recognizing
 that your very statement points out how little Bayesian reasoning has to
 do with most general intelligence.  Note, however, that I *do* believe
 that probabilistic methods *are* going to be critically important for
 activation for attention, etc.



 ED: I agree that many approaches accord too much importance to the
 numerical accuracy and Bayesian purity of their approach, and not enough
 importance to the justification for the Bayesian formulations they use.
 I know of one case where I suggested using information that would almost
 certainly have improved a perception process and the suggestion was
 refused because it would not fit within the system's probabilistic
 framework.   At an AAAI conference in 1997 I talked to a programmer for a
 big defense contractor who said he was a fan of fuzzy logic systems; that
 they were so much simpler to get up and running because you didn't have
 to worry about probabilistic purity.  He said his group that used fuzzy
 logic was getting things out the door that worked faster than the more
 probability-limited competition.  So obviously there is something to be said
 for not letting probabilistic purity get in the way of more reasonable
 approaches.



 But I still think probabilities are darn important. Even your "this raven
 is white" and "most ravens are black" example involves notions of
 probability.  We attribute probabilities to such statements based on
 experience with the source of such statements or similar sources of
 information, and the concept "most" is a probabilistic one.  The reason we
 humans are so good at reasoning from small data is based on our ability to
 estimate rough probabilities from similar or generic patterns.
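
A toy sketch of that last point (hypothetical, not from any project discussed here; the quantifier priors and the reliability discount are made-up assumptions): "most" already carries a rough probability, and source reliability can be folded in without any high-precision machinery.

QUANTIFIER_PRIORS = {"most": 0.8, "some": 0.3, "few": 0.1}

def rough_probability(quantifier, source_reliability):
    """P(statement holds for a random instance), discounted by trust in the source."""
    base = QUANTIFIER_PRIORS[quantifier]
    # An unreliable source shrinks the estimate toward total ignorance (0.5).
    return source_reliability * base + (1 - source_reliability) * 0.5

# "Most ravens are black", heard from a fairly reliable source:
print(round(rough_probability("most", source_reliability=0.9), 2))  # 0.77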



 MARK: The 

RE: [agi] What best evidence for fast AI?

2007-11-14 Thread Edward W. Porter
Lukasz,

Which of the multiple issues that Mark listed is one of the two basic
directions you were referring to?

Ed Porter

-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 14, 2007 9:15 AM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?


I think that there are two basic directions to better the Novamente
architecture:
- the one Mark talks about
- more integration of MOSES with PLN and RL theory

On 11/13/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 Response to Mark Waser's Mon 11/12/2007 2:42 PM post.



 MARK: Remember that the brain is *massively* parallel.  Novamente and
 any other linear (or minorly-parallel) system is *not* going to work
 in the same fashion as the brain.  Novamente can be parallelized to
 some degree but *not* to anywhere near the same degree as the brain.
 I love your speculation and agree with it -- but it doesn't match
 near-term reality.  We aren't going to have brain-equivalent
 parallelism anytime in the near future.



 ED: I think in five to ten years there could be computers capable of
 providing every bit as much parallelism as the brain at prices that
 will allow thousands or hundreds of thousands of them to be sold.



 But it is not going to happen overnight.  Until then the lack of
 brain-level hardware is going to limit AGI. But there are still a lot of
 high-value systems that could be built on, say, $100K to $10M of
 hardware.



 You claim we really need experience with computing and controlling
 activation over large atom tables.  I would argue that obtaining such
 experience should be a top priority for government funders.



 MARK: The node/link architecture is very generic and can be used for
 virtually anything.  There is no rational way to attack it.  It is, I
 believe, going to be the foundation for any system since any system
 can easily be translated into it.  Attacking the node/link
 architecture is like attacking assembly language or machine code.  Now
 -- are you going to write your AGI in assembly language?  If you're
 still at the level of arguing node/link, we're not communicating well.



 ED: Nodes and links are what patterns are made of, and each static
 pattern can have an identifying node associated with it as well as the
 nodes and links representing its sub-patterns, elements, the
 compositions of which it is part, its associations, etc.  The system
 automatically organizes patterns into a gen/comp hierarchy.  So, I am
 not just dealing at a node and link level, but they are the basic
 building blocks.





 MARK: ... I *AM* saying that the necessity of using probabilistic
 reasoning for day-to-day decision-making is vastly over-rated and has
 been a horrendous side-road for many/most projects because they are
 attempting to do it in situations where it is NOT appropriate.  The
 increased, almost ubiquitous adoption of probabilistic methods is
 the herd mentality in action (not to mention the fact that it is
 directly orthogonal to work thirty years older).  Most of the time,
 most projects are using probabilistic methods to calculate a tenth
 decimal place of a truth value when their data isn't even sufficient
 for one.  If you've got a heavy-duty discovery system, probabilistic
 methods are ideal.  If you're trying to derive probabilities from a
 small number of English statements (like "this raven is white" and
 "most ravens are black"), you're seriously on the wrong track.  If you
 go on and on about how humans don't understand Bayesian reasoning,
 you're both correct and clueless in not recognizing that your very
 statement points out how little Bayesian reasoning has to do with most
 general intelligence.  Note, however, that I *do* believe that
 probabilistic methods *are* going to be critically important for
 activation for attention, etc.



 ED: I agree that many approaches accord too much importance to the
 numerical accuracy and Bayesian purity of their approach, and not
 enough importance to the justification for the Bayesian formulations
 they use. I know of one case where I suggested using information that
 would almost certainly have improved a perception process and the
 suggestion was refused because it would not fit within the system's
 probabilistic framework.  At an AAAI conference in 1997 I talked to a
 programmer for a big defense contractor who said he was a fan of fuzzy
 logic systems; that they were so much simpler to get up and running
 because you didn't have to worry about probabilistic purity.  He said
 his group that used fuzzy logic was getting things out the door that
 worked faster than the more probability-limited competition.  So
 obviously there is something to be said for not letting probabilistic
 purity get in the way of more reasonable approaches.



 But I still think probabilities are darn important. Even your "this
 raven is white" and "most ravens are black" example involves notions
 of probability.  We attribute 

Re: [agi] Relativistic irrationalism

2007-11-14 Thread Stefan Pernar
Pei,

many thanks for your comments. Good input on rationality and AIXI.

Kind regards,
Stefan

On Nov 14, 2007 10:13 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Stefan,

 Though I agree with most of your analysis of inter-agent relationships,
 I don't share your conception of rationality.

 To me, rationality itself is relativistic, that is, what
 behavior/action is rational is always judged according to the
 assumptions and postulations about a system's goals, knowledge, resources,
 etc. There is no single rationality that can be used in all
 situations.

 Similar ideas have been argued by I.J. Good, H.A. Simon, and some others.

 In the context of AGI, AIXI is an important model of rationality, but
 not the only one. At least there are NARS and OSCAR, which are based
 on different assumptions about the system and its environment. Being
 impractical is not the only problem of AIXI. As soon as one of its
 assumptions (infinite resources is only one of them) is dropped, its
 conclusions become inapplicable.

 Some people think in theory we should accept unrealistic
 assumptions, like infinite resources, since they lead to rigorous
 models; then, in implementation, the realistic restrictions (on
 resources etc.) can be introduced, which lead to approximations of the
 idealized model. What they fail to see is that when a new restriction
 is added, it may change the problem to the extent that the ideal
 theory becomes mostly irrelevant. To me, it is much better to start
 with more realistic assumptions in the first place, even though it
 will make the problem harder to solve.

 Pei

 On Nov 13, 2007 10:40 PM, Stefan Pernar [EMAIL PROTECTED] wrote:
  Would be great if people could poke the following with their
 metaphorical
  sticks:
 
 
  Imagine two agents A(i), each with a utility function F(i), a capability
  level C(i), and no knowledge of the other agent's F and C values. Both
  agents are given equal resources and are tasked with devising the most
  efficient and effective way to maximize their respective utility with
  said resources.
 
  Scenario 1: Both agents have fairly similar utility functions F(1) = F(2),
  level of knowledge, cognitive complexity, experience - in short capability
  C(1) = C(2) - and a high level of mutual trust T(1-2) = T(2-1) = 1. They
  will quickly agree on the way forward, pool their resources and execute
  their joint plan. Rather boring.
 
  Scenario 2: Again we assume F(1) = F(2), however C(1) > C(2) - again
  T(1-2) = T(2-1) = 1. The more capable agent will devise a plan, the less
  capable agent will provide its resources and execute the plan, which it
  trusts. A bit more interesting.
 
  Scenario 3: F(1) = F(2), C(1) > C(2) but this time T(1-2) = 1 and
  T(2-1) = 0.5, meaning the less powerful agent assumes with a probability
  of 50% that A(1) is in fact a self-serving optimizer whose differing plan
  will turn out to be detrimental to A(2), while A(1) is certain that this
  is all just one big misunderstanding. The optimal plan devised under
  scenario 2 will now face opposition by A(2), although it would be in
  A(2)'s best interest to actually support it with its resources to
  maximize F(2), while A(1) will see A(2)'s objection as being detrimental
  to maximizing their shared utility function. Fairly interesting: based on
  lack of trust and differences in capability, each agent perceives the
  other agent's plan as being irrational from their respective points of
  view.
 
  Under scenario 3, both agents now have a variety of strategies at their
  disposal:
  1. deny pooling of part or all of one's resources = "If we do not do it
  my way you can do it alone."
  2. use resources to sabotage the other agent's plan = "I must stop him
  with these crazy ideas!"
  3. deceive the other agent in order to skew how the other agent is
  deploying strategies 1 and 2
  4. spend resources to explain the plan to the other agent = "Ok - let's
  help him see the light"
  5. spend resources on self-improvement to understand the other agent's
  plan better = "Let's have a closer look, the plan might not be so bad
  after all"
  6. strike a compromise to ensure a higher level of pooled resources =
  "If we don't compromise we both lose out."
 
  Number 1 is a given under scenario 3. Number 2 is risky, particularly as
  it would cause a further reduction in trust on both sides if this
  strategy gets deployed and the other party finds out; similarly with
  number 3. Number 4 seems like the way to go but may not always work,
  particularly with large differences in C(i) among the agents. Number 5
  is a likely strategy with a fairly high level of trust. Most likely,
  however, is strategy 6.
 
  Striking a compromise builds trust in repeated encounters and thus
  promises less objection, and thus a higher total payoff, the next time
  around.

  Assuming the existence of an arguably optimal path leading to a
  maximally possible satisfaction of a given utility function, anything
  else would be

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
On Nov 14, 2007 3:48 PM, Edward W. Porter [EMAIL PROTECTED] wrote:
 Lukasz,

 Which of the multiple issues that Mark listed is one of the two basic
 directions you were referring to?

 Ed Porter

(First of all, I'm sorry for attaching my general remark as a reply: I
was writing from a cell-phone which limited navigation.)

I think that it would be a more fleshed-out knowledge representation
(but without limiting the representation-building flexibility of
Novamente).

 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 14, 2007 9:15 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] What best evidence for fast AI?


 I think that there are two basic directions to better the Novamente
 architecture:
 - the one Mark talks about
 - more integration of MOSES with PLN and RL theory




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Linas Vepstas wrote:

On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a 
representation system that uses probability or likelihood numbers to 
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
is supposed to express the idea that the statement [I like cats] is in 
some sense 75% true.


Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
ungrounded because we have to interpret it.  Does it mean that I like 
cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
Are the cats that I like always the same ones, or is the chance of an 
individual cat being liked by me something that changes?  Does it mean 
that I like all cats, but only 75% as much as I like my human family, 
which I like(p=1.0)?  And so on and so on.


Eh?

You are standing at the proverbial office water cooler, and Aneesh
says "Wen likes cats." On your drive home, your mind races ... does this
mean that Wen is a cat fancier?  You were planning on taking Wen out
on a date, and this tidbit of information could be useful ...

when you try to build the entire grounding mechanism(s) you are forced 
to become explicit about what these numbers mean, during the process of 
building a grounding system that you can trust to be doing its job:  you 
cannot create a mechanism that you *know* is constructing sensible p 
numbers and facts during all of its development *unless* you finally 
bite the bullet and say what the p numbers really mean, in fully cashed 
out terms.


But as a human, asking Wen out on a date, I don't really know what
"Wen likes cats" ever really meant. It neither prevents me from talking
to Wen, nor from telling my best buddy that ...well, I know, for
instance, that she likes cats...

Lack of grounding is what makes humour funny; you can do a whole
Pygmalion / Seinfeld episode on "she likes cats."


No:  the real concept of lack of grounding is nothing so simple as the 
way you are using the word grounding.


Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck, 
I have actually tried to do that in the past:  didn't do any good).




Richard Loosemore
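
As an illustration of the interpretation problem discussed above (a hypothetical sketch, not anyone's actual representation): three readings of [I like cats](p=0.75) that agree on the number 0.75 but diverge on a concrete question such as repeated encounters with one particular cat.

import random

def repeated_encounters(reading, p, n, rng):
    """Answers to "will I enjoy petting this same cat?" over n encounters."""
    if reading == "per-cat":      # I like 75% of all cats: fixed per cat
        liked = rng.random() < p
        return [liked] * n
    if reading == "per-moment":   # I like cats 75% of the time: varies by moment
        return [rng.random() < p for _ in range(n)]
    if reading == "intensity":    # I like every cat, but only at strength 0.75
        return [True] * n

rng = random.Random(1)
for reading in ("per-cat", "per-moment", "intensity"):
    answers = repeated_encounters(reading, 0.75, 20, rng)
    print(f"{reading:10s} -> yes on {sum(answers)}/20 encounters")

All three readings are consistent with the single number 0.75; only a grounding mechanism that cashes the number out fixes which behaviour the system actually exhibits.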



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-14 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control AGIs goal system.

You can only control the goal system of the first iteration.

...and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)
You can program the first AGI to program the second AGI to be friendly. You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.
This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.


I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.


I have done so, as many people on this list will remember.  The response 
was deeply irrational.




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi,




 No:  the real concept of lack of grounding is nothing so simple as the
 way you are using the word grounding.

 Lack of grounding makes an AGI fall flat on its face and not work.

 I can't summarize the grounding literature in one post.  (Though, heck,
 I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and
generally
found it disappointingly lacking in useful content... though I do agree with
the basic point that non-linguistic grounding is extremely helpful for
effective
manipulation of linguistic entities...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:



On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:



Ben,

Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to force the system to do its own grounding, and hence its own
interpretation.



I don't see why you talk about forcing the system to do its own 
grounding --

the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guess will fulfill its goals.  Its goals are ultimately 
grounded in in-built
feeling-evaluation routines, measuring stuff like amount of novelty 
observed,

amount of food in system etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54". These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.

 


What you gave below was a sketch of some more elaborate 'qualifier'
mechanisms.  But I described the process of generating more and more
elaborate qualifier mechanisms in the body of the essay, and said why
this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it 
gathered
via its perceived experience -- why do you think it has a problem? 


I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional 
probability

is a rat's nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional 
probabilities

based on the system's own knowledge-base, i.e.

Inheritance A B .8

means

If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does
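
One possible reading of that gloss as a computation (an illustrative sketch, not actual PLN or Novamente code; the example relationships and weights are invented):

def inheritance_strength(rels_a, rels_b):
    """Weighted fraction of B's relationships that A also has."""
    total = sum(rels_b.values())
    if total == 0:
        return 0.0
    shared = sum(w for rel, w in rels_b.items() if rel in rels_a)
    return shared / total

cat = {"is-animal": 1.0, "has-fur": 0.8, "meows": 0.6, "hunts-mice": 0.4}
pet = {"is-animal": 1.0, "has-fur": 0.8, "lives-with-humans": 0.7}

# "Inheritance cat pet <s>": how much of pet's relationship profile cat shares.
print(round(inheritance_strength(cat, pet), 2))  # 0.72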

But apparently you were making some other point, which I did not grok, 
sorry...


Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you 
seemed

to be assuming in your post.


You are, in essence, using one of the trivial versions of what symbol 
grounding is all about.


The complaint is not "your symbols are not connected to experience."
Everyone and their mother has an AI system that could be connected to
real world input.  The simple act of connecting to the real world is NOT
the core problem.


If you have an AGI system in which the system itself is allowed to do 
all the work of building AND interpreting all of its symbols, I don't 
have any issues with it.


Where I do have an issue is with a system which is supposed to be doing 
the above experiential pickup, and where the symbols are ALSO supposed 
to be interpretable by human programmers who are looking at things like 
probability values attached to facts.  When a programmer looks at a 
situation like


 ContextLink .7,.8
  home
  InheritanceLink Bob_Yifu friend

... and then follows this with a comment like:

 which suggests that Bob is less friendly at home than
 in general.

... they have interpreted the meaning of that statement using their 
human knowledge.


So here I am, looking at this situation, and I see:

  - AGI system interpretation (implicit in system use of it)
  - Human programmer interpretation

and I ask myself which one of these is the real interpretation?

It matters, because they do not necessarily match up.  The human 
programmer's interpretation has a massive impact on the system because
all the inference and other mechanisms are built around the assumption 
that the probabilities mean a certain set of things.  You manipulate 
those p values, and your manipulations are based on assumptions about 
what they mean.


But if the system is allowed to pick up its own knowledge from the 
environment, the implicit meaning of those p values will not 
necessarily match the human interpretation.  As I say, the meaning is 
then implicit in the way the system *uses* those p values (and other stuff).


It is a nontrivial question to ask whether the implicit system 
interpretation does indeed match the human interpretation built into the
inference 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:

Hi,
 




No:  the real concept of lack of grounding is nothing so simple as the
way you are using the word grounding.

Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck,
I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and 
generally
found it disappointingly lacking in useful content... though I do agree 
with
the basic point that non-linguistic grounding is extremely helpful for 
effective

manipulation of linguistic entities...


Ben,

As you will recall, Harnad himself got frustrated with the many people 
who took the term symbol grounding and trivialized or distorted it in 
various ways.  One of the reasons the grounding literature is such a 
waste of time (and you are right:  it is) is that so many people talked 
so much nonsense about it.


As far as I am concerned, your use of it is one of those trivial senses 
that Harnad complained of.  (Essentially, if the system uses world input 
IN ANY WAY during the building of its symbols, then the system is grounded).


The effort I put into that essay yesterday will have been completely 
wasted if your plan is to stick to that interpretation and not discuss 
the deeper issue that I raised.


I really have no energy for pursuing yet another discussion about symbol 
grounding.


Sorry:  don't mean to blow you off, but you and I both have better 
things to do, and I foresee a big waste of time ahead if we pursue it.



So let's just drop it?



Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard,



 So here I am, looking at this situation, and I see:

   - AGI system interpretation (implicit in system use of it)
   - Human programmer interpretation

 and I ask myself which one of these is the real interpretation?

 It matters, because they do not necessarily match up.


That is true, but in some cases they may approximate each other well..

In others, not...

This happens to be a pretty simple case, so the odds of a good
approximation seem high.



  The human
 programmer's interpretation has a massive impact on the system because
 all the inference and other mechanisms are built around the assumption
 that the probabilities mean a certain set of things.  You manipulate
 those p values, and your manipulations are based on assumptions about
 what they mean.



Well, the PLN inference engine's treatment of

ContextLink
home
InheritanceLink Bob_Yifu friend

is in no way tied to whether the system's implicit interpretation of the
ideas of home or friend are humanly natural, or humanly comprehensible.

The same inference rules will be applied to cases like

ContextLink
Node_66655
InheritanceLink Bob_Yifu Node_544

where the concepts involved have no humanly-comprehensible label.

It is true that the interpretation of ContextLink and InheritanceLink is
fixed
by the wiring of the system, in a general way (but what kinds of properties
are referred to by them may vary in a way dynamically determined by the
system).


 In order to completely ground the system, you need to let the system
 build its own symbols, yes, but that is only half the story:  if you
 still have a large component of the system that follows a
 programmer-imposed interpretation of things like probability values
 attached to facts, you have TWO sets of symbol-using mechanisms going
 on, and the system is not properly grounded (it is using both grounded
 and ungrounded symbols within one mechanism).



I don't think the system needs to learn its own probabilistic reasoning
rules
in order to be an AGI.  This, to me, is too much like requiring that a brain
needs
to learn its own methods for modulating the conductances of the bundles of
synapses linking between the neurons in cell assembly A and cell assembly B.

I don't see a problem with the AGI system having hard-wired probabilistic
inference rules, and hard-wired interpretations of probabilistic link
types.  But
the interpretation of any **particular** probabilistic relationship inside
the system is relative
to the concepts and the empirical and conceptual relationships that the
system
has learned.

You may think that the brain learns its own uncertain inference rules based
on a
lower-level infrastructure that operates in terms entirely unconnected from
ideas
like uncertainty and inference.  I think this is wrong.  I think the brain's
uncertain
inference rules are the result, on the cell assembly level, of Hebbian
learning and
related effects on the neuron/synapse level.  So I think the brain's basic
uncertain
inference rules are wired-in, just as Novamente's are, though of course
using
a radically different infrastructure.

Ultimately an AGI system needs to learn its own reasoning rules and
radically
modify and improve itself, if it's going to become strongly superhuman!  But
that is
not where we need to start...

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Tuesday 13 November 2007 09:11, Richard Loosemore wrote:

This is the whole brain emulation approach, I guess (my previous
comments were about evolution of brains rather than neural level
duplication).


Ah, you are right. But this too is an interesting topic. I think that
the orders of magnitude for whole brain emulation, connectome, and
similar evolutionary methods are roughly the same, but I haven't done
any calculations.



It seems quite possible that what we need is a detailed map of every
synapse, exact layout of dendritic tree structures, detailed
knowledge of the dynamics of these things (they change rapidly) AND
wiring between every single neuron.


Hm. It would seem that we could have some groups focusing on neurons, 
another on types of neurons, another on dendritic tree structures, some 
more on the abstractions of dendritic trees, etc. in an up-*and*-down 
propagation hierarchy so that the abstract processes of the brain are 
studied just as well as the in-betweens of brain architecture.


I was really thinking of the data collection problem:  we cannot take 
one brain and get full information about all those things, down to a 
sufficient level of detail.  I do not see such a technology even over 
the horizon (short of full-blown nanotechnology) that can deliver that.
We can get different information from different individual brains (all 
of them dead), but combining that would not necessarily be meaningful: 
all brains are different.




I think that if they did the whole project at that level of detail it
would amount to a possibly interesting hint at some of the wiring, of
peripheral interest to people doing work at the cognitive system
level. But that is all.


You see no more possible value of such a project?


Well, I think that it will have more value one day, but at such a late 
stage in the history of cognitive system building that it will 
essentially just be a mopping up operation.


In other words, we will have to do so much work at the cognitive level 
to be able to make sense of the wiring diagrams, that by that stage we 
will be able to generate our own systems.




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed some 
vital posts - I have yet to get the slightest inkling of how you yourself 
propose to do this. 





Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 RL:In order to completely ground the system, you need to let the system
 build its own symbols



Correct.  Novamente is designed to be able to build its own symbols.

What is built in are mechanisms for building symbols, and for
probabilistically interrelating symbols once created...

ben g



 V. much agree with your whole argument. But -  I may well have missed
 some
 vital posts - I have yet to get the slightest inkling of how you yourself
 propose to do this.





Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:55, Richard Loosemore wrote:
 I was really thinking of the data collection problem:  we cannot take
 one brain and get full information about all those things, down to a
 sufficient level of detail.  I do not see such a technology even over
 the horizon (short of full-blow nanotechnology) that can deliver
 that. We can get different information from different individual
 brains (all of them dead), but combining that would not necessarily
 be meaningful: all brains are different.

Re: "all brains are different." What about the possibility of cloning
mice and then proceeding to raise them in Skinner boxes with the exact
same environmental conditions, the same stimulation routines, etc.?
Ideally this will give us a baseline mouse that is not only
genetically similar, but also behaviorally similar to some degree. This
would undoubtedly be helpful in this quest.

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
 The complaint is not your symbols are not connected to experience.
 Everyone and their mother has an AI system that could be connected to
 real world input.  The simple act of connecting to the real world is
 NOT the core problem.

Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent to worm tech.

To do the calculations would I just have to check out how many neurons
are in a worm, how many are sensory neurons, and make rough
information-theoretic estimates of the minimum and maximum amounts of
information processing that the worm's sensorium could be doing?

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:

The complaint is not your symbols are not connected to experience.
Everyone and their mother has an AI system that could be connected to
real world input.  The simple act of connecting to the real world is
NOT the core problem.


Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent to worm tech.


To do the calculations would I just have to check out how many neurons
are in a worm, how many are sensory neurons, and make rough
information-theoretic estimates of the minimum and maximum amounts of
information processing that the worm's sensorium could be doing?


I'm not quite sure where this is at ... but the context of this
particular discussion is the notion of 'symbol grounding' raised by
Stevan Harnad.  I am essentially talking about how to solve the problem
he described, and what exactly the problem was.  Hence there is a lot of
background behind this one, which, if you don't know it, might make it
confusing.



Richard Loosemore




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Russell Wallace
On Nov 14, 2007 11:58 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
 Are we sure? How much of the real world are we able to get into our AGI
 models anyway? Bandwidth is limited, much more limited than in humans
 and other animals. In fact, it might be the equivalent to worm tech.

 To do the calculations would I just have to check out how many neurons
 are in a worm, how many are sensory neurons, and make rough
 information-theoretic estimates of the minimum and maximum amounts of
 information processing that the worm's sensorium could be doing?

Pretty much.

Let's take as our reference computer system a bog standard video
camera connected to a high-end PC, which can do something (video
compression, object recognition or whatever) with the input in real
time.

On the worm side, consider the model organism Caenorhabditis elegans,
which has a few hundred neurons.

It turns out that the computer has much more bandwidth. Then again,
while intelligence, unlike bandwidth, isn't a scalar quantity even to a
first approximation, to the extent the two are comparable our best
computer systems do seem to be considerably smarter than C. elegans.
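
A back-of-envelope version of that comparison (the worm-side figures, roughly 60 sensory neurons at a few hundred bits per second each, are loose assumptions rather than measurements):

# Camera side: uncompressed standard-definition colour video.
camera_bits_per_sec = 640 * 480 * 24 * 30   # about 221 Mbit/s

# Worm side (assumed): ~60 sensory neurons, each carrying at most ~500 bit/s.
worm_bits_per_sec = 60 * 500                # 30 kbit/s

print(f"camera input (uncompressed): {camera_bits_per_sec / 1e6:.0f} Mbit/s")
print(f"worm sensorium (assumed):    {worm_bits_per_sec / 1e3:.0f} kbit/s")
print(f"ratio:                       about {camera_bits_per_sec // worm_bits_per_sec}x")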

If we move up to something like a mouse, then the mouse has
intelligence we can't replicate, and also has much more bandwidth than
the computer system. Insects are somewhere in between, enough so that
the comparison (both bandwidth and intelligence) doesn't produce an
obvious answer; it's therefore considered not unreasonable to say
present-day computers are in the ballpark of insect-smart.

Of course that doesn't mean if we took today's software and connected
it to mouse-bandwidth hardware it would become mouse-smart, but
hopefully it means when we have that hardware we'll be able to use it
to develop software that matches some of the things mice can do.

(And it's still my opinion that by accepting - embracing - slowness on
existing hardware we can work on the software at the same time as the
hardware guys are working on their end, parallel rather than serial
development.)



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner


Sounds a little confusing. Sounds like you plan to evolve a system through
testing thousands of candidate mechanisms. So one way or another you too
are taking a view - even if it's an evolutionary, "I'm not taking a view"
view - on, and making a lot of assumptions about


- how systems evolve
- the known architecture of human cognition.

about which science has extremely patchy and confused knowledge. I don't see 
how any system-builder can avoid taking a view of some kind on such matters, 
yet you seem to be criticising Ben for so doing.


I was hoping that you also had some view on how a system 's symbols should 
be grounded, especially since you mention Harnad, who does make vague 
gestures towards the brain's levels of grounding. But you don't indicate any 
such view.


Sounds like you too, pace MW, are hoping for a number of miracles - IOW 
creative ideas - to emerge, and make your system work.


Anyway, you have to give Ben credit for putting a lot of his stuff &
principles out there & on the line. I think anyone who wants to mount a
full-scale assault on him (& why not?) should be prepared to reciprocate.








-

RL:

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say how, 
only to say that there is a difference between two different research 
strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you try 
to have your cake and eat it too:  let the system build its own symbols, 
with attached parameters that 'mean' whatever they end up meaning after 
the symbols have been built, BUT then at the same time insist that some of 
the parameters really do 'mean' things like probabilities or likelihood or 
confidence values.  If the programmer does anything at all to include 
mechanisms that rely on these meanings (these interpretations of what the 
parameters signify) then the programmer has second-guessed what the system 
itself was going to use those things for, and you have a conflict between 
the two.


My strategy is to keep my hands off, not do anything to strictly interpret 
those parameters, and experimentally observe the properties of systems 
that seem loosely consistent with the known architecture of human 
cognition.


I have a parameter, for instance, that seems to be a happiness or
consistency parameter attached to a knowledge-atom.  But beyond roughly
characterising it as such, I do not insert any mechanisms that (implicitly
or explicitly) lock the system into such an interpretation. Instead, I have
a wide variety of different candidate mechanisms that use that parameter, 
and I look at the overall properties of systems that use these different 
candidate mechanisms.  I let the system use the parameter according to the 
dictates of whatever mechanism is in place, but then I just explore the 
consequences (the high level behavior of the system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go so 
far as to test:  they are generated in a semi-automatic way, so there are 
an unlimited number of potential candidates).




Richard Loosemore












Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say 
how, only to say that there is a difference between two different 
research strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you 
try to have your cake and eat it too:  let the system build its own 
symbols, with attached parameters that 'mean' whatever they end up 
meaning after the symbols have been built, BUT then at the same time 
insist that some of the parameters really do 'mean' things like 
probabilities or likelihood or confidence values.  If the programmer 
does anything at all to include mechanisms that rely on these meanings 
(these interpretations of what the parameters signify) then the 
programmer has second-guessed what the system itself was going to use 
those things for, and you have a conflict between the two.


My strategy is to keep my hands off, not do anything to strictly 
interpret those parameters, and experimentally observe the properties of 
systems that seem loosely consistent with the known architecture of 
human cognition.


I have a parameter, for instance, that seems to be a happiness or
consistency parameter attached to a knowledge-atom.  But beyond
roughly characterising it as such, I do not insert any mechanisms that
(implicitly or explicitly) lock the system into such an interpretation.
Instead, I have a wide variety of different candidate mechanisms that 
use that parameter, and I look at the overall properties of systems that 
use these different candidate mechanisms.  I let the system use the 
parameter according to the dictates of whatever mechanism is in place, 
but then I just explore the consequences (the high level behavior of the 
system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go 
so far as to test:  they are generated in a semi-automatic way, so there 
are an unlimited number of potential candidates).




Richard Loosemore
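
A generic sketch of that research strategy (hypothetical, not Richard's actual framework or code): enumerate candidate update rules that use the parameter without fixing what it means, run each in a toy system, and record only high-level behaviour.

import itertools
import random

def make_mechanism(decay, boost):
    """One candidate update rule for the parameter, indexed by two knobs."""
    def update(param, matched):
        return (1 - decay) * param + (boost if matched else 0.0)
    return update

def run_toy_system(update, steps=200, seed=0):
    rng = random.Random(seed)
    param, history = 0.5, []
    for _ in range(steps):
        param = update(param, matched=rng.random() < 0.3)
        history.append(param)
    tail = history[-50:]
    return max(tail) - min(tail)   # a crude high-level observable: end-state spread

candidates = [make_mechanism(d, b)
              for d, b in itertools.product((0.01, 0.1, 0.5), (0.1, 0.5))]
for i, mech in enumerate(candidates):
    print(f"candidate {i}: end-state spread = {run_toy_system(mech):.3f}")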



[agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-14 Thread Jef Allbright
This may be of interest to the group.

http://video.google.com/videoplay?docid=-112735133685472483


This presentation is about a potential shortcut to artificial
intelligence by trading mind-design for world-design using artificial
evolution. Evolutionary algorithms are a pump for turning CPU cycles
into brain designs. With exponentially increasing CPU cycles while our
understanding of intelligence is almost a flat-line, the evolutionary
route to AI is a centerpiece of most Kurzweilian singularity
scenarios. This talk introduces the Polyworld artificial life
simulator as well as results from our ongoing attempt to evolve
artificial intelligence and further the Singularity.

Polyworld is the brain child of Apple Computer Distinguished Scientist
Larry Yaeger, who remains the primary developer of Polyworld:

http://www.beanblossom.in.us/larryy/P...

Speaker: Virgil Griffith
Virgil Griffith is a first year graduate student in Computation and
Neural Systems at the California Institute of Technology. On weekdays
he studies evolution, computational neuroscience, and artificial life.
He did computer security work until his first year of university when
his work got him sued for sedition and espionage. He then decided that
security was probably not the safest field to be in and he turned his life
to science.
Added: November 13, 2007

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Wednesday 14 November 2007 11:55, Richard Loosemore wrote:

I was really thinking of the data collection problem:  we cannot take
one brain and get full information about all those things, down to a
sufficient level of detail.  I do not see such a technology even over
the horizon (short of full-blow nanotechnology) that can deliver
that. We can get different information from different individual
brains (all of them dead), but combining that would not necessarily
be meaningful: all brains are different.


Re: all brains are different. What about the possibilities of cloning 
mice and then proceeding to raise them in Skinner boxes with the exact 
same environmental conditions, the same stimulation routines, etc. ? 
Ideally this will give us a baseline mouse that is not only 
genetically similar, but also behaviorally similar to some degree. This 
would undoubtedly be helpful in this quest.


Well, now that you have suggested this I am sure some neuroscientist will
do it ;-).


But you have to understand that I am a cognitive scientist, with a huge
agenda that involves making good use of what I see as the unexplored
fertile ground between cognitive science and AI, and I think that I
will be able to build an AGI using this approach *long* before the
neuroscientists even get one mouse-brain scan at the neuron level (never
mind the synaptic bouton level)!


So:  yeah, but not necessary.



Richard Loosemore
