Re: [agi] What best evidence for fast AI?

2007-11-18 Thread Harvey Newstrom
On Saturday 10 November 2007 16:51, Robin Hanson wrote:
  At 02:06 PM 11/10/2007, Richard Loosemore wrote:

  Basically, 'traditional' AI people have an almost theological aversion to
  the idea that the task of building an AI might involve having to learn 
(and
  deconstruct!) a vast amount of cognitive science, and then use an
  experimental-science methodology to find the mechanisms that really give
  rise to AI.

  I have to give a lot of weight to the apparent fact that most AI
 researchers have not yet been convinced to accept your favored approach.  
 More persuasive to me are arguments for fast AI based on more widely shared
 premises.

I believe that both Richard and Robin misrepresent the profession when they 
reference traditional AI researchers.  I believe that they are thinking 
only of those researchers who have concluded that AI is possible with today's 
technology without further advances in cognitive science being required.  
Thus, only those who believe in such a thing are included in the group.  I 
think this unfairly excludes the vastly larger number of computer experts, 
cognitive experts, and those with knowledge of both fields, who have studied 
the field of AI and have concluded that such a thing is not possible at this 
time.

Admittedly, most computer scientists and cognitive experts do not agree with 
the approach being discussed above.  Therefore, using Robin's reasoning, I 
would have to give more weight to all these people, instead of assuming that 
they are all wrong, and that the small minority of our favorite AI 
researchers are correct.  Therefore, I use Robin's logic to agree with 
Richard's conclusion!

-- 
Harvey Newstrom
CISSP CISA CISM CIFI NSA-IAM GSEC ISSAP ISSMP ISSPCS IBMCP


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-15 Thread Richard Loosemore

Mike Tintner wrote:


Sounds a little confusing. Sounds like you plan to evolve a system 
through testing thousands of candidate mechanisms. So one way or 
another you too are taking a view - even if it's an evolutionary, "I'm 
not taking a view" view - on, and making a lot of assumptions about


-how systems evolve
-the known architecture of human cognition.


No, I think that because of the paucity of information I gave, you have 
misunderstood slightly.


Everything I mentioned was in the context of an extremely detailed 
framework that tries to include all of the knowledge we have so far 
gleaned by studying human cognition using the methods of cognitive science.


So I am not making assumptions about the architecture of human cognition 
I am using every scrap of experimental data I can.  You can say that 
this is still assuming that the framework is correct, but that is 
nothing compared to the usual assumptions made in AI, where the 
programmer just picks up a grab bag of assorted ideas that are floating 
around in the literature (none of them part of a coherent theory of 
cognition) and starts hacking.


And just because I talk of thousands of candidate mechanisms, that does 
not mean that there is evolution involved:  it just means that even with 
a complete framework for human cognition to start from there are still 
so many questions about the low-level to high-level linkage that a vast 
number of mechanisms have to be explored.



about which science has extremely patchy and confused knowledge. I don't 
see how any system-builder can avoid taking a view of some kind on such 
matters, yet you seem to be criticising Ben for so doing.


Ben does not start from a complete framework for human cognition, nor 
does he feel compelled to stick close to the human model, and my 
criticisms (at least in this instance) are not really about whether or 
not he has such a framework, but about a problem that I can see on his 
horizon.



I was hoping that you also had some view on how a system's symbols 
should be grounded, especially since you mention Harnad, who does make 
vague gestures towards the brain's levels of grounding. But you don't 
indicate any such view.


On the contrary, I explained exactly how they would be grounded:  if the 
system is allowed to build its own symbols *without* me also inserting 
ungrounded (i.e. interpreted, programmer-constructed) symbols and 
messing the system up by forcing it to use both sorts of symbols, then 
ipso facto it is grounded.


It is easy to build a grounded system.  The trick is to make it both 
grounded and intelligent at the same time.  I have one strategy for 
ensuring that it turns out intelligent, and Ben has another.  My 
problem with Ben's strategy is that I believe his attempt to ensure that 
the system is intelligent ends up compromising the groundedness of the 
system.



Sounds like you too, pace MW, are hoping for a number of miracles - IOW 
creative ideas - to emerge, and make your system work.


I don't understand where I implied this.  You have to remember that I am 
doing this within a particular strategy (outlined in my CSP paper). 
When you see me exploring 'thousands' of candidate mechanisms to see how 
one parameter plays a role, this is not waiting for a miracle, it is a 
vital part of the strategy.  A strategy that, I claim, is the only 
viable one.




Anyway, you have to give Ben credit for putting a lot of his stuff & 
principles out there & on the line. I think anyone who wants to mount a 
full-scale assault on him (& why not?) should be prepared to reciprocate.


Nice try, but there are limits to what I can do to expose the details. 
I have not yet worked out how much I should release and how much to 
withhold (I confess, I nearly decided to go completely public a month or 
so ago, but then changed my mind after seeing the dismally poor response 
that even one of the ideas provoked).  Maybe in the near future I will 
write a summary account.


In the mean time, yes, it is a little unfair of me to criticise other 
projects.  But not that unfair.  When a scientist sees a big problem 
with a theory, do you suppose they wait until they have a completely 
worked out alternative before discussing the fact that there is a 
problem with the theory that other people may be praising?  That is not 
the way of science.



Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
  The problem with probability-based conflict resolution is
 that it is a hack to get around insufficient knowledge rather than an
 attempt to figure out how to get more knowledge



 ED This agrees with what I said above about not putting enough
 emphasis on selecting what probabilistic formulas are appropriate.  But it
 doesn't argue against the importance of probabilities.  It argues against
 using them blindly.




 ED  So by "operating with small amounts of data" how small, very
 roughly, are you talking about?  And are you only talking about the active
 goals or sources of activation, that will be small, or are you saying that
 all the computation in the system will only be dealing with a small amount
 of data within, for example, one second of the processing of a human-level
 system operating at human-level speed?



 MARK  I mean like the way humans reason, there is only concentration
 on a small number of objects -- which are only one link away from an
 almost inconceivable number of related things -- and then the brain can
 jump at least three of these links with lightning rapidity.



 ED So this implies you are not arguing against the idea that AGI will
 be dealing with massive data, just that that use will be focused by a
 concentration on a relatively small number of sources of activation at
 once.





 MARK  Ask Ben how much actual work has been done on activation
 control in very large, very sparse atom spaces in Novamente.  He'll tell
 you that it's a project for when he's further along.  I'll insist (as will
 Richard) that if it isn't baked in from the very beginning, you're
 probably going to have to go back to the beginning to repair the lack.



 ED  It is exactly such research I want to see funded.  It strikes me
 as one of the key things we must learn to do well to make powerful AGI.
 But I think even with some fairly dumb activation control systems you
 could get useful results.  Such results would not be at all human-level in
 many ways, but in other ways they could be much more powerful because such
 systems could deal with many more explicit facts and could input and
 output information at a much higher rate than humans.



 For example, what is the equivalent of the activation control (or search)
 algorithm in Google Sets?  They operate over huge data.  I bet the
 algorithm for calculating their search or activation is relatively simple
 (much, much, much less than a PhD thesis), and look what they can do.  So I
 think one path is to come up with applications that can use and reason
 with large data, having roughly world-knowledge-like sparseness (such as
 NL data), and start with relatively simple activation algorithms and
 develop them from the ground up.
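
 Purely to make this concrete, here is the flavor of what I mean by a
 relatively simple activation algorithm (an illustrative Python sketch of
 spreading activation; the graph, weights, decay factor, and threshold are
 all invented for the example, not anyone's actual system):

     # Minimal spreading-activation sketch over a large, sparse node/link
     # graph.  Activation starts from a small focus set and is propagated
     # only a few links deep, so the working set stays small even if the
     # graph itself is huge.
     from collections import defaultdict

     def spread_activation(graph, focus, decay=0.5, depth=3, threshold=0.05):
         """graph: dict mapping node -> list of (neighbor, weight) pairs.
         focus: dict mapping a few seed nodes -> initial activation."""
         activation = defaultdict(float, focus)
         frontier = dict(focus)
         for _ in range(depth):                    # jump at most `depth` links
             next_frontier = defaultdict(float)
             for node, act in frontier.items():
                 for neighbor, weight in graph.get(node, ()):
                     delta = act * weight * decay
                     if delta > threshold:         # prune weak activation
                         next_frontier[neighbor] += delta
             for node, act in next_frontier.items():
                 activation[node] += act
             frontier = next_frontier
         return dict(activation)

     # Toy usage; a real atom space would hold millions of nodes.
     graph = {"raven": [("bird", 0.9), ("black", 0.8)],
              "bird":  [("animal", 0.9), ("wing", 0.7)]}
     print(spread_activation(graph, {"raven": 1.0}))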



 MARK  P.S.  Oh yeah -- if you were public enemy number one, I
 wouldn't bother answering you (and I probably should lay off of the
 fan-boy crap :-).



 ED  Thanks.



 I admit I am impressed with Novamente.  Since it's the best AGI
 architecture I currently know of; since I am impressed with Ben; since I
 believe there is a high probability all the gaps you address could be
 largely fixed within five years with deep funding (which may never come);
 and since I want to get such deep funding for just the type of large
 atom-base work you say is so critical, I think it is important to focus on
 the potential for greatness that Novamente and somewhat similar systems
 have, rather than only think of their current gaps and potential problems.



 But of course, at the same time, we must look for and try to understand
 its gaps and potential problems so that we can remove them.



 Ed Porter




 -Original Message-
 From: Mark Waser [mailto:[EMAIL PROTECTED]
 Sent: Monday, November 12, 2007 2:42 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] What best evidence for fast AI?


  It is NOT clear that Novamente documentation is NOT enabling, or could
 not be made enabling, with, say, one man-year of work.  A strong argument
 could be made both ways.

 I believe that Ben would argue that Novamente documentation is NOT
 enabling even with one man-year of work.  Ben?  There is still way too much
 *research* work to be done.

   But the standard for non-enablement is very arguably weaker than not
 requiring a miracle.  It would be more like not requiring a leap of
 creativity that is outside the normal skill of talented PhDs trained in
 related fields.

  So although your position is reasonable, I hope you understand so is
 that on the other side.


 My meant-to-be-humorous "miracle" phrasing is clearly throwing you.  The
 phrase "not requiring a leap of creativity that is outside the normal
 skill of talented PhDs trained in related fields" works for me.  Novamente
 is *definitely* not there yet.  I'm rather sure that Ben would agree -- as
 in, I'm not on the other side, *you* are on the other side from the
 system's designer.  Again, Ben please feel free to chime in.

  much scaling stuff

 Remember that the brain is *massively* parallel

RE: [agi] What best evidence for fast AI?

2007-11-14 Thread Edward W. Porter
Lukasz,

Which of the multiple issues that Mark listed is one of the two basic
directions you were referring to?

Ed Porter

-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 14, 2007 9:15 AM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?


I think that there are two basic directions to better the Novamente
architecture:
the one Mark talks about
more integration of MOSES with PLN and RL theory

On 11/13/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 Response to Mark Waser's Mon 11/12/2007 2:42 PM post.



 MARK  Remember that the brain is *massively* parallel.  Novamente and
 any other linear (or minorly-parallel) system is *not* going to work
 in the same fashion as the brain.  Novamente can be parallelized to
 some degree but *not* to anywhere near the same degree as the brain.
 I love your speculation and agree with it -- but it doesn't match
 near-term reality.  We aren't going to have brain-equivalent
 parallelism anytime in the near future.



 ED  I think in five to ten years there could be computers capable of
 providing every bit as much parallelism as the brain at prices that
 will allow thousands or hundreds of thousands of them to be sold.



 But it is not going to happen overnight.  Until then the lack of
 brain-level hardware is going to limit AGI.  But there are still a lot of
 high-value systems that could be built on, say, $100K to $10M of
 hardware.



 You claim we really need experience with computing and controlling
 activation over large atom tables.  I would argue that obtaining such
 experience should be a top priority for government funders.



 MARK  The node/link architecture is very generic and can be used for
 virtually anything.  There is no rational way to attack it.  It is, I
 believe, going to be the foundation for any system since any system
 can easily be translated into it.  Attacking the node/link
 architecture is like attacking assembly language or machine code.  Now
 -- are you going to write your AGI in assembly language?  If you're
 still at the level of arguing node/link, we're not communicating well.



 ED  Nodes and links are what patterns are made of, and each static
 pattern can have an identifying node associated with it, as well as the
 nodes and links representing its sub-patterns, elements, the
 compositions of which it is part, its associations, etc.  The system
 automatically organizes patterns into a gen/comp hierarchy.  So, I am
 not just dealing at a node and link level, but they are the basic
 building blocks.
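
 As a rough sketch of the kind of structure I mean (illustrative Python
 only; the field names are my own shorthand, not any particular system's
 schema):

     # Each pattern gets an identifying node; links relate it to its
     # elements (comp), its generalizations (gen), and loose associations.
     from dataclasses import dataclass, field

     @dataclass
     class PatternNode:
         name: str
         elements: list = field(default_factory=list)         # comp: its parts
         generalizations: list = field(default_factory=list)  # gen: abstractions
         associations: list = field(default_factory=list)     # (node, strength)

     # A tiny gen/comp hierarchy: "raven" is composed of feature patterns
     # and generalizes to "bird", which generalizes to "animal".
     animal = PatternNode("animal")
     bird = PatternNode("bird", generalizations=[animal])
     raven = PatternNode("raven",
                         elements=[PatternNode("black"), PatternNode("beak")],
                         generalizations=[bird],
                         associations=[("Poe", 0.4)])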





 MARK ... I *AM* saying that the necessity of using probabilistic
 reasoning for day-to-day decision-making is vastly over-rated and has
 been a horrendous side-road for many/most projects because they are
 attempting to do it in situations where it is NOT appropriate.  The
 increased, almost ubiquitous adoption of probabilistic methods is
 the herd mentality in action (not to mention the fact that it is
 directly orthogonal to work thirty years older).  Most of the time,
 most projects are using probabilistic methods to calculate a tenth
 place decimal of a truth value when their data isn't even sufficient
 for one.  If you've got a heavy-duty discovery system, probabilistic
 methods are ideal.  If you're trying to derive probabilities from a
 small number of English statements (like "this raven is white" and
 "most ravens are black"), you're seriously on the wrong track.  If you
 go on and on about how humans don't understand Bayesian reasoning,
 you're both correct and clueless in not recognizing that your very
 statement points out how little Bayesian reasoning has to do with most
 general intelligence.  Note, however, that I *do* believe that
 probabilistic methods *are* going to be critically important for
 activation for attention, etc.



 ED  I agree that many approaches accord too much importance to the
 numerical accuracy and Bayesian purity of their approach, and not
 enough importance on the justification for the Bayesian formulations
 they use. I know of one case where I suggested using information that
 would almost certainly have improved a perception process and the
 suggestion was refused because it would not fit within the system's
probabilistic
 framework.  At an AAAI conference in 1997 I talked to a programmer for a
 big defense contractor who said he was a fan of fuzzy logic systems:
 that they were so much simpler to get up and running because you
 didn't have to worry about probabilistic purity.  He said his group
 that used fuzzy logic was getting things out the door that worked
 faster than the more probability-limited competition.  So obviously
 there is something to say for not letting probabilistic purity get in
 the way of more reasonable approaches.



 But I still think probabilities are darn important.  Even your "this
 raven is white" and "most ravens are black" example involves notions
 of probability.  We attribute

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
On Nov 14, 2007 3:48 PM, Edward W. Porter [EMAIL PROTECTED] wrote:
 Lukasz,

 Which of the multiple issues that Mark listed is one of the two basic
 directions you were referring to?

 Ed Porter

(First of all, I'm sorry for attaching my general remark as a reply: I
was writing from a cell-phone which limited navigation.)

I think that it would be a more fleshed-out knowledge representation
(but without limiting the representation-building flexibility of
Novamente).

 -Original Message-
 From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 14, 2007 9:15 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] What best evidence for fast AI?


 I think that there are two basic directions to better the Novamente
 architecture:
 the one Mark talks about
 more integration of MOSES with PLN and RL theory




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Linas Vepstas wrote:

On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a 
representation system that uses probability or likelihood numbers to 
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
is supposed to express the idea that the statement [I like cats] is in 
some sense 75% true.


Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
ungrounded because we have to interpret it.  Does it mean that I like 
cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
Are the cats that I like always the same ones, or is the chance of an 
individual cat being liked by me something that changes?  Does it mean 
that I like all cats, but only 75% as much as I like my human family, 
which I like(p=1.0)?  And so on and so on.
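
The ambiguity can be made concrete.  Here are three incompatible 
formalizations that can all be made to output the same number (a toy 
sketch with invented data, purely to show that the number alone does 
not pick out a meaning):

    # Three readings of [I like cats](p=0.75), each a different function.
    # With suitable invented data they all yield 0.75, yet each licenses
    # completely different inferences.

    def fraction_of_time_liked(liked_episodes, total_episodes):
        # Reading 1: I like cats 75% of the time.
        return liked_episodes / total_episodes

    def fraction_of_cats_liked(cats_liked, cats_met):
        # Reading 2: I like 75% of all cats.
        return cats_liked / cats_met

    def degree_of_liking(liking, family_liking):
        # Reading 3: I like cats 75% as much as my human family.
        return liking / family_liking

    assert fraction_of_time_liked(75, 100) == 0.75
    assert fraction_of_cats_liked(3, 4) == 0.75
    assert degree_of_liking(6, 8) == 0.75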


Eh?

You are standing at the proverbial office water cooler, and Aneesh 
says "Wen likes cats."  On your drive home, your mind races ... does this
mean that Wen is a cat fancier?  You were planning on taking Wen out
on a date, and this tidbit of information could be useful ... 

when you try to build the entire grounding mechanism(s) you are forced 
to become explicit about what these numbers mean, during the process of 
building a grounding system that you can trust to be doing its job:  you 
cannot create a mechanism that you *know* is constructing sensible p 
numbers and facts during all of its development *unless* you finally 
bite the bullet and say what the p numbers really mean, in fully cashed 
out terms.


But as a human, asking Wen out on a date, I don't really know what 
"Wen likes cats" ever really meant.  It neither prevents me from talking 
to Wen, nor from telling my best buddy that "...well, I know, for
instance, that she likes cats..."

Lack of grounding is what makes humour funny; you can do a whole 
Pygmalion / Seinfeld episode on "she likes cats".


No:  the real concept of lack of grounding is nothing so simple as the 
way you are using the word grounding.


Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck, 
I have actually tried to do that in the past:  didn't do any good).




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi,




 No:  the real concept of lack of grounding is nothing so simple as the
 way you are using the word grounding.

 Lack of grounding makes an AGI fall flat on its face and not work.

 I can't summarize the grounding literature in one post.  (Though, heck,
 I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and
generally
found it disappointingly lacking in useful content... though I do agree with
the basic point that non-linguistic grounding is extremely helpful for
effective
manipulation of linguistic entities...

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:



On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:



Ben,

Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to force the system to do its own grounding, and hence its own
interpretation.



I don't see why you talk about forcing the system to do its own 
grounding --

the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guess will fulfill its goals.  Its goals are ultimately 
grounded in in-built
feeling-evaluation routines, measuring stuff like amount of novelty 
observed,

amount of food in system etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.

 


What you gave below was a sketch of some more elaborate 'qualifier'
mechanisms.  But I described the process of generating more and more
elaborate qualifier mechanisms in the body of the essay, and said why
this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it 
gathered
via its perceived experience -- why do you think it has a problem? 


I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional 
probability

is a rat's nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional 
probabilities

based on the system's own knowledge-base, i.e.

Inheritance A B .8

means

If A and B were reasoned about a lot, then A would (as measured by a 
weighted average) have 80% of the relationships that B does
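
In rough illustrative Python (my gloss of the idea only, not actual PLN 
code; the relationship weighting here is invented):

    # Inheritance A B <s>: the weighted fraction of B's relationships
    # that A shares.  Relationships are (relation, target) -> weight.

    def inheritance_strength(rels_a, rels_b):
        total = sum(rels_b.values())
        if total == 0:
            return 0.0
        shared = sum(w for rel, w in rels_b.items() if rel in rels_a)
        return shared / total

    rels_b = {("eats", "meat"): 1.0, ("has", "fur"): 1.0,
              ("chases", "mice"): 1.0, ("likes", "water"): 1.0,
              ("has", "whiskers"): 1.0}
    rels_a = {("eats", "meat"): 1.0, ("has", "fur"): 1.0,
              ("chases", "mice"): 1.0, ("has", "whiskers"): 1.0}
    # A shares 4 of B's 5 (equally weighted) relationships:
    print(inheritance_strength(rels_a, rels_b))   # -> 0.8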

But apparently you were making some other point, which I did not grok, 
sorry...


Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you 
seemed

to be assuming in your post.


You are, in essence, using one of the trivial versions of what symbol 
grounding is all about.


The complaint is not "your symbols are not connected to experience." 
Everyone and their mother has an AI system that could be connected to 
real world input.  The simple act of connecting to the real world is NOT 
the core problem.


If you have an AGI system in which the system itself is allowed to do 
all the work of building AND interpreting all of its symbols, I don't 
have any issues with it.


Where I do have an issue is with a system which is supposed to be doing 
the above experiential pickup, and where the symbols are ALSO supposed 
to be interpretable by human programmers who are looking at things like 
probability values attached to facts.  When a programmer looks at a 
situation like


 ContextLink .7,.8
  home
  InheritanceLink Bob_Yifu friend

... and then follows this with a comment like:

 which suggests that Bob is less friendly at home than
 in general.

... they have interpreted the meaning of that statement using their 
human knowledge.


So here I am, looking at this situation, and I see:

   AGI system interpretation (implicit in system use of it)
   Human programmer interpretation

and I ask myself which one of these is the real interpretation?

It matters, because they do not necessarily match up.  The human 
programmer's interpretation has a massive impact on the system because 
all the inference and other mechanisms are built around the assumption 
that the probabilities mean a certain set of things.  You manipulate 
those p values, and your manipulations are based on assumptions about 
what they mean.


But if the system is allowed to pick up its own knowledge from the 
environment, the implicit meaning of those p values will not 
necessarily match the human interpretation.  As I say, the meaning is 
then implicit in the way the system *uses* those p values (and other stuff).


It is a nontrivial question to ask whether the implicit system 
interpretation does indeed match the human interpretation built into the 
inference 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Benjamin Goertzel wrote:

Hi,
 




No:  the real concept of lack of grounding is nothing so simple as the
way you are using the word grounding.

Lack of grounding makes an AGI fall flat on its face and not work.

I can't summarize the grounding literature in one post.  (Though, heck,
I have actually tried to do that in the past:  didn't do any good).



FYI, I have read the symbol-grounding literature (or a lot of it), and 
generally
found it disappointingly lacking in useful content... though I do agree 
with
the basic point that non-linguistic grounding is extremely helpful for 
effective

manipulation of linguistic entities...


Ben,

As you will recall, Harnad himself got frustrated with the many people 
who took the term "symbol grounding" and trivialized or distorted it in 
various ways.  One of the reasons the grounding literature is such a 
waste of time (and you are right:  it is) is that so many people talked 
so much nonsense about it.


As far as I am concerned, your use of it is one of those trivial senses 
that Harnad complained of.  (Essentially, if the system uses world input 
IN ANY WAY during the building of its symbols, then the system is grounded).


The effort I put into that essay yesterday will have been completely 
wasted if your plan is to stick to that interpretation and not discuss 
the deeper issue that I raised.


I really have no energy for pursuing yet another discussion about symbol 
grounding.


Sorry:  don't mean to blow you off, but you and I both have better 
things to do, and I foresee a big waste of time ahead if we pursue it.



So let's just drop it?



Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard,



 So here I am, looking at this situation, and I see:

    AGI system interpretation (implicit in system use of it)
    Human programmer interpretation

 and I ask myself which one of these is the real interpretation?

 It matters, because they do not necessarily match up.


That is true, but in some cases they may approximate each other well..

In others, not...

This happens to be a pretty simple case, so the odds of a good
approximation seem high.



  The human
 programmer's interpretation has a massive impact on the system because
 all the inference and other mechanisms are built around the assumption
 that the probabilities mean a certain set of things.  You manipulate
 those p values, and your manipulations are based on assumptions about
 what they mean.



Well, the PLN inference engine's treatment of

ContextLink
home
InheritanceLink Bob_Yifu friend

is in no way tied to whether the system's implicit interpretation of the
ideas of "home" or "friend" is humanly natural, or humanly comprehensible.

The same inference rules will be applied to cases like

ContextLink
Node_66655
InheritanceLink Bob_Yifu Node_544

where the concepts involved have no humanly-comprehensible label.

It is true that the interpretations of ContextLink and InheritanceLink are
fixed
by the wiring of the system, in a general way (but what kinds of properties
are referred to by them may vary in a way dynamically determined by the
system).
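
To make that concrete (a minimal sketch of the point, not PLN's actual
rule set or truth-value formulas):

    # A context-restricted lookup that manipulates truth values purely
    # structurally: it never inspects what the node labels denote.

    def contextual_inheritance(kb, context, a, b):
        """kb maps (context, a, b) -> (strength, confidence)."""
        return kb.get((context, a, b), (0.0, 0.0))

    kb = {("home", "Bob_Yifu", "friend"): (0.7, 0.8),
          ("Node_66655", "Bob_Yifu", "Node_544"): (0.7, 0.8)}

    # Identical treatment whether the labels are humanly meaningful or not:
    print(contextual_inheritance(kb, "home", "Bob_Yifu", "friend"))
    print(contextual_inheritance(kb, "Node_66655", "Bob_Yifu", "Node_544"))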


 In order to completely ground the system, you need to let the system
 build its own symbols, yes, but that is only half the story:  if you
 still have a large component of the system that follows a
 programmer-imposed interpretation of things like probability values
 attached to facts, you have TWO sets of symbol-using mechanisms going
 on, and the system is not properly grounded (it is using both grounded
 and ungrounded symbols within one mechanism).



I don't think the system needs to learn its own probabilistic reasoning
rules
in order to be an AGI.  This, to me, is too much like requiring that a brain
needs
to learn its own methods for modulating the conductances of the bundles of
synapses linking between the neurons in cell assembly A and cell assembly B.

I don't see a problem with the AGI system having hard-wired probabilistic
inference rules, and hard-wired interpretations of probabilistic link
types.  But
the interpretation of any **particular** probabilistic relationship inside
the system, is relative
to the concepts and the empirical and conceptual relationships that the
system
has learned.

You may think that the brain learns its own uncertain inference rules based
on a
lower-level infrastructure that operates in terms entirely unconnected from
ideas
like uncertainty and inference.  I think this is wrong.  I think the brain's
uncertain
inference rules are the result, on the cell assembly level, of Hebbian
learning and
related effects on the neuron/synapse level.  So I think the brain's basic
uncertain
inference rules are wired-in, just as Novamente's are, though of course
using
a radically different infrastructure.
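
As a cartoon of what I mean (invented numbers, not a model of real
neurons): simple Hebbian co-activation counting at the low level already
behaves, at the higher level, like a wired-in conditional-probability
estimator.

    # Cartoon Hebbian learner: count co-activations of assemblies A and B.
    # The learned "weight" converges toward P(B active | A active), i.e. a
    # primitive, wired-in uncertain inference rule.

    def hebbian_conditional(events):
        """events: list of (a_active, b_active) boolean pairs."""
        a_count = sum(1 for a, b in events if a)
        ab_count = sum(1 for a, b in events if a and b)
        return ab_count / a_count if a_count else 0.0

    events = [(True, True)] * 8 + [(True, False)] * 2 + [(False, True)] * 5
    print(hebbian_conditional(events))   # -> 0.8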

Ultimately an AGI system needs to learn its own reasoning rules and
radically
modify and improve itself, if it's going to become strongly superhuman!  But
that is
not where we need to start...

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Tuesday 13 November 2007 09:11, Richard Loosemore wrote:

This is the whole brain emulation approach, I guess (my previous
comments were about evolution of brains rather than neural level
duplication).


Ah, you are right. But this too is an interesting topic. I think that 
the order of magnitudes for whole brain emulation, connectome, and 
similar evolutionary methods, are roughly the same, but I haven't done 
any calculations.



It seems quite possible that what we need is a detailed map of every
synapse, exact layout of dendritic tree structures, detailed
knowledge of the dynamics of these things (they change rapidly) AND
wiring between every single neuron.


Hm. It would seem that we could have some groups focusing on neurons, 
another on types of neurons, another on dendritic tree structures, some 
more on the abstractions of dendritic trees, etc. in an up-*and*-down 
propagation hierarchy so that the abstract processes of the brain are 
studied just as well as the in-betweens of brain architecture.


I was really thinking of the data collection problem:  we cannot take 
one brain and get full information about all those things, down to a 
sufficient level of detail.  I do not see such a technology even over 
the horizon (short of full-blown nanotechnology) that can deliver that. 
We can get different information from different individual brains (all 
of them dead), but combining that would not necessarily be meaningful: 
all brains are different.




I think that if they did the whole project at that level of detail it
would amount to a possibly interesting hint at some of the wiring, of
peripheral interest to people doing work at the cognitive system
level. But that is all.


You see no more possible value of such a project?


Well, I think that it will have more value one day, but at such a late 
stage in the history of cognitive system building that it will 
essentially just be a mopping up operation.


In other words, we will have to do so much work at the cognitive level 
to be able to make sense of the wiring diagrams, that by that stage we 
will be able to generate our own systems.




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed some 
vital posts - I have yet to get the slightest inkling of how you yourself 
propose to do this. 





Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 RL:In order to completely ground the system, you need to let the system
 build its own symbols



Correct.  Novamente is designed to be able to build its own symbols.

What is built in are mechanisms for building symbols, and for
probabilistically interrelating symbols once created...
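
A toy sketch of the flavor of such a mechanism (invented thresholds, not
Novamente's actual code): when two percepts co-occur often enough, mint a
new, initially unlabeled symbol node linking to both.

    from collections import Counter
    from itertools import combinations

    def build_symbols(percept_episodes, min_cooccurrence=3):
        pair_counts = Counter()
        for episode in percept_episodes:
            for pair in combinations(sorted(episode), 2):
                pair_counts[pair] += 1
        # Each sufficiently frequent pair becomes a new (unlabeled) symbol.
        return {"Node_%d" % i: pair
                for i, (pair, n) in enumerate(pair_counts.items())
                if n >= min_cooccurrence}

    episodes = [{"furry", "purrs"}, {"furry", "purrs"},
                {"furry", "purrs"}, {"furry", "barks"}]
    print(build_symbols(episodes))   # e.g. {'Node_0': ('furry', 'purrs')}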

ben g



 V. much agree with your whole argument. But -  I may well have missed
 some
 vital posts - I have yet to get the slightest inkling of how you yourself
 propose to do this.





Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:55, Richard Loosemore wrote:
 I was really thinking of the data collection problem:  we cannot take
 one brain and get full information about all those things, down to a
 sufficient level of detail.  I do not see such a technology even over
 the horizon (short of full-blown nanotechnology) that can deliver
 that. We can get different information from different individual
 brains (all of them dead), but combining that would not necessarily
 be meaningful: all brains are different.

Re: all brains are different. What about the possibilities of cloning 
mice and then proceeding to raise them in Skinner boxes with the exact 
same environmental conditions, the same stimulation routines, etc. ? 
Ideally this will give us a baseline mouse that is not only 
genetically similar, but also behaviorally similar to some degree. This 
would undoubtedly be helpful in this quest.

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
 The complaint is not your symbols are not connected to experience.
 Everyone and their mother has an AI system that could be connected to
 real world input.  The simple act of connecting to the real world is
 NOT the core problem.

Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent of worm tech.

To do the calculations would I just have to check out how many neurons 
are in a worm, how many sensory neurons, and rough information-theoretic 
estimates of the minimum and maximum amounts of information processing 
that the worm's sensorium could be doing?

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:

The complaint is not your symbols are not connected to experience.
Everyone and their mother has an AI system that could be connected to
real world input.  The simple act of connecting to the real world is
NOT the core problem.


Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent of worm tech.


To do the calculations would I just have to check out how many neurons 
are in a worm, how many sensory neurons, and rough information-theoretic 
estimates of the minimum and maximum amounts of information processing 
that the worm's sensorium could be doing?


I'm not quite sure where this is at ... but the context of this 
particular discussion is the notion of 'symbol grounding' raised by 
Stevan Harnad.  I am essentially talking about how to solve the problem 
he described, and what exactly the problem was.  Hence there is a lot of 
background behind this one, which, if you don't know it, might make it 
confusing.



Richard Loosemore




Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Russell Wallace
On Nov 14, 2007 11:58 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
 Are we sure? How much of the real world are we able to get into our AGI
 models anyway? Bandwidth is limited, much more limited than in humans
 and other animals. In fact, it might be the equivalent of worm tech.

 To do the calculations would I just have to check out how many neurons
 are in a worm, how many sensory neurons, and rough information-theoretic
 estimates of the minimum and maximum amounts of information processing
 that the worm's sensorium could be doing?

Pretty much.

Let's take as our reference computer system a bog-standard video
camera connected to a high-end PC, which can do something (video
compression, object recognition, or whatever) with the input in real
time.

On the worm side, consider the model organism Caenorhabditis elegans,
which has a few hundred neurons.

It turns out that the computer has much more bandwidth.  Then again,
while intelligence, unlike bandwidth, isn't a scalar quantity even to a
first approximation, to the extent they are comparable our best
computer systems do seem to be considerably smarter than C. elegans.
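
Back-of-envelope, with all figures being round-number assumptions:

    # Rough bandwidth comparison: consumer video camera vs. C. elegans.

    camera_bps = 640 * 480 * 24 * 30   # pixels x bit depth x frames/sec
                                       # ~2.2e8 bits/sec, uncompressed

    # C. elegans has 302 neurons; assume ~60 are sensory and generously
    # allow each ~100 bits/sec.
    worm_bps = 60 * 100                # ~6e3 bits/sec

    print(camera_bps / worm_bps)       # camera wins by ~4 orders of magnitude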

If we move up to something like a mouse, then the mouse has
intelligence we can't replicate, and also has much more bandwidth than
the computer system. Insects are somewhere in between, enough so that
the comparison (both bandwidth and intelligence) doesn't produce an
obvious answer; it's therefore considered not unreasonable to say
present-day computers are in the ballpark of insect-smart.

Of course that doesn't mean if we took today's software and connected
it to mouse-bandwidth hardware it would become mouse-smart, but
hopefully it means when we have that hardware we'll be able to use it
to develop software that matches some of the things mice can do.

(And it's still my opinion that by accepting - embracing - slowness on
existing hardware we can work on the software at the same time as the
hardware guys are working on their end, parallel rather than serial
development.)



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner


Sounds a little confusing. Sounds like you plan to evolve a system through 
testing thousands of candidate mechanisms. So one way or another you too 
are taking a view - even if it's an evolutionary, "I'm not taking a view" 
view - on, and making a lot of assumptions about


-how systems evolve
-the known architecture of human cognition.

about which science has extremely patchy and confused knowledge. I don't see 
how any system-builder can avoid taking a view of some kind on such matters, 
yet you seem to be criticising Ben for so doing.


I was hoping that you also had some view on how a system's symbols should 
be grounded, especially since you mention Harnad, who does make vague 
gestures towards the brain's levels of grounding. But you don't indicate any 
such view.


Sounds like you too, pace MW, are hoping for a number of miracles - IOW 
creative ideas - to emerge, and make your system work.


Anyway, you have to give Ben credit for putting a lot of his stuff & 
principles out there & on the line. I think anyone who wants to mount a 
full-scale assault on him (& why not?) should be prepared to reciprocate.








-

RL:

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say how, 
only to say that there is a difference between two different research 
strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you try 
to have your cake and eat it too:  let the system build its own symbols, 
with attached parameters that 'mean' whatever they end up meaning after 
the symbols have been built, BUT then at the same time insist that some of 
the parameters really do 'mean' things like probabilities or likelihood or 
confidence values.  If the programmer does anything at all to include 
mechanisms that rely on these meanings (these interpretations of what the 
parameters signify) then the programmer has second-guessed what the system 
itself was going to use those things for, and you have a conflict between 
the two.


My strategy is to keep my hands off, not do anything to strictly interpret 
those parameters, and experimentally observe the properties of systems 
that seem loosely consistent with the known architecture of human 
cognition.


I have a parameter, for instance, that seems to be a "happiness" or 
"consistency" parameter attached to a knowledge-atom.  But beyond roughly 
characterising it as such, I do not insert any mechanisms that (implicitly 
or explicitly) lock the system into such an interpretation. Instead, I have 
a wide variety of different candidate mechanisms that use that parameter, 
and I look at the overall properties of systems that use these different 
candidate mechanisms.  I let the system use the parameter according to the 
dictates of whatever mechanism is in place, but then I just explore the 
consequences (the high level behavior of the system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go so 
far as to test:  they are generated in a semi-automatic way, so there are 
an unlimited number of potential candidates).




Richard Loosemore









Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore

Mike Tintner wrote:

RL:In order to completely ground the system, you need to let the system
build its own symbols

V. much agree with your whole argument. But -  I may well have missed 
some vital posts - I have yet to get the slightest inkling of how you 
yourself propose to do this.


Well, for the purposes of the present discussion I do not need to say 
how, only to say that there is a difference between two different 
research strategies for finding out what the mechanism is that does this.


One strategy (the one that I claim has serious problems) is where you 
try to have your cake and eat it too:  let the system build its own 
symbols, with attached parameters that 'mean' whatever they end up 
meaning after the symbols have been built, BUT then at the same time 
insist that some of the parameters really do 'mean' things like 
probabilities or likelihood or confidence values.  If the programmer 
does anything at all to include mechanisms that rely on these meanings 
(these interpretations of what the parameters signify) then the 
programmer has second-guessed what the system itself was going to use 
those things for, and you have a conflict between the two.


My strategy is to keep my hands off, not do anything to strictly 
interpret those parameters, and experimentally observe the properties of 
systems that seem loosely consistent with the known architecture of 
human cognition.


I have a parameter, for instance, that seems to be a "happiness" or 
"consistency" parameter attached to a knowledge-atom.  But beyond 
roughly characterising it as such, I do not insert any mechanisms that 
(implicitly or explicitly) lock the system into such an interpretation. 
Instead, I have a wide variety of different candidate mechanisms that 
use that parameter, and I look at the overall properties of systems that 
use these different candidate mechanisms.  I let the system use the 
parameter according to the dictates of whatever mechanism is in place, 
but then I just explore the consequences (the high level behavior of the 
system).


In this way I do not get a conflict between what I think the parameter 
'ought' to mean and what the system is implicitly taking it to 'mean' by 
its use of the parameter.


I could start talking about all the different candidate mechanisms, but 
there are thousands of them (at least thousands of candidates that I go 
so far as to test:  they are generated in a semi-automatic way, so there 
are an unlimited number of potential candidates).
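
To give the flavor of that methodology (a schematic sketch only; the 
mechanism space and the measured properties here are stand-ins, not my 
actual generator):

    import itertools

    def generate_candidates():
        # Semi-automatic generation: cross a few ways of updating the
        # parameter with a few ways of using it.
        updates = ["decay", "reinforce", "normalize"]
        uses = ["gate_retrieval", "bias_selection", "ignore"]
        for u, v in itertools.product(updates, uses):
            yield {"update": u, "use": v}

    def run_system(mechanism):
        # Stand-in for running the whole system with this mechanism in
        # place and measuring high-level behavior (stability, coherence,
        # etc.); no interpretation of the parameter is built in.
        return {"mechanism": mechanism, "stability": None}

    results = [run_system(m) for m in generate_candidates()]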




Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore

Bryan Bishop wrote:

On Wednesday 14 November 2007 11:55, Richard Loosemore wrote:

I was really thinking of the data collection problem:  we cannot take
one brain and get full information about all those things, down to a
sufficient level of detail.  I do not see such a technology even over
the horizon (short of full-blown nanotechnology) that can deliver
that. We can get different information from different individual
brains (all of them dead), but combining that would not necessarily
be meaningful: all brains are different.


Re: all brains are different. What about the possibilities of cloning 
mice and then proceeding to raise them in Skinner boxes with the exact 
same environmental conditions, the same stimulation routines, etc. ? 
Ideally this will give us a baseline mouse that is not only 
genetically similar, but also behaviorally similar to some degree. This 
would undoubtedly be helpful in this quest.


Well, now that you have suggested this I am sure some neuroscientist will 
do it ;-).


But you have to understand that I am a cognitive scientist, with a huge 
agenda that involves making good use of what I see as the unexplored 
fertile ground between cognitive science and AI, and I think that I 
will be able to build an AGI using this approach *long* before the 
neuroscientists even get one mouse-brain scan at the neuron level (never 
mind the synaptic bouton level)!


So:  yeah, but not necessary.



Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Richard Loosemore

Bryan Bishop wrote:

On Monday 12 November 2007 22:16, Richard Loosemore wrote:

If anyone were to throw that quantity of resources at the AGI problem
(recruiting all of the planet), heck, I could get it done in about 3
years. ;-)


I have done some research on this topic in the last hour and have found 
that a Connectome Project is in fact in the very early stages out 
there on the internet:


http://iic.harvard.edu/projects/connectome.html
http://acenetica.blogspot.com/2005/11/human-connectome.html
http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
http://www.indiana.edu/~cortex/connectome_plos.pdf


This is the whole brain emulation approach, I guess (my previous 
comments were about evolution of brains rather than neural level 
duplication).


But (switching topics to whole brain emulation) there are serious 
problems with this.


It seems quite possible that what we need is a detailed map of every 
synapse, exact layout of dendritic tree structures, detailed knowledge 
of the dynamics of these things (they change rapidly) AND wiring between 
every single neuron.


When I say "it seems possible" I mean that the chance of this 
information being absolutely necessary in order to understand what the 
neural system is doing is so high that we would not want to gamble on 
it NOT being necessary.


So are the researchers working at that level of detail?

Egads, no!  Here's a quote from the PLOS Computational Biology paper you 
referenced (above):


Attempting to assemble the human connectome at the level
of single neurons is unrealistic and will remain infeasible at
least in the near future.

They are not even going to do it at the resolution needed to see 
individual neurons?!


I think that if they did the whole project at that level of detail it 
would amount to a possibly interesting hint at some of the wiring, of 
peripheral interest to people doing work at the cognitive system level. 
 But that is all.


I think it would be roughly equivalent to the following:  You say to me 
"I want to understand how computers work, in enough detail to build my 
own" and I reply with "I can get you a photo of a motherboard and a 
500 by 500 pixel image of the inside of an Intel chip..."




Richard Loosemore



Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore
 
flexibility is what compounds the problem.  Remember, life exists on the 
boundary between order and chaos.  Too much flexibility (unconstrained 
chaos) is as deadly as too much structure.
 
I think that I see both sides of the issue and how Novamente could 
be altered/enhanced to make Richard happy (since it's almost universally 
flexible) -- but doing so would also impose many constraints that I 
think you would be unwilling to live with, since I'm not sure that 
you would see the point.  I don't think that you're ever going to be 
able to change his view that the current direction of Novamente 
is -- pick one:  a) a needle in an infinite haystack or b) too fragile 
to succeed -- particularly since I'm pretty sure that you couldn't 
convince me without making some serious additions to Novamente
 


- Original Message -
*From:* Benjamin Goertzel mailto:[EMAIL PROTECTED]
*To:* agi@v2.listbox.com mailto:agi@v2.listbox.com
*Sent:* Monday, November 12, 2007 3:49 PM
*Subject:* Re: [agi] What best evidence for fast AI?


To be honest, Richard, I do wonder whether a sufficiently in-depth
conversation
about AGI between us would result in you changing your views about
the CSP
problem in a way that would accept the possibility of Novamente-type
solutions.

But, this conversation as I'm envisioning it would take dozens of
hours, and would
require you to first spend 100+ hours studying detailed NM
materials, so this seems
unlikely to happen in the near future.

-- Ben

On Nov 12, 2007 3:32 PM, Richard Loosemore [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:

Benjamin Goertzel wrote:
 
  Ed --
 
  Just a quick comment: Mark actually read a bunch of the
proprietary,
  NDA-required Novamente documents and looked at some source
code (3 years
  ago, so a lot of progress has happened since then).  Richard
didn't, so
  he doesn't have the same basis of knowledge to form detailed
comments on
  NM, that Mark does.

This is true, but not important to my line of argument, since of
course
I believe that a problem exists (CSP), which we have discussed on a
number of occasions, and your position is not that you have some
proprietary, unknown-to-me solution to the problem, but rather
that you
do not really think there is a problem.

Richard Loosemore










RE: [agi] What best evidence for fast AI?

2007-11-13 Thread Edward W. Porter
 probabilistic formulas are appropriate.  But it
doesn’t argue against the importance of probabilities.  It argues against
using them blindly.




ED  So by “operating with small amounts of data,” how small, very
roughly, are you talking about?  And are you talking only about the active
goals or sources of activation being small, or are you saying that all the
computation in the system will deal with only a small amount of data
within, for example, one second of the processing of a human-level
system operating at human-level speed?



MARK  I mean that, as in the way humans reason, there is concentration
on only a small number of objects -- which are each only one link away from
an almost inconceivable number of related things -- and the brain can then
jump at least three of these links with lightning rapidity.



ED So this implies you are not arguing against the idea that AGI will
be dealing with massive data, just that its use will be focused by
concentration on a relatively small number of sources of activation at
once.





MARK  Ask Ben how much actual work has been done on activation
control in very large, very sparse atom spaces in Novamente.  He'll tell
you that it's a project for when he's further along.  I'll insist (as will
Richard) that if it isn't baked in from the very beginning, you're
probably going to have to go back to the beginning to repair the lack.



ED  It is exactly such research I want to see funded.  It strikes me
as one of the key things we must learn to do well to make powerful AGI.
But I think even with some fairly dumb activation control systems you
could get useful results.  Such results would not be at all human-level in
many ways, but in other ways they could be much more powerful because such
systems could deal with many more explicit facts and could input and
output information at a much higher rate than humans.



For example, what is the equivalent of the activation control (or search)
algorithm in Google Sets?  It operates over huge data.  I bet the
algorithm for calculating its search or activation is relatively simple
(much, much, much less than a PhD thesis), and look what it can do.  So I
think one path is to come up with applications that can use and reason
with large data having roughly world-knowledge-like sparseness (such as
NL data), and start with relatively simple activation algorithms and
develop them from the ground up.
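
To make concrete the kind of relatively simple activation algorithm I
have in mind, here is a toy sketch (purely illustrative -- the graph
representation, parameters, and names are my own invention, not anything
from Novamente or Google):

import heapq
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, focus_size=10, hops=3):
    # graph: node -> list of (neighbor, link_weight); very large, very sparse.
    # seeds: node -> initial activation (the current sources of activation).
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        # Only the most active nodes spread -- a small focus of attention
        # moving over a huge, sparse atom space.
        focus = heapq.nlargest(focus_size, frontier.items(),
                               key=lambda kv: kv[1])
        next_frontier = defaultdict(float)
        for node, act in focus:
            for neighbor, weight in graph.get(node, ()):
                next_frontier[neighbor] += act * weight * decay
        for node, act in next_frontier.items():
            activation[node] += act
        frontier = next_frontier
    # The new focus: the most highly activated atoms overall.
    return heapq.nlargest(focus_size, activation.items(),
                          key=lambda kv: kv[1])

Even something this dumb would let one experiment with how focus size,
decay, and hop count trade off on world-knowledge-sized graphs.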



MARK  P.S.  Oh yeah -- if you were public enemy number one, I
wouldn't bother answering you (and I probably should lay off of the
fan-boy crap :-).



ED  Thanks.



I admit I am impressed with Novamente.  It’s the best AGI architecture I
currently know of; I am impressed with Ben; I believe there is a high
probability all the gaps you address could be largely fixed within five
years with deep funding (which may never come); and since I want to get
such deep funding for just the type of large atom-base work you say is so
critical, I think it is important to focus on the potential for greatness
that Novamente and somewhat similar systems have, rather than only think
of their current gaps and potential problems.



But of course, at the same time, we must look for and try to understand
its gaps and potential problems so that we can remove them.



Ed Porter




-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, November 12, 2007 2:42 PM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?


 It is NOT clear that Novamente documentation is NOT enabling, or could
not be made enabling, with, say, one man-year of work.  Strong arguments
could be made both ways.

I believe that Ben would argue that Novamente documentation is NOT
enabling even with one man-year of work.  Ben?  There is still way too much
*research* work to be done.

  But the standard for non-enablement is very arguably weaker than not
requiring a miracle.  It would be more like not requiring a leap of
creativity that is outside the normal skill of talented PhDs trained in
related fields.

 So although your position is reasonable, I hope you understand so is
that on the other side.


My meant-to-be-humorous "miracle" phrasing is clearly throwing you.  The
phrase "not requiring a leap of creativity that is outside the normal
skill of talented PhDs trained in related fields" works for me.  Novamente
is *definitely* not there yet.  I'm rather sure that Ben would agree -- as
in, I'm not on the other side, *you* are on the other side from the
system's designer.  Again, Ben please feel free to chime in.

 much scaling stuff

Remember that the brain is *massively* parallel.  Novamente and any
other linear (or modestly parallel) system is *not* going to work in the
same fashion as the brain.  Novamente can be parallelized to some degree
but *not* to anywhere near the same degree as the brain.  I love your
speculation and agree with it -- but it doesn't match near-term reality.
We aren't going to have brain-equivalent parallelism


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel



 For example, what is the equivalent of the activation control (or search)
 algorithm in Google Sets?  It operates over huge data.  I bet the
 algorithm for calculating its search or activation is relatively simple
 (much, much, much less than a PhD thesis), and look what it can do.  So I
 think one path is to come up with applications that can use and reason with
 large data, having roughly world-knowledge-like sparseness (such as NL
 data), and start with relatively simple activation algorithms and develop
 them from the ground up.



Google, I believe, does reasoning about word and phrase co-occurrence using
a combination of Bayes net learning with EM clustering (this is based on
personal conversations with folks who have worked on related software
there).

The use of EM helps the Bayes net approach scale.
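
To give the flavor of what EM clustering over co-occurrence data looks
like, here is a toy mixture-of-multinomials in Python (illustrative only;
I obviously have no knowledge of Google's actual implementation, and all
names here are invented):

import numpy as np

def em_cluster(counts, k, iters=50, seed=0):
    # counts: (n_words, n_contexts) nonnegative co-occurrence matrix.
    # Returns soft cluster assignments, one row per word.
    rng = np.random.default_rng(seed)
    n, d = counts.shape
    pi = np.full(k, 1.0 / k)                   # cluster priors
    theta = rng.dirichlet(np.ones(d), size=k)  # per-cluster context distributions
    for _ in range(iters):
        # E-step: responsibilities P(cluster | word), up to normalization.
        log_resp = np.log(pi) + counts @ np.log(theta).T
        log_resp -= log_resp.max(axis=1, keepdims=True)
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors and context distributions from soft counts.
        pi = resp.mean(axis=0)
        theta = resp.T @ counts + 1e-9          # smoothing avoids log(0)
        theta /= theta.sum(axis=1, keepdims=True)
    return resp

Batch passes like this are cheap over a static count matrix, which is
precisely the regime where such methods shine.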

Bayes nets are good for domains like word co-occurrence probabilities, in
which the relevant data is relatively static.  They are not much good for
real-time learning.

Unlike Bayes nets, the approach taken in PLN and NARS allows efficient
uncertain reasoning in dynamic environments based on large knowledge bases
(at least in principle, based on the math, algorithms and structures; we
haven't proved it yet).

-- Ben G


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore

Mike Tintner wrote:

RL:Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75)
is supposed to express the idea that the statement [I like cats] is in
some sense 75% true.

This essay seems to be a v.g. demonstration of why the human system 
almost certainly does not use numbers or anything like them as stores of 
value - but raw, crude emotions.  How much do you like cats [or 
marshmallow ice cream]? Miaow//[or yummy] [those being an expression 
of internal nervous and muscular impulses] And black cats [or 
strawberry marshmallow] ? Miaow-miaoww![or yummy yummy] . It's crude 
but it's practical.


It is all a question of what role the numbers play.  Conventional AI 
wants them at the surface, and transparently interpretable.


I am not saying that there are no numbers, only that they are below 
the surface and not directly interpretable.  That might or might not 
gibe with what you are saying ... although I would not go so far as to 
put it in the way you do.




Richard Loosemore



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore


Ben,

Unfortunately what you say below is tangential to my point, which is 
what happens when you reach the stage where you cannot allow any more 
vagueness or subjective interpretation of the qualifiers, because you 
have to force the system to do its own grounding, and hence its own 
interpretation.


What you gave below was a sketch of some more elaborate 'qualifier' 
mechanisms.  But I described the process of generating more and more 
elaborate qualifier mechanisms in the body of the essay, and said why 
this process was of no help in resolving the issue.




Richard Loosemore





Benjamin Goertzel wrote:


Richard,

The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**

-- simple probabilistic truth values without highly specific interpretation

-- more complex, logically refined truth values, when this level of
precision is necessary

To make the discussion more concrete, I'll use a specfic example
to do with virtual animals in Second Life.  Our first version of the
virtual pets won't use PLN in this sort of way, it'll be focused on MOSES
evolutionary learning; but, this is planned for the second version and
is within the scope of what Novamente can feasibly be expected to
do with modest effort.

Consider an avatar identified as Bob_Yifu

And, consider the concept of friend, which is a ConceptNode

-- associated to the WordNode friend via a learned ReferenceLink
-- defined operationally via a number of links such as

ImplicationLink
    AND
        InheritanceLink X friend
        EvaluationLink near (I, X)
    Pleasure

(this one just says that being near a friend confers pleasure.  Other
links about friendship may contain knowledge such as that friends
often give one food, friends help one find things, etc.)

The concept of friend may be learned via mining of the animal's
experience-base -- basically, this is a matter of learning that there are
certain predicates whose SatisfyingSets (the sets of Atoms that fulfill
the predicates) have significant intersection, and creating a ConceptNode
to denote that intersection.
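
In cartoon form, that concept-formation step looks something like the
following (a toy sketch under my own invented names, not the actual NM
mechanism):

def form_concepts(satisfying_sets, min_overlap=0.5):
    # satisfying_sets: predicate name -> set of Atoms satisfying it.
    # When two satisfying sets overlap heavily, create a new concept
    # whose extension is their intersection.
    names = sorted(satisfying_sets)
    concepts = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = satisfying_sets[a], satisfying_sets[b]
            union = sa | sb
            if not union:
                continue
            overlap = len(sa & sb) / len(union)   # Jaccard similarity
            if overlap >= min_overlap:
                concepts[(a, b)] = sa & sb        # new ConceptNode's extension
    return concepts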


Then, once the concept of friend has been formed, more links pertaining
to it may be learned via mining the experience base and via inference rules.

Then, we can may find that

InheritanceLink Bob_Yifu friend <.9,1>

(where the <.9,1> is an interval probability, interpreted according to
the indefinite probabilities framework) and this link mixes intensional
and extensional inheritance, and thus is only useful for heuristic
reasoning (which however is a very important kind).

What this link means is basically that Bob_Yifu's node in the memory
has a lot of the same links as the friend node -- or rather, that it
**would**, if all its links were allowed to exist rather than being
pruned to save memory.  So, note that the semantics are actually
tied to the mind itself.

Or we can make more specialized logical constructs if we really
want to, denoting stuff like

-- at certain times Bob_Yifu is a friend
-- Bob displays some characteristics of friendship very strongly,
and others not at all
-- etc.

We can also do crude, heuristic contextualization like

ContextLink <.7,.8>
    home
    InheritanceLink Bob_Yifu friend

which suggests that Bob is less friendly at home than
in general.
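
A toy picture of how such a contextualized value might get used (purely
illustrative; the data layout is invented):

def lookup_tv(tvs, link, context=None):
    # tvs: (link, context) -> (lower, upper) indefinite probability.
    # Prefer a context-specific truth value; fall back to the general
    # one, stored under context=None.
    return tvs.get((link, context)) or tvs.get((link, None))

tvs = {
    ("InheritanceLink Bob_Yifu friend", None):   (0.9, 1.0),
    ("InheritanceLink Bob_Yifu friend", "home"): (0.7, 0.8),
}
assert lookup_tv(tvs, "InheritanceLink Bob_Yifu friend", "home") == (0.7, 0.8)
assert lookup_tv(tvs, "InheritanceLink Bob_Yifu friend", "park") == (0.9, 1.0)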

Again this doesn't capture all the subtleties of Bob's friendship in
relation to being at home -- and one could do so if one wanted to, but it
would require introducing a larger complex of nodes and links, which is
not always the most appropriate thing to do.

The PLN inference rules are designed to give heuristically
correct conclusions based on heuristically interpreted links;
or more precise conclusions based on more precisely interpreted
links. 


Finally, the semantics of PLN relationships is explicitly an
**experiential** semantics.  (One of the early chapters in the PLN
book, to appear via Springer next year, is titled Experiential
Semantics.)  So, all node and link truth values in PLN are
intended to be settable and adjustable via experience, rather than
via programming or importation from databases or something like
that.

Now, the above example is of course a quite simple one.
Discussing a more complex example would go beyond the scope
of what I'm willing to do in an email conversation, but the mechanisms
I've described are not limited to such simple examples.

I am aware that identifying Bob_Yifu as a coherent, distinct entity is a
problem faced by humans and robots, and eliminated via the simplicity of
the SL environment.  However, there is detailed discussion in the
(proprietary) NM book of how these same mechanisms may be used to do
object recognition and classification, as well.

You may of course argue that these mechanisms won't scale up
to large knowledge bases and rich experience streams.  I believe that
they will, and have arguments but not rigorous proofs that they will.

-- Ben G



On Nov 13, 2007 12:34 PM, Richard Loosemore 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


 Ben,

 Unfortunately what you say below is tangential to my point, which is
 what happens when you reach the stage where you cannot allow any more
 vagueness or subjective interpretation of the qualifiers, because you
 have to force the system to do its own grounding, and hence its own
 interpretation.



I don't see why you talk about forcing the system to do its own
grounding -- the probabilities in the system are grounded in the first
place, as they are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately
grounded in in-built feeling-evaluation routines, measuring stuff like the
amount of novelty observed, the amount of food in the system, etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.



 What you gave below was a sketch of some more elaborate 'qualifier'
 mechanisms.  But I described the process of generating more and more
 elaborate qualifier mechanisms in the body of the essay, and said why
 this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it
gathered
via its perceived experience -- why do you think it has a problem?

I don't really understand your point, I guess.  I thought I did -- I
thought your point was that precisely specifying the nature of a
conditional probability is a rat's nest of complexity.  And my response
was basically that in Novamente we don't need to do that, because we
define conditional probabilities based on the system's own
knowledge-base, i.e.

Inheritance A B <.8>

means

If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does
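
In toy form, that overlap semantics is just the following (illustrative
code, not PLN's actual formula):

def inheritance_strength(rels_a, rels_b, weight=None):
    # rels_a, rels_b: the sets of relationships held by A and by B.
    # weight: optional map relationship -> importance (default 1.0).
    w = (weight or {}).get
    total = sum(w(r, 1.0) for r in rels_b)
    shared = sum(w(r, 1.0) for r in rels_b & rels_a)
    return shared / total if total else 0.0

Inheritance A B <.8> then just asserts that this weighted fraction is
about 0.8.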

But apparently you were making some other point, which I did not grok,
sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you
seemed
to be assuming in your post.

Ben


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:
 
 So perhaps the AGI question is, what is the difference between
 a know-it-all mechano-librarian, and a sentient being?
 
 I wasn't assuming a mechano-librarian.  I was assuming a human that could 
 (and might be trained to) do some initial translation of the question and 
 some final rephrasing of the answer.

I'm surprised by your answer. 

I don't see that the hardest part of agi is NLP i/o. To put it into
perspective: one can fake up some trivial NLP i/o now, and with a bit of
effort, one can improve significantly on that.  Sure, it would be
child-like conversation, and the system would be incapable of learning
new idioms, expressions, etc., but I don't see that you'd need a human
to translate the question into some formal reasoning-engine language.
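
To be concrete about how trivial the fake can be (a toy sketch; the
pattern, relation store, and names are invented for illustration):

import re

KB = {("capital", "france"): "Paris", ("color", "sky"): "blue"}

def answer(question):
    # Trivial NLP i/o stapled to a trivial "reasoning engine": match a
    # fixed pattern, look up a relation, phrase the result with a template.
    m = re.match(r"what is the (\w+) of (?:the )?(\w+)\?", question.lower())
    if not m:
        return "I don't understand."
    rel, arg = m.groups()
    value = KB.get((rel, arg))
    return f"The {rel} of {arg} is {value}." if value else "I don't know."

print(answer("What is the capital of France?"))  # The capital of france is Paris.

A real system needs vastly more patterns and a real engine behind the
lookup, but the stapling itself is this shallow.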

The hard part of NLP is being able to read complex texts, whether
Alexander Pope or Karl Marx; but a basic NLP i/o interface stapled to
a reasoning engine doesn't need to really do that, or at least not well.
Yet, these two stapled together would qualify as a mechano-librarian
for me.

To me, the hard part is still the reasoning engine itself, and the 
pruning, and the tailoring of responses to the topic at hand. 

So let me rephrase the question: If one had
1) A reasoning engine that could provide short yet appropriate responses
   to questions,
2) A simple NLP interface to the reasoning engine

would that be AGI?  I imagine most folks would say no, so let me throw
in: 

3) System can learn new NLP idioms, so that it can eventually come to
understand those sentences and paragraphs that make Karl Marx so hard to
read.

With this enhanced reading ability, it could then presumably become a
know-it-all ultra-question-answerer. 

Would that be AGI? Or is there yet more? Well, of course there's more:
one expects creativity, aesthetics, ethics. But we know just about nothing
about that.

This is the thing that I think is relevant to Robin Hanson's original
question.  I think we can build 1+2 in short order, and maybe 3 in a
while longer.  But the result of 1+2+3 will almost surely be an
idiot-savant: knows everything about horses, and can talk about them
at length, but, like a pedantic lecturer, the droning will put you to
sleep.  So is there more to AGI, and exactly how do we start laying
hands on that?

--linas








Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel


 This is the thing that I think is relevant to Robin Hanson's original
 question.  I think we can build 1+2 in short order, and maybe 3 in a
 while longer.  But the result of 1+2+3 will almost surely be an
 idiot-savant: knows everything about horses, and can talk about them
 at length, but, like a pedantic lecturer, the droning will put you to
 sleep.  So is there more to AGI, and exactly how do we start laying
 hands on that?

 --linas



I think that evolutionary-learning-type methods play a big role in
creativity.

I elaborated on this quite a bit toward the end of my 1997 book From
Complexity to Creativity.

Put simply, inference is ultimately a local search method -- inference
rules, even heuristic and speculative ones, always lead you step by step
from what you know into the unknown.  This makes you, as you say, like
a pedantic lecturer.

OTOH, evolutionary algorithms can take big creative leaps.  This is one
reason why the MOSES evolutionary algorithm plays a big role in the
Novamente design (the other, related reason being that evolutionary
learning is better than logical inference for many kinds of procedure
learning).
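
In caricature, the difference looks like this (a toy bitstring GA,
nothing like MOSES's actual internals):

import random

def evolve(fitness, length=32, pop_size=50, generations=200, p_mut=0.02):
    # Toy genetic algorithm.  One-point crossover recombines two distant
    # parents in a single step -- the kind of jump that step-by-step
    # local moves (or single inference steps) cannot make.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # e.g., maximize the number of 1 bits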

Integrating evolution with logic is key to intelligence.  The brain does
it, I believe, via

-- implementing logic via Hebbian learning (neuron-level Hebb stuff
leading to PLN-like logic stuff on the neural-assembly level)
-- implementing evolution via Edelman-style Neural Darwinist neural map
evolution (which ultimately bottoms out in Hebbian learning too)

Novamente seeks to enable this integration via grounding both inference
and evolutionary learning in probability theory.

-- Ben G

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Linas Vepstas
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
 
 Suppose that in some significant part of Novamente there is a 
 representation system that uses probability or likelihood numbers to 
 encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
 is supposed to express the idea that the statement [I like cats] is in 
 some sense 75% true.
 
 Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
 ungrounded because we have to interpret it.  Does it mean that I like 
 cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
 Are the cats that I like always the same ones, or is the chance of an 
 individual cat being liked by me something that changes?  Does it mean 
 that I like all cats, but only 75% as much as I like my human family, 
 which I like(p=1.0)?  And so on and so on.

Eh?

You are standing at the proverbial office water cooler, and Aneesh
says Wen likes cats.  On your drive home, your mind races ... does this
mean that Wen is a cat fancier?  You were planning on taking Wen out
on a date, and this tidbit of information could be useful ...

 when you try to build the entire grounding mechanism(s) you are forced 
 to become explicit about what these numbers mean, during the process of 
 building a grounding system that you can trust to be doing its job:  you 
 cannot create a mechanism that you *know* is constructing sensible p 
 numbers and facts during all of its development *unless* you finally 
 bite the bullet and say what the p numbers really mean, in fully cashed 
 out terms.

But as a human, asking Wen out on a date, I don't really know what 
"Wen likes cats" ever really meant.  It neither prevents me from talking 
to Wen, nor from telling my best buddy that ...well, I know, for
instance, that she likes cats...  

Lack of grounding is what makes humour funny; you can do a whole 
Pygmalion / Seinfeld episode on "she likes cats".

--linas 



Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel


 But as a human, asking Wen out on a date, I don't really know what
 "Wen likes cats" ever really meant.  It neither prevents me from talking
 to Wen, nor from telling my best buddy that ...well, I know, for
 instance, that she likes cats...


yes, exactly...

The NLP statement "Wen likes cats" is vague in the same way as the
Novamente or NARS relationship

EvaluationLink
    likes
    ListLink
        Wen
        cats


is vague.  The vagueness passes straight from NLP into the internal KR,
which is how it should be.

And that same vagueness may be there if the relationship is learned via
inference based on experience, rather than acquired by natural language.

I.e., if the above relationship is inferred, it may just mean that

 {the relationship between Wen and cats} shares many relationships with
other person/object relationships that have been categorized as 'liking'
before

In this case, the system can figure out that Wen likes cats without ever
actually making explicit what this means.  All it knows is that, whatever
it means, it's the same thing that was meant in other circumstances where
liking was used as a label.

So, vagueness can not only be imported into an AI system from natural
language, but also propagated around the AI system via inference.

This is NOT one of the trickier things about building probabilistic AGI;
it's really kind of elementary...

-- Ben G


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Mark Waser

I don't see that the hardest part of agi is NLP i/o.


I didn't say that i/o was the hardest part of agi.  Truly understanding NLP 
is agi-complete though.  And please, get off this kick of just faking 
something up and thinking that because you can create a shallow toy example 
that holds for ten seconds that you've answered *anything*.  That's the 
*narrow ai* approach.




Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 09:11, Richard Loosemore wrote:
 This is the whole brain emulation approach, I guess (my previous
 comments were about evolution of brains rather than neural level
 duplication).

Ah, you are right.  But this too is an interesting topic.  I think that 
the orders of magnitude for whole brain emulation, connectome mapping, and 
similar evolutionary methods are roughly the same, but I haven't done 
any calculations.

 It seems quite possible that what we need is a detailed map of every
 synapse, exact layout of dendritic tree structures, detailed
 knowledge of the dynamics of these things (they change rapidly) AND
 wiring between every single neuron.

Hm. It would seem that we could have some groups focusing on neurons, 
another on types of neurons, another on dendritic tree structures, some 
more on the abstractions of dendritic trees, etc. in an up-*and*-down 
propagation hierarchy so that the abstract processes of the brain are 
studied just as well as the in-betweens of brain architecture.

 I think that if they did the whole project at that level of detail it
 would amount to a possibly interesting hint at some of the wiring, of
 peripheral interest to people doing work at the cognitive system
 level. But that is all.

You see no further possible value in such a project?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 I was using the term episodic in the standard sense of episodic memory 
 from cog psych, in which episodic memory is differentiated from procedural 
 and declarative memory. 

I understood that.  The problem is that procedural and declarative memory is 
*not* as simple as is often purported.  If you can't rapidly realize when and 
why your previously reliable procedural and declarative stuff is suddenly no 
longer valid . . . . 

 The main point is, we have specialized indices to make memory access 
 efficient for knowledge involving (certain and uncertain) logical 
 relationships, associations, spatial and temporal relationships, and 
 procedures

Indices are important, but compactness of data storage is also important, as are 
ways to have what is effectively indexed derivation of knowledge.  Obviously my 
knowledge of Novamente is becoming dated but, unless you've opened up some really 
new areas, there is a lot of work that could be done in this area that you're not 
focusing on.  (Note: Please don't be silly and infer that by compactness of data 
storage I mean that disk size is important -- we're long past those days.  
Assume that I mean the computational costs of manipulating data that is not 
stored in an efficient manner.)

 Research project 1.  How do you find analogies between neural networks, 
 enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 
 That is a question most humans couldn't answer, and is only suitable for 
 testing an AGI that is already very advanced.

In your opinion.  I don't believe that an AGI is going to get far at all 
without having at least a partial handle on this.

 Research project 2.  How do you recognize and package up all of the data 
 that represents horse and expose only that which is useful at a given time? 
 That is covered quite adequately in the NM design, IMO.  We are actually 
 doing a commercial project right now (w/ delivery in 2008) that will showcase 
 our ability to solve this problem.  Details are confidential unfortunately, 
 due to the customer's preference. 

I'm afraid that I have to snort at this.  Either you didn't understand the full 
implications of what I'm saying or you're snowing me (ok, I'll give you a .1% 
chance of having it).

 That is what is called map encapsulation in the Novamente design.

Yes, yes, I saw it in the design . . . . a miracle happens here.
Which, granted, is better than not realizing that the area exists . . . . but 
still . . . .

 I do not think the design has any huge gaps.  But much further R&D work is 
 required, and I agree there may be a simpler approach; but I am not 
 convinced that you have one. 

These are two *very* different issues (with a really spurious statement tacked 
onto the end).

Of course you don't think the design has any gaps -- you would have filled them 
if you saw them.

There is no reason to be convinced that *I* have a simpler approach because I 
haven't put one forth.  I may or may not be working on one :-) but if I am, 
I certainly haven't got to the point where I feel that I can defend it. :-)

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 11:45 AM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 11:36 AM, Mark Waser [EMAIL PROTECTED] wrote:

 I am extremely confident of Novamente's memory design regarding 
declarative and procedural knowledge.  Tweaking the system for optimal 
representation of episodic knowledge may require some more thought. 

Granted -- the memory design is very generic and will handle virtually 
anything.  The question is -- is it in a reasonably optimal form for retrieval 
and other operations (i.e. optimal enough that it won't end up being impossibly 
slow once you get a realistic amount of data/knowledge)?  Your caveat on 
episodic knowledge proves very informative since *all* knowledge is effectively 
episodic.

  I was using the term episodic in the standard sense of episodic memory 
from cog psych, in which episodic memory is differentiated from procedural and 
declarative memory. 

  The main point is, we have specialized indices to make memory access 
efficient for knowledge involving (certain and uncertain) logical 
relationships, associations, spatial and temporal relationships, and procedures 
... but we haven't put much work into creating specialized indices to make 
access of stories/narratives efficient.  Though this may not wind up being 
necessary since the AtomTable now has the capability to create new indices on 
the fly, based on the statistics of the data contained therein. 
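
(A toy picture of what building an index on the fly might look like --
illustrative only, not the actual AtomTable code:)

from collections import defaultdict

class ToyAtomTable:
    # Illustrative store that builds a secondary index over a field the
    # first time that field is queried, then keeps it maintained.
    def __init__(self):
        self.atoms = []
        self.indices = {}                 # field -> value -> list of atoms

    def add(self, atom):
        self.atoms.append(atom)
        for field, index in self.indices.items():
            index[atom.get(field)].append(atom)

    def lookup(self, field, value):
        if field not in self.indices:     # first query on this field:
            index = defaultdict(list)     # build the index from scratch
            for atom in self.atoms:
                index[atom.get(field)].append(atom)
            self.indices[field] = index
        return self.indices[field][value]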

   

 I have no idea what you mean by scale invariance of knowledge, and have 
only a weak understanding of what you mean by ways of determining and exploiting 
encapsulation and modularity of knowledge without killing useful leaky 
abstractions.

Research project 1.  How do you find analogies between neural networks, 
enzyme

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel
Hi,


  Research project 1.  How do you find analogies between neural networks,
 enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)?
  That is a question most humans couldn't answer, and is only suitable for
 testing an AGI that is already very advanced.
   In your opinion.  I don't believe that an AGI is going to get far at all
 without having at least a partial handle on this.


I'm more interested at this stage in analogies like

-- between seeking food and seeking understanding
-- between getting an object out of a hole and getting an object out of a
pocket, or a guarded room

etc.

Why would one need to introduce advanced scientific concepts to an
early-stage AGI?  I don't get it...




  Research project 2.  How do you recognize and package up all of the
 data that represents horse and expose only that which is useful at a given
 time?
  That is covered quite adequately in the NM design, IMO.  We are actually
 doing a commercial project right now (w/ delivery in 2008) that will
 showcase our ability to solve this problem.  Details are confidential
 unfortunately, due to the customer's preference.

 I'm afraid that I have to snort at this.  Either you didn't understand the
 full implications of what I'm saying or you're snowing me (ok, I'll give you
 a .1% chance of having it).


Hmmm  I guess I didn't understand what you meant.

What I thought you meant was, if a user asked "I'm a small farmer in New
Zealand.  Tell me about horses" then the system would be able to disburse
its relevant knowledge about horses, filtering out the irrelevant stuff.

What did you mean, exactly?




  That is what is called map encapsulation in the Novamente design.
 Yes, yes, I saw it in the design . . . . a miracle happens here.
 Which, granted, is better than not realizing that the area exists . . . . but
 still . . . .


There are specific algorithms proposed, in the NM book, for doing map
encapsulation.  You may not believe they will work for the task, but still,
it's not fair to use the label "a miracle happens here" to describe a
description of specific algorithms applied to a specific data structure.




  I do not think the design has any huge gaps.  But much further R&D work
 is required, and I agree there may be a simpler approach; but I am not
 convinced that you have one.
 These are two *very* different issues (with a really spurious statement
 tacked onto the end).

 Of course you don't think the design has any gaps -- you would have filled
 them if you saw them.


I think it has medium-sized gaps, not huge ones.  I have not filled all
these gaps because of lack of time -- implementing stuff needs to be
balanced with finalizing design details of stuff that won't be implemented
for a while anyway due to limited resources.


-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 I'm more interested at this stage in analogies like
 -- between seeking food and seeking understanding
 -- between getting an object out of a hole and getting an object out of a 
 pocket, or a guarded room
 Why would one need to introduce advanced scientific concepts to an 
 early-stage AGI?  I don't get it... 

:-)  A bit disingenuous there, Ben.  Obviously you start with the simple and 
move on to the complex (though I suspect that the first analogy you cite is 
rather more complex than you might think) -- but to take too simplistic an 
approach that might not grow is just the narrow AI approach in other clothing.

 Hmmm  I guess I didn't understand what you meant.
 What I thought you meant was, if a user asked "I'm a small farmer in New 
 Zealand.  Tell me about horses" then the system would be able to disburse 
 its relevant knowledge about horses, filtering out the irrelevant stuff.   
 What did you mean, exactly?

That's a good simple, starting case.  But how do you decide how much knowledge 
to disburse?  How do you know what is irrelevant?  How much do your answers 
differ between a small farmer in New Zealand, a rodeo rider in the West, a 
veterinarian in Pennsylvania, a child in Washington, a bio-mechanician studying 
gait?  And horse is actually a *really* simple concept since it refers to a 
very specific type of physical object.  

Besides, are you really claiming that you'll be able to do this next year?  
Sorry, but that is just plain, unadulterated BS.  If you can do that, you are 
light-years further along than . . . . 

 There are specific algorithms proposed, in the NM book, for doing map 
 encapsulation.  You may not believe they will work for the task, but still, 
 it's not fair to use the label a miracle happens here to describe a 
 description of specific algorithms applied to a specific data structure.  

I guess that the jury will have to be out until you publicize the algorithms.  
What I've seen in the past are too small, too simple, and won't scale to what 
is likely to be necessary.

 I think it has medium-sized gaps, not huge ones.  I have not filled all 
 these gaps because of lack of time -- implementing stuff needs to be 
 balanced with finalizing design details of stuff that won't be implemented 
 for a while anyway due to limited resources. 

:-)  You have more than enough design experience to know that medium-size gaps 
can frequently turn huge once you turn your attention to them.  Who are you 
snowing here?




Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel


  That's a good simple, starting case.  But how do you decide how much
  knowledge to disburse?  How do you know what is irrelevant?  How much do
  your answers differ between a small farmer in New Zealand, a rodeo rider in
  the West, a veterinarian in Pennsylvania, a child in Washington, a
  bio-mechanician studying gait?  And horse is actually a *really* simple
  concept since it refers to a very specific type of physical object.
 
  Besides, are you really claiming that you'll be able to do this next
  year?  Sorry, but that is just plain, unadulterated BS.  If you can do that,
  you are light-years further along than . . . .
 


Actually, this example is just not that hard.  I think we may be able to do
this during 2008, if funding for that particular NM application project
holds up (it's currently confirmed only thru May-June)

ben


RE: [agi] What best evidence for fast AI?

2007-11-12 Thread Edward W. Porter
Ben,  Thanks.  I think Mark is raising some interesting issues.  I may not
agree with him on all of them, but it is good to have your ideas tested by
intelligent questioning.  Ed

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
Sent: Monday, November 12, 2007 11:37 AM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?




Ed --

Just a quick comment: Mark actually read a bunch of the proprietary,
NDA-required Novamente documents and looked at some source code (3 years
ago, so a lot of progress has happened since then).  Richard didn't, so he
doesn't have the same basis of knowledge to form detailed comments on NM,
that Mark does.

-- Ben


On Nov 12, 2007 11:35 AM, Edward W. Porter [EMAIL PROTECTED] wrote:


I'm sorry.  I guess I did misunderstand you.

If you have time I wish you could state the reasons why you find it
lacking as efficiently as has Mark Waser.

Ed Porter


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Monday, November 12, 2007 11:20 AM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?



Edward W. Porter wrote:
 Richard Loosemore wrote in a Sun 11/11/2007 11:09 PM post


RICHARD You are right.  I have only spent about 25 years working on
 this problem.  Perhaps, no matter how bright I am, this is not enough to
 understand Novamente's promise.

 ED There are many people who have spent 25 years working on AI who
 have not spent the time to try to understand the multiple threads that
 make up the Novamente approach.  From the one paper I read from you, as
 I remember it, your major approach to AI was based on a concept of
 complexity in which it is hard for humans to understand the
 relationship between the lower level of the system and the higher-level
 functions you presumably want it to have.  This is very different from
 the Novamente approach, which involves complexity, but not so much at an
 architectural level as at the level of what will emerge in the
 self-organizing gen/comp network of patterns and behaviors that the
 architecture is designed to grow, all under the constant watchful eye --
 and selective weeding and watering -- of its goal and reward systems.
 As I understand it, the complexity in Novamente is much more like that
 in an economy in which semi-rational actors struggle to find and make a
 niche at which they can make a living, than the somewhat more anarchical
 complexity in the cellular automaton Game of Life.

I am sorry, but this is a rather enormous misunderstanding of the claim
I made.  Too extensive for me to be able to deal with in a list post.


 So perhaps you are like most people who have spent a career in AI, in
 that despite the deep learning you have obtained, you have not spent
 enough time thinking about the pieces of Novamente-like approaches.  But
 it is almost certain that that 25 years' worth of knowledge would make it
 much easier for you to understand a Novamente-like approach than all but
 a very small percent of this planet's people, if you really wanted to.

 ED I am sure you are smart enough to understand its promise if
 you wanted to.  Do you?

RICHARD I did want to.

 I did.

 I do.

 ED Great. If you really do, I would start reading the papers at
 http://www.novamente.net/papers/.  Perhaps Ben could give you a
 better reading list than I.

 I don't know about you, Richard, but given my mental limitations, I
 often find I have to read some parts of a paper 2 to 10 times to
 understand them.  Usually much is unsaid in most papers, even the
 well-written ones.  You often have to spend time filling in the blanks and
 trying to imagine how what it's describing would actually work.  Much of
 my understanding of the Novamente approach not only comes from a broad
 range of reading and attending lectures in AI, micro-electronic, and
 brain science, but also a lot of thinking about what I have read and
 heard from other, and about what I have observed over decades of my own
 thought processes.

There is a fundamental misunderstanding here, Ed.  I read all of the
Novamente papers a couple of years ago.  My own thinking had already
gone to that point and (in my opinion) well beyond it.

You are implying that perhaps I do not understand it well enough.  I
understand it, understand a very wide range of issues that surround it,
and also understand what I see as some serious limitations (some of
which are encapsulated in my complexity paper).

Thanks for your concern, but understanding the Novamente approach is not
my problem.


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel
On Nov 12, 2007 1:49 PM, Mark Waser [EMAIL PROTECTED] wrote:

   I'm more interested at this stage in analogies like
  -- between seeking food and seeking understanding
  -- between getting an object out of a hole and getting an object out of
 a pocket, or a guarded room
  Why would one need to introduce advanced scientific concepts to an
 early-stage AGI?  I don't get it...

 :-)  A bit disingenuous there, Ben.  Obviously you start with the simple
 and move on to the complex (though I suspect that the first analogy you cite
 is rather more complex than you might think) -- but to take too simplistic
 an approach that might not grow is just the narrow AI approach in other
 clothing.



Well, I don't think we're doing the latter, obviously.  It's not as though
we are creating an AGI architecture that is overfitted to controlling simple
organisms in virtual worlds.  We've created a general AGI architecture and
will then be applying it in this particular context.




  Hmmm  I guess I didn't understand what you meant.
  What I thought you meant was, if a user asked I'm a small farmer in
 New Zealand.  Tell me about horses then the system would be able to
 disburse its relevant knowledge about horses, filtering out the irrelevant
 stuff.
  What did you mean, exactly?

 That's a good simple, starting case.  But how do you decide how much
 knowledge to disburse?  How do you know what is irrelevant?  How much do
 your answers differ between a small farmer in New Zealand, a rodeo rider in
 the West, a veterinarian in Pennsylvania, a child in Washington, a
 bio-mechanician studying gait?  And horse is actually a *really* simple
 concept since it refers to a very specific type of physical object.

 Besides, are you really claiming that you'll be able to do this next
 year?  Sorry, but that is just plain, unadulterated BS.  If you can do that,
 you are light-years further along than . . . .


Well, understanding the relevant context underlying a query is a fuzzy, not
an absolute thing.  There can be varying levels of capability at doing
this.  We have the basic mechanisms to enable this in NM, but they won't
during 2008 perform this kind of contextualization as well as humans do.   I
didn't mean to be implying they would.




  There are specific algorithms proposed, in the NM book, for doing map
 encapsulation.  You may not believe they will work for the task, but still,
 it's not fair to use the label a miracle happens here to describe a
 description of specific algorithms applied to a specific data structure.
 I guess that the jury will have to be out until you publicize the
 algorithms.  What I've seen in the past are too small, too simple, and won't
 scale to what is likely to be necessary.


I disagree, but this would get into a very in-depth technical conversation
which isn't really apropos for this list.



  I think it has medium-sized gaps, not huge ones.  I have not filled all
 these gaps because of lack of time -- implementing stuff needs to be
 balanced with finalizing design details of stuff that won't be implemented
 for a while anyway due to limited resources.

 :-)  You have more than enough design experience to know that medium-size
 gaps can frequently turn huge once you turn your attention to them.  Who are
 you snowing here?



Certainly they can, but I've thought about these particular gaps a lot, and
believe that's not going to happen here.  But of course it **could** -- as I
keep saying, completing the NM system does involve some R&D, not pure
engineering.

-- Ben G


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
I don't know at what point you'll be blocked from answering by confidentiality 
concerns but I'll ask a few questions you hopefully can answer like:
  1.. How is the information input and stored in your system (i.e. Is it more 
like simple formal assertions with a restricted syntax and/or language or like 
English language)?
  2.. How constrained in the information content (and is the content even 
relevant)?
  3.. To what degree does the system understand the information (i.e. how 
much can in manipulate it)?
  4.. Who tags the information as relevant to particular users?
  5.. How constrained are the tags?
  6.. What is the output (is it just a regurgitation of appropriately tagged 
information pieces)?
I have to assume that you're taking the easy way out on most of the questions 
(like formal assertions, restricted syntax, any language but the system does 
not understand or manipulate the language so content is irrelevant, users apply 
tags, fairly simple regurgitation) if you think 2008 is anywhere close to 
reasonable.

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 1:59 PM
  Subject: Re: [agi] What best evidence for fast AI?




  That's a good, simple starting case.  But how do you decide how much
knowledge to disburse?  How do you know what is irrelevant?  How much do your
answers differ between a small farmer in New Zealand, a rodeo rider in the
West, a veterinarian in Pennsylvania, a child in Washington, a bio-mechanician
studying gait?  And horse is actually a *really* simple concept since it refers 
to a very specific type of physical object.  

  Besides, are you really claiming that you'll be able to do this next 
year?  Sorry, but that is just plain, unadulterated BS.  If you can do that, 
you are light-years further along than . . . .


  Actually, this example is just not that hard.  I think we may be able to do 
this during 2008, if funding for that particular NM application project holds 
up (it's currently confirmed only thru May-June) 

  ben



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel
On Nov 12, 2007 2:51 PM, Mark Waser [EMAIL PROTECTED] wrote:

  I don't know at what point you'll be blocked from answering by
 confidentiality concerns



I can't say much more than I will do in this email, due to customer
confidentiality concerns


 but I'll ask a few questions you hopefully can answer like:

1. How is the information input and stored in your system (i.e. Is
it more like simple formal assertions with a restricted syntax and/or
language or like English language)?


English input as well as other forms of input; NM Atom storage

Obviously English language comprehension will not be complete, and
proprietary (not Novamente's) UI devices will be used to work around this.


2. How constrained is the information content (and is the content
even relevant)?


We'll work with a particular (relatively simple) text source for starters,
with a view toward later generalization


3. To what degree does the system understand the information (i.e.
how much can it manipulate it)?


That degree will increase as we bring more and more of PLN into the system.
Initially, it'll just be simple PLN first-order term logic inference; then
we'll extend it.
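
To make "simple PLN first-order term logic inference" concrete, here is a
minimal sketch of the independence-based deduction strength formula used in
PLN: given Inheritance A B and Inheritance B C, infer Inheritance A C.
(Illustrative only -- this is not Novamente code, and all the numbers below
are invented.)

    # PLN-style first-order deduction (sketch).  Given Inheritance A B with
    # strength s_ab and Inheritance B C with strength s_bc, plus the node
    # strengths s_b and s_c, infer the strength of Inheritance A C.
    def pln_deduction(s_ab, s_bc, s_b, s_c):
        if s_b >= 1.0:                 # degenerate case: B covers everything
            return s_c
        return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

    # Toy chain: horse -> mammal -> animal (all strengths made up)
    s_ac = pln_deduction(0.99, 0.95, 0.1, 0.2)
    print(s_ac)   # ~0.94: inferred strength of Inheritance horse animal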



4. Who tags the information as relevant to particular users?


User feedback


5. How constrained are the tags?


They're English


6. What is the output


That's confidential, but it's very expressive and flexible

(is it just a regurgitation of appropriately tagged information pieces)?


No

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel
On Nov 12, 2007 2:41 PM, Mark Waser [EMAIL PROTECTED] wrote:

   It is NOT clear that Novamente documentation is NOT enabling, or could
 not be made enabling, with, say, one man-year of work.  Strong arguments
 could be made both ways.

 I believe that Ben would argue that Novamente documentation is NOT
 enabling even with one man-year of work.  Ben?  There is still way too much
 *research* work to be done.



I'm not really familiar with this terminology, and don't have time to study
it right now.



   But the standard for non-enablement is very arguably weaker than not
 requiring a miracle.  It would be more like not requiring a leap of
 creativity that is outside the normal skill of talented PhDs trained in
 related fields.



Yes.  I believe that completion of NM does not require any leaps of
creativity outside the normal skill of talented PhDs trained in related
fields.





  Ask Ben how much actual work has been done on activation control in
 very large, very sparse atom spaces in Novamente.  He'll tell you that it's
 a project for when he's further along.



In this regard you are a bit out of date, Mark, due to your lack of recent
contact w/ the NM project.

In 2005 we did some testing of NM attention allocation mechanisms w/
millions of nodes and hundreds of millions of links, derived from NLP
parsing and quantitative data mining.  More recently I did some
smaller-scale testing of similar (but better) mechanisms in a Ruby
prototype, but this code is not yet ported into the main C++ codebase.  This
was all researchy stuff done with throwaway code just to see how the math
worked on large AtomTables.

But testing these mechanisms in isolation is not that informative -- they
seem to work, but the real test will be seeing how they work in combination
with large-scale inference and evolutionary learning, and we're not ready
for that yet, due to incompleteness of the PLN and MOSES codebases relative
to the respective designs.
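
To give a flavor of what such a test looks like, here is a generic
importance-spreading sketch over a sparse graph -- not the actual NM code or
parameter values (those are proprietary); everything here is invented:

    import random
    from collections import defaultdict

    # Sparse "atom space": adjacency stored as dict-of-dicts, so memory is
    # O(links) rather than O(atoms^2).  The 2005 tests used millions of
    # nodes; smaller numbers here so the demo runs quickly.
    N_ATOMS, N_LINKS = 10_000, 100_000
    links = defaultdict(dict)
    for _ in range(N_LINKS):
        a, b = random.randrange(N_ATOMS), random.randrange(N_ATOMS)
        links[a][b] = random.random()            # link weight

    DECAY, SPREAD = 0.9, 0.5                     # invented parameters

    def spread_once(importance):
        # One synchronous round: decay each atom's importance, keep part of
        # it, and spread the rest along outgoing links, proportional to weight.
        nxt = defaultdict(float)
        for atom, imp in importance.items():
            nxt[atom] += imp * DECAY * (1 - SPREAD)
            nbrs = links.get(atom)
            if not nbrs:
                continue
            total = sum(nbrs.values())
            for nbr, w in nbrs.items():
                nxt[nbr] += imp * DECAY * SPREAD * (w / total)
        return nxt

    importance = defaultdict(float)
    importance[42] = 1.0                         # stimulate one atom
    for _ in range(10):
        importance = spread_once(importance)
    print(sorted(importance.items(), key=lambda kv: -kv[1])[:5])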


-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
Hmm.  Interesting.  This e-mail (and the last) leads me to guess that you
have made some major quantum leaps in NLP.  Is that correct?  You sure
haven't been talking about it . . . .
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 2:57 PM
  Subject: Re: [agi] What best evidence for fast AI?







Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Benjamin Goertzel wrote:


Ed --

Just a quick comment: Mark actually read a bunch of the proprietary, 
NDA-required Novamente documents and looked at some source code (3 years 
ago, so a lot of progress has happened since then).  Richard didn't, so 
he doesn't have the same basis of knowledge to form detailed comments on 
NM, that Mark does.


This is true, but not important to my line of argument, since of course 
I believe that a problem exists (CSP), which we have discussed on a 
number of occasions, and your position is not that you have some 
proprietary, unknown-to-me solution to the problem, but rather that you 
do not really think there is a problem.


Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Sat, Nov 10, 2007 at 10:19:44AM -0800, Jef Allbright wrote:
 as I was driving home I approached a
 truck off the side of the road, its driver pulling hard on a bar,
 tightening the straps securing the load.  Without conscious thought I
 moved over in my lane to allow for the possibility that he might slip.
  That chain of inference, and its requisite knowledge base, leading to
 a simple human behavior, are not even on the radar horizon of
 current AI technology.

?

"I see a human, better give him wide berth."  Certainly, the ability to
detect and deal with pedestrians will be required before these things
become street-legal.  

I can easily imagine that next year's grand challenge, or the one
thereafter, will explicitly require ability to deal with cyclists, 
motorcyclists, pedestrians, children and dogs. Exactly how they'd test
this, however, I don't know ... 

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Lukasz Stafiniak
On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote:

 I can easily imagine that next year's grand challenge, or the one
 thereafter, will explicitly require ability to deal with cyclists,
 motorcyclists, pedestrians, children and dogs. Exactly how they'd test
 this, however, I don't know ...

DARPA seems to be winding up the car challenges :-(

(anyone know anything to the contrary?)



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Linas Vepstas wrote:

On Sat, Nov 10, 2007 at 10:19:44AM -0800, Jef Allbright wrote:

as I was driving home I approached a
truck off the side of the road, its driver pulling hard on a bar,
tightening the straps securing the load.  Without conscious thought I
moved over in my lane to allow for the possibility that he might slip.
 That chain of inference, and its requisite knowledge base, leading to
a simple human behavior, are not even on the radar horizon of
current AI technology.


?

"I see a human, better give him wide berth."  Certainly, the ability to
detect and deal with pedestrians will be required before these things
become street-legal.  


I can easily imagine that next year's grand challenge, or the one
thereafter, will explicitly require ability to deal with cyclists, 
motorcyclists, pedestrians, children and dogs. Exactly how they'd test
this, however, I don't know ... 


The problem (essentially the frame problem) is that it is no good to 
say "Oh, we had better code for the situation of avoiding pedestrians, 
cyclists, children and dogs"; rather, the system needs to be able to 
generally model the world in such a way that it can *anticipate*, by 
itself, a general situation that looks like it is developing into a problem.


You never know what new situation might arise that might be a problem, 
and you cannot market a driverless car on the understanding that IF it 
starts killing people under particular circumstances, THEN someone will 
follow that by adding code to deal with that specific circumstance.


The whole question then becomes:  just how general are the mechanisms 
for understanding that a situation is a problem situation (like the 
one that Jef posed)?


My understanding of the existing technology is that it is ridiculously 
far from being able to represent the world in such a general way that it 
could anticipate novel hazards without using up too many pedestrians.


Absent that solution, I don't think these systems are going to be on the 
market any time soon.





Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Jef Allbright
On 11/12/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 "I see a human, better give him wide berth."  Certainly, the ability to
 detect and deal with pedestrians will be required before these things
 become street-legal.

Well, I think we'll see robotic vehicles first play a significant role
in war zones (including populated urban settings) with flashing lights
and audible warning devices advising bystanders of their
responsibility to avoid the risk.

A difficulty (and this is only my limited, personal opinion) is that
as the problems become more subtle, the corresponding requirements for
extended inference increase exponentially.

But I realize that what we're talking about here are really subtle
problems, as in really quite small.


 I can easily imagine that next year's grand challenge, or the one
 thereafter, will explicitly require ability to deal with cyclists,
 motorcyclists, pedestrians, children and dogs. Exactly how they'd test
 this, however, I don't know ...

Well it's clear from this and an earlier post of yours today that you
(among relatively few others here) have a sound grasp of the big
picture, and anything remaining is just minor detail.

Makes me wonder why I tend to make everything so complicated.

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Jef Allbright
On 11/12/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
 
  I can easily imagine that next year's grand challenge, or the one
  thereafter, will explicitly require ability to deal with cyclists,
  motorcyclists, pedestrians, children and dogs. Exactly how they'd test
  this, however, I don't know ...
 
 DARPA seems to be winding up the car challenges :-(

 (anyone know anything to the contrary?)

There's no word of a further event, and no buzz, but plenty of similar
questions at the event.

But if it's any consolation to you, Singapore has a grand challenge in
the works involving robots able to enter buildings, operate doors,
elevators, etc. and use weapons (only for defense, of course.)

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
I'm going to try to put some words into Richard's mouth here since I'm 
curious to see how close I am . . . . (while radically changing the words).

I think that Richard is not arguing about the possibility of Novamente-type 
solutions as much as he is arguing about the predictability of *very* flexible 
Novamente-type solutions as they grow larger and more complex (and the 
difficulty in getting it to not instantaneously crash-and-burn).  Indeed, I 
have heard a very faint shadow of Richard's concerns in your statements about 
the tuning problems that you had with BioMind.

Novamente looks, at times, like the very first step in an inductive proof . 
. . . except that it is in a chaotic environment rather than the nice orderly 
number system.  Pieces of the system clearly sail in calm, friendly waters but 
hooking them all up in a wild environment is another story entirely (again, 
look at your own BioMind stories).

I've got many doubts because I don't think that you have a handle on the 
order -- the big-O -- of many of the operations you are proposing (why I harp 
on scalability, modularity, etc.).  Richard is going further and saying that 
the predictability of even some of your smaller/simpler operations is 
impossible (although, as he has pointed out, many of them could be constrained 
by attractors, etc. if you were so inclined to view/treat your design that 
way).  

Personally, I believe that intelligence is *not* complex -- despite the 
fact that it does (probably necessarily) rest on top of complex pieces -- 
because those pieces' interactions are constrained enough that intelligence is 
stable.  I think that this could be built into a Novamente-type design *but* 
you have to be attempting to do so (and I think that I could convince Richard 
of that -- or else, I'd learn a lot by trying  :-).

Richard's main point is that he believes that the search space of viable 
parameters and operations for Novamente is small enough that you're not going 
to hit it by accident -- and Novamente's very flexibility is what compounds the 
problem.  Remember, life exists on the boundary between order and chaos.  Too 
much flexibility (unconstrained chaos) is as deadly as too much structure.
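
Even a one-parameter toy system makes that boundary vivid.  This is just the
textbook logistic map -- nothing to do with NM's actual parameters -- but it
shows how a small nudge to a single knob carries a system from frozen order
through periodicity into chaos:

    # Logistic map x' = r*x*(1-x): sweep r and watch the long-run behavior.
    def trajectory(r, x0=0.3, burn=500, keep=6):
        x = x0
        for _ in range(burn):            # discard the transient
            x = r * x * (1 - x)
        out = []
        for _ in range(keep):
            x = r * x * (1 - x)
            out.append(round(x, 3))
        return out

    for r in (2.8, 3.2, 3.5, 3.9):
        print(r, trajectory(r))
    # r=2.8: fixed point; r=3.2: period 2; r=3.5: period 4;
    # r=3.9: aperiodic wandering (chaos)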

I think that I see both sides of the issue and how Novamente could be 
altered/enhanced to make Richard happy (since it's almost universally flexible) 
-- but doing so would also impose many constraints that I think that you would 
be unwilling to live with since I'm not sure that you would see the point.  I 
don't think that you're ever going to be able to change his view that the 
current direction of Novamente is -- pick one:  a) a needle in an infinite 
haystack or b) too fragile to succeed -- particularly since I'm pretty sure 
that you couldn't convince me without making some serious additions to Novamente.

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 3:49 PM
  Subject: Re: [agi] What best evidence for fast AI?



  To be honest, Richard, I do wonder whether a sufficiently in-depth 
conversation
  about AGI between us would result in you changing your views about the CSP
  problem in a way that would accept the possibility of Novamente-type 
solutions. 

  But, this conversation as I'm envisioning it would take dozens of hours, and 
would
  require you to first spend 100+ hours studying detailed NM materials, so this 
seems
  unlikely to happen in the near future. 

  -- Ben


  On Nov 12, 2007 3:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Benjamin Goertzel wrote:

 Ed --

 Just a quick comment: Mark actually read a bunch of the proprietary,
 NDA-required Novamente documents and looked at some source code (3 years 
 ago, so a lot of progress has happened since then).  Richard didn't, so
 he doesn't have the same basis of knowledge to form detailed comments on
 NM, that Mark does.


This is true, but not important to my line of argument, since of course 
I believe that a problem exists (CSP), which we have discussed on a
number of occasions, and your position is not that you have some
proprietary, unknown-to-me solution to the problem, but rather that you
do not really think there is a problem. 

Richard Loosemore




Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Sun, Nov 11, 2007 at 02:16:06PM -0500, Edward W. Porter wrote:
 It's way out, but not crazy.  If humanity or some mechanical legacy of us
 ever comes out the other end of the first century after superhuman
 intelligence arrives, it or they will be ready to start playing in the
 Galactic big leagues.

Or, if Nick Bostrom is right about his simulation argument, then 
perhaps instead our simulators will reveal themselves to us.  
So far, I find Bostrom's work one of the more reasonable 
solutions to the Fermi paradox ('where are they?').

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 04:56:00PM -0500, Richard Loosemore wrote:
 Linas Vepstas wrote:
 I can easily imagine that next-years grand challenge, or the one
 thereafter, will explicitly require ability to deal with cyclists, 
 motorcyclists, pedestrians, children and dogs. Exactly how they'd test
 this, however, I don't know ... 
 
 The problem (essentially the frame problem) is that it is no good to 
 say Oh, we had better code for the situation of avoiding pedestrians, 
 cyclists, children and dogs, it is that the system needs to be able to 
 generally model the world in such a way that it can *anticipate*, by 
 itself, a general situation that looks like developing into a problem.

Yes, but there is a standard solution for the frame problem that 
has been in use for several decades now. It's those signs posted on
highway entrance ramps that state "Minimum speed 45 mph. Bicycle and
pedestrian access prohibited."

I hate to explain flip answers, but sigh, I guess I need to sometimes.
I'm saying that the solution to the frame problem can sometimes be 
to not solve it. My cognition and perception abilities are not so 
great as to be able to avoid being hit by a meteor as I drive down the
highway: in other words, my brain fails to solve that particular frame
problem as well. It is also somewhat unprepared for Mexican trucks with
bad brakes and bald tires, and so the standard solution is to make 
these illegal. Human beings, when college educated, can sometimes
*anticipate* a general situation that looks like developing into a
problem, but not always, and usually not at highway speeds.

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
 You seem to be thinking about Webmind, an AI company I was involved in 
 during the late 1990's; as opposed to Biomind

Yes, sorry, I'm laboring under a horrible cold and my brain is not all here.

 The big-O order is almost always irrelevant.  Most algorithms useful for 
 cognition are exponential-time worst-case complexity.  What matters is 
 average-case complexity over the probability distribution of problem 
 instances actually observed in the real world.  And yeah, this is very hard 
 to estimate mathematically. 

Well . . . . big-O order certainly does matter for things like lookups and 
activation where we're not talking about heuristic shortcuts and average 
complexity.  But I would certainly accept your correction for other operations 
like finding modularity and analogies -- except we don't have good heuristic 
shortcuts, etc. for them -- yet.
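
(For what it's worth, the only honest way I know to get at average-case
behavior is empirical: sample the instance distribution you actually expect
and measure.  A toy sketch -- a deliberately naive exponential search over an
invented, under-constrained 3-SAT distribution, where the observed cost stays
far below the 2^n worst case:)

    import random, time

    def solve(clauses, n, assign=()):
        # Naive backtracking SAT: exponential (2^n) worst case.
        if len(assign) == n:
            return all(any(assign[abs(l) - 1] == (l > 0) for l in c)
                       for c in clauses)
        return (solve(clauses, n, assign + (False,)) or
                solve(clauses, n, assign + (True,)))

    def random_instance(n, m):
        # Random 3-literal clauses; m well below ~4.26*n means the
        # instances are under-constrained and easy on average.
        return [tuple(random.choice([-1, 1]) * random.randint(1, n)
                      for _ in range(3)) for _ in range(m)]

    n, times = 16, []
    for _ in range(30):                  # sample the observed distribution
        inst = random_instance(n, 2 * n)
        t0 = time.perf_counter()
        solve(inst, n)
        times.append(time.perf_counter() - t0)
    print("mean", sum(times) / len(times), "max", max(times))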

  Saying a system is universally capable hardly means anything, and 
 isn't really worth saying. 

Nope.  Saying it usually forestalls a lot of silly objections.  That's really 
worthwhile.  :-)

 I believe Richard's complaints are of a quite different character than 
 yours.  

And I might be projecting . . . . :-) which is why I figured I'd run this 
out there and see how he reacted. :-)

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 5:14 PM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 5:02 PM, Mark Waser [EMAIL PROTECTED] wrote:

I'm going to try to put some words into Richard's mouth here since I'm 
curious to see how close I am . . . . (while radically changing the words).

I think that Richard is not arguing about the possibility of 
Novamente-type solutions as much as he is arguing about the predictability of 
*very* flexible Novamente-type solutions as they grow larger and more complex 
(and the difficulty in getting them not to instantaneously crash and burn).  
Indeed, I have heard a very faint shadow of Richard's concerns in your 
statements about the tuning problems that you had with BioMind.

  You seem to be thinking about Webmind, an AI company I was involved in during 
the late 1990's; as opposed to Biomind, a bioinformatics company in which I am 
currently involved, and which is doing pretty well. 

  The Webmind AI Engine was an order of magnitude more complex than the 
Novamente Cognition Engine; and this is intentional.  Many aspects of the NM 
design were specifically originated to avoid problems that we found with the 
Webmind system.  




I've got many doubts because I don't think that you have a handle on 
the order -- the big-O -- of many of the operations you are proposing (why I 
harp on scalability, modularity, etc.).

  The big-O order is almost always irrelevant.  Most algorithms useful for 
cognition are exponential-time worst-case complexity.  What matters is 
average-case complexity over the probability distribution of problem instances 
actually observed in the real world.  And yeah, this is very hard to estimate 
mathematically. 

   
  Richard is going further and saying that the predictability of even some 
of your smaller/simpler operations is impossible (although, as he has pointed 
out, many of them could be constrained by attractors, etc. if you were so 
inclined to view/treat your design that way).  

  Oh, I thought **I** was the one who pointed that out.
   

Personally, I believe that intelligence is *not* complex -- despite the 
fact that it does (probably necessarily) rest on top of complex pieces -- 
because those pieces' interactions are constrained enough that intelligence is 
stable.  I think that this could be built into a Novamente-type design *but* 
you have to be attempting to do so (and I think that I could convince Richard 
of that -- or else, I'd learn a lot by trying  :-).

  That is part of the plan, but we have a bunch of work of implementing/tuning 
components first.
   

Richard's main point is that he believes that the search space of 
viable parameters and operations for Novamente is small enough that you're not 
going to hit it by accident -- and Novamente's very flexibility is what 
compounds the problem.  

  The Webmind system had this problem.  Novamente is carefully designed not to. 
 Of course, I can't prove that it won't, though. 
   
Remember, life exists on the boundary between order and chaos.  Too much 
flexibility (unconstrained chaos) is as deadly as too much structure.

I think that I see both sides of the issue and how Novamente could be 
altered/enhanced to make Richard happy (since it's almost universally flexible) 
-- 


  Novamente is universally capable, but so are a lot of way simpler, 
pragmatically useless systems.  Saying a system is universally capable hardly 
means anything, and isn't really worth saying.  The question as you know 
is what can a system do given a pragmatic amount

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel



 I am heavily focussed on my own design at the moment, but when you talk
 about the need for 100+ hours of studying detailed NM materials, are you
 talking about publicly available documents, or proprietary information?



Proprietary info, much of which may be made public next year, though...


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Benjamin Goertzel wrote:


To be honest, Richard, I do wonder whether a sufficiently in-depth 
conversation

about AGI between us would result in you changing your views about the CSP
problem in a way that would accept the possibility of Novamente-type 
solutions.


But, this conversation as I'm envisioning it would take dozens of hours, 
and would
require you to first spend 100+ hours studying detailed NM materials, so 
this seems

unlikely to happen in the near future.


Well, I am not by any means hostile to the idea that Novamente could be 
built in such a way as to solve the CSP.  It is all a question of 
methodology and flexibility, which I don't *think* is there, but I could 
be wrong.


I am heavily focussed on my own design at the moment, but when you talk 
about the need for 100+ hours of studying detailed NM materials, are you 
talking about publicly available documents, or proprietary information?



Richard Loosemore






-- Ben

On Nov 12, 2007 3:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


Benjamin Goertzel wrote:
 
  Ed --
 
  Just a quick comment: Mark actually read a bunch of the proprietary,
  NDA-required Novamente documents and looked at some source code
(3 years
  ago, so a lot of progress has happened since then).  Richard
didn't, so
  he doesn't have the same basis of knowledge to form detailed
comments on
  NM, that Mark does.

This is true, but not important to my line of argument, since of course
I believe that a problem exists (CSP), which we have discussed on a
number of occasions, and your position is not that you have some
proprietary, unknown-to-me solution to the problem, but rather that you
do not really think there is a problem.

Richard Loosemore





Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 01:49:52PM -0500, Mark Waser wrote:
  What I thought you meant was, if a user asked I'm a small farmer in New 
  Zealand.  Tell me about horses then the system would be able to disburse 
  its relevant knowledge about horses, filtering out the irrelevant stuff.   
  What did you mean, exactly?
 
 That's a good, simple starting case.  But how do you decide how much 
 knowledge to disburse?  How do you know what is irrelevant?  How much do your 
 answers differ between a small farmer in New Zealand, a rodeo rider in the 
 West, a veterinarian in Pennsylvania, a child in Washington, a 
 bio-mechanician studying gait?  And horse is actually a *really* simple 
 concept since it refers to a very specific type of physical object.  
 
 Besides, are you really claiming that you'll be able to do this next year?  
 Sorry, but that is just plain, unadulterated BS.  If you can do that, you are 
 light-years further along than . . . . 

Eh?

I can demo a system to you today that does a very lame version of this.
And it's probably only the umpteenth system to do this, and it does it
in only a few thousand lines of code (not counting modules pulled off the
net). It's a bot on #opencyc on freenode.net (seems to be crashed at the
moment).

When you ask it about Abraham Lincoln, it will respond with a
grade-school-like essay: that Abe is a person and a male person and a
historical person and is famous. All it knows is from the opencyc db.
It will happily include irrelevant facts like Abe is a person and
a male person, but it has some ability to prune these; when you ask
again, it'll refuse to answer, with an "I already told you" response.
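
The pruning logic is about as simple as you would guess. A toy
reconstruction of that behavior (this is not the actual bot code; the fact
store below is invented, where the real thing reads the opencyc db):

    # Answer "tell me about X" from a fact store; refuse to repeat facts
    # already given in this conversation.
    FACTS = {
        "abraham lincoln": ["is a person", "is a male person",
                            "is a historical figure", "is famous"],
        "horse": ["is of genus Equus", "is a mammal", "is a herbivore"],
    }

    told = set()                      # per-conversation memory

    def tell_me_about(topic):
        facts = FACTS.get(topic.lower())
        if facts is None:
            return "I don't know anything about %s." % topic
        fresh = [f for f in facts if (topic.lower(), f) not in told]
        if not fresh:
            return "I already told you about %s." % topic
        told.update((topic.lower(), f) for f in fresh)
        return "%s: %s." % (topic, "; ".join(fresh))

    print(tell_me_about("Abraham Lincoln"))
    print(tell_me_about("Abraham Lincoln"))   # -> "I already told you ..."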

It's not AI, but it does demonstrate those things you are calling BS.

As to talking about horses, even I am not capable of maintaining a
conversation with a rodeo rider, and I live in Texas.  I once talked 
to a professional blacksmith; turns out they are required by law to 
have a degree in veterinary medicine; bet you didn't know that.

If and when you find a human who is capable of having conversations
about horses with small farmers, rodeo riders, vets, children 
and biomechanicians, I'll bet that they won't have a clue about 
galaxy formation or enzyme reactions. Don't set the bar above 
human capabilities.

--linas




Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 15:56, Richard Loosemore wrote:
 You never know what new situation might arise that might be a
 problem, and you cannot market a driverless car on the understanding
 that IF it starts killing people under particular circumstances, THEN
 someone will follow that by adding code to deal with that specific
 circumstance.

It seems that this was the way that the brain was 
progressively 'improved' via evolution. However, we want to compress a 
few billion years of evolutionary selective pressure into the next 10 
or 100 years instead. Have there been any proposed strategies that try 
to take an evolutionary approach on the magnitude that was needed for 
human brain evolution?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 06:56:51PM -0500, Mark Waser wrote:
 It will happily include irrelevant facts
 
 Which immediately makes it *not* relevant to my point.
 
 Please read my e-mails more carefully before you hop on with ignorant 
 flames.  

I read your emails, and, mixed in with some insightful and highly 
relevant commentary, there are also many flames. Repeatedly so.

Relevance is not an easy problem, nor is it obviously a hard one.
To provide relevant answers, one must have a model of who is asking.
So, in building a computer chat system, one must first deduce things
about the speaker.  This is something I've been trying to do.

Again, with my toy system, I've gotten so far as to be able to 
let the speaker proclaim that "this is boring", and have the
system remember, so that, for future conversations, the boring 
assertions are not revisited. 

Now, boring is a tricky thing: "a horse is of genus Equus" may be boring 
for a child, and yet interesting to young adults. So the problem of 
relevant answers to questions is more about creating a model of the
person one is conversing with, than it is about NLP processing,
representation of knowledge, etc. Conversations are contextual;
modelling that context is what is interesting to me.
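
A minimal sketch of what I mean by modelling the listener (all names here
are invented; the point is only that relevance lives in the speaker model,
not in the knowledge base):

    # Per-speaker model: assertions a speaker has flagged as boring are
    # suppressed for that speaker, and persist across conversations.
    class SpeakerModel:
        def __init__(self):
            self.boring = set()

        def mark_boring(self, assertion):
            self.boring.add(assertion)

        def filter(self, assertions):
            return [a for a in assertions if a not in self.boring]

    child, adult = SpeakerModel(), SpeakerModel()
    child.mark_boring("a horse is of genus Equus")

    facts = ["a horse is of genus Equus", "horses sleep standing up"]
    print(child.filter(facts))   # taxonomy suppressed for this listener
    print(adult.filter(facts))   # same fact still offered to another listener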

The result of hooking up a reasoning system, a knowledge base like
OpenCyc or SUMO, an NLP parser, and a homebrew contextualizer is
not AGI.  It's little more than a son-et-lumière show.  But it 
already does the things that you are claiming to be unadulterated BS.

 And regarding
 If and when you find a human who is capable of having conversations
 about horses with small farmers, rodeo riders, vets, children
 and biomechanicians, I'll bet that they won't have a clue about
 galaxy formation or enzyme reactions. Don't set the bar above
 human capabilities.
 
 Go meet your average librarian.  They won't know the information off the 
 top of their heads (yet), but they'll certainly be able to get it to you -- 

Go meet Google. Or Wikipedia. Cheeses.

 and the average librarian fifteen years from now *will* be able to. 

When the average librarian is able to answer veterinary questions to
the satisfaction of a licensing board conducting an oral examination,
then we will be living in the era of agi, won't we?

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
:-)  I don't think I've ever known you to intentionally spout BS . . . .

A well-architected statistical-NLP-based information-retrieval system (a 
WASNLPBIRS, for short) would require an identification (probably an exemplar) 
of the cluster(s) that matched each of the portfolios and would return a mixed 
conglomerate of data rather than any sort of coherent explanation (other than 
the explanations present in the data cluster).  The WASNLPBIRS certainly 
wouldn't be able to condense the data to a nicely readable format or perform 
any other real operations on the information.
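
To be concrete about what a WASNLPBIRS buys you -- and how little -- here is
the skeleton of one: the user profile is just an exemplar document, and
retrieval is TF-IDF cosine ranking against it.  (All data below is invented;
a real system would add clustering, smoothing, etc., but the character of
the output -- ranked passages, nothing more -- is the same.)

    import math
    from collections import Counter

    DOCS = [
        "shoeing and pasture care for farm horses",
        "rodeo bronc riding techniques and scoring",
        "equine gait analysis and joint biomechanics",
    ]

    def tfidf_vectors(docs):
        toks = [d.split() for d in docs]
        df = Counter(w for t in toks for w in set(t))
        n = len(docs)
        return [{w: c * math.log((1 + n) / (1 + df[w]))
                 for w, c in Counter(t).items()} for t in toks]

    def cosine(u, v):
        dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    profile = "small farm pasture horses"       # exemplar for the NZ farmer
    vecs = tfidf_vectors(DOCS + [profile])
    query, doc_vecs = vecs[-1], vecs[:-1]
    ranked = sorted(zip(DOCS, doc_vecs),
                    key=lambda dv: -cosine(query, dv[1]))
    print(ranked[0][0])    # the farming passage ranks first -- and that is
                           # all you get: matching text, no understanding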

What I meant by *really* sophisticated should have been indicated by the 
difficult end of my six point list -- which is fundamentally equivalent (in my 
opinion) to a full-up AGI since it basically requires full understanding of 
English and a WASNLPBIRS feeding it.

The problem with the WASNLPBIRS and what Linas suggested is that they look 
*really* cool at first -- and then you realize how little they actually do.

The real problem with your claim of "if a user asked 'I'm a small farmer in 
New Zealand.  Tell me about horses' then the system would be able to disburse 
its relevant knowledge about horses, filtering out the irrelevant stuff" is the 
last five words.  How do you intend to do *that*?  (And notice that what I 
kicked Linas for was precisely his "It will happily include irrelevant 
facts.")

I've had to deal with users who bought large, expensive conceptual 
clustering systems and were *VERY* unhappy once they realized what they had 
actually purchased.  I would be *real* careful if I were you about what you're 
promising because there are already a good number of companies that, a decade 
ago, had already perfected the best that that approach could offer -- and then 
died on the rope of user dissatisfaction.

Mark

  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 7:10 PM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 6:56 PM, Mark Waser [EMAIL PROTECTED] wrote:

 It will happily include irrelevant facts


Which immediately makes it *not* relevant to my point.

Please read my e-mails more carefully before you hop on with ignorant 
flames.  The latter part of your e-mail clearly makes my point -- anyone
claiming to be able to do a sophisticated version of this in the next year
is spouting plain, unadulterated BS.

  Mark, I really wasn't spouting BS.  I imagine what you are conceiving 
  when you use the label of sophisticated is more sophisticated than what
  I am hoping to launch within the next year.  

  Being sophisticated is not a precise criterion.

  Your example of giving information about horses in a contextual way 

  **
  How do you know what is irrelevant?  How much do your answers differ between 
a small farmer in New Zealand, a rodeo rider in the West, a veterinarian in 
Pennsylvania, a child in Washington, a bio-mechanician studying gait?

  **

  is in my judgment not beyond what a well-architected statistical-NLP-based 
information-retrieval system could deliver.  I don't think you even need a 
Novamente system to do this.  So is this all you mean by sophisticated?  I 
don't really understand what you intend... seriously... 

  -- Ben



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Mark Waser
   There is a big difference between being able to fake something for a 
brief period of time and being able to do it correctly.  All of your 
phrasing clearly indicates that *you* believe that your systems can only 
fake it for a brief period of time, not do it correctly.  Why are you 
belaboring the point?  I don't get it since your own points seem to deny 
your own argument.


   And even if you can do it for small, toy conversations where you 
recognize the exact same assertions -- that is nowhere close to what you're 
going to need in the real world.



When the average librarian is able to answer veterinary questions to
the satisfaction of a licensing board conducting an oral examination,
then we will be living in the era of agi, won't we?


Depends upon your definition of AGI.  That could be just a really kick-ass 
decision support system -- and I would actually bet a pretty fair chunk of 
money that 15 years *is* entirely within reason for the scenario you 
suggest.


- Original Message - 
From: Linas Vepstas [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 12, 2007 7:28 PM
Subject: Re: [agi] What best evidence for fast AI?









Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 06:22:37PM -0600, Bryan Bishop wrote:
 On Monday 12 November 2007 17:31, Linas Vepstas wrote:
  If and when you find a human who is capable of having conversations
  about horses with small farmers, rodeo riders, vets, children
  and biomechanicians, I'll bet that they won't have a clue about
  galaxy formation or enzyme reactions. Don't set the bar above
  human capabilities.
 
 Are these things supposed to be rare discussion topics? I think this 
 just serves to illustrate the wide-ranging shades of normal that some 
 of us see in the daily human population. This stuff is hard and we seem 
 to restrict so much to one or two variables.

Conversation is hard. You can talk to almost anyone about the weather,
but you won't be able to talk to a rodeo rider about horses the way
that other riders do.

You can read a book about how to be a good conversationalist,
apply the basic tricks it teaches you, with great success, and 
still remain ignorant and shallow.

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 07:46:15PM -0500, Mark Waser wrote:
There is a big difference between being able to fake something for a 
 brief period of time and being able to do it correctly.  All of your 
 phrasing clearly indicates that *you* believe that your systems can only 
 fake it for a brief period of time, not do it correctly.  Why are you 
 belaboring the point?  I don't get it since your own points seem to deny 
 your own argument.

I don't think BenG claimed to be able to build an AGI in 6 months,
but rather something that can fake it for a brief period of time.
I was rising to the defense of that.

 When the average librarian is able to answer veterinary questions to
 the satisfaction of a licensing board conducting an oral examination,
 then we will be living in the era of agi, won't we?
 
 Depends upon your definition of AGI.  That could be just a really kick-ass 
 decision support system -- and I would actually bet a pretty fair chunk of 
 money that 15 years *is* entirely within reason for the scenario you 
 suggest.

Actually, I agree with that. Or, to paraphrase, I think that
NLP-speaking know-it-all librarians are reasonable in 15 years,
as they seem to be just shiny and polished versions of things 
we have today.

So perhaps the AGI question is: what is the difference between 
a know-it-all mechano-librarian and a sentient being? 

--linas



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Mark Waser wrote:
 
Yes, sorry, I'm laboring under a horrible cold and my brain is not all here.


Same here:  I'm recovering from it now, but it was a real doozy.  (Is 
that how you spell doozy?)


Anyhow, this is all just to say that your detailed post and questions 
were very thought-provoking, but they will have to wait until tomorrow for 
an answer.



Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Bryan Bishop wrote:

On Monday 12 November 2007 15:56, Richard Loosemore wrote:

You never know what new situation might arise that might be a
problem, and you cannot market a driverless car on the understanding
that IF it starts killing people under particular circumstances, THEN
someone will follow that by adding code to deal with that specific
circumstance.


It seems that this was the way that the brain was 
progressively 'improved' via evolution. However, we want to compress a 
few billion years of evolutionary selective pressure into the next 10 
or 100 years instead. Have there been any proposed strategies that try 
to take an evolutionary approach on the magnitude that was needed for 
human brain evolution?


Yikes, no:  my strategy is to piggyback on all that work, not to try to 
duplicate it.


Even the Genetic Algorithm people don't (I think) dream of evolution on 
that scale.




Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:31, Richard Loosemore wrote:
 Yikes, no:  my strategy is to piggyback on all that work, not to try
 to duplicate it.

 Even the Genetic Algorithm people don't (I think) dream of evolution
 on that scale.

Yudkowsky recently wrote an email on preservation of the absurdity of 
the future. The method that I have proposed requires this massive 
international effort and maybe can only be started when we hit a few 
more billion births. It is not entirely absurd, however, since we would 
start the project with investigation methods known today and slowly 
improve until we have millions of people researching the millions of 
varied pathways in the brain. From what I have read of Novamente today, 
Goertzel might be hoping that the circuits in the brain are ultimately 
simple, or that some similar model -- one with simpler components building 
up to some greater actor-exchange medium -- effectively mimics the brain 
to some degree.

- Bryan


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Edward W. Porter wrote:

I'm sorry.  I guess I did misunderstand you.

If you have time I wish you could state the reasons why you find it
lacking as efficiently as has Mark Waser.

Ed Porter


I'll do my best when I respond to Mark's questions/commentary tomorrow.

Briefly, though, the complex systems paper I wrote really was my 
statement of the main problem (though seeing *how* it applies is, I 
admit, a rather big exercise for the reader).


I suspect that because of the unusual nature of my claims about the 
complex systems problem, it will probably need a book-length exposition 
to make it clear.




Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Bryan Bishop wrote:

On Monday 12 November 2007 19:31, Richard Loosemore wrote:

Yikes, no:  my strategy is to piggyback on all that work, not to try
to duplicate it.

Even the Genetic Algorithm people don't (I think) dream of evolution
on that scale.


Yudkowsky recently wrote an email on preservation of the absurdity of 
the future. The method that I have proposed requires this massive 
international effort and maybe can only be started when we hit a few 
more billion births. It is not entirely absurd, however, since we would 
start the project with investigation methods known today and slowly 
improve until we have millions of people researching the millions of 
varied pathways in the brain. From what I have read of Novamente today, 
Goertzel might be hoping that the circuits in the brain are ultimately 
simple, or that some similar model -- one with simpler components building 
up to some greater actor-exchange medium -- effectively mimics the brain 
to some degree.


Yudkowsky's ramblings don't cut much ice with me.

Ben is not so much interested in whether the circuits (mechanisms) in 
the brain are simple or not, since he belongs to the school that 
believes that AGI does not need to be done exactly the way the human 
mind does it.


I, on the other hand, believe that we must stick fairly closely to an 
emulation of the *cognitive* level (not neural, but much higher up).


Even with everyone on the planet running evolutionary simulations, I do 
not believe we could reinvent an intelligent system by brute force.



Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:48, Richard Loosemore wrote:
 Even with everyone on the planet running evolutionary simulations, I
 do not believe we could reinvent an intelligent system by brute
 force.

Of your message, this part is the most peculiar. Brute force is all that 
we have.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Benjamin Goertzel
On Nov 12, 2007 8:44 PM, Mark Waser [EMAIL PROTECTED] wrote:

  I don't think BenG claimed to be able to build an AGI in 6 months,
  but rather something that can fake it for a brief period of time.
  I was rising to the defense of that.

 No.  Ben is honest in his claims and he said that this was for a paying
 client.  It isn't going to be a deliberate "fake it for a brief period of
 time."  He'll definitely deliver something cool -- I was much more
 objecting
 to some possibly dangerous, over-enthusiastic phrasing.


Yes, for this NLP-related contract I mentioned,
we are going for something cool in a limited domain, but built in a way
allowing generalizability according to the NM architecture...

My hope is that we won't get too bogged down in NLP particularities
(as we've already got lots of code for handling this), and after 6 months or
so we'll be able to spend most of our time on the project dealing with
interesting PLN inference stuff.

Also, the NLP code we make on this project will likely be integrable w/
our virtual-animal code, thus allowing us to create virtual embodied agents
w/ linguistic ability as I've discussed before.

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Richard Loosemore

Bryan Bishop wrote:

On Monday 12 November 2007 19:48, Richard Loosemore wrote:

Even with everyone on the planet running evolutionary simulations, I
do not believe we could reinvent an intelligent system by brute
force.


Of your message, this part is the most peculiar. Brute force is all that 
we have.


We might be talking at cross purposes...

I didn't intend to suggest that there was a brute-force and a non-brute-force 
way to duplicate evolution, with the brute-force method being 
infeasible.  I was just trying to say that it would be such a 
gigantic project that I do not think it feasible.


That's a bit of a judgment call, I guess, but since I think there are 
much more viable alternatives, I don't feel pressed to get a more 
accurate handle on just how difficult it would be.


If anyone were to throw that quantity of resources at the AGI problem 
(recruiting all of the planet), heck, I could get it done in about 3 
years. ;-)




Richard Loosemore



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 22:16, Richard Loosemore wrote:
 If anyone were to throw that quantity of resources at the AGI problem
 (recruiting all of the planet), heck, I could get it done in about 3
 years. ;-)

I have done some research on this topic in the last hour and have found 
that a Connectome Project is in fact in its very early stages out 
there on the internet:

http://iic.harvard.edu/projects/connectome.html
http://acenetica.blogspot.com/2005/11/human-connectome.html
http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
http://www.indiana.edu/~cortex/connectome_plos.pdf

- Bryan



RE: [agi] What best evidence for fast AI?

2007-11-11 Thread Edward W. Porter
Ben said -- the possibility of dramatic, rapid, shocking success in
robotics is LOWER
than in cognition

That's why I tell people the value of manual labor will not be impacted as
soon by the AGI revolution as the value of mind labor.

Ed Porter



-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 10, 2007 5:29 PM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?






I'm impressed with the certainty of some of the views expressed here,
nothing like I get talking to people actually building robots.

- Jef




Robotics involves a lot of difficulties regarding sensor and actuator
mechanics and data-processing. Whether these need to be solved to
create AGI is a matter of much contention.  Some, like Rodney Brooks,
think so.  Others, like me, doubt it -- though I think embodiment does
have
a lot to offer an AGI system, hence my current focus on virtual
embodiment...

Still, in spite of the hurdles, the solvability of the problems facing
humanoid
robotics w/in the next few decades seems pretty clear to me --- if
sufficient
resources are devoted to the problem (and it's not clear they will be).

I think that, compared to fundamental progress in AGI cognition,

-- our certitude in dramatic robotics progress can be greater, under
assumptions
of adequate funding

-- the possibility of dramatic, rapid, shocking success in robotics is
LOWER
than in cognition

-- Ben G



Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Jef Allbright
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote:

 Ben said -- the possibility of dramatic, rapid, shocking success in
 robotics is LOWER than in cognition

 That's why I tell people the value of manual labor will not be impacted as
 soon by the AGI revolution as the value of mind labor.

Both valid points -- emphasizing possibility leading to dramatic,
shocking success -- but this does not invalidate the (in my opinion)
greater near-term *probability* of accelerating development and
practical deployment of robotics and its broad impact on society.

Robotics (like all physical technologies) will hit a ceiling defined
by intelligence.

Machine intelligence surpassing human capabilities in general will be
far more dramatic, rapid, and shocking than any previous technology.

But we do not yet have a complete, verifiable theory, let alone a
practical design.

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel


 But we do not yet have a complete, verifiable theory, let alone a
 practical design.

 - Jef


To be more accurate, we don't have a practical design that is commonly
accepted in the AGI research community.

I believe that I *do* have a practical design for AGI and I am working hard
toward getting it implemented.

This practical design is based on a theory that is fairly complete, but not
easily verifiable using current technology.  The verification, it seems,
will
come via actually getting the AGI built!

-- Ben G


Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard,




 Even Ben Goertzel, in a recent comment, said something to the effect
 that the only good reason to believe that his model is going to function
 as advertised is that *when* it is working we will be able to see that
 it really does work:


The above paragraph is a distortion of what I said, and misrepresents my
own thoughts and beliefs.

I think that, after the Novamente design and the ideas underlying it are
carefully studied by a suitably trained individual, the hypothesis that it
will
lead to a human-level AI comes to seem plausible.  But, there is no
solid proof; it's in part a matter of educated intuition.


The following quote which you gave is accurate:


 Ben Goertzel wrote:
  This practical design is based on a theory that is fairly complete, but
 not
  easily verifiable using current technology.  The verification, it seems,
 will
  come via actually getting the AGI built!

 This is a million miles short of a declaration that there are no hard
 problems left in AI.



Whether there are hard problems left in AI, conditional on the assumption
that
the Novamente design is workable, comes down to a question of semantic
interpretation.

In the completion of the detailed design and implementation of the Novamente
system,
there are around a half-dozen research problems on the PhD-thesis level
to be solved.

This means there is some hard thinking left, yet if the Novamente design is
correct, it
pertains to some well-defined and well-delimited technical questions, which
seem very likely
to be solvable.

As an example, there is the task of generalizing the MOSES algorithm (see
metacog.org)
to handle general programmatic constructs at the nodes of its internal
program trees.  Of
course this is a hard problem, yet it's a well-defined computer science
problem which (after a lot of thought) doesn't
seem likely to be hiding any deep gotchas.
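
To give a feel for the flavor of the problem, here is a toy Python sketch of
evolutionary search over tiny program trees -- purely my own illustration,
nothing like the real MOSES algorithm, its node vocabulary, or its deme-based
search; every name and number below is made up:

  import random

  OPS = ["+", "*", "max"]                    # toy node vocabulary

  def random_tree(depth=2):
      # Build a random program tree over x and small constants.
      if depth == 0:
          return random.choice(["x", random.randint(0, 3)])
      return [random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1)]

  def evaluate(tree, x):
      if tree == "x":
          return x
      if isinstance(tree, int):
          return tree
      op, left, right = tree
      l, r = evaluate(left, x), evaluate(right, x)
      return l + r if op == "+" else l * r if op == "*" else max(l, r)

  def score(tree):
      # Fitness: how well the tree approximates the target f(x) = 2x + 1.
      return -sum(abs(evaluate(tree, x) - (2 * x + 1)) for x in range(5))

  pop = [random_tree() for _ in range(50)]
  for _ in range(20):                        # crude regenerate-and-select loop
      pop.sort(key=score, reverse=True)
      pop = pop[:25] + [random_tree() for _ in range(25)]
  pop.sort(key=score, reverse=True)
  print(pop[0], score(pop[0]))

The hard part is exactly what this sketch dodges: letting arbitrary
programmatic constructs appear at the nodes while keeping the search
tractable.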

But this is research and development -- not pure development -- so one never
knows for sure...

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore

Benjamin Goertzel wrote:



Richard,
 




Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to function
as advertised is that *when* it is working we will be able to see that
it really does work:


The above paragraph is a distortion of what I said, and misrepresents my
own thoughts and beliefs.


When pressed, you always resort to a phrase equivalent to the one you 
give below:  "I think that, after the Novamente design and the ideas 
underlying it are carefully studied by a suitably trained individual, 
the hypothesis that it will lead to a human-level AI comes to seem 
plausible"


When you look carefully at this phrasing, its core is a statement that 
the best reason to believe that it will work is the *intuition* of 
someone who studies the design ... and you state that you believe that 
anyone who is suitably trained, who studies it, will have the same 
intuition that you do.  This is all well and good, but it contains no 
metric, no new analysis of the outstanding problems that we can all 
scrutinize and assess.


I would consider an appeal to the intuition of suitably trained 
individuals to be very much less than a good reason to believe that 
the model is going to function as advertised.


Thus: if someone wanted volunteers to fly in their brand-new aircraft 
design, but all they could do to reassure people that it was going to 
work were the intuitions of suitably trained individuals, then most 
rational people would refuse to fly - they would want more than intuitions.


In this light, my summary would not be a distortion of your position at 
all, but only a statement about whether an appeal to intuition counts as 
a good reason to believe.


And, of course, there are some suitably trained individuals who do not 
share your intuitions, even given the limited access they have to your 
detailed design.


I respect your optimism, and applaud your single-minded commitment to 
the project:  if it is going to work, that is the way to get it done.  I 
certainly wish you luck with it.





Richard Loosemore




Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore

Edward W. Porter wrote:

Richard,

Goertzel claims his planning indicates it is roughly 6 years x 15 
excellent, hard-working programmers, or 90 man-years, to get his 
architecture up and running.  I assume that will involve a lot of “hard” 
mental work.


By “hard problem” I mean a problem for which we don’t have what seems -- 
within the Novamente model -- to be a way of handling it at, at least, 
a roughly human level.  We won’t have proof that the problem is not hard 
until we actually get the part of the system that deals with that 
problem up and running successfully. 

Until then, you have every right to be skeptical.  But you also have the 
right, should you so choose, to open your mind up to the tremendous 
potential of the Novamente approach.




RICHARD What would be the solution of the grounding problem?
ED Not hard. As one linguist said, “Words are defined by the company 
they keep.”  Kinda like how I am guessing Google Sets works, but at more 
different levels in the gen/comp pattern hierarchy and with more 
cross-inferencing between different Google-set seeds.  The same goes not only 
for words, but for almost all concepts and sub-concepts.  Grounding is 
made out of a lifetime of experience recording such associations and 
the dynamic reactivation of those associations, both in the subconscious 
and the conscious, in response to current activations.
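
To illustrate the company-they-keep idea at toy scale -- this little Python
sketch is purely illustrative, with a made-up corpus and window size, and
nothing like the scale or machinery a real system would use:

  from collections import Counter
  import math

  corpus = "the cat sat on the mat the dog sat on the rug".split()
  window = 2                                    # arbitrary context window
  vecs = {}
  for i, w in enumerate(corpus):
      ctx = corpus[max(0, i - window):i] + corpus[i + 1:i + 1 + window]
      vecs.setdefault(w, Counter()).update(ctx)  # co-occurrence counts

  def cosine(a, b):
      dot = sum(a[k] * b[k] for k in a)
      na = math.sqrt(sum(v * v for v in a.values()))
      nb = math.sqrt(sum(v * v for v in b.values()))
      return dot / (na * nb)

  # "cat" and "dog" keep similar company, so they score as similar.
  print(cosine(vecs["cat"], vecs["dog"]))

Obviously real grounding needs a lifetime of multi-level experience, not a
twelve-word corpus, but the principle is the same.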


RICHARD What would be the solution of the problem of autonomous, 

unsupervised learning of concepts?
ED Not hard! Read Novamente (or, for a starter, my prior summaries of 
it).  That’s one of its main focuses.


RICHARD Can you find proofs that inference control engines will not 
show divergent behavior under heavy load (i.e. will they degrade 
gracefully when forced to provide answers in real time)?


ED Not totally clear.  Brain-level hardware will really help here, 
but what is six orders of magnitude against the potential for combinatorial 
explosion in dynamic activations of something as large and 
high-dimensional as world knowledge? 

This issue falls under the 
getting-it-all-to-work-together-well-automatically heading, which I said 
is non-trivial.  But Novamente directs a lot of attention to these 
problems by, among other approaches, (a) using long- and short-term 
importance metrics to guide computational resource allocation, (b) 
having a deep memory of which computational patterns have proven 
appropriate in prior similar circumstances, (c) having a gen/comp 
hierarchy of such prior computational patterns which allows them to be 
instantiated in a given case in a context-appropriate way, and (d) 
providing powerful inferencing mechanisms that go way beyond those 
commonly used in most current AIs.
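
To give a toy illustration of (a) -- my own gloss, with made-up numbers and
nothing like actual Novamente code -- in Python:

  # Each atom carries a (short-term, long-term) importance pair; each cycle
  # we attend to the highest-priority atom and decay its short-term value,
  # so attention can shift rather than lock onto one atom forever.
  atoms = {"cat": (0.9, 0.2), "mat": (0.1, 0.8), "rug": (0.4, 0.4)}
  STI_WEIGHT, DECAY = 0.7, 0.5

  def priority(sti, lti):
      return STI_WEIGHT * sti + (1 - STI_WEIGHT) * lti

  for cycle in range(3):
      focus = max(atoms, key=lambda a: priority(*atoms[a]))
      print("cycle", cycle, "spends inference effort on", focus)
      sti, lti = atoms[focus]
      atoms[focus] = (sti * DECAY, lti)       # short-term importance decays

The real problem, of course, is doing this over millions of atoms whose
importance values are themselves being updated by inference.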


I am totally confident we could get something very useful out of the 
system even if it was not as well tuned as a human brain.  There are all 
sorts of ways you could dampen the potential not only for combinatorial 
explosion, but also for instability.  We probably would start it out 
with a lot of such damping, but over time give it more freedom to 
control its own parameters.


RICHARD Are there solutions to the problems of flexible, abstract 

analogy building?
Language learning?
ED Not hard!  A Novamente-class machine would be like Hofstadter’s 
Copycat on steroids when it comes to making analogies. 

The gen/comp hierarchy of patterns would not only apply to all the 
concepts that fall directly within what we think of as NL, but also to 
the system’s world-knowledge, itself, of which such NL concepts and 
their contexts would be a part.  This includes knowledge about its own 
life-history, behavior, and the feedback it has received.  Thus, it 
would be fully capable of representing and matching concepts at the 
level humans do when understanding and communicating with NL.  The deep 
contextual grounding contained within such world knowledge and the 
ability to make inferences from it in real time would largely solve the 
hard disambiguation problems in natural language recognition, and allow 
language generation to be performed rapidly in a way that is appropriate 
to all the levels of context that humans use when speaking.



RICHARD Pragmatics?
ED Not hard! Follows from the above answer.  Understanding of 
pragmatics would result from the ability to dynamically generalize, 
from prior similar statements in prior similar contexts, what those 
prior contexts contained.





RICHARD Ben Goertzel wrote:
Goertzel This practical design is based on a theory that is 
fairly complete, but not easily verifiable using current technology.  
The verification, it seems, will come via actually getting the AGI built!


ED  You and Ben are totally correct.  None of this will be proven 
until it has actually been shown to work.  But significant pieces of it 
have already been shown to work. 

I think Ben believes it will work, as do I, but we both agree it will 
not be “verifiable” until it actually does.



Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard,



 Thus: if someone wanted volunteers to fly in their brand-new aircraft
 design, but all they could do to reassure people that it was going to
 work were the intuitions of suitably trained individuals, then most
 rational people would refuse to fly - they would want more than
 intuitions.


Yeah, sure.  I wouldn't trust the Novamente design's AGI potential, at
this stage, nearly enough to allow the life of one of my kids to depend on
it.

But I trust cars and airplanes in this manner every day.

Novamente is a promising-looking R&D project, not a proven technology;
that's obvious.




 In this light, my summary would not be a distortion of your position at
 all, but only a statement about whether an appeal to intuition counts as
 a good reason to believe.



Just to be clear: the whole design doesn't have to be taken in one big
gulp of mysterious intuition.  There are plenty of well-substantiated
aspects, substantiated by math or by prototype experiments or
functionalities
of various system components.  But there are some aspects whose ability
to deliver the desired functionality is not yet
well substantiated, also.



 And, of course, there are some suitably trained individuals who do not
 share your intuitions, even given the limited access they have to your
 detailed design.


So far, no one who has taken the time to carefully study the detailed design
has
come forward and told me "I think that ain't gonna work."  Varying levels
of confidence have been expressed; and most of all, the opinion has been
expressed that the design is complicated, and even though the whole thing
seems to make a lot of sense, there are a heck of a lot of details to be
resolved.



 I respect your optimism, and applaud your single-minded commitment to
 the project:  if it is going to work, that is the way to get it done.  I
 certainly wish you luck with it.


Thanks!
Ben


Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Pei Wang
Hi,

The following was my brief reply when someone asked me recently why I
think AGI is coming:

1. New constructive theories and engineering plans on AGI begin to
appear after decades of vacancy on this topic --- AGI won't be
possible until someone begins to try

2. All proposed arguments on the impossibility of AGI failed to settle
the debate --- if something isn't proven impossible, it remains
possible

3. More and more people get disappointed by mainstream AI research
--- if you want AGI, you must work on it directly, not on a piece
cut from it arbitrarily

4. The advance of computer techniques, both in hardware and software,
make system development much easier --- an individual or a small team
can go quite far

5. The Web let the small number of AGI believers speak to and hear
from each other, and an AGI community is forming --- not only the
widely accepted opinions can be heard

6. Theoretical progress in the related cognitive sciences --- to build
AGI, one first needs to understand the "I" in it

As for the "rapid progress" part of your question, of course it will
be considered rapid compared to the last two decades, when there
wasn't much progress in this direction at all.

I don't expect the above answer to convince a wide academic audience
--- that requires a much more detailed and technical analysis.  In my
opinion, even when AGI is finally achieved, it will still take some
people some time to acknowledge its intelligence, since it will be
very different from their expectations.

Pei Wang
http://nars.wang.googlepages.com/

On Nov 10, 2007 6:41 AM, Robin Hanson [EMAIL PROTECTED] wrote:

  I've been invited to write an article for an upcoming special issue of IEEE
 Spectrum on Singularity, which in this context means rapid and large
 social change from human-level or higher artificial intelligence.   I may be
 among the most enthusiastic authors in that issue, but even I am somewhat
 skeptical.   Specifically, after ten years as an AI researcher, my
 inclination has been to see progress as very slow toward an explicitly-coded
 AI, and so to guess that the whole brain emulation approach would succeed
 first if, as it seems, that approach becomes feasible within the next
 century.

  But I want to try to make sure I've heard the best arguments on the other
 side, and my impression was that many people here expect more rapid AI
 progress.   So I am here to ask: where are the best analyses arguing the
 case for rapid (non-emulation) AI progress?   I am less interested in the
 arguments that convince you personally than arguments that can or should
 convince a wide academic audience.

  [I also posted this same question to the sl4 list.]


  Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
  Research Associate, Future of Humanity Institute at Oxford University
  Associate Professor of Economics, George Mason University
  MSN 1D3, Carow Hall, Fairfax VA 22030-
  703-993-2326  FAX: 703-993-2323




Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Mark Waser
 my inclination has been to see progress as very slow toward an 
 explicitly-coded AI, and so to guess that the whole brain emulation approach 
 would succeed first 

Why are you not considering a seed/learning AGI? 

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Robin Hanson


At 09:10 AM 11/10/2007, you wrote:

 my inclination has been to see progress as very slow toward an
 explicitly-coded AI, and so to guess that the whole brain emulation
 approach would succeed first

 Why are you not considering a seed/learning AGI?

That would count as non-emulation AI, which is what I intended to ask
about.

Robin Hanson [EMAIL PROTECTED]
http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford
University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326 FAX: 703-993-2323
 




RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Hi Robin.  In part it depends on what you mean by fast.
 
1. Fast - less than 10 years.
 
I do not believe there are any strong arguments for general-purpose AI being 
developed in this timeframe.  The argument here is not that it is likely, but 
rather that it is *possible*.  Some AI researchers, such as Marvin Minsky, 
believe that we already have the necessary hardware commonly available, if we 
only knew what software to write for it.  If, as seems likely, there is a large 
economic incentive for the development of this software, it seems reasonable to 
grant the possibility that it will be developed.
 
Following that line of reasoning, a computation of probability * impact 
yields a large number for even small probabilities since the impact of a 
technological singularity could be very large.  So planning for the possibility 
seems prudent.
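
To make that arithmetic explicit (the numbers are invented purely for
illustration):

  p = 0.02        # some small assumed probability of AGI within 10 years
  impact = 1e6    # assumed impact of a singularity, in arbitrary units
  print(p * impact)   # 20000.0 -- large even though p is small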
 
2. Fast - less than 50 years.
 
For this timeframe, just dust off Moravec's old computer speed chart.  On such 
a chart I think we're supposed to be at something like mouse level right now -- 
and in fact we have seen supercomputers beginning to take a shot at simulating 
mouse-brain-like structures.  It does not feel so wrong to think that the robot 
cars succeeding in the DARPA challenges are maybe up to mouse-level 
capabilities.
 
It is certainly possible that once computers surpass the raw processing power 
of the human brain by 10, 100, 1000 times, we will just be too stupid to keep 
up with their capabilities for some reason, but it seems like a more reasonable 
bet to me that the economic pressures to make somewhat good use of available 
computing resources will win out.
 
AI is often called a perpetual failure, but from this view that is not true at 
all; AI has been a spectacular success.  It's very impressive that the early 
researchers were able to get computers with nematode-level nervous systems to 
show any interesting cognitive behavior at all.  At worst, AI is keeping up 
with the available machine capabilities admirably.
 
Still, putting aside the brain simulation route, we do have to build models 
of mind that actually work.  As Pei Wang just pointed out, we are beginning to 
see models such as Ben Goertzel's Novamente that at least seem like they might 
have a shot at sufficiency.  That is not proof, but it is an indication that we 
may not be overmatched by this challenge, once the machinery becomes available.
 
If something like Moore's law continues (I suppose it's a cognitive bias to 
assume it will continue and a different bias to assume it won't), who wants to 
bet that computers 10,000, 100,000, or 1,000,000 times as powerful as our 
brains will go to waste?  Add as many zeros as you want... they cost five years 
each.
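
The "zeros cost five years each" rule of thumb, made explicit under the
(big) assumption of a sustained 10x compute gain per five years:

  factor = 1
  for years in range(0, 35, 5):
      print(f"{years:2d} years out: {factor:>9,}x today's compute")
      factor *= 10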
 
-
 
Having written that, I confess it is not completely convincing.  There are a 
lot of assumptions involved.  I don't think there *is* an objectively 
convincing argument.  That's why I never try to convince anybody... I can play 
in the intersection between engineering and wishful thinking if I want, simply 
because it amuses me more than watching football.
 
Hopefully some folks with more earnest beliefs will have better arguments for 
you.
 
 


Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Vladimir Nesov
AGI might turn out to be relatively easy to implement, if the right theory
comes along, so there's some chance of building AGI in the near
future, while there's NO chance of implementing brain emulation before
all those numerous technical details are tackled, and that can take a
really long time, which WILL take many lives.

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Robin Hanson


At 10:29 AM 11/10/2007, Derek Zahn wrote:

 2. Fast - less than 50 years.

 For this timeframe, just dust off Moravec's old computer speed chart.
 On such a chart I think we're supposed to be at something like mouse
 level right now -- and in fact we have seen supercomputers beginning
 to take a shot at simulating mouse-brain-like structures.  It does not
 feel so wrong to think that the robot cars succeeding in the DARPA
 challenges are maybe up to mouse-level capabilities. ... AI has been a
 spectacular success.  It's very impressive that the early researchers
 were able to get computers with nematode-level nervous systems to show
 any interesting cognitive behavior at all.  At worst, AI is keeping up
 with the available machine capabilities admirably.

My impression is that the cognitive performance of mice is vastly
superior to that of current robot cars.  I don't see how they could be
considered even remotely comparable.  But perhaps I have misjudged.
Has anyone attempted to itemize an inventory of mouse mental abilities,
and compared that to current robot abilities?


Robin Hanson [EMAIL PROTECTED]
http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford
University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326 FAX: 703-993-2323
 




Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 09:29, Derek Zahn wrote:
 On such a chart I think we're supposed to be at something like mouse
 level right now -- and in fact we have seen supercomputers beginning
 to take a shot at simulating mouse-brain-like structures.

Ref?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Kaj Sotala
On 11/10/07, Bryan Bishop [EMAIL PROTECTED] wrote:
 On Saturday 10 November 2007 09:29, Derek Zahn wrote:
  On such a chart I think we're supposed to be at something like mouse
  level right now -- and in fact we have seen supercomputers beginning
  to take a shot at simulating mouse-brain-like structures.
 Ref?

http://news.bbc.co.uk/2/hi/technology/6600965.stm

Somebody else can probably provide more technical details, as well as
information about where this research is now, half a year later.




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 10:07, Kaj Sotala wrote:
 http://news.bbc.co.uk/2/hi/technology/6600965.stm

 The researchers say that although the simulation shared some
 similarities with a mouse's mental make-up in terms of nerves and
 connections it lacked the structures seen in real mice brains.  

Looks like they were just simulating eight million neurons with up to 
6.3k synapses each. How's that necessarily a mouse simulation, anyway?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Mark Waser

Looks like they were just simulating eight million neurons with up to
6.3k synapses each. How's that necessarily a mouse simulation, anyway?


It really isn't because the individual neuron behavior is so *vastly* 
simplified.  It is, however, a necessary first step and likely to teach us 
*a lot*.








RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Bryan Bishop:
 Looks like they were just simulating eight million neurons with up to
 6.3k synapses each. How's that necessarily a mouse simulation, anyway?

It isn't.  Nobody said it was necessarily a mouse simulation.  I said it was 
a simulation of a mouse-brain-like structure.  Unfortunately, not enough is 
yet known about specific connectivity, so the best that can be done is play 
with structures of similar scale in anticipation of further advances.
 


Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 11:31, Derek Zahn wrote:
 Unfortunately, not enough is yet known about specific connectivity so
 the best that can be done is play with structures of similar scale in
 anticipation of further advances.

What signs will tell us that we do know enough about the architecture of 
the mouse brain to simulate it to some degree of usefulness?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote:

  My impression is that the cognitive performance of mice is vastly superior
 to that of current robot cars.   I don't see how they could be considered
 even remotely comparable.   But perhaps I have misjudged.  Has anyone
 attempted to itemize an inventory of mouse mental abilities, and compared
 that to current robot abilities?

It might be worthwhile to point out that robotic technology is currently on
a rapidly advancing segment of the curve, exploiting low-hanging fruit
recently reachable by a convergence of capabilities becoming
affordable, including significant processing power, memory, batteries,
wireless comm, motors and actuators, etc.  In my opinion, the
availability of the hardware is defining the near-term potential, with
competition accelerating the rush to fill that void.  Development
beyond that level, however, proceeds at a much slower evolutionary
rate.

Much as natural language processing made substantial gains and then
leveled off distinctly below the level of human understanding,
robotics development is accelerating toward that level at which the
rate of progress will sharply plateau.

At the DARPA Urban Challenge last weekend, the optimism and flush of
rapid growth were palpable, but as I was driving home I approached a
truck off the side of the road, its driver pulling hard on a bar,
tightening the straps securing the load.  Without conscious thought I
moved over in my lane to allow for the possibility that he might slip.
That chain of inference, and its requisite knowledge base, leading to
a simple human behavior, are not even on the radar horizon of
current AI technology.

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bob Mottram
I think the media coverage of mouse brain simulation was a little
misleading.  What I think they actually achieved was to simulate many
neurons based upon the Izhikevich model on a large computer at a rate
significantly slower than real time.  As far as I know there was no
attempt to actually simulate the brain structures of a mammal.
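
For reference, the Izhikevich model itself is very cheap per neuron.  A
minimal sketch in Python/numpy -- with the standard "regular spiking"
parameters from Izhikevich (2003), a made-up input current, no synapses at
all, and nothing like the scale of the actual experiment:

  import numpy as np

  a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking parameters
  dt, steps, n = 0.5, 2000, 1000         # 0.5 ms steps, 1 s, 1000 neurons

  v = np.full(n, -65.0)                  # membrane potential (mV)
  u = b * v                              # recovery variable
  I = 10.0 * np.random.rand(n)           # constant random drive (arbitrary)

  spikes = 0
  for _ in range(steps):
      fired = v >= 30.0                  # spike threshold
      spikes += int(fired.sum())
      v[fired] = c                       # post-spike reset
      u[fired] += d
      v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
      u += dt * a * (b * v - u)

  print(spikes, "spikes from", n, "neurons in 1 s of simulated time")

The expensive part of the reported work would have been the synapses (up to
thousands per neuron), which the sketch above omits entirely.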


On 10/11/2007, Kaj Sotala [EMAIL PROTECTED] wrote:
 http://news.bbc.co.uk/2/hi/technology/6600965.stm

 Somebody else can probably provide more technical details, as well as
 information about where this research is now, half a year later.



RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Edward W. Porter
Robin,



I am an evangelist for the view that the time for powerful AI could be
here very rapidly if there were reasonable funding for the right people.
There is a small, but increasing, number of people who pretty much
understand how to build artificial brains as powerful as that of humans --
not 100%, but probably at least 90% at an architectural level.  What is
needed is funding.  It will come, but exactly how fast, and to which
people, is the big question.  The below paper is written with the
assumption that someone -- some VCs, governments, Google, Microsoft,
Intel, some Chinese multi-billionaire -- makes a significant investment in
the right people.



I have cobbled this together rapidly from some similar prior writings, so
please forgive the typos.  I assume you will only pick through it for
ideas, so exact language is not important.



If you have any questions, please call or email me.



Ed Porter





==



The Time for Powerful General AI

is Rapidly Approaching

by Edward Porter



The time for powerful general AI is rapidly approaching.  Its beginnings
could be here in two to ten years if the right people got the right
funding.  Starting in two years it could begin providing the first in a
series of ever-more-powerful, ever-more-valuable, market-dominating
products.  In five to ten years it could be delivering true superhuman
intelligence.  In that time frame, for example, this would enable software
running on less than $3 million of hardware to write reliable code
faster than a thousand human programmers -- or, with a memory swap, to
remember every word, every concept, every stated rationale in a world-class
law library, and to reason from that knowledge hundreds to millions of
times faster than a human lawyer, depending on the exact nature of the
reasoning task.



You should be skeptical.  The AI field has been littered with false claims
before.  But for each of history's long-sought, but long-delayed,
technical breakthroughs, there has always come a time when it finally
happened.  There is strong reason to believe that for powerful machine
intelligence that time is now.



What is the evidence?  It has two major threads.



The first is that for the first time in history we have hardware with the
computational power to support near-human intelligence, and in five to
seven years the cost of hardware powerful enough to support superhuman
intelligence could be as low as $200,000 to $3,000,000, meaning that
virtually every medium-sized or larger organization will want many of them.



The second is that, due to advances in brain science and in AI itself,
there are starting to be people, like those at Novamente LLC, who have
developed reasonable and detailed architectures for how to use such
powerful hardware efficiently to create near- or super-human intelligence.




THE HARDWARE



To do computation of the general type and sophistication of the human
brain, you need something within at least several orders of magnitude of
the capacity of the human brain itself in each of three dimensions:
representational, computational, and intercommunication capacity.  You
can't have the common sense, intuition, and context-appropriateness of a
human mind unless you can represent, and rapidly make generalizations from
and inferences between, substantially all parts of world knowledge -- where
"world knowledge" is the name given to the extremely large body of
experientially derived knowledge most humans have.



Most past AI work has been done on machines that have less than one
one-millionth the capacity in one or more of these three dimensions.  This is
like trying to do what the human brain does with a brain roughly 2000
times smaller than that of a rat.



No wonder most prior attempts at human-level AI have had so many false
promises and failures.  No wonder the correct, large-hardware approaches
have been up until very recently impossible to properly demonstrate and,
thus, get funding for.  And, thus, no wonder the AI establishment does
not understand such correct approaches.



But human-level hardware is coming soon.  Systems are already available
for under ten million dollars (with roughly 4.5K 2GHz 4-core processors,
168 TeraFlops/sec, a nominal bandwidth of 4 TBytes/sec, and massive hard
disk storage) that are very roughly human-level in two out of the above
three dimensions.  These machines are very roughly 1000 times slower than
humans with regard to messaging interconnect, but they are also hundreds
of millions of times faster than humans at many of the tasks at which
machines already outperform us.
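
As a back-of-envelope illustration (using common rough textbook estimates --
roughly 1e11 neurons, 1e4 synapses per neuron, on the order of 100 Hz firing
-- which are approximations, not measurements):

  neurons, synapses_per, rate_hz = 1e11, 1e4, 100
  brain_events = neurons * synapses_per * rate_hz  # ~1e17 synaptic events/sec
  machine_flops = 168e12                           # the 168 TeraFlops system above
  print(f"brain ~{brain_events:.0e} events/s vs machine {machine_flops:.0e} flop/s")
  print(f"ratio: roughly {brain_events / machine_flops:,.0f}x")

That ratio of a few hundred is the kind of gap I mean by "within several
orders of magnitude" on the computational dimension.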



Even machines with much less hardware could provide marketable, powerful
intelligences.  AIs that were substantially sub-human at some tasks could
combine that sub-human intelligence with the skills at which computers
greatly outperform us to produce combined intelligences that could be
extremely valuable for many tasks.



Furthermore, 

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bob Mottram
On 10/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
 At the DARPA Urban Challenge last weekend, the optimism and flush of
 rapid growth was palpable, but as I was driving home I approached a
 truck off the side of the road, its driver   pulling hard on a bar,
 tightening the straps securing the load.  Without conscious thought I
 moved over in my lane to allow for the possibility that he might slip.
  That chain of inference, and its requisite knowledge base, leading to
 a simple human behavior, are not even on the radar horizon of
 current AI technology.


I was saying to someone recently that it's hard to watch something
like the recent Urban Challenge and argue convincingly that AI is not
making progress or that it's been a failure.  Admittedly the
intelligence here is not smart enough to carry out the sort of
reasoning you describe, such as "I see a large object and predict that
it may be about to fall, so I had better move out of the way."  However,
the path to this sort of ability just involves more accurate 3D
modelling of the environment, together with intelligent segmentation
and some naive physics applied.  It's the perception accuracy/modeling
which is key to being able to implement these skills, which a mouse
may or may not be capable of (I don't know enough about the cognitive
skills of mice to be able to say).



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 12:52, Edward W. Porter wrote:
 In fact, if the ITRS roadmap projections continue to be met through

What is the ITRS roadmap? Do you have a link?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Richard Loosemore

Robin Hanson wrote:
I've been invited to write an article for an upcoming special issue of 
IEEE Spectrum on Singularity, which in this context means rapid and 
large social change from human-level or higher artificial 
intelligence.   I may be among the most enthusiastic authors in that 
issue, but even I am somewhat skeptical.   Specifically, after ten years 
as an AI researcher, my inclination has been to see progress as very 
slow toward an explicitly-coded AI, and so to guess that the whole brain 
emulation approach would succeed first if, as it seems, that approach 
becomes feasible within the next century. 

But I want to try to make sure I've heard the best arguments on the 
other side, and my impression was that many people here expect more 
rapid AI progress.   So I am here to ask: where are the best analyses 
arguing the case for rapid (non-emulation) AI progress?   I am less 
interested in the arguments that convince you personally than arguments 
that can or should convince a wide academic audience.




I gave my answer to this question in a paper I presented at the 2006 
AGIRI workshop on Artificial General Intelligence [1].


Stripped to its core, the argument is that AI progress has been slow for 
a specific reason, not because the problem is intrinsically hard.  The 
reason for the slow progress is a fundamental misperception of the 
nature of the AI problem:  intelligent systems (by which I mean 
completely general intelligent systems that are capable of acquiring 
knowledge on their own initiative) *probably* contain an irreducible 
element of complexity, in the 'complex systems' sense of 'complexity'.


The two main consequences of this complexity are that (1) we would expect 
some of an AI's low-level mechanisms to have an opaque relationship to 
the AI's overall behavior (i.e. there are mechanisms down there that do 
not look like they have any bearing whatsoever on the intelligence of 
the overall system, and yet they play an indispensable role in the 
system's intelligent performance), and (2) the only way to get around 
the problems caused by (1) would be to make a systematic effort to 
emulate the human cognitive system -- not at the neural level, mark you, 
but at the cognitive level.


The final conclusion of the argument I give in the paper is an 
interesting sociology-of-science observation that bears directly on your 
question of how rapidly we could get to full AGI:  unfortunately, the AI 
community is populated with people who have an extremely strong bias 
against accepting these arguments, and this strong bias is what is 
holding back progress.  Basically, 'traditional' AI people have an 
almost theological aversion to the idea that the task of building an AI 
might involve having to learn (and deconstruct!) a vast amount of 
cognitive science, and then use an experimental-science methodology to 
find the mechanisms that really give rise to AI.  AI people are, at 
heart, mathematicians, and this is a serious problem if the only way to 
succeed has little to do with mathematics.


Looked at in this way, the answer to your question is that if a new type 
of AI comes along (what I have dubbed 'theoretical psychology' because 
of its unique relationship to AI and psychology) and if it gathers 
enough support, we could find that the progress rate of this new 
approach bears no relationship to the progress rate of AI over the last 
fifty years.


I have started the process of building the infrastructure needed to do 
this kind of work.  So far this is working well:  among other things, a 
colleague of mine (Trevor Harley) and I have started re-analyzing the 
literature of cognitive science to bring it into line with the new 
approach, and our efforts have met with some surprising early successes 
(the first fruits of this effort being a cognitive neuroscience paper 
that is currently in press [2]).  From my point of view, old-style 
cognitive science and old-style AI are both falling neatly and elegantly 
into this new framework, so my personal feeling is that a new period of 
rapid progress is just over the horizon, and that human-level AGI might 
happen in the coming decade.


If it were not for this particular way of seeing the problems of AI, I 
would be with the skeptics:  I think that conventional AI will not yield 
a singularity-class AGI for a long time (if ever), and I believe that 
the brain-emulation folks are being wildly optimistic about what they 
can achieve, because they are blind to functional-level issues, and do 
not have the resolution or in-vivo tools needed to reach their goals.


Regards


Richard Loosemore


References.

[1]  Loosemore, R.P.W. (2007). Complex Systems, Artificial Intelligence 
and Theoretical Psychology. In B. Goertzel & P. Wang, Proceedings of the 
2006 AGI Workshop. Amsterdam: IOS Press.  This can be found online at 
http://www.agiri.org/wiki/Workshop_Proceedings (chapter 11).


[2]  Loosemore, R.P.W. & Harley, T.A. Brains and Minds:  On 

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Edward W. Porter [EMAIL PROTECTED] wrote:
 There
 is a small, but increasing number of people who pretty much understand how
 to build artificial brains as powerful as that of humans, not 100% but
 probably at least 90% at an architectual level.

Being 90% certain of where to get on the path is quite different from
being 90% certain of the path.

- Jef



RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Edward W. Porter
http://www.itrs.net/reports.html

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]





