AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-07 Thread Dr. Matthias Heger
Quantum-level biases would be more general and more correct, just as quantum
physics is more general and more correct than classical physics.

The reasons why humans do not have modern-physics biases for space and time:
such biases confer no relevant survival advantage, and the resource costs of
extracting any advantage from them are probably far too high
for a biological system.

But for future AGI (not the first-level AGI), these objections won't hold.
We don't need AGI to help us with middle-level physics. We will need AGI
to make progress in domains where our innate intuitions do not hold, namely
nanotechnology and intracellular biology.
So there would be an advantage to quantum biases, and because of this
advantage the quantum biases would probably be used more often than non-quantum
biases.

And what about the resource costs? We could imagine an AGI brain the size of
a continent.
Of course not for the first-level AGI. But I am sure that future AGIs will
have quantum biases.

But as Ben said: first we should build AGI with the biases we have and
understand.

And the three main problems of AGI should be solved first:
how to obtain knowledge, how to represent knowledge, and how to use knowledge
to solve different problems in different domains.





Charles Hixson wrote:

I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than 
molecule speeds for very good reasons.






AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Dr. Matthias Heger
Good points. I would like to add a further point:

Human language is a sequence of words used to transfer patterns from
one brain into another.

When we have an AGI that understands and speaks language, then for the
first time there will be an exchange of patterns between an artificial
brain and a human brain.

So human language is not only useful for teaching the AGI. We will also
have easy access to the AGI's top-level patterns when it speaks
to us. Human language will be useful for understanding what is going on inside
the AGI, which makes testing easier.

-Matthias

 

Ben G wrote:

 

A few points...

1)  
Closely associating embodiment with GOFAI is just flat-out historically
wrong.  GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.  

2)
Embodiment-based approaches to AGI certainly have not been extensively tried
and failed in any serious way, simply because of the primitive nature of
real and virtual robotic technology.  Even right now, real and virtual
robotics tech is not *quite* there to enable us to pursue embodiment-based
AGI in a really tractable way.  For instance, humanoid robots like the Nao
cost $20K and have all sorts of serious actuator problems ... and virtual
world tech is not built to allow fine-grained AI control of agent skeletons
... etc.   It would be more accurate to say that we're 5-15 years away from
a condition where embodiment-based AGI can be tried out without immense
time-wastage on making not-quite-ready supporting technologies work.

3)
I do not think that either humanlike NL understanding or humanlike embodiment
is in any way necessary for AGI.   I just think that they seem to represent the
shortest path to getting there, because they represent a path that **we
understand reasonably well** ... and because AGIs following this path will
be able to **learn from us** reasonably easily, as opposed to AGIs built on
fundamentally nonhuman principles.

To put it simply, once an AGI can understand human language we can teach it
stuff.  This will be very helpful to it.  We have a lot of experience in
teaching agents with humanlike bodies, communicating using human language.
Then it can teach us stuff too.   And human language is just riddled through
and through with metaphors to embodiment, suggesting that solving the
disambiguation problems in linguistics will be much easier for a system with
vaguely humanlike embodied experience.

4)
I have articulated a detailed proposal for how to make an AGI using the OCP
design together with linguistic communication and virtual embodiment.
Rather than just a promising-looking assemblage of in-development
technologies, the proposal is grounded in a coherent holistic theory of how
minds work.

What I don't see in your counterproposal is any kind of grounding of your
ideas in a theory of mind.  That is: why should I believe that loosely
coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
lead to an AGI capable of adapting to fundamentally new situations not
envisioned by any of its programmers?   I'm not completely ruling out the
possibility that this kind of strategy could work, but where's the beef?  I'm
not asking for a proof, I'm asking for a coherent, detailed argument as to
why this kind of approach could lead to a generally-intelligent mind.

5)
It sometimes feels to me like the reason so little progress is made toward
AGI is that the 2000 people on the planet who are passionate about it are
moving in 4000 different directions ;-) ... 

OpenCog is an attempt to get a substantial number of AGI enthusiasts all
moving in the same direction, without claiming this is the **only** possible
workable direction.  

Eventually, supporting technologies will advance enough that some smart guy
can build an AGI on his own in a year of hacking.  I don't think we're at
that stage yet -- but I think we're at the stage where a team of a couple
dozen could do it in 5-10 years.  However, if that level of effort can't be
systematically summoned (thru gov't grants, industry funding, open-source
volunteerism or wherever) then maybe AGI won't come about till the
supporting technologies develop further.  My hope is that we can overcome
the existing collective-psychology and practical-economic obstacles that
hold us back from creating AGI together, and build a beneficial AGI ASAP ...

-- Ben G









On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:

On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

So, it has, in fact, been tried before.  It has, in fact, always failed.
Your comments about the quality of Ben's approach are noted.  Maybe you're
right.  But, it's not germane to my argument which is that those parts of
Ben G.'s approach that call for human-level NLU, and that propose embodiment
(or virtual embodiment) as a way to achieve human-level NLU, have been tried

AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulsen wrote:

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  


We only know one human-level intelligence which works. And it works with 
embodiment. So for this reason, it seems to be a useful approach.

But, of course, if we always use humans as a guide to develop AGI, then we 
will probably end up with limitations similar to those we observe in humans.

I think an AGI that is to be useful for us must be a very good scientist, 
physicist, and mathematician. Are the human kind of learning by experience and 
the human kind of intelligence good for this job? I don't think so. 

Most people on this planet are very poor in these disciplines, and I don't think 
that this is only a question of education. There seems to be a very subtle 
fine-tuning of genes necessary to change the level of intelligence from a monkey to 
the average human. And there is an even more subtle fine-tuning necessary to 
obtain a good mathematician.

This is discouraging for the development of AGI because it shows that human-level 
intelligence is not only a question of the right architecture but seems 
to be more a question of the right fine-tuning of some parameters. Even 
if we knew that we had the right software architecture, the really hard 
problems would still remain.

We know that humans can swim. But who would create a swimming machine by 
following the example of the human anatomy?

Similarly, we know that some humans can be scientists. But is it really the best 
way to follow the example of humans to create an artificial scientist? 
Probably not.
If your goal is to create an artificial scientist for nanotechnology, is it 
a good strategy to let this artificial agent walk through an artificial garden 
with trees and clouds and so on? Is this the best way to make progress in 
nanotechnology, economics, and so on? Probably not.

But if we have no idea how to do it better, we have no choice but to 
follow the example of human intelligence.





Re: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:

Brad Paulsen wrote: More generally, as long as AGI designers and
developers insist on simulating human intelligence, they will have to
deal with the AI-complete problem of natural language understanding.
Looking for new approaches to this problem, many researchers (including
prominent members of this list) have turned to embodiment (or virtual
embodiment) for help. 

We only know one human-level intelligence which works. And it works
with embodiment. So for this reason, it seems to be a useful approach.


Dr. Heger,

First, I don't subscribe to the belief that AGI 1.0 need be human-level. 
In fact, my belief is just the opposite: I don't think it should be 
human-level.  And, with all due respect, sir, while we may know that 
human-level intelligence works, we have no idea (or very little idea) *how* 
it works.  That, to me, seems to be the more important issue.


If we did have a better idea of how human-level intelligence worked, we'd 
probably have built a human-like AGI by now.  Instead, for all we know, 
human intelligence (and not just the absence or presence or degree thereof 
in any individual human) may be at the bottom end of the scale in the 
universe of all possible intelligences.


You are also, again with all due respect, incorrect in saying that we have 
no other intelligence with which to work.  We have the digital computer. 
It can beat expert humans at the game of chess.  It can beat any human at 
arithmetic -- both in speed and accuracy.  Unlike humans, it remembers 
anything ever stored in its memory and can recall anything in its memory 
with 100% accuracy.  It never shows up to work tired or hung over.  It 
never calls in sick.  On the other hand, what a digital computer doesn't do 
well at present, things like understanding human natural language and being 
creative (in a non-random way), humans do very well.


So, why are we so hell-bent on building an AGI in our own image?  It just 
doesn't make sense when it is manifestly clear that we know how to do 
better.  Why aren't we designing and developing an AGI that leverages the 
strengths, rather than attempts to overcome the weaknesses, of both forms 
of intelligence?


For many tasks that would be deemed intelligent if Turing's imitation game 
had not required natural HUMAN language understanding (or the equivalent 
mimicking thereof), we have already created a non-human intelligence 
superior to human-level intelligence.  It thinks nothing like we do 
(base-2 vs. base-10) yet, for many feats of intelligence only humans used 
to be able to perform, it is a far superior intelligence.  And, please 
note, not only is human-like embodiment *not* required by this 
intelligence, it would be (as it is to the human chess player) a HINDRANCE.



But, of course, if we always use humans as a guide to develop AGI,
then we will probably end up with limitations similar to those we observe in humans.

I actually don't have a problem with using human-level intelligence as an 
*inspiration* for AGI 1.0.  Digital computers were certainly inspired by 
human-level intelligence.  I do, however, have a problem with using 
human-level intelligence as a *destination* for AGI 1.0.



I think an AGI that is to be useful for us must be a very good
scientist, physicist, and mathematician. Are the human kind of learning by
experience and the human kind of intelligence good for this job? I don't
think so.

Most people on this planet are very poor in these disciplines, and I
don't think that this is only a question of education. There seems to be
a very subtle fine-tuning of genes necessary to change the level of
intelligence from a monkey to the average human. And there is an even
more subtle fine-tuning necessary to obtain a good mathematician.



One must be careful with arguments from genetics.  The average chimp will 
beat any human for lunch in a short-term memory contest.  I don't care how 
good the human contestant is at mathematics.  Since judgments about 
intelligence are always relative to the environment in which it is evinced, 
in an environment where those with good short-term memory skills thrive and 
those without barely survive, chimps sure look like the higher intelligence.



This is discouraging for the development of AGI because it shows that
human-level intelligence is not only a question of the right
architecture but seems to be more a question of the right fine-tuning
of some parameters. Even if we knew that we had the right software
architecture, the really hard problems would still remain.



Perhaps.  But your first sentence should have read, "This is discouraging 
for the development of HUMAN-LEVEL AGI because ..."  It doesn't really 
matter to a non-human AGI.



We know that humans can swim. But who would create a swimming machine by
following the example of the human anatomy?



Yes.  Just as we didn't design airplanes to fly bird-like, even though 
the bird was our best source of inspiration for developing 

Re: AW: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:


Brad Paulsen wrote: Fortunately, as I argued above, we do have other
choices.  We don't have to settle for human-like. 

I do not see other choices so far. Chess is AI but not AGI.


Yes, I agree, but IFF by AGI you mean human-level AGI.  As you point out
below, a lot has to do with how we define AGI.


Your idea of an incremental roadmap to human-level AGI is interesting,
but I think everyone who tries to build a human-level AGI already makes
incremental experiments and first steps with non-human-level AGI in
order to produce a proof of concept. I think Ben Goertzel has done some
experiments with artificial dogs and other non-human agents.

So it is only a matter of definition what we mean by AGI 1.0. I think we
already have AGI 0.0.x, and the goal is AGI 1.0, which can do the same
as a human.

Why this goal? An AGI which functionally resembles a human (not
necessarily in algorithmic details) has the great advantage that
everyone can communicate with this agent.

Yes, but everyone can communicate with a baby AGI right now using a 
highly-restricted subset of human natural language.  The system I'm working 
on now uses the simple declarative sentence, the propositional (if/then) 
rule statement, and the simple query as its NL interface.  The declarations of 
fact and propositional rules are upgraded, internally, to FOL+.  AI-agent 
to AI-agent communication is done entirely in FOL+.  I had considered using 
Prolog for the human interface, but the non-success of Prolog in a community 
(computer programmers) already expert at communicating with computers using 
formal languages caused me to drop back to the more difficult, but not 
impossible, semi-formal NL approach.
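
To make the flavor of that restricted interface concrete, here is a minimal, 
purely illustrative sketch.  The actual grammar, predicate vocabulary, and 
FOL+ representation are not described in this thread, so every name and 
pattern below (Fact, Rule, parse_declarative, and so on) is a hypothetical 
stand-in for the general idea: declarative sentences, if/then rules, and 
simple queries are mapped into a small logical form that agents can exchange 
and reason over.

    from dataclasses import dataclass
    import re

    # Hypothetical internal logical form (a stand-in for the "FOL+" mentioned
    # in the post); the real representation is not described in the thread.
    @dataclass(frozen=True)
    class Fact:
        predicate: str          # e.g. "man", "mortal"
        subject: str            # e.g. "Socrates", or a variable such as "X"

    @dataclass(frozen=True)
    class Rule:
        antecedent: Fact
        consequent: Fact

    IS_A = re.compile(r"^\s*(\w+) is (?:an? )?(\w+)[.?]?\s*$", re.IGNORECASE)

    def parse_declarative(text):
        """'Socrates is a man.' -> Fact('man', 'Socrates')"""
        m = IS_A.match(text)
        if not m:
            raise ValueError(f"outside the restricted subset: {text!r}")
        return Fact(m.group(2).lower(), m.group(1))

    def parse_rule(text):
        """'If X is a man then X is mortal.' -> Rule(...)"""
        m = re.match(r"^\s*if (.+?) then (.+)$", text, re.IGNORECASE)
        if not m:
            raise ValueError(f"not an if/then rule: {text!r}")
        return Rule(parse_declarative(m.group(1)), parse_declarative(m.group(2)))

    def answer(question, facts, rules):
        """'Is Socrates mortal?' -> True, by one step of rule application."""
        m = re.match(r"^\s*is (\w+) (?:an? )?(\w+)\??\s*$", question, re.IGNORECASE)
        if not m:
            raise ValueError(f"not a simple query: {question!r}")
        goal = Fact(m.group(2).lower(), m.group(1))
        if goal in facts:
            return True
        # Does some rule's consequent match the goal, with its antecedent
        # satisfied by a known fact about the same subject?
        return any(rule.consequent.predicate == goal.predicate
                   and Fact(rule.antecedent.predicate, goal.subject) in facts
                   for rule in rules)

    if __name__ == "__main__":
        kb = {parse_declarative("Socrates is a man.")}
        rules = [parse_rule("If X is a man then X is mortal.")]
        print(answer("Is Socrates mortal?", kb, rules))   # True

Whether the real FOL+ allows multi-argument predicates, negation, quantifiers, 
and so on is not stated; the point is only that a tightly restricted NL front 
end plus a formal internal form sidesteps the full NLU problem.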


We don't need to crack the entire NLU problem to be able to communicate 
with AGI's in a semi-formalized version of natural human language.  Sure, 
it can get tedious. just as talking to a two-year old human child can get 
tedious (unless it's your kid, of course: then, it's fascinating!).  Does 
it impress people at demos?  The average person?  Yep, it pretty much 
does.  Even though it's far from finished at this time.  Skeptical AGHI 
designers and developers?  Not so much.  But, I'm working on that!


The question I'm raising in this thread is more one of priorities and 
allocation of scarce resources.  Engineers and scientists comprise only 
about 1% of the world's population.  Is human-level NLU worth the resources 
it has consumed, and will continue to consume, in the pre-AGI-1.0 stage? 
Even if we eventually succeed, would it be worth the enormous cost? 
Wouldn't it be wiser to go with the strengths of both humans and 
computers during this (or any other) stage of AGI development?


Getting digital computers to understand natural human language at 
human-level has proven itself to be an AI-complete problem.  Do we need 
another fifty years of failure to achieve NLU using computers to finally 
accept this?  Developing NLU for AGI 1.0 is not playing to the strengths of 
the digital computer or of humans (who only take about three years to gain 
a basic grasp of language and continue to improve that grasp as they age 
into adulthood).


Computers calculate better than do humans.  Humans are natural language 
experts.  IMHO, saying that the first version of AGI should include 
enabling computers to understand human language like humans is just about 
as silly as saying the first version of AGI should include enabling humans 
to be able to calculate like computers.


IMHO, embodiment is another losing proposition where AGI 1.0 is concerned. 
 For all we know, embodiment won't work until we can produce an artificial 
bowel movement.  It's the "To think like Einstein, you have to stink like 
Einstein" theory.  Well, I don't want AGI 1.0 to think like Einstein.  I 
want it to think BETTER than Einstein (and without the odoriferous 
side-effect, thank you very much).



It would be interesting for me to know which set of abilities you want to have
in AGI 1.0.

Well, we (humanity) need, first, to decide *why* we want to create another 
form of intelligence.  And the answer has to be something other than 
"because we can."  What benefits do we propose should issue to humanity 
from such an expensive pursuit?  In other words, what does 
"human-beneficial AGI" really mean?


Only once we have ironed out our differences in that regard (or, at least, 
have produced a compromise on a list of core abilities), should we start 
thinking about an implementation.  In general, though, when it comes to 
implementation, we need to start small and play to our strengths.


For example, people who want to build AGHI tend to look down their noses at 
classic, narrow-AI successes such as expert (production) systems (Ben G. is 
NOT in this group, BTW).  This has prevented these folks from even 
considering using this technology to achieve AGI 1.0.  I *am* (proudly and 
loudly) using this technology to build bootstrapping intelligent agents 
for AGI.
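
For readers who have not worked with the classic expert-system technology 
being referred to, a forward-chaining production system is simple enough to 
sketch in a few lines.  This is a toy illustration, not the actual agent code 
mentioned above; the rule format and working-memory representation are 
invented for the example, and a real shell (OPS5, CLIPS, etc.) adds pattern 
variables, conflict resolution, and retraction.

    # Toy forward-chaining production system in the spirit of classic
    # expert-system shells: rules match patterns in a working memory of
    # facts and assert new facts until quiescence.  Representations here
    # are invented for illustration only.

    # A fact is a tuple, e.g. ("temperature", "high"); a production rule
    # pairs a set of condition facts with a set of facts to assert.
    RULES = [
        (frozenset({("temperature", "high"), ("pressure", "rising")}),
         frozenset({("valve", "open")})),
        (frozenset({("valve", "open")}),
         frozenset({("alarm", "on")})),
    ]

    def run(working_memory, rules=RULES):
        """Match-act cycle: fire every rule whose conditions hold, and
        repeat until no rule adds anything new (quiescence)."""
        changed = True
        while changed:
            changed = False
            for conditions, assertions in rules:
                if conditions <= working_memory and not assertions <= working_memory:
                    working_memory |= assertions
                    changed = True
        return working_memory

    if __name__ == "__main__":
        wm = {("temperature", "high"), ("pressure", "rising")}
        print(sorted(run(wm)))
        # [('alarm', 'on'), ('pressure', 'rising'),
        #  ('temperature', 'high'), ('valve', 'open')]

The appeal for bootstrapping is exactly what the paragraph above suggests: 
the machinery is well understood, cheap to run, and easy to inspect, even if 
it is narrow on its own.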