RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn


 Some might say that if they get conservation of mass 
 and Newton's laws then they skipped all the useless stuff!
 OK, but those some probably don't include any preschool 
 teachers or educational theorists. That hypothesis is completely at odds 
 with my own intuition 
 from having raised 3 kids and spent probably hundreds of hours 
 helping out in daycare centers, preschools, kindergartens, etc.
 
Sorry, that was just kind of a joke.  Probably nobody actually holds the opinion 
I was lampooning, though I do see similar things said sometimes, as if inferring 
minimum-description-length, root-level reductionisms were a realistic approach to 
learning to deal with the world.  It might even be true, but the humor was 
supposed to come from juxtaposing that idea with the AGI preschool.




RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn

Ben: Right.  My intuition is that we don't need to simulate the dynamics of 
fluids, powders and the like in our virtual world to make it adequate for 
teaching AGIs humanlike, human-level AGI.  But this could be wrong.
 
I suppose it depends on what kids actually learn when making cakes, skipping 
rocks, and making a mess with play-dough.  Some might say that if they get 
conservation of mass and Newton's laws then they skipped all the useless stuff!
 
I think I agree with the plausibility of something you have said many times:  
that there may be many paths to AGI that are not similar at all to human 
development -- abstract paths to modelling the universe, teasing meaning from 
the sheer statistics of a Chinese/Chinese dictionary of the raw HTML internet, 
who knows what.
 
But in the case where we are trying to roughly follow the stages of human 
development, with the goal of producing human-like linguistic and reasoning 
capabilities, I very much fear that any significant simplification of the 
universe will provide an insufficient basis for the large sensory concept set 
underlying language and analogical reasoning (both gross and fine).  I think 
you're throwing the baby out with the bathwater.  But, as you say, this 
could be wrong.
 
It's really the only critique I have of the AGI preschool idea, which I do like 
because we can all relate to it very easily.  At any rate, if it turns out to 
be a valid criticism, the symptom will be that an insufficiently rich set of 
concepts develops to support the range of capabilities needed; at that 
point the simulations can be adjusted to be more complete and realistic and to 
provide more human sensory modalities.  I guess it will be disappointing if 
building an adequate virtual world turns out to be as difficult and expensive 
as building high-quality robots -- but at least it's easier to clean up after 
cake-baking.
 
 




RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn

Oh, and because I am interested in the potential of high-fidelity physical 
simulation as a basis for AI research, I did spend some time recently looking 
into options.  Unfortunately the results, from my perspective, were 
disappointing.
 
The common open-source physics libraries like ODE, Newton, and so on, have 
marginal feature sets and frankly cannot scale very well performance-wise.  
Once I even wrote a little application whose purpose was to see whether a human 
being could learn to control an ankle joint to compensate for an impulse event 
and stabilize a simple body model (that is, to keep it from falling over) by 
applying torques to the ankle.  I was curious to see (through introspection) 
how humans learn to act as process controllers.  
http://happyrobots.com/anklegame.zip for anybody bored enough to care.  It 
wasn't a very good test of the question, so I didn't really get a satisfactory 
answer.  I did discover, though, that a game built around more appealing cases 
of the player learning to control physics-inspired processes could be quite 
absorbing.
 
Beyond that, the most promising avenue seems to be physics libraries tied to 
graphics hardware being worked on by the hardware companies to help sell 
their stream processors.  The best example is Nvidia, who bought PhysX and 
ported it to their latest cards, giving a huge performance boost.  Intel has 
bought Havok and I can only imagine that they are planning on using that as the 
interface to some Larrabee-based physics engine.  I'm sure that ATI is working 
on something similar for their newer (very impressive) stream processing cards.
 
At this stage, though, despite some interesting features and rapidly improving 
performance, it is still not possible to do things like get realistic sensor 
maps for a simulated soft hand/arm, and complex object modifications like 
bending and breaking are barely dreamed of in those frameworks.  Complex 
multi-body interactions (like realistic behavior when dropping or otherwise 
playing with a ring of keys or realistic baby toys) have a long way to go.
 
Basically, I fear those of us who are interested in this are just waiting to 
ride the game-development coattails, and it will be a few years at least until 
performance that even begins to interest me is available.
 
Just my opinions on the situation.
 




RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Derek Zahn

Hi Ben.
 
 OTOH, if one wants to go the virtual-robotics direction (as is my intuition), 
 then it is possible to bypass many of the lower-level perception/actuation 
 issues and focus on preschool-level learning, reasoning and conceptual 
 creation.
 
And yet, in your paper (which I enjoyed), you emphasize the importance of not 
providing a simplistic environment (with the screwdriver example).  Without 
facing the low-level sensory world (either through robotics or through very 
advanced simulations feeding senses essentially equivalent to those of humans), 
I wonder if a targeted human-like AGI will be able to acquire the necessary 
concepts that children absorb and use as much of the metaphorical basis for 
their thought -- slippery, soft, hot, hard, rough, sharp, and on and on.
 
I assume you have some sort of middle ground in mind... what's your thinking 
about how much you can cheat in this way (beyond what is conveniently doable, 
I mean)?
 
Thanks!
 
 




RE: [agi] The Future of AGI

2008-11-26 Thread Derek Zahn

Although a lot of AI-type research focuses on natural language interfaces 
between computer systems and their human users, computers have the ability to 
create visual images (which people can't do in real-time beyond gestures and 
facial expressions).  Building computer systems that generate pictures or 
videos as their way of communicating with us could be a very lucrative addition 
to computer applications that include cognitive models of their users (instead 
of focusing solely on generating natural language), because most of us do 
process visual information so well.
 
This is really narrow AI I suppose, though it's kind of on the borderline.  It 
does seem like one of the ways to commercialize incremental progress toward AGI.
 
Derek Zahn
supermodelling.net




RE: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Derek Zahn

Pei Wang: --- I have problem with each of these assumptions and beliefs, 
though I don't think anyone can convince someone who just got a big grant 
that they are moving in a wrong direction. ;-)
With his other posts about the Singularity Summit and his invention of the word 
Synaptronics, Modha certainly seems to be a kindred spirit to many on this 
list.
 
I think what he's trying to do with this project (to the extent I understand 
it) seems like a reasonably promising approach (not really to AGI as such, but 
experimenting with soft computing substrates is kind of a cool enterprise to 
me).  Let a thousand flowers bloom.
 
However, when he says things on his blog like "In my opinion, there are three 
reasons why the time is now ripe to begin to draw inspiration from structure, 
dynamics, function, and behavior of the brain for developing novel computing 
architectures and cognitive systems." -- I despair again.
 
Dr. Wang, if you want to get some funding maybe you should start promoting NARS 
as a theory of the brain :)
 




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Richard,
 
As a general rule, I find discussions about consciousness, qualia, and so forth 
to be unhelpful, frustrating, and unnecessary.  However, I enjoyed this paper a 
great deal.  Thanks for writing it.  Because of my inclinations on these 
matters, I am not an expert on the history of thought on the topic, or its 
current status among philosophers, but I find your account to be credible and 
reasonably clear.  I'm not particularly repulsed by the idea that ... our most 
immediate, subjective experience of the world is, in some sense, an artifact 
produced by the operation of the brain, so searching for a more satisfying 
conclusion is not really high up on my priority list.  Still, I don't see 
anything immediately objectionable in your analysis.
 
I am not certain about the distinguishing power of your falsifiable 
predictions, but only because I would need to give that considerably more 
thought.
 
I look forward to being in the audience when you present the paper at AGI-09.
 
Derek Zahn
agiblog.net




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Oh, one other thing I forgot to mention.  To reach my cheerful conclusion about 
your paper, I have to be willing to accept your model of cognition.  I'm pretty 
easy-going about granting premises, by which I mean that I'm normally willing to 
go along with architectural suggestions to see where they lead.  But I will be 
curious to see whether others are also willing to go along with you on your 
"generic" cognitive system model.




RE: AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Derek Zahn
Matthias Heger:

 
 If chess is so easy because it is completely described, complete information 
 about 
 state available, fully deterministic etc. then the more important it is that 
 your AGI 
 can learn such an easy task before you try something more difficult.
 
Chess is not easy.  Becoming good at chess is something that most humans 
never accomplish, and none accomplish without years of training in background 
material.  The question is whether chess is representative of the domains we 
want AGIs to master.  I think a case could be made either way.
 
I don't want to be discouraging -- any concrete demonstration of AGI ideas is 
of great interest, even in formal toy domains.
 




RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
As somebody who considers consciousness, qualia, and so on to be poorly-defined 
anthropomorphic mind-traps, I am not interested in any such discussions.  Other 
people are, and I have no problem ignoring them, like I ignore a number of 
individual cranks and critics who post things of similarly low interest.
 
I think a forum divided into topic areas would be better than this mailing list 
for many different reasons, but if you don't want to move to that setup and if 
you want to police the posts more actively (this list, according to agiri.org, 
is already supposed to be about technical aspects of particular AGI approaches), 
it won't bother me.
I do like to see different perspectives on issues of common interest if they 
are of high quality.  That is subjective, though.  For example, I consider Matt 
Mahoney and Richard Loosemore to contribute very interesting material, even if 
I do not agree with their conclusions.  Others may consider Mike Tintner and 
Steve Richfield to have useful things to say, when I do not.



Date: Wed, 15 Oct 2008 15:18:14 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
 
By the way, I'm avoiding responding to this thread till a little time has 
passed and a larger number of lurkers have had time to pipe up if they wish 
to... ben




RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
I bet if you tried very hard to move the group to the forum (for example, by 
only posting there yourself and periodically urging people to use it), people 
could be moved there.  Right now, nobody posts there because nobody else posts 
there; if one wants one's stuff to be read, one sends it to the high traffic 
location unless there's a reason not to.

Date: Wed, 15 Oct 2008 16:00:45 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [agi] META: A possible re-focusing of this list
 
There is already a forum site on agiri.org.  Nobody uses it.  So, just 
setting up a forum site is not the answer... ben g




RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
How about this:
 
Those who *do* think it's worthwhile to move to the forum:  Instead of posting 
email responses to the mailing list, post them to the forum and then post a 
link to the response to the email list, thus encouraging threads to continue in 
the more advanced venue.
 
I shall do this myself from now on.  I have not participated much on this list 
lately due to my current work schedule but will make an effort to do so.  If 
used, I do think the forum could help solve some of these META issues.
 
 




RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Derek Zahn
Oh, also:
 
When I try to register a forum account, it says:
 
Sorry, an error occurred. If you are unsure on how to use a feature, or don't 
know why you got this error message, try looking through the help files for 
more information.

The error returned was:
To register, please send your request to [EMAIL PROTECTED] Please include your 
desired username. A random password will be sent back to you. 
---
A forum that won't let people register isn't likely to catch on.
 




RE: [agi] Advocacy Is no Excuse for Exaggeration

2008-10-14 Thread Derek Zahn
I am reminded of this:
 
http://www.serve.com/bonzai/monty/classics/MissAnneElk



Date: Tue, 14 Oct 2008 17:14:39 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [agi] Advocacy Is no Excuse for Exaggeration
 
OK, but you have not yet explained what your theory of consciousness is, nor 
what the physical mechanism nor role for consciousness that you propose is ... 
you've just alluded obscurely to these things.  So it's hard to react except 
with raised eyebrows and skepticism!!  ben g




RE: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Derek Zahn
It has been explained many times to Tintner that even though computer hardware 
works with a particular set of primitive operations running in sequence, a 
hardwired set of primitive logical operations operating in sequence is NOT the 
theory of intelligence that any AGI researchers are proposing (to my 
knowledge).  A computer is just a system for holding a theory of intelligence 
which does not look like those primitives (at least not since the view that 
intelligence consists of simple interpretations of atomic tokens representing 
physical objects in small numbers of relationships with other such tokens was 
given up decades ago as insufficient).  As an example, the representational 
mechanisms in Novamente and the dynamics of the mind agents that operate on them 
are probably better thought of as churning masses of probability relationships 
with varying and often non-specific semantic interpretations than as Tintner's 
narrow view of what a computer is -- although I do not yet understand Novamente 
in detail.  He has to ignore all such efforts, though, because if he paid 
attention he would have to stop saying that NONE of us understand ANYTHING 
about REAL intelligence, which in his view is actually based on line drawings, 
or keyboards, or other childish notions.
 
Though he's in my killfile I do see his posts when others take the bait.  So 
Mike, please try to finally understand this:  AGI researchers do not think of 
intelligence as what you think of as a computer program -- some rigid sequence 
of logical operations programmed by a designer to mimic intelligent behavior.  
We know it is deeper than that.  This has been clear to just about everybody 
for many many years.  By engaging the field at such a level you do nothing 
worthwhile.



 Date: Sat, 6 Sep 2008 15:38:59 +0100
 From: [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 
 2008/9/6 Mike Tintner [EMAIL PROTECTED]:
  Will,
 
  Yes, humans are manifestly a RADICALLY different machine paradigm - if you
  care to stand back and look at the big picture.
 
  Employ a machine of any kind and in general, you know what you're getting -
  some glitches (esp. with complex programs) etc sure - but basically, in
  general, it will do its job.
 
 What exactly is a desktop computer's job?




RE: [agi] The Necessity of Embodiment

2008-08-22 Thread Derek Zahn
By "embodied" I think people usually mean a dense sensory connection (with a 
feedback loop) to the physical world.  The feedback could be as simple as 
aiming a camera.  However, it seems to me that an AI program connected to 
YouTube could maybe have a dense enough link to the real world to charge up a 
grounded, sufficiently complete, and human-compatible set of concepts.  A large 
quantity of video of other intelligences interacting with the world could maybe 
substitute for direct interaction.




RE: [agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Derek Zahn
Ben,
 
Thanks for the large amount of work that must have gone into the production of 
the wikibook.  Along with the upcoming PLN book (now scheduled for Sept 26 
according to Amazon) and re-reading The Hidden Pattern, there should be enough 
material for a diligent student to grok your approach.
 
I think it will take some considerable time for anybody to absorb it all, so 
don't be too discouraged if there isn't a lot of visible banter about issues 
you think are important; we all come at the Big Questions of AGI from our own 
peculiar perspectives.  Even those of us who want to believe may have 
difficulty finding sufficient common ground in viewpoints to really understand 
your ideas in depth, at least for a while.
 
If there's one thing I'd like to see more of sometime soon, it would be more 
detail on the early stages of your vision of a roadmap, to help focus both 
analysis and development.
 
Great stuff!
 




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
 
Thanks again Richard for continuing to make your view on this topic clear to 
those who are curious.
 
As somebody who has tried in good faith and with limited but nonzero success to 
understand your argument, I have some comments.  They are just observations 
offered with no sarcasm or insult intended.
 
1) The presentations would be a LOT clearer if you did not always start with 
"Suppose that..." and then make up a hypothetical situation.  As a reader I 
don't care about the hypothetical situation, and it is frustrating to be forced 
into trying to figure out whether it is somehow a metaphor for what I *am* 
interested in, or what exactly the reason behind it is.  In this case, if you 
are actually talking about a theory of how evolution produced a significant 
chunk of human cognition (a society of CBs), then just say so and lead us to 
the conclusions about the actual world.  If you are not theorizing that the 
evolution/CBs thing is how human minds work, then I do not see the benefit of 
walking down the path.  Note that the basic CB idea you use here strikes me as 
a good one; it resonates with things like Minsky's Society of Mind, as well as 
the intent behind things like Hall's Sigmas and Goertzel's subgraphs.
 
2) Similarly, when you say 
 if we were able to look inside a CB system and see what the CBs are  doing 
 [Note: we can do this, to a limited extent: it is called  introspection], 
 we would notice many aspects of CB behavior ...
 
It would be a lot better if you left out the "if" and the "would".  Say "when 
we look inside this CB system..." and "we do notice many aspects..." if that is 
what you mean.  If again this is some sort of strange hypothetical universe, as 
a reader I am not very interested in speculations about it.
 
3) When you say
 
 But now, here is a little problem that we have to deal with. It turns  out 
 that the CB system built by evolution was functioning *because* 
 of all that chaotic, organized mayhem, *not* in spite of it.
 
Assuming that you are actually talking about human minds instead of a 
hypothetical universe, this is a very strong statement.  It is a theory about 
human intelligence that needs some support.  It is not necessarily a theory 
about intelligence-in-general; linking it to intelligence in general would be 
another theory requiring support.  You may or may not think that "intelligence 
in general" is a coherent concept; given your recent statements that there can 
be no formal definition of intelligence, it's hard to say whether 
intelligence that is not isomorphic to human intelligence can exist in your 
view.
 
4) Regarding:
 
 Evolution explored the space of possible intelligent mechanisms. In the 
 course 
 of doing so, it discovered a class of systems that work, but it may well be  
 that the ONLY systems in the whole universe that can function as well as  a 
 human intelligence involve a small percentage of weirdness that just  
 balances out to make the system work. There may be no cleaned-up  versions 
 that work.
 
The natural response is:  sure, this may well be, but it just as easily may 
well not be.  This is addressed in your concluding points, which say that it 
is not definite, but is very likely.  As a reader, I do not see a reason to 
suppose that this is true.  You offer only the circumstantial evidence that AI 
has failed for 50 years, but there are many other possible reasons for this:
 
- Maybe it's just hard.  Many aspects of the universe took more than 50 years 
to understand, and many are still not understood.  I personally think that if this 
is true we are unlikely to be just a few years from the solution, but it does 
seem like a reasonable viewpoint.
 
- Maybe logic just stinks as a tool for modeling the world.  It seemed 
natural, but looking at the things and processes in the human universe logically 
seems like a pretty poor idea to me.  Maybe probabilistic logic of one sort 
or another will help.  But the point here is that it might not be a complex 
systems issue; it might just be a knowledge representation and reasoning issue. 
 Perhaps generated or evolved program fragments will fare better; perhaps 
things that look like neural clusters will work; perhaps we haven't 
discovered a good way to model the universe yet.
 
- Maybe we haven't ripped the kitchen sink out of the wall yet... maybe 
intelligence will turn out to be a conglomeration of 1543 different 
representation schemes and reasoning tricks, but we've only put a fraction 
together so far and therefore only covered a small section of what intelligence 
needs to do.
 
5) Of course, the argument would be strengthened by a somewhat detailed 
suggestion of how AI research *should* proceed; you give some arguments for why 
certain (unspecified) approaches *might* not work, but nothing beyond the 
barest hint of what to do about it, which doesn't motivate anybody to give much 
more than a shrug to your comments.  I wonder what it is that you expect people 
to do in response to 

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Oh, one last point:
 
I find your thoughts in this message quite interesting personally, because I 
think that puzzling out exactly what "concept builders" need to do, and how 
they might be built to do it, is the most interesting thing in the whole world. 
 I am resistant to the idea that it is impossible -- that all efforts to do so 
are destined to produce insufficient results.  I admit to stubbornness on 
this point, and it will take strong deprogramming to stop me from taking an 
interest in recipes for the philosophers' stone.




RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread Derek Zahn
Sorry for three messages in short succession.  Regarding concept builders, I 
have been writing in my bumbling way about this (and will continue to muse on 
fundamental issues) in my little blog:
 
http://agiblog.net




RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Derek Zahn
I agree that the hardware advances are inspirational, and it seems possible 
that just having huge hardware around could change the way people think and 
encourage new ideas.
 
But what I'm really looking forward to is somebody producing a very impressive 
general intelligence result that was just really annoying because it took 10 
days of computing instead of an hour.
 
Seems to me that all the known AGI researchers are in theory, design, or system 
building phases; I don't think any of them are CPU-bound at present -- and no 
fair pointing to Goedel Machines or AIXI either, which will ALWAYS be 
resource-starved :)




RE: [agi] Approximations of Knowledge

2008-06-25 Thread Derek Zahn
Richard,
 
If I can make a guess at where Jim is coming from:
 
Clearly, intelligent systems CAN be produced.  Assuming we can define 
"intelligent system" well enough to recognize one, we can generate systems at 
random until one is found.  That is impractical, however.  So, we can look at 
the problem as one of search optimization.  Evolution produced intelligent 
systems through a biased search, for example, so it is at least possible to 
improve on completely random generate-and-test.
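 
To make the distinction concrete, here is a toy illustration (entirely my own 
construction, not anything Jim or Richard proposed): blind generate-and-test 
versus a trivially biased search (hill climbing) over bit-string "candidate 
systems".  The fitness function is a made-up stand-in; defining the real one is 
exactly the hard part.
 
# Toy contrast between random generate-and-test and a weakly biased search.
# Everything here (bit-string encoding, target-matching fitness) is a
# placeholder assumption for illustration only.
import random

N_BITS = 40
TARGET = [random.randint(0, 1) for _ in range(N_BITS)]

def fitness(candidate):
    # Stand-in for "how good is this system": count of bits matching a target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def random_generate_and_test(evaluations):
    best = 0
    for _ in range(evaluations):
        best = max(best, fitness([random.randint(0, 1) for _ in range(N_BITS)]))
    return best

def biased_search(evaluations):
    # Hill climbing: flip one bit at a time, keep the change if it doesn't hurt.
    current = [random.randint(0, 1) for _ in range(N_BITS)]
    score = fitness(current)
    for _ in range(evaluations):
        i = random.randrange(N_BITS)
        current[i] ^= 1
        new_score = fitness(current)
        if new_score >= score:
            score = new_score
        else:
            current[i] ^= 1   # revert the flip
    return score

if __name__ == "__main__":
    random.seed(0)
    print("random generate-and-test best:", random_generate_and_test(2000))
    print("biased (hill-climbing) search best:", biased_search(2000))
 
The point is only that even a very weak bias beats blind generation; whether 
any practical bias exists for the real problem is the open question.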
 
What other ways can be used to speed up search?  Jim is suggesting some methods 
that he believes may help.  If I understand what you've said about your 
approach, you have some very different methods than what he is proposing to 
focus the search.  I do not understand exactly what Jim is proposing; 
presumably he is aiming to use his SAT solver to guide the search toward areas 
that contain partial solutions or promising partial models of some sort.
 
It seems to me very difficult to define the goal formally, very difficult to 
develop a meta-system in which a sufficiently broad class of candidate systems 
can be expressed, and very difficult to describe the "splices" or "reductions" 
or "partial models" in such a way as to smooth the fitness landscape and thus speed 
up search.  So I don't know how practical such a plan is.
 
But (again assuming I understand Jim's approach) it avoids your complex system 
arguments because it is not making any effort to predict global behavior from 
the low-level system components, it's just searching through possibilities. 
 




[agi] Roadrunner PetaVision

2008-06-16 Thread Derek Zahn
 
Brain modeling certainly does seem to be in the news lately.  Checking out 
nextbigfuture.com, I was reading about that petaflop computer Roadrunner and 
articles about it say that they are or will soon be emulating the entire visual 
cortex -- a billion neurons.  I'm sure I'm not the only one who thinks that 
knowing what the cortex does and roughly how it does it could be quite 
inspiring for AGI, so I was surprised at this news.
 
Does anybody have links to more information (besides the short recent 
mainstream news story)?  Are they just being enthusiastic about their big 
computer or do they have a sophisticated theory?
 
 




RE: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Derek Zahn
Dr. Matthias Heger:

 
 Which animal has the smallest level of intelligence 
 which still would be sufficient for a robot to  be an 
 AGI-robot? 
 
You ask for opinions, we got lots of those!
 
I believe most people on this list would consider that humans are the only 
animals with significant-enough amounts of general intelligence to warrant 
the label.
 
For example, using Goertzel's definition of intelligence -- "complex goals in 
complex environments" -- the goals of non-human animals do not seem complex in 
the same way that building an airplane is complex... although it is an 
interesting question whether, say, "becoming the dominant member of my group" 
is really a simple goal for other primates.  In any case, the generality of 
goals that can be undertaken by nonhuman animals seems very limited.
 
Human cognition evolved from less capable forms.  Do our higher cognitive 
functions depend on the lower neural machinery so much that it does not make 
sense to talk about abstract thought without the base functions shared by other 
animals?  Figuring that out is a good reason to study animal-level cognition 
even though it is not the ultimate goal.
 
 
 




RE: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Derek Zahn
 Teslas
 
Two things I think are interesting about these trends in 
high-performance commodity hardware:
 
1) The flops/bit ratio (processing power vs memory) is skyrocketing.  The 
move to parallel architectures makes the number of high-level operations per 
transistor go up, but bits of memory per transistor in large memory circuits 
doesn't go up.  The old "bit per op/s" or "byte per op/s" rules of thumb get 
really broken on things like Tesla (0.03 bit/flops).  Of course we don't know 
the ratio needed for de novo AGI or brain modeling, but the assumptions about 
processing vs memory certainly seem to be changing.
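 
For concreteness, here is roughly where a number like 0.03 bit/flops comes 
from.  The specs below are my ballpark assumptions for a Tesla-class card 
(around 4 GB of on-board memory, on the order of 1 Tflops peak single 
precision), not vendor-quoted figures:
 
# Back-of-the-envelope bit/flops ratio for an assumed Tesla-class card.
memory_bits = 4e9 * 8      # ~4 GB of device memory, in bits (assumed)
peak_flops = 1.0e12        # ~1 Tflops peak single precision (assumed)

ratio = memory_bits / peak_flops
print(f"bits per flops: {ratio:.3f}")              # ~0.03 bit/flops

# Compare against the old "byte per op/s" rule of thumb (8 bits per op/s).
rule_of_thumb = 8.0
print(f"shortfall vs rule of thumb: ~{rule_of_thumb / ratio:.0f}x less memory per op")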
 
2) Much more than previously, effective utilization of processor operations 
requires incredibly high locality (processing cores only have immediate access 
to very small memories).  This is also referred to as arithmetic intensity.  
This of course is because parallelism causes operations per second to expand 
much faster than methods for increasing memory bandwidth to large banks.  
Perhaps future 3D layering techniques will help with this problem, but for now 
AGI paradigms hoping to cache in (yuk yuk) on these hyperincreases in FLOPS 
need to be geared to high arithmetic intensity.
 
Interestingly (to me), these two things both imply that we get to 
increase the complexity of neuron and synapse models beyond the one multiply-add 
per synapse plus simple activation function model with essentially no degradation 
in performance, since the bandwidth of propagating values between neurons is the 
bottleneck much more than local processing inside the neuron model.
 




RE: [agi] Pearls Before Swine...

2008-06-08 Thread Derek Zahn
Gary Miller writes:
 
 We're thinking "Don't feed the Trolls!"
 
Yeah, typical trollish behavior -- upon failing to stir the pot with one 
approach, start adding blanket insults.  I put Steve Richfield in my killfile a 
week ago or so, but I went back to the archive to read the message in question. 
 The reason it got no response is that it is incoherent.  Seriously, I couldn't 
even understand the point of it.  Something about dreams and brains being wired 
completely differently, and some thumbnail calculations which are not included 
but apparently conclude that AGI will need the entire population of the earth 
for software maintenance... um, that's just weird rambling crackpottery.  It is 
so far away from any sort of AGI nuts and bolts that it cannot even be parsed.  
 
There are people who do not believe they are crackpots (but are certainly 
perceived that way) who then transform into trolls spouting vague blanket 
insults and whining about being ignored.  That type of unsupported fringe 
wackiness is tolerated because, frankly, the whole field is fringe to most 
people.  When it turns into vague attacks, blanket condemnation, and insults (a 
la Tintner and now Richfield) it simply isn't worth reading any more.
 
For others in danger of spiraling down the same drain, I recommend:
* Be cordial.   Note: condescending is not cordial.
* Be specific and concise.  Stick to one point.
* Do not refer to decades-old, universally ignored papers about character 
recognition as if they are AI-shaping revolutions.
* Do not drop names from some hazy "good old days".
* Attempt to limit rambling off-topic insights into marginally related material.
* If you are going to criticize instead of putting forward positive ideas (why 
you'd bother criticizing this field is beyond me, but if you must): criticize 
specific things, not "the herd" or "all of you researchers" or the field of 
AGI... as Ben pointed out earlier, no two people in this area agree on much of 
anything and they cannot be lumped together.  Criticizing specific things means 
actually reading and attempting to understand the published works of AGI 
researchers -- the test for whether you belong here is whether you are willing 
and able to actually do that.
 
Mr. Richfield may find a more receptive audience here:
 
http://www.kurzweilai.net/mindx/frame.html




RE: [agi] Ideological Interactions Need to be Studied

2008-06-02 Thread Derek Zahn
Speaking of neurons and simplicity, I think it's interesting that some of the 
"how much CPU power is needed to replicate brain function" arguments use the basic 
ANN model, assuming a MULADD per synapse, updating at say 100 times per second 
(giving a total computing power of about 10^16 OPS).  But the people actually 
trying brain replication (I am thinking of the Blue Brain cortical column 
project) simulate 30 million synapses on a 22 TF computer, four orders of 
magnitude higher (~1M op/sec/synapse vs 100).  Markram is certainly a rabid 
optimist in terms of how far this work will go and how fast, so I don't think 
he's just throwing cycles away for no reason.  Assuming their neuron model 
turns out to be adequate, I wonder how far it can be squeezed down.
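 
The comparison is just arithmetic; here it is spelled out, treating the figures 
quoted above (10^14 synapses in a human brain, 100 updates/sec, 30 million 
synapses on a 22 TF machine, with the whole machine assumed dedicated to them) 
as rough assumptions rather than measured values:
 
# The "four orders of magnitude" comparison, using the rough figures above.

# Simple ANN estimate: one multiply-add per synapse, 100 updates per second.
synapses_human_brain = 1e14
updates_per_sec = 100
simple_ann_ops = synapses_human_brain * updates_per_sec
print(f"simple ANN whole-brain estimate: {simple_ann_ops:.0e} OPS")   # ~1e16

# Blue Brain cortical column: ~30 million synapses on a ~22 teraflop machine,
# assuming (roughly) the whole machine is spent on the synapse/neuron model.
blue_brain_flops = 22e12
blue_brain_synapses = 30e6
ops_per_synapse = blue_brain_flops / blue_brain_synapses
print(f"Blue Brain ops/sec/synapse: {ops_per_synapse:.1e}")           # ~7e5

# Ratio to the 100 ops/sec/synapse of the simple model: roughly 10^4.
print(f"ratio: ~{ops_per_synapse / updates_per_sec:.0f}x")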




RE: [agi] More Info Please

2008-05-27 Thread Derek Zahn
Mark Waser:
 Does anybody have any interest in and/or willingness to program in a  
 different environment?
I haven't decided to what extent I'll participate in OpenCog myself yet.  For 
me, it depends more on whether the capabilities of the system seem worth 
exploring, which in turn depends as much on the underlying philosophy as the 
codebase.  I'm thinking of OpenCog right now as a concrete way to understand 
the ideas of Ben & company.  Frankly, I find OpenCog a rather odd open source 
project given its open-ended nature -- no target end users, no clear 
applications, no (apparent) dedicated driving personality declaring "here is 
exactly what we need to accomplish; who's with me?".  I don't mean that Ben 
isn't dedicated, but I don't envision him herding this particularly ornery 
flock and browbeating people into actually finishing and debugging code.  
Still, it's a very cool effort wherever it leads.
 
The language used doesn't particularly matter to me, so I'm willing to work in 
a different environment.  I don't have a Linux machine at the moment, so a 
requirement to work in Linux is a small but significant barrier to entry for 
me.  Screwing around with operating systems is just about my least favorite 
thing to do.
 
It's hard for me even to make my own guesses about the best way to 
go because the overall architecture isn't very clear to me yet.  I guess that 
the central data structure -- the AtomTable -- contains a persistent cached bunch 
of nodes and links that come in various types and have numbers attached to 
them.  But it's not clear to me whether the types are supposed to be part of 
the cognitive theory or not -- are OpenCog developers supposed to invent new 
node types or just work with those provided?  If they can be created, does that 
mean changing the AtomTable implementation?  Is the meaning of the numbers on 
links predetermined or can they be overloaded?  If they can be overloaded, how 
do the Mind Agents cope with the ensuing chaos?  If they can't be overloaded, 
how can the system be extended to include new ideas?
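 
To make those questions concrete, here is a toy sketch of what I imagine a 
typed node-and-link table with numbers attached might look like.  This is 
purely my own illustration, NOT the actual OpenCog AtomTable API; every class, 
field, and type name below is made up.
 
# Purely illustrative toy, not OpenCog's real AtomTable.  All names invented.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Atom:
    atom_type: str                      # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                      # nodes carry names
    outgoing: Tuple[int, ...] = ()      # links point at other atoms by handle
    strength: float = 1.0               # the "numbers attached" in question
    confidence: float = 0.0

class ToyAtomTable:
    def __init__(self):
        self._atoms = {}
        self._next_handle = 0

    def add(self, atom: Atom) -> int:
        handle = self._next_handle
        self._atoms[handle] = atom
        self._next_handle += 1
        return handle

    def get(self, handle: int) -> Atom:
        return self._atoms[handle]

# Usage: whether inventing a new atom_type string here counts as extending the
# cognitive theory, and what strength/confidence are allowed to mean, are
# exactly the open questions raised above.
table = ToyAtomTable()
cat = table.add(Atom("ConceptNode", name="cat"))
animal = table.add(Atom("ConceptNode", name="animal"))
table.add(Atom("InheritanceLink", outgoing=(cat, animal), strength=0.9, confidence=0.6))
print(table.get(cat))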
 
If for example the AtomTable is sufficiently compartmentalized so that it won't 
need changing, it would seem that porting it to another language would be a 
lower priority than providing an environment where developing Mind Agents 
(which I am assuming is the really interesting stuff) could occur in whatever 
language individual developers feel most productive in.  Or maybe the amount of 
work required to do even this is larger than the actual interest in using the 
code warrants.
 
The more interesting issues to me are things like how the Atoms (and any other 
representational structures I don't know about yet) get their semantics and how 
adding new code changes those semantics... what is the representational 
flexibility and power of the knowledge representation scheme when applied to 
some non-toy cases...  I'm sure the documentation will make things a lot 
clearer.
 
 




RE: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Derek Zahn
Steve Richfield:
 It is sure nice that this is a VIRTUAL forum, for if we were all 
 in one room together, my posting above would probably get 
 me thrashed by the true AGI believers here.
 
 Does anyone here want to throw a virtual stone?
 
Sure.
 
*plonk*
 




RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-22 Thread Derek Zahn
John Rose writes: So I feel that much of our brain mass is there due to the 
natural richness of nature, and there may be quite a bit of overkill compared 
to what would be needed in software AGI.
Are we satisfied building AGIs that cannot cope with the actual world because 
it is too rich?
 
Personally I think that without the natural richness of nature, our 
intelligence would never develop.  We climb those levels of richness like a 
rock face.
 
 




RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-22 Thread Derek Zahn
Vladimir Nesov: I think sterile texture of artificial environments hides 
the richness of their structure from our intuition, since we already have it 
imprinted by experience with the real world. Anything less than capable of 
dealing with the real world won't understand cleaned up environments also.
 
+1.
 
John Rose:
 
 Which actual world, a natural or manmade?
 
Both, at least up to the present day.  In my opinion (though I know from your 
previous post that you don't agree), I don't see a huge difference in the 
environmental complexity of the land on which New York City sits now vs 1000 
years ago.  I did not grow up, nor do I live, in a mostly featureless box.
 
I do agree with your more general point that SOME of the brain's functionality 
does not have to be duplicated in silicon to achieve AGI.  Whether it is a 
significant fraction, and whether it would need to be replaced with some other 
functionality, seems like a hard question to me.
 




[agi] AI in virtual worlds -- popular press

2008-05-19 Thread Derek Zahn
 
For those who might not have seen it yet, seems this concept is becoming rather 
popular:
 
http://www.msnbc.msn.com/id/24668099/
 



RE: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-05 Thread Derek Zahn
Richard Loosemore writes:
 
  some very useful text about the symbol grounding problem.
 
Thank you Richard.  For once I don't feel like a complete idiot.  I am familiar 
with these Harnad papers and find them quite clear.  Beyond that I understand 
your further explanation and even agree personally that it is a critical issue. 
 It is especially interesting in view of the push to embody AGI projects in 
virtual worlds, about which I am spending some time thinking anyway.



RE: [agi] AGI-08 videos

2008-05-05 Thread Derek Zahn
Richard Loosemore: So, for example, if I were organizing a conference on AGI I 
would want  people to address such questions as:
 
I find your list of questions to be quite fascinating, and I'd love to 
participate in an active list or conference devoted to these Foundations of 
Cognitive Computing type of issues.  
 
However, it doesn't particularly bother me that people are building systems 
without explicit answers to these things, because I find the systems 
themselves, and the ideas about AI that they embody, to be very cool on their 
own terms.  I am not competing with any of them for money or fame; I am hopeful 
that lessons will be learned no matter how right or wrong their approaches end 
up; and I think we're decades away from AGI systems that are intelligent enough 
to have a real impact on our society (a more useful phrase, I think, than 
"human level"), so I'm not mad that wasted effort is delaying a cure for hunger 
and disease.  People do not accept critiques that cast their entire professional 
output as worthless and their most basic premises as fatally flawed... if a 
point can be made in an understandable way from the assumed world view of 
somebody else, I do think it's worth making, but it's somewhat rare to be able 
to do so on material with which I have only a casual familiarity.
 
The deep questions that interest you (and me and, to an extent I believe, 
everybody on this list) are troublesome because they are so hard to talk about. 
 Consider your complex systems argument.  There appear to be some basic 
point-of-view differences that make communication on these topics difficult.  
It's not all pigheadedness or ill will, I don't think.
Or (picking one of your questions at random):
 - What assumptions do we have to buy into if we go with bayesian nets  as a 
 choice of reasoning/representation formalism? And how would we go  about 
 finding out if those assumptions are valid enough to make it safe  to use 
 bayes nets?
I'm not sure how to even begin a conversation about such a question.  First we 
have to decide what a reasoning/representation formalism has to do, and I'm 
afraid everybody has a different set of premises on points like that.
 
Those debates would be highly worthwhile, but I doubt many people will bother 
with them.
 



[agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
 
I noticed yesterday that most of the videos of talks and panels from AGI-08 
have been uploaded (http://www.agi-08.org/schedule.php).  Big thanks to the 
organizers for that!
 
I have some difficulty getting into some of the papers, but the 10-ish minute 
overview talks are by and large quite good, and the panel discussions are 
particularly interesting.  I feel much better now about not going to the 
conference!  Hopefully the rest of the talks will be posted; I can't wait to 
watch them.
 
Some personal reactions to particular things:
 
* Finally, I think the field has moved beyond the need for so many papers on 
the "six secrets of AI", "five reasons AI has failed", and so on.  Even the 
obligatory "What is AI?" talk was largely redundant (although Dr. Wang's point 
-- that we will all have different definitions and we should take that into 
account when studying the work of others -- needs saying).  This is good news.  
Perhaps next year's conference won't need so many overview talks.
 
* Somehow I had this vague notion that SOAR had basically dried up and blown 
away in the 1990s, but John Laird's description of current work in SOAR was 
terrific and quite exciting.  I'll be following their progress closely.
 
* There are now quite a number of architectures with AGI-type ambitions that 
have significant implementation behind them (Novamente, SOAR, LIDA, NARS, 
OSCAR, BICA, Texai, and others).  The most interesting parts of the panels for 
me were when the people involved in building those architectures discussed what 
they have in common and how their approaches to similar problems differ.  As Ben 
Goertzel (and Sam Adams and others) point out, these architectures share quite 
a lot at the level of their boxes-and-arrows overview slides, which provides 
some context for interesting detailed discussion.  If such discussion occurred 
on this list that would be really cool; but perhaps a workshop at AGI-09 where 
the architects of these actually-existing systems discussed their similarities 
and differences and current limitations would be worthwhile.  I'd sure pay to 
see it!
 
* It was quite interesting to see that simulation/visualization as an important 
operating principle / reasoning mechanism is becoming so popular.  Ideally, I 
suppose, such modal mechanisms would do double duty in perception and 
simulation... accomplishing that and interfacing it cleanly with other 
modalities or general-purpose knowledge representation is really fascinating 
and I have a feeling we'll be seeing more along those lines.  I wonder if 
Novamente will go sort of solipsistic and absorb AGISim into itself as a modal 
reasoning module.
 
* Along those lines, there seems to be a growing (certainly not universal) 
consensus among complete-system builders that virtual embodiment is the best 
approach for providing broad knowledge support (grounding) without messing 
around with robots.  Somebody could write an excellent paper about the 
potential pitfalls of such an approach (detail, fidelity, deep causality issues 
behind appearance, function, and inter-object + inter-feature relationships, 
and so on).  If nobody else is working in detail on publishing such an analysis 
perhaps I will study those issues for some months and try to write something 
for AGI-09 about it.
 
* Stephen Reed is one of the most clear and deliberate speakers I've seen in 
this field.  It's really interesting how seeing a person talk about their 
research makes it seem more real and interesting than just reading about it.
 
* I wish Josh's Variac paper wasn't just a poster... but I suppose something 
has to get left out.  Hopefully next year there will be more concrete 
implementation/experimentation progress to report in a talk.
 
* Limiting people to 10-12 minutes makes it basically impossible to present the 
contents of a paper, so the talks turn into project overviews.  Actually I 
found that to be a GOOD thing, and hope it continues that way (as long as we 
don't get the same overview talks year after year...)
 
* Some presenters were very effective and some were not.  I encourage everybody 
to rehearse their talks to make sure that the amount of material presented is 
appropriate to the time frame, and to make sure it is presented smoothly.
 
Thanks to the organizers and all the participants.  Fantastic stuff.
 



RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
One other observation I forgot to mention:  Several people brought up the 
desirability of some kind of benchmark problem area to help compare the methods 
and effectiveness of various approaches.  For a bunch of reasons I think it 
will be difficult to define such things in a way that researchers will pay 
attention to, but if it could be done (either as simply a commonly-understood 
point of reference for discussions or as a grand challenge or anything in 
between) it would be very neat, and in my opinion beneficial to the field as a 
whole.
 
I have a suggestion for such a task:  figuring out how to operate the 
buttons-and-light system that determines whose turn it is to talk during panel 
discussions.  It may be too ambitious though, as clearly it requires superhuman 
intelligence (har har har).



RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Bob Mottram writes:
 
 I haven't watched all of the AGI-08 videos, but of those that I have seen the 
 15-minute format left me none the wiser.  With limited time I would have 
 preferred longer talks with more depth but perhaps fewer in number, 
 especially on the more mathematical topics.  Another suggestion. If there is 
 an AGI-09 perhaps part of the conference could be in Second Life, allowing 
 for longer discussion if needed. If you wanted to get really fancy you could 
 set up a projector and incorporate speakers/questioners from within SL as 
 part of the live conference.
Interesting ideas.  I'm not sure the participants would have preferred 10 
in-depth math lectures to the format actually chosen, but I agree that 
explaining papers with sufficient time would be better than what was possible 
to actually do.  I guess the only way to achieve that is to break into parallel 
topic sessions... though it does seem unfortunate for a field with "general" in 
its name to split up into special interests.
 
The Second Life idea seems like a good one, though I don't see how it helps 
the time issue.  Perhaps a pre-first-life or post-first-life meeting (a couple 
of weeks before or after the conference) in Second Life for detailed lectures 
about the papers could work, on a relaxed schedule.
 
Most people (especially scientists) have some access to a video camera these 
days; I wonder if a longer talk could be videotaped at home by the 
participants and then put up on Google Video for later viewing at leisure, in 
addition to the on-site filming.
 
I don't know if it's standard practice at conferences these days to videotape 
the sessions and make them available like this, but it's really wonderful.
 
Bob, if you wrote a paper for the conference about state of the art vision or 
robotic systems and how they relate to AGI research, that would be very cool I 
think!
 



RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore writes: "Prompted by your enthusiastic write-up, I just 
wasted one and a half hours scanning through all of the AGI-08 papers that I 
downloaded previously. I have 28 of them; they did not include anything from 
Stephen Reed, nor any NARS paper, so I guess my collection must be 
incomplete, but even so I saw absolutely nothing that makes me believe 
that a field called Artificial General Intelligence even exists yet."
 
Sorry for wasting your time Richard!  At least you got to keep your collection 
of doomed projects up to date!
 
There was no NARS paper but Pei Wang did have a paper called "What Do You Mean 
By AI?" that I'm sure you'd find horrifying.
 
Stephen Reed's talk was called "Natural Language Approach of the Texai 
Project", in which he described in some detail what he has been implementing.  
Surprisingly, I don't see a paper with that title in the paper list.
 
My post was only about the videos of the talks/panels and the impression they 
gave regarding the conference as a whole.  I'm not sure I'd bother with the 
videos though if I were you... I have a feeling you would find them 
unsatisfying.
 



RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore: I read Pei's paper and there was nothing horrifying about 
it (please  spare the sarcasm).
 
No sarcasm intended.  If I had just come to the conclusion that 28 papers in a 
row were a waste of time, I'd be horrified at the prospect of a 29th that would 
also not give me what I was looking for.  I was merely trying for a light tone 
while expressing my belief that you were unlikely to change your conclusion 
about the field of Artificial General Intelligence by spending further time 
with that paper or the videos.
 
I shall, however, spare you such glib language in future correspondence.
 



RE: [agi] AGI-08 videos

2008-05-04 Thread Derek Zahn
Richard Loosemore: "My god, Mark: I had to listen to people having a general 
discussion of grounding (the supposed theme of that workshop) without a 
single person showing the slightest sign that they had more than an amateur's 
perspective on what that concept actually means."
 
I was not at that workshop and am no expert on that topic, though I have seen 
the word used in several different ways.  Could you point at a book or article 
that does explain the concept or at least use it heavily in a correct way?  I 
would like to improve my understanding of the meaning of the grounding 
concept.
 
Note:  sometimes written words do not convey intentions very well -- I am not 
being sarcastic; I am asking for information to help improve the quality of 
discussion that you have found lacking in the past.
 



RE: [agi] help me,please for books for agi and mind in pdf

2008-05-02 Thread Derek Zahn
Bruno Frandemiche asked for online AGI-related text.
 
If you're adventurous, I'd recommend the Workshop proceedings from 2006:
 
http://www.agiri.org/wiki/Workshop_Proceedings
 
and the conference proceedings from AGI-08:
 
http://www.agi-08.org/papers



RE: [agi] An interesting project on embodied AGI

2008-04-28 Thread Derek Zahn
Thanks, what an interesting project.  Purely on the mechanical side, it shows 
how far away we are from truly flexible house-friendly robust mobile robotic 
devices.
 
I'm a big fan of the robotic approach myself.  I think it is quite likely that 
dealing with the messy flood of dirty data coming from the real world, and 
sorting it all into forms where it can be learned from effectively, will be 
very helpful in rooting out the nature of concepts and the meaning of symbols 
as applied to the actual world (rather than some non-representative 
abstraction).
 
On the other hand, if we want to build philosophers and scientists, it would be 
kind of nice if we didn't have to build them expensive bodies!  Do you have to 
be able to think like a cat to think like Descartes?  I'm glad people are 
looking for the answer from both sides of the issue.
 
A quote:
 
 Whatever the robot learns individually and socially should help 
 to develop its language skills, which in turn, should help iCub 
 interact better with its environment and pick up more knowledge 
 to learn more. Knowledge of grammar and vocabulary is also 
 likely to emerge naturally through this process.
 
I wonder if they have a theory of mind that would make this actually possible, 
or if they are just optimistic.  Sounds sort of magical to me so far.
 
If I ever actually start experimenting with AGI for my own amusement, I'll 
start by building a robot.   Of course, I've been saying that for years, and I 
am a big fan of Bob Mottram's work in this area.
 
 
 



RE: [agi] How general can be and should be AGI?

2008-04-26 Thread Derek Zahn
I assume you are referring to Mike Tintner.
 
As I described a while ago, I *plonk*ed him myself a long time ago; most mail 
programs have the ability to do that, and it's a good idea to figure out how to 
do it with your own email program.
 
He does have the ability to point at other thinkers and their papers, such as 
Lakoff and Barsalou, who have extremely interesting things to say... but his 
own contributions (beyond citing) to any conversation are infuriating.  I think 
it's about time to give up on Mike until he learns to behave again.  And 
you shouldn't use sarcasm -- he just doesn't get it.



RE: [agi] Why Symbolic Representation P.S.

2008-04-25 Thread Derek Zahn
The little Barsalou I have read so far has been quite interesting, and I think 
there are a lot of good points there, even if it is a rather extreme position.  
The issue of how concepts (which is likely a nice suitcase word lumping a lot 
of discrete or at least overlapping cognitive functions into one blob) 
originate, either from sensory data or from other concepts, is the most 
interesting thing in the world to me and trying to think about it from all 
angles, although quite time consuming, is a fun way to spend evenings.
 
I think that the following response to a particularly-important Barsalou 
article would resonate with the viewpoints of many people on this list:
 
http://www.vub.ac.be/CLEA/liane/Reviews/Barsalou.htm
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
J Andrew Rogers writes: Most arguments and disagreements over complexity are 
fundamentally  about the strict definition of the term, or the complete 
absence  thereof. The arguments tend to evaporate if everyone is forced to  
unambiguously define such terms, but where is the fun in that.
I agree with this to a point at least.  My attempt to rephrase Richard's 
argument falters because I have not yet understood his use of the term 
'complexity'.  I'd prefer a rigorous definition but will settle for a better 
general understanding of what he means.  Despite his several attempts to 
describe his meaning, I have not yet been able to grasp exactly 
what counts as complex and what does not, and, for things in between, how to 
judge the degree of complexity.
 
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
Richard:  I get tripped up on your definition of complexity:
 
 A system contains a certain amount of complexity in it if it 
 has some regularities in its overall behavior that are governed 
 by mechanisms that are so tangled that, for all practical purposes, 
 we must assume that we will never be able to find a closed-form 
 explanation of how the global arises from the local.
 
Specifically, I get tripped up on figuring out what counts as a "regularity in 
overall behavior."  Consider a craps table.  The trajectories of the dice would 
seem to have global regularities (for which craps players and normal people 
have words and phrases, like "bouncing off the back", "flying off the table", 
or whatever).  Our ability to create concepts around this activity would seem 
to imply the existence of global regularities (finding them is what we do when 
we make concepts).  Yet the behavior of those regularities is governed not just 
by physical law but by the specific configuration of the felt, the chips, the 
wind, and so forth, and all that data makes a closed-form explanation 
impractical.
 
Yet I don't get the sense that this is what you mean by a complex system.  
If it is, your contention that such systems are rare is certainly not correct, 
since many examples can easily be found.  This aspect of complexity illustrates 
the butterfly effect often used in discussions of complexity.
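 
To make the butterfly effect point concrete, here is a tiny sketch of my own 
(an illustration of sensitive dependence, not of Richard's full notion of 
tangled mechanisms): two logistic-map trajectories whose starting points differ 
by one part in a billion end up wildly different, so no practical measurement 
of the initial state supports a long-range prediction of the global behavior.
 
  # Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
  r = 3.9
  x, y = 0.400000000, 0.400000001   # starting points differ by 1e-9
  for _ in range(60):
      x = r * x * (1.0 - x)
      y = r * y * (1.0 - y)
  print(abs(x - y))   # typically of order 0.1, not of order 1e-9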
 
I'm not trying to be difficult; it's crucial for me to understand what you mean 
(versus my interpretation of what others have meant or my own internal 
definitions) if I am to follow your argument.
 



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser: Huh? Why doesn't engineering discipline address building complex 
devices? 
 
Perhaps I'm wrong about that.  Can you give me some examples where engineering 
has produced complex devices (in the sense of complex that Richard means)? 
 
 



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Me: Can you give me some examples where engineering 
 has produced complex devices (in the sense of complex 
 that Richard means)? 
Mark: Computers.  Anything that involves aerodynamics.
 
Richard, is this correct?  Are human-engineered airplanes complex in the sense 
you mean?
 



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser:
 
 I don't know what is going to be more complex than a variable-geometry-wing 
 aircraft like an F-14 Tomcat.  Literally nothing can predict its aerodynamic 
 behavior.  The avionics are purely reactive because its future behavior cannot 
 be predicted to any certainty even at computer speeds -- yet its behavior 
 envelope is small enough to be safe, provided you do have computer speeds 
 (though no human can fly it unaided).
 
I agree that this is a very sensible way to think about being complex and it 
is certainly similar to the way I think about it myself.  My embryonic 
understanding of Richard's argument suggests to me that he means something 
else, though.  If not, traditional engineering methods are often pretty good at 
taming complexity as long as they take the range of possible system states into 
account (which is what you have been saying all along).
 
Since I'm trying (with limited success) to understand his point of view, I 
might suggest that (from the point of view of his argument), the global 
regularities of the aircraft (its flight characteristics) DO have a 
sufficiently-efficacious small theory in terms of the components (the aircraft 
body, including the moveable bits).  In fact, it is exactly that small theory 
which is embedded in the control program.  Since the global regularities 
(straight-line flight, turns, and so on) are sufficiently predictable from the 
local interactions of the control surfaces with the air, the aircraft is not 
complex *in the sense that Richard is talking about*.
 
Now I suppose I've pissed everybody off, but I'm really just trying to 
understand Richard's definitions so I can follow his argument.
 



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Richard Loosemore: it makes no sense to ask "is system X complex?"  You can 
only ask how much complexity, and what role it plays in the system.
 
Yes, I apologize for my sloppy language.  When I say "is system X complex?" 
what I mean is whether the RL-complexity of the system is important in 
describing the behaviors of interest under the operating conditions being 
discussed -- in particular, whether the global behaviors have an effective small 
theory expressed in terms of local components and their interactions -- because 
my current understanding is that what you mean by complexity is the extent to 
which no such small theory is available.
 
 



RE: [agi] For robotics folks: Seeking thoughts about integration of OpenSim and Player

2008-04-21 Thread Derek Zahn
Ben Goertzel writes:
 it might be valuable to have an integration of Player/Stage/Gazebo with 
 OpenSim
 
I think this type of project is a good start toward addressing one of the major 
critiques of the virtual world approach -- the temptation to (unintentionally) 
"cheat" -- those canned animations can fool the untrained observer into 
thinking there is a lot more going on than is warranted by the actual AGI 
output ("MOVE AVATAR 3M FORWARD", e.g.).  Similarly for input... if input is 
obtained from simulated senses instead of providing direct access to the object 
model of the simulator, the resulting internal representations in the AI are 
much more convincing somehow.
 
I don't know much about OpenSim or Player/Stage/Gazebo but it would seem that 
they overlap quite a bit, in that they both include simulation as their core 
function.  I guess what you really want to do is figure out a middle layer 
between the standard player interface and the specific virtual world of 
interest (OpenSim).  My guess would be that you could re-use a bunch of code as 
part of that task but it will still be rather complex.  It would be an 
interesting GSoC project I'd think.
 
It doesn't address the other major criticisms of virtual worlds -- such as the 
fact that in current ones the objects are more iconic than realistic... by 
which I mean that the appearance and limited behavior of a mug of beer is 
purely for the purpose of being identifiable by humans with a lot of experience 
of real mugs of beer.  Its role in the virtual world is suggestive rather than 
functional and in a lot of cases I have no idea how an AI could make any useful 
sense out of it.
 
I imagine that criticism will be dealt with over a lot of time as the worlds 
become more realistic, though the computing requirements for something 
approaching the richness of real-world sensation seem daunting, as does the 
sheer volume of object data needed to realistically model something as simple 
as a baby's bedroom.
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
One more bit of ranting on this topic, to try to clarify the sort of thing I'm 
trying to understand.
 
Some dude is telling my AGI program:  There's a piece called a 'knight'.  It 
moves by going two squares in one direction and then one in a perpendicular 
direction.  And here's something neat:  Except for one other obscure case I'll 
tell you about later, it's the only piece that moves by jumping through the air 
instead of moving a square at a time on its journey.
 
When I try to think about how an intelligence works, I wonder about specific 
cases like these (and thanks to William Pearson for inventing this one) -- the 
genesis of the knight concept from this specific purely verbal exchange.  How 
could this work?  What is it about the specific word sequences and/or the 
conversational context that creates this new thing -- the Knight?  It would 
have to be a hugely complicated language processing system... so where did that 
language processing system come from?  Did somebody hardcode a model of 
language and conversation and explicitly insert "generate concept here" 
actions?  That sounds like a big job.  If it was learned (much better), how was 
it learned?  What is the internal representation of the language processing 
model that leads to this particular concept formation, and how was it 
generated?  If I can see something specific like that in a system (say 
Novamente) I can start to really understand the theory of mind it expresses.
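 
For what it's worth, here is a sketch (purely my own illustration, with made-up 
names -- not how any of these systems represents anything) of what the *end 
product* of that verbal exchange would have to amount to operationally.  The 
hard part is explaining how a learner gets from two sentences of English to 
anything functionally equivalent to this:
 
  # Hypothetical end product of the exchange: the knight's move rule.
  KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                    (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
 
  def knight_moves(file, rank):
      # Squares a knight on (file, rank) can jump to on an 8x8 board; note that
      # the "jumping over pieces" fact never even appears in the move generator.
      return [(file + df, rank + dr)
              for df, dr in KNIGHT_OFFSETS
              if 0 <= file + df < 8 and 0 <= rank + dr < 8]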
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Stephen Reed writes:

Hey Texai, let's program
[Texai] I don't know how to program, can you teach me by yourself?
Sure, first thing is that a program consists of statements that each does 
something
[Texai]  I assume by program you mean a sequence of instructions that a 
computer can interpret and execute, and that by statement you mean a line of 
code written as part of a computer program
Right.  One type of instruction is the assignment statement.
[Texai] OK, how is it interpreted?
It has two parts, one part identifies a variable that receives a copy of, or a 
reference to, an evaluated expression described by the other part of the 
instruction
[Texai] I assume by variable you mean a symbol (like x or y) that is used in 
mathematical or logical expressions to represent a variable quantity. What's an 
evaluated expression?  And under what circumstances does the situation in which 
the variable receives a copy of the evaluated expression occur, as contrasted 
with the situation in which the variable receives a reference to the evaluated 
expression?
Wow, if that turns out to be an actual transcript sent back through a time 
machine (I mean, if it works like you think), that's amazingly impressive.  
Every part of it, from knowing to ask you to teach it to do something, to 
connecting 'program' used as a verb to 'program' used as a noun, to knowing all 
about sequences of instructions, what computers are and how they work, what a 
line of code even means, and so on.  I assume these things were taught to it 
through previous teaching sessions, and I'm really eager to see that in action. 
 Of particular interest to me here is the conceptual leap from equality in a 
mathematical expression (which I guess the system already knows about) to the 
very different idea of assignment in a normal programming language.  The origin 
of a variable as a named thing that can hold a value was an interesting 
concept to communicate to undergraduate business majors back in the day when I 
taught introductory programming... you could just see them get it after 
trying analogies with mailboxes and diagrams of computer memory and whatnot.  
It had never occurred to some of them to put a number in a box for later use 
before but I clearly remember the instant of concept formation occurring in 
their fresh young minds :)
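 
Just to pin down the copy-versus-reference distinction Texai is asking about, a 
minimal sketch in ordinary Python (my own illustration, nothing to do with 
Texai's internals):
 
  # Assignment binds a variable to the value of an evaluated expression.
  x = 2 + 3          # x is now 5; this is not the mathematical claim "x equals 2 + 3"
  y = x
  x = x + 1
  print(x, y)        # 6 5 -- y kept its own value
 
  # With a mutable object, plain assignment shares a reference to one object...
  a = [1, 2, 3]
  b = a
  a.append(4)
  print(b)           # [1, 2, 3, 4]
 
  # ...while an explicit copy does not.
  c = list(a)
  a.append(5)
  print(c)           # [1, 2, 3, 4]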
 
Now the aha moment behind learning the concept of recursion is even more 
interesting...
 
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Vladimir Nesov writes: Generating concepts out of thin air is no big deal, 
if only a resource-hungry process. You can create a dozen for each episode, 
for example.
 
If I am not certain of the appropriate mechanism and circumstances for 
generating one concept, it doesn't help to suggest that a dozen get generated 
instead... now I have twelve times as many things to explain.  If you are 
suggesting that concept formation is a (perhaps stochastic) generate-and-test 
procedure, that seems like an okay idea, but the issues are then redescribed as: 
what is the generation procedure, what causes it to be invoked, what is the test 
procedure, and so on.
 
These questions cannot be answered outside the context of a particular system; 
they are just the things I'd like to see spelled out exactly as they would 
happen in Novamente or Texai or whatever, with all handwaving removed.
 
To get back to the original question of this thread, these are some of the many 
missing conceptual pieces TO ME because I cannot see the specific nuts and 
bolts solution for any proposed system.  It may in fact be that for any non-toy 
example the mechanisms and data are going to be too complicated for such 
analysis... that is, my brain is too puny and ineffective to understand (in a 
clear and relatively complete way) the inner workings of a general 
intelligence.  In that case, all I can do is hope for proof by performance.
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Richard Loosemore: I do not laugh at your misunderstanding, I laugh at the 
general  complacency; the attitude that a problem denied is a problem solved. 
I  laugh at the tragicomedic waste of effort.
I'm not sure I have ever seen anybody successfully rephrase your complexity 
argument back at you; since nobody understands what you mean it's not 
surprising that people are complacent about it.
 
I was going to wait for some more blog posts to have a go at rephrasing it 
myself, but my (probably wrong) effort would go like this:
 
1. Many things we want to build have desired properties that are described at a 
different level than the things we build them out of.  Flying is emergent in 
this sense from rivets and sheet metal, for example.  Thinking is emergent from 
neurons, for another example.
2. Some such things are complex in that the emergent properties cannot be 
predicted from the lower-level details.
3. Flying as above is not complex in this way.  In fact, all of engineering 
is the study of how to build things that are increasingly complicated but NOT 
complex.  We do not want airplanes to have complex behavior and the engineering 
methodology is expressly for squeezing complexity out.
4. Thinking must be complex.  [my understanding of why this must be true is 
lacking.  Something like: otherwise we'd be able to predict the behavior of an 
AGI which would make it useless?]
5. Therefore we have no methods for building thinking machines, since 
engineering discipline does not address how to build complex devices.  Building 
them as if they are not complex will result in poor behavior; squeezing out the 
complexity will squeeze out the thinking, and leaving it in makes traditional 
engineering impossible.
 
Not quite right I suppose, but I'll keep working at it.
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread Derek Zahn
Josh writes: You see, I happen to think that there *is* a consistent, general, 
overall  theory of the function of feedback throughout the architecture. And I 
think  that once it's understood and widely applied, a lot of the 
architectures  (repeat: a *lot* of the architectures) we have floating around 
here will  suddenly start working a lot better.
Want to share this theory? :)
 
Oh, by the way, of the ones I read so far, I thought your Variac paper was the 
most interesting one from AGI-08.  I'm particularly interested to hear more 
about sigmas and your thoughts on  transparent, composable, and robust 
programming languages.  I used to think about some slightly related topics and 
thought more in terms of evolvability and plasticity (and did not consider 
opaqueness at all) but I think your approach to thinking about things is quite 
exciting.
 
 



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-21 Thread Derek Zahn
Richard Loosemore:
 I'll try to tidy this up and put it on the blog tomorrow.
 
I'd like to pursue the discussion and will do so in that venue after your post.
 
I do think it is a very interesting issue.  Truthfully I'm more interested in 
your specific program for how to succeed than this argument about why everybody 
else will fail, but I understand that they are linked.
 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-20 Thread Derek Zahn
William Pearson writes: "Consider an AI learning chess, it is told in plain 
English that..."
 
I think the points you are striving for (assuming I understand what you mean) 
are very important and interesting.  Even the first, simplest steps toward this 
clear and (seemingly) simple task baffle me.  How does the concept of 'knight' 
poof into existence during the conversation? How does a system learn how to 
learn to play a game in the first place?  I like this task as a tool for 
considering whether a potential AGI approach is truly general -- by asking, over 
and over again, how and why that could happen for any imagining of how each 
sentence could be processed.
 
Now, Edward, I hope you are right about Novamente but I don't quite follow the 
reasoning behind your confidence.  I'm imagining that in a previous life you'd 
pointed me toward a drawing of a DaVinci flying machine, excitedly projecting 
3-8 years until we'd be flying around.  Now DaVinci's a bright guy (smarter 
than me) and it's a nice concept, and I can't prove it won't work -- I'd have 
to invent a pretty effective aerodynamic science to do so.  I still might not 
be convinced.  Absence of disproof is not necessarily strong evidence.
 
I'm looking forward to getting more info about Novamente soon and hopefully 
understand the nuts and bolts of how it could do tasks like the ones William 
wrote about.  I have some concerns about things like whether propagating truth 
values around is really a very effective modeling substrate for the world of 
objects and ideas we live in -- but since I don't understand Novamente well 
enough, there's little I can say pro or con beyond those vague intuitions (and 
the last thing I'd want to do is bug Ben with questions like "how would 
Novamente do X?  How about Y?"  He has plenty of real work to do.)
 
 



RE: [agi] associative processing

2008-04-17 Thread Derek Zahn
Steve Richfield writes:
 
 Hmm, I haven't seen a reference to those core publications. Is there a 
 semi-official list?
 
This list is maintained by the Artificial General Intelligence Research 
Institute.  See www.agiri.org.  On that site there are several semi-official 
lists -- under "Publications" and "Instead of an AGI Textbook".
 
Certainly there is very little agreement (on anything!) amongst the 
idiosyncratic group of people who post on this list and I did not intend to 
dissuade you from presenting your ideas (which I have found interesting so far, 
in proportion to the degree they address AGI topics); I was just explaining why 
people here are unlikely to find Dr. Eliza to be particularly interesting.
 
 



RE: [agi] associative processing

2008-04-17 Thread Derek Zahn
Note that the Instead of an AGI Textbook section is hardly fleshed out at all 
at this point, but it does link to a more-complete similar effort to be found 
here:
 
http://nars.wang.googlepages.com/wang.AGI-Curriculum.html



RE: [agi] associative processing

2008-04-16 Thread Derek Zahn
Steve Richfield, writing about J Storrs Hall:
 You sound like the sort that, once the thing is sort of 
 roughed out, likes to polish it up and make it as good as possible. 
 
I don't believe your characterization is accurate.  You could start with this 
well-done book to check that opinion:
 
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117
 
Because you are new to the discussion here you probably don't quite get the 
topic of this mailing list (AGI); the system sort-of described in your papers 
does not address any of the issues of that topic (as defined in its core 
publications and conferences) so don't be too surprised if people here are not 
particularly excited about it.
 



RE: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Derek Zahn
Jim Bromer writes: With God's help, I may have discovered a path toward a 
method to achieve a polynomial time solution to Logical Satisfiability
 
If you want somebody to talk about the solution, you're
more likely to get helpful feedback elsewhere as it is not a
topic that most of us on this list deal with or know a lot about.
 
Besides that, publish your result and it will be used if it is helpful.
 



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
[EMAIL PROTECTED] writes:
 
 But it should be quite clear that such methods could eventually be very handy 
 for AGI.
 
I agree with your post 100%; this type of approach is the most interesting 
AGI-related stuff to me.
 
 An audiovisual perception layer generates semantic interpretation on the  
 (sub)symbolic level. How could a symbolic engine ever reason about the real  
 world without access to such information?
 
Even more interesting:  How could a symbolic engine ever reason about the real 
world *with* access to such information? :)



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn

Stephen Reed writes:
 
 How could a symbolic engine ever reason about the real world *with* access 
 to such information? 
 
 I hope my work eventually demonstrates a solution to your satisfaction.  
 
Me too!
 
 In the meantime there is evidence from robotics, specifically driverless 
 cars,  that real world sensor input can be sufficiently combined and 
 abstracted for use  by symbolic route planners.
 
True enough, that is one answer:  by hand-crafting the symbols and the 
mechanics for instantiating them from subsymbolic structures.  We of course 
hope for better than this but perhaps generalizing these working systems is a 
practical approach.



[agi] Symbols

2008-03-30 Thread Derek Zahn
Related obliquely to the discussion about pattern discovery algorithms: What 
is a symbol?
 
I am not sure that I am using the words in this post in exactly the same way 
they are normally used by cognitive scientists; to the extent that causes 
confusion, I'm sorry.  I'd rather use words in their strict conventional sense 
but I do not fully understand what that is.  These thoughts are fuzzier than 
I'd like; if I was better at de-fuzzifying them I might be a pro instead of an 
amateur!
 
Proposition:  a symbol is a token with both denotative and model-theoretic 
semantics.
 
The denotative semantics are what makes a symbol "refer to" something or be 
"about" something.  The model-theoretic semantics allow symbol processing 
operations to occur (such as reasoning).
 
I believe this is a somewhat more restrictive use of the word symbol than is 
necessarily implied by Newell and Simon in the Physical Symbol System 
Hypothesis, but my aim is engineering rather than philosophy.
 
I'm actually somewhat skeptical that human beings use symbols in this sense for 
much of our cognition.  We appear to be a million times better at it than any 
other animal, and that is the special thing that makes us so great, but we 
still aren't very good at it.  However, most of the things we want to build AGI 
*for* require us to greatly expand the symbol processing capabilities of mere 
humans.  I think we're mostly interested in building artificial scientists and 
engineers rather than artificial musicians.  Since computer programs, 
engineering drawings, and physics theories are explicitly symbolic constructs, 
we're more interested in effectively creating symbols than in the totality of 
the murky subsymbolic world supporting it.  To what extent can we separate 
them?  I wish I knew.
 
In this view, subsymbolic simply refers to tokens that lack some of the 
features of symbols.  For example, a representation of a pixel from a camera 
has clear denotational semantics but it is not elaborated as well as a better 
symbol would be (the light coming from direction A at time B is not as useful 
as the light reflecting off of Fred's pinky fingernail).  Similarly, and more 
importantly, subsymbolic products of sensory systems lack useful 
model-theoretic semantics.  The origin of symbols problem involves how those 
semantics arise -- and to me it's the most interesting piece of the AGI puzzle.
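 
To make the proposition a little more concrete, here is a minimal sketch of my 
own (the field names are made up, and this is not a claim about how any 
existing system stores things) of a token carrying both kinds of semantics:
 
  from dataclasses import dataclass, field
 
  @dataclass
  class Symbol:
      name: str                       # the token itself, e.g. "cat"
      # Denotative semantics: links to the sensory/perceptual evidence the
      # token is about (represented here as arbitrary feature data).
      percepts: list = field(default_factory=list)
      # Model-theoretic semantics: relations that reasoning can operate on,
      # e.g. ("is-a", "mammal") or ("has-part", "tail").
      relations: set = field(default_factory=set)
 
  # A pixel-level token has denotation but almost no relations to reason with:
  pixel = Symbol(name="pixel_1047", percepts=[(0.23, 0.61, 0.12)])
 
In this vocabulary, the origin-of-symbols question is how the relations get 
populated in a way that is actually licensed by the percepts.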
 
Is anybody else interested in this kind of question, or am I simply inventing 
issues that are not meaningful and useful?



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
Mark Waser writes:
 
  True enough, that is one answer:  by hand-crafting the symbols and the 
  mechanics for instantiating them from subsymbolic structures.  We of 
  course hope for better than this but perhaps generalizing these working 
  systems is a practical approach.
 
 Um.  That is what is known as the grounding problem.  I'm sure that 
 Richard Loosemore would be more than happy to send references explaining 
 why this is not productive.
 
It's not the grounding problem.  The symbols crashing around in these robotic 
systems are very well grounded.  
 
The problem is that these systems are narrow, not that they 
manipulate ungrounded symbols.



[agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben,
 
It seems to me that Novamente is widely considered the most promising and 
advanced AGI effort around (at least of the ones one can get any detailed 
technical information about), so I've been planning to put some significant 
effort into understanding it with a view toward deciding whether I think you're 
on the right track or not (with as little hand-waving, faith, or bigotry as 
possible in my conclusion).  To do that properly, I am waiting for your book on 
Probabilistic Logic Networks to be published.  Amazon says July 2008... is that 
date correct?
 
Thanks!
 



RE: [agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben Goertzel writes: The PLN book should be out by that date ... I'm currently 
putting in some final edits to the manuscript...  Also, in April and May 
I'll be working on a lot of documentation regarding plans for OpenCog. 
 
Thanks, I look forward to both of these.
 



RE: [agi] Complexity in AGI design

2007-12-07 Thread Derek Zahn
Dennis Gorelik writes: "Derek, I quoted this article of Richard's in my blog: 
http://www.dennisgorelik.com/ai/2007/12/reducing-agi-complexity-copy-only-high.html"
Cool.  Now I'll quote your blogged response:
 
 So, if low level brain design is incredibly complex - how do we copy it? The 
 answer is: we don't copy low level brain design. Low level design is 
 critical for AGI. Instead we observe high level brain 
 patterns and try to implement them on top of our own, more understandable, 
 low level design.
 
I'm not sure for myself what I think of this complexity argument, so I don't 
have anything to say about your answer except to wish you luck (if Richard is 
right, you'll need a lot of it; if many paths lead up the hill then you might 
not need much at all).
 
I am curious what you mean by "high level brain patterns" though.  Could you 
give an example?
 


RE: [agi] Solution to Grounding problem

2007-12-07 Thread Derek Zahn
Richard Loosemore writes: This becomes a problem because when we say of 
another person that they "meant" something by their use of a particular word 
(say "cat"), what we actually mean is that that person had a huge amount of 
cognitive machinery connected to that word "cat" (reaching all the way down 
to the sensory perception mechanisms that allow the person to recognise an 
instance of a cat, and motor output mechanisms that let them interact with a 
cat).  What Stephen Harnad said in his original paper was "Hang on a second: 
if the AI system does not have all that other machinery inside it when it 
uses a word like 'cat', surely it does not really mean the same thing by 
'cat' as a person would?"

 [...]
 
Thanks, Richard.  That post was a terrific bit of writing.
 
On a related note, I think those who are uneasy with the idea of grounding 
symbols in experience with a virtual world wonder whether the (current) thin 
and skewed sensory experience of cats or any other concept-friendly 
regularities in such worlds is sufficiently similar to provide enough of the 
same meaning for communication with humans using the resulting concepts.
 
For that matter, one wonders even when concepts are grounded in the real world 
whether the resulting concepts and their meanings can be similar enough for 
communication if the concept formation machinery is not quite similar to our 
own -- sometimes even individual human conceptualizations are barely similar 
enough to allow conversation.
 
Very interesting stuff.
 
 


RE: [agi] None of you seem to be able ...

2007-12-06 Thread Derek Zahn
Richard Loosemore writes: Okay, let me try this.  Imagine that we got a 
bunch of computers [...]
 
Thanks for taking the time to write that out.  I think it's the most 
understandable version of your argument that you have written yet.  Put it on 
the web somewhere and link to it whenever the issue comes up again in the 
future.
 
If you are right, you may have to resort to "told you so" when other projects 
fail to produce the desired emergent intelligence.  No matter what you do, 
system builders can, do, and will say either that their system is probably 
not heavily impacted by the issue, or that the issue itself is overstated for 
AGI development, and I doubt that most will be convinced otherwise.  By making 
such a clear exposition, at least the issue is out there for people to think 
about.
 
I have no position myself on whether Novamente (for example) is likely to be 
slain by its own complexity, but it is interesting to ponder.


RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Hi Robin.  In part it depends on what you mean by fast.
 
1. Fast - less than 10 years.
 
I do not believe there are any strong arguments for general-purpose AI being 
developed in this timeframe.  The argument here is not that it is likely, but 
rather that it is *possible*.  Some AI researchers, such as Marvin Minsky, 
believe that we already have the necessary hardware commonly available, if we 
only knew what software to write for it.  If, as seems likely, there is a large 
economic incentive for the development of this software, it seems reasonable to 
grant the possibility that it will be developed.
 
Following that line of reasoning, a computation of probability * impact 
yields a large number for even small probabilities since the impact of a 
technological singularity could be very large.  So planning for the possibility 
seems prudent.
 
2. Fast - less than 50 years.
 
For this timeframe, just dust off Moravec's old computer speed chart.  On such 
a chart I think we're supposed to be at something like mouse level right now -- 
and in fact we have seen supercomputers beginning to take a shot at simulating 
mouse-brain-like structures.  It does not feel so wrong to think that the robot 
cars succeeding in the DARPA challenges are maybe up to mouse-level 
capabilities.
 
It is certainly possible that once computers surpass the raw processing power 
of the human brain by 10, 100, 1000 times, we will just be too stupid to keep 
up with their capabilities for some reason, but it seems like a more reasonable 
bet to me that the economic pressures to make somewhat good use of available 
computing resources will win out.
 
AI is often called a perpetual failure, but from this view that is not true at 
all; AI has been a spectacular success.  It's very impressive that the early 
researchers were able to get computers with nematode-level nervous systems to 
show any interesting cognitive behavior at all.  At worst, AI is keeping up 
with the available machine capabilities admirably.
 
Still, putting aside the brain simulation route, we do have to build models 
of mind that actually work.  As Pei Wang just pointed out, we are beginning to 
see models such as Ben Goertzel's Novamente that at least seem like they might 
have a shot at sufficiency.  That is not proof, but it is an indication that we 
may not be overmatched by this challenge, once the machinery becomes available.
 
If something like Moore's law continues (I suppose it's a cognitive bias to 
assume it will continue and a different bias to assume it won't), who wants to 
bet that computers 10,000, 100,000, or 1,000,000 times as powerful as our 
brains will go to waste?  Add as many zeros as you want... they cost five years 
each.
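 
The arithmetic behind "five years each" is just this, assuming (as I am here) 
the common reading of Moore's law as a doubling roughly every 18 months:
 
  # Rough arithmetic behind "one more zero costs five years".
  doublings_per_five_years = 60 / 18        # about 3.33 doublings in five years
  factor = 2 ** doublings_per_five_years    # about 10.1, i.e. one more zero
  print(factor)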
 
-
 
Having written that, I confess it is not completely convincing.  There are a 
lot of assumptions involved.  I don't think there *is* an objectively 
convincing argument.  That's why I never try to convince anybody... I can play 
in the intersection between engineering and wishful thinking if I want, simply 
because it amuses me more than watching football.
 
Hopefully some folks with more earnest beliefs will have better arguments for 
you.
 
 


RE: [agi] What best evidence for fast AI?

2007-11-10 Thread Derek Zahn
Bryan Bishop: Looks like they were just simulating eight million neurons with 
up to  6.3k synapses each. How's that necessarily a mouse simulation, anyway?
It isn't.  Nobody said it was necessarily a mouse simulation.  I said it was 
a simulation of a mouse-brain-like structure.  Unfortunately, not enough is yet 
known about specific connectivity, so the best that can be done is to play with 
structures of similar scale in anticipation of further advances.
 


RE: [agi] How valuable is Solmononoff Induction for real world AGI?

2007-11-08 Thread Derek Zahn
Edward,
 
For some reason, this list has become one of the most hostile and poisonous 
discussion forums around.  I admire your determined effort to hold substantive 
conversations here, and hope you continue.  Many of us have simply given up.


RE: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Derek Zahn
A large number of individuals on this list are architecting an AGI solution 
(or part of one) in their spare time.  I think that most of those efforts do 
not have meaningful answers to many of the questions, but rather intend to 
address AGI questions from a particular perspective.  Would such people be 
encouraged to fill this out, even though they might only answer a couple of the 
numbered points?
 
Probably most people like that are not serious contenders in the sense of 
having a complete detailed plan for achieving a full AGI.  Rather they think a 
particular aspect or approach is not being given enough attention and hope to 
explore part of it to see if their ideas merit further development.
 
I could see wanting to include or exclude such amateur efforts, depending on 
the goals of this database.  Perhaps a separate section would be a good idea 
for such people to provide a brief unstructured summary of their interests and 
ideas.
 


RE: [agi] Poll

2007-10-18 Thread Derek Zahn
 1. What is the single biggest technical gap between current AI and AGI? 
 
I think hardware is a limitation because it biases our thinking to focus on 
simplistic models of intelligence.   However, even if we had more computational 
power at our disposal we do not yet know what to do with it, and so the biggest 
gap is conceptual rather than technical.
 
In particular, I become more and more skeptical that the effort to produce 
concise theories of things like knowledge representation is likely to succeed. 
 Frames, is-a relations, logical inference on atomic tokens, and so on, are 
efforts to make intelligent behavior comprehensible in concisely describable 
ways, but they seem to be only crude approximations to the reality of 
intelligent behavior, which seems less and less likely to have formulations that 
are comfortably within our human ability to reason about effectively.  As one 
example, consider the study in cognitive science of the theory of categories -- 
from the classical "necessary and sufficient conditions" view to the more 
modern competing views of prototypes vs. exemplars.  All of these are nice 
simple descriptions, but as so often happens it seems that the effort to boil 
down the phenomena to nice simple ideas we can work with in our tiny brains 
actually boils off most of the important stuff.
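 
For anyone unfamiliar with that prototype/exemplar distinction, here is a toy 
sketch of my own (made-up feature vectors, not a model from the literature) of 
the two competing category accounts:
 
  import math
 
  def dist(a, b):
      return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
 
  # Stored "bird" instances as (flies, sings) feature vectors -- made-up data.
  bird_exemplars = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)]
 
  # Prototype view: summarize the category by its average member.
  prototype = tuple(sum(xs) / len(xs) for xs in zip(*bird_exemplars))
  def prototype_similarity(item):
      return -dist(item, prototype)
 
  # Exemplar view: keep every instance and compare against the nearest one.
  def exemplar_similarity(item):
      return -min(dist(item, e) for e in bird_exemplars)
 
  penguin = (0.1, 0.2)
  print(prototype_similarity(penguin), exemplar_similarity(penguin))
 
Both fit the point above: each is a tidy, comprehensible rule, and each plainly 
throws away most of what actually goes on when people categorize.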
 
The challenge is for us to come up with ways to think about, or at least work 
with (and somehow reproduce or invent!), mechanisms that appear not to be 
reducible to convenient theories.  I expect that our ways of thinking about 
these things will evolve as the systems we build operate on more and more data. 
 As Novamente's atom table grows from thousands to millions and eventually 
billions of rows; as cortex simulations become more and more detailed and 
easier to study; as we start to grapple with semantic nets containing many 
millions of nodes -- our understanding of the dynamics of such systems will 
increase.  Eventually we will become comfortable with, and more able to build, 
systems whose desired behaviors cannot even be specified in a simple or 
rigorous way.
 
Or, perhaps, theoretical breakthroughs will occur making it possible to 
describe intelligence and its associated phenomena in simple scientific 
language.
 
Because neither of these things can be done at present, we can barely even talk 
to each other about things like goals, semantics, grounding, intelligence, and 
so forth... the process of taking these unknown and perhaps inherently complex 
things and compressing them into simple language symbols throws out too much 
information to even effectively communicate what little we do understand.
 
Either way, it will take decades if we're lucky.  Moving from mouse-level 
hardware to monkey-level hardware in the next couple of decades will be 
helpful, just as our views on machine intelligence have expanded beyond those 
of our forebears, who looked at the first digital computers and wondered how 
they might be made to think.
 
 


RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman writes:
 Let's take Novamente as an example. ... It cannot improve itself until the 
 following things happen:
 1) It acquires the knowledge and skills to become a competent programmer, a 
 task that takes a human many years of directed training and practical 
 experience.
 2) It is given access to its own implementation and permission to alter it.
 3) It understands its own implementation well enough to make a helpful change.
 ...
 I agree that resource #1, competent programming, is essential for any 
 interesting takeoff scenario. I don't think the other two matter, though.
Ok, this alternative scenario -- where Novamente secretly reinvents the 
theoretical foundations needed for AGI development, designs its successor from 
those first principles, and somehow hijacks an equivalent or superior 
supercomputer to receive the de novo design and surreptitiously trains it to 
superhuman capacity -- should also be protected against.  It's a fairly 
ridiculous scenario, but for completeness should be mentioned.
 


RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman: No value is added by introducing considerations about 
self-reference into conversations about the consequences of AI engineering.  
Junior geeks do find it impressive, though.
The point of that conversation was to illustrate that if people are worried 
about Seed AI exploding, then one option is to not build Seed AI (since that is 
only one approach to developing AGI, and in fact I do not know of any actual 
project that includes it at present).  Quoting Yudkowsky:
 
 The task is not to build an AI with some astronomical level 
 of intelligence; the task is building an AI which is capable 
 of improving itself, of understanding and rewriting its own 
 source code.
 
Perhaps only junior geeks like him find the concept relevant.  You seem to 
think that self-reference buys you nothing at all since it is a simple matter 
for the first AGI projects to reinvent their own equivalent from scratch, but 
I'm not sure that's true.
 


RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Linas Vepstas:
 Let's take Novamente as an example. ... It cannot improve itself until the 
 following things happen:
 1) It acquires the knowledge and skills to become a competent programmer, a 
 task that takes a human many years of directed training and practical 
 experience.
 
 Wrong. This was hashed to death in previous emails; and then again probably 
 several more times before I joined the list.  Anyone care to assemble a 
 position paper on self improvement that reviews the situation? I'm slightly 
 irritated by the recurring speculation and misunderstanding.
Ok, the conversation was about how Novamente could recursively self-improve 
itself into a runaway hard takeoff scenario.
 
You're claiming that it can do so without the knowledge or skills of a 
competent programmer, with the very convincing argument "Wrong."  Care to 
elaborate at all?  Or is your only purpose to communicate your slight 
irritation?
 
 


RE: [agi] RSI

2007-10-03 Thread Derek Zahn
Edward W. Porter writes: As I say, what is, and is not, RSI would appear to be 
a matter of definition. But so far the several people who have gotten back to 
me, including yourself, seem to take the position that that is not the type of 
recursive self improvement they consider to be RSI. Some people have drawn 
the line at coding. RSI, they say, includes modifying one's own code, but 
"code" of course is a relative concept, since code can come in higher and 
higher level languages and it is not clear where the distinction between code 
and non-code lies.
 
As I had included comments along these lines in a previous conversation, I 
would like to clarify.  That conversation was not specifically about a 
definition of RSI; it had to do with putting restrictions on the type of RSI we 
might consider prudent, in terms of cutting the risk of creating intelligent 
entities whose abilities grow faster than we can handle.
 
One way to think about that problem is to consider that building an AGI 
involves taking a theory of mind and embodying it in a particular computational 
substrate, using one or more layers of abstraction built on the primitive 
operations of the substrate.  That implementation is not the same thing as the 
mind model; it is one expression of the mind model.
 
If we do not give arbitrary access to the mind model itself or its 
implementation, it seems safer than if we do -- this limits the extent to which 
RSI is possible: the efficiency of the model implementation and the 
capabilities of the model do not change.  Those capabilities might of course 
still be larger than was expected, so it is not a safety guarantee; further 
analysis using the particulars of the model and implementation should also be 
considered.
 
RSI in the sense of learning to learn better or learning to think better 
within a particular theory of mind seems necessary for any practical AGI 
effort, so we don't have to code the details of every cognitive capability from 
scratch.
 
 


RE: [agi] RSI

2007-10-03 Thread Derek Zahn
I wrote:
 If we do not give arbitrary access to the mind model itself or its 
 implementation, it seems safer than if we do -- this limits the 
 extent that RSI is possible: the efficiency of the model implementation 
 and the capabilities of the model do not change.
 
An obvious objection to this is that if the capabilities of the model include 
the ability to simulate a Turing machine, then the capabilities actually 
include everything computable.  However, the issue being addressed here is a 
practical one about what actually happens, and there are enormous practical 
constraints involving resource limits on processing time and memory space that 
should be considered.  Such consideration is part of a model-specific safety 
analysis.
 


RE: [agi] Religion-free technical content

2007-10-02 Thread Derek Zahn
Richard Loosemore:
 a) the most likely sources of AI are corporate or military labs, and not just 
 US ones. No friendly AI here, but profit-making and mission-performing AI.
 
 Main assumption built into this statement: that it is possible to build an AI 
 capable of doing anything except dribble into its wheaties, using the 
 techniques currently being used.  I have explained elsewhere why this is not 
 going to work.
 
If your explanations are convincing, smart people in industry and the military 
might just absorb them and then they still have more money and manpower than 
you do.
 When the first AGI is built, its first actions will be to make sure that  
 nobody is trying to build a dangerous, unfriendly AGI. 
 
I often see it assumed that the step between "first AGI is built" (which I 
interpret as a functioning model showing some degree of generally intelligent 
behavior) and "god-like powers dominating the planet" is a short one.  Is that 
really likely?
 
 


RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Richard Loosemore writes: You must remember that the complexity is not a 
massive part of the system, just a small-but-indispensable part.  I think 
this sometimes causes confusion: did you think that I meant that the whole 
thing would be so opaque that I could not understand *anything* about the 
behavior of the system? Like, all the characteristics of the system would be 
one huge emergent property, with us having no idea about where the 
intelligence came from?
 
No, certainly not.  I think the confusion here involves the distinction between 
Friendliness with a capital F (meaning a formal theory of what the term means 
and an intelligent system built to provably maintain that property in the 
mathematical, not verbal, sense), and friendliness with a lower case f, which 
relies on more human types of reasoning.
 
It sounds to me like you are saying that your system is complex and yet its 
behavior is not complex (at least in a particular but quite broad way -- 
friendliness), as if you can bottle up the complexity in such a way that it has 
no important actual effects.  In particular, when you write:
 
 You can build such a motivational system by controlling the system's  agenda 
 by diffuse connections into the thinking component that controls  what it 
 wants to do.
It seems like doing that requires a rigorous understanding of the dynamics of 
the thinking component, and I don't quite get how that can work in a 
guaranteed way, since elements of the thinking component may change their 
nature in unpredictable ways and new elements of the thinking component may 
arise.  If the nature of all thinking-component elements is indeed perfectly 
predictable so that is impossible, I don't get where the complexity is.  In 
your AGIRI paper, you seem to be saying that intelligence itself is an emergent 
property of a complex system (sorry to use words that everybody is probably 
allergic to), which seems to imply a global impact of the underlying complexity.
 
I think (probably incorrectly) that I have a rough idea of how you intend to 
guide this complex system, and in fact I think that is likely the only way to 
go about it.  It makes me a bit nervous when phrasings that bring capital-F 
Friendliness to mind are applied to designs that cannot possibly exhibit it.
 
Basically, it rings alarm bells about overconfidence.
 
However, the details of what you actually are working on will explain more than 
this conversation can, I'm sure.
 


RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Edward W. Porter writes: To Matt Mahoney.
 Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and 
 implied RSI 
 (which I assume from context is a reference to Recursive Self Improvement) is 
 necessary for general intelligence.
 
 So could you, or someone, please define exactly what its meaning is?
 
Thanks for asking this question; I was just going to do so myself.  If I am 
generally intelligent then I must be able to recursively self-improve, even 
though all I can do is change some parameters in a particular neural structure. 
 But that's just learning (even if it is learning better ways to learn).  I 
don't think that sort of learning is what gives Singularitarians nightmares 
about AGI going BOOM a few days after birth, so defining what we mean and what 
we're actually afraid of could be very useful.
 


RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Richard:
 
 You agree that if we could get such a connection between the  probabilities, 
 we are home and dry? That we need not care about  proving the friendliness 
 if we can show that the probability is simply  too low to be plausible?
Yes, although the probability itself would have to be proven from first 
principles to be as strong as Friendliness.  For any actual system such rigor 
seems as unlikely as Friendliness itself.
 
 Once the system is  set up to behave according to a diffuse set of checks 
 and balances (tens  of thousands of ideas about what is right, rather than 
 one single  directive), it can never wander far from that set of constraints 
 without  noticing the departure immediately.  Would you agree that IF such 
 a design were feasible, you would not be  able to think of any way to bollix 
 it?
* They have to be the right set of checks and balances, ones that completely 
cover ill-defined territory
* Nothing unforeseen can arise that is not covered by the designed-in checks 
and balances
* The meaning of the constraints has to be applicable to all future 
developments somehow (e.g. the changing nature of humanity)
* The meaning of the constraints and the complex items they operate on has to 
be immune to drift
 
Given all that, nothing springs immediately into my little mind to disagree 
with your conclusion.
 
Note that I think this type of approach is an excellent way to try for little-f 
friendliness, which is probably our best and only option.  I like it a lot.



 Date: Mon, 1 Oct 2007 11:34:09 -0400
 From: [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] Religion-free technical content
 
 Derek Zahn wrote:
  Richard Loosemore writes: You must remember that the complexity is not a 
  massive part of the system, just a small-but-indispensable part. I think 
  this sometimes causes confusion: did you think that I meant that the whole 
  thing would be so opaque that I could not understand *anything* about the 
  behavior of the system? Like, all the characteristics of the system would 
  be one huge emergent property, with us having no idea about where the 
  intelligence came from?
 
  No, certainly not. I think the confusion here involves the distinction 
  between Friendliness with a capital F (meaning a formal theory of what the 
  term means and an intelligent system built to provably maintain that 
  property in the mathematical, not verbal, sense), and friendliness with a 
  lower case f, which relies on more human types of reasoning.
 
 Derek,
 
 Your post raises several issues that I will try to get to in due course, but 
 I want to deal with one of them quickly (if I can).
 
 I am attacking the very notion that there really is something that is 
 mathematical Friendliness with a capital F, which can be proved formally 
 rather than (something else).
 
 I am also stating that while this mythical provable-friendliness does not 
 really exist (i.e. it will never be possible), there is something else that 
 gives us exactly what we want, but is not a mathematical proof.
 
 Here is why. According to quantum mechanics there is a finite, non-zero 
 probability that the Sun could suddenly quantum-tunnel itself to a new 
 position inside the perfume department of Bloomingdales.
 
 There is no formal proof that it will not do this. There is no possibility 
 of such a formal proof.
 
 But we accept that we do not need to worry about this happening because we 
 have an idea of what the probability is. In essence, we know that for the 
 Sun to do that, each atom in it would have to do the same thing all at once, 
 and since the probability of each individual event is so small, and since 
 they are all multiplied, the overall probability is stupidly small.
 
 Now, of course I exaggerate for comedy, but the fact is that if you can make 
 the event "An AGI reneges on the motivations designed into it" dependent on 
 a very large number of improbable events all happening at once, then you can 
 multiply the probabilities and come to a situation where the overall 
 probability is vanishingly small.
 
 You agree that if we could get such a connection between the probabilities, 
 we are home and dry? That we need not care about proving the friendliness if 
 we can show that the probability is simply too low to be plausible?
 
 Right, now consider the nature of the design I propose: the motivational 
 system never has an opportunity for a point failure: everything that happens 
 is multiply-constrained (and on a massive scale: far more than is the case 
 even in our own brains). Once the system is set up to behave according to a 
 diffuse set of checks and balances (tens of thousands of ideas about what is 
 right, rather than one single directive), it can never wander far from that 
 set of constraints without noticing the departure immediately.
 
 Would you agree that IF such a design were feasible, you would not be able to 
 think of any way

RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
I suppose I'd like to see the list management weigh in on whether this type of 
talk belongs on this particular list or whether it is more appropriate for the 
singularity list.
 
Assuming it's okay for now, especially if such talk has a technical focus:
 
One thing that could improve safety is to reject the notion that an AGI 
project should be focused on, or even capable of, recursive self-improvement in 
the sense of reprogramming its own core implementation.
 
Let's take Novamente as an example.  Imagine that Ben G is able to take a break 
at some point from standing behind the counter of his Second Life pet store for 
a few years, and he gets his 1000-PC cluster and the implementation goes just 
as imagined and baby Novamente is born some years down the road.
 
At this point, Ben & co. begin teaching it the difference between its virtual 
ass and a virtual hole in the ground.
 
Novamente's model of mind is not the same thing as the C++ code that implements 
it; Baby Novamente has no particular affinity for computer programming or 
built-in knowledge about software engineering.  It cannot improve itself until 
the following things happen:
 
1) It acquires the knowledge and skills to become a competent programmer, a 
task that takes a human many years of directed training and practical 
experience.
 
2) It is given access to its own implementation and permission to alter it.
 
3) It understands its own implementation well enough to make a helpful change.
 
Even if the years of time and effort were deliberately taken to make those 
things possible, further things would be necessary for it to be particularly 
worrisome:
 
1) Its programming abilities need to expand to the superhuman somehow -- a 
human-equivalent programmer is not going to make radical improvements to a 
huge software system with man-decades of work behind it in a short period of 
time.  A 100x or 1000x programming intelligence enhancement would be needed 
for that to happen.
 
2) The core implementation has to be incredibly flawed for there to be orders 
of magnitude of extra efficiency to squeeze out of it.  We're not really 
worried about a 30% improvement; we're worried about radical conceptual 
breakthroughs leading to huge performance boosts.
 
It stretches the imagination past its breaking point to imagine all of the 
above happening accidentally without Ben noticing.  Therefore, to me, Novamente 
gets the Safe AGI seal of approval until such time as the above steps seem 
feasible and are undertaken.  By that point, there will be years of time to 
consider the wisdom of doing so and hopefully apply some sort of friendliness 
theory at an actually dangerous stage.  I think the development of such a 
theory is valuable (which is why I give money to SIAI), but I neither expect 
nor want Ben to drop his research until it is ready.  There is no need.
 
I could imagine an approach to AGI that has at its core a reflexive 
understanding of its own implementation; a development pathway involving 
algorithmic complexity theory, predictive models of its own code, code 
generation from an abstract specification language that forms a fluid 
self-model, unrestricted invention of new core components, and similar things.  
Such an approach might, in flights of imagination, be vulnerable to the "oops, 
it's smarter than me now and I can't pull the plug" scenario.
 
But there's an easy answer to this:  Don't build AGI that way.  It is clearly 
not necessary for general intelligence (I don't understand my neural substrate 
and cannot rewire it arbitrarily at will).
 
Surely certain AGI efforts are more dangerous than others, and the opaqueness 
that Yudkowsky writes about is, at this point, not the primary danger.  
However, in that context, I think that Novamente is, to an extent, opaque in 
the sense that its actions may not be reducible to anything clear (call such 
things emergent if you like, or just complex).
 
If I understand Loosemore's argument, he might say that AGI without this type 
of opaqueness is inherently impossible, which could mean that Friendly AI is 
impossible.  Suppose that's true... what do we do then?  Minimize risks, I 
suppose.  Perhaps certain protocols could be developed and agreed to.  As an 
example:
 
1. A method to determine whether a given project at a certain developmental 
stage is dangerous enough to require restrictions.  It is conceivable, for 
example, that any genetic programming homework, corewars game, or random 
programming error could accidentally generate the 200-instruction key to 
intelligence that wreaks havoc on the planet... but it's so unlikely that 
forcing all programming to occur in cement bunkers seems like overkill.
 
2. Precautions for dangerous programs, such as limits on network access, limits 
on control of physical devices, and various types of dead-man switches and 
emergency power cutoffs.
 
I think we're a while away from needing any of this, but agree that it is not 
too soon to start thinking about it and, as has been pointed out, 

RE: [agi] Religion-free technical content

2007-09-30 Thread Derek Zahn
Richard Loosemore writes: It is much less opaque.  I have argued that this 
is the ONLY way that I know of to ensure that  AGI is done in a way that 
allows safety/friendliness to be guaranteed.  I will have more to say about 
that tomorrow, when I hope to make an  announcement.
Cool.  I'm sure I'm not the only one eager to see how you can guarantee (read: 
prove) such specific detailed things about the behaviors of a complex system.
 


RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS

2007-09-28 Thread Derek Zahn
Don Detrich writes:


 
AGI Will Be The Most Powerful Technology In Human History – In Fact, So 
Powerful that it Threatens Us 
Admittedly there are many possible dangers with future AGI technology. We can 
think of a million horror stories and in all probability some of the problems 
that will crop up are things we didn’t anticipate. At this point it is pure 
conjecture. All new technologies have dangers, just like life in general. 
 
It'll be interesting to see if the horror stories about AGI follow the same 
pattern as they did for Nanotechnology... After many years and dollars of real 
nanotechnology research, the simplistic vision of the lone wolf researcher 
stumbling on a runaway self-replicator that turns the planet into gray goo 
became much more complicated and unlikely.  Plus you can only write about gray 
goo for so long before it gets boring.
 
Not to say that AGI is necessarily the same as Nanotechnology in its actual 
risks, or even that gray goo is less of an actual risk than writers speculated, 
but it will be interesting to see if the scenario of a runaway 
self-reprogramming AI becomes similarly passé.
 


RE: [agi] Selfish promotion of AGI

2007-09-27 Thread Derek Zahn
Responding to Edward W. Porter:
 
Thanks for the excellent message!
 
I am perhaps too interested in seeing what the best response from the field of 
AGI might be to intelligent critics, and probably think of too many 
conversations in those terms; I did not mean to attack or criticise your 
statements, just guess at objections that a skeptic might make.  Much of what 
you wrote here could be used in such a response.
 The truth is we really don’t know how big a good, easy-to-compute 
 representation of human-level world knowledge would be.
 
Right, or processing power to manipulate it.  There are reasons to suspect it 
might be less than would be required for a molecular-level brain simulation.  
Intuitions about how much less will vary from individual to individual.
 
  2. The software problem is solved.  Ben Goertzel has solved it.  I think 
  most people will want more demonstration than a book review.
 I do too.
Just to be clear, I'm not sure myself about whether Novamente has solved the 
software problem.  It necessarily contains a large number of complex 
representational and implementation choices which I do not understand well 
enough to judge in an informed way.  All I was trying to communicate is: prior 
to impressive demonstrations, nobody will believe that the software problem is 
solved.
 
 AI buzz has not been steady for 50 years.  Except in SciFi, it has largely 
 been missing in action for the last twenty years, since the overstated 
 promises of the expert systems boom of the mid-eighties fell flat.
 
If buzz means academic respectability, government grant levels, and 
availability of risk capital, you're certainly right!  I'm not sure what makes 
any of those groups tick, so I am not sure what sort of buzzers would be 
effective.
 
 


RE: [agi] NVIDIA GPU's

2007-06-21 Thread Derek Zahn
Ben Goertzel writes: http://www.nvidia.com/page/home.html  Anyone know what 
are the weaknesses of these GPU's as opposed to ordinary processors?  They 
are good at linear algebra and number crunching, obviously.  Is there some 
reason they would be bad at, say, MOSES learning?
These parallel hardware innovations are indeed very exciting.  I recently 
purchased a PC with two of these GPUs in it to play with.  Like JoSH, I think 
that number crunching is The Way To Go.
 
Unfortunately, these will be spectacularly bad at evaluating individuals for 
genetic programming.  First, although they can do standard logic, program flow, 
and integer operations, that doesn't make very good use of the transistor count 
since the bulk of the silicon is dedicated to floating point arithmetic.  
Second, and more important, the programming model is SIMD, which means that the 
processors have to be running the same program.  If, for example, an if 
statement's condition is satisfied on one processor but not the others, the 
others have to wait for the code inside to finish so they can all synchronize 
again.  That would be terrible for evaluating heterogeneous program trees.
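 
To make the divergence point concrete, here is a minimal sketch of the problem 
(assuming CUDA; the kernel name and the dummy workload are invented for 
illustration, not taken from any real GP system).  Threads in a warp that take 
different branches of the if are serialized by the hardware, so the cheap-path 
threads end up idling while the expensive-path threads run:
 
  // Hypothetical kernel illustrating SIMD branch divergence.
  __global__ void divergent_eval(const float *x, float *out, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i >= n) return;
 
      if (x[i] > 0.0f) {
          // Expensive path -- only some threads of the warp take it.
          float acc = 0.0f;
          for (int k = 0; k < 1000; ++k)
              acc += sinf(x[i] * k);
          out[i] = acc;
      } else {
          // Cheap path -- these threads still wait for the expensive ones
          // before the warp can move on together.
          out[i] = -x[i];
      }
  }
 
With heterogeneous program trees, essentially every evaluation step looks like 
this, which is why the SIMD model fits so poorly.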
 
You're going to get your speedup over the coming years on that task from 
multicore CPUs that can run heterogeneous threads.
 
However, intuitively I think this massively parallel SIMD type of hardware 
might work rather well for propagation through your Probabilistic Logic 
Networks, depending on the details.
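 
In case it helps make that intuition concrete, here is a hedged sketch of the 
kind of propagation step that does map well onto SIMD hardware (assuming CUDA; 
the Edge layout and the update formula are invented for illustration and are 
not the actual Novamente/PLN rules):
 
  // Hypothetical data-parallel propagation step over a weighted graph.
  // Every thread runs the same arithmetic, so there is essentially no branch
  // divergence.  Requires a GPU generation with floating-point atomicAdd.
  struct Edge { int src; int dst; float weight; };
 
  __global__ void propagate(const Edge *edges, int n_edges,
                            const float *strength_in, float *strength_out)
  {
      int e = blockIdx.x * blockDim.x + threadIdx.x;
      if (e >= n_edges) return;
 
      // Accumulate a weighted contribution from the source node into the
      // destination node -- identical work on every thread.
      float contribution = strength_in[edges[e].src] * edges[e].weight;
      atomicAdd(&strength_out[edges[e].dst], contribution);
  }
 
Whether the real PLN update rules stay this uniform is exactly the "depending 
on the details" part.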
 
 


RE: [agi] NVIDIA GPU's

2007-06-21 Thread Derek Zahn
Moshe Looks writes: This is not quite correct; it really depends on the 
complexity of the  programs one is evolving and the structure of the fitness 
function. For  simple cases, it can really rock; see  
http://www.cs.ucl.ac.uk/staff/W.Langdon/
 
That's interesting work, thanks for the link!  It's not immediately obvious, 
but the particular example there is a population of programs that estimate pi 
with up to 8 leaves from an alphabet of six tokens (2 constants and 4 basic 
arithmetic operations).  
 
The strategy used for parallelization is to run all programs currently waiting 
on a '+' operation, then run all programs doing a '-' operation, and so on.  If 
there are N operations (4 in this case), the population runs at 1/N speed 
(since the SIMD nature of the thing makes the others wait).  So you're right: 
for simple cases like this one it only wastes 75% of the available processing 
power.  It doesn't seem like it will scale very well, though.  Even on this 
simple task I bet a quad-core CPU is competitive with the GPU hardware.
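 
For readers curious how that scheduling trick looks in code, here is a hedged 
sketch (assuming CUDA; the data layout, the four-opcode alphabet, and the 
one-deep stack are simplifications invented for illustration, not the actual 
code behind the linked work):
 
  // Hypothetical opcode-at-a-time GP interpreter step: on each pass only the
  // programs whose current opcode matches pass_op make progress.
  enum Op { OP_ADD, OP_SUB, OP_MUL, OP_DIV, N_OPS };
 
  __global__ void step_population(const int *ops,       // opcodes, one row per program
                                  int *pc,              // program counter per program
                                  float *stack_top,     // one-deep "stack" per program
                                  const float *operand, // next operand per program
                                  int prog_len, int pop_size, int pass_op)
  {
      int p = blockIdx.x * blockDim.x + threadIdx.x;
      if (p >= pop_size || pc[p] >= prog_len) return;
 
      int op = ops[p * prog_len + pc[p]];
      if (op != pass_op) return;           // not this opcode's turn: idle
 
      float a = stack_top[p], b = operand[p];
      switch (op) {
          case OP_ADD: stack_top[p] = a + b; break;
          case OP_SUB: stack_top[p] = a - b; break;
          case OP_MUL: stack_top[p] = a * b; break;
          case OP_DIV: stack_top[p] = (b != 0.0f) ? a / b : 1.0f; break;
      }
      pc[p]++;                             // advance past the executed opcode
  }
 
  // Host side (sketch): one launch per opcode per interpreter step, which is
  // where the 1/N slowdown discussed above comes from.
  // for (int op = 0; op < N_OPS; ++op)
  //     step_population<<<blocks, threads>>>(d_ops, d_pc, d_stack, d_operand,
  //                                          prog_len, pop_size, op);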
 
It does point out though that some things that are not intuitively data 
parallel can be executed effectively on a GPU.
 
Do you personally think MOSES will run well on a GPU?


RE: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-15 Thread Derek Zahn
Robert Wensman writes:
 Has there been any work done previously in statistical, example driven 
 deduction? 
 
Yes.  In this AGI community, Pei Wang's NARS system is exactly that:
 
http://nars.wang.googlepages.com/
 
Also, Ben Goertzel (et al.) is building a system called Novamente 
(www.novamente.net) that has a Bayesian system of Probabilistic Logic 
Networks as its major representational scheme.
 
 


RE: [agi] Another attempt to define General Intelligence, and some AGI design thoughts.

2007-06-14 Thread Derek Zahn
Robert Wensman writes:
 
 Databases: 1. Facts: Contains sensory data records, and actuator records. 
 2. Theory: Contains memeplexes that tries to model the world.
 
I don't usually think of 'memes' as having a primary purpose of modeling the 
world... it seems to me like the key to your whole approach is how you 
represent them (the schema of database 2).  Could you elaborate a bit on that?


RE: [agi] poll: what do you look for when joining an AGI group?

2007-06-13 Thread Derek Zahn
 9. a particular AGI theory
That is, one that convinces me it's on the right track.
 
Now that you have run this poll, what did you learn from the responses and how 
are you using this information in your effort?
 


RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
I think probably every AGI-curious person has intuitions about this subject.  
Here are mine:
 
Some people, especially those espousing a modular, software-engineering type of 
approach, seem to think that a perceptual system basically should spit out a 
token for "chair" when it sees a chair, and then a reasoning system can take 
over to reason about chairs and what you might do with them -- and further it 
is thought that the reasoning-about-chairs part is really the essence of 
intelligence, whereas chair detection is just discardable pre-processing.  My 
personal intuition says that by the time you have taken experience and boiled 
it down to a token labeled "chair" you have discarded almost everything 
important about the experience, and all that is left is something that can be 
used by our logical inference systems.  And although that ability to do logical 
inference (probabilistic or pure) is a super-cool thing that humans can do, it 
is a fairly minor part of our intelligence.
 
Often I see AGI types referring to physical embodiment as a costly sideshow or 
as something that would be nice if a team of roboticists were available.  But 
really, a simple robot is trivial to build, and even a camera on a pan/tilt 
base pointed at an interesting physical location is way easier to build than a 
detailed simulation world.  The next objection is that image processing is too 
expensive and difficult.  I guess my only thought about that is that it doesn't 
inspire confidence in an approach if the very first layer of neural processing 
is too hard.  I suspect the real issue is that even if you do the image 
processing, then what?  What do you do with the output?
 
Ignoring those issues -- inventing a way of representing and manipulating 
knowledge, and assuming that sensory processes can create those data structures 
if built properly -- can work IF it turns out that brains are just really, 
really bad at being intelligent.  That is, if the extreme tip of the 
evolutionary iceberg (some thousands of generations of lightly populated 
species) finally stumbled on the fluid symbol-manipulating abilities that 
define intelligence, and the rest of the historical structures are only mildly 
more important than organs that pump blood -- if that's true, thinking about 
all this low-level grunk is a waste of time.  I actually hope that it's true, 
but I doubt it.  To the first people who had the ability to code our magical 
symbol-processing abilities on a machine, it must have seemed like an exciting 
theory.
 
 
 


RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
One last bit of rambling in addition to my last post:
 
When I assert that almost everything important gets discarded while merely 
distilling an array of rod and cone firings into a symbol for "chair", it's 
fair to ask exactly what that other stuff is.  Alas, I believe it is 
fundamentally impossible to tell you!
 
I have seen some people attempt to communicate it, perhaps with a phrase like 
the play of shadow on the angle of the chair arm whose texture reminds me of 
the bus seat on that day with Julie in Madrid and the scratch on the leg which 
might be wood or might be plastic, sort of cone-like taking part of the chair's 
weight...
 
The problem with trying to evoke the complexity and associative nature of the 
perceptual experience with a phrase like that is that every symbolist can 
easily nod and think about how all that gets encoded in their symbolic 
representation, with its nodes for bus and leg and the encoded memory of past 
events.
 
But actually, the stuff is not at the right level for communicating 
linguistically, so the above type of description is a made-up sham, more 
misleading than revealing.
 
To the extent I have a theory about all this stuff, it's this: animals, 
including our evolutionary forebears, have concepts much like we do.  However, 
somewhere recently in our history, something happened that greatly magnified 
our ability to use language, reason logically, and form dizzyingly abstract 
concepts.  I think it's likely that it was a single thing (or that these are 
aspects of the single thing) rather than three different radical innovations 
occurring at once.  I'm not sure what that thing was, but I'd offer the 
following analogy:
 
Concepts formed in some part of the brain grew handles of some kind, which 
allow them to be manipulated in a flexible, combinational way by some new or 
improved dynamic processing mechanism that is either unique to us or is maybe 
vastly expanded from the abilities of lower species.  Symbolists see the 
handles and the way they get tugged around and abstract it into combinatorial 
logics and linguistic grammars, but it doesn't do any good to tug handles 
around unless they are attached to the huge gooey blobs of mind-stuff, which 
are NOT logical or linguistic in nature.
 
I'm philosophically a bottom-upper because I think the hard and interesting 
questions have to do with the nature of those gooey, blobby concepts.  Examples 
of people who are trying to deal with that issue are Hawkins with his 
Hierarchical Temporal Memory, and Josh with his Interpolating Associative 
Memory (though those models are quite different in detail).  I don't have a 
model myself.
 
I do like to follow you top-downers though, as you do amazing things!
 


RE: [agi] Pure reason is a disease.

2007-06-11 Thread Derek Zahn
Matt Mahoney writes: Below is a program that can feel pain. It is a simulation 
of a programmable 2-input logic gate that you train using reinforcement 
conditioning.
Is it ethical to compile and run this program?
 


RE: [agi] Get your money where your mouth is

2007-06-08 Thread Derek Zahn
Josh writes: http://www.netflixprize.com
Thanks for bringing this up!  I had heard of it but forgot about it.  While I 
read about other people's projects/theories and build a robot for my own 
project, this will be a fun way to refresh myself on statistical machine 
learning techniques and statistics.  I downloaded the data and it looks pretty 
easy to work with.  See ya on the leaderboard :)
 


RE: [agi] about AGI designers

2007-06-06 Thread Derek Zahn
YKY writes:
 
 There're several reasons why AGI teams are 
 fragmented and AGI designers don't want to 
 join a consortium:
 
 A.  believe that one's own AGI design is superior
 B.  want to ensure that the global outcome of AGI is friendly
 C.  want to get bigger financial rewards
 
D.  There are no consortiums to join.
 
I see talk about joining Novamente, but are they hiring?  It might be 
possible to volunteer to work on peripheral things like AGISIM, but I sort of 
doubt that Ben is eager to train volunteers on the AGI-type code itself.  On 
average, the cost/benefit of that would probably be quite poor.
 
I see that AdaptiveAI has an opening for a programmer.  We don't talk about 
them much, probably because they have chosen not to make much information 
available about what they're up to, beyond Peter Voss's vague overview paper.
 


RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
 
Mark Waser writes:
 
 BTW, with this definition of morality, I would argue that it is a very rare 
 human that makes moral decisions any appreciable percent of the time 
 
Just a gentle suggestion: if you're planning to unveil a major AGI initiative 
next month, focus on that at the moment.  This stuff you have been arguing 
lately is quite peripheral to what you have in mind, except perhaps for the 
business model, but in that area I see little compromise on more than subtle 
technical points.
 
As I have begun to re-attach myself to the issues of AGI, I have become 
suspicious of the ability or wisdom of attaching important semantics to atomic 
tokens (as I suspect you are going to attempt to do, along with most 
approaches), but I'd dearly like to contribute to something I thought had a 
chance.
This stuff, though, belongs on comp.ai.philosophy (which is to say, it belongs 
unread).


  1   2   >