Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Andrew Babian
Honestly, it seems pretty clear to me that Richard's notion, that complexity is
the secret sauce for intelligence and that therefore everyone else has it
wrong, is just foolishness.  I've quit paying him any mind.  Everyone has his
own foolishness.  We just wait for the demos.



Re: [agi] a2i2 news update

2007-07-26 Thread Andrew Babian
Not only that: if you work in IT, you might wonder, considering how poorly
adding people to a project works, whether he is getting desperate or just being
foolish.
andi


On Wed, 25 Jul 2007 18:25:56 -0700 (PDT), Ton Genah wrote 
 Just increasing the number doesn’t guarantee a clear path towards increased
intelligence.  This seems to be the important issue in current AI, and not the
number!

 Ton
 
 Mike Tintner [EMAIL PROTECTED] wrote:   
 Of course, numerical comparisons are petty, unfair and invidious. But being
that sort of person, I can't help noticing that Peter is promising to increase
his staff to 24 soon. Will that give him the biggest AGI army? How do the
contenders stack up here? 



Re: [agi] News bit: Carnegie Mellon unveils Internet-controlled robots anyone can build

2007-04-26 Thread Andrew Babian
Bob Mottram  wrote:   
  I have thought about making a robotic artist in the distant past.  Some of
the first robots which I remember seeing in the 1980s used the LOGO language
to produce sketches using different coloured pens.  You could maybe do
something similar to that, with a mouse-like body and a few differently
coloured pens mounted on servos (there is plenty of scope on the Qwerk to add
multiple servos).  Alternatively you could build something more like a
manipulator arm, and attach pens as if they were fingers on separate servos.  

Ben wrote:
 I like the idea of different fingers having different magic markers on the
tips of them ;-) 


Wow, that sounds like a great idea, guys, thanks for bringing it up!  One
thing, though: I probably wouldn't want to integrate the pens or whatever into
the device, since pens dry out and are consumable.  I'd prefer a general
manipulator.  Also, having them all at once seems like it's only about making
it faster, but computers and robots are things with infinite patience, so I
would guess one color at a time should not be a problem for them, though of
course I'm sure they could handle all of them at once.  And all this, too, is
reminiscent of the various automatic fabrication robots that have been popping
up, with plastic deposition and such.  None of those integrate vision systems,
though, which you might use in a general robot.  It might be nice also to
have a robot that could handle drills and saws, and other machining tools, for
a really productive system.  But that's what industrial robots do, I guess.



Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-18 Thread Andrew Babian
Not only is each movie different for each person, it is different each time
one person sees it.  The movie itself is different from the movie-witnessing
experience, and there seems to be a feeling that you could compress it by just
grabbing the inner experience.  But you notice different things each time.
And more often than just trying to take away the factual bits of what
happened, in any situation we are much more interested in extracting the
meaning than the simple facts: the implications, the point of any particular
action.  Even in speech, we aren't trying to remember sounds, but which actual
word-sound-meaning unit was intended, since the sound is always ambiguous.

The narrative nature of knowledge was mentioned, I think, and it's helpful to
point out a part of narrative that is often neglected.  A narrative is a
chronologically ordered telling of a situation that has some moral or point.
This moral or point is an important part of the meaning, just as the factual
content is, but it is not nearly so absolutely or clearly defined.  Very often,
at least for TV shows, the moral is just that good triumphs over evil.  But if
you leave it out of a story, people find themselves not caring, and thus not
remembering.

andi


On Wed, 18 Apr 2007 09:24:51 +0200, Kingma, D.P. wrote 
 [Spelling corrected and reworded...] 
 
 I'm not convinced by this reasoning. First, the way individuals store
audiovisual information differs, simply because of slight differences in brain
development (nurture). Also, memory is condensed information about the actual
high-level sensory/experience information. The actual 45kb memory of a movie
is therefore quite personal to the subject. Recall of a photo/video is more
like an impressionistic painting than an actual photo. 
 
 An AGI that reconstructs a movie from 45kb human-ish compressed memory will
have to make up 99.99% of video and audio. A very educated guess, but still a
guess. 
 
 Compare it with an extremely talented photorealistic human animator who,
purely from memory, creates a reconstruction of a scene from The Matrix.
Wouldn't you notice the difference in experience? 
 
 On 4/18/07, Matt Mahoney [EMAIL PROTECTED] wrote: On 4/17/07, James
Ratcliff [EMAIL PROTECTED] wrote: 
  
  A simple list, or set of goals, for an AGI to reasonably accomplish I would
  find very useful, and something to work for. 
 
 I think an important goal is to solve the user interface problem.  The
 current approach is for the computer to present a menu of choices (e.g. a set
 of icons, or automated voicemail: "press or say 'one'"), which is hardly
 satisfactory.  An interface should be more like Google: I tell the computer
 what I want and it gets it for me. 
 
 In http://cs.fit.edu/~mmahoney/compression/rationale.html I argue the 
 equivalence of text compression with AI.  I would therefore set a goal of 
 matching humans at text prediction (about 1 bit per character).  Humans use 
 vast knowledge and reasoning to predict strings like "All men are mortal. 
 Socrates is a man.  Therefore ...".  An AGI should be able to make 
 predictions as accurately as humans given only a 1 GB corpus of text, about 
 what a human could read in 20+ years. 
 
 I would go further and include lossy compression tests.  In theory, you could 
 compress speech to 10 bits per second by converting it to text and using text 
 compression.  The rate at which the human brain can remember video is not much 
 greater, probably less than 50 bps*.  Therefore, as a goal, an AGI ought to be 
 able to compress a 2 hour movie to a 45 KB file, such that when a person views 
 the original and reconstructed movie on consecutive days (not side by side), 
 the viewer will not notice any differences.  It should be able to do this 
 after training on 20 years of video. 
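
 (That is just the arithmetic: 2 hours = 7,200 seconds, 7,200 s x 50 bits/s =
 360,000 bits, and 360,000 / 8 = 45,000 bytes, i.e. the 45 KB figure.)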
 
 The purpose of this goal is that such an AGI could also perform useful tasks 
 such as reduce a video to a verbal description understandable by humans, or 
 given a script, produce a movie.  These tasks would be trivial extensions of 
 the compression process, which would probably consist of describing a movie 
 using text and augmenting with some nonverbal data such as descriptions of 
 faces and voices in terms that humans cannot easily express. 
 
 *50 bps is probably high.  Tests of image recall by Standing [1] suggest that 
 a picture viewed for 5 seconds is worth about 30 bits. 
 
 [1] Standing, L. (1973), "Learning 10,000 Pictures", Quarterly Journal of 
 Experimental Psychology (25), pp. 207-222. 
 
 -- Matt Mahoney, [EMAIL PROTECTED] 
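
To make the 1 bit per character target concrete, here is a rough sketch, in
Java, of the kind of measurement involved: an order-2 character model with
add-one smoothing, scored in average bits per character.  The class name and
setup are mine, not anything from Matt's benchmark, and scoring the model on
its own training text makes the number optimistic; a fair test needs held-out
text.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Toy bits-per-character estimate: order-2 character model, add-one smoothing.
public class BitsPerChar {
    public static void main(String[] args) throws Exception {
        String text = new String(Files.readAllBytes(Paths.get(args[0])), "ISO-8859-1");
        Map<String, Map<Character, Integer>> counts = new HashMap<>();
        // Count how often each character follows each 2-character context.
        for (int i = 2; i < text.length(); i++) {
            counts.computeIfAbsent(text.substring(i - 2, i), k -> new HashMap<>())
                  .merge(text.charAt(i), 1, Integer::sum);
        }
        // Score the same text: sum of -log2 p(char | context).
        double bits = 0;
        for (int i = 2; i < text.length(); i++) {
            Map<Character, Integer> dist = counts.get(text.substring(i - 2, i));
            int total = 0;
            for (int c : dist.values()) total += c;
            int seen = dist.getOrDefault(text.charAt(i), 0);
            double p = (seen + 1.0) / (total + 256.0); // add-one smoothing, byte alphabet
            bits -= Math.log(p) / Math.log(2.0);
        }
        System.out.printf("%.3f bits per character%n", bits / (text.length() - 2));
    }
}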
 

Re: Goals of AGI (was Re: [agi] AGI interests)

2007-04-18 Thread Andrew Babian

It occurs to me what the problem is that I'm having with this definition of AI
as compression.  There are two different tasks here: recognition of sensory
data and reproduction of it.  This definition seems to propose that they are
exactly equivalent, or that any recognition system is automatically invertible.
I simply doubt that this can be true, on a principle (which I cannot prove but
hold anyway) that meaning, the thing we use to recognize equivalence, is just
not the same across different perceptual events.

Another example I use to think about it is how difficult it is to draw a
reproduction of a picture from memory, and how different the task of drawing a
copy is from analyzing the elements in a picture.  Reproducing visual
information is different from conceptual scene decomposition.


On Wed, 18 Apr 2007 16:45:04 -0700 (PDT), Matt Mahoney wrote
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
  3. Standing [3] had subjects memorize 10,000 pictures, one every 5.6 seconds,
  over 5 days.  Two days later they could recall about 80% in tests.  This is
  about the result you would get if you reduced each picture to a 16 bit
  feature vector and checked for matches.  This is a memory rate of 0.3 bits
  per second.
 
 That should be 3 bits per second.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
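
(Spelled out: 10,000 pictures x 16 bits = 160,000 bits, acquired over
10,000 x 5.6 s = 56,000 seconds of viewing, which is about 2.9, call it 3,
bits per second.)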



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Andrew Babian
Eugen discussed evolution as a development process.  I just wanted to comment
on what Minsky said in his talk (and I have to thank this list for pointing
out that resource).  He said that the problem with evolution is that it throws
away the information about why bad solutions failed.  That has really affected
my thinking about it, since evolution had at least sounded to me like a pretty
good idea.  But it is really a very terrible waste, and I no longer think it
is such a great model to use.  I'm not sure what adaptations could be made to
make up for that loss, but surely there could be an improvement over
evolution, even in a system of random generation and recombination with
competitive survival.
andi



Re: [agi] general weak ai

2007-03-10 Thread Andrew Babian
I can't speak for Minsky, but I would wonder what advantage there would be in
having only one agent.  I think he talks about the disadvantages.  How is it
going to deal with naturally different sorts of management problems and
information?  It seems like it's just a better approach to have a system that
has several different resources working together.  BTW, Minsky has gone from
calling them agents to calling them resources in _The Emotion Machine_.
andi


On Fri, 9 Mar 2007 19:31:38 -0500, J. Storrs Hall, PhD. wrote
 Not at all.  The agent that does the pointing is just a "build a deck" agent
 (or, more likely, a society of deck) that gets activated when deck-building
 is the thing to do.
 
 I don't know Minsky's ultimate take on the subject, but I don't see 
 any problem with putting one agent in charge of the whole business,
  especially for the duration of a specific task, as long as it isn't 
 supposed to have any more capabilities per se than any other agent.
 
 Josh
 
 On Friday 09 March 2007 07:36, Pei Wang wrote:
  On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
   If I understand Minsky's Society of Mind, the basic idea is to have the
   tools be such that you can build your deck by first pointing at the saw
   and saying "you do your thing", and then pointing at the hammer, etc. The
   tools are then in turn made of little guys who do the same to their
   tools, ad infinitum (or at least ad neuronium).
 
  This understanding assumes a "you" who does the pointing, which is a
  central controller not assumed in the Society of Mind. To see
  intelligence as a toolbox, we would have to assume that somehow the
  saw, hammer, etc. can figure out what they should do in building the
  deck all by themselves.
 
  Pei
 


Re: [agi] general weak ai

2007-03-06 Thread Andrew Babian
On Tue, 6 Mar 2007 09:49:47 +, Bob Mottram wrote 
 Some of the 3D reconstruction stuff being done now is quite impressive (I'm
thinking of things like photosynth, monoSLAM and Moravec's stereo vision) and
this kind of capability to take raw sensor data and turn it into useful 3D
models which may then be cogitated upon would be a basic prerequisite for any
AGI operating in the real world.  I'm sure that these and other similar
methods are soon destined to fall into the bracket of being no longer AI,
instead being considered as just another computational tool. 
 
 In the past I've tried many ad-hoc vision experiments, which would certainly
come under the "narrow AI" label, but I now no longer believe that this kind
of approach is a good way to proceed.  Far more straightforward, albeit more
computationally demanding, techniques give a general solution to the vision
problem which is not highly specific to any particular kind of domain or
environment.  Under this system applications which are often treated
separately, such as visual navigation and object recognition, actually turn
out to be the same algorithm deployed on different spatial scales (maybe a
classic case of physics envy!). 


Well, what is intelligence if not a collection of tools?  One of the hardest
problems is coming up with tools that generalize across domains, but can't that
just be a question of finding more tools that work well in a computer
environment, instead of finding the one ultimate principle?  Ideas like GOFAI
symbolic manipulation and Bayesian decision networks seem to me to fit
naturally into the idea of an AI kit, though I personally would want this kit
to be more compatible with the post-AI techniques.  Another example: that
someone is using AI is often recognized by their using some kind of search,
like gradient ascent or resolution, instead of a direct algorithm; but there's
no reason why a system can't throw multiple approaches at a problem and fall
back on some general search when needed (see the sketch below).  And maybe
that's why I think an AI's proper world is controlling a computer (i.e. a PC),
so it can just run programs whenever it needs to get things done.
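
As a sketch of what I mean, here is a toy "kit" in Java that tries cheap
specialized tools first and falls back on a general search only when none of
them claims the problem.  All the names are invented for illustration:

import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Toy "AI kit": specialized methods are tried in order; a general
// (expensive) search is the fallback when none of them produces an answer.
public class ToolKit<P, S> {
    private final List<Function<P, Optional<S>>> specialists;
    private final Function<P, S> generalSearch;

    public ToolKit(List<Function<P, Optional<S>>> specialists,
                   Function<P, S> generalSearch) {
        this.specialists = specialists;
        this.generalSearch = generalSearch;
    }

    public S solve(P problem) {
        for (Function<P, Optional<S>> tool : specialists) {
            Optional<S> answer = tool.apply(problem);
            if (answer.isPresent()) return answer.get(); // a specialist handled it
        }
        return generalSearch.apply(problem);             // last resort: general search
    }
}

Nothing clever is going on; the shape is the point: the general method is the
guarantee, and the specialists are the speed.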

andi



Re: [agi] SOTA

2006-10-21 Thread Andrew Babian
On Fri, 20 Oct 2006 22:15:37 -0400, Richard Loosemore wrote
 Matt Mahoney wrote:
  From: Pei Wang [EMAIL PROTECTED]
  On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
  
  It is not that we can't come up with the right algorithms.  It's that we
  don't have the computing power to implement them.
  
  Can you give us an example? I hope you don't mean algorithms like
  exhaustive search.
  
  For example, neural networks which perform rudimentary pattern 
  detection and control for vision, speech, language, robotics etc.  
  Most of the theory had been worked out by the 1980's, but 
  applications have been limited by CPU speed, memory, and training 
  data.  The basic building blocks were worked out much earlier.  
  There are only two types of learning in animals: classical 
  (association) and operant (reinforcement) conditioning.  
  Hebb's rule for classical conditioning, proposed in 1949, is 
  the basis for most neural network learning algorithms today.  
  Models of operant conditioning date back to W. Ross Ashby's 
  1960 Design for a Brain, where he used randomized weight 
  adjustments to stabilize a 4 neuron system built from vacuum 
  tubes and mechanical components.
  
  Neural algorithms are not intractable.  They run in polynomial time.  
  Neural networks can recognize arbitrarily complex patterns by adding 
  more layers and training them one at a time.  This parallels the 
  way people learn complex behavior.  We learn simple patterns first, 
  then build on them.
 
 I initially wrote a few sentences saying what was wrong with the 
 above, but I chopped it.  There is just no point.
 
 What you said above is just flat-out wrong from beginning to end.  I 
 have done research in that field, and taught postgraduate courses in 
 it, and what you are saying is completely divorced from reality.
 
 Richard Loosemore

I have only taken maybe one and a half graduate classes on the subject (it
seems like every AI survey class has to touch upon neural nets again), and I
have not taught or done research in the area, but I recognized that most of
that was wrong.  I at least hold out the possibility that neural nets can be
made useful with some greater theory of architectures and much greater
computing power.  I think it would be worthwhile for you to take the time to
list what you think the flaws were, if only to open the possibility of some
positive recommendations for research directions.  Even though you may be
completely disillusioned, maybe not everyone is.
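
For what it's worth, the Hebbian core Matt refers to really is tiny, and the
argument is over everything built around it.  A minimal sketch of the rule
itself (mine, not taken from any particular system):

// Hebb's rule: strengthen each connection in proportion to the
// correlation of pre- and post-synaptic activity.
public class Hebb {
    public static void update(double[][] w, double[] pre, double[] post, double rate) {
        for (int i = 0; i < pre.length; i++)
            for (int j = 0; j < post.length; j++)
                w[i][j] += rate * pre[i] * post[j];
    }
}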



Re: [agi] Computer monitoring and control API

2006-10-01 Thread Andrew Babian
I wrote:
  I just had a notion.  The proper sensory input for an AI is the computer
  screen (and sound), and the proper motor output is regular keyboard and
  mouse input. 
  One thing that needs to exist is a freely available standard API for these
  things, so people can work on them, plus implementations for the platforms
  that people use.  My hope is that it would give different researchers,
  especially all those lone wolves out there, something intercompatible to
  work with. It also seems possible that this could be a common mechanism for
  the different systems to work together, in a sort of extension of the
  Blackboard model.  And, as a lighter element of it, I'd really like it if
  these projects could use video games, because they have more and more become
  very sophisticated real-world modelling tools.
  andi
 
And Richard Loosemore asked for clarification:
 Can you be more specific about what this would entail?  I can think 
 of several interpretations of what you say, but am not sure which 
 you mean.

Well, if you can think of several interpretations, then why don't you pick one
you like?

I was thinking along the lines of java.awt.Robot.  I had only a vague
recollection of it and have never used it, but looking at it again now, I
think it is exactly what I was thinking of.  Another reason I thought of it is
that Stan Franklin's Ida model uses e-mail as a sort of sensory-motor channel,
and that's a kind of subset of this notion.  The standard reactions people
have when they wonder what an artificial intelligence is going to do seem to
be that it will either sit in a box and answer questions or control a physical
robot clunking around the world.  I would simply propose that one other useful
answer is to control and use a computer the way a person might.  This would
mean that it could use all manner of existing tools to multiply whatever power
its additional intelligence adds.

But one of the tricky bits of the idea is having something sufficiently
general and useful to make a contribution.  As I mentioned, there is a Java
class that does the kind of thing I'm interested in, and it's probably
straightforward to have this kind of thing in other imperative languages.  But
how would you have a neural network system interface to it?  I don't know;
maybe the API idea is foolish.  I've never tried to design one, so I don't
particularly know what's involved or whether it's even a good idea.  The
really basic functions I would expect are the ability to capture a piece of
the screen, to control the mouse, and to input keyboard events (a minimal
sketch is below).  I think a very valuable addition would be the ability to
discover the character (or piece of text) at a particular location, so reading
text from the screen would be easier.  We have to do this to use a computer,
and any agent using a computer would need to do it anyway, so it would be most
useful to add it in at the beginning.  Unfortunately, that could be a tricky
bit of code, but it is miles away from full OCR, so it isn't unreasonable.  I
also mentioned having access to the sound streams.  People can get away with
not using the sound on a computer, so clearly it wouldn't be necessary for an
artificial agent, but it might make a valuable addition.  And it might be a
useful feature if part of this interface enabled an AI to simply watch what a
person (or conceivably another agent) was doing, which could open
opportunities for some kind of instruction or learning.
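
A minimal sketch of those three basic functions using java.awt.Robot (the
class name, coordinates, and sizes here are arbitrary placeholders of mine):

import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;
import java.awt.image.BufferedImage;

// Demonstrates the three primitives: screen capture, mouse, keyboard.
public class ScreenAgentDemo {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();

        // "Sense": grab a region of the screen as pixels.
        BufferedImage patch = robot.createScreenCapture(new Rectangle(0, 0, 320, 240));
        System.out.println("captured " + patch.getWidth() + "x" + patch.getHeight());

        // "Act": move the mouse and click.
        robot.mouseMove(100, 100);
        robot.mousePress(InputEvent.BUTTON1_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);

        // Type a key (press and release).
        robot.keyPress(KeyEvent.VK_A);
        robot.keyRelease(KeyEvent.VK_A);

        robot.delay(500); // pace the events
    }
}

The text-at-a-location function has no counterpart in Robot; it would have to
be built on top of createScreenCapture.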

andi



[agi] Computer monitoring and control API

2006-09-29 Thread Andrew Babian
I just had a notion.  The proper sensory input for an AI is the computer
screen (and sound), and the proper motor output is regular keyboard and mouse
input. 
One thing that needs to exist is a freely available standard API for these
things, so people can work on them, plus implementations for the platforms
that people use.  My hope is that it would give different researchers,
especially all those lone wolves out there, something intercompatible to work
with. It also seems possible that this could be a common mechanism for the
different systems to work together, in a sort of extension of the Blackboard
model.  And, as a lighter element of it, I'd really like it if these projects
could use video games, because they have more and more become very
sophisticated real-world modelling tools.
andi



RE: [agi] Failure scenarios

2006-09-25 Thread Andrew Babian
Peter Voss mentioned trying to solve the wrong problem in the first place
as a source of failure in an AGI project.  This was actually the first thing
that I thought of, and it brought to mind a problem that I think of when
considering general intelligence theories: object permanence.  Now, I think
it's established that babies have to learn the concept of object permanence.
They are probably genetically inclined to do so, but they still have to
acquire the concept.  You certainly don't have to have an anthropomorphic
system, but to me this says profound things about what intelligence itself
could possibly be, if you can be intelligent before having such a simple
concept and then have some way of developing it.  One of the implications
for me is that intelligence almost certainly requires some kind of causal,
sensory-motor interaction with the world.  Object permanence itself is an
abstraction from the various practical behaviors involved with it, so I would
also not expect it to be just a piece of knowledge that is added to a system.
What it actually is is a hard question, speaking to the nature of the
generalization of knowledge.  And while this is only one concept of many, it
and others like it are the kinds of problems that I see getting missed in the
sorts of general intelligence theories that I see.



RE: [agi] Why so few AGI projects?

2006-09-13 Thread Andrew Babian
 PS. http://adaptiveai.com/company/opportunities.htm

This also reminds me of something, and I know it's true of myself, and I think
it might be generally true.  It seems like people tend to have their own ideas
of what they want done, and they are just not very interested in working
on someone else's idea or concept.  I know that's why I am not working on
Stan's project.  It could also be why I haven't been aggressive enough to
really go after working on one of the other projects that are out there, a2i2
included.   It seems like there are quite a few lone AI hackers out there. 
And  this is a specific case of something I have found:  nobody likes to be
told what to do--some people tolerate it more than others, but nobody likes it.

andi
