[agi] Russel: If you can figure out another way to do it, I'm all ears!

2010-09-22 Thread David Jones
Russel said:
"Oh, I can figure out how to solve most specific problems. From an AGI
point of view, however, that leaves the question of how those individual
solutions are going to serve as sources of knowledge for a system, rather
than separate specific programs. My answer is to build something that can
reason about code, for which formal logic is a necessary ingredient. If you
can figure out another way to do it, I'm all ears!"

Well, there are at least two problems here:
1) How to gain initial knowledge
2) How to use knowledge to achieve goals once we have it

*1) How to gain initial knowledge*

Ah, this is something very cool that I've been working on lately. Pick a
particular example of initial knowledge from the example below and we can
trace how it is learned and how such learning mechanisms can be implemented.
There are many, so I'm not going to try to list them. I thought it would
also be more fun for you all to pick one and surprise me.


*Let's start with a simple example of 2 (using knowledge we already have and
learning more): creating a Hello World program*

Note that many of the details in how the reasoning is done are left out
because 1) they are yet to be determined in detail and 2) the email is long
enough without them.

*Initial Assumptions:*
The agent has some initial knowledge about programs and about where one might
find information about programming. The agent might have a textbook on the
subject. The agent understands what a hello world program is supposed to do.

So, what are we solving for if the agent has so many initial capabilities?
We're trying to show how the agent reasons about what it already knows to
achieve a goal.

The goal is to create a program that says hello world. The agent
understands this by reasoning about statements made in a textbook about the
hello world example program.

The agent has to plan its actions to achieve the intention of writing a hello
world program. The plan is not a complete step-by-step plan; it just tells
the general direction to go. This is the rough-to-fine heuristic that human
beings often use. From there, the agent does means-ends analysis, searches
for and finds information that might be relevant to the situation at hand,
and reasons about what it has done in the past that has helped achieve parts
of such a goal.

The AGI knows that programs can be created through the Visual Studio IDE,
based on reading about programming in C# (the book it has). So, it
realizes that it needs to achieve a subgoal of finding the Visual Studio IDE
so it can use it. It knows it can do this by getting to the computer and
clicking on the icon that it knows is associated with Visual Studio.
The program comes up. So, then we ask ourselves: what's the next step? Our
brain has marked memories associated with creating programs. It has recorded
the fact that we clicked on the file menu to create a new program and that
this was part of the process of achieving the goal. So, our memory pulls up
this fact and we execute the action, because we have no reason not to pursue
the action in memory. So, to do this we go to the file menu and click create
a new project. We also pull in relevant information, which says we have to
do this, that, and the other as well if we want to create a program. We pull
in relevant info from what we read in the textbook about what to be careful
of, what has to be done, etc.

What's next? We want to make the program print out hello world. We recall
that we can do this by using the command Console.WriteLine(), and we recall
that the thing to print out goes in between the parentheses, like so:
Console.WriteLine(something to print out);
So, we hypothesize that if we replace what was printed out with hello world,
it will work.
So we try Console.WriteLine("hello world"). It works! Hooray. Ta-da. Done.
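
Here is a rough sketch, in Python, of the kind of goal decomposition and
recall-from-memory loop I'm describing. The goal names, the "memory"
entries, and the decomposition are made-up placeholders for illustration,
not part of any real design:

# Toy sketch of the rough-to-fine goal decomposition described above.
# All goal names and remembered subgoal lists are hypothetical.
MEMORY = {
    "write hello world program": ["open visual studio", "create new project",
                                  "add print statement", "run program"],
    "open visual studio": ["go to computer", "click visual studio icon"],
    "create new project": ["click file menu", "click new project"],
    "add print statement": ['type Console.WriteLine("hello world");'],
}

def achieve(goal, depth=0):
    """Means-ends style loop: decompose a goal into remembered subgoals,
    or execute it directly if no finer plan is remembered."""
    print("  " * depth + "goal: " + goal)
    subgoals = MEMORY.get(goal)
    if subgoals is None:
        # No finer decomposition remembered -> treat it as a primitive action.
        print("  " * depth + "-> do: " + goal)
        return
    for sub in subgoals:
        achieve(sub, depth + 1)

achieve("write hello world program")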

Yeah, I know, it's oversimplified. But you can see the types of reasoning
that are required to achieve such a task. Do this thought experiment on
enough problems and generalize what it takes to achieve them (don't try to
overgeneralize, though!).

DO NOT THROW OUT the requirements. You cannot throw out computer vision
because you don't know how to implement it. Sensory perception is a
requirement for AGI for many reasons. So, just make it an assumption in your
design until you can work out the details. We'll do the same thought
experiment on computer vision as well to see how it can be integrated with
the whole system. For now though, we're just focusing on this simple
programming task.








[agi] Very Cool Object Name Intent Test

2010-09-03 Thread David Jones
I just came up with an awesome test. Ask someone, anyone you know to name
something really big and obvious around them that they already know the
position of. Tell them to point to it and name it. Practically *every* time,
they will look at it just before or as they are naming it! And it feels
incredibly uncomfortable not to look at what you are naming as you are
trying to communicate that.

These are the sorts of built-in cues that children require to learn
language. Children know when they are being addressed, and they know how
to narrow down the possible things that you intend to refer to when talking
to them: pointing gestures, eye movements, etc. They are all very strong
*tells* (as in poker) regarding the intent of your speech.

We are constantly analyzing the actual intent of speakers and then
interpreting what they say. This is how children and adults learn language
and gain experience :)

I'm working on a rough to fine model of this in my Pseudo AGI design.
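
As a toy illustration of narrowing the possible referents with such cues,
here is a small sketch in Python. The cue weights and the candidate objects
are invented for the example and are not a claim about how children actually
weight these signals:

# Score candidate referents of an utterance using the speaker's pointing
# and gaze cues. Cue weights and the scene are made up for illustration.
CUE_WEIGHTS = {"pointed_at": 3.0, "looked_at": 2.0, "salient": 1.0}

def likely_referent(candidates):
    """candidates: {object_name: set of cues observed for that object}."""
    def score(obj):
        return sum(CUE_WEIGHTS[c] for c in candidates[obj] if c in CUE_WEIGHTS)
    return max(candidates, key=score)

scene = {
    "refrigerator": {"salient", "looked_at", "pointed_at"},
    "cup":          {"salient"},
    "window":       set(),
}
# The speaker utters an unknown word while looking at and pointing to something:
print(likely_referent(scene))  # -> refrigerator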





[agi] Pseudo Design as a Solution to AGI Design

2010-09-01 Thread David Jones
I've come to think lately that the solution to creating a realistic AGI
design is pseudo design. What do I mean? Not simulation... not practical
applications... not extremely detailed implementations. The design would
start at a high level and go as deep into detail as possible.

So, why would this be a solution? Well, before I mention the cons to this
approach, consider the following:

*Problems it would solve:*
1) There is no money and little interest in AGI. Even if you could get
money, I am 99.99% sure it would be spent wrong. I know, I know... I'm
supposed to be trying to get us money, not dissuade it. But I really think
we are repeating the mistakes of earlier researchers who promised too much
based on unjustified ideas. Then, when they failed, it created AI winters,
over and over and over again. History repeats itself.

So, given the way it would be spent, getting us more money would likely do
harm in addition to too little good for me to care. Extremely few people are
interested in AGI and, among those that are, their ideas about it are very,
very flawed. We tend to approach the problem using our typical heuristics
and problem-solving techniques, but the problem is no longer amenable to
these techniques. For example, take the idea that pattern finding is
sufficient for intelligence. It has not been proven, and it does not stand
up to my reasonable arguments against it. Yet people are getting funding and
pursuing entire architectures based on it. Does that really make sense?
Nope. We must pseudo test and pseudo design our algorithms first. Why?
Because after spending several years on these designs, which I can
reasonably predict will fail with high likelihood, we'll be back at the same
place we were. Wouldn't we be much better off figuring that out earlier
rather than later through fast prototyping techniques, such as the one I
mentioned (pseudo design and testing)?


2) Implementations tend to get overwhelmed by the desire to show immediate
results or achieve practical short-term goals. This completely throws off
AGI implementations, because these other constraints are not compatible with
more important AGI constraints.


3) We could find a solution much faster... AGI is a massively constrained
CSP (Constraint Satisfaction Problem). The Eternity puzzle is a great
example of such a problem. If you approach the Eternity puzzle using
heuristics alone to generate a likely solution, such as how pretty the
pattern is, or how plausible it is that the designers created this design,
you are guaranteed to fail. This is especially true if it takes you even a
few minutes to reject each candidate. The puzzle has so many possibilities
that if you were to try to look at each one to see if it was a solution, it
would literally take an eternity.

So, how do you solve such problems? You start with the most constrained
parts of the puzzle first, and you use heuristics to guide your search
toward solution paths that are likely to contain a solution and away from
solution paths that are less likely to contain one. Most importantly, you
have to try a lot of solutions and reject the bad ones quickly, so that you
can get to the right one. How does this apply to AGI? It's almost exactly
the same. Current researchers are spending a lot of time on solutions that
were generated using bad heuristics (unjustifiable human reasoning
heuristics). Then they take forever (years) to test them out before they
inevitably fail. A better way is to test solutions with as little effort and
time as possible, such as by using pseudo design and testing techniques.
This way you can settle onto the right solution path much, much faster and
not waste time on a solution that clearly wouldn't work if you simply spent
a bit more time analyzing it. Yes, such an approach has problems too, such
as dishonesty or delusion about how the algorithms would actually work. I'll
mention these more below. But we have those delusions and problems already
:) So, overall, this approach seems significantly better.
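
To make the most-constrained-first idea concrete, here is a tiny sketch in
Python. It solves a toy map-coloring problem rather than the Eternity puzzle
itself, and the representation (inequality constraints between pairs of
variables) is just an illustrative assumption:

# Minimal backtracking CSP search with the most-constrained-variable-first
# heuristic: always branch on the variable with the fewest remaining values.
def neighbors(var, constraints):
    """Variables that must take a different value than var."""
    return [b if a == var else a for a, b in constraints if var in (a, b)]

def consistent_values(var, domains, constraints, assignment):
    return [x for x in domains[var]
            if all(assignment.get(n) != x for n in neighbors(var, constraints))]

def solve(domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    # Most constrained variable first: fewest consistent values left.
    var = min(unassigned, key=lambda v: len(
        consistent_values(v, domains, constraints, assignment)))
    for value in consistent_values(var, domains, constraints, assignment):
        assignment[var] = value
        result = solve(domains, constraints, assignment)
        if result is not None:
            return result
        del assignment[var]  # dead end: reject quickly and backtrack
    return None

# Toy map coloring: adjacent regions must get different colors.
domains = {region: ["red", "green", "blue"] for region in "ABCD"}
constraints = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
print(solve(domains, constraints))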


4) If we could show that a pseudo AGI design works in sufficient detail and
with sufficient plausibility, it would likely change the minds of:
-many people who don't think AGI is possible,
-those who think it isn't possible in their lifetimes, and
-those who think it isn't worth investing in.
In other words... we would get the money, help, and interest needed to make
it happen. Demos are great at generating interest in things that are very
complicated. This would be a fantastic demonstration.


*Pros:*
1) Fast design testing and rejection
2) Rough-to-fine design... would arrive at a solution faster because it uses
the Most-Constrained-Variable-First heuristic (such as has been used
to solve the Eternity puzzle... you solve the most constrained portion first
to avoid having to try out many possibilities that will fail at the most
constrained part).
3) Less pressure for practical applications and more focus on important AGI
issues... this removes extra constraints that are not 

[agi] Wow.... just wow. (Adaptive AI)

2010-08-25 Thread David Jones
I accidentally stumbled upon the website of Adaptive AI. I must say, it is
by FAR the best AGI approach and design I have ever seen. As I read it
today and yesterday (I haven't quite finished it all), I agreed with so much
of what he wrote that I could almost swear that I wrote it myself. He even
uses the key phrase I've begun to use myself, which is explicit AGI design.
This dude is awesome. If you haven't read about it yet, please do:

http://www.adaptiveai.com/research/index.htm

Dave

PS: I don't agree with absolutely everything per se, such as the fuzzy
pattern matching stuff... because I just don't understand its specifics,
pros, and cons well enough to agree or disagree. But, damn, this guy got
enough of it right that I have to applaud him regardless of the other details.





[agi] Human Reasoning Examples

2010-08-23 Thread David Jones
Does anyone know of a list, book, or links about human reasoning examples?
I'm having such a hard time finding info on this. I don't want to have to
create all the examples myself, but I don't know where to look.





[agi] Alternative way to reverse engineer the brain

2010-08-20 Thread David Jones
Has anyone thought about some sort of self-assembling nano electrodes or other
nano detectors that could probe the vast majority of neurons and important
structures in a very small brain (such as a gnat brain or a C. elegans worm,
or even a larger animal)?

It seems to me that this would be a hell of a lot easier than simulating a
brain, since there are way too many factors and dynamics involved to
get a simulation to be accurate. Maybe we could just invent a way to probe
every part of the brain in vivo.

Dave





[agi] Language Acquisition TV Special

2010-08-19 Thread David Jones
I've become extremely fascinated with language acquisition. I am convinced
that we can tease out the algorithms that children use to learn language
from observations like the ones seen in the video link below. I'm about to
start watching the second video, but thought you guys might like watching
this too :) Check it out! Also, if you haven't done so yet, check out
William O'Grady's book How Children Learn Language. I love that book.

http://www.youtube.com/watch?v=PZatrvNDOiENR=1

Dave





[agi] Neuroplasticity Explanation Hypothesis

2010-08-14 Thread David Jones
I just had this really interesting idea about neuroplasticity as I'm sitting
here listening to speeches at the Singularity Summit.

I was trying to figure out how neuroplasticity works and why the hell it is
that the brain can find the same patterns in input from completely different
senses. For example, if born without eyes, we can see with touch. If born
without hearing and vision, we can also see and hear with touch! (An example
of this is a blind and deaf person putting their hands on your mouth and neck
to detect and understand your speech; this is a real example.)

How the hell does the brain do that?!

The brain knows how to process certain inputs just the right way. For
example, it knows to group things by color or that faces have certain
special meanings. How does it know to process this sensory input the right
way? I don't think it's purely pattern recognition. Actually, it cannot be
just pattern recognition alone.

So, I realized that it would make sense that cells don't create a network
and wait for input. The cells are not specialized *before* they get sensory
inputs or other types of input (such as input from nearby cells). These
cells specialize AFTER receiving input! That means that our DNA defines what
patterns we should look for and how to process those patterns. Guess what
that means! That means that if these patterns come from completely different
sensory organs, the brain can still recognize the patterns and the cells
that receive these patterns can specialize just right to process them a
certain way! That would perfectly (so I believe) explain neuroplasticity.

Basically, it is a side-effect of the specific design of our brains. But it
means that the brain is not just a pattern recognizer. It has built-in
knowledge which is absolutely essential to process inputs correctly. This
supports my hypothesis that artificial neural nets are not correctly designed
to be able to achieve AGI the way the brain does.

This would also explain my belief that the brain knows how to process inputs
in ways that correctly represent true real-world relationships. It would also
explain why this processing can self-assemble correctly. The knowledge for
how to process inputs is built in (my hypothesis), but it self-assembles only
when inputs that have certain patterns and chemical signals are presented to
the cells.
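
Here is a toy illustration of that hypothesis in Python. The "roles," the
built-in pattern tests, and the input streams are invented placeholders, not
a model of real neurons; the only point is that the same unspecialized unit
picks its role after seeing input, using built-in tests:

import statistics

# Toy illustration: an unspecialized "cell" picks a processing role only
# AFTER seeing input, using built-in pattern tests (standing in for what
# the post attributes to DNA). Roles and tests are invented placeholders.
class Cell:
    def __init__(self):
        self.role = None      # unspecialized until input arrives
        self.samples = []

    def receive(self, value):
        self.samples.append(value)
        if self.role is None and len(self.samples) >= 20:
            self.specialize()

    def specialize(self):
        s = self.samples
        if statistics.pvariance(s) < 0.01:
            self.role = "constant detector"
        elif all(s[i] <= s[i + 1] for i in range(len(s) - 1)):
            self.role = "ramp detector"
        else:
            self.role = "change detector"

# The same cell type specializes differently depending on what it receives,
# regardless of which "sense" produced the stream.
touch_cell, vision_cell = Cell(), Cell()
for t in range(20):
    touch_cell.receive(0.5)        # steady pressure
    vision_cell.receive(t * 0.1)   # steadily brightening input
print(touch_cell.role, "|", vision_cell.role)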

This would explain the confusion between purely self-assembling models
and built-in knowledge of how certain patterns or inputs should be processed.
Clearly, the brain does not evolve to process world input correctly anew every
single time a person is born. We solved this problem already through our DNA
and billions of years of evolution. So, the solutions to the problems are
built into our DNA.

This would also explain how the brain is able to handle other important
functions such as memory, hierarchical relationships, etc. When the brain
detects the need and the right patterns of specialized cells, it can then
create even more specialized cells or cellular changes to perform memory
and other important brain functions.

I also came up with an interesting idea to explain why people go into comas.
I could be completely off. It's just an uneducated guess. The cause of comas
could be that the brain circuit that controls attention has been damaged.
The attention part of the brain probably drives everything by deciding what
circuits to activate and why! Without that circuit creating activity, the
brain's neurons have no reason to fire normally and the brain's normal
activity does not occur.


Dave





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
This seems to be an overly simplistic view of AGI from a mathematician. It's
kind of funny how people overemphasize what they know or depend on their
current expertise too much when trying to solve new problems.

I don't think it makes sense to apply sanitized and formal mathematical
solutions to AGI. What reason do we have to believe that the problems we
face when developing AGI are solvable by such formal representations? What
reason do we have to think we can represent the problems as an instance of
such mathematical problems?

We have to start with the specific problems we are trying to solve, analyze
what it takes to solve them, and then look for and design a solution.
Starting with the solution and trying to hack the problem to fit it is not
going to work for AGI, in my opinion. I could be wrong, but I would need
some evidence to think otherwise.

Dave

On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

 You probably could show that a sophisticated mathematical structure would
 produce a scalable AGI program, if it is true, using contemporary mathematical
 models to simulate it.  However, if scalability was completely dependent on
 some as yet undiscovered mathemagical principle, then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems with
 contemporary AGI.  So I believe this could be demonstrated on a simulation.
 That means, that I could demonstrate effective AGI that works so long as the
 SAT problems are easily solved.  If the program reported that a complicated
 logical problem could not be solved, the user could provide his insight into
 the problem at those times to help with the problem.  This would not work
 exactly as hoped, but by working from there, I believe that I would be able
 to determine better ways to develop such a program so it would work better -
 if my conjecture about the potential efficacy of polynomial time SAT for AGI
 was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
 instance you can apply an idea to your own thinking in such a way that you
 are capable of (gradually) changing how you think about something.  This
 means that an idea can be a compression of some greater change in your own
 programming.  While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having strong effect, but the
 details of those effects would not be known until the change had
 progressed.)

 I think the more important question is how a general concept can be
 interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is how are sophisticated
 conceptual interrelations integrated and resolved?
 Jim










Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Jim,

Fair enough. My apologies then. I just often see your posts on SAT or other
very formal math problems, and I got the impression that you thought this
was at the core of AGI's problems and that pursuing a fast solution to
NP-complete problems was the best way to solve it. At least, that was my
impression. So, my thought was that such formal methods don't seem to be a
complete solution at all, and other factors, such as uncertainty, could make
such formal solutions ineffective or unusable. That is why I said it's
important to analyze the requirements of the problem and then apply a
solution.

Dave

On Wed, Aug 11, 2010 at 1:02 PM, Jim Bromer jimbro...@gmail.com wrote:

 David,
 I am not a mathematician although I do a lot
 of computer-related mathematical work of course.  My remark was directed
 toward John who had suggested that he thought that there is some
 sophisticated mathematical sub system that would (using my words here)
 provide such a substantial benefit to AGI that its lack may be at the core
 of the contemporary problem.  I was saying that unless this required
 mathemagic then a scalable AGI system demonstrating how effective this kind
 of mathematical advancement could probably be simulated using contemporary
 mathematics.  This is not the same as saying that AGI is solvable by
 sanitized formal representations any more than saying that your message is a
 sanitized formal statement because it was dependent on a lot of computer
 mathematics in order to send it.  In other words I was challenging John at
 that point to provide some kind of evidence for his view.

 I then went on to say, that for example, I think that fast SAT solutions
 would make scalable AGI possible (that is, scalable up to a point that is
 way beyond where we are now), and therefore I believe that I could create a
 simulation of an AGI program to demonstrate what I am talking about.  (A
 simulation is not the same as the actual thing.)

 I didn't say, nor did I imply, that the mathematics would be all there is
 to it.  I have spent a long time thinking about the problems of applying
 formal and informal systems to 'real world' (or other world) problems and
 the application of methods is a major part of my AGI theories.  I don't
 expect you to know all of my views on the subject but I hope you will keep
 this in mind for future discussions.
 Jim Bromer

 On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people over emphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.

 Dave

   On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

   You probably could show that a sophisticated mathematical structure
 would produce a scalable AGI program, if it is true, using contemporary
 mathematical models to simulate it.  However, if scalability was
 completely dependent on some as yet undiscovered mathemagical principle,
 then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
 simulation.  That means, that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose 
 johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't
 think

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Slightly off the topic of your last email. But all this discussion has made
me realize how to phrase something... That is, solving AGI requires
understanding the constraints that problems impose on a solution. So, it's
sort of an unbelievably complex constraint satisfaction problem. What we've
been talking about is how we come up with solutions to these problems when
we sometimes aren't actually trying to solve any of the real problems. As
I've been trying to articulate lately, in order to satisfy the constraints
that AGI's problems impose, we must really understand the problems we want
to solve and how they can be solved (their constraints). I think that most
of us do not do this because the problem is so complex that we refuse to
attempt to understand all of its constraints. Instead we focus on something
very small and manageable with fewer constraints. But that's what creates
narrow AI, because the constraints you have developed the solution for only
apply to a narrow set of problems. Once you try to apply it to a different
problem that imposes new, incompatible constraints, the solution fails.

So, lately I've been pushing for people to truly analyze the problems
involved in AGI, step by step, to understand what the constraints are. I
think this is the only way we will develop a solution that is guaranteed to
work without wasting undue time on trial and error. I don't think trial and
error approaches will work. We must know what the constraints are, instead
of guessing at what solutions might approximate the constraints. I think the
problem space is too large to guess.

Of course, I think acquisition of knowledge through automated means is the
first step in understanding these constraints. But, unfortunately, few agree
with me.

Dave

On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote:

 I've made two ultra-brilliant statements in the past few days.  One is that
 a concept can simultaneously be both precise and vague.  And the other is
 that without judgement even opinions are impossible.  (Ok, those two
 statements may not be ultra-brilliant but they are brilliant right?  Ok,
 maybe not truly brilliant,  but highly insightful and
 perspicuously intelligent... Or at least interesting to the cognoscenti
 maybe?.. Well, they were interesting to me at least.)

 Ok, these two interesting-to-me comments made by me are interesting because
 they suggest that we do not know how to program a computer even to create
 opinions.  Or if we do, there is a big untapped difference between those
 programs that show nascent judgement (perhaps only at levels relative to the
 domain of their capabilities) and those that don't.

 This is AGI programmer's utopia.  (Or at least my utopia).  Because I need
 to find something that is simple enough for me to start with and which can
 lend itself to develop and test theories of AGI judgement and scalability.
 By allowing an AGI program to participate more in the selection of its own
 primitive 'interests' we will be able to interact with it, both as
 programmer and as user, to guide it toward selecting those interests which
 we can understand and seem interesting to us.  By creating an AGI program
 that has a faculty for primitive judgement (as we might envision such an
 ability), and then testing the capabilities in areas where the program seems
 to work more effectively, we might be better able to develop more
 powerful AGI theories that show greater scalability, so long as we are able
 to understand what interests the program is pursuing.

 Jim Bromer

 On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.



 I agree that disassociated theories have not proved to be very successful
 at AGI, but then again what has?

 I would use a mathematical method that gave me the number or percentage of
 True cases that satisfy a propositional formula as a way to check the
 internal logic of different combinations of logic-based conjectures.  Since
 methods that can do this with logical variables for any logical system that
 goes (a little) past 32 variables are feasible the potential of this method
 should be easy to check (although it would hit a rather low ceiling of
 scalability).  So I do think that logic
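
Just to make the counting idea concrete, here is a naive sketch in Python.
It brute-forces a toy CNF formula by enumerating every assignment, so it is
nothing like a real #SAT solver and is only practical well below the
32-variable range Jim mentions; the formula itself is made up for the example:

from itertools import product

# Count (and give the percentage of) truth assignments satisfying a CNF
# formula. Clauses are lists of literals: positive int = variable,
# negative int = negated variable.
def count_models(num_vars, clauses):
    satisfying = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            satisfying += 1
    return satisfying, satisfying / (2 ** num_vars)

# (x1 or x2) and (not x1 or x3)
count, fraction = count_models(3, [[1, 2], [-1, 3]])
print(count, fraction)  # 4 of 8 assignments satisfy it -> 0.5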

Re: [agi] Nao Nao

2010-08-10 Thread David Jones
Way too pessimistic in my opinion.

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:

 Aww, so cute.



 I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
 sensory information back to the main servers with all the other Nao's all
 collecting personal data in a massive multi-agent geo-distributed
 robo-network.



 So cuddly!



 And I wonder if it receives and executes commands, commands that come in
 over the network from whatever interested corporation or government pays the
 most for access.



 Such a sweet little friendly Nao. Everyone should get one :)



 John



 *From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]



 An unusually sophisticated ( somewhat expensive) promotional robot vid:




 http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expresses-and-detects-emotions.html







Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
Steve,

Capable and effective AI systems would be very helpful at every step of the
research process. Basic research is a major area that I think AGI will be
applied to. In fact, that's exactly where I plan to apply it first.

Dave

On Tue, Aug 10, 2010 at 7:25 AM, Steve Richfield
 steve.richfi...@gmail.com wrote:

 Ben,

 On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote:


 I'm speaking there, on Ai applied to life extension; and participating in
 a panel discussion on narrow vs. general AI...

 Having some interest, expertise, and experience in both areas, I find it
 hard to imagine much interplay at all.

 The present challenge is wrapped up in a lack of basic information,
 resulting from insufficient funds to do the needed experiments.
 Extrapolations have already gone WAY beyond the data, and new methods to
 push extrapolations even further wouldn't be worth nearly as much as just a
 little more hard data.

 Just look at Aubrey's long list of aging mechanisms. We don't now even know
 which predominate, or which cause others. Further, there are new candidates
 arising every year, e.g. Burzynski's theory that most aging is secondary to
 methylation of DNA receptor sites, or my theory that Aubrey's entire list
 could be explained by people dropping their body temperatures later in life.
 There are LOTS of other theories, and without experimental results, there is
 absolutely no way, AI or not, to sort the wheat from the chaff.

 Note that one of the front runners, the cosmic ray theory, could easily be
 tested by simply raising some mice in deep tunnels. This is high-school
 level stuff, yet with NO significant funding for aging research, it remains
 undone.

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.

 The best that an AI could seemingly do is to pronounce Fund and facilitate
 basic aging research and then suspend execution pending an interrupt
 indicating that the needed experiments have been done.

 Could you provide some hint as to where you are going with this?

 Steve







Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
I think the biggest thing to remember here is that general AI could be
applied to many different problems in parallel by many different people.
It would help with many aspects of the problem-solving process, not just a
single one, and certainly not just a single experiment/study.

I'm confident that Ben is aware of this.


On Tue, Aug 10, 2010 at 1:43 PM, Bob Mottram fuzz...@gmail.com wrote:

 On 10 August 2010 16:44, Ben Goertzel b...@goertzel.org wrote:
  I'm writing an article on the topic for H+ Magazine, which will appear in
 the next couple weeks ... I'll post a link to it when it appears
 
  I'm not advocating applying AI in the absence of new experiments of
 course.  I've been working closely with Genescient, applying AI tech to
 analyze the genomics of their long-lived superflies, so part of my message
 is about the virtuous cycle achievable via synergizing AI data analysis with
 carefully-designed experimental evolution of model organisms...




 Probably if I was going to apply AI in a medical context I'd
 prioritize those conditions which are both common and either fatal or
 have a severe impact on quality of life.  Also worthwhile would be
 using AI to try to discover drugs which have an equivalent effect to
 existing known ones but can be manufactured at a significantly lower
 cost, such that they are brought within the means of a larger fraction
 of the population.  Investigating aging is perfectly legitimate, but
 if you're trying to maximize your personal utility I'd regard it as a
 low priority compared to other more urgent medical issues which cause
 premature deaths.

 Also in the endeavor to extend life we need not focus entirely upon
 medical aspects.  The organizational problems of delivering known
 medications on a large scale is also a problem which AI could perhaps
 be used to optimize.  The way in which things like this are currently
 organized seems to be based upon some combination of tradition and
 intuitive hunches, so there may be low hanging fruit to be obtained
 here.  For example, if an epidemic breaks out, why should you
 vaccinate first?  If you have access to a social graph (from Facebook,
 or wherever) it's probably possible to calculate an optimal strategy.
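
As a rough sketch of the kind of calculation Bob is gesturing at, here is a
toy example in Python. The contact graph is invented, and vaccinating the
most-connected people first is only the simplest possible stand-in for a
real optimal strategy:

# Toy sketch: pick whom to vaccinate first from a social/contact graph.
# Graph and heuristic are illustrative only, not a validated strategy.
contacts = {
    "alice": {"bob", "carol", "dan", "erin"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dan"},
    "dan":   {"alice", "carol"},
    "erin":  {"alice"},
}

def vaccination_order(graph, doses):
    # Highest-degree-first: a crude stand-in for an optimal strategy.
    ranked = sorted(graph, key=lambda person: len(graph[person]), reverse=True)
    return ranked[:doses]

print(vaccination_order(contacts, 2))  # ['alice', 'carol']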








Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
Bob, there are serious issues with such a suggestion.

The biggest issue is that there is a good chance it wouldn't work, because
diseases, including the common cold, have incubation times. So, you may not
have any symptoms at all, yet you can pass the disease on to other people.

And even if we did know who was sick, are you really going to stay home for
2 weeks every time you get sick? If I were an employer, I would rather have
you come to work when you feel up to it.

Another point I've made to germaphobes is this: let's say you are successful
at avoiding as many germs as possible and avoid getting sick as much as
possible. That means that you are likely not immune to some common colds and
such that you should be. So, when you are old and less capable, your immune
system will not be able to fight off the infection and you will die an early
death.

Dave

On Tue, Aug 10, 2010 at 1:51 PM, Bob Mottram fuzz...@gmail.com wrote:

 On 10 August 2010 18:43, Bob Mottram fuzz...@gmail.com wrote:
  here.  For example, if an epidemic breaks out, why should you
  vaccinate first?


 That should have been who rather than why :-)

 Just thinking a little further, in hand waving mode, If something like
 the common cold were added as a status within social networks, and
 everyone was on the network it might even be possible to eliminate
 this disease simply by getting people to avoid those who are known to
 have it for a certain period of time - a sort of internet enabled
 smart avoidance strategy.  This wouldn't be a cure, but it could
 severely hamper the disease transmission mechanism, perhaps even to
 the extent of driving it to extinction.








Re: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
I agree, John, that this is a useful exercise. This would be a good
discussion if Mike would ever admit that I might be right and he might be
wrong. I'm not sure that will ever happen though. :) First he says I can't
define a pattern that works. Then, when I do, he says the pattern is no good
because it isn't physical. Lol. If he would ever admit that I might have
gotten it right, the discussion would be a good one. Instead, he hugs his
preconceived notions no matter how good my arguments are and finds yet
another reason, any reason will do, to say I'm still wrong.

On Aug 9, 2010 2:18 AM, John G. Rose johnr...@polyplexic.com wrote:

Actually this is quite critical.



Defining a chair - which would agree with each instance of a chair in the
supplied image - is the way a chair should be defined and is the way the
mind processes it.



It can be defined mathematically in many ways. There is a particular one I
would go for though...



John



*From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]
*Sent:* Sunday, August 08, 2010 7:28 AM


To: agi
Subject: Re: [agi] How To Create General AI Draft2



You're waffling.



You say there's a pattern for chair - DRAW IT. Attached should help you.



Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.



You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no.



No. That's not funny, that's a waste.. And woolly and imprecise through
and through.







*From:* David Jones davidher...@gmail.com

*Sent:* Sunday, August 08, 2010 1:59 PM

*To:* agi agi@v2.listbox.com

*Subject:* Re: [agi] How To Create General AI Draft2



Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose is your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled as
if everything was made up of matter



And matter is... ?  Huh?



You clearly don't realise that your thinking is seriously woolly - and you
will pay a heavy price in lost time.



What are your basic world/visual-world analytic units  wh. you are
claiming to exist?



You thought - perhaps think still - that *concepts* wh. are pretty
fundamental intellectual units of analysis at a certain level, could be
expressed as, or indeed, were patterns. IOW there's a fundamental pattern
for chair or table. Absolute nonsense. And a radical failure to
understand the basic nature of concepts which is that they are *freeform*
schemas, incapable of being expressed either as patterns or programs.



You had merely assumed that concepts could be expressed as patterns,but had
never seriously, visually analysed it. Similarly you are merely assuming
that the world can be analysed into some kind of visual units - but you
haven't actually done the analysis, have you? You don't have any of these
basic units to hand, do you? If you do, I suggest, reply instantly, naming a
few. You won't be able to do it. They don't exist.



Your whole approach to AGI is based on variations of what we can call
fundamental analysis - and it's wrong. God/Evolution hasn't built the
world with any kind of geometric, or other consistent, bricks. He/It is a
freeform designer. You have to start thinking outside the
box/brick/fundamental unit.



*From:* David Jones davidher...@gmail.com

*Sent:* Sunday, August 08, 2010 5:12 AM

*To:* agi agi@v2.listbox.com

*Subject:* Re: [agi] How To Create General AI Draft2



Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed.

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk
wrote:

1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.




2) Learning about the world won't cut it -  vast nos. of progs. claim they
can learn about the world - what's the difference between narrow AI and AGI
learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI is able to receive input about anything it
needs to know about in the same formats

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
You see, this is precisely why I don't want to argue with Mike anymore: "it
must be a physical pattern." LOL. Whoever said that patterns must be
physical? This is exactly why you can't see my point of view. You impose
unnecessary restrictions on any possible solution when there really are no
such restrictions.

Dave

On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John:It can be defined mathematically in many ways

 Try it - crude drawings/jottings/diagrams totally acceptable. See my set of
 fotos to Dave.

 (And yes, you're right this is of extreme importance. And no. Dave, there
 are no such things as non-physical patterns).







Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
I already stated these. Read previous emails.

On Mon, Aug 9, 2010 at 8:48 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  PS Examples of nonphysical patterns AND how they are applicable to visual
 AGI.?


  *From:* David Jones davidher...@gmail.com
 *Sent:* Monday, August 09, 2010 1:34 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

 You see. This is precisely why I don't want to argue with Mike anymore. it
 must be a physical pattern. LOL. Who ever said that patterns must be
 physical? This is exactly why you can't see my point of view. You impose
 unnecessary restrictions on any possible solution when there really are no
 such restrictions.

 Dave

 On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John:It can be defined mathematically in many ways

 Try it - crude drawings/jottings/diagrams totally acceptable. See my set
 of fotos to Dave.

 (And yes, you're right this is of extreme importance. And no. Dave, there
 are no such things as non-physical patterns).








Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Mike,

Quoting a previous email:

QUOTE

In fact, the chair patterns you refer to are not strictly physical
patterns. The pattern is based on how the objects can be used, what their
intended uses probably are, and what most common effective uses are.

So, chairs are objects that are used to sit on. You can identify objects
whose most likely use is for sitting based on experience.

END QUOTE


Even refrigerators can be chairs. If a fridge is in the woods and you're out
there camping, you can sit on it. I could say "sit on that fridge couch over
there." The fact that multiple people can sit on it makes it possible to
call it a couch.

But it's odd to call it a chair, because it's a fridge. So, when the object
has a more common effective use, as I stated above, it is usually referred
to by that use. If something is most likely used for sitting by a single
person, then it is a chair. If its most common best use is something else,
like cooling food, you would call it a fridge.

So, maybe the pattern would be: if it has some features of a chair, like
possible arm rests, a soft bottom, cushions, legs, a back rest, etc., and you
can't see it being used as anything else, then maybe it's a chair. If
someone sits on it, it certainly is a chair; if you find it by searching for
chairs, it's likely a chair; etc.

You see, chairs are not simply recognized by their physical structure. There
are multiple ways you can recognize one, and it is certainly important to
know that it doesn't seem useful for another task.

The idea that chairs cannot be recognized because they come in all shapes,
sizes and structures is just wrong.
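
Here is a small sketch of that usage-first pattern in Python. The feature
names, the rule ordering, and the examples are invented for illustration and
are not a finished recognizer:

# Toy sketch of the usage-based chair pattern described above.
def label_object(features):
    # A more common effective use wins: a fridge stays a fridge.
    if "cools food" in features.get("likely_uses", []):
        return "fridge"
    # Direct evidence of use.
    if features.get("seen_used_for") == "sitting":
        return "couch" if features.get("seats", 1) > 1 else "chair"
    # Structural cues, plus no better competing use that we can see.
    chair_cues = {"back rest", "seat", "legs", "arm rests", "cushions"}
    if len(chair_cues & set(features.get("parts", []))) >= 2 \
            and not features.get("other_likely_use"):
        return "chair"
    return "unknown"

office_chair = {"parts": ["seat", "back rest", "legs"], "seats": 1}
fridge_in_woods = {"likely_uses": ["cools food"], "seen_used_for": "sitting",
                   "seats": 2}
print(label_object(office_chair))     # chair
print(label_object(fridge_in_woods))  # fridge: its more common use wins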

Dave


 On Mon, Aug 9, 2010 at 8:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Examples of nonphysical patterns?

  *From:* David Jones davidher...@gmail.com
 *Sent:* Monday, August 09, 2010 1:34 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

 You see. This is precisely why I don't want to argue with Mike anymore. it
 must be a physical pattern. LOL. Who ever said that patterns must be
 physical? This is exactly why you can't see my point of view. You impose
 unnecessary restrictions on any possible solution when there really are no
 such restrictions.

 Dave

 On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John:It can be defined mathematically in many ways

 Try it - crude drawings/jottings/diagrams totally acceptable. See my set
 of fotos to Dave.

 (And yes, you're right this is of extreme importance. And no. Dave, there
 are no such things as non-physical patterns).








Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
, But what's not
 so obvious - although undeniable - is how stretchable and fluid that line
 must be in order to recognize diverse objects - as diverse as one octopus,
 one cactus,  one mountain. See foto below.  The brain can stretch a
 line outwards to encompass any form of object in the universe - or
 conversely, squeeze/stretch any object inwards to form a 1. All those
 objects in the foto can be squeezed/stretched into that one on the top
 left.

 Now is anyone here going to have the gall to tell me that process of object
 recognition is mathematical?

 But just as strings are - or could be - central to matter and physics; so
 are fluid schemas central to intelligence - and especially to concepts.

 **Correction - a blind idiot *could* see - by touch - that the diverse
 forms of one octopus/flower etc  could not be reduced to a line by any
 mathematical process.

 P.S. When I say that maths cannot deal with fluid schemas and object
 recognition, one should perhaps amend that - it may be that no existing form
 of maths. wh. deals entirely in set forms and patterns can, but that a
 creative version of maths, dealing in free forms and patchworks, could.

 P.P.S. String - the concept - itself involves an extremely fluid schema -
 is a variation, in fact, of the schema of one/1 - and must embrace many
 diverse forms that strings may be shaped into.




  *From:* David Jones davidher...@gmail.com
 *Sent:* Monday, August 09, 2010 2:13 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

 Mike,

 Quoting a previous email:

 QUOTE

 In fact, the chair patterns you refer to are not strictly physical
 patterns. The pattern is based on how the objects can be used, what their
 intended uses probably are, and what most common effective uses are.

 So, chairs are objects that are used to sit on. You can identify objects
 whose most likely use is for sitting based on experience.

 END QUOTE


 Even refrigerators can be chairs. If a fridge is in the woods and you're
 out there camping, you can sit on it. I could say sit on that fridge couch
 over there. The fact that multiple people can sit on it, makes it possible
 to call it a couch.

 But, it's odd to call it a chair, because it's a fridge. So, when the
 object has a more common effective use, as I stated above, it is usually
 referred to by that use. If something is most likely used for sitting by a
 single person, then it is a chair. If its most common best use is something
 else, like cooling food, you would call it a fridge.

 So, maybe the pattern would be, if it has some features like a chair, like
 possible arm rests, a soft bottom, cushions, legs, a back rest, etc. and you
 can't see it being used as anything else, then maybe it's a chair. If
 someone sits on it, it certainly is a chair, if you find it by searching for
 chairs, its likely a chair. etc.

 You see, chairs are not simply recognized by their physical structure.
 There are multiple ways you can recognize it and it is certainly important
 to know that it doesn't seem useful for another task.

 The idea that chairs cannot be recognized because they come in all shapes,
 sizes and structures is just wrong.

 Dave


 On Mon, Aug 9, 2010 at 8:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Examples of nonphysical patterns?

  *From:* David Jones davidher...@gmail.com
 *Sent:* Monday, August 09, 2010 1:34 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

  You see, this is precisely why I don't want to argue with Mike anymore.
 "It must be a physical pattern." LOL. Who ever said that patterns must be
 physical? This is exactly why you can't see my point of view. You impose
 unnecessary restrictions on any possible solution when there really are no
 such restrictions.

 Dave

 On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John: It can be defined mathematically in many ways

 Try it - crude drawings/jottings/diagrams totally acceptable. See my set
 of fotos to Dave.

 (And yes, you're right this is of extreme importance. And no. Dave, there
 are no such things as non-physical patterns).



Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Thanks Ben,

I think the biggest difference with the way I approach it is to be
deliberate in how the system solves specific kinds of problems. I haven't
gone into that in detail yet though.

For example, Itamar seems to want to give the AI the basic building blocks
that make up spatiotemporal dependencies as a sort of bag of features and
just let a neural-net-like structure find the patterns. If that is not
accurate, please correct me. I am very skeptical of such approaches because
there is no guarantee at all that the system will properly represent the
relationships and structure of the data. It seems merely hopeful to me that
such a system would get it right out of the vast number of possible results
it could accidentally arrive at.

The human visual system doesn't evolve like that on the fly. This can be
proven by the fact that we all see the same visual illusions. We all exhibit
the same visual limitations in the same way. There is much evidence that the
system doesn't evolve accidentally. It has a limited set of rules it uses to
learn from perceptual data.

I think a more deliberate approach would be more effective because we can
understand why it does what it does, how it does it, and why it's not working
if it doesn't work. With such deliberate approaches, it is much more clear
how to proceed and to reuse knowledge in many complementary ways. This is
what I meant by emergence.

I propose a more deliberate approach that knows exactly why problems can be
solved a certain way and how the system is likely to solve them.

I'm suggesting that we represent the spatiotemporal relationships deliberately
and explicitly. Then we can construct general algorithms to solve problems
explicitly, yet generally.
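
To illustrate what I mean by explicit (a rough sketch only; the field names
and example relations below are invented placeholders, not a finished design),
the relationships could be stored as queryable records rather than buried in
network weights:

from dataclasses import dataclass

@dataclass(frozen=True)
class SpatioTemporalRelation:
    subject: str      # e.g. "wheel_3"
    relation: str     # e.g. "part_of", "left_of", "moves_with"
    obj: str          # e.g. "car_1"
    t_start: float    # seconds; interval over which the relation held
    t_end: float
    confidence: float # how strongly the observations support the relation

relations = [
    SpatioTemporalRelation("wheel_3", "part_of", "car_1", 0.0, 12.5, 0.97),
    SpatioTemporalRelation("car_1", "moves_with", "trailer_2", 3.0, 12.5, 0.88),
]

def holds(relations, relation, t):
    """Return all relation instances of a given type that held at time t."""
    return [r for r in relations if r.relation == relation and r.t_start <= t <= r.t_end]

print(holds(relations, "part_of", 5.0))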

Regarding computer vision not being that important... Don't you think that
because knowledge is so essential and manual input is ineffective,
perception-based acquisition of knowledge is a very serious barrier to AGI?
It seems to me that the solutions to AGI problems being constructed are not
using knowledge gained from simulated perception effectively. OpenCog's
natural language processing, for example, seems to use very little
knowledge that would be gathered from visual perception. As far as I
remember, it mostly uses things that are learned from other sources. To me,
it doesn't make sense to spend so much time debugging and developing such
solutions, when a better and more general approach to language understanding
would use a lot of knowledge.

Those are the sorts of things I feel are new to this approach.

Thanks Again,

Dave

PS: I'm planning to go to the Singularity Summit :) Last minute. Hope to see
you there.


On Mon, Aug 9, 2010 at 10:01 AM, Ben Goertzel b...@goertzel.org wrote:

 Hi David,

 I read the essay

 I think it summarizes well some of the key issues involving the bridge
 between perception and cognition, and the hierarchical decomposition of
 natural concepts

 I find the ideas very harmonious with those of Jeff Hawkins, Itamar Arel,
 and other researchers focused on hierarchical deep learning approaches to
 vision with longer-term AGI ambitions

 I'm not sure there are any dramatic new ideas in the essay.  Do you think
 there are?

 My own view is that these ideas are basically right, but handle only a
 modest percentage of what's needed to make a human-level, vaguely human-like
 AGI... I.e., I don't agree that solving vision and the vision-cognition
 bridge is *such* a huge part of AGI, though it's certainly a nontrivial
 percentage...


 -- Ben G

 On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher...@gmail.com wrote:

 Hey Guys,

 I've been working on writing out my approach to create general AI to share
 and debate it with others in the field. I've attached my second draft of it
 in PDF format, if you guys are at all interested. It's still a work in
 progress and hasn't been fully edited. Please feel free to comment,
 positively or negatively, if you have a chance to read any of it. I'll be
 adding to and editing it over the next few days.

 I'll try to reply more professionally than I have been lately :) Sorry :S

 Cheers,

 Dave




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 CTO, Genescient Corp
 Vice Chairman, Humanity+
 Advisor, Singularity University and Singularity Institute
 External Research Professor, Xiamen University, China
 b...@goertzel.org

 I admit that two times two makes four is an excellent thing, but if we are
 to give everything its due, two times two makes five is sometimes a very
 charming thing too. -- Fyodor Dostoevsky


Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Ben,

Comments below.

On Mon, Aug 9, 2010 at 12:00 PM, Ben Goertzel b...@goertzel.org wrote:



 The human visual system doesn't evolve like that on the fly. This can be
 proven by the fact that we all see the same visual illusions. We all exhibit
 the same visual limitations in the same way. There is much evidence that the
 system doesn't evolve accidentally. It has a limited set of rules it uses to
 learn from perceptual data.



 That is not a proof, of course.  It could be that given a general
 architecture, and inputs with certain statistical properties, the same
 internal structures inevitably self-organize


You're right, I should organize details and evidence that the human brain
has a lot of its processing algorithms built in.

Another example of this innate ability to process inputs the right way is
the fact that many language acquisition researchers believe that children
have a built-in hypothesis space that they use when learning language (see
generativism at http://en.wikipedia.org/wiki/Language_acquisition).

It is likely not enough to just give it all the data it needs and let it
guess till it finds a good answer. The hypothesis space is likely too large.
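
As a back-of-the-envelope illustration (the numbers are arbitrary, chosen only
to show the scale), even a tiny mapping problem explodes combinatorially
without built-in constraints:

from math import factorial

words = meanings = 20
unconstrained = factorial(meanings)   # one-to-one word-to-meaning mappings
print(f"{unconstrained:.3e} candidate mappings for just {words} words")
# roughly 2.4e18 hypotheses before any constraint is applied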




So I'm curious

 -- what are the specific pattern-recognition modules that you will put into
 your system, and how will you arrange them hierarchically?


Well, the first pattern-recognition modules are the ones for inferring scene
and object structures and properties from visual/lidar data. I can't really
be specific yet.

The next set of pattern-recognition modules would be for inferring
relationships such as object whole-to-part relationships and their other
behavioral relationships. Basically, algorithms for inferring sparse or
dense models of objects. Again, it is quite hard to be specific about
algorithms. There is a lot of detailed analysis that I have yet to do for
each type of problem and how the whole is broken down into these types of
relationships. Again, as you can see, I think the problem can be broken down
into generic components that can be reasoned about.
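
As a sketch of the kind of output I have in mind (the object, parts, and poses
below are made-up placeholders; nothing here is a committed design), a
whole-to-part model might look like:

object_model = {
    "name": "chair_7",
    "parts": [
        {"name": "seat", "relative_pose": (0.0, 0.0, 0.45)},
        {"name": "back", "relative_pose": (0.0, -0.20, 0.85)},
        {"name": "leg", "count": 4, "relative_pose": None},  # pose varies per leg
    ],
    "behaviors": ["supports_weight", "stationary_unless_pushed"],
}

def part_names(model):
    """Flatten the part list so other modules can reason over it generically."""
    return [p["name"] for p in model["parts"]]

print(part_names(object_model))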

As for hierarchical design... I haven't decided yet. It really depends on
the purpose of the hierarchy and its function. That's why in the paper I
stress function before design.






 -- how will you handle feedback connections (top-down) among the modules?



That's a very good question. I haven't decided yet really because I haven't
fully worked out all the pieces of the design and how they must interact to
solve problems. I'd need to analyze specific requirements and what problems
such feedback is required to solve.

I guess one example of feedback might be the interpretation of ambiguous
visual input, such as single images from a less than ideal camera and scene
setup. Such problems require feedback from knowledge. I see this as a
separate visual processing system from the visual learning system that I
mentioned in the paper. This is because the system I designed is for
learning from less ambiguous input. Once it has gained sufficient knowledge
this way, more ambiguous input would be possible to process and understand
with confidence.

So, clearly much still has to be worked out about the design. But, my
working assumption is that these things can be broken down analytically and
solved. The alternative is to just hope that a similar-to-the-brain model is
going to work. I just don't think we can reasonably hope that such a model
will work, be effective and be efficient. I think it is just too hard to
guess at the right structure that will solve the problems without actually
showing how it solves all the problems we want to apply it to.
*I really think it is very important for the functional requirements to
create the design.* Regardless of the approach, we need to understand why
the solutions we create solve the problems we want to solve. And if we can't
show that they do solve them or how they solve them, then the odds are
against us that they will work. That's my opinion.

If one could show how deep learning models, for example, really do solve all
the problems we want to solve, then I would be willing to use them. I just
don't see it though. It doesn't seem that the solution was generated by the
problem. It seems more that the solution was generated based on its
similarity to the brain. I just can't accept the risk that such approaches
won't work.

Since I don't think reverse engineering the brain makes sense either, my
only alternative to those two approaches seems to be the one I'm taking.

Dave





[agi] Anyone going to the Singularity Summit?

2010-08-09 Thread David Jones
I've decided to go. I was wondering if anyone else here is.

Dave





Re: [agi] How To Create General AI Draft2

2010-08-08 Thread David Jones
Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose are your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them, doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to "a chair" means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.

Dave

 On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Dave: No... it is equivalent to saying that the whole world can be modeled
 as if everything was made up of matter

 And matter is... ?  Huh?

 You clearly don't realise that your thinking is seriously woolly - and you
 will pay a heavy price in lost time.

 What are your basic world/visual-world analytic units  wh. you are
 claiming to exist?

 You thought - perhaps think still - that *concepts* wh. are pretty
 fundamental intellectual units of analysis at a certain level, could be
 expressed as, or indeed, were patterns. IOW there's a fundamental pattern
 for chair or table. Absolute nonsense. And a radical failure to
 understand the basic nature of concepts which is that they are *freeform*
 schemas, incapable of being expressed either as patterns or programs.

 You had merely assumed that concepts could be expressed as patterns, but had
 never seriously, visually analysed it. Similarly you are merely assuming
 that the world can be analysed into some kind of visual units - but you
 haven't actually done the analysis, have you? You don't have any of these
 basic units to hand, do you? If you do, I suggest, reply instantly, naming a
 few. You won't be able to do it. They don't exist.

 Your whole approach to AGI is based on variations of what we can call
 fundamental analysis - and it's wrong. God/Evolution hasn't built the
 world with any kind of geometric, or other consistent, bricks. He/It is a
 freeform designer. You have to start thinking outside the
 box/brick/fundamental unit.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Sunday, August 08, 2010 5:12 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

 Mike,

 I took your comments into consideration and have been updating my paper to
 make sure these problems are addressed.

 See more comments below.

 On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  1) You don't define the difference between narrow AI and AGI - or make
 clear why your approach is one and not the other


 I removed this because my audience is AI researchers... this is AGI
 101. I think it's clear that my design defines general as being able to
 handle the vast majority of things we want the AI to handle without
 requiring a change in design.



 2) Learning about the world won't cut it -  vast nos. of progs. claim
 they can learn about the world - what's the difference between narrow AI and
 AGI learning?


 The difference is in what you can or can't learn about and what tasks you
 can or can't perform. If the AI is able to receive input about anything it
 needs to know about in the same formats that it knows how to understand and
 analyze, it can reason about anything it needs to.



 3) Breaking things down into generic components allows us to learn about
 and handle the vast majority of things we want to learn about. This is what
 makes it general!

 Wild assumption: unproven, not at all demonstrated, and untrue.


 You are only right that I haven't demonstrated it. I will address this in
 the next paper and continue adding details over the next few drafts.

 As a simple argument against your counter argument...

 If it were true that we could not understand the world using a limited
 set of rules or concepts, how is it that a human baby, with a design that is
 predetermined to interact with the world a certain way by its DNA, is able
 to deal with unforeseen things that were not preprogrammed? That’s right,
 the baby was born with a set of rules that robustly allows it to deal with
 the unforeseen. It has a limited set of rules used to learn. That is
 equivalent to a limited set of “concepts” (i.e. rules) that would allow a
 computer to deal with the unforeseen.


  Interesting philosophically because it implicitly underlies AGI-ers'
 fantasies of take-off. You can compare it to the idea that all science can
 be reduced to physics. If it could, then an AGI could indeed take-off. But
 it's demonstrably not so.


 No... it is equivalent to saying that the whole world can be modeled as if
 everything was made up of matter. Oh, I forgot, that is the case :) It is a
 limited set of concepts, yet it can create everything we know.



 You don't seem to understand that the problem of AGI is to deal with the
 NEW - the unfamiliar

Re: [agi] How To Create General AI Draft2

2010-08-08 Thread David Jones
:) What you don't realize is that patterns don't have to be strictly limited
to the actual physical structure.

In fact, the chair patterns you refer to are not strictly physical
patterns. The pattern is based on how the objects can be used, what their
intended uses probably are, and what most common effective uses are.

So, chairs are objects that are used to sit on. You can identify objects
whose most likely use is for sitting based on experience.

If you think this is not a sufficient refutation of your argument, then
please don't argue with me regarding it anymore. I know your opinion and
respectfully disagree. If you don't accept my counter argument, there is no
point to continuing this back and forth ad infinitum.

Dave

On Aug 8, 2010 9:29 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

 You're waffling.

You say there's a pattern for chair - DRAW IT. Attached should help you.

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no.

No. That's not funny, that's a waste. And woolly and imprecise through
and through.



 *From:* David Jones davidher...@gmail.com
*Sent:* Sunday, August 08, 2010 1:59 PM


To: agi
Subject: Re: [agi] How To Create General AI Draft2

Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose are your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them, doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to "a chair" means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.

Dave

 On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Dave: No... it is equivalent to saying that the whole world can be modeled
 as if everything was made up of matter

 And matter is... ?  Huh?

 You clearly don't realise that your thinking is seriously woolly - and you
 will pay a heavy price in lost time.

 What are your basic world/visual-world analytic units  wh. you are
 claiming to exist?

 You thought - perhaps think still - that *concepts* wh. are pretty
 fundamental intellectual units of analysis at a certain level, could be
 expressed as, or indeed, were patterns. IOW there's a fundamental pattern
 for chair or table. Absolute nonsense. And a radical failure to
 understand the basic nature of concepts which is that they are *freeform*
 schemas, incapable of being expressed either as patterns or programs.

 You had merely assumed that concepts could be expressed as patterns, but had
 never seriously, visually analysed it. Similarly you are merely assuming
 that the world can be analysed into some kind of visual units - but you
 haven't actually done the analysis, have you? You don't have any of these
 basic units to hand, do you? If you do, I suggest, reply instantly, naming a
 few. You won't be able to do it. They don't exist.

 Your whole approach to AGI is based on variations of what we can call
 fundamental analysis - and it's wrong. God/Evolution hasn't built the
 world with any kind of geometric, or other consistent, bricks. He/It is a
 freeform designer. You have to start thinking outside the
 box/brick/fundamental unit.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Sunday, August 08, 2010 5:12 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2

 Mike,

 I took your comments into consideration and have been updating my paper to
 make sure these problems are addressed.

 See more comments below.

 On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  1) You don't define the difference between narrow AI and AGI - or make
 clear why your approach is one and not the other


 I removed this because my audience is AI researchers... this is AGI
 101. I think it's clear that my design defines general as being able to
 handle the vast majority of things we want the AI to handle without
 requiring a change in design.



 2) Learning about the world won't cut it -  vast nos. of progs. claim
 they can learn about the world - what's the difference between narrow AI and
 AGI learning?


 The difference is in what you can or can't learn about and what tasks you
 can or can't perform. If the AI is able to receive input about anything it
 needs to know about in the same formats that it knows how to understand and
 analyze, it can reason about anything it needs to.



 3) Breaking things down into generic components allows us to learn about
 and handle the vast majority of things we want to learn about. This is what
 makes it general!

 Wild assumption: unproven, not at all demonstrated, and untrue

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread David Jones
Hey Ben,

Faster, cheaper, and more robust 3D modeling for the movie industry. The
modeling allows different sources of video content to be extracted from
scenes, manipulated and mixed with others.

The movie industry has the money and motivation to extract data from images.
Making it easier, more robust and cheaper could drive innovation and
progress.

Why is it AGI-related? Because AGI requires knowledge. Knowledge can be
extracted from facts about the world. Facts can be extracted from images in
a general way using a limited set of algorithms and concepts.

Some say that computer vision is AI-complete and requires knowledge to do.
But, I have to disagree. Given sufficient data and good images from multiple
cameras or devices, very accurate 3D models can be extracted from unambiguous
data. If this were AI-complete and required knowledge, that would not be
possible.
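
To illustrate the purely geometric part of that claim, here is a minimal
sketch of standard linear (DLT) triangulation of one matched point from two
calibrated views; the projection matrices and pixel coordinates below are
illustrative placeholders, not data from a real rig.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen at pixel x1 in view 1 and x2 in view 2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # back from homogeneous coordinates

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # second camera, shifted 1 unit
x1, x2 = (0.2, 0.1), (0.0, 0.1)                               # matched pixels (illustrative)
print(triangulate(P1, P2, x1, x2))                            # approx. [1.0, 0.5, 5.0]

No world knowledge enters that computation; only calibration and matched
pixels do.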

Dave

On Sat, Aug 7, 2010 at 9:10 PM, Ben Goertzel b...@goertzel.org wrote:

 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Hellen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we
 could keep this thread closely focused on direct answers to this
 question, rather than
 digressive discussions on Helen Keller, the nature of AGI, the definition
 of AGI
 versus narrow AI, the achievability or unachievability of AGI, etc.
 etc.  If you think
 the question is bad or meaningless or unclear or whatever, that's
 fine, but please
 start a new thread with a different subject line to make your point.

 If the discussion is useful, my intention is to mine the answers into a
 compact
 list to convey to him

 Thanks!
 Ben G








Re: [agi] How To Create General AI Draft2

2010-08-07 Thread David Jones
Abram,

Thanks for the comments.

I think probability is just one way to deal with uncertainty. Defeasible
reasoning is another, as are various implementations of non-monotonic logic.

I often think that probability is the wrong way to do some things regarding
AGI design.

Maybe things can't be known with super high confidence, but we still want as
high confidence as reasonably possible. Once we have that, we just have to
have working assumptions and working hypotheses. From there we need the
ability to update beliefs if we can find a reason to think the beliefs are
wrong...
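
A tiny sketch of that working-hypothesis idea (my own toy formulation, with
invented class and field names):

class WorkingBelief:
    def __init__(self, statement, support):
        self.statement = statement
        self.support = support          # reasons in favor
        self.defeaters = []             # reasons against, found later

    def accepted(self):
        # Defeasible rule: keep the working assumption unless a defeater exists.
        return len(self.support) > 0 and not self.defeaters

belief = WorkingBelief("object_5 is a chair", support=["someone sat on it"])
print(belief.accepted())                # True: act on the working hypothesis

belief.defeaters.append("it started cooling food when plugged in")
print(belief.accepted())                # False: belief retracted, revise plans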

Dave


On Fri, Aug 6, 2010 at 9:48 PM, Abram Demski abramdem...@gmail.com wrote:



 On Fri, Aug 6, 2010 at 8:22 PM, Abram Demski abramdem...@gmail.com wrote:


 (Without this sort of generality, your approach seems restricted to
 gathering knowledge about whatever events unfold in front of a limited
 quantity of high-quality camera systems which you set up. To be honest, the
 usefulness of that sort of knowledge is not obvious.)


 On second thought, this statement was a bit naive. You obviously intend the
 camera systems to be connected to robots or other systems which perform
 actual tasks in the world, providing a great variety of information
 including feedback from success/failure of actions to achieve results.

 What is unrealistic to me is not that this information could be useful, but
 that this level of real-world intelligence could be achieved with the
 super-high confidence bounds you are imagining. What I think is that
 probabilistic reasoning is needed. Once we have the object/location/texture
 information with those confidence bounds (which I do see as possible),
 gaining the sort of knowledge Cyc set out to contain seems inherently
 statistical.



 --Abram



 On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher...@gmail.com wrote:

 Hey Guys,

 I've been working on writing out my approach to create general AI to
 share and debate it with others in the field. I've attached my second draft
 of it in PDF format, if you guys are at all interested. It's still a work in
 progress and hasn't been fully edited. Please feel free to comment,
 positively or negatively, if you have a chance to read any of it. I'll be
 adding to and editing it over the next few days.

 I'll try to reply more professionally than I have been lately :) Sorry :S

 Cheers,

 Dave




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] How To Create General AI Draft2

2010-08-07 Thread David Jones
Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed.

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  1) You don't define the difference between narrow AI and AGI - or make
 clear why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.



 2) Learning about the world won't cut it -  vast nos. of progs. claim
 they can learn about the world - what's the difference between narrow AI and
 AGI learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI is able to receive input about anything it
needs to know about in the same formats that it knows how to understand and
analyze, it can reason about anything it needs to.



 3) Breaking things down into generic components allows us to learn about
 and handle the vast majority of things we want to learn about. This is what
 makes it general!

 Wild assumption: unproven, not at all demonstrated, and untrue.


You are only right that I haven't demonstrated it. I will address this in
the next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument...

If it were true that we could not understand the world using a limited set
of rules or concepts, how is it that a human baby, with a design that is
predetermined to interact with the world a certain way by its DNA, is able
to deal with unforeseen things that were not preprogrammed? That’s right,
the baby was born with a set of rules that robustly allows it to deal with
the unforeseen. It has a limited set of rules used to learn. That is
equivalent to a limited set of “concepts” (i.e. rules) that would allow a
computer to deal with the unforeseen.


 Interesting philosophically because it implicitly underlies AGI-ers'
 fantasies of take-off. You can compare it to the idea that all science can
 be reduced to physics. If it could, then an AGI could indeed take-off. But
 it's demonstrably not so.


No... it is equivalent to saying that the whole world can be modeled as if
everything was made up of matter. Oh, I forgot, that is the case :) It is a
limited set of concepts, yet it can create everything we know.



 You don't seem to understand that the problem of AGI is to deal with the
 NEW - the unfamiliar, that wh. cannot be broken down into familiar
 categories, - and then find ways of dealing with it ad hoc.


You don't seem to understand that even the things you think cannot be broken
down, can be.


Dave





Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread David Jones
On Fri, Aug 6, 2010 at 7:37 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
 *So, why computer vision? Why can't we just enter knowledge manually?

 *
 a) The knowledge we require for AI to do what we want is vast and complex
 and we can prove that it is completely ineffective to enter the knowledge we
 need manually.
 b) Computer vision is the most effective means of gathering facts about the
 world. Knowledge and experience can be gained from analysis of these facts.
 c) Language is not learned through passive observation. The associations
 that words have to the environment and our common sense knowledge of the
 environment/world are absolutely essential to language learning,
 understanding and disambiguation. When visual information is available,
 children use visual cues from their parents and from the objects they are
 interacting with to figure out word-environment associations. If visual info
 is not available, touch is essential to replace the visual cues. Touch can
 provide much of the same info as vision, but it is not as effective because
 not everything is in reach and it provides less information than vision.
 There is some very good documentation out there on how children learn
 language that supports this. One example is How Children Learn Language by
 William O'Grady.
 d) The real world cannot be predicted blindly. It is absolutely essential
 to be able to directly observe it and receive feedback.
 e) Manual entry of knowledge, even if possible, would be extremely slow and
 would be a very serious bottleneck(it already is). This is a major reason we
 want AI... to increase our man power and remove man-power related
 bottlenecks.
  

 Discovering a way to get a computer program to interpret a human language
 is a difficult problem.  The feeling that an AI program might be able to
 attain a higher level of intelligence if only it could examine data from a
 variety of different kinds of sensory input modalities it is not new.  It
 has been tried and tried during the past 35 years.  But there is no
 experimental data (that I have heard of) that suggests that this method is
 the only way anyone will achieve intelligence.


if only it could examine data from a variety of different kinds of sensory
input modalities

That statement suggests that such different kinds of input have no
meaningful relationship to the problem at hand. I'm not talking about
different kinds of input. I'm talking about explicitly and deliberately
extracting facts about the environment from sensory perception, specifically
remote perception or visual perception. The input modalities are not what
is important. It is the facts that you can extract from computer vision that
are useful in understanding what is out there in the world, what
relationships and associations exist, and how is language associated with
the environment.

It is well documented that children learn language by interacting with
adults around them and using cues from them to learn how the words they
speak are associated with what is going on. It is not hard to support the
claim that extensive knowledge about the world is important for
understanding and interpreting human language. Nor is it hard to support the
idea that such knowledge can be gained from computer vision.





 I have tried to explain that I believe the problem is twofold.  First of
 all, there have been quite a few AI programs that worked real well as long
 as the problem was simple enough.  This suggests that the complexity of
 what is trying to be understood is a critical factor.  This in turn
 suggests that using different input modalities, would not -in itself- make
 AI possible.


Your conclusion isn't supported by your arguments. I'm not even saying it
makes AI possible. I'm saying that a system can make reasonable inferences
and come to reasonable conclusions with sufficient knowledge. Without
sufficient knowledge, there is reason to believe that it is significantly
harder and often impossible to come to correct conclusions.

Therefore, gaining knowledge about how things are related is not just
helpful in making correct inferences, it is required. So, different input
modalities which can give you facts about the world, which in turn would
give you knowledge about the world, do make correct reasoning possible, when
it otherwise would not be possible.

You see, it has nothing to do with the source of the info or whether it is
more info or not. It has everything to do with the relationships that
information have. Just calling them different input modalities is not
correct.



   Secondly, there is a problem of getting the computer to accurately model
 that which it can know in such a way that it could be effectively utilized
 for higher degrees of complexity.


This is an engineering problem, not necessarily a problem that can't be
solved. When we get

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
 they are
interacting with to figure out word-environment associations. If visual info
is not available, touch is essential to replace the visual cues. Touch can
provide much of the same info as vision, but it is not as effective because
not everything is in reach and it provides less information than vision.
There is some very good documentation out there on how children learn
language that supports this. One example is How Children Learn Language by
William O'Grady.
d) The real world cannot be predicted blindly. It is absolutely essential to
be able to directly observe it and receive feedback.
e) Manual entry of knowledge, even if possible, would be extremely slow and
would be a very serious bottleneck(it already is). This is a major reason we
want AI... to increase our man power and remove man-power related
bottlenecks.

I could argue the above pieces separately. But, since the email is already
long, I'll leave at that for now. If you want to explore any of them
further, I can delve more into them.











On Wed, Aug 4, 2010 at 9:10 AM, Jim Bromer jimbro...@gmail.com wrote:

 On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:
 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering...
 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail...
 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.
  --
 Just looking at these statements, I can find three significant errors. (I
 do agree with some of what you said, like the significance of finding
 observations that are likely true in themselves.)  When the camera moves (in
 a rotation or pan) many features will appear 'to move together over a
 statistically significant distance'.  One might suppose that the animal can
 sense the movement of his own eyes but then again, he can rotate his head
 and use his vision to gauge the rotation of his head.  So right away there
 is some kind of serious error in your statement.  It might be resolvable, it
 is just that your statement does not really do the resolution.  I do believe
 that computer vision is possible with contemporary computers but you are
 also saying that while you can't get your algorithms to work the way you had
 hoped, it doesn't really matter because you can figure it all out without
 the work of implementation.  My point of view is that these represent major
 errors in reasoning.
 I hope to get back to actual visual processing experiments again.  Although
 I don't think that computer vision is necessary for AGI, I do think that
 there is so much to be learned from experimenting with computer vision that
 it is a serious mistake not to take advantage of opportunity.
 Jim Bromer


 On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

I wouldn't say that's an accurate description of what I wrote. What I wrote
was a way to think about how to solve computer vision.

My approach to artificial intelligence is a Neat approach. See
http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached is a
Scruffy approach. Neat approaches are characterized by deliberate
algorithms that are analogous to the problem and can sometimes be shown to
be provably correct. An example of a Neat approach is the use of features in
the paper I mentioned. One can describe why the features are calculated and
manipulated the way they are. An example of a scruffy approach would be
neural nets, where you don't know the rules by which it comes up with an
answer and such approaches are not very scalable. Neural nets require
manually created training data and the knowledge generated is not in a form
that can be used for other tasks. The knowledge isn't portable.

I also wouldn't say I switched from absolute values to rates of change.
That's not really at all what I'm saying here.

Dave

On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.
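
 A trivial illustration of the rate-of-change idea (my own example, not taken
 from the attached article): frame-difference an intensity signal so that only
 temporal change, not absolute level, is passed downstream.

import numpy as np

frames = np.array([10.0, 10.0, 10.5, 12.0, 12.0])   # intensity of one pixel over time
rates = np.diff(frames)                              # dI/dt approximation per frame step
print(rates)                                         # [0.  0.5 1.5 0. ]: change, not level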

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I thought
 is that I found a way to describe why existing solutions work, how they work
 and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur by
 coincidence, such as many features moving together over several frames of a
 video and over a statistically significant distance. They use thresholds to
 ensure that the observed changes are likely transformations of the original
 property observed or to ensure the statistical significance of an
 observation. These are highly likely true observations and not coincidences
 or noise.

 *Second*, they make sure that the other possible explanations of the
 observations are very unlikely. This is usually done using a threshold, and
 a second difference threshold from the first match to the second match. This
 makes sure that second best matches are much farther away than the best
 match. This is important because it's not enough to find a very likely match
 if there are 1000 very likely matches. You have to be able to show that the
 other matches are very unlikely, otherwise the specific match you pick may
 be just a tiny bit better than the others, and the confidence of that match
 would be very low.
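
 To restate those two checks as a sketch (the descriptor distances and
 thresholds below are invented for illustration; they are not tuned values
 from any implementation):

def accept_match(distances, abs_threshold=0.3, ratio_threshold=0.7):
    """distances: candidate descriptor distances for one feature, smaller is better."""
    if len(distances) < 2:
        return None
    ranked = sorted(distances)
    best, second = ranked[0], ranked[1]
    if best > abs_threshold:            # check 1: likely a real correspondence at all
        return None
    if best > ratio_threshold * second: # check 2: clearly better than the runner-up
        return None
    return best

print(accept_match([0.12, 0.55, 0.60]))  # accepted: unambiguous best match
print(accept_match([0.12, 0.14, 0.60]))  # rejected: two near-equal candidates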


 So, my initial design plans are as follows. Note: I will probably not
 actually implement the system because the engineering part dominates the
 time. I'd rather convert real videos to pseudo test cases or simulation test
 cases and then write a psuedo design and algorithm that can solve it. This
 would show that it works without actually spending the time needed to
 implement it. It's more important for me to prove it works and show what it
 can do than to actually do it. If I can prove it, there will be sufficient
 motivation for others to do it with more resources and man power than I have
 at my disposal.

 *My Design*
 *First, we use high speed cameras and lidar systems to gather sufficient
 data with very low uncertainty because the changes possible between data
 points can be assumed to be very low, allowing our thresholds to be much
 smaller, which eliminates many possible errors and ambiguities.

 *Second*, *we have to gain experience from high confidence observations.
 These are gathered as follows:
 1) Describe allowable transformations(thresholds) and what they mean

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

Sorry if I misunderstood your approach. I do not really understand how it
would work though because it is not clear how you go from inputs to output
goals. It likely will still have many of the same problems as other neural
networks, including: 1) poor knowledge portability; 2) difficulty extending,
augmenting, or understanding how it works; 3) a requirement for manually
created training data, which is a major problem; and 4) a design oriented
toward biological hardware rather than existing hardware and software.

These are my main reasons, at least that I can remember, that I avoid
biologically inspired methods. It's not to say that they are wrong. But they
don't meet my requirements. It is also very unclear how to implement the
system and make it work. My approach is very deliberate, so the steps
required to make it work are pretty clear to me.

It is not that your approach is bad. It is just different and I really
prefer methods that are not biologically inspired, but are designed
specifically with goals and requirements in mind as the most important
design motivator.

Dave

On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:

 I've suddenly realized that computer vision of real images is very much
 solvable and that it is now just a matter of engineering. I was so stuck
 before because you can't make the simple assumptions in screenshot computer
 vision that you can in real computer vision. This makes experience probably
 necessary to effectively learn from screenshots. Objects in real images do
 not change drastically in appearance, position or other dimensions in
 unpredictable ways.

 The reason I came to the conclusion that it's a lot easier than I
 thought is that I found a way to describe why existing solutions work, how
 they work and how to come up with even better solutions.

 I've also realized that I don't actually have to implement it, which is
 what is most difficult because even if you know a solution to part of the
 problem has certain properties and issues, implementing it takes a lot of
 time. Whereas I can just assume I have a less than perfect solution with 
 the
 properties I predict from other experiments. Then I can solve the problem
 without actually implementing every last detail.

 *First*, existing methods find observations that are likely true by
 themselves. They find data patterns that are very unlikely to occur

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
Steve,

I replace your need for math with my need to understand what the system is
doing and why it is doing it. It's basically the same thing. But you are
approaching it at an extremely low level. It doesn't seem to me that you are
clear on how this math makes the system work the way we want it to work.
So, make the math as perfect as you like; if you don't understand why you
need the math and how it makes the system do what you want, then it's not
going to do you any good.

Understanding what you are trying to accomplish and how you want the system
to work comes first, not math.

If your neural net doesn't require training data, I don't understand how it
works or why you expect it to do what you want it to do if it is self
organized. How do you tell it how to process inputs correctly? What guides
the processing and analysis?

Dave

On Wed, Aug 4, 2010 at 4:33 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 David

 On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:

 3) requires manually created training data, which is a major problem.


 Where did this come from? Certainly, people are ill-equipped to create
 dP/dt type data. These would have to come from sensors.



 4) is designed with biological hardware in mind, not necessarily existing
 hardware and software.


 The biology is just good to help the math over some humps. So far, I have
 not been able to identify ANY neuronal characteristic that hasn't been
 refined to near-perfection, once the true functionality was fully
 understood.

 Anyway, with the math, you can build a system anyway you want. Without the
 math, you are just wasting your time and electricity. The math comes first,
 and all other things follow.

 Steve
 ===


 These are my main reasons, at least that I can remember, that I avoid
 biologically inspired methods. It's not to say that they are wrong. But they
 don't meet my requirements. It is also very unclear how to implement the
 system and make it work. My approach is very deliberate, so the steps
 required to make it work are pretty clear to me.

 It is not that your approach is bad. It is just different and I really
 prefer methods that are not biologically inspired, but are designed
 specifically with goals and requirements in mind as the most important
 design motivator.

 Dave

 On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 You are correct in that I keep bad company. My approach to NNs is VERY
 different than other people's approaches. I insist on reasonable math being
 performed on quantities that I understand, which sets me apart from just
 about everyone else.

 Your neat approach isn't all that neat, and is arguably scruffier than
 mine. At least I have SOME math to back up my approach. Further, note that
 we are self-organizing systems, and that this process is poorly understood.
 I am NOT particularly interested in people-programmed systems because of their
 very fundamental limitations. Yes, self-organization is messy, but it fits
 the neat definition better than it meets the scruffy definition. Scruffy
 has more to do with people-programmed ad hoc approaches (like most of AGI),
 which I agree are a waste of time.

 Steve
 
 On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:

 Steve,

 I wouldn't say that's an accurate description of what I wrote. What I
 wrote was a way to think about how to solve computer vision.

 My approach to artificial intelligence is a Neat approach. See
 http://en.wikipedia.org/wiki/Neats_vs._scruffies The paper you attached
 is a Scruffy approach. Neat approaches are characterized by deliberate
 algorithms that are analogous to the problem and can sometimes be shown to
 be provably correct. An example of a Neat approach is the use of features 
 in
 the paper I mentioned. One can describe why the features are calculated and
 manipulated the way they are. An example of a scruffy approach would be
 neural nets, where you don't know the rules by which it comes up with an
 answer and such approaches are not very scalable. Neural nets require
 manually created training data and the knowledge generated is not in a form
 that can be used for other tasks. The knowledge isn't portable.

 I also wouldn't say I switched from absolute values to rates of change.
 That's not really at all what I'm saying here.

 Dave

 On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 David,

 It appears that you may have reinvented the wheel. See the attached
 article. There is LOTS of evidence, along with some good math, suggesting
 that our brains work on rates of change rather than absolute values. Then,
 temporal learning, which is otherwise very difficult, falls out as the
 easiest of things to do.

 In effect, your proposal shifts from absolute values to rates of
 change.

 Steve
 ===
 On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher

Re: [agi] Computer Vision not as hard as I thought!

2010-08-04 Thread David Jones
On Wed, Aug 4, 2010 at 6:17 PM, Steve Richfield
steve.richfi...@gmail.comwrote:

 David,

 On Wed, Aug 4, 2010 at 1:45 PM, David Jones davidher...@gmail.com wrote:


 Understanding what you are trying to accomplish and how you want the
 system to work comes first, not math.


 It's all the same. First comes the qualitative, then comes the
 quantitative.


 If your neural net doesn't require training data,


 Sure it needs training data - real-world interactive sensory input training
 data, rather than static manually prepared training data.


Your design is not described well enough or succinctly enough for me to
comment on, then.



 I don't understand how it works or why you expect it to do what you want it
 to do if it is self organized. How do you tell it how to process inputs
 correctly? What guides the processing and analysis?


 Bingo - you have just hit on THE great challenge in AI/AGI, and the source
 of much past debate. Some believe in maximizing the information content of
 the output. Some believe in other figures of merit, e.g. success in
 interacting with a test environment, success in forming a layered structure,
 etc. This particular sub-field is still WIDE open and waiting for some good
 answers.
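
A sketch of the first figure of merit mentioned above (maximizing the
information content of the output) would be to estimate the Shannon entropy of
a layer's output distribution. The following is only an illustration of that
idea, not Steve's actual method; the binning scheme and names are assumptions
made for the example.

```python
# Illustrative only: score a layer by the entropy (in bits) of its output.
# A nearly constant output carries little information; a spread-out output
# carries more, which is what this figure of merit rewards.
import math
from collections import Counter

def output_entropy_bits(activations, n_bins=16):
    """Estimate entropy of activations in [0, 1) by histogram binning."""
    bins = Counter(min(int(a * n_bins), n_bins - 1) for a in activations)
    total = len(activations)
    return -sum((c / total) * math.log2(c / total) for c in bins.values())

print(output_entropy_bits([0.5] * 100))                     # ~0.0 bits
print(output_entropy_bits([i / 100 for i in range(100)]))   # ~4.0 bits
```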

 Note that this same problem presents itself, regardless of approach, e.g.
 AGI.


Ah, but I think that this problem is much more solvable and better defined
with a more deliberate approach that does not depend on emergence. Emergence
is wishful thinking. I hope you do not include such wishful thinking in your
design :)

Once the AI has the tools and knowledge needed to solve a problem, which I
expect to get from computer vision, then it can reason about user stated
goals (in natural language) and we can work on how the goal pursuit part
works. Much work has already been done on planning and execution. But, all
that work was done with insufficient knowledge on narrow problems. All the
research needs to be re-evaluated and studied with sufficient knowledge
about the world. It changes everything. This is another mile marker on my
roadmap to general AI.

Dave





Re: [agi] Walker Lake

2010-08-02 Thread David Jones
How about you go to war yourself or send your children? I'd rather send a
robot. It's safer for both the soldier and the people on the ground, because
you don't have to shoot first and ask questions later.

And you're right, we shouldn't monitor anyone. We should just allow
terrorists to talk openly to plot attacks on us. After all, I'd rather have
my privacy than my life.

dumb.

On Mon, Aug 2, 2010 at 10:40 AM, Steve Richfield
steve.richfi...@gmail.comwrote:

 Sometime when you are flying between the northwest US and Las Vegas,
 look out your window as you fly over Walker Lake in eastern Nevada. At the
 south end you will see a system of roads leading to tiny buildings, all
 surrounded by military security. From what I have been able to figure out,
 you will find the U.S. arsenal of chemical and biological weapons housed
 there. No, we are not now making these weapons, but neither are we disposing
 of them.

 Similarly, there has been discussion of developing advanced military
 technology using AGI and other computer-related methods. I believe that
 these efforts are fundamentally anti-democratic, as they allow a small
 number of people to control a large number of people. Gone are the days when
 people voted with their swords. We now have the best government that money
 can buy monitoring our every email, including this one, to identify anyone
 resisting such efforts. 1984 has truly arrived. This can only lead to a
 horrible end to freedom, with AGIs doing their part and more.

 Like chemical and biological weapons, unmanned and automated weapons should
 be BANNED. Unfortunately, doing so would provide a window of opportunity for
 others to deploy them. However, if we make these and stick them in yet
 another building at the south end of Walker Lake, we would be ready in case
 other nations deploy such weapons.

 How about an international ban on the deployment of all unmanned and
 automated weapons? The U.S. won't now even agree to ban land mines. At least
 this would restore SOME relationship between popular support and military
 might. Doesn't it sound ethical to insist that a human being decide when
 to end another human being's life? Doesn't it sound fair to require the
 decision maker to be in harm's way, especially when the person being killed
 is in or around their own home? Doesn't it sound unethical to add to the
 present situation? When deployed on a large scale, aren't these WMDs?

 Steve







Re: [agi] Shhh!

2010-08-02 Thread David Jones
Abram Wrote:

 I take this as evidence that there is a very strong mental landscape...
 if you go in a particular direction there is a natural series of landmarks,
 including both great ideas and pitfalls that everyone runs into. (Different
 people take different amounts of time to climb out of the pitfalls, though.
 Some may keep looking for gold at a dead end for a long time.)



That is a very nice description of AI research and the pitfalls we come
across in our quest.  :)

Dave





Re: [agi] Clues to the Mind: Learning Ability

2010-07-28 Thread David Jones
:) Intelligence isn't limited to higher cognitive functions. One could say
a virus is intelligent or alive because it can replicate itself.

Intelligence is not just one function or ability, it can be many different
things. But mostly, for us, it comes down to what the system can accomplish
for us.

As for the Turing test, it is basically worthless in my opinion.

PS: you probably should post these video posts to a single thread...

Dave

On Wed, Jul 28, 2010 at 12:39 AM, deepakjnath deepakjn...@gmail.com wrote:

 http://www.facebook.com/video/video.php?v=287151911466

 See how the parrot can learn so much! Does that mean that the parrot has
 intelligence? Will this parrot pass the Turing test?

 There must be a learning center in the brain which is much lower than the
 higher cognitive functions like imagination and thoughts.


 cheers,
 Deepak






Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread David Jones
Sure. Thanks Arthur.

On Sun, Jul 25, 2010 at 10:42 AM, A. T. Murray menti...@scn.org wrote:

 David Jones wrote:
 
 Arthur,
 
 Thanks. I appreciate that. I would be happy to aggregate some of those
 things. I am sometimes not good at maintaining the website because I get
 bored of maintaining or updating it very quickly :)
 
 Dave
 
 On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray menti...@scn.org wrote:
 
  The Web site of David Jones at
 
  http://practicalai.org
 
  is quite impressive to me
  as a kindred spirit building AGI.
  (Just today I have been coding MindForth AGI :-)
 
  For his Practical AI Challenge or similar
  ventures, I would hope that David Jones is
  open to the idea of aggregating or archiving
  representative AI samples from such sources as
  - TexAI;
  - OpenCog;
  - Mentifex AI;
  - etc.;
  so that visitors to PracticalAI may gain an
  overview of what is happening in our field.
 
  Arthur
  --
  http://www.scn.org/~mentifex/AiMind.html
  http://www.scn.org/~mentifex/mindforth.txt

 Just today, a few minutes ago, I updated the
 mindforth.txt AI source code listed above.

 In the PracticalAi aggregates, you might consider
 listing Mentifex AI with copies of the above two
 AI source code pages, and with links to the
 original scn.org URL's, where visitors to
 PracticalAi could look for any more recent
 updates that you had not gotten around to
 transferring from scn.org to PracticalAi.
 In that way, these releases of Mentifex
 free AI source code would have a more robust
 Web presence (SCN often goes down) and I
 could link to PracticalAi for the aggregates
 and other features of PracticalAI.

 Thanks.

 Arthur T. Murray









Re: [agi] How do we hear music

2010-07-26 Thread David Jones
Deepak,

I have some insight on this question. There was a study regarding change
blindness. One of the study's famous experiments was having a person ask for
directions on a college campus. Then in the middle of this, a door would
pass between the person asking directions and the student giving directions.
What they found is that many people didn't realize the person had changed.

BUT, 100% of the people that did notice the change were the same age or
younger than the person they were observing!
So, they did another experiment to rule out the different possible
explanations. They took young people and dressed them as construction
workers. Then, they performed the experiment again with similar age groups.
They found that the people that had noticed the change before no longer did!

Why? Well, the evidence leads us to believe that people pay much closer
attention to the details of people they consider to be similar to them. So,
we notice fewer details when we are observing people of a group we consider
our out-group. In other words, we don't think we belong to the same group
as the person we are observing.

That is why Asians all look the same to you :)

I think the purpose of this is analogous to attention. We only learn about
things we consider important. Or we only pay attention to things we think
are important. So, for whatever reason, we think that out-group people are
not as important to us, and we don't need to spend our brain's resources on
remembering details about them.

Dave

On Jul 26, 2010 2:58 PM, deepakjnath deepakjn...@gmail.com wrote:

Mike,

All Chinese look the same to me. But for a Chinese person they don't. Why
is this? Is there another clue here?

Thanks,
Deepak



On Mon, Jul 26, 2010 at 9:10 PM, Mike Tintner tint...@blueyonder.co.uk
wrote:

 David,

 T...
-- 
cheers,
Deepak





Re: [agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-25 Thread David Jones
I found proof of my interpretation in the following paper also. It concludes
that we can only keep track of 3 or 4 objects in detail at a time (or something
like that).

http://www.pni.princeton.edu/conte/pdfs/project2/Proj2Pub8anne.pdf

It says:
For explicit visual working memory, object tokens are stored in a limited
capacity, vulnerable store that maintains the bindings of features for just
2 to 4 objects.
Attention is required to sustain the memories.

Dave


On Sun, Jul 25, 2010 at 1:00 AM, deepakjnath deepakjn...@gmail.com wrote:

 Thanks Dave, its very interesting. This gives us more clues in to how the
 brain compresses and uses the relevant information while neglecting the
 irrelevant information. But as Anast has demonstrated, the brain does need
 priming in order to decide what is relevant and irrelevant. :)

 Cheers,
 Deepak

 On Sun, Jul 25, 2010 at 5:34 AM, David Jones davidher...@gmail.comwrote:

 I also wanted to say that it is agi related because this may be the way
 that the brain deals with ambiguity in the real world. It ignores many
 things if it can use expectations to constrain possibilities. It is an
 important way in which the brain tracks objects and identifies them without
 analyzing all of an object's features before matching over the whole image.

 On Jul 24, 2010 7:53 PM, David Jones davidher...@gmail.com wrote:

 Actually Deepak, this is AGI related.

 This week I finally found a cool body of research that I previously had no
 knowledge of. This research area is in psychology, which is probably why I
 missed it the first time. It has to do with human perception, object files,
 how we keep track of objects, individuate them, match them (the
 correspondence problem), etc.

 And I found the perfect article just now for you Deepak:
 http://www.duke.edu/~mitroff/papers/SimonsMitroff_01.pdf

 This article mentions why the brain does not notice things. And I just
 realized as I was reading it why we don't see the gorilla or other
 unexpected changes. The reason is this:
 We have a limited amount of processing power that we can apply to visual
 tracking and analysis. So, in attention demanding situations such as these,
 we assign our processing resources to only track the things we are
 interested in. In fact, we probably do this all the time, but it is only
 when we need a lot of attention to be applied to a few objects that we notice
 that we don't see some unexpected events.

 So, our brain knows where to expect the ball next and our visual
 processing is very busy tracking the ball and then seeing who is throwing
 it. As a result, it is unable to also process the movement of other objects.
 If the unexpected event is drastic enough, it will get our attention. But
 since some of the people are in black, our brain probably thinks it is just
 a person in black and doesn't consider it an event that is worthy of
 interrupting our intense tracking.

 Dave



 On Sat, Jul 24, 2010 at 4:58 PM, Anastasios Tsiolakidis sokratis.dk@
 gmail.com wrote:
 
  On Sat,...





 --
 cheers,
 Deepak






Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
lol. thanks Jim :)


On Thu, Jul 22, 2010 at 10:08 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to say that I am proud of David Jones's efforts.  He has really
 matured during these last few months.  I'm kidding but I really do respect
 the fact that he is actively experimenting.  I want to get back to work on
 my artificial imagination and image analysis programs - if I can ever figure
 out how to get the time.

 As I have read David's comments, I realize that we need to really leverage
 all sorts of cruddy data in order to make good agi.  But since that kind of
 thing doesn't work with sparse knowledge, it seems that the only way it
 could work is with extensive knowledge about a wide range of situations,
 like the knowledge gained from a vast variety of experiences.  This
 conjecture makes some sense because if wide ranging knowledge could be kept
 in superficial stores where it could be accessed quickly and economically,
 it could be used efficiently in (conceptual) model fitting.  However, as
 knowledge becomes too extensive it might become too unwieldy to find what is
 needed for a particular situation.  At this point indexing becomes necessary
 with cross-indexing references to different knowledge based on similarities
 and commonalities of employment.

 Here I am saying that relevant knowledge based on previous learning might
 not have to be totally relevant to a situation as long as it could be used
 to run during an ongoing situation.  From this perspective
 then, knowledge from a wide variety of experiences should actually be
 composed of reactions on different conceptual levels.  Then as a piece of
 knowledge is brought into play for an ongoing situation, those levels that
 seem best suited to deal with the situation could be promoted quickly as the
 situation unfolds, acting like an automated indexing system into other
 knowledge relevant to the situation.  So the ongoing process of trying to
 determine what is going on and what actions should be made would
 simultaneously act like an automated index to find better knowledge more
 suited for the situation.
 Jim Bromer






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Abram,

I haven't found a method that I think works consistently yet. Basically I
was trying methods like the one you suggested, which measures the number of
correct predictions or expectations. But then I ran into the problem of:
what if the predictions you are counting are more of the same? Do you count
them or not? For example, let's say that we see a piece of paper on a table
in an image and we see that the paper looks different but moves with the
table. So, we can hypothesize that they are attached. Now what if it is not
a piece of paper, but a mural? Do you count every little piece of the mural
that moves with the desk as a correct prediction? Is it a single prediction?
What about the number of times they move together? It doesn't seem right to
count each and every time, but we also have to be careful about coincidental
movement together. Just because it seems to move together in one frame out
of 1000 does not mean we should consider them temporarily attached.
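
A toy illustration of this counting dilemma, assuming for the sake of argument
that a hypothesis is scored by tallying its confirmed predictions: crediting
"fragment moves with the table" once per mural fragment swamps everything else,
while collapsing repeated predictions of the same kind counts it only once.
Both scoring functions below are made-up examples, not a proposed solution.

```python
# Two naive ways to score a hypothesis by its confirmed predictions.
def naive_score(confirmed_predictions):
    # Count every confirmation, including repeats of the same prediction.
    return len(confirmed_predictions)

def deduplicated_score(confirmed_predictions):
    # Count each distinct kind of prediction only once.
    return len(set(confirmed_predictions))

confirmed = ["fragment moves with table"] * 500 + ["table edge stays straight"]
print(naive_score(confirmed))         # 501
print(deduplicated_score(confirmed))  # 2
```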

So, quantitatively defining simpler and predictive is quite challenging. I
am honestly a bit stumped at how to do it at the moment. I will keep trying
to find ways to at least approximate it, but I'm really not sure the best
way.

Of course, I haven't been working on this specific problem long, but other
people have tried to quantify our explanatory methods in other areas and
have also failed. I think part of the failure has to do with the fact that
the things they want to explain using the same method should probably use
different methods and should be more heuristic than mathematically precise.
It's all quite overwhelming to analyze sometimes.

I may have thought about fractions correct vs. incorrect also. The truth is,
I haven't locked on and carefully analyzed the different ideas I've come up
with because they all seem to have issues and it is difficult to analyze. I
definitely need to try some out and just see what the results are and
document them better.

Dave

On Thu, Jul 22, 2010 at 10:23 PM, Abram Demski abramdem...@gmail.comwrote:

 David,

 What are the different ways you are thinking of for measuring the
 predictiveness? I can think of a few different possibilities (such as
 measuring number incorrect vs measuring fraction incorrect, et cetera) but
 I'm wondering which variations you consider significant/troublesome/etc.

 --Abram

 On Thu, Jul 22, 2010 at 7:12 PM, David Jones davidher...@gmail.comwrote:

 It's certainly not as simple as you claim. First, assigning a probability
 is not always possible, nor is it easy. The factors in calculating that
 probability are unknown and are not the same for every instance. Since we do
 not know what combination of observations we will see, we cannot have a
 predefined set of probabilities, nor is it any easier to create a
 probability function that generates them for us. That is exactly what I
 meant by quantitatively defining the predictiveness... it would be
 proportional to the probability.

 Second, if you can define a program in a way that is always simpler when it
 is smaller, then you can do the same thing without a program. I don't think
 it makes any sense to do it this way.

 It is not that simple. If it was, we could solve a large portion of agi
 easily.

 On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com
 wrote:

 David Jones wrote:

  But, I am amazed at how difficult it is to quantitatively define more
 predictive and simpler for specific problems.

 It isn't hard. To measure predictiveness, you assign a probability to each
 possible outcome. If the actual outcome has probability p, you score a
 penalty of log(1/p) bits. To measure simplicity, use the compressed size of
 the code for your prediction algorithm. Then add the two scores together.
 That's how it is done in the Calgary challenge
 http://www.mailcom.com/challenge/ and in my own text compression
 benchmark.



 -- Matt Mahoney, matmaho...@yahoo.com

 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 3:11:46 PM
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Because simpler is not better if it is less predictive.

 On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com
 wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram

 On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com
 wrote:

  An Update

 I think the following gets to the heart of general AI and what it takes to
 achieve it. It also provides us with evidence as to why general AI is so
 difficult. With this new knowledge in mind, I think I will be much more
 capable now of solving the problems and making it work.

 I've come to the conclusion lately that the best hypothesis is better
 because it is more predictive and then simpler than other hypotheses (in
 that order more predictive... then simpler). But, I am amazed at how
 difficult it is to quantitatively define more predictive and simpler for
 specific problems

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Abram,

I should also mention that I ran into problems mainly because I was having a
hard time deciding how to identify objects and determine what is really
going on in a scene. This adds a whole other layer of complexity to
hypotheses. It's not just about what is more predictive of the observations,
it is about deciding what exactly you are observing in the first place.
(although you might say it's the same problem).

I ran into this problem when my algorithm finds matches between items that
are not the same. Or it may not find any matches between items that are the
same, but have changed. So, how do you decide whether it is 1) the same
object, 2) a different object, or 3) the same object but changed?
And how do you decide its relationship to something else... is it 1)
dependently attached, 2) semi-dependently attached (it can move independently,
but only in certain ways, yet also moves dependently), 3) independent, 4)
sometimes dependent, 5) was dependent, but no longer is, or 6) was dependent on
something else, but then was independent, and now is dependent on something
new?

These hypotheses are different ways of explaining the same observations, but
are complicated by the fact that we aren't sure of the identity of the
objects we are observing in the first place. Multiple hypotheses may fit the
same observations, and it's hard to decide why one is simpler or better than
the other. The object you were observing at first may have disappeared. A
new object may have appeared at the same time (this is why screenshots are a
bit malicious). Or the object you were observing may have changed. In
screenshots, sometimes the objects that you are trying to identify as
different never appear at the same time because they always completely
occlude each other. So, that can make it extremely difficult to decide
whether they are the same object that has changed or different objects.

Such ambiguities are common in AGI. It is unclear to me yet how to deal with
them effectively, although I am continuing to work hard on it.

I know it's a bit of a mess, but I'm just trying to demonstrate the trouble
I've run into.

I hope that makes it more clear why I'm having so much trouble finding a way
of determining what hypothesis is most predictive and simplest.

Dave

On Thu, Jul 22, 2010 at 10:23 PM, Abram Demski abramdem...@gmail.comwrote:

 David,

 What are the different ways you are thinking of for measuring the
 predictiveness? I can think of a few different possibilities (such as
 measuring number incorrect vs measuring fraction incorrect, et cetera) but
 I'm wondering which variations you consider significant/troublesome/etc.

 --Abram

 On Thu, Jul 22, 2010 at 7:12 PM, David Jones davidher...@gmail.comwrote:

 It's certainly not as simple as you claim. First, assigning a probability
 is not always possible, nor is it easy. The factors in calculating that
 probability are unknown and are not the same for every instance. Since we do
 not know what combination of observations we will see, we cannot have a
 predefined set of probabilities, nor is it any easier to create a
 probability function that generates them for us. That is exactly what I
 meant by quantitatively defining the predictiveness... it would be
 proportional to the probability.

 Second, if you can define a program in a way that is always simpler when it
 is smaller, then you can do the same thing without a program. I don't think
 it makes any sense to do it this way.

 It is not that simple. If it was, we could solve a large portion of agi
 easily.

 On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com
 wrote:

 David Jones wrote:

  But, I am amazed at how difficult it is to quantitatively define more
 predictive and simpler for specific problems.

 It isn't hard. To measure predictiveness, you assign a probability to each
 possible outcome. If the actual outcome has probability p, you score a
 penalty of log(1/p) bits. To measure simplicity, use the compressed size of
 the code for your prediction algorithm. Then add the two scores together.
 That's how it is done in the Calgary challenge
 http://www.mailcom.com/challenge/ and in my own text compression
 benchmark.



 -- Matt Mahoney, matmaho...@yahoo.com

 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 3:11:46 PM
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Because simpler is not better if it is less predictive.

 On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com
 wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram

 On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com
 wrote:

  An Update

 I think the following gets to the heart of general AI and what it takes to
 achieve it. It also provides us with evidence as to why general AI is so
 difficult. With this new knowledge in mind, I think I will be much more
 capable now of solving the problems and making it work.

 I've

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Matt,

Any method must deal with similar, if not the same, ambiguities. You need to
show how neural nets solve this problem or how they solve agi goals while
completely skipping the problem. Until then, it is not a successful method.

Dave

On Jul 24, 2010 7:18 PM, Matt Mahoney matmaho...@yahoo.com wrote:

Mike Tintner wrote:
 Huh, Matt? What examples of this holistic scene analysis are there (or
are y...
I mean a neural model with increasingly complex features, as opposed to an
algorithmic 3-D model (like video game graphics in reverse).

Of course David rejects such ideas (
http://practicalai.org/Prize/Default.aspx ) even though the one proven
working vision model uses it.




-- Matt Mahoney, matmaho...@yahoo.com

--
*From:* Mike Tintner tint...@blueyonder.co.uk


To: agi agi@v2.listbox.com
*Sent:* Sat, July 24, 2010 6:16:07 PM


Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Huh, Matt? What examples of this holistic scene analysis are there (or are
you thinking about)?

...





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Check this out!

The title Space and time, not surface features, guide object persistence
says it all.

http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf

Over just the last couple days I have begun to realize that they are so
right. My idea before of using high frame rates is also spot on. The brain
does not use features as much as we think. First we construct a model of the
object, then we probably decide what features to index it with for future
search. If we know that the object occurs at a particular location in space,
then we can learn a great deal about it with very little ambiguity! Of
course, processing images at all is hard, but that's beside the point...
The point is that we can automatically learn about the world using high
frame rates and a simple heuristic for identifying specific objects in a
scene. Because we can reliably identify them, we can learn an extremely
large amount in a very short period of time. We can learn about how lighting
affects the colors, noise, size, shape, components, attachment
relationships, etc. etc.

So, it is very likely that screenshots are not simpler than real images!
lol. The objects in real images usually don't change as much, as drastically
or as quickly as the objects in screenshots. That means that we can use the
simple heuristics of size, shape, location and continuity of time to match
objects and learn about them.
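
To make that heuristic concrete, here is a hedged sketch of matching by space
and time alone, assuming objects have already been segmented into (id,
position) pairs for each frame and that, at a high frame rate, nothing moves
more than a few pixels between frames. The function name and threshold are
illustrative assumptions, not a claim about how the brain or any finished
system does it.

```python
import math

def match_objects(prev_frame, new_frame, max_shift=5.0):
    """prev_frame, new_frame: lists of (object_id, (x, y)).
    Greedily pair each new object with the nearest unused previous object
    within max_shift pixels, ignoring surface features entirely."""
    matches, used = [], set()
    for new_id, (nx, ny) in new_frame:
        best_id, best_dist = None, max_shift
        for prev_id, (px, py) in prev_frame:
            if prev_id in used:
                continue
            dist = math.hypot(nx - px, ny - py)
            if dist <= best_dist:
                best_id, best_dist = prev_id, dist
        if best_id is not None:
            matches.append((best_id, new_id))
            used.add(best_id)
    return matches

# An object that barely moved keeps its identity; a distant one does not.
print(match_objects([("a", (10, 10))], [(1, (12, 11)), (2, (80, 80))]))
# -> [('a', 1)]
```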

Dave

On Sat, Jul 24, 2010 at 9:10 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike Tintner wrote:
  Which is?

 The one right behind your eyes.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 9:00:42 PM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Matt:
 I mean a neural model with increasingly complex features, as opposed to an
 algorithmic 3-D model (like video game graphics in reverse). Of course David
 rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even
 though the one proven working vision model uses it.


 Which is? and does what?  (I'm starting to consider that vision and visual
 perception  -  or perhaps one should say common sense, since no sense in
 humans works independent of the others -  may well be considerably *more*
 complex than language. The evolutionary time required to develop our common
 sense perception and conception of the world was vastly greater than that
 required to develop language. And we are as a culture merely in our babbling
 infancy in beginning to understand how sensory images work and are
 processed).






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
This is absolutely incredible. The answer was right there in the last
paragraph:

The present experiments suggest that the computation
of object persistence appears to rely so heavily upon spatiotemporal
information that it will not (or at least is unlikely
to) use otherwise available surface feature information,
particularly when there is conflicting spatiotemporal
information. This reveals a striking limitation, given various
theories that visual perception uses whatever shortcuts,
or heuristics, it can to simplify processing, as well as
the theory that perception evolves out of a buildup of the
statistical nature of our environment (e.g., Purves & Lotto,
2003). Instead, it appears that the object file system has
“tunnel vision” and turns a blind eye to surface feature information,
focusing on spatiotemporal information when
computing persistence.

So much for Matt's claim that the brain uses hierarchical features LOL

Dave

On Sat, Jul 24, 2010 at 11:52 PM, David Jones davidher...@gmail.com wrote:

 Check this out!

 The title Space and time, not surface features, guide object persistence
 says it all.

 http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf

 Over just the last couple days I have begun to realize that they are so
 right. My idea before of using high frame rates is also spot on. The brain
 does not use features as much as we think. First we construct a model of the
 object, then we probably decide what features to index it with for future
 search. If we know that the object occurs at a particular location in space,
 then we can learn a great deal about it with very little ambiguity! Of
 course, processing images at all is hard, but that's beside the point...
 The point is that we can automatically learn about the world using high
 frame rates and a simple heuristic for identifying specific objects in a
 scene. Because we can reliably identify them, we can learn an extremely
 large amount in a very short period of time. We can learn about how lighting
 affects the colors, noise, size, shape, components, attachment
 relationships, etc. etc.

 So, it is very likely that screenshots are not simpler than real images!
 lol. The objects in real images usually don't change as much, as drastically
 or as quickly as the objects in screenshots. That means that we can use the
 simple heuristics of size, shape, location and continuity of time to match
 objects and learn about them.

 Dave


 On Sat, Jul 24, 2010 at 9:10 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 Mike Tintner wrote:
  Which is?

 The one right behind your eyes.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 9:00:42 PM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Matt:
 I mean a neural model with increasingly complex features, as opposed to an
 algorithmic 3-D model (like video game graphics in reverse). Of course David
 rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even
 though the one proven working vision model uses it.


 Which is? and does what?  (I'm starting to consider that vision and visual
 perception  -  or perhaps one should say common sense, since no sense in
 humans works independent of the others -  may well be considerably *more*
 complex than language. The evolutionary time required to develop our common
 sense perception and conception of the world was vastly greater than that
 required to develop language. And we are as a culture merely in our babbling
 infancy in beginning to understand how sensory images work and are
 processed).








Re: [agi] Clues to the Mind: Illusions / Vision

2010-07-24 Thread David Jones
Yes. I think I may have discovered the keys to crack this puzzle wide open.
The brain seems to use simplistic heuristics for depth perception and
surface bounding. Once it has that, it can apply the spatiotemporal
heuristic I mentioned in other emails to identify and track an object, which
allows it to learn a lot with high confidence. So, that model would explain
why we see depth perception illusions.

Dave

On Jul 25, 2010 1:04 AM, deepakjnath deepakjn...@gmail.com wrote:

http://www.youtube.com/watch?v=QbKw0_v2clofeature=player_embedded

What we see is not really what you see. It's what you see and what you know
you are seeing. The brain superimposes the predicted images onto the viewed
image to actually have a perception of the image.

cheers,
Deepak





[agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
An Update

I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
difficult. With this new knowledge in mind, I think I will be much more
capable now of solving the problems and making it work.

I've come to the conclusion lately that the best hypothesis is better
because it is more predictive and then simpler than other hypotheses (in
that order more predictive... then simpler). But, I am amazed at how
difficult it is to quantitatively define more predictive and simpler for
specific problems. This is why I have sometimes doubted the truth of the
statement.
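
For illustration only, here is a toy sketch of that ordering, assuming we
somehow already had numeric scores for each hypothesis (which, as noted above,
is exactly the hard part). The Hypothesis fields and the numbers are made up;
the point is just the lexicographic rule: maximize predictiveness first, and
only use simplicity to break ties.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    predictiveness: float  # e.g. fraction of observations it predicted
    complexity: float      # e.g. description length; smaller is simpler

def best_hypothesis(hypotheses):
    # More predictive first; among equally predictive ones, prefer simpler.
    return max(hypotheses, key=lambda h: (h.predictiveness, -h.complexity))

candidates = [
    Hypothesis("same object, moved",  0.9, 3.0),
    Hypothesis("new object appeared", 0.9, 5.0),
    Hypothesis("unrelated objects",   0.4, 1.0),
]
print(best_hypothesis(candidates).name)  # -> same object, moved
```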

In addition, the observations that the AI gets are not representative of all
observations! This means that if your measure of predictiveness depends on
the number of certain observations, it could make mistakes! So, the specific
observations you are aware of may be unrepresentative of the predictiveness
of a hypothesis relative to the truth. If you try to calculate which
hypothesis is more predictive and you don't have the critical observations
that would give you the right answer, you may get the wrong answer! This all
depends of course on your method of calculation, which is quite elusive to
define.

Visual input from screenshots, for example, can be somewhat malicious.
Things can move, appear, disappear or occlude each other suddenly. So,
without sufficient knowledge it is hard to decide whether matches you find
between such large changes are because it is the same object or a different
object. This may indicate that bias and preprogrammed experience should be
introduced to the AI before training. Either that or the training inputs
should be carefully chosen to avoid malicious input and to make them nice
for learning.

This is the correspondence problem that is typical of computer vision and
has never been properly solved. Such malicious input also makes it difficult
to learn automatically because the AI doesn't have sufficient experience to
know which changes or transformations are acceptable and which are not. It
is immediately bombarded with malicious inputs.

I've also realized that if a hypothesis is more explanatory, it may be
better. But quantitatively defining explanatory is also elusive and truly
depends on the specific problems you are applying it to because it is a
heuristic. It is not a true measure of correctness. It is not loyal to the
truth. More explanatory is really a heuristic that helps us find
hypotheses that are more predictive. The true measure of whether a
hypothesis is better is simply the most accurate and predictive hypothesis.
That is the ultimate and true measure of correctness.

Also, since we can't measure every possible prediction or every last
prediction (and we certainly can't predict everything), our measure of
predictiveness can't possibly be right all the time! We have no choice but
to use a heuristic of some kind.

So, it's clear to me that the right hypothesis is more predictive and then
simpler. But, it is also clear that there will never be a single measure of
this that can be applied to all problems. I hope to eventually find a nice
model for how to apply it to different problems though. This may be the
reason that so many people have tried and failed to develop general AI. Yes,
there is a solution. But there is no silver bullet that can be applied to
all problems. Some methods are better than others. But I think another major
reason of the failures is that people think they can predict things without
sufficient information. By approaching the problem this way, we compound the
need for heuristics and the errors they produce because we simply don't have
sufficient information to make a good decision with limited evidence. If
approached correctly, the right solution would solve many more problems with
the same efforts than a poor solution would. It would also eliminate some of
the difficulties we currently face if sufficient data is available to learn
from.

In addition to all this theory about better hypotheses, you have to add on
the need to solve problems in reasonable time. This also compounds the
difficulty of the problem and the complexity of solutions.

I am always fascinated by the extraordinary difficulty and complexity of
this problem. The more I learn about it, the more I appreciate it.

Dave





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
Because simpler is not better if it is less predictive.


On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Why more predictive *and then* simpler?

 --Abram

 On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.comwrote:

 An Update

 I think the following gets to the heart of general AI and what it takes to
 achieve it. It also provides us with evidence as to why general AI is so
 difficult. With this new knowledge in mind, I think I will be much more
 capable now of solving the problems and making it work.

 I've come to the conclusion lately that the best hypothesis is better
 because it is more predictive and then simpler than other hypotheses (in
 that order more predictive... then simpler). But, I am amazed at how
 difficult it is to quantitatively define more predictive and simpler for
 specific problems. This is why I have sometimes doubted the truth of the
 statement.

 In addition, the observations that the AI gets are not representative of
 all observations! This means that if your measure of predictiveness
 depends on the number of certain observations, it could make mistakes! So,
 the specific observations you are aware of may be unrepresentative of the
 predictiveness of a hypothesis relative to the truth. If you try to
 calculate which hypothesis is more predictive and you don't have the
 critical observations that would give you the right answer, you may get the
 wrong answer! This all depends of course on your method of calculation,
 which is quite elusive to define.

 Visual input from screenshots, for example, can be somewhat malicious.
 Things can move, appear, disappear or occlude each other suddenly. So,
 without sufficient knowledge it is hard to decide whether matches you find
 between such large changes are because it is the same object or a different
 object. This may indicate that bias and preprogrammed experience should be
 introduced to the AI before training. Either that or the training inputs
 should be carefully chosen to avoid malicious input and to make them nice
 for learning.

 This is the correspondence problem that is typical of computer vision
 and has never been properly solved. Such malicious input also makes it
 difficult to learn automatically because the AI doesn't have sufficient
 experience to know which changes or transformations are acceptable and which
 are not. It is immediately bombarded with malicious inputs.

 I've also realized that if a hypothesis is more explanatory, it may be
 better. But quantitatively defining explanatory is also elusive and truly
 depends on the specific problems you are applying it to because it is a
 heuristic. It is not a true measure of correctness. It is not loyal to the
 truth. More explanatory is really a heuristic that helps us find
 hypotheses that are more predictive. The true measure of whether a
 hypothesis is better is simply the most accurate and predictive hypothesis.
 That is the ultimate and true measure of correctness.

 Also, since we can't measure every possible prediction or every last
 prediction (and we certainly can't predict everything), our measure of
 predictiveness can't possibly be right all the time! We have no choice but
 to use a heuristic of some kind.

 So, it's clear to me that the right hypothesis is more predictive and then
 simpler. But, it is also clear that there will never be a single measure of
 this that can be applied to all problems. I hope to eventually find a nice
 model for how to apply it to different problems though. This may be the
 reason that so many people have tried and failed to develop general AI. Yes,
 there is a solution. But there is no silver bullet that can be applied to
 all problems. Some methods are better than others. But I think another major
 reason of the failures is that people think they can predict things without
 sufficient information. By approaching the problem this way, we compound the
 need for heuristics and the errors they produce because we simply don't have
 sufficient information to make a good decision with limited evidence. If
 approached correctly, the right solution would solve many more problems with
 the same efforts than a poor solution would. It would also eliminate some of
 the difficulties we currently face if sufficient data is available to learn
 from.

 In addition to all this theory about better hypotheses, you have to add on
 the need to solve problems in reasonable time. This also compounds the
 difficulty of the problem and the complexity of solutions.

 I am always fascinated by the extraordinary difficulty and complexity of
 this problem. The more I learn about it, the more I appreciate it.

 Dave




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-22 Thread David Jones
It's certainly not as simple as you claim. First, assigning a probability is
not always possible, nor is it easy. The factors in calculating that
probability are unknown and are not the same for every instance. Since we do
not know what combination of observations we will see, we cannot have a
predefined set of probabilities, nor is it any easier to create a
probability function that generates them for us. That is exactly what I
meant by quantitatively defining the predictiveness... it would be
proportional to the probability.

Second, if you can define a program in a way that is always simpler when it
is smaller, then you can do the same thing without a program. I don't think
it makes any sense to do it this way.

It is not that simple. If it was, we could solve a large portion of agi
easily.

On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com wrote:

David Jones wrote:

 But, I am amazed at how difficult it is to quantitatively define more
predictive and simpler for specific problems.

It isn't hard. To measure predictiveness, you assign a probability to each
possible outcome. If the actual outcome has probability p, you score a
penalty of log(1/p) bits. To measure simplicity, use the compressed size of
the code for your prediction algorithm. Then add the two scores together.
That's how it is done in the Calgary challenge
http://www.mailcom.com/challenge/ and in my own text compression benchmark.
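
A minimal sketch of the scoring rule described here, assuming the predictor
assigns a probability to each outcome that actually occurred and that its own
source code stands in for the prediction algorithm. The helper names are
illustrative and not part of the Calgary challenge harness or the text
compression benchmark.

```python
import math
import zlib

def prediction_penalty_bits(probs_of_actual_outcomes):
    # log2(1/p) bits for each outcome that actually happened.
    return sum(math.log2(1.0 / p) for p in probs_of_actual_outcomes)

def simplicity_penalty_bits(predictor_source):
    # Approximate the predictor's complexity by its compressed size in bits.
    return 8 * len(zlib.compress(predictor_source.encode("utf-8")))

def total_score_bits(probs_of_actual_outcomes, predictor_source):
    # Lower is better: bad predictions and bigger programs both add bits.
    return (prediction_penalty_bits(probs_of_actual_outcomes)
            + simplicity_penalty_bits(predictor_source))

# Example: three outcomes predicted with p = 0.5, 0.25, 0.9 by a toy predictor.
print(total_score_bits([0.5, 0.25, 0.9], "def predict(x): return 0.5"))
```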



-- Matt Mahoney, matmaho...@yahoo.com

*From:* David Jones davidher...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Thu, July 22, 2010 3:11:46 PM
*Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

Because simpler is not better if it is less predictive.

On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:

Jim,

Why more predictive *and then* simpler?

--Abram

On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com wrote:

 An Update

I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
difficult. With this new knowledge in mind, I think I will be much more
capable now of solving the problems and making it work.

I've come to the conclusion lately that the best hypothesis is better
because it is more predictive and then simpler than other hypotheses (in
that order more predictive... then simpler). But, I am amazed at how
difficult it is to quantitatively define more predictive and simpler for
specific problems. This is why I have sometimes doubted the truth of the
statement.

In addition, the observations that the AI gets are not representative of all
observations! This means that if your measure of predictiveness depends on
the number of certain observations, it could make mistakes! So, the specific
observations you are aware of may be unrepresentative of the predictiveness
of a hypothesis relative to the truth. If you try to calculate which
hypothesis is more predictive and you don't have the critical observations
that would give you the right answer, you may get the wrong answer! This all
depends of course on your method of calculation, which is quite elusive to
define.

Visual input from screenshots, for example, can be somewhat malicious.
Things can move, appear, disappear or occlude each other suddenly. So,
without sufficient knowledge it is hard to decide whether matches you find
between such large changes are because it is the same object or a different
object. This may indicate that bias and preprogrammed experience should be
introduced to the AI before training. Either that or the training inputs
should be carefully chosen to avoid malicious input and to make them nice
for learning.

This is the correspondence problem that is typical of computer vision and
has never been properly solved. Such malicious input also makes it difficult
to learn automatically because the AI doesn't have sufficient experience to
know which changes or transformations are acceptable and which are not. It
is immediately bombarded with malicious inputs.

I've also realized that if a hypothesis is more explanatory, it may be
better. But quantitatively defining explanatory is also elusive and truly
depends on the specific problems you are applying it to because it is a
heuristic. It is not a true measure of correctness. It is not loyal to the
truth. More explanatory is really a heuristic that helps us find
hypotheses that are more predictive. The true measure of whether a
hypothesis is better is simply the most accurate and predictive hypothesis.
That is the ultimate and true measure of correctness.

Also, since we can't measure every possible prediction or every last
prediction (and we certainly can't predict everything), our measure of
predictiveness can't possibly be right all the time! We have no choice but
to use a heuristic of some kind.

So, it's clear to me that the right hypothesis is more predictive and then
simpler. But, it is also clear

Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread David Jones
Training data is not available in many real problems. I don't think training
data should be used as the main learning mechanism. It likely won't solve
any of the problems.

On Jul 21, 2010 2:52 AM, deepakjnath deepakjn...@gmail.com wrote:

Yes we could do a 4x4 tic tac toe game like this in a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting the agents know if it passed or failed as a
feedback mechanism.

Cheers,
Deepak



On Wed, Jul 21, 2010 at 9:02 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we a...
-- 
cheers,
Deepak





Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
not really.

On Sun, Jul 18, 2010 at 9:41 AM, deepakjnath deepakjn...@gmail.com wrote:

 Yes, but is there a competition like the XPrize or something that we can
 work towards. ?


 On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti nawi...@gmail.comwrote:

 2010/7/18 deepakjnath deepakjn...@gmail.com

 I wanted to know if there is any bench mark test that can really convince
 majority of today's AGIers that a System is true AGI?

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak


 Have you heard about the Turing test?

 - Panu Horsmalahti




 --
 cheers,
 Deepak






Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
If you can't convince someone, clearly something is wrong with it. I don't
think a test is the right way to do this, which is why I haven't commented
much. When you understand how to create AGI, it will be obvious that it is
AGI, or that it is what you intend it to be. You'll then understand how what
you have built fits into the bigger scheme of things. There is no single
point at which you can say something is AGI and not AGI. Intelligence is a
very subjective thing that really depends on your goals. Someone will always
say it is not good enough. But if it really works, people will quickly
realize it based on results.

What you want is to develop a system that can learn about the world or its
environment in a general way, so that it can solve arbitrary problems, plan
in general ways, act in general ways, and achieve the types of goals you
want it to achieve.

Dave

On Sun, Jul 18, 2010 at 3:03 PM, deepakjnath deepakjn...@gmail.com wrote:

 So if I have a system that is close to AGI, I have no way of really knowing
 it right?

 Even if I believe that my system is a true AGI there is no way of
 convincing the others irrefutably that this system is indeed a AGI not just
 an advanced AI system.

 I have read the toy box problem and rock wall problem, but not many people
 will still be convinced I am sure.

 I wanted to know that if there is any consensus on a general problem which
 can be solved and only solved by a true AGI. Without such a test bench how
 will we know if we are moving closer or away from our quest. There is no
 map.

 Deepak




 On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  I realised that what is needed is a *joint* definition *and*  range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem. (See
 archives).

 I have submitted another still simpler valid test - build a rock wall from
 rocks given, (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally , and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.

 The most common: AGI is human-level intelligence -  is an
 embarrassing non-starter - what distinguishes human intelligence? No
 explanation offered.

 The other two are also inadequate if not as bad: Ben's solves a variety
 of complex problems in a variety of complex environments. Nope, so does  a
 multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
 something to do with insufficient knowledge and resources...
 Insufficient is open to narrow AI interpretations and reducible to
 mathematically calculable probabilities or uncertainties. That doesn't
 distinguish AGI from narrow AI.

 The one thing we should all be able to agree on (but who can be sure?) is
 that:

 ** an AGI is a general intelligence system, capable of independent
 learning**

 i.e. capable of independently learning new activities/skills with minimal
 guidance or even, ideally, with zero guidance (as humans and animals are) -
 and thus acquiring a general, all-round range of intelligence..

 This is an essential AGI goal -  the capacity to keep entering and
 mastering new domains of both mental and physical skills WITHOUT being
 specially programmed each time - that crucially distinguishes it from narrow
 AI's, which have to be individually programmed anew for each new task. Ben's
 AGI dog exemplified this in a v simple way -  the dog is supposed to be able
 to learn to fetch a ball, with only minimal instructions, as real dogs do -
 they can learn a whole variety of new skills with minimal instruction.  But
 I am confident Ben's dog can't actually do this.

 However, the independent learning def. while focussing on the distinctive
 AGI goal,  still is not detailed enough by itself.

 It requires further identification of the **cognitive operations** which
 distinguish AGI,  and wh. are exemplified by the above tests.

 [I'll stop there for interruptions/comments  continue another time].

  P.S. Deepakjnath,

 It is vital to realise that the overwhelming majority of AGI-ers do not *
 want* an AGI test -  Ben has never gone near one, and is merely typical in
 this respect. I'd put almost all AGI-ers here in the same league as the US
 banks, who only want mark-to-fantasy rather than mark-to-market tests of
 their assets.




 --
 cheers,
 Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
Deepak,

I think you would be much better off focusing on something more practical.
Understanding a movie and all the myriad things going on, their
significance, etc. - that's AI-complete. There is no way you are going to
get there without a hell of a lot of steps in between. So, you might as well
focus on the steps required to get there. Such a test is so complicated,
that you cannot even start, except to look for simpler test cases and goals.


My approach to testing AGI has been to define what AGI must accomplish,
which I have broken into the following steps:
1) understand the environment
2) understand ones own actions and how they affect the environment
3) understand language
4) learn goals from other people through language
5) perform planning and attempt to achieve goals
6) other miscellaneous requirements.

Each step must be accomplished in a general way. By general, I mean that it
can solve many many problems with the same programming.

Each step must be done in order because each step requires previous steps to
proceed. So, to me, the most important place to start is general environment
understanding.

Then, now that you know where to start, you pick more specific goals and
test cases. How do you develop and test general environment understanding?
What is a simple test case you can develop on? What are the fundamental
problems and principles involved? What is required to solve these problems?

Those are the sorts of tests you should be considering. But that only comes
after you decide what AGI requires and what steps are required. Maybe you'll
agree with me, maybe you won't. So, that's how I would recommend going about
it.

Dave

On Sun, Jul 18, 2010 at 4:04 PM, deepakjnath deepakjn...@gmail.com wrote:

 Let me clarify. As you all know there are somethings computers are good at
 doing and somethings that Humans can do but a computer cannot.

 One of the tests that I was thinking about recently is to have two movies
 shown to the AGI. Both movies will have the same story, but one would be a
 totally different remake of the other, probably in a different language and
 setting. If the AGI is able to understand the subplot and say that the
 storyline is similar in the two movies, then it could be a good test for AGI
 structure.

 The ability of a system to understand its environment and underlying sub
 plots is an important requirement of AGI.

 Deepak

 On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Please explain/expound freely why you're not convinced - and indicate
 what you expect,  - and I'll reply - but it may not be till tomorrow.

 Re your last point, there def. is no consensus on a general problem/test
 OR a def. of AGI.

 One flaw in your expectations seems to be a desire for a single test -
 almost by definition, there is no such thing as

 a) a single test - i.e. there should be at least a dual or serial test -
 having passed any given test, like the rock/toy test, the AGI must be
 presented with a new adjacent test for wh. it has had no preparation,
 like say building with cushions or sand bags or packing with fruit. (and
 neither rock/toy test state that clearly)

 b) one kind of test - this is an AGI, so it should be clear that if it can
 pass one kind of test, it has the basic potential to go on to many different
 kinds, and it doesn't really matter which kind of test you start with - that
 is partly the function of having a good definition of AGI.


  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Sunday, July 18, 2010 8:03 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 So if I have a system that is close to AGI, I have no way of really
 knowing it right?

 Even if I believe that my system is a true AGI there is no way of
 convincing the others irrefutably that this system is indeed a AGI not just
 an advanced AI system.

 I have read the toy box problem and rock wall problem, but not many people
 will still be convinced I am sure.

 I wanted to know that if there is any consensus on a general problem which
 can be solved and only solved by a true AGI. Without such a test bench how
 will we know if we are moving closer or away from our quest. There is no
 map.

 Deepak



 On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  I realised that what is needed is a *joint* definition *and*  range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem. (See
 archives).

 I have submitted another still simpler valid test - build a rock wall
 from rocks given, (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally , and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.

 The most common: AGI is human-level intelligence -  is an
 embarrassing non-starter - what distinguishes human intelligence? No
 explanation offered.

 The other two are also inadequate 

Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
Ian,

Although most people see natural language as one of the most important parts
of AGI, if you think about it carefully, you'll realize that solving natural
language could be done with sufficient knowledge of the world and sufficient
ability to learn this knowledge automatically. That's why I don't consider
natural language a problem we can focus on until we solve the knowledge
problem... which is what I'm focusing on.

Dave

2010/7/18 Ian Parker ianpark...@gmail.com

 In my view the main obstacle to AGI is the understanding of Natural
 Language. If we have NL comprehension we have the basis for doing a whole
 host of marvellous things.

 There is the Turing test. A good question to ask is What is the difference
 between laying concrete at 50C and fighting Israel. Google translated wsT
 jw AlmErkp or وسط جو المعركة  as central air battle. Correct is the
 climatic environmental battle or a more free translation would be the
 battle against climate and environment. In Turing competitions no one ever
 asks the questions that really would tell AGI apart from a brand X
 chatterbox.

 http://sites.google.com/site/aitranslationproject/Home/formalmethods

 We can I think say that anything which can carry out the program of my blog
 would be well on its way. AGI will also be the link between NL and
 formal mathematics. Let me take yet another example.

 http://sites.google.com/site/aitranslationproject/deepknowled

 Google translated it as 4 times the temperature. Ponder this, you have in
 fact 3 chances to get this right.

 1)  درجة means degree. GT has not translated this word. In this context it
 means power.

 2) If you search for Stefan Boltzmann or Black Body Google gives you
 the correct law.

 3) The translation is obviously mathematically incorrect from the
 dimensional stand-point.

 These 3 things in fact represent different aspects of knowledge. In AGI they
 all have to be present.
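
 As a tiny sketch of point 3 (toy Python with hand-assigned dimensions;
 nothing here comes from any real translation system): a program that tracks
 dimensions can notice that "4 times the temperature" and "temperature to the
 4th power" are not interchangeable readings.

     # Dimensions as exponents of base units; K stands for temperature.
     def power(dims, n):
         return {unit: exp * n for unit, exp in dims.items()}

     T = {"K": 1}
     four_times_T = T               # "4 * T": a pure number leaves dimensions unchanged
     T_to_the_fourth = power(T, 4)  # "T**4": what the Stefan-Boltzmann law actually uses
     print(four_times_T == T_to_the_fourth)   # False -> the two readings differ dimensionally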

 The other interesting point is that there are programs in existence now
 that will address the last two questions. A translator that produces OWL
 solves 2.

 If we match up AGI to Mizar we can put dimensions into the proof engine.

 There are a great many things on the Web which will solve specific
 problems. NL is *THE* problem since it will allow navigation between the
 different programs on the Web.

 MOLTO BTW does have its mathematical parts even though it is primarily
 billed as a translator.


   - Ian Parker

 On 18 July 2010 14:41, deepakjnath deepakjn...@gmail.com wrote:

 Yes, but is there a competition like the XPrize or something that we can
 work towards. ?

 On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti nawi...@gmail.comwrote:

 2010/7/18 deepakjnath deepakjn...@gmail.com

 I wanted to know if there is any bench mark test that can really
 convince majority of today's AGIers that a System is true AGI?

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak


 Have you heard about the Turing test?

 - Panu Horsmalahti




 --
 cheers,
 Deepak








Re: [agi] Is there any Contest or test to ensure that a System is AGI?

2010-07-18 Thread David Jones
Oh, I wanted to add one thing that I've learned recently. The core problem
of AGI is to come up with hypotheses (hopefully the right hypothesis or
one that is good enough is included) and then determine whether the
hypothesis is 1) acceptable and 2) better than other hypotheses. In
addition, you have to have a way to decide *when* to look for better
hypotheses, because you can't just always be looking at all possible
hypotheses.
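
A bare-bones sketch of that loop (Python; acceptable(), score() and
generate_more() are placeholders for whatever knowledge-based tests the
system actually has, and the 0.5 threshold is arbitrary):

    def choose_hypothesis(candidates, data, acceptable, score, generate_more):
        # 1) keep only the hypotheses we have sufficient reason to accept
        viable = [h for h in candidates if acceptable(h, data)]
        # 2) rank the acceptable ones against each other
        viable.sort(key=lambda h: score(h, data), reverse=True)
        # 3) only go looking for better hypotheses when nothing acceptable is
        #    left, or the best one still explains the data poorly
        if not viable or score(viable[0], data) < 0.5:
            viable.extend(generate_more(data))
            viable.sort(key=lambda h: score(h, data), reverse=True)
        return viable[0] if viable else None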

So, with that in mind, the reason that natural language can only be very
roughly approximated without a lot more knowledge is that there isn't
sufficient knowledge to say that one hypothesis is better than another in
the vast majority of cases. The AI doesn't have sufficient *reason* to think
that the right hypothesis is better than the others. The only way to give it
that sufficient reason is to give it sufficient knowledge.

Dave

2010/7/18 David Jones davidher...@gmail.com

 Ian,

 Although most people see natural language as one of the most important
 parts of AGI, if you think about it carefully, you'll realize that solving
 natural language could be done with sufficient knowledge of the world and
 sufficient ability to learn this knowledge automatically. That's why i don't
 consider natural language a problem we can focus on until we solve the
 knowledge problem... which is what I'm focusing on.

 Dave

 2010/7/18 Ian Parker ianpark...@gmail.com

 In my view the main obstacle to AGI is the understanding of Natural
 Language. If we have NL comprehension we have the basis for doing a whole
 host of marvellous things.

 There is the Turing test. A good question to ask is What is the
 difference between laying concrete at 50C and fighting Israel. Google
 translated wsT jw AlmErkp or وسط جو المعركة  as central air battle.
 Correct is the climatic environmental battle or a more free translation
 would be the battle against climate and environment. In Turing
 competitions no one ever asks the questions that really would tell AGI apart
 from a brand X chatterbox.

 http://sites.google.com/site/aitranslationproject/Home/formalmethods

 http://sites.google.com/site/aitranslationproject/Home/formalmethodsWe
 can I think say that anything which can carry out the program of my blog
 would be well on its way. AGI will also be the link between NL and
 formal mathematics. Let me take yet another example.

 http://sites.google.com/site/aitranslationproject/deepknowled

 Google translated it as 4 times the temperature. Ponder this, you have in
 fact 3 chances to get this right.

 1)  درجة means degree. GT has not translated this word. In this context
 it means power.

 2) If you search for Stefan Boltzmann or Black Body Google gives you
 the correct law.

 3) The translation is obviously mathematically incorrect from the
 dimensional stand-point.

 This 3 things in fact represent different aspects of knowledge. In AGI
 they all have to be present.

 The other interesting point is that there are programs in existence now
 that will address the last two questions. A translator that produces OWL
 solves 2.

 If we match up AGI to Mizar we can put dimensions into the proof engine.

 There are a great many things on the Web which will solve specific
 problems. NL is *THE* problem since it will allow navigation between the
 different programs on the Web.

 MOLTO BTW does have its mathematical parts even though it is primerally
 billed as a translator.


   - Ian Parker

 On 18 July 2010 14:41, deepakjnath deepakjn...@gmail.com wrote:

 Yes, but is there a competition like the XPrize or something that we can
 work towards. ?

 On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti nawi...@gmail.comwrote:

 2010/7/18 deepakjnath deepakjn...@gmail.com

 I wanted to know if there is any bench mark test that can really
 convince majority of today's AGIers that a System is true AGI?

 Is there some real prize like the XPrize for AGI or AI in general?

 thanks,
 Deepak


 Have you heard about the Turing test?

 - Panu Horsmalahti




 --
 cheers,
 Deepak










Re: [agi] NL parsing

2010-07-16 Thread David Jones
This is actually a great example of why we should not try to write AGI as
something able to solve any possible problem generally. We, strong AI
agents, are not able to understand this sentence without quite a lot more
information. Likewise, we shouldn't expect a general AI to try many
possibilities until it is able to solve such a maliciously constructed
sentence. There isn't an explanatory reason to believe most of the possible
hypotheses. We need more information to come up with possible hypotheses,
which we can then test out on the sentence and confirm. That's why our
additional knowledge from the blog is the only way we can reasonably
disambiguate the sentence. Normal natural language disambiguation is similar
in that way.

Dave

On Fri, Jul 16, 2010 at 11:29 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 That that that Buffalo buffalo that Buffalo buffalo buffalo buffalo that
 Buffalo
 buffalo that Buffalo buffalo buffalo.

  -- Matt Mahoney, matmaho...@yahoo.com



 - Original Message 
 From: Mike Tintner tint...@blueyonder.co.uk
 To: agi agi@v2.listbox.com
 Sent: Fri, July 16, 2010 11:05:51 AM
 Subject: Re: [agi] NL parsing

 Or if you want to be pedantic about caps, the speaker is identifying 3
 buffaloes from Buffalo,  2 from elsewhere.

 Anyone got any other readings?

 --
 From: Jiri Jelinek jjelinek...@gmail.com
 Sent: Friday, July 16, 2010 3:12 PM
 To: agi agi@v2.listbox.com
 Subject: [agi] NL parsing

  Believe it or not, this sentence is grammatically correct and has
  meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
  buffalo.'
 
  source: http://www.mentalfloss.com/blogs/archives/13120
 
  :-)
 
 













Re: [agi] NL parsing

2010-07-16 Thread David Jones
Mike, your reading also requires extensive knowledge just to seem
explanatory. You've heard people say a noun over and over to note that
they saw one. So, that is the only reason you are able to try to
disambiguate the sentence this way.

Even so, without context, the sentence is still not very explanatory, and
so, any regular person would look for more information because it seems to
be such a strange sentence.

Dave

On Fri, Jul 16, 2010 at 12:04 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Dave: That's why our additional knowledge from the blog is the only way
 we can reasonably disambiguate the sentence.

 Contradicted by my reading. The particular blog reading was esoteric sure.
 But you do have to be capable of creative readings as humans are - that's
 the fundamental challenge of language.

 But of course no machine understands language yet, period  - and isn't
 likely to for a v. v. long time.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Friday, July 16, 2010 4:35 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] NL parsing

 This is actually a great example of why we should not try to write AGI as
 something able to solve any possible problem generally. We, strong ai
 agents, are not able to understand this sentence without quite a lot more
 information. Likewise, we shouldn't expect a general AI to try many
 possibilities until it is able to solve such a maliciously constructed
 sentence. There isn't explanatory reason to believe most of the possible
 hypotheses. We need more information to come up with possible hypotheses,
 which we can then test out on the sentence and confirm. That' why our
 additional knowledge from the blog is the only way we can reasonably
 disambiguate the sentence. Normal natural language disambiguation is similar
 in that way.

 Dave

 On Fri, Jul 16, 2010 at 11:29 AM, Matt Mahoney matmaho...@yahoo.comwrote:

 That that that Buffalo buffalo that Buffalo buffalo buffalo buffalo that
 Buffalo
 buffalo that Buffalo buffalo buffalo.

  -- Matt Mahoney, matmaho...@yahoo.com



 - Original Message 
 From: Mike Tintner tint...@blueyonder.co.uk
 To: agi agi@v2.listbox.com
  Sent: Fri, July 16, 2010 11:05:51 AM
 Subject: Re: [agi] NL parsing

 Or if you want to be pedantic about caps, the speaker is identifying 3
 buffaloes from Buffalo,  2 from elsewhere.

 Anyone got any other readings?

 --
 From: Jiri Jelinek jjelinek...@gmail.com
 Sent: Friday, July 16, 2010 3:12 PM
 To: agi agi@v2.listbox.com
 Subject: [agi] NL parsing

  Believe it or not, this sentence is grammatically correct and has
  meaning: 'Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
  buffalo.'
 
  source: http://www.mentalfloss.com/blogs/archives/13120
 
  :-)
 
 















Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
It is no wonder that I'm having a hard time finding documentation on
hypothesis scoring. Few can agree on how to do it and there is much debate
about it.

I noticed though that a big reason for the problems is that explanatory
reasoning is being applied to many diverse problems. I think, like I
mentioned before, that people should not try to come up with a single
universal rule set for applying explanatory reasoning to every possible
problem. So, maybe that's where the hold up is.

I've been testing my ideas out on complex examples. But now I'm going to go
back to simplified model testing (although not as simple as black squares :)
) and work my way up again.

Dave

On Wed, Jul 14, 2010 at 12:59 PM, David Jones davidher...@gmail.com wrote:

 Actually, I just realized that there is a way to included inductive
 knowledge and experience into this algorithm. Inductive knowledge and
 experience about a specific object or object type can be exploited to know
 which hypotheses in the past were successful, and therefore which hypothesis
 is most likely. By choosing the most likely hypothesis first, we skip a lot
 of messy hypothesis comparison processing and analysis. If we choose the
 right hypothesis first, all we really have to do is verify that this
 hypothesis reveals in the data what we expect to be there. If we confirm
 what we expect, that is reason enough not to look for other hypotheses
 because the data is explained by what we originally believed to be likely.
 We only look for additional hypotheses when we find something unexplained.
 And even then, we don't look at the whole problem. We only look at what we
 have to to explain the unexplained data. In fact, we could even ignore the
 unexplained data if we believe, from experience, that it isn't pertinent.

 I discovered this because I'm analyzing how a series of hypotheses are
 navigated when analyzing images. It seems to me that it is done very
 similarly to way we do it. We sort of confirm what we expect and try to
 explain what we don't expect. We try out hypotheses in a sort of trial and
 error manor and see how each hypothesis affects what we find in the image.
 If we confirm things because of the hypothesis, we are likely to keep it. We
 keep going, navigating the tree of hypotheses, conflicts and unexpected
 observations until we find a good hypothesis. Something like that. I'm
 attempting to construct an algorithm for doing this as I analyze specific
 problems.

 Dave


 On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.comwrote:

 What do you mean by definitive events?

 I guess the first problem I see with my approach is that the movement of
 the window is also a hypothesis. I need to analyze it in more detail and see
 how the tree of hypotheses affects the hypotheses regarding the es on the
 windows.

 What I believe is that these problems can be broken down into types of
 hypotheses,  types of events and types of relationships. then those types
 can be reasoned about in a general way. If possible, then you have a method
 for reasoning about any object that is covered by the types of hypotheses,
 events and relationships that you have defined.

 How to reason about specific objects should not be preprogrammed. But, I
 think the solution to this part of AGI is to find general ways to reason
 about a small set of concepts that can be combined to describe specific
 objects and situations.

 There are other parts to AGI that I am not considering yet. I believe the
 problem has to be broken down into separate pieces and understood before
 putting it back together into a complete system. I have not covered
 inductive learning for example, which would be an important part of AGI. I
 have also not yet incorporated learned experience into the algorithm, which
 is also important.

 The general AI problem is way too complicated to consider all at once. I
 simply can't solve hypothesis generation, comparison and disambiguation
 while at the same time solving induction and experience-based reasoning. It
 becomes unwieldly. So, I'm starting where I can and I'll work my way up to
 the full complexity of the problem.

 I don't really understand what you mean here: The central unsolved
 problem, in my view, is: How can hypotheses be conceptually integrated along
 with the observable definitive events of the problem to form good
 explanatory connections that can mesh well with other knowledge about the
 problem that is considered to be reliable.  The second problem is finding
 efficient ways to represent this complexity of knowledge so that the program
 can utilize it efficiently.

 You also might want to include concrete problems to analyze for your
 central problem suggestions. That would help define the problem a bit better
 for analysis.

 Dave


 On Wed, Jul 14, 2010 at 8:30 AM, Jim Bromer jimbro...@gmail.com wrote:



 On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:
 Even if you refined your model until it was just

Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
:) You say that as if Bayesian explanatory reasoning is the only way.

There is much debate over Bayesian explanatory reasoning versus non-Bayesian.
There are pros and cons to Bayesian methods. Likewise, there is a problem
with non-Bayesian methods, because few have figured out how to do them
effectively. I'm still going to pursue a non-Bayesian approach because I
believe there is likely more merit to it and that the shortcomings can be
overcome.

Dave

On Thu, Jul 15, 2010 at 10:54 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Hypotheses are scored using Bayes law. Let D be your observed data and H be
 your hypothesis. Then p(H|D) = p(D|H)p(H)/p(D). Since p(D) is constant, you
 can remove it and rank hypotheses by p(D|H)p(H).

 p(H) can be estimated using the minimum description length principle or
 Solomonoff induction. Ideally, p(H) = 2^-|H| where |H| is the length (in
 bits) of the description of the hypothesis. The value is language dependent,
 so this method is not perfect.
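
 For instance, a minimal sketch of that ranking (toy Python; the two
 hypotheses, their likelihoods and their description lengths are invented
 purely for illustration):

     # Each hypothesis: (name, likelihood p(D|H), description length |H| in bits).
     hypotheses = [
         ("H1: one object moved right", 0.90, 12),
         ("H2: two different objects",  0.95, 30),
     ]

     def score(likelihood, length_bits):
         # p(D|H) * p(H), with p(H) approximated as 2^-|H| (MDL-style prior)
         return likelihood * 2 ** -length_bits

     best = max(hypotheses, key=lambda h: score(h[1], h[2]))
     print(best[0])   # the shorter H1 wins despite its slightly lower likelihood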


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 15, 2010 10:22:44 AM
 *Subject:* Re: [agi] How do we Score Hypotheses?

 It is no wonder that I'm having a hard time finding documentation on
 hypothesis scoring. Few can agree on how to do it and there is much debate
 about it.

 I noticed though that a big reason for the problems is that explanatory
 reasoning is being applied to many diverse problems. I think, like I
 mentioned before, that people should not try to come up with a single
 universal rule set for applying explanatory reasoning to every possible
 problem. So, maybe that's where the hold up is.

 I've been testing my ideas out on complex examples. But now I'm going to go
 back to simplified model testing (although not as simple as black squares :)
 ) and work my way up again.

 Dave

 On Wed, Jul 14, 2010 at 12:59 PM, David Jones davidher...@gmail.comwrote:

 Actually, I just realized that there is a way to included inductive
 knowledge and experience into this algorithm. Inductive knowledge and
 experience about a specific object or object type can be exploited to know
 which hypotheses in the past were successful, and therefore which hypothesis
 is most likely. By choosing the most likely hypothesis first, we skip a lot
 of messy hypothesis comparison processing and analysis. If we choose the
 right hypothesis first, all we really have to do is verify that this
 hypothesis reveals in the data what we expect to be there. If we confirm
 what we expect, that is reason enough not to look for other hypotheses
 because the data is explained by what we originally believed to be likely.
 We only look for additional hypotheses when we find something unexplained.
 And even then, we don't look at the whole problem. We only look at what we
 have to to explain the unexplained data. In fact, we could even ignore the
 unexplained data if we believe, from experience, that it isn't pertinent.

 I discovered this because I'm analyzing how a series of hypotheses are
 navigated when analyzing images. It seems to me that it is done very
 similarly to way we do it. We sort of confirm what we expect and try to
 explain what we don't expect. We try out hypotheses in a sort of trial and
 error manor and see how each hypothesis affects what we find in the image.
 If we confirm things because of the hypothesis, we are likely to keep it. We
 keep going, navigating the tree of hypotheses, conflicts and unexpected
 observations until we find a good hypothesis. Something like that. I'm
 attempting to construct an algorithm for doing this as I analyze specific
 problems.

 Dave


 On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.comwrote:

 What do you mean by definitive events?

 I guess the first problem I see with my approach is that the movement of
 the window is also a hypothesis. I need to analyze it in more detail and see
 how the tree of hypotheses affects the hypotheses regarding the es on the
 windows.

 What I believe is that these problems can be broken down into types of
 hypotheses,  types of events and types of relationships. then those types
 can be reasoned about in a general way. If possible, then you have a method
 for reasoning about any object that is covered by the types of hypotheses,
 events and relationships that you have defined.

 How to reason about specific objects should not be preprogrammed. But, I
 think the solution to this part of AGI is to find general ways to reason
 about a small set of concepts that can be combined to describe specific
 objects and situations.

 There are other parts to AGI that I am not considering yet. I believe the
 problem has to be broken down into separate pieces and understood before
 putting it back together into a complete system. I have not covered
 inductive learning for example, which would be an important part of AGI. I
 have also not yet incorporated

Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
Jim,

even that isn't an obvious event. You don't know what is background and what
is not. You don't even know if there is an object or not. You don't know if
anything moved or not. You can make some observations using predefined
methods and then see if you find matches... then hypothesize about the
matches...

 It all has to be learned and figured out through reasoning.

That's why I asked what you meant by definitive events. Nothing is really
definitive. It is all hypothesized in a non-monotonic manner.

Dave

On Thu, Jul 15, 2010 at 12:01 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.comwrote:

 What do you mean by definitive events?



 I was just trying to find a way to designate observations that would be
 reliably obvious to a computer program.  This has something to do with the
 assumptions that you are using.  For example if some object appeared against
 a stable background and it was a different color than the background, it
 would be a definitive observation event because your algorithm could detect
 it with some certainty and use it in the definition of other more
 complicated events (like occlusion.)  Notice that this example would not
 necessarily be so obvious (a definitive event) using a camera, because there
 are a number of ways that an illusion (of some kind) could end up as a data
 event.

 I will try to reply to the rest of your message sometime later.
 Jim Bromer






Re: [agi] How do we Score Hypotheses?

2010-07-15 Thread David Jones
On screenshots, the point of view is fixed, so it reduces to the objects'
absolute positions (in screen x and y) and their positions relative to each
other.

You don't need a robot to learn about how AGI works and figure out how to
solve some problems. It would be a terrible mistake to spend years, or even
weeks for that matter, on robotics before getting started.

Dave

On Thu, Jul 15, 2010 at 1:09 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Sounds like a good explanation of why a body is essential for vision -
 not just for POV and orientation [up/left/right/down/ towards/ away] but for
 comparison and yardstick - you do know when your body or parts thereof are
 moving -and  it's not merely touch but the comparison of other objects still
 and moving with your own moving hands and body that is important.

 The more you go into it, the crazier the prospect of vision without eyes in
 a body becomes.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Thursday, July 15, 2010 5:54 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How do we Score Hypotheses?

 Jim,

 even that isn't an obvious event. You don't know what is background and
 what is not. You don't even know if there is an object or not. You don't
 know if anything moved or not. You can make some observations using
 predefined methods and then see if you find matches... then hypothesize
 about the matches...

  It all has to be learned and figured out through reasoning.

 That's why I asked what you meant by definitive events. Nothing is really
 definitive. It is all hypothesized in a non-monotonic manner.

 Dave

 On Thu, Jul 15, 2010 at 12:01 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.comwrote:

 What do you mean by definitive events?



 I was just trying to find a way to designate obsverations that would be
 reliably obvious to a computer program.  This has something to do with the
 assumptions that you are using.  For example if some object appeared against
 a stable background and it was a different color than the background, it
 would be a definitive observation event because your algorithm could detect
 it with some certainty and use it in the definition of other more
 complicated events (like occlusion.)  Notice that this example would not
 necessarily be so obvious (a definitive event) using a camera, because there
 are a number of ways that an illusion (of some kind) could end up as a data
 event.

 I will try to reply to the rest of your message sometime later.
 Jim Bromer






Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread David Jones
What do you mean by definitive events?

I guess the first problem I see with my approach is that the movement of the
window is also a hypothesis. I need to analyze it in more detail and see how
the tree of hypotheses affects the hypotheses regarding the 'e's on the
windows.

What I believe is that these problems can be broken down into types of
hypotheses, types of events, and types of relationships. Then those types
can be reasoned about in a general way. If possible, then you have a method
for reasoning about any object that is covered by the types of hypotheses,
events and relationships that you have defined.

How to reason about specific objects should not be preprogrammed. But, I
think the solution to this part of AGI is to find general ways to reason
about a small set of concepts that can be combined to describe specific
objects and situations.

There are other parts to AGI that I am not considering yet. I believe the
problem has to be broken down into separate pieces and understood before
putting it back together into a complete system. I have not covered
inductive learning for example, which would be an important part of AGI. I
have also not yet incorporated learned experience into the algorithm, which
is also important.

The general AI problem is way too complicated to consider all at once. I
simply can't solve hypothesis generation, comparison and disambiguation
while at the same time solving induction and experience-based reasoning. It
becomes unwieldy. So, I'm starting where I can and I'll work my way up to
the full complexity of the problem.

I don't really understand what you mean here: The central unsolved problem,
in my view, is: How can hypotheses be conceptually integrated along with the
observable definitive events of the problem to form good explanatory
connections that can mesh well with other knowledge about the problem that
is considered to be reliable.  The second problem is finding efficient ways
to represent this complexity of knowledge so that the program can utilize it
efficiently.

You also might want to include concrete problems to analyze for your central
problem suggestions. That would help define the problem a bit better for
analysis.

Dave

On Wed, Jul 14, 2010 at 8:30 AM, Jim Bromer jimbro...@gmail.com wrote:



 On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:
 Even if you refined your model until it was just right, you would have only
 caught up to everyone else with a solution to a narrow AI problem.


 I did not mean that you would just have a solution to a narrow AI problem,
 but that your solution, if put in the form of scoring of points on the basis
 of the observation *of definitive* events, would constitute a narrow AI
 method.  The central unsolved problem, in my view, is: How can hypotheses be
 conceptually integrated along with the observable definitive events of the
 problem to form good explanatory connections that can mesh well with other
 knowledge about the problem that is considered to be reliable.  The second
 problem is finding efficient ways to represent this complexity of knowledge
 so that the program can utilize it efficiently.







Re: [agi] How do we Score Hypotheses?

2010-07-14 Thread David Jones
Actually, I just realized that there is a way to include inductive
knowledge and experience in this algorithm. Inductive knowledge and
experience about a specific object or object type can be exploited to know
which hypotheses in the past were successful, and therefore which hypothesis
is most likely. By choosing the most likely hypothesis first, we skip a lot
of messy hypothesis comparison processing and analysis. If we choose the
right hypothesis first, all we really have to do is verify that this
hypothesis reveals in the data what we expect to be there. If we confirm
what we expect, that is reason enough not to look for other hypotheses
because the data is explained by what we originally believed to be likely.
We only look for additional hypotheses when we find something unexplained.
And even then, we don't look at the whole problem. We only look at what we
have to to explain the unexplained data. In fact, we could even ignore the
unexplained data if we believe, from experience, that it isn't pertinent.

I discovered this because I'm analyzing how a series of hypotheses is
navigated when analyzing images. It seems to me that it is done very
similarly to the way we do it. We sort of confirm what we expect and try to
explain what we don't expect. We try out hypotheses in a sort of trial and
error manner and see how each hypothesis affects what we find in the image.
If we confirm things because of the hypothesis, we are likely to keep it. We
keep going, navigating the tree of hypotheses, conflicts and unexpected
observations until we find a good hypothesis. Something like that. I'm
attempting to construct an algorithm for doing this as I analyze specific
problems.
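
A rough sketch of that expect-then-verify flow (Python; the remembered
hypotheses, verify() and search_new() stand in for whatever memory and
reasoning the system really has):

    def interpret(data, remembered, verify, search_new):
        # Try remembered hypotheses first, most previously successful first.
        for h in sorted(remembered, key=lambda h: h.past_successes, reverse=True):
            confirmed, unexplained = verify(h, data)
            if confirmed:
                # Expectation confirmed: only keep working on the leftover,
                # unexplained data, not the whole problem (and experience may
                # tell us to ignore that leftover entirely).
                extra = search_new(unexplained) if unexplained else []
                return [h] + extra
        # Nothing remembered fits: fall back to searching from scratch.
        return search_new(data)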

Dave

On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:

 What do you mean by definitive events?

 I guess the first problem I see with my approach is that the movement of
 the window is also a hypothesis. I need to analyze it in more detail and see
 how the tree of hypotheses affects the hypotheses regarding the es on the
 windows.

 What I believe is that these problems can be broken down into types of
 hypotheses,  types of events and types of relationships. then those types
 can be reasoned about in a general way. If possible, then you have a method
 for reasoning about any object that is covered by the types of hypotheses,
 events and relationships that you have defined.

 How to reason about specific objects should not be preprogrammed. But, I
 think the solution to this part of AGI is to find general ways to reason
 about a small set of concepts that can be combined to describe specific
 objects and situations.

 There are other parts to AGI that I am not considering yet. I believe the
 problem has to be broken down into separate pieces and understood before
 putting it back together into a complete system. I have not covered
 inductive learning for example, which would be an important part of AGI. I
 have also not yet incorporated learned experience into the algorithm, which
 is also important.

 The general AI problem is way too complicated to consider all at once. I
 simply can't solve hypothesis generation, comparison and disambiguation
 while at the same time solving induction and experience-based reasoning. It
 becomes unwieldly. So, I'm starting where I can and I'll work my way up to
 the full complexity of the problem.

 I don't really understand what you mean here: The central unsolved
 problem, in my view, is: How can hypotheses be conceptually integrated along
 with the observable definitive events of the problem to form good
 explanatory connections that can mesh well with other knowledge about the
 problem that is considered to be reliable.  The second problem is finding
 efficient ways to represent this complexity of knowledge so that the program
 can utilize it efficiently.

 You also might want to include concrete problems to analyze for your
 central problem suggestions. That would help define the problem a bit better
 for analysis.

 Dave


 On Wed, Jul 14, 2010 at 8:30 AM, Jim Bromer jimbro...@gmail.com wrote:



 On Tue, Jul 13, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:
 Even if you refined your model until it was just right, you would have
 only caught up to everyone else with a solution to a narrow AI problem.


 I did not mean that you would just have a solution to a narrow AI problem,
 but that your solution, if put in the form of scoring of points on the basis
 of the observation *of definitive* events, would constitute a narrow AI
 method.  The central unsolved problem, in my view, is: How can hypotheses be
 conceptually integrated along with the observable definitive events of the
 problem to form good explanatory connections that can mesh well with other
 knowledge about the problem that is considered to be reliable.  The second
 problem is finding efficient ways to represent this complexity of knowledge
 so that the program can utilize it efficiently.


Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread David Jones
Abram,

Thanks for the clarification, Abram. I don't have a single way to deal with
uncertainty. I try not to decide on a method ahead of time because what I
really want to do is analyze the problems and find a solution. But, at the
same time, I have looked at the probabilistic approaches and they don't seem
to be sufficient to solve the problem as they are now. So, my inclination is
to use methods that don't gloss over important details. For me, the most
important way of dealing with uncertainty is through explanatory-type
reasoning. But, explanatory reasoning has not been well defined yet. So, the
implementation is not yet clear. That's where I am now.

I've begun to approach problems as follows. I try to break the problem down
and answer the following questions:
1) How do we come up with or construct possible hypotheses.
2) How do we compare hypotheses to determine which is better.
3) How do we lower the uncertainty of hypotheses.
4) How do we determine the likelihood or strength of a single hypothesis all
by itself. Is it sufficient on its own?

With those questions in mind, the solution seems to be to break possible
hypotheses down into pieces that are generally applicable. For example, in
image analysis, a particular type of hypothesis might be related to 1)
motion or 2) attachment relationships or 3) change or movement behavior of
an object, etc.

By breaking the possible hypotheses into very general pieces, you can apply
them to just about any problem. With that as a tool, you can then develop
general methods for resolving uncertainty of such hypotheses using
explanatory scoring, consistency, and even statistical analysis.
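
A very small sketch of what those general pieces could look like in code
(Python; the piece types and the evidence hook are assumptions of the sketch,
not a worked-out design):

    from dataclasses import dataclass

    @dataclass
    class Motion:          # object obj moved by (dx, dy)
        obj: str
        dx: int
        dy: int

    @dataclass
    class Attachment:      # object a moves with object b
        a: str
        b: str

    # A full hypothesis about a scene is a combination of general pieces, and
    # each piece type gets its own general scoring / consistency rule.
    hypothesis = [Motion("window1", 5, 0), Attachment("letter_e", "window1")]

    def score(pieces, evidence_for):
        # evidence_for(piece): how well one piece matches the observed data
        return sum(evidence_for(p) for p in pieces)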

Does that make sense to you?

Dave

On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:

 PS-- I am not denying that statistics is applied probability theory. :)
 When I say they are different, what I mean is that saying I'm going to use
 probability theory and saying I'm going to use statistics tends to indicate
 very different approaches. Probability is a set of axioms, whereas statistics is
 a set of methods. The probability theory camp tends to be bayesian, whereas
 the stats camp tends to be frequentist.

 Your complaint that probability theory doesn't try to figure out why it was
 wrong in the 30% (or whatever) it misses is a common objection. Probability
 theory glosses over important detail, it encourages lazy thinking, etc.
 However, this all depends on the space of hypotheses being examined.
 Statistical methods will be prone to this objection because they are
 essentially narrow-AI methods: they don't *try* to search in the space of
 all hypotheses a human might consider. An AGI setup can and should have such
 a large hypothesis space. Note that AIXI is typically formulated as using a
 space of crisp (non-probabilistic) hypotheses, though probability theory is
 used to reason about them. This means no theory it considers will gloss over
 detail in this way: every theory completely explains the data. (I use AIXI
 as a convenient example, not because I agree with it.)

 --Abram






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread David Jones
Mike, you are so full of it. There is a big difference between *can* and
*don't*. You have no proof that programs can't handle anything you say they
can't.

On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  The first thing is to acknowledge that programs *don't* handle concepts -
 if you think they do, you must give examples.

 The reasons they can't, as presently conceived, is

 a) concepts encase a more or less *infinite diversity of forms* (even
 if only applying at first to a species of object)  -  *chair* for example
 as I've demonstrated embraces a vast open-ended diversity of radically
 different chair forms; higher order concepts like  furniture embrace ...
 well, it's hard to think even of the parameters, let alone the diversity of
 forms, here.

 b) concepts are *polydomain*- not just multi- but open-endedly extensible
 in their domains; chair for example, can also refer to a person, skin in
 French, two humans forming a chair to carry s.o., a prize, etc.

 Basically concepts have a freeform realm or sphere of reference, and you
 can't have a setform, preprogrammed approach to defining that realm.

 There's no reason however why you can't mechanically and computationally
 begin to instantiate the kind of freeform approach I'm proposing. The most
 important obstacle is the setform mindset of AGI-ers - epitomised by Dave
 looking at squares, moving along set lines - setform objects in setform
 motion -  when it would be more appropriate to look at something like
 snakes - freeform objects in freeform motion.

 Concepts also - altho this is a huge subject - are *the* language of the
 general programs (as distinct from specialist programs, wh. is all we
 have right now)  that must inform an AGI. Anyone proposing a grandscale AGI
 project like Ben's (wh. I def. wouldn't recommend) must crack the problem of
 conceptualisation more or less from the beginning. I'm not aware of anyone
 who has any remotely viable proposals here, are you?

  *From:* Jim Bromer jimbro...@gmail.com
 *Sent:* Tuesday, July 13, 2010 5:46 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:


 And programs as we know them, don't and can't handle *concepts* -  despite
 the misnomers of conceptual graphs/spaces etc wh are not concepts at all.
 They can't for example handle writing or shopping because these can only
 be expressed as flexible outlines/schemas as per ideograms.


 I disagree with this, and so this is proper focus for our disagreement.
 Although there are other aspects of the problem that we probably disagree
 on, this is such a fundamental issue, that nothing can get past it.  Either
 programs can deal with flexible outlines/schema or they can't.  If they
 can't then AGI is probably impossible.  If they can, then AGI is probably
 possible.

 I think that we both agree that creativity and imagination are absolutely
 necessary aspects of intelligence.

 Jim Bromer










Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread David Jones
Mike,

see below.

On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  The first thing is to acknowledge that programs *don't* handle concepts -
 if you think they do, you must give examples.

 The reasons they can't, as presently conceived, are

 a) concepts encase a more or less *infinite diversity of forms* (even
 if only applying at first to a species of object)  -  *chair* for example
 as I've demonstrated embraces a vast open-ended diversity of radically
 different chair forms; higher order concepts like  furniture embrace ...
 well, it's hard to think even of the parameters, let alone the diversity of
 forms, here.


Invoking infinity is an insufficient argument for saying that a program can't
recognize an infinite number of forms.

In fact, I can prove it. Let's say that all numbers are made of the digits
0,1,2,3...9. If you can recognize just those ten digits, you can read
arbitrarily large numbers.

Another example: you can create an infinite number of very diverse shapes
and forms out of clay. But, I can represent every last one of them using
simple mesh models. The mesh models are made of a very small number of
concepts: lines, points, distance constraints, etc. So, an infinite number
of diverse concepts or forms can be modeled using a very small number of
concepts.
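
To make the digit point concrete, here is a rough Python sketch (illustrative
only; recognize_digit is a stand-in for whatever learned classifier actually
recognizes the ten digit shapes):

def recognize_digit(glyph):
    """Stand-in for a learned classifier that recognizes the ten digit shapes."""
    digits = "0123456789"
    if glyph not in digits:
        raise ValueError("unrecognized glyph: %r" % glyph)
    return digits.index(glyph)

def read_number(glyphs):
    """Compose the ten known digit concepts to read an arbitrarily large number."""
    value = 0
    for glyph in glyphs:
        value = value * 10 + recognize_digit(glyph)
    return value

print(read_number("90210"))           # 90210
print(read_number("123456789" * 3))   # a 27-digit number, read with the same ten concepts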

In conclusion, you have no proof at all that programs can't handle these
things. You just THINK they can't. But, I for one, know you're dead wrong.



 b) concepts are *polydomain*- not just multi- but open-endedly extensible
 in their domains; chair for example, can also refer to a person, skin in
 French, two humans forming a chair to carry s.o., a prize, etc.


A chair is defined as anything you can sit on. Anything you can sit on is
defined by a certain type of form that you can actually learn inductively.
It is not impossible to teach a computer to recognize things that could be
sat on, or even things that seem to have the form of something that might be
sat on. You cannot justify the claim that a computer can never learn this.
You see, very diverse concepts can be represented by a small number of other
concepts such as time, space, 3D form, etc. Your claim is completely
baseless.



 Basically concepts have a freeform realm or sphere of reference, and you
 can't have a setform, preprogrammed approach to defining that realm.


You can, if it covers base concepts which can represent larger concepts.


 There's no reason however why you can't mechanically and computationally
 begin to instantiate the kind of freeform approach I'm proposing. The most
 important obstacle is the setform mindset of AGI-ers - epitomised by Dave
 looking at squares, moving along set lines - setform objects in setform
 motion -  when it would be more appropriate to look at something like
 snakes.- freeform objects in freeform motion.


Squares can move in an infinite number of ways. It is an experiment, an
exercise, to learn how an AGI deals with uncertainty, even if the uncertainty
is very limited.

Clearly you lack the imagination to understand why doing such experiments
might be useful. You think moving squares is simple just because they are
squares. But you fail to realize that uncertainty can be generated even by
very simple systems. And so far you have never stated how you would deal with
such uncertainty.





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread David Jones
Thanks Abram, I'll read up on it when I get a chance.


On Tue, Jul 13, 2010 at 12:03 PM, Abram Demski abramdem...@gmail.com wrote:

 David,

 Yes, this makes sense to me.

 To go back to your original query, I still think you will find a rich
 community relevant to your work if you look into the MDL literature (which
 additionally does not rely on probability theory, though as I said I'd say
 it's equivalent).

 Perhaps this book might be helpful:

 http://www.amazon.com/Description-Principle-Adaptive-Computation-Learning/dp/0262072815/ref=sr_1_1?ie=UTF8s=booksqid=1279036776sr=8-1

 It includes a (short-ish?) section comparing the pros/cons of MDL and
 Bayesianism, and examining some of the mathematical linkings between them--
 with the aim of showing that MDL is a broader principle. I disagree there,
 of course. :)

 --Abram

 On Tue, Jul 13, 2010 at 10:01 AM, David Jones davidher...@gmail.com wrote:

 Abram,

 Thanks for the clarification, Abram. I don't have a single way to deal with
 uncertainty. I try not to decide on a method ahead of time, because what I
 really want to do is analyze the problems and find a solution. But, at the
 same time, I have looked at the probabilistic approaches, and they don't seem
 sufficient to solve the problem as they are now. So, my inclination is
 to use methods that don't gloss over important details. For me, the most
 important way of dealing with uncertainty is through explanatory-type
 reasoning. But, explanatory reasoning has not been well defined yet. So, the
 implementation is not yet clear. That's where I am now.

 I've begun to approach problems as follows. I try to break the problem
 down and answer the following questions:
 1) How do we come up with or construct possible hypotheses?
 2) How do we compare hypotheses to determine which is better?
 3) How do we lower the uncertainty of hypotheses?
 4) How do we determine the likelihood or strength of a single hypothesis
 all by itself? Is it sufficient on its own?

 With those questions in mind, the solution seems to be to break possible
 hypotheses down into pieces that are generally applicable. For example, in
 image analysis, a particular type of hypothesis might be related to 1)
 motion or 2) attachment relationships or 3) change or movement behavior of
 an object, etc.

 By breaking the possible hypotheses into very general pieces, you can
 apply them to just about any problem. With that as a tool, you can then
 develop general methods for resolving uncertainty of such hypotheses using
 explanatory scoring, consistency, and even statistical analysis.
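
 A minimal sketch of what I mean by general pieces, in Python (the class and
 field names are placeholder assumptions of mine, not a fixed design):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Piece:
    kind: str    # e.g. "motion", "attachment", "behavior"
    claim: str   # what this piece asserts about the scene

@dataclass
class Hypothesis:
    name: str
    pieces: List[Piece] = field(default_factory=list)
    score: int = 0

h1 = Hypothesis("object A moved", [
    Piece("motion", "object A translated a few pixels to the right"),
    Piece("attachment", "the small mark is attached to object A and moved with it"),
])
h2 = Hypothesis("object A is unchanged", [
    Piece("behavior", "object A stays still; the small mark moved independently"),
])

# Each general piece can now be checked against observations by a method that
# only knows about that kind of piece (a motion checker, an attachment checker, ...).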

 Does that make sense to you?

 Dave


 On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:

 PS-- I am not denying that statistics is applied probability theory. :)
 When I say they are different, what I mean is that saying I'm going to use
 probability theory and I'm going to use statistics tend to indicate very
 different approaches. Probability is a set of axioms, whereas statistics is
 a set of methods. The probability theory camp tends to be bayesian, whereas
 the stats camp tends to be frequentist.

 Your complaint that probability theory doesn't try to figure out why it
 was wrong in the 30% (or whatever) it misses is a common objection.
 Probability theory glosses over important detail, it encourages lazy
 thinking, etc. However, this all depends on the space of hypotheses being
 examined. Statistical methods will be prone to this objection because they
 are essentially narrow-AI methods: they don't *try* to search in the space
 of all hypotheses a human might consider. An AGI setup can and should have
 such a large hypothesis space. Note that AIXI is typically formulated as
 using a space of crisp (non-probabilistic) hypotheses, though probability
 theory is used to reason about them. This means no theory it considers will
 gloss over detail in this way: every theory completely explains the data. (I
 use AIXI as a convenient example, not because I agree with it.)

 --Abram






 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






[agi] How do we Score Hypotheses?

2010-07-13 Thread David Jones
I've been trying to figure out how to score hypotheses. Do you guys have any
constructive ideas about how to define the way you score hypotheses like
these a little better? I'll define the problem below in detail. I know Abram
mentioned MDL, which I'm about to look into. Does that even apply to this
sort of thing?

I came up with a hypothesis scoring idea. It goes as follows:

*Rule 1:* Hypotheses are compared only 1 at a time.
*Rule 2:* If hypothesis 1 predicts/expects/anticipates something, then you
add (+1) to its score and subtract (-1) from hypothesis 2 if it doesn't also
anticipate the observation. (Note: When comparing only 2 hypotheses, it may
actually not be necessary to subtract from the competing hypothesis, I
guess.)

*Here is the specific problem I'm analyzing: *Let's say that you have two
window objects that contain the same letter, such as the letter e. In
frame 0, the first window object is visible. In frame 1, window 1 moves a
bit. In frame 2 though, the second window object appears and completely
occludes the first window object. So, if you only look at the letter e
from frame 0 to frame 2, it looks like it never disappears and it just
moves. But that's not what happens. There are two independent instances of
the letter e. But, how do we get the algorithm to figure this out in a
general way? How do we get it to compare the two possible hypotheses (1
object or two objects) and decide that one is better than the other? That is
what the hypothesis scoring method is for.

*Algorithm Description and Details*
*Hypothesis 1:* there are two separate objects... there are two separate
instances of the letter e
*Hypothesis 2:* there is only one letter object... only one letter e that
occurs in all the frames of the video.

*Time 0: object 1*

*Time 1: e moves rigidly with object 1*
H1: +1 compared to h2 because we expect the e to move rigidly with the
first object, rather than independently from the first object.
H2: -1 compared to h1 because we don't expect the first object to move
rigidly with e but h1 does.

*Time 2: object 2 appears and completely occludes object 1.  Object 1 and 2
both have the letter e on them. So, to a dumb algorithm, it looks as if
the e moved between the two frames of the video.*
H1: -1 compared to h2 because we don't expect what h2 expects.
H2: +1 compared to h1 because the e appears to move independently of the
first window.

*Time 3: e moves rigidly with object 2*
H1: +1 compared to h2 because the e moves with the second object.
H2: -1 compared to h1
*Time 4: e moves rigidly with object 2*
H1: +1 compared to h2 because the e moves with the second object.
H2: -1 compared to h1
*Time 5: e moves rigidly with object 2*
H1: +1 compared to h2 because the e moves with the second object.
H2: -1 compared to h1

*After 5 video frames the score is: *
H1: +3
H2: -3
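
To make the bookkeeping concrete, here is a minimal Python sketch of Rules 1
and 2 applied to this example (the per-frame expectations are hard-coded just
to mirror the walkthrough above; a real system would derive them from the
hypotheses themselves):

# True means the hypothesis anticipated what was observed at that time step.
anticipated = {
    "H1 (two e instances)": [True,  False, True,  True,  True ],   # times 1-5
    "H2 (one e instance)":  [False, True,  False, False, False],
}

scores = {name: 0 for name in anticipated}
pairs = [("H1 (two e instances)", "H2 (one e instance)"),
         ("H2 (one e instance)", "H1 (two e instances)")]
for t in range(5):
    for name, rival in pairs:
        if anticipated[name][t] and not anticipated[rival][t]:
            scores[name] += 1   # Rule 2: +1 for anticipating the observation
            scores[rival] -= 1  # ... and -1 for the competitor that did not

print(scores)   # {'H1 (two e instances)': 3, 'H2 (one e instance)': -3}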

Dave





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-13 Thread David Jones
, find one of
 them that works with *unspecified kinds of actions and objects.*  (Or you
 can always try and explain how  formulae that are clearly designed to be
 setform can somehow simultaneously be freeform and embrace et cetera ).

 There are by the same token no branches of logic and maths that work with
 *unspecified kinds of actions and objects.*   (Mathematicians who invent new
 formulae have to work with and develop new kinds of objects - but normal
 maths can't help them do this).

 The whole of rationality - incl. all rational technology - only works with
 specified kinds of actions and objects.

 **One of the most basic rationales  of rationality is let's stop everyone
 farting around making their own versions of products with their own
 differently specified actions and objects; let's  specify/standardise  the
 actions and objects that everyone must use. Let's start making standard
 specification cherry cakes with standard ingredients, and standard
 mathematical sums with standard numbers and operations, and standard logical
 variables with standard meanings [and cut out any kind of et cetera]  **

 (And for much the same reason programs can't - aren't meant to - handle
 concepts. Every concept , like chair has to refer to a general class of
 objects embracing et ceteras - new, unspecified, yet-to-be-invented kinds of
 objects  and ones that you simply haven't heard of  yet, as well as
 specified, known kinds  of object . Concepts are wonderful cognitive tools
 for embracing unspecified objects. Concepts, for example,  like things,
 objects, actions, do something -  anything all sorts of things -
 everything you can possibly think of  even  write totally new kinds of
 programs - anti-programs - et cetera -  such concepts endow humans with
 massive creative freedom and scope of reference.

 You along with the whole of AI/AGI are effectively claiming that there is
 or can be a formula/program for dealing with the unknown - i.e. unknown
 kinds of objects. It's patent absurdity - and counter to the whole spirit
 of logic and rationality -  in fact lunacy. You'll wonder in years to come
 how so smart people could be so dumb.   Could think they're producing
 programs that can make anything - can make cars or cakes - any car or
 cake  - when the rest of the world and his uncle can see that they're only
 producing very specific brands of car and cake (with very specific
 objects/parts).  VW Beetles not cars let alone vehicles let alone forms
 of transportation let alone means of travel let alone universal
 programs. .

 I'm full of it? AI/AGI is full of the most amazing hype about its
 generality and creativity wh. you have clearly swallowed whole .
 Programs are simply specialist procedures for producing specialist products
 and procedures with specified kinds of actions and objects - they cannot
 deal with unspecified kinds of actions and objects, period. You won't
 produce any actual examples to the contrary.



  *From:* David Jones davidher...@gmail.com
 *Sent:* Tuesday, July 13, 2010 8:00 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Correction:

 Mike, you are so full of it. There is a big difference between *can* and
 *don't*. You have no proof that programs can't handle anything you say
 [they] can't.

 On Tue, Jul 13, 2010 at 2:59 PM, David Jones davidher...@gmail.com wrote:

 Mike, you are so full of it. There is a big difference between *can* and
 *don't*. You have no proof that programs can't handle anything you say that
 can't.


 On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  The first thing is to acknowledge that programs *don't* handle concepts
 - if you think they do, you must give examples.

 The reasons they can't, as presently conceived, are

 a) concepts encase a more or less *infinite diversity of forms* (even
 if only applying at first to a species of object)  -  *chair* for example
 as I've demonstrated embraces a vast open-ended diversity of radically
 different chair forms; higher order concepts like  furniture embrace ...
 well, it's hard to think even of the parameters, let alone the diversity of
 forms, here.

 b) concepts are *polydomain*- not just multi- but open-endedly extensible
 in their domains; chair for example, can also refer to a person, skin in
 French, two humans forming a chair to carry s.o., a prize, etc.

 Basically concepts have a freeform realm or sphere of reference, and you
 can't have a setform, preprogrammed approach to defining that realm.

 There's no reason however why you can't mechanically and computationally
 begin to instantiate the kind of freeform approach I'm proposing. The most
 important obstacle is the setform mindset of AGI-ers - epitomised by Dave
 looking at squares, moving along set lines - setform objects in setform
 motion -  when it would be more appropriate to look at something like
 snakes.- freeform objects in freeform motion.

 Concepts also

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-11 Thread David Jones
Thanks Abram,

I know that probability is one approach. But there are many problems with
using it in actual implementations. I know a lot of people will be angered
by that statement and retort with all the successes that they have had using
probability. But the truth is that you can solve the problems in many ways, and
every way has its pros and cons. I personally believe that probability has
unacceptable cons if used all by itself. It must only be used when it is the
best tool for the task.

I do plan to use some probability within my approach. But only when it makes
sense to do so. I do not believe in completely statistical solutions or
completely Bayesian machine learning alone.

A good example of when I might use it: if a particular hypothesis predicts
something with 70% accuracy, it may still be better than any other hypothesis
we have come up with so far. So, we may use that hypothesis. But the 30% of
cases it gets wrong should be explained, if at all possible, with the
resources and algorithms available. This is where my method differs from
statistical methods. I want to build algorithms that resolve the 30% and
explain it. For many problems, there are rules and knowledge that will solve
them effectively. Probability should only be used when you cannot find a more
accurate solution.
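
A rough sketch of that preference, in Python (the rule and the names here are
made-up placeholders; the point is only the ordering: try to explain first,
fall back to the statistical guess last):

def choose(observation, rules, fallback_hypothesis, fallback_accuracy=0.70):
    """Prefer an explanation from known rules; fall back to the statistical
    hypothesis only when no rule accounts for the observation."""
    for rule in rules:
        explanation = rule(observation)
        if explanation is not None:
            return ("explained", explanation)
    return ("guessed", fallback_hypothesis, fallback_accuracy)

def window_occlusion_rule(obs):
    # Toy rule meant to explain part of the residual 30% of errors.
    if obs.get("occluded_by_window"):
        return "the letter did not move; a second window occluded the first"
    return None

print(choose({"occluded_by_window": True},  [window_occlusion_rule], "letter moved"))
print(choose({"occluded_by_window": False}, [window_occlusion_rule], "letter moved"))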

Basically we should use probability when we don't know the factors involved,
can't find any rules to explain the phenomena or we don't have the time and
resources to figure it out. So you must simply guess at the most probable
event without any rules for figuring out which event is more applicable
under the current circumstances.

So, in summary, probability definitely has its place. I just think that
explanatory reasoning and other more accurate methods should be preferred
whenever possible.

Regarding learning the knowledge being the bigger problem, I completely
agree. That is why I think it is so important to develop machine learning
that can learn by direct observation of the environment. Without that, it is
practically impossible to gather the knowledge required for AGI-type
applications. We can learn this knowledge by analyzing the world
automatically and generally through video.

My step by step approach for learning and then applying the knowledge for
AGI is as follows:
1) Understand and learn about the environment (through computer vision for
now and other sensory perceptions in the future)
2) learn about your own actions and how they affect the environment
3) learn about language and how it is associated with or related to the
environment.
4) learn goals from language(such as through dedicated inputs).
5) Goal pursuit
6) Other Miscellaneous capabilities as needed

Dave

On Sat, Jul 10, 2010 at 8:40 PM, Abram Demski abramdem...@gmail.com wrote:

 David,

 Sorry for the slow response.

 I agree completely about expectations vs predictions, though I wouldn't use
 that terminology to make the distinction (since the two terms are
 near-synonyms in English, and I'm not aware of any technical definitions
 that are common in the literature). This is why I think probability theory
 is necessary: to formalize this idea of expectations.

 I also agree that it's good to utilize previous knowledge. However, I think
 existing AI research has tackled this over and over; learning that knowledge
 is the bigger problem.

 --Abram






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread David Jones
Mike,

Using the image itself as a template to match is possible. In fact it has
been done before. But it suffers from several problems that would also need
solving.

1) Images are 2D. I assume you are also referring to 2D outlines. Real
objects are 3D. So, you're going to have to infer the shape of the object...
which means you are no longer actually transforming the image itself. You
are transforming a model of the image, which would have points, curves,
dimensions, etc. Basically, a mathematical shape :) No matter how much you
disapprove of encoding info, sometimes it makes sense to do it.
2) Creating the first outline and figuring out what to outline is not
trivial at all. So, this method can only be used after you can do that.
There is a lot more uncertainty involved here than you seem to realize.
First, how do you even determine the outline? That is an unsolved problem.
So, not only will you have to try many transformations with the right
outline, you also have to try many with wrong outlines, increasing the
possibilities (exponentially?). It looks like you need a way to score
possibilities and decide which ones to try.
3) rock is a word and words are always learned by induction along with
other types of reasoning before we can even consider it a type of object.
So, you are starting with a somewhat unrepresentative or artificial problem.

4) Even the same rock can look very different from different perspectives.
In fact, how do you even match the same rock? Please describe how your
system would do this. It is not trivial at all. And you will soon see that
there is an extremely large amount of uncertainty. Dealing with this type of
uncertainty is the central problem of AGI. The central problem is not fluid
schemas. Even if I used this method, I would be stuck with the same exact
uncertainty problems. So, you're not going to get past them like this. The
same research on explanatory and non-monotonic type reasoning must still be
done.
5) What is a fluid transform? You can't just throw out words. Please define
it. You are going to realize that your representation is pretty much
geometric anyway. Regardless, it will likely be equivalent. Are you going to
try every possible transformation? Nope. That would be impossible. So, how
do you decide what transformations to try? When is a transformation too
large of a change to consider it the same rock? When is it too large to
consider it a different rock?
6) Are you seriously going to transform every object you've ever tried to
outline? This is going to be prohibitively costly in terms of processing.
Especially because you haven't defined how you're going to decide what to
transform and what not to. So, before you can even use this algorithm,
you're going to have to use something else to decide what is a possible
candidate and what is not.


On Fri, Jul 9, 2010 at 6:42 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

 Now let's see **you** answer a question. Tell me how any
 algorithmic/mathematical approach of any kind actual or in pure principle
 can be applied to recognize raindrops falling down a pane - and to
 predict their movement?


Like I've said many times before, we can't predict everything, and we
certainly shouldn't try. But


 http://www.pond5.com/stock-footage/263778/beautiful-rain-drops.html

 or to recognize a rock?

 http://www.handprint.com/HP/WCL/IMG/LPR/adams.jpg

 or a [filled] shopping bag?

 http://www.abc.net.au/reslib/200801/r215609_837743.jpg

 http://www.sustainableisgood.com/photos/uncategorized/2007/03/29/shoppingbags.jpg

 http://thegogreenblog.com/wp-content/uploads/2007/12/plastic_shopping_bag.jpg

 or if you want a real killer, google some vid clips of amoebas in oozing
 motion?

 PS In every case, I suggest, the brain observes different principles of
 transformation - for the most part unconsciously. And they can be of various
 kinds not just direct natural transformations, of course. It's possible, it
 occurs to me, that the principle that applies to rocks might just be
 something like whatever can be carved out of stone







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread David Jones
I accidentally pressed something and it sent it early... this is a finished
version:


Mike,

Using the image itself as a template to match is possible. In fact it has
been done before. But it suffers from several problems that would also need
solving.

1) Images are 2D. I assume you are also referring to 2D outlines. Real
objects are 3D. So, you're going to have to infer the shape of the object...
which means you are no longer actually transforming the image itself. You
are transforming a model of the image, which would have points, curves,
dimensions, etc. Basically, a mathematical shape :) No matter how much you
disapprove of encoding info, sometimes it makes sense to do it.
2) Creating the first outline and figuring out what to outline is not
trivial at all. So, this method can only be used after you can do that.
There is a lot more uncertainty involved here than you seem to realize.
First, how do you even determine the outline? That is an unsolved problem.
So, not only will you have to try many transformations with the right
outline, you also have to try many with wrong outlines, increasing the
possibilities (exponentially?). It looks like you need a way to score
possibilities and decide which ones to try.
3) rock is a word and words are always learned by induction along with
other types of reasoning before we can even consider it a type of object.
So, you are starting with a somewhat unrepresentative or artificial problem.

4) Even the same rock can look very different from different perspectives.
In fact, how do you even match the same rock? Please describe how your
system would do this. It is not trivial at all. And you will soon see that
there is an extremely large amount of uncertainty. Dealing with this type of
uncertainty is the central problem of AGI. The central problem is not fluid
schemas. Even if I used this method, I would be stuck with the same exact
uncertainty problems. So, you're not going to get past them like this. The
same research on explanatory and non-monotonic type reasoning must still be
done.
5) What is a fluid transform? You can't just throw out words. Please define
it. You are going to realize that your representation is pretty much
geometric anyway. Regardless, it will likely be equivalent. Are you going to
try every possible transformation? Nope. That would be impossible. So, how
do you decide what transformations to try? When is a transformation too
large of a change to consider it the same rock? When is it too large to
consider it a different rock?
6) Are you seriously going to transform every object you've ever tried to
outline? This is going to be prohibitively costly in terms of processing.
Especially because you haven't defined how you're going to decide what to
transform and what not to. So, before you can even use this algorithm,
you're going to have to use something else to decide what is a possible
candidate and what is not.


On Fri, Jul 9, 2010 at 6:42 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Now let's see **you** answer a question. Tell me how any
 algorithmic/mathematical approach of any kind actual or in pure principle
 can be applied to recognize raindrops falling down a pane - and to
 predict their movement?


Like I've said many times before, we can't predict everything, and we
certainly shouldn't try. But we should expect what might happen.
Raindrops are probably recognized as an unexpected distortion when they occur
on a window. When they're not on a window, a raindrop is still a sort of
distortion of brightness and just a small object with different contrast.
You're right that geometric definitions are not the right way to recognize
that. It would have to use a different method to remember the
features/properties of raindrops and how they appeared, such as the contrast,
size, quantity, location, context, etc.
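
As a purely illustrative sketch in Python (the feature names and thresholds
are assumptions I'm inventing for the example, not measured values):

raindrop_memory = {
    "contrast": (0.05, 0.35),          # small local distortion of brightness
    "size_px": (2, 40),                # small relative to the frame
    "contexts": {"window", "outdoor scene"},
}

def looks_like_raindrop(region):
    """Check a candidate region against remembered raindrop feature ranges."""
    lo, hi = raindrop_memory["contrast"]
    smallest, largest = raindrop_memory["size_px"]
    return (lo <= region["contrast"] <= hi
            and smallest <= region["size_px"] <= largest
            and region["context"] in raindrop_memory["contexts"])

print(looks_like_raindrop({"contrast": 0.2, "size_px": 12, "context": "window"}))   # True
print(looks_like_raindrop({"contrast": 0.9, "size_px": 500, "context": "window"}))  # False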


 http://www.pond5.com/stock-footage/263778/beautiful-rain-drops.html

 or to recognize a rock?


A specific rock could be recognized with geometric definitions. Texture is
certainly important, as are size, context (very important), etc. If we are
talking about the category rock, that's different from an instance of a
rock. The category of rock probably needs a description of the types of
properties that rocks have, such as the types of curves, textures, sizes,
interactions, behavior, etc. Exactly how to do it, I haven't decided. I'm not
at that point yet.



 http://www.handprint.com/HP/WCL/IMG/LPR/adams.jpg

 or a [filled] shopping bag?


same as the rock.



 http://www.abc.net.au/reslib/200801/r215609_837743.jpg

 http://www.sustainableisgood.com/photos/uncategorized/2007/03/29/shoppingbags.jpg

 http://thegogreenblog.com/wp-content/uploads/2007/12/plastic_shopping_bag.jpg

 or if you want a real killer, google some vid clips of amoebas in oozing
 motion?


same.



 PS In every case, I suggest, the brain observes different principles of
 transformation - for the most part unconsciously. And they can be of various
 kinds not just direct natural 

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread David Jones
 or reason.

 Here is a graphic demonstration of what you're trying to claim - in effect,
 you're saying

 geometry can define 'a piece of plasticine'  [and by extension any
 standard transformation of a piece of plasticine as in a playroom]

 That's a nonsense. A piece of plasticine is a **freeform** object - it can
 be transformed into an unlimited diversity of shapes/forms (albeit with
 constraints).

 Formulae - the formulae of geometry - can only define **set form** objects,
 with a precise form and structure. There are no exceptions. Black is not
 white.  Homogeneous is not heterogeneous. Set form is not freeform.

 All the objects I list - all irregular objects - are freeform objects.

 You are ironically misunderstanding the very foundations and rationale of
 geometry. Geometry - with its set form forms - was invented precisely
 because mathematicians didn't like the freeform nature of the world - wanted
 to create set forms (in the footsteps of the rational technologists who
 preceded them) - that they could control and reduce to formulae and
 reproduce with ease. Freeform rocks are a lot more complex to draw and make
 and reproduce than  set form rectangular bricks.

 Set forms are not free forms. They are the opposite.

 (And while you and others will continue to *claim*  in theory
 absolute setform=freeform nonsense, you will in practice always, always
 stick to setform objects. Some part of you knows the v.obvious truth ).




  *From:* David Jones davidher...@gmail.com
 *Sent:* Saturday, July 10, 2010 3:51 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Mike,

 Using the image itself as a template to match is possible. In fact it has
 been done before. But it suffers from several problems that would also need
 solving.

 1) Images are 2D. I assume you are also referring to 2D outlines. Real
 objects are 3D. So, you're going to have to infer the shape of the object...
 which means you are no longer actually transforming the image itself. You
 are transforming a model of the image, which would have points, curves,
 dimensions, etc. Basically, a mathematical shape :) No matter how much you
 disapprove of encoding info, sometimes it makes sense to do it.
 2) Creating the first outline and figuring out what to outline is not
 trivial at all. So, this method can only be used after you can do that.
 There is a lot more uncertainty involved here than you seem to realize.
 First, how do you even determine the outline? That is an unsolved problem.
 So, not only will you have to try many transformations with the right
 outline, you have to try many with wrong outlines, increase the
 possibilities (exponentially?). It looks like you need a way to score
 possibilities and decide which ones to try.
 3) rock is a word and words are always learned by induction along with
 other types of reasoning before we can even consider it a type of object.
 So, you are starting with a somewhat unrepresentative or artificial problem.

 4) Even the same rock can look very different from different perspectives.
 In fact, how do you even match the same rock? Please describe how your
 system would do this. It is not trivial at all. And you will soon see that
 there is an extremely large amount of uncertainty. Dealing with this type of
 uncertainty is the central problem of AGI. The central problem is not fluid
 schemas.Even if I used this method, I would be stuck with the same exact
 uncertainty problems. So, you're not going to get passed them like this. The
 same research on explanatory and non-monotonic type reasoning must still be
 done.
 5) what is a fluid transform? You can't just throw out words. Please define
 it. You are going to realize that your representation is pretty much
 geometric anyway. Regardless, it will likely be equivalent. Are you going to
 try every possible transformation? Nope. That would be impossible. So, how
 do you decide what transformations to try? When is a transformation too
 large of a change to consider it the same rock? When is it too large to
 consider it a different rock?
 6) Are you seriously going to transform every object you've every tried
 to outline? This is going to be prohibitively costly in terms of processing.
 Especially because you haven't defined how you're going to decide what to
 transform and what not to. So, before you can even use this algorithm,
 you're going to have to use something else to decide what is a possible
 candidate and what is not.


 On Fri, Jul 9, 2010 at 6:42 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Now let's see **you** answer a question. Tell me how any
 algorithmic/mathematical approach of any kind actual or in pure principle
 can be applied to recognize raindrops falling down a pane - and to
 predict their movement?


 Like I've said many times before, we can't predict everything, and we
 certainly shouldn't try. But


 http://www.pond5.com/stock-footage/263778/beautiful-rain-drops.html

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-10 Thread David Jones
On Sat, Jul 10, 2010 at 5:02 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Dave:You can't solve the problems with your approach either

 This is based on knowledge of what examples? Zero?


It is based on the fact that you have refused to show how you deal with
uncertainty. You haven't even conceded that there is uncertainty. I know for
a fact that your method cannot solve the uncertainty, because it doesn't
even consider that there might be any uncertainty. It is not a solution to
anything. It is a mere suggestion of a way to compare objects. It isn't even
a way to match them! So, when you're done comparing, your method only says
it is different by this much. Well, what the hell does that do for you?
Nothing at all. So, clearly my statement that your approach doesn't solve
anything is well based. Yet, your claim that my approach is wrong is very
poorly based. Your main disagreement is with my simplification of the problem.
That doesn't mean anything. I can go back and forth between the simple
version and the more complex version whenever I want to after I've gained
understanding through experiments on the simpler version. There is nothing
wrong with the approach I am taking. It is completely necessary to study the
nature of the problems and the principles that can solve the problems.


 I have given you one instance of s.o. [a technologist not a philosopher
 like me] who is if only in broad principle, trying to proceed in
 a non-encoding, analog-comparison direction. There must be others who are
 however crudely trying and considering what can be broadly classified as
 analog approaches. How much do you know, or have you even thought about such
 approaches? [Of course, computing doesn't have to be either/or
 analog-digital but can be both]


The approaches are equivalent. I don't even say that my approach is digital.
If I find a reason to use an analog approach, I'll use it. But so far, I
can't find any reason to do so. BTW, you would be wiser to realize that
analog can likely be well represented by digital encoding for the problems
we are discussing. I see absolutely no reason to think analog is better than
digital for any of these problems. You simply have a bias against my
approach. And bias is not sufficient reason to disagree with me.


 My point 6) BTW is irrefutable, completely irrefutable, and puts a finger
 bang on why geometry  obviously cannot cope with real objects,  ( although I
 can and must, do a much more extensive job of exposition).


That is ridiculous. First of all, a plastic bag can easily be represented
geometrically as a mesh with length constraints and connectivity
constraints. Of course it doesn't represent every possible transformation of
the bag. It doesn't even make sense to store such a representation. In fact,
it's not possible. Your claim that geometry can't represent a plastic bag is
downright dumb and trivially refutable. You could easily use your own ideas
then to transform the mesh for matching, although I still claim this is
not the right way to always match objects. In fact, I would dare say it is
often the wrong way to match objects because of the processing and time
cost.
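
A minimal sketch of what I mean, in Python, assuming a toy 2D mesh (a real
model would be 3D and much denser, and the tolerance value is arbitrary):

import math

# Node id -> (x, y) position in some reference frame.
points = {
    "a": (0.0, 0.0), "b": (1.0, 0.0), "c": (1.0, 1.0), "d": (0.0, 1.0),
}
# Connectivity constraints plus the rest length of each edge.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
rest_lengths = {e: math.dist(points[e[0]], points[e[1]]) for e in edges}

def violates_constraints(deformed, tolerance=0.15):
    """A deformation is admissible if every edge keeps roughly its rest length,
    however much the overall silhouette changes."""
    for (i, j), rest in rest_lengths.items():
        if abs(math.dist(deformed[i], deformed[j]) - rest) > tolerance * rest:
            return True
    return False

# The same mesh, mildly crumpled: a different silhouette, same connectivity
# and roughly the same edge lengths, so the constraints still hold.
crumpled = {"a": (0.0, 0.0), "b": (0.95, 0.2), "c": (1.05, 1.1), "d": (-0.05, 0.95)}
print(violates_constraints(crumpled))   # False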





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread David Jones
Mike,

On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't the first problem simply to differentiate the objects in a scene?


Well, that is part of the movement problem. If you say something moved, you
are also saying that the objects in the two or more video frames are the
same instance.


 (Maybe the most important movement to begin with is not  the movement of
 the object, but of the viewer changing their POV if only slightly  - wh.
 won't be a factor if you're looking at a screen)


Maybe, but this problem becomes kind of trivial in a 2D environment,
assuming you don't allow rotation of the POV. Moving the POV would simply
translate all the objects linearly. If you make it a 3D environment, it
becomes significantly more complicated. I could work on 3D, which I will,
but I'm not sure I should start there. I probably should consider it though
and see what complications it adds to the problem and how they might be
solved.


 And that I presume comes down to being able to put a crude, highly
 tentative, and fluid outline round them (something that won't be neces. if
 you're dealing with squares?) . Without knowing v. little if anything about
 what kind of objects they are. As an infant most likely does. {See infants'
 drawings and how they evolve v. gradually from a v. crude outline blob that
 at first can represent anything - that I'm suggesting is a replay of how
 visual perception developed).


 The fluid outline or image schema is arguably the basis of all intelligence
 - just about everything AGI is based on it.  You need an outline for
 instance not just of objects, but of where you're going, and what you're
 going to try and do - if you want to survive in the real world.  Schemas
 connect everything AGI.

 And it's not a matter of choice - first you have to have an outline/sense
 of the whole - whatever it is -  before you can start filling in the parts.



Well, this is the question. The solution is underdetermined, which means
that a right solution is not possible to know with complete certainty. So,
you may take the approach of using contours to match objects, but that is
certainly not the only way to approach the problem. Yes, you have to use
local features in the image to group pixels together in some way. I agree
with you there.

Is using contours the right way? Maybe, but not by itself. You have to
define the problem a little better than just saying that we need to
construct an outline. The real problem/question is this: How do you
determine the uncertainty of a hypothesis, lower it and also determine how
good a hypothesis is, especially in comparison to other hypotheses?

So, in this case, we are trying to use an outline comparison to determine
the best match hypotheses between objects. But, that doesn't define how you
score alternative hypotheses. That also is certainly not the only way to do
it. You could use the details within the outline too. In fact, in some
situations, this would be required to disambiguate between the possible
hypotheses.
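
As a rough sketch of combining the two cues (the features and the 50/50
weighting below are placeholder assumptions of mine, not a settled scoring
method):

def similarity(x, y):
    """1.0 when equal, falling toward 0.0 as the relative difference grows."""
    return 1.0 / (1.0 + abs(x - y) / max(abs(x), abs(y), 1e-9))

def match_score(obj_a, obj_b, w_outline=0.5, w_detail=0.5):
    # Combine outline similarity with similarity of details inside the outline.
    outline = similarity(obj_a["outline_len"], obj_b["outline_len"])
    detail = similarity(obj_a["mean_intensity"], obj_b["mean_intensity"])
    return w_outline * outline + w_detail * detail

frame1_e = {"outline_len": 120, "mean_intensity": 30}
frame2_e = {"outline_len": 118, "mean_intensity": 31}
frame2_smudge = {"outline_len": 300, "mean_intensity": 200}

print(match_score(frame1_e, frame2_e))       # close to 1: strong match hypothesis
print(match_score(frame1_e, frame2_smudge))  # much lower: weak match hypothesis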


 P.S. It would be mindblowingly foolish BTW to think you can do better
 than the way an infant learns to see - that's an awfully big visual section
 of the brain there, and it works.


I'm not trying to do better than the human brain. I am trying to solve the
same problems that the brain solves in a different way, sometimes better
than the brain, sometimes worse, sometimes equivalently. What would be
foolish is to assume the only way to duplicate general intelligence is to
copy the human brain. By taking this approach, you are forced to reverse
engineer and understand something that is extremely difficult to reverse
engineer. In addition, a solution that uses the brain's design may not be
economically feasible. So, approaching the problem by copying the human
brain has additional risks. You may end up figuring out how the brain works
and not be able to use it. In addition, you might not end up with a good
understanding of what other solutions might be possible.

Dave





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread David Jones
On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Couple of quick comments (I'm still thinking about all this  - but I'm
 confident everything AGI links up here).

 A fluid schema is arguably by its v. nature a method - a trial and error,
 arguably universal method. It links vision to the hand or any effector.
 Handling objects also is based on fluid schemas - you put out a fluid
 adjustably-shaped hand to grasp things. And even if you don't have hands,
 like a worm, and must grasp things with your body, and must grasp the
 ground under which you move, then too you must use fluid body schemas/maps.

 All concepts - the basis of language and before language, all intelligence
 - are also almost certainly fluid schemas (and not as you suggested,
 patterns).


Fluid schemas are not an actual algorithm. It is not clear how to go about
implementing such a design. Even so, when you get into the details of
actually implementing it, you will find yourself faced with the exact same
problems I'm trying to solve. So, let's say you take the first frame and
generate an initial fluid schema. What if an object disappears? What if
the object changes? What if the object moves a little or a lot? What if a
large number of changes occur at once, like one new thing suddenly blocking
a bunch of similar stuff that is behind it? How far does your fluid schema
have to be distorted for the algorithm to realize that it needs a new schema
and can't use the same old one? You can't just say that all objects are
always present and just distort the schema. What if two similar objects
appear or both move and one disappears? How does your schema handle this?
Regardless of whether you talk about hypotheses or schemas, it is the SAME
problem. You can't avoid the fact that the whole thing is underdetermined
and you need a way to score and compare hypotheses.

If you disagree, please define your schema algorithm a bit more
specifically. Then we would be able to analyze its pros and cons better.



 All creative problemsolving begins from concepts of what you want to do
  (and not formulae or algorithms as in rational problemsolving). Any
 suggestion to the contrary will not, I suggest, bear the slightest serious
 examination.


Sure.  I would point out though that children do stuff just to learn in the
beginning. A good example is our desire to play. Playing is a strategy by
which children learn new things even though they don't have a need for those
things yet. It motivates us to learn for the future and not for any pressing
present needs.

No matter how you look at it, you will need algorithms for general
intelligence. To say otherwise makes zero sense. No algorithms, no design.
No matter what design you come up with, I call that an algorithm. Algorithms
don't have to be formulaic or narrow. Keep an open mind about the word
algorithm, unless you can suggest a better term to describe general AI
algorithms.


 **Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things -
 gropings.**

 Point 2 : I'd relook at your assumptions in all your musings  - my
 impression is they all assume, unwittingly, an *adult* POV - the view of
 s.o. who already knows how to see - as distinct from an infant who is just
 learning to see and get to grips with an extremely blurred world, (even
 more blurred and confusing, I wouldn't be surprised, than that Prakash
 video). You're unwittingly employing top down, fully-formed-intelligence
 assumptions even while overtly trying to produce a learning system - you're
 looking for what an adult wants to know, rather than what an infant
 starting-from-almost-no-knowledge-of-the-world wants to know.

 If you accept the point in any way, major philosophical rethinking is
 required.


This point doesn't really define at all how the approach should be changed
or what approach to take. So, it doesn't change the way I approach the
problem. You would really have to be more specific. For example, you could
say that the infant doesn't even know how to group pixels, so it has to
automatically learn that. I would have to disagree with this approach
because I can't think of any reasonable algorithms that could reasonably
explore possibilities. It doesn't seem better to me to describe the problem
even more generally to the point where you are learning how to learn. This
is what Abram was suggesting. But, as I said to him, you need a way to
suggest and search for possible learning methods and then compare them.
There doesn't seem to be a way to do this effectively. And so, you shouldn't
over-generalize in this way. As I said in the initial email (this week),
there is no such thing as perfectly general and a silver bullet for solving
any problem. So, I believe that even infants are born expecting what the
world will be like. They aren't able to learn about any world. They are
optimized to configure their brains for this world.



  *From:* David Jones davidher...@gmail.com
 *Sent:* Friday, July 09

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread David Jones
Mike,

Please outline your algorithm for fluid schemas though. It will be clear
when you do that you are faced with the exact same uncertainty problems I am
dealing with and trying to solve. The problems are completely equivalent.
Yours is just a specific approach that is not sufficiently defined.

You have to define how you deal with uncertainty when using fluid schemas or
even how to approach the task of figuring it out. Until then, its not a
solution to anything.

Dave

On Fri, Jul 9, 2010 at 10:59 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  If fluid schemas - speaking broadly - are what is needed, (and I'm pretty
 sure they are), it's n.g. trying for something else. You can't substitute a
 square approach for a fluid amoeba outline approach. (And you will
 certainly need exactly such an approach to recognize amoeba's).

 If it requires a new kind of machine, or a radically new kind of
 instruction set for computers, then that's what it requires - Stan Franklin,
 BTW, is one person who does recognize, and is trying to deal with this
 problem - might be worth checking up on him.

 This is partly BTW why my instinct is that it may be better to start with
 tasks for robot hands*, because it should be possible to get them to apply
 a relatively flexible and fluid grip/handshape and grope for and experiment
 with differently shaped objects And if you accept the broad philosophy I've
 been outlining, then it does make sense that evolution should have started
 with touch as a more primary sense, well before it got to vision.

 *Or perhaps it may prove better to start with robot snakes/bodies or
 somesuch.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Friday, July 09, 2010 3:22 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI



 On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Couple of quick comments (I'm still thinking about all this  - but I'm
 confident everything AGI links up here).

 A fluid schema is arguably by its v. nature a method - a trial and error,
 arguably universal method. It links vision to the hand or any effector.
 Handling objects also is based on fluid schemas - you put out a fluid
 adjustably-shaped hand to grasp things. And even if you don't have hands,
 like a worm, and must grasp things with your body, and must grasp the
 ground under which you move, then too you must use fluid body schemas/maps.

 All concepts - the basis of language and before language, all intelligence
 - are also almost certainly fluid schemas (and not as you suggested,
 patterns).


 Fluid schemas are not an actual algorithm. It is not clear how to go about
 implementing such a design. Even so, when you get into the details of
 actually implementing it, you will find yourself faced with the exact same
 problems I'm trying to solve. So, let's say you take the first frame and
 generate an initial fluid schema. What if an object disappears? What if
 the object changes? What if the object moves a little or a lot? What if a
 large number of changes occur at once, like one new thing suddenly blocking
 a bunch of similar stuff that is behind it? How far does your fluid schema
 have to be distorted for the algorithm to realize that it needs a new schema
 and can't use the same old one? You can't just say that all objects are
 always present and just distort the schema. What if two similar objects
 appear or both move and one disappears? How does your schema handle this?
 Regardless of whether you talk about hypotheses or schemas, it is the SAME
 problem. You can't avoid the fact that the whole thing is underdetermined
 and you need a way to score and compare hypotheses.

 If you disagree, please define your schema algorithm a bit more
 specifically. Then we would be able to analyze its pros and cons better.



 All creative problemsolving begins from concepts of what you want to do
  (and not formulae or algorithms as in rational problemsolving). Any
 suggestion to the contrary will not, I suggest, bear the slightest serious
 examination.


 Sure.  I would point out though that children do stuff just to learn in the
 beginning. A good example is our desire to play. Playing is a strategy by
 which children learn new things even though they don't have a need for those
 things yet. It motivates us to learn for the future and not for any pressing
 present needs.

 No matter how you look at it, you will need algorithms for general
 intelligence. To say otherwise makes zero sense. No algorithms, no design.
 No matter what design you come up with, I call that an algorithm. Algorithms
 don't have to be formulaic or narrow. Keep an open mind about the word
 algorithm, unless you can suggest a better term to describe general AI
 algorithms.


 **Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things -
 gropings.**

 Point 2 : I'd relook at your assumptions in all your musings  - my
 impression is they all assume, unwittingly

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread David Jones
Although I haven't studied Solomonoff induction yet (I plan to read up on
it), I've realized that people seem to be making the same mistake I was.
People are trying to find one silver bullet method of induction or learning
that works for everything. I've begun to realize that it's OK if something
doesn't work for everything, as long as it works on a large enough subset of
problems to be useful. If you can figure out how to construct justifiable
methods of induction for enough problems that you need to solve, then that is
sufficient for AGI.

This is the same mistake I made and it was the point I was trying to make in
the recent email I sent. I kept trying to come up with algorithms for doing
things and I could always find a test case to break it. So, now I've begun
to realize that it's ok if it breaks sometimes! The question is, can you
define an algorithm that breaks gracefully and which can figure out what
problems it can be applied to and what problems it should not be applied to.
If you can do that, then you can solve the problems where it is applicable,
and avoid the problems where it is not.

This is perfectly OK! You don't have to find a silver bullet method of
induction or inference that works for everything!

Dave



On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel b...@goertzel.org wrote:


 To make this discussion more concrete, please look at

 http://www.vetta.org/documents/disSol.pdf

 Section 2.5 gives a simple version of the proof that Solomonoff induction
 is a powerful learning algorithm in principle, and Section 2.6 explains why
 it is not practically useful.

 What part of that paper do you think is wrong?

 thx
 ben



 On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]
 

 Solomonoff Induction is not a provable theorem; it is therefore a
 conjecture.  It cannot be computed, it cannot be verified.  There are many
 mathematical theorems that require the use of limits to prove them for
 example, and I accept those proofs.  (Some people might not.)  But there is
 no evidence that Solomonoff Induction would tend toward some limits.  Now
 maybe the conjectured abstraction can be verified through some other means,
 but I have yet to see an adequate explanation of that in any terms.  The
 idea that I have to answer your challenges using only the terms you specify
 is noise.

 Look at 2.  What does that say about your theorem?

 I am working on 1 but I just said: I haven't yet been able to find a way
 that could be used to prove that Solomonoff Induction does not do what Matt
 claims it does.
 What is not clear is that no one has objected to my characterization of
 the conjecture as I have been able to work it out for myself.  It requires
 an infinite set of infinitely computed probabilities of each infinite
 string.  If this characterization is correct, then Matt has been using the
 term string ambiguously.  As a primary sample space: A particular string.
 And as a compound sample space: All the possible individual cases of the
 substring compounded into one.  No one has yet told of his mathematical
 experiments of using a Turing simulator to see what a finite iteration of
 all possible programs of a given length would actually look like.

 I will finish this later.




  On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Solomonoff Induction would produce poor predictions if it could be used
 to compute them.


 Solomonoff induction is a mathematical, not verbal, construct.  Based on
 the most obvious mapping from the verbal terms you've used above into
 mathematical definitions in terms of which Solomonoff induction is
 constructed, the above statement of yours is FALSE.

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]

 Otherwise, your statement is in the same category as the statement by the
 protagonist of Dostoevsky's Notes from the Underground --

 I admit that two times two makes four is an excellent thing, but if we
 are to give everything its due, two times two makes five is sometimes a very
 charming thing too.

 ;-)



 Secondly, since it cannot be computed it is useless.  Third, it is not
 the sort of 

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread David Jones
The same goes for inference. There is no silver bullet method that is
completely general and can infer anything. There is no general inference
method. Sometimes it works, sometimes it doesn't. That is the nature of the
complex world we live in. My current theory is that the more we try to find
a single silver bullet, the more we will just break against the fact that
none exists.



On Fri, Jul 9, 2010 at 11:35 AM, David Jones davidher...@gmail.com wrote:

 Although I haven't studied Solomonoff induction yet, although I plan to
 read up on it, I've realized that people seem to be making the same mistake
 I was. People are trying to find one silver bullet method of induction or
 learning that works for everything. I've begun to realize that its OK if
 something doesn't work for everything. As long as it works on a large enough
 subset of problems to be useful. If you can figure out how to construct
 justifiable methods of induction for enough problems that you need to solve,
 then that is sufficient for AGI.

 This is the same mistake I made and it was the point I was trying to make
 in the recent email I sent. I kept trying to come up with algorithms for
 doing things and I could always find a test case to break it. So, now I've
 begun to realize that it's ok if it breaks sometimes! The question is, can
 you define an algorithm that breaks gracefully and which can figure out what
 problems it can be applied to and what problems it should not be applied to.
 If you can do that, then you can solve the problems where it is applicable,
 and avoid the problems where it is not.

 This is perfectly OK! You don't have to find a silver bullet method of
 induction or inference that works for everything!

 Dave



 On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel b...@goertzel.org wrote:


 To make this discussion more concrete, please look at

 http://www.vetta.org/documents/disSol.pdf

 Section 2.5 gives a simple version of the proof that Solomonoff induction
 is a powerful learning algorithm in principle, and Section 2.6 explains why
 it is not practically useful.

 What part of that paper do you think is wrong?

 thx
 ben



 On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]
 

 Solomonoff Induction is not a provable Theorem, it is therefore a
 conjecture.  It cannot be computed, it cannot be verified.  There are many
 mathematical theorems that require the use of limits to prove them for
 example, and I accept those proofs.  (Some people might not.)  But there is
 no evidence that Solmonoff Induction would tend toward some limits.  Now
 maybe the conjectured abstraction can be verified through some other means,
 but I have yet to see an adequate explanation of that in any terms.  The
 idea that I have to answer your challenges using only the terms you specify
 is noise.

 Look at 2.  What does that say about your Theorem.

 I am working on 1 but I just said: I haven't yet been able to find a way
 that could be used to prove that Solomonoff Induction does not do what Matt
 claims it does.
   Z
 What is not clear is that no one has objected to my characterization of
 the conjecture as I have been able to work it out for myself.  It requires
 an infinite set of infinitely computed probabilities of each infinite
 string.  If this characterization is correct, then Matt has been using the
 term string ambiguously.  As a primary sample space: A particular string.
 And as a compound sample space: All the possible individual cases of the
 substring compounded into one.  No one has yet to tell of his mathematical
 experiments of using a Turing simulator to see what a finite iteration of
 all possible programs of a given length would actually look like.

 I will finish this later.




  On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.comwrote:

 Abram,
 Solomoff Induction would produce poor predictions if it could be used
 to compute them.


 Solomonoff induction is a mathematical, not verbal, construct.  Based on
 the most obvious mapping from the verbal terms you've used above into
 mathematical definitions in terms of which Solomonoff induction is
 constructed, the above statement of yours is FALSE.

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness
 you believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-09 Thread David Jones
The way I define algorithms encompasses just about any intelligently
designed system. So, call it what you want. I really wish you would stop
avoiding the word. But, fine. I'll play your word game...

Define your system please. And justify why or how it handles uncertainty.
You said to overlay a hand to see if it fits. How do you define "fits"? The
truth is that it will never fit perfectly, so how do you define a good fit
and a bad one? You will find that you end up with the same exact problems I
am working on. You keep avoiding the need to define the system of fluid
schemas. You're avoiding it because it's not a solution to anything and you
can't define it without realizing that your idea doesn't pan out.

So, I dare you. Define your fluid schemas without revealing the fatal flaw
in your reasoning.

Dave
On Fri, Jul 9, 2010 at 12:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  There isn't an algorithm. It's basically a matter of overlaying shapes to
 see if they fit -  much as you put one hand against another to see if they
 fit - much as you can overlay a hand to see if it fits and is capable of
 grasping an object - except considerably more fluid/ rougher. There has to
 be some instruction generating the process, but it's not an algorithm. How
 can you have an algorithm for recognizing amoebas - or rocks or a drop of
 water? They are not patterned entities - or by extension reducible to
 algorithms. You don't need to think too much about internal visual processes
  - you can just look at the external objects-to-be-classified, the objects
 that make up this world, and see this. Just as you can look at a set of
 diverse patterns and see that they too are not reducible to any single
 formula/pattern/algorithm. We're talking about the fundamental structure of
 the universe and its contents.  If this is right and God is an artist
 before he is a mathematician, then it won't do any good screaming about it,
  you're going to have to invent a way to do art, so to speak, on computers.
 Or you can pretend that dealing with mathematical squares will somehow help
 here - but it hasn't and won't.

 Do you think that a creative process like creating

 http://www.apocalyptic-theories.com/gallery/lastjudge/bosch.jpg

 started with an algorithm?  There are other ways of solving problems than
 algorithms - the person who created each algorithm in the first place
 certainly didn't have one.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Friday, July 09, 2010 4:20 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI

 Mike,

 Please outline your algorithm for fluid schemas though. It will be clear
 when you do that you are faced with the exact same uncertainty problems I am
 dealing with and trying to solve. The problems are completely equivalent.
 Yours is just a specific approach that is not sufficiently defined.

 You have to define how you deal with uncertainty when using fluid schemas
  or even how to approach the task of figuring it out. Until then, it's not a
 solution to anything.

 Dave

 On Fri, Jul 9, 2010 at 10:59 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  If fluid schemas - speaking broadly - are what is needed, (and I'm
 pretty sure they are), it's n.g. trying for something else. You can't
 substitute a square approach for a fluid amoeba outline approach. (And
  you will certainly need exactly such an approach to recognize amoebas).

 If it requires a new kind of machine, or a radically new kind of
 instruction set for computers, then that's what it requires - Stan Franklin,
 BTW, is one person who does recognize, and is trying to deal with this
 problem - might be worth checking up on him.

 This is partly BTW why my instinct is that it may be better to start with
 tasks for robot hands*, because it should be possible to get them to apply
 a relatively flexible and fluid grip/handshape and grope for and experiment
  with differently shaped objects. And if you accept the broad philosophy I've
 been outlining, then it does make sense that evolution should have started
 with touch as a more primary sense, well before it got to vision.

 *Or perhaps it may prove better to start with robot snakes/bodies or
 somesuch.

  *From:* David Jones davidher...@gmail.com
 *Sent:* Friday, July 09, 2010 3:22 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI



 On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  Couple of quick comments (I'm still thinking about all this  - but I'm
 confident everything AGI links up here).

 A fluid schema is arguably by its v. nature a method - a trial and error,
 arguably universal method. It links vision to the hand or any effector.
 Handling objects also is based on fluid schemas - you put out a fluid
 adjustably-shaped hand to grasp things. And even if you don't have hands,
 like a worm, and must grasp things with your body, and must grasp the
 ground under

[agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
I've learned something really interesting today. I realized that general
rules of inference probably don't really exist. There is no such thing as
complete generality for these problems. The rules of inference that work for
one environment would fail in alien environments.

So, I have to modify my approach to solving these problems. As I studied
over simplified problems, I realized that there are probably an infinite
number of environments with their own behaviors that are not representative
of the environments we want to put a general AI in.

So, it is not ok to just come up with any case study and solve it. The case
study has to actually be representative of a problem we want to solve in an
environment we want to apply AI. Otherwise the solution required will take
too long to develop because it tries to accommodate too much
generality. As I mentioned, such a general solution is likely impossible.
So, someone could easily get stuck trying to solve an impossible task of
creating one general solution to too many problems that don't allow a
general solution.

The best course is a balance between the time required to write a very
general solution and the time required to write less general solutions for
multiple problem types and environments. The best way to do this is to
choose representative case studies to solve and make sure the solutions are
truth-tropic and justified for the environments they are to be applied.

Dave


On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote:

 A method for comparing hypotheses in explanatory-based reasoning:

  We prefer the hypothesis or explanation that *expects* more
  observations. If both explanations expect the same observations, then the
  simpler of the two is preferred (because the unnecessary terms of the more
  complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.
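
As an aside, here is a toy sketch of one way to read this preference rule;
the counting of expected observations and the complexity numbers are
placeholders of my own, not a worked-out scoring function.

def score(hypothesis, observations):
    # Count how many of the actual observations the hypothesis expected.
    return sum(1 for obs in observations if hypothesis["expects"](obs))

def prefer(h1, h2, observations):
    s1, s2 = score(h1, observations), score(h2, observations)
    if s1 != s2:
        return h1 if s1 > s2 else h2
    # Tie on explanatory power: prefer the simpler hypothesis.
    return h1 if h1["complexity"] <= h2["complexity"] else h2

observations = ["square A seen", "square B seen", "constant relative position"]
h_unison = {"expects": lambda o: True, "complexity": 1}             # both squares move together
h_ignore = {"expects": lambda o: "square A" in o, "complexity": 1}  # ignores square B
print(prefer(h_unison, h_ignore, observations) is h_unison)  # True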

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
  *Case Study 1)* Here is a link to an example: animated gif of two black
  squares moving from left to right: http://practicalai.org/images/CaseStudy1.gif
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
  *Case Study 2)* Here is a link to an example: the interrupted square:
  http://practicalai.org/images/CaseStudy2.gif
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two?

 Well, first of all, #3 is correct because it has the most explanatory power
 of the three and is the simplest of the three. Simpler is better because,
 with the given evidence and information, there is no reason to desire a more
 complicated hypothesis such as #2.

 So, the answer to the question is because explanation #3 expects the most
 observations, such as:
 1) the consistent relative positions of the squares in each frame are
 expected.
  2) It also expects their new positions in each frame based on velocity
 calculations.
 3) It expects both squares to occur in each frame.

 Explanation 1 ignores 1 square from each frame of the video, because it
  can't match it. Hypothesis #1 doesn't have a reason for why a new square

[agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
An easy demonstration of this is visual illusions and even visual mistakes
like one I sent to this list before. Our eyes sometimes infer things that
are not true. It is absolutely necessary for such mistakes to occur because
our sensory interpretation system is optimized for the world we expect to
encounter, which didn't include optical illusions during most of our development. A
perfect solution to all visual problems and possible environments is
[likely] impossible. It is ok to fail on optical illusions, since the
failure has no fatal consequences, other than maybe thinking that there is a
water puddle in the middle of the desert :).

Dave

On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.com wrote:

 I've learned something really interesting today. I realized that general
 rules of inference probably don't really exists. There is no such thing as
 complete generality for these problems. The rules of inference that work for
 one environment would fail in alien environments.

 So, I have to modify my approach to solving these problems. As I studied
 over simplified problems, I realized that there are probably an infinite
 number of environments with their own behaviors that are not representative
 of the environments we want to put a general AI in.

 So, it is not ok to just come up with any case study and solve it. The case
 study has to actually be representative of a problem we want to solve in an
 environment we want to apply AI. Otherwise the solution required will take
 too long to develop because of it tries to accommodate too much
 generality. As I mentioned, such a general solution is likely impossible.
 So, someone could easily get stuck trying to solve an impossible task of
 creating one general solution to too many problems that don't allow a
 general solution.

 The best course is a balance between the time required to write a very
 general solution and the time required to write less general solutions for
 multiple problem types and environments. The best way to do this is to
 choose representative case studies to solve and make sure the solutions are
 truth-tropic and justified for the environments they are to be applied.

 Dave


 On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.comwrote:

 A method for comparing hypotheses in explanatory-based reasoning:

  We prefer the hypothesis or explanation that *expects* more
  observations. If both explanations expect the same observations, then the
  simpler of the two is preferred (because the unnecessary terms of the more
  complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
  *Case Study 1)* Here is a link to an example: animated gif of two black
  squares moving from left to right:
  http://practicalai.org/images/CaseStudy1.gif
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
  *Case Study 2)* Here is a link to an example: the interrupted square:
  http://practicalai.org/images/CaseStudy2.gif
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
It may not be possible to create a learning algorithm that can learn how to
generally process images and other general AGI problems. This is for the
same reason that completely general vision algorithms are likely impossible.
I think that figuring out how to process sensory information intelligently
requires either 1) impossible amounts of processing or 2) intelligent design
and understanding by us.

Maybe you could be more specific about how general learning algorithms would
solve problems such as the one I'm tackling. But, I am extremely doubtful it
can be done because the problems cannot be effectively described to such an
algorithm. If you can't describe the problem, it can't search for solutions.
If it can't search for solutions, you're basically stuck with evolution type
algorithms, which require prohibitive amounts of processing.

The reason that vision is so important for learning is that sensory
perception is the foundation required to learn everything else. If you don't
start with a foundational problem like this, you won't be representing the
real nature of general intelligence problems that require extensive
knowledge of the world to solve properly. Sensory perception is required to
learn the information needed to understand everything else. Text and
language for example, require extensive knowledge about the world to
understand and especially to learn about. If you start with general learning
algorithms on these unrepresentative problems, you will get stuck as we
already have.

So, it still makes a lot of sense to start with a concrete problem that does
not require extensive amounts of previous knowledge to start learning. In
fact, AGI requires that you not pre-program the AI with such extensive
knowledge. So, lots of people are working on general learning algorithms
that are unrepresentative of what is required for AGI because the algorithms
don't have the knowledge needed to learn what they are trying to learn
about. Regardless of how you look at it, my approach is definitely the right
approach to AGI in my opinion.



On Thu, Jul 8, 2010 at 5:02 PM, Abram Demski abramdem...@gmail.com wrote:

 David,

 That's why, imho, the rules need to be *learned* (and, when need be,
 unlearned). IE, what we need to work on is general learning algorithms, not
 general visual processing algorithms.

 As you say, there's not even such a thing as a general visual processing
 algorithm. Learning algorithms suffer similar environment-dependence, but
 (by their nature) not as severe...

 --Abram

 On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.com wrote:

 I've learned something really interesting today. I realized that general
 rules of inference probably don't really exists. There is no such thing as
 complete generality for these problems. The rules of inference that work for
 one environment would fail in alien environments.

 So, I have to modify my approach to solving these problems. As I studied
 over simplified problems, I realized that there are probably an infinite
 number of environments with their own behaviors that are not representative
 of the environments we want to put a general AI in.

 So, it is not ok to just come up with any case study and solve it. The
 case study has to actually be representative of a problem we want to solve
 in an environment we want to apply AI. Otherwise the solution required will
 take too long to develop because of it tries to accommodate too much
 generality. As I mentioned, such a general solution is likely impossible.
 So, someone could easily get stuck trying to solve an impossible task of
 creating one general solution to too many problems that don't allow a
 general solution.

 The best course is a balance between the time required to write a very
 general solution and the time required to write less general solutions for
 multiple problem types and environments. The best way to do this is to
 choose representative case studies to solve and make sure the solutions are
 truth-tropic and justified for the environments they are to be applied.

 Dave


 On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.comwrote:

 A method for comparing hypotheses in explanatory-based reasoning:

  We prefer the hypothesis or explanation that *expects* more
  observations. If both explanations expect the same observations, then the
  simpler of the two is preferred (because the unnecessary terms of the more
  complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
Abram,

Yeah, I would have to object for a couple reasons.

First, prediction requires previous knowledge. So, even if you make that
your primary goal, you're still going to have my research goals as the
prerequisite: which are to process visual information in a more general way
and learn about the environment in a more general way.

Second, not everything is predictable. Certainly, we should not try to
predict everything. Only after we have experience, can we actually predict
anything. Even then, it's not precise prediction, like predicting the next
frame of a video. It's more like having knowledge of what is quite likely to
occur, or maybe an approximate prediction, but not guaranteed in the least.
For example, based on previous experience, striking a match will light it.
But, sometimes it doesn't light, and that too is expected to occur
sometimes. We definitely don't predict the next image we'll see when it
lights though. We just have expectations for what we might see and this
helps us interpret the image effectively. We should try to expect certain
outcomes or possible outcomes though. You could call that prediction, but
it's not quite the same. The things we are more likely to see should be
attempted as an explanation first and preferred if not given a reason to
think otherwise.
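
To make that last heuristic concrete, here is a toy sketch (the candidate
explanations and the numbers are invented): order the candidate explanations
by how strongly we expect them, and keep the first one that the evidence does
not contradict.

def interpret(evidence, candidates):
    # candidates: list of (expectation_strength, name, contradicted_fn)
    for _, name, contradicted in sorted(candidates, key=lambda c: -c[0]):
        if not contradicted(evidence):
            return name
    return "no acceptable explanation"

candidates = [
    (0.9, "match lit and burned", lambda e: "no flame" in e),
    (0.1, "match failed to light", lambda e: "flame" in e),
]
print(interpret({"flame"}, candidates))     # -> match lit and burned
print(interpret({"no flame"}, candidates))  # -> match failed to light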


Dave


On Thu, Jul 8, 2010 at 5:51 PM, Abram Demski abramdem...@gmail.com wrote:

 David,

 How I'd present the problem would be predict the next frame, or more
 generally predict a specified portion of video given a different portion. Do
 you object to this approach?

 --Abram

 On Thu, Jul 8, 2010 at 5:30 PM, David Jones davidher...@gmail.com wrote:

 It may not be possible to create a learning algorithm that can learn how
 to generally process images and other general AGI problems. This is for the
 same reason that completely general vision algorithms are likely impossible.
 I think that figuring out how to process sensory information intelligently
 requires either 1) impossible amounts of processing or 2) intelligent design
 and understanding by us.

 Maybe you could be more specific about how general learning algorithms
 would solve problems such as the one I'm tackling. But, I am extremely
 doubtful it can be done because the problems cannot be effectively described
 to such an algorithm. If you can't describe the problem, it can't search for
 solutions. If it can't search for solutions, you're basically stuck with
 evolution type algorithms, which require prohibitory amounts of processing.

 The reason that vision is so important for learning is that sensory
 perception is the foundation required to learn everything else. If you don't
 start with a foundational problem like this, you won't be representing the
 real nature of general intelligence problems that require extensive
 knowledge of the world to solve properly. Sensory perception is required to
 learn the information needed to understand everything else. Text and
 language for example, require extensive knowledge about the world to
 understand and especially to learn about. If you start with general learning
 algorithms on these unrepresentative problems, you will get stuck as we
 already have.

 So, it still makes a lot of sense to start with a concrete problem that
 does not require extensive amounts of previous knowledge to start learning.
 In fact, AGI requires that you not pre-program the AI with such extensive
 knowledge. So, lots of people are working on general learning algorithms
 that are unrepresentative of what is required for AGI because the algorithms
 don't have the knowledge needed to learn what they are trying to learn
 about. Regardless of how you look at it, my approach is definitely the right
 approach to AGI in my opinion.



 On Thu, Jul 8, 2010 at 5:02 PM, Abram Demski abramdem...@gmail.comwrote:

 David,

 That's why, imho, the rules need to be *learned* (and, when need be,
 unlearned). IE, what we need to work on is general learning algorithms, not
 general visual processing algorithms.

 As you say, there's not even such a thing as a general visual processing
 algorithm. Learning algorithms suffer similar environment-dependence, but
 (by their nature) not as severe...

 --Abram

 On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.comwrote:

 I've learned something really interesting today. I realized that general
 rules of inference probably don't really exists. There is no such thing as
 complete generality for these problems. The rules of inference that work 
 for
 one environment would fail in alien environments.

 So, I have to modify my approach to solving these problems. As I studied
 over simplified problems, I realized that there are probably an infinite
 number of environments with their own behaviors that are not representative
 of the environments we want to put a general AI in.

 So, it is not ok to just come up with any case study and solve it. The
 case study has to actually be representative of a problem we

Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread David Jones
narrow AI is a term that describes the solution to a problem, not the
problem. It is a solution with a narrow scope. General AI on the other hand
should have a much larger scope than narrow ai and be able to handle
unforseen circumstances.

What I don't think you realize is that open sets can be described by closed
sets. Here is an example from my own research. The set of objects I'm
allowing in the simplest case studies so far is black squares. This is a
closed set. But the number, movement and relative positions of these
squares form an open set. I can define an infinite number of ways in which
anywhere from zero to infinitely many black squares can move. If I define a general AI
algorithm, it should be able to handle the infinite subset of the open set
that is representative of some aspect of the real world. We could also study
case studies that are not representative of the environment though.

The example I just gave is a completely open set, yet an algorithm could
handle such an open set, and I am designing for it. So, your claim that no
one is studying or handling such things is not right.
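
Here is a toy sketch of what I mean (the scene representation is invented):
the object kinds form a closed set, but the scenes they generate are
open-ended, and the algorithm is written against the generator rather than
against an enumeration of scenes.

import random

def random_scene(max_squares=10):
    # Closed set of kinds ({"black square"}), open set of counts/positions/motions.
    n = random.randint(0, max_squares)
    return [{"kind": "black square",
             "pos": (random.uniform(0, 100), random.uniform(0, 100)),
             "velocity": (random.uniform(-5, 5), random.uniform(-5, 5))}
            for _ in range(n)]

def step(scene):
    # One general rule that handles any member of the open set:
    # advance every square by its velocity, whatever the count.
    return [{**sq, "pos": (sq["pos"][0] + sq["velocity"][0],
                           sq["pos"][1] + sq["velocity"][1])}
            for sq in scene]

print(step(random_scene()))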

Dave
On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'd like opinions on terminology here.

 IMO the opposition of closed sets vs open sets is fundamental to the
 difference between narrow AI and AGI.

 However I notice that these terms have different meanings to mine in maths.

 What I mean is:

 closed set: contains a definable number and *kinds/species* of objects

 open set: contains an undefinable number and *kinds/species* of objects
 (what we in casual, careless conversation describe as containing all kinds
 of things);  the rules of an open set allow adding new kinds of things ad
 infinitum

 Narrow AI's operate in artificial environments containing closed sets of
 objects - all of wh. are definable. AGI's operate in real world environments
 containing open sets of objects - some of wh. will be definable, and some
 definitely not

 To engage in any real world activity, like walking down a street or
 searching/tidying a room or reading a science book/text is to  operate
 with open sets of objects,  because the next field of operations - the
 next street or room or text -  may and almost certainly will have
 unpredictably different kinds of objects from the last.

 Any objections to my use of these terms, or suggestions that I should use
 others?







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread David Jones
Nice Occam's Razor argument. I understood it simply because I knew there are
always an infinite number of possible explanations for every observation
that are more complicated than the simplest explanation. So, without a
reason to choose one of those other interpretations, then why choose it? You
could look for reasons in complex environments, but it would likely be more
efficient to wait for a reason to need a better explanation. It's more
efficient to wait for an inconsistency than to search an infinite set
without a reason to do so.

Dave

On Fri, Jul 2, 2010 at 6:08 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, to address all of your points,

 Solomonoff induction claims that the probability of a string is
 proportional to the number of programs that output the string, where each
 program M is weighted by 2^-|M|. The probability is dominated by the
 shortest program (Kolmogorov complexity), but it is not exactly the same.
 The difference is small enough that we may neglect it, just as we neglect
 differences that depend on choice of language.
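
As an aside, the weighting described above is usually written, for a
universal prefix machine U and programs p whose output begins with the
string x, as:

  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}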

 Here is the proof that Kolmogorov complexity is not computable. Suppose it
 were. Then I could test the Kolmogorov complexity of strings in increasing
 order of length (breaking ties lexicographically) and describe the first
 string that cannot be described in less than a million bits, contradicting
 the fact that I just did. (Formally, I could write a program that outputs
 the first string whose Kolmogorov complexity is at least n bits, choosing n
 to be larger than my program).
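
As an aside, here is a rough sketch of that argument in code, assuming for
contradiction a computable kolmogorov_complexity(s). No such function exists;
it appears here only to show where the short description comes from.

from itertools import count, product

def first_string_with_complexity_at_least(n, kolmogorov_complexity):
    # Enumerate strings by increasing length, ties broken lexicographically.
    for length in count(0):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if kolmogorov_complexity(s) >= n:
                return s

# This short program plus a value of n would describe a string of complexity
# at least n in far fewer than n bits (for large n) -- the contradiction.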

 Here is the argument that Occam's Razor and Solomonoff distribution must be
 true. Consider all possible probability distributions p(x) over any infinite
  set X of possible finite strings x, i.e. any X = {x: p(x) > 0} that is
  infinite. All such distributions must favor shorter strings over longer
  ones. Consider any x in X. Then p(x) > 0. There can be at most a finite
 number (less than 1/p(x)) of strings that are more likely than x, and
 therefore an infinite number of strings which are less likely than x. Of
 this infinite set, only a finite number (less than 2^|x|) can be shorter
 than x, and therefore there must be an infinite number that are longer than
 x. So for each x we can partition X into 4 subsets as follows:

 - shorter and more likely than x: finite
 - shorter and less likely than x: finite
 - longer and more likely than x: finite
 - longer and less likely than x: infinite.

 So in this sense, any distribution over the set of strings must favor
 shorter strings over longer ones.
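
As an aside, here is a toy check of that counting with an invented
distribution: the n-th binary string in length-lexicographic order gets
weight 1/(n+1)^2, so all weights are distinct. For a fixed x, the counts of
more-likely strings and shorter strings stop growing as the cutoff increases,
while the count of longer-and-less-likely strings keeps growing.

from itertools import product

def strings_up_to(max_len):
    for length in range(max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def tally(x, max_len):
    enum = list(strings_up_to(max_len))
    w = {s: 1.0 / (i + 1) ** 2 for i, s in enumerate(enum)}  # invented weights
    more_likely = sum(1 for s in enum if s != x and w[s] > w[x])
    shorter = sum(1 for s in enum if len(s) < len(x))
    longer_less_likely = sum(1 for s in enum if len(s) > len(x) and w[s] < w[x])
    return more_likely, shorter, longer_less_likely

for cutoff in (3, 5, 7):
    print(cutoff, tally("01", cutoff))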


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Fri, July 2, 2010 4:09:38 PM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI



 On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer jimbro...@gmail.com wrote:

There cannot be a one to one correspondence to the representation of
 the shortest program to produce a string and the strings that they produce.
 This means that if the consideration of the hypotheses were to be put into
 general mathematical form it must include the potential of many to one
 relations between candidate programs (or subprograms) and output strings.



 But, there is also no way to determine what the shortest program is,
 since there may be different programs that are the same length.  That means
 that there is a many to one relation between programs and program length.
 So the claim that you could just iterate through programs *by length* is
  false.  This is the goal of algorithmic information theory, not a premise
 of a methodology that can be used.  So you have the diagonalization problem.



 A counter argument is that there are only a finite number of Turing Machine
 programs of a given length.  However, since you guys have specifically
 designated that this theorem applies to any construction of a Turing Machine
 it is not clear that this counter argument can be used.  And there is still
 the specific problem that you might want to try a program that writes a
 longer program to output a string (or many strings).  Or you might want to
 write a program that can be called to write longer programs on a dynamic
 basis.  I think these cases, where you might consider a program that outputs
 a longer program, (or another instruction string for another Turing
 Machine) constitutes a serious problem, that at the least, deserves to be
 answered with sound analysis.

 Part of my original intuitive argument, that I formed some years ago, was
 that without a heavy constraint on the instructions for the program, it will
 be practically impossible to test or declare that some program is indeed the
 shortest program.  However, I can't quite get to the point now that I can
 say that there is definitely a diagonalization problem.

 Jim Bromer

 

[agi] Re: Huge Progress on the Core of AGI

2010-06-29 Thread David Jones
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords or links? I've read
some through google, but I'm not really satisfied with anything I've found.

Thanks,

Dave

On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.com wrote:

 A method for comparing hypotheses in explanatory-based reasoning:

  We prefer the hypothesis or explanation that *expects* more
  observations. If both explanations expect the same observations, then the
  simpler of the two is preferred (because the unnecessary terms of the more
  complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
  *Case Study 1)* Here is a link to an example: animated gif of two black
  squares moving from left to right: http://practicalai.org/images/CaseStudy1.gif
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
  *Case Study 2)* Here is a link to an example: the interrupted square:
  http://practicalai.org/images/CaseStudy2.gif
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two?

 Well, first of all, #3 is correct because it has the most explanatory power
 of the three and is the simplest of the three. Simpler is better because,
 with the given evidence and information, there is no reason to desire a more
 complicated hypothesis such as #2.

 So, the answer to the question is because explanation #3 expects the most
 observations, such as:
 1) the consistent relative positions of the squares in each frame are
 expected.
  2) It also expects their new positions in each frame based on velocity
 calculations.
 3) It expects both squares to occur in each frame.

 Explanation 1 ignores 1 square from each frame of the video, because it
  can't match it. Hypothesis #1 doesn't have a reason for why a new square
 appears in each frame and why one disappears. It doesn't expect these
 observations. In fact, explanation 1 doesn't expect anything that happens
 because something new happens in each frame, which doesn't give it a chance
 to confirm its hypotheses in subsequent frames.

 The power of this method is immediately clear. It is general and it solves
 the problem very cleanly.

 *Here is a simplified version of how we solve case study 2:*
 We expect the original square to move at a similar velocity from left to
 right because we hypothesized that it did move from left to right and we
 calculated its velocity. If this expectation is confirmed, then it is more
 likely than saying that the square suddenly stopped and another started
 moving. Such a change would be unexpected and such a conclusion would be
 unjustifiable.

  I also believe that explanations which generate fewer incorrect
  expectations should be preferred over those that generate more incorrect
  expectations.

 The idea I came up with earlier this month regarding high frame rates to
 reduce uncertainty is still applicable. It is important that all generated
 hypotheses have as low uncertainty as possible given our constraints and
 resources available

Re: [agi] Re: Huge Progress on the Core of AGI

2010-06-29 Thread David Jones
Thanks Matt,

Right. But Occam's Razor is not complete. It says simpler is better, but 1)
this only applies when two hypotheses have the same explanatory power and 2)
what defines simpler?

So, maybe what I want to know from the state of the art in research is:

1) how precisely do other people define simpler
and
2) More importantly, how do you compare competing explanations/hypotheses
that have more or less explanatory power. Simpler does not apply unless you
are comparing equally explanatory hypotheses.

For example, the simplest hypothesis for all visual interpretation is that
everything in the first image is gone in the second image, and everything in
the second image is a new object. Simple. Done. Solved :) right? Well,
clearly a more complicated explanation is warranted because a more
complicated explanation is more *explanatory* and a better explanation. So,
why is it better? Can it be defined as better in a precise way so that you
can compare arbitrary hypotheses or explanations? That is what I'm trying to
learn about. I don't think much progress has been made in this area, but I'd
like to know what other people have done and any successes they've had.
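
For what it's worth, one proxy sometimes used for "simpler" is description
length (minimum-description-length style). Here is a toy sketch that scores a
hypothesis by how many observations it explains minus a length penalty; the
penalty weight and the example numbers are arbitrary, and this is only one
possible way to make "simpler" and "more explanatory" comparable, not an
answer to the question above.

import zlib

def description_length(hypothesis_text):
    # Crude simplicity proxy: compressed size of the hypothesis description.
    return len(zlib.compress(hypothesis_text.encode("utf-8")))

def score(hypothesis_text, explained, total_observations, penalty=0.01):
    coverage = explained / total_observations
    return coverage - penalty * description_length(hypothesis_text)

h_everything_new = "every object in frame 1 disappears; frame 2 is all new objects"
h_motion = "the same two squares persist and move right in unison"

# Suppose (numbers invented) the motion hypothesis explains 10 of 10
# observations and the everything-is-new hypothesis explains only 2 of 10:
print(score(h_motion, 10, 10))
print(score(h_everything_new, 2, 10))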

Dave


On Tue, Jun 29, 2010 at 10:29 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 David Jones wrote:
  If anyone has any knowledge of or references to the state of the art in
 explanation-based reasoning, can you send me keywords or links?

 The simplest explanation of the past is the best predictor of the future.
  http://en.wikipedia.org/wiki/Occam%27s_razor
  http://www.scholarpedia.org/article/Algorithmic_probability

 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com

 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 9:05:45 AM
 *Subject:* [agi] Re: Huge Progress on the Core of AGI

 If anyone has any knowledge of or references to the state of the art in
 explanation-based reasoning, can you send me keywords or links? I've read
 some through google, but I'm not really satisfied with anything I've found.

 Thanks,

 Dave

 On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.comwrote:

 A method for comparing hypotheses in explanatory-based reasoning:

  We prefer the hypothesis or explanation that *expects* more
  observations. If both explanations expect the same observations, then the
  simpler of the two is preferred (because the unnecessary terms of the more
  complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
  *Case Study 1)* Here is a link to an example: animated gif of two black
  squares moving from left to right:
  http://practicalai.org/images/CaseStudy1.gif
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
  *Case Study 2)* Here is a link to an example: the interrupted square:
  http://practicalai.org/images/CaseStudy2.gif
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:
 *The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer

Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
Mike,

THIS is the flawed reasoning that causes people to ignore vision as the
right way to create AGI. And I've finally come up with a great way to show
you how wrong this reasoning is.

I'll give you an extremely obvious argument that proves that vision requires
much less knowledge to interpret than language does. Let's say that you have
never been to Egypt and you have never seen some particular movie before. But
if you see the movie, an alien landscape, an alien world, a new place or any
such new visual experience, you can immediately interpret it in terms of
spatial, temporal, compositional and other relationships.

Now, go to Egypt and listen to people speak. Can you interpret it? Nope. Why?!
Because you don't have enough information. The language itself does not
contain any information to help you interpret it. We do not learn language
simply by listening. We learn based on evidence from how the language is
used and how it occurs in our daily lives. Without that experience, you
cannot interpret it.

But with vision, you do not need extra knowledge to interpret a new
situation. You can recognize completely new objects without any training
except for simply observing them in their natural state.

I wish people understood this better.

Dave

On Tue, Jun 29, 2010 at 12:51 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Just off the cuff here - isn't the same true for vision? You can't learn
 vision from vision. Just as all NLP has no connection with the real world,
 and totally relies on the human programmer's knowledge of that world.

 Your visual program actually relies totally on your visual vocabulary -
 not its own. That is the inevitable penalty of processing unreal signals on
 a computer screen which are not in fact connected to the real world any more
 than the verbal/letter signals involved in NLP are.

 What you need to do - what anyone in your situation with anything like your
  aspirations needs to do - is to hook up with a roboticist. Everyone here
 should be doing that.


  *From:* David Jones davidher...@gmail.com
 *Sent:* Tuesday, June 29, 2010 5:27 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] A Primary Distinction for an AGI

 You can't learn language from language without embedding way more knowledge
 than is reasonable. Language does not contain the information required for
 its interpretation. There is no *reason* to interpret the language into any
of the infinite possible interpretations. There is nothing to explain, but it
requires explanatory reasoning to determine the correct real world
interpretation.

 On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:

  David Jones wrote:
  Natural language requires more than the words on the page in the real
 world. Of...
 Any knowledge that can be demonstrated over a text-only channel (as in the
 Turing test) can also be learned over a text-only channel.



  Cyc also is trying to store knowledge about a super complicated world in
 simplistic forms and al...
 Cyc failed because it lacks natural language. The vast knowledge store of
 the internet is unintelligible to Cyc. The average person can't use it
 because they don't speak Cycl and because they have neither the ability nor
 the patience to translate their implicit thoughts into augmented first order
 logic. Cyc's approach was understandable when they started in 1984 when they
 had neither the internet nor the vast computing power that is required to
 learn natural language from unlabeled examples like children do.



  Vision and other sensory interpretaion, on the other hand, do not require
 more info because that...
 Without natural language, your system will fail too. You don't have enough
 computing power to learn language, much less the million times more
 computing power you need to learn to see.




 -- Matt Mahoney, matmaho...@yahoo.com

  
 From: David Jones davidher...@gmail.com
 To: agi a...@v2.listbox.c...
 *Sent:* Mon, June 28, 2010 9:28:57 PM


 Subject: Re: [agi] A Primary Distinction for an AGI


 Natural language requires more than the words on the page in the real
 world. Of course that didn't ...





Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
The point I was trying to make is that an approach that tries to interpret
language just using language itself and without sufficient information or
the means to realistically acquire that information, *should* fail.

On the other hand, an approach that tries to interpret vision with minimal
upfront knowledge requirements *should* succeed, because the knowledge required
to automatically learn to interpret images is amenable to preprogramming. In
addition, such knowledge must be pre-programmed. The knowledge for
interpreting language, though, should not be pre-programmed.

Dave

On Tue, Jun 29, 2010 at 2:51 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 David Jones wrote:
  I wish people understood this better.

 For example, animals can be intelligent even though they lack language
 because they can see. True, but an AGI with language skills is more useful
 than one without.

 And yes, I realize that language, vision, motor skills, hearing, and all
 the other senses and outputs are tied together. Skills in any area make
 learning the others easier.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 1:42:51 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 Mike,

 THIS is the flawed reasoning that causes people to ignore vision as the
 right way to create AGI. And I've finally come up with a great way to show
 you how wrong this reasoning is.

 I'll give you an extremely obvious argument that proves that vision
 requires much less knowledge to interpret than language does. Let's say that
 you have never been to Egypt and have never seen some particular movie
 before. But if you see the movie, an alien landscape, an alien world, a new
 place, or any such new visual experience, you can immediately interpret it in
 terms of spatial, temporal, compositional and other relationships.

 Now, go to Egypt and listen to them speak. Can you interpret it? Nope.
 Why?! Because you don't have enough information. The language itself does
 not contain any information to help you interpret it. We do not learn
 language simply by listening. We learn based on evidence from how the
 language is used and how it occurs in our daily lives. Without that
 experience, you cannot interpret it.

 But with vision, you do not need extra knowledge to interpret a new
 situation. You can recognize completely new objects without any training
 except for simply observing them in their natural state.

 I wish people understood this better.

 Dave

 On Tue, Jun 29, 2010 at 12:51 PM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  Just off the cuff here - isn't the same true for vision? You can't learn
 vision from vision. Just as all NLP has no connection with the real world,
 and totally relies on the human programmer's knowledge of that world.

 Your visual program actually relies totally on your visual vocabulary -
 not its own. That is the inevitable penalty of processing unreal signals on
 a computer screen which are not in fact connected to the real world any more
 than the verbal/letter signals involved in NLP are.

 What you need to do - what anyone in your situation with anything like
 your aspirations needs to do - is to hook up with a roboticist. Everyone here
 should be doing that.


  *From:* David Jones davidher...@gmail.com
 *Sent:* Tuesday, June 29, 2010 5:27 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] A Primary Distinction for an AGI

 You can't learn language from language without embedding way more
 knowledge than is reasonable. Language does not contain the information
 required for its interpretation. There is no *reason* to interpret the
 language into any of the infinite possible interpretations. There is nothing
 to explain, but it requires explanatory reasoning to determine the correct
 real-world interpretation.

 On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:

  David Jones wrote:
  Natural language requires more than the words on the page in the real
 world. Of...
 Any knowledge that can be demonstrated over a text-only channel (as in the
 Turing test) can also be learned over a text-only channel.



  Cyc also is trying to store knowledge about a super complicated world in
 simplistic forms and al...
 Cyc failed because it lacks natural language. The vast knowledge store of
 the internet is unintelligible to Cyc. The average person can't use it
 because they don't speak Cycl and because they have neither the ability nor
 the patience to translate their implicit thoughts into augmented first order
 logic. Cyc's approach was understandable when they started in 1984 when they
 had neither the internet nor the vast computing power that is required to
 learn natural language from unlabeled examples like children do.



  Vision and other sensory interpretation, on the other hand, do not
 require more info because that...
 Without natural language, your system

Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
the purpose of text is to convey something. It has to be interpreted. who
cares about predicting the next word if you can't interpret a single bit of
it.

On Tue, Jun 29, 2010 at 3:43 PM, David Jones davidher...@gmail.com wrote:

 People do not predict the next words of text. We anticipate them, but when
 something different shows up, we accept it if it is *explanatory*.
 Compression-like algorithms, though, will never be able to do this type of
 explanatory reasoning, which is required to disambiguate text. They are
 certainly not sufficient for learning language, which is not at all about
 predicting text.


 On Tue, Jun 29, 2010 at 3:38 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 Experiments in text compression show that text alone is sufficient for
 learning to predict text.
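
 As a purely illustrative sketch of prediction from text alone (a toy bigram
 counter, not the compression experiments referred to here), the following
 learns character statistics from a small corpus and predicts the most likely
 next character:

   #include <stdio.h>
   #include <string.h>

   static int counts[256][256];   /* bigram counts learned from the text */

   int main(void) {
     const char *corpus = "the cat sat on the mat and the cat ran";
     size_t n = strlen(corpus);

     for (size_t i = 0; i + 1 < n; ++i)
       counts[(unsigned char)corpus[i]][(unsigned char)corpus[i + 1]]++;

     unsigned char prev = 't';
     int best = 'h';              /* arbitrary initial guess */
     for (int c = 0; c < 256; ++c)
       if (counts[prev][c] > counts[prev][best]) best = c;

     printf("most likely character after '%c': '%c'\n", prev, best);
     return 0;
   }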

 I realize that for a machine to pass the Turing test, it needs a visual
 model of the world. Otherwise it would have a hard time with questions like
 "what word in this ernai1 did I spell wrong?" Obviously the easiest way to
 build a visual model is with vision, but it is not the only way.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 3:22:33 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 I certainly agree that the techniques and explanation generating
 algorithms for learning language are hard coded into our brain. But, those
 techniques alone are not sufficient to learn language in the absence of
 sensory perception or some other way of getting the data required.

 Dave

 On Tue, Jun 29, 2010 at 3:19 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 David Jones wrote:
   The knowledge for interpreting language though should not be
 pre-programmed.

 I think that human brains are wired differently than other animals to
 make language learning easier. We have not been successful in training other
 primates to speak, even though they have all the right anatomy such as vocal
 chords, tongue, lips, etc. When primates have been taught sign language,
 they have not successfully mastered forming sentences.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 3:00:09 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 The point I was trying to make is that an approach that tries to
 interpret language just using language itself and without sufficient
 information or the means to realistically acquire that information, *should*
 fail.

 On the other hand, an approach that tries to interpret vision with
 minimal upfront knowledge needs *should* succeed because the knowledge
 required to automatically learn to interpret images is amenable to
 preprogramming. In addition, such knowledge must be pre-programmed. The
 knowledge for interpreting language though should not be pre-programmed.

 Dave

 On Tue, Jun 29, 2010 at 2:51 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 David Jones wrote:
  I wish people understood this better.

 For example, animals can be intelligent even though they lack language
 because they can see. True, but an AGI with language skills is more useful
 than one without.

 And yes, I realize that language, vision, motor skills, hearing, and all
 the other senses and outputs are tied together. Skills in any area make
 learning the others easier.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 1:42:51 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 Mike,

 THIS is the flawed reasoning that causes people to ignore vision as the
 right way to create AGI. And I've finally come up with a great way to show
 you how wrong this reasoning is.

 I'll give you an extremely obvious argument that proves that vision
 requires much less knowledge to interpret than language does. Let's say that
 you have never been to Egypt and have never seen some particular movie
 before. But if you see the movie, an alien landscape, an alien world, a new
 place, or any such new visual experience, you can immediately interpret it in
 terms of spatial, temporal, compositional and other relationships.

 Now, go to Egypt and listen to them speak. Can you interpret it? Nope.
 Why?! Because you don't have enough information. The language itself does
 not contain any information to help you interpret it. We do not learn
 language simply by listening. We learn based on evidence from how the
 language is used and how it occurs in our daily lives. Without that
 experience, you cannot interpret it.

 But with vision, you do not need extra knowledge to interpret a new
 situation. You can recognize completely new objects without any training
 except for simply observing them in their natural state.

 I wish people understood this better.

 Dave

Re: [agi] Re: Huge Progress on the Core of AGI

2010-06-29 Thread David Jones
Such an example is nowhere near sufficient to accept the assertion that
program size is the right way to define the simplicity of a hypothesis.

Here is a counter example. It requires a slightly more complex example
because all zeros doesn't leave any room for alternative hypotheses.

Here is the sequence: 10, 21, 32

/* assumes #include <stdio.h> */
void hypothesis_1() {
   int ten = 10;
   int counter = 0;
   while (1) {
      printf("%d ", ten + counter);   /* prints 10, 21, 32, 43, ... */
      ten = ten + 10;
      counter = counter + 1;
   }
}

void hypothesis_2() {
   while (1)
      printf("10 21 32 ");            /* repeats the observed sequence verbatim */
}


Hypothesis 2 is simpler, yet clearly wrong. These examples don't really show
anything.

Dave

On Tue, Jun 29, 2010 at 3:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 David Jones wrote:
  I really don't think this is the right way to calculate simplicity.

 I will give you an example, because examples are more convincing than
 proofs.

 Suppose you perform a sequence of experiments whose outcome can either be 0
 or 1. In the first 10 trials you observe 00. What do you expect to
 observe in the next trial?

 Hypothesis 1: the outcome is always 0.
 Hypothesis 2: the outcome is 0 for the first 10 trials and 1 thereafter.

 Hypothesis 1 is shorter than 2, so it is more likely to be correct.

 If I describe the two hypotheses in French or Chinese, then 1 is still
 shorter than 2.

 If I describe the two hypotheses in C, then 1 is shorter than 2.

   void hypothesis_1() {
     while (1) printf("0");
   }

   void hypothesis_2() {
     int i;
     for (i = 0; i < 10; ++i) printf("0");
     while (1) printf("1");
   }

 If I translate these programs into Perl or Lisp or x86 assembler, then 1
 will still be shorter than 2.

 I realize there might be smaller equivalent programs. But I think the
 smallest program equivalent to hypothesis_1 will be smaller than the smallest
 program equivalent to hypothesis_2.

 I realize there are other hypotheses than 1 or 2. But I think that the
 smallest one you can find that outputs eleven bits of which the first ten
 are zeros will be a program that outputs another zero.

 I realize that you could rewrite 1 so that it is longer than 2. But it is
 the shortest version that counts. More specifically consider all programs in
 which the first 10 outputs are 0. Then weight each program by 2^-length. So
 the shortest programs dominate.
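
 To make that weighting concrete, here is a minimal sketch in C. The bit
 lengths are made-up illustrative numbers, not measured encodings of the two
 hypotheses; the only point is that the 2^-length weights of shorter programs
 dominate the mixture.

   #include <stdio.h>
   #include <math.h>

   int main(void) {
     /* Hypothetical program lengths in bits (illustrative guesses only). */
     double len1 = 40.0;   /* "print 0 forever"                         */
     double len2 = 72.0;   /* "print 0 ten times, then 1 forever"       */

     double w1 = pow(2.0, -len1);
     double w2 = pow(2.0, -len2);

     /* If these were the only two programs consistent with ten observed
        zeros, the weighted probability that the 11th output is 0 is: */
     printf("P(next = 0) = %.12f\n", w1 / (w1 + w2));
     return 0;
   }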

 I realize you could make up a language where the shortest encoding of
 hypothesis 2 is shorter than 1. You could do this for any pair of
 hypotheses. However, I think if you stick to simple languages (and I
 realize this is a circular definition), then 1 will usually be shorter than
 2.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 1:31:01 PM

 *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI



 On Tue, Jun 29, 2010 at 11:26 AM, Matt Mahoney matmaho...@yahoo.comwrote:

  Right. But Occam's Razor is not complete. It says simpler is better, but
 1) this only applies when two hypotheses have the same explanatory power and
 2) what defines simpler?

 A hypothesis is a program that outputs the observed data. It explains
 the data if its output matches what is observed. The simpler hypothesis is
 the shorter program, measured in bits.


 I can't be confident that bits is the right way to do it. I suspect bits is
 an approximation of a more accurate method. I also suspect that you can
 write a more complex explanation program with the same number of bits. So,
 there are some flaws with this approach. It is an interesting idea to
 consider though.



 The language used to describe the data can be any Turing complete
 programming language (C, Lisp, etc) or any natural language such as
 English. It does not matter much which language you use, because for any two
 languages there is a fixed length procedure, described in either of the
 languages, independent of the data, that translates descriptions in one
 language to the other.


 Hypotheses don't have to be written in actual computer code and probably
 shouldn't be, because hypotheses are not really meant to be run per se.
 And outputs are not necessarily the right way to put it either. Outputs
 imply prediction. And as Mike has often pointed out, things cannot be
 precisely predicted. We can, however, determine whether a particular
 observation fits expectations, rather than equals some prediction. There may
 be multiple possible outcomes that we expect and which would be consistent
 with a hypothesis, which is why actual prediction should not be used.
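
 As a minimal sketch of that idea (toy types and a made-up tolerance, purely
 illustrative): instead of a program that emits one exact output, a
 hypothesis can be modeled as a check that says whether an observation fits
 expectations.

   #include <stdio.h>

   /* Toy observation: an object's measured horizontal position at time t. */
   typedef struct { int t; double x; } Observation;

   /* Hypothesis as a generator: commits to one exact predicted value. */
   double predicted_x(int t) { return 10.0 * t; }

   /* Hypothesis as an expectation check: accepts any observation within a
      tolerance band, so several different outcomes stay consistent with it. */
   int fits_expectations(Observation o) {
     double d = o.x - predicted_x(o.t);
     double tolerance = 1.5;              /* hypothetical noise allowance */
     return d > -tolerance && d < tolerance;
   }

   int main(void) {
     Observation o = { 3, 31.2 };         /* slightly off the exact prediction */
     printf("exact prediction: %.1f\n", predicted_x(o.t));
     printf("fits expectations: %s\n", fits_expectations(o) ? "yes" : "no");
     return 0;
   }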

  For example, the simplest hypothesis for all visual interpretation is
 that everything in the first image is gone in the second image, and
 everything in the second image is a new object. Simple. Done. Solved :)
 right?

 The hypothesis is not the simplest. The program that outputs the two
 frames as if independent cannot be smaller than the two frames compressed
 independently. The program could be made smaller

Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
On Tue, Jun 29, 2010 at 3:33 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  You're not getting where I'm coming from at all. I totally agree vision
 is far prior to language. (We and I've covered your points many times).
 That's not the point - wh. is that vision is nevertheless still vastly more
 complex, than you have any idea.


whatever you say. That has nothing to do with whether it should be pursued
this way or not.



 For one thing, vision depends on perceptualising/ conceptualising the world
 - a schematic ontology of the world - image-schematic. It almost certainly
 has to be done in a certain order, gradually built up.


how is that, even remotely, a reason to change the way I do my research? It
doesn't even logically follow...



 No one in our culture has much idea of either what that ontology - a visual
 ontology - consists of, or how it's built up.


Again, how is that an argument for changing my research? It's not. It does
not follow again.



 And for the most basic thing, you still haven't registered that your
 computer program has ZERO VISION. It's not actually looking at the world at
 all. It's BLIND - if you take the time to analyse it. A pretty fundamental
 error/ misconception.


Not an argument again. It has nothing to do with whether my approach will or
will not provide the valuable knowledge and foundation required to solve the
fundamental problems of general vision.



 Consequently, it also lacks a fundamental dimension of vision, wh. is
 POINT-OF-VIEW - distance of the visual medium (eg the retina) and viewing
 subject from the visual object.



AGAIN. Not an argument against my approach. It simply doesn't logically
follow from anything. How does having a point of view in example problems prove
that anything learned or developed isn't applicable to general vision?


 Get thee to a roboticist,  make contact with the real world.


Get yourself to a psychologist so that they can show you how flawed your
reasoning is. Fallacy upon fallacy. You are not in touch with reality.



  *From:* David Jones davidher...@gmail.com
 *Sent:* Tuesday, June 29, 2010 6:42 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] A Primary Distinction for an AGI

 Mike,

 THIS is the flawed reasoning that causes people to ignore vision as the
 right way to create AGI. And I've finally come up with a great way to show
 you how wrong this reasoning is.

 I'll give you an extremely obvious argument that proves that vision
 requires much less knowledge to interpret than language does. Let's say that
 you have never been to Egypt and have never seen some particular movie
 before. But if you see the movie, an alien landscape, an alien world, a new
 place, or any such new visual experience, you can immediately interpret it in
 terms of spatial, temporal, compositional and other relationships.

 Now, go to Egypt and listen to them speak. Can you interpret it? Nope.
 Why?! Because you don't have enough information. The language itself does
 not contain any information to help you interpret it. We do not learn
 language simply by listening. We learn based on evidence from how the
 language is used and how it occurs in our daily lives. Without that
 experience, you cannot interpret it.

 But with vision, you do not need extra knowledge to interpret a new
 situation. You can recognize completely new objects without any training
 except for simply observing them in their natural state.

 I wish people understood this better.

 Dave

 On Tue, Jun 29, 2010 at 12:51 PM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  Just off the cuff here - isn't the same true for vision? You can't learn
 vision from vision. Just as all NLP has no connection with the real world,
 and totally relies on the human programmer's knowledge of that world.

 Your visual program actually relies totally on your visual vocabulary -
 not its own. That is the inevitable penalty of processing unreal signals on
 a computer screen which are not in fact connected to the real world any more
 than the verbal/letter signals involved in NLP are.

 What you need to do - what anyone in your situation with anything like
 your aspirations needs to do - is to hook up with a roboticist. Everyone here
 should be doing that.


  *From:* David Jones davidher...@gmail.com
 *Sent:* Tuesday, June 29, 2010 5:27 PM
 *To:* agi agi@v2.listbox.com
  *Subject:* Re: [agi] A Primary Distinction for an AGI

 You can't learn language from language without embedding way more
 knowledge than is reasonable. Language does not contain the information
 required for its interpretation. There is no *reason* to interpret the
 language into any of the infinite possible interpretations. There is nothing
 to explain, but it requires explanatory reasoning to determine the correct
 real-world interpretation.

 On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:

  David Jones wrote:
  Natural language requires more than the words on the page in the real
 world. Of...
 Any

Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
Scratch my statement about it being useless :) It's useful, but nowhere
near sufficient for AGI-like understanding.

On Tue, Jun 29, 2010 at 4:58 PM, David Jones davidher...@gmail.com wrote:

 Notice how you said *context* of the conversation. The context is the real
 world, and it is completely missing. You cannot model human communication
 using text alone. The responses you would get back would be exactly like
 Eliza's. Sure, it might be pleasing to someone who has never seen AI before,
 but it's certainly not answering any questions.

 This reminds me of the Bing search engine commercials where people ask a
 question and get responses that include the words they asked about, but in a
 completely wrong context.

 Predicting the next word and understanding the question are completely
 different and cannot be solved the same way. In fact, predicting the next
 word is altogether useless (at least by itself) in my opinion.

 Dave


 On Tue, Jun 29, 2010 at 4:50 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 Answering questions is the same problem as predicting the answers. If you
 can compute p(A|Q) where Q is the question (and previous context of the
 conversation) and A is the answer, then you can also choose an answer A from
 the same distribution. If p() correctly models human communication, then the
 response would be indistinguishable from a human in a Turing test.
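
 As a minimal sketch of that step (hypothetical candidate answers and
 probabilities, not the output of any real model), choosing A from p(A|Q) is
 just sampling from the candidates in proportion to their probabilities:

   #include <stdio.h>
   #include <stdlib.h>
   #include <time.h>

   /* Made-up candidate answers to some fixed question Q, with probabilities
      p(A|Q) that a model is assumed to have supplied. */
   static const char *answers[] = { "yes", "no", "maybe" };
   static const double prob[]   = { 0.6,   0.3,  0.1 };

   const char *sample_answer(void) {
     double u = (double)rand() / RAND_MAX;   /* uniform draw in [0,1] */
     double cum = 0.0;
     for (int i = 0; i < 3; ++i) {
       cum += prob[i];
       if (u <= cum) return answers[i];
     }
     return answers[2];                      /* guard against rounding */
   }

   int main(void) {
     srand((unsigned)time(NULL));
     printf("sampled answer: %s\n", sample_answer());
     return 0;
   }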


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 3:43:53 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 the purpose of text is to convey something. It has to be interpreted. who
 cares about predicting the next word if you can't interpret a single bit of
 it.

 On Tue, Jun 29, 2010 at 3:43 PM, David Jones davidher...@gmail.comwrote:

 People do not predict the next words of text. We anticipate them, but when
 something different shows up, we accept it if it is *explanatory*.
 Compression-like algorithms, though, will never be able to do this type of
 explanatory reasoning, which is required to disambiguate text. They are
 certainly not sufficient for learning language, which is not at all about
 predicting text.


 On Tue, Jun 29, 2010 at 3:38 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 Experiments in text compression show that text alone is sufficient for
 learning to predict text.

 I realize that for a machine to pass the Turing test, it needs a visual
 model of the world. Otherwise it would have a hard time with questions like
 "what word in this ernai1 did I spell wrong?" Obviously the easiest way to
 build a visual model is with vision, but it is not the only way.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 3:22:33 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 I certainly agree that the techniques and explanation generating
 algorithms for learning language are hard coded into our brain. But, those
 techniques alone are not sufficient to learn language in the absence of
 sensory perception or some other way of getting the data required.

 Dave

 On Tue, Jun 29, 2010 at 3:19 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 David Jones wrote:
   The knowledge for interpreting language though should not be
 pre-programmed.

 I think that human brains are wired differently than other animals to
 make language learning easier. We have not been successful in training 
 other
 primates to speak, even though they have all the right anatomy such as 
 vocal
 chords, tongue, lips, etc. When primates have been taught sign language,
 they have not successfully mastered forming sentences.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* David Jones davidher...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, June 29, 2010 3:00:09 PM

 *Subject:* Re: [agi] A Primary Distinction for an AGI

 The point I was trying to make is that an approach that tries to
 interpret language just using language itself and without sufficient
 information or the means to realistically acquire that information, 
 *should*
 fail.

 On the other hand, an approach that tries to interpret vision with
 minimal upfront knowledge needs *should* succeed because the knowledge
 required to automatically learn to interpret images is amenable to
 preprogramming. In addition, such knowledge must be pre-programmed. The
 knowledge for interpreting language though should not be pre-programmed.

 Dave

 On Tue, Jun 29, 2010 at 2:51 PM, Matt Mahoney matmaho...@yahoo.comwrote:

 David Jones wrote:
  I wish people understood this better.

 For example, animals can be intelligent even though they lack language
 because they can see. True, but an AGI with language skills is more 
 useful
 than one without.

 And yes, I realize that language, vision, motor skills, hearing

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Mike,

Alive vs. dead? As I've said before, there is no actual difference. It is
not a qualitative difference that makes something alive or dead. It is a
quantitative difference. They are both controlled by physics. I don't mean
the nice clean physics rules that we approximate things with, I mean the
real dynamics of matter. Neither moves any more regularly or irregularly
than the other. It is harder to define why something alive moves because
the mechanism is normally too complex. If you didn't realize, there are life
forms that don't really move, such as viruses. Viruses are controlled by the
liquid that contains them. Yet, viruses are arguably alive. Some plants or
algae don't really move either. They may just grow in some direction, which
is not quite the same as movement.

Likewise, your analogy of this to AGI fails. You think there is a
difference, but there is none. You may think a fractal is more AGI than a
simple, low-noise black square, but that is not the case. It is completely
beside the point. I can easily add noise to my experiments. I can simulate
the noise of light, camera lenses, blurring, etc. But, why should I when,
even without noise, there is a clear unsolved AGI challenge. The explanatory
reasoning required to solve even zero noise problems is still required for
full complexity problems. If you can't solve it for 2 squares on a screen,
what makes you think you can solve it for real images? Your grasp of reality
regarding AGI is quite poor, in my opinion.

Your main claim is that the problems I am working on are not representative
or applicable to AGI. But, you fail to see that they really are. The
abductive reasoning required to solve these extremely simplified problems is
required for every other AGI problem as well. These problems might be
solvable using methods that don't apply to AGI. But, that's why it is
important to force oneself to solve them in such a way that it IS applicable
to AGI. It doesn't mean that you have to choose a problem that is so hard
you can't cheat. It's unnecessary to do that unless you can't control your
desire to cheat. I can. Developing in this way, such as an implementation of
explanatory based reasoning, is very much applicable to AGI.

Dave

On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

 The recent Core of AGI exchange has led me IMO to a beautiful conclusion -
 to one of the most basic distinctions a real AGI system must make, and also
 a  simple way of distinguishing between narrow AI and real AGI projects of
 any kind.

 Consider - you have

 a) Dave's square moving across a screen

 b) my square moving across a screen

 (it was a sort-of-Pong-player line, but let's make it a square box).

 How do you distinguish which is animate or inanimate, alive or dead? A
 very early distinction an infant must make.

 Remember inanimate objects move (or are moved) too, and in this case you
 can only see them in motion,  - so the self-starting distinction is out.

 Well, obviously, if Dave's moves *regularly* (like a train or falling
 stone), it's probably inanimate. If mine moves *irregularly* - if it stops
 and starts, or slows and accelerates in an irregular, even if only subtly jerky,
 fashion (like one operated by a human Pong player) - it's probably
 animate. That's what distinguishes the movement of life.

 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, in *patchy*/*patchwork*
 ways, and *unbleedingpredictably*.

 (IOW Newton is wrong - the laws of physics do not apply to living objects
 as whole objects  - that's the fundamental way we know they are living,
 because they visibly don't obey those laws - they don't normally move
 regularly like a stone falling to earth, or thrown through the sky. And
 we're v. impressed when humans like dancers or soldiers do manage by dint of
 great effort and practice to move with a high though not perfect degree of
 regularity and smoothness).

 And now we have such a simple way of distinguishing between narrow AI and
 real AGI projects. Look at their objects. The really narrow AI-er  will
 always do what Dave did - pick objects that are shaped regularly, move and
 behave regularly, are patterned, and predictable. Even  at as simple a level
 as plain old squares.

 And he'll pick closed, definable sets of objects.

 He'll do this instinctively, because he doesn't know any different - that's
 his intellectual, logicomathematical world - one of objects that no matter
 how complex (like fractals) are always regular in shape, movement,
 patterned, come in definable sets and are predictable.

 That's why Ben wants to see the world only as structured and patterned even
 though there's so much obvious mess and craziness everywhere - he's never
 known any different intellectually.

 That's why Michael can't bear to even contemplate a world in which things
 and people behave unpredictably. (And 

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Yeah. I forgot to mention that robots are not alive yet could act
indistinguishably from what is alive. The concept of alive is likely
something that requires inductive-type reasoning and generalization to
learn. Categorization, similarity analysis, etc. could assist in making such
distinctions as well.

The point is that AGI is not defined by any particular problem. It is
defined by how you solve problems, even simple ones, which is why your claim
that my problems are not AGI is simply wrong.

On Jun 28, 2010 12:22 PM, Jim Bromer jimbro...@gmail.com wrote:

On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:



 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
ways, and *predictably
This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and they were very slow.  It's unreliable just
because we need the AGI program to be able to consider situations when, for
example, inanimate objects move in patchy patchwork ways or in unpredictable
patterns.

Jim Bromer





[agi] The true AGI Distinction

2010-06-28 Thread David Jones
In case anyone missed it... Problems are not AGI. Solutions are. And AGI
is not the right adjective anyway. The correct word is general. In other
words, generally applicable to other problems. I repeat, Mike, you are
*wrong*. Did anyone miss that?

To recap, it has nothing to do with what problem you solve. It is all about
how you solve the problem and your understanding of how the solution is
generally applicable to other problems. So, you can kiss it Mike.

:D

Dave





Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
I also want to mention that I develop solutions to the toy problems with the
real problems in mind. I also fully intend to work my way up to the real
thing by incrementally adding complexity and exploring the problem well at
each level of complexity. As you do this, the flaws in the design will be
clear and I can retrace my steps to create a different solution. The benefit
to this strategy is that we fully understand the problems at each level of
complexity. When you run into something that is not accounted for, you are much
more likely to know how to solve it. Despite its difficulties, I prefer my
strategy to the alternatives.

Dave

On Mon, Jun 28, 2010 at 3:56 PM, David Jones davidher...@gmail.com wrote:

 That does not have to be the case. Yes, you need to know what problems you
 might have in more complicated domains to avoid developing completely
 useless theories on toy problems. But, as you develop for full-complexity
 problems, you are confronted with several subproblems. Because you have no
 previous experience, what tends to happen is that you hack together a solution
 that barely works and simply isn't right or scalable, because you don't have a
 full understanding of the individual subproblems. Having experience with
 the full problem is important, but forcing yourself to solve every subproblem
 at once is not a better strategy at all. You may think my strategy has
 flaws, but I know that and still chose it because the alternative
 strategies are worse.

 Dave


 On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace 
 russell.wall...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com
 wrote:
  But, that's why it is important to force oneself to solve them in such a
 way that it IS applicable to AGI. It doesn't mean that you have to choose a
 problem that is so hard you can't cheat. It's unnecessary to do that unless
 you can't control your desire to cheat. I can.

 That would be relevant if it was entirely a problem of willpower and
 self-discipline, but it isn't. It's also a problem of guidance. A real
 problem gives you feedback at every step of the way, it keeps blowing
 your ideas out of the water until you come up with one that will
 actually work, that you would never have thought of in a vacuum. A toy
 problem leaves you guessing, and most of your guesses will be wrong in
 ways you won't know about until you come to try a real problem and
 realize you have to throw all your work away.

 Conversely, a toy problem doesn't make your initial job that much
 easier. It means you have to write less code, sure, but what of it?
 That was only ever the lesser difficulty. The main reason toy problems
 are easier is that you can use lower grade methods that could never
 scale up to real problems -- in other words, precisely that you can
 'cheat'. But if you aren't going to cheat, you're sacrificing most of
 the ease of a toy problem, while also sacrificing the priceless
 feedback from a real problem -- the worst of both worlds.










Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Yes, I have. But what I found is that real vision is so complex, involving so
many problems that must be solved and studied, that any attempt at general
vision is beyond my current abilities. It would be like expecting a single
person, such as myself, to figure out how to build the H-bomb all by
themselves back before it had ever been done. It is the same scenario
because it involves many engineering and scientific problems that must all
be solved and studied.

You see in real vision you have a 3D world, camera optics, lighting issues,
noise, blurring, rotation, distance, projection, reflection, shadows,
occlusion, etc, etc, etc.

It is many orders of magnitude more difficult than the problems I'm studying. Yet,
really consider the two black squares problem. It's hard! It's so simple, yet
so hard. I still haven't fully defined how to do it algorithmically... I
will get to that in the coming weeks.

So, to work on the full problem is practically impossible for me. Seeing as
there isn't a lot of support for AGI research such as this, I am much
better served by proving the principle rather than implementing the full
solution to the real problem. If I can even prove how vision works on simple
black squares, I might be able to get help in my research... without a proof
of concept, no one will help. If I can prove it on screenshots, even better.
It would be a very significant achievement, if done in a truly general
fashion (keeping in mind that truly general is not really possible).

A great example of what happens when you work with real images is this...
Look at the current solutions. They use features, such as SIFT. Using SIFT
features, you might be able to say that an object exists with 70% certainty,
or something like that. But it won't be able to tell you what the object
looks like, what's behind it, what it is occluding, what's next to it, what
color it is, which pixels in the image belong to it, how those parts are
attached, etc. Now do you see why it makes little sense to tackle
the full problem? Even the state of the art in computer vision sucks. It is
great at certain narrow applications, but nowhere near where it needs to be
for AGI.

Dave

On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace
russell.wall...@gmail.comwrote:

 On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
 wrote:
  Having experience with the full problem is important, but forcing
 yourself to solve every sub problem at once is not a better strategy at all.

 Certainly going back to a toy problem _after_ gaining some experience
 with the full problem would have a much better chance of being a
 viable strategy. Have you tried that with what you're doing, i.e.
 having a go at writing a program to understand real video before going
 back to black squares and screen shots to improve the fundamentals?








  1   2   >