Re: [agi] My proposal for an AGI agenda

2007-03-22 Thread Kevin Peterson

On 3/22/07, John Rose [EMAIL PROTECTED] wrote:

One enhancement to existing computer languages, or to new computer languages, that
could possibly grease the wheels for AGI development would be aligning the
language more closely with mathematics.  Many of the computer languages are

[...]

This would be counterproductive. Give a man a hammer, and everything looks
like a nail; give a man Prolog, and everything is a problem in logic; give
him Lisp, and everything is symbols; give him higher-order language
constructs for sets, groups, and graphs, and everything will be expressed
through those. Force him to think about his data structures and
representation, and at least it will be clear that there are choices to be
made, and that there should be reasoning behind them.

This thread has been going on for what, weeks? The argument has been
going on since the beginning of time. If you think you need to invent
a new language to accomplish something, you don't know how to do it,
and creating the language will accomplish nothing. Every good language
has been a refinement of techniques that have grown popular in other
languages.

Solutions work best for well-specified problems. "AI is hard" isn't
specific. "Self-modifying code is difficult in Java" is the kind of
problem that may warrant using a different language. Wait, let me
qualify that: "Self-modifying code is difficult in Java, _and I've got
a design worked out that will make use of self-modifying code_" is the
kind of problem that may warrant using a different language. But AGI
is not going to be hacked together by some undergrad between WoW
sessions once he's given the right tools.
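
To make the Java point concrete: below is a rough sketch of what
"self-modifying" code typically amounts to in Java -- generate source text,
invoke the compiler, load the result reflectively. It's only an illustration
of the idiom; the class names and the generated code are made up, and it
assumes the system compiler (javax.tools, JDK 6+) is available at runtime.

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

import java.io.File;
import java.io.PrintWriter;
import java.net.URL;
import java.net.URLClassLoader;

public class RuntimeCodeDemo {
    public static void main(String[] args) throws Exception {
        // Step 1: "modify" the program by writing out new source text.
        File dir = new File(System.getProperty("java.io.tmpdir"));
        File src = new File(dir, "Generated.java");
        PrintWriter out = new PrintWriter(src);
        out.println("public class Generated implements Runnable {");
        out.println("    public void run() {");
        out.println("        System.out.println(\"hello from generated code\");");
        out.println("    }");
        out.println("}");
        out.close();

        // Step 2: invoke the system compiler (null if only a JRE is installed).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null || compiler.run(null, null, null, src.getPath()) != 0) {
            throw new IllegalStateException("compilation failed");
        }

        // Step 3: load and run the freshly compiled class reflectively.
        // javac drops Generated.class next to the source, so the temp dir is the classpath.
        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toURI().toURL() });
        Runnable task = (Runnable) loader.loadClass("Generated").newInstance();
        task.run();
    }
}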

The portions of the first seed AGI that are written by humans will not
be written in a language designed for that project.



Re: [agi] Semi-amusing Novamente machinima

2007-03-17 Thread Kevin Peterson

On 3/17/07, Ben Goertzel [EMAIL PROTECTED] wrote:

This doesn't really showcase Novamente's learning ability very much --
it's basically a smoke test of the integration of Novamente probabilistic
learning with the AGISim sim world -- an integration which we've had
sorta working for a while but has had a lot of kinks needing working-out.


I'm curious about how the AGI interfaces with the sim world. I'm
guessing that for now it's mostly just a visualization for humans, and NM is
given direct access to state (e.g., object, red, spherical, at x, y, z) without
needing a perceptual system?
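
For what it's worth, here is the sort of pre-digested state record I'm
imagining being handed to NM, sketched in Java. It's purely a guess for
illustration -- the class and field names are invented, and it is not the
actual Novamente/AGISim interface.

import java.util.Arrays;
import java.util.List;

// Purely illustrative: a guess at what a pre-digested percept might look
// like if the perceptual system is bypassed and the learner is handed
// symbolic state directly. Not the actual Novamente/AGISim interface.
public class SimPercept {
    public final String objectId;         // e.g. "ball_1"
    public final List<String> predicates; // e.g. "red", "spherical"
    public final double x, y, z;          // position in sim coordinates
    public final long tick;               // sim time step

    public SimPercept(String objectId, List<String> predicates,
                      double x, double y, double z, long tick) {
        this.objectId = objectId;
        this.predicates = predicates;
        this.x = x;
        this.y = y;
        this.z = z;
        this.tick = tick;
    }

    public static void main(String[] args) {
        SimPercept p = new SimPercept("ball_1",
                Arrays.asList("red", "spherical"), 1.0, 2.0, 0.5, 42L);
        System.out.println(p.objectId + " " + p.predicates
                + " at (" + p.x + ", " + p.y + ", " + p.z + ") tick " + p.tick);
    }
}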

Like Bob asks, what is the significance? Is it the success of the
general learning engine with no goals or knowledge except positive
feedback when it stumbles on the right behavior?



Re: [agi] Do AGIs dream of electric sheep?

2007-02-25 Thread Kevin Peterson

On 2/26/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:

But wouldn't it be difficult to integrate the results of the
experimental copy back into the working copy, which has since had new
experiences, memory formation, and lessons, by the end of the
experimentation and/or optimization period?


I don't see why. Any input into the duplicated module could be saved
during the time that subsystem is undergoing regularly scheduled
maintenance, then played back into the module at an accelerated rate
before swapping the optimized version back in.
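
A minimal sketch of that record-and-replay idea in Java is below. Every name
in it is hypothetical; it just shows inputs being buffered while the copy is
offline and then drained at full speed, with no real-time pacing, before the
swap.

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: buffer inputs while a duplicated module is offline
// being optimized, then replay the backlog at an accelerated rate before
// swapping the optimized copy back in.
public class ReplayBuffer<T> {

    // Minimal stand-in for whatever subsystem is being duplicated.
    public interface Module<T> {
        void process(T input);
    }

    private final Queue<T> backlog = new ArrayDeque<T>();

    // Called with every input that arrives while the module is offline.
    public synchronized void record(T input) {
        backlog.add(input);
    }

    // Drain the backlog into the refreshed module as fast as it will take it.
    public synchronized void replayInto(Module<T> optimizedCopy) {
        T input;
        while ((input = backlog.poll()) != null) {
            optimizedCopy.process(input); // no pacing: accelerated replay
        }
    }

    public static void main(String[] args) {
        ReplayBuffer<String> buffer = new ReplayBuffer<String>();
        buffer.record("percept: ball_1 moved");
        buffer.record("percept: reward +1");
        buffer.replayInto(new Module<String>() {
            public void process(String input) {
                System.out.println("replayed: " + input);
            }
        });
    }
}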
