Re: [agi] Novamente's next 15 minutes of fame...

2008-03-29 Thread Ben Goertzel
Nothing has been publicly released yet; it's still at the
research-prototype stage ... I'll announce when we have some kind of
product release...

ben

On Sat, Mar 29, 2008 at 5:39 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> It sounds interesting.  Can anyone go and try it, or does it cost money or
> something?  Is it set up already?
> Jim Bromer
>
>
>
> On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
> > --
> > Ben Goertzel, PhD
> > CEO, Novamente LLC and Biomind LLC
> > Director of Research, SIAI
> > [EMAIL PROTECTED]
> >
> > "If men cease to believe that they will one day become gods then they
> > will surely become worms."
> > -- Henry Miller
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-29 Thread David Salamon
Hey Jim,

Glad to hear you're making some headway on such an important and challenging
problem!

Don't read too much into Vladimir's response... he's probably just having a
hard day or something :p  If it's fair game to talk about all the other
narrow-AI topics on this list, talking about SAT is fair game.

As Vladimir notes, we do already have some pretty good solutions (two that
are both powerful and easy to understand are DPLL and WalkSAT; interestingly,
the latter was co-developed by the inventor of the BitTorrent protocol :p).
It is also worth remembering that people engineering AGIs would probably use
SAT far more often if such a solution existed, so it is very hard to gauge
all the potential benefits to the AGI effort (and the rest of humanity)
straight off.
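
For anyone who hasn't played with these, here is a minimal DPLL-style sketch
in Python; the clause encoding (lists of signed integers) and the naive
branching rule are just illustrative assumptions on my part, not how any
production solver actually works:

# Minimal DPLL-style SAT sketch. Clauses are lists of signed ints,
# e.g. (x1 OR NOT x2) is [1, -2]. Real solvers add watched literals,
# clause learning, restarts, etc. -- this is only the bare recursion.
def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}

    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue  # clause already satisfied
        remaining = [lit for lit in clause if abs(lit) not in assignment]
        if not remaining:
            return None  # clause falsified -> backtrack
        simplified.append(remaining)

    if not simplified:
        return assignment  # every clause satisfied

    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})

    # Otherwise branch on the first unassigned variable.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # {1: True, 3: True, 2: False}

WalkSAT replaces the backtracking with randomized local search over complete
assignments, which is why it scales so well on many satisfiable instances.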

I for one would (pardon my freedom) cream myself with joy to have such a
solver. Additionally, my heart goes out to anyone with the drive and skill to
work in areas like the ones you're in, especially someone with the rocks to
get up in front of a bunch of atheists and talk about their creator.

Keep us updated on this and any other AGI-related areas of interest (my
brief Google stalk turned up your interest in genetic
algorithms/programming; is that related?).

cheers,
-david salamon


On Sat, Mar 29, 2008 at 3:29 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

> I have made a little progress on my SAT theory. As I said, I believe that
> there is a chance that I might have gotten the word from the Lord on my
> efforts (seriously), although I am not, in any way, saying that either the
> project or the Lord's involvement is a sure thing.  So I am partially going
> on faith, but it is not blind faith.  I haven't come close to getting
> objective evidence that it will work for all cases, but so far it is acting
> within the range of expectations that I developed based on simulations that
> I created by parts.  (These 'simulations' were simple, and many were done in
> my mind, but some were done with pencil and paper, etc.)  I have examined the
> problem in parts, and by looking at the parts with different assumptions and
> examining the problems using positive, critical and alternative theories, I
> have come to the conclusion that it is feasible.  It will be a clunker
> though, no question about that.
>
> So anyway, I cannot yet prove my theory, but I cannot disprove it either.
> I have been working on the problem for three years, and I worked on it for a
> few months 20 years ago.  But I have been working on this current theory
> since Oct 2007.  I have had experiences similar to those that Ben and
> others have talked about, where I too thought I solved the problem only to
> discover that I hadn't a short time later, but this has been going on for
> five months since October, and I am not retracting anything yet.
>
> But the thing that I still want to talk about is whether or not anyone
> will be able to use a polynomial time solution to advantage if indeed I can
> actually do it (as I am starting to believe that I can).  An n^4 or n^5
> solution to SAT does not look so great and even an n^3 solution is a
> clunker.  And I also do not believe that strict logic is going to work for
> AGI.  But even so, I think I would be able to use the theory in AGI because
> I believe it would be useful to use logic in creating theories and
> theoretical models of whatever the program would consider, and even though
> those logical theories would have to be broken up into parts (parts that
> would be interconnected and may overlap) I now suspect that if simple
> logical theories were composed of hundreds of variations they could be used
> more intuitively and more profoundly than if they were constrained to a
> concise statement of only a few logical variables.  And an n^3 SAT solver
> can easily handle a few thousand variables; a 2^n solver cannot.
>
> And what most of the readers of my previous message have not realized is
> that a solution to SAT will almost surely have a greater potential effect
> than merely solving the narrow problem of SAT itself.  It will be a new way
> to look at logical complexity and it will eventually lead to new ways to
> handle logical problems.  Imagining overlapping, interrelated partitions of
> logical theories, each able to handle up to a few thousand logical variables
> and a few thousand logical interconnections between those parts, I believe
> I can see how an artificial mind might be both an intuitive network device
> and a strong logical analytical device.
>
> Jim Bromer

[agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-29 Thread Boris Kazachenko

Here's another try:



I think the main reason for the failure of AI is that no existing approach 
is derived from a theoretically consistent definition of intelligence. Some, 
such as Algorithmic Information Theory, are close but not close enough.
Scalable (general) intelligence must recursively self-improve: continuously 
develop new mechanisms.


These mechanisms must be selected according to a universal criterion, which 
can only be derived from a functional definition of intelligence.


I define intelligence as an ability to predict/plan by discovering & 
projecting patterns within an input flow.
For an excellent high-level discussion see "On Intelligence" by Jeff Hawkins.


We know of one mechanism that did produce an intelligence, although a pretty
messed-up one: evolution. Initially algorithmically very simple,
evolution changes heritable traits at random & evaluates results for 
reproductive fitness.
But biological evolution is ludicrously inefficient because intelligence is 
only one element of reproductive fitness, & selection is extremely 
coarse-grained: on the level of a whole genome rather than of individual 
traits.


From my definition, a fitness function specific to intelligence is 
predictive correspondence of the memory. Correspondence is a 
representational analog of reproduction, maximized by an internalized 
evolution:


- the heritable traits for evolving predictions are past inputs, &

- the fitness is their cumulative match to the following inputs.



Match (fitness) should be quantified on the lowest level of comparison; this
makes selection more incremental & efficient. The lowest level is comparison
between two single-variable inputs, & the match is a partial identity: the
complement of the difference, or the smaller of the two variables. This is
also a measure of analog compression: a sum of bitwise AND between
uncompressed comparands (represented by strings of ones).
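
To make that concrete, here is a toy illustration of the comparison I have in
mind (my own notation, not a full specification): for two magnitudes a & b,
match = min(a, b) and miss = |a - b|, and the unary/bitwise-AND reading gives
the same number.

# Toy illustration of the lowest-level comparison: match is the shared
# part of two magnitudes, miss is the leftover difference.
def compare(a, b):
    match = min(a, b)   # partial identity / complement of the difference
    miss = abs(a - b)   # the non-overlapping remainder
    return match, miss

a, b = 5, 3
print(compare(a, b))    # (3, 2)

# Unary view: 5 -> "11111", 3 -> "11100" (padded); the bitwise AND has 3 ones,
# i.e. the same value as min(a, b) -- the "analog compression" measure.
unary_a, unary_b = "1" * a, ("1" * b).ljust(a, "0")
overlap = sum(1 for x, y in zip(unary_a, unary_b) if x == "1" and y == "1")
print(overlap)          # 3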


To speed up the search, the algorithm must incorporate increasingly complex
shortcuts to discover better predictions (speed is what it's all about;
otherwise we could just sit back & let biological evolution do the job).


These more complex predictions (patterns) & pattern discovery methods are 
derived from the past inputs of increasing comparison range & order: 
derivation depth.



The most basic shortcuts are based on the assumption that the environment is 
not random:

- Input patterns are decreasingly predictive with distance.
- A pattern is increasingly predictive with its accumulated match, &
decreasingly so with the difference between its constituent inputs.


A core algorithm based on these assumptions would be an iterative step that 
selectively increases range & complexity of the patterns in proportion to 
their projected cumulative match:


The original inputs are single variables produced by senses, such as pixels 
of visual perception.
Their subsequent comparison by iterative subtraction adds new variable 
types: length & aggregate value for both partial match & miss (derivatives) 
for each variable of the comparands. The inputs are integrated into patterns 
(higher-level inputs) if the additional projected match is greater than the 
system's average for the computational resources necessary to record & 
compare additional syntactic complexity. Each variable of thus-formed 
patterns is compared on a higher level of search & can form its own pattern.


On the other hand, if the predictive value (projected match) falls below the
system's average, the input pattern is aggregated with adjacent
"subcritical" patterns by iterative addition, into a lower-resolution input.
Aggregation results in a "fractional" projection range for constituent
inputs, as opposed to a "multiple" range for matching inputs. By increasing
the magnitude of the input, aggregation increases its projected match: a
subset of the magnitude. Aggregation also produces the averages used to
determine the resolution of future inputs & evaluate their matches.
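
A very rough toy reading of these two operations over a 1-D input row (an
arbitrary threshold stands in for the system's average; this is only meant to
illustrate the compare-vs-aggregate branching, not the full algorithm):

# Toy sketch: scan a row of single-variable inputs (e.g. pixels), extend a
# span while consecutive inputs keep matching, otherwise close the span as
# either a pattern (predictive enough) or an aggregate (subcritical, summed
# into a lower-resolution input).
def process_row(inputs, threshold):
    patterns, aggregated = [], []
    span, span_match = [inputs[0]], 0

    def close(span, span_match):
        record = {"inputs": span, "match": span_match, "sum": sum(span)}
        (patterns if span_match >= threshold else aggregated).append(record)

    for prev, cur in zip(inputs, inputs[1:]):
        match, miss = min(prev, cur), abs(prev - cur)  # iterative subtraction
        if match - miss >= threshold:   # projected match worth the syntax cost
            span.append(cur)
            span_match += match
        else:
            close(span, span_match)     # iterative addition for weak spans
            span, span_match = [cur], 0
    close(span, span_match)
    return patterns, aggregated

print(process_row([5, 5, 6, 1, 9, 9, 8], threshold=3))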


So, the alternative integrated/aggregated representations of inputs are 
produced by iterative subtraction/addition (the neural analogs are 
inhibition & excitation), both determined by comparison among the respective 
inputs. It's a kind of evolution where neither traits nor their change are 
really produced at random. The inputs are inherently predictive on
average by virtue of their proximity, & the change is introduced either
by new inputs (proximity update), or as incremental syntax of the old 
inputs, produced by their individual predictiveness evaluation: comparison, 
selectively incremental in distance & derivation.


The biggest hangup people usually have is that this kind of algorithm is 
obviously very simple, while working intelligence is obviously very complex. 
But, as I tried to explain, additional complexity should only improve speed,
rather than change the "direction" of cognition (although it may save a
few zillion years).
The main requirement for suc

Re: [agi] Logical Satisfiability...Get used to it.

2008-03-29 Thread Vladimir Nesov
Jim,

Could you keep P=NP discussion off this list? There are plenty of
powerful SAT solvers already, so if there is a path towards AGI that
needs a SAT solver, they can be used in at least small-scale
prototypes, and thus the absence of a scalable SAT solver is not a
bottleneck at the moment. P=NP can have profound implications for other
issues, but it's hardly specifically relevant for AGI. If your
interest lies in AI, P=NP is not the way, and if your interest lies in
P=NP, AGI is irrelevant.
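
(For what it's worth, plugging an off-the-shelf solver into a prototype is
roughly this much work -- a sketch assuming a MiniSat-style binary on PATH
that reads the standard DIMACS CNF format; check your solver's docs for the
exact invocation:)

# Sketch: write a formula in DIMACS CNF and hand it to an external solver.
import subprocess
import tempfile

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
num_vars = 3

with tempfile.NamedTemporaryFile("w", suffix=".cnf", delete=False) as f:
    f.write("p cnf %d %d\n" % (num_vars, len(clauses)))
    for clause in clauses:
        f.write(" ".join(str(lit) for lit in clause) + " 0\n")
    cnf_path = f.name

result_path = cnf_path + ".out"
subprocess.run(["minisat", cnf_path, result_path])

with open(result_path) as f:
    print(f.read())   # "SAT" plus a model, or "UNSAT"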

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Novamente's next 15 minutes of fame...

2008-03-29 Thread Jim Bromer
It sounds interesting.  Can anyone go and try it, or does it cost money or
something?  Is it set up already?
Jim Bromer

On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "If men cease to believe that they will one day become gods then they
> will surely become worms."
> -- Henry Miller
>



Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Jim Bromer
On Sat, Mar 29, 2008 at 8:58 AM, Mike Tintner <[EMAIL PROTECTED]>
wrote:

>
> Robert/Ben:. In fact. I would suggest that AGI researchers start to
> distinguish
> >> themselves from narrow AGI by replacing the over ambiguous concepts from
> >> AI,
> >> one by one. For example:
> >>
> >> knowledge representation = world model.
> >> learning = world model creation
> >> reasoning = world model simulation
> >> goal = life goal (to indicate that we have the ambition of building
> >> something really alive)
> >> If we say something like "world model creation", it seems pretty obvious
> >> that we do not mean anything like just tweaking a few bits in some
> >> function.
> >
> > Yet, those terms are used for quite shallow things in many Good Old
> > Fashioned
> > robotics architectures ;-)
> >
>
> IMO there is one key & in fact crucial distinction between AI & AGI - which
> hinges on "adaptivity".
>
> An AI program has "special(ised) adaptivity" - can adapt its actions but only
> within a known domain
>
> An AGI has "general adaptivity"- can also adapt its actions to deal with
> unknown, unfamiliar domains.
>
>
>
The distinction in terms is not generally recognized.  Most AI programs do
not show a wide range of adaptivity of learning.  However, most of us who
are interested in the field believe that there will be more achievements in
the future.  The term AGI is used in this group to single out the general
adaptivity that you mentioned, which would be required for general
artificial intelligence; the term AI, by contrast, is an inclusive term with
different meanings, but it definitely includes the future of AI research and
general AI.

The way you expressed 'general adaptivity' is interesting.  People only have
a constrained ability to learn, just as computers do, but obviously they can
learn in ways that computers cannot.  But there is ample evidence that AI
programming is improving.

So the issue is not just general adaptivity but the range of adaptivity, or
the ranges of different kinds of adaptivity.  The reason I am making this
point is that by examining the problem with a little more precision, or
at least differentiation, some of the more obscure issues may eventually be
revealed.

Jim Bromer



General vs. narrow AI (was: [agi] Instead of an AGI Textbook)

2008-03-29 Thread Vladimir Nesov
On Sat, Mar 29, 2008 at 3:58 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  IMO there is one key & in fact crucial distinction between AI & AGI - which
>  hinges on "adaptivity".
>
>  An AI program has "special(ised) adaptivity" -can adapt its actions but only
>  within a known domain
>
>  An AGI has "general adaptivity"- can also adapt its actions to deal with
>  unknown, unfamiliar domains.
>

There is no "general adaptivity" - all learning algorithms are
constrained to efficient learning only in narrow domains. The problem
with narrow AI systems is that their performance either relies on
people manually designing learning algorithms which work fine on given
narrow domains, or requires insanely much data to learn things with
less biased algorithms. In first case each new problem requires a
human in the loop and months of thinking about the problem, in effect
human acquires information about the target domain using his
intelligence and then encodes this information in parameterized form,
so that it can then be tweaked a little to solve a "last mile problem"
of adapting to particular features of target domain that are hard to
encode manually or are different in each case. In second case
algorithm is terrible at learning in target domain, but when you have
a whole Internet of data it doesn't necessarily look like that,
considering that you don't have to tweak the algorithm to the problem.

Making a general AI learn requires understanding the general-AI
domain, and this general-AI domain is not "more general" than those of
some narrow AIs. Probability is conserved: general AI is necessarily
restricted in learning ability. So, while an ability to represent an
arbitrary algorithm would be nice (and is not present in many
popular machine learning algorithms), it doesn't mean that the system will
guess any algorithm given only partial data about it. It might need to
actually look at it. But it does need to be able to guess the kind of
information that people are good at guessing, and the core of the problem
is what exactly it is that we learn and infer from an incomplete
description, as opposed to memorizing it when presented in whole.


Here's a paper that may give some intuition about this issue:

http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf
Yoshua Bengio, Yann LeCun. 2007. Scaling learning algorithms towards AI.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Mike Tintner


Robert/Ben:. In fact. I would suggest that AGI researchers start to 
distinguish
themselves from narrow AGI by replacing the over ambiguous concepts from 
AI,

one by one. For example:

knowledge representation = world model.
learning = world model creation
reasoning = world model simulation
goal = life goal (to indicate that we have the ambition of building
something really alive)
If we say something like "world model creation", it seems pretty obvious
that we do not mean anything like just tweaking a few bits in some 
function.


Yet, those terms are used for quite shallow things in many Good Old 
Fashioned

robotics architectures ;-)



IMO there is one key & in fact crucial distinction between AI & AGI - which 
hinges on "adaptivity".


An AI program has "special(ised) adaptivity" -can adapt its actions but only 
within a known domain


An AGI has "general adaptivity"- can also adapt its actions to deal with 
unknown, unfamiliar domains. 





Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Robert Wensman
Hmm... well, at least using words related to robotics gives a flavour of
embodiment :-).

Anyhow, I still prefer sharing terminology with robotics, as opposed to
narrow AI. Narrow AI and AGI are perhaps closer, so the risk of confusion is
bigger.

/R


2008/3/29, Ben Goertzel <[EMAIL PROTECTED]>:
>
> > 4. In fact. I would suggest that AGI researchers start to distinguish
> > themselves from narrow AGI by replacing the over ambiguous concepts from
> > AI,
> > one by one. For example:
> >
> > knowledge representation = world model.
> > learning = world model creation
> > reasoning = world model simulation
> > goal = life goal (to indicate that we have the ambition of building
> > something really alive)
> > If we say something like "world model creation", it seems pretty obvious
> > that we do not mean anything like just tweaking a few bits in some
> > function.
>
> Yet, those terms are used for quite shallow things in many Good Old
> Fashioned
> robotics architectures ;-)
>
> ben
>
