Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
Hello Boris, and welcome to the list.

I didn't understand your algorithm; you use many terms that you didn't
define. It would probably be clearer if you used some kind of
pseudocode and systematically described all the procedures involved. But I
think the more fundamental questions that need clarifying won't depend on
these.

What is it that your system tries to predict? Does it predict only
specific terminal inputs, values on the ends of its sensors? Or
something else? When does prediction occur?

What is this prediction for? How does it help? How does the system use
it? What use is this ability to us?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Boris Kazachenko

> Hello Boris, and welcome to the list.


Thanks Vladimir, I actually posted a few times a while back.
I don't do it often because of the "mindset" problem I mentioned in my blog
:).

http://scalable-intelligence.blogspot.com/


> I didn't understand your algorithm; you use many terms that you didn't
> define. It would probably be clearer if you used some kind of
> pseudocode and systematically described all the procedures involved. But I
> think the more fundamental questions that need clarifying won't depend on
> these.


Right, "fundamental" issues first. That's the only kind I deal with anyway.
I actually do define terms a great deal, but there's always a "context" 
problem.
General means decontextualized, the examples here are useless, & people have 
hard time thinking without them.



> What is it that your system tries to predict? Does it predict only
> specific terminal inputs, values on the ends of its sensors? Or
> something else? When does prediction occur?
> What is this prediction for? How does it help? How does the system use
> it? What use is this ability to us?


The system builds a hierarchy of generalization from discrete sensory
inputs: pixels, initially in one dimension, form the bottom of the hierarchy.
Everything we know (except for math) was originally derived from the senses, &
that's by far the easiest place to start anyway.
Prediction (projection) occurs on every level, with corresponding range &
complexity.
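
A minimal sketch of what such a hierarchy could look like, assuming
one-dimensional pixel input; the pairwise averaging and the "extrapolate the
last difference" prediction rule are illustrative stand-ins of mine, not
Boris's actual algorithm:

def build_hierarchy(pixels, n_levels=3):
    levels = [list(pixels)]
    for _ in range(n_levels - 1):
        prev = levels[-1]
        # Each higher level summarizes pairs of lower-level values,
        # so its range doubles while its inputs grow more abstract.
        levels.append([(a + b) / 2 for a, b in zip(prev[::2], prev[1::2])])
    return levels

def project(level):
    # Prediction (projection) at one level: extrapolate the last
    # first-order difference one step ahead.
    if len(level) < 2:
        return level[-1] if level else None
    return level[-1] + (level[-1] - level[-2])

pixels = [10, 12, 15, 15, 16, 18, 21, 25]
for depth, level in enumerate(build_hierarchy(pixels)):
    print("level", depth, "range", 2 ** depth, "px, prediction", project(level))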

What for? What is intelligence for? That'd be a long list.

Boris. 





Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 5:12 PM, Boris Kazachenko <[EMAIL PROTECTED]> wrote:
>
>  > What is it that your system tries to predict? Does it predict only
>  > specific terminal inputs, values on the ends of its sensors? Or
>  > something else? When does prediction occur?
>  > What is this prediction for? How does it help? How does the system use
>  > it? What use is this ability to us?
>
>  The system builds a hierarchy of generalization from discrete sensory
>  inputs: pixels, initially in one dimension, form the bottom of the hierarchy.
>  Everything we know (except for math) was originally derived from the senses, &
>  that's by far the easiest place to start anyway.
>  Prediction (projection) occurs on every level, with corresponding range &
>  complexity.

I'm going to attack you with questions again :-)

What are 'range' and 'complexity'? Is there a specific architecture of
'levels'? Why should higher-level concepts be multidimensional?

What are the dynamics of the system's operation in time? Is inference
feed-forward and 'instantaneous', as measured by an external clock? Can it
capture time series?

By 'what is prediction for?' I mean its connection to action. How does
prediction of inputs or features of inputs translate into action? If
this prediction activity doesn't lead to any result, it may as well be
absent.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
It seems like a reasonable and not uncommon idea that an AI could be built as a
mostly-hierarchical autoassociative memory.  As you point out, it's not so
different from Hawkins's ideas.  Neighboring "pixels" will correlate in space
and time; "features" such as edges should become principal components given
enough data, and so on.  There is a good deal of work on self-organizing the
early visual system like this.
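
To make the principal-components point concrete, here is a small sketch
assuming synthetic patches that are given neighbor correlation by smoothing;
on real natural-image patches the leading eigenvectors of the patch
covariance come out edge- and gradient-like. All names and parameters here
are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_patches, k = 10000, 8                       # 8x8 pixel patches
patches = rng.normal(size=(n_patches, k, k))
# Impose neighbor correlation by averaging each pixel with its neighbors.
patches = (patches + np.roll(patches, 1, axis=1) + np.roll(patches, 1, axis=2)) / 3
patches = patches.reshape(n_patches, -1)
patches -= patches.mean(axis=0)

# Principal components of the patch covariance; with spatially correlated
# data the variance concentrates in a few leading components.
cov = patches.T @ patches / n_patches
eigvals, eigvecs = np.linalg.eigh(cov)
print("top component's share of variance:", eigvals[-1] / eigvals.sum())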
 
That overall concept doesn't get you very far though; the trick is to make it 
work past the first few rather obvious feature extraction stages of sensory 
data, and to account for things like episodic memory, language use, 
goal-directed behavior, and all other cognitive activity that is not just 
statistical categorization.
 
I sympathize with your approach and wish you luck.  If you think you have
something that will produce more than Hawkins has with his HTM, please explain
it with enough precision that we can understand the details.
 



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.

But it should be quite clear that such methods could eventually be very
handy for AGI. For example, many of you would agree that a reliable,
computationally affordable solution to vision is a crucial factor for AGI:
much of the world's information, even on the internet, is encoded as
audiovisual data. Extracting (sub)symbolic semantics from these
sources would open a world of learning data to symbolic systems.

An audiovisual perception layer generates semantic interpretation on the
(sub)symbolic level. How could a symbolic engine ever reason about the real
world without access to such information?

Vision may be classified under "Narrow" AI, but I reckon that an AGI can
never understand our physical world without a reliable perceptual system.
Therefore, perception is essential for any AGI reasoning about physical
entities!

Greets, Durk



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 7:23 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> Vision may be classified under "Narrow" AI, but I reckon that an AGI can
> never understand our physical world without a reliable perceptual system.
> Therefore, perception is essential for any AGI reasoning about physical
> entities!
>

At this point I think that although vision doesn't seem absolutely
necessary, it may as well be implemented, if it can run on the same
substrate as everything else (it probably can). It may prove to be a
good playground for prototyping. If it's implemented with a moving fovea
(which is essentially what LeCun's hack is about) and relies on
selective attention (so that only the gist of the scene is perceived, read:
supported on higher levels), it shouldn't require insanely many
resources compared to the rest of the reasoning engine. Alas, in this
picture I give up my previous assessment (of a few months back) that
reasoning can be implemented efficiently, so that only a few active
concepts need to figure into computation on each tick. In my current
model all concepts compute all the time...
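
A toy rendering of the moving-fovea idea, assuming one attention point per
step: keep a small high-resolution window at the point of attention plus a
coarse gist of the periphery, rather than processing the whole frame. The
function and its parameters are my illustration, not LeCun's actual scheme:

import numpy as np

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)

def foveate(frame, cx, cy, size=8, gist_factor=8):
    half = size // 2
    fovea = frame[cy - half:cy + half, cx - half:cx + half]  # sharp center
    gist = frame[::gist_factor, ::gist_factor]               # low-res periphery
    return fovea, gist

fovea, gist = foveate(frame, cx=32, cy=32)
print(fovea.shape, gist.shape)  # two 8x8 arrays instead of one 64x64 frame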

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
[EMAIL PROTECTED] writes:
 
> But it should be quite clear that such methods could eventually be very handy 
> for AGI.
 
I agree with your post 100%; this type of approach is the most interesting
AGI-related stuff to me.
 
> An audiovisual perception layer generates semantic interpretation on the
> (sub)symbolic level. How could a symbolic engine ever reason about the real
> world without access to such information?
 
Even more interesting:  How could a symbolic engine ever reason about the real 
world *with* access to such information? :)



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Although I sympathize with some of Hawkins's general ideas about unsupervised
> learning, his current HTM framework is unimpressive in comparison with
> state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional nets
> and the promising low-entropy coding variants.
>
> But it should be quite clear that such methods could eventually be very handy
> for AGI. For example, many of you would agree that a reliable, computationally
> affordable solution to Vision is a crucial factor for AGI: much of the world's
> information, even on the internet, is encoded in audiovisual information.
> Extracting (sub)symbolic semantics from these sources would open a world of
> learning data to symbolic systems.
>
> An audiovisual perception layer generates semantic interpretation on the
> (sub)symbolic level. How could a symbolic engine ever reason about the real
> world without access to such information?

So a deafblind person couldn't reason about the real world? Put ear
muffs and a blindfold on and see what you can figure out about the world
around you. Less, certainly; but then you could figure out more about
the world if you had a magnetic sense like pigeons.

Intelligence is not about the modalities of the data you get; it is
about what you do with the data you do get.

All of the data on the web is encoded in electronic form; it is only
because of our comfort with incoming photons and phonons that it is
translated to video and sound. This fascination with A/V is useful,
but does not help us figure out the core issues that are holding us up
whilst trying to create AGI.

  Will Pearson



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Stephen Reed
Derek: How could a symbolic engine ever reason about the real world *with* 
access to such information? 

I hope my work eventually demonstrates a solution to your satisfaction.  In the
meantime there is evidence from robotics, specifically driverless cars, that
real-world sensor input can be sufficiently combined and abstracted for use by
symbolic route planners.
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mike Tintner
Durk,

Absolutely right about the need for what is essentially an imaginative level of 
mind. But wrong in thinking:

"Vision may be classified under "Narrow" AI"

You seem to be treating this extra "audiovisual perception layer" as a purely
passive layer. The latest psychology & philosophy recognize that this is in
fact a level of very active thought and intelligence. And our culture is only
starting to understand imaginative thought generally.

Just to begin reorienting your thinking here, I suggest you consider how much
time people spend on audiovisual information (esp. TV) vs purely symbolic
information (books).  And allow for how much, and how rapidly, even academic
thinking is going audiovisual.

Know of anyone trying to give computers that extra layer? I saw some vague
reference about this recently, of which I have only a confused memory.




RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn

Stephen Reed writes:
 
>> How could a symbolic engine ever reason about the real world *with* access 
>> to such information? 
 
> I hope my work eventually demonstrates a solution to your satisfaction.  
 
Me too!
 
> In the meantime there is evidence from robotics, specifically driverless
> cars, that real-world sensor input can be sufficiently combined and
> abstracted for use by symbolic route planners.
 
True enough, that is one answer:  "by hand-crafting the symbols and the 
mechanics for instantiating them from subsymbolic structures".  We of course 
hope for better than this but perhaps generalizing these working systems is a 
practical approach.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Mike, you seem to have misinterpreted my statement. Perception is certainly
not 'passive': it can be described as active inference using a (mostly
actively) learned world model. Inference is done on many levels, and can
integrate information from various abstraction levels, so I don't see it as
an isolated layer.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>
> On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> > An audiovisual perception layer generates semantic interpretation on the
> > (sub)symbolic level. How could a symbolic engine ever reason about the real
> > world without access to such information?
>
> So a deafblind person couldn't reason about the real world? Put ear
> muffs and a blindfold on and see what you can figure out about the world
> around you. Less, certainly; but then you could figure out more about
> the world if you had a magnetic sense like pigeons.
>
> Intelligence is not about the modalities of the data you get; it is
> about what you do with the data you do get.
>
> All of the data on the web is encoded in electronic form; it is only
> because of our comfort with incoming photons and phonons that it is
> translated to video and sound. This fascination with A/V is useful,
> but does not help us figure out the core issues that are holding us up
> whilst trying to create AGI.
>
>  Will Pearson

Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to define the limits of the world model one can
build.

If I put on ear muffs and a blindfold right now, I can still reason
quite well using touch, since I have access to a world model built
using e.g. vision. If you had been deafblind and paralysed since
birth, would you have any possibility of spatial reasoning? No, maybe
except for some extremely crude genetically coded heuristics.

Sure, you could argue that an intelligence purely based on text,
disconnected from the physical world, could be intelligent, but it
would have a very hard time reasoning about the interaction of entities in
the physical world. It would be unable to understand humans in many
respects: I wouldn't call that generally intelligent.

Perception is about learning and using a model of our physical
world. Input is often high-bandwidth, while output is often
low-bandwidth and useful for high-level processing (e.g. reasoning and
memory). Luckily, efficient methods are arising, so I'm quite
optimistic about progress towards this aspect of intelligence.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 10:16 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
>  Intelligence is not *only* about the modalities of the data you get,
>  but modalities are certainly important. A deafblind person can still
>  learn a lot about the world with taste, smell, and touch, but the
>  senses one has access to define the limits of the world model one can
>  build.
>

One of the requirements that I try to satisfy with my design is the
ability to equivalently perceive information encoded by seemingly
incompatible modalities. For example, a visual stream can be encoded
using a set of pairs <tag, color>, where tags are unique labels that
correspond to positions of pixels. This set of pairs can be shuffled
and supplied as serial input (where tags and colors are encoded as
binary words of activation), and the system must be able to reconstruct
a representation as good as that supplied by naturally arranged video
input. Of course, getting to that point requires careful incremental
teaching, but after that there should be no real difference (aside
from bandwidth, of course).
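
A sketch of that encoding, assuming a tiny 4x4 image: each pixel becomes a
<tag, color> pair, the pairs are shuffled and fed serially, and the receiver
recovers the spatial arrangement from the tags alone. The dict-based
reconstruction here is my illustration, not the system's learning process:

import random

width, height = 4, 4
image = [[(x + y) % 2 for x in range(width)] for y in range(height)]

# Encode: tag each pixel with a unique label for its position.
pairs = [((x, y), image[y][x]) for y in range(height) for x in range(width)]
random.shuffle(pairs)  # the serial order carries no spatial information

# Decode: the tags alone suffice to reassemble the image.
decoded = [[None] * width for _ in range(height)]
for (x, y), color in pairs:
    decoded[y][x] = color
assert decoded == image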

It might be useful to look at all concepts as 'modalities': you can
'see' your thoughts; when you know a certain theory, you can 'see' how
it's applied, how its parts interact, what the obvious conclusions are.
Prewiring sensory input in a certain way merely pushes learning in a
certain direction, just like inbuilt drives bias action in theirs.

This way, for example, it should be possible to teach a 'modality' for
understanding simple graphs encoded as text, so that on one hand
text-based input is sufficient, and on the other hand the system
effectively perceives simple vector graphics. This trick can be used
to explain spatial concepts from natural language. But, again, a video
camera might be a simpler and more powerful way to the same end, even
if visual processing is severely limited.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Vladimir, I agree with you on many issues, but...

On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>  This way, for example, it should be possible to teach a 'modality' for
>  understanding simple graphs encoded as text, so that on one hand
>  text-based input is sufficient, and on the other hand the system
>  effectively perceives simple vector graphics. This trick can be used
>  to explain spatial concepts from natural language. But, again, a video
>  camera might be a simpler and more powerful way to the same end, even
>  if visual processing is severely limited.

Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately, the physical world is
not made of vector graphics, so reducing the physical world to vector
graphics is quite lossy (and computationally expensive in itself). I'm
not sure whether you're assuming that vector graphics is very useful
for AGI, but I would disagree.

> Prewiring sensory input in a certain way merely pushes learning in
> certain direction, just like inbuilt drives bias action in theirs.

Who said perception needs to be prewired? Perception should be made
efficient by exploiting statistical regularities in the data, not by
assuming them per se. Regularities in the data (captured by your world
model) should tell you where to focus your attention *most* of the
time, not *all* the time ;)



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
>  Vector graphics can indeed be communicated to an AGI by relatively
>  low-bandwidth textual input. But, unfortunately, the physical world is
>  not made of vector graphics, so reducing the physical world to vector
>  graphics is quite lossy (and computationally expensive in itself). I'm
>  not sure whether you're assuming that vector graphics is very useful
>  for AGI, but I would disagree.
>

I was referring to manually providing explanations in a conversational
format. Of course it's lossy, but whether it's lossy compared to the real
world is not the issue; what matters is how it compares to the 'gist'
scheme that we extract from full vision, and that gist is clearly not much.
Vision allows attending to any of a huge number of details present in
the input, but only a few details are seen at a time. When a
specific issue needs a spatial explanation, it can be carried out by
explicitly specifying its structure in vector graphics.

>
>  > Prewiring sensory input in a certain way merely pushes learning in
>  > certain direction, just like inbuilt drives bias action in theirs.
>
>  Who said perception needs to be prewired? Perception should be made
>  efficient by exploiting statistical regularities in the data, not by
>  assuming them per se. Regularities in the data (captured by your world
>  model) should tell you where to focus your attention *most* of the
>  time, not *all* the time ;)
>

By prewiring I meant a trivial level, like routing signals from the
retina to certain places in the brain, suggesting from the start that
nearby pixels on the retina are close together, and making the temporal
synchrony of signals approximately the same as in the image on the
retina. Bad prewiring would consist in sticking signals from pixels on
the retina to random parts of the brain, with random delays. It would
take much more effort to acquire good visual perception in this case
(and it would be impossible on brain wetware).
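
A toy illustration of that contrast, assuming a one-dimensional "retina": a
topographic map keeps neighboring pixels adjacent, while a scrambled map
destroys the adjacency and leaves it to be relearned. Entirely my example:

import random

n = 32
retina = list(range(n))                        # pixel indices
topographic = {i: i for i in retina}           # neighbors stay neighbors
scrambled = dict(zip(retina, random.sample(retina, n)))

def neighbors_preserved(wiring):
    return sum(abs(wiring[i] - wiring[i + 1]) == 1 for i in range(n - 1))

print("topographic:", neighbors_preserved(topographic), "of", n - 1)
print("scrambled:  ", neighbors_preserved(scrambled), "of", n - 1)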

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Alright, agreed with all you say. If I understood correctly, your
system (at the moment) assumes scene descriptions at a level higher
than pixels, but certainly lower than objects. An application of such a
system would seem to be a simulated, virtual world where such descriptions
are at hand... Is this indeed the direction you're going?

Greets,
Durk



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Vladimir Nesov
On Mon, Mar 31, 2008 at 12:21 AM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Alright, agreed with all you say. If I understood correctly, your
>  system (at the moment) assumes scene descriptions at a level higher
>  than pixels, but certainly lower than objects. An application of such a
>  system would seem to be a simulated, virtual world where such descriptions are
>  at hand... Is this indeed the direction you're going?

I'm far from dealing with high-level stuff, so it exists only in the design.
The vector graphics I talked about was supposed to be provided
manually by a human. For example, it can be part of an explanation of
what the word 'between' is about. Alternatively, a kind of sketchpad can be
used. My point is that a 'modality' seems to be a learnable thing that
can be stimulated not only by direct sensory input, but also by
learned inferences coming from completely different modalities.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser
> True enough, that is one answer:  "by hand-crafting the symbols and the 
> mechanics for instantiating them from subsymbolic structures".  We of course 
> hope for better than this but perhaps generalizing these working systems is a 
> practical approach.

Um.  That is what is known as the grounding problem.  I'm sure that Richard 
Loosemore would be more than happy to send references explaining why this is 
not productive.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: "Kingma, D.P." <[EMAIL PROTECTED]>

> Sure, you could argue that an intelligence purely based on text,
> disconnected from the physical world, could be intelligent, but it
> would have a very hard time reasoning about the interaction of entities in
> the physical world. It would be unable to understand humans in many
> respects: I wouldn't call that generally intelligent.


Given sufficient bandwidth, why would it have a hard time reasoning about
the interaction of entities?  You could describe vision down to the pixel,
hearing down to the pitch and decibel, touch down to the sensation, etc., and
the system could internally convert it to exactly what a human feels.  You
could explain to it all the known theories of psychology and give it the
personal interactions of billions of people.  Sure, that's a huge amount of
bandwidth, but it proves that your statement is inaccurate.






Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: "Kingma, D.P." <[EMAIL PROTECTED]>

> Vector graphics can indeed be communicated to an AGI by relatively
> low-bandwidth textual input. But, unfortunately, the physical world is
> not made of vector graphics, so reducing the physical world to vector
> graphics is quite lossy (and computationally expensive in itself).


Huh?  Intelligence is based upon lossiness, and the ability to lose rarely
relevant (probably incorrect) outlier information is frequently the key to
making problems tractable (though it can also set you up for failure when
you miss a phase transition by mistaking it for just an odd outlier :-)
since it forms the basis of discovery by analogy.


Matt Mahoney's failure to recognize this has him trapped in *exact*
compression hell. ;-)



> Who said perception needs to be prewired? Perception should be made
> efficient by exploiting statistical regularities in the data, not by
> assuming them per se. Regularities in the data (captured by your world
> model) should tell you where to focus your attention *most* of the
> time, not *all* the time ;)


Which is the correct answer to the "grounding problem".  Thank you. 





RE: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Derek Zahn
Mark Waser writes:
 
>> True enough, that is one answer:  "by hand-crafting the symbols and
>> the mechanics for instantiating them from subsymbolic structures".  We of
>> course hope for better than this but perhaps generalizing these working
>> systems is a practical approach.

> Um.  That is what is known as the grounding problem.  I'm sure that
> Richard Loosemore would be more than happy to send references explaining
> why this is not productive.
 
It's not the grounding problem.  The symbols crashing around in these
robotic systems are very well grounded.

The problem is that these systems are narrow, not that they manipulate
ungrounded symbols.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser
The symbols "crashing around" are well-grounded but then you *disconnect* them 
from their grounding with your manipulation.

A system needs to build its own mental structure from the bottom up -- not
have it imposed from above by an entity with questionable congruence.


Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread William Pearson
On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:

> Intelligence is not *only* about the modalities of the data you get,
>  but modalities are certainly important. A deafblind person can still
>  learn a lot about the world with taste, smell, and touch, but the
>  senses one has access to define the limits of the world model one can
>  build.

As long as you have one high-bandwidth modality, you should be able to
add on technological gizmos to convert information to that modality,
and thus be able to model phenomena from that part of the world.

Humans manage to convert between modalities, e.g. using touch on the tongue:

http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/

We don't do it very well, but that is mainly because we don't have to
do it very often.

AIs that are designed to have new modalities added to them, using the
major modality of their memory space + interrupts (or another
computational modality), may be even more flexible than humans, and
able to adapt to a new modality as quickly as a current computer
is able to add a new device.
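
A sketch of that pluggable-modality design, assuming a common internal event
format of tagged activations; the Modality protocol and all names here are my
illustration, not an existing API:

from typing import Iterable, Protocol

class Modality(Protocol):
    name: str
    def read(self) -> Iterable[tuple[str, float]]:
        ...

class Sonar:
    name = "sonar"
    def read(self):
        yield ("range_cm", 142.0)

class Radar:
    name = "radar"
    def read(self):
        yield ("bearing_deg", 31.5)

def perceive(modalities: list[Modality]) -> None:
    # The core only ever sees tagged activations, so a new sensor is
    # "plugged in" much like a new device on a computer.
    for m in modalities:
        for tag, value in m.read():
            print(m.name + "/" + tag, "=", value)

perceive([Sonar(), Radar()])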


>  If I put on ear muffs and a blindfold right now, I can still reason
>  quite well using touch, since I have access to a world model built
>  using e.g. vision. If you had been deafblind and paralysed since
>  birth, would you have any possibility of spatial reasoning? No, maybe
>  except for some extremely crude genetically coded heuristics.

Sure, if you don't get any spatial information you won't be able to
model spatially. But getting the information is different from having
a dedicated modality.  My point was that audiovisual is not the only
way to get spatial information. It may not even be the best way for
what we happen to want to do. So let's not get too hung up on any
specific modality when discussing intelligence.

>  Sure, you could argue that an intelligence purely based on text,
>  disconnected from the physical world, could be intelligent, but it
>  would have a very hard time reasoning about the interaction of entities in
>  the physical world. It would be unable to understand humans in many
>  respects: I wouldn't call that generally intelligent.

I'm not so much interested in this case, but what about the case where
you have a robot with sonar, radar and other sensors, but not the
normal two-cameras-plus-two-microphones setup people imply when they say
audiovisual?

  Will Pearson



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
Okay, by "text" I mean "natural language", in its usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
low-bandwidth to provide sufficient data to learn a sufficient model
of entities embedded in a complex physical world, such as humans.





Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> From: "Kingma, D.P." <[EMAIL PROTECTED]>
>
> > Vector graphics can indeed be communicated to an AGI by relatively
>  > low-bandwidth textual input. But, unfortunately,
>  > the physical world is not made of vector graphics, so reducing the
>  > physical world to vector graphics is quite lossy (and computationally
>  > expensive an sich).
>
>  Huh?  Intelligence is based upon lossiness, and the ability to lose rarely
>  relevant (probably incorrect) outlier information is frequently the key to
>  making problems tractable (though it can also set you up for failure when
>  you miss a phase transition by mistaking it for just an odd outlier :-)
>  since it forms the basis of discovery by analogy.
>
>  Matt Mahoney's failure to recognize this has him trapped in *exact*
>  compression hell. ;-)

Agreed with that; exact compression is not the way to go if you ask
me. But that doesn't mean any lossy method is OK. Converting a scene
to vector graphics leads you to throw away much visual
information early in the process: visual information (e.g. texture)
that might be useful later in the process (e.g. for disambiguation).
I'm not stating that a vector description is not useful: I'm stating that
information is thrown away that could have been used to construct an
essential part of a world model that understands physical entities
down to the level of e.g. textures.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Kingma, D.P.
(Sorry for triple posting...)

On Sun, Mar 30, 2008 at 11:34 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
>
>  > Intelligence is not *only* about the modalities of the data you get,
>  >  but modalities are certainly important. A deafblind person can still
>  >  learn a lot about the world with taste, smell, and touch, but the
>  >  senses one has access to define the limits of the world model one can
>  >  build.
>
>  As long as you have one high-bandwidth modality, you should be able to
>  add on technological gizmos to convert information to that modality,
>  and thus be able to model phenomena from that part of the world.
>
>  Humans manage to convert between modalities, e.g. using touch on the tongue:
>
>  http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/

Nice article. Apparently even the brain's region for perception of
taste is generally adaptable to new input.

>  I'm not so much interested in this case, but what about the case where
>  you have a robot with sonar, radar and other sensors, but not the
>  normal two-cameras-plus-two-microphones setup people imply when they say
>  audiovisual?

That's an interesting case indeed. AGIs equipped with
sonar/radar/ladar instead of 'regular' vision should be perfectly capable
of certain forms of spatial reasoning, but still unable to understand
humans on certain subjects. Still, if you don't need your agents to
completely understand humans, audiovisual senses could go out of the
window. It depends on your agent's goals, I guess.



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: "Kingma, D.P." <[EMAIL PROTECTED]>

> Okay, by "text" I mean "natural language", in its usual
> low-bandwidth form. That should clarify my statement. Any data can be
> represented with text of course, but that's not the point... The point
> that I was trying to make is that natural language is too
> low-bandwidth to provide sufficient data to learn a sufficient model
> of entities embedded in a complex physical world, such as humans.


Ah . . . . but natural language is *NOT* necessarily low-bandwidth.

As humans experience it, with pretty much just a single focus of attention
and only one set of eyes and ears that can only operate so fast -- yes, it
is low-bandwidth.


But what about an intelligence with a hundred or more foci of attention and 
the ability to pull that many Wikipedia pages simultaneously at extremely 
high speed? 





Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mark Waser

From: "Kingma, D.P." <[EMAIL PROTECTED]>

> Agreed with that; exact compression is not the way to go if you ask
> me. But that doesn't mean any lossy method is OK. Converting a scene
> to vector graphics leads you to throw away much visual
> information early in the process: visual information (e.g. texture)
> that might be useful later in the process (e.g. for disambiguation).
> I'm not stating that a vector description is not useful: I'm stating that
> information is thrown away that could have been used to construct an
> essential part of a world model that understands physical entities
> down to the level of e.g. textures.


I would agree completely, except that I would think that there is some way
to include texture in the vector graphics in the same way in which color is
included.





Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Boris Kazachenko

I'm going to attack you with questions again :-)


You're more than welcome to, sorry for being brisk. I did reply about RSS on 
the blog, but for some reason the post never made it through.

I don't know how RSS works, but you can subscribe via bloglines.com.


What are 'range' and 'complexity'? Is there a specific architecture of
'levels'? Why should higher-level concepts be multidimensional?


Levels are defined incrementally; a comparison adds a set of derivatives to 
the syntax (complexity) of the template pattern.
After sufficient accumulation (range) the pattern is evaluated & selectively 
transferred to a higher level, where these derivatives are also compared, 
forming yet another level of complexity.
Complexity generally corresponds to the range of search (& resulting 
projection) because it adds "cost", which must be justified by the 
"benefit": accumulated match (one of the "derivatives").
The levels may differ in dimensionality (we do live in a 4D world) or 
modality integration, but this doesn't have to be designed in; the 
differences can be discovered by the system.
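In rough code (a sketch of my reading only; the names and the single
cost/benefit threshold are simplifications of mine):

def compare(template, new_input):
    # A comparison adds a set of derivatives to the pattern's syntax;
    # here just two: match (overlap) and difference.
    match = min(template, new_input)
    difference = new_input - template
    return match, difference

def process_level(inputs, cost, threshold):
    # Accumulate match over a range of inputs; a pattern whose
    # accumulated match (benefit) justifies its added complexity
    # (cost) is selectively transferred to the next level, where its
    # derivatives are compared in turn.
    accumulated_match = 0
    differences = []
    template = inputs[0]
    for new_input in inputs[1:]:
        match, difference = compare(template, new_input)
        accumulated_match += match
        differences.append(difference)
        template = new_input
    if accumulated_match > cost * threshold:
        return differences  # input range for the next level up
    return []               # pattern not transferred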



What are the dynamics of the system's operation in time? Is inference
feed-forward and 'instantaneous', measured by an external clock? Can
it capture time series?


Temporal, as well as spatial, range of search (& duration of storage) 
increases with the level of generality; the feedback (projection) delay 
increases too.



By 'what prediction is for?' I mean connection to action. How does
prediction of inputs or features of inputs translate into action? If
this prediction activity doesn't lead to any result, it may as well be
absent.


The "intellectual" part of action is planning, which technically is a 
self-prediction.
Prediction is a pattern with adjusted focus: coordinates & resolution, sent 
down the hierarchy the changes act as a motor feedback.
Using the feedback the system will focus on areas of the environment with 
the greatest predictive potential.

To do so, it will eventualy learn to use "tools".
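Schematically (a toy formulation of mine, using prediction error as a
proxy for predictive potential):

def adjust_focus(focus, error_gradient, error_magnitude, gain=0.1):
    # Feedback sent down the hierarchy: shift the focus coordinates
    # toward the region where predictions keep failing, and raise
    # resolution there.
    x, y, resolution = focus
    dx, dy = error_gradient
    return (x + gain * dx,
            y + gain * dy,
            resolution * (1 + gain * error_magnitude))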

Boris, http://scalable-intelligence.blogspot.com/ 





Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Boris Kazachenko

  It seems like a reasonable and not uncommon idea that an AI could be built as 
a mostly-hierarchical autoassociative memory.  As you point out, it's not so 
different from Hawkins's ideas.  Neighboring "pixels" will correlate in space 
and time; "features" such as edges should become principal components given 
enough data, and so on.  There is a bunch of such work on self-organizing the 
early visual system like this.
   That overall concept doesn't get you very far though; the trick is to make 
it work past the first few rather obvious feature-extraction stages of sensory 
data, and to account for things like episodic memory, language use, 
goal-directed behavior, and all the other cognitive activity that is not just 
statistical categorization.
  I sympathize with your approach and wish you luck.  If you think you have 
something that produces more than Hawkins has with his HTM, please explain it 
with enough precision that we can understand the details.

  Good questions.

  I agree with you on Hawkins & HTM, but his main problem is conceptual.
  He seems to be profoundly confused as to what the hierarchy should select 
for: generality or novelty. He nominates both, apparently not realizing that 
they're mutually exclusive. This creates a difficulty in defining a 
quantitative criterion for selection, which is key for my approach. This 
internal inconsistency leads to haphazard hacking in the HTM. For example, he 
starts by comparing 2D frames in a binary fashion, which is pretty perverse for 
an incremental approach. I start from the beginning, by comparing pixels: the 
limit of resolution, & I quantify the degree of match right there, as a 
distinct variable. I also record & compare explicit coordinates & derivatives, 
while he simply junks all that information. His approach doesn't scale because 
it's not consistent & incremental enough.

  I disagree that we need to specifically code episodic memory, language, & 
action; to me these are "emergent properties" (damn, I hate that term :)).

  Boris.   
  http://scalable-intelligence.blogspot.com/



Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-31 Thread a

These are just some controversial tips/inspirations:

Warning: Don't read this if you do not believe that sensory input and AGI go 
together, or if you are skeptical. Just ignore it.


What to detect?
detect irregularities and store them


analysis
complexity
structure
evolution

memorization is about memorizing the gist,
or parts of visual aspects and some
spatial aspects, etc.; there is no
sense in completely memorizing all of it

you do not memorize the locations of the objects.
you memorize the parts or gist





taxonomies classify using subjectively important
parts of an object.

*subjectively significant*

subjectively unimportant qualities are unclassified

complexity is a product of two factor-independent symbols

complexity is the incompatibility between input and output

for example, a field of random black and white dots
is considered nonrandom because our senses
over-abstract the dots as one unit

if the symbols have a constraint relation, they
can be reduced to a simpler structure using
factor analysis

factor analysis analyzes the factors of a trait.
the more ad hoc a trait, the more likely it would
be overridden by a general ability
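e.g. (a standard scikit-learn sketch with synthetic data, just to
illustrate the reduction; not from the notes above):

import numpy as np
from sklearn.decomposition import FactorAnalysis

# two symbols with a constraint relation: both driven by one hidden
# factor plus noise, so they reduce to a simpler one-factor structure
rng = np.random.default_rng(0)
hidden = rng.normal(size=(500, 1))
observed = np.hstack([2.0 * hidden, -1.5 * hidden]) \
           + 0.1 * rng.normal(size=(500, 2))

fa = FactorAnalysis(n_components=1).fit(observed)
print(fa.components_)  # roughly (2.0, -1.5), up to sign: one factor explains both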

evolution prefers traits that generalize
many structures

knowledge cannot be appraised nor ignored.
you must know everything about a topic
before you judge

taxonomies are not holistic

taxonomies are just as leaky as abstractions

efficiency is achieved using minimization
of structures.
if an attribute has changed, the
minimization would be reordered

innovation does that; it is called creative destruction

evolution creates traits that generalize,
which cannot be perfect

risk is the lack of knowledge; risks are minimized by genetic heuristics


languages are taxonomies

there exist "empty taxonomies" which are
contradictions

genetic irregularities

language is a different classification system
than logic; the two are incompatible with each other

this difference produces contradictions and ambiguities

a contradiction is a partial map between languages
an ambiguity is a surjective function

entropy is a subset of complexity:
it is the information required to convert
a surjective function to a bijective function,
avoiding contradiction
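one way to make that quantitative (my reading, assuming the preimages
of each output are equally likely): for a surjective f : X -> Y, the
information needed to pick out the preimage, i.e. to make f invertible, is

    H(X \mid Y) = \sum_{y \in Y} p(y) \log_2 \left| f^{-1}(y) \right|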

the division of knowledge minimizes risk
from lacking critical and revolutionary
knowledge

latency occurs in social interactions;
the fastest way is to think individually

the law of unintended consequences
is the product of appraising
unforeseen knowledge

prohibition would just let users choose an alternative

negative motivations are hard coded, in order
to prevent alternatives

positive motivations are also hard coded, but not to
as high an extent, because alternatives are inevitable
if there are only negative ones

with a human population of 100,000,
it is impossible to hard code all negative motivations.
Therefore, hard coding is replaced by a Pavlovian learning capability

However, the Pavlovian capability requires learning,
which is perhaps risky

Therefore parents would teach them

a type of imitation learning develops

positive motivations are more likely learned
than hard coded

also prominent negative motivations are preserved

distributed systems are like division of labor:
the more labor-intensive a task is, the more likely that an individual
would not count, and the more predictable the labor is. So labor-intensive
industries would have to maximize efficiency, and it is more likely that
an invention would help

thinking collectively produces latency. Thinking collectively
distracts concentration, because humans cannot multitask

the faults of abstraction can be found using factor analysis



to be motivated to learn, you need positive motivations
such as rewarding unexpected things and imitation

learning irregularities

our nonconscious memory is huge

you do not feel pain when sacrificing. laissez-faire is optimal if there 
is infinite preference


most of our daily actions are unconscious

doing something "you dislike doing" is an internal conflict between two 
opposing virtual individuals




you automatically pattern-recognize emotionally fulfilling parts and do
them. that is called motivation

emotion is concentration

exception recognition entails negative priming from subconscious memory

negative priming is an adaptation

boredom and interest are motivation, using negative priming of the common
or boring




Pavlovian learning does not require object detection. it is subconscious,
using implicit occurrences to associate negative stimuli, because
explicit Pavlovian conditioning is blocked by inhibition

Pavlovian learning is automatic association



Pavlovian/unconditioned learning is automatic, requiring no motivational
system
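the standard formalization of that automatic association is the
Rescorla-Wagner update (my addition, not from these notes):

def rescorla_wagner(association, stimulus_present, outcome, rate=0.1):
    # the stimulus's associative strength moves toward the outcome it
    # predicts -- no motivational system, just the update rule
    if stimulus_present:
        association += rate * (outcome - association)
    return association

# repeatedly pairing a tone with shock (outcome = -1.0) drives the
# tone's association toward -1.0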

subconscious learning requires no motivational system
emotion is perpetual concentration

automatic emotion recognizers

subconsciously recognize emotional effects

the emotion recognizers are always searching for emotionally rewarding parts

emotional pattern recognizers

subconscious motor conditioning

emotion-induced selective focus and negative priming

motor learning is streng