These are just some controversial tips/inspirations:
Warning: don't read this if you do not believe that sensory input and AGI go
together, or if you are skeptical. Just ignore it.
What to detect?
detect irregularities and store them (see the sketch below)
analysis
complexity
structure
evolution
memorization is about memories
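A toy illustration of "detect irregularities" in the compression sense (zlib is
only a computable stand-in for Kolmogorov complexity, which is uncomputable;
the example data is made up):

import os
import zlib

def description_length(data: bytes) -> int:
    # Compressed size as a rough, computable proxy for description length.
    return len(zlib.compress(data, 9))

regular = b"abc" * 1000       # 3000 bytes of a period-3 pattern
irregular = os.urandom(3000)  # 3000 bytes that are incompressible (w.h.p.)

print("regular:  ", description_length(regular), "bytes")
print("irregular:", description_length(irregular), "bytes")

The regular string compresses to a few dozen bytes while the random one stays
around its original size; a system that stores what fails to compress is, in
this crude sense, storing the irregularities.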
It seems like a reasonable and not uncommon idea that an AI could be built as
a mostly-hierarchical autoassociative memory. As you point out, it's not so
different from Hawkins's ideas. Neighboring "pixels" will correlate in space
and time; "features" such as edges should become principal components
I'm going to attack you by questions again :-)
You're more than welcome to, sorry for being brisk. I did reply about RSS on
the blog, but for some reason the post never made it through.
I don't know how RSS works, but you can subscribe via bloglines.com.
What are 'range' and 'complexity'? Is ther
From: "Kingma, D.P." <[EMAIL PROTECTED]>
Agreed with that; exact compression is not the way to go if you ask
me. But that doesn't mean any lossy method is OK. Converting a scene
to vector graphics will lead you to throw away much visual
information early in the process: visual information (e.g
From: "Kingma, D.P." <[EMAIL PROTECTED]>
Okay, with "text", I mean "natural language", in it's usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is t
(Sorry for triple posting...)
On Sun, Mar 30, 2008 at 11:34 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
>
> > Intelligence is not *only* about the modalities of the data you get,
> > but modalities are certainly important. A deafblind
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> From: "Kingma, D.P." <[EMAIL PROTECTED]>
>
> > Vector graphics can indeed be communicated to an AGI by relatively
> > low-bandwidth textual input. But, unfortunately,
> > the physical world is not made of vector graphics, s
Okay, with "text", I mean "natural language", in it's usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
low-bandwidth to provide sufficient data
On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Intelligence is not *only* about the modalities of the data you get,
> but modalities are certainly important. A deafblind person can still
> learn a lot about the world with taste, smell, and touch, but the
> senses one has access to def
----- Original Message -----
From: Derek Zahn
To: agi@v2.listbox.com
Sent: Sunday, March 30, 2008 5:13 PM
Subject: RE: [agi] Intelligence: a pattern discovery algorithm of scalable
complexity.
Mark Waser writes:
>> True enough, that is one answer: "by hand-crafting
Mark Waser writes:
>> True enough, that is one answer: "by hand-crafting the symbols and the
>> mechanics for instantiating them from subsymbolic structures". We of
>> course hope for better than this but perhaps generalizing these working
>> systems is a practical approach.
> Um.
From: "Kingma, D.P." <[EMAIL PROTECTED]>
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing the
physical world to vector graphics is quite lossy (and computationally
expensive)
From: "Kingma, D.P." <[EMAIL PROTECTED]>
Sure, you could argue that an intelligence purely based on text,
disconnected from the physical world, could be intelligent, but it
would have a very hard time reasoning about interaction of entities in
the physical world. It would be unable to understand
> True enough, that is one answer: "by hand-crafting the symbols and the
> mechanics for instantiating them from subsymbolic structures". We of course
> hope for better than this but perhaps generalizing these working systems is a
> practical approach.
Um. That is what is known as the grounding problem.
On Mon, Mar 31, 2008 at 12:21 AM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Alright, agreed with all you say. If I understood correctly, your
> system (at the moment) assumes scene descriptions at a level higher
> than pixels, but certainly lower than objects. An application of such
> system see
Alright, agreed with all you say. If I understood correctly, your
system (at the moment) assumes scene descriptions at a level higher
than pixels, but certainly lower than objects. An application of such a
system seems to be a simulated, virtual world where such descriptions are
at hand... Is this indeed
On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> Vector graphics can indeed be communicated to an AGI by relatively
> low-bandwidth textual input. But, unfortunately,
> the physical world is not made of vector graphics, so reducing the
> physical world to vector gra
Vladimir, I agree with you on many issues, but...
On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> This way, for example, it should be possible to teach a 'modality' for
> understanding simple graphs encoded as text, so that on one hand
> text-based input is sufficie
On Sun, Mar 30, 2008 at 10:16 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> Intelligence is not *only* about the modalities of the data you get,
> but modalities are certainly important. A deafblind person can still
> learn a lot about the world with taste, smell, and touch, but the
> senses
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>
> On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> > An audiovisual perception layer generates semantic interpretation on the
> > (sub)symbolic level. How could a symbolic engine ever reason about the real
> > wor
Mike, you seem to have misinterpreted my statement. Perception is certainly
not 'passive', as it can be described as active inference using a (mostly
actively) learned world model. Inference is done on many levels, and could
integrate information from various abstraction levels, so I don't see it a
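A minimal sketch of what "perception as inference on a learned world model"
could look like in the simplest case I can think of (a toy linear Gaussian
model, my own illustration, not Durk's actual proposal): the percept is the
hidden state that best explains the input, found by iteratively reducing
prediction error.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear generative model: sensory input x is caused by a hidden state z.
latent_dim, obs_dim = 4, 16
W = rng.normal(size=(obs_dim, latent_dim))

# The world's actual hidden state and the noisy input it produces.
z_true = rng.normal(size=latent_dim)
x = W @ z_true + 0.05 * rng.normal(size=obs_dim)

# "Perception": iteratively update the belief z to reduce prediction error.
z = np.zeros(latent_dim)
lr = 0.01
for _ in range(1000):
    prediction = W @ z       # top-down prediction of the input
    error = x - prediction   # bottom-up prediction error
    z += lr * (W.T @ error)  # belief update (gradient step on squared error)

print("remaining prediction error:", np.linalg.norm(x - W @ z))

The "active" part would let the system also choose actions (e.g. where to
look) to reduce expected prediction error, which is exactly why perception
here is not passive.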
Stephen Reed writes:
>> How could a symbolic engine ever reason about the real world *with* access
>> to such information?
> I hope my work eventually demonstrates a solution to your satisfaction.
Me too!
> In the meantime there is evidence from robotics, specifically driverless
> car
Durk,
Absolutely right about the need for what is essentially an imaginative level of
mind. But wrong in thinking:
"Vision may be classified under "Narrow" AI"
You seem to be treating this extra "audiovisual perception layer" as a purely
passive layer. The latest psychology & philosophy recogn
To: agi@v2.listbox.com
Sent: Sunday, March 30, 2008 11:21:52 AM
Subject: RE: [agi] Intelligence: a pattern discovery algorithm of scalable
complexity.
[EMAIL PROTECTED] writes:
> But it
On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Although I sympathize with some of Hawkins's general ideas about unsupervised
> learning, his current HTM framework is unimpressive in comparison with
> state-of-the-art techniques such as Hinton's RBMs, LeCun's
> convolutional nets and the pro
[EMAIL PROTECTED] writes:
> But it should be quite clear that such methods could eventually be very handy
> for AGI.
I agree with your post 100%; this type of approach is the most interesting
AGI-related stuff to me.
> An audiovisual perception layer generates semantic interpretation on the
On Sun, Mar 30, 2008 at 7:23 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> Although I sympathize with some of Hawkins's general ideas about
> unsupervised learning, his current HTM framework is unimpressive in
> comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
> convolu
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.
But it should be quite clear that such methods could eventually be very
handy for AGI.
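For readers who haven't run into them, here is a minimal, self-contained sketch
of the kind of model meant by "Hinton's RBMs": a binary restricted Boltzmann
machine trained with one-step contrastive divergence (CD-1). The toy data and
hyperparameters below are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 12, 6
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

# Toy dataset: two binary prototypes plus bit-flip noise.
prototypes = np.array([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                       [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
data = prototypes[rng.integers(0, 2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))  # flip 5% of bits

lr = 0.1
for epoch in range(50):
    for v0 in data:
        # Positive phase: sample hidden units given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Gradient approximation: data correlations minus model correlations.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

print("mean reconstruction error:",
      np.mean((data - sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)) ** 2))

Stacking several of these, each trained on the hidden activities of the layer
below, gives the deep belief nets that the comparison with HTM refers to.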
It seems like a reasonable and not uncommon idea that an AI could be built as a
mostly-hierarchical autoassociative memory. As you point out, it's not so
different from Hawkins's ideas. Neighboring "pixels" will correlate in space
and time; "features" such as edges should become principal components
On Sun, Mar 30, 2008 at 5:12 PM, Boris Kazachenko <[EMAIL PROTECTED]> wrote:
>
> > What is it that your system tries to predict? Does it predict only
> > specific terminal inputs, values on the ends of its sensors? Or
> > something else? When does prediction occur?
> > What is this prediction f
Hello Boris, and welcome to the list.
Thanks Vladimir, I actually posted a few times a while back.
Don't do it often because of the "mindset" problem I mentioned in my blog
:).
http://scalable-intelligence.blogspot.com/
I didn't understand your algorithm, you use many terms that you didn't
d
Hello Boris, and welcome to the list.
I didn't understand your algorithm; you use many terms that you didn't
define. It would probably be clearer if you used some kind of
pseudocode and systematically described all occurring procedures. But I
think more fundamental questions that need clarifying won
Here's another try:
I think the main reason for the failure of AI is that no existing approach
is derived from a theoretically consistent definition of intelligence. Some,
such as Algorithmic Information Theory, are close but not close enough.
Scalable (general) intelligence must recursively