Thanks Ben,

I think the biggest difference in the way I approach it is that I am
deliberate about how the system solves specific kinds of problems. I haven't
gone into that in detail yet, though.

For example, Itamar seems to want to give the AI the basic building blocks
that make up spatiotemporal dependencies as a sort of bag of features, and
then let a neural-net-like structure find the patterns. If that is not
accurate, please correct me. I am very skeptical of such approaches because
there is no guarantee that the system will properly represent the
relationships and structure of the data. It seems merely hopeful to me that
such a system would get it right out of the vast number of possible results
it could accidentally arrive at.

The human visual system doesn't evolve like that on the fly. Evidence for
this is that we all see the same visual illusions and exhibit the same
visual limitations in the same way. There is much evidence that the system
doesn't develop accidentally: it uses a limited set of rules to learn from
perceptual data.

I think a more deliberate approach would be more effective because we can
understand why it does what it does, how it does it, and why it's not
working when it fails. With such deliberate approaches, it is much clearer
how to proceed and how to reuse knowledge in many complementary ways. This
is what I meant by "emergence".

I propose a more deliberate approach that knows exactly why problems can be
solved a certain way and how the system is likely to solve them.

I'm suggesting that we represent spatiotemporal relationships deliberately
and explicitly. Then we can construct general algorithms that solve
problems explicitly, yet generally.
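To make that concrete, here is a minimal sketch of what "deliberate and explicit" might look like, as opposed to a bag of features. Everything here (the `Observation` record, the relation names, the example objects) is my own hypothetical illustration, not something from the essay or from any existing system:

```python
from dataclasses import dataclass

# Hypothetical sketch: observations of objects across frames, with spatial
# and temporal relations computed and named explicitly, rather than left
# implicit in learned weights.

@dataclass(frozen=True)
class Observation:
    obj_id: str
    t: int      # frame index
    x: float
    y: float

def spatial_relation(a: Observation, b: Observation) -> str:
    """Explicit spatial relation between two observations in the same frame."""
    assert a.t == b.t, "spatial relations compare observations at the same time"
    return "left_of" if a.x < b.x else "right_of"

def temporal_relation(a: Observation, b: Observation) -> str:
    """Explicit temporal relation between two observations of the same object."""
    assert a.obj_id == b.obj_id, "temporal relations track a single object"
    dx, dy = b.x - a.x, b.y - a.y
    return "stationary" if dx == 0 and dy == 0 else "moved"

# A ball to the left of a box, then the ball moves in the next frame.
ball_t0 = Observation("ball", 0, 1.0, 0.0)
box_t0 = Observation("box", 0, 4.0, 0.0)
ball_t1 = Observation("ball", 1, 2.0, 0.0)

print(spatial_relation(ball_t0, box_t0))    # left_of
print(temporal_relation(ball_t0, ball_t1))  # moved
```

The point of the sketch is that general algorithms can then be written against these named relations directly, and when the system fails we can see exactly which relation it got wrong.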

Regarding computer vision not being that important... Don't you think that,
because knowledge is so essential and manual input is ineffective,
perception-based acquisition of knowledge is a very serious barrier to AGI?
It seems to me that the AGI solutions being constructed do not make
effective use of knowledge gained from simulated perception. OpenCog's
natural language processing, for example, seems to use very little
knowledge of the kind that would be gathered from visual perception. As far
as I remember, it mostly relies on things learned from other sources. To
me, it doesn't make sense to spend so much time developing and debugging
such solutions when a better and more general approach to language
understanding would draw on a large body of such knowledge.

Those are the sorts of things I feel are new to this approach.

Thanks Again,

Dave

PS: I'm planning to go to the Singularity Summit :) Last minute. Hope to see
you there.


On Mon, Aug 9, 2010 at 10:01 AM, Ben Goertzel <b...@goertzel.org> wrote:

> Hi David,
>
> I read the essay
>
> I think it summarizes well some of the key issues involving the bridge
> between perception and cognition, and the hierarchical decomposition of
> natural concepts....
>
> I find the ideas very harmonious with those of Jeff Hawkins, Itamar Arel,
> and other researchers focused on hierarchical deep learning approaches to
> vision with longer-term AGI ambitions
>
> I'm not sure there are any dramatic new ideas in the essay.  Do you think
> there are?
>
> My own view is that these ideas are basically right, but handle only a
> modest percentage of what's needed to make a human-level, vaguely human-like
> AGI ....  I.e. I don't agree that solving vision and the vision-cognition
> bridge is *such* a huge part of AGI, though it's certainly a nontrivial
> percentage...
>
>
> -- Ben G
>
> On Fri, Aug 6, 2010 at 4:44 PM, David Jones <davidher...@gmail.com> wrote:
>
>> Hey Guys,
>>
>> I've been working on writing out my approach to create general AI to share
>> and debate it with others in the field. I've attached my second draft of it
>> in PDF format, if you guys are at all interested. It's still a work in
>> progress and hasn't been fully edited. Please feel free to comment,
>> positively or negatively, if you have a chance to read any of it. I'll be
>> adding to and editing it over the next few days.
>>
>> I'll try to reply more professionally than I have been lately :) Sorry :S
>>
>> Cheers,
>>
>> Dave
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> CTO, Genescient Corp
> Vice Chairman, Humanity+
> Advisor, Singularity University and Singularity Institute
> External Research Professor, Xiamen University, China
> b...@goertzel.org
>
> "I admit that two times two makes four is an excellent thing, but if we are
> to give everything its due, two times two makes five is sometimes a very
> charming thing too." -- Fyodor Dostoevsky
>
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
