Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only evidence you offer is a set of example problems that
*you* don't know how to solve. Just because *you* cannot solve them doesn't
mean they cannot be solved at all using a certain methodology. So, who is
really making wild assumptions?

The mere fact that you can refer to a "chair" means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.
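
For concreteness, here is a toy sketch of what I mean; the feature names and
the function are invented purely for illustration, not taken from my paper:

    # Toy sketch: treating the concept "chair" as a recognizable pattern
    # over a small set of generic components. All feature names here are
    # made up for illustration only.
    def looks_like_chair(features):
        """Return True if the observed components match a crude 'chair' pattern."""
        return (features.get("horizontal_surface", False)         # something to sit on
                and features.get("raised_off_ground", False)      # supported by legs/base
                and features.get("roughly_person_sized", False))  # human-scaled

    # An unfamiliar object, described only by its generic components, still matches.
    observation = {"horizontal_surface": True,
                   "raised_off_ground": True,
                   "roughly_person_sized": True}
    print(looks_like_chair(observation))  # True

However crude, the point stands: once a thing can be referred to at all, it
can be matched by some pattern over simpler components.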

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>  Dave: No... it is equivalent to saying that the whole world can be modeled
> as if everything was made up of matter
>
> And "matter" is... ?  Huh?
>
> You clearly don't realise that your thinking is seriously woolly - and you
> will pay a heavy price in lost time.
>
> What are your "basic world/visual-world analytic units" which you are
> claiming to exist?
>
> You thought - perhaps think still - that *concepts*, which are pretty
> fundamental intellectual units of analysis at a certain level, could be
> expressed as, or indeed, were patterns. IOW there's a fundamental pattern
> for "chair" or "table." Absolute nonsense. And a radical failure to
> understand the basic nature of concepts which is that they are *freeform*
> schemas, incapable of being expressed either as patterns or programs.
>
> You had merely assumed that concepts could be expressed as patterns, but had
> never seriously, visually analysed it. Similarly you are merely assuming
> that the world can be analysed into some kind of visual units - but you
> haven't actually done the analysis, have you? You don't have any of these
> basic units to hand, do you? If you do, I suggest, reply instantly, naming a
> few. You won't be able to do it. They don't exist.
>
> Your whole approach to AGI is based on variations of what we can call
> "fundamental analysis" - and it's wrong. God/Evolution hasn't built the
> world with any kind of geometric, or other consistent, bricks. He/It is a
> freeform designer. You have to start thinking outside the
> box/brick/"fundamental unit".
>
>  *From:* David Jones <davidher...@gmail.com>
> *Sent:* Sunday, August 08, 2010 5:12 AM
> *To:* agi <agi@v2.listbox.com>
> *Subject:* Re: [agi] How To Create General AI Draft2
>
> Mike,
>
> I took your comments into consideration and have been updating my paper to
> make sure these problems are addressed.
>
> See more comments below.
>
> On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>>  1) You don't define the difference between narrow AI and AGI - or make
>> clear why your approach is one and not the other
>>
>
> I removed this because my audience is AI researchers... this is AGI 101. I
> think it's clear that my design defines "general" as being able to handle
> the vast majority of things we want the AI to handle without requiring a
> change in design.
>
>
>>
>> 2) "Learning about the world" won't cut it - vast numbers of programs
>> claim they can learn about the world - what's the difference between narrow
>> AI and AGI learning?
>>
>
> The difference lies in what the system can and cannot learn about, and which
> tasks it can and cannot perform. If the AI can receive input about anything
> it needs to know in the same formats it already knows how to understand and
> analyze, then it can reason about anything it needs to.
>
>
>>
>> 3) "Breaking things down into generic components allows us to learn about
>> and handle the vast majority of things we want to learn about. This is what
>> makes it general!"
>>
>> Wild assumption - neither proven nor demonstrated, and untrue.
>>
>
> You are only right that I haven't demonstrated it. I will address this in
> the next paper and continue adding details over the next few drafts.
>
> As a simple argument against your counterargument...
>
> If it were true that we could not understand the world using a limited set
> of rules or concepts, how is it that a human baby, whose design is
> predetermined by its DNA to interact with the world in a certain way, is
> able to deal with unforeseen things that were not preprogrammed? That's
> right: the baby is born with a limited set of rules that robustly allows it
> to deal with the unforeseen - a limited set of rules used to learn. That is
> equivalent to a limited set of "concepts" (i.e., rules) that would allow a
> computer to deal with the unforeseen.
>
>
>>  Interesting philosophically because it implicitly underlies AGI-ers'
>> fantasies of "take-off". You can compare it to the idea that all science can
>> be reduced to physics. If it could, then an AGI could indeed take off. But
>> it's demonstrably not so.
>>
>
> No... it is equivalent to saying that the whole world can be modeled as if
> everything was made up of matter. Oh, I forgot, that is the case :) It is a
> limited set of "concepts", yet it can create everything we know.
>
>
>>
>> You don't seem to understand that the problem of AGI is to deal with the
>> NEW - the unfamiliar, that which cannot be broken down into familiar
>> categories - and then to find ways of dealing with it ad hoc.
>>
>
> You don't seem to understand that even the things you think cannot be
> broken down, can be.
>
>
> Dave


