Re: [agi] AGI's Philosophy of Learning

2008-08-20 Thread BillK
On Tue, Aug 19, 2008 at 2:56 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Wow, sorry about that. I am using firefox and had no problems. The
> site was just the first reference I was able to find using Google.
>
> Wikipedia references the same fact:
>
> http://en.wikipedia.org/wiki/Feedforward_neural_network#Multi-layer_perceptron
>


I've done a bit more investigation.

The web site is probably clean.

These attacks are probably coming from a compromised ad server.
ScanSafe Quote:
"Online ads have become a primary target for malware authors because
they offer a stealthy way to distribute malware to a wide audience. In
many instances, the malware perpetrator can leverage the distributed
nature of online advertising and the decentralization of website
content to spread malware to hundreds of sites."


So you might encounter these attacks at any site, because almost all
sites serve up ads to you.
And you're correct that Firefox with AdBlock Plus and NoScript is safe
from these attacks.

Using a Linux or Apple operating system is even safer.

I dual-boot to use Linux for browsing and only go into Windows when necessary.
Nowadays you can also use virtualization to run several operating
systems at once.
Cooperative Linux also runs happily alongside Windows.


BillK




Re: [agi] AGI's Philosophy of Learning

2008-08-19 Thread Abram Demski
Wow, sorry about that. I am using firefox and had no problems. The
site was just the first reference I was able to find using Google.

Wikipedia references the same fact:

http://en.wikipedia.org/wiki/Feedforward_neural_network#Multi-layer_perceptron

On Tue, Aug 19, 2008 at 3:42 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> Abram,
>
> Just FYI... When I attempted to access the Web page in your message,
> http://www.learnartificialneuralnetworks.com/ (that's without the
> "backpropagation.html" part), my virus checker, AVG, blocked the attempt
> with a message similar to the following:
>
> Threat detected!
> Virus found: JS/Downloader.Agent
> Detected on open
>
> Quarantined
>
> On a second attempt, I also got the IE 7.0 warning banner:
>
> "This website wants to run the following add-on: "Microsoft Data Access -
> Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust the
> website and the add-on and want to allow it to run, click..." (of course, I
> didn't click).
>
> This time, AVG gave me the option to "heal" the virus.  I took this option.
>
> It may be nothing, but it also could be a "drive-by" download attempt of
> which the owners of that site may not be aware.
>
> Cheers,
>
> Brad
>
>
>
> Abram Demski wrote:
>>
>> Mike,
>>
>> There are at least 2 ways this can happen, I think. The first way is
>> that a mechanism is theoretically proven to be "complete", for some
>> less-than-sufficient formalism. The best example of this is one I
>> already mentioned: the neural nets of the nineties (specifically,
>> feedforward neural nets with multiple hidden layers). There is a
>> completeness result associated with these. I quote from
>> http://www.learnartificialneuralnetworks.com/backpropagation.html :
>>
>> "Although backpropagation can be applied to networks with any number
>> of layers, just as for networks with binary units it has been shown
>> (Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
>> Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
>> suffices to approximate any function with finitely many discontinuities
>> to arbitrary precision, provided the activation functions of the
>> hidden units are non-linear (the universal approximation theorem). In
>> most applications a feed-forward network with a single layer of hidden
>> units is used with a sigmoid activation function for the units. "
>>
>> This sort of thing could have contributed to the 50 years of
>> less-than-success you mentioned.
>>
>> The second way this phenomenon could manifest is more a personal fear
>> than anything else. I am worried that there really might be partial
>> principles of mind that could seem to be able to do everything for a
>> time. The possibility is made concrete for me by analogies to several
>> smaller domains. In linguistics, the grammar that we are taught in
>> high school does almost everything. In logic, 1st-order systems do
>> almost everything. In sequence learning, hidden markov models do
>> almost everything. So, it is conceivable that some AGI method will be
>> missing something fundamental, yet seem for a time to be
>> all-encompassing.
>>
>> On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Abram:I am worried-- worried that an AGI system based on anything less
>>> than
>>> the one most powerful logic will be able to fool AGI researchers for a
>>> long time into thinking that it is capable of general intelligence.
>>>
>>> Can you explain this to me? (I really am interested in understanding your
>>> thinking). AGI's have a roughly 50 year record of total failure. They
>>> have
>>> never shown the slightest sign of general intelligence - of being able to
>>> cross domains. How do you think they will or could fool anyone?
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>




Re: [agi] AGI's Philosophy of Learning

2008-08-19 Thread BillK
On Tue, Aug 19, 2008 at 8:42 AM, Brad Paulsen wrote:
> Abram,
>
> Just FYI... When I attempted to access the Web page in your message,
> http://www.learnartificialneuralnetworks.com/ (that's without the
> "backpropagation.html" part), my virus checker, AVG, blocked the attempt
> with a message similar to the following:
>
> Threat detected!
> Virus found: JS/Downloader.Agent
> Detected on open
>
> Quarantined
>
> On a second attempt, I also got the IE 7.0 warning banner:
>
> "This website wants to run the following add-on: "Microsoft Data Access -
> Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust the
> website and the add-on and want to allow it to run, click..." (of course, I
> didn't click).
>
> This time, AVG gave me the option to "heal" the virus.  I took this option.
>
> It may be nothing, but it also could be a "drive-by" download attempt of
> which the owners of that site may not be aware.
>


Yes, the possibility that the site has been hacked should always be
considered, as JavaScript injection attacks are becoming more and more
common. Because of this, the latest version of AVG has been made very
suspicious of JavaScript, which is causing some false detections when it
encounters very complicated JavaScript and errs on the side of safety.
Looking at the source code for that page, there is one large function
near the top that might well have confused AVG (or it could be a hack,
I'm not a JavaScript expert!).

However, I scanned the site with Dr.Web antivirus and it said the site
was clean and the JavaScript was OK.
This site has not yet been scanned by McAfee Site Advisor, but I have
submitted it to them to be scanned soon.

Of course, if you use the Mozilla Firefox browser you are protected
from many drive-by infections, especially if you use the AdBlock Plus
and NoScript add-ons.

BillK




Re: [agi] AGI's Philosophy of Learning

2008-08-19 Thread Brad Paulsen

Abram,

Just FYI... When I attempted to access the Web page in your message, 
http://www.learnartificialneuralnetworks.com/ (that's without the 
"backpropagation.html" part), my virus checker, AVG, blocked the attempt 
with a message similar to the following:


Threat detected!
Virus found: JS/Downloader.Agent
Detected on open

Quarantined

On a second attempt, I also got the IE 7.0 warning banner:

"This website wants to run the following add-on: "Microsoft Data Access - 
Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust 
the website and the add-on and want to allow it to run, click..." (of 
course, I didn't click).


This time, AVG gave me the option to "heal" the virus.  I took this option.

It may be nothing, but it also could be a "drive-by" download attempt of
which the owners of that site may not be aware.


Cheers,

Brad



Abram Demski wrote:

Mike,

There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be "complete", for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets with multiple hidden layers). There is a
completeness result associated with these. I quote from
http://www.learnartificialneuralnetworks.com/backpropagation.html :

"Although backpropagation can be applied to networks with any number
of layers, just as for networks with binary units it has been shown
(Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
suffices to approximate any function with finitely many discontinuities
to arbitrary precision, provided the activation functions of the
hidden units are non-linear (the universal approximation theorem). In
most applications a feed-forward network with a single layer of hidden
units is used with a sigmoid activation function for the units. "

This sort of thing could have contributed to the 50 years of
less-than-success you mentioned.

The second way this phenomenon could manifest is more a personal fear
than anything else. I am worried that there really might be partial
principles of mind that could seem to be able to do everything for a
time. The possibility is made concrete for me by analogies to several
smaller domains. In linguistics, the grammar that we are taught in
high school does almost everything. In logic, 1st-order systems do
almost everything. In sequence learning, hidden markov models do
almost everything. So, it is conceivable that some AGI method will be
missing something fundamental, yet seem for a time to be
all-encompassing.

On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

Abram:I am worried-- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your
thinking). AGI's have a roughly 50 year record of total failure. They have
never shown the slightest sign of general intelligence - of being able to
cross domains. How do you think they will or could fool anyone?













Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Abram Demski
Charles,

I find this perspective interesting. Given what logicians know so far,
it is more plausible that there is not one right logic, but merely a
hierarchy of better/worse/different logics. My search for the "top" is
somewhat unjustified (but I cannot help myself from thinking that
there must be a top). Nonetheless, the image of evolution randomly
experimenting in the space of possible logics, and simply finding very
powerful logics rather than this "top" of mine (even if it exists), is
quite reasonable.

But, I cannot help from saying it... if this is the right perspective,
then evolution itself could be seen as the "top", the correct logic. I
am not sure what this view implies.

--Abram

On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> Abram Demski wrote:
>>
>> On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>>>
>>> On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]>
>>> wrote:
>>>

 The ... the moment I want to ignore computational resources...

>>>
>>> Ok but what are you getting at?
>>>
>>
>> I had a friend who would win arguments in high school by saying
>> "what's your point?" after a long back-and-forth, shifting the burden
>> on me to show that what I was arguing was not only true but
>> important... which it often wasn't. :)
>>
>> Part of the point is to answer the question "What do we mean when we
>> refer to mathematical entities?". Part of the point is to find the
>> correct logic, rejecting the notion that logics
>> are simply different, not better or worse*. Part of the point is that
>> I am worried-- worried that an AGI system based on anything less than
>> the one most powerful logic will be able to fool AGI researchers for a
>> long time into thinking that it is capable of general intelligence.
>> Several examples-- Artificial neural networks in their currently most
>> popular form are limited to models that a logician might call
>> "0th-order" or "propositional", not even first-order, yet they are
>> powerful enough to solve many problems. It is thus easy to think that...
>>
>
> FWIW, I doubt that any AGI is actually possible.  I'm reasonably certain
> that it's possible to get closer than people are, but we aren't really even
> an attempt at a fully general AI.  I have a strong suspicion that things
> analogous to the halting problem and Gödel's incompleteness theorem are
> lurking.
>
> As such, I don't think it's reasonable to worry about implementing the "most
> powerful logic".  Anything that gets implemented will be incomplete (or
> self-contradictory).  People seem to have evolved to go with
> self-contradictory.
>
> As such, my "solution" is like the solution to the problem of global
> maximization in hill-climbing...the best solution is to start in lots of different places
> that each find their own local optimum.  You still won't find the global
> optimum except by chance, but you can get a lot closer.  I don't like
> thinking of this as relaxation or annealing, but I'm not sure why.  Possibly
> because they usually use smaller chunks than I think best.  I don't think
> the surface is sufficiently homogeneous to use the same approach in every
> locale, except on a very large scale.  (And by writing this I'm probably
> revealing my ignorance [profound] of the techniques.)
>
>
>
>
>




Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Abram Demski
Mike,

But this is horrible! If what you are saying is true, then research
will barely progress.

On Mon, Aug 18, 2008 at 11:46 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Abram,
>
> The key distinction here is probably that some approach to AGI may be widely
> accepted as having great *promise*. That has certainly been the case,
> although I actually doubt that it could happen again. There were also no
> robots of note in the past. Personally, I can't see any approach being
> accepted now - and the general responses of this forum, I think, support
> this - until it actually delivers on some form of GI.
>
> Mike,
>
> There are at least 2 ways this can happen, I think. The first way is
> that a mechanism is theoretically proven to be "complete", for some
> less-than-sufficient formalism. The best example of this is one I
> already mentioned: the neural nets of the nineties (specifically,
> feedforward neural nets with multiple hidden layers). There is a
> completeness result associated with these. I quote from
> http://www.learnartificialneuralnetworks.com/backpropagation.html :
>
> "Although backpropagation can be applied to networks with any number
> of layers, just as for networks with binary units it has been shown
> (Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
> Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
> suffices to approximate any function with finitely many discontinuities
> to arbitrary precision, provided the activation functions of the
> hidden units are non-linear (the universal approximation theorem). In
> most applications a feed-forward network with a single layer of hidden
> units is used with a sigmoid activation function for the units. "
>
> This sort of thing could have contributed to the 50 years of
> less-than-success you mentioned.
>
> The second way this phenomenon could manifest is more a personal fear
> than anything else. I am worried that there really might be partial
> principles of mind that could seem to be able to do everything for a
> time. The possibility is made concrete for me by analogies to several
> smaller domains. In linguistics, the grammar that we are taught in
> high school does almost everything. In logic, 1st-order systems do
> almost everything. In sequence learning, hidden markov models do
> almost everything. So, it is conceivable that some AGI method will be
> missing something fundamental, yet seem for a time to be
> all-encompassing.
>
> On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]>
> wrote:
>>
>> Abram:I am worried-- worried that an AGI system based on anything less
>> than
>> the one most powerful logic will be able to fool AGI researchers for a
>> long time into thinking that it is capable of general intelligence.
>>
>> Can you explain this to me? (I really am interested in understanding your
>> thinking). AGI's have a roughly 50 year record of total failure. They have
>> never shown the slightest sign of general intelligence - of being able to
>> cross domains. How do you think they will or could fool anyone?
>>
>>
>>
>>
>
>
>
>
>
>
>




Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Mike Tintner

Abram,

The key distinction here is probably that some approach to AGI may be widely 
accepted as having great *promise*. That has certainly been the case, 
although I actually doubt that it could happen again. There were also no 
robots of note in the past. Personally, I can't see any approach being 
accepted now - and the general responses of this forum, I think, support 
this - until it actually delivers on some form of GI.


Mike,

There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be "complete", for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets with multiple hidden layers). There is a
completeness result associated with these. I quote from
http://www.learnartificialneuralnetworks.com/backpropagation.html :

"Although backpropagation can be applied to networks with any number
of layers, just as for networks with binary units it has been shown
(Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
suffices to approximate any function with finitely many discontinuities
to arbitrary precision, provided the activation functions of the
hidden units are non-linear (the universal approximation theorem). In
most applications a feed-forward network with a single layer of hidden
units is used with a sigmoid activation function for the units. "

This sort of thing could have contributed to the 50 years of
less-than-success you mentioned.

The second way this phenomenon could manifest is more a personal fear
than anything else. I am worried that there really might be partial
principles of mind that could seem to be able to do everything for a
time. The possibility is made concrete for me by analogies to several
smaller domains. In linguistics, the grammar that we are taught in
high school does almost everything. In logic, 1st-order systems do
almost everything. In sequence learning, hidden markov models do
almost everything. So, it is conceivable that some AGI method will be
missing something fundamental, yet seem for a time to be
all-encompassing.

On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]> 
wrote:
Abram:I am worried-- worried that an AGI system based on anything less 
than

the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your
thinking). AGI's have a roughly 50 year record of total failure. They have
never shown the slightest sign of general intelligence - of being able to
cross domains. How do you think they will or could fool anyone?













Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Charles Hixson

Abram Demski wrote:

On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
  

On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:


The ... the moment I want to ignore computational resources...
  

Ok but what are you getting at?



I had a friend who would win arguments in high school by saying
"what's your point?" after a long back-and-forth, shifting the burden
on me to show that what I was arguing was not only true but
important... which it often wasn't. :)

Part of the point is to answer the question "What do we mean when we
refer to mathematical entities?". Part of the point is to find the
correct logic, rejecting the notion that logics
are simply different, not better or worse*. Part of the point is that
I am worried-- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.
Several examples-- Artificial neural networks in their currently most
popular form are limited to models that a logician might call
"0th-order" or "propositional", not even first-order, yet they are
powerful enough to solve many problems. It is thus easy to think that...
  


FWIW, I doubt that any AGI is actually possible.  I'm reasonably certain 
that it's possible to get closer than people are, but we aren't really 
even an attempt at a fully general AI.  I have a strong suspicion that 
things analogous to the halting problem and Gödel's incompleteness 
theorem are lurking.


As such, I don't think it's reasonable to worry about implementing the 
"most powerful logic".  Anything that gets implemented will be 
incomplete (or self-contradictory).  People seem to have evolved to go 
with self-contradictory.


As such, my "solution" is like the solution to the problem of global 
maximization in hill-climbing...the best solution is to start in lots of different 
places that each find their own local optimum.  You still won't find the 
global optimum except by chance, but you can get a lot closer.  I don't 
like thinking of this as relaxation or annealing, but I'm not sure why.  
Possibly because they usually use smaller chunks than I think best.  I 
don't think the surface is sufficiently homogeneous to use the same 
approach in every locale, except on a very large scale.  (And by writing 
this I'm probably revealing my ignorance [profound] of the techniques.)
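
For what it's worth, that "start in lots of different places" strategy is
basically random-restart hill climbing. A minimal sketch in Python (the
objective function, step size, and number of restarts are arbitrary
illustrative choices):

import math
import random

def hill_climb(f, x0, step=0.05, iters=2000):
    """Greedy local search: move to a random nearby point only if it improves f."""
    best_x, best_f = x0, f(x0)
    for _ in range(iters):
        cand = best_x + random.uniform(-step, step)
        cand_f = f(cand)
        if cand_f > best_f:
            best_x, best_f = cand, cand_f
    return best_x, best_f

def random_restart_climb(f, n_starts=25, lo=-10.0, hi=10.0):
    """Run independent climbs from many random starts and keep the best local optimum."""
    climbs = [hill_climb(f, random.uniform(lo, hi)) for _ in range(n_starts)]
    return max(climbs, key=lambda result: result[1])

# A bumpy objective with many local maxima; any single climb can get stuck.
bumpy = lambda x: math.sin(3.0 * x) - 0.1 * x * x

print(random_restart_climb(bumpy))

Any single climb tends to stop on the nearest bump; keeping the best result
over many independent starts gets much closer to the global optimum without
ever guaranteeing it, which is exactly the trade-off described above.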







Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Abram Demski
Mike,

There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be "complete", for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets with multiple hidden layers). There is a
completeness result associated with these. I quote from
http://www.learnartificialneuralnetworks.com/backpropagation.html :

"Although backpropagation can be applied to networks with any number
of layers, just as for networks with binary units it has been shown
(Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
suffices to approximate any function with finitely many discontinuities
to arbitrary precision, provided the activation functions of the
hidden units are non-linear (the universal approximation theorem). In
most applications a feed-forward network with a single layer of hidden
units is used with a sigmoid activation function for the units. "

This sort of thing could have contributed to the 50 years of
less-than-success you mentioned.
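
To make the quoted claim concrete, here is a minimal sketch (Python with
NumPy; the hidden width, learning rate, and target function are arbitrary
choices) of exactly the setup the quote describes: one hidden layer of
sigmoid units, a linear output, and plain batch backpropagation, fitted to
a function with a single discontinuity:

import numpy as np

# One hidden layer of sigmoid units with a linear output, trained by batch
# gradient descent (backpropagation) on mean squared error.  The target
# function has one discontinuity, at x = 0.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y = np.where(x < 0.0, np.sin(3.0 * x), 0.5 + 0.25 * x)

H = 40                                      # number of hidden units (arbitrary)
W1 = rng.normal(scale=2.0, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.2
for step in range(20000):
    h = sigmoid(x @ W1 + b1)                # hidden activations
    yhat = h @ W2 + b2                      # network output
    err = yhat - y
    d_out = 2.0 * err / len(x)              # gradient of MSE w.r.t. the output
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1.0 - h)  # backpropagate through the sigmoids
    dW1 = x.T @ d_hid
    db1 = d_hid.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float(np.mean(err ** 2)))

With enough hidden units the fit can be made arbitrarily good, which is
exactly the kind of result that makes the formalism look more sufficient
than it is.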

The second way this phenomenon could manifest is more a personal fear
than anything else. I am worried that there really might be partial
principles of mind that could seem to be able to do everything for a
time. The possibility is made concrete for me by analogies to several
smaller domains. In linguistics, the grammar that we are taught in
high school does almost everything. In logic, 1st-order systems do
almost everything. In sequence learning, hidden markov models do
almost everything. So, it is conceivable that some AGI method will be
missing something fundamental, yet seem for a time to be
all-encompassing.

On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Abram:I am worried-- worried that an AGI system based on anything less than
> the one most powerful logic will be able to fool AGI researchers for a
> long time into thinking that it is capable of general intelligence.
>
> Can you explain this to me? (I really am interested in understanding your
> thinking). AGI's have a roughly 50 year record of total failure. They have
> never shown the slightest sign of general intelligence - of being able to
> cross domains. How do you think they will or could fool anyone?
>
>
>
>




Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Mike Tintner

Abram:I am worried-- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your 
thinking). AGI's have a roughly 50 year record of total failure. They have 
never shown the slightest sign of general intelligence - of being able to 
cross domains. How do you think they will or could fool anyone? 







Re: [agi] AGI's Philosophy of Learning

2008-08-17 Thread Abram Demski
On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> The paradox seems trivial, of course. I generally agree with your
>> analysis (describing how we consider the sentence, take into account
>> its context, and so on). But the big surprise to logicians was that the
>> paradox is not just a lingual curiosity, it is an essential feature of
>> any logic satisfying some broad, seemingly reasonable requirements.
>>
>> A logical "sentence" corresponds better to a concept/idea, so bringing
>> in the lingual context and so on does not help much in the logic-based
>> version (although I readily admit that it solves the paradox in the
>> lingual form I presented it in my previous email). The question
>> becomes, does the system allow "This thought is false" to be thought,
>> and if so, how does it deal with it? Intuitively it seems that we
>> cannot think such a silly concept.
>
>> you said "I don't think the problem of self-reference is
>> significantly more difficult than the problem of general reference",
>> so I will say "I don't think the frame problem is significantly more
>> difficult than the problem of general inference." And like I said, for
>> the moment I want to ignore computational resources...
>
> Ok but what are you getting at?

I had a friend who would win arguments in high school by saying
"what's your point?" after a long back-and-forth, shifting the burden
on me to show that what I was arguing was not only true but
important... which it often wasn't. :)

Part of the point is to answer the question "What do we mean when we
refer to mathematical entities?". Part of the point is to find the
correct logic, rejecting the notion that logics
are simply different, not better or worse*. Part of the point is that
I am worried-- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.
Several examples-- Artificial neural networks in their currently most
popular form are limited to models that a logician might call
"0th-order" or "propositional", not even first-order, yet they are
powerful enough to solve many problems. It is thus easy to think that
the problem is just computational power. The currently popular AIXI
model could (if it were built) learn to skillfully manipulate all
sorts of mathematical formalisms, and speak in a convincing manner to
human mathematicians. Yet, it is easy to see from AIXI's definition
that it will not actually apply any math it learns to model the world,
since it has a hardwired assumption that the universe is actually
computable. (I don't know if you're familiar with AIXI, just ignore
this example if not...)
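
(For those who do know AIXI: the action selection is usually written roughly
as below. This is a sketch from memory rather than Hutter's exact
formulation, so details may differ.)

  a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
        (r_k + \cdots + r_m)
        \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The innermost sum ranges only over programs q for a universal machine U,
weighted by 2^{-\ell(q)}; that is the hardwired computability assumption: an
environment with no generating program gets zero weight in the mixture.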

*(I am being a bit extreme here. One logic can be the right logic for
one purpose, while another is the right logic for a different purpose.
Two logics can turn out to be equivalent, therefore about as good for
any purpose. But, I am saying that there should be some set of logics,
all inter-equivalent, that are the right logic for the "broadest
possible" purpose-- that is, reasoning.)

>  I don't want to stop you from going
> on and explaining what it is that you are getting at, but I want to
> tell you about another criticism I developed from talking to people
> who asserted that everything could be logically reduced (and in
> particular anything an AI program could do could be logically
> reduced.)  I finally realized that what they were saying could be
> reduced to something along the lines of "If I could understand
> everything then I could understand everything."

EXACTLY!

Or, um, rather, yes. That is what I am getting at. If I could
understand everything then I would understand everything. It is an odd
way of putting it, but, true.

> I mentioned that to
> the guys I was talking to but I don't think that they really got it.
> Or at least they didn't like it. I think you might find yourself on
> the same lane if you don't keep your eyes open.  But I really want to
> know where it is you are going.

It seems to me that these people must have been arguing with you
because they saw certain points you were making as essentially
illogical, and got caught up trying to explain something that was
utterly obvious to them but which they thought you were denying. So
you came back to them and said that their point was utterly obvious,
which was true.

>
> I just read the message that you referred to in OpenCog Prime wikibook
> and... I really didn't understand it completely but I still don't
> understand what the problem is.

I am somewhat confused. I do not remember referring to the wikibook,
and didn't find the reference with a brief sweep of the emails I've
sent on this thread.

> You should realize that you cannot
> expect to use inductive processes to create a single logical theory
> about everything that can be underst

Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
On Fri, Aug 15, 2008 at 3:40 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> The paradox seems trivial, of course. I generally agree with your
> analysis (describing how we consider the sentence, take into account
> its context, and so on). But the big surprise to logicians was that the
> paradox is not just a lingual curiosity, it is an essential feature of
> any logic satisfying some broad, seemingly reasonable requirements.
>
> A logical "sentence" corresponds better to a concept/idea, so bringing
> in the lingual context and so on does not help much in the logic-based
> version (although I readily admit that it solves the paradox in the
> lingual form I presented it in my previous email). The question
> becomes, does the system allow "This thought is false" to be thought,
> and if so, how does it deal with it? Intuitively it seems that we
> cannot think such a silly concept.

> you said "I don't think the problem of self-reference is
> significantly more difficult than the problem of general reference",
> so I will say "I don't think the frame problem is significantly more
> difficult than the problem of general inference." And like I said, for
> the moment I want to ignore computational resources...

Ok but what are you getting at?  I don't want to stop you from going
on and explaining what it is that you are getting at, but I want to
tell you about another criticism I developed from talking to people
who asserted that everything could be logically reduced (and in
particular anything an AI program could do could be logically
reduced.)  I finally realized that what they were saying could be
reduced to something along the lines of "If I could understand
everything then I could understand everything."  I mentioned that to
the guys I was talking to but I don't think that they really got it.
Or at least they didn't like it. I think you might find yourself on
the same lane if you don't keep your eyes open.  But I really want to
know where it is you are going.

I just read the message that you referred to in the OpenCog Prime wikibook
and... I really didn't understand it completely but I still don't
understand what the problem is.  You should realize that you cannot
expect to use inductive processes to create a single logical theory
about everything that can be understood.  I once discussed things with
Pei and he agreed that the representational system that contains the
references to ideas can be logical even though the references may not
be.  So a debugged referential program does not mean that the system
that the references refer to has to be perfectly sound. We can
consider paradoxes and the like.

Your argument sounds as if you are saying that a working AI system,
because it would be perfectly logical, would imply that Gödel's
theorem and the halting problem weren't problems.  But I have already
expressed my point of view on this: I don't think that the ideas that
an AI program can create are going to be integrated into a perfectly
logical system.  We can use logical sentences to input ideas very
effectively as you pointed out. But that does not mean that those
logical sentences have to be integrated into a single sound logical
system.

Where are you going with this?
Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Abram Demski
> I don't think the problems of a self-referential paradox are
> significantly more difficult than the problems of general reference.
> Not only are there implicit boundaries, some of which have to be
> changed in an instant as the conversation develops, there are also
> multiple levels of generalization in conversation.  These multiple
> levels of generalization are not simple or even reliably constructive
> (reinforcing).  They are complex and typically contradictory.  In my
> opinion they can be understood because we are somehow able to access
> different kinds of relevant information necessary to decode them.

The paradox seems trivial, of course. I generally agree with your
analysis (describing how we consider the sentence, take into account
its context, and so on). But the big surprise to logicians was that the
paradox is not just a lingual curiosity, it is an essential feature of
any logic satisfying some broad, seemingly reasonable requirements.

A logical "sentence" corresponds better to a concept/idea, so bringing
in the lingual context and so on does not help much in the logic-based
version (although I readily admit that it solves the paradox in the
lingual form I presented it in my previous email). The question
becomes, does the system allow "This thought is false" to be thought,
and if so, how does it deal with it? Intuitively it seems that we
cannot think such a silly concept. (Oh, and don't let the quotes
around it make you try to just think the sentence... I can say "This
thought is false" in my head, but can I actually think a thought that
asserts its own falsehood? Not so sure...)
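
A trivial way to see the classical problem is to brute-force the two
candidate truth values; a sketch in Python (the function name is just for
illustration):

# "This sentence is false" is true exactly when what it asserts holds,
# i.e. exactly when it is false, so a classical truth value v would have
# to satisfy v == (not v).
def liar_holds(v):
    return v == (not v)

print([v for v in (True, False) if liar_holds(v)])   # prints [] -- neither value works

Neither candidate survives, which is all the paradox amounts to once it is
forced into a two-valued setting.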

> This is one reason why I think that the Relevancy Problem of the Frame
> Problem is the primary problem of contemporary AI.  We need to be able
> to access relevant information even though the appropriate information
> may change dramatically in response to the most minor variations in
> the comprehension of a sentence or of a situation.

Well, you said "I don't think the problem of self-reference is
significantly more difficult than the problem of general reference",
so I will say "I don't think the frame problem is significantly more
difficult than the problem of general inference." And like I said, for
the moment I want to ignore computational resources...

On Fri, Aug 15, 2008 at 2:21 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Our ability to think about abstractions and extrapolations off of
> abstractions comes because we are able to create game boundaries
> around the systems that we think about.  So yes you can talk about
> infinite resources and compare it to the domain of the lambda
> calculus, but this kind of thinking is possible only because we are
> able to abstract ideas by creating rules and barriers for the games.
> People don't always think of these as games because they can be so
> effective at producing material change that they seem and can be as
> practical as a truck, or as armies of trucks.
>
>> It is possible that your logic, fleshed out, could circumnavigate the
>> issue. Perhaps you can provide some intuition about how such a logic
>> should deal with the following line of argument (most will have seen
>> it, but I repeat it for concreteness):
>>
>> "Consider the sentence "This sentence is false". It is either true or
>> false. If it is true, then it is false. If it is false, then it is
>> true. In either case, it is both true and false. Therefore, it is both
>> true and false."
>
> Why?  I mean that my imagined program is a little like a method actor
> (like Marlon Brando).  What is its motivation?  Is it a children's
> game?  A little like listening to ghost stories? Or watching movies
> about the undead?
>
> The sentence, 'this sentence is false,' obviously relates to a
> boundary around the sentence. However, that insight wasn't obvious to
> me every time I came across the sentence.  Why not?  I don't know, but
> I think that when statements like that are unfamiliar, you put them
> into their own abstracted place and wait to see how they are going
> to be used relative to other information.
>
> Let's go with your statement and suppose that the argument is
> unfamiliar.  Basically, the first step would be to interpret the
> elementary partial meanings of the sentences without necessarily
> integrating them.  Each sentence is put into a temporary boundary.
> 'It is either true or false.'  Ok got it, but since this kind of
> argument is unfamiliar to my imaginary program, it does not
> immediately realize that the second sentence is referring to the
> first.  Why not?  Because the first sentence creates an aura of
> reference, and if the self-reference that was intended is appreciated,
> then the sense that the second sentence is going to refer to the first
> sentence will - in some cases - be made less likely.  In other cases,
> the awareness that the first sentence is self referential might make
> it more likely that the next sentence will also be interpreted as
> referring to it.
>
> T

Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
Our ability to think about abstractions and extrapolations off of
abstractions comes because we are able to create game boundaries
around the systems that we think about.  So yes you can talk about
infinite resources and compare it to the domain of the lambda
calculus, but this kind of thinking is possible only because we are
able to abstract ideas by creating rules and barriers for the games.
People don't always think of these as games because they can be so
effective at producing material change that they seem and can be as
practical as a truck, or as armies of trucks.

> It is possible that your logic, fleshed out, could circumnavigate the
> issue. Perhaps you can provide some intuition about how such a logic
> should deal with the following line of argument (most will have seen
> it, but I repeat it for concreteness):
>
> "Consider the sentence "This sentence is false". It is either true or
> false. If it is true, then it is false. If it is false, then it is
> true. In either case, it is both true and false. Therefore, it is both
> true and false."

Why?  I mean that my imagined program is a little like a method actor
(like Marlon Brando).  What is its motivation?  Is it a children's
game?  A little like listening to ghost stories? Or watching movies
about the undead?

The sentence, 'this sentence is false,' obviously relates to a
boundary around the sentence. However, that insight wasn't obvious to
me every time I came across the sentence.  Why not?  I don't know, but
I think that when statements like that are unfamiliar, you put them
into their own abstracted place and wait to see how they are going
to be used relative to other information.

Let's go with your statement and suppose that the argument is
unfamiliar.  Basically, the first step would be to interpret the
elementary partial meanings of the sentences without necessarily
integrating them.  Each sentence is put into a temporary boundary.
'It is either true or false.'  Ok got it, but since this kind of
argument is unfamiliar to my imaginary program, it does not
immediately realize that the second sentence is referring to the
first.  Why not?  Because the first sentence creates an aura of
reference, and if the self-reference that was intended is appreciated,
then the sense that the second sentence is going to refer to the first
sentence will - in some cases - be made less likely.  In other cases,
the awareness that the first sentence is self referential might make
it more likely that the next sentence will also be interpreted as
referring to it.

The practical problems of understanding the elementary relations of
communication are so complicated, that the problem of dealing with a
paradox is not as severe as you might think.

We are able to abstract and use those abstractions in processes that
can be likened to extrapolation because we have to be able to do that.

I don't think the problems of a self-referential paradox are
significantly more difficult than the problems of general reference.
Not only are there implicit boundaries, some of which have to be
changed in an instant as the conversation develops, there are also
multiple levels of generalization in conversation.  These multiple
levels of generalization are not simple or even reliably constructive
(reinforcing).  They are complex and typically contradictory.  In my
opinion they can be understood because we are somehow able to access
different kinds of relevant information necessary to decode them.

This is one reason why I think that the Relevancy Problem of the Frame
Problem is the primary problem of contemporary AI.  We need to be able
to access relevant information even though the appropriate information
may change dramatically in response to the most minor variations in
the comprehension of a sentence or of a situation.

I didn't write much about the self-referential paradox because I think
it is somewhat trivial. Although an AI program will be 'logical' in
the sense of the logic of computing machinery, that does not mean that
a computer program has to be strictly logical.  This means that
thinking can contain errors, but that is not front page news.  Man
bites dog!  Now that's news.

Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Abram Demski
That made more sense to me. Responses follow.

On Fri, Aug 15, 2008 at 10:57 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 5:05 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> But, I am looking for a system that "is" me.
>
> You, like everyone else's me, has its limitations.  So there is a
> difference between the potential of the system and the actual system.
> This point of stressing potentiality rather than casually idealizing
> all-inclusiveness, which I originally mentioned only out of technical
> feasibility, is significant because you are applying the idea to
> yourself.  You would not be able to achieve what you have achieved if
> you were busy trying to achieve what all humanity has achieved.  So,
> even the potential of the system is dependent on what has already been
> achieved.  That is, the true potential of the system (of one's
> existence or otherwise) is readjusted as the system evolves.  So a
> baby's potential is not greater than ours, the potential of his or her
> potential is. (This even makes greater sense when you consider the
> fact that individual potential must be within a common range.)
>
> My only conclusion is that we are talking past each other because we
>> are applying totally different models to the problem.
>>
>> When I say "logic", I mean something quite general-- an ideal system
>> of mental operation. "Ideal" means that I am ignoring computational
>> resources.
>
> That is an example of how your ideal has gone beyond the feasible
> potential of an individual.

The idea is exactly like saying "computer" in the mathematical sense.
The theory of computation pretends that unbounded memory and time is
available. So, I feel a bit like I am talking about some issue in
lambda calculus and you are trying to tell me that the answer depends
on whether the processor is 32 bit or 64 bit. You do not think we can
abstract away from a particular person?

>
>> I  think what you are saying is that we can apply different
>> logics to different situations, and so we can at one moment operate
>> within a logic but at the next moment transcend that logic. This is
>> all well and good, but that system of operation in and of itself can
>> be seen to be a larger logical system, one that manipulates smaller
>> systems. This larger system, we cannot transcend; we *are* that
>> system.
>>
>> So, if no such logic exists, if there is no one "big" logic that
>> transcends all the "little" logics that we apply to individual
>> situations, then it makes sense to conclude that we cannot exist.
>> Right?
>> --Abram
>
> Whaaa?
>
> You keep talking about things like fantastic resources but then end up
> claiming that your ideal somehow proves that we cannot exist.  (Please
> leave me out of your whole non-existence thing by the way. I like
> existing and hope to continue at it for some time. I recommend that
> you take a similar approach to the problem too.)

OK, to continue the metaphor: I am saying that a sufficient theory of
computation must exist, because actual computers exist. At the very
least, for my mathematical ideal, I could simply take the best
computer around. This would not lead to a particularly satisfying
theory of computation, but it shows that if such an ideal were totally
impossible, we would have to be in a universe in which no computers
existed to serve as minimal examples.

>
> If it weren't for your conclusion I would be thinking that I
> understand what you are saying.
> The boundary issues of logic or of other bounded systems are not
> absolute laws that we have to abide by all of the time, they are
> designed for special kinds of thinking.  I believe they are useful
> because they can be used to illuminate certain kinds of situations so
> spectacularly.

What you are saying corresponds to what I called "little" logics, absolutely.

>
> As far as the logic of some kind of system of thinking, or potential
> of thought, I do not feel that the boundaries are absolutely fixed for
> all problems.  We can transcend the boundaries because they are only
> boundaries of thought.  We can for example create connections between
> separated groups of concepts (or whatever) and if these new systems
> can be used to effectively illuminate the workings of some problem and
> they require some additional boundaries in order to avoid certain
> errors, then new boundaries can be constructed for them over or with
> the previous boundaries.

I see what you are thinking now. The "big" logic that we use changes
over time as we learn, so as humans we escape Tarski's proof by being
an ever-moving target rather than one fixed logical system. However,
if this is the solution, there is a challenge that must be met: how,
exactly, do we change over time? Or, ideally speaking, how *should* we
change over time to optimally adapt? The problem is, *if* this
question is answered, then the answer provides another "big" logic for
Tarski's proof to aim at-- we are no longer a moving target.

This pr

Re: [agi] AGI's Philosophy of Learning

2008-08-15 Thread Jim Bromer
On Thu, Aug 14, 2008 at 5:05 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> But, I am looking for a system that "is" me.

You, like everyone else's me, has its limitations.  So there is a
difference between the potential of the system and the actual system.
This point of stressing potentiality rather than casually idealizing
all-inclusiveness, which I originally mentioned only out of technical
feasibility, is significant because you are applying the idea to
yourself.  You would not be able to achieve what you have achieved if
you were busy trying to achieve what all humanity has achieved.  So,
even the potential of the system is dependent on what has already been
achieved.  That is, the true potential of the system (of one's
existence or otherwise) is readjusted as the system evolves.  So a
baby's potential is not greater than ours, the potential of his or her
potential is. (This even makes greater sense when you consider the
fact that individual potential must be within a common range.)

> My only conclusion is that we are talking past each other because we
> are applying totally different models to the problem.
>
> When I say "logic", I mean something quite general-- an ideal system
> of mental operation. "Ideal" means that I am ignoring computational
> resources.

That is an example of how your ideal has gone beyond the feasible
potential of an individual.

> I  think what you are saying is that we can apply different
> logics to different situations, and so we can at one moment operate
> within a logic but at the next moment transcend that logic. This is
> all well and good, but that system of operation in and of itself can
> be seen to be a larger logical system, one that manipulates smaller
> systems. This larger system, we cannot transcend; we *are* that
> system.
>
> So, if no such logic exists, if there is no one "big" logic that
> transcends all the "little" logics that we apply to individual
> situations, then it makes sense to conclude that we cannot exist.
> Right?
> --Abram

Whaaa?

You keep talking about things like fantastic resources but then end up
claiming that your ideal somehow proves that we cannot exist.  (Please
leave me out of your whole non-existence thing by the way. I like
existing and hope to continue at it for some time. I recommend that
you take a similar approach to the problem too.)

If it weren't for your conclusion, I would think that I understood
what you are saying.
The boundary issues of logic, or of other bounded systems, are not
absolute laws that we have to abide by all of the time; they are
designed for special kinds of thinking.  I believe they are useful
because they can be used to illuminate certain kinds of situations so
spectacularly.

As far as the logic of some kind of system of thinking, or potential
of thought, I do not feel that the boundaries are absolutely fixed for
all problems.  We can transcend the boundaries because they are only
boundaries of thought.  We can for example create connections between
separated groups of concepts (or whatever) and if these new systems
can be used to effectively illuminate the workings of some problem and
they require some additional boundaries in order to avoid certain
errors, then new boundaries can be constructed for them over or with
the previous boundaries.

As far as I can tell, the kind of thing that you are talking about
would be best explained by saying that there is only one kind of
'logical' system at work, but it can examine problems using
abstraction by creating theoretical boundaries around the problem.
Why does it have to be good at that?  Because we need to be able to
take in information about a single object, like a building, without
getting entangled in all the real-world interrelations.  We can abstract
because we have to.

I see that you weren't originally talking about whether "you" could
exist; you were originally talking about whether an AI program could
exist.

To be honest, I don't see how my idea of multiple dynamic bounded
systems fails to provide an answer to your question.  The problem with
multiple dynamic bounded systems is that they can accept illusory
conclusions.  But these can be controlled, to some extent, by
examining a concept from numerous presumptions and interrelations, and
by examining how the results of these points of view can be
interrelated with other concepts, including some that are grounded in
the most reliable aspects of the IO data environment.

Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Abram Demski
On Thu, Aug 14, 2008 at 4:26 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> Jim,
>> You are right to call me on that. I need to provide an argument that,
>> if no logic satisfying B exists, human-level AGI is impossible.
>
> I don't know why I am being so aggressive these days.  I don't start
> out intending to be in everyone's face.
>
> If a logical system is incapable of representing a context of a
> problem then it cannot be said that it (the logical system) implies
> that the problem cannot be solved in the system.  You are able to come
> to a conclusion like that because you can transcend the supposed
> logical boundaries of the system.

But, I am looking for a system that "is" me.

I think there is still some confusion, because I still don't see how
your points apply to what I'm saying. (Yes, that is why I replied only
to your first sentence before, hoping the added clarification would
resolve any misunderstanding.)

My only conclusion is that we are talking past each other because we
are applying totally different models to the problem.

When I say "logic", I mean something quite general-- an ideal system
of mental operation. "Ideal" means that I am ignoring computational
resources. I think what you are saying is that we can apply different
logics to different situations, and so we can at one moment operate
within a logic but at the next moment transcend that logic. This is
all well and good, but that system of operation in and of itself can
be seen to be a larger logical system, one that manipulates smaller
systems. This larger system, we cannot transcend; we *are* that
system.

So, if no such logic exists, if there is no one "big" logic that
transcends all the "little" logics that we apply to individual
situations, then it makes sense to conclude that we cannot exist.

Right?

--Abram




Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Jim Bromer
The paradox (I assume that is what you were pointing to) is based on
your idealized presentation.  Not only was your presentation
idealized, but it was also exaggerated.

I sometimes wonder why idealizations can be so effective in some
cases. An idealization is actually an imperfect way of thinking about
the world.  I think that logical idealizations are effective because
as they are refined by knowledge of reality they can illuminate
effective relations by clearing non-central relations out of the way.

And idealizations can lead quickly toward feasible tests if they are
refined towards feasibility based on relevant world experiences.  It
is a little like making some simplistic outrageous claim.  No matter
how absurd it is, if you are willing to examine it based on applicable
cases, you can learn from it.  And if you are willing to make ad-hoc
(or is it post-hoc) refinements to the claim there will be a greater
chance that it will lead toward serendipity.

If an AI program made some claim which it 'thought' it could evaluate,
then the failure of its ability to evaluate it could lead it to find
some other data which it could evaluate.  For example if it recognized
that it could not apply a claim to anything in the IO data
environment, it could subsequently try to do the same kind of thing
with some more obvious situation that it is able to reliably detect in
the IO environment.  If its programming leads it toward
generalization, then it can create systems to detect what it considers
to be kinds of data events.

Jim Bromer

On Thu, Aug 14, 2008 at 4:26 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> Jim,
>> You are right to call me on that. I need to provide an argument that,
>> if no logic satisfying B exists, human-level AGI is impossible.
>
> I don't know why I am being so aggressive these days.  I don't start
> out intending to be in everyone's face.
>
> If a logical system is incapable of representing a context of a
> problem then it cannot be said that it (the logical system) implies
> that the problem cannot be solved in the system.  You are able to come
> to a conclusion like that because you can transcend the supposed
> logical boundaries of the system.  I am really disappointed that you
> did not understand what I was saying, because your unusual social
> skills make you seem unusually capable of understanding what other
> people are saying.
>
> However, your idea is interesting so I am glad that you helped clarify
> it.  I have additional comments below.
>>
>> B1: A foundational logic for a human-level intelligence should be
>> capable of expressing any concept that a human can meaningfully
>> express.
>>
>> If a broad enough interpretation of the word "logic" is taken, this
>> statement is obvious; it could amount to simply "A human level
>> intelligence should be capable of expressing anything it can
>> meaningfully express". (ie, logic = way of operating.)
>>
>> The key idea for me is that logic is not the way we *do* think, it is
>> the way we *should* think, in the ideal situation of infinite
>> computational resources. So, a more refined B would state:
>>
>> B2: The theoretical ideal of how a human-level intelligence should
>> think, should capture everything worth capturing about the way humans
>> actually do think.
>>
>> "Everything worth capturing" means everything that could lead to good 
>> results.
>>
>> So, I argue, if no logic exists satisfying B2, then human-level
>> artificial intelligence is not possible. In fact, I think the negation
>> of B2 is nonsensical:
>>
>> not-B2: There is no concept of how a human-level intelligence should
>> think that captures everything worth capturing about how humans do
>> think.
>>
>> This seems to imply that humans do not exist, since the way humans
>> actually *do* think captures everything worth capturing (as well as
>> some things not worth capturing) about how we think.
>>
>> -Abram
>
> Well it doesn't actually imply that humans do not exist.  (What have
> you been smoking?) I would say that B2 should be potentially capable
> of capturing anything of human thinking worth capturing.  But why
> would I say that?  Just because it makes it a little more feasible?
> Or is there some more significant reason?  Again, B2 should be capable
> of potentially capturing anything from an individual's thinking given
> the base of the expression of those possibilities.  No single human
> mind captures everything possible in human thought.  So in this case
> my suggested refinement is based on the feasible extent of the
> potential of a single mind as opposed to billions of individual minds.
> This makes sense, but again the refinement is derived from what I
> think would be more feasible.  Since there are more limited
> representations of what seem to be models of the way humans think,
> there is no contradiction.  There are just a series of
> constraints.
> Jim Bromer
>



Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Jim Bromer
On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Jim,
> You are right to call me on that. I need to provide an argument that,
> if no logic satisfying B exists, human-level AGI is impossible.

I don't know why I am being so aggressive these days.  I don't start
out intending to be in everyone's face.

If a logical system is incapable of representing a context of a
problem then it cannot be said that it (the logical system) implies
that the problem cannot be solved in the system.  You are able to come
to a conclusion like that because you can transcend the supposed
logical boundaries of the system.  I am really disappointed that you
did not understand what I was saying, because your unusual social
skills make you seem unusually capable of understanding what other
people are saying.

However, your idea is interesting so I am glad that you helped clarify
it.  I have additional comments below.
>
> B1: A foundational logic for a human-level intelligence should be
> capable of expressing any concept that a human can meaningfully
> express.
>
> If a broad enough interpretation of the word "logic" is taken, this
> statement is obvious; it could amount to simply "A human level
> intelligence should be capable of expressing anything it can
> meaningfully express". (ie, logic = way of operating.)
>
> The key idea for me is that logic is not the way we *do* think, it is
> the way we *should* think, in the ideal situation of infinite
> computational resources. So, a more refined B would state:
>
> B2: The theoretical ideal of how a human-level intelligence should
> think, should capture everything worth capturing about the way humans
> actually do think.
>
> "Everything worth capturing" means everything that could lead to good results.
>
> So, I argue, if no logic exists satisfying B2, then human-level
> artificial intelligence is not possible. In fact, I think the negation
> of B2 is nonsensical:
>
> not-B2: There is no concept of how a human-level intelligence should
> think that captures everything worth capturing about how humans do
> think.
>
> This seems to imply that humans do not exist, since the way humans
> actually *do* think captures everything worth capturing (as well as
> some things not worth capturing) about how we think.
>
> -Abram

Well it doesn't actually imply that humans do not exist.  (What have
you been smoking?) I would say that B2 should be potentially capable
of capturing anything of human thinking worth capturing.  But why
would I say that?  Just because it makes it a little more feasible?
Or is there some more significant reason?  Again, B2 should be capable
of potentially capturing anything from an individual's thinking given
the base of the expression of those possibilities.  No single human
mind captures everything possible in human thought.  So in this case
my suggested refinement is based on the feasible extent of the
potential of a single mind as opposed to billions of individual minds.
This makes sense, but again the refinement is derived from what I
think would be more feasible.  Since there are more limited
representations of what seem to be models of the way humans think,
there is no contradiction.  There are just a series of
constraints.
Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Abram Demski
Jim,

You are right to call me on that. I need to provide an argument that,
if no logic satisfying B exists, human-level AGI is impossible.

B1: A foundational logic for a human-level intelligence should be
capable of expressing any concept that a human can meaningfully
express.

If a broad enough interpretation of the word "logic" is taken, this
statement is obvious; it could amount to simply "A human level
intelligence should be capable of expressing anything it can
meaningfully express". (ie, logic = way of operating.) So, with this
interpretation, it doesn't even make sense for B to be false. But,
this is not quite what I mean.

The key idea for me is that logic is not the way we *do* think, it is
the way we *should* think, in the ideal situation of infinite
computational resources. So, a more refined B would state:

B2: The theoretical ideal of how a human-level intelligence should
think, should capture everything worth capturing about the way humans
actually do think.

"Everything worth capturing" means everything that could lead to good results.

So, I argue, if no logic exists satisfying B2, then human-level
artificial intelligence is not possible. In fact, I think the negation
of B2 is nonsensical:

not-B2: There is no concept of how a human-level intelligence should
think that captures everything worth capturing about how humans do
think.

This seems to imply that humans do not exist, since the way humans
actually *do* think captures everything worth capturing (as well as
some things not worth capturing) about how we think.

-Abram

On Thu, Aug 14, 2008 at 2:04 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> A more worrisome problem is that B may be contradictory in and of
>> itself. If (1) I can as a human meaningfully explain logical system X,
>> and (2) logical system X can meaningfully explain anything that humans
>> can, then (3) system X can meaningfully explain itself. Tarski's
>> Indefinability Theorem shows that any such system (under some
>> seemingly reasonable assumptions) can express the concept "This
>> concept is false", and is therefore (again under some seemingly
>> reasonable assumptions) contradictory. So, if we accept those
>> "seemingly reasonable assumptions", no logic satisfying B exists.
>>
>> But, this implies that AI is impossible.
>
> At risk of being really annoying I have to say: it does not imply
> anything of the sort!  How could it imply some idea if it can't even
> represent it?  It implies impossibility to you because you are capable
> of dealing with fictions of boundaries as if they were real until you
> either conflate two or more bounded concepts, or simply transcend them
> by virtue of the extent of your everyday experience.
>
> In order to deal with logic you have to be taught how to consider such
> a thing to be bounded from the rest of reality.  That is not difficult
> because that is a requirement of all thought and it is a fundamental
> necessity of dealing with the real universe.  To study anything you
> have to limit your attention to the subject.
>
> I believe we can use logic in AI to detect possible errors and
> boundary issues.  But then we have to build up a system of knowledge
> from experience which seems to transcend the usual boundaries when
> that kind of transcendent insight becomes useful. After a while
> transcendent insight is itself recognized to be bounded and so it
> becomes part of the mundane.
>
> Think of it this way: boundaries are primarily dependent on limitations.
> Jim Bromer
>
>
>
>
>
>
> On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> This looks like it could be an interesting thread.
>>
>> However, I disagree with your distinction between ad hoc and post hoc.
>> The programmer may see things from the high-level "maze" view, but the
>> program itself typically deals with the "mess". So, I don't think
>> there is a real distinction to be made between post-hoc AI systems and
>> ad-hoc ones.
>>
>> When we decide on the knowledge representation, we predefine the space
>> of solutions that the AI can find. This cannot be avoided. The space
>> can be made wider by restricting the knowledge representation less
>> (for example, allowing the AI to create arbitrary assembly-language
>> programs is less of a restriction than requiring it to learn
>> production-rule programs that get executed by some implementation of
>> the rete algorithm). But obviously we run into hardware restrictions.
>> The broadest space to search is the space of all possible
>> configurations of 1s and 0s inside the computer we're using. An AI
>> method called "Godel Machines" is supposed to do that. William Pearson
>> is also interested in this.
>>
>> Since we're doing philosophy here, I'll take a philosophical stance.
>> Here are my assumptions.
>>
>> 0. I assume that there is some proper logic.
>>
>> 1. I assume probability theory and utility theory, acting upo

Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Jim Bromer
On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> A more worrisome problem is that B may be contradictory in and of
> itself. If (1) I can as a human meaningfully explain logical system X,
> and (2) logical system X can meaningfully explain anything that humans
> can, then (3) system X can meaningfully explain itself. Tarski's
> Indefinability Theorem shows that any such system (under some
> seemingly reasonable assumptions) can express the concept "This
> concept is false", and is therefore (again under some seemingly
> reasonable assumptions) contradictory. So, if we accept those
> "seemingly reasonable assumptions", no logic satisfying B exists.
>
> But, this implies that AI is impossible.

At risk of being really annoying I have to say: it does not imply
anything of the sort!  How could it imply some idea if it can't even
represent it?  It implies impossibility to you because you are capable
of dealing with fictions of boundaries as if they were real until you
either conflate two or more bounded concepts, or simply transcend them
by virtue of the extent of your everyday experience.

In order to deal with logic you have to be taught how to consider such
a thing to be bounded from the rest of reality.  That is not difficult
because that is a requirement of all thought and it is a fundamental
necessity of dealing with the real universe.  To study anything you
have to limit your attention to the subject.

I believe we can use logic in AI to detect possible errors and
boundary issues.  But then we have to build up a system of knowledge
from experience which seems to transcend the usual boundaries when
that kind of transcendent insight becomes useful. After a while
transcendent insight is itself recognized to be bounded and so it
becomes part of the mundane.

Think of it this way: boundaries are primarily dependent on limitations.
Jim Bromer






On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> This looks like it could be an interesting thread.
>
> However, I disagree with your distinction between ad hoc and post hoc.
> The programmer may see things from the high-level "maze" view, but the
> program itself typically deals with the "mess". So, I don't think
> there is a real distinction to be made between post-hoc AI systems and
> ad-hoc ones.
>
> When we decide on the knowledge representation, we predefine the space
> of solutions that the AI can find. This cannot be avoided. The space
> can be made wider by restricting the knowledge representation less
> (for example, allowing the AI to create arbitrary assembly-language
> programs is less of a restriction than requiring it to learn
> production-rule programs that get executed by some implementation of
> the rete algorithm). But obviously we run into hardware restrictions.
> The broadest space to search is the space of all possible
> configurations of 1s and 0s inside the computer we're using. An AI
> method called "Godel Machines" is supposed to do that. William Pearson
> is also interested in this.
>
> Since we're doing philosophy here, I'll take a philosophical stance.
> Here are my assumptions.
>
> 0. I assume that there is some proper logic.
>
> 1. I assume probability theory and utility theory, acting upon
> statements in this logic, are good descriptions of the ideal
> decision-making process (if we do not need to worry about
> computational resources).
>
> 2. I assume that there is some reasonable bayesian prior over the
> logic, and therefore (given #1) that bayesian updating is the ideal
> learning method (again given infinite computation).
>
> This philosophy is not exactly the one you outlined as the AI/AGI
> standard: there is no searching. #2 should ideally be carried out by
> computing the probability of *all* models. With finite computational
> resources, this is typically approximated by searching for
> high-probability models, which works well because the low-probability
> models contribute little to the decision-making process in most cases.
>
> Now, to do some philosophy on my assumptions :).
>
> Consideration of #0:
>
> This is my chief concern. We must find the proper logic. This is very
> close to your concern, because the search space is determined by the
> logic we choose (given the above-mentioned approximation to #2). If
> you think the search space is too restricted, then essentially you are
> saying we need a broader logic. My requirements for the logic are:
> A. The logic should be grounded
> B. The logic should be able to say any meaningful thing a human can say
> The two requirements are not jointly satisfied by any existing logic
> (using my personal definition of grounded, at least). Set theory is
> the broadest logic typically considered, so it comes closest to B, but
> it (and most other logics considered strong enough to serve as a
> foundation of mathematics) do not pass the test of A because the
> manipulation rules do not match up to their semantics. I explain this
>

Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Abram Demski
This looks like it could be an interesting thread.

However, I disagree with your distinction between ad hoc and post hoc.
The programmer may see things from the high-level "maze" view, but the
program itself typically deals with the "mess". So, I don't think
there is a real distinction to be made between post-hoc AI systems and
ad-hoc ones.

When we decide on the knowledge representation, we predefine the space
of solutions that the AI can find. This cannot be avoided. The space
can be made wider by restricting the knowledge representation less
(for example, allowing the AI to create arbitrary assembly-language
programs is less of a restriction than requiring it to learn
production-rule programs that get executed by some implementation of
the rete algorithm). But obviously we run into hardware restrictions.
The broadest space to search is the space of all possible
configurations of 1s and 0s inside the computer we're using. An AI
method called "Godel Machines" is supposed to do that. William Pearson
is also interested in this.

Since we're doing philosophy here, I'll take a philosophical stance.
Here are my assumptions.

0. I assume that there is some proper logic.

1. I assume probability theory and utility theory, acting upon
statements in this logic, are good descriptions of the ideal
decision-making process (if we do not need to worry about
computational resources).

2. I assume that there is some reasonable bayesian prior over the
logic, and therefore (given #1) that bayesian updating is the ideal
learning method (again given infinite computation).

This philosophy is not exactly the one you outlined as the AI/AGI
standard: there is no searching. #2 should ideally be carried out by
computing the probability of *all* models. With finite computational
resources, this is typically approximated by searching for
high-probability models, which works well because the low-probability
models contribute little to the decision-making process in most cases.
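
For concreteness, here is a minimal Python sketch of assumption #2 together
with the approximation described above: exact Bayesian updating over a finite
set of models, pruned to the high-probability ones when resources are limited.
The coin-bias toy model and every name in it are illustrative assumptions
only, not part of any system discussed in this thread.

def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

def bayes_update(prior, likelihood, observation):
    # prior: {model: P(model)}; likelihood(model, obs) -> P(obs | model)
    posterior = {h: p * likelihood(h, observation) for h, p in prior.items()}
    return normalize(posterior)

def prune(dist, keep=10):
    # Approximation: keep only the high-probability models, since the
    # low-probability ones contribute little to decision-making.
    top = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:keep]
    return normalize(dict(top))

# Toy example: the "models" are coin biases, the observations are 'H'/'T'.
biases = [i / 10.0 for i in range(1, 10)]
posterior = normalize({b: 1.0 for b in biases})
likelihood = lambda b, obs: b if obs == 'H' else 1.0 - b
for obs in "HHTHHHTH":
    posterior = prune(bayes_update(posterior, likelihood, obs))
print(max(posterior, key=posterior.get))  # currently most probable bias

With infinite resources prune() would be unnecessary; it only stands in for
the resource-bounded approximation mentioned above.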

Now, to do some philosophy on my assumptions :).

Consideration of #0:

This is my chief concern. We must find the proper logic. This is very
close to your concern, because the search space is determined by the
logic we choose (given the above-mentioned approximation to #2). If
you think the search space is too restricted, then essentially you are
saying we need a broader logic. My requirements for the logic are:
A. The logic should be grounded
B. The logic should be able to say any meaningful thing a human can say
The two requirements are not jointly satisfied by any existing logic
(using my personal definition of grounded, at least). Set theory is
the broadest logic typically considered, so it comes closest to B, but
it (and most other logics considered strong enough to serve as a
foundation of mathematics) do not pass the test of A because the
manipulation rules do not match up to their semantics. I explain this
at some length here:

http://groups.google.com/group/opencog/browse_thread/thread/28755f668e2d4267/10245c1d4b3984ca?lnk=gst&q=abramdemski#10245c1d4b3984ca

A more worrisome problem is that B may be contradictory in and of
itself. If (1) I can as a human meaningfully explain logical system X,
and (2) logical system X can meaningfully explain anything that humans
can, then (3) system X can meaningfully explain itself. Tarski's
Indefinability Theorem shows that any such system (under some
seemingly reasonable assumptions) can express the concept "This
concept is false", and is therefore (again under some seemingly
reasonable assumptions) contradictory. So, if we accept those
"seemingly reasonable assumptions", no logic satisfying B exists.

But, this implies that AI is impossible. So, some of the seemingly
reasonable assumptions need to be dismissed. (But I don't know which
ones.)
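
A compressed sketch of the standard argument being invoked here, with the
"seemingly reasonable assumptions" made explicit (the notation is
illustrative, written in LaTeX):

  Assume a theory $T$, with enough arithmetic for diagonalization, whose
  language contains a truth predicate $\mathrm{Tr}$ satisfying the T-schema
  \[ T \vdash \mathrm{Tr}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi
     \quad \text{for every sentence } \varphi. \]
  The diagonal lemma then gives a "liar" sentence $L$ with
  \[ T \vdash L \leftrightarrow \neg\,\mathrm{Tr}(\ulcorner L\urcorner). \]
  Instantiating the T-schema at $L$ yields $T \vdash L \leftrightarrow \neg L$,
  so $T$ is inconsistent.  The assumptions doing the work are the
  unrestricted T-schema, enough arithmetic for the diagonal lemma, and
  classical logic.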

Consideration of #2:

Assumption 3 is that there exists some reasonable prior probability
distribution that we can use for learning. A now-common way of
choosing this prior is the minimum description length principle, which
tells us that shorter theories are more probable.
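
In symbols, the usual way to write such a prior (a generic MDL-style form,
not anything specific to this thread) is

  \[ P(h) \;\propto\; 2^{-L(h)}, \]

where $L(h)$ is the description length of hypothesis $h$ in bits, so every
extra bit of description halves the prior probability.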

The following argument was sent to me by private email by Wei Dai, and
I think it is very revealing:

"I did suggest a prior based on set theory, but then I realized that
it doesn't really solve the entire problem. The real problem seems to
be that if we formalize induction as Bayesian sequence prediction with
a well-defined prior, we can immediately produce a sequence that an
ideal predictor should be able to predict, but this one doesn't, no
matter what the prior is. Specifically, the sequence is the "least
expected" sequence of the predictor. We generate each symbol in this
sequence by feeding the previous symbols to the predictor and then
pick the next symbol as the one that it predicts with the smallest
probability. (Pick the lexicographically first symbol if more than one
has the smallest probability.)

This least expected sequence has a simple description, and therefore
should not be the least e
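
The construction Wei Dai describes above is easy to write down.  A minimal
Python sketch follows; the frequency-counting toy predictor and all of the
names are purely illustrative assumptions, not anyone's actual proposal.

def least_expected_sequence(predictor, alphabet, length):
    # At each step, emit the symbol the predictor considers least likely,
    # breaking ties in favor of the lexicographically first symbol.
    history = []
    for _ in range(length):
        dist = predictor(tuple(history))
        worst = min(sorted(alphabet), key=lambda s: dist.get(s, 0.0))
        history.append(worst)
    return "".join(history)

def frequency_predictor(history, alphabet="01"):
    # Toy predictor: Laplace-smoothed symbol frequencies of the history.
    counts = {s: 1 + list(history).count(s) for s in alphabet}
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

print(least_expected_sequence(frequency_predictor, "01", 20))

For this particular predictor the adversarial sequence comes out as a trivial
alternating pattern, which is exactly the point: the sequence has a very
short description, yet at every step the predictor assigned it the smallest
possible probability.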

Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Jim Bromer
I realized that I made a very important error in my brief description
of prejudice.  Prejudice is the inappropriate application of
over-generalizations, typically critical ones, to a group.  The
prejudice is triggered by a superficial characteristic that most
members of the group do share, but the critical characteristics are
pinned on the group either contrary to evidence that only a few
members actually exhibit them, or in spite of the lack of evidence
that almost all members of the group exhibit them.

So prejudice, which is over-generalization based on one or a few
superficial characteristics of the group in spite of the evidence of
the group's diversity, will typically be used to apply a list of
grievances to the group as a whole, even though there is plainly no
evidence that all of its members exhibit the undesirable behavior the
grievances highlight.

So then one of the earmarks of intellectual prejudice, that is
prejudice against a group based on ideas, is that the group is
typically mis-characterized as being of one mind.  That is, prejudice,
based on a single or a few superficial characteristics that the group
does share to some extent, is used inappropriately to suggest that all
of the members of the group share some other idea in common in spite
of evidence to the contrary.

However, even if the over-generalization of the group is obvious this
does not mean that the intellectual prejudice is intentional.

There is also a case to be made about the style of presentation.  But
when a person never tries to qualify his remarks, or to explain
himself when he gets a criticism about his presentation of obvious
over-generalizations, then we can take this as evidence that his
remarks are based, at least to some degree, on superficial prejudice
rather than on objective evidence and reasoning.

Jim Bromer




Re: [agi] AGI's Philosophy of Learning

2008-08-14 Thread Jim Bromer
One of the worst problems of early AI was that it over-generalized
when it tried to use a general rule on a specific case.  Actually,
early AI programs over-generalized, under-generalized, and
under-specified problem solutions, but over-generalization was the
most notable because they relied primarily on word-based
generalizations, which are often simplifications of what would
otherwise be immensely complicated, qualified cases.  For instance,
"all primates have arms" is not really true (even if it is typically
true) because there are some primates that don't have arms (like
people who have lost their arms or who were born without arms).  So
if a computer program was written to make judgments based on general
rules which were then applied to information that was input for some
problem, it might come to an incorrect conclusion for those less
typical cases that did not match the overly generalized descriptions.
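
To make the failure mode concrete, here is a toy Python sketch (entirely
illustrative, not a reconstruction of any historical system) contrasting a
hard, exceptionless rule with the same rule treated as a default:

facts = {
    "bob":   {"primate"},
    "alice": {"primate", "born_without_arms"},
}

def has_arms_hard_rule(name):
    # Early-AI style: "all primates have arms", applied without exception.
    return "primate" in facts[name]

def has_arms_default_rule(name):
    # Default reasoning: primates have arms *unless* we know otherwise.
    f = facts[name]
    return "primate" in f and "born_without_arms" not in f

print(has_arms_hard_rule("alice"))     # True:  over-generalized, wrong
print(has_arms_default_rule("alice"))  # False: the exception is respected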

When a person over-generalizes he can appear to be exaggerating or
relying on prejudiced thinking because a human being has such well
developed capabilities that odd over-generalizations tend to stand
out.  That is probably why early AI did not seem to work very well.
The over-generalizations were so apparent to most people that the
programs would disappoint anyone who expected too much from them.  The
programmers might be very excited because they saw the potential in
the first steps made in the technology, whereas the non-programmer
skeptic might only see the insipid errors.

Over-generalization is one of the tools of the prejudiced.  They see
some characteristics in a few of the members of the group of people
they are trying to disparage and instead of accurately describing an
experience they might have had or might have heard about with an
individual, they convert it into a statement about them people (as
Archie Bunker used to say) as if the characteristic seen in a few
could be applied to the entire group.  This is typically combined with
exaggeration which is often used to make a weak case stronger and a
dull story more interesting.  And since people who tend to dwell on
the faults of a group that is different from them in some superficial
way often don't have much intellectually stimulating work to occupy
them, they may rely on over-simplification as well.  And when you add
emotional exaggeration to this mix the over-generalizations of the
prejudiced mind can become quite apparent and really out of touch with
reality.  Of course, there are some exceptions to the rules.  A few
prejudiced people are very intelligent, and they may be as civil
towards others as they are prejudiced.  But for the most part,
negative prejudice is associated with contempt and hatred and it is
not based on objective thinking.

But when you think about it, prejudice is not just a problem of
over-generalization, but of under-generalization as well.  For some
reason the prejudiced person has difficulty generalizing about the
positive experiences they have had with the members of the group that
they feel so much contempt for.  The reason is probably that they
never have good experiences because their own attitudes make all of
their experiences undesirable.  But the problem can be seen in this
case to be in their own heads, not somewhere else.

Of course what may appear to be prejudiced may actually be based on
uncommon thinking relative to some group.  How can anyone
differentiate between prejudice and individuality?  Often prejudice is
directed at some group based on ethnic difference, color of the skin,
religion, national identity or towards some working group that holds
different ethical views, whereas individuality is seen in contrast to
groups that are otherwise quite diverse.  But this kind of rule should
not be over-generalized.  The non-conformist has to have some good
reasons for his views if he is constantly claiming that the group has
got it wrong, and he has to be able to make the case that the group
thinks in groupthink if he is claiming that the group members all
share some underlying principle.  To make the claim that his
individualistic views (or the views of someone who the critic thinks
got it right) are somehow stronger than the group's belief, he would
have to be capable of presenting objective evidence to show, for
example, that belief in system A, which he disagrees with in contrast
to the other members of the group, would invalidate some general truth
B which the group does believe in.  But the problem with this definition of
objective reasonable non-conformity is that it too may be subject to
the artifacts of thought that cause over-generalization even if it lacks
the earmarks of more common prejudices.  (My definition should not be
over-generalized by the way. I had to simplify it in order to make it
understandable, and I doubt that I could qualify it to the extent that
would be necessary to make it into some kind of general truth.)

So we have to strive for objectivity and qualification in order to
avoid making the errors of ov

[agi] AGI's Philosophy of Learning

2008-08-13 Thread Mike Tintner
THE POINT OF PHILOSOPHY:  There seemed to be some confusion re this - the 
main point of philosophy is that it makes us aware of the frameworks that 
are brought to bear on any subject, from sci to tech to business to arts - 
and therefore the limitations of those frameworks. Crudely, it says: hey 
you're looking in 2D, you could be looking in 3D or nD.


Classic example: Kuhn. Hey, he said, we've thought science discovers bodies 
feature-by-feature, with a steady-accumulation-of-facts. Actually those 
studies are largely governed by paradigms [or frameworks] of bodies, which 
heavily determine what features we even look for in the first place. A 
beautiful piece of philosophical analysis.


AGI: PROBLEM-SOLVING VS LEARNING.

I have difficulties with AGI-ers, because my philosophical approach to AGI 
is -  start with the end-problems that an AGI must solve, and how they 
differ from AI. No one though is interested in discussing them - to a great 
extent, perhaps, because the general discussion of such problem distinctions 
throughout AI's history (and through psychology's and philosophy's history) 
has been pretty poor.


AGI-ers, it seems to me, focus on learning - on how AGI's must *learn* to 
solve problems. The attitude is : if we can just develop a good way for 
AGI's to learn here, then they can learn to solve any problem, and gradually 
their intelligence will just take off, (hence superAGI). And there is a 
great deal of learning theory in AI, and detailed analysis of different 
modes of learning, that is logic- and maths-based. So AGI-ers are more 
comfortable with this approach.


PHILOSOPHY OF LEARNING

However there is relatively little broad-based philosophy of learning. Let's 
do some.


V. broadly, the basic framework, it seems to me, that AGI imposes on 
learning to solve problems is:


1) define a *set of options* for solving a problem, and attach, if you can, 
certain probabilities to them


2) test those options, and carry the best, if any, forward

3) find a further set of options from the problem environment, and test 
those, updating your probabilities and also perhaps your basic rules for 
applying them, as you go


And, basically, just keep going like that, grinding your way to a solution, 
and adapting your program.
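
To pin down what that loop looks like in code, here is a bare-bones Python
sketch of steps 1) to 3) above.  The scoring function, the proposal rule and
every name in it are invented placeholders for illustration, not anyone's
actual AGI design.

import random

def learn(score, propose, rounds=10, pool_size=20):
    # score(option) -> float; propose(best_so_far) -> a new candidate option.
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        options = [propose(best) for _ in range(pool_size)]  # step 1: a set of options
        for opt in options:                                  # step 2: test them
            s = score(opt)
            if s > best_score:
                best, best_score = opt, s
        # step 3: the next round's options are generated around the best
        # one found so far, i.e. the weighting gets updated implicitly.
    return best, best_score

# Toy problem: find x that maximizes a noisy one-dimensional function.
score = lambda x: -(x - 3.2) ** 2 + 0.5 * random.random()
propose = lambda best: random.uniform(-10, 10) if best is None else best + random.gauss(0, 1)
print(learn(score, propose))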


What separates AI from AGI is that in the former:

* the set of options [or problem space] is well-defined, [as say, for how a 
program can play chess] and the environment is highly accessible.  AGI-ers 
recognize their world is much more complicated and not so clearly defined, 
and full of *uncertainty*.


But the common philosophy of both AI and AGI and programming, period, it 
seems to me, is: test a set of options.


THE $1M QUESTION with both approaches is: *how do you define your set of 
options*? That's the question I'd like you to try and answer. Let's make it 
more concrete.


a) Defining A Set of Actions?  Take AGI agents, like Ben's, in virtual 
worlds. Such agents must learn to perform physical actions and move about 
their world. Ben's had to learn how to move to a ball and pick it up.


So how do you define the set of options here - the set of 
actions/trajectories-from-A-to-B that an agent must test? For, say, moving 
to, or picking up/hitting a ball. Ben's tried a load - how were they 
defined? And by whom? The AGI programmer or the agent?


b) Defining A Set of Associations?  Essentially, a great deal of formal 
problem-solving comes down to working out that A is associated with B (if 
C, D, E, and however many conditions apply) - whether A "means," "causes," 
or "contains" B, etc.


So basically you go out and test a set of associations, involving A and B 
etc, to solve the problem. If you're translating or defining language, you 
go and test a whole set of statements involving the relevant words, say "He 
jumped over the limit" to know what it means.


So, again, how do you define the set of options here - the set of 
associations to be tested, e.g. the set of texts to be used on Google, say, 
for reference for your translation?


c) What's The Total Possible Set of Options [Actions/Associations] - how can 
you work out the *total* possible set of options to be tested (as opposed to 
the set you initially choose)?  Is there one with any AGI problem?


Can the set of options be definitively defined at all? Is it infinite, say, 
for that set of trajectories, or somehow limited?  (Is there a definitive 
or guaranteed way to learn language?)


d) How Can You Ensure the Set of Options is not arbitrary?  That you won't 
entirely miss out the crucial options no matter how many more you add? Is 
defining a set of options an art not a science - the art of programming, 
pace Matt?


POST HOC VS AD HOC APPROACHES TO LEARNING:  It seems to me there should be a 
further condition to how you define your set of options.


Basically, IMO, AGI learns to solve problems, and AI solves them, *post 
hoc.* AFTER the problem has already been solved/learned.


The pe