Re: [agi] Deviations from generality

2019-11-09 Thread TimTyler

On 2019-11-08 20:34 PM, rounce...@hotmail.com wrote:
The thing about the adversary controlling the environment around the
agent: his brain is working with the same physics as your feet
hitting the floor, but it's not simulatable in a physics system,
because it's not mechanical to start with. But the reason it could
never be simulated is that you don't have x-ray vision to build the
model of his brain to predict what he does!

Adversaries don't need perfect models to be able to thwart your
ability to attain your goals. They need some skill and ability,
of course, but high-quality simulations of your brain are
absolutely not required.

On 2019-11-08 19:30 PM, Stanley Nilsen wrote:

Jumping down into the laws of physics is one example.  Weren't people 
fairly intelligent when they knew little about physics and the laws of 
nature?  Yes, there is the "repeatability" of natural phenomena given 
that nature runs on pretty strict rules, but is the "intelligent" 
stuff man does, that is, making "better" choices, due to the fact that 
man "learned" the details of nature's rules?


That's part of it, yes. The brain builds a model of the world
and uses it to predict the future consequences of its possible
actions; the predicted outcomes are then evaluated, and that
evaluation is the data used to choose between actions. Of course
the model is not all represented consciously and labelled as
"the laws of physics" - but a representation of the laws of
physics is still there, even in cavemen who lived long
before Newton was born.
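The predict-evaluate-choose loop described above can be sketched in a few lines (a minimal illustration; the additive "world model" and all the names are invented for this example, not taken from anyone's post):

```python
# Minimal model-based action selection: simulate each candidate action
# through a (toy) world model, score the predicted outcome, pick the best.

def world_model(state, action):
    """Toy stand-in for the brain's learned physics: predict the next state."""
    return state + action  # assumes simple additive dynamics, for illustration

def evaluate(state, goal):
    """Score a predicted state: higher is better (negative distance to goal)."""
    return -abs(goal - state)

def choose_action(state, actions, goal):
    # Predict the consequence of each action, then pick the best-scoring one.
    return max(actions, key=lambda a: evaluate(world_model(state, a), goal))

best = choose_action(state=0, actions=[-1, 0, 1, 2], goal=5)
print(best)  # the action whose predicted outcome lands closest to the goal
```

The point of the sketch is only that action selection runs through prediction: the "laws of physics" live inside `world_model`, whether or not the agent can articulate them.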

--

__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3326778943da25b8-Mf4892dfa1d7e7e7c20396249
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Deviations from generality

2019-11-08 Thread rouncer81
The "no free lunch" theorem, IMO, is something to be skeptical about.   I say 
it's just fear of the unknown again, helped along by the fact that ppl haven't 
optimized lossless similarity matching past linear search yet.   There's nothing 
abstractly mathematical here, it's just ppl failing at things because they 
aren't special enough. :) hawhaw

The thing about the adversary controlling the environment around the agent: 
his brain is working with the same physics as your feet hitting the floor, but 
it's not simulatable in a physics system, because it's not mechanical to start 
with. But the reason it could never be simulated is that you don't have x-ray 
vision to build the model of his brain to predict what he does! :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3326778943da25b8-Mbe7622f14ec70696f36bc2c7


Re: [agi] Deviations from generality

2019-11-08 Thread Stanley Nilsen

Hi Tim,

Interesting that your talk mentions simplicity and Occam's razor but 
doesn't seem to head in the simple direction.
Jumping down into the laws of physics is one example.  Weren't people 
fairly intelligent when they knew little about physics and the laws of 
nature?  Yes, there is the "repeatability" of natural phenomena given 
that nature runs on pretty strict rules, but is the "intelligent" stuff 
man does, that is, making "better" choices, due to the fact that man 
"learned" the details of nature's rules?


If I were to use Occam's razor to try to find the explanation of human 
intelligence, I would say that man's tendency toward "better choices" 
has much to do with "adopting" good practices from those in his 
society.  Example - Man learned to build a fire from others before he 
figured out about oxidation and exothermic reactions...


It doesn't gain much traction on this mailing list to talk in simple 
terms, but there is a simple way to view "general" intelligence. I'll 
keep it short - know as much as you can about opportunity: how to 
recognize it, how to evaluate it, and how to prepare oneself to act on 
it.   That can't help but lead to better choices - 
my definition of intelligence.


Much could be said about computers, languages, various algorithms and 
capabilities of hardware, and all the rest - but most of it is only a 
component of the big picture.   Not too many big pictures out there.


Stan
P.S. By the way, I am interested in details too.  I've been playing with 
computers since we used modems to talk to them... I'm not 
anti-technology, just disappointed that there is so much hype over new 
technology.   There are machine intelligence "weaknesses", but we tend to 
ignore those and look at the bells and whistles.




On 11/7/19 9:50 PM, TimTyler wrote:

Hi. I am giving a talk on machine intelligence next week.
I have a slide about "generality" and I have an associated
question that I thought I would try running by you guys.

My question is basically: what do you think of this
presentation, and how can I improve on it? The
presentation goes something like this:

My slides refer to definitions of intelligence that say it is
"general", and then mention that the "no free lunch" theorems of
Wolpert and Macready from 1997 say that, averaged over all possible
search problems, no algorithm performs better than any other - and
so an intelligent agent can't really be "general" in this sense.
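The averaging claim can be checked directly on a tiny domain (an illustrative sketch, not Wolpert and Macready's formalism): enumerate every objective function on a four-point search space and compare two fixed, non-repeating search orders.

```python
from itertools import product

# "No free lunch" on a toy scale: averaged over ALL objective functions
# f: {0,1,2,3} -> {0,1,2}, two fixed non-repeating search orders take the
# same average number of evaluations to find the maximum.
orders = {"ascending": [0, 1, 2, 3], "custom": [2, 0, 3, 1]}

def evals_to_find_max(f, order):
    """Evaluations a fixed search order needs to first hit f's maximum."""
    best = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == best:
            return k

for name, order in orders.items():
    total = sum(evals_to_find_max(f, order)
                for f in product(range(3), repeat=4))
    print(name, total / 3 ** 4)  # identical averages for both orders
```

Because the set of all functions is closed under permuting the search space, no fixed order can win on average; any advantage on some functions is exactly cancelled on others.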

I then argue that some kinds of assumption about the space
of problems likely to be faced are actually quite reasonable
and acceptable. I then list some of these assumptions:

One is "the uniformity of nature". Physics appears to be
fairly uniform in space and time. The uniformity of nature
allows induction to work. It means that the past is relevant
to the future. It means that experiences in one place are
relevant in other places and that experiences at one time
are also relevant later.
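A small sketch of why uniformity matters for induction (the predictor and the sequences are invented for illustration): a frequency-based learner does well exactly when the sequence keeps obeying the same rule.

```python
# Induction pays off when the process is stationary. Compare a simple
# majority-vote predictor on a uniform ("lawful") sequence versus one
# whose rule changes halfway through.

def predict_next(history):
    """Predict the symbol seen most often so far (simple induction)."""
    return max(set(history), key=history.count)

stationary = "A" * 20             # nature keeps behaving the same way
shifting = "A" * 10 + "B" * 10    # the "law" changes at step 10

def accuracy(seq):
    hits = sum(predict_next(seq[:i]) == seq[i] for i in range(1, len(seq)))
    return hits / (len(seq) - 1)

# Induction succeeds on the stationary sequence, fails after the shift.
print(accuracy(stationary), accuracy(shifting))
```

Nothing is wrong with the predictor in the second case; the assumption it embodies (the past resembles the future) is simply false there.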

Matter in the universe also exhibits a fair amount of
regularity, due to repulsive forces, and the common,
diluting forces of radiation, diffusion, turbulence
and entropy increase. These make concentrations of
matter tend to spread out. That is especially true
far from deep gravity wells (where attractive forces
dominate). That happens to be where most living systems
find themselves. These effects compound the
"uniformity of nature" effect.

Another is Occam's razor. Occam's razor says that simpler
hypotheses that are compatible with observations should be
preferred. Occam's razor has subsequently been generalized
to include a measure of time complexity.
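One way to make the razor concrete is to weight each hypothesis by two to the power of minus its description length (a toy sketch; the periodic-pattern hypotheses here are invented for the example):

```python
# Occam's razor as a prior: among hypotheses consistent with the
# observations, weight each by 2**(-description_length); the shortest
# consistent generator gets the most weight.

observations = "010101"

def consistent(pattern, data):
    """True if repeating `pattern` reproduces `data`."""
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

hypotheses = ["01", "0101", "010101"]  # all three generate the observations
weights = {h: 2 ** -len(h) for h in hypotheses if consistent(h, observations)}
best = max(weights, key=weights.get)
print(best)  # "01": the simplest hypothesis is preferred
```

This length-weighted prior is the spirit of the generalizations mentioned above: simplicity is traded off quantitatively, not invoked as an absolute rule.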

Another involves nature's preference for simplicity. One
cause of this is locality. Physics is local (quantum mechanics
notwithstanding). Locality imposes limits on the complexity of
observed sequences, and it means that sequences with simpler
generators are actually more likely to be observed. In
three-dimensional space, radiation and diffusion make distant
effects less relevant. Again, these factors lead to simpler
sequences being more likely to be observed.

I then say that other types of assumption - associated with the
laws of physics, the regularity of the universe's contents,
or the special place of living systems in the universe -
are also possible.

Lastly, I argue that these assumptions could be mistaken.
Probably the most important example where this could happen
is if your environment is being controlled by an adversary.
They could then manipulate things so that your assumptions
are systematically mistaken.
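The adversary point can be made concrete (an invented toy, not anyone's proposal): if the environment can run your predictor, it can emit the opposite of every prediction, so any deterministic predictor scores zero, however "general" it is.

```python
# An adversary that observes a deterministic predictor and always emits
# whatever the predictor did NOT guess, making it systematically wrong.

def majority_predictor(history):
    """Predict the bit seen most often so far (defaults to 0)."""
    if not history:
        return 0
    return max((0, 1), key=history.count)

def adversarial_sequence(predictor, length):
    history, seq = [], []
    for _ in range(length):
        guess = predictor(history)
        actual = 1 - guess          # adversary emits the opposite bit
        seq.append(actual)
        history.append(actual)
    return seq

seq = adversarial_sequence(majority_predictor, 10)
hits = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(hits)  # 0: every single prediction is wrong
```

The same construction works against any deterministic predictor you substitute for `majority_predictor`, which is why the benign-environment assumptions in the talk are assumptions and not theorems.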


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3326778943da25b8-Mff2238694ba986bb3512f816