It seems to me that most of the arguments presented here could be understood as 
defending intelligence. They seem to be saying that we should be able to build 
intelligent machines. And of course, these arguments are likely valid.

However, I think that the problem I posed is a bit different. It is not a 
question of whether one can build smart machines, much smarter than what we 
have today. My question is about the notion of “general intelligence”: some 
sort of agent that can solve intelligence-type problems better than any other, 
specialized agent can. The assumption underlying general intelligence seems to 
be that this superb, generally intelligent agent can solve all the problems 
that specialized intelligences can, and solve them equally well, plus it can 
solve further problems that specialized agents cannot. That is general vs. 
specialized intelligence, as I understand it.

The opposite notion, the one that adheres to no-free-lunch, is that by adding 
more capabilities to an agent, you always lose something on the other, 
specialized side. You never get to solve a larger number of problems without 
losing something in specialized performance. You never become so generally 
intelligent that you can solve everything presented to you. Instead, you only 
become better adapted to a certain niche, and you always pay some price. For 
example, if you are as big and strong as an elephant, it is hard to hide under 
a stone the way a small mouse can; animals as strong as elephants may, for 
example, go extinct before mice do. Similarly, if you are smart at math, you 
may have to pay a price somewhere else. At the level of human intelligence, 
idiot savants may be examples of human beings who are super-smart in some 
domains but pay a price by underperforming in other domains. The idea of no 
free lunch is that you cannot be small and big at the same time, and you cannot 
have the intellectual capabilities of an idiot savant and of a non-idiot savant 
at the same time.

If I am right about that, then the no-free-lunch theorem should always apply, 
and thus there should exist no such thing as general intelligence.
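
To make the intuition concrete, here is a minimal sketch in Python of the 
no-free-lunch idea, under a toy setup of my own (the four-point input space and 
the two learners below are purely illustrative, not anyone's formal 
construction): average over every possible labelling of the inputs, and any two 
learners with different biases score exactly the same on the unseen points.

from itertools import product

INPUTS = [0, 1, 2, 3]        # a tiny, made-up input space
TRAIN = [0, 1]               # inputs whose labels the learners get to see
TEST = [2, 3]                # inputs they are judged on afterwards

def learner_majority(train_labels):
    # "Specialist" bias: always predict the majority label seen in training.
    majority = 1 if sum(train_labels) * 2 >= len(train_labels) else 0
    return lambda x: majority

def learner_contrarian(train_labels):
    # A different bias: predict the flipped label of a corresponding training input.
    return lambda x: 1 - train_labels[x % len(train_labels)]

scores = {"majority": 0, "contrarian": 0}
for labels in product([0, 1], repeat=len(INPUTS)):   # all 16 possible target functions
    train_labels = [labels[i] for i in TRAIN]
    for name, make in [("majority", learner_majority),
                       ("contrarian", learner_contrarian)]:
        predict = make(train_labels)
        scores[name] += sum(predict(x) == labels[x] for x in TEST)

# Averaged over all possible targets, each learner is correct on exactly half
# of the unseen points: 16 labellings * 2 test points * 0.5 = 16 hits each.
print(scores)    # {'majority': 16, 'contrarian': 16}

Each learner wins on some labellings and loses on others; once you average over 
all of them, neither is better. That equal split is the price I mean when I say 
there is no free lunch.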

But I repeat, this does not prevent us from building a machine with human-like 
intellectual capabilities. We just should not expect that it could be created 
by some special “divine” algorithm that is the last word in algorithms and 
somehow miraculously achieves general intelligence capabilities that work 
everywhere and for everything. That is, there will be no equivalent of E = mc² 
for the problem of intelligence.

To conclude, there is a difference between arguing for advances in intelligent 
machines on one side and arguing for the applicability of the no-free-lunch 
theorem to those machines (including humans) on the other. I think Jennifer’s 
ideas were somewhat in agreement with that conclusion.

With regards,

Danko

From: immortal.discover...@gmail.com
Sent: Monday, 3 February 2020, 20:59
To: AGI
Subject: Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

Hmm, we don't like the No Free Lunch Theorem. Matt and I, and any other 
enlightened one, don't. Ignore Stefan :-D and ignore R81 :|. We follow the AGI 
Theorem instead. We believe it doesn't get contradicted. It means the same 
thing to say we attack the NFL theorem or attack the contradiction it makes. 
Yes, we are expecting free lunches, less work, more of that good 
change/evolution to come. Survival makes us think about what is good. Nothing 
is actually ugly or painful or gay or bad or a hobby or makes sense or is 
complex or superior. So particle sorting, like the sorting of array items or 
the sorting in data compression, leads to survival/patterns that emerge, and 
extracts new insights/patterns from old data; data self-recursion. So yes, we 
expect free lunches; we expect change to occur; we expect to survive longer; we 
expect it to happen on its own and soon; evolution; physics. It's just the 
evolution of particles settling into equilibrium, from chaos to patterns.

Evolution is sorting particles so that they are more pattern-y, so they are 
more compressible and can remove the trash and learn the actual elementary and 
higher facets/features (cat, face, nose, curve, line, dot). But Earth actually 
grows more patterns and also grows larger by eating nearby planets. So it's not 
just shrinking but actually more growth than size lost, all while becoming more 
pattern-y. However, as I said, you can't create/delete particles/bits, so 
nothing is 'compressed'; rather, you are finding the pattern representation, 
removing/randomizing the trash after 'compression', then taking chaotic matter 
and growing the system/pattern larger.

