RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
> -Original Message-
> From: Steve Richfield [mailto:steve.richfi...@gmail.com]
> On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose 
> wrote:
> "statements of stupidity" - some of these are examples of cramming
> sophisticated thoughts into simplistic compressed text.
> 
> Definitely, as even the thoughts of stupid people transcend our (present)
> ability to state what is happening behind their eyeballs. Most stupidity is
> probably beyond simple recognition. For the initial moment, I was just
> looking at the linguistic low-hanging fruit.

You are talking about those phrases, some of them clichés, that act like local
Kolmogorov-complexity minima in a knowledge graph of partial linguistic
structure, where neural computational energy is conserved and the statements
are patterns with isomorphisms to other experiential knowledge, both intra-
and inter-agent. More intelligent agents have ways of working more optimally
with that neural computational energy, perhaps by using other, more efficient
patterns and thus avoiding those particular detrimental pattern/statements.
But the statements are catchy because they are common, allow some minimization
of computational energy, and act like objects in a higher-level communication
protocol. They take fewer bits to store and fewer bits per second to transfer.
Their impact is maximal since they are isomorphic across knowledge and
experience. At some point they may simply become symbols due to their
pre-calculated commonness.
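
A toy illustration of the "fewer bits to store and transfer" point: if a stock
phrase is common enough to earn an entry in a codebook shared by both agents,
they can exchange a short symbol instead of the full string. This is only a
sketch; the phrase list and the one-byte symbol width are assumptions made up
for illustration.

# Toy codebook compression: common phrases become short symbols.
# The phrase list and the one-byte symbol width are illustrative assumptions.

CODEBOOK = {
    "I had no choice but to": 0,
    "if it sounds too good to be true, it probably is": 1,
    "that is just your opinion": 2,
}

def encoded_size_bits(message: str) -> int:
    """Bits needed if every known phrase is replaced by a 1-byte symbol."""
    size = 0
    rest = message
    for phrase in CODEBOOK:
        if phrase in rest:
            rest = rest.replace(phrase, "")
            size += 8                      # one byte for the codebook symbol
    size += len(rest.encode("utf-8")) * 8  # residual text sent verbatim
    return size

msg = "I had no choice but to sell the house."
print(len(msg.encode("utf-8")) * 8, "bits raw vs", encoded_size_bits(msg), "bits encoded")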

> Language is both intelligence enhancing and limiting. Human language is a
> protocol between agents. So there is minimalist data transfer, "I had no
> choice but to ..." is a compressed summary of potentially vastly complex
> issues.
> 
> My point is that they could have left the country, killed their adversaries,
> taken on a new ID, or done any number of radical things that they probably
> never considered, other than taking whatever action they chose to take. A
> more accurate statement might be "I had no apparent rational choice but to
> ...".

The other, low-probability choices are lossily compressed out of the expressed
statement pattern. It's assumed that there were other choices; they are
usually factored back in during decompression, which is situational and
depends on the complexity of the communication. The onus, at times, is on the
person listening to the stupid statement.

> The mind gets hung-up sometimes on this language of ours. Better off at
> times to think less using English language and express oneself with a wider
> spectrum communiqué. Doing a dance and throwing paint in the air for
> example, as some *primitive* cultures actually do, conveys information also
> and is a medium of expression rather than using a restrictive human chat
> protocol.
> 
> You are saying that the problem is that our present communication permits
> statements of stupidity, so we shouldn't have our present system of
> communication? Scrap English?!!! I consider statements of stupidity as a
> sort of communications checksum, to see if real interchange of ideas is even
> possible. Often, it is quite impossible to communicate new ideas to
> inflexible-minded people.
> 

Of course not scrap English; it's too ingrained. Though it is rather limiting,
and I've never seen an alternative that isn't in the same region of
limitedness, except perhaps mathematics. But that is limited too in many ways,
due to its symbology and its usual dimensional representation.

> BTW the rules of etiquette of the human language "protocol" are even more
> potentially restricting though necessary for efficient and standardized data
> transfer to occur. Like, TCP/IP for example. The "Etiquette" in TCP/IP is
> like an OSI layer, akin to human language etiquette.
> 
> I'm not sure how this relates, other than possibly identifying people who
> don't honor linguistic etiquette as being (potentially) stupid. Was that
> your point?
> 

Well, agents (us) communicate. There is a communication protocol. The
protocol has layers, sort of. Patterns, and chunks of patterns, the common
ones, are passed between agents. These get put into the knowledge/intelligence
graph and operated on and with, stored, replicated, etc. Linguistic
restrictions in some ways cause the bottlenecks. The language, English in this
case, has rules of etiquette, where violations can cause breakdowns in
informational transfer efficiency and coordination unless other effective
pattern channels exist - music, for example: some types of chants violate
normal English etiquette yet can convey information that is almost
indescribable in (proper) linguistic terms.
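
The TCP/IP analogy can be made concrete with a toy layered encoder: an
"etiquette" layer wraps the content payload roughly the way a transport header
wraps application data. The layer names and framing below are invented for
illustration, not a claim about any real protocol stack.

# Toy layering: an "etiquette" layer wraps a content payload, loosely
# analogous to how transport/network headers wrap application data.
# The layer names and framing are illustrative assumptions.

def wrap_content(thought: str) -> str:
    return f"CONTENT|{thought}"

def wrap_etiquette(frame: str, register: str = "polite") -> str:
    return f"ETIQUETTE:{register}|{frame}"

def unwrap(packet: str) -> str:
    # Strip headers layer by layer; a violation of the expected framing
    # ("bad etiquette") breaks decoding even if the payload is intact.
    etiquette, _, frame = packet.partition("|")
    if not etiquette.startswith("ETIQUETTE:"):
        raise ValueError("etiquette layer violated; transfer breaks down")
    label, _, thought = frame.partition("|")
    if label != "CONTENT":
        raise ValueError("content layer violated")
    return thought

packet = wrap_etiquette(wrap_content("I had no choice but to ..."))
print(unwrap(packet))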

John







RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
> -Original Message-
> From: Ian Parker [mailto:ianpark...@gmail.com]
> 
> The Turing test is not in fact a test of intelligence, it is a test of
> similarity with the human. Hence for a machine to be truly Turing it would
> have to make mistakes. Now any "useful" system will be made as intelligent
> as we can make it. The TT will be seen to be an irrelevancy.
> 
> Philosophical question no 1 :- How useful is the TT.
> 

The TT in its basic form is rather simplistic. It's usually thought of in its
ideal form: the determination of whether something is an AI or a human. I look
at it more as analog versus discrete Boolean. Much of what is out there is
human with computer augmentation and echoes of human interaction. It's blurry
in reality, and the TT has been passed in some ways, but not in its most ideal
way.
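
One way to read the "analog versus discrete Boolean" point is to score a
conversation on a continuum rather than issuing a pass/fail verdict. A minimal
sketch, with an entirely made-up scoring rule and threshold:

# Toy graded "Turing score": judges rate humanness on [0, 1] instead of a
# binary pass/fail. The ratings and the 0.5 threshold are assumptions.

def turing_score(judge_ratings: list[float]) -> float:
    """Average humanness rating across judges, in [0, 1]."""
    return sum(judge_ratings) / len(judge_ratings)

ratings = [0.8, 0.55, 0.7]   # hypothetical per-judge ratings
score = turing_score(ratings)
print(f"graded score: {score:.2f}", "(classic TT pass)" if score > 0.5 else "")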

> As I said in my correspondence With Jan Klouk, the human being is stupid,
> often dangerously stupid.
> 
> Philosophical question 2 - Would passing the TT assume human stupidity and
> if so would a Turing machine be dangerous? Not necessarily, the Turing
> machine could talk about things like jihad without ultimately identifying
> with it.
> 

Humans without augmentation are only so intelligent. A Turing machine, a
really well-built one, would be potentially dangerous. At some point we'd need
to see some DNA as ID, as part of another, "extended" TT.

> Philosophical question 3 :- Would a TM be a psychologist? I think it would
> have to be. Could a TM become part of a population simulation that would
> give us political insights.
> 

You can have a relatively stupid TM or a sophisticated one, just as with
humans. It might be easier to pass the TT by not exposing too much
intelligence.

John

> These 3 questions seem to me to be the really interesting ones.
> 
> 
>   - Ian Parker 






Re: [agi] How To Create General AI Draft2

2010-08-06 Thread Abram Demski
On Fri, Aug 6, 2010 at 8:22 PM, Abram Demski  wrote:

>
> (Without this sort of generality, your approach seems restricted to
> gathering knowledge about whatever events unfold in front of a limited
> quantity of high-quality camera systems which you set up. To be honest, the
> usefulness of that sort of knowledge is not obvious.)
>

On second thought, this statement was a bit naive. You obviously intend the
camera systems to be connected to robots or other systems which perform
actual tasks in the world, providing a great variety of information
including feedback from success/failure of actions to achieve results.

What is unrealistic to me is not that this information could be useful, but
that this level of real-world intelligence could be achieved with the
super-high confidence bounds you are imagining. What I think is that
probabilistic reasoning is needed. Once we have the object/location/texture
information with those confidence bounds (which I do see as possible),
gaining the sort of knowledge Cyc set out to contain seems inherently
statistical.
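
One way to picture the "inherently statistical" step: aggregate detected facts
into frequencies and keep the confidence attached to each generalization. The
fact format and the numbers below are invented purely for illustration.

# Toy statistical knowledge acquisition: aggregate detected
# (object, relation, object) facts into counts and report conditional
# frequencies, with the sample size as a crude confidence proxy.

from collections import Counter

observations = [                      # hypothetical outputs of a vision system
    ("cup", "on", "table"),
    ("cup", "on", "table"),
    ("cup", "on", "floor"),
    ("book", "on", "table"),
]

counts = Counter(observations)
total_cup_on = sum(c for (s, r, _o), c in counts.items() if (s, r) == ("cup", "on"))

for (s, r, o), c in counts.items():
    if (s, r) == ("cup", "on"):
        print(f"P({o} | {s} {r}) = {c / total_cup_on:.2f}  (n={c})")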


>
> --Abram
>
>
>
> On Fri, Aug 6, 2010 at 4:44 PM, David Jones  wrote:
>
>> Hey Guys,
>>
>> I've been working on writing out my approach to create general AI to share
>> and debate it with others in the field. I've attached my second draft of it
>> in PDF format, if you guys are at all interested. It's still a work in
>> progress and hasn't been fully edited. Please feel free to comment,
>> positively or negatively, if you have a chance to read any of it. I'll be
>> adding to and editing it over the next few days.
>>
>> I'll try to reply more professionally than I have been lately :) Sorry :S
>>
>> Cheers,
>>
>> Dave
>>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic
>



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] How To Create General AI Draft2

2010-08-06 Thread Abram Demski
David,

Seems like a reasonable argument to me. I agree with the emphasis on
acquiring knowledge. I agree that tackling language first is not the easiest
path. I agree with the comments on compositionality of knowledge & the
regularity of the vast majority of the environment.

Vision seems like a fine domain choice. However, there are other domain
choices. I think your goal of generality would be well-served by keeping in
mind some of these other domains at the same time as vision, so that your
algorithms have some cross-domain applicability. The same algorithm that can
find patterns in the visual field should also be able to find patterns in a
database of medical information, say. That is my way of thinking, at least.
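
One concrete way to keep that cross-domain applicability in view is to write
the pattern finder against an abstract observation type rather than pixels.
The interface and the trivial "recurring adjacent pair" notion of a pattern
below are assumptions for illustration only, not a proposed algorithm.

# Sketch of a domain-agnostic pattern finder: it sees only sequences of
# hashable observations, so the same code runs on visual tokens or on rows
# pulled from a medical database. "Pattern = recurring adjacent pair" is a
# deliberately trivial stand-in for a real learning algorithm.
# (Requires Python 3.10+ for itertools.pairwise.)

from collections import Counter
from itertools import pairwise

def frequent_pairs(stream, min_count=2):
    """Return adjacent pairs that recur at least min_count times."""
    counts = Counter(pairwise(stream))
    return [pair for pair, c in counts.items() if c >= min_count]

visual_tokens = ["edge", "corner", "edge", "corner", "blob"]
medical_rows = [("fever", "high"), ("cough", "dry"), ("fever", "high"), ("cough", "dry")]

print(frequent_pairs(visual_tokens))   # patterns in a visual stream
print(frequent_pairs(medical_rows))    # same code, different domain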

(Without this sort of generality, your approach seems restricted to
gathering knowledge about whatever events unfold in front of a limited
quantity of high-quality camera systems which you set up. To be honest, the
usefulness of that sort of knowledge is not obvious.)

--Abram


On Fri, Aug 6, 2010 at 4:44 PM, David Jones  wrote:

> Hey Guys,
>
> I've been working on writing out my approach to create general AI to share
> and debate it with others in the field. I've attached my second draft of it
> in PDF format, if you guys are at all interested. It's still a work in
> progress and hasn't been fully edited. Please feel free to comment,
> positively or negatively, if you have a chance to read any of it. I'll be
> adding to and editing it over the next few days.
>
> I'll try to reply more professionally than I have been lately :) Sorry :S
>
> Cheers,
>
> Dave
>



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] How To Create General AI Draft2

2010-08-06 Thread Mike Tintner
1) You don't define the difference between narrow AI and AGI - or make clear 
why your approach is one and not the other

2) "Learning about the world" won't cut it -  vast nos. of progs. claim they 
can learn about the world - what's the difference between narrow AI and AGI 
learning?

3) "Breaking things down into generic components allows us to learn about and 
handle the vast majority of things we want to learn about. This is what makes 
it general!"

A wild assumption, unproven, nowhere demonstrated, and untrue. It is interesting 
philosophically because it implicitly underlies AGI-ers' fantasies of 
"take-off". You can compare it to the idea that all science can be reduced to 
physics. If it could, then an AGI could indeed take off. But it's demonstrably 
not so.

You don't seem to understand that the problem of AGI is to deal with the NEW - 
the unfamiliar, that which cannot be broken down into familiar categories - and 
then find ways of dealing with it ad hoc.

You have to demonstrate a capacity for dealing with the new. (As opposed to, 
say, narrow AI squares).




From: David Jones 
Sent: Friday, August 06, 2010 9:44 PM
To: agi 
Subject: [agi] How To Create General AI Draft2


Hey Guys,

I've been working on writing out my approach to create general AI to share and 
debate it with others in the field. I've attached my second draft of it in PDF 
format, if you guys are at all interested. It's still a work in progress and 
hasn't been fully edited. Please feel free to comment, positively or 
negatively, if you have a chance to read any of it. I'll be adding to and 
editing it over the next few days.

I'll try to reply more professionally than I have been lately :) Sorry :S

Cheers,

Dave 





Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread David Jones
On Fri, Aug 6, 2010 at 7:37 PM, Jim Bromer  wrote:

> On Wed, Aug 4, 2010 at 9:27 AM, David Jones  wrote:
> *So, why computer vision? Why can't we just enter knowledge manually?*
>
> a) The knowledge we require for AI to do what we want is vast and complex
> and we can prove that it is completely ineffective to enter the knowledge we
> need manually.
> b) Computer vision is the most effective means of gathering facts about the
> world. Knowledge and experience can be gained from analysis of these facts.
> c) Language is not learned through passive observation. The associations
> that words have to the environment and our common sense knowledge of the
> environment/world are absolutely essential to language learning,
> understanding and disambiguation. When visual information is available,
> children use visual cues from their parents and from the objects they are
> interacting with to figure out word-environment associations. If visual info
> is not available, touch is essential to replace the visual cues. Touch can
> provide much of the same info as vision, but it is not as effective because
> not everything is in reach and it provides less information than vision.
> There is some very good documentation out there on how children learn
> language that supports this. One example is "How Children Learn Language" by
> William O'grady.
> d) The real world cannot be predicted blindly. It is absolutely essential
> to be able to directly observe it and receive feedback.
> e) Manual entry of knowledge, even if possible, would be extremely slow and
> would be a very serious bottleneck(it already is). This is a major reason we
> want AI... to increase our man power and remove man-power related
> bottlenecks.
>  
>
> Discovering a way to get a computer program to interpret a human language
> is a difficult problem.  The feeling that an AI program might be able to
> attain a higher level of intelligence if only it could examine data from a
> variety of different kinds of sensory input modalities it is not new.  It
> has been tried and tried during the past 35 years.  But there is no
> experimental data (that I have heard of) that suggests that this method is
> the only way anyone will achieve intelligence.
>

"if only it could examine data from a variety of different kinds of sensory
input modalities"

That statement suggests that such "different kinds" of input have no
meaningful relationship to the problem at hand. I'm not talking about
different kinds of input. I'm talking about explicitly and deliberately
extracting facts about the environment from sensory perception, specifically
remote perception or visual perception. The input "modalities" are not what
is important. It is the facts that you can extract from computer vision that
are useful in understanding what is out there in the world, what
relationships and associations exist, and how language is associated with
the environment.

It is well documented that children learn language by interacting with
adults around them and using cues from them to learn how the words they
speak are associated with what is going on. It is not hard to support the
claim that extensive knowledge about the world is important for
understanding and interpreting human language. Nor is it hard to support the
idea that such knowledge can be gained from computer vision.
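
The word-environment association point can be illustrated with a toy
cross-situational learner: count how often each word co-occurs with each
observed object across scenes, and take the strongest association. The scenes
and vocabulary below are invented for illustration; real models of this in the
child-language literature are far more sophisticated.

# Toy cross-situational word learning: associate words with the objects that
# co-occur with them most often across scenes. All data is illustrative.

from collections import defaultdict

scenes = [  # (objects visible in the scene, utterance heard)
    ({"ball", "dog"}, "look at the ball"),
    ({"ball", "cup"}, "the ball rolls"),
    ({"dog", "cup"}, "the dog barks"),
    ({"dog", "ball"}, "the dog runs"),
]

cooccur = defaultdict(lambda: defaultdict(int))
for objects, utterance in scenes:
    for word in utterance.split():
        for obj in objects:
            cooccur[word][obj] += 1

for word in ("ball", "dog"):
    best = max(cooccur[word], key=cooccur[word].get)
    print(word, "->", best, f"(co-occurrences: {cooccur[word][best]})")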



>
>
> I have tried to explain that I believe the problem is twofold.  First of
> all, there have been quite a few AI programs that worked real well as long
> as the problem was simple enough.  This suggests that the complexity of
> what is trying to be understood is a critical factor.  This in turn
> suggests that using different input modalities, would not -in itself- make
> AI possible.
>

Your conclusion isn't supported by your arguments. I'm not even saying it
makes AI possible. I'm saying that a system can make reasonable inferences
and come to reasonable conclusions with sufficient knowledge. Without
sufficient knowledge, there is reason to believe that it is significantly
harder and often impossible to come to correct conclusions.

Therefore, gaining knowledge about how things are related is not just
helpful in making correct inferences, it is required. So, "different input
modalities" which can give you facts about the world, which in turn would
give you knowledge about the world, do make correct reasoning possible, when
it otherwise would not be possible.

You see, it has nothing to do with the source of the info or whether it is
more info or not. It has everything to do with the relationships that the
information has. Just calling them "different input modalities" is not
correct.



>   Secondly, there is a problem of getting the computer to accurately model
> that which it can know in such a way that it could be effectively utilized
> for higher degrees of complexity.
>

This is an engineering problem, not necessarily a problem that can't be

Re: [agi] Computer Vision not as hard as I thought!

2010-08-06 Thread Jim Bromer
On Wed, Aug 4, 2010 at 9:27 AM, David Jones  wrote:
*So, why computer vision? Why can't we just enter knowledge manually?*

a) The knowledge we require for AI to do what we want is vast and complex
and we can prove that it is completely ineffective to enter the knowledge we
need manually.
b) Computer vision is the most effective means of gathering facts about the
world. Knowledge and experience can be gained from analysis of these facts.
c) Language is not learned through passive observation. The associations
that words have to the environment and our common sense knowledge of the
environment/world are absolutely essential to language learning,
understanding and disambiguation. When visual information is available,
children use visual cues from their parents and from the objects they are
interacting with to figure out word-environment associations. If visual info
is not available, touch is essential to replace the visual cues. Touch can
provide much of the same info as vision, but it is not as effective because
not everything is in reach and it provides less information than vision.
There is some very good documentation out there on how children learn
language that supports this. One example is "How Children Learn Language" by
William O'Grady.
d) The real world cannot be predicted blindly. It is absolutely essential to
be able to directly observe it and receive feedback.
e) Manual entry of knowledge, even if possible, would be extremely slow and
would be a very serious bottleneck (it already is). This is a major reason we
want AI... to increase our manpower and remove manpower-related
bottlenecks.


Discovering a way to get a computer program to interpret a human language is
a difficult problem.  The feeling that an AI program might be able to attain
a higher level of intelligence if only it could examine data from a variety
of different kinds of sensory input modalities is not new.  It has been
tried and tried during the past 35 years.  But there is no experimental data
(that I have heard of) that suggests that this method is the only way anyone
will achieve intelligence.



I have tried to explain that I believe the problem is twofold.  First of
all, there have been quite a few AI programs that worked really well as long
as the problem was simple enough.  This suggests that the complexity of what
is trying to be understood is a critical factor.  This in turn suggests that
using different input modalities would not, in itself, make AI possible.
Secondly, there is a problem of getting the computer to accurately model that
which it can know in such a way that it could be effectively utilized for
higher degrees of complexity.  I consider this to be a conceptual integration
problem.  We do not know how to integrate different kinds of ideas (or
idea-like knowledge) in an effective manner, and as a result we have not
seen the gradual advancement in AI programming that we would expect to see
given all the advances in computer technology that have been occurring.



Both visual analysis and linguistic analysis are significant challenges in
AI programming.  The idea that combining both of them would make the problem
1/2 as hard may not be any crazier than saying that it would make the
problem 2 times as hard, but without experimental evidence it isn't any
saner either.

Jim Bromer




On Wed, Aug 4, 2010 at 9:27 AM, David Jones  wrote:

> :D Thanks Jim for paying attention!
>
> One very cool thing about the human brain is that we use multiple feedback
> mechanisms to correct for such problems as observer movement. For example,
> the inner ear senses your bodies movement and provides feedback for visual
> processing. This is why we get nauseous when the ear disagrees with the eyes
> and other senses. As you said, eye muscles also provide feedback about how
> the eye itself has moved. In example papers I have read, such as "Object
> Discovery through Motion, Appearance and Shape", the researchers know the
> position of the camera (I'm not sure how) and use that to determine which
> moving features are closest to the cameras movement, and therefore are not
> actually moving. Once you know how much the camera moved, you can try to
> subtract this from apparent motion.
>
> You're right that I should attempt to implement the system. I think I will
> in fact, but it is difficult because I have limited time and resources. My
> main goal is to make sure it is accomplished, even if not by me. So,
> sometimes I think that it is better to prove that it can be done than to
> actually spend a much longer amount of time to actually do it myself. I am
> struggling to figure out how I can gather the resources or support to
> accomplish the monstrous task. I think that I should work on the theoretical
> basis in addition to the actual implementation. This is likely important to
> make sure that my design is well grounded and reflects reality. It is very
> hard 
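
The camera-motion subtraction described in the quoted text can be sketched as
subtracting the flow induced by known camera motion from observed feature
motion; whatever residual remains is attributed to independently moving
objects. The numbers and the pure-translation assumption below are
illustrative only.

# Toy ego-motion compensation: subtract the apparent motion caused by a
# known, purely translating camera from observed 2-D feature motion.
# Depth and camera rotation are ignored for simplicity; all values are
# illustrative assumptions.

camera_motion = (1.0, 0.0)       # px/frame of apparent shift induced by the camera

observed_flow = {                # feature name -> (dx, dy) apparent motion
    "wall_corner": (1.0, 0.0),   # static scene point: moves only with the camera
    "pedestrian": (3.5, 0.2),    # independently moving object
}

for name, (dx, dy) in observed_flow.items():
    residual = (dx - camera_motion[0], dy - camera_motion[1])
    moving = abs(residual[0]) + abs(residual[1]) > 0.5   # arbitrary threshold
    print(f"{name}: residual={residual}, independently moving={moving}")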

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Abram Demski
Jim,

>From the article Matt linked to, specifically see the line:

"As [image: p] is itself a binary string, we can define the discrete
universal a priori probability, [image: m(x)], to be the probability that
the output of a universal prefix Turing machine [image: U] is [image:
x]when provided with fair coin flips on the input tape."

--Abram
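
For reference, the standard way that definition is written (this is the usual
textbook formulation, not anything specific to this thread), together with the
dominance property referred to further down the thread:

% Discrete universal a priori probability: sum over all programs p on which
% the universal prefix machine U halts with output x, weighted by the chance
% of producing p from fair coin flips.
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}

% Because some coin-flip sequences never halt, \sum_x m(x) < 1, i.e. m is a
% semimeasure -- the "sums to less than 1" point discussed below.

% Dominance: for every computable semimeasure \mu there is a constant
% c_\mu > 0, independent of x, such that
m(x) \;\ge\; c_\mu \, \mu(x) \quad \text{for all } x.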

On Fri, Aug 6, 2010 at 3:38 PM, Matt Mahoney  wrote:

> Jim, see http://www.scholarpedia.org/article/Algorithmic_probability
> I think this
> answers your questions.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* Jim Bromer 
> *To:* agi 
> *Sent:* Fri, August 6, 2010 2:18:09 PM
>
> *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction
>
> I meant:
> Did Solomonoff's original idea use randomization to determine the bits of
> the programs that are used to produce the *prior probabilities*?  I think
> that the answer to that is obviously no.  The randomization of the next bit
> would used in the test of the prior probabilities as done using a random
> sampling.  He probably found that students who had some familiarity with
> statistics would initially assume that the prior probability was based on
> some subset of possible programs as would be expected from a typical sample,
> so he gave this statistics type of definition to emphasize the extent of
> what he had in mind.
>
> I asked this question just to make sure that I understood what Solomonoff
> Induction was, because Abram had made some statement indicating that I
> really didn't know.  Remember, this particular branch of the discussion was
> originally centered around the question of whether Solomonoff
> Induction would be convergent, even given a way around the incomputability
> of finding only those programs that halted.  So while the random testing of
> the prior probabilities is of interest to me, I wanted to make sure that
> there is no evidence that Solomonoff Induction is convergent. I am not being
> petty about this, but I also needed to make sure that I understood what
> Solomonoff Induction is.
>
> I am interested in hearing your ideas about your variation of
> Solomonoff Induction because your convergent series, in this context, was
> interesting.
> Jim Bromer
>
> On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer  wrote:
>
>> Jim: So, did Solomonoff's original idea involve randomizing whether the
>>> next bit would be a 1 or a 0 in the program?
>>
>> Abram: Yep.
>> I meant, did Solomonoff's original idea involve randomizing whether the
>> next bit in the program's that are originally used to produce the *prior
>> probabilities* involve the use of randomizing whether the next bit would
>> be a 1 or a 0?  I have not been able to find any evidence that it was.
>> I thought that my question was clear but on second thought I guess it
>> wasn't. I think that the part about the coin flips was only a method to
>> express that he was interested in the probability that a particular string
>> would be produced from all possible programs, so that when actually testing
>> the prior probability of a particular string the program that was to be run
>> would have to be randomly generated.
>> Jim Bromer
>>
>>
>>
>>
>> On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski wrote:
>>
>>> Jim,
>>>
>>>  Your function may be convergent but it is not a probability.

>>>
>>> True! All the possibilities sum to less than 1. There are ways of
>>> addressing this (ie, multiply by a normalizing constant which must also be
>>> approximated in a convergent manner), but for the most part adherents of
>>> Solomonoff induction don't worry about this too much. What we care about,
>>> mostly, is comparing different hypotheses to decide which to favor. The
>>> normalizing constant doesn't help us here, so it usually isn't mentioned.
>>>
>>>
>>> You said that Solomonoff's original construction involved flipping a coin
>>> for the next bit.  What good does that do?
>>>
>>>
>>> Your intuition is that running totally random programs to get predictions
>>> will just produce garbage, and that is fine. The idea of Solomonoff
>>> induction, though, is that it will produce systematically less garbage than
>>> just flipping coins to get predictions. Most of the garbage programs will be
>>> knocked out of the running by the data itself. This is supposed to be the
>>> least garbage we can manage without domain-specific knowledge
>>>
>>> This is backed up with the proof of dominance, which we haven't talked
>>> about yet, but which is really the key argument for the optimality of
>>> Solomonoff induction.
>>>
>>>
>>> And how does that prove that his original idea was convergent?
>>>
>>>
>>> The proofs of equivalence between all the different formulations of
>>> Solomonoff induction are something I haven't cared to look into too deeply.
>>>
>>> Since his idea is incomputable, there are no algorithms that can be run
>>> to demonstrate what he

Re: [agi] AGI & Alife

2010-08-06 Thread Mike Tintner
This is on the surface interesting. But I'm kinda dubious about it. 

I'd like to know exactly what's going on - who or what (what kind of organism) 
is solving what kind of problem about what? The exact nature of the problem and 
the solution, not just a general blurb description.

If you follow the link from Kurzweil, you get a really confusing 
picture/screen. And I wonder whether the real action/problem-solving isn't 
largely taking place in the viewer/programmer's mind.


From: rob levy 
Sent: Friday, August 06, 2010 7:23 PM
To: agi 
Subject: Re: [agi] AGI & Alife


Interesting article: 
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1


On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck  wrote:

  Ian Parker wrote


  > I would like your
  > opinion on *proofs* which involve an unproven hypothesis,


  I've no elaborated opinion on that.








Re: [agi] AGI & Alife

2010-08-06 Thread Ian Parker
This is much more interesting in the context of evolution than it is for
the creation of AGI. The point is that all the things that have been done
would have been done (much more simply, in fact) by straightforward narrow
programs. However, it does resemble the early multicellular organisms of the
Precambrian and early Cambrian.

What AGI is interested in is how *language* evolves, that is to say, the last
6 million years or so. We also need a process for creating AGI which is
rather more efficient than evolution. We can't wait that long for something
to happen.


  - Ian Parker

On 6 August 2010 19:23, rob levy  wrote:

> Interesting article:
> http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
>
> On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck wrote:
>
>> Ian Parker wrote
>>
>> > I would like your
>> > opinion on *proofs* which involve an unproven hypothesis,
>>
>> I've no elaborated opinion on that.
>>
>>
>





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Matt Mahoney
Jim, see http://www.scholarpedia.org/article/Algorithmic_probability
I think this answers your questions.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer 
To: agi 
Sent: Fri, August 6, 2010 2:18:09 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


I meant:
Did Solomonoff's original idea use randomization to determine the bits of the
programs that are used to produce the prior probabilities?  I think that the
answer to that is obviously no.  The randomization of the next bit would be
used in the test of the prior probabilities as done using a random sampling.
He probably found that students who had some familiarity with statistics would
initially assume that the prior probability was based on some subset of
possible programs as would be expected from a typical sample, so he gave this
statistics type of definition to emphasize the extent of what he had in mind.

I asked this question just to make sure that I understood what Solomonoff
Induction was, because Abram had made some statement indicating that I really
didn't know.  Remember, this particular branch of the discussion was originally
centered around the question of whether Solomonoff Induction would be
convergent, even given a way around the incomputability of finding only those
programs that halted.  So while the random testing of the prior probabilities
is of interest to me, I wanted to make sure that there is no evidence that
Solomonoff Induction is convergent. I am not being petty about this, but I also
needed to make sure that I understood what Solomonoff Induction is.

I am interested in hearing your ideas about your variation of
Solomonoff Induction because your convergent series, in this context, was
interesting.
Jim Bromer


On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer  wrote:

Jim: So, did Solomonoff's original idea involve randomizing whether the next
bit would be a 1 or a 0 in the program?

Abram: Yep.

I meant, did Solomonoff's original idea involve randomizing whether the next
bit in the programs that are originally used to produce the prior probabilities
involve the use of randomizing whether the next bit would be a 1 or a 0?  I
have not been able to find any evidence that it was.  I thought that my
question was clear but on second thought I guess it wasn't. I think that the
part about the coin flips was only a method to express that he was interested
in the probability that a particular string would be produced from all possible
programs, so that when actually testing the prior probability of a particular
string the program that was to be run would have to be randomly generated.
Jim Bromer
 
 

 
On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski  wrote:

Jim,
>
>
>Your function may be convergent but it is not a probability. 
>>
>
>True! All the possibilities sum to less than 1. There are ways of addressing
>this (ie, multiply by a normalizing constant which must also be approximated
>in a convergent manner), but for the most part adherents of Solomonoff induction
>don't worry about this too much. What we care about, mostly, is comparing 
>different hypotheses to decide which to favor. The normalizing constant doesn't 
>help us here, so it usually isn't mentioned. 
>
>
>
>
>You said that Solomonoff's original construction involved flipping a coin for 
>the next bit.  What good does that do?
>
>Your intuition is that running totally random programs to get predictions will 
>just produce garbage, and that is fine. The idea of Solomonoff induction, 
>though, is that it will produce systematically less garbage than just flipping 
>coins to get predictions. Most of the garbage programs will be knocked out of 
>the running by the data itself. This is supposed to be the least garbage we
>can manage without domain-specific knowledge
>
>This is backed up with the proof of dominance, which we haven't talked about 
>yet, but which is really the key argument for the optimality of Solomonoff 
>induction. 
>
>
>
>
>And how does that prove that his original idea was convergent?
>
>The proofs of equivalence between all the different formulations of Solomonoff 
>induction are something I haven't cared to look into too deeply. 
>
>
>
>
>Since his idea is incomputable, there are no algorithms that can be run to 
>demonstrate what he was talking about so the basic idea is papered with all 
>sorts of unverifiable approximations.
>
>I gave you a proof of convergence for one such approximation, and if you wish
>I can modify it to include a normalizing constant to ensure that it is a
>probability measure. It would be helpful to me if your criticisms were more 
>specific to that proof.
>
>
>
>So, did Solomonoff's original idea involve randomizing whether the next bit 
>would be a 1 or a 0 in the program? 
>
>>
>
>Yep. 
>
>
>
>Even ignoring the halting problem what kind of result would that give?
>>

Well, the general idea is this. An even distribution i

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
John,

Congratulations, as your response was the only one that was on topic!!!

On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose wrote:

> "statements of stupidity" - some of these are examples of cramming
> sophisticated thoughts into simplistic compressed text.
>

Definitely, as even the thoughts of stupid people transcend our (present)
ability to state what is happening behind their eyeballs. Most stupidity is
probably beyond simple recognition. For the initial moment, I was just
looking at the linguistic low-hanging fruit.

Language is both intelligence enhancing and limiting. Human language is a
> protocol between agents. So there is minimalist data transfer, "I had no
> choice but to ..." is a compressed summary of potentially vastly complex
> issues.
>

My point is that they could have left the country, killed their adversaries,
taken on a new ID, or done any number of radical things that they probably
never considered, other than taking whatever action they chose to take. A
more accurate statement might be "I had no apparent rational choice but to
...".

The mind gets hung-up sometimes on this language of ours. Better off at
> times to think less using English language and express oneself with a wider
> spectrum communiqué. Doing a dance and throwing paint in the air for
> example, as some **primitive** cultures actually do, conveys information
> also and is medium of expression rather than using a restrictive human chat
> protocol.
>

You are saying that the problem is that our present communication permits
statements of stupidity, so we shouldn't have our present system of
communication? Scrap English?!!! I consider statements of stupidity as a
sort of communications checksum, to see if real interchange of ideas is even
possible. Often, it is quite impossible to communicate new ideas to
inflexible-minded people.

>
>
> BTW the rules of etiquette of the human language "protocol" are even more
> potentially restricting though necessary for efficient and standardized data
> transfer to occur. Like, TCP/IP for example. The "Etiquette" in TCP/IP is
> like an OSI layer, akin to human language etiquette.
>

I'm not sure how this relates, other than possibly identifying people who
don't honor linguistic etiquette as being (potentially) stupid. Was that
your point?

Steve
==

>
> *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
>
> To All,
>
> I have posted plenty about "statements of ignorance", our probable
> inability to comprehend what an advanced intelligence might be "thinking",
> heidenbugs, etc. I am now wrestling with a new (to me) concept that
> hopefully others here can shed some light on.
>
> People often say things that indicate their limited mental capacity, or at
> least their inability to comprehend specific situations.
>
> 1)  One of my favorites are people who say "I had no choice but to ...",
> which of course indicates that they are clearly intellectually challenged
> because there are ALWAYS other choices, though it may be difficult to find
> one that is in all respects superior. While theoretically this statement
> could possibly be correct, in practice I have never found this to be the
> case.
>
> 2)  Another one recently from this very forum was "If it sounds too good to
> be true, it probably is". This may be theoretically true, but in fact was,
> as usual, made as a statement as to why the author was summarily dismissing
> an apparent opportunity of GREAT value. This dismissal of something BECAUSE
> of its great value would seem to severely limit the authors prospects for
> success in life, which probably explains why he spends so much time here
> challenging others who ARE doing something with their lives.
>
> 3)  I used to evaluate inventions for some venture capitalists. Sometimes I
> would find that some basic law of physics, e.g. conservation of energy,
> would have to be violated for the thing to work. When I explained this to
> the inventors, their inevitable reply was "Yea, and they also said that the
> Wright Brothers' plane would never fly". To this, I explained that the
> Wright Brothers had invested ~200 hours of effort working with their crude
> homemade wind tunnel, and ask what the inventors have done to prove that
> their own invention would work.
>
> 4)  One old stupid standby, spoken when you have make a clear point that
> shows that their argument is full of holes "That is just your opinion". No,
> it is a proven fact for you to accept or refute.
>
> 5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
> there may be enough of these to dismiss some significant fraction of
> prospective users of beyond-human-capability (I just hate the word
> "intelligence") programs.
>
> In short, semantic analysis of these statements typically would NOT find
> them to be conspicuously false, and hence even an AGI would be tempted to
> accept them. However, their use almost universally indicates some
> short-circuit in thinking. The pr

Re: [agi] AGI & Alife

2010-08-06 Thread rob levy
Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1

On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck wrote:

> Ian Parker wrote
>
> > I would like your
> > opinion on *proofs* which involve an unproven hypothesis,
>
> I've no elaborated opinion on that.
>
>
>





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer
I meant:
Did Solomonoff's original idea use randomization to determine the bits of
the programs that are used to produce the *prior probabilities*?  I think
that the answer to that is obviously no.  The randomization of the next bit
would be used in the test of the prior probabilities as done using a random
sampling.  He probably found that students who had some familiarity with
statistics would initially assume that the prior probability was based on
some subset of possible programs as would be expected from a typical sample,
so he gave this statistics type of definition to emphasize the extent of
what he had in mind.

I asked this question just to make sure that I understood what Solomonoff
Induction was, because Abram had made some statement indicating that I
really didn't know.  Remember, this particular branch of the discussion was
originally centered around the question of whether Solomonoff
Induction would be convergent, even given a way around the incomputability
of finding only those programs that halted.  So while the random testing of
the prior probabilities is of interest to me, I wanted to make sure that
there is no evidence that Solomonoff Induction is convergent. I am not being
petty about this, but I also needed to make sure that I understood what
Solomonoff Induction is.

I am interested in hearing your ideas about your variation of
Solomonoff Induction because your convergent series, in this context, was
interesting.
Jim Bromer

On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer  wrote:

> Jim: So, did Solomonoff's original idea involve randomizing whether the
>> next bit would be a 1 or a 0 in the program?
>
> Abram: Yep.
> I meant, did Solomonoff's original idea involve randomizing whether the
> next bit in the programs that are originally used to produce the *prior
> probabilities* involve the use of randomizing whether the next bit would
> be a 1 or a 0?  I have not been able to find any evidence that it was.
> I thought that my question was clear but on second thought I guess it
> wasn't. I think that the part about the coin flips was only a method to
> express that he was interested in the probability that a particular string
> would be produced from all possible programs, so that when actually testing
> the prior probability of a particular string the program that was to be run
> would have to be randomly generated.
> Jim Bromer
>
>
>
>
> On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski wrote:
>
>> Jim,
>>
>>  Your function may be convergent but it is not a probability.
>>>
>>
>> True! All the possibilities sum to less than 1. There are ways of
>> addressing this (ie, multiply by a normalizing constant which must also be
>> approximated in a convergent manner), but for the most part adherents of
>> Solomonoff induction don't worry about this too much. What we care about,
>> mostly, is comparing different hypotheses to decide which to favor. The
>> normalizing constant doesn't help us here, so it usually isn't mentioned.
>>
>>
>> You said that Solomonoff's original construction involved flipping a coin
>>> for the next bit.  What good does that do?
>>
>>
>> Your intuition is that running totally random programs to get predictions
>> will just produce garbage, and that is fine. The idea of Solomonoff
>> induction, though, is that it will produce systematically less garbage than
>> just flipping coins to get predictions. Most of the garbage programs will be
>> knocked out of the running by the data itself. This is supposed to be the
>> least garbage we can manage without domain-specific knowledge
>>
>> This is backed up with the proof of dominance, which we haven't talked
>> about yet, but which is really the key argument for the optimality of
>> Solomonoff induction.
>>
>>
>> And how does that prove that his original idea was convergent?
>>
>>
>> The proofs of equivalence between all the different formulations of
>> Solomonoff induction are something I haven't cared to look into too deeply.
>>
>> Since his idea is incomputable, there are no algorithms that can be run to
>>> demonstrate what he was talking about so the basic idea is papered with all
>>> sorts of unverifiable approximations.
>>
>>
>> I gave you a proof of convergence for one such approximation, and if you
>> wish I can modify it to include a normalizing constant to ensure that it is
>> a probability measure. It would be helpful to me if your criticisms were
>> more specific to that proof.
>>
>> So, did Solomonoff's original idea involve randomizing whether the next
>>> bit would be a 1 or a 0 in the program?
>>>
>>
>> Yep.
>>
>> Even ignoring the halting problem what kind of result would that give?
>>>
>>
>> Well, the general idea is this. An even distribution intuitively
>> represents lack of knowledge. An even distribution over possible data fails
>> horribly, however, predicting white noise. We want to represent the idea
>> that we are very ignorant of what the data might be, but not *that*
>> ignorant. To capture the idea of reg

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Ian Parker
I think that some quite important philosophical questions are raised by
Steve's posting. I don't know, BTW, how you got it. I monitor all
correspondence to the group, and I did not see it.

The Turing test is not in fact a test of intelligence, it is a test of
similarity with the human. Hence for a machine to be truly Turing it would
have to make mistakes. Now any "*useful*" system will be made as intelligent
as we can make it. The TT will be seen to be an irrelevancy.

Philosophical question no 1 :- How useful is the TT.

As I said in my correspondence with Jan Klauck, the human being is stupid,
often dangerously stupid.

Philosophical question 2 - Would passing the TT assume human stupidity, and
if so, would a Turing machine be dangerous? Not necessarily; the Turing
machine could talk about things like jihad without ultimately identifying
with it.

Philosophical question 3 :- Would a TM be a psychologist? I think it would
have to be. Could a TM become part of a population simulation that would
give us political insights?

These 3 questions seem to me to be the really interesting ones.


  - Ian Parker

On 6 August 2010 18:09, John G. Rose  wrote:

> "statements of stupidity" - some of these are examples of cramming
> sophisticated thoughts into simplistic compressed text. Language is both
> intelligence enhancing and limiting. Human language is a protocol between
> agents. So there is minimalist data transfer, "I had no choice but to ..."
> is a compressed summary of potentially vastly complex issues. The mind gets
> hung-up sometimes on this language of ours. Better off at times to think
> less using English language and express oneself with a wider spectrum
> communiqué. Doing a dance and throwing paint in the air for example, as some
> **primitive** cultures actually do, conveys information also and is medium
> of expression rather than using a restrictive human chat protocol.
>
>
>
> BTW the rules of etiquette of the human language "protocol" are even more
> potentially restricting though necessary for efficient and standardized data
> transfer to occur. Like, TCP/IP for example. The "Etiquette" in TCP/IP is
> like an OSI layer, akin to human language etiquette.
>
>
>
> John
>
>
>
>
>
> *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
>
> To All,
>
> I have posted plenty about "statements of ignorance", our probable
> inability to comprehend what an advanced intelligence might be "thinking",
> heidenbugs, etc. I am now wrestling with a new (to me) concept that
> hopefully others here can shed some light on.
>
> People often say things that indicate their limited mental capacity, or at
> least their inability to comprehend specific situations.
>
> 1)  One of my favorites are people who say "I had no choice but to ...",
> which of course indicates that they are clearly intellectually challenged
> because there are ALWAYS other choices, though it may be difficult to find
> one that is in all respects superior. While theoretically this statement
> could possibly be correct, in practice I have never found this to be the
> case.
>
> 2)  Another one recently from this very forum was "If it sounds too good to
> be true, it probably is". This may be theoretically true, but in fact was,
> as usual, made as a statement as to why the author was summarily dismissing
> an apparent opportunity of GREAT value. This dismissal of something BECAUSE
> of its great value would seem to severely limit the authors prospects for
> success in life, which probably explains why he spends so much time here
> challenging others who ARE doing something with their lives.
>
> 3)  I used to evaluate inventions for some venture capitalists. Sometimes I
> would find that some basic law of physics, e.g. conservation of energy,
> would have to be violated for the thing to work. When I explained this to
> the inventors, their inevitable reply was "Yea, and they also said that the
> Wright Brothers' plane would never fly". To this, I explained that the
> Wright Brothers had invested ~200 hours of effort working with their crude
> homemade wind tunnel, and ask what the inventors have done to prove that
> their own invention would work.
>
> 4)  One old stupid standby, spoken when you have make a clear point that
> shows that their argument is full of holes "That is just your opinion". No,
> it is a proven fact for you to accept or refute.
>
> 5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
> there may be enough of these to dismiss some significant fraction of
> prospective users of beyond-human-capability (I just hate the word
> "intelligence") programs.
>
> In short, semantic analysis of these statements typically would NOT find
> them to be conspicuously false, and hence even an AGI would be tempted to
> accept them. However, their use almost universally indicates some
> short-circuit in thinking. The present Dr. Eliza program could easily
> recognize such statements.
>
> OK, so what? What should an AI progr

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
Maybe you could give me one example from the history of technology where 
machines "ran" before they could "walk"? Where they started complex rather than 
simple?  Or indeed from evolution of any kind? Or indeed from human 
development? Where children started doing complex mental operations like logic, 
say, or maths or the equivalent before they could speak?  Or started running 
before they could control their arms, roll over, crawl, sit up, haul themselves 
up, stand up, totter -  just went straight to running?**

A bottom-up approach, I would have to agree, clearly isn't obvious to AGI-ers. 
But then there are very few AGI-ers who have much sense of history or evolution. 
It's so much easier to engage in sci-fi fantasies about future, top-down AGIs.

It's HARDER to think about where AGI starts - requires serious application to 
the problem.

And frankly, until you or anyone else has a halfway viable idea of where AGI 
will or can start, and what uses it will serve, speculation about whether it's 
worth building complex, sci-fi AGIs is a waste of your valuable time.

**PS Note BTW - a distinction that eludes most AGI-ers - a present computer 
program doing logic or maths or chess is a fundamentally and massively 
different thing from a human or AGI doing the same, just as a current program 
doing NLP is totally different from a human using language. In all these 
cases, humans (and real AGIs to come) don't merely manipulate meaningless 
patterns of numbers; they relate the symbols first to concepts and then to 
real-world referents - massively complex operations totally beyond current 
computers.

The whole history of AI/would-be AGI shows the terrible price of starting 
complex - with logic/maths/chess programs for example - and not having a clue 
about how intelligence has to be developed from v. simple origins, step by 
step, in order to actually understand these activities.



From: Steve Richfield 
Sent: Friday, August 06, 2010 4:52 PM
To: agi 
Subject: Re: [agi] Epiphany - Statements of Stupidity


Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie 
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several 
reasons, as it is directly applicable to Dr. Eliza, and because it casts a 
shadow on future dreams of AGI. I was hoping that those people who have thought 
things through regarding AGIs might have some thoughts here. Maybe these people 
don't (yet) exist?!
2.  You seem to think that a "walk before you run" approach, basically a 
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me. 
Besides, if my "statements of stupidity" theory is true, then why even bother 
building AGIs, because we won't even be able to meaningfully discuss things 
with them.

Steve
==

On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner  wrote:

  STEVE: I have posted plenty about "statements of ignorance", our probable 
inability to comprehend what an advanced intelligence might be "thinking", 

  What will be the SIMPLEST thing that will mark the first sign of AGI ? - 
Given that there are zero but zero examples of AGI.

  Don't you think it would be a good idea to begin at the beginning? With 
"initial AGI"? Rather than "advanced AGI"? 





RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
"statements of stupidity" - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text. Language is both
intelligence enhancing and limiting. Human language is a protocol between
agents. So there is minimalist data transfer, "I had no choice but to ..."
is a compressed summary of potentially vastly complex issues. The mind gets
hung-up sometimes on this language of ours. Better off at times to think
less in the English language and express oneself with a wider-spectrum
communiqué. Doing a dance and throwing paint in the air, for example, as some
*primitive* cultures actually do, also conveys information and is a medium of
expression, rather than using a restrictive human chat protocol.
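
A minimal sketch of that codebook view (purely illustrative - the phrase
table, the 2-bit codes and the function below are invented for this note,
not anything actually proposed in the thread):

    # Illustrative only: stock phrases treated as entries in a shared codebook,
    # the way a protocol assigns short codes to common messages. Sending the
    # code costs a couple of bits; the receiver "decompresses" it against his
    # own experience, which is where the loss happens - the discarded
    # alternatives never travel over the wire.

    CODEBOOK = {                                    # hypothetical shared table
        "I had no choice but to ...":           0b00,
        "If it sounds too good to be true ...": 0b01,
        "That is just your opinion.":           0b10,
    }

    def transmit(phrase):
        """Return (kind, payload, bits_on_the_wire) for one message."""
        if phrase in CODEBOOK:
            return ("code", CODEBOOK[phrase], 2)    # 2 bits: a protocol token
        raw = phrase.encode("utf-8")
        return ("raw", phrase, 8 * len(raw))        # uncompressed fallback

    print(transmit("I had no choice but to ..."))
    # ('code', 0, 2) - the complex situation behind the phrase never travels
    print(transmit("A full account of the constraints and the rejected options ..."))
    # ('raw', ..., several hundred bits)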

 

BTW, the rules of etiquette of the human language "protocol" are even more
potentially restricting, though necessary for efficient and standardized data
transfer to occur - like TCP/IP, for example. The "etiquette" in TCP/IP is
like an OSI layer, akin to human language etiquette.

 

John

 

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 



To All,

I have posted plenty about "statements of ignorance", our probable inability
to comprehend what an advanced intelligence might be "thinking", heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.

People often say things that indicate their limited mental capacity, or at
least their inability to comprehend specific situations.

1)  One of my favorites is people who say "I had no choice but to ...",
which of course indicates that they are clearly intellectually challenged
because there are ALWAYS other choices, though it may be difficult to find
one that is in all respects superior. While theoretically this statement
could possibly be correct, in practice I have never found this to be the
case.

2)  Another one recently from this very forum was "If it sounds too good to
be true, it probably is". This may be theoretically true, but in fact was,
as usual, made as a statement as to why the author was summarily dismissing
an apparent opportunity of GREAT value. This dismissal of something BECAUSE
of its great value would seem to severely limit the author's prospects for
success in life, which probably explains why he spends so much time here
challenging others who ARE doing something with their lives.

3)  I used to evaluate inventions for some venture capitalists. Sometimes I
would find that some basic law of physics, e.g. conservation of energy,
would have to be violated for the thing to work. When I explained this to
the inventors, their inevitable reply was "Yea, and they also said that the
Wright Brothers' plane would never fly". To this, I explained that the
Wright Brothers had invested ~200 hours of effort working with their crude
homemade wind tunnel, and asked what the inventors had done to prove that
their own invention would work.

4)  One old stupid standby, spoken when you have made a clear point that
shows that their argument is full of holes: "That is just your opinion". No,
it is a proven fact for you to accept or refute.

5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
there may be enough of these to dismiss some significant fraction of
prospective users of beyond-human-capability (I just hate the word
"intelligence") programs.

In short, semantic analysis of these statements typically would NOT find
them to be conspicuously false, and hence even an AGI would be tempted to
accept them. However, their use almost universally indicates some
short-circuit in thinking. The present Dr. Eliza program could easily
recognize such statements.

OK, so what? What should an AI program do when it encounters a stupid user?
Should some attempt be made to explain stupidity to someone who is almost
certainly incapable of comprehending their own stupidity? "Stupidity is
forever" is probably true, especially when expressed by an adult.

Note my own dismissal of some past posters for insufficient mental ability
to understand certain subjects, whereupon they invariably come back
repeating the SAME flawed logic, after I carefully explained the breaks in
their logic. Clearly, I was just wasting my effort by continuing to interact
with these people.

Note that providing a stupid user with ANY output is probably a mistake,
because they will almost certainly misconstrue it in some way. Perhaps it
might be possible to "dumb down" the output to preschool-level, at least
that (small) part of the output that can be accurately stated in preschool
terms.

Eventually as computers continue to self-evolve, we will ALL be categorized
as some sort of stupid, and receive stupid-adapted output.

I wonder whether, ultimately, computers will have ANYTHING to say to us,
any more than we now say to our dogs.

Perhaps the final winner of the Reverse Turing Test will remain completely
silent?!

"You don't explain to your dog why you can't pay the rent" from The Fall of
Colossus.

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Matt Mahoney
Mike Tintner wrote:
> What will be the SIMPLEST thing that will mark the first sign of AGI ? - 
> Given 
>that there are zero but zero examples of AGI.
 
Machines have already surpassed human intelligence. If you don't think so, try 
this IQ test. http://mattmahoney.net/iq/

Or do you prefer to define intelligence as "more like a human"? In that case I 
agree that AGI will never happen. No machine will ever be more like a human 
than 
a human.

I really don't care how you define it. Either way, computers are profoundly 
affecting the way people interact with each other and with the world. Where is 
the threshold when machines do most of our thinking for us? Who cares, as long 
as the machines still give us the feeling that we are in charge?

-- Matt Mahoney, matmaho...@yahoo.com





From: Mike Tintner 
To: agi 
Sent: Fri, August 6, 2010 5:57:33 AM
Subject: Re: [agi] Epiphany - Statements of Stupidity


STEVE: I have posted plenty about "statements of ignorance", our probable 
inability to comprehend what an advanced intelligence might be "thinking", 

What will be the SIMPLEST thing that will mark the first sign of AGI ? - Given 
that there are zero but zero examples of AGI.

Don't you think it would be a good idea to begin at the beginning? With 
"initial AGI"? Rather than "advanced AGI"? 





Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons: it is directly applicable to Dr. Eliza, and it casts a
shadow on future dreams of AGI. I was hoping that those people who have
thought things through regarding AGIs might have some thoughts here. Maybe
these people don't (yet) exist?!
2.  You seem to think that a "walk before you run" approach - basically a
bottom-up approach to AGI - is the obvious one. It sure isn't obvious to me.
Besides, if my "statements of stupidity" theory is true, then why even
bother building AGIs, since we won't even be able to discuss things
meaningfully with them?

Steve
==
On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner wrote:

>  STEVE: I have posted plenty about "statements of ignorance", our probable
> inability to comprehend what an advanced intelligence might be "thinking",
>
> What will be the SIMPLEST thing that will mark the first sign of AGI ? -
> Given that there are zero but zero examples of AGI.
>
> Don't you think it would be a good idea to begin at the beginning? With
> "initial AGI"? Rather than "advanced AGI"?





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer
>
> Jim: So, did Solomonoff's original idea involve randomizing whether the
> next bit would be a 1 or a 0 in the program?

Abram: Yep.
I meant: did Solomonoff's original idea involve randomizing whether the next
bit would be a 1 or a 0 in the programs that are originally used to produce
the *prior probabilities*? I have not been able to find any evidence that it
did.
I thought that my question was clear, but on second thought I guess it
wasn't. I think that the part about the coin flips was only a method of
expressing that he was interested in the probability that a particular string
would be produced from all possible programs, so that, when actually testing
the prior probability of a particular string, the program to be run would
have to be randomly generated.
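
To make the coin-flip reading concrete, here is a minimal Monte Carlo sketch
(a toy under loud assumptions: the four-instruction machine below is invented
for illustration and is nowhere near universal, and the step limit merely
sidesteps the halting problem rather than solving it):

    # For a prefix-free machine, flipping a fair coin for each program bit
    # reaches a halting program p with probability 2^(-|p|), so the observed
    # hit frequency estimates the (unnormalized) prior
    #     M(x) ~ sum of 2^(-|p|) over programs p whose output starts with x.
    # Normalizing would turn this semimeasure into a probability, but it does
    # not change which strings get more mass.

    import random

    def run_toy_machine(flip, max_steps=64):
        """Read coin flips two at a time as instructions of a toy machine."""
        out = []
        for _ in range(max_steps):
            op = (flip(), flip())
            if op == (0, 0):
                out.append(0)              # emit a 0
            elif op == (0, 1):
                out.append(1)              # emit a 1
            elif op == (1, 0) and out:
                out.append(out[-1])        # repeat the last output bit
            else:
                break                      # (1, 1), or repeat with no output: halt
        return out

    def estimate_prior(target, samples=200_000):
        """Fraction of random programs whose output starts with `target`."""
        hits = 0
        for _ in range(samples):
            out = run_toy_machine(lambda: random.getrandbits(1))
            if out[:len(target)] == target:
                hits += 1
        return hits / samples

    print(estimate_prior([0, 0, 0, 0, 0, 0]))   # regular string
    print(estimate_prior([0, 1, 1, 0, 1, 0]))   # irregular string, same length

With this toy machine the all-zeros string collects noticeably more mass,
because the "repeat" instruction gives it many more programs of minimal
length - the same mechanism that, on a genuinely universal machine, makes
the prior favor regular data.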
Jim Bromer




On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski  wrote:

> Jim,
>
>  Your function may be convergent but it is not a probability.
>>
>
> True! All the possibilities sum to less than 1. There are ways of
> addressing this (ie, multiply by a normalizing constant which must also be
> approximated in a convergent manner), but for the most part adherents of
> Solomonoff induction don't worry about this too much. What we care about,
> mostly, is comparing different hypotheses to decide which to favor. The
> normalizing constant doesn't help us here, so it usually isn't mentioned.
>
>
> You said that Solomonoff's original construction involved flipping a coin
>> for the next bit.  What good does that do?
>
>
> Your intuition is that running totally random programs to get predictions
> will just produce garbage, and that is fine. The idea of Solomonoff
> induction, though, is that it will produce systematically less garbage than
> just flipping coins to get predictions. Most of the garbage programs will be
> knocked out of the running by the data itself. This is supposed to be the
> least garbage we can manage without domain-specific knowledge.
>
> This is backed up with the proof of dominance, which we haven't talked
> about yet, but which is really the key argument for the optimality of
> Solomonoff induction.
>
>
> And how does that prove that his original idea was convergent?
>
>
> The proofs of equivalence between all the different formulations of
> Solomonoff induction are something I haven't cared to look into too deeply.
>
> Since his idea is incomputable, there are no algorithms that can be run to
>> demonstrate what he was talking about so the basic idea is papered with all
>> sorts of unverifiable approximations.
>
>
> I gave you a proof of convergence for one such approximation, and if you
> wish I can modify it to include a normalizing constant to ensure that it is
> a probability measure. It would be helpful to me if your criticisms were
> more specific to that proof.
>
> So, did Solomonoff's original idea involve randomizing whether the next bit
>> would be a 1 or a 0 in the program?
>>
>
> Yep.
>
> Even ignoring the halting problem what kind of result would that give?
>>
>
> Well, the general idea is this. An even distribution intuitively represents
> lack of knowledge. An even distribution over possible data fails horribly,
> however: it just predicts white noise. We want to represent the idea that we are
> very ignorant of what the data might be, but not *that* ignorant. To capture
> the idea of regularity, ie, similarity between past and future, we instead
> take an even distribution over *descriptions* of the data. (The distribution
> in the 2nd version of solomonoff induction that I gave, the one in which the
> space of possible programs is represented as a continuum, is an even
> distribution.) This appears to provide a good amount of regularity without
> too much.
>
> --Abram
>
> On Wed, Aug 4, 2010 at 8:10 PM, Jim Bromer  wrote:
>
>> Abram,
>> Thanks for the explanation.  I still don't get it.  Your function may be
>> convergent but it is not a probability.  You said that Solomonoff's original
>> construction involved flipping a coin for the next bit.  What good does that
>> do?  And how does that prove that his original idea was convergent?  The
>> thing that is wrong with these explanations is that they are not coherent.
>> Since his idea is incomputable, there are no algorithms that can be run to
>> demonstrate what he was talking about so the basic idea is papered with all
>> sorts of unverifiable approximations.
>>
>> So, did Solomonoff's original idea involve randomizing whether the next
>> bit would be a 1 or a 0 in the program?  Even ignoring the halting
>> problem what kind of result would that give?  Have you ever solved the
>> problem for some strings and have you ever tested the solutions using a
>> simulation?
>>
>> Jim Bromer
>>
>> On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski wrote:
>>
>>> Jim,
>>>
>>> Interestingly, the formalization of Solomonoff induction I'm most
>>> familiar with uses a construction that relates the space of programs with
>>> the real 

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
STEVE: I have posted plenty about "statements of ignorance", our probable 
inability to comprehend what an advanced intelligence might be "thinking", 

What will be the SIMPLEST thing that will mark the first sign of AGI ? - Given 
that there are zero but zero examples of AGI.

Don't you think it would be a good idea to begin at the beginning? With 
"initial AGI"? Rather than "advanced AGI"? 




[agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
To All,

I have posted plenty about "statements of ignorance", our probable inability
to comprehend what an advanced intelligence might be "thinking", heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.

People often say things that indicate their limited mental capacity, or at
least their inability to comprehend specific situations.

1)  One of my favorites is people who say "I had no choice but to ...",
which of course indicates that they are clearly intellectually challenged
because there are ALWAYS other choices, though it may be difficult to find
one that is in all respects superior. While theoretically this statement
could possibly be correct, in practice I have never found this to be the
case.

2)  Another one recently from this very forum was "If it sounds too good to
be true, it probably is". This may be theoretically true, but in fact was,
as usual, made as a statement as to why the author was summarily dismissing
an apparent opportunity of GREAT value. This dismissal of something BECAUSE
of its great value would seem to severely limit the author's prospects for
success in life, which probably explains why he spends so much time here
challenging others who ARE doing something with their lives.

3)  I used to evaluate inventions for some venture capitalists. Sometimes I
would find that some basic law of physics, e.g. conservation of energy,
would have to be violated for the thing to work. When I explained this to
the inventors, their inevitable reply was "Yea, and they also said that the
Wright Brothers' plane would never fly". To this, I explained that the
Wright Brothers had invested ~200 hours of effort working with their crude
homemade wind tunnel, and asked what the inventors had done to prove that
their own invention would work.

4)  One old stupid standby, spoken when you have made a clear point that
shows that their argument is full of holes: "That is just your opinion". No,
it is a proven fact for you to accept or refute.

5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
there may be enough of these to dismiss some significant fraction of
prospective users of beyond-human-capability (I just hate the word
"intelligence") programs.

In short, semantic analysis of these statements typically would NOT find
them to be conspicuously false, and hence even an AGI would be tempted to
accept them. However, their use almost universally indicates some
short-circuit in thinking. The present Dr. Eliza program could easily
recognize such statements.
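
A sketch of the kind of surface-pattern matching this implies (illustrative
only: the pattern table, the tags and the function below are my own guesses
at the approach, not Dr. Eliza's actual code):

    # Illustrative only: the statements flag themselves by surface form alone,
    # with no attempt to judge truth - which is the point, since semantic
    # analysis would not find them conspicuously false.

    import re

    STUPIDITY_PATTERNS = [                            # hypothetical pattern -> tag
        (r"\bi had no choice but to\b",             "denial of alternatives"),
        (r"\bsounds? too good to be true\b",        "dismissal by value"),
        (r"\bthat(?:'s| is) just your opinion\b",   "fact/opinion deflection"),
        (r"\bthey (?:also )?said .* would never\b", "Wright Brothers gambit"),
    ]

    def flag_statements(text):
        """Return (tag, matched phrase) for every suspect pattern in `text`."""
        hits = []
        lowered = text.lower()
        for pattern, tag in STUPIDITY_PATTERNS:
            m = re.search(pattern, lowered)
            if m:
                hits.append((tag, m.group(0)))
        return hits

    print(flag_statements("I had no choice but to sell the house."))
    # [('denial of alternatives', 'i had no choice but to')]

Recognizing the statement is the easy part; what the program should then do
with the flag is exactly the question that follows.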

OK, so what? What should an AI program do when it encounters a stupid user?
Should some attempt be made to explain stupidity to someone who is almost
certainly incapable of comprehending their own stupidity? "Stupidity is
forever" is probably true, especially when expressed by an adult.

Note my own dismissal of some past posters for insufficient mental ability
to understand certain subjects, whereupon they invariably come back
repeating the SAME flawed logic, after I carefully explained the breaks in
their logic. Clearly, I was just wasting my effort by continuing to interact
with these people.

Note that providing a stupid user with ANY output is probably a mistake,
because they will almost certainly misconstrue it in some way. Perhaps it
might be possible to "dumb down" the output to preschool-level, at least
that (small) part of the output that can be accurately stated in preschool
terms.

Eventually as computers continue to self-evolve, we will ALL be categorized
as some sort of stupid, and receive stupid-adapted output.

I wonder whether, ultimately, computers will have ANYTHING to say to us,
any more than we now say to our dogs.

Perhaps the final winner of the Reverse Turing Test will remain completely
silent?!

"You don't explain to your dog why you can't pay the rent" from *The Fall of
Colossus*.

Any thoughts?

Steve


