Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
Abram Demski said:
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
& approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work...
Mathematics and mathematical proof is a very important tool...
Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

Mathematics can be extended to include new kinds of relations and systems.  One 
of the problems I have had with AI-probability buffs is that there are other 
ways to deal with knowledge that is only partially understood, and this kind of 
complexity can be extended to measurable quantities as well.  Notice that 
economics is not just probability: there are measurable quantities in 
economics that are not based solely on the economics of money.

We cannot make perfect decisions.  However, we can often make fairly good 
decisions even when they are based on partial knowledge.  A conclusion, however, 
should not be taken as a reliable rule unless it has withstood numerous tests.  
These empirical tests usually cause a conclusion to be modified.  Even a good 
conclusion will typically be modified by conditional variations after being 
extensively tested.  That is the nature of expertise.

Our conclusions are often only approximations, but they can contain 
unarticulated links to other possibilities that may indicate other ways of 
looking at the data or conditional variations to the base conclusion.

Jim Bromer


  




[agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
While SIAI fills that niche somewhat, it concentrates on the
intelligence explosion scenario. Is there a sufficient group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be done to make the introduction as
painless as possible.

The base beliefs shared between the group would be something like:

 - The entities will not have goals/motivations inherent to their
form. That is, robots aren't likely to band together to fight humans,
or try to take over the world for their own ends.  These would have
to be programmed into them, as evolution has programmed group loyalty
and selfishness into humans.
- The entities will not be capable of full wrap-around recursive
self-improvement. They will improve in fits and starts in a wider
economy/ecology, like most developments in the world *
- The goals and motivations of the entities that we will likely see in
the real world will be shaped over the long term by the forces in the
world, e.g. evolutionary, economic and physical.

Basically, an organisation trying to prepare for a world where AIs
aren't sufficiently advanced technology or magic genies, but are still
dangerous and a potentially destabilising change to the world. Could a
coherent message be articulated by the subset of people who agree
with these points? Or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I take this view, at a
later date.




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer


On 6/21/08, I wrote: 
The major problem I have is that writing a really really complicated computer 
program is really really difficult.
--
Steve Richfield replied:
Jim,

The ONLY rational approach to this (that I know of) is to construct an "engine" 
that develops and applies machine knowledge, wisdom, or whatever, and NOT write 
code yourself that actually deals with articles of knowledge/wisdom.
-

I agree with that (assuming that I understand what you meant). 
-- 
Steve wrote:

REALLY complex systems may require multi-level interpreters, where a low-level 
interpreter provides a pseudo-machine on which to program a really smart 
high-level interpreter, on which you program your AGI. In ~1970 I wrote an 
ALGOL/FORTRAN/BASIC compiler that ran in just 16K bytes this way. At the bottom 
was a pseudo-computer whose primitives were fundamental to compiling. That 
pseudo-machine was then fed a program to read BNF and make compilers, which was 
then fed a BNF description of my compiler, with the output being my compiler in 
pseudo-machine code. One feature of this approach is that for anything to work, 
everything had to work, so once past initial debugging, it worked perfectly! 
Contrast this with "modern" methods that consume megabytes and never work quite 
right.
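
(For concreteness, the layering Steve describes, a small pseudo-machine at the 
bottom with a compiler targeting it, can be sketched in miniature as below. The 
instruction set and expression grammar are purely illustrative assumptions; this 
is not a reconstruction of his 1970 compiler.)

# Toy illustration of a layered design: a tiny stack-based pseudo-machine,
# plus a small recursive-descent compiler that emits code for it.
# The opcodes and grammar here are made up for the example.

def run(code):
    """Execute pseudo-machine code: a list of (op, arg) pairs on a stack."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def compile_expr(tokens):
    """Compile  expr := term ('+' term)* ; term := number ('*' number)*
    into pseudo-machine code."""
    code = []
    pos = 0

    def number():
        nonlocal pos
        code.append(("PUSH", int(tokens[pos])))
        pos += 1

    def term():
        nonlocal pos
        number()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            number()
            code.append(("MUL", None))

    def expr():
        nonlocal pos
        term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            term()
            code.append(("ADD", None))

    expr()
    return code

program = compile_expr("2 + 3 * 4".split())
print(program)      # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL', None), ('ADD', None)]
print(run(program)) # 14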
--

A compiler may be a useful tool to use in an advanced AI program (just as we 
all use compilers in our programming), but I don't feel that a compiler is a 
good basis for or a good metaphor for advanced AI.

--
Steve wrote:

The more complex the software, the better the design must be, and the more 
protected the execution must be. You can NEVER anticipate everything that might 
go into a program, so they must fail ever so softly.
 
Much of what I have been challenging others on this forum for came out of the 
analysis and design of Dr. Eliza. The real world definitely has some 
interesting structure, e.g. the figure 6 shape of cause-and-effect chains, and 
that problems are a phenomenon that exists behind people's eyeballs and NOT 
otherwise in the real world. Ignoring such things and "diving in" and hoping 
that machine intelligence will resolve all (as many/most here seem to believe) 
IMHO is a rookie error that leads nowhere useful.
Steve Richfield
---

I don't think that most people in this group think that machine intelligence 
will resolve all the remaining problems in designing artificial intelligence, 
although I have talked to people who feel that way, and the lack of discussion 
about resolving some of the complexity issues does seem curious to me.  Where 
are they coming from?  I don't know.  I think most of the people feel that once 
they get their basic programs working, they will be able to figure out the 
rest on the fly.  This method hasn't worked yet, but as I mentioned I do think 
it has something to do with the difficulty of writing complicated computer 
programs.  I know that you are one of the outspoken critics of faith-based 
programming, so at least there is some consistency in your comments.  I mention 
this because I (seriously) believe that the Lord may have indicated that 
my algorithm to solve the logical satisfiability problem will work, and if this 
is true, then that may mean that the algorithm may help resolve some lesser 
logical complexity problems.  Although we cannot use pure logic to represent 
knowable knowledge, I can use logic to represent theory-like relations between 
references to knowable components of knowledge.  (By the way, please note that 
I did not claim that I presently have a polynomial time solution to SAT, and I 
did not say that I was absolutely certain that God pronounced my SAT algorithm 
to be workable.  I have carefully qualified my statements about this.  I would 
also suggest that you think about the fact that we have to use different kinds 
of reasoning with different kinds of questions.  Regardless of your own beliefs, 
the topic about the necessity of using different kinds of reasoning for 
different kinds of questions is very relevant to discussions about advanced AI.)

What do you mean by the figure 6 shape of cause-and-effect chains?  It must 
refer to some kind of feedback-like effect.

Jim Bromer



  




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread Vladimir Nesov
On Sun, Jun 22, 2008 at 8:38 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> While SIAI fills that niche somewhat, it concentrates on the
> intelligence explosion scenario. Is there a sufficient group of
> researchers/thinkers with a shared vision of the future of AI coherent
> enough to form an organisation? This organisation would discuss,
> explore and disseminate what can be done to make the introduction as
> painless as possible.
>
> The base beliefs shared between the group would be something like:
>
>  - The entities will not have goals/motivations inherent to their
> form. That is, robots aren't likely to band together to fight humans,
> or try to take over the world for their own ends.  These would have
> to be programmed into them, as evolution has programmed group loyalty
> and selfishness into humans.
> - The entities will not be capable of full wrap-around recursive
> self-improvement. They will improve in fits and starts in a wider
> economy/ecology, like most developments in the world *
> - The goals and motivations of the entities that we will likely see in
> the real world will be shaped over the long term by the forces in the
> world, e.g. evolutionary, economic and physical.
>
> Basically, an organisation trying to prepare for a world where AIs
> aren't sufficiently advanced technology or magic genies, but are still
> dangerous and a potentially destabilising change to the world. Could a
> coherent message be articulated by the subset of people who agree
> with these points? Or are we all still too fractured?
>

Two questions:
1) Do you know enough to estimate which scenario is more likely?
2) What does this difference change for research at this stage?

Otherwise it sounds like you are just calling to start a cult that
believes in this particular unsupported thing, for no good reason. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Abram Demski
Well, since you found my blog, you probably are grouping me somewhat
with the "probability buffs". I have stated that I will not be
interested in any other fuzzy logic unless it is accompanied by a
careful account of the meaning of the numbers.

You have stated that it is unrealistic to expect a logical model to
reflect the world perfectly. The intuition behind this seems clear.
Instead, what should be hoped for is convergence to (nearly) correct
models of (small parts of) the universe. So I suppose that rather than
asking for "meaning" in a fuzzy logic, I should be asking for clear
accounts of convergence properties... but my intuition says that from
clear meaning, everything else follows.

On Sun, Jun 22, 2008 at 9:45 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Abram Demski said:
> To be honest, I am not completely satisfied with my conclusion on the
> post you refer to. I'm not so sure now that the fundamental split
> between logical/messy methods should occur at the line between perfect
> & approximate methods. This is one type of messiness, but one only. I
> think you are referring to a related but different messiness: not
> knowing what kind of environment your AI is dealing with. Since we
> don't know which kinds of models will fit best with the world, we
> should (1) trust our intuitions to some extent, and (2) try things and
> see how well they work...
> Mathematics and mathematical proof is a very important tool...
> Mine is a system built out of somewhat smart pieces,
> cooperating to build somewhat smarter pieces, and so on. Each piece
> has provable smarts.
> 
> Mathematics can be extended to include new kinds of relations and systems.
> One of the problems I have had with AI-probability buffs is that there are
> other ways to deal with knowledge that is only partially understood, and this
> kind of complexity can be extended to measurable quantities as well.  Notice
> that economics is not just probability: there are measurable quantities in
> economics that are not based solely on the economics of money.
>
> We cannot make perfect decisions.  However, we can often make fairly good
> decisions even when they are based on partial knowledge.  A conclusion,
> however, should not be taken as a reliable rule unless it has withstood
> numerous tests.  These empirical tests usually cause a conclusion to be
> modified.  Even a good conclusion will typically be modified by conditional
> variations after being extensively tested.  That is the nature of expertise.
>
> Our conclusions are often only approximations, but they can contain
> unarticulated links to other possibilities that may indicate other ways of
> looking at the data or conditional variations to the base conclusion.
>
> Jim Bromer




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Steve Richfield
Jim,

On 6/22/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
>
>  A compiler may be a useful tool to use in an advanced AI program (just as
> we all use compilers in our programming), but I don't feel that a compiler
> is a good basis for or a good metaphor for advanced AI.
>

A compiler is just another complicated computer program. The sorts of
methods I described are applicable to ALL complicated programs. I know of no
exceptions.

   --
> Steve wrote:
> The more complex the software, the better the design must be, and the more
> protected the execution must be. You can NEVER anticipate everything that
> might go into a program, so they must fail ever so softly.
>
> Much of what I have been challenging others on this forum for came out of
> the analysis and design of Dr. Eliza. The real world definitely has some
> interesting structure, e.g. the figure 6 shape of cause-and-effect chains,
> and that problems are a phenomenon that exists behind people's eyeballs and
> NOT otherwise in the real world. Ignoring such things and "diving in" and
> hoping that machine intelligence will resolve all (as many/most here seem to
> believe) IMHO is a rookie error that leads nowhere useful.
> Steve Richfield
> ---
>
> I don't think that most people in this group think that machine
> intelligence will resolve all the remaining problems in designing artificial
> intelligence, although I have talked to people who feel that way, and the
> lack of discussion about resolving some of the complexity issues does seem
> curious to me.
>

I simply attribute this to rookie error - but many of the people on this
forum are definitely NOT rookies. Hmmm.

Where are they coming from?  I don't know.  I think most of the people
> feel that once they get their basic programs working, they will be able
> to figure out the rest on the fly.  This method hasn't worked yet, but as I
> mentioned I do think it has something to do with the difficulty of writing
> complicated computer programs. I know that you are one of the outspoken
> critics of faith-based programming,
>

YES - and you said it even better than I have!

   so at least there is some consistency in your comments.  I mention this
> because I (seriously) believe that the Lord may have indicated that my
> algorithm to solve the logical satisfiability problem will work, and if this
> is true, then that may mean that the algorithm may help resolve some lesser
> logical complexity problems.
>

Most of my working career has been as a genuine consultant (and not just an
unemployed programmer). I am typically hired by a major investor. My
specialty is resurrecting projects that are in technological trouble. At the
heart of the most troubled projects, I typically find either a born-again
Christian or a PhD Chemist. These people make the same bad decisions from
faith. The Christian's faith is that God wouldn't lead them SO astray, so
abandoning the project would in effect be abandoning their faith in God -
which of course leads straight to Hell. The Chemist has heard all of the
stories of perseverance leading to breakthrough discoveries, and if you KNOW
that the solution is there just waiting to be found, then just keep on
plugging away. These both lead to projects that stumble on and on long after
any sane person would have found another better way. Christians tend to make
good programmers, but really awful project managers.


>Although we cannot use pure logic to represent knowable knowledge, I
> can use logic to represent theory-like relations between references to
> knowable components of knowledge.  (By the way, please note that I did not
> claim that I presently have a polynomial time solution to SAT, and I did not
> say that I was absolutely certain that God pronounced my SAT algorithm to be
> workable.
>

Are you waiting for me to make such a pronouncement?!

   I have carefully qualified my statements about this.  I would also
> suggest that you think about the fact that we have to use different kinds of
> reasoning with different kinds of questions.  Regardless of your own
> beliefs, the topic about the necessity of using different kinds of reasoning
> for different kinds of questions is very relevant to discussions about
> advanced AI.)
>
> What do you mean by the figure 6 shape of cause-and-effect chains?  It must
> refer to some kind of feedback-like effect.
>

EVERYTHING works by cause and effect - even God's work, because he is
responding to what he sees, and therefore HE is but another link. Where
things are dynamically changing, there is little opportunity to run over to
your computer and inquire about what to do about things you don't like.
However, where things appear to be both stable and undesirable, there is
probably a looped cause-and-effect chain that is at least momentarily
running in a circle. Of course, there must have been a causal
cause-and-effect chain that led to this loop, so drawing

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread William Pearson
2008/6/22 Vladimir Nesov <[EMAIL PROTECTED]>:

>
> Two questions:
> 1) Do you know enough to estimate which scenario is more likely?

Well, since intelligence explosions haven't happened previously in our
light cone, they can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on
their side. So we might find them more easily. I also think I have
solid reasoning to think an intelligence explosion is unlikely, but it
requires paper length rather than post length. So I think I do, but
should I trust my own rationality?

Getting a bunch of people together to argue for both paths seems like
a good bet at the moment.

> 2) What does this difference change for research at this stage?

It changes the focus of research from looking for simple principles of
intelligence (that can be improved easily on the fly) to a view that
expects intelligence creation to be a societal process spanning decades.

It also means secrecy is no longer the default position. If you take
the intelligence explosion scenario seriously, you won't write anything
in public forums that might help other people make AI, since bad/ignorant
people might get hold of it and cause the first explosion.

 > Otherwise it sounds like you are just calling to start a cult that
> believes in this particular unsupported thing, for no good reason. ;-)
>

Hope that gives you some reasons. Let me know if I have misunderstood
your questions.

  Will Pearson




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Mike Tintner
Steve:Most of my working career has been as a genuine consultant (and not just 
an unemployed programmer). I am typically hired by a major investor. My 
specialty is resurrecting projects that are in technological trouble. At the 
heart of the most troubled projects, I typically find either a born-again 
Christian or a PhD Chemist. These people make the same bad decisions from 
faith. The Christian's faith is that God wouldn't lead them SO astray, so 
abandoning the project would in effect be abandoning their faith in God - which 
of course leads straight to Hell. The Chemist has heard all of the stories of 
perseverance leading to breakthrough discoveries, and if you KNOW that the 
solution is there just waiting to be found, then just keep on plugging away. 
These both lead to projects that stumble on and on long after any sane person 
would have found another better way. Christians tend to make good programmers, 
but really awful project managers.

V. interesting. The thing that amazes me  - & I don't know whether this relates 
to your experience - is that so many AGI-ers don't seem to realise that if 
you're going to commit to a creative project, you must have at least one big, 
central creative idea to start with. Especially if investors are to be involved.

I find the "pathologies" of how would-be creatives fail to see this fascinating 
- you have possible examples above. Another obvious example is how many people 
think that they are being creative simply by going into a new area, even though 
they have no real new ideas or approaches to it.




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Richard Loosemore


Abram

I am pressed for time right now, but just to let you know that, now that 
I am aware of your post, I will reply soon.  I think that many of your 
concerns are a result of seeing a different message in the paper than 
the one I intended.



Richard Loosemore



Abram Demski wrote:

To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
& approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:

I just read Abram Demski's comments about Loosemore's, "Complex Systems,
Artificial Intelligence and Theoretical Psychology," at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html

I thought Abram's comments were interesting.  I just wanted to make a few
criticisms. One is that a logical or rational approach to AI does not
necessarily mean that it would be a fully constrained logical-mathematical
method.  My point of view is that if you use a logical or a rational method
with an unconstrained inductive system (open and not monotonic) then the
logical system will, for any likely use, act like a rational-non-rational
system no matter what you do.  So when I, for example, start thinking about
whether or not I will be able to use my SAT system (logical satisfiability)
for an AGI program, I am not thinking of an implementation of a pure
Aristotelian-Boolean system of knowledge.  The system I am currently
considering would use logic to study theories and theory-like relations that
refer to concepts about the natural universe and the universe of thought,
but without the expectation that those theories could ever constitute a
sound strictly logical or rational model of everything.  Such ideas are so
beyond the pale that I do not even consider the possibility to be worthy of
effort.  No one in his right mind would seriously think that he could write
a computer program that could explain everything perfectly without error.
If anyone seriously talked like that, I would take it as an indication of some
significant psychological problem.



I also take it as a given that AI would suffer from the problem of
computational irreducibility if its design goals were to completely
comprehend all complexity using only logical methods in the strictest sense.
However, many complex ideas may be simplified and these simplifications can
be used wisely in specific circumstances.  My belief is that many
interrelated layers of simplification, if they are used insightfully, can
effectively represent complex ideas that may not be completely understood,
just as we use insightful simplifications while trying to discuss something
that is not completely understood, like intelligence.  My problem with
developing an AI program is not that I cannot figure out how to create
complex systems of  insightful simplifications, but that I do not know how
to develop a computer program capable of sufficient complexity to handle the
load that the system would produce.  So while I agree with Demski's
conclusion that, "there is a way to salvage Loosemore's position,
...[through] shortcutting an irreducible computation by compromising,
allowing the system to produce less-than-perfect results," and, "...as we
tackle harder problems, the methods must become increasingly approximate," I
do not agree that the contemporary problem is with logic or with the
complexity of human knowledge. I feel that the major problem I have is that
writing a really really complicated computer program is really really
difficult.



The problem I have with people who talk about ANNs or probability nets as if
their paradigm of choice were the inevitable solution to complexity is that
they never discuss how their approach might actually handle complexity. Most
advocates of ANNs or probability deal with the problem of complexity as if
it were a problem that either does not exist or has already been solved by
whatever tired paradigm they are advocating.  I don't get that.



The major problem I have is that writing a really really complicated
computer program is really really diffic

Re: [agi] Approximations of Knowledge

2008-06-22 Thread J. Andrew Rogers


On Jun 22, 2008, at 1:37 PM, Steve Richfield wrote:
At the heart of the most troubled projects, I typically find either
a born-again Christian or a PhD Chemist. These people make the same  
bad decisions from faith. The Christian's faith is that God wouldn't  
lead them SO astray, so abandoning the project would in effect be  
abandoning their faith in God - which of course leads straight to  
Hell. The Chemist has heard all of the stories of perseverance  
leading to breakthrough discoveries, and if you KNOW that the  
solution is there just waiting to be found, then just keep on  
plugging away. These both lead to projects that stumble on and on  
long after any sane person would have found another better way.  
Christians tend to make good programmers, but really awful project  
managers.



Somewhere in the world, there is a PhD chemist and a born-again  
Christian on another mailing list: "...the project had hit a serious  
snag, and so the investors brought in a consultant who would explain  
why the project was broken by defectively reasoning about dubious  
generalizations he pulled out of his ass..."



J. Andrew Rogers





Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Kaj Sotala
On 6/21/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> Eliezer asked a similar question on SL4. If an agent flips a fair quantum 
> coin and is copied 10 times if it comes up heads, what should be the agent's 
> subjective probability that the coin will come up heads? By the anthropic 
> principle, it should be 0.9. That is because if you repeat the experiment 
> many times and you randomly sample one of the resulting agents, it is highly 
likely that it will have seen heads about 90% of the time.

That's the wrong answer, though (as I believe I pointed out when the
question was asked over on SL4). The copying is just a red herring; it
doesn't affect the probability at all.

Since this question seems to confuse many people, I wrote a short
Python program simulating it:
http://www.saunalahti.fi/~tspro1/Random/copies.py

Set the number of trials to whatever you like (if it's high, you might
want to comment out the "A randomly chosen agent has seen..." lines to
make it run faster) - the ratio will converge to 1:1 for any larger
number of trials.
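
The sketch below is not copies.py itself, just a minimal simulation along the
same lines (the 10-copies-on-heads figure comes from Matt's setup; the other
parameters are arbitrary).  It prints both numbers under dispute: the per-trial
heads:tails ratio, which converges to 1:1, and the fraction of heads-observers
when you sample a random agent from the resulting population, which converges
to roughly 0.91.

import random

def run_trials(n_trials=100_000, copies_on_heads=10, seed=0):
    # Flip a fair coin n_trials times.  On heads the observing agent is
    # copied, so copies_on_heads agents remember heads; on tails a single
    # agent remembers tails.
    rng = random.Random(seed)
    heads_trials = 0
    agents = []                        # one entry per resulting agent
    for _ in range(n_trials):
        if rng.random() < 0.5:         # heads
            heads_trials += 1
            agents.extend(["heads"] * copies_on_heads)
        else:                          # tails
            agents.append("tails")
    # Per-trial view: the coin itself is fair, so this ratio converges to 1:1.
    print("heads trials / tails trials: %.3f"
          % (heads_trials / (n_trials - heads_trials)))
    # Anthropic-sampling view: a randomly chosen agent almost certainly saw heads.
    sample = [rng.choice(agents) for _ in range(10_000)]
    print("fraction of sampled agents that saw heads: %.3f"
          % (sample.count("heads") / len(sample)))

run_trials()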




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://www.mfoundation.org/




Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Matt Mahoney
--- On Sun, 6/22/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:

> On 6/21/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > Eliezer asked a similar question on SL4. If an agent
> flips a fair quantum coin and is copied 10 times if it
> comes up heads, what should be the agent's subjective
> probability that the coin will come up heads? By the
> anthropic principle, it should be 0.9. That is because if
> you repeat the experiment many times and you randomly
> sample one of the resulting agents, it is highly likely
> that it will have seen heads about 90% of the time.
> 
> That's the wrong answer, though (as I believe I pointed out when the
> question was asked over on SL4). The copying is just a red
> herring, it doesn't affect the probability at all.
> 
> Since this question seems to confuse many people, I wrote a
> short Python program simulating it:
> http://www.saunalahti.fi/~tspro1/Random/copies.py

The question was about subjective anticipation, not the actual outcome. It 
depends on how the agent is programmed. If you extend your experiment so that 
agents perform repeated, independent trials and remember the results, you will 
find that on average agents will remember the coin coming up heads 99% of the 
time. The agents have to reconcile this evidence with their knowledge that the 
coin is fair.

It is a trickier question without multiple trials. The agent then needs to model 
its own thought process (which is impossible for any Turing computable agent to 
do with 100% accuracy). If the agent knows that it is programmed so that, when it 
observes an outcome R times out of N, it expects the probability to be 
R/N, then it would conclude "I know that I would observe heads 99% of the time 
and therefore I would expect heads with probability 0.99". But this programming 
would not make sense in a scenario with conditional copying.

Here is an equivalent question. If you flip a fair quantum coin, and you are 
killed with 99% probability conditional on the coin coming up tails, then, when 
you look at the coin, what is your subjective anticipation of seeing "heads"?


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
Abram,
I did not group you with "probability buffs".  One of the errors I feel that 
writers make when their field is controversial is that they begin representing 
their own opinions from the vantage of countering critics.  Unfortunately, I am 
one of those writers (or perhaps I am just projecting).  But my comment about 
the probability buffs wasn't directed toward you; I was just using it as an 
exemplar (of something or another).

Your comments seem to make sense to me although I don't know where you are 
heading.  You said: 
"what should be hoped for is convergence to (nearly) correct models of (small 
parts of) the universe. So I suppose that rather than asking for "meaning" in a 
fuzzy logic, I should be asking for clear accounts of convergence 
properties..."  

When you have to find a way to tie components of knowledge together, 
you typically have to achieve another kind of convergence.  Even if these 
'components' of knowledge are reliable, they usually cannot be converged easily, 
due to the complexity that their interrelations with other kinds of knowledge 
(other 'components' of knowledge) will cause.

To follow up on what I previously said, if my logic program works it will mean 
that I can combine and test logical formulas of up to a few hundred distinct 
variables and find satisfiable values for these combinations in a relatively 
short period of time.  I think this will be an important method to test whether 
AI can be advanced by advancements in handling complexity even though some 
people do not feel that logical methods are appropriate to use on multiple 
source complexity.  As you seem to appreciate, logic can still be brought to 
the field even though it is not a purely logical game that is to be played.
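
For a point of reference (and to be clear, nothing below reflects my own
algorithm, which I have not described here), this is what an ordinary textbook
DPLL-style search for a satisfying assignment looks like; its worst case blows
up exponentially as the number of variables grows, which is exactly the kind of
wall a genuinely better method would have to get past.

def dpll(clauses, assignment=None):
    # Textbook DPLL.  Clauses are lists of ints: positive = variable,
    # negative = negated variable.  Returns a satisfying assignment (a dict
    # mapping variable -> bool) or None if the formula is unsatisfiable.
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        remaining = [lit for lit in clause if abs(lit) not in assignment]
        if not remaining:
            return None                   # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return dict(assignment)           # every clause satisfied
    for clause in simplified:
        if len(clause) == 1:              # unit propagation
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    var = abs(simplified[0][0])           # branch on an unassigned variable
    return (dpll(clauses, {**assignment, var: True})
            or dpll(clauses, {**assignment, var: False}))

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(dpll([[1, -2], [2, 3], [-1, -3]]))  # e.g. {1: True, 3: False, 2: True}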

When I begin to develop some simple theories about a subject matter, I will 
typically create hundreds of minor variations concerning those theories over a 
period of time.  I cannot hold all those variations of the conjecture in 
consciousness at any one moment, but I do feel that they can come to mind in 
response to a set of conditions for which that particular set of variations was 
created.  So while a simple logical theory (about some subject) may be 
expressible with only a few terms, when you examine all of the possible 
variations that can be brought into conscious consideration in response to a 
particular set of stimuli, I think you may find that the theories could be more 
accurately expressed using hundreds of distinct logical values.  

If this conjecture of mine turns out to be true, and if I can actually get my 
new logical methods to work, then I believe that this new range of logical 
methods may show whether advancements in complexity can make a difference to AI 
even if their application does not immediately result in a human level of 
intelligence.

Jim Bromer


- Original Message 
From: Abram Demski <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, June 22, 2008 4:38:02 PM
Subject: Re: [agi] Approximations of Knowledge

Well, since you found my blog, you probably are grouping me somewhat
with the "probability buffs". I have stated that I will not be
interested in any other fuzzy logic unless it is accompanied by a
careful account of the meaning of the numbers.

You have stated that it is unrealistic to expect a logical model to
reflect the world perfectly. The intuition behind this seems clear.
Instead, what should be hoped for is convergence to (nearly) correct
models of (small parts of) the universe. So I suppose that rather than
asking for "meaning" in a fuzzy logic, I should be asking for clear
accounts of convergence properties... but my intuition says that from
clear meaning, everything else follows.



  




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-22 Thread Matt Mahoney
--- On Sun, 6/22/08, William Pearson <[EMAIL PROTECTED]> wrote:

> From: William Pearson <[EMAIL PROTECTED]>
> > Two questions:
> > 1) Do you know enough to estimate which scenario is
> more likely?
> 
> Well, since intelligence explosions haven't happened previously in our
> light cone, they can't be a simple physical pattern, so I think
> non-exploding intelligences have the evidence for being simpler on
> their side. So we might find them more easily. I also think I have
> solid reasoning to think an intelligence explosion is unlikely, but it
> requires paper length rather than post length. So I think I do, but
> should I trust my own rationality?

I agree. I raised this question recently on SL4 but I don't think it has been 
resolved. Namely, is there a non-evolutionary model for recursive 
self-improvement? By non-evolutionary, I mean that the parent AI, and not the 
environment, chooses which of its children are more intelligent.

I am looking for a mathematical model, or a model that could be experimentally 
verified. It could use a simplified definition of intelligence, for example, 
ability to win at chess. In this scenario, an agent would produce a modified 
copy of itself and play its copy to the death. After many iterations, a 
successful model should produce a good chess-playing agent. If this is too 
computationally expensive or too complex to analyze mathematically, you could 
substitute a simpler game like tic-tac-toe or prisoner's dilemma. Another 
variation would use mathematical problems that we believe are hard to solve but 
easy to verify, such as traveling salesman, factoring, or data compression.

I find the absence of such models troubling. One problem is that there are no 
provably hard problems. Problems like tic-tac-toe and chess are known to be 
easy, in the sense that they can be fully analyzed with sufficient computing 
power. (Perfect chess is O(1) using a giant lookup table). At that point, the 
next generation would have to switch to a harder problem that was not 
considered in the original design. Thus, the design is not friendly.

Other problems like factoring can always be scaled by using larger numbers, but 
there is no proof that the problem is harder to solve than to verify. We only 
believe so because all of humanity has failed to find a fast solution (which 
would break RSA), but this is not a proof. Even if we use provably uncomputable 
problems like data compression or the halting problem, there is no provably 
correct algorithm for selecting among these a subset of problems such that at 
least half are hard to solve.

One counterargument is that maybe human-level intelligence is required for 
RSI. But there is a vast difference between human intelligence and humanity's 
intelligence. Producing an AI with an IQ of 200 is not self-improvement if you 
use any knowledge that came from other humans. RSI would be humanity producing 
an AI that is smarter than all of humanity. I have no doubt that will happen 
for some definition of "smarter", but without a model of RSI I don't believe it 
will be humanity's choice. Just like you can have children, some of whom will 
be smarter than you, but you won't know which ones.

Another counterargument is that we could proceed without proof: if problem X is 
hard, then RSI is possible. However, we lack models even with this relaxation. 
Suppose factoring is hard. An agent makes a modified copy of itself and 
challenges its child to a factoring contest. Last one to answer dies. This 
might work except that most mutations would be harmful and there would be 
enough randomness in the test that intelligence would decline over time. I 
would be interested if anyone could get a model like this to work for any X 
believed to be harder to solve than to verify.
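
To make this concrete, here is a toy sketch of the loop I have in mind.
Bernoulli success on a single "skill" parameter stands in for actually solving
anything like factoring, the mutation distribution just encodes the assumption
that most mutations are harmful, and none of the specific numbers are meant
seriously.  With only a handful of noisy test problems per round, selection is
too weak to overcome the harmful mutations, and the surviving skill typically
drifts downward over the generations, which is the failure mode described
above.

import random

def compete(parent_skill, n_problems, rng):
    # One generation: the parent spawns a mutated child, both attempt the same
    # number of solve-hard/verify-easy problems (success is Bernoulli in the
    # agent's skill), and the higher scorer survives.  Mutations are biased
    # downward, encoding "most mutations are harmful".
    child_skill = min(1.0, max(0.0, parent_skill + rng.gauss(-0.02, 0.05)))
    parent_score = sum(rng.random() < parent_skill for _ in range(n_problems))
    child_score = sum(rng.random() < child_skill for _ in range(n_problems))
    return child_skill if child_score > parent_score else parent_skill

def run(generations=2000, n_problems=5, seed=0):
    rng = random.Random(seed)
    skill = 0.5
    for g in range(generations):
        skill = compete(skill, n_problems, rng)
        if g % 500 == 0:
            print("generation %4d: surviving skill = %.3f" % (g, skill))
    return skill

run()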

I believe that RSI is necessarily evolutionary (and therefore not controllable 
by us), because you can't test for any level of intelligence without already 
being that smart. However, I don't believe the issue is settled, either.


-- Matt Mahoney, [EMAIL PROTECTED]


