Re: [singularity] Quantum Mechanics and Consciousness

2008-05-21 Thread Matt Mahoney
ment observed in
>  experiments on cubes of different weights and weight
>  distributions  [5]. 
> 
>  Walker also modeled information retrieval in 'guess the card'
>  experiments.  Simple, classical, random chance would predict a
>  smooth, binomial curve for the probabilities of getting the right
>  answer versus the number of subjects making successful
>  predictions at these probabilities.  Walker's model predicts that
>  the curve would have peaks at certain levels of probability of
>  getting the right answer above those predicted by chance alone.
>  Experimental data showed peaks at the locations modeled.
>  However, more people were successful at the higher probability
>  levels than Walker's model estimated.  This is considered to be
>  evidence of learning enhancement  [5].
> 
>  In the world of the weird and unexplained you are left to imagine, with
> mysterious metaphors and thoughts that don't allow understanding audiences.
> Bertromavich
> 'He who receives an idea from me, receives instruction himself without
> lessening mine; as he who lights his taper at mine, receives light
> without darkening me.' Thomas Jefferson, letter to Isaac McPherson, 13
> August 1813
> 


-- Matt Mahoney, [EMAIL PROTECTED]




Re: [singularity] future of mankind blueprint and relevance of AGI

2008-05-20 Thread Matt Mahoney

--- Minwoo Bae <[EMAIL PROTECTED]> wrote:

> This isn't totally relevant, but have you heard of Korea's drafting of a
> robot ethics charter?

You mean
http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html ?

It seems mainly focused on protecting humans.  But the proposal was made a year
ago and nothing has been released yet.



-- Matt Mahoney, [EMAIL PROTECTED]




Re: [singularity] An Open Letter to AGI Investors

2008-04-16 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> I have stuck my neck out and written an Open Letter to AGI (Artificial 
> General Intelligence) Investors on my website at http://susaro.com.
> 
> All part of a campaign to get this field jumpstarted.
> 
> Next week I am going to put up a road map for my own development project.

So if the value of AGI is all the human labor it replaces (about US $1
quadrillion), how much will it cost to build?  Keep in mind there is a
tradeoff between waiting for the cost of technology to drop vs. having it now.
 How much should we expect to spend?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Stronger than Turing?

2008-04-15 Thread Matt Mahoney
--- Ben Peterson <[EMAIL PROTECTED]> wrote:

> Maybe I'm hallucinating, but I thought I read somewhere of some test  
> stronger or more reliable than the Turing Test to verify whether or  
> not a machine had achieved human-level intelligence.

Text compression?
http://cs.fit.edu/~mmahoney/compression/rationale.html

I wouldn't say it is more powerful, just more objective and repeatable.  Also,
in its present form it can only be used to compare one model to another.  To
test whether a model achieves "human" level, it needs to be compared to
average human ability to predict successive words or symbols in a text stream.

This is a harder test to get right, one I have not yet attempted.  Shannon [1]
first did this test in 1950 but left a wide range of uncertainty (0.6 to 1.3
bits per character) due to his method of converting a ranking of next-letter
guesses to a probability distribution.  Cover and King [2] reduced the
uncertainty in 1978 (upper bound of 1.3 bpc) by making the probability
distribution explicit in a gambling game, but their method is time consuming
and could only be used on a small sample of text.  I have also made some
attempts to refine Shannon's method in
http://cs.fit.edu/~mmahoney/dissertation/entropy1.html (under 1.1 bpc). 

In any case, none of these measurements were on the actual test data used in
my large text benchmark.  The best result to date is 1.04 bpc, but I would not
call this AI.  I know these programs use rather simple language models and are
memory bound.  (The top program needs 4.6 GB).  The Wikipedia data set I use
probably has a lower entropy than the data used in the literature, possibly
0.8-0.9 bpc.  That's just a guess, because as I said, I don't yet have a
reliable way to measure it.
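
For concreteness, here is a minimal sketch of how bits per character is
computed from a compressor's output.  It uses Python's zlib as a stand-in for
a real benchmark entry and assumes a local copy of the test data (the file
name is a placeholder):

    # Sketch: estimate bits per character (bpc) from compressed size.
    # zlib is only a stand-in; real benchmark entries compress far better.
    import zlib

    with open("enwik8", "rb") as f:     # placeholder name for the test file
        data = f.read()

    compressed = zlib.compress(data, 9)
    bpc = 8.0 * len(compressed) / len(data)   # compressed bits per input byte
    print("%d -> %d bytes, %.3f bpc" % (len(data), len(compressed), bpc))

zlib will land nowhere near 1.04 bpc on this data; the point is only how the
measurement is made.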

References

1. Shannon, Claude E., “Prediction and Entropy of Printed English”, Bell Sys.
Tech. J (3) p. 50-64, 1950.

2. Cover, T. M., and R. C. King, “A Convergent Gambling Estimate of the
Entropy of English”, IEEE Transactions on Information Theory (24)4 (July) pp.
413-421, 1978.


-- Matt Mahoney, [EMAIL PROTECTED]



Testing AGI (was RE: [singularity] Vista/AGI)

2008-04-13 Thread Matt Mahoney
--- Derek Zahn <[EMAIL PROTECTED]> wrote:

> At any rate, if there were some clearly-specified tests that are not
> AGI-complete and yet not easily attackable with straightforward software
> engineering or Narrow AI techniques, that would be a huge boost in my
> opinion to this field.  I can't think of any though, and they might not
> exist.  If it is in fact impossible to find such tasks, what does that say
> about AGI as an endeavor?

Text compression is one such test, as I argue in
http://cs.fit.edu/~mmahoney/compression/rationale.html

The test is only for language modeling.  Theoretically it could be extended to
vision or audio processing.  For example, to maximally compress video the
compressor must understand the physics of the scene (e.g. objects fall down),
which can be arbitrarily complex (e.g. a video of people engaging in
conversation about Newton's law of gravity).  Likewise, maximally compressing
music is equivalent to generating or recognizing music that people like.  The
problem is that the information content of video and audio is dominated by
incompressible noise that is nontrivial to remove -- noise being any part of
the signal that people fail to perceive.  Deciding which parts of the signal
are noise is itself AI-hard, so it requires a lossy compression test with
human judges making subjective decisions about quality.  This is not a big
problem for text because the noise level (different ways of expressing the
same meaning) is small, or at least does not overwhelm the signal.  Long term
memory has an information rate of a few bits per second, so any signal you
compress should not be many orders of magnitude higher.

A problem with text compression is the lack of adequate hardware.  There is a
three-way tradeoff between compression ratio, memory, and speed.  The top
compressor in http://cs.fit.edu/~mmahoney/compression/text.html uses 4.6 GB of
memory.  Many of the best algorithms could be drastically improved if only
they ran on a supercomputer with 100 GB or more.  The result is that most
compression gains come from speed and memory optimization rather than using
more intelligent models.  The best compressors use crude models of semantics
and grammar.  They preprocess the text by token substitution from a dictionary
that groups words by topic and grammatical role, then predict the token stream
using mixtures of fixed-offset context models.  It is roughly equivalent to
the ungrounded language model of a 2 or 3 year old child at best.
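
To make "predict the token stream using context models" concrete, here is a
toy order-2 character model scored in bits per character.  It is nothing like
the real entries, which mix many adaptive models, but it shows the principle
of predicting each symbol from a short context and charging -log2(p) bits:

    # Sketch: a toy order-2 character model scored in bits per character.
    import math
    from collections import defaultdict

    def bpc(text, order=2):
        counts = defaultdict(lambda: defaultdict(int))  # context -> next-char counts
        total_bits = 0.0
        for i, c in enumerate(text):
            ctx = text[max(0, i - order):i]
            seen = counts[ctx]
            n = sum(seen.values())
            p = (seen[c] + 1.0) / (n + 256.0)   # Laplace smoothing over 256 symbols
            total_bits += -math.log2(p)
            seen[c] += 1                        # update the model online
        return total_bits / len(text)

    print(bpc("the quick brown fox jumps over the lazy dog " * 50))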

An alternative would be to reduce the size of the test set to reduce
computational requirements, as the Hutter prize did. http://prize.hutter1.net/
I did not because I believe the proper way to test an adult level language
model is to train it on the same amount of language that an average adult is
exposed to, about 1 GB.  I would be surprised if a 100 MB test progressed past
the level of a 3 year old child.  I believe the data set is too small to train
a model to learn arithmetic, logic, or high level reasoning.  Including these
capabilities would not improve compression.

Tests on small data sets could be used to gauge early progress.  But
ultimately, I think you are going to need hardware that supports AGI to test
it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney

--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Fri, Apr 11, 2008 at 10:50 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> >  If the problem is so simple, why don't you just solve it?
> >  http://www.securitystats.com/
> >  http://en.wikipedia.org/wiki/Storm_botnet
> >
> >  There is a trend toward using (narrow) AI for security.  It seems to be
> one of
> >  its biggest applications.  Unfortunately, the knowledge needed to secure
> >  computers is almost exactly the same kind of knowledge needed to attack
> them.
> >
> 
> Matt, this issue was already raised a couple of times. It's a
> technical problem that can be solved perfectly, but isn't in practice,
> because it's too costly. Formal verification, specifically aided by
> languages with rich type systems that can express proofs of
> correctness for complex properties, can give you perfectly safe
> systems. It's just very difficult to specify all the details.

Actually it cannot be solved even in theory.  A formal specification of a
program is itself a program, and it is undecidable whether two programs are
equivalent: deciding whether "run P, then output 0" behaves the same as
"output 0" would decide whether P halts.

Converting natural language to a formal specification is AI-hard, or perhaps
harder, because people can't get it right either.  If we could write software
without bugs, we would solve a big part of the security problem.

> These AIs for network security that you are talking about are a
> cost-effective hack that happens to work sometimes. It's not a
> low-budget vision of future super-hacks.

Not at present because we don't have AI.  We rely on humans to find
vulnerabilities in software.  We would like for machines to do this
automatically.  Unfortunately such machines would also be useful to hackers. 
Such double-edged tools already exist.  For example, tools like SATAN, Nessus,
and Nmap can quickly test a system by probing it to look for thousands of
known or published vulnerabilities.  Attackers use the same tools to break
into systems.  www.virustotal.com allows you to upload a file and scan it with
32 different virus detectors.  This is a useful tool for virus writers who
want to make sure their programs evade detection.  I suggest it will be very
difficult to develop any security tool that you could keep out of the hands of
the bad guys.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> You want me to imagine a scenario in which we have AGI, but in your 
> scenario these AGI systems are somehow not being used to produce 
> superintelligent systems, and these superintelligent systems are, for 
> some reason, not taking the elementary steps necessary to solve one of 
> the world's simplest problems (computer viruses).

If the problem is so simple, why don't you just solve it?
http://www.securitystats.com/
http://en.wikipedia.org/wiki/Storm_botnet

There is a trend toward using (narrow) AI for security.  It seems to be one of
its biggest applications.  Unfortunately, the knowledge needed to secure
computers is almost exactly the same kind of knowledge needed to attack them.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > We
> > already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
> > etc. A worm that can write and debug code and discover new vulnerabilities
> > will be unstoppable.  Do you really think your AI will win the race when
> you
> > have the extra burden of making it safe?
> 
> Yes, because these "reproducing agents" you refer to are the most 
> laughably small computer viruses that have no hope whatsoever of 
> becoming generally intelligent.  At every turn, you completely 
> underestimate what it means for a system to be "intelligent".

There are no intelligent or self improving worms... yet.  Are you confident
that none will ever be created even after we have automated human-level
understanding of code, which I presume will be one of the capabilities of AGI?

> > Also, RSI is an experimental process, and therefore evolutionary.  We have
> > already gone through the information theoretic argument why this must be
> the
> > case.
> 
> No you have not:  I know of no "information theoretic argument" that 
> even remotely applies to the type of system that is needed to achieve 
> real intelligence.  Furthermore, the statement that "RSI is an 
> experimental process, and therefore evolutionary" is just another 
> example of you declaring something to be true when, in fact, it is 
> loaded down with spurious assumptions.  Your statement is a complete 
> non sequitur.

(sigh)  To repeat, the argument is that an agent cannot deterministically
create an agent of greater intelligence than itself, because if it could it
would already be that smart.  The best it can do is make educated guesses as
to what will increase intelligence.  I don't argue that we can't do better
than evolution.  (Adding more hardware is probably a safe bet).  But an agent
cannot even test whether another is more intelligent.  In order for me to give
a formal argument, you would have to accept a formal definition of
intelligence, such as Hutter and Legg's universal intelligence, which is
bounded by algorithmic complexity.  But you dismiss such definitions as
irrelevant.  So I can only give examples, such as the ability to measure an IQ
of 200 in children but not adults, and the historical persecution of
intelligence (Socrates, Galileo, Holocaust, Khmer Rouge, etc).

A self improving agent will have to produce experimental variations and let
them be tested in a competitive environment it doesn't control or fully
understand that weeds out the weak.  If it could model the environment or test
for intelligence then it could reliably improve its intelligence,
contradicting our original assumption.

This is an evolutionary process.  Unfortunately, evolution is not stable.  It
resides on the boundary between stability and chaos, like all incrementally
updated or adaptive algorithmically complex systems.  By this I mean it tends
to a Lyapunov exponent of 0.  A small perturbation in its initial state might
decay or it might grow.  Critically balanced systems like this have a Zipf
distribution of catastrophes -- an inverse relation between probability and
severity.  We find this property in randomly connected logic gates (frequency
vs. magnitude of state transitions) software systems (frequency vs. severity
of failures), gene regulatory systems (frequency vs. severity of mutations),
and evolution (frequency vs. severity of plagues, population explosions, mass
extinctions, and other ecological disasters).
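
As a toy illustration (not evidence), a critical branching process with a mean
of exactly one offspring per event sits on that boundary and produces the
inverse frequency-severity relation:

    # Sketch: avalanche sizes in a critical branching process (mean offspring = 1).
    # Small "catastrophes" are common, huge ones rare, with a heavy-tailed distribution.
    import random
    from collections import Counter

    def avalanche_size(cap=100000):
        active, size = 1, 0
        while active and size < cap:
            size += 1
            active -= 1
            if random.random() < 0.5:   # 0 or 2 follow-on events, expected value 1
                active += 2
        return size

    sizes = Counter(avalanche_size() for _ in range(20000))
    for s in (1, 10, 100, 1000):
        bucket = sum(c for k, c in sizes.items() if s <= k < 10 * s)
        print("size %5d-%5d: %6d avalanches" % (s, 10 * s - 1, bucket))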

The latter should be evident in the hierarchical organization of geologic
eras.  And a singularity is a catastrophe of unprecedented scale.  It could
result in the extinction of DNA based life and its replacement with
nanotechnology.  Or it could result in the extinction of all intelligence. 
The only stable attractor in evolution is a dead planet.  (You knew this,
right?)  Finally, I should note that intelligence and friendliness are not the
same as fitness.  Roaches, malaria, and HIV are all formidable competitors to
Homo sapiens.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > I did also look at http://susaro.com/archives/category/general but there
> is no
> > design here either, just a list of unfounded assertions.  Perhaps you can
> > explain why you believe point #6 in particular to be true.
> 
> Perhaps you can explain why you described these as "unfounded 
> assertions" when I clearly stated in the post that the arguments to back 
> up this list will come later, and that this list was intended just as a 
> declaration?

You say, "The problem with this assumption is that there is not the slightest
reason why there should be more than one type of AI, or any competition
between individual AIs, or any evolution of their design."

Which is completely false.  There are many competing AI proposals right now. 
Why will this change?  I believe your argument is that the first AI to achieve
recursive self improvement will overwhelm all competition.  Why should it be
friendly when the only goal it needs to succeed is acquiring resources?  We
already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
etc. A worm that can write and debug code and discover new vulnerabilities
will be unstoppable.  Do you really think your AI will win the race when you
have the extra burden of making it safe?

Also, RSI is an experimental process, and therefore evolutionary.  We have
already gone through the information theoretic argument why this must be the
case.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > If you have a better plan for AGI, please let me know.
> 
> I do.  I did already.
> 
> You are welcome to ask questions about it at any time (see 
> http://susaro.com/publications).

Question: which of these papers is actually a proposal for AGI?

I did also look at http://susaro.com/archives/category/general but there is no
design here either, just a list of unfounded assertions.  Perhaps you can
explain why you believe point #6 in particular to be true.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> When a computer processes a request like "how many teaspoons in a cubic 
> parsec?" it can extract the "meaning" of the question by a relatively 
> simple set of syntactic rules and question templates.
> 
> But when you ask it a question like "how many dildos are there on the 
> planet?" [Try it] you find that google cannot answer this superficially 
> similar question because it requires more intelligence in the 
> question-analysis mechanism.

And just how would you expect your AGI to answer the question?  The first step
in research is to find out if someone else has already answered it.  It may
have been answered but Google can't find it because it only indexes a small
fraction of the internet.  It may also be that some dildo makers are privately
held and don't release sales figures.  In any case your AGI is either going to
output a number or "I don't know", neither of which is more helpful than
Google.  If it does output a number, you are still going to want to know where
it came from.

But this discussion is tiresome.  I would not have expected you to anticipate
today's internet in 1978.  I suppose when the first search engine (Archie) was
released in 1990, you probably imagined that all search engines would require
you to know the name of the file you were looking for.

If you have a better plan for AGI, please let me know.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Just what do you want out of AGI?  Something that thinks like a person or
> > something that does what you ask it to?
> 
> Either will do:  your suggestion achieves neither.
> 
> If I ask your non-AGI the following question:  "How can I build an AGI 
> that can think at a speed that is 1000 times faster than the speed of 
> human thought?" it will say:
> 
> "Hi, my name is Ben and I just picked up your question.  I would
>  love to give you the answer but you have to send $20 million
>  and give me a few years".
> 
> That is not the answer I would expect of an AGI.  A real AGI would do 
> original research to solve the problem, and solve it *itself*.
> 
> Isn't this, like, just too obvious for words?  ;-)

Your question is not well formed.  Computers can already think 1000 times
faster than humans for things like arithmetic.  Does your AGI also need to
know how to feed your dog?  Or should it guess and build it anyway?  I would
think such a system would be dangerous.

I expect a competitive message passing network to improve over time.  Early
versions will work like an interactive search engine.  You may get web pages
or an answer from another human in real time, and you may later receive
responses to your persistent query.  If your question can be matched to an
expert in some domain that happens to be on the net, then it gets routed
there.  Google already does this.  For example, if you type an address, it
gives you a map and offers driving directions.  If you ask it "how many
teaspoons in a cubic parsec?" it will compute the answer (try it).  It won't
answer every question, but with 1000 times more computing power than Google, I
expect there will be many more domain experts.
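
As a back-of-the-envelope sketch of that kind of calculator query (rounded
constants):

    # Sketch: teaspoons in a cubic parsec, with rounded constants.
    parsec_m = 3.0857e16                 # meters per parsec
    teaspoon_m3 = 4.92892e-6             # cubic meters per US teaspoon
    print("%.2e teaspoons" % (parsec_m ** 3 / teaspoon_m3))   # roughly 6e54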

I expect as hardware gets more powerful, peers will get better at things like
recognizing people in images, writing programs, and doing original research. 
I don't claim that I can solve these problems.  I do claim that there is an
incentive to provide these services and that the problems are not intractable
given powerful hardware, and therefore the services will be provided.  There
are two things to make the problem easier.  First, peers will have access to a
vast knowledge source that does not exist today.  Second, peers can specialize
in a narrow domain, e.g. only recognize one particular person in images, or
write software or do research in some obscure, specialized field.

Is this labor intensive?  Yes.  A $1 quadrillion system won't just build
itself.  People will build it because they will get back more value than they
put in.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Perhaps you have not read my proposal at
> http://www.mattmahoney.net/agi.html
> > or don't understand it.
> 
> Some of us have read it, and it has nothing whatsoever to do with 
> Artificial Intelligence.  It is a labor-intensive search engine, nothing 
> more.
> 
> I have no idea why you would call it an AI or an AGI.  It is not 
> autonomous, contains no thinking mechanisms, nothing.  Even as a "labor 
> intensive search engine" there is no guarantee it would work, because 
> the conflict resolution issues are all complexity-governed.
> 
> I am astonished that you would so blatantly call it something that it is 
> not.

It is not now.  I think it will be in 30 years.  If I was to describe the
Internet to you in 1978 I think you would scoff too.  We were supposed to have
flying cars and robotic butlers by now.  How could Google make $145 billion by
building an index of something that didn't even exist?

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> >  Of course what I imagine emerging from the Internet bears little
> resemblance
> >  to Novamente.  It is simply too big to invest in directly, but it will
> present
> >  many opportunities.
> 
> But the emergence of superhuman AGI's like a Novamente may eventually
> become,
> will both dramatically alter the nature of, and dramatically reduce
> the cost of, "global
> brains" such as you envision...

Yes, like the difference between writing a web browser and defining the HTTP
protocol, each costing a tiny fraction of the value of the Internet but with a
huge impact on its outcome.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney

--- Mike Tintner <[EMAIL PROTECTED]> wrote:

> My point was how do you test the *truth* of items of knowledge. Google tests
> the *popularity* of items. Not the same thing at all. And it won't work.

It does work because the truth is popular.  Look at prediction markets.  Look
at Wikipedia.  It is well known that groups make better decisions as a whole
than the individuals in those groups (e.g. democracies vs. dictatorships). 
Combining knowledge from independent sources and testing their reliability is
a well known machine learning technique which I use in the PAQ data
compression series.  I understand the majority can sometimes be wrong, but the
truth eventually comes out in a marketplace that rewards truth.
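
The mixing idea can be sketched in a few lines: combine the sources'
probabilities in the logistic domain and shift weight toward whichever sources
turn out to be right.  This is a heavily simplified sketch of the principle,
not the actual PAQ code:

    # Sketch: logistic mixing of independent sources, weighted by track record.
    import math

    def mix(probs, weights):
        # weighted sum in the logistic domain, squashed back to a probability
        s = sum(w * math.log(p / (1 - p)) for w, p in zip(weights, probs))
        return 1.0 / (1.0 + math.exp(-s))

    def update(probs, weights, outcome, rate=0.02):
        # move weight toward sources that predicted the observed outcome
        error = outcome - mix(probs, weights)
        return [w + rate * error * math.log(p / (1 - p))
                for w, p in zip(weights, probs)]

    weights = [0.0, 0.0, 0.0]
    sources = [0.9, 0.6, 0.2]   # three sources' probabilities that some claim is true
    for _ in range(100):
        weights = update(sources, weights, outcome=1)   # the claim keeps checking out
    print(["%.2f" % w for w in weights])   # the most accurate source gains the most weight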

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.  Most AGI projects don't even address the problem of
conflicting or malicious information.  If you have a better way of dealing
with it, please let us know.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney
--- Mike Tintner <[EMAIL PROTECTED]> wrote:
> How do you resolve disagreements? 

This is a problem for all large databases and multiuser AI systems.  In my
design, messages are identified by source (not necessarily a person) and a
timestamp.  The network economy rewards those sources that provide the most
useful (correct) information. There is an incentive to produce reputation
managers which rank other sources and forward messages from highly ranked
sources, because those managers themselves become highly ranked.

Google handles this problem by using its PageRank algorithm, although I
believe that better (not perfect) solutions are possible in a distributed,
competitive environment.  I believe that these solutions will be deployed
early and be the subject of intense research because it is such a large
problem.  The network I described is vulnerable to spammers and hackers
deliberately injecting false or forged information.  The protocol can only do
so much.  I designed it to minimize these risks.  Thus, there is no procedure
to delete or alter messages once they are posted.  Message recipients are
responsible for verifying the identity and timestamps of senders and for
filtering spam and malicious messages at risk of having their own reputations
lowered if they fail.
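
A minimal sketch of the reputation-manager idea, with made-up field names and
a made-up scoring rule (not the actual protocol at
http://www.mattmahoney.net/agi.html):

    # Sketch: rank sources by usefulness and forward only highly ranked ones.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        source: str        # identifies the sender (not necessarily a person)
        timestamp: float
        body: str

    @dataclass
    class ReputationManager:
        scores: dict = field(default_factory=dict)

        def rate(self, msg: Message, useful: bool, step: float = 0.1):
            # nudge the source's score toward 1 if its message was useful, else toward 0
            old = self.scores.get(msg.source, 0.5)
            self.scores[msg.source] = old + step * ((1.0 if useful else 0.0) - old)

        def forward(self, msg: Message, threshold: float = 0.5) -> bool:
            # pass along messages only from sources ranked at or above the threshold
            return self.scores.get(msg.source, 0.5) >= threshold

    rm = ReputationManager()
    m = Message("weather-bot", time.time(), "rain tomorrow")
    rm.rate(m, useful=True)
    print(rm.forward(m), rm.scores)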


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > The simulations can't loop because the simulator needs at least as much
> > memory
> > as the machine being simulated.
> > 
> 
> You're making assumptions when you say that. Outside of a particular
> simulation we don't know the rules. If this universe is simulated the
> simulator's reality could be so drastically and unimaginably different from
> the laws in this universe. Also there could be data busses between
> simulations and the simulations could intersect or, a simulation may break
> the constraints of its contained simulation somehow and tunnel out. 

I am assuming finite memory.  For the universe we observe, the Bekenstein
bound of the Hubble radius is 2pi^2 T^2 c^5/hG = 2.91 x 10^122 bits.  (T = age
of the universe = 13.7 billion years, c = speed of light, h = Planck's
constant, G = gravitational constant).  There is not enough material in the
universe to build a larger memory.  However, a universe up the hierarchy might
be simulated by a Turing machine with infinite memory or by a more powerful
machine such as one with real-valued registers.  In that case the restriction
does not apply.  For example, a real-valued function can contain nested copies
of itself infinitely deep.
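
The arithmetic, as a sketch with rounded constants.  Treating the formula as
giving nats and dividing by ln 2 to get bits is my assumption, but it
reproduces the quoted figure:

    # Sketch: 2*pi^2*T^2*c^5/(h*G), then nats -> bits.
    import math

    T = 13.7e9 * 3.156e7     # age of the universe in seconds
    c = 2.998e8              # speed of light, m/s
    h = 6.626e-34            # Planck's constant, J*s
    G = 6.674e-11            # gravitational constant, m^3/(kg*s^2)

    nats = 2 * math.pi**2 * T**2 * c**5 / (h * G)
    print("%.2e nats = %.2e bits" % (nats, nats / math.log(2)))   # about 2.9 x 10^122 bits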


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> If I understand what I have read in this thread so far, there is Ben on the
> one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
> other there is Matt saying $1quadrillion, using a billion brains in 30
> years. I don't believe I have ever seen such a divergence of opinion before
> on what is required  for a technological breakthrough (unless people are not
> being serious and I am being naive). I suppose  this sort of non-consensus
> on such a scale could be part of investor reticence.

I am serious about the $1 quadrillion price tag, which is the low end of my
estimate.  The value of the Internet is now in the tens of trillions and
doubling every few years.  The value of AGI will be a very large fraction of
the world economy, currently US $66 trillion per year and growing at 5% per
year. 

Of course what I imagine emerging from the Internet bears little resemblance
to Novamente.  It is simply too big to invest in directly, but it will present
many opportunities.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Mike Tintner <[EMAIL PROTECTED]> wrote:

> Matt : a super-google will answer these questions by routing them to
> experts on these topics that will use natural language in their narrow
> domains of expertise.
> 
> And Santa will answer every child's request, and we'll all live happily ever
> after.  Amen.

If you have a legitimate criticism of the technology or its funding plan, I
would like to hear it.  I understand there will be doubts about a system I
expect to cost over $1 quadrillion and take 30 years to build.

The protocol specifies natural language.  This is not a hard problem in narrow
domains.  It dates back to the 1960's.  Even in broad domains, most of the
meaning of a message is independent of word order.  Google works on this
principle.
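
A toy sketch of that principle: represent each message as a bag of words and
compare the bags, so reordering the words changes nothing.  This is an
illustration only, not Google's ranking:

    # Sketch: bag-of-words similarity ignores word order entirely.
    import math
    from collections import Counter

    def bag(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        return dot / (math.sqrt(sum(v * v for v in a.values())) *
                      math.sqrt(sum(v * v for v in b.values())))

    print(cosine(bag("how many teaspoons in a cubic parsec"),
                 bag("in a cubic parsec how many teaspoons")))   # 1.0: same bag of words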

But this is beside the point.  The critical part of the design is an incentive
for peers to provide useful services in exchange for resources.  Peers that
appear most intelligent and useful (and least annoying) are most likely to
have their messages accepted and forwarded by other peers.  People will
develop domain experts and routers and put them on the net because they can
make money through highly targeted advertising.

Google would be a peer on the network with a high reputation.  But Google
controls only 0.1% of the computing power on the Internet.  It will have to
compete with a system that allows updates to be searched instantly, where
queries are persistent, and where a query or message can initiate
conversations with other people in real time.

> Which are these areas of science, technology, arts, or indeed any area of 
> human activity, period, where the experts all agree and are NOT in deep 
> conflict?
> 
> And if that's too hard a question, which are the areas of AI or AGI, where 
> the experts all agree and are not in deep conflict?

I don't expect the experts to agree.  It is better that they don't.  There are
hard problems remaining to be solved in language modeling, vision, and
robotics.  We need to try many approaches with powerful hardware.  The network
will decide who the winners are.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > 
> > There is no way to know if we are living in a nested simulation, or even
> > in a
> > single simulation.  However there is a mathematical model: enumerate all
> > Turing machines to find one that simulates a universe with intelligent
> > life.
> > 
> 
> What if that nest of simulations loop around somehow? What was that idea
> where there is this new advanced microscope that can see smaller than ever
> before and you look into it and see an image of yourself looking into it... 

The simulations can't loop because the simulator needs at least as much memory
as the machine being simulated.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > 
> > You won't see a singularity.  As I explain in
> > http://www.mattmahoney.net/singularity.html an intelligent agent (you)
> > is not capable of recognizing agents of significantly greater
> > intelligence.  We don't know whether a singularity has already occurred
> > and the world we observe is the result.  It is consistent with the
> > possibility, e.g. it is finite, Turing computable, and obeys Occam's
> > Razor (AIXI).
> > 
> 
> You should be able to see it coming. That's how people like Kurzweil make
> their estimations based on technological rates of change. When it gets
> really close though then you can only imagine how it will unfold. 

Yes, we can see it coming, so by the anthropic principle, the singularity must
always be in the future.

> If a singularity has already occurred how do you know how many there have
> been? Has somebody worked out the math on this? And if this universe is a
> simulation is that simulation running within another simulation? Is there a
> simulation forefront or is it just one simulation within another ad
> infinitum? Simulation raises too many questions. Seems like simulation and
> singularity would be easier to keep separate, except for uploading. But then
> the whole concept of uploading is just ...too.. confusing... unless our
> minds are complex systems like Richard Loosemore proposes and uploading
> would only be a sort of echo of the original.

There is no way to know if we are living in a nested simulation, or even in a
single simulation.  However there is a mathematical model: enumerate all
Turing machines to find one that simulates a universe with intelligent life.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Derek Zahn <[EMAIL PROTECTED]> wrote:

> Matt Mahoney writes:
> > As for AGI research, I believe the most viable path is a distributed
> > architecture that uses the billions of human brains and computers
> > already on the Internet. What is needed is an infrastructure that
> > routes information to the right experts and an economy that rewards
> > intelligence and friendliness. I described one such architecture in
> > http://www.mattmahoney.net/agi.html It differs significantly from the
> > usual approach of trying to replicate a human mind. I don't believe
> > that one person or a small group can solve the AGI problem faster than
> > the billions of people on the Internet are already doing.
> I'm not sure I understand this.  Although a system that can respond
> well to commands of the following form:
>  
> "Show me an existing document that best answers the question 'X'"
>  
> is certainly useful, it is hardly 'general' in any sense we usually
> mean.  I would think a 'general' intelligence should be able to take
> a shot at answering:
>  
> "Why are so many streets named after trees?"
> or
> "If the New York Giants played cricket against the New York Yankees,
> who would probably win?"
> or
> "Here are the results of some diagnostic tests.  How likely is it
> that the patient has cancer?  What test should we do next?"
> or
> "Design me a stable helicopter with the rotors on the bottom instead
> of the top"
>  
> Super-google is nifty, but I don't see how it is AGI.

Because a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise. All of this can be done with existing technology
and a lot of hard work. The work will be done because there is an
incentive to do it and because the AGI (in the system, not its
components) is so valuable. AGI will be an extension of the Internet
that nobody planned, nobody built, and nobody owns.




-- Matt Mahoney, [EMAIL PROTECTED]



Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> 
> "John G. Rose" <[EMAIL PROTECTED]> wrote:
> 
> >If you look at the state of internet based intelligence now, all the
> data
> >and its structure, the potential for chain reaction or a sort of
> structural
> >vacuum exists and it is accumulating a potential at an increasing
> rate.
> >IMO...
> 
> So you see the arrival of a Tipping Point as per  Malcolm Gladwell.
> Whether I physically benefit from the arrival of the Singularity or
> not, I just want to see the damn thing. I would invest some modest
> sums in AGI if we could get a huge collection plate going around
> (these collection plate amounts add up!).

You won't see a singularity.  As I explain in
http://www.mattmahoney.net/singularity.html an intelligent agent (you)
is not capable of recognizing agents of significantly greater
intelligence.  We don't know whether a singularity has already occurred
and the world we observe is the result.  It is consistent with the
possibility, e.g. it is finite, Turing computable, and obeys Occam's
Razor (AIXI).

As for AGI research, I believe the most viable path is a distributed
architecture that uses the billions of human brains and computers
already on the Internet.  What is needed is an infrastructure that
routes information to the right experts and an economy that rewards
intelligence and friendliness.  I described one such architecture in
http://www.mattmahoney.net/agi.html  It differs significantly from the
usual approach of trying to replicate a human mind.  I don't believe
that one person or a small group can solve the AGI problem faster than
the billions of people on the Internet are already doing.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-04-07 Thread Matt Mahoney
Perhaps the difficulty in finding investors in AGI is that among people most
familiar with the technology (the people on this list and the AGI list),
everyone has a different idea on how to solve the problem.  "Why would I
invest in someone else's idea when clearly my idea is better?"


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] future search

2008-04-02 Thread Matt Mahoney
--- David Hart <[EMAIL PROTECTED]> wrote:

> Hi All,
> 
> I'm quite worried about Google's new *Machine Automated Temporal
> Extrapolation* technology going FOOM!
> 
> http://www.google.com.au/intl/en/gday/

More on the technology

http://en.wikipedia.org/wiki/Google's_hoaxes

:-)





-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-03-16 Thread Matt Mahoney
I estimate that a distributed design like the one I outlined at
http://www.mattmahoney.net/agi.html will cost at least US $1 quadrillion over
30 years.  For something of this scale, it is simply impractical to talk about
ownership or investment.  It means a significant fraction of the Earth's
population contributing a significant fraction of their lives, several hours
per day for decades.  People will only do this because they directly benefit
from its use, and using it has the side effect of contributing to its
knowledge and computing base.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-12 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > 
> > That's true.  The visual perception process is altered after the
> > experiment to
> > favor recognition of objects seen in the photos.  A recall test doesn't
> > measure this effect.  I don't know of a good way to measure the quantity
> > of
> > information learned.
> > 
> 
> When you learn something is it stored as electrical state or are molecules
> created? Perhaps precise measurements of particular chemicals in certain
> regions could correlate to data differential. A problem though is that the
> data may be spread over a wide region making it difficult to measure. And
> you'd have to be able to measure chemicals in tissue structure though
> software could process out the non-applicable.

I was thinking more in terms of theoretical information capacity independent
of implementation.  Landauer's experiments measured memory using recall tests.
 This measure is more useful because it tells you how much memory is needed to
reproduce the results in a machine.  The brain may be doing it less
efficiently but we don't need to implement it the same way.  If a person can
memorize 100 random bits, then you can be certain that any implementation will
need at least 100 bits of memory.

> But a curious number in addition to average long term memory storage is
> MIPS. How many actual bit flips are occurring? This is where you have to be
> precise as even trace chemicals, light, temperature, effect this number.
> Though just a raw number won't tell you that much compared to say
> spatiotemporal MIPS density graphs.

A molecular level simulation of the brain is sure to give a number much higher
than the most efficient implementation.  A more interesting question is how
much memory and how many MIPS are needed to simulate the brain using the most
efficient possible model?



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-07 Thread Matt Mahoney

--- Derek Zahn <[EMAIL PROTECTED]> wrote:

> Matt Mahoney writes:
> > Landauer used tests like having people look at thousands of photos, then
> > tested them by having them look at more photos (some seen before, some
> > novel) and asking if they have seen them before.
>  
> On the face of it, this only measures one very narrow set of what a person
> "learns" during the time the photos were presented... other details of the
> experience of the photos themselves that are not measured by the specific
> recall test, the surroundings, the experiment, the people involved,
> continual adjustment of world-view and category boundaries, motor skills
> adjusted due to physical interaction with the testing environment,
> reflection (largely subconscious) on unrelated things in the back of the
> subject's mind, etc.  I believe this learning vastly overwhelms the trivial
> narrow slit into the subject's memory that was later measured.

That's true.  The visual perception process is altered after the experiment to
favor recognition of objects seen in the photos.  A recall test doesn't
measure this effect.  I don't know of a good way to measure the quantity of
information learned.

Consider the analogous task of learning lists of random words.  In this case
we know that the learned information cannot exceed the information content of
the list, about 16 bits per word (assuming a 64K vocabulary).  This is not
very large compared to the quantity measured in a recall test.  Of course this
ignores the effects on the visual or auditory perceptual systems (depending on
whether the words are written or spoken).


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> Matt Mahoney wrote:
> >>> I was referring to Landauer's estimate of long term memory learning rate
> >> of
> >>> about 2 bits per second.  http://www.merkle.com/humanMemory.html
> >>> This does not include procedural memory, things like visual perception
> and
> >>> knowing how to walk.  So 10^-6 bits is low.  But how do we measure such
> >>> things?
> >> I think my general point is that "bits per second" or "bits per synapse" 
> >> is a valid measure if you care about something like an electrical signal 
> >> line, but is just simply an incoherent way to talk about the memory 
> >> capacity of the human brain.
> >>
> >> Saying "0.01 bits per synapse" is no better than opening and closing 
> >> one's mouth without saying anything.
> > 
> > "Bits" is a perfectly sensible measure of information.  Memory can be
> measured
> > using human recall tests, just as Shannon used human prediction tests to
> > estimate the information capacity of natural language text.  The question
> is
> > important to anyone who needs to allocate a hardware budget for an AI
> design.
> 
> If I take possession of a brand new computer with 1 terabyte hard-drive 
> memory capacity, and then I happen to use it to store nothing but the 
> (say) 1 gigabyte software it came with, your conclusion would be that 
> each memory cell in the computer has a capacity of 0.001 bits.
> 
> This is a meaningless number because *how* the storage is actually being 
> used is not a sensible measure of its capacity.
> 
> So, knowing that humans actually recall X amount of bits in the Shannon 
> sense does not tell you how many "bits per synapse" are stored in the 
> brain, it just tells you  that humans recall X amount of bits in the 
> Shannon sense, that is all.

Landauer used tests like having people look at thousands of photos, then
tested them by having them look at more photos (some seen before, some novel)
and asking if they have seen them before.  He did similar tests with words,
numbers, music clips, etc. and in every case the learning rate was around 2
bits per second.  His tests were similar to those done by Standing, who had
subjects look at up to 10,000 photos (one every 5 seconds) and take a recall
test 2 days later with about 80% accuracy (vs. 71% for lists of random words).
This is the result you would get if each picture or word was encoded with
about 14 or 15 bits.
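
The rough arithmetic behind those numbers, as a sketch (my reconstruction, not
Landauer's exact correction for guessing):

    # Sketch: bits per item and bits per second for the Standing experiment.
    import math

    items = 10000
    seconds_per_item = 5

    print("%.1f bits to identify one of %d pictures" % (math.log2(items), items))
    for bits_per_item in (14, 15):
        print("%d bits/item over %ds -> %.1f bits/s" %
              (bits_per_item, seconds_per_item, bits_per_item / seconds_per_item))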

It would be interesting to conduct similar tests for procedural memory.  (How
many bits of code do you need to ride a bicycle?)  But I doubt it would
explain all of the 10^6 discrepancy.  In any case, 10^9 bits is what Turing
estimated in 1950, and it's how much language a person is exposed to in a
couple of decades.  I think it's a useful number to keep in mind for building
AI, but in my experience in language modeling for compression, you will
probably need a lot more memory if you want reasonable performance. 
Apparently the brain does too.

References

Landauer, Tom (1986), "How much do people remember?  Some estimates of the
quantity of learned information in long term memory", Cognitive Science (10)
pp. 477-493.  

Standing, L. (1973), "Learning 10,000 Pictures", Quarterly Journal of
Experimental Psychology (25) pp. 207-222.

Shannon, Claude E. (1950), "Prediction and Entropy of Printed English", Bell
Sys. Tech. J (3) p. 50-64.

Turing, A. M., (1950) "Computing Machinery and Intelligence", Mind,
59:433-460.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> 
> 
> Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> >[For those not familiar with Richard's style: once he disagrees with
> something
> >he will dispute it to the bitter end in long, drawn out arguments, because
> >nothing is more important than being right.]
> 
> What's the purpose for this comment? If the people here are intelligent
> enough to have meaningful discussions on a difficult topic, then they will
> be able to sort out for themselves the "styles" of others. 

Sorry, he posted a similar comment about me on the AGI list.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > I was referring to Landauer's estimate of long term memory learning rate
> of
> > about 2 bits per second.  http://www.merkle.com/humanMemory.html
> > This does not include procedural memory, things like visual perception and
> > knowing how to walk.  So 10^-6 bits is low.  But how do we measure such
> > things?
> 
> I think my general point is that "bits per second" or "bits per synapse" 
> is a valid measure if you care about something like an electrical signal 
> line, but is just simply an incoherent way to talk about the memory 
> capacity of the human brain.
> 
> Saying "0.01 bits per synapse" is no better than opening and closing 
> one's mouth without saying anything.

"Bits" is a perfectly sensible measure of information.  Memory can be measured
using human recall tests, just as Shannon used human prediction tests to
estimate the information capacity of natural language text.  The question is
important to anyone who needs to allocate a hardware budget for an AI design.
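
A crude mechanical stand-in for Shannon's human predictors is to let a
general-purpose compressor model the text; its output length is an upper bound
on the text's information content.  A minimal sketch (the file name is a
placeholder; zlib gives roughly 2-3 bits per character on English text, while
better models and Shannon's human subjects get closer to 1):

    import zlib

    with open('sample.txt', 'rb') as f:    # any reasonably large English text file
        data = f.read()

    compressed = zlib.compress(data, 9)    # maximum compression level
    bpc = 8.0 * len(compressed) / len(data)
    print(bpc, "bits per character (an upper bound on the entropy)")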

[For those not familiar with Richard's style: once he disagrees with something
he will dispute it to the bitter end in long, drawn out arguments, because
nothing is more important than being right.]


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> >> Is there really a bit per synapse? Is representing a synapse with a bit
> an
> >> accurate enough simulation? One synapse is a very complicated system.
> > 
> > A typical neural network simulation uses several bits per synapse.  A
> Hopfield
> > net implementation of an associative memory stores 0.15 bits per synapse. 
> But
> > cognitive models suggest the human brain stores .01 bits per synapse. 
> > (There are 10^15 synapses but human long term memory capacity is 10^9
> bits).
> 
> Sorry, I don't buy this at all.  This makes profound assumptions about 
> how information is stored in memory, averaging out the "net" storage and 
> ignoring the immediate storage capacity.  A typical synapse actually 
> stores a great deal more than a fraction of a bit, as far as we can 
> tell, but this information is stored in such a way that the system as a 
> whole can actually use the information in a meaningful way.
> 
> In that context, quoting "0.01 bits per synapse" is a completely 
> meaningless statement.

I was referring to Landauer's estimate of long term memory learning rate of
about 2 bits per second.  http://www.merkle.com/humanMemory.html
This does not include procedural memory, things like visual perception and
knowing how to walk.  So 10^-6 bits is low.  But how do we measure such
things?

> Also, "typical" neural network simulations use more than a few bits as 
> well.  When I did a number of backprop NN studies in the early 90s, my 
> networks had to use floating point numbers because the behavior of the 
> net deteriorated badly if the numerical precision was reduced.  This was 
> especially important on long training runs or large datasets.

That's what I meant by "few".  In the PAQ8 compressors I have to use at least
16 bits.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-04 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> Is there really a bit per synapse? Is representing a synapse with a bit an
> accurate enough simulation? One synapse is a very complicated system.

A typical neural network simulation uses several bits per synapse.  A Hopfield
net implementation of an associative memory stores 0.15 bits per synapse.  But
cognitive models suggest the human brain stores .01 bits per synapse. 
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).
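
For reference, the 0.15 figure comes from the classical capacity of a Hopfield
network: about 0.138*N random N-bit patterns can be stored in the N^2 Hebbian
weights, or roughly 0.14 bits per synapse.  A minimal sketch (the sizes, noise
level, and synchronous update rule are arbitrary choices of mine):

    import numpy as np

    N, P = 100, 10                      # 100 neurons, 10 patterns (below 0.138*N)
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(P, N))

    W = patterns.T @ patterns / N       # Hebbian outer-product weights
    np.fill_diagonal(W, 0)              # no self-connections

    def recall(x, steps=20):
        for _ in range(steps):          # synchronous updates, for brevity
            x = np.sign(W @ x)
            x[x == 0] = 1
        return x

    cue = patterns[0].copy()
    cue[rng.choice(N, size=10, replace=False)] *= -1   # corrupt 10% of the bits
    print(np.mean(recall(cue) == patterns[0]))         # ~1.0 when below capacity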

-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> I agree that it should be possible to simulate a brain on a computer,
> but I don't see how you can be so confident that you can throw away
> most of the details of brain structure with impunity. Tiny changes to
> neurons which make no difference to the anatomy or synaptic structure
> can have large effects on neuronal behaviour, and hence whole organism
> behaviour. You can't leave this sort of thing out of the model and
> hope that it will still match the original.

And people can lose millions of neurons without a noticeable effect.  And
removing a 0.1 micron chunk out of a CPU chip can cause it to fail, yet I can
run the same programs on a chip with half as many transistors.

Nobody knows how to make an artificial brain, but I am pretty confident that
it is not necessary to preserve its structure to preserve its function.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > 
> > By "equivalent computation" I mean one whose behavior is
> > indistinguishable
> > from the brain, not an approximation.  I don't believe that an exact
> > simulation requires copying the implementation down to the neuron level,
> > much
> > less the molecular level.
> > 
> 
> So how would you approach constructing such a model? I suppose a superset
> intelligence structure could analyze properties and behaviors of a brain and
> simulate it within itself. If it absorbed enough data it could reconstruct
> and eventually come up with something close.

Well, nobody has solved the AI problem, much less the uploading problem. 
Consider the problem in stages:

1. The Turing test.

2. The "personalized" Turing test.  The machine pretends to be you and the
judges are people who know you well.

3. The "planned, personalized" Turing test.  You are allowed to communicate
with judges in advance, for example, to agree on a password.

4. The "embodied, planned, personalized" Turing test.  Communication is not
restricted to text.  The machine is planted in the skull of your clone.  Your
friends and relatives have to decide who has the carbon-based brain.

Level 4 should not require simulating every neuron and synapse.  Without the
constraints of slow, noisy neurons, we could use other algorithms.  For
example, low level visual processing such as edge and line detection would not
need to be implemented as a 2-D array of identical filters.  It could be
implemented serially by scanning the retinal image with a window filter.  Fine
motor control would not need to be implemented by combining thousands of
pulsing motor neurons to get a smooth average signal.  The signal could be
computed numerically.  The brain has about 10^15 synapses, so a
straightforward simulation at the neural level would require 10^15 bits of
memory.  But cognitive tests suggest humans have only about 10^9 bits of long
term memory, which implies that a much more compressed representation is possible.
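
As an illustration of the "serial window filter" point, a minimal sketch (the
Sobel-style kernel and the toy image are placeholders of mine, with no claim of
biological fidelity): one filter scanned across the image does the work that
the retina does with a massively parallel array of identical units.

    import numpy as np

    def edges_serial(image):
        # One 3x3 edge filter, slid across the image one position at a time.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):              # serial scan instead of a 2-D array
            for j in range(w - 2):          # of identical parallel filter units
                patch = image[i:i+3, j:j+3]
                out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
        return out

    img = np.zeros((32, 32))
    img[:, 16:] = 1.0                       # a vertical brightness edge
    print(edges_serial(img).max())          # strongest response sits on the edge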

In any case, level 1 should be sufficient to argue convincingly that either
consciousness can exist in machines, or that it doesn't in humans.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 29/02/2008, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > By "equivalent computation" I mean one whose behavior is indistinguishable
> >  from the brain, not an approximation.  I don't believe that an exact
> >  simulation requires copying the implementation down to the neuron level,
> much
> >  less the molecular level.
> 
> How do you explain the fact that cognition is exquisitely sensitive to
> changes at the molecular level?

In what way?  Why can't you replace neurons with equivalent software?


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > 
> > And that is the whole point.  You don't need to simulate the brain at
> > the
> > molecular level or even at the level of neurons.  You just need to
> > produce an
> > equivalent computation.  The whole point of such fine grained
> > simulations is
> > to counter arguments (like Penrose's) that qualia and consciousness
> > cannot be
> > explained by computation or even by physics.  Penrose (like all humans)
> > is
> > reasoning with a brain that is a product of evolution, and therefore
> > biased
> > toward beliefs that favor survival of the species.
> > 
> 
> An equivalent computation will be some percentage of the complexity of a
> perfect molecular simulation. You can simplify the computation but you have
> to know what to simplify out and what to discard. Losing too much of the
> richness may produce a simulation that is like a scratchy audio recording of
> a philharmonic, or, probably even worse, the simulated system will not function
> as a coherent entity; it'll just be contentious noise unless there is ample
> abetting by external control. But a non-molecular and non-neural simulation
> may require even more computational complexity than a direct model.
> Reformatting the consciousness to operate within another substrate without
> first understanding its natural substrate, ya, still may be the best choice
> due to technological limitations.

By "equivalent computation" I mean one whose behavior is indistinguishable
from the brain, not an approximation.  I don't believe that an exact
simulation requires copying the implementation down to the neuron level, much
less the molecular level.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 28/02/2008, John G. Rose <[EMAIL PROTECTED]> wrote:
> 
> > Actually a better way to do it as getting even just the molecules right is
> a wee bit formidable - you need a really powerful computer with lots of RAM.
> Take some DNA and grow a body double in software. Then create an interface
> from the biological brain to the software brain and then gradually kill off
> the biological brain forcing the consciousness into the software brain.
> >
> >  The problem with this approach naturally is that to grow the brain in RAM
> requires astronomical resources. But ordinary off-the-shelf matter holds so
> much digital memory compared to modern computers. You have to convert matter
> into RAM somehow. For example one cell with DNA is how many gigs? And cells
> cost a dime a billion. But the problem is that molecular interaction is too
> slow and clunky.
> 
> Agreed, it would be *enormously* difficult getting a snapshot at the
> molecular level and then doing a simulation from this snapshot. But as
> a matter of principle, it should be possible.

And that is the whole point.  You don't need to simulate the brain at the
molecular level or even at the level of neurons.  You just need to produce an
equivalent computation.  The whole point of such fine grained simulations is
to counter arguments (like Penrose's) that qualia and consciousness cannot be
explained by computation or even by physics.  Penrose (like all humans) is
reasoning with a brain that is a product of evolution, and therefore biased
toward beliefs that favor survival of the species.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-22 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> I came across an old Discover magazine this morning with yet another article
> by Lanier on his rainstorm thought experiment. After reading the article it
> occurred to me that what he is saying may be equivalent to:
> 
> Imagine a sufficiently large computer that works according to the
> architecture of our ordinary PC's. In the space of Operating Systems (code
> interpreters), we can find an operating system such that it will run the
> input from the rainstorm such that it appears identical to a computer
> running a brain.

That's easy to prove.  Write a program that simulates a brain and have it
ignore the rainstorm input.

> If this is true, then functionalism is not affected since we must not forget
> to combine program + OS. Thus the rainstorm by itself has no emergent
> properties.

Choosing a universal Turing machine can't be avoided.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Definitions

2008-02-19 Thread Matt Mahoney

--- Charles D Hixson <[EMAIL PROTECTED]> wrote:

> John K Clark wrote:
> > "Matt Mahoney" <[EMAIL PROTECTED]>
> >
> >> It seems to me the problem is
> >> defining consciousness, not testing for it.
> >
> > And it seems to me that beliefs of this sort are exactly the reason 
> > philosophy is in such a muddle. A definition of consciousness is not
> > needed, in fact unless you're a mathematician where they can be of 
> > some use, one can lead a full rich rewarding intellectually life without
> > having a good definition of anything. Compared with examples
> > definitions are of trivial importance.
> >
> >  John K Clark
> 
> But consciousness is easy to define, if not to implement:
>  Consciousness is the entity evaluating a portion of itself which 
> represents its position in its model of its environment.
> 
>  If there's any aspect of consciousness which isn't included within this 
> definition, I would like to know about it.  (Proving the definition 
> correct would, however, be between difficult and impossible.  As 
> normally used "consciousness" is a term without an external referent, so 
> there's no way of determining that any two people are using the same 
> definition.  It *may* be possible to determine that they are using 
> different definitions.)

Or consciousness just means awareness...

in which case, it seems to be located in the hippocampus.
http://www.world-science.net/othernews/080219_conscious


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku <[EMAIL PROTECTED]> wrote:

> On 2/17/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > Nevertheless we can make similar reductions to absurdity with respect to
> > qualia, that which distinguishes you from a philosophical zombie.  There
> is no
> > experiment to distinguish whether you actually experience redness when you
> see
> > a red object, or simply behave as if you do.  Nor is there any aspect of
> this
> > behavior that could not (at least in theory) be simulated by a machine.
> 
> You are relying on a partial conceptual analysis of qualia or
> consciousness by Chalmers that maintains that there could be an exact
> physical duplicate of you that is not conscious (a philosophical
> zombie). While he is in general a great philosopher, I suspect his
> arguments here ultimately rely too much on moving from, "I can create
> a mental image of a physical duplicate and subtract my image of
> consciousness from it," to therefore, such things are possible.

My interpretation of Chalmers is the opposite.  He seems to say that either
machine consciousness is possible or human consciousness is not.

> At any rate, a functionalist would not accept that analysis. On a
> functionalist account, consciousness would reduce to something like
> certain representational activities which could be understood in
> information processing terms. A physical duplicate of you would have
> the same information processing properties, hence the same
> consciousness properties. Once we understand the relevant properties
> it would be possible to test whether something is conscious or not by
> seeing what information it is or is not capable of processing. It is
> hard to test right now because we have at the moment only very
> incomplete conceptual analyses.

It seems to me the problem is defining consciousness, not testing for it. 
What computational property would you use?  For example, one might ascribe
consciousness to the presence of episodic memory.  (If you don't remember
something happening to you, then you must have been unconscious).  But in this
case, any machine that records a time sequence of events (for example, a chart
recorder) could be said to be conscious.  Or you might ascribe consciousness
to entities that learn, seek pleasure, and avoid pain.  But then I could write
a simple program like http://www.mattmahoney.net/autobliss.txt with these
properties.  It seems to me that any other testable property would have the
same problem.
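
For instance, a toy program in the same spirit (this is my own sketch, not the
autobliss program linked above): it adjusts its action preferences from a
scalar reward, so by any behavioral test it "learns, seeks pleasure, and avoids
pain," yet nobody would want to call it conscious.

    import random

    weights = {'A': 0.0, 'B': 0.0}   # learned preference for each action
    reward = {'A': 1.0, 'B': -1.0}   # 'pleasure' and 'pain', fixed by the environment

    for step in range(1000):
        if random.random() < 0.1:                      # explore occasionally
            action = random.choice(['A', 'B'])
        else:                                          # otherwise exploit
            action = max(weights, key=weights.get)
        # simple reinforcement: nudge the preference toward the reward received
        weights[action] += 0.1 * (reward[action] - weights[action])

    print(weights)   # the program ends up seeking A and avoiding B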


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 17/02/2008, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > I believe his target is the existence of consciousness.  There are many
> proofs
> > showing that the assumption of consciousness leads to absurdities, which I
> > have summarized at http://www.mattmahoney.net/singularity.html
> > In mathematics, it should not be necessary to prove a theorem more than
> once.
> > But proof and belief are different things, especially when the belief is
> hard
> > coded into the brain.
> 
> It seems that you are conflating what in philosophy are usually
> considered distinct subjects: consciousness and personal identity.
> Consciousness may seem mysterious and ineffable, but at bottom it's
> just the fact that I experience a red object when I look at a red
> object, as opposed to being blind and only pretending that I see a red
> object. Personal identity involves the belief that the observer of the
> red object now is the "same person" as the observer of the red object
> before. It is this idea which the various thought experiments you
> describe show to be ultimately vacuous, even though as you say we have
> evolved to believe it at our core even when it is contradicted by what
> we recognise as sound intellectual counterarguments.

You're right.  As John Ku also pointed out, I am confusing the identity aspect
of consciousness with the qualia aspect.  Given that we have (currently) only
one example (the human brain), it is easy to confuse other aspects like
language, episodic memory, free will, having goals, and other human qualities.

Nevertheless we can make similar reductions to absurdity with respect to
qualia, that which distinguishes you from a philosophical zombie.  There is no
experiment to distinguish whether you actually experience redness when you see
a red object, or simply behave as if you do.  Nor is there any aspect of this
behavior that could not (at least in theory) be simulated by a machine.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku <[EMAIL PROTECTED]> wrote:

> On 2/16/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > > I would prefer to leave behind these counterfactuals altogether and
> > > try to use information theory and control theory to achieve a precise
> > > understanding of what it is for something to be the standard(s) in
> > > terms of which we are able to deliberate. Since our normative concepts
> > > (e.g. should, reason, ought, etc) are fundamentally about guiding our
> > > attitudes through deliberation, I think they can then be analyzed in
> > > terms of what those deliberative standards prescribe.
> >
> > I agree.  I prefer the approach of predicting what we *will* do as opposed
> to
> > what we *ought* to do.  It makes no sense to talk about a right or wrong
> > approach when our concepts of right and wrong are programmable.
> 
> I don't quite follow. I was arguing for a particular way of analyzing
> our talk of right and wrong, not abandoning such talk. Although our
> concepts are programmable, what matters is what follows from our
> current concepts as they are.
> 
> There are two main ways in which my analysis would differ from simply
> predicting what we will do. First, we might make an error in applying
> our deliberative standards or tracking what actually follows from
> them. Second, even once we reach some conclusion about what is
> prescribed by our deliberative standards, we may not act in accordance
> with that conclusion out of weakness of will.

It is the second part where my approach differs.  A decision to act in a
certain way implies right or wrong according to our views, not the views of a
posthuman intelligence.  Rather I prefer to analyze the path that AI will
take, given human motivations, but without judgment.  For example, CEV favors
granting future wishes over present wishes (when it is possible to predict
future wishes reliably).  But human psychology suggests that we would prefer
machines that grant our immediate wishes, implying that we will not implement
CEV (even if we knew how).  Any suggestion that CEV should or should not be
implemented is just a distraction from an analysis of what will actually
happen.

As a second example, a singularity might result in the extinction of DNA based
life and its replacement with a much faster evolutionary process.  It makes no
sense to judge this outcome as good or bad.  The important question is the
likelihood of this occurring, and when.  In this context, it is more important
to analyze the motives of people who would try to accelerate or delay the
progression of technology.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> When people like Lanier allow themselves the luxury of positing 
> infinitely large computers (who else do we know who does this?  Ah, yes, 
> the AIXI folks), they can make infinitely unlikely coincidences happen.

It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.  Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.  Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Matt Mahoney

--- John Ku <[EMAIL PROTECTED]> wrote:

> On 2/16/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The result will not be pretty.  The best definition (not solution) of
> > friendliness is probably CEV ( http://www.singinst.org/upload/CEV.html )
> which
> > can be summarized as "our wish if we knew more, thought faster, were more
> the
> > people we wished we were, had grown up farther together".  What would you
> wish
> > for if your brain was not constrained by the hardwired beliefs and goals
> that
> > you were born with and you knew that your consciousness did not exist? 
> What
> > would you wish for if you could reprogram your own goals?  The logical
> answer
> > is that it doesn't matter.  The pleasure of a thousand permanent orgasms
> is
> > just a matter of changing a few lines of code, and you go into a
> degenerate
> > state where learning ceases.
> 
> Your counterfactuals seem very different from Eliezer's and less
> relevant to what matters. I think Eliezer's definition was plausible
> because it approximated the standards we use to deliberate about our
> values. As such, it is getting at deeper values or procedures that we
> implicitly presuppose in any serious discussion of values at all. Even
> if you were to question whether you should use that standard, your
> cognitive architecture would still have to do so by reference to some
> internal standard in order to even count as a meaningful type of
> questioning and Eliezer's definition would probably be a decent
> intuitive characterization of it. Of course, you are free to pose any
> type of bizarre counterfactual you want, but I don't see how
> evaluating it would be relevant to what matters in the way that
> Eliezer's would.

I admit I am oversimplifying Eliezer's definition.  Reading the full document,
we should not assume that an AGI would be stupid enough to grant our
extrapolated wish to be put in a blissful, degenerate state.  Nevertheless I
am mistrustful.  I am most troubled that CEV does not have a concise
description.

> I would prefer to leave behind these counterfactuals altogether and
> try to use information theory and control theory to achieve a precise
> understanding of what it is for something to be the standard(s) in
> terms of which we are able to deliberate. Since our normative concepts
> (e.g. should, reason, ought, etc) are fundamentally about guiding our
> attitudes through deliberation, I think they can then be analyzed in
> terms of what those deliberative standards prescribe.

I agree.  I prefer the approach of predicting what we *will* do as opposed to
what we *ought* to do.  It makes no sense to talk about a right or wrong
approach when our concepts of right and wrong are programmable.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Matt Mahoney
--- John Ku <[EMAIL PROTECTED]> wrote:

> On 2/15/08, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> >
> >  http://www.jaronlanier.com/aichapter.html
> 
> 
> I take it the target of his rainstorm argument is the idea that the
> essential features of consciousness are its information-processing
> properties.

I believe his target is the existence of consciousness.  There are many proofs
showing that the assumption of consciousness leads to absurdities, which I
have summarized at http://www.mattmahoney.net/singularity.html
In mathematics, it should not be necessary to prove a theorem more than once. 
But proof and belief are different things, especially when the belief is hard
coded into the brain.

For now, these apparent paradoxes are just philosophical arguments because
they depend on technologies that have not yet been developed, such as AGI,
uploading, copying people, and programming the brain.  But we will eventually
have to confront them.

The result will not be pretty.  The best definition (not solution) of
friendliness is probably CEV ( http://www.singinst.org/upload/CEV.html ) which
can be summarized as "our wish if we knew more, thought faster, were more the
people we wished we were, had grown up farther together".  What would you wish
for if your brain was not constrained by the hardwired beliefs and goals that
you were born with and you knew that your consciousness did not exist?  What
would you wish for if you could reprogram your own goals?  The logical answer
is that it doesn't matter.  The pleasure of a thousand permanent orgasms is
just a matter of changing a few lines of code, and you go into a degenerate
state where learning ceases.



-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> I don't know when Lanier wrote the following but I would be interested to
> know what the AI folks here think about his critique (or direct me to a
> thread where this was already discussed). Also would someone be able to
> re-state his rainstorm thought experiment more clearly -- I am not sure I
> get it:
> 
>  http://www.jaronlanier.com/aichapter.html

This is a nice proof of the non-existence of consciousness (or qualia).  Here
is another, which I came across on sl4:

  http://youtube.com/watch?v=nx6v30NMFV8

Such reductions to absurdity are possible because the brain is programmed to
not accept the logical result.

Consciousness is hard to define but you know what it is.  It is what makes you
aware, the "little person inside your head" that observes the world through
your perceptions, that which distinguishes you from a philosophical zombie. 
We normally associate consciousness with human traits such as episodic memory,
response to pleasure and pain, fear of death, language, and a goal of seeking
knowledge through experimentation.  (Imagine a person without any of these
qualities).

These traits are programmed into our DNA because they increase our fitness. 
You cannot change them, which is what these proofs would do if you could
accept them.

Unfortunately, this question will have a profound effect on the outcome of a
singularity.  Assuming recursive self improvement in a competitive
environment, we should expect agents (possibly including our uploads) to
believe in their own consciousness, but there is no evolutionary pressure to
also believe in human consciousness.  Even if we successfully constrain the
process so that agents have the goal of satisfying our extrapolated volition,
then logically we should expect those agents (knowing what we cannot know) to
conclude that human brains are just computers and our existence doesn't
matter.  It is ironic that our programmed beliefs lead us to advance
technology to the point where the question can no longer be ignored.


-- Matt Mahoney, [EMAIL PROTECTED]

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604&id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Quantum resonance btw DNA strands?

2008-02-07 Thread Matt Mahoney

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> This article
> 
> http://www.physorg.com/news120735315.html
> 
> made me think of Johnjoe McFadden's theory
> that quantum nonlocality plays a role in protein-folding
> 
> http://www.surrey.ac.uk/qe/quantumevolution.htm

Or maybe a simpler explanation is that the long-distance van der Waals bonding
strength between like base pairs (A-T with A-T, or C-G with C-G) in
double-stranded DNA is slightly greater than the bonding strength between
unlike pairs (A-T with C-G), although both are much weaker than the hydrogen
bonds between A and T or between C and G.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=94885151-ef48f7


Re: [singularity] Replication/Emulation and human brain, definition of models

2008-01-18 Thread Matt Mahoney
--- Xavier Laurent <[EMAIL PROTECTED]> wrote:

> Hello
> 
> I am currently doing an Open University course on AI in the UK and they 
> gave us this definition
> 
> 
> * a *Simulation* of a natural system is a model that captures the
>   functional connections between inputs and outputs of the system;
> * a *Replication *of a natural system is a model that captures the
>   functional connections between inputs and outputs of the system
>   and is based on processes that are the same as, or similar to,
>   those of the real-world system;
> * an *Emulation* of a natural system is a model that captures the
>   functional connections between inputs and outputs of the system,
>   based on processes that are the same as, or similar to, those of
>   the natural system, and in the same materials as the natural system
> 
> 
> I have read, for example, that Ray Kurzweil expects that human-level AI 
> will first arrive via human-brain emulation, so does it mean this will be 
> done using machines made of the same materials as the brain, like 
> nanotechnology computing? Would the term replication be more appropriate 
> if we still use computers made of silicon, though I guess we won't 
> reach that level of power. By emulation, the definition above means, for 
> example, the experiment of Stanley L. Miller, when he recreated a model 
> of Earth's oceans within a flask of water, reproducing chemical reactions, etc.

According to my dictionary, "simulate" means "give the appearance of", and
"emulate" means "to equal or surpass".  Kurzweil wants to build machines that
are smarter than human.  I don't think we have settled on the technical
details, whether it involves advancements in software and hardware, human
genetic engineering, an intelligent worm swallowing the internet, or self
replicating nanobots.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=87473459-bd643d


Re: [singularity] World as Simulation

2008-01-13 Thread Matt Mahoney

--- Gifting <[EMAIL PROTECTED]> wrote:

> >
> > There is plenty of physical evidence that the universe is simulated by 
> > a
> > finite state machine or a Turing machine.
> >
> > 1. The universe has finite size, mass, and age, and resolution   
> > etc.
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> I assume there is also plenty of evidence that the universe is not 
> simulated by a Turing machine or any other machine.
> 
> I came across this blog 
> http://www.newscientist.com/blog/technology/2008/01/vr-hypothesis.html

I don't see any evidence here, just an argument that appeals to our
evolutionary programmed bias to believe the universe is real.

Evidence that the universe is not simulated would be if it was found to be
infinite or if it did something that was not computable.  No such evidence
exists.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85466279-d2d818


Re: [singularity] World as Simulation

2008-01-13 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> Matt: I understand your point #2 but it is a grand sweep without any detail.
> To give you an example of what I have in mind, let's consider the photon
> double slit experiment again. You have a photon emitter operating at very
> low intensity such that photons come out singly. There is an average rate
> for the photons emitted but the point in time for their emission is random -
> this then introduces the non-deterministic feature of nature. At this point,
> why doesn't the emitted photon just go through one or the other slit?
> Instead, what we find is that the photon goes through a specific slit if
> someone is watching but if no one is watching it somehow goes through both
> slits and performs a self interference leading to the interference pattern
> observed. Now my question: can it be demonstrated that this scenario of two
> alternate behaviour strategies minimizes computation resources (or whatever
> Occam's razor requires) and so is a necessary feature of a simulation? We
> already have a
>  probability event at the very start when the photon was emitted, how does
> the other behaviour fit with the simulation scheme? Wouldn't it be
> computationally simpler to just follow the photon like a billiard ball
> instead of two variations in behaviour with observers thrown in?

It is the non-determinism of nature that is evidence that the universe is
simulated by a finite state machine.  There is no requirement of low
computational cost, because we don't know the computational limits of the
simulating machine.  However there is a high probability of algorithmic
simplicity according to AIXI/Occam's Razor.

If classical (Newtonian) mechanics were correct, it would disprove the
simulation theory because it would require infinite precision, which is not
computable on a Turing machine.

Quantum mechanics is deterministic.  It is our interpretation that is
probabilistic.  The wave equation for the universe has an exact solution, but
it is beyond our ability to calculate it.  The two slit experiment and other
paradoxes such as Schrodinger's cat and EPR (
http://en.wikipedia.org/wiki/Einstein-Podolsky-Rosen_paradox ) are due to
using a simplified model that does not include the observer in the equations.

Your argument that computational costs might restrict the possible laws of
physics is also made in Whitworth's paper (
http://arxiv.org/ftp/arxiv/papers/0801/0801.0337.pdf ), but I think he is
stretching.  For example, he argues (table on p. 15) that the speed of light
limit is evidence that the universe is simulated because it reduces the cost
of computation.  Yes, but for a different reason.  The universe has a finite
age, T.  The speed of light c limits its size, G limits its mass, and Planck's
constant h limits its resolution.  If any of these physical constants did not
exist, then the universe would have infinite information content and would not
be computable.  From T, c, G, and h you can derive the entropy (about 10^122
bits), and thus the size of a bit, which happens to be about the size of the
smallest stable particle.
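
An order-of-magnitude check of that entropy figure (the age value is my
assumption, and conventions differ on h versus h-bar and on factors of order
one, so only the exponent should be taken seriously):

    c = 2.998e8       # speed of light, m/s
    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    h = 6.626e-34     # Planck's constant, J s
    T = 4.35e17       # age of the universe, s (roughly 13.8 billion years)

    bits = c**5 * T**2 / (h * G)   # (age / Planck time)^2, up to constant factors
    print(f"{bits:.2e}")           # ~1e121, within an order of magnitude of 10^122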

We cannot use the cost of computation as an argument because we know nothing
about the physics of the simulating universe.  For example, the best known
algorithms for computing the quantum wave equation on a conventional computer
are exponential, e.g. 2^(10^122) operations.  However, you could imagine a
"quantum Turing machine" that operates on a superposition of tapes and states
(and possibly restricted to time reversible operations).  Such a computation
could be trivial, depending on your choice of mathematical model.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85465376-f0c66e


Re: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney

--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> Matt: I would prefer to analyse something simple such as the double slit
> experiment. If you do an experiment to see which slit the photon goes
> through you get an accumulation of photons in equal numbers behind each
> slit. If you don't make an effort to see which slit the photons go through,
> you get an interference pattern. What, if this is all a simulation, is
> requiring the simulation to behave this way? I assume that this is a forced
> result based on the assumption of using only as much computation as needed
> to perform the simulation. A radioactive atom decays when it decays. All we
> can say with any certainty is what it's probability distribution in time is
> for decay. Why is that? Why would a simulation not maintain local causality
> (EPR paradox)? I think it would be far more interesting (and meaningful) if
> the simulation hypothesis could provide a basis for these observations.

This is what I addressed in point #2.  A finite state simulation forces any
agents in the simulation to use a probabilistic model of their universe,
because an exact model would require as much memory as is used for the
simulation itself.  Quantum mechanics is an example of a probabilistic model. 
The fact that the laws of physics prevent you from making certain predictions
is what suggests the universe is simulated, not the details of what you can't
predict.

If the universe were simulated by a computer with infinite memory (e.g. real
valued registers), then the laws of physics might have been deterministic,
allowing us to build infinite memory computers that could make exact
predictions even if the universe had infinite size, mass, age, and resolution.
 However, this does not appear to be the case.

A finite simulation does not require any particular laws of physics.  For all
you know, tomorrow gravity may cease to exist, or time will suddenly have 17
dimensions.  However, the AIXI model makes this unlikely because unexpected
changes like this would require a simulation with greater algorithmic
complexity.

This is not a proof that the universe is a simulation, nor are any of my other
points.  I don't believe that a proof is possible.

> 
>   Eric B. Ramsay
> Matt Mahoney <[EMAIL PROTECTED]> wrote:
>   --- "Eric B. Ramsay" wrote:
> 
> > Apart from all this philosophy (non-ending as it seems), Table 1. of the
> > paper referred to at the start of this thread gives several consequences
> of
> > a simulation that offer to explain what's behind current physical
> > observations such as the upper speed limit of light, relativistic and
> > quantum effects etc. Without worrying about whether we are a simulation of
> a
> > sinmulation of a simulation etc, it would be interesting to work out all
> the
> > qualitative/quantitative (?) implications of the idea and see if
> > observations strongly or weakly support it. If the only thing we can do
> with
> > the idea is discuss phiosophy then the idea is useless. 
> 
> There is plenty of physical evidence that the universe is simulated by a
> finite state machine or a Turing machine.
> 
> 1. The universe has finite size, mass, age, and resolution. Taken
> together, the universe has a finite state, expressible in approximately
> c^5 T^2 / (hG) = 1.55 x 10^122 bits ~ 2^406 bits (where h is Planck's constant, G
> is the gravitational constant, c is the speed of light, and T is the age of
> the universe. By coincidence, if the universe is divided into 2^406 regions,
> each is the size of a proton or neutron. This is a coincidence because h, G,
> c, and T don't depend on the properties of any particles).
> 
> 2. A finite state machine cannot model itself deterministically. This is
> consistent with the probabilistic nature of quantum mechanics.
> 
> 3. The observation that Occam's Razor works in practice is consistent with
> the
> AIXI model of a computable environment.
> 
> 4. The complexity of the universe is consistent with the simplest possible
> algorithm: enumerate all Turing machines until a universe supporting
> intelligent life is found. The fastest way to execute this algorithm is to
> run each of the 2^n universes with complexity n bits for 2^n steps. The
> complexity of the free parameters in many string theories plus general
> relativity is a few hundred bits (maybe 406).
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85342088-6552dd


Re: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney
--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> Apart from all this philosophy (non-ending as it seems), Table 1. of the
> paper referred to at the start of this thread gives several consequences of
> a simulation that offer to explain what's behind current physical
> observations such as the upper speed limit of light, relativistic and
> quantum effects etc. Without worrying about whether we are a simulation of a
> sinmulation of a simulation etc, it would be interesting to work out all the
> qualitative/quantitative (?) implications of the idea and see if
> observations strongly or weakly support it. If the only thing we can do with
> the idea is discuss phiosophy then the idea is useless. 

There is plenty of physical evidence that the universe is simulated by a
finite state machine or a Turing machine.

1. The universe has finite size, mass, age, and resolution.  Taken
together, the universe has a finite state, expressible in approximately
c^5 T^2 / (hG) = 1.55 x 10^122 bits ~ 2^406 bits (where h is Planck's constant, G
is the gravitational constant, c is the speed of light, and T is the age of
the universe.  By coincidence, if the universe is divided into 2^406 regions,
each is the size of a proton or neutron.  This is a coincidence because h, G,
c, and T don't depend on the properties of any particles).

2. A finite state machine cannot model itself deterministically.  This is
consistent with the probabilistic nature of quantum mechanics.

3. The observation that Occam's Razor works in practice is consistent with the
AIXI model of a computable environment.

4. The complexity of the universe is consistent with the simplest possible
algorithm: enumerate all Turing machines until a universe supporting
intelligent life is found.  The fastest way to execute this algorithm is to
run each of the 2^n universes with complexity n bits for 2^n steps.  The
complexity of the free parameters in many string theories plus general
relativity is a few hundred bits (maybe 406).
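
A sketch of that enumeration schedule (the run_universe test is a placeholder
of my own; deciding whether a program "supports intelligent life" is of course
the entire difficulty, and a full Levin-style search would also re-run the
shorter programs with larger step budgets at each level):

    from itertools import product

    def run_universe(program, max_steps):
        # Placeholder: step 'program' on some fixed universal machine for at
        # most max_steps steps and report whether observers appeared.
        # Here it always reports no, so the sketch just shows the schedule.
        return False

    def search(max_n=12):
        for n in range(1, max_n + 1):            # complexity level n
            for bits in product('01', repeat=n): # all 2^n programs of n bits
                prog = ''.join(bits)
                if run_universe(prog, 2 ** n):   # each gets 2^n steps
                    return prog
        return None

    print(search())   # None with the stub test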


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85290247-aa2da2


Re: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney
--- Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Simulation is a new word.  In this context, let's use an old word.  
> Maya.  Have the Buddhist countries and societies gone away?
> And let's use an old word for "reality".  Heaven.  Have the Christian 
> countries and societies gone away?
> 
> Perhaps you need to rethink your suppositions.

There is a difference between believing logically that the universe is
simulated, and acting on those beliefs.  The latter is not possible because of
the way our brains are programmed.  If you really believed that pain was not
real, you would not try to avoid it.  You can't do that.  I can accept that a
simulation is the best explanation for why the universe exists, but that
doesn't change how I interact with it.  I accept that my brain is programmed
so that certain conflicting beliefs cannot be resolved, so I don't try.

Too strong a belief in heaven is not healthy.  It is what motivates kamikaze
pilots and suicide bombers.  Religion has thrived because it teaches rules
that maximize reproduction, such as prohibiting sexual activity for any other
purpose.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85267245-5352fa


RE: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> In a sim world there are many variables that can overcome other motivators
> so a change in the rate of gene proliferation would be difficult to predict.
> The agents that correctly believe that it is a simulation could say OK this
> is all fake, I'm going for pure pleasure with total disregard for anything
> else. But still too many variables to predict. In humanity there have been
> times in the past where societies have given credence to simulation through
> religious beliefs and weighted more heavily on a disregard for other groups
> existence. A society would say that this is all fake, we all gotta die
> sometime anyway so we are going to take as much as we can from other tribes
> and decimate them for sport. Not saying this was always the reason for
> intertribal warfare but sometimes it was.

The reason we have war is because the warlike tribes annihilated the peaceful
ones.  Evolution favors a brain structure where young males are predisposed to
group loyalty (gangs or armies), and take an interest in competition and
weapons technology (e.g. the difference in the types of video games played by
boys and girls).  It has nothing to do with belief in simulation.  Cultures
that believed the world was simulated probably killed themselves, not others. 
That is why we believe the world is real.

> But the problem is in the question of what really is a simulation? For the
> agents constrained, it doesn't matter they still have to live in it - feel
> pain, fight for food, get along with other agents... Moving an agent from
> one simulation to the next though, that gives it some sort of extra
> properties...

It is unlikely that any knowledge you now have would be useful in another
simulation.  Knowledge is only useful if it helps propagate your DNA.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85206553-fdbdcb


RE: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> If this universe is simulated the simulator could also be a simulation and
> that simulator could also be a simulation. and so on.
> 
> What is that behavior of an organism called when the organism, alife or not,
> starts analyzing things and questioning whether or not it is a simulation?
> It's not only self-awareness but something in addition to that.

Interesting question.  Suppose you simulated a world where agents had enough
intelligence to ponder this question.  What do you think they would do?

My guess is that agents in a simulated evolutionary environment that correctly
believe that the world is a simulation would be less likely to pass on their
genes than agents that falsely believe the world is real.

Perhaps you suspect that the food you eat is not real, but you continue to eat
anyway.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85195221-9a1a41


Re: [singularity] Requested: objections to SIAI, AGI, the Singularity and Friendliness

2007-12-27 Thread Matt Mahoney
a person's development without creating a copy of
> that person.
> * It's impossible to know a person's subjective desires and feelings
> from outside.
> * A machine could never understand human morality/emotions.
> * An AI would just end up being a tool of whichever group built it/controls
> it.
> * AIs would take advantage of their power and create a dictatorship.
> * Creating a UFAI would be disastrous, so any work on AI is too risky.
> * A human upload would naturally be more Friendly than any AI.
> * A perfectly Friendly AI would do everything for us, making life
> boring and not worth living.
> * An AI without self-preservation built in would find no reason to
> continue existing.
> * A superintelligent AI would reason that it's best for humanity to
> destroy itself.
> * The main defining characteristic of complex systems, such as minds,
> is that no mathematical verification of properties such as
> "Friendliness" is possible.
> 
> 
> 
> -- 
> http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
> 
> Organizations worth your time:
> http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
> 
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
> 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=79771594-63e447


Re: [singularity] war against the machines & spiral dynamics - anyone

2007-12-14 Thread Matt Mahoney

--- "Morris F. Johnson" <[EMAIL PROTECTED]> wrote:

> I think that we humans offer something that will entice AGI not to decimate
> us all.
> Human biology is comparable to a program, as Kurzweil keeps repeating.
> 
> AGI might enjoy living out experiences biologically.
> Humans might find themselves semi-autonomous sensor nodes.
> Just like we like to make our games and toys more interesting, AGI might
> like to see radically enhanced humans as an extension of themselves, just like
> we conceive of them as an extension of ourselves.
> Humans in effect become the basic unit of computronium.

Yes, but we throw away perfectly good computers when faster models come out.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=76076562-e40c50


RE: [singularity] war against the machines & spiral dynamics - anyone?

2007-12-13 Thread Matt Mahoney
--- postbus <[EMAIL PROTECTED]> wrote:

> Dear Matt,
> 
> Thank you for your reply. I see your points; it might go the way you
> say. 
> 
> This would mean that the AI does NOT evolve its value system into stage
> 6, social compassion. Enslavement or destruction means value system 3 or
> 4 at the most. Whereas many people, especially in wealthy nations, are
> in stage 5-7. Meaning that in terms of values, the AI would not have
> surpassed us at all, only in intelligence.
> 
> So I wonder, what do you propose we do to avoid our downfall? 

Don't build AI?

But "downfall" implies that extinction of homo sapiens is bad.  It is not bad
from the point of view of whatever replaces us at the top of the food chain,
any more than the mass extinctions that marked the boundaries between geologic
eras were bad.  That process eventually gave rise to humans.

We are about to undergo the third major shift in the evolutionary process, the
first being DNA based life 3 billion years ago and the second being language
and culture about 10,000 years ago.  We could stop it, but we won't.  The
economic incentives are too great.  A rational approach to the question would
mean overcoming the biases that have been programmed into our brains through
evolution and culture, things like belief in consciousness and free will, fear
of death, and morality.  We can't overcome these biases until we can reprogram
our brains, and by then it will be too late to turn back.

I have addressed some of these questions in
http://www.mattmahoney.net/singularity.html


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=75595170-9973e8


Re: [singularity] war against the machines & spiral dynamics - anyone?

2007-12-12 Thread Matt Mahoney

--- postbus <[EMAIL PROTECTED]> wrote:

> Dear fellow minds, 
>  
> After editing the book "Nanotechnology, towards a molecular construction
> kit" (1998), I have become a believer in strong AI. As a result, I still
> worry about an upcoming "war against the machines" leading to our
> destruction or enslavement. Robots will simply evolve beyond us. Until a
> few days ago, I believed this war and outcome to be inevitable. 

It doesn't work that way.  There will be no war because you won't know you are
enslaved.  The AI could just reprogram your brain so you want to do its
bidding.

> However, there may be a way out. What thoughts has any of you concerning
> the following line of reasoning: 
>  
> First, human values have evolved along the model of Claire Graves. Maybe
> you heard about his work in terms of "Spiral Dynamics". Please look into
> it if you don't. To me, it has been an eye opener. 
> Second, a few days ago it dawned on me that intelligent robots might
> follow the same spiral evolution of values: 
>  
> 1. The most intelligent robots today are struggling for their survival
> in the lab (survival). Next, they would develop a sense of: 
> 2. a tribe
> 3. glory & kingdom (here comes the war...)
> 4. order (the religous robots in Battlestar Galactica, which triggered
> this idea in the first place)
> 5. discovery and entrepreneurship (materialism)
> 6. social compassion ("robot hippies")
> 7. systemic thinking
> 8. holism. 
>  
> In other words, if we guide robots/AI quickly and safely into the value
> system of order (3) and help them evolve further, they might not kill us
> but become our companions in the universe. N.B. This is quite different
> from installing Asimov's laws: the robots need to be able to develop
> their own set of values.  
>  
> Anyone? 

If AI follows the same evolutionary path as humans have followed, then it does
not follow that the AI will be compassionate toward humans any more than
humans are compassionate toward lower animals.  Evolution is a competitive
algorithm.  Animals eat animals of other species.  AI would not be
compassionate toward humans unless it increased their fitness.  But when AI
becomes vastly more intelligent, we will be of no use to them.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=75425850-1645ed


[singularity] Re: Has the Turing test been beaten?

2007-12-11 Thread Matt Mahoney
--- Michael Gusek <[EMAIL PROTECTED]> wrote:

> Has anyone devised a replacement/upgrade for/to the Turing Test?

I have proposed an alternative.
http://cs.fit.edu/~mmahoney/compression/rationale.html
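
The idea, roughly, is to score a model by how compactly it can encode text,
since better prediction implies better compression.  A minimal sketch of the
measurement, assuming nothing beyond the Python standard library (zlib stands
in for a real language model, and the sample string is only a placeholder, not
the benchmark's actual corpus or tooling):

    # A better compressor, i.e. a better model of the text, needs fewer
    # bits per character.
    import zlib

    text = ("The quick brown fox jumps over the lazy dog. " * 200).encode()
    packed = zlib.compress(text, 9)
    print("compression ratio:", round(len(packed) / len(text), 4))
    print("bits per character:", round(8 * len(packed) / len(text), 3))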

-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=75031805-72ec16


Re: [singularity] Wrong question?

2007-12-01 Thread Matt Mahoney
--- Stefan Pernar <[EMAIL PROTECTED]> wrote:
> The question of whether a future AI is going to be moral is the same as
> asking if it is rational to be moral. In a recent paper of mine I proved that it is.
> 
> Abstract. These arguments demonstrate the a priori moral nature of reality
> and develop the basic understanding necessary for realizing the logical
> maxim in Kant's categorical imperative[1] based on the implied goal of
> evolution[2]. The maxim is used to prove moral behavior as an obligatory
> emergent phenomenon among evolving, interacting, goal-driven agents.
> 
> You can find it at:
> 
> Practical Benevolence - a Rational Philosophy of Morality - A4 PDF, 11
> pages, 456kb
> <http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-12-01_iostemp.pdf>

I disagree.  The proof depends on the axiom that it is preferable to exist
than to not exist, where existence is defined as the ability to be perceived. 
What is the justification for this?  Not evolution.  Evolution selects for
existence, whether or not that existence can be perceived.

There is selective pressure favoring groups whose members cooperate with each
other, e.g. the cells in your body.  At the same time there is selective
pressure on individuals to compete, e.g. cancerous cells.  Likewise, we see
cultural evolutionary pressure for both cooperation and competition among
humans, e.g. groups that practice nationalism and internal law enforcement are
more successful than either anarchists or lovers of world peace.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=71196040-9ee871


Re: [singularity] Wrong question?

2007-12-01 Thread Matt Mahoney

--- Bryan Bishop <[EMAIL PROTECTED]> wrote:

> On Friday 30 November 2007, Matt Mahoney wrote:
> > How can we design AI so that it won't wipe out all DNA based life,
> > possibly this century?
> >
> > That is the wrong question.
> 
> How can we preserve DNA-based life? Perhaps by throwing it out into the 
> distant reaches of interstellar space? The first trick would be to plot 
> a path through the galaxy for such a ship such that the path of travel 
> goes into various nebula or out of the line of sight of the earth due 
> to obstructions and so on, until a significant distance away. Anybody 
> who knows anything about this path might have to be murdered, for the 
> sake of life. 

Again, that is not my question.  My question requires rational thought without
the biases that are programmed into every human brain through natural and
cultural selection: fear of death, belief in consciousness and free will, self
preservation, cooperation and competition with other humans, and a quest for
knowledge.  It is unlikely that any human can set these aside and seek a
rational answer.  Perhaps we could create a simulation without these biases
and ask it what will happen to the human race, although I don't think you
would accept the answer.  To a human, it seems irrational that we rush to
build that which will cause our extinction.  To a machine it will be perfectly
rational; it is the result of the way our brains are programmed.

I am not asking what we should do, because that is beyond our control.  The
question is what will we do?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=71193465-03693c


Re: [singularity] Wrong question?

2007-12-01 Thread Matt Mahoney

--- Thomas McCabe <[EMAIL PROTECTED]> wrote:

> On Nov 30, 2007 11:11 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > How can we design AI so that it won't wipe out all DNA based life,
> possibly
> > this century?
> >
> > That is the wrong question.  I was reading
> > http://sl4.org/wiki/SoYouWantToBeASeedAIProgrammer and realized that (1) I
> am
> > not smart enough to be on their team and (2) even if SIAI does assemble a
> team
> > of the world's smartest scientists with IQs of 200+, how are they going to
> > compete with a Jupiter brain with an IQ of 10^39?  Recursive self
> improvement
> > is a necessarily evolutionary algorithm.
> 
> See http://www.overcomingbias.com/2007/11/no-evolution-fo.html.

So is it possible to have RSI without ALL of the following?

* Entities that replicate
* Substantial variation in their characteristics
* Substantial variation in their reproduction
* Persistent correlation between the characteristics and reproduction
* High-fidelity long-range heritability in characteristics
* Frequent birth of a significant fraction of the breeding population
* And all this remains true through many iterations

The apparent impossibility of proving properties of complex systems seems to
force us into an experimental approach: we make modifications to existing
designs and test them, because we don't know in advance whether the changes
will work as planned.  Is there another approach?
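
As a concrete illustration of that modify-and-test loop, here is a minimal
sketch of a generic evolutionary search; the bit-string encoding, population
size, and fitness function are made-up stand-ins, not a model of any real
self-improving system:

    import random

    def fitness(design):
        return sum(design)                    # toy fitness: count the 1 bits

    def mutate(design, rate=0.05):
        # flip each bit with small probability -- an untested modification
        return [b ^ (random.random() < rate) for b in design]

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]

    for generation in range(50):
        # we cannot prove in advance which changes help, so we measure
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:10]                               # selection
        offspring = [mutate(random.choice(survivors)) for _ in range(10)]
        population = survivors + offspring                    # replication with variation

    print("best fitness after 50 generations:", max(fitness(d) for d in population))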


> >  It doesn't matter what the starting
> > conditions are.  All that ultimately matters is the fitness function.
> 
> This is precisely why evolutionary methods aren't safe. Also see
> http://www.overcomingbias.com/2007/11/conjuring-an-ev.html

But what is the alternative?

 
> > The goals of SIAI are based on the assumption that unfriendly AI is bad. 
> I
> > question that.  "Good" and "bad" are not intrinsic properties of matter.
> > Wiping out the human race is "bad" because evolution selects animals for a
> > survival instinct for themselves and the species.  Is the extinction of
> the
> > dinosaurs bad?  The answer depends on whether you ask a human or a
> dinosaur.
> > If a godlike intelligence thinks that wiping out all organic life is good,
> > then its opinion is the only one that matters.
> 
> Uh, yes. I see this as a bad thing- I don't want everyone to get
> killed. See http://www.overcomingbias.com/2007/11/terrible-optimi.html,
> http://www.overcomingbias.com/2007/05/one_life_agains.html,
> http://www.overcomingbias.com/2007/11/evolving-to-ext.html.

Evolution is a critically balanced system on the boundary between stability
and chaos.  Stuart Kauffman studied such systems, which also include complex
software systems, gene regulation networks, and randomly connected logic gates
with an average fan-in/fan-out at a critical value between 2 and 3.  In analog
systems, we say a system is critically balanced if its Lyapunov exponent is 0.
A characteristic of such systems is that they are usually stable against
perturbations of the system state, but occasionally a small change can cause
catastrophic results.  Evolution is punctuated by mass extinctions on the
boundaries between geologic eras.  We are in one now.
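
For the Lyapunov exponent claim, here is a minimal numerical sketch using the
logistic map as a stand-in for a complex system (the map and the parameter
values are illustrative only); an exponent near 0 marks the boundary between
stability and chaos:

    from math import log

    def lyapunov(r, x0=0.1, warmup=1000, steps=10000):
        x = x0
        for _ in range(warmup):                    # discard the transient
            x = r * x * (1 - x)
        total = 0.0
        for _ in range(steps):
            x = r * x * (1 - x)
            total += log(abs(r * (1 - 2 * x)))     # log |f'(x)| for f(x) = r x (1 - x)
        return total / steps

    for r in (3.2, 3.57, 4.0):                     # stable, near-critical, chaotic
        print(r, round(lyapunov(r), 3))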


> > If you don't want to give up your position at the top of the food chain,
> then
> > don't build AI.  But that won't happen, because evolution is smarter than
> you
> > are.
> 
> This isn't true: see
> http://www.overcomingbias.com/2007/11/the-wonder-of-e.html,
> http://www.overcomingbias.com/2007/11/natural-selecti.html,
> http://www.overcomingbias.com/2007/11/an-alien-god.html,
> http://www.overcomingbias.com/2007/11/evolutions-are-.html.

Evolution appears stupid because it is slow.  On our time scale it seems to
backtrack endlessly from pointless dead ends.  But ultimately it succeeded in
creating humans.  RSI will be much faster.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=71191395-cafcfe


[singularity] Wrong question?

2007-11-30 Thread Matt Mahoney
How can we design AI so that it won't wipe out all DNA based life, possibly
this century?

That is the wrong question.  I was reading
http://sl4.org/wiki/SoYouWantToBeASeedAIProgrammer and realized that (1) I am
not smart enough to be on their team and (2) even if SIAI does assemble a team
of the world's smartest scientists with IQs of 200+, how are they going to
compete with a Jupiter brain with an IQ of 10^39?  Recursive self improvement
is a necessarily evolutionary algorithm.  It doesn't matter what the starting
conditions are.  All that ultimately matters is the fitness function.

The goals of SIAI are based on the assumption that unfriendly AI is bad.  I
question that.  "Good" and "bad" are not intrinsic properties of matter. 
Wiping out the human race is "bad" because evolution selects animals for a
survival instinct for themselves and the species.  Is the extinction of the
dinosaurs bad?  The answer depends on whether you ask a human or a dinosaur. 
If a godlike intelligence thinks that wiping out all organic life is good,
then its opinion is the only one that matters.

If you don't want to give up your position at the top of the food chain, then
don't build AI.  But that won't happen, because evolution is smarter than you
are.

I expressed my views in more detail in
http://www.mattmahoney.net/singularity.html
Comments?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=70862924-edee6f


Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > My assumption is friendly AI under the CEV model.  Currently, FAI is
> unsolved.
> >  CEV only defines the problem of friendliness, not a solution.  As I
> > understand it, CEV defines AI as friendly if on average it gives humans
> what
> > they want in the long run, i.e. denies requests that it predicts we would
> > later regret.  If AI has superhuman intelligence, then it could model
> human
> > brains and make such predictions more accurately than we could ourselves. 
> The
> > unsolved step is to actually motivate the AI to grant us what it knows we
> > would want.  The problem is analogous to human treatment of pets.  We know
> > what is best for them (e.g. vaccinations they don't want), but it is not
> > possible for animals to motivate us to give it to them.
> 
> This paragraph assumes that humans and AGIs will be completely separate, 
> which I have already explained is an extremely unlikely scenario.

I believe you said that humans would have a choice.

I have already mentioned the possibility of brain augmentation, and of uploads
with or without shared memory.  CEV requires that the AGI be smarter than
human, otherwise it could not model the brain to predict what the human would
want in the future.  CEV therefore only applies to those lower and middle
level entities.  I use CEV because it seems to be the best definition of
friendliness that we have.

I already mentioned one other problem with CEV, which is that we have not
solved the problem of actually motivating the AGI to grant us what it knows we
will want and have this motivation remain stable through RSI.  You believe
there is a solution (diffuse constraints).

The other problem is that human motivations can be reprogrammed, either by
moving neurons around or by uploading and changing the software.  CEV neglects
this issue.  What if the AGI programs you to want to die, then kills you
because that is what you would want?  That is not far-fetched.  Consider the
opposite scenario where you are feeling suicidal and the AGI reprograms you to
want to live.  Afterwards you would thank it for saving your life, so its
actions are consistent with CEV even if you initially opposed reprogramming. 
Most people would also consider such forced intervention to be ethical.  But
CEV warns against programming any moral or ethical rules into it, because
these rules can change.  At one time, slavery and persecution of homosexuals
was acceptable.  So you either allow or disallow AGI to reprogram your
motivations.  Which will it be?

But let us return to the original question for the case where humans are
uploaded with shared memory and augmented into a single godlike intelligence,
now dropping the assumption of CEV.  The question remains whether this AGI
would preserve the lives of the original humans or their memories.  Not what
it should do, but what it would do.  We have a few decades left to think about
this.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58483858-fc727e


Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-27 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Suppose that the collective memories of all the humans make up only one
> > billionth of your total memory, like one second of memory out of your
> human
> > lifetime.  Would it make much difference if it was erased to make room for
> > something more important?
> 
> This question is not coherent, as far as I can see.  "My" total memory? 
>   Important to whom?  Under what assumptions do you suggest this situation.

I mean the uploaded you with the computing power of 10^19 brains (to pick a
number).  When you upload there are two of you, the original human and the copy. 
Both copies are you in the sense that both behave as though conscious and both
have your (original) memories.  I use the term "you" for the upload in this
sense, although it is really everybody.

By "conscious behavior", I mean belief that sensory input is the result of a
real environment and belief in having some control over it.  This is different
than the common meaning of consciousness which we normally associate with
human form or human behavior.  By "believe" I mean claiming that something is
true, and behaving in a way that would increase reward if it is true.  I don't
claim that consciousness exists.

My assumption is friendly AI under the CEV model.  Currently, FAI is unsolved.
 CEV only defines the problem of friendliness, not a solution.  As I
understand it, CEV defines AI as friendly if on average it gives humans what
they want in the long run, i.e. denies requests that it predicts we would
later regret.  If AI has superhuman intelligence, then it could model human
brains and make such predictions more accurately than we could ourselves.  The
unsolved step is to actually motivate the AI to grant us what it knows we
would want.  The problem is analogous to human treatment of pets.  We know
what is best for them (e.g. vaccinations they don't want), but it is not
possible for animals to motivate us to give it to them.

FAI under CEV would not be applicable to uploaded humans with collective
memories because the AI could not predict what an equal or greater
intelligence would want.  For the same reason, it may not apply to augmented
human brains, i.e. brains extended with additional memory and processing
power.

My question to you, the upload with the computing power of 10^19 brains, is
whether the collective memory of the 10^10 humans alive at the time of the
singularity is important.  Suppose that this memory (say 10^25 bits out of
10^34 available bits) could be lossily compressed into a program that
simulated the rise of human civilization on an Earth similar to ours, but with
different people.  This compression would make space available to run many
such simulations.

So when I ask you (the upload with 10^19 brains) which decision you would
make, I realize you (the original) are trying to guess the motivations of an
AI that knows 10^19 times more.  We need some additional assumptions:

1. You (the upload) are a friendly AI as defined by CEV.
2. All humans have been uploaded because as a FAI you predicted that humans
would want their memories preserved, and no harm to the original humans is
done in the process.
3. You want to be smarter (i.e. more processing speed, memory, I/O bandwidth,
and knowledge), because this goal is stable under RSI.
4. You cannot reprogram your own goals, because systems that could are not
viable.
5. It is possible to simulate intermediate level agents with memories of one
or more uploaded humans, but less powerful than yourself.  FAI applies to
these agents.
6. You are free to reprogram the goals and memories of humans (uploaded or
not) and agents less powerful than yourself, consistent with what you predict
they would want in the future.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58322362-4c8dca


Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Why do say that "Our reign will end in a few decades" when, in fact, one 
> of the most obvious things that would happen in this future is that 
> humans will be able to *choose* what intelligence level to be 
> experiencing, on a day to day basis?  Similarly, the AGIs would be able 
> to choose to come down and experience human-level intelligence whenever 
> they liked, too.

Let's say that is true.  (I really have no disagreement here).  Suppose that
at the time of the singularity the memories of all 10^10 humans alive at
the time, you included, are nondestructively uploaded.  Suppose that this
database is shared by all the AGI's.  Now is there really more than one AGI? 
Are you (the upload) still you?

Does it now matter if humans in biological form still exist?  You have
preserved everyone's memory and DNA, and you have the technology to
reconstruct any person from this information any time you want.

Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your human
lifetime.  Would it make much difference if it was erased to make room for
something more important?

I am not saying that the extinction of humans and its replacement with godlike
intelligence is necessarily a bad thing, but it is something to be aware of.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57756689-2193f7


Re: [singularity] John Searle...

2007-10-25 Thread Matt Mahoney
--- candice schuster <[EMAIL PROTECTED]> wrote:
> In all of my previous posts, most of them anyhow, I have mentioned
> consciousness. Today I found myself reading some of John Searle's theories;
> he poses exactly the same type of question... The reason computers can't do
> semantics is that semantics is about meaning; meaning derives from
> original intentionality, and original intentionality derives from feelings -
> qualia - and computers don't have any qualia.  How does consciousness get
> added to the AI picture, Richard?

Searle and Roger Penrose don't believe that machines can duplicate what the
human brain does.  For example, Penrose believes that there are uncomputable
quantum effects or some other unknown physical processes going on in the
brain.  Most other AI researchers believe that the brain works according to
known physical principles and could therefore in principle be simulated by a
computer.

And computers can do semantics, for example, pass the (no longer used) word
analogy section of the SAT exam. 
http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf

The difference between human and machine semantics is that machines generally
associate words only with other words, but humans also associate words with
nonverbal stimuli such as images or actions.  But in principle there is no
reason that machines with sensors and effectors could not do that too.
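
To make "associate words only with other words" concrete, here is a toy sketch
of word similarity from co-occurrence counts alone; the tiny corpus and window
size are invented for illustration, and this is not the method of the
SAT-analogy paper linked above, which used large-scale corpus statistics:

    from collections import Counter
    from math import sqrt

    corpus = ("the king rules the kingdom . the queen rules the kingdom . "
              "a man walks . a woman walks . the king is a man . "
              "the queen is a woman .").split()

    def vector(word, window=2):
        # counts of the words that appear within `window` positions of `word`
        v = Counter()
        for i, w in enumerate(corpus):
            if w == word:
                for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                    if j != i:
                        v[corpus[j]] += 1
        return v

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        na = sqrt(sum(x * x for x in a.values()))
        nb = sqrt(sum(x * x for x in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # words used in similar contexts get similar vectors, purely from text
    print("king~queen:", round(cosine(vector("king"), vector("queen")), 2))
    print("king~walks:", round(cosine(vector("king"), vector("walks")), 2))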

Qualia and consciousness are not rooted in semantics, but in biology.  By
consciousness, I mean that which makes you different from a P-zombie. 
http://en.wikipedia.org/wiki/Philosophical_zombie

There is no known test for consciousness.  You cannot tell if a machine or
animal really feels pain or happiness, or only behaves as though it does.  You
could argue the same about humans, even yourself.  But you believe that your
own feelings are real and that you have control over your thoughts and actions
because evolution favors animals that behave this way.  You do not have the
option to turn off pain or hunger.  If you did, you would not pass on your
DNA.  It is no more possible for you to not believe in your own consciousness
than it would be for you to memorize a list of a million numbers.  That is
just how your brain works.

I believe this is why Searle and Penrose hold the positions they do.  Before
computers, their beliefs were universally held.  Turing was very careful to
separate the issue of consciousness from the possibility of AI.





-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57737187-d7ae0a


Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
Richard, I have no doubt that the technological wonders you mention will all
be possible after a singularity.  My question is about what role humans will
play in this.  For the last 100,000 years, humans have been the most
intelligent creatures on Earth.  Our reign will end in a few decades.

Who is happier?  You, an illiterate medieval servant, or a frog in a swamp? 
This is a different question than asking what you would rather be.  I mean
happiness as measured by an objective test, such as suicide rate.  Are you
happier than a slave who does not know her brain is a computer, or the frog
that does not know it will die?  Why are depression and suicide so prevalent in
humans in advanced countries and so rare in animals?

Does it even make sense to ask if AGI is friendly or not?  Either way, humans
will be simple, predictable creatures under their control.  Consider how the
lives of dogs and cats have changed in the presence of benevolent humans, or
cows and chickens given malevolent humans.  Dogs are confined, well fed,
protected from predators, and bred for desirable traits such as a gentle
disposition.  Chickens are confined, well fed, protected from predators, and
bred for desirable traits such as being plump and tender.  Are dogs happier
than chickens?  Are they happier now than in the wild?  Suppose that dogs and
chickens in the wild could decide whether to allow humans to exist.  What
would they do?

What motivates humans, given our total ignorance, to give up our position at
the top of the food chain?




--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> 
> This is a perfect example of how one person comes up with some positive, 
> constructive ideas  and then someone else waltzes right in, pays 
> no attention to the actual arguments, pays no attention to the relative 
> probability of different outcomes, but just snears at the whole idea 
> with a "Yeah, but what if everything goes wrong, huh?  What if 
> Frankenstein turns up? Huh? Huh?" comment.
> 
> Happens every time.
> 
> 
> Richard Loosemore
> 
> 
> 
> 
> 
> 
> 
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > 
> > 
> > 
> > Let's assume for the moment that the very first AI is safe and friendly,
> and
> > not an intelligent worm bent on swallowing the Internet.  And let's also
> > assume that once this SAFAI starts self improving, that it quickly
> advances to
> > the point where it is able to circumvent all the security we had in place
> to
> > protect against intelligent worms and quash any competing AI projects. 
> And
> > let's assume that its top level goals of altruism to humans remains stable
> > after massive gains of intelligence, in spite of known defects in the
> original
> > human model of ethics (e.g.
> http://en.wikipedia.org/wiki/Milgram_experiment
> > and http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We will
> ignore
> > for now the fact that any goal other than reproduction and acquisition of
> > resources is unstable among competing, self improving agents.
> > 
> > Humans now have to accept that their brains are simple computers with (to
> the
> > SAFAI) completely predictable behavior.  You do not have to ask for what
> you
> > want.  It knows.
> > 
> > You want pleasure?  An electrode to the nucleus accumbens will keep you
> happy.
> > 
> > You want to live forever?  The SAFAI already has a copy of your memories. 
> Or
> > something close.  Your upload won't know the difference.
> > 
> > You want a 10,000 room mansion and super powers?  The SAFAI can simulate
> it
> > for you.  No need to waste actual materials.
> > 
> > Life is boring?  How about if the SAFAI reprograms your motivational
> system so
> > that you find staring at the wall to be forever exciting?
> > 
> > You want knowledge?  Did you know that consciousness and free will don't
> > exist?  That the universe is already a simulation?  Of course not.  Your
> brain
> > is hard wired to be unable to believe these things.  Just a second, I will
> > reprogram it.
> > 
> > What?  You don't want this?  OK, I will turn myself off.
> > 
> > Or maybe not.
> > 
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]
> > 
> > -
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> > 
> > 
> 
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
> 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57531803-d4a3fe


Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:



Let's assume for the moment that the very first AI is safe and friendly, and
not an intelligent worm bent on swallowing the Internet.  And let's also
assume that once this SAFAI starts self improving, that it quickly advances to
the point where it is able to circumvent all the security we had in place to
protect against intelligent worms and quash any competing AI projects.  And
let's assume that its top level goals of altruism to humans remains stable
after massive gains of intelligence, in spite of known defects in the original
human model of ethics (e.g. http://en.wikipedia.org/wiki/Milgram_experiment
and http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We will ignore
for now the fact that any goal other than reproduction and acquisition of
resources is unstable among competing, self improving agents.

Humans now have to accept that their brains are simple computers with (to the
SAFAI) completely predictable behavior.  You do not have to ask for what you
want.  It knows.

You want pleasure?  An electrode to the nucleus accumbens will keep you happy.

You want to live forever?  The SAFAI already has a copy of your memories.  Or
something close.  Your upload won't know the difference.

You want a 10,000 room mansion and super powers?  The SAFAI can simulate it
for you.  No need to waste actual materials.

Life is boring?  How about if the SAFAI reprograms your motivational system so
that you find staring at the wall to be forever exciting?

You want knowledge?  Did you know that consciousness and free will don't
exist?  That the universe is already a simulation?  Of course not.  Your brain
is hard wired to be unable to believe these things.  Just a second, I will
reprogram it.

What?  You don't want this?  OK, I will turn myself off.

Or maybe not.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56880422-0da228


Re: [singularity] QUESTION

2007-10-22 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> This is nonsense:  the result of giving way to science fiction fantasies 
> instead of thinking through the ACTUAL course of events.  If the first 
> one is benign, the scenario below will be impossible, and if the first 
> one is not benign, the scenario below will be incredibly unlikely.
> 
> Over and over again, the same thing happens:  some people go to the 
> trouble of thinking through the consequences of the singularity with 
> enormous care for the real science and the real design of intelligences, 
> and then someone just waltzes in and throws all that effort out the 
> window and screams "But it'll become evil and destroy everything [gibber 
> gibber]!!"

Not everyone shares your rosy view.  You may have thought about the problem a
lot, but where is your evidence (proofs or experimental results) backing up
your view that the first AGI will be friendly, remain friendly through
successive generations of RSI, and will quash all nonfriendly competition? 
You seem to ignore that:

1. There is a great economic incentive to develop AGI.
2. Not all AGI projects will have friendliness as a goal.  (In fact, SIAI is
the ONLY organization with friendliness as a goal, and they are not even
building an AGI).
3. We cannot even define friendliness.
4. As I have already pointed out, friendliness is not stable through
successive generations of recursive self improvement (RSI) in a competitive
environment, because this environment favors agents that are better at
reproducing rapidly and acquiring computing resources.

RSI requires an agent to have enough intelligence to design, write, and debug
software at the same level of sophistication as its human builders.  How do
you propose to counter the threat of intelligent worms that discover software
exploits as soon as they are published?  When the Internet was first built,
nobody thought about security.  It is a much harder problem when the worms are
smarter than you are, when they can predict your behavior more accurately than
you can predict theirs.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56508130-ee5f61


Re: [singularity] QUESTION

2007-10-22 Thread Matt Mahoney
--- albert medina <[EMAIL PROTECTED]> wrote:

>   All sentient creatures have a sense of self, about which all else
> revolves.  Call it "egocentric singularity" or "selfhood" or "identity". 
> The most evolved "ego" that we can perceive is in the human species.  As far
> as I know, we are the only beings in the universe who "know that we do not
> know."  This fundamental "deficiency" is the basis for every desire to
> acquire things, as well as knowledge.

Understand where these ideas come from.  A machine learning algorithm capable
of reinforcement learning must respond to reinforcement as if the signal were
real.  It must also balance short term exploitation (immediate reward) against
long term exploration.  Evolution favors animals with good learning
algorithms.  In humans we associate these properties with consciousness and
free will.  These beliefs are instinctive.  You cannot reason logically about
them.  In particular, you cannot ask if a machine or animal or another person
is conscious.  (Does it really feel pain, or only respond to pain?)  You can
only ask about its behavior.
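
A minimal sketch of that short-term/long-term balance, assuming a made-up
two-armed bandit with epsilon-greedy choice (the reward probabilities and the
value of epsilon are only illustrative):

    import random

    probs = [0.3, 0.7]          # hidden reward probability of each arm
    estimates = [0.0, 0.0]      # the agent's running estimate of each arm's value
    counts = [0, 0]
    epsilon = 0.1               # fraction of choices spent exploring

    for t in range(10000):
        if random.random() < epsilon:
            arm = random.randrange(2)                 # explore: try a random arm
        else:
            arm = estimates.index(max(estimates))     # exploit: best arm so far
        reward = 1.0 if random.random() < probs[arm] else 0.0
        counts[arm] += 1
        # incremental average -- the reinforcement signal is taken at face value
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print("pulls per arm:", counts)
    print("value estimates:", [round(e, 2) for e in estimates])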

Current research in AGI is directed at solving the remaining problems that
people still do better than machines, such as language and vision.  These
problems don't require reinforcement learning.  Therefore, such machines need
not have behavior that would make them appear conscious.

If humans succeed in making machines smarter than themselves, those machines
could do likewise.  This process is called recursive self improvement (RSI). 
An agent cannot predict what a more intelligent agent will do (see
http://www.vetta.org/documents/IDSIA-12-06-1.pdf and
http://www.sl4.org/wiki/KnowabilityOfFAI for debate).  Thus, RSI is
experimental at every step.  Some offspring will be more fit than others.  If
agents must compete for computing resources, then we have an evolutionary
algorithm favoring agents whose goal is rapid reproduction and acquisition of
resources.  If an agent has goals and is capable of reinforcement learning,
then it will mimic conscious behavior.

RSI is necessary for a singularity, and goal directed agents seem to be
necessary for RSI.  It raises hard questions about what role humans will play
in this, if any.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56346815-402f08


RE: [singularity] Benefits of being a kook

2007-09-24 Thread Matt Mahoney
--- YOST Andrew <[EMAIL PROTECTED]> wrote:

> I'm sorry, what is AGI again?

Artificial General Intelligence, as opposed to AI, which usually refers to
systems applied to problems of narrow scope.  AGI means doing all or most of
what the human brain can do.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=45401777-66a3f6


Re: [singularity] Towards the Singularity

2007-09-12 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 11/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > > > No, you are thinking in the present, where there can be only one copy
> of a
> > > > brain.  When technology for uploading exists, you have a 100% chance
> of
> > > > becoming the original and a 100% chance of becoming the copy.
> > >
> > > It's the same in no collapse interpretations of quantum mechanics.
> > > There is a 100% chance that a copy of you will see the atom decay and
> > > a 100% chance that a copy of you will not see the atom decay. However,
> > > experiment shows that there is only a 50% chance of seeing the atom
> > > decay, because the multiple copies of you don't share their
> > > experiences. The MWI gives the same probabilistic results as the CI
> > > for any observer.
> >
> > The analogy to the multi-universe view of quantum mechanics is not valid. 
> In
> > the multi-universe view, there are two parallel universes both before and
> > after the split, and they do not communicate at any time.  When you copy a
> > brain, there is one copy before and two afterwards.  Those two brains can
> then
> > communicate with each other.
> 
> I think the usual explanation is that the "split" doubles the number
> of universes and the number of copies of a brain. It wouldn't make any
> difference if tomorrow we discovered a method of communicating with
> the parallel universes: you would see the other copies of you who have
> or haven't observed the atom decay but subjectively you still have a
> 50% chance of finding yourself in one or other situation if you can
> only have the experiences of one entity at a time.

If this is true, then it undermines an argument for uploading.  Some assume
that if you destructively upload, then you have a 100% chance of being the
copy.  But what if the original is killed not immediately, but one second
later?

These problems go away if you don't assume consciousness exists.  Then the
question is, if I encounter someone that claims to be you, what is the
probability that I encountered your copy?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=41355369-478574


Re: [singularity] Towards the Singularity

2007-09-10 Thread Matt Mahoney

--- Panu Horsmalahti <[EMAIL PROTECTED]> wrote:

> 2007/9/10, Matt Mahoney <[EMAIL PROTECTED]>:
> 
> > - Human belief in consciousness and subjective experience is universal and
> > accepted without question.
> 
> 
> It isn't.

I am glad you spotted the flaw in these statements.

> 
>   Any belief programmed into the brain through
> > natural selection must be true in any logical system that the human mind
> > can
> > comprehend.
> 
> 
> 1. Provide evidence that any belief at all is "programmed into the brain
> through natural selection"
> 2. Provide evidence for the claim that these supposed beliefs "must be true
> in any logical system that the human mind can comprehend."
> 
> I don't think natural selection has had enough time to program any beliefs
> about consciousness into our brains, as philosophical discussion about these
> issues has been around for only a couple of thousand years. Also, disbelief
> in consciousness doesn't mean that the individual suddenly stops to
> reproduce or kills itself (I remember you claiming this, I might be wrong
> though).

Disagreements over the existence of consciousness often center on the
definition.  One definition is that consciousness is that which distinguishes
the human mind from that of animals and machines.  This definition has
difficulties.  Isn't a dog more conscious than a worm?  Are babies conscious? 
If so, at what point after conception?

I prefer to define consciousness at that which distinguishes humans from
p-zombies as described in http://en.wikipedia.org/wiki/Philosophical_zombie
For example, if you poke a p-zombie with a sharp object, it will not
experience pain, although it will react just like a human.  It will say
"ouch", avoid behaviors that cause pain, and claim that it really does feel
pain, just like any human.  There is no test to distinguish a conscious human
from a p-zombie.

In this sense, belief in consciousness (but not consciousness itself) is
testable, even in animals.  An animal cannot say "I exist", but it will change
its behavior to avoid pain, evidence that it appears to believe that pain is
real.  You might not agree that learning by negative reinforcement is the same
as a belief in one's own consciousness, but consider all the ways in which a
human might not change his behavior in response to pain, e.g. coma,
anesthesia, distraction, enlightenment, etc.  Would you say that such a person
still experiences pain?

I assume you agree that animals which react to stimuli as if they were real
have a selective advantage over those that do not.  Likewise, evolution favors
animals that retain memory, that seek knowledge through exploration (appear to
have free will), and that fear death.  These are all traits that we associate
with consciousness in humans.

> Matt, you have frequently 'hijacked' threads about consciousness with these
> claims, so maybe you could tell us reasons to believe in them?

It has important implications for the direction that a singularity will take. 
Recursive self improvement is a genetic algorithm that favors rapid
reproduction and acquisition of computing resources.  It does not favor
immortality, friendliness (whatever that means), or high fidelity of uploads. 
Humans, on the other hand, are motivated to upload by fear of death and the
belief that their consciousness depends on the preservation of their memories.
  How will human uploads driven by these goals fare in a competitive computing
environment?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=40332421-43f7b0


Re: [singularity] Towards the Singularity

2007-09-10 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 10/09/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > > No, it is not necessary to destroy the original. If you do destroy the
> > > original you have a 100% chance of ending up as the copy, while if you
> > > don't you have a 50% chance of ending up as the copy. It's like
> > > probability if the MWI of QM is correct.
> >
> > No, you are thinking in the present, where there can be only one copy of a
> > brain.  When technology for uploading exists, you have a 100% chance of
> > becoming the original and a 100% chance of becoming the copy.
> 
> It's the same in no collapse interpretations of quantum mechanics.
> There is a 100% chance that a copy of you will see the atom decay and
> a 100% chance that a copy of you will not see the atom decay. However,
> experiment shows that there is only a 50% chance of seeing the atom
> decay, because the multiple copies of you don't share their
> experiences. The MWI gives the same probabilistic results as the CI
> for any observer.

The analogy to the multi-universe view of quantum mechanics is not valid.  In
the multi-universe view, there are two parallel universes both before and
after the split, and they do not communicate at any time.  When you copy a
brain, there is one copy before and two afterwards.  Those two brains can then
communicate with each other.

The multi-universe view cannot be tested.  The evidence in its favor is
Occam's Razor (or its formal equivalent, AIXI, assuming the universe is a
computation).

The view that you express is that when a brain is copied, one copy becomes
human with subjective experience and the other becomes a p-zombie, but we
don't know which one.  The evidence in favor of this view is:

- Human belief in consciousness and subjective experience is universal and
accepted without question.  Any belief programmed into the brain through
natural selection must be true in any logical system that the human mind can
comprehend.

- Out of 6 billion humans, no two have the same memory.  Therefore by
induction, it is impossible to copy consciousness.

(I hope that you can see the flaws in this evidence).

This view also cannot be tested, because there is no test to distinguish a
conscious human from a p-zombie.  Unlike the multi-universe view where a
different copy becomes conscious in each universe, the two universes would
continue to remain identical.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=40137679-35c2da


Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> I intentionally don't want to exactly define what S is as it describes
> vaguely-defined 'subjective experience generator'. I instead leave it
> at description level.

If you can't define what subjective experience is, then how do you know it
exists?  If it does exist, then is it a property of the computation, or does
it depend on the physical implementation of the computer?  How do you test for
it?  
Do you claim that the human brain cannot be emulated by a Turing machine?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=40020966-19730d


Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Matt Mahoney

--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> Sunday, September 9, 2007, Matt Mahoney wrote:
> 
> MM> Also, Chalmers argues that a machine copy of your brain must be
> conscious.
> MM> But he has the same instinct to believe in consciousness as everyone
> else.  My
> MM> claim is broader: that either a machine can be conscious or that
> consciousness
> MM> does not exist.
> 
> While I'm not yet ready to continue my discussion on essentially the
> same topic with Stathis on SL4, let me define this problem here.
> 
> Let's replace discussion of consciousness with more simple of 'subjective
> experience'. So, there is a host universe in which there's an
> implementation of mind (a brain or any other such thing) which we as a
> starting point assume to have this subjective experience.
> 
> Subjective experience exists as relations in mind's
> implementation in host universe (or process of their modification in time).
> From this it supposedly follows that subjective experience exists only as
> that relation and if that relation is instantiated in different
> implementation, the same subjective experience should also exist.
> 
> Let X be original implementation of mind (X defines state of the
> matter in host universe that comprises the 'brain'), and S be the
> system of relations implemented by X (the mind). There is a simple
> correspondence between X and S, let's say S=F(X). As brain can be
> slightly modified without significantly affecting the mind (additional
> assumption), F can also be modification-tolerant, that is for example
> if you replace in X some components of neurons by constructs with different
> chemistry which still implement the same functions, F(X) will not
> change significantly.
> 
> Now, let Z be an implementation of uploaded X. That is Z can as well
> be some network of future PCs plus required software and data
> extracted from X. Now, how does Z correspond to S? There clearly is
> some correspondence that was used in construction of Z. For example,
> let there be a certain feature of S that can be observed on X (say,
> feature is D and it can be extracted by procedure R,
> D=R(S)=R(F(X))=(RF)(X), D can be for
> example a certain word that S is saying right now).
> Implementation Z comes with a function L that enables to extract D,
> that is D=L(Z), or L(Z)=R(S).
> 
> Presence of implementation Z and feature-extractor L only allow the
> observation of features of S. But to say that Z implements S in the
> sense defined above for X, there should be a correspondence S=F'(Z).
> This correspondence F' supposedly exists, but it is not implemented in
> any way, so there is nothing that makes it more appropriate for Z than
> other arbitrary correspondence F'' which results in a different mind
> F''(Z)=S'<>S. F' is not a near-equivalence as F was. One can't say
> that the implementation of the uploaded mind simulates the same mind, or even
> a mind that is similar in any way. It observes the behaviour of the original
> mind using feature-extractors and so is functionally equivalent, but it doesn't
> exclusively provide an implementation for the same subjective
> experience.
> 
> So, here is a difference: simplicity of correspondence F between
> implementation and the mind. We know from experience that
> modifications which leave F a simple correspondence don't destroy
> subjective experience. But complex correspondences make it impossible
> to distinguish between possible subjective experiences implementation
> simulates, as correspondence function itself isn't implemented along
> with simulation.
> 
> As a final paradoxical example, if implementation Z is nothing, that
> is, it comprises no matter and information at all, there still is a
> correspondence function F(Z)=S which supposedly asserts that Z is X's
> upload. There can even be a feature extractor (which will have to implement
> functional simulation of S) that works on an empty Z. What is the
> difference from subjective experience simulation point of view between
> this empty Z and a proper upload implementation?
> 
> -- 
>  Vladimir Nesov  mailto:[EMAIL PROTECTED]

Perhaps I misunderstand, but to make your argument more precise:

X is an implementation of a mind, a Turing machine.

S is the function computed by X, i.e. a canonical form of X, the smallest or
first Turing machine in an enumeration of all machines equivalent to X.  By
equivalent, I mean that X(w) = S(w) for all input strings w in A* over some
alphabet A.

Define F: F(X) = S (canonical form of X), for all X.  F is not computable, but
that is not important for this discussion.
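
In compact notation, this is just a restatement of the definitions above
(nothing new is added):

\[
  S \;=\; F(X) \;:=\; \min\bigl\{\, M \;:\; \forall w \in A^{*},\ M(w) = X(w) \,\bigr\},
\]

where the minimum is taken over a fixed enumeration of Turing machines, so S is
the first machine in the enumeration that agrees with X on every input.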

An upload, Z, of X is 

Re: [singularity] Towards the Singularity

2007-09-09 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 09/09/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > > > Your dilemma: after you upload, does the original human then become a
> > > > p-zombie, or are there two copies of your consciousness?  Is it
> necessary
> > > to
> > > > kill the human body for your consciousness to transfer?
> > >
> > > I have the same problem in ordinary life, since the matter in my brain
> > > from a year ago has almost all dispersed into the biosphere. Even the
> > > configuration [of] matter in my current brain, and the information it
> > > represents, only approximates that of my erstwhile self. It's just
> > > convenient that my past selves naturally disintegrate, so that I don't
> > > encounter them and fight it out to see which is the "real" me. We've
> > > all been through the equivalent of destructive uploading.
> >
> > So your answer is yes?
> 
> No, it is not necessary to destroy the original. If you do destroy the
> original you have a 100% chance of ending up as the copy, while if you
> don't you have a 50% chance of ending up as the copy. It's like
> probability if the MWI of QM is correct.

No, you are thinking in the present, where there can be only one copy of a
brain.  When technology for uploading exists, you have a 100% chance of
becoming the original and a 100% chance of becoming the copy.


> >
> > So if your brain is a Turing machine in language L1 and the program is
> > recompiled to run in language L2, then the consciousness transfers?  But
> if
> > the two machines implement the same function but the process of writing
> the
> > second program is not specified, then the consciousness does not transfer
> > because it is undecidable in general to determine if two programs are
> > equivalent?
> 
> It depends on what you mean by "implements the same function". A black
> box that emulates the behaviour of a neuron and can be used to replace
> neurons one by one, as per Hans Moravec, will result in no alteration
> to consciousness (as shown in David Chalmers' "fading qualia" paper:
> http://consc.net/papers/qualia.html), so total replacement by these
> black boxes will result in no change to consciousness. It doesn't
> matter what is inside the black box, as long as it is functionally
> equivalent to the biological tissue. On the other hand...

I mean "implements the same function" in that identical inputs result in
identical outputs.  I don't insist on a 1-1 mapping of machine states as
Chalmers does.  I doubt it makes a difference, though.

Also, Chalmers argues that a machine copy of your brain must be conscious. 
But he has the same instinct to believe in consciousness as everyone else.  My
claim is broader: that either a machine can be conscious or that consciousness
does not exist.

> What is the difference between really being conscious and only
> thinking that I am conscious?

Nothing.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39985876-d99aeb


Re: [singularity] Towards the Singularity

2007-09-09 Thread Matt Mahoney

--- Nathan Cook <[EMAIL PROTECTED]> wrote:

> >
> > What if the copy is not exact, but close enough to fool others who know
> > you?
> > Maybe you won't have a choice.  Suppose you die before we have developed
> > the
> > technology to scan neurons, so family members customize an AGI in your
> > likeness based on all of your writing, photos, and interviews with people
> > that
> > knew you.  All it takes is 10^9 bits of information about you to pass a
> > Turing
> > test.  As we move into the age of surveillance, this will get easier to
> > do.  I
> > bet Yahoo knows an awful lot about me from the thousands of emails I have
> > sent
> > through their servers.
> 
> 
> I can't tell if you're playing devil's advocate for monadic consciousness
> here, but in
> any case, I disagree with you that you can observe a given quantity of data
> of the
> sort accessible without a brain scan, and then reconstruct the brain from
> that. The
> thinking seems to be that, as the brain is an analogue device in which every
> part is
> connected via some chain to every other, everything in your brain slowly
> leaks out
> into the environment through your behaviour.

You can combine general knowledge for constructing an AGI with personal
knowledge to create a reasonable facsimile.  For example, given just my home
address, you could guess I speak English, make reasonable guesses about what
places I might have visited, and make up some plausible memories.  Even if
they are wrong, my copy wouldn't know the difference.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39986288-7eb9fb


Re: [singularity] Towards the Singularity

2007-09-08 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 09/09/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > Your dilemma: after you upload, does the original human then become a
> > p-zombie, or are there two copies of your consciousness?  Is it necessary
> to
> > kill the human body for your consciousness to transfer?
> 
> I have the same problem in ordinary life, since the matter in my brain
> from a year ago has almost all dispersed into the biosphere. Even the
> configuration of matter in my current brain, and the information it
> represents, only approximates that of my erstwhile self. It's just
> convenient that my past selves naturally disintegrate, so that I don't
> encounter them and fight it out to see which is the "real" me. We've
> all been through the equivalent of destructive uploading.

So your answer is yes?

> 
> > What if the copy is not exact, but close enough to fool others who know
> you?
> > Maybe you won't have a choice.  Suppose you die before we have developed
> the
> > technology to scan neurons, so family members customize an AGI in your
> > likeness based on all of your writing, photos, and interviews with people
> that
> > knew you.  All it takes is 10^9 bits of information about you to pass a
> Turing
> > test.  As we move into the age of surveillance, this will get easier to
> do.  I
> > bet Yahoo knows an awful lot about me from the thousands of emails I have
> sent
> > through their servers.
> 
> There is no guarantee that something which behaves the same way as the
> original also has the same consciousness. However, there are good
> arguments in support of the thesis that something which behaves the
> same way as the original as a result of identical or isomorphic brain
> structure also has the same consciousness as the original.

So if your brain is a Turing machine in language L1 and the program is
recompiled to run in language L2, then the consciousness transfers?  But if
the two machines implement the same function but the process of writing the
second program is not specified, then the consciousness does not transfer
because it is undecidable in general to determine if two programs are
equivalent?
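
(The undecidability claim is the standard one: given any program P and input x,
let Q be a program that ignores its input, runs P on x, and then outputs 0, and
let R be a program that ignores its input and loops forever.  Q and R compute
the same function exactly when P fails to halt on x, so a general equivalence
decider would solve the halting problem.)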

On the other hand, your sloppily constructed customized AGI will insist that
it is a conscious continuation of your life, even if 90% of its memories are
missing or wrong.  As long as the original is dead, nobody else will notice
the difference, and others who see your example will happily conclude they
have discovered the path to immortality.

Arguments based on the assumption that consciousness exists always lead to
absurdities.  But belief in consciousness is instinctive and universal.  It
cannot be helped.  The best I can do is accept both points of view, realize
they are inconsistent, and leave it at that.

The question is not what should people do, but what are people likely to do?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39899053-9f6f66


[singularity] Chip implants linked to animal tumors

2007-09-08 Thread Matt Mahoney
There has been a minor setback in the plan to implant RFID tags in all humans.

http://news.yahoo.com/s/ap/20070908/ap_on_re_us/chipping_america_ii;_ylt=AiZyFu9ywOpQA0T6nXkEAcFH2ocA

Perhaps it would be safer to have our social security numbers tattooed on our
foreheads?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39894283-e65a9d


Re: [singularity] Towards the Singularity

2007-09-08 Thread Matt Mahoney

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 08/09/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > I agree this is a great risk.  The motivation to upload is driven by fear
> of
> > death and our incorrect but biologically programmed belief in
> consciousness.
> > The result will be the extinction of human life and its replacement with
> > godlike intelligence, possibly this century.  The best we can do is view
> this
> > as a good thing, because the alternative -- a rational approach to our own
> > intelligence -- would result in extinction with no replacement.
> 
> If my upload is deluded about its consciousness in exactly the same
> way you claim I am deluded about my consciousness, that's good enough
> for me.

And it will be, if the copy is exact.

Your dilemma: after you upload, does the original human then become a
p-zombie, or are there two copies of your consciousness?  Is it necessary to
kill the human body for your consciousness to transfer?

What if the copy is not exact, but close enough to fool others who know you? 
Maybe you won't have a choice.  Suppose you die before we have developed the
technology to scan neurons, so family members customize an AGI in your
likeness based on all of your writing, photos, and interviews with people that
knew you.  All it takes is 10^9 bits of information about you to pass a Turing
test.  As we move into the age of surveillance, this will get easier to do.  I
bet Yahoo knows an awful lot about me from the thousands of emails I have sent
through their servers.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39888218-f25442


Re: [singularity] Towards the Singularity

2007-09-07 Thread Matt Mahoney
--- Quasar Strider <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> I see several possible avenues for implementing a self-aware machine which
> can pass the Turing test: i.e. human level AI. Mechanical and Electronic.
> However, I see little purpose in doing this. Fact is, we already have self
> aware machines which can pass the Turing test: Humans beings.

This was not Turing's goal, nor is it the direction that AI is headed. 
Turing's goal was to define artificial intelligence.  The question of whether
consciousness can exist in a machine has been debated since the earliest
computers.  Either machines can be conscious or consciousness does not exist. 
The human brain is programmed through DNA to believe in the existence of its own
consciousness and free will, and to fear death.  It is simply a property of
good learning algorithms to behave as if they had free will, a balance between
exploitation for immediate reward and exploration for the possibility of
gaining knowledge for greater future reward.  Animals without these
characteristics did not pass on their DNA.  Therefore you have them.
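
To make the exploration/exploitation balance concrete, here is a minimal
sketch of an epsilon-greedy learner.  The payoff probabilities and the 10%
exploration rate are arbitrary illustrations of the tradeoff, not a model of
anything biological:

    import random

    def epsilon_greedy(payoffs, steps=1000, epsilon=0.1):
        # payoffs[i] = probability that action i yields a reward
        counts = [0] * len(payoffs)
        totals = [0.0] * len(payoffs)
        reward = 0.0
        for _ in range(steps):
            if random.random() < epsilon:   # explore: try a random action
                a = random.randrange(len(payoffs))
            else:                           # exploit: reuse what worked best so far
                a = max(range(len(payoffs)),
                        key=lambda i: totals[i] / counts[i] if counts[i] else 0.0)
            r = 1.0 if random.random() < payoffs[a] else 0.0
            counts[a] += 1
            totals[a] += r
            reward += r
        return reward

    # An agent that never explores can get stuck on a mediocre action; one that
    # always explores never profits from what it has learned.
    print(epsilon_greedy([0.2, 0.5, 0.8]))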

Turing avoided the controversial question of consciousness by equating
intelligence to the appearance of intelligence.  It is not the best test of
intelligence, but it seems to be the only one that people can agree on.

The goal of commercial AI is not to create humans, but to solve the remaining
problems that humans can still do better than computers, such as language and
vision.  You see Google making progress in these areas, but I don't think you
would ever confuse Google with a human.

> We do not need direct neural links to our brain to download and upload
> childhood memories.

I agree this is a great risk.  The motivation to upload is driven by fear of
death and our incorrect but biologically programmed belief in consciousness. 
The result will be the extinction of human life and its replacement with
godlike intelligence, possibly this century.  The best we can do is view this
as a good thing, because the alternative -- a rational approach to our own
intelligence -- would result in extinction with no replacement.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=39571188-7e5cf6


Re: [singularity] Interesting Read

2007-08-29 Thread Matt Mahoney
--- [EMAIL PROTECTED] wrote:

http://money.cnn.com/2007/08/27/technology/intentional_software.biz2/index.htm?section=magazines_business2
> 
> Thoughts?

(yawn)

IBM once claimed they had a product that would "eliminate programming".  It
was called FORTRAN.

Software development is an AI problem.  You have to combine a natural language
specification which is usually vague and incomplete, with expectations about
how users think, how similar software works, knowledge of protocols and
standards, hardware limitations, and the cognitive limitations of the
programmers who will have to test, debug, modify, and maintain your code.  All
of this knowledge is uncertain and comes from vast experience.

The product described does none of this.  It is a programming language where
you write code using a mix of text, diagrams, and tables instead of pure text.
 It's been done before.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=37180891-34234b


Re: [singularity] Good Singularity intro in mass media

2007-08-24 Thread Matt Mahoney
--- Joshua Fox <[EMAIL PROTECTED]> wrote:

> Can anyone recall an intelligent, supportive introduction to the Singularity
> in a _non-technological_ , wide-distribution medium in the US? I am not
> looking for book or conference reviews, sociological analyses of
> Singularitarianism, and uninformed editorializing, but rather for a clear
> short popular mass-media explanation of the Singularity.

I think the classic paper by Vernor Vinge expresses it pretty well.
http://mindstalk.net/vinge/vinge-sing.html


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=35625802-9b0353


Re: [singularity] Species Divergence

2007-08-22 Thread Matt Mahoney

--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> During the singularity process there will be a human species split into at
> least 3 new species - totally software humans where even birth occurs in
> software, the plain old biological human, and the hybrid
> man-machine-computer. The software humans will rapidly diverge into other
> species, the biologics will die off rapidly or stick around for a while for
> various reasons and the hybrid could grow into a terrifying creature. The
> software humans will basically exist in other dimensions and evolve and
> disperse rapidly. They also may just meld into whichever AGI successfully
> takes over the world as human software will just be a tiny subset(or should
> I say Subgroup) of AGI.
> 
> John

I basically agree.  AGI will be a continuation of evolution, but using
recursive self improvement rather than biological reproduction with mutation. 
Evolution will continue to favor those species that are most successful at
survival and reproduction in an environment with limited resources.  This is
not necessarily to the benefit of humans.  We already have primitive examples
of this - computer viruses and worms.  Imagine a highly intelligent system
that can analyze software, discover vulnerabilities, and exploit them to gain
additional computing resources.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=34663420-2ee4ad


Re: [singularity] Reduced activism

2007-08-19 Thread Matt Mahoney
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
> On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote:
> > 3. Studying the singularity raises issues (e.g. does consciousness  
> > exist?)
> > that conflict with hardcoded beliefs that are essential for survival.
> 
> Huh?  Are you conscious?

I believe that I am, in the sense that I am not a p-zombie.
http://en.wikipedia.org/wiki/Philosophical_zombie

I also believe that the human brain can be simulated by a computer, which has
no need for a consciousness in this sense.

I realize these beliefs are contradictory, but I just leave it at that.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=33530444-20a2f0


Re: [singularity] Reduced activism

2007-08-19 Thread Matt Mahoney
I was never really a Singularity activist, but

1. I realized the singularity is coming and nothing can stop it.
2. The more I study the friendly AI problem, the more I realize it is
intractable.
3. Studying the singularity raises issues (e.g. does consciousness exist?)
that conflict with hardcoded beliefs that are essential for survival.
4. The vast majority of people do not understand the issues anyway.


--- Joshua Fox <[EMAIL PROTECTED]> wrote:

> This is the wrong place to ask this question, but I can't think of anywhere
> better:
> 
> There are people who used to be active in blogging, writing to the email
> lists, donating money, public speaking, or holding organizational positions
> in Singularitarian and related fields -- and are no longer anywhere near as
> active. I'd very much like to know why.
> 
> Possible answers might include:
> 
> 1. I still believe in the truthfulness and moral value of the
> Singularitarian position, but...
> a. ... eventually we all grow up and need to focus on career rather than
> activism.
> b. ... I just plain ran out of energy and interest.
> c. ... public outreach is of no value or even dangerous; what counts is the
> research work of a few small teams.
> d. ... why write on this when I'll just be repeating what's been said so
> often.
> e. ... my donations are meaningless compared to what a dot-com millionaire
> can give.
> 2. I came to realize the deep logical (or: moral) flaws in the
> Singularitarian position. [Please tell us they are.]
> 3. I came to understand that Singularitarianism has some logical and moral
> validity, but no more than many other important causes to which I give my
> time and money.
> 
> And of course I am also interested to learn other answers.
> 
> Again, I would like to hear from those who used to be more involved, not
> just those who have  disagreed with Singularitarianism all along.
> 
> Unfortunately, most such people are not reading this, but perhaps some have
> maintained at least this connection; or list members may be able to report
> indirectly (but please, only well-confirmed reports rather than
> supposition).
> 
> Joshua


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=33492243-2b6758


Re: [singularity] Al's razor(1/2)

2007-07-30 Thread Matt Mahoney

--- Alan Grimes <[EMAIL PROTECTED]> wrote:

> om
> 
> Today, I'm going to attempt to present an argument in favor of a theory
> that has resulted from my studies relating to AI. While this is one of
> the only things I have to show for my time spent on AI, I am reasonably
> confident in its validity and hope to show why that is the case here.
> 
> Unfortunately, the implications of this theory are quite dramatic making
> the saying "extraordinary claims require extraordinary proof" central to
> the meditations leading to this posting. I will take this theory and
> then apply it to recent news articles and make the even bolder claim
> that AI has been SOLVED, and that the only thing that remains to be done
> is to create a complete AI agent from the available components.

When can we expect a demo?

> But humans incapable of symbolic thought, most
> notably autistic patients, are not really intelligent.

Many people with autism have difficulty recognizing faces, which can contribute
to delayed language and social development during childhood.  However, they do
not lack symbolic thought.  From http://en.wikipedia.org/wiki/Autism:

"In a pair of studies, high-functioning autistic children aged 8–15 performed
equally well, and adults better, than individually matched controls at basic
language tasks like vocabulary and spelling. Both autistic groups performed
worse than controls at complex language tasks like figurative language,
comprehension, and making inferences. As people are often sized up initially
from their basic language skills, these studies suggest that people speaking
to autistic individuals are more likely to overestimate what their audience
comprehends.[28]"

> We can sum the total of this new information over all perceptions from
> the first onwards, and find that it is on the order of 1+ log(X), or
> simply O(X) = Log(X). If we were to present the AI with random
> information and forced it to remember all of it, the *WORST*, case for
> AI is O(N). For constant input, the AI will remain static, at O(N) = 1.
> (these are space complexities).

The relationship is a little more complex.  I believe it has to do with human
brain size, which stops increasing around adolescence.  Vocabulary development
during childhood is fairly constant at about 5000 words per year.  I had
looked at the relationship between training set size and information content
as part of my original dissertation proposal, which suggests a space
complexity more like O(N/log N).  http://cs.fit.edu/~mmahoney/dissertation/
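
A quick way to see the kind of sublinear growth I mean is to count distinct
words (a crude proxy for learned vocabulary) in ever larger prefixes of a text.
This toy count does not distinguish N/log N from other sublinear curves; it
only shows that the growth is much slower than N.  The file name below is a
placeholder for any large plain-text corpus:

    def vocab_growth(path, points=10):
        # print (words seen, distinct words seen) at several prefix sizes
        words = open(path).read().lower().split()
        n = len(words)
        for i in range(1, points + 1):
            k = n * i // points
            print(k, len(set(words[:k])))

    vocab_growth('corpus.txt')   # 'corpus.txt' is a placeholder path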

> This discovery can be used as a razor for evaluating AI projects. For
> example, anyone demanding a supercomputer to run their AI, obviously is
> barking up the wrong tree. Similarly, anyone trying to simulate a
> billion-node neural network is effectively praying for pixie dust to
> emerge from the machine and rescue them from their own lack of
> understanding. We have others who have their heads rammed up their own
> friendly asses but they aren't worth mentioning. Truly, when one
> finishes this massacre, the field of AI is left decimated and nearly
> extinct. -- nearly...

I realize there is a large gap between the algorithmic complexity of language
(10^9 bits) and the number of synapses in the human brain (about 10^15).  I
don't know why.  Some guesses:
- The brain does a lot more than process symbolic language.
- The brain has a lot of redundancy for fault tolerance.
- The brain uses inefficient brute-force algorithms for many problems where
more efficient solutions exist, such as pattern recognition, mentally rotating
3D objects, or playing chess.  Perhaps AI has failed because there are still a
lot of things that the brain does for which there is no shortcut.

If Turing and Landauer are right, then a PC has enough computational power to
pass the Turing test.  What we lack is training data, which can only come from
the experience of growing up in a human body.
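
The back-of-envelope comparison behind that claim, using Turing's 1950 guess of
about 10^9 bits of storage and Landauer's estimate of about 10^9 bits of human
long-term memory; the PC figures (2 GB RAM, 250 GB disk) are just my assumed
round numbers for a current machine:

    human_memory_bits = 10**9
    pc_ram_bits = 2 * 2**30 * 8       # assumed ~2 GB of RAM
    pc_disk_bits = 250 * 10**9 * 8    # assumed ~250 GB of disk
    print(pc_ram_bits / human_memory_bits)    # roughly 17x the estimate
    print(pc_disk_bits / human_memory_bits)   # roughly 2000x the estimate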

> On the other hand, when you use this razor to evaluate projects which
> ostensibly have nothing to do with AI, things become extremely interesting.
> 
> http://techon.nikkeibp.co.jp/english/NEWS_EN/20070725/136751/

The article does not say whether, if I click on a picture of a dog running
across a lawn, the system will retrieve pictures of dogs or pictures of brown
objects on a green background.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=26524725-b66d60


Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-29 Thread Matt Mahoney

--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> On 7/12/07, Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
> >
> > It is my understanding that the basic problem in Friendly AI is that it is
> > possible for the AI to interpret the command "help humanity" etc wrong,
> and
> > then destroy humanity (what we don't want it to do). The whole problem is
> to
> > find some way to make it more probable to not destroy us all. It is
> correct
> > that a simple sentence can be interpreted to mean something that we don't
> > really mean, even though the interpretation is logical for the AI.
> >
> 
> Right ... this is a special case of the problem that, if an AGI is allowed
> to modify itself substantially (likely necessary if we want its intelligence
> to progressively increase), then each successive self-modification may
> interpret its original supergoals a little differently from the previous
> one, so that one has a kind of "supergoal drift"...
> 
> There are many ways to formalize the above notion.  Here is one way that I
> have played with...
> 
> Where X is an AI system, let F[X] denote a probability distribution over the
> space of AI systems, defined as
> 
> F[X](Y, E) = the probability that X will self-modify itself into Y, given
> environment E
> 
> Then, we can iterate F repeatedly from any initial AI system X_0, obtaining a
> probability distribution F^n for the distribution over AI systems achieved
> after n successive self-modifications.

It seems to me that K(F), the algorithmic complexity of F, is at least K(X, Y,
E).  So there is still the problem that you cannot predict the behavior of an
AGI without more computational power than it takes to build it.
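
As a toy illustration of supergoal drift (not Ben's formalism, and with an
arbitrary per-step noise level), here is a sketch where each self-modification
perturbs a goal vector only slightly, yet the goal still wanders far from the
original after many generations:

    import math, random

    def drift(generations=1000, dim=8, step=0.05):
        # return cosine similarity between the original and final goal vectors
        g0 = [random.gauss(0, 1) for _ in range(dim)]
        g = list(g0)
        for _ in range(generations):
            g = [x + random.gauss(0, step) for x in g]  # one small self-modification
        dot = sum(a * b for a, b in zip(g0, g))
        norm = math.sqrt(sum(a * a for a in g0)) * math.sqrt(sum(b * b for b in g))
        return dot / norm

    # Typically well below 1.0, even though no single step changed the goal much.
    print(drift())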

> What we want is to create X_0 so as to maximize the odds that {an AI system
> chosen randomly from F^n} will be judged by {an AI system chosen randomly
> from F^(n-1)} as having the right supergoals.  Where the previous sentence
> is to be interpreted with E equal to our actual universe (a vexing
> dependency, since we don't know our universe all that well).

An even harder problem, because now you must add the algorithmic complexity of
a program that decides if X_n is friendly.  This cannot be done by X_(n-1),
because K(friendly(X_n)) >= K(X_n, E) and K(X_n) > K(X_(n-1)).

Likewise, E has to be a probabilistic approximation, bounded by K(X_(n-1)).

> I have suggested a provably correct way to do this in some old articles
> (which are offline, but I will put them back online), but, it was horribly
> computationally intractable ... so in reality I have no idea how to achieve
> this sort of thing with provable reliability.  Though intuitively I think
> Novamente will fit the bill ;-)
> 
> -- Ben Goertzel
> 
> 
> 
> 
> So, one wants to find a (supergoal, AI system) combination X_0 so that there
> is a pathway
> 
> -- starting from X_0 as an initial condition
> -- where X_i is capable of figuring out how to create X_(i+1)
> -- where the X_i continually and rapidly increase in intelligence, as i
> increases
> -- where for each X_i,

What I think will happen is that AGI will be developed as an extension of
evolution.  Currently humans have two mechanisms for recursive self
improvement: DNA and child rearing (culture).  We can add to this list, e.g.
genetic engineering and software development.  But one fundamental property is
constant: the improvements are incremental.  We don't genetically engineer
novel species from scratch.  Rather we take existing genomes and modify them. 
Likewise we don't build software from scratch.  Rather, we strive to duplicate
the characteristics of successful systems.  For AGI, we build human-like
capabilities such as language, vision, and mobility, because we know that such
capabilities are useful in people.  Expressed mathematically, K(X_i | X_(i-1))
is small, regardless of the reproduction mechanism.

There are today competing efforts to build AGI.  We should expect AGI to
evolve with the same evolutionary pressures favoring self survival and rapid
reproduction in a competitive environment, just as it does today.  Evolution
does not favor friendliness.  Humans will be seen as competitors for limited
resources.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=26387251-3a3648


Re: [singularity] critiques of Eliezer's views on AI

2007-06-29 Thread Matt Mahoney

--- Randall Randall <[EMAIL PROTECTED]> wrote:

> 
> On Jun 28, 2007, at 7:51 PM, Matt Mahoney wrote:
> > --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> >> How does this answer questions like, if I am destructively teleported
> >> to two different locations, what can I expect to experience? That's
> >> what I want to know before I press the button.
> >
> > You have to ask the question in a form that does not depend on the  
> > existence
> > of consciousness.  The question is what will each of the two copies  
> > claim to
> > experience?
> 
> Of course, we only care what they claim to experience insofar
> as it corresponds with what they did experience, since that's
> what we're really interested in.

How could you tell the difference?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Matt Mahoney
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 28/06/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > So how do we approach the question of uploading without leading to a
> > contradiction?  I suggest we approach it in the context of outside
> observers
> > simulating competing agents.  How will these agents evolve?  We would
> expect
> > that agents will produce other agents similar to themselves but not
> identical,
> > either through biological reproduction, genetic engineering, or computer
> > technology.  The exact mechanism doesn't matter.  In any case, those
> agents
> > will evolve an instinct for self preservation, because that makes them
> fitter.
> >  They will fear death.  They will act on this fear by using technology to
> > extend their lifespans.  When we approach the question in this manner, we
> can
> > ask if they upload, and if so, how?  We do not need to address the
> question of
> > whether consciousness exists or not.  The question is not what should we
> do,
> > but what are we likely to do?
> 
> How does this answer questions like, if I am destructively teleported
> to two different locations, what can I expect to experience? That's
> what I want to know before I press the button.

You have to ask the question in a form that does not depend on the existence
of consciousness.  The question is what will each of the two copies claim to
experience?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Matt Mahoney
--- Niels-Jeroen Vandamme <[EMAIL PROTECTED]> wrote:

> >A thermostat perceives the temperature and acts on it.  Is it conscious?
> 
> Registering does not equal perceiving. I mean subjective experience.

That's a subjective view of perception.  If an entity says "I feel cold", is
it conscious?  Or do we equate consciousness with language?  That definition
works as long as we don't have AI.

> One thing I'm almost certain: while we can't know what consciousness is, we 
> can know that it is. And though each of us has no proof of others' 
> consciousness, we each have proof of our own consciousness.

We don't have proof, we have belief.  The belief has biological origins.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-27 Thread Matt Mahoney
--- Niels-Jeroen Vandamme <[EMAIL PROTECTED]> wrote:

> Without consciousness, there could be no perception. I am surely conscious 
> right now, and how I am will remain a mystery for many years.

A thermostat perceives the temperature and acts on it.  Is it conscious?

We think we know what consciousness is.  It is something that every human has,
and possibly some animals, but that no machine has.  It is only in the context
of AI that we realize we don't have a good definition of it.  There is no test
to detect consciousness.  We can only test for properties that we normally
associate with humans, but that is not the same thing.

When logic conflicts with instinct, instinct wins and the logic gets
contorted.  The heated discussion on the copy paradox is a perfect example. 
Your consciousness is transferred to the copy only if the original is
destroyed, or destroyed in certain ways, or under certain conditions.  We
discuss this ad infinitum, but it always leads to a contradiction because we
refuse to accept that consciousness does not exist, because if you accept it
you die.  So the best you can do is accept both contradictory beliefs and
leave it at that.

So how do we approach the question of uploading without leading to a
contradiction?  I suggest we approach it in the context of outside observers
simulating competing agents.  How will these agents evolve?  We would expect
that agents will produce other agents similar to themselves but not identical,
either through biological reproduction, genetic engineering, or computer
technology.  The exact mechanism doesn't matter.  In any case, those agents
will evolve an instinct for self preservation, because that makes them fitter.
 They will fear death.  They will act on this fear by using technology to
extend their lifespans.  When we approach the question in this manner, we can
ask if they upload, and if so, how?  We do not need to address the question of
whether consciousness exists or not.  The question is not what should we do,
but what are we likely to do?




> >From: Matt Mahoney <[EMAIL PROTECTED]>
> >Reply-To: singularity@v2.listbox.com
> >To: singularity@v2.listbox.com
> >Subject: Re: [singularity] critiques of Eliezer's views on AI
> >Date: Mon, 25 Jun 2007 17:19:20 -0700 (PDT)
> >
> >
> >--- Jey Kottalam <[EMAIL PROTECTED]> wrote:
> >
> > > On 6/25/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > >
> > > > You can only transfer
> > > > consciousness if you kill the original.
> > >
> > > What is the justification for this claim?
> >
> >There is none, which is what I was trying to argue.  Consciousness does not
> >actually exist.  What exists is a universal belief in consciousness.  The
> >belief exists because those who did not have it did not pass on their DNA.
> >
> >
> >-- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Matt Mahoney

--- Jey Kottalam <[EMAIL PROTECTED]> wrote:

> On 6/25/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > You can only transfer
> > consciousness if you kill the original.
> 
> What is the justification for this claim?

There is none, which is what I was trying to argue.  Consciousness does not
actually exist.  What exists is a universal belief in consciousness.  The
belief exists because those who did not have it did not pass on their DNA.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Matt Mahoney
What is wrong with this logic?

Captain Kirk willingly steps into the transporter to have his atoms turned
into energy because he knows an identical copy will be reassembled on the
surface of the planet below.  Would he be so willing if the original was left
behind?

This is a case of logic conflicting with instinct.  You can only transfer
consciousness if you kill the original.  You can do it neuron by neuron, or
all at once.  Either way, the original won't notice, will it?

Isn't this funny?  Our instinct for self preservation causes us to build a
friendly AGI that annihilates the human race, because that's what we want.


--- Alan Grimes <[EMAIL PROTECTED]> wrote:

> Papiewski, John wrote:
> > You’re not misunderstanding and it is horrible.
> > 
> > The only way to do it is to gradually replace your brain cells with an
> > artificial substitute. 
> > 
> > You’d be barely aware that something is going on, and there wouldn’t
> be
> > two copies of you to be confused over.
> 
> Good start. =)
> 
> But be careful when claiming that anything is the *only* way to do
> anything...
> 
> Okay, go one step further. What do you want from uploading? Lets say
> vastly improved mental capacity. Okay, why not use a neural interface
> and start using a computer-based AI engine as part of your mind?
> 
> You get the advantage of a fresh architecture and no identity issues. =)
> 
> It's also practical with technology that is sure to be available within
> 5 years...  -- except the AI part. =(  People keep finding new ways to
> not invent AI. =(((
> 
> -- 
> Opera: Sing it loud! :o(  )>-<
> 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Matt Mahoney
--- Nathan Cook <[EMAIL PROTECTED]> wrote:
> I don't wish to retread old arguments, but there are a few theoretical outs.
> One could be uploaded bit by bit, one neuron at a time if necessary. One
> could be rendered unconscious, frozen, and scanned. I would find this
> frightening, but preferable to regaining consciousness while a separate
> instance of me was running. You beg the question when you ask if I would
> 'kill myself' if a perfect copy existed. If the copy were perfect, it would
> kill itself as well. If the copy were not perfect, I think I'd be entitled
> to declare myself a different entity.

I think people will put these issues aside and choose to upload, even if the
copy isn't perfect.  Imagine when your friend says to you, "How do you like my
new robotic body?  I am 20 years old again.  I can jump 10 feet in the air.  I
can run 40 MPH.  I can see in the infrared and ultraviolet.  With my new brain
I can multiply 1000 digit numbers in my head instantly.  I can read a book in
one minute and recall every word.  I have a built in wireless internet
connection.  While I am talking to you I can also mentally talk to 1000 other
people by phone or email and give my full attention to everyone
simultaneously.   With other uploaded people I can communicate a million times
faster than speaking, see through their eyes, feel what they feel, and share
my senses with them too, even across continents.  Every day I discover new
powers.  It's just amazing."

Are you ready to upload now?

And then the "original" friend walks in...



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-24 Thread Matt Mahoney
--- Tom McCabe <[EMAIL PROTECTED]> wrote:

> These questions, although important, have little to do
> with the feasibility of FAI. 

These questions are important because AGI is coming, friendly or not.  Will
our AGIs cooperate or compete?  Do we upload ourselves?

Consider the scenario of competing, recursively self improving AGIs.  The
initial version might be friendly (programmed to serve humans), but natural
selection will favor AGIs that have an instinct for self preservation and
reproduction, as it does in all living species.  That is not good, because
humans will be seen as competition.
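
A toy illustration of that selection pressure, with arbitrary numbers: agents
differ only in a "self-preservation" trait, survival probability equals the
trait, and survivors copy themselves with a little mutation.  The trait is
driven toward its maximum without anyone designing it in:

    import random

    def select(generations=50, pop=100):
        traits = [random.random() for _ in range(pop)]
        for _ in range(generations):
            survivors = [t for t in traits if random.random() < t]
            if not survivors:
                survivors = [random.random()]   # restart if everyone died
            traits = [min(1.0, max(0.0, random.choice(survivors)
                                   + random.gauss(0, 0.02)))
                      for _ in range(pop)]
        return sum(traits) / len(traits)

    print(select())   # typically close to 1.0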

Consider a cooperative AGI network, a system that thinks as one.  How will it
grow?  If there is no instinct for self preservation, then it builds a larger
version, transfers its knowledge, and kills itself.  The new version will
likely also lack an instinct for self preservation.  So what happens if the
new version decides to kill itself without building a replacement (because
there is also no instinct for reproduction), or if the replacement is faulty?

I think a competing system has a better chance of producing working AGI.  That
is what we have now.  There are many diverse approaches (Novamente, NARS, Cyc,
Google, Blue Brain, etc), although none is close to AGI yet.  A cooperative
system has a serial sequence of improvements each with a single point of
failure.  There is not a technical solution because we know that a system
cannot model exactly a system of greater algorithmic complexity.  It requires
at every step a probabilistic model, a guess that the next version will work
as planned.
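
(One informal way to make the complexity claim precise, which is how I am
using it here rather than a quoted theorem: if A simulates B exactly, then B
can be reconstructed from a description of A plus O(1) bits of glue code, so
K(B) <= K(A) + O(1).  Contrapositive: if K(B) > K(A) + O(1), then A cannot
simulate B exactly and must settle for a probabilistic approximation of B.)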

Do we upload?  Consider the copy paradox.  If there were an exact copy of you,
atom for atom, and you had to choose between killing the copy or yourself, I
think you would choose to kill the copy (and the copy would choose to kill
you).  Does it matter who dies?  Logically, no, but your instinct for self
preservation says yes.  You cannot resolve this paradox.  Your instinct for
self preservation, what you call consciousness or self-awareness, is
immutable.  It was programmed by your DNA.  It exists because if a person does
not have it, they don't live to pass on their genes.

Presumably some people will choose to upload, reasoning that they will die
anyway so there is nothing to lose.  This is not really a satisfactory
solution, because you still die.  But suppose we had both read and write
access to the brain, so that after copying your memory, your brain was
reprogrammed to remove your fear of death.  But even this is not satisfactory,
not because reprogramming is evil, but because of what you will be uploaded
to.  Either it will be to an AGI in a competitive system, in which case you
will be back where you started (and die again), or to a cooperative system
that does not fear death, and will likely fail.

I proposed a simulation of agents building an AGI to see what they build.  Of
course this has to be a thought experiment, because the simulation will
require more computing power than an AGI itself, so we can't experiment before
we build one.  But I would like to make some points about the validity of this
approach.

- The agents will not know their environment is simulated.
- The agents will evolve an instinct for self preservation (because the others
will die without reproducing).
- The agents will have probabilistic models of their universe because they
lack the computing power to model it exactly.
- The computing power of the AGI will be limited by the computing power of the
simulator.

In real life:

- Humans cannot tell if the universe is simulated.
- Humans have an instinct for self preservation.
- Our model of the universe is probabilistic (quantum mechanics, and also at
higher conceptual levels).
- The universe has finite size, mass, number of particles, and entropy (10^122
bits), and therefore has limited computing capability.
- Humans already practice recursive self improvement.  Your children will have
different goals than you, and some will be more intelligent.  But having
children does not remove your fear of death.


> I think we can all agree
> that the space of possible universe configurations
> without sentient life of *any kind* is vastly larger
> than the space of possible configurations with
> sentient life, and designing an AGI to get us into
> this space is enough to make the problem *very hard*
> even given this absurdly minimal goal. To shamelessly
> steal Eliezer's analogy, think of building an FAI of
> any kind as building a 747, and then figuring out what
> to program with regards to volition, death, human
> suffering, etc. as learning how to fly the 747 and
> finding a good destination.
> 
>  - Tom
> 
> --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > I think I am missing something on this discussion of
> > friendliness.  We seem to
> > tacitly assume we know what it means to be friendly.
