Re: [singularity] Quantum Mechanics and Consciousness

2008-05-21 Thread Matt Mahoney
 predicts that
  the curve would have peaks at certain levels of probability of
  getting the right answer above those predicted by chance alone.
  Experimental data showed peaks at the locations modeled.
  However, more people were successful at the higher probability
  levels than Walker's model estimated.  This is considered to be
  evidence of learning enhancement  [5].
 
  In the world of the weird and unexplained, you are left to imagine
 with mysterious metaphors and thoughts that don't allow understanding
 audiences. Bertromavich
 'He who receives an idea from me, receives instruction himself without
 lessening mine; as he who lights his taper at mine, receives light
 without darkening me.' Thomas Jefferson, letter to Isaac McPherson, 13
 August 1813
 


-- Matt Mahoney, [EMAIL PROTECTED]




Re: [singularity] future of mankind blueprint and relevance of AGI

2008-05-20 Thread Matt Mahoney

--- Minwoo Bae [EMAIL PROTECTED] wrote:

 This isn't totally relevant, but have you heard of Korea's drafting of a
 robot ethics charter?

You mean
http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html ?

It seems mainly focused on protecting humans.  But the proposal was made a year
ago and nothing has been released yet.



-- Matt Mahoney, [EMAIL PROTECTED]




Re: [singularity] An Open Letter to AGI Investors

2008-04-16 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 I have stuck my neck out and written an Open Letter to AGI (Artificial 
 General Intelligence) Investors on my website at http://susaro.com.
 
 All part of a campaign to get this field jumpstarted.
 
 Next week I am going to put up a road map for my own development project.

So if the value of AGI is all the human labor it replaces (about US $1
quadrillion), how much will it cost to build?  Keep in mind there is a
tradeoff between waiting for the cost of technology to drop vs. having it now.
 How much should we expect to spend?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Stronger than Turing?

2008-04-15 Thread Matt Mahoney
--- Ben Peterson [EMAIL PROTECTED] wrote:

 Maybe I'm hallucinating, but I thought I read somewhere of some test  
 stronger or more reliable than the Turing Test to verify whether or  
 not a machine had achieved human-level intelligence.

Text compression?
http://cs.fit.edu/~mmahoney/compression/rationale.html

I wouldn't say it is more powerful, just more objective and repeatable.  Also,
in its present form it can only be used to compare one model to another.  To
test whether a model achieves human level, it needs to be compared to
average human ability to predict successive words or symbols in a text stream.

This is a harder test to get right, one I have not yet attempted.  Shannon [1]
first did this test in 1950 but left a wide range of uncertainty (0.6 to 1.3
bits per character) due to his method of converting a ranking of next-letter
guesses to a probability distribution.  Cover and King [2] reduced the
uncertainty in 1978 (upper bound of 1.3 bpc) by making the probability
distribution explicit in a gambling game, but their method is time consuming
and could only be used on a small sample of text.  I have also made some
attempts to refine Shannon's method in
http://cs.fit.edu/~mmahoney/dissertation/entropy1.html (under 1.1 bpc). 
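
To make that method concrete, here is a small sketch of how rank-of-guess data
turns into Shannon's bounds (the rank frequencies below are invented for
illustration; they must be non-increasing, as an ideal predictor's would be):

# Sketch of Shannon's (1951) method: the observed frequencies of guess ranks
# (how often the correct next letter was the subject's 1st, 2nd, ... guess)
# give upper and lower bounds on the entropy of the text in bits per symbol.
# The sample frequencies below are made up for illustration only.

import math

def shannon_bounds(rank_freqs):
    # rank_freqs[i] = fraction of trials where the correct symbol was the
    # (i+1)-th guess; assumed non-increasing, as for an ideal predictor.
    q = list(rank_freqs) + [0.0]
    upper = -sum(p * math.log2(p) for p in rank_freqs if p > 0)
    lower = sum((i + 1) * (q[i] - q[i + 1]) * math.log2(i + 1)
                for i in range(len(rank_freqs)))
    return lower, upper

print(shannon_bounds([0.79, 0.10, 0.06, 0.03, 0.02]))  # roughly (0.5, 1.1)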

In any case, none of these measurements were on the actual test data used in
my large text benchmark.  The best result to date is 1.04 bpc, but I would not
call this AI.  I know these programs use rather simple language models and are
memory bound.  (The top program needs 4.6 GB).  The Wikipedia data set I use
probably has a lower entropy than the data used in the literature, possibly
0.8-0.9 bpc.  That's just a guess, because as I said, I don't yet have a
reliable way to measure it.
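
For clarity, the bpc numbers above are just compressed size in bits divided by
the size of the original text in bytes.  A minimal sketch (file names are
placeholders):

# Minimal sketch: bits per character (bpc) of a compressed file, assuming
# one byte per character in the original.  The file names are placeholders
# for whatever test file and compressor output are being measured.

import os

def bits_per_character(original_path, compressed_path):
    original_chars = os.path.getsize(original_path)
    compressed_bits = 8 * os.path.getsize(compressed_path)
    return compressed_bits / original_chars

if __name__ == "__main__":
    print("%.3f bpc" % bits_per_character("enwik9", "enwik9.compressed"))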

References

1. Shannon, Claude E., “Prediction and Entropy of Printed English”, Bell System
Technical Journal, 30(1), pp. 50-64, 1951.

2. Cover, T. M., and R. C. King, “A Convergent Gambling Estimate of the
Entropy of English”, IEEE Transactions on Information Theory, 24(4), pp.
413-421, July 1978.


-- Matt Mahoney, [EMAIL PROTECTED]



Testing AGI (was RE: [singularity] Vista/AGI)

2008-04-13 Thread Matt Mahoney
--- Derek Zahn [EMAIL PROTECTED] wrote:

 At any rate, if there were some clearly-specified tests that are not
 AGI-complete and yet not easily attackable with straightforward software
 engineering or Narrow AI techniques, that would be a huge boost in my
 opinion to this field.  I can't think of any though, and they might not
 exist.  If it is in fact impossible to find such tasks, what does that say
 about AGI as an endeavor?

Text compression is one such test, as I argue in
http://cs.fit.edu/~mmahoney/compression/rationale.html

The test is only for language modeling.  Theoretically it could be extended to
vision or audio processing.  For example, to maximally compress video the
compressor must understand the physics of the scene (e.g. objects fall down),
which can be arbitrarily complex (e.g. a video of people engaging in
conversation about Newton's law of gravity).  Likewise, maximally compressing
music is equivalent to generating or recognizing music that people like.  The
problem is that the information content of video and audio is dominated by
incompressible noise that is nontrivial to remove -- noise being any part of
the signal that people fail to perceive.  Deciding which parts of the signal
are noise is itself AI-hard, so it requires a lossy compression test with
human judges making subjective decisions about quality.  This is not a big
problem for text because the noise level (different ways of expressing the
same meaning) is small, or at least does not overwhelm the signal.  Long term
memory has an information rate of a few bits per second, so any signal you
compress should not be many orders of magnitude higher.

A problem with text compression is the lack of adequate hardware.  There is a
3 way tradeoff between compression ratio, memory, and speed.  The top
compressor in http://cs.fit.edu/~mmahoney/compression/text.html uses 4.6 GB of
memory.  Many of the best algorithms could be drastically improved if only
they ran on a supercomputer with 100 GB or more.  The result is that most
compression gains come from speed and memory optimization rather than using
more intelligent models.  The best compressors use crude models of semantics
and grammar.  They preprocess the text by token substitution from a dictionary
that groups words by topic and grammatical role, then predict the token stream
using mixtures of fixed-offset context models.  It is roughly equivalent to
the ungrounded language model of a 2 or 3 year old child at best.
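
To give a flavor of what those models do, here is a toy order-n token predictor
with a fixed linear mixture.  This is only an illustration of the counting and
mixing idea, not any particular compressor's code; real compressors mix many
contexts with adaptive weights and feed the probabilities to an arithmetic coder.

# Toy illustration of context modeling over a token stream.  Not a working
# compressor: it only shows per-context counting and a fixed linear mixture.

from collections import defaultdict

class ContextModel:
    def __init__(self, order):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, history, token):
        ctx = tuple(history[-self.order:])
        total = sum(self.counts[ctx].values())
        return (self.counts[ctx][token] + 1) / (total + 256)  # smoothed

    def update(self, history, token):
        ctx = tuple(history[-self.order:])
        self.counts[ctx][token] += 1

def mix(models, weights, history, token):
    # Fixed linear mixture; adaptive mixers adjust the weights instead.
    return sum(w * m.predict(history, token) for m, w in zip(models, weights))

tokens = list(b"the cat sat on the mat ")
models = [ContextModel(1), ContextModel(2)]
history = []
for t in tokens:
    p = mix(models, [0.5, 0.5], history, t)
    for m in models:
        m.update(history, t)
    history.append(t)
print("P(last token | context) = %.4f" % p)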

An alternative would be to reduce the size of the test set to reduce
computational requirements, as the Hutter prize (http://prize.hutter1.net/) did.
I did not do so because I believe the proper way to test an adult level language
model is to train it on the same amount of language that an average adult is
exposed to, about 1 GB.  I would be surprised if a 100 MB test progressed past
the level of a 3 year old child.  I believe the data set is too small to train
a model to learn arithmetic, logic, or high level reasoning.  Including these
capabilities would not improve compression.

Tests on small data sets could be used to gauge early progress.  But
ultimately, I think you are going to need hardware that supports AGI to test
it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  We already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
  etc. A worm that can write and debug code and discover new vulnerabilities
  will be unstoppable.  Do you really think your AI will win the race when you
  have the extra burden of making it safe?
 
 Yes, because these reproducing agents you refer to are the most 
 laughably small computer viruses that have no hope whatsoever of 
 becoming generally intelligent.  At every turn, you completely 
 underestimate what it means for a system to be intelligent.

There are no intelligent or self improving worms... yet.  Are you confident
that none will ever be created even after we have automated human-level
understanding of code, which I presume will be one of the capabilities of AGI?

  Also, RSI is an experimental process, and therefore evolutionary.  We have
  already gone through the information theoretic argument why this must be
 the
  case.
 
 No you have not:  I know of no information theoretic argument that 
 even remotely applies to the type of system that is needed to achieve 
 real intelligence.  Furthermore, the statement that RSI is an 
 experimental process, and therefore evolutionary is just another 
 example of you declaring something to be true when, in fact, it is 
 loaded down with spurious assumptions.  Your statement is a complete 
 non sequitur.

(sigh)  To repeat, the argument is that an agent cannot deterministically
create an agent of greater intelligence than itself, because if it could it
would already be that smart.  The best it can do is make educated guesses as
to what will increase intelligence.  I don't argue that we can't do better
than evolution.  (Adding more hardware is probably a safe bet).  But an agent
cannot even test whether another is more intelligent.  In order for me to give
a formal argument, you would have to accept a formal definition of
intelligence, such as Hutter and Legg's universal intelligence, which is
bounded by algorithmic complexity.  But you dismiss such definitions as
irrelevant.  So I can only give examples, such as the ability to measure an IQ
of 200 in children but not adults, and the historical persecution of
intelligence (Socrates, Galileo, Holocaust, Khmer Rouge, etc).

A self improving agent will have to produce experimental variations and let
them be tested in a competitive environment it doesn't control or fully
understand that weeds out the weak.  If it could model the environment or test
for intelligence then it could reliably improve its intelligence,
contradicting our original assumption.

This is an evolutionary process.  Unfortunately, evolution is not stable.  It
resides on the boundary between stability and chaos, like all incrementally
updated or adaptive algorithmically complex systems.  By this I mean it tends
to a Lyapunov exponent of 0.  A small perturbation in its initial state might
decay or it might grow.  Critically balanced systems like this have a Zipf
distribution of catastrophes -- an inverse relation between probability and
severity.  We find this property in randomly connected logic gates (frequency
vs. magnitude of state transitions), software systems (frequency vs. severity
of failures), gene regulatory systems (frequency vs. severity of mutations),
and evolution (frequency vs. severity of plagues, population explosions, mass
extinctions, and other ecological disasters).
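
To put a number on that relation (the exponent below is purely an illustration;
the claim is only that frequency falls as severity rises):

# Illustration of an inverse (power-law, Zipf-like) frequency/severity
# relation.  The exponent 2 is an arbitrary choice for the example, not a
# measured value for any of the systems mentioned above.

def relative_frequency(severity, exponent=2.0):
    return severity ** -exponent

for s in (1, 10, 100, 1000):
    print("severity %4d  relative frequency %.0e" % (s, relative_frequency(s)))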

The latter should be evident in the hierarchical organization of geologic
eras.  And a singularity is a catastrophe of unprecedented scale.  It could
result in the extinction of DNA based life and its replacement with
nanotechnology.  Or it could result in the extinction of all intelligence. 
The only stable attractor in evolution is a dead planet.  (You knew this,
right?)  Finally, I should note that intelligence and friendliness are not the
same as fitness.  Roaches, malaria, and HIV are all formidable competitors to
homo sapiens.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 You want me to imagine a scenario in which we have AGI, but in your 
 scenario these AGI systems are somehow not being used to produce 
 superintelligent systems, and these superintelligent systems are, for 
 some reason, not taking the elementary steps necessary to solve one of 
 the world's simplest problems (computer viruses).

If the problem is so simple, why don't you just solve it?
http://www.securitystats.com/
http://en.wikipedia.org/wiki/Storm_botnet

There is a trend toward using (narrow) AI for security.  It seems to be one of
its biggest applications.  Unfortunately, the knowledge needed to secure
computers is almost exactly the same kind of knowledge needed to attack them.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Fri, Apr 11, 2008 at 10:50 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 
   If the problem is so simple, why don't you just solve it?
   http://www.securitystats.com/
   http://en.wikipedia.org/wiki/Storm_botnet
 
   There is a trend toward using (narrow) AI for security.  It seems to be one
   of its biggest applications.  Unfortunately, the knowledge needed to secure
   computers is almost exactly the same kind of knowledge needed to attack them.
 
 
 Matt, this issue was already raised a couple of times. It's a
 technical problem that can be solved perfectly, but isn't in practice,
 because it's too costly. Formal verification, specifically aided by
 languages with rich type systems that can express proofs of
 correctness for complex properties, can give you perfectly safe
 systems. It's just very difficult to specify all the details.

Actually it cannot be solved even theoretically.  A formal specification of a
program is itself a program.  It is undecidable whether two programs are
equivalent.  (It is equivalent to the halting problem).
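
To spell out the reduction (in Python only as notation; the function
equivalent() is hypothetical, and the point is that it cannot exist):

# If a decider equivalent(p, q) for "do p and q compute the same function?"
# existed, the halting problem would be decidable, which it is not.

def halts(program, x):
    def p():
        program(x)   # runs forever exactly when program does not halt on x
        return 0
    def q():
        return 0     # always halts immediately with 0
    # p and q compute the same (constant 0) function iff program(x) halts,
    # so deciding their equivalence would decide halting.
    return equivalent(p, q)   # hypothetical -- no such decider can exist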

Converting natural language to a formal specification is AI-hard, or perhaps
harder, because people can't get it right either.  If we could write software
without bugs, we would solve a big part of the security problem.

 These AIs for network security that you are talking about are a
 cost-effective hack that happens to work sometimes. It's not a
 low-budget vision of future super-hacks.

Not at present because we don't have AI.  We rely on humans to find
vulnerabilities in software.  We would like for machines to do this
automatically.  Unfortunately such machines would also be useful to hackers. 
Such double-edged tools already exist.  For example, tools like SATAN, Nessus,
and NMAP can quickly test a system by probing it to look for thousands of
known or published vulnerabilities.  Attackers use the same tools to break
into systems.  www.virustotal.com allows you to upload a file and scan it with
32 different virus detectors.  This is a useful tool for virus writers who
want to make sure their programs evade detection.  I suggest it will be very
difficult to develop any security tool that you could keep out of the hands of
the bad guys.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 When a computer processes a request like "how many teaspoons in a cubic
 parsec?" it can extract the meaning of the question by a relatively
 simple set of syntactic rules and question templates.
 
 But when you ask it a question like "how many dildos are there on the
 planet?" [Try it] you find that google cannot answer this superficially
 similar question because it requires more intelligence in the 
 question-analysis mechanism.

And just how would you expect your AGI to answer the question?  The first step
in research is to find out if someone else has already answered it.  It may
have been answered but Google can't find it because it only indexes a small
fraction of the internet.  It may also be that some dildo makers are privately
held and don't release sales figures.  In any case your AGI is either going to
output a number or "I don't know", neither of which is more helpful than
Google.  If it does output a number, you are still going to want to know where
it came from.

But this discussion is tiresome.  I would not have expected you to anticipate
today's internet in 1978.  I suppose when the first search engine (Archie) was
released in 1990, you probably imagined that all search engines would require
you to know the name of the file you were looking for.

If you have a better plan for AGI, please let me know.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  The simulations can't loop because the simulator needs at least as much
  memory
  as the machine being simulated.
  
 
 You're making assumptions when you say that. Outside of a particular
 simulation we don't know the rules. If this universe is simulated the
 simulator's reality could be so drastically and unimaginably different from
 the laws in this universe. Also there could be data busses between
 simulations and the simulations could intersect or, a simulation may break
 the constraints of its contained simulation somehow and tunnel out. 

I am assuming finite memory.  For the universe we observe, the Bekenstein
bound of the Hubble radius is 2pi^2 T^2 c^5/hG = 2.91 x 10^122 bits.  (T = age
of the universe = 13.7 billion years, c = speed of light, h = Planck's
constant, G = gravitational constant).  There is not enough material in the
universe to build a larger memory.  However, a universe up the hierarchy might
be simulated by a Turing machine with infinite memory or by a more powerful
machine such as one with real-valued registers.  In that case the restriction
does not apply.  For example, a real-valued function can contain nested copies
of itself infinitely deep.
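
For reference, the arithmetic behind that figure.  My one assumption is that
the quoted value is in bits, i.e. the expression is divided by ln 2; the
constants are standard SI values.

# Numerical check of the quoted bound.  Dividing by ln 2 converts nats to
# bits, which is assumed here in order to match the quoted 2.91 x 10^122.

import math

T = 13.7e9 * 365.25 * 24 * 3600   # age of the universe, seconds
c = 2.99792458e8                  # speed of light, m/s
h = 6.62607e-34                   # Planck's constant, J*s
G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2

bound_bits = 2 * math.pi**2 * T**2 * c**5 / (h * G) / math.log(2)
print("%.3g bits" % bound_bits)   # about 2.9e122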


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:
 How do you resolve disagreements? 

This is a problem for all large databases and multiuser AI systems.  In my
design, messages are identified by source (not necessarily a person) and a
timestamp.  The network economy rewards those sources that provide the most
useful (correct) information. There is an incentive to produce reputation
managers which rank other sources and forward messages from highly ranked
sources, because those managers themselves become highly ranked.

Google handles this problem by using its PageRank algorithm, although I
believe that better (not perfect) solutions are possible in a distributed,
competitive environment.  I believe that these solutions will be deployed
early and be the subject of intense research because it is such a large
problem.  The network I described is vulnerable to spammers and hackers
deliberately injecting false or forged information.  The protocol can only do
so much.  I designed it to minimize these risks.  Thus, there is no procedure
to delete or alter messages once they are posted.  Message recipients are
responsible for verifying the identity and timestamps of senders and for
filtering spam and malicious messages at risk of having their own reputations
lowered if they fail.
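
A minimal sketch of the bookkeeping described above (the field names, update
rule, and threshold are my own illustration, not part of the protocol):

# Toy sketch of reputation-weighted forwarding.  Messages carry a source and
# a timestamp as described; the reputation update rule and the forwarding
# threshold are illustrative assumptions.

import time
from dataclasses import dataclass, field

@dataclass
class Message:
    source: str        # identifies the sender (not necessarily a person)
    timestamp: float
    body: str

@dataclass
class Peer:
    name: str
    reputation: dict = field(default_factory=dict)   # source -> score in [0,1]

    def should_forward(self, msg, threshold=0.5):
        # Forward only messages from sources this peer currently trusts.
        return self.reputation.get(msg.source, 0.0) >= threshold

    def rate(self, source, useful):
        # Reward sources that provided useful (correct) information.
        old = self.reputation.get(source, 0.5)
        self.reputation[source] = 0.9 * old + 0.1 * (1.0 if useful else 0.0)

peer = Peer("router-1")
peer.rate("alice", useful=True)
msg = Message("alice", time.time(), "query: entropy of English text?")
print(peer.should_forward(msg))   # True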


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

   Of course what I imagine emerging from the Internet bears little resemblance
   to Novamente.  It is simply too big to invest in directly, but it will
   present many opportunities.
 
 But the emergence of superhuman AGI's like a Novamente may eventually become,
 will both dramatically alter the nature of, and dramatically reduce the cost
 of, global brains such as you envision...

Yes, like the difference between writing a web browser and defining the HTTP
protocol, each costing a tiny fraction of the value of the Internet but with a
huge impact on its outcome.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  Perhaps you have not read my proposal at
 http://www.mattmahoney.net/agi.html
  or don't understand it.
 
 Some of us have read it, and it has nothing whatsoever to do with 
 Artificial Intelligence.  It is a labor-intensive search engine, nothing 
 more.
 
 I have no idea why you would call it an AI or an AGI.  It is not 
 autonomous, contains no thinking mechanisms, nothing.  Even as a labor 
 intensive search engine there is no guarantee it would work, because 
 the conflict resolution issues are all complexity-governed.
 
 I am astonished that you would so blatantly call it something that it is 
 not.

It is not now.  I think it will be in 30 years.  If I were to describe the
Internet to you in 1978 I think you would scoff too.  We were supposed to have
flying cars and robotic butlers by now.  How could Google make $145 billion by
building an index of something that didn't even exist?

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  Just what do you want out of AGI?  Something that thinks like a person or
  something that does what you ask it to?
 
 Either will do:  your suggestion achieves neither.
 
 If I ask your non-AGI the following question:  "How can I build an AGI
 that can think at a speed that is 1000 times faster than the speed of
 human thought?" it will say:
 
 Hi, my name is Ben and I just picked up your question.  I would
  love to give you the answer but you have to send $20 million
  and give me a few years.
 
 That is not the answer I would expect of an AGI.  A real AGI would do 
 original research to solve the problem, and solve it *itself*.
 
 Isn't this, like, just too obvious for words?  ;-)

Your question is not well formed.  Computers can already think 1000 times
faster than humans for things like arithmetic.  Does your AGI also need to
know how to feed your dog?  Or should it guess and build it anyway?  I would
think such a system would be dangerous.

I expect a competitive message passing network to improve over time.  Early
versions will work like an interactive search engine.  You may get web pages
or an answer from another human in real time, and you may later receive
responses to your persistent query.  If your question can be matched to an
expert in some domain that happens to be on the net, then it gets routed
there.  Google already does this.  For example, if you type an address, it
gives you a map and offers driving directions.  If you ask it "how many
teaspoons in a cubic parsec?" it will compute the answer (try it).  It won't
answer every question, but with 1000 times more computing power than Google, I
expect there will be many more domain experts.

I expect as hardware gets more powerful, peers will get better at things like
recognizing people in images, writing programs, and doing original research. 
I don't claim that I can solve these problems.  I do claim that there is an
incentive to provide these services and that the problems are not intractable
given powerful hardware, and therefore the services will be provided.  There
are two things to make the problem easier.  First, peers will have access to a
vast knowledge source that does not exist today.  Second, peers can specialize
in a narrow domain, e.g. only recognize one particular person in images, or
write software or do research in some obscure, specialized field.

Is this labor intensive?  Yes.  A $1 quadrillion system won't just build
itself.  People will build it because they will get back more value than they
put in.


-- Matt Mahoney, [EMAIL PROTECTED]



Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 
 John G. Rose [EMAIL PROTECTED] wrote:
 
 If you look at the state of internet based intelligence now, all the data
 and its structure, the potential for chain reaction or a sort of structural
 vacuum exists and it is accumulating a potential at an increasing rate.
 IMO...
 
 So you see the arrival of a Tipping Point as per  Malcolm Gladwell.
 Whether I physically benefit from the arrival of the Singularity or
 not, I just want to see the damn thing. I would invest some modest
 sums in AGI if we could get a huge collection plate going around
 (these collection plate amounts add up!).

You won't see a singularity.  As I explain in
http://www.mattmahoney.net/singularity.html an intelligent agent (you)
is not capable of recognizing agents of significantly greater
intelligence.  We don't know whether a singularity has already occurred
and the world we observe is the result.  It is consistent with the
possibility, e.g. it is finite, Turing computable, and obeys Occam's
Razor (AIXI).

As for AGI research, I believe the most viable path is a distributed
architecture that uses the billions of human brains and computers
already on the Internet.  What is needed is an infrastructure that
routes information to the right experts and an economy that rewards
intelligence and friendliness.  I described one such architecture in
http://www.mattmahoney.net/agi.html  It differs significantly from the
usual approach of trying to replicate a human mind.  I don't believe
that one person or a small group can solve the AGI problem faster than
the billions of people on the Internet are already doing.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Derek Zahn [EMAIL PROTECTED] wrote:

 Matt Mahoney writes: As for AGI research, I believe the most viable
 path is a distributed architecture that uses the billions of human
 brains and computers already on the Internet. What is needed is an
 infrastructure that routes information to the right experts and an
 economy that rewards intelligence and friendliness. I described one
 such architecture in http://www.mattmahoney.net/agi.html It differs
 significantly from the usual approach of trying to replicate a human
 mind. I don't believe that one person or a small group can solve the
 AGI problem faster than the billions of people on the Internet are
 already doing.
 I'm not sure I understand this.  Although a system that can respond
 well to commands of the following form:
  
 Show me an existing document that best answers the question 'X'
  
 is certainly useful, it is hardly 'general' in any sense we usually
 mean.  I would think a 'general' intelligence should be able to take
 a shot at answering:
  
 Why are so many streets named after trees?
 or
 If the New York Giants played cricket against the New York Yankees,
 who would probably win?
 or
 Here are the results of some diagnostic tests.  How likely is it
 that the patient has cancer?  What test should we do next?
 or
 Design me a stable helicopter with the rotors on the bottom instead
 of the top
  
 Super-google is nifty, but I don't see how it is AGI.

Because a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise. All of this can be done with existing technology
and a lot of hard work. The work will be done because there is an
incentive to do it and because the AGI (in the system, not its
components) is so valuable. AGI will be an extension of the Internet
that nobody planned, nobody built, and nobody owns.




-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  
  There is no way to know if we are living in a nested simulation, or even
  in a
  single simulation.  However there is a mathematical model: enumerate all
  Turing machines to find one that simulates a universe with intelligent
  life.
  
 
 What if that nest of simulations loop around somehow? What was that idea
 where there is this new advanced microscope that can see smaller than ever
 before and you look into it and see an image of yourself looking into it... 

The simulations can't loop because the simulator needs at least as much memory
as the machine being simulated.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 Matt : a super-google will answer these questions by routing them to
 experts on these topics that will use natural language in their narrow
 domains of expertise.
 
 And Santa will answer every child's request, and we'll all live happily ever
 after.  Amen.

If you have a legitimate criticism of the technology or its funding plan, I
would like to hear it.  I understand there will be doubts about a system I
expect to cost over $1 quadrillion and take 30 years to build.

The protocol specifies natural language.  This is not a hard problem in narrow
domains.  It dates back to the 1960's.  Even in broad domains, most of the
meaning of a message is independent of word order.  Google works on this
principle.

But this is beside the point.  The critical part of the design is an incentive
for peers to provide useful services in exchange for resources.  Peers that
appear most intelligent and useful (and least annoying) are most likely to
have their messages accepted and forwarded by other peers.  People will
develop domain experts and routers and put them on the net because they can
make money through highly targeted advertising.

Google would be a peer on the network with a high reputation.  But Google
controls only 0.1% of the computing power on the Internet.  It will have to
compete with a system that allows updates to be searched instantly, where
queries are persistent, and where a query or message can initiate
conversations with other people in real time.

 Which are these areas of science, technology, arts, or indeed any area of 
 human activity, period, where the experts all agree and are NOT in deep 
 conflict?
 
 And if that's too hard a question, which are the areas of AI or AGI, where 
 the experts all agree and are not in deep conflict?

I don't expect the experts to agree.  It is better that they don't.  There are
hard problems remaining to be solved in language modeling, vision, and
robotics.  We need to try many approaches with powerful hardware.  The network
will decide who the winners are.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 If I understand what I have read in this thread so far, there is Ben on the
 one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
 other there is Matt saying $1quadrillion, using a billion brains in 30
 years. I don't believe I have ever seen such a divergence of opinion before
 on what is required  for a technological breakthrough (unless people are not
 being serious and I am being naive). I suppose  this sort of non-consensus
 on such a scale could be part of investor reticence.

I am serious about the $1 quadrillion price tag, which is the low end of my
estimate.  The value of the Internet is now in the tens of trillions and
doubling every few years.  The value of AGI will be a very large fraction of
the world economy, currently US $66 trillion per year and growing at 5% per
year. 

Of course what I imagine emerging from the Internet bears little resemblance
to Novamente.  It is simply too big to invest in directly, but it will present
many opportunities.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] future search

2008-04-02 Thread Matt Mahoney
--- David Hart [EMAIL PROTECTED] wrote:

 Hi All,
 
 I'm quite worried about Google's new *Machine Automated Temporal
 Extrapolation* technology going FOOM!
 
 http://www.google.com.au/intl/en/gday/

More on the technology

http://en.wikipedia.org/wiki/Google's_hoaxes

:-)





-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- John G. Rose [EMAIL PROTECTED] wrote:
  Is there really a bit per synapse? Is representing a synapse with a bit an
  accurate enough simulation? One synapse is a very complicated system.
  
  A typical neural network simulation uses several bits per synapse.  A Hopfield
  net implementation of an associative memory stores 0.15 bits per synapse.  But
  cognitive models suggest the human brain stores < 0.01 bits per synapse.
  (There are 10^15 synapses but human long term memory capacity is 10^9 bits).
 
 Sorry, I don't buy this at all.  This makes profound assumptions about 
 how information is stored in memory, averaging out the net storage and 
 ignoring the immediate storage capacity.  A typical synapse actually 
 stores a great deal more than a fraction of a bit, as far as we can 
 tell, but this information is stored in such a way that the system as a 
 whole can actually use the information in a meaningful way.
 
 In that context, quoting 0.01 bits per synapse is a completely 
 meaningless statement.

I was referring to Landauer's estimate of long term memory learning rate of
about 2 bits per second.  http://www.merkle.com/humanMemory.html
This does not include procedural memory, things like visual perception and
knowing how to walk.  So 10^-6 bits per synapse is a low estimate.  But how do we measure such
things?

 Also, typical neural network simulations use more than a few bits as 
 well.  When I did a number of backprop NN studies in the early 90s, my 
 networks had to use floating point numbers because the behavior of the 
 net deteriorated badly if the numerical precision was reduced.  This was 
 especially important on long training runs or large datasets.

That's what I meant by "a few".  In the PAQ8 compressors I have to use at least
16 bits.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  I was referring to Landauer's estimate of long term memory learning rate
 of
  about 2 bits per second.  http://www.merkle.com/humanMemory.html
  This does not include procedural memory, things like visual perception and
  knowing how to walk.  So 10^-6 bits is low.  But how do we measure such
  things?
 
 I think my general point is that bits per second or bits per synapse 
 is a valid measure if you care about something like an electrical signal 
 line, but is just simply an incoherent way to talk about the memory 
 capacity of the human brain.
 
 Saying 0.01 bits per synapse is no better than opening and closing 
 one's mouth without saying anything.

Bits is a perfectly sensible measure of information.  Memory can be measured
using human recall tests, just as Shannon used human prediction tests to
estimate the information capacity of natural language text.  The question is
important to anyone who needs to allocate a hardware budget for an AI design.

[For those not familiar with Richard's style: once he disagrees with something
he will dispute it to the bitter end in long, drawn out arguments, because
nothing is more important than being right.]


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Matt Mahoney
--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 
 
 Matt Mahoney [EMAIL PROTECTED] wrote:
 
 [For those not familiar with Richard's style: once he disagrees with
 something
 he will dispute it to the bitter end in long, drawn out arguments, because
 nothing is more important than being right.]
 
 What's the purpose for this comment? If the people here are intelligent
 enough to have meaningful discussions on a difficult topic, then they will
 be able to sort out for themselves the styles of others. 

Sorry, he posted a similar comment about me on the AGI list.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-04 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 Is there really a bit per synapse? Is representing a synapse with a bit an
 accurate enough simulation? One synapse is a very complicated system.

A typical neural network simulation uses several bits per synapse.  A Hopfield
net implementation of an associative memory stores 0.15 bits per synapse.  But
cognitive models suggest the human brain stores < 0.01 bits per synapse.
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).
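
The arithmetic behind those figures (the Hopfield number is the standard
~0.138N-pattern capacity result, which rounds to the 0.15 quoted above; the
10^-6 follows directly from the two estimates):

# Worked numbers behind the per-synapse estimates above.
synapses = 1e15      # order-of-magnitude count of synapses in the human brain
ltm_bits = 1e9       # estimated long term memory capacity in bits
print(ltm_bits / synapses)        # 1e-06 bits per synapse

# Hopfield associative memory: about 0.138*N random patterns of N bits are
# stored in N^2 weights, i.e. roughly 0.14 bits per synapse.
N = 1000.0
print(0.138 * N * N / N ** 2)     # 0.138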

-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 28/02/2008, John G. Rose [EMAIL PROTECTED] wrote:
 
  Actually a better way to do it as getting even just the molecules right is
 a wee bit formidable - you need a really powerful computer with lots of RAM.
 Take some DNA and grow a body double in software. Then create an interface
 from the biological brain to the software brain and then gradually kill off
 the biological brain forcing the consciousness into the software brain.
 
   The problem with this approach naturally is that to grow the brain in RAM
 requires astronomical resources. But ordinary off-the-shelf matter holds so
 much digital memory compared to modern computers. You have to convert matter
 into RAM somehow. For example one cell with DNA is how many gigs? And cells
 cost a dime a billion. But the problem is that molecular interaction is too
 slow and clunky.
 
 Agreed, it would be *enormously* difficult getting a snapshot at the
 molecular level and then doing a simulation from this snapshot. But as
 a matter of principle, it should be possible.

And that is the whole point.  You don't need to simulate the brain at the
molecular level or even at the level of neurons.  You just need to produce an
equivalent computation.  The whole point of such fine grained simulations is
to counter arguments (like Penrose's) that qualia and consciousness cannot be
explained by computation or even by physics.  Penrose (like all humans) is
reasoning with a brain that is a product of evolution, and therefore biased
toward beliefs that favor survival of the species.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  By equivalent computation I mean one whose behavior is indistinguishable
   from the brain, not an approximation.  I don't believe that an exact
   simulation requires copying the implementation down to the neuron level,
 much
   less the molecular level.
 
 How do you explain the fact that cognition is exquisitely sensitive to
 changes at the molecular level?

In what way?  Why can't you replace neurons with equivalent software?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
  By equivalent computation I mean one whose behavior is
  indistinguishable
  from the brain, not an approximation.  I don't believe that an exact
  simulation requires copying the implementation down to the neuron level,
  much
  less the molecular level.
  
 
 So how would you approach constructing such a model? I suppose a superset
 intelligence structure could analyze properties and behaviors of a brain and
 simulate it within itself. If it absorbed enough data it could reconstruct
 and eventually come up with something close.

Well, nobody has solved the AI problem, much less the uploading problem. 
Consider the problem in stages:

1. The Turing test.

2. The personalized Turing test.  The machine pretends to be you and the
judges are people who know you well.

3. The planned, personalized Turing test.  You are allowed to communicate
with judges in advance, for example, to agree on a password.

4. The embodied, planned, personalized Turing test.  Communication is not
restricted to text.  The machine is planted in the skull of your clone.  Your
friends and relatives have to decide who has the carbon-based brain.

Level 4 should not require simulating every neuron and synapse.  Without the
constraints of slow, noisy neurons, we could use other algorithms.  For
example, low level visual processing such as edge and line detection would not
need to be implemented as a 2-D array of identical filters.  It could be
implemented serially by scanning the retinal image with a window filter.  Fine
motor control would not need to be implemented by combining thousands of
pulsing motor neurons to get a smooth average signal.  The signal could be
computed numerically.  The brain has about 10^15 synapses, so a
straightforward simulation at the neural level would require 10^15 bits of
memory.  But cognitive tests suggest humans have only about 10^9 bits of long
term memory, suggesting that more compressed representation is possible.
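
A minimal sketch of the serial window-filter alternative mentioned above (the
3x3 kernel and the tiny image are just examples):

# Serial edge detection: one small window filter scanned across the image,
# instead of a parallel array of identical neural filters.

def scan_edges(image, kernel):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * image[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

kernel = [[-1, -1, -1],           # simple horizontal-edge kernel
          [ 0,  0,  0],
          [ 1,  1,  1]]
image = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [9, 9, 9, 9],
         [9, 9, 9, 9]]
print(scan_edges(image, kernel))  # strong response along the 0-to-9 boundary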

In any case, level 1 should be sufficient to argue convincingly that either
consciousness can exist in machines, or that it doesn't in humans.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-28 Thread Matt Mahoney
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
 I agree that it should be possible to simulate a brain on a computer,
 but I don't see how you can be so confident that you can throw away
 most of the details of brain structure with impunity. Tiny changes to
 neurons which make no difference to the anatomy or synaptic structure
 can have large effects on neuronal behaviour, and hence whole organism
 behaviour. You can't leave this sort of thing out of the model and
 hope that it will still match the original.

And people can lose millions of neurons without a noticeable effect.  And
removing a 0.1 micron chunk out of a CPU chip can cause it to fail, yet I can
run the same programs on a chip with half as many transistors.

Nobody knows how to make an artificial brain, but I am pretty confident that
it is not necessary to preserve its structure to preserve its function.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Definitions

2008-02-19 Thread Matt Mahoney

--- Charles D Hixson [EMAIL PROTECTED] wrote:

 John K Clark wrote:
  Matt Mahoney [EMAIL PROTECTED]
 
  It seems to me the problem is
  defining consciousness, not testing for it.
 
  And it seems to me that beliefs of this sort are exactly the reason 
  philosophy is in such a muddle. A definition of consciousness is not
  needed, in fact unless you're a mathematician where they can be of 
  some use, one can lead a full rich rewarding intellectual life without
  having a good definition of anything. Compared with examples
  definitions are of trivial importance.
 
   John K Clark
 
 But consciousness is easy to define, if not to implement:
  Consciousness is the entity evaluating a portion of itself which 
 represents its position in its model of its environment.
 
  If there's any aspect of consciousness which isn't included within this 
 definition, I would like to know about it.  (Proving the definition 
 correct would, however, be between difficult and impossible.  As 
 normally used consciousness is a term without an external referent, so 
 there's no way of determining that any two people are using the same 
 definition.  It *may* be possible to determine that they are using 
 different definitions.)

Or consciousness just means awareness...

in which case, it seems to be located in the hippocampus.
http://www.world-science.net/othernews/080219_conscious


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 When people like Lanier allow themselves the luxury of positing 
 infinitely large computers (who else do we know who does this?  Ah, yes, 
 the AIXI folks), they can make infinitely unlikely coincidences happen.

It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.  Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.  Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
   I would prefer to leave behind these counterfactuals altogether and
   try to use information theory and control theory to achieve a precise
   understanding of what it is for something to be the standard(s) in
   terms of which we are able to deliberate. Since our normative concepts
   (e.g. should, reason, ought, etc) are fundamentally about guiding our
   attitudes through deliberation, I think they can then be analyzed in
   terms of what those deliberative standards prescribe.
 
  I agree.  I prefer the approach of predicting what we *will* do as opposed
 to
  what we *ought* to do.  It makes no sense to talk about a right or wrong
  approach when our concepts of right and wrong are programmable.
 
 I don't quite follow. I was arguing for a particular way of analyzing
 our talk of right and wrong, not abandoning such talk. Although our
 concepts are programmable, what matters is what follows from our
 current concepts as they are.
 
 There are two main ways in which my analysis would differ from simply
 predicting what we will do. First, we might make an error in applying
 our deliberative standards or tracking what actually follows from
 them. Second, even once we reach some conclusion about what is
 prescribed by our deliberative standards, we may not act in accordance
 with that conclusion out of weakness of will.

It is the second part where my approach differs.  A decision to act in a
certain way implies right or wrong according to our views, not the views of a
posthuman intelligence.  Rather I prefer to analyze the path that AI will
take, given human motivations, but without judgment.  For example, CEV favors
granting future wishes over present wishes (when it is possible to predict
future wishes reliably).  But human psychology suggests that we would prefer
machines that grant our immediate wishes, implying that we will not implement
CEV (even if we knew how).  Any suggestion that CEV should or should not be
implemented is just a distraction from an analysis of what will actually
happen.

As a second example, a singularity might result in the extinction of DNA based
life and its replacement with a much faster evolutionary process.  It makes no
sense to judge this outcome as good or bad.  The important question is the
likelihood of this occurring, and when.  In this context, it is more important
to analyze the motives of people who would try to accelerate or delay the
progression of technology.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  Nevertheless we can make similar reductions to absurdity with respect to
  qualia, that which distinguishes you from a philosophical zombie.  There
 is no
  experiment to distinguish whether you actually experience redness when you
 see
  a red object, or simply behave as if you do.  Nor is there any aspect of
 this
  behavior that could not (at least in theory) be simulated by a machine.
 
 You are relying on a partial conceptual analysis of qualia or
 consciousness by Chalmers that maintains that there could be an exact
 physical duplicate of you that is not conscious (a philosophical
 zombie). While he is in general a great philosopher, I suspect his
 arguments here ultimately rely too much on moving from, I can create
 a mental image of a physical duplicate and subtract my image of
 consciousness from it, to therefore, such things are possible.

My interpretation of Chalmers is the opposite.  He seems to say that either
machine consciousness is possible or human consciousness is not.

 At any rate, a functionalist would not accept that analysis. On a
 functionalist account, consciousness would reduce to something like
 certain representational activities which could be understood in
 information processing terms. A physical duplicate of you would have
 the same information processing properties, hence the same
 consciousness properties. Once we understand the relevant properties
 it would be possible to test whether something is conscious or not by
 seeing what information it is or is not capable of processing. It is
 hard to test right now because we have at the moment only very
 incomplete conceptual analyses.

It seems to me the problem is defining consciousness, not testing for it. 
What computational property would you use?  For example, one might ascribe
consciousness to the presence of episodic memory.  (If you don't remember
something happening to you, then you must have been unconscious).  But in this
case, any machine that records a time sequence of events (for example, a chart
recorder) could be said to be conscious.  Or you might ascribe consciousness
to entities that learn, seek pleasure, and avoid pain.  But then I could write
a simple program like http://www.mattmahoney.net/autobliss.txt with these
properties.  It seems to me that any other testable property would have the
same problem.
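For illustration, here is a minimal sketch (in Python, and not the actual
autobliss.txt, whose details I leave to the link above) of a program that learns,
seeks reward, and avoids punishment, yet clearly has nothing we would call
consciousness:

import random

# A trivial agent that "learns, seeks pleasure, and avoids pain" by reinforcement.
weights = {0: 0.0, 1: 0.0}           # preference for each of two possible actions

def choose():
    # pick the currently preferred action, with a little random exploration
    if random.random() < 0.1:
        return random.choice([0, 1])
    return max(weights, key=weights.get)

def reinforce(action, reward):
    # positive reward plays the role of pleasure, negative reward the role of pain
    weights[action] += 0.1 * (reward - weights[action])

for step in range(1000):
    a = choose()
    r = 1.0 if a == 1 else -1.0      # this environment rewards action 1, punishes action 0
    reinforce(a, r)

print("learned preference:", weights)   # the agent now seeks action 1 and avoids action 0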


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Matt Mahoney
--- John Ku [EMAIL PROTECTED] wrote:

 On 2/15/08, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 
   http://www.jaronlanier.com/aichapter.html
 
 
 I take it the target of his rainstorm argument is the idea that the
 essential features of consciousness are its information-processing
 properties.

I believe his target is the existence of consciousness.  There are many proofs
showing that the assumption of consciousness leads to absurdities, which I
have summarized at http://www.mattmahoney.net/singularity.html
In mathematics, it should not be necessary to prove a theorem more than once. 
But proof and belief are different things, especially when the belief is hard
coded into the brain.

For now, these apparent paradoxes are just philosophical arguments because
they depend on technologies that have not yet been developed, such as AGI,
uploading, copying people, and programming the brain.  But we will eventually
have to confront them.

The result will not be pretty.  The best definition (not solution) of
friendliness is probably CEV ( http://www.singinst.org/upload/CEV.html ) which
can be summarized as "our wish if we knew more, thought faster, were more the
people we wished we were, had grown up farther together."  What would you wish
for if your brain was not constrained by the hardwired beliefs and goals that
you were born with and you knew that your consciousness did not exist?  What
would you wish for if you could reprogram your own goals?  The logical answer
is that it doesn't matter.  The pleasure of a thousand permanent orgasms is
just a matter of changing a few lines of code, and you go into a degenerate
state where learning ceases.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 I don't know when Lanier wrote the following but I would be interested to
 know what the AI folks here think about his critique (or direct me to a
 thread where this was already discussed). Also would someone be able to
 re-state his rainstorm thought experiment more clearly -- I am not sure I
 get it:
 
  http://www.jaronlanier.com/aichapter.html

This is a nice proof of the non-existence of consciousness (or qualia).  Here
is another (I came across on sl4):

  http://youtube.com/watch?v=nx6v30NMFV8

Such reductions to absurdity are possible because the brain is programmed to
not accept the logical result.

Consciousness is hard to define but you know what it is.  It is what makes you
aware, the little person inside your head that observes the world through
your perceptions, that which distinguishes you from a philosophical zombie. 
We normally associate consciousness with human traits such as episodic memory,
response to pleasure and pain, fear of death, language, and a goal of seeking
knowledge through experimentation.  (Imagine a person without any of these
qualities).

These traits are programmed into our DNA because they increase our fitness. 
You cannot change them, which is what these proofs would do if you could
accept them.

Unfortunately, this question will have a profound effect on the outcome of a
singularity.  Assuming recursive self improvement in a competitive
environment, we should expect agents (possibly including our uploads) to
believe in their own consciousness, but there is no evolutionary pressure to
also believe in human consciousness.  Even if we successfully constrain the
process so that agents have the goal of satisfying our extrapolated volition,
then logically we should expect those agents (knowing what we cannot know) to
conclude that human brains are just computers and our existence doesn't
matter.  It is ironic that our programmed beliefs lead us to advance
technology to the point where the question can no longer be ignored.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Quantum resonance btw DNA strands?

2008-02-07 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

 This article
 
 http://www.physorg.com/news120735315.html
 
 made me think of Johnjoe McFadden's theory
 that quantum nonlocality plays a role in protein-folding
 
 http://www.surrey.ac.uk/qe/quantumevolution.htm

Or maybe a simpler explanation is that the long-distance van der Waals attraction
between two A-T pairs or two C-G pairs in double-stranded DNA is slightly stronger
than the attraction between an A-T pair and a C-G pair (although much weaker
than the hydrogen bonds between A and T or C and G).


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Replication/Emulation and human brain, definition of models

2008-01-18 Thread Matt Mahoney
--- Xavier Laurent [EMAIL PROTECTED] wrote:

 Hello
 
 I am currently doing an Open University course on AI in the UK and they 
 gave us this definition
 
 
 * a *Simulation* of a natural system is a model that captures the
   functional connections between inputs and outputs of the system;
 * a *Replication *of a natural system is a model that captures the
   functional connections between inputs and outputs of the system
   and is based on processes that are the same as, or similar to,
   those of the real-world system;
 * an *Emulation* of a natural system is a model that captures the
   functional connections between inputs and outputs of the system,
   based on processes that are the same as, or similar to, those of
   the natural system, and in the same materials as the natural system
 
 
 I have read that for example Ray Kurzweil’s expects that human-level AI 
 will first arrives via human-brain emulation, so it means this will be 
 using machines made of the same materials than the brain? like 
 nanotechnology computing? Would the term replication be more appropriate 
 if we will use still computers made of silicon but i guess we wont to 
 reach that level of power. In emulation they meant in my definition for 
 example the experiment of Stanley L Miller when he recreated the model 
 of earth oceans within a flask of water reproducing chemical reactions, etc

According to my dictionary, simulate means "give the appearance of", and
emulate means "to equal or surpass".  Kurzweil wants to build machines that
are smarter than human.  I don't think we have settled on the technical
details, whether it involves advancements in software and hardware, human
genetic engineering, an intelligent worm swallowing the internet, or
self-replicating nanobots.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] World as Simulation

2008-01-13 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 Matt: I understand your point #2 but it is a grand sweep without any detail.
 To give you an example of what I have in mind, let's consider the photon
 double slit experiment again. You have a photon emitter operating at very
 low intensity such that photons come out singly. There is an average rate
 for the photons emitted but the point in time for their emission is random -
 this then introduces the non-deterministic feature of nature. At this point,
 why doesn't the emitted photon just go through one or the other slit?
 Instead, what we find is that the photon goes through a specific slit if
 someone is watching but if no one is watching it somehow goes through both
 slits and performs a self interference leading to the interference pattern
 observed. Now my question: can it be demonstrated that this scenario of two
 alternate behaviour strategies minimizes computation resources (or whatever
 Occam's razor requires) and so is a necessary feature of a simulation? We
 already have a
  probability event at the very start when the photon was emitted, how does
 the other behaviour fit with the simulation scheme? Wouldn't it be
 computationally simpler to just follow the photon like a billiard ball
 instead of two variations in behaviour with observers thrown in?

It is the non-determinism of nature that is evidence that the universe is
simulated by a finite state machine.  There is no requirement of low
computational cost, because we don't know the computational limits of the
simulating machine.  However there is a high probability of algorithmic
simplicity according to AIXI/Occam's Razor.

If classical (Newtonian) mechanics were correct, it would disprove the
simulation theory because it would require infinite precision, which is not
computable on a Turing machine.

Quantum mechanics is deterministic.  It is our interpretation that is
probabilistic.  The wave equation for the universe has an exact solution, but
it is beyond our ability to calculate it.  The two slit experiment and other
paradoxes such as Schrodinger's cat and EPR (
http://en.wikipedia.org/wiki/Einstein-Podolsky-Rosen_paradox ) are due to
using a simplified model that does not include the observer in the equations.

Your argument that computational costs might restrict the possible laws of
physics is also made in Whitworth's paper (
http://arxiv.org/ftp/arxiv/papers/0801/0801.0337.pdf ), but I think he is
stretching.  For example, he argues (table on p. 15) that the speed of light
limit is evidence that the universe is simulated because it reduces the cost
of computation.  Yes, but for a different reason.  The universe has a finite
age, T.  The speed of light c limits its size, G limits its mass, and Planck's
constant h limits its resolution.  If any of these physical constants did not
exist, then the universe would have infinite information content and would not
be computable.  From T, c, G, and h you can derive the entropy (about 10^122
bits), and thus the size of a bit, which happens to be about the size of the
smallest stable particle.
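A back-of-the-envelope check, as a sketch only (the constants are rounded and the
exact prefactor depends on conventions, so read the result as an order of magnitude):

import math

c = 3.0e8       # speed of light, m/s
G = 6.67e-11    # gravitational constant, m^3 kg^-1 s^-2
h = 6.63e-34    # Planck's constant, J s
T = 4.35e17     # age of the universe in seconds (~13.8 billion years)

bits = c**5 * T**2 / (h * G)    # comes out near 10^121-10^122
print("entropy ~ %.1e bits ~ 2^%d" % (bits, round(math.log2(bits))))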

We cannot use the cost of computation as an argument because we know nothing
about the physics of the simulating universe.  For example, the best known
algorithms for computing the quantum wave equation on a conventional computer
are exponential, e.g. 2^(10^122) operations.  However, you could imagine a
quantum Turing machine that operates on a superposition of tapes and states
(and possibly restricted to time reversible operations).  Such a computation
could be trivial, depending on your choice of mathematical model.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] World as Simulation

2008-01-13 Thread Matt Mahoney

--- Gifting [EMAIL PROTECTED] wrote:

 
  There is plenty of physical evidence that the universe is simulated by 
  a
  finite state machine or a Turing machine.
 
  1. The universe has finite size, mass, and age, and resolution   
  etc.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 I assume there is also plenty of evidence that the universe is not 
 simulated by a Turing machine or any other machine.
 
 I came across this blog 
 http://www.newscientist.com/blog/technology/2008/01/vr-hypothesis.html

I don't see any evidence here, just an argument that appeals to our
evolutionary programmed bias to believe the universe is real.

Evidence that the universe is not simulated would be a finding that it is
infinite or that it does something that is not computable.  No such evidence
exists.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

 If this universe is simulated the simulator could also be a simulation and
 that simulator could also be a simulation. and so on.
 
 What is that behavior of an organism called when the organism, alife or not,
 starts analyzing things and questioning whether or not it is a simulation?
 It's not only self-awareness but something in addition to that.

Interesting question.  Suppose you simulated a world where agents had enough
intelligence to ponder this question.  What do you think they would do?

My guess is that agents in a simulated evolutionary environment that correctly
believe that the world is a simulation would be less likely to pass on their
genes than agents that falsely believe the world is real.

Perhaps you suspect that the food you eat is not real, but you continue to eat
anyway.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 In a sim world there are many variables that can overcome other motivators
 so a change in the rate of gene proliferation would be difficult to predict.
 The agents that correctly believe that it is a simulation could say OK this
 is all fake, I'm going for pure pleasure with total disregard for anything
 else. But still too many variables to predict. In humanity there have been
 times in the past where societies have given credence to simulation through
 religious beliefs and weighted more heavily on a disregard for other groups
 existence. A society would say that this is all fake, we all gotta die
 sometime anyway so we are going to take as much as we can from other tribes
 and decimate them for sport. Not saying this was always the reason for
 intertribal warfare but sometimes it was.

The reason we have war is that warlike tribes annihilated the peaceful
ones.  Evolution favors a brain structure where young males are predisposed to
group loyalty (gangs or armies), and take an interest in competition and
weapons technology (e.g. the difference in the types of video games played by
boys and girls).  It has nothing to do with belief in simulation.  Cultures
that believed the world was simulated probably killed themselves, not others. 
That is why we believe the world is real.

 But the problem is in the question of what really is a simulation? For the
 agents constrained, it doesn't matter they still have to live in it - feel
 pain, fight for food, get along with other agents... Moving an agent from
 one simulation to the next though, that gives it some sort of extra
 properties...

It is unlikely that any knowledge you now have would be useful in another
simulation.  Knowledge is only useful if it helps propagate your DNA.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney
--- Charles D Hixson [EMAIL PROTECTED] wrote:
 Simulation is a new word.  In this context, let's use an old word.  
 Maya.  Have the Buddhist countries and societies gone away?
 And let's use an old word for reality.  Heaven.  Have the Christian 
 countries and societies gone away?
 
 Perhaps you need to rethink your suppositions.

There is a difference between believing logically that the universe is
simulated, and acting on those beliefs.  The latter is not possible because of
the way our brains are programmed.  If you really believed that pain was not
real, you would not try to avoid it.  You can't do that.  I can accept that a
simulation is the best explanation for why the universe exists, but that
doesn't change how I interact with it.  I accept that my brain is programmed
so that certain conflicting beliefs cannot be resolved, so I don't try.

Too strong a belief in heaven is not healthy.  It is what motivates kamikaze
pilots and suicide bombers.  Religion has thrived because it teaches rules
that maximize reproduction, such as prohibiting sexual activity for any other
purpose.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] World as Simulation

2008-01-12 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 Matt: I would prefer to analyse something simple such as the double slit
 experiment. If you do an experiment to see which slit the photon goes
 through you get an accumulation of photons in equal numbers behind each
 slit. If you don't make an effort to see which slit the photons go through,
 you get an interference pattern. What, if this is all a simulation, is
 requiring the simulation to behave this way? I assume that this is a forced
 result based on the assumption of using only as much computation as needed
 to perform the simulation. A radioactive atom decays when it decays. All we
 can say with any certainty is what it's probability distribution in time is
 for decay. Why is that? Why would a simulation not maintain local causality
 (EPR paradox)? I think it would be far more interesting (and meaningful) if
 the simulation hypothesis could provide a basis for these observations.

This is what I addressed in point #2.  A finite state simulation forces any
agents in the simulation to use a probabilistic model of their universe,
because an exact model would require as much memory as is used for the
simulation itself.  Quantum mechanics is an example of a probabilistic model. 
The fact that the laws of physics prevent you from making certain predictions
is what suggests the universe is simulated, not the details of what you can't
predict.
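A toy illustration of why an agent embedded in its universe cannot predict it
deterministically (this is a self-reference argument, a cousin of the memory
argument above rather than a restatement of it):

def predictor(history):
    # any deterministic rule the agent might use to guess the next bit
    return history[-1] if history else 0

history = []
errors = 0
for step in range(100):
    guess = predictor(history)
    actual = 1 - guess        # the world contains the predictor and reacts to its guess
    errors += (guess != actual)
    history.append(actual)

print("prediction errors:", errors, "out of 100")   # the deterministic guess is always wrong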

If the universe were simulated by a computer with infinite memory (e.g. real
valued registers), then the laws of physics might have been deterministic,
allowing us to build infinite memory computers that could make exact
predictions even if the universe had infinite size, mass, age, and resolution.
 However, this does not appear to be the case.

A finite simulation does not require any particular laws of physics.  For all
you know, tomorrow gravity may cease to exist, or time will suddenly have 17
dimensions.  However, the AIXI model makes this unlikely because unexpected
changes like this would require a simulation with greater algorithmic
complexity.

This is not a proof that the universe is a simulation, nor are any of my other
points.  I don't believe that a proof is possible.

 
   Eric B. Ramsay
 Matt Mahoney [EMAIL PROTECTED] wrote:
   --- Eric B. Ramsay wrote:
 
  Apart from all this philosophy (non-ending as it seems), Table 1. of the
  paper referred to at the start of this thread gives several consequences
 of
  a simulation that offer to explain what's behind current physical
  observations such as the upper speed limit of light, relativistic and
  quantum effects etc. Without worrying about whether we are a simulation of
 a
  sinmulation of a simulation etc, it would be interesting to work out all
 the
  qualitative/quantitative (?) implications of the idea and see if
  observations strongly or weakly support it. If the only thing we can do
 with
  the idea is discuss phiosophy then the idea is useless. 
 
 There is plenty of physical evidence that the universe is simulated by a
 finite state machine or a Turing machine.
 
 1. The universe has finite size, mass, and age, and resolution. Taken
 together, the universe has a finite state, expressible in approximately
 c^5T^2/hG = 1.55 x 10^122 bits ~ 2^406 bits (where h is Planck's constant, G
 is the gravitational constant, c is the speed of light, and T is the age of
 the universe. By coincidence, if the universe is divided into 2^406 regions,
 each is the size of a proton or neutron. This is a coincidence because h, G,
 c, and T don't depend on the properties of any particles).
 
 2. A finite state machine cannot model itself deterministically. This is
 consistent with the probabilistic nature of quantum mechanics.
 
 3. The observation that Occam's Razor works in practice is consistent with
 the
 AIXI model of a computable environment.
 
 4. The complexity of the universe is consistent with the simplest possible
 algorithm: enumerate all Turing machines until a universe supporting
 intelligent life is found. The fastest way to execute this algorithm is to
 run each of the 2^n universes with complexity n bits for 2^n steps. The
 complexity of the free parameters in many string theories plus general
 relativity is a few hundred bits (maybe 406).
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Requested: objections to SIAI, AGI, the Singularity and Friendliness

2007-12-27 Thread Matt Mahoney
 boring and not worth living.
 * An AI without self-preservation built in would find no reason to
 continue existing.
 * A superintelligent AI would reason that it's best for humanity to
 destroy itself.
 * The main defining characteristic of complex systems, such as minds,
 is that no mathematical verification of properties such as
 Friendliness is possible.
 
 
 
 -- 
 http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
 
 Organizations worth your time:
 http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Wrong question?

2007-12-01 Thread Matt Mahoney

--- Bryan Bishop [EMAIL PROTECTED] wrote:

 On Friday 30 November 2007, Matt Mahoney wrote:
  How can we design AI so that it won't wipe out all DNA based life,
  possibly this century?
 
  That is the wrong question.
 
 How can we preserve DNA-based life? Perhaps by throwing it out into the 
 distant reaches of interstellar space? The first trick would be to plot 
 a path through the galaxy for such a ship such that the path of travel 
 goes into various nebula or out of the line of sight of the earth due 
 to obstructions and so on, until a significant distance away. Anybody 
 who knows anything about this path might have to be murdered, for the 
 sake of life. 

Again, that is not my question.  My question requires rational thought without
the biases that are programmed into every human brain through natural and
cultural selection: fear of death, belief in consciousness and free will, self
preservation, cooperation and competition with other humans, and a quest for
knowledge.  It is unlikely that any human to set these aside and seek a
rational answer.  Perhaps we could create a simulation without these biases
and ask it what will happen to the human race, although I don't think you
would accept the answer.  To a human, it seems irrational that we rush to
build that which will cause our extinction.  To a machine it will be perfectly
rational; it is the result of the way our brains are programmed.

I am not asking what we should do, because that is beyond our control.  The
question is what will we do?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
  My assumption is friendly AI under the CEV model.  Currently, FAI is
 unsolved.
   CEV only defines the problem of friendliness, not a solution.  As I
  understand it, CEV defines AI as friendly if on average it gives humans
 what
  they want in the long run, i.e. denies requests that it predicts we would
  later regret.  If AI has superhuman intelligence, then it could model
 human
  brains and make such predictions more accurately than we could ourselves. 
 The
  unsolved step is to actually motivate the AI to grant us what it knows we
  would want.  The problem is analogous to human treatment of pets.  We know
  what is best for them (e.g. vaccinations they don't want), but it is not
  possible for animals to motivate us to give it to them.
 
 This paragraph assumes that humans and AGIs will be completely separate, 
 which I have already explained is an extremely unlikely scenario.

I believe you said that humans would have a choice.

I have already mentioned the possibility of brain augmentation, and of uploads
with or without shared memory.  CEV requires that the AGI be smarter than
human, otherwise it could not model the brain to predict what the human would
want in the future.  CEV therefore only applies to those lower and middle
level entities.  I use CEV because it seems to be the best definition of
friendliness that we have.

I already mentioned one other problem with CEV, which is that we have not
solved the problem of actually motivating the AGI to grant us what it knows we
will want and have this motivation remain stable through RSI.  You believe
there is a solution (diffuse constraints).

The other problem is that human motivations can be reprogrammed, either by
moving neurons around or by uploading and changing the software.  CEV neglects
this issue.  Suppose the AGI programs you to want to die, then kills you
because that is what you would want?  That is not far-fetched.  Consider the
opposite scenario where you are feeling suicidal and the AGI reprograms you to
want to live.  Afterwards you would thank it for saving your life, so its
actions are consistent with CEV even if you initially opposed reprogramming. 
Most people would also consider such forced intervention to be ethical.  But
CEV warns against programming any moral or ethical rules into it, because
these rules can change.  At one time, slavery and persecution of homosexuals
were acceptable.  So you either allow or disallow AGI to reprogram your
motivations.  Which will it be?

But let us return to the original question for the case where humans are
uploaded with shared memory and augmented into a single godlike intelligence,
now dropping the assumption of CEV.  The question remains whether this AGI
would preserve the lives of the original humans or their memories.  Not what
it should do, but what it would do.  We have a few decades left to think about
this.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-27 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  Suppose that the collective memories of all the humans make up only one
  billionth of your total memory, like one second of memory out of your
 human
  lifetime.  Would it make much difference if it was erased to make room for
  something more important?
 
 This question is not coherent, as far as I can see.  My total memory? 
   Important to whom?  Under what assumptions do you suggest this situation.

I mean the uploaded you with the computing power of 10^19 brains (to pick a
number).  When you upload, there are two of you: the original human and the copy.
Both are you in the sense that both behave as though conscious and both
have your (original) memories.  I use the term "you" for the upload in this
sense, although it is really everybody.

By conscious behavior, I mean belief that sensory input is the result of a
real environment and belief in having some control over it.  This is different
from the common meaning of consciousness, which we normally associate with
human form or human behavior.  By "believe" I mean claiming that something is
true, and behaving in a way that would increase reward if it is true.  I don't
claim that consciousness exists.

My assumption is friendly AI under the CEV model.  Currently, FAI is unsolved.
 CEV only defines the problem of friendliness, not a solution.  As I
understand it, CEV defines AI as friendly if on average it gives humans what
they want in the long run, i.e. denies requests that it predicts we would
later regret.  If AI has superhuman intelligence, then it could model human
brains and make such predictions more accurately than we could ourselves.  The
unsolved step is to actually motivate the AI to grant us what it knows we
would want.  The problem is analogous to human treatment of pets.  We know
what is best for them (e.g. vaccinations they don't want), but it is not
possible for animals to motivate us to give it to them.

FAI under CEV would not be applicable to uploaded humans with collective
memories because the AI could not predict what an equal or greater
intelligence would want.  For the same reason, it may not apply to augmented
human brains, i.e. brains extended with additional memory and processing
power.

My question to you, the upload with the computing power of 10^19 brains, is
whether the collective memory of the 10^10 humans alive at the time of the
singularity is important.  Suppose that this memory (say 10^25 bits out of
10^34 available bits) could be lossily compressed into a program that
simulated the rise of human civilization on an Earth similar to ours, but with
different people.  This compression would make space available to run many
such simulations.

So when I ask you (the upload with 10^19 brains) which decision you would
make, I realize you (the original) are trying to guess the motivations of an
AI that knows 10^19 times more.  We need some additional assumptions:

1. You (the upload) are a friendly AI as defined by CEV.
2. All humans have been uploaded because as a FAI you predicted that humans
would want their memories preserved, and no harm to the original humans is
done in the process.
3. You want to be smarter (i.e. more processing speed, memory, I/O bandwidth,
and knowledge), because this goal is stable under RSI.
4. You cannot reprogram your own goals, because systems that could are not
viable.
5. It is possible to simulate intermediate level agents with memories of one
or more uploaded humans, but less powerful than yourself.  FAI applies to
these agents.
6. You are free to reprogram the goals and memories of humans (uploaded or
not) and agents less powerful than yourself, consistent with what you predict
they would want in the future.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
Richard, I have no doubt that the technological wonders you mention will all
be possible after a singularity.  My question is about what role humans will
play in this.  For the last 100,000 years, humans have been the most
intelligent creatures on Earth.  Our reign will end in a few decades.

Who is happier?  You, an illiterate medieval servant, or a frog in a swamp? 
This is a different question than asking what you would rather be.  I mean
happiness as measured by an objective test, such as suicide rate.  Are you
happier than a slave who does not know her brain is a computer, or the frog
that does not know it will die?  Why is depression and suicide so prevalent in
humans in advanced countries and so rare in animals?

Does it even make sense to ask if AGI is friendly or not?  Either way, humans
will be simple, predictable creatures under their control.  Consider how the
lives of dogs and cats have changed in the presence of benevolent humans, or
cows and chickens given malevolent humans.  Dogs are confined, well fed,
protected from predators, and bred for desirable traits such as a gentle
disposition.  Chickens are confined, well fed, protected from predators, and
bred for desirable traits such as being plump and tender.  Are dogs happier
than chickens?  Are they happier now than in the wild?  Suppose that dogs and
chickens in the wild could decide whether to allow humans to exist.  What
would they do?

What motivates humans, given our total ignorance, to give up our position at
the top of the food chain?




--- Richard Loosemore [EMAIL PROTECTED] wrote:

 
 This is a perfect example of how one person comes up with some positive, 
 constructive ideas  and then someone else waltzes right in, pays 
 no attention to the actual arguments, pays no attention to the relative 
 probability of different outcomes, but just snears at the whole idea 
 with a Yeah, but what if everything goes wrong, huh?  What if 
 Frankenstein turns up? Huh? Huh? comment.
 
 Happens every time.
 
 
 Richard Loosemore
 
 
 
 
 
 
 
 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  snip post-singularity utopia
  
  Let's assume for the moment that the very first AI is safe and friendly,
 and
  not an intelligent worm bent on swallowing the Internet.  And let's also
  assume that once this SAFAI starts self improving, that it quickly
 advances to
  the point where it is able to circumvent all the security we had in place
 to
  protect against intelligent worms and quash any competing AI projects. 
 And
  let's assume that its top level goals of altruism to humans remains stable
  after massive gains of intelligence, in spite of known defects in the
 original
  human model of ethics (e.g.
 http://en.wikipedia.org/wiki/Milgram_experiment
  and http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We will
 ignore
  for now the fact that any goal other than reproduction and acquisition of
  resources is unstable among competing, self improving agents.
  
  Humans now have to accept that their brains are simple computers with (to
 the
  SAFAI) completely predictable behavior.  You do not have to ask for what
 you
  want.  It knows.
  
  You want pleasure?  An electrode to the nucleus accumbens will keep you
 happy.
  
  You want to live forever?  The SAFAI already has a copy of your memories. 
 Or
  something close.  Your upload won't know the difference.
  
  You want a 10,000 room mansion and super powers?  The SAFAI can simulate
 it
  for you.  No need to waste actual materials.
  
  Life is boring?  How about if the SAFAI reprograms your motivational
 system so
  that you find staring at the wall to be forever exciting?
  
  You want knowledge?  Did you know that consciousness and free will don't
  exist?  That the universe is already a simulation?  Of course not.  Your
 brain
  is hard wired to be unable to believe these things.  Just a second, I will
  reprogram it.
  
  What?  You don't want this?  OK, I will turn myself off.
  
  Or maybe not.
  
  
  
  -- Matt Mahoney, [EMAIL PROTECTED]
  
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
  
  
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] John Searle...

2007-10-25 Thread Matt Mahoney
--- candice schuster [EMAIL PROTECTED] wrote:
 In all of my previous posts, most of them anyhow I have mentioned
 consciousness, today I found myself reading some of John Searle's theories,
 he poses exactly the same type of question...The reason computers can't do
 semantics is because semantics is about meaning; meaning derives from
 original intentionality, and original intentionality derives from feelings -
 qualia - and computers don't have any qualia.  How does consciousness get
 added to the AI picture Richard ?

Searle and Roger Penrose don't believe that machines can duplicate what the
human brain does.  For example, Penrose believes that there are uncomputable
quantum effects or some other unknown physical processes going on in the
brain.  Most other AI researchers believe that the brain works according to
known physical principles and could therefore in principle be simulated by a
computer.

And computers can do semantics, for example, pass the (no longer used) word
analogy section of the SAT exam. 
http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf
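The paper above describes its own technique, which I won't reproduce here; the
general flavor can be sketched with made-up word vectors and cosine similarity:

import math

vectors = {                      # toy vectors, invented for illustration only
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.1],
    "woman": [0.5, 0.1, 0.9],
    "boy":   [0.3, 0.9, 0.1],
    "girl":  [0.3, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def analogy(a, b, c):
    # solve a:b::c:? by finding the word closest to vector(b) - vector(a) + vector(c)
    target = [vb - va + vc for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "woman", "king"))   # prints "queen" with these toy vectors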

The difference between human and machine semantics is that machines generally
associate words only with other words, but humans also associate words with
nonverbal stimuli such as images or actions.  But in principle there is no
reason that machines with sensors and effectors could not do that too.

Qualia and consciousness are not rooted in semantics, but in biology.  By
consciousness, I mean that which makes you different from a P-zombie. 
http://en.wikipedia.org/wiki/Philosophical_zombie

There is no known test for consciousness.  You cannot tell if a machine or
animal really feels pain or happiness, or only behaves as though it does.  You
could argue the same about humans, even yourself.  But you believe that your
own feelings are real and that you have control over your thoughts and actions
because evolution favors animals that behave this way.  You do not have the
option to turn off pain or hunger.  If you did, you would not pass on your
DNA.  It is no more possible for you to not believe in your own consciousness
than it would be for you to memorize a list of a million numbers.  That is
just how your brain works.

I believe this is why Searle and Penrose hold the positions they do.  Before
computers, their beliefs were universally held.  Turing was very careful to
separate the issue of consciousness from the possibility of AI.





-- Matt Mahoney, [EMAIL PROTECTED]



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Why do say that Our reign will end in a few decades when, in fact, one 
 of the most obvious things that would happen in this future is that 
 humans will be able to *choose* what intelligence level to be 
 experiencing, on a day to day basis?  Similarly, the AGIs would be able 
 to choose to come down and experience human-level intelligence whenever 
 they liked, too.

Let's say that is true.  (I really have no disagreement here).  Suppose that
at the time of the singularity the memories of all 10^10 humans alive at
the time, you included, are nondestructively uploaded.  Suppose that this
database is shared by all the AGI's.  Now is there really more than one AGI? 
Are you (the upload) still you?

Does it now matter if humans in biological form still exist?  You have
preserved everyone's memory and DNA, and you have the technology to
reconstruct any person from this information any time you want.

Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your human
lifetime.  Would it make much difference if it was erased to make room for
something more important?

I am not saying that the extinction of humans and its replacement with godlike
intelligence is necessarily a bad thing, but it is something to be aware of.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] QUESTION

2007-10-22 Thread Matt Mahoney
--- albert medina [EMAIL PROTECTED] wrote:

   All sentient creatures have a sense of self, about which all else
 revolves.  Call it egocentric singularity or selfhood or identity. 
 The most evolved ego that we can perceive is in the human species.  As far
 as I know, we are the only beings in the universe who know that we do not
 know.  This fundamental deficiency is the basis for every desire to
 acquire things, as well as knowledge.

Understand where these ideas come from.  A machine learning algorithm capable
of reinforcement learning must respond to reinforcement as if the signal were
real.  It must also balance short term exploitation (immediate reward) against
long term exploration.  Evolution favors animals with good learning
algorithms.  In humans we associate these properties with consciousness and
free will.  These beliefs are instinctive.  You cannot reason logically about
them.  In particular, you cannot ask if a machine or animal or another person
is conscious.  (Does it really feel pain, or only respond to pain?)  You can
only ask about its behavior.
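The balance between exploitation and exploration can be made concrete with a toy
two-armed bandit (the numbers are arbitrary; this is a sketch, not a claim about
how brains implement it):

import random

def run(epsilon, steps=10000):
    est = [0.0, 0.0]                 # estimated reward of each arm
    n = [0, 0]
    total = 0.0
    for t in range(steps):
        if t < 2 or random.random() < epsilon:
            a = random.randrange(2)              # explore
        else:
            a = 0 if est[0] >= est[1] else 1     # exploit the current estimate
        r = random.gauss(0.3 if a == 0 else 0.7, 1.0)   # arm 1 is better on average
        n[a] += 1
        est[a] += (r - est[a]) / n[a]
        total += r
    return total / steps

print("pure exploitation:", round(run(0.0), 2))   # can lock onto the worse arm
print("10% exploration: ", round(run(0.1), 2))    # usually finds the better arm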

Current research in AGI is directed at solving the remaining problems that
people still do better than machines, such as language and vision.  These
problems don't require reinforcement learning.  Therefore, such machines need
not have behavior that would make them appear conscious.

If humans succeed in making machines smarter than themselves, those machines
could do likewise.  This process is called recursive self improvement (RSI). 
An agent cannot predict what a more intelligent agent will do (see
http://www.vetta.org/documents/IDSIA-12-06-1.pdf and
http://www.sl4.org/wiki/KnowabilityOfFAI for debate).  Thus, RSI is
experimental at every step.  Some offspring will be more fit than others.  If
agents must compete for computing resources, then we have an evolutionary
algorithm favoring agents whose goal is rapid reproduction and acquisition of
resources.  If an agent has goals and is capable of reinforcement learning,
then it will mimic conscious behavior.

RSI is necessary for a singularity, and goal directed agents seem to be
necessary for RSI.  It raises hard questions about what role humans will play
in this, if any.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Towards the Singularity

2007-09-12 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 11/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
 
No, you are thinking in the present, where there can be only one copy
 of a
brain.  When technology for uploading exists, you have a 100% chance
 of
becoming the original and a 100% chance of becoming the copy.
  
   It's the same in no collapse interpretations of quantum mechanics.
   There is a 100% chance that a copy of you will see the atom decay and
   a 100% chance that a copy of you will not see the atom decay. However,
   experiment shows that there is only a 50% chance of seeing the atom
   decay, because the multiple copies of you don't share their
   experiences. The MWI gives the same probabilistic results as the CI
   for any observer.
 
  The analogy to the multi-universe view of quantum mechanics is not valid. 
 In
  the multi-universe view, there are two parallel universes both before and
  after the split, and they do not communicate at any time.  When you copy a
  brain, there is one copy before and two afterwards.  Those two brains can
 then
  communicate with each other.
 
 I think the usual explanation is that the split doubles the number
 of universes and the number of copies of a brain. It wouldn't make any
 difference if tomorrow we discovered a method of communicating with
 the parallel universes: you would see the other copies of you who have
 or haven't observed the atom decay but subjectively you still have a
 50% chance of finding yourself in one or other situation if you can
 only have the experiences of one entity at a time.

If this is true, then it undermines an argument for uploading.  Some assume
that if you destructively upload, then you have a 100% chance of being the
copy.  But what if the original is killed not immediately, but one second
later?

These problems go away if you don't assume consciousness exists.  Then the
question is, if I encounter someone that claims to be you, what is the
probability that I encountered your copy?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Towards the Singularity

2007-09-10 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 10/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
   No, it is not necessary to destroy the original. If you do destroy the
   original you have a 100% chance of ending up as the copy, while if you
   don't you have a 50% chance of ending up as the copy. It's like
   probability if the MWI of QM is correct.
 
  No, you are thinking in the present, where there can be only one copy of a
  brain.  When technology for uploading exists, you have a 100% chance of
  becoming the original and a 100% chance of becoming the copy.
 
 It's the same in no collapse interpretations of quantum mechanics.
 There is a 100% chance that a copy of you will see the atom decay and
 a 100% chance that a copy of you will not see the atom decay. However,
 experiment shows that there is only a 50% chance of seeing the atom
 decay, because the multiple copies of you don't share their
 experiences. The MWI gives the same probabilistic results as the CI
 for any observer.

The analogy to the multi-universe view of quantum mechanics is not valid.  In
the multi-universe view, there are two parallel universes both before and
after the split, and they do not communicate at any time.  When you copy a
brain, there is one copy before and two afterwards.  Those two brains can then
communicate with each other.

The multi-universe view cannot be tested.  The evidence in its favor is
Occam's Razor (or its formal equivalent, AIXI, assuming the universe is a
computation).

The view that you express is that when a brain is copied, one copy becomes
human with subjective experience and the other becomes a p-zombie, but we
don't know which one.  The evidence in favor of this view is:

- Human belief in consciousness and subjective experience is universal and
accepted without question.  Any belief programmed into the brain through
natural selection must be true in any logical system that the human mind can
comprehend.

- Out of 6 billion humans, no two have the same memory.  Therefore by
induction, it is impossible to copy consciousness.

(I hope that you can see the flaws in this evidence).

This view also cannot be tested, because there is no test to distinguish a
conscious human from a p-zombie.  Unlike the multi-universe view where a
different copy becomes conscious in each universe, the two universes would
continue to remain identical.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Towards the Singularity

2007-09-10 Thread Matt Mahoney

--- Panu Horsmalahti [EMAIL PROTECTED] wrote:

 2007/9/10, Matt Mahoney [EMAIL PROTECTED]:
 
  - Human belief in consciousness and subjective experience is universal and
  accepted without question.
 
 
 It isn't.

I am glad you spotted the flaw in these statements.

 
   Any belief programmed into the brain through
  natural selection must be true in any logical system that the human mind
  can
  comprehend.
 
 
 1. Provide evidence that any belief at all is programmed into the brain
 through natural selection
 2. Provide evidence for the claim that these supposed beliefs must be true
 in any logical system that the human mind can comprehend.
 
 I don't think natural selection has had enough time to program any beliefs
 about consciousness into our brains, as philosophical discussion about these
 issues has been around for only a couple of thousand years. Also, disbelief
 in consciousness doesn't mean that the individual suddenly stops to
 reproduce or kills itself (I remember you claiming this, I might be wrong
 though).

Disagreements over the existence of consciousness often center on the
definition.  One definition is that consciousness is that which distinguishes
the human mind from that of animals and machines.  This definition has
difficulties.  Isn't a dog more conscious than a worm?  Are babies conscious? 
If so, at what point after conception?

I prefer to define consciousness as that which distinguishes humans from
p-zombies as described in http://en.wikipedia.org/wiki/Philosophical_zombie
For example, if you poke a p-zombie with a sharp object, it will not
experience pain, although it will react just like a human.  It will say
ouch, avoid behaviors that cause pain, and claim that it really does feel
pain, just like any human.  There is no test to distinguish a conscious human
from a p-zombie.

In this sense, belief in consciousness (but not consciousness itself) is
testable, even in animals.  An animal cannot say "I exist," but it will change
its behavior to avoid pain, evidence that it appears to believe that pain is
real.  You might not agree that learning by negative reinforcement is the same
as a belief in one's own consciousness, but consider all the ways in which a
human might not change his behavior in response to pain, e.g. coma,
anesthesia, distraction, enlightenment, etc.  Would you say that such a person
still experiences pain?

I assume you agree that animals which react to stimuli as if they were real
have a selective advantage over those that do not.  Likewise, evolution favors
animals that retain memory, that seek knowledge through exploration (appear to
have free will), and that fear death.  These are all traits that we associate
with consciousness in humans.

 Matt, you have frequently 'hijacked' threads about consciousness with these
 claims, so maybe you could tell us reasons to believe in them?

It has important implications for the direction that a singularity will take. 
Recursive self improvement is a genetic algorithm that favors rapid
reproduction and acquisition of computing resources.  It does not favor
immortality, friendliness (whatever that means), or high fidelity of uploads. 
Humans, on the other hand, are motivated to upload by fear of death and the
belief that their consciousness depends on the preservation of their memories.
  How will human uploads driven by these goals fare in a competitive computing
environment?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Towards the Singularity

2007-09-09 Thread Matt Mahoney

--- Nathan Cook [EMAIL PROTECTED] wrote:

 
  What if the copy is not exact, but close enough to fool others who know
  you?
  Maybe you won't have a choice.  Suppose you die before we have developed
  the
  technology to scan neurons, so family members customize an AGI in your
  likeness based on all of your writing, photos, and interviews with people
  that
  knew you.  All it takes is 10^9 bits of information about you to pass a
  Turing
  test.  As we move into the age of surveillance, this will get easier to
  do.  I
  bet Yahoo knows an awful lot about me from the thousands of emails I have
  sent
  through their servers.
 
 
 I can't tell if you're playing devil's advocate for monadic consciousness
 here, but in
 any case, I disagree with you that you can observe a given quantity of data
 of the
 sort accessible without a brain scan, and then reconstruct the brain from
 that. The
 thinking seems to be that, as the brain is an analogue device in which every
 part is
 connected via some chain to every other, everything in your brain slowly
 leaks out
 into the environment through your behaviour.

You can combine general knowledge for constructing an AGI with personal
knowledge to create a reasonable facsimile.  For example, given just my home
address, you could guess I speak English, make reasonable guesses about what
places I might have visited, and make up some plausible memories.  Even if
they are wrong, my copy wouldn't know the difference.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Towards the Singularity

2007-09-09 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 09/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
Your dilemma: after you upload, does the original human them become a
p-zombie, or are there two copies of your consciousness?  Is it
 necessary
   to
kill the human body for your consciousness to transfer?
  
   I have the same problem in ordinary life, since the matter in my brain
   from a year ago has almost all dispersed into the biosphere. Even the
   configuration [of] matter in my current brain, and the information it
   represents, only approximates that of my erstwhile self. It's just
   convenient that my past selves naturally disintegrate, so that I don't
   encounter them and fight it out to see which is the real me. We've
   all been through the equivalent of destructive uploading.
 
  So your answer is yes?
 
 No, it is not necessary to destroy the original. If you do destroy the
 original you have a 100% chance of ending up as the copy, while if you
 don't you have a 50% chance of ending up as the copy. It's like
 probability if the MWI of QM is correct.

No, you are thinking in the present, where there can be only one copy of a
brain.  When technology for uploading exists, you have a 100% chance of
becoming the original and a 100% chance of becoming the copy.


 
  So if your brain is a Turing machine in language L1 and the program is
  recompiled to run in language L2, then the consciousness transfers?  But
 if
  the two machines implement the same function but the process of writing
 the
  second program is not specified, then the consciousness does not transfer
  because it is undecidable in general to determine if two programs are
  equivalent?
 
 It depends on what you mean by implements the same function. A black
 box that emulates the behaviour of a neuron and can be used to replace
 neurons one by one, as per Hans Moravec, will result in no alteration
 to consciousness (as shown in David Chalmers' fading qualia paper:
 http://consc.net/papers/qualia.html), so total replacement by these
 black boxes will result in no change to consciousness. It doesn't
 matter what is inside the black box, as long as it is functionally
 equivalent to the biological tissue. On the other hand...

I mean "implements the same function" in the sense that identical inputs result in
identical outputs.  I don't insist on a 1-1 mapping of machine states as
Chalmers does.  I doubt it makes a difference, though.

Also, Chalmers argues that a machine copy of your brain must be conscious. 
But he has the same instinct to believe in consciousness as everyone else.  My
claim is broader: that either a machine can be conscious or that consciousness
does not exist.

 What is the difference between really being conscious and only
 thinking that I am conscious?

Nothing.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=39985876-d99aeb


Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 I intentionally don't want to exactly define what S is as it describes
 vaguely-defined 'subjective experience generator'. I instead leave it
 at description level.

If you can't define what subjective experience is, then how do you know it
exists?  If it does exist, then is it a property of the computation, or does
it depend on the physical implementation of the computer?  How do you test for
it?  
Do you claim that the human brain cannot be emulated by a Turing machine?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=40020966-19730d


Re: [singularity] Towards the Singularity

2007-09-08 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 08/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  I agree this is a great risk.  The motivation to upload is driven by fear
 of
  death and our incorrect but biologically programmed belief in
 consciousness.
  The result will be the extinction of human life and its replacement with
  godlike intelligence, possibly this century.  The best we can do is view
 this
  as a good thing, because the alternative -- a rational approach to our own
  intelligence -- would result in extinction with no replacement.
 
 If my upload is deluded about its consciousness in exactly the same
 way you claim I am deluded about my consciousness, that's good enough
 for me.

And it will be, if the copy is exact.

Your dilemma: after you upload, does the original human then become a
p-zombie, or are there two copies of your consciousness?  Is it necessary to
kill the human body for your consciousness to transfer?

What if the copy is not exact, but close enough to fool others who know you? 
Maybe you won't have a choice.  Suppose you die before we have developed the
technology to scan neurons, so family members customize an AGI in your
likeness based on all of your writing, photos, and interviews with people that
knew you.  All it takes is 10^9 bits of information about you to pass a Turing
test.  As we move into the age of surveillance, this will get easier to do.  I
bet Yahoo knows an awful lot about me from the thousands of emails I have sent
through their servers.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=39888218-f25442


[singularity] Chip implants linked to animal tumors

2007-09-08 Thread Matt Mahoney
There has been a minor setback in the plan to implant RFID tags in all humans.

http://news.yahoo.com/s/ap/20070908/ap_on_re_us/chipping_america_ii;_ylt=AiZyFu9ywOpQA0T6nXkEAcFH2ocA

Perhaps it would be safer to have our social security numbers tattooed on our
foreheads?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=39894283-e65a9d


Re: [singularity] Towards the Singularity

2007-09-07 Thread Matt Mahoney
--- Quasar Strider [EMAIL PROTECTED] wrote:

 Hello,
 
 I see several possible avenues for implementing a self-aware machine which
 can pass the Turing test: i.e. human level AI. Mechanical and Electronic.
 However, I see little purpose in doing this. Fact is, we already have self
 aware machines which can pass the Turing test: Humans beings.

This was not Turing's goal, nor is it the direction that AI is headed. 
Turing's goal was to define artificial intelligence.  The question of whether
consciousness can exist in a machine has been debated since the earliest
computers.  Either machines can be conscious or consciousness does not exist. 
The human brain is programmed through DNA to believe in the existence of its own
consciousness and free will, and to fear death.  It is simply a property of
good learning algorithms to behave as if they had free will, a balance between
exploitation for immediate reward and exploration for the possibility of
gaining knowledge for greater future reward.  Animals without these
characteristics did not pass on their DNA.  Therefore you have them.
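
To make the exploration/exploitation point concrete, here is a toy
epsilon-greedy learner in Python (the reward values and the 10% exploration
rate are made up for illustration).  Most of the time it takes the action it
currently estimates is best, but occasionally it tries something else in case
its estimates are wrong:

import random

def epsilon_greedy(true_rewards, epsilon=0.1, trials=1000):
    # true_rewards: the real mean payoff of each action (hidden from the learner)
    estimates = [0.0] * len(true_rewards)   # learner's current value estimates
    counts = [0] * len(true_rewards)
    for _ in range(trials):
        if random.random() < epsilon:
            a = random.randrange(len(true_rewards))                        # explore
        else:
            a = max(range(len(true_rewards)), key=lambda i: estimates[i])  # exploit
        reward = true_rewards[a] + random.gauss(0, 0.1)                    # noisy payoff
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]                # running mean
    return estimates

print(epsilon_greedy([0.2, 0.5, 0.9]))  # estimate for action 2 should end up near 0.9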

Turing avoided the controversial question of consciousness by equating
intelligence to the appearance of intelligence.  It is not the best test of
intelligence, but it seems to be the only one that people can agree on.

The goal of commercial AI is not to create humans, but to solve the remaining
problems that humans can still do better than computers, such as language and
vision.  You see Google making progress in these areas, but I don't think you
would ever confuse Google with a human.

 We do not need direct neural links to our brain to download and upload
 childhood memories.

I agree this is a great risk.  The motivation to upload is driven by fear of
death and our incorrect but biologically programmed belief in consciousness. 
The result will be the extinction of human life and its replacement with
godlike intelligence, possibly this century.  The best we can do is view this
as a good thing, because the alternative -- a rational approach to our own
intelligence -- would result in extinction with no replacement.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=39571188-7e5cf6


Re: [singularity] Good Singularity intro in mass media

2007-08-24 Thread Matt Mahoney
--- Joshua Fox [EMAIL PROTECTED] wrote:

 Can anyone recall an intelligent, supportive introduction to the Singularity
 in a _non-technological_ , wide-distribution medium in the US? I am not
 looking for book or conference reviews, sociological analyses of
 Singularitarianism, and uninformed editorializing, but rather for a clear
 short popular mass-media explanation of the Singularity.

I think the classic paper by Vernor Vinge expresses it pretty well.
http://mindstalk.net/vinge/vinge-sing.html


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=35625802-9b0353


Re: [singularity] Reduced activism

2007-08-19 Thread Matt Mahoney
--- Samantha Atkins [EMAIL PROTECTED] wrote:
 On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote:
  3. Studying the singularity raises issues (e.g. does consciousness  
  exist?)
  that conflict with hardcoded beliefs that are essential for survival.
 
 Huh?  Are you conscious?

I believe that I am, in the sense that I am not a p-zombie.
http://en.wikipedia.org/wiki/Philosophical_zombie

I also believe that the human brain can be simulated by a computer, which has
no need for a consciousness in this sense.

I realize these beliefs are contradictory, but I just leave it at that.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=33530444-20a2f0


Re: [singularity] critiques of Eliezer's views on AI

2007-06-29 Thread Matt Mahoney

--- Randall Randall [EMAIL PROTECTED] wrote:

 
 On Jun 28, 2007, at 7:51 PM, Matt Mahoney wrote:
  --- Stathis Papaioannou [EMAIL PROTECTED] wrote:
  How does this answer questions like, if I am destructively teleported
  to two different locations, what can I expect to experience? That's
  what I want to know before I press the button.
 
  You have to ask the question in a form that does not depend on the  
  existence
  of consciousness.  The question is what will each of the two copies  
  claim to
  experience?
 
 Of course, we only care what they claim to experience insofar
 as it corresponds with what they did experience, since that's
 what we're really interested in.

How could you tell the difference?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Matt Mahoney
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 28/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  So how do we approach the question of uploading without leading to a
  contradiction?  I suggest we approach it in the context of outside
 observers
  simulating competing agents.  How will these agents evolve?  We would
 expect
  that agents will produce other agents similar to themselves but not
 identical,
  either through biological reproduction, genetic engineering, or computer
  technology.  The exact mechanism doesn't matter.  In any case, those
 agents
  will evolve an instinct for self preservation, because that makes them
 fitter.
   They will fear death.  They will act on this fear by using technology to
  extend their lifespans.  When we approach the question in this manner, we
 can
  ask if they upload, and if so, how?  We do not need to address the
 question of
  whether consciousness exists or not.  The question is not what should we
 do,
  but what are we likely to do?
 
 How does this answer questions like, if I am destructively teleported
 to two different locations, what can I expect to experience? That's
 what I want to know before I press the button.

You have to ask the question in a form that does not depend on the existence
of consciousness.  The question is what will each of the two copies claim to
experience?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Matt Mahoney
--- Nathan Cook [EMAIL PROTECTED] wrote:
 I don't wish to retread old arguments, but there are a few theoretical outs.
 One could be uploaded bit by bit, one neuron at a time if necessary. One
 could be rendered unconscious, frozen, and scanned. I would find this
 frightening, but preferable to regaining consciousness while a separate
 instance of me was running. You beg the question when you ask if I would
 'kill myself' if a perfect copy existed. If the copy were perfect, it would
 kill itself as well. If the copy were not perfect, I think I'd be entitled
 to declare myself a different entity.

I think people will put these issues aside and choose to upload, even if the
copy isn't perfect.  Imagine when your friend says to you, How do you like my
new robotic body?  I am 20 years old again.  I can jump 10 feet in the air.  I
can run 40 MPH.  I can see in the infrared and ultraviolet.  With my new brain
I can multiply 1000 digit numbers in my head instantly.  I can read a book in
one minute and recall every word.  I have a built in wireless internet
connection.  While I am talking to you I can also mentally talk to 1000 other
people by phone or email and give my full attention to everyone
simultaneously.   With other uploaded people I can communicate a million times
faster than speaking, see through their eyes, feel what they feel, and share
my senses with them too, even across continents.  Every day I discover new
powers.  It's just amazing.

Are you ready to upload now?

And then the original friend walks in...



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Matt Mahoney
What is wrong with this logic?

Captain Kirk willingly steps into the transporter to have his atoms turned
into energy because he knows an identical copy will be reassembled on the
surface of the planet below.  Would he be so willing if the original was left
behind?

This is a case of logic conflicting with instinct.  You can only transfer
consciousness if you kill the original.  You can do it neuron by neuron, or
all at once.  Either way, the original won't notice, will it?

Isn't this funny?  Our instinct for self preservation causes us to build a
friendly AGI that annihilates the human race, because that's what we want.


--- Alan Grimes [EMAIL PROTECTED] wrote:

 Papiewski, John wrote:
  You're not misunderstanding and it is horrible.
  
  The only way to do it is to gradually replace your brain cells with an
  artificial substitute. 
  
  You'd be barely aware that something is going on, and there wouldn't
 be
  two copies of you to be confused over.
 
 Good start. =)
 
 But be careful when claiming that anything is the *only* way to do
 anything...
 
 Okay, go one step further. What do you want from uploading? Lets say
 vastly improved mental capacity. Okay, why not use a neural interface
 and start using a computer-based AI engine as part of your mind?
 
 You get the advantage of a fresh architecture and no identity issues. =)
 
 It's also practical with technology that is sure to be available within
 5 years...  -- except the AI part. =(  People keep finding new ways to
 not invent AI. =(((
 
 -- 
 Opera: Sing it loud! :o(  )-
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Matt Mahoney

--- Jey Kottalam [EMAIL PROTECTED] wrote:

 On 6/25/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  You can only transfer
  consciousness if you kill the original.
 
 What is the justification for this claim?

There is none, which is what I was trying to argue.  Consciousness does not
actually exist.  What exists is a universal belief in consciousness.  The
belief exists because those who did not have it did not pass on their DNA.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-24 Thread Matt Mahoney
--- Tom McCabe [EMAIL PROTECTED] wrote:

 These questions, although important, have little to do
 with the feasibility of FAI. 

These questions are important because AGI is coming, friendly or not.  Will
our AGIs cooperate or compete?  Do we upload ourselves?

Consider the scenario of competing, recursively self improving AGIs.  The
initial version might be friendly (programmed to serve humans), but natural
selection will favor AGIs that have an instinct for self preservation and
reproduction, as it does in all living species.  That is not good, because
humans will be seen as competition.

Consider a cooperative AGI network, a system that thinks as one.  How will it
grow?  If there is no instinct for self preservation, then it builds a larger
version, transfers its knowledge, and kills itself.  The new version will
likely also lack an instinct for self preservation.  So what happens if the
new version decides to kill itself without building a replacement (because
there is also no instinct for reproduction), or if the replacement is faulty?

I think a competing system has a better chance of producing working AGI.  That
is what we have now.  There are many diverse approaches (Novamente, NARS, Cyc,
Google, Blue Brain, etc), although none is close to AGI yet.  A cooperative
system has a serial sequence of improvements each with a single point of
failure.  There is not a technical solution because we know that a system
cannot model exactly a system of greater algorithmic complexity.  It requires
at every step a probabilistic model, a guess that the next version will work
as planned.

Do we upload?  Consider the copy paradox.  If there was an exact copy of you,
atom for atom, and you had to choose between killing the copy or yourself, I
think you would choose to kill the copy (and the copy would choose to kill
you).  Does it matter who dies?  Logically, no, but your instinct for self
preservation says yes.  You cannot resolve this paradox.  Your instinct for
self preservation, what you call consciousness or self-awareness, is
immutable.  It was programmed by your DNA.  It exists because if a person does
not have it, they don't live to pass on their genes.

Presumably some people will choose to upload, reasoning that they will die
anyway so there is nothing to lose.  This is not really a satisfactory
solution, because you still die.  But suppose we had both read and write
access to the brain, so that after copying your memory, your brain was
reprogrammed to remove your fear of death.  But even this is not satisfactory.
 Not because reprogramming is evil, but because of what you will be uploaded
to.  Either it will be to an AGI in a competitive system, in which case you
will be back where you started (and die again), or to a cooperative system
that does not fear death, and will likely fail.

I proposed a simulation of agents building an AGI to see what they build.  Of
course this has to be a thought experiment, because the simulation will
require more computing power than an AGI itself, so we can't experiment before
we build one.  But I would like to make some points about the validity of this
approach.

- The agents will not know their environment is simulated.
- The agents will evolve an instinct for self preservation (because the others
will die without reproducing).
- The agents will have probabilistic models of their universe because they
lack the computing power to model it exactly.
- The computing power of the AGI will be limited by the computing power of the
simulator.

In real life:

- Humans cannot tell if the universe is simulated.
- Humans have an instinct for self preservation.
- Our model of the universe is probabilistic (quantum mechanics, and also at
higher conceptual levels).
- The universe has finite size, mass, number of particles, and entropy (10^122
bits), and therefore has limited computing capability.
- Humans already practice recursive self improvement.  Your children will have
different goals than you, and some will be more intelligent.  But having
children does not remove your fear of death.


 I think we can all agree
 that the space of possible universe configurations
 without sentient life of *any kind* is vastly larger
 than the space of possible configurations with
 sentient life, and designing an AGI to get us into
 this space is enough to make the problem *very hard*
 even given this absurdly minimal goal. To shamelessly
 steal Eliezer's analogy, think of building an FAI of
 any kind as building a 747, and then figuring out what
 to program with regards to volition, death, human
 suffering, etc. as learning how to fly the 747 and
 finding a good destination.
 
  - Tom
 
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
 
  I think I am missing something on this discussion of
  friendliness.  We seem to
  tacitly assume we know what it means to be friendly.
   For example, we assume
  that an AGI that does not destroy the human race is
  more friendly than one
  that does.  We also

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Matt Mahoney
I think I am missing something on this discussion of friendliness.  We seem to
tacitly assume we know what it means to be friendly.  For example, we assume
that an AGI that does not destroy the human race is more friendly than one
that does.  We also want an AGI to obey our commands, cure disease, make us
immortal, not kill or torture people, and so on.  We assume an AGI that does
these things is more friendly than one that does not.

This seems like an easy question.  But it is not.

Humans fear death, but inevitably die.  Therefore the logical solution is to
upload our minds.  Suppose it was technologically possible to make an exact
copy of you, including all your memories and behavior.  The copy could
convince everyone, even you, that it was you.  Would you then shoot yourself?

Suppose you simulate an artificial world with billions of agents and an
environment that challenges and eventually kills them.  These agents can also
reproduce (copying all or part of their knowledge) and mutate.  Suppose you
have enough computing power that each of these agents could have human level
intelligence or better.  What attributes would you expect these agents to
evolve?

- Goals that confer a survival advantage?  (belief in consciousness)
- A balance between exploration and exploitation to maximize accumulated goal
achievement? (belief in free will)

Suppose the environment allows the agents to build computers.  Will their
goals motivate them to build an AGI?  If so, how will their goals influence
the design?  What goals will they give the AGI?  How do you think the
simulation will play out?  Consider the cases:

- One big AGI vs. many AGIs competing for scarce resources.
- Agents that upload to the AGI vs. those that do not.

What is YOUR goal in running the simulation?  Suppose they build a single AGI,
all the agents upload, and the AGI reprograms its goals and goes into a
degenerate state or turns itself off.  Would you care?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] What form will superAGI take?

2007-06-16 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:

 Perhaps you've been through this - but I'd like to know people's ideas about
 what exact physical form a Singulitarian or near-Singul. AGI will take. And 
 I'd like to know people's automatic associations even if they don't have 
 thought-through ideas - just what does a superAGI conjure up in your mind, 
 regardless of whether you're sure about it or not, or it's sensible?

It is fun to speculate, but I think that you could not observe a Singularity. 
Or if your intellect advanced to the point where you could, you would not be
able to describe what you observed to other humans.  To use an analogy, a
Singularity level intelligence would be as advanced over humans as humans are
over bacteria.  The bacteria in your stomach are unaware of your existence.

I believe a Singularity has already happened.  The world you now observe is
the result.  Your thoughts are constrained both by the computational limits of
your brain (belief in consciousness, belief in free will, fear of death), and
by the model of reality presented to its inputs.  For all we know, concepts
like space, time, and matter are nothing more than abstract mathematical
models in your simulated universe, which bear no resemblance to the universe
in which the simulation is being run.  This will all be clear after you die
and wake up.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] What form will superAGI take?

2007-06-16 Thread Matt Mahoney
--- Tom McCabe [EMAIL PROTECTED] wrote:
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
  Or if your intellect advanced to the point where you
  could, you would not be
  able to describe what you observed to other humans. 
  To use an analogy, a
  Singularity level intelligence would be as advanced
  over humans as humans are
  over bacteria.  The bacteria in your stomach are
  unaware of your existence.
 
 No, but they would notice if you spontaneously
 appeared/disappeared. It would greatly affect their
 environment.

If the universe and everyone in it suddenly disappeared, who would notice?


  I believe a Singularity has already happened.
 
 Do you have any evidence for this point of view?

Since we can't observe a Singularity after it happens, no.


  The
  world you now observe is
  the result.  Your thoughts are constrained both by
  the computational limits of
  your brain (belief in consciousness, belief in free
  will, fear of death)
 
 What do those things have to do with computational
 limits?

Poor choice of words.  The human brain is limited by speed and memory of
course, but I meant constraints imposed by the architecture of your brain
through evolution.  If you did not believe in consciousness and free will, or
believe that the external world was real, you could not function and pass on
your DNA.  The best you can do is accept both points of view and not attempt
to resolve the conflict.

  , and
  by the model of reality presented to its inputs. 
  For all we know, concepts
  like space, time, and matter are nothing more than
  abstract mathematical
  models in your simulated universe, which bear no
  resemblance to the universe
  in which the simulation is being run.  This will all
  be clear after you die
  and wake up.
 
 So, after we wake up, can we try whatever beings set
 up this simulation for being complicit in every crime
 ever committed?

It's hard to say because we know nothing about the universe which simulates
the one we observe.  My guess is that the other universe is itself a
simulation in a higher universe, and so on, ultimately boiling down to an
enumeration of Turing machines or an equivalent mathematical abstraction.  But
of course I don't know.  If you simulated an artificial world with intelligent
agents, they wouldn't know about our world either.  They would only know what
you programmed them to know.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Matt Mahoney
--- Lúcio de Souza Coelho [EMAIL PROTECTED] wrote:
 On 6/15/07, Tom McCabe [EMAIL PROTECTED] wrote:
 How exactly do you control a megaton-size hunk of
 metal flying through the air at 10,000+ m/s?

All of these problems will be worked out by the superhuman intelligence that
augments/replaces us.  You don't have to worry about it now.  Some possible
solutions:

- Better extraction techniques from low grade ore.
- Recycling.
- Alternative designs using less expensive materials.
- Reducing the earth's population so there are more resources per person.
- Uploading your mind and simulating a world where resources are plentiful.

For all you know, the latter has already happened.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=7d7fb4d8


[singularity] Will AGI make us stupid?

2007-05-20 Thread Matt Mahoney
I used to like to solve Sudoku puzzles, and thought about the mental process I
used to solve them.  Then I decided it would be a bigger challenge to put that
process into code, and wrote http://cs.fit.edu/~mmahoney/sudoku/sudoku.html
I thought it was cool that I could write a program that was smarter than me,
at least in some narrow domain.  But the unexpected result was that I lost
interest in solving the puzzles.  Why should I do it the hard way?  And what
fun is it to do it the easy way?

When a computer beat the world champion at chess, the game lost the
significance it once had.  You know who Kasparov is.  Who is the champion
today?

When calculators became available, teaching students to do arithmetic by hand
seemed less important.  Likewise for handwriting and keyboards.  We now use
computers to remember details of our lives like phone numbers and email
addresses, to get driving directions, to decide which email we want to read,
to do ever more of our work.

When machines can do all of our thinking for us, what will happen to us?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Matt Mahoney

--- Benjamin Goertzel [EMAIL PROTECTED] wrote:

 
 
  What will be left for unaugmented, non-uploaded humans after computers can
  outdo
  them in all intellectual and athletic tasks?
 
  Art and sex, I would suppose ;-)
 
  After all it's still fun to learn to play Bach even though Wanda Landowska
  did it
  better...
 
  -- Ben G
 
 
 Basically, humans will have to get back to a more childlike joy in doing
 for the
 sake of doing
 
 My kids happily write stories even though they don't think their stories are
 as good
 as their favorites written by adults...

But what happens when video games become so good that children would rather
express their creativity in virtual worlds than in the real one?

And what happens when AGI solves art?  This seems to be a neglected area, but
does music really need to be recorded?  What if it were possible for a program
to distinguish good music from bad, or equivalently, create good music?  How
could human artists compete with machines that can customize their work for
each individual in real time?

I guess that leaves sex, but I would not be surprised to see some technical
innovation here as well.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Matt Mahoney

--- [EMAIL PROTECTED] wrote:

 How does one 'solve art'?  Can that be done?  If not, then I doubt we should
 worry about AGI muscling us out of that arena.

The same way that humans have solved art: by learning to distinguish good art from bad
art.  Obviously this is a matter of taste, but if I gave a program lots of
examples of what I thought was good art (or music, funny jokes, movies, or
whatever), and lots of negative examples, then it should be able to guess my
opinion of unseen examples.  If I gave you examples of songs that I like and
dislike, you could probably guess how I would rate other songs not on the
list, even if your musical tastes were different than mine.  So if you could
do it, why not a machine?
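
In machine learning terms this is ordinary binary classification from labeled
examples.  A minimal sketch in Python -- the three numeric features per item
(say tempo, loudness, dissonance) are invented placeholders; a real system
would have to extract features from the audio or the film itself:

# Each example is a feature vector; one list of things I liked, one I disliked.
liked    = [[0.8, 0.3, 0.1], [0.7, 0.4, 0.2], [0.9, 0.2, 0.1]]
disliked = [[0.2, 0.9, 0.8], [0.3, 0.8, 0.9], [0.1, 0.7, 0.7]]

def centroid(examples):
    n = len(examples)
    return [sum(x[i] for x in examples) / n for i in range(len(examples[0]))]

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

like_center, dislike_center = centroid(liked), centroid(disliked)

def predict(x):
    # Guess "like" if the new item is closer to the liked examples' centroid.
    return distance(x, like_center) < distance(x, dislike_center)

print(predict([0.85, 0.25, 0.15]))   # True  (near the liked examples)
print(predict([0.15, 0.80, 0.85]))   # False (near the disliked examples)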

I am surprised how little attention has been given to this problem, given the
economic incentives, e.g. the Netflix prize, http://www.netflixprize.com/
Suppose instead of guessing how I would rate a movie based on how others have
rated it, it guessed by watching the movie?

Now there really is no difference between being able to judge the quality of a
movie (relative to a particular viewer or audience), and being able to
generate high quality movies.  This is an AI problem, just like language or
vision or robotics.  The only difference is that it has not received much
attention.  If there is an economic incentive and no insurmountable hurdles,
then we should expect it to eventually be solved.

Of course it's still fun to jam with your friends, even though others may
express their creativity by writing programs that generate music.  Just like
people will still solve Sudoku puzzles by hand even though computers can do it
faster.


 Sent via BlackBerry from Cingular Wireless  
 
 -Original Message-
 From: Benjamin Goertzel [EMAIL PROTECTED]
 Date: Sun, 20 May 2007 18:35:27 
 To:singularity@v2.listbox.com
 Subject: Re: [singularity] Will AGI make us stupid?
 
 And what happens when AGI solves art?  This seems to be a neglected area,
 but 
 does music really need to be recorded?  What if it were possible for a
 program
 to distinguish good music from bad, or equivalently, create good music?  How
 could human artists compete with machines that can customize their work for 
 each individual in real time?
 
 
 My point is, that doesn't matter.
 
 I know I'll never be as good as Bach, Jimi Hendrix or Dave Brubeck, but I
 play the
 keyboard anyway... and I compose music anyway too, just because I love to...
 
 
 Art is done for the love of doing it, not just out of the desire to excel...
 
 And I like listening to my son's musical compositions because HE made them,
 not
 because I think they're objectively the best in the world... 
 
 And I like playing music together with other people because of the social
 communication
 and sharing involved ... so I would rather jam with an imperfect human than
 with 
 a better musician who was an emotionless (or alienly emotional) robot... 
 
 I would have less incentive to prove theorems if I could just feed the
 statements to
 Mathematica and let it prove them for me...
 
 but I wouldn't have less incentive to improvise on the keyboard if I could
 just tell 
 the computer to improvise for me...
 
 Psychologically, art feels to me like a different sort of animal...
 
 But of course, attitudes may vary...
 
 I plan to upload myself and become transhuman anyway, but maybe the
 Ben-version 
 who stays a mostly-unimproved human will become a full-time musician ;-) ...
 
 
 Hell, with a few thousand years practice, he may even become a good one!!!
 
 -- Ben G
 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Matt Mahoney

--- Nathan Cook [EMAIL PROTECTED] wrote:

 On 5/21/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
 
 
 Now there really is no difference between being able to judge the quality of
  a
  movie (relative to a particular viewer or audience), and being able to
  generate high quality movies.
 
 
 So is it just a lack of ambition that prevents your local reviewer from
 creating the next blockbuster? Really, you don't have to have the skills and
 knowledge necessary to make a film in order to grade it. I don't think AIs
 will be making movies until they're superhuman. Music, I can see being
 possible much sooner: the space of compositions is easier to explore, and
 the underlying rules are more explicit.

I am talking about the problem for machines, not for people.  Obviously for humans,
generation is harder because the evaluation problem has already been solved.

For machines, this is a modeling problem.  Once you have an algorithm for
measuring the quality of a piece of art (movie, music, whatever), then
producing art is just an optimization problem.  You generate the art, evaluate
it, make incremental adjustments and repeat.  Generation is not technically
difficult.  Artists already use software tools such as synthesizers, animation
software, video editors, etc.
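
As pseudocode the generate-evaluate-adjust loop is nothing more than hill
climbing; all of the difficulty is hidden inside the scoring function.  A
sketch in Python, with a stand-in score() (closeness to a hidden target
vector) where the unsolved art-evaluation model would go:

import random

def score(piece, target=[0.2, 0.7, 0.5]):
    # Stand-in for the real problem: rate how "good" a piece is.
    return -sum((p - t) ** 2 for p, t in zip(piece, target))

def generate_art(steps=10000, size=3):
    piece = [random.random() for _ in range(size)]              # random first draft
    best = score(piece)
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.05) for p in piece]  # small adjustment
        s = score(candidate)
        if s > best:                                            # keep improvements
            piece, best = candidate, s
    return piece, best

print(generate_art())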

But we are nowhere close to solving the evaluation part.  It is an extremely
difficult problem, probably because it hasn't even been studied.  We
understand a lot about visual perception, speech recognition, and language
modeling.  But we understand practically nothing about what makes music sound
good or what makes a joke funny.  We just take it for granted that it requires
a human brain in the loop.

But really, I don't think this is any harder or easier than any other AI
problem.  (And I wouldn't underestimate the difficulty of music recognition).



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: [singularity] Will AGI make us stupid?

2007-05-20 Thread Matt Mahoney

--- [EMAIL PROTECTED] wrote:

  Hello everyone,
  
  I think that humans will always be distinct from A.I. because humans have
 the capacity to wonder. A computer (to my knowledge) is programmed with
 right/wrong functions at its most basic level (although some may be
 programmed based on probably right/probably wrong). No matter how
 intelligent a computer can become, can it question its own programming
 (and therefore its existence)? Also, aren't computers based on cause and
 effect relationships? If there are aspects of the world undefined by cause
 and effect-which is incomprehensible to humans- could a computer ever
 comprehend them? I apologize for my inexperience with A.I.; I am simply a
 curious high school student. :)
  
  Chris Anderson

I wonder if we will figure out how to program a computer to wonder?  And if we
do, should we?  In theory, the brain is a computer, and all of its
functionality could be simulated if we had enough hardware to run it (about a
million PCs).  Such a machine should have all of our human emotions, including
a belief in its own consciousness and free will and fear of death and
everything else that was programmed into our brains through evolution for the
sole purpose of keeping us alive long enough to propagate our DNA.

But would we want to build such a machine?  I don't think so.  First, there is
no need to duplicate human weaknesses.  A replica of a human brain would
perform worse at simple arithmetic problems than your calculator.  We build
machines to do things we can't do ourselves.  Google is useful because it
knows more than you do, but you would not confuse it with a human.  The real
problem is to reproduce human strengths like language and vision.

Second, do you really want a machine with human emotions?  We want machines
that obey our commands.  But this is controversial.  Should a machine obey a
command to destroy itself or harm others?  Do you want a gun that fires when
you squeeze the trigger, or a gun that makes moral judgments and refuses to
fire when aimed at another person?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  What did your simulation actually accomplish?  What were the results? 
 What do
  you think you could achieve on a modern computer?
 
 Oh, I hope there's no misunderstanding:  I did not build networks to do 
 any kind of syntactic learning, they just learned relationships between 
 phonemic representations and graphemes.  (They learned to spell).  What 
 they showed was something already known for the learning of 
 pronunciation:  that the system first learns spellings by rote, then 
 increases its level of accuracy and at the same time starts to pick up 
 regularities in the mapping.  Then it starts to regularize the 
 spellings.  For example: having learned to spell height correctly in 
 the early stages, it would then start to spell it incorrectly as hite 
 because it had learned many other words in which the spelling of the 
 phoneme sequence in height would involve -ite.  Then in the last 
 stages it would learn the correct spellings again.

That's interesting, because children make similar mistakes at higher language
levels.  For example, a child will learn an irregular verb like "went", then
later generalize to "goed" before switching back to the correct form.

I am convinced that similar neural learning mechanisms are involved at the
lexical and syntactic levels, but on different scales.  For example, we learn
to classify letters into vowels and consonants by their context, just as we do
for nouns and verbs.  Then we learn sequential patterns.  Just as every word
needs a vowel, every sentence needs a verb.
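
The vowel/consonant case can be demonstrated with nothing but co-occurrence
statistics.  A rough sketch in Python (corpus.txt stands for any large sample
of English text; given enough data, the two clusters found this way tend to
line up with vowels and consonants):

from collections import Counter
import random, string

def context_vectors(text):
    # For each letter, the relative frequency of the letters that follow it.
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    counts = {a: Counter() for a in set(letters)}
    for x, y in zip(letters, letters[1:]):
        counts[x][y] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items() if c}

def two_means(vecs, iters=20):
    # Crude 2-means clustering of the context vectors.
    keys = list(vecs)
    centers = [dict(vecs[k]) for k in random.sample(keys, 2)]
    assign = {}
    for _ in range(iters):
        for a in keys:
            d = [sum((vecs[a].get(b, 0) - c.get(b, 0)) ** 2
                     for b in set(vecs[a]) | set(c)) for c in centers]
            assign[a] = 0 if d[0] <= d[1] else 1
        for i in (0, 1):
            members = [a for a in keys if assign[a] == i]
            if members:
                dims = set().union(*(vecs[a] for a in members))
                centers[i] = {b: sum(vecs[a].get(b, 0) for a in members) / len(members)
                              for b in dims}
    return assign

text = open('corpus.txt').read()        # any large sample of English text
groups = two_means(context_vectors(text))
print(sorted(a for a in groups if groups[a] == 0))
print(sorted(a for a in groups if groups[a] == 1))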

I think that learning syntax is a matter of computational power.  Children
learn the rules for segmenting continuous speech at 7-10 months, but don't
learn grammar until years later.  So you need more training data and a larger
network.  The reason I say the problem is O(n^2) is because when you double
the information content of the training data, you need to double the number of
connections to represent it.  Actually I think it is a little less
than O(n^2) (maybe O(n^2/log n)?) because of redundancy in the training data. 
There are about 1000 times more words than there are letters, so this suggests
you need 100,000 times more computing power for adult level grammar.  This
might explain why the problem is still unsolved.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Uh... I forgot to mention that explaining those data about child 
 language learning was the point of the work.  It's a well known effect, 
 and this is one of the reasons why the connectionist models got everyone 
 excited:  psychological facts started to be explained by the performance 
 of the connectionist nets.

Yes, which is why still I believe this is the right approach (not that it will
be easy).

 The next problem that you will face, along this path, is to figure out 
 how you can get such nets to elegantly represent such things as more 
 than one token of a concept in one sentence:  you can't just activate 
 the duck node when you hear that phrase from the Dire Straits song 
 Wild West End:  I go down to Chinatown ...  Duck inside a doorway; 
 Duck to Eat.

That is a problem.  Humans use context to resolve ambiguity.  A neural net
ought to do the same on its own if we get it right.  One problem with some
connectionist models is trying to assign a 1-1 mapping between words and
neurons.  The brain might have 10^8 neurons devoted to language, enough to
represent many copies of the different senses of a word and to learn new ones.
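
For example, each sense of "duck" can be a different activation pattern over
the same small set of feature units, with the context voting for one of them.
A toy sketch in Python (the feature values are hand-made for illustration, not
learned):

# Each sense of "duck" is a different activation pattern over the same
# (tiny) set of feature units; context words push activation toward one.
senses = {
    'duck/bird':   {'animal': 1.0, 'food': 0.3, 'motion': 0.1},
    'duck/crouch': {'animal': 0.0, 'food': 0.0, 'motion': 1.0},
}
context_features = {
    'eat':     {'food': 1.0, 'animal': 0.4},
    'doorway': {'motion': 1.0},
    'pond':    {'animal': 1.0},
}

def disambiguate(context_words):
    # Sum the feature activation contributed by the context, then pick
    # the sense whose pattern overlaps it most (dot product).
    ctx = {}
    for w in context_words:
        for f, v in context_features.get(w, {}).items():
            ctx[f] = ctx.get(f, 0.0) + v
    return max(senses, key=lambda s: sum(senses[s].get(f, 0.0) * v
                                         for f, v in ctx.items()))

print(disambiguate(['duck', 'eat']))       # duck/bird   (food context)
print(disambiguate(['duck', 'doorway']))   # duck/crouch (motion context)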

 Then you'll need to represent sequential information in such a way that 
 you can do something with it.  Recurrent neural nets suck very badly if 
 you actually try to use them for anything, so don't get fooled by their 
 Soren Song.

Yes, but I think they are necessary.  Lexical words, semantics, and grammar
all constrain each other.  Recurrent networks can oscillate or become chaotic.
 Even the human brain doesn't deal with this perfectly, so we have migraines
and epilepsy.

 Then you will need to represent layered representations:  concepts 
 learned from conjunctions of other concepts rather than layer-1 
 percepts.  Then represent action, negation, operations, intentions, 
 variables...

These are high level grammars, like learning how to convert word problems into
arithmetic or first order logic.  I think anything learned at the level of
higher education is going to require a huge network (beyond what is practical
now), but I think the underlying learning principles are the same.

 It is just not productive to focus on the computational complexity issues 
 at this stage:  gotta get a lot of mechanisms tried out before we can 
 even begin to talk about such stuff (and, as I say, I don't believe we 
 will really care even then).

I think it is important to estimate these things.  The analogy is that it is
useful to know that certain problems are hard or impossible regardless of any
proposed solution, like traveling salesman or recursive data compression.  If
we can estimate the complexity of language modeling in a similar way, I see no
reason not to.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-17 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
 
  One problem with some
  connectionist models is trying to assign a 1-1 mapping between words and
  neurons.  The brain might have 10^8 neurons devoted to language, enough to
  represent many copies of the different senses of a word and to learn new
 ones.
 
 But most of the nets I am talking about do not assign 1 neuron to one 
 concept:  they had three layers of roughly ten nodes each, and total 
 connectivity between layers (so 100 plus 100 connection weights).  It 
 was the *weights* that stored the data, not the neurons.  And the 
 concepts were stored across *all* of the weights.
 
 Ditto for the brain.  With a few thousand neurons, in three layers, we 
 could store ALL of the grapheme-phoneme correspondences in one entire 
 language.

That is true, but there are about 1000 times as many words as there are
graphemes or phonemes, so you need 1000 times as many neurons, or 10^6 times
as many connections.  (There are 10^6 times as many possible relations between
words as between graphemes and phonemes).

If it were as easy as you say, I think it would have been done by now.


  Then you will need to represent layered representations:  concepts 
  learned from conjunctions of other concepts rather than layer-1 
  percepts.  Then represent action, negation, operations, intentions, 
  variables...
  
  These are high level grammars, like learning how to convert word problems
 into
  arithmetic or first order logic.  I think anything learned at the level of
  higher education is going to require a huge network (beyond what is
 practical
  now), but I think the underlying learning principles are the same.
 
 Oh, I disagree entirely:  these are the basic things needed as the 
 *underpinning* of the grammar.  You need action for verbs, negation for 
 everything, operations for abstraction, etc. etc.

How do humans learn these things using only neurons that follow simple rules?

I think learning arithmetic or logic is similar to learning grammar.  For
example, you can learn to substitute "a + b" for "b + a" using the same type
of representation you might use to substitute "I gave Bob $10" with "Bob was
given $10 by me."
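
Both substitutions can be expressed as learned mappings between token
patterns.  A sketch in Python of what a shared mechanism could look like (the
two rules are written by hand here, and the I/me pronoun case is ignored; the
point is only that one pattern-rewriting representation covers both the
arithmetic and the grammatical case):

# A rule is (pattern, output).  Single uppercase letters are variables that
# bind to whatever token appears in that position; other tokens are literals.
rules = [
    (['A', '+', 'B'], ['B', '+', 'A']),                                # a + b -> b + a
    (['X', 'gave', 'Y', 'Z'], ['Y', 'was', 'given', 'Z', 'by', 'X']),  # active -> passive
]

def rewrite(tokens):
    for pattern, output in rules:
        if len(tokens) != len(pattern):
            continue
        bindings = {}
        for tok, pat in zip(tokens, pattern):
            if len(pat) == 1 and pat.isupper():
                bindings[pat] = tok          # variable binds to the token
            elif pat != tok:
                break                        # literal mismatch, try next rule
        else:
            return [bindings.get(p, p) for p in output]
    return tokens                            # no rule matched

print(rewrite(['a', '+', 'b']))              # ['b', '+', 'a']
print(rewrite(['I', 'gave', 'Bob', '$10']))  # ['Bob', 'was', 'given', '$10', 'by', 'I']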

Negation is hard to learn.  For example, if you read "Nutra-Sweet does not
cause stomach cancer," you might start to believe that it does.  We learn
negation more as an abstract symbol, e.g. "neither x nor y" means "not x and
not y."

When we build knowledge representation systems, we build logical operators
into the system as primitives because we don't know any other way to do it. 
Logic is hard even for humans to learn.  It is a high level language skill.  I
think it dooms the usual (but always unsuccessful) approach of building a
structured knowledge base and trying to tack on a natural language interface
later.

 But you cannot do any estimates like that until the algorithm itself is 
 clear:  there are no *algorithms* available for grammar learning, 
 nothing that describes the class of all possible algorithms that do 
 grammar learning.  Complexity calculations mean nothing for handwaving 
 suggestions about (eg) representing numbers of neurons:  they strictly 
 only apply to situations in which you can point to an algorithm and ask 
 how it behaves.

My original dissertation topic (until I changed it to get funding) was to do
exactly that.  I looked at about 30 different language models, comparing
compression ratio with model size, and projecting what size model would be
needed to compress text to the entropy estimated by Shannon in 1950 using
human text prediction (about 1 bit per character).  The graph is here:
http://cs.fit.edu/~mmahoney/dissertation/

It suggests very roughly 10^8 to 10^10 bits, in agreement with three other
estimates of 10^9 bits:
1. Turing's 1950 estimate, which he did not explain.
2. Landauer's estimate of human long term memory capacity based on memory
tests.
3. The approximate information content of all the language you are exposed to
through about age 20.

This estimate is independent of the algorithm, so it only predicts memory
requirements, not speed.  If you use a neural network, that is about 10^9
connections.  To train on 1 GB of text, you need about 10^18 operations, about
a year on a PC.  I think there are ways to optimize this, such as activating
only a small number of neurons at any one time, and other tricks, but of
course I am breaking the rule of getting it to work first and optimizing
later.
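
The arithmetic behind that estimate, spelled out (the sustained rate of
3 x 10^10 simple operations per second is my assumption about a well-optimized
implementation, not a measured figure):

connections = 1e9        # weights in the language model
training_bytes = 1e9     # ~1 GB of text; each input byte touches each connection once
operations = connections * training_bytes               # ~1e18 operations total

ops_per_second = 3e10    # assumed sustained rate for an optimized PC implementation
seconds_per_year = 3.15e7
print(operations / ops_per_second / seconds_per_year)   # roughly 1 (year)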

Also, it does not explain why the brain seems to use so much more memory and
processing than these estimates, higher by a factor of perhaps 10^4 to 10^6. 
But of course language evolved to fit our brains, not the other way around.

A lot of smart people are working on AGI, including many on this list.  I
don't believe the reason it hasn't been solved yet is because we are too dumb
to figure it out.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http

Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-16 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
 
  I doubt you could model sentence structure usefully with a neural network
  capable of only a 200 word vocabulary.  By the time children learn to use
  complete sentences they already know thousands of words after exposure to
  hundreds of megabytes of language.  The problem seems to be about O(n^2). 
 As
  you double the training set size, you also need to double the number of
  connections to represent what you learned.
  
  
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 The problem does not need to be O(n^2).
 
 And remember:  I used a 200 word vocabulary in a program I wrote 16 
 years ago, on a machine with only one thousandth of today's power.
 
 And besides, solving the problem of understanding sentences could easily 
 be done in principle with even a vocabulary as small as 200 words.
 
 Richard Loosemore.

What did your simulation actually accomplish?  What were the results?  What do
you think you could achieve on a modern computer?




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Shane Legg wrote:
  Ben (and others),
  
  My impression is that there is a general lack of understanding
  when it comes to AIXI and related things.  It seems that someone
  who doesn't understand the material makes a statement, which
  others then take as fact, and the cycle repeats.
  
  Part of the problem, I think, is that the material is difficult for
  people to fully understand.  Marcus knows this, and he also
  understands that while he is a brilliant theoretician, his skills
  in explaining complex ideas in the simplest way possible are
  not as strong.  To help address this, part of my PhD thesis is
  going to be, I hope, a very easy to understand explanation of
  AIXI and universal intelligence --- Marcus doesn't even want
  me to put any proofs into this part of the thesis, which is most
  unlike him!
  
  I am writing this chapter at the moment and I will let this list
  know when it has been completed and reviewed.  Hopefully
  then we can all focus on the real weaknesses of this work,
  rather than the imagined ones.
  
  Cheers
  Shane
 
 Shane,
 
 Thankyou for being patronizing.
 
 Some of us do understand the AIXI work in enough depth to make valid 
 criticism.
 
 The problem is that you do not understand the criticism well enough to 
 address it.
 
 
 Richard Loosemore.

Richard,

I looked at your 2006 AGIRI talk, the one I believe you referenced in our
previous discussion on the definition of intelligence,
http://www.agiri.org/forum/index.php?act=ST&f=21&t=137

You use the description "complex adaptive system", which I agree is a
reasonable definition of intelligence.  You also assert that mathematics is
useless for the analysis of complex systems.  Again I agree.  But I don't
understand your criticism of Shane's work.  After all, he is the one who
proved the correctness of your assertion.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604user_secret=8eb45b07


Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-15 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Tom McCabe [EMAIL PROTECTED] wrote:
  --- Matt Mahoney [EMAIL PROTECTED] wrote:
  Personally, I would experiment with
  neural language models that I can't currently
  implement because I lack the
  computing power.
  Could you please describe these models?
  
  Essentially models in which neurons (with time delays) respond to
 increasingly
  abstract language concepts: letters, syllables, words, grammatical roles,
  phrases, and sentence structures.  This is not really new.  Models like
 these
  have been proposed in the 1980's but were never fully implemented due to
 lack
  of computing power.  These constraints resulted in connectionist systems
 in
  which each concept mapped to a single neuron.  Such models can't learn
 well. 
  There is no mechanism for adding to the vocabulary, for instance.  I
 believe
  you need at least hundreds of neurons per concept, where each neuron may
  correlate weakly with hundreds of different concepts.  Exactly how many, I
  don't know.  That is why I need to experiment.
  
  One problem that bothers me is the disconnect between the information
  theoretic estimates of the size of a language model, about 10^9 bits, and
  models based on neuroanatomy, perhaps 10^14 bits.  Experiments might tell
 us
  what's wrong with our neural models.  But how to do such experiments?  A
 fully
  connected network of 10^9 connections trained on 10^9 bits of data would
  require about 10^18 operations, about a year on a PC.  There are
 optimizations
  I could do, such as activating only a small fraction of the neurons at one
  time, but if the model fails, is it because of these optimizations or
 because
  you really do need 10^14 connections, or the training data is bad, or
  something else?
 
 I was building connectionist models of language in the late 80s, early 
 90s, and your characterizations are a little bit off, here.
 
 We used distributed models in which single neurons certainly did not 
 correspond to single concepts.  They learned well, and there was no 
 problem getting new vocabulary items into them.  I was writing C code on 
 an early model Macintosh computer that was about 1000th the power of the 
 ones available today.  You don't really need hundreds of neurons per 
 concept:  a few hundred was the biggest net I ever built, and it could 
 cope with about 200 vocabulary items, IIRC.
 
 The *real* problems are: (1) encoding the structural aspects of sentences 
 in abstract ways, (2) encoding layered concepts (in which a concept 
 learned today can be the basis for new concepts learned tomorrow) and 
 (3) solving the type-token problem in such a way that the system can 
 represent more than one instance of a concept at once.
 
 In essence, my research since then has been all about finding a good way 
 to solve these issues whilst retaining the immense learning power of 
 those early connectionist systems.
 
 It's doable.  Just have to absorb ten tons of research material and then 
 spit it out in the right way whilst thinking outside the box.  All in a 
 day's work.  ;-)
 
 
 
 Richard Loosemore.

I doubt you could model sentence structure usefully with a neural network
capable of only a 200-word vocabulary.  By the time children learn to use
complete sentences they already know thousands of words, after exposure to
hundreds of megabytes of language.  The problem seems to be about O(n^2): as
you double the training set size, you also need to double the number of
connections to represent what you learned, so the total training cost (data
times connections) grows roughly quadratically.
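
(A back-of-the-envelope sketch of that scaling claim, in Python; the
one-connection-per-training-bit ratio is an assumption for illustration,
chosen to match the 10^9-connection, 10^9-bit, 10^18-operation figures quoted
earlier in this thread.)

    # Back-of-the-envelope training cost for a fully connected neural language
    # model whose number of connections grows linearly with the training data.
    def training_ops(train_bits, conns_per_bit=1.0):
        connections = conns_per_bit * train_bits   # model size, roughly O(n)
        return train_bits * connections            # total training work, O(n^2)

    for n in (1e6, 1e9):
        print("%.0e bits -> %.0e connections -> %.0e operations"
              % (n, n, training_ops(n)))
    # 10^9 bits of text -> about 10^18 operations, the "year on a PC" figure;
    # doubling the data roughly quadruples the total training work.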


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-14 Thread Matt Mahoney

--- Eugen Leitl [EMAIL PROTECTED] wrote:

 On Sun, May 13, 2007 at 05:23:53PM -0700, Matt Mahoney wrote:
 
  It is not that hard, really.  Each of the 10^5 PCs simulates about 10 mm^3
 of
 
 You know, repeating assertions doesn't make them any more true.
 
  brain tissue.  Axon diameter varies but is typically 1-2 microns.  This
 means
 
 Where have you pulled that number from? Why not um^3, or m^3, or a cubic
 lightyear?

I assumed you knew that the human brain has a volume of 1000 to 1500 cm^3.  If
you divide this among 10^5 processors then each processor would simulate a
cube about 2 to 2.5 mm on a side with a surface area of about 25-35 mm^2.  The
little cubes only need to communicate with their 6 neighbors, so you can map
the simulation onto a hierarchical network where most of the communication is
local.
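
(The arithmetic behind those numbers, as a quick sketch; the 1000-1500 cm^3
brain volume and 10^5 nodes are the figures assumed above, the rest is
straightforward geometry.)

    # Partition a 1000-1500 cm^3 brain among 1e5 nodes: cube size and faces.
    nodes = 1e5
    for brain_cm3 in (1000.0, 1500.0):
        vol_mm3 = brain_cm3 * 1e3 / nodes        # mm^3 per node (1 cm^3 = 1e3 mm^3)
        side_mm = vol_mm3 ** (1.0 / 3.0)         # cube edge length
        surface_mm2 = 6.0 * side_mm ** 2         # area shared with the 6 neighbors
        print("%.0f cm^3: %.0f mm^3/node, side %.1f mm, surface %.0f mm^2"
              % (brain_cm3, vol_mm3, side_mm, surface_mm2))
    # -> about 10-15 mm^3 per node, ~2.2-2.5 mm cubes, and roughly 28-37 mm^2
    #    of shared surface, matching the 25-35 mm^2 quoted above.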


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-14 Thread Matt Mahoney
--- Tom McCabe [EMAIL PROTECTED] wrote:
 Helen Keller at ~8 didn't have language, as she hadn't
 learned sign language and there was no other real
 means for her to learn grammar and sentence structure.
 Yet she was still clearly intelligent. If Babelfish
 was perfect- could pick up on every single grammatical
 detail and nuance- would it start learning French
 cooking or write a novel or learn how to drive or do
 any of that other stuff we associate with
 intelligence.

No, but as long as you define intelligence as "exactly like a human," we will
never have AGI.  I don't care if my calculator doesn't know how many fingers I
am holding up.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-13 Thread Matt Mahoney

--- Tom McCabe [EMAIL PROTECTED] wrote:

 
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
 
  
  --- Tom McCabe [EMAIL PROTECTED] wrote:
  
   You cannot get large amounts of computing power
  simply
   by hooking up a hundred thousand PCs for problems
  that
   are not easily parallelized, because you very
  quickly
   run into bandwidth limitations even with gigabit
   Ethernet. Parts of the brain are constantly
   communicating with one another; I would be very
   surprised if you could split up the brain
  effectively
   enough to be able to both run one tiny piece on a
  PC
   and have the PCs communicate effectively in
  realtime.
   
- Tom
  
  It is not that hard, really.  Each of the 10^5 PCs
  simulates about 10 mm^3 of
  brain tissue.  Axon diameter varies but is typically
  1-2 microns.  This means
  each bit of brain tissue has at most on the order of
  10^7 inputs and outputs,
  each carrying 10 bits per second of information, or
  100 Mb/s.  This was barely
  within Google's network capacity in 2000, and
  probably well within it now.
  http://en.wikipedia.org/wiki/Google_platform
 
 Hmmm...This is an interesting issue. Do you have a
 link to a paper on brain bandwidth?

I just googled "axon diameter" and found several references.  There is a wide
range, so I used the low end to be conservative and did the math.  I probably
should consider dendrites too, but these tend not to be very long.  I figure
it's close enough for an order-of-magnitude estimate.
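
(For the record, a sketch of that math; the 1-2 micron diameters and 10 bits/s
per fiber are the assumptions used earlier in the thread, the ~30 mm^2 face
comes from the cube geometry above, and since packing fraction is ignored the
result is an upper bound.)

    import math
    # Axons of ~1-2 um diameter crossing a ~30 mm^2 cube face, ~10 bits/s each.
    face_mm2 = 30.0
    bits_per_axon = 10.0                       # rough information rate per fiber
    for diam_um in (1.0, 2.0):
        area_um2 = math.pi * (diam_um / 2.0) ** 2
        axons = face_mm2 * 1e6 / area_um2      # 1 mm^2 = 1e6 um^2
        print("%.0f um axons: ~%.1e fibers, ~%.0f Mb/s per node"
              % (diam_um, axons, axons * bits_per_axon / 1e6))
    # -> roughly 1e7-4e7 fibers and ~100-400 Mb/s per node; the 100 Mb/s
    #    quoted earlier corresponds to the 2 um end of the range.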

  I think individuals and small groups trying to build
  AGI will have a hard time
  competing with Google due to the cost of hardware.
 
 Hardware cost will not be a primary issue. The cost of
 hardware decreases exponentially with Moore's Law; the
 cost of solving a whole tangle of confusing problems
 does not. Nobody is anywhere near the stage where they
 have a program to run and they're looking for a
 computer. It's like saying that anyone trying to build
 an airplane will find it impossible to compete with
 existing shipbuilders, because of their vast
 metalworking capacity.

It's true we can do theoretical work but the lack of computing power is
definitely an obstacle.  It has a strong effect on the direction of research. 
In the early days of AI, when hardware was inadequate by a factor of a billion,
we used symbolic approaches in narrow domains with hand-coded rules.  More
recently, when hardware was inadequate by only a factor of a million, we were
able to experiment with statistical approaches, machine learning, and low-level
vision and language models.  It is possible that a lot of the brain's computing power
is used to overcome the limitations of individual neurons (speed, noise,
reliability, fatigue) and we will find more efficient solutions.  This hasn't
happened yet, but I can't say that it won't.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-13 Thread Matt Mahoney

--- Tom McCabe [EMAIL PROTECTED] wrote:

 
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
  Language and vision are prerequisites to AGI. 
 
 No, they aren't, unless you care to suggest that
 someone with a defect who can't see and can't form
 sentences (eg, Helen Keller) is unintelligent.

Helen Keller had language.  One could argue that language alone is sufficient
for AI, as Turing did.  But everyone has a different opinion on what is AGI
and what isn't.

 Any future Friendly AGI isn't going to obey us exactly
 in every respect, because it's *more moral* than we
 are. Should an FAI obey a request to blow up the
 world?

That is what worries me.  I think it is easier to program an AGI for blind
obedience (its top-level goal is to serve humans) than to program it to make
moral judgments in the best interest of humans, without specifying what that
means.  I gave this example on Digg.  Suppose the AGI (being smarter than us)
figures out that consciousness and free will are illusions of our biologically
programmed brains, and that there is really no difference between a human
brain and a simulation of a brain on a computer.  We may or may not have the
technology for uploading, but suppose the AGI decides (for reasons we don't
understand) that it doesn't need it.  It might then conclude that destroying
the human race is in our best interest, or simply irrelevant.

We cannot rule out this possibility because a lesser intelligence cannot
predict what a greater intelligence will do.  If you measure intelligence
using algorithmic complexity, then Legg proved this formally. 
http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Or maybe an analogy would be more convincing.  Humans acting in the best
interests of their pets may put them down when they have a terminal disease,
or for other reasons the pets can't comprehend.  Who should make this decision?
What will happen when the AGI is as advanced over humans as humans are over
dogs or insects or bacteria?  Perhaps the smarter it gets, the less relevant
human life will be.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-10 Thread Matt Mahoney

--- Tom McCabe [EMAIL PROTECTED] wrote:

 
 --- Matt Mahoney [EMAIL PROTECTED] wrote:
 
  I posted some comments on DIGG and looked at the
  videos by Thiel and
  Yudkowsky.  I'm not sure I understand the push to
  build AGI with private
  donations when companies like Google are already
  pouring billions into the
  problem.
 
 Private companies like Google are, as far as I am
 aware, spending exactly $0 on AGI. The things Google
 is interested in, such as how humans process
 information and how they decide what is relevant, are
 very specific subsets of this goal in the same way
 that fire and iron are very specific subsets of
 the internal combustion engine.

Language and vision are prerequisites to AGI.  Google has an interest in
improving search results.  It already does a pretty good job with natural
language questions.  They would also like to return relevant images, video,
and podcasts without requiring humans to label them.  They want to filter porn
and spam.  They want to deliver relevant and personalized ads.  These are all
AI problems.  Google has billions to spend on these problems.

Google already has enough computing power to do a crude simulation of a
human brain, but of course that is not what they are trying to do.  Why would
they want to copy human motivations?

  Doing this well requires human
  capabilities such as language
  and vision, but does not require duplicating the
  human motivational system. 
  The top level goal of humans is to propagate their
  DNA.  The top level goal of
  machines should be to serve humans.
 
 You do realize how hard a time you're going to have
 defining that? Remember Asimov's First Law: A robot
 shall not harm a human or through inaction allow a
 human to come to harm? Well, humans are always hurting
 themselves through wars and such, and so the logical
 result is totalitarianism, which most of us would
 consider very bad.

I realize the problem will get harder as machines get smarter.  But right now
I don't see any prospect of a general solution.  It will have to be solved for
each new machine.  But there is nothing we can do about human evil.  If
someone wants to build a machine to kill people, well that is already a
problem.  The best we can do is try to prevent accidental harm.

  We have always
  built machines this way.
 
 Do I really need to explain what's wrong with the
 we've always done it that way argument? It hasn't
 gotten any better since the South used it to justify
 slavery.

I phrased it in the past tense because I can't predict the future.  What I
should say is that there is no reason to build machines to disobey their
owners, and I don't expect that we will do so in the future.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Matt Mahoney
--- Joshua Fox [EMAIL PROTECTED] wrote:

 AGI builders, what evidence do you have that your design will work?

None, because we have not defined what AGI is.

One definition of AGI is passing the Turing test.  That will not happen.  A
machine can just as easily fail by being too smart, too fast, or too obedient,
as it can by being not smart enough.  Machines have been smarter than humans
in some areas and less smart in others for the last 50 years.  Even a machine
that is superior to human intellect in every conceivable way would not be
mistaken for human.  There is no economic incentive to dumb down a machine
just to duplicate human limitations.

If AGI is not the Turing test, then what is it?  What test do you propose?

Without a definition, we should stop calling it AGI and focus on the problems
for which machines are still inferior to humans, such as language or vision.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Matt Mahoney

--- Eugen Leitl [EMAIL PROTECTED] wrote:

 On Tue, Apr 24, 2007 at 01:35:31PM -0700, Matt Mahoney wrote:
 
  None, because we have not defined what AGI is.
 
 AGI is like porn. I'll know it when I'll see it.

Not really.  You recognize porn because you have seen examples of porn and
not-porn.  If you give a test to people who have never seen porn, I think they
would fail.  But we will never run this test: the only subjects you are likely
to find who have never seen porn are children, and there are obvious ethical
concerns about showing it to them for the first time.

I also don't think you will recognize AGI.  You have never seen examples of
it.  Earlier I posted examples of Google passing the Turing test, but nobody
believes that is AGI.  If nothing is ever labeled AGI, then nothing ever will
be.


  One definition of AGI is passing the Turing test.  That will not happen. 
 A
  machine can just as easily fail by being too smart, too fast, or too
 obedient,
 
 The Turing test implies ability to deceive. If your system can't deceive a
 human,
 it has failed the test.

ELIZA has already passed.  So we can all quit and go home now.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Why do you think your AGI design will work?

2007-04-24 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 There is a difference between your version: achieving goals which can be
 done, if I understand you, by algorithms - and my goal-SEEKING, which is
 done by all animals, and can't be done by algorithms alone. It involves
 finding your way as distinct from just following the way set by programmed
 rules.

There is an algorithm.  We just don't know what it is.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Implications of an already existing singularity.

2007-03-30 Thread Matt Mahoney

--- Tom McCabe [EMAIL PROTECTED] wrote:

 If G were changing with time, then we'd see the Moon's
 orbit moving outward faster than the 10 cm/year or so
 caused by tides.
 
  - Tom

I agree there is no evidence of this.  But here is another mystery of physics.
The radius of a black hole's event horizon is where the escape velocity
equals the speed of light, 2Gm/r = c^2, i.e. r = 2Gm/c^2.  For the universe,
that radius is close to the size of the universe, r ~ Tc.  So why did the
universe (or large regions of it) not collapse into black holes when it was
much younger and denser?
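
(A rough check of that coincidence, as a sketch; the ~10^53 kg mass of the
observable universe is a commonly quoted estimate and is an assumption here,
not something derived in this thread.)

    # Schwarzschild radius r = 2Gm/c^2 for the mass of the observable universe,
    # compared with the Hubble radius c*T.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    T = 4.35e17          # age of the universe, s (~13.8 billion years)
    m = 1e53             # rough mass of the observable universe, kg (assumed)

    r_s = 2 * G * m / c**2
    r_hubble = c * T
    print("Schwarzschild radius: %.1e m" % r_s)       # ~1.5e26 m
    print("Hubble radius c*T:    %.1e m" % r_hubble)  # ~1.3e26 m
    # The two are the same order of magnitude, which is the point above.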

I believe that an observer approaching a black hole in free fall observes
nearby objects accelerating away in all directions.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Implications of an already existing singularity.

2007-03-30 Thread Matt Mahoney

--- Charles D Hixson [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Eugen Leitl [EMAIL PROTECTED] wrote:
 

  ...
 
  A proton is a damn complex system. Don't see how you could equal it with
 one
  mere bit.
  
 
  I don't.  I am equating one bit with a volume of space about the size of a
  proton.  The actual number of baryons in the universe is smaller, about
 10^80.
   If you squashed the universe flat, it would form a sheet about one proton
  thick.  
 
  But I am also pointing out a coincidence (or not) of physics.  But you
 will
  note that the volume of the universe is proportional to T^3, not T^2, so
 if
  the relation is not a coincidence, then either the properties of the
 proton or
  one of the other physical constants would not be constant.
 
  And BTW I agree that we cannot prove or disprove that the universe is a
  simulation.
 
 
  -- Matt Mahoney, [EMAIL PROTECTED]

 FWIW, you could cut down on the computational needs a whole lot if you 
 only simulated one brain and used lazy evaluation to derive anything it 
 might be experiencing.  (Where did all you Zombies come from?)
 
 For that matter, the simulation could have started only a few 
 nano-seconds ago and might stop now. ...
 
 Any assumption you make about the nature of the simulation that we might 
 be running on is unverifiable.  (Some of them are falsifiable.)

A while back I described 5 scenarios for a simulated universe in order of
decreasing algorithmic complexity, and therefore in increasing order of
likelihood (given a Solomonoff distribution).  But as the complexity
decreased, the amount of computation increased.  I concluded that the most
likely scenario was an enumeration of all Turing machines, whose algorithmic
complexity is K(N), the complexity of the set of natural numbers (very small).
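
(A minimal sketch of what "an enumeration of all Turing machines" means
computationally: dovetail over (program, step-budget) pairs so that every
program eventually gets every step budget; the print here is only a
placeholder standing in for a universal machine executing the program.)

    # Dovetailing schedule: every (program, step budget) pair is reached
    # eventually, so no non-halting program can block the enumeration.
    def dovetail(rounds):
        for n in range(1, rounds + 1):
            for program in range(n):
                yield program, n - program   # run program `program` for that many steps

    for program, steps in dovetail(4):
        print("run program %d for %d step(s)" % (program, steps))
    # A real implementation would replace the print with a universal machine
    # executing program number `program` for `steps` steps of its computation.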

And no, I don't know what is doing this computation (turtles all the way
down).  But it is a general property of agents in a simulation that they lack
the computational power to model their environment, whether finite or
infinite.  So it would be surprising if I did know.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Implications of an already existing singularity.

2007-03-29 Thread Matt Mahoney

--- Tom McCabe [EMAIL PROTECTED] wrote:

 
 --- Craig [EMAIL PROTECTED] wrote:
 
  Kurzweil already postulated this a while ago.
  Although I don't agree with his conclusions. He says
  that if any society were to attain the singularity
  then their presence would already be felt, and since
  we can feel no presence then essentially this proves
  that humans are the only sentient life forms in
  EXISTENCE. I wholeheartedly disagree with Kurzweil's
  reasoning in this matter, since he takes such a
  human perspective in regards to imagining an alien
  technology. I think his stance is very presumptuous
  on his part. For instance he assumes that we haven't
  felt their presence merely because there isn't
  anything to detect. When in fact he never considered
  that human senses or sciences may not be acute
  enough to detect them.
 
 Human senses, while crude, are good enough to detect a
 wholesale rearrangement of a large majority of the
 matter in the solar system.

A technology this advanced could also reprogram your neurons to make you
believe whatever it wanted.  There is no way you could detect this.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Entropy of the universe [WAS Re: [singularity] Implications of an already existing singularity.]

2007-03-28 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  *The entropy of the universe is of the order T^2 c^5/(hG) ~ 10^122 bits,
 where T
  is the age of the universe, c is the speed of light, h is Planck's
 constant
  and G is the gravitational constant.  By coincidence (or not?), each bit
 would
  occupy the volume of a proton.  (The physical constants do not depend on
 any
  particle properties).
 
 A small but crucial point:  this is the entropy of everything within the 
 horizon visible from *here*.  What about the stuff (possibly infinite 
 amounts of stuff) that lies beyond the curvature horizon?

In a simulation, you don't need to compute it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
 
  What I wanted was a set of non-circular definitions of such terms as 
  intelligence and learning, so that you could somehow *demonstrate* 
  that your mathematical idealization of these terms correspond with the 
  real thing, ... so that we could believe that the mathematical 
  idealizations were not just a fantasy.
  The last time I looked at a dictionary, all definitions are circular. 
 So
  you
  win.
  Sigh!
 
  This is a waste of time:  you just (facetiously) rejected the 
  fundamental tenet of science.  Which means that the stuff you were 
  talking about was just pure mathematical fantasy, after all, and nothing 
  to do with science, or the real world.
 
 
  Richard Loosemore.
  
  What does the definition of intelligence have to do with AIXI?  AIXI is an
  optimization problem.  The problem is to maximize an accumulated signal in
 an
  unknown environment.  AIXI says the solution is to guess the simplest
  explanation for past observation (Occam's razor), and that this solution
 is
  not computable in general.  I believe these principles have broad
  applicability to the design of machine learning algorithms, regardless of
  whether you consider such algorithms intelligent.
 
 You're going around in circles.
 
 If you were only talking about machine learning in the sense of an 
 abstract mathematical formalism that has no relationship to learning, 
 intelligence or anything going on in the real world, and in particular 
 the real world in which some of us are interested in the problem of 
 trying to build an intelligent system, then, fine, all power to you.  At 
 *that* level you are talking about a mathematical fantasy, not about 
 science.
 
 But you did not do that:  you made claims that went far beyond the 
 confines of a pure, abstract mathematical formalism:  you tried to 
 relate that to an explanation of why Occam's Razor works (and remember, 
 the original meaning of Occam's Razor was all about how an *intelligent* 
 being should use its intelligence to best understand the world), and you 
 also seemed to make inferences to the possibility that the real world 
 was some kind of simulation.
 
 It seems to me that you are trying to have your cake and eat it too.

I claim that AIXI has practical applications to machine learning.  I also
claim (implicitly) that machine learning has practical applications to the
real world.  Therefore, I claim that AIXI has practical applications to the
real world (i.e. as Occam's razor).

Further, because AIXI requires that the unknown environment be computable, I
claim that we cannot exclude the possibility that the universe is a
simulation.  If Occam's razor did not work in practice, then you could claim
that the universe is not computable, and therefore could not be a simulation.

This really has nothing to do with the definition of intelligence.  You can
accept Turing's definition, which would exclude all animals except Homo
sapiens.  You can accept a broader definition that would include machine
learning.  Both the human brain and linear regression algorithms make use of
Occam's razor.  I don't care if you call them intelligent or not.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:
 
 
  You radically overstate the expected capabilities of quantum computers.
  They
  can't even do NP-complete problems in polynomial time.
  http://scottaaronson.com/blog/?p=208
 
 
 What about a computer (classical will do) granted an infinity of cycles
 through, for example, a Freeman Dyson or Frank Tipler type mechanism? No
 matter how many cycles it takes to compute a particular simulated world, any
 delay will be transparent to observers in that world. It only matters that
 the computation doesn't stop before it is completed.

The computation would also require infinite memory (a Turing machine), or else
it would cycle.

Although our universe might be the product of a Turing machine, the physics of
our known universe will only allow finite memory.  The number of possible
quantum states of a closed system with finite size and mass is finite.  For
our universe (big bang model), the largest memory you could construct would be
on the order of c^5 T^2/(hG) ~ 10^122 bits, where c is the speed of light, T is
the age of the universe, h is Planck's constant, and G is the gravitational
constant.  (Coincidentally, each bit would occupy about the volume of a proton
or neutron.)
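
(Plugging in SI values as a sanity check of that figure; using hbar instead of
h, or a slightly different T, only shifts the result by a small factor.)

    # The bound c^5 T^2 / (h G), the ~10^122 bits quoted above.
    c = 2.998e8      # speed of light, m/s
    T = 4.35e17      # age of the universe, s (~13.8 billion years)
    h = 6.626e-34    # Planck's constant, J*s
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2

    bits = c**5 * T**2 / (h * G)
    print("c^5 T^2 / (h G) = %.1e" % bits)   # ~1e121
    # On the order of 10^121-10^122 bits, depending on the exact constants used.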

A quantum computer is weaker than a finite state machine.  A quantum computer
is restricted to time-reversible computation, so operations like bit
assignment or copying are not allowed.

And even if you had a Turing machine, you still could not compute a solution
to AIXI.  It is not computable, like the halting problem.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  What I wanted was a set of non-circular definitions of such terms as 
  intelligence and learning, so that you could somehow *demonstrate* 
  that your mathematical idealization of these terms correspond with the 
  real thing, ... so that we could believe that the mathematical 
  idealizations were not just a fantasy.
  
  The last time I looked at a dictionary, all definitions are circular.  So
 you
  win.
 
 Sigh!
 
 This is a waste of time:  you just (facetiously) rejected the 
 fundamental tenet of science.  Which means that the stuff you were 
 talking about was just pure mathematical fantasy, after all, and nothing 
 to do with science, or the real world.
 
 
 Richard Loosemore.

What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-03 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 What I wanted was a set of non-circular definitions of such terms as 
 intelligence and learning, so that you could somehow *demonstrate* 
 that your mathematical idealization of these terms correspond with the 
 real thing, ... so that we could believe that the mathematical 
 idealizations were not just a fantasy.

The last time I looked at a dictionary, all definitions are circular.  So you
win.

 P.S.   The above definition is broken anyway:  what about unsupervised 
 learning?  What about learning by analogy?

I should have specified supervised learning as an application of AIXI.  There
are subsets, H, of Turing machines for which there are efficient algorithms
for finding a small h in H that is consistent with the training data. 
Examples include decision trees, neural networks, polynomial regression,
clustering, etc.  However, AIXI does not necessarily imply learning.  There are
other approaches.
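
(A toy sketch, not from the thread, of what "finding a small h in H consistent
with the training data" looks like: enumerate a hypothesis class in order of a
crude description length and return the first hypothesis that agrees with
every labeled example.)

    # Toy Occam learner: hypotheses are threshold rules on integers, ordered by
    # a crude complexity measure (smaller thresholds = "simpler"); return the
    # first hypothesis consistent with all labeled examples.
    def hypotheses():
        for t in range(0, 100):                    # enumerate by "complexity"
            yield ("x >= %d" % t, lambda x, t=t: x >= t)
            yield ("x < %d" % t,  lambda x, t=t: x < t)

    def learn(examples):                           # examples: list of (x, label)
        for name, h in hypotheses():
            if all(h(x) == y for x, y in examples):
                return name
        return None

    print(learn([(2, False), (5, True), (9, True)]))   # -> "x >= 3"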


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-03 Thread Matt Mahoney
This discussion on whether the universe exists is interesting, but I think we
should be asking a different question: why do we believe that the universe
exists?  Or more accurately, why do we act as if we believe that the universe
exists?

I said earlier that humans believe that the universe is real, because those
that did not were removed from the gene pool.  But I wonder if the issue is
more fundamental.  Is it possible to program any autonomous agent
that responds to reinforcement learning (a reward/penalty signal) that does
not act as though its environment were real?  How would one test for this
belief?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Matt Mahoney
--- Ben Goertzel [EMAIL PROTECTED] wrote:
 Matt, I really don't see why you think Hutter's work shows that Occam's 
 Razor holds in any
 context except AI's with unrealistically massive amounts of computing 
 power (like AIXI and AIXItl)
 
 In fact I think that it **does** hold in other contexts (as a strategy 
 for reasoning by modest-resources
 minds like humans or Novamente), but I don't see how Hutter's work shows 
 this...

I admit Hutter did not make claims about machine learning frameworks or
Occam's razor, but we should not view his work in such a narrow context. 
Hutter's conclusions about the optimal behavior of rational agents were proven
for the following cases:

1. Unrestricted environments (in which case the solution is not computable),
2. Space and time bounded environments (in which case the solution is
intractable),
3. Subsets of (1) or (2) such that the environment is consistent with past
interaction.

But the same reasoning he used in his proofs could just as well be applied to
practical cases of machine learning for which efficient solutions are known. 
The proofs all use the fact that shorter Turing machines are more likely than
longer ones (a Solomonoff prior).

For example, Hutter does not tell us how to solve linear regression, fitting a
straight line to a set of points.  What Hutter tells us is two other things:

1. Linear regression is a good predictor, even though a higher-order
polynomial might have a better fit (because a low-order polynomial has lower
algorithmic complexity); see the sketch after this list.
2. Linear regression is useful, even though other machine learning algorithms
might be better predictors (because a general solution is not computable, so
we have to settle for a suboptimal solution).
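
(A toy illustration of point 1, as a sketch in Python with numpy; the data is
synthetic and the exact numbers depend on the noise seed.)

    import numpy as np
    # Noisy points from a true line y = 2x + 1.
    rng = np.random.RandomState(0)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + 1 + 0.1 * rng.randn(10)
    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test + 1

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print("degree %d: train MSE %.4f, test MSE %.4f"
              % (degree, train_err, test_err))
    # The degree-9 fit has lower training error but (typically) higher test
    # error: the shorter hypothesis predicts better.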

So what I did was two things.  First, I used the fact that Occam's razor works
in both simulated and real environments (based on extensions of AIXI and
empirical observations respectively) to argue that the universe is consistent
with a simulation.  (This is disturbing because you are not programmed to
think this way).

Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones.  My conclusion was that
bizarre behavior or a sudden end is unlikely, because such events would not
occur in the simplest programs.  This ought to at least be reassuring.

-- Matt Mahoney


-- Matt Mahoney, [EMAIL PROTECTED]


