RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Ed Porter
Hector,

 

I skimmed your paper linked to in the post below.  

 

From my quick read, it appears the only meaningful way it suggests a brain
might be infinite is that, since the brain uses analogue values --- such as
synaptic weights, or variable time intervals between spikes (and presumably
since those analogue values would be determined by so many factors, each of
which might modify their values slightly) --- the brain would be capable of
computing many values, each of which could arguably have infinite gradation
in value.  So arguably its computations would be infinitely complex, in
terms of the number of bits that would be required to describe them exactly.

 

Of course, it is not clear the universe itself supports infinitely fine
gradation in values, which your paper admits is an open question.

 

But even if the universe and the brain did support infinitely fine
gradations in value, it is not clear that computing with weights or signals
capable of such infinitely fine gradations necessarily yields computation
that is meaningfully more powerful, in terms of the sense of experience it
can provide --- unless the system has mechanisms that can meaningfully encode
and decode much more information in such infinite variability.  You can only
communicate over a very broad-bandwidth communication medium as much as your
transmitting and receiving mechanisms can encode and decode.

 

For example, it is not clear that a high-definition TV capable of providing
an infinite degree of variation in its colors, rather than only, say, 8, 16,
32, or 64 bits for each primary color, would provide any significantly
greater degree of visual experience, even though one could claim the TV was
sending out a signal of infinite complexity.
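(To make the bit-depth point concrete, here is a minimal sketch in Python ---
my own illustration, not anything from the thread; the ~1% perceptual
threshold is an assumption chosen purely for illustration:)

def max_quantization_error(bits):
    # Worst-case quantization error for a channel normalized to [0, 1].
    levels = 2 ** bits
    return 1.0 / (2 * levels)

PERCEPTUAL_THRESHOLD = 0.01  # assumed ~1% just-noticeable difference

for bits in (4, 8, 16, 32, 64):
    err = max_quantization_error(bits)
    visible = "visible" if err > PERCEPTUAL_THRESHOLD else "invisible"
    print(f"{bits:2d} bits/channel: max error {err:.2e} ({visible})")

(Past 8 bits or so, the error is already far below the assumed threshold, so
an "infinitely graded" color signal buys no further visual experience.)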

 

I have read, and been told by neural net designers, that typical neural nets
operate by dividing a high-dimensional space into subspaces.  If this is
true, then it is not clear that merely increasing the resolution at which
such neural nets were computed, say beyond 64 bits, would change the number
of subspaces that could be represented with a given number, say 100 billion,
of nodes --- or that the minute changes in boundaries, or the occasional
difference in tipping points, that might result from infinite-precision math,
if it were possible, would be of any great significance with regard to the
overall capabilities of the system.  Thus, it is not clear that infinite
resolution in neural weights and spike timing would greatly increase the
meaningful (i.e., grounded), rememberable, and actionable number of states
the brain could represent.
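(A small sketch of the subspace-counting point, in Python --- my own
illustration, not from the thread: N hyperplanes in general position divide
d-dimensional space into at most sum_{i=0..d} C(N,i) regions, a bound in
which weight precision does not appear at all:)

from math import comb

def max_regions(num_hyperplanes, dim):
    # Maximum number of cells N hyperplanes in general position can cut
    # R^d into; note that nothing here depends on weight precision.
    return sum(comb(num_hyperplanes, i) for i in range(dim + 1))

# 100 "neurons" treated as hyperplanes over a 3-dimensional input space:
print(max_regions(100, 3))  # 166751 regions, with 16-bit or 512-bit weights alike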

 

My belief --- and it is only a belief at this point in time --- is that the
complexity a finite human brain could deliver is so great --- arguably equal
to a billion (1,000 million) simultaneous DVD signals that interact with each
other and with memories --- that such a finite computation is enough to create
the sense of experiential awareness we humans call consciousness.
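(A rough back-of-envelope for that figure, in Python --- my own arithmetic;
the DVD bitrate, synapse count, and event rate are all order-of-magnitude
assumptions, not data from this thread:)

DVD_BITRATE = 5e6                # assumed ~5 Mbit/s per DVD video stream
NUM_STREAMS = 1e9                # a billion simultaneous streams
SYNAPSES = 1e14                  # a common order-of-magnitude estimate
EVENTS_PER_SYNAPSE_PER_SEC = 10  # assumed average signaling rate

stream_bits = DVD_BITRATE * NUM_STREAMS                  # ~5e15 bits/s
synaptic_events = SYNAPSES * EVENTS_PER_SYNAPSE_PER_SEC  # ~1e15 events/s
print(f"{stream_bits:.0e} bits/s vs ~{synaptic_events:.0e} synaptic events/s")

(Both land around 10^15 per second --- staggeringly large, yet perfectly
finite.)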

 

I am not aware of anything that modern science says with authority about
external reality --- or that I have sensed from my own experiences of my own
consciousness --- that would seem to require infinite resources.

 

Something can have a complexity far beyond human comprehension, far beyond
even the most hyperspeed altered imaginings of a drugged mind, arguably far
beyond the complexity of the observable universe, without requiring for its
representation more than an infinitesimal fraction of anything that could be
accurately called infinite.

 

Ed Porter

 

-Original Message-
From: Hector Zenil [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 30, 2008 10:42 PM
To: agi@v2.listbox.com
Subject: Re:  RE: FW: [agi] A paper that actually does solve the problem
of consciousness

 

On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

 Not really.  They're limitations on what measurements of physical
 reality can be simultaneously made.

 Quantum systems can compute *exactly* the class of Turing computable
 functions ... this has been proved according to standard quantum
 mechanics math.  However, there are some things they can compute
 faster than any Turing machine, in the average case but not the worst
 case.

Sorry, I am not really following the discussion, but I just read that
there is some misinterpretation here. It is the standard model of
quantum computation that effectively computes exactly the Turing
computable functions, but that model was almost hand-tailored to do so,
perhaps because adding to the theory an assumption of continuum
measurability (i.e. distinguishing infinitely close quantum states) was
already too much. But that is far from the claim that quantum
systems can compute exactly the class of Turing 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread J. Andrew Rogers


On Dec 2, 2008, at 8:31 AM, Ed Porter wrote:
From my quick read, it appears the only meaningful way it suggests a
brain might be infinite is that, since the brain uses analogue
values --- such as synaptic weights, or variable time intervals
between spikes (and presumably since those analogue values would be
determined by so many factors, each of which might modify their
values slightly) --- the brain would be capable of computing many
values, each of which could arguably have infinite gradation in
value.  So arguably its computations would be infinitely complex, in
terms of the number of bits that would be required to describe them
exactly.


Of course, it is not clear the universe itself supports infinitely
fine gradation in values, which your paper admits is an open question.



The universe has a noise floor (see: Boltzmann, Planck, et al.), from
which it follows that all analog values are equivalent to some
trivial number of bits. Since digital deals with the case of analog
at the low end of signal-to-noise ratios, digital usually denotes a
proper subset of analog, making the equivalence unsurprising.
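(The standard way to make this quantitative is the Shannon-Hartley theorem,
C = B * log2(1 + S/N): any channel with finite bandwidth and a noise floor
carries only finitely many bits per second. A minimal Python sketch, with
the bandwidth and SNR values chosen purely for illustration:)

import math

def channel_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley capacity in bits/s; finite for any finite SNR.
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (40 / 10)  # a 40 dB signal-to-noise ratio, as a linear power ratio
print(f"{channel_capacity(1e3, snr):,.0f} bits/s")  # ~13,288 bits/s for a 1 kHz channel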


The obvious argument against infinite values is that the laws of  
thermodynamics would no longer apply if that were the case.  Given the  
weight of the evidence for thermodynamics being valid, it is probably  
prudent to stick with models that work when restricted to a finite  
dynamic range for values.



The fundamental non-equivalence of digital and analog is one of those  
hard-to-kill memes that needs to die, along with the fundamental non- 
equivalence of parallel and serial computation. Persistent buggers,  
even among people who should know better.


Cheers,

J. Andrew Rogers





RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Ed Porter
J.,

 

Your arguments seem to support my intuitive beliefs, so my instinctual
response is to be thankful for them.  

 

But I have to sheepishly admit I don't totally understand them.

 

Could you please give me a simple explanation of why it is an "obvious
argument against infinite values ... that the laws of thermodynamics would
no longer apply if that were the case."

 

I am not disagreeing, just not understanding.  For example, I am not
knowledgeable enough about the subject to understand why the laws of
thermodynamics could not apply in a classical model of the world in which
atoms and molecules have positions and velocities defined with infinite
precision --- which, I think, is what many people who believed in atoms and
molecules thought for years before the rise of quantum mechanics.

 

In addition --- although I do understand how noise limits what can be
encoded and decoded as intended communication between an encoding and a
decoding entity, even on a hypothetical infinite-bandwidth medium --- it is
not clear to me that, at some physical level, the noise itself might not be
considered information, and might not play a role in the computations of
reality.

 

That is not an argument that proves infinite variability, but it might be
viewed as an argument that limits the range of applicability of your
noise-floor argument.  As anybody who has listened to a noisy radio, or
watched noisy TV reception, can hear or see, noise can be perceived as
signal, even if not an intended one.

 

To the extent that I am wrong in this devil's advocacy, please enlighten me.


 

(Despite his obvious deficiencies, the devil is a most interesting client,
and I am sure I have offended many people --- but, I hope, not you --- by
arguing his cause too strenuously out of intellectual curiosity.)

 

Ed Porter

 

 

-Original Message-
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 02, 2008 4:15 PM
To: agi@v2.listbox.com
Subject: Re:  RE: FW: [agi] A paper that actually does solve the problem
of consciousness

 

On Dec 2, 2008, at 8:31 AM, Ed Porter wrote:

 From my quick read, it appears the only meaningful way it suggests a
 brain might be infinite is that, since the brain uses analogue
 values --- such as synaptic weights, or variable time intervals
 between spikes (and presumably since those analogue values would be
 determined by so many factors, each of which might modify their
 values slightly) --- the brain would be capable of computing many
 values, each of which could arguably have infinite gradation in
 value.  So arguably its computations would be infinitely complex, in
 terms of the number of bits that would be required to describe them
 exactly.

 Of course, it is not clear the universe itself supports infinitely
 fine gradation in values, which your paper admits is an open question.

The universe has a noise floor (see: Boltzmann, Planck, et al.), from
which it follows that all analog values are equivalent to some
trivial number of bits. Since digital deals with the case of analog
at the low end of signal-to-noise ratios, digital usually denotes a
proper subset of analog, making the equivalence unsurprising.

The obvious argument against infinite values is that the laws of
thermodynamics would no longer apply if that were the case.  Given the
weight of the evidence for thermodynamics being valid, it is probably
prudent to stick with models that work when restricted to a finite
dynamic range for values.

The fundamental non-equivalence of digital and analog is one of those
hard-to-kill memes that needs to die, along with the fundamental
non-equivalence of parallel and serial computation. Persistent buggers,
even among people who should know better.

Cheers,

J. Andrew Rogers



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Hector Zenil
Hi Ed,

I am glad you have read the paper in such detail. You have
summarized quite well what it is about. I have no objection to the
points you make. It is only important to bear in mind that the paper
studies the possible computational power of the mind by
using the model of an artificial neural network. The question of
whether the mind is something else was not in the scope of that paper.
Assuming that the brain is a neural network, we wanted to see what
features might take the neural network to a certain computational
power. We found, effectively, that such power can come either from an
encoding at the level of the neuron (space, e.g. a natural encoding of
a real number) or from an encoding in neuron firing times. In either
case, to reach any computational power beyond the Turing limit one
would need infinite or infinitesimal space or time, assuming finite
brain resources (number of neurons and connections). My personal
opinion (perhaps not reflected in the paper itself) is that such
super-Turing capabilities do not really hold, but the idea was to
explore all the possibilities.

It is also very important to highlight that such power beyond the
computational power of Turing machines does not require communicating,
encoding or decoding any infinite value in order to compute a
non-computable function. It suffices to posit a natural encoding
either in the space or the time in which the neurons work, and then to
pose questions in the form of characteristic functions of a
non-computable set. A characteristic function is of the yes/no type,
so it only needs to transmit a finite amount of information, even if
producing the answer required an infinite amount. So a set of neurons
might be capable of taking advantage of infinitesimals and answering
yes or no to a non-computable question --- even if I think that is not
actually the case, it might be. That seems perhaps compatible with your
ideas about consciousness.
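(To make the setup concrete, here is a toy Python sketch of this
Siegelmann-style idea --- entirely my own illustration, not code from the
paper: one real-valued "weight" whose k-th binary digit is the answer to the
k-th yes/no question. A genuinely non-computable weight cannot be written
down, so the stub below substitutes a computable toy predicate just so the
demo runs:)

def weight_digit(k):
    # Stand-in for the k-th binary digit of the hypothetical real weight.
    # Here it is the computable predicate "is k prime?"; the hypercomputing
    # claim requires a non-computable digit sequence (e.g. the halting set).
    if k < 2:
        return 0
    return int(all(k % d for d in range(2, int(k ** 0.5) + 1)))

def answer(k):
    # The "neuron" emits a single yes/no bit, however hard the question.
    return "yes" if weight_digit(k) else "no"

print([f"{k}:{answer(k)}" for k in range(2, 12)])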

- Hector



On Tue, Dec 2, 2008 at 5:31 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Hector,



 I skimmed your paper linked to in the post below.



 From my quick read, it appears the only meaningful way it suggests a brain
 might be infinite is that, since the brain uses analogue values --- such as
 synaptic weights, or variable time intervals between spikes (and presumably
 since those analogue values would be determined by so many factors, each of
 which might modify their values slightly) --- the brain would be capable of
 computing many values, each of which could arguably have infinite gradation
 in value.  So arguably its computations would be infinitely complex, in
 terms of the number of bits that would be required to describe them exactly.



 Of course, it is not clear the universe itself supports infinitely fine
 gradation in values, which your paper admits is an open question.



 But even if the universe and the brain did support infinitely fine
 gradations in value, it is not clear that computing with weights or signals
 capable of such infinitely fine gradations necessarily yields computation
 that is meaningfully more powerful, in terms of the sense of experience it
 can provide --- unless the system has mechanisms that can meaningfully encode
 and decode much more information in such infinite variability.  You can only
 communicate over a very broad-bandwidth communication medium as much as your
 transmitting and receiving mechanisms can encode and decode.



 For example, it is not clear that a high-definition TV capable of providing
 an infinite degree of variation in its colors, rather than only, say, 8, 16,
 32, or 64 bits for each primary color, would provide any significantly
 greater degree of visual experience, even though one could claim the TV was
 sending out a signal of infinite complexity.



 I have read, and been told by neural net designers, that typical neural nets
 operate by dividing a high-dimensional space into subspaces.  If this is
 true, then it is not clear that merely increasing the resolution at which
 such neural nets were computed, say beyond 64 bits, would change the number
 of subspaces that could be represented with a given number, say 100 billion,
 of nodes --- or that the minute changes in boundaries, or the occasional
 difference in tipping points, that might result from infinite-precision math,
 if it were possible, would be of any great significance with regard to the
 overall capabilities of the system.  Thus, it is not clear that infinite
 resolution in neural weights and spike timing would greatly increase the
 meaningful (i.e., grounded), rememberable, and actionable number of states
 the brain could represent.



 My belief --- and it is only a belief at this point in time --- is that the
 complexity a finite human brain could deliver is so great --- arguably equal
 to a billion (1,000 million) simultaneous DVD signals that interact with each
 other and with memories --- that such a finite computation is enough to create
 the sense of experiential awareness we humans call consciousness.



 I am not aware of 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Hector Zenil
Suppose that the gravitational constant is a non-computable number (it
might be; we don't know, because, as you say, we can only measure with
finite precision). Planets compute G as part of the law of gravitation
that rules their movement (you can of course object that G is part of
a model that has been replaced by another theory --- General
Relativity --- and that neither one nor the other can be taken as a full
and ultimate description, but then I can change my argument to
whichever theory turns out to be the ultimate and true one, even if we
never have access to it). Planets don't necessarily have to encode and
decode G, because it is taken for granted; it is already naturally
encoded, and they just follow the law in which it is given. In the same
way, if a non-computable number is already encoded in the brain, then to
compute with such a real number the neuron would not necessarily need to
encode or decode the number. The neuron could then carry out a
non-computable computation (no measurement involved) and then give a
no/yes answer, just as a planet would or would not hit another planet by
following a non-computable gravitational constant.

But even when measurement is needed, only the most significant part
relevant to the computation being performed is actually needed, since we
are not interested in infinitely long computations; that is also why,
even though noise is of course a practical problem, it is not an
insurmountable one. Now you can argue that if only a finite part (the
most significant part) of the real number is necessary to perform the
computation, it would have sufficed to store a rational (computable)
number from the beginning, rather than a non-computable one. However, it
is this potential access to an infinite number that makes the system
more powerful, not the ability to make infinite-precision measurements.
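(A small Python illustration of the "only the most significant part matters"
point --- my own sketch, with the mass and radius made up for illustration: a
finite-precision prediction needs only finitely many digits of the constant,
however many digits the constant "really" has:)

from decimal import Decimal, getcontext

getcontext().prec = 50
G = Decimal("6.6743e-11")   # gravitational constant, finitely many known digits
M = Decimal("5.972e24")     # illustrative mass (roughly Earth's, kg)
r = Decimal("7e6")          # illustrative orbital radius (m)

def acceleration(g):
    return g * M / (r * r)  # a = G*M / r^2

exact = acceleration(G)
for places in (2, 4, 6):
    # Round G's mantissa to a few decimal places and compare predictions.
    g_rounded = Decimal(f"{float(G):.{places}e}")
    err = abs(acceleration(g_rounded) - exact) / exact
    print(f"{places} mantissa places of G -> relative error {float(err):.1e}")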

For more about these results you can take a look at Hava Siegelmann's
work on recurrent analog neural networks, which, more than a work on
hypercomputation, I consider a work on computational complexity with
pretty nice scientific results. On the other hand, I would say that I
have many objections, mainly those pointed out by Davis in his paper
"The Myth of Hypercomputation," which I also recommend in case you
haven't read it. The only thing that, from my point of view, Davis
trivializes is that whether there are non-computable numbers in nature,
whose computational power could be taken advantage of, is an open
question, so it is still plausible.


On Wed, Dec 3, 2008 at 12:17 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Hector,



 Thank you for your reply saying my description of your paper was much better
 than clueless.



 I am, however, clueless about how to interpret the second paragraph of your
 reply (all of which is copied below).



 For example, I am confused by your statements that:



 "such power beyond the computational power of Turing machines does not
 require communicating, encoding or decoding any infinite value in order to
 compute a non-computable function."



 considering that you then state:



 "A characteristic function is of the yes/no type, so it only needs to
 transmit a finite amount of information, even if producing the answer
 required an infinite amount."



 What I don't understand is how a system



 "does not require communicating, encoding or decoding any infinite value in
 order to compute a non-computable function"



 if its



 answer required an infinite amount [of information].



 It seems like the computing of an infinite amount of information was
 required somewhere, even if not in communicating the answer; so how does
 such a system not, as you said,



 "require communicating, encoding or decoding any infinite value in order to
 compute a non-computable function"



 even if only internally?



 Ed Porter





 -Original Message-
 From: Hector Zenil [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, December 02, 2008 5:14 PM
 To: agi@v2.listbox.com
 Subject: Re:  RE: FW: [agi] A paper that actually does solve the problem
 of consciousness



 Hi Ed,

 I am glad you have read the paper in such detail. You have
 summarized quite well what it is about. I have no objection to the
 points you make. It is only important to bear in mind that the paper
 studies the possible computational power of the mind by using the
 model of an artificial neural network. The question of whether the
 mind is something else was not in the scope of that paper. Assuming
 that the brain is a neural network, we wanted to see what features
 might take the neural network to a certain computational power. We
 found, effectively, that such power can come either from an encoding
 at the level of the neuron (space, e.g. a natural encoding of a real
 number) or from an encoding in neuron firing times. In either case,
 to reach any computational power beyond the Turing limit one would
 need infinite or infinitesimal space or time, assuming finite brain
 resources (number of neurons and 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Ben Goertzel
Hector,

Yes, it's possible that the brain uses uncomputable neurons to predict
uncomputable physical dynamics in the observed world.

However, even if this is the case, **there is no possible way to
verify or falsify this hypothesis using science**, if science is
construed to involve evaluation of theories based on finite sets of
finite-precision data ...

So, this hypothesis has much the same status as the hypothesis that
the brain has an ineffable soul inside it, which can never be
measured.  This is certainly possible too, but we have no way to
verify or falsify it using science.

You may say the hypothesis of neural hypercomputing is valid in the sense
that it helps guide you to interesting, falsifiable theories.  That's
fine.  But then you must admit that the hypothesis of souls could be
valid in the same sense, right?  It could guide some other people to
interesting, falsifiable theories -- even though, in itself, it stands
outside the domain of scientific validation/falsification.

It is possible that the essence of intelligence lies in something that
can't be scientifically addressed.  If so, no matter how many
finite-precision measurements of the brain we record and analyze,
we'll never get at the core of intelligence that way.  So, in that
hypothesis, if we succeed at making AGI, it will be due to some
non-scientific, non-computable force somehow guiding us.  However, I
doubt this is the case.  I strongly suspect the essence of
intelligence lies in properties of systems that can be measured, and
therefore *not* in hypercomputing.

Consciousness is another issue -- I do happen to think there is an
aspect of consciousness that, like hypercomputing, lies outside the
realm of science.  However, I don't fall for the argument that X and Y
must be equal just because they're both outside the realm of
science...

-- Ben G

On Tue, Dec 2, 2008 at 6:54 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 Suppose that the gravitational constant is a non-computable number (it
 might be, we don't know because as you say, we can only measure with
 finite precision). Planets compute G as part of the law of gravitation
 that rules their movement (you can of course object, that G is part of
 a model that has been replaced by a another theory --General
 Relativity-- and that neither one nor the other can be taken as full
 and ultimate descriptions, but then I can change my argument to
 whichever theory turns out to be the ultimate and true, even if we
 never have access to it). Planets don't necessarily have to encode and
 decode G, because it is given by granted, it is already naturally
 encoded, they just follow the law in which it is given. The same, if a
 non-computable number is already encoded in the brain, to compute with
 such a real number the neuron would not need necessarily to encode or
 decode the number. The neuron could then carry out a non-computable
 computation (no measurement involved) and then give a no/yes
 answer, just as a planet would hit or not another a planet by
 following a non-computable gravitational constant.

 But even in the case of need of measurement, it is only the most
 significant part relevant to the computation that is performing that
 is actually needed, since we are not interested in infinitely long
 computations, that's also why, even when noise is of course a
 practical problem, it is not an infrangible one. Now you can argue
 that if only a finite (the most significant part) of the real number
 is necessary to perform the computation, it would have sufficed to
 store only a rational (computable) number since the beginning, rather
 than a non-computable number. However, it is this potential access to
 an infinite number that makes the system more powerful and not the
 fact of be able to infinite precision measurements.

 For more about these results you can take a look at Hava Siegelman's
 work on Recurrent Analogical Neural Networks, which more than a work
 on hypercomputation, I consider it a work on computational complexity
 with pretty nice scientific results. On the other hand, I would say
 that I may have many objections, mainly those pointed out by Davis in
 his paper The Myth of Hypercomputation, which I also recommend you in
 case you haven't read it. The only thing that from my point of view
 Davis is trivializing is that whether there are non-computable numbers
 in nature, taking advantage of their computational power, is an open
 question, so it is still plausible.


 On Wed, Dec 3, 2008 at 12:17 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Hector,



 Thank you for your reply saying my description of your paper was much better
 than clueless.



 I am, however, clueless about how to interpret the second paragraph of your
 reply (all of which is copied below).



 For example, I am confused by your statements that:



 such a power beyond the computational power of Turing machines, does not
 require to communicate, encode or decode any infinite value in order to
 compute a 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Hector Zenil
On Wed, Dec 3, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hector,

 Yes, it's possible that the brain uses uncomputable neurons to predict
 uncomputable physical dynamics in the observed world.

 However, even if this is the case, **there is no possible way to
 verify or falsify this hypothesis using science**, if science is
 construed to involve evaluation of theories based on finite sets of
 finite-precision data ...

 So, this hypothesis has much the same status as the hypothesis that
 the brain has an ineffable soul inside it, which can never be
 measured.  This is certainly possible too, but we have no way to
 verify or falsify it using science.

 You may say the hypothesis of neural hypercomputing is valid in the sense
 that it helps guide you to interesting, falsifiable theories.  That's
 fine.  But then you must admit that the hypothesis of souls could be
 valid in the same sense, right?  It could guide some other people to
 interesting, falsifiable theories -- even though, in itself, it stands
 outside the domain of scientific validation/falsification.


I understand the point, but I insist that it is not that trivial. You
could apply the same argument against the automated proof of the
four-color theorem. Since there is no human capable of verifying it in
a lifetime (and even if a group of people tried to verify it, no single
mind would ever have the intellectual capacity to become convinced on
its own), the four-color proof would then not be science... and me, I am
pretty convinced that it is, as are computer science and proof theory.
Actually I think that kind of proof and approach to science will happen
more and more often, as we can already witness.

Just as the four-color theorem was proved and then verified by another
computer program, the outcome of a hypercomputer could be verified by
another hypercomputer. And just as in the finite case of the
four-color theorem, you would not be able to verify it except by
trusting another system.

I am not a hypercomputationalist --- quite the opposite! But closed
definitions of what science is, and people claiming to have the right
definition of science, look pretty narrow to me. However, if I were the
director of a computer science department, I probably wouldn't put any
money into hypercomputation research. But even if it is just
philosophy, that doesn't make it less valid or less plausible. On the
other hand, the scientific arguments against it often sound very
weak --- perhaps just as weak as the arguments in favor, and sometimes
even weaker.

What if a hypercomputer provided you, each time you asked, the answer to
whether a given Turing machine halts? You effectively cannot verify that
it works for all cases (this is of course the problem of induction,
widespread in science in general), but I am pretty sure you would
believe that it is what it says it is if, for any Turing machine, as
complicated as you may want, it told you whether it halts and when
(you could argue, for example, that it is just simulating the Turing
machine extremely fast, but let's suppose it does it instantaneously).
How would this predictive power make it less scientific than, say,
quantum mechanics? To me, that would be much more scientific than
people doing string theory...
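(The verification protocol described here can at least be sketched --- a
Python toy of my own, not Hector's: spot-check a claimed halting oracle
against cases we can settle by bounded simulation. Agreement is inductive
evidence, never proof, which is exactly the induction point above:)

def simulate(n, max_steps):
    # Ground truth by bounded simulation for toy "programs": program n
    # halts after n steps if n is odd, and loops forever if n is even.
    # A True result is certain; False is inconclusive for real programs.
    return n % 2 == 1 and n <= max_steps

def claimed_oracle(n):
    # Stand-in for the black box's instantaneous yes/no "halts?" answer.
    return n % 2 == 1

confirmable = [n for n in range(1, 50) if simulate(n, max_steps=1000)]
print("oracle correct on every confirmable case:",
      all(claimed_oracle(n) for n in confirmable))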

The same goes for noise. People tend to think of it as a constraint,
but some recent results in computational complexity, and serious
interpretations of them, suggest that, as I was saying before, if
nature is indeterministic, noise is actually a computation carried out
by something more powerful than a universal Turing machine (even if it
seems meaningless), so by itself, rather than subtracting
computational power, it might add to it! One would of course need to
reconcile this with thermodynamics, but there are actually some
interpretations that would easily allow this reading of noise.
However, I don't think I will take up that thread of discussion.

Together with the bibliography I've provided before, I also recommend
a very recent paper by Karl Svozil in the Complex Systems journal
on whether hypercomputation is falsifiable.


 It is possible that the essence of intelligence lies in something that
 can't be scientifically addressed.  If so, no matter how many
 finite-precision measurements of the brain we record and analyze,
 we'll never get at the core of intelligence that way.  So, in that
 hypothesis, if we succeed at making AGI, it will be due to some
 non-scientific, non-computable force somehow guiding us.  However, I
 doubt this is the case.  I strongly suspect the essence of
 intelligence lies in properties of systems that can be measured, and
 therefore *not* in hypercomputing.

 Consciousness is another issue -- I do happen to think there is an
 aspect of consciousness that, like hypercomputing, lies outside the
 realm of science.  However, I don't fall for the argument that X and Y
 must be equal just because they're both outside the realm of
 science...

 -- Ben G

 On Tue, Dec 2, 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-02 Thread Ben Goertzel
Hi Hector,

 You may say the hypothesis of neural hypercomputing is valid in the sense
 that it helps guide you to interesting, falsifiable theories.  That's
 fine.  But then you must admit that the hypothesis of souls could be
 valid in the same sense, right?  It could guide some other people to
 interesting, falsifiable theories -- even though, in itself, it stands
 outside the domain of scientific validation/falsification.


 I understand the point, but I insist that it is not that trivial. You
 could apply the same argument against the automated proof of the
 four-color theorem. Since there is no human capable of verifying it in
 a lifetime (and even if a group of people tried to verify it, no single
 mind would ever have the intellectual capacity to become convinced on
 its own), the four-color proof would then not be science...

So, the distinction here is that

-- in one case, **no possible finite set of observations** can verify or
falsify the hypothesis at hand [hypercomputing]

-- in the other case, some finite set of observations could verify or
falsify the hypothesis at hand ... but this observation set wouldn't
fit into the mind of a certain observer O [four color theorem]

So, to simplify a bit, do I define "X has direct scientific meaning" as

"I can personally falsify X"

or as

"Some being could potentially falsify X; and I can use science
to distinguish those beings capable of falsifying X from those
that are incapable"

??
??

If the former, then the four-color theorem isn't human science

If the latter, it is...

I choose the latter...

ben




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Ben Goertzel
We cannot
 ask Feynman, but I actually asked Deutsch. He not only thinks QM
 is our most basic physical reality (he thinks math and computer
 science lie in quantum mechanics), but he even takes quite seriously
 his theory of parallel universes! And he is not alone. Speaking for
 myself, I would agree with you, but I think we would need to
 relativize the concept of agreement. I don't think QM is just another
 model of merely mathematical value for making finite predictions. I
 think physical models say something about our physical reality. If you
 deny QM as part of our physical reality, then I guess you deny any
 other physical model. I wonder then what is left to you. You perhaps
 would embrace total skepticism, perhaps even solipsism. Current trends
 have moved from there to more relativized positions, where models are
 considered just that, models, but still with some value as part of our
 actual physical reality (just as Newtonian physics is not completely
 wrong after General Relativity, since it still describes a huge part of
 our physical reality).


Well, I don't embrace solipsism, but that is really a philosophic and
personal rather than scientific matter ...

... and I'm not going to talk here about what "is",
which IMO is not a matter for science ... but merely about what science
can tell us.

And, science cannot tell us whether QM or some empirically-equivalent,
wholly randomness-free theory is the right one...

ben g




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Philip Hunt
2008/12/1 Ben Goertzel [EMAIL PROTECTED]:

 And, science cannot tell us whether QM or some empirically-equivalent,
 wholly randomness-free theory is the right one...

If two theories give identical predictions under all circumstances
about how the real world behaves, then they are not two separate
theories; they are merely rewordings of the same theory. And choosing
between them is arbitrary; you may prefer one to the other because
human minds can visualise it more easily, or because it's easier to
calculate with, or because you have an aesthetic preference for it.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Ben Goertzel
 If two theories give identical predictions under all circumstances
 about how the real world behaves, then they are not two separate
 theories; they are merely rewordings of the same theory. And choosing
 between them is arbitrary; you may prefer one to the other because
 human minds can visualise it more easily, or because it's easier to
 calculate with, or because you have an aesthetic preference for it.

 --
 Philip Hunt, [EMAIL PROTECTED]



However, the two theories may still have very different consequences
**within the minds of the community of scientists** ...

Even though T1 and T2 are empirically equivalent in their predictions,
T1 might have a tendency to lead a certain community of scientists
in better directions, in terms of creating new theories later on

However, empirically validating this property of T1 is another question ...
which leads one to the topic of scientific theories about the sociological
consequences of scientific theories ;-)

ben g




Re: FW: [agi] A paper that actually does solve the problem of consciousness

2008-12-01 Thread Eric Burton
Ed, they used to combine Ritalin with LSD for psychotherapy. It
assists in absorbing insights achieved from psycholytic doses, which
is a term for doses that are not fully psychedelic. Those are edifying
on their own but are less organized. I don't know if you can get this
in a clinical setting today. But these molecules are gradually being
apprehended as tools.

On 11/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Ed,

 Unfortunately to reply to your message in detail would absorb a lot of
 time, because there are two issues mixed up

 1) you don't know much about computability theory, and educating you
 on it would take a lot of time (and is not best done on an email list)

 2) I may not have expressed some of my weird philosophical ideas about
 computability and mind and reality clearly ... though Abram, at least,
 seemed to get them ;)  [but he has a lot of background in the area]

 Just to clarify some simple things though: Pi is a computable number,
 because there's a program that would generate it if allowed to run
 long enough.  Also, pi has been proved irrational; and quantum
 theory really has nothing directly to do with uncomputability...
 About

 How can several pounds of matter that is the human brain model
 the true complexity of an infinity of infinitely complex things?

 it is certainly thinkable that the brain is infinite, not finite, in its
 information content, or that it's a sort of antenna that receives
 information from some infinite-information-content source.  I'm not
 saying I believe this, just saying it's a logical possibility, and not
 really ruled out by available data...

 Your reply seems to assume that the brain is a finite computational
 system and that other alternatives don't make sense.  I think this is
 an OK working assumption for AGI engineers but it's not proved by any
 means.

 My main point in that post was, simply, that science and language seem
 intrinsically unable to distinguish computable from uncomputable
 realities.  That doesn't necessarily mean the latter don't exist but
 it means they're not really scientifically useful entities.  But, my
 detailed argument in favor of this point requires some basic
 understanding of computability math to appreciate, and I can't review
 those basics in an email, it's too much...

 ben g

 On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,



 On November 19, 2008 5:39 you wrote the following under the above titled
 thread:



 --

 Ed,



 I'd be curious for your reaction to



 http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html



 which explores the limits of scientific and linguistic explanation, in

 a different but possibly related way to Richard's argument.



 --



 In the below email I asked you some questions about your article, which
 capture my major problem in understanding it, and I don't think I ever
 received a reply.



 The questions were at the bottom of such a long post that you may well
 never have even seen them.  I know you are busy, but if you have time I
 would be interested in hearing your answers to the following questions
 about the following five quoted parts (shown in red if you are seeing
 this in rich text) from your article.  If you are too busy to respond
 just say so, either on or off list.



 -



 (1) In the simplest case, A2 may represent U directly in the language,
 using a single expression



 How can U be directly represented in the language if it is
 uncomputable?



 I assume you consider any irrational number, such as pi, to be uncomputable
 (although at least pi has a formula that, with enough computation, can
 approach it as a limit --- I assume that for most real numbers, if there is
 such a formula, we do not know it).  (By the way, do we know for a fact
 that pi is irrational, and if so, how do we know, other than that we have
 calculated it to millions of places and not yet found an exact solution?)



 Merely communicating the symbol pi only represents the number if the agent
 receiving the communication has a more detailed definition; but any
 definition, such as a formula for iteratively approaching pi, which
 presumably is what you mean by R_U, would only be an approximation.



 So U could never be fully represented unless one had infinite time --- and
 I generally consider it a waste of time to think about infinite time unless
 there is something valuable about such considerations that has a use in
 much more human-sized chunks of time.



 In fact, it seems the major message of quantum mechanics is that even
 physical reality doesn't have the time or machinery to compute
 uncomputable things, like a space constructed of dimensions each
 corresponding to all the real numbers within some astronomical range.  So
 the real number line is not really real.  It is at best a construct of the
 human mind that can at best only be approximated in part.



 

Re: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
Ed,

Unfortunately to reply to your message in detail would absorb a lot of
time, because there are two issues mixed up

1) you don't know much about computability theory, and educating you
on it would take a lot of time (and is not best done on an email list)

2) I may not have expressed some of my weird philosophical ideas about
computability and mind and reality clearly ... though Abram, at least,
seemed to get them ;)  [but he has a lot of background in the area]

Just to clarify some simple things though: Pi is a computable number,
because there's a program that would generate it if allowed to run
long enough.  Also, pi has been proved irrational; and quantum
theory really has nothing directly to do with uncomputability...
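(For instance, Gibbons' unbounded spigot algorithm --- a standard
construction, included here only to make "there's a program that would
generate it" concrete --- emits the decimal digits of pi one by one,
forever, using nothing but integer arithmetic. In Python:)

def pi_digits():
    # Gibbons' streaming spigot for the digits of pi.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, \
                (q * (7 * k + 2) + r * l) // (t * l), l + 2

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(10)))  # prints 3141592653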

About

How can several pounds of matter that is the human brain model
 the true complexity of an infinity of infinitely complex things?

it is certainly thinkable that the brain is infinite, not finite, in its
information content, or that it's a sort of antenna that receives
information from some infinite-information-content source.  I'm not
saying I believe this, just saying it's a logical possibility, and not
really ruled out by available data...

Your reply seems to assume that the brain is a finite computational
system and that other alternatives don't make sense.  I think this is
an OK working assumption for AGI engineers but it's not proved by any
means.

My main point in that post was, simply, that science and language seem
intrinsically unable to distinguish computable from uncomputable
realities.  That doesn't necessarily mean the latter don't exist but
it means they're not really scientifically useful entities.  But, my
detailed argument in favor of this point requires some basic
understanding of computability math to appreciate, and I can't review
those basics in an email, it's too much...

ben g

On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,



 On November 19, 2008 5:39 you wrote the following under the above titled
 thread:



 --

 Ed,



 I'd be curious for your reaction to



 http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html



 which explores the limits of scientific and linguistic explanation, in

 a different but possibly related way to Richard's argument.



 --



 In the below email I asked you some questions about your article, which
 capture my major problem in understanding it, and I don't think I ever
 received a reply.



 The questions were at the bottom of such a long post that you may well never
 have even seen them.  I know you are busy, but if you have time I would be
 interested in hearing your answers to the following questions about the
 following five quoted parts (shown in red if you are seeing this in rich
 text) from your article.  If you are too busy to respond just say so, either
 on or off list.



 -



 (1) In the simplest case, A2 may represent U directly in the language,
 using a single expression



 How can U be directly represented in the language if it is uncomputable?



 I assume you consider any irrational number, such as pi, to be uncomputable
 (although at least pi has a formula that, with enough computation, can
 approach it as a limit --- I assume that for most real numbers, if there is
 such a formula, we do not know it).  (By the way, do we know for a fact that
 pi is irrational, and if so, how do we know, other than that we have
 calculated it to millions of places and not yet found an exact solution?)



 Merely communicating the symbol pi only represents the number if the agent
 receiving the communication has a more detailed definition; but any
 definition, such as a formula for iteratively approaching pi, which
 presumably is what you mean by R_U, would only be an approximation.



 So U could never be fully represented unless one had infinite time --- and I
 generally consider it a waste of time to think about infinite time unless
 there is something valuable about such considerations that has a use in much
 more human-sized chunks of time.



 In fact, it seems the major message of quantum mechanics is that even
 physical reality doesn't have the time or machinery to compute uncomputable
 things, like a space constructed of dimensions each corresponding to all the
 real numbers within some astronomical range.  So the real number line is
 not really real.  It is at best a construct of the human mind that can at
 best only be approximated in part.



 (2) complexity(U) > complexity(R_U)



 Because I did not understand how U could be represented, and how R_U could
 be anything other than an approximation for any practical purposes, I didn't
 understand the meaning of the above line from your article?



 If U and R_U have the meaning I guessed in my discussion of quote (1), then
 U could not be meaningfully represented in the language, other than by a
 symbol that references some definition 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

Not really.  They're limitations on what measurements of physical
reality can be simultaneously made.

Quantum systems can compute *exactly* the class of Turing computable
functions ... this has been proved according to standard quantum
mechanics math.  However, there are some things they can compute
faster than any Turing machine, in the average case but not the worst
case.
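(One concrete data point on such speedups --- my own aside, not Ben's
example: Grover's unstructured-search algorithm needs on the order of
sqrt(N) oracle queries where a classical search needs on the order of N;
a quadratic gain, but still computing a Turing-computable function:)

import math

def classical_expected_queries(n):
    # Classical unstructured search: ~N/2 expected oracle queries.
    return n / 2

def grover_queries(n):
    # Grover's algorithm: ~(pi/4) * sqrt(N) oracle queries.
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}: classical ~{classical_expected_queries(n):,.0f}, "
          f"Grover ~{grover_queries(n):,}")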

 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.

the key point of the blog post you didn't fully grok was a careful
argument that (under certain, seemingly reasonable assumptions)
science can never provide evidence in favor of infinite mechanisms...

ben g




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Trent Waddington
On Mon, Dec 1, 2008 at 11:19 AM, Ed Porter [EMAIL PROTECTED] wrote:
 You said QUANTUM THEORY REALLY HAS NOTHING DIRECTLY TO DO WITH
 UNCOMPUTABILITY.

Please don't quote people using this style, it hurts my eyes.

 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

I don't even know what you're saying here.  Maybe you're trying to say
that it takes a really big computer to compute a very small box of
physical reality.. which is true.. I just don't know why you would be
saying that.

 You said  IT IS CERTAINLY THINKABLE THAT THE BRAIN IS INFINITE NOT FINITE
 IN ITS INFORMATION CONTENT, OR THAT IT'S A SORT OF ANTENNA THAT RECEIVES
 INFORMATION FROM SOME INFINITE-INFORMATION-CONTENT SOURCE 

 This certainly is thinkable.  And that is a non-trivial statement.  We
 should never forget that our concepts of reality could be nothing but
 illusions, and that our understanding of science and physical reality may be
 much more partial and flawed than we think.

It's also completely unscientific.  You might as well say that magic
pixies deliver your thoughts from a big invisible bucket made of gold.

 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

So why are you entertaining notions of magic antennas to God?

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.

I wouldn't.  It's untestable nonsense.

Trent




RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ed Porter
Regarding the uncertainty principle, Wikipedia says:

 

In quantum physics, the Heisenberg uncertainty principle states that the
values of certain pairs of conjugate variables (position and momentum, for
instance) cannot both be known with arbitrary precision. That is, the more
precisely one variable is known, the less precisely the other is known. THIS
IS NOT A STATEMENT ABOUT THE LIMITATIONS OF A RESEARCHER'S ABILITY TO
MEASURE PARTICULAR QUANTITIES OF A SYSTEM, BUT RATHER ABOUT THE NATURE OF
THE SYSTEM ITSELF. (emphasis added.)
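(For scale --- a back-of-envelope of mine, not part of the quoted text: the
principle says dx * dp >= hbar/2, so confining an electron to atomic
dimensions already forces a large momentum spread. In Python:)

HBAR = 1.054571817e-34       # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31   # electron mass, kg

dx = 1e-10                   # confine the electron to ~1 angstrom
dp = HBAR / (2 * dx)         # minimum momentum uncertainty, from dx*dp >= hbar/2
dv = dp / M_ELECTRON         # corresponding velocity uncertainty

print(f"dp >= {dp:.2e} kg*m/s, i.e. dv >= {dv:.2e} m/s")  # dv ~ 5.8e5 m/s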

 

I am sure you know more about quantum mechanics than I do.  But I have heard
many say that uncertainty places limits not just on scientific measurement,
but on the amount of information different parts of reality can have about
each other when computing in response to each other.  Perhaps such theories
are wrong, but they are not without support in the field.

 

With regard to the statement that science can never provide evidence in
favor of infinite mechanisms, I thought you were saying there was no way the
human mind could fully represent or fully understand an infinite
mechanism --- which I agree with.

 

You were correct in thinking that I did not grok that you were implying this
means that if an infinite mechanism existed, there could be no evidence in
favor of its infinity.

 

In fact, it is not clear that this is the case, if you use "provide
evidence" considerably more loosely than "provide proof for."  Until the
advent of quantum mechanics and/or the theory of the expanding universe,
based in part on observations and in part on intuitions derived from them,
many people felt the universe was infinitely continuous and/or of infinite
extent in space and time.  I agree you would probably never be able to prove
infinite realities, but the mind is capable of conceiving of them, and of
seeing evidence that might suggest to some their existence, as was
suggested to Einstein, who, I have been told, for many years believed in a
universe that was infinite in time.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 30, 2008 9:09 PM
To: agi@v2.listbox.com
Subject: Re:  RE: FW: [agi] A paper that actually does solve the problem
of consciousness

 

 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

Not really.  They're limitations on what measurements of physical
reality can be simultaneously made.

Quantum systems can compute *exactly* the class of Turing computable
functions ... this has been proved according to standard quantum
mechanics math.  However, there are some things they can compute
faster than any Turing machine, in the average case but not the worst
case.

 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that
 receives decodable information from some infinite-information-content
 source, I would love to hear it.

the key point of the blog post you didn't fully grok was a careful
argument that (under certain, seemingly reasonable assumptions)
science can never provide evidence in favor of infinite mechanisms...

ben g

 

 



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
Hi,

 In quantum physics, the Heisenberg uncertainty principle states that the
 values of certain pairs of conjugate variables (position and momentum, for
 instance) cannot both be known with arbitrary precision. That is, the more
 precisely one variable is known, the less precisely the other is known. THIS
 IS NOT A STATEMENT ABOUT THE LIMITATIONS OF A RESEARCHER'S ABILITY TO
 MEASURE PARTICULAR QUANTITIES OF A SYSTEM, BUT RATHER ABOUT THE NATURE OF
 THE SYSTEM ITSELF. (emphasis added.)



 I am sure you know more about quantum mechanics than I do.  But I have heard
 many say the uncertainty controls limits not just on scientific measurement,
 but the amount of information different parts of reality can have about each
 other when computing in response to each other.  Perhaps such theories are
 wrong, but they are not without support in the field.
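
For reference, the standard position-momentum form of the relation quoted
above, writing $\sigma_x$ and $\sigma_p$ for the standard deviations of
position and momentum, is

  $\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}$

a bound on the product of the two spreads, whichever reading of it (limit on
measurement, or property of the system itself) one adopts.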


Yeah, the interpretation of quantum theory is certainly contentious
and there are multiple conflicting views...

However, regarding quantum computing, it is universally agreed that
the class of quantum computable functions is identical to the class of
classically Turing computable functions.


 With regard to the statement "science can never provide evidence in favor of
 infinite mechanisms" I thought you were saying there was no way the human
 mind could fully represent or fully understand an infinite mechanism ---
 which I agree with.

No, I was not saying that there was no way the human mind could fully
represent or fully understand an infinite mechanism.

What I argued is that **scientific data** can never convincingly be
used to argue in favor of an infinite mechanism, due to the
intrinsically finite nature of scientific data.

This says **nothing** about any intrinsic limitations on the human
mind ... unless one adds the axiom that the human mind must be
entirely comprehensible via science ... which seems an unnecessary
assumption to make

 In fact, it is not clear that this is the case, if you use "provide
 evidence for" considerably more loosely than "provide proof for."  Until the
 advent of quantum mechanics and/or the theory of the expanding universe, many
 people felt --- based in part on observations and in part on intuitions
 derived from them --- that the universe was infinitely continuous and/or of
 infinite extent in space and time.  I agree you would probably never be able
 to prove infinite realities, but the mind is capable of conceiving of them,
 and of seeing evidence that might suggest their existence to some, as it
 apparently did to Einstein, who, I have been told, for many years believed in
 a universe infinite in time.

well, my argument implies that you can never use science to prove that
the mind is capable of conceiving of infinite realities

This may be true in some other sense, but I argue, not in a scientific sense...

-- Ben G




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
OTOH, there is no possible real-world test to distinguish a true
random sequence from a high-algorithmic-information quasi-random
sequence

So I don't find this argument very convincing...
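
The practical force of this point is easy to see: any finite battery of
statistical tests treats a good pseudorandom stream and a stream from a
physical entropy source the same way. A minimal sketch, assuming only the
Python standard library (the monobit z-score is the simplest such test;
os.urandom stands in for the "physical" source and is of course not a
quantum device):

  import math
  import os
  import random

  def monobit_z(bits):
      # z-score for the count of ones; |z| < ~2 is consistent with fair coin flips
      n = len(bits)
      return (sum(bits) - n / 2) / math.sqrt(n / 4)

  n = 100000
  prng = random.Random(42)
  pseudo = [prng.getrandbits(1) for _ in range(n)]
  physical = [(byte >> i) & 1 for byte in os.urandom(n // 8) for i in range(8)]

  print(monobit_z(pseudo), monobit_z(physical))  # both typically within ~2 sigma

No finite sample settles which stream, if either, came from an uncomputable
source.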

 On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 ...






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- Groucho Marx




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

 Not really.  They're limitations on what  measurements of physical
 reality can be simultaneously made.

 Quantum systems can compute *exactly* the class of Turing computable
 functions ... this has been proved according to standard quantum
 mechanics math.  however, there are some things they can compute
 faster than any Turing machine, in the average case but not the worst
 case.


Sorry, I am not really following the discussion but I just read that
there is some misinterpretation here. It is the standard model of
quantum computation that effectively computes exactly the Turing
computable functions, but that model was almost hand-tailored to do so,
perhaps because adding to the theory an assumption of continuum
measurability was already too much (i.e. distinguishing infinitely
close quantum states). But that is far from the claim that quantum
systems can compute exactly the class of Turing computable functions.
Actually the Hilbert space and the superposition of particles in an
infinite number of states would suggest exactly the opposite, while
the standard model of quantum computation considers only a
superposition of 2 states (the so-called qubit, a superposition of
0 and 1 capable of entanglement). But even if you stick to the standard
model of quantum computation, the proof that it computes exactly the set
of recursive functions [Feynman, Deutsch] can be put in jeopardy very
easily: Turing machines are unable to produce non-deterministic
randomness, something that quantum computers do as an intrinsic
property of quantum mechanics (not only because of measurement limitations
of the kind of the Heisenberg principle but because of quantum non-locality,
i.e. the violation of Bell's inequalities). I just exhibited a non-Turing
computable function that standard quantum computers compute...
[Calude, Casti]
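
One half of that contrast is easy to exhibit: a deterministic program, seeded
identically, reproduces its "random" output bit for bit, which is what
"unable to produce non-deterministic randomness" means here. A sketch in
plain Python (illustrative only):

  import random

  def run(seed, n=20):
      prng = random.Random(seed)
      return [prng.getrandbits(1) for _ in range(n)]

  print(run(2008))
  print(run(2008) == run(2008))  # True: reruns can never surprise you

The quantum claim is precisely that no analogous seed exists to hand to a
measurement apparatus.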


 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.


You and/or other people might be interested in a paper of mine
published some time ago on the possible computational power of the
human mind and the way to encode infinite information in the brain:

http://arxiv.org/abs/cs/0605065


 the key point of the blog post you didn't fully grok, was a careful
 argument that (under certain, seemingly reasonable assumptions)
 science can never provide evidence in favor of infinite mechanisms...

 ben g






-- 
Hector Zenil
http://www.mathrix.org




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

I know, but the point is not whether we can distinguish it, but that
quantum mechanics actually predicts that nature is intrinsically capable
of non-deterministic randomness, while for a Turing machine that is
impossible by definition. I find quite convincing and interesting the
way in which the mathematical proof of the standard model of quantum
computation as Turing computable has been put in jeopardy by physical
reality.


 So I don't find this argument very convincing...

 ...




-- 
Hector Zenil
http://www.mathrix.org



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:53 AM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

 I know, but the point is not whether we can distinguish it, but that
 quantum mechanics actually predicts to be intrinsically capable of
 non-deterministic randomness, while for a Turing machine that is
 impossible by definition. I find quite convincing and interesting the
 way in which the mathematical proof of the standard model of quantum
 computation as Turing computable has been put in jeopardy by physical
 reality.

or at least by a model of physical reality... =)  (a reality, by the way,
that the authors of the mathematical proof believe in as the most
basic)



 So I don't find this argument very convincing...

 ...
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical
reality... true random numbers are uncomputable entities whose existence
can never be demonstrated, and any finite series of observations can be
modeled equally well as the first N bits of an uncomputable series or of
a computable one...

ben g

On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

 I know, but the point is not whether we can distinguish it, but that
 quantum mechanics actually predicts to be intrinsically capable of
 non-deterministic randomness, while for a Turing machine that is
 impossible by definition. I find quite convincing and interesting the
 way in which the mathematical proof of the standard model of quantum
 computation as Turing computable has been put in jeopardy by physical
 reality.


 So I don't find this argument very convincing...

 ...

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

It has everything to do with it when the subject is quantum mechanics.
Quantum mechanics is non-deterministic by nature. A quantum computer, even
within the standard model of quantum computation, could then take advantage
of this intrinsic property of physical (quantum) reality (assuming the
model is correct, as most physicists would).

 true random numbers are uncomputable entities which can
 never be existed, and any finite series of observations can be modeled
 equally well as the first N bits of an uncomputable series or of a
 computable one...

That's the point: that's what the classical theory of computability
would say (also making some assumptions, namely Church's thesis), but
again quantum mechanics says something else:

The fact that quantum computers are capable of non-deterministic
randomness by definition, and Turing machines are incapable of
non-deterministic randomness also by definition, seems incompatible
with the claim (or mathematical proof) that standard quantum computers
compute exactly the same functions as Turing machines. And that is
only when dealing with standard quantum computation, because
non-standard quantum computation is far from being proved to be
reducible to the Turing-computable (modulo its speed-up).

Concerning the observations, you don't need to make an infinite number
of them to get a non-computable answer from an Oracle (although you
would need infinitely many if you wanted to verify it). And even if you
can model equally well the first N bits of a non-deterministic random
sequence, the fact that a random sequence is ontologically of a
non-deterministic nature makes it a priori different in essence
from a pseudo-random sequence. The point is not epistemological.

In any case, whether or not we agree on the philosophical matter, my point
is that it is not the case that there is a mathematical proof that
quantum systems compute exactly the same functions as Turing
machines. There is a mathematical proof that the standard model of
quantum computation computes the same set of functions as Turing
machines.


 ben g

 On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 ...

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

I don't get it. You don't think that quantum mechanics is part of our
physical reality (if it is not all of it)?

 true random numbers are uncomputable entities which can
 never be existed,

you can say that either they don't exist or they do exist but that we
don't have access to them. That's a rather philosophical matter, but
scientifically QM says the latter. Even more, since bits from a
non-deterministic random source are truly independent from each other,
something that does not happen when they are produced by a Turing machine,
any sequence (even a finite one) is different in nature from one produced
by a Turing machine. In practice, even if your claim is that you will not
be able to distinguish the difference, you actually could if you let the
machine run for a longer period of time: once it has exhausted its physical
resources it will either halt or start over (making the random
string periodic), while QM says that resources don't matter; a quantum
computer will always continue producing non-deterministic (e.g. never
periodic) strings of any length, independently of any constraint of
time or space!
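
That finite-resources point can be made concrete with any generator confined
to a fixed amount of state: by the pigeonhole principle it must revisit a
state and cycle from then on. A toy sketch, using an 8-bit linear
congruential generator chosen purely for illustration:

  def lcg8(x):
      # 8-bit LCG: only 256 possible states, so a cycle is forced within 256 steps
      return (37 * x + 1) % 256

  seen = {}
  x, step = 7, 0
  while x not in seen:
      seen[x] = step
      x = lcg8(x)
      step += 1
  print("enters a cycle of length", step - seen[x], "after", seen[x], "steps")

Any machine with finitely many internal states is in the same position, only
with astronomically more states; the claim above is that a quantum source is
not.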

 and any finite series of observations can be modeled
 equally well as the first N bits of an uncomputable series or of a
 computable one...

 ben g

 On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 ...

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

 I don't get it. You don't think that quantum mechanics is part of our
 physical reality (if it is not all of it)?

Of course it isn't -- quantum mechanics is a mathematical and
conceptual model that we use in order to predict certain finite sets
of finite-precision observations, based on other such sets

 true random numbers are uncomputable entities which can
 never be existed,

 you can say that either they don't exist or they do exist but that we
 don't have access to them. That's a rather philosophical matter. But
 scientifically QM says the latter.

Sure it does: but there is an equivalent mathematical theory that
explains all observations identically to QM, yet doesn't posit any
uncomputable entities

So, choosing to posit that these uncomputable entities exist in
reality, is just a matter of aesthetic or philosophical taste ... so
you can't really say they exist in reality, because they contribute
nothing to the predictive power of QM ...

-- Ben G




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

 I don't get it. You don't think that quantum mechanics is part of our
 physical reality (if it is not all of it)?

 Of course it isn't -- quantum mechanics is a mathematical and
 conceptual model that we use in order to predict certain finite sets
 of finite-precision observations, based on other such sets


Oh I see! I think that's a matter of philosophical taste as well. I don't
think everybody would agree with you, especially if you poll physicists like
those who constructed the standard model of quantum computation! We cannot
ask Feynman, but I actually asked Deutsch. He not only thinks QM
is our most basic physical reality (he thinks math and computer
science lie in quantum mechanics), but he even takes quite seriously
his theory of parallel universes! And he is not alone. Speaking for
myself, I would agree with you, but I think we would need to
relativize the concept of agreement. I don't think QM is just another
model of merely mathematical value for making finite predictions. I think
physical models say something about our physical reality. If you deny
QM as part of our physical reality then I guess you deny any other
physical model. I wonder then what is left to you. You perhaps would
embrace total skepticism, perhaps even solipsism. Current trends have
moved from there to more relativized positions, where models are
considered just that, models, but still with some value as part of our
actual physical reality (just as Newtonian physics is not simply wrong
after General Relativity, since it still describes a huge part of
our physical reality).

In the end, even if you claim a Platonic physical reality to which we
have no access at all, not even through our best explanations in the
way of models, the world is either quantum or not (as we have defined
the theory), and as long as QM remains our best explanation of the
phenomena it characterizes, one has to weigh it against the other models
describing other aspects of our best-known physical reality.
It is not clear to me how you would deny the physical reality of QM
but defend the theory of computability or algorithmic information
theory as if they were more basic than QM.

If we take QM and AIT as equally basic, even in a practical sense,
there are incompatibilities in essence. QM cannot be said to be Turing
computable, and AIT cannot posit the non-existence of non-deterministic
randomness, especially when QM says something else. I am more on the
side of AIT, but I think the question is open, interesting (both
philosophically and scientifically), and not trivial at all.


 true random numbers are uncomputable entities which can
 never be existed,

 you can say that either they don't exist or they do exist but that we
 don't have access to them. That's a rather philosophical matter. But
 scientifically QM says the latter.

 Sure it does: but there is an equivalent mathematical theory that
 explains all observations identically to QM, yet doesn't posit any
 uncomputable entities

 So, choosing to posit that these uncomputable entities exist in
 reality, is just a matter of aesthetic or philosophical taste ... so
 you can't really say they exist in reality, because they contribute
 nothing to the predictive power of QM ...



There are people who think that quantum randomness is actually the
source of the complexity we see in the universe [Bennett, Lloyd]. Even
though I do not agree with them (since AIT does not require
non-deterministic randomness), I think the question is not that trivial,
since serious researchers think it contributes in some fundamental (not
only philosophical) way.


 -- Ben G






-- 
Hector Zenil
http://www.mathrix.org




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Charles Hixson

Hector Zenil wrote:
 On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 ...


Still, one must remember that there is Quantum Theory, and then there 
are the interpretations of Quantum Theory.  As I understand things there 
are still several models of the universe which yield the same 
observables, and choosing between them is a matter of taste.  They are 
all totally consistent with standard Quantum Theory...but ...well, which 
do you prefer?  Many-worlds?  Action at a distance?  No objective 
universe? (I'm not sure what that means.)  The present is created by the 
future as well as the past?  As I understand things, these cannot be 
chosen between on the basis of Quantum Theory.  And somewhere in that 
mix is Wholeness and the Implicate Order.


When math gets translated into Language, interpretations add things.





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
Matt Mahoney wrote:
 Autobliss...

Imagine that there is another human language which is the same as
English, just the pain/pleasure related words have the opposite
meaning. Then consider what that would mean for your Autobliss.

 My definition of pain is negative reinforcement in a system that learns.

IMO, pain is more like data with the potential to cause disorder in
hard-wired algorithms. I'm not saying this fully covers it, but IMO it is
already outside the Autobliss scope.

Trent Waddington wrote:
 Apparently, it was Einstein who said that if you can't explain it to
 your grandmother then you don't understand it.

That was Richard Feynman.

Regards,
Jiri Jelinek

PS: Sorry if I'm missing anything. Being busy, I don't read all posts.




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Trent Waddington
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
 Trent Waddington wrote:
  Apparently, it was Einstein who said that if you can't explain it to
  your grandmother then you don't understand it.

 That was Richard Feynman

When?  I don't really know who said it.. but everyone else on teh
internets seems to attribute it to Einstein.  I've seen at least one
site attribute it to the bible (but of course they give no reference).

As such, I think there's two nuggets of wisdom here:  If you can't
provide references, then your opinion is just as good as mine, and if
you can provide references, that doesn't excuse you from explaining
what you're talking about so that everyone can understand.

Two points that many members of this list would do well to heed now and then.

Trent




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Matt Mahoney
--- On Wed, 11/19/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 My definition of pain is negative reinforcement in a system that learns.
 
 IMO, pain is more like a data with the potential to cause disorder in
 hard-wired algorithms. I'm not saying this fully covers it but it's
 IMO already out of the Autobliss scope.

You might be thinking of continuous or uncontrollable pain, as when a rat is 
shocked and can stop the shock by turning a paddle wheel, while a second rat 
receives identical shocks to the first but its paddle wheel has no effect. 
Only the second rat develops stomach ulcers.

When autobliss is run with two negative arguments so that it is punished no 
matter what it does, the neural network weights take on random values and it 
never learns a function. It also dies, but only because I programmed it that 
way.
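
The flavor of that experiment is easy to reproduce. A minimal sketch of
"negative reinforcement in a system that learns," in the spirit of autobliss
but not Matt's actual code (the reward arguments and all names here are
illustrative):

  import math
  import random

  def train(r_right, r_wrong, steps=10000, target=lambda a, b: a | b):
      w = [[0.0, 0.0] for _ in range(4)]   # one weight per (2-bit input, action)
      prng = random.Random(0)
      for _ in range(steps):
          a, b = prng.randint(0, 1), prng.randint(0, 1)
          s = 2 * a + b
          p1 = 1.0 / (1.0 + math.exp(w[s][0] - w[s][1]))  # preference for action 1
          act = 1 if prng.random() < p1 else 0
          r = r_right if act == target(a, b) else r_wrong
          w[s][act] += 0.1 * r             # negative r ("pain") weakens the response
      return [1 if w[s][1] > w[s][0] else 0 for s in range(4)]

  print(train(+1, -1))  # learns the OR table: [0, 1, 1, 1]
  print(train(-1, -1))  # punished whatever it does: weights drift, table is noise

With one positive and one negative argument the scalar reward carries a
signal and the function is learned; punished equally no matter what it does,
nothing drives the weights toward the target, echoing the behavior described
above.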

-- Matt Mahoney, [EMAIL PROTECTED]







Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-19 Thread Jiri Jelinek
Trent,

Feynman's page on Wikipedia has it as: "If you can't explain something
to a first-year student, then you haven't really understood it." But
Feynman reportedly said it in a number of ways, including the
grandmother variant. I learned about it when taking physics classes a
while ago, so I don't have very useful source info, but I remember
one of my professors saying that Feynman also says it in his books.
But yes, I did a quick search and noticed that many attribute the
grandmother variant to Einstein (which I didn't know - sorry). Some
attribute it to Ernest Rutherford, some talk about Kurt Vonnegut, and
yes, some about the Bible... Well, I guess it's not that important. But
one of my related thoughts is that when teaching AGIs, we should start
with very high-level basic concepts/explanations/world model and not
dive into great granularity before the high-level concepts are
relatively well understood [/correctly used when generating
solutions]. I oppose the idea of throwing tons of raw data (from very
different granularity levels [and possibly different contexts]) at the
AGI and expecting that it will somehow sort everything [or most of it]
out correctly.

Jiri

On Wed, Nov 19, 2008 at 3:39 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 ...





RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
 From: Trent Waddington [mailto:[EMAIL PROTECTED]
 
 On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
  I mean that people are free to decide if others feel pain. For
 example, a scientist may decide that a mouse does not feel pain when it
 is stuck in the eye with a needle (the standard way to draw blood) even
 though it squirms just like a human would. It is surprisingly easy to
 modify one's ethics to feel this way, as proven by the Milgram
 experiments and Nazi war crime trials.
 
 I'm sure you're not meaning to suggest that scientists commonly
 rationalize in this way, nor that they are all Nazi war criminals for
 experimenting on animals.
 
 I feel the need to remind people that animal rights is a fringe
 movement that does not represent the views of the majority.  We
 experiment on animals because the benefits, to humans, are considered
 worthwhile.
 

I like animals. And I like the idea of coming up with cures for diseases and
testing them on animals first. In college my biologist roommate protested
the torture of fruit flies. My son has started playing video games where
you shoot, zap and chemically immolate the opponent, so I need to explain
to him that those bad guys are not conscious...yet.

I don't know if there are guidelines. Humans, being the rulers of the planet,
appear as godlike beings to the other conscious inhabitants. That brings
responsibility. So when we start coming up with AI stuff in the lab that
attains certain levels of consciousness, we have to know what consciousness
is in order to govern our behavior.

And naturally, if some superintelligent space alien or rogue interstellar AI
encounters us and decides that we are a culinary delicacy and wants to grow
us en masse economically, we hope that some respect is given, eh? 

Reminds me of hearing that some farms are experimenting with growing
chickens w/o heads. Animal rights may be more than just a fringe movement.
Kind of like Mike - http://en.wikipedia.org/wiki/Mike_the_Headless_Chicken

John







Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser

 I mean that people are free to decide if others feel pain.

Wow!  You are one sick puppy, dude.  Personally, you have just hit my "Do 
not bother debating with" list.

You can decide anything you like -- but that doesn't make it true.

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 4:44 PM
Subject: RE: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.


I mean that people are free to decide if others feel pain. For example, a 
scientist may decide that a mouse does not feel pain when it is stuck in 
the eye with a needle (the standard way to draw blood) even though it 
squirms just like a human would. It is surprisingly easy to modify one's 
ethics to feel this way, as proven by the Milgram experiments and Nazi war 
crime trials.


If we have anything close to the advances in brain scanning and brain 
science

that Kurzweil predicts 1, we should come to understand the correlates of
consciousness quite well


No. I used examples like autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) and the roundworm c. elegans as 
examples of simple systems whose functions are completely understood, yet 
the question of whether such systems experience pain remains a 
philosophical question that cannot be answered by experiment.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

  I mean that people are free to decide if others feel pain.
 
 Wow!  You are one sick puppy, dude.  Personally, you have
 just hit my Do not bother debating with list.
 
 You can decide anything you like -- but that
 doesn't make it true.

Aren't you the one who decided that autobliss feels pain? Or did you decide 
that it doesn't?


-- Matt Mahoney, [EMAIL PROTECTED]





Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

 Autobliss has no grounding, no internal feedback, and no
 volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't feel pain. My 
definition of pain is negative reinforcement in a system that learns. There is 
no other requirement.

You stated that machines can feel pain, and you stated that we don't get to 
decide which ones. So can you precisely define grounding, internal feedback and 
volition (as properties of Turing machines) and prove that these criteria are 
valid?

And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Ben Goertzel
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

  Autobliss has no grounding, no internal feedback, and no
  volition.  By what definitions does it feel pain?

 Now you are making up new rules to decide that autobliss doesn't feel pain.
 My definition of pain is negative reinforcement in a system that learns.
 There is no other requirement.

 You stated that machines can feel pain, and you stated that we don't get to
 decide which ones. So can you precisely define grounding, internal feedback
 and volition (as properties of Turing machines)


Clearly, this can be done, and has largely been done already ... though
cutting and pasting or summarizing the relevant literature in emails would
not be a productive use of time


 and prove that these criteria are valid?


That is a different issue, as it depends on the criteria of validity, of
course...

I think one can argue that these properties are necessary for a
finite-resources AI system to display intense systemic patterns correlated
with its goal-achieving behavior in the context of diverse goals and
situations.  So, one can argue that these properties are necessary for **the
sort of consciousness associated with general intelligence** ... but that's
a bit weaker than saying they are necessary for consciousness (and I don't
think they are)

ben





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Trent Waddington
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Clearly, this can be done, and has largely been done already ... though
 cutting and pasting or summarizing the relevant literature in emails would
 not be a productive use of time

Apparently, it was Einstein who said that if you can't explain it to
your grandmother then you don't understand it.

Of course, he never had to argue on the Internet.

Trent




Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


I made up no rules.  I merely asked a question.  You are the one who makes a 
definition like this and then says that it is up to people to decide whether 
other humans feel pain or not.  That is hypocritical to an extreme.


I also believe that your definition is a total crock that was developed for 
no purpose other than to support your BS.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


I stated that *SOME* future machines will be able to feel pain.  I can 
define grounding, internal feedback and volition but feel no need to do so 
as properties of a Turing machine and decline to attempt to prove anything 
to you since you're so full of it that your mother couldn't prove to you 
that you were born.



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 6:26 PM
Subject: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)




--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:


Autobliss has no grounding, no internal feedback, and no
volition.  By what definitions does it feel pain?


Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
 I am just trying to point out the contradictions in Mark's sweeping 
 generalizations about the treatment of intelligent machines

Huh?  That's what you're trying to do?  Normally people do that by pointing to 
two different statements and arguing that they contradict each other.  Not by 
creating new, really silly definitions and then trying to posit a universe 
where blue equals red so everybody is confused.

 But to be fair, such criticism is unwarranted. 

So exactly why are you persisting?

 Ethical beliefs are emotional, not rational,

Ethical beliefs are subconscious and deliberately obscured from the conscious 
mind so that defections can be explained away without triggering other 
primates' lie-detecting senses.  However, contrary to your antiquated beliefs, 
they are *purely* a survival trait with a very solid grounding.

 Ethical beliefs are also algorithmically complex

Absolutely not.  Ethical beliefs are actually pretty darn simple as far as the 
subconscious is concerned.  It's only when the conscious rational mind gets 
involved that ethics are twisted beyond recognition (just like all your 
arguments).

 so this argument could only result in increasingly complex 
 rules to fit his model

Again, absolutely not.  You have no clue as to what my argument is, yet you 
fantasize that you can predict its results.  BAH!

 For the record, I do have ethical beliefs like most other people

Yet you persist in arguing otherwise.  *Most* people would call that dishonest, 
deceitful, and time-wasting. 

 The question is not how should we interact with machines, but how will we? 

No, it isn't.  Study the results on ethical behavior when people are convinced 
that they don't have free will.

= = = = = 

BAH!  I should have quit answering you long ago.  No more.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 7:58 PM
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)


Just to clarify, I'm not really interested in whether machines feel 
pain. I am just trying to point out the contradictions in Mark's sweeping 
generalizations about the treatment of intelligent machines. But to be fair, 
such criticism is unwarranted. Mark is arguing about ethics. Everyone has 
ethical beliefs. Ethical beliefs are emotional, not rational, although we often 
forget this. Ethical beliefs are also algorithmically complex, so this 
argument could only result in increasingly complex rules to fit his model. 
It would be unfair to bore the rest of this list with such a discussion.

For the record, I do have ethical beliefs like most other people, but 
they are irrelevant to the design of AGI. The question is not how should we 
interact with machines, but how will we? For example, when we develop the 
technology to simulate human minds in general, or to simulate specific humans 
who have died, common ethical models among humans will probably result in the 
granting of legal and property rights to these simulations. Since these 
simulations could reproduce, evolve, and acquire computing resources much 
faster than humans, the likely result will be human extinction, or viewed 
another way, our evolution into a non-DNA based life form. I won't offer an 
opinion on whether this is desirable or not, because my opinion would be based 
on my ethical beliefs.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

  From: Ben Goertzel [EMAIL PROTECTED]
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that 
actually does solve the problem of consciousness--correction)
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:29 PM





  On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] 
wrote:

--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

 Autobliss has no grounding, no internal feedback, and no
 volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't 
feel pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.

You stated that machines can feel pain, and you stated that we 
don't get to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) 

  Clearly, this can be done, and has largely been done already ... 
though cutting and pasting or summarizing the relevant literature in emails 
would not be a productive use of time
   
and prove that these criteria are valid?


  That is a different issue, as it depends on the criteria of validity, 
of course...

  I think one can argue that these properties are necessary for a 

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
 For example, in
 fifty years, I think it is quite possible we will be able to say with some
 confidence if certain machine intelligences we design are conscious or not,
 and whether their pain is as real as the pain of another type of animal, such
 as chimpanzee, dog, bird, reptile, fly, or amoeba.

No it won't, because people are free to decide what makes pain real. The 
question is not resolved for simple systems which are completely understood, 
for example, the 302-neuron nervous system of C. elegans. If it can be trained 
by reinforcement learning, is that real pain? What about autobliss? It learns 
to avoid negative reinforcement and it says "ouch". Do you really think that if 
we build AGI in the likeness of a human mind, and stick it with a pin and it 
says "ouch", that we will finally have an answer to the question of whether 
machines have a consciousness?

And there is no reason to believe the question will be easier in the future. 
100 years ago there was little controversy over animal rights, euthanasia, 
abortion, or capital punishment. Do you think that the addition of intelligent 
robots will make the boundary between human and non-human any sharper?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:

  No it won't, because people are free to decide what makes pain real.
 
 What?  You've got to be kidding . . . .  What makes
 pain real is how the sufferer reacts to it -- not some
 abstract wishful thinking that we use to justify our
 decisions of how we wish to behave.

Autobliss responds to pain by changing its behavior to make it less likely. 
Please explain how this is different from human suffering. And don't tell me 
it's because one is human and the other is a simple program, because...

  Do you think that the addition of intelligent
 robots will make the boundary between human and non-human
 any sharper?
 
 No, I think that it will make it much fuzzier . . . . but
 since the boundary is just a strawman for lazy thinkers,
 removing it will actually make our ethics much sharper.

So either pain is real to both, or to neither, or there is some other criterion 
which you haven't specified, in which case I would like to know what that is.

-- Matt Mahoney, [EMAIL PROTECTED]







RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt,

 

First, it is not clear people are free to decide what makes pain real,
at least subjectively real.  If I zap you with a horrible electric shock of
the type Saddam Hussein might have used when he was the chief
interrogator/torturer of Iraq's Baathist party, it is not clear exactly how
much freedom you would have to decide how subjectively real the resulting pain
would seem to you --- that is, unless you had a level of mental control far
beyond that of most humans.  

 

You indicate we currently don't know the degree of consciousness or pain
that would be suffered by a certain organism with 302 neurons.  I agree.

 

Our understanding of the physical correlates of consciousness is still
relatively limited, but it is rapidly increasing.  I think it is probable
that consciousness comes in various degrees, and it is possible that all of
physical reality has a form of consciousness, just one that lacks many of the
attributes of a human consciousness.  A 302-neuron nervous system may have a
type of consciousness, but it is my belief it would be one so much less rich
and complex than that supported by the 100,000,000,000 neurons of a human
brain that it is not only different in degree but also extremely different
in kind.

 

I understand I am making a statement based on belief when I predict we will
make great strides in understanding the physical correlates of consciousness
in the coming fifty years.  But there are already a number of studies
shedding light on that subject.  If we have anything close to the advances
in brain scanning and brain science that Kurzweil predicts [1], we should come
to understand the correlates of consciousness quite well --- so well, in
fact, that we should have pretty good, although not necessarily complete,
explanations for the various facets of Chalmers' hard problem of
consciousness.  That is, we will come to understand that consciousness is
created largely or entirely by computations in physical reality, and we will
develop a fairly broad understanding of what type of physical computations
yield what types of subjective conscious experience. 

 

With this knowledge we would be better able to understand the physical
correlates of conscious pain, and, thus, better estimate the probability
that various humans, animals, or machines will suffer something like pain
under what circumstance. 

 

The hard problem of consciousness is based on the assumption --- or at least
the question whether --- consciousness has aspects that are separate from
the physical world.  As we learn more about the physical
correlates of consciousness, I think the scope of the hard problem will
increasingly diminish.  Yes, there are things about consciousness that we
cannot clearly define in terms of physical computations at this point in
time, but it is not clear that will always be the case.  

 

Just as life is created to various degrees of complexity out of bio-chemical
computations, I think human consciousness will be shown to be created to
various degrees of complexity out of neurological computations.  It is
conceivable that the properties of other levels of reality will be required
to explain some physical correlates of consciousness, such as
quantum entanglement or quantum weirdness.  I think future study will
probably tell us if this is necessary.

 

But ultimately there will always be limits to our knowledge.  We have no
ultimate way of knowing with total certainty that our perceptions of reality
are anything other than an illusion.  I agree with Richard's paper when it
points out the often repeated statement that our subjective experiences are
the most real things we have.  

 

But just because they are subjective to us now, does not necessarily mean
that they are largely beyond the scope of human and AGI assisted science.

 

Ed Porter

 

[1] Kurzweil has claimed we will be able to so accurately scan and model an
individual human mind that we will be able to create a virtually exact
duplicate of it, including its consciousness, its memories, its passions,
etc.  I personally think that is unlikely within 50 years.  But I think that
the combination of brain science and AGI will allow us to understand the
mysteries of the hard problem of consciousness much better in fifty years
than we do today.

 

 

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 12:44 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

 

--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

 For example, in fifty years, I think it is quite possible we will be able
 to say with some confidence if certain machine intelligences we design are
 conscious or not, and whether their pain is as real as the pain of another
 type of animal, such as chimpanzee, dog, bird, reptile, fly, or amoeba.

No it won't, because people are free to decide what makes pain real. The
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.

I mean that people are free to decide if others feel pain. For example, a 
scientist may decide that a mouse does not feel pain when it is stuck in the 
eye with a needle (the standard way to draw blood) even though it squirms just 
like a human would. It is surprisingly easy to modify one's ethics to feel this 
way, as proven by the Milgram experiments and Nazi war crime trials.

If we have anything close to the advances in brain scanning and brain science
that Kurzweil predicts 1, we should come to understand the correlates of
consciousness quite well

No. I used examples like autobliss ( http://www.mattmahoney.net/autobliss.txt ) 
and the roundworm c. elegans as examples of simple systems whose functions are 
completely understood, yet the question of whether such systems experience pain 
remains a philosophical question that cannot be answered by experiment.

-- Matt Mahoney, [EMAIL PROTECTED]




Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 I mean that people are free to decide if others feel pain. For example, a 
 scientist may decide that a mouse does not feel pain when it is stuck in the 
 eye with a needle (the standard way to draw blood) even though it squirms 
 just like a human would. It is surprisingly easy to modify one's ethics to 
 feel this way, as proven by the Milgram experiments and Nazi war crime trials.

I'm sure you're not meaning to suggest that scientists commonly
rationalize in this way, nor that they are all Nazi war criminals for
experimenting on animals.

I feel the need to remind people that animal rights is a fringe
movement that does not represent the views of the majority.  We
experiment on animals because the benefits, to humans, are considered
worthwhile.

Trent




Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Eric Burton
There are procedures in place for experimenting on humans. And the
biologies of people and animals are orthogonal! Much of this will be
simulated soon



On 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote:
 On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 I mean that people are free to decide if others feel pain. For example, a
 scientist may decide that a mouse does not feel pain when it is stuck in
 the eye with a needle (the standard way to draw blood) even though it
 squirms just like a human would. It is surprisingly easy to modify one's
 ethics to feel this way, as proven by the Milgram experiments and Nazi war
 crime trials.

 I'm sure you're not meaning to suggest that scientists commonly
 rationalize in this way, nor that they are all Nazi war criminals for
 experimenting on animals.

 I feel the need to remind people that animal rights is a fringe
 movement that does not represent the views of the majority.  We
 experiment on animals because the benefits, to humans, are considered
 worthwhile.

 Trent




Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:

  Autobliss responds to pain by changing its behavior to
 make it less likely. Please explain how this is different
 from human suffering. And don't tell me its because one
 is human and the other is a simple program, because...
 
 Why don't you resend the link to this new autobliss
 that responds to pain by changing its behavior to make
 it less likely and clearly explain why what you refer
 to as pain for autobliss isn't just some
 ungrounded label that has absolutely nothing to do with pain
 in any real sense of the word. As far as I have seen, your
 autobliss argument is akin to claiming that a rock feels
 pain and runs away to avoid pain when I kick it.
 
  So either pain is real to both, or to neither, or
 there is some other criteria which you haven't
 specified, in which case I would like to know what that is.
 
 Absolutely.  Pain is real for both.

autobliss: http://www.mattmahoney.net/autobliss.txt

By pain I mean any signal that has the effect of negative reinforcement, such 
that a system that learns will modify its behavior to reduce the expected 
accumulated sum of the signal according to its model. In the AIXI model, pain 
is the negative of the reward signal. Kicking a rock or cutting down a tree 
does not inflict pain because rocks and trees don't learn.
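
To make that definition concrete, here is a minimal sketch in Python (a toy 
illustration, not the actual autobliss program linked above): any system that 
makes a negatively reinforced action less likely "feels pain" under this 
definition, and nothing more is required.

import random

class MinimalLearner:
    """A toy system that feels pain under the definition above:
    pain is any signal acting as negative reinforcement in a
    system that learns."""

    def __init__(self, actions):
        # Start with equal preference for every action.
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Choose an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    def reinforce(self, action, signal):
        # A negative signal ("pain") makes the action less likely;
        # a positive signal makes it more likely.
        self.weights[action] = max(0.01, self.weights[action] * (1.0 + signal))

# Punish action "b" and watch the learner avoid it.
learner = MinimalLearner(["a", "b"])
for _ in range(100):
    choice = learner.act()
    learner.reinforce(choice, -0.5 if choice == "b" else 0.0)
print(learner.weights)  # the weight of "b" collapses toward 0.01

Whether a program like this suffers is, of course, exactly the question in 
dispute.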

-- Matt Mahoney, [EMAIL PROTECTED]






RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt, 

With regard to your first point I largely agree with you.  I would, however,
qualify it with the fact that many of us find it hard not to sympathize with
people or animals, such as a dog, under certain circumstances when we
directly sense outward manifestations that they are experiencing terrible
pain, unless we have a sufficient hatred toward them to compensate for our
natural tendency to feel sympathy for them.  Some people attribute this to
mirror neurons, and the fact that we evolved to be tribal social animals.

With regard to the second point, your statement does not refute my point,
although my point is admittedly based on belief that is far from certain.
Our understanding of the physical (such as neural) correlates of consciousness
is currently sufficiently limited that it does not yet let us say much about
the consciousness or lack thereof of the systems you describe, even if one
assumes those systems are totally understood in every respect other than the
physical correlates of consciousness --- knowledge we currently lack, but
will have within fifty years.

But from what little we do understand about the neural correlates of
consciousness, it does not seem that either system you describe would have
anything approaching a human consciousness, and thus a human experience of
pain, since they lack the type of computation normally associated with
reports by humans of conscious experience.

Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 4:45 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.

I mean that people are free to decide if others feel pain. For example, a
scientist may decide that a mouse does not feel pain when it is stuck in the
eye with a needle (the standard way to draw blood) even though it squirms
just like a human would. It is surprisingly easy to modify one's ethics to
feel this way, as proven by the Milgram experiments and Nazi war crime
trials.

If we have anything close to the advances in brain scanning and brain
science
that Kurzweil predicts 1, we should come to understand the correlates of
consciousness quite well

No. I used examples like autobliss (
http://www.mattmahoney.net/autobliss.txt ) and the roundworm c. elegans as
examples of simple systems whose functions are completely understood, yet
the question of whether such systems experience pain remains a philosophical
question that cannot be answered by experiment.

-- Matt Mahoney, [EMAIL PROTECTED]




Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  I mean that people are free to decide if others feel
 pain. For example, a scientist may decide that a mouse does
 not feel pain when it is stuck in the eye with a needle (the
 standard way to draw blood) even though it squirms just like
 a human would. It is surprisingly easy to modify one's
 ethics to feel this way, as proven by the Milgram
 experiments and Nazi war crime trials.
 
 I'm sure you're not meaning to suggest that scientists commonly
 rationalize in this way, nor that they are all Nazi war
 criminals for experimenting on animals.
 
 I feel the need to remind people that animal rights is a fringe
 movement that does not represent the views of the majority.  We
 experiment on animals because the benefits, to humans, are
 considered worthwhile.

I am not taking a position on whether inflicting pain on animals (or people or 
machines) is right or wrong. That is an ethical question. Ethics is a system of 
beliefs that varies from one person to another. There is no such thing as a 
correct model, although everyone believes so. All we can say is that some 
models work better than others as measured by individual or group survival.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Eric Burton [EMAIL PROTECTED] wrote:

 There are procedures in place for experimenting on humans. And the
 biologies of people and animals are orthogonal! Much of this will be
 simulated soon

When we start simulating people, there will be ethical debates about that. And 
there are no procedures in place.

-- Matt Mahoney, [EMAIL PROTECTED]





RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
Before you can start searching for consciousness, you need to describe 
precisely what you are looking for.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

 From: Ed Porter [EMAIL PROTECTED]
 Subject: RE: FW: [agi] A paper that actually does solve the problem of 
 consciousness--correction
 To: agi@v2.listbox.com
 Date: Monday, November 17, 2008, 5:15 PM
 Matt, 
 
 With regard to your first point I largely agree with you. 
 I would, however,
 qualify it with the fact that many of us find it hard not
 to sympathize with
 people or animals, such as a dog, under certain
 circumstances when we
 directly sense outward manifestations that they are
 experiencing terrible
 pain, unless we have a sufficient hatred toward them to
 compensate for our
 natural tendency to feel sympathy for them.  Some people
 attribute this to
 mirror neurons, and the fact that we evolved to be tribal
 social animals.
 
 With regard to the second point, your statement does not
 refute my point,
 although my point is admittedly based on belief that is far
 from certain.
 Our understanding of the physical (such as neural)
 correlates of consciousness
 is currently sufficiently limited that it does not yet let
 us say much about
 the consciousness or lack thereof of the systems you
 describe, even if one
 assumes they are totally understood in terms of things
 other than the
 knowledge of the physical correlates of consciousness that
 we currently
 don't have, but will have within fifty years.
 
 But from what little we do understand about the neural
 correlates of
 consciousness, it does not seem that either system you
 describe would have
 anything approaching a human consciousness, and thus a
 human experience of
 pain, since they lack the type of computation normally
 associated with
 reports by humans of conscious experience.
 
 Ed Porter
 
 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
 Sent: Monday, November 17, 2008 4:45 PM
 To: agi@v2.listbox.com
 Subject: RE: FW: [agi] A paper that actually does solve the
 problem of
 consciousness--correction
 
 --- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED]
 wrote:
 First, it is not clear people
 are free to decide what makes pain
 real, at least
 subjectively real.
 
 I mean that people are free to decide if others feel pain.
 For example, a
 scientist may decide that a mouse does not feel pain when
 it is stuck in the
 eye with a needle (the standard way to draw blood) even
 though it squirms
 just like a human would. It is surprisingly easy to modify
 one's ethics to
 feel this way, as proven by the Milgram experiments and
 Nazi war crime
 trials.
 
 If we have anything close to the advances in brain
 scanning and brain
 science
 that Kurzweil predicts 1, we should come to understand
 the correlates of
 consciousness quite well
 
 No. I used examples like autobliss (
 http://www.mattmahoney.net/autobliss.txt ) and the
 roundworm c. elegans as
 examples of simple systems whose functions are completely
 understood, yet
 the question of whether such systems experience pain
 remains a philosophical
 question that cannot be answered by experiment.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 


RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt,

 

Although different people (or even the same people at different times)
define consciousness differently, there is a considerable degree of overlap.

 

I think a good enough definition to get started with is that which we humans
feel our minds are directly aware of, including awareness of senses,
emotions, perceptions, and thoughts.  (This would include much of what
Richard was discussing in his paper.) Much of scientific discovery searches
for things of which it has only partial descriptions, often ones much less
complete than that which I have just given.

 

But others on this list might have meaningful additions to the definition of
what it is that we should be looking for when we search to understand
consciousness.

 

Ed Porter

 

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 5:39 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

 

Before you can start searching for consciousness, you need to describe
precisely what you are looking for.

 

-- Matt Mahoney, [EMAIL PROTECTED]

 

 

--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

 From: Ed Porter [EMAIL PROTECTED]
 Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction
 To: agi@v2.listbox.com
 Date: Monday, November 17, 2008, 5:15 PM

 Matt, 

 With regard to your first point I largely agree with you.  I would, however,
 qualify it with the fact that many of us find it hard not to sympathize with
 people or animals, such as a dog, under certain circumstances when we
 directly sense outward manifestations that they are experiencing terrible
 pain, unless we have a sufficient hatred toward them to compensate for our
 natural tendency to feel sympathy for them.  Some people attribute this to
 mirror neurons, and the fact that we evolved to be tribal social animals.

 With regard to the second point, your statement does not refute my point,
 although my point is admittedly based on belief that is far from certain.
 Our understanding of the physical (such as neural) correlates of
 consciousness is currently sufficiently limited that it does not yet let us
 say much about the consciousness or lack thereof of the systems you
 describe, even if one assumes they are totally understood in terms of things
 other than the knowledge of the physical correlates of consciousness that
 we currently don't have, but will have within fifty years.

 But from what little we do understand about the neural correlates of
 consciousness, it does not seem that either system you describe would have
 anything approaching a human consciousness, and thus a human experience of
 pain, since they lack the type of computation normally associated with
 reports by humans of conscious experience.

 Ed Porter

 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
 Sent: Monday, November 17, 2008 4:45 PM
 To: agi@v2.listbox.com
 Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

 --- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
 First, it is not clear people
 are free to decide what makes pain real, at least
 subjectively real.

 I mean that people are free to decide if others feel pain.  For example, a
 scientist may decide that a mouse does not feel pain when it is stuck in the
 eye with a needle (the standard way to draw blood) even though it squirms
 just like a human would.  It is surprisingly easy to modify one's ethics to
 feel this way, as proven by the Milgram experiments and Nazi war crime
 trials.

 If we have anything close to the advances in brain scanning and brain
 science that Kurzweil predicts 1, we should come to understand the
 correlates of consciousness quite well

 No. I used examples like autobliss (
 http://www.mattmahoney.net/autobliss.txt ) and the roundworm c. elegans as
 examples of simple systems whose functions are completely understood, yet
 the question of whether such systems experience pain remains a philosophical
 question that cannot be answered by experiment.

 -- Matt Mahoney, [EMAIL PROTECTED]



Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter [EMAIL PROTECTED] wrote:
 I think a good enough definition to get started with is that which we humans
 feel our minds are directly aware of, including awareness of senses,
 emotions, perceptions, and thoughts.  (This would include much of what
 Richard was discussing in his paper.) Much of scientific discovery searches
 for things of which it has only partial descriptions, often ones much less
 complete than that which I have just given.

So basically you're just saying that consciousness is what the
programming language people call reflection.

Sounds pretty easy to implement.
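
For what it's worth, here is the sort of thing the programming language
people mean by reflection (a trivial Python sketch with hypothetical names
--- whether this has anything to do with consciousness is exactly what's in
dispute):

class Agent:
    def __init__(self):
        self.mood = "curious"

    def report(self):
        # Reflection: the program inspects its own state and structure.
        state = vars(self)
        abilities = [m for m in dir(self)
                     if callable(getattr(self, m)) and not m.startswith("_")]
        return f"I have state {state} and abilities {abilities}"

print(Agent().report())
# I have state {'mood': 'curious'} and abilities ['report']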

Trent




Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mike Tintner

[so who's near Berkeley to report back?]:

UC Berkeley Cognitive Science Students Association presents:


Pain and the Brain



Wednesday, November 19th
 5101 Tolman Hall
6 pm - 8 pm



UCSF neuroscientist Dr. Howard Fields and Berkeley philosopher John Searle 
represent some of the most knowledgeable minds in their respective
fields, and they will be answering questions about the relationship between 
pain, pleasure, addiction, and consciousness from their intellectual 
perspectives.


This pairing is sure to make for an extremely intriguing forum, so please 
come out and attend if you are interested!


This event is free, and light refreshments will be served afterward.
All are welcome!


for more information, contact:  [EMAIL PROTECTED]






RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Trent,

 

No, it is not easy to implement. 

 

I am talking about the type of awareness that we humans have when we say we
are conscious of something.  Some of the studies we have on the neural
correlates of consciousness indicate humans only report being consciously
aware of things that receive considerable coordinated attention from the
brain, and, thus, which receive an extremely complex level of computation. 

 

And this coordinated complexity is occurring as controlled spreading
activation in a self-organized hierarchical memory of patterns learned from
sensed and felt experience, in such a manner as to provide not only
attention to, but also extensive contextually relevant grounding for, the
concepts involved.  This grounding provides a sense of meaning and depth to
our awareness.

 

A reasonably high level of awareness of a single concept involves the
sending and receiving and potential summing of many billions or trillions of
messages.  At any instant, the short term dynamic state of the brain would
probably require many terabytes to represent in current computer hardware.

 

Creating such a massively parallel, contextually grounded, self-focusing,
dynamic, state remembering, self-aware complex is not a trivial task, and
would not take place in any current software that I know of, to the extent
required for a human level of conscious awareness.

 

I think such a human-level sense of awareness could be created out of
Novamente-like components, if running on a machine with massive memory (say
roughly 100 TBytes), massive ops/sec (say 1,000 Tops/sec), and massive
interconnect (say an effective whole-machine cross-sectional bandwidth of 1T
64-byte-payload msgs/sec, a total cross-sectional bandwidth across regions
1/1000 the size of the system of 30T msgs/sec, and the ability to access
cache lines within a distance of 1/100,000th of the machine about 300T times
a second).  Such a machine could probably be profitably built and sold for
under $3M in 10 years (and perhaps much less than that), if they were sold
in a quantity of, say, 1000 machines per year.
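
 

As a rough sanity check on those figures (a back-of-envelope sketch in
Python, using only the numbers stated above --- the bandwidth figure and my
earlier estimate of billions to trillions of messages per awareness event):

msgs_per_sec = 1e12          # whole-machine cross-sectional bandwidth, msgs/sec
msgs_per_awareness_event = {
    "billions": 1e9,         # low end of the estimate above
    "trillions": 1e12,       # high end
}
for label, n in msgs_per_awareness_event.items():
    ms = 1000 * n / msgs_per_sec
    print(f"{label} of messages per awareness event: {ms:.0f} ms")
# billions of messages per awareness event: 1 ms
# trillions of messages per awareness event: 1000 ms

So at the stated bandwidth, a billions-of-messages awareness event completes
in about a millisecond, while a trillions-of-messages event would take about
a second --- roughly bracketing the timescales usually attributed to
conscious perception.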

 

But as I have said, it is conceivable that much more or much less hardware
would be required, or even that a different type of computing would be
required, such as some type of quantum computing, in order to produce
human-like consciousness.  I doubt quantum computing will be required, but it
is certainly possible.  

 

In fifty years, humankind will probably know for sure.

 

Ed Porter

 

-Original Message-
From: Trent Waddington [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

 

On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter [EMAIL PROTECTED] wrote:
 I think a good enough definition to get started with is that which we humans
 feel our minds are directly aware of, including awareness of senses,
 emotions, perceptions, and thoughts.  (This would include much of what
 Richard was discussing in his paper.) Much of scientific discovery searches
 for things of which it has only partial descriptions, often ones much less
 complete than that which I have just given.

So basically you're just saying that consciousness is what the
programming language people call reflection.

Sounds pretty easy to implement.

Trent

 

 



Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter [EMAIL PROTECTED] wrote:
 I am talking about the type of awareness that we humans have when we say we
 are conscious of something.

You must talk to different humans than I do.  I've not had anyone use the
word "conscious" around me in decades... and usually they're either
high or talking about AI (or both).

Can you give some examples of their usage?  'Cause if you're going to
talk about consciousness in terms of "you know, that thing" then I'd
really like to be sure that we're both talking about the same thing.

Trent




RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
I think a good enough definition
to get started with is that which we humans feel our minds are directly aware
of, including awareness of senses, emotions, perceptions, and thoughts.

You are describing episodic memory, the ability to recall a sequence of events. 
These events include recalling other events; we are aware of our own thoughts. 
Reading from the higher levels of the brain also writes into it.

That's easy enough to implement, for example, a database that logs transactions.
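
A minimal sketch of that idea in Python (hypothetical, not a serious
proposal): an append-only log where recalling an event is itself an event
that gets written back, mirroring "reading from the higher levels of the
brain also writes into it."

from datetime import datetime, timezone

class EpisodicLog:
    def __init__(self):
        self.events = []

    def record(self, description):
        # Every event is appended in order, like a transaction log.
        self.events.append((datetime.now(timezone.utc), description))

    def recall(self, index):
        event = self.events[index]
        # Reading also writes: we remember that we remembered.
        self.record(f"recalled event {index}: {event[1]}")
        return event

log = EpisodicLog()
log.record("stuck with a pin")
log.record("said ouch")
log.recall(0)   # the log now also contains a memory of the recall itself
for when, what in log.events:
    print(when, what)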

-- Matt Mahoney, [EMAIL PROTECTED]





RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
This is a subject on which I have done a lot of talking to myself, since, as
Richard's paper implies, our own subjective experiences are among the most
real things to us.  And we have the most direct access to our own
consciousness, and its sense of richness, simultaneity, and meaning.  I am
also aware that much of what we feel we are aware of is an illusion, such as
the example of the man in the gorilla suit walking unobserved in plain view
through a scene in which you are asked to count how many times a team passes
a basketball back and forth, as mentioned recently under this thread by Mark
Waser.

But if you read papers about the neural correlates of consciousness, you
will find that some of them are based on reports from human subjects about
whether or not they were aware of something, such as images, sounds, or an
answer to a question.

-Original Message-
From: Trent Waddington [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 7:36 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter [EMAIL PROTECTED] wrote:
 I am talking about the type of awareness that we humans have when we say
we
 are conscious of something.

You must talk to different humans than I do.  I've not had anyone use the
word "conscious" around me in decades... and usually they're either
high or talking about AI (or both).

Can you give some examples of their usage?  'Cause if you're going to
talk about consciousness in terms of "you know, that thing" then I'd
really like to be sure that we're both talking about the same thing.

Trent




RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
See the post I just sent to Matt Mahoney.  You have much greater access to
your own memory than just high-level episodic memory.  Although your
memories of such experiences are more limited than the experiences
themselves, you can remember qualities about them, including their sense of
richness, simultaneity, and meaning. 

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 17, 2008 8:46 PM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] A paper that actually does solve the problem of
consciousness--correction

--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:
I think a good enough definition
to get started with is that which we humans feel our minds are directly
aware
of, including awareness of senses, emotions, perceptions, and thoughts.

You are describing episodic memory, the ability to recall a sequence of
events. These events include recalling other events; we are aware of our own
thoughts. Reading from the higher levels of the brain also writes into it.

That's easy enough to implement, for example, a database that logs
transactions.

-- Matt Mahoney, [EMAIL PROTECTED]


